The Pennsylvania State University
The Graduate School
EFFECTS OF AN EXPLICIT INSTRUCTION AND VIDEO MODELING
INTERVENTION WITH AUGMENTED REALITY ON THE RATIONAL
NUMBER MATHEMATICS OUTCOMES OF STUDENTS WITH DISABILITIES
A Dissertation in
Special Education
by
Jared R. Morris
© 2019 Jared R. Morris
Submitted in Partial Fulfillment
of the Requirements
for the Degree of
Doctor of Philosophy
August 2019
The dissertation of Jared R. Morris was reviewed and approved* by the following:
Elizabeth M. Hughes
Assistant Professor, Special Education
Dissertation Co-Advisor
Co-Chair of Committee
David L. Lee
Department Head of Educational Psychology, Counseling, and Special Education
Professor, Special Education
Dissertation Co-Advisor
Co-Chair of Committee
Amy C. Crosson
Assistant Professor, Curriculum and Instruction
Jenifer L. Frank
Assistant Professor, Special Education
Bonnie J. Meyer
Professor, Educational Psychology
Mary Catherine Scheeler
Associate Professor, Special Education
*Signatures are on file in the Graduate School
ABSTRACT
The purpose of this study was to examine the effects of utilizing explicit instruction, point-of-
view video modeling, and augmented reality technology to teach mathematics to students with
disabilities. A multiple probe single-case research design was used. Three students with learning
disabilities who were receiving special education services in mathematics participated in the
study. The results were analyzed using visual analysis of trend, level, and variability. Tau-U,
calculated on the participants’ overall results from baseline to intervention, was 0.98. The results
demonstrated a functional relation between the intervention and the students’ performance on
three rational number mathematics skills. Participants’ maintenance and generalization of the
rational number skills were also assessed, with variable findings. The intervention was determined to
be socially valid by the participants and teachers.
Keywords: explicit instruction, video modeling, augmented reality, rational numbers,
computer assisted instruction, educational technology, effective instruction, mathematics,
mathematics instruction, mathematics disability, dyscalculia, specific learning disability,
fractions
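For readers unfamiliar with the statistic, the Tau-U value reported in the abstract is a nonoverlap effect size for single-case designs. A minimal sketch of the simple pairwise form (without the baseline-trend correction that full Tau-U applies, and with hypothetical data, not the participants’ actual scores) might look like:

```python
# Illustrative sketch only: the simple pairwise-nonoverlap form of Tau for a
# single-case A-B comparison. Published Tau-U additionally corrects for
# baseline trend; the 0.98 reported in this study was not produced by this script.

def tau(baseline, intervention):
    """Compare every baseline point with every intervention point:
    (#improving pairs - #deteriorating pairs) / (total pairs)."""
    pos = sum(1 for a in baseline for b in intervention if b > a)
    neg = sum(1 for a in baseline for b in intervention if b < a)
    return (pos - neg) / (len(baseline) * len(intervention))

baseline = [10, 0, 20, 10, 0]          # hypothetical percent-correct probes
intervention = [60, 80, 90, 100, 100]  # hypothetical percent-correct probes
print(tau(baseline, intervention))     # every pair improves -> 1.0
```

Values near 1.0 indicate that nearly all intervention probes exceed all baseline probes.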
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
ACKNOWLEDGEMENTS
FUNDING
INTRODUCTION
    Instructional Methods for Teaching Students with Disabilities
        Explicit Instruction
        Video Modeling
        Augmented Reality
    Contribution to the Literature
METHOD
    Participants
        Musette
        Jaren
        Alaric
    Setting
    Independent Variable
        Explicit Instruction and Video Modeling
        Augmented Reality
        Intervention Packet
    Dependent Variable
        Worksheets
        Skill 1
        Skill 2
        Skill 3
    Experimental Design
    Procedures
        Baseline
        Intervention
        Maintenance
        Generalization
    Interrater Reliability and Treatment Integrity
    Social Validity
        Student Social Validity
        Teacher Social Validity
    Data Analysis
RESULTS
    Musette
    Jaren
    Alaric
    Social Validity
        Student
        Teacher
DISCUSSION
    Overlapping Data
    Intervention Trend
    Maintenance
    Generalization
    Social Validity
    Implications for Practice
    Implications for Research
    Limitations
    Conclusion
REFERENCES
APPENDIX A: Literature Review and Supporting Information
    Explicit Instruction
        History of Explicit Instruction
            Direct Instruction
            Process-Product Research
        Literature Reviews and Meta-Analyses
        IES Practice Guides
        Explicit Instruction Today
            Opening
            Body
            Close
        Explicit Instruction and Mathematics
    Video Modeling
        Types of Video Modeling
            Video Modeling
            Video Prompting
            Video Self-Modeling
            Point-of-View Video Modeling
        Video Modeling and Explicit Instruction
        Research on Video Modeling and Point-of-View Modeling
        Literature Reviews and Meta-Analysis for Teaching Fractions to Students with or At-Risk for Disabilities
        Video Modeling, Mathematics, and Learning Disability
    Augmented Reality
    Review of Instructional Procedures for Teaching Rational Numbers
    Summary of Reports and Publications from National Institutions
    Internal Review Board
    Skill Identification and Details
    Rational Numbers
        Reasons Underlying Students’ Difficulty with Rational Numbers
    Experimental Design
    Treatment Integrity
    Guided Access Settings
    Instructional Strategies for Teaching Students with Learning Disabilities
    References
APPENDIX B: Results from Participants’ aimswebPlus Benchmark Assessments
APPENDIX C: Social Validity Questionnaires
    Student Social Validity Questionnaire
    Teacher Social Validity Questionnaire
APPENDIX D: Timeline and Schedules for the Intervention
    Study Timeline
    Outline of Intervention Schedule
    Anticipated Schedule of Intervention Implementation
APPENDIX E: Procedural Fidelity Checklists
APPENDIX F: Example Explicit Instruction Lesson Template
APPENDIX G: Example Intervention Packet for Skill #1: Adding and Subtracting Fractions with Common Denominators
APPENDIX H: Example Materials for Skill #1: Adding and Subtracting Fractions with Common Denominators
APPENDIX I: Example Materials for Skill #2: Completing Equivalent Fractions
APPENDIX J: Example Materials for Skill #3: Converting Fractions to Decimal Notation and Converting Decimal Notation to Fractions
APPENDIX K: Signed Problem Difficulty Checklist
LIST OF FIGURES
FIGURE 1: Percent of response accuracy for Musette across three fraction-related skills: addition and subtraction of fractions with like denominators, completing equivalent fractions, and converting fractions to decimal notation and from decimal notation to fractions
FIGURE 2: Percent of response accuracy for Jaren across three fraction-related skills: addition and subtraction of fractions with like denominators, completing equivalent fractions, and converting fractions to decimal notation and from decimal notation to fractions
FIGURE 3: Percent of response accuracy for Alaric across three fraction-related skills: addition and subtraction of fractions with like denominators, completing equivalent fractions, and converting fractions to decimal notation and from decimal notation to fractions
FIGURE 4: The Prototypical Structure of an Explicit Lesson (Note: Adapted from Archer & Hughes, 2011)
FIGURE 5: Number Systems. Real numbers (R) include the rational numbers (Q), which include the integers (Z), which include the natural numbers (N)
LIST OF TABLES
TABLE 1: Student Social Validity Questionnaire Responses
TABLE 2: Teacher Social Validity Questionnaire Responses
TABLE 3: Teacher Social Validity Open-ended Question Responses
TABLE 4: Summary of Participant Characteristics and Settings for Point-of-View Video Modeling
TABLE 5: Summary of Research on Point-of-View Video Modeling
ACKNOWLEDGEMENTS
Completing this dissertation and doctoral program would not have been possible without the help
and support of many individuals. I want to express my gratitude to all who have helped me along
the way.
I first want to express my gratitude to my Father in Heaven, for preparing the way, supporting
me, and guiding me and my family to The Pennsylvania State University to study, learn, meet
many brilliant people, have great experiences, and see many wonderful places. I am grateful for
Thy love, help, and strengthening.
I want to thank my wife Ché, for your love, support, and encouragement while I have pursued
my education. Thank you for believing in me. I could not have accomplished this without you. I
love you forever.
I would like to thank my co-chairs Dr. David Lee and Dr. Elizabeth Hughes. You were both
instrumental to my success and I will be forever grateful for your support. Dr. Lee, thank you for
advising me throughout my program and for helping me through each stage of my doctoral
program. Thank you for your guidance, direction, and for helping me successfully navigate my
doctoral studies. Thank you for calling me “Future Dr. Morris.” It was a small gesture that truly
helped me to believe more in myself. Dr. Hughes, thank you for working with me over the last
few years on several research and writing projects. Especially, thank you for your
encouragement and support with the planning, researching, and writing of this dissertation and
for helping me to finish in a timely manner.
Drs. Crosson, Frank, Meyer, and Scheeler, thank you very much for serving on my dissertation
committee. Thank you for your advice, feedback, support, and encouragement! I have enjoyed
working with each of you in various capacities.
Dr. Charles Hughes, thank you for your mentorship in writing, publishing, presenting and
teaching.
I want to thank my family, first my children, Finley, Liam, Hugh, Pearl and Gilbert. You went
through a lot with me not being home as much as I would like to have been. I love each of you
and am so happy for the wonderful memories we made in Pennsylvania and on the East Coast. I
look forward to many more positive memories. Thank you to my parents, Gary and Karen Morris
for your unconditional love. Thank you for the many sacrifices you have made on my behalf. I
am very blessed and grateful to have such wonderful parents. Thank you for loving me and
supporting me in so many ways. Thank you to my in-laws, Gary and Jerri Sume for supporting
me and my family in my educational journey.
Thank you to Lauren Cozad; your support and friendship to both Ché and me have been a great
blessing for us. Thank you to Ben Riden for your support. Thank you to the students, math
specialist, teachers, and PhD candidates who made this dissertation and the pilot study possible.
It was great working with you and learning from you.
FUNDING
The contents of this report were developed under a grant from the US Department of Education,
#H325D130021. However, those contents do not necessarily represent the policy of the US
Department of Education, and you should not assume endorsement by the Federal Government.
Project Officer, Patricia Gonzalez.
INTRODUCTION
Disabilities in mathematics are widespread in both primary and secondary settings
(Fuchs, Fuchs, & Hollenbeck, 2007; Rivera, 1997). While the severity of these difficulties varies,
they often become more pronounced with time (Nelson & Powell, 2018; Wei, Lenz, &
Blackorby, 2013). National assessment scores in the United States (US) indicate a trend of
consistently low mathematics performance by students with disabilities (NAEP, 2015, 2017).
The National Assessment of Educational Progress (NAEP), referred to as the Nation’s
Report Card, is conducted every two years for students in Grades 4 and 8, and every four years
for students in Grade 12. It provides macro-level academic performance measures for students
across the US (NAEP, 2017). The areas assessed across all three of these grades are civics,
geography, mathematics, reading, science, US history, and writing. The results from the
assessments are reported as scaled scores (1–500 for Grades 4 and 8; 1–300 for Grade 12), which
are then translated into one of four categorical groupings: Below Basic, Basic,
Proficient, and Advanced. The average mathematics NAEP performance of students in the US
identified as having disabilities has either decreased or remained level since 2011. On average,
students with disabilities in Grade 4 perform at the basic level (NAEP, 2017), while students
with disabilities in Grade 8 and Grade 12 perform below basic in mathematics (NAEP, 2015,
2017).
Because the NAEP is conducted only every four years for students in Grade 12, the most
recent data available for comparison are from 2015. Although the scale differs for Grade 12 results, the
data from that assessment indicate that students with disabilities trended even further
below the basic level than they had in Grade 8 (NAEP, 2015, 2017). Similarly, the average
percentage of students with disabilities meeting the proficiency standard decreases at each
successive grade level that the assessment is given (e.g., Grade 4 = 14%, see NAEP, 2017; Grade
8 = 9%, see NAEP, 2017; Grade 12 = 5%, see NAEP, 2015).
National institutions have been working to address the nation’s deficiency in mathematics
performance. Many of these institutions have prepared substantial publications synthesizing the
findings from published research and experts in the field (e.g., NCTM, 2000, 2006;
NGA/CCSSO, 2010; NMAP, 2008; NRC, 2001; see Appendix A). Each publication has taken a
different approach to presenting its findings, recommending practices, or proposing changes;
however, they all have had the overarching goal of improving the mathematical outcomes for
students in the US. While these reports, resources, and standards have impacted the direction of
mathematics education, there is a continued need to address the pronounced and widening
achievement gap in mathematics performance for students with disabilities.
A specific area of mathematics where students have difficulty is rational numbers (i.e.,
fractions, decimals, and percentages; Ni, 2001). Rational numbers have been identified by
researchers as being vital for future success in mathematics (Hansen, Jordan, & Rodrigues, 2017)
and have also been identified as a “major challenge” for students in pre-K to grade 8 (NRC,
2001, p. x) and a “pervasive” and “major obstacle” in students’ mathematics progression
(NMAP, 2008, p. xix). Two areas are particularly difficult for students: first, understanding the
fundamental principles behind rational numbers, and second, performing mathematical
operations with rational numbers (Ni, 2001; Vamvakoussi, 2015).
Multiple theories have been posited as reasons students have difficulty with rational
numbers. One reason suggested for students’ difficulty with rational numbers is called natural or
whole number bias (Ni & Zhou, 2005). The whole number bias has been defined as individuals
generalizing whole number properties to rational number tasks, whether appropriate or not (Ni &
Zhou, 2005). A second theory proposed is the framework theory of conceptual change
(Vamvakoussi, Vosniadou, & Dooren, 2013; Vosniadou & Skopeliti, 2014). This theory
examines students’ difficulty with rational numbers through a cognitive development lens. The
framework theory, similar to whole number bias, suggests that as students develop, a discrepancy
materializes between rational number concepts and the principles that govern reasoning with
natural numbers (Vosniadou, 2014). Another proposed theory is called the integrated theory of
numerical development. This theory proposes that development of numerical understanding
includes learning about the various characteristics that unite and differentiate all types of real
numbers (Siegler, Thompson, & Schneider, 2011). Combined, these theories describe some of
the underlying difficulties students have with rational numbers.
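As a concrete illustration of the whole number bias described above (a sketch of the arithmetic only, not part of the study), Python’s standard fractions module shows how whole-number reasoning about denominators fails for fraction magnitudes:

```python
from fractions import Fraction

# A student exhibiting whole number bias may reason "8 > 4, so 1/8 > 1/4",
# generalizing a whole-number property to a task where it does not hold.
print(8 > 4)                             # True for the whole numbers...
print(Fraction(1, 8) > Fraction(1, 4))   # ...but False for the fractions
print(Fraction(1, 8) < Fraction(1, 4))   # True: a larger denominator means smaller parts
```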
Students with specific learning disabilities (SLD) in mathematics (e.g., dyscalculia) have
particular difficulty with rational numbers, in comparison to their peers, and exhibit an “extra
delay in their rational number understanding” (Van Hoof, Verschaffel, Ghesquière, & Van
Dooren, 2017). Van Hoof and associates (2017) found that students with dyscalculia were more
affected by natural number bias than the control groups were, which added difficulty to their
understanding of rational numbers.
Instructional Methods for Teaching Students with Disabilities
Researchers have recommended that intensive academic interventions coupled with
effective mathematics instruction are necessary to improve the trend of student performance
in mathematics (Fuchs, Fuchs, & Malone, 2017; Nelson & Powell, 2018; Powell & Fuchs, 2015).
Hwang, Riccomini, Hwang, and Morano (2019) suggest that there is a need for developing
“interventions with specifically designed instruction to better address the conceptual and
procedural knowledge of fractions” (p. 58). Two practices that have been found to be effective in
teaching mathematics to students with disabilities are explicit instruction and video modeling
(Gersten et al., 2009, Hughes & Yakubova, 2019).
Explicit instruction. Explicit instruction is recognized as a systematic, direct,
engaging, and success-oriented way to design lessons and deliver content (e.g., Archer
& Hughes, 2011; Goeke, 2009; Hall & Vue, 2004; Hollingsworth & Ybarra, 2009). Explicit
instruction is defined as:
“… a group of research-supported instructional behaviors used to design and deliver
instruction that provides needed supports for successful learning through clarity of
language and purpose, and reduction of cognitive load. It promotes active student
engagement by requiring frequent and varied responses followed by appropriate
affirmative and corrective feedback and assists long-term retention through use of
purposeful practice strategies.” (Hughes, Morris, Therrien, & Benson, 2017, p. 4)
Explicit instruction is an assemblage of instructional design procedures and delivery methods
and often follows a three-tiered structure: the model, prompt, and check (Archer & Hughes,
2011). Explicit instruction helps teachers maximize instructional time, an area which has been
identified as being critical for helping students struggling in mathematics (Jitendra et al., 2018).
Explicit instruction has a considerable amount of empirical evidence supporting its use
both as an instructional design and as a collection of delivery techniques (e.g., Archer & Hughes,
2011; Brophy & Good, 1986; Hughes et al., 2017; Hughes, Riccomini, & Morris, 2019;
Rosenshine & Stevens, 1986). Explicit instruction has been identified as a high-leverage practice
for teaching students with disabilities (McLeskey et al., 2017) and as an essential instructional
component of effective mathematics instruction (e.g., Gersten et al., 2009; Kroesbergen & Van
Luit, 2003; NMAP, 2008), specifically for teaching fractions (Misquitta, 2011).
Video modeling. Video modeling involves presenting a model in video and audio format
to an individual, and the participant is then provided the opportunity to imitate the skill that was
presented (Hughes & Yakubova, 2016). This video modeling process has many benefits
including allowing for multiple stimulus and response opportunities, and standardization of
presentation (Morgan & Salzberg, 1992).
Video modeling is a behavioral technique wherein the video acts as a stimulus for
learning new behaviors (McCoy & Hermansen, 2007); for example, the behavior of the model in
the video acts as a discriminative stimulus for the observer to imitate (Nikopoulos & Keenan,
2007). It is often utilized in concert with prompting, forward and backward chaining, utilization
of reinforcement, and other components of applied behavior analysis (Nikopoulos, Canavan, &
Nikopoulou-Smyrni, 2009; Nikopoulos & Keenan, 2004).
Video modeling can be conducted in various ways, including video self-modeling
(Hughes & Yakubova, 2016; Prater, Carter, Hitchcock, & Dowrick, 2012), point-of-view
modeling (POVM), video modeling, and video prompting (Hughes & Yakubova, 2016; Kellems
& Edwards, 2015). When speaking about these interventions and methods in general, the term
video-based interventions is often used. Video modeling is an intervention with empirical
support, behavioral underpinnings, and a strong theoretical framework (Corbett & Abdullah,
2005; Hughes & Yakubova, 2019).
Video modeling is well established through research as a robust intervention with positive
effects across disabilities for teaching academic, functional, social, and life skills and behaviors (e.g.,
Aldi et al., 2016; Bellini & Akullian, 2007; Burton, Anderson, Prater, & Dyches, 2013; Cihak &
Bowlin, 2009; Hitchcock, Dowrick, & Prater, 2003; Saunders, Spooner, & Ley Davis, 2018;
Yakubova, Hughes, & Hornberger, 2015; Yakubova, Hughes, & Shinaberry, 2016). Video
modeling has been found to be particularly effective for students with autism spectrum disorders
(ASD; Bellini & Akullian, 2007; Hughes & Yakubova, 2019), other developmental disorders
(DD; Banda, Dogoe, & Matuszny, 2011) and learning disabilities (Cihak & Bowlin, 2009;
Hughes, 2019; Satsangi, Hammer, & Hogan, 2019). Video-based interventions meet the criteria
to be considered an evidence-based practice for teaching mathematics to students with ASD
(Hughes & Yakubova, 2019). Additional evidence supports the effectiveness of video modeling
for skill acquisition, maintenance, and generalization (Dowrick, 1999; Mason et al., 2013) and its
efficacy for teaching mathematics to students with disabilities (Kellems, et al., 2016),
particularly fractions (Yakubova, Hughes, & Hornberger, 2015).
Augmented reality. Augmented reality combines real-life and virtual information, using
pictures, videos, and/or audio to enhance information from the environment, and it can be utilized in
different ways. For example, a computer or mobile device can recognize a shape, object, or image and
convert it into other content (e.g., pictures, information, videos; Cakir & Korkmaz, 2018). It
can also be used as a platform to assist with delivering instruction for interventions because of its
ability to be customized for video content delivery and instruction (Bacca, Baldiris, Fabregat,
Graf, & Kinshuk, 2014). Additionally, the augmented reality platform provides flexibility for the
participants to progress through an intervention and instruction at an individualized pace.
Research on the use of augmented reality technology in academic settings is new but
growing (e.g., Akçayır & Akçayır, 2017; Bacca et al., 2014; Garzón, Pavón, & Baldiris, 2019).
Augmented reality has been used to assist with mathematics instruction (Bacca, et al., 2014;
Cihak et al., 2016).
Contribution to the Literature
Combining explicit instruction and video modeling into a targeted intensive intervention
may have potential for helping students with disabilities increase proficiency in mathematics.
Additionally, utilizing augmented reality as a platform for delivering explicit instruction video
models in a self-directed manner could assist in automating portions of the instruction. Kiru,
Doabler, Sorrells, and Cooc (2018) pointed to a need for further research evaluating
technology-mediated mathematics interventions that incorporate more key components of
explicit instruction (e.g., overt demonstrations, guided and independent practice, and specific
academic feedback). The components utilized in this study will address multiple strands of
proficiency identified by the NRC (2001) by increasing students’ understanding of concepts and
strengthening their procedural fluency, and also it is hoped to improve the participants’
productive disposition (e.g., their view of the usefulness of mathematics and their self-efficacy in
doing mathematics). Additionally, if found effective, this intervention has potential to assist
special education teachers in meeting the diverse needs of students with disabilities.
Considering the evidence base for explicit instruction and video modeling to teach
mathematics, this study will add to the research on digital applications of explicit instruction for
mathematics instruction using video modeling and augmented reality technology to teach
rational number skills to students with specific learning disabilities (SLD; Ennis & Losinski,
2019; Hughes, in press; Kiru et al., 2018). The research questions to be addressed include:
• What are the effects of an intervention featuring explicit instruction, point-of-view
video modeling, and augmented reality on the rational number problem-solving
performance of students with disabilities?
• What are the effects of the intervention on participants’ skill maintenance?
(Maintenance is defined as a data probe 7 to 14 calendar days after the intervention
has concluded.)
• What are the effects of the intervention on participants’ ability to generalize their
performance to applied word problems?
• To what degree is the intervention socially valid?
METHOD
Participants
Prior to starting the research project, the Institutional Review Board (IRB) and the school’s
administration approved the study. Teachers were asked to identify students who were
experiencing difficulties in mathematics for possible inclusion in the study. Two assessments
were administered by school personnel to verify the need for intervention and identify target
skills. The first was the i-Ready diagnostic assessment. This assessment provided an overview of
the students’ academic needs in mathematics. The second assessment was the aimswebPlus
benchmark assessment. This assessment evaluated the students in five areas of mathematics: (a)
geometry, (b) measurement and data, (c) base 10 number and operations, (d) number and
operations related to fractions, and (e) operations and algebraic thinking. The fractions section of
this assessment included subtraction word problems involving fractions with common
denominators, identification of equivalent fractions, identifying fractions on a number line, and
comparing the magnitude of fractions by writing fractions with common denominators.
Three 4th-grade students were identified for inclusion in the study. Parental or guardian
consent and student assent were obtained. Each participant was classified as having SLD and
was receiving special education services in mathematics. The students identified for inclusion in
the study were determined to be in need of intensive mathematics instruction in rational numbers
based on their performance on the progress monitoring benchmark assessment and diagnostic
assessment. Rational number skills were also identified as high priorities by the participants’
teachers and the mathematics specialist.
Musette. Musette, a Caucasian female student receiving special education services in
mathematics, was classified as having SLD. Her performance on the aimswebPlus benchmark
placed her in the third percentile in relation to the national norm-referenced sample. Results from
the i-Ready assessment suggested that Musette’s mathematics performance was in the second
percentile. These test results indicated that Musette needed intensive intervention in
mathematics.
Jaren. Jaren, a Caucasian male student receiving special education services in
mathematics, also had an SLD classification. His performance on the aimswebPlus benchmark
placed him in the tenth percentile in relation to the national norm-referenced sample. Moreover,
Jaren’s scores on the i-Ready diagnostic assessment indicated that, compared to a nationally
normed sample, he was in the seventeenth percentile. These test results indicated that Jaren
needed intensive intervention in mathematics.
Alaric. Alaric, a Caucasian male student with an SLD classification, was also receiving
special education services in mathematics. His performance on the aimswebPlus benchmark
placed him in the tenth percentile in relation to the national norm-referenced sample. Alaric’s
scores on the i-Ready diagnostic assessment indicated that compared to a nationally normed
sample he was in the twenty-first percentile. His performance on these assessments indicated that
Alaric needed intensive intervention in mathematics.
Setting
The setting of the project was a public charter school in the northeastern US that enrolled
students attending kindergarten through grade 8. The school had approximately 420 students.
Approximately 52% of the students were female and 48% were male. Thirty percent of the
students enrolled at the school reported an ethnicity other than white. Student enrollment data
indicated that 32% of the students came from low-income families and approximately 5% of the
students were considered immigrant children or youth. The school did not meet the requirements
for the Schoolwide Title I program but was eligible for the Title I Targeted Assistance School
Program. The school employed 34 full-time teachers, 67% of whom had received a bachelor’s
degree and 32% of whom had at least a master’s degree.
The intervention took place during a time in the early afternoon when the students were
pulled out for intensive mathematics instruction. The intervention occurred in one of two
locations: a conference room (approximately 7 m × 4.5 m) with a 3.5 m × 1.25 m conference
table and ten office chairs around it, or a small instructional room (approximately 3.25 m × 4 m)
with a 2 m × 1.25 m table and six non-rolling chairs around it. The small instructional
room also had a teacher’s desk, a rolling chair, and a few bookshelves with miscellaneous items
including math manipulatives and curriculum books.
Independent Variable
The independent variable included explicit mathematics instruction delivered using point-
of-view video modeling to teach rational number skills. Three rational number skills were
identified for the intervention: (a) adding and subtracting fractions with common denominators,
(b) completing equivalent fractions, and (c) converting fractions to decimal notation and
converting decimal notation to fractions.
Explicit instruction and video modeling. The videos for each skill incorporated the
structure of a prototypical explicit instruction lesson and various other explicit instruction
delivery techniques (Archer & Hughes, 2011; Gersten et al., 2009; see Appendix A). Two videos
were recorded for each skill. The first video included the lesson opening and teacher model of
the skill. The lesson opening included: (a) an overview of the skill, (b) a description of the skill’s
relevance, (c) a conceptual explanation, and (d) a review of applicable prerequisite skills. The
model demonstrated the skill and provided multiple examples including a “think aloud.” The
second video included a guided practice portion of the instruction. The guided practice portion of
the instruction included examples where the students practiced with the instruction and followed
a pattern of systematic fading of prompts, “tell,” “ask,” and “remind” (Archer & Hughes, 2011).
A check stage came after the guided practice segment that evaluated the students’ ability to
perform the skill and practice independently without prompts.
The video lessons were recorded from the point-of-view perspective (i.e., the camera was
pointing at the instructor’s hands, similar to how the participant would see their own hands
performing the skill; Hughes & Yakubova, 2016). The videos were recorded using the camera on
an iPad.
Augmented reality. The intervention was administered using marker-based augmented
reality technology, HP Reveal, for delivering the instructional videos, and was adapted from the
Augmented Reality Implementation Checklist published by Kellems and colleagues (2019).
Markers were created for each instructional video and placed into the intervention packet. Each
participant used an Apple iPad to access the videos. The iPad cases held the devices upright in
landscape orientation when opened (i.e., similar to the orientation of a laptop screen). The
participants scanned the markers in the packet using the HP Reveal iOS app on an iPad.
Scanning a marker initiated the instructional video that was preprogrammed to play. Each of
the videos was reviewed by the interventionist to (a) ensure that each of the key explicit
instruction components was present, (b) confirm that the correct video was attached to each
marker image, and (c) verify that the settings worked properly.
Intervention packet. Worksheets and the intervention packets were created for the
intervention using Math Resource Studio 6 by Schoolhouse Technologies for each of the three
skills (see Appendix G). The intervention packets incorporated augmented reality marker
images, which led the participants through the various stages of the explicit instruction sequence
(see Appendix G for an example intervention packet). There were four pages in each intervention
packet: (a) a cover page, (b) a page for the introduction and teacher model, (c) a page for the
guided practice portion of the instruction, and (d) a “check” page. The cover page of the
worksheet packet contained instructions for the intervention. These instructions were read to the
participants at the beginning of each session. The next page in the intervention packet contained
brief instructions and a marker for the introduction and teacher model video. The third page
contained brief instructions, a marker image, and five guided practice problems that the students
completed with the video. The fourth page did not contain a marker image, rather it contained
instructions directing the students to complete the check problems without looking back at
previous pages in the packet. This allowed the participants to demonstrate their ability to
perform the skill without prompting. There were between two and four check problems for each
skill. Students were prompted to raise their hand when they had completed the check problems.
The interventionist then reviewed the check problems for accuracy. If the participants completed
the check problems accurately, the intervention packet was collected, and the participant was
provided with a one-page worksheet with five problems on it.
Dependent Variable
The dependent variable was the permanent-product mathematics measures collected from
each participant. Student responses were scored for overall correctness. An answer was
counted as correct only if the complete answer was provided. Partial credit was not given for
partially correct answers or correct answer sequences. Answer keys were created for grading the
baseline, intervention, and maintenance worksheets. These answer keys were used by both the
interventionist and a second rater to provide agreement data between the two scorers.
Worksheets. Mathematics worksheets were created to evaluate the participants’ ability to
perform rational number skills. Each worksheet had a place for the participant to write their
name and the date. Seven to nine worksheets were created for each phase, baseline, intervention,
and maintenance. The problems were randomized using the “generate sets” function in the
worksheet creation software. The problems were evaluated by both the interventionist and an
outside rater with expertise in mathematics and were determined to have no significant
difference in difficulty across phases.
Skill 1. The first skill included both adding and subtracting fractions with common
denominators. Every worksheet had three addition problems and two subtraction problems. The
instructional prompt stated: “Find the sum or difference.” The criteria for both addends, and the
minuend and subtrahend, were that the denominators were either 2, 3, 4, 5, 6, 7, 8, 10, 12, or 100.
Each individual fraction was aligned vertically, but the relationship between the two fractions
was horizontal (e.g., 3/12 + 7/12; see Appendix H). Students were not asked to simplify the
fractions.
Skill 2. The second skill selected was completing equivalent fractions. Each worksheet
had five problems. The instructional prompt stated: “Complete the equivalent fractions.” The
problems were presented with a fraction in vertical orientation (e.g., 3/4) with an equal sign and
then the given denominator of another fraction but a missing numerator, or vice versa, with the
numerator missing from the first fraction and the second fraction complete (e.g., either
3/4 = _/20 or _/3 = 7/21). Denominators were 3, 4, 5, 6, 7, or 8, or an equivalent multiple (up to
10 times) of one of those numbers (see Appendix I).
Skill 3. The third skill included converting a fraction to decimal notation and converting
decimal notation to a fraction. There were five total problems on each worksheet. The
instructional prompt stated: “Convert fractions to decimal notation and convert decimals to a
fraction.” Denominators for the fractions were either 10 or 100. Examples of the two types of
problems are as follows: 3/10 = _______, or 0.29 = _______ (see Appendix J).
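Purely as an illustration of the problem constraints described for the three skills (the actual worksheets were produced with Math Resource Studio; the function names and structure below are hypothetical, not part of the study’s materials), the generation logic could be sketched in Python as:

```python
import random

def skill1_problem(rng):
    """Skill 1: add or subtract fractions with a common denominator."""
    d = rng.choice([2, 3, 4, 5, 6, 7, 8, 10, 12, 100])
    a, b = rng.randint(1, d - 1), rng.randint(1, d - 1)
    op = rng.choice(["+", "-"])
    if op == "-" and a < b:      # keep differences non-negative
        a, b = b, a
    return f"{a}/{d} {op} {b}/{d}"

def skill2_problem(rng):
    """Skill 2: equivalent fractions with one numerator left blank."""
    d = rng.choice([3, 4, 5, 6, 7, 8])
    n = rng.randint(1, d - 1)
    k = rng.randint(2, 10)       # equivalent multiple, up to 10 times
    if rng.random() < 0.5:
        return f"{n}/{d} = _/{d * k}"    # second numerator missing
    return f"_/{d} = {n * k}/{d * k}"    # first numerator missing

def skill3_problem(rng):
    """Skill 3: convert between a fraction (denominator 10 or 100) and a decimal."""
    d = rng.choice([10, 100])
    n = rng.randint(1, d - 1)
    if rng.random() < 0.5:
        return f"{n}/{d} = ____"         # fraction to decimal notation
    return f"{n / d} = ____"             # decimal notation to fraction

rng = random.Random(42)                  # seeding makes generated sets reproducible
worksheet = [skill1_problem(rng) for _ in range(5)]
print(worksheet)
```

Seeding the generator mirrors the “generate sets” randomization in the worksheet software, in that each worksheet draws a fresh but reproducible set of problems satisfying the same constraints.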
Experimental Design
Because learning an academic skill is often not reversible, time-lagged designs (e.g.,
multiple baseline, multiple probe, and changing criterion designs) are often used instead of
withdrawal designs (e.g., ABAB; Cooper et al., 2007). Time-lagged designs have been found to
be effective at evaluating academic interventions without having to use a reversal design (Cooper
et al., 2007). The multiple probe design incorporates baseline logic (i.e., prediction, replication,
and verification), similar to other single-subject research designs (Cooper et al., 2007; Gast et al.,
2018). One of the signature features of a multiple probe design is the staggered onset of the
intervention (Horner & Odom, 2014). This study utilized the multiple probe across skills single-
subject research design that was replicated across participants (Gast et al., 2018; Horner & Baer,
1978). The skills identified were determined to be functionally independent of each other,
meaning that the acquisition of one skill should not lead to the acquisition of the others (Gast,
Lloyd, & Ledford, 2018). The skills were also determined to be functionally similar, meaning
that each skill individually was likely to be impacted by the independent variable (Gast et al.,
2018).
Procedures
Baseline. For baseline data collection, the interventionist presented the participants with
a worksheet (for each of the three skills), two pencils, and a calculator (TI 30XA student
scientific calculator). Each baseline page had five problems for that given skill. The teacher or
interventionist did not provide any instruction or feedback to the participants about their
performance for data collected in the baseline phase. Each of the participants began the baseline
phase for all of the skills at the same time. Baseline data were collected concurrently for all of
the skills for the first three baseline sessions. For the sessions thereafter, baseline data were
probed for the skills that were not immediately receiving intervention. The baseline sessions
continued for the first skill until the baseline data included a minimum of five data points with
80% or more of the data determined to be stable (i.e., steady-state responding). If these data
were stable, the intervention was then introduced for that skill. Each remaining skill was probed
until the participant had completed five of the intervention worksheets with 80% accuracy or
above, at which time the intervention was introduced for the next skill, and the third skill was
probed in similar fashion. Following each baseline session, the participants were thanked for
their work and were returned to their classroom to continue their regular scholastic activities.
Intervention. Prior to the first intervention session, students were oriented to an example
intervention packet and given the opportunity to practice using the iPads to scan marker images.
Before each intervention session, the iPads were unlocked and the HP Reveal app was opened
and set into search mode, the mode where it is searching for marker images. Guided Access was
enabled on the device. The guided access settings allowed the participants to utilize the iPads to
perform the needed functions for the intervention but prevented them from closing the HP
Reveal app or accessing non-intervention related features or apps.
For the intervention sessions, students were given an intervention packet, an iPad, a pair
of on-ear youth size headphones, two pencils, and a calculator (TI 30XA student scientific
calculator). The interventionist read aloud the directions on the cover page to the participants and
then directed them to begin working through the instructional packet. The participants turned to
the second page in the packet and used their iPad to scan the marker image to start their first
instructional video, the lesson opening and model. The teacher modeling the skills instructed the
participants to keep their pencils down and to watch and listen to the instruction. Each lesson
opened with a statement that the lesson was beginning (to gain the students’ attention), stated the
lesson’s goal, and reviewed relevant prerequisite knowledge. This first video for each skill also
provided conceptual information for each skill. The instructor in the video then modeled the
skill. Multiple examples (three to five) were provided and the instructor provided visual and
verbal demonstration of the steps to perform the skill as well as cognitive modeling (i.e., the
vocalization of internal dialogue or thought processes that are typically unspoken; Archer &
Hughes, 2011; Denney, 1975). The lengths of the instructional videos were as follows: 4 minutes
33 seconds for skill 1, adding and subtracting fractions with common denominators; 8 minutes
56 seconds for skill 2, completing equivalent fractions; and 5 minutes 58 seconds for skill 3,
converting fractions to decimal notation and converting decimal notation to fractions.
When the introduction and model videos concluded, the participants turned the page in
their intervention packet to the guided practice page. On the guided practice page, the
participants would read the instructions (if applicable), scan the marker image, and practice the
skill along with the video. The prompts and supports were systematically faded in the guided
practice portion using a tell, ask, remind (TAR) procedure (Archer & Hughes, 2011). In this
process, the instructor initially provided high levels of scaffolding by “telling” the students how
to perform each step of the skill. The instructor then faded the scaffolding by “asking” the
participants how to do each step of the skill while providing opportunities for students to
respond. Finally, in the lowest level of prompting in guided practice, students were simply
“reminded” to follow the steps or procedures. This fading process gradually places more
responsibility on and increased the cognitive effort required by students, while simultaneously
fading the teacher prompting. The length of the guided practice videos was as follows: 2 minutes
54 seconds for skill 1, adding common denominators; 3 minutes 18 seconds for skill 2,
completing equivalent fractions; and 3 minutes 10 seconds for skill 3, converting fractions to
decimal notation and converting decimal notation to fractions.
Following the guided practice portion, the participants raised their hands and the
instructional packet was exchanged for a page containing check problems, to verify their ability
to accurately perform the mathematics skill. The check stage of the lesson provided an
opportunity for each student to independently, and without prompts, demonstrate their ability to
accurately perform the skill and for the interventionist to provide feedback to the student. The
check problems were on a separate page from the guided practice in order to ensure the students’
ability to perform the skill without prompts. During the check, each student performed two to
four iterations of the newly learned skill. The interventionist checked and provided feedback
before the participant was able to continue. If the participant had an error in one or more of the
check problems, the interventionist identified the area where the error occurred and helped the
participant correct the error. Then, depending on the nature of the error, the participant would
either do an additional check page (for simple calculation errors), or if necessary, was directed to
begin the guided practice or model videos again (for multiple or more severe errors, i.e., process
or conceptual errors). If the participant was directed to watch a video again, the participant
completed an additional check page afterwards. When the participant answered each of the
check problems correctly, he or she was directed to continue to the independent practice portion
which contained five practice opportunities.
If a participant answered the additional check problems incorrectly a second time, the
interventionist would direct them to watch the “guided practice” video again, completing the
guided practice problems again, before reattempting an additional unique page of check
problems following the same procedure as before. The check procedure continued until the
participant completed each of the check problems with 100% accuracy. Five to eight extra check
pages were prepared in advance for each skill before the commencement of the intervention
stage, in the event that students made errors and needed to do an additional check page.
Maintenance. One important part of learning academic skills is determining the level to
which participants maintain the skills (Cooper et al., 2013). Collecting maintenance data
provides valuable information about the continued outcomes of an intervention after it has
concluded (Barton, Meadan-Kaplansky, & Ledford, 2018). Maintenance is defined as a durable
change in behavior (Cooper et al., 2007), which in this intervention is defined as the continued
ability to perform a skill after the intervention is concluded. Maintenance probes were
administered for each skill 7 to 14 days post-intervention to evaluate the participants’ continued
ability to perform each of the skills.
Generalization. Generalization data were collected for each skill, measuring response
generalization rather than stimulus generalization. The intervention evaluated response
generalization by presenting word problems for each skill to assess the participants’ ability to
generalize the mathematics skills in applied situations. Generalization probes were collected in
the baseline, intervention, and maintenance phases of the study to demonstrate increased rigor
and allow for potential correlational conclusions about the participants’ ability to generalize the
skills (Ledford, Lane, & Tate, 2018).
Interrater Reliability and Treatment Integrity
Interrater reliability was conducted on 100% of the dependent variable data. The permanent
products obtained from each data collection session were scored by the interventionist and by a
second rater. The second rater, a PhD candidate, was trained to compare the students’ written
answers to the mathematics problems with the answer keys. Interobserver agreement (IOA) was
calculated on all permanent products produced by students. The agreement between the first and
second rater was 100%.
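Total-count agreement of this kind (two raters independently scoring each permanent product against the answer key) reduces to agreements divided by items scored. A minimal sketch, with illustrative variable names not taken from the study:

```python
def point_by_point_ioa(rater1, rater2):
    """Percent agreement between two raters' item-level scores.

    Each list holds one entry per problem (e.g., 1 = correct, 0 = incorrect).
    """
    if len(rater1) != len(rater2):
        raise ValueError("raters must score the same number of items")
    agreements = sum(a == b for a, b in zip(rater1, rater2))
    return 100.0 * agreements / len(rater1)

# Example: both raters agree on all five problems on a worksheet
print(point_by_point_ioa([1, 0, 1, 1, 1], [1, 0, 1, 1, 1]))  # 100.0
```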
Treatment integrity, also known as fidelity, is defined as the level to which the
intervention was implemented as described (Ledford & Gast, 2014). Treatment integrity was
collected by the interventionist across baseline, intervention, maintenance, and generalization
phases (see forms in Appendix E). The intervention was implemented as described in the
methods section a mean of 95% of the time (range: 94%-100%). Second raters observed the
intervention to verify the fidelity of the treatment implementation. The second raters were
provided with a copy of the treatment integrity form ahead of observing the intervention and
were provided with opportunity to ask questions and seek clarifications if needed. The second
raters observed 25% of the sessions across the baseline, intervention, maintenance, and
generalization phases. Agreement for the treatment integrity was 98%.
Participants were evaluated on their fidelity to the intervention. Agreement was 95%.
Additionally, each instructional video was analyzed to verify that each of the components of
explicit instruction was present in the videos for each skill, with 97% overall fidelity (see Appendix E,
p. 150). Lastly, the problem difficulty across baseline, intervention, and maintenance was
reviewed by an external expert to verify that the problem difficulty did not vary between phases.
One hundred percent of the data were verified, and the reviewer found no significant difference in
problem difficulty (see Appendix E & K).
Social Validity
Student social validity. The student social validity measure was created using statements
about the intervention and mathematics in general that the students responded to on a Likert-type
scale. Additionally, the measure contained open-ended questions about the intervention. Each of
the participants received paper copies of the social validity measure to be filled out with pencils.
The questions were read out loud to the students and the students then wrote their responses. The
students were told not to write their name on the social validity questionnaire for anonymity (see
Appendix C for the participant social validity form).
Teacher social validity. The teacher social validity measure was delivered in paper and
pencil format. The measure contained 11 statements that the teachers responded to on a Likert-
type scale that ranged from 1 (strongly disagree) to 6 (strongly agree; modified from Witt &
Elliott, 1985). The teacher social validity measure also contained four open-ended questions for
the teachers to respond to with short written responses. The teachers’ responses were anonymous
(see Appendix C for the teacher social validity form).
It was determined that social validity data would not be collected from the participants’
parents or guardians. This decision was made in part because the parents or guardians were not
directly involved in the intervention and would not likely be able to provide important
information about the intervention and its social acceptability.
Data Analysis
The results were evaluated using visual analysis and Tau-U. These methods assisted in
determining if: (a) experimental control occurred, (b) there were intervention effects, and (c) a
functional relation was demonstrated. Trend, level, variability, immediacy, overlap, and
consistency are areas that were evaluated using visual analysis (Cooper, Heron, & Heward,
2007). Trend evaluates whether the data are accelerating (increasing), decelerating (decreasing), or
continuing unchanged (remaining level) in each condition. Level assesses whether there are visible
gaps in the data, either up or down. Variability evaluates the stability or lack of stability in the
data. Additional data patterns considered were the immediacy of the effect, overlap, and
consistency of data patterns across similar phases (Kratochwill et al., 2010). This study evaluated
the visual changes in level for each skill and participant. Additionally, descriptive statistics (e.g.,
mean) were used to determine changes in level from baseline to intervention phases. The split-
middle method was used to evaluate the trend. This was done by first splitting the data in each
phase in half, selecting the median data point from each half of the data, drawing a line between
those two points, and evaluating the direction of the trend (White & Haring, 1980). The data
were visually analyzed for variability and a stability envelope was also calculated. The stability
envelope was calculated by (a) selecting the median value from the data in the intervention
phase, (b) creating a band or range that is 25% higher and 25% lower than the median value, and
(c) evaluating the remaining data points to see if they fell within that band (i.e., range; Barton,
Lloyd, Spriggs, & Gast, 2018).
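The split-middle trend estimation and the stability-envelope calculation just described can be sketched as follows (an illustrative simplification; the function and variable names are assumptions, not the study’s software):

```python
import statistics

def split_middle_trend(data):
    """Direction of trend via the split-middle method (White & Haring, 1980).

    Splits the phase data in half, takes the median of each half, and
    compares the two medians.
    """
    half = len(data) // 2
    first, second = data[:half], data[len(data) - half:]
    m1, m2 = statistics.median(first), statistics.median(second)
    if m2 > m1:
        return "increasing"
    if m2 < m1:
        return "decreasing"
    return "level"

def within_stability_envelope(data, band=0.25, criterion=0.80):
    """True if >= `criterion` of points fall within +/- `band` of the median."""
    med = statistics.median(data)
    lo, hi = med * (1 - band), med * (1 + band)
    inside = sum(lo <= x <= hi for x in data)
    return inside / len(data) >= criterion

scores = [80, 100, 100, 80, 100]  # hypothetical intervention-phase accuracies
print(split_middle_trend(scores), within_stability_envelope(scores))  # level True
```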
Tau-U was calculated as an additional means to demonstrate the effectiveness of the
intervention. More specifically, Tau-U is a nonparametric statistical measure that combines four
indices that use regressive statistics to account for (a) overall improvement, which compares the
phase A (baseline) versus phase B (intervention) nonoverlap; (b) the improvement of non-
overlapping data (e.g., the nonoverlap and the intervention trend); (c) improvement, taking trend
intervention into consideration, meaning the nonoverlap when controlling for the baseline trend;
and (d) controlling for baseline trend, the nonoverlap and trend of the intervention controlling for
the baseline trend (Parker, Vannest, Davis, & Sauber, 2011). Because Tau-U follows the “S”
distribution, it can report p-values and confidence intervals (Parker, Davis, & Sauber, 2010).
In this study, Tau-U was calculated on the between-phase execution difference of the
baseline and intervention phases. An online calculator was used to calculate Tau-U (Vannest,
Parker, & Gonen, 2011). To calculate Tau-U using this calculator, the first step is to evaluate the
phase contrasts. To do this, the calculator contrasts the baseline against itself and provides a Tau-
U score for each baseline. A baseline contrast Tau-U score above 0.2 may indicate the
need to correct for a trending baseline (Vannest & Ninci, 2015). If so, procedures for calculating
Tau-C, a baseline-corrected version of Tau-U could be used (see Tarlow, 2017). If baseline does
not need to be corrected for, Tau-U compares the A phase (baseline) and the B phase
(intervention) contrasts for each tier of the intervention across each student. Additionally, the
Tau-U score for each individual tier was combined for a weighted mean Tau-U for each
participant, and a combined weighted mean across each tier and each participant was calculated
for an overall Tau-U. Tau-U effect sizes can be interpreted as follows: 0.65 or lower = small
effect, 0.66 to 0.92 = medium effect, and 0.93 to 1.0 = very high effect (Parker & Vannest,
2009). While researchers have attached effect size measures to Tau-U ranges, this can be
confusing to readers because they are not equivalent to effect sizes in group research (Parker,
Vannest, & Davis, 2014), and for that reason effect sizes will not be reported for the results of
this intervention.
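As an illustration only, the simplest ingredient of Tau-U, the A-versus-B nonoverlap in index (a), can be computed from all pairwise baseline-intervention comparisons; the sketch below omits the trend-correction terms in indices (b) through (d) and is not the online calculator used in the study:

```python
def tau_ab(phase_a, phase_b):
    """Simple A-vs-B Tau: (pairs improved - pairs worsened) / total pairs.

    Compares every baseline point with every intervention point; ties
    contribute zero. The full Tau-U adds trend-correction terms.
    """
    pos = sum(b > a for a in phase_a for b in phase_b)
    neg = sum(b < a for a in phase_a for b in phase_b)
    return (pos - neg) / (len(phase_a) * len(phase_b))

# Hypothetical data: every intervention point exceeds every baseline point
baseline = [0, 0, 0, 0, 0]
intervention = [80, 100, 100, 80, 100]
print(tau_ab(baseline, intervention))  # 1.0
```

With all baseline scores at zero, as in this study, every pairwise comparison favors the intervention phase and the nonoverlap index reaches its ceiling of 1.0.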
RESULTS
This intervention utilized explicit instruction and video modeling delivered through an
augmented reality platform to teach rational number mathematics skills to three 4th-grade
students with SLD. The omnibus results are presented first, followed by individual participant
results. Across the intervention, none of the participants produced any correct answers in any of
the baseline phases. The overall average accuracy across participants and skills was 88.4%
(range of 0-100) in the intervention phase. Visual analysis of the graphs for all the participants
and skills determined that an immediate change in level occurred when the intervention was
implemented on eight of the nine graphs. The trend for the data in the intervention phase was
determined to be level or increasing for seven of the nine intervention phases, using the split-
middle method (White & Haring, 1980). Variability of the data in the intervention phase was
deemed to be stable, with eight of the nine graphs having 80% of the data within a 25%
range of the median value (Barton et al., 2018).
The overall combined and weighted Tau-U score for the intervention across skills was
0.9838 (95% CI [0.7492, 1]). The combined Tau-U score across all three participants for the
first skill, adding and subtracting fractions with common denominators, was 1.0 (95% CI
[0.5868, 1]). The combined and weighted Tau-U score across participants for the second skill,
calculating equivalent fractions, was 0.9523 (95% CI [0.5528, 1]). The combined and weighted
Tau-U score across participants for the third skill, converting fractions to decimal notation and
decimal notation to fractions, was 1.0 (95% CI [0.5935, 1]).
Musette
Musette did not produce any correct answers in any of the baseline phases (see Figure 1).
Across the three skills, in the intervention phases, her overall average accuracy was 89.0%
(range of 0-100). For adding and subtracting fractions with common denominators, the first skill,
her average accuracy was 96.6% (range of 80-100). Her average accuracy was 74.3% (range of
0-100) for the second skill, completing equivalent fractions, and 96.0% (range of 80-100) for the
third skill, converting decimal notation to fractions and converting fractions to decimal notation.
Visual analysis of Musette’s results is as follows. The level changed when the
intervention was implemented: two of the three graphs showed an immediate upward gap, and
all data points except one were non-overlapping, meaning that each data point in the intervention
was higher than any data point in baseline. Using the split-middle method (White & Haring,
1980), the trend for the intervention-phase data is increasing for skill one (adding and
subtracting fractions) and skill three (converting decimals to fractions and fractions to decimals);
the trend for the second skill (completing equivalent fractions) is increasing using the
semi-average trend estimation method. The intervention-phase data were deemed stable for two
of the three graphs (Skills 1 and 3), with 80% of the data falling within a 25% range of the
median value (Barton et al., 2018). For the second skill, with the outlier excluded (the first data
point in the intervention phase, 0), the data for that graph were also stable.
Figure 1. Percent of response accuracy for Musette across three rational number skills,
including: addition and subtraction of fractions with like denominators, completing equivalent
fractions, and converting fractions to decimal notation and converting decimal notation to
fractions. Note: // = Sessions more than five days apart. ∆ = Generalization data point. EI =
Explicit instruction. POVM = Point-of-view video modeling.
The overall combined and weighted Tau-U score for Musette across skills was 0.9512
(95% CI [0.5429, 1]). Her Tau-U score for the first skill, adding and subtracting fractions with
common denominators, was 1.0 (90% CI [0.399, 1]). Her Tau-U for the second skill, calculating
equivalent fractions, was 0.8571 (90% CI [0.278, 1]). Lastly, her Tau-U score for the third skill,
converting fractions to decimal notation and decimal notation to fractions, was 1.0 (90% CI
[0.399, 1]).
Musette had an average maintenance score of 90% accuracy across the three skills. She
scored 80% and 100% on the maintenance probes for the first skill, 40% on the second skill, and
100% on the third skill. For generalization, she scored 0% for all three skills in the baseline
phase of the intervention. She scored 100% on the generalization probe for all three skills. Her
average generalization score in the maintenance phase was 73.3%.
Jaren
For all three of the baseline phases, Jaren did not provide any correct answers (see Figure
2). In the intervention phases, Jaren’s overall average accuracy was 91.8% (range of 60-100). For
the first skill, adding and subtracting fractions with common denominators, his average accuracy
was 93.3% (range of 80-100). On average his accuracy was 90.0% (range of 60-100) for the
second skill, completing equivalent fractions, and 92.0% (range of 80-100) for the third skill,
converting decimal notation to fractions and converting fractions to decimals.
Visual analysis of Jaren’s results was as follows. There was an immediate upward level
change when the intervention was implemented for all three of the skills, with no overlapping
data points from baseline to intervention phase, i.e., each data point in the intervention was
higher than in baseline. Using the split-middle method (White & Haring, 1980), the trend for the
data in the intervention phase was increasing for all three intervention phases, skill one, adding
and subtracting fractions, skill two, completing equivalent fractions, and skill three, converting
fractions to decimal notation and converting decimal notation to fractions. The intervention-
phase data were deemed stable for all three graphs (Skills 1, 2, and 3), with 80% of the data
falling within a 25% range of the median value (Barton et al., 2018).
The overall combined and weighted Tau-U score for Jaren across skills was 1.0 (95% CI
[0.5868, 1]). Jaren’s Tau-U score for the first skill, adding and subtracting fractions with
common denominators, was 1.0 (90% CI [0.399, 1]). His Tau-U score for the second skill,
calculating equivalent fractions, was 1.0 (90% CI [0.399, 1]). His Tau-U score for the third skill,
converting fractions to decimal notation and converting decimal notation to fractions, was 1.0
(90% CI [0.399, 1]).
Figure 2. Percent of response accuracy for Jaren across three rational number skills, including:
addition and subtraction of fractions with like denominators, completing equivalent fractions,
and converting fractions to decimal notation and converting decimal notation to fractions. Note:
// = Sessions more than five days apart. ∆ = Generalization data point. EI = Explicit instruction.
POVM = Point-of-view video modeling.
Jaren’s overall average maintenance across the three skills was 75%. Both of his
maintenance scores for the first skill were 100%; he scored 0% on the second skill and 100% on
the third skill. Jaren scored 0% on the generalization baseline probes for all three skills. His generalization
in the intervention phase was 40% for the first skill and 100% for the second and third skills for
an average score of 80%. In the maintenance phase he scored 80% for the first skill, 0% on the
second skill, and 100% for the third skill. His average generalization score in the maintenance
phase was 60%.
Alaric
Alaric did not produce any correct answers in any of the baseline phases across the three
skills (see Figure 3). For the first skill, adding fractions with common denominators, he produced
incorrect answers in all five consecutive baseline sessions. His scores in the consecutive sessions
and on the probes for the other two skills were also zero. In the intervention phase, Alaric’s overall average
accuracy was 84.4% (range of 0-100). His average accuracy for adding and subtracting fractions
with common denominators, the first skill, was 86.7% (range of 60-100). His average accuracy
was 76.6% (range of 40-100) for the second skill, completing equivalent fractions, and 90.0%
(range of 60-100) for converting decimal notation to fractions and converting fractions to
decimal notation, the third skill.
Alaric’s graphs were analyzed using visual analysis as follows. The level changed when
the intervention was implemented for all three of the graphs, with an immediate upward gap for
each skill. None of the data points in the intervention overlapped with baseline data points.
Using the split-middle method (White & Haring, 1980), the trend for the intervention-phase data
is increasing for two of the three intervention phases: skill two (completing equivalent fractions)
and skill three (converting decimals to fractions and fractions to decimals). The intervention-
phase data were deemed stable for all three graphs (Skills 1, 2, and 3), with 80% of the data
falling within a 25% range of the median value (Barton et al., 2018).
The overall combined and weighted Tau-U score for Alaric across skills was 1.0 (95% CI
[0.6024, 1]). Alaric’s Tau-U score for the first skill, adding and subtracting fractions with
common denominators, was 1.0 (90% CI [0.399, 1]). His Tau-U score for the second skill,
calculating equivalent fractions, was 1.0 (90% CI [0.438, 1]). Finally, his Tau-U score for the
third skill, converting fractions to decimal notation and decimal notation to fractions, was 1.0
(90% CI [0.429, 1]).
Alaric’s overall average maintenance score across the three skills was 65%. His two
maintenance scores on the first skill were 40% and 0%, and 100% for the second and third skills.
Alaric scored 0% on the generalization baseline probes for all three skills. His generalization in
intervention was 0% for the first skill, and 100% for the second and third skills for an average
score of 66.67%. In the maintenance phase he scored 0% for the first and second skills, and
100% for the third skill. His average generalization score in the maintenance phase was 33.33%.
Figure 3. Percent of response accuracy for Alaric across three rational number skills, including:
addition and subtraction of fractions with like denominators, completing equivalent fractions,
and converting fractions to decimal notation and converting decimal notation to fractions. Note:
// = Sessions more than five days apart. ∆ = Generalization data point. EI = Explicit instruction.
POVM = Point-of-view video modeling.
Social Validity
After the intervention concluded, the students and teachers were given the opportunity to
provide feedback about the practicality and social validity of the intervention (Horner et al.,
2005). The student participants and the teachers were administered social validity
questionnaires. Both the students and the teachers completed the forms in paper-and-pencil
format.
Student. The results for the student social validity questionnaire are presented in Table 1.
The directions at the top of the form stated: “Read each statement carefully. Circle the number
below that best describes your experiences and feelings.” The students were presented with nine
statements and were invited to respond to the statements on a Likert-type rating scale about the
goals, procedures, and the outcomes of the explicit instruction and video modeling intervention
(see Appendix C for the student social validity form). Each response level was paired with an
emoji: 1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Somewhat agree,
5 = Agree, 6 = Strongly agree. Overall, the students rated the intervention favorably, with an
average rating of 5.
Table 1
Student Social Validity Questionnaire Responses
Statement P 1 P 2 P 3 Mean
Watching videos on an iPad helped me learn math. 2 6 6 4.67
I enjoyed learning math from an iPad. 4 6 6 5.33
I learn math easily on an iPad. 5 6 6 5.67
I could easily hear the instructor in the video. 1 6 6 4.33
It was simple for me to practice with the video. 4 6 6 5.33
I thought the length of the instructional videos were appropriate. 2 6 6 4.67
My math skills improved. 6 6 5 5.67
I would like to learn other math skills in the same way. 5 6 2 4.33
I like math. 4 6 4 4.67
Note. P = Participant
In addition to responding to the above statements students were asked three open ended
questions: (a) What did you like about learning math from the videos on an iPad? (b) What did
you dislike about learning math from the videos on an iPad? and (c) What would you change
about how you learned math on an iPad? The students’ responses to the first question about what
they liked were “it rily [really] helped,” “great,” and “it is kole [cool].” The student responses to
the second question about what they disliked were: “being bord [bored],” “nothing,” and “to sort
[too short].” Their responses to the final question about what they would change were “I do not no
[know],” “nothing,” and “more fon [fun].”
Teacher. The teachers were invited to respond to statements about the social validity of
the intervention. The results are presented in Table 2. The directions at the top of the teachers’
form stated:
The purpose of this questionnaire is to obtain information that will aid in the selection of
future classroom interventions and programs. These programs will be used by teachers of
children with identified needs. Please circle the number which best describes your
agreement or disagreement with each statement.
Eleven statements were presented to the teachers, and they were invited to respond on a Likert-
type rating scale about the goals, procedures, and the outcomes of the explicit instruction and
video modeling intervention (Adapted from Witt & Elliott, 1985; see Appendix C for the form
presented to the teachers). The Likert-type scale included the following six options: 1 = Strongly
disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Somewhat agree, 5 = Agree, 6 = Strongly
agree. The teachers rated the intervention favorably, with 5.67 being the average rating.
Table 2
Teacher Social Validity Questionnaire Responses
Statement T 1 T 2 T 3 Mean
1. This was an acceptable intervention for the
students' needs. 6 5 6 5.67
2. Most teachers would find this intervention
appropriate for students with similar needs. 5 5 6 5.33
3. This intervention proved effective in supporting
the students' needs. 6 5 6 5.67
4. I would recommend the use of this intervention to other teachers.
6 6 6 6
5. The students' needs were severe enough to
warrant use of this intervention. 6 6 6 6
6. I would be willing to use this intervention in the
classroom setting. 5 6 5.5
7. This intervention did not result in negative side
effects for the students. 6 5 6 5.67
8. This intervention would be appropriate for a
variety of students. 5 6 5.5
9. This intervention was reasonable for the needs of
the students. 5 6 5.5
10. I liked the procedures used in this intervention. 5 6 5.5
11. Overall, this intervention was beneficial for the
students. 6 5 6 5.67
Note. T = Teacher
In addition to responding to the above statements, the teachers were asked four open-
ended questions: (1) What were some of the program's strengths? (2) What were some of the
program's weaknesses? (3) Is there anything you would want to change about the intervention?
and (4) Are there any additional comments you have? The teacher responses are presented in
Table 3. Anecdotally, one teacher mentioned that her student had greater self-efficacy and
confidence in mathematics after experiencing success in the intervention.
Table 3
Teacher Social Validity Open-ended Question Responses
Question T 1 Responses T 2 Responses T 3 Responses
1. What were some of
the program's
strengths?
Much needed small
group/teacher ratio
was great!
The videos were
tailored to the level of
the students.
2. What were some of
the program's
weaknesses?
The timing came
when students
weren't at their best,
right after lunch.
Too much time
between sessions may
have affected
retention.
3. Is there anything
you would want to
change about the
intervention?
It would be better in
the morning or during
that subject's period.
4. Are there any
additional comments
you have?
I saw the effects of
this program…The
effects were
tremendous!
I was pleased with
the end data!
Note. T= Teacher
DISCUSSION
This study extends the research base on explicit instruction, video modeling, and
augmented reality. In the US, there is a need for improved mathematics outcomes for students
with disabilities. Specifically, students often have difficulty with rational numbers, yet, their
performance on rational numbers is critical to their ability to succeed in Algebra and other more
advanced mathematics (Siegler et al., 2012). Explicit instruction and video modeling each
independently have a strong research base supporting their effectiveness in increasing students’
mathematics performance with rational numbers (e.g., Doabler et al., 2014; Gersten et al., 2009
[explicit instruction]; Ennis & Losinski, 2019; Morris et al., 2019; Yakubova et al., 2015 [video
modeling]) and research using an augmented reality platform to deliver video modeling is
emerging (Kellems et al., 2016, 2019; Morris et al., 2019).
The purpose of this study was to evaluate the effects of an intervention featuring explicit
instruction, point-of-view video modeling, and augmented reality to teach rational number
problem-solving mathematics skills to students identified with SLD. The study also sought to
evaluate the effects of the intervention on participants’ skill maintenance, their ability to
generalize their performance to applied word problems, and assess the social validity of the
intervention.
The results demonstrated that a functional relation exists between the intervention and the
rational number problem-solving performance for three 4th-grade students identified with SLD
who were receiving special education services in mathematics. Visual analysis was conducted on
the trend, level, and variability of the graphed data. While each participant responded to the
intervention in slightly different ways, there was a marked increase in their rational number
problem-solving performance at the implementation of the intervention. The Tau-U measure also
supports this finding, with an overall score of 0.98. On average, each of the students performed
better on the fraction skills in the intervention phase as compared to the baseline phase. The
participants were in the intervention phase for an average of 6.1 sessions in order to complete
five data points at 80% or above.
The participants, in general, had the most difficulty with equivalent fractions. This was
expected because of the conceptual difficulty of equivalent fractions computation (Li, 2001).
One possible reason for their difficulty is that, in relation to the other skills in this intervention,
the fraction equivalence problems involved more steps to solve. The increased cognitive
demands involved in the additional steps and the difficulty of the concept may have affected
their performance (e.g., cognitive or working memory overload; Mammarella, Caviola, Giofrè,
& Szűcs, 2018). Fraction equivalence is a foundational concept for students’ understanding of
rational numbers, though it has proven to be difficult for many students (Kamii & Clark, 1995;
Li, 2001). Number lines have been shown to be effective at helping students understand fraction
equivalence (Schumacher et al., 2018; Zhang, Stecker, & Beqiri, 2017) and number lines were
used by the instructor in the conceptual portion of the instructional videos to teach fraction
equivalence. A specific way to utilize a number line that has been shown to be effective at
increasing students’ understanding of fraction equivalence is to increase their understanding of
fraction magnitude (e.g., Fuchs, Malone, Schumacher, Namkung, & Wang, 2017; Tien &
Siegler, 2017). A possible change in the delivery of this intervention that may have provided
additional support to the participants for learning fraction equivalence could have been to
provide printed copies of number lines to them.
Overlapping Data
Musette had one overlapping data point on the second skill, completing equivalent
fractions. Fraction equivalence is considered to be one of the most difficult concepts a student is
exposed to in elementary school (Kamii & Clark, 1995; Ni, 2001). It is likely that a combination
of equivalent fractions being difficult conceptually, in addition to the multiple steps involved in
solving them, led to her not being able to accurately solve any of the problems in the first session
in intervention. For that session, she provided correct answers for all five of the guided practice
problems; however, she needed some additional support at the check stage.
In a post-hoc analysis of the permanent product intervention sheets that the participants
completed (Riccomini, 2005), it was found that Musette performed the first step correctly on
all five of the intervention problems, which required that she divide the greater denominator by
the lesser denominator. However, in the second step, rather than using the quotient as either the
multiplier or divisor for the known numerator (depending on which operation was appropriate
for the problems), she used the greater denominator as the divisor for each of the problems. One
possible explanation for this is that students with SLD often have limited working memory
capacity, visuospatial processing, and can experience cognitive overload, especially with multi-
step problems (Asghar, Sladeczek, Mercier, & Beaudoin, 2017; Mammarella et al., 2018).
Because this was possibly her first exposure to fraction equivalence and the method presented for
solving the problems, she may have been experiencing cognitive overload.
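The two-step procedure described above can be sketched to contrast the taught steps with the observed error. The function name is hypothetical, and the code assumes that one denominator is an even multiple of the other, as in the intervention items.

```python
def missing_numerator(known_num, known_den, target_den):
    """Taught procedure: (1) divide the greater denominator by the
    lesser; (2) multiply or divide the known numerator by that
    quotient, depending on which way the fraction is being scaled."""
    if target_den > known_den:
        quotient = target_den // known_den   # step 1
        return known_num * quotient          # step 2: scale up
    quotient = known_den // target_den       # step 1
    return known_num // quotient             # step 2: scale down

print(missing_numerator(1, 2, 6))   # 1/2 = ?/6  -> 3
print(missing_numerator(4, 6, 3))   # 4/6 = ?/3  -> 2

# Musette's observed error pattern used the greater denominator
# itself, rather than the quotient, as the divisor in step 2.
```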
Another possible explanation for this is what a group of researchers termed the “initial
academic deficit severity hypothesis” (Fuchs, Sterba, Fuchs, & Malone, 2016). This hypothesis
suggests that an intervention may provide less benefit to those with lower prior academic
performance than to those with higher performance. Because Musette had
substantially lower scores on academic measures than the other two participants (e.g., the
aimswebPlus benchmark and the i-Ready diagnostic assessment), her initial difficulty with this
skill could justifiably be attributed to this hypothesis. However, while her overall Tau-U score
was lower than the other two participants, as a result of this overlapping data point, a combined
mean of her results across the intervention, maintenance, and generalization phases was higher
than the other two participants (mean score of 87.9, in comparison to 86.2 for Jaren, and 74.7 for
Alaric). So, when examining the intervention effects from a macro perspective, it had overall
positive effects for Musette. The initial academic deficit severity hypothesis may be supported
for her initial learning of fraction equivalence, but does not appear to be a factor in her overall
performance.
Musette’s session that resulted in an overlapping data point could be due in part to
aptitude-by-treatment interactions (Lloyd & Therrien, 2019). The aptitude-by-treatment
interaction suggests that certain characteristics of a participant may affect their likelihood of
benefiting from the
intervention. Aptitudes that may affect the participants’ performance include prior knowledge,
the level of previous exposure to a given concept or skill, and other characteristics that may
influence each individual’s learning and their differing rates of acquiring skills. Because of this,
some variability is expected across each skill and between the students.
Anecdotally, one characteristic that Musette possesses, that may have positively impacted
the overall effectiveness of this intervention, was her focus when she was watching and
practicing with the videos. She seemed less likely to be distracted than the other participants and
appeared to exhibit a higher rate of on-task behavior, even though time on-task data were not
collected.
Intervention Trend
The split-middle method for analysis of trend in the intervention condition indicated that
two of the nine graphs were decreasing. One was Musette’s second skill, finding the equivalent
fraction, and the other was Alaric’s first skill, adding and subtracting fractions. Utilizing the
semi-average trend estimation method, which is comparable to the split-middle method except
that it uses means from each half of the data points rather than median values, the graph for
Musette’s second tier is increasing. It was determined that because (a) both of these graphs had
met the criteria of five or more data points at 80% or above, (b) the most recent data point score
was at 100%, and (c) the data for the six most recent data points in the intervention phase were
stable, with 80% of the data being within a 25% range of the median value (Barton et al., 2018),
the participants in both of these situations were moved into the next tier of intervention.
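The two trend estimates compared above can be sketched as a single routine. This is a simplified illustration with a hypothetical function name: it drops the middle data point for odd-length phases and omits the quarter-intersect refinement sometimes used with the split-middle method.

```python
from statistics import mean, median

def trend_slope(data, summary):
    """Slope between the summary points of the first and second halves
    of a phase: summary=median gives the split-middle estimate
    (White & Haring, 1980); summary=mean gives the semi-average
    trend estimate."""
    n = len(data)
    first, second = data[: n // 2], data[(n + 1) // 2 :]
    x1, y1 = summary(range(n // 2)), summary(first)
    x2 = summary(range((n + 1) // 2, n))
    y2 = summary(second)
    return (y2 - y1) / (x2 - x1)

sessions = [0, 80, 100, 60, 100, 100]
print(trend_slope(sessions, median) > 0)  # -> True (increasing by split-middle)
print(trend_slope(sessions, mean) > 0)    # -> True (increasing by semi-average)
```

Because the two summaries weight an early outlier differently, a phase can be classified as decreasing by one estimate and increasing by the other, as happened with Musette's second skill.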
A post-hoc analysis of participants’ errors was conducted. This provides researchers the
ability to learn from the errors that were made (Radatz, 1979; Riccomini, 2005). In this study, the
errors that the participants made varied in type and do not appear to have a noticeable trend.
Some of the errors were transfer errors and calculation mistakes while others were process
errors. Graphic organizers could potentially support students with multistep problems. If used,
they could be embedded early in the intervention’s instructional process (e.g., taught in the
model and guided practice and visually represented on the early worksheets before fading the
visual presentation of the support).
Maintenance
Because mathematics skills often build upon each other, maintenance of skills is
important. The results in the maintenance phase varied for the first two skills. This could have
been a result of the participants not obtaining enough practice (e.g., intervention dosage and
intensity, see Fuchs et al., 2017). Even though each participant demonstrated that they were able
to do five problems with 80% or higher accuracy on at least five separate occasions, this may not
have provided a dense enough practice schedule for optimal retention of these rational number
skills.
Another possibility is that the maintenance phase for the first two skills occurred
concurrently with the intervention phase of the following skill. This may have impacted the
students’ performance because of working memory constraints or cognitive overload (Swanson
& Jerman, 2006). Various holiday breaks, snow days, and participant absences, which often
pushed maintenance data collection more than seven days beyond the termination of the
intervention for the first two skills, may have been another contributing factor to the varied
maintenance results. The participants’ maintenance was 100% for the skill in the third tier. The
sessions for this skill were, on average, in closer succession to one another, whereas the sessions
for the previous skills were more spread out. Also, maintenance was not collected during the intervention
phase of a succeeding skill. These factors may have positively impacted the maintenance for this
skill over the other skills. Also, additional massed practice with feedback would likely help the
students to have increased maintenance.
Generalization
The ability to apply skills or behaviors in situations, settings, or formats other than those
explicitly taught is known as generalization, which is an important part of instruction (Dennis,
Sorrells, & Falcomata, 2016; Stokes & Baer, 1977). Generalization was assessed by presenting
participants with word problems describing applied situations in which the fraction skills could
be measured. Generalization data were collected in baseline, intervention, and
maintenance phases. In baseline, both the intervention baseline measures and the generalization
baseline measures were zero. For the remaining generalization probes, the students’ performance
was either equal to or lower than the intervention phase and maintenance phase data.
One limitation was that generalization was not programmed for in the instruction, and
students were not taught strategies to successfully generalize the skills to word problems (Baer,
Wolf, & Risley, 1968; e.g., identify relevant information in the text in order to accurately
perform the generalization measures). Additionally, the students’ reading ability may have been
a confounding variable negatively affecting their ability to generalize the skills taught through
this intervention.
Social Validity
While direct assessment is ideal because it provides objective data on student
performance, indirect assessment can also provide beneficial information about the effects of an
intervention (Heward, 2003). Indirect assessments, in the form of social validity questionnaires
were administered to both the participants and the teachers. In general, both the teachers and the
students rated the intervention positively. Students responded to the first open-ended question,
asking about things they liked about the intervention, with statements such as it “really helped,”
it was “great,” and that it was “cool.” Anecdotally, the participants were frequently eager to
participate in the intervention. However, while the students reported that they liked the
intervention, they were not always enthusiastic to leave their class for the intervention.
Sometimes an intervention session occurred when their class was performing an enjoyable in-
class activity or, on occasion (e.g., when the class had earned a reward), a non-academic activity
(e.g., watching a movie), or, for one student, the read-aloud time of high-interest books, which
made going to do mathematics less appealing. The interventionist did work around the
participants’ schedule and if there was a “special” (e.g., art or physical education) or other
special extracurricular activity, we did not pull the students at that time. If there was a time
available on days that the student had a “special,” the interventionist still attempted to do an
intervention session right after the “special.”
One student mentioned that one thing he disliked about the intervention was being
“bored.” That is a valid concern because the average video length was longer than might be ideal
for their age and attention span. A possible way to address this could be to create shorter videos
or break the instructional sequence into shorter sections. Additionally, using a prop, such as a
finger puppet, to review key points in the last 10-20 seconds of the video clips, may liven up the
videos. However, this may also be a distraction to some students.
In their social validity responses, the teachers reported that they were “pleased” with
the results and that the “effects were tremendous!” At the same time, they felt that there were a
few areas that could be improved on. One teacher commented that there could have been a better
time to take the students, mentioning that she felt that they “weren’t at their best right after
lunch.” While determining when to pull the students for the intervention was a collaborative
decision, this feedback is worth noting. The time was chosen in part because it allowed the
students to participate in all of the teacher-delivered, core content instruction which happened
earlier in the day. Although it may not have been the most ideal time, the consensus was that it
was the best time available.
Another teacher commented that there was possibly too much time between the
sessions, identifying this as a potential factor affecting retention (e.g., the varied maintenance
scores). While this is a possibility, it is more likely that the students needed more practice. One
possibility for future research would be to include 10 practice problems in the dependent
measure, which would provide the participants with twice the practice.
Implications for Practice
It is well known that students’ success in mathematics impacts their post-secondary
educational prospects and opportunities for employment (Adelman, 2006; Lee, 2012; NMAP,
2008). This intervention demonstrates the effects of employing explicit instruction, video
modeling, and augmented reality to teach rational number mathematics skills to students with
disabilities.
This intervention may facilitate special educators’ ability to meet the diverse needs
of learners with exceptionalities. We demonstrated potential for increasing special
educators’ efficiency at differentiating instruction by simultaneously delivering
explicit instruction to a number of students at differing instructional levels. To do this, a teacher,
a group of teachers, or researchers, would create a library of videos and worksheets. The teachers
would then be able to provide their students with an iPad or tablet device that would be used to
assist in explicitly teaching mathematics skills while the teacher assumes more of a facilitating
role (e.g., to check for student understanding, answer questions, conduct error correction
procedures, provide feedback, and reteach). This instruction would include an (a) opening that:
gains students’ attention, provides a preview of the topic, provides a brief conceptual preview of
the skill, and reviews relevant prerequisite skills; (b) a teacher model with examples and
cognitive modeling; (c) guided practice with intentional and systematically reduced scaffolding;
(d) a teacher check for understanding; and (e) independent practice. Each of these components as
a package are validated by research. At the same time, explicit instruction is flexible and some
skills or concepts may not need every component of a prototypical explicit instruction lesson
(Archer & Hughes, 2011).
Additionally, this intervention is useful to individuals and families from diverse
backgrounds because explicit instruction has been found effective in teaching academic content
across cultural and language barriers. This intervention also allows students to progress at a pace
according to their ability and skill level, while the teacher working with the students can help at
key points in the instructional process (e.g., checking for understanding, error correction,
determining the sequence of skill presentation, and providing reinforcement for appropriate
behavior).
Implications for Research
Future research should continue to evaluate the effects of explicit instruction combined
with video modeling to teach academic and behavioral skills to students with SLD. This could include extending the intervention to other rational number skills, to mathematics skills beyond rational numbers, to participants at different grade levels, and to students with other disabilities. Additionally, a replication of this study could further
support the findings. Research should also consider identifying ways to incorporate text input to
further automate student progress through various stages of the explicit instruction (e.g., to verify
students’ ability to perform prerequisite skills, to determine the rate of systematic fading in the
guided practice stage, to check for understanding, etc.). Delivering an explicit instruction and video modeling intervention on a device that records eye gaze may provide information about where students’ visual attention is focused and whether that focus impacts their results. Other future research could evaluate the effects of this intervention on a wider range of
participants without disabilities and within inclusionary settings. A key area that would
strengthen this research would be to have the teachers themselves implement the intervention.
Limitations
A limitation of the digital delivery of explicit instruction is the limited ability to
provide authentic feedback through the videos. The interventionist provided feedback in the
check stage of the explicit instruction process, but the feedback was limited during the two
instructional videos. Having text input or intelligent software with voice recognition could help
to automate the instruction even further and allow greater ability for authentic feedback to
students about their performance.
The time required to prepare the worksheets can be viewed as a limitation. Worksheets can be created easily if the proper software is available. The software used to create the worksheets for this intervention, Math Resource Studio (Version 6), worked well; however, there is a small fee to purchase the software. Teachers may not have access to this software or have funds to purchase it; therefore, it is listed as a limitation. At the same time, all of the worksheets could be created using another software solution that most teachers would likely have access to, such as Microsoft Word or Google Documents.
Recording and editing the videos can also be viewed as a limitation to implementing this intervention. There is a learning curve to video editing, and learning to navigate the augmented reality software can take some time.
There are a few limitations to the Tau-U measure. One is that Tau-U cannot be displayed
graphically in a meaningful way (Parker, Vannest, Davis, & Sauber, 2011). Another limitation is
that chances of error are greater in Tau-U calculation and interpretation than they are with visual
analysis (Tarlow, 2017). Some researchers even suggest that overlap metrics, such as Tau-U and
PND, not be used for evaluation of single-subject research, and if they are used, they should not
be reported as effect sizes (Moeyaert, Zimmerman, & Ledford, 2018; Tarlow, 2017). Though there is some discord about reporting overlap-based metrics in addition to visual analysis, reporting both types of analyses can improve our “understanding of statistical outcomes” (Vannest, Peltier, & Haas, 2018, p. 12).
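For readers unfamiliar with the statistic, the core Tau-U computation described by Parker, Vannest, Davis, and Sauber (2011) can be sketched in a few lines of Python. This is a simplified illustration of the baseline-trend-corrected formulation, not the exact procedure or software used in this study (which would typically rely on an established tool such as the web-based calculator cited in the references):

```python
from itertools import combinations

def tau_u(baseline, intervention):
    """Simplified Tau-U: between-phase nonoverlap corrected for
    baseline trend (after Parker, Vannest, Davis, & Sauber, 2011)."""
    n_a, n_b = len(baseline), len(intervention)
    # Between-phase comparisons: +1 when an intervention point exceeds
    # a baseline point, -1 when it falls below, 0 for ties.
    s_ab = sum((b > a) - (b < a) for a in baseline for b in intervention)
    # Within-baseline pairs, subtracted to correct for baseline trend.
    s_aa = sum((y > x) - (y < x) for x, y in combinations(baseline, 2))
    return (s_ab - s_aa) / (n_a * n_b)

# Complete nonoverlap with a trendless baseline yields the maximum of 1.0.
print(tau_u([1, 2, 1], [8, 9, 10]))
```

A value near 1.0, like the 0.98 reported here, indicates that nearly all intervention data points exceeded baseline data points after accounting for baseline trend.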
Conclusion
The field of special education has a great need for socially-valid intensive interventions
to help students learn mathematics skills. This study demonstrated that an intervention combining explicit instruction and video modeling, delivered through an augmented reality platform, was an effective means of teaching rational number concepts and calculation skills to students with SLD. The intervention taught three rational number skills: adding fractions with common denominators, completing equivalent fractions, and converting between fractions and decimal notation. Three fourth-grade students participated in the study. The effects were determined through visual analysis of level, trend, and variability. The Tau-U also indicated a high probability that a functional relation exists. Generalization and maintenance
had variable results. The students rated the intervention positively using social validity
questionnaires. Additionally, the teachers’ overall social validity ratings and comments were
positive. Future research should continue to evaluate this intervention with a range of participants, with and without disabilities, and on additional mathematics skills, other academic skills, and behavioral skills.
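As a concrete illustration of the three target skills, the examples below use Python’s standard fractions module; the specific values are invented for illustration and are not items from the study’s measures or worksheets.

```python
from fractions import Fraction

# Skill 1: adding fractions with common denominators
total = Fraction(1, 8) + Fraction(3, 8)             # 1/8 + 3/8 = 4/8 = 1/2

# Skill 2: completing an equivalent fraction, e.g., 2/3 = ?/12
target_denominator = 12
missing_numerator = 2 * (target_denominator // 3)   # 2/3 = 8/12

# Skill 3: converting between fraction and decimal notation
as_decimal = float(Fraction(3, 4))                  # 3/4 -> 0.75
as_fraction = Fraction(0.6).limit_denominator()     # 0.6 -> 3/5

print(total, missing_numerator, as_decimal, as_fraction)
```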
REFERENCES
Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through
college. Washington, DC: U.S. Department of Education.
Akçayır, G., & Akçayır, M. (2017). Advantages and challenges associated with augmented
reality for education: A systematic review of the literature. Educational Research Review,
20, 1–11. doi:10.1016/j.edurev.2016.11.002
Aldi, C., Crigler, A., Kates-McElrath, K., Long, B., Smith, H., Rehak, K., & Wilkinson, L.
(2016). Examining the effects of video modeling and prompts to teach activities of daily
living skills. Behavior Analysis in Practice, 9(4), 384–388. doi:10.1007/s40617-016-
0127-y
Archer, A. L., & Hughes, C. A. (2011). Explicit instruction: Effective and efficient teaching.
New York, NY: Guilford Press.
Asghar, A., Sladeczek, I., Mercier, J., & Beaudoin, E. (2017). Learning in science, technology,
engineering, and mathematics: Supporting students with learning disabilities. Canadian
Psychology/Psychologie Canadienne, 58(3), 238–249. doi:10.1037/cap0000111
Bacca, J., Baldiris, S., Fabregat, R., Graf, S., & Kinshuk. (2014). Augmented reality trends in
education: A systematic review of research and applications. Journal of Educational
Technology & Society, 17(4), 133–149.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior
analysis. Journal of Applied Behavior Analysis, 1(1), 91–97. doi:10.1901/jaba.1968.1-91
Banda, D. R., Dogoe, M. S., & Matuszny, R. M. (2011). Review of video prompting studies with
persons with developmental disabilities. Education and Training in Autism and
Developmental Disabilities, 46(4), 514–527.
Barton, E. E., Meadan-Kaplansky, H., & Ledford, J. R. (2018). Independent variables, fidelity and
social validity. In J. Ledford, & D. Gast (Eds.), Single case research methodology:
Applications in special education and behavioral sciences. (3rd ed., pp. 365–391). New
York, NY: Routledge.
Barton, E. E., Lloyd, B. P., Spriggs, A. D., & Gast, D. L. (2018). Multiple baseline and multiple
probe designs. In J. Ledford, & D. Gast (Eds.), Single case research methodology:
Applications in special education and behavioral sciences. (3rd ed., pp. 239–281). New
York, NY: Routledge.
Bellini, S., & Akullian, J. (2007). A meta-analysis of video modeling and video self-modeling
interventions for children and adolescents with autism spectrum disorders. Exceptional
Children, 73(3), 264–287. doi:10.1177/001440290707300301
Brophy, J., & Good, T. L. (1986). Teacher behavior and student achievement. In M. Wittrock
(Ed.), Handbook of research on teaching (3rd ed., pp. 225–296). New York, NY:
Macmillan.
Burton, C. E., Anderson, D. H., Prater, M. A., & Dyches, T. T. (2013). Video self-modeling on
an iPad to teach functional math skills to adolescents with autism and intellectual
disability. Focus on Autism and Other Developmental Disabilities, 28(2), 67–77.
doi:10.1177/1088357613478829
Cakir, R., & Korkmaz, O. (2019). The effectiveness of augmented reality environments on
individuals with special education needs. Education and Information
Technologies, 24(2), 1631–1659. doi:10.1007/s10639-018-9848-6
Cihak, D. F., & Bowlin, T. (2009). Using video modeling via handheld computers to improve
geometry skills for high school students with learning disabilities. Journal of Special
Education Technology, 24(4), 17–30. doi:10.1177/016264340902400402
Cihak, D. F., Moore, E. J., Wright, R. E., McMahon, D. D., Gibbons, M. M., & Smith, C. (2016).
Evaluating augmented reality to complete a chain task for elementary students with
autism. Journal of Special Education Technology, 31(2), 99–108.
doi:10.1177/0162643416651724
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper
Saddle River, NJ: Pearson/Merrill-Prentice Hall.
Corbett, B. A., & Abdullah, M. (2005). Video modeling: Why does it work for children with
autism? Journal of Early and Intensive Behavior Intervention, 2(1), 2–8.
doi:10.1037/h0100294
Denney, D. R. (1975). The effects of exemplary and cognitive models and self-rehearsal on
children's interrogative strategies. Journal of Experimental Child Psychology, 19(3),
476–488. doi:10.1016/0022-0965(75)90077-6
Dennis, M. S., Sorrells, A. M., & Falcomata, T. S. (2016). Effects of two interventions on
solving basic fact problems by second graders with mathematics learning disabilities.
Learning Disability Quarterly, 39(2), 95–112. doi:10.1177/0731948715595943
Doabler, C. T., Nelson, N. J., Kosty, D. B., Fien, H., Baker, S. K., Smolkowski, K., & Clarke, B.
(2014). Examining teachers’ use of evidence-based practices during core mathematics
instruction. Assessment for Effective Intervention, 39(2), 99–111.
doi:10.1177/1534508413511848
Dowrick, P. W. (1991). Practical guide to using video in the behavioral sciences. New York,
NY: John Wiley & Sons.
Ennis, R. P., & Losinski, M. (2019). Interventions to improve fraction skills for students with
disabilities: A meta-analysis. Exceptional Children, 85(3), 367–386.
doi:10.1177/0014402918817504
Fuchs, L. S., Fuchs, D., & Hollenbeck, K. N. (2007). Expanding responsiveness to intervention
to mathematics at first and third grade. Learning Disabilities Research & Practice, 22(1),
13–24. doi:10.1111/j.1540-5826.2007.00227.x
Fuchs, L. S., Fuchs, D., & Malone, A. S. (2017). The taxonomy of intervention intensity.
Teaching Exceptional Children, 50(1), 35–43. doi:10.1177/0040059917703962
Fuchs, L. S., Malone, A. S., Schumacher, R. F., Namkung, J., Hamlett, C. L., Jordan, N. C., . . .
Changas, P. (2016). Supported self-explaining during fraction intervention. Journal of
Educational Psychology, 108(4), 493–508. doi:10.1037/edu0000073
Fuchs, L. S., Sterba, S. K., Fuchs, D., & Malone, A. S. (2016). Does evidence-based fractions
intervention address the needs of very low-performing students? Journal of Research on
Educational Effectiveness, 9(4), 662–677. doi:10.1080/19345747.2015.1123336
Garzón, J., Pavón, J., & Baldiris, S. (2019). Systematic review and meta-analysis of augmented
reality in educational settings. Virtual Reality. Advance online publication.
doi:10.1007/s10055-019-00379-9
Gast, D. L., Lloyd, B. P., & Ledford, J. R. (2018). Multiple baseline and multiple probe designs.
In J. Ledford, & D. Gast (Eds.), Single case research methodology: Applications in
special education and behavioral sciences. (3rd ed., pp. 239–281). New York, NY:
Routledge.
Gersten, R., Beckmann, S., Clarke, B., Foegen, A., Marsh, L., Star, J. R., & Witzel, B. (2009).
Assisting students struggling with mathematics: Response to Intervention (RtI) for
elementary and middle schools (NCEE 2009–4060). Washington, DC: National Center
for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S.
Department of Education. Retrieved from https://ies.ed.gov/ncee/wwc/PracticeGuides
Goeke, J. L. (2009). Explicit instruction: A framework for meaningful direct teaching. Upper
Saddle River, NJ: Merrill.
Hall, T., & Vue, G. (2004). Explicit instruction. Wakefield, MA: National Center on Accessing
the General Curriculum. Retrieved from
http://aem.cast.org/about/publications/2002/ncac-explicit-
instruction.html#.XN7Ks45KhhE
Hansen, N., Jordan, N. C., & Rodrigues, J. (2017). Identifying learning difficulties with
fractions: A longitudinal study of student growth from third through sixth grade.
Contemporary Educational Psychology, 50, 45-59. doi:10.1016/j.cedpsych.2015.11.002
Heward, W. L. (2003). Ten faulty notions about teaching and learning that hinder the
effectiveness of special education. The Journal of Special Education, 36(4), 186–205.
doi:10.1177/002246690303600401
Hitchcock, C. H., Dowrick, P. W., & Prater, M. A. (2003). Video self-modeling intervention in
school-based settings: A review. Remedial and Special Education, 24(1), 36–45.
doi:10.1177/074193250302400104
Hollingsworth, J., & Ybarra, S. (2009). Explicit direct instruction (EDI): The power of the well-
crafted, well-taught lesson. Thousand Oaks, CA: Corwin Press.
Horner, R. D., & Baer, D. M. (1978). Multiple probe technique: A variation of the multiple
baseline design. Journal of Applied Behavior Analysis, 11(1), 189–196.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of
single-subject research to identify evidence-based practice in special
education. Exceptional Children, 71(2), 165–179. doi:10.1177/001440290507100203
Horner, R. H., & Odom, S. L. (2014). Constructing single-case research designs: Logic and
options. In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research:
Methodological and statistical advances. (pp. 127-151). Washington, DC: American
Psychological Association.
Hughes, C. A., Morris, J. R., Therrien, W. J., & Benson, S. K. (2017). Explicit instruction:
Historical and contemporary contexts. Learning Disabilities Research & Practice, 32(3),
140–148. doi:10.1111/ldrp.12142
Hughes, C. A., Riccomini, P. J., & Morris, J. R. (2019). Use explicit instruction. In, J.
McLeskey, L. Maheady, B. Billingsley, M. Brownell, & T. Lewis (Eds.), High-leverage
practices for inclusive classrooms (pp. 215–236). New York, NY: Routledge.
Hughes, E. M. (In press). Point of view video modeling to teach simplifying fractions to middle
school students with mathematical learning disabilities. Learning Disabilities: A
Contemporary Journal, 17(1) 41–57.
Hughes, E. M., & Yakubova, G. (2016). Developing handheld video intervention for students
with autism spectrum disorder. Intervention in School and Clinic, 52(2), 115–121.
doi:10.1177/1053451216636059
Hughes, E. M., & Yakubova, G. (2019). Addressing the mathematics gap for students with ASD:
An evidence-based systematic review of video-based mathematics interventions. Review
Journal of Autism and Developmental Disorders, 6(2), 147–158. doi:10.1007/s40489-
019-00160-3
Hwang, J., Riccomini, P. J., Hwang, S. Y., & Morano, S. (2019). A systematic analysis of
experimental studies targeting fractions for students with mathematics difficulties.
Learning Disabilities Research & Practice, 34(1), 47–61. doi:10.1111/ldrp.12187
Jitendra, A. K., Lein, A. E., Im, S., Alghamdi, A. A., Hefte, S. B., & Mouanoutoua, J. (2018).
Mathematical interventions for secondary students with learning disabilities and
mathematics difficulties: A meta-analysis. Exceptional Children, 84(2), 177–196.
doi:10.1177/0014402917737467
Kamii, C., & Clark, F. B. (1995). Equivalent fractions: Their difficulty and educational
implications. Journal of Mathematical Behavior, 14(4), 365-378. doi:10.1016/0732-
3123(95)90035-7
Kellems, R. O., Cacciatore, G., & Osborne, K. (2019). Using an augmented reality–based
teaching strategy to teach mathematics to secondary students with disabilities. Career
Development and Transition for Exceptional Individuals. Advance online publication.
doi:10.1177/2165143418822800
Kellems, R. O., & Edwards, S. (2016). Using video modeling and video prompting to teach core
academic content to students with learning disabilities. Preventing School Failure:
Alternative Education for Children and Youth, 60(3), 207–214.
doi:10.1080/1045988X.2015.1067875
Kellems, R. O., Frandsen, K., Hansen, B., Gabrielsen, T., Clarke, B., Simons, K., & Clements,
K. (2016). Teaching multi-step math skills to adults with disabilities via video prompting.
Research in Developmental Disabilities, 58, 31–44. doi:10.1016/j.ridd.2016.08.013
Kiru, E. W., Doabler, C. T., Sorrells, A. M., & Cooc, N. A. (2018). A synthesis of technology-
mediated mathematics interventions for students with or at risk for mathematics learning
disabilities. Journal of Special Education Technology, 33(2), 111–123.
doi:10.1177/0162643417745835
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., &
Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from
What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf.
Kroesbergen, E. H., & Van Luit, J. E. H. (2003). Mathematics interventions for children with
special educational needs: A meta-analysis. Remedial and Special Education, 24(2), 97–
114. doi:10.1177/07419325030240020501
Ledford, J. R., & Gast, D. L. (2014). Measuring procedural fidelity in behavioural research.
Neuropsychological Rehabilitation, 24(3-4), 332–348.
doi:10.1080/09602011.2013.861352
Ledford, J. R., Lane, J. D., & Tate, R. (2018). Evaluating quality and rigor in single case
research. In J. Ledford, & D. Gast (Eds.), Single case research methodology:
Applications in special education and behavioral sciences. (3rd ed., pp. 365–391). New
York, NY: Routledge.
Lee, J. (2012). College for all: Gaps between desirable and actual P–12 math achievement
trajectories for college readiness. Educational Researcher, 41(2), 43–55.
doi:10.3102/0013189X11432746
Lloyd, J. W., & Therrien, W. J. (2019). Preview. Exceptional Children, 85(2), 124–125.
doi:10.1177/0014402918811447
Mammarella, I. C., Caviola, S., Giofrè, D., & Szűcs, D. (2018). The underlying structure of
visuospatial working memory in children with mathematical learning disability. British
Journal of Developmental Psychology, 36(2), 220–235. doi:10.1111/bjdp.12202
Mason, R. A., Ganz, J. B., Parker, R. I., Boles, M. B., Davis, H. S., & Rispoli, M. J. (2013).
Video-based modeling: Differential effects due to treatment protocol. Research in Autism
Spectrum Disorders, 7(1), 120–131. doi:10.1016/j.rasd.2012.08.003
McCoy, K., & Hermansen, E. (2007). Video modeling for individuals with autism: A review of
model types and effects. Education and Treatment of Children, 30(4), 183–213.
doi:10.1353/etc.2007.0029
McLeskey J., Barringer M-D., Billingsley B., Brownell M., Jackson D., Kennedy M., …Ziegler
D. (2017). High-leverage practices in special education. Arlington, VA: Council for
Exceptional Children & CEEDAR Center.
Misquitta, R. (2011). A review of the literature: Fraction instruction for struggling learners in
mathematics. Learning Disabilities Research & Practice, 26(2), 109–119.
doi:10.1111/j.1540-5826.2011.00330.x
Moeyaert, M., Zimmerman, K. N., & Ledford, J. R. (2018). Synthesis and meta-analysis of
single case research. In J. Ledford, & D. Gast (Eds.), Single case research methodology:
Applications in special education and behavioral sciences. (3rd ed., pp. 365–391). New
York, NY: Routledge.
Morgan, R., & Salzberg, C. (1992). Effects of video-assisted training on employment-related
social skills of adults with severe mental-retardation. Journal of Applied Behavior
Analysis, 25(2), 365–383. doi:10.1901/jaba.1992.25-365
Morris, J. R., Hughes, E. M., & Stocker, J. D. (2019). Effects of Augmented Reality and Video
Modeling to Explicitly Teach Mathematics. Manuscript in progress.
National Assessment of Educational Progress (NAEP). (2015, 2017). The condition of education.
Washington, DC: U.S. Department of Education.
National Council of Teachers of Mathematics (NCTM). (2000). Principles and standards for
school mathematics: An overview. Retrieved from www.nctm.org/standards/
National Council of Teachers of Mathematics (NCTM). (2006). Curriculum focal points for
prekindergarten through grade 8 mathematics: A quest for coherence. Reston, VA:
Author.
National Governors Association Center for Best Practices & Council of Chief State School
Officers (NGA/CCSSO). (2010). Common Core State Standards for mathematics.
Washington, DC: Authors.
National Mathematics Advisory Panel (NMAP). (2008). Foundations for Success: The Final
Report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department
of Education.
National Research Council (NRC). (2001). Adding it up: Helping children learn mathematics.
Washington, DC: National Academy Press.
Nelson, G., & Powell, S. R. (2018). A systematic review of longitudinal studies of mathematics
difficulty. Journal of Learning Disabilities, 51(6), 523–539.
doi:10.1177/0022219417714773
Ni, Y. (2001). Semantic domains of rational numbers and the acquisition of fraction equivalence.
Contemporary Educational Psychology, 26(3), 400–417. doi:10.1006/ceps.2000.1072
Ni, Y., & Zhou, Y. (2005). Teaching and learning fraction and rational numbers: The origins and
implications of whole number bias. Educational Psychologist, 40(1), 27–52.
doi:10.1207/s15326985ep4001_3
Nikopoulos, C. K., Canavan, C., & Nikopoulou-Smyrni, P. (2009). Generalized effects of video
modeling on establishing instructional stimulus control in children with autism: Results
of a preliminary study. Journal of Positive Behavior Interventions, 11(4), 198–207.
doi:10.1177/1098300708325263
Nikopoulos, C., & Keenan, M. (2004). Effects of video modeling on social initiations by children
with autism. Journal of Applied Behavior Analysis, 37(1), 93–96.
doi:10.1901/jaba.2004.37-93
Nikopoulos, C. K., & Keenan, M. (2007). Using video modeling to teach complex social
sequences to children with autism. Journal of Autism and Developmental Disorders,
37(4), 678–693. doi:10.1007/s10803-006-0195-x
Parker, R. I., Vannest, K. J., Davis, J. L., & Sauber, S. B. (2011). Combining nonoverlap and
trend for single-case research: Tau-U. Behavior Therapy, 42(2), 284–299.
doi:10.1016/j.beth.2010.08.006
Parker, R. I., & Vannest, K. J. (2009). An improved effect size for single-case research:
Nonoverlap of all pairs. Behavior Therapy, 40(4), 357–367.
doi:10.1016/j.beth.2008.10.006
Parker, R. I., Vannest, K. J., & Davis, J. L. (2014). Non-overlap analysis for single-case research.
In T. R. Kratochwill & J. R. Levin (Eds.), Single-case intervention research:
Methodological and statistical advances. (pp. 127-151). Washington, DC: American
Psychological Association.
Powell, S. R., & Fuchs, L. S. (2015). Intensive intervention in mathematics. Learning
Disabilities Research & Practice, 30(4), 182–192. doi:10.1111/ldrp.12087
Prater, M.A., Carter, N., Hitchcock, C., & Dowrick, P. (2012). Video self-modeling to improve
academic performance: A literature review. Psychology in the Schools, 49(1), 71–81.
doi:10.1002/pits.20617
Radatz, H. (1979). Error analysis in mathematics education. Journal for Research in
Mathematics Education, 10(3), 163-172. doi:10.2307/748804
Riccomini, P. J. (2005). Identification and remediation of systematic error patterns in
subtraction. Learning Disability Quarterly, 28(3), 233–242. doi:10.2307/1593661
Rivera, D. P. (1997). Mathematics education and students with learning disabilities: Introduction
to the special series. Journal of Learning Disabilities, 30(1), 2–19.
Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M. Wittrock (Ed.), Handbook of
research on teaching (3rd ed., pp. 376–391). New York, NY: Macmillan.
Saunders, A. F., Spooner, F., & Ley Davis, L. (2018). Using video prompting to teach
mathematical problem solving of real-world video-simulation problems. Remedial and
Special Education, 39(1), 53–64. doi:10.1177/0741932517717042
Schoolhouse Technologies. (2018). Math Resource Studio (Version 6.1.6.2) [Computer
software]. Retrieved from https://www.schoolhousetech.com
Schumacher, R. F., Jayanthi, M., Gersten, R., Dimino, J., Spallone, S., & Haymond, K. S.
(2018). Using the number line to promote understanding of fractions for struggling fifth
graders: A formative pilot study. Learning Disabilities Research & Practice, 33(4), 192–
206. doi:10.1111/ldrp.12169
Siegler, R., Carpenter, T., Fennell, F., Geary, D., Lewis, J., Okamoto, Y., … Wray, J. (2010).
Developing effective fractions instruction for kindergarten through 8th grade: A practice
guide (NCEE 2010-4039). Washington, DC: National Center for Education Evaluation
and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Retrieved from https://ies.ed.gov/ncee/wwc/PracticeGuides
Siegler, R. S., Thompson, C., & Schneider, M. (2011). An integrated theory of whole number
and fractions development. Cognitive Psychology, 62, 273–296.
doi:10.1016/j.cogpsych.2011.03.001
Stokes, T. F., & Baer, D. M. (1977). An implicit technology of generalization. Journal of
Applied Behavior Analysis, 10(2), 349–367. doi:10.1901/jaba.1977.10-349
Swanson, H. L., & Jerman, O. (2006). Math disabilities: A selective meta-analysis of the
literature. Review of Educational Research, 76(2), 249–274.
doi:10.3102/00346543076002249
Tarlow, K. R. (2017). An improved rank correlation effect size statistic for single-case designs:
Baseline corrected TAU. Behavior Modification, 41(4), 427–467.
doi:10.1177/0145445516676750
Vamvakoussi, X. (2015). The development of rational number knowledge: Old topic, new
insights. Learning and Instruction, 37, 50–55. doi:10.1016/j.learninstruc.2015.01.002
Van Hoof, J., Verschaffel, L., Ghesquière, P., & Van Dooren, W. (2017). The natural number
bias and its role in rational number understanding in children with dyscalculia: Delay or
deficit? Research in Developmental Disabilities, 71, 181–190.
doi:10.1016/j.ridd.2017.10.006
Vannest, K. J., & Ninci, J. (2015). Evaluating intervention effects in single-case research
designs. Journal of Counseling & Development, 93(4), 403–411. doi:10.1002/jcad.12038
Vannest, K. J., Parker, R. I., & Gonen, O. (2011). Single case research: Web based calculators
for SCR analysis (Version 10) [Web-based application]. College Station, TX: Texas
A&M University. Retrieved from http://singlecaseresearch.org
Vannest, K. J., Peltier, C., & Haas, A. (2018). Results reporting in single case experiments and
single case meta-analysis. Research in Developmental Disabilities, 79, 10–18.
doi:10.1016/j.ridd.2018.04.029
Vosniadou, S. (2014). Examining cognitive development from a conceptual change point of
view: The framework theory approach. European Journal of Developmental Psychology,
11(6), 645-661. doi:10.1080/17405629.2014.921153
Vosniadou, S., & Skopeliti, I. (2014). Conceptual change from the framework theory side of the
fence. Science & Education, 23(7), 1427–1445. doi:10.1007/s11191-013-9640-3
Vamvakoussi, X., Vosniadou, S., & Dooren, W. V. (2013). The framework theory approach
applied to mathematics. In. S. Vosniadou (Ed.), International handbook of research on
conceptual change (2nd ed., pp. 305–321). New York, NY: Routledge.
Wei, X., Lenz, K. B., & Blackorby, J. (2013). Math growth trajectories of students with
disabilities: Disability category, gender, racial, and socioeconomic status differences
from ages 7 to 17. Remedial and Special Education, 34, 154–165.
doi:10.1177/0741932512448253
White, O. R., & Haring, N. G. (1980). Exceptional teaching: A multimedia training package.
Columbus, OH: Merrill.
Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom intervention strategies. In T.R.
Kratochwill, (Ed.), Advances in school psychology, (Vol. 4, pp. 251–288). Mahwah, NJ:
Erlbaum.
Yakubova, G., Hughes, E. M., & Hornberger, E. (2015). Video-based intervention in teaching
fraction problem-solving to students with autism spectrum disorder. Journal of Autism
and Developmental Disorders, 45(9), 2865–2875. doi:10.1007/s10803-015-2449-y
Yakubova, G., Hughes, E. M., & Shinaberry, M. (2016). Learning with technology: Video
modeling with concrete–representational–abstract sequencing for students with autism
spectrum disorder. Journal of Autism and Developmental Disorders, 46(7), 2349–2362.
doi:10.1007/s10803-016-2768-7
Zhang, D., Stecker, P., & Beqiri, K. (2017). Strategies students with and without mathematics
disabilities use when estimating fractions on number lines. Learning Disability Quarterly,
40(4), 225–236. doi:10.1177/0731948717704966
APPENDIX A
Literature Review and Supporting Information
A Literature Review Designed to Present Supporting Information for the Preceding
Dissertation
Jared R. Morris
The Pennsylvania State University
© 2019 Jared R. Morris
Explicit Instruction
Explicit instruction has a considerable amount of empirical evidence supporting its use
(e.g., Archer & Hughes, 2011; Brophy & Good, 1986; Hughes, Morris, Therrien, & Benson,
2017; Hughes, Riccomini, & Morris, 2019; Rosenshine & Stevens, 1986). Explicit instruction
has been identified as a high-leverage practice for teaching students with disabilities (McLeskey
et al., 2017). Explicit instruction has also been identified as an essential instructional component
of effective mathematics instruction (e.g., Gersten et al., 2009; NMAP, 2008) and specifically for
teaching fractions (Misquitta, 2011).
History of Explicit Instruction
Direct instruction. Explicit instruction is built upon research about the designs and
techniques of Direct Instruction that go back to the 1960s (Engelmann & Carnine, 1982).
Research on Direct Instruction is supported by the student outcomes from participants in Project
Follow Through (Watkins & Slocum, 2003). Project Follow Through began as an ambitious
project to extend into elementary schools the effects of Head Start, an educational program for
economically disadvantaged children. When funding was not appropriated, it became a national
longitudinal educational study continuing experimentally for 10 years and funded for close to 20
additional years (Watkins, 1997). Direct Instruction’s continued effectiveness was reaffirmed by
Adams and Engelmann (1996) in the publication, Research on Direct Instruction: 25 years
beyond DISTAR.
Direct Instruction is based around three constructs key to effectively teaching students:
(a) program design, (b) organization of instruction, and (c) student–teacher interactions (Watkins
& Slocum, 2003). Program design can be divided into five elements: (a) identifying central ideas
and generalizable strategies that maximize student learning, (b) clear communication, (c)
structured dialogue between students and teachers, (d) sequenced skill presentation, and (e) a
systematic program for delivering and reviewing topics and objectives (Watkins & Slocum,
2003). One of the benefits of Direct Instruction is that the effective instructional design
components are already built into the program and scripts. The use of scripts allows the teacher
to focus on effective delivery strategies (Watkins & Slocum, 2003). While explicit instruction and Direct Instruction have a similar instructional design and share many delivery components (e.g., quick pace, scaffolding, frequent responding, choral responding, feedback, maximizing instructional time), explicit instruction is not scripted as Direct Instruction is (Hughes et al., 2017).
Process-product research. Brophy and Good (1986) reviewed the history and effects of
process-product research, summarized the effects of teacher behavior on students’ achievement,
and helped solidify a foundational case for explicit direct instructional methods. From the
evidence reviewed, the authors concluded that teachers’ behaviors and the amount of time
students are on task directly impact student outcomes (Brophy & Good, 1986).
In the same handbook, Rosenshine and Stevens (1986) synthesized research on
instructional procedures. Their findings also supported the need for an explicit and direct
approach to instruction. They also highlighted key instructional components that are connected
to effective student outcomes and suggested an instructional model with six functions of effective instruction. The main functions and key components of their instructional model are: (a) review previous instruction, assignments, and prerequisite skills, and reteach if necessary; (b) present new content and skills using demonstrations and models, including a short statement of objectives, a rapid pace, and highlighted main points; (c) employ guided practice, including relevant questions, opportunities for all students to respond, prompts, and enough repetition that students are firm and can complete the skills with high levels of accuracy; (d) provide correctives and feedback (differentiated according to the confidence with which the response was provided), monitoring, and praise; (e) plan independent practice until mastery (e.g., 95% accuracy) and fluency or automaticity; and (f) provide weekly and monthly systematic reviews (e.g., distributed practice) and frequent tests (e.g., progress monitoring; Rosenshine & Stevens, 1986).
Literature Reviews and Meta-Analyses
In addition to the literature reviews listed previously that found explicit instruction to be
an effective means of instruction for teaching fractions (e.g., Ennis & Losinski, 2019; Misquitta,
2011; Shin & Bryant, 2015), a number of literature reviews and meta-analyses have identified
explicit instruction as an effective instructional method across content areas (Solis et al., 2012;
Stevens, Rodgers, & Powell, 2017). Solis and collaborators (2012) conducted a literature review
about reading comprehension interventions for students in Grades 6 through 8 identified as
having a learning disability. Twelve studies, published between the years of 1979 and 2009, were
identified for inclusion in their review. They found that strategy instruction, including effective
summarization, self-questioning, mnemonics, and graphic organizers, was a key component of
the majority of the studies and that it had positive effects on the participants’ reading
comprehension. They state that explicit instruction was the most consistent finding across the studies and that explicit instruction would benefit middle school students' reading comprehension (Solis et al., 2012).
Researchers Stevens and colleagues (2017) conducted a meta-analysis of 25 years of
mathematics interventions for students with or at risk for disabilities for Grades 4 through 12.
Their systematic review utilized the PRISMA procedures for their search process (Moher,
Liberati, Tetzlaff, & Altman, 2009; Stevens et al., 2017), and 25 group-design studies published from 1990 to 2015 met their criteria (single-case designs were excluded). A key finding was that
fractions intervention “significantly improved students’ mathematics outcomes” (Stevens et al.,
2017, p. 335). Additionally, they found that explicit instruction is a significant element of
mathematics interventions (Stevens et al., 2017).
IES Practice Guides
The Institute of Education Sciences (IES) was created under the Education Sciences
Reform Act of 2002. IES is the statistics, research, and evaluation division of the US Department of Education and has a mission to improve academic outcomes through scientific evidence (IES, n.d.). IES has six main divisions: (a) national assessments of student achievement, (b) educational surveys, (c) research support, (d) evaluation of federally funded education programs, (e) efforts to increase the use of data in decision making through the What Works Clearinghouse, Regional Educational Laboratories, and statewide longitudinal grants; and (f) funding for training and the development of methods and measures (IES, n.d.).
IES Educator’s Practice Guides (hereafter referred to as IES Practice Guides) utilize a
panel of experts who combine their knowledge with published research. The panel makes
recommendations from their findings and then evaluates their recommendations according to
stringent standards. The front matter to each of the practice guides explains the criteria each panel
uses for rating the recommendations. The panel utilizes the WWC evidence standards and
procedures handbooks (note: the most current handbooks are version 4.0 published in October of
2017 [WWC, 2017]), and assesses the research in seven areas: (a) validity, (b) effects on relevant outcomes, (c) relevance to scope, (d) relationship between research and recommendations, (e) panel confidence in the effectiveness, (f) role of expert opinion, and (g) when assessment is the focus of the recommendation, evaluation of the assessments against The Standards for Educational and Psychological Testing (e.g., Baker et al., 2014; JCSEPT, 2014). The recommendations are then assigned a rating of (a) strong evidence, meaning the recommended strategies have strong research support and are generalizable to a wide population; (b) moderate evidence, meaning the research support is still strong but can be generalized with less confidence; or (c) minimal evidence, which could mean one of three things: the evidence is conflicting or weak, the evidence is insufficient, or the practice is difficult to study experimentally (e.g., Graham et al., 2012/Revised 2018, p. 3).
A number of the IES Practice Guides published by What Works Clearinghouse have
identified explicit instruction as being a fundamental instructional approach. Recent IES Practice
Guides have identified explicit instruction as being highly recommended or having strong effects
across content areas (Baker et al., 2014; Graham et al., 2012/Revised 2018), and specifically in
mathematics (Gersten et al., 2009).
The purpose of the IES Practice Guide by Baker et al. (2014) was to identify and
evaluate research on teaching academic content including literacy, numeracy, social studies, and
science topics to elementary and middle school English Learners. This guide presents four
recommendations, two of which include using explicit instruction (Baker et al., 2014).
Graham and colleagues (2012/2018) authored a practice guide for teaching elementary
school students to be effective writers. This practice guide was revised in October of 2018 and
presents four recommendations: (a) provide daily time for students to write, (b) teach students to
use the writing process for a variety of purposes, (c) teach students to become fluent with
handwriting, spelling, sentence construction, typing and word processing; and (d) create an
engaged community of writers (Graham et al., 2012/2018). The second recommendation
discusses the importance of using explicit instruction, including a gradual release of
responsibility (e.g., systematic reduction of scaffolding; Graham et al., 2012/2018).
In a practice guide specific to mathematics, Gersten et al. (2009) present recommendations for implementing Response to Intervention (RtI). Eight recommendations were classified as having low, moderate, or strong ratings. The recommendations include: (a) screen all students to identify those at risk; (b) for students receiving interventions, focus instructional materials on whole numbers in kindergarten through Grade 5 and on rational numbers in Grades 4 through 8; (c) use explicit and systematic instruction, including modeling the skill with a think aloud (e.g., cognitive modeling), guided and prompted practice, corrective feedback, and frequent review; (d) include strategies for solving word problems based on common underlying structures; (e) utilize visual representations and provide opportunities for students to manipulate visual representations; (f) incorporate 10 minutes of fluency training; (g) use progress monitoring; and (h) incorporate motivational strategies in Tier 2 and Tier 3 interventions (Gersten et al., 2009). Of these recommendations, two were determined to have a strong level of evidence for their use: the third (explicit instruction) and fourth (instruction on word problems) recommendations (Gersten et al., 2009).
Explicit Instruction Today
Today, explicit instruction is recognized as a systematic, direct, and engaging approach to designing lessons and delivering content in a way that is success oriented (e.g., Archer &
Hughes, 2011; Goeke, 2009; Hall & Vue, 2004; Hollingsworth & Ybarra, 2009). Explicit
instruction is defined as:
… a group of research-supported instructional behaviors used to design and deliver instruction that provides needed supports for successful learning through clarity of language and purpose, and reduction of cognitive load. It promotes active student engagement by requiring frequent and varied responses followed by appropriate affirmative and corrective feedback and assists long-term retention through use of purposeful practice strategies. (Hughes et al., 2017, p. 4)
Explicit instruction is an assemblage of instructional design procedures and delivery
methods (Archer & Hughes, 2011) and often follows a three-tiered structure: the model, prompt,
and check. In the model, the teacher models the skill or behavior to be learned. The prompt
includes the teacher guiding the students as they initially attempt performing the skill. The
prompts and guidance are systematically faded as the students’ ability increases. The teacher
then checks that the student is able to perform the skill with high levels of accuracy before being
assigned to practice the skill independently. In summary, common features in explicit instruction
are unambiguity, defined structure, systematic delivery, and intentional scaffolding (Goeke,
2009; Hall & Vue, 2004).
While explicit instruction today retains many of the same components outlined by
Rosenshine and Stevens (1986), the structure now often includes three main divisions. These
three divisions of a lesson are the opening, followed by the body of the lesson, and then the close
of the lesson (see Figure 4).
Opening. An explicit lesson’s opening contains four parts: (a) gain students’ attention,
(b) state the goal of the lesson, (c) discuss the relevance of the target skill, and (d) verify relevant
prerequisite skills (Archer & Hughes, 2011). Each of these components help to set the stage for
the lesson and increase the chances that the students are prepared and ready for learning.
Opening
• Gain attention
• State the lesson's goal
• Discuss the relevance of the target skill
• Verify relevant prerequisite knowledge

Body
• Model
• Prompt or "guide" practice
• Check

Close
• Review
• Preview
• Assign independent practice

Figure 4. The Prototypical Structure of an Explicit Lesson. Note: Adapted from Archer & Hughes, 2011.
Body. After the opening, the body of the lesson comes next. This is where the main portion of the teacher-led instruction occurs. The body of a lesson comprises three processes: the model, prompt, and check, sometimes referred to as "I do," "we do," and "you do" (Archer &
Hughes, 2011).
During the model (I do), the teacher provides an explicit demonstration of how to
perform a given skill (Rosenshine & Stevens, 1986). This model may need to be slower and more exaggerated initially, depending on the students' level and readiness and the content being
taught (Archer & Hughes, 2011). While modeling, the teacher vocalizes thought processes or
internal dialogue. This is referred to as a “think aloud” or “cognitive modeling” (Archer &
Hughes, 2011; Denney, 1975). For example, a teacher might vocalize things like, “I ask myself,
does this problem fit the rule?" (a self-question that could be used for teaching a math rule), or
“this problem is in the form of the rule I learned so I need to do ___________” (self-instruction;
Archer & Hughes, 2011).
It has been recommended to provide multiple models that include the students (Archer &
Hughes, 2011). Ideally, these models would present a wide range of examples and non-
examples. Student participation is highly encouraged throughout the model. Providing
opportunity for frequent responses and giving feedback to those responses is one of the earliest
forms of practice and increases student outcomes (Adamson & Lewis, 2017; Hattie, 2009).
Providing frequent opportunities to respond has been shown to increase the time students are
engaged and reduce behavioral disruptions (Adamson & Lewis, 2017). Focusing on helping
students improve can increase the effectiveness of feedback (Hattie & Clarke, 2019). When these
components are utilized in concert with one another, the model portion of a lesson helps students
feel confident and prepared for the guided practice portion of the lesson. It is important to note
that in many cases, student participation in the model includes answering carefully crafted
questions about the content or skill, not performing the skill independently (Archer & Hughes,
2011).
The prompt (we do) is the next element of a prototypical explicit instruction lesson and
involves students performing, or practicing, the skill with high levels of teacher involvement
(Archer & Hughes, 2011). In this stage, students first begin practicing new skills with high levels
of guidance. This guidance and scaffolding are systematically reduced as students’ ability
increases. Guided practice is followed by an assessment, the check stage (you do).
In the check stage (you do), a teacher has a small number of problems prepared for the
students to do independently and a method for quickly verifying the accuracy of each student’s
responses. By this point errors should be minimal; however, if errors are made, they are
corrected, and the students are then again directed to independently perform the skill to recheck
accuracy. This step is repeated until the student can perform the skill with high levels of accuracy; 80% accuracy has been suggested as adequate for newly learned material (Rosenshine & Stevens, 1986; note: this percentage would be higher [e.g., 95%] for material being reviewed). The check stage helps to ensure that the student is able to do the work accurately so as not to practice errors.
Close. The final section of an explicit lesson, the close of the lesson, consists of briefly
reviewing the material that was presented in that class period. The close is not a time to reteach,
unless that is necessary, but rather to summarize the key points of the content that was covered.
Including the students in the review is a way to keep them actively responding and participating. After reviewing the key points of the lesson, presenting a brief preview of the content or skills that will be covered in the next lesson helps students see where the current lesson fits with future lessons. The preview could simply be one or two sentences and
does not need to be elaborate. The final portion of the close is providing an assignment of
independent practice. An independent practice assignment may not be needed every class period,
especially when it takes multiple days to completely teach a skill.
Each of these steps of an explicit lesson is designed, and validated through research, to
maximize student learning. The next section examines the effectiveness of explicit instruction for
promoting proficiency in mathematics.
Explicit Instruction and Mathematics
Baker, Gersten, and Lee (2002) reviewed instructional components of mathematics
interventions for teaching mathematics to low-achieving students and found explicit instruction
to be highly effective. Another review also identified explicit instruction as an effective
mathematics instructional method for students with disabilities (Kroesbergen & Van Luit, 2003).
Explicit instruction helps to maximize instructional time, an area identified by Jitendra et al. (2018) as critical for helping students struggling in mathematics.
One section of the Final Report of the National Mathematics Advisory Panel is a review
of 26 studies that met high standards of research on explicit instruction and mathematics
(NMAP, 2008). The panel reported that explicit instruction was found to be an effective means
for teaching mathematics to low achieving students and students with learning disabilities.
Therefore, one key recommendation from the panel was that low-achieving students and students
with learning disabilities receive regular explicit mathematics instruction (NMAP, 2008).
A recent review by Kiru, Doabler, Sorrells, and Cooc (2018) synthesized 19 studies that
utilized technology for teaching mathematics for students with or at risk for mathematics
learning disabilities. One of their research questions was to evaluate the extent to which explicit
instruction is utilized in technology-mediated mathematics intervention research. Kiru et
al. identified studies in their literature review as incorporating features of explicit instruction if
they contained one of three components: (a) overt demonstrations, (b) student practice, and (c)
academic feedback. Only four of the 19 studies contained all three components of explicit
instruction (e.g., Fede, Pierce, Matthews, & Wells, 2013; Fien et al., 2016; Seo & Bryant, 2012;
Shin & Bryant, 2015). [Note: there is a discrepancy in the Kiru et al. (2018) article. The table indicates that only three of the studies contained all three components (see p. 118), but the text of the article indicates that four studies contained all three components.]
Many students with disabilities have limited working memory capacity and other
cognitive impairments that can make mathematics especially difficult for them (Swanson &
Jerman, 2006). Explicit instruction helps to address these cognitive challenges (Martin & Evans,
2018). The arrangement of a prototypical explicit instruction lesson contains components that
promote learning and help reduce the cognitive load that students with disabilities experience while learning mathematics; these include modeling, guided and scaffolded practice, and
cognitive modeling or thinking aloud. The model stage in explicit instruction provides a teacher-
directed model that uses clear and consistent language and methods to model the skill being
taught, thus increasing students' problem-solving transfer ability. Guided practice is another
component of explicit instruction that helps reduce students’ cognitive load by scaffolding
instruction and systematically removing the support as students’ abilities increase. Lastly,
providing a cognitive model explicitly exemplifies internal processes that are often not apparent
for students with disabilities (Denney, 1975; Prater, 2018).
Video Modeling
Video modeling is an intervention with empirical support, behavioral underpinnings, and
a strong theoretical framework (Corbett & Abdullah, 2005; Hughes & Yakubova, 2019).
Research about video modeling is well established as a robust intervention with positive effects
across disabilities to teach academic, functional, social, and life skills and behaviors (e.g., Aldi et
al., 2016; Bellini & Akullian, 2007; Burton, Anderson, Prater, & Dyches, 2013; Hitchcock,
Dowrick, & Prater, 2003; Yakubova, Hughes, & Hornberger, 2015; Yakubova, Hughes, &
Shinaberry, 2016). Video modeling has been found to be particularly effective for students with
autism spectrum disorders (ASD; Bellini & Akullian, 2007) and other developmental disorders
(DD; Banda, Dogoe, & Matuszny, 2011). Additionally, video-based interventions meet the
criteria to be considered an evidence-based practice for teaching mathematics to students with
ASD (Hughes & Yakubova, 2019).
Some benefits of video modeling are that "the instruction can be edited for instructional precision, paused for learner processing time, and re-watched for consistent demonstration of a skill" (Hughes, 2019, p. 43). This provides increased accessibility for instruction to be
individualized to each learner’s needs and abilities. Additionally, instruction through videos may
reduce distracting stimuli that could impede learning (Hughes & Yakubova, 2016).
The use of video modeling involves presenting a model in video format to an individual.
The participant is then provided the opportunity to imitate the skill that was presented. This
video modeling process has many benefits including allowing for multiple stimulus and response
opportunities, and standardization of presentation (Morgan & Salzberg, 1992). Dowrick (1991)
found video modeling to be an effective method for gaining and keeping a student’s attention.
Types of Video Modeling
Video modeling can be conducted in various ways, including: video self-modeling
(Hughes & Yakubova, 2016; Prater, Carter, Hitchcock, & Dowrick, 2012), point-of-view video
modeling, video modeling, and video prompting (Hughes & Yakubova, 2016; Kellems &
Edwards, 2015). When speaking about these interventions and methods in general, the term
video-based interventions is often used.
Video modeling. For video modeling, an adult or peer acts as the model to demonstrate the behavior or skill, rather than the individual who will be participating in the intervention. To create these videos, the model can follow a script, and the videos are often recorded in a natural setting where the skill would likely occur. In addition to the video presentation or model of the skill or behavior, specific verbal instructions are provided.
Video prompting. Video prompting is similar to forward chaining in applied behavior
analysis. However, in video prompting the presentation of the task is accomplished through
video clips. As in forward chaining, the task or skill to be taught is broken into smaller chunks, often by conducting a task analysis, and each chunk can be taught to mastery. These chunks are
recorded as separate video clips, or a full-length video of the full task is trimmed (divided) into
smaller video clips using video editing software (e.g., Adobe Premiere Pro, iMovie, Windows
Movie Maker, or Final Cut Pro, among others). The individual learning the behavior or skill
watches the first video clip and then performs that skill before watching the second video clip
and performing the second step and so forth for as many steps as there are. One thing to note
about video prompting is that having multiple steps is not a requirement; a behavior or skill with
only a single step is acceptable (Hughes & Yakubova, 2016).
Video self-modeling. Video self-modeling is another way to implement video-based
interventions. In this strategy, the participant is the same individual who is video and audio
recorded doing (or modeling) the behavior or skill being taught or shaped (Hughes & Yakubova,
2016). If the participant is already able to do some of the steps independently, then those steps
can be video recorded in their authentic setting. If there are not opportunities to video record the
individual performing steps of the skill or behavior in an authentic situation, or if the participant
is not currently able to perform the skill or behavior, then a script or a task analysis can be
prepared in advance and provided to the participant. This script or task analysis can then be
rehearsed and then video recorded in segments. However, with video self-modeling, the video is presented as one complete demonstration of all of the steps of a skill rather than in segments (Hughes & Yakubova, 2016). This being the case, video editing software would be
required to splice the multiple clips into one complete video.
Point-of-view video modeling.* In point-of-view video modeling, the video is recorded
from the perspective of the viewer who will be imitating the behavior (Kellems & Edwards,
2015). This may be accomplished by using a document camera to record the individual's hands as they model the skill (McCoy &
Hermansen, 2007). However, if the skill or behavior to be imitated is a social skill, a job skill, or
another type of skill, the video would display the situation as it would occur from the
participants’ visual “point-of-view.”
*The videos for this study were recorded from the point-of-view perspective.
Video Modeling and Explicit Instruction
Video modeling most commonly provides an explicit visual and verbal model. This type
of instruction has been found to be effective in a number of studies to teach mathematics (Ennis
& Losinski, 2019). However, one component of explicit instruction that is often absent from
standard video modeling, but that has demonstrated positive effects for teaching students with
disabilities, is guided practice. Guided practice is done after a skill or behavior has been modeled
to a student or participant. Guided practice, or prompted practice as it is sometimes called, is
where the student(s) or participant(s) practice the skill with high levels of scaffolding by the
instructor. That scaffolding is systematically faded as the participants’ ability to accurately
perform the skill increases. An additional component in explicit instruction that is not usually
included as a component in video modeling intervention studies is the check stage. The check
stage is where participants perform a skill once or twice and the instructor verifies that they are
doing it correctly before being allowed to move on to independent practice. This study used
explicit instruction as the instructional design. Specific instructional components included a
teacher model (recorded in point-of-view video modeling), guided practice, and a teacher check
for accuracy.
Research on Video Modeling and Point-of-View Modeling
Several recent reviews have evaluated video-based interventions across content areas. Mason et al. (2013) conducted a systematic literature review and meta-analysis of studies evaluating the effects of point-of-view video
modeling. The researchers identified 17 studies that met their inclusion criteria which were as
follows: (a) the independent variable incorporated point-of-view video modeling, (b) the study
was published in English, (c) the outcome variable was a measurable skill, (d) at least one of the
study's participants was identified as having a disability, (e) the study was conducted using a single-subject research design, and (f) the raw data were accessible through either a graph or table
(Mason et al., 2013).
Mason and colleagues evaluated the studies according to the What Works Clearinghouse evidence standards for single-subject research (Kratochwill et al., 2010) and found that 82% (N = 14) of the studies met the quality standards (Mason et al., 2013). The
participants in the studies had a diagnosis of either ASD or DD. The settings ranged from
preschool to post-secondary. A variety of skills were targeted as outcome variables in the studies,
with targeted independent living skills being the most common (Mason et al., 2013).
Additionally, meta-analytic procedures were conducted on the studies. The results suggest
that video-based modeling is most effective when an adult is the model and when reinforcement
is applied to the students (Mason et al., 2013). The effect size (ES) of adults as the model was
.88. Video modeling with “self” as the model was next in effectiveness (ES = .79), followed by
video modeling with peers as models (ES = .73; Mason et al., 2013). The authors note that the
reviewed studies included only participants with ASD and DD and that there is not sufficient evidence
supporting its use with other disabilities (Mason et al., 2013).
In another review, Banda, Dogoe, and Matuszny (2011) conducted a systematic review
on video prompting for students with DD. They selected studies in which (a) video prompting was the independent variable, (b) at least one of the participants was classified as having DD, and (c) the study was published in a peer-reviewed journal between 1990 and 2010 (Banda et
al., 2011). The researchers identified 18 studies with a total of 68 participants. While they did not
calculate effect sizes, they identified positive results for studies using video prompting to teach
or increase behaviors in areas of domestic skills (e.g., cooking), vocational skills (e.g., cleaning),
and independent living skills (e.g., making purchases using a debit card; Banda et al., 2011).
A third review conducted a systematic search for published studies between 2004 and
2014 on point-of-view video modeling (i.e., first-person perspective) for students with ASD
with social skills or play as the dependent variable (Lee, 2015). The review identified only five
studies (Lee, 2015). While the review reported positive effects for point-of-view video modeling
for teaching play and social skills to children with ASD, due to the limitations in available
studies, the authors noted that no particular components could be identified to promote
effectiveness or generalizability (Lee, 2015).
Further, a review of the point-of-view video modeling literature conducted for this study
found positive effects in multiple areas (see Tables 4 & 5). Point-of-view video modeling has
been experimentally evaluated to teach play (Hine & Wolery, 2006), social skills (Sancho,
Sidener, Reeve, & Sidener, 2010), vocational tasks (Yakubova & Taber-Daughty, 2017), and
transition related tasks (Yakubova & Zeleke, 2016). Additionally, letter writing and mathematics
are academic areas where point-of-view video modeling has been evaluated (Moore et al., 2013; Morris, Hughes, &
Stocker, 2019; Yakubova, Hughes, & Hornberger, 2015).
Table 4

Summary of Participant Characteristics and Settings for Point-of-View Video Modeling

Aldi et al., 2016: N = 2; age 2; 2 male; ASD; participants' residence
Hammond, Whatley, Ayres, & Gast, 2010: N = 3; age 3; 3 female; ASD; self-contained classroom
Hine & Wolery, 2006: N = 2; ages 30 and 43 mo.; 2 female; ASD; inclusive, full-day, university-based preschool program
Moore et al., 2013: N = 1; age 5; ASD; family living room
Morris, Hughes, & Stocker, 2019: N = 2; ages 14 and 15; 2 female; ASD, SLD; pull-out classroom
Sancho, Sidener, Reeve, & Sidener, 2010: N = 2; age 5; 1 male, 1 female; ASD; classroom, conference room, office, gym, and room at participant's home
Scheflen, Stephanny Freeman, & Paparella, 2012: N = 4; ages 2-3; 4 male; ASD; classroom
Shrestha, Anderson, & Moore, 2013: N = 1; age 4; 1 male; ASD; kitchen/dining area of the family home
Tereshko, MacDonald, & Ahearn, 2010: N = 4; ages 4-6; 4 male; ASD; therapy room
Tetreault & Lerman, 2010: N = 3; ages 4-8; 2 male, 1 female; ASD; small room at day treatment center
Yakubova, Hughes, & Hornberger, 2015: N = 3; ages 17, 19, 18; 3 male; ASD; separate classroom
Yakubova & Taber-Doughty, 2017: N = 4; ages 20, 19, 17, 19; 4 male; ASD; self-contained classroom
Yakubova & Zeleke, 2016: N = 3; ages 17, 18; 3 male; ASD; classroom, life skills training rooms, cafeteria kitchen, copy room

Note. Each entry lists participants (N), age, gender, disability, and setting. ASD = Autism Spectrum Disorder; SLD = specific learning disability. Ages are in years unless noted in months (mo.).
Table 5

Summary of Research on Point-of-View Video Modeling

Study | Design | IV Type | Skills Targeted | DV | Results
Aldi et al., 2016 | MP | VM P + PR | Activities of daily living skills | Percentage of steps performed correctly | Effective at increasing living skills
Hammond, Whatley, Ayres, & Gast, 2010 | MP | VM P + PR | iPod use | Percentage of steps completed independently | Participants acquired the response; students maintained most tasks; students were retrained for deteriorated skills
Hine & Wolery, 2006 | MB | VM P | Play | Number of types of actions performed | Differentially effective for skill acquisition and generalization
Moore et al., 2013 | MB | VM P + PR | Letter writing | Total score according to rubric for producing letters of name | Effective to teach writing; more research needed
Morris, Hughes, & Stocker, 2019 | MB | VM P + PR | Mathematics | Percent of problems correct | Effective to teach mathematics skills
Sancho, Sidener, Reeve, & Sidener, 2010 | AT/MB | VM P + PR | Social skills, social script | Number of scripted outcomes | Effective for acquisition and maintenance of play skills; no generalization
Scheflen, Stephanny Freeman, & Paparella, 2012 | MB | VM P | Play skills, functional play, social skills, social scripts | Average developmental level of play | Increased play
Shrestha, Anderson, & Moore, 2013 | CC | VM P + PR | Functional self-help | Number of steps performed without prompts | Effective for teaching a child to serve a snack to self
Tereshko, MacDonald, & Ahearn, 2010 | MB | VM P + PR | Play skills, functional play | Number of steps independently completed in the response chain | Effective for teaching functional play
Tetreault & Lerman, 2010 | MB | VM P | Social skills, social script | Number of correct conversational exchanges | Increased eye contact
Yakubova, Hughes, & Hornberger, 2015 | MP | VM P + PR + CL | Mathematics problem-solving | Percent of problems correct | Increased accuracy
Yakubova & Taber-Doughty, 2017 | MP | VM P + QS | Problem solving, vocational tasks | Percentage of problem-solving steps correct | Improved performance and generalized problem-solving skills
Yakubova & Zeleke, 2016 | MP | VM P + QS | Transition-related tasks | Percentage of correct problem-solving steps | All students improved their problem-solving performance

Note. IV = Independent variable, DV = Dependent variable, MP = Multiple probe single-case research design, MB = Multiple
baseline single-case research design, AT = Alternating treatments single-case research design, CC = Changing criterion single-case
research design, VM = Video modeling, P = Priming, PR = Prompting, CL = Problem solving checklist, QS = Cue sheet.
Literature Reviews and Meta-Analysis for Teaching Fractions to Students With or At Risk
for Disabilities
Two literature reviews (Misquitta, 2011; Shin & Bryant, 2015) and one meta-analysis
(Ennis & Losinski, 2019) were identified that focused on teaching fractions to students with or at
risk for disabilities. The first review systematically searched the literature for studies that taught
fractions to struggling learners (Misquitta, 2011). Misquitta’s (2011) search resulted in 10 studies
published between 1990 and 2008. Key findings from this review were that graduated sequence,
strategy instruction, and direct instruction were effective methods for teaching fractions.
Misquitta also identified that explicit instruction is necessary for increasing performance in
fractions. Direct instruction and explicit instruction were both identified in this review as being
important for teaching fractions, and each of these share many of the same instructional features
(Hughes et al., 2017).
Shin and Bryant (2015) conducted a systematic literature review that synthesized studies
on teaching fractions to students struggling to learn mathematics. Their search yielded 17
studies published between 1975 and 2014. The authors focused on the identification of
instructional components rather than the type of intervention in their analysis. They found that
concrete and visual representations were key components for effective fraction instruction.
Additionally, the authors found that studies that combined explicit and systematic instruction
with concrete and visual representations produced positive effects for students learning fractions
(Shin & Bryant, 2015).
In the third article, researchers Ennis and Losinski (2019) conducted a systematic review
and meta-analysis to evaluate studies in which fractions were taught to students with disabilities
or learning challenges. Ennis and Losinski identified 21 studies that met their inclusion criteria.
The authors concluded that, of the five instructional strategies used to teach fractions ((a)
anchored instruction, (b) graduated instruction, (c) explicit instruction, (d) strategy instruction,
and (e) video modeling), video modeling had the highest effect size. However, this finding is
based on only one video modeling study, whereas there were five anchored instruction studies,
three studies that evaluated graduated instruction, seven studies that used explicit instruction,
and three studies that evaluated strategy instruction. They also found that explicit instruction had
the widest research base.
Furthermore, the authors evaluated each of the 21 studies for research quality using the
Council for Exceptional Children’s Standards for Evidence-Based Practices (CEC, 2014). Ten
of the studies met all eight of the quality indicators, while nine more met 80% or more of them
(Ennis & Losinski, 2019).
Culminating findings from these reviews of the literature suggest effective instructional
methods and intervention components for teaching fractions to students with or at risk for
disabilities. Explicit instruction and video modeling are two evidence-based interventions that
were identified by the above researchers as being effective methods for teaching fractions and
improving the mathematical outcomes for learners with mathematics difficulties and disabilities.
Video Modeling, Mathematics, and Learning Disability
A recent meta-analysis identified video-based interventions as meeting the CEC
evidence-based practice standards for teaching mathematics to students with ASD (Hughes &
Yakubova, 2019). Additional studies have demonstrated effective application of video modeling
to teach mathematics to students with moderate intellectual disabilities (Saunders, Spooner, &
Davis, 2018). In order to determine the research base and the effectiveness of video modeling to
teach mathematics to students with specific learning disabilities (SLD), a systematic review was
conducted.
The criteria for inclusion in this review were that the study: (a) used a single-case, group
experimental, or quasi-experimental design; (b) included an individual, kindergarten through
transition programming (age 21), classified as having SLD; (c) included video modeling in some
form as an independent variable in the intervention; (d) provided instruction or an intervention in
mathematics; (e) was published in a peer-reviewed journal; and (f) was published in the English
language.
The following databases were used in the search: ProQuest, ProQuest (ERIC),
PsycINFO, and EBSCO (Academic Search Ultimate). The search terms were entered as follows.
The first search line was used to identify the disability classification (e.g., "learning disability"
OR "specific learning disability" OR dyscalculia). An "AND" was placed next to target studies
that included video modeling as the independent variable (e.g., "video intervention" OR
"video-based intervention" OR "video-based instruction" OR "video modeling" OR "computer
assisted instruction" OR "point-of-view" OR "self-model" OR "video prompt"). Lastly, another
"AND" was placed, followed by terms intended to identify studies related to mathematics (e.g.,
math OR mathematics OR algebra OR geometry OR fractions OR arithmetic).
This search identified a total of 494 results. Two additional articles were identified: a
study in press and a thesis. The titles of each of the articles were read. If there was ambiguity
from reading the title as to whether an article might be relevant, the abstract was read. If further
clarification was needed, the methods section was read. While there were a number of articles
that utilized computer-based instruction, these articles did not use video modeling and often were
evaluating a software-based program (e.g., Leh & Jitendra, 2012 [GO Solve Word Problems vs.
teacher-mediated instruction]; Bryant et al., 2015 [math iPad apps vs. teacher-mediated
instruction]). Five studies met the inclusion criteria.
The first study was by researchers Cihak and Bowlin (2009). In this study, three high
school students with SLD were presented with video modeling to teach geometry skills. The
videos were created using Camtasia Studio 4.0 software, which enables the instructor's writing
to appear on the screen along with an audio recording of the instructor's voice. The videos were
approximately 10 minutes long and were synced to a Toshiba tablet. The researchers used a
multiple-probe design across behaviors.
This study found positive results. The participants improved to a mean of 97% accuracy
across the three skills when the video modeling was implemented. Additionally, the participants
had a mean maintenance score of 89% across the three skills six weeks later (Cihak &
Bowlin, 2009).
The next study, by Kellems and colleagues (2016), included nine adults in a post-high school
transition program. One of the nine participants, a 20-year-old female, had an SLD classification,
while the other participants were classified as having an intellectual disability, ASD, or other
health impairment. This study utilized video modeling to teach multi-step mathematics skills that
included calculating a 15% tip, calculating unit prices, and adjusting a recipe for more or fewer
people. The videos utilized video prompting, with a procedure called system of least prompts
used when needed, as the independent variable. System of least prompts was used to encourage
the participants to attend to the videos. There are three steps in system of least prompts, and the
participant's response determined the prompt needed. The prompts include: (a) a task direction
provided to the participant, (b) an indirect verbal prompt, and (c) a direct verbal prompt. The
researchers created a task analysis of the skills to be taught and then used those task
analyses to create the videos. The research design utilized was a multiple probe across students.
A booster session was also provided to the participants, and maintenance data were collected
(Kellems et al., 2016).
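The three-level prompt escalation described above can be sketched as a simple loop that stops at the least intrusive prompt that produces a correct response. This is an illustrative sketch of the general system-of-least-prompts logic, not code or materials from the Kellems et al. (2016) study; the function and level names are hypothetical:

```python
# Illustrative sketch of a three-level system of least prompts:
# escalate to the next prompt level only when the learner does not
# respond correctly at the current level. All names are hypothetical.

PROMPT_LEVELS = [
    "task direction",          # (a) least intrusive
    "indirect verbal prompt",  # (b)
    "direct verbal prompt",    # (c) most intrusive
]

def deliver_prompts(responds_correctly):
    """Walk the prompt hierarchy from least to most intrusive.
    responds_correctly(level) simulates whether the learner performs
    the step after a given prompt. Returns the prompts delivered."""
    delivered = []
    for level in PROMPT_LEVELS:
        delivered.append(level)
        if responds_correctly(level):
            break  # stop escalating once the learner responds
    return delivered

# Example: a learner who responds only after the indirect verbal prompt.
used = deliver_prompts(lambda level: level == "indirect verbal prompt")
print(used)  # ['task direction', 'indirect verbal prompt']
```

The design choice illustrated here is the defining feature of the procedure: intrusiveness increases only as needed, so a learner who responds to the task direction alone never receives a verbal prompt.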
Results from this study were positive: the video modeling intervention (the independent
variable) demonstrated a functional relation with the dependent variable, the average percentage
of steps completed correctly, for all of the participants, including the participant with SLD
(Kellems et al., 2016).
For the third study, three participants with SLD in mathematics were taught mathematics
using video modeling (Hughes, 2019). This study specifically used point-of-view video
modeling to teach simplifying fractions to middle-grade students (5th grade, n = 2; 8th grade, n = 1).
Additionally, visual representations and a modified concrete-representational-abstract (CRA)
sequence were used in the instruction. The modification was that the intervention did not include
the semi-concrete component of the CRA sequence. Additionally, a self-regulation checklist was
implemented for two of the participants to remind them to check their work. The results from
this study indicate that a functional relation existed between the independent variable, the video
modeling and CRA intervention, and the dependent variable, the percent of problems answered
correctly (Hughes, 2019).
The fourth study identified utilized video modeling to teach mathematics to students
with SLD (Edwards, 2015). This study is a master's thesis and is not published in a peer-
reviewed journal; however, because of its relevance, it is included in this review. This study had
five high school participants with SLD: three female and two male participants aged 16 and 17.
A multiple probe experimental design across participants was used. The results demonstrated
a functional relation between the intervention and the outcome, the percentage of steps
completed correctly (Edwards, 2015).
The last study used an alternating treatments single-subject research design to evaluate
the differential effects of video modeling and explicit instruction to teach mathematics to
students with SLD (Satsangi, Hammer, & Hogan, 2019). This study had three female participants
aged 14, 15, and 16, each identified as having SLD (one of the participants also had other health
impairments as a secondary disability classification; Satsangi et al., 2019). Video modeling and
explicit instruction were the two independent variables in the study. The explicit instruction was
delivered face-to-face by the researcher. The video modeling lessons were recorded using an
app called ShowMe, and delivered using an Apple iPad. Five sessions of each condition were
alternated for each participant. Then, three additional sessions of the most effective treatment for
each participant were delivered.
The results from this study found that both the video modeling and the explicit instruction
conditions were effective in teaching the mathematics skills (Satsangi et al., 2019). There were
no overlapping data points and students’ performance jumped to 100% on the first intervention
session for both conditions. Each participant maintained high scores throughout the intervention
phase and the best-treatment phase (range: 80% to 100% accuracy). The participants
completed social validity measures and reported that the explicit instruction and the video
modeling conditions were equally helpful, though preferences varied. A limitation of this study
is that the researchers did not collect maintenance or generalization data (Satsangi et al.,
2019).
Each of these studies demonstrated positive effects for implementing video-based
interventions for individuals with SLD. While this is a budding area of research, additional
studies are warranted to strengthen the research base on video modeling to teach mathematics to
individuals with SLD.
Augmented Reality
There have been a number of literature reviews that have identified trends, gaps, and
potential future directions for implementing augmented reality in educational settings. In an
article by Martin and colleagues (2011), the authors conducted a bibliometric analysis of
technology trends from the Horizon Report. The Horizon Report attempts to predict
technologies that will positively impact education (Martin et al., 2011). The results of that report
identified that the number of augmented reality related articles at the time was increasing and
that augmented reality is likely to have an impact on education (Martin et al., 2011). They
suggest that augmented
reality has a number of “potential revolutionary applications including the study of architecture,
art, anatomy, languages, … QR code books” and other areas (Martin et al., 2011, p. 1898). While
this review was not a systematic review and focused on predicting the impact of technology, the
results are worth noting.
Two reviews were conducted by Radu (2012; 2014). The first was a comparative review
presented at the IEEE International Symposium on Mixed and Augmented Reality (ISMAR;
Radu, 2012). For this presentation, the author conducted a review of journal and conference
articles that compared augmented reality applications to non-augmented reality applications. The
results (N = 32) from the search were coded into three overarching categories: (a) positive
learning effects, (b) negative learning effects, and (c) underlying factors (Radu, 2012). The
positive learning effects included: (a) increased content understanding, especially with learning
spatial structure and functions; (b) long-term memory retention (e.g., memorization strength
could be enhanced as a result of augmented reality); (c) increased student motivation (high levels
of enthusiasm were reported in multiple papers); and (d) improved group collaboration (Radu,
2012).
The negative learning effects identified in this review include: (a) attention tunneling,
meaning that the attention demands of the technology reduced participants' attention to the
intervention; (b) usability difficulties, especially as related to physical or desktop related options;
(c) ineffective classroom integration (e.g., one study noted that augmented reality caused the
students to participate less in exploration and role-play activities); and (d) learner differences
(e.g., some studies indicated that interventions utilizing augmented reality had a differentially
larger effect on lower-achieving students; Radu, 2012).
Lastly, this review identified factors of augmented reality that may contribute to the
positive learning effects. These include: (a) transformed representations, meaning non-interactive
content becomes interactive and static images are animated; (b) aligned representations (e.g.,
spatiotemporal alignment of information with physical items); (c) natural interactions and
reduced cognitive load; (d) directed attention; (e) presence and embodiment (i.e., embodied
cognition); and (f) dynamic 3D simulations (Radu, 2012). While this review adds
a measure of research to the application of augmented reality to the field of education and
highlights some benefits, challenges, and potential contributing factors, the review is lacking in
the systematic identification and review of empirical articles.
The second review by Radu (2014) was a cross-media analysis of 26 studies that
compared the use of augmented reality with non-augmented reality applications. This review did
not provide many details about the search procedures; however, the author noted that the studies
were identified by "searching online databases for conference and journal articles discussing
comparisons of AR and non-AR applications" (Radu, 2014, p. 1534). These search procedures
are a limitation of this review because they are not technological: that is, they are not described
in a clear and concise way so that others can accurately replicate the procedures (Cooper et al.,
2007).
The findings in this article are similar to those identified in the author’s symposium presentation
(Radu, 2012) and list benefits, challenges, and potential contributing factors that support learning
when utilizing augmented reality in educational settings; however, each section is expanded or
modified from the 2012 publication (Radu, 2014).
This review adds two additional benefits to the list presented in the previous publication
(Radu, 2012): learning language associations and improved physical task performance
(Radu, 2014). The learning detriments (challenges) list remained the same as the 2012
publication, though more detail is included about each topic. The section on factors that may
support learning includes five areas that are similar to those found in the previous article. The
first potential factor mentioned for supporting learning is the novel representation of content that
augmented reality makes possible (Radu, 2014). In the next factor, Radu (2014) refers to
Mayer's multimedia learning theory and how cognitive overload can reduce learning. Mayer's
multimedia learning theory also notes that the presentation of multiple representations is more
effective when time and space are proximal, a feature that augmented reality can help
accomplish (Radu, 2014). The remaining factors are similar to those in the previously published article
though slightly reworded; for example: (a) the learner is physically enacting the educational
concepts, (b) attention is directed to relevant content, (c) the learner is interacting with a 3D
simulation, and (d) interaction and collaboration are natural (Radu, 2014). In this article, the
author also presents a heuristic questionnaire with five statements to be evaluated on a Likert-
type scale ranging from one to five. The author stated that the purpose of this questionnaire is to
help identify applications of augmented reality that have potential for maximizing the learning
potential of individuals (Radu, 2014). The author mentions that the questionnaire has not been
formally evaluated, but that it may be a useful tool. A library search of the author's publications
did not identify articles in which the author has published further work evaluating this heuristic.
Another systematic review, with the goal of identifying trends in the use of augmented
reality in educational settings, was published in 2014 (Bacca et al., 2014). The authors begin by
providing a summary of previously conducted reviews that utilized augmented reality in
educational settings (e.g., Martin et al., 2011; Radu, 2012, 2014). The authors then present the
research questions the review attempts to answer, including, among other questions, the overall
effectiveness of augmented reality in educational settings and whether and how augmented
reality has been used for individuals with special needs (Bacca et al., 2014).
The methods the researchers utilized for this review were adapted from those proposed
by Kitchenham (2004), who published a technical report outlining the procedures for conducting
systematic reviews that include three phases: (a) planning the review, (b) conducting the review,
and (c) reporting the review (Bacca et al., 2014; Kitchenham, 2004). The authors of the review
were technological in describing the search procedures. The general inclusion criteria were that
augmented reality was used in an educational setting and the studies were published between
2003 and 2013. The review identified 32 studies.
The findings noted that a large portion of the studies identified for the review
utilized augmented reality in science (40%), though it appears that the authors also include
mathematics within this category. Humanities and Arts was the second most frequent content
area in which augmented reality was utilized in educational settings (Bacca et al., 2014). The
review found that no studies had been conducted with participants with special needs (Bacca et
al., 2014). The
authors suggest that there is insufficient evidence to draw conclusions about specific uses,
benefits, and challenges for utilizing augmented reality in educational settings (Bacca et al.,
2014).
Another systematic review of the literature was conducted by researchers Akçayır and
Akçayır (2017). Research questions included: (a) what participants are selected for the research,
(b) which augmented reality technologies are most used in educational research, (c) what
advantages does augmented reality bring to educational settings, and (d) what challenges come
from using augmented reality (Akçayır & Akçayır, 2017).
The review searched the Social Science Citation Index (SSCI) for studies related to
augmented reality using the Web of Science search tool (Akçayır & Akçayır, 2017). While this
method can effectively identify studies on a selected topic, this is a limitation for this review
because it is possible (and probable) that additional studies might have been identified had
multiple databases been utilized. The search resulted in 102 articles; after applying the
inclusion criteria, 68 articles remained. The inclusion criteria did not stipulate specific time
period restrictions; however, the researchers note that the SSCI database only includes articles
from 1980 to 2015. The date limitation is not likely a limitation for their search, because
research using augmented reality would likely not have been published before 1980 and the
earliest studies their search identified were published in 2007; this time period restriction may
instead have been related to the access permissions at the authors' institution. The SSCI access
from the Pennsylvania State University Library Database goes back to 1900. The authors
analyzed and
coded the 68 articles in order to answer the research questions (Akçayır & Akçayır, 2017).
The researchers reported that since 2007 the number of studies has steadily increased and
that around half of the articles (51%) had K-12 students as the participants (Akçayır & Akçayır,
2017). The review found that mobile devices were the most frequently used mode of delivering
augmented reality (60%), with computers being the next most frequent mode of delivery.
Augmented reality was found to have potential to support teaching and learning. Learner
outcomes from studies utilizing augmented reality were found to be positive, especially as
related to achievement, motivation, and attitude. A novelty effect was listed as a potential
contributor to the results; the reviewers noted that, if the novelty effect was a confounding
variable, it might wear off as augmented reality becomes more prevalent in school settings
(Akçayır & Akçayır, 2017). The most reported challenge the reviewers identified from the
articles was that students can find augmented reality difficult to use (Akçayır & Akçayır, 2017).
Another challenge, depending on the type of augmented reality being investigated, was that
global positioning system (GPS) errors were an occasional problem for location-based
applications and that triggers can have sensitivity issues for recognition (Akçayır & Akçayır,
2017).
This review did well at identifying trends in published studies utilizing augmented
reality; however, it would have been more conclusive had multiple databases and search
methods been utilized. The researchers also did not note sensitivity issues for marker-based
applications of augmented reality, which may likewise pose a challenge.
The most recent peer-reviewed literature review and meta-analysis on the use of
augmented reality in education was published in 2019 (Garzón, Pavón, & Baldiris, 2019). The
review systematically identified articles and reviewed them following the guidelines proposed by
Kitchenham and Charters (Garzón et al., 2019). The authors identified 61 studies in scientific
journals and conference proceedings, including studies published between 2012 and 2018. The
authors performed meta-analytical procedures, determined that augmented reality had a
medium effect on learning effectiveness, and concluded that augmented reality is becoming more
established in educational settings (Garzón et al., 2019).
In summary, augmented reality has been found to be an effective mode for improving
student outcomes in educational settings. Augmented reality can be especially effective for
teaching concepts that are abstract and complex (Bacca et al., 2014). The present study used
augmented reality as a platform for delivering explicit instruction videos in a self-directed
manner, adapted from an augmented reality implementation checklist published by Kellems,
Cacciatore, and Osborne (2019).
Review of Instructional Procedures for Teaching Rational Numbers
One review attempted to identify why fractions are difficult for students to understand,
why students with mathematics difficulties have increased difficulty, and what can be done to
improve student outcomes with fraction-related mathematics skills (Tian & Siegler, 2017). While
not a systematic review, these authors identified key sources for answering their research
questions. In their review of fractions learning for students with mathematics difficulties,
researchers Tian and Siegler (2017) suggest that emphasizing fraction magnitude knowledge is
essential for numerical understanding (see also Siegler, Thompson, & Schneider, 2011). They
found that, for teaching fractions, using a number line to illustrate
number magnitudes was a common element in some of the most effective interventions they
reviewed (Tian & Siegler, 2017).
Tian and Siegler (2018) conducted another review with the goal of determining the type of
rational numbers students should learn first. This review is not a systematic review but appears to
thoroughly review the literature. Out of the three types of rational numbers (i.e., fractions,
decimals, and percentages), they found that multiple researchers have suggested that decimals
are easier to learn than fractions and therefore may logically be selected to be taught first.
However, after careful analysis they conclude that decimals are not in fact easier than fractions
for students. They were surprised to find that minimal research had been done on percentages.
They concluded that presently there is not sufficient research to determine an order for teaching
rational numbers (Tian & Siegler, 2018).
Summary of Reports and Publications from National Institutions
In 1998, the National Science Foundation’s Directorate for Education and Human
Resources and the Office of Educational Research and Improvement of the US Department of
Education requested that the National Research Council (NRC) synthesize the research on
mathematics for pre-kindergarten through 8th-grade (NRC, 2001). The NRC was also asked to
identify recommendations for teaching mathematics, teacher preparation, and curriculum content
and design that would help increase student performance in mathematics, and to provide direction
to educators, researchers, and other stakeholders (NRC, 2001). The report called for a balanced
approach to teaching and learning mathematics that is not solely prescribed by a textbook or
teacher, but also is not left entirely to the students' creation. Five strands were identified
the report as being key to mathematical proficiency: conceptual understanding, procedural
fluency, strategic competence, adaptive reasoning, and productive disposition (NRC, 2001).
These strands are defined as follows. Conceptual understanding goes beyond being able
to perform a mathematical calculation; it is a comprehension of mathematical relations and
concepts. Procedural fluency combines accuracy with efficiency; it is measured by
calculating the total number of correctly answered problems and dividing that count by the amount
of time taken to perform the set of mathematical problems (NRC, 2001). The next component
that leads to proficiency in mathematics is strategic competence. Strategic competence is defined
as the ability to incorporate strategies to effectively recognize the essential information in
mathematical problems, represent it in a logical or accurate way, and then to solve it (NRC,
2001). The last two strands are adaptive reasoning and productive disposition. Adaptive
reasoning includes components of logic and reflection, in addition to demonstrating the ability to
explain and justify that logic (NRC, 2001). Productive disposition deals with the student’s
mindset about the usefulness of mathematics coupled with his or her self-efficacy, that is, a
student's belief that he or she has the potential to be successful in mathematics. In summary, the
findings from the National Research Council determined that each of these strands (i.e.,
conceptual understanding, procedural fluency, strategic competence, adaptive reasoning, and
productive disposition), woven together, is an essential component that teachers and curricula
need to incorporate in order for students to learn mathematics effectively (NRC, 2001).
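The procedural-fluency measure described above (problems answered correctly divided by the time taken) can be expressed as a short calculation. The sketch below is illustrative only; the function name and the example numbers are hypothetical, not data from the NRC report:

```python
# Illustrative procedural-fluency rate: correctly answered problems
# divided by the time taken, per the NRC (2001) description above.
# The function name and the example numbers are hypothetical.

def fluency_rate(problems_correct, minutes_taken):
    """Return correctly answered problems per minute."""
    if minutes_taken <= 0:
        raise ValueError("time taken must be positive")
    return problems_correct / minutes_taken

# Example: 18 problems answered correctly in 6 minutes.
rate = fluency_rate(18, 6)
print(rate)  # 3.0 correct problems per minute
```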
The National Council of Teachers of Mathematics (NCTM) published two resources to
aid in improving mathematics instruction: Principles and Standards for School Mathematics
(2000) and Curriculum Focal Points (2006). The Principles and Standards publication was
created as a resource for all stakeholders and decision makers for mathematics from pre-
kindergarten through 12th-grade (NCTM, 2000). The authors also suggest that it can be used as a
resource for researchers and mathematicians (NCTM, 2000). It outlines the key principles, skills,
and concepts that would ideally be taught at the various grade levels. It also provides a
logical sequencing for the skills. The principles that are suggested in the book describe essential
pedagogical components that are necessary for teachers to maximize students’ mathematical
outcomes. The council suggests six principles in this publication: (a) equity, (b) curriculum,
(c) teaching, (d) learning, (e) assessment, and (f) technology. The principles are presented
separately; however, the recommendation is that the greatest results will
come from using the principles in concert with one another (NCTM, 2000). The Curriculum
Focal Points publication provides descriptions about the most essential skills and concepts (they
call these “targets”) for each grade level, prekindergarten through 8th-grade (NCTM, 2006). The
goal of this publication is to provide key skills that curriculum and instruction should focus on
(NCTM, 2006).
The National Mathematics Advisory Panel (NMAP) was created in response to a
presidential order signed on April 18, 2006 by President George W. Bush. The panel spent
20 months working and researching, mostly in task-specific groups, to review the research
supporting the use of evidence-based mathematics instruction and to advise the U.S.
President and Secretary of Education on policy (Presidential Executive Order, 2006). The first
task item from the President was that the panel should focus on preparing students for success in
algebra which, in turn, is foundational for success in more advanced mathematics. The final
report published by NMAP was a synthesis of the findings that were considered to be most
important from reviewing over 16,000 research publications and the testimony of 110
individuals. The findings conclude that, to successfully prepare students for algebra, a
curriculum should simultaneously develop three things: (a) conceptual understanding,
(b) computational fluency, and (c) problem-solving skills (NMAP, 2008). In
addition to presenting these three essential curricular components, the Final Report published by
the panel reiterated the nationwide need for improvement in mathematics education (NMAP,
2008).
Another tool that states have created to provide accountability and standards for
mathematics instruction is the Common Core State Standards (CCSS). The CCSS have been
created and implemented to help teachers and administrators keep students on track academically
to be prepared for college and the workplace (National Governors Association Center for Best
Practices & Council of Chief State School Officers [NGA/CCSSO], 2010). These standards
indicate specific areas in which students should be proficient by the end of each grade, kindergarten
through 12th-grade. At present, 41 states, four territories, and the District of Columbia have
adopted the CCSS (Standards in Your State, n.d.). Of these, 38 states or
territories adopted them in 2010, seven in 2011, and two in 2012 (Standards in Your State, n.d.).
By 2017, only four states had never adopted the CCSS: Alaska, Nebraska, Texas, and Virginia,
and 24 states had announced plans either to conduct minor or major rewrites of the standards
or to replace them altogether (Achieve, 2017). The primary focus of the CCSS is on the content
standards and accountability rather than the pedagogical processes for implementation (Shannon,
2018), allowing for flexibility in instructional approach.
Institutional Review Board
An institutional review board (IRB) proposal, more formally called an IRB protocol, was
submitted to the University’s IRB. An IRB evaluates the proposed
study, the research design, potential risks and benefits, equitable selection of subjects, etc., in
accordance with the principles of the Belmont Report, as outlined under the following three
categories: (a) respect for persons, (b) beneficence, and (c) justice (National Commission for the
Protection of Human Subjects of Biomedical and Behavioral Research, 1979). The IRB
determined that the study met the criteria for being exempt. This exemption ruling was made in
accordance with federal policy as found in subpart A of the Common Rule (Federal Policy for
the Protection of Human Subjects, 2017). Of the six exempt categories, one states that
human subjects research is exempt as long as the research is conducted in a commonly accepted
educational setting, and that the intervention involves normal educational practices (e.g.,
instructional strategy or techniques, curricula) that are unlikely to negatively impact students’
ability to learn the required educational content (Federal Policy for the Protection of Human
Subjects, 2017).
Skill Identification and Details
Data from the school files, including aimswebPlus assessments and a diagnostic
assessment, were reviewed by the researchers and the school’s Math Specialist to identify skills
for use in the study. Skills were chosen for the intervention because they are key to future
mathematics success. Three rational number-related skills were selected: (a) adding
fractions with common denominators, (b) completing equivalent fractions, and (c) converting
fractions to decimal notation and decimal notation to fractions.
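As an illustration only (not part of the intervention materials), the mathematics behind the three selected skills can be sketched with Python’s standard fractions module:

```python
from fractions import Fraction

# Skill (a): adding fractions with common denominators, e.g., 1/5 + 2/5.
print(Fraction(1, 5) + Fraction(2, 5))  # 3/5

# Skill (b): completing equivalent fractions, e.g., 1/2 = ?/8.
# Scaling numerator and denominator by the same factor preserves the value.
print(Fraction(1 * 4, 2 * 4) == Fraction(1, 2))  # True

# Skill (c): converting fractions to decimal notation and back.
print(float(Fraction(3, 4)))  # 0.75
print(Fraction("0.25") == Fraction(1, 4))  # True
```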
Note on Skill 1: While simplifying fractions is an important skill, it was determined to be a
standalone skill. In an attempt to evaluate the intervention on a wide range of fraction-related
skills that are functionally independent, simplifying fractions was not taught in this
intervention. Simplifying fractions was evaluated by Hughes (2019) whose study successfully
demonstrated the effectiveness of video modeling for teaching simplifying fractions for students
with SLD in mathematics.
Rational Numbers
Figure 5 illustrates the key number systems, including rational numbers. In the
illustration (Figure 5), the innermost oval, denoted by N, represents natural numbers. Natural
numbers are the counting numbers (e.g., 1, 2, 3, 4, …; note: some authors also include 0 as a
natural number because it is non-negative, while others do not;
Clapham & Nicholson, 2014). Integers, denoted by Z, include all of the natural numbers, their
opposites, and zero (e.g., …-3, -2, -1, 0, 1, 2, 3…). Integers are sometimes referred to as whole
numbers (Clapham & Nicholson, 2014). Rational numbers include each of the two previous
categories and are defined as numbers written in the form of a/b, where both a and b are integers
and b does not equal zero or as a decimal, as long as the expansion is finite or recurring
(Clapham & Nicholson, 2014). The letter Q is often used to denote the set of all rational numbers
(Clapham & Nicholson, 2014; see Figure 5). The overall number system is denoted by R which
represents all real numbers. Real numbers include numbers like the square root of 2 and pi
(e.g., √2 and π; Clapham & Nicholson, 2014).
Figure 5. Number Systems. Real numbers (R) include the rational numbers (Q), which include
the integers (Z), which include the natural numbers (N). Image in the public domain. Retrieved
from Wikimedia Commons (https://commons.wikimedia.org/wiki/File:Number-systems.svg) on
April 25, 2019.
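The defining property of rational numbers described above, a/b with integers a and b and b ≠ 0, can be checked concretely. This brief sketch uses only Python’s standard library and is an illustration, not part of the study:

```python
from fractions import Fraction

# A rational number is a ratio of two integers with a nonzero denominator.
q = Fraction(-3, 4)
print(q)  # -3/4, with a = -3 and b = 4

# The definition excludes b = 0:
try:
    Fraction(1, 0)
except ZeroDivisionError:
    print("b must not be zero")

# Integers and natural numbers are also rational (take b = 1):
print(Fraction(7) == Fraction(7, 1))  # True
```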
Rational numbers, a specific area in which students have difficulty, have been identified
by researchers as vital for future success in mathematics (Hansen, Jordan, & Rodrigues,
2017). Rational numbers have characteristics that distinguish them from other number systems. While rational
numbers include fractions, decimals, and percentages, fractions and decimals will be the focus of
this section.
Many students have difficulty understanding the underlying principles behind rational
numbers and have trouble performing mathematical operations with rational numbers (Ni, 2001;
Vamvakoussi, 2015). The NRC identified fractions and decimals as “major challenge(s)” for
students in pre-K to 8th-grades because they can be represented and used in numerous ways
(NRC, 2001, p. 8). Similarly, the National Mathematics Advisory Panel found fractions to be a
“pervasive” and “major obstacle” in students’ mathematics progression (NMAP, 2008, p. xix).
Even though rational numbers have proven to be difficult for students, they have been identified
as essential skills for students’ future success in mathematics (NCTM, 2006; NMAP 2008).
The National Council for Teachers of Mathematics (NCTM, 2006) highlights various
fraction-related skills as curriculum focal points from Grade 2 through Grade 6. Siegler
and colleagues (2010) suggest that fraction knowledge is one of the greatest predictors of
success in algebra. The NMAP final report suggests that “a major goal for K–8 mathematics
education should be proficiency with rational numbers (including decimals, percent, and
negative fractions), for such proficiency is foundational for algebra” (NMAP, 2008, p. xvii).
These findings indicate that strong knowledge of fractions and the ability to accurately solve
fraction-related mathematics problems are essential to future success in mathematics, especially algebra.
Reasons Underlying Students’ Difficulty with Rational Numbers
A number of reasons and theories have been posited to answer the question,
“Why are rational numbers difficult for students to grasp?” One suggested reason for students’
difficulty with rational numbers is called natural or whole number bias. Whole number bias
is defined as individuals generalizing whole number properties to rational number tasks, whether
appropriate or not (Ni & Zhou, 2005), for example, judging 1/8 to be larger than 1/5 because
8 is larger than 5. However, Ni and Zhou (2005) evaluated three explanations of the nature of
whole number bias and concluded that there was not enough evidence to decide among the accounts.
A second theory proposed is the framework theory of conceptual change (Vamvakoussi,
Vosniadou, & Dooren, 2013; Vosniadou & Skopeliti, 2014). This theory examines students’
difficulty with rational numbers through a cognitive development lens. The framework theory,
similar to whole number bias, suggests that as students develop, a discrepancy materializes
between rational number concepts and the principles that govern reasoning with natural numbers
(Vosniadou, 2014).
Another proposed theory is called the integrated theory of numerical development. This
theory proposes that the development of numerical understanding includes learning about the
various characteristics that unite and differentiate all types of real numbers (Siegler et al.,
2011). One strategy these authors suggest, using a mental number line, has been shown to
be a valuable support for understanding fraction magnitude.
Experimental Design
The three components of baseline logic are prediction, verification, and replication.
Prediction is the assumption that, if a baseline is stable, the performance would continue at the
same level. If the level changes from the predicted level when the intervention is applied, the
assumption is that the intervention is responsible for that change (Cooper et al., 2007). In a
multiple probe design, verification occurs when the intervention is implemented for one tier
(skill) and there is little or no change in the other tiers (skills; Cooper et al., 2007). When the
intervention is implemented in the second tier and a similar result with prediction and
verification occurs (e.g., there is a level change for the second tier but no change with the third
tier), this is called replication (Cooper et al., 2007). Replication has been defined as the “essence
of believability” (Baer, Wolf, & Risley, 1968, p. 95). In addition to intra-subject replication,
implementing the intervention across additional participants and finding similar results adds
another level of replication and can strengthen the believability of the experimental control
demonstrated. This inter-subject replication also strengthens the external validity of the
intervention.
The multiple probe design was selected for this intervention instead of the multiple
baseline design according to the criteria set forth by Gast, Lloyd, and Ledford (2018), who
recommend that a multiple probe design be used instead of a multiple baseline design if testing
threats are likely to become a factor. It was determined that a testing threat was possible with an
extended continuous baseline, especially on the second and third tiers, due to the high probability
that the participants would repeatedly make high rates of errors. In addition to the
potential testing effect, the multiple probe design was more likely to minimize participants’
frustration from repeatedly performing the mathematics skills incorrectly.
Additionally, an attempt was made to minimize the daily amount of time that the students were
removed from their other activities to participate in the intervention, and probing the data in the
second and third tiers helped reduce the time required for the intervention.
While single-subject research designs can demonstrate experimental control and a
functional relation with one participant, it was determined that replicating the results across two
additional participants (inter-participant direct replication) would demonstrate increased rigor,
believability, and external validity of the intervention (Cooper et al., 2007; Gast & Ledford,
2018).
Treatment Integrity
In an article about the importance of measuring treatment integrity, Wolery (2011)
suggested four reasons why it should be done. The first three reasons are applicable to this study.
The first reason is that measuring fidelity can allow investigators to identify results that
occurred at points where the treatment was not implemented with fidelity. A second reason for
measuring treatment integrity is that it helps evaluators assess the degree to which the
intervention could be implemented in authentic (real-world) settings.
The third reason is that fidelity can help with replication studies (Wolery, 2011).
Guided Access Settings
Guided Access settings were utilized on the iPad to minimize features that could
potentially distract participants (e.g., the camera, changing the brightness). Specifically,
within the Guided Access options: (a) the Sleep/Wake Button was disabled, (b) the
volume buttons were enabled, (c) the motion was disabled, (d) the keyboards were disabled, (e)
touch was enabled, (f) no time limit was set, and (g) no areas were circled on the screen to
disable the ability to touch those areas.
Instructional Strategies for Teaching Students with Learning Disabilities
To effectively improve students’ outcomes with rational numbers, teachers’ content
knowledge and pedagogical content knowledge about rational numbers play an important role
(Depaepe et al., 2015). In a study that evaluated preservice teachers’ content knowledge and their
pedagogical content knowledge, researchers found that the participants had gaps in both areas
(Depaepe et al., 2015). A teacher’s content knowledge has been identified as having an important
impact on student outcomes (Hattie, 2009).
For pedagogical content knowledge, researchers have identified effective methods for
teaching fractions. In a review conducted by Tian and Siegler (2017), the importance of fraction
magnitude knowledge was identified as being an essential element for teaching fractions to
students with mathematics difficulties. Additionally, they found that using a number line to
illustrate number magnitudes was a common element in some of the most effective interventions
they reviewed (Tian & Siegler, 2017). Another review determined that there is currently not
sufficient research to support teaching any one type of rational number before the others
(Tian & Siegler, 2018).
Theoretical Background
The theoretical background of the current study draws from two theories: Bandura’s
social cognitive theory and aptitude-by-treatment interaction theory. Social cognitive theory is
often cited as underpinning video modeling, and aptitude-by-treatment interaction theory suggests
that an intervention’s effectiveness may be related to the participants’ previous knowledge.
Social Cognitive Theory
Bandura’s social cognitive theory provides a theoretical foundation for the video modeling in the
current study (Corbett & Abdullah, 2005; Prater et al., 2012). Social cognitive theory explains
that individuals learn new behavior through modeling (Bandura, 1977, 2018). One component of
social cognitive theory is observational learning, which proposes that human behavior is best
learned by observing and imitating others (Bandura, 1977). Bandura (1986) proposed four basic
processes of observational learning: attentional, retentional, production, and motivational.
Attentional denotes that learning occurs by paying attention to what is happening around the
learner. Retentional refers to the need for maintaining observational learning. This is done as the
observer symbolically processes the modeled behavior. Production is where the observer
performs the skill. The fourth process, motivational, refers to the drive behind learning. As
students learn through video modeling, they benefit from these processes. Specifically, video
modeling has been noted for its ability to keep students’ attention (Hughes & Yakubova, 2019).
Aptitude-By-Treatment Interactions
One way researchers evaluate the effectiveness of interventions is by examining relevant
factors individual participants possess prior to commencing an intervention (e.g., Fuchs et al.,
2016a; Fuchs, Sterba, Fuchs, & Malone, 2016b). This is done, in part, to examine the potential
differential effects that treatments or interventions have on the participants and is called aptitude-
by-treatment interactions. This theory suggests that the effectiveness of an intervention is likely to
be a function, in part, of the participants’ aptitudes (Lloyd & Therrien, 2019). The theory of
learning styles (e.g., visual, tactile, or auditory learners), a theory that has been
found to lack research support (Pashler, McDaniel, Rohrer, & Bjork, 2008), is superficially
similar to aptitude-by-treatment interactions but is fundamentally different. Aptitudes include
characteristics (e.g., student performance on previous assessments in a similar content area, the
amount of knowledge, and previous experiences or exposure to the content or treatment) that a
participant possesses prior to the onset of the intervention that may indicate the likelihood that a
participant would be successful at the intervention (Preacher & Sterba, 2019). Fuchs et al.
(2016b) hypothesized that students with greater academic deficits before an intervention would
likely benefit less from the intervention, which they termed the “initial academic deficit severity
hypothesis.” However, in their initial study, they determined that results of the intervention were
similar, regardless of the degree to which the participants’ academic performance differed prior
to the intervention (Fuchs et al., 2016b).
References
Achieve. (2017). Strong standards: A review of changes to state standards since the common
core. (November 13, 2017). Retrieved from: https://www.achieve.org/strong-standards
Adams, G. L., & Engelmann, S. (1996). Research on direct instruction: 25 years beyond
DISTAR. Seattle, WA: Educational Achievement Systems.
Adamson, R. M., & Lewis, T. J. (2017). A comparison of three opportunity-to-respond strategies
on the academic engaged time among high school students who present challenging
behavior. Behavioral Disorders, 42(2), 41–51. doi:10.1177/0198742916688644
Akçayır, G., & Akçayır, M. (2017). Advantages and challenges associated with augmented
reality for education: A systematic review of the literature. Educational Research Review,
20, 1–11. doi:10.1016/j.edurev.2016.11.002
Aldi, C., Crigler, A., Kates-McElrath, K., Long, B., Smith, H., Rehak, K., & Wilkinson, L.
(2016). Examining the effects of video modeling and prompts to teach activities of daily
living skills. Behavior Analysis in Practice, 9(4), 384–388. doi:10.1007/s40617-016-
0127-y
Archer, A. L., & Hughes, C. A. (2011). Explicit instruction: Effective and efficient teaching.
New York, NY: Guilford Press.
Bacca, J., Baldiris, S., Fabregat, R., Graf, S., & Kinshuk. (2014). Augmented reality trends in
education: A systematic review of research and applications. Journal of Educational
Technology & Society, 17(4), 133–149.
Baer, D. M., Wolf, M. M., & Risley, T. R. (1968). Some current dimensions of applied behavior
analysis. Journal of Applied Behavior Analysis, 1(1), 91–97. doi:10.1901/jaba.1968.1-91
Baker, S., Gersten, R., & Lee, D. (2002). A synthesis of empirical research on teaching
mathematics to low-achieving students. The Elementary School Journal, 103(1), 51–73.
doi:10.1086/499715
Baker, S., Lesaux, N., Jayanthi, M., Dimino, J., Proctor, C. P., Morris, J., et al. (2014). Teaching
academic content and literacy to English learners in elementary and middle school
(NCEE 2014–4012). Washington, DC: National Center for Education Evaluation and
Regional Assistance (NCEE), Institute of Education Sciences, U.S. Department of
Education. Retrieved from: https://ies.ed.gov/ncee/wwc/PracticeGuides
Banda, D. R., Dogoe, M. S., & Matuszny, R. M. (2011). Review of video prompting studies with
persons with developmental disabilities. Education and Training in Autism and
Developmental Disabilities, 46(4), 514–527.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change.
Psychological Review, 84(2), 191–215. doi:10.1037/0033-295X.84.2.191
Bandura, A. (1986). Social foundations of thought and action. Englewood Cliffs, NJ: Prentice‐
Hall.
Bandura, A. (2018). Toward a psychology of human agency: Pathways and reflections.
Perspectives on Psychological Science, 13(2), 130–136. doi:10.1177/1745691617699280
Bellini, S., & Akullian, J. (2007). A meta-analysis of video modeling and video self-modeling
interventions for children and adolescents with autism spectrum disorders. Exceptional
Children, 73(3), 264–287. doi:10.1177/001440290707300301
Brophy, J., & Good, T. L. (1986). Teacher behavior and student achievement. In M. Wittrock
(Ed.), Handbook of research on teaching (3rd ed., pp. 225–296). New York, NY:
Macmillan.
Burton, C. E., Anderson, D. H., Prater, M. A., & Dyches, T. T. (2013). Video self-modeling on
an iPad to teach functional math skills to adolescents with autism and intellectual
disability. Focus on Autism and Other Developmental Disabilities, 28(2), 67–77.
doi:10.1177/1088357613478829
Cihak, D. F., & Bowlin, T. (2009). Using video modeling via handheld computers to improve
geometry skills for high school students with learning disabilities. Journal of Special
Education Technology, 24(4), 17–30. doi:10.1177/016264340902400402
Clapham, C., & Nicholson, J. (2014). The concise Oxford dictionary of mathematics (5th ed.).
New York, NY: Oxford University Press.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007). Applied behavior analysis (2nd ed.). Upper
Saddle River, NJ: Pearson/Merrill-Prentice Hall.
Corbett, B. A., & Abdullah, M. (2005). Video modeling: Why does it work for children with
autism? Journal of Early and Intensive Behavior Intervention, 2(1), 2–8.
doi:10.1037/h0100294
Council for Exceptional Children. (2014). Council for Exceptional Children standards for
evidence based practices in special education. Retrieved from
https://www.cec.sped.org/~/media/Files/Standards/Evidence%20based%20Practices%20
and%20Practice/EBP%20FINAL.pdf
Denney, D. R. (1975). The effects of exemplary and cognitive models and self-rehearsal on
children's interrogative strategies. Journal of Experimental Child Psychology, 19(3),
476–488. doi:10.1016/0022-0965(75)90077-6
Depaepe, F., Torbeyns, J., Vermeersch, N., Janssens, D., Janssen, R., Kelchtermans, G., …Van
Dooren, W. (2015). Teachers' content and pedagogical content knowledge on rational
numbers: A comparison of prospective elementary and lower secondary school teachers.
Teaching and Teacher Education, 47, 82–92. doi:10.1016/j.tate.2014.12.009
Dowrick, P. W. (1991). Practical guide to using video in the behavioral sciences. New York,
NY: John Wiley & Sons.
Edwards, S. E. (2015). Using video prompting to teach math skills to adolescent students with
specific learning disabilities (SLD) via iPad (Master’s thesis). Brigham Young
University, Provo, Utah.
Engelmann, S., & Carnine, D. (1982). Theory of instruction: Principles and applications. New
York, NY: Irvington.
Ennis, R. P., & Losinski, M. (2019). Interventions to improve fraction skills for students with
disabilities: A meta-analysis. Exceptional Children, 85(3), 367–386.
doi:10.1177/0014402918817504
Fede, J., Pierce, M., Matthews, W., & Wells, C. (2013). The effects of computer-assisted
schema-based instruction intervention on word problem solving skills of low performing
fifth grade students. Journal of Special Education Technology, 28, 9–21.
Federal policy for the protection of human subjects: Final Rule. Fed. Reg. (January 19, 2017);
82(12):7149-7274.
Fien, H., Doabler, C. T., Nelson, N., Kosty, D., Clarke, B., & Baker, S. (2016). An examination of
the promise of the NumberShire Level 1 gaming intervention for improving student
mathematics outcomes. Journal of Research on Educational Effectiveness, 9, 635–661.
doi:10.1080/19345747.2015.1119229
Fuchs, L. S., Malone, A. S., Schumacher, R. F., Namkung, J., Hamlett, C. L., Jordan, N. C., . . .
Changas, P. (2016). Supported self-explaining during fraction intervention. Journal of
Educational Psychology, 108(4), 493–508. doi:10.1037/edu0000073
Fuchs, L. S., Sterba, S. K., Fuchs, D., & Malone, A. S. (2016). Does evidence-based fractions
intervention address the needs of very low-performing students? Journal of Research on
Educational Effectiveness, 9(4), 662–677. doi:10.1080/19345747.2015.1123336
Garzón, J., Pavón, J., & Baldiris, S. (2019). Systematic review and meta-analysis of augmented
reality in educational settings. Virtual Reality. Advance online publication.
doi:10.1007/s10055-019-00379-9
Gast, D. L., & Ledford, J. R. (2018). Replication. In J. Ledford, & D. Gast (Eds.), Single case
research methodology: Applications in special education and behavioral sciences. (3rd
ed., pp. 365–391). New York, NY: Routledge.
Gast, D. L., Lloyd, B. P., & Ledford, J. R. (2018). Multiple baseline and multiple probe designs.
In J. Ledford, & D. Gast (Eds.), Single case research methodology: Applications in
special education and behavioral sciences. (3rd ed., pp. 239–281). New York, NY:
Routledge.
Gersten, R., Beckmann, S., Clarke, B., Foegen, A., Marsh, L., Star, J. R., & Witzel, B. (2009).
Assisting students struggling with mathematics: Response to Intervention (RtI) for
elementary and middle schools (NCEE 2009–4060). Washington, DC: National Center
for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S.
Department of Education. Retrieved from https://ies.ed.gov/ncee/wwc/PracticeGuides
Graham, S., Bollinger, A., Booth Olson, C., D’Aoust, C., MacArthur, C., McCutchen, D., &
Olinghouse, N. (2012; 2018). Teaching elementary school students to be effective
writers: A practice guide (NCEE 2012–4058). Washington, DC: National Center for
Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S.
Department of Education. Retrieved from https://ies.ed.gov/ncee/wwc/PracticeGuides
Goeke, J. L. (2009). Explicit instruction: A framework for meaningful direct teaching. Upper
Saddle River, NJ: Merrill.
Hall, T., & Vue, G. (2004). Explicit instruction. Wakefield, MA: National Center on Accessing
the General Curriculum. Retrieved from
http://aem.cast.org/about/publications/2002/ncac-explicit-
instruction.html#.XN7Ks45KhhE
Hammond, D. L., Whatley, A. D., Ayres, K. M., & Gast, D. L. (2010). Effectiveness of video
modeling to teach iPod use to students with moderate intellectual disabilities. Education
and Training in Autism and Developmental Disabilities, 45(4), 525–538.
Hansen, N., Jordan, N. C., & Rodrigues, J. (2017). Identifying learning difficulties with
fractions: A longitudinal study of student growth from third through sixth grade.
Contemporary Educational Psychology, 50, 45–59. doi:10.1016/j.cedpsych.2015.11.002
Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to
achievement. New York, NY: Routledge.
Hattie, J., & Clark, S. (2019). Visible learning: Feedback. New York, NY: Routledge.
Hine, J. F., & Wolery, M. (2006). Using point-of-view video modeling to teach play to
preschoolers with autism. Topics in Early Childhood Special Education, 26(2), 83–93.
Hitchcock, C. H., Dowrick, P. W., & Prater, M. A. (2003). Video self-modeling intervention in
school-based settings: A review. Remedial and Special Education, 24(1), 36–45.
doi:10.1177/074193250302400104
Hollingsworth, J., & Ybarra, S. (2009). Explicit direct instruction (EDI): The power of the well-
crafted, well-taught lesson. Thousand Oaks, CA: Corwin Press.
Hughes, C. A., Morris, J. R., Therrien, W. J., & Benson, S. K. (2017). Explicit instruction:
Historical and contemporary contexts. Learning Disabilities Research & Practice, 32(3),
140–148. doi:10.1111/ldrp.12142
Hughes, C. A., Riccomini, P. J., & Morris, J. R. (2019). Use explicit instruction. In, J.
McLeskey, L. Maheady, B. Billingsley, M. Brownell, & T. Lewis (Eds.), High-leverage
practices for inclusive classrooms (pp. 215–236). New York, NY: Routledge.
Hughes, E. M. (2019). Point of view video modeling to teach simplifying fractions to middle
school students with mathematical learning disabilities. Learning Disabilities: A
Contemporary Journal, 17(1), 41–57.
Hughes, E. M., & Yakubova, G. (2016). Developing handheld video intervention for students
with autism spectrum disorder. Intervention in School and Clinic, 52(2), 115–121.
doi:10.1177/1053451216636059
Hughes, E. M., & Yakubova, G. (2019). Addressing the mathematics gap for students with ASD:
An evidence-based systematic review of video-based mathematics interventions. Review
Journal of Autism and Developmental Disorders, 6(2), 147–158. doi:10.1007/s40489-
019-00160-3
Institute of Education Sciences (IES). (n.d.). About IES: Connecting research, policy and
practice. Retrieved from https://ies.ed.gov/aboutus/
Jitendra, A. K., Lein, A. E., Im, S., Alghamdi, A. A., Hefte, S. B., & Mouanoutoua, J. (2018).
Mathematical interventions for secondary students with learning disabilities and
mathematics difficulties: A meta-analysis. Exceptional Children, 84(2), 177–196.
doi:10.1177/0014402917737467
Joint Committee on Standards for Educational and Psychological Testing (JCSEPT; U.S.),
National Council on Measurement in Education, American Psychological Association, &
American Educational Research Association. (2014). Standards for educational and
psychological testing. Washington, DC: American Educational Research Association.
Kellems, R. O., Cacciatore, G., & Osborne, K. (2019). Using an augmented reality–based
teaching strategy to teach mathematics to secondary students with disabilities. Career
Development and Transition for Exceptional Individuals. Advance online publication.
doi:10.1177/2165143418822800
Kellems, R. O., & Edwards, S. (2016). Using video modeling and video prompting to teach core
academic content to students with learning disabilities. Preventing School Failure:
Alternative Education for Children and Youth, 60(3), 207–214.
doi:10.1080/1045988X.2015.1067875
Kellems, R. O., Frandsen, K., Hansen, B., Gabrielsen, T., Clarke, B., Simons, K., & Clements,
K. (2016). Teaching multi-step math skills to adults with disabilities via video prompting.
Research in Developmental Disabilities, 58, 31–44. doi:10.1016/j.ridd.2016.08.013
Kiru, E. W., Doabler, C. T., Sorrells, A. M., & Cooc, N. A. (2018). A synthesis of technology-
mediated mathematics interventions for students with or at risk for mathematics learning
disabilities. Journal of Special Education Technology, 33(2), 111-123.
doi:10.1177/0162643417745835
Kitchenham, B. (2004). Procedures for performing systematic reviews (NICTA Technical
Report No. 0400011T.1.). Retrieved from
http://www.inf.ufsc.br/~aldo.vw/kitchenham.pdf
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., &
Shadish, W. R. (2010). Single-case designs technical documentation. Retrieved from
What Works Clearinghouse website: http://ies.ed.gov/ncee/wwc/pdf/wwc_scd.pdf.
Kroesbergen, E. H., & Van Luit, J. E. H. (2003). Mathematics interventions for children with
special educational needs: A meta-analysis. Remedial and Special Education, 24(2), 97–
114. doi:10.1177/07419325030240020501
Lee, J. N. (2015). The effectiveness of point-of-view video modeling as a social skills
intervention for children with autism spectrum disorders. Review Journal of Autism and
Developmental Disorders, 2(4), 414–428. doi:10.1007/s40489-015-0061-x
Leh, J. M., & Jitendra, A. K. (2013). Effects of computer-mediated versus teacher-mediated
instruction on the mathematical word problem-solving performance of third-grade
students with mathematical difficulties. Learning Disability Quarterly, 36(2), 68–79.
doi:10.1177/0731948712461447
Lloyd, J. W., & Therrien, W. J. (2019). Preview. Exceptional Children, 85(2), 124–125.
doi:10.1177/0014402918811447
Martin, S., Diaz, G., Sancristobal, E., Gil, R., Castro, M., & Peire, J. (2011). New technology
trends in education: Seven years of forecasts and convergence. Computers & Education,
57(3), 1893–1906. doi:10.1016/j.compedu.2011.04.003
Martin, A. J., & Evans, P. (2018). Load reduction instruction: Exploring a framework that
assesses explicit instruction through to independent learning. Teaching and Teacher
Education, 73, 203–214. doi:10.1016/j.tate.2018.03.018
Mason, R. A., Ganz, J. B., Parker, R. I., Boles, M. B., Davis, H. S., & Rispoli, M. J. (2013).
Video-based modeling: Differential effects due to treatment protocol. Research in Autism
Spectrum Disorders, 7(1), 120–131. doi:10.1016/j.rasd.2012.08.003
McCoy, K., & Hermansen, E. (2007). Video modeling for individuals with autism: A review of
model types and effects. Education and Treatment of Children, 30(4), 183–213.
doi:10.1353/etc.2007.0029
McLeskey, J., Barringer, M.-D., Billingsley, B., Brownell, M., Jackson, D., Kennedy, M., …
Ziegler, D. (2017). High-leverage practices in special education. Arlington, VA: Council
for Exceptional Children & CEEDAR Center.
Misquitta, R. (2011). A review of the literature: Fraction instruction for struggling learners in
mathematics. Learning Disabilities Research & Practice, 26(2), 109–119.
doi:10.1111/j.1540-5826.2011.00330.x
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Preferred reporting items for
systematic reviews and meta-analyses: The PRISMA statement. PLoS Medicine, 6(6),
e1000097. doi:10.1371/journal.pmed.1000097
Moore, D. W., Anderson, A., Treccase, F., Deppeler, J., Furlonger, B., & Didden, H. C. M.
(2013). A video-based package to teach a child with autism spectrum disorder to write
her name. Journal of Developmental and Physical Disabilities, 25(5), 493–503.
doi:10.1007/s10882-012-9325-x
Morgan, R., & Salzberg, C. (1992). Effects of video-assisted training on employment-related
social skills of adults with severe mental retardation. Journal of Applied Behavior
Analysis, 25(2), 365–383. doi:10.1901/jaba.1992.25-365
Morris, J. R., Hughes, E. M., & Stocker, J. D. (2019). Effects of augmented reality and video
modeling to explicitly teach mathematics. Manuscript in progress.
National Commission for the Protection of Human Subjects of Biomedical and Behavioral
Research. (1979). The Belmont report: Ethical principles and guidelines for the
protection of human subjects of research. Retrieved from:
https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-
report/index.html
National Council of Teachers of Mathematics (NCTM). (2000). Principles and standards for
school mathematics: An overview. Retrieved from www.nctm.org/standards/
National Council of Teachers of Mathematics (NCTM). (2006). Curriculum focal points for
prekindergarten through grade 8 mathematics: A quest for coherence. Reston, VA:
Author.
National Governors Association Center for Best Practices & Council of Chief State School
Officers (NGA/CCSSO). (2010). Common Core State Standards for mathematics.
Washington, DC: Authors.
National Mathematics Advisory Panel (NMAP). (2008). Foundations for success: The final
report of the National Mathematics Advisory Panel. Washington, DC: U.S. Department
of Education.
National Research Council (NRC). (2001). Adding it up: Helping children learn mathematics.
Washington, DC: National Academy Press.
Ni, Y. (2001). Semantic domains of rational numbers and the acquisition of fraction equivalence.
Contemporary Educational Psychology, 26(3), 400–417. doi:10.1006/ceps.2000.1072
Ni, Y., & Zhou, Y. (2005). Teaching and learning fraction and rational numbers: The origins and
implications of whole number bias. Educational Psychologist, 40(1), 27–52.
doi:10.1207/s15326985ep4001_3
Pashler, H., McDaniel, M., Rohrer, D., & Bjork, R. (2008). Learning styles: Concepts and
evidence. Psychological Science in the Public Interest, 9(3), 105–119.
doi:10.1111/j.1539-6053.2009.01038.x
Prater, M. A. (2018). Teaching students with high-incidence disabilities: Strategies for diverse
classrooms. Los Angeles, CA: SAGE.
Prater, M.A., Carter, N., Hitchcock, C., & Dowrick, P. (2012). Video self-modeling to improve
academic performance: A literature review. Psychology in the Schools, 49(1), 71–81.
doi:10.1002/pits.20617.
Preacher, K. J., & Sterba, S. K. (2019). Aptitude-by-treatment interactions in research on
educational interventions. Exceptional Children, 85(2), 248–264.
doi:10.1177/0014402918802803
Presidential Executive Order (2006). Executive order 13398 of April 18, 2006: National
mathematics advisory panel. Federal Register, 71(77), April 21, 2006.
Radu, I. (2012, November). Why should my students use AR? A comparative review of the
educational impacts of augmented-reality. Paper presented at the IEEE International
Symposium on Mixed and Augmented Reality (ISMAR), Atlanta, Georgia, pp. 313–314.
doi:10.1109/ISMAR.2012.6402590
Radu, I. (2014). Augmented reality in education: A meta-review and cross-media analysis.
Personal and Ubiquitous Computing, 18(6), 1533–1543. doi:10.1007/s00779-013-0747-y
Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M. Wittrock (Ed.), Handbook of
research on teaching (3rd ed., pp. 376–391). New York, NY: Macmillan.
Sancho, K., Sidener, T. M., Reeve, S. A., & Sidener, D. W. (2010). Two variations of video
modeling interventions for teaching play skills to children with autism. Education and
Treatment of Children, 33(3), 421–442. doi:10.1353/etc.0.0097
Satsangi, R., Hammer, R., & Hogan, C. D. (2019). Video modeling and explicit instruction: A
comparison of strategies for teaching mathematics to students with learning disabilities.
Learning Disabilities Research & Practice, 34(1), 35–46. doi:10.1111/ldrp.12189
Saunders, A. F., Spooner, F., & Ley Davis, L. (2018). Using video prompting to teach
mathematical problem solving of real-world video-simulation problems. Remedial and
Special Education, 39(1), 53–64. doi:10.1177/0741932517717042
Scheflen, S. C., Freeman, S. F. N., & Paparella, T. (2012). Using video modeling to teach young
children with autism developmentally appropriate play and connected speech. Education
and Training in Autism and Developmental Disabilities, 47(3), 302–318.
Seo, Y., & Bryant, D. (2012). Multimedia CAI program for students with mathematics
difficulties. Remedial and Special Education, 33(4), 217-225.
doi:10.1177/0741932510383322
Shannon, P. (2018, January 24). Common Core Standards (U.S.). Oxford Research Encyclopedia
of Education. Retrieved from
http://oxfordre.com/education/view/10.1093/acrefore/9780190264093.001.0001/acrefore-
9780190264093-e-316.
Shin, M., & Bryant, D. P. (2015). Fraction interventions for students struggling to learn
mathematics: A research synthesis. Remedial and Special Education, 36(6), 374–387.
doi:10.1177/0741932515572910
Shrestha, A., Anderson, A., & Moore, D. W. (2013). Using point-of-view video modeling and
forward chaining to teach a functional self-help skill to a child with autism. Journal of
Behavioral Education, 22(2), 157–167.
doi:10.1007/s10864-012-9165-x
Siegler, R., Carpenter, T., Fennell, F., Geary, D., Lewis, J., Okamoto, Y., … Wray, J. (2010).
Developing effective fractions instruction for kindergarten through 8th grade: A practice
guide (NCEE 2010-4039). Washington, DC: National Center for Education Evaluation
and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Retrieved from https://ies.ed.gov/ncee/wwc/PracticeGuides
Siegler, R. S., Thompson, C., & Schneider, M. (2011). An integrated theory of whole number
and fractions development. Cognitive Psychology, 62, 273–296.
doi:10.1016/j.cogpsych.2011.03.001
Solis, M., Ciullo, S., Vaughn, S., Pyle, N., Hassaram, B., & Leroux, A. (2012). Reading
comprehension interventions for middle school students with learning disabilities: A
synthesis of 30 years of research. Journal of Learning Disabilities, 45(4), 327–340.
doi:10.1177/0022219411402691
Standards in Your State. (n.d.). Retrieved from: http://www.corestandards.org/standards-in-your-
state/
Stevens, E. A., Rodgers, M. A., & Powell, S. R. (2018). Mathematics interventions for upper
elementary and secondary students: A meta-analysis of research. Remedial and Special
Education, 39(6), 327–340. doi:10.1177/0741932517731887
Swanson, H. L., & Jerman, O. (2006). Math disabilities: A selective meta-analysis of the
literature. Review of Educational Research, 76(2), 249–274.
doi:10.3102/00346543076002249
Tereshko, L., MacDonald, R., & Ahearn, W. H. (2010). Strategies for teaching children with
autism to imitate response chains using video modeling. Research in Autism Spectrum
Disorders, 4(3), 479–489. doi:10.1016/j.rasd.2009.11.005
Tetreault, A. S., & Lerman, D. C. (2010). Teaching social skills to children with autism using
point-of-view video modeling. Education and Treatment of Children, 33(3), 395–419.
doi:10.1353/etc.0.0105
Tian, J., & Siegler, R. S. (2017). Fractions learning in children with mathematics difficulties.
Journal of Learning Disabilities, 50(6), 614–620. doi:10.1177/0022219416662032
Tian, J., & Siegler, R. S. (2018). Which type of rational numbers should students learn first?
Educational Psychology Review, 30(2), 351–372. doi:10.1007/s10648-017-9417-3
Vamvakoussi, X. (2015). The development of rational number knowledge: Old topic, new
insights. Learning and Instruction, 37, 50–55. doi:10.1016/j.learninstruc.2015.01.002
Vamvakoussi, X., Vosniadou, S., & Dooren, W. V. (2013). The framework theory approach
applied to mathematics. In S. Vosniadou (Ed.), International handbook of research on
conceptual change (2nd ed., pp. 305–321). New York, NY: Routledge.
Vosniadou, S. (2014). Examining cognitive development from a conceptual change point of
view: The framework theory approach. European Journal of Developmental Psychology,
11(6), 645-661. doi:10.1080/17405629.2014.921153
Vosniadou, S., & Skopeliti, I. (2014). Conceptual change from the framework theory side of the
fence. Science & Education, 23(7), 1427–1445. doi:10.1007/s11191-013-9640-3
Watkins, C. L. (1997). Project Follow Through: A case study of the contingencies influencing
instructional practices of the educational establishment. (Monograph). Concord, MA:
Cambridge Center for Behavioral Studies.
Watkins, C. L., & Slocum, T. A. (2003). The components of direct instruction. Journal of Direct
Instruction, 3(2), 4–32.
What Works Clearinghouse (WWC). (2017). Retrieved from
https://ies.ed.gov/ncee/wwc/Handbooks
Wikimedia Commons (July 6, 2011). Number System [Image]. Retrieved from
https://commons.wikimedia.org/wiki/File:Number-systems.svg
Wolery, M. (2011). Intervention research: The importance of fidelity measurement. Topics in
Early Childhood Special Education, 31(3), 155–157. doi:10.1177/0271121411408621
Yakubova, G., Hughes, E. M., & Hornberger, E. (2015). Video-based intervention in teaching
fraction problem-solving to students with autism spectrum disorder. Journal of Autism
and Developmental Disorders, 45(9), 2865–2875. doi:10.1007/s10803-015-2449-y
Yakubova, G., Hughes, E. M., & Shinaberry, M. (2016). Learning with technology: Video
modeling with concrete–representational–abstract sequencing for students with autism
spectrum disorder. Journal of Autism and Developmental Disorders, 46(7), 2349–2362.
doi:10.1007/s10803-016-2768-7
Yakubova, G., & Taber-Doughty, T. (2017). Improving problem-solving performance of
students with autism spectrum disorders. Focus on Autism and Other Developmental
Disabilities, 32(1), 3–17. doi:10.1177/1088357615587506
Yakubova, G., & Zeleke, W. A. (2016). A problem-solving intervention using iPads to improve
transition-related task performance of students with autism spectrum disorder. Journal of
Special Education Technology, 31(2), 77–86. doi:10.1177/0162643416650023
APPENDIX B
Results from Participants’ AimswebPlus Benchmark Assessments
Results from Musette’s AimswebPlus benchmark assessment
Assessment Area Score
Geometry 1 of 4
Classifies pairs of lines as either parallel or perpendicular. 1
Identifies specific types of triangles. 0
Identifies polygons with specific characteristics. 0
Identifies specific types of quadrilaterals. 0
Measurement & Data 1 of 6
Determines the missing dimension of an object when given the area and side length. 0
Compares rectangles using area. 1
Determines the area of rectangles using the formula: A = l × w. 0
Solves word problems involving mass and conversion of standard measurement units. 0
Solves word problems involving mass and conversion of standard measurement units. 0
Determines the perimeter of an area when given side lengths. 0
Number & Operations: Base 10 0 of 4
Compares numbers by rounding to the nearest 10. 0
Identifies numbers in expanded form. 0
Multiplies multi-digit whole numbers. 0
Solves multi-digit whole number addition problems. 0
Number & Operations: Fractions 0 of 5
Solves subtraction word problems involving fractions with common denominators. 0
Identifies equivalent fractions. 0
Identifies fractions on a number line. 0
Compares the magnitude of fractions by writing fractions with common denominators. 0
Operations & Algebraic Thinking 3 of 10
Solves multiplication word problems. 0
Identifies equations that represent number sentences. 0
Determines equations that can be used to solve word problems. 1
Determines missing numbers in a pattern to identify true statements about the pattern. 1
Solves multi-step word problems involving whole numbers. 0
Solves word problems by following a given pattern. 0
Determines if a number is prime or composite by finding the factor pairs. 1
Note: Adapted from Aimsweb Plus benchmark assessment results summary, v21.3 Copyright ©
2018. All Rights Reserved. Patent No. 7,311,524
Results from Jaren’s AimswebPlus benchmark assessment
Assessment Area Score
Geometry 1 of 4
Classifies pairs of lines as either parallel or perpendicular. 0
Identifies specific types of triangles. 0
Identifies polygons with specific characteristics. 0
Identifies specific types of quadrilaterals. 1
Measurement & Data 2 of 6
Determines the missing dimension of an object when given the area and side length. 0
Compares rectangles using area. 0
Determines the area of rectangles using the formula: A = l × w. 0
Solves word problems involving mass and conversion of standard measurement units. 0
Solves word problems involving mass and conversion of standard measurement units. 1
Determines the perimeter of an area when given side lengths. 1
Number & Operations: Base 10 0 of 4
Compares numbers by rounding to the nearest 10. 0
Identifies numbers in expanded form. 0
Multiplies multi-digit whole numbers. 0
Solves multi-digit whole number addition problems. 0
Number & Operations: Fractions 0 of 5
Solves subtraction word problems involving fractions with common denominators. 0
Identifies equivalent fractions. 0
Identifies fractions on a number line. 0
Compares the magnitude of fractions by writing fractions with common denominators. 0
Operations & Algebraic Thinking 4 of 10
Solves multiplication word problems. 0
Identifies equations that represent number sentences. 1
Determines equations that can be used to solve word problems. 1
Determines missing numbers in a pattern to identify true statements about the pattern. 1
Solves multi-step word problems involving whole numbers. 1
Solves word problems by following a given pattern. 0
Determines if a number is prime or composite by finding the factor pairs. 0
Note: Adapted from Aimsweb Plus benchmark assessment results summary, v21.3 Copyright ©
2018. All Rights Reserved. Patent No. 7,311,524
Results from Alaric’s AimswebPlus benchmark assessment
Assessment Area Score
Geometry 2 of 4
Classifies pairs of lines as either parallel or perpendicular. 0
Identifies specific types of triangles. 0
Identifies polygons with specific characteristics. 1
Identifies specific types of quadrilaterals. 1
Measurement & Data 1 of 6
Determines the missing dimension of an object when given the area and side length. 0
Compares rectangles using area. 1
Determines the area of rectangles using the formula: A = l × w. 0
Solves word problems involving mass and conversion of standard measurement units. 0
Solves word problems involving mass and conversion of standard measurement units. 0
Determines the perimeter of an area when given side lengths. 0
Number & Operations: Base 10 2 of 4
Compares numbers by rounding to the nearest 10. 0
Identifies numbers in expanded form. 1
Multiplies multi-digit whole numbers. 0
Solves multi-digit whole number addition problems. 1
Number & Operations: Fractions 0 of 5
Solves subtraction word problems involving fractions with common denominators. 0
Identifies equivalent fractions. 0
Identifies fractions on a number line. 0
Compares the magnitude of fractions by writing fractions with common denominators. 0
Operations & Algebraic Thinking 2 of 10
Solves multiplication word problems. 0
Identifies equations that represent number sentences. 1
Determines equations that can be used to solve word problems. 0
Determines missing numbers in a pattern to identify true statements about the pattern. 0
Solves multi-step word problems involving whole numbers. 0
Solves word problems by following a given pattern. 0
Determines if a number is prime or composite by finding the factor pairs. 1
Note: Adapted from Aimsweb Plus benchmark assessment results summary, v21.3 Copyright ©
2018. All Rights Reserved. Patent No. 7,311,524
APPENDIX C
Social Validity Questionnaires
Student Social Validity Questionnaire
Name______________________ Date___________________
Instructions: Read each statement carefully. Circle the number below that best
describes your experiences and feelings.
1 = Strongly disagree  2 = Disagree  3 = Somewhat disagree  4 = Somewhat agree  5 = Agree  6 = Strongly agree

Watching videos on an iPad helped me learn math. 1 2 3 4 5 6
I enjoyed learning math from an iPad. 1 2 3 4 5 6
I learn math easily on an iPad. 1 2 3 4 5 6
I could easily hear the instructor in the video. 1 2 3 4 5 6
It was simple for me to practice with the video. 1 2 3 4 5 6
I thought the length of the instructional videos was appropriate. 1 2 3 4 5 6
My math skills improved. 1 2 3 4 5 6
I would like to learn other math skills in the same way. 1 2 3 4 5 6
I like math. 1 2 3 4 5 6
What did you like about learning math from the videos on an iPad?
What did you dislike about learning math from the videos on an iPad?
What would you change about how you learned math on an iPad?
Teacher Social Validity Questionnaire
Directions: The purpose of this questionnaire is to obtain information that will aid in the
selection of future classroom interventions and programs. These programs will be used by
teachers of children with identified needs. Please circle the number which best describes your
agreement or disagreement with each statement.
1 = Strongly disagree  2 = Disagree  3 = Slightly disagree  4 = Slightly agree  5 = Agree  6 = Strongly agree

1. This was an acceptable intervention for the students’ needs. 1 2 3 4 5 6
2. Most teachers would find this intervention appropriate for students with similar needs. 1 2 3 4 5 6
3. This intervention proved effective in supporting the students’ needs. 1 2 3 4 5 6
4. I would recommend the use of this intervention to other teachers. 1 2 3 4 5 6
5. The students’ needs were severe enough to warrant use of this intervention. 1 2 3 4 5 6
6. I would be willing to use this intervention in the classroom setting. 1 2 3 4 5 6
7. This intervention did not result in negative side effects for the students. 1 2 3 4 5 6
8. This intervention would be appropriate for a variety of students. 1 2 3 4 5 6
9. This intervention was reasonable for the needs of the students. 1 2 3 4 5 6
10. I liked the procedures used in this intervention. 1 2 3 4 5 6
11. Overall, this intervention was beneficial for the students. 1 2 3 4 5 6
Source: Adapted from Witt, J. C., & Elliott, S. N. (1985). Acceptability of classroom intervention strategies. In T.R.
Kratochwill, (Ed.), Advances in school psychology, (Vol. 4, pp. 251–288). Mahwah, NJ: Erlbaum. Reproduced
under Fair Use of copyrighted materials for education, scholarship, and research. 17 U.S.C. § 107
Please answer the following questions.
1. What were some of the program’s strengths?
2. What were some of the program’s weaknesses?
3. Is there anything you would want to change about the intervention?
4. Are there any additional comments you have?
APPENDIX D
Timeline and Schedules for the Intervention
Study Timeline
Phase of Research Date
Baseline: December 3-7, 2018
Skill 1: December 10-14, December 17-19, 2018
Skill 2: December 17-19, 2018; January 7-11, 2019
Skill 3: January 7-18, 2019
Maintenance: One to two weeks after the intervention concluded for each skill.
Generalization: Conducted in each phase: baseline, intervention, and maintenance.
Data Analysis: Ongoing
Outline of Intervention Schedule

Analyze Assessment Data (1-2 days)
• AIMSweb mathematics data and other assessment data.

Baseline (20 minutes the first day, 5-10 minutes thereafter)
• Administer baseline assessment for all skills the first day. Continue collecting baseline for the first skill and probe the other skills.

Generalization (20 minutes during baseline; 20 minutes post-intervention)
• Application of mathematics word problems for each of the identified skills (20 minutes x2).

Intervention (15-20 minutes at the beginning of mathematics class)
• Model skill (2-5 minutes)
• Guided practice (3-6 minutes)
• Check (1-3 minutes)
• Independent practice (4-7 minutes)

Maintenance (5-10 minutes per skill)
• Assess each skill at one- to two-week intervals post-intervention.
Anticipated Schedule of Intervention Implementation
[Table: anticipated session-by-session implementation schedule for Participants 1-5, with columns for Skill 1, Skill 2, and Skill 3 under each participant across 36 sessions. Cells contain "0" for baseline sessions and "5" for the other scheduled probe sessions; the color highlighting that distinguished intervention sessions (red) from projected maintenance probes (dark blue) did not survive text extraction.]
Note: This schedule was created to minimize the time the intervention would take in the
classroom each day. It illustrates an anticipated schedule of implementation for the proposed
single-case research design for the study, multiple probe across skills. The “0’s” are baseline
sessions. Other than the first baseline session, all other sessions will have no more than two skills
in baseline or intervention at a time, and no two skills in intervention at the same time. Cells
highlighted in red are anticipated times that participants will be in intervention, cells highlighted
in dark blue are projected maintenance probes. Results will vary by participant.
APPENDIX E
Procedural Fidelity Checklists
Date: _____________________ Reviewer: ________________________
Baseline Fidelity Checklist
Did the interventionist do the following:
Pass out baseline worksheets to each participant?
Assure that each student had:
A sharpened pencil
A calculator
Did the interventionist not do the following:
Help the students solve the problems in any way?
Prompt the students’ answers on the problems in any way?
Baseline Procedural Fidelity: ___________/6 x100 =______________%
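Each checklist converts to a percentage by dividing the number of steps completed by the number of steps observed and multiplying by 100. A minimal Python sketch of that calculation (the function name and example values are illustrative, not part of the study materials):

```python
# Procedural fidelity = steps completed / steps observed x 100.
def procedural_fidelity(steps_completed: int, steps_total: int) -> float:
    """Return the fidelity percentage, rounded to one decimal place."""
    if steps_total <= 0:
        raise ValueError("steps_total must be positive")
    return round(steps_completed / steps_total * 100, 1)

# Example: 5 of the 6 baseline checklist steps observed as completed.
print(procedural_fidelity(5, 6))  # 83.3
```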
Intervention Fidelity Checklist
Did the interventionist do the following:
Pass out an intervention worksheet to each participant?
Assure that each student had:
A sharpened pencil?
A calculator?
An iPad?
Headphones?
Read the instructions on the first page of the intervention to the participants?
At the “check” stage
Evaluate the accuracy of the first “check” problem and provide positive or corrective
feedback?
Have the students complete two additional check problems before they continued to the
independent practice?
Additional Check Problem #1 (if necessary)
Additional Check Problem #2 (if necessary)
Check both additional check problems for accuracy and provide positive or corrective
feedback (if necessary)?
Did the interventionist not do the following:
Help the students solve the independent practice problems in any way?
Prompt the students’ answers on the independent practice problems in any way?
Intervention Procedural Fidelity: _________/13 x100 =_____________%
Participant Intervention Fidelity Checklist
Did the participant:
Work through the worksheet packet from front to back?
Scan the images and watch the videos on every page where an image was displayed?
Participant Intervention Procedural Fidelity: _______/2 x100 =______%
Signature: ______________________________________________
Fidelity Checklist of Explicit Instruction Components Contained in Videos
Skill: _________________________________________________
Introduction/Model Video
Did the instructor do the following:
Provide conceptual information?
State the purpose/goal of the lesson?
Discuss the relevance of the target skill?
Review relevant prerequisite skills?
Model the skill with 3 to 4 examples?
Fade support across the examples?
Provide cognitive modeling when modeling the skill by vocalizing thought processes or
internal dialogue (e.g., “First, I….”, “I ask myself…”, etc.)
Guided Practice Video
Did the instructor do the following:
Provide a transitional cue (e.g., Now let’s do some together)?
Tell students how to do skill?
Ask students how to do the skill?
Remind students how to do the skill?
Instructional Fidelity: _________/11 x100 =________________%
Printed Name: __________________________________
Date: __________________________________________
Signature: _____________________________________________
Problem Difficulty Fidelity Checklist
Are the problem difficulty levels similar across baseline, intervention, and maintenance
conditions?
Yes
No
I reviewed __________________% of the baseline problems, _____________% of the
intervention problems, and _____________% of the maintenance problems and found no
significant difference in the level of problem difficulty.
Printed Name: __________________________________
Date: __________________________________________
Signature: _____________________________________________
APPENDIX F
Example Explicit Instruction Lesson Template
Skill/Rule: ______________________________________________________________
Introduction
Purpose/Goal of the Lesson:
“In this video we are going to learn how to:”___________________________________
________________________________________________________________________
Discuss the Relevance of the Skill
“It is important to know this skill because…”
Prerequisites
“Just as a reminder” (Review prerequisite skills)
•
•
•
Conceptual Component (provide conceptual information about the skill being taught).
Model
“First, I’m going to show you how to solve this type of problem.” (Show and Tell)
Problem 1
1. “I look at each of these…. In doing so I ask myself, … (or …?).”
“If not, our strategy doesn’t apply.”
“If so, we move to the next step”
2. “Step two is to …”
3. “Step three is to …”
Problem 2
1. “I first look at …. In doing so I ask myself, … (…?).”
“They are, so I move on to the next step.”
2. “Step two, …”
3. “Step three, …”
Problem 3
1. “I first ask myself, …?”
“They are.”
2. “Next, …”
3. “Last, …”
Problem 4
1. …
2. …
3. …
Prompt
“Now let’s do some together, pick up your pencil and look at the first problem in problem
set ‘A’.”
Tell
“First I’m going to ‘tell’ you how to do it.”
1. “We first look at … In doing so we ask ourselves, …?”
“Each of these … so we move on to the next step.”
2. “Step two, …”
3. “Step three, …”
Ask
“Now I’m going to “ask” you how to solve this equation.”
1. “When we are presented with ________________ the first thing we ask ourselves is:”
___________ ? …[wait] … “if you said: _______ You are correct.”
“We look and see that __________…[wait] … if you said:__________ you are correct!
Let’s move on to the next step.”
2. “Step two, we ________ …[wait] … _____________”
3. “Step three, we __________ …[wait] … _____________”
Remind
“Go ahead and do problem #3, and remember the ____________________”
(strategy/steps/rule etc.)
Check
“At the conclusion of the video complete the ‘check’ section on your own and raise your
hand when finished.” (Instructor will then check for understanding before the independent
practice)
Close/Review
“Just to review, in this video we learned how to __________________________” (briefly
review the skill taught).
APPENDIX G
Example Intervention Packet for Skill #1: Adding and Subtracting Fractions with
Common Denominators
Note: Example first page of the intervention packet for Skill 1: Adding and subtracting fractions
with common denominators. The instructions on this page were read aloud at the beginning of
each intervention session.
Note: Example second page of the intervention packet for Skill 1: Adding and subtracting
fractions with common denominators. Scanning the image on this page triggered the augmented
reality application on the iPad to play the introduction and model video.
Note: The third page of the intervention packet for Skill 1: Adding and subtracting fractions with
common denominators. Scanning the image on this page triggered the augmented reality
application on the iPad to play the guided practice video.
150
Note: Example fourth page of the intervention packet (the check page) for Skill 1: Adding and
subtracting fractions with common denominators. Participants completed this page without
prompting. Answers were reviewed by the interventionist and feedback was provided before
completing independent work.
APPENDIX H
Example Materials for Skill #1: Adding and Subtracting Fractions with Common
Denominators
Note: Example baseline probe for Skill 1: Adding and subtracting fractions with common
denominators.
Name______________________ Date_____________
BL 4.4.mrd
Find the sum or difference.
1. 8/10 + 5/10 =
2. 3/6 + 3/6 =
3. 1/2 + 1/2 =
4. 2/6 - 1/6 =
5. 3/5 - 1/5 =
Note: Example independent practice worksheet for Skill 1: Adding and subtracting fractions with
common denominators. Participants completed this page without prompting.
Skill 1.4.mrd
Find the sum or difference.
1) 8/12 + 11/12 = ____
2) 35/100 + 71/100 = ____
3) 3/5 + 1/5 = ____
4) 3/5 - 2/5 = ____
5) 4/6 - 2/6 = ____
Note: Example answer key for baseline probe for Skill 1: Adding and subtracting fractions with
common denominators.
Name______________________ Date_____________
BL 4.4.mrd
Find the sum or difference.
1. 8/10 + 5/10 = 13/10
2. 3/6 + 3/6 = 1/1
3. 1/2 + 1/2 = 1/1
4. 2/6 - 1/6 = 1/6
5. 3/5 - 1/5 = 2/5
Note: Answer key for the independent practice page completed for the dependent variable for
Skill 1: Adding and subtracting fractions with common denominators.
Skill 1.4.mrd
Find the sum or difference.
1) 8/12 + 11/12 = 19/12
2) 35/100 + 71/100 = 53/50
3) 3/5 + 1/5 = 4/5
4) 3/5 - 2/5 = 1/5
5) 4/6 - 2/6 = 1/3
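The simplified answer-key values above can be double-checked with a short Python sketch (not part of the original materials) using the standard-library fractions module, which reduces results to lowest terms automatically:

```python
from fractions import Fraction

# Skill 1 independent-practice problems paired with the answer-key results.
# Fraction arithmetic reduces automatically, matching the simplified key.
problems = [
    (Fraction(8, 12) + Fraction(11, 12), Fraction(19, 12)),
    (Fraction(35, 100) + Fraction(71, 100), Fraction(53, 50)),
    (Fraction(3, 5) + Fraction(1, 5), Fraction(4, 5)),
    (Fraction(3, 5) - Fraction(2, 5), Fraction(1, 5)),
    (Fraction(4, 6) - Fraction(2, 6), Fraction(1, 3)),
]

for computed, key in problems:
    assert computed == key, f"{computed} != {key}"
print("All Skill 1 answer-key values check out.")
```

Note that, for example, 35/100 + 71/100 = 106/100, which reduces to the keyed answer 53/50.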
Note: Maintenance page completed for the dependent variable for Skill 1: Adding and
subtracting fractions with common denominators. Participants completed this page without any
type of prompting.
Maint. 1.1.mrd
Find the sum or difference.
1) 2/6 + 5/6 = ____
2) 1/2 + 1/2 = ____
3) 7/10 + 5/10 = ____
4) 11/12 - 9/12 = ____
5) 6/8 - 5/8 = ____
Note: Generalization page completed for the dependent variable for Skill 1: Adding and
subtracting fractions with common denominators. Participants completed this page without any
type of prompting.
APPENDIX I
Example Materials for Skill #2: Completing Equivalent Fractions
Note: Number line used for instructional lesson for Skill 2: Completing equivalent fractions.
Note: Image used during instruction for Skill 2: Completing equivalent fractions.
Note: Example baseline probe for Skill 2: Completing equivalent fractions.
Name______________________ Date_____________
BL 3.2.mrd
Complete the equivalent fractions.
1. 3/6 = 18/____
2. 6/12 = 36/____
3. 4/5 = ____/50
4. 3/20 = 30/____
5. 4/5 = ____/10
Note: Example independent practice worksheet for Skill 2: Completing equivalent fractions.
Participants completed this page without prompting.
Name______________________ Date_____________
Int. 2.0.1.mrd
Complete the equivalent fractions.
1. 1/5 = ____/35
2. 4/8 = ____/32
3. 3/6 = 18/____
4. 1/4 = ____/20
5. 6/10 = 60/____
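Each equivalent-fraction item asks the student to scale one fraction to a given numerator or denominator. The underlying arithmetic can be sketched in Python (an illustration, not part of the original materials; the helper names and example values are assumed):

```python
from fractions import Fraction

def missing_numerator(a, b, d):
    """Solve a/b = n/d for n, assuming d is a whole-number multiple of b."""
    n = a * d // b
    assert Fraction(a, b) == Fraction(n, d)  # sanity-check the equivalence
    return n

def missing_denominator(a, b, c):
    """Solve a/b = c/d for d, assuming c is a whole-number multiple of a."""
    d = c * b // a
    assert Fraction(a, b) == Fraction(c, d)  # sanity-check the equivalence
    return d

print(missing_numerator(1, 5, 35))    # 1/5 = ?/35 -> 7
print(missing_denominator(3, 6, 18))  # 3/6 = 18/? -> 36
```

This mirrors the taught idea of multiplying numerator and denominator by the same factor to produce an equivalent fraction.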
Note: Maintenance page for Skill 2: Completing equivalent fractions. Participants completed this
page without any type of prompting.
Name______________________ Date_____________
Maint. 2.1.mrd
Complete the equivalent fractions.
1. 8/60 = 80/____
2. 4/5 = ____/10
3. 4/20 = 40/____
4. 3/5 = ____/40
5. 6/10 = 12/____
Note: Generalization page completed for the dependent variable for Skill 2: Completing
equivalent fractions. Participants completed this page without any type of prompting.
APPENDIX J
Example Materials for Skill #3: Converting Fractions to Decimal Notation and
Converting Decimal Notation to Fractions
Note: Example baseline probe for Skill 3: Converting fractions to decimal notation and
converting decimal notation to fractions.
Name______________________ Date_____________
BL 6.1.mrd
Convert fractions to decimal notation and convert decimals to a fraction.
1. 3/10 =
2. 2/10 =
3. 0.29 =
4. 0.5 =
5. 0.82 =
Note: Example independent practice worksheet for Skill 3: Converting fractions to decimal
notation and converting decimal notation to fractions. Participants completed this page without
prompting.
Name______________________ Date_____________
Int. 3.0.1.mrd
Convert fractions to decimal notation and convert decimals to a fraction.
1. 3/10 =
2. 0.54 =
3. 0.1 =
4. 8/100 =
5. 2/10 =
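Because every fraction on these pages has a power-of-ten denominator, the two conversion directions are mechanical. A minimal Python sketch (illustrative only; the function names are assumptions, not from the original materials):

```python
from decimal import Decimal

def fraction_to_decimal(numerator, denominator):
    """Convert a fraction with a power-of-ten denominator to decimal notation."""
    return Decimal(numerator) / Decimal(denominator)

def decimal_to_fraction(text):
    """Read decimal notation (e.g. '0.54') off as tenths/hundredths: (54, 100)."""
    digits = text.split(".")[1]
    return (int(digits), 10 ** len(digits))

print(fraction_to_decimal(3, 10))   # 3/10 -> 0.3
print(decimal_to_fraction("0.54"))  # 0.54 -> (54, 100)
```

The second helper deliberately leaves the fraction unsimplified, matching the place-value reading (0.54 is "fifty-four hundredths") emphasized in this skill.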
Note: Example maintenance page for Skill 3: Converting fractions to decimal notation and
converting decimal notation to fractions. Participants completed this page without any type of
prompting.
Name______________________ Date_____________
Maint. 1.1.mrd
Convert fractions to decimal notation and convert decimals to a fraction.
1. 32/100 =
2. 0.34 =
3. 2/10 =
4. 0.48 =
5. 0.7 =
Note: Generalization page completed for the dependent variable for Skill 3: Converting fractions
to decimal notation and converting decimal notation to fractions. Participants completed this
page without any type of prompting.
APPENDIX K
Signed Problem Difficulty Checklist
Note: Problem difficulty fidelity checklist confirming that the difficulty of the problems did not
vary substantially between phases.
CURRICULUM VITA
JARED R. MORRIS
Education
PhD, Special Education, The Pennsylvania State University – University Park, August 2019
Graduate Certificate, Applied Behavior Analysis, The Pennsylvania State University – University Park, August 2018
MEd, Special Education, University of Utah, 2013
BA, English, Brigham Young University – Provo, 2010
AS, General Academics, Utah Valley State College, 2006
University Teaching Experience
Teaching Assistant/Guest Lecturer - SPLED 401: Motivating Exceptional Learners (3 credits).
Group and individual techniques to promote student task engagement and prosocial
behavior. Responsibilities included teaching multiple sections, administering exams and
quizzes, and grading. Spring 2018; Fall 2018.
Instructor - SPLED 412: Instruction for Students with Mild Disabilities (4 credits). Appropriate
teaching strategies, curriculum sequences, and materials selection and evaluation for
children with mild special needs. Spring, 2017.
Publications
Hughes, C. A., Riccomini, P. J., & Morris, J. R. (2019). Use explicit instruction. In J.
McLeskey, L. Maheady, B. Billingsley, M. Brownell, & T. Lewis (Eds.), High-leverage
practices for inclusive classrooms (pp. 215–236). New York, NY: Routledge.
Hughes, C. A., Morris, J. R., Therrien, W. J., & Benson, S. K. (2017). Explicit instruction:
Historical and contemporary contexts. Learning Disabilities Research & Practice, 32(3),
140–148. doi:10.1111/ldrp.12142
Therrien, W. J., Benson, S. K., Hughes, C. A., & Morris, J. R. (2017). Explicit instruction and
Next Generation Science Standards aligned classrooms: A fit or a split? Learning
Disabilities Research & Practice, 32(3), 149–154. doi:10.1111/ldrp.12137