Annual Report of Faculty Activities (January 2004 – December 2004)
N. Sanjay Rebello
APPENDIX E: REFEREED PUBLICATIONS
Pre-prints and/or manuscripts of the following refereed publications are attached
1. “Dynamic transfer: A perspective from physics education research,” N. Sanjay Rebello, Dean A. Zollman, with contributions from Alicia R. Allbaugh, Paula V. Engelhardt, Kara E. Gray, Zdeslav Hrepic and Salomon F. Itza-Ortiz in Transfer of Learning from a Modern Multidisciplinary Perspective, Ed. Jose P. Mestre, Information Age Publishing, in series Current perspectives on cognition, learning and instruction, Series Editor: James M. Royer, University of Massachusetts, Amherst (in press).
2. “A framework for the dynamics of student reasoning in an interview,” Salomon F. Itza-Ortiz, Alicia R. Allbaugh, Paula V. Engelhardt, Kara E. Gray, Zdeslav Hrepic, N. Sanjay Rebello and Dean A. Zollman, Proceedings of the Annual Meeting of the National Association for Research in Science Teaching, April 1-3, 2004, Vancouver BC.
3. “A framework for student reasoning in an interview,” Paula V. Engelhardt, Kara Gray, Zdeslav Hrepic, Salomon F. Itza-Ortiz, Alicia R. Allbaugh, N. Sanjay Rebello and Dean A. Zollman, Invited Paper, Proceedings of the 2003 Physics Education Research Conference, August 2-6, 2003, Madison, WI.
4. “Implications of a framework for student reasoning in an interview,” Kara Gray, Zdeslav Hrepic, Salomon F. Itza-Ortiz, Alicia R. Allbaugh, Paula V. Engelhardt, N. Sanjay Rebello and Dean A. Zollman, Invited Paper, Proceedings of the 2003 Physics Education Research Conference, August 2-6, 2003, Madison, WI.
5. “Student goals and expectations in a large-enrollment physical science class,” N. Sanjay Rebello, Proceedings of the 2003 Physics Education Research Conference, August 2-6, 2003, Madison, WI.
6. “The teaching experiment – what it is and what it isn’t,” Paula V. Engelhardt, Edgar G. Corpuz, Darryl J. Ozimek and N. Sanjay Rebello, Proceedings of the 2003 Physics Education Research Conference, August 2-6, 2003, Madison, WI.
7. “Student understanding and perceptions of the content of a lecture,” Zdeslav Hrepic, Dean A. Zollman and N. Sanjay Rebello, Proceedings of the 2003 Physics Education Research Conference, August 2-6, 2003, Madison, WI.
8. “How many students does it take before we see the light?” Paula V. Engelhardt, Kara E. Gray, and N. Sanjay Rebello, The Physics Teacher, Vol. 42, April 2004, pp. 216-221.
9. “Student explorations of quantum effects in LEDs and luminescent devices,” Lawrence T. Escalada, N. Sanjay Rebello, and Dean A. Zollman, The Physics Teacher, Vol. 42, March 2004, pp.173-179.
Dynamic transfer 1
DYNAMIC TRANSFER:
A PERSPECTIVE FROM PHYSICS EDUCATION RESEARCH
Running head: Dynamic transfer
N. Sanjay Rebello, Physics Department, 116 Cardwell Hall, Kansas State University, Manhattan, KS 66506.
Phone: (785) 532 1539, Fax: (785) 532 6806, Email: [email protected]
Dean A. Zollman, Physics Department, 116 Cardwell Hall, Kansas State University, Manhattan, KS 66506.
Phone: (785) 532 1619, Fax: (785) 532 6806, Email: [email protected]
Alicia R. Allbaugh, Physics Department, 84 Lomb Dr., Rochester Inst. of Tech., Rochester, NY 14623.
Phone: (585) 475-5302, Fax: (585) 475 4153, Email: [email protected]
Paula V. Engelhardt, Physics Department, 116 Cardwell Hall, Kansas State University, Manhattan, KS 66506.
Phone: (785) 532 1612, Fax: (785) 532 6806, Email: [email protected]
Kara E. Gray, Physics Department, 116 Cardwell Hall, Kansas State University, Manhattan, KS 66506.
Phone: (785) 532 1612, Fax: (785) 532 6806, Email: [email protected]
Zdeslav Hrepic, Physics Department, 116 Cardwell Hall, Kansas State University, Manhattan, KS 66506.
Phone: (785) 532 1612, Fax: (785) 532 6806, Email: [email protected]
Salomon F. Itza-Ortiz, Physics Department, 720 Haber Ave., San Diego State University, Calexico, CA 92231.
Phone: (760) 768 5606, Fax: (785) 768 5631, Email: [email protected]
ABSTRACT
We contrast previous views of transfer of learning with emerging perspectives in the
field. Based on the latter, we have adapted our previously developed analytical framework to
characterize transfer as it occurs dynamically in an interview. Our adapted framework is also
consistent with a theoretical framework proposed by Redish (in press) that addresses several
cognitive and epistemological issues. In light of Redish’s framework and contemporary transfer
models, we have demonstrated how our analytical framework can help identify and characterize
transfer as it occurs in an interview. We describe instances in which students transfer their
learning spontaneously in an interview as well as those in which transfer is promoted by
scaffolding provided by the interviewer. In connection with the latter, we describe another
research methodology, the teaching interview, which can allow us to investigate dynamic
scaffolded transfer.
OVERVIEW
Transfer of learning is often defined (e.g. Reed, 1993; Singley & Anderson, 1989) as
applying what one has learned in one situation to a different situation. Several researchers (e.g.
McKeough, Lupart, & Marini, 1995) have described transfer of learning as the ultimate goal of
education. After all, what use is knowledge if it cannot be applied elsewhere? Science education
researchers and cognitive psychologists have spent significant time and effort in examining
transfer of learning in various situations, identifying the factors that influence it and suggesting
strategies and interventions to promote it.
A comprehensive review of transfer literature is beyond the scope of this chapter. Rather,
we will broadly describe some of the changing trends in researchers’ views about transfer,
focusing specifically on some contemporary models of transfer (Bransford & Schwartz, 1999;
Greeno, Moore, & Smith, 1993; Lobato, 1996, 2003). These models adopt perspectives that are
quite different from previous views of transfer (e.g. Singley & Anderson, 1989). Next, we
present a framework that we have used to analyze interview data and discuss how it aligns with
contemporary models of transfer, and when used in conjunction with these models, can help
identify how students transfer their knowledge and learning dynamically during an interview.
Finally, based on our framework and contemporary transfer models, we propose a research
methodology – the teaching interview – that has hitherto not been extensively used in physics
education research and discuss its promise for researching and promoting dynamic transfer.
CHANGING TRENDS IN TRANSFER RESEARCH
Most of the research on transfer of learning has focused on whether students who had
learned a problem solving strategy in a given context were able to apply this strategy to other
contexts (e.g. Adams et al., 1988; Bassok, 1990; Brown & Kane, 1988; Chen &
Daehler, 1989; Lockhart, Lamon, & Gick, 1988; Nisbett, Fong, Lehmann, & Cheng, 1987;
Novick & Nussbaum, 1981; Perfetto, Bransford, & Franks, 1983; Reed, Ernst, & Banerji, 1974;
Thorndike & Woodworth, 1901; Wertheimer, 1959). A typical example is the “jealous spouses
vs. cannibal-missionary” problem (Reed et al., 1974) or the “fortress vs. tumor” problem
(Duncker, 1945; Gick & Holyoak, 1980). Researchers saw deep structural similarities between
the two problems in each pair, and they hoped that students, through analogical transfer, would be
able to successfully solve the second problem after learning how to solve the first. However, the
results of these and other similar transfer studies demonstrate that transfer, when measured this
way, is rather rare.
The perspective adopted by transfer researchers typically involves pre-defining the
underlying concept that should transfer and then seeking evidence for transfer. Studies based on
these traditional views of transfer often show little support for the occurrence of transfer.
However, almost all of us know from everyday experience that we seldom invent a procedure or
strategy each time in a new situation. Clearly something transfers from one situation to another.
In fact, we transfer even without consciously thinking about it. Could it be that we researchers
are overly focused on what we should find and are ignoring what students in fact do transfer?
To reconcile the apparently contradictory evidence of the simultaneous ubiquity and the
lack of transfer, some researchers have reconsidered the ways to characterize transfer (Bransford
& Schwartz, 1999; Greeno et al., 1993; Lobato, 1996, 2003). The above approach of
predetermining what should transfer can be self-limiting. Lobato (1996) points out that students
may transfer both productively and unproductively, in ways that the researchers may not have
previously considered. She argues that we should not decide a priori what students should
transfer but rather adopt a student-centered perspective to find out what students do
transfer and investigate the mediating factors. An understanding of these factors can provide us
insights into the kinds of interventions that might facilitate productive transfer. Lobato’s “Actor-
Oriented Model of Transfer” has its origin in the ideas of “perceived similarities” by Hoffding
(1892) and “situated cognition” by Lave & Wenger (1991). The model relies on “personal
creations of relations of similarity” by the learner, between the learning and transfer contexts,
rather than similarities perceived by the researcher.
Lobato’s model builds on the socio-cultural aspects of transfer (Greeno et al., 1993) and
situated cognition (Lave & Wenger, 1991). These ideas go beyond thinking of transfer as
occurring entirely in a student’s mind and begin to look at how external factors such as
interactions with the environment, peers or the teacher can affect the transfer of learning.
Previous researchers have conceptualized transfer as the process of recognizing similarity of
surface features (Thorndike, 1906) or deep structure (Judd, 1908) between the two contexts.
Other researchers believe that transfer involves building symbolic mental representation or
schema in the learning context and then mapping and applying that schema to the transfer
context (Anderson & Thompson, 1989; Gentner, 1983; Holyoak & Thagard, 1989). Greeno et
al. (1993) argue that this process, while possible, is rather rare. Instead, they focus on activities
that the learner performs in the learning context. The learner interacts and becomes “attuned to
the affordances” of the learning context, i.e., its “potential states of affairs,” and brings the
knowledge of these aspects of the learning context into the transfer context.
Another contemporary perspective of transfer is offered by Bransford and Schwartz
(Bransford & Schwartz, 1999; Schwartz, Bransford and Sears, this volume). They characterize
previous transfer studies as having focused on “sequestered problem solving” in which a learner
is required to solve a problem in the transfer context without scaffolding that was
available in the learning context. Bransford and Schwartz promote an alternative perspective of
transfer as “preparation for future learning.” They believe that undue focus on whether or not
students can problem-solve “cold” in the transfer context has led to the lack of evidence of
transfer. Rather, they focus on how students learn to solve the problem in the transfer context.
Transfer is more likely if students are given opportunities to reconstruct their learning in the
transfer context in the same way as they did in the learning context.
All of the above perspectives share at least three common themes. First, they look at
transfer from the students’ perspective rather than a pre-defined researcher’s perspective, i.e.
they ask what similarities the student sees in a given situation. Second, they describe transfer as
the dynamic construction of knowledge in the target scenario, rather than the application of
previously learned knowledge. Therefore, transfer must be assessed by whether students can learn in
the new situation. Finally, the above perspectives go beyond looking at transfer from a purely
cognitive perspective and include socio-cultural aspects in their discussion.
In the next section we present an analytical framework that describes students’ sense-
making processes in an interview. We choose the interview as a setting in which to examine
transfer because it is a widely used tool in educational research and affords us an opportunity to
study how students transfer and construct knowledge dynamically—consistent with the current
perspectives.
A FRAMEWORK TO MODEL DYNAMIC TRANSFER
Interviews are a useful tool to gauge the dynamics of transfer of learning and provide
insights into how students apply and reconstruct knowledge and experiences gained elsewhere as
they respond to a question. A researcher’s agenda can potentially affect interpretation
of interview data (Scherr & Wittmann, 2002). Based on her agenda, an interviewer may attend to
a particular aspect of a student’s response at the expense of others or may unwittingly cue the
student. The assumption that student knowledge remains static while it is probed in an interview
can also affect the interpretation of interview data because it overlooks situations in which
students make up answers to questions they may never have previously considered. Therefore, it
ignores the dynamics of in situ transfer and construction of knowledge by students. The
analytical framework that we have developed addresses both of these issues.
Researchers in our group are working on various projects that investigate how students
transfer their learning from one context to another. Our goals include investigations on students’
transfer of Newtonian ideas (Allbaugh, 2003) or energy concepts (Itza-Ortiz, Lawrence, &
Zollman, 2003) from mechanics to electricity or magnetism; transfer from the classroom to the
real-world (Engelhardt, Gray, & Rebello, 2004; Engelhardt & Rebello, 2003; Engelhardt,
Rebello, & Itza-Ortiz, 2003); transfer from everyday experiences into an interview setting
(Hrepic, 2002; Hrepic, Rebello, & Zollman, 2002) and transfer from one problem to another
within an interview (Gray, 2004). Our interview participants ranged from non-science majors in
conceptually-based classes to engineering and physics majors in calculus-based physics classes.
Therefore, our framework has implications that are not specific to a particular level of students or
area in physics. The framework and its implications have been discussed elsewhere (Engelhardt
et al., 2003; Gray et al., 2003; Itza-Ortiz et al., 2004). Below we describe the role of elements of
our framework in transfer of learning.
Elements of the Framework
Our framework emerged by re-analyzing and parsing several interview transcripts to
understand the student’s reasoning process. This process afforded us the opportunity to observe
how students build and transfer knowledge dynamically based on their previous learning and
experiences in ways that are consistent with the contemporary models of transfer. A careful
analysis of interviews revealed four elements or factors that can play a role in dynamic transfer
of knowledge and learning.
External Inputs answer the question: “What prompts transfer?” An external input is
information provided by the interviewer via a protocol question, follow-up or clarification
questions, as well as other hints or cues. It also includes other materials, e.g. text, pictures,
demos, videos, etc. used in the interview. External inputs can play a key role in influencing
transfer of knowledge. They can prime the student to focus on certain aspects of a problem
situation at the expense of others. They may provide verbal and non-verbal feedback that
prompts the student to think in a particular way, thereby facilitating either positive or negative
transfer. Taking into consideration the external input is consistent with Greeno et al. (1993) and
Lobato's (1996, 2003) view that “transfer is distributed across mental, material, social and
cultural planes.” Interaction with the interviewer is an example of this social interaction which
may cue students to access various knowledge elements or tools in their reasoning.
Tools answer the question: “What transfers?” They can be broadly categorized into pre-
existing tools or created tools. Pre-existing tools include a student’s prior experience or
knowledge gained through everyday life or instruction. This internal dormant knowledge
includes knowledge structures of grain sizes ranging from phenomenological primitives (diSessa,
1988), resources (Hammer, 2000) or facets (Minstrell, 1992) to mental models (Driver,
1995; Glasersfeld, 1989; Johnson-Laird, 1983; Vosniadou, 1994). Tools enable us to
characterize what a student transfers from her/his prior knowledge and experience. Lobato’s
Actor-Oriented Model focuses on what the student considers the “same” between the learning
and transfer scenarios. She points out that “what experts consider a surface feature may be
structurally substantive for a learner.” Our definition of tools is consistent with Lobato’s
perspective in that we do not pre-define what the student should transfer but rather seek to find
anything that the student transfers, regardless of whether it is a surface feature or a deep
structural similarity. Tools also include information about the “affordances” or “potential states
of affairs” of either the learning or the transfer context as proposed by Greeno et al. (1993). The
concept of tools is also similar to the notion of “knowing with” invoked by Bransford &
Schwartz (1999) who point out that learners often “utilize their previously acquired concepts and
experiences,” a view based on the idea that a person “… thinks, perceives and judges with
everything that [s]he has studied in school although [s]he cannot recall these things on demand”
(Broudy, 1977, p. 12). Thus, almost any object or idea, concrete or abstract, real or imaginary,
can potentially be a tool.
Tools also include students’ epistemic resources (Redish, in press). Hammer & Elby
(2002) have described at least two kinds of personal epistemological modes that students use in a
learning situation—“knowledge as propagated stuff” and “knowledge as fabricated stuff.” A
student’s personal epistemology or epistemic resources affect the types of cognitive tools that
they tend to rely on. For instance, a student who believes that “knowledge is propagated stuff
(from authority)” may tend to transfer only those ‘facts’ acquired from ‘authoritative’ sources
such as a textbook or an instructor, rather than from her personal life experiences or peers. Thus,
epistemic resources are “meta-tools” or higher-level tools that control the use of lower-
level (cognitive) tools that the student uses.
In contrast to pre-existing tools, created tools are constructed dynamically at an earlier
point in the interview, such as knowledge acquired while reasoning through previous
questions. Created tools are more likely to be utilized by a student operating in the “knowledge
is fabricated stuff” epistemic mode. The notion of dynamic construction of tools, which is
consistent with the contemporary transfer models, is discussed below in the context of the workbench.
Workbench includes various mental processes that may utilize external inputs and tools.
Workbench processes can be as simple as making connections between various tools or
executing a known rule or procedure. They also include the reorganization and
restructuring of knowledge, such as assimilation and accommodation (Piaget, 1964), conceptual
combination (Ward, Smith, & Vaid, 1997) or hybridization (Hrepic, 2002; Hrepic et al., 2002).
Workbench processes also include analogical, inductive or deductive reasoning, as well as
decision making. Decision making is often the first step to transfer in that it involves the learner
recognizing that the transfer context is similar to a learning context and determining the
appropriateness of the tools to be activated in the transfer context. The tools that a learner
activates depend upon their epistemic resources, or meta-tools. Tool
activation is referred to by Collins & Ferguson (1993) and later Redish (2003) as an “epistemic
game.” According to Redish (2003, p. 45), activation of a tool involves “coherent activity that
uses particular kinds of knowledge (i.e. tools) and the processes associated with that knowledge
to create [new] knowledge.” Therefore, epistemic games are a workbench process.
The concept of a workbench is consistent with the notion that transfer is a
dynamic process in that the relations and similarities are constructed anew in the transfer context
and not merely transported from the learning context. Lobato’s Actor-Oriented Model asks,
“What relations of similarity are created? How are they supported by the environment?”
(Lobato, 2003, p. 20). The model of transfer by Greeno et al. hypothesizes that “a symbolic
representation of structure is generated in the transfer situation based partly on information about
another situation that is retrieved.” This process of generation of the symbolic representation (in
the transfer situation) is a workbench process. The information about the other (learning)
situation that is retrieved is a tool. The concept of a workbench process affords the opportunity
for the researcher to investigate the learners’ ability “to learn new information and relate their
learning to previous experiences,” consistent with Bransford and Schwartz’s (1999, p. 69) view
of “ideal assessment” of transfer as preparation for future learning.
The answer marks a ‘stopping point’ in the reasoning process and not necessarily the
final outcome or conclusion. Answers can broadly be categorized into three types: decisive,
indecisive and none. A decisive answer is one in which the student arrives at a single
conclusion. A correct answer would typically be a performance measure of positive transfer and
an incorrect answer would be indicative of negative transfer. However, the correctness of the
answer is not important from the perspective of a student-centered model of transfer. An
indecisive answer, in which a student is unable to choose between two answers or requests
more information, can be potentially interesting. Bransford & Schwartz (1999) believe that an
‘answer’ that is in fact a question requesting further information is indicative of the student’s
preparation for future learning. Some physics education researchers (e.g. Thornton, 2002) have
shown that the open-endedness of students’ questions is often an indicator of superior conceptual
understanding. Students’ questions play an important role in determining the extent to
which students may be transferring and constructing new knowledge dynamically. Conversely, a
student’s response that he/she “does not know” without even venturing a guess can be indicative
of a “knowledge as propagated stuff” epistemic mode, in which the learner looks for the
‘right’ answer and is unwilling to even attempt to construct one on the spot.
Connections with Cognitive Information Processing
The framework above is consistent with cognitive information processing (Driscoll,
2000) and an often-used metaphor, the computer. The external input is analogous to human
sensory inputs, or computer input devices – e.g. mouse, keyboard, etc. Tools correspond to
information stored in long term memory that is retrieved before usage, similar to data on the hard
drive that is loaded into a buffer before usage. The workbench corresponds to processes in the
short term working memory or in a computer’s CPU. Finally, the answer corresponds to the
output action or speech by the individual or in the case of the computer, the information
displayed on the monitor or printed. These connections demonstrate how contemporary ideas of
transfer can be considered in the context of cognitive information processing. Transfer involves
retrieval of information from the long term memory followed by its processing in the working
memory. The latter step helps emphasize that transfer is more than mere retrieval of stored
schema but involves “dynamic production of sameness” (Lobato, 1996, 2003) through
associations and control (Redish, 2003) in the short term memory.
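The information-processing analogy above can be sketched as a purely illustrative toy program (our own construction for this edition, not part of the framework, with hypothetical contexts and tools): external input arrives, matching tools are retrieved from a long-term store, and a workbench process associates them to produce an answer.

```python
# Toy illustration of the information-processing analogy (not a cognitive model):
# external input -> retrieval of tools from long-term memory -> workbench -> answer.

# Hypothetical long-term memory: contexts mapped to stored knowledge "tools".
LONG_TERM_MEMORY = {
    "ball thrown upward": ["gravity acts downward", "velocity is zero at the peak"],
    "circuit with bulbs": ["current is conserved", "brightness tracks power"],
}

def retrieve_tools(external_input):
    """Activate stored tools whose context keywords appear in the input."""
    tools = []
    for context, knowledge in LONG_TERM_MEMORY.items():
        if any(word in external_input for word in context.split()):
            tools.extend(knowledge)
    return tools

def workbench(external_input, tools):
    """Associate retrieved tools with the input to produce an 'answer'."""
    if not tools:
        return "indecisive: requests more information"
    return f"decisive: reasons about '{external_input}' using {len(tools)} tool(s)"

def respond(external_input):
    return workbench(external_input, retrieve_tools(external_input))

print(respond("a ball thrown upward slows down"))     # decisive answer
print(respond("an unfamiliar quantum dot question"))  # indecisive answer
```

The sketch deliberately mirrors the computer metaphor: the dictionary plays the hard drive, the local variables the buffer/working memory, and the returned string the output device.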
Alignment of Framework with Contemporary Models of Transfer
Various elements of our framework align with contemporary models of transfer (see
Table 1). The alignment indicates that our framework can serve as a valuable tool to investigate
transfer of learning from the perspective of these contemporary models of transfer.
TABLE 1 ABOUT HERE
Modeling Transfer
Redish (2003) describes a two-level framework (Figure 1) based on fundamental neuro-
cognitive theories. The lower level includes associations between knowledge elements, which
are “relations of similarity” in Lobato's (1996, 2003) Actor-Oriented Model. The upper level
includes executive control that enhances (turns on) or suppresses (turns off) the associations
between these knowledge elements based on a learner’s epistemologies and expectations.
FIGURE 1 ABOUT HERE
Redish’s framework provides an overarching structure for our model of transfer which
categorizes various tools and workbench processes as follows:
‘Source’ Tools are pre-existing knowledge or experiences from a prior context such as a
real-life experience (Engelhardt & Rebello, 2003; Engelhardt et al., 2003), classroom instruction
(Allbaugh, 2003; Itza-Ortiz et al., 2003; Itza-Ortiz, Rebello, & Zollman, 2004),
popular media or even previous interview questions (Gray, 2004). Source tools include a
learner’s dormant knowledge that is activated to make sense of new situations.
‘Target’ Tools are attributes of the ‘target’ situation that the learner uses to “know with”
(Bransford & Schwartz, 1999; Broudy, 1977). They define the target context in the learner’s
mind. Target tools are presented via external inputs; however, not all inputs are tools. Rather,
the learner ‘reads out’ the part of the input information that she considers relevant, and uses this
read-out information as tools (diSessa, 1998). Target tools may include surface features, deep
structure, affordances or states of affairs (Greeno et al., 1993) that a learner attends to.
‘Epistemic Meta-Tools’ are epistemic resources (“knowledge is propagated” or
“knowledge is fabricated”) that a student activates to exercise executive control over workbench
processes. Unlike the target tool, the epistemic meta-tool may be activated from a learner’s long
term memory through priming by the external input.
‘Read-out’ is the process by which a learner recognizes the relevance of certain
attributes or transfer tools in the external inputs. A learner may be primed to notice some
information at the expense of others, based on the epistemic meta-tools that are activated at that
time.
‘Activation’ is the process by which a learner recalls into working memory source tools
or epistemic meta-tools that are dormant in long term memory.
‘Association’ is the process by which a learner interconnects tools in the working
memory. Various types of associations are possible, e.g. inferential, causal, analogical,
deductive or inductive. It is often difficult to distinguish between activation of a tool and its
association with other tools. Typically when students explicate the associations that
they construct, the activation is implied.
‘Priming’ is a higher order (meta) process by which covert meta-messages influence the
way in which a learner frames the situation and activates certain epistemic meta-tools. Evidence
of priming is indirectly inferred from the sources of knowledge that the learner refers to in her
reasoning.
‘Control’ is a higher order (meta) process by which a learner enhances or suppresses
associations, activations and read-out based on the epistemic meta-tools. ‘Epistemic gaming’
(Redish, 2003), by which a learner decides which types of knowledge to bring to bear, is one
such control process. Like
priming, evidence of executive control must be inferred indirectly from a learner’s statements
(e.g. “I made it up.”).
FIGURE 2 ABOUT HERE
Figure 2 shows our framework, which builds on the generic structure provided by
Redish (Figure 1). We model the transfer mechanism in three phases that are often
indistinguishable in practice.
Phase 1: The interviewer provides external input describing the problem scenario. Additionally,
the interviewer also primes the learner through ‘covert messages’ to activate epistemic meta-
tools.
Phase 2: The activated epistemic meta-tool controls the process by which the learner
weighs the relevance and reads out certain pieces of input information to be used as a target
tool in the reasoning process.
Phase 3: The epistemic meta-tool activates source tools from long-term memory. If the
‘knowledge is propagated stuff’ epistemic meta-tool has been activated, the learner is
more likely to utilize knowledge acquired through formal instruction. If the ‘knowledge is
fabricated stuff’ epistemic meta-tool has been activated, the learner is more likely to use self-
constructed knowledge. The learner establishes associations or relationships between the
source and target tools. The association process described here is typically explicated by the
student, while the activation process is implicit.
Therefore, in our model, transfer is a dynamic creation of associations between target
tools read out from the external inputs and source tools activated from long term memory.
Readout, activation and associations are mediated through higher-order control by epistemic
meta-tools which are in turn activated through priming by covert meta-messages in the external
input.
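The three phases above can be rendered as a rough pipeline sketch (again our own illustrative construction, with hypothetical names and trigger words): an activated epistemic meta-tool gates both the read-out of target tools from the input and the activation of source tools from long-term memory.

```python
# Illustrative sketch of the three-phase transfer mechanism (hypothetical names).

def phase1_prime(covert_message):
    """Phase 1: priming by the external input activates an epistemic meta-tool."""
    return "propagated" if "authority" in covert_message else "fabricated"

def phase2_read_out(meta_tool, input_features):
    """Phase 2: the meta-tool controls which input features become target tools."""
    relevant = {"propagated": "textbook-like", "fabricated": "everyday"}[meta_tool]
    return [f for f in input_features if relevant in f]

def phase3_associate(meta_tool, target_tools):
    """Phase 3: source tools are activated and associated with the target tools."""
    source = "formal instruction" if meta_tool == "propagated" else "self-constructed knowledge"
    return [(source, t) for t in target_tools]

meta = phase1_prime("the interviewer sounds like an authority figure")
targets = phase2_read_out(meta, ["textbook-like diagram", "everyday analogy"])
associations = phase3_associate(meta, targets)
print(associations)  # prints [('formal instruction', 'textbook-like diagram')]
```

The point of the sketch is only the control flow: the meta-tool chosen in Phase 1 filters what is read out in Phase 2 and which source tools are associated in Phase 3, matching the summary paragraph above.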
We acknowledge that our model of transfer includes the role of the working memory in
ways that may not be consistent with existing knowledge about the limitations of working
memory. One such limitation is the maximum number of items that we can attend to
simultaneously in our working memory. Another limitation pertains to the maximum duration
for which one can hold information in working memory without continuous rehearsal. Our
model is silent about these limitations, because we use the term ‘working memory’ rather
loosely. Further research may be needed to refine this model to be more consistent with existing
notions of working memory as used in cognitive science.
Phase 4 (not in Figure 2): Two possibilities exist. First, in the short term, the source-
target tool association prompts metacognitive reflection and self-regulation (Flavell, 1979,
1987), causing the learner to rethink the problem. Second, if the source-target tool
association is strongly established, it yields a new tool (comprising the two interlinked tools)
that is committed to long-term memory. This new tool may be activated as a single cognitive
entity in the future, akin to Hammer, Elby, Scherr and Redish’s model (this volume) of
coherent activation of coordinated resources. A learner’s repeated association of the same
tools in different contexts creates in her mind a coordination class (diSessa, 1998) that is
central to ‘Class A’ transfer described by diSessa and Wagner (this volume).
We adopt a ‘value neutral’ stance toward the scientific correctness of the associations
described above and focus instead on the underlying factors. This knowledge of the intuitive
associations, whether correct or incorrect, can help us design curriculum and instruction that
promotes transfer as described later in this chapter.
Commonalities of Our Model with Other Models in this Volume
Our model shares commonalities with other models discussed in this volume. We focus
on the perspectives described by Dufresne et al., Hammer et al., Schwartz et al., and diSessa and
Wagner in their respective chapters.
Dufresne et al. (this volume) have described transfer as a “complex dynamical process
leading to the activation and application of knowledge in response to context.” Transfer is a
dynamic process involving coordination of knowledge pieces, which are akin to the source and
target tools in our model, although our model does not comment on the grain size of the tools.
Transfer, as per Dufresne et al., includes two sub-processes. First is the “readout filter,” i.e.
noticing relevant information in a situation, which is very similar to the read-out
process in our model. Second is the “expectation filter” which includes activating and applying
the knowledge pieces to make inferences. This sub-process is analogous to the activation and
association processes in our model. Dufresne et al. describe transfer as a process through which
learners align their readout and expectations to achieve a state of quasi-equilibrium. This
description is similar to our notion that the associations (between source and target tools)
dynamically created by the learner are not stable in time and are in fact highly context
dependent. One minor difference is that our model attempts to include issues pertaining to
social interactions, which can affect a learner’s epistemic mode, although we have explored
these issues only in the rather contrived research context of an interview. Dufresne et al.
clearly acknowledge the importance of social context in transfer, but focus instead on the
cognitive issues pertaining to their research in physics problem solving.
Hammer et al. (this volume) point out that previous researchers have used a “unitary
ontology” of transfer of an “intact cognitive unit.” Rather, Hammer et al. describe transfer with a
“manifold ontology”: locally coherent resources are activated or deactivated based on the
learner’s epistemic “frame” in the context. These resources are mutually associated so that they
have a high likelihood of being activated together. Transfer occurs when the learner enters a
similar state in a new context and activates the same set of resources. This idea is similar to our
notion of dynamic creation of associations between tools activated from a learner’s long term
memory. However, the long term stability of this association is not a required aspect of what we
call transfer, although such long term stability is indeed desirable.
Schwartz et al. (this volume) extend their previous discussion of transfer as preparation
for future learning. They differentiate between “transferring out of” and “transferring into”
situations. The former is the conventional and rather rarely observed transfer. The
latter is consistent with our view. Transferring in is akin to Broudy’s (1977) view of “knowing
with,” i.e. interpreting a new situation in light of previous experiences. Interpretive
associations are rather subtle and are ignored in traditional assessments such as sequestered
problem solving, which focus on replicative and applicative associations. The “double transfer”
experiment (discussed later) can measure both “transfer in” and “transfer out.” Our model of
transfer also focuses on associations that learners dynamically construct between the target and
source tools; however, it does not distinguish between replicative, applicative and interpretive
associations. Schwartz et al.’s double transfer experiment could be adapted to the teaching
interview discussed later in this chapter.
diSessa and Wagner (this volume) categorize transfer based on the grain size of the
transferred knowledge, frequency of transfer and need for new learning to facilitate transfer.
‘Class A Transfer’ is deployment of “well prepared” knowledge, such as a coordination class
(diSessa, 1998). It requires little or no new learning because the expert learner already
possesses a coherently organized set of resources, is aware of their realm of applicability
(“span”), and can recognize when these resources apply in new contexts. In contrast, during the
ubiquitous ‘Class C Transfer,’ novice learners use small-grained prior knowledge both
productively and unproductively in new situations. Our model focuses on Class C transfer,
which is almost indistinguishable from novice learning, rather than on Class A transfer by experts.
Applying Our Model – An Example
We demonstrate our model by analyzing data collected by other researchers (Wittmann &
Scherr, 2002), who investigated the effect of a student’s epistemological mode on her reasoning
in an interview about current and conductivity. The student was asked what
“category” [conductor or insulator] Styrofoam fell into. She began by stating that it was
insulating. When asked why, she stated that she had “memorized it!” When asked to explain the
property of Styrofoam that might lead to its insulating behavior, she referred to the “little density
thing” and added that she did not “really know” the answer. When prompted that Styrofoam was
“not terribly dense” she restated that she did not “really know” but added that “something
inhibits the electrons from moving quickly.” Asked to explain her reasoning, she talked about
electrons bound to the lattice, but when asked to elaborate she stated, “I have no idea! That’s
organic chemistry!” As the authors point out, the student in this segment appears to rely on the
epistemic mode that “knowledge is propagated” from authority (organic chemistry) and must be
committed to memory. She appears to read out the ‘density’ attribute (target tool) of the
Styrofoam and associate it with her memorized knowledge about electrons (source tool).
In a subsequent interview segment the interviewer specifically asks the student to provide
“any explanation” that she can find. The student begins to elaborate her reasoning: “… the
electrons are bound to these molecules and it takes certain energies to tear them away.” She is
asked what tears them away and responds that she “assumes … just the battery…the power
supply.” As the authors explain, the prompt to provide “any explanation” appears to switch the
student into the “knowledge is fabricated stuff” epistemic mode, as indicated by her choice of
the word “assume.” She associates her ideas of electrons being “bound to molecules” and
needing “energies to tear them away” (both source tools, i.e. knowledge acquired previously)
with the battery or power supply attributes in the target context.
In this episode, the phrasing of the question appears to have primed the student into
different epistemic modes. Initially she was asked what category (conductor or insulator)
Styrofoam fell into. We speculate that asking her to use pre-constructed categories with
scientific-sounding labels may have activated her “knowledge is propagated” epistemic meta-
tool. Later, asking her to provide “any explanation” she could find activated the “knowledge is
fabricated” epistemic meta-tool. That a student’s epistemic mode is not stable is consistent with
the idea that transfer exists across multiple planes – intellectual, material and social (Lobato,
1996). Interaction with the interviewer primes the student into an epistemic mode, which in turn
controls the activation and association between source and target tools.
IMPLICATIONS FOR TRANSFER STUDIES IN PHYSICS EDUCATION
RESEARCH
As demonstrated above, our model has implications for investigating dynamic transfer in
an interview, occurring over timescales of a few minutes. In this section we present examples
from our research and use the framework to identify transfer processes occurring dynamically in
a student’s reasoning path.
Spontaneous Transfer
During an interview, students often spontaneously, without any external hints, create
associations or “relations of similarity” (Lobato, 1996, 2003) between source tools (e.g. prior
knowledge) and the target tools read out from the scenario at hand. Below we discuss several
examples of spontaneous transfer occurring in myriad situations.
Spontaneous transfer from the classroom to the real-world
Engelhardt et al. (2003) investigated the extent to which students transferred their
classroom learning to everyday devices, e.g. a bicycle. In the transcript below, a bicycle is
turned upside down so that the wheels rotate freely.
Interviewer: Why doesn’t the rear wheel stop moving when you stop pedaling?
Student: Inertia, because it’s already in motion so it tends to just keep going in
motion unless a force is applied to stop it.
Interviewer: What is force?
Student: Force is for instance if I put my hand and I push down that is me putting a
force on the wheel. So I guess force is a …we just covered that definition
today. Force is a downward pull on an object.
The student appears to activate the “knowledge as propagated stuff” epistemic resource.
She spontaneously associates inertia and Newton’s first law (source tools) learned in class with
the spinning bike wheel (target tool). She clarifies the association when asked to explain the
term “force” (another source tool), which she associates with the affordances of the target
context -- the bike pedal and her kinesthetic feeling (target tools). As demonstrated in this episode, most
students intuitively used force rather than energy to explain the working of a bicycle even when
presented with contexts (e.g. cycling uphill vs. downhill) that we believed should prompt them to
make associations with energy. The implication for using real-world devices as a pedagogical
vehicle to learn physics is that there may be certain concepts that students tend to associate
spontaneously with certain devices. To develop effective curricula, we need to be cognizant of
these spontaneous associations.
Spontaneous transfer from the real-world to the interview
Hrepic (2002) investigated students’ mental models of sound propagation in air and
through a wall. Below a student, provided with the diagram in Figure 3, explains her reasoning
for why and how sound gets to the other side of a wall.
Interviewer: If the listener hears the speaker [from the other side of the wall], how does
sound get to the other side?
Student: If the person is loud enough I think it can get to the other side because it’s
gonna travel through. Sound…is almost more… (pause) sneakier than air
would be. Like air can’t always get through, but sound can because it can
get through the tinier, little areas and tiny as little…I mean as if a door is
shut, you obviously are gonna be able to hear [sound from the other side].
It’s not as loud. It’s not as pointy. It’s muffled. But you can still hear.
But if there is a fan directly on an open door, it’s gonna go right into the
room, it’s gonna go right into the area that fan is blowing. But if you shut
that door, the air is not gonna go through. It’s not gonna be able to get
through. It’s just like that’s that – end of barrier. It can’t get any there.
But the sound can get through.
FIGURE 3 ABOUT HERE
The student contrasts sound with air in that sound is “sneakier” than air, which
could not have gotten through unless the door was open. She uses this contrast as a source tool
(i.e. air through a door) to accentuate the affordances of the target tool (i.e. sound through the
wall).
Spontaneous transfer from one class to another class
Allbaugh (2003) investigated students’ transfer of Newton’s second law from mechanics
to electricity and magnetism. Each student was presented with a scenario involving a charged
particle in an electric field and was asked to predict the path of the particle. Although these students had
previously never used Newton’s second law in electrostatics contexts, over half of them appeared
to spontaneously recognize the relevance of mass (source tool) in the target context and associate
it with Newton’s second law (source tool) learned in the previous semester.
Student A: Well then the acceleration will be smaller…Because the force on it is
going to be the same and if the mass is going to go up, the acceleration is
going to have to come down from Newton’s second law.
Student B: Since it’s mass like it is ‘it’s kq1q2 over r squared [writes equation] also
equals ma. So acceleration is going to be less. The mass is bigger so
compared to that one this is going to move closer.
Spontaneous transfer between successive interview questions
Gray (2004) investigated the effect of question order on student responses to pairs of
Force Concept Inventory (FCI) (Hestenes, Wells, & Swackhamer, 1992) questions. In think-
aloud interviews, students were asked to work through one question followed by another, then to
discuss the similarities and differences between the two questions and whether the second question
would cause them to rethink the previous question. Therefore, the study investigated
the extent to which students would spontaneously associate elements of one question with
another.
A student incorrectly selects choice 4 for the hockey puck question in Figure 4 (FCI
Question # 8). Next, she decides that the spaceship question in Figure 5 (FCI Question # 21) is
similar to the hockey question and selects choice 4 for the spaceship question as well. But, while
explaining her reasoning in the spaceship question, she realizes that her answer to the hockey
puck question is incorrect and now selects choice 1 for the hockey puck question.
Student: Uh, it looks like the same deal except this is in space and not under forces
of gravity, like the hockey puck was. So, I think [choice] 4 is also going
to be a good answer for this one. Actually looking at this one I think on
the first question [choice] 1 was probably the best answer for that one.
Interviewer: Okay.
Student: Yeah, I think I mixed up the reasoning. In space the momentum and
inertia are going to carry it at an angle to get to its right angle position.
The above example is ‘backward transfer’ because the second context (spaceship)
prompts the student to change her responses in the first context (hockey puck). She dynamically
and spontaneously associates the two contexts and recognizes the differences between them. She
recognizes the relevance of the absence of gravity (source tool) in the spaceship context and
associates it with the hockey puck question (target tool). However, this association does not
enable her to arrive at the correct answer; this case is therefore an example of negative
backward transfer. In a different situation, this type of association could potentially result in
positive backward transfer. This activation process did not occur for the reverse question order.
This example also demonstrates the role of backward transfer in metacognitive self-
regulation. This learner uses tools from a later experience to reflect on and recognize
inconsistencies in her previous reasoning. Although her final answer is incorrect, it is
noteworthy that the learner is spontaneously engaging in these productive reasoning processes
during an interview.
FIGURE 4 ABOUT HERE
FIGURE 5 ABOUT HERE
Finally, ‘spontaneous’ transfer as described here does not necessarily mean that a learner
relies solely on the “knowledge is fabricated stuff” epistemic resource as the controlling meta-
tool. In fact, a student could be spontaneous yet rely only on what she has learned from
authority, such as definitions learned in class.
Scaffolded Transfer
In the examples in the previous section, the learner appeared to create associations
spontaneously without any additional external inputs from the interviewer. Minimizing external
inputs is desirable if our goal is to avoid, as far as possible, changing student knowledge in the
process of investigating it. However, if our goal is to design instructional interventions we must
also investigate how students respond to external inputs and attempts to change their ideas. In
this section we discuss what we call ‘scaffolded’ transfer that is facilitated by direct and
conscious inputs from the interviewer that prompt the student to dynamically create
associations.
Existence of transfer explicated to the student
Allbaugh (2003) presented students with a sequence of images and descriptions of a
problem scenario involving a large sled on ice that held six identical blocks (Figure 6). The
student was asked to predict what would happen if first one block, then two blocks, and so on,
were thrown from the sled at a given velocity every 10 seconds. Finally, each student was asked
if the scenario reminded her/him of anything she/he had encountered before, either in or out of
class.
FIGURE 6 ABOUT HERE
Initially, with no scaffolding provided, five of the 14 students could not think of anything
from their past that they could associate with this scenario. Three students (e.g. Student C
below) spontaneously associated the target tool (sled problem) with a problem encountered in the
past. The remaining six students associated the scenario with tools learned in class: center of
mass and internal forces.
Student C: Like it reminds me of a problem we did in high school. A squirrel is on an
icy tin roof and so he’s sliding down the roof and he’s able to stop sliding
because he has nuts in his mouth and so he like spits them out …So I’ve
encountered stuff like this before.
Finally, the scaffolding was introduced. Students were told that physics
professors who had been teaching the subject for years found the sled scenario similar to a
homework problem on rocket propulsion that had been previously assigned in the course. The
students were asked whether or not they agreed with the professors and why.
Thus, students were provided with a specific source tool -- the rocket problem.
Epistemological factors are also present because students usually tend to believe in the
professors’ correctness (e.g. Student A below). Telling students that professors had seen
associations likely activated the students’ “knowledge as propagated from authority” epistemic
resource.
Student A: I guess [turns paper sideways] Hmm. [laughs] I guess. Yeah. Like,
initially at rest, another similarity, a certain mass. Now, you’re like
burning fuel. Yeah. Yeah. I guess those guys are smart.
All of the students alluded to associations between the two scenarios. Two of these
students (e.g. Student A) turned the horizontal image of the sled onto its side before they agreed
with the professors. From Greeno et al.'s (1993) perspective, these students were performing
activities (rotating the page) that helped attune them to the affordances of the source tool (rocket
problem) presented to them and recognize “potential states of affairs” in the learning context.
These students (e.g. Student D below) transferred tools used in the sled scenario to the rocket
propulsion scenario.
Student D: Yeah, I can see how that relates to it. Sending out a mass of fuel in one
direction and that propels the rocket forward. It’s just this guy, ah, what
he’s throwing to the left is like the rocket fuel.
In this experiment we do not know whether students would have established
associations if they had been presented with two problems and simply asked if they saw
similarities between the two scenarios, without being told that professors saw similarities. This
fact does not negate the positive implications of the study. That it is possible to have students
dynamically construct “relations of similarity” (Lobato, 1996, 2003) with the appropriate
prompts and epistemological triggers has important implications for instruction.
Cued transfer between interview questions
Gray's (2004) research on the effect of presenting students with two related questions
successively provides an interesting example of scaffolded transfer. Students were specifically
asked whether their response to one question was affected by the other question and what
similarities they saw between the two questions. Thus, students were cued to focus on
similarities and engage in the “dynamic production of sameness” between the two questions
(Lobato, 1996, 2003).
The student below was first presented with the ‘airplane question’ (Figure 7), which is a
modified version (Rebello & Zollman, 2004) of Question # 23 on the FCI. The student
incorrectly answered this question by choosing path 1, a parabolic path behind the plane. The
student was then presented with the cannon question (Figure 8), which is Question # 12 on the
FCI. Here, he incorrectly selected choice 3. The interviewer then asked the student several questions regarding
the two scenarios such as, “Did the airplane question influence your answer to the cannon
question?” “Do you still agree with your answer to the airplane question?” “Would you have
answered the airplane question differently if you had been asked the cannon question first?” and
“Do you see any similarities between these questions?” All of these questions
prompted the student to dynamically create similarities between the two scenarios.
FIGURE 7 ABOUT HERE
FIGURE 8 ABOUT HERE
The last of the interviewer’s cuing questions made the student rethink his answer to the
airplane question. Initially the student had stated that the two problems were not similar.
However, asking the student again whether there were “any” similarities appears to have
triggered him into an epistemic mode in which he was comfortable creating relations of
similarity between the two situations. While describing these similarities he realized that his
answer to the airplane question was incorrect. He then chose the correct response, answer 5, and
described why this was correct.
Interviewer: Do you think these two problems are similar?
Student: No, they aren’t.
Interviewer: Do they have any similarities?
Student: I mean, I can see some similarities because you’ve got the velocity of the
ball by the time it reaches the end of the cannon and you’ve got, you
know, the velocity of the bowling ball being carried inside the plane so
when it leaves the plane, I mean they both still have a velocity carrying
themselves and I’ve actually… now I change my mind. If I had this
question [cannon] first, I would have probably answered [choice] 4
differently, er I would have answered 4 on this [airplane] question or
[choice] 5.
Interviewer: Okay.
Student: Cause I didn’t even think about that. Cause of when the ball comes out,
it’s still got a velocity going forward, not backwards.
Interviewer: Okay.
Student: The ball would have carried with a forward motion.
Interviewer: Which, [choice] 4 or [choice] 5? Any preference?
Student: I would probably say [choice] 5.
The student recognizes the relevance of the forward velocity of the bowling ball. He
associates this target tool with a source tool (the velocity of the cannon ball). These associations
were prompted by the interviewer’s external input asking the student whether there were “any”
similarities between the questions. The line of reasoning shown above was also displayed by
other students. All four of the students who were asked the two questions in this order
eventually answered the airplane question correctly. Some of them did so after returning to it
following the cannon ball question. This experimental design of comparing two questions is a
useful way to promote transfer. Students can build a holistic conceptual understanding when
they are prompted to generate associations by engaging in “personal constructions of similarities
across activities” (Lobato, 1996, 2003). This strategy also appears to invoke metacognitive self-
regulation, as in the example above, when the student realized that he “didn’t even think
about” the forward velocity of the bowling ball in the airplane question until after he was asked
the cannon question. Finally, the students were specifically asked to return to the previous
(airplane) question and conjecture whether they would answer it differently. They did
not do so of their own accord. Therefore, we have established that this transfer can occur, not
that it will occur spontaneously. Such a possibility of transfer could not be established when
the order of the questions was reversed.
Scaffolded transfer is consistent with the model of transfer by Greeno et al. (1993).
Interaction with the interviewer can prime the learner to activate epistemic resources that control
associations between source and target tools. Redish (2003) describes a method to promote
transfer along these lines. Students are presented with an ‘Elby pair’ of
questions, both of which involve the same physics concept. One question cues a common
misconception while the other cues the correct solution. After students answer both questions,
they are asked to reconcile their different approaches, i.e. they are asked to dynamically create
associations between the two situations. The airplane and cannon questions, when asked in
that order, appear to function similarly to an ‘Elby pair.’
THE TEACHING INTERVIEW: ANOTHER METHOD TO STUDY
TRANSFER
Physics education researchers have often used semi-structured clinical interviews
modeled after Piaget (1929). The goal is to explore student conceptual understanding without
altering it in the process. Clinical interviews help uncover the ideas that students bring with
them from previous experiences to the interview, although they tell us little about how students
might respond to particular instructional strategies. Knowledge of the latter is important for
curriculum development and instruction.
The teaching interview is an adaptation of the teaching experiment technique
that has often been used in mathematics education research (Steffe, 1983; Steffe & Thompson,
2000) to investigate how students might respond to certain instructional strategies. A few
physics education researchers (Katu, Lunetta, & van den Berg, 1993; Komorek & Duit, in press)
have used the teaching experiment methodology. Our adaptation of the teaching interview was
developed by Engelhardt, Corpuz, Ozimek and Rebello (2003) who were interested in
investigating how student ideas of real-world devices changed with instruction. The teaching
interview includes multiple teaching episodes with a group of two or three students. The
researcher (interviewer) simultaneously takes on the role of a teacher in a mock instructional
setting that utilizes the learning cycle (Karplus, 1974) and Socratic dialog (Hake, 1987),
incorporating demonstrations, hands-on experiences and predict-explain-observe-explain
sequences. Because it incorporates these instructional elements, the teaching interview can serve
as a useful bridge between clinical research and curriculum development.
The teaching interview is also different from action research. The latter is typically
performed in a ‘real’ instructional setting to test curriculum or instruction that has already been
developed. In contrast, the teaching interview precedes the development of curriculum and
instruction, and therefore does not follow a pre-decided strategy. Instead, the teaching interview
is semi-structured in that it allows the researcher to attempt different instructional inputs that
may change students’ models. For instance, if a student is unable to construct a mental model,
the researcher can gradually provide increasingly focused prompts (e.g. discrepant events) until
the student builds the model. Conversely, if a student already has a coherent model in a given
scenario, the researcher can present different situations to ‘stress’ the student’s model to
determine its robustness. Alternatively, if a student in the group has a coherent (and correct)
model and another one does not, the interviewer can ask each student to convince the
other of the correctness of their model as in peer instruction (Mazur, 1997) and observe the
ensuing interaction.
Finally, the teaching interview is not a particular research methodology but rather refers
to a family of techniques that lie along a continuum ranging from clinical interviews to
classroom action research. Several variations in the teaching interview are also possible. For
instance, one might conduct teaching interviews with individual students rather than with groups
of students. Having a single student eliminates the variables associated with student-student
interactions and provides the researcher with greater control in guiding the single student’s
model construction process.
We believe that the teaching interview can also help investigate transfer from
contemporary perspectives. First, it creates an environment that provides a rich repertoire of
experiences and tools and provides an opportunity for the dynamic “personal constructions of
relations of similarities” (Lobato, 1996, 2003) and associations (Redish, 2003) between tools,
maximizing the possibilities of students’ attunement to the affordances (Greeno et al., 1993) of
these tools. Second, the teaching interview allows the researcher to assess student learning in
situ, consistent with transfer as preparation for future learning (Bransford & Schwartz, 1999).
Finally, the teaching interview allows for student-student and student-teacher interactions,
allowing the researcher to investigate the socio-cultural dynamics of transfer (Greeno et al.,
1993; Lobato, 1996, 2003).
The teaching interview provides a level of scaffolding that is greater than a clinical
interview. Interactions with other students and hands-on activities also provide inputs to the
sense-making process of each student. The teaching interview thus enables students to develop
associations and transfer their learning from one scenario to another, and allows one to
investigate transfer of learning with maximal scaffolding. It is not our
intent here to elevate the teaching interview as the ultimate research methodology to investigate
transfer. Indeed, as contemporary perspectives indicate, transfer is almost ubiquitous and
unavoidable. Therefore, a clinical interview too provides a useful methodology in which to
study the dynamics of spontaneous in situ transfer. However, the teaching interview additionally
allows us to study scaffolded transfer.
In this volume, Schwartz, Bransford and Sears describe a methodology for investigating
how well a learning treatment prepares students to learn. Two groups of students
used two different teaching methods (“tell and practice” vs. “invent”). Then both groups were
provided with a common learning resource followed by a common transfer problem task.
Students who used the “invent” method were better able to utilize the learning resource and
performed better than the “tell and practice” group on the transfer task. Using this methodology
the researchers were able to study both “transfer in” and “transfer out” of the learning resource.
The teaching interview offers a similar, albeit as yet unexplored, opportunity, as shown in
Figure 9. By presenting students with specific external inputs, the teaching interview can be
adapted to perform a “double transfer” study. Unlike Schwartz et al.’s study, however, we
will not have control of the initial treatment, which in this case is the students’ real-world experience.
However, we can study how students associate their prior experiences with instructional
experiences provided during the teaching interview. We can also study how their prior
experiences affect associations between tools acquired through ‘instructional’ experiences and
those available in the final target scenario.
FIGURE 9 ABOUT HERE
We emphasize that by presenting students with particular ‘external inputs’ in the
‘instructional’ segment of the teaching interview, we do not depart from the notion that transfer
is ubiquitous. Rather, we continue to adopt a stance consistent with our perspective of not
presupposing what should transfer but rather looking for ‘anything’ that transfers. The above
methodology only increases the likelihood that students transfer the tools that they acquire in the
‘instructional’ segment of the teaching interview. We do not preclude the possibility that this
transfer may not occur or that students may transfer tools gained through other experiences.
SUMMARY
Transfer of learning has often been defined as the ability to apply what one has learned in
one context to a different context. Several previous research studies have demonstrated that
transfer of learning, defined this way, is rather rare. These findings appear to contradict our
everyday experiences as learners in which we often bring to bear previous experiences in any
new situation that we encounter. Recently, some researchers have begun to rethink the ways in
which to characterize transfer of learning.
We have focused specifically on the perspectives of Bransford & Schwartz (1999),
Greeno et al. (1993) and Lobato (1996, 2003). All of these emergent views of transfer appear to
share at least three common themes. First, they look at transfer from the students’ perspective
rather than a pre-defined researcher’s perspective, i.e. they ask what similarities the student sees
in a given situation. Second, they describe transfer as a dynamic phenomenon in which learners
construct their knowledge in the target scenario, rather than apply previously learned
knowledge. They promote the notion that transfer must be assessed by whether students can
learn in the new situation. Finally, they go beyond looking at transfer from a purely cognitive
perspective and include socio-cultural factors.
Based on these contemporary perspectives, we have adapted a framework that we had
previously developed to analyze student responses in an interview. Our framework is based on
the premise that students construct their responses to interview questions dynamically and often
make things up on the spot. This notion is consistent with contemporary dynamic models of
transfer. Therefore, our framework can be utilized to recognize dynamic transfer in an interview.
Our framework consists of four elements. First are the ‘external inputs’ provided by the
interviewer and interview materials. Second are the ‘tools’, which may be acquired in a prior
learning (source) context or in the present transfer (target) context: source tools are the prior
knowledge or experiences, including those gained from earlier instances in the interview, while
target tools include information about the new context that the learner attends to. The third element in our framework is the
‘workbench’ which includes dynamic mental processes that help the learner associate the source
and target tools. The fourth element is the ‘answer’ which is either an intermediate stopping
point or a final conclusion of the reasoning process, and sometimes a starting point of
metacognition.
Our adapted framework to study transfer is consistent with Redish's (2003) two-level
theoretical framework of associations and activations controlled by a learner’s epistemic mode.
We identify transfer as activation of associations between tools in the source (learning) and
target (transfer) contexts. Epistemic resources are ‘meta-tools’ that control which associations a
learner activates. For instance, a learner may selectively activate associations between
the target scenario and classroom knowledge and ignore her everyday experiences because her
epistemic resource directs her to see knowledge as propagated from authority and not created by
her based on her everyday experience. Based on the external input, including meta-messages
from the interviewer, a learner may be primed into a particular epistemic mode. The view of
transfer as a process of epistemologically controlled activation of associations between source
(learning) tools and target (transfer) tools is useful in characterizing dynamic transfer in an
interview.
The clinical interview is useful in observing how students construct and transfer their
knowledge dynamically. However, its goal of ‘measuring’ while not changing the knowledge
state of the learner limits the amount of scaffolding that the researcher can provide to the learner.
Therefore, although it may tell us the learner’s prior knowledge state, the clinical interview often
reveals little about how the learner will construct and transfer knowledge in a true instructional
setting when external inputs are provided. The teaching interview affords the researcher the
opportunity to investigate dynamic transfer and knowledge construction. The interviewer
engages the learners in ways similar to a teacher in a small group instructional setting, often
providing scaffolding such as hints, cues, hands-on learning, peer instruction, Socratic dialog,
etc. All of these interactions provide a rich repertoire of tools akin to those in a true instructional
setting. Therefore, the teaching interview provides yet another tool with which to study dynamic
transfer in ways that are consistent with contemporary models of transfer of learning.
ACKNOWLEDGEMENTS
This work was supported in part by U.S. National Science Foundation grants REC-
0087788 and REC-0133621. Views expressed are those of the authors and not necessarily those
of the Foundation. One of the authors (NSR) wishes to thank Paula Heron, University of
Washington for useful discussions and feedback.
REFERENCES
Adams, L., Kasserman, J., Yearwood, A., Perfetto, G. A., Bransford, J. D., & Franks, J. J.
(1988). The effects of facts versus problem-oriented acquisition. Memory and Cognition,
16, 167-175.
Allbaugh, A. R. (2003). The problem-context dependence of students' application of Newton's
Second Law, Ph.D. Dissertation. Unpublished Doctoral Dissertation, Kansas State
University, Manhattan, KS.
Anderson, J. R., & Thompson, R. (1989). Use of analogy in a production system architecture. In
S. Vosniadou & A. Ortony (Eds.), Similarity and analogical reasoning (pp. 367-397).
New York, NY: Cambridge University Press.
Bassok, M. (1990). Transfer of domain-specific problem-solving procedures. Journal of
Experimental Psychology: Memory, Learning, and Cognition, 16(3), 522-533.
Bransford, J. D., & Schwartz, D. (1999). Rethinking transfer: A simple proposal with multiple
implications. Review of Research in Education, 24, 61-100.
Broudy, H. S. (1977). Types of knowledge and purposes of education. In R. C. Anderson & R. J.
Spiro & W. E. Montague (Eds.), Schooling and the acquisition of knowledge (pp. 1-17).
Hillsdale, NJ: Erlbaum.
Brown, A. L., & Kane, M. J. (1988). Preschool children can learn to transfer: Learning to learn
and learning from example. Cognitive Psychology, 20, 493-523.
Brown, J. S., Collins, A., & Duguid, P. (1989). Situated cognition and the culture of
learning. Educational Researcher, 18, 32-42.
Chen, Z., & Daehler, M. W. (1989). Positive and negative transfer in analogical problem solving.
Cognitive Development, 4, 327-344.
Collins, A., & Ferguson, W. (1993). Epistemic forms and epistemic games: Structures and
strategies to guide inquiry. Educational Psychologist, 28(1), 25-42.
diSessa, A. A. (1988). Knowledge in pieces. In G. Forman & P. B. Pufall (Eds.), Constructivism
in the computer age (pp. 49-70). Hillsdale, NJ: Lawrence Erlbaum Associates.
diSessa, A. A. (1998). What changes in conceptual change? International Journal of Science
Education, 20(10), 1155-1191.
diSessa, A., & Wagner, J. (2004, this volume). What coordination has to say about transfer. In J.
Mestre (Ed.), Transfer of learning: Research and perspectives. Greenwich, CT:
Information Age Publishing.
Driscoll, M. P. (2000). Psychology of learning for instruction (2nd ed.). Needham Heights, MA:
Allen & Bacon Publishing.
Driver, R. (1995). Constructivist approaches to science teaching. In L. P. Steffe & J. Gale (Eds.),
Constructivism in education (pp. 385-400). Hillsdale, NJ: Lawrence Erlbaum Associates.
Dufresne, R., Mestre, J., Thaden-Koch, T., Gerace, W., & Leonard, W. (2004, this volume). The
dynamics of transfer as a sense-making enterprise. In J. Mestre (Ed.), Transfer of
learning: Research and perspectives. Greenwich, CT: Information Age Publishing.
Duncker, K. (1945). On problem solving. Psychological Monographs, 58(270).
Engelhardt, P. V., Corpuz, E. G., Ozimek, D. J., & Rebello, N. S. (2003). The teaching
experiment - What it is and what it isn't. Paper presented at the Physics Education
Research Conference, 2003, Madison, WI.
Engelhardt, P. V., Gray, K. E., Hrepic, Z., Itza-Ortiz, S. F., Allbaugh, A. R., Rebello, N. S., &
Zollman, D. A. (2003). A framework for student reasoning in an interview. Paper
presented at the Physics Education Research Conference, 2003, Madison, WI.
Engelhardt, P. V., Gray, K. E., & Rebello, N. S. (2004). How many students does it take before
we see the light? The Physics Teacher, 42, 216-221.
Engelhardt, P. V., & Rebello, N. S. (2003). Students' view of how sound is produced by musical
instruments. AAPT Announcer, 33(2), 124.
Engelhardt, P. V., Rebello, N. S., & Itza-Ortiz, S. F. (2003). Students' mental models of a
bicycle. Paper presented at the American Association of Physics Teachers, Winter
Meeting 2003, Austin, TX.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-
developmental inquiry. American Psychologist, 34, 906-911.
Flavell, J. H. (1987). Speculations about the nature and development of metacognition. In F. E.
Weinert & R. H. Kluwe (Eds.), Metacognition, motivation and understanding (pp. 906-
911). Hillside, NJ: Lawrence Erlbaum Associates.
Gentner, D. (1983). Structure mapping: A theoretical framework for analogy. Cognitive Science,
7, 155-170.
Gick, M. L., & Holyoak, K. J. (1980). Analogical problem solving. Cognitive
Psychology, 12, 306-355.
Glasersfeld, E. (1989). Cognition, construction of knowledge and teaching. Synthese, 80(1), 121-
140.
Gray, K. E. (2004). The effect of question order on student responses to multiple-choice physics
questions. Unpublished M.S. Dissertation, Kansas State University, Manhattan, KS.
Gray, K. E., Hrepic, Z., Itza-Ortiz, S. F., Allbaugh, A. R., Engelhardt, P. V., Rebello, N. S., &
Zollman, D. A. (2003). Implications of a framework for student reasoning in an
interview. Paper presented at the Physics Education Research Conference, 2003,
Madison, WI.
Greeno, J. G., Moore, J. L., & Smith, D. R. (1993). Transfer of situated learning. In D. K.
Detterman & R. J. Sternberg (Eds.), Transfer on trial: Intelligence, cognition and
instruction (pp. 99-167). Norwood, NJ: Ablex.
Hake, R. R. (1987). Promoting student crossover to the Newtonian world. American Journal of
Physics, 55, 878.
Hammer, D. (2000). Student resources for learning introductory physics. American Journal of
Physics - Physics Education Research Supplement, 68(7), S52-S59.
Hammer, D., & Elby, A. (2002). On the form of a personal epistemology. In B. K. Hofer & P. R.
Pintrich (Eds.), Personal epistemology: The psychology of beliefs about knowledge and
knowing (pp. 169-190). Mahwah, NJ: Lawrence Erlbaum.
Hammer, D., Elby, A., Scherr, R., & Redish, E. (2004, this volume). Resources,
framing, and transfer. In J. Mestre (Ed.), Transfer of learning: Research and
perspectives. Greenwich, CT: Information Age Publishing.
Hestenes, D., Wells, M., & Swackhammer, G. (1992). Force concept inventory. The Physics
Teacher, 30, 141-151.
Hoffding, H. (1892). Outlines of psychology. London: Macmillan.
Holyoak, K. J., & Thagard, P. (1989). Analogical mapping by constraint satisfaction. Cognitive
Science, 13, 295-356.
Hrepic, Z. (2002). Identifying students' mental models of sound propagation. Unpublished M.S.
Dissertation, Kansas State University, Manhattan, KS.
Hrepic, Z., Rebello, N. S., & Zollman, D. A. (2002). Identifying student models of sound
propagation. Paper presented at the 2002 Physics Education Research Conference, Boise,
ID.
Itza-Ortiz, S. F., Allbaugh, A. R., Engelhardt, P. V., Gray, K. E., Hrepic, Z., Rebello, N. S., &
Zollman, D. A. (2004). A framework for the dynamics of student reasoning in an
interview. Paper presented at the Annual Meeting of the National Association for
Research in Science Teaching 2004, Vancouver, BC.
Itza-Ortiz, S. F., Lawrence, B., & Zollman, D. A. (2003). Students energy models: Mechanics
through electromagnetism. AAPT Announcer, 33(2), 149.
Itza-Ortiz, S. F., Rebello, N. S., & Zollman, D. A. (2004). Students’ models of Newton’s second
law in mechanics and electromagnetism. European Journal of Physics, 25, 81-89.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language,
inference, and consciousness. Cambridge, MA: Harvard University Press.
Judd, C. H. (1908). The relation of special training to general intelligence. Educational Review,
36, 28-42.
Karplus, R. J. (1974). Science teaching and development of reasoning. Journal for Research in
Science Teaching, 12, 213-218.
Katu, N., Lunetta, V. N., & van den Berg, E. (1993). Teaching experiment methodology in the
study of electricity concepts. Paper presented at the Third International Seminar on
Misconceptions and Education Strategies in Science and Mathematics, Ithaca, NY.
Komorek, M., & Duit, R. (in press). The teaching experiment as a powerful method to develop
and evaluate teaching and learning sequences in the domain of non-linear systems.
Journal of Science Education.
Lave, J., & Wenger, E. (1991). Situated learning: Legitimate peripheral participation.
Cambridge, UK: Cambridge University Press.
Lobato, J. E. (1996). Transfer reconceived: How "sameness" is produced in mathematical
activity, Ph.D. Dissertation. Unpublished Ph.D. Dissertation, University of California,
Berkeley, Berkeley, CA.
Lobato, J. E. (2003). How design experiments can inform a rethinking of transfer and vice versa.
Educational Researcher, 32(1), 17-20.
Lockhart, R. S., Lamon, M., & Gick, M. L. (1988). Conceptual transfer in simple insight
problems. Memory and Cognition, 16, 36-44.
Mazur, E. (1997). Peer instruction: A user's manual. Upper Saddle River, NJ:
Prentice-Hall.
McKeough, R. E., Lupart, J., & Marini, A. (1995). Teaching for transfer: Fostering
generalization in learning. Mahwah, NJ: Erlbaum.
Minstrell, J. (1992). Facets of students' knowledge and relevant instruction. In R. Duit & F.
Goldberg & H. Niedderer (Eds.), Research in physics learning: Theoretical issues and
empirical studies (pp. 110-128). Kiel, Germany: Institut für Pädagogik der
Naturwissenschaften.
Nisbett, R. E., Fong, G. T., Lehmann, D. R., & Cheng, P. W. (1987). Teaching reasoning.
Science, 238, 625-630.
Novik, S., & Nussbaum, J. (1981). Brainstorming in the classroom to invent a model: A case
study. School Science Review, 62, 771-779.
Perfetto, G. A., Bransford, J. D., & Franks, J. J. (1983). Constraints on access in a problem solving
context. Memory and Cognition, 11, 12-31.
Piaget, J. (1929). The child's conception of the world. New York: Harcourt Brace.
Piaget, J. (1964). Development and learning. Journal of Research in Science Teaching, 2(3),
176-186.
Rebello, N. S., & Zollman, D. A. (2004). The effect of distracters on student performance on the
force concept inventory. American Journal of Physics, 72(1), 116-125.
Redish, E. F. (in press). A theoretical framework for physics education research:
Modeling student thinking. In E. Redish, C. Tarsitani & M. Vicentini (Eds.), Proceedings
of the Enrico Fermi Summer School, Course CLVI: Italian Physical Society.
Reed, S. K. (1993). A schema-based theory of transfer. In D. K. Detterman & R. J. Sternberg
(Eds.), Transfer on trial: Intelligence, cognition and instruction (pp. 39-67). Norwood,
NJ: Ablex.
Reed, S. K., Ernst, G. W., & Banerji, R. (1974). The role of analogy in transfer between similar
problem states. Cognitive Psychology, 6, 436-450.
Scherr, R. E., & Wittmann, M. C. (2002). The challenge of listening: The effect of researcher
agenda on data collection. Paper presented at the 2002 Physics Education Research
Conference, Boise, ID.
Schwartz, D. L., Bransford, J. D. & Sears, D. (2004, this volume). Efficiency and innovation in
transfer. In J. Mestre (Ed.), Transfer of learning: Research and perspectives. Greenwich,
CT: Information Age Publishing.
Singley, K., & Anderson, J. R. (1989). The transfer of cognitive skill. Cambridge, MA: Harvard
University Press.
Steffe, L. P. (1983). The teaching experiment methodology in a constructivist research program.
Paper presented at the Fourth International Congress on Mathematical Education.,
Boston, MA.
Steffe, L. P., & Thompson, P. W. (2000). Teaching experiment methodology: Underlying
principles and essential elements. In R. K. Lesh & A. E. Kelly (Eds.), Handbook of
research design in mathematics and science education (pp. 267-307). Hillsdale, NJ:
Erlbaum.
Thornton, R. K. (2002). Uncommon knowledge: Student behavior correlated to conceptual
learning. AAPT Announcer, 32(4), 79.
Thorndike, E. L. (1906). Principles of teaching. New York: A. G. Seiler.
Thorndike, E. L., & Woodworth, R. S. (1901). The influence of improvement in one mental
function upon the efficacy of other functions. Psychological Review, 8, 247-261.
Vosniadou, S. (1994). Capturing and modeling the process of conceptual change. Learning &
Instruction, 4, 45-69.
Ward, T. B., Smith, S. M., Vaid, J. (1997). Conceptual structures and processes in creative
thought. In T. B. Ward, S. M. Smith, & J. Vaid (Eds.), Creative thought: An investigation
of conceptual structures and processes (pp. 1-16). Washington, DC: American
Psychological Association.
Wertheimer, M. (1959). Productive thinking. New York: Harper & Row.
Wittmann, M. C., & Scherr, R. E. (2002). Student epistemological mode constraining researcher
access to student thinking: An example from an interview on charge flow. Paper
presented at the 2002 Physics Education Research Conference, Boise, ID.
Dynamic transfer 49
TABLE CAPTIONS
Table 1. Alignment of our analytical framework with some contemporary models of transfer.
FIGURE CAPTIONS
Figure 1. Redish’s (2003) “two-level” model showing associations and control.
Figure 2. Our model of transfer.
Figure 3. The figure accompanying the question asking the student to describe why the
listener can hear the speaker across the wall.
Figure 4. The “hockey puck” question -- Question # 8 on the Force Concept Inventory.
Figure 5. The “spaceship” question -- Question # 21 on the Force Concept Inventory.
Figure 6. The sled problem: The person slides a block off the sled once every 10 seconds.
Figure 7. The “airplane” question -- Question # 23 on the Force Concept Inventory.
Figure 8. The “cannon ball” question -- Question # 12 on the Force Concept Inventory.
Figure 9. Adapting the “double transfer” study (Schwartz et al., this volume) to the teaching
interview.
TABLES
Elements of Framework / What Some Contemporary Models of Transfer Say

External Inputs [I]: Information provided to the learner through interactions with the
interviewer and interview materials.
  “Transfer is distributed across mental, material, social [I] and cultural planes.” (Lobato,
  2003, p. 20)
  “…activities… can be defined socially [I] … in a transfer situation, to try to relate the
  situation to previous experience [T].” (Greeno et al., 1993, p. 100)

Tools [T]: Knowledge structures of varying grain sizes as well as prior experience, etc.
used in reasoning.
  “… influence of prior activity [T] on current activity…” “… a surface feature [T] may be
  structurally substantive to the learner.” (Lobato, 2003, p. 20)
  “… when transfer occurs it is because of general properties and relations [T] of the
  person’s interactions with features [T] of a situation.” (Greeno et al., 1993, p. 146)
  “The PFL (Preparation for Future Learning) perspective fits with Broudy’s (1977)
  arguments...” “People also ‘know with’ their previously acquired concepts and
  experiences [T].” (Bransford & Schwartz, 1999, p. 69)

Workbench [W]: Various mental processes used by the student that may utilize
information provided by the external input and tools.
  “What relations of similarity are created [W]?” “Dynamic production of sameness.”
  “Multiple processes [W] are involved.” (Lobato, 1996, 2003, p. 20)
  “…a symbolic representation [T] of structure is generated [W] in the transfer situation,
  based partly on information about another situation [T] that is retrieved.” (Greeno et al.,
  1993, p. 146)
  “From the PFL perspective, one looks for evidence of initial learning trajectories [W].”
  “… the focus shifts to whether they (the students) are prepared to learn [W] to solve new
  problems.” (Bransford & Schwartz, 1999, p. 69)

Answer [A]: A ‘stopping point’ in the reasoning process. It could also be a question or
request for information.
  “… one determinant about the course of future learning is the questions people ask [A]
  about a topic, because these questions reshape their learning goals.” (Bransford &
  Schwartz, 1999, p. 69)

TABLE 1
FIGURES
FIGURE 1
FIGURE 2
FIGURE 3
FIGURE 4
FIGURE 5
FIGURE 6
FIGURE 7
FIGURE 8
FIGURE 9
Proceedings of the NARST 2004 Annual Meeting (Vancouver, BC, Canada)
National Association for Research in Science Teaching (NARST) April 1-3, 2004
A FRAMEWORK FOR THE DYNAMICS OF STUDENT REASONING IN AN INTERVIEW
We propose a framework to characterize student reasoning during an interview. Our framework is based on data collected by five researchers, each with different goals. The research participants were enrolled in various introductory physics courses at Kansas State University. The framework has the following elements: ‘External Inputs’ (e.g. questions, verbal, graphic and other cues) from the interviewer and interview environment; ‘Tools’ (e.g. memorized and familiar formulae, laws and definitions, prior experiences) that the student uses; ‘Workbench’ encompassing mental processes (e.g. induction, accommodation) that incorporate the aforementioned inputs and tools; ‘Answer’ given by the student and reasoning paths connecting these elements. We have used a coding scheme to map out the reasoning paths in our framework. We discuss the applications and implications of our framework.
Salomon F. Itza-Ortiz, Kansas State University (a)
Alicia R. Allbaugh, Kansas State University (b)
Paula V. Engelhardt, Kansas State University
Kara E. Gray, Kansas State University
Zdeslav Hrepic, Kansas State University
N. Sanjay Rebello, Kansas State University
Dean A. Zollman, Kansas State University
Introduction
Interviews have long been used in physics education research. However, they are often influenced by the researcher’s agenda and the assumption that knowledge remains static while it is probed. This assumption is not always true. Sometimes students create answers as they speak; thus, we need to be cognizant of the factors that may influence a student’s responses. This paper addresses the following questions:
• How do students construct their reasoning during an interview?
• What factors mediate students’ sense-making processes during an interview?
Relevant Literature

Student knowledge has been described across a spectrum of grain size. Near one end of the spectrum, Driver (1995), Glasersfeld (1989) and others describe knowledge in terms of mental models. Learners test these models in light of new experiences and may then modify or reorganize them. Near the other end of the spectrum, diSessa (1988) believes in knowledge in pieces or “p-prims.” Minstrell (1992) has divided concepts into units called “facets.” Hammer (2000) describes “resources” as the smallest usable pieces of knowledge. Our framework, which describes knowledge change in an interview, is not anchored at any particular grain size; rather, we consider all grain sizes equally and simultaneously.
(a) Current affiliation: San Diego State University.
(b) Current affiliation: Rochester Institute of Technology.
Our framework describes knowledge change or cognitive dynamics in an interview. Piaget (1975) describes this change in terms of assimilation (adapting our experiences to fit our knowledge) and accommodation (modifying our knowledge to account for our experiences). More recently, researchers have talked about conceptual change in terms of conceptual combination (Ward, 1997) or hybridization (Hrepic, 2002). Researchers often use a flexible semi-structured interview format. This flexibility can make the format susceptible to a researcher’s bias. Recently, Scherr and Wittmann (2002) demonstrated how a researcher’s agenda “filters” out some of what the student is saying in an interview. Our framework enables a researcher to identify some of these “filters.”

Evolution of a Framework

Researchers in the KSU physics education research group often shared anecdotal experiences of their interviewees making up or changing responses in an interview. Therefore, we decided to re-examine our previous data from the perspective of the dynamics of student reasoning in an interview. We emphasize that these data were from five researchers working independently on different projects with different goals. The students were from diverse backgrounds (non-science majors, engineering/physics majors) in different introductory physics courses. Through deliberations we identified four common elements that encapsulated the dynamics of reasoning in an interview.

Elements of the Framework

Our framework is shown in Figure 1. The interconnecting arrows represent all possible reasoning paths followed by students as they articulate their response to an interviewer’s question.
Figure 1: Our framework with four interconnected elements.
External Inputs, denoted by {I}, are the inputs provided by the interviewer, such as protocol questions, follow-up or clarification questions, and hints or cues, both verbal and non-verbal. They also include other materials, e.g. text, pictures, demos, videos, etc., that the student is allowed to use.

Tools, denoted by {T}, include the knowledge structures that a student uses in her or his reasoning. Tools can be either pre-existing or created. Existing tools include a student’s prior experience, memorized information, facts, data, formulae, definitions, rules, procedures, etc. They also include knowledge structures of different grain sizes, ranging from p-prims or facets to mental models or theories. Additionally, tools include a student’s epistemological stance (Wittmann and Scherr, 2002) and expectations about the type of knowledge (“knowledge as fabricated stuff” vs. “knowledge as propagated stuff”) that can be used in a given situation. Created tools are knowledge and experiences dynamically constructed at an earlier instance in the interview, such as answers to or knowledge acquired through previous questions.

Workbench, denoted by {W}, includes mental processes used by the student. These processes activate dormant knowledge in {T}, such as executing a known rule or procedure. These processes often reorganize and restructure knowledge (e.g. assimilation, accommodation) or synthesize different pieces of knowledge (e.g. conceptual combination, hybridization). {W} includes transferring and applying prior knowledge and experiences in new situations, such as analogical, inductive or deductive reasoning, as well as decision making. The latter can occur when a student decides that a given analogy or explanation is applicable to the situation at hand or when the student has to choose an answer from more than one option.
Answers, denoted by {A}, are the conclusion of a reasoning process, but could be articulated first by the student. Answers could also be an intermediate stopping point. This type of situation occurs during metacognition (Flavell, 1979). Answers can be decisive, i.e. a single conclusion, or indecisive, e.g. two or more answers, “don’t know” or a request for more information. In the latter case {A} is in fact a question.

Applying the Framework -- Analyzing Students’ Reasoning Paths

Our framework can unearth some interesting reasoning paths used by students and their components. An example (Figure 2) from our interview data illustrates the details of a cognitive conflict experienced by a student during an interview. Cognitive conflict or dissonance (Festinger, 1957) can help students learn science (Hewson, 1984). Piaget’s (Piaget, 1975) cognitive disequilibrium occurs during assimilation and accommodation (both {W}), when a learner’s internal knowledge {T} conflicts with her/his external experience in a discrepant event {I}.
Figure 2: Conflict resolution reasoning path
When asked to predict how the brightness of two bulbs in parallel will compare to a single bulb {I1}, the student answers based on a p-prim (more is less) {T1}, and elaborates {W1} their answer: less bright {A1}.

Interviewer: {I1} How will they (two bulbs in parallel) compare now (to one battery and one bulb)?
Student: {A1} I still think it won’t be as bright as a single bulb {T1} because you still have two bulbs to light. {W1} It will still be less than the first (one battery and one bulb) because you still have energy, you still have to share between two bulbs instead of just one.
Interviewer: {I2} So what happened? (Interviewer completed circuit and bulbs light.)
Student: {W2t} Well, you just have that constant energy going to each {A2} so it stays the same.

The interviewer completes the circuit so
that the bulbs light and asks what happened {I2}. The student answers that they stayed the same {A2}, reasoning that the energy must be the same going to each bulb {W2t}. The tool, which is implicit, is denoted by ‘t.’

Advantages of Using the Framework

The process of identifying various elements of the framework in an interview transcript forces a researcher to carefully consider what the student is saying, without overlooking words or phrases which may have been filtered out by the research agenda. The framework urges the researcher to look for evidence of each of these four elements. Therefore, using this framework alerts the researcher to the absence of one or more of these elements, especially {T} and {W}, thereby avoiding an exclusive focus on {A}. By interconnecting the elements, the researcher can carefully trace the effect of various inputs and cues. For instance, the {T} that a student uses when presented with a particular input {I} may have been lost if the focus had been only on {W} or {A}. The framework can help the researcher design questions that elicit cognitive tools {T} and processes {W}. During the interview, the framework can help the interviewer ask follow-up questions {I} that explicate students’ reasoning. The framework can also help the researcher glean overall trends in a student’s reasoning across several questions, or to analyze a transcript at multiple grain sizes. The example below shows a transcript analyzed at two grain sizes (Figure 3). We can use a ‘fine brush’ to see details that emerge from the data, such as small grain size knowledge elements (e.g. resources), selection of various tools, and the back and forth deciding between different answers. We can also use a ‘broad brush’ to see global trends in the data and large grain size knowledge elements (e.g. mental models).
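The bookkeeping behind this coding scheme can be sketched as a small data structure: each utterance (or part of one) in a transcript is tagged with a framework element (I, T, W or A) and an instance number, and reading the tags in order traces out the reasoning path. The sketch below is a hypothetical illustration of that idea, not software used by the authors; the `Segment` class, the `reasoning_path` helper, and the sample coding of the parallel-bulbs exchange are all our own illustrative assumptions.

```python
# Hypothetical sketch of the interview-coding bookkeeping described in the
# text: transcript segments are tagged with framework elements
# (I = external input, T = tool, W = workbench, A = answer), and the tags,
# read in order, form the reasoning path. Not the authors' actual software.

from dataclasses import dataclass

ELEMENTS = {"I", "T", "W", "A"}


@dataclass
class Segment:
    element: str   # one of "I", "T", "W", "A"
    index: int     # instance number, e.g. the 1 in {I1}
    text: str      # words or phrase taken from the transcript

    def tag(self) -> str:
        # Render the code used in the text, e.g. "I1" for {I1}.
        return f"{self.element}{self.index}"


def reasoning_path(segments):
    """Return the reasoning path as a list of tags, e.g. ['I1', 'T1', 'W1', 'A1']."""
    for s in segments:
        if s.element not in ELEMENTS:
            raise ValueError(f"unknown framework element {s.element!r}")
    return [s.tag() for s in segments]


# Illustrative coding of the first question in the parallel-bulbs exchange.
coded = [
    Segment("I", 1, "How will two bulbs in parallel compare to one bulb?"),
    Segment("T", 1, "more is less (p-prim): two bulbs must share the energy"),
    Segment("W", 1, "elaborates: energy shared between two bulbs instead of one"),
    Segment("A", 1, "it won't be as bright as a single bulb"),
]

print(" -> ".join(reasoning_path(coded)))  # I1 -> T1 -> W1 -> A1
```

Because each segment carries its transcript text, the same list supports both the ‘fine brush’ reading (inspect every segment) and the ‘broad brush’ reading (look only at the tag sequence).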
Interviewer: {I} So how many gears do you think this one has (bike 1)?
Student: {W1} Well, my first guess {A1} is a 10 speed {T1} because this is the size they usually are, {A2} but maybe it's a three speed. {T2} It's got three little thingies. {W2} If I was going to use reason {A2} I guess I'd say three; {W1} if I were going to use a guess {A1} I'd say a 10 speed.

Figure 3: Analyzing the transcript above at two grain sizes – Fine and Coarse

Our framework can be applied in two ways. First, it can be used to understand what students say, by categorizing various words and phrases in the transcript as {I}, {T}, {W} or {A}. Second, it can be used to infer what students think. This mode of application is more susceptible to researcher interpretation and bias than the first. In the example below (see Table 1), a student was asked to explain how sound propagates through the wall. By parsing the student's response one can identify {W}, {T} and {A} as they chronologically occur in the transcript. A researcher can also infer that the student uses analogical reasoning (Gentner, 2000) involving three {W} processes: recognizing a target {T}, abstracting structural similarities between source and target, and mapping similarities from source to target. The first of these processes is somewhat evident in the transcript. The other two are inferred, based on our theoretical understanding of analogical reasoning. Therefore, the reasoning path goes back to {W} (for abstracting and mapping) before terminating at {A}. Note that there was no attempt made in the
inferential analysis to separate the abstraction and mapping processes in {W}. This demonstrates that although the framework can bridge data with theory, use of the framework is ultimately grounded in the data.
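One simple way to operationalize the fine and coarse analyses discussed above is to record the fine analysis as a sequence of element codes and derive a coarser reasoning path by merging runs of consecutive identical codes. The sketch below is one plausible procedure under that assumption, not the paper's actual coding rules, and the code sequence is illustrative rather than taken from the study's data.

```python
from itertools import groupby

# Illustrative fine-grained coding of one response: several consecutive
# {W} and {T} codes, as when small knowledge elements (e.g. resources)
# are activated one after another.
fine = ["I", "W", "W", "T", "T", "W", "A", "W", "A"]

# Coarse pass: collapse each run of identical consecutive codes into a
# single step of the reasoning path.
coarse = [code for code, _run in groupby(fine)]

print(coarse)  # ['I', 'W', 'T', 'W', 'A', 'W', 'A']
```

The back-and-forth between {W} and {A} at the end survives the coarse pass, so global trends such as indecision between answers remain visible at the larger grain size.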
Table 1: Applying the framework in different ways

What the student says:
{I} Asked how sound gets to the other side of a wall.
{W} "Well, I would say that to me it is somewhat like {T} a maze for the sound {A} it just kind of works its way through until it gets to the other side."

What we infer the student thinks:
The student recognizes {W} that the situation is analogous to a maze {T} for the sound. She applies the analogy to deduce {W} that the sound works its way through until it gets to the other side of the wall {A}.
Connections with Cognitive Psychology

It may be evident from the nomenclature of the various elements that our framework uses the metaphor of a workshop. The input {I} is analogous to the work order given to a worker (e.g. build a chair). The tools {T} are analogous to the tangible implements (e.g. a saw) that the worker uses, as well as her/his skills in performing the task. The workbench {W} is analogous to the work area (e.g. a work table) as well as the fabrication processes. The answer {A} provided by the student is analogous to the finished product (e.g. the chair) constructed by the worker.

Our framework also has underpinnings in cognitive psychology (Driscoll, 2000). The sensory input and response are analogous to {I} and {A} respectively. The short-term (working) memory and the mental processes occurring therein are analogous to {W}. The long-term memory and the information stored therein are analogous to the tools {T}.

Our framework also shares commonalities with a metaphor in cognitive psychology – the computer. Input {I} is analogous to input devices (e.g. keyboard). Answer {A} is analogous to output devices (e.g. monitor). Tools {T} are analogous to stored information (data, software, etc.) on the hard drive. Workbench {W} is analogous to active processes in a processor or RAM.

Limitations of Framework

The descriptions of the various elements in our framework are not exhaustive; e.g. {W} can include processes (such as abduction (Josephson, 1994)) that we have not mentioned. It is possible that a student's statement cannot be uniquely categorized as a particular type of
tool. For instance, a {T} prior experience (e.g. pushing a grocery cart) could also be a p-prim (motion implies force). Similarly, in {W} two processes can be inseparable; e.g. abduction includes decision making. The boundaries between the various elements in our framework can often be difficult to distinguish; e.g. the procedure "If 'X' then 'Y'" could be either a {T} or a {W}. Elements can sometimes be implicit; e.g. the answer {A}, "It speeds up because a net force acts on it," implicitly uses a {T}, Newton's II law. Our framework may not characterize a student's reasoning definitively. It is plausible that two researchers analyzing the same transcript may arrive at slightly different descriptions of a student's reasoning path. Therefore, our framework is susceptible to a researcher's bias in ways similar to other qualitative methods.

We determined the inter-rater reliability of the coding scheme based on our framework as follows. Four researchers involved in this project pooled two transcript segments from each of their data sets. Each segment was coded by two different researchers, who had not originally collected the data. The inter-rater reliability, averaged over the four pairs of researchers who coded the transcripts, was 81% ± 6% for the fine analysis and 67% ± 5% for the coarse analysis.

Summary and Conclusions

Our research has shown that students indeed do construct their reasoning during the course of an interview. Therefore, students' dynamic sense-making processes in an interview, and the factors that control these processes, are worthy of attention. In carefully re-analyzing interview transcripts from our data we conclude the following:
• Students' reasoning in an interview can be described in terms of an analytical framework that comprises four elements. Three of these elements – Tools, Workbench and Answer – together describe the cognitive processes through which the student constructs her/his response to a question.
• Students' sense-making processes are often controlled by the fourth element, i.e. the external input provided to the student by the interviewer. The external input may provide tools that a student uses in her/his reasoning. More subtly, the external input can also cue the student into a certain epistemic mode and thereby indirectly affect the types of knowledge that she/he utilizes in the reasoning process.
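The inter-rater reliability figures reported above are percent agreements: the fraction of coded segments to which two coders assigned the same framework element. A minimal sketch, with illustrative code sequences rather than the study's actual data:

```python
def percent_agreement(codes_a, codes_b):
    """Fraction of segments to which two coders assigned the same
    framework element ({I}, {T}, {W} or {A})."""
    if len(codes_a) != len(codes_b):
        raise ValueError("both coders must code the same segments")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

# Illustrative codes for ten segments of one transcript,
# assigned independently by two coders.
coder_1 = ["I", "T", "W", "A", "W", "A", "T", "W", "A", "A"]
coder_2 = ["I", "T", "W", "A", "T", "A", "T", "W", "W", "A"]

print(percent_agreement(coder_1, coder_2))  # 0.8, i.e. 80% agreement
```

In this hypothetical example the coders disagree on two of ten segments (a {W}/{T} boundary case and a {W}/{A} boundary case), the kinds of ambiguity noted in the limitations above.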
Acknowledgements

This work was supported by the U.S. National Science Foundation grants REC-0087788 and REC-0133621.

References

diSessa, A. A. (1988). Knowledge in pieces. Constructivism in the computer age. G. Forman and P. B. Pufall (Eds.). Hillsdale, NJ, Lawrence Erlbaum Associates: 49-70.
Driscoll, M. P. (2000). Psychology of Learning for Instruction. Needham Heights, MA, Allen and Bacon Publishing.
Driver, R. (1995). Constructivist approaches to science teaching. Constructivism in Education. L. P. Steffe and J. Gale (Eds.). Hillsdale, NJ, Lawrence Erlbaum Associates: 385-400.
Festinger, L. (1957). A Theory of Cognitive Dissonance. Stanford, CA, Stanford University Press.
Flavell, J. H. (1979). "Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry." American Psychologist 34: 906-911.
Gentner, D., Holyoak, K.J., and Kokinov, B. (2000). Analogy: Perspectives from Cognitive Science. Cambridge, MA, MIT Press.
Glasersfeld, E. von (1989). "Cognition, construction of knowledge and teaching." Synthese 80(1): 121-140.
Hammer, D. (2000). "Student Resources for Learning Introductory Physics." American Journal of Physics - Physics Education Research Supplement 68(7): S52-S59.
Hewson, P. W. and Hewson, M. G. A. (1984). "The role of conceptual conflict in conceptual change and the design of science instruction." Instructional Science 13: 1-13.
Hrepic, Z., Rebello, N. S., Zollman, D. A. (2002). Identifying student models of sound propagation. 2002 Physics Education Research Conference, Boise, ID, PERC Publishing.
Josephson, J. R., Josephson, S. G. (1994). Abductive Inference: Computation, Philosophy, Technology. New York, NY, Cambridge University Press.
Minstrell, J. (1992). Facets of students' knowledge and relevant instruction. Research in Physics Learning: Theoretical Issues and Empirical Studies. R. Duit, F. Goldberg and H. Niedderer. Kiel, Germany, Institut für Pädagogik der Naturwissenschaften.
Piaget, J. (1975). The equilibration of cognitive structures. Chicago, IL, University of Chicago Press.
Scherr, R. E., Wittmann, M. C. (2002). The challenge of listening: The effect of researcher agenda on data collection. 2002 Physics Education Research Conference, Boise, ID, PERC Publishing.
Ward, T. B., Smith, S. M., Vaid, J. (1997). Conceptual structures and processes in creative thought. Creative Thought: An Investigation of Conceptual Structures and Processes. T. B. Ward, Smith, S. M., Vaid, J. Washington, DC, American Psychological Association.
Wittmann, M. C., Scherr, R. E. (2002). Student epistemological mode constraining researcher access to student thinking: An example from an interview on charge flow. 2002 Physics Education Research Conference, Boise, ID, PERC Publishing.
A framework for student reasoning in an interview

Paula V. Engelhardt, Kara E. Gray, Zdeslav Hrepic, Salomon F. Itza-Ortiz, Alicia R. Allbaugh, N. Sanjay Rebello & Dean A. Zollman
Physics Department, Kansas State University, Manhattan KS 66506-2601
Abstract: We propose a framework to characterize students’ reasoning in an interview. The framework is based on interview data collected by five researchers with different research goals. The participants were enrolled in various introductory physics courses at Kansas State University (KSU). Our framework includes external inputs (e.g. questions asked, verbal, graphic and other cues) from the interviewer and interview environment; tools (e.g. memorized or familiar formulae, laws and definitions, prior experiences) that the student brings to the interview; a workbench encompassing mental processes (e.g. induction, accommodation) that incorporate the inputs and tools; and the answer given by the student. We describe how the framework can be used to analyze interview data.
Introduction

Interviews have long been used in physics education research (PER). At least two issues influence the interpretation of interview data. First is the researcher's agenda. Second is the assumption that knowledge remains static while it is probed in an interview. This assumption overlooks situations in which students make up answers as they speak, especially when asked questions they may never have previously considered. Therefore, we need to be cognizant of the factors that may influence students' responses.
This paper addresses the following questions: How do students construct their reasoning during an interview? What factors mediate students' sense-making processes during an interview? In light of these questions, we carefully examined a vast data set, which led to the emergence of a theoretical framework.

Relevant Literature
The above questions pertain to interviews that investigate student knowledge. Therefore, we are concerned with the interview as well as the object of its investigation – knowledge and reasoning.
Researchers have different ways of describing student knowledge. Driver, [1] Glasersfeld, [2] Redish [3] and others describe knowledge in terms of mental models that minimize the mental energy. Learners test these models in light of new experiences to modify or reorganize the models. These models can be nebulous, complex structures incorporating incomplete, overlapping and even contradictory ideas. They may involve multiple representations, myriad rules and procedures, or schemas that the student may not even be aware of. diSessa [4] believes in knowledge in pieces, or "p-prims." Minstrell [5] has divided concepts into units called "facets." Hammer [6] describes "resources" as the smallest usable pieces of knowledge. Our framework is not anchored at any particular grain size; rather, we consider all grain sizes equivalently.
Our framework pertains to the dynamics of reasoning and knowledge change in an interview. Piaget [7] describes this change in terms of assimilation (adapting our experiences to fit our knowledge) and accommodation (modifying our knowledge to account for our experiences). More recently researchers have talked about conceptual change in terms of conceptual combination [8] or hybridization [9].
Physics education researchers typically use a flexible semi-structured interview format that allows for follow-up questions. This flexibility makes the semi-structured format susceptible to a researcher's bias. Recently, Scherr & Wittmann [10] demonstrated how a researcher's agenda implicitly "filters" what the student is saying in an interview. Our framework provides an explicit filter through which to examine what a student is saying in an interview.

Evolution of a Framework
Researchers in the KSU PER Group are working on projects with different goals and use varying degrees of semi-structured interviews. In sharing our findings we discovered that we had all encountered interviewees who made up or changed their responses to interview questions as the interview progressed. Therefore, we decided to re-examine our data from the perspective of the dynamics of student reasoning in the interview.

Fig. 1: Four elements connected through all possible reasoning paths.
It is important to emphasize that these data were from five different researchers working independently on their respective projects. Their goals included investigations of students' use of Newton's second law, models of sound propagation, real-world devices, electric circuits and the effect of question order. The students interviewed were from diverse backgrounds (non-science majors to engineering/physics majors) in introductory physics classes ranging from concept-based to calculus-based. Through several deliberations we identified four common elements that formed the basis of our framework.

Elements of the Framework
The elements of our framework emerged through analyzing and parsing several interview transcripts to understand the role each word or phrase played in the student's reasoning process. Using this method we identified four elements that were common to all transcripts. These elements are shown in Figure 1. The interconnecting arrows represent all possible reasoning paths followed by students in an interview. The four elements are discussed below.

External inputs, denoted by {I}, are the inputs provided by the interviewer, such as protocol questions, follow-up or clarification questions, hints or cues. They also include other materials such as text, pictures, demos, videos, etc. that the student is allowed to use. Typically, a student does not directly control {I}, but rather responds to it. However, a clarification or follow-up question may be prompted by what a student says.

Tools, denoted by {T}, include a vast array of cognitive entities that a student uses in her or his reasoning. Tools can be broadly categorized into pre-existing tools that the student brings into the interview, or created tools that a student may construct at an earlier time in the interview and reuse later.
Existing tools include a student’s prior experience gained through everyday life or instruction. These tools also include a student’s internal knowledge in a dormant state, which includes memorized information such as facts, data, formulae, definitions, rules, procedures, etc. It also includes knowledge structures of different
grain sizes ranging from p-prims or facets of smaller grain size to mental models or theories that have a larger grain size. In addition to learned knowledge and prior experiences, tools can also include a student’s epistemology and expectations about the nature of knowledge that is appropriate in a given situation.
Created tools include knowledge and experiences dynamically constructed at an earlier instance in the interview. Typically these might be answers to previous questions that the student refers back to during the interview. They could also include experiences or knowledge of varying grain sizes that a student has acquired while reasoning through previous questions in the interview.

Workbench, denoted by {W}, includes the various mental processes used by the student. These processes may utilize {I} as well as activate the existing or previously created dormant knowledge and prior experiences in {T}, such as executing a known rule or procedure.
{W} includes processes that reorganize and restructure knowledge such as assimilation and accommodation. {W} also includes processes in which students combine different pieces of knowledge such as conceptual combination or hybridization. Additionally {W} includes processes which transfer and apply prior knowledge and experiences in new situations such as the processes inherent in analogical, inductive or deductive reasoning. Finally, {W} also includes the process of decision making. Decision making can occur when a student decides that a given analogy or explanation is applicable to the situation at hand. Decision making can also occur in situations when the student has arrived at more
than one plausible answer and has to choose between them.

Fig. 2: Figure accompanying interview question.

Fig. 3: Reasoning path.

Answers, denoted by {A}, mark the conclusion of the reasoning process. It is important to emphasize that the answer does not necessarily occur at the end of the response given by the student. Sometimes the answer is only an intermediate stopping point. For instance, a student might arrive at a particular {A}, decide to rethink a given question and therefore continue the reasoning process.
Answers can broadly be categorized into three types. A decisive answer is one in which the student arrives at a single conclusion, which could be either correct or incorrect. A student may also give an indecisive answer. This situation can occur when a student has arrived at two or more answers and is unable to choose between them, or when a student requests more information from the interviewer. In the latter case {A} will in fact be phrased as a question. Finally, an acceptable {A} could also be one in which the student has no answer, e.g. when she simply says "I don't know" and does not request further information from the interviewer.

Using the Framework
We demonstrate the framework with a specific example in which the student was asked to walk the interviewer through a Force Concept Inventory [11] question (# 18), given the figure (Fig. 2).
Coding: The transcript is parsed into words and phrases corresponding to {I}, {T}, {W} or {A}:

Interviewer {I}: Okay, if you can walk me through this [hockey puck] problem (Fig. 2).
Student: {T} Well, from watching the hockey games, um, {W} the puck would s-, when it was hit it would stop it's um whatever the horizontal, what appears to be horizontal in this picture, um that speed would stop and it would then move ahead. Um, it completely changes directions, {A} so I would say it would be number [choice] 1. Um - Yeah that's all I can think of on that one.
Analysis: When asked the hockey puck question {I}, the student recalls his prior experience (watching hockey games) {T} and applies it to select {W} choice 1 {A} for the path of the puck. The reasoning path is depicted in Fig. 3.
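The coding step above lends itself to a simple data representation. The sketch below stores each coded phrase as an (element, text) pair, reads off the reasoning path, and flags any of the four elements the coder found no evidence for. The function names are illustrative, not part of the paper's method, and the segment texts are abridged from the hockey-puck transcript.

```python
# A coded transcript as a sequence of (element, phrase) pairs.
segments = [
    ("I", "Okay, if you can walk me through this [hockey puck] problem."),
    ("T", "Well, from watching the hockey games,"),
    ("W", "when it was hit ... it completely changes directions,"),
    ("A", "so I would say it would be number [choice] 1."),
]

ELEMENTS = {"I", "T", "W", "A"}

def reasoning_path(coded_segments):
    """Sequence of framework elements in the order they were coded."""
    return [element for element, _ in coded_segments]

def missing_elements(coded_segments):
    """Elements with no evidence in the transcript; their absence is a
    cue for follow-up questions, especially ones targeting {T} or {W}."""
    return ELEMENTS - {element for element, _ in coded_segments}

print(reasoning_path(segments))    # ['I', 'T', 'W', 'A']
print(missing_elements(segments))  # set(): all four elements present
```

For this example the path is the straightforward {I} → {T} → {W} → {A} of Fig. 3; a non-empty result from the missing-element check would signal where the interviewer should probe further.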
This example was chosen primarily because it clearly demonstrates the mechanics of coding and how the framework enables a researcher to identify the various elements. More interesting examples will be discussed in a second paper.

Some Caveats
A few remarks are in order. First, the descriptions of the various elements in our framework are not exhaustive. For instance, {I} could include non-verbal cues, such as the interviewer's gestures or facial expressions, that we did not explicitly include in our framework. Similarly, {W} could include several mental processes, such as abduction, [12] that we have neglected to mention.

Second, the various entities within a given element are not mutually exclusive. For instance, when a student refers to a specific {T}, say a prior experience (e.g. pushing a grocery cart), she may also be using a p-prim (motion implies force) which is related to this experience. While she explicitly states the former, she may also be using the latter. Similarly, in {W} two or more processes can equivalently describe a student's thinking. For instance, abduction involves decision making.

Third, the boundaries between the various elements are often difficult to distinguish. For instance, a mental model that is procedural in nature (e.g. If 'X' then 'Y') could be categorized as either a {T} or a {W}. The use of an element
can sometimes be implicit. For instance, answer {A} (“It speeds up because a net force acts on it”) implicitly uses a {T} (Newton’s II law) although the student does not explicitly state the tool.
Our framework does not characterize a student's reasoning definitively. The inter-rater reliability is about 80%. Our framework is susceptible to a researcher's bias in ways similar to other methods of qualitative research analysis.

Why use our Framework?
The process of coding the transcript forces a researcher to carefully consider what the student is saying, without overlooking words or phrases that may have been filtered out by the research agenda. The researcher is urged to look for evidence of each of the four elements; therefore, using this framework alerts the researcher to the absence of one or more of these elements, especially {T} and {W}, thereby enabling her to look past {A} and ask appropriate follow-up questions. By interconnecting the elements, the researcher can carefully trace the effect of various inputs and cues, such as a {T} that a student uses when presented with a particular input {I}.
Our framework can be used not merely in the analysis of interview data but also in the planning and design of an interview protocol. Interviewers can use their knowledge of the framework to frame questions that elicit the relevant tools and workbench processes that a student uses. Similarly, by being aware of the framework the interviewer can ask appropriate follow-up questions to elicit these tools and processes.
In the next paper in these Proceedings we present several examples that demonstrate how our framework can identify interesting reasoning paths. We also discuss the implications of our framework as a research tool.

Acknowledgements

This work was supported in part by the NSF grants REC-0087788 and REC-0133621.

References Cited

1. Driver, R., Constructivist approaches to
science teaching, in Constructivism in Education, L. P. Steffe and J. Gale, Editors. 1995, Lawrence Erlbaum Associates: Hillsdale, NJ. p. 385-400.
2. Glasersfeld, E., Cognition, construction of knowledge and teaching. Synthese, 1989. 80(1): p. 121-140.
3. Redish, E.F., The Implications of Cognitive Studies for Teaching Physics. American Journal of Physics, 1994. 62(6): p. 796-803.
4. diSessa, A.A., Knowledge in pieces, in Constructivism in the computer age, G. Forman and P.B. Pufall, Editors. 1988, Lawrence Erlbaum Associates: Hillsdale, NJ. p. 49-70.
5. Minstrell, J., Facets of students' knowledge and relevant instruction, in Research in Physics Learning: Theoretical Issues and Empirical Studies, R. Duit, F. Goldberg, and H. Niedderer, Editors. 1992, Institut für Pädagogik der Naturwissenschaften: Kiel, Germany.
6. Hammer, D., Student Resources for Learning Introductory Physics. American Journal of Physics - Physics Education Research Supplement, 2000. 68(7): p. S52-S59.
7. Piaget, J., Development and Learning. Journal of Research in Science Teaching, 1964. 2(3): p. 176-186.
8. Ward, T.B., Smith, S. M., Vaid, J., Conceptual structures and processes in creative thought, 1997, American Psychological Association: Washington, DC.
9. Hrepic, Z., Rebello, N. S., Zollman, D. A. Identifying student models of sound propagation in Physics Education Research Conference. 2002. Boise, ID: PERC Publishing.
10. Scherr, R.E., Wittmann, M. C. The challenge of listening: The effect of researcher agenda on data collection in Physics Education Research Conference. 2002. Boise, ID: PERC Publishing.
11. Hestenes, D., M. Wells, and G. Swackhamer, Force Concept Inventory. The Physics Teacher, 1992. 30: p. 141-151.
12. Josephson, J.R., Josephson, S. G., Abductive Inference: Computation, Philosophy, Technology. 1994, New York, NY: Cambridge University Press.
Example 1
Interviewer: {I} How does turning the pedals make the rear wheel move? (Real bike provided)
Student: {A} Because it has a chain {T} it's kinda like a pulley, almost like an elevator in a way, how this is set up. {W} It just grabs onto this little round thing (a sprocket), but it works like a pulley thing. As this moves it in turn makes this sprocket move, which in turn is connected to this, that rotates this as this is rotating.
Implications of a framework for student reasoning in an interview

Kara E. Gray, Zdeslav Hrepic, Salomon F. Itza-Ortiz, Alicia R. Allbaugh, Paula V. Engelhardt, N. Sanjay Rebello & Dean A. Zollman
Physics Department, Kansas State University, Manhattan KS 66506-2601
Abstract: We discuss the implications of a framework to characterize student reasoning in an interview and its underpinnings in cognitive psychology. Our framework, described in a previous paper in these Proceedings, enables a researcher to identify various cognitive elements used by a student during an interview. Our thesis is that this framework can help identify reasoning paths used by the students. We discuss how this framework can be applied to both a coarse and fine grained analysis of reasoning and how it can be used to infer a student’s implicit reasoning processes.
Summary of the Framework

From our diverse interview data, we have constructed a framework for student reasoning in an interview. Our framework consists of four elements: 1) External inputs {I} (e.g. questions; verbal, graphic and other cues) from the interviewer and interview environment. 2) Tools {T} (e.g. memorized facts, formulae, laws and definitions, as well as prior experiences) that the student brings to the interview. 3) Workbench {W} encompassing mental processes (e.g. induction, accommodation) that incorporate {I} and {T}. 4) The answer {A} given by the student.

Connections with Cognitive Psychology
It may be evident from the nomenclature of various elements that our framework uses the metaphor of a workshop. The input {I} is analogous to the work order given to a worker (e.g. build a chair). The tools {T} are analogous to the tangible implements (e.g. saw) that the worker uses, as well as her skills in performing the task. The workbench {W} is analogous to the work area (e.g. work table) as well as the fabrication processes. The answer {A} provided by the student is analogous to the finished product (e.g. chair) constructed by the worker.
Our framework also has underpinnings in cognitive psychology. The sensory input and response are analogous to {I} and {A} respectively. The short-term (working) memory, and the mental processes occurring therein are analogous to {W}. The long-term memory and information stored therein are analogous to tools {T}.
Our framework also shares commonalities with a metaphor in cognitive psychology – the computer. Input {I} is analogous to input devices (e.g. keyboard). Answer {A} is analogous to output devices (e.g. monitor). Tools {T} are analogous to stored information (data, software, etc.) on the hard drive. Workbench {W} is analogous to active processes in a processor or RAM.

Some Interesting Reasoning Paths
Our framework can unearth some interesting reasoning paths used by students, as shown below.

Analogical Reasoning: Analogies can be powerful reasoning tools. [1] An analogy involves two main components – source and target. In our framework the target is provided by {I}; however, the source is the tool {T} that the student selects. Analogical reasoning involves three processes in the workbench {W}. First is recognizing, i.e. finding, {T}. Second is abstracting the structural similarities between source and target. Third is mapping these similarities from source to target.
In Example 1, the student uses a real-world analogy to answer a question about a bike. When asked how the pedals make the rear wheel move {I}, the student uses an analogy of a pulley in an elevator {T} (source). The first process in {W} (recognition) is implicit. The student explicates
the other two processes (abstracting and mapping). She talks about the mechanism and how it makes the wheel move via the chain {A}.

Example 2
Interviewer: {I1} How will they (2 bulbs in parallel) compare now (to one battery and one bulb)?
Student: {A1} I still think it won't be as bright as a single bulb {T1} because you still have two bulbs to light. {W1} It will still be less than the first (one battery and one bulb) because you still have energy, you still have to share between two bulbs instead of just one.
Interviewer: {I2} So what happened? (Interviewer completed circuit and bulbs light.)
Student: {A2} It stayed the same. {W2t} Well, you just have that constant energy going to each.

Fig. 1: Conflict resolution reasoning path.

Example 3
Interviewer: {I} What happens when it [the sound] propagates [through the wall], and what happens when [i.e. why does] it get quieter?
Student: {W1} (Pause) Well, if I would say {T1} it [i.e. the sound] was material, {W2} which I don't think it is … it would go through here (sketches the path shown below), and it would hit some of these until it'd lose some of its strength. {T2(A0)} But, since I said it's not material, I'm not sure…. {W3} So, maybe I'll have to go back and say that maybe there is something material in it, because … I don't know why else {T3} it would … be louder on … [one] side and quieter on the other side. {W4} [Unless] if it was material … I'm still having a hard time thinking that, like the vibrations are material, but on the other hand {A} I don't know why they, like how this [sound diminishing] would happen if it wasn't material.

Conflict Resolution: Cognitive conflict or dissonance [2] can help students learn science. [3] Piaget's [4] cognitive disequilibrium occurs during assimilation and accommodation (both {W}), when a learner's internal knowledge {T} conflicts with her experience in a discrepant event {I}.
In Example 2, when asked to predict how the brightness of two bulbs in parallel will compare to a single bulb {I1}, the student answers based on a model {T1} in which the battery supplies a fixed amount of energy that is shared by the two bulbs in parallel. She applies {W1} this model to conclude that the bulbs will be less bright {A1}. The interviewer completes the circuit so that the bulbs light and asks what happened {I2}. The student answers that they stayed the same {A2}, reasoning that the energy must be the same going to each bulb {W2t}. The tool, which is implied, is denoted by 't.' Fig. 1 shows the reasoning path.

Metacognition, or "thinking about thinking," was first defined by Flavell. [5] Metacognition is often described in terms of two components – knowledge and regulation. Metacognitive knowledge, a {T} in our framework, refers to self-awareness about one's own learning. Metacognitive regulation [6] involves mental processes, i.e. {W}, that monitor cognitive outcomes {A}. Therefore, the various components of metacognition correspond to elements of our framework.
In Example 3, a student is asked {I} to explain why sound is softer on the other side of a wall. She starts by assuming {W1} that sound is a material entity {T1}, on the basis of which she figures {W2} that it would be softer on the other side of the wall. Next she alludes to a response {A0} to a previous question, in which she had concluded that sound is not a material entity [7], and uses this response as a tool {T2 (A0)}. Then she reflects {W3} on why this model {T2} does not explain her experience {T3} that sound is quieter on the other side. Finally, she goes back {W4} to her previous assumption that sound is material, which would explain {T3}, but she is not comfortable with the idea that sound ("vibration") is material. So the final answer {A} is an unresolved dilemma. This reasoning path is metacognitive because she engages in self-regulation {W}, monitoring her cognitive outcome (the conflict between assumption {T1} and model {T2}) and trying, unsuccessfully, to achieve self-consistency.

Advantages of Using Our Framework
The framework was constructed from our interview data. Therefore, it can aid in various stages of an interview-based research project. In the research design stage, the framework can, first, help focus the overall protocol to better meet the goal of understanding students' reasoning; second, it can help researchers design individual interview questions {I} to better elicit the cognitive tools {T} and workbench processes {W} that a student uses. In the research implementation stage, i.e. during the interview, the framework can help the interviewer ask appropriate follow-up questions {I} that urge students to explicate their reasoning. Finally, in the research analysis stage, the framework can help a researcher glean overall trends in a student's reasoning across several questions or analyze a transcript at multiple grain sizes.
In Example 4, a student is asked the number of gears that a bike has. The transcript can be analyzed at two grain-size levels. We can use a broad brush to see global trends in the data and larger knowledge structures. We can also use a finer brush to see details that emerge from the data, such as smaller knowledge structures, the trying out of various tools, and the back-and-forth as the student decides between different answers.
Our framework can be applied in two ways. First, it can be used to understand what students say by categorizing various words and phrases in the transcript as {I}, {T}, {W} or {A}. Second, it can be used to infer what students think. To do so researchers make informed speculations about what students are thinking. Thus, this mode of application is highly susceptible to researcher interpretation and bias. In either case, it is advisable to use standard reliability measures such
as inter-rater reliability while using the framework. Example 5 below demonstrates how the framework can be used in the two ways described above.
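As a concrete illustration of the reliability measure just mentioned, the sketch below computes simple percent agreement between two raters' codings. This is a minimal sketch under an assumption: the paper does not specify which agreement statistic it uses, and the function name and ratings here are hypothetical.

```python
def percent_agreement(codes_a, codes_b):
    """Fraction of segments two raters coded identically, as a percentage."""
    if len(codes_a) != len(codes_b):
        raise ValueError("raters must code the same number of segments")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)

# Hypothetical codings of ten transcript segments by two raters.
rater1 = ["I", "T", "W", "A", "T", "W", "W", "A", "I", "T"]
rater2 = ["I", "T", "W", "A", "T", "T", "W", "A", "I", "W"]
print(percent_agreement(rater1, rater2))  # 80.0
```

More sophisticated statistics (e.g. chance-corrected agreement such as Cohen's kappa) could be substituted without changing the workflow.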
In Example 5 below, students are asked to
explain how sound propagates through the wall. By parsing the student’s response one can identify {W}, {T} and {A} as they chronologically occur in the transcript. A researcher may also try to infer that the student uses analogical reasoning.
Example 5: Applying the framework to what students say and to what we infer they think.

What students say:
{I} Asked how sound gets to the other side of a wall.
{W} "Well, I would say that to me it is somewhat like {T} a maze for the sound {A} it just kind of works its way through until it gets to the other side."

What we infer they think:
The student recognizes {W} that the situation is analogous to a maze {T} for the sound. She applies the analogy to deduce {W} that the air works its way through until it gets to the other side of the wall {A}.
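The first mode of applying the framework, tagging words and phrases with {I}, {T}, {W} or {A} markers, can be sketched in Python. This is an illustrative aid only: the marker pattern (a framework letter followed by an optional index and modifier such as 't') is an assumption based on the coded examples in this paper, and `parse_coded_transcript` is a hypothetical helper, not part of any published analysis tool.

```python
import re

# Assumed marker pattern: a framework letter (I, T, W, A) optionally
# followed by an index and a lowercase modifier, e.g. {I1}, {W2t}, {A0}.
MARKER = re.compile(r"\{([ITWA][0-9a-z]*)\}\s*([^{]*)")

def parse_coded_transcript(text):
    """Split a coded transcript into (code, segment) pairs."""
    return [(code, seg.strip()) for code, seg in MARKER.findall(text)]

coded = ("{W} Well, I would say that to me it is somewhat like "
         "{T} a maze for the sound {A} it just kind of works its "
         "way through until it gets to the other side.")

for code, segment in parse_coded_transcript(coded):
    print(code, "->", segment)
```

Tabulating the resulting code sequence across a whole interview would give the coarse-grained reasoning path; the segments themselves support the fine-grained analysis.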
[Figure: reasoning-path diagrams cycling through the I, T, W and A elements of the framework, shown at coarse and fine grain sizes.]
Analogical reasoning involves three {W}s: recognizing and selecting a target {T}, abstracting the structural similarities between source and target, and mapping similarities from source to target. The first of these processes is somewhat evident from the transcript. The other two are inferred based on our theoretical understanding of analogical reasoning. Therefore, the reasoning path goes back to {W} (for abstracting and mapping) before terminating at {A}.
There was no attempt made in the inferential analysis above to separate the abstraction and mapping processes in {W}. This demonstrates that although the framework can be used to bridge data with theory, use of the framework must ultimately be grounded in the data.

Inter-rater Reliability
Our framework may not characterize a student's reasoning definitively. It is plausible that two researchers analyzing the same transcript may arrive at slightly different descriptions of a student's reasoning path. Therefore, our framework is susceptible to a researcher's bias in ways similar to other qualitative methods. We determined the inter-rater reliability of the coding scheme based on our framework as follows. Four researchers involved in this project pooled two transcript segments from each of their data sets. Each segment was coded by two different researchers, who had not originally collected the data. The inter-rater reliability, averaged over the four pairs of researchers who coded the transcripts, was 81% ± 6% for the fine analysis and 67% ± 5% for the coarse analysis.

Other Issues
In using our framework to characterize the dynamics of student reasoning in an interview we have so far focused exclusively on student reasoning rather than underlying factors such as a student’s epistemology and expectations. These factors are in fact ‘higher order’ or ‘meta’ tools in that they influence a student’s choice of tools and workbench processes. Wittmann and Scherr [8] have demonstrated that a student’s epistemological stance can mediate a student’s sense-making processes. Our framework can alert a researcher to these issues and help her identify the possible epistemological mode that the student is operating in.
In the first segment below, the student says that she needs to be “scientific.” Similarly, the student in the second segment indicates that he should have “read the chapter.” In both cases it appears that the student is operating in the “knowledge is propagated stuff” epistemic mode.
Our framework alerts the researcher to statements such as those made above, which may reflect a student's epistemological stance.

Acknowledgements

This work is supported in part by NSF grants REC-0087788 and REC-0133621.

References Cited

1. Gentner, D., Holyoak, K. J., & Kokinov, B. (Eds.), Analogy: Perspectives from Cognitive Science. 2000, Cambridge, MA: MIT Press.
2. Festinger, L., A Theory of Cognitive Dissonance. 1957, Stanford, CA: Stanford University Press.
3. Hewson, P. W., Hewson, M. G. A., The role of conceptual conflict in conceptual change and the design of science instruction. Instructional Science, 1984. 13: p. 1-13.
4. Piaget, J., The equilibration of cognitive structures. 1995, Chicago, IL: University of Chicago Press.
5. Flavell, J.H., Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 1979, 34: p. 906-911.
6. Brown, A. L., Metacognition, executive control, self-regulation, and other more mysterious mechanisms, in Metacognition, Motivation, and Understanding, R. H. Kluwe, Ed. 1987, Lawrence Erlbaum Associates: Hillsdale, NJ.
7. Hrepic, Z., Rebello, N. S., Zollman, D. A., Identifying student models of sound propagation, in Proceedings of the 2002 Physics Education Research Conference. 2002, Boise, ID: PERC Publishing.
8. Wittmann, M. C., Scherr, R. E., Student epistemological mode constraining researcher access to student thinking: An example from an interview on charge flow, in Proceedings of the 2002 Physics Education Research Conference. 2002, Boise, ID: PERC Publishing.
Student goals and expectations in a large-enrollment physical science class

N. Sanjay Rebello
Physics Department, Kansas State University, Manhattan KS 66506-2601
Abstract: What are the goals of non-science students taking a lecture-based physical science course? Do students’ goals and expectations change as they progress through the class? We surveyed students on the first day of class about their goals as well as what they, their instructor and their classmates could do to help them achieve these goals. The same questions were asked at the end of the semester. A comparison of students’ pre- vs. post-course responses reveals that students change what they believe to be key to meeting their goals for the class. After the class they are more likely to believe that they and their peers rather than the instructor have a larger role in achieving their goals.
Introduction

Instructors in science courses often have
implicit expectations about what students should learn. [1] In his recent book, Redish [2] refers to these goals as the "hidden curriculum." However, instructors are not the only ones who have goals and expectations about a course; often students have goals beyond getting a good grade. In the present study we attempt to understand and measure students' goals in a conceptual physics course, as well as their expectations of themselves, their peers and their instructors in helping them achieve these goals.
We believe it is important for educators to understand students' motivations and goals and what students believe can help them achieve these goals. We hope that students would begin to see learning as a responsibility shared between themselves and their peers, and come to believe that classroom interaction can positively contribute toward their goals. The results of this study demonstrate, albeit in the context of a single class, that it is possible to achieve these desired shifts.

Relevant Literature
It has been recognized that students’ goals and expectations affect the way in which they react to instruction. Researchers [3-5] have found that students often have misconceptions about what they should expect from a science class. At least three instruments have been used to measure students’ views, expectations and beliefs about physics and science in general.
Redish and co-workers [6] developed the Maryland Physics Expectations Survey (MPEX) to measure students’ expectations about what they needed to do to succeed in their physics course. Students’ results are often presented as either
favorable or unfavorable compared to those of experts. The instrument was originally designed for an introductory calculus-based class. The Views About Sciences Survey (VASS), developed by Halloun and Hestenes, [7] probes students' views about the nature of science and about what it takes to learn science. Students are categorized as having either expert, folk or transitional views. Elby and co-workers [8] have developed the Epistemological Beliefs Assessment in the Physical Sciences (EBAPS), which measures how students function in a real science class rather than what they think about how they should function in an idealized situation.

Motivation & Research Goals
Each of the instruments above, and several other similar instruments, represent years of research and measure attributes along multiple dimensions. However, we felt that none of them completely met our needs, i.e. measuring students' goals in our particular class and what students felt they, their classmates and their instructor could do to help them achieve these goals. Our research questions were:

- What are the goals of students entering a conceptual physics class?
- What do they expect that they, their instructor and their classmates should do to help them achieve their goals?
- What major obstacles do they perceive in achieving these goals?
- Do their answers to the above questions change at the end of the semester?

Research Methodology

We adapted a survey used previously by an earlier researcher [9] in our group, which contained the following open-ended questions:
Q1: In addition to a good grade what are the most important goals that you wish to attain?
Q2: What are the most important actions you can take to help attain your goals?
Q3: What are the most important actions your instructor can take to help attain your goals?
Q4: What are the most important actions other students can take to help attain your goals?
Q5: What are the biggest obstacles or barriers that you will need to overcome to reach your goals?

In Phase I of the study, in fall 2001, students
were given the survey on the first day of class. On the last day of class one-half of the class was given a copy of their responses to the first-day survey and asked to what extent they had achieved the goals mentioned by them in the first-day survey and the extent to which they, their instructor, and classmates helped achieve their goals. The other half of the class was not shown their responses to the first-day survey; rather they were given a survey with the same questions as the pre-instruction survey, but phrased in past tense (e.g. …what were the most important goals…, or what were the most important actions you/instructor/classmates took to help…).
The open-ended responses to each question on both pre- and post-surveys were categorized using phenomenographic analysis. [10] In this qualitative analysis technique, the researcher categorizes students' open-ended responses on the survey. The researcher does not decide a priori what the categories should be; rather, the categories emerge from the responses.
Phase II of the study was conducted one year later, in fall 2002, in the same course taught by the same instructor (author). Students were presented with a survey having the same questions, but this time the students were asked to rank order a set of statements for each question. These statements were based on the categories extracted from students’ open-ended responses in Phase I. We acknowledge that the labels for these categories may be ambiguous. For instance, “studying hard” in Q2 was a category that arose from student responses, but we cannot be sure what students mean by “studying.”
Based on the pre- vs. post-comparisons for Phase I, we decided not to split the post-instruction survey into the two formats; rather, all of the
students in the post-instruction survey were given the pre-instruction survey with its questions rephrased in the past tense, as described above.

Context of Study
This study was conducted in a conceptual physics class for non-science majors. The largest single group of majors was business majors (35%). A vast majority were either sophomores (45%) or first-year students (36%). The gender ratio was nearly 50:50.
The textbook for the class was Conceptual Physics by Hewitt. [11] Most of the students were non-science majors who had not taken physics in high school. The course met three times a week for a 50-minute lecture in a large lecture hall. There was no laboratory component in this course.
This course was chosen for two reasons. First, anecdotal evidence indicates that students in this course typically do not see themselves as "science people" and are taking the course merely to fulfill a requirement. They are also usually very apprehensive about this course and for the most part are satisfied with merely passing it. Therefore, we wondered how these students would respond if asked their goals and expectations for this course. Second, the course was taught in a traditional format. Research [12] has shown that such courses tend to be ineffective in promoting student conceptual learning. Student attitudes and beliefs are also typically harder to change, even with targeted interventions in calculus-based physics courses. [6] Given this background, this course provided a challenging opportunity to test whether any attitudinal change at all could be effected.
At least two strategies to increase student participation in class were used. The first is an adaptation of Peer Instruction [13] using an infrared Personal Response System (PRS). The second is an adaptation of Interactive Lecture Demonstrations. [14] Students were asked to predict the outcome of the demos by voting on the PRS system. They then observed the demo and often voted again to explain their observations. Therefore, we integrated a predict-observe-explain sequence with these demos using the PRS.
We hypothesized that we would see a shift toward students’ believing that classroom interaction between themselves and their peers had
[Fig. 2: Responses to Q4 ("What should/did your classmates do to help your goals?"), pre vs. post, on a 0-100% scale. Categories: Be Quiet & Not Disturb (z = 7.90), Share Notes & Give Help, Work in Study Groups, Participate in Class (z = 4.20).]
[Fig. 1: Responses to Q2 ("What should/did you do to achieve goals?"), pre vs. post, on a 0-100% scale. Categories: Being Attentive, Original Ideas, Interacting w/ Others in Class (z = 2.38, p = 0.017), Keeping Positive, Studying Hard, Working w/ Others Outside Class.]
a positive role to play in helping them achieve their goals.

Results & Discussion

Phase I (N=176) was primarily used to construct the categories from students' open-ended responses to the questions. The results have been reported previously. [15] Half of the post-instruction surveys were used to gauge the extent to which students felt that their goals had been met and that they and others (peers and instructor) did what they had expected them to do to help them achieve these goals.
Results of this half of the post-instruction survey indicated that almost all (>95%) students felt that they had met their goals and expectations for the course. The results for the second half of the post-instruction survey (when students were not shown their pre-instruction responses) showed significant shifts (similar to Phase II) in students’ perceptions of the role of themselves, their instructor and other students in the class.
In Phase II (N=124), students were asked to rank order statements from most likely (Rank = 1) to least likely (Rank = 5). Similar wording was used for all questions (pertaining to expectations of themselves, their instructor, their classmates, and obstacles faced). The results reported below indicate the percent of respondents who ranked the corresponding statement toward the "top," defined operationally as a rank of either '1' or '2.' We used the z-test of proportions to compare the pre-instruction vs. post-instruction top ranking for each statement. A z-value ≥ 1.96 corresponds to a p-value ≤ 0.05.
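The two-proportion z-test described above can be sketched as follows. This is a minimal sketch: the counts are hypothetical, chosen only to show the calculation, and the function name is not from the study.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-proportion z-test with a pooled standard error.

    x1 of n1 respondents gave a 'top' ranking in one condition;
    x2 of n2 gave one in the other (counts here are hypothetical).
    """
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-tailed p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical example: 30 of 124 students ranked a statement in the
# top two before instruction vs. 55 of 124 after.
z, p = two_proportion_z(55, 124, 30, 124)
print(z, p)  # z well above 1.96, so p falls below 0.05
```

Any z-value of at least 1.96 returned by such a calculation would be reported as significant at the p ≤ 0.05 level, matching the criterion stated above.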
There is no significant change in the top goals identified by students at the beginning of the semester to those identified at the end. “Increasing general knowledge” (~75%) followed by “Understanding Physics” (~50%) were cited as students’ top goals before and after the class.
We focus on responses to Q2, Q4 and Q5. The first two of these pertain to what the students and their classmates did to help them achieve their goals. Q5 focuses on obstacles faced by the students.
In Q2 (Fig. 1) students rank ordered statements pertaining to what they did to achieve their goals for the course. The only statistically significant
(z=2.38, p=0.017) increase is for “interacting with others in class.” After completing the class students appear to have recognized that in-class participation helped them achieve their goals much more than they predicted at the beginning of the class.
In Q4 (Fig. 2) the percentage of students who ranked class participation as one of the top actions their classmates could take significantly increased (z=4.20). Correspondingly the percent of students who at the beginning of the course ranked being quiet as the top action their classmates could take to help them achieve their goals declined significantly (z=7.90). Students appear to have
realized that their classmates can help them achieve their goals by participating in class rather than by being quiet.
In Q5 (Fig. 3) students rank ordered statements pertaining to the main obstacles and barriers they expected to face or did face as they tried to achieve their goals. The only significant (z=3.67, p=0.0024) increase occurs in those students who
[Fig. 3: Responses to Q5 ("What obstacles do/did you expect to face/face to achieve goals?"), pre vs. post, on a 0-100% scale. Categories: Inadequate Science Background, Inadequate Math Background, Lack of Interest in Physics, Lack of Natural Ability, Lack of Motivation (z = 3.67, p = 0.0024).]
cited "lack of motivation" as a barrier. Again, students appear to believe that their own attitude, rather than external factors, was an obstacle to achieving their goals in the course.
Conclusions

We have demonstrated that students'
expectations of the role of themselves, their peers and their instructors in helping them achieve their goals can change significantly in a course. Although students at the beginning of the course do not cite classroom participation or peer interaction as a factor contributing toward their goal, at the end of the semester both of these factors increase significantly in their importance toward contributing to students achieving their goals in the class. Also, at the end of the class students are more likely to cite their own lack of motivation as an obstacle, rather than their prior knowledge or external factors.
These changes are all desirable because students appear to recognize their own and their peers' role in contributing toward their goal. We speculate that these changes were due to the focus on interactive engagement in class. Further research comparing this class with a more traditionally taught class would be needed to substantiate this claim.

Acknowledgements
Dr. Kirsten Hogg, University of Sydney, provided the initial version of the survey. [9]

References Cited

1. Lin, H., Learning physics vs. passing courses. The Physics Teacher, 1982. 20: p. 151-157.
2. Redish, E. F., Teaching Physics with the Physics Suite. 2003: John Wiley & Sons.
3. Songer, N.B., Linn, M. C., How do students' views of science influence knowledge integration? Journal of Research in Science Teaching, 1991. 28(9): p. 761-784.
4. Linn, M.C., Songer, N. B., Cognitive and conceptual change in adolescence. American Journal of Education, 1991: p. 379-417.
5. Carey, S., Evans, R., Honda, M., Jay, E., Unger, C., An experiment is when you try it and see if it works: a study of grade 7 students' understanding of the construction of scientific knowledge. International Journal of Science Education, 1989. 11: p. 514-529.
6. Redish, E. F., Saul, J. M., Steinberg, R. N., Student expectations in introductory physics. American Journal of Physics, 1998. 66: p. 212-224.
7. Halloun, I. Views about science and physics achievement: The VASS story, Proceedings of the International Conference on Undergraduate Physics Education (ICUPE), 1997. College Park, MD: American Institute of Physics.
8. Elby, A., Helping students learn how to learn. Physics Education Research: A Supplement to the American Journal of Physics, 2001. 69(7): p. S54-S64.
9. Hogg, K. Attitudes of future teachers to teaching and learning in Physics Education Research Conference 2000. Guelph, ON, Canada.
10. Marton, F., Phenomenography - a research approach to investigating different understandings of reality. Journal of Thought, 1986. 21: p. 29-39.
11. Hewitt, P. G. (1998). Conceptual Physics, 7th Ed, Addison Wesley.
12. Hake, R. R., Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses. American Journal of Physics, 1998. 66(1): p. 64-74.
13. Mazur, E., Peer Instruction: A User's Manual. 1997, Upper Saddle River, NJ: Prentice-Hall.
14. Sokoloff, D. R., Interactive Lecture Demonstrations. 2001: John Wiley & Sons.
15. Rebello, N. S., Student goals in a conceptual physics class. AAPT Announcer, 2002. 32(2).
The Teaching Experiment – What it is and what it isn’t
Paula V. Engelhardt, Edgar G. Corpuz, Darryl J. Ozimek, and N. Sanjay Rebello

Physics Department, Kansas State University, Manhattan, KS 66506-2601
Abstract: Much of the research investigating how students reason, or what knowledge structures they possess and utilize, has typically been done using the clinical interview format. Clinical interviews are often semi-structured and may or may not involve demonstration equipment. In the early 1980s, mathematics researchers began experimenting with a new style of interviewing, which they termed the "teaching experiment." These two methods are compared and contrasted within the context of sound. Students from a conceptually-based introductory physics course were interviewed using both formats in an effort to understand how they view the production of sound from musical instruments.
Introduction

David Ausubel comments, "the most important
single factor influencing learning is what the learner already knows. Ascertain this and teach him accordingly."[1] Typically, ascertaining what a student already knows has been done using the clinical interviewing style developed by Piaget.[2, 3] A technique known as the "teaching experiment," utilized by mathematics education researcher Steffe, [4] may shed more light on how students' concepts change and are influenced by various instructional methods. This paper will examine the differences between the clinical interview and the "teaching experiment" by examining the types of information that one can glean using these two methods within the context of sound.

Clinical Interviews
Clinical interviews have become the bread and butter method for determining what students understand of various physics phenomena. The method has been used at all levels of instruction from primary school to university graduate level. Typically, the format of the interview is semi-structured, having some pre-planning of the content, tasks, and questions.[5] The results of the interviews are then transferred to the learning environment, providing the instructor with a better understanding of how their students view particular concepts and what alternative explanations students may be expected to give.
The goal of the interview is to understand students' current reasoning patterns without attempting to change them.

Teaching Experiment
The teaching experiment is a variation on the interview technique. It incorporates three components: modeling, teaching episodes, and individual or group interviews. The most important aspect of the teaching experiment is the modeling of the students’ responses into a coherent picture of the students’ progress over an extended period.[6]
Teaching episodes involve the teacher/interviewer, an observer, and the students under investigation. As with clinical interviews, the teaching episodes are recorded and analyzed. The analysis is then used to guide the next teaching episode. It is during this phase that the researcher’s hypotheses are tested or perhaps abandoned based on responses given by the students. During the teaching episode, the students’ reasoning is the focus of attention just as in the clinical interview.[6] The purpose of the observer in the teaching experiments is to help the teacher/interviewer understand the student and to aid in determining the next phase of the teaching episode. The observer offers a more objective view of the interactions that occur during a teaching episode.
These three components are not self-standing but are intimately interwoven. One does not carry out modeling of student responses without first having conducted an interview that might have included a teaching episode. Steffe and Thompson remark:
In their attempts to learn students’ mathematics, the researchers create situations and ways of interacting with students that encourage the students to modify their current thinking.[6]
This aspect of the teaching experiment sets it apart from the clinical interview in that it is an acceptable outcome of the teaching experiment for students to modify their thinking.
Much of the work using the teaching experiment methodology has been in the field of mathematics. Two groups have adopted this methodology in their investigations of electricity concepts [7] and non-linear systems.[8]

Advantages of the teaching experiment
In terms of curriculum development and the evaluation of new teaching methods, the teaching experiment offers several advantages over the traditional clinical interview. First, the teaching episodes allow the testing of new techniques; analysis can pinpoint which technique provided the students with the most conceptual growth. Second, it more closely mimics the natural classroom environment when performed with groups of students. Additionally, from a research perspective, it provides a training ground for graduate students in interview techniques. Graduate students who act as observers learn proper interview etiquette as well as the process of transcribing and analyzing transcripts.

The teaching experiment, learning cycle and Socratic teaching
The teaching experiment embraces both the learning cycle [9] and Socratic teaching [10] in its tenets. The structure of the interview resembles a Socratic dialog. Students are repeatedly asked probing questions to try and elicit as much of their reasoning and thought processes as possible. The questions tend to be focused around the activities or tasks that the students are asked to think about and explain.
The teaching experiment is also related to the learning cycle. A typical learning cycle consists of three stages: an exploration phase, a concept introduction phase and a concept application phase. In the exploration phase, students explore the concept under investigation through hands-on activities. In the concept introduction phase, the phenomenon observed in the exploration phase is given a name and the explanation is further refined. In the concept application phase, students apply the concept that they explored and later named to new situations. In the teaching experiment, there is a cycle associated with the students and another associated with the interviewer/researcher. The connections between these two cycles are depicted in Figure 1.

Our adaptation of the teaching experiment
At present, we are using the teaching experiment methodology with a group of conceptually- based introductory physics students to investigate how they understand the production of sound in musical instruments. The focus of the investigation is on their understanding of the relationship between the variables that affect the pitch of the sound that is produced. For example, variables of key interest in the cello are the thickness of the strings, the type of metal from which the string is made, the tension in the string, and the length of the string.
We have run two trials of the teaching experiment. In both trials, students were involved for one hour on each of three days. In the first trial, students met with us every other day over the course of one week; this trial occurred prior to instruction on sound. In the second trial, students met with us once a week for three weeks; the greater time lag allowed us to distill the information gathered and to transcribe the previous teaching episode. The second trial occurred after brief instruction on sound.
Throughout the teaching experiment, multiple learning cycles occurred. Day 1, illustrated in Figure 1, covered a complete learning cycle related to the properties of waves. A second cycle explored the properties of standing and traveling waves. This cycle continued on Day 2 with further explorations of standing waves in demonstrations involving organ pipes, singing rods, and bugle (corrugated) tubes. Portions of the second cycle served as an exploration of relevant concepts for later application to musical instruments.
Figure 1: Comparison of the learning cycle from the teacher/interviewer's perspective and the student's perspective to a teaching experiment covering sound in musical instruments.

Teaching episode 1:
- Exploration. Student's perspective: explores the concept of waves through demonstrations such as the wave machine, slinky, and stadium waves. Teacher/interviewer's perspective: explores the student's initial conception of a wave and tests which demonstrations aid the students most in developing an understanding of waves.
- Concept introduction. Student's perspective: introduction of transverse and longitudinal waves and elaboration on the properties of each. Teacher/interviewer's perspective: clarifies the student's initial and current conception of waves and verifies which demonstrations helped most in developing an understanding of a wave.
- Concept application. Student's perspective: applies new knowledge to determine which type of wave sound is, based on demonstrations using tin can telephones, a speaker and feather, and sympathetic resonance of tuning forks. Teacher/interviewer's perspective: evaluates demonstrations for their effectiveness in aiding students to build an understanding that sound is a longitudinal wave.

Demonstrations used in this third cycle included wine glasses, glass bottles (blowing over), organ pipes, singing pipes, a sonometer, and Chladni plates. Day 3 finished cycles 2 and 3 with applications to musical instruments. The concepts introduced at the beginning of Day 3 served as the concept introduction phase for both cycles 2 and 3, with their application to musical instruments serving as the application phase.
During a particular teaching episode, the teacher/interviewer asked students to predict, observe, and explain what they saw or expected to happen in each demonstration. Students were also allowed to explore the demonstrations further on their own. The flow of the interview was always dictated by students' answers to questions posed by the teacher/interviewer. Additional demonstrations were created on the spot to help students answer questions that they posed during the course of explaining what they believed to be happening in a particular demonstration. At the end of each teaching episode, students were asked to reflect on the day's activities, draw connections between the activities, indicate where they still had questions, and make any other comments they thought were relevant. This is an
adaptation of a recommendation by Komorek and Duit [8] to use questionnaires between teaching episodes to further aid the preparation of the next episode.
Results from clinical interviews
Five clinical interviews were conducted with students taking a conceptually based introductory physics course in spring 2003. These students had no prior instruction on sound. All had previously played a musical instrument. The focus of the interview was on the variables (string length, tension, etc.) and how each affected the pitch of the sound that was produced.
Although detailed analysis of these interviews has not yet been completed, some general comments can be made. This group of students relied heavily on their personal experience of having played a musical instrument. They tended to describe the sound produced by each musical instrument in terms of a vibration. The location of the vibration depended on the instrument being discussed. They could often correctly predict the pitch of the sound produced by the instrument. They related the length to the pitch (short length, high pitch; long length, low pitch) but were not often able to explain their reasoning in more detail.
Results from trial 1 of teaching experiment
One group of three students enrolled in a conceptually based introductory physics course in summer 2003 volunteered to participate in the teaching experiment. All had previously played a musical instrument, and all participated prior to instruction on sound. The analysis is only partially completed, but a few general comments can be made. These students relied more heavily on the demonstrations that were performed during the teaching episodes than on their personal experience from playing musical instruments. They used the earlier demonstrations to strengthen their explanations.
Conclusions
Clinical interviews provide details of how students currently understand a particular physics concept. They reveal areas where students are confused, but cannot always reveal how best to create a change in students' thinking, as doing so would violate the rule that one should not teach during an interview.[5] Through a teaching experiment one can discover which technique will produce a change and can follow that change. For example, a speaker with either a candle or a feather placed in front of it appears to help students conclude that sound is a longitudinal wave. Students also need a firm grasp of the properties of longitudinal waves in order to make this connection. They especially need to understand that in a longitudinal wave the displacement of the medium is parallel to the direction in which the wave moves.
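That defining property can be written compactly in standard notation (our own annotation, not part of the interviews): a longitudinal plane wave traveling along x displaces the medium along the same axis,

```latex
% Longitudinal plane wave: the displacement \vec{s} is along the
% propagation direction \hat{x}. (For a transverse wave, the displacement
% would instead be perpendicular to \hat{x}.)
\vec{s}(x,t) = s_{\max}\cos(kx - \omega t)\,\hat{x}
```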
Although clinical interviews provide a wealth of information about a student's current thinking on a particular topic, the teaching experiment can provide more robust information on how one can facilitate a shift in students' conceptions toward the scientific view. The information gathered can be used to aid instructors in selecting appropriate materials and help them determine the proper sequencing of activities. Teaching experiments also mimic the actual classroom environment more closely.
Acknowledgements
This work was supported in part by the U.S. National Science Foundation under grant REC-0133621.
References Cited
1. Ausubel, D. P. Educational Psychology: A Cognitive View. 1968, Holt, Rinehart, and Winston, p. vi.
2. Piaget, J. The Child's Conception of Physical Causality. 1930, New York: Harcourt Brace.
3. Piaget, J. The Child's Conception of the World. 1929, New York: Harcourt Brace.
4. Steffe, L. P., The Teaching Experiment Methodology in a Constructivist Research Program, in Proceedings of the Fourth International Congress on Mathematical Education. M. Zweng, T. Green, J. Kilpatrick, H. Pollack, and M. Suydam, Editors. 1983, Birkhäuser: Boston, Massachusetts.
5. Ault, C. R., Jr. Structured Interviews and Children's Science Conceptions. Hoosier Science Teacher, 1983. 9 (2): p. 45-53.
6. Steffe, L. P. and Thompson, P.W. Teaching experiment methodology: Underlying principles and essential elements in Research design in mathematics and science education. R. Lesh and A. E. Kelly, Editors. 2000, Erlbaum: Hillsdale, NJ, p. 267-307.
7. Katu, N., Lunetta, V.N., and van den Berg, E. Teaching Experiment Methodology in the Study of Electricity Concepts in The Proceedings of the Third International Seminar on Misconceptions and Education Strategies in Science and Mathematics. 1993, Misconceptions Trust: Ithaca, NY.
8. Komorek, M. and Duit, R. The teaching experiment as a powerful method to develop and evaluate teaching and learning sequences in the domain of non-linear systems. To be published in the International Journal of Science Education. Available on the web at http://www.ipn.uni-kiel.de/abt_physik/paper.pdf
9. Zollman, Dean. Learning Cycles for a Large-Enrollment Class. The Physics Teacher, 1990, 28 (1), p. 20-25.
10. This web site discusses Socratic teaching: http://www.criticalthinking.org/University/socratict.html
Students’ understanding and perceptions of the content of a lecture
Zdeslav Hrepic, Dean Zollman and Sanjay Rebello Physics Department, Kansas State University, Manhattan, KS 66506
[email protected]; [email protected]; [email protected]
Abstract
In spite of advances in physics pedagogy, the lecture is by far the most widely used instructional format. We investigated students' understanding and perceptions of the content delivered during a physics lecture. Students participating in our study responded to a written conceptual survey on sound propagation. Next, they looked for answers to the survey questions in a videotaped lecture by a nationally known teacher. As they viewed the lecture, they indicated instances, if any, in which the survey questions were answered during the lecture. A group of experts (physics instructors) also participated in our study. We discuss students' and experts' responses to the survey questions.
Introduction
The lecture is perhaps the oldest instructional format that is commonly used today. Researchers' interest in issues related to classroom teaching has resulted in a variety of findings significant for improvements in instruction (Cooper & Simonds, 2003). However, educators are still concerned with how students learn in a traditional lecture (Kvasz, 1997; Zollman, 1996). Although many novel instructional methods have been developed, it is unlikely that the lecture will soon be replaced as the most commonly used format. Therefore, the lecture deserves our attention.
Motivation
In our recent study on students' mental models of sound propagation (Hrepic, Zollman, & Rebello, 2002), we interviewed students using the same protocol before and after lecture-based instruction. An observation of the lecture by one of the authors (Dean Zollman) indicated that the instructor had explicitly answered some of the interview questions during the lecture. However, several interviewees stated that they were unable to find answers to the interview questions in the lecture, even though they specifically looked for them. This mismatch between the perceptions of students and experts regarding the content of the lecture motivated the research presented here.
Goals
Our research questions were:
• What kind of questions do students perceive as being answered in a lecture?
• How are students' perceptions related to their knowledge prior to the lecture?
• How do students' perceptions of the content of a lecture compare with those of experts?
Methodology
We interviewed 18 students in a conceptual physics class at Kansas State University. Over half of the students had taken high school physics. Students received extra credit worth 2% of the course grade for their participation. Sound propagation was the topic of the experimental lecture; the study was conducted soon after the students had completed the in-class lectures on this topic and taken an in-class exam on it.
In the experiment, students viewed a videotaped lecture on the chosen topic, which was different from the one they had heard in class. We used a segment of a commercially available video lecture (Hewitt, 1991) on sound propagation by the author of the class text (Hewitt, 1998). The duration of the lecture segment was less than a third of the normal class time. The fact that the lecture was given by a nationally known teacher was expected to improve the likelihood that students would find the lecture understandable. Both the lecturer and the students were native English speakers. Students had full control over the video, and there were none of the typical classroom distractions, such as noise. Therefore, the experimental situation had several important advantages compared to a typical classroom.
Before students viewed the video, they responded to a written survey on sound propagation. Survey questions ranged from those addressed explicitly in the video to those not addressed at all. The survey enabled us to gauge students' initial understanding of sound propagation. It also provided the specific questions to which students were subsequently asked to find answers in the video.
The following questions were on the survey:
Q1. Describe the nature/mechanism of sound propagation in air. [Answer: Sound is the propagation of the (longitudinal) vibration of medium particles. Or, sound is a pressure wave.]
Q2. Does the speed of the sound in air depend on temperature? [Answer: Yes. Sound propagates faster if the temperature is higher.]
Q3. Does the speed of propagation of sound depend on the motion of the source? [Answer: No. It depends only on medium properties.]
Q4. Does the speed of propagation of sound depend upon the medium? If so, how does the speed of sound generally compare between solids, liquids, and gases? [Answer: Yes. Generally it is faster in solids than in liquids and faster in liquids than in gases.]
Q5. Does sound propagate in a vacuum? [Answer: No. It needs a medium.]
Q6. Does sound affect a dust particle floating in front of the loudspeaker? If so, how? [Answer: Yes. It will vibrate longitudinally.]
While watching the video, students were asked to record the answer they perceived as being given in the video to each survey question. They were also asked to indicate, on a Likert scale, the extent to which the question was answered: 1 (hint of the answer) to 5 (answered completely).
After completely viewing the video, each participant was asked to record whether any further answers could be inferred from the video. This task aimed to identify questions whose answers the student perceived as being indirectly addressed, although not explicitly answered, in the video.
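As an aside (our own illustration, not part of the study materials), the answer key for Q2 follows the standard linear approximation for the speed of sound in dry air:

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in dry air, in m/s, for a temperature
    in degrees Celsius, using the common linear fit v = 331.3 + 0.606*T."""
    return 331.3 + 0.606 * temp_c

# Warmer air carries sound faster, as the survey's answer key states:
print(round(speed_of_sound(0), 1))   # 331.3
print(round(speed_of_sound(20), 1))  # 343.4
```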
Besides students, we also surveyed a group of experts using the same protocol. For this study, experts were defined as M.S. or Ph.D. degree holders in physics. We also required that each expert's mental model of sound propagation, prior to the video lecture, be unambiguously the wave model. Two of the 11 potential experts were disqualified because they did not satisfy this requirement. Nearly half of the experts were non-native English speakers. Additionally, we asked the videotaped instructor, Paul Hewitt, to participate in the study.
Data Analysis
A uniform set of criteria was applied to analyze the data from all participants, students and experts alike. Due to the complexity of answers to Q1, we classified participants' answers in terms of their mental models of sound propagation determined from earlier research (Hrepic et al., 2002).
Results and Discussion
As a reference point, we started with the responses from Dr. Hewitt:
Q1: Answered (rated 4/5)
Q2: Answered completely (rated 5/5)
Q3: Not answered
Q4: Answered (rated 4/5)
Q5: Not answered
Q6: Partially answered (rated 2/5)
Results obtained from students and experts are shown in Table 1. Both students and experts perceived Q1, Q2, and Q4 as answered in the lecture. However, unlike students, experts in general perceived Q5 and Q6 as also answered in the lecture. Conversely, Q3 was perceived as addressed by five students but not by a single expert. In general, experts perceived questions as being answered more frequently than students did (except Q3, which no expert saw as answered). Similarly, for all but one question (Q2), experts rated the answers as being more complete than the students did.
In almost all cases, the number of correct answers after the lecture was practically the same as the number of correct answers before the lecture. The exceptions were students' answers to Q2 and Q4; these questions were explicitly addressed in the lecture. Although the whole lecture segment was related to sound propagation, three students perceived that Q1 was not addressed at all. One student recorded an answer to Q1 that was artificial, in that it did not address the nature of sound propagation. Only three students "upgraded" their models: two of them from an incorrect to a less incorrect model, and only one from an incorrect model to the correct model. For three students we could not determine with certainty the mental model that they used, but their responses were clearly inconsistent with the wave model. The remaining students retained their initial (incorrect) model after the lecture.
Table 1: Results from students and experts. For each question, the columns give the frequency at which participants saw the question addressed in the video lecture (number who saw it as addressed, with average and mode of the number of times addressed), the completeness with which it was addressed as rated by participants (number who rated it, with average and mode of the rating), and the correctness of the answers given during/after the video lecture (answered correctly and with relevance; answered correctly also before the video; correct answer given only as an inference).

Q   Group            Seen (%)    Avg  Mode | Rated  Avg  Mode | Correct  Before  Inference
Q1  Students (N=18)  15 (83%)    1.2  1    | 13     3.8  5    | 2        1       0
Q1  Experts (N=9)    9 (100%)    2.1  1    | 8      4.5  5    | 9        9       0
Q2  Students (N=18)  18 (100%)   1.3  1    | 17     4.8  5    | 17       8       0
Q2  Experts (N=9)    9 (100%)    2.4  2    | 8      4.7  5    | 9        8       0
Q3  Students (N=18)  5 (27.8%)   1    1    | 3      2    N/A  | 1        1       1
Q3  Experts (N=9)    0 (0%)      0    0    | 0      0    0    | 0        0       0
Q4  Students (N=18)  18 (100%)   1.4  1    | 16     4.1  5    | 17       12      0
Q4  Experts (N=9)    9 (100%)    3    3    | 8      4.8  5    | 9        8       0
Q5  Students (N=18)  3 (16.7%)   1    1    | 2      2.3  N/A  | 2        2       2
Q5  Experts (N=9)    7 (77.8%)   1    1    | 1      3    N/A  | 7        7       7
Q6  Students (N=18)  3 (16.7%)   1    1    | 4      2    2    | 2        1       1
Q6  Experts (N=9)    7 (77.8%)   1    1    | 5      2.2  1    | 6        6       3
There were a total of 20 instances (11 from students and 9 from experts) in which a participant decided, after completely viewing the video, that an answer to a question could be inferred from the video. In all of these cases, correct inferences were made only by participants (students and experts) who had answered the question correctly before viewing the video.
In comparing the ways in which experts and students saw questions as being answered in the video, only the instructor (Dr. Hewitt) and the most experienced expert perceived the questions as being addressed similarly to the way in which students did. This result appears to indicate that more experienced teachers have a better sense of the ways in which students might perceive the content of a lecture.
When the nature of their answers is examined, the following traits are observed. Students may...
1. concentrate on details in the instructor's statements. (e.g., "Sound travels faster through the steel than through the lead.")
2. record details incorrectly. (e.g., "Sound travels four times faster in steel and about two times faster in water [than in air].")
3. hear/understand exactly the opposite of what the instructor said. (e.g., "Sound propagates faster in cold air. Slower in a warm air.")
4. hear what was not said. (e.g., "The sound molecules vibrate back and forth.")
5. make inappropriate generalizations. (e.g., "In a liquid… sound would move four times faster than when it is not in a liquid.")
6. create false positive answers. (e.g., "Sound bounces back and forth …so the dust particle will move back and forth.")
7. perceive an incorrect answer when no answer is given to the question. (e.g., "If the source is moving fast … you'll hear it faster.")
8. correctly repeat the instructor's statements but not make sense of them. (e.g., "He [the instructor] was just talking about the way the sound moves. When molecules start moving, they're vibrating back and forth and they hit the next one and the next one ... [Sound is] just traveling with those, I guess. I don't know. It's just traveling with that. Like being carried with each vibrating molecule. ... I'm just in the dark with this whole sound thing.")
9. correctly repeat the instructor's statement without realizing that it does not make sense to them. (e.g., "Molecules hit one another until they reach the person." Interviewer: "How is sound related to these molecules hitting each other?" ... Student: "What do you mean? ...I don't know. I mean I don't think every molecule just kind of transfers…I don't know. I didn't think about it.")
10. correctly repeat the instructor's statements but interpret them differently than intended. (e.g., Given the instructor's example of a room full of vibrating ping-pong balls, students interpreted statements about the vibration of molecules to mean that sound is an autonomous entity, different from the medium, that moves by using the vibration of the medium's molecules.)
11. hear "what makes sense" and overlook what was actually stated. (e.g., "[The dust] particle vibrates up and down" (the same answer as given before the interview). Follow-up by Interviewer: "So what did he [the instructor] say about the direction of vibration? Do you remember?" Student: (Pause) "What do you mean?" Interviewer: "How did you conclude that they will vibrate up and down?" Student: (Pause) "Just…it wouldn't…it wouldn't make sense to vibrate…They couldn't vibrate sideways.")
We now discuss the effect of earlier answers on students' understanding of the lecture content. Examples are omitted from this section due to lack of space. With respect to their earlier answers, students may...
1. stick to their previous background ideas although they change the specific answer.
2. keep their initial (incorrect) model in identical form.
3. inappropriately incorporate new information into the existing (incorrect) model.
4. use the same terminology as experts do, but with a very different meaning attached to it before and after the lecture.
5. be confused more after the lecture than before it.
Conclusions
In general, we found in our study that…
• Students correctly notice answers that are simple and explicitly stated, preferably multiple times. Otherwise, they may try to make sense of things in ways not intended by the instructor.
• Students make incorrect inferences. In this study, correct inferences were made only when the student (or expert) already knew the answer.
• Experts tend to believe that more was delivered than actually was, and frequently perceive questions as more fully addressed than students do.
All of the above findings are important for a lecturer to bear in mind, because the aforementioned problems with students' understanding were observed in a situation that had significant advantages over a classroom lecture.
Acknowledgements
This work is supported in part by NSF grant # REC-0087788. The authors wish to thank Dr. Paul Hewitt for his kind participation in this research. His input was invaluable for the analysis of our data.
References
Cooper, P. J., & Simonds, C. J. (2003). Communication for the classroom teacher. Boston, MA: Allyn and Bacon.
Hewitt, P. G. (1991). Vibrations and Sound II [Videotape]. Addison-Wesley.
Hewitt, P. G. (1998). Conceptual Physics (8th ed.). Reading, MA: Addison-Wesley.
Hrepic, Z., Zollman, D., & Rebello, S. (2002). Identifying students' models of sound propagation. Paper presented at the 2002 Physics Education Research Conference, Boise, ID.
Kvasz, L. (1997). Why don't they understand us? Science and Education, 6, 263-272.
Zollman, D. (1996). Millikan Lecture 1995: Do They Just Sit There? Reflections on Helping Students Learn Physics. American Journal of Physics, 64, 114-119.
How Many Students Does It Take Before We See the Light?
Paula V. Engelhardt, Kara E. Gray, and N. Sanjay Rebello, Kansas State University, Manhattan, KS
Prior research suggests that students who cannot light a bulb given a single wire, a bulb, and a battery are not able to reason correctly regarding complete circuits. Our research shows that students believe that the wires from the filament are connected to the base of the bulb at the bottom. The percentage of students with this belief seems to be dependent on the level of the introductory physics course taken (conceptual, algebra, calculus). We have proposed three activities that appear to aid students in developing the correct model of how a light bulb is wired, and a definition of complete circuit that classifies a short circuit as a complete circuit but one that is not advantageous.
Try giving one of your students a battery, a bulb, and some wires and ask him or her to make a bulb light. You will find that this simple task will cause many students great difficulty. James Evans1 notes the low success rate of performing this task among high school seniors, university students, and university
Fig. 1. Typical drawings given by students while trying to make a light bulb light (two panels: Circuit 1 and Circuit 2).
graduates. McDermott and Shaffer2 suggest that students who have difficulty with the bulb-lighting task fail to understand and apply the concept of a complete circuit. Evans, however, asserts that "most of the students have no idea of the way the various wires inside a light bulb are connected. Lacking this understanding, how secure can they be in their understanding of 'circuit'?"3 Yet, no research to date suggests how students think the wires inside a light bulb are connected.
Research4 indicates that students have difficulty lighting a bulb given one wire, a battery, and a bulb. Does this result truly indicate that they do not understand the concept of a complete circuit, as previous researchers have suggested? Or is it more that they do not know how a light bulb is wired internally? Furthermore, what do we, as physics instructors, mean by the term "complete circuit"? It is generally not a term that is explicitly defined in introductory textbooks. Students who were interviewed define it as "a complete path for _____ (energy, current, etc.) to move with no breaks." By this definition, a short circuit is also a complete circuit. We disagree with the inference of McDermott and Shaffer that suggests that a student who makes a drawing as shown in Fig. 1 (Circuit 1), or who cannot make a bulb light given a battery and one wire, does not understand the concept of a complete circuit. We contend that the student does not understand the internal wiring of the light bulb and may have difficulty identifying a short circuit. In some respects the drawing itself is confirmation that the student understands a complete circuit; otherwise,
DOI: 10.1119/1.1696589. THE PHYSICS TEACHER ◆ Vol. 42, April 2004
the drawing would look like that in Fig. 1 (Circuit 2). There is a complete path for the charges to flow; they simply do not flow through the bulb. We as educators need to be more precise in our definition of a complete circuit. At present, its meaning is hidden from the students and not clearly defined in our textbooks. We suggest the following definition:
A complete circuit is any complete path in which charges can move, having no breaks or gaps. A short circuit is also a complete circuit; however, it is not an advantageous circuit, as the element that is intended to receive charge does not, because the charges are following the path of least resistance and therefore bypassing the element.
This paper will present the results of a one-question survey to ascertain students' ideas of the internal wiring of a light bulb. We have already suggested a useful definition of a complete circuit and will offer a set of promising activities to help students better understand how a light bulb works, strengthening their idea of a complete circuit and enabling them to determine where the contact points are on a light bulb and a socket.
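The sense in which a short circuit is complete but not advantageous can be made concrete with a toy calculation (our own sketch, with assumed component values, not part of the survey):

```python
def bulb_current(emf, r_internal, r_bulb, r_short=None):
    """Current through the bulb for a battery with internal resistance.
    If a low-resistance 'short' branch is placed in parallel with the bulb,
    the charges overwhelmingly follow that path and bypass the bulb."""
    if r_short is None:
        r_load = r_bulb
    else:
        # Parallel combination of the bulb and the short-circuit branch.
        r_load = (r_bulb * r_short) / (r_bulb + r_short)
    v_load = emf * r_load / (r_internal + r_load)  # voltage divider
    return v_load / r_bulb

normal = bulb_current(emf=1.5, r_internal=0.5, r_bulb=10.0)
shorted = bulb_current(emf=1.5, r_internal=0.5, r_bulb=10.0, r_short=0.01)
# The shorted circuit is still complete, but the bulb current collapses.
```

With these assumed values the bulb current drops by roughly a factor of 50: the circuit remains complete, yet the bulb barely receives any charge.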
So How Do Students Think a Bulb Is Wired Inside?
In order to answer this question, a one-question survey was created asking students to draw the location of the wires connecting to the base of a light bulb (see Fig. 2). The survey was given to 124 first-semester introductory calculus-based engineering students and 149 first-semester introductory algebra-based general physics (GP) students at Kansas State University (KSU) in spring 2003. The results from the
Class             Both wires to   Both wires    One wire to the bottom,   Other
                  the bottom      to the side   one wire to the side
Calculus-based    18%             5%            72%                       6%
Algebra-based     58%             9%            25%                       8%
Conceptual-based  70%             0%            30%                       0%

Table I. Results from the question asking where the wires connect to the "invisible" portion of a light bulb.
survey and additional data from 10 interviews with introductory conceptual physics students are presented in Table I. One intriguing observation is that the results for the calculus-based and the algebra-based students are almost exactly reversed. There appears to be an effect from the level of the physics course taken. More research will need to be done to uncover precisely why this is so; however, this is not the focus of the present paper.
"Making Sense of Incandescence"
From Table I, it is clear that more than 50% of our general education physics students have an incorrect image of the internal wiring of a light bulb. So, how does an instructor help a student develop the correct view of the internal wiring of a light bulb? We would like to suggest the following three activities, which we have chosen to call "Making Sense of Incandescence."5 These activities are not intended to replace traditional circuit activities, but to make them more effective by strengthening the students' understanding of a complete circuit and developing the correct view of how a light bulb is internally wired and how it operates.
Fig. 2. Diagram given to students to answer the question, "On the figure of the light bulb, draw in where you believe the two wires that come from the filament connect to the base of the bulb." (The diagram labels the filament and the base of the bulb.)
These activities grew out of the interview protocol used with the conceptual physics students. They were initially developed for and pilot-tested with students at a local high school in Kansas, but have since been used with university-level conceptual physics and general physics students. "Making Sense of Incandescence" loosely follows the learning cycle,6 which has three components: an exploration (Bright Ideas), a concept introduction (Activity 1: Getting Hot; Activity 2: That Glowing Feeling; and Activity 3: Getting Connected), and an application (Lighting Up the Night). The activities are done in stations. The activities will now be described in more detail.
The exploration activity, Bright Ideas, elicits the students' initial ideas about the concepts that are presented in the remainder of the activities and serves as a baseline to see what conceptual changes occur during the activities. Students are asked to describe how a light bulb works, how they believe the wires are connected to the base (given Fig. 2 on which to draw), to what other circuit element(s) the bulb is most similar (wire, battery, resistor, capacitor), what experiences led them to choose their answer, and to define in their own words what is meant by a complete circuit.
Fig. 3. Circuits for Activity 2: That Glowing Feeling. Circuit 1: bulb lights. Circuit 2: bulb does not light; the light bulb is unscrewed. Circuit 3: bulb does not light; the batteries are connected so that the negative poles are in contact.
Fig. 4. Equipment used in Activity 3: Getting Connected.
Activity 1: Getting Hot is intended to help students better understand how the light bulb functions via a comparison with another well-known device, the heating element from an electric stove. Students are led through a series of focused questions to determine how a light bulb filament and the heating element from an electric stove are similar and dissimilar to one another. Students also determine the number of connections each has, and whether or not the orientation of the connections to the battery is important.
Activity 2: That Glowing Feeling tests their idea of a complete circuit and confronts the idea that, although three circuits may physically appear to be identical, their actual operation may not be. Students problem solve to find out why two out of three identical-looking circuits do not function. The circuits are shown in Fig. 3 as they first appear to the students. Circuit 1 in Fig. 3 will result in the bulb lighting after the final connection is made. In the other two circuits, the light bulb does not light after making the final connection. In Circuit 2 (Fig. 3), the bulb is not fully screwed into the socket, creating a break in the circuit. In Circuit 3 (Fig. 3), the batteries have been hooked together so that the negative poles are in contact with each other. Prior to making the final connection, students are asked if the circuits are complete based on their definition from the Bright Ideas exploration activity. They make the final connection and are again asked if the circuits are complete. If they answer that the circuit is not complete, then the students are asked to problem solve as to why the bulb would not light, fix the problem, and explain how this situation initially resulted in an incomplete circuit.
Activity 3: Getting Connected directly confronts students' alternative image of how a light bulb is
internally wired. Its goal is to make a Christmas treebulb light using a 6-V lantern battery connected to alarge household socket (see Fig. 4). Students do thisby first exploring how the socket works. Studentsrecord all their attempts to make the Christmas treebulb light and note at what points on the socket theyhave tried. Near the end of this activity, they aregiven a battery, a wire, and a bulb and asked to makethe bulb light given their new understanding of theinternal wiring of a light bulb.
The application activity, Lighting Up the Night, provides students a chance to test their understanding of a complete circuit and their newfound knowledge of the internal wiring of a light bulb. Students are given a flashlight, two batteries, and some wires (see Fig. 5). They are asked to light the bulb without using the yellow casing. The casing is provided for the students to reference.
Field Testing the Activities

The activities have been used with three different populations of students: high school physics (N = 13) and two introductory-level university groups, algebra-based general physics (N = 29) and conceptually based physics (N = 12). Both of the university groups were enrolled in summer school. All students had already begun their study of electric circuits. The qualitative nature of the activities was unusual for all three groups. The high school students typically performed few hands-on laboratory experiments but were shown many demonstrations in class. The most recent set of demonstrations dealt with electricity and was demonstrated in part via a Van de Graaff generator. Having seen what electricity could do via these demonstrations, the high school students were initially reluctant to touch and interact with the equipment. The university students were accustomed to quantitative laboratory activities that required numerous calculations and were more self-paced.

Fig. 5. Equipment given to students for the application activity, Lighting Up the Night. Students were not allowed to use the yellow body of the flashlight, although it was given for them to reference.
The students began by answering the questions in the Bright Ideas activity. We recommend having students answer the Bright Ideas questions independently in the classroom so that their own ideas are elicited, not the ideas of someone else or the textbook. For the field test with the high school and general physics groups, each of the authors was at a station (what a luxury!) to help guide and focus the students’ work, as well as to observe how the students interacted with the activities. Students were divided into three groups of two to four people. For the larger general physics class, we had 10 students per station, subdivided into smaller groups of three to four. Sufficient equipment was available for each group. Groups rotated among the three stations; the order of rotation did not matter. After all three stations had been completed, we held a discussion that crystallized the main ideas from each of the activities: how a light bulb works, how it is connected within itself and to a socket, and what constitutes a complete circuit. The final activity was Lighting Up the Night, followed by a minute paper in which students reflected on what they had learned and pulled together from the activities. The minute paper also served as an evaluation of the activities. The high school students had 40 minutes to complete the activities, while the university groups had an hour and 50 minutes. Thus, there were difficulties completing all of the activities with the high school students. In the future, we recommend two class periods for those with 40- to 50-minute periods.
When the high school students were asked how they liked the activities, most responded positively. The main complaint was a lack of time to complete all the activities and to discuss the meaning of their results. One student remarked that there were not enough hands-on experiences in Activity 1: Getting Hot. Others did not see how Activity 1 was linked to the other two activities. We feel that this link, had time permitted, could have been established during the discussion phase. In Activity 2: That Glowing Feeling, some students in the high school and general physics groups believed that the light bulb was polarized, so that a change in the wire connections would result in the bulb not lighting. We believe that this is an artifact of overemphasizing the direction of current flow during instruction. We have hence added to Activity 1 a section that deals with the issue of whether or not a bulb is polarized. Also in Activity 2, students were not accustomed to unscrewed light bulbs and had difficulty seeing how this would affect the lighting of the bulb.
The university-level students were split in their evaluation of the activities. Those who had an incorrect view of the internal wiring of a light bulb found the activities useful, often clearing up their confusion about how a light bulb functions and connects to a circuit. Those who had the correct view found the activities trivial. Both groups found parts of the activities repetitive. For this group of students, we recommend adapting the activities and incorporating all or parts of them into an existing activity dealing with light bulbs and simple circuits.
For most students, Activity 3: Getting Connected was the most illuminating. It was during this activity that students’ alternative image of the internal wiring of a light bulb was most strongly confronted. Many students began by trying to touch both leads from the Christmas tree bulb to the metal tab (indicated by the blue arrow in Fig. 6) of the socket. Students were visibly surprised when this did not work and proceeded to try other combinations. After several attempts, students found a combination that worked with little or no prompting. Students often found the two screw connections (see green arrows in Fig. 6) on the bottom of the socket, which connect a metal strip on the base to the side of the socket, and would attempt to connect between them, which would result in the bulb lighting. They did not realize, however, that in the process they also made a connection to the metal tab, which actually completed the circuit. This was an important point that we had to emphasize to the students, and we encouraged them to find other locations that would also work. However, one high school student who lit the Christmas tree bulb in this manner refused to find other locations that also worked. As a result, this student believed that a light bulb connects only to the base of the socket and did not acquire the correct view that it connects to both the base and the side. Had time permitted, we would have liked to give this student the one-wire task to complete and to ask how those results and the results from the socket related to one another. However, almost all of the students were able to quickly transfer their new knowledge of how the socket connects to a bulb to correct their image of a light bulb’s internal wiring.

Fig. 6. Socket similar to those used for Activity 3: Getting Connected. The green arrows indicate the locations of the points where the base is connected to the side. The large blue arrow indicates the tab on the base where many students tried to connect both wires from the Christmas tree light.
Summing It All Up

To summarize, prior research has suggested that students who cannot light a bulb given a single wire, a bulb, and a battery are unable to reason correctly about complete circuits. No study to date has inquired into how students believe a light bulb is wired. Our research shows that more than half of introductory general education physics students believe that the wires from the filament are connected only to the base of the bulb at the bottom. There appears to be a correlation with the level of the introductory physics course taken (conceptual, algebra, calculus). The reason for this relationship is unclear from our current work, although it may have to do with students’ prior experience with light bulbs. Further research is needed to uncover the factors influencing this apparent correlation. We have proposed three activities that appear to aid students in developing the correct model of how a light bulb is wired. We recommend their use in high schools and as supplementary activities at the university level. We also propose a definition of a complete circuit that classifies a short circuit as a complete circuit, but one that is not advantageous.
Acknowledgments

This work has been supported in part by NSF Grant # REC-0133621. We would also like to thank Dean Zollman for his insightful comments and suggestions to improve the paper. Lastly, we would like to thank the students and their instructors for participating in the pilot testing of the materials.
References
1. James Evans, “Teaching electricity with batteries and bulbs,” Phys. Teach. 16, 15–22 (Jan. 1978).
2. Lillian C. McDermott and Peter S. Shaffer, “Research as a guide for curriculum development: An example from introductory electricity. Part 1: Investigation of student understanding,” Am. J. Phys. 60, 996 (Nov. 1992).
3. Ref. 1, p. 17.
4. Timothy F. Slater, Jeffrey P. Adams, and Thomas R. Brown, “Undergraduate success–and failure–in completing a simple circuit,” J. Coll. Sci. Teach. 30, 96–99 (2001); see references 1 and 2.
5. These activities are available by contacting the authors.
6. R.J. Karplus, “Science teaching and development of reasoning,” J. Res. Sci. Teach. 12, 213–218 (1974).
PACS codes: 41.71, 01.40Ga, 01.40Gb, 01.40R
Paula V. Engelhardt is a research associate with the Physics Education Research Group at Kansas State University. Her research interests include student understandings of real-world devices and curriculum development.
Department of Physics, Kansas State University, 116 Cardwell Hall, Manhattan, KS 66506-2601; [email protected]
Kara E. Gray graduated in 2003 from Kansas State University with a bachelor's degree in physics. She worked with the Physics Education Research Group for two years. She is pursuing a Ph.D. specializing in physics education research.
Department of Physics, Kansas State University, 116 Cardwell Hall, Manhattan, KS 66506-2601; [email protected]
N. Sanjay Rebello is an assistant professor of physics at Kansas State University. He earned his Ph.D. in physics in 1995. Since then he has been involved in physics education research and curriculum development. His current interests include student understanding of real-world devices.
Department of Physics, Kansas State University, 116 Cardwell Hall, Manhattan, KS 66506-2601; [email protected]
Student Explorations of Quantum Effects in LEDs and Luminescent Devices

Lawrence T. Escalada, University of Northern Iowa, Cedar Falls, IA
N. Sanjay Rebello and Dean A. Zollman, Kansas State University, Manhattan, KS
We developed activity-based instructional units to introduce basic quantum principles to students with limited physics and mathematics backgrounds. To emphasize the practical applications of contemporary physics, we introduced concepts using the contexts of light-emitting devices such as light-emitting diodes (LEDs), fluorescent lamps, and glow-in-the-dark toys. As our standard of living becomes more dependent on the latest developments in science and technology, our students’ literacy must be at a level that enables them to make educated decisions on science- and technology-related issues and their everyday applications. Students need at least a basic understanding of 20th-century physics and its applications in order to make informed decisions about them. Unfortunately, many physics teachers either exclude or spend very little time on modern topics such as quantum mechanics in high school physics courses.1,2 The high degree of mathematical formalism and the abstract nature of quantum mechanics are frequently given as reasons for not introducing quantum physics in high school physics courses.3,4
Over the past few years we have been addressing these issues as part of the Visual Quantum Mechanics project. To enable a broad spectrum of students to learn quantum mechanics, we have developed a series of interactive instructional units that utilize hands-on activities and computer visualizations to introduce quantum principles. Two versions of the Visual Quantum Mechanics instructional materials have been developed — one called Visual Quantum Mechanics – Original5 for high school students and nonscience undergraduates, and Visual Quantum Mechanics – The Next Generation6 for undergraduate physics majors, which is derived from the Original. Both versions focus on enabling students to make observations, develop mental models consistent with quantum principles, and then apply these models to other, related situations. This paper describes instructional strategies and computer tools that have been used and adapted from two instructional units in Visual Quantum Mechanics – Original, and what introductory physics and physical science students can learn from exploring everyday light sources. Materials for other audiences are under development.

THE PHYSICS TEACHER ◆ Vol. 42, March 2004 DOI: 10.1119/1.1664385
The Visual Quantum Mechanics – Original instructional materials utilize a modified learning cycle in which student investigations of concrete phenomena precede and follow the introduction of abstract concepts.7 These materials are divided into four major, but relatively short, instructional units and two units for background review. The instructional units include the following:
■ “Solids & Light” — Students observe the light emitted by solids and gases to understand energy quantization in atoms and its consequences in everyday devices such as the LED.

■ “Luminescence: It’s Cool Light!” — Students observe light emitted by luminescent materials such as fluorescent lamps and glow-in-the-dark objects to build energy level models that explain a variety of light-emitting processes.

■ “Waves of Matter” — Students develop a model to explain discrete energy states and learn about the wave nature of matter by examining its applications to the electron microscope and the Star Trek Transporter through visualization activities.

■ “Seeing the Very Small: Quantum Tunneling” — Students learn about quantum tunneling and its applications to a scanning tunneling microscope using a computer program.

■ “Potential Energy Diagrams” — Students review energy conservation through classical experiences involving Hot Wheels® cars or dynamics carts with magnets along the track. They explore how potential energy diagrams play an important role in understanding quantum ideas.

■ “Making Waves” — A basic review of some properties of waves, including interference.
“Solids & Light”

Students begin by investigating and comparing the electrical properties of LEDs and incandescent lamps. Students find that LEDs, unlike incandescent lamps, emit light of a single color that does not change when the applied voltage is increased. Students also find that LEDs, unlike incandescent lamps, will emit light only when connected in a certain polarity within the circuit and only above a threshold that depends upon the color of light emitted. Students can then apply their observations to determine whether a Christmas light is an incandescent lamp or an LED.

Fig. 1. An energy diagram for an electron with –1.3 eV of energy.

Fig. 2. An energy diagram of an electron resulting in the emission of a 2.0-eV photon.
Following the initial investigations of the electrical properties of LEDs and incandescent lamps, students compare the spectra of these devices with those of gas lamps using inexpensive hand-held spectroscopes. Students may overcome any difficulties in using the spectroscopes by first using rainbow glasses (paper glasses with diffraction gratings as “lenses”). They then use spectroscopes that have an eV scale8 to measure the light energy directly. Thus, students do not need to learn the relationship between the wavelength and energy of light at this time.
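For instructors who do want the wavelength connection, the eV scale and the familiar nanometer scale are related by E = hc/λ, which for energies in electron volts and wavelengths in nanometers reduces to E ≈ 1240/λ. The short Python sketch below is ours, offered only as an illustration of this conversion; it is not part of the published materials.

```python
# Convert between photon wavelength (nm) and photon energy (eV),
# using the standard approximation hc ~= 1240 eV*nm.

HC_EV_NM = 1239.84  # Planck's constant times the speed of light, in eV*nm

def photon_energy_ev(wavelength_nm):
    """Photon energy in eV for a given wavelength in nm."""
    return HC_EV_NM / wavelength_nm

def photon_wavelength_nm(energy_ev):
    """Photon wavelength in nm for a given energy in eV."""
    return HC_EV_NM / energy_ev

# A red LED emitting near 650 nm corresponds to roughly 1.9 eV:
print(round(photon_energy_ev(650), 2))   # 1.91
# The short-wavelength end of the visible range, ~400 nm, is ~3.1 eV:
print(round(photon_energy_ev(400), 2))   # 3.1
```

The visible range of roughly 400–700 nm thus spans only about 1.8–3.1 eV, which is why a one-decade eV scale fits comfortably on a hand-held spectroscope.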
Students may recognize the spectral lines emitted by each gas lamp and the visible spectrum of the incandescent lamp from previous experiences. This investigation, however, may be their first experience with the broad spectra of LEDs.
Students realize that the spectra of LEDs are different from those of gas lamps and incandescent lamps. Students also recognize that an LED is made of a solid material, which is more complicated than a gas because the atoms in a solid are more closely packed together and interact with each other. Therefore, to understand the spectra of LEDs, one must first understand the spectra of gases.
Students have already observed that gases emit discrete spectral lines; therefore, gases must emit discrete energies of light. Thus students are introduced to an energy level model of the atom (Fig. 1), in which the electron’s total energy, or energy level, is represented by a horizontal line.
Electrons undergo transitions, i.e., they change energy levels, by emitting a photon of light. As per the law of energy conservation:
Electron energy before = Electron energy after + Light (photon) energy. (1)
Figure 2 illustrates what happens when a 2.0-eV photon (orange-red light) is emitted. Thus, by looking at the energies of emitted photons, one can learn what is happening in an atom. This process provides students with the opportunity to build models of the atom.
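Equation (1) amounts to simple bookkeeping with the (negative) electron energies. The sketch below is our own illustration of that bookkeeping; the −1.3-eV upper level matches Fig. 1, but the −3.3-eV lower level is an invented value chosen to reproduce the 2.0-eV photon of Fig. 2.

```python
# Energy conservation for an electron transition, Eq. (1):
#   electron energy before = electron energy after + photon energy
# Electron energies in an atom are negative; the photon carries
# away the (positive) difference between the two levels.

def emitted_photon_energy(e_before_ev, e_after_ev):
    """Photon energy (eV) emitted when an electron drops to a lower level."""
    e_photon = e_before_ev - e_after_ev
    if e_photon <= 0:
        raise ValueError("emission requires dropping to a lower level")
    return e_photon

# Illustrative levels: a drop from -1.3 eV (Fig. 1) to an assumed
# -3.3 eV level emits the 2.0-eV photon shown in Fig. 2:
print(round(emitted_photon_energy(-1.3, -3.3), 2))  # 2.0
```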
The advantage of using the energy level model to represent the atom is that students require only a qualitative understanding of energy and energy conservation to understand how light sources consisting of either gases or solids emit light. The work of Fischler,9 Johnston et al.,3 and Petri and Niedderer10 has shown that students can understand the energy level model of the atom by utilizing concepts of energy and energy conservation in their reasoning. Thus, by using this model students can be introduced to a few basic quantum ideas and can reinforce their understanding of energy conservation.
In our initial development of Visual Quantum Mechanics – Original, when we asked students to construct an energy level diagram from the spectrum of a gas, they often incorrectly associated the energies of the spectral lines with the energy levels rather than with the transitions between energy levels. To alleviate this misconception, we created a computer program, Gas Lamp Emission Spectroscopy, that students use to aid in visualizing an energy level model.11,12 Students select one of the gas lamps on the computer screen and try to match its spectrum by placing energy levels and constructing transitions resulting in spectral lines (Fig. 3). When students drag the energy levels with attached transitions, they immediately see the corresponding changes in the spectral lines, which they can match with the real spectrum. Thus, the program allows students to construct an energy level model that explains the spectrum they observe.
The program directly confronts the aforementioned misconception that the energy of a spectral line is related to an energy level. In using the program, students quickly find that a spectrum cannot be produced by simply placing energy levels on the vertical energy scale, and that the energy of a spectral line is the energy difference associated with a transition.
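The point that spectral lines come from differences between levels, not from the levels themselves, can be made concrete by generating every downward transition from a small set of levels. The sketch below is ours, with invented level values; it is not part of the Spectroscopy Lab Suite software.

```python
from itertools import combinations

def spectral_lines(levels_ev):
    """All photon energies (eV) from downward transitions
    between the given (negative) energy levels."""
    lines = set()
    for lo, hi in combinations(sorted(levels_ev), 2):
        lines.add(round(hi - lo, 3))  # photon energy = level difference
    return sorted(lines)

# Three invented levels give three lines, none of which equals
# the magnitude of any individual level:
levels = [-5.0, -3.0, -1.5]
print(spectral_lines(levels))  # [1.5, 2.0, 3.5]
```

Notice that three levels already yield three distinct lines; with n levels there can be up to n(n−1)/2 lines, which is why matching even a simple gas spectrum forces students to think about transitions rather than level positions.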
An interesting aspect of this approach is that students often find that no two energy level models are identical! For example, one student may produce a model that looks like a ladder with transitions between each step. Another student’s model may have a few energy levels close together at the top, with each transition going to an energy level located at the bottom. Both of these models are correct, since each produces the desired spectrum. Following this activity, the instructor could introduce the limitations of this model, in that it does not give us a unique description of the energies of the atom. Thus, students can learn about the limitations of scientific models in general and how additional information can sometimes resolve the differences between models.
Students then use Gas Lamp Emission Spectroscopy to match the spectrum of an LED, in which the spectral lines are grouped together very closely and appear like a spectral band. They find that their model has two sets of energy levels that are grouped closely together.6 Thus, students discover that atoms in an LED must have many energy levels that are extremely close together. Following this discovery, students are introduced to the idea that solids have many closely spaced, interacting atoms. These interactions create groups of very closely spaced energy levels called energy bands. LEDs have two energy bands — the conduction (or excited-state) band and the valence (or ground-state) band — separated by a gap called the energy band gap. Students move to other programs, LED Spectroscopy and Incandescence Spectroscopy, to investigate how energy bands can explain the spectra of LEDs and incandescent lamps.11
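The emergence of a spectral band from two groups of closely spaced levels can be mimicked numerically: every conduction-to-valence transition is still discrete, but the transition energies crowd together around the band gap. The sketch below is our own illustration with invented band energies; it is not the model used in the LED Spectroscopy program.

```python
def band_transitions(valence, conduction):
    """Photon energies (eV) for every transition from a
    conduction-band level down to a valence-band level."""
    return sorted(c - v for c in conduction for v in valence)

# Two invented bands of ten closely spaced levels each (energies in eV):
valence = [-5.00 + 0.01 * i for i in range(10)]      # -5.00 .. -4.91
conduction = [-3.00 + 0.01 * i for i in range(10)]   # -3.00 .. -2.91
lines = band_transitions(valence, conduction)

# 100 discrete transition energies packed between ~1.91 and ~2.09 eV,
# which a spectroscope shows as a single band near the ~2.0-eV gap:
print(len(lines), round(min(lines), 2), round(max(lines), 2))  # 100 1.91 2.09
```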
Depending on their physics background, students could then move to a computer program that simulates how two semiconductors are combined to construct an LED and how the LED operates in terms of energy bands. The program plays an important role because it illustrates how the energy bands of solids can be used to explain students’ initial investigations of the electrical and spectral properties of LEDs.
Fig. 3. Gas Lamp Emission Spectroscopy program.
“Luminescence: It’s Cool Light!”

Students begin this unit by investigating the physical properties of different light sources, including an incandescent lamp, LEDs, a Lime Light® night-light, Wint-o-green™ Lifesavers®, light sticks, and phosphorescent and fluorescent objects. They find that, unlike the incandescent lamp, these other devices do not feel appreciably warm. These devices emit light not through incandescence, in which the primary mechanism for light emission is thermal energy, but through various forms of luminescence, in which some other process involving another type of energy is used. Students also observe the spectra of various light sources, including gas lamps, fluorescent lamps, LEDs, and phosphorescent and fluorescent objects.
Students are introduced to the energy level model in a manner similar to that used in “Solids & Light.” They are also introduced to a potential energy diagram — a visual graph of electrical potential energy versus distance — to represent a model of an atom. Students also use the Energy Band Creator program. Energy Band Creator allows students to visualize “atoms” of a gas (a few potential energy diagrams relatively far apart), a pure solid (a large number of potential energy diagrams relatively close together), and a solid with impurities (a large number of potential energy diagrams with a few of varying depth) and their effect on energy levels. The Energy Band Creator helps students investigate how the depth, width, and spacing of potential energy diagrams affect energy levels (Fig. 4).

Fig. 4. Energy Band Creator program.

Adding a large number of impurity atoms to a pure solid results in the formation of a third band of energies lying between the conduction and valence bands. This band of energies, called the impurity band or metastable-state band, is characteristic of luminescent solids. After the students are introduced to the concept of the impurity band, they use computer programs in Spectroscopy Lab Suite to create an energy level model to explain the working of a fluorescent lamp, a phosphorescent toothbrush, and an infrared detector card — a device used by repair people to determine whether remote controls for electronic devices function properly.11
Students have observed the presence of both discrete spectral lines and a broad spectral band for a fluorescent lamp. From their previous observations of gas spectra, they are able to recognize the spectral lines as those from a mercury gas lamp. Students know that the fluorescent lamp is a mercury gas lamp with a material coating on the inner walls of the lamp. Students then use the Fluorescence Spectroscopy program to construct a model to explain the broad spectral band (Fig. 5). Students discover that electrons in the solid coating of a fluorescent glass tube make a transition from the ground-state band to the excited-state band by absorbing ultraviolet light emitted by the mercury gas inside the tube. These electrons then lose a small amount of energy to neighboring atoms in the form of thermal energy and make a transition to the impurity-state band. Electrons then lose energy in the form of visible light and make a transition to the ground-state band. The program allows the student to edit the properties of the fluorescent glass tube so that various white-light and black-light fluorescent lamps may be modeled.

Fig. 5. Fluorescence Spectroscopy program.
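This cascade can be checked with simple energy bookkeeping: the visible photon’s energy must equal the absorbed ultraviolet energy minus the thermal loss, so the emitted light always has lower energy than the absorbed light. In the sketch below (ours, not the Fluorescence Spectroscopy program), the 4.9-eV figure corresponds to mercury’s prominent ultraviolet emission, while the 2.4-eV thermal loss is an invented value chosen for illustration.

```python
# Energy bookkeeping for the fluorescence cascade:
#   absorb UV -> lose a little thermal energy -> emit visible light.

def fluorescence_emission_ev(e_absorbed_ev, e_thermal_ev):
    """Visible photon energy (eV) after a UV absorption followed by
    a small thermal loss to neighboring atoms (energy conservation)."""
    e_emitted = e_absorbed_ev - e_thermal_ev
    # Emitted light must carry positive energy, and less than was absorbed:
    assert 0 < e_emitted < e_absorbed_ev
    return e_emitted

# Mercury's strong UV line is near 4.9 eV; an invented 2.4-eV thermal
# loss leaves a ~2.5-eV (green) visible photon:
print(round(fluorescence_emission_ev(4.9, 2.4), 2))  # 2.5
```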
In fluorescent materials, electrons have energies in the impurity-state band for a relatively short period of time. They then emit light as their energy changes to an energy in the valence band. As a result, fluorescent materials glow only while light of sufficient energy shines on them. In phosphorescent objects, electrons remain in the impurity-state band. After a time delay, the electrons emit light as their energy changes. Thus, phosphorescent materials emit light using energy that was absorbed at an earlier time. When all the energy has been converted to light, the object stops glowing in the dark.
With the Fluorescence Spectroscopy program, students learn that electrons of a glow-in-the-dark toothbrush make a transition from the ground-state band to the excited-state band by absorbing visible light from an external source. These excited electrons lose a small amount of energy in the form of thermal energy to nearby atoms and make the transition from the conduction band to the impurity band. These electrons then absorb thermal energy from the surroundings and make a transition back to the conduction band. The electrons lose energy by emitting the visible light characteristic of glow-in-the-dark objects and make a transition to the valence band. The program allows the student to edit the properties of the external light source and the phosphorescent toothbrush to investigate how these variables affect the resulting spectra.
After exploring the properties of an electronic remote-control device and an infrared detector card, students use the IR Detector Card Spectroscopy program to learn that electrons of an infrared detector card absorb energy from visible light and make a transition to the conduction band. Electrons then lose a small amount of energy to neighboring atoms in the form of thermal energy and make a transition to the impurity-state band. Electrons absorb energy from an infrared light source, such as a remote control, and then make a transition back to the excited-state band. These electrons then lose energy by emitting visible light and make a transition to the valence band.
After being introduced to these various energy band models for explaining luminescent phenomena, students can make comparisons between the models.
Results

The Visual Quantum Mechanics – Original materials have been used and adapted in a number of high school and introductory college physics and physical science classrooms. Results indicate that the hands-on activities, interactive computer programs, inexpensive materials, and focus on conceptual understanding are very effective in these classes. Students have indicated that they liked the hands-on activities, computer programs, and real-life science applications. When asked to report what they learned from these instructional units, some students mentioned the mechanism for emission and absorption of light, relating energy diagrams of atoms to spectra, the different ways in which light can be emitted, and how potential energy diagrams can be used to represent all types of atoms, from gases to solids.
Based on the results of our assessments, students are able to recognize the spectral and electrical features of various light sources. When presented with a diagram that has a few energy levels, the majority of students are able to correctly sketch the electronic transitions and identify the “colors” and energies of the resulting spectrum. Most students are able to understand that the energy of a spectral line is related to the difference in energy between two levels and not to the values of the levels themselves.
Students are able to correctly associate the energy diagram with a particular light-emitting device, but do experience some difficulty in identifying specific details about these models, especially those that are somewhat similar. Following the initial implementation of “Luminescence: It’s Cool Light!” in different Iowa high schools, 44% of 124 physics students were able to identify the correct energy band diagram explaining the spectrum of a white-light fluorescent lamp when presented with energy band diagrams for a phosphorescent toy, a black-light fluorescent lamp, a white-light fluorescent lamp, a fluorescent object, and a gas lamp. Fifty percent of the students were able to identify the correct energy band diagram explaining the spectrum of a black-light fluorescent lamp. In both cases, we found a number of students who confused the energy band diagram for a black-light fluorescent lamp with that for a white-light fluorescent lamp. This result is not surprising, since the two energy band diagrams are quite similar except for the differences in the energy requirements and the resulting energies of the light emitted. Sixty-seven percent of the high school students were, however, able to identify the correct energy band diagram explaining the spectral properties of a phosphorescent toy.
Our assessments reveal some student confusion about the difference between absorption and emission of energy for electrons as they undergo a transition from one energy level to another. The confusion between absorption and emission could result from these students not fully understanding which energy level has the highest energy — the highest negative energy level or the lowest negative energy level. These results suggest that students need additional practice in working problems involving energy diagrams and in identifying electron transitions that result in the absorption and the emission of energy.
Adaptations

We have observed that, in implementing “Solids & Light” and/or “Luminescence: It’s Cool Light!” in a high school physics classroom, some instructors have focused on the concrete observational aspects of the materials rather than on developing the energy model of the atom, as a result of their lack of familiarity with this model. The Potential Energy Diagrams instructional unit was developed to provide both students and instructors with experience in reviewing energy conservation using potential energy diagrams. We have found that when students are given opportunities to practice applying energy diagrams to various classical situations, they experience less difficulty when using these diagrams to explain the physical properties of light-emitting devices.
Since computer accessibility in many high school classrooms is a problem, an inexpensive alternative or supplement to the Gas Lamp Emission Spectroscopy program that has proven effective involves the use of “energy ladders” — wooden dowels or pencils placed on a piece of cardboard that contains an energy scale similar to the one found in the program. The wooden dowels represent energy levels that may be placed horizontally on the scale. Electron transitions are represented as large bold arrows drawn to scale on separate pieces of paper. Students can then place these “transitions” pointing downward between two “energy levels” to represent electron transitions that result in the emission of energy. The energy difference between the two “energy levels” equals the energy of the resulting spectral line. On the back of each “transition,” a single vertical line of a specific color identifies the resulting spectral line. The energy ladder, used in conjunction with the computer program, provides students with a concrete, hands-on, and visual means of constructing their energy level model. The energy ladder can also provide students with concrete experience in constructing “electron transitions” that result in the absorption or emission of energy.
Investigations of the physical properties of luminescent devices themselves, without the introduction of energy-level models of the atom, have also been adapted to provide students with a unique means to reinforce their understanding of the concepts of energy and light and to develop proficiency in scientific inquiry.13 For example, students can use light sticks to model how temperature affects the light emitted by a bioluminescent organism — the firefly. This investigation has been very effective and relevant in illustrating the general idea of a scientific model to audiences of all ages and cultures.
We have demonstrated that it is possible to introduce 20th-century physics ideas in an introductory university or high school physics and physical science course with the use of visual and interactive simulations, combined with interesting and relevant hands-on activities using accessible materials. The Visual Quantum Mechanics – Original materials are published by Ztek Inc., http://www.ztek.com. A sampler is available at http://web.phys.ksu.edu/vqm/index.html.
Acknowledgment
The research reported in this article was supported by NSF grant #ESI-9452782 and an Iowa Space Grant Consortium Phase I “Seed Grant.”
THE PHYSICS TEACHER ◆ Vol. 42, March 2004
References
1. A. Hobson, “Teaching quantum theory in the introductory course,” Phys. Teach. 34, 202–210 (April 1996).
2. M. Neuschatz and L. Alpert, Overcoming Inertia: High School Physics in the 1990s (American Institute of Physics, College Park, MD, 1996).
3. I.D. Johnston, K. Crawford, and P.R. Fletcher, “Student difficulties in learning quantum mechanics,” Int. J. Sci. Educ. 20, 427–446 (1998).
4. M.J. Morgan and G. Jakovidis, “Characteristic energy scales of quantum systems,” Phys. Teach. 32, 354–358 (Sept. 1994).
5. D.A. Zollman and KSU Physics Education Research Group, Visual Quantum Mechanics – The Original CD-ROM, Ztek, Lexington, KY (2001).
6. D.A. Zollman, N.S. Rebello, and K. Hogg, “Quantum mechanics for everyone: Hands-on activities integrated with technology,” Am. J. Phys. 70, 252–259 (2002).
7. R. Karplus, “Science teaching and development of reasoning,” J. Res. Sci. Teach. 12, 213–217 (1974).
8. Project STAR Spectroscope, Learning Technologies, Inc., Somerville, MA (http://www.starlab.com/psgi.html).
9. H. Fischler, “The atomic model in science teaching: Learning difficulties or teachers’ problems?” Paper presented at the annual meeting of the National Association for Research in Science Teaching, St. Louis, MO (1996).
10. J. Petri and H. Niedderer, “A learning pathway in high school level quantum atomic physics,” Int. J. Sci. Educ. 20, 1075–1088 (1998).
11. N.S. Rebello, C. Cumaranatunge, D.A. Zollman, and L.T. Escalada, “Simulating the spectra of light sources,” Comp. Phys. 28, 33 (1998).
12. D. Donnelly, “CIP’s seventh annual educational software contest: The winners,” Comp. Phys. 10, 532–541 (1996).
13. L. Escalada, R. Unruh, T. Cooney, and J. Foltz, “Students see the light: Investigating luminescence and incandescence through a three-part learning cycle,” Sci. Teach. 68, 40–43 (2001).
PACS codes: 01.40Ga, 01.50H, 85.30
Lawrence Escalada is an associate professor in physics and science education at the University of Northern Iowa. He teaches introductory physics/physical science and secondary science methods courses, and works with high school physics teachers and students on a number of different projects and activities. He has taught high school physics and physical science and has been involved in
developing instructional materials including Visual Quantum Mechanics – The Original and Physics Resources and Instructional Strategies for Motivating Students (PRISMS) PLUS.
Department of Physics, 205 Physics Building, University of Northern Iowa, Cedar Falls, IA 50614-0150; [email protected]
Sanjay Rebello is an assistant professor in physics education at Kansas State University. He has been involved in physics education research for the past seven years. His interests include research on students’ mental models of real-world applications, curriculum development in contemporary physics, and the use of technology in the teaching and learning of physics.
Department of Physics, Kansas State University, Manhattan, KS 66506-2601; [email protected]
Dean Zollman is head of the Department of Physics and University Distinguished Professor at Kansas State University. He has been conducting research on student learning of physics and developing instructional materials for more than 30 years. AAPT awarded him its 1995 Robert A. Millikan Medal. In 1996 he was named the Doctoral University National Professor of the Year by the Carnegie Foundation for the Advancement of Teaching.
Kansas State University, Manhattan, KS 66506-2601; [email protected]