Real neuroscience in virtual worlds
Daniel A Dombeck1 and Michael B Reiser2
Available online at www.sciencedirect.com
Virtual reality (VR) holds great promise as a tool to study the
neural circuitry underlying animal behaviors. Here, we discuss
the advantages of VR and the experimental paradigms and
technologies that enable closed loop behavioral experiments.
We review recent results from VR research in genetic model
organisms where the potential combination of rich behaviors,
genetic tools and cutting edge neural recording techniques are
leading to breakthroughs in our understanding of the neural
basis of behavior. We also discuss several key issues to
consider when performing VR experiments and provide an
outlook for the future of this exciting experimental toolkit.
Addresses
1 Department of Neurobiology, Northwestern University, Pancoe Laboratory, Evanston, IL 60208, USA
2 Janelia Farm Research Campus, Howard Hughes Medical Institute, 19700 Helix Drive, Ashburn, VA 20147, USA
Corresponding authors: Dombeck, Daniel A. ([email protected]) and Reiser, Michael B.
Current Opinion in Neurobiology 2012, 22:3–10
This review comes from a themed issue on
Neurotechnology
Edited by Winfried Denk and Gero Miesenböck
Available online 2nd December 2011
0959-4388/$ – see front matter
© 2011 Elsevier Ltd. All rights reserved.
DOI 10.1016/j.conb.2011.10.015
Introduction
The behaviors of animals have long fascinated naturalists,
who observed animals in their native environments. A
more mechanistic understanding of behavior was taken
up by the ethologists who combined fieldwork with
experiments conducted under more controlled situations
where behavioral strategies could be isolated and tested
[1]. The spirit of these early investigators is alive and well
today and inspires at least two popular approaches to the
study of neuronal mechanisms of behavioral control:
attaching miniaturized recording devices onto freely man-
euvering animals as they interact with a controlled
environment and adapting larger recording systems to
restrained animals that are stimulated by animal move-
ment controlled dynamic sensory environments. This
latter approach — a form of virtual reality (VR) for
animals — becomes all the more powerful when it is
applied to the small number of organisms, such as flies,
mice, zebrafish, and worms that have become genetic
model systems in the neuroscience community. While
VR has been used for decades in primates to study the
neural basis of behavior [2–5], it is currently only in these
model systems that wide-ranging investigations that com-
bine methods in molecular biology, genetics, neural
recording, and behavior are possible. We therefore restrict
our focus to recent research that has used or made
possible VR as a means to dissect the neural circuitry
underlying behavior in genetic model organisms.
What is virtual reality and why should one use it?
In general, a VR experiment consists of a simulated
environment that is sensed by the animal and is updated
based on the animal’s actions (Figure 1a). The interaction
between the animal and the environment must be para-
metric; that is, movements of the animal must map to
trajectories in parameter space, which in turn correspond
to updates of the virtual world. While the simulation is
often imperfect, the goal of these methods is to reproduce
a sufficiently convincing subset of the stimuli that the
animal would sense while freely moving within the real
analog of the virtual environment. Virtual environments
are often implemented as computer-controlled visual
worlds displayed around the animal (Figure 1b,c), but
the stimulus space could also include or be defined
entirely by tactile, olfactory, auditory or other cues. VR
environments are implemented as closed-loop (feedback)
systems where the actions of the animal in response to the
synthesized cues up to time T are detected and used to
generate the next ‘view’ of the virtual environment at
time T + Δt. This approach is in sharp contrast to most
neuroscience studies that are based on an open-loop
system in which the stimulus conditions are presented
independently of the animal’s response to stimuli (or in
which the experimenter adapts stimulus parameters
based on the animal’s responses across, but not within,
trials [6]).
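The closed-loop scheme just described can be made concrete with a short sketch. The function names and the Δt value below are illustrative placeholders, not part of any published system; a real implementation would substitute instrument-specific measurement and rendering calls.

```python
DT = 0.016  # update interval Δt in seconds; should be shorter than the animal's reaction time

def run_closed_loop(measure_movement, update_world, render_view, n_steps,
                    motor_disturbance=lambda t: 0.0,
                    sensory_disturbance=lambda t: 0.0):
    """Generic VR loop: the view at time T + Δt depends on actions up to time T.

    measure_movement() -> the animal's motor output this frame
    update_world(state, movement, dt) -> next position in parameter space
    render_view(state) -> present the stimulus for the new state
    The two disturbance hooks correspond to the perturbations in Figure 1a.
    """
    state = 0.0          # position in the virtual world's parameter space
    trajectory = []
    for k in range(n_steps):
        t = k * DT
        movement = measure_movement() + motor_disturbance(t)
        state = update_world(state, movement, DT)
        render_view(state + sensory_disturbance(t))
        trajectory.append(state)
    return trajectory
```

In the corresponding open-loop experiment, by contrast, `render_view` would be driven by a predetermined stimulus sequence rather than by `state`.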
The VR approach has two primary advantages: first, a
much finer analysis of perception is enabled since the
experimenter can control and record the sequence of
stimuli that animals receive, and second, restraining
the bodies and/or heads of animals during VR based
behaviors enables the application of a wide range of high
precision functional neural recording techniques. For
example, recording neural activity with two-photon
microscopy, whole cell patch clamp, or MRI requires a
high degree of mechanical stability of the animal’s head
since movements during recording could cause significant
distortions of the recorded signals. These techniques are
difficult or impossible to apply during the freely moving
Figure 1
[Figure 1 panels: (a) schematic of a VR behavioral experiment (animal, motor/virtual-world coupling, mapping from virtual world to stimuli, sensory and motor disturbances); (b) tethered fly above an optical wingbeat analyzer viewing an LED display, with torque ≈ gain × wingbeat difference integrated to rotational position; (c) head-restrained mouse on an air-supported ball with a movement sensor and projected visual display; (d) schematic of the sensory and motor systems of a partially restrained animal (e.g. a tethered flying fly) engaged with the VR instrumentation.]
(a) Schematic view of a virtual reality (VR) behavioral experiment. The animal’s movements are measured and passed through instrumentation and
computational stages (below the dashed line), whereby the motor output is coupled to movements within the virtual world, which are then mapped to
sensory stimuli. The ideal closed loop simulation aims to reproduce the typical feedback a freely moving animal would experience. In VR, the
experimenter may perturb this ideal situation by injecting a disturbance into either the motor or sensory side of these transformations. In the
corresponding open loop experiment, stimuli presented to the animals are not updated based on the animal’s motor outputs. (b) An illustrative
example of a tethered fly flight closed loop experiment. The fly is tethered above an optical wingbeat sensor. Attempted rotations of the animal are
measured as wingbeat differences which are then integrated over the update time of the system to determine angular rotations of the fly within the
virtual world; the corresponding rotations are then fed back to the fly on an LED display. This system has also been widely used to test specific
aspects of closed loop behavior by the introduction of a time varying disturbance that the flies can counteract by producing compensatory reactions.
Part of figure reproduced from [83]. (c) An illustrative example of a head-restrained mouse closed loop visual virtual reality experiment. Mouse
locomotion results in rotations of a trackball. The rotational velocity components of the ball are measured and used to define linear and view angle
velocities in the virtual world. A ‘motor disturbance’ at this stage can make this transformation nonlinear. For example, when a wall is encountered
along the virtual space trajectory, no virtual translocation would take place even though the mouse may still be running on the spherical treadmill.
The velocities are integrated over the update time of the system to determine the new virtual position and view angle. The visual scene defined by the
new position and angle is computer rendered and displayed on a screen surrounding the mouse. Part of figure reproduced from [31•,37•]. (d) A
schematic model of the animal’s nervous system engaged in VR, with specific examples based on tethered fly flight. This view demonstrates that VR
experiments typically only stimulate and measure a fraction of the animal’s sensory and motor systems. As in (b), the wing beat output is measured
and fed back as updates of the visual display. Additionally, a typical experiment may contain several other cases: intact proprioceptive feedback
represented by the dashed line (such as wing motion sensed by stretch receptors), sensory systems that are intact but unstimulated in this
experiment (such as the antennal olfactory system), others that are intact but prevented from being stimulated by the animal’s restraint (such as the
angular velocity sensing halteres), and proprioceptive inputs that are disrupted since the animal’s movements are partially disturbed (such as the
head position sensing prosternal organ). On the motor side there are several outputs that are clamped due to tethering and head fixation (such as
head and body movement), or are free to move but either unmeasured or detected but not used to update the presented sensory inputs (such as
motion of the abdomen and legs).
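The tethered-fly coupling in panel (b), torque ≈ gain × wingbeat difference integrated into a rotation of the displayed scene, reduces to a one-line update rule per display frame. A minimal sketch; the gain and time step are illustrative values, not those of any published arena:

```python
GAIN = 2.0   # deg/s of virtual yaw per unit of wingbeat-amplitude difference (illustrative)
DT = 0.01    # display update interval in seconds (illustrative)

def update_heading(heading_deg, left_amp, right_amp, disturbance_deg_s=0.0):
    """One closed-loop step for tethered flight.

    A left-minus-right wingbeat-amplitude difference signals an attempted
    turn; the scene heading is rotated accordingly, plus any imposed
    time-varying disturbance, and wrapped to [0, 360).
    """
    yaw_velocity = GAIN * (left_amp - right_amp) + disturbance_deg_s
    return (heading_deg + yaw_velocity * DT) % 360.0
```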
behaviors that are of great interest to neurobiologists; VR
provides the ability to reproduce many of these behaviors
in restrained animals where high precision functional
recordings are more feasible. By combining these
methods with genetic techniques [7], especially those
that enable the labeling of specific cell types for targeted
electrophysiology, with a reporter for functional imaging
[8,9], or with a light activated protein for optogenetic
control in model systems [10], we expect that dramatic
progress will be made in dissecting the neural circuitry
underlying behavior.
The ‘nuts and bolts’: experimental paradigms and technologies enabling VR
The realization of a VR system enabling the two main
advantages listed above requires three general categories
of methods and components:
1. Animal restraint: The simultaneous measurement of
neural activity and behavior in VR requires restricting
the animal so that the stimuli can be sensed while
providing enough freedom of mobility for the animal
to move in response to the environment. Progress
towards this goal was built on efforts to develop high
resolution functional neural recording experiments in
which animals were awake but fully immobilized;
these experiments were further developed to enable
the same types of recordings in animals with increasing
mobility, which leads to feedback signals that can be
used in a VR paradigm [11–23]. For example, the focus
on fly behavior in the 1960s at the Institute for
Biological Cybernetics in Tübingen popularized
studies of the many behaviors that can be measured
from tethered flying flies (Figure 1b) whose wings are
free to beat [24]. Conceptually similar experiments
have been carried out by restraining the head of
zebrafish in agarose while leaving its tail free to swim
[25] or restraining the movements of locomoting C. elegans in microfluidic devices [11]. The technique of
restraining the body or head of animals has been
extended by allowing animals to locomote on a
‘spherical treadmill,’ that is usually an air-supported
ball [26–30]; the head-restrained version of this
experimental preparation has become a popular way
to measure behavior-related neural activity in legged
animals using two-photon microscopy [8,31•,32,33,34•] and electrophysiological methods [28,35].
2. Virtual world generation and display: Generating a VR
simulation requires some attention to the details of the
stimulated sensory system(s). Visually defined virtual
environments have dominated the field of animal VR
and offer a good example of how to determine the
parameterization of stimulus presentation. The visual
system of mice and rats, for example, is defined by
large light collecting power, low acuity and a large solid
angle field of view. The construction of an ideal VR
system for rodents therefore requires a large panoramic
display with relatively low resolution and low
illumination level; this has been achieved using the
combination of a projector and an angular amplification
mirror [36] to illuminate a large omni-max like toroidal
screen [31•,37•,38•]. The update time of the environment (Δt) is critically important to providing a high-fidelity simulation to the animals, as it sets the closed-loop system’s bandwidth. In general, Δt should not be
greater than the animal’s corresponding reaction time,
or else the closed-loop behavioral experiment may
suffer from instabilities. To accommodate the reaction
time of rodents (≈100 ms [39]), modern graphics display software on PCs has been used to generate the VR simulations using tools like OpenGL [38•] or a video game engine [31•,37•]. Similar considerations of
fly vision (reaction times on the order of 80 ms [40]
with lower visual acuity and a larger sampling of visual
space [41] in comparison to mice) have led to the
development of VR systems based on arrays of LEDs
[42•]. A recent version of these LED display systems
makes use of programmable microcontrollers to
achieve deterministic control of the timing of
displayed scenes, together with higher level exper-
imental control and data acquisition carried out on a
PC [83]. Similarly, for larval zebrafish (reaction times in the range of tens of ms [43] to ≈100 ms [44•]), VR has been implemented using a DLP projector to display patterns on a small screen below the fish, with PCs running DirectX3D or LabVIEW [15•,44•].
Additional parameters may prove to be important if the
virtual environment is defined by different senses and
stimuli. For example, in comparison to light based
stimulation, the diffusion and clearance time of odors
impose markedly different constraints that must be
considered in the design of olfactory-defined virtual
environments.
3. Measuring animal movements: The animal tethering
methods discussed above allow for precise measure-
ments of animal movements that can be transformed
into feedback signals for use in updating the VR
simulation loop (Figure 1). The wing beating move-
ment of tethered flies, for example, has been recorded
with an electromechanical torque meter [24] or via
optical wing tracking [45]. The walking movements of
head-restrained rodents on modern implementations
of a spherical treadmill have been measured using an
optical computer mouse sensor placed in close
proximity to the ball [28,32,34•,38•]. High-speed
cameras have been used to monitor the movements of
unrestrained zebrafish [15•] and tail movements in head-fixed zebrafish [44•] for use as a feedback signal
in VR paradigms.
Once the movements are recorded, it is necessary to
determine which movement components will be used
and how they will be processed (see Considerations) to
update the next view of the virtual environment. For
example, head-restrained mice running on a spherical
treadmill (Figure 1c) can rotate the ball around any of
three orthogonal axes; of the three possible movement
components, the yaw movement of the ball (around
the vertical axis) is a natural choice for updating the
view angle of the virtual environment, while the
movement of the ball around the horizontal axis can be
used as a control signal for forward and backward
movements in the virtual environment (the third axis
of ball movement is not required and not recorded).
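A sketch of that per-frame transformation, with illustrative gains and a hypothetical pose representation (none of the names or values below come from a specific published system):

```python
import math

DT = 0.016          # frame interval in seconds (illustrative)
YAW_GAIN = 1.0      # degrees of view-angle change per degree of ball yaw
FORWARD_GAIN = 1.0  # virtual cm traveled per cm of ball surface motion

def step_virtual_pose(x, y, view_deg, forward_vel, yaw_vel):
    """Map ball rotations to a virtual pose.

    forward_vel: ball surface speed about the horizontal axis (cm/s)
    yaw_vel:     ball rotation about the vertical axis (deg/s)
    The third (roll) component is ignored, as described in the text.
    Yaw updates the view angle; forward motion is integrated along the
    current heading to update the (x, y) position.
    """
    view_deg = (view_deg + YAW_GAIN * yaw_vel * DT) % 360.0
    dist = FORWARD_GAIN * forward_vel * DT
    x += dist * math.cos(math.radians(view_deg))
    y += dist * math.sin(math.radians(view_deg))
    return x, y, view_deg
```

A ‘motor disturbance’ of the kind described in the Figure 1 caption would be inserted here as a nonlinearity, for example clamping `dist` to zero when the new position would cross a virtual wall.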
VR to study the neural activity underlying behavior
While the powerful combination of VR, behavior, and high-precision functional recording and/or stimulation has thus far been accomplished only in mice (see below), these methods offer great future promise for understanding the
neural basis of behavior in other genetic model systems.
Recent progress on many fronts suggests that these are only
the early days of a rapidly growing field that will soon
expand to flies, fish, and possibly worms. In this section, we
cover the recent results from rodents, and the work that will
likely soon make these same experiments possible in flies
and fish. As a preview of what may be possible with genetic
organisms, we also cover some of the relevant VR research
performed in primates and humans.
Animal spatial navigation is one of the most widely
studied behaviors [46,47] and consequently there has
been great interest in reproducing these navigation beha-
viors within a VR system. One of the first studies to
demonstrate locomoting animals interacting with a virtual
visual environment involved body-tethered rats maneuvering on a large trackball [38•]. Rats were trained to run
from point to point in an infinite repeating square grid in
virtual space. One goal of such experiments is to activate
the navigation neural circuitry during virtual navigation in
the same way that it is activated in freely moving animals
in real environments. Where possible, it is therefore
critical to record the neural activity within relevant cir-
cuitry and make a comparison to the same activity
measured in the freely moving animals. Such a compari-
son was made for hippocampal place cells in head-fixed mice navigating along a virtual linear track [31•,37•]. These studies found that several key features of virtual
place cells were highly similar to those measured from
place cells in freely moving rodents. Having validated the
success of appropriately activating the navigation circuitry
during locomotion in head-fixed mice, the unique poten-
tial of the combination of all of the methods discussed in
this review was then used by these studies to answer
questions that were previously inaccessible. For example,
the subthreshold membrane potential in place cells was
examined using whole cell patch recordings [37•] and the
spatial organization of place cells was studied using two-
photon microscopy [31•].
The recent development of a visual place memory para-
digm for flies [48] demonstrates that VR navigation-based
studies in flies may already be possible, since the tech-
nological components are available to be combined to
achieve the goals outlined in this review. VR techniques
have been used in a range of fly studies as varied as long
range navigation [45], control of flapping [49], visual
pattern learning [50], and navigation strategies based
on visual cues [40]. Recent developments have adapted
these behavioral methods to experiments using either
whole cell patch clamp [18•] or extracellular recordings
from visual system interneurons in tethered flying flies
[51] and functional two photon microscopy from visual
neurons in walking tethered flies during visual motion
stimulation [52]. Rapid progress from zebrafish research-
ers will soon bring the behavioral richness of observational studies [44•] together with functional two-photon microscopy [14,15•] under VR conditions.
VR in worms may be possible using olfaction, tempera-
ture, or even light based environments. While a range of
orientation behaviors have been studied in C. elegans —
including phototaxis [53] — the translation of these stu-
dies into practical VR paradigms might be an exciting
next step, capitalizing on the experimental tractability of
that organism’s nervous system [54–56] and the relative
ease of optical access for calcium imaging [8,11,57,58].
There are several other interesting extensions of visual
VR experiments that provide ample inspiration for stu-
dies with genetic model systems. In one extreme, it has
been possible to study navigation related neural circui-
try in immobile humans and primates that are interact-
ing with a visual virtual world [59–64], much like the
navigation based ‘first person shooter’ video games that
involve virtual world translocation driven by minimal
real world movement by the subject (i.e. finger move-
ments). The relative immobility of the subjects in these
experiments makes them ideal platforms for the appli-
cation of MRI [63,64] or extracellular single unit [59–61]
recordings to assess brain activity. In fact, these record-
ings have shown a remarkable activation of the naviga-
tion circuitry, even while the subjects are immobile. On
the other extreme, it has been possible to present a
virtual world to animals that are freely moving in a small
arena: while the animals maneuver in the environment,
the visual cues on the surrounding walls are manipulated
in closed-loop based on on-line video tracking of the
animals’ current position [15•,42•,65]. This allows for
implementing unnatural coupling between the visual
scene and the animals’ movements and can be used to
study the components of visually guided behaviors, such
as the optomotor response and object-directed loco-
motion. These developments suggest that in the near
future we might see experimenters integrating freely
moving animal VR experiments with miniaturized
recording systems.
Considerations for VR experiments
By design, VR experiments are implemented as closed-
loop systems and as a consequence, scientists who pursue
these methods must grapple with the often non-intuitive
properties of feedback systems. In particular all feedback
systems must be ‘tuned’ in some way; classical feedback
systems are parameterized to allow the designer to balance
the speed and the stability of the closed-loop system. The
simplest form of coupling between the animal’s measured
motor output and subsequent stimuli presentations are
implemented as linear (proportional) relationships, where
the feedback law is literally the equation of a line (in the
form of output = input × gain + offset). In the realm of
behavioral VR these considerations become immediately
relevant — if the gain is set too low, the animal will hardly
notice the consequence of motor actions, but if the gain is
too high, the changes in presented stimuli will amplify
minute reactions of the animals. In either extreme the
closed-loop system will suffer from instabilities that must
be empirically tuned away. A great strength of closed-loop
control systems is that they feature a remarkable robust-
ness to a range of uncertainties, either in the parameters of
the feedback system itself (e.g. the gain) or in the repeat-
ability of the animal’s reactions. Although tuning a feed-
back loop will require some trial and error, in practice, these
behavioral closed loop systems can achieve consistent
levels of performance across animals — this is especially
true for fly flight arenas, which typically do not require any
modification to the system’s settings for years. One of the
best methods for evaluating the robustness of a closed-loop
experiment is the application of a time-varying disturbance
signal (Figure 1a,b) that without correction would cause
the animals to drift strongly in one direction, and then in
the other direction — if the animal is engaged in the
feedback loop and can control its subsequent stimuli with
motor reactions, then the animal should (at least on aver-
age) be able to trim out most of these applied disturbance
signals and maintain a straight course (this technique has
been widely used in fly work, e.g. [66]). Most feedback
loops used in practice are based on simple linear systems as
described, but far more elaborate feedback loops can be
designed, using more complex dynamical couplings (such
as higher order dynamics in the form of an ODE or the
introduction of non-linearities between inputs and
outputs, such as dead-bands, thresholds, etc.) or transform-
ations between many measured signals and high-dimen-
sional stimulus space in a design that explicitly combines
multiple measurements in the evaluation of the feedback
signal(s). In practice, however, the design of such closed-
loop behavioral systems will require significant trial and
error, since the principled methods of control theory are
best applied to deterministic physical systems rather than
behaving animals.
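The effect of gain on stability can be illustrated with a toy simulation: a hypothetical ‘animal’ reacts proportionally against the drift it perceives while a constant disturbance pushes its heading. The reaction model and all numbers are made up for illustration; they model the principle, not real behavior.

```python
DT = 0.01  # simulation time step in seconds (illustrative)

def simulate_drift(animal_gain, n_steps=2000, disturbance=5.0):
    """Closed loop with a constant disturbance and a proportional 'animal'.

    Each step the animal produces a compensatory motor output
    (-animal_gain * heading), which the VR coupling feeds back into the
    heading along with the disturbance. With a moderate gain the heading
    settles at a bounded offset (disturbance / animal_gain); with zero gain
    it drifts without bound; with an excessive gain the loop is unstable.
    """
    heading = 0.0
    for _ in range(n_steps):
        motor = -animal_gain * heading
        heading += (motor + disturbance) * DT
    return heading
```

A higher engaged gain leaves a smaller residual error (5.0 for gain 1, 0.5 for gain 10), while a gain large enough that |1 - gain * Δt| > 1 makes the discrete loop diverge, which is the kind of instability that must be tuned away empirically.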
While certain feedback loops are maintained during VR
behavioral experiments, it is important to consider that
several others are broken. For example, head or body
restraint (Figure 1b,c) can disrupt the elaborate control
systems that animals possess to regulate the position of
the head and eyes [67]. Since VR experiments typically
interfere with the full complement of the animal’s feed-
back systems, a model of the sort diagrammed in
Figure 1d may be a critical tool for the experimental
design and for the interpretation of surprising findings.
Having fewer feedback loops closed certainly removes
the animal’s condition further away from the natural one,
but also points to further advantages of VR methods. In
particular, it is possible to provide animals with other-
wise impossible experimental conditions, such as those
with ‘unnatural’ couplings between the animal and the
world [19,66], the stimulation of individual sensory-
motor loops in isolation, combinations of sensory cues
that do not typically occur together (cue conflict, or
simultaneous combinations of closed-loop control with
open-loop cues [40]), and the ability to make discon-
tinuous jumps in parameter space (e.g. teleportation in
the virtual world).
Restraining an animal is typically not the ideal method for
studying issues relating to motor control and the biomechanics of locomotion. Indeed, tethered flying flies and head-fixed walking rodents and flies do not locomote
with a natural gait (e.g. excessive pitching down in flying
flies [49], strong coupling of walking into rotation for
walking flies [34•] and mice [68]). Nevertheless, these
experiments succeed despite these shortcomings, but this success is defined, by necessity, in a circular way: the similarity of the animals’ behavior to analogous freely moving animal data is itself used to determine whether the closed-loop paradigm succeeds. Therefore, a
comparative analysis of freely walking and restrained
animal motor output can be a productive way to both
further understand motor control issues, and to improve
the fidelity of restrained animal locomotion (e.g. [30]).
Habituation to head restraint [32,34•,68] and training
animals to perform locomotion dependent tasks
[37•,38•] may be useful in recovering more natural loco-
motion gaits. Even with these limitations, the future of
restrained-animal neuroscience is bright, since in many
cases even simple behavioral simulations prove to be very
compelling (from the animal’s perspective) and make it
possible to study complex, naturalistic movements that
organize navigational strategies [69,70].
Outlook
Several recent studies in genetic model organisms have provided new evidence that the activity levels [18•,35]
and even the tuning and encoding properties of neurons
[51,52,71] are modulated by the behavioral state of the
animal. In general the neurons recorded in these studies
were found to be more active while the animals are
actively locomoting, and in one study this modulation
was shown to be graded, that is the more the animals
walked, the greater the enhancement in neuronal activity
[52]. These exciting findings present a challenge to the
design of closed-loop VR experiments, since the activity
of animals dictates not only the updates of the virtual
world but also the properties of the neurons being
measured. The analysis of these layers of modulation is
complicated by the fact that these changes typically occur
on slightly different timescales and so a reverse-corre-
lation-based approach between neurophysiology data and
the animal’s behavioral events will be challenging.
Furthermore, information theoretic analysis methods —
the workhorse of many sensory physiology studies — are
most likely not suited for the analysis of closed-loop data
since there is no well-established framework within infor-
mation theory to address the causal relationship between
input and output signals that defines closed-loop exper-
iments. Despite these challenges, in order to understand
how sensory systems truly function during behavior, these
experiments combined with a more sophisticated analysis
approach will be essential.
Most of the VR studies in genetic model organisms to
date have used visual stimuli as the means of closing the
loop, but environments supplemented with auditory,
olfactory, or tactile cues would engage an increasing
number of the animal’s natural feedback loops and would
likely lead to a more convincing simulation and increased
virtual task performance. Questions about the mechanisms
of path integration or forms of inertial sensing will require
the stimulation of certain sensory systems, such as the mammalian vestibular system or a fly’s gyroscopic halteres, that require physical motion of the animal. In principle these cues can be integrated by, for example, placing the restrained animal on a robotic motion-control system (open-loop studies: [72,73]).
Finally, instead of using the animal’s motor output as
the feedback signal to close the loop, a more direct route
to understanding internal brain states may be to directly
control the virtual environment with neural signals
[74,75] — an important extension that requires the de-
velopment and validation of a suitable neuronal activity
decoding scheme. This ambitious research program is
highly synergistic with efforts to develop functional
neural prosthetics [76].
Certain behaviors, such as complex social interaction,
may prove to be nearly impossible to study using VR
(but see [77]) and therefore studying the neural circuitry
underlying these behaviors at the level described in this
review will require the development of methods such as
high-resolution imaging and intracellular electrophysi-
ology for use in freely roaming animals [54,78–82]. In
light of recent successes in studying brain activity during
a diverse range of behaviors in VR, we fully expect that
this tremendous toolkit will be brought to bear on the most important problems in behavioral neuroscience in the
near future — this revolution, a century in the making,
has only just begun.
References and recommended reading
Papers of particular interest, published within the period of review, have been highlighted as:
• of special interest
1. Tinbergen N: The Study of Instinct. Oxford University Press; 1951.
2. Burgess N, Maguire EA, O'Keefe J: The human hippocampus and spatial and episodic memory. Neuron 2002, 35:625-641.
3. Maguire EA, Burgess N, O'Keefe J: Human spatial navigation: cognitive maps, sexual dimorphism, and neural substrates. Curr Opin Neurobiol 1999, 9:171-177.
4. Tarr MJ, Warren WH: Virtual reality in behavioral neuroscience and beyond. Nat Neurosci 2002, 5 Suppl.:1089-1092.
5. Adamovich SV, Fluet GG, Tunik E, Merians AS: Sensorimotor training in virtual reality: a review. NeuroRehabilitation 2009, 25:29-44.
6. Benda J, Gollisch T, Machens CK, Herz AV: From response to stimulus: adaptive sampling in sensory physiology. Curr Opin Neurobiol 2007, 17:430-436.
7. Luo L, Callaway EM, Svoboda K: Genetic dissection of neural circuits. Neuron 2008, 57:634-660.
8. Tian L, Hires SA, Mao T, Huber D, Chiappe ME, Chalasani SH, Petreanu L, Akerboom J, McKinney SA, Schreiter ER et al.: Imaging neural activity in worms, flies and mice with improved GCaMP calcium indicators. Nat Methods 2009, 6:875-881.
9. Garaschuk O, Griesbeck O, Konnerth A: Troponin C-based biosensors: a new family of genetically encoded indicators for in vivo calcium imaging in the nervous system. Cell Calcium 2007, 42:351-361.
10. Miesenbock G: The optogenetic catechism. Science 2009, 326:395-399.
11. Chronis N, Zimmer M, Bargmann CI: Microfluidics for in vivo imaging of neuronal and behavioral activity in Caenorhabditis elegans. Nat Methods 2007, 4:727-731.
12. Greenberg DS, Houweling AR, Kerr JN: Population imaging of ongoing neuronal activity in the visual cortex of awake rats. Nat Neurosci 2008, 11:749-751.
13. Kuhn B, Denk W, Bruno RM: In vivo two-photon voltage-sensitive dye imaging reveals top-down control of cortical layers 1 and 2 during wakefulness. Proc Natl Acad Sci U S A 2008, 105:7588-7593.
14. Miri A, Daie K, Burdine RD, Aksay E, Tank DW: Regression-based identification of behavior-encoding neurons during large-scale optical imaging of neural activity at cellular resolution. J Neurophysiol 2011, 105:964-980.
15.• Orger MB, Kampff AR, Severi KE, Bollmann JH, Engert F: Control of visually guided behavior by distinct populations of spinal projection neurons. Nat Neurosci 2008, 11:327-333.
In this study, a behavioral assay was developed to identify visual stimuli that drove basic motor patterns in zebrafish. Functional two-photon microscopy was used to monitor the activity in large populations of neurons during visual stimulation in agarose-embedded zebrafish. This work was the precursor to a closed loop version of this paradigm which followed (Ref. [44•]). This study also demonstrated a closed loop visual feedback system applied to fish swimming in a small arena.
16. Rohde CB, Zeng F, Gonzalez-Rubio R, Angel M, Yanik MF: Microfluidic system for on-chip high-throughput whole-animal sorting and screening at subcellular resolution. Proc Natl Acad Sci U S A 2007, 104:13891-13895.
17. Aksay E, Gamkrelidze G, Seung HS, Baker R, Tank DW: In vivo intracellular recording and perturbation of persistent activity in a neural integrator. Nat Neurosci 2001, 4:184-193.
18.• Maimon G, Straw AD, Dickinson MH: Active flight increases the gain of visual motion processing in Drosophila. Nat Neurosci 2010, 13:393-399.
This study is the first to demonstrate stable, targeted whole cell patch clamp recordings from visual interneurons in tethered flying flies. In addition to making an important contribution to the growing body of evidence for behavioral state dependent modulation of neuronal activity, this study opened the door for many future studies on neural function in the context of classic fly flight behaviors.
19. Major G, Baker R, Aksay E, Mensh B, Seung HS, Tank DW: Plasticity and tuning by visual feedback of the stability of a neural integrator. Proc Natl Acad Sci U S A 2004, 101:7739-7744.
20. Poulet JF, Petersen CC: Internal brain state regulates membrane potential synchrony in barrel cortex of behaving mice. Nature 2008, 454:881-885.
21. Andermann ML, Kerlin AM, Reid RC: Chronic cellular imaging of mouse visual cortex during operant behavior and passive viewing. Front Cell Neurosci 2010, 4:3.
22. Komiyama T, Sato TR, O'Connor DH, Zhang YX, Huber D, Hooks BM, Gabitto M, Svoboda K: Learning-related fine-scale specificity imaged in motor cortex circuits of behaving mice. Nature 2010, 464:1182-1186.
23. O'Connor DH, Clack NG, Huber D, Komiyama T, Myers EW, Svoboda K: Vibrissa-based object localization in head-fixed mice. J Neurosci 2010, 30:1947-1967.
24. Gotz KG: Flight control in Drosophila by visual perception of motion. Kybernetik 1968, 4:199-208.
25. Ritter DA, Bhatt DH, Fetcho JR: In vivo imaging of zebrafish reveals differences in the spinal networks for escape and swimming movements. J Neurosci 2001, 21:8956-8965.
26. Buchner E: Elementary movement detectors in an insect visual system. Biol Cybern 1976, 24:85-101.
27. Hassenstein VB, Reichardt W: Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Zeitschr Naturforsch 1956, 11b:513-524.
28. Mason AC, Oshinsky ML, Hoy RR: Hyperacute directional hearing in a microscale auditory system. Nature 2001, 410:686-690.
29. Uexkull JJ, Kriszat G: Streifzüge durch die Umwelten von Tieren und Menschen; ein Bilderbuch unsichtbarer Welten. Berlin: J. Springer; 1934.
30. Weber T, Thorson J, Huber F: Auditory behavior of the cricket I. Dynamics of compensated walking and discrimination paradigms on the Kramer treadmill. J Comp Physiol 1981, 141:215-232.
31.• Dombeck DA, Harvey CD, Tian L, Looger LL, Tank DW: Functional imaging of hippocampal place cells at cellular resolution during virtual navigation. Nat Neurosci 2010, 13:1433-1440.
This study combined virtual reality and cellular resolution functional imaging of place cells in mice. A hippocampal imaging window was developed to allow for the optical recording of populations of place cells in CA1. The methods allowed for measurements of the spatial micro-organization of place cells in the hippocampus.
32. Dombeck DA, Khabbaz AN, Collman F, Adelman TL, Tank DW: Imaging large-scale neural activity with cellular resolution in awake, mobile mice. Neuron 2007, 56:43-57.
33. Nimmerjahn A, Mukamel EA, Schnitzer MJ: Motor behavior activates Bergmann glial networks. Neuron 2009, 62:400-412.
34.• Seelig JD, Chiappe ME, Lott GK, Dutta A, Osborne JE, Reiser MB, Jayaraman V: Two-photon calcium imaging from head-fixed Drosophila during optomotor walking behavior. Nat Methods 2010, 7:535-540.
This study details the methods developed to achieve recordings of neural activity from head-fixed flies using two-photon calcium imaging. This task was accomplished by miniaturizing the spherical treadmill system so that flies could walk on this air-supported ball for many hours while engaged in a visual orientation task; concurrently, the activity of GCaMP3.0-expressing neurons in the visual system was monitored.
35. Niell CM, Stryker MP: Modulation of visual responses by behavioral state in mouse visual cortex. Neuron 2010, 65:472-479.
36. Chahl JS, Srinivasan MV: Reflective surfaces for panoramic imaging. Appl Opt 1997, 36:8275-8285.
37.• Harvey CD, Collman F, Dombeck DA, Tank DW: Intracellular dynamics of hippocampal place cells during virtual navigation. Nature 2009, 461:941-946.
This study developed a visual virtual reality system for head-fixed mice that were maneuvering on a spherical treadmill. The electrophysiology measurements showed the existence of virtual place cells and provided intracellular recordings from these cells. The measurements were compared to theoretical models predicting the intracellular features of place cells.
38.• Holscher C, Schnee A, Dahmen H, Setia L, Mallot HA: Rats are able to navigate in virtual environments. J Exp Biol 2005, 208:561-569.
The authors developed a visual virtual reality system for body-tethered rats that could run on a spherical treadmill. It was shown that rats can perform virtual reality spatial tasks and that their performance on the tasks increases with training.
39. Mauk MD, Buonomano DV: The neural basis of temporal processing. Annu Rev Neurosci 2004, 27:307-340.
40. Reiser MB, Dickinson MH: Drosophila fly straight by fixating objects in the face of expanding optic flow. J Exp Biol 2010, 213:1771-1781.
41. Heisenberg M, Wolf R: Vision in Drosophila: Genetics of Microbehavior. Berlin: Springer-Verlag; 1984.
42.• Strauss R, Schuster S, Gotz KG: Processing of artificial visual feedback in the walking fruit fly Drosophila melanogaster. J Exp Biol 1997, 200:1281-1296.
In this pioneering study, the authors implemented a system that achieves closed loop control over a freely walking fly's visual input by on-line monitoring of the animal's position and orientation and updating a visual display. This system was used to deliver virtual open loop conditions that allowed the authors to disentangle some of the locomotion-induced changes in the visual scene from reactions to external motion cues.
43. McElligott MB, O'Malley DM: Prey tracking by larval zebrafish: axial kinematics and visual control. Brain Behav Evol 2005, 66:177-196.
44.• Portugues R, Engert F: Adaptive locomotor behavior in larval zebrafish. Front Syst Neurosci 2011, 5:72.
This study demonstrates visual closed-loop virtual reality for head-restrained larval zebrafish. The authors characterize a range of open and closed loop visual responses based on tail movements, showing that the larvae rapidly adapt their motor output in response to changes in the closed-loop coupling.
45. Gotz KG: Course-control, metabolism and wing interference during ultralong tethered flight in Drosophila melanogaster. J Exp Biol 1987, 128:35-46.
46. O'Keefe J, Dostrovsky J: The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res 1971, 34:171-175.
47. Collett TS, Collett M: Memory use in insect visual navigation. Nat Rev Neurosci 2002, 3:542-552.
48. Ofstad TA, Zuker CS, Reiser MB: Visual place learning in Drosophila melanogaster. Nature 2011, 474:204-207.
49. Fry SN, Sayaman R, Dickinson MH: The aerodynamics of hovering flight in Drosophila. J Exp Biol 2005, 208:2303-2318.
50. Dill M, Wolf R, Heisenberg M: Visual pattern recognition in Drosophila involves retinotopic matching. Nature 1993, 365:751-753.
51. Jung SN, Borst A, Haag J: Flight activity alters velocity tuning of fly motion-sensitive neurons. J Neurosci 2011, 31:9231-9237.
52. Chiappe ME, Seelig JD, Reiser MB, Jayaraman V: Walking modulates speed sensitivity in Drosophila motion vision. Curr Biol 2010, 20:1470-1475.
53. Ward A, Liu J, Feng Z, Xu XZ: Light-sensitive neurons and channels mediate phototaxis in C. elegans. Nat Neurosci 2008, 11:916-922.
54. Leifer AM, Fang-Yen C, Gershow M, Alkema MJ, Samuel AD: Optogenetic manipulation of neural activity in freely moving Caenorhabditis elegans. Nat Methods 2011, 8:147-152.
55. de Bono M, Tobin DM, Davis MW, Avery L, Bargmann CI: Social feeding in Caenorhabditis elegans is induced by neurons that detect aversive stimuli. Nature 2002, 419:899-903.
56. Stirman JN, Crane MM, Husson SJ, Wabnig S, Schultheis C, Gottschalk A, Lu H: Real-time multimodal optical control of neurons and muscles in freely behaving Caenorhabditis elegans. Nat Methods 2011, 8:153-158.
57. Guo ZV, Hart AC, Ramanathan S: Optical interrogation of neural circuits in Caenorhabditis elegans. Nat Methods 2009, 6:891-896.
58. Kerr R, Lev-Ram V, Baird G, Vincent P, Tsien RY, Schafer WR: Optical imaging of calcium transients in neurons and pharyngeal muscle of C. elegans. Neuron 2000, 26:583-594.
59. Matsumura N, Nishijo H, Tamura R, Eifuku S, Endo S, Ono T: Spatial- and task-dependent neuronal responses during real and virtual translocation in the monkey hippocampal formation. J Neurosci 1999, 19:2381-2393.
60. Ekstrom AD, Kahana MJ, Caplan JB, Fields TA, Isham EA, Newman EL, Fried I: Cellular networks underlying human spatial navigation. Nature 2003, 425:184-188.
61. Hori E, Nishio Y, Kazui K, Umeno K, Tabuchi E, Sasaki K, Endo S, Ono T, Nishijo H: Place-related neural responses in the monkey hippocampal formation in a virtual space. Hippocampus 2005, 15:991-996.
62. Maguire EA, Nannery R, Spiers HJ: Navigation around London by a taxi driver with bilateral hippocampal lesions. Brain 2006, 129:2894-2907.
63. Hassabis D, Chu C, Rees G, Weiskopf N, Molyneux PD, Maguire EA: Decoding neuronal ensembles in the human hippocampus. Curr Biol 2009, 19:546-554.
64. Doeller CF, Barry C, Burgess N: Evidence for grid cells in a human memory network. Nature 2010, 463:657-661.
65. Fry SN, Rohrseitz N, Straw AD, Dickinson MH: Visual control of flight speed in Drosophila melanogaster. J Exp Biol 2009, 212:1120-1130.
66. Heisenberg M, Wolf R: Reafferent control of optomotor yaw torque in Drosophila melanogaster. J Comp Physiol A 1988, 163:373-388.
67. Zeil J, Boeddeker N, Hemmi JM: Vision and the organization of behaviour. Curr Biol 2008, 18:R320-R323.
68. Dombeck DA, Graziano MS, Tank DW: Functional clustering of neurons in motor cortex determined by cellular resolution imaging in awake behaving mice. J Neurosci 2009, 29:13751-13760.
69. Hedwig B, Poulet JF: Complex auditory behaviour emerges from simple reactive steering. Nature 2004, 430:781-785.
70. Lohmann K, Swartz A, Lohmann C: Perception of ocean wave direction by sea turtles. J Exp Biol 1995, 198:1079-1085.
71. Ferezou I, Haiss F, Gentet LJ, Aronoff R, Weber B, Petersen CC: Spatiotemporal dynamics of cortical sensorimotor integration in behaving mice. Neuron 2007, 56:907-923.
72. Sherman A, Dickinson MH: A comparison of visual and haltere-mediated equilibrium reflexes in the fruit fly Drosophila melanogaster. J Exp Biol 2003, 206:295-302.
73. Gu Y, Fetsch CR, Adeyemo B, Deangelis GC, Angelaki DE: Decoding of MSTd population activity accounts for variations in the precision of heading perception. Neuron 2010, 66:596-609.
74. Jarosiewicz B, Chase SM, Fraser GW, Velliste M, Kass RE, Schwartz AB: Functional network reorganization during learning in a brain-computer interface paradigm. Proc Natl Acad Sci U S A 2008, 105:19486-19491.
75. Moritz CT, Perlmutter SI, Fetz EE: Direct control of paralysed muscles by cortical neurons. Nature 2008, 456:639-642.
76. Santhanam G, Ryu SI, Yu BM, Afshar A, Shenoy KV: A high-performance brain-computer interface. Nature 2006, 442:195-198.
77. Kohatsu S, Koganezawa M, Yamamoto D: Female contact activates male-specific interneurons that trigger stereotypic courtship behavior in Drosophila. Neuron 2011, 69:498-508.
78. Lee AK, Manns ID, Sakmann B, Brecht M: Whole-cell recordings in freely moving rats. Neuron 2006, 51:399-407.
79. Flusberg BA, Nimmerjahn A, Cocker ED, Mukamel EA, Barretto RP, Ko TH, Burns LD, Jung JC, Schnitzer MJ: High-speed, miniaturized fluorescence microscopy in freely moving mice. Nat Methods 2008, 5:935-938.
80. Sawinski J, Wallace DJ, Greenberg DS, Grossmann S, Denk W, Kerr JN: Visually evoked activity in cortical cells imaged in freely moving animals. Proc Natl Acad Sci U S A 2009, 106:19557-19562.
81. Schulz D, Southekal S, Junnarkar SS, Pratte JF, Purschke ML, Stoll SP, Ravindranath B, Maramraju SH, Krishnamoorthy S, Henn FA et al.: Simultaneous assessment of rodent behavior and neurochemistry using a miniature positron emission tomograph. Nat Methods 2011, 8:347-352.
82. Szuts TA, Fadeyev V, Kachiguine S, Sher A, Grivich MV, Agrochao M, Hottowy P, Dabrowski W, Lubenov EV, Siapas AG et al.: A wireless multi-channel neural amplifier for freely moving animals. Nat Neurosci 2011, 14:263-269.
83. Reiser MB, Dickinson MH: A modular display system for insect behavioral neuroscience. J Neurosci Methods 2008, 167:127-139.