
Upload: dwight-holt

Post on 03-Jan-2016


TRANSCRIPT

PowerPoint Presentation

Subject wearing a VR helmet, immersed in the virtual environment on the right, with obstacles and targets. Subjects follow the path, avoid the obstacles, and touch the targets.

Top-down view of the experiment. A sequence of fixations is shown, directed at the path (green), obstacles (red), and targets. The subject's path is shown by the thin red line and shows two instances of avoidance of an obstacle (pink). In both cases avoidance occurs while the subject is fixating the next target (blue). General question: How is vision used when walking?

1. How far away is the subject when the obstacle is fixated? (Choose the end of the fixation.)

2. Does this distance vary for chairs vs. people vs. bins? Why?

3. Are some obstacles not fixated at all? Why?

4. What location on the obstacle is fixated? E.g., is it the edge closest to the body?

5. Is there any change in gaze behavior as the path becomes more familiar? Why?

A basic visual task: walking

However, what Walter would not be able to handle is an unexpected salient event, such as the appearance of another pedestrian in the field of view. Walter would be in trouble because he doesn't have looking for other pedestrians in his behavioral repertoire. Where do we look when we walk? Why do we look there?

Suppose that complex behavior can be broken down into simpler sub-tasks.

What sub-tasks can be identified when walking?

- Control heading
- Avoid obstacles
- Update location
- Remember surroundings

Can we identify the function of the fixations made while walking?

Each sub-task can be accomplished either by bottom-up capture of attention or by top-down search for relevant visual information.

The top-down factors reflect learning.

In this lab we would like to:
- Describe gaze behavior as a first step to understanding how vision is used.
- See if we can find evidence for learning where obstacles are. Such learning is critical for top-down control.

Looming: a potential bottom-up cue that attracts gaze. Neurons in Area MT are sensitive to looming stimuli. Note this may seem obvious, but in contrast, a lot of work has tried to predict fixation locations by analyzing properties of the image. It is not clear what role saliency might play in normal vision. Body motion generates image motion over the whole retina.
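Looming can be quantified as the rate of expansion of an object's retinal image: the ratio of angular size to its rate of change gives an estimate of time-to-contact (tau). A minimal sketch, where the object width, approach speed, and sampling interval are made-up illustrative values:

```python
import math

def time_to_contact(theta_prev, theta_curr, dt):
    """Estimate time-to-contact (tau, seconds) from two samples of an
    object's angular size: tau ~= theta / (d theta / dt)."""
    d_theta = (theta_curr - theta_prev) / dt
    if d_theta <= 0:
        return math.inf  # not looming: image is shrinking or static
    return theta_curr / d_theta

# Object approaching at constant speed: angular size theta ~ width / distance
# (small-angle approximation). Distance shrinks from 10 m at 1 m/s.
width, speed, dt = 0.5, 1.0, 0.1
for d in (10.0, 5.0, 2.0):
    theta0 = width / d
    theta1 = width / (d - speed * dt)
    print(f"distance {d:4.1f} m  ->  tau ~ {time_to_contact(theta0, theta1, dt):.2f} s")
```

The estimated tau shrinks as the object gets closer, which is the kind of graded signal a looming-sensitive neuron could carry.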

Neural Circuitry for Saccades (diagram labels): target selection; signals to muscles; inhibits SC; saccade decision; saccade command; substantia nigra pc (dopamine); motion-sensitive area MT. Where do targets come from?

Optic Flow: the pattern on the retina when walking.

Optic Flow might be used for controlling heading direction. This is commonly thought to be a bottom-up mechanism.
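For pure translation, optic-flow vectors radiate from a single focus of expansion (FOE), and heading can be recovered by intersecting the flow lines. A minimal least-squares sketch on a synthetic flow field (the coordinates and the FOE location are hypothetical):

```python
import numpy as np

def focus_of_expansion(points, flows):
    """Least-squares intersection of flow lines: for each point p with flow v,
    the FOE f lies on the line through p along v, so the component of (f - p)
    perpendicular to v must vanish: n . f = n . p, where n = perp(v)."""
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)  # perpendiculars of flows
    b = np.einsum("ij,ij->i", n, points)               # n_i . p_i
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

# Synthetic translational flow radiating from a known FOE at (2, -1).
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(50, 2))
true_foe = np.array([2.0, -1.0])
flow = 0.3 * (pts - true_foe)          # vectors point away from the FOE
print(focus_of_expansion(pts, flow))   # recovers ~ (2, -1)
```

With rotation or noise added to the flow, the same least-squares formulation still gives a reasonable heading estimate, which is one reason FOE-based heading is considered a plausible bottom-up computation.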

How Gaze Patterns are Learned: Neuroeconomics

Fixation on Collider

Learning to Adjust Gaze: changes in fixation behavior are fairly fast, happening over 4-5 encounters (fixations on Rogue get longer, on Safe shorter).


Shorter Latencies for Rogue Fixations

Rogues are fixated earlier after they appear in the field of view. This change is also rapid.

Neural Substrate for Learning: Neurons in the substantia nigra pc in the basal ganglia release dopamine. These neurons signal expected reward. Neurons at all levels of the saccadic eye movement circuitry are sensitive to reward. This provides the neural basis for learning gaze patterns in natural behavior, and for modeling these processes using Reinforcement Learning. These findings are important because, if we want to understand how fixation patterns and timing are so tightly linked to task, we need reward mechanisms like this to control learning.
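The dopamine signal described above is commonly modeled as a temporal-difference reward prediction error. A toy TD(0) sketch (the chain of states, reward, and learning rate are illustrative, not fit to any data):

```python
# Toy TD(0) on a 3-state chain s0 -> s1 -> s2, reward 1.0 on reaching s2.
# delta = r + gamma * V(s') - V(s) plays the role of the dopamine signal:
# large and positive when the reward is unexpected, shrinking toward zero
# as the reward becomes predicted.
gamma, alpha = 0.9, 0.5
V = [0.0, 0.0, 0.0]
first_delta = last_delta = None
for episode in range(200):
    for s, s_next, r in [(0, 1, 0.0), (1, 2, 1.0)]:
        delta = r + gamma * V[s_next] - V[s]
        V[s] += alpha * delta
        if s == 1:
            last_delta = delta
            if first_delta is None:
                first_delta = delta
print(V)                          # V[1] -> 1.0, V[0] -> 0.9 after learning
print(first_delta, last_delta)    # prediction error shrinks with learning
```

The first prediction error at the rewarded transition is maximal; after learning it is near zero, matching the classic shift of dopamine responses from reward delivery to reward-predicting cues.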

Neural Circuitry for Saccades (diagram labels): target selection; signals to muscles; inhibits SC; saccade decision; saccade command; planning movements; substantia nigra pc; substantia nigra pc modulates caudate. Where do targets come from?

Neurons at all levels of the saccadic eye movement circuitry are sensitive to reward.

LIP: lateral intraparietal cortex. Neurons involved in initiating a saccade to a particular location have a bigger response if the reward is bigger or more likely.

SEF: supplementary eye fields. FEF: frontal eye fields. Caudate nucleus in the basal ganglia.

This provides the neural basis for learning gaze patterns in natural behavior, and for modeling these processes using Reinforcement Learning (e.g., Sprague, Ballard, Robinson, 2007).

Virtual environments allow direct comparison of human behavior and model predictions in the same natural context.

Use Reinforcement Learning models with an embodied agent acting in the virtual environment.

Modelling Natural Behavior in Virtual Environments

Assume behavior is composed of a set of sub-tasks (Sprague, Ballard, Robinson, 2007; Rothkopf, 2008).

Modelling behaviors using virtual agents: model agent after learning.

Sub-tasks: pick up litter; follow walkway; avoid obstacles.

Controlling the sequence of fixations: choose the task that reduces uncertainty of reward the most. (Diagram labels: obs, can, side.)
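The scheduling rule above (give gaze to the task that reduces reward uncertainty the most) can be sketched as a simple arbiter: each sub-task's state estimate becomes less certain while unattended and is sharpened by a fixation, and gaze goes to the task whose uncertainty costs the most expected reward. All numbers below are illustrative, not taken from the Sprague et al. model:

```python
# Sketch of uncertainty-driven gaze scheduling across three sub-tasks.
# Each task keeps a variance on its task-relevant state; variance grows
# while the task is not fixated and collapses when it is. Gaze goes to the
# task whose (reward weight x variance) -- a proxy for the expected cost
# of its uncertainty -- is largest.
tasks = {
    "avoid obstacles": {"weight": 3.0, "var": 1.0},
    "follow walkway":  {"weight": 1.0, "var": 1.0},
    "pick up litter":  {"weight": 2.0, "var": 1.0},
}
GROWTH, OBS_VAR = 0.2, 0.1   # drift per step; observation variance of a fixation

schedule = []
for step in range(9):
    chosen = max(tasks, key=lambda t: tasks[t]["weight"] * tasks[t]["var"])
    schedule.append(chosen)
    for name, t in tasks.items():
        if name == chosen:
            # fixation: fuse an observation, shrinking the variance
            t["var"] = t["var"] * OBS_VAR / (t["var"] + OBS_VAR)
        else:
            t["var"] += GROWTH   # unattended state becomes less certain
print(schedule)   # high-weight tasks get fixated more often
```

Even this toy arbiter reproduces the qualitative pattern in the lectures: gaze rotates among the sub-tasks, with more fixations allocated to the more valuable (or more dangerous) ones.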



Humans walk in the avatar's environment.

Reward weights estimated from human behavior. (Figure: human path vs. avatar path.)

(Figure: time spent fixating the Car, Roadside, Road, and Intersection under two instructions: "Follow the car" vs. "Follow the car and obey traffic rules." Shinoda et al., 2001.) Detection of signs at an intersection results from frequent looks.

What do we know? Previous work on the distribution of attention in natural environments: probability of detection P = 1.0 at an intersection vs. P = 0.3 mid-block. The greater probability of detection in probable locations suggests subjects learn where to attend/look.

How well do human subjects detect unexpected events? Shinoda et al. (2001): detection of briefly presented Stop signs.