Chapter 2. Outline: Linear filters; Visual system (retina, LGN, V1); Spatial receptive fields (V1; LGN, retina); Temporal receptive fields in V1; Direction selectivity

TRANSCRIPT

• Slide 1: Chapter 2

• Slide 2: Outline
  – Linear filters
  – Visual system (retina, LGN, V1)
  – Spatial receptive fields: V1; LGN, retina
  – Temporal receptive fields in V1
  – Direction selectivity

• Slide 3: Linear filter model
  Given s(t) and r(t), what is D?

• Slide 4

• Slide 5: White noise stimulus

• Slide 6: Fourier transform

• Slide 7: H1 neuron in the visual system of the blowfly
  A: The stimulus is a velocity profile. B: Response of the H1 neuron of the fly visual system. C: The estimate r_est(t) obtained with the linear kernel D(τ) (solid line) and the actual neural rate r(t) agree when the rates vary slowly. D(τ) is constructed using white noise.

• Slide 8: Deviation from linearity

• Slide 9

• Slide 10: Early visual system: retina
  5 types of cells:
  – Rods and cones: phototransduction into an electrical signal
  – Lateral interaction of bipolar cells through horizontal cells; no action potentials for local computation
  – Action potentials in retinal ganglion cells, coupled by amacrine cells
  Note: G_1 shows an OFF response, G_2 an ON response.

• Slide 11: Pathway from retina via LGN to V1
  Lateral geniculate nucleus (LGN) cells receive input from retinal ganglion cells of both eyes. Both LGNs represent both eyes, but different parts of the visual world.
  Neurons in retina, LGN and visual cortex have receptive fields:
  – Neurons fire only in response to higher/lower illumination within the receptive field.
  – The neural response depends (indirectly) on illumination outside the receptive field.

• Slide 12: Simple and complex cells
  Cells in retina, LGN and V1 are simple or complex.
  – Simple cells: modeled as linear filters.
  – Complex cells: show invariance to spatial position within the receptive field; poorly described by a linear model.

• Slide 13: Retinotopic map
  Neighboring image points are mapped onto neighboring neurons in V1. The visual world is centered on the fixation point; the left/right visual world maps to the right/left V1. Distance on the display (eccentricity) is measured in degrees by dividing by the distance to the eye.

• Slide 14: Retinotopic map

• Slide 15

• Slide 16: Visual stimuli

• Slide 17: Nyquist frequency

• Slide 18: Spatial receptive fields

• Slide 19: V1 spatial receptive fields

• Slide 20: Gabor functions

• Slide 21: Response to grating

• Slide 22: Temporal receptive fields
  Space-time evolution of a cat V1 receptive field: the ON/OFF boundary changes to an OFF/ON boundary over time. Extrema locations do not change with time, so the kernel is separable: D(x, y, τ) = D_s(x, y) D_t(τ).

• Slide 23: Temporal receptive fields

• Slide 24: Space-time receptive fields

• Slide 25

• Slide 26

• Slide 27: Direction selective cells

• Slide 28: Complex cells

• Slide 29: Example of non-separable receptive fields: LGN X cell

• Slide 30

• Slide 31: Comparison of model and data

• Slide 32: Constructing V1 receptive fields
  Oriented V1 spatial receptive fields can be constructed from LGN center-surround neurons.

• Slide 33
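The linear-filter slides (3 through 7) state the problem (given s(t) and r(t), what is D?) and note that D(τ) is constructed using white noise. Below is a minimal numerical sketch of that reverse-correlation recipe; the parameter values, the synthetic biphasic kernel and all variable names are illustrative assumptions, not taken from the slides.

```python
# Sketch of the white-noise / reverse-correlation idea from Slides 3-7:
# given a stimulus s(t) and a rate r(t), estimate the linear kernel D(tau)
# and the prediction r_est(t) = r0 + sum_tau D(tau) s(t - tau) dt.
# All parameter values below are illustrative, not from the slides.

import numpy as np

dt = 0.002                      # time step (s)
T = 200.0                       # total duration (s)
n = int(T / dt)
rng = np.random.default_rng(0)

# White-noise stimulus: <s(t) s(t')> ~ (sigma^2 / dt) * delta_{t,t'}
sigma2 = 1.0
s = rng.normal(0.0, np.sqrt(sigma2 / dt), size=n)

# A "true" biphasic kernel, used only to synthesize the data
tau = np.arange(0, 0.3, dt)
D_true = np.exp(-tau / 0.03) * np.sin(2 * np.pi * tau / 0.12)

r0 = 20.0                                   # background rate (Hz)
r = r0 + np.convolve(s, D_true)[:n] * dt    # linear response
r += rng.normal(0.0, 2.0, size=n)           # measurement noise

# Reverse correlation: D(tau) ≈ <r(t) s(t - tau)> / sigma^2
lags = len(tau)
D_est = np.array([np.mean(r[k:] * s[:n - k]) for k in range(lags)]) / sigma2

# Linear prediction from the estimated kernel
r_est = r0 + np.convolve(s, D_est)[:n] * dt
print("kernel correlation:", np.corrcoef(D_true, D_est)[0, 1])
```

As on Slide 7, the resulting r_est(t) tracks the true rate only when the rate varies slowly; faster fluctuations expose the deviations from linearity mentioned on Slide 8.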
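Slides 18 through 21 summarize V1 spatial receptive fields with Gabor functions and their response to gratings. The sketch below builds a Gabor kernel D_s(x, y) under assumed parameters (envelope widths, preferred spatial frequency, phase) and integrates it against full-field gratings to show the orientation tuning of the linear response; the function names and grid choices are mine, not values from the slides.

```python
# Gabor-function spatial receptive field (Slides 18-21) and its linear
# response to sinusoidal gratings of varying orientation.
# Parameter values are illustrative assumptions.

import numpy as np

def gabor(x, y, sigma_x=1.0, sigma_y=2.0, k=2.0, phi=0.0):
    """Gabor receptive field: Gaussian envelope times a cosine carrier."""
    gauss = np.exp(-x**2 / (2 * sigma_x**2) - y**2 / (2 * sigma_y**2))
    return gauss / (2 * np.pi * sigma_x * sigma_y) * np.cos(k * x - phi)

def grating(x, y, K, theta, phase=0.0):
    """Full-field sinusoidal grating with spatial frequency K and orientation theta."""
    return np.cos(K * (x * np.cos(theta) + y * np.sin(theta)) - phase)

# Spatial grid (degrees of visual angle)
xs = np.linspace(-6, 6, 241)
x, y = np.meshgrid(xs, xs)
dA = (xs[1] - xs[0]) ** 2

D_s = gabor(x, y)

# Linear response L = integral of D_s(x, y) * stimulus(x, y) dx dy,
# evaluated for gratings at the preferred spatial frequency, varying orientation.
thetas = np.deg2rad(np.arange(0, 181, 15))
responses = [np.sum(D_s * grating(x, y, K=2.0, theta=th)) * dA for th in thetas]

for th, L in zip(np.rad2deg(thetas), responses):
    print(f"orientation {th:5.1f} deg  ->  linear response {L:+.3f}")
```

Sweeping the grating's spatial frequency K instead of its orientation gives the spatial-frequency tuning curve in the same way.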
• Slide 34: Stochastic neural networks
  Architecture: 2000 top-level neurons; 500 neurons; 28 x 28 pixel image; 10 label neurons.
  The model learns to generate combinations of labels and images. To perform recognition we start with a neutral state of the label units and do an up-pass from the image, followed by a few iterations of the top-level associative memory. The top two layers form an associative memory whose energy landscape models the low-dimensional manifolds of the digits. The energy valleys have names. (Hinton)

• Slide 35: Samples generated by letting the associative memory run with one label clamped, using Gibbs sampling. (Hinton)

• Slide 36: Examples of correctly recognized handwritten digits that the neural network had never seen before. (Hinton)

• Slide 37: How well does it discriminate on the MNIST test set with no extra information about geometric distortions?
  – Generative model based on RBMs: 1.25%
  – Support Vector Machine (Decoste et al.): 1.4%
  – Backprop with 1000 hiddens (Platt): ~1.6%
  – Backprop with 500 --> 300 hiddens: ~1.6%
  – K-Nearest Neighbor: ~3.3%
  See LeCun et al. (1998) for more results. It is better than backprop and much more neurally plausible, because the neurons only need to send one kind of signal and the teacher can be another sensory input. (Hinton)

• Slide 38: Summary
  – Linear filters; white noise stimulus for optimal estimation
  – Visual system (retina, LGN, V1); visual stimuli
  – V1: spatial receptive fields, temporal receptive fields, space-time receptive fields, non-separable receptive fields, direction selectivity
  – LGN and retina: non-separable ON-center OFF-surround cells
  – V1 direction selective simple cells as a sum of LGN cells

• Slide 39: Exercise 2.3
  Based on Kara, Reinagel and Reid (Neuron, 2000): simultaneous single-unit recordings of retinal ganglion cells, LGN relay cells and simple cells from primary visual cortex (Fig. 1, 2, 3). Spike count variability (Fano factor) is less than Poisson, doubling from RGC to LGN and from LGN to cortex. The data are explained by a Poisson process with a refractory period.
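Exercise 2.3 rests on the point that a Poisson process with a refractory period has spike-count variability below the Poisson value of 1. The simulation sketched below illustrates just that effect; the rate, counting window and refractory periods are assumed values for illustration, not the figures from Kara, Reinagel and Reid (2000).

```python
# Sketch for Exercise 2.3: spike-count Fano factors of a homogeneous Poisson
# process with and without an absolute refractory period.
# Rate, refractory period and window length are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(1)

def poisson_counts(rate, t_ref, window, n_trials):
    """Spike counts per trial for a Poisson process with absolute refractory period t_ref."""
    counts = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        t, n = 0.0, 0
        while True:
            t += rng.exponential(1.0 / rate) + t_ref   # ISI = refractory period + exponential
            if t > window:
                break
            n += 1
        counts[i] = n
    return counts

def fano(counts):
    """Fano factor: variance of the spike count divided by its mean."""
    return counts.var() / counts.mean()

window, n_trials = 0.3, 5000        # 300 ms counting window
for t_ref in (0.0, 0.002, 0.010):   # no refractoriness, 2 ms, 10 ms
    c = poisson_counts(rate=50.0, t_ref=t_ref, window=window, n_trials=n_trials)
    print(f"t_ref = {1e3 * t_ref:4.1f} ms  mean count = {c.mean():5.2f}  Fano = {fano(c):.2f}")
```

In the simulation the Fano factor stays near 1 for t_ref = 0, while adding a refractory period pushes it below 1, in line with the sub-Poisson counts described in the exercise.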