Computational Cognitive Neuroscience Lab
Today: Model Learning
Computational Cognitive Neuroscience Lab
» Today:
» Homework is due Friday, Feb 17
» Chapter 4 homework is shorter than the last one!
» Undergrads omit 4.4, 4.5, 4.7c, 4.7d
Hebbian Learning
» “Neurons that fire together, wire together”
» Correlation between sending and receiving activity strengthens the connection between them
» “Don’t fire together, unwire”
» Anti-correlation between sending and receiving activity weakens the connection (a minimal Hebbian update is sketched below)
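A minimal sketch of the plain Hebbian rule in Python; the learning rate and activity values are illustrative assumptions, not simulation parameters from the text:

```python
def hebb_update(w, x, y, lrate=0.1):
    # Plain Hebbian rule: dw = lrate * x * y
    # x = sending (presynaptic) activity, y = receiving (postsynaptic) activity
    return w + lrate * x * y

w = 0.0
for _ in range(20):
    w = hebb_update(w, x=1.0, y=1.0)  # the units fire together...
print(w)                              # ...so the weight keeps growing
```

Note that with purely positive activities this rule only ever strengthens; the runaway growth this produces is exactly the problem the “Fixing Hebbian learning” slide takes up.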
LTP/D via NMDA receptors
» NMDA receptors allow calcium to enter the (postsynaptic) cell
» NMDA receptors are blocked by Mg++ ions, which are expelled when the membrane potential increases
» Glutamate (excitatory) binds to the unblocked NMDA receptor, causing a structural change that allows Ca++ to pass through
Calcium and Synapses
» Calcium initiates multiple chemical pathways, depending on the level of calcium
» Low Ca++ --> long-term depression (LTD)
» High Ca++ --> long-term potentiation (LTP)
» LTP/D effects: new postsynaptic receptors, increased dendritic spine size, or increased presynaptic release processes (via retrograde messenger)
Fixing Hebbian learning
» Hebbian learning results in infinite weights!
» Fix: Oja’s normalization (savg_corr); see the Oja-rule sketch below
» When to learn?
» Fix: Conditional PCA--learn only when you see something interesting
» A single unit hogs everything?
» Fix: kWTA and contrast enhancement --> specialization
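Oja’s rule is the classic fix for runaway weights; a minimal sketch, assuming a single two-input unit and a made-up learning rate (the simulator’s actual correction is the savg_corr parameter named above):

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_update(w, x, lrate=0.02):
    # Oja's rule: dw = lrate * y * (x - y * w)
    # The subtractive y*w term keeps the weight vector bounded;
    # w converges toward the first principal component of the inputs.
    y = w @ x
    return w + lrate * y * (x - y * w)

X = rng.normal(size=(500, 2)) * np.array([3.0, 1.0])  # more variance on axis 0
w = rng.normal(size=2)
for x in X:
    w = oja_update(w, x)
print(w, np.linalg.norm(w))  # the norm settles near 1 instead of blowing up
```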
Principal Components Analysis (PCA)
» Principal, as in primary; not principle, as in an idea
» PCA seeks a linear combination of variables such that maximum variance is extracted from the variables. It then removes this variance and seeks a second linear combination which explains the maximum proportion of the remaining variance, and so on until you run out of variance.
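A small numpy sketch of that extract-and-remove procedure (deflation), run on made-up anisotropic data:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) * np.array([3.0, 2.0, 0.5])  # toy data
X = X - X.mean(axis=0)  # center before extracting variance

components = []
for _ in range(3):
    C = X.T @ X / len(X)          # covariance of the variance that remains
    vals, vecs = np.linalg.eigh(C)
    pc = vecs[:, -1]              # linear combination with maximum variance
    components.append(pc)
    X = X - np.outer(X @ pc, pc)  # remove that variance, then repeat
print(np.round(components, 2))    # stops being informative once variance runs out
```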
PCA continued
» This is like linear regression, except the whole collection of variables (a vector) is correlated with itself to make a matrix
» The line of best fit through this regression is the first principal component!
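The same idea through the matrix view: the top eigenvector of the correlation matrix is the first principal component, i.e., the line-of-best-fit direction (the data here is illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.multivariate_normal([0, 0], [[3.0, 1.5], [1.5, 1.0]], size=500)

R = np.corrcoef(X, rowvar=False)  # the variables correlated with themselves
vals, vecs = np.linalg.eigh(R)
print(vecs[:, -1])                # first principal component: the best-fit line
```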
PCA cartoon
Conditional PCA
» “Perform PCA only when a particular input is received”
» Condition: The forces that determine when a receiving unit is active
» Competition means hidden units will specialize for particular inputs
» So hidden units only learn when their favorite input is available
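A sketch of the CPCA weight update, dw_ij = lrate * y_j * (x_i - w_ij); the network size and the input/activity patterns below are made up for illustration:

```python
import numpy as np

def cpca_update(W, x, y, lrate=0.01):
    # CPCA rule: dW[i, j] = lrate * y[j] * (x[i] - W[i, j])
    # Receiving activity y gates learning: a unit with y == 0 leaves its
    # weights untouched, so each unit only learns about inputs that drive it.
    return W + lrate * y[None, :] * (x[:, None] - W)

rng = np.random.default_rng(3)
W = rng.uniform(0.4, 0.6, size=(4, 2))  # 4 inputs -> 2 hidden units
x = np.array([1.0, 1.0, 0.0, 0.0])      # one input pattern
y = np.array([1.0, 0.0])                # unit 0 won the competition
W = cpca_update(W, x, y)                # only unit 0's weights move toward x
```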
Self-organizing learning
» kWTA determines which hidden units are active for a given input
» CPCA ensures those hidden units learn only about a single aspect of that input
» Contrast enhancement -- drive high weights higher, low weights lower
» Contrast enhancement helps units specialize (and share); a kWTA plus contrast-enhancement sketch follows
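A sketch of how those two pieces might look in code; kwta and contrast_enhance are hypothetical helpers, and the sigmoid below is one common form of weight contrast enhancement (the parameter names echo the simulator’s wt_gain), not the simulator’s exact implementation:

```python
import numpy as np

def kwta(net, k=2):
    # k-winners-take-all: only the k most strongly driven units stay active
    act = np.zeros_like(net)
    winners = np.argsort(net)[-k:]
    act[winners] = net[winners]
    return act

def contrast_enhance(w, wt_gain=6.0, wt_off=1.25):
    # Sigmoidal enhancement (expects 0 < w < 1): weights above the
    # midpoint are pushed toward 1, weights below it toward 0
    return 1.0 / (1.0 + (w / (wt_off * (1.0 - w))) ** (-wt_gain))

print(kwta(np.array([0.2, 0.9, 0.4, 0.8]), k=2))     # only the two winners survive
print(contrast_enhance(np.array([0.3, 0.55, 0.8])))  # weights driven apart
```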
Bias-variance dilemma
» High bias--actual experience does not change the model much, so the biases had better be good!
» Low bias--experience largely determines what is learned, and so does random error! The learned model could come out different on each run: high model variance (see the illustration below)
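An illustrative way to see the dilemma (this example is not from the slides): fit a high-bias model (a line) and a low-bias model (a degree-9 polynomial) to repeated noisy samples of the same curve, and compare how much their predictions vary:

```python
import numpy as np

rng = np.random.default_rng(4)

def prediction_at_half(degree):
    # Fit a polynomial to a fresh noisy sample of the same underlying
    # curve, then predict at a fixed test point.
    x = np.linspace(0, 1, 20)
    y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)
    return np.polyval(np.polyfit(x, y, degree), 0.5)

for degree in (1, 9):
    preds = np.array([prediction_at_half(degree) for _ in range(200)])
    # degree 1 (high bias): stable predictions; degree 9 (low bias):
    # predictions chase the noise, so they vary run to run
    print(degree, preds.std())
```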
Architecture as Bias
» Inhibition drives competition; competition determines which units are active; and unit activity determines learning
» Thus, deciding which units share inhibitory connections (are in the same layer) will affect the learning
» This architecture is the learning bias!
Fidelity and Simplicity of representations
» Information must be lost in the world-to-brain transformation (p118)
» There is a tradeoff between the amount of information lost and the complexity of the representation
» The fidelity/simplicity tradeoff is set by:
» Conditional PCA (first principal component only)
» Competition (k value)
» Contrast enhancement (savg_corr, wt_gain)