TRANSCRIPT
Focus on Unsupervised Learning
No teacher specifying the right answer
Techniques for autonomous software or robots to learn to characterize their sensations
“Competitive” learning algorithm
Winner-take-all
Learning Rule: Iterate
Find “winner”
Delta = learning rate * (sample – prototype)
Example: Learning rate = .05, Sample = (122, 180), Winner = (84, 203)
DeltaX = learning rate * (sample x – winner x) = .05 * (122 – 84) = 1.9
New prototype x value = 84 + 1.9 = 85.9
DeltaY = .05 * (180 – 203) = -1.15
New prototype y value = 203 – 1.15 = 201.85
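The worked example above can be sketched in Python. The helper names are my own, and the winner is picked by squared Euclidean distance, which the slides imply but do not state:

```python
# Minimal winner-take-all sketch. Helper names are mine; the winner is
# chosen by squared Euclidean distance (an assumption, not from the slides).

def find_winner(prototypes, sample):
    """Index of the prototype closest to the sample."""
    def dist2(p):
        return sum((pi - si) ** 2 for pi, si in zip(p, sample))
    return min(range(len(prototypes)), key=lambda i: dist2(prototypes[i]))

def update(prototype, sample, rate):
    """delta = learning rate * (sample - prototype), applied per component."""
    return [p + rate * (s - p) for p, s in zip(prototype, sample)]

prototypes = [[84, 203], [10, 10]]
sample = [122, 180]
w = find_winner(prototypes, sample)                  # the (84, 203) prototype wins
prototypes[w] = update(prototypes[w], sample, 0.05)  # moves to about (85.9, 201.85)
```

Iterating this over many samples pulls each prototype toward the center of the cluster it keeps winning.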
Python Demo
Sound familiar?
Clustering
Dimensionality reduction
Data visualization
Yves Amu Klein’s Octofungi uses a Kohonen neural network to react to its environment
Associative learning method
Biologically inspired
Behavioral conditioning and psychological models
activation = sign(input sum)
+1 and -1 inputs
2 layers
Unsupervised: weight change = learning constant * neuron A activation * neuron B activation
Supervised: weight change = learning constant * desired output * input value
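The activation function and the two update rules can be sketched as plain functions (the function and variable names are my own):

```python
# Sketch of the sign activation and the two Hebbian-style update rules above.
# Function and variable names are my own.

def sign(x):
    """Threshold activation: +1 for non-negative input sums, -1 otherwise."""
    return 1 if x >= 0 else -1

def hebbian_delta(rate, act_a, act_b):
    """Unsupervised rule: learning constant * neuron A activation * neuron B activation."""
    return rate * act_a * act_b

def supervised_delta(rate, desired, inp):
    """Supervised rule: learning constant * desired output * input value."""
    return rate * desired * inp
```

With ±1 activations, agreeing neurons (+1, +1 or -1, -1) strengthen the connecting weight, and disagreeing neurons weaken it.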
Long-term memory
Inspired by Hebbian learning
Content-addressable memory
Feedback and convergence
Attractor – “a state or output vector in a system toward which the system consistently evolves given a specific input vector.”
Attractor basin – “the set of input vectors surrounding a learned vector which will converge to the same output vector.”
Bi-directional Associative Memory
Attractor network with 2 layers (e.g., one for smell, one for taste)
Information flows in both directions
Matrix worked out in advance
Hamming vector – vector composed of +1 and -1 only
Ex. [1,-1,-1,1], [1,1,-1,1]
Hamming distance – number of components by which 2 vectors differ
Ex. [1,-1,-1,1] and [1,1,-1,1] differ in only one element (index 1), so Hamming distance = 1
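The distance calculation is a one-liner (the function name is my own):

```python
# Hamming distance: count the positions where two +1/-1 vectors differ.

def hamming_distance(u, v):
    return sum(1 for a, b in zip(u, v) if a != b)

print(hamming_distance([1, -1, -1, 1], [1, 1, -1, 1]))  # 1
```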
Weights are a matrix based on memories we want to store
To associate X = [1,-1,-1,-1] with Y = [-1,1,1], each weight is the product of a Y component (row) and an X component (column):

          X:   1  -1  -1  -1
Y = -1:       -1   1   1   1
Y =  1:        1  -1  -1  -1
Y =  1:        1  -1  -1  -1
To store both [1,-1,-1,-1] -> [1,1,1] and [-1,-1,-1,1] -> [1,-1,1], add the two matrices:

 1 -1 -1 -1       -1 -1 -1  1       0 -2 -2  0
 1 -1 -1 -1   +    1  1  1 -1   =   2  0  0 -2
 1 -1 -1 -1       -1 -1 -1  1       0 -2 -2  0
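The matrix sum above can be checked in code: each stored pair contributes an outer product of Y and X, and recall thresholds W·X with sign. The helper names are mine; sign(0) is treated as +1 (that case never occurs here):

```python
# BAM weight matrix as a sum of outer products, with sign-threshold recall.
# Helper names are mine; sign(0) is treated as +1 (not exercised here).

def outer(y, x):
    return [[yi * xi for xi in x] for yi in y]

def mat_add(a, b):
    return [[p + q for p, q in zip(ra, rb)] for ra, rb in zip(a, b)]

def recall(w, x):
    """Y_i = sign(sum_j W[i][j] * X[j])"""
    return [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1 for row in w]

pairs = [([1, -1, -1, -1], [1, 1, 1]),
         ([-1, -1, -1, 1], [1, -1, 1])]

w = mat_add(outer(pairs[0][1], pairs[0][0]),
            outer(pairs[1][1], pairs[1][0]))
print(w)  # [[0, -2, -2, 0], [2, 0, 0, -2], [0, -2, -2, 0]]

for x, y in pairs:
    assert recall(w, x) == y  # both stored associations are recovered
```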
Autoassociative
Recurrent
To remember the pattern [1,-1,1,-1,1], take its outer product with itself:

          1  -1   1  -1   1
  1:      1  -1   1  -1   1
 -1:     -1   1  -1   1  -1
  1:      1  -1   1  -1   1
 -1:     -1   1  -1   1  -1
  1:      1  -1   1  -1   1
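A quick check of the autoassociative matrix (names are mine): recall reproduces the stored pattern, corrects a one-bit corruption, and the pattern's complement turns out to be an attractor as well:

```python
# Autoassociative check: the weight matrix is the pattern's outer product
# with itself; recall thresholds W.x with sign (names are mine).

pattern = [1, -1, 1, -1, 1]
w = [[pi * pj for pj in pattern] for pi in pattern]

def recall(w, x):
    return [1 if sum(wij * xj for wij, xj in zip(row, x)) >= 0 else -1 for row in w]

assert recall(w, pattern) == pattern           # the stored pattern is a fixed point
assert recall(w, [1, 1, 1, -1, 1]) == pattern  # one flipped bit is corrected
assert recall(w, [-1, 1, -1, 1, -1]) == [-1, 1, -1, 1, -1]  # complement is an attractor too
```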
Demo
Complements of a vector also become attractors
Ex. Installing [1,-1,1] means [-1,1,-1] is also “remembered”
Crosstalk
George Christos, “Memory and Dreams”
Ralph E. Hoffman’s models of schizophrenia
Spurious Memories