Self Organized Map (SOM) Neural Network
DESCRIPTION
Self Organized Map (SOM) Neural Network. The self-organizing map (SOM) is a method for unsupervised learning, based on a grid of artificial neurons whose weights are adapted to match input vectors in a training set.
TRANSCRIPT
Self Organized Map (SOM) Neural Network
Self Organized Map (SOM)
• The self-organizing map (SOM) is a method for unsupervised learning, based on a grid of artificial neurons whose weights are adapted to match input vectors in a training set.
• It was first described by the Finnish professor Teuvo Kohonen and is thus sometimes referred to as a Kohonen map.
• SOM is one of the most popular neural computation methods in use, and several thousand scientific articles have been written about it. SOM is especially good at producing visualizations of high-dimensional data.
Self Organizing Maps (SOM)
• SOM is an unsupervised neural network technique that approximates an unlimited number of input data by a finite set of models arranged in a grid, where neighbor nodes correspond to more similar models.
• The models are produced by a learning algorithm that automatically orders them on the two-dimensional grid along with their mutual similarity.
Brain’s self-organization
The brain maps the external multidimensional representation of the world into a similar 1- or 2-dimensional internal representation. That is, the brain processes external signals in a topology-preserving way. Mimicking the way the brain learns, our system should be able to do the same thing.
Why SOM?
• Unsupervised Learning
• Clustering
• Classification
• Monitoring
• Data Visualization
• Potential for combination between SOM and other neural networks (e.g., MLP, RBF)
Self Organizing Networks
• Discover significant patterns or features in the input data
• Discovery is done without a teacher
• Synaptic weights are changed according to local rules
• The changes affect a neuron’s immediate environment until a final configuration develops
Concept of the SOM
Input space / input layer → reduced feature space / map layer.
[Figure: clustering and ordering of the cluster centers in a two-dimensional grid; cluster centers (code vectors) and the place of these code vectors in the reduced space.]
Network Architecture
• Two layers of units
  – Input: n units (length of training vectors)
  – Output: m units (number of categories)
• Input units fully connected with weights to output units
• Intralayer (lateral) connections
  – Within output layer
  – Defined according to some topology
  – Not weights, but used in the algorithm for updating weights
SOM - Architecture
• Lattice of neurons (‘nodes’) accepts and responds to a set of input signals
• Responses are compared; the ‘winning’ neuron is selected from the lattice
• The selected neuron is activated together with its ‘neighbourhood’ neurons
• An adaptive process changes the weights to more closely match the inputs
[Figure: a 2-D array of neurons; each neuron j receives the set of input signals x1, x2, x3, …, xn through weights wj1, wj2, wj3, …, wjn.]
Measuring distances between nodes
• Distances between output neurons will be used in the learning process.
• The lattice may be: a) rectangular, b) hexagonal.
• Let d(i,j) be the distance between the output nodes i and j:
  – d(i,j) = 1 if node j is in the first outer rectangle/hexagon of node i
  – d(i,j) = 2 if node j is in the second outer rectangle/hexagon of node i
  – and so on.
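Stated as code, the rectangular-lattice case reduces to the Chebyshev distance on grid coordinates: the k-th outer rectangle of node i is exactly the set of nodes at Chebyshev distance k. This is a minimal sketch (the hexagonal case needs axial coordinates and is omitted here); `ring_distance` is an illustrative name, not from the slides.

```python
def ring_distance(i, j):
    """d(i, j) on a rectangular lattice: node j lies in the k-th outer
    rectangle of node i exactly when max(|row diff|, |col diff|) == k.
    Nodes are (row, col) tuples."""
    return max(abs(i[0] - j[0]), abs(i[1] - j[1]))
```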
• Each neuron is a node containing a template against which input patterns are matched.
• All nodes are presented with the same input pattern in parallel, and compute the distance between their template and the input in parallel.
• Only the node with the closest match between the input and its template produces an active output.
• Each node therefore acts like a separate decoder (or pattern detector, feature detector) for the same input; the interpretation of the input derives from the presence or absence of an active response at each location (rather than the magnitude of response or an input-output transformation, as in feedforward or feedback networks).
SOM: interpretation
• Each SOM neuron can be seen as representing a cluster containing all the input examples which are mapped to that neuron.
• For a given input, the output of SOM is the neuron with weight vector most similar (with respect to Euclidean distance) to that input.
Self-Organizing Networks
• Kohonen maps (SOM)
• Learning Vector Quantization (LVQ)
• Principal Components Networks (PCA)
• Adaptive Resonance Theory (ART)
Types of Mapping
• Familiarity – the net learns how similar a given new input is to the typical (average) pattern it has seen before
• The net finds Principal Components in the data
• Clustering – the net finds the appropriate categories based on correlations in the data
• Encoding – the output represents the input, using a smaller number of bits
• Feature Mapping – the net forms a topographic map of the input
Possible Applications
• Familiarity and PCA can be used to analyze unknown data
• PCA is used for dimension reduction
• Encoding is used for vector quantization
• Clustering can be applied to any type of data
• Feature mapping is important for dimension reduction and for functionality (as in the brain)
Simple Models
• Network has inputs and outputs
• There is no feedback from the environment (no supervision)
• The network updates the weights following some learning rule, and finds patterns, features or categories within the inputs presented to the network
Unsupervised Learning
In unsupervised competitive learning, the neurons take part in some competition for each input. The winner of the competition, and sometimes some other neurons, are allowed to change their weights.
• In simple competitive learning, only the winner is allowed to learn (change its weights).
• In self-organizing maps, other neurons in the neighborhood of the winner may also learn.
Simple Competitive Learning
[Figure: network with N input units x1, …, xN feeding P output neurons Y1, …, YP through P × N weights Wij.]

N input units, P output neurons, P × N weights.

h_i = Σ_j W_ij x_j,   i = 1, 2, …, P

Y_i = 1 or 0
Network Activation
• The unit with the highest field h_i fires
• i* is the winner unit
• Geometrically, W_i* is the weight vector closest to the current input vector
• The winning unit’s weight vector is updated to be even closer to the current input vector
Learning
Starting with small random weights, at each step:
1. a new input vector is presented to the network
2. all fields are calculated to find the winner
3. W_i* is updated to be closer to the input, using the standard competitive learning equation:

   ΔW_i*j = η (x_j − W_i*j)
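A one-step sketch of this rule in NumPy. The winner i* is picked by smallest Euclidean distance, which for normalized vectors is equivalent to the largest field h_i; the function name and the default η are illustrative.

```python
import numpy as np

def competitive_step(W, x, eta=0.5):
    """One step of standard competitive learning: only the winning row of W
    (the weight vector closest to x) moves toward x by eta * (x - W[i*])."""
    i_star = int(np.argmin(((W - x) ** 2).sum(axis=1)))
    W[i_star] += eta * (x - W[i_star])
    return i_star
```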
Result
• Each output unit moves to the center of mass of a cluster of input vectors → clustering
Competitive Learning, Cntd
• It is important to break the symmetry in the initial random weights
• The final configuration depends on initialization
  – A winning unit has more chances of winning the next time a similar input is seen
  – Some outputs may never fire
  – This can be compensated by updating the non-winning units with a smaller update
More about SOM learning
• Upon repeated presentations of the training examples, the weight vectors of the neurons tend to follow the distribution of the examples.
• This results in a topological ordering of the neurons, where neurons adjacent to each other tend to have similar weight vectors.
• The input space of patterns is mapped onto a discrete output space of neurons.
SOM – Learning Algorithm
1. Randomly initialise all weights
2. Select input vector x = [x1, x2, x3, …, xn] from the training set
3. Compare x with the weights wj of each neuron j
4. Determine the winner: find the unit j with the minimum distance

   d_j = Σ_i (w_ij − x_i)²

5. Update the winner so that it becomes more like x, together with the winner’s neighbours (units within the radius), according to

   w_ij(n+1) = w_ij(n) + η(n) [x_i − w_ij(n)]

6. Adjust parameters: learning rate & ‘neighbourhood function’
7. Repeat from (2) until … ?

Note: the learning rate generally decreases with time: 0 < η(n+1) ≤ η(n) ≤ 1
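The seven steps can be sketched as a minimal trainer. The Gaussian neighbourhood and exponential decays anticipate the later slides; the grid size, decay constants, and the fixed iteration budget in step 7 are illustrative choices, not prescribed here.

```python
import numpy as np

def train_som(X, rows=5, cols=5, n_iter=500, eta0=0.1, sigma0=2.0, seed=0):
    """Minimal 2-D SOM trainer; returns a (rows*cols, dim) weight matrix."""
    rng = np.random.default_rng(seed)
    W = rng.random((rows * cols, X.shape[1]))           # 1. random init
    pos = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    T1 = n_iter / np.log(sigma0)                        # neighbourhood decay
    for n in range(n_iter):
        x = X[rng.integers(len(X))]                     # 2. select input x
        d = ((W - x) ** 2).sum(axis=1)                  # 3. compare with w_j
        j = int(np.argmin(d))                           # 4. winner: min distance
        eta = eta0 * np.exp(-n / n_iter)                # 6. decaying rate
        sigma = max(sigma0 * np.exp(-n / T1), 1e-3)     #    shrinking radius
        h = np.exp(-((pos - pos[j]) ** 2).sum(axis=1) / (2 * sigma ** 2))
        W += eta * h[:, None] * (x - W)                 # 5. winner + neighbours
    return W                                            # 7. stop after n_iter
```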
Example
An SOFM network with three inputs and two cluster units is to be trained using the four training vectors:
[0.8 0.7 0.4], [0.6 0.9 0.9], [0.3 0.4 0.1], [0.1 0.1 0.2]
and initial weights (columns are the two cluster units):

  0.5  0.4
  0.6  0.2
  0.8  0.5

so the weights to the first cluster unit are (0.5, 0.6, 0.8).
The initial radius is 0 and the learning rate is 0.5. Calculate the weight changes during the first cycle through the data, taking the training vectors in the given order.
Solution
The Euclidean distance of input vector 1 to cluster unit 1 is:
  d1² = (0.5 − 0.8)² + (0.6 − 0.7)² + (0.8 − 0.4)² = 0.26
The Euclidean distance of input vector 1 to cluster unit 2 is:
  d2² = (0.4 − 0.8)² + (0.2 − 0.7)² + (0.5 − 0.4)² = 0.42
Input vector 1 is closest to cluster unit 1, so update the weights to cluster unit 1 using w_ij(n+1) = w_ij(n) + 0.5 [x_i − w_ij(n)]:
  0.65 = 0.5 + 0.5 (0.8 − 0.5)
  0.65 = 0.6 + 0.5 (0.7 − 0.6)
  0.60 = 0.8 + 0.5 (0.4 − 0.8)
New weight matrix:
  0.65  0.4
  0.65  0.2
  0.60  0.5
Solution
The Euclidean distance of input vector 2 to cluster unit 1 is:
  d1² = (0.65 − 0.6)² + (0.65 − 0.9)² + (0.60 − 0.9)² = 0.155
The Euclidean distance of input vector 2 to cluster unit 2 is:
  d2² = (0.4 − 0.6)² + (0.2 − 0.9)² + (0.5 − 0.9)² = 0.69
Input vector 2 is closest to cluster unit 1, so update the weights to cluster unit 1 again:
  0.625 = 0.65 + 0.5 (0.6 − 0.65)
  0.775 = 0.65 + 0.5 (0.9 − 0.65)
  0.750 = 0.60 + 0.5 (0.9 − 0.60)
New weight matrix:
  0.625  0.4
  0.775  0.2
  0.750  0.5
Repeat the same update procedure for input vectors 3 and 4.
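The first two updates can be replayed numerically (columns of W are the two cluster units; radius 0 means only the winner is updated):

```python
import numpy as np

W = np.array([[0.5, 0.4],       # initial weights; column j = cluster unit j+1
              [0.6, 0.2],
              [0.8, 0.5]])
eta = 0.5
for x in (np.array([0.8, 0.7, 0.4]),    # input vector 1
          np.array([0.6, 0.9, 0.9])):   # input vector 2
    d = ((W - x[:, None]) ** 2).sum(axis=0)  # squared distance to each unit
    j = int(np.argmin(d))                    # winner (unit 1 both times)
    W[:, j] += eta * (x - W[:, j])           # radius 0: update winner only
# W[:, 0] is now [0.625, 0.775, 0.750], matching the slide
```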
Neighborhood Function
• Gaussian neighborhood function:

  h_ji = exp( − d_ji² / (2σ²) )

• d_ji: lateral distance of neurons i and j
  – in a 1-dimensional lattice: | j − i |
  – in a 2-dimensional lattice: || r_j − r_i ||, where r_j is the position of neuron j in the lattice.
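A direct transcription of the Gaussian neighbourhood function (the function name is illustrative):

```python
import numpy as np

def gaussian_neighborhood(r_j, r_i, sigma):
    """h_ji = exp(-d_ji^2 / (2 sigma^2)), with d_ji the lattice distance
    || r_j - r_i || between the positions of neurons j and i."""
    d2 = ((np.asarray(r_j, float) - np.asarray(r_i, float)) ** 2).sum()
    return float(np.exp(-d2 / (2 * sigma ** 2)))
```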
[Figure: neighbourhoods N13(1) and N13(2) of node 13 on the lattice.]
Neighborhood Function
σ measures the degree to which excited neurons in the vicinity of the winning neuron cooperate in the learning process.
• In the learning algorithm, σ is updated at each iteration during the ordering phase using the following exponential decay update rule, with parameters σ0 and T1:

  σ(n) = σ0 exp( − n / T1 )
Neighbourhood function
[Figure: two plots of degree of neighbourhood vs. distance from the winner; over time, the neighbourhood function narrows.]
UPDATE RULE

  w_j(n+1) = w_j(n) + η(n) h_j,i(x)(n) [x − w_j(n)]

Exponential decay update of the learning rate:

  η(n) = η0 exp( − n / T2 )
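One iteration of this update rule, combining the Gaussian neighbourhood with the decaying learning rate (the decay constants T1 and T2 default to illustrative values):

```python
import numpy as np

def som_step(W, pos, x, n, eta0=0.1, sigma0=2.0, T1=1000.0, T2=1000.0):
    """w_j(n+1) = w_j(n) + eta(n) * h_{j,i(x)}(n) * (x - w_j(n)).
    W: (m, dim) weight matrix; pos: (m, 2) lattice positions."""
    i_x = int(np.argmin(((W - x) ** 2).sum(axis=1)))   # winning neuron i(x)
    eta = eta0 * np.exp(-n / T2)                       # learning-rate decay
    sigma = sigma0 * np.exp(-n / T1)                   # neighbourhood decay
    d2 = ((pos - pos[i_x]) ** 2).sum(axis=1)           # lattice distances
    h = np.exp(-d2 / (2 * sigma ** 2))                 # Gaussian neighbourhood
    W += eta * h[:, None] * (x - W)                    # move all neurons
    return i_x
```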
Illustration of learning for Kohonen maps
Inputs: coordinates (x, y) of points drawn from a square.
Display neuron j at position (xj, yj) where its sj is maximum.
[Figure: map positions at random initialization, then after 100, 200, and 1000 inputs.]
Two-phases learning approach
– Self-organizing or ordering phase. The learning rate and spread of the Gaussian neighborhood function are adapted during the execution of SOM, using for instance the exponential decay update rule.
– Convergence phase. The learning rate and Gaussian spread have small fixed values during the execution of SOM.
Ordering Phase
• Self-organizing or ordering phase:
  – Topological ordering of weight vectors.
  – May take 1000 or more iterations of the SOM algorithm.
• Important choice of the parameter values. For instance:
  – η(n): η0 = 0.1, T2 = 1000; decrease gradually to η(n) ≥ 0.01
  – h_ji(x)(n): σ0 big enough, T1 = 1000 / log(σ0)
• With this parameter setting, initially the neighborhood of the winning neuron includes almost all neurons in the network, then it shrinks slowly with time.
Convergence Phase
• Convergence phase:
  – Fine-tune the weight vectors.
  – Must run for at least 500 times the number of neurons in the network: thousands or tens of thousands of iterations.
• Choice of parameter values:
  – η(n) maintained on the order of 0.01.
  – Neighborhood function such that the neighborhood of the winning neuron contains only the nearest neighbors. It eventually reduces to one or zero neighboring neurons.
Another Self-Organizing Map (SOM) Example
• From Fausett (1994)
• n = 4, m = 2
  – More typical of SOM applications
  – Smaller number of units in output than in input: dimensionality reduction
• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)
• Network architecture: input units fully connected to output units 1 and 2.
• What should we expect as outputs?
What are the Euclidean Distances Between the Data Samples?
• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)
Fill in the table of pairwise distances:
       i1  i2  i3  i4
  i1   0
  i2       0
  i3           0
  i4               0
Euclidean Distances Between Data Samples
• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)
Squared Euclidean distances:
       i1  i2  i3  i4
  i1   0
  i2   3   0
  i3   1   2   0
  i4   4   1   3   0
Output units: 1 and 2. What might we expect from the SOM?
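The table of squared distances can be verified directly:

```python
import numpy as np

samples = np.array([[1, 1, 0, 0],   # i1
                    [0, 0, 0, 1],   # i2
                    [1, 0, 0, 0],   # i3
                    [0, 0, 1, 1]],  # i4
                   dtype=float)
# squared Euclidean distance between every pair of training samples
D2 = ((samples[:, None, :] - samples[None, :, :]) ** 2).sum(axis=-1)
# the lower triangle of D2 matches the table: d2(i2,i1)=3, d2(i3,i1)=1, ...
```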
Example Details
• Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)
• With only 2 outputs, neighborhood = 0
  – Only update weights associated with the winning output unit (cluster) at each iteration
• Learning rate:
  α(t) = 0.6, 1 ≤ t ≤ 4
  α(t) = 0.5 α(1), 5 ≤ t ≤ 8
  α(t) = 0.5 α(5), 9 ≤ t ≤ 12
  etc.
• Initial weight matrix (random values between 0 and 1):
  Unit 1: [.2 .6 .5 .9]
  Unit 2: [.8 .4 .7 .3]
• d² = (Euclidean distance)² = Σ_k ( i_l,k − w_k,j(t) )²
• Weight update: w_j(t+1) = w_j(t) + α(t) ( i_l − w_j(t) )
Problem: Calculate the weight updates for the first four steps.
First Weight Update
• Training sample: i1 = (1, 1, 0, 0)
  – Unit 1 weights: d² = (.2−1)² + (.6−1)² + (.5−0)² + (.9−0)² = 1.86
  – Unit 2 weights: d² = (.8−1)² + (.4−1)² + (.7−0)² + (.3−0)² = .98
  – Unit 2 wins
  – Weights on the winning unit are updated:
    new unit 2 weights = [.8 .4 .7 .3] + 0.6 ([1 1 0 0] − [.8 .4 .7 .3]) = [.92 .76 .28 .12]
  – Giving an updated weight matrix:
    Unit 1: [.2 .6 .5 .9]
    Unit 2: [.92 .76 .28 .12]
Second Weight Update
• Training sample: i2 = (0, 0, 0, 1)
  – Unit 1 weights: d² = (.2−0)² + (.6−0)² + (.5−0)² + (.9−1)² = .66
  – Unit 2 weights: d² = (.92−0)² + (.76−0)² + (.28−0)² + (.12−1)² = 2.28
  – Unit 1 wins
  – Weights on the winning unit are updated:
    new unit 1 weights = [.2 .6 .5 .9] + 0.6 ([0 0 0 1] − [.2 .6 .5 .9]) = [.08 .24 .20 .96]
  – Giving an updated weight matrix:
    Unit 1: [.08 .24 .20 .96]
    Unit 2: [.92 .76 .28 .12]
Third Weight Update
• Training sample: i3 = (1, 0, 0, 0)
  – Unit 1 weights: d² = (.08−1)² + (.24−0)² + (.2−0)² + (.96−0)² = 1.87
  – Unit 2 weights: d² = (.92−1)² + (.76−0)² + (.28−0)² + (.12−0)² = 0.68
  – Unit 2 wins
  – Weights on the winning unit are updated:
    new unit 2 weights = [.92 .76 .28 .12] + 0.6 ([1 0 0 0] − [.92 .76 .28 .12]) = [.97 .30 .11 .05]
  – Giving an updated weight matrix:
    Unit 1: [.08 .24 .20 .96]
    Unit 2: [.97 .30 .11 .05]
Fourth Weight Update
• Training sample: i4 = (0, 0, 1, 1)
  – Unit 1 weights: d² = (.08−0)² + (.24−0)² + (.2−1)² + (.96−1)² = .71
  – Unit 2 weights: d² = (.97−0)² + (.30−0)² + (.11−1)² + (.05−1)² = 2.74
  – Unit 1 wins
  – Weights on the winning unit are updated:
    new unit 1 weights = [.08 .24 .20 .96] + 0.6 ([0 0 1 1] − [.08 .24 .20 .96]) = [.03 .10 .68 .98]
  – Giving an updated weight matrix:
    Unit 1: [.03 .10 .68 .98]
    Unit 2: [.97 .30 .11 .05]
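All four updates can be replayed in a few lines (rows of W are units 1 and 2; α = 0.6 and neighborhood radius 0, so only the winner moves at each step):

```python
import numpy as np

W = np.array([[0.2, 0.6, 0.5, 0.9],    # unit 1
              [0.8, 0.4, 0.7, 0.3]])   # unit 2
samples = np.array([[1, 1, 0, 0],      # i1
                    [0, 0, 0, 1],      # i2
                    [1, 0, 0, 0],      # i3
                    [0, 0, 1, 1]],     # i4
                   dtype=float)
alpha = 0.6
for x in samples:
    j = int(np.argmin(((W - x) ** 2).sum(axis=1)))  # winning unit
    W[j] += alpha * (x - W[j])                      # update winner only
# After one cycle: unit 1 ~ [.03 .10 .68 .98], unit 2 ~ [.97 .30 .11 .05]
```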
Applying the SOM Algorithm

  time (t)   data sample utilized   ‘winning’ output unit   D(t)   α(t)
     1              i1                    Unit 2             0     0.6
     2              i2                    Unit 1             0     0.6
     3              i3                    Unit 2             0     0.6
     4              i4                    Unit 1             0     0.6

After many iterations (epochs) through the data set:
  Unit 1: [0 0 .5 1.0]
  Unit 2: [1.0 .5 0 0]

Did we get the clustering that we expected?
What clusters do the data samples fall into?
Weights:
  Unit 1: [0 0 .5 1.0]
  Unit 2: [1.0 .5 0 0]
Training samples:
  i1: (1, 1, 0, 0)
  i2: (0, 0, 0, 1)
  i3: (1, 0, 0, 0)
  i4: (0, 0, 1, 1)
Solution
d² = (Euclidean distance)² = Σ_k ( i_l,k − w_k,j(t) )²
• Sample: i1
  – Distance from unit 1 weights: (1−0)² + (1−0)² + (0−.5)² + (0−1.0)² = 1 + 1 + .25 + 1 = 3.25
  – Distance from unit 2 weights: (1−1)² + (1−.5)² + (0−0)² + (0−0)² = 0 + .25 + 0 + 0 = .25 (winner)
• Sample: i2
  – Distance from unit 1 weights: (0−0)² + (0−0)² + (0−.5)² + (1−1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights: (0−1)² + (0−.5)² + (0−0)² + (1−0)² = 1 + .25 + 0 + 1 = 2.25
Solution
• Sample: i3
  – Distance from unit 1 weights: (1−0)² + (0−0)² + (0−.5)² + (0−1.0)² = 1 + 0 + .25 + 1 = 2.25
  – Distance from unit 2 weights: (1−1)² + (0−.5)² + (0−0)² + (0−0)² = 0 + .25 + 0 + 0 = .25 (winner)
• Sample: i4
  – Distance from unit 1 weights: (0−0)² + (0−0)² + (1−.5)² + (1−1.0)² = 0 + 0 + .25 + 0 = .25 (winner)
  – Distance from unit 2 weights: (0−1)² + (0−.5)² + (1−0)² + (1−0)² = 1 + .25 + 1 + 1 = 3.25
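With the converged weights, cluster membership is just nearest-unit assignment, confirming that unit 2 clusters {i1, i3} and unit 1 clusters {i2, i4}:

```python
import numpy as np

W = np.array([[0.0, 0.0, 0.5, 1.0],    # unit 1 (converged weights)
              [1.0, 0.5, 0.0, 0.0]])   # unit 2
samples = np.array([[1, 1, 0, 0],      # i1
                    [0, 0, 0, 1],      # i2
                    [1, 0, 0, 0],      # i3
                    [0, 0, 1, 1]],     # i4
                   dtype=float)
# each sample falls into the cluster of its nearest weight vector
clusters = [int(np.argmin(((W - x) ** 2).sum(axis=1))) + 1 for x in samples]
# clusters == [2, 1, 2, 1]
```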
Word categories
Examples of Applications
• Kohonen (1984) – speech recognition: a map of phonemes in the Finnish language
• Optical character recognition - clustering of letters of different fonts
• Angéniol et al. (1988) – travelling salesman problem (an optimization problem)
• Kohonen (1990) – learning vector quantization (pattern classification problem)
• Ritter & Kohonen (1989) – semantic maps
Summary
• Unsupervised learning is very common
• Unsupervised learning requires redundancy in the stimuli
• Self-organization is a basic property of the brain’s computational structure
• SOMs are based on:
  – competition (winner-take-all units)
  – cooperation
  – synaptic adaptation
• SOMs conserve topological relationships between the stimuli
• Artificial SOMs have many applications in computational neuroscience
End of slides