Lecture 8
Artificial neural networks: Unsupervised learning

Introduction
Hebbian learning
Generalised Hebbian learning algorithm
Competitive learning
Self-organising computational map: Kohonen network
Summary




Introduction

The main property of a neural network is an ability to learn from its environment, and to improve its performance through learning. So far we have considered supervised or active learning: learning with an external "teacher", or a supervisor who presents a training set to the network. But another type of learning also exists: unsupervised learning.


In contrast to supervised learning, unsupervised or self-organised learning does not require an external teacher. During the training session, the neural network receives a number of different input patterns, discovers significant features in these patterns, and learns how to classify input data into appropriate categories. Unsupervised learning tends to follow the neuro-biological organisation of the brain.

Unsupervised learning algorithms aim to learn rapidly and can be used in real time.


Hebbian learning

In 1949, Donald Hebb proposed one of the key ideas in biological learning, commonly known as Hebb's Law. Hebb's Law states that if neuron i is near enough to excite neuron j and repeatedly participates in its activation, the synaptic connection between these two neurons is strengthened and neuron j becomes more sensitive to stimuli from neuron i.


Hebb's Law can be represented in the form of two rules:

1. If two neurons on either side of a connection are activated synchronously, then the weight of that connection is increased.

2. If two neurons on either side of a connection are activated asynchronously, then the weight of that connection is decreased.

Hebb's Law provides the basis for learning without a teacher. Learning here is a local phenomenon occurring without feedback from the environment.


Hebbian learning in a neural network

[Figure: a single-layer network mapping input signals to output signals; the connection shown links input neuron i to output neuron j.]


Using Hebb's Law we can express the adjustment applied to the weight w_ij at iteration p in the following form:

\[ \Delta w_{ij}(p) = F\big[\, y_j(p),\; x_i(p) \,\big] \]

As a special case, we can represent Hebb's Law as follows:

\[ \Delta w_{ij}(p) = \alpha \, y_j(p) \, x_i(p) \]

where α is the learning rate parameter. This equation is referred to as the activity product rule.


Hebbian learning implies that weights can only increase. To resolve this problem, we might impose a limit on the growth of synaptic weights. It can be done by introducing a non-linear forgetting factor into Hebb's Law:

\[ \Delta w_{ij}(p) = \alpha \, y_j(p) \, x_i(p) \;-\; \varphi \, y_j(p) \, w_{ij}(p) \]

where φ is the forgetting factor.

The forgetting factor usually falls in the interval between 0 and 1, typically between 0.01 and 0.1, to allow only a little "forgetting" while limiting the weight growth.
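To make the two update rules concrete, here is a minimal Python sketch; the parameter values (alpha = 0.1, phi = 0.1) and the toy loop are illustrative assumptions, not values from the lecture:

```python
def hebb_update(x_i, y_j, alpha=0.1):
    """Activity product rule: delta_w = alpha * y_j * x_i."""
    return alpha * y_j * x_i

def hebb_update_forgetting(w_ij, x_i, y_j, alpha=0.1, phi=0.1):
    """Hebb's Law with a forgetting factor:
    delta_w = alpha * y_j * x_i - phi * y_j * w_ij."""
    return alpha * y_j * x_i - phi * y_j * w_ij

# With x_i = y_j = 1 the plain rule grows the weight without bound,
# while the forgetting term makes it converge to alpha / phi = 1.0.
w = 0.0
for _ in range(100):
    w += hebb_update_forgetting(w, x_i=1.0, y_j=1.0)
print(round(w, 4))  # ~1.0
```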


Hebbian learning algorithm

Step 1: Initialisation.
Set initial synaptic weights and thresholds to small random values, say in an interval [0, 1].

Step 2: Activation.
Compute the neuron output at iteration p:

\[ y_j(p) = \sum_{i=1}^{n} x_i(p) \, w_{ij}(p) \;-\; \theta_j \]

where n is the number of neuron inputs, and θ_j is the threshold value of neuron j.


Step 3: Learning.
Update the weights in the network:

\[ w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p) \]

where Δw_ij(p) is the weight correction at iteration p.

The weight correction is determined by the generalised activity product rule:

\[ \Delta w_{ij}(p) = \varphi \, y_j(p) \big[\, \lambda \, x_i(p) - w_{ij}(p) \,\big] \]

where λ = α / φ, so this is the forgetting-factor rule from the previous slide in factored form.

Step 4: Iteration.
Increase iteration p by one, go back to Step 2.
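The four steps can be sketched as a short NumPy program. This is a sketch under stated assumptions: the activation in Step 2 is treated as a binary step (output 1 when the weighted sum exceeds the threshold, 0 otherwise), and alpha, phi and the epoch count are illustrative choices:

```python
import numpy as np

def train_hebbian(X, n_epochs=100, alpha=0.1, phi=0.1, seed=0):
    """Generalised Hebbian learning for a single layer of n neurons.

    X: array of shape (n_patterns, n); each row is an input vector.
    Returns the learned n-by-n weight matrix W, where W[i, j] is the
    weight from input i to neuron j."""
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    # Step 1: initialisation - small random weights and thresholds in [0, 1].
    W = rng.uniform(0.0, 1.0, size=(n, n))
    theta = rng.uniform(0.0, 1.0, size=n)
    lam = alpha / phi  # lambda in the generalised activity product rule
    for _ in range(n_epochs):
        for x in X:
            # Step 2: activation (binary step of weighted sum minus threshold).
            y = (x @ W - theta > 0).astype(float)
            # Step 3: generalised activity product rule,
            # delta_w[i, j] = phi * y[j] * (lam * x[i] - W[i, j]).
            W += phi * y * (lam * x[:, None] - W)
    # Step 4 is the loop itself: repeat for the scheduled iterations.
    return W
```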


Hebbian learning example

To illustrate Hebbian learning, consider a fully connected feedforward network with a single layer of five computation neurons. Each neuron is represented by a McCulloch and Pitts model with the sign activation function. The network is trained on the following set of input vectors:

\[
\mathbf{X}_1 = \begin{bmatrix} 0\\0\\0\\0\\0 \end{bmatrix}\quad
\mathbf{X}_2 = \begin{bmatrix} 0\\1\\0\\0\\1 \end{bmatrix}\quad
\mathbf{X}_3 = \begin{bmatrix} 0\\0\\0\\1\\0 \end{bmatrix}\quad
\mathbf{X}_4 = \begin{bmatrix} 0\\0\\1\\0\\0 \end{bmatrix}\quad
\mathbf{X}_5 = \begin{bmatrix} 0\\1\\0\\0\\1 \end{bmatrix}
\]

Initial and final states of the network

[Figure: two snapshots of the five-input, five-neuron network. In the initial state, presenting the input vector (1, 0, 0, 0, 1) reproduces it at the output as (1, 0, 0, 0, 1); in the final state, the same input produces the output (0, 1, 0, 0, 1).]

Initial and final weight matrices

\[
W_{\text{initial}} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0\\
0 & 0 & 1 & 0 & 0\\
0 & 0 & 0 & 1 & 0\\
0 & 0 & 0 & 0 & 1
\end{bmatrix}
\qquad
W_{\text{final}} =
\begin{bmatrix}
0 & 0 & 0 & 0 & 0\\
0 & 2.0204 & 0 & 0 & 2.0204\\
0 & 0 & 1.0200 & 0 & 0\\
0 & 0 & 0 & 0.9996 & 0\\
0 & 2.0204 & 0 & 0 & 2.0204
\end{bmatrix}
\]

A test input vector, or probe, is defined as

\[ \mathbf{X} = \begin{bmatrix} 1\\0\\0\\0\\1 \end{bmatrix} \]

When this probe is presented to the network, we obtain:

\[
\mathbf{Y} = \operatorname{sign}\left(
\begin{bmatrix}
0 & 0 & 0 & 0 & 0\\
0 & 2.0204 & 0 & 0 & 2.0204\\
0 & 0 & 1.0200 & 0 & 0\\
0 & 0 & 0 & 0.9996 & 0\\
0 & 2.0204 & 0 & 0 & 2.0204
\end{bmatrix}
\begin{bmatrix} 1\\0\\0\\0\\1 \end{bmatrix}
-
\begin{bmatrix} 0.4940\\0.2661\\0.0907\\0.9478\\0.0737 \end{bmatrix}
\right)
=
\begin{bmatrix} 0\\1\\0\\0\\1 \end{bmatrix}
\]
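This computation is easy to verify in a few lines of NumPy, treating the sign activation as a binary step that outputs 1 for positive net input and 0 otherwise, which matches the outputs shown:

```python
import numpy as np

# Final weight matrix and threshold vector from the example above.
W = np.array([[0, 0,      0,      0,      0     ],
              [0, 2.0204, 0,      0,      2.0204],
              [0, 0,      1.0200, 0,      0     ],
              [0, 0,      0,      0.9996, 0     ],
              [0, 2.0204, 0,      0,      2.0204]])
theta = np.array([0.4940, 0.2661, 0.0907, 0.9478, 0.0737])

X = np.array([1, 0, 0, 0, 1])        # the probe
Y = (W @ X - theta > 0).astype(int)  # step ("sign") activation
print(Y)                             # [0 1 0 0 1]
```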


Competitive learning

In competitive learning, neurons compete among themselves to be activated.

While in Hebbian learning several output neurons can be activated simultaneously, in competitive learning only a single output neuron is active at any time.

The output neuron that wins the "competition" is called the winner-takes-all neuron.


The basic idea of competitive learning was introduced in the early 1970s.

In the late 1980s, Teuvo Kohonen introduced a special class of artificial neural networks called self-organizing feature maps. These maps are based on competitive learning.


What is a self-organizing feature map?

Our brain is dominated by the cerebral cortex, a very complex structure of billions of neurons and hundreds of billions of synapses. The cortex includes areas that are responsible for different human activities (motor, visual, auditory, somatosensory, etc.), and associated with different sensory inputs. We can say that each sensory input is mapped into a corresponding area of the cerebral cortex. The cortex is a self-organizing computational map in the human brain.


Feature-mapping Kohonen model

[Figure: two panels, (a) and (b), each showing an input layer connected to a two-dimensional Kohonen layer; different input patterns activate different regions of the map.]


The Kohonen network

The Kohonen model provides a topological mapping. It places a fixed number of input patterns from the input layer into a higher-dimensional output or Kohonen layer.

Training in the Kohonen network begins with the winner's neighborhood of a fairly large size. Then, as training proceeds, the neighborhood size gradually decreases.


Architecture of the Kohonen Network

[Figure: two input-layer neurons, x1 and x2, fully connected to three output-layer neurons, y1, y2 and y3.]


The lateral connections are used to create a competition between neurons. The neuron with the largest activation level among all neurons in the output layer becomes the winner. This neuron is the only neuron that produces an output signal. The activity of all other neurons is suppressed in the competition.

The lateral feedback connections produce excitatory or inhibitory effects, depending on the distance from the winning neuron. This is achieved by the use of a Mexican hat function which describes synaptic weights between neurons in the Kohonen layer.


The Mexican hat function of lateral connection

[Figure: connection strength plotted against distance from the winning neuron; a central excitatory peak (strength up to 1) is flanked by inhibitory regions, with the effect fading at larger distances.]
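The lecture shows only the shape of this function, not a formula. A common way to model it (an assumption here, not part of the lecture) is a difference of Gaussians: strong short-range excitation minus weaker long-range inhibition:

```python
import numpy as np

def mexican_hat(d, sigma_exc=1.0, sigma_inh=2.0, a_exc=2.0, a_inh=1.0):
    """Difference-of-Gaussians sketch of the Mexican hat:
    excitatory near the winner, inhibitory further away, fading to zero."""
    d = np.asarray(d, dtype=float)
    return (a_exc * np.exp(-d**2 / (2 * sigma_exc**2))
            - a_inh * np.exp(-d**2 / (2 * sigma_inh**2)))

print(np.round(mexican_hat([0, 1, 3, 6]), 3))  # [ 1.     0.331 -0.302 -0.011]
```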


In the Kohonen network, a neuron learns by shifting its weights from inactive connections to active ones. Only the winning neuron and its neighbourhood are allowed to learn. If a neuron does not respond to a given input pattern, then learning cannot occur in that particular neuron.

The competitive learning rule defines the change Δw_ij applied to synaptic weight w_ij as

\[
\Delta w_{ij} =
\begin{cases}
\alpha\,(x_i - w_{ij}), & \text{if neuron } j \text{ wins the competition}\\
0, & \text{if neuron } j \text{ loses the competition}
\end{cases}
\]

where x_i is the input signal and α is the learning rate parameter.


The overall effect of the competitive learning rule resides in moving the synaptic weight vector W_j of the winning neuron j towards the input pattern X. The matching criterion is equivalent to the minimum Euclidean distance between vectors.

The Euclidean distance between a pair of n-by-1 vectors X and W_j is defined by

\[ d = \left\| \mathbf{X} - \mathbf{W}_j \right\| = \left[ \sum_{i=1}^{n} (x_i - w_{ij})^2 \right]^{1/2} \]

where x_i and w_ij are the i-th elements of the vectors X and W_j, respectively.


To identify the winning neuron, j_X, that best matches the input vector X, we may apply the following condition:

\[ \left\| \mathbf{X} - \mathbf{W}_{j_X} \right\| = \min_{j} \left\| \mathbf{X} - \mathbf{W}_j \right\|, \qquad j = 1, 2, \ldots, m \]

where m is the number of neurons in the Kohonen layer.
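As a minimal sketch, assuming NumPy and weight vectors stored as the rows of a matrix W, this criterion is a distance computation followed by an argmin:

```python
import numpy as np

def winning_neuron(X, W):
    """Index j_X of the weight row closest to X (minimum Euclidean distance)."""
    return int(np.argmin(np.linalg.norm(W - X, axis=1)))
```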


Suppose, for instance, that the 2-dimensional input vector X is presented to the three-neuron Kohonen network, where

\[ \mathbf{X} = \begin{bmatrix} 0.52 \\ 0.12 \end{bmatrix} \]

The initial weight vectors, W_j, are given by

\[
\mathbf{W}_1 = \begin{bmatrix} 0.27 \\ 0.81 \end{bmatrix}\qquad
\mathbf{W}_2 = \begin{bmatrix} 0.42 \\ 0.70 \end{bmatrix}\qquad
\mathbf{W}_3 = \begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix}
\]


We find the winning (best-matching) neuron j_X using the minimum-distance Euclidean criterion:

\[ d_1 = \sqrt{(x_1 - w_{11})^2 + (x_2 - w_{21})^2} = \sqrt{(0.52 - 0.27)^2 + (0.12 - 0.81)^2} = 0.73 \]

\[ d_2 = \sqrt{(x_1 - w_{12})^2 + (x_2 - w_{22})^2} = \sqrt{(0.52 - 0.42)^2 + (0.12 - 0.70)^2} = 0.59 \]

\[ d_3 = \sqrt{(x_1 - w_{13})^2 + (x_2 - w_{23})^2} = \sqrt{(0.52 - 0.43)^2 + (0.12 - 0.21)^2} = 0.13 \]

Neuron 3 is the winner and its weight vector W_3 is updated according to the competitive learning rule:

\[ \Delta w_{13} = \alpha\,(x_1 - w_{13}) = 0.1\,(0.52 - 0.43) = 0.01 \]

\[ \Delta w_{23} = \alpha\,(x_2 - w_{23}) = 0.1\,(0.12 - 0.21) = -0.01 \]


The updated weight vector W_3 at iteration (p + 1) is determined as:

\[
\mathbf{W}_3(p+1) = \mathbf{W}_3(p) + \Delta\mathbf{W}_3(p)
= \begin{bmatrix} 0.43 \\ 0.21 \end{bmatrix} + \begin{bmatrix} 0.01 \\ -0.01 \end{bmatrix}
= \begin{bmatrix} 0.44 \\ 0.20 \end{bmatrix}
\]

The weight vector W_3 of the winning neuron 3 becomes closer to the input vector X with each iteration.
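The whole worked example can be reproduced in a few lines of NumPy, combining the minimum-distance criterion with the competitive learning rule:

```python
import numpy as np

X = np.array([0.52, 0.12])
W = np.array([[0.27, 0.81],   # W1
              [0.42, 0.70],   # W2
              [0.43, 0.21]])  # W3
alpha = 0.1

d = np.linalg.norm(W - X, axis=1)
print(np.round(d, 2))          # [0.73 0.59 0.13] -> neuron 3 wins

j = int(np.argmin(d))          # j = 2 (zero-based index of neuron 3)
W[j] += alpha * (X - W[j])     # competitive learning rule
print(np.round(W[j], 2))       # [0.44 0.2 ]
```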


Competitive Learning Algorithm

Step 1: Initialization.
Set initial synaptic weights to small random values, say in an interval [0, 1], and assign a small positive value to the learning rate parameter α.


Step 2: Activation and Similarity Matching.
Activate the Kohonen network by applying the input vector X, and find the winner-takes-all (best-matching) neuron j_X at iteration p, using the minimum-distance Euclidean criterion

\[ j_X(p) = \min_{j} \left\| \mathbf{X} - \mathbf{W}_j(p) \right\| = \min_{j} \left[ \sum_{i=1}^{n} \big( x_i - w_{ij}(p) \big)^2 \right]^{1/2}, \qquad j = 1, 2, \ldots, m \]

where n is the number of neurons in the input layer, and m is the number of neurons in the Kohonen layer.


Step 3: Learning.
Update the synaptic weights

\[ w_{ij}(p+1) = w_{ij}(p) + \Delta w_{ij}(p) \]

where Δw_ij(p) is the weight correction at iteration p. The weight correction is determined by the competitive learning rule:

\[
\Delta w_{ij}(p) =
\begin{cases}
\alpha\,\big[\, x_i - w_{ij}(p) \,\big], & j \in \Lambda_j(p)\\
0, & j \notin \Lambda_j(p)
\end{cases}
\]

where α is the learning rate parameter, and Λ_j(p) is the neighborhood function centered around the winner-takes-all neuron j_X at iteration p.


Step 4: Iteration.
Increase iteration p by one, go back to Step 2 and continue until the minimum-distance Euclidean criterion is satisfied, or no noticeable changes occur in the feature map.


Competitive learning in the Kohonen network

To illustrate competitive learning, consider the Kohonen network with 100 neurons arranged in the form of a two-dimensional lattice with 10 rows and 10 columns. The network is required to classify two-dimensional input vectors: each neuron in the network should respond only to the input vectors occurring in its region.

The network is trained with 1000 two-dimensional input vectors generated randomly in a square region in the interval between -1 and +1. The learning rate parameter α is equal to 0.1.
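A runnable sketch of this experiment is given below, assuming NumPy. The lecture fixes the lattice (10 by 10), the training set (1000 random vectors in the square between -1 and +1) and α = 0.1; the small-weight initialisation and the shrinking Gaussian neighbourhood schedule are illustrative assumptions, since the lecture does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

rows, cols = 10, 10                                   # 100 neurons on a 2-D lattice
W = 0.1 * rng.uniform(-1, 1, size=(rows * cols, 2))   # small random weights (assumed)
grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

X = rng.uniform(-1, 1, size=(1000, 2))  # 1000 random 2-D training vectors
alpha = 0.1                             # learning rate from the lecture

n_iter = 10_000
for p in range(n_iter):
    x = X[p % len(X)]
    # Step 2: winner-takes-all neuron by minimum Euclidean distance.
    j = int(np.argmin(np.linalg.norm(W - x, axis=1)))
    # Step 3: update the winner's neighbourhood; its width shrinks over time.
    sigma = 3.0 * np.exp(-3.0 * p / n_iter)
    h = np.exp(-np.sum((grid - grid[j]) ** 2, axis=1) / (2 * sigma ** 2))
    W += alpha * h[:, None] * (x - W)

# After training, the weight vectors should spread out to cover the square,
# as in the plots that follow.
print(np.round(W.min(axis=0), 2), np.round(W.max(axis=0), 2))
```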


Initial random weights

[Figure: plot of W(2,j) against W(1,j) over the square from -1 to +1, showing the weight vectors at their initial random positions.]


Network after 100 iterations

[Figure: plot of W(2,j) against W(1,j) after 100 training iterations.]


Network after 1000 iterations

[Figure: plot of W(2,j) against W(1,j) after 1000 training iterations.]


Network after 10,000 iterations

[Figure: plot of W(2,j) against W(1,j) after 10,000 training iterations; the weight vectors have spread out to cover the input square.]