III - Connectionist approach: Neural networks
III - Connectionist approach: Neural networks
V 3.19
1 - Introduction
1.1 - Use
1.2 - Origins
Initial idea
• Serve neurobiology (description of the nervous system).
Purpose
• Create and adapt a neuron model (the formal neuron) and its elementary functions.
1.3 - History

1943 First formal neuron model (W. McCulloch & W. Pitts, University of Chicago)
1949 Connection self-organisation in a neural network (D. O. Hebb, Montréal)
1959 Adaline (B. Widrow & M. Hoff), Perceptron (F. Rosenblatt)
1969 Limits of the perceptron shown (S. Papert and M. Minsky, MIT)
1984 First prototype (the Boltzmann machine) realised by T. Sejnowski (Johns Hopkins University, Baltimore)
1985 Back-propagation algorithm discovered
2 - General concepts
2.1 - Some neurophysiology…

A neuron is a nerve cell; the nervous impulse crosses it from the dendrites towards the axon.

Figure 3.1 - A neuron (soma, dendrites, axon, terminal arborisation, synapses)
2 - General concepts
2.1 - Some neurophysiology… (2)

When considering the brain or the neuron, many questions remain open:
• How is information organised in the brain?
• Under which conditions is a synapse created?
• Is the position of a neuron in the brain important?
• …
2.2 - Formal neuron (McCulloch & Pitts' model, 1943)

• A formal neuron applies a trigger (threshold) function to the weighted sum of its inputs (with a delay). This model is a simplified version of the biological neuron.

• s = t( Σi wi · ei )
Figure 3.2 - Formal neuron (inputs e1 … en, weights w1 … wn, soma potential v, transfer function t, output s)
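As a sketch, the formal neuron above can be written in a few lines of Python (the names `formal_neuron` and `theta` are illustrative, not from the slides):

```python
# Minimal sketch of the formal neuron: s = t(sum_i w_i * e_i),
# with a simple threshold (trigger) function as t.

def formal_neuron(inputs, weights, theta=0.5):
    """Return 1 if the weighted sum of the inputs exceeds the threshold theta."""
    v = sum(w * e for w, e in zip(weights, inputs))  # soma potential
    return 1 if v > theta else 0                     # trigger function

# Example: a two-input neuron behaving as a logical AND
print(formal_neuron([1, 1], [0.4, 0.4]))  # weighted sum 0.8 > 0.5 -> 1
print(formal_neuron([1, 0], [0.4, 0.4]))  # weighted sum 0.4 < 0.5 -> 0
```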
2.2 - Formal neuron (2)

Notations:
• ei: stimulus (input);
• wi: coefficient / synaptic weight;
• v: soma potential;
• t: transfer function (usually a sigmoid);
• s: answer (output).

The neuron can be in two states:
• excited, if s = 1;
• not excited, if s = 0.

Thus, a neuron separates the space of inputs with a hyperplane. This is why a neural network is good at classification.

The action of a single neuron is quite simple; only the cooperation of a great number of neurons can accomplish complex tasks.
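The hyperplane view can be illustrated by brute force: AND is linearly separable, so some single neuron realises it, while XOR is not (the limit of the perceptron shown by Papert and Minsky, cf. 1.3). The helper names below are hypothetical:

```python
# Illustrative check: a single neuron realises a boolean truth table
# only if the table is linearly separable. AND is; XOR is not.

import itertools

def neuron(e1, e2, w1, w2, theta):
    return 1 if w1 * e1 + w2 * e2 > theta else 0

def realisable(table):
    """Brute-force search over a small weight grid for a neuron matching `table`."""
    grid = [x / 4 for x in range(-8, 9)]  # weights and threshold in [-2, 2]
    for w1, w2, theta in itertools.product(grid, repeat=3):
        if all(neuron(e1, e2, w1, w2, theta) == s for (e1, e2), s in table.items()):
            return True
    return False

AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}
XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
print(realisable(AND))  # True  -> one hyperplane suffices
print(realisable(XOR))  # False -> no single hyperplane works
```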
2.3 - Transfer function

t can be a threshold function:
• v < θ → s = 0;
• v > θ → s = 1.

Figure 3.3 - Transfer curve (output s against potential v, threshold θ)
2.3 - Transfer function (2)

Problem for learning: the threshold function cannot be differentiated, so a sigmoid function is preferred:

s = t(v) = (exp(b·v) - 1) / (exp(b·v) + 1)

Figure 3.4 - Sigmoid function
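A minimal check of this transfer function in Python (assuming the formula above, with `b` as the slope parameter): it coincides with tanh(b·v/2), so it is smooth and its derivative exists everywhere, unlike the threshold function.

```python
import math

def sigmoid(v, b=1.0):
    """The slide's transfer function t(v) = (exp(b*v) - 1) / (exp(b*v) + 1)."""
    return (math.exp(b * v) - 1) / (math.exp(b * v) + 1)

# Identity with tanh: (e^x - 1) / (e^x + 1) == tanh(x / 2)
print(abs(sigmoid(0.7, b=2.0) - math.tanh(0.7)) < 1e-12)  # True
print(sigmoid(0.0))  # 0.0: the curve passes through the origin
```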
3 - Learning
3.1 - Network

By connecting neurons together, one obtains a strongly non-linear model (because of t), called a connectionist model or "neural network".

There are two families:
• static systems (feedforward, without loops);
• dynamic systems (recurrent, with loops).
3.2 - Learning methodology

A neural network is an adaptive model: there exist learning algorithms that 'adapt' the system to the real process.

The process is described by a set of observations that form the learning base. The learning algorithm identifies the weights of the model so as to make the error as small as possible.
3.3 - Learning method (supervised)

Calculation of the square of the error:

E = Σj (vj - vdj)²

Calculation of the gradient of the error, ∂E/∂wik: only vi depends on wik; the desired output vdj does not depend on the weights.
3.3 - Learning method (supervised) 2

Let di = (vi - vdi). The weights are corrected along the negative gradient:

Δwik = -η · di · ek

with η the learning rate (the constant factor of the gradient is absorbed into η).
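The update above can be sketched for a single linear neuron (illustrative names; the factor 2 of the gradient is folded into η):

```python
# Delta-rule sketch: gradient descent on E = sum over samples of (v - vd)^2
# for one linear neuron v = sum_k w_k * e_k.

def train(samples, n_weights, eta=0.2, epochs=100):
    """Identify the weights from the learning base `samples` of (e, vd) pairs."""
    w = [0.0] * n_weights
    for _ in range(epochs):
        for e, vd in samples:
            v = sum(wk * ek for wk, ek in zip(w, e))  # soma potential
            d = v - vd                                # error term d_i = v_i - vd_i
            for k in range(n_weights):
                w[k] -= eta * d * e[k]                # delta-rule update
    return w

# Learning base for the target relation vd = 2*e1 - e2
samples = [([1, 0], 2), ([0, 1], -1), ([1, 1], 1), ([2, 1], 3)]
w = train(samples, 2)
print([round(x, 3) for x in w])  # converges to [2.0, -1.0]
```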
3.3 - Learning method (supervised) 3

Each neuron 'cuts' the input space into two regions.

Figure 3.5 - Regions
3.3 - Learning method (supervised) 4

The main quality of a neural network is not its ability to restore an example it has learnt, but rather its capacity to generalise (i.e. to give the right answer to an input that has not been learnt).

Two kinds of learning:
• unsupervised learning;
• supervised learning.
3.4 - Unsupervised learning

There is no target vector: the network organises itself when given an input vector. Uses:
• source separation in signal processing;
• image pre-processing…
3.5 - Supervised learning (95% of NN applications)

The network learns through vector pairs (ik, ok); the set of the k pairs is the learning base.

The aim of learning is to find, for each weight wij, a value such that the difference between the answer to the input vector and the desired output vector is small.

If the examples are "good" and the weights are correctly preset, the network converges rapidly (i.e. stops when the error |ei - edi| falls below a threshold ε).

For a network with more than three layers, the previous method is no longer usable, because the desired output is unknown for the hidden layers.

The method then used is the 'back-propagation' of the gradient of the error (1982-85).

With this method it is possible to capture non-linear relations between an input and an output vector.
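A compact sketch of the back-propagation idea (one hidden tanh layer, squared error; all names and sizes are illustrative, not the original algorithm's notation). It fits XOR, which section 3.3 showed a single neuron cannot represent:

```python
# Back-propagation sketch for a network with one hidden layer.

import math, random

def forward(x, W1, W2):
    """Hidden tanh layer followed by a linear output neuron."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = sum(w * hj for w, hj in zip(W2, h))
    return h, y

def loss(data, W1, W2):
    return sum((forward(x, W1, W2)[1] - t) ** 2 for x, t in data)

def train(data, W1, W2, eta=0.1, epochs=3000):
    for _ in range(epochs):
        for x, target in data:
            h, y = forward(x, W1, W2)
            d = y - target                        # output-layer error
            for j in range(len(W2)):
                g = d * W2[j] * (1 - h[j] ** 2)   # error propagated back through tanh'
                W2[j] -= eta * d * h[j]           # output weight update
                for i in range(len(x)):
                    W1[j][i] -= eta * g * x[i]    # hidden weight update

rng = random.Random(0)
# XOR data; the constant third input plays the role of a bias.
data = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 0)]
W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [rng.uniform(-1, 1) for _ in range(4)]
before = loss(data, W1, W2)
train(data, W1, W2)
after = loss(data, W1, W2)
print(after < before)  # True: training reduces the squared error
```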
3.5 - Supervised learning (2)

Applications:
• Classification
• Pattern recognition
• Process identification
• Non-linear systems (signal processing…)
4 - Network architecture

Static network with full connection (multilayer network):
• Ni: number of neurons in the input layer;
• Nh: number of hidden neurons;
• No: number of neurons in the output layer.

Figure 3.6 - A multilayer network
4 - Network architecture (2)

• Perceptron
• Adaline
• Hopfield's architecture
• Kohonen
5 - Conclusion