
Multi Layer Perceptrons (MLP)

Course website: http://horn.tau.ac.il/course06.html

The back-propagation algorithm

Following Hertz chapter 6

Feedforward Networks

A connection is allowed from a node in layer i only to nodes in layer i + 1. This is the most widely used architecture. Conceptually, nodes at higher levels successively abstract features from the preceding layers.

Network Architecture

Examples of binary-neuron feed-forward networks

MLP with sigmoid transfer-functions
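As an illustration of what such a network computes (a minimal sketch, not taken from the course materials; the layer sizes, the small random initialization, and the example input are arbitrary assumptions), a forward pass through an MLP with sigmoid transfer functions can be written as:

```python
import numpy as np

def sigmoid(x):
    # Logistic transfer function: g(h) = 1 / (1 + exp(-h))
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    # Hidden layer: weighted sum of the inputs, passed through the sigmoid
    h = sigmoid(W1 @ x + b1)
    # Output layer: weighted sum of the hidden activities, passed through the sigmoid
    y = sigmoid(W2 @ h + b2)
    return h, y

# Example: 2 inputs -> 3 hidden units -> 1 output (sizes chosen arbitrarily)
rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.1, size=(3, 2)), np.zeros(3)
W2, b2 = rng.normal(scale=0.1, size=(1, 3)), np.zeros(1)
h, y = forward(np.array([0.2, 0.7]), W1, b1, W2, b2)
```

Each layer applies a weighted sum followed by the sigmoid, so higher layers operate on the features computed by the layer below.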

The backprop algorithm

1. Initialize weights to small random numbers.
2. Choose a pattern and apply it to the input layer.
3. Propagate signals forward through the network.
4. Compute deltas for the output layer by comparing actual outputs with the desired ones.
5. Compute deltas for the preceding layers by backpropagating the errors.
6. Update all weights.
7. Repeat from step 2 for the next pattern (a code sketch of one such pass is given below).
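A minimal sketch of one such update for a single-hidden-layer network with sigmoid units and squared error (it reuses `sigmoid` and `forward` from the sketch above; the learning rate `eta` is an arbitrary choice, not something fixed by the slides):

```python
def backprop_step(x, t, W1, b1, W2, b2, eta=0.5):
    # Steps 2-3: apply the pattern x and propagate signals forward
    h, y = forward(x, W1, b1, W2, b2)
    # Step 4: output-layer deltas from actual vs. desired outputs
    # (the derivative of the sigmoid output y is y * (1 - y))
    delta_out = (t - y) * y * (1.0 - y)
    # Step 5: deltas for the preceding layer by backpropagating the errors
    delta_hid = (W2.T @ delta_out) * h * (1.0 - h)
    # Step 6: update all weights by gradient descent on the squared error
    W2 += eta * np.outer(delta_out, h)
    b2 += eta * delta_out
    W1 += eta * np.outer(delta_hid, x)
    b1 += eta * delta_hid
    return W1, b1, W2, b2
```

Looping this step over all patterns, and repeating for many epochs, covers steps 2-7 of the algorithm; step 1 is the small random initialization shown earlier.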

Application to data

The data are divided into a training set and a test set. BP is based on minimizing the error on the training set, while the generalization error is the error on the test set. Further training may lead to an increase in the generalization error: over-training. Know when to stop… one can use a cross-validation set (a mini test set chosen out of the training set). Constraining the number of free parameters also helps to minimize over-training.
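A minimal sketch of such a stopping rule (assuming the `forward` and `backprop_step` routines sketched above, a held-out cross-validation set, and a hypothetical `patience` parameter of my own choosing):

```python
def train_with_early_stopping(train, val, params, epochs=1000, patience=10):
    # train, val: lists of (input, target) pairs; params: (W1, b1, W2, b2)
    best_err, best_params, bad_epochs = np.inf, params, 0
    for epoch in range(epochs):
        for x, t in train:                        # one epoch of BP on the training part
            params = backprop_step(x, t, *params)
        # estimate the generalization error on the held-out cross-validation set
        val_err = np.mean([np.sum((t - forward(x, *params)[1]) ** 2) for x, t in val])
        if val_err < best_err:
            best_err = val_err
            best_params = tuple(w.copy() for w in params)
            bad_epochs = 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:            # validation error keeps rising: stop
                break
    return best_params
```

Training stops once the cross-validation error has stopped improving, and the weights from the best epoch are kept.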

The sun-spots problem

Time-series in lag-space
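"Lag-space" here means presenting the network with a window of past values and asking it to predict the next one. A minimal sketch of building such input/target pairs (the window length of 12 is an arbitrary assumption; for the sunspots problem the series would be the yearly sunspot numbers):

```python
import numpy as np

def lag_space(series, lags=12):
    # Each input is a window of the previous `lags` values,
    # and the target is the value that follows the window.
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = np.array(series[lags:])
    return X, y

# Toy example with a synthetic oscillating series
X, y = lag_space(np.sin(np.linspace(0.0, 20.0, 200)), lags=12)
```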

The MLP network and the cost function with complexity term
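The slide does not spell out the cost function here; a common choice for this kind of problem is a squared-error term plus a weight-elimination style complexity term (a sketch only; the symbols λ for the complexity weight and w₀ for the weight scale are my own notation):

```latex
E = \frac{1}{2} \sum_{\mu} \left( t^{\mu} - y^{\mu} \right)^{2}
    + \lambda \sum_{i} \frac{w_i^{2} / w_0^{2}}{1 + w_i^{2} / w_0^{2}}
```

The first term fits the training data; the second penalizes weights that are large relative to w₀ and so constrains the number of effectively free parameters, in line with the over-training remarks above.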

First hidden layer – the resulting ‘receptive fields’

The second hidden layer

Exercise No 1. Submit answers electronically to Roy by April 21st.

Consider a 2-dimensional square divided into 16 black and white sub-squares, like a 4x4 chessboard (e.g. the plane 0<x<1, 0<y<1 is divided into sub-squares such as 0<x<0.25, 0<y<0.25, etc.).

Build a feed-forward neural network whose input is composed of the coordinate values x and y, and whose output is a binary variable corresponding to the color associated with the input point.

Suggestion: use a sigmoid function throughout the network, even for the output, upon which you are free to later impose a binary decision.

1. Explain why one needs many hidden neurons to solve this problem.
2. Show how the performance of the network improves as a function of the number of training epochs.
3. Show how it improves as a function of the number of input points.
4. Display the 'visual fields' of the hidden neurons for your best solution. Discuss this result.
5. Choose a random set of training points and a random set of test points. These sets should have moderate sizes. Compute both the training error and the generalization error as a function of the number of training epochs.
6. Comment on any further insights you may have from this exercise.
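As a starting point for the exercise (a minimal sketch only, assuming numpy and the `forward`/`backprop_step` routines sketched earlier; the number of hidden units, the learning rate, the sample sizes, and which parity counts as "white" are arbitrary choices, not part of the assignment):

```python
import numpy as np

def chessboard_color(px, py):
    # Color of a 4x4 chessboard on the unit square: 1 when the column and
    # row indices share the same parity, 0 otherwise (which parity counts
    # as "white" is an arbitrary choice here).
    return float((int(px * 4) + int(py * 4)) % 2)

def make_data(n, rng):
    pts = rng.random((n, 2))                      # random points in the unit square
    return [(p, np.array([chessboard_color(p[0], p[1])])) for p in pts]

rng = np.random.default_rng(1)
train, test = make_data(500, rng), make_data(200, rng)

# 2 inputs -> 20 hidden sigmoid units -> 1 sigmoid output (sizes are a guess)
W1, b1 = rng.normal(scale=0.5, size=(20, 2)), np.zeros(20)
W2, b2 = rng.normal(scale=0.5, size=(1, 20)), np.zeros(1)
for epoch in range(200):
    for x, t in train:
        W1, b1, W2, b2 = backprop_step(x, t, W1, b1, W2, b2, eta=0.2)

# Fraction of test points misclassified after imposing a binary decision at 0.5
test_err = np.mean([(forward(x, W1, b1, W2, b2)[1][0] >= 0.5) != t[0]
                    for x, t in test])
```

Recording the training-set and test-set errors after each epoch of the loop above gives the curves asked for in items 2 and 5.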
