
MACHINE LEARNING

Continuous Time-Delay NN Limit-Cycles, Stability and Convergence

Recurrent Neural Networks

So far, we have considered only feed-forward neural networks (apart from Hebbian learning).

Most biological networks have recurrent connections.

This change in the direction of information flow is interesting, as it can allow the network:

• to keep a memory of the activation of the neurons
• to propagate information across output neurons

Neuron models, from abstract to realistic:

• Binary neurons, discrete time → Perceptron NNs, Hopfield network
• Real-number neurons, discrete time → BackProp NNs, Kohonen map
• Real-number neurons, continuous time → Cont. Time Recur. NN, Echo-state network, several CPG models

Dynamical Systems and NN

Dynamical systems are at the core of the control systems underlying skillful motion in many vertebrates.

Central Pattern Generator: pure cyclic patterns underlying basic locomotion.

Adaptive Controllers: dynamical modulation of the CPG.

Dynamical Systems

Dynamical Systems: Applications

Model of human three-dimensional reaching movements: the aim is to find a generic representation of motions that allows both robust visual recognition and flexible regeneration of motion.

Dynamical System Modulation

• Adaptation to sudden target displacement
• Different initial conditions
• Adaptation to different contexts
• Online adaptation to changes in the context

Neuron models, from abstract to realistic:

• Binary neurons, discrete time → Perceptron NNs, Hopfield network
• Real-number neurons, discrete time → BackProp NNs, Kohonen map
• Real-number neurons, continuous time → Cont. Time Recur. NN, Echo-state network, several CPG models

Leaky integrator neuron model

Idea: add a state variable $m_j$ (~ membrane potential) that is controlled by a differential equation.

Discrete time:

$$x_j(t+1) = \frac{1}{1 + e^{-D\,S_j(t)}}$$

Real (continuous) time:

$$\tau \frac{dm_j}{dt} = -m_j + S_j, \qquad x_j = \frac{1}{1 + e^{-D\,m_j}}$$

with the input $S_j = \sum_i w_{ij}\, x_i$.

Leaky integrator neuron model

Idea: add a state variable $m_j$ (membrane potential) that is controlled by a differential equation:

$$\tau \frac{dm_j}{dt} = -m_j + S_j, \qquad x_j = \frac{1}{1 + e^{-D(m_j + b_j)}}, \qquad S_j = \sum_i w_{ij}\, x_i$$

where
• $m_j$: membrane potential
• $x_j$: firing rate
• $\tau$: time constant
• $b_j$: bias
• $S_j$: input (dendritic sum)

$m_j$ converges to $S_j$ with a speed that depends on $\tau$. (Plot: convergence of $m_j$ for inputs $S = 0.1$ and $S = 0.5$.)
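To make the dynamics concrete, here is a minimal numerical sketch of one integration step of this model; the forward-Euler scheme, the step size dt, the optional external input I, and all function names are illustrative choices, not from the slides.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def euler_step(m, W, b, tau, D, dt, I=0.0):
    """One forward-Euler step of tau * dm_j/dt = -m_j + S_j."""
    x = sigmoid(D * (m + b))        # firing rates x_j
    S = W @ x + I                   # dendritic sums S_j (W[j, i] holds w_ij)
    return m + dt * (-m + S) / tau
```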

Leaky integrator neuron model

This type of neuron model is used in:
• recurrent neural networks for time series analysis (e.g. echo-state networks)
• neural oscillators
• several CPG models
• associative memories, e.g. the continuous-time version of the Hopfield model

Behavior of a single neuron

The behavior of a single leaky-integrator neuron without self-connection is governed by a linear differential equation that can be solved analytically. Here $S$ is a constant input:

$$\tau \frac{dm_1}{dt} = -m_1 + S, \qquad m_1(0) = m_0, \qquad x_1 = \frac{1}{1 + e^{-D(m_1 + b)}}$$

The solution is

$$m_1(t) = S + (m_0 - S)\, e^{-t/\tau}, \qquad x_1(t) = \frac{1}{1 + e^{-D(m_1(t) + b)}}$$

Behavior of a single neuron

Numerical simulation of the solution above with parameters tau = 0.2; D = 1.0; m0 = 0.0; S = 3.0; b = 0.0. (Plot: $m_1(t)$ converges exponentially to $S = 3$.)
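As a sanity check, the sketch below integrates the equation with the slide's parameters and compares the result with the closed-form solution; the step size and time horizon are my own choices.

```python
import numpy as np

tau, D, m0, S, b = 0.2, 1.0, 0.0, 3.0, 0.0     # parameters from the slide
dt, T = 0.001, 1.0                              # step size and horizon (assumed)

m = m0
for step in range(int(T / dt)):
    m += dt * (-m + S) / tau                    # forward-Euler integration

m_exact = S + (m0 - S) * np.exp(-T / tau)       # closed-form solution
print(m, m_exact)                               # both are close to S = 3
```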

Behavior of a single neuron

The behavior of a single leaky-integrator neuron with a self-connection is governed by a nonlinear differential equation that cannot be solved analytically:

$$\tau \frac{dm_1}{dt} = -m_1 + S + w_{11}\, x_1, \qquad m_1(0) = m_0, \qquad x_1 = \frac{1}{1 + e^{-D(m_1 + b)}}$$

Here $w_{11}\, x_1$ is the nonlinear term.

Behavior of a single neuron: numerical simulation

The same equation simulated numerically with parameters tau = 0.2; D = 1; w11 = -0.5; b = 0.0; S = 3.0. (Plot of the resulting trajectory.)
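A sketch of the corresponding numerical simulation with the slide's parameters (the integration scheme, step size, and initial condition m(0) = 0 are assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

tau, D, w11, b, S = 0.2, 1.0, -0.5, 0.0, 3.0   # parameters from the slide
dt, m = 0.001, 0.0                              # step size and m(0) (assumed)

for step in range(int(2.0 / dt)):
    x = sigmoid(D * (m + b))                    # firing rate x_1
    m += dt * (-m + S + w11 * x) / tau          # includes the nonlinear term w11*x_1
print(m)                                        # settles at the fixed point (~2.54)
```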

Fixed points with inhibitory self-connection

Finding the (stable or unstable) fixed points: writing the dynamics as

$$\tau \frac{dm_1}{dt} = -m_1 + S + w_{11}\, x_1 \equiv f(m_1),$$

a fixed point $\tilde{m}$ satisfies

$$0 = -\tilde{m} + S + w_{11}\,\sigma\!\big(D(\tilde{m} + b)\big),$$

where $\sigma(z) = 1/(1 + e^{-z})$ is the sigmoid. Parameters: tau = 0.2; D = 1; w11 = -20; b = 0.0. (Plot: graphical solution of the fixed-point equation for $w_{11} = -20$, $S = 30$.)

Fixed points with inhibitory self-connection

For $w_{11} = -20$, $S = 30$ (tau = 0.2; D = 1; b = 0.0), the fixed-point equation $0 = -\tilde{m} + S + w_{11}\,\sigma(\tilde{m} + b)$ has a single solution, $\tilde{m} = 10$.
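The fixed-point equation can also be solved numerically; the sketch below uses SciPy's brentq root finder (the bracket is an assumption) and recovers m̃ ≈ 10:

```python
import numpy as np
from scipy.optimize import brentq

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

D, w11, b, S = 1.0, -20.0, 0.0, 30.0           # parameters from the slide

def f(m):
    """dm/dt is zero exactly where f(m) = 0."""
    return -m + S + w11 * sigmoid(D * (m + b))

m_fix = brentq(f, -50.0, 50.0)                  # one sign change in this bracket
print(m_fix)                                    # approximately 10
```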

Fixed points with excitatory self-connection

Finding the (stable or unstable) fixed points with parameters tau = 0.2; D = 1; w11 = 20; b = 0.0 and S = -10: the equation

$$0 = -\tilde{m} + S + w_{11}\,\sigma(\tilde{m} + b)$$

now has three solutions $\tilde{m}$. This neuron will converge to one of the three fixed points, depending on the initial conditions.

Fixed points

A fixed point $\tilde{m}$ satisfies $\frac{dm_1}{dt} = f(\tilde{m}) = 0$.

Stable fixed point: $f'(\tilde{m}) < 0$ (perturbations decay back to $\tilde{m}$).

Stable and unstable fixed points: with three fixed points, the signs alternate, $f'(\tilde{m}) < 0$ (stable), $f'(\tilde{m}) > 0$ (unstable), $f'(\tilde{m}) < 0$ (stable).

Bifurcation

For $w_{11} = -20$ there is a single stable fixed point $\tilde{m}$; for $w_{11} = 20$ there are three fixed points: stable, unstable, stable. By changing the value of $w_{11}$, the stability properties of the neuron change: the system has undergone a bifurcation.
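A sketch that locates all three fixed points for the excitatory case and classifies each one by the sign of f'(m̃); the grid range and resolution are assumptions:

```python
import numpy as np
from scipy.optimize import brentq

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

D, w11, b, S = 1.0, 20.0, 0.0, -10.0           # parameters from the slide

def f(m):
    return -m + S + w11 * sigmoid(D * (m + b))

def f_prime(m):
    s = sigmoid(D * (m + b))
    return -1.0 + w11 * D * s * (1.0 - s)

# Locate all sign changes of f on a grid, then refine each root.
grid = np.linspace(-30.0, 30.0, 2000)
vals = f(grid)
for k in np.flatnonzero(np.sign(vals[:-1]) != np.sign(vals[1:])):
    m_fix = brentq(f, grid[k], grid[k + 1])
    kind = "stable" if f_prime(m_fix) < 0 else "unstable"
    print(f"fixed point m = {m_fix:+.3f} ({kind})")   # ~ -10, 0, +10
```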

Two-neuron oscillator

(Diagram: two mutually coupled neurons, 1 and 2.)

Two-neuron network: possible behaviors

Possible phase portraits include: a single stable point; an unstable point and a saddle; a limit cycle; three stable points and two saddles; four stable points, one unstable point, and four saddles.

See Beer (1995), Adaptive Behavior, Vol. 3, No. 4.

Conclusion: even very simple leaky-integrator neural networks can exhibit rich dynamics.
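A minimal simulation of a two-neuron limit-cycle oscillator of this kind; the weights and biases below are a standard two-neuron CTRNN oscillator setting from the literature, not values given on the slides:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights/biases: a common two-neuron CTRNN oscillator setting (assumed
# for illustration; the slides give no parameters).
W = np.array([[4.5, -1.0],
              [1.0,  4.5]])        # W[i, j] = weight from neuron j to neuron i
b = np.array([-2.75, -1.75])       # biases
tau, dt = 1.0, 0.01

m = np.array([0.1, 0.2])           # initial membrane potentials
xs = []
for step in range(10000):
    x = sigmoid(m + b)             # firing rates
    m = m + dt * (-m + W @ x) / tau
    xs.append(x)
xs = np.array(xs)
print(xs[-2000:, 0].min(), xs[-2000:, 0].max())   # min != max: a limit cycle
```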

Four-neuron oscillator

(Diagram: four coupled neurons, 1 to 4.)

Modulation of a four-neuron oscillator

Applications of a four-neuron oscillator

Each neuron's activation function is governed by the leaky-integrator equation introduced above.

Transition from walking to trotting and then to a galloping gait following an increase of the tonic input from 1 to 1.4 and 1.6, respectively.

A simple circuit implements sitting and lying-down behaviors by sequential inhibition of the legs.

How to design leaky-integrator neural networks?

• Recurrent back-propagation algorithm
• With the use of an energy function (cf. Hopfield)
• Genetic algorithms
• Linear regression (echo-state network)
• Guidance from dynamical systems theory

Application of leaky-integrator neural networks: Modeling Human Data

• Muscle model
• Coupled oscillators for basic cyclic motion and reflexes
• Time-delay NN acting as associative memory for storing sequences of activation

(Plots: human data vs. simulated data.)

Schematic setup of an Echo State Network

Inputs (time series) → internal state → output (time series)

• Input weights: random values
• Internal weights: random values
• Output weights: trained

How do we train W_out? It is a supervised learning algorithm.

The training dataset is $D = \{u(t),\, d(t)\}$, where $u(t)$ is the input time series and $d(t)$ the desired output time series.

(Illustration: example input time series $u(t)$ with labeled segments A, B, C, and the corresponding desired outputs.)

Simply do a linear regression: a linear regression on the (high-dimensional) space of the inputs AND the internal states.

(Geometrical illustration with a 3-unit network.)
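A minimal echo-state network sketch in the spirit of these slides: random fixed input and internal weights, states collected over time, and W_out obtained by least squares. The task (predicting a delayed sine), the reservoir size, and the scaling constants are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 100

# Input and internal (reservoir) weights: random and fixed, as on the slide.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius < 1

# Toy task (an assumption, not from the slides): predict a delayed sine wave.
t = np.arange(1000)
u = np.sin(0.1 * t)[:, None]                      # input time series u(t)
d = np.sin(0.1 * (t - 5))[:, None]                # desired output d(t)

# Run the reservoir and collect [state; input] at each time step.
states = np.zeros((len(t), n_res + n_in))
x = np.zeros(n_res)
for k in range(len(t)):
    x = np.tanh(W @ x + W_in @ u[k])
    states[k] = np.concatenate([x, u[k]])

# Train W_out by linear regression (least squares), discarding a washout.
washout = 100
W_out, *_ = np.linalg.lstsq(states[washout:], d[washout:], rcond=None)
y = states @ W_out
print(np.mean((y[washout:] - d[washout:]) ** 2))  # small training error
```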

Data acquisition

Network inputs and outputs: $d_{i,n} = y_i(t_n)$, and the winning output is read out as $i^*(n) = \arg\max_i d_i(n)$.

(Plot: blue line, desired output; red line, network output.)

Neuron models, from abstract to realistic:

• Binary neurons, discrete time → Perceptron NNs, Hopfield network
• Real-number neurons, discrete time → BackProp NNs, Kohonen map

BACKPROPAGATION

A two-layer Feed-Forward Neural Network

(Diagram: inputs enter the input neurons, pass through the hidden neurons, and produce outputs at the output neurons.)

The outputs of the hidden nodes are unknown; thus, the error must be back-propagated from the output neurons to the hidden neurons.
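For concreteness, a minimal two-layer network trained with back-propagation on the classic XOR task; the task, layer sizes, and learning rate are illustrative assumptions, and a different seed may need more epochs.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# XOR task: illustrative assumption, not from the slides.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output
eta = 1.0                                        # learning rate

for epoch in range(10000):
    h = sigmoid(X @ W1 + b1)                     # hidden activations
    y = sigmoid(h @ W2 + b2)                     # network outputs
    d_out = (y - T) * y * (1 - y)                # error term at the output
    d_hid = (d_out @ W2.T) * h * (1 - h)         # error back-propagated to hidden
    W2 -= eta * h.T @ d_out
    b2 -= eta * d_out.sum(axis=0)
    W1 -= eta * X.T @ d_hid
    b1 -= eta * d_hid.sum(axis=0)

print(y.round(2).ravel())                        # approaches [0, 1, 1, 0]
```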

BPRNN

Backpropagation has also been generalized to allow learning in recurrent neural networks (Elman- and Jordan-type RNNs).

Learning time series

Recurrent Neural Networks

JORDAN NETWORK

(Diagram: input units $x$ and context units $c$ feed the hidden units $h$, which feed the output layer $y$; the outputs are fed back to the context units.)

Context units: $c_i(t) = c_i(t-1) + y_i(t-1)$

Recurrent Neural Networks

ELMAN NETWORK

(Diagram: input units $x$ and context units $c$ feed the hidden units $h$, which feed the output layer $y$; the hidden units are fed back to the context units.)

The context units store the content of the hidden units: $c_i(t) = h_i(t-1)$
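A minimal sketch of the forward pass through an Elman network, showing how the context units copy the previous hidden state; all dimensions and weight names are illustrative assumptions, and no training is shown.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Dimensions and weight names are illustrative assumptions.
n_in, n_hid, n_out = 3, 5, 2
rng = np.random.default_rng(0)
W_xh = rng.normal(0, 0.5, (n_in, n_hid))   # input -> hidden
W_ch = rng.normal(0, 0.5, (n_hid, n_hid))  # context -> hidden
W_hy = rng.normal(0, 0.5, (n_hid, n_out))  # hidden -> output

c = np.zeros(n_hid)                         # context units, initially zero
for x in rng.random((10, n_in)):            # a short input sequence
    h = sigmoid(x @ W_xh + c @ W_ch)        # hidden state sees the context
    y = sigmoid(h @ W_hy)
    c = h.copy()                            # c_i(t) = h_i(t-1): copy hidden state
print(y)
```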

Recurrent Neural Networks

ASSOCIATE SEQUENCES OF SENSORIMOTOR PERCEPTIONS

(Diagram: the same architecture with robot perceptions as inputs $x$ and robot actions as outputs $y$.)

Recurrent Neural Networks: Robotics Applications

Associate sequences of sensorimotor perceptions: generalization.

Ito, Noda, Hashino & Tani, Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model, Neural Networks, April 2006.


Neuron models, from abstract to realistic:

• Binary neurons, discrete time → Perceptron NNs, Hopfield network
• Real-number neurons, discrete time → BackProp NNs, Kohonen map
• Real-number neurons, continuous time → Cont. Time Recur. NN, Echo-state network, several CPG models
• Spiking neurons (integrate and fire) → Liquid-state machine, several comp. neurosc. models

Rate coding versus spike coding

Important question: is information in the brain encoded in rates of spikes or in the timing of individual spikes?

Answer: probably both!

Rates encode information sent to muscles

Visual processing can be done very quickly (~150ms), with just a few spikes (Thorpe S., Fize D., and Marlot C. 1996, Nature).

(Illustration: the same spike train interpreted over time under rate coding and under spike coding.)

Integrate-and-fire neuron

Integrate-and-fire: like leaky-integrator models, but with the production of spikes when the membrane potential exceeds a threshold. It combines leaky integration and reset.
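A minimal integrate-and-fire simulation showing exactly this combination of leaky integration, threshold crossing, and reset; all constants are illustrative assumptions.

```python
import numpy as np

# Illustrative constants (assumed, not from the slides).
tau, v_rest, v_thresh, v_reset = 0.02, 0.0, 1.0, 0.0
dt, I = 0.001, 1.2                                      # constant input current

v, spikes = v_rest, []
for step in range(1000):                                # simulate 1 second
    v += dt * (-(v - v_rest) + I) / tau                 # leaky integration
    if v >= v_thresh:                                   # threshold crossing
        spikes.append(step * dt)                        # record the spike time
        v = v_reset                                     # reset after the spike
print(len(spikes), "spikes in 1 s")
```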

See Spiking Neuron Models: Single Neurons, Populations, Plasticity, Gerstner and Kistler, Cambridge University Press, 2002. (Figure from Gerstner, 2002.)

Neuron models, from abstract to realistic:

• Binary neurons, discrete time → Perceptron NNs, Hopfield network
• Real-number neurons, discrete time → BackProp NNs, Kohonen map
• Real-number neurons, continuous time → Cont. Time Recur. NN, Echo-state network, several CPG models
• Spiking neurons (integrate and fire) → Liquid-state machine, several comp. neurosc. models
• Biophysical models → Squid neuron (H.&H.), numerous comp. neurosc. models

Hodgkin and Huxley neuron model

REFERENCES

Original Paper:A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J Physiol. 1952 August 28; 117(4): 500–544. http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1392413&blobtype=pdf

Recent Update: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek, Computation in a Single Neuron: Hodgkin and Huxley Revisited, Neural Computation, Vol. 15, No. 8: 1715-1749, 2003. http://www.mitpressjournals.org/doi/pdfplus/10.1162/08997660360675017

FURTHER READING I

• Ito, Noda, Hashino & Tani, Dynamic and interactive generation of object handling behaviorsby a small humanoid robot using a dynamic neural network model, Neural Networks, April, 2006http://www.bdc.brain.riken.go.jp/~tani/papers/NN2006.pdf

• H. Jaeger, "The echo state approach to analysing and training recurrent neural networks" (GMD-Report 148, German National Research Institute for Computer Science 2001). ftp://borneo.gmd.de/pub/indy/publications_herbert/EchoStatesTechRep.pdf

• B. Mathayomchan and R. D. Beer, Center-Crossing Recurrent Neural Networks for the Evolution of Rhythmic Behavior, Neural Comput., September 1, 2002; 14(9): 2043 - 2051. http://www.mitpressjournals.org/doi/pdf/10.1162/089976602320263999

• R. D. Beer, Parameter space structure of continuous-time recurrent neural networks. Neural Comput., December 1, 2006; 18(12): 3009 - 3051. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.12.3009

• Pham, Q.C., and Slotine, J.J.E., "Stable Concurrent Synchronization in Dynamic System Networks," Neural Networks, 20(1), 2007. http://web.mit.edu/nsl/www/preprints/Polyrhythms05.pdf

• Billard, A. and Ijspeert, A.J. (2000) Biologically inspired neural controllers for motor control in a quadruped robot. In Proceedings of the International Joint Conference on Neural Networks, Como (Italy), July. http://lasa.epfl.ch/publications/uploadedFiles/AB_Ijspeert_IJCINN2000.pdf

• Billard, A. and Mataric, M. (2001) Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture. Robotics & Autonomous Systems 941, 1-16. http://lasa.epfl.ch/publications/uploadedFiles/AB_Mataric_RAS2001.pdf

FURTHER READING II

• Herbert Jaeger and Harald Haas, Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication, Science, Vol. 304, No. 5667, pp. 78 - 80, 2004. http://www.sciencemag.org/cgi/reprint/304/5667/78.pdf

• S. Psujek, J. Ames, and R. D. Beer Connection and coordination: the interplay between architecture and dynamics in evolved model pattern generators. Neural Comput., March 1, 2006; 18(3): 729 - 747. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.3.729
