MACHINE LEARNING Continuous Time-Delay NN Limit-Cycles, Stability and Convergence

Upload: calvin-fletcher

Post on 31-Dec-2015

TRANSCRIPT

Page 1: MACHINE LEARNING Continuous Time-Delay NN Limit-Cycles, Stability and Convergence

MACHINE LEARNING

Continuous Time-Delay NN Limit-Cycles, Stability and Convergence

Page 2:

Recurrent Neural Networks

So far, we have considered only feed-forward neural networks (apart from Hebbian learning).

Most biological networks have recurrent connections.

This change in the direction of the flow of information is interesting, as it allows the network:

• to keep a memory of the activation of the neurons
• to propagate information across output neurons

Page 3:

Neuron models

Binary neurons, discrete time: Perceptron NNs, Hopfield network

Real-number neurons, discrete time: BackProp NNs, Kohonen map

Real-number neurons, continuous time: Cont. Time Recur. NN, Echo-state network, several CPG models

(ordered from abstract to realistic)

Page 4:

Dynamical Systems and NN

Dynamical systems are at the core of the control systems underlying skillful motion in many vertebrates.

Central Pattern Generator

Pure cyclic patterns underlying basic locomotion

Page 5:

Dynamical Systems and NN

Dynamical systems are at the core of the control systems underlying skillful motion in many vertebrates.

Adaptive Controllers: dynamical modulation of the CPG

Page 6:

Dynamical Systems

Pages 7 to 13: Dynamical Systems (figure-only slides)

Page 14:

Dynamical Systems: Applications

Model of human three-dimensional reaching movements: to find a generic representation of motions that allows both robust visual recognition and flexible regeneration of motion.

Page 15:

Dynamical System Modulation

Adaptation to sudden target displacement

Different initial conditions

Dynamical Systems: Applications

Page 16:

Adaptation to sudden target displacement

Different initial conditions

Dynamical Systems: Applications

Page 17:

Adaptation to different contexts

Online adaptation to changes in the context

Dynamical Systems: Applications

Page 18:

Neuron models

Binary neurons, discrete time: Perceptron NNs, Hopfield network

Real-number neurons, discrete time: BackProp NNs, Kohonen map

Real-number neurons, continuous time: Cont. Time Recur. NN, Echo-state network, several CPG models

(ordered from abstract to realistic)

Page 19:

Leaky integrator neuron model

Idea: add a state variable m_j (≈ membrane potential) that is controlled by a differential equation.

Discrete time:

x_j(t+1) = 1 / (1 + e^{−D S_j(t)})

Real (continuous) time:

τ_j dm_j/dt = −m_j + S_j

x_j = 1 / (1 + e^{−D m_j})

where S_j = Σ_i w_ij x_i.

Page 20:

Leaky integrator neuron model

Idea: add a state variable m_j (membrane potential) that is controlled by a differential equation:

τ_j dm_j/dt = −m_j + S_j

x_j = 1 / (1 + e^{−D(m_j + b_j)})

m_j converges to S_j with a speed that depends on τ_j.

Notation:
• m_j : membrane potential
• x_j : firing rate
• τ_j : time constant
• b_j : bias
• S_j = Σ_i w_ij x_i : input (dendritic sum)
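As a sketch of how these dynamics behave, the continuous-time equation can be integrated numerically with a simple Euler scheme. The parameter values below are illustrative (they match the ones used on the following slides):

```python
import numpy as np

def sigmoid(m, D=1.0, b=0.0):
    """Firing rate x = 1 / (1 + exp(-D*(m + b)))."""
    return 1.0 / (1.0 + np.exp(-D * (m + b)))

def simulate_leaky_neuron(S, tau=0.2, D=1.0, b=0.0, m0=0.0, dt=0.001, T=2.0):
    """Euler integration of tau * dm/dt = -m + S for a constant input S."""
    steps = int(T / dt)
    m = np.empty(steps)
    m[0] = m0
    for k in range(1, steps):
        m[k] = m[k - 1] + dt / tau * (-m[k - 1] + S)
    return m, sigmoid(m, D, b)

m, x = simulate_leaky_neuron(S=3.0)
# The membrane potential converges towards S (here 3.0),
# at a speed set by the time constant tau.
```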

Page 21:

Leaky integrator neuron model

This type of neuron model is used in:
• recurrent neural networks for time-series analysis (e.g. echo-state networks)
• neural oscillators
• several CPG models
• associative memories, e.g. the continuous-time version of the Hopfield model

Page 22:

Behavior of a single neuron

The behavior of a single leaky-integrator neuron without self-connection is governed by a linear differential equation that can be solved analytically. Here S is a constant input:

τ dm/dt = −m + S, with m(0) = m_0, x = 1 / (1 + e^{−D(m + b)})

Solution:

m(t) = S + (m_0 − S) e^{−t/τ}

x(t) = 1 / (1 + e^{−D(m(t) + b)})

Page 23:

Behavior of a single neuron

m(t) = S + (m_0 − S) e^{−t/τ}

x(t) = 1 / (1 + e^{−D(m(t) + b)})

Simulation parameters: tau = 0.2; D = 1.0; m0 = 0.0; S = 3.0; b = 0.0
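The closed-form solution can be checked against a numerical integration. This small sketch uses the same illustrative parameter values as the slide and confirms that the two agree:

```python
import numpy as np

tau, D, m0, S, b = 0.2, 1.0, 0.0, 3.0, 0.0
dt, T = 0.001, 2.0
t = np.arange(0.0, T, dt)

# Analytical solution: m(t) = S + (m0 - S) * exp(-t / tau)
m_exact = S + (m0 - S) * np.exp(-t / tau)

# Euler integration of tau * dm/dt = -m + S
m_euler = np.empty_like(t)
m_euler[0] = m0
for k in range(1, len(t)):
    m_euler[k] = m_euler[k - 1] + dt / tau * (-m_euler[k - 1] + S)

# Firing rate from the analytical membrane potential
x_exact = 1.0 / (1.0 + np.exp(-D * (m_exact + b)))

err = np.max(np.abs(m_exact - m_euler))   # small for dt << tau
```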

Page 24:

Behavior of a single neuron

The behavior of a single leaky-integrator neuron with a self-connection is governed by a nonlinear differential equation that cannot be solved analytically:

τ dm_1/dt = −m_1 + S + w_11 x_1, with m_1(0) = m_0

x_1 = 1 / (1 + e^{−D(m_1 + b)})

The self-connection contributes the nonlinear term w_11 x_1.

Page 25:

Behavior of a single neuron: numerical simulation

τ dm_1/dt = −m_1 + S + w_11 x_1, with m_1(0) = m_0

x_1 = 1 / (1 + e^{−D(m_1 + b)})

Simulation parameters: tau = 0.2; D = 1; w11 = -0.5; b = 0.0; S = 3.0

Page 26:

Fixed points with inhibitory self-connection

Finding the (stable or unstable) fixed points: set dm_1/dt = 0, which gives

m̃ = S + w_11 σ(m̃)

where σ is the sigmoid, σ(m) = 1 / (1 + e^{−D(m + b)}).

Parameters: tau = 0.2; D = 1; w11 = -20; b = 0.0; S = 30

Page 27:

Fixed points with inhibitory self-connection

dm_1/dt = 0 ⟹ m̃ = S + w_11 σ(m̃) ≈ 10

With w11 = -20 and S = 30 (tau = 0.2; D = 1; b = 0.0), σ(m̃) ≈ 1, so the unique fixed point is m̃ ≈ 10.
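The fixed-point equation m̃ = S + w_11 σ(m̃) has no closed-form solution, but with an inhibitory self-connection (w11 < 0) the residual g(m) = −m + S + w_11 σ(m) is strictly decreasing, so the unique root can be found numerically. A sketch using bisection with the slide's parameter values:

```python
import math

def sigmoid(m, D=1.0, b=0.0):
    return 1.0 / (1.0 + math.exp(-D * (m + b)))

def fixed_point(S, w11, lo=-50.0, hi=50.0, iters=100):
    """Bisection on g(m) = -m + S + w11*sigmoid(m).
    g is strictly decreasing when w11 <= 0, so the root is unique."""
    g = lambda m: -m + S + w11 * sigmoid(m)
    assert g(lo) > 0 > g(hi)          # root is bracketed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m_fp = fixed_point(S=30.0, w11=-20.0)
# With w11 = -20, S = 30: sigmoid(m_fp) is ~1, so m_fp is close to 30 - 20 = 10.
```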

Page 28:

Fixed points with excitatory self-connection

Finding the (stable or unstable) fixed points:

dm_1/dt = 0 ⟹ −m̃ + S + w_11 σ(m̃) = 0

With w11 = 20 and S = -10 (tau = 0.2; D = 1; b = 0.0), this equation has three solutions m̃.

Page 29:

Fixed points with excitatory self-connection

dm_1/dt = 0 ⟹ −m̃ + S + w_11 σ(m̃) = 0

With w11 = 20 and S = -10 (tau = 0.2; D = 1; b = 0.0), there are three fixed points. The neuron will converge to one of them depending on the initial conditions.

Page 30:

Stable fixed point

Writing the dynamics as dm/dt = f(m), a fixed point m̃ satisfies f(m̃) = 0. It is stable when small perturbations decay, i.e. when f′(m̃) < 0.

Page 31:

Stable and unstable fixed points

f(m̃) = 0 with f′(m̃) < 0: stable fixed point

f(m̃) = 0 with f′(m̃) > 0: unstable fixed point

Page 32:

Bifurcation

w11 = -20: one stable fixed point m̃.

w11 = 20: two stable fixed points and one unstable fixed point m̃.

By changing the value of w11, the stability properties of the neuron change: the system has undergone a bifurcation.
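The bifurcation can be checked numerically: counting sign changes of g(m) = −m + S + w_11 σ(m) on a fine grid gives the number of fixed points in each regime. A sketch using the parameter values from the previous slides (D = 1 and b = 0 assumed):

```python
import numpy as np

def count_fixed_points(S, w11, D=1.0, b=0.0):
    """Count fixed points of tau*dm/dt = -m + S + w11*sigmoid(m)
    by counting sign changes of the right-hand side on a fine grid."""
    m = np.linspace(-30.0, 30.0, 1000)
    g = -m + S + w11 / (1.0 + np.exp(-D * (m + b)))
    return int(np.sum(np.sign(g[:-1]) != np.sign(g[1:])))

n_inhib = count_fixed_points(S=30.0, w11=-20.0)   # inhibitory self-connection
n_excit = count_fixed_points(S=-10.0, w11=20.0)   # excitatory self-connection
# The inhibitory case has a single fixed point; the excitatory case has three.
```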

Page 33:

Two-neuron oscillator


Page 34:

Two-neuron network: possible behaviors

• one stable point
• one stable point, one unstable point, one saddle
• one limit cycle
• three stable points, two saddles
• four stable points, one unstable point, four saddles

See Beer (1995), Adaptive Behavior, Vol 3 No 4
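A two-neuron continuous-time RNN in the limit-cycle regime can be simulated directly. The sketch below uses a center-crossing parameter set in the style of Mathayomchan & Beer (2002) (the particular weights and biases are illustrative): the single fixed point is an unstable spiral, so the bounded trajectory settles onto a limit cycle.

```python
import numpy as np

# Weights into each neuron (row i: connections into neuron i).
W = np.array([[4.5, 1.0],
              [-1.0, 4.5]])
theta = np.array([-2.75, -1.75])   # center-crossing biases: theta_i = -sum_j W[i,j]/2
tau = np.array([1.0, 1.0])         # time constants

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

dt, T = 0.01, 100.0
steps = int(T / dt)
y = np.array([3.0, 1.5])           # start near (but not on) the fixed point
x1 = np.empty(steps)               # output of neuron 1 over time
for k in range(steps):
    x = sigmoid(y + theta)
    y = y + dt * (-y + W @ x) / tau   # Euler step of tau*dy/dt = -y + W x
    x1[k] = x[0]

late = x1[steps // 2:]             # discard the transient
amplitude = late.max() - late.min()  # nonzero: sustained oscillation
```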

Page 35:

Conclusion: even very simple leaky-integrator neural networks can exhibit rich dynamics.

Page 36:

Four-neuron oscillator


Page 37:

Modulation of a four-neuron oscillator


Page 38:

Modulation of a four-neuron oscillator

Page 39:

Applications of a four-neuron oscillator

Each neuron’s activation is governed by the leaky-integrator equation introduced earlier.

Page 40:

Applications of a four-neuron oscillator

Transition from walking to trotting and then galloping gaits, following an increase of the tonic input from 1 to 1.4 and 1.6, respectively.

Page 41:

Applications of a four-neuron oscillator

Simple circuit implementing sitting and lying-down behaviors by sequential inhibition of the legs.

Page 42:

Applications of a four-neuron oscillator

Page 43:

How to design leaky-integrator neural networks?

• Recurrent back-propagation algorithm
• With the use of an energy function (cf. Hopfield)
• Genetic algorithms
• Linear regression (echo state network)
• Guidance from dynamical systems theory

Page 44:

Application of leaky-integrator neural networks: Modeling Human Data

Muscle Model

Coupled Oscillators for basic cyclic motion and reflexes

Time-Delay NN acting as associative memory for storing sequences of activation

Page 45:

Muscle Model

Application of leaky-integrator neural networks: Modeling Human Data

Human Data

Simulated Data

Page 46:

Schematic setup of Echo State Network

Page 47:

Schematic setup of ESN (II)

Inputs (time series) → Internal state → Output (time series)

• Input weights: random values
• Internal weights: random values
• Output weights: trained

Page 48:

How do we train W_out? It is a supervised learning algorithm.

The training dataset is D = {u(t), d(t)}, where u(t) is the input time series and d(t) is the desired output time series.

Page 49:

Simply do a linear regression: a linear regression on the (high-dimensional) space of the inputs AND internal states.

Geometrical illustration with a 3-unit network.
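The whole training step can be sketched in a few lines. The reservoir size, input scaling, and task below are illustrative (a one-step memory task, d(t) = u(t−1)); only W_out is fitted, by least squares on the concatenated input-and-state vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n_res, T = 100, 2000

# Random, fixed reservoir: internal weights rescaled to spectral radius 0.9.
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-0.5, 0.5, size=n_res)   # random input weights

u = rng.uniform(-1.0, 1.0, size=T)          # input time series
d = np.concatenate(([0.0], u[:-1]))         # desired output: u delayed one step

# Drive the reservoir and collect states.
X = np.zeros((T, n_res + 1))
x = np.zeros(n_res)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])
    X[t] = np.concatenate(([u[t]], x))      # readout sees input AND internal state

# Train the readout only: ordinary least squares (discard an initial washout).
W_out, *_ = np.linalg.lstsq(X[100:], d[100:], rcond=None)
mse = np.mean((X[100:] @ W_out - d[100:]) ** 2)
```

The recurrent and input weights are never trained; the reservoir only has to be rich enough for a linear readout of its state to recover the target.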

Page 50:

Data acquisition

Page 51:

Network inputs and outputs

For each window n, accumulate the activity of each output unit and pick the winner:

d_{i,n} = Σ_{t∈n} y_i(t),  i*(n) = argmax_i d_{i,n}

Blue line: desired output. Red line: network output.

Page 52:

Neuron models

Binary neurons, discrete time: Perceptron NNs, Hopfield network

Real-number neurons, discrete time: BackProp NNs, Kohonen map

(ordered from abstract to realistic)

Page 53:

BACKPROPAGATION

A two-layer Feed-Forward Neural Network

Outputs

OutputNeurons

Inputs

InputNeurons

HiddenNeurons

The desired output of the hidden nodes is unknown. Thus, the error must be back-propagated from the output neurons to the hidden neurons.

Page 54:

BPRNN

Backpropagation has also been generalized to allow learning in recurrent neural networks.

(Elman, Jordan type of RNN Networks)

Learning time series

Page 55:

Recurrent Neural Networks

Recurrent neural network:

Context units Input units

Hidden units

Output layer

JORDAN NETWORK

Context units (y: outputs, c: context units, x: inputs, h: hidden units):

c_i(t) = c_i(t−1) + y_i(t−1)

Page 56:

Recurrent Neural Networks

Recurrent neural network:

Context units Input units

Hidden units

Output layer

ELMAN NETWORK

The context units store the content of the hidden units (y: outputs, c: context units, x: inputs, h: hidden units):

c_i(t) = h_i(t−1)
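A minimal sketch of the Elman forward pass (the dimensions, random weights, and linear output below are illustrative): the context layer is simply a copy of the previous hidden layer, fed back into the hidden units alongside the input.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid, n_out = 3, 5, 2

W_xh = rng.normal(scale=0.1, size=(n_hid, n_in))   # input   -> hidden
W_ch = rng.normal(scale=0.1, size=(n_hid, n_hid))  # context -> hidden
W_hy = rng.normal(scale=0.1, size=(n_out, n_hid))  # hidden  -> output

def elman_step(x, c):
    """One time step: h(t) depends on x(t) and the context c(t) = h(t-1)."""
    h = np.tanh(W_xh @ x + W_ch @ c)
    y = W_hy @ h
    return y, h

c = np.zeros(n_hid)                  # initial (empty) context
hs = []
for x in [rng.normal(size=n_in) for _ in range(4)]:
    y, h = elman_step(x, c)
    hs.append(h)
    c = h                            # Elman rule: context stores the hidden state
```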

Page 57:

Recurrent Neural Networks

Context units Input units

Hidden units

Output layer

ASSOCIATE SEQUENCES OF SENSORI-MOTOR PERCEPTIONS

y

c x

ROBOT PERCEPTIONS

h

ROBOT ACTIONS

Page 58:

ASSOCIATE SEQUENCES OF SENSORI-MOTOR PERCEPTIONS: Generalization

Recurrent Neural Networks: Robotics Applications

Page 59:

ASSOCIATE SEQUENCES OF SENSORI-MOTOR PERCEPTIONS: Generalization

Recurrent Neural Networks: Robotics Applications

Page 60:

ASSOCIATE SEQUENCES OF SENSORI-MOTOR PERCEPTIONS: Generalization

Recurrent Neural Networks: Robotics Applications

Ito, Noda, Hashino & Tani, Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model, Neural Networks, April 2006

Page 61:

Recurrent Neural Networks: Robotics Applications

Page 62:

Recurrent Neural Networks: Robotics Applications

Page 63:

ASSOCIATE SEQUENCES OF SENSORI-MOTOR PERCEPTIONS: Generalization

Recurrent Neural Networks: Robotics Applications

Page 64:

Recurrent Neural Networks: Robotics Applications

Page 65:

Neuron models

Binary neurons, discrete time: Perceptron NNs, Hopfield network

Real-number neurons, discrete time: BackProp NNs, Kohonen map

Real-number neurons, continuous time: Cont. Time Recur. NN, Echo-state network, several CPG models

Spiking neurons (integrate and fire): Liquid-state machine, several comp. neurosc. models

(ordered from abstract to realistic)

Page 66:

Rate coding versus spike coding

Important question: is information in the brain encoded in rates of spikes or in the timing of individual spikes?

Answer: probably both!

Rates encode information sent to muscles

Visual processing can be done very quickly (~150ms), with just a few spikes (Thorpe S., Fize D., and Marlot C. 1996, Nature).

Page 67:

Time

Rate coding

Spike coding

Rate coding versus spike coding

Page 68:

Integrate-and-fire neuron

Integrate-and-fire: like leaky-integrator models, but with the production of spikes when the membrane potential exceeds a threshold.

It combines leaky-integration and reset

See Spiking Neuron Models. Single Neurons, Populations, Plasticity, Gerstner and Kistler, Cambridge University Press, 2002

(Gerstner 2002)
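The leaky integrate-and-fire neuron is the leaky integrator above plus a threshold-and-reset rule. A sketch (the threshold, reset, and input values are illustrative):

```python
tau, dt, T = 0.02, 1e-4, 1.0       # 20 ms time constant, 1 s of simulation
theta, v_reset = 1.0, 0.0          # spike threshold and reset potential
I = 2.0                            # constant (suprathreshold) input current

steps = int(T / dt)
v = v_reset
spikes = []
for k in range(steps):
    v += dt / tau * (-v + I)       # leaky integration, as before
    if v >= theta:                 # threshold crossed: emit a spike...
        spikes.append(k * dt)
        v = v_reset                # ...and reset the membrane potential

n_spikes = len(spikes)
# A steady input I > theta produces regular firing; I < theta produces none.
```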

Page 69:

Neuron models

Binary neurons, discrete time: Perceptron NNs, Hopfield network

Real-number neurons, discrete time: BackProp NNs, Kohonen map

Real-number neurons, continuous time: Cont. Time Recur. NN, Echo-state network, several CPG models

Spiking neurons (integrate and fire): Liquid-state machine, several comp. neurosc. models

Biophysical models: Squid neuron (H.&H.), numerous comp. neurosc. models

(ordered from abstract to realistic)

Page 70:

Hodgkin and Huxley neuron model

REFERENCES

Original Paper:A. L. Hodgkin and A. F. Huxley, A quantitative description of membrane current and its application to conduction and excitation in nerve, J Physiol. 1952 August 28; 117(4): 500–544. http://www.pubmedcentral.nih.gov/picrender.fcgi?artid=1392413&blobtype=pdf

Recent Update: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek, Computation in a Single Neuron: Hodgkin and Huxley Revisited. Neural Computation, Vol. 15, No. 8: 1715-1749, 2003. http://www.mitpressjournals.org/doi/pdfplus/10.1162/08997660360675017

Page 71:

FURTHER READING I

• Ito, Noda, Hashino & Tani, Dynamic and interactive generation of object handling behaviors by a small humanoid robot using a dynamic neural network model, Neural Networks, April 2006. http://www.bdc.brain.riken.go.jp/~tani/papers/NN2006.pdf

• H. Jaeger, "The echo state approach to analysing and training recurrent neural networks" (GMD-Report 148, German National Research Institute for Computer Science 2001). ftp://borneo.gmd.de/pub/indy/publications_herbert/EchoStatesTechRep.pdf

• B. Mathayomchan and R. D. Beer, Center-Crossing Recurrent Neural Networks for the Evolution of Rhythmic Behavior, Neural Comput., September 1, 2002; 14(9): 2043 - 2051. http://www.mitpressjournals.org/doi/pdf/10.1162/089976602320263999

• R. D. Beer, Parameter space structure of continuous-time recurrent neural networks. Neural Comput., December 1, 2006; 18(12): 3009 - 3051. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.12.3009

• Pham, Q.C., and Slotine, J.J.E., "Stable Concurrent Synchronization in Dynamic System Networks," Neural Networks, 20(1), 2007. http://web.mit.edu/nsl/www/preprints/Polyrhythms05.pdf

• Billard, A. and Ijspeert, A.J. (2000) Biologically inspired neural controllers for motor control in a quadruped robot. In Proceedings of the International Joint Conference on Neural Networks, Como (Italy), July. http://lasa.epfl.ch/publications/uploadedFiles/AB_Ijspeert_IJCINN2000.pdf

• Billard, A. and Mataric, M. (2001) Learning human arm movements by imitation: Evaluation of a biologically-inspired connectionist architecture. Robotics & Autonomous Systems 941, 1-16. http://lasa.epfl.ch/publications/uploadedFiles/AB_Mataric_RAS2001.pdf

Page 72:

FURTHER READING II

• Herbert Jaeger and Harald Haas, Harnessing Nonlinearity: Predicting Chaotic Systems and Saving Energy in Wireless Communication, Science, Vol. 304, no. 5667, pp. 78 - 80, 2004. http://www.sciencemag.org/cgi/reprint/304/5667/78.pdf

• S. Psujek, J. Ames, and R. D. Beer Connection and coordination: the interplay between architecture and dynamics in evolved model pattern generators. Neural Comput., March 1, 2006; 18(3): 729 - 747. http://www.mitpressjournals.org/doi/pdf/10.1162/neco.2006.18.3.729