
ARTICLE IN PRESS  JID:PLREV AID:44 /REV [m3SC+; v 1.78; Prn:22/10/2007; 14:41] P.1 (1-21)


Physics of Life Reviews ••• (••••) •••–•••
www.elsevier.com/locate/plrev

Review

Intentional systems: Review of neurodynamics, modeling, and robotics implementation

Robert Kozma

Computational NeuroDynamics Laboratory, Center of Intentional Robotics, Department of Computer Science, University of Memphis, TN 38152, USA

Received 1 October 2007; received in revised form 2 October 2007; accepted 3 October 2007

Communicated by L. Perlovsky

Abstract

We present an intentional neurodynamic theory for higher cognition and intelligence. This theory provides a unifying framework for integrating symbolic and subsymbolic methods as complementary aspects of human intelligence. Top-down symbolic approaches benefit from the vast experience with logical reasoning and high-level knowledge processing in humans. Connectionist methods use a bottom-up approach to generate intelligent behavior by mimicking subsymbolic aspects of the operation of brains and nervous systems. Neurophysiological correlates of intentionality and cognition include sequences of oscillatory patterns of mesoscopic neural activity. Oscillatory patterns are viewed as intermittent representations of generalized symbol systems with which brains compute. These dynamical symbols are not rigid but flexible; they disappear soon after they emerge, through spatio-temporal phase transitions. Intentional neurodynamics provides a solution to the notoriously difficult symbol grounding problem. Some examples of implementations of the corresponding dynamic principles are described in this review.
© 2007 Published by Elsevier B.V.

PACS: 05.45.-a; 05.45.Gg; 82.40.Bj; 84.35.+i

Keywords: Intentionality; Neurodynamics; Metastability; Intermittency; Chaos

Contents

1. Prologue ........................................................... 2
2. Intelligence: The parts and the whole .............................. 3
   2.1. Top-down approaches ........................................... 3
   2.2. Bottom-up models .............................................. 4
   2.3. Dynamic Logic: Emergence of symbols from subsymbolic principles 5
   2.4. Phenomenology of intentionality ............................... 5
   2.5. Intentionality: The unifying principle of the complementary nature 6
3. Neural correlates of intentionality ................................ 7
   3.1. Attributes of cortical neurodynamics .......................... 7

E-mail address: [email protected].

Please cite this article in press as: R. Kozma, Intentional systems: Review of neurodynamics, modeling, and robotics implementation, Physics of Life Reviews (2007), doi:10.1016/j.plrev.2007.10.002

1571-0645/$ – see front matter © 2007 Published by Elsevier B.V.
doi:10.1016/j.plrev.2007.10.002


   3.2. Phase cones in cortical EEG measurements ...................... 8
   3.3. Phase transitions in brains ................................... 8
4. Modelling spatio-temporal oscillations—K models .................... 9
   4.1. K-sets theory and implementation .............................. 9
   4.2. Hierarchy of K-sets ........................................... 9
        4.2.1. KO set ................................................. 9
        4.2.2. KI sets ................................................ 9
        4.2.3. KII sets ............................................... 10
        4.2.4. KIII sets .............................................. 10
        4.2.5. KIV sets ............................................... 10
5. Construction of intentional dynamic systems at KIV level ........... 10
   5.1. Design of the intentional KIV system .......................... 10
   5.2. Learning in KIV models ........................................ 12
   5.3. Generalized KIV for multi-agent cooperation ................... 14
6. Intentional dynamics for control and navigation .................... 14
   6.1. Basic principles of intentional control ....................... 14
   6.2. KIV in a virtual environment .................................. 15
   6.3. Implementing KIV for SRR2K rover navigation ................... 15
7. Perspectives on intentional dynamics ............................... 18
Acknowledgements ...................................................... 19
References ............................................................ 19

“We have now accumulated sufficient evidence to see that whatever language the central nervous system is using, it is characterized by less logical and arithmetic depth than what we are normally used to.”

(Von Neumann, 1958)

1. Prologue

Exploring the origin, functioning, and the very nature of human intelligence has been among the major scientific endeavors mankind has been fascinated with for millennia. The invention of digital computers over half a century ago drastically changed the focus of investigations into intelligence. Digital computers represent a new research tool, which potentially parallels the capabilities of brains. Von Neumann was one of the pioneers of the new digital computing era. While appreciating the enormous potential of computers, he warned against a mechanistic parallel between brains and computers. In his last work on the relationship between computers and brains, he pointed out that the operation of brains follows principles other than the potentially very high precision of algorithms postulated by Turing machines [1], which provide a theoretical framework for the design of digital computers. Von Neumann found it implausible that brains would use such algorithms in their operations. At a higher level of abstraction, in the last pages of his final posthumous work, Von Neumann contends that the language of the brain cannot be mathematics [2]. He continues:

“It is only proper to realize that language is a largely historical accident. The basic human languages are traditionally transmitted to us in various forms, but their very multiplicity proves that there is nothing absolute and necessary about them. Just as languages like Greek and Sanskrit are historical facts and not absolute logical necessities, it is only reasonable to assume that logics and mathematics are similarly historical, accidental forms of expression. They may have essential variants, i.e. they may exist in other forms than the ones to which we are accustomed. Indeed, the nature of the central nervous system and of the message systems that it transmits, indicate positively that this is so. We have now accumulated sufficient evidence to see that whatever language the central nervous system is using, it is characterized by less logical and arithmetic depth than what we are normally used to.”

(Von Neumann, 1958)


If the language of the brain is not mathematics, if it is not a precise sequence of well-defined logical statements such as the ones used in mathematics, then what is it? Von Neumann was unable to elaborate on this question due to his tragic early death. Half a century of research involving artificially intelligent computer designs could not give the answer either. This is partly due to the fact that Von Neumann's warning about the principal limitations of the early designs of digital computers, called today 'Von Neumann computer architectures', fell on deaf ears. Biological and human intelligence operates in ways different from those implemented in symbol-manipulating digital computers. Nevertheless, researchers excited by the seemingly unlimited power of digital computers embarked on magnificent projects of building increasingly complex computer systems to imitate and surpass human intelligence, without imitating natural intelligence. These projects gave impressive results, but notoriously fell short of producing systems approaching human intelligence.

The last half century produced crucial advances in brain research, in part due to advances in experimental techniques. It has been a major challenge to reconcile the apparent contradiction between the absence of clearly defined symbolic representations in brains, as evidenced by neurodynamics, and the symbolic nature of higher-level cognition and consciousness. In the philosophy of artificial intelligence this is addressed as the notoriously difficult symbol grounding problem. Namely, if there are abstract symbols in intelligent systems like brains, how do these symbols acquire meaning in the context of the very specific and unique life experience of the individual? The neurodynamic approach to cognition and intelligence provides a solution to this problem using the concept of metastability of brain dynamics. Neurodynamics considers the brain as a dynamic system moving along a complex nonconvergent trajectory influenced by the subject's past and present experiences and anticipated future events. The trajectory may rest intermittently, for a fraction of a second, at a given spatio-temporal pattern. This pattern has meaning to the individual based on previous experience. In this sense one may call this pattern a representation of the meaning of the given sensory influence, in the context of the present internal state and the intended future states. However, the spatio-temporal patterns are not stable. Swift transitions destroy them again and again as the system moves along its trajectory. The transient, metastable spatio-temporal patterns can be considered as the words, and the transitions among patterns as the grammar, of the language of the brain during its never-ending cognitive processing cycles. After an overview of various approaches to intelligent designs, the details of the neurodynamic approach are introduced, including mathematical and computational models and practical implementations.
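The dwell-and-jump character of such a metastable trajectory can be illustrated with a deliberately simple toy model (a sketch of my own, not the K-set formalism reviewed later): an overdamped particle in a periodic potential under noise lingers near one minimum, which stands in for one transient spatio-temporal pattern, and then is carried across a barrier in a swift transition.

```python
import numpy as np

rng = np.random.default_rng(0)

def drift(x):
    # Gradient descent on the periodic potential V(x) = cos(2*pi*x);
    # each minimum (x = 0.5, 1.5, ...) stands in for one metastable pattern.
    return 2 * np.pi * np.sin(2 * np.pi * x)

DT, NOISE = 0.01, 1.2          # illustrative step size and noise intensity
x = np.empty(20000)
x[0] = 0.5
for t in range(1, len(x)):
    # Overdamped Langevin step: deterministic drift plus stochastic kicks.
    x[t] = x[t-1] + drift(x[t-1]) * DT + NOISE * np.sqrt(DT) * rng.standard_normal()

wells = np.round(x - 0.5).astype(int)        # which "pattern" holds at each step
jumps = np.flatnonzero(np.diff(wells) != 0)  # transition times
dwell = np.diff(jumps)                       # dwell lengths between transitions
print("wells visited:", len(np.unique(wells)), "| transitions:", len(jumps))
```

The trajectory spends most of its time near one well and only a vanishing fraction in transit, mimicking the intermittent rest-and-transition structure described above.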

2. Intelligence: The parts and the whole

2.1. Top-down approaches

Top-down approaches postulate intelligence as a high-level property of human brains and, to a lesser extent, other animal brains. Accordingly, the ability to manipulate symbols in a logical and meaningful way is a prominent criterion of intelligence. Top-down or symbolic approaches to cognition proved to be powerful concepts dominating the field from the 60s through the 80s, called Artificial Intelligence (AI). The symbolic approach to intelligence is well illustrated by Newell and Simon's physical-symbol system hypothesis [3–6]. A physical-symbol system is a physical device that contains a set of interpretable and combinable items (symbols) and a set of processes that can operate on the items (copying, conjoining, creating, and destroying them according to instructions). The physical-symbol system hypothesis states that a physical symbol system has the necessary and sufficient means for general intelligent action. This is a very strong theoretical claim about the nature of intelligence. It states that any system that manipulates symbols is sufficient for producing intelligent behavior, and further that all intelligent systems are necessarily implementations of physical-symbol systems. In practical terms, the types of syntactic manipulation of symbols found in formal logic and formal linguistic systems typify this view of cognition and intelligence. In this viewpoint, external events and perceptions are converted into inner symbols to represent the state of the world. This inner symbolic code stores and represents all of the system's long-term knowledge.

Actions take place through the logical manipulation of these symbols to discover solutions for the current problems presented by the environment. Problem solving takes the form of a search through a problem space of symbols, and the search is performed by the logical manipulation of the symbols through stated operations. These solutions are implemented by forming plans and sending commands to the motor system to execute the plans in order to solve the problem. In the symbolic viewpoint, intelligence is typified by and resides at the level of deliberative thought. The Turing test is a prominent manifestation of such an anthropomorphic view of intelligence. In practical terms the Turing test postulates that a system (natural or artificial) is intelligent if it produces behavior indistinguishable from human intelligence based on the judgment of a human examiner [1]. Modern examples of systems that pursue this paradigm include SOAR [7] and ACT-R [8].
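The search through a problem space described above can be sketched in a few generic lines (an illustration of the paradigm only, not the actual machinery of SOAR or ACT-R): states are symbols, operators are syntactic transformations, and problem solving reduces to finding an operator sequence that reaches the goal.

```python
from collections import deque

def solve(start, goal, operators):
    """Breadth-first search through a problem space: each state is a symbol
    (here an integer), each named operator a transformation of the state."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, plan = frontier.popleft()
        if state == goal:
            return plan                              # the discovered plan
        for name, op in operators.items():
            nxt = op(state)
            if nxt not in seen and nxt <= 10 * goal:  # crude bound on the space
                seen.add(nxt)
                frontier.append((nxt, plan + [name]))
    return None

# Toy problem space: reach 11 from 2 using two symbolic operators.
ops = {"add3": lambda n: n + 3, "double": lambda n: n * 2}
print(solve(2, 11, ops))   # → ['add3', 'add3', 'add3']
```

Breadth-first order guarantees that the first plan found is a shortest one; richer architectures differ mainly in how the space is represented and which operator is tried next, not in this basic search scheme.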

The symbolic approach works well as a model of cognition, and provides many impressive examples of intelligent behavior. However, various practical and conceptual challenges to this viewpoint have emerged. On the practical side, symbolic models are notoriously inflexible and have difficulty scaling up from small and constrained environments to real-world problems. The conceptual challenges can often be traced back to the physical-symbol system hypothesis. If symbolic systems are necessary and sufficient for intelligent behavior, why can AI not produce the flexibility of behavior exhibited routinely by biological organisms? On the philosophical side, Dreyfus' situated intelligence approach is a prominent example of the criticism of symbolism. Following Heidegger's and Merleau-Ponty's traditions, Dreyfus asserts that intelligence should be defined in the context of the environment. Therefore, a fixed and disembodied symbol system, such as the ones postulated by many AI approaches, cannot grasp the essence of intelligence [9].

The limitations of symbolic systems in coping with real-world problems have led many to a new viewpoint of intelligence. By assuming that the major scope of intelligence is deliberative thought, the hard problems of intelligence appear to be those related to logic and language. From this viewpoint, the abilities of organisms to orient themselves spatio-temporally, form perceptual categories, and develop basic motor skills seem to be easy problems that can be immediately solved once basic systems exist to take care of the harder problems of deliberative thought. But if the physical-symbol system hypothesis does not hold and deliberative thought is not the basic manifestation of intelligence, then this viewpoint may be exactly backwards. In that case environmental interaction and embodiment may represent the crucial aspects of intelligence. Basic sensory and perceptive abilities, which are so easily dismissed as simple because children learn them seemingly with ease, are seen in the embodiment paradigm as complex and essential components of cognition. Situated cognitive principles find successful implementations in the field of embodied robotics, e.g., the subsumption architecture [10] and evolutionary strategies [11].

2.2. Bottom-up models

The bottom-up view of cognition provides an alternative to the symbolic, or top-down, theory of the mind. Bottom-up models try to generate intelligent behavior by imitating certain aspects of brains, in particular the complex connectivity patterns between various microscopic, mesoscopic, and macroscopic brain components. These are the connectionist models of cognition, which emphasize parallel distributed processing, while symbolic systems tend to process information in a serial fashion. Connectionist approaches employ adaptive and distributed structures, while symbols are static localized structures. Connectionist models offer many attractive features when compared with standard symbolic approaches. They have a level of biological plausibility that allows for easier visualization of how brains might process information. Parallel distributed representations are robust and flexible. They allow for pattern completion and generalization performance. They are capable of adaptive learning. In short, connectionist models are useful tools which are in many ways complementary to symbolic approaches. Clark [12] categorizes modern connectionism into three generations, i.e., first, second, and third generations. We add a fourth generation to reflect the newest developments in the field [6,13]:

• First-generation connectionism: It began with the perceptron and the work of the cyberneticists in the 50s. It involves simple neural structures with limited capabilities. Their limitations drew criticism from representatives of the symbolist AI school in the 60s, which resulted in the abandonment of connectionist principles by the mainstream research establishment for decades. The resistance to connectionist ideas is understandable; it is in fact a repetition of the millennia-old philosophical shift from nominalism to realism [14]. Connectionism was revived in the mid 80s, motivated by the need for powerful practical tools of machine learning in various applications. Extensive theoretical and computational research laid the foundations of the revival of connectionist approaches [8,15–17]. The public breakthrough happened about 20 years ago [18], leading to explosive growth of the field.

• Second-generation connectionism: It gained momentum in the 80s, especially after the establishment of neural network conferences and societies worldwide. It extends first-generation networks to deal effectively with complex dynamics and spatio-temporal events. It involves time-lagged and recurrent neural network architectures and a range of adaptation and learning algorithms [19–21].


• Third-generation connectionism: It is typified by even more complex dynamic and temporal properties [22]. These models use biologically inspired modular architectures, along with various recurrent and hard-coded connections. Because of the increasing emphasis on dynamic and temporal properties, third-generation connectionism has also been called dynamic connectionism. Third-generation connectionist models include DARWIN [23,24] and the Distributed Adaptive Control (DAC) models [25–27].

• Fourth-generation connectionism: The newest development in neural modeling, which goes beyond Clark's original categorization schema [6,28]. It involves nonconvergent/chaotic sequences of spatio-temporal oscillations. It is based on advances in EEG analysis, which gave spatio-temporal amplitude modulation (AM) patterns of unprecedented clarity. The K (Katchalsky) models are prominent examples of this category; they are rooted in Freeman's intuitive ideas from the 70s [29] and have gained prominence since the turn of the century [30–34]. Neuropercolation represents another prominent family of models aiming at describing the spatio-temporal dynamics of neural populations with discontinuous transitions [35].
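As a preview of the K-set building block discussed in Section 4, and with the caveat that the sketch below is my own reduction rather than this review's presentation: a single KO node is commonly modeled in the K-set literature as a linear second-order ODE with rate constants around a = 0.22/ms and b = 0.72/ms; the Euler integrator, time step, and test input here are illustrative assumptions.

```python
import numpy as np

A, B = 0.22, 0.72   # rate constants (1/ms) in the range reported for K sets
DT = 0.01           # Euler step (ms); illustrative choice

def ko_response(u):
    """Integrate the KO unit dynamics  x'' + (A+B) x' + A*B*x = A*B*u(t)
    (equivalently (1/(A*B)) x'' + ((A+B)/(A*B)) x' + x = u)."""
    x = v = 0.0
    out = np.empty(len(u))
    for i, ui in enumerate(u):
        acc = A * B * (ui - x) - (A + B) * v   # x'' solved from the ODE above
        x += v * DT
        v += acc * DT
        out[i] = x
    return out

# A brief square pulse of input: the unit answers with the smooth
# rise-and-decay response characteristic of an open-loop neural population.
u = np.zeros(4000)
u[:100] = 1.0
resp = ko_response(u)
print(f"peak {resp.max():.3f} at step {resp.argmax()}, final {resp[-1]:.5f}")
```

The two negative real poles (-A and -B) make the isolated unit strictly stable; the oscillatory and chaotic behavior of the higher K sets arises only from coupling many such units with excitatory and inhibitory connections.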

2.3. Dynamic Logic: Emergence of symbols from subsymbolic principles

Dynamic Logic (DL) is a cognitively motivated mechanism of pattern recognition and knowledge elicitation from real-life complex data [14]. Dynamic logic follows the flexibility of the learning process in the human mind. It overcomes the current limitations of artificially intelligent systems caused by the combinatorial complexity inherent in the manipulation of large data sets. Dynamic logic starts with an initially vague approximation of the world, then maximizes the similarity between concept models and the perceived environment through a sequence of iterative action and perception steps. The biological interpretation of this mechanism is the knowledge instinct: we always have to adapt our internal models of the external world to fit actual sensory percepts. We do not consciously perceive all the details of the knowledge instinct's operation, which involves routine learning mechanisms.

The dynamic approach to intelligence creates the opportunity of integrating bottom-up and top-down methods of intelligence. It goes beyond bottom-up connectionist approaches by extracting symbolic knowledge from subsymbolic structures manifested in the form of spatio-temporal fluctuations of brain activity. Traditional approaches to knowledge extraction and ontology generation in machine learning have focused on static systems, extracting grammatical or logical rules from dynamical systems or creating ontologies primarily from text or numerical databases [22,36,37]. These approaches are often limited by the nature of the extracted knowledge representations, which are static structures with no dynamics.

The basic premise of DL is that prior to active engagement with the environment, our knowledge is limited and the corresponding model is fuzzy. After continued interaction with the environment, we receive relevant information and our model of the object becomes more and more crisp. At a certain point the learned knowledge reaches consciousness and becomes structured and crisp [38]. DL thus provides a biologically motivated mechanism for the emergence of symbolic knowledge from a subsymbolic background. DL is a promising approach toward the elicitation of knowledge from spatio-temporal neural oscillations observed in EEG experiments [39]. DL can thus provide a tool to study the language of the brain manifested in the form of sequences of metastable spatio-temporal AM oscillation patterns.
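The vague-to-crisp progression of DL can be caricatured as an annealed mixture fit (my own illustrative reduction, not Perlovsky's full formulation): the similarity between concept models and sensed data is maximized iteratively while a vagueness parameter shrinks, so initially fuzzy models sharpen onto the objects actually present.

```python
import numpy as np

rng = np.random.default_rng(1)
# Sensory data containing two "objects" (clusters) the agent must discover.
data = np.concatenate([rng.normal(-2, 0.3, 100), rng.normal(3, 0.3, 100)])

means = np.array([-0.5, 0.5])   # vague initial concept models
sigma = 4.0                     # large sigma = high vagueness
for _ in range(30):
    # Similarity of every datum to every concept model (unnormalized Gaussian).
    sim = np.exp(-(data[None, :] - means[:, None]) ** 2 / (2 * sigma ** 2))
    assoc = sim / sim.sum(axis=0)                            # fuzzy association
    means = (assoc * data).sum(axis=1) / assoc.sum(axis=1)   # refine the models
    sigma = max(0.3, sigma * 0.8)                            # vague -> crisp

print(np.round(np.sort(means), 2))   # models settle near the two objects
```

The annealing schedule on sigma is what distinguishes this vague-to-crisp process from a plain mixture fit: the early, vague iterations see the whole scene at once and avoid premature commitment, while the late, crisp iterations lock each model onto one object.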

2.4. Phenomenology of intentionality

Intentionality is the fundamental unifying principle which generates the basic conditions of intelligence, starting at the level of simple amphibians like the salamander, through small mammals like rats, up to the level of human intelligence. Intelligence is characterized by the flexible and creative pursuit of endogenously defined goals. Humans and animals are not passive receivers of perceptual information; they actively search for sensory input [40]. An example of the salamander brain components supporting intentional action is shown in Fig. 1. The simple limbic system provides projections between the primary sensory areas, the hippocampus, and the pyriform cortex for motor actions [41]. The system is in constant engagement with the environment through sensory inflow and motor outflow.

Intentionality includes the following components [40]:

• An internal model of the world. This model is often crude and imperfect, and it reflects the limited resources and constraints of the subject. This model has been formed according to the individual's internal desires to survive and prosper in the world.


Fig. 1. Illustration of the salamander limbic system manifesting basic intentional behavior [41].

Table 1. Complementary aspects in various disciplines

Discipline        Aspect A            Aspect B
Physics           Energy              Information
Control theory    Feedback            Feedforward
Calculus          Second order PDE    First order PDE
                  Diffusion           Drift
Memory            Associative         Declarative
Hierarchy         Low-level           High-level
General           Primary             Secondary

• Hypothesis formation about expected states the individual may face in the future. The hypotheses are expressed as meaningful goals to be aimed at.

• Formulating a plan of action to achieve the goals and informing the sensory and perceptual apparatus about the expected future input, in a process called re-afference.

• Action into the environment in accordance with the plan to achieve the formulated goals. Manipulating the sensors, adjusting their properties and orientations to receive the sensory data.

• Generalize and categorize the sensory data and combine them into multisensory percepts called Gestalts.
• Verify and test the hypotheses, and update and learn the brain model to correspond better to the perceived data. Form new/updated hypotheses and continue the whole process again.

The cyclic operation of prediction, testing hypotheses by action, sensing, perceiving, and assimilation is called intentionality. The significance of the dynamical approach to intelligence is emphasized by our hypothesis that biological systems, in particular brains, use nonlinear dynamics principles to achieve intentional behavior [42]. Therefore, understanding the dynamics of cognition and its relevance to intentionality is a crucial step towards building more intelligent machines [43]. The dynamical hypothesis on intentionality and intelligence goes beyond the basic notion of goal-oriented behavior, or sophisticated manipulations with symbolic representations to achieve given goals. Intentionality is endogenously rooted in the agent and cannot be implanted into it from outside by any external agency. Intentionality is manifested in and evolves through the dynamical change in the state of the agent upon its interaction with the environment.

2.5. Intentionality: The unifying principle of the complementary nature

Natural processes often exhibit complementary behavior, in which opposing aspects of a phenomenon co-exist and interfere with each other. Table 1 lists such behaviors in various disciplines. The unifying principle of the complementary nature can be illustrated using the intermediate-range, or mesoscopic, operating paradigm. Accordingly, a wide range of complex systems exist in a delicate balance between local fragmentation into individual components and overall global dominance by extreme dilution in a distributed structure.


Fig. 2. Schematic representation of the model of psychological processes in Freud’s Project [46].

Brains and biological intelligence provide prominent examples of the mesoscopic paradigm according to the complementary principle. This is documented using brain monitoring techniques at the level of neurophysiology [31]. At the level of cognitive functions, the complementary principle has been thoroughly studied and illustrated through metastable phase transitions in human cognitive behaviors [44,45]. In the next section we continue with the discussion of neurodynamic manifestations of the complementary principle. It is important to mention that the complementary principle has been a major theme of psychology and brain studies for a century. Fig. 2 shows Pribram's view of psychological processes, following Freud's ideas [46]. Conscious awareness ω is under the joint influence of psychological Ψ and peripheral Φ components. The psychological component is illustrated as a diffusive nuclear process, while the peripheral one includes exogenous and endogenous projections between receptors, effectors, and the cortical systems.

3. Neural correlates of intentionality

3.1. Attributes of cortical neurodynamics

The theory of nonlinear systems and chaos provides a mathematical tool to analyze complex neural processes in brains. Katchalsky [47] described spatio-temporal oscillations and sudden transitions in neural systems. The emergence of a small number of macroscopic activity patterns as the result of the interaction of a huge number of microscopic components has been attributed to Haken's "slaving principle" [48]. Accordingly, the relatively slow macroscopic order parameters "enslave" the faster microscopic elements and produce large-scale spatio-temporal cooperative patterns in a system at the edge of stability. This theory has been developed by Kelso into the concept of metastable brains [44,45]. Metastability is manifested through rapid transitions between coherent states as the result of the complementary effects of overall coordination and disintegration of individual components. The Haken–Kelso–Bunz (HKB) model gives a coherent phenomenology and mathematical formulation of metastability.

Nonlinear neurodynamic methods help to design advanced signal processing techniques to identify neural correlates of cognitive processing using EEG signals. The Hilbert transform was applied to obtain the instantaneous phase, giving an orders-of-magnitude increase in temporal resolution [49]. The obtained data gave spatio-temporal patterns of unprecedented clarity and supported a new theory of spatio-temporal neurodynamics, with several main principles:

• Self-organized criticality: Brains maintain themselves at the edge of global instability by inducing a multitude of small and large adjustments. The time intervals and sizes of the changes have fractal distributions, as manifested in histograms and in 1/f forms of spatial and temporal power spectral densities, with exponents between −2 and −3 corresponding to brown noise and black noise, respectively [50–53].


• First order phase transitions: Each adjustment is a sudden and irreversible change in the state of a neural population that carries the population across a separatrix from one basin of attraction to another [54]. The state changes are overlapping for populations of all sizes, from less than a mm to an entire cerebral hemisphere.

• Chaotic itinerancy: Phase transitions at all levels tend to be recurrent at irregular time intervals, reflecting properties of chaotic itinerant processes [55]. Normally each attractor begins to dissolve as soon as it is accessed, allowing the brain to escape entrapment.

• Anomalous dispersion: The spread of each phase transition across the population is too rapid to be accounted for by serial synaptic transmission. Each population, by virtue of long axons and small-world effects, has a group velocity at which information is transmitted and a much higher phase velocity at which a phase transition spreads. This ensures synchrony over domains ranging from less than 1 mm to over 20 cm, covering the hemisphere.

3.2. Phase cones in cortical EEG measurements

High-density 8 × 8 subdural arrays of EEG electrodes were fixed over the sensory cortices of rabbits [56]. Phase was measured by curve fitting using nonlinear regression with the cosine as the basis function. The Hilbert transform has the potential to provide high temporal resolution of the phase, and it has been applied successfully to EEG analysis in recent years [57–59]. Hilbert analysis introduces the analytic phase and amplitude measures, and phase differences at a given frequency band. Analytic phase differences between successive temporal instances are shown in Fig. 5 as a function of time (y-axis) and channel number (x-axis). Fig. 5 shows that for certain time segments, the phase differences have a low, relatively constant level over the entire array. For example, phase differences are uniformly low between steps 200 and 500. This is an example of a uniformly low phase plateau. During the plateaus, a sustained and synchronized amplitude pattern is maintained, indicating the emergence of category formation by the subject. At some other time intervals, e.g., at time steps around 100 and 600, the phase differences are highly variable across the array. The plateaus of constancy in phase difference were separated by jumps and dips with variable phase values.
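The analytic phase computation described above can be sketched in a few lines. This is an illustration, not the published analysis pipeline: the sampling rate, the synthetic 40 Hz test signal, and the function name `analytic_phase_differences` are all assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def analytic_phase_differences(x, axis=-1):
    """Unwrapped analytic phase and its successive differences.

    x : array of band-passed signals, shape (channels, samples).
    Returns (phase, dphase), where dphase holds the per-sample
    phase increments plotted in raster form in Fig. 5.
    """
    phase = np.unwrap(np.angle(hilbert(x, axis=axis)), axis=axis)
    return phase, np.diff(phase, axis=axis)

# Synthetic stand-in for one channel: a 40 Hz (gamma-band) cosine.
fs = 500.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
x = np.cos(2 * np.pi * 40.0 * t)[np.newaxis, :]

phase, dphase = analytic_phase_differences(x)
# For a pure 40 Hz tone the phase increment per sample is ~2*pi*40/fs,
# i.e., a perfectly flat "phase plateau" over the whole record.
```

During a plateau, real EEG channels would show similarly low and constant increments across the array; jumps and dips correspond to the phase re-initializations discussed below.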

Spatial phase patterns in 2-D were measured by fitting a cone as the basis function to the 8 × 8 phase surfaces. The measurement gave estimates of two fundamental state variables at each point in time: the rate of change in phase with time (the frequency), and the rate of change in phase with distance (the gradient). These two quantities enabled description of intermittent, radially symmetric spatio-temporal patterns of phase, the phase cones. The apex of each cone showed the location and onset time of abrupt re-initialization of phase at a frequency in the beta-gamma range. Recurrence rates of larger phase cones were in the theta range. Beta-gamma phase patterns in the ms-mm to m-s ranges give evidence that the neocortex maintains a scale-free state of self-organized criticality in each hemisphere as the basis for its rapid and repetitive integration of sensory input with experience. An act of perception is described as a widespread, almost instantaneous re-organization of neocortical background activity. It is concluded that phase cones reflect chaotic state transitions leading to new cortical patterns assimilating sensory input. The overlapping cones show that the neocortex maintains a stable, scale-free state of self-organized criticality by homeostatic regulation of neural firing, through which it adapts instantly and globally to rapid environmental changes.
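As a toy illustration of the cone-fitting idea (not the published procedure), a radially symmetric cone can be fit to a synthetic 8 × 8 phase surface by nonlinear least squares. The grid geometry, noise level, and parameter names here are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# 8 x 8 electrode grid coordinates (arbitrary units; assumed geometry).
xs, ys = np.meshgrid(np.arange(8.0), np.arange(8.0))

def cone(params, x, y):
    """Radially symmetric phase cone: offset + gradient * distance from apex."""
    x0, y0, phi0, g = params
    return phi0 + g * np.hypot(x - x0, y - y0)

def fit_cone(phase):
    """Least-squares fit of a cone to an 8 x 8 phase surface; returns
    the apex location (x0, y0), phase offset phi0, and spatial gradient g."""
    resid = lambda p: (cone(p, xs, ys) - phase).ravel()
    return least_squares(resid, [3.5, 3.5, phase.mean(), 0.1]).x

# Synthetic cone: apex at (2.3, 5.1), gradient 0.3 rad/unit, weak noise.
rng = np.random.default_rng(0)
true_params = [2.3, 5.1, 1.0, 0.3]
surface = cone(true_params, xs, ys) + 0.01 * rng.standard_normal(xs.shape)
x0, y0, phi0, g = fit_cone(surface)   # recovers the apex and gradient
```

The recovered apex gives the location and onset of the phase re-initialization, and the gradient g plays the role of the rate of change of phase with distance.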

Multiple phase cones may co-exist in the spatial window covered by the EEG array. EEG experiments have high noise levels, and cone fitting is typically a very difficult pattern recognition problem. Dynamic Logic (DL), when applied to the analysis of multi-channel EEG data, can provide optimum estimation of AM patterns modelled through the emergence of multiple phase cones [60]. Modelling is done in the context of the given level of knowledge available on the cognitive behavior of the subject. Starting with a vague initial model, the description is iteratively improved, leading to more and more precise models of the apex locations and phase gradients. Integrating dynamic logic and EEG analysis has the potential of providing an accurate estimation of phase cones, which can lead to a breakthrough in identifying the cognitive meaning of metastable AM patterns.

3.3. Phase transitions in brains

Studying phase transitions gains increasing popularity in various research fields beyond physics, including population dynamics, the spread of infectious diseases, social interactions, neural systems, and computer networks [61–63]. Neural signal propagation through axonal effects has a velocity on the order of 1–2 m/s and supports synchronization over large areas of cortex [64–66]. This creates small-world effects [67,68], in analogy to the rapid dissemination of information through social contacts. The number of edges linked to a node is also called its degree. In certain
networks, like the www and biological systems, the edge distribution between nodes follows a power law, i.e., it is scale-free [69,70]. A consistent mathematical theory of scale-free networks is given in [71]. The importance of long-distance correlations has been emphasized by numerous brain theorists [72–79]. Phase transitions studied by the neuropercolation model offer a new perspective on scale-free behavior in networks relevant to brain functions [35,80].
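A minimal sketch of how a scale-free degree distribution arises from preferential attachment (the Barabási–Albert mechanism underlying the scale-free networks cited above); the network size and attachment parameter are arbitrary illustrative choices.

```python
import numpy as np

def preferential_attachment(n, m, seed=0):
    """Grow a Barabási–Albert-style network: each new node attaches m edges,
    choosing targets with probability proportional to their current degree."""
    rng = np.random.default_rng(seed)
    degree = np.zeros(n, dtype=int)
    # Start from a small clique of m + 1 nodes.
    for i in range(m + 1):
        for j in range(i):
            degree[i] += 1
            degree[j] += 1
    for new in range(m + 1, n):
        p = degree[:new] / degree[:new].sum()
        targets = rng.choice(new, size=m, replace=False, p=p)
        degree[targets] += 1
        degree[new] = m
    return degree

deg = preferential_attachment(3000, 2)
# A scale-free signature: the maximum degree (hubs) far exceeds the mean,
# unlike a random graph where degrees concentrate near the mean.
```

The heavy tail produced by this "rich get richer" rule is the power-law edge distribution referred to in the text.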

Phase transitions between chaotic states constitute the dynamics that explains how brains perform such remarkable feats as abstraction of the essentials of figures from complex, unknown and unpredictable backgrounds, generalization over examples of recurring objects, reliable assignment to classes that lead to appropriate actions, planning future actions based on past experience, and constant updating by way of the learning process [49].

4. Modelling spatio-temporal oscillations—K models

4.1. K-sets theory and implementation

A hierarchical approach to spatio-temporal neurodynamics is based on K-sets. Low-level K-sets were introduced by Freeman in the 70s, named in honor of Aharon Katchalsky, an early pioneer of neural dynamics [29]. K-sets are multi-scale models, describing increasing complexity of structure and dynamical behaviors. K-sets are mesoscopic models, which represent an intermediate level between microscopic neurons and macroscopic brain structures. K-sets are topological specifications of the hierarchy of connectivity in neuron populations. The dynamics of K-sets are modelled using a system of nonlinear ordinary differential equations with distributed parameters. K-dynamics expressed in ODEs predict the oscillatory waveforms that are generated by neural populations. K-sets describe the spatial patterns of phase and amplitude of the oscillations generated by components at each level. They model observable fields of neural activity comprising EEG, LFP, and MEG.

K-sets consist of a hierarchy of components of increasing complexity, including K0, KI, KII, KIII, KIV, and KV systems. They model the hierarchy of the brain starting from the mm scale up to the complete hemisphere. Low-level K-sets, up to and including KIII, have been studied intensively since the mid 90s; they model sensory cortices. They have been applied to solve various classification and pattern recognition problems [81]. KIII sets are complex dynamic systems modelling classification in various cortical areas, having typically hundreds of degrees of freedom. In early applications, KIII sets exhibited extreme sensitivity to model parameters, which prevented their broad use in practice [82]. In the past decade, systematic analysis has identified regions of robust performance [33], and stability properties of K-sets have been determined [83,84]. Today, K-sets are used in a wide range of applications, including detection of chemicals [85], classification [32,86], time series prediction [87], and robot navigation [88,89]. Recent developments include the KIV sets [77], which model intentional behavior.

4.2. Hierarchy of K-sets

4.2.1. K0 set
The K0 set represents a non-interacting collection of neurons. It models dendritic integration in average neurons and a sigmoid static nonlinearity for axon transmission. K0 sets are described by a state-dependent, linear, 2nd order ordinary differential equation (ODE). The K0 set is governed by a point attractor with zero output and stays at equilibrium except when perturbed. K0 models a neuron population of about 10^4 neurons.
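A minimal numerical sketch of a K0 unit. The rate constants a = 0.220/ms and b = 0.720/ms, the asymmetric sigmoid form, and the pulse input are assumed illustrative values, not the original implementation.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.220, 0.720   # rate constants in 1/ms (assumed values)

def k0(t, y, u):
    """K0 dendritic dynamics: the 2nd-order linear ODE
    x'' + (a + b) x' + a*b*x = a*b*u(t)."""
    x, v = y
    return [v, a * b * (u(t) - x) - (a + b) * v]

def sigmoid(x, qm=5.0):
    """Static axonal nonlinearity (an asymmetric sigmoid form; Qm assumed)."""
    return qm * (1.0 - np.exp(-(np.exp(x) - 1.0) / qm))

# Brief input pulse, then silence: the K0 state rises and then relaxes
# back to the zero point attractor, its only equilibrium.
pulse = lambda t: 1.0 if t < 5.0 else 0.0
sol = solve_ivp(k0, (0.0, 60.0), [0.0, 0.0], args=(pulse,), max_step=0.5)
output = sigmoid(sol.y[0])   # axonal output of the population
```

The two real, negative characteristic roots (-a and -b) make the impulse response an overdamped return to rest, which is the point-attractor behavior described above.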

4.2.2. KI sets
A KI set represents a collection of K0 sets, which can be either excitatory or inhibitory units. At the KI level it is important to have the same type of units in the system, so there is no negative feedback. Accordingly, we can talk about excitatory or inhibitory KI sets, i.e., KI_E and KI_I, respectively. As a result, the dynamics of a KI set is described by a simple fixed point convergence. If a KI set has sufficient functional connection density, then it is able to maintain a non-zero state of background activity by mutual excitation (or inhibition). KI typically operates far from thermodynamic equilibrium. The stability of the KI_E set under impulse perturbation is demonstrated by the periglomerular cells in the olfactory bulb [31]. Its critical contribution is the sustained level of excitatory output. Neural interaction by stable mutual excitation (or mutual inhibition) is fundamental to understanding brain dynamics.


Fig. 3. The K-set hierarchy showing the progression of neural population models from cell level to hemisphere-wide simulation, K0 through KIV. The progression of models of increasing complexity follows the organizational levels of brains. K0 is governed by a point attractor with zero output and stays at equilibrium except when perturbed. KI corresponds to a cortical column that has sufficient functional connection density to maintain a state of non-zero background activity. KII represents a collection of excitatory and inhibitory populations, which can exhibit limit cycle periodic oscillations at a narrow band frequency in the gamma range. KIII is formed by the interaction of several KII sets through long axonal pathways with distributed delays. It simulates the known dynamics of sensory areas that generate broad-band chaotic oscillations. KIV is formed by the interaction of 3 KIII sets. It models the hemisphere with multiple sensory areas and the genesis of simple forms of intentional behaviors with intermittent synchronization–desynchronization.

4.2.3. KII sets
A KII set represents a collection of excitatory and inhibitory cells, KI_E and KI_I. Under sustained excitation from a KI_E set but without the equivalent of sensory input, the KII set is governed by limit cycle dynamics. With simulated sensory input comprising an order parameter, the KII set undergoes a state transition to oscillation at a narrow band frequency in the gamma range.
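The transition to narrow-band oscillation can be illustrated with a minimal two-population sketch: one excitatory and one inhibitory 2nd-order unit in a negative-feedback loop. The rate constants, the coupling gains, and the tanh nonlinearity (substituted here for Freeman's asymmetric sigmoid) are illustrative assumptions; with these values the linearized loop is unstable and the trajectory settles onto a bounded limit cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b = 0.220, 0.720      # rate constants in 1/ms (assumed, as for K0)
w_ei, w_ie = 3.0, 3.0    # E->I and I->E coupling gains (illustrative)

def kii(t, y):
    """Minimal KII sketch: one excitatory (xe) and one inhibitory (xi)
    2nd-order unit coupled in a negative-feedback loop; tanh replaces
    the asymmetric sigmoid for simplicity."""
    xe, ve, xi, vi = y
    ue = -w_ie * np.tanh(xi)     # inhibition of E by I
    ui = w_ei * np.tanh(xe)      # excitation of I by E
    return [ve, a * b * (ue - xe) - (a + b) * ve,
            vi, a * b * (ui - xi) - (a + b) * vi]

# A tiny perturbation grows into a sustained, bounded oscillation; the
# linear estimate for these constants is roughly 0.46 rad/ms (~70 Hz).
sol = solve_ivp(kii, (0.0, 500.0), [0.01, 0.0, 0.0, 0.0], max_step=0.2)
late = sol.y[0][sol.t > 250.0]   # discard the growth transient
```

The negative feedback between the two populations supplies the phase lag needed for oscillation, which is the qualitative mechanism behind the gamma-band limit cycle of the KII set.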

4.2.4. KIII sets
The KIII model consists of several interconnected KII sets, and it models a given sensory system in brains, e.g., the olfactory, visual, auditory, or somatosensory modality. KIII can be used as an associative memory which encodes input data into nonconvergent spatio-temporal patterns [33,81]. KIII generates aperiodic, chaotic oscillations. The KIII chaotic memories have several advantages as compared to convergent recurrent networks: (1) they produce robust memories based on relatively few learning examples, even in noisy environments; (2) the encoding capacity of a network with a given number of nodes is exponentially larger than that of their convergent counterparts; (3) they recall the stored data very quickly, just as humans and animals can recognize a learnt pattern within a fraction of a second.

4.2.5. KIV sets
A KIV set is formed by the interaction of 3 KIII sets. It is used to model the interactions of the primordial vertebrate forebrain in the genesis of simple forms of intentional behavior [77,90]. KIV models provide a biologically feasible platform to study cognitive behavior associated with learning and the action-perception cycle, and as such will be the focus of this review. Fig. 3 illustrates the hierarchy of K-sets.

5. Construction of intentional dynamic systems at KIV level

5.1. Design of the intentional KIV system

The structure and operation of the intentional KIV model is described in [77,90]. KIV has 4 major components. Three of the components are KIII sets describing the cortex, the hippocampal formation, and the midline forebrain. They are connected through the coordinating activity of the amygdala to the brain stem (BS) and the rest of the limbic system. Fig. 4 illustrates the connections between components of KIV. The connections are shown as bidirectional,


Fig. 4. KIV model of the brain, which consists of three KIII sets (cortex, hippocampal formation, and midline forebrain), the amygdala-striatum, and the brain stem (BS). The amygdala is a KII set, while the brain stem and drives are KI sets. The sparse long connections that comprise KIV are shown as bidirectional, but they are not reciprocal. The convergence location and output are provided by the amygdala. In this simplified model the amygdala provides the goal-oriented direction for the motor system, which is superimposed on local tactile and other protective reflexes [90].

Table 2. Characterization of the hierarchy of K-sets

Type   Structure                                                    Inherent dynamics                                                            Examples in brains*
K0     Single unit                                                  Nonlinear I/O function                                                       All higher level K-sets are made of K0s
KI     Populations of excitatory or inhibitory units                Fixed point convergence to zero or non-zero value                            PG, DG, BG, BS
KII    Interacting populations of excitatory and inhibitory units   Limit cycle oscillations; frequency in gamma band                            OB, AON, PC, CA1, CA2, CA3, HT, BG, BS, Amygdala
KIII   Several interacting KII and KI sets                          Chaotic oscillations                                                         Cortex, Hippocampal Formation, Midline Forebrain
KIV    Interacting KIII sets                                        Spatio-temporal dynamics with global phase transitions (itinerant chaos)    Hemisphere-wide cooperation of Cortex, HF, MF areas coordinated by the Amygdala

* Notations: PG—periglomerular; OB—olfactory bulb; AON—anterior olfactory nucleus; PC—prepyriform cortex; HF—hippocampal formation; DG—dentate gyrus; CA1, CA2, CA3—cornu ammonis sections of the hippocampus; MF—midline forebrain; BG—basal ganglia; HT—hypothalamus; DB—diagonal band; SP—septum.

but they are not reciprocal. The output of a node in a KII set is directed to nodes of another KII set, but it does not receive output from those same nodes, rather from other nodes in that KII set. An overview of the function of the various K units of the KIV model is given in Table 2.

In Fig. 4 three types of sensory signals can be distinguished. These sensory signals provide stimulus to a given part of the brain, namely the sensory cortices, the midline forebrain (MF), and the hippocampal formation (HF), respectively. The corresponding types of sensory signals are listed below:

• Exteroceptors;
• Interoceptors (including proprioception);
• Orientation signals.


Fig. 5. The raster plot shows the successive differences of the unwrapped analytic phase, changing with channel serial number (right abscissa) and time (left abscissa). The 3000 time steps correspond to a 6 s duration of the experiment [110].

The environmental sensory information enters through broad-band exteroceptors (visual, audio, somatosensory, etc.) and is processed in the OB and the PC in the form of a spatially distributed sequence of AM patterns. These AM patterns are superimposed on the spatial maps in CA3. In the present model, separate receptor arrays are used for the hippocampus, which simplifies internal connectivity and receptor specialization. The cortical KIII system initiates the function of pattern recognition by the agency of sensory input-induced destabilization of high-dimensional dynamics. The input is gated by the septal generator (like a sniff or saccade). This actualizes an attractor landscape formed by previous experience in the OB/PC, which in our model is the common sensorium. The hippocampal KIII system, on the other hand, uses the classification embodied in the outputs of the OB and PC as its content-laden input, to which the dentate gyrus (DG) contributes the temporal and spatial location of environmental events. These events contribute to the previously learned relationships between the expectations and the experience of the system in search of its assigned goal.

The Midline Forebrain formation is a KIII set, which receives interoceptor signals through the basal ganglia and processes them in the hypothalamus and the septum. MF provides the value system of the KIV, using information on the internal goals and conditions in the animal. It provides the "Why?" stream to the amygdala, which combines this with the "What?" and "Where?" information coming from the cortex and the hippocampus to make a decision about the next step/action to be taken. MF is also a KIII unit, which contributes to the formation of the global KIV coherent state. The coherent KIV state evolves through a sequence of metastable AM patterns.

5.2. Learning in KIV models

In order to use the arrays of K sets as novel computational and memory devices, learning has been implemented at the KIII level. The operation of the dynamic memory can be described as follows. In the absence of stimuli the system is in a high dimensional state of spatially coherent basal activity. The basal state is described by a nonconvergent (chaotic) global attractor. In response to an external stimulus, the system is kicked out of the basal state and constrained to a local memory basin of an attractor wing. This wing is of lower dimension than the basal attractor, giving spatially patterned amplitude modulation (AM) of the coherent carrier wave. The system resides in this localized wing for the duration of the stimulus and returns to the basal state after the stimulus ends. The system memory is defined as the collection of basins and attractor wings, and a recall is the induction by a state transition of a spatio-temporal gamma oscillation with a spatial AM pattern. Three learning processes are defined [33]:

• Hebbian reinforcement learning of stimulus patterns; this is fast and irreversible;
• Habituation of background activity; slow, cumulative, and reversible;
• Normalization of nodal activities to maintain homeostatic balance; very long-range optimization outside real time.
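The first two learning processes can be sketched as simple weight updates. The learning rate, decay factor, network size, and function names are illustrative assumptions; normalization, the third process, is omitted here for brevity.

```python
import numpy as np

def hebbian_update(W, x, eta=0.05, reinforce=True):
    """Hebbian reinforcement learning: strengthen connections between
    co-active nodes, but only when a reinforcement signal is present."""
    if not reinforce:
        return W
    co = np.outer(x, x)           # pairwise co-activation
    np.fill_diagonal(co, 0.0)     # no self-connections
    return W + eta * co

def habituate(W, gamma=0.999):
    """Habituation: slow, cumulative, reversible decay of weights,
    screening out continuous, uninformative background input."""
    return gamma * W

W = np.zeros((4, 4))
pattern = np.array([1.0, 1.0, 0.0, 0.0])   # stimulus co-activating nodes 0, 1

for _ in range(20):
    W = hebbian_update(W, pattern, reinforce=True)
    W = habituate(W)
# The weight between the co-active nodes grows; all others stay at zero.
```

The interplay of fast, irreversible reinforcement and slow, reversible decay mirrors the balance between the learning processes described in the text.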


Fig. 6. Illustration of Hebbian correlation learning in the KIII set. Due to the nonconvergent nature of typical K-set dynamics, the average error σi is calculated for each channel over a time window of T ≈ 0.1 s [87].

Fig. 7. Schematic view of KIV-based decision making in the network of N agents. The notations in the case of the ith agent are: Bi—preprocessor, Ci—classifier, Di—comparator, Ei—controller. EC denotes the entorhinal cortex model, which performs the overall decision-making. Bi is a KI set; Ci, Di, and Ei are KII sets, and together they comprise the ith KIII set. The EC, in cooperation with the D1, D2, . . . , DN sets, represents the high-level KIV behavior with intermittent phase transitions.

The various learning processes exist in a subtle balance, and their relative importance changes at various stages of the memory process. In the framework of this study, stable operation of the network is studied without using normalization. Hebbian learning is paired with reinforcement, reward or punishment; i.e., learning takes place only if the reinforcement signal is present. This is episodic, not continuous, long-term and irreversible. Habituation is an automatic process in every primary sensory area that serves to screen out stimuli that are irrelevant, confusing, ambiguous or otherwise unwanted. It constitutes an adaptive filter to reduce the impact of environmental noise that is continuous and uninformative. It is a central process that does not occur at the level of sensory receptors. It is modelled by incremental weight decay that decreases the sensitivity of the KIII system to stimuli from the environment that are not designated as desired or significant by accompanying reinforcement. Learning contributes to the formation of convoluted attractor basins, which facilitate phase transitions in the dynamical model at the edge of chaos [91]. Learning takes place in the CA1 and CC units of the hippocampus and cortex, respectively. KII sets consist of interacting excitatory and inhibitory layers, and the lateral weights between the nodes in the excitatory layers are adapted by the learning effects; see Fig. 6 for illustration.

Please cite this article in press as: R. Kozma, Intentional systems: Review of neurodynamics, modeling, and robotics implementation, Physics ofLife Reviews (2007), doi:10.1016/j.plrev.2007.10.002


ARTICLE IN PRESS JID:PLREV AID:44 /REV [m3SC+; v 1.78; Prn:22/10/2007; 14:41] P.14 (1-21)

14 R. Kozma / Physics of Life Reviews ••• (••••) •••–•••

5.3. Generalized KIV for multi-agent cooperation

The foregoing discussions illustrate how three KIII systems interact in KIV to model major components of the limbic system. Fig. 7 shows a generalized KIV architecture for cooperation in a network of agents. In general, there are N agents, each of which has its dedicated inputs and a corresponding low-level task. The system has the following components:

• Preprocessor: input compression and normalization; units B1, B2, . . . , BN.
• Classifier: identification/recognition of data; C1, C2, . . . , CN.
• Comparator: low-level decision making; D1, D2, . . . , DN.
• Controller: achieves dynamical (chaotic) balance; E1, E2, . . . , EN.
• Extractor of common modes: EC—entorhinal cortex; detects covariant oscillations in the separate KIII sets.

The preprocessor, classifier, comparator, and controller modules perform tasks belonging to the individual agents. The extractor module (EC) is privileged with connections to all KIII agents through connections to the comparator units. EC has a crucial KIV-level function: it extracts the coherent components of the individual KIII units. This coherent component is typically very small, <1%. However, this small covariant fraction of the signals indicates high-level interaction in the network. EC makes the decision based on the covariant component, as it is manifested through intermittent phase transitions.
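How an EC-level extractor might quantify the small covariant fraction shared by the KIII units can be illustrated with a principal-component estimate. This is a hedged stand-in for the actual KIV mechanism, which operates through intermittent phase transitions rather than batch linear algebra.

```python
import numpy as np

def covariant_fraction(signals):
    """Estimate the coherent (covariant) component shared by N KIII units.

    signals -- (N, T) array; one oscillatory time series per agent's
               comparator (D) unit.
    Returns the fraction of total variance carried by the first
    principal component -- a rough proxy for the common mode that the
    EC-level extractor would detect.
    """
    # Remove each channel's mean before measuring covariation.
    X = signals - signals.mean(axis=1, keepdims=True)
    # Squared singular values of the centered data give the variance
    # carried by each principal component.
    s = np.linalg.svd(X, compute_uv=False)
    var = s ** 2
    return var[0] / var.sum()
```

For largely independent oscillators with a weak shared drive, this fraction is small, consistent with the <1% covariant component cited in the text; for perfectly synchronized units it approaches 1.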

The defining feature of the distributed KIV intentional system is the high-level operation of constructing an image of a future state of itself in relation to its environment and its goal. KIV generates nested frames of actions to be taken stepwise, and serial predictions of the frames of multi-sensory input at each step. Its advance at each serial step is conditional on conformance of predicted and actual frames within the limits of abstraction and generalization that are determined by the basins of attraction in the perceptual attractor landscapes. The landscapes embody past experiences from interacting with the environment, which are stored in the connectivity matrices of the KII sets. These matrices store the perceptual consequences of the intentional actions taken by KIV, which constitute all that the agents can know about the environment. At this level, KIV is a model of the brain of animals acting without language. This has the advantage of exposing the bedrock of cognition that evolved prior to any overlay of capacities for symbol manipulation. Further developments in KV sets are directed toward higher cognition, such as language [92].

6. Intentional dynamics for control and navigation

6.1. Basic principles of intentional control

Biologically-inspired neural architectures are widely used for the control of mobile robots and have demonstrated robust navigation capabilities in challenging real-life scenarios. These approaches include subsumption methods [93], BISMARC—Biologically Inspired System for Map-based Autonomous Rover Control [94,95], ethology-inspired hierarchical organizations of behavior [96], and behavior-based control algorithms using fuzzy logic [97]. Brain-like architectures are increasingly popular in intelligent control, including learning cognitive maps in the hippocampus [98,99], the role of place cells in navigation [100], visual mapping and the hippocampus [101–103], and learning in the cortico-hippocampal system [104]. Due to the high level of detail in reproducing the parts of the brain involved in navigation, the behavioral analysis in these works is limited to simple navigational tasks.

The approach using K-sets as dynamical memories is motivated by experimental findings related to the activity patterns of populations of neurons. Unlike most other neural networks, K models are based on experimental findings on the mesoscopic dynamics of brain activity. The model of navigation includes the activity of the hippocampus and the cortex. It is assumed that the hippocampus processes information related to global landmarks, while the cortex deals with information received from the vicinity of the agent. An important difference between K models and other biologically-inspired control architectures is the way information is processed. K models encode information in AM patterns of oscillations generated by the Hebbian synapses in layers of the KIII subsystems, while other models use firing rates and Hebbian synapses [99,105]. Despite these differences, the performance of the K model matches or exceeds that of the standard approaches [88,104].
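The idea of encoding information in AM patterns rather than firing rates can be made concrete by reducing oscillatory channel traces to windowed RMS amplitudes; each window then yields one amplitude vector that serves as the feature on which classification operates. The windowing scheme here is an illustrative assumption, not the specific procedure of the cited work.

```python
import numpy as np

def am_pattern(traces, win):
    """Reduce oscillatory channel traces to amplitude-modulation (AM)
    patterns: the RMS amplitude of each channel over short windows.

    traces -- (n_channels, n_samples) simulated KIII layer output
    win    -- number of samples per analysis window
    Returns (n_channels, n_windows); each column is one AM pattern.
    """
    n_ch, n_s = traces.shape
    n_win = n_s // win
    # Fold the time axis into (window index, sample within window).
    x = traces[:, :n_win * win].reshape(n_ch, n_win, win)
    return np.sqrt((x ** 2).mean(axis=2))
```

The carrier oscillation is discarded; only the spatial pattern of amplitudes across channels survives, which is the sense in which the computation is pattern-based rather than rate-based.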

Fig. 8. Schematic view of the KIV model of the cortico-hippocampal formation with subcortical projections for motor actions. KIV consists of the hippocampal formation, two sensory cortices, and a link to the motor system, coordinated by the entorhinal cortex under the influence of the amygdala and striatum. Notations: PG—periglomerular; OB—olfactory bulb; AON—anterior olfactory nucleus; PC—prepyriform cortex; HF—hippocampal formation; DG—dentate gyrus; CA1, CA2, CA3—sections of the hippocampus; EC—entorhinal cortex. Dashed circles demarcate basic KIII units.

6.2. KIV in a virtual environment

Experiments have been designed and implemented in computer simulation to demonstrate the potential of KIV operating on intentional dynamic principles. In the experiments, an autonomous agent moves in a 2-dimensional environment. During its movement, the agent continuously receives two types of sensory data: (1) distance to obstacles; (2) orientation toward a preset goal location. KIV makes decisions about its actions toward the goal. The sensory-control mechanism of the system is a simplified KIV set consisting of two KIII sets [104,106,107]. The simplified KIV set is illustrated in Fig. 8; it has two KIIIs for the sensory cortices and one KIII for the hippocampus. We use the hippocampus for orientation with respect to landmarks and one sensory cortex for near-field obstacle detection. For uniformity, we use the terminology of olfaction in the cortices for every sensory modality, following Freeman's original motivation of KIII as the model of olfaction.
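The two sensory streams fed to the simplified KIV set can be sketched as follows, assuming point obstacles for simplicity; a full simulation would use the polygonal rocks of Fig. 9.

```python
import math

def sense(agent_xy, heading, obstacles, goal_xy):
    """Compute the agent's two sensory readings:
    (1) distance to the nearest obstacle,
    (2) bearing toward the goal relative to the current heading.

    obstacles -- list of (x, y) obstacle points (illustrative
                 simplification of polygonal obstacles)
    """
    ax, ay = agent_xy
    d_obs = min(math.hypot(ox - ax, oy - ay) for ox, oy in obstacles)
    gx, gy = goal_xy
    bearing = math.atan2(gy - ay, gx - ax) - heading
    # Wrap the relative bearing into (-pi, pi].
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return d_obs, bearing
```

The obstacle distance drives the near-field sensory cortex, while the goal bearing is the landmark-related information assigned to the hippocampal model.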

Fig. 9 shows an example of a suboptimal path traversed by the agent from the start (S) to the goal (G). The optimality of the trajectories generated during navigation is evaluated by calculating the ratio of good and bad moves. A good move is defined as a step towards the goal, while a bad move is a step away from the goal. Detailed results are reported in [104] for different sizes of the sensory horizon. As the size of the sensory horizon increases, the ratio of good and bad moves also increases, and the simulated animal learns to reach the goal without bumping into the obstacles.

These results indicate that the KIV model is indeed a suitable level of abstraction to grasp essential properties of the limbic system and to use the obtained insights for goal-oriented navigation. Large-scale synchronization, interrupted intermittently by short periods of desynchronization, is an emergent property of the limbic system as a unified organ of intentionality. The KIV model is capable of demonstrating this intentional dynamics; thus it represents a practical tool to implement basic intentional behavior in animats and adaptive robots.

6.3. Implementing KIV for SRR2K rover navigation

The operation of the KIV system has been demonstrated in real-time processing of sensory inputs and onboard dynamic behavior tasking using the SRR2K (Sample Return Rover) platform at the Planetary Robotics indoor facility of NASA/JPL. The experiments illustrate robust obstacle avoidance combined with goal-oriented navigation by the SRR2K robot [13,108]. Here a brief overview of the results is given. SRR2K is a four-wheeled mobile robot with independently steered wheels and independently controlled shoulder joints of a robot arm; see Fig. 10. The primary sensing modalities of SRR2K are: (1) a stereo camera pair with 5 cm separation, 15 cm height, and a 130-degree field of view (Hazcam); (2) a goal camera mounted on a manipulator arm with a 20-degree field of view (Viewcam); (3) an internal

Fig. 9. Navigation in a simulated Mars-like environment. Polygons represent obstacles (rocks). The start and goal locations are indicated by S (start) and G (goal). Environment based on [96]. The size of the grid is 50 by 50. Circles represent the three landmarks used by the localization system [104].

Fig. 10. Sample Return Rover (SRR2K) situated in the Planetary Robotics indoor facility imitating natural terrain in planetary environments. SRR2K has 4 independently controllable wheels. The figure also shows a robot arm, which is not used in the present experiments. The sensory system consists of visual cameras, infrared sensors, accelerometers, and global orientation sensors [108].

gyroscope registering along the pitch, roll, and yaw coordinates; (4) a Crossbow accelerometer along the x, y, and z coordinates; (5) a Sun sensor for global positioning information [95].

To simplify measurement conditions and data acquisition, the top-mounted goal camera, the robot arm, and the global positioning system are not used in the present experiments. We have the following sensory data available as SRR2K traverses the terrain: (1) a visual data vector consisting of 10 wavelet coefficients determined from the recorded 480 × 640 pixel image of a Hazcam; (2) recordings of the mean and variance of the gyroscope and accelerometer readings along the 3 spatial coordinates; (3) the rover heading, which is a single angle value determined by comparing the orientation of the rover with the direction of the goal.
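Assembling one KIV input frame from these three data sources can be sketched as follows. The wavelet coefficients are treated as computed upstream, and the concatenation order of the features is our illustrative assumption.

```python
import numpy as np

def sensory_vector(img_coeffs, gyro, accel, rover_dir, goal_dir):
    """Assemble one KIV input frame from SRR2K's available data.

    img_coeffs -- 10 wavelet coefficients of the Hazcam image
                  (computed upstream; treated here as given)
    gyro, accel -- (3, n) sample blocks along pitch/roll/yaw and x/y/z
    rover_dir, goal_dir -- headings in radians
    Returns a flat feature vector: 10 + 3*2 + 3*2 + 1 = 23 values.
    """
    feats = [np.asarray(img_coeffs, dtype=float)]
    for block in (gyro, accel):
        block = np.asarray(block, dtype=float)
        feats.append(block.mean(axis=1))  # mean per coordinate
        feats.append(block.var(axis=1))   # variance per coordinate
    # Rover heading: single angle relative to the goal direction.
    feats.append(np.array([goal_dir - rover_dir]))
    return np.concatenate(feats)
```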

Fig. 11. Finite State Machine operating on SRR2K's on-board computer. In the present experiments KIV receives sensory data from SRR2K and communicates its control commands ANGLE_TO_TURN and DISTANCE_TO_GO to the FSM of SRR2K for execution.

We apply the KIV set to robot navigation. KIV is the brain of an intentional robot that acts into its environment by exploration, and it learns from the sensory consequences of its actions. By cumulative learning, KIV creates an internal model of its environment and uses the model to guide its actions while avoiding hazards and reaching the goal specified by the human controller. The standard control algorithm of SRR2K uses a Finite State Machine (FSM) to decide its action at any moment of time. The KIV-based control uses the simplified KIV architecture given in Fig. 8, which must interface with the existing FSM. The rover is in a given state at any time instant, and it transits to a next state based on the present state and the available input information. The control algorithm CRUISE_XY_Z accepts 2 controlled variables, ANGLE_TO_TURN and DISTANCE_TO_GO. These variables are supplied by KIV through a control file periodically downloaded to SRR2K. A third variable, DESIRED_VELOCITY, can be controlled as well. However, in the limited task of the present project, the velocity was given a value of 10 cm/s, which was not changed, for simplicity. The applied FSM is depicted in Fig. 11, where the communication channels linking to KIV are indicated as well.
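The KIV-to-FSM handoff can be sketched as a toy state machine that accepts the two controlled variables named in the text. The state names and transition logic here are illustrative, not the actual on-board FSM of SRR2K.

```python
class RoverFSM:
    """Toy finite state machine echoing the control flow of Fig. 11.

    States and transitions are hypothetical; only the two command
    variables and the fixed velocity come from the text.
    """
    def __init__(self, velocity=0.10):
        self.state = "IDLE"
        self.velocity = velocity  # DESIRED_VELOCITY, fixed at 10 cm/s
        self.command = None

    def receive(self, angle_to_turn, distance_to_go):
        """Accept KIV's controlled variables (from the periodically
        downloaded control file) and start a traverse step."""
        self.state = "CRUISE"
        self.command = {"ANGLE_TO_TURN": angle_to_turn,
                        "DISTANCE_TO_GO": distance_to_go}
        return self.command

    def step_done(self):
        # Return to idle, ready for the next control file from KIV.
        self.state = "IDLE"
```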

In order to develop an efficient control paradigm, SRR2K should be able to avoid obstacles instead of trying to climb over them, even if that means not taking the direct path toward the goal. If the robot rides over bumps, it experiences increased oscillations and high RMS values of the gyroscope and accelerometer readings. It needs to learn to avoid this situation. The RMS oscillation is used as a negative reinforcement learning signal. On the other hand, moving toward the goal location is desirable and produces a positive reinforcement learning signal. The essence of the task is to anticipate the undesirable bumpy oscillation a few steps ahead based on the visual signal. SRR2K needs to develop this association on its own. A usual AI approach would provide some rule base using the visual signal, for example: turn left if you see a bump on the right. This may be a successful strategy on relatively simple tasks, but clearly there is no way to develop a rule base which can give instructions in a general traversal with unexpected environmental challenges, like the ones faced in interplanetary missions.
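Combining the two reinforcement signals described above can be sketched as follows; the threshold `rms_limit` separating smooth from bumpy terrain is a hypothetical parameter.

```python
import numpy as np

def reinforcement(imu_window, d_goal_prev, d_goal_now, rms_limit=0.5):
    """Combine the two learning signals described in the text.

    imu_window  -- recent gyroscope/accelerometer samples; high RMS
                   means the rover is riding over bumps
    d_goal_*    -- distance to goal before and after the move
    rms_limit   -- assumed threshold separating smooth from bumpy
    Returns +1 (reward), -1 (punishment), or 0 (no learning).
    """
    rms = float(np.sqrt(np.mean(np.square(imu_window))))
    if rms > rms_limit:
        return -1   # punish bumpy terrain
    if d_goal_now < d_goal_prev:
        return +1   # reward progress toward the goal
    return 0        # neutral step: the Hebbian update stays gated off
```

Since Hebbian learning in the K model is gated by this signal, associations between the visual input and the later IMU outcome are only formed on rewarded or punished steps.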

SRR2K has demonstrated its ability to learn the association between different sensory modalities. It successfully traverses a terrain with obstacles, without bumping into them, as it reaches the goal position [108]. Detailed evaluation of the system's performance is in progress. KIV-based control has the option to take not only instantaneous sensory values, but also data observed several steps earlier. This involves a short-term memory (STM) with a certain memory depth. In a given task, the memory could be 3–4 steps deep, or more. One can optimize the system performance based on the memory depth [109]. Such optimization has not been done in the SRR2K experiments; it is planned to be completed in the future.
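A short-term memory of fixed depth can be sketched as a bounded buffer of recent sensory frames; the concatenated-state format is an illustrative choice, not the published interface.

```python
from collections import deque

class ShortTermMemory:
    """Fixed-depth buffer of recent sensory frames, so the controller
    can condition on data observed several steps earlier (depth 3-4
    in the text)."""
    def __init__(self, depth=4):
        # deque with maxlen silently drops the oldest frame when full.
        self.frames = deque(maxlen=depth)

    def push(self, frame):
        self.frames.append(frame)

    def state(self):
        """Concatenate the stored frames, oldest first."""
        return [x for frame in self.frames for x in frame]
```

Optimizing the `depth` parameter is exactly the memory-depth tuning mentioned above as future work.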

Fig. 12. Illustration of the operation of the intentional action-perception cycle using populations of artificial agents equipped with KIV brains. After the initialization, an agent population evolves in a virtual environment. The best ones mature and enter actual robot embodiment. During their action into the environment, the robots generate EEG signals just as animals do. The EEG signals are searched for manifestations of cognitive activity, as observed in biological brains. The observed patterns, concepts, and behavioral schemas are used for initiating a new cycle with a new generation of more intelligent agents.

7. Perspectives on intentional dynamics

The past 50 years have witnessed the proliferation of artificial intelligence techniques. Still, the field is not even close to implementing intelligence at the level of small mammals or reptiles, not to mention human intelligence. A significant part of the problem is due to the popular view which considers symbolic and subsymbolic aspects as competing, even mutually exclusive, sides of intelligence. Biological intelligence, in our view, is the manifestation of a complex system which feeds on the interference of these complementary aspects through a mechanism identified as spatio-temporal intentional neurodynamics.

KIV implements the intentional neurodynamic principles in a computational model, and it demonstrates successful learning and testing performance in virtual and hardware domains. KIV embodies a biologically-motivated, dynamic, pattern-based computation paradigm. This represents a drastic departure from today's digital computer principles, which are based on computation with numbers represented by strings of digits. Brains do not work with digital numbers; rather, they operate using a sequence of amplitude-modulated (AM) patterns of activity, as observed in EEG, MEG, and fMRI measurements. Using neurodynamic principles, one can replace traditional symbol-based computation with pattern-based processing. In our approach, categories are not defined 'a priori'. Rather, they emerge through the self-organized activity of interacting neural populations in space and time. They emerge and dissolve at a rate of several cycles per second, as dictated by the underlying theta rhythm observed in brains. The feasibility of the proposed approach has been demonstrated in various practically relevant problems of classification, pattern recognition, decision making, and control.

An example of a comprehensive implementation of emergent symbolic representations in the style of brains is illustrated in Fig. 12. This system envisions the implementation of the intentional action-perception cycle, using intentional agents equipped with KIV brains as they explore the environment. The cycle starts with the initiation of a population of virtual agents and the selection of the best ones after the initial optimization period. The best ones are embodied in robots and evolve in their interactions with the environment as they complete their tasks. In their operation, the agents produce EEG-like signals which are recorded and evaluated. Using advanced pattern recognition methods such as Dynamic Logic, the artificial EEG signals can be searched for meaningful patterns. Once meaningful emergent patterns have been identified and extracted, they can be used to advance the internal models and to obtain generations of agents with improving performance.

Nonlinear dynamic theory of brains has a great potential for deployment in artificial designs. The underlying principle is motivated by neurophysiological findings on pattern-based computing in brains through the sequence of cortical phase transitions. There is substantial evidence indicating that cortical phase transitions are manifestations of high-level cognition and potentially of consciousness. KIV produces EEG-like signals, which are sometimes so realistic that they would trick even an experienced physician. KIV demonstrates most of the basic dynamic behaviors observed in biological brains. In this sense, KIV provides the robot with a realistic brain blueprint which can be used and implemented in everyday practice. The practical implementation of KIV principles is called SODAS (self-organized ontogenetic development of autonomous systems). SODAS is one of the very few, if not the only, functioning artificial brain designs that exist today.

Acknowledgements

The author is thankful to Dr. W.J. Freeman (UC Berkeley), Dr. L.I. Perlovsky (Harvard and AFRL/HAFB), Dr. D.S. Harter (TAMU/Commerce), Dr. H. Voicu (UTA), Dr. P. Erdi (KFKI/KZoo), Dr. T. Huntsberger (Caltech/JPL), and Dr. E. Tunstel (JHU/APL) for useful discussions and for joint research efforts referenced in this review. Financial support by the National Research Council is greatly appreciated.

References

[1] Turing A. Computing machinery and intelligence. Mind 1950;59:433–60.
[2] Neumann JV. The computer and the brain. New Haven, CT: Yale Univ. Press; 1958.
[3] Newell A, Simon HA. Human problem solving. Englewood Cliffs, NJ: Prentice-Hall; 1972.
[4] Newell A. Physical symbol systems. Cognitive Science 1980;4:135–83.
[5] Newell A. Unified theories of cognition. Cambridge, MA: Harvard University Press; 1990.
[6] Harter D, Kozma R. Nonconvergent dynamics and cognitive systems. Unpublished.
[7] Laird J, Newell A, Rosenbloom P. Soar: An architecture for general intelligence. Artificial Intelligence 1987;33:1–64.
[8] Anderson J, Silverstein J, Ritz S, Jones R. Distinctive features, categorical perception, and probability learning: Some applications of a neural model. Psychol Review 1977;84:413–51.
[9] Dreyfus HL. What computers still can't do—a critique of artificial reason. Cambridge, MA: MIT Press; 1992.
[10] Mataric M, Brooks R. Cambrian intelligence. Cambridge, MA: The MIT Press; 1999. p. 37–58 [Ch. Learning a distributed map representation based on navigation behaviors].
[11] Floreano D, Mondada F, Uribe A, Roggen D. Evolution of embodied intelligence. Lecture Notes Artif Intell 2004;3139:293–311.
[12] Clark A. Mindware: An introduction to the philosophy of cognitive science. Oxford University Press; 2001.
[13] Kozma R, Aghazarian H, Huntsberger T, Tunstel E, Freeman WJ. Computational aspects of cognition and consciousness in intelligent devices. IEEE Comp Int Mag 2007;2(3):53–64.
[14] Perlovsky LI. Neural networks and intellect. New York, NY: Oxford Univ. Press; 2001.
[15] Kohonen T. Correlation matrix memories. IEEE Trans Comp 1972;C-21:353–9.
[16] Werbos P. Beyond regression: New tools for prediction and analysis in the behavioral sciences. PhD thesis, Harvard, Cambridge, MA; 1974.
[17] Grossberg S. How does a brain build a cognitive code? Psych Review 1980;87:1–51.
[18] Rumelhart DE, McClelland JL. Parallel distributed processing: Explorations in the microstructure of cognition. Cambridge, MA: MIT Press; 1986.
[19] Hopfield J. Neuronal networks and physical systems with emergent collective computational abilities. Proc Nat Acad Sci 1982;81:3058–92.
[20] Bishop C. Neural networks for pattern recognition. Oxford University Press; 1995.
[21] Haykin S. Neural networks—A comprehensive foundation. Prentice-Hall, NJ; 1998.
[22] Towell G, Shavlik J. Knowledge-based artificial neural networks. Artificial Intelligence 1994;70:119–65.
[23] Sporns O, Almassy N, Edelman G. Plasticity in value systems and its role in adaptive behavior. Adaptive Behavior 2000;7(3–4).
[24] Edelman GM, Tononi G. A universe of consciousness: How matter becomes imagination. New York, NY: Basic Books; 2000.
[25] Verschure P, Krose B, Pfeifer R. Distributed adaptive control: The self-organization of behavior. Robotics and Autonomous Systems 1992;9:181–96.
[26] Pfeifer R, Scheier C. Understanding intelligence. Cambridge, MA: MIT Press; 1999.
[27] Verschure P, Althaus P. A real-world rational agent: Unifying old and new AI. Cognitive Science 2003;27(4):561–90.
[28] Kozma R. Neurodynamics of higher-level cognition and consciousness. Heidelberg: Springer Verlag; 2007. p. 129–59 [Ch. Neurodynamics of intentional behavior generation].
[29] Freeman WJ. Mass action in the nervous system. New York: Academic Press; 1975.
[30] Chang H, Freeman WJ, Burke B. Optimization of olfactory model in software to give 1/f power spectra reveals numerical instabilities in solutions governed by aperiodic (chaotic) attractors. Neur Netw 1998;11:449–66.
[31] Freeman WJ. Neurodynamics: An exploration of mesoscopic brain dynamics. London, UK: Springer-Verlag; 2000.
[32] Freeman WJ, Kozma R, Werbos P. Biocomplexity—adaptive behavior in complex stochastic dynamical systems. BioSystems 2001;59(2):109–23.

[33] Kozma R, Freeman WJ. Chaotic resonance—methods and applications for robust classification of noisy and variable patterns. Int J Bifurcation and Chaos 2001;11:1607–29.
[34] Li G, Lou Z, Wang X, Li X, Freeman WJ. Application of chaotic neural model based on olfactory system on pattern recognitions. Springer Lect Notes Comp Sci 2006;3610:378–81.
[35] Kozma R, Puljic M, Bollobas B, Balister P, Freeman WJ. Phase transitions in the neuropercolation model of neural populations with mixed local and non-local interactions. Biol Cybernetics 2005;92:367–79.
[36] Kozma R, Sakuma M, Yokoyama Y, Kitamura M. On the accuracy of mapping by neural networks trained by backpropagation with forgetting. Neurocomputing 1996;13(2–4):295–311.
[37] Duch W, Setiono R, Zurada JM. Computational intelligence methods for rule-based data understanding. Proc IEEE 2004;92(5):771–805.
[38] Perlovsky L. Toward physics of the mind: Concepts, emotions, consciousness, and symbols. Phys Life Rev 2006;3(1):23–55.
[39] Kozma R, Deming R, Perlovsky L. Optimal estimation of parameters of transient mixture processes using dynamic logic approach. In: IEEE/INNS 2007 int joint conf neur netw IJCNN07, Orlando, USA; 2007, p. 1602.
[40] Nunez R, Freeman WJ. Restoring to cognition the forgotten primacy of action, intention, and emotion. J Consciousness Studies 1999;6(11–12):ix–xx.
[41] Freeman WJ. The limbic action-perception cycle controlling goal-directed animal behavior. In: IEEE int joint conf neur netw IJCNN02, Honolulu, HI; 2002, p. 2249–54.
[42] Harter D, Kozma R. Navigation and cognitive map formation using aperiodic neurodynamics. In: Proc of 8th int conf on simulation of adaptive behavior (SAB'04), LA, CA, vol. 8; 2004, p. 450–5.
[43] Kozma R, Fukuda T. Intentional dynamic systems—fundamental concepts and applications (editorial). Int J Intell Syst 2006;21(9):875–9.
[44] Kelso J. Dynamic patterns: The self-organization of brain and behavior. Cambridge, MA: MIT Press; 1995.
[45] Kelso J, Engstrom D. The complementary nature. Cambridge, MA: MIT Press; 2006.
[46] Pribram K, Gill M. Freud's project reassessed. New York: Basic Books; 1976.
[47] Katchalsky A, Rowland V, Huberman B. Dynamic patterns of brain cell assemblies. Neuroscience Res Program Bull 1974:12.
[48] Haken H. Synergetics: An introduction. Berlin: Springer-Verlag; 1983.
[49] Freeman WJ, Burke B, Holmes M. Aperiodic phase re-setting in scalp EEG of beta-gamma oscillations by state transitions at alpha-theta rates. Hum Brain Mapp 2003;19:248–72.
[50] Schroder M. Fractals, chaos, power laws: Minutes from an infinite paradise. San Francisco: WH Freeman; 1991.
[51] Bak P. How nature works—the science of self-organized criticality. New York: Springer-Verlag; 1996.
[52] Jensen HJ. Self-organized criticality—Emergent behavior in physical and biological systems. Cambridge University Press; 1998.
[53] Freeman WJ. Origin, structure, and role of background EEG activity. Part 4. Neural frame simulation. Clin Neurophysiology 2006;117:572–89.
[54] Freeman WJ. Origin, structure, and role of background EEG activity. Part 3. Neural frame classification. Clin Neurophysiology 2005;116:1118–29.
[55] Tsuda I. Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems. Beh and Brain Sci 2001;24(5):793–810.
[56] Freeman WJ, Barrie J. Analysis of spatial patterns of phase in neocortical gamma EEGs in rabbit. J Neurophysiol 2000;84:1266–78.
[57] Lachaux J, Rodriguez E, Martinerie J, Varela F. Measuring phase synchrony in brain signals. Hum Brain Mapp 1999;8:194–208.
[58] LeVan-Quyen M, Foucher J, Lachaux J, Rodriguez E, Lutz A, Martinerie J, Varela F. Comparison of Hilbert transform and wavelet methods for the analysis of neuronal synchrony. J Neurosci Meth 2001;111:83–98.
[59] Quiroga R, Kraskov A, Kreuz T, Grassberger P. Performance of different synchronization measures in real data: A case study on electroencephalographic signals. Physical Rev E 2002;6504:U645–58. Art. no. 041903.
[60] Kozma R, Deming R, Perlovsky L. Optimal estimation of parameters of transient mixture processes using dynamic logic approach. In: Conference on knowledge-intensive multi-agent systems KIMAS'07, Boston, MA.
[61] Kauffman S. The origins of order—self-organization and selection in evolution. Oxford Univ. Press; 1993.
[62] Crutchfield J. The calculi of emergence: Computation, dynamics, and induction. Physica D 1994;75:11–54.
[63] Watts D, Strogatz S. Collective dynamics of "small-world" networks. Nature 1998;393:440–2.
[64] Bressler S, Kelso J. Cortical coordination dynamics and cognition. Trends in Cognitive Sciences 2001;5:26–36.
[65] Bressler S. Understanding cognition through large-scale cortical networks. Current Directions in Psychological Science 2002;11:58–61.
[66] Freeman WJ. Origin, structure, and role of background EEG activity. Part 2. Analytic amplitude. Clin Neurophysiology 2004;115:2077–88.
[67] Watts D. Six degrees: The science of a connected age. New York: Norton; 2003.
[68] Wang X, Chen G. Complex networks: small-world, scale-free and beyond. IEEE Trans Circuits Syst 2003;31:6–20.
[69] Albert R, Barabasi A. Statistical mechanics of complex networks. Reviews of Modern Physics 2002;74:47.
[70] Barabasi A, Bonabeau E. Scale-free networks. Scientific American 2003;288:60–9.
[71] Bollobas B, Riordan O. Handbook of graphs and networks. Weinheim: Wiley-VCH; 2003. p. 1–34 [Ch. Results on scale-free random graphs].
[72] Ingber L. Neocortical dynamics and human EEG rhythms. New York: Oxford UP; 1995. p. 628–81 [Ch. Statistical mechanics of multiple scales of neocortical interactions].
[73] Hoppensteadt F, Izhikevich E. Thalamo-cortical interactions modeled by weakly connected oscillators: could the brain use FM radio principles? BioSystems 1998;48:85–94.
[74] Friston K. The labile brain. Neuronal transients and nonlinear coupling. Phil Trans R Soc Lond B 2000;355:215–36.
[75] Linkenkaer-Hansen K, Nikouline V, Palva J, Iimoniemi R. Long-range temporal correlations and scaling behavior in human brain oscillations. J Neurosci 2001;15:1370–7.
[76] Kaneko K, Tsuda I. Complex systems: Chaos and beyond. A constructive approach with applications in life sciences. Springer-Verlag; 2001.
[77] Kozma R, Freeman WJ, Erdi P. The KIV model—nonlinear spatio-temporal dynamics of the primordial vertebrate forebrain. Neurocomputing 2003;52–54:819–25.




[78] Stam C, Breakspear M, Cappellen V, van Walsum A, van Dijk B. Nonlinear synchronization in EEG and whole-head recordings of healthy subjects. Hum Brain Mapp 2003;19:63–78.
[79] Taylor JG. Mind and consciousness: Towards a final answer? Phys Life Rev 2005;2(1):1–45.
[80] Balister P, Bollobas B, Kozma R. Large deviations for mean field models of probabilistic cellular automata. Random Structures and Algorithms 2006;29:399–415.
[81] Chang H, Freeman WJ. Parameter optimization in models of the olfactory neural system. Neural Networks 1996;9:1–14.
[82] Freeman WJ, Chang H, Burke B, Rose P. Taming chaos: stabilization of aperiodic attractors by noise. IEEE Trans Circ Syst I 1997;44:989–96.
[83] Xu D, Principe J. Dynamical analysis of neural oscillators in an olfactory cortex model. IEEE Trans Neur Netw 2004;15:1053–62.
[84] Ilin R, Kozma R. Stability of coupled excitatory-inhibitory neural populations and application to control of multi-stable systems. Phys Lett A 2006;360(1):66–83.
[85] Gutierrez-Galvez A, Gutierrez-Osuna R. Contrast enhancement of sensor-array patterns through Hebbian/anti-Hebbian learning. In: Proc 11th Int Symp Olfaction and Elect. Nose, Barcelona, Spain.
[86] Chang H, Freeman WJ, Burke B. Optimization of olfactory model in software to give 1/f power spectra reveals numerical instabilities in solutions governed by aperiodic (chaotic) attractors. Neural Networks 1998;11:449–66.
[87] Beliaev I, Kozma R. Time series prediction using chaotic neural networks on the CATS benchmark test. Neurocomputing 2007;70(13):2426–39.
[88] Harter D, Kozma R. Chaotic neurodynamics for autonomous agents. IEEE Trans Neural Networks 2005;16(4):565–79.
[89] Harter D, Kozma R. Aperiodic dynamics and the self-organization of cognitive maps in autonomous agents. Int J Intelligent Systems 2006;21:955–71.
[90] Kozma R, Freeman WJ. Basic principles of the KIV model and its application to the navigation problem. J Integrative Neurosci 2003;2:125–40.
[91] Chua LO. CNN. A paradigm for complexity. World Scientific; 1998.
[92] Freeman WJ. NDN, volume transmission, and self-organization in brain dynamics. J Integr Neurosci 2005;4(4):407–21.
[93] Gat E, Desai R, Ivlev R, Loch J, Miller D. Behavior control for robotic exploration of planetary surfaces. IEEE Trans Robotics and Autom 1994;10(4):490–503.
[94] Huntsberger T, Rose J. BISMARC—a biologically inspired system for map-based autonomous rover control. Neural Networks 1998;11(7–8):1497–510.
[95] Huntsberger T, Cheng Y, Baumgartner E, Robinson M, Schenker P. Sensory fusion for planetary surface robotic navigation, rendezvous, and manipulation operations. In: Proc. Int. Conf. on Advanced Robotics, Lisbon, Portugal; 2003, p. 1417–24.
[96] Tunstel E. Ethology as an inspiration for adaptive behavior synthesis in autonomous planetary rovers. Autonomous Robots 2001;11:333–9.
[97] Seraji H, Howard A. Behavior-based robot navigation on challenging terrain: A fuzzy logic approach. IEEE Trans Robotics and Autom 2002;18(3):308–21.
[98] O'Keefe J, Recce M. Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 1993;3:317–30.
[99] Blum K, Abbott LP. A model of spatial map formation in the hippocampus of the rat. Neural Computation 1996;8:85–93.
[100] Touretzky D, Redish A. Theory of rodent navigation based on interacting representations of space. Hippocampus 1996;6(3):247–70.
[101] Bachelder I, Waxman A. Mobile robot visual mapping and localization: A view based neurocomputational architecture that emulates hippocampal place learning. Neural Networks 1994;7:1083–99.
[102] Arleo A, Gerstner W. Spatial cognition and neuro-mimetic navigation: A model of hippocampal place cell activity. Biological Cybernetics 2000;83:287–99.
[103] Hasselmo M, Hay J, Ilyn M, Gorchetchnikov A. Neuromodulation, theta rhythm and rat spatial navigation. Neural Networks 2002;15:689–707.
[104] Voicu H, Kozma R, Wong D, Freeman WJ. Spatial navigation model based on chaotic attractor networks. Connect Sci 2004;16(1):1–19.
[105] Trullier O, Wiener S, Berthoz A, Meyer J. Biologically-based artificial navigation systems: Review and prospects. Progress in Neurobiology 1997;51:483–544.
[106] Kozma R, Myers M. Analysis of phase transitions in KIV with amygdala during simulated navigation control. In: IEEE Int Joint Conf Neur Netw IJCNN'05, Montreal, Canada; 2005, p. 125–30.
[107] Kozma R, Wong D, Demirer M, Freeman WJ. Learning intentional behavior in the K-model of the amygdala and entorhinal cortex with the cortico-hippocampal formation. Neurocomputing 2005;65–66:23–30.
[108] Huntsberger T, Tunstel E, Kozma R. Intelligence for space robotics. San Antonio, TX: TCI Press; 2006. p. 403–22 [Ch. Onboard learning strategies for planetary surface rovers].
[109] Kozma R, Muthu S. Implementing reinforcement learning in the chaotic KIV model using mobile robot AIBO. In: IEEE/RSJ Int. Conf. on Intelligent Robots and Systems IROS'04, Sendai, Japan; 2004, p. 1617–22.
[110] Demirer R, Kozma R, Caglar M, Polatoglu Y. Hilbert transform optimization to detect cortical phase transitions in beta-gamma band. Submitted for publication.

