
Emotion Modeling for Robots

Rocco van Schalkwyk
PBMR (Pty) Ltd, PO Box 9396, Centurion, 0046, South Africa

AFRICON 2007, Windhoek, South Africa, 26-28 October 2007

Abstract—This paper describes a method by which emotion can be modeled as an integral part of a simple brain model based on the neuro-physiological working of the brain. An example is presented of this model being implemented in a simulated robotics environment.

I. INTRODUCTION

A brain model, with integrated emotion modeling, was developed and tested under dynamic conditions using a C++ OpenGL dynamic computer simulation of a simple robot in a virtual confine. The simulation illustrated that the integrated brain model, referred to as the X-zistor Concept [1] and described in patent specifications [2] and [3], allowed the robot, in principle, to sense (sight, tactile and audio), learn, recognize, make associations, think, dream, move, feel emotions (positive and negative), and experience instincts, pain, phobias and ambivalence. The brain model was developed on the assumption that all behaviours, including subjective states like pain and emotion, originate strictly from the neuro-physiological functions of the brain. The purpose of this paper is to extract the emotion model from the integrated brain model and define it in simple terms.

II. RELATED RESEARCH

Current studies show an increased interest in defining the relationship between cognition and emotion and in ways to implement it in agent architectures [4],[5],[6]. Logic alone has been shown to have serious shortfalls when used to model human intelligence in artificial intelligence systems [7]. The view that emotion is an integral part of rational behaviour is receiving increasing support from brain research studies [8],[9],[10].

Attempts to analyse emotion in the brain can be traced back many centuries. Over two hundred years ago, David Hume, the famous exponent of philosophical naturalism, in his An Enquiry concerning the Principles of Morals [11], said we should 'follow a very simple method,' where we have been cured of 'our passion for hypotheses and systems' and 'reject every system...however subtle or ingenious, which is not founded on fact and observation'. The X-zistor Concept originated from a similarly simple view: using the available brain functions for which substantive evidence exists, and subjecting them to an in-depth functional analysis, it should be possible to isolate the basic underlying functions which lead to all higher-tier behavioural complexities, such as cognition and emotion.

With regard to emotion and intelligence, Marvin Minsky expressed doubt twenty years ago as to whether machines could exhibit intelligent behaviour without emotions [12]. Perhaps the closest precedent for the emotion modeling in the X-zistor Concept is the suggestion by Doug Riecken to encode some 'primitive type instinct-goals' and then put the agent into the world to learn from experience. Over time, 'those instincts [might] become associated with experiences in that culture, and get discolored and reshaped by other concept words used by humans, such as happiness or sadness or anger' [13]. The emotion model embedded in the X-zistor Concept assumes that emotion is permanently and prominently involved in cognition, and that it provides the prime evaluation system on which the brain bases motivation and intelligence.

In terms of the many different agent architectures being investigated today, the X-zistor Concept is relatively simple: it avoids defining a higher-level abstraction hierarchy, such as deliberative layers and reflective processes (meta-management layers) as proposed by Beaudoin [14], and assumes that all higher-level effects evolve purely from a specific set of 'primitive' or 'basic' brain functions over time. It is important to note that the emotion model presented here can only exist in the context of a larger integrated brain model providing these 'basic' brain functions.

III. THE INTEGRATED BRAIN MODEL

To illustrate the principle by which emotion was integrated into the X-zistor Concept, some of the basic building blocks of the brain model will first be described in mathematical terms, after which the emotion part will be discussed. A simple robot implementation will then be provided and some analogies between the model and human behaviour presented. The following terms, as used in the X-zistor Concept, will first be defined (the reader should not confuse these terms with prior interpretations; the derived definition of emotion remains valid throughout this paper).



A. Sense

A sense translates a physical condition in the environment or body into a corresponding representation in the brain:

PC --X--> S

where PC = physical condition, e.g. sound, pressure, etc.
      X  = translation function into brain syntax
      S  = representation of the physical condition in brain syntax

B. Drive

A drive translates a physical condition in the environment or body into a corresponding representation in the brain and, using a predetermined logic, also represents the extent to which that condition is conducive or detrimental to the well-being of the robot. The representation will thus also contain information, interpretable by other brain functions, on the extent (strength) to which the condition is conducive or detrimental to the robot. If the condition is conducive, we will refer to it as positive, or causing satiation. If the condition is detrimental, we will refer to it as negative, or causing deprivation.

PC --X--> D --Ed--> E

where PC = physical condition, e.g. robot fuel level
      X  = translation function into brain syntax
      D  = representation of the physical drive condition in brain syntax
      Ed = evaluation of drive strength using preset drive logic
      E  = representation of drive strength

C. Reflex

A reflex evaluates a drive, using a predetermined logic, resulting in a representation in the brain interpretable as a preprogrammed set of motion commands.

(D, E) --Er--> R

where D  = representation of the physical drive condition in brain syntax
      E  = representation of drive strength
      Er = evaluation using preset reflex logic
      R  = representation of the preprogrammed motion command set

D. Motion

A motion evaluates a reflex, using a predetermined logic, resulting in a representation in the brain interpretable by the motion effectors as motions that must be performed.

R --Em--> M

where R  = representation of the preprogrammed motion command set
      Em = evaluation using preset motion logic
      M  = representation of the motions of the reflex motion command set

E. Association

An association is a tuple of 'lower resolution' representations (Slr, Dlr, Elr, Mlr, but not R) that is stored at a given time t by the brain, based on the presence of S, D, E and M in the brain, in a linked fashion, so that when the same S and D combination is experienced together again, Elr and Mlr will be re-evoked by the brain. Elr, as part of an association, represents the strength or weight of that association, positive (satiation) or negative (deprivation).

At = (Slr, Dlr, Elr, Mlr)

where At  = association formed at time t
      Slr = lower resolution representation of the sense
      Dlr = lower resolution representation identifying the drive
      Elr = lower resolution representation of the drive strength
      Mlr = lower resolution representation of the motions performed

If an association already exists for a certain S and D combination, and this S and D combination is experienced again, i.e. 'recognised', the association currently being formed will be used to update the existing association in the robot's brain, using a predetermined arbitration logic based on the relative strengths (weights) of the current and existing associations as represented by their Elr values. The reflex R is not stored; only the motion effector commands present at the time the association was formed, M, are stored. The brain normally stores the association less vividly (at lower resolution) than it was experienced, to help the agent (robot) distinguish between thought and reality in future; if thinking about eating an apple were just as pleasant as eating it, we could easily prefer to just think about it rather than eat it. Thinking is formally achieved by the model when 'recognition' fails and actions to satisfy a certain drive are guessed by the brain, based on a correlation logic repetitively interrogating the association database (memory). Day-dreaming occurs when no drives need to be satisfied, based on a reinforcement-strength logic repetitively interrogating the association database. Sleep-dreaming is achieved using the exact same logic, but visual stimuli and motion recollection are subdued.
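To make the association mechanism above concrete, the following is a minimal Python sketch of an association store keyed on the (S, D) combination. The class and method names are hypothetical (they are not taken from the SIMAI-X1 code), and the arbitration rule shown (keep whichever association carries the larger absolute Elr) is only one plausible reading of the 'predetermined arbitration logic' described above.

```python
from dataclasses import dataclass

@dataclass
class Association:
    s_lr: str    # lower resolution sense representation (Slr)
    d_lr: int    # drive identification key (Dlr)
    e_lr: float  # drive strength / association weight (Elr)
    m_lr: str    # motions performed when the association formed (Mlr)

class AssociationStore:
    """Hypothetical association database keyed on the (S, D) combination."""

    def __init__(self):
        self._db = {}

    def experience(self, s_lr, d_lr, e_lr, m_lr):
        """Store a new association, or update an existing one for the same
        (S, D) combination using a simple strength-based arbitration rule."""
        key = (s_lr, d_lr)
        candidate = Association(s_lr, d_lr, e_lr, m_lr)
        existing = self._db.get(key)
        # Arbitration (assumed): keep whichever association carries the stronger weight.
        if existing is None or abs(candidate.e_lr) >= abs(existing.e_lr):
            self._db[key] = candidate

    def recognise(self, s, d):
        """If the current S and D combination matches a stored association,
        re-evoke its Elr (felt weight) and Mlr (recalled motions)."""
        return self._db.get((s, d))
```

Recognition then simply re-evokes the stored Elr (the felt weight) and Mlr (the motions to replay) whenever the same sense and drive combination recurs.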

The representation E has a further important attribute: when generated, say by a strong deprivation event (e.g. a painful collision) or satiation event (e.g. food located), it gradually diminishes as a function of time, E = E(t). This means that associations formed shortly after a high-E event will often be influenced by that event, in that they will also adopt increased Elr values. Even associations formed immediately preceding strong E events will adopt increased Elr values, by virtue of the fact that the representations forming those associations were still present in the brain to some extent when the strong E event took place. The extent to which preceding or follow-up associations are modified is determined by preset relations which form an important part of the X-zistor Concept.
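The paper does not specify the functional form of E(t) or of these preset relations; the sketch below simply assumes an exponential decay to illustrate how associations formed shortly before or after a strong E event would adopt larger Elr values than those formed further away in time.

```python
import math

def drive_strength(e_peak: float, dt: float, tau: float = 5.0) -> float:
    """Drive strength E a time dt (seconds) after an event of strength e_peak,
    assuming a simple exponential decay with time constant tau."""
    return e_peak * math.exp(-dt / tau)

def boosted_e_lr(original_e_lr: float, event_e: float, dt: float,
                 tau: float = 5.0) -> float:
    """Elr adopted by an association formed dt seconds before or after a
    strong E event: the decayed event strength is simply added on top of
    the association's own drive strength (an assumed 'preset relation')."""
    return original_e_lr + drive_strength(event_e, abs(dt), tau)

# An association formed 2 s before a painful collision (E = -100) acquires
# a more negative weight than one formed 10 s before it.
print(boosted_e_lr(0.0, -100.0, 2.0))   # ~ -67
print(boosted_e_lr(0.0, -100.0, 10.0))  # ~ -13.5
```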

IV. THE EMOTION MODEL

The drive strength representation E, and the way it is stored and retrieved in an association by the X-zistor Concept, offers a model of the physical mechanism underlying the subjective human experience of emotion. This becomes more obvious when reflexes are linked to it, such as smiling when E is positive (satiation) and frowning or crying when E is negative (deprivation).

The model bears a remarkable resemblance to human emotional experience when expanded to contain more than just one drive, e.g. a combination of drives such as thirst, hunger, avoiding cold, avoiding heat, sexual deprivation and pain. Using the X-zistor Concept, pain can easily be modeled as simply another drive, using the brain's corporeal body map to locate the pain and its own nerves to generate representations which identify the drive D and drive strength E and which can be interpreted by the reflex function. For example, when the robot body encounters extreme temperatures (burning) or pressures (bruising), reflexes can be triggered resulting in quick withdrawal from the pain source, adrenaline injection, crying reflexes, frowning, etc. This will help teach the robot, through reinforcement learning, to avoid these conditions. When a pain source is recognized, the negative Elr state evoked by the robot brain can be thought of as a form of 'fear'.

Complexity of emotion can be achieved by defining interdependencies between drives, so that drives will evoke other drives in a parasitic fashion; an average resultant E value is then derived from the individual contributions of all the drives present in the robot body at the time.
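Assuming equal weighting of the active drives (the paper speaks only of an 'average resultant E value'), the resultant emotional state can be sketched as follows:

```python
def resultant_e(drive_strengths: dict[str, float]) -> float:
    """Resultant emotion: the average of the E values of all drives
    currently active in the robot body (equal weighting assumed)."""
    if not drive_strengths:
        return 0.0
    return sum(drive_strengths.values()) / len(drive_strengths)

# Thirst is satiated while a mild pain drive is active: a mixed, ambivalent state.
print(resultant_e({"thirst": 100.0, "pain": -40.0}))  # 30.0
```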

In humans there are normally many different active drives and E states present, which leads to innumerable emotional nuances for which we develop many different names, leading us to think that emotions are complex and mystical rather than different outcomes of the same biological mechanism.

A. Definition of Emotion

Based on the above, emotion can simply be defined as a resultant average E or Elr state in the brain. In words: emotion is the resultant average drive strength representation generated in the brain by a situation, or the lower resolution drive strength representation evoked in the brain by an association upon recognition or recollection of a situation.

V. A SIMPLE ROBOT IMPLEMENTATION

The proposed emotion model can be implemented in a simple X-zistor Concept robot to illustrate its working. We assume the robot resides in a simple confine, facing a black door in a white wall. It requires water for its survival, which is fed to it through a letterbox in the door. When the robot's water tank drops below a certain level, it will experience thirst and will require water in order to be satiated. We further assume it uses two wheels that can only move forward and backward at constant speed, or stand still. The robot's brain will be a simple computer-based spreadsheet (we will disregard the machine code running in the background) and all representations will consist of numerical values.

For the purposes of this demonstration, we will assume the robot can only adopt two positions: Position 1, in front of the door, and Position 2, right up against the door.

A. Sense Implementation

For the robot we will assume only one sense, namely optic sensing. It will take the optical image of the black door in the white wall and simplify it into a 5-lens mosaic panel, the panes of which can only be black or white, with numerical values as follows:

white = 0, black = 1

For the two positions, the optic state viewed through the mosaic panel and the associated numerical representation will be as follows:

Position 1, just in front of the black door - mosaic view (figure omitted); numerical representation: 01110


Position 2, right up against the black door - mosaic view (figure omitted); numerical representation: 11111
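A minimal sketch of this optic sense, assuming the simulator supplies a brightness value per pane and that a simple threshold decides black versus white:

```python
def optic_state(panes: list[float], threshold: float = 0.5) -> str:
    """Reduce the 5-pane mosaic to the numerical representation used above:
    each pane becomes '1' if dark enough to count as black, else '0'."""
    return "".join("1" if p >= threshold else "0" for p in panes)

print(optic_state([0.1, 0.9, 0.8, 0.9, 0.2]))  # '01110' - Position 1
print(optic_state([0.9, 0.9, 0.9, 0.9, 0.9]))  # '11111' - Position 2
```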

B. Drive Implementation

For the robot we will assume only one drive, namely thirst, and give it a numerical identification key of, say, 5. The drive will take the change in the water tank level as a function of time (∂Level/∂t) into account when determining whether the robot is experiencing satiation or deprivation, and will represent the drive strength E as a numerical value, as follows:

E = 100  if ∂Level/∂t ≥ 0
E = -100 if ∂Level/∂t < 0
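As a sketch, the thirst drive evaluation amounts to a single comparison (the function name and return convention are illustrative):

```python
def thirst_drive(level_change_per_s: float) -> tuple[int, int]:
    """Evaluate the thirst drive: returns (D, E) where D is the drive
    identification key (5 for thirst) and E follows the rule above."""
    d = 5
    e = 100 if level_change_per_s >= 0 else -100
    return d, e

print(thirst_drive(-0.2))  # (5, -100): tank depleting, deprivation
print(thirst_drive(0.5))   # (5, 100):  tank filling, satiation
```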

C. Reflex Implementation

For the robot we will assume two reflexes, namely smiling and crying. If E = 100, which means the water tank is being filled, we define this as satiation and the robot brain will send a smile command. This condition will be represented by the numerical identification key 1000. If E = -100, which means the water tank is being depleted, we define this as deprivation and the robot brain will send a cry command. This condition will be represented by the numerical identification key 999.

As with humans, the cry command is merely an attempt to attract a caretaker to guide the robot to the water source.
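The preset reflex logic for this demonstration robot can likewise be sketched in a few lines:

```python
def reflex(e: int) -> int:
    """Preset reflex logic for the demonstration robot."""
    if e == 100:      # satiation: water tank being filled
        return 1000   # smile reflex key
    if e == -100:     # deprivation: water tank being depleted
        return 999    # cry reflex key
    raise ValueError("unexpected drive strength")
```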

D. Motion Implementation

For the robot we assume only that, in terms of motions, the smile reflex, when triggered, will be executed. It will be represented by the numerical identification key 10. The cry reflex will be represented by the numerical identification key 20. 'Wheel motion forward' will be given by 11 and 'no wheel motion' by 00.

E. Association Implementation

For the robot we will assume it is standing away from the door in Position 1 (stationary), its water tank is being depleted (i.e. ∂Level/∂t < 0), and it is crying. The association formed in this situation at time t can be expressed in the spreadsheet brain format as follows (assuming that E = Elr for this demonstration):

At = (01110, 5, -100, 999, 00)

This is the only association in the robot's brain, and the robot has no notion of moving forward in order to elicit water from the letterbox in the door.

Let us now assume the robot is pushed forward by a caretaker at time t + ∆t into Position 2 against the door, where it comes to a halt so that the water tank is filled (i.e. ∂Level/∂t > 0), it experiences satiation (E = 100) and it smiles. The following new association is formed at time t + ∆t:

At+∆t = (11111, 5, 100, 10, 00)

Because of back-propagation, at the time t + ∆t when the robot moved from Position 1 to Position 2 (both wheels moving), the effect of the strong satiation event on the previous association at Position 1 will be as follows:

At+∆t = (01110, 5, 100, 10, 11)
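A minimal sketch of this back-propagation step, assuming (as the numbers above suggest) that the preceding association simply adopts the E value and the motion codes associated with the satiation event; the function name and five-field row layout are illustrative:

```python
def back_propagate(previous, event_e, face_code, wheel_code):
    """Update the association formed just before a strong E event so that it
    adopts the event's drive strength and the motions that led to it."""
    optic_state, drive_key, _old_e, _old_face, _old_wheels = previous
    return (optic_state, drive_key, event_e, face_code, wheel_code)

a_t = ("01110", 5, -100, 999, "00")        # crying, stationary at Position 1
print(back_propagate(a_t, 100, 10, "11"))  # ('01110', 5, 100, 10, '11')
```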

To see what the robot has learned, we can now move it back to Position 1. This time it will recognize the optic state (01110) and the presence of the thirst drive (5) and, using the above association, move with both wheels forward (11) and with a smile (10) based on a positive E state (100). When it reaches Position 2, it will recognize the optic state (11111) and the presence of the thirst drive (5) based on the above association, stop both wheels (00) and smile (10) due to the positive E state (100).
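The recall behaviour described here can be sketched as a lookup on the (optic state, drive) pair, reusing the illustrative five-field rows from above:

```python
def act(memory: dict, optic_state: str, drive_key: int):
    """If the current optic state and active drive are recognised, re-evoke
    the stored Elr (felt emotion) and replay the stored motion codes."""
    association = memory.get((optic_state, drive_key))
    if association is None:
        return None  # no recognition: the integrated model would fall back on thinking
    _s, _d, e_lr, face_code, wheel_code = association
    return e_lr, face_code, wheel_code

memory = {
    ("01110", 5): ("01110", 5, 100, 10, "11"),  # Position 1: smile, wheels forward
    ("11111", 5): ("11111", 5, 100, 10, "00"),  # Position 2: smile, wheels stop
}
print(act(memory, "01110", 5))  # (100, 10, '11')
print(act(memory, "11111", 5))  # (100, 10, '00')
```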

If the robot is moved further back, to say Position 0, and pushed once to Position 1, it will learn through back-propagation to navigate from this position to the black door, while recognition of the satiation value of the associations will make it experience E = 100, i.e. satiation, and it will smile. The effect of this is a robot that smiles as it recognizes the optic states which lead it to the black door and the water supply.

Similarly, it will learn to avoid deprivation events, such as walking into the wall, if we choose to give it a pain drive. It should be clear from the above that the emotion model is not based on pre-programmed 'cosmetic' responses, but on the concept of drive strengths, which are physical states in the brain. Because it is a physical state in the mind, words can be associated with it (just another motion) and the robot can learn to repeat words like nice, happy, unpleasant, bad, sad, etc. - words we as humans also use to describe our emotional states.


VI. SIMAI-X1 VIRTUAL ROBOT IMPLEMENTATION

SIMAI-X1, or 'Simmy', is a C++ OpenGL virtual robot simulation developed to demonstrate the basic principles of the X-zistor Concept under dynamic conditions, including emotion modeling.

It comprises a simple robot in a square confine with an obstacle and a food source. The virtual robot learns to navigate in the confine until it stops colliding with objects and quickly moves to the food source when hungry, while showing emotional states such as pain, fear, joy, crying, laughter, etc.

As the robot learns to navigate around its confine, an association database is populated, which can be exported as an experience file and later imported into an untrained robot.
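The paper does not give the format of the experience file, so the JSON layout below is purely an assumption, used only to illustrate the idea of transplanting a trained association database into an untrained robot:

```python
import json

def export_experience(memory: dict, path: str) -> None:
    """Write the association database to an experience file (assumed JSON layout)."""
    rows = [list(assoc) for assoc in memory.values()]
    with open(path, "w") as f:
        json.dump(rows, f)

def import_experience(path: str) -> dict:
    """Load an experience file into a fresh (untrained) robot brain."""
    with open(path) as f:
        rows = json.load(f)
    return {(s, d): (s, d, e, face, wheels) for s, d, e, face, wheels in rows}
```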

VII. HARDWARE ROBOT IMPLEMENTATION

Based on the SIMAI-X1 results, a project was undertaken to drive an Evolution Robotics ER-1 'notebook' robot using an X-zistor Concept behaviour engine.

The approach involved translating the integrated brain model into Python code and running it on a separate computer, while the on-board ER-1 processor is used to process the robot's sensor inputs, e.g. to translate the CCTV camera 'framegrabs' into optic states for which numerical representations are required. Communication between the robot and the remote computer was established using a Telnet connection over a wireless LAN.
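Only the transport (a Telnet-style text connection over the wireless LAN) is taken from the paper; the address, port and command strings in the sketch below are placeholders and not the actual ER-1 command set:

```python
import socket

def send_command(host: str, port: int, command: str) -> str:
    """Send one text command to the robot's Telnet-style interface over the
    wireless LAN and return its reply."""
    with socket.create_connection((host, port), timeout=5.0) as sock:
        sock.sendall((command + "\r\n").encode("ascii"))
        return sock.recv(1024).decode("ascii", errors="replace").strip()

# Example (hypothetical address, port and command string):
# reply = send_command("192.168.0.10", 9000, "move forward")
```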

This project is still in its infancy, but it shows promise of low technical risk, and it is estimated that such a hardware implementation can readily be achieved using this brain model. Plans include expanding the application to an office environment and integrating the on-board ER-1 object recognition software with the X-zistor Concept behaviour engine.

Currently, other hardware platforms are being investigated as candidates onto which to integrate this behaviour engine in a similar fashion, e.g. the Lego MindStorms NXT [15] and RidgeSoft's IntelliBrain-Bot educational robot [16].

VIII. CONCLUSION

No exceptions to the generality of the X-zistor Concept as a brain model could be found to date in terms of describing, in principle, the behaviours and subjective states experienced by humans. These basic behaviours could easily and convincingly be modeled in a virtual robot. The emotion model presented here also succeeded in providing the virtual robot with extremely lifelike emotions.

It can thus be concluded that the X-zistor Concept, with its embedded emotion model, can make the building of machines with emotion, as defined here, possible.

The benefit of understanding all the mechanisms underlying emotions makes this more than just a platform for building robots. It can also play a role in:

• understanding the working of the brain, e.g. the insight gained from the model that there could be more than five senses, for instance the 'sense of balance';
• the analysis of neuro-physiological and psychological disorders;
• numerous industries where autonomous robots can make a contribution, e.g. aerospace exploration, defense, security and safety, surveillance, education and entertainment, mining, agriculture, retail, frailty care, and the testing and inspection of high-risk environments such as petro-chemical plants, refineries, smelters, explosives factories, nuclear reactors and nuclear waste facilities.

A lot of expansion to the model and its computational implementation has already been done, and much scope for refinement still exists, but already some controversial questions can be asked: When is a robot with the above features no longer a 'simulation' but a 'replica'? In what way are the emotions created here inferior to the ones created in biological matter?

Media reports have indicated that British scientists are calling for a public discussion on the increasing use of robots, while experts have criticized a government-commissioned report (by the British Office of Science and Innovation) stating that calls could be made to grant intelligent machines human rights.

A large contingent of AI experts feel that discussing robot rights at the time of this paper is premature, and it could be that at this stage the model is nothing more than a 'Bohr atom' of the mind. The fact, however, that artificial neural networks (ANNs) exist which have much of the required X-zistor Concept functionality inherent in their structures makes it an option to convert the concept into neural network applications with almost limitless scope. This will close the loop on the neuro-physiological origins of the model and set the scene for interesting future developments.

REFERENCES

[1] SIMAI-X1 'Simmy' C++ OpenGL simulation of the X-zistor Concept - downloadable robot simulation. Available: http://www.mjvn.co.za/x-zistor
[2] R. van Schalkwyk, A Method and Device to Artificially Reproduce and Add to the Functionality of Living Creatures, Provisional Patent Specification (South Africa), Pat. Nr. 2002/1207, 2002.
[3] R. van Schalkwyk, A Method and Device to Illustrate and Perform Living Creature Type Functionality, Provisional Patent Specification (South Africa), Pat. Nr. 2003/0850, 2003.
[4] E. Oliveira and L. Sarmento, "Emotional valence-based mechanisms and agent personality," in Proc. SBIA'02, LNAI, Springer-Verlag, 2002.
[5] S. C. Gadanho and J. Hallam, "Emotion-driven learning for animat control," in Proc. Fifth Int. Conf. on Simulation of Adaptive Behavior, Univ. of Zurich, Zurich, Switzerland, Sept. 1998.
[6] M. Toda, "The urge theory of emotion and cognition," SCCS Tech. Rep. No. 93-1-01, Chukyo University, 1993.
[7] H. Dreyfus, What Computers Still Can't Do: A Critique of Artificial Reason. Cambridge, MA: The MIT Press, 1992.
[8] J. LeDoux, The Emotional Brain. New York: Simon & Schuster.
[9] A. Damásio, Descartes' Error: Emotion, Reason and the Human Brain. New York: Grosset/Putnam, 1994.
[10] R. E. Cytowic, The Man Who Tasted Shapes. London: Abacus, 1993.
[11] D. Hume, An Enquiry Concerning the Principles of Morals. Hackett Publishing Company, new edition, June 1983.
[12] M. L. Minsky, The Society of Mind. London: William Heinemann Ltd., 1987.
[13] D. Riecken, "A conversation with Marvin Minsky about agents," Communications of the ACM, vol. 37, no. 7, pp. 23-29, July 1994.
[14] L. P. Beaudoin, Goal Processing in Autonomous Agents, PhD thesis, School of Computer Science, The University of Birmingham, 1994. Available: http://www.cs.bham.ac.uk/research/cogaff/
[15] Lego MindStorms NXT. Available: http://www.lego.com
[16] RidgeSoft IntelliBrain-Bot. Available: http://www.ridgesoft.com/intellibrainbot/intellibrainbot.htm


Copyright Information

© 2007 IEEE. Personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in other works must be obtained from the IEEE.