


Machine consciousness: a critical view from enactive cognitive science
Tom Froese ([email protected])
Centre for Computational Neuroscience and Robotics, University of Sussex, Brighton, UK

Cliff, D. (1991), “Computational Neuroethology: A Provisional Manifesto”, in: J.-A. Meyer & S.W. Wilson (eds.), From Animals to Animats: Proc. of the 1st Int. Conf. on the Simulation of Adaptive Behavior, Cambridge, MA: The MIT Press, pp. 29-39

Di Paolo, E.A. (2003), “Organismically-inspired robotics: homeostatic adaptation and teleology beyond the closed sensorimotor loop”, in: K. Murase & T. Asakura (eds.), Dynamical Systems Approach to Embodiment and Sociality, Adelaide, Australia: Advanced Knowledge International, pp. 19-42

Froese, T., Virgo, N. & Izquierdo, E. (2007), “Autonomy: a review and a reappraisal”, in: F. Almeida e Costa et al. (eds.), Proc. of the 9th Euro. Conf. on Artificial Life, Berlin, Germany: Springer Verlag, in press

Froese, T. & Ziemke, T. (in preparation), “Enactive Artificial Intelligence”

Jonas, H. (1966), The Phenomenon of Life: Toward a Philosophical Biology, Evanston, Illinois: Northwestern University Press, 2001

Kant, I. (1790), Kritik der Urteilskraft, trans. W.S. Pluhar as Critique of Judgment, Indianapolis, IN: Hackett Publishing Company, 1987

Maturana, H.R. & Varela, F.J. (1980), Autopoiesis and Cognition: The Realization of the Living, Dordrecht: Kluwer Academic Publishers

Pfeifer, R. & Scheier, C. (1999), Understanding Intelligence, Cambridge, MA: The MIT Press

Thompson, E. (2007), Mind in Life: Biology, Phenomenology, and the Sciences of Mind, Cambridge, MA: The MIT Press

Varela, F.J. (1992), “Autopoiesis and a Biology of Intentionality”, in: B. McMullin & N. Murphy (eds.), Proc. of Autopoiesis and Perception: A Workshop with ESPRIT BRA 3352, Dublin City University

Weber, A. & Varela, F.J. (2002), “Life after Kant: Natural purposes and the autopoietic foundations of biological individuality”, Phenomenology and the Cognitive Sciences, 1, pp. 97-125

Ziemke, T. (2007), “What’s life got to do with it?”, in: A. Chella & R. Manzotti (eds.), Artificial Consciousness, Imprint Academic, pp. 48-66

Current robots are neither natural purposes nor autopoietic. We therefore have no grounds for attributing intrinsic teleology, autonomous agency, or the capacity for sense-making to them (Froese, Virgo & Izquierdo 2007). From the enactive point of view they provide no basis for machine consciousness (Ziemke 2007).

However, there is no a priori reason why AI could not develop artificial agents with the right kind of existential prerequisites. This requires taking biological autonomy and its precarious condition seriously, as well as shifting the design perspective from engineering an ‘agent’ toward engineering the appropriate conditions of emergence of an agent.

In other words, before the field of machine consciousness can get off the ground we need to develop an enactive artificial intelligence (Froese & Ziemke in preparation).

Embodied-embedded AI has established itself as a viable methodology for synthesizing and understanding cognition (Pfeifer & Scheier 1999).

Its methods successfully address many real-world situations that require robust, flexible, and context-sensitive behaviour.

This is done by designing appropriate sensorimotor loops such that the effects of the actuators impact the sensors via the environment and then, via an internal link, impact the actuators again. The agent thereby structures its sensory inputs through its ongoing interaction with the environment.
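The loop just described can be sketched in a few lines of code. This is our own minimal illustration, not a model from the poster: the world, the sensor mapping, and the internal sensor-to-actuator link are all hypothetical placeholders.

```python
# Minimal sketch of a closed sensorimotor loop: motor output changes the
# environment, which in turn changes what the agent senses on the next step.
# The world model and controller below are illustrative assumptions only.

def environment(state, motor):
    """Hypothetical one-dimensional world: the motor command shifts the state."""
    return state + motor

def controller(sensation):
    """Hypothetical internal link from sensor to actuator: move toward state 0."""
    return -0.5 * sensation

def run_loop(state=4.0, steps=20):
    for _ in range(steps):
        sensation = state                   # sensing: environment -> sensor
        motor = controller(sensation)       # internal link: sensor -> actuator
        state = environment(state, motor)   # acting: actuator -> environment
    return state

# Through its own activity the agent drives its future sensations toward 0,
# i.e. it structures its sensory input through ongoing interaction.
print(run_loop())
```

The point of the sketch is only to make the closure explicit: each sensation is a function of the agent's own previous actions, mediated by the environment.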

The presence of such a feedback loop is often seen as a necessary condition for intrinsically meaningful and goal-driven behavior (Cliff 1991).

One of the most significant contributions of enactive cognitive science has been its emphasis on the role of sensorimotor coordination in perception and cognition.

Introduction

Conclusions

References

Web address: http://froese.wordpress.com Copyright 2007


Q. Why is the existence of a closed sensorimotor feedback loop not a sufficient condition for the system to have intrinsic teleology?
A. Because in order for ‘sensation’ and ‘action’ to constitute purposive behavior there must be interposed between them a center of concern.

Jonas (1966) puts his finger on metabolism as the source of all intrinsic value and purpose. He proposes that we impute concern to an entity that carries on its existence by way of constant regenerative activity. In other words, the minimum concern is to be, i.e. to carry on being.
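Jonas' contrast between a mechanism and a metabolising entity can be made vivid with a toy simulation. This is our own illustration under invented assumptions (the decay and regeneration rates are arbitrary), not anything Jonas proposed: an entity whose fabric constantly degrades persists only for as long as its regenerative activity keeps up.

```python
# Toy contrast (illustrative assumptions only): a feedback device exists
# whether or not it is running, but this entity exists only while its
# regenerative activity outpaces the constant decay of its own fabric.

DECAY = 0.2   # hypothetical rate at which the entity's fabric degrades per step
REGEN = 0.3   # hypothetical regeneration gained per step of activity

def metabolise(integrity=1.0, active=True, steps=50):
    """Return True if the entity still exists after `steps` time steps."""
    for _ in range(steps):
        integrity -= DECAY          # existence is constantly being revoked
        if active:
            integrity += REGEN      # regenerative activity restores it
        if integrity <= 0.0:
            return False            # the entity has ceased to exist
    return True

print(metabolise(active=True))   # -> True: to keep going is to remain
print(metabolise(active=False))  # -> False: stopping is ceasing to be
```

Nothing hinges on the numbers; the sketch only makes explicit that, unlike a machine at rest, the entity's continued existence is an achievement of its own ongoing activity.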

Accordingly, a robot cannot be said to be an individual subject in its own right in the same way that a living organism can.

Biological individuality

It was Kant who first made the connection between intrinsic teleology and a modern understanding of self-organization (Weber & Varela 2002).

Kant refers to a living being as a natural purpose: “a thing exists as a natural purpose if it is (though in a double sense) both cause and effect of itself” (Kant 1790, §64).

He outlines two criteria that must be met for something to be considered a natural purpose: (i) every part exists for the sake of the others and the whole, and (ii) the parts combine into the unity of a whole because they are reciprocally cause and effect of their form.

Moreover, because of this self-organizing reciprocal causality all relations of cause and effect are also relations of means and purpose.

Note that both artifacts and organisms fulfill criterion (i) while (at the moment at least) only living beings also fulfill criterion (ii).

Natural purpose

Contemporary AI systems “cannot be rightly seen as centres of concern, or put simply as subjects, the way that animals can. […] Such robots can never be truly autonomous. In other words the presence of a closed sensorimotor loop does not fully solve the problem of meaning in AI” (Di Paolo 2003).

[Figure: closed sensorimotor loop — the ‘Agent’ and the environment are coupled through sensations and actions]

“a feedback mechanism may be going, or may be at rest: in either state the machine exists. The organism has to keep going, because to be going is its very existence – which is revocable – and, threatened with extinction, it is concerned in existing” (Jonas 1966, p. 126).

Can we make Jonas’ criticism more precise? Fortunately, his philosophy has recently been related both to the work of Kant and to that of Maturana and Varela (Weber & Varela 2002).

This connection will allow us to diagnose more clearly why current AI systems lack (i) intrinsic teleology, and (ii) autonomous agency.

Hans Jonas

Biological autonomy

“We truly stand on the shoulders of a giant” (Weber & Varela 2002)

“an autopoietic system is organized (defined as a unity) as a network of processes of production (synthesis and destruction) of components such that these components: continuously regenerate and realize the network that produces them, and constitute the system as a distinguishable unity in the domain in which they exist” (Weber & Varela 2002)
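The circularity in the definition quoted above can be sketched as a toy reaction network. This is a drastic simplification of our own devising (the two component types, their decay, and the substrate budget are all invented), not Maturana and Varela's formalism: each component is produced by processes that themselves depend on the components.

```python
# Highly simplified sketch (illustrative assumptions only) of a network of
# production processes whose components regenerate the very network that
# produces them. Components decay every step; each type is produced from
# substrate only by a process that requires the other type.

def step(components, substrate):
    """One round of destruction and mutual production; returns leftover substrate."""
    for name in components:                       # destruction
        components[name] = max(0, components[name] - 1)
    if substrate > 0 and components["B"] > 0:     # B-dependent production of A
        components["A"] += 2
        substrate -= 1
    if substrate > 0 and components["A"] > 0:     # A-dependent production of B
        components["B"] += 2
        substrate -= 1
    return substrate

def viable(components, substrate, steps=100):
    """True if the network keeps regenerating itself for the whole run."""
    for _ in range(steps):
        substrate = step(components, substrate)
        if components["A"] == 0 or components["B"] == 0:
            return False    # the network no longer regenerates itself
    return True

print(viable({"A": 3, "B": 3}, substrate=500))  # -> True: the unity persists
print(viable({"A": 3, "B": 3}, substrate=10))   # -> False: decay wins, it disintegrates
```

The second run illustrates the precariousness stressed in the text: once the production processes can no longer keep pace with decay, the unity does not merely stop working, it ceases to exist as a unity.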

The notion of autopoiesis as the minimal organization of the living first originated in the work of the biologists Maturana and Varela in the early 1970s (e.g. Maturana & Varela 1980).

It has given rise to an extensive tradition in systemic thinking that has influenced a diverse range of fields. Recently, there have been efforts to integrate the notion of autopoiesis more tightly into the framework of enactive cognitive science as the biological foundation of autonomous agency (e.g. Thompson 2007; Weber & Varela 2002).

Through the constitution of a global identity, autopoietic systems bring forth their own domain of interactions. They stand in relation to their worlds of meaning through mutual specification or co-determination (Varela 1992).

F. J. Varela

H. R. Maturana

Encounters are evaluated in relation to the viability of the self-constituted identity, and thereby acquire a meaning which is relative to the current situation and needs of the autonomous entity. This is what is meant by the notion of sense-making (Weber & Varela 2002).

An autopoietic unity