
ROBOTICS

Dr. Tom Froese

“Why does the burnt kitten avoid the fire?”

Ashby (1960)

The Ultrastable System (Homeostat)
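Ashby's answer to the opening question is ultrastability: the kitten's essential variables (e.g., skin temperature) must stay within viable bounds, and whenever they are driven outside those bounds, step-functions randomly reset the parameters of the behaving system, repeating the random search until the resulting behaviour keeps the variables inside again. The sketch below is a minimal illustration of that mechanism, assuming a toy one-dimensional dynamics rather than Ashby's actual Homeostat circuitry.

```python
# A minimal sketch of Ashby's ultrastable system (the mechanism behind the
# Homeostat in Design for a Brain, 1960): an "essential variable" must stay
# within fixed bounds; whenever it escapes them, step-functions randomly
# reset the system's parameters, and the random search repeats until the
# resulting behaviour is stable again. The linear dynamics below are an
# illustrative assumption, not Ashby's circuit equations.
import random

random.seed(1)

BOUND = 3.0                    # viability limits on the essential variable
w = random.uniform(-2, 2)      # a parameter of the behaving system
x = 1.0                        # the essential variable
stable_steps = 0

while stable_steps < 20:       # demand 20 consecutive in-bounds steps
    x = w * x + 0.1            # coupled agent-environment dynamics (toy)
    if abs(x) > BOUND:         # essential variable out of bounds: "pain"
        w = random.uniform(-2, 2)  # step-function: random re-parameterisation
        x = 1.0
        stable_steps = 0
        print(f"out of bounds: trying new parameter w = {w:+.2f}")
    else:
        stable_steps += 1

print(f"adapted: w = {w:+.2f} keeps the essential variable in bounds")
```

On this account the burnt kitten avoids the fire because pain triggers random reorganisation, and only parameter settings whose behaviour keeps the essential variables within limits survive.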

Adaptation to inverting goggles

• Erismann (1930s)

• Kohler (1950s and 60s)

• They demonstrated the plasticity of perceptual systems by perturbation.

• Inverted senses would, part by part, adapt over time and the perceived world would return to the “normal” state.

The rise of computer science

The “sense-think-act” cycle

LabVIEW Robotics (2014)
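A minimal sketch of what the sense-think-act cycle amounts to in code, assuming stub functions for each stage: classical AI robotics serialised perception, model-building, planning, and action in exactly this way, which is what the criticisms on the next slide target.

```python
# A minimal sketch of the classical "sense-think-act" control loop assumed
# by symbolic AI robotics: perception builds a world model, a planner
# reasons over that model, and only then does the robot act. The functions
# here are illustrative stubs.
import time

def sense():
    """Read sensors into a symbolic world model (stubbed)."""
    return {"obstacle_ahead": False}

def think(world_model):
    """Plan over the internal model, not over the world itself."""
    return "stop" if world_model["obstacle_ahead"] else "forward"

def act(action):
    print("executing:", action)

if __name__ == "__main__":
    for _ in range(3):          # each pass is one full cycle
        model = sense()         # world -> model
        action = think(model)   # model -> plan
        act(action)             # plan -> world
        time.sleep(0.1)         # the world may have changed meanwhile
```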

Challenges for symbolic AI

• Robustness

• Lack of noise tolerance and fault tolerance; lack of generalizability

• If a situation arises which has not been predefined, a traditional symbol-processing model will break down

• Integrated learning

• Learning mechanisms are ad hoc and imposed on top of non-learning systems

• Real-time performance

• Sequential processing

• Programs are sequential and work on a step-by-step basis

Pfeifer (1996)

The Frame Problem

Pfeifer (1996)

The Frame Problem

• The robot R1 has been told that its battery is in a room with a bomb and that it must move the battery out of the room before the bomb goes off.

• Both the battery and the bomb are on a wagon. R1 knows that the action of pulling the wagon out of the room will remove the battery from the room.

• It does so, and just as it gets outside, the bomb goes off.

• Poor R1 had not realized that pulling the wagon would bring the bomb out along with the battery.

Dennett (1984)

The Frame Problem

• The designers realized that the robot would have to be made to recognize not just the intended implications of its acts, but also their side-effects, by deducing these implications from the descriptions it uses in formulating its plans.

• They called their next model the robot deducer, or R1D1 for short, and ran the same experiment. R1D1 started considering the implications of pulling the wagon out of the room.

• It had just finished deducing that pulling the wagon out of the room would not change the color of the room's walls when the bomb went off.

The Frame Problem

• The problem was obvious. The robot must be taught the difference between relevant and irrelevant implications.

• R2D1, the robot-relevant-deducer, was tested next. The designers saw R2D1 sitting outside the room containing the ticking bomb. "Do something!" they yelled at it.

• "I am," it retorted. "I am busily ignoring some thousands of implications I have determined to be irrelevant. Just as soon as I find an irrelevant implication, I put it on the list of those I must ignore, and ..."

• ... the bomb went off.

The Symbol Grounding Problem

• “once we remove the human interpreter from the loop, as in the case of autonomous agents, we have to take into account that the system needs to interact with the environment on its own.

• Thus, if there are symbols in the system, their meaning must be grounded in the system's own experience in the interaction with the real world.

• Symbol systems in which symbols only refer to other symbols are not grounded because the connection to the outside world is missing. The symbols only have meaning to a designer or a user, not to the system itself.”

Pfeifer (1996); see also Harnad (1990)

Searle’s (1980) “Chinese room” argument

Braitenberg’s (1984) “Vehicles”
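Braitenberg's point is that lifelike behaviour can fall out of trivially simple wiring. The sketch below, with an assumed point light source and illustrative gains, implements his Vehicle 2: two light sensors drive two motors directly, and merely crossing the wires flips the behaviour from fleeing the light (“fear”, Vehicle 2a) to charging at it (“aggression”, Vehicle 2b).

```python
# A minimal sketch of Braitenberg's Vehicle 2 (1984): two light sensors
# drive two motors directly. Uncrossed wiring (2a) turns the vehicle away
# from the light ("fear"); crossed wiring (2b) turns it toward the light
# ("aggression"). The world model and gains here are illustrative.
import math

def sense(x, y, heading, offset, light=(0.0, 0.0)):
    """Light intensity at a sensor mounted at +/- offset from the heading."""
    sx = x + 0.1 * math.cos(heading + offset)
    sy = y + 0.1 * math.sin(heading + offset)
    d2 = (sx - light[0])**2 + (sy - light[1])**2
    return 1.0 / (1.0 + d2)  # brighter when closer

def simulate(crossed, steps=200, dt=0.05):
    x, y, heading = 2.0, 1.0, math.pi
    for _ in range(steps):
        left = sense(x, y, heading, +0.3)   # sensor on the left side
        right = sense(x, y, heading, -0.3)  # sensor on the right side
        if crossed:   # 2b: left sensor -> right motor, and vice versa
            v_left, v_right = right, left
        else:         # 2a: each sensor drives the motor on its own side
            v_left, v_right = left, right
        speed = (v_left + v_right) / 2
        turn = (v_right - v_left) * 5.0     # differential-drive turning
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        heading += turn * dt
    return x, y

print("crossed (approaches light):", simulate(crossed=True))
print("uncrossed (flees light):   ", simulate(crossed=False))
```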

Brooks’ (1991) “Creatures”

Herbert

Genghis

Allen

“The key observation is that the world is its own best model. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough.”

Brooks (1990)

Brooks’ (1991) “Creatures”

Behavior-based robotics

LabVIEW Robotics (2014)

Brooks’ “subsumption architecture”
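In the subsumption architecture, control is decomposed into behaviour-producing layers rather than a sense-think-act pipeline, and higher-priority layers can subsume (suppress) the outputs of the layers beneath them. Brooks built his layers from augmented finite state machines, so the sketch below is only a minimal priority-arbitration caricature, with assumed behaviours and sensor fields.

```python
# A minimal sketch of subsumption-style arbitration: behaviours are
# arranged in layers, and a higher-priority layer can subsume (override)
# the output of the layers beneath it. The behaviours and sensor fields
# are illustrative assumptions, not Brooks' actual robot code.

def avoid(sensors):
    """High-priority layer: turn away from anything too close."""
    if sensors["range"] < 0.3:
        return ("turn", -1.0)
    return None  # no opinion; defer to lower layers

def wander(sensors):
    """Low-priority layer: otherwise, keep moving forward."""
    return ("forward", 0.5)

# Highest priority first: the first layer that produces a command
# suppresses everything below it.
LAYERS = [avoid, wander]

def arbitrate(sensors):
    for behaviour in LAYERS:
        command = behaviour(sensors)
        if command is not None:
            return command

print(arbitrate({"range": 1.2}))  # ('forward', 0.5): wander is active
print(arbitrate({"range": 0.1}))  # ('turn', -1.0): avoid subsumes wander
```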

Brooks’ creatures and creatures-no-more

Cog

Baxter

Roomba

PackBot

Pfeifer’s (1996b) “Fungus Eaters”

Pfeifer’s (1996b) design principles

Humanoid robot walking

ASIMO takes a nasty fall down the stairs

Passive dynamic walking

• Collins et al. (2001) built a passive dynamic walking robot based on the ideas of McGeer.

• It was built from metal rods, springs, and weights.

• The robot could walk down a plank without power, sensors, or a control system (a minimal sketch of why such a gait is stable follows this list).

• The robot could also walk efficiently on a flat surface when given a small push.

• McGeer had previously noticed that adding knees made passive walking more stable for bipedal machines.
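The stability McGeer discovered can be seen in his simplest model, the “rimless wheel”: between foot strikes the machine gains energy rolling down the slope, and each inelastic foot strike removes a fixed fraction of it, so the stride speed settles into a limit cycle with no sensing or control at all. The sketch below iterates the wheel's step-to-step return map; the leg length, leg angle, and slope are illustrative assumptions, not the dimensions of the Collins et al. (2001) robot.

```python
# A minimal sketch of passive dynamic walking via the "rimless wheel",
# the simplest model in McGeer's analysis: energy gained descending each
# step is balanced against energy lost at each inelastic foot strike,
# yielding a stable gait with no sensors or control. All parameters are
# illustrative assumptions.
import math

g = 9.81       # gravity (m/s^2)
L = 1.0        # leg (spoke) length (m)
alpha = 0.2    # half-angle between successive legs (rad)
gamma = 0.05   # downhill slope of the ramp (rad)

def stride(omega):
    """Step-to-step return map: angular velocity after the next foot strike."""
    # The stance leg must carry enough speed to pivot over the vertical.
    if omega**2 < 2 * (g / L) * (1 - math.cos(alpha - gamma)):
        return None  # too slow: the walker falls back instead of stepping
    # Energy gained descending one step, then lost at the inelastic impact.
    omega_sq = omega**2 + 4 * (g / L) * math.sin(alpha) * math.sin(gamma)
    return math.cos(2 * alpha) * math.sqrt(omega_sq)

# Different launch speeds converge to the same steady gait: a stable
# limit cycle produced purely by gravity and mechanics.
for omega0 in (0.6, 1.5):
    omega = omega0
    for _ in range(50):
        omega = stride(omega)
    print(f"launched at {omega0:.1f} rad/s -> settles at {omega:.3f} rad/s")
```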

Passive dynamic walking

Collins et al. (2001)

New cognitive science (4E)

References

• Ashby, W. R. (1960). Design for a Brain: The Origin of Adaptive Behaviour (2nd ed.). London: Chapman & Hall

• Braitenberg, V. (1984). Vehicles: Experiments in Synthetic Psychology. Cambridge, MA: MIT Press

• Brooks, R. A. (1990). Elephants don't play chess. Robotics and Autonomous Systems, 6, 3-15

• Brooks, R. A. (1991). Intelligence without representation. Artificial Intelligence, 47(1-3), 139-160

• Collins, S. H., Wisse, M., & Ruina, A. (2001). A three-dimensional passive-dynamic walking robot with two legs and knees. International Journal of Robotics Research, 20(7), 607-615

• Dennett, D. C. (1984). Cognitive wheels: The frame problem of AI. In C. Hookway (Ed.), Minds, Machines and Evolution: Philosophical Studies (pp. 129-152). Cambridge: Cambridge University Press

• Harnad, S. (1990). The symbol grounding problem. Physica D: Nonlinear Phenomena, 42, 335-346

References

• Pfeifer, R. (1996). Symbols, patterns, and behaviour: Towards a new understanding of intelligence. Proceedings of the 10th Annual Conference of the Japanese Society for Artificial Intelligence (pp. 1-15). Tokyo: JSAI

• Pfeifer, R. (1996b). Building "fungus eaters": Design principles of autonomous agents. In P. Maes, M. J. Matarić, J.-A. Meyer, J. Pollack & S. W. Wilson (Eds.), From Animals to Animats 4: Proceedings of the Fourth International Conference on Simulation of Adaptive Behavior (pp. 3-12). Cambridge, MA: MIT Press

• Searle, J. R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences, 3(3), 417-424

Homework

Please read the whole article if possible:

• Di Paolo, E. A. (2015). El enactivismo y la naturalización de la mente. In D. Pérez Chico & M. G. Bedia (Eds.), Nueva Ciencia Cognitiva: Hacia una Teoría Integral de la Mente (in press). Zaragoza: PUZ

• Optional:

• van Gelder, T. & Port, R. F. (1995). It’s about time: An overview of the dynamical approach to cognition. In: R. F. Port & T. van Gelder (eds.), Mind as Motion: Explorations in the Dynamics of Cognition (pp. 1-43). Cambridge, MA: MIT Press
