
Human-robot interaction

Michal de Vries

Humanoid robots as cooperative partners for people

Breazeal, Brooks, Gray, Hoffman, Kidd, Lee, Lieberman, Lockerd and Mulanda

International Journal of Humanoid Robotics. Submitted 2003, published in 2004.

Overview

1) Introduction

2) Understanding others

3) Social robots

4) Meet Leonardo

5) Task learning

6) Discussion

1) Introduction

The goal of the paper: developing robots with social abilities.

Such robots can understand natural human instructions (such as natural language, gestures, emotional expressions).

Learning new skills should be done quickly. The aim is to create robots that play a role in the daily lives of ordinary people.

2) Understanding others

Theory of Mind: People attribute mental states (beliefs, desires, goals) to understand and predict behavior.

This is even the case with non-living things of sufficient complexity (Braitenberg).

Although this is far from scientific, it is surprisingly useful (Dennett, 1987).

Mirror neurons are a possible neural mechanism for the Theory of Mind (Gallese & Goldman, 1998).

3) Social robots

Human-robot collaboration: No master-slave relation between human and robot, but cooperating partners.

Joint Intention Theory: Doing something together as a team where the teammates share the same goal and same plan of execution.

Robots understanding others

Most robots treat people as objects, or interact with them the way a socially impaired person would.

Social robots must be capable of understanding the intentions, beliefs, goals and desires of people. They must also understand social cues (and produce readable cues themselves).

Such robots must be able to take multiple points of view. Common vs. partial knowledge.

How should social robots learn?

It is a trend in machine learning to eschew built-in structure or a priori knowledge of the environment. The main focus is on statistical learning techniques.

Such techniques need hundreds or thousands of examples to learn something successfully.

How should robots learn?

Learning without built-in structure is a problem: A robot needs to learn quickly. Learning in biology is robust and fast.

Furthermore, humans are also born with innate cognitive and behavioral machinery which develops in an environment.

So the authors use a combination of bottom-up and top-down processing.

4) Meet Leonardo

Leonardo: a robot with 65 degrees of freedom.

Leonardo's computational architecture

Understanding speech

Leo cannot speak, but has a natural language understanding system called Nautilus.

Nautilus supports, for instance, a basic vocabulary, simple contexts, and spatial relations.

The vision system

Leo perceives the environment with 3 camera systems.

A camera behind the robot to track people and objects in Leo's environment (peripheral information).

An overhead camera mounted in the ceiling that faces vertically down to track gestures and objects (color, position, shape, size).

The third camera system is in Leo's eyes and handles face recognition and facial-feature tracking.

Attention

Leo's attentional system computes the level of saliency (interest) for objects and events.

Three factors compute saliency: perceptual properties, internal states (the belief system), and socially directed reference.
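The combination of the three factors can be sketched as a weighted sum. The weights and feature values below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of a saliency computation combining the three
# factors from Leo's attentional system (weights are made up).

def saliency(perceptual, internal, social,
             w_perc=0.4, w_int=0.3, w_soc=0.3):
    """Combine the three factors into one interest score.

    perceptual: bottom-up stimulus properties (e.g. color, motion)
    internal:   relevance to the robot's current beliefs and goals
    social:     whether a person points at or looks at the object
    """
    return w_perc * perceptual + w_int * internal + w_soc * social

# A pointed-at button outranks a merely colorful toy.
toy    = saliency(perceptual=0.9, internal=0.2, social=0.0)
button = saliency(perceptual=0.4, internal=0.5, social=1.0)
assert button > toy
```

The socially directed factor is what lets a pointing gesture override purely perceptual salience, which matches the slide's emphasis on social reference.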

Beliefs

Seeing reflects the state of the world as it is directly perceived.

Beliefs are representational and are held even if they do not happen to agree with immediate perceptual experience.

Leo's belief system gets input (visual, tactile information and speech) and merges this information into a coherent set of beliefs.

Beliefs

Beliefs must be processed and updated correctly.

Leo can compare his beliefs with the beliefs of others. He must distinguish his own beliefs from those of others, but also know which beliefs are common knowledge.

Leo can represent the beliefs of others by monitoring, over time, where people look, their gestures, and what they say.
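The distinction between own, other, and common beliefs can be sketched with plain set operations; the belief tuples below are invented examples, not the paper's representation.

```python
# Hypothetical sketch of perspective-taking over beliefs: Leo keeps
# his own belief set and a modeled belief set per person, then
# derives common knowledge and private knowledge from them.

leo_beliefs    = {("button-1", "on"), ("hammer", "on-table")}
person_beliefs = {("button-1", "on")}  # the person never saw the hammer

common  = leo_beliefs & person_beliefs   # mutually known facts
private = leo_beliefs - person_beliefs   # facts only Leo holds

assert ("button-1", "on") in common
assert ("hammer", "on-table") in private
```

Tracking the person's gaze and gestures is what would populate `person_beliefs` in such a scheme.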

5) Task learning

Leo can learn from natural human instructions.

“Leo, this is a hammer.” On hearing its own name and the word “this” in combination with a pointing gesture at a hammer, the speech understanding system passes this knowledge to the spatial reasoning and belief systems.
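The deictic binding step can be sketched as follows; the function name, object id, and parsing rule are hypothetical, chosen only to illustrate label grounding.

```python
# Hypothetical sketch of grounding "Leo, this is a hammer": the word
# "this" plus a pointing gesture binds the spoken label to whatever
# object the gesture currently selects.

def ground_label(utterance, pointed_object, beliefs):
    """Attach a deictically introduced label to the pointed-at object."""
    if "this is a" in utterance and pointed_object is not None:
        label = utterance.rsplit("this is a", 1)[1].strip(" .")
        beliefs[pointed_object] = label
    return beliefs

beliefs = ground_label("Leo, this is a hammer.", "object-07", {})
assert beliefs == {"object-07": "hammer"}
```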

Leo can show whether he understands the instructions. “Leo, show me the hammer.”

Leo can evaluate its own capabilities.

Push the button

Leo learns pushing buttons

How Leo learns pushing buttons

Task: “Buttons-On-and-Off”. Leo indicates that he does not know such a task and goes into learning mode.

Subtask: “Button-On”. The same reaction. A person then teaches Leo this subtask by demonstrating it.

The tutor says “Press button 1” and turns it on. Leo encodes the goal state associated with an action performed in the tutorial setting by comparing the world state before and after its execution.

The same holds for the subtask “Button-Off”.

A schematic overview

All subtasks must be learned in order to master the overall task.
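The hierarchical requirement can be sketched as a check over a task tree; the task and goal names mirror the button example, but the data layout is an assumption.

```python
# Hypothetical sketch: an overall task decomposes into subtasks, and
# the robot only reports mastering it once every subtask has a
# learned goal.

learned_goals = {
    "Button-On":  {"button-1": "on"},
    "Button-Off": {"button-1": "off"},
}

task_tree = {"Buttons-On-and-Off": ["Button-On", "Button-Off"]}

def task_mastered(name, tree, goals):
    """A task is mastered when each subtask has a learned goal state."""
    return all(sub in goals for sub in tree[name])

assert task_mastered("Buttons-On-and-Off", task_tree, learned_goals)
```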

6) Discussion

Social skills (performing/recognizing) are important for robots in relation with humans.

No master-slave relation, but collaboration. Knowing what matters: attention. An action can be learned in more than one way:

Reinforcement learning (large state-action spaces -> large number of trials)

Learning by imitation (a lot faster, but requires innate knowledge)

Some remarks

Leo cannot speak, but speech is very important in social interaction.

The practical and biological implausibility of 3 camera systems, especially an overhead camera.

The biological implausibility of innate knowledge. Of course, we are biased towards some behaviour, but we do not have an a priori vocabulary.

No role for mirror neurons?

Extra Info

More information about Leo can be found at: http://robotic.media.mit.edu/projects/robots/leonardo/overview/overview.html

Questions?
