Aryel Beck
a.beck@ntu.edu.sg
Supervisor: Nadia Magnenat-Thalmann
Collaborators: Zhang Zhijun, Rubha Shri Narayanan, Neetha Das
10-03-2015
Motion Control for Social Behaviours
INTRODUCTION
• In order for companion robots to be socially accepted, they need to display appropriate social reactions and behaviours.
• They should use the same modalities as humans do (i.e. voice, facial and body expression).
-----> We need to build autonomous robots that have some social intelligence.
• Overall Research Question: Given a set of perceptions (enablers), how can a robot generate socially accepted behaviours?
• Our Research Question: Given a set of perceptions, how can a robot generate appropriate gaze behaviours?
What is Social Attention?
Social attention is the focus of cognitive processes on an individual or group in a mixed setting.
We can divide attention into two overlapping systems:
• Top-down attention and its role during social interactions
• Bottom-up attention and its role during social interactions
Gaze/Attention/Emotion
• People gaze longer and more directly at dominant positions (Dovidio 1985).
• Gaze can be approach oriented or avoidance oriented (Adams 2005).
• Eyes look away during cognitive processing (Mutlu 2013 and others).
• The frequency of blinks is related to muscle tension; an average blink rate can be used (Harris 1996).
• The normal blinking rate is 11.6/min; 19.6/min for schizophrenic patients (Itti 2004).
• Positive affect broadens and negative affect narrows the scope of attention (Fredrickson 2004).
• Affective states low in motivational intensity broaden, and affective states high in motivational intensity narrow, the scope of attention (Gable & Jones 2011).
• There is an increase in the rate of gaze-aways over time (Bickmore 2012).
• Gaze direction systematically influenced the perceived emotion disposition conveyed by neutral faces (Adams 2005).
Current Trend for Attention Systems
– Most of the systems in robotics use saliency maps.
– Common features used: color hue, sound localization (Ruesch 2008, Nakajima 2013).
– Data-driven methods (Mutlu 2013) and their shortcomings.
– These systems are not socially driven.
– For future work: Not much research on modeling the effect of emotion on attention.
Infotech 2011
Perception/Decision/Action
Perception
Decision
Action
Example of Social Robots that can realize the “Action”
The Nadine Robot
The Nadine robot uses pneumatic motors to display natural looking movements.
It has 27 Degrees of Freedom:
• 7 for the face
• 3 for the neck
• 7 in each arm
• 3 in the waist
• Main Classes of the controller:
– I2p Agent Control Server: i2p interface that receives instructions from the network.
– Nadine Controller: executes the commands, syncs the output and sends 1 frame to the hardware every 30 ms.
– Text to Speech: synthesizes the speech and produces the lip animation.
– Joint: stores the trajectory and state of each joint.
– XML Library of Animations: loads and stores the pre-defined animations (XML).
– Online Movement Generation: inverse kinematics and gaze.
Perception: Kinect Skeleton Tracker
Facial Expression Tracking(Screenshot)
Audio Source -Screenshot
Decision Mechanism
• Behavior-Based Architecture (bottom up) to drive attention as well as other social reflex behaviours.
• Behavior-based controllers consist of a collection of behaviors.
• Behaviors are processes or control laws that achieve and/or maintain goals.
• We use a "Winner Takes All" Coordinator.
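A behavior collection with a winner-takes-all coordinator can be sketched as follows. The `Behavior` class, the activation rules and the perception dictionary are illustrative assumptions, not the Nadine controller's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Behavior:
    name: str
    # Activation in [0, 1] for the current perceptions (0.0 = does not apply).
    activation: Callable[[dict], float]
    # Action command produced when this behavior wins.
    act: Callable[[dict], str]

def winner_takes_all(behaviors, perceptions) -> Optional[str]:
    """Run only the behavior with the highest activation."""
    winner = max(behaviors, key=lambda b: b.activation(perceptions))
    if winner.activation(perceptions) <= 0.0:
        return None  # no behavior applies this cycle
    return winner.act(perceptions)

# Two attention behaviors with made-up activation rules, for illustration:
behaviors = [
    Behavior("attend_speaker",
             lambda p: 1.0 if p.get("speech_detected") else 0.0,
             lambda p: f"gaze at user {p['speaker_id']}"),
    Behavior("attend_motion",
             lambda p: p.get("max_motion", 0.0),
             lambda p: f"gaze at user {p['moving_user_id']}"),
]
```

Because the coordinator picks a single winner per cycle, behaviors stay independent, and new social reflexes can be added by appending to the list without touching the others.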
Behavior Based Architecture for Social Attention
• Advantages of the Proposed System:
– Bottom-up decision mechanisms to model bottom-up attention.
– Advantage of modality saliency in comparison to saliency maps.
– Robust to sensor defects.
– Behaviours can be combined and layered to construct social intelligence.
– Local fusion: each behaviour has its own data structure and records.
– Other social reflexes can be added within the same architecture.
Our System
• Our decision system for Attention is composed of 3 behaviours.
– Direct Attention towards speaker.
– Direct Attention towards user through Vision.
– Direct Attention towards interesting Gestures.
Direct Attention towards speaker: Speaker Detection
(1) Compute the linear function given the direction of the sound.
(2) For each user, compute the linear function that passes "through" her.
(3) Compare the slopes of the linear functions to find the speaker.
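The three steps above can be sketched as follows. The geometry (sensor at the origin, user positions in the horizontal plane, angle measured from the x-axis) is an assumption for illustration, not the actual sensor conventions:

```python
import math

def find_speaker(sound_angle_deg, users):
    """users: {user_id: (x, z)} positions in the sensor frame (metres).
    Returns the id of the user whose direction best matches the sound."""
    # (1) Slope of the line along the estimated sound direction.
    sound_slope = math.tan(math.radians(sound_angle_deg))
    best_id, best_diff = None, float("inf")
    for uid, (x, z) in users.items():
        if x == 0:
            continue  # user exactly on the axis; handle separately if needed
        # (2) Slope of the line from the sensor through this user.
        user_slope = z / x
        # (3) The speaker is the user whose slope is closest to the sound's.
        diff = abs(user_slope - sound_slope)
        if diff < best_diff:
            best_id, best_diff = uid, diff
    return best_id
```

Comparing angles directly would be more robust near 90°, but comparing slopes matches the steps as the slide states them.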
Direct Attention towards user through Vision: Features Extracted
• The agent should focus more on users who are closer to her.
• Users are placed within their social distance.
Features Extracted
• Quantity of motion is related to the arousal dimension of emotion.
• We compute it as follows:
(1) Change of coordinates so that the origin is the Pelvis.
(2) For each upper body joint, compute the distance travelled from one frame to the next.
(3) Divide this value by the height of the user.
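The computation above can be sketched as follows. The joint names and frame layout are assumptions for illustration; the Kinect SDK's actual skeleton format differs:

```python
def quantity_of_motion(prev_frame, curr_frame, height):
    """Frames map upper-body joint names to (x, y, z) positions in metres;
    'Pelvis' must be present. height is the user's height in metres."""
    def to_pelvis(frame):
        # (1) Change of coordinates so that the origin is the Pelvis.
        px, py, pz = frame["Pelvis"]
        return {j: (x - px, y - py, z - pz) for j, (x, y, z) in frame.items()}

    prev, curr = to_pelvis(prev_frame), to_pelvis(curr_frame)
    total = 0.0
    for joint in curr:
        if joint == "Pelvis":
            continue
        # (2) Distance travelled by this joint from one frame to the next.
        dx, dy, dz = (c - p for p, c in zip(prev[joint], curr[joint]))
        total += (dx * dx + dy * dy + dz * dz) ** 0.5
    # (3) Normalise by the user's height so values are comparable across users.
    return total / height
```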
Features Extracted
• Summary Features used to drive the attention:
– Movement Detection
– Distance between user and agent (proxemics)
– Users Orientation (Kinect SDK)
Direct Attention towards user through Vision
• Overview:
– Each Kinect frame: record user positions, compute Quantity of Motion, Social Distance and Attention.
– Keep a short history of the recent events (Small buffer (100 frames))
– Score each user using a weighted average of their features.
– Look at the user with the highest score.
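The loop above can be sketched as follows. The feature names, the weights and the buffer handling are illustrative assumptions, not the system's actual values:

```python
from collections import deque

WEIGHTS = {"motion": 0.5, "proximity": 0.3, "orientation": 0.2}  # assumed
HISTORY = 100  # frames kept, matching the slide's small buffer

class AttentionTracker:
    def __init__(self):
        self.history = {}  # user_id -> deque of per-frame feature dicts

    def on_frame(self, observations):
        """observations: {user_id: {"motion": .., "proximity": .., "orientation": ..}}
        with each feature already scaled to [0, 1]. Returns the user to look at."""
        # Record this frame's features, keeping only the last HISTORY frames.
        for uid, feats in observations.items():
            self.history.setdefault(uid, deque(maxlen=HISTORY)).append(feats)
        best_uid, best_score = None, float("-inf")
        for uid, frames in self.history.items():
            # Score: weighted average of each feature over the recent buffer.
            score = sum(w * sum(f[name] for f in frames) / len(frames)
                        for name, w in WEIGHTS.items())
            if score > best_score:
                best_uid, best_score = uid, score
        return best_uid  # look at the user with the highest score
```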
System Testing and Integration
• Material for user testing:
– The attention system works with a higher "cognitive" level provided by a chatbot and Google speech-to-text.
• Rather than putting a video, please try the system after lunch (No food inside the room!).
• Work is still in progress, but feedback is useful.
Future Work
• Test the system with users.
• Add more sound categories to the attention system.
• Add an emotion layer that affects the behaviors. A lot of research in psychology points towards the effect of emotions on attention.
• Thanks for your attention!
Any questions?