
The Power of Suggestion: Teaching Sequences Through Assistive Robot Motions
Ross Mead and Maja J Matarić
{rossmead | mataric}@usc.edu
The Interaction Lab

Selected References

Feil-Seifer, D.J. & Matarić, M.J. (2006). “Shaping Human Behavior by Observing Mobility Gestures”. Poster paper in 1st Annual Conference on Human-Robot Interaction, pp. 337-338, Salt Lake City, UT.

Gat, E. (1998). “On Three-Layer Architectures”. In Artificial Intelligence and Mobile Robotics, Kortenkamp, D., Bonnasso, R.P., & Murphy, R., eds., AAAI Press, pp. 195-210.

Ramesh, A. & Matarić, M.J. (2002). “Learning Movement Sequences from Demonstration”. In Proceedings of the International Conference on Development and Learning, pp. 203-208.

Rao, P.A., Beidel, D.C., & Murray, M.J. (2008). “Social Skills Interventions for Children with Asperger’s Syndrome or High-Functioning Autism: A Review and Recommendations”. Journal of Autism and Developmental Disorders, 38(2), pp. 353-361.

Rogers, S.J. & Williams, J.H.G. (2006). Imitation and the Social Mind: Autism and Typical Development. New York, NY: The Guilford Press.

Tapus, A., Matarić, M.J., & Scassellati, B. (2007). “The Grand Challenges in Socially Assistive Robotics”. IEEE Robotics and Automation Magazine, 14(1), pp. 35-42.

Research Goal

Develop a framework for a humanoid robot to engage in interactive and adaptive game-playing activities with a human. Such games highlight joint attention, placing participants in situations where the goals are not immediately or explicitly clear but rather must be inferred from referential robot gestures and/or movements, encouraging forms of interaction that are useful within assistive domains.

Importance of social cues

Facial expressions, eye gaze, head movement, posture, gestures, and other nonverbal cues play a crucial role in what can be considered “typical” social interactions. However, some populations of children and adults face circumstances that impair such social development: children with autism spectrum disorder tend to avoid eye contact and thus often miss intentions and emotions expressed in the face and body; the early-to-moderate stages of Alzheimer’s disease often limit a patient’s vocabulary and hinder their ability to form coherent sentences; and post-stroke rehabilitation patients frequently have reduced motor activity, limiting their social expressiveness.

Social assistance

Interactive and engaging tools that explicitly promote the motions common in social cues are useful for assisting these populations. Clinical studies have demonstrated the effectiveness of social skills training programs in groups with special needs, and have proposed methods to enhance intervention strategies for different populations. Contemporary research suggests that physically embodied robotic systems can be used to improve social activity, in particular through instructional games that involve training through imitation.

Bandit Robot Platform

Effectors

19 degrees of freedom: 7 in each arm (shoulder forward and backward, shoulder in and out, elbow tilt, elbow twist, wrist twist, grabber open and close; left and right arms), 2 in the head (pan and tilt), 2 in the lips (upper and lower), and 1 in the eyebrows.
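As a quick sanity check of the breakdown above, the degree-of-freedom counts can be written out per group; the group labels below are illustrative, not the platform’s actual joint names.

```python
# Degree-of-freedom breakdown as described above; group labels are illustrative only.
DOF = {
    "left_arm": 7,
    "right_arm": 7,
    "head_pan_tilt": 2,
    "lips": 2,
    "eyebrows": 1,
}

assert sum(DOF.values()) == 19  # total matches the stated 19 degrees of freedom
```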

Sensors

Stereoscopic vision is provided by the color cameras in the robot’s eyes. Encoders in each of the joints allow for accurate motion control. The robot is mounted atop a Pioneer P2 mobile base, so additional sensors can be utilized, such as a sonar array or laser rangefinder for distance sensing, an upward-facing high-definition color camera for detailed tracking, and buttons for added user interaction.

Interaction

Its many degrees of freedom allow the robot to be highly expressive using individual and combined motions of the head, face, and arms. An extensive gesture and facial expression library has been developed to enhance the interactive experience. The robot is also closer in scale to human users than many other humanoid platforms; it stands one meter tall, making it well suited for interaction.
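To illustrate one way such a gesture library might be organized, the sketch below maps gesture names to timed joint-angle keyframes; the joint names, gesture names, and send_joint_command interface are hypothetical placeholders, not Bandit’s actual API.

```python
# Minimal sketch of a gesture library: each gesture is a list of timed keyframes,
# and each keyframe sets a subset of joints (angles in degrees). Names are illustrative.
import time

GESTURES = {
    "nod_yes": [
        (0.0, {"head_tilt": 15}),
        (0.4, {"head_tilt": -15}),
        (0.8, {"head_tilt": 0}),
    ],
    "point_right": [
        (0.0, {"right_shoulder_forward": 60, "right_elbow_tilt": 10}),
        (0.6, {"right_wrist_twist": 45}),
    ],
}

def send_joint_command(joint, angle_deg):
    """Placeholder for the platform's joint-level motion interface."""
    print(f"{joint} -> {angle_deg} deg")

def play_gesture(name):
    """Step through a gesture's keyframes, waiting out the time offsets between them."""
    last_t = 0.0
    for t, joint_targets in GESTURES[name]:
        time.sleep(t - last_t)
        for joint, angle in joint_targets.items():
            send_joint_command(joint, angle)
        last_t = t

if __name__ == "__main__":
    play_gesture("nod_yes")
```

Keyframed gestures like these can then be combined or layered (e.g., a head nod while pointing) to produce the composite head, face, and arm expressions described above.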


[Figure: Sequence Depths. Bar chart of sequence depth (buttons pressed) per participant (1-16), with series for No Assistance, Head Gestures, and Arm Gestures.]

[Figure: Assistance Levels. Bar chart of assistance level (buttons pressed) per participant (1-16), with series for Head Gestures and Arm Gestures.]

Why Sequences? A Preliminary Study

A single task-oriented motion can be used to shape a user’s behavior; however, simple behavior shaping does not suffice for special-needs populations. Connecting multiple shaping movements in a sequence results in behavior chaining. A sequencing game was chosen to demonstrate the feasibility of a robotic system capable of shaping and chaining human behavior.

For the purposes of this study, a sequence is defined as a series of button presses in a particular order. To make the game challenging and to better provide opportunities for interaction, the user is not told the sequence and is therefore presented with two choices: 1) determine the sequence by exploring different button combinations, or 2) elicit help and guidance from the robot. The sequence is initially short and simple, but increases in difficulty over time. At any point, the user may request assistance from the robot; in response, the robot engages in motions and behaviors that signify the task objective (e.g., eye gaze, head orientation, hand waving, and/or finger pointing).
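The game logic described above can be summarized in the following sketch, assuming an escalating-assistance policy and a reset-on-error rule; the button count, cue ordering, and function names are illustrative assumptions, not the study’s actual implementation.

```python
import random

# Assistance cues roughly ordered from least to most explicit; this ordering is an
# assumption for illustration, not necessarily the policy used in the study.
ASSISTANCE_CUES = ["eye_gaze", "head_orientation", "hand_wave", "finger_point"]

NUM_BUTTONS = 4  # illustrative; the poster does not specify the number of buttons

def new_sequence(length):
    """Generate a hidden target sequence of button indices."""
    return [random.randrange(NUM_BUTTONS) for _ in range(length)]

def cue_toward(button, level):
    """Placeholder: have the robot gesture toward `button` at the given assistance level."""
    print(f"Robot cue: {ASSISTANCE_CUES[level]} toward button {button}")

def play_round(get_press, wants_help, length):
    """Run one round: the user reproduces the hidden sequence, asking for help as needed.

    `get_press()` returns the index of the next button the user presses, and
    `wants_help()` returns True when the user requests assistance. Returns the
    number of assistance cues used in the round.
    """
    target = new_sequence(length)
    cues_used = 0
    position = 0
    while position < len(target):
        if wants_help():
            level = min(cues_used, len(ASSISTANCE_CUES) - 1)
            cue_toward(target[position], level)
            cues_used += 1
        if get_press() == target[position]:
            position += 1   # correct press: advance through the sequence
        else:
            position = 0    # assumed rule: an incorrect press restarts the sequence
    return cues_used
```

For example, play_round(get_press=lambda: int(input("button? ")), wants_help=lambda: False, length=3) runs a single three-button round with keyboard input and no robot assistance; calling it again with larger lengths models the game growing in difficulty over time.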