
Manipulation Under Uncertainty (or: Executing Planned Grasps Robustly)

Kaijen Hsiao, Tomás Lozano-Pérez, Leslie Kaelbling
Computer Science and Artificial Intelligence Lab, MIT
NEMS 2008

Manipulation Planning

If you know all shapes and positions exactly, you can generate a trajectory that will work

Even Small Uncertainty Can Kill

Moderate Uncertainty (not groping blindly)
Initial conditions (ultimately from vision):
Object shape is roughly known (contacted vertices should be within ~1 cm of actual positions)
Object is on the table and its pose (x, y, rotation) is roughly known (center-of-mass std ~5 cm, 30 deg)
Online sensing: robot proprioception, tactile sensors on fingers/hand
Planned/demonstrated trajectories (that would work under zero uncertainty) are given

Model uncertainty explicitly
"Belief state": a probability distribution over positions of the object relative to the robot
Use online sensing to update the belief state throughout manipulation (SE)
Select manipulation actions based on the belief state (π)

[Diagram: the controller (π) and state estimator (SE) in a loop with the environment; the SE produces the belief used by the controller; actions go to the environment, sensing comes back to the SE.]

State Estimation
Transition model: how robot actions affect the state. Do we move the object during the grasp execution? (Currently, any contact spreads out the belief state somewhat.)
Observation model: P(sensor input | state). How consistent are various object positions with the current sensory input (robot pose and touch)?
Bayes' rule combines the two into an updated belief.
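A minimal sketch of this belief update, assuming a discretized set of candidate object poses and a caller-supplied observation likelihood (the function names and the contact-spreading mixture are illustrative assumptions, not from the talk):

```python
import numpy as np

def update_belief(belief, poses, contact_made, observation_likelihood):
    """Bayes-rule update of a discrete belief over candidate object poses.

    belief: 1-D array of probabilities, one per pose in `poses`.
    observation_likelihood(pose) -> P(current sensor input | object at pose).
    """
    # Transition model: any contact may have moved the object, so spread
    # the belief out somewhat (here by mixing in a small uniform component).
    if contact_made:
        belief = 0.9 * belief + 0.1 / len(belief)

    # Observation model + Bayes' rule: weight each pose by how consistent it
    # is with the current robot pose and touch readings, then renormalize.
    likelihoods = np.array([observation_likelihood(p) for p in poses])
    posterior = belief * likelihoods
    return posterior / posterior.sum()
```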

Control: Three approaches
Formulate as a POMDP, solve for the optimal policy: continuous, multi-dimensional state, action, and observation spaces -> wildly intractable
Find the most likely state, plan a trajectory, execute: bad if the rest of the execution is open loop; maybe good if replanning is continuous, but too slow at execution time; will not select actions to gain information
Our approach: define new robust primitives, use the information state to select a plan, execute

Robust Motion Primitive
Move-until(goal, condition):
Repeat until the belief-state condition is satisfied:
  Assume the object is in its most likely location
  Guarded move to the object-relative goal
  If contact is made: undo the last motion, update the belief state
Termination conditions:
  Claims success: the robot believes, with high probability, that it is near the object-relative goal
  Claims failure: some number of attempts have not achieved the belief condition
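A sketch of this primitive, assuming a hypothetical robot interface with guarded_move (returns a contact observation, or None) and undo_last_motion, plus a belief_update callable like the Bayes-rule sketch above; none of these names come from the talk:

```python
import numpy as np

def move_until(robot, belief, poses, goal_offset, condition,
               belief_update, max_attempts=5):
    """belief_update(belief, observation) -> new belief over `poses`."""
    for _ in range(max_attempts):
        # Assume the object is at its most likely location; aim for the
        # object-relative goal.
        target = poses[int(np.argmax(belief))] + goal_offset
        observation = robot.guarded_move(target)   # guarded move: stops on contact
        if observation is not None:                # contact was made
            robot.undo_last_motion()                # undo the last motion
            belief = belief_update(belief, observation)
        if condition(belief):                       # e.g. high probability of being near the goal
            return True, belief                     # claims success
    return False, belief                            # claims failure (attempts exhausted)
```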

[Figure sequence: robust primitive example. Most likely robot-relative position vs. where the object actually is; initial belief state over (x, y, theta), summed over theta for easier visualization; the robot tries to move down and a finger hits the corner; probability of the observation given each location; updated belief, re-centered around the mean; trying again with the new belief: back up and try again.]

Executing a trajectory
Given a sequence of waypoints in a trajectory, attempt to execute each one robustly using move-until.
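A short sketch of that loop, reusing the hypothetical move_until from the block above (the function and its arguments are illustrative, not from the talk):

```python
def execute_trajectory(robot, belief, poses, waypoints, condition, belief_update):
    """Attempt each object-relative waypoint robustly; stop on the first failure."""
    for goal_offset in waypoints:
        ok, belief = move_until(robot, belief, poses, goal_offset,
                                condition, belief_update)
        if not ok:
            return False, belief   # claim failure for the whole trajectory
    return True, belief
```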

So, now we can try to close the gripper on the box:

Final state and observation

Grasp: observation probabilities

Updated belief state: Success!

Goal: variance < 1 cm x, 15 cm y, 6 deg theta

What if Y coord of grasp matters?

Need explicit information gathering

Use variance of belief to select trajectory

If this is your start belief, just run grasp trajectory
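An illustrative decision rule for that choice: compare the belief's spread against the goal thresholds (1 cm in x, 15 cm in y, 6 deg in theta, as on the slide above) to decide whether to run the grasp trajectory directly or gather information first. The particle representation and function name are assumptions for the example.

```python
import numpy as np

def needs_info_gathering(particles, weights,
                         thresholds=(0.01, 0.15, np.radians(6.0))):
    """particles: N x 3 array of (x, y, theta) pose hypotheses (m, m, rad);
    weights: length-N array of probabilities summing to 1."""
    mean = np.average(particles, axis=0, weights=weights)
    # Naive per-dimension standard deviation (ignores angle wraparound).
    std = np.sqrt(np.average((particles - mean) ** 2, axis=0, weights=weights))
    return bool(np.any(std > np.asarray(thresholds)))
```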

The Approach

[Architecture diagram: a strategy selector chooses a policy based on the current belief, drawing on a set of trajectories (grasp, poke, ...); a belief update incorporates sensor observations from the world; a command generator turns the policy and the most likely state into relative-motion robot commands.]

Strategy Selector
Planner to automatically pick good strategies based on start uncertainties and goals:
Simulate all particles forward using the selected robot movements, including tipping probabilities (tipping = failure)
Group into qualitatively similar outcomes
Use forward search to select trajectories / info-gathering actions
Currently use hand-written conditions on the belief state
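A minimal sketch of the forward-simulation idea: simulate every particle under each candidate trajectory, treat tipping as failure, group the weighted outcomes, and pick the candidate with the lowest failure probability. The simulate(trajectory, particle) -> (outcome_label, tipped) interface is a hypothetical placeholder, not from the talk.

```python
def score_strategy(particles, weights, trajectory, simulate):
    """Group weighted simulated outcomes for one candidate trajectory."""
    outcomes = {}
    for particle, w in zip(particles, weights):
        label, tipped = simulate(trajectory, particle)
        if tipped:
            label = "failure"                     # tipping counts as failure
        outcomes[label] = outcomes.get(label, 0.0) + w
    return outcomes                               # qualitatively similar outcomes, grouped

def select_strategy(particles, weights, candidates, simulate):
    # Pick the candidate trajectory with the lowest probability of failure.
    return min(candidates,
               key=lambda t: score_strategy(particles, weights, t,
                                            simulate).get("failure", 0.0))
```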

Grasping a Brita Pitcher

Target grasp:

Put one finger through the handle and grasp

Belief-Based Controller w/2 Info-Grasps

Brita Results

[Chart: percent grasped correctly (0-100%) vs. increasing uncertainty, with uncertainty standard deviations (cm in each direction, deg rotation) of (1 cm, 3 deg), (3 cm, 9 deg), (5 cm, 15 deg), and (5 cm, 30 deg). Conditions compared: demo with perfect info (robot stuck); MLS controller with contact grasp, static robot; plain most-likely-state controller, static robot; guarded moves (no belief); demo with imperfect info; MLS with 2 info-grasps, robot moving; MLS with 1 info-grasp, robot moving; MLS with 2 info-grasps, static robot.]

Related Work
Grasp planning without regard to uncertainty (can be used as input to this research) (Lozano-Pérez et al. 1992, Saxena et al. 2008)
Finding a fixed trajectory that is likely to succeed under uncertainty (Alterovitz et al. 2007, Burns and Brock 2007, Melchior and Simmons 2007, Prentice and Roy 2007)
Visual servoing (a large body of work)
Using tactile sensors to precisely locate the object before grasping (Petrovskaya et al. 2006)
Regrasping to find stable grasp positions (Platt, Fagg, and Grupen 2002)
POMDPs for grasping (Hsiao et al. 2007)

Current Work

Real robot results (7-DOF Barrett Arm/Hand and Willow Garage PR2)

Automatic strategy selection

Key Ideas

Belief-based strategy:

Maintain a belief state (updated based on actions and observations)

Express your actions relative to the current best state estimate

Choose strategies based on higher-order properties of your belief state (variance, bimodality, etc.)

Acknowledgements

This material is based upon work supported by the National Science Foundation under Grant No. 0712012. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

The End.

Box Results

Goal: 1 cm x, 1 cm y, 6 degrees theta

Object uncertainty: standard deviations of 5 cm x, 5 cm y, 30 degrees theta

Mean state controller with info-grasp: 120/122 (98.4%)

Cup Results

Goal: 1 cm x, 1 cm y

Uncertainty std    Met Goal
1 cm, 30 deg       150/152 (98.7%)
3 cm, 30 deg       62/66 (93.9%)
5 cm, 30 deg       36/40 (90.0%)