Robotic Grasping of Novel Objects using Vision
Ashutosh Saxena, Justin Driemeyer, Andrew Y. Ng
Computer Science Department, Stanford University, Stanford, CA


Abstract
What is a novel object, and what makes novel objects difficult to grasp? Two issues: no 3-D model of the object is available, and its grasping points are unknown.

The paper presents a learning algorithm that identifies points at which to grasp an object. It needs no 3-D model of the object, only two or more 2-D images, and it triangulates to obtain the 3-D location of the grasping point.

Two robotic manipulation platforms demonstrate the approach: grasping a wide variety of objects and unloading items from dishwashers.

Related Work
- Saxena et al., 2005; 2007d: most of the information in the 3-D structure may already be contained in the 2-D images.
- Castiello, 2005: learning from previous knowledge is an important component of grasping novel objects.
- Goodale et al., 1991: given only a quick glance at almost any rigid object, most primates can quickly choose a grasp to pick it up, even without knowledge of the object type.

Learning the Grasping Point
What is a grasping point? The approach has two steps: first, predict the 2-D location of the grasp in each image; second, given two (or more) images of the object taken from different camera positions, predict the 3-D position of a grasping point. Only objects that can be picked up without complex manipulation are considered.

Synthetic Data for Training
Supervised learning requires a labeled training set of image patches that do or do not contain grasping points. How is this training set generated? With a ray tracer: one model is created per object, and many rendered samples are generated from it, giving 2,500 examples drawn from five object classes: (a) martini glass, (b) mug, (c) eraser, (d) book, (e) pencil.

Probabilistic Model
They first predict whether each region in the image contains the projection of a grasping point. Because inaccuracies in the physical positioning of the arm lead to camera inaccuracies, they use a probabilistic model. Let z(u,v) = 1 mean that image location (u,v) is the projection of a grasping point; given the patch features x, they assume a logistic model, P(z = 1 | x) = 1 / (1 + exp(-w^T x)), whose parameter vector w is learned by standard maximum likelihood.

3-D Grasp Model
They discretize the 3-D workspace of the robotic arm into a regular 3-D grid and associate with each grid element j a random variable y_j, so that y_j = 1 if grid cell j contains a grasping point. From each camera location i = 1, ..., N one image is taken, and a predicted 2-D grasp location defines a ray from that camera. Let j_1, ..., j_K be the indices of the grid cells lying on the ray; then z_i = 1 if and only if y_{j_1} = 1 or y_{j_2} = 1 or ... or y_{j_K} = 1. Assuming that any grid cell along a ray is equally likely to be the grasping point, each such cell has probability 1/K given z_i = 1. Finally, using a naive-Bayes-like independence assumption across the N views, they choose the grid cell j in the 3-D robot workspace that maximizes the conditional log-likelihood in Eq. 5 of the paper.
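To make the two-step model above concrete, here is a minimal sketch in Python. It is hypothetical, not the authors' code: a logistic classifier P(z = 1 | x) over patch features trained by maximum likelihood, and a naive-Bayes-like combination of per-view probabilities over a discretized workspace grid. All function names, the toy data, and the layout of the probability matrix are illustrative assumptions.

```python
# Hypothetical sketch of the two-step grasp-point model (not the authors' code).
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_grasp_classifier(X, z, lr=0.1, iters=1000):
    """Fit P(z=1|x) = 1/(1+exp(-w.x)) by gradient ascent on the log-likelihood.
    X: (n, d) patch features; z: (n,) binary labels (1 = grasping point)."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ w)
        w += lr * X.T @ (z - p) / len(z)  # gradient of the mean log-likelihood
    return w

def predict_grasp_prob(X, w):
    """Probability that each patch is the projection of a grasping point."""
    return sigmoid(X @ w)

def best_grasp_cell(prob):
    """prob[i, j] = classifier probability for the patch that 3-D grid cell j
    projects to in view i. Under a naive-Bayes-like independence assumption
    across views, the most likely grasping cell maximizes the summed
    log-probability."""
    log_lik = np.log(np.clip(prob, 1e-9, 1.0)).sum(axis=0)
    return int(np.argmax(log_lik))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy stand-in for labeled synthetic patches: 500 examples, 10 features.
    X = rng.normal(size=(500, 10))
    z = (X[:, 0] + 0.1 * rng.normal(size=500) > 0).astype(float)
    w = train_grasp_classifier(X, z)
    acc = np.mean((predict_grasp_prob(X, w) > 0.5) == z)
    print(f"toy 2-D classifier training accuracy: {acc:.2f}")
    # Toy stand-in for per-view probabilities over 1000 workspace grid cells.
    prob = rng.uniform(size=(3, 1000))
    print("most likely grasping cell:", best_grasp_cell(prob))
```

The grid-cell scoring here is only a rough stand-in for the paper's Eq. 5; in the described system the per-view probabilities come from projecting each workspace grid cell into each camera image.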
Planning
After identifying a 3-D point at which to grasp an object, they need to find an arm pose that realizes the grasp and plan a path to reach that arm pose. On the 5-dof arm platform, the path is found using Probabilistic Road-Maps (PRMs) [Schwarzer et al., 2005]; a generic PRM-style sketch is given after the Conclusions below. On the 7-dof arm platform, they use the full algorithm in [Saxena et al., 2007b] for predicting the 3-D orientation of the grasp.

Experiments
Experiment 1: Synthetic data. Evaluating the predictive accuracy of the algorithm on synthetic images, the average accuracy for classifying whether a 2-D image patch is the projection of a grasping point was 94.2%.
Experiment 2: Grasping novel objects. Tested on the 5-dof robotic arm platform, the robot picked up novel objects 87.8% of the time on average.
Experiment 3: Unloading items from dishwashers.
Experiment 4: Grasping with the 7-dof arm. It achieved a grasping success rate of 60% for bowls and 80% for plates.

Conclusions
The algorithm generalizes very well to novel objects and environments. It is just the first step!
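The PRM-based path planning mentioned above can be illustrated with a short sketch. This is a generic, textbook-style probabilistic road-map in Python, given only for illustration; it is not the planner from [Schwarzer et al., 2005] or the authors' system, and the collision checker is_free, the joint limits, and the parameter values are hypothetical stand-ins.

```python
# Hypothetical, generic PRM-style planner sketch (illustration only).
import heapq
import numpy as np

def prm_plan(start, goal, is_free, lower, upper, n_samples=300, k=8, seed=0):
    """Sample collision-free configurations, connect k nearest neighbours when
    the straight segment between them is collision-free, then run Dijkstra
    from the start configuration (node 0) to the goal (node 1)."""
    rng = np.random.default_rng(seed)
    nodes = [np.asarray(start, dtype=float), np.asarray(goal, dtype=float)]
    while len(nodes) < n_samples + 2:
        q = rng.uniform(lower, upper)
        if is_free(q):
            nodes.append(q)
    nodes = np.array(nodes)

    def segment_free(a, b, steps=20):
        # Check the straight-line segment in configuration space at `steps` points.
        return all(is_free(a + t * (b - a)) for t in np.linspace(0.0, 1.0, steps))

    # Build the roadmap: k-nearest-neighbour edges with collision-free segments.
    edges = {i: [] for i in range(len(nodes))}
    for i, q in enumerate(nodes):
        dists = np.linalg.norm(nodes - q, axis=1)
        for j in np.argsort(dists)[1:k + 1]:
            j = int(j)
            if segment_free(q, nodes[j]):
                edges[i].append((j, float(dists[j])))
                edges[j].append((i, float(dists[j])))

    # Dijkstra shortest path on the roadmap.
    dist, prev = {0: 0.0}, {}
    heap = [(0.0, 0)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == 1:
            break
        if d > dist.get(i, float("inf")):
            continue
        for j, w in edges[i]:
            nd = d + w
            if nd < dist.get(j, float("inf")):
                dist[j], prev[j] = nd, i
                heapq.heappush(heap, (nd, j))
    if 1 not in dist:
        return None  # the roadmap did not connect start and goal
    path, i = [1], 1
    while i != 0:
        i = prev[i]
        path.append(i)
    return [nodes[i] for i in reversed(path)]

if __name__ == "__main__":
    # Toy 2-D configuration space with a circular obstacle around the origin.
    def is_free(q):
        return np.linalg.norm(q) > 0.3

    path = prm_plan([-1.0, -1.0], [1.0, 1.0], is_free,
                    lower=np.array([-1.5, -1.5]), upper=np.array([1.5, 1.5]))
    print("path found with", None if path is None else len(path), "waypoints")
```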