
Rigid and Non-Rigid Classification Using Interactive Perception

Bryan Willimon, Stan Birchfield, Ian Walker

Department of Electrical and Computer Engineering

Clemson University

IROS 2010

What is Interactive Perception?

Interactive Perception is the concept of gathering information about a particular object through interaction

Raccoons and cats use this technique, moving objects around with their front paws.

What is Interactive Perception?

The information gathered either complements information obtained through vision or adds new information that can't be determined through vision alone

Previous Related Work on Interactive Perception

P. Fitzpatrick. First Contact: an active vision approach to segmentation. IROS 2003

Segmentation through image differencing

D. Katz and O. Brock. Manipulating articulated objects with interactive perception. ICRA 2008

Learning about prismatic and revolute joints on planar rigid objects

Previous work focused only on rigid objects

Goal of Our Approach

Isolated Object → Learn about Object → Classify Object

Color Histogram Labeling

Use the color values (RGB) of the object to create a 3-D histogram

Each histogram is normalized by the number of pixels in the object to create a probability distribution

Each histogram is then compared to the histograms of previous objects for a match using histogram intersection* (a minimal sketch follows)

The white area is found using the same technique as in graph-based segmentation and is used as a binary mask to locate the object in the image
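Below is a minimal Python sketch of this step, not the authors' implementation: the 8-bins-per-channel resolution and the function names are assumptions, since the slides don't state them.

```python
import numpy as np

# `image` is an HxWx3 uint8 RGB array, `mask` a boolean HxW object mask.

def object_histogram(image, mask, bins=8):
    pixels = image[mask]                          # Nx3 RGB values on the object
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=((0, 256),) * 3)
    return hist / pixels.shape[0]                 # normalize to a probability distribution

def histogram_intersection(h1, h2):
    # Intersection of two normalized histograms: 1.0 = identical, 0.0 = disjoint.
    return np.minimum(h1, h2).sum()
```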

Skeletonization

Use the binary mask from the previous step to create a skeleton of the object

The skeleton is a single-pixel-wide medial axis of the area (prairie-fire analogy: the boundary burns inward, and the skeleton is where the fire fronts meet); a sketch follows the iteration figure below

[Skeletonization animation: iterations 1, 3, 5, 7, 9, 10, 11, 13, 15, 17, 47]
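A minimal sketch of the skeletonization step, assuming scikit-image's thinning as a stand-in for the prairie-fire process described above; the rectangular mask is a made-up example.

```python
import numpy as np
from skimage.morphology import skeletonize

# Thinning peels the boundary away layer by layer (the prairie-fire idea)
# until a single-pixel-wide skeleton remains.
mask = np.zeros((50, 50), dtype=bool)
mask[10:40, 20:30] = True        # stand-in for the object's binary mask
skeleton = skeletonize(mask)     # boolean array, True on skeleton pixels
```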

Monitoring Object Interaction

Use KLT feature points to track movement of the object as the robot interacts with it

Only concerned with feature points on the object and disregard all other points

Calculate the distance between each pair of feature points every flength frames (flength = 5); see the sketch below
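A minimal sketch of the distance bookkeeping, under the assumption that the tracked points come from a KLT tracker (e.g., cv2.calcOpticalFlowPyrLK); the coordinates below are made up.

```python
import numpy as np

# `pts_t` and `pts_t5` stand in for the same KLT features tracked
# flength = 5 frames apart; the values are invented for illustration.
pts_t  = np.array([[10., 10.], [20., 10.], [50., 40.]])
pts_t5 = np.array([[12., 11.], [22., 11.], [48., 55.]])

def pairwise_distances(pts):
    diff = pts[:, None, :] - pts[None, :, :]   # NxNx2 displacement tensor
    return np.linalg.norm(diff, axis=-1)       # NxN distance matrix

# How much each pairwise distance changed over the flength-frame window:
dist_change = np.abs(pairwise_distances(pts_t5) - pairwise_distances(pts_t))
```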

Monitoring Object Interaction (cont.)

Idea: features on the same part keep a constant pairwise distance, while features on different parts have a variable pairwise distance

Features are separated into groups by measuring the change in pairwise distance after flength frames

If the distance between two features changes by less than a threshold, they are placed in the same group; otherwise, they are placed in different groups

Separate groups correspond to separate parts of the object (see the grouping sketch below)
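One way to realize the grouping, sketched under the assumption that pairs whose distance change stays under the threshold form a graph whose connected components are the parts; the threshold value `tau` is a placeholder, not the one used in the paper. It reuses `dist_change` from the previous sketch.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

tau = 3.0                                  # assumed threshold (pixels)
same_part = dist_change < tau              # pairs that moved rigidly together
n_parts, labels = connected_components(csr_matrix(same_part), directed=False)
# labels[i] is the part index of feature i; each label = one object part.
```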

Labeling Revolute Joints using Motion

For each feature group, create an ellipse that encapsulates all of its features

Calculate the major axis of the ellipse using PCA

The end points of the major axis correspond to a revolute joint and the endpoint of the extremity (sketch below)
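A minimal sketch of the major-axis computation via PCA; the group coordinates are made up.

```python
import numpy as np

# PCA's dominant eigenvector of one feature group gives the axis
# direction; projecting the points onto it gives the two end points.
group_pts = np.array([[10., 10.], [14., 18.], [18., 26.], [22., 34.]])
mean = group_pts.mean(axis=0)
eigvals, eigvecs = np.linalg.eigh(np.cov((group_pts - mean).T))
major = eigvecs[:, np.argmax(eigvals)]     # unit vector along the major axis
proj = (group_pts - mean) @ major
end_a = mean + proj.min() * major          # one end: candidate revolute joint
end_b = mean + proj.max() * major          # other end: extremity endpoint
```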

Labeling Revolute Joints using Motion (cont.)

Using the skeleton, locate intersection points and end points

Intersection points (red) = rigid or non-rigid joints

End points (green) = interaction points

Interaction points are locations that the robot uses to "push" or "poke" the object (see the sketch below)
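A minimal sketch of locating these points by counting 8-connected neighbors on the `skeleton` from the earlier sketch; the 1-neighbor/3-neighbor rule is a standard heuristic and an assumption here, not taken from the slides.

```python
import numpy as np
from scipy.ndimage import convolve

# Count the 8-connected skeleton neighbors of each skeleton pixel:
# 1 neighbor -> end point (interaction point), 3+ -> intersection (joint).
kernel = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])
neighbors = convolve(skeleton.astype(int), kernel, mode='constant')
end_points    = skeleton & (neighbors == 1)
intersections = skeleton & (neighbors >= 3)
```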

Labeling Revolute Joints using Motion (cont.)

Map the estimated revolute joint from the major axis of the ellipse to the actual joint in the skeleton (sketch below)

After multiple interactions from the robot, a final skeleton is created with the revolute joints labeled (red)
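A minimal sketch of the mapping, assuming it can be approximated by snapping the ellipse-based estimate (`end_a` from the PCA sketch) to the nearest skeleton intersection from the previous sketch.

```python
import numpy as np

candidates = np.argwhere(intersections)            # (row, col) pixel coords
# end_a is (x, y); flip it to (row, col) before comparing.
d = np.linalg.norm(candidates - end_a[::-1], axis=1)
revolute_joint = candidates[np.argmin(d)]          # skeleton joint location
```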

Experimental Results

Sorting using socks and shoes

Articulated rigid object - pliers

Classification experiment - toys

Comparing objects of the same type to those of similar work*

Pliers from our results compared to shears in their results*

[Side-by-side images: our approach vs. the Katz-Brock approach]

*D. Katz and O. Brock. Manipulating articulated objects with interactive perception. ICRA 2008

Results: Articulated Rigid Object (Pliers)

Final Skeleton used for Classification

Results (cont.): Classification Experiment (Toys)

[Toys 1-4 shown]

Results (cont.): Classification Experiment (Toys)

[Toys 5-8 shown]

Results (cont.): Classification Experiment (Toys)

Classification Experiment without use of Skeleton

*Rows = Query image, Columns = Database image

Results (cont.): Classification Experiment (Toys)

Misclassification

Classification Experiment with use of Skeleton

*Rows = Query image, Columns = Database image

Results (cont.): Classification Experiment (Toys)

Classification Corrected

Results (cont.): Sorting using Socks and Shoes

[Items 1-5 shown]

Results (cont.): Sorting using Socks and Shoes

Classification Experiment without use of Skeleton

Misclassification

Results (cont.): Sorting using Socks and Shoes

Classification Experiment with use of Skeleton

Classification Corrected

Conclusion

The results demonstrated that our approach provides a way to classify rigid and non-rigid objects and label them for sorting and/or pairing purposes

Most of the previous work considers only planar rigid objects

This approach builds on and exceeds previous work in the scope of "interactive perception"

We gather more information through interaction, such as a skeleton of the object, its color, and its movable joints; other works only segment the object or find revolute and prismatic joints

Future Work

Create a 3-D environment instead of a 2-D environment

Modify the classification area to allow interactions from more than 2 directions

Improve the robot's gripper for more robust grasping

Enhance the classification algorithm and learning strategy

Use more characteristics to properly label a wider range of objects

Questions?
