
Haptic Exploration for 3D Shape Reconstruction using Five-Finger Hands

A. Bierbaum, K. Welke, D. Burger, T. Asfour, and R. Dillmann
University of Karlsruhe

Institute for Computer Science and Engineering (CSE/IAIM)
Haid-und-Neu-Str. 7, 76131 Karlsruhe, Germany

Email: {bierbaum,welke,asfour,dillmann}@ira.uka.de

Abstract—In order for humanoid robots to enter human-centered environments, it is indispensable to equip them with the ability to recognize and classify objects in such an environment. A promising way to acquire the object models necessary for object manipulation is to supplement the information gathered by computer vision techniques with data from haptic exploration. In this paper we present a framework for haptic exploration which is intended for use with both a five-finger humanoid robot hand and a human hand. We describe experiments and results on haptic exploration and shape estimation of 2D and 3D objects by the human hand. Volumetric shape data is acquired by a human operator's hand using a data glove. The exploring human hand is located by a stereo camera system, whereas the finger configuration is calculated from the glove data.

I. INTRODUCTION

In humans, different types of haptic exploratory procedures (EPs) for perceiving texture, weight, rigidity, contact, size and the exact shape of a touched object are known [1]. These EPs require the exploring agent to initiate contact with the object and are therefore also referred to as active touch sensing.

In this paper, the contour following EP for shape recovery is the subject of interest. A volumetric object model composed this way delivers a rather high amount of information for discriminating between objects. Also, volumetric object data is most suitable for supplementing and verifying geometric information in multimodal object representations.

Several approaches have been proposed for acquiring object shape information by robots through haptic exploration. An early, comprehensive experimental setup was presented in [2], where a dextrous robot hand was used in conjunction with a manipulator arm. The hand probed contact by enclosing the test objects at predefined positions and evaluating joint angle and force readings. The resulting sparse point clouds were fitted to superquadric models defined by a set of parameters describing shape and pose.

In addition to the contact locations, the contact normal information gathered during haptic exploration was used in [3]. Instead of a superquadric model, here a polyhedral model was chosen as the volumetric object representation. For object recognition the Euclidean distances of the polyhedral surface points to the borders of a surrounding cubic workspace box were measured at equidistant intervals and matched to those of synthetic models. This approach was evaluated only in simulation.

The basic kinematics of contact and its application in contour following are presented thoroughly in [4]. Moreover, several procedures for haptic exploration of object features have been investigated in [5], [6]. Other approaches in active contact sensing concentrate on the detection of local surface features [7].

In our approach, a framework has been developed that allows haptic exploration of unknown objects for recovery of the exact global shape using different types of manipulators equipped with contact sensing devices. This separates our work from approaches involving model-based pose estimation of known objects as in [8].

As we are interested in integrating the framework as the basis for haptic exploration in our humanoid robot system [9], which is equipped with two five-finger human-like and human-sized hands, we focus on the application of exploring with five-finger hands. In particular, the developed framework allows us to use the human hand of an operator as the exploring manipulator by deploying a data glove with attached tactile sensors. This gives us the opportunity to investigate the EPs of interest already, before the humanoid robot hand is fully integrated into our robot. It also provides rich possibilities for immediate comparison of haptic exploration results by a human versus a humanoid robot hand.

Our aim is to establish a robust exploratory process for 3D shape reconstruction and modeling which can be performed in an unstructured environment, requiring only observability of the hand pose, the finger configuration and the contact information. Currently we are limited by the constraint that the object being explored must remain in a fixed pose.

As the modeling primitive we chose an extended superquadric function as in [2], which can represent a variety of cubical and spherical geometries. In the context of grasp planning this type of representation has recently been investigated for modeling physical objects by superquadric decomposition [10]. Yet, in this paper we only address modeling of basic superquadric shapes. Also, we cannot give results for non-convex objects at this stage of our work, as those are not reflected by the chosen model, although the exploration process itself is not limited to convex objects.

This paper is organized as follows. In the next section the relevant details and components of our system for haptic exploration, focusing on object shape recovery, are described. This includes a description of the human hand model and visual tracking of the operator's hand. In Section III we present experiments on haptic exploration of 2D and 3D real-world objects. Finally we give a conclusion and an outlook on our future work in Section IV.

II. SYSTEM DESCRIPTION

Figure 1 gives a system overview with the components involved in the acquisition of contact points during the contour following EP.

Fig. 1. System for acquisition of object shape data from haptic exploration using a human or a humanoid robot hand as exploring manipulator.

During haptic exploration with the human hand, the subject wears a data glove that serves as an input device for calculating the joint angle configuration of the hand. The data glove we use is equipped with binary micro switches at the distal phalanges. When touching an object, the switches are actuated as the local contact force exceeds a given threshold. During actuation the clicking of the switch also provides the operator with a mechanical sensation that helps to control the contact pressure during exploration. The data glove is made of stretch fabric and uses 23 resistive bend sensors that provide measurement data of all finger joint angle positions and the orientation of the palm.

Before starting exploration the operator needs to calibrate the data glove sensors against the forward kinematics of the underlying hand model. Subject to calibration are abduction/adduction and flexion/extension of all fingers, the curvature of the palm and the thumb motion.

A linear relation is used for projecting glove sensor readings to joint angles. Calibration is accomplished by engaging positions which result in minimum and maximum sensor response in a separate calibration procedure.
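The linear projection from raw sensor readings to joint angles can be sketched as follows; the sensor counts and angle ranges below are hypothetical, not the glove's actual calibration values:

```python
import numpy as np

def calibrate_linear(raw_min, raw_max, angle_min, angle_max):
    """Build a per-sensor linear map from raw glove readings to joint angles,
    using the minimum/maximum responses recorded during calibration."""
    raw_min = np.asarray(raw_min, dtype=float)
    raw_max = np.asarray(raw_max, dtype=float)
    angle_min = np.asarray(angle_min, dtype=float)
    angle_max = np.asarray(angle_max, dtype=float)
    scale = (angle_max - angle_min) / (raw_max - raw_min)

    def to_angles(raw):
        raw = np.clip(raw, raw_min, raw_max)  # stay inside the calibrated range
        return angle_min + (raw - raw_min) * scale

    return to_angles

# Hypothetical calibration of two flexion sensors (raw counts -> radians)
to_angles = calibrate_linear([10, 20], [200, 250], [0.0, 0.0], [1.5, 1.2])
```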

Wrist position and orientation are determined visually in the reference frame of the camera coordinate system as described in Section II-B. During exploration the subject visually guides the finger tips along the desired paths. When the micro switch is actuated by touch, the current location of the sensor in the global reference frame is registered as a point in the 3D object point cloud. The sensor position is calculated using the forward kinematics of the hand model as described in the following section.

For exploration with our humanoid robot platform ARMAR-III we may later use a model for the forward kinematics of the robot hand as presented in [11]. The robot is equipped with an advanced version of this hand with joint angle encoders attached to all controllable degrees of freedom (DoF) and tactile sensors at the fingertips.

The data acquired as 3D point coordinates in the haptic point cloud set is finally used for performing a superquadric model estimation.

A. Human hand model

The forward kinematics of the human hand must be modeled accurately to transform the coordinates of the tactile sensor locations gathered from the data glove sensor readings to a global reference frame in which the resulting 3D contact point cloud is accumulated. Furthermore, the model is required to cover the entire common configuration space of the human hand and the data glove, so that the human operator is preferably not restricted in the choice of hand movements that can be projected to the model.

A complex hand model deploying 27 DoFs was introduced in [12] and used in several studies requiring exact models for hand pose estimation from sensor input ([13], [14]). The carpometacarpal (CMC) joints were fixed, assuming the palm to be a rigid part of the hand. The fingers were modeled as serial kinematic chains, attached to the palm at the metacarpophalangeal joints (MCPs). The interphalangeal (IP), distal interphalangeal (DIP) and proximal interphalangeal (PIP) joints have one DoF for flexion and extension. All MCP joints have two DoFs, one for flexion and extension and one for abduction and adduction. The CMC joint of the thumb is modeled as a saddle joint. Several variants of this model exist in the literature (see [15], [16]).

Fig. 2. The hand model used for haptic exploration by a human operator wearing a data glove.


The hand model we use in the presented framework is shown in Fig. 2. We have added two modifications to the basic model to improve the representation of real human hand kinematics. The first modification affects the modeling of the thumb's CMC joint. Following [16], the first metacarpal of the thumb performs a constrained rotation around a third orthogonal axis in the CMC joint, which contrasts with the CMC joint model as a two-DoF saddle joint. For reasons of simplicity we model this joint as a three-DoF joint.

The second modification is to overcome the inadequate representation of the palm as a rigid body. As we want to incorporate the distal phalanges of the ring and little finger in the exploration process, we have extended the model by adding one DoF at the CMC of each of these fingers. This reflects the ability of the palm to fold and curve when the little finger is moved towards the thumb across the palm's inner side [17]. It is important to model this behavior as a human operator will occasionally utilize these types of movement when touching an object with the whole hand.

The resulting hand model consists of 26 DoFs. Each of the four fingers has one DoF at the DIP, one at the PIP and two at the MCP (4, 4 and 8 DoFs in total). The thumb is modeled with 1 DoF at its IP, its MCP is modeled with 2 DoFs and its CMC, as mentioned before, with a 3-DoF joint. Additionally we model the palm with 2 DoFs representing the CMCs of the little and ring fingers and add 2 DoFs for the wrist movement.
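Since the fingers are serial kinematic chains, the position of a fingertip contact sensor follows from composing one homogeneous transform per joint. A minimal flexion-only sketch of this forward kinematics (link lengths and angles are illustrative, not the model's actual parameters):

```python
import numpy as np

def rot_z(q):
    """Homogeneous rotation about the local z-axis (one flexion DoF)."""
    c, s = np.cos(q), np.sin(q)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans_x(d):
    """Homogeneous translation along the phalanx (local x-axis)."""
    T = np.eye(4)
    T[0, 3] = d
    return T

def fingertip(flexion, lengths, T_palm=np.eye(4)):
    """Chain palm -> MCP -> PIP -> DIP -> tip: alternate one rotation and
    one link translation per joint; returns the tip position in the palm frame."""
    T = T_palm.copy()
    for q, l in zip(flexion, lengths):
        T = T @ rot_z(q) @ trans_x(l)
    return T[:3, 3]
```

In the full model each joint contributes its own local reference frame (including abduction/adduction axes at the MCPs), but the composition principle is the same.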

We have used the Open Inventor™ standard1 for constructing the hand model. This 3D modeling package allows the implementation of local reference frames as described before in a transparent way.

B. Wrist tracking

As mentioned earlier, the absolute position and the orientation of the data glove are determined using vision. In order to track the data glove in a robust manner, we use a marker bracelet which is attached to the wrist of the subject wearing the data glove and is tracked using a stereo camera system. Figure 3 shows the marker bracelet used for our experiments. The bracelet comprises twelve red markers for tracking and one green marker for initialization. All markers are printed on a yellow background. The wrist localization consists of two phases. In the initialization phase, the pose of the wrist is estimated without knowledge of past poses. Once an initial pose has been calculated, a particle filter approach is used to track the pose of the wrist.

For the initialization, the relevant markers are identified by HSV color segmentation performed on the current scene. Only green and red markers inside yellow areas are considered for color segmentation. With the calibration of the stereo camera system, the 3D coordinates of the green marker and the second and third closest red markers are calculated. Since these markers all lie in the same plane, the plane normal can be calculated from this information. Given the radius of the bracelet, the center of the circle described by the three considered markers may be calculated. The x-axis of the coordinate system (denoted red) can be derived from the plane normal and center. The y-axis (denoted green) is calculated from the difference of the green marker and the center. The z-axis is calculated as the cross product of the x- and y-axes.

1 http://oss.sgi.com/projects/inventor/

Fig. 3. Bracelet and coordinate system.
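The frame construction can be sketched as follows. For illustration the circle center is recovered as the circumcenter of the three coplanar markers; the actual system additionally uses the known bracelet radius, which is not needed in this simplified variant:

```python
import numpy as np

def wrist_frame(green, red1, red2):
    """Build the bracelet frame from three coplanar markers:
    circle center -> x-axis (plane normal), y-axis (green minus center),
    z-axis (cross product). Returns (center, 3x3 rotation with axes as columns)."""
    u, v = red1 - green, red2 - green
    w = np.cross(u, v)                      # plane normal (unnormalized)
    # Circumcenter of the three points, expressed in the plane they span
    center = green + (np.cross(w, u) * v.dot(v) +
                      np.cross(v, w) * u.dot(u)) / (2.0 * w.dot(w))
    x = w / np.linalg.norm(w)               # x-axis: plane normal
    y = green - center
    y /= np.linalg.norm(y)                  # y-axis: towards the green marker
    z = np.cross(x, y)                      # z-axis completes the frame
    return center, np.column_stack([x, y, z])
```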

After initialization, the bracelet is tracked using a particle filter [18] based on a model of the bracelet comprising all 12 red markers. The configuration of the model is defined by the 6D pose of the band. In each iteration of the particle filter algorithm 100 new configurations are generated using a Gaussian distribution with variance σ² = 0.45. Furthermore the movement of the model is estimated by taking into account the movement in the previous frame. In order to retrieve the estimated 6D pose of the wrist band, the visible part of the model is projected into both camera images using the camera calibration. To validate each particle, a weighting function is used which compares the segmentation mask for the red color with the model. In order to derive a weighting function for each configuration, we count all red pixels inside yellow regions f and all red pixels which overlap with the projected model m. The probability for each particle z and the current images i can then be formulated in the following way:

p(z|i) ∝ exp(λ · m/f)

where λ defines the section of the exponential function which is used. After all particles have been weighted according to this equation the 6D pose is estimated by the weighted sum of all configurations.
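A minimal sketch of the weighting and pose-estimation step; λ and the pixel counts m, f are inputs here, and particle generation and model projection are omitted:

```python
import numpy as np

def estimate_pose(particles, m_counts, f_counts, lam=10.0):
    """Weight each 6D pose particle by exp(lam * m/f), where m counts red
    pixels overlapping the projected model and f counts all red pixels inside
    yellow regions; return the weighted sum of all configurations.
    lam = 10.0 is an illustrative value, not the system's actual setting."""
    m = np.asarray(m_counts, dtype=float)
    f = np.asarray(f_counts, dtype=float)
    w = np.exp(lam * m / f)
    w /= w.sum()                         # normalize to a probability
    return w @ np.asarray(particles, dtype=float)
```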

C. Superquadric fitting for shape estimation

The concept of superquadrics was introduced in [19] as a family of parametric 3D shapes, among which the superellipsoid has become the most popular one and is therefore often simply termed superquadric, a convention we will preserve here.

A superquadric centered at the origin and with its axes aligned to the x, y, z coordinate axes can be described by the following parametric equation:

χ(η, ω) = ( a1 cos^ε1(η) cos^ε2(ω),
            a2 cos^ε1(η) sin^ε2(ω),
            a3 sin^ε1(η) )ᵀ .


The parameters a1, a2, a3 describe the extent of the superquadric along each axis. The exponents ε1, ε2 ∈ [0, 2] produce a variety of convex shapes and describe the shaping characteristics from cubic to round in the x and y directions. This way different 3D primitive shapes can be modeled, e.g. boxes (ε1, ε2 ≈ 0), cylinders (ε1 ≈ 0, ε2 = 1) and ellipsoids (ε1 = ε2 = 1).
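Sampling the parametric equation over η and ω yields surface point sets for these primitive shapes; a small sketch, using the customary signed-power form |·|^ε · sign(·) to keep the exponents well defined for negative cosine/sine values:

```python
import numpy as np

def superquadric_points(a1, a2, a3, e1, e2, n=40):
    """Sample the superellipsoid surface chi(eta, omega) on an n x n grid
    and return the points as an (n*n, 3) array."""
    def fexp(x, e):
        # signed power: |x|^e * sign(x), well defined for e < 1
        return np.sign(x) * np.abs(x) ** e

    eta = np.linspace(-np.pi / 2, np.pi / 2, n)
    omega = np.linspace(-np.pi, np.pi, n)
    eta, omega = np.meshgrid(eta, omega)
    x = a1 * fexp(np.cos(eta), e1) * fexp(np.cos(omega), e2)
    y = a2 * fexp(np.cos(eta), e1) * fexp(np.sin(omega), e2)
    z = a3 * fexp(np.sin(eta), e1)
    return np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
```

With e1 = e2 = 1 and equal extents this reduces to the usual sphere parametrization; pushing the exponents towards 0 flattens the surface towards a box.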

To locate the superquadric arbitrarily in space, we further introduce a rotation matrix R and a translation vector x0, which add 6 more parameters to our model.

As superellipsoids are restricted to symmetric shapes only, we also add deformation parameters tx, ty ∈ [−1, 1] for modeling tapering in the z direction as described in [20]. This enables our model to also represent wedge-resembling shapes. Applying a scaled tapering deformation function

Dt(x, y, z) = ( (tx · z/a3 + 1) · x,
               (ty · z/a3 + 1) · y,
               z )

we finally get the model function

m = R · Dt(χ(η, ω)) + x0 .
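Tapering and pose can then be applied to sampled model points; a sketch with illustrative parameter values:

```python
import numpy as np

def taper_and_place(points, tx, ty, a3, R=np.eye(3), x0=np.zeros(3)):
    """Apply the tapering deformation Dt (linear scaling of x and y with z),
    then the pose: m = R * Dt(chi) + x0. points is an (N, 3) array."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    fx = tx * z / a3 + 1.0       # tapering factor for x
    fy = ty * z / a3 + 1.0       # tapering factor for y
    deformed = np.stack([fx * x, fy * y, z], axis=1)
    return deformed @ R.T + x0
```

With tx = ty = 0 the deformation is the identity; tx = −1 collapses the x extent to zero at the top of the shape (z = a3), giving a wedge.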

To estimate the 11 parameters of our superquadric model from the 3D contact point data we use the Levenberg-Marquardt non-linear least-squares algorithm [21] to minimize the radial Euclidean distance d between the data points and the superquadric surface,

d = ‖x‖ (1 − 1/F(x)),

as proposed in [22]. Here, F(x) is the inside-outside function of the superquadric, which has a value of 1 for points x = (x, y, z)ᵀ on the surface of the superquadric, while points inside result in F < 1 and points outside result in F > 1.
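A reduced sketch of the fit using SciPy's Levenberg-Marquardt implementation, estimating only the extents a1, a2, a3 with the exponents and pose held fixed for simplicity (the full model estimates all parameters jointly); the inside-outside function used here is the standard superellipsoid form:

```python
import numpy as np
from scipy.optimize import least_squares

def fit_superquadric_axes(points, e1=1.0, e2=1.0):
    """Fit the extents a1, a2, a3 of an axis-aligned superquadric to an
    (N, 3) point cloud by minimizing the radial distance
    d = ||x|| * (1 - 1/F(x)) with Levenberg-Marquardt."""
    p = np.abs(points) + 1e-9     # F uses even powers; guard against 0^e
    r = np.linalg.norm(points, axis=1)

    def residuals(a):
        # inside-outside function F of the superellipsoid
        F = ((p[:, 0] / a[0]) ** (2 / e2) +
             (p[:, 1] / a[1]) ** (2 / e2)) ** (e2 / e1) \
            + (p[:, 2] / a[2]) ** (2 / e1)
        return r * (1.0 - 1.0 / F)

    res = least_squares(residuals, x0=[1.0, 1.0, 1.0], method='lm')
    return res.x
```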

III. EXPERIMENTS FOR OBJECT SHAPE RECOVERY

For evaluation of the system described above we have performed experiments related to exploration by a human subject. The experimental setup is shown in Fig. 4.

The region viewable by the stereo camera system defines the workspace in which the objects to be explored have to reside. The human operator wears a data glove with the wrist marker attached and performs a contour following EP with the tip of the index finger upon the object, i.e. the human operator visually guides the contact sensor along the contours. As mentioned earlier, the micro switch for contact sensing is also located at the distal phalanx. During the EP the object is fixed within the workspace. The human operator is allowed to move hand and fingers arbitrarily within the workspace as long as the marker bracelet can be visually detected and localized. In case the marker localization fails, the subject needs to move the hand until it can be detected again.

Fig. 4. Experimental setup. Here a planar grid structure is subject to contour following.

Fig. 5. Resulting point clouds for tactile contour following of planar shapes ((a) triangle contour, (b) grid structure contour) and corresponding fitted planes. Red points are situated below the plane, green points above.

A. 2D contour following in 3D space

As an initial experiment we chose to follow the visible contours of a planar structure to verify whether the exploration system delivers length- and angle-preserving point clouds. These properties were inspected visually from the resulting point cloud data. We calculated the PCA for all points in the point cloud to approximate the plane in which the structures are located. Further, we determined the standard deviation of the point locations with respect to this plane.
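The PCA-based plane approximation and the deviation measure can be sketched as:

```python
import numpy as np

def fit_plane_pca(points):
    """PCA plane fit of an (N, 3) point cloud: the plane normal is the
    direction of smallest variance (last right singular vector of the
    centered data). Returns (centroid, normal, std of signed distances)."""
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c)
    n = Vt[-1]                      # smallest-variance direction
    dist = (points - c) @ n         # signed point-to-plane distances
    return c, n, dist.std()
```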

As planar shapes we chose a circle, an isosceles triangle and a 3 × 3 grid. The edge length of the bounding box of each of these shapes was set to 160 mm. The subject followed the contours of a printout of the respective shape with the index finger. The resulting point clouds for the triangle and grid contours are shown in Fig. 5. The figures show that the contours of the test shapes are situated in different planes, which originates from a change in the position of the camera system between the two explorations. The duration of the exploration process was 40 seconds for the circle and triangle shapes and 2 minutes for the grid structure. Exploration speed was mainly limited by the performance of the wrist tracking algorithm.

During circle contour following 245 contact points were acquired; the standard deviation to the fitted plane was calculated as σ = 5.37 mm. For the triangle contour following exploration 259 data points were acquired with σ = 6.02 mm. For the grid exploration, finally, 1387 data points were acquired with σ = 5.86 mm.

B. Edge tracking of a 3D object

In this experiment the subject had to follow the edges of a rectangular box situated on a table in the workspace. The exploration procedure delivered 1334 data points; the resulting point cloud is depicted in Fig. 6. The box dimensions were 150 × 50 × 120 mm and therefore in the range of the dimensions of the human hand itself.

Fig. 6. Resulting point cloud for haptic contour following of a rectangular box.

It is not possible to directly fit exploration data which comprises only points from the edges of an object to a superellipsoid, as the data is ambiguous in describing the adjacent surfaces without additional surface normal information. Yet, we wanted to prove the system's capability to generate contiguous 3D point clouds of real-world objects.

C. Arbitrary touch exploration and superquadric fitting of a 3D object

In a further experiment the exploring subject acquired contact coordinate information by arbitrarily touching all reachable surfaces of the objects, while no preference was given to the edges. For exploration a different box and an upside-down salad bowl were chosen. As all fingers were involved, the exploration process could be performed within less than a minute while still acquiring enough data points. During the exploration care was taken not to acquire too many data points, as this significantly extends the calculation time for the superquadric estimate. From the acquired data, approximating superquadric representations were successfully estimated, as shown in Fig. 7.

The estimation algorithm could also handle incomplete surface point data, as in the case of the box, where it was not possible to acquire point data from the bottom side on which it was situated. The acquired data set comprised 176 points.

For the salad bowl the resulting data set comprised 472 points. As the superquadric function involved naturally describes only convex shapes, the approximation of the salad bowl was rendered in the figure only in the representative range ω ∈ [−π, 0], which describes one half of the superellipsoid.

In the case of the explored objects the extension coefficients a1, a2, a3 of the superquadric model were in good correspondence with the dimensions of the real objects.

Fig. 7. Surface exploration data and superquadric approximation for a box and a salad bowl.

It can be seen in the result plots, especially in Figs. 6 and 7, that the haptic point cloud exhibits basic noise and outliers in some regions. From these experiments we identified several factors affecting the quality of the resulting point clouds:

1) The parametrization of our human hand model introduces a static error which could be minimized by adapting the model to the data glove's basic dimensions. Yet, hands of different individuals lead to irregular stretching of the glove fabric and occasionally to misalignment between finger joints and strain gauge sensors. This introduces a static model error which is not covered by the calibration process. We decided not to address this issue as our final goal is the operation of the system with a humanoid robot hand, where these problems cease to apply.

2) The measurement signals of the glove's strain gauge sensors are naturally afflicted with background noise which limits the resolution of physical features. We also expect this type of measurement noise for the joint angle sensors of our robot hand.

3) Besides the above, the pose data from visual tracking is superimposed by anisotropic interference, as the estimation of the wrist's z-coordinate shows a significantly higher noise level than the x- and y-coordinates, which is natural for stereo camera systems. Also, some noise arises in the pose estimation, as the bracelet still has some clearance to move when worn by an individual. Further, estimation uncertainty increases if only a low number of markers can be detected in the scene, e.g. due to lighting conditions. The latter problems will be diminished when using a robot arm for exploration, as we can attach the marker band in a stable position and get additional pose information from joint angle sensors. This can be used in conjunction with the visual tracking system using data fusion.

IV. CONCLUSIONS AND FUTURE WORK

In this paper we presented a framework for acquiring and estimating volumetric object models via haptic exploration by contour following. In a first evaluation, results for haptic exploration of 2D and 3D shapes by a human subject wearing a data glove in an unstructured environment were described.

It could be shown that the underlying human hand model and the data acquisition process are sufficiently precise to acquire contact position data when exploring the shape of objects of a size comparable to the human hand itself, while the exploration process could be performed using the modeled set of degrees of freedom of the human fingers. Further, we could demonstrate fitting the acquired contact data to superquadric shapes.

As a next step we will address the transfer of the haptic exploration framework to our humanoid robot platform. The platform already incorporates the same stereo vision system as used for the experiments described in this paper. A tactile sensor system for the robot hand has been developed that provides more information than the binary switches deployed for exploration by a human.

Hence, the focus of our work will move to the development of autonomous and robust visually guided haptic exploration strategies for shape recovery by a humanoid robot with five-finger hands.

ACKNOWLEDGEMENT

The work described in this paper was conducted within the EU Cognitive Systems project PACO-PLUS (FP6-2004-IST-4-027657) funded by the European Commission and the German Humanoid Research project (SFB 588) funded by the German Research Foundation (DFG: Deutsche Forschungsgemeinschaft).

REFERENCES

[1] Susan J. Lederman and Roberta L. Klatzky, "Hand movements: A window into haptic object recognition," Cognitive Psychology, vol. 19, no. 3, pp. 342–368, 1987.

[2] P.K. Allen and K.S. Roberts, "Haptic object recognition using a multi-fingered dextrous hand," in Robotics and Automation, 1989. Proceedings., 1989 IEEE International Conference on, 14-19 May 1989, pp. 342–347 vol. 1.

[3] S. Caselli, C. Magnanini, and F. Zanichelli, "Haptic object recognition with a dextrous hand based on volumetric shape representations," in Multisensor Fusion and Integration for Intelligent Systems, 1994. IEEE International Conference on MFI '94, 2-5 Oct. 1994, pp. 280–287.

[4] David J. Montana, "The kinematics of contact and grasp," International Journal of Robotics Research, vol. 7, no. 3, pp. 17–32, June 1988.

[5] P.K. Allen, "Mapping haptic exploratory procedures to multiple shape representations," in Robotics and Automation, 1990. Proceedings., 1990 IEEE International Conference on, 13-18 May 1990, vol. 3, pp. 1679–1684.

[6] N. Chen, H. Zhang, and R. Rink, "Edge tracking using tactile servo," in Intelligent Robots and Systems 95. 'Human Robot Interaction and Cooperative Robots', Proceedings. 1995 IEEE/RSJ International Conference on, 5-9 Aug. 1995, vol. 2, pp. 84–89.

[7] A.M. Okamura and M.R. Cutkosky, "Haptic exploration of fine surface features," in Robotics and Automation, 1999. Proceedings. 1999 IEEE International Conference on, 10-15 May 1999, vol. 4, pp. 2930–2936.

[8] A. Petrovskaya, O. Khatib, S. Thrun, and A.Y. Ng, "Bayesian estimation for autonomous object manipulation based on tactile sensors," in Robotics and Automation, 2006. ICRA 2006. Proceedings 2006 IEEE International Conference on, May 15-19, 2006, pp. 707–714.

[9] T. Asfour, K. Regenstein, P. Azad, J. Schröder, A. Bierbaum, N. Vahrenkamp, and R. Dillmann, "ARMAR-III: An integrated humanoid platform for sensory-motor control," in Humanoid Robots, 2006 6th IEEE-RAS International Conference on, Dec. 2006, pp. 169–175.

[10] Corey Goldfeder, Peter K. Allen, Claire Lackner, and Raphael Pelossof, "Grasp planning via decomposition trees," in Robotics and Automation, 2007 IEEE International Conference on, 10-14 April 2007, pp. 4679–4684.

[11] A. Kargov, T. Asfour, C. Pylatiuk, R. Oberle, H. Klosek, S. Schulz, K. Regenstein, G. Bretthauer, and R. Dillmann, "Development of an anthropomorphic hand for a mobile assistive robot," in Rehabilitation Robotics, 2005. ICORR 2005. 9th International Conference on, 28 June-1 July 2005, pp. 182–186.

[12] J. Lee and T. Kunii, "Constraint-based hand animation," in Models and Techniques in Computer Animation, Tokyo, 1993, pp. 110–127, Springer, Tokyo.

[13] Y. Yasumuro, Qian Chen, and K. Chihara, "3D modeling of human hand with motion constraints," in 3-D Digital Imaging and Modeling, 1997. Proceedings., International Conference on Recent Advances in, 12-15 May 1997, pp. 275–282.

[14] A. Erol, G. Bebis, M. Nicolescu, R.D. Boyle, and X. Twombly, "A review on vision-based full DOF hand motion estimation," in Computer Vision and Pattern Recognition, 2005 IEEE Computer Society Conference on, 20-26 June 2005, vol. 3, pp. 75–75.

[15] M. Bray, E. Koller-Meier, and L. Van Gool, "Smart particle filtering for 3D hand tracking," in Automatic Face and Gesture Recognition, 2004. Proceedings. Sixth IEEE International Conference on, 17-19 May 2004, pp. 675–680.

[16] B. Buchholz and T.J. Armstrong, "A kinematic model of the human hand to evaluate its prehensile capabilities," Journal of Biomechanics, vol. 25, no. 2, pp. 149–162, 1992.

[17] J.J. Kuch and T.S. Huang, "Vision based hand modeling and tracking for virtual teleconferencing and telecollaboration," in Computer Vision, 1995. Proceedings., Fifth International Conference on, 20-23 June 1995, pp. 666–671.

[18] Michael Isard and Andrew Blake, "ICONDENSATION: Unifying low-level and high-level tracking in a stochastic framework," Lecture Notes in Computer Science, vol. 1406, pp. 893–908, 1998.

[19] A.H. Barr, "Superquadrics and angle-preserving transformations," Computer Graphics and Applications, IEEE, vol. 1, no. 1, pp. 11–23, Jan. 1981.

[20] F. Solina and R. Bajcsy, "Recovery of parametric models from range images: the case for superquadrics with global deformations," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 12, no. 2, pp. 131–147, Feb. 1990.

[21] Donald W. Marquardt, "An algorithm for least-squares estimation of nonlinear parameters," SIAM Journal on Applied Mathematics, vol. 11, no. 2, pp. 431–441, 1963.

[22] A.D. Gross and T.E. Boult, "Error of fit measures for recovering parametric solids," in Computer Vision, 1988. Second International Conference on, December 5-8, 1988, pp. 690–694.