
International Journal of Control, Automation, and Systems (2013) 11(3):1-11 DOI

ISSN:1598-6446 eISSN:2005-4092 http://www.springer.com/12555

Image-based Visual Servoing Using Improved Image Moments in 6-DOF Robot Systems

Yimin Zhao, Wen-Fang Xie*, and Sining Liu

Abstract: This paper addresses the challenges of choosing proper image features for planar symmetric-shape objects and designing a visual servoing controller to enhance the tracking performance in image-based visual servoing (IBVS). Six image moments are chosen as the image features, and the analytical image interaction matrix related to the image features is derived. A controller is designed to efficiently increase the robustness of the visual servoing system. Experimental results on a 6-DOF robot visual servoing system are provided to illustrate the effectiveness of the proposed method.

Keywords: Image moment, image-based visual servoing, robotic system, visual servoing.

1. INTRODUCTION Visual servoing has been applied widely to different

areas, from fruit picking to robotized laparoscopic surgery [1], and especially in industrial fields such as assembly, packaging, drilling and painting [2-4]. According to the features used as feedback in minimizing the positioning error, visual servoing is classified into three categories: Position-Based Visual Servoing (PBVS), Image-Based Visual Servoing (IBVS) and Hybrid Visual Servoing.

Compared with the other visual servoing methods, IBVS has three main advantages [5-7]. First, IBVS is a “model-free” method [5], which means it does not require a model of the target object. Second, IBVS is robust to camera model errors [6] and is insensitive to camera calibration error. Third, in the image plane, the image feature point trajectories are controlled to move approximately along straight lines [7]. However, one of the drawbacks of IBVS lies in the fact that image singularities and image local minima may exist, leading to IBVS failure. The choice of the visual features is a key point in solving the problem of image singularities. Much effort has been devoted to determining decoupling visual features that deliver a triangular or diagonal interaction matrix [8,9].

In IBVS, geometric features such as points, segments or straight lines [2] are usually chosen as the image features and utilized as the inputs of controllers. However, these features can only be applied to a limited class of target objects [5]. Also, the geometric features

might be occluded from the view of the camera, so that the complete geometric image features cannot be extracted properly. For instance, suppose the four corners of a rectangular object are considered as the image features. During the operation of the system, one or two corner points may be covered by an intruding or unexpected object, such as a workpiece or a human hand. In this case, the number of image features and that of the desired image features do not match, which may lead to the failure of servoing. Recently, in order to track objects which do not have enough detectable geometric features and to enhance the robustness of visual servoing systems, several novel features have been adopted for visual servoing. For example, laser points [10,11] and the polar signatures of an object contour [12] have been used as image features in IBVS.

The image moment is normally used for pattern recognition in computer vision [9,13-15] and has been adopted for control scheme design due to its generic representation of any object with simple or complex shape [16]. Image moments can be computed easily from a binary or segmented image or from a set of extracted points of interest, regardless of the object's shape complexity. Low-order moments have an intuitive meaning, since they are directly related to the area, the centroid and the orientation of the object in the image plane [8]. Using image moments as features in visual servoing renders the corresponding image interaction matrix maximally decoupled. Thus, the inherent singularity problem of the interaction matrix is solved and the performance of IBVS is improved.

In [8], a set of image moments has been proposed as image features. Based on the invariants to 2-D translation, 2-D rotation, and scale, two image features are selected to control the rotational velocity ωx around the x axis and the rotational velocity ωy around the y axis for non-centered symmetrical objects. But these two image features cannot be used for centered symmetrical objects, since the elements of the interaction matrix related to either of these two features are equal to zero when the object is parallel to the image plane. In that case, the other two image features Sx, Sy are proposed to control the rotational velocities ωx, ωy for symmetrical objects in order to avoid the singularity of the interaction matrix.

© ICROS, KIEE and Springer 2013

Manuscript received May 28, 2012; revised November 16, 2012; accepted February 24, 2013. Recommended by Editorial Board member Dong-Joong Kang under the direction of Editor Hyouk Ryeol Choi. This work was supported in part by the Natural Sciences and Engineering Research Council (NSERC), Canada. Yimin Zhao, Wen-Fang Xie, and Sining Liu are with the Department of Mechanical & Industrial Engineering, Concordia University, 1455 De Maisonneuve W., Montreal, QC, H3G 1M8, Canada (e-mails: [email protected], [email protected], [email protected]). * Corresponding author.


However, the simulation results show that these two image features for centered symmetrical objects used in [8] might not represent the pose of the object correctly all the time.

In this paper, two new improved image moments, related to the object pose rotation around the x axis and y axis respectively, are proposed for centered symmetrical objects. Along with the other four common image moments, we illustrate how to analytically derive the image interaction matrix describing the relationship between the motion of the camera and the velocity of the image features. In order to control the motion of the camera, an IBVS controller is developed for object tracking by using the derived interaction matrix with the individual image features. The developed controller is applied to a 6-DOF robot visual servoing system and the experimental results demonstrate the effectiveness of the IBVS.

This paper is organized as follows. Section 2 details the image feature extraction using image moments and the derivation of the interaction matrix. In Section 3, an IBVS controller and a robot dynamic controller are designed for a 6-DOF robot to track the object in the Field of View (FOV). In Section 4, simulations are carried out to demonstrate that the proposed image features give better control performance compared with those in [8]. Section 5 provides experimental results to illustrate the effectiveness of the proposed method. Conclusions and future work are given in Section 6.

2. IBVS USING IMAGE MOMENTS

In this section, we introduce the development of IBVS using image moments for a 6-DOF robot system. The objective of IBVS is to control the end effector of the robot to approach an unknown object with various poses and shapes. The considered robotic eye-in-hand system configuration is shown in Fig. 1, which is composed of a 6-DOF robot and a camera mounted on the robot end effector.

In Fig. 1, H denotes the transformation between two reference frames. To accomplish IBVS for such a robotic system, we first derive the interaction matrix, which indicates the relationship between the motion of the six chosen image features and the velocity of the camera, and then design an IBVS controller to control the motion of the robot.

2.1. Image feature extraction

In order to adjust and control the 6 degrees of freedom of the camera, at least six image features must be chosen for visual servoing. For an image with a density distribution function I(x, y), the two-dimensional geometric moments mij and central moments μij of order i + j are defined as

$$m_{ij} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x^i y^j\, I(x, y)\, dx\, dy, \qquad (1)$$

$$\mu_{ij} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} (x - x_g)^i (y - y_g)^j\, I(x, y)\, dx\, dy, \qquad (2)$$

where (xg, yg) are the coordinates of the centroid in the image frame.
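As an illustration, the discrete counterparts of (1) and (2) for a binary image can be computed directly. The following is a minimal sketch, assuming a NumPy array where I(x, y) is 1 inside the object and 0 elsewhere; the helper names are ours, not the paper's:

```python
import numpy as np

def raw_moment(img, i, j):
    """Discrete counterpart of (1): sum of x^i * y^j over the object pixels."""
    h, w = img.shape
    X, Y = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    return float(np.sum((X ** i) * (Y ** j) * img))

def central_moment(img, i, j):
    """Discrete counterpart of (2), taken about the centroid (x_g, y_g)."""
    m00 = raw_moment(img, 0, 0)
    x_g = raw_moment(img, 1, 0) / m00
    y_g = raw_moment(img, 0, 1) / m00
    h, w = img.shape
    X, Y = np.meshgrid(np.arange(w, dtype=float), np.arange(h, dtype=float))
    return float(np.sum(((X - x_g) ** i) * ((Y - y_g) ** j) * img))
```

Here the column index plays the role of x and the row index the role of y; any consistent image-coordinate convention works.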

As mentioned in [18,19], the low-order moments have their own properties, which denote the geometric characteristics of the target object in the image. Four image features are chosen, the same as those in [8]:

a = m00: the size of the object (zeroth-order moment);

xg = m10/m00, yg = m01/m00: the coordinates of the centroid on the x axis and y axis respectively (first-order moments);

$\theta = \frac{1}{2}\arctan\!\left(\frac{2\mu_{11}}{\mu_{20} - \mu_{02}}\right)$: the orientation angle [16] (second-order moments).

However, there are many methods for choosing the remaining two image features. In [20], the well-known skewness features are chosen as the remaining two image features [3]. In [8], invariant moments have been utilized as image features, and four image features (two sets of image moments) have been proposed to obtain improved decoupled visual servoing behavior. However, the first set of image features used in [8] is zero for centered symmetric shape objects, which causes singularity of the image interaction matrix. Therefore, two new image features were proposed for centered symmetric shape objects in [8]. Nevertheless, our simulation results show that these new image features for centered symmetric objects might not represent the pose of the object correctly all the time. Thus, improved image features to control the rotation around the x axis and y axis are proposed in this paper. Since it is better to use moments of orders as low as possible to reduce the sensitivity of the control scheme to image noise, a new set of invariants which can tell the right pose of the object and has lower order is introduced.

In [8], two image features

$$S_x = (c_2 c_3 + s_2 s_3)/K, \qquad S_y = (s_2 c_3 - c_2 s_3)/K$$

are proposed to control ωx and ωy (the details are referred to [8]). In this paper, two image features are proposed to replace those in [8] when the target objects are centered symmetric shape objects. The final improved two image features are as follows:

$$P_x = -0.1\,(c_1 c_2 + s_1 s_2)/M_1^{9/4}, \qquad P_y = (s_1 c_2 - c_1 s_2)/M_1^{9/4}, \qquad (3)$$

where $c_1 = \mu_{20} - \mu_{02}$, $s_1 = 2\mu_{11}$, $c_2 = \mu_{03} - 3\mu_{21}$, $s_2 = \mu_{30} - 3\mu_{12}$, and $M_1 = \mu_{20} + \mu_{02}$.
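Continuing the earlier sketch, the six features can be assembled from the moments as follows (raw_moment and central_moment are the helpers sketched above; np.arctan2 is our substitution for arctan, for numerical robustness):

```python
import numpy as np

def feature_vector(img):
    """Assemble s = [m00, x_g, y_g, theta, Px, Py] for a binary image."""
    m00 = raw_moment(img, 0, 0)
    x_g = raw_moment(img, 1, 0) / m00
    y_g = raw_moment(img, 0, 1) / m00
    mu = lambda i, j: central_moment(img, i, j)
    theta = 0.5 * np.arctan2(2.0 * mu(1, 1), mu(2, 0) - mu(0, 2))
    c1, s1 = mu(2, 0) - mu(0, 2), 2.0 * mu(1, 1)
    c2, s2 = mu(0, 3) - 3.0 * mu(2, 1), mu(3, 0) - 3.0 * mu(1, 2)
    M1 = mu(2, 0) + mu(0, 2)
    Px = -0.1 * (c1 * c2 + s1 * s2) / M1 ** 2.25   # exponent 9/4, as in (3)
    Py = (s1 * c2 - c1 * s2) / M1 ** 2.25
    return np.array([m00, x_g, y_g, theta, Px, Py])
```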

Fig. 1. Robotic eye-in-hand system configuration: the 3D workspace with the world, robot base, end-effector, camera and object reference frames, linked by the homogeneous transformations H_BW, H_EB, H_CE, H_OC and H_OW.


Table 1. The error of image feature Px (epx × 10^4).

  α \ β   -15°    -10°    -5°     0°      5°      10°     15°
  -15°    -78     -61     -15     -26     -41     -61     -62
  -12°    -48     -41     -19     -19     -37     -51     -50
  -9°     -40     -29     -30     -23     -40     -29     -30
  -6°     -26     -20     -16     -11     -16     -20     -24
  -3°     -12     -14     -15     -8      -8.3    -11     -9.7
   0°     -0.9    -5      -0.1    0       0.5     -7.5    0.9
   3°     12      14      15      0.7     7.9     10      9.9
   6°     25      21      16      11      16      22      23
   9°     39      29      28      23      39      29      30
  12°     44      41      20      18      37      41      49
  15°     62      61      13      26      41      61      62

Table 2. The error of image feature Py (epy × 10^4).

  α \ β   -15°    -10°    -5°     0°      5°      10°     15°
  -15°    -105    -59     -72     -66     -69     -58     -100
  -12°    -74     -51     -109    -51     -46     -51     -74
  -9°     -55     -56     -53     -44     -33     -51     -74
  -6°     -35     -32     -37     -39     -25     -23     -29
  -3°     -12     -13     -19     -19     -13     -12     -14
   0°     -0.3    -0.1    -1.5    0       1.3     -0.1    -0.6
   3°     12      13      19      19      14      12      14
   6°     37      31      37      37      25      23      30
   9°     55      52      54      44      34      29      67
  12°     75      53      108     54      50      47      79
  15°     105     58      75      66      69      58      98

To validate the above two features, we carried out simulations to test whether the values of the proposed features change roughly linearly with the pose of the object. For example, assume that the moment error of the image feature for a pose rotated around the x axis by +10° from the initial position is A. Then, if the object has the pose rotated around the x axis by −10° from the initial position, the moment error of the image feature should be −A. Tables 1 to 4 show the relationship between the moment errors and the rotational angle of the object from the initial position around the camera coordinate x axis and y axis, where α and β represent the rotation angles around the x and y axes respectively. Figs. 2 to 5 are the representations of the moment errors in Tables 1 to 4 respectively. Figs. 2 and 3 show the moment errors epx, epy, which represent the errors of the proposed image features in (3) with different poses for a normal rectangular object. Figs. 4 and 5 show eSx, eSy, which indicate the errors of the two image features Sx, Sy defined in [8].

Referring to Tables 1 and 2, it is clear that for different poses, the proposed image features Px and Py can represent the pose correctly. The results show that the errors of the proposed features Px and Py change linearly with the pose of the object. Moreover, Tables 1 and 2 also show that the features Px and Py can represent the pose rotation around the x and y axis respectively, i.e., when β is a fixed value, epx only changes with α. Hence, one can conclude that Px is almost independent of ωy. Meanwhile, when α is a fixed value, epy only changes with β. For example, if β is 0, epy is 0, and if β is +9°, epy stays around 0.0041 when α changes from −15° to +15°. It is concluded that Py is nearly independent of ωx.

Tables 3 and 4 show that Sx is almost independent of ωx and Sy is almost independent of ωy. However, the feature Sy chosen in [8] cannot represent the correct pose. In general, for opposite poses, the control signals should have opposite signs to drive the robot to the desired position, which demands that the feedback errors have different signs. In Table 4, it is clear that Sy cannot satisfy this condition, i.e., choosing the features Sx and Sy cannot produce an effective controller. The analysis above is based on a rectangular object, but the proposed image features are applicable to other centered symmetric shape objects such as star and date core. Based on Figs. 2 to 5, we can draw the conclusion that the proposed image features can represent the right pose of a planar object (with a ratio of thickness to area, i.e., thickness/area, of at most 0.001) if the angle of the viewpoint from the normal of the planar object is in the range of ±15° in the 3D workspace. Hence the visual servoing based on these image features is expected to generate proper control signals for the robot, which will be demonstrated in Section 4. Meanwhile, it is noticed that the proposed image features can also be used to handle non-symmetric objects and provide control performance similar to that in [8].

Table 3. The error of image feature Sx (eSx × 10^4).

  α \ β   -15°    -10°    -5°     0°      5°      10°     15°
  -15°    346     47      92      162     197     248     -341
  -12°    344     341     72      171     167     250     338
  -9°     189     195     133     134     177     195     191
  -6°     -189    134     92      96      92      135     192
  -3°     128     89      89      67      50      660     144
   0°     4.6     2       0.3     0       0.5     3.8     0.5
   3°     -127    -192    -85     6.7     -46     -70     -144
   6°     -183    -34     -95     -96     -95     -144    -189
   9°     -178    -195    -126    -134    -179    -195    -192
  12°     -318    -350    -77     -170    -70     -251    -302
  15°     362     -450    -900    -162    -162    -249    -339

Table 4. The error of image feature Sy (eSy × 10^4).

  α \ β   -15°    -10°    -5°     0°      5°      10°     15°
  -15°    285     231     442     -331    310     230     287
  -12°    206     233     592     545     478     216     197
  -9°     322     211     269     177     168     190     307
  -6°     121     155     191     155     136     107     94
  -3°     33      99      99      80      71      50      25
   0°     -1.3    -0.2    12      0       12      0.2     3.7
   3°     -30     -102    91      87      104     -50     -2.5
   6°     110     5.6     -57     12      -10     48      131
   9°     29      198     -28     48      -175    179     15
  12°     -209    -230    39      -169    -272    -240    -199
  15°     -251    -278    45      -371    -305    -228    -233


Fig. 2. Representation of the errors epx of image feature Px on [−15°; +15°] × [−15°; +15°] for a rectangular object.

Fig. 3. Representation of the errors epy of image feature Py on [−15°; +15°] × [−15°; +15°] for a rectangular object.

Fig. 4. Representation of the errors eSx of image feature Sx on [−15°; +15°] × [−15°; +15°] for a rectangular object.

Fig. 5. Representation of the errors eSy of image feature Sy on [−15°; +15°] × [−15°; +15°] for a rectangular object.

2.2. Derivation of image interaction matrix

The six image features are chosen as the zeroth-order moment (the area of the object), the coordinates of the object's centroid, the orientation angle, and the two improved image features in (3), i.e.,

$$s = [m_{00}\;\; x_g\;\; y_g\;\; \theta\;\; P_x\;\; P_y]^T.$$

For any image point $[x, y]^T$ in the final binary image, the relationship between the change of the geometric image moments $m_{ij}$ and the camera velocity is given as follows [8]:

$$\dot{m}_{ij} = J_{m_{ij}} \dot{r}, \qquad (4)$$

where the interaction matrix is $J_{m_{ij}} = [m_{vx}\;\; m_{vy}\;\; m_{vz}\;\; m_{\omega x}\;\; m_{\omega y}\;\; m_{\omega z}]$ with

$$\begin{cases}
m_{vx} = -\dfrac{i}{Z}\, m_{i-1,j}\\[1mm]
m_{vy} = -\dfrac{j}{Z}\, m_{i,j-1}\\[1mm]
m_{vz} = \dfrac{i+j+2}{Z}\, m_{i,j}\\[1mm]
m_{\omega x} = \dfrac{i+j+3}{\lambda}\, m_{i,j+1} + j\lambda\, m_{i,j-1}\\[1mm]
m_{\omega y} = -\dfrac{i+j+3}{\lambda}\, m_{i+1,j} - i\lambda\, m_{i-1,j}\\[1mm]
m_{\omega z} = i\, m_{i-1,j+1} - j\, m_{i+1,j-1},
\end{cases}$$

where $Z$ is the depth of the object plane (assumed parallel to the image plane) and $\lambda$ is the focal length.
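As a sketch, the entries of (4) translate directly into code. This follows the reconstructed planar-parallel form above; m(i, j) is assumed to return the raw moment of order (i, j), and 0 for negative orders:

```python
def interaction_row_mij(m, i, j, Z, lam):
    """One row of (4) for the raw moment m_ij.
    m(i, j) must return 0 when i < 0 or j < 0."""
    m_vx = -(i / Z) * m(i - 1, j)
    m_vy = -(j / Z) * m(i, j - 1)
    m_vz = ((i + j + 2) / Z) * m(i, j)
    m_wx = ((i + j + 3) / lam) * m(i, j + 1) + j * lam * m(i, j - 1)
    m_wy = -((i + j + 3) / lam) * m(i + 1, j) - i * lam * m(i - 1, j)
    m_wz = i * m(i - 1, j + 1) - j * m(i + 1, j - 1)
    return [m_vx, m_vy, m_vz, m_wx, m_wy, m_wz]
```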

The relationship between the change of the central image moments $\mu_{ij}$ and the camera velocity is given as follows:

$$\dot{\mu}_{ij} = J_{\mu_{ij}} \dot{r}, \qquad (5)$$

where $J_{\mu_{ij}} = [\mu_{vx}\;\; \mu_{vy}\;\; \mu_{vz}\;\; \mu_{\omega x}\;\; \mu_{\omega y}\;\; \mu_{\omega z}]$ with

$$\begin{cases}
\mu_{vx} = 0\\[1mm]
\mu_{vy} = 0\\[1mm]
\mu_{vz} = \dfrac{i+j+2}{Z}\, \mu_{i,j}\\[1mm]
\mu_{\omega x} = \dfrac{1}{\lambda}\Big[(i+j+3)\mu_{i,j+1} + i x_g \mu_{i-1,j+1} + (i+2j+3) y_g \mu_{i,j} - \dfrac{4 i \mu_{11}}{m_{00}} \mu_{i-1,j} - \dfrac{4 j \mu_{02}}{m_{00}} \mu_{i,j-1}\Big]\\[1mm]
\mu_{\omega y} = -\dfrac{1}{\lambda}\Big[(i+j+3)\mu_{i+1,j} + (2i+j+3) x_g \mu_{i,j} + j y_g \mu_{i+1,j-1} - \dfrac{4 i \mu_{20}}{m_{00}} \mu_{i-1,j} - \dfrac{4 j \mu_{11}}{m_{00}} \mu_{i,j-1}\Big]\\[1mm]
\mu_{\omega z} = i\, \mu_{i-1,j+1} - j\, \mu_{i+1,j-1}.
\end{cases}$$

Hence, the interaction matrices corresponding to $m_{00}$, $x_g$, $y_g$, $\theta$ [8] are rewritten here (the details are referred to [8]):

$$J_{m_{00}} = \Big[0,\;\; 0,\;\; \dfrac{2 m_{00}}{Z},\;\; \dfrac{3 m_{01}}{\lambda},\;\; -\dfrac{3 m_{10}}{\lambda},\;\; 0\Big], \qquad (6)$$

$$\begin{cases}
J_{x_g} = \Big[-\dfrac{\lambda}{Z},\; 0,\; \dfrac{x_g}{Z},\; \dfrac{x_g y_g}{\lambda} + \dfrac{4\mu_{11}}{\lambda m_{00}},\; -\Big(\lambda + \dfrac{x_g^2}{\lambda} + \dfrac{4\mu_{20}}{\lambda m_{00}}\Big),\; y_g\Big]\\[2mm]
J_{y_g} = \Big[0,\; -\dfrac{\lambda}{Z},\; \dfrac{y_g}{Z},\; \lambda + \dfrac{y_g^2}{\lambda} + \dfrac{4\mu_{02}}{\lambda m_{00}},\; -\dfrac{x_g y_g}{\lambda} - \dfrac{4\mu_{11}}{\lambda m_{00}},\; -x_g\Big],
\end{cases} \qquad (7)$$

$$J_{\theta} = \dfrac{1}{\Delta^2 + 4\mu_{11}^2}\big[\Delta J_{\mu_{11}} - \mu_{11}(J_{\mu_{20}} - J_{\mu_{02}})\big], \qquad \Delta = \mu_{20} - \mu_{02}. \qquad (8)$$

For the proposed image moments $P_x$ and $P_y$, taking the time derivative of (3), we obtain

$$\begin{cases}
\dot{P}_x = -0.1\,(\dot{c}_1 c_2 + c_1 \dot{c}_2 + \dot{s}_1 s_2 + s_1 \dot{s}_2)/M_1^{9/4} + 0.1\,\dfrac{9}{4}(c_1 c_2 + s_1 s_2)\,\dot{M}_1/M_1^{13/4}\\[1mm]
\dot{P}_y = (\dot{s}_1 c_2 + s_1 \dot{c}_2 - \dot{c}_1 s_2 - c_1 \dot{s}_2)/M_1^{9/4} - \dfrac{9}{4}(s_1 c_2 - c_1 s_2)\,\dot{M}_1/M_1^{13/4},
\end{cases} \qquad (9)$$

where $\dot{s}_1 = 2\dot{\mu}_{11}$, $\dot{s}_2 = \dot{\mu}_{30} - 3\dot{\mu}_{12}$, $\dot{c}_1 = \dot{\mu}_{20} - \dot{\mu}_{02}$, $\dot{c}_2 = \dot{\mu}_{03} - 3\dot{\mu}_{21}$, and $\dot{M}_1 = \dot{\mu}_{20} + \dot{\mu}_{02}$. The interaction matrix $J_{\mu_{ij}}$ for a central moment can be

obtained from (5). Hence, the interaction matrices corresponding to $P_x$ and $P_y$ are

$$\begin{cases}
J_{P_x} = -0.1\,(J_{c_1} c_2 + c_1 J_{c_2} + J_{s_1} s_2 + s_1 J_{s_2})/M_1^{9/4} + 0.1\,\dfrac{9}{4}(c_1 c_2 + s_1 s_2)\, J_{M_1}/M_1^{13/4}\\[1mm]
J_{P_y} = (J_{s_1} c_2 + s_1 J_{c_2} - J_{c_1} s_2 - c_1 J_{s_2})/M_1^{9/4} - \dfrac{9}{4}(s_1 c_2 - c_1 s_2)\, J_{M_1}/M_1^{13/4},
\end{cases}$$

where $J_{s_1} = 2 J_{\mu_{11}}$, $J_{s_2} = J_{\mu_{30}} - 3 J_{\mu_{12}}$, $J_{c_1} = J_{\mu_{20}} - J_{\mu_{02}}$, $J_{c_2} = J_{\mu_{03}} - 3 J_{\mu_{21}}$, and $J_{M_1} = J_{\mu_{20}} + J_{\mu_{02}}$. Based on the six chosen image features, the overall interaction matrix is calculated by

$$J_{image} = \big[J_{m_{00}}^T\;\; J_{x_g}^T\;\; J_{y_g}^T\;\; J_{\theta}^T\;\; J_{P_x}^T\;\; J_{P_y}^T\big]^T. \qquad (10)$$

An IBVS control algorithm is developed to improve the tracking performance for the robotic system.
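In code, (10) is then just a row stack of the six feature Jacobians. The following is a sketch; each argument stands for the corresponding 1×6 row computed from (6)-(8) and from the expressions for J_Px and J_Py above:

```python
import numpy as np

def build_interaction_matrix(J_m00, J_xg, J_yg, J_theta, J_Px, J_Py):
    """Stack the six 1x6 feature rows into the 6x6 matrix of (10)."""
    return np.vstack([J_m00, J_xg, J_yg, J_theta, J_Px, J_Py])
```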

3. IBVS CONTROLLER DESIGN

Based on the derived interaction matrix, an IBVS controller is designed to generate the feedback control signal for IBVS, as shown in Fig. 6. The controller aims at driving the end-effector to approach an unknown object with different poses and shapes in the FOV.

In the proposed algorithm, the visual servoing is accomplished in three stages. The following is a detailed description of each stage.

3.1. Teaching stage

In this stage, the robot is taught the desired position of the object. This stage is accomplished manually. Since the image is at the desired position, all six desired image features s_d are recorded and will be used as the reference to calculate the image feature error s̃. For the same object, this stage only needs to be performed once.

3.2. Design the IBVS control law

The desired image features are denoted as $s_d = [m_{00d}\;\; x_{gd}\;\; y_{gd}\;\; \theta_d\;\; P_{xd}\;\; P_{yd}]^T$ and the current values of the image features are denoted as $s = [m_{00}\;\; x_g\;\; y_g\;\; \theta\;\; P_x\;\; P_y]^T$. Thus the error of the image features is $\tilde{s} = s - s_d$. The general form of the IBVS controller is

$$u = -K J_{image}^{-1}\, \tilde{s}, \qquad (11)$$

where u is the feedback control signal, K is the proportional gain, s̃ is the error of the image features, and J_image is the overall interaction matrix.
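A minimal sketch of (11); the gain K here is an illustrative value, and we substitute the Moore-Penrose pseudo-inverse for the plain inverse as a common safeguard against ill-conditioning (the paper itself uses the inverse):

```python
import numpy as np

def ibvs_control(J_image, s, s_d, K=0.5):
    """Camera-velocity command per (11): u = -K * J_image^{-1} * s_tilde."""
    s_tilde = s - s_d
    # pinv stands in for the inverse in case J_image is ill-conditioned
    return -K * np.linalg.pinv(J_image) @ s_tilde
```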

3.3. Design robot dynamic controller

Computed torque control [21] is applied as the robot dynamic control scheme. For the robot, the general form of n-joint robot dynamics can be written as [22,23]

$$M(q)\ddot{q} + C(q, \dot{q})\dot{q} + \Gamma_v \dot{q} + G(q) = \tau, \qquad (12)$$

where q is the vector of robot joint variables; M(q) is a positive definite, symmetric inertia matrix; $C(q,\dot{q})\dot{q}$ is a vector grouping the Coriolis and centrifugal joint torques; $\Gamma_v \dot{q}$ is a vector grouping the dissipative (friction) joint torques; G(q) is a vector grouping the gravity joint torques; and τ is the command vector of joint torques.

From the structure of the robot dynamic controller, one easily obtains the robot dynamic control law

$$\tau = M(q_d)\big(K_p(q_d - q) + K_v(\dot{q}_d - \dot{q})\big) + C(q_d, \dot{q}_d)\dot{q}_d + \Gamma_v \dot{q}_d + G(q_d), \qquad (13)$$

where $C(q,\dot{q})$, $\Gamma_v \dot{q}$ and $G(q)$ can be computed using the algorithm in [24]. The visual servoing control algorithm is summarized in Fig. 7.
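A hedged sketch of (13) using the Python port of the Robotics Toolbox [24] (roboticstoolbox-python). The Puma560 model is a stand-in, since the paper's PUMA 260 is not bundled with the toolbox; the gains are illustrative, and the friction term Γ_v q̇_d is omitted for brevity:

```python
import numpy as np
import roboticstoolbox as rtb

robot = rtb.models.DH.Puma560()   # stand-in model; the paper uses a PUMA 260
Kp = np.diag([100.0] * 6)         # illustrative proportional gains
Kv = np.diag([20.0] * 6)          # illustrative derivative gains

def computed_torque(q_d, qd_d, q, qd):
    """Joint torques per (13); the friction term Gamma_v * qd_d is omitted."""
    M = robot.inertia(q_d)         # M(q_d)
    C = robot.coriolis(q_d, qd_d)  # C(q_d, qd_d)
    G = robot.gravload(q_d)        # G(q_d)
    return M @ (Kp @ (q_d - q) + Kv @ (qd_d - qd)) + C @ qd_d + G
```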

Fig. 6. IBVS control diagram.


Fig. 7. Visual servoing control algorithm diagram: while $\|\tilde{s}\| \ge \tilde{s}_{lim}$, the IBVS control signal u is mapped to the desired joint velocities $\dot{q}_d = J(q)^{-1} u$ and the joint torques τ are computed from (13).
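Putting the three stages together, the loop of Fig. 7 might look as follows. This is a sketch: grab_binary_image, build_interaction_matrix, read_joint_angles, robot_jacobian and send_joint_rates are hypothetical placeholders, and feature_vector is the helper sketched in Section 2.1:

```python
import numpy as np

def servo_loop(s_d, K=0.5, s_lim=1e-3):
    """Outer IBVS loop of Fig. 7: iterate until the feature error is small."""
    while True:
        img = grab_binary_image()                       # hypothetical: camera read + segmentation
        s = feature_vector(img)                         # six moment features (Section 2.1)
        s_tilde = s - s_d
        if np.linalg.norm(s_tilde) < s_lim:             # stop once the error has converged
            break
        J_image = build_interaction_matrix_from(img)    # hypothetical: 6x6 matrix of (10)
        u = -K * np.linalg.pinv(J_image) @ s_tilde      # IBVS law (11)
        q = read_joint_angles()                         # hypothetical robot interface
        qd_dot = np.linalg.pinv(robot_jacobian(q)) @ u  # desired joint rates, per Fig. 7
        send_joint_rates(qd_dot)                        # tracked by the dynamic loop (13)
```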

4. SIMULATION RESULTS

The proposed algorithm has been simulated in Matlab/Simulink with the Robotics Toolbox [24]. The robot model is the PUMA 260 [25] and the camera is a JAI CM-030 GE [26]. In this part, a simple rectangular object is used, whose corners can easily be recognized in the acquired images. The desired camera position is set such that the rectangle is parallel to the image plane at a depth of 0.15 m. Two initial positions of the camera are considered.

4.1. Initial position I

Initial position I is composed of a translation of 0.15, 0.3 and 0.4 m along the camera axes x, y, z, respectively, and of a rotation of 10°, −20°, and −15° around these axes. The desired image and initial image are shown in Fig. 8. The simulation results are shown in Figs. 9 and 10. By comparing Figs. 9 and 10, we can observe that improvements over the algorithm in [8] are obtained in terms of the camera and image-corner trajectories.

4.2. Initial position II

Initial position II includes a displacement that is composed of a translation of −0.15, 0.3 and 0.15 m along the camera axes x, y, z, respectively, and of a rotation of 15°, −5°, and 10° around these axes. The desired image and initial image are shown in Fig. 11. The simulation results are shown in Figs. 12 and 13. From Figs. 12 and 13, we may notice that improvements over the algorithm in [8] are obtained in terms of the camera and image-corner trajectories.

The control performance comparison is summarized in Table 5. From Table 5, it is noticed that the settling time of the error response using the proposed image features is smaller than that using the image features in [8]. The performance index, the integral of time and absolute error (ITAE), of the coordinate errors in the image plane is calculated for comparison purposes.

Fig. 8. Desired image (a) and initial image I (b).

Fig. 9. Trajectories in the image plane (a) by the proposed image features and (b) by the image features of [8].

Fig. 10. Point coordinate errors (a) by the proposed image features and (b) by the image features of [8].

Fig. 11. Desired image (a) and initial image II (b).


The ITAEs of the coordinate errors in the proposed algorithm are smaller than those in [8]. It is concluded that the proposed image features can provide better control performance than those in [8] when the rectangular object is tracked.

Fig. 12. Trajectories in the image plane (a) by the proposed image features and (b) by the image features of [8].

Fig. 13. Point coordinate errors (a) by the proposed image features and (b) by the image features of [8].

Table 5. Comparison of control performance (simulation step size: 1 ms).

                           Proposed image features    Image features of [8]
  Initial position         I         II               I         II
  Settling time (2%) (s)   8.5       9.8              8.8       9.8
  ITAE  x1                 65.1      21.1             75.8      32.6
        x2                 31.9      40.1             34.3      45.8
        x3                 33.9      41.2             36.5      45.4
        x4                 72.1      20.9             78.5      43.1
        y1                 70.3      35.1             74.5      42.4
        y2                 63.9      30.1             67.7      44.7
        y3                 27.7      26.6             33.2      34.2
        y4                 38.1      35.1             42.5      41.8

The following experiments will show that the proposed image features are applicable to other centered symmetric shape objects such as star and date core.

5. EXPERIMENTAL RESULTS

5.1. System setup

The proposed algorithm has been tested on the robotic visual servoing system shown in Fig. 14. The robot is a PUMA 260 [25] and the camera is a JAI CM-030 GE [26]. The camera desired position and initial position I are the same as those in Section 4. The camera initial position II includes a displacement that is composed of a translation of −0.15, 0.3 and 0.4 m along the camera axes x, y, z, respectively, and a rotation of 15°, −5°, and −10° around these axes.

5.2. Image processing

In visual servoing, all the image features are extracted from the images taken by the camera. Therefore, the quality of the image affects the accuracy, or even the success, of the visual servoing system. In a real industrial environment, good illumination cannot be guaranteed. Also, dust and speckles in the field of view (FOV) create a complex image background and hinder the feature extraction algorithm from obtaining the correct features. In order to overcome these problems, image processing is required before the image features are extracted in the visual servoing process. A four-step image processing method [27] is adopted to process images with complex backgrounds under uneven illumination:

Step 1: Read in a red-green-blue (RGB) image taken by the camera and transform it into a gray-scale image; gray-scale morphological operations [28,29] are then applied to process the image.

Step 2: Image pre-processing: gray-scale closing [28] is chosen as the image processing operator to enhance the image.

Step 3: Segmentation: a single-threshold method, Otsu's method [17], is utilized to carry out the segmentation.

Step 4: Remove the speckles caused by the segmentation by using the closing operation.
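A sketch of the four steps with OpenCV; the 5×5 structuring element is our assumption, as the paper does not specify a kernel size:

```python
import cv2
import numpy as np

def preprocess(bgr):
    """Four-step pipeline: gray-scale, closing, Otsu threshold, de-speckle."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)              # Step 1
    kernel = np.ones((5, 5), np.uint8)                        # assumed 5x5 kernel
    closed = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)  # Step 2: gray-scale closing
    _, binary = cv2.threshold(closed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # Step 3: Otsu
    return cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)  # Step 4: remove speckles
```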

Fig. 14. Robotic visual servoing system setup.


(a) The images with complex and unexpected speckles.

(b) The images obtained after applying Step 2.

(c) The final images after applying Step 4.

Fig. 15. Image pre-processing test results.

In this experiment, the above-mentioned image processing algorithm was added into the control loop to deal with images with complex backgrounds. Fig. 15 shows the image processing results: the algorithm removes the speckles and copes with the poor illumination. The final images after processing are ready for image moment feature extraction.

5.3. Experimental results

At first, the experiments are accomplished with good camera calibration. Three object shapes have been tested: rectangle, star and date core. After that, in the robustness test, a camera calibration error is added into the IBVS system.

Firstly, for a rectangular object, two tasks with different initial poses of the camera are to be accomplished. The initial values and the desired values of the image features are provided in Table 6. Fig. 16 shows the initial and final images in task 1. The image feature errors of task 1 are displayed in Fig. 17. (The units of the y axis differ among the image features: e.g., for the object centroid the unit is pixels, for the object orientation angle the unit is degrees, and so on.)

In task 2, the initial image and desired image are shown in Fig. 18. The experimental results of task 2 are shown in Fig. 19.

Fig. 16. Initial image (a) and final image (b) of task 1.

Fig. 17. Image feature errors of task 1 (curves: x̃g, ỹg, and scaled m̃00, P̃x, P̃y, θ̃).

Fig. 18. Initial image (a) and final image (b) of task 2.

Fig. 19. Image feature errors of task 2 (curves as in Fig. 17).

Table 6. The initial values and desired values of the image features in task 1 and task 2.

  Image features  Desired value  Initial (task 1)  Error (task 1)  Initial (task 2)  Error (task 2)
  xg              -4.7           -174.2            -167.5          171.4             176.1
  yg              -7.6           168.3             175.9           90.9              98.5
  m00 (pixel^2)   14224          623               -13601          3240              -10984
  θ (degree)      88.3           65.9              -22.4           101               12.7
  Px              -0.0195        0.1943            0.21388         -0.0296           -0.0110
  Py              -0.0075        -0.0439           -0.0354         0.0724            -0.0800


Table 7. The initial values and desired values of the image features in task 3.

  Image features  Desired value  Initial (task 3)  Error (task 3)
  xg              -14.7          157.2             171.9
  yg              -9.9           71.9              81.8
  m00 (pixel^2)   11646          2580              -9066
  θ (degree)      83.8           96.2              12.4
  Px              -0.10525       0.07902           0.18427
  Py              -0.00753       -0.05804          0.05501

Fig. 20. Initial image (a) and final image (b) of task 3.

Fig. 21. Image feature errors of task 3 (curves as in Fig. 17).

In task 3 and task 4, the shapes of the target object are star and date core, respectively. For task 3, the initial and desired values of the image features of the star object are provided in Table 7. The initial and final images of task 3 are shown in Fig. 20. The experimental results are shown in Fig. 21.

For task 4, the initial and desired values of the image features of the date core are provided in Table 8. The initial and final images of task 4 are shown in Fig. 22. The experimental results are shown in Fig. 23. From the experimental results of tasks 1 through 4, it is clear that all the image feature errors approach zero, which means that the visual servoing is successful in all cases.

In order to test the robustness of the system under bad camera calibration, a 5% error has been added to the focal length. The pose and shape of the object and the initial conditions are the same as those in task 1. The image feature errors are displayed in Fig. 24. The success of the visual servoing demonstrates that reasonably bad camera calibration does not affect the effectiveness of the proposed method.

Table 8. The initial values and desired values of the image features in task 4.

  Image features  Desired value  Initial (task 4)  Error (task 4)
  xg              -11.3          -193.9            -182.6
  yg              -5.4           181.4             186.8
  m00 (pixel^2)   8668           466               -8202
  θ (degree)      88.6           64.3              -22.3
  Px              0.06152        0.05286           -0.00866
  Py              0.09001        -0.05519          -0.1452

Fig. 22. Initial image (a) and final image (b) of task 4.

Fig. 23. Image feature errors of task 4 (curves as in Fig. 17).

Fig. 24. Image feature errors of the robustness test (curves as in Fig. 17).

6. CONCLUSION

In this paper, an improved IBVS algorithm using image moments as the image features is proposed. Based on the chosen image features, we have addressed the problem of deriving the analytical image interaction matrix describing the relationship between the motion of the


camera and the velocity of the image features. In order to decouple the motion of the camera, a controller has also been proposed using the derived interaction matrix with the individual image features. A series of control signals has been generated for the robot to track target objects with various shapes and poses. In the experimental part, an image processing algorithm was added into the control loop to deal with images from a real industrial environment. The experimental results demonstrate that the tracking performance of the robotic system is greatly improved. Future work will focus on extending the single-camera system to a binocular camera system so as to consider the thickness of the object, and on tracking moving objects instead of stationary ones.

REFERENCES

[1] A. Krupa, C. Doignon, J. Gangloff, and M. de Mathelin, “Combined image-based and depth visual servoing applied to robotized laparoscopic surgery,” Proc. of IEEE/RSJ International Conf. on Intelligent Robots and Systems, vol. 1, pp. 323-329, October 2002.

[2] S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Trans. on Robotics and Automation, vol. 12, pp. 651-670, October 1996.

[3] N. Aouf, H. Rajabi, N. Rajabi, H. Alanbari, and C. Perron, “Visual object tracking by a camera mounted on a 6DOF industrial robot,” Proc. of IEEE International Conf. on Robotics, Automation and Mechatronics, vol. 1, no. 3, pp. 213-218, December 2004.

[4] J.-K. Oh, S. Lee, and C.-H. Lee, “Stereo vision based automation for a bin-picking solution,” International Journal of Control, Automation, and Systems, vol. 10, no. 2, pp. 362-373, April 2012.

[5] L. Deng, Comparison of Image-based and Position-based Robot Visual Servoing Methods and Improvements, PhD Dissertation, University of Waterloo, 2003.

[6] B. Espiau, “Effect of camera calibration errors on visual servoing in robotics,” Experimental Robotics III: The 3rd International Symposium on Experimental Robotics, Lecture Notes in Control and Information Sciences, vol. 200, Springer-Verlag, 1993.

[7] F. Chaumette, “Potential problems of stability and convergence in image-based and position-based visual servoing,” The Confluence of Vision and Control, Lecture Notes in Control and Information Sciences, vol. 237, pp. 66-78, Springer-Verlag, 1998.

[8] F. Chaumette, “Image moments: a general and useful set of features for visual servoing,” IEEE Trans. on Robotics, vol. 20, no. 4, pp. 713-723, August 2004.

[9] J. P. Wang and H. Cho, “Micropeg and hole alignment using image moments based visual servoing method,” IEEE Trans. on Industrial Electronics, vol. 55, no. 3, pp. 1286-1294, March 2008.

[10] Z. Li, W. F. Xie, and X. W. Tu, “Switching control of image based visual servoing with laser pointer in robotic assembly systems,” Proc. of IEEE International Conf. on Systems, Man and Cybernetics, pp. 2383-2389, October 2007.

[11] A. Krupa and J. Gangloff, “Autonomous retrieval and positioning of surgical instruments in robotized laparoscopic surgery using visual servoing and laser pointers,” Proc. of IEEE International Conf. on Robotics and Automation, pp. 3769-3774, May 2002.

[12] C. Collewet and F. Chaumette, “A contour approach for image-based control of objects with complex shape,” Proc. of IEEE/RSJ International Conf. on Intelligent Robots and Systems, vol. 1, pp. 751-756, November 2000.

[13] P. R. Giordano, A. De Luca, and G. Oriolo, “3D structure identification from image moments,” Proc. of IEEE International Conf. on Robotics and Automation, pp. 93-100, May 2008.

[14] S. Liu, W. F. Xie, and C. Y. Su, “Image-based visual servoing using improved image moments,” Proc. of IEEE International Conf. on Information and Automation, June 2009.

[15] O. Tahri and F. Chaumette, “Point-based and region-based image moments for visual servoing of planar objects,” IEEE Trans. on Robotics, vol. 21, no. 6, pp. 1116-1127, December 2005.

[16] R. J. Prokop and A. P. Reeves, “A survey of moment-based techniques for unoccluded object representation and recognition,” CVGIP: Graphical Models and Image Processing, vol. 54, no. 5, pp. 438-460, September 1992.

[17] http://en.wikipedia.org/wiki/Mathematical_morphology

[18] M. K. Hu, “Visual pattern recognition by moment invariants,” IRE Trans. on Information Theory, vol. IT-8, pp. 179-187, 1962.

[19] P. I. Corke and S. A. Hutchinson, “A new partitioned approach to image-based visual servo control,” IEEE Trans. on Robotics and Automation, vol. 17, no. 4, pp. 507-515, August 2001.

[20] G. Wells, C. Venaille, and C. Torras, “Vision-based robot positioning using neural networks,” Image and Vision Computing, vol. 14, no. 10, pp. 715-732, December 1996.

[21] P. I. Corke, A Robotics Toolbox for MATLAB, Release 7.1, 2002.

[22] N. Zemiti, G. Morel, A. Micaelli, B. Cagneau, and D. Bellot, “On the force control of kinematically defective manipulators interacting with an unknown environment,” IEEE Trans. on Control Systems Technology, vol. 18, no. 2, pp. 307-322, 2010.

[23] L. Sciavicco and B. Siciliano, Modelling and Control of Robot Manipulators, Advanced Textbooks in Control and Signal Processing Series, 2nd ed., Springer-Verlag, 2000.

[24] P. I. Corke, “A robotics toolbox for MATLAB,” IEEE Robotics and Automation Magazine, vol. 3, no. 1, pp. 24-32, 1996.

[25] UNIMATE Inc., PUMA 260 Reference Manual, 1984.

[26] JAI Inc., CM-030 GE/CB-030 GE User's Manual, Document, Version 2.0, August 2011.

[27] S. Liu, Image-based Visual Servoing Using Improved Image Moments in 6-DOF Robot Systems, Master Thesis, Concordia University, 2008.

[28] E. Dougherty and R. Lotufo, Hands-on Morphological Image Processing, ser. Tutorial Texts in Optical Engineering, SPIE Press, vol. TT59, 2003.

[29] P. Siva and C. C. W. Hulls, “Dynamic segmentation of small image windows for visual servoing,” Proc. of IEEE International Conference on Mechatronics and Automation, vol. 2, pp. 643-648, July 2005.

Yimin Zhao is a software engineer with Photronics Inc., Canada. He received his Ph.D. from Concordia University in 2012 and his Master's degree from Hebei University of Technology in 1989. Before he joined Concordia University, he was a professor with Hebei University of Science and Technology, China. His research interests include multi-body dynamic system control, nonlinear control, process control, and visual servoing.

Wen-Fang Xie is an associate professor with the Department of Mechanical and Industrial Engineering at Concordia University, Montreal, Canada. She was an Industrial Research Fellowship holder from the Natural Sciences and Engineering Research Council of Canada (NSERC) and served as a senior research engineer at InCoreTec, Inc., Canada before she joined Concordia University in 2003. She received her Ph.D. from The Hong Kong Polytechnic University in 1999 and her Master's degree from Beijing University of Aeronautics and Astronautics in 1991. Her research interests include nonlinear control in mechatronics, artificial intelligent control, advanced process control and system identification, and visual servoing.

Sining Liu received her B.E. degree in Automation Science and Technology from Xi'an Jiaotong University, Xi'an, China, in 2006 and her M.Sc. degree in Mechanical Engineering from Concordia University, Montreal, Canada, in 2008. She is currently a Ph.D. candidate in the Department of Mechanical and Industrial Engineering, Concordia University, Montreal, Canada. Her current research interests cover image processing, visual servoing, hysteresis and adaptive control.