
Scientia Iranica B (2015) 22(3), 844-853

Sharif University of Technology
Scientia Iranica, Transactions B: Mechanical Engineering
www.scientiairanica.com

A 3D sliding mode control approach for position based visual servoing system

M. Parsapour, S. RayatDoost and H.D. Taghirad*

Advanced Robotics and Automated Systems (ARAS), Industrial Control Center of Excellence (ICEE), Faculty of Electrical Engineering, K.N. Toosi University of Technology, Tehran, Iran.

Received 5 May 2014; received in revised form 3 September 2014; accepted 20 October 2014

KEYWORDS: Nonlinear controller; Position-based visual servoing; Unscented Kalman filter; PD-type sliding surface; Lyapunov theory.

Abstract. The performance of visual servoing systems can be enhanced through nonlinear controllers. In this paper, sliding mode control is employed for this purpose. The controller design is based on the outputs of a pose estimator implemented within the Position-Based Visual Servoing (PBVS) scheme. Accordingly, a robust estimator based on an unscented Kalman observer cascaded with a Kalman filter is used to estimate the position, velocity and acceleration of the target, and a PD-type sliding surface is selected as a suitable manifold. The combination of the estimator and the nonlinear controller provides a robust and stable structure for the PBVS approach. The stability analysis is verified through Lyapunov theory, and the performance of the proposed algorithm is verified experimentally on an industrial visual servoing system.
© 2015 Sharif University of Technology. All rights reserved.

1. Introduction

Visual Servoing (VS) is applied for tracking the movement of specific objects based on vision input. Different research fields, such as robotics, image processing, and control, are combined to achieve VS. It has wide applications in robotics and mechatronic systems, such as medical robotics and planetary robotics, and especially extensive application in industrial robots [1,2]. The control loop in VS has different architectures, such as the look-and-move structure and the direct visual servo structure per Weiss [3]. The look-and-move structure has an internal feedback controller, as used in many industrial robots. Such setups may accept Cartesian velocity or incremental position commands and permit a simplified design of the control signal [3].

*. Corresponding author. Tel.: +98 21 8846 9084; Fax: +98 21 8846 2066.
E-mail addresses: [email protected] (M. Parsapour); [email protected] (S. RayatDoost); [email protected] (H.D. Taghirad)

There are three main approaches in VS [4]: Position-Based Visual Servoing (PBVS) [5], Image-Based Visual Servoing (IBVS) [6], and "2-1/2-D" visual servoing [7], of which PBVS is the most frequently used method [5]. In PBVS, the control signal is produced based on the estimated position and orientation (pose) of the target with respect to the camera. The accuracy of the estimated pose is directly related to the measurement noise and the camera calibration [8]. The Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF) have been developed to deal with pose estimation in noisy and uncertain situations, and these estimators have proven quite effective in practice [9-11]. In order to include the velocity and acceleration of the target, an appropriate dynamic model for the relative motion between the camera and the target is necessary. Conventional models are based on a constant-velocity or constant-acceleration model, which assumes an invariable relative velocity or acceleration at each sample time [1].

After estimating the pose of the target object, the


main goal in the VS problem is to enhance the tracking performance via a controller. Since the system at hand (a robot) has nonlinear dynamics, a nonlinear controller has to be designed for this purpose. For such a design, we use Sliding Mode Control (SMC) in order to achieve robust performance in noisy environments such as industrial settings. Generally, in many forms of VS, path planning and end-effector control are performed separately. By using the SMC approach, these two tasks can be combined, which allows the whole control system to be tuned together. In SMC, all states of the system are forced to converge toward a desired sliding surface within finite time and to stay on this manifold thereafter. In the present paper, we define a PD-type sliding surface to generate the desired path. The novelty of this selection is the employment of the estimated position, velocity and acceleration of the target for defining the sliding manifold. The estimated values are obtained from a UKF cascade structure recently proposed in [12]. The information of the estimated model and the observation carries uncertainties, which can be directly considered in the proposed controller. The stability of the closed-loop system is proved by Lyapunov theory. As the target object is dynamically changing (in contrast to a pre-planned path), the sliding surface is adapted to the varying positions.

So far, various types of SMC have been used for VS systems [12-14]. Usually, SMC is designed based on the nonlinear dynamic model of the system. To the best of the authors' knowledge, the stability analysis of the closed-loop system in combination with the pose estimator in uncertain and noisy situations has not received significant attention. SMC for the PBVS problem has been introduced theoretically for a 6-DOF robot manipulator in [15], where the desired path is defined such that the errors are bounded and target visibility is guaranteed. In [16], a sliding surface is designed with consideration of uncertainties in the nonlinear dynamics of the robot. The algorithm proposed in [17] uses a robot dynamics model that is too complex to implement on industrial robots. In our case, a PD-type sliding surface is developed for a five-degrees-of-freedom robot manipulator with the PBVS approach. The existence of an internal feedback controller on the industrial robot manipulator simplifies the controller design, since the robot accepts position commands in the task space. Through this internal loop, a simplified motion kinematics model can be used for the industrial robot, where the controller input is the joint velocity signal. To produce the control signal, the information of the pose estimator is used as the sliding manifold input. Finally, the tracking performance of the adapted control

Figure 1. Experimental setup.

scheme on the industrial robot is verified experimentally.

2. Theoretical background

In this section, first the experimental setup used for the VS purpose is presented. Then, the formulation of the UKO+KF pose estimator is reviewed. Furthermore, practical implementation issues of applying the control outputs to an industrial manipulator are described.

2.1. Experimental setup
The experimental hardware setup, shown in Figure 1, consists of a 5-DOF RV-2AJ robot manipulator produced by Mitsubishi Co., augmented with a one-degree-of-freedom linear gantry. This robot has five degrees of freedom of motion, with one degree of redundancy, but the orientation of the wrist about the tool axis is not present in the structure. Additionally, a PC equipped with a Pentium IV (1.84 GHz) processor and 1 GB of RAM is utilized as the processor. A camera, a Unibrain Co. product with a 30 fps frame rate and a wide-angle lens with 2.1 mm focal length, is attached to the end effector of the manipulator. Since a real-time setup is needed, the OpenCV library is used in the Visual Studio environment.
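For illustration only, the following Python sketch mimics the real-time capture loop used in such a setup; the actual implementation here is written in C++ with OpenCV inside Visual Studio, and the device index and the frame-rate request below are assumptions rather than values taken from the setup.

```python
import cv2

# Illustrative sketch of the real-time capture loop (the paper's setup uses
# OpenCV from C++ in Visual Studio). Device index 0 is an assumption.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FPS, 30)   # request the 30 fps rate of the camera

while True:
    ok, frame = cap.read()                              # grab one image from the eye-in-hand camera
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)      # typical pre-processing before feature extraction
    cv2.imshow("camera", gray)
    if cv2.waitKey(1) & 0xFF == ord('q'):               # quit on 'q'
        break

cap.release()
cv2.destroyAllWindows()
```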

2.2. Feature extraction and pose estimation
In conventional pose estimation, the relative pose of the object with respect to the camera frame in 3D coordinates is calculated. To perform this calculation, some specified points, taken as feature points attached to the target object in the 2D coordinates of the image, are needed. In Figure 2, the projections of the feature points onto the image frame are illustrated, denoted by $p^{im}_i = (u_i, v_i)$, $i = 1, \dots, n$. The feature points in


Figure 2. Camera, object and image frames.

the object frame can be expressed as $p^o_i = (x^o_i, y^o_i, z^o_i)$, $i = 1, 2, \dots, n$, $n \geq 3$, and represented in the camera frame by $p^c_i = (x^c_i, y^c_i, z^c_i)$. The relationship between $p^c_i$ and $p^o_i$ can be defined as the following equation:

$$p^c_i = R(\text{roll}, \text{pitch}, \text{yaw})\, p^o_i + T, \quad (1)$$

in which roll, pitch and yaw are the orientation Euler angles, and the translation is represented by the vector $T = [X \; Y \; Z]^T$.

Using a pin-hole camera model, the relationship between $p^{im}_i$ and $p^c_i$ can be formulated as follows:

$$\begin{bmatrix} u_i \\ v_i \end{bmatrix} = \begin{bmatrix} -\dfrac{f}{P_x} & 0 \\[4pt] 0 & -\dfrac{f}{P_y} \end{bmatrix} \begin{bmatrix} \dfrac{x^c_i}{z^c_i} \\[6pt] \dfrac{y^c_i}{z^c_i} \end{bmatrix}, \quad (2)$$

where $f$ is the focal length, and $P_x$ and $P_y$ are the inter-pixel spacing parameters along $u_i$ and $v_i$. By folding the $3n$ equations (1) into the camera model (2), a set of $2n$ nonlinear equations is obtained. In order to solve this set of nonlinear equations and achieve a unique solution, at least 3 non-aligned points on the object are required [17]. Several studies have applied the EKF and UKF for pose estimation [18,19]. Recently, [10] proposed an EKF in addition to a KF, which guarantees its convergence; however, it is shown in [12] that the performance of this method depends directly on the initial condition, and therefore the employment of the UKF instead of the EKF leads to better performance. Due to the promising features of this estimator, it is used as the pose estimation engine in this paper. In this estimator, the nonlinear-uncertain estimation problem is decomposed into a nonlinear-certain observation problem in addition to a linear-uncertain estimation problem. The first part is

handled using the Unscented Kalman Observer (UKO), and the second part is accomplished by a Kalman Filter (KF). The pose estimator is fed by a robust and fast modified Principal Component Analysis (PCA) based feature extractor proposed in [12]. This robust method is used in this article to estimate the target pose in an uncertain and noisy environment.
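As an illustrative sketch of Eqs. (1)-(3), the following Python function projects object-frame feature points to image coordinates for a given relative pose. The Euler-angle convention and the pixel-pitch values Px and Py are assumptions made for the example; only the 2.1 mm focal length is taken from the setup description, and the sign convention follows Eq. (2) as reconstructed above.

```python
import numpy as np

def rotation_rpy(roll, pitch, yaw):
    """Rotation matrix from roll/pitch/yaw Euler angles. The exact Euler
    convention of Eq. (3) is an assumption here (Rx @ Ry @ Rz)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

def project_points(points_obj, pose, f=2.1e-3, Px=1e-5, Py=1e-5):
    """Map object-frame feature points to image coordinates: Eq. (1)
    followed by the pin-hole model of Eq. (2). Px and Py are placeholder
    values, not calibration data from the experimental camera."""
    X, Y, Z, roll, pitch, yaw = pose
    R = rotation_rpy(roll, pitch, yaw)
    T = np.array([X, Y, Z])
    uv = []
    for p_o in points_obj:
        p_c = R @ np.asarray(p_o) + T        # camera-frame point, Eq. (1)
        u = -(f / Px) * p_c[0] / p_c[2]      # Eq. (2)
        v = -(f / Py) * p_c[1] / p_c[2]
        uv.append((u, v))
    return np.array(uv)

# Example: three non-aligned feature points on the object (the required minimum).
# Pose values are in metres/radians, loosely echoing Table 2 (illustrative only).
pts = [(0.0, 0.0, 0.0), (0.05, 0.0, 0.0), (0.0, 0.05, 0.0)]
print(project_points(pts, pose=(0.09, 0.065, 0.28, 0.0, 0.0, 0.0)))
```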

2.3. Singularity avoidance
After estimating the desired dynamic parameters of the target, these parameters have to be converted to the joint space of the robot manipulator. Since there is no prior knowledge of the desired trajectory, and the robot is a five-degrees-of-freedom manipulator, the robot may encounter singular configurations within its prescribed trajectory. To remedy this issue, the inverse kinematics of the manipulator is solved in a real-time routine to generate singularity-free motion while respecting the joint limits of the robot. A practical motion planner, proposed by the authors in [20], is employed and fully elaborated there. The applied method performs well when there is no prior information about the via-points and the final destination of the desired trajectory.
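The planner of [20] is not reproduced here; as a generic sketch of singularity-robust inverse kinematics with joint limits, a damped least-squares step is shown below. The `jacobian` argument is an assumed user-supplied function returning the task-space Jacobian of the manipulator at hand, and the damping value is a placeholder.

```python
import numpy as np

def dls_ik_step(q, jacobian, x_err, joint_min, joint_max, damping=0.05):
    """One damped least-squares inverse-kinematics step (generic sketch,
    not the motion planner of [20])."""
    J = jacobian(q)
    # Damping keeps the inverse well conditioned near singular configurations.
    JJt = J @ J.T + (damping ** 2) * np.eye(J.shape[0])
    dq = J.T @ np.linalg.solve(JJt, x_err)   # least-squares joint increment
    q_new = q + dq
    # Respect the joint limits of the manipulator.
    return np.clip(q_new, joint_min, joint_max)
```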

Figure 3 represents the complete loop of the visual servoing system. An image captured by the camera is applied as the input of the "Feature Extractor" block. After the target pose is estimated by the UKO+KF, the relative pose is commanded to the SMC as the desired position.

3. Design of SMC for PBVS system

The whole process of the VS loop is demonstrated in Figure 4. In the first step, an image is captured by the camera. Then the desired features are extracted from this image. After that, the position, velocity and acceleration of the object are estimated by the UKO+KF estimator. The error, defined as the relative pose, is applied as the input to the sliding surface. Then the control signal (at the velocity level) is produced via SMC for each Degree Of Freedom (DOF) of the robot. In accordance with the singularity avoidance routine, these signals are used to solve the inverse kinematics problem in order to calculate the angle of each joint. In the final step, the resulting signal is commanded to the internal controller of our industrial robot. Note that, because of the internal

Figure 3. The complete VS loop.


$$G(X, Y, Z, \text{roll}, \text{pitch}, \text{yaw}) = \begin{bmatrix} u_i \\ v_i \end{bmatrix}, \quad i = 1, 2, \dots, n, \; n \geq 3,$$

$$u_i = -\frac{f}{P_x}\cdot\frac{X + \cos(\text{roll})\cos(\text{pitch})\,x^o_i + \big(\cos(\text{roll})\sin(\text{pitch})\sin(\text{yaw}) - \sin(\text{roll})\cos(\text{yaw})\big)\,y^o_i + \big(\cos(\text{roll})\sin(\text{pitch})\cos(\text{yaw}) + \sin(\text{roll})\sin(\text{yaw})\big)\,z^o_i}{Z - \sin(\text{pitch})\,x^o_i + \cos(\text{pitch})\sin(\text{yaw})\,y^o_i + \cos(\text{roll})\cos(\text{yaw})\,z^o_i},$$

$$v_i = -\frac{f}{P_y}\cdot\frac{Y + \sin(\text{roll})\cos(\text{pitch})\,x^o_i + \big(\sin(\text{roll})\sin(\text{pitch})\sin(\text{yaw}) + \cos(\text{roll})\cos(\text{yaw})\big)\,y^o_i + \big(\sin(\text{roll})\sin(\text{pitch})\sin(\text{yaw}) - \cos(\text{roll})\sin(\text{yaw})\big)\,z^o_i}{Z - \sin(\text{pitch})\,x^o_i + \cos(\text{pitch})\sin(\text{yaw})\,y^o_i + \cos(\text{roll})\cos(\text{yaw})\,z^o_i}. \quad (3)$$

Box I

Figure 4. Block diagram of the overall process.

controller, we can regard it as a perfect tracker. The whole process is repeated until the target object is tracked perfectly.
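The following Python skeleton summarizes this loop (Figure 4) as a sketch; all of the callables passed in are hypothetical placeholders standing in for the blocks described above, not functions provided by the paper.

```python
def visual_servoing_loop(grab_frame, extract_features, estimate_pose,
                         smc_control, solve_ik, send_command, target_reached):
    """Skeleton of the VS loop of Figure 4. Every argument is a
    user-supplied callable; the names are placeholders for the components
    described in the text."""
    while not target_reached():
        image = grab_frame()                      # 1. capture an image
        features = extract_features(image)        # 2. feature extraction
        pose, vel, acc = estimate_pose(features)  # 3. UKO+KF estimation
        v_cmd = smc_control(pose, vel, acc)       # 4. SMC output (velocity level)
        q_des = solve_ik(v_cmd)                   # 5. singularity-free inverse kinematics
        send_command(q_des)                       # 6. handed to the robot's internal controller
```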

Traditionally, a visual servoing system treats feature tracking and feedback control as two separate processes. This separation limits the ability to tune the performance of the whole system. In the present paper, we propose using SMC to perform both tasks within VS. Path planning is accomplished using the sliding mode surface, and the performance is preserved by the sliding mode control output. The stability of the whole control system is proved by Lyapunov theory. Moreover, the robust behavior of SMC against estimation noise provides suitable tracking performance for the whole visual servoing system.

3.1. Controller design
In this approach, instead of using the dynamics of the system, we use the estimated values from

the UKO+KF and their corresponding desired values to define the sliding mode surface. The main purpose is tracking a specific object, whether the object is moving or not. According to Section 2.2, by substituting Eq. (1) into the pin-hole camera model (Eq. (2)), a set of nonlinear equations is obtained as Eq. (3), shown in Box I.

These nonlinear equations show the relation between the relative pose of the target object with respect to the end-effector (camera) frame in 3D coordinates, and the feature points attached to the target object in the 2D coordinates of the image frame.

By considering a constant-velocity model, which assumes an invariable relative velocity of the target object with respect to the end-effector at each sample time, we may reach the following model for the relative pose of the object with respect to the end-effector:

$$W_k = A W_{k-1} + \eta_k,$$
$$P_k = G(W_k) + \nu_k, \quad (4)$$

where:

$$W = \left[X,\, \dot X,\, Y,\, \dot Y,\, Z,\, \dot Z,\, \text{roll},\, \dot{\text{roll}},\, \text{pitch},\, \dot{\text{pitch}},\, \text{yaw},\, \dot{\text{yaw}}\right]^T,$$

and $A$ is a block-diagonal matrix with $\begin{bmatrix} 1 & T \\ 0 & 1 \end{bmatrix}$ blocks.

$\eta_k$ and $\nu_k$ are the model uncertainty and the measurement noise, respectively, and each is modeled as zero-mean Gaussian noise. $P_k$ is the vector of the normalized coordinates of the feature points in the image plane.
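As a minimal sketch of the constant-velocity model of Eq. (4), the transition matrix A can be assembled as follows; the sample time in the example assumes the 30 fps camera rate.

```python
import numpy as np

def cv_transition(T):
    """State-transition matrix A of Eq. (4): a 12x12 block-diagonal matrix
    with [[1, T], [0, 1]] blocks, one block per pose coordinate
    (X, Y, Z, roll, pitch, yaw)."""
    blk = np.array([[1.0, T], [0.0, 1.0]])
    return np.kron(np.eye(6), blk)

# One prediction step of W_k = A W_{k-1} (process noise eta_k omitted).
A = cv_transition(T=1.0 / 30.0)   # sample time assumed from the 30 fps camera
W_prev = np.zeros(12)             # [X, Xdot, Y, Ydot, ..., yaw, yawdot]
W_pred = A @ W_prev
```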

$u_i$ and $v_i$ (for at least 3 non-aligned feature points attached to the target object) are calculated by Eq. (2). These points are fed into the UKO to estimate the vector $W$; the estimation process is explained in [21]. According to the structure proposed in [10,12], the pose vector is estimated by the UKO in an iterative loop with a desired stop condition. Here, the estimated velocity is zero, since the same input is used over several epochs. Moreover, utilizing only one set of input data causes an inaccurate estimation. To overcome this problem, and to estimate the velocity


and acceleration of the object, the UKO output is fed to a linear Kalman filter. Then, by changing the output and considering a constant-acceleration model, which assumes an invariable relative acceleration at each sample time, we may reach the following linear model:

$$V_k = B V_{k-1} + \mu_k,$$
$$P_{o,k} = C V_k + \nu_k, \quad (5)$$

where:

$$V = \left[X,\, \dot X,\, \ddot X,\, Y,\, \dot Y,\, \ddot Y,\, Z,\, \dot Z,\, \ddot Z,\, \text{roll},\, \dot{\text{roll}},\, \ddot{\text{roll}},\, \text{pitch},\, \dot{\text{pitch}},\, \ddot{\text{pitch}},\, \text{yaw},\, \dot{\text{yaw}},\, \ddot{\text{yaw}}\right]^T;$$

$V$ is the state vector that shows the relative motion parameters between the camera and the target object.

$B$ is a block-diagonal matrix with $\begin{bmatrix} 1 & T & 0.5T^2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}$ blocks. $P_{o,k}$ is the pose vector with the 6 translational and rotational elements $X$, $Y$, $Z$, roll, pitch and yaw. $\mu_k$ and $\nu_k$ are the model uncertainty and the measurement noise vectors, respectively; $\nu_k$ is related to the precision of the static pose estimator. $C$ is a $6 \times 18$ matrix with the following format:

$$c_{ij} = \begin{cases} 1 & j = 3i - 2, \\ 0 & \text{otherwise}. \end{cases} \quad (6)$$
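A minimal numerical sketch of the constant-acceleration model of Eq. (5), the selection matrix C of Eq. (6) (with the state ordered value/velocity/acceleration per axis), and one textbook Kalman filter cycle is given below; the noise covariances Q and Rn are left as user-supplied assumptions.

```python
import numpy as np

def ca_transition(T):
    """Transition matrix B of Eq. (5): 18x18 block-diagonal with
    [[1, T, T^2/2], [0, 1, T], [0, 0, 1]] blocks."""
    blk = np.array([[1.0, T, 0.5 * T**2],
                    [0.0, 1.0, T],
                    [0.0, 0.0, 1.0]])
    return np.kron(np.eye(6), blk)

def pose_selector():
    """Output matrix C of Eq. (6): picks the six pose entries out of the
    18-dimensional state V, i.e. c_ij = 1 for j = 3i - 2 (1-based) and 0
    otherwise."""
    C = np.zeros((6, 18))
    for i in range(6):
        C[i, 3 * i] = 1.0
    return C

def kf_step(V, P, z, B, C, Q, Rn):
    """One predict/update cycle of the linear Kalman filter fed by the UKO
    pose output z (a standard textbook KF, shown only as a sketch)."""
    # Predict
    V_pred = B @ V
    P_pred = B @ P @ B.T + Q
    # Update
    S = C @ P_pred @ C.T + Rn
    K = P_pred @ C.T @ np.linalg.inv(S)
    V_new = V_pred + K @ (z - C @ V_pred)
    P_new = (np.eye(len(V)) - K @ C) @ P_pred
    return V_new, P_new
```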

Given the linear model and the desired accuracy of the initial estimate, the KF is implemented through its recursive formulation to estimate the pose, velocity and acceleration of the target object with respect to the end-effector. After the estimation process, we can define the desired sliding surface. Let us define the following PD-type sliding surface:

$$S = \left(\frac{d}{dt} + \lambda\right) e = \dot e + \lambda e, \quad (7)$$

in which $e$ is the relative pose of the estimated position of the target with respect to the position of the end-effector, and $\lambda$ is a positive constant which specifies the rate of convergence toward the sliding mode manifold. The sliding surface vector in Eq. (7) is a five-tuple, $S = \begin{bmatrix} s_x & s_y & s_z & s_A & s_B \end{bmatrix}^T$, which contains the manifold motion variables in the $x$, $y$ and $z$ directions and the orientations about the $z$ and $y$ axes of the robot base coordinate frame, here called the A and B orientations, respectively. Based on the relative pose, the state vector converges toward the sliding surface; the state then slides smoothly on the surface to reach the target state. Because of the movement of

the target, the trajectory path is dynamically changing. Therefore, the current state may diverge from the sliding surface. Hence, the process has to be executed iteratively with a short refresh period to adapt to the varying positions. There are two approaches to defining the required control signal.

The first control design is proposed as follows:

$$v^1_{\mathrm{eff}} = -\beta(t)\, \mathrm{sgn}(S), \quad (8)$$

in which $\beta(t)$ is a time-varying coefficient, which will be discussed in the next section, and $\mathrm{sgn}(\cdot)$ denotes the signum function.

The second control signal design is proposed as follows:

$$v^2_{\mathrm{eff}} = -\lambda^{-1}\ddot e + v_{\mathrm{des}} - k\, \mathrm{sgn}(S), \quad (9)$$

in which $k$ is a positive constant, $\ddot e$ is the acceleration error between the end-effector and the goal, and $v_{\mathrm{des}}$ is the desired velocity of the object.

The first design consists of only a switching term, while in the second design a linearizing term is added to the switching term to ensure the sliding condition [22].
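As a sketch, the PD-type surface of Eq. (7) and the two control laws of Eqs. (8) and (9) can be written directly in terms of the estimator outputs; the numerical values in the example are placeholders rather than tuned parameters.

```python
import numpy as np

def sliding_surface(e, e_dot, lam):
    """PD-type sliding surface of Eq. (7): S = e_dot + lambda * e, evaluated
    elementwise for the five components (x, y, z, A, B)."""
    return e_dot + lam * e

def control_law_1(S, beta):
    """First design, Eq. (8): pure switching term."""
    return -beta * np.sign(S)

def control_law_2(S, e_ddot, v_des, lam, k):
    """Second design, Eq. (9): linearizing term plus switching term."""
    return -(1.0 / lam) * e_ddot + v_des - k * np.sign(S)

# Example with estimator outputs as inputs (all numbers are placeholders).
e     = np.array([0.09, 0.065, 0.10, -0.44, 0.0])  # relative pose error
e_dot = np.zeros(5)                                 # relative velocity error
S = sliding_surface(e, e_dot, lam=1.0)
v_cmd = control_law_1(S, beta=0.15)                 # end-effector velocity command
```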

3.2. Stability analysis
In the design of the control laws, the sliding condition of the manifold is $\dot e + \lambda e = 0$, which constrains the motion of the system. Choosing $\lambda > 0$ guarantees that all states of the system tend to zero as time tends to infinity, and the rate of convergence can be controlled by the choice of $\lambda$. The following derivative is needed to obtain the control signal:

$$\dot S = \ddot e + \lambda \dot e = \ddot e + \lambda\left(v^1_{\mathrm{eff}} - v_{\mathrm{des}}\right), \quad (10)$$

where $\dot e$ represents the difference between the velocity of the end-effector and that of the target. Furthermore, $v_{\mathrm{eff}}$ is the end-effector velocity, which is used as the control signal in the practical implementation.

The stability analysis of this algorithm is based on Lyapunov's direct method. Consider the positive definite Lyapunov function candidate $V = \frac{1}{2} S^T S$. To show the stability of this algorithm, the derivative of the Lyapunov function candidate has to be negative. To provide this, the control signal has to be limited by a specified function, which is subject to the robot velocity constraints and the amplitude of the estimation noise. For the gain in Eq. (8), the function $\varphi(t)$ is chosen to satisfy the following inequality [23]:

$$\left\| \ddot e - \lambda v_{\mathrm{des}} \right\|_1 \leq \varphi(t). \quad (11)$$

Based on this upper bound, the derivative of the Lyapunov function candidate, $\dot V$, is


determined by:

$$\dot V = S \dot S = S\left(\ddot e - \lambda v_{\mathrm{des}}\right) + \lambda S\, v^1_{\mathrm{eff}}, \quad (12)$$

$$\dot V \leq |S|\,\varphi(t) + \lambda S\, v^1_{\mathrm{eff}}. \quad (13)$$

By substituting the first control signal, Eq. (8), into Eq. (12), the following inequality is obtained for the derivative of the Lyapunov function candidate:

$$\dot V \leq |S|\,\varphi(t) + \lambda S\left(-\beta(t)\, \mathrm{sgn}(S)\right). \quad (14)$$

By choosing $\beta(t) \geq \varphi(t) + \beta_0$, the derivative of the Lyapunov function becomes negative semi-definite (obviously, $\dot V = 0$ when $S = 0$):

$$\dot V \leq -\lambda \beta_0 |S|. \quad (15)$$

Therefore, the trajectory reaches the manifold $S = 0$ in finite time and, once on the manifold, cannot leave it, as enforced by condition (15). Indeed, since inequality (15) implies $d|S|/dt \leq -\lambda \beta_0$ away from $S = 0$, the manifold is reached within at most $|S(0)|/(\lambda \beta_0)$ seconds.

The same procedure can be followed for the second controller design. Combining Eqs. (9) and (12), we have:

$$\dot V = S\left(\ddot e - \lambda v_{\mathrm{des}}\right) + \lambda S\left(-\lambda^{-1}\ddot e + v_{\mathrm{des}} - k\, \mathrm{sgn}(S)\right) = -k\lambda |S|. \quad (16)$$

In this approach, asymptotic stability is verified by Eq. (16), because the derivative of the Lyapunov function candidate is negative definite.

Several factors determine which of the two controller designs is selected for our practical implementation. The second control design has a linearizing term, in contrast to the first. The analysis of the second design can fail owing to estimation inaccuracies in the observations in the presence of uncertainties. On the other hand, the first design accommodates such uncertainties under conditions more suitable for the VS problem. In the first design, $\beta(t)$ is directly determined by the control signal limitations, while in the second design the estimated values from the UKO+KF affect this limitation; moreover, the resulting command to the robot might exceed the admissible bound of the control signal. Therefore, the first control design is selected for the practical implementation.

3.3. Modified controller design
SMC encounters some limitations in practice, especially chattering [23]. The most conventional modification used to limit chattering is the boundary layer approach [23,24]. To attenuate chattering, the signum function can be replaced by a high-slope saturation function, and the control law is altered as:

$$v^1_{\mathrm{eff}} = -\beta(t)\, \mathrm{sat}(S/\varepsilon), \quad (17)$$

Table 1. The controller parameters.

Type of switching function | φ | β₀ | λ | ε
Signum     | 0.1 | 0.05 | 1 | —
Saturation | 0.1 | 0.13 | 1 | ε_x = ε_y = ε_z = 0.5; ε_A = ε_B = 0.38

where $\mathrm{sat}(\cdot)$ is the saturation function and $\varepsilon$ is a positive constant. A suitable approximation requires the use of a small $\varepsilon$. For the stability analysis, a similar argument holds when $|S| \geq \varepsilon$, and the derivative of the Lyapunov function satisfies the inequality $\dot V \leq -\lambda \beta_0 |S|$. Therefore, whenever $|S| \geq \varepsilon$, $|S(t)|$ decreases until it reaches the set $\{|S| \leq \varepsilon\}$ in finite time and remains inside it thereafter. Inside the boundary layer, where $|S| \leq \varepsilon$, asymptotic stability cannot be claimed and only Uniformly Ultimately Bounded (UUB) stability with an ultimate bound is obtained. The ultimate bound can be reduced by decreasing the depth of the boundary layer [23].
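A minimal sketch of the boundary-layer modification of Eq. (17) is given below: the saturation function is linear inside the layer and equals the signum function outside, which is what removes the discontinuity at S = 0.

```python
import numpy as np

def sat(x, eps):
    """Saturation used in Eq. (17): x/eps clipped to [-1, 1]; equals sign(x)
    for |x| >= eps and is linear (high slope for small eps) inside."""
    return np.clip(x / eps, -1.0, 1.0)

def control_law_1_smoothed(S, beta, eps):
    """Boundary-layer version of the first design, Eq. (17)."""
    return -beta * sat(S, eps)

# Contrast of the two switching functions near the manifold: sign() jumps
# across S = 0, while sat() varies continuously inside the boundary layer.
S_samples = np.linspace(-1.0, 1.0, 9)
print(np.sign(S_samples))
print(sat(S_samples, eps=0.5))
```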

3.4. Controller parameter selection
According to the constraints of our experimental VS system, the controller parameters are tuned to the values shown in Table 1. First, based on the trade-off between $\dot e$ and $e$ in Eq. (7), the rate of convergence $\lambda$ is selected. Then, from Eq. (11), with respect to the maximum acceleration of the robot and the maximum velocity of the goal object, the function $\varphi(t)$ is chosen. According to the desired accuracy of the tracking error, $\beta_0$ is selected. The width of the boundary layer, $\varepsilon$, is tuned to reduce the oscillations of the control signal.
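For example, the Table 1 parameters could be collected as the following configuration for use with the control-law sketches above; the per-axis ordering (x, y, z, A, B) of the boundary-layer widths is an assumption.

```python
import numpy as np

# Controller parameters of Table 1, encoded for the control-law sketches above.
PARAMS_SIGNUM = {"phi": 0.1, "beta0": 0.05, "lam": 1.0, "eps": None}
PARAMS_SAT = {
    "phi": 0.1,
    "beta0": 0.13,
    "lam": 1.0,
    # eps_x = eps_y = eps_z = 0.5, eps_A = eps_B = 0.38 (per-axis order assumed)
    "eps": np.array([0.5, 0.5, 0.5, 0.38, 0.38]),
}
```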

4. Experimental results

In this section, the performance of the proposed algorithm is verified through a few experiments. In the first experiment, the efficiency of visual regulation with two different types of switching functions in the SMC technique is examined. The second experiment verifies the overall performance of the servoing system.

4.1. Experiment I
Visual regulation is performed in this experiment: the pose of the end effector should be regulated for any arbitrary fixed pose of the object. The desired sliding mode manifold is produced based on the error between the estimated pose of the target and the pose of the end effector. The main behavior of the control signal depends on the type of switching function, and two different types, signum and saturation, are employed. To analyze the behavior of these regulations, the initial pose of the robot is reset to the same pose for each round of the experiment.

The results of the sliding surface, the control


Figure 5. Sliding surface with the sign function in (a) X direction, and (b) A orientation.

Figure 6. Control signal with the sign function in (a) X direction, and (b) A orientation.

Figure 7. Derivative of the Lyapunov function with the sign function in (a) X direction, and (b) A orientation.

signal (velocity signal), and the derivative of the Lyapunov function in the X direction and A orientation are shown in Figures 5-7 for the signum function, respectively. The X direction and A orientation are selected as representatives of translation and orientation, respectively, to keep the number of illustrated figures at a manageable level. As can be seen in Figure 5(a), the trajectory lies on the sliding surface in the X direction and converges toward zero (in 0.2 seconds). The trajectories and the resulting sliding variables reach zero, which indicates that the error of the relative pose tends to zero in 0.8 seconds and remains there after the object has been tracked. This can also be seen in Figure 6: the velocity in the X direction is produced at -0.15 cm per second during the first 0.2 seconds, and after that the tracking error tends to zero. The oscillations are due to chattering; however, the variations remain around zero, which indicates that the controller has suitably commanded the robot to stay on the sliding surface. As the object has no rotational motion, the control signal and the sliding surface in the A orientation dither around zero, as shown in Figures 5(b) and 6(b). Figure 7 shows that the derivative of the Lyapunov

Figure 8. Sliding surface with the sat function in (a) X direction, and (b) A orientation.

Figure 9. Control signal with the sat function in (a) X direction, and (b) A orientation.

Figure 10. Derivative of the Lyapunov function with the sat function in (a) X direction, and (b) A orientation.

function is always negative, which verifies the stability of this algorithm in practice.

In our case, the oscillations cannot significantly harm the mechanical parts of the robot, since they are attenuated by the filters implemented in the inner structure of the robot. Nevertheless, the chattering phenomenon is not desirable in practice. To achieve better performance in controlling the robot, we replaced the signum function with the saturation function. The results for the sliding surface, the control signal and the derivative of the Lyapunov function in the X direction and A orientation are shown in Figures 8-10, respectively. The trajectories and the resulting sliding variable reach the desired condition in about 0.2 seconds; however, between 0.2 and 0.5 seconds the tracking errors are not exactly zero but are restricted to a bounded region. This is in complete agreement with the stability analysis, which guarantees only the UUB condition on the tracking errors. In Figure 9, the same behavior can be seen in the control signal: the signal in the X direction is produced at about -0.22 cm per second and then reaches zero in 0.15 seconds. This signal is not smooth, but the oscillations are significantly reduced. Since the


Table 2. The relative pose obtained from the UKO estimator.

Position/orientation | Initial real value | Initial estimated value | Final estimated value
X (cm)  | 9    | 9.03   | 0.25
Y (cm)  | 6.5  | -6.59  | -0.24
Z (cm)  | 28   | 27.78  | 18.10
A (deg) | -25  | -23.91 | 1.00
B (deg) | 0    | 2.02   | 0.81

object has no rotational motion, the control signal and the sliding surface reach zero in 0.15 seconds. Figure 10 illustrates the derivative of the Lyapunov function, which is negative during the end-effector movement; this verifies the overall stability of the algorithm.

The results of regulating the robot pose from an initial relative pose to the desired position are numerically summarized in Table 2. The estimated values shown in Table 2 are from the UKO+KF estimator: the "Initial Estimated Value" refers to the relative pose before the end-effector movement, and the "Final Estimated Value" denotes the relative pose after the movement. Before performing Experiment I, the "Initial Real Value" is set manually. By comparing the estimated and the real pose of the target object in this experiment with Experiment II in [12], the nonlinear controller increases the performance of the VS system (i.e. the "Final Estimated Value" of the object pose is close to the desired pose). The desired pose difference between the end effector and the target object is zero in the X, Y directions and A, B orientations, but the distance is 18 cm in the Z

direction (for safety). By including a stop condition in our algorithm, we prevent small movements of the robot caused by measurement noise around the desired pose; thus, the "Final Estimated Value" is not exactly zero. The numerical results signify the effectiveness of the applied estimator in the visual regulation methodology.

4.2. Experiment II
The aim of Experiment II is to evaluate the overall performance of the proposed visual servoing technique, which is done through a few independent motions of the object. For this purpose, the object is moved in the X, Y, Z, A and B directions. Following the previous experiment, a saturation-type SMC with boundary layer width ε is considered for Experiment II.

In order to better illustrate the experiment, the estimated relative pose of the target object with respect to the camera frame is shown in Figure 11 for all directions during the object movement. To show the tracking error performance, Figure 12 depicts the sliding mode surface only in the Y direction and B orientation as samples. Figure 11 shows that when the object moves in any direction, the errors

Figure 12. Sliding surface in (a) Y direction, and (b) B orientation.

Figure 11. UKO+KF estimation of motion in (a) X direction, (b) Y direction, (c) Z direction, (d) A orientation, and (e) B orientation.


with respect to this motion rapidly grow, and therefore the corresponding sliding surface grows. The proposed controller then commands the end effector to move toward the target. For example, consider Figure 11(a): after 6 seconds, the object is moved approximately -6 cm in the X direction. Proportional to this movement, the sliding surface in the X direction is generated, and the controller compensates these errors; after approximately 4 seconds, the estimated relative pose in the X direction decreases to zero. A similar analysis can be made for the B orientation. Figures 11(e) and 12 show that after 25 seconds the object starts to rotate about the Y axis by about 22 degrees; the estimator detects this rotation, and consequently the controller rotates the end effector so that the errors decrease to zero. The dashed line shows the desired relative pose in each direction, so the robot manipulator is able to keep the desired relative distance. Table 3 numerically summarizes the performance of the end-effector movement.

To analyze the stability of visual servoing for the moving object, consider Figure 13, which depicts the derivative of the Lyapunov function, a suitable measure for the proposed controller strategy. The sliding surfaces in Figure 12 and the derivatives of the Lyapunov function in Figure 13 show non-smooth movement. At first glance, the jitters in the plots suggest non-smooth movement, but if the plot is suitably magnified in the vicinity of the jitters, it can be seen that the signal is in fact smooth. This was shown in Experiment I, and the performance is smooth and

Figure 13. Derivative of the Lyapunov function in (a) Y direction, and (b) B orientation.

Table 3. Performance of tracking in the visual servoing experiment.

Position/orientation | Initial time of movement (sec) | Time of tracking (sec) | Compensated relative distance
X (cm)  | 6  | 10 | -6.0
Y (cm)  | 10 | 14 | +6.0
Z (cm)  | 16 | 19 | -5.0
A (deg) | 19 | 24 | +16.89 & -27.75
B (deg) | 25 | 30 | 22

proper. Note that, as the target is moved by a human, the recorded trajectory includes minute oscillations and vibrations that stem from the motion of the human hand. For a detailed examination of the overall performance of the VS system, a video clip is available at: http://saba.kntu.ac.ir/eecd/aras/movies/SMC-Vservo.mpg.

5. Conclusions

In this paper, we present a sliding mode controller within the PBVS approach to control and path-plan an industrial robot manipulator. By combining these two tasks through the SMC technique, we obtain a unified approach to tune the whole control system in all DOFs of the robot manipulator. In the proposed approach, the position, velocity and acceleration of the target object are estimated through a robust estimator (UKO+KF), and the estimated values are used to define a PD-type sliding mode surface. A stable and robust SMC is then employed to produce a suitable control signal to move the robot toward the desired target. In order to select the controller parameters, we have considered practical limitations and error bounds, which aim at minimizing the relative pose error in each DOF of the robot manipulator at hand. In the experiments, we observed that the proposed structure performs well in noisy backgrounds. It may be concluded that this technique can be used for the industrial implementation of a visual servoing system. The idea of using SMC can be extended to the IBVS approach. In future work, the stability analysis of this algorithm can be investigated under uncertainties, and the controller parameters can then be tuned with specific objectives.

References

1. Malis, E. and Benhimane, S. "A unified approach to visual tracking and servoing", Robotics and Autonomous Systems, 52(1), pp. 39-52 (2005).

2. Gans, N.R. and Hutchinson, S.A. "Stable visual servoing through hybrid switched-system control", IEEE Trans. on Robotics, 23(3), pp. 530-540 (2007).

3. Hutchinson, S., Hager, G.D. and Corke, P.I. "A tutorial on visual servo control", IEEE Trans. on Robotics and Automation, 12(5), pp. 651-670 (1996).

4. Chesi, G. and Vicino, A. "Visual servoing for large camera displacements", IEEE Trans. on Robotics, 20(4), pp. 724-735 (2004).

5. Chaumette, F. and Hutchinson, S. "Visual servo control. Part I: Basic approaches", IEEE Robotics and Automation Magazine, 13(4), pp. 82-90 (2006).

6. Malis, E., Mezouar, Y. and Rives, P. "Robustness of image based visual servoing with a calibrated camera in the presence of uncertainties in the three-dimensional structure", IEEE Trans. on Robotics, 26(1), pp. 112-120 (2010).


7. Malis, E., Chaumette, F. and Boudet, S. "2-1/2-D visual servoing", IEEE Trans. Robot. Automat., 15, pp. 238-250 (1999).

8. Wilson, W.J. "Relative end-effector control using Cartesian position based visual servoing", IEEE Trans. on Robotics and Automation, 12(5), pp. 684-696 (1996).

9. Taghirad, H.D., Atashzar, S.F. and Shahbazi, M. "A robust solution to three-dimensional pose estimation using composite extended Kalman observer and Kalman filter", IET Computer Vision, 6(2), pp. 140-152 (2012).

10. Janabi-Sharifi, F. and Marey, M. "A Kalman-filter-based method for pose estimation in visual servoing", IEEE Trans. on Robotics, 26(5), pp. 939-947 (2010).

11. Salehian, M., RayatDoost, S. and Taghirad, H.D. "Robust unscented Kalman filter for visual servoing system", 2nd International Conference on Control, Instrumentation, and Automation (ICCIA), Shiraz, Iran, pp. 1006-1011 (2011).

12. Stepanenko, Y., Cao, Y. and Su, C.Y. "Variable structure control of robotic manipulator with PID sliding surfaces", International Journal of Robust and Nonlinear Control, 8(1), pp. 79-90 (1998).

13. Zanne, P., Morel, G. and Plestan, F. "Robust vision based 3D trajectory tracking using sliding mode control", Proc. of the IEEE Int. Conf. on Robotics and Automation, 3, pp. 2088-2093 (2000).

14. Kim, J.K., Kim, D.W., Choi, S.J. and Won, S.C. "Image-based visual servoing using sliding mode control", SICE-ICASE International Joint Conference, pp. 4996-5001 (2006).

15. Zanne, P., Morel, G. and Plestan, F. "Robust 3D vision based control and planning", Proc. of the IEEE Int. Conf. on Robotics and Automation, 5, pp. 4423-4428 (2004).

16. Li, F. and Xie, H.L. "Sliding mode variable structure control for visual servoing system", International Journal of Automation and Computing, 7(3), pp. 317-323 (2010).

17. Sim, T.P., Hong, G.S. and Lim, K.B. "Modified Smith predictor with DeMenthon-Horaud pose estimation algorithm for 3D dynamic visual servoing", Robotica, 20(6), pp. 615-624 (2002).

18. Yoon, Y., Kosaka, A. and Kak, A.C. "A new Kalman-filter-based framework for fast and accurate visual tracking of rigid objects", IEEE Trans. on Robotics, 24(5), pp. 1238-1251 (2008).

19. Wang, J.H. and Chen, J.B. "Adaptive unscented Kalman filter for initial alignment of strapdown inertial navigation systems", Proc. of the Ninth Int. Conf. on Machine Learning and Cybernetics, Qingdao, 3, pp. 1384-1389 (2010).

20. Taghirad, H.D., Shahbazi, M., Atashzar, S.F. and RayatDoost, S. "A robust pose-based visual servoing technique for redundant manipulators", accepted for publication in Robotica (May 2015).

21. Wan, E.A. and van der Merwe, R. "The unscented Kalman filter for nonlinear estimation", Proc. of the IEEE Symposium 2000 (AS-SPCC), Lake Louise, Alberta, Canada, Oct., pp. 153-158 (2000).

22. Slotine, J.J.E. "Sliding controller design for non-linear systems", Int. Journal of Control, 40(2), pp. 421-434 (1984).

23. Khalil, H.K., Nonlinear Systems, 3rd Edn., Prentice Hall, Upper Saddle River, New Jersey (2002).

Biographies

Mahsa Parsapour received her BSc and MSc degrees in Electrical Engineering from K.N. Toosi University of Technology, Tehran, Iran, in 2011 and 2013, respectively. She is currently a researcher in the Advanced Robotics and Automated Systems (ARAS) group at K.N. Toosi University of Technology, Tehran, Iran. Her research interests are robotics, visual servoing, nonlinear control, and image processing.

Soheil RayatDoost received his BSc degree in Electrical Engineering from K.N. Toosi University of Technology, Tehran, Iran, in 2009. He then worked as a researcher in the Advanced Robotics and Automated Systems (ARAS) group at K.N. Toosi University of Technology until 2013. He received his MSc degree from the Department of Electrical and Computer Engineering, University of Tehran, Iran, in 2014. His current research interests include pattern recognition, computer vision, affective computing, identification, and robotics.

Hamid D. Taghirad received his BSc degree in Mechanical Engineering from Sharif University of Technology, Tehran, Iran, in 1989, his MSc in Mechanical Engineering in 1993, and his PhD in Electrical Engineering in 1997, both from McGill University, Montreal, Canada. He is currently a Professor and the dean of the Faculty of Electrical Engineering, and the Director of the Advanced Robotics and Automated Systems (ARAS) group at K.N. Toosi University of Technology, Tehran, Iran. He is a senior member of IEEE, the chairman of the IEEE Control Systems chapter of the Iran section, a member of the board of the Industrial Control Center of Excellence (ICEE) at K.N. Toosi University of Technology, editor-in-chief of the Mechatronics Magazine, and a member of the editorial boards of the International Journal of Robotics: Theory and Application and the International Journal of Advanced Robotic Systems. His research interest is robust and nonlinear control applied to robotic systems. His publications include five books and more than 190 papers in international journals and conference proceedings.

