Fast Global Reflectional Symmetry Detection for Robotic Grasping and Visual Tracking


    Wai Ho Li, Alan M. Zhang and Lindsay Kleeman

Centre for Perceptive and Intelligent Machines in Complex Environments: Intelligent Robotics

    Monash University, Clayton

    Melbourne, Australia

{Wai.Li, Alan.Zhang, Lindsay.Kleeman}@eng.monash.edu.au

Abstract

Many objects found in domestic environments are symmetrical. In this paper, a novel method to detect axes of reflectional symmetry in digital images is presented. The proposed approach utilizes image gradients and edge pixel matching to find lines of symmetry. Preliminary results, comparing the symmetry detector with the Generalized Symmetry Transform [Reisfeld et al., 1995], are provided.

    1 Introduction

Symmetry is one of many attention mechanisms that humans use in everyday life. Many objects in the domestic environment are symmetrical. In general, cups, cans, bowls and boxes have reflectional symmetry about a straight line. The way objects are grasped and manipulated is also related to their axis of symmetry. For example, in the case of a can, grasping is done by applying force on opposite sides of its axis of symmetry, in a manner perpendicular to the axis. This paper describes a reflectional symmetry detector that locates global mirror lines in 2D digital images.

    1.1 Related Research

Symmetry has been viewed as an important attentional operator in Computer Vision for several decades. The Generalized Symmetry Transform [Reisfeld et al., 1995] is a multi-scale approach to detecting reflectional and radial symmetry in digital images. This transform produces a symmetry map, which is similar to an edge image. Each point on the map consists of the magnitude and direction of symmetry inferred by all other points in the image, calculated at a given scale. At large scale values, the transform can be seen as a global, context-free (low level) symmetry operator. With small scale values, it acts in a similar fashion to interest point detectors, such as Harris Corners [Harris and Stephens, 1988]. This transform has been used in human identification [Hayfron-Acquah et al., 2002] and corner detection [Oh and Chien, 2002] applications. Recently, a focused version of the Generalized Symmetry Transform [Choi and Chien, 2004] has been suggested. This version of the transform can recognize image features, such as polygon corners, at a specific angle.

Other global symmetry detection schemes have also been proposed. A clustering approach was suggested by [Labonte et al., 1993]. An algorithm involving Generalized Complex Moments was developed by [Shen et al., 1999], which is capable of finding both reflectional and radial symmetry in 2D high contrast grayscale images. Its computational complexity and ability to operate on natural images have yet to be tested.

Other robotic applications of symmetry include human gait detection [Hayfron-Acquah et al., 2003], completion of occluded shapes [Zabrodsky et al., 1993] and the use of fast radial symmetry [Loy and Zelinsky, 2003] in the detection and tracking of human eyes.

1.2 The Generalized Symmetry Transform

The Generalized Symmetry Transform uses image gradient orientation, a distance-based Gaussian function and image gradient intensity to generate symmetry maps. The symmetry map is essentially an image of symmetry contributions made by all pixel pairs in the original image. Two symmetry maps are produced by the transform, one containing the magnitude of symmetry contribution and the other the phase.

With p_i and p_j being two pixels in the input image, we define a set g(p) as

$$g(p) = \left\{ (i,j) \;\middle|\; \frac{p_i + p_j}{2} = p \right\} \quad (1)$$

The following distance weight function is used

$$D(i,j) = \frac{1}{\sqrt{2\pi\sigma}}\, e^{-\frac{\|p_i - p_j\|}{2\sigma}} \quad (2)$$

along with a phase weight function

$$P(i,j) = \big(1 - \cos(\theta_i + \theta_j - 2\alpha_{ij})\big)\big(1 - \cos(\theta_i - \theta_j)\big) \quad (3)$$


    Figure 1: The Generalized Symmetry Transform

The parameters involved are displayed in Figure 1. Note that σ alters the scale at which symmetry calculations take place, thereby allowing for the generation of symmetry maps at different resolutions. The phase function, P(i,j), contains two terms. The first term, (1 − cos(θ_i + θ_j − 2α_{ij})), is maximized when gradients at p_i and p_j are oriented in a symmetric fashion, similar to that shown in Figure 3(a). The second term prevents the matching of pixels with similar image gradients, such as pixels along a straight line.

The contribution function C(i,j) is defined as

$$C(i,j) = D(i,j)\, P(i,j)\, r_i r_j \quad (4)$$

where r_i and r_j are logarithmic functions of the pixels' gradient intensity.

The contributions of all points are used in the generation of the symmetry map. The symmetry magnitude, also known as the isotropic symmetry, is

$$M(p) = \sum_{(i,j) \in g(p)} C(i,j) \quad (5)$$

In order to obtain an axis of reflectional symmetry from the map, thresholding followed by a Hough Transform is required.
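To make Equations 1-5 concrete, the following is a minimal, unoptimized sketch of the isotropic symmetry map in Python with NumPy (assumed here; the paper's own implementations were in MATLAB and C). All function and variable names are illustrative, and the midpoint of a pixel pair is rounded to the nearest pixel, which the equations leave implicit.

```python
import numpy as np

def isotropic_symmetry_map(gray, sigma=10.0):
    """Sketch of Equations 1-5: every pixel pair contributes C(i, j)
    at its midpoint. O(n^2) in the number of pixels, hence slow."""
    gy, gx = np.gradient(gray.astype(float))
    theta = np.arctan2(gy, gx)                 # gradient orientation
    r = np.log(1.0 + np.hypot(gx, gy))         # logarithmic gradient intensity
    ys, xs = np.nonzero(r)                     # zero-gradient pixels contribute 0
    pts = list(zip(ys, xs))
    M = np.zeros(gray.shape)
    for a in range(len(pts)):
        for b in range(a + 1, len(pts)):
            (yi, xi), (yj, xj) = pts[a], pts[b]
            dy, dx = yj - yi, xj - xi
            alpha = np.arctan2(dy, dx)         # angle of the line joining the pair
            D = np.exp(-np.hypot(dx, dy) / (2 * sigma)) / np.sqrt(2 * np.pi * sigma)
            ti, tj = theta[yi, xi], theta[yj, xj]
            P = (1 - np.cos(ti + tj - 2 * alpha)) * (1 - np.cos(ti - tj))
            C = D * P * r[yi, xi] * r[yj, xj]            # Equation 4
            M[(yi + yj) // 2, (xi + xj) // 2] += C       # Equations 1 and 5
    return M
```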

    2 Proposed Method

The proposed approach to symmetry detection is unique in several ways. Firstly, the method was designed with robotic applications in mind. It is not meant to provide comprehensive coverage of symmetry at all scales, nor will it provide a Gaussian cloud of possible locations of symmetry. The symmetry detection algorithm utilizes the Hough Transform [Duda and Hart, 1972] to quickly process images and identify global lines of reflectional symmetry. The detector returns the axes of reflectional symmetry as straight lines. The algorithm is meant to operate very quickly, and requires less computational resources than the Generalized Symmetry Transform.

Figure 2: Fast Reflectional Symmetry Detection using Hough Transform. Note that only edge pixels are used by this algorithm.

2.1 Fast Symmetry Detection using Hough Transform

The input image is first converted into an edge image using an appropriate edge filter. We used the Canny edge filter for this task. Each pair of edge pixels, i and j, votes for a particular r and θ in the Hough transform accumulator. A weight function W(i,j) is used to adjust the Hough accumulation for edge pixels with non-symmetric gradient orientations. The edge points i and j are shown in Figure 2. In order to maximize Hough Space resolution, the origin of the coordinate system is placed at the centre of the image.
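A sketch of this set-up stage might look as follows, assuming OpenCV for the Canny and Sobel filters (the paper does not name a library; the thresholds and all helper names here are illustrative):

```python
import cv2
import numpy as np

def prepare_edges(gray):
    """Canny edge detection, centre-of-image coordinates and Sobel
    gradient orientation for each edge pixel. gray: uint8 image."""
    edges = cv2.Canny(gray, 100, 200)          # thresholds are illustrative
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
    ys, xs = np.nonzero(edges)
    h, w = gray.shape
    edge_xy = np.stack([xs - w / 2.0, ys - h / 2.0], axis=1)  # origin at centre
    grad_theta = np.arctan2(gy[ys, xs], gx[ys, xs])
    return edge_xy, grad_theta
```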

The following equation is used to calculate θ. Note that −π/2 < θ(i,j) ≤ π/2.

$$\theta(i,j) = \arctan\left(\frac{y_j - y_i}{x_j - x_i}\right) \quad (6)$$

The radius R(i,j) is found using

$$R(i,j) = \left(\frac{x_i + x_j}{2}\right)\cos[\theta(i,j)] + \left(\frac{y_i + y_j}{2}\right)\sin[\theta(i,j)] \quad (7)$$
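In code, Equations 6 and 7 reduce to a few lines. A sketch, with coordinates assumed to be relative to the image centre as above; arctan2 plus a fold is used to respect the stated range of θ:

```python
import numpy as np

def line_params(xi, yi, xj, yj):
    """Equations 6 and 7: the (R, theta) values that edge pixel pair
    (i, j) votes for."""
    theta = np.arctan2(yj - yi, xj - xi)       # Eq. 6
    if theta > np.pi / 2:                      # fold into (-pi/2, pi/2]
        theta -= np.pi
    elif theta <= -np.pi / 2:
        theta += np.pi
    r = ((xi + xj) / 2) * np.cos(theta) + ((yi + yj) / 2) * np.sin(theta)  # Eq. 7
    return r, theta
```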

The set f is the set of edge pixel pairs with the corresponding R and θ shown in Figure 2.

$$f(r, \theta) = \{\, (i,j) \mid R(i,j) = r,\ \theta(i,j) = \theta \,\} \quad (8)$$

Note that only edge pixels are considered, in contrast to the Generalized Symmetry Transform, which processes all image pixels. In order to reject noisy edge pixels and to favour votes made by edge pixels with symmetric image gradients, weighted Hough voting is used:

$$H(r, \theta) = \sum_{(i,j) \in f(r, \theta)} W(i,j) \quad (9)$$

A Gaussian weight function was used in the Hough voting process. The weight W(i,j) is maximized when the gradient orientations of a pair of pixels are symmetric about their midpoint. Figure 3 illustrates the relationship between image gradient angles and the level of symmetry. When the angles θ_i and θ_j are equal, the level of symmetry is at its maximum. If |θ_i − θ_j| = π/2, there is no symmetry between the edge points. Only the absolute magnitude of the angles is used in these calculations. Horizontal and vertical Sobel filtering is used to determine image gradients. The parameter σ in Equation 10 is used to adjust the strictness of the gradient symmetry criteria. A low sigma will favour edge pairs with high symmetry between the gradient directions of their edge pixels. Using a high sigma has the reverse effect, allowing less symmetric edge pairs to have a greater effect in the voting.

$$W(i,j) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(\theta_i - \theta_j)^2}{2\sigma^2}} \quad (10)$$
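The voting loop of Equations 8-10 can then be sketched as follows. It reuses the line_params helper above; the bin counts, r_max and sigma are illustrative defaults, not values from the paper:

```python
import numpy as np

def fast_symmetry_vote(edge_xy, grad_theta, r_bins=180, t_bins=180,
                       r_max=200.0, sigma=0.35):
    """Weighted Hough accumulation over all edge pixel pairs.
    edge_xy: (N, 2) centre-relative (x, y); grad_theta: (N,) Sobel
    gradient orientations."""
    H = np.zeros((r_bins, t_bins))
    n = len(edge_xy)
    for a in range(n):
        for b in range(a + 1, n):
            (xi, yi), (xj, yj) = edge_xy[a], edge_xy[b]
            r, theta = line_params(xi, yi, xj, yj)
            # Eq. 10: Gaussian weight on the absolute gradient angles.
            d = abs(grad_theta[a]) - abs(grad_theta[b])
            w = np.exp(-d**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
            ri = int((r + r_max) / (2 * r_max) * (r_bins - 1))
            ti = int((theta + np.pi / 2) / np.pi * (t_bins - 1))
            if 0 <= ri < r_bins:
                H[ri, ti] += w                 # Eq. 9
    return H
```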

The Hough space is discretized into a fixed number of cells or bins in both the R and θ directions. These bins accumulate votes made by the edge pixel pairs. After accumulation, peaks in the Hough accumulator are identified using Non-Maxima suppression. This algorithm locates the Hough bin with the highest value and selects it as a peak. It then suppresses all bins around the selected peak to zero. This is repeated until all peaks with a value above a threshold are found. The default suppression neighbourhood is (1/10)R_max by (1/10)θ_max. The suppression threshold is half of the highest peak in the accumulator at the start of the suppression, and remains the same through all iterations.
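The non-maxima suppression just described might be sketched like this; the neighbourhood fractions and half-maximum threshold follow the defaults given in the text, and the function name is illustrative:

```python
import numpy as np

def find_peaks(H, frac_r=0.1, frac_t=0.1):
    """Iterative non-maxima suppression: take the largest Hough bin,
    zero its neighbourhood, repeat while bins remain above half of
    the initial maximum."""
    H = H.copy()
    nr = max(1, int(frac_r * H.shape[0]))      # 1/10 of the R range
    nt = max(1, int(frac_t * H.shape[1]))      # 1/10 of the theta range
    threshold = 0.5 * H.max()                  # fixed across iterations
    peaks = []
    while threshold > 0 and H.max() >= threshold:
        ri, ti = np.unravel_index(np.argmax(H), H.shape)
        peaks.append((ri, ti))
        H[max(0, ri - nr):ri + nr + 1, max(0, ti - nt):ti + nt + 1] = 0.0
    return peaks
```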

    3 Preliminary Results

Generalized Symmetry and Fast Symmetry detection results are presented below. In order to quantify the comparison between these two methods, simple pass-fail trials were used. The test images contained a grey vase with a vertical line of symmetry through its centre point. The location of this line of reflectional symmetry was used as ground truth. If the location of the symmetry line detected with Fast Symmetry was within 1 pixel of ground truth, the test was successful. Otherwise, the symmetry detection was considered a failure. As Generalized Symmetry does not return a symmetry line, the location with maximum contributed value, that is, the point with maximum isotropic symmetry, was used as an indication of detection accuracy. If this point is located within 1 pixel of the real symmetry line, the test was a success.

(a) |θ_i − θ_j| = 0  (b) |θ_i − θ_j| ≈ π/2

Figure 3: Angles used in weight calculations. Arrows indicate image gradient direction at the edge pixels. Maximum symmetry occurs when θ_i = θ_j, as in (a). (b) will receive a very low voting weight, as |θ_i − θ_j| ≈ π/2.

Three test images were used. The first had a dark shape, the vase, against a light background. The other two images had the same vase set against backgrounds with changing intensity. The tests were repeated after adding Gaussian noise to the input images. All the test images were 64x81 pixels in size.

    3.1 Trial Results

In all test cases, the Fast Symmetry detector was able to find the vertical line of symmetry at the exact ground truth location. The Generalized Symmetry Transform was able to find the axis of symmetry in Test Image 1, when no noise was added. Lowering the scale parameter also produced a corner-detecting behaviour, as seen in Figures 4(c) and 7(d). With added noise, the Generalized Symmetry algorithm was only able to detect the axis of symmetry with σ = 10.

For Test Images 2 and 3, where the background had variations in intensity, Fast Symmetry detection was successful for both images. The Generalized Symmetry algorithm was only able to find the line of symmetry with a σ of 10, for Test Image 3. As seen in Figure 6(d), the line of symmetry found was near the top of the vase, and was not very distinct. With the addition of Gaussian noise, the Generalized approach failed for both images, regardless of the scale factor, σ, used.

Looking at Equation 3, the problem that varying background gradients pose for the Generalized Symmetry Transform becomes more apparent. The transform was designed to favour opposing image gradients, while rejecting image gradients in the same direction. Basically, the algorithm assumes either light objects on a dark background, or dark objects on a light background. With variations in the background, this leads to zero contributions being made by pixel pairs across the left and right edges of the vase in Test Image 2. The algorithm was still able to find the vertical line of symmetry in Figure 6, as the gradient variations only affected matches that produced symmetry along horizontal lines. That is, the background gradient affected pixel pairs that had pixels on the top and bottom of the vase, not those on the left and right side.

(a) Test Image 1  (b) Fast Symmetry  (c) Generalized Symmetry, σ = 2.5  (d) Generalized Symmetry, σ = 10, with edge image overlaid  (e) Test Image 1 with Gaussian noise added  (f) Fast Symmetry  (g) Generalized Symmetry, σ = 2.5  (h) Generalized Symmetry, σ = 10

Figure 4: Symmetry detection results for Test Image 1. (b)-(d) contain results for the test image. (e) is Test Image 1 with added Gaussian noise. The noise had σ = 0.1, with image intensity defined between 0 and 1. (f)-(h) contain detection results for the noisy image. Bright pixels in the Generalized Symmetry results have high levels of detected symmetry.

(a) Test Image 2  (b) Fast Symmetry  (c) Generalized Symmetry, σ = 2.5  (d) Generalized Symmetry, σ = 10  (e) Test Image 2 with Gaussian noise added  (f) Fast Symmetry  (g) Generalized Symmetry, σ = 2.5  (h) Generalized Symmetry, σ = 10

Figure 5: Symmetry detection results for Test Image 2. Note the intensity variation in the background of (a). (b)-(d) contain results for the test image. (e) is Test Image 2 with added Gaussian noise. The noise had σ = 0.1, with image intensity defined between 0 and 1. (f)-(h) contain detection results for the noisy image.

(a) Test Image 3  (b) Fast Symmetry  (c) Generalized Symmetry, σ = 2.5  (d) Generalized Symmetry, σ = 10

Figure 6: Test Image 3 results.

3.2 Comparison Between Fast Symmetry and Generalized Symmetry Detection

In Figure 7, the multi-resolution nature of the Generalized Symmetry Transform can be seen. By varying σ, corners as well as symmetry lines could be seen on the maps. However, the choice of scale, σ, is not directly obvious from the input image dimensions. Also, diagonal symmetry lines were not found in Figure 7 using the transform unless σ was set to between 9 and 11. Contrary to this, our detector was able to find all four lines of reflectional symmetry. Even though the edge image produced by the Canny filter had several imperfections along the left side of the square, the detector still functioned correctly.

The computational complexity of both symmetry algorithms is O(n²), with n equal to the number of input pixels. The number of possible pixel pairs is given by n(n−1)/2. However, the Fast Symmetry algorithm only operates on edge pixels, while the Generalized Symmetry algorithm operates on all image pixels. The number of edge pixels is generally much smaller than the total number of pixels in an image. Hence, the Fast Symmetry algorithm requires fewer calculations than the Generalized approach. Our approach has combined the post-processing stage of applying a Hough Transform to the symmetry map into the algorithm itself. The calculation of local gradient intensities has been removed by the use of edge images, which removes pixels with low image gradient magnitude.

(a) Input Image  (b) Fast Symmetry  (c) Gen. Symmetry σ = 15  (d) Gen. Symmetry σ = 2.5

Figure 7: Symmetry detection performed on a simple square box. Top left: original image. Top right: Hough Transform symmetry detection results. Bottom: Generalized Symmetry Transform results.

There was a noticeable difference in the execution time for the two algorithms. The original image in Figure 7 is 64x64 pixels in size. The Generalized Symmetry Transform has n = 4096 for this image. This produced over 8 million unique pixel pairs, a large number for such a small image. With our edge-based detector, the edge image consisted of around 200 pixels. This meant fewer than 30000 pairs of pixels. Clearly, the generalized method will have serious computational limitations if applied to larger images, especially in time-critical situations, such as visual tracking and robotic grasping.
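The pair counts quoted above follow directly from n(n−1)/2; a quick check:

```python
def pairs(n):
    # number of unordered pixel pairs
    return n * (n - 1) // 2

print(pairs(4096))  # 8386560 -> "over 8 million" for a 64x64 image
print(pairs(200))   # 19900   -> "fewer than 30000" for ~200 edge pixels
```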

Both algorithms were implemented as MATLAB scripts, neither optimized nor vectorized. On a host PC with a Pentium 4 2.2GHz processor and 1GB of main memory, the Fast Symmetry algorithm required only 5 to 10 seconds to complete for the 64x64 images. The Generalized Symmetry Transform required over three minutes to execute. For larger input images, around 300x300 in size, the Generalized Symmetry Transform took a very long time to run under MATLAB, in the order of several hours. The Fast Symmetry algorithm, including Non-Maxima suppression to detect peaks in Hough space, required around 10 to 20 seconds to process these images.

The Fast Symmetry algorithm has also been implemented in C code. This program was tested on images taken with a Firewire (IEEE1394) camera. The scene consisted of symmetrical objects, such as cups and bottles, under non-uniform lighting, with some background clutter. The program was able to perform symmetry detection on these test images in under 1 second. The Canny edge detection as well as the peak finding were included in the timing. By sampling the edge pixels before pairing, similar to the approach used in the Randomized Hough Transform [Xu and Oja, 1993], the performance of our algorithm can be improved, with some loss in noise robustness. Sampling one-quarter of the edge pixels, the program was able to perform symmetry detection on 640x480 images in roughly 150 milliseconds with no noticeable change in detection rate.
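The sampling step mentioned above could be sketched as below. Keeping a fraction f of the edge pixels cuts the number of pairs by roughly a factor of f², so quarter sampling gives about a 16x reduction (all names are illustrative):

```python
import numpy as np

def sample_edges(edge_xy, grad_theta, fraction=0.25, rng=None):
    """Randomly keep a fraction of edge pixels before pairing, in the
    spirit of the Randomized Hough Transform [Xu and Oja, 1993]."""
    rng = rng or np.random.default_rng()
    n_keep = max(2, int(fraction * len(edge_xy)))
    idx = rng.choice(len(edge_xy), size=n_keep, replace=False)
    return edge_xy[idx], grad_theta[idx]
```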

3.3 Fast Symmetry Detection on Household Objects

Source images containing objects commonly found in domestic environments were obtained via Google image searches. These images vary in size and complexity. Some of them have poor lighting and produced noisy edge images. Due to their size, symmetry detection using the Generalized Transform was not feasible. As such, only the Fast Symmetry Detection results were computed. The results are shown in the Appendix.

    4 Applications and Future Work

The Fast Symmetry Detection algorithm was designed for use in robotic applications. It will be applied to object tracking and object localization through stereo matching of symmetry lines. The detected symmetry can provide additional information for grasping of objects.

The detector will be employed in a robotic system to locate, manipulate and visually track household objects. The detected symmetry, especially axes of symmetry perpendicular to the table plane, will be used by the robot as a way to identify possible objects in the scene. The experimental platform consists of a pan-tilt capable robot head with colour stereo vision and a PUMA 6-Degree-of-Freedom robot arm with a two-fingered end effector. Objects found in the domestic environment, such as cups, bowls, cans and plates, will be the test cases in these experiments.

Previously, symmetry has mainly been applied to offline processing domains, such as image registration, 3D model compression, solving for the shape of partially occluded objects and so on. The speed of this algorithm should allow it to be used in real-time applications such as tracking and grasping.

Currently, peak detection in Hough space is a major weakness of the algorithm. As seen in the results when operating on noisy real-world images, too many peaks were found. This led to an excess of symmetry lines, even when σ was decreased to reject noisy matches. Approaches such as hierarchical Hough Transforms, smoothing of Hough space and Gaussian dispersion of votes may improve the peak finding results.

In terms of pre-processing, better edge filtering will enhance the performance of the algorithm. Fewer noisy edges mean more accurate results and fewer pixels to process, which also leads to faster execution times. The removal of edges before performing the Hough voting is also a possibility. For example, if the robot already knows that there are multiple objects placed on a table, then it can remove all edges oriented parallel to the table, and just look for vertical axes of symmetry using the remaining, non-horizontal edges, as sketched after this paragraph. Perspective projection also has a negative impact on symmetry detection. As such, the removal of perspective effects is also a valuable pre-processing step. This is only a problem when objects are close to the camera, and of sufficient size, to be affected by perspective effects.
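The table-edge removal idea could be realized by filtering edge pixels on gradient orientation before voting. A minimal sketch, assuming the camera is roughly level with the table and using an illustrative tolerance:

```python
import numpy as np

def drop_horizontal_edges(edge_xy, grad_theta, tol=np.radians(10)):
    """Discard edge pixels belonging to near-horizontal edges. A
    horizontal edge has a near-vertical gradient (|theta| near pi/2)."""
    t = np.mod(grad_theta + np.pi / 2, np.pi) - np.pi / 2  # fold to [-pi/2, pi/2)
    keep = np.abs(np.abs(t) - np.pi / 2) > tol
    return edge_xy[keep], grad_theta[keep]
```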

    5 Conclusion

A viable alternative to the Generalized Symmetry Transform has been proposed. The algorithm can detect reflectional lines of symmetry very quickly, even for noisy input images. The algorithm is also able to find lines of symmetry in large images containing background intensity variations. While the Generalized transform requires a scale factor to be set before use, the Fast Symmetry Detector is able to detect reflectional symmetry across the entire image in a scale-free manner. However, our algorithm cannot detect small, locally symmetric features such as the corners of a box and symmetry in fine texture. The proposed algorithm is more computationally efficient than the Generalized Symmetry Transform. After further research, it will be applied to time-critical applications, such as visual tracking and robotic grasping.

Acknowledgements

The work described in this paper was supported by the ARC Centre for Perceptive and Intelligent Machines in Complex Environments (PIMCE). We would like to thank Geoff Taylor for his help with locating symmetry-related references and implementation advice.

    References

[Choi and Chien, 2004] I. Choi and S.I. Chien. A generalized symmetry transform with selective attention capability for specific corner angles. SP Letters, 11(2):255–257, February 2004.

[Duda and Hart, 1972] Richard O. Duda and Peter E. Hart. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM, 15(1):11–15, 1972.

[Harris and Stephens, 1988] Chris Harris and Mike Stephens. A combined corner and edge detector. In M. M. Matthews, editor, Proceedings of the 4th ALVEY Vision Conference, pages 147–151, University of Manchester, England, September 1988.

[Hayfron-Acquah et al., 2002] James B. Hayfron-Acquah, Mark S. Nixon, and John N. Carter. Human identification by spatio-temporal symmetry. In ICPR '02: Proceedings of the 16th International Conference on Pattern Recognition, Volume 1, page 10632, Washington, DC, USA, 2002. IEEE Computer Society.

[Hayfron-Acquah et al., 2003] James B. Hayfron-Acquah, Mark S. Nixon, and John N. Carter. Automatic gait recognition by symmetry analysis. Pattern Recogn. Lett., 24(13):2175–2183, 2003.

[Labonte et al., 1993] F. Labonte, Y. Shapira, and P. Cohen. A perceptually plausible model for global symmetry detection. In ICCV '93, pages 258–263, 1993.

[Loy and Zelinsky, 2003] Gareth Loy and Alexander Zelinsky. Fast radial symmetry for detecting points of interest. IEEE Trans. Pattern Anal. Mach. Intell., 25(8):959–973, 2003.

[Oh and Chien, 2002] Hyun-Hwa Oh and Sung-Il Chien. Exact corner location using attentional generalized symmetry transform. Pattern Recogn. Lett., 23(11):1361–1372, 2002.

[Reisfeld et al., 1995] Daniel Reisfeld, Haim Wolfson, and Yehezkel Yeshurun. Context-free attentional operators: the generalized symmetry transform. Int. J. Comput. Vision, 14(2):119–130, 1995.

[Shen et al., 1999] Dinggang Shen, Horace Ho-Shing Ip, Kent K. T. Cheung, and Eam Khwang Teoh. Symmetry detection by generalized complex (GC) moments: a close-form solution. IEEE Trans. Pattern Anal. Mach. Intell., 21(5):466–476, 1999.

[Xu and Oja, 1993] Lei Xu and Erkki Oja. Randomized Hough transform (RHT): basic mechanisms, algorithms, and computational complexities. CVGIP: Image Underst., 57(2):131–154, 1993.

[Zabrodsky et al., 1993] H. Zabrodsky, S. Peleg, and D. Avnir. Completion of occluded shapes using symmetry. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 678–679, New York, June 1993.


Appendix: Fast Symmetry Detection on Household Objects

(a) Input image  (b) Fast Symmetry

Figure 8: Symmetry Detection on a White Cup. (a) Original image. (b) Symmetry lines detected by the Fast Symmetry algorithm.

(a) Input image  (b) Fast Symmetry

Figure 9: Fast Symmetry Detection on two Highly Textured Objects. The 45 degree line was inter-object symmetry detected between the bottom edge of the right cup and the left edge of the upright cup.

(a) Input image  (b) Fast Symmetry

Figure 10: Symmetry Detection on three bottles placed side-by-side. The results were produced from a Hough space peak search with −5° < θ < 5°.

(a) Input image with Gaussian noise  (b) Fast Symmetry

Figure 11: Symmetry Detection on an image with noise. The results were produced from a Hough space peak search with −5° < θ < 5°.


(a) Input image  (b) Fast Symmetry

Figure 12: For this image, the suppression size used was doubled to remove some inter-object symmetry.

(a) Input image  (b) Fast Symmetry

Figure 13: Fast Symmetry Detection again on Highly Textured Objects. The results were produced from a Hough space peak search with −5° < θ < 5°.

(a) Input image  (b) Fast Symmetry

Figure 14: Symmetry Detection on a scene with multiple cups. (a) Original image. (b) Symmetry lines detected by the Fast Symmetry algorithm.