
Proceedings of the 6th International Symposium on Artificial Intelligence and Robotics & Automation in Space: i-SAIRAS 2001, Canadian Space Agency, St-Hubert, Quebec, Canada, June 18-22, 2001.

Robust 3D Vision for Autonomous Space Robotic Operations

Mark Abraham, Piotr Jasiobedzki, Manickam Umasuthan

MD Robotics, 9445 Airport Rd., Brampton, Ontario, Canada L6S 4J3

[email protected]

Abstract

This paper presents a model based vision system intended for satellite proximity operations. The system uses natural features on the satellite surfaces and three dimensional models of the satellites and docking interfaces, and it does not require any artificial markers or targets. The system processes images from stereo cameras to compute 3D data. The pose is determined using a fast version of an Iterative Closest Point algorithm. The system has been characterized under representative space-like conditions (illumination, material properties and trajectories) and proven to be robust to this environment.

Keywords

Space vision system, 3D tracking, Model based pose estimation, Iterative Closest Point

1 Introduction

Recent space missions, such as ETS-VII, an autonomous rendezvous and docking experiment with unmanned satellites [3, 11], and the International Space Station assembly [9], have demonstrated successful operation of vision systems on-orbit. These vision systems provided information about the position and orientation of observed spacecraft and structures during unmanned and manned operations. However, they relied on the presence of visual targets or retro-reflectors. This restricted vision based operations to predefined tasks only (distances and viewing angles). It also required the ability to control the orientation of the observed spacecraft so that the visual targets were always visible.

Future space operations, such as satellite inspection and servicing, will require much more advanced vision systems, able to detect and recognize observed objects from arbitrary viewpoints [4]. Such a capability will enable servicing spacecraft to locate, approach, and dock with or capture target satellites autonomously and to perform various servicing tasks [6]. Satellite servicing operations performed on-orbit, for example inspection, hardware upgrades and refueling, will extend the life of satellites and reduce operational costs by eliminating launches of hardware that is already on-orbit.

The technical challenge is to develop advanced and flexible vision systems capable of operating reliably in the space environment. Any approach taken to provide three-dimensional data to the robotic guidance system must be robust and reliable throughout a wide range of approach vectors and lighting conditions. The on-orbit environment consists of intense sunlight combined with dark shadows, which presents challenging lighting situations. The contrast in an image is often beyond the dynamic range of the camera, resulting in the loss of some of the data within the scene.

In this paper we describe a vision system developed by MD Robotics that is capable of computing the pose of known objects using only their geometrical models and natural surface features. The system operates reliably under simulated on-orbit conditions (illumination, distance, viewing angle, and partial data loss due to shadows and reflections). The performance of the system has been tested in a laboratory environment representative of satellite proximity operations.

2 Vision system tasks and requirements

Space vision systems provide three-dimensional information about the position and orientation of an observed spacecraft or structure. An image of the Canadarm preparing to capture a free-floating satellite, Spartan, is shown in Figure 1.


Figure 1 Canadarm preparing to grasp the Spartan satellite.

The operating range of such a vision system may extend from approximately 100 m to contact. It is expected that Global Positioning Systems will be used to bring the servicer within this range of the target spacecraft. Different sensors and algorithms will be used for different phases of the satellite operation. The operations are typically divided into phases according to distance: long, medium and short range. At the long range (100 – 20 m) the vision system must determine the bearing and distance to the satellite of interest and its approximate orientation and motion. At the medium range (20 – 2 m) the distance, orientation and motion must be determined very accurately and with high confidence. Only when this information is known might the servicer spacecraft approach the target satellite and dock with it using a short range module (below 2 m).

The short range subsystem will be used for well defined operations of docking or grasping, and the expected range of distances and viewing angles will be restricted by the allowed approach trajectories to the target spacecraft. Lights mounted together with the cameras may be used to illuminate the scene. Computer vision techniques that rely on the presence of well defined features on or around the interface can then be used. For example, techniques similar to those used by the two vision systems already tested in space, on ETS-VII [3] and the SVS [9], can be used, or a variation on techniques applied in industrial automation can be adopted.

The medium range poses a much more difficult challenge, as the viewing angles are unrestricted, the distance varies significantly, and the vision system mostly relies on ambient illumination from the Sun or Earth albedo. For the purpose of our work we assume that the vision system is activated within the medium range and provided with an approximate attitude towards a satellite of interest, its distance and its orientation. Techniques for determining such information are currently being investigated, and concepts and initial results are presented in a companion paper [7].

At the medium range it is expected that the vision system will be capable of estimating the pose of the observed object at a rate of 0.3 – 3 Hz. This requirement is derived from the maximum relative motion of a satellite that still allows capturing or docking with it.

The expected accuracy of a vision system is related to the capture envelope of a mechanical interface. Depending on the interface, this envelope can be from 2" (for small mechanisms) to 10" (Canadarm), and the angular misalignment is in the range of 2 – 10 degrees. However, these requirements are defined at contact and are mostly relevant for the short range system. For the medium range system it is expected that an accuracy on the order of 1% of the distance and several degrees of the object orientation will be sufficient.

On-orbit illumination and the imaging properties of materials used to cover satellites pose significant challenges for any vision system. The illumination changes from day to night as the spacecraft orbits the Earth; for the International Space Station, for example, this period is 90 minutes. The objects can be illuminated by a combination of direct sunlight (an intense, point-like source creating hard shadows), Earth albedo (an extended, diffuse and almost shadow-less source) and on-board lights. The insulation materials used in spacecraft manufacture are either highly reflective metallic foils or featureless white blankets that loosely cover the satellite body. This causes specular reflections and a lack of distinct and visually stable features such as lines or corners.

3 Vision system architecture

The vision system currently under development at MD Robotics is a solution to the medium range vision problem (20 – 2 meters) for the satellite servicing operation. The system uses stereo cameras and an x86 family 6 computer that runs the developed software. The cameras are rigidly coupled together and may be mounted close to a docking interface of a servicer satellite, inside a cargo bay of a shuttle or on an end-effector of a robotic manipulator. We chose to use cameras instead of scanning rangefinders [2, 8] as they are lighter, cheaper, and contain no moving parts, which allows them to better survive the space environment. However, the cameras require ambient illumination or camera lights for their operation.

The architecture of the developed vision system is shown in Figure 2. The processing is divided into off-line and on-line processing. Off-line processing may be performed on the ground or in space, and includes calibration of the cameras and pre-computing data that will be used during the on-line phase. During the on-line phase the vision system acquires images from the cameras, computes the 3D data, and determines and tracks the 3D pose of the observed object. The vision system algorithms are embedded in a server that is responsible for communication with the external world, i.e., user interfaces, robot controllers and a spacecraft Guidance, Navigation and Control system.

Figure 2 Vision system architecture: the stereo cameras feed the 3D computation and 3D pose estimation modules (on-line processing, in space); camera calibration (off-line processing, on the ground or in space) provides the calibration data and the 3D model; the vision server (on-line processing, on the ground or in space) reports the pose and confidence to user interfaces, the data log, and the spacecraft or robot control.

3.1 Camera calibration

The system calibration and data initialization are performed off-line and may be performed on the ground before the launch or in space. The camera calibration module estimates the intrinsic and extrinsic parameters of the stereo cameras. The calibrated camera model includes the focal length, the geometrical distortions of the camera / lens assembly (radial and tangential), and the location of the principal point in the image plane. The camera calibration algorithm is an extension of recent work on camera auto-calibration [14].

Calibration is performed using several arbitrary (unknown) views of a planar calibration target. The target used in our experiments is shown in Figure 3, but any planar object with detectable surface features is sufficient.

Figure 3 A calibration target used by the vision system

Classical approaches to camera calibration are fairly labour intensive. They require complex three dimensional targets, images acquired at multiple camera/target positions, and often an accurate measurement of the relative position of the camera and target. Some algorithms can estimate the relative camera / target pose [13]; however, the mathematical camera model may then capture only simple lens distortions, and large 3D targets are still required for calibration. Such calibration must be performed in ground based laboratories and the process cannot be repeated once the hardware is in orbit. This is a significant limitation, as even a small mis-calibration due to, e.g., thermal and aging effects reduces the accuracy of any vision system. Our approach can be used to calibrate cameras on-orbit, as it only requires a planar object to be presented to the cameras in several arbitrary poses.
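To make this style of planar calibration concrete, the following is a minimal sketch in Python. It is not the authors' implementation: it assumes a checkerboard target and uses OpenCV's implementation of Zhang's method [14]; the actual target in Figure 3 and its feature detector may differ, and the pattern dimensions, square size and file names are placeholders.

# Minimal sketch of planar (Zhang-style) camera calibration from several
# arbitrary views of a flat target; a checkerboard is assumed for illustration.
import cv2
import numpy as np

PATTERN = (9, 6)          # inner corners of the assumed checkerboard
SQUARE = 0.025            # square size in metres (assumed)

# Ideal 3D corner positions on the planar target (Z = 0).
obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

obj_pts, img_pts = [], []
for path in ["view_%02d.png" % i for i in range(12)]:   # several arbitrary views
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, PATTERN)
    if found:
        corners = cv2.cornerSubPix(
            img, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(obj)
        img_pts.append(corners)

# Intrinsics (focal length, principal point) and radial/tangential distortion.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, img.shape[::-1], None, None)
print("reprojection error [px]:", rms)

Because the procedure needs only a small planar object presented in a few arbitrary poses, the same steps could in principle be repeated on-orbit, which is the motivation given above.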

After the calibration, the updated camera parameters (internal and external) are incorporated into the stereo system's model. In order to accelerate the on-line phase, certain image and 3D transformations are computed off-line and stored as look-up tables, as sketched below.
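As an illustration of such look-up tables, the sketch below pre-computes undistortion and rectification maps off-line so that the on-line warp reduces to a per-pixel table look-up. The OpenCV calls, image size and placeholder calibration values are assumptions for illustration, not the paper's implementation.

# Sketch: pre-compute warp look-up tables off-line, apply them on-line.
import numpy as np
import cv2

size = (640, 480)                                    # image size (assumed)
# Placeholder stereo calibration results (in practice, from the step above).
K1 = K2 = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
d1 = d2 = np.zeros(5)
R = np.eye(3)                                        # rotation between cameras
T = np.array([-0.1, 0.0, 0.0])                       # 10 cm baseline (assumed)

# Off-line: rectification transforms and per-pixel warp look-up tables.
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, size, cv2.CV_32FC1)

# On-line: warping each new frame is a single remap (table look-up) per camera.
left_raw = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right_raw = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
left_r = cv2.remap(left_raw, map1x, map1y, cv2.INTER_LINEAR)
right_r = cv2.remap(right_raw, map2x, map2y, cv2.INTER_LINEAR)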

3.2 3D computation

The 3D computation module processes stereo images obtained from the two calibrated cameras and computes 3D data. The process follows a typical processing sequence: it starts by warping the images, which corrects for geometrical lens distortions and rectifies the stereo geometry. Edges detected in the rectified images are matched and used to create a sparse 3D representation of the scene [6]. One of the stereo images of a mockup representing a satellite, which was used in the experiments, is shown in Figure 4. Different views of the computed 3D representation are shown in Figure 5.


Figure 4 One of the stereo images

Figure 5 Computed 3D representation shown from different viewpoints

The 3D computation adapts to variable illumination and scene content, and uses only a minimum number of control parameters. In a typical scene our vision system computes approximately 1000 3D points.
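The processing sequence above can be illustrated with a simplified sparse-stereo sketch. The paper matches detected edges between the two images directly; in the sketch below a standard block matcher stands in for that matching step and only edge pixels are retained, which is an assumption for illustration rather than the system's actual algorithm. The variables left_r, right_r and Q are taken from the rectification sketch above.

# Sketch: sparse, edge-based 3D points from a rectified stereo pair.
import cv2
import numpy as np

edges = cv2.Canny(left_r, 50, 150)                  # edge map of the left image

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left_r, right_r).astype(np.float32) / 16.0   # pixels

cloud = cv2.reprojectImageTo3D(disp, Q)             # H x W x 3, scene coordinates
valid = (disp > 0) & (edges > 0)                    # keep only matched edge pixels
points = cloud[valid]                               # sparse 3D point set
print("3D points:", len(points))                    # on the order of 1000 in the paper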

3.3 Pose estimation

To determine the position and orientation of the object, the 3D data points are matched to a known model using a version of the Iterative Closest Point algorithm (ICP) [1]. Our implementation of this algorithm consists of the following steps [6]:

1. Selection of the closest points between the model and the data set

2. Rejection of outliers

3. Computation of the geometrical registration between the matched data points and the model

4. Application of the geometrical registration to the data

5. Termination of the iteration after the stopping criterion is reached

The algorithm iterates steps 1-4 until the stopping criterion (convergence and/or reaching the maximum allowed number of iterations) is met; a minimal sketch of this loop is given below.
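The sketch below shows one generic way to realize steps 1-4 with a point-set model, using NumPy/SciPy. It is for illustration only: the nearest-neighbour query here uses a k-d tree, whereas the actual system replaces it with the pre-computed octTree look-up described in Section 3.3.1, and the outlier threshold and iteration limits are assumed values.

# Minimal ICP sketch (steps 1-4), assuming the model is given as an N x 3 point set.
import numpy as np
from scipy.spatial import cKDTree

def icp(data, model, pose=np.eye(4), max_iter=50, reject=0.02, tol=1e-6):
    tree = cKDTree(model)
    prev_err = np.inf
    for _ in range(max_iter):
        # 1. Closest model point for every (currently transformed) data point.
        xform = data @ pose[:3, :3].T + pose[:3, 3]
        dist, idx = tree.query(xform)
        # 2. Reject outliers: pairs further apart than a fixed threshold (assumed 2 cm).
        keep = dist < reject
        src, dst = xform[keep], model[idx[keep]]
        # 3. Rigid registration of the matched pairs (SVD-based least squares).
        sc, dc = src.mean(0), dst.mean(0)
        U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:            # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = dc - R @ sc
        # 4. Accumulate the incremental registration into the pose estimate.
        inc = np.eye(4)
        inc[:3, :3] = R
        inc[:3, 3] = t
        pose = inc @ pose
        # 5. Stop on convergence or when the iteration budget is exhausted.
        err = dist[keep].mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return pose

A call such as pose = icp(points, model_points, pose=initial_guess) would refine an approximate initial pose, for example one supplied by the initialization techniques of the companion paper [7].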

Detection of the closest points between the model and the data sets involves computing distances for each combination of data points and model features. This is a computationally intensive process, and different model representations and acceleration schemes have been used [1, 15]. For models represented as points or meshes, k-d trees are often used [12]. Man-made objects, which can be represented as collections of simple shapes, allow using efficient closed form solutions to compute the distances between points and model parts [6]. In the current version of our system we have implemented a very fast algorithm, which eliminates the need to perform any on-line distance calculations.

3.3.1 Pre-computed distance maps

As the computation of distances between data points and models is time consuming, we have eliminated it from the on-line phase and pre-computed all distances in advance, off-line. This comes at the price of having to store the data and/or reduce the resolution of the distances.

The space surrounding the model is partitioned into voxels and the closest point between the centre of each voxel and the model is computed off-line. During on-line operation, instead of computing the closest point to the model, we simply look up the distance based on the partition in which the data point lies. The data structure chosen to facilitate the pre-computed distance map is the octTree. An octTree is a tree structure in which each node is either a leaf or has eight children. From any one point in 3D space one can move in the positive or negative direction along each dimension, giving 2^3 = 8 partitions. Since we have eight such partitions we refer to each one as an octant. Each octant can then be subdivided, with each sub-octant representing a region of space one eighth the size of its parent, see Figure 6.


(a) An octant (b) Partitioning of a region of space

(c) Tree representation of the space partitioned in (b)

Figure 6 OctTree representation of a 3D space

The closest point to the model from each corner of the octant is stored at each node. When we wish to determine the closest point on the model from some arbitrary point p in space, we traverse the tree and find the smallest octant containing p. Next we find the closest corner of the octant, c, to point p and use the closest point on the model from point c as an approximation of the closest point to p. This is a simple and very fast way of getting an approximation of the closest point to p. No calculation is required: we simply use the octTree as a look-up table.
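The following sketch illustrates the idea with a flat voxel grid rather than an octTree: the closest model point is pre-computed for every cell centre off-line, and the on-line query is a pure array look-up. The grid resolution, working volume and storage layout are assumptions; the actual system stores closest points at octant corners in an octTree precisely to avoid the uniform grid's memory cost.

# Sketch of a pre-computed closest-point map on a regular voxel grid.
# At 128^3 cells this flat grid stores roughly 50 MB of float64 data,
# which is the memory cost the octTree structure is designed to reduce.
import numpy as np
from scipy.spatial import cKDTree

def build_map(model, lo, hi, cells=128):
    """Off-line: closest model point for the centre of every voxel."""
    step = (hi - lo) / cells
    axes = [lo[k] + (np.arange(cells) + 0.5) * step[k] for k in range(3)]
    gx, gy, gz = np.meshgrid(*axes, indexing="ij")
    centres = np.stack([gx, gy, gz], -1).reshape(-1, 3)
    _, idx = cKDTree(model).query(centres)      # expensive, but done only once
    return model[idx].reshape(cells, cells, cells, 3)

def lookup(cp_map, lo, hi, p):
    """On-line: approximate closest model point to p, a pure array look-up."""
    cells = cp_map.shape[0]
    i = np.clip(((p - lo) / (hi - lo) * cells).astype(int), 0, cells - 1)
    return cp_map[i[0], i[1], i[2]]

lo, hi = np.array([-0.25] * 3), np.array([0.25] * 3)    # 0.5 m working cube
# cp_map = build_map(model_points, lo, hi)              # model_points: N x 3 array
# nearest = lookup(cp_map, lo, hi, np.array([0.1, 0.0, -0.05]))

Inside the ICP loop sketched earlier, lookup() would replace the k-d tree query, removing all on-line distance computations.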

Figure 7 shows a model of the object used in the experiments and Figure 8 its octTree representation at level 2.

Figure 7 Object model

Figure 8 Level 2 octTree representation of the model

In our experiments we have used level 7 as a good compromise between size, accuracy and processing speed.

Experimental results show that looking up the closest point to the model using the octTree is over 1000 times faster than performing the calculation. The disadvantage of using octTrees is that all the pre-computed data needs to be stored. Some measures have been implemented to reduce the storage space required for octTrees. The trade-off between the size and resolution of octTrees is illustrated in Table 1 for an object within a space of 0.5 x 0.5 x 0.5 m.

Table 1 Accuracy and size of octTrees

Tree level | Octant edge [mm] | Max. distance to nearest corner [mm] | Model size [MB]
4          | 31               | 27.0                                 | 0.25
5          | 15               | 13.5                                 | 2
6          | 7.8              | 6.75                                 | 16
7          | 3.9              | 3.38                                 | 128
8          | 1.95             | 1.69                                 | 1024
9          | 0.976            | 0.85                                 | 8192
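The entries in Table 1 follow from simple geometry: the octant edge halves at each level, the worst-case distance from an interior point to the nearest corner is half the octant diagonal, and the storage grows by a factor of eight per level. The short check below reproduces the table assuming roughly 64 bytes stored per leaf octant, a figure implied by the reported 0.25 MB at level 4 rather than stated in the paper.

# Sanity check of Table 1 for a 0.5 m working cube.
import math

SPACE = 500.0                                 # working cube edge [mm]
for level in range(4, 10):
    edge = SPACE / 2 ** level                 # octant edge length [mm]
    worst = edge * math.sqrt(3) / 2           # half of the octant diagonal [mm]
    size_mb = 64 * 8 ** level / 2 ** 20       # assumed bytes per leaf x number of leaves
    print(f"level {level}: edge {edge:6.2f} mm, "
          f"max corner distance {worst:5.2f} mm, size {size_mb:7.2f} MB")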

4 Experimental verification

Testing the performance of vision systems is becoming a separate sub-discipline in computer vision. Any system that is or will be installed as part of an autonomous system must be fully characterized in representative environments and under expected operational scenarios before its deployment. The experiments conducted with our vision system attempted to determine the operational ranges where such a vision system can be used successfully. Such information allows one to use the system in a safe and predictable manner and to focus future development. The scope of the performed tests included measurements of:

• performance, accuracy and repeatability of the main software components of the vision system

• performance, accuracy and repeatability of the complete vision system for different scenarios

• effects of illumination on the operation of the complete system

The testing has been performed using the Space Vision Testbed with two jointly calibrated robots [10]. The robot positions reported by their controllers were treated as accurate measurements of the relative pose of a mockup with respect to a stereo camera platform; a sketch of how an estimated pose can be compared against such a reported pose is given below. Representative scaled mockups have been manufactured using real spacecraft insulation materials: a Spartan-like model of a satellite, see Figure 1 and Figure 5. The mockups were moved in space mimicking trajectories that might be executed during an actual space mission: direct and loop approaches and fly-arounds.
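For the accuracy and repeatability measurements, the vision estimates have to be compared against the robot-reported ground truth. The sketch below shows a conventional way of doing so, assuming both poses are expressed as 4 x 4 homogeneous matrices in the camera frame; these error definitions are standard ones, not necessarily those used in the original test reports.

# Sketch: comparing a vision-estimated pose against the testbed's
# robot-reported pose, both given as 4 x 4 homogeneous matrices.
import numpy as np

def pose_error(T_est, T_true):
    """Return (translation error [same units as T], rotation error [deg])."""
    dT = np.linalg.inv(T_true) @ T_est          # residual transform
    trans_err = np.linalg.norm(dT[:3, 3])
    # Angle of the residual rotation, recovered from its trace.
    cos_a = np.clip((np.trace(dT[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos_a))
    return trans_err, rot_err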

The illumination was scaled down to 1% of the on-orbit level and provided by a computer controlled mobile spotlight with a subtense angle similar to the Sun's, together with large screens simulating the Earth albedo. Several combinations of the available light sources were used: direct Sun light (using one of two different spotlights), "Earth albedo" (directional diffuse light) and diffuse ambient light. The spotlights were arranged so as to create hard shadows and local image saturation in some of the images.

5 Results

Testing the system components involved testing the camera calibration accuracy, which was found to be a small fraction of a pixel. Testing of the 3D generation module allowed us to determine that it operates in a stable fashion under variable light intensity (between 40% – 100%), direction and type, and for different observed objects and scene complexity. The system always operated successfully with 25% data loss, and often even with 50%. Example images from an experiment with a loop approach are shown in Figure 9. The right window displays one of the stereo images of a mockup. The left window displays a synthetic image obtained by rendering the mockup in the same pose and from the same direction as the camera. The numbers show the computed pose.

Figure 9 Several images from a test sequence

During this test the stereo camera platform, representing the chaser satellite, was approaching the mockup, which performed a rotation about the Y (yaw) axis. The estimated distance to the mockup is plotted as a function of the actual distance in Figure 10, and the estimated angle in Figure 11.

Figure 10 Estimated distance (measured Z [mm], for alpha = 0.5 and alpha = 0.25) against the actual distance Z [mm]


Figure 11 Estimated yaw angle [deg] (for alpha = 0.5 and alpha = 0.25) as a function of the actual distance [mm]

Figure 12 shows images from another test; note the shadows cast on the observed object. The shadow edges are located on the object surface and are actually beneficial, as they provide additional 3D data (the data within the shadow itself is lost).

Figure 12 Actual and computed images from an experiment with shadows

The effective range of the tested stereo system (for the lenses and cameras used) was found to be 5 – 20 stereo baselines. The maximum range was limited by the size of the object in the images and the small disparity, which affected the measurement accuracy and the ability to compute the pose. The minimum range was limited by the fact that the object exceeded the image size; we used only one model of the object.

Iterative algorithms such as ICP operate successfully when they are initialized within the convergence range of the algorithm [1]. Measuring this range requires a very large number of tests in order to check convergence from many initial conditions [5]. We have conducted exhaustive tests, in which we determined that the range within which the algorithm always converges is 15 – 30 degrees, depending on the shape of the object. The high-likelihood convergence range is larger than the guaranteed one, and it also depends on the object.

The processing time of our software, on an x86 family 6 model 7 computer, is on the order of 1 second, with most of the time spent computing the 3D data.

6 Conclusions

This paper presents the design and the results of an experimental characterization of a vision system for satellite proximity operations. The system can track the 3D pose of a known object using only geometrical models and natural features detected on the object surfaces. As the system computes highly redundant data, partial data loss due to occlusions, shadows or local specular reflections does not affect the system operation.

The system was characterized using representative images obtained using a calibrated testbed. The images generated in the experiments have been stored in a database and are available to collaborating research institutions for the evaluation of their computer vision algorithms.

Pre-computing certain data off-line allows our system to rely on advanced and reliable algorithms and to operate at 1 Hz on a desktop computer. The 3D computation, which uses most of the processing time, can be implemented in dedicated hardware, off-loading the CPU and/or increasing the update rate.

Current work focuses on extending the vision system capabilities by incorporating additional modules that automatically compute the initial pose [7], implementing the short range module, and integrating the vision system into a closed loop control system.

The funding for this work has been provided in part by a contract from the Canadian Space Agency to develop an Object Recognition and Pose Estimation Toolkit.

7 References

[1] Besl, P. and McKay, N., "A Method for Registration of 3-D Shapes", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, February 1992, pp. 239-256.

[2] Blais, F., Beraldin, J., Cournoyer, L., "Integration of a Tracking Laser Range Camera with the Photogrammetry based Space Vision System", SPIE Aerosense, vol. 4025, April 2000.

[3] ETS-VII, http://oss1.tksc.nasda.go.jp/ets-7/ets7_e/rvd/rvd_index_e.html

[4] Hollander, S., "Autonomous Space Robotics: Enabling Technologies for Advanced Space Platforms", Space 2000 Conference, AIAA 2000-5079.

[5] Hugli, H., Schutz, C., "Geometric Matching of 3D Objects: Assessing the Range of Successful Initial Conditions", Int. Conf. on Recent Advances in 3-D Digital Imaging and Modelling (3DIM).

[6] Jasiobedzki, P., Talbot, J., Abraham, M., "Fast 3D Pose Estimation for On-Orbit Robotics", ISR 2000 Proceedings, pp. 434-440.

[7] Jasiobedzki, P., Greenspan, M., Roth, G., "Pose Determination and Tracking for Autonomous Satellite Capture", i-SAIRAS 2001.

[8] Laurin, D. G., Beraldin, J.-A., Blais, F., Rioux, M., Cournoyer, L., "A Three-Dimensional Tracking and Imaging Laser Scanner for Space Operations", SPIE vol. 3707, pp. 278-289.

[9] McCarthy, J., "Space Vision System", ISR 1998, Birmingham, UK.

[10] Newhook, P., "A Robotic Simulator for Satellite Operations", i-SAIRAS 2001.

[11] Oda, M., Kine, K., Yamagata, F., "ETS-VII, a Rendezvous Docking and Space Robot Technology Experiment Satellite", 46th Int. Astronautical Congress, Oct 1995, Norway, pp. 1-9.

[12] Simon, D., Hebert, M., Kanade, T., "Real-time 3D Pose Estimation Using a High Speed Range Sensor", IEEE Conference on Robotics and Automation, pp. 2235-2241.

[13] Tsai, R. Y., "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, vol. RA-3, no. 4, August 1987, pp. 323-344.

[14] Zhang, Z., "Flexible Camera Calibration by Viewing a Plane from Unknown Orientations", International Conference on Computer Vision (ICCV'99), Corfu, Greece, September 1999, pp. 666-673.

[15] Zhang, Z., "Iterative Point Matching for Registration of Free-Form Curves and Surfaces", IJCV, vol. 13, no. 2, pp. 119-152.
