
An Automated Camera Calibration Framework for Desktop Vision Systems

Hamed Rezazadegan Tavakoli and Hamid Reza Pourreza

Abstract— Camera calibration is one of the fundamental problems of machine vision. There have been many efforts to provide autonomous calibration algorithms. One of the major barriers to autonomy is feature detection and extraction. In this paper, the architecture of an autonomous camera calibration framework is studied. The autonomy of the calibration framework originates in its hardware setup. The applied setup makes automatic feature detection and extraction possible. It is shown that the calibration framework is accurate.

I. INTRODUCTION

Camera calibration refers to the process of determining the camera's geometric and optical characteristics (intrinsic parameters) and/or the position and orientation of the camera frame relative to a certain world coordinate system (extrinsic parameters) [1]. Camera calibration is the most fundamental and basic part of many computer vision systems, as it is often the only means of providing metric information. With a metric understanding, the vision system is capable of making the measurements on which many applications rely.

There are different camera calibration algorithms available; they mostly differ in parameterization or solution technique. There are also classifications of algorithms by parameterization and solution method, such as those provided by Heikkila [2] or Weng, Cohen, and Herniou [3]. The disadvantage of these classifications is that there is always an 'other' category for upcoming new algorithms. For example, a new class could be added for methods that rely on soft-computing techniques such as neural networks, support vector machines, genetic algorithms, and fuzzy methods; examples can be found in [4]-[7]. There are also methods that rely on geometrical characteristics such as vanishing points and lines, e.g. [8]-[12]. Besides, these classifications overlap. However, the classification of Zhang [8], which groups methods by the dimension of the calibration target, does not suffer from such weaknesses.

In this paper, the focus is on the development of an automatic, accurate camera calibration framework. There is also a set of calibration algorithms, known as self-calibration or zero-dimensional methods, which are well known because of

H. Rezazadegan Tavakoli is currently an individual researcher in the field of machine vision and machine intelligence and a member of the Young Researchers Club, Islamic Azad University, Mashhad Branch. [email protected]

H.R. Pourreza is with the Department of Computer Engineering, Faculty of Engineering, Ferdowsi University of Mashhad, Mashhad, 91775-1111, Iran. [email protected]

their autonomy, but they are not as accurate as classic methods such as those presented in [1]-[3], [13]. The proposed framework is as accurate as classic methods while being fully autonomous. The developed calibration framework deals with both lens distortion and the internal parameters of the camera.

The approach used to provide autonomy utilizes active targets. By active target, we mean a target that is controllable by the calibration algorithm. This new concept gives a new synthesis to active calibration algorithms. The presented active approach also makes possible a novel method of approximating the center of radial distortion. In the next section, the active aspect and the realization of active targets are presented. Section three contains information about the calibration framework, its components, and the algorithms used. The last section contains the experiments, followed by the conclusion.

II. ACTIVE CALIBRATION

For active calibration, it is necessary to interact with the environment. Active camera calibration mechanisms interact with the environment through camera movements [9]. Active calibration has gained attention in the field of robot vision; examples of such algorithms can be found in [10], [11]. It is also possible to have an active calibration algorithm in which the camera itself is not active but is fixed on a tripod. The idea of such an algorithm is that the information gained from each frame can be used to signal the calibration target for the next frame. This requires the calibration target to be active and controllable by the calibration algorithm. The term active calibration can be used for both kinds of method, although the two are totally different.

An active target could be a light-emitting diode (LED) carried by a controlled robotic arm, or a board of LEDs on which patterns are formed by switching individual LEDs on and off. Approaches that rely on mechanical instruments are not versatile, flexible, precise, or economical. The same is true for a board of LEDs. Another approach is the use of monitors for displaying patterns.

A. Active Targets

It is possible to use a computer program to generate different patterns and display them on an LCD monitor. With this technique, switching from one pattern to another is easy and fast, giving maximum flexibility, adaptability, and precision. It should also be considered that the popularity of LCD monitors has made them available in every laboratory, institute, and home; besides, they are not very expensive.

ACTEA 2009, July 15-17, 2009, Zouk Mosbeh, Lebanon

978-1-4244-3834-1/09/$25.00 © 2009 IEEE

Authorized licensed use limited to: Ferdowsi University of Mashhad. Downloaded on September 2, 2009 at 14:37 from IEEE Xplore. Restrictions apply.

A monitor, depending on its settings, can provide different precisions. As an example, a monitor with a resolution of 1024 × 768 and a viewable screen of 317 mm × 236 mm has pixels approximately 0.31 mm tall and 0.31 mm wide, which means the pattern can be moved with a precision of 0.31 mm. Obviously, the precision increases at higher resolutions.
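The pixel-pitch arithmetic above is easy to reproduce. The following Python sketch (ours, not part of the original framework) computes it for the quoted monitor:

```python
# Pixel pitch: viewable screen size divided by resolution, per axis.
# Figures match the example in the text (1024 x 768, 317 mm x 236 mm).

def pixel_pitch(viewable_mm, resolution):
    """Return the (width, height) of one pixel in millimetres."""
    return (viewable_mm[0] / resolution[0], viewable_mm[1] / resolution[1])

w, h = pixel_pitch((317.0, 236.0), (1024, 768))
print(round(w, 2), round(h, 2))  # approximately 0.31 mm in each direction
```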

The advantage of a monitor and a pattern generator program is that patterns can be controlled and changed adaptively, according to the circumstances, throughout the calibration process. The camera is in sync with the pattern generator and the calibration program. This makes fully automatic image acquisition and feature extraction possible.

The key to automatic feature extraction is that a pattern can be displayed across multiple frames. This makes feature extraction from a region of interest easy, especially if only one region of interest is displayed in each frame.

III. CALIBRATION FRAMEWORK

In this section, different aspects of the calibration framework are explained. First, lens distortion handling is explained. Afterwards, estimation of the intrinsic parameters is studied. Finally, the overall framework architecture is introduced.

A. Radial Distortion

Geometrical distortion concerns the positioning of image points, because the actual position of a point differs from its imaged position when lens distortion is present. There are different methods for estimating the lens distortion parameters. These methods fall into two categories: quantitative methods [1]-[3], [12]-[14] and qualitative methods [15]-[18].

The approach used in the framework is a qualitative one. Qualitative methods are those that rely on invariant image properties, such as the straightness of lines [18], and use these properties to compensate for the distortion. The advantage of these techniques is that they do not rely on camera information. However, there is one qualitative approach to distortion parameter estimation that requires some extra information about the camera [19].

1) Center of Radial Distortion: The framework uses a novel approach to estimate the center of radial distortion in advance. The approach relies on the fact that a line passing through the center of radial distortion remains straight. This fact gives rise to the following theorem:

Theorem 1: Under radial distortion, two concurrent lines l1, l2 remain straight if and only if their intersection point p is positioned on the distortion center o.

Fig. 1 provides a visualization of the theorem. The proof is beyond the scope of this paper; however, it is largely self-evident. The reader is referred to [20] for an in-depth treatment.

A simple search algorithm is proposed for finding the distortion center; the aim of the search is to find the intersection of the two straight lines by moving a cross calibration target in front of the camera.

Fig. 1. Center of radial distortion; xi is the image of x. The lines of the cross image will be straight only if pi lies on o, i.e. p and oc (the optical center) are aligned.
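The search can be sketched as follows. This is an illustrative Python reconstruction, not the authors' code: `image_cross` is a hypothetical callback that displays the cross at a candidate position and returns the imaged points of its two lines, and the straightness measure (RMS deviation from the best-fit line) is our assumption.

```python
import numpy as np

def straightness_error(pts):
    """RMS distance of points from the best-fit line through them."""
    pts = np.asarray(pts, float)
    c = pts.mean(axis=0)
    # Principal direction of the centred points via SVD.
    _, _, vt = np.linalg.svd(pts - c)
    d = vt[0]
    # Residual = component of each centred point orthogonal to that direction.
    residual = (pts - c) - np.outer((pts - c) @ d, d)
    return np.sqrt((residual ** 2).mean())

def find_distortion_center(image_cross, candidates):
    """Return the candidate centre whose imaged cross lines are straightest."""
    return min(candidates,
               key=lambda p: sum(straightness_error(l) for l in image_cross(p)))
```

On a simulated radial distortion, both lines of the cross stay straight only when the cross is centred on the true distortion centre, so the minimiser coincides with it when it lies on the candidate grid.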

2) Distortion Model: A polynomial model is used to approximate the first two coefficients of radial distortion. Unlike conventional methods, the center of radial distortion is approximated beforehand. Line straightness is used as the measure of distortion. The polynomial model is defined by the following equation:

r = rd (1 + k1 rd^2 + k2 rd^4 + ... + kn rd^(2n))    (1)

where rd is the distorted radius and ki is the ith radial distortion coefficient.
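As a concrete illustration of (1), truncated at the first two coefficients as in the framework, the following Python sketch maps points through the radial model about a pre-estimated centre. The function names and the direction of the mapping (distorted radius in, corrected radius out) are our reading of (1), not the authors' code:

```python
import numpy as np

def radial_scale(rd, k1, k2):
    """Scale factor (1 + k1*rd^2 + k2*rd^4) of model (1), truncated at n = 2."""
    return 1 + k1 * rd ** 2 + k2 * rd ** 4

def apply_radial_model(pts, center, k1, k2):
    """Map 2-D points through r = rd * (1 + k1*rd^2 + k2*rd^4) about a known
    distortion centre. The centre itself is a fixed point of the model."""
    pts = np.asarray(pts, float)
    d = pts - center                                  # offsets from the centre
    rd = np.linalg.norm(d, axis=1, keepdims=True)     # distorted radii
    return center + d * radial_scale(rd, k1, k2)
```

With k1 = k2 = 0 the mapping is the identity, and a point at the centre is never displaced; this is what makes the centre estimate of the previous subsection usable before the coefficients are known.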

B. Intrinsic Parameters

It is known that a point X̂ = [X, Y, Z, 1]^T in the 3D world can be related to its corresponding point x̂ = [x, y, 1]^T in the 2D image using (2)

wx̂ = K[R|t]X̂ (2)

where w is an arbitrary scale factor, R is a rotation matrix, t is a translation vector, and K is the camera's intrinsic matrix, defined by (3)

K = | fx   s   u0 |
    |  0  fy   v0 |
    |  0   0    1 |    (3)

where fx and fy are the focal lengths in terms of pixel dimensions, s is the skew, and u0 and v0 are the principal point coordinates in terms of pixel dimensions. Equation (2) can be reduced to wx̂ = HX̂ under the assumption Z = 0, where H is given by (4) and X̂ = [X, Y, 1]^T.

H = K [r1  r2  t]    (4)
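Equations (2)-(4) can be checked numerically. The sketch below, with illustrative parameter values of our own choosing, projects a world point with wx̂ = K[R|t]X̂ and confirms that, for a point with Z = 0, the homography H = K[r1 r2 t] produces the same pixel:

```python
import numpy as np

# Illustrative parameters (not from the paper): intrinsics per (3) with zero
# skew, camera aligned with the world frame, 5 units back from the target.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 820.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])

def project(K, R, t, X):
    """Project a world point X = (X, Y, Z) via w*x_hat = K [R|t] X_hat, eq. (2)."""
    x = K @ (R @ np.asarray(X, float) + t)
    return x[:2] / x[2]          # divide out the arbitrary scale factor w

# For Z = 0 the same map is the homography H = K [r1 r2 t] of (4):
H = K @ np.column_stack([R[:, 0], R[:, 1], t])
p_direct = project(K, R, t, (1.0, 1.0, 0.0))
p_homog = H @ np.array([1.0, 1.0, 1.0])
assert np.allclose(p_direct, p_homog[:2] / p_homog[2])
```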


where ri is the ith column of the rotation matrix. H is known as the homography matrix.

1) Solving Intrinsic Parameters: Having x̂ and X̂ from observation, the homography is calculated using the Gold Standard algorithm after applying an isotropic normalization [21]. Using the knowledge that r1 and r2 are orthonormal, the following constraints on the intrinsic parameters are inferred:

h1^T K^-T K^-1 h2 = 0    (5)

h1^T K^-T K^-1 h1 = h2^T K^-T K^-1 h2    (6)

It is possible to obtain ω = K^-T K^-1 from these constraints using a direct linear approach. Afterwards, the intrinsic parameters are calculated by Cholesky factorization. However, because the framework uses only one target plane, only fx and fy are estimated; it is assumed that s = 0 and that the principal point is at the center of the image.
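Under these assumptions (zero skew, principal point fixed at the image centre), constraints (5) and (6) become linear in a = 1/fx^2 and b = 1/fy^2 once the homography is shifted so that the principal point is the origin. The following Python sketch (ours, not the authors' code) recovers fx and fy from a single homography:

```python
import numpy as np

def focal_from_homography(H, principal_point):
    """Recover (fx, fy) from one homography using constraints (5) and (6),
    assuming zero skew and a known principal point. After shifting the
    principal point to the origin, omega = K^-T K^-1 = diag(a, b, 1) with
    a = 1/fx^2 and b = 1/fy^2, and both constraints are linear in (a, b)."""
    u0, v0 = principal_point
    T = np.array([[1.0, 0.0, -u0],
                  [0.0, 1.0, -v0],
                  [0.0, 0.0, 1.0]])
    h1, h2 = (T @ H)[:, 0], (T @ H)[:, 1]
    # (5): a*h1x*h2x + b*h1y*h2y + h1z*h2z = 0
    # (6): a*(h1x^2 - h2x^2) + b*(h1y^2 - h2y^2) = h2z^2 - h1z^2
    A = np.array([[h1[0] * h2[0],         h1[1] * h2[1]],
                  [h1[0]**2 - h2[0]**2,   h1[1]**2 - h2[1]**2]])
    rhs = np.array([-h1[2] * h2[2], h2[2]**2 - h1[2]**2])
    a, b = np.linalg.solve(A, rhs)
    return 1.0 / np.sqrt(a), 1.0 / np.sqrt(b)
```

This is the kind of direct linear estimate that the framework subsequently refines.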

Having the intrinsic parameters, the extrinsic parameters are calculated using (4). Finally, all the parameters are optimized with the Levenberg-Marquardt technique, minimizing the following distance:

Σ_{i=1..m} ||xi − x̂(K, R, t, Xi)||    (7)

where x̂ is the projection of point Xi onto the image plane according to the reduced form of (2).
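The refinement step can be sketched as follows. This toy Levenberg-Marquardt loop (ours) refines only fx and fy, with a fixed damping factor and a finite-difference Jacobian, whereas the framework optimizes all parameters of (7):

```python
import numpy as np

def project_xy(fx, fy, R, t, X):
    """Pinhole projection with K = diag(fx, fy, 1) (principal point at origin)."""
    Xc = (R @ X.T + t[:, None]).T
    return np.column_stack([fx * Xc[:, 0] / Xc[:, 2], fy * Xc[:, 1] / Xc[:, 2]])

def refine_focal(fx, fy, R, t, X, x_obs, iters=20, lam=1e-3):
    """Minimise the reprojection error (7) over (fx, fy) with a damped
    Gauss-Newton (Levenberg-Marquardt style) iteration."""
    p = np.array([fx, fy], float)
    for _ in range(iters):
        r = (x_obs - project_xy(p[0], p[1], R, t, X)).ravel()
        J = np.empty((r.size, 2))           # Jacobian of residuals w.r.t. (fx, fy)
        for j in range(2):
            dp = np.zeros(2)
            dp[j] = 1e-6
            r_pert = (x_obs - project_xy(*(p + dp), R, t, X)).ravel()
            J[:, j] = (r_pert - r) / 1e-6   # finite differences
        p += np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    return p
```

On synthetic observations generated with known focal lengths, the loop pulls a rough initial guess back to the generating values.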

C. Framework Architecture

Fig. 2 shows the framework architecture. The camera calibration framework consists of two major independent programs: one is the pattern generator and the other is a program that performs all the computation. The latter is referred to as the computational program in the rest of this paper.

The two programs are connected to each other through a communication channel. A communication center is in charge of transferring information and commands between the two programs. An interpreter is in charge of encoding and decoding messages from numerical strings into meaningful structures and vice versa.

The pattern generator consists of a graphic unit, a pixel-metric convertor, and a communication center. The graphic unit is in charge of displaying patterns. Patterns are generated by means of feature points. The type of pattern, the feature points, and the region of interest are requested by the computational program. A frame is displayed when a display signal is received; in response, the pattern generator displays the requested frame, sends a displayed signal, and waits for the next request. This ensures that the requested frame is captured. The computational program can obtain metric and pixel-based information about the monitor by requesting it from the pixel-metric convertor unit.
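The display/displayed handshake described above can be sketched in a few lines. This in-process simulation is illustrative only: the real framework runs the two programs separately over a communication channel, and the message names used here are our assumptions, not the paper's actual format:

```python
import queue

# One queue per direction stands in for the communication channel.
to_generator, to_computation = queue.Queue(), queue.Queue()

def pattern_generator_step():
    """Pattern generator: wait for a display request, 'draw' it, acknowledge."""
    msg = to_generator.get()
    assert msg["cmd"] == "display"
    # ... the graphic unit would render msg["pattern"] here ...
    to_computation.put({"cmd": "displayed", "pattern": msg["pattern"]})

def request_frame(pattern):
    """Computational program: request a pattern and block until it is on
    screen, so the camera grabs only frames that are actually displayed."""
    to_generator.put({"cmd": "display", "pattern": pattern})
    pattern_generator_step()   # stands in for the generator's own event loop
    ack = to_computation.get()
    return ack["cmd"] == "displayed" and ack["pattern"] == pattern
```

`request_frame("cross")` returns True once the displayed acknowledgement arrives; only then is the frame captured.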

The computational program consists of five components plus a communication center: image acquisition; feature extraction; a geometrical lens distortion handler; a camera parameter handler; and a decision unit. Image acquisition is responsible for capturing frames. The geometrical lens distortion handler is responsible for finding the distortion center and the radial distortion coefficients using the techniques explained earlier. The camera parameter handler is responsible for approximating the internal parameters using undistorted images. The decision unit is in charge of these components. It decides on the information sent from the pattern generator and where that information should be routed. It also handles the requests and data from the different components and decides on the destination to which they should be sent (e.g., it decides which component should receive the extracted feature information). The reason for the decision unit's presence is that the computational program is not as simple as the pattern generator, and a simple interpreter is not enough to handle all the information.

Fig. 2. Camera calibration framework architecture.

The approach to camera calibration is based on undistorting the image using a qualitative technique and then using the undistorted information to find the camera's intrinsic parameters. First, a simple test is performed to detect the presence of distortion; then the center of distortion is estimated and the distortion coefficients are approximated. Finally, the intrinsic parameters are approximated.

IV. EXPERIMENTS

The video camera used in these experiments is a Sony camcorder (DCR-TRV460E) equipped with a 1/6" CCD sensor and a 2.5-50 mm Sony lens. The lens focal length


Fig. 3. Calibration setup used in experiments.

was kept at 2.5 mm, its widest setting, in all the experiments. The camera is capable of USB streaming, so no analog-to-digital converter is needed. The frames are grabbed directly at a resolution of 640 × 480 in RGB color space and later converted to grayscale. A 15" TFT monitor with a native resolution of 1024 × 768 (Sony SDM-HS53/H) was used to display the patterns generated by the pattern generator. A user-defined color profile with maximum backlight was used throughout the experiments. The camera's optical axis was nearly orthogonal to the monitor. Fig. 3 shows the hardware setup used in the experiments.

The calibration framework's performance was compared with that of its counterpart developed at the Computational Vision Group of Caltech by Jean-Yves Bouguet¹.

The pattern used for Caltech's toolbox was a chessboard pattern provided by the toolbox. The pattern was printed on paper and fixed on a surface. Afterwards, nearly thirty frames were taken from different angles by moving the camera freely by hand. The feature extraction used was the corner-based feature extraction provided by the toolbox. The semi-automatic corner detector was selected, in which the four outer corners of the calibration target are selected by the user and, based on the number of squares, the positions of the corners are approximated. The corners are then refined using an iterative scheme.

It has been reported that a higher number of interest points results in a more accurate calibration [22]. Consequently, a pattern consisting of nearly two hundred feature points was used in the case of the calibration framework. The positioning of the points was quite random.

The calibration results of the proposed framework and Caltech's toolbox are provided in Table I. Caltech's toolbox calculates the first two coefficients of tangential distortion. In the proposed framework, however, it is not necessary to consider them because the center of distortion is known.

The accuracy evaluation was done by approximating the angle between two intersecting planes. The targets used are shown in Fig. 4. The ground truth is 90° ± 1°. Eight corresponding points were selected by hand. Afterwards, the

¹ Reachable at: http://www.vision.caltech.edu/bougetj/calib_doc/index.hml

TABLE I
CALIBRATION RESULT

        Framework DLT   Framework Optimized   Caltech's Toolbox
fx      724.6332        713.4747              863.31337
fy      743.4705        732.942               884.06995
s       0               -0.2157               0
u0      240             242.4362              237.76127
v0      320             322.1570              340.13527
cx      321.6408        321.6408              –
cy      247.4743        247.4743              –
k1      -0.01450        -0.01450              -0.17007
k2      -0.00126        -0.00126              0.48422
p1      –               –                     -0.01077
p2      –               –                     -0.01077

'Framework DLT' is the result of the direct linear transformation before optimization and 'Framework Optimized' is the final result of the framework. cx and cy are the center of radial distortion, ki is the ith radial distortion coefficient, and pi is the ith tangential distortion coefficient.

TABLE II
RESULT OF ANGLE ESTIMATION

                        Angle
Framework DLT           95.6303°
Framework Optimized     94.9125°
Caltech's Toolbox       99.6314°

angle was calculated. The evaluation results are summarized in Table II. As shown, the framework's final optimized answer gives the best result.
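The evaluation step reduces to computing the dihedral angle between two planes fitted to the selected points. A minimal Python sketch (ours), assuming the 3-D points have already been reconstructed:

```python
import numpy as np

def plane_normal(pts):
    """Unit normal of the best-fit plane through 3-D points (via SVD of the
    centred point cloud; the normal is the least-variance direction)."""
    pts = np.asarray(pts, float)
    _, _, vt = np.linalg.svd(pts - pts.mean(axis=0))
    return vt[-1]

def angle_between_planes(pts_a, pts_b):
    """Dihedral angle, in degrees, between the planes fit to two point sets.
    The absolute value removes the sign ambiguity of the fitted normals."""
    c = abs(plane_normal(pts_a) @ plane_normal(pts_b))
    return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))
```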

V. CONCLUSION

In this paper, an autonomous calibration framework was introduced. The framework is as accurate as well-known methods while being fully autonomous. Also, because of its autonomy and its capability of using as many feature points as desired, the intrinsic parameter approximation is stable.

It was shown that the framework can outperform some well-known calibration toolboxes, mainly because of its accurate automatic feature extraction. In classic calibration, the region of interest for each feature point must be selected by the user in order to obtain a careful calibration; in practice, this makes calibration a tedious process. The proposed calibration framework does not suffer from such a defect.

This paper also presents a new synthesis of active calibration, in which the target is active and controlled by the algorithm. The proposed center-of-radial-distortion algorithm relies on this idea. Moreover, the realization of the active target by means of a pattern generator and a monitor, as explained, makes the framework suitable for desktop vision systems (DVS), where the user is a novice.

REFERENCES

[1] R.Y. Tsai, A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses, IEEE Journal of Robotics and Automation, vol. RA-3, 1987, pp 323-344.

Fig. 4. Target used in angle estimation.

[2] J. Heikkila, Geometric Camera Calibration Using Circular Control Points, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, 2000, pp 1066-1077.

[3] J. Weng, P. Cohen and M. Herniou, Camera Calibration with Distortion Models and Accuracy Evaluation, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, 1992, pp 965-980.

[4] J. Qiang and Y. Zhang, Camera Calibration with Genetic Algorithms, IEEE Transactions on Systems, Man and Cybernetics, Part A, vol. 31, pp 120-130.

[5] M.S. Mousavi and R.J. Schalkoff, ANN Implementation of Stereo Vision Using a Multi-Layer Feedback Architecture, IEEE Transactions on Systems, Man and Cybernetics, vol. 24, 1994, pp 1220-1238.

[6] C.V. Jawahar and P.J. Narayanan, "Towards Fuzzy Calibration", in AFSS 2002 International Conference on Fuzzy Systems, Calcutta, India, 2002, pp 305-313.

[7] R. Mohamed, A. Ahmed, A. Eid and A. Farag, "Support Vector Machines for Camera Calibration Problem", in 2006 IEEE International Conference on Image Processing, 2006, pp 1029-1032.

[8] Z. Zhang, Camera Calibration with One-Dimensional Objects, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, 2004, pp 892-899.

[9] A. Basu and K. Ravi, Active Camera Calibration Using Pan, Tilt and Roll, IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 27, 1997, pp 559-566.

[10] D. Konstantinos and E. Jorg, Active Intrinsic Calibration Using Vanishing Points, Pattern Recognition Letters, vol. 17, 1996, pp 1179-1189.

[11] P.F. McLauchlan and D.W. Murray, Active Camera Calibration for a Head-Eye Platform Using the Variable State-Dimension Filter, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, 1996, pp 15-22.

[12] J. Kannala and S.S. Brandt, A Generic Camera Model and Calibration Method for Conventional, Wide-Angle, and Fish-Eye Lenses, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, 2006, pp 1335-1340.

[13] Z. Zhang, A Flexible New Technique for Camera Calibration, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, 2000, pp 1130-1134.

[14] G.-Q. Wei and S.D. Ma, Implicit and Explicit Camera Calibration: Theory and Experiments, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, 1994, pp 469-480.

[15] J.P. Barreto, R. Swaminathan and J. Roquette, "Non Parametric Distortion Correction in Endoscopic Medical Images", in 3DTV-CON: The True Vision, Capture, Transmission and Display of 3D Video, Kos, Greece, 2007.

[16] H. Li and R. Hartley, "A Non-Iterative Method for Lens Distortion Correction from Point Matches", in OmniVis'05 (workshop in conjunction with ICCV'05), Beijing, 2005.

[17] H. Farid and A.C. Popescu, Blind Removal of Lens Distortion, Journal of the Optical Society of America, 2001.

[18] F. Devernay and O. Faugeras, Straight Lines Have to Be Straight: Automatic Calibration and Removal of Distortion from Scenes of Structured Environments, Machine Vision and Applications, vol. 13, 2001, pp 14-24.

[19] J. Wang, F. Shi, J. Zhang and Y. Liu, A New Calibration Model of Camera Lens Distortion, Pattern Recognition, vol. 41, 2008, pp 607-615.

[20] H.R. Tavakoli, Automatic Camera Calibration Mechanism, M.Sc. thesis, Department of Computer and Artificial Intelligence, Islamic Azad University of Mashhad, 2008.

[21] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, 2003.

[22] W. Sun and J.R. Cooperstock, An Empirical Evaluation of Factors Influencing Camera Calibration Accuracy Using Three Publicly Available Techniques, Machine Vision and Applications, vol. 17, 2006, pp 51-67.
