
3D Camera Calibration

Nichola Abdo    André Borgeat
Department of Computer Science, University of Freiburg

Abstract

Time-of-flight (TOF) cameras based on the Photomixing Detector (PMD) technology are capable of measuring distances to objects at high frame rates, making them valuable for many applications in robotics and computer vision. The distance measurements they provide, however, are affected by many factors, systematic and otherwise, incurring the need for calibrating them. This paper presents a technique for calibrating PMD cameras, in which we extend a recently-proposed calibration technique [Fuchs and May, 2007] to include intensity-related errors in depth measurement. Our model accounts for three major sources of error: the circular error resulting from the signal modulation process, the signal propagation delay, and the intensity-related error. Our experimental results confirm the advantage of considering the intensity-related factors in the calibration process.

1 Introduction

The acquisition of 3D information about the world is of crucial importance to many fields, ranging from industrial processes to computer vision and robotics. This typically requires determining the distances of objects in the environment to the sensor taking the measurements, and is usually carried out using laser scanners or stereo-vision cameras. While those techniques provide precise measurements with high resolution, they suffer from several drawbacks. Systems involving laser scanners, for example, require some mechanism for sequentially scanning the environment with a laser beam, and are therefore relatively expensive and time-consuming. Additionally, systems utilizing stereo cameras must analyze the scene as viewed in different images to obtain depth measurements, a process which is computationally demanding and not immune to errors resulting from regions of homogeneous intensity and color [Ringbeck and Hagebeuker, 2007].

On the other hand, time-of-flight devices based on PMD technology are becoming increasingly popular for 3D imaging applications. Those devices are compact, relatively cheap, and capable of obtaining depth information about the world at higher frame rates than the techniques mentioned above [Ringbeck and Hagebeuker, 2007]. This stems from the fact that all pixels in a PMD camera compute the depth measurement of the corresponding points in space in parallel. Consequently, those devices are more suited for real-time applications.

However, the performance of PMD devices depends on many factors, including light intensity, distances of objects, and their reflectivity [Fuchs and May, 2007]. This gives rise to the need for calibrating the distance measurements obtained by those devices, which requires an investigation into the dependencies of these measurements on the different relevant factors and sources of systematic error. This paper presents our attempt at calibrating such a PMD camera, through which we account for systematic errors caused by signal modulation, signal-propagation delays, and light intensity.

In the following section, we give a brief overview of the most important and relevant work in this field. This is followed by a description of the theory behind the operation of PMD devices and the different sources of error affecting their performance. We then present our own error model and the calibration procedure we conducted. Finally, we give the experimental results we achieved and discuss the main conclusions of the work.

2 Related Work

There exist a number of approaches for calibrating the depth measurements of TOF cameras. Lindner and Kolb provided a B-splines approximation for the error [Lindner and Kolb, 2006]. In a subsequent work, they used a similar B-splines approximation to account for the intensity factors in the error [Lindner and Kolb, 2007]. On the other hand, Kahlmann et al. presented a technique based on look-up tables [Kahlmann et al., 2006]. Those works involved calibrating the camera against a known distance taken as the ground truth. The cameras, however, were manually fixed in place in the experimental settings and moved to different distances from the objects being sensed. This could lead to erroneous results when assuming an accurate ground-truth distance, and does not take into account the many robotics and industrial applications in which the camera is fixed to a robotic arm, for example.

Instead, Fuchs and May proposed a different model that additionally estimates the transformation between the camera and the tool center point (TCP), i.e., the end-effector of a robotic arm in that case [Fuchs and May, 2007]. Their model accounts for the circular error (induced by the modulation process) and the signal propagation delay, but not for the intensity-related error.

In this paper, we build on the model proposed in [Fuchs and May, 2007] and extend it in two main ways. Firstly, we adjust the error term related to the signal propagation delay to more accurately describe the error in terms of the pixel location in the PMD array. Secondly, we introduce an intensity-related factor in the error model, since the error in the depth measurements also depends on the intensity of the light signal received by the camera.

3 Principle of Operation of PMD Cameras

PMD cameras operate on the concept of time of flight, and are therefore capable of providing distance information about the objects they are sensing. Typically, a PMD camera consists of a PMD chip and its peripheral electronics, an illumination source, receiver optics, and a camera control system including digital interfaces and software. The illumination source emits infrared light onto the scene, and the reflected light is received by the camera and used to measure the distances to the objects. In contrast to typical TOF devices, however, all pixels in the PMD's smart pixel array simultaneously analyze the received optical signal to calculate the depth measurement of the corresponding point in space. This eliminates the need for scanning a single light beam over the environment to obtain 3D information [Ringbeck and Hagebeuker, 2007].

The PMD chip is based on CMOS processes (complementary metal-oxide-semiconductor), which also provide an automatic suppression of background light, allowing the device to be used outdoors as well as indoors [Lindner and Kolb, 2006]. Furthermore, the number of pixels in the array naturally defines the lateral resolution of the device. Typical resolutions include 48×64 and 160×120 pixels at 20 Hz. The reader is referred to [Prasad et al., 2006] (as cited by [Lindner and Kolb, 2006]) for an approach combining PMD cameras with 2D cameras to overcome the resolution limitations of PMD devices.

To calculate the distance measurement, each pixel in the array carries out a demodulation process. In a PMD camera, a reference electrical signal is applied to the modulation gates of each pixel. Additionally, the incident light on the photogates of the pixels generates a second electrical signal. If the reference voltage and light source are initially modulated using the same signal, then the received optical signal differs from the reference signal by a phase shift, which is proportional to the distance of the reflecting object in space [Ringbeck and Hagebeuker, 2007]. For a given reference signal, g(t), and the optical signal, s(t), received by the pixel, the correlation function, c(τ), for a given phase shift, τ, is calculated as follows:

c(τ) = s ⊗ g = lim_{T→∞} ∫_{−T/2}^{T/2} s(t) · g(t + τ) dt.   (1)

For a sinusoidal signal, g(t) = cos(ωt), the incident light and resulting correlation function are s(t) = k + a·cos(ωt + φ) and c(τ) = h + (a/2)·cos(ωτ + φ), respectively, where a is the amplitude of the incident optical signal, h is the offset of the correlation function (and represents the gray-scale value of each pixel [Ringbeck and Hagebeuker, 2007]), ω is the modulation frequency, and φ is the phase offset proportional to the distance [Lindner and Kolb, 2007]. By sampling four signals (A_0 to A_3) at π/2 intervals from the correlation function, those values are calculated as:

φ = arctan((A_3 − A_1) / (A_0 − A_2)),   (2a)

a = √((A_3 − A_1)² + (A_0 − A_2)²) / 2,   (2b)

h = (A_0 + A_1 + A_2 + A_3) / 4.   (2c)

Finally, the distance, d, to the target can be calculated from the phase shift, φ, as [Lindner and Kolb, 2007]:

d = c·φ / (4π·ω),   (3)

where c is the speed of light. Figure 1 below illustrates the correlation function and the four samples used to calculate the distance measurement as described above.

Figure 1: Sampling from the correlation function to compute the phase shift φ.

We add that the chosen modulation frequency, ω, determines the unambiguous distance range [Lindner and Kolb, 2007]. For example, a modulation frequency of 20 MHz results in an unambiguous distance range of 7.5 m, as can be verified from the equation relating the wavelength, λ, the speed of light, and the frequency, λ = c/ω, and noting that the distance range equals λ/2, since this distance has to be traveled twice by the light.
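As a minimal illustration of Equations (2a)-(2c) and (3), the following Python/NumPy sketch recovers phase, amplitude, offset, and distance from the four correlation samples of a single pixel, and also evaluates the unambiguous range for a 20 MHz modulation frequency. The function name, the use of arctan2 to obtain the phase over the full circle, and the default frequency are our own illustrative choices, not part of the camera interface.

import numpy as np

C = 299792458.0  # speed of light [m/s]

def pmd_demodulate(A0, A1, A2, A3, mod_freq=20e6):
    # Recover phase, amplitude, offset, and distance from the four
    # correlation samples A0..A3 taken at pi/2 intervals (Eqs. 2a-2c and 3).
    phi = np.arctan2(A3 - A1, A0 - A2)                 # phase shift (Eq. 2a), full circle
    phi = np.mod(phi, 2.0 * np.pi)                     # map into [0, 2*pi)
    a = 0.5 * np.sqrt((A3 - A1)**2 + (A0 - A2)**2)     # amplitude (Eq. 2b)
    h = 0.25 * (A0 + A1 + A2 + A3)                     # offset / gray-scale value (Eq. 2c)
    d = C * phi / (4.0 * np.pi * mod_freq)             # distance (Eq. 3)
    return phi, a, h, d

# Unambiguous range for a 20 MHz modulation frequency: c / (2 * 20 MHz) = 7.5 m
print(C / (2.0 * 20e6))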

4 Error Sources in TOF Depth Measurements

First of all, one has to note that TOF cameras, like regular gray-scale cameras, are described by the pinhole camera model. Their images are therefore subject to lens distortion and to the effects of the focal length and a shifted optical center. Those effects are usually handled by the lateral (2D) calibration of the camera.


Additionally, the depth measurements of TOF cameras themselves are corrupted by numerous error sources (see [Lange, 2000] for an exhaustive review). First of all, Lindner and Kolb [2006] observe a periodic error related to the measured distance. The error has a wavelength of approximately 2 m. Lindner and Kolb attribute this error to the fact that the calculation of the distance assumes a perfectly sinusoidal light source, which in practice is not the case.

A second source of error is the time it takes the sensor in the array to propagate the signal to the processing unit. This error depends on the relative position of the sensor within the array (i.e., the pixel in the image) [Fuchs and May, 2007].

Furthermore, since the distance calculation depends on the amount of reflected light, the intensity of the optical signal (i.e., the brightness) affects the distance measurements. On the one hand, a low intensity leads to a poor signal-to-noise ratio, corrupting the measurement randomly. On the other hand, different sources [Lindner and Kolb, 2007; Guðmundsson et al., 2007; Radmer et al., 2008] report an additional systematic error, which can be incorporated into the calibration.

Another error arises from the shutter time of the camera, i.e., the time over which the camera integrates the image. Longer integration times tend to shift the image towards the camera [Kahlmann et al., 2006; Lindner and Kolb, 2007].

Kahlmann et al. [2006] also report that the internal temperature of the camera, as well as the external temperature, influences the depth measurements. During the first few minutes, while the camera warms up, the measured distance increases. But even after the temperature has more or less stabilized, Kahlmann et al. report a small deviation that, according to them, is due to a cool-down occurring in between taking the individual images. They also determined that increasing the room temperature results in a drift away from the camera of around 8 mm/°C.

Finally, in [Guðmundsson et al., 2007], the effects of multiple reflections on the distance measurements are discussed.

5 Error Model and Calibration

Let D_v^i be the distance measurement of a pixel v = (r, c) at row r and column c in the i-th image, and let E_v^i be the error in this distance measurement. Let, furthermore, A : ℝ × ℝ² → ℝ³ be the projection of a given pixel with a given distance into the Cartesian coordinate system, including the correction of the focal length, the shifting of the optical center, and the lens distortion. In our setup, the camera was attached at the end of a robot arm with multiple joints (see Figure 2). The end-effector pose wT_t^i is given by the robot control and assumed to be true. Additionally, we assume an unknown transformation tT_s, the sensor-to-tool-center-point transformation, between the sensor coordinate system and the end-effector coordinate system.

Using these definitions, the world coordinate x_v^i corresponding to a pixel v in image i is given by

x_v^i = wT_t^i · tT_s · A(D_v^i − E_v^i, v).   (4)
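As an illustration of Equation (4), the following sketch back-projects a pixel with its corrected distance into the sensor frame and chains the two homogeneous transformations. The intrinsics matrix K, the assumption that lens distortion has already been removed, and the helper names are ours and serve only the illustration.

import numpy as np

def back_project(distance, pixel, K):
    # A(d, v): map pixel v = (r, c) with corrected distance d to a 3D point in the
    # sensor frame, using pinhole intrinsics K (lens distortion assumed removed).
    r, c = pixel
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    ray = np.array([(c - cx) / fx, (r - cy) / fy, 1.0])
    ray /= np.linalg.norm(ray)                 # unit viewing ray of the pixel
    return distance * ray                      # point at the measured range along the ray

def world_point(distance, error, pixel, K, T_tool_world, T_sensor_tool):
    # Eq. (4): x = wT_t * tT_s * A(D - E, v), with 4x4 homogeneous transforms.
    p_sensor = np.append(back_project(distance - error, pixel, K), 1.0)
    return (T_tool_world @ T_sensor_tool @ p_sensor)[:3]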

Figure 2: The PMD camera fixed to the robot's end-effector (courtesy of Barbara Frank). The arm was moved to different poses to take images from different views of the wall and checkerboard.

5.1 Error Model

As discussed in Section 4, the depth error E_v^i consists of different factors. In this work, we try to account for the distance-related error, D, the pixel-related error, P, and the intensity-related error, I. The sum of these three individual errors will be used as our error model:

E_v^i = D + P + I.   (5)

Distance-Related Error  Since this work focuses on distances below 2 m, we decided to ignore the periodicity of the circular error and not use a sinusoidal base function. Instead, we follow the approach of [Fuchs and May, 2007] and model this error as a third-order polynomial:

D(D_v^i) = c_0 + c_1·D_v^i + c_2·[D_v^i]² + c_3·[D_v^i]³.   (6)

Pixel-Related Error  As previously mentioned, Fuchs and May [2007] state that the pixel-related error stems from the propagation delay within the CMOS gates. They model this error using a linear function of the row and column of the pixel as follows:

P_1(r, c) = p_0 + p_1·r + p_2·c.

This assumes that the pixel-related error has its minimum at one corner of the image (depending on the signs of p_1 and p_2), and its maximum at the diagonally-opposite corner.

However, an inspection of the plots of the error against the row and the column (see Figure 3) reveals that our data does not seem to confirm this assumption. The error does not seem to have its maximum exactly at a corner of the image, nor does a linear model seem to fit. One thing to note, though, is that the individual errors are highly correlated.


Figure 3: Plot of the error against (a) the row of the pixel and (b) the column of the pixel. Errors are given in centimeters, averaged over all available pixels; the actual values are therefore largely uninformative. The plots are only used as an indicator of the general trend.

The pixels on the outside of the image, for example, are usually much darker and are therefore affected differently by any intensity-related effects than those in the center.

Nonetheless, we decided to model the pixel-related error as it appears to be, and utilized the following term:

P_2(r, c) = p_0 + p_1·(r − r_0)² + p_2·(c − c_0)²,   (7)

which allows for a more accurate determination of the location (in terms of row and column number) where this error is minimal.

Intensity-Related Error  As already mentioned in the previous section, the error in the measured distance is also related to the intensity of the pixel. Pixels with lower reflectivity (i.e., darker pixels) tend to drift closer to the camera (this was also observed in [Lindner and Kolb, 2007]). Unlike Lindner and Kolb, who use B-splines to model the intensity-related error, we decided to use a polynomial function, since it fits better into the general model of Fuchs and May. In order to keep the number of parameters small, we only used a second-order polynomial, so we get

I_1(I_v^i) = i_0 + i_1·[I_v^i] + i_2·[I_v^i]²,   (8)

as our error term, where I_v^i is the intensity reading reported by the camera for pixel v in the i-th image.

Additionally, to account for the fact that the measured intensity is not only related to the reflectivity, but also to the distance, we used a distance-normalized intensity, N_v^i, given by

N_v^i = I_v^i · (D_v^i)²,

analogous to the measured intensity, yielding

I_2(I_v^i, D_v^i) = i_0 + i_3·[N_v^i] + i_4·[N_v^i]².   (9)
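The three terms can then be combined into a single function of the model parameters. The following sketch evaluates the sum of Equations (6), (7), and (9) for one pixel; the packing of the twelve parameters into one vector is our own choice for the later optimization.

def error_model(params, D, row, col, I):
    # E = D(.) + P2(.) + I2(.), Eqs. (6), (7), and (9).
    # params stacks (c0, c1, c2, c3, p0, p1, p2, r0, col0, i0, i3, i4).
    c0, c1, c2, c3, p0, p1, p2, r0, col0, i0, i3, i4 = params
    dist_err = c0 + c1 * D + c2 * D**2 + c3 * D**3              # Eq. (6)
    pix_err = p0 + p1 * (row - r0)**2 + p2 * (col - col0)**2    # Eq. (7)
    N = I * D**2                                                # distance-normalized intensity
    int_err = i0 + i3 * N + i4 * N**2                           # Eq. (9)
    return dist_err + pix_err + int_err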

5.2 Calibration

With the distance, d^i, and the position of the robot relative to the wall, given by the unit normal vector of the plane, n^i (measured with a laser range finder), it is clear that, using x_v^i from Equation (4), the equation

(n^i)ᵀ·x_v^i + d^i = 0

must hold for all pixels in all images if we know a perfect model of the error. The task of the calibration is, therefore, to find a parametrization a* of E_v^i and the unknown sensor-to-tool-center-point transformation that minimizes the sum of the squared errors over all available pixels:

a* = argmin_{a ∈ ℝⁿ} Σ_i Σ_v [(n^i)ᵀ·x_v^i + d^i]².   (10)

We implemented this optimization procedure using the Levenberg-Marquardt algorithm, an algorithm for non-linear least-squares estimation.
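A minimal sketch of this optimization, using SciPy's Levenberg-Marquardt solver and reusing the error_model and world_point sketches from above, is given below. The 6-DoF pose parametrization, the per-pixel record layout (train_pixels), and the helper pose_to_matrix are illustrative assumptions, not the exact implementation used in our experiments.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def pose_to_matrix(p):
    # Illustrative 6-vector (tx, ty, tz, rx, ry, rz) -> 4x4 homogeneous transform.
    T = np.eye(4)
    T[:3, :3] = Rotation.from_rotvec(p[3:]).as_matrix()
    T[:3, 3] = p[:3]
    return T

def residuals(a, pixels):
    # One residual (n^i)^T x^i_v + d^i per pixel (Eq. 10); `a` stacks the twelve
    # error-model parameters and a 6-DoF parametrization of tT_s.
    err_params, T_sensor_tool = a[:12], pose_to_matrix(a[12:])
    res = []
    for px in pixels:   # px: assumed record with fields D, row, col, I, K, T_tool_world, n, d
        E = error_model(err_params, px.D, px.row, px.col, px.I)
        x = world_point(px.D, E, (px.row, px.col), px.K, px.T_tool_world, T_sensor_tool)
        res.append(px.n @ x + px.d)
    return np.asarray(res)

# 'lm' is the Levenberg-Marquardt method of scipy.optimize.least_squares
a_star = least_squares(residuals, x0=np.zeros(18), args=(train_pixels,), method="lm").x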

6 Experimental Results

Image Sets  In the experiments, we used three different sets of images taken of a planar, white wall. The first set contains 62 images of a checkerboard pattern hung on the wall, the second consists of 30 images of the white wall, and the third contains 20 images of the plain wall and 22 checkerboard images. The images were taken using a PMD[vision] O3 camera manufactured by PMD Technologies, which is attached to the end-effector of a robotic arm and has a resolution of 50×64 pixels. Moreover, each image was taken from a different position relative to the wall.

Since the three image sets were taken on three different occasions with different operating conditions (ambient light, room temperature, temperature of the camera, ...), we decided not to mix the sets, but to treat them individually.

Lateral Correction  The calibration for the focal length, the optical center, and the lens distortion is a well-explored topic in computer vision. It can be done in a distinct preprocessing step using the intensity values of the images, since it does not depend on the distance readings. In our experiments, we used camera parameters calculated using the OpenCV library¹.
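The following sketch shows how such a preprocessing step can be carried out with OpenCV's standard checkerboard calibration. The variable intensity_images, the board geometry, and the square size are placeholders, not the exact settings used for our data.

import numpy as np
import cv2

pattern = (8, 6)                          # inner corners of the checkerboard (assumed)
square = 0.04                             # square size in meters (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for img in intensity_images:              # 8-bit gray-scale intensity images of the PMD camera
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Focal length, optical center, and distortion coefficients
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, intensity_images[0].shape[::-1], None, None)
undistorted = cv2.undistort(intensity_images[0], K, dist)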

Error Models  We tested three different approaches:

E_A: the error model as described above, using the plain intensities: E_A = D + P_2 + I_1.

E_B: the same model, using the normalized intensities instead of the plain intensities: E_B = D + P_2 + I_2.

E_0: the error model from [Fuchs and May, 2007], with the linear pixel-related error term (as discussed above) and no intensity error, as a baseline: E_0 = D + P_1.

¹ http://opencv.willowgarage.com/wiki/


Experiments  In a first series of experiments, each data set was randomly split into two halves: one training set used during the calibration and one test set used during the evaluation. Table 1 shows the absolute error for each error model, averaged over all the pixels in the test set.

All three methods decreased the error significantly. In particular, all three methods were able to align the image with the plane (see Figure 4), i.e., they found a suitable sensor-to-TCP transformation. The two methods presented in this paper both outperformed the baseline method on all three image sets, although the margin on the second image set is far smaller than on the others. This is because the second image set only contains white images, which exhibit less variation in intensity, so the intensity-related error is less decisive.

Table 1: Average per-pixel error (in millimeters) before and after correction by the three error models on the three image sets.

              0 (uncorrected)   E_0     E_A     E_B
Image Set 1        28.30        11.33    8.11    8.05
Image Set 2        55.29         7.38    6.36    7.00
Image Set 3        26.12        21.86   16.39   17.23

Figure 4: Projection of one image into 3D, (a) before correction and (b) after correction. Note how in (a) the plane is translated and tilted with respect to the expected plane, while in (b) the expected and the actual plane are more or less aligned. This is most notably the effect of the sensor-to-tool-center-point transformation.

Since the difference between the model using the measured intensities and the model using the normalized intensities was too small for any qualified statement, we ran a second series of experiments. This second series was done using 6-fold cross-validation, to remove the randomness introduced by splitting the image sets. Due to time constraints, the second experiment was only done using the third data set, which we believe to be the most representative of the error sources the presented error models try to account for. Table 2 shows the results of the second experiment.

Although the results suggest that the model using the measured intensities performs slightly better than the model with the normalized intensities, no statistically significant difference between the two models can be claimed.

Effects of the Intensity-Related Error Term  The results clearly show that using the intensities as additional error terms is beneficial.

Table 2: Average per-pixel error (in millimeters) for the different error models for each fold.

          0 (uncorrected)    E_0     E_A     E_B
Fold 1         19.79        15.71   10.16    9.76
Fold 2         21.57        12.54    7.92   10.58
Fold 3         36.99        23.52   19.05   18.55
Fold 4         24.41        14.64    9.76    9.83
Fold 5         32.66        13.66   11.58   12.53
Fold 6         23.83        10.78    6.96    8.85
Mean           26.54        15.14   10.90   11.68
2.57·SE        ±6.48        ±4.26   ±4.13   ±3.44

Both of the models presented here are significantly better than the baseline model (using a t-distribution with 5 degrees of freedom and a confidence level of 95%). The effect of the correction using the intensity is shown in Figure 5. One can see that the effect of the checkerboard pattern is almost gone. Another effect we observed using the baseline model is that the surface of the image is always concave. This is probably because the intensities in our image sets tend to be much higher in the center of the image. This effect was also largely reduced by the intensity error term.
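For reference, the mean and the interval reported in Table 2 can be approximately reproduced along the lines of the following sketch; the variance convention (population standard deviation over the six folds) is our reading of the table, not something stated explicitly.

import numpy as np
from scipy import stats

def mean_and_ci(fold_errors, confidence=0.95):
    # Mean per-pixel error over the folds and a t-based interval half-width
    # (5 degrees of freedom for 6 folds).
    folds = np.asarray(fold_errors, dtype=float)
    n = folds.size
    t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)   # ~2.571 for n = 6
    se = folds.std() / np.sqrt(n)                            # population-SD convention
    return folds.mean(), t_crit * se

# E_A column of Table 2:
print(mean_and_ci([10.16, 7.92, 19.05, 9.76, 11.58, 6.96]))  # ~ (10.90, 4.1)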

One more interesting thing to note is that the error tends to be higher at the edges of the checkerboard squares. This could be explained by the low resolution of the camera: the pixels on the edges are probably a mixture of the darker and the brighter areas.

Other Observations  In our experiments we also noticed effects that we think can be attributed to the temperature of the camera. Figure 6 shows the error in the distance measurement of the first three images of image set two. The three images were taken consecutively from almost the same position. Since we did not notice such a large difference (roughly 15 mm from the first to the third image) in other images taken from similarly close positions, we believe this drift is related to the warm-up phase of the camera.

Figure 6: Plots of the absolute error in the measured distance of three consecutive images, illustrating the drift of the image during the warm-up phase of the camera.

We could not confirm this assumption, since our data contains information about neither the temperature of the camera nor that of the room.


Figure 5: Projection of a checkerboard image into 3D (a) without intensity correction, (b) with plain-intensity correction, and (c) with normalized-intensity correction. Note how in (a) the checkerboard pattern affects the distance measurement and how this is accounted for in (b) and (c). Also note that, since the intensity is in general higher in the center of the image, (a) has a bowl-like shape. This effect is also reduced in (b) and (c).

7 Conclusions

In this paper we presented a method for calibrating TOF cameras. The presented method extends the error model published by Fuchs and May [2007] with an additional error term that accounts for the intensity-related shift in the measured distances. Our experiments showed that, by introducing our model of the distance-intensity dependency, the error in the distance measurements can be significantly decreased.

Further work should be done to find more suitable functions for the individual error terms, as well as to develop error models for the other systematic errors mentioned in this and other papers, such as those related to the temperature of the camera and the environment. A problem we see in this approach is that the number of parameters of the error model is already large enough to make it susceptible to over-fitting. We fear that further enhancements of the model could aggravate this problem.

Finally, to better assess the viability of the presented method, more work should be invested in comparing this work to other approaches which incorporate the intensity as an error source, such as the work by Lindner and Kolb [2007].

Acknowledgements

This paper was written for the final project of the Mobile Robotics 2 course, held in the 2009/10 winter term at the Albert-Ludwigs-Universität Freiburg, lectured by Prof. Dr. Wolfram Burgard, PD Dr. Cyrill Stachniss, Dr. Giorgio Grisetti, and Dr. Kai Arras. The project was supervised by Barbara Frank, whom we would like to thank for her assistance and for providing us with the image sets and data necessary for this work.

References

[Fuchs and May, 2007] Stefan Fuchs and Stefan May. Calibration and registration for precise surface reconstruction with TOF cameras. In Proceedings of the DAGM Dyn3D Workshop, Heidelberg, Germany, 2007.

[Guðmundsson et al., 2007] Sigurjón Árni Guðmundsson, Henrik Aanæs, and Rasmus Larsen. Environmental effects on measurement uncertainties of time-of-flight cameras. In International Symposium on Signals, Circuits and Systems (ISSCS), 2007.

[Kahlmann et al., 2006] T. Kahlmann, F. Remondino, and H. Ingensand. Calibration for increased accuracy of the range imaging camera SwissRanger™. In ISPRS Commission V Symposium 'Image Engineering and Vision Metrology', 2006.

[Lange, 2000] Robert Lange. 3D Time-of-Flight Distance Measurement with Custom Solid-State Image Sensors in CMOS/CCD-Technology. PhD thesis, Department of Electrical Engineering and Computer Science, University of Siegen, 2000.

[Lindner and Kolb, 2006] Marvin Lindner and Andreas Kolb. Lateral and depth calibration of PMD-distance sensors. In Advances in Visual Computing, volume 2, pages 524-533. Springer, 2006.

[Lindner and Kolb, 2007] Marvin Lindner and Andreas Kolb. Calibration of the intensity-related distance error of the PMD TOF-camera. In Intelligent Robots and Computer Vision XXV: Algorithms, Techniques, and Active Vision, 2007.

[Prasad et al., 2006] T. D. Arun Prasad, Klaus Hartmann, Wolfgang Weihs, Seyed Eghbal Ghobadi, and Arnd Sluiter. First steps in enhancing 3D vision technique using 2D/3D sensors. In Computer Vision Winter Workshop, 2006.

[Radmer et al., 2008] Jochen Radmer, Pol Moser Fusté, Henning Schmidt, and Jörg Krüger. Incident light related distance error study and calibration of the PMD-range imaging camera. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.

[Ringbeck and Hagebeuker, 2007] Thorsten Ringbeck and Bianca Hagebeuker. A 3D time of flight camera for object detection. In Optical 3-D Measurement Techniques. ETH Zürich, 2007.