


Photometric Display Calibration for Embedded MR Environments

Steven Braeger*, Yiyan Xiong, and Charles E. Hughes†
University of Central Florida

ABSTRACT

We demonstrate a new technique to minimize photometric differences between the color space of a display, a sensor, and the real world, for mixed-reality (MR) systems where the real world is visible near or around a display. In this approach, we use an uncalibrated sensor to capture images of the display showing a series of calibration patterns. Using these images, we precompute a polynomial mapping function describing the continuous color-space transform between the real world, sensor, and display. This mapping function can be evaluated in real time to produce images that, when rendered on the embedded display, greatly reduce the color differences between displayed images of a mixed-reality world and the surrounding real-world environment. The color distribution of the resulting displayed images is significantly closer to that of the ground-truth real-world scene as perceived through the human eye or other sensors. We demonstrate the impact of this technique when applied to a simple dynamic-geometry mixed-reality application.

Keywords: Color-calibration, Displays, Mixed-reality

1 INTRODUCTION

In many AR or MR applications, such as an avatar system or mobile augmented reality [4, 6], a part of the real-world environment is usually captured and then rendered to a display device. There is often a dramatic difference between the appearance of the MR environment on the display and the actual appearance of the surrounding "real world" environment. This difference often manifests as a bluish tint and a compressed dynamic range. We believe that this difference can have a significant impact on immersion.

We demonstrate a technique to reduce the difference between the image rendered on an "embedded" display device and the actual colors of the surrounding real-world environment, as measured by a sensor or the human eye.

By sensing the display while it is rendering a series of patterns, we sample the non-linear mapping (which goes from a virtual world, through the display, to the real world, and then through a sensor) as a black box, which allows us to avoid creating an explicit model of the internals of these systems.

We fit our samples of the black-box color-space mapping to a continuous 3-D polynomial that maps from the RGB space of the sensor to the RGB space of the real world.

This polynomial is evaluated at render time to correct the color values of the virtual world so that images rendered on the display match the colors detected in the real world.

We validate our method by calibrating a display, then sensing the display with multiple unrelated sensors. This demonstrates that the calibration matches the real world, because the calibrated images appear consistent with the real world to multiple sensors.

2 DISCUSSION

Previous work models photometric calibration of multiple displays to each other, but not to the real world [3, 7]. Other works also model color-space transformations and calibration using polynomial models [5, 2] to do generic image calibrations, but not to the real world through a display.

*e-mail: [email protected]
†e-mail: [email protected]

Figure 1: An example of a randomly generated gradient calibration pattern.

Figure 2: The experimental setup

In our approach, we directly match the virtual color space of a display to the actual emitted photons of the real world.

We consider three color spaces: 1) the "virtual" color space V ⊂ (0,1)³, as output by a 3D application to a display; 2) the "actual" color space A ⊂ ℝ³, the luminance emitted by a display device and seen by the human eye; and 3) the "sensor" color space I ⊂ ℝ³, the recorded color of real-world colors after they have been sampled by a sensor device, such as a camera, and recorded to a storage medium.

The goal of our method is to determine D, the mapping through the display from the virtual color space V to the real-world colors A. That transformation can be undone by applying D⁻¹ to the data before it gets to the display, reproducing the real-world colors accurately on the display.

2.1 Calibration

A sensor is pointed at the display and then geometrically calibrated to the screen plane with a homography [1]. The sensor is used to measure the screen output. Since the sensor has a transformation C of its own, the values returned from the measurement sample a sensor-space transformation F, which is the composition of D and C, representing the transform from virtual space, through the real world, and back to image space through a sensor:

F(\vec{v}) = C(D(\vec{v})) = C(\vec{a}) = \vec{i} \qquad (1)
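As an aside, the geometric registration mentioned at the start of this section could be sketched as follows. This is only a minimal illustration, assuming OpenCV and that the four corners of the display have already been located in the sensor image; the function name and parameters here are our assumptions, not part of the paper.

```python
import cv2
import numpy as np

# Hypothetical sketch: estimate a plane-to-plane homography from the four
# display corners found in the sensor image and warp the photo onto the
# display's pixel grid, so each output pixel aligns with a display pixel.
def register_sensor_to_screen(sensor_img, screen_corners_px, display_w, display_h):
    # Corners must be ordered: top-left, top-right, bottom-right, bottom-left.
    src = np.asarray(screen_corners_px, dtype=np.float32)
    dst = np.float32([[0, 0],
                      [display_w - 1, 0],
                      [display_w - 1, display_h - 1],
                      [0, display_h - 1]])
    H, _ = cv2.findHomography(src, dst)
    return cv2.warpPerspective(sensor_img, H, (display_w, display_h))
```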

A series of calibration patterns is chosen for display to facilitate the calibration. These patterns are chosen to generate a series of sample points with good coverage over the entire 3D space of V. The patterns are based on long lines of slowly blending strips of color. Each vertical strip of color begins at a random point on one side of the RGB color space and ends on the other. One calibration image is made up of 16 of these vertical strips (Figure 1).
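The paper does not give an exact construction for these patterns, but a minimal sketch under those constraints might look like the following; the image size, the linear blend, and the uniform random endpoints are our assumptions.

```python
import numpy as np

# Sketch of one random gradient calibration pattern: 16 vertical strips, each
# blending linearly from one random RGB endpoint to another across the image.
def make_calibration_pattern(width=1024, height=768, strips=16, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    strip_w = width // strips
    pattern = np.zeros((height, width, 3), dtype=np.float32)
    t = np.linspace(0.0, 1.0, height)[:, None]      # vertical blend factor
    for s in range(strips):
        start = rng.random(3)                        # random RGB endpoint
        end = rng.random(3)                          # endpoint on the "other side"
        column = (1.0 - t) * start + t * end         # height x 3 gradient
        pattern[:, s * strip_w:(s + 1) * strip_w, :] = column[:, None, :]
    return pattern                                    # values in V = (0,1)^3
```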




Table 1: Experimental results calibrating a laptop display to the real world using a Nikon D70 (top) and a Canon PowerShot G7 (bottom), including cross-validation. Columns: Ground Truth, Uncalibrated, Self-Validated, Cross-Validated, and Cross-Validated with the FinePix E900.

Similar to other work [2], a polynomial model (below) is fit to the data to find a continuous curve that serves as a suitable definition of F and F⁻¹. However, our polynomial has arbitrarily high order instead of being quadratic, and it is fully generic with all 3-D terms.

F_c^{-1}(r, g, b) = \sum_{i=0}^{n} \sum_{j=0}^{n-i} \sum_{k=0}^{n-i-j} C_{ijk} \, r^{i} g^{j} b^{k} \qquad (2)

Here, F_c is the curve for a channel, C_ijk is a coefficient, n is the order, and (r, g, b) is the pixel color. We fit the data points to the model in a least-squares sense, increasing the order of the polynomial and re-fitting if doing so improves the fit.
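A minimal sketch of this fit, assuming NumPy and reading the sums in Eq. (2) as all monomials r^i g^j b^k with i + j + k ≤ n, is shown below. The stopping tolerance for the order search is our assumption; the paper only states that the order is increased while re-fitting improves the fit.

```python
import numpy as np
from itertools import product

def monomial_basis(rgb, n):
    """Design matrix with one column per monomial r^i g^j b^k, i + j + k <= n."""
    r, g, b = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    cols = [(r ** i) * (g ** j) * (b ** k)
            for i, j, k in product(range(n + 1), repeat=3) if i + j + k <= n]
    return np.stack(cols, axis=1)

def fit_inverse_mapping(sensor_rgb, display_rgb, max_order=6, tol=1e-4):
    """Least-squares fit of Eq. (2) per channel, raising the order while it helps.

    sensor_rgb  -- N x 3 colors measured by the sensor (samples of I)
    display_rgb -- N x 3 values that were sent to the display (samples of V)
    """
    best = None
    for n in range(2, max_order + 1):
        X = monomial_basis(sensor_rgb, n)
        # lstsq with a 3-column right-hand side fits each channel independently.
        coeffs = np.linalg.lstsq(X, display_rgb, rcond=None)[0]
        err = np.mean((X @ coeffs - display_rgb) ** 2)
        if best is not None and best[2] - err < tol:
            break  # a higher order no longer improves the fit
        best = (n, coeffs, err)
    return best[0], best[1]  # chosen order n and coefficient matrix C_ijk
```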

The MR application uses the uncalibrated color space that the sensor defines internally. In other words, if the "virtual" color space in the application is based on the uncalibrated color space captured from the sensor, then V = I. (This is known as "image-based rendering".) Therefore, applying F⁻¹ to V at runtime in the application will result in the correct real-world color being rendered on the display:

D(F^{-1}(\vec{i}_{env})) = D(D^{-1}(C^{-1}(\vec{i}_{env}))) \qquad (3)
                         = C^{-1}(\vec{i}_{env}) \qquad (4)
                         = \vec{a}_{env} \qquad (5)
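At runtime this amounts to evaluating the fitted polynomial per pixel before the image reaches the display. The sketch below reuses monomial_basis and the coefficients from the fitting sketch above; in a real application the same evaluation would likely run at render time in a shader, and the clipping back to (0,1)³ is our assumption.

```python
import numpy as np

def correct_image(i_env, order, coeffs):
    """Apply the fitted F^{-1} (Eqs. 3-5) to a captured environment image.

    Expects i_env as an H x W x 3 float array in (0,1); `order` and `coeffs`
    come from fit_inverse_mapping, and monomial_basis is the helper above.
    """
    h, w, _ = i_env.shape
    flat = i_env.reshape(-1, 3).astype(np.float64)
    corrected = monomial_basis(flat, order) @ coeffs  # evaluate F^{-1} per pixel
    return np.clip(corrected, 0.0, 1.0).reshape(h, w, 3)
```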

2.2 Experimental Validation

For validation, the calibration procedure was performed using two different imaging sensors on the display device. A third sensor, a FujiFilm FinePix E900, was not calibrated, but was used to validate the "real world" nature of the calibration by imaging the MR application with the other calibration curves applied.

For each of the two main sensors, the following experiment was performed: 1) Remove the display device and take a photo of the background environment; store it as i_env. 2) Replace the display device and run the geometric and color calibration procedure (Section 2.1). 3) Render a color-corrected version of i_env to the display device. 4) Photograph the display device showing the image calibrated to this sensor with all three sensors. 5) Compare the image from each sensor to the ground-truth i_env image for that sensor using the sum of squared differences.

Photographic results of this experiment are documented in Table 1. The AR image on the display compares favorably to the ground-truth "real world", even when photographs are taken using other sensors. This lends support to the idea that the calibration is sensor-independent.

In order to empirically evaluate the results for each sensor, the subregion of the image that is inside the display was compared to the ground-truth image for that sensor. Treating the image difference data as a vector, we compute the L2 norm of this vector.
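A minimal sketch of this metric, assuming NumPy and a boolean mask selecting the display subregion (the mask and image scaling are our assumptions), is:

```python
import numpy as np

def display_region_error(photo, ground_truth, mask):
    """L2 norm of the per-pixel difference inside the display subregion.

    photo, ground_truth -- H x W x 3 images from the same sensor
    mask                -- H x W boolean array marking the display subregion
    """
    diff = photo.astype(np.float64)[mask] - ground_truth.astype(np.float64)[mask]
    return np.linalg.norm(diff.ravel())
```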

The results are summarized below (L2 norm of the image difference):

Sensor         Calibrated    Uncalibrated
Nikon D70      3.551×10⁴     6.4831×10⁴
PowerShot G7   7.905×10⁴     8.8110×10⁴

The difference between the images of the display and the ground truth becomes significantly smaller after calibration.

3 CONCLUSION

We demonstrate a color-calibration procedure for "embedded" mixed-reality applications. The multi-dimensional polynomial produced by the calibration procedure can be used with image-based rendering to improve the fidelity of a large class of MR and AR experiences. The validation shows a significant improvement in how closely the calibrated display matches the ground-truth actual environment, as measured by multiple sensors.

However, human perception of realism is a far more relevant performance metric for MR and AR applications. Thus, it would be desirable to evaluate this technique with a user study in the future.

REFERENCES

[1] R. Hartley. In defense of the eight-point algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6):580-593, June 1997.

[2] A. Ilie and G. Welch. Ensuring color consistency across multiple cameras. In ICCV 2005, pages 1268-1275, 2005.

[3] D. Iwai and K. Sato. Limpid desk: See-through access to disorderly desktop in projection-based mixed reality. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST '06), pages 112-115, New York, NY, USA, 2006. ACM.

[4] P. Lincoln, G. Welch, A. Nashel, A. Ilie, A. State, and H. Fuchs. Animatronic shader lamps avatars. In Proceedings of the 2009 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR '09), pages 27-33, Washington, DC, USA, 2009. IEEE Computer Society.

[5] T. Mitsunaga and S. Nayar. Radiometric self calibration. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, 1999.

[6] D. Wagner and D. Schmalstieg. First steps towards handheld augmented reality. In Proceedings of the 7th IEEE International Symposium on Wearable Computers (ISWC '03), pages 127-, Washington, DC, USA, 2003. IEEE Computer Society.

[7] A. G. Welch, A. R. Stevens, R. H. Towles, and A. Majumder. A practical framework to achieve perceptually seamless multi-projector displays. Technical report, 2003.
