
An Error Estimation Framework for Photon Density Estimation

Toshiya Hachisuka∗ Wojciech Jarosz†,∗ Henrik Wann Jensen∗

∗UC San Diego †Disney Research Zürich

[Figure 1 panels: Measurement Points (left); (a) Relative Error at Point A, (b) Relative Error at Point B, (c) Relative Error at Point C, each plotting relative error (1E-4 to 1E-0) against number of iterations (1 to 1000), actual error vs. estimated error; bottom row: average relative error (0.01 to 1.0) against number of iterations for error thresholds E_thr = 0.5, 0.25, 0.125, 0.0625, actual error vs. estimated error bound.]
Figure 1: Top: error bound estimation on a test scene. We show the reference rendering (left), as well as the actual error (red) and estimated error bound (green) at the three points (a, b, c) shown in the reference images. The specified confidence is 90%. Each iteration uses 15K photons. Bottom: rendering with specified error thresholds. The rendering process is terminated once the error estimate reaches the specified threshold. Our conservative error bound predicts the rate of convergence of the true average error automatically (log-log plot).

Introduction Estimating error is an important task in rendering. For many predictive rendering applications, such as simulation of car headlights, lighting design, or architectural design, it is important to provide an estimate of the actual error to ensure confidence in the accuracy of the results. Even for applications where accuracy is not critical, error estimation is still useful for improving aspects of the rendering algorithm. Examples include terminating the rendering algorithm automatically, adaptive sampling where the parameters of the rendering algorithm are adjusted dynamically to minimize the error, and interpolating sparsely sampled radiance within a given error bound.

We present a general error bound estimation framework for global illumination rendering using photon density estimation. Our method estimates the bias due to density estimation of photons and the variance due to photon tracing. Our error bound estimation is robust for any light transport configuration since it is based on progressive photon mapping (PPM). We demonstrate that our estimated error bound captures error under complex light transport. Figure 1 shows that our error bound estimation works well under complex illumination, including caustics. As a proof of concept, we demonstrate that our error bound estimation can be used to automatically terminate rendering without any subjective trial and error by the user. Our framework is the first general error estimation framework for photon-based rendering methods that can handle complex global light transport with arbitrary light paths and materials. Existing work is either based on heuristics that do not capture error, or is limited to specific light paths or materials. We believe that our work is the first step towards answering the important question: “How many photons are needed to render this scene?”.

Our Framework Unbiased Monte Carlo ray tracing algorithms are often preferred for predictive rendering since their error bound can be estimated from variance. However, unbiased methods are not robust in the presence of specular reflections or refractions of caustics from small light sources, which can often be seen in lighting simulation applications (light bulbs, headlights, etc.). We therefore use progressive photon mapping (PPM) [Hachisuka et al. 2008], which is a biased Monte Carlo algorithm that is robust under these lighting conditions. Our error estimation framework estimates the following stochastic error bound: P(−E_i < L_i − L − B_i < E_i) = 1 − β, where E_i = t(i, 1 − β/2) √(V_i / i), t(i, x) is the x-th percentile of the t-distribution with i degrees of freedom, L_i is the estimated radiance, B_i is the estimated bias, V_i is the estimated variance, L is the correct radiance, and 1 − β is the user-defined confidence of this stochastic bound. Using this stochastic error bound, the user can simply specify their desired confidence as 1 − β, and our framework can tell how far the current estimated radiance is from the correct radiance without knowing L.

We estimate the bias as B_i ≈ (1/2) R_i² k_2 ΔL [Silverman 1986], where k_2 is a constant derived from the kernel and R_i is the radius used for density estimation. We show how to apply this technique to the progressive density estimation used in progressive photon mapping. This equation uses the Laplacian of the unknown, correct radiance, ΔL. We estimate this value using the Laplacian of the kernel, which has been used in standard density estimation techniques. Although the original PPM does not support smooth kernels that have a Laplacian, we derive the necessary conditions for incorporating such kernel functions within PPM. Using our kernel-based PPM, we can estimate derivatives of any order, including the Laplacian of radiance, in a consistent way, which has not been done in existing work and would be useful for the analysis of illumination.

Unfortunately, the standard procedure for estimating variance cannot be used in biased methods. We therefore propose estimating variance using the bias-corrected radiance L_i − B_i, which also removes the dependency between samples in PPM. The same framework can be applied in a straightforward way to photon mapping or grid-based photon density estimation by simply estimating the Laplacian. The advantage of using PPM is that the estimate of the Laplacian converges to the correct Laplacian in the limit, rather than remaining an approximation. This means that the entire framework is consistent except for the approximation of the bias.
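As a concrete illustration, the following is a minimal Python sketch (not the authors' implementation) of how the quantities above combine at a single measurement point. It assumes that one independent radiance estimate per PPM iteration and a current bias estimate B_i (e.g. from the kernel-Laplacian approximation above) are already available, and that at least two iterations have completed; the function and variable names are illustrative only.

```python
# Sketch of the stochastic error bound P(-E_i < L_i - L - B_i < E_i) = 1 - beta.
# Assumptions (not from the paper's code): `radiance_samples` holds one radiance
# estimate per iteration at a measurement point, and `bias_estimate` approximates
# B_i = 0.5 * R_i^2 * k_2 * Laplacian(L).
import math
from scipy.stats import t as student_t

def error_bound(radiance_samples, bias_estimate, confidence=0.9):
    """Return (L_i, E_i): the averaged radiance estimate and the half-width E_i."""
    i = len(radiance_samples)               # iterations so far (needs i >= 2)
    L_i = sum(radiance_samples) / i         # current radiance estimate

    # Variance estimated from the bias-corrected per-iteration estimates,
    # as proposed in the abstract.
    corrected = [s - bias_estimate for s in radiance_samples]
    mean_c = sum(corrected) / i
    V_i = sum((c - mean_c) ** 2 for c in corrected) / (i - 1)

    beta = 1.0 - confidence
    t_val = student_t.ppf(1.0 - beta / 2.0, df=i)   # t(i, 1 - beta/2)
    E_i = t_val * math.sqrt(V_i / i)
    return L_i, E_i
```

A renderer could then stop iterating once E_i (or E_i relative to L_i) drops below a user-specified threshold, which is the automatic-termination experiment shown in the bottom row of Figure 1.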

References

HACHISUKA, T., OGAKI, S., AND JENSEN, H. W. 2008. Progressive photon mapping. ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2008) 27, 5, Article 130.

SILVERMAN, B. 1986. Density Estimation for Statistics and Data Analysis. Chapman and Hall, New York, NY.