TRANSCRIPT
Computer Graphics and Imaging UC Berkeley CS184/284A, Spring 2016
Lecture 11:
Monte Carlo Integration
Ren Ng, Spring 2016 · CS184/284A, Lecture 11
Reminder: Quadrature-Based Numerical Integration
f(x)
[Figure: f(x) sampled at points x0 = a, x1, x2, x3, x4 = b]
E.g. trapezoidal rule: estimate the integral assuming the function is piecewise linear.
$\int_a^b f(x)\,dx = F(b) - F(a)$
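The quadrature idea above can be sketched in a few lines. A minimal illustration (the integrand and panel count are arbitrary choices, not from the lecture):

```python
# Minimal sketch of trapezoidal-rule quadrature on [a, b] with n panels.
def trapezoid(f, a, b, n):
    h = (b - a) / n
    total = 0.5 * (f(a) + f(b))   # endpoint samples get weight 1/2
    for i in range(1, n):         # interior samples get weight 1
        total += f(a + i * h)
    return h * total

# Example: the integral of x^2 over [0, 1] is exactly 1/3.
estimate = trapezoid(lambda x: x * x, 0.0, 1.0, 1000)
```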
Multi-Dimensional Integrals (Rendering Examples)
2D Integral: Recall Antialiasing By Area Sampling
(Checkerboard sequence by Tom Duff)
Left: point sampling. Right: area sampling.
Integrate over 2D area of pixel
3D Integral: Motion Blur
[Cook et al. 1984]
Integrate over area of pixel and over exposure time.
5D Integral: Real Camera Pixel Exposure
Credit: lycheng99, http://flic.kr/p/x4DEZh
Integrate over 2D lens pupil, 2D pixel, and over exposure time.
5D Integral: Real Camera Pixel Exposure
[Figure: sensor plane with pixel area containing point p; lens aperture containing point p′; sensor-to-lens separation d; θ is the angle of the ray from p′ to p]
$Q_{\text{pixel}} = \frac{1}{d^2} \int_{t_0}^{t_1} \int_{A_{\text{lens}}} \int_{A_{\text{pixel}}} L(p' \to p, t)\,\cos^4\theta \; dp\,dp'\,dt$
High-Dimensional Integration
Complete set of samples: $N = \underbrace{n \times n \times \cdots \times n}_{d\ \text{times}} = n^d$
• “Curse of dimensionality”
Numerical integration error: $E \sim \frac{1}{n} = \frac{1}{N^{1/d}}$
Random sampling error: $E \sim V^{1/2} \sim \frac{1}{\sqrt{N}}$
In high dimensions, Monte Carlo integration requires fewer samples than quadrature-based numerical integration for the same error.
In global illumination, we will encounter infinite-dimensional integrals.
Stanford CS348b, Spring 2014
Monte Carlo Integration
Monte Carlo Numerical Integration
Idea: estimate an integral based on evaluating the function at random sample points.
Advantages:
• General and relatively simple method
• Requires only the ability to evaluate the function at any point
• Works for very general functions, including functions with discontinuities
• Efficient for high-dimensional integrals: avoids the “curse of dimensionality”
Disadvantages:
• Noise: the integral estimate is random, and only correct “on average”
• Can be slow to converge (needs many samples)
Example: Monte Carlo Lighting
Left: sample center of light. Right: random point on light.
1 sample per pixel
Review: Random Variables
$X$: random variable. Represents a distribution of potential values.
$X \sim p(x)$: probability density function (PDF). Describes the relative probability of a random process choosing value $x$.
Uniform PDF: all values over a domain are equally likely. E.g., for an unbiased die, $X$ takes on the values 1, 2, 3, 4, 5, 6 with $p(1) = p(2) = p(3) = p(4) = p(5) = p(6) = \frac{1}{6}$.
Discrete case: $X$ takes on one of $n$ discrete values $x_i$, each with probability $p_i$. Think: $p_i$ is the probability that a random measurement of $X$ will yield the value $x_i$.
Requirements of a probability distribution:
$p_i \ge 0 \qquad \sum_{i=1}^{n} p_i = 1$
Expected Value of a Random Variable
The average value that one obtains if repeatedly drawing samples from the random distribution.
$X$ drawn from a distribution with $n$ discrete values $x_i$ and probabilities $p_i$.
Expected value of $X$: $E[X] = \sum_{i=1}^{n} x_i\,p_i$
Die example: $E[X] = \sum_{i=1}^{6} \frac{i}{6} = (1 + 2 + 3 + 4 + 5 + 6)/6 = 3.5$
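The die computation can be checked directly; a small sketch:

```python
# E[X] = sum_i x_i * p_i for a fair six-sided die.
values = [1, 2, 3, 4, 5, 6]
probs = [1.0 / 6.0] * 6
expected = sum(x * p for x, p in zip(values, probs))  # average die roll
```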
Continuous Probability Distribution Function
A random variable $X$ that can take any of a continuous set of values, where the relative probability of a particular value is given by a continuous probability density function $p(x)$: $X \sim p(x)$.
Conditions on $p(x)$: $p(x) \ge 0$ and $\int p(x)\,dx = 1$
Expected value of $X$: $E[X] = \int x\,p(x)\,dx$
Function of a Random Variable
A function $Y$ of a random variable $X$ is also a random variable:
$X \sim p(x), \quad Y = f(X)$
Expected value of a function of a random variable:
$E[Y] = E[f(X)] = \int f(x)\,p(x)\,dx$
Monte Carlo Integration
Simple idea: estimate the integral of a function by averaging random samples of the function’s value.
[Figure: f(x) on [a, b] evaluated at random sample points $x_i$]
$\int_a^b f(x)\,dx = F(b) - F(a)$
Monte Carlo Integration
Let us define the Monte Carlo estimator for the definite integral of a given function.
Definite integral: $\int_a^b f(x)\,dx$
Random variable: $X_i \sim p(x)$ (e.g., uniform: $p(x) = \frac{1}{b-a}$), with $Y_i = f(X_i)$
Monte Carlo estimator: $F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}$
Expected Value of Monte Carlo Estimator
Properties of expected values:
$E\!\left[\sum_i Y_i\right] = \sum_i E[Y_i] \qquad E[aY] = a\,E[Y]$
The expected value of the Monte Carlo estimator is the desired integral:
$E[F_N] = E\!\left[\frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N}\sum_{i=1}^{N} E\!\left[\frac{f(X_i)}{p(X_i)}\right] = \frac{1}{N}\sum_{i=1}^{N}\int_a^b \frac{f(x)}{p(x)}\,p(x)\,dx = \frac{1}{N}\sum_{i=1}^{N}\int_a^b f(x)\,dx = \int_a^b f(x)\,dx$
Basic Monte Carlo Estimator
The basic Monte Carlo estimator is a simple special case that uses uniform random variables.
Uniform random variable: $X_i \sim p(x) = \frac{1}{b-a}$, with $Y_i = f(X_i)$
Basic Monte Carlo estimator: $F_N = \frac{b-a}{N}\sum_{i=1}^{N} f(X_i)$
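The basic estimator translates directly into code; a minimal sketch (the integrand and sample count are illustrative choices):

```python
import random

# Sketch of the basic Monte Carlo estimator F_N = (b-a)/N * sum_i f(X_i),
# with X_i drawn uniformly from [a, b].
def mc_integrate(f, a, b, n, rng):
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))   # one uniform sample X_i
    return (b - a) * total / n

rng = random.Random(0)
est = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000, rng)  # true value: 1/3
```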
Unbiased Estimator
Definition: A Monte Carlo integral estimator is unbiased if its expected value is the desired integral.
The general and basic Monte Carlo estimators are unbiased.
Why do we want unbiased estimators?
Definite Integral Can Be N-Dimensional
Example in 3D:
$\int_{x_0}^{x_1}\!\int_{y_0}^{y_1}\!\int_{z_0}^{z_1} f(x, y, z)\,dx\,dy\,dz$
Uniform 3D random variable*: $X_i \sim p(x, y, z) = \frac{1}{x_1 - x_0}\cdot\frac{1}{y_1 - y_0}\cdot\frac{1}{z_1 - z_0}$
Basic 3D MC estimator*: $F_N = \frac{(x_1 - x_0)(y_1 - y_0)(z_1 - z_0)}{N}\sum_{i=1}^{N} f(X_i)$
* Generalizes to arbitrary N-dimensional PDFs
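The 3D estimator is the same code with one uniform draw per dimension; a minimal sketch (the box and integrand are illustrative choices):

```python
import random

# Sketch of the basic 3D Monte Carlo estimator over an axis-aligned box.
def mc_integrate_3d(f, x0, x1, y0, y1, z0, z1, n, rng):
    volume = (x1 - x0) * (y1 - y0) * (z1 - z0)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(x0, x1), rng.uniform(y0, y1), rng.uniform(z0, z1))
    return volume * total / n

rng = random.Random(0)
# The integral of x*y*z over the unit cube is (1/2)^3 = 1/8.
est = mc_integrate_3d(lambda x, y, z: x * y * z, 0, 1, 0, 1, 0, 1, 200_000, rng)
```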
Direct Lighting (Irradiance) Estimate
[Figure: hemisphere of directions above surface point p, with direction $\omega$ at angle $\theta$ to the normal, differential solid angle $d\omega$, and surface area $dA$]
Sample directions over the hemisphere uniformly in solid angle: $p(\omega) = \frac{1}{2\pi}$
$E(p) = \int L(p, \omega)\cos\theta\,d\omega = 2\pi \int L(p, \omega)\cos\theta\,\frac{1}{2\pi}\,d\omega = 2\pi \int L(p, \omega)\cos\theta\,p(\omega)\,d\omega$
Estimator: $X_i \sim p(\omega), \quad Y_i = L(p, \omega_i)\cos\theta_i, \quad F_N = \frac{2\pi}{N}\sum_{i=1}^{N} Y_i$
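A runnable sketch of this estimator under a simplifying assumption (constant incoming radiance L, which is not from the slides): for uniform solid-angle sampling of the hemisphere, cosθ = z is itself uniform on [0, 1], and the exact irradiance is π·L.

```python
import math
import random

# Sketch: irradiance estimate F_N = (2π/N) Σ L(p, ω_i) cosθ_i with directions
# uniform over the hemisphere. Only cosθ (the z-coordinate) is needed here,
# and for uniform solid-angle sampling it is uniform on [0, 1].
def estimate_irradiance(L, n, rng):
    total = 0.0
    for _ in range(n):
        cos_theta = rng.random()      # z of a uniform hemisphere direction
        total += L * cos_theta
    return 2.0 * math.pi * total / n

rng = random.Random(0)
est = estimate_irradiance(1.0, 100_000, rng)   # exact value: π
```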
Direct Lighting (Irradiance) Estimate
$E(p) = \int L(p, \omega)\cos\theta\,d\omega = 2\pi \int L(p, \omega)\cos\theta\,\frac{1}{2\pi}\,d\omega$
Sample directions over the hemisphere uniformly in solid angle. A ray tracer evaluates radiance along a ray.
Given surface point p, for each of N samples:
• Generate random direction $\omega_i$
• Compute incoming radiance $L_i$ arriving at p from direction $\omega_i$
• Compute incident irradiance due to the ray: $dE_i = L_i \cos\theta_i$
• Accumulate $\frac{2\pi}{N}\,dE_i$ into the estimator
Direct Lighting - Solid Angle Sampling
100 shadow rays per pixel
Light
Blocker
Observation: incoming radiance is zero for most directions in this scene
Idea: integrate only over the area of the light (directions where incoming radiance is non-zero)
Direct Lighting: Area Integral
Change of variables to an integral over the area of the light:
$E(p) = \int_{A'} L_o(p', \omega')\,V(p, p')\,\frac{\cos\theta\,\cos\theta'}{|p - p'|^2}\,dA'$
$L_o(p', \omega')$: outgoing radiance from light point $p'$ in direction $\omega'$ towards $p$
$V(p, p')$: binary visibility function, 1 if $p'$ is visible from $p$, 0 otherwise (accounts for occlusion of the light)
[Figure: light of area $A'$ with point $p'$ and angle $\theta'$; surface point $p$ with angle $\theta$; $\omega = p' - p$, $\omega' = p - p'$]
Change of variables: $d\omega = \frac{dA'\,\cos\theta'}{|p' - p|^2}$
Direct Lighting: Area Integral
$E(p) = \int_{A'} L_o(p', \omega')\,V(p, p')\,\frac{\cos\theta\,\cos\theta'}{|p - p'|^2}\,dA'$
Sample the light’s shape uniformly by area $A'$: $\int_{A'} p(p')\,dA' = 1 \;\Rightarrow\; p(p') = \frac{1}{A'}$
Estimator:
$Y_i = L_o(p'_i, \omega'_i)\,V(p, p'_i)\,\frac{\cos\theta_i\,\cos\theta'_i}{|p - p'_i|^2} \qquad F_N = \frac{A'}{N}\sum_{i=1}^{N} Y_i$
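A runnable sketch of the area-sampling estimator for one concrete, illustrative configuration (the geometry, constant radiance Lo = 1, and V ≡ 1 are assumptions, not from the slides): a unit-area square light at height h = 1 directly above shading point p at the origin, cross-checked against a deterministic Riemann sum of the same integral.

```python
import math
import random

# For a light parallel to the surface at height h, cosθ = cosθ' = h/|p - p'|.
def integrand(x, y, h=1.0):
    r2 = x * x + y * y + h * h
    cos_t = h / math.sqrt(r2)
    return cos_t * cos_t / r2          # Lo·cosθ·cosθ'/|p - p'|² with Lo = 1

def area_sampling_estimate(n, rng):
    total = 0.0
    for _ in range(n):                 # uniform point on the unit-area light
        total += integrand(rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5))
    return total / n                   # (A'/N)·Σ Y_i with A' = 1

def riemann_reference(m=400):
    h = 1.0 / m                        # deterministic midpoint-rule check
    return sum(integrand(-0.5 + (i + 0.5) * h, -0.5 + (j + 0.5) * h) * h * h
               for i in range(m) for j in range(m))

rng = random.Random(0)
mc = area_sampling_estimate(50_000, rng)
ref = riemann_reference()
```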
Direct Lighting - Area Sampling
100 shadow rays per pixel
Random Sampling Introduces Noise
Left: sample center of light. Right: random point on light.
1 sample per pixel
More Random Samples Reduces Noise
Left: 1 shadow ray. Right: 16 shadow rays.
Why Is Area Sampling Better Than Solid Angle?
Left: solid angle sampling. Right: area sampling.
How to Draw Samples From a Desired Probability Distribution?
Cumulative Distribution Function (CDF)
Discrete random variable with $n$ possible values $x_i$ and probabilities $p_i$.
Cumulative PDF: $P_j = \sum_{i=1}^{j} p_i$
where $0 \le P_i \le 1$ and $P_n = 1$
[Figure: the CDF as a staircase rising from 0 to 1 over the values $x_i$]
Sampling from Probability Distribution
To randomly select a value, draw a uniform random variable $\xi \in [0, 1)$ and select $x_i$ if $P_{i-1} < \xi \le P_i$.
[Figure: $\xi$ on the vertical $[0, 1]$ axis of the CDF selects the corresponding value, e.g. $x_2$]
How to compute? Binary search.
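The binary search over the CDF can be sketched with the standard library; a minimal illustration (the example distribution is an arbitrary choice):

```python
import bisect
import itertools
import random

# Sketch of discrete inversion sampling: build the CDF P_j = p_1 + ... + p_j,
# draw ξ uniform in [0, 1), and binary-search for the first j with ξ < P_j.
def sample_discrete(values, probs, rng):
    cdf = list(itertools.accumulate(probs))
    return values[bisect.bisect_right(cdf, rng.random())]

rng = random.Random(0)
samples = [sample_discrete([1, 2, 3], [0.2, 0.3, 0.5], rng) for _ in range(50_000)]
frequency_of_3 = samples.count(3) / len(samples)   # should approach p_3 = 0.5
```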
Continuous Probability Distribution
PDF: $p(x) \ge 0$
CDF: $P(x) = \int_0^x p(\tilde{x})\,d\tilde{x} = \Pr(X < x)$
$\Pr(a \le X \le b) = \int_a^b p(x)\,dx = P(b) - P(a)$
Example: the uniform distribution on the unit interval has $p(x) = 1$ on $[0, 1]$, with $P(1) = 1$.
Sampling Continuous Probability Distributions
Cumulative probability distribution function: $P(x) = \Pr(X < x)$
Construction of samples: draw $\xi$ uniformly in $[0, 1)$, then solve for $x = P^{-1}(\xi)$.
Must know the formula for:
1. The integral of $p(x)$
2. The inverse function $P^{-1}(x)$
This is called the “inversion method”.
Example: Sampling Continuous Distribution
Given: $f(x) = x^2$, $x \in [0, 2]$; we want to sample according to this function.
Compute the PDF, ensuring it is normalized:
$1 = \int_0^2 c\,f(x)\,dx = c\,(F(2) - F(0)) = c \cdot \frac{1}{3} \cdot 2^3 = \frac{8c}{3}$
$\Rightarrow \quad c = \frac{3}{8}, \quad p(x) = \frac{3}{8}x^2$
Example: Sampling Continuous Distribution
Given: $f(x) = x^2$, $x \in [0, 2]$, with $p(x) = \frac{3}{8}x^2$
Compute the CDF:
$P(x) = \int_0^x p(\tilde{x})\,d\tilde{x} = \frac{x^3}{8}$
Example: Sampling Continuous Distribution
Given: $f(x) = x^2$, $x \in [0, 2]$, with $p(x) = \frac{3}{8}x^2$ and $P(x) = \frac{x^3}{8}$
Sample from $p(x)$ by applying the inversion method:
$\xi = P(x) = \frac{x^3}{8} \quad\Rightarrow\quad x = \sqrt[3]{8\xi}$
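The worked example above can be verified numerically; a minimal sketch (the sample count is an arbitrary choice):

```python
import random

# Sketch: inversion-method sampling of p(x) = (3/8)x² on [0, 2].
# CDF P(x) = x³/8, so x = P⁻¹(ξ) = (8ξ)^(1/3).
def sample_p(rng):
    return (8.0 * rng.random()) ** (1.0 / 3.0)

rng = random.Random(0)
n = 100_000
mean = sum(sample_p(rng) for _ in range(n)) / n
# E[X] = ∫₀² x · (3/8)x² dx = (3/32)·2⁴ = 1.5, so the sample mean should be near 1.5.
```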
How to Uniformly Sample The Unit Circle?
Sampling Unit Circle: Simple but Wrong Method
$\theta$ = uniform random angle between 0 and $2\pi$
$r$ = uniform random radius between 0 and 1
Return point: $(r\cos\theta, r\sin\theta)$
[Figure: result vs. reference; the result clusters samples near the center. Source: PBRT]
Need to Sample Uniformly in Area
Incorrect (not equi-areal): $\theta = 2\pi\xi_1$, $r = \xi_2$
Correct (equi-areal): $\theta = 2\pi\xi_1$, $r = \sqrt{\xi_2}$
Sampling Unit Circle (via Inversion in 2D)*
$A = \int_0^{2\pi}\!\!\int_0^1 r\,dr\,d\theta = \int_0^1 r\,dr \int_0^{2\pi} d\theta = \left.\frac{r^2}{2}\right|_0^1 \cdot \left.\theta\right|_0^{2\pi} = \pi$
Uniform in area: $p(r, \theta)\,dr\,d\theta = \frac{1}{\pi}\,r\,dr\,d\theta \;\rightarrow\; p(r, \theta) = \frac{r}{\pi}$
With independent $r$ and $\theta$, $p(r, \theta) = p(r)\,p(\theta)$:
$p(\theta) = \frac{1}{2\pi}, \quad P(\theta) = \frac{\theta}{2\pi} \quad\Rightarrow\quad \theta = 2\pi\xi_1$
$p(r) = 2r, \quad P(r) = r^2 \quad\Rightarrow\quad r = \sqrt{\xi_2}$
* See Shirley et al. p. 331 for another explanation.
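The inversion result can be checked in a few lines; a minimal sketch (the half-area test is an arbitrary uniformity check, not from the slides):

```python
import math
import random

# Sketch: uniform unit-disk sampling via the inversion result θ = 2πξ₁, r = √ξ₂.
def sample_disk(rng):
    theta = 2.0 * math.pi * rng.random()
    r = math.sqrt(rng.random())
    return r * math.cos(theta), r * math.sin(theta)

rng = random.Random(0)
points = [sample_disk(rng) for _ in range(50_000)]
# Uniformity check: the disk of radius sqrt(1/2) holds half the area,
# so about half the samples should land inside it.
inside_half = sum(1 for x, y in points if x * x + y * y <= 0.5) / len(points)
```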
Shirley’s Mapping
r = ⇠1
✓ =⇡⇠24r
Distinct cases for eight octants
Ren Ng, Spring 2016CS184/284A, Lecture 11
Rejection Sampling Circle’s Area
do {
    x = 1 - 2 * rand01();
    y = 1 - 2 * rand01();
} while (x*x + y*y > 1.);
Efficiency of the technique: area of circle / area of square = $\pi/4 \approx 78.5\%$
Rejection Sampling 2D Directions
do {
    x = 1 - 2 * rand01();
    y = 1 - 2 * rand01();
} while (x*x + y*y > 1.);
r = sqrt(x*x + y*y);
x_dir = x/r;
y_dir = y/r;
Generates 2D directions with uniform probability.
Approximate the Area of a Circle
inside = 0;
for (i = 0; i < N; ++i) {
    x = 1 - 2 * rand01();
    y = 1 - 2 * rand01();
    if (x*x + y*y < 1.) ++inside;
}
A = 4. * inside / N;
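The pseudocode above can be run directly; a minimal Python rendering (the sample count is an arbitrary choice):

```python
import random

# Runnable check of the slide's pseudocode: estimate the circle's area by
# counting uniform samples of the enclosing [-1, 1]² square that fall inside.
rng = random.Random(0)
N = 200_000
inside = 0
for _ in range(N):
    x = 1.0 - 2.0 * rng.random()
    y = 1.0 - 2.0 * rng.random()
    if x * x + y * y < 1.0:
        inside += 1
area = 4.0 * inside / N   # converges to π
```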
Uniform Sampling of Hemisphere
Generate a random direction on the hemisphere (all directions equally likely): $p(\omega) = \frac{1}{2\pi}$
Direction computed from a uniformly distributed point $(\xi_1, \xi_2)$ on the 2D unit square:
$(\xi_1, \xi_2) \;\rightarrow\; \left(\sqrt{1 - \xi_1^2}\,\cos(2\pi\xi_2),\;\; \sqrt{1 - \xi_1^2}\,\sin(2\pi\xi_2),\;\; \xi_1\right)$
Exercise to students: derive this from the inversion method.
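The mapping can be sketched and sanity-checked directly (the checks chosen here, unit length and mean cosθ, are illustrative):

```python
import math
import random

# Sketch of the slide's mapping from the unit square to the hemisphere:
# (ξ₁, ξ₂) → (√(1-ξ₁²) cos 2πξ₂, √(1-ξ₁²) sin 2πξ₂, ξ₁).
def sample_hemisphere(rng):
    xi1, xi2 = rng.random(), rng.random()
    s = math.sqrt(max(0.0, 1.0 - xi1 * xi1))   # sinθ, clamped for safety
    phi = 2.0 * math.pi * xi2
    return s * math.cos(phi), s * math.sin(phi), xi1

rng = random.Random(0)
dirs = [sample_hemisphere(rng) for _ in range(10_000)]
mean_z = sum(z for _, _, z in dirs) / len(dirs)   # cosθ is uniform: mean ≈ 0.5
```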
Efficiency
Variance of a Random Variable
Definition:
$V[Y] = E[(Y - E[Y])^2] = E[Y^2] - E[Y]^2$
Properties of variance (the first requires independent $Y_i$):
$V\!\left[\sum_{i=1}^{N} Y_i\right] = \sum_{i=1}^{N} V[Y_i] \qquad V[aY] = a^2\,V[Y]$
Variance decreases linearly with the number of samples:
$V\!\left[\frac{1}{N}\sum_{i=1}^{N} Y_i\right] = \frac{1}{N^2}\sum_{i=1}^{N} V[Y_i] = \frac{1}{N^2}\,N\,V[Y] = \frac{1}{N}\,V[Y]$
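The 1/N falloff can be observed empirically; a minimal sketch (the choice of Y uniform on [0, 1] and the trial counts are illustrative):

```python
import random
import statistics

# Empirical check that the variance of an N-sample mean falls as V[Y]/N.
# Y is uniform on [0, 1], so V[Y] = 1/12 and V[mean of 100] ≈ 1/1200.
def mean_of_n(n, rng):
    return sum(rng.random() for _ in range(n)) / n

rng = random.Random(0)
trials = [mean_of_n(100, rng) for _ in range(20_000)]
var_of_mean = statistics.pvariance(trials)   # ≈ (1/12)/100
```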
More Random Samples Reduces Variance
Left: 1 shadow ray. Right: 16 shadow rays.
Comparing Different Techniques
Variance in an estimator manifests as noise in rendered images. Estimator efficiency measure:
$\text{Efficiency} \propto \frac{1}{\text{Variance} \times \text{Cost}}$
If one integration technique has twice the variance of another, it takes twice as many samples to achieve the same variance. If one technique has twice the cost of another with the same variance, it takes twice as much time to achieve the same variance.
Recall MC Estimator Can Use Non-Uniform PDF
Definite integral: $\int_a^b f(x)\,dx$
Random variable: $X_i \sim p(x)$, with $Y_i = f(X_i)$
Monte Carlo estimator: $F_N = \frac{1}{N}\sum_{i=1}^{N}\frac{f(X_i)}{p(X_i)}$
Importance Sampling
Idea: concentrate the selection of random samples in parts of the domain where the function is large (“high value samples”), i.e., sample according to $p(x) = c\,f(x)$.
Recall the definition of variance, applied to the sample value $\tilde{f}(x) = \frac{f(x)}{p(x)}$:
$V[\tilde{f}] = E[\tilde{f}^2] - E^2[\tilde{f}]$
Writing $E[f] = \int f(x)\,dx$, normalization of the PDF forces $c = 1/E[f]$, so $p(x) = f(x)/E[f]$:
$E[\tilde{f}^2] = \int \left[\frac{f(x)}{p(x)}\right]^2 p(x)\,dx = \int \left[\frac{f(x)}{f(x)/E[f]}\right]^2 \frac{f(x)}{E[f]}\,dx = E[f]\int f(x)\,dx = E^2[f]$
$\Rightarrow \; V[\tilde{f}] = 0$ ?!?
If the PDF is proportional to f, then the variance is zero!
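The zero-variance limit can be demonstrated on a toy integral; a minimal sketch (the integrand ∫₀¹ x² dx = 1/3 and the sample counts are illustrative choices):

```python
import random

# Sketch: importance sampling ∫₀¹ x² dx = 1/3. Uniform sampling has nonzero
# variance; sampling with p(x) = 3x² (proportional to f, drawn by inversion
# x = ξ^(1/3)) makes every sample f(X)/p(X) equal 1/3, so the variance is zero.
def uniform_estimate(n, rng):
    return sum(rng.random() ** 2 for _ in range(n)) / n

def importance_estimate(n, rng):
    total = 0.0
    for _ in range(n):
        xi = 1.0 - rng.random()            # in (0, 1], avoids x = 0
        x = xi ** (1.0 / 3.0)              # inversion for p(x) = 3x²
        total += (x * x) / (3.0 * x * x)   # f(x)/p(x) = 1/3 for every sample
    return total / n

rng = random.Random(0)
uniform_est = uniform_estimate(10_000, rng)
importance_est = importance_estimate(10_000, rng)
```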
Importance Sampling: Area Sampling
Left: hemispherical solid angle sampling, 100 shadow rays (random directions drawn uniformly from the hemisphere). Right: light source area sampling, 100 shadow rays.
Why does the solid-angle-sampled image look “noisy”? The MC estimator uses different random directions at each pixel, and only some directions point towards the light. (The estimator is a random variable.)
With area sampling, if no occlusion is present, all directions chosen in computing the estimate “hit” the light source. (The choice of direction only matters if a portion of the light is occluded from surface point p.)
Random Sampling Introduces Noise
Left: sample center of light. Right: random point on light.
1 sample per pixel
More Random Samples Reduces Noise
Left: 1 shadow ray. Right: 16 shadow rays.
Importance Sampling Reduces Noise
Left: solid angle sampling. Right: area sampling.
Effect of Sampling Distribution “Fit”
[Figure: f(x) together with two candidate sampling PDFs, p1(x) and p2(x)]
What is the behavior of f(x)/p1(x)? Of f(x)/p2(x)? How does this impact the variance of the estimator?
Things to Remember
Monte Carlo integration
• Unbiased estimators
• Good for high-dimensional integrals
• Estimates are visually noisy and need many samples
• Importance sampling can reduce variance (noise) if probability distribution “fits” underlying function
Sampling random variables
• Inversion method, rejection sampling
• Sampling in 1D, 2D, disks, hemispheres
Acknowledgments
Many thanks to Kayvon Fatahalian, Matt Pharr, and Pat Hanrahan, who created the majority of these slides.
Extra
Pseudo-Random Number Generation
Example: cellular automaton Rule 30
http://mathworld.wolfram.com/Rule30.html
Pseudo-Random Number Generation
Example: cellular automaton Rule 30. The center-line values form a random bit sequence, used for the random number generator in Mathematica.
http://mathworld.wolfram.com/Rule30.html
Pseudo-Random Number Generation
Credit: Richard Ling