
Computational Photography

Michael S. Brown

Photography and Imaging


Overview

• Part 1

– Photography Preliminaries

– Traditional Film Imaging (Camera)

• Part 2

– General Imaging

• 5D Plenoptic Function (McMillan)

• 4D Light Fields (Levoy, Gortler)

Photography Preliminaries


Photography in a nutshell

• Focal Length

• Exposure and Aperture

• Depth of Field

• Noise


Light is coming from all directions

From Photography, London et al.

Why is there no image on a piece of white paper?


Pinhole

From Photography, London et al.

We need to ‘focus’ on some selected rays.

One way to do this is to use a ‘pin-hole’.

Such “camera mechanisms” have been known for some time:

Mozi (墨子) - 470 BC

Aristotle – 384 BC

Abu Ali Al-Hasan Ibn al-Haitham – c. 965 AD (Book of Optics)
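A minimal sketch of this geometry (a hedged illustration; the function name and the 50 mm pinhole-to-plane distance are my assumptions, not from the slides). Each scene point maps to exactly one image point behind the hole, which is the ‘selection of rays’ a bare sheet of paper lacks:

```python
# Pinhole projection sketch: a 3D point (X, Y, Z) in front of the hole (Z > 0)
# lands at (-f*X/Z, -f*Y/Z) on an image plane a distance f behind it.
# The minus signs are the inversion a real pinhole image exhibits.

def pinhole_project(point, f=0.05):
    """Project a 3D point (metres) through a pinhole onto a plane f metres behind it."""
    X, Y, Z = point
    if Z <= 0:
        raise ValueError("point must be in front of the pinhole (Z > 0)")
    return (-f * X / Z, -f * Y / Z)

print(pinhole_project((1.0, 0.5, 10.0)))   # (-0.005, -0.0025): small and inverted
```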


Focal Length Examples


Focal length and field of view
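A short sketch of the relation this slide illustrates (the 36 mm full-frame sensor width and the example focal lengths are assumptions of mine): for a lens focused at infinity, FoV = 2 * atan(d / 2f) for a sensor of dimension d and focal length f.

```python
import math

# Field of view from focal length: a longer focal length gives a narrower view.

def horizontal_fov_deg(focal_mm, sensor_width_mm=36.0):
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for f in (24, 50, 200):                     # wide, normal, telephoto examples
    print(f"{f:>3} mm -> {horizontal_fov_deg(f):5.1f} deg")
# 24 mm -> 73.7 deg, 50 mm -> 39.6 deg, 200 mm -> 10.3 deg
```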


Perspective vs. viewpoint

• A small change in viewpoint is a big change in background.

• A telephoto lens can simulate this


Sensor size

• A smaller sensor is similar to cropping the image

Source: Canon red book


Exposure

• Exposure controls how much light hits the camera sensor

• Two ways to control this:

– Aperture: the “hole” in the optical path for the light

– Shutter speed: the time the “hole” is opened


Shutter speed and aperture

• Shutter speed

– Expressed in fractions of a second:

– 1/30, 1/60, 1/125, 1/250, 1/500

– (in reality, 1/32, 1/64, 1/128, 1/256, . . . )

• Aperture

– Expressed as the ratio of focal length to aperture diameter (f-stop)

– f/2.0, f/2.8, f/4, f/5.6, f/8, f/11, f/16, f/22, f/32

– f/X means the focal length is X times the aperture diameter

– Each full f-stop reduces the area of the aperture by half

– So, the larger the f-stop number, the smaller the aperture

We are going to see how these are related in the following slides.
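As a quick numeric check of the f-stop claims above (a sketch under the standard thin-lens conventions; the 50 mm focal length is an arbitrary example):

```python
import math

# Aperture diameter is f/N for f-number N, so area = pi * (f / (2*N))**2.
# Successive standard stops differ by a factor of ~sqrt(2) in N,
# so each one should cut the area roughly in half.

def aperture_area(focal_mm, f_number):
    return math.pi * (focal_mm / (2 * f_number)) ** 2

stops = [2.0, 2.8, 4, 5.6, 8, 11, 16, 22, 32]
areas = [aperture_area(50, N) for N in stops]
for N, a, prev in zip(stops[1:], areas[1:], areas):
    print(f"f/{N}: {a:7.2f} mm^2, ratio to previous stop {a / prev:.2f}")
# every ratio is ~0.5; the wobble (0.47-0.53) comes from the rounded stop labels
```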

Shutter speed and motion

Slow shutter speeds can result in motion blur if the scene isn’t static or if the camera moves or shakes.

Aperture and depth of field


[Figure: sensor/film behind a lens, with a focus plane in the scene.]

Points outside the focal plane diverge on the sensor (circle of confusion). Closing the aperture reduces the circle of confusion, i.e. it expands the depth of field. It also reduces the amount of light.

Aperture controls depth of field (DoF)


Main effect of aperture

Bigger aperture = shallow depth of field.
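A hedged thin-lens sketch of the circle of confusion (my derivation under the thin-lens model, not a formula from the slides): with aperture diameter A = f/N and focus distance s_f, a point at distance s blurs to a circle of diameter c = A * f * |s - s_f| / (s * (s_f - f)). Stopping down (larger N) shrinks c and so deepens the DoF.

```python
# Compare the blur circle at a wide and a stopped-down aperture.

def circle_of_confusion_mm(f_mm, N, focus_mm, subject_mm):
    A = f_mm / N                              # aperture diameter
    return A * f_mm * abs(subject_mm - focus_mm) / (subject_mm * (focus_mm - f_mm))

for N in (2.0, 8.0):
    c = circle_of_confusion_mm(f_mm=50, N=N, focus_mm=2000, subject_mm=3000)
    print(f"f/{N}: blur circle {c:.3f} mm")  # ~0.214 mm at f/2, ~0.053 mm at f/8
```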


Exposure

• The play between f-stop and shutter:

– Aperture (in f stop)

– Shutter speed (in fraction of a second)

• Reciprocity

The same exposure is obtained with an exposure time twice as long and an aperture area half as big.

From Photography, London et al. Slide from Fredo Durand
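A numeric check of reciprocity (the exposure model exposure ∝ t / N² and the EV summary are standard photographic conventions I am assuming, not text from the slide):

```python
import math

# Equal exposure value EV = log2(N**2 / t) means equal light on the sensor,
# so each of these aperture/shutter pairs admits the same exposure.

def ev(N, t):
    return math.log2(N ** 2 / t)

pairs = [(2.8, 1/500), (4, 1/250), (5.6, 1/125), (8, 1/60)]
for N, t in pairs:
    print(f"f/{N} at 1/{round(1 / t)} s -> EV {ev(N, t):.1f}")
# all four come out at ~12 EV; the tiny spread is from rounded stop labels
# (f/2.8 is really sqrt(8), 1/60 is really 1/64)
```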


Reciprocity (cont’d)

• Assume we know how much light we need

• We have the choice of an infinite number of shutter speed/aperture pairs

• What will guide our choice of a shutter speed?

– Freeze motion vs. motion blur, camera shake

• What will guide our choice of an aperture?

– Depth of field

• Often we must compromise

– Open up the aperture to enable a faster speed (but shallower DoF)

Slide from Fredo Durand


From Photography, London et al.

Note trade-off in DoF for motion blur.


CCD sensitivity (ISO) and noise

• One solution to low exposure from a fast shutter speed is to increase the camera’s CCD signal (i.e. gain the signal)

• This is analogous to film ISO sensitivity

– ISO 100 (slow film), ISO 1600 (fast film, 16x more sensitive)

• The drawback?

Amplifying the CCD signal amplifies the sensor noise!

Source: http://www.bobatkins.com
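A toy model of this trade-off (the read-noise level, the gain values, and the model itself are illustrative assumptions): gain multiplies signal and noise together, so it brightens the image without improving the signal-to-noise ratio.

```python
import random

random.seed(0)

def capture(photons, gain=1.0, read_noise_sigma=2.0):
    """One pixel: photon count plus sensor read noise, then ISO gain."""
    return gain * (photons + random.gauss(0.0, read_noise_sigma))

dim_high_iso = [capture(10, gain=16.0) for _ in range(5)]    # ISO-1600-style: bright but noisy
bright_low_iso = [capture(160, gain=1.0) for _ in range(5)]  # same brightness from real light
print([round(v) for v in dim_high_iso])                      # values swing widely
print([round(v) for v in bright_low_iso])                    # values cluster near 160
```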

Photography Equation

• Focal length (and position)

– Controls view/zoom

• Finessing motion blur, noise, and DoF

– Trade-off between shutter speed and aperture


Camera settings                                       Motion Blur Artifacts   DoF      Noise
fast shutter speed, wide aperture, low ISO (gain)     No                      Narrow   No
slow shutter speed, small aperture, low ISO (gain)    Yes                     Wide     No
fast shutter speed, small aperture, high ISO (gain)   No                      Wide     Yes

General Imaging


Part 2: General Imaging

• Camera images are single “2D snapshots”

• Captured at a fixed viewing location

• Are there better ways to think about 3D scenes in terms of images?

• Better representations?


5D Plenoptic Sample

www.cs.unc.edu/~mcmillan/papers/sig95_mcmillan.pdf

All light rays entering a 3D point (Vx, Vy, Vz) can be parameterized by Φ and θ.


5D Plenoptic Sample

www.cs.unc.edu/~mcmillan/papers/sig95_mcmillan.pdf

A camera image is a good approximation of a portion of a plenoptic sample. We need to somehow know its position and orientation.
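A sketch of that correspondence (the pinhole model and all names here are illustrative assumptions, not McMillan’s code): the camera’s position supplies (Vx, Vy, Vz), and each pixel, through the camera’s intrinsics, supplies one direction (θ, Φ) of the plenoptic function.

```python
import math

def pixel_to_direction(x, y, f):
    """Map image-plane coordinates (x, y) and focal length f to (theta, phi)."""
    dx, dy, dz = x, y, f                      # ray from the pinhole through the pixel
    r = math.sqrt(dx * dx + dy * dy + dz * dz)
    theta = math.acos(dz / r)                 # angle from the optical axis
    phi = math.atan2(dy, dx)                  # azimuth around it
    return theta, phi

V = (0.0, 0.0, 0.0)                           # the sample position (Vx, Vy, Vz)
print(V, pixel_to_direction(0.01, 0.0, 0.05)) # one ray of the 5D function
```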


5D Plenoptic Samples

• So, imagine that you could make dense plenoptic samples over some 3D space

[Figure: plenoptic samples scattered through an x, y, z volume.]


5D Plenoptic Samples

• Now you want to create a ‘novel’ view

[Figure: a new viewpoint placed among the stored plenoptic samples.]

Making an image from a new view is a matter of “interpolating” from the other samples.
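A minimal sketch of that interpolation (nearest-neighbour lookup for brevity; a real system blends several nearby samples, and every structure here is a stand-in):

```python
# `samples` maps a 3D position to a radiance function over directions.

def render_ray(samples, position, direction):
    """Radiance along one ray of a novel view: query the closest stored sample."""
    nearest = min(samples, key=lambda p: sum((a - b) ** 2
                                             for a, b in zip(p, position)))
    return samples[nearest](direction)

samples = {
    (0.0, 0.0, 0.0): lambda d: 0.2,           # stand-ins for captured samples
    (1.0, 0.0, 0.0): lambda d: 0.8,
}
print(render_ray(samples, (0.2, 0.0, 0.0), (0.0, 0.0, 1.0)))   # -> 0.2
```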


Variations on Plenoptic Samples

• Sweep, strip, or slit cameras

– These create multiple-center-of-projection images

– Imagine the camera captures only 1 column of pixels

http://www.cs.unc.edu/~rademach/mcop98.html


Surveillance Cameras

Slit cameras are used in satellites and aerial photography

www.cs.huji.ac.il/~peleg/papers/cvpr97-manifold.pdf

With a hand-held camera


From 5D to 4D Light-Field

• Lumigraph/4D Light-field

– Assume you are “outside” the space of 3D objects

[Figure: a (u,v) plane and an (s,t) plane with rays passing through both.]

For each (u,v) there is a bundle of possible rays coming into this point. These rays are parameterized by (s,t). This does not mean there are only 2 images for a light field: there is a full image (s,t) for each pixel (u,v), resulting in a 4D function L(u,v,s,t). Call this a light slab.

http://graphics.stanford.edu/papers/light/
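Concretely, a light slab can be held as a plain 4D (plus colour) array; the resolutions and layout below are illustrative assumptions, not the paper’s format:

```python
import numpy as np

U, V, S, T = 16, 16, 64, 64                  # (u,v) plane and (s,t) plane resolutions
L = np.zeros((U, V, S, T, 3), dtype=np.float32)   # RGB radiance per ray

bundle = L[3, 7]        # all rays through (u=3, v=7): a full S x T x 3 image
ray = L[3, 7, 20, 41]   # one ray of the slab, i.e. one RGB radiance value
print(bundle.shape, ray.shape)               # (64, 64, 3) (3,)
```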


4D Light-field

• For a fixed viewpoint, we can calculate which rays to “show”

– That is, (u,v) and its associated (s,t) for that view

– We can generate the view for image (x,y)
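A sketch of that calculation (planes at z = 0 and z = 1, plane coordinates normalized to [0, 1], and nearest-neighbour lookup are simplifying assumptions; Levoy and Hanrahan interpolate in all four dimensions):

```python
import numpy as np

def sample_light_field(L, eye, direction):
    """For one pixel's ray, intersect both planes and index the slab there."""
    ex, ey, ez = eye
    dx, dy, dz = direction
    u = ex + (0.0 - ez) / dz * dx; v = ey + (0.0 - ez) / dz * dy   # u,v plane (z = 0)
    s = ex + (1.0 - ez) / dz * dx; t = ey + (1.0 - ez) / dz * dy   # s,t plane (z = 1)
    U, V, S, T, _ = L.shape
    return L[round(u * (U - 1)), round(v * (V - 1)),
             round(s * (S - 1)), round(t * (T - 1))]

L = np.random.rand(16, 16, 64, 64, 3).astype(np.float32)
print(sample_light_field(L, eye=(0.5, 0.5, -1.0), direction=(0.1, 0.0, 1.0)))
```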


From u,v to s,t looks like lots of images from slightly different perspectives.

From s,t to u,v looks like the surface of the scene’s material as it would scatter light in space.


Capturing 4D-Light Fields

An array of cameras!

Data is huge, but highly redundant (compresses well)


4D Illumination Field

Same idea, but to represent illumination falling onto a scene.

Light parameterized by (u,v) illuminates in all directions* parameterized by (s,t)

* All directions in a half-plane


4D Illumination Field

Generating an “Illumination field”.


Put them together:

8D Reflectance Field

Now, for each possible ray in the 4D Light Field, we have its response to a 4D Illumination Field! – Huge amount of data.

And this is for a static scene.
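One consequence worth sketching: light transport is linear, so once the scene’s response to each basis light is captured, any new illumination is just a weighted sum of the basis images (the idea behind relighting work such as the page linked below; the array shapes and weights here are stand-ins):

```python
import numpy as np

n_lights, h, w = 32, 4, 4                     # tiny stand-in for real capture data
basis = np.random.rand(n_lights, h, w, 3)     # one image per basis light
weights = np.random.rand(n_lights)            # intensity of each light in the new condition

relit = np.tensordot(weights, basis, axes=1)  # weighted sum over the lights
print(relit.shape)                            # (4, 4, 3): the relit image
```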


Reflectance Fields

http://gl.ict.usc.edu/Films/RelightingHumanLocomotion/index.html


Summary

• This lecture covers the preliminaries for Computational Photography

– Introduction to the traditional camera and its associated terminology and uses

– Introduction to some relatively new ideas on how to think beyond the camera for image representation

– Plenoptic Function, Light Field, Illumination Field, Reflectance Field

• NEXT?

– Background on image processing . . .