
Lighting affects appearance

Lightness

Digression from boundary detection

Vision is about recovery of properties of scenes: lightness is about recovering material properties.

Simplest is how light or dark a material is (i.e., its reflectance).

We’ll see how boundaries are critical in solving other vision problems.

Basic problem of lightness

Luminance (the amount of light striking the eye) depends on illuminance (the amount of light striking the surface) as well as on reflectance.

[Figure: two surface patches, A and B, under a light source.]

Is B darker than A because it reflects a smaller proportion of light, or because it's further from the light?

Planar, Lambertian material.

L = r · e · cos(θ)

where r is the reflectance (aka albedo), θ is the angle between the light direction and the surface normal n, and e is the illuminance (strength of the light).

If we combine cos(θ) and e at a point into E(x,y), then:

L(x,y) = R(x,y)*E(x,y)


Can think of E as appearance of white paper with given illuminance.

R is appearance of planar object under constant lighting.

L is what we see.

Problem: We measure L, we want to recover R. How is this possible?

Answer: We must make additional assumptions.

Simultaneous contrast effect

Illusions

Seems like visual system is making a mistake.

But, perhaps visual system is making assumptions to solve underconstrained problem; illusions are artificial stimuli that reveal these assumptions.

Assumptions

Light is slowly varying. This is reasonable for a planar world: nearby image points come from nearby scene points with the same surface normal.

Within an object reflectance is constant or slowly varying.

Between objects, reflectance varies suddenly.

This is sometimes called the Mondrian world.

L(x,y) = R(x,y)*E(x,y)

• Formally, we assume that illuminance, E, is low frequency.

[Figure: reflectance image R, multiplied pointwise by illuminance E, gives the observed image L.]

Smooth variations in the image are due to lighting, sharp ones due to reflectance.

So, we remove slow variations from the image. There are many approaches to this. One is:

• log(L(x,y)) = log(R(x,y)) + log(E(x,y))

• High-pass filter this (say, with a derivative).

• Why is the derivative a high-pass filter?

d sin(nx)/dx = n·cos(nx): frequency n is amplified by a factor of n.

• Threshold to remove small (low-frequency) variations.

• Then invert the process: integrate and exponentiate.

[Figure: 1-D example — reflectances; reflectances × lighting; restored reflectances.]

(Note that the overall scale of the reflectances is lost because we take a derivative and then integrate.)
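A minimal 1-D sketch of this pipeline (the signal, grid, and threshold value below are made up for illustration; NumPy only):

```python
import numpy as np

# Hypothetical 1-D "Mondrian" world: piecewise-constant reflectance
# under slowly varying illuminance.
x = np.linspace(0, 1, 500)
R = np.where(x < 0.3, 0.2, np.where(x < 0.7, 0.8, 0.4))   # reflectance (albedo)
E = 0.5 + 0.4 * np.sin(2 * np.pi * x)                     # smooth illuminance
L = R * E                                                  # measured luminance

# 1. Take logs: log L = log R + log E.
logL = np.log(L)

# 2. High-pass filter with a discrete derivative.
d = np.diff(logL)

# 3. Threshold: discard small derivatives (attributed to lighting).
d_thresh = np.where(np.abs(d) > 0.1, d, 0.0)

# 4. Invert the process: integrate, then exponentiate. The overall scale
#    is lost, so the result is reflectance up to a multiplicative constant.
logR_rec = np.concatenate([[0.0], np.cumsum(d_thresh)])
R_rec = np.exp(logR_rec)

print(np.round(R_rec[[0, 250, 450]], 2))  # roughly [1., 4., 2.]: R up to scale
```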

These operations are easy in 1D, tricky in 2D.

• For example, in which direction do you integrate?

Many techniques exist.

These approaches fail on 3D objects, where illuminance can change quickly as well.

Our perceptions are influenced by 3D cues.

To solve this, we need to compute reflectance over the right region. This means that lightness depends on surface perception, i.e., a different kind of boundary detection.

What is the Question? (based on work of Basri and Jacobs, ICCV 2001)

Given an object described by its normal at each surface point and its albedo (we will focus on Lambertian surfaces)

1. What is the dimension of the space of images that this object can generate under any set of lighting conditions?

2. How can we generate a basis for this space?

Empirical Study

Fraction of image energy (%) captured by a linear subspace of the given dimension, for four objects:

Dimension   Parrot   Phone   Face   Ball
#1          42.8     67.9    53.7   48.2
#3          76.3     88.2    90.2   94.4
#5          84.7     94.1    93.5   97.9
#7          88.5     96.3    95.3   99.1
#9          90.7     97.2    96.3   99.5

(Epstein, Hallinan and Yuille; see also Hallinan; Belhumeur and Kriegman)

Domain

Lambertian

No cast shadows

(“convex” objects)

Lights are distant

Lambert's Law

k(θ) = max(cos θ, 0)

where θ is the angle between the surface normal n and the light direction l.
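A small sketch of Lambert's law at a single surface point (the normal, light direction, albedo, and illuminance below are made-up example values):

```python
import numpy as np

def lambertian_intensity(normal, light_dir, albedo, illuminance):
    """Lambert's law: I = albedo * illuminance * max(cos(theta), 0),
    where theta is the angle between the surface normal and the light."""
    n = np.array(normal, dtype=float)
    l = np.array(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return albedo * illuminance * max(np.dot(n, l), 0.0)

# Hypothetical example: light 60 degrees off the normal.
print(lambertian_intensity([0, 0, 1], [np.sin(np.pi/3), 0, np.cos(np.pi/3)],
                           albedo=0.8, illuminance=1.0))  # 0.8 * cos(60 deg) = 0.4
```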

Lighting to Reflectance: Intuition

[Figure: three point-light sources l illuminating a sphere, and the resulting reflectance function r; profiles of l and r.]

Lighting to Reflectance: Intuition

The images an object can produce: the reflectance function r is obtained by convolving the lighting ℓ with the Lambertian kernel k over the sphere of directions:

r(θ_r, φ_r) = ∫_{S²} k(θ_r, φ_r; θ_l, φ_l) · ℓ(θ_l, φ_l) · sin θ_l dθ_l dφ_l

where k(θ_r, φ_r; θ_l, φ_l) = max(cos θ_rl, 0) and θ_rl is the angle between the directions (θ_r, φ_r) and (θ_l, φ_l). Images of the object are samples of r at the visible surface normals, scaled by albedo.

Spherical Harmonics (S.H.)

Orthonormal basis, Y_nm, for functions on the sphere.

n’th order harmonics have 2n+1 components.

Rotation = phase shift (same n, different m).

In space coordinates: polynomials of degree n.

Y_nm(θ, φ) = sqrt( (2n+1)/(4π) · (n−m)!/(n+m)! ) · P_nm(cos θ) · e^{imφ}

where the associated Legendre function is

P_nm(z) = ( (1 − z²)^{m/2} / (2^n · n!) ) · d^{n+m}/dz^{n+m} (z² − 1)^n
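A quick numerical sanity check of this orthonormality, using SciPy's sph_harm (note SciPy's argument order is (m, n, azimuth, polar)); the grid resolution and the orders tested are arbitrary choices for illustration:

```python
import numpy as np
from scipy.special import sph_harm  # SciPy convention: sph_harm(m, n, azimuth, polar)

# Verify numerically that the Y_nm are orthonormal: the integral over the
# sphere of Y_nm * conj(Y_n'm') is 1 when (n, m) = (n', m') and 0 otherwise.
polar = np.linspace(0.0, np.pi, 400)
azimuth = np.linspace(0.0, 2.0 * np.pi, 800)
P, A = np.meshgrid(polar, azimuth, indexing="ij")
dOmega = np.sin(P) * (polar[1] - polar[0]) * (azimuth[1] - azimuth[0])

def inner(n1, m1, n2, m2):
    Y1 = sph_harm(m1, n1, A, P)
    Y2 = sph_harm(m2, n2, A, P)
    return np.sum(Y1 * np.conj(Y2) * dOmega)

print(abs(inner(2, 1, 2, 1)))  # ~1.0  (normalized)
print(abs(inner(2, 1, 1, 0)))  # ~0.0  (orthogonal)
```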

S.H. analog to convolution theorem

• Funk-Hecke theorem: "convolution" in the function domain is multiplication in the spherical harmonic domain.

k ∗ Y_nm = sqrt( 4π / (2n+1) ) · k_n · Y_nm

where k_n is the n-th order harmonic coefficient of the kernel k.

Harmonic Transform of Kernel

k(θ) = max(cos θ, 0) = Σ_n k_n Y_n0(θ), with

k_n = sqrt(π)/2                                                                    for n = 0
k_n = sqrt(π/3)                                                                    for n = 1
k_n = (−1)^{n/2+1} · (n−2)! · sqrt((2n+1)π) / ( 2^n · (n/2 − 1)! · (n/2 + 1)! )    for n ≥ 2, n even
k_n = 0                                                                            for n ≥ 2, n odd

Amplitudes of Kernel

The n-th order amplitude is A_n = sqrt( (1/(2n+1)) · Σ_m k_nm² ). For the Lambertian kernel:

A_0 ≈ 0.886, A_1 ≈ 0.591, A_2 ≈ 0.222, A_4 ≈ 0.037, A_6 ≈ 0.014, A_8 ≈ 0.007

(odd orders above 1 vanish).

Energy of Lambertian Kernel in low order harmonics

Accumulated energy (%) through order n:

n:        0      1      2      4      6      8
energy:   37.5   87.5   99.2   99.81  99.93  99.97
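A short numerical check (not part of the original slides) that reproduces both the closed-form coefficients k_n and the accumulated-energy figures above, by projecting max(cos θ, 0) onto the Y_n0; it assumes SciPy's sph_harm with its (m, n, azimuth, polar) argument order:

```python
import numpy as np
from scipy.special import sph_harm  # sph_harm(m, n, azimuth, polar)

# Project the Lambertian kernel k(theta) = max(cos(theta), 0) onto Y_n0.
theta = np.linspace(0.0, np.pi, 20000)                # polar angle
w = np.sin(theta) * (theta[1] - theta[0]) * 2*np.pi   # dOmega (azimuth gives 2*pi)
k = np.maximum(np.cos(theta), 0.0)

k_n = np.array([np.sum(k * sph_harm(0, n, 0.0, theta).real * w)
                for n in range(9)])
print(np.round(k_n, 4))
# ~[0.8862 1.0233 0.4954 0. -0.1108 0. 0.0499 0. -0.0285]

# Fraction of the kernel's energy captured through order n (reaches ~99.2% at n = 2).
total_energy = np.sum(k**2 * w)                       # equals 2*pi/3
print(np.round(100 * np.cumsum(k_n**2) / total_energy, 2))
# ~37.5, 87.5, 99.2, ..., 99.97
```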

k is a low-pass filter:

k(θ) = max(cos θ, 0) ≈ k_0 Y_00 + k_1 Y_10(θ) + k_2 Y_20(θ) = 1/4 + (1/2) cos θ + (5/32)(3cos²θ − 1)

Reflectance Functions Near Low-dimensional Linear Subspace

By the Funk-Hecke theorem, the reflectance function is

r = k ∗ ℓ = Σ_{n=0..∞} Σ_{m=−n..n} sqrt(4π/(2n+1)) · k_n · l_nm · Y_nm ≈ Σ_{n=0..2} Σ_{m=−n..n} sqrt(4π/(2n+1)) · k_n · l_nm · Y_nm

Since k is a low-pass filter, truncating at n = 2 loses little of the energy.

Yields a 9D linear subspace (1 + 3 + 5 = 9 harmonics for n ≤ 2).

Forming Harmonic Images

b_nm(p) = λ(p) · r_nm(X, Y, Z)

where (X, Y, Z) is the surface normal at point p and λ is the albedo. Up to constant factors, the nine harmonic images are:

λ;  λZ, λX, λY;  λ(2Z² − X² − Y²), λXZ, λYZ, λ(X² − Y²), λXY
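A sketch of how these nine harmonic images could be assembled from per-point normals and albedos (constant scale factors of each harmonic are dropped, since they can be absorbed into the lighting coefficients; the sample normals and albedos are made up):

```python
import numpy as np

def harmonic_images(normals, albedo):
    """Nine harmonic images b_nm(p) = albedo(p) * r_nm(normal(p)),
    given unit normals (N x 3) and albedo (N,). Constant scale factors
    of each harmonic are omitted here."""
    X, Y, Z = normals[:, 0], normals[:, 1], normals[:, 2]
    lam = albedo
    B = np.stack([
        lam * np.ones_like(Z),         # n = 0
        lam * Z, lam * X, lam * Y,     # n = 1
        lam * (2*Z**2 - X**2 - Y**2),  # n = 2
        lam * X * Z, lam * Y * Z,
        lam * (X**2 - Y**2), lam * X * Y,
    ], axis=1)                          # N x 9 matrix
    return B

# Hypothetical example: two surface points.
normals = np.array([[0.0, 0.0, 1.0], [0.6, 0.0, 0.8]])
albedo = np.array([0.9, 0.5])
print(harmonic_images(normals, albedo).shape)  # (2, 9)
```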

How accurate is the approximation? Point light source.

[Figure: amplitude of k and amplitude of ℓ (a point source), by harmonic order n = 0, 1, 2, 4, 6, 8.]

9D space captures 99.2% of energy

How accurate is the approximation? Worst case.

r = k ∗ ℓ = Σ_{n,m} sqrt(4π/(2n+1)) · k_n · l_nm · Y_nm ≈ Σ_{n=0..2} Σ_{m=−n..n} sqrt(4π/(2n+1)) · k_n · l_nm · Y_nm

9D space captures 98% of energy.

The DC component is as big as any other; the 1st and 2nd harmonics of the light could have zero energy.

[Figure: amplitude of k and amplitude of ℓ, by harmonic order n.]

How Accurate Is Approximation?

Accuracy depends on lighting.

For point source: 9D space captures 99.2% of energy

For any lighting: 9D space captures >98% of energy.

Accuracy of Approximation of Images

Normals are present in varying amounts.

Albedo makes some pixels more important than others.

The worst-case approximation can be arbitrarily bad.

The "average" case approximation should be good.

Summary

Convex, Lambertian objects: 9D linear space captures >98% of reflectance.

Explains previous empirical results.

For handling lighting, this justifies low-dimensional methods.

Recognition

[Diagram: each 3D model is converted to its harmonic images, stacked as a matrix B; the query image is a vector I; after finding the pose, I is compared against each model's B.]

Experiments (Basri & Jacobs)

3D models of 42 faces acquired with a scanner.

30 query images for each of 10 faces (300 images).

Pose automatically computed using manually selected features (Blicher and Roy).

Best lighting found for each model; the best-fitting model wins.
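A sketch of that last step, assuming each model supplies a harmonic-image matrix B (num_pixels × 9) already rendered at the query's pose and the query image I is flattened to a vector; the function and variable names here are hypothetical, not the authors' code:

```python
import numpy as np

def model_fit_error(B, I):
    """Find the 9D lighting coefficients a that best explain I, i.e. minimize
    ||B a - I||, and return the residual of the fit."""
    a, *_ = np.linalg.lstsq(B, I, rcond=None)
    return np.linalg.norm(B @ a - I)

def recognize(models, I):
    """models: dict mapping model id -> B matrix; the best-fitting model wins."""
    return min(models, key=lambda name: model_fit_error(models[name], I))
```

The 9D non-negative light variant additionally constrains the recovered lighting to be non-negative; the sketch above corresponds to the unconstrained 9D linear method.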

Results

9D Linear Method: 90% correct.

9D Non-negative light: 88% correct.

Ongoing work: Most errors seem due to pose problems. With better poses, results seem near 100%.