

Visual Perception of Surface Materials

Roland W. Fleming
Max Planck Institute for Biological Cybernetics

2

3

Different materials, such as quartz, satin and chocolate have distinctive visual appearances

Without touching an object we usually have a clear impression of what it would feel like: hard or soft; wet or dry; rough or smooth

Visual Perception of Material Qualities

4

The appearance of “stuff” was a major preoccupation in 17th-century Dutch art: the sensual quality of things; stasis and contemplation in still life.

Contrast with the Italian Renaissance: perspective and the spatial arrangement of things; drama, events and dynamism.

Willem Kalf, detail from Still Life with Silver Jug, Rijksmuseum, Amsterdam

5

Material-ism

6

Everyday life

7

The state of things

8

Edibility

9

Edibility

10

Usability

11

Dung

12

Materials in Computer Graphics

Hollywood and the games industry know that photorealism means getting materials right

Henrik Wann Jensen, Steve Marschner and Pat Hanrahan won a technical Oscar in 2004 for their work on sub-surface scattering

Henrik Wann Jensen and Craig Donner

13

What gives a material its characteristic appearance?

How does the human brain identify materials across a wide variety of viewing conditions?

Research Questions

14

Visual estimation of surface reflectance properties: gloss

Perception of materials that transmit light: refraction, sub-surface scattering

Exploiting the assumptions of the visual system to edit the material appearance of objects in photographs

Outline

15

Surface Reflectance

These spheres look different because they have different surface reflectance properties.

Everyday language: colour, gloss, lustre, etc.

Images: Ron Dror

16

Surface Reflectance

Visual system’s goal: estimate the BRDF

f(θi, φi; θr, φr)

Images: Ron Dror

17

Confounding Effects of Illumination

Identical materials can lead to very different images

Different materials can lead to very similar images

18

Ambiguity between Reflectance and Illumination

Amplitude spectrum: a(f) = 1/f^α

α = 0, 0.4, 0.8, 1.2, 1.6, 2.0, 4.0, 8.0

19

Hypothesis

Humans exploit statistical regularities of real-world illumination in order to eliminate unlikely image interpretations.

20

Real-world Illumination

Directly from luminous sources

Indirectly, reflected from other surfaces

Illumination at a point in space: amount of light arriving from every direction.

21

Photographically captured light probes (Debevec et al., 2000)

Panoramic projection

22

Statistics of typical illumination (Dror et al., 2004)

Intensity histogram is heavily skewed; few direct light sources

23

Statistics of typical illumination (Dror et al., 2004)

Typical 1/f amplitude spectrum
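The 1/f amplitude-spectrum statistic can be measured directly from an image. The sketch below (not from the talk; `amplitude_spectrum_slope` is an illustrative helper) estimates the exponent α by a log-log fit to the radially averaged Fourier amplitude; natural images and illumination maps typically give α near 1, while white noise gives α near 0.

```python
import numpy as np

def amplitude_spectrum_slope(image):
    """Estimate the exponent alpha of a ~1/f^alpha amplitude spectrum
    via a log-log linear fit over radially averaged frequencies."""
    amp = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    # Radially average the amplitude spectrum (bin by integer radius).
    radial = np.bincount(r.ravel(), weights=amp.ravel()) / np.bincount(r.ravel())
    freqs = np.arange(1, min(h, w) // 2)  # skip the DC term
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return -slope  # alpha

# White noise has a flat amplitude spectrum, so alpha should be near 0.
rng = np.random.default_rng(0)
white = rng.standard_normal((256, 256))
print(round(amplitude_spectrum_slope(white), 2))
```

A 1/f (pink) noise image run through the same function should give a value near 1 instead.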

24

Hypothesis

Humans exploit statistical regularities of real-world illumination in order to eliminate unlikely image interpretations.

25

Dismissing unlikely interpretations

Blurry feature

Two interpretations:
Sharp reflection, blurry world
Blurry reflection, sharp world

But the world usually isn’t blurry! Therefore it is probably a blurry reflection.

26

Dismissing unlikely interpretations

In practice, unlikely image interpretations do not need to be explicitly entertained

Under normal illumination conditions, different materials give rise to diagnostic image features that are responsible for their ‘look’.

The brain doesn’t need to model the physics; it just needs to look out for tell-tale image features.

27

Context has surprisingly little effect on apparent gloss

Observations

28

Findings

Subjects are good at judging surface reflectance across real-world illuminations: ‘gloss constancy’

Beach light probe
Campus light probe
Single point source
Gaussian 1/f noise

Subjects are poorer at estimating gloss under illuminations with atypical statistics.

29

Illuminations have skewed intensity histograms

[Histograms: number of pixels N(I) vs. intensity I, with heavy tails extending beyond the plotted range]

30

Illumination distribution is important

[Campus and pink-noise illuminations, original vs. modified intensity histograms: N(I) vs. intensity I]

31

Histograms aren’t everything

White noise with histogram of Campus / Campus original

32

Heeger-Bergen texture synthesis

Input texture / Synthesized texture

Treat illumination maps as if they are stochastic texture.

Taken from Pyramid-Based Texture Analysis/Synthesis (Heeger & Bergen, 1995)

33

Wavelet Statistics

Beach, Building, Campus, Eucalyptus, Grace, Kitchen, St. Peter’s, Uffizi

Synthetic illuminations with the same wavelet statistics as real-world illuminations

34

Visual estimation of surface reflectance properties: gloss

Perception of materials that transmit light: refraction, sub-surface scattering

Exploiting the assumptions of the visual system to edit the material appearance of objects in photographs

Outline

35

Previous work on materials that transmit light

Adelson, E.H. (1993). Perceptual organization and the judgment of brightness. Science, 262(5142): 2042-2044.
Adelson, E.H. (1999). Lightness perception and lightness illusions. In The New Cognitive Neurosciences, 2nd edn. (ed. Gazzaniga, M.S.), 339-351. MIT Press, Cambridge, Massachusetts.
Adelson, E.H., and Anandan, P. (1990). Ordinal characteristics of transparency. Proceedings of the AAAI-90 Workshop on Qualitative Vision, 77-81.
Anderson, B.L. (1997). A theory of illusory lightness and transparency in monocular and binocular images: the role of contour junctions. Perception, 26(4): 419-453.
Anderson, B.L. (1999). Stereoscopic surface perception. Neuron, 24: 919-928.
Anderson, B.L. (2001). Contrasting theories of White's illusion. Perception, 30: 1499-1501.
Anderson, B.L. (2003). The role of occlusion in the perception of depth, lightness, and opacity. Psychological Review, 110(4): 785-801.
Anderson, B.L. (2003). Perceptual organization and White's illusion. Perception, 32(3): 269-284.
Beck, J. (1978). Additive and subtractive color mixture in color transparency. Perception & Psychophysics, 23(3): 265-267.
Beck, J., Prazdny, K., and Ivry, R. (1984). The perception of transparency with achromatic colors. Perception & Psychophysics, 35(5): 407-422.
Beck, J., and Ivry, R. (1988). On the role of figural organization in perceptual transparency. Perception & Psychophysics, 44(6): 585-594.
Chen, J.V. & D'Zmura, M. (1998). Test of a convergence model for color transparency perception. Perception, 27: 595-608.
D'Zmura, M., Colantoni, P., Knoblauch, K. & Laget, B. (1997). Color transparency. Perception, 26: 471-492.
D'Zmura, M., Rinner, O., & Gegenfurtner, K.R. (2000). The colors seen behind transparent filters. Perception, 29: 911-926.
Gerbino, W., Stultiens, C.I., Troost, J.M. and de Weert, C.M. (1990). Transparent layer constancy. Journal of Experimental Psychology: Human Perception and Performance, 16(1): 3-20.
Gerbino, W. (1994). Achromatic transparency. In A.L. Gilchrist (Ed.), Lightness, Brightness and Transparency (pp. 215-255). Hove, England: Lawrence Erlbaum.
Heider, G.M. (1933). New studies in transparency, form and color. Psychologische Forschung, 17: 13-56.
Kanizsa, G. (1955). Condizioni ed effetti della trasparenza fenomenica. Rivista di Psicologia, 49: 3-19.
Kersten, D. (1991). Transparency and the cooperative computation of scene attributes. In Landy, M. and Movshon, A. (Eds.), Computational Models of Visual Processing. MIT Press.
Kersten, D. and Bülthoff, H. (1991). Transparency affects perception of structure from motion. Massachusetts Institute of Technology, Center for Biological Information Processing Tech. Memo 36.
Kersten, D., Bülthoff, H.H., Schwartz, B., & Kurtz, K. (1992). Interaction between transparency and structure from motion. Neural Computation, 4(4): 573-589.
Khang, B.-G., and Zaidi, Q. (2002). Cues and strategies for color constancy: perceptual scission, image junctions and transformational color matching. Vision Research, 42(2): 211-226. Erratum in: Vision Research, 42(24): 2729.
Khang, B.-G., & Zaidi, Q. (2002). Accuracy of color scission for spectral transparencies. Journal of Vision, 2(6): 451-466, doi:10.1167/2.6.3.
Koffka, K. (1935). Principles of Gestalt Psychology. Cleveland: Harcourt, Brace and World.
Lindsey, D.T., and Todd, J.T. (1996). On the relative contributions of motion energy and transparency to the perception of moving plaids. Vision Research, 36(2): 207-222.
Masin, S.C. (1997). The luminance conditions of transparency. Perception, 26(1): 39-50.
Masin, S.C. (1999). Color scission and phenomenal transparency. Perceptual and Motor Skills, 89(3 Pt 1): 815-823.
Masin, S.C. (1999). Phenomenal transparency in achromatic checkerboards. Perceptual and Motor Skills, 88(2): 685-692.
Metelli, F. (1970). An algebraic development of the theory of perceptual transparency. Ergonomics, 13(1): 59-66.
Metelli, F. (1974a). Achromatic color conditions in the perception of transparency. In Macleod, R.B., & Pick, H.L. (Eds.), Perception (pp. 95-116). Ithaca, NY: Cornell University Press.
Metelli, F. (1974b). The perception of transparency. Scientific American, 230(4): 90-98.
Metelli, F. (1985). The simulation and perception of transparency. Psychological Research, 47(4): 185-202.
Metelli, F., Da Pos, O. and Cavedon, A. (1985). Balanced and unbalanced, complete and partial transparency. Perception & Psychophysics, 38(4): 354-366.
Nakayama, K., & Shimojo, S. (1992). Experiencing and perceiving visual surfaces. Science, 257: 1357-1363.
Nakayama, K., Shimojo, S., & Ramachandran, V. (1990). Transparency: relation to depth, subjective contours, luminance, and neon color spreading. Perception, 19: 497-513.
Plummer, D.J., and Ramachandran, V.S. (1993). Perception of transparency in stationary and moving images. Spatial Vision, 7(2): 113-123.
Robilotto, R., Khang, B.-G. & Zaidi, Q. (2002). Sensory and physical determinants of perceived achromatic transparency. Journal of Vision, 2(5): 388-403.
Robilotto, R., & Zaidi, Q. (2004). Perceived transparency of neutral density filters across dissimilar backgrounds. Journal of Vision, 4(3): 183-195, doi:10.1167/4.3.5.
Singh, M., and Anderson, B.L. (2002). Perceptual assignment of opacity to translucent surfaces: the role of image blur. Perception, 31(5): 531-552.
Singh, M., and Anderson, B.L. (2002). Toward a perceptual theory of transparency. Psychological Review, 109(3): 492-519.
Somers, D.C., and Adelson, E.H. (1997). Junctions, transparency, and brightness. Investigative Ophthalmology and Vision Science (supplement), 38: S453.
Stoner, G.R., Albright, T.D., and Ramachandran, V.S. (1990). Transparency and coherence in human motion perception. Nature, 344(6262): 153-155.
Tommasi, M. (1999). A ratio model of perceptual transparency. Perceptual and Motor Skills, 89(3 Pt 1): 891-897.
Watanabe, T. & Cavanagh, P. (1992). The role of transparency in perceptual grouping and pattern recognition. Perception, 21: 131-139.
Watanabe, T. & Cavanagh, P. (1993). Transparent surfaces defined by implicit X junctions. Vision Research, 33: 2339-2346.
Watanabe, T. & Cavanagh, P. (1993). Surface decomposition accompanying the perception of transparency. Spatial Vision, 7: 95-111.

36

Transparent Materials

Metelli’s episcotister

37

Real transparent objects

38

Real transparent objects

… are not ideal infinitesimal films

… obey Fresnel’s equations: specular reflections, refraction

… can appear vividly transparent without containing the image cues traditionally believed to be important for the perception of “transparency”.

39

Real transparent objects

Questions:

What image cues do we use to tell that something is transparent?

How do we estimate and represent the refractive index of transparent bodies?

40

Refractive Index

Possibly the most important property that distinguishes real chunks of transparent stuff from Metelli-type materials.

Snell’s Law:

sin(θi) / sin(θt) = n2 / n1
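Snell’s law can be sketched numerically. This is a minimal illustration (not part of the original slides; `refraction_angle` is a hypothetical helper), including the total-internal-reflection case:

```python
import math

def refraction_angle(theta_i, n1, n2):
    """Angle of the transmitted ray from Snell's law:
    sin(theta_i) / sin(theta_t) = n2 / n1.
    Returns None on total internal reflection."""
    s = (n1 / n2) * math.sin(theta_i)
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.asin(s)

# Light entering glass (n ~ 1.5) from air at 45 degrees is bent
# toward the normal, to roughly 28 degrees.
t = refraction_angle(math.radians(45), 1.0, 1.5)
print(round(math.degrees(t), 1))
```

Going the other way (glass to air) at a steep enough angle, the function returns None: the ray cannot escape.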

41

Refractive Index

Varying the refractive index can lead to the distinct impression that the object is made out of a different material.

refractive index: 1.2, 1.5, 2.3


52

Refraction and image distortion

convex concave

Low RI

53

Refraction and image distortion

convex concave

High RI

54

Displacement Field

Measures the way the transparent object displaces the position of refracted features in the image:

d = p_refracted − p_actual

The perturbation of the texture caused by refraction can be captured by the displacement field.

55

Displacement Field (vector field)

56

Distortion Fields

But, to compute the displacement field the visual system would need to know the positions of non-displaced features on the backplane.

Let us assume, instead, that the visual system can estimate the relative distortions of the texture (compression or expansion of texture)

D = div( d )
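The divergence can be computed directly from a dense displacement field. The sketch below is illustrative (not from the talk; `distortion_field` is a hypothetical helper, and the toy field is a uniform radial expansion assumed for the example):

```python
import numpy as np

def distortion_field(dx, dy):
    """Divergence of a dense displacement field d = (dx, dy):
    D = div(d) = d(dx)/dx + d(dy)/dy.
    Positive values indicate local expansion of the refracted texture,
    negative values indicate local compression."""
    ddx = np.gradient(dx, axis=1)  # partial of dx w.r.t. x
    ddy = np.gradient(dy, axis=0)  # partial of dy w.r.t. y
    return ddx + ddy

# A uniform radial expansion about the centre, d = 0.1 * p,
# has constant positive divergence (0.1 + 0.1 = 0.2) everywhere.
n = 64
yy, xx = np.mgrid[0:n, 0:n] - n // 2
dx, dy = 0.1 * xx, 0.1 * yy
D = distortion_field(dx, dy)
print(np.allclose(D, 0.2))
```

The point of the slide survives in code: computing D needs only the displacement field itself, not the un-displaced feature positions once d is known, which is why the relative-distortion version is the more plausible visual quantity.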

57

Distortion Fields

image / distortion field

Red = magnification; green = minification

58

Distance to backplane


convex concave

near

69

Distance to backplane

convex concave

far

70

Object thickness


80

Asymmetric Matching task

The distortions in the image depend not only on the RI but also on:
Geometry of the object (curvature, thickness)
Distance between object and background
Distance between viewer and object

So, if observers base their judgments of RI primarily on the pattern of distortions (rather than correctly estimating the physics), then they should show systematic errors in their estimates when these irrelevant scene factors vary.

81

Asymmetric Matching task

Subject adjusts RI of standard probe stimulus to match the appearance of the other stimulus.

One scene variable (thickness, distance to back-plane) is clamped at a different value for test stimulus.

Measures ability of observer to ‘ignore’ or ‘discount’ the differences that are caused by irrelevant scene variables

probe / test

82

Asymmetric Matching task

[Plots: perceived refractive index vs. refractive index, for varying distance to backplane and thickness of pebble]

83

84

Observers don’t appear to estimate the physics correctly.

Instead, they probably use some heuristic image measurements (e.g. distortion fields) that are affected by refractive index, but also by other scene factors.

This can lead to illusions (misperceptions).

Observations

85

Visual estimation of surface reflectance properties: gloss

Perception of materials that transmit light: refraction, sub-surface scattering

Exploiting the assumptions of the visual system to edit the material appearance of objects in photographs

Outline

86

Image-based Material Editing (Khan, Reinhard, Fleming & Bülthoff, SIGGRAPH 2006)

input / output

87

Image based material editing

Goal: given single (HDR) photo as input, change appearance of object to completely different material

Physically accurate solution would require fully reconstructing illumination and 3D geometry. Beyond state-of-the-art computer vision capabilities

input output

88

Image based material editing

Alternative: a “perceptually accurate” solution. A series of simple, highly approximate manipulations, each of which is provably incorrect, but whose ensemble effect is visually compelling.

input output

89

Processing Pipeline

[Diagram: segmentation → alpha matte; object → depth map → bilateral filter → texture mapping; background → hole filling → environment; combined → output image]

90

Alpha Mattes

By restricting our image modifications to the area within the boundary of the object, we can create the illusion of a transformed material.

Assumption: addition of new non-local effects (e.g. additional reflexes, caustics, etc.) is not crucial. Approximate shadows and reflections are already in place in the original scene.

91

Automatic alpha-matting

Although we extract our alpha-mattes manually (rotoscoping), there has been a recent wave of automatic / semi-automatic procedures.

Bayesian pixel classification methods: Chuang, Curless, Salesin & Szeliski (2001); Apostoloff & Fitzgibbon (2004)

Methods using multiple images with different planes of focus: McGuire, Matusik, Pfister, Hughes & Durand (2005); Khan & Reinhard (2005)

92

Processing Pipeline

93

Processing Pipeline

94

How not to do shape-from-shading

Try using the state-of-the-art algorithms and you will generally be disappointed!

input / reconstruction

95

How not to do shape-from-shading

96

How not to do shape-from-shading

97

What is the alternative?

We use a simple but surprisingly effective heuristic: Dark is Deep. In other words …

z(x, y) = L(x, y)

input / initial depth estimate

(Complicated shape-from-shading algorithm? WHAT?)

98

Bilateral Filter

The ‘recovered depths’ are conditioned using a bilateral filter (Tomasi & Manduchi, 1998; Durand & Dorsey, 2002).

Simple non-linear edge-preserving filter with kernels in space and intensity domains.

Note: 2 user parameters.

99

Bilateral Filter

Kernels in the space and intensity domains.

Normally one would use log L, but this is unbounded. We prefer a sigmoid (based on human vision).
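For concreteness, here is a brute-force bilateral filter sketch. It is not the paper's implementation: it uses a plain Gaussian intensity kernel rather than the sigmoid transfer mentioned above, and the parameter values are illustrative.

```python
import numpy as np

def bilateral_filter(z, sigma_s=2.0, sigma_i=0.1, radius=4):
    """Brute-force bilateral filter: a Gaussian kernel in the spatial
    domain times a Gaussian kernel in the intensity domain, so the
    depth map is smoothed without blurring across sharp edges."""
    h, w = z.shape
    out = np.zeros_like(z)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = z[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            w_s = np.exp(-((yy - y)**2 + (xx - x)**2) / (2 * sigma_s**2))
            w_i = np.exp(-((patch - z[y, x])**2) / (2 * sigma_i**2))
            weights = w_s * w_i
            out[y, x] = (weights * patch).sum() / weights.sum()
    return out

# A noisy step edge: the filter removes the noise but keeps the step,
# because pixels across the edge get near-zero intensity weights.
rng = np.random.default_rng(1)
step = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])
noisy = step + 0.02 * rng.standard_normal(step.shape)
smooth = bilateral_filter(noisy)
```

The two user parameters the slide mentions correspond to `sigma_s` and `sigma_i` here.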

100

Bilateral Filter: 3 main functions

1. De-noising the depth-map. Intuition: depths are generally smoother than intensities in the real world.

2. Selectively enhance or remove textures for an embossed effect.

101

Bilateral Filter: 3 main functions

3. Shape-from-silhouette, like level-set shape ‘inflation’ (e.g. Williams, 1998). Intuition: values outside the object are set to zero, so blurring across the boundary makes the recovered depths smooth and convex.

102

Forgiving case

Diffuse surface reflectance leads to a clear shading pattern. Silhouette provides good constraints.

original / reconstructed depths

103

Difficult case

Strong highlights create large spurious depth peaks. Silhouette is relatively uninformative.

original / reconstructed depths

104

Light from the side

Shadows and intensity gradients lead to substantial distortions of the face.

original / reconstructed depths

105

Importance of viewpoint

correct viewpoint / transformed image

Substantial errors in depth reconstruction are not visible in the transformed image.


112

Why does it work?

Generic viewpoint assumption (Koenderink & van Doorn, 1979; Binford, 1981; Freeman, 1994)

113

Why does it work?

Bas-relief ambiguity (Belhumeur, Kriegman & Yuille, 1999): affine transformed versions of the scene are indistinguishable. Particularly important for objects illuminated from the side.

114

Why does it work?

front / side

115

Why does it work?

Masking effect of patterns that are subsequently placed on the surface (e.g. highlights).

116

From depths to surface normals

Take the derivatives; reshape them recursively. The cross product of the derivatives gives the surface normal.

Note: 1 user parameter (number of iterations).
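The cross-product step can be written out directly. A minimal sketch (it omits the recursive reshaping and its iteration parameter mentioned above; `normals_from_depth` is an illustrative helper):

```python
import numpy as np

def normals_from_depth(z):
    """Per-pixel surface normals from a depth map z(x, y).
    The tangent vectors (1, 0, dz/dx) and (0, 1, dz/dy) span the
    surface; their cross product (-dz/dx, -dz/dy, 1) is the normal."""
    dzdy, dzdx = np.gradient(z)  # np.gradient returns axis-0 first
    n = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

# A tilted plane z = 0.5 * x has the same normal at every pixel.
x = np.arange(8, dtype=float)
z = np.tile(0.5 * x, (8, 1))
n = normals_from_depth(z)
print(np.allclose(n, n[0, 0]))
```

These normals are what the re-texturing stage later uses as indices into a texture map.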

117

Processing Pipeline

118

Processing Pipeline

119

Re-texturing the object

Use recovered surface normals as indices into a texture map. The most important trick: blend original intensities back into the image, for correct shading and highlights.
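A normal-indexed lookup plus an intensity blend can be sketched as below. This is an illustrative reading of the slide, not the paper's code: the lookup is a simple "matcap"-style mapping from (nx, ny) to texture coordinates, and `blend` is a hypothetical parameter.

```python
import numpy as np

def retexture(luminance, normals, texture, blend=0.5):
    """Sketch of normal-indexed re-texturing: each pixel's surface
    normal (nx, ny) selects a texel, then the original image
    intensities are blended back in so shading and highlights
    survive the material change."""
    th, tw = texture.shape[:2]
    # Map nx, ny in [-1, 1] to integer texture coordinates.
    u = ((normals[..., 0] + 1) / 2 * (tw - 1)).astype(int)
    v = ((normals[..., 1] + 1) / 2 * (th - 1)).astype(int)
    tex = texture[v, u]
    return (1 - blend) * tex + blend * luminance

# Toy 4x4 case with all normals pointing at the viewer:
# output is simply the blend of the one texel and the luminance.
lum = np.full((4, 4), 0.8)
nrm = np.zeros((4, 4, 3)); nrm[..., 2] = 1.0
texture = np.full((16, 16), 0.2)
out = retexture(lum, nrm, texture)
print(np.allclose(out, 0.5))
```

The blending step is the "most important trick" above: without it, the new texture floats free of the object's original shading.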

120

Re-texturing the object

121

Re-texturing the object

The most important trick: blend original intensities back into the image, for correct shading and highlights.

TextureShop (Fang & Hart, 2004) tacitly uses the same trick.

122

Other material transformations

To create the illusion of transparent or translucent materials, we map a modified version of the background image onto the surface.

To apply arbitrary BRDFs, we use the recovered surface normals, and an approximate reconstruction of the environment, to evaluate the BRDF.

123

Processing Pipeline

124

Processing Pipeline

125

Hole filling the easy way

A number of sophisticated algorithms exist for object removal (e.g. Bertalmio et al. 2000; Drori et al. 2003; Sun et al. 2005). Crude but fast alternative: cut and paste!
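In the spirit of "crude but fast", here is a deliberately simple stand-in for real inpainting (an illustration of the idea, not the method used in the paper): each hole pixel just copies the nearest known pixel in its row.

```python
import numpy as np

def cut_and_paste_fill(image, mask):
    """Crude hole filling: for each hole pixel (mask == True), copy
    the nearest non-hole pixel from the same row. A deliberately
    simple stand-in for proper inpainting algorithms."""
    out = image.copy()
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                # Walk outward: left first, then right, to known data.
                for d in range(1, w):
                    if x - d >= 0 and not mask[y, x - d]:
                        out[y, x] = image[y, x - d]; break
                    if x + d < w and not mask[y, x + d]:
                        out[y, x] = image[y, x + d]; break
    return out

# Horizontal gradient with a rectangular hole punched out.
img = np.tile(np.linspace(0, 1, 8), (8, 1))
mask = np.zeros((8, 8), dtype=bool); mask[2:5, 3:5] = True
filled = cut_and_paste_fill(img, mask)
```

Because the filled region ends up behind the (modified) object, even this crude fill is usually good enough, which is the slide's point.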

126

Hole filling the easy way

127

Hole filling the easy way

128

Fake transparency

The human visual system does not appear to recognize transparency by correctly using inverse optics. Instead, it seems to rely on the consistency of the image statistics and patterns of distortion.

129

Fake translucency

To create frosted glass, simply blur the background before mapping it onto the surface
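The frosted-glass trick is a blur-and-composite, sketched below under simple assumptions (grayscale image, boolean object mask; `frosted_glass` is an illustrative helper, and the separable Gaussian stands in for whatever blur the original used):

```python
import numpy as np

def frosted_glass(background, mask, sigma=3.0):
    """Fake translucency sketch: blur the background and composite it
    inside the object mask, so that region reads as frosted glass."""
    # Separable Gaussian blur via a 1-D kernel (no SciPy needed).
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2 * sigma**2)); k /= k.sum()
    blurred = np.apply_along_axis(
        lambda r: np.convolve(r, k, mode="same"), 1, background)
    blurred = np.apply_along_axis(
        lambda c: np.convolve(c, k, mode="same"), 0, blurred)
    return np.where(mask, blurred, background)

# High-contrast stripes: inside the mask the contrast collapses,
# outside the image is untouched.
bg = np.tile((np.arange(32) % 2).astype(float), (32, 1))
mask = np.zeros((32, 32), dtype=bool); mask[8:24, 8:24] = True
out = frosted_glass(bg, mask)
print(out[16, 8:24].std() < bg[16, 8:24].std())
```

Mapping the blurred (or blurred-and-colour-corrected) background onto the surface via the recovered normals gives the full effect described on the next slide.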

130

Fake translucency

For translucency, blur and colour correct: Reinhard et al. (2001) colour transfer.

131

Fake translucency

132

Fake translucency

133

Environment map

Background image is used to generate full HDR light probe for image-based lighting

134

Arbitrary BRDFs

Given surface normals and a complete HDR light probe, we can evaluate empirical or parametric BRDFs, as in standard image-based lighting (local effects only).

We used Matusik’s BRDFs.

original / blue metallic / nickel

135

Arbitrary BRDFs

original / nickel

136

How important is HDR?

HDR makes the procedure much more stable: depth estimates are superior, and it is easier to identify (and selectively manipulate) highlights.

137

How important is HDR?

original / boosted / suppressed

138

General Principles

Complementary heuristics. Normally, errors accumulate as one approximation is added to another. The key to our approach is choosing heuristics such that the errors of one approximation visually compensate for the errors of another.

Exploit visual tolerance for ambiguities to achieve solutions that are perceptually acceptable even though they are physically wrong.

139

Using shape to infer material properties

140

Using shape to infer material properties

Photo: Ted Adelson

141

Thank You

Funding: DFG FL 624/1-1

Some renderings generated using Henrik Wann Jensen’s DALI

Co-authors:

Ted Adelson, Ron Dror, Frank Jäkel, Larry Maloney, Erum Khan, Erik Reinhard, Heinrich Bülthoff

142

Stability across extrinsic scene variables

[Plots: perceptual scale vs. physical RI, for varying distance to background and pebble thickness]

143

Contrast as a cue to RI

[Plots: mean and variance vs. refractive index]

144

Contrast as a cue to RI

[Plots: mean and variance vs. scene variables: thickness, backplane distance, viewing distance]

145

Variability across subjects

[Matching data for Subject 1 and Subject 2: distance to backplane and thickness of pebble]

146

Binocular cues

Matched features generally do not specify the correct stereo depth of the surface: specularities and features visible through object

Disparity field can contain large spurious vertical disparities and unmatchable features, leading to rivalry

Extent of rivalry could in fact be a cue to RI (cf. binocular ‘lustre’).

147

Binocular cues

148

Refraction vs. Reflection

glass / mirror

Both reflection and refraction work by mapping features from the surroundings into the image. But: different mapping rules.

149

Distortion Fields

displacement field: measures the way the transparent object displaces the position of refracted features in the image.

150

Real transparent objects

… are not ideal infinitesimal films

… obey Fresnel’s equations: specular reflections, refraction

… can appear vividly transparent without containing the image cues traditionally believed to be important for the perception of “transparency”.

151

Real transparent objects

Questions:

What image cues do we use to tell that something is transparent?

How do we estimate and represent the refractive index of transparent bodies?

152

The visual brain

More than a third of the brain contains cells that process visual signals.

Literally billions of neurons are dedicated to the task of seeing. Why? Is vision really so complicated?

153

154

Statistics of typical illumination (Dror et al., 2004)

Properties based on raw luminance values:
High dynamic range
Pixel histogram heavily skewed towards low intensities

Quasi-local and non-local properties:
Nearby pixels are correlated in intensity (roughly 1/f amplitude spectrum)
Distributions of wavelet coefficients are highly kurtotic (i.e. significant wavelet coefficients are sparse)
Approximate scale invariance (i.e. distributions of wavelet coefficients are similar at different scales)

Global and non-stationary properties:
Dominant direction of illumination
Presence of recognizable objects, such as trees
Cardinal axes (due to the ground plane and perpendicular structures erected thereupon)

155

What is the perceptual scale of refractive index?

[Plot: perceived RI vs. physical RI]

156

Maximum Likelihood Difference Scaling

A / B

157

Maximum Likelihood Difference Scaling

nCk image combinations: n = 10, k = 4, giving nCk = 210 quadruples. Estimate the scale that best accounts for the similarity judgments.

[Stimuli ordered along the refractive index axis; pairs A and B]
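The quadruple count is simple combinatorics, easily checked with Python's `math.comb`:

```python
from math import comb

# n = 10 stimulus levels; each MLDS trial uses a quadruple of
# k = 4 distinct levels, so the number of distinct trials is C(10, 4).
n, k = 10, 4
print(comb(n, k))  # → 210
```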

158

Maximum Likelihood Difference Scaling

[Plot: perceptual scale vs. physical RI]

159

Maximum Likelihood Difference Scaling

[Plot: perceptual scale vs. physical RI, compared with RMS image differences]

160

Distortion Fields

[Plot: distortion magnitude vs. physical RI]

161

Distortion Fields

[Plot: perceptual scale vs. physical RI, compared with RMS image differences and mean distortion]

162

Surface Reflectance Matching (METHOD)

Test / Match

163

Ward model

Specular reflectance: matte to glossy
Roughness: crisp to blurred

Axes rescaled to form a psychophysically uniform space (Pellacini et al., 2000).

[Stimulus grid: roughness × specular reflectance]
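The Ward model itself is compact. This sketch writes out the standard isotropic Ward BRDF (Ward, 1992): a diffuse lobe plus a Gaussian specular lobe; the parameter values in the example are illustrative, not the ones used in the experiments.

```python
import math

def ward_isotropic(rho_d, rho_s, alpha, theta_i, theta_o, delta):
    """Isotropic Ward BRDF: rho_d and rho_s are the diffuse and
    specular reflectances, alpha is the roughness, theta_i/theta_o
    the incident/outgoing polar angles, and delta the angle between
    the halfway vector and the surface normal."""
    diffuse = rho_d / math.pi
    specular = (rho_s * math.exp(-math.tan(delta)**2 / alpha**2)
                / (4 * math.pi * alpha**2
                   * math.sqrt(math.cos(theta_i) * math.cos(theta_o))))
    return diffuse + specular

# Increasing alpha (roughness) spreads and weakens the specular peak:
# a crisp surface has a much higher value at the mirror direction.
crisp = ward_isotropic(0.2, 0.1, 0.05, 0.3, 0.3, 0.0)
blurred = ward_isotropic(0.2, 0.1, 0.2, 0.3, 0.3, 0.0)
print(crisp > blurred)
```

The two axes of the stimulus space above correspond directly to `rho_s` (matte to glossy) and `alpha` (crisp to blurred).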

164

Real-world Illuminations

Illuminations downloaded from: http://graphics3.isi.edu/~debevec/Probes

Beach, Building, Campus, Eucalyptus, Grace, Kitchen, St. Peter’s, Uffizi

165

Match Illumination

Galileo

166

Artificial Illuminations

Single point source, multiple point sources, extended source, Gaussian white noise, Gaussian pink noise

167

Subjects can match surface reflectance

[Matching data, specular reflectance (matte to shiny) and surface roughness (smooth to rough), match vs. test:
Subject RF, 110 observations, illumination “St. Peter’s”;
Subject MS, 110 observations, illumination “Grace”;
Subject RA, 110 observations, illumination “Eucalyptus”]

168

Subjects can match surface reflectance

[Match vs. test for specular reflectance (matte to shiny) and surface roughness (smooth to rough); data pooled across all subjects and all real-world illuminations]

169

Real-world vs. Artificial Illumination

[Bar plots: matching error per illumination map (Beach, Building, Campus, Eucalyptus, Grace, Kitchen, St. Peter's, Uffizi, Point, Multi, Extended, White Noise, Pink Noise), for specular reflectance and roughness]

170

Noise is unlike real-world illumination

[Specular contrast matches under Uffizi, White Noise and Pink Noise; Uffizi is the least accurate of all the real-world illuminations]

171

What are the relevant statistics?

Real / Extended / 1/f Noise

172

Real-world Illuminations

Beach / Campus / Single point source / Gaussian Pink Noise

173

Material-ism

174

Materials in Computer Graphics

Nvidia advanced skin rendering demo

Hollywood and the games industry know that photorealism means getting materials right.

No subsurface scattering / With subsurface scattering

175

Edibility


179

algae
