
DEEP VIEW SYNTHESIS FROM SPARSE PHOTOMETRIC IMAGES

Zexiang Xu¹, Sai Bi¹, Kalyan Sunkavalli², Sunil Hadap³, Hao Su¹, Ravi Ramamoorthi¹

¹University of California, San Diego
²Adobe Research
³Lab 126, Amazon

© 2019 SIGGRAPH. ALL RIGHTS RESERVED.

Render real scenes

[Einarsson et al. 2006] [Dong et al. 2010] [Schwartz et al. 2011] [Zickler et al. 2005]

Appearance of a scene

• Geometry [Furukawa and Ponce 2008] [Newcombe et al. 2011]

• Materials [Xu et al. 2016] [Li et al. 2018]

• Realistic rendering [Einarsson et al. 2006] [Dong et al. 2010] [Schwartz et al. 2011]

Light transport acquisition

• Light transport function [Matusik et al. 2002]

Image-based relighting

[Xu et al. 2018]
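For context, image-based relighting treats the acquired light transport as a linear operator on lighting; a standard formulation (the symbols are illustrative, not taken from the slides), written in LaTeX:

I(p) \;=\; \sum_{\omega} T(p, \omega)\, L(\omega)

where T(p, ω) is the light transport function for pixel p and light direction ω, and L is the novel lighting.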

Light transport acquisition for changing view

Sparse input views → Novel view appearance

Novel view synthesis

[Flynn et al. 2016] [Kalantari et al. 2016]

[Penner and Zhang 2017] [Zhou et al. 2018]

• Unstructured views

• Small baseline

• Natural illumination

[Chen and Williams 1993]

[Levoy and Hanrahan 1996]

Sparse sampling for light transport acquisition

• Large baseline

• Controlled lighting

Preview

• Large baseline

• Controlled lighting

Sparse input views → CNN → our result (compared with ground truth)

Acquisition configuration

• Sparse

• Good coverage

Icosahedron

• 12 vertices

• 20 faces

• Symmetric

• 37°
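The icosahedron itself is easy to reproduce; below is a minimal Python sketch (my own, not the authors' capture code) that generates the 12 unit vertex directions, which could serve as candidate camera directions looking at the object, and reports the angular spacing between neighboring vertices.

```python
import numpy as np

# Minimal sketch (not the authors' capture code): the 12 vertices of a
# regular icosahedron, built from the golden ratio, give a sparse but
# symmetric set of directions around an object ("sparse + good coverage").
PHI = (1.0 + np.sqrt(5.0)) / 2.0  # golden ratio

def icosahedron_directions():
    """Return the 12 unit vertex directions of a regular icosahedron."""
    verts = []
    for a in (-1.0, 1.0):
        for b in (-PHI, PHI):
            verts.append((0.0, a, b))   # cyclic permutations of (0, ±1, ±phi)
            verts.append((a, b, 0.0))
            verts.append((b, 0.0, a))
    v = np.array(verts)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

dirs = icosahedron_directions()
# Angle between a vertex direction and its nearest neighbors (~63.4 deg).
cosines = dirs @ dirs.T
np.fill_diagonal(cosines, -1.0)
print(np.degrees(np.arccos(cosines.max())))
```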

Synthetic scenes

• Procedurally generated objects

• Geometry: Adobe Stock

• Material reflectance: material images courtesy of Allegorithmic and Adobe Stock

Overview

Input views → CNN → Novel view

Plane sweep volume

• Warp the input views onto a sweep of depth planes at the novel view

• One warped image per input view, per depth plane → plane sweep volume
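A minimal sketch of the standard plane-sweep construction (NumPy/OpenCV; the function, its arguments, and the camera convention are my assumptions, not code from the paper): each source view is warped onto fronto-parallel depth planes of the novel view via a plane-induced homography.

```python
import numpy as np
import cv2

def plane_sweep_volume(src_imgs, Ks, Rs, ts, K_novel, depths, hw):
    """Warp each source view onto fronto-parallel depth planes of the
    novel view. Returns an array of shape (V, D, H, W, 3).

    Assumed convention: Rs[i], ts[i] map novel-camera coordinates to
    source-camera-i coordinates, X_src = Rs[i] @ X_novel + ts[i].
    """
    H, W = hw
    n = np.array([[0.0, 0.0, 1.0]])          # fronto-parallel plane normal
    K_novel_inv = np.linalg.inv(K_novel)
    volume = np.zeros((len(src_imgs), len(depths), H, W, 3), np.float32)
    for i, (img, K, R, t) in enumerate(zip(src_imgs, Ks, Rs, ts)):
        for j, d in enumerate(depths):
            # Plane-induced homography from novel-view pixels to source pixels.
            Hmat = K @ (R + t.reshape(3, 1) @ n / d) @ K_novel_inv
            # With WARP_INVERSE_MAP, Hmat is used directly as the
            # destination->source mapping (backward warping).
            volume[i, j] = cv2.warpPerspective(
                img, Hmat, (W, H),
                flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)
    return volume
```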

Visibility-aware attention maps

• Predicted from the plane sweep volume, per input view and per depth plane

Attention-masked volume

• Plane sweep volume masked by the visibility-aware attention maps
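How the masking step could look in code, as a minimal NumPy sketch (shapes and names are my assumptions): the attention maps simply down-weight entries of the plane sweep volume where a pixel is occluded or mismatched, before the shading branch sees them.

```python
import numpy as np

def attention_masked_volume(psv, attention):
    """psv:       (V, D, H, W, 3) plane sweep volume
       attention: (V, D, H, W)    visibility-aware attention in [0, 1]
       returns    (V, D, H, W, 3) attention-masked volume"""
    return psv * attention[..., None]
```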

Our network: Correspondence branch + Shading branch

• Infer geometry (depth)

• Infer attention maps

• Infer shading

• Aggregate appearance

Correspondence branch

• Infer geometry (depth)

• Infer attention maps

Input images → Feature Extractor (2D CNN) → feature maps → plane sweep → Correspondence Predictor (3D CNN) → depth probability maps + visibility-aware attention maps
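A rough PyTorch-style sketch of this structure (layer widths, the softmax/sigmoid heads, and the external sweep_fn used to build the feature plane sweep volume are illustrative assumptions; the paper's actual architecture is deeper):

```python
import torch
import torch.nn as nn

class CorrespondenceBranch(nn.Module):
    """Sketch: a shared 2D CNN extracts per-view features, which are swept
    into a volume; a 3D CNN then predicts depth probabilities and per-view
    visibility-aware attention maps."""
    def __init__(self, num_views, feat_ch=16):
        super().__init__()
        self.feature_extractor = nn.Sequential(      # shared 2D CNN per view
            nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU())
        self.correspondence_predictor = nn.Sequential(  # 3D CNN over (D,H,W)
            nn.Conv3d(num_views * feat_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 1 + num_views, 3, padding=1))
        self.num_views = num_views

    def forward(self, images, sweep_fn):
        # images: (B, V, 3, H, W); sweep_fn (assumed, e.g. the plane-sweep
        # warp above) maps per-view feature maps to a (B, V*C, D, H, W) volume.
        B, V, _, H, W = images.shape
        feats = self.feature_extractor(images.flatten(0, 1)).view(B, V, -1, H, W)
        feat_volume = sweep_fn(feats)
        out = self.correspondence_predictor(feat_volume)
        depth_prob = torch.softmax(out[:, :1], dim=2)  # (B,1,D,H,W), sums to 1 over depth
        attention = torch.sigmoid(out[:, 1:])          # (B,V,D,H,W) in [0,1]
        return depth_prob, attention
```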

Shading branch

Plane sweep volume × visibility-aware attention maps → attention-masked volume → Shading Predictor (3D CNN) → per-plane images

Depth probability maps are kept for aggregating the per-plane images into the final view
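A matching PyTorch-style sketch of the shading predictor (layer widths and names are illustrative assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ShadingBranch(nn.Module):
    """Sketch: a 3D CNN maps the attention-masked plane sweep volume to one
    RGB image per depth plane."""
    def __init__(self, num_views):
        super().__init__()
        self.shading_predictor = nn.Sequential(
            nn.Conv3d(num_views * 3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 3, 3, padding=1))

    def forward(self, masked_volume):
        # masked_volume: (B, V*3, D, H, W) -> per-plane images (B, 3, D, H, W)
        return self.shading_predictor(masked_volume)
```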

Corr-Branch + Shade-Branch

Input images → Feature Extractor (2D CNN) → plane sweep volume → Correspondence Predictor (3D CNN) → depth probability maps + visibility-aware attention maps

Attention-masked volume → Shading Predictor (3D CNN) → per-plane images, aggregated with the depth probability maps into the novel view
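The final "aggregate appearance" step can be as simple as a per-pixel expectation over depth planes; a sketch of my reading of the pipeline (not the authors' code), including the depth read-out that a multi-view stereo result could be derived from:

```python
import torch

def aggregate_appearance(per_plane_images, depth_prob):
    """Blend per-plane images into the novel view using the depth
    probability maps as per-pixel weights over the depth dimension.
    per_plane_images: (B, 3, D, H, W); depth_prob: (B, 1, D, H, W),
    softmax-normalized over D. Returns (B, 3, H, W)."""
    return (per_plane_images * depth_prob).sum(dim=2)

def expected_depth(plane_depths, depth_prob):
    """Per-pixel expected depth from the same probabilities.
    plane_depths: (D,) tensor of plane depths; returns (B, 1, H, W)."""
    return (plane_depths.view(1, 1, -1, 1, 1) * depth_prob).sum(dim=2)
```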

Results

• Data #1, Data #2, Data #3, Data #4

• Novel view relighting (Data #4)

• Multi-view stereo: input images → reconstruction (Data #2)

Limitations

• Highly specular objects: trained on 64 × 64 image crops, so the receptive field is limited

• Highly non-convex shapes: regions visible from only 1 or 2 views

Our result vs. ground truth

Conclusion

• Visibility-aware attention maps

• Novel view synthesis: our result vs. ground truth

• Novel view relighting

• Multi-view stereo

Acknowledgements

• Pratul Srinivasan and Zhengqin Li

• NSF grants 1617234, 1703957

• ONR grant N000141712687

• Adobe

• Adobe Research Fellowship

• Powell-Bundle Fellowship

• Ronald L. Graham Chair

• UC San Diego Center for Visual Computing


THANK YOU!
