Light Fields. Computational Photography. Connelly Barnes. Slides from Alexei A. Efros, James Hays, Marc Levoy, and others.

TRANSCRIPT

Slide 1
Light Fields. Computational Photography. Connelly Barnes. Slides from Alexei A. Efros, James Hays, Marc Levoy, and others.

Slide 2
What is light? Electromagnetic radiation (EMR) moving along rays in space. R(λ) is EMR, measured in units of power (watts); λ is wavelength.
Useful things: light travels in straight lines; in vacuum, radiance emitted = radiance arriving, i.e. there is no transmission loss.

Slide 3
What do we see? 3D world, 2D image. (Figures by Stephen E. Palmer, 2002)

Slide 4
What do we see? 3D world, 2D image. Painted backdrop.

Slide 5
On Simulating the Visual Experience: just feed the eyes the right data; no one will know the difference!
Philosophy: ancient question: does the world really exist?
Science fiction: many, many, many books on the subject, e.g. "slowglass" from "Light of Other Days". Latest take: The Matrix.
Physics: light can be stopped for up to a minute ("Scientists stop light").
Computer Science: Virtual Reality.
To simulate, we need to know: what does a person see?

Slide 6
The Plenoptic Function. Q: What is the set of all things that we can ever see? A: The plenoptic function (Adelson & Bergen).
Let's start with a stationary person and try to parameterize everything that he can see. (Figure by Leonard McMillan)

Slide 7
Grayscale snapshot: P(θ, φ) is the intensity of light seen from a single viewpoint, at a single time, averaged over the wavelengths of the visible spectrum. (We could also use P(x, y), but spherical coordinates are nicer.)

Slide 8
Color snapshot: P(θ, φ, λ) is the intensity of light seen from a single viewpoint, at a single time, as a function of wavelength.

Slide 9
A movie: P(θ, φ, λ, t) is the intensity of light seen from a single viewpoint, over time, as a function of wavelength.

Slide 10
Holographic movie: P(θ, φ, λ, t, Vx, Vy, Vz) is the intensity of light seen from ANY viewpoint, over time, as a function of wavelength.

Slide 11
The Plenoptic Function P(θ, φ, λ, t, Vx, Vy, Vz) can reconstruct every possible view, at every moment, from every position, at every wavelength. It contains every photograph, every movie, everything that anyone has ever seen! It completely captures our visual reality. Not bad for a function. (A small sketch of these slices appears after Slide 16.)

Slide 12
Sampling the plenoptic function (top view): just look it up: Google Street View.

Slide 13
Model geometry, or just capture images?

Slide 14
(figure only)

Slide 15
Ray. Let's not worry about time and color: 5D = 3D position + 2D direction, P(θ, φ, Vx, Vy, Vz). (Slide by Rick Szeliski and Michael Cohen)

Slide 16
No change in radiance from surface to camera (figure labels: lighting, surface, camera). How can we use this?
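To make Slides 7 through 11 concrete, here is a minimal Python sketch (the array name, shapes, and function names are illustrative assumptions, not from the slides) showing how a grayscale snapshot, a color snapshot, and a movie are all partial evaluations of a discretized plenoptic function:

```python
import numpy as np

# Hypothetical discretization of the plenoptic function
# P(theta, phi, lambda, t, Vx, Vy, Vz): one array axis per argument.
# Shapes are arbitrary and only for illustration.
P = np.random.rand(64, 128, 3, 10, 4, 4, 4)  # theta, phi, lambda, t, Vx, Vy, Vz

def grayscale_snapshot(P, t, vx, vy, vz):
    """Single viewpoint, single time, averaged over visible wavelengths: P(theta, phi)."""
    return P[:, :, :, t, vx, vy, vz].mean(axis=2)

def color_snapshot(P, t, vx, vy, vz):
    """Single viewpoint, single time, as a function of wavelength: P(theta, phi, lambda)."""
    return P[:, :, :, t, vx, vy, vz]

def movie(P, vx, vy, vz):
    """Single viewpoint, over time, as a function of wavelength: P(theta, phi, lambda, t)."""
    return P[:, :, :, :, vx, vy, vz]

# A "holographic movie" keeps all seven axes: the full plenoptic function itself.
img = grayscale_snapshot(P, t=0, vx=1, vy=2, vz=3)  # 64 x 128 grayscale image
```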
Slide 17
Ray reuse: along an infinite line, assume light is constant (vacuum, non-dispersive medium). 4D = 2D direction + 2D position. (Slide by Rick Szeliski and Michael Cohen)

Slide 18
Only need the plenoptic surface.

Slide 19
Synthesizing novel views. (Slide by Rick Szeliski and Michael Cohen)

Slide 20
Lumigraph / Lightfield: outside the convex space around the "stuff", the space is empty and the function is 4D. (Slide by Rick Szeliski and Michael Cohen)

Slide 21
Lumigraph organization: 2D position, 2D direction (s, θ). (Slide by Rick Szeliski and Michael Cohen)

Slide 22
Lumigraph organization: 2D position, two-plane parameterization (s, u). (Slide by Rick Szeliski and Michael Cohen)

Slide 23
Lumigraph organization: 2D position, two-plane parameterization: (s, t) on one plane, (u, v) on the other. (Slide by Rick Szeliski and Michael Cohen)

Slide 24
Lumigraph organization: hold (s, t) constant, let (u, v) vary: an image. (Slide by Rick Szeliski and Michael Cohen)

Slide 25
Lumigraph / Lightfield.

Slide 26
Lumigraph capture, idea 1: move the camera carefully over the (s, t) plane using a gantry; see the Lightfield paper. (Slide by Rick Szeliski and Michael Cohen)

Slide 27
Lumigraph capture, idea 2: move the camera anywhere, find the camera pose from markers, and rebin; see the Lumigraph paper. (Slide by Rick Szeliski and Michael Cohen)

Slide 28
Lumigraph rendering: for each output pixel, determine (s, t, u, v), then either use the closest discrete RGB sample or interpolate nearby values. (Slide by Rick Szeliski and Michael Cohen)

Slide 29
Lumigraph rendering. Nearest: take the closest s and closest u and draw it. Blend: quadrilinear interpolation of the 16 nearest samples. (A small interpolation sketch appears after Slide 39.) (Slide by Rick Szeliski and Michael Cohen)

Slide 30
Lumigraph rendering: use rough depth information to improve rendering quality.

Slide 31
Lumigraph rendering: use rough depth information to improve rendering quality.

Slide 32
Lumigraph rendering: without using geometry versus using approximate geometry.

Slide 33
Light field results (Marc Levoy): Light Field Viewer.

Slide 34
Stanford multi-camera array: 640 x 480 pixels x 30 fps x 128 cameras; synchronized timing; continuous streaming; flexible arrangement.

Slide 35
"Light field photography using a handheld plenoptic camera", Ren Ng, Marc Levoy, Mathieu Brédif, Gene Duval, Mark Horowitz and Pat Hanrahan. Commercialized as Lytro.

Slide 36
Conventional versus light field camera. (Marc Levoy)

Slide 37
Conventional versus light field camera: uv-plane and st-plane. (Marc Levoy)

Slide 38
Prototype camera: 4000 x 4000 pixels / (292 x 292 lenses) = 14 x 14 pixels per lens. Contax medium format camera, Kodak 16-megapixel sensor, Adaptive Optics microlens array, 125 micron square-sided microlenses.

Slide 39
(figure only)
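Slides 28 and 29 describe the core lumigraph rendering step: for each output pixel determine (s, t, u, v) and either take the nearest stored sample or blend the 16 nearest with quadrilinear interpolation. Below is a minimal sketch of that blending, assuming the light field is stored as a dense 4D NumPy array indexed by integer (s, t, u, v); the array layout and names are assumptions for illustration, not code from the Lightfield or Lumigraph papers:

```python
import numpy as np

def lightfield_sample(L, s, t, u, v):
    """Quadrilinear interpolation of a 4D light field L[s, t, u, v]
    at fractional coordinates (s, t, u, v): blend the 16 nearest samples."""
    coords = np.array([s, t, u, v], dtype=float)
    lo = np.floor(coords).astype(int)
    lo = np.clip(lo, 0, np.array(L.shape) - 2)   # keep lo and lo+1 in range
    frac = coords - lo

    result = 0.0
    for corner in range(16):                     # 2^4 corners of the 4D cell
        idx, weight = [], 1.0
        for axis in range(4):
            bit = (corner >> axis) & 1
            idx.append(lo[axis] + bit)
            weight *= frac[axis] if bit else (1.0 - frac[axis])
        result += weight * L[tuple(idx)]
    return result

# Usage: L would be a captured lumigraph; here a random stand-in.
L = np.random.rand(16, 16, 32, 32)               # (s, t, u, v)
value = lightfield_sample(L, 3.2, 7.8, 10.5, 20.1)
```

The "nearest" variant from Slide 29 corresponds to simply rounding (s, t, u, v) to the closest integers instead of blending.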
Slide 40
Digitally stopping down the aperture (i.e. decrease aperture size, increase depth of field): stopping down = summing only the central portion of each microlens. (Marc Levoy)

Slide 41
Digital refocusing: refocusing = summing windows extracted from several microlenses. (Marc Levoy)

Slide 42
Example of digital refocusing. (Marc Levoy)

Slide 43
Digitally moving the observer: moving the observer = moving the window we extract from the microlenses. (Marc Levoy)

Slide 44
Example of moving the observer. (Marc Levoy) (A small sketch of these plenoptic-camera operations appears after the transcript.)

Slide 45
Unstructured Lumigraph Rendering: a further enhancement of lumigraphs. Do not use the two-plane parameterization; store the original pictures (no resampling); hand-held camera moved around an environment. (Marc Levoy)

Slide 46
Unstructured Lumigraph Rendering: to reconstruct views, assign a penalty to each original ray (distance to the desired ray using approximate geometry, resolution, feathering near image edges), construct a camera blending field, and render using texture mapping.

Slide 47
Unstructured Lumigraph Rendering: blending field and rendering. (Marc Levoy)

Slide 48
Unstructured Lumigraph Rendering: results. (Marc Levoy)

Slide 49
3D Lumigraph: one row of the (s, t) plane, i.e., hold t constant.

Slide 50
3D Lumigraph: one row of the (s, t) plane, i.e., hold t constant; thus (s, u, v), a row of images.

Slide 51
P(x, t). (By David Dewey)

Slide 52
2D: Image. What is an image? All rays through a point. Panorama? (Slide by Rick Szeliski and Michael Cohen)

Slide 53
Image: image plane, 2D position.

Slide 54
Spherical panorama: all light rays through a point form a panorama, totally captured in a 2D array, P(θ, φ). Where is the geometry??? See also: 2003 New Year's Eve, http://www.panoramas.dk/fullscreen3/f1.html

Slide 55
Other ways to sample the plenoptic function. Moving in time: spatio-temporal volume P(θ, φ, t), useful to study temporal changes. Long an interest of artists: Claude Monet, Haystacks studies.

Slide 56
Time-lapse: spatio-temporal volume P(θ, φ, t). Factored Time Lapse Video; Google/Time Magazine Time Lapse.

Slide 57
Space-time images: other ways to slice the plenoptic function (x, y, t).

Slide 58
Other light field acquisition devices: spherical motion of a camera around an object samples the space of directions uniformly; a second arm to move the light source lets us measure the BRDF.

Slide 59
The Theatre Workshop Metaphor (Adelson & Pentland, 1996): the desired image is produced by a painter, a lighting designer, and a sheet-metal worker.

Slide 60
Painter (images).

Slide 61
Lighting designer (environment maps). Show the Naimark SF MOMA video: http://www.debevec.org/Naimark/naimark-displacements.mov

Slide 62
Sheet-metal worker (geometry): let surface normals do all the work!

Slide 63
Working together: want to minimize cost; each one does what's easiest for him. Geometry: big things. Images: detail. Lighting: illumination effects. Clever Italians.
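Slides 40 through 44 describe the three basic plenoptic-camera operations: stopping down (sum only the central portion of each microlens), moving the observer (extract the same window from every microlens), and refocusing (sum shifted windows from several microlenses). The sketch below assumes the raw capture has already been rearranged into a 4D array LF[s, t, u, v], with (s, t) indexing the microlens and (u, v) the pixel under it; the layout, the names, and the integer shift-and-add refocusing step are simplifying assumptions, not the paper's implementation:

```python
import numpy as np

def move_observer(LF, u, v):
    """Synthesize a sub-aperture view: take the same pixel (u, v) under every microlens."""
    return LF[:, :, u, v]

def stop_down(LF, radius):
    """Digitally stop down: average only the central window of each microlens."""
    cu, cv = LF.shape[2] // 2, LF.shape[3] // 2
    window = LF[:, :, cu - radius:cu + radius + 1, cv - radius:cv + radius + 1]
    return window.mean(axis=(2, 3))

def refocus(LF, alpha):
    """Digital refocusing by shift-and-add: shift each sub-aperture view in
    proportion to its (u, v) offset from the lens center, then average.
    Integer shifts via np.roll keep the sketch short; real implementations
    use sub-pixel shifts with interpolation."""
    S, T, U, V = LF.shape
    cu, cv = U // 2, V // 2
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            shift = (int(round(alpha * (u - cu))), int(round(alpha * (v - cv))))
            out += np.roll(move_observer(LF, u, v), shift, axis=(0, 1))
    return out / (U * V)

# Usage with a random stand-in for a captured light field (the prototype on Slide 38
# has 292 x 292 lenses with 14 x 14 pixels each; much smaller here).
LF = np.random.rand(32, 32, 14, 14)
photo = refocus(LF, alpha=0.5)
```

Varying alpha sweeps the synthetic focal plane, and alpha = 0 reduces to plain averaging over the full aperture.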