Edward (“Ted”) Adelson
• Professor of Vision Science in MIT Dept. of Brain and Cognitive Sciences; member of CSAIL.
• B.A. in Physics and Philosophy (Yale); Ph.D. in Experimental Psychology (Univ. of Michigan); postdoc at NYU on motion perception.
• RCA Sarnoff Labs, 1981-86; MIT Media Lab, 1987-95.
• Interests: human vision, machine vision, image processing, computer graphics... anything involving images!
• Current interest: snapshots!
Where am I coming from?
Gaussian/Laplacian Pyramid (with Burt; 1981, 1983)
Multiscale image coding, denoising, merging, and inpainting (with Burt and others, 1980s).
Steerable Filters (with Freeman 1991)
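The pyramid machinery above can be sketched compactly. The following is a minimal NumPy illustration, not the Burt-Adelson implementation: a separable binomial blur stands in for the REDUCE step and a crude nearest-neighbor expand stands in for EXPAND, but the decompose/reconstruct round trip is still exact by construction:

```python
import numpy as np

def downsample(img):
    """REDUCE stand-in: separable [1, 4, 6, 4, 1]/16 blur, then drop every other pixel."""
    k = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, blurred)
    return blurred[::2, ::2]

def upsample(img, shape):
    """EXPAND stand-in: nearest-neighbor duplication, cropped back to `shape`."""
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Return [band_0, ..., band_{levels-1}, low-pass residual]."""
    pyr, current = [], img.astype(float)
    for _ in range(levels):
        small = downsample(current)
        pyr.append(current - upsample(small, current.shape))  # detail band
        current = small
    pyr.append(current)  # low-pass residual
    return pyr

def reconstruct(pyr):
    """Invert the pyramid; exact because each band stores what downsampling lost."""
    img = pyr[-1]
    for band in reversed(pyr[:-1]):
        img = band + upsample(img, band.shape)
    return img
```

Because each detail band records exactly the information the blur-and-subsample step discards, summing bands back up recovers the original image bit-for-bit, whatever filters are used.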
Plenoptic function (with Bergen 1991)
Layered motion analysis (with Wang, 1992)
Plenoptic Camera (with Wang, 1994)
Imaging: gather evidence, then render image.
• Traditional photography. Gathering evidence: light strikes film in the camera. Rendering: develop and print.
• Examples with a less direct route: tomography, coded-aperture imaging, range sensing.
• But even for snapshots we can gather evidence across space and time.
Using evidence gathered over time.
• Original image of Sarah: cute face, but bad lighting.
• Do we have prior evidence of what she looks like under different lighting? Dig through dozens of photos, frames from a video, a cell phone pic...
• Extract the low frequencies from a well-lit example and add them to the luminance component.
• Improved lighting.
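A minimal sketch of that low-frequency lighting transfer, assuming grayscale luminance arrays and using repeated [1, 2, 1]/4 smoothing as a stand-in low-pass filter (the slides do not specify the actual filter):

```python
import numpy as np

def lowpass(img, passes=4):
    """Crude low-pass: repeated separable [1, 2, 1]/4 smoothing (an assumption;
    any wide Gaussian-like blur would serve)."""
    k = np.array([1, 2, 1], dtype=float) / 4.0
    out = np.asarray(img, dtype=float)
    for _ in range(passes):
        out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, out)
        out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, out)
    return out

def transfer_lighting(target_lum, reference_lum, passes=4):
    """Keep the target's high-frequency detail; borrow the reference's
    low-frequency (illumination) band."""
    detail = target_lum - lowpass(target_lum, passes)
    return detail + lowpass(reference_lum, passes)
```

The idea is that slowly varying luminance mostly encodes illumination while high frequencies carry facial detail, so swapping only the low band relights the face without blurring it.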
My camera should learn about my friends and family.
Instead of solving the general problem of analyzing and rendering faces, just learn a lot about Sarah's face. Even without fancy models, lots of 2-D examples under different lighting, pose, and expression will do.
My camera has plenty of experience looking at my friends and family; it should use that experience to infer better pictures.
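In that spirit, the example-lookup step could be as simple as nearest-neighbor matching over stored examples. Everything below (the descriptor vectors, the library format) is a hypothetical placeholder, not a method from the talk:

```python
import numpy as np

def best_example(query_desc, library):
    """Return the stored image whose descriptor is nearest the query.
    `library` is a list of (descriptor, image) pairs; the descriptors are
    hypothetical pose/expression/lighting feature vectors."""
    dists = [np.linalg.norm(query_desc - desc) for desc, _ in library]
    return library[int(np.argmin(dists))][1]
```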
Gathering evidence across space: plenoptic/lightfield imaging.
• Snapshots again: how do we get multiple views of a wedding or birthday party?
• Fill the room with cameras, all communicating wirelessly.
• But how?
Name-tag cams? Balloon cams? Party-hat cams?
Somehow... get lots of cameras out there. Then use image-based rendering (IBR) to move the viewpoint around and get good shots.
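The slides leave the IBR step unspecified; one classic stand-in is synthetic-aperture shift-and-add over the camera array, as in light-field rendering. The function and its parameters below are illustrative assumptions, not the talk's method:

```python
import numpy as np

def synth_view(views, offsets, alpha):
    """Synthetic-aperture shift-and-add: shift each camera's image by an
    amount proportional to its offset from the array center (`alpha` selects
    the scene plane brought into focus), then average.  `views[i]` is an
    HxW image captured at 2-D offset `offsets[i]`, measured in pixels of
    disparity per unit alpha; all names here are illustrative."""
    acc = np.zeros_like(np.asarray(views[0], dtype=float))
    for img, (dx, dy) in zip(views, offsets):
        shifted = np.roll(np.asarray(img, dtype=float),
                          (int(round(alpha * dy)), int(round(alpha * dx))),
                          axis=(0, 1))
        acc += shifted
    return acc / len(views)
```

Objects on the chosen plane align across the shifted views and stay sharp, while everything else averages into blur, which is one simple way to "move the viewpoint" after the fact.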