Journal of Physics: Conference Series 139 (2008) 012017
doi:10.1088/1742-6596/139/1/012017 (Open Access)
Seventh Euro–American Workshop on Information Optics
© 2008 IOP Publishing Ltd

Three dimensional multi-perspective imaging with randomly distributed sensors

Mehdi DaneshPanah and Bahram Javidi
Department of Electrical and Computer Engineering, University of Connecticut, Storrs, CT 06269, USA

E-mail: bahram@engr.uconn.edu

Abstract. In this paper we review a three dimensional (3D) passive imaging system that exploits visual information captured from the scene from multiple perspectives to reconstruct the scene voxel by voxel in 3D space. The primary contribution of this work is a computational reconstruction scheme based on randomly distributed sensor locations in space. Virtually all multi-perspective techniques (e.g. integral imaging, synthetic aperture integral imaging, etc.) carry the implicit assumption that the sensors lie on a simple regular pickup grid. Here we relax this assumption and suggest a computational reconstruction framework that contains the available methods as special cases. The importance of this work is that it enables three dimensional imaging technology to be deployed in a multitude of novel application domains, such as 3D aerial imaging, collaborative imaging and long range 3D imaging, where sustaining a regular pickup grid is not possible and/or the parallax requirements call for an irregular or sparse synthetic aperture mode. Although the sensors can be distributed in any random arrangement, we assume that the pickup position is measured at the time of capture of each elemental image. We demonstrate the feasibility of the proposed methods with experimental results.

1. Introduction
Traditionally, two dimensional imaging systems have been the primary method of sensing the world visually. However, in recent years, with growing demand for advanced imaging methods, interest in three dimensional (3D) imaging and information processing systems has increased [1]. The common goal of most of these techniques is to capture, display and/or process one or more of the physiological depth cues, i.e. to sense the world with 3D perception. Among the various techniques that can quantitatively measure depth cues, one major thrust is Integral Imaging (II) [2] (also known as integral photography), which is based on the original work of Lippmann with lenticular sheets [3] and is classified as a passive multi-perspective 3D imaging system. The different perspective images are known as elemental images, each one captured from a slightly different perspective. This technique is promising compared to alternatives such as holography thanks to its interesting features, including continuous viewing angle, full parallax and full color display without the need for coherent sources of illumination, and its relative simplicity of implementation.

Integral Imaging based optical displays provide autostereoscopic images or video [4] with no need for special eyewear to perceive 3D cues. This is made possible essentially by recording the intensity and direction of light rays, i.e. the light field [5], emanating from the scene in the capture stage, and later back-propagating rays with the same parameters in the display stage. Developments in this venue include aberration reduction [6], extension of the depth of field [7], and the use of gradient index lens arrays [8] to handle the orthoscopic to pseudoscopic conversion, as well as resolution improvement methods, including the moving array lenslet technique (MALT) [9] and electronically synthesized moving Fresnel lenslets


[10]. Nevertheless, the optical reconstruction approach suffers from low resolution, low sampling rate, quality degradation due to diffraction, limited dynamic range, and low overall visual quality, partly due to the limitations of electro-optical projection devices.

On the other hand, computational reconstruction techniques offer a more flexible venue to extract and exploit 3D cues by digital manipulation of integral image data [11, 12, 13, 14]. This amounts to calculating the light ray distribution over a particular plane in the scene from the image information in the captured elemental images. The extracted information is then displayed on regular 2D LCDs or manipulated further to create a depth profile of the scene. Computational techniques are particularly interesting for object recognition and classification, surface profiling, digital refocusing and 3D artwork.

Conventional Integral Imaging was historically developed around lenticular sheets and lenslet arrays. However, limitations in the field of view, the resolution-parallax compromise, and the high aberration and low imaging resolution of each lenslet have led to the development of Synthetic Aperture Integral Imaging (SAII) [15, 16, 17]. In this technique, a conventional imaging sensor (e.g. a camera) scans an area in a regular grid pattern, and each elemental image is acquired in full frame. This yields a larger field of view and highly corrected, high resolution elemental images. In this pickup process, a natural question is what happens if the elemental images are taken neither on a regular grid nor on the same plane.

In this paper we overview the approach in [17], a generalization of the Integral Imaging method to a 3D pickup geometry with randomly distributed sensor coordinates in all three spatial dimensions. For 3D reconstruction, a finite number of sensors with known coordinates are randomly selected from within this pickup volume. In particular, we study the case of Synthetic Aperture Integral Imaging (SAII) where the sensor distribution is not controlled, that is, it is random, but the locations of the sensors in space are known. In this study the optical axes of the cameras are assumed parallel, but each sensor has a different distance from the 3D object. A computational reconstruction framework based on the back-projection method is developed using a variable affine transform between the image space and the object space. It can be shown that the affine coordinate transformation corresponds to an orthographic projection, similar to what is needed in light back-projection based II reconstruction.

2. Sparse sensor configuration
Virtually all instances of integral imaging have been investigated under the assumption that elemental images are captured on a known geometrical surface (e.g. planar or concave) and in a regular pattern. In what follows, a generic scheme for integral imaging is proposed in which the pickup locations are random and/or sparse.

A synthetic aperture case is investigated in which each sensor is positioned independently and randomly in 3D space, looking at the scene [see Fig. 1]. The pickup location of the $i$-th elemental image, $\mathbf{P}_i$, is measured in a universal frame of reference in Cartesian coordinates. The origin of the frame of reference is rather arbitrary, but it has to be fixed during all position measurements. However, since the proposed mathematical image reconstruction framework relies only on the relative distances of the elemental images in space, it stays consistent if the origin moves and all position measurements are adjusted accordingly. Also, a local coordinate system is defined for each sensor, with its origin lying at the sensor's midpoint.

We assume that the sensor size $(L_x, L_y)$, the effective focal length $g_i$ of the $i$-th imaging optics, and the position of each sensor at the pickup stage are known. In our analysis we make no assumption on the distribution of elemental images in space, so as to achieve a generic reconstruction scheme. To demonstrate the feasibility of the proposed technique, the random pickup locations $\mathbf{P}_i = (x_i, y_i, z_i)$ are chosen from three independent uniform random variables. Clearly, the actual distribution of elemental images is dictated by the specific application of interest; we use a uniform distribution to give all locations in the pickup volume an equal chance of being selected as sensor positions. The reference elemental image, i.e. the elemental image from whose perspective the reconstruction is desired, is denoted $E_0$.
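As an illustration, a minimal sketch (Python/NumPy; the variable names and seed are ours, and the bounds are those used in the experiments of Section 4) of drawing such a set of pickup positions from three independent uniform distributions:

    import numpy as np

    # Sketch: draw N = 100 random pickup positions P_i = (x_i, y_i, z_i)
    # from three independent uniform random variables. Bounds (cm)
    # follow the experiment of Section 4; the seed is arbitrary.
    rng = np.random.default_rng(0)
    N = 100
    P = np.column_stack([
        rng.uniform(-4.0, 4.0, N),   # x parallax range
        rng.uniform(-4.0, 4.0, N),   # y parallax range
        rng.uniform(25.0, 27.0, N),  # z (distance) range
    ])
    E0_index = 0  # the reference elemental image E_0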


Figure 1. Layout of the sparse Integral Imaging sensor. The reference and $i$-th elemental images are shown with their respective fields of view in blue and green.

3. Three dimensional image reconstruction
Several methods have been investigated for computational reconstruction of II data. In the Fourier domain, digital refocusing has been proposed [18] by applying the Fourier slice theorem to 4D light fields. This technique is relatively fast, with complexity $O(n^2 \log n)$, $n$ being the total number of samples. However, the method is intrinsically based on the assumption of periodic sampling of the light field, and thus may require heuristic adjustments if the elemental images are not arranged regularly. In the spatial domain, a fast ray tracing based reconstruction from the observer's point of view has been proposed [11], with complexity $O(m)$, $m$ being the number of elemental images. Although fast and simple, this method yields low resolution reconstructions. Yet another spatial domain reconstruction method is based on a series of 2D image back-projections [12]. This method offers much better reconstruction resolution compared to [11], at the expense of an algorithm with complexity $O(n)$, since usually $n \gg m$. For instance, $m$ is typically in the range of 100-200 elemental images, while $n$ can be as large as $10^7$ pixels. In the context of this paper we stay within the spatial domain in order to provide a generic reconstruction algorithm with minimum assumptions about the pickup geometry.

Computational reconstruction based on back-projection [12, 13] involves certain assumptions that are only valid for lenslet based integral imaging systems. In this section we develop a more generic reconstruction method based on the affine transform, which has its roots in rigorous affine space theory and is popularly used for orthographic image projection.

The relationship between the local frame of reference $\Psi_i$ and the global frame of reference $\Phi$ can be written, based on affine transforms, as

\[
\begin{bmatrix} \Phi \\ 1 \end{bmatrix}
=
\begin{bmatrix} \mathbf{A}_i & \mathbf{P}_r - \mathbf{P}_i \\ 0 \;\; 0 \;\; 0 & 1 \end{bmatrix}
\times
\begin{bmatrix} \Psi_i \\ 1 \end{bmatrix}
\tag{1}
\]

where $\Phi = [x \; y \; z]^T$ and $\Psi_i = [u_i \; v_i \; w_i]^T$ denote points in the reconstruction space and in the $i$-th elemental image space, respectively. The matrix $\mathbf{A}_i$ and the translation vector $\mathbf{P}_i$ can be written as

\[
\mathbf{A}_i =
\begin{bmatrix} M_i & 0 & 0 \\ 0 & M_i & 0 \\ 0 & 0 & 1 \end{bmatrix},
\qquad
\mathbf{P}_i =
\begin{bmatrix} p_i^x \\ p_i^y \\ p_i^z \end{bmatrix}
\tag{2}
\]

in which $\mathbf{P}_i$ denotes the position of the $i$-th sensor and $M_i = z_i/g_i$ is the associated magnification between the $i$-th elemental image and its projection at distance $z_i = z_r + p_i^z - p_0^z$ [see Fig. 1]. Also, the position of the midpoint of the plane that we are interested in reconstructing is given by $\mathbf{P}_r = [\, p_0^x \;\; p_0^y \;\; p_0^z - z_r \,]^T$.
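To make Eqs. (1)-(2) concrete, here is a minimal sketch of the homogeneous affine transform; the function name and layout are ours, not from the paper:

    import numpy as np

    def local_to_global(Psi_i, P_i, P_r, g_i):
        """Map a point Psi_i = (u_i, v_i, w_i) in the i-th local frame to
        the global frame Phi = (x, y, z) via the affine transform of
        Eq. (1). P_i is the sensor position, P_r the reconstruction-plane
        midpoint (p_0^x, p_0^y, p_0^z - z_r), g_i the effective focal
        length; all lengths in one unit."""
        z_i = P_i[2] - P_r[2]                   # z_i = z_r + p_i^z - p_0^z
        M_i = z_i / g_i                         # magnification of Eq. (2)
        T = np.eye(4)
        T[:3, :3] = np.diag([M_i, M_i, 1.0])    # A_i of Eq. (2)
        T[:3, 3] = np.asarray(P_r) - np.asarray(P_i)
        return (T @ np.append(Psi_i, 1.0))[:3]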

If the $i$-th elemental image is given by $E_i = E_i(u_i, v_i, w_i)$, the relationship between local and global coordinates, using Eq. (1), can be written explicitly as

\[
\begin{aligned}
x &= M_i u_i - p_i^x + p_0^x \\
y &= M_i v_i - p_i^y + p_0^y \\
z &= w_i - z_i = p_i^z - z_i
\end{aligned}
\tag{3}
\]

Using Eq. (3), the projection of $E_i$ on the reconstruction plane, i.e. the expression of the back-projected $i$-th elemental image in the $\Phi$ coordinate system at the plane $z = p_0^z - z_r$, is

\[
BP_i(x, y, z) = E_i\!\left( \frac{x + p_i^x - p_0^x}{M_i},\; \frac{y + p_i^y - p_0^y}{M_i},\; z + z_i \right),
\qquad \text{where } M_i = \frac{z_i}{g_i}
\tag{4}
\]

The final reconstruction plane is obtained by superimposing the back-projected elemental images with a common field of view as

\[
R(x, y, z_r) = N^{-1} \sum_{i=0}^{N-1} BP_i(x,\, y,\, p_0^z - z_r)
\tag{5}
\]

where $N$ is the total number of elemental images. The described reconstruction technique, like its counterpart in [12], generates high resolution reconstructions; however, it is generalized to deal with elemental images captured at arbitrary locations in space.
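A compact sketch of the pipeline of Eqs. (4)-(5) follows, assuming grayscale elemental images, the paper's parallel-axes geometry, and bilinear sampling via scipy.ndimage.map_coordinates; the function signature, names and grid conventions are ours, for illustration only:

    import numpy as np
    from scipy.ndimage import map_coordinates

    def reconstruct_plane(images, P, z_r, g, Lx, Ly):
        # images : list of N 2-D arrays, the elemental images E_i;
        #          images[0] is the reference image E_0 taken at P[0]
        # P      : (N, 3) pickup positions; z_r : reconstruction distance
        #          from the reference sensor; g : focal length;
        #          Lx, Ly : physical sensor size (all lengths in one unit)
        h, w = images[0].shape
        # Global (x, y) grid on the reconstruction plane, sized by the
        # reference sensor's magnified footprint M_0 * (Lx, Ly)
        M0 = z_r / g
        X, Y = np.meshgrid(np.linspace(-M0 * Lx / 2, M0 * Lx / 2, w),
                           np.linspace(-M0 * Ly / 2, M0 * Ly / 2, h))
        R = np.zeros((h, w))
        for Pi, Ei in zip(P, images):
            z_i = z_r + Pi[2] - P[0][2]  # sensor-to-plane distance
            M_i = z_i / g                # magnification of Eq. (4)
            # Local image coordinates of each plane point, per Eq. (4)
            u = (X + Pi[0] - P[0][0]) / M_i
            v = (Y + Pi[1] - P[0][1]) / M_i
            # Physical coordinates -> pixel indices, then sample E_i
            rows = (v / Ly + 0.5) * (h - 1)
            cols = (u / Lx + 0.5) * (w - 1)
            R += map_coordinates(Ei, [rows, cols], order=1, cval=0.0)
        return R / len(images)           # Eq. (5): average over N images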

4. Results
Experimental results are demonstrated with toy models of a tank and a sports car forming a 3D scene. The tank can be enclosed in a box of size $5 \times 2.5 \times 2\,\mathrm{cm}^3$, whereas the car model fits in a volume of $4 \times 3 \times 2.5\,\mathrm{cm}^3$. The tank and the car are placed approximately 19 cm and 24 cm away from the reference elemental image, respectively.

To obtain random pickup positions, a set of 100 positions $\mathbf{P}_i = (p_i^x, p_i^y, p_i^z)$ is obtained from three uniform random variable generators. The parallax in $x$ and $y$ is set to $(-4\,\mathrm{cm}, 4\,\mathrm{cm})$ and in $z$ to $(25\,\mathrm{cm}, 27\,\mathrm{cm})$, assuming the desired reconstruction range lies within $[19\,\mathrm{cm}, 24\,\mathrm{cm}]$ of the reference sensor. The $i$-th elemental image is then taken with a digital camera at its associated random pickup position $\mathbf{P}_i$. The focal lengths of all lenses are set equal, i.e. $g_i = 25\,\mathrm{mm}$. The CMOS sensor size is $22.7 \times 15.1\,\mathrm{mm}^2$ with $7\,\mu\mathrm{m}$ pixel pitch. The field of view (FOV) of each elemental image is then $48^\circ \times 33^\circ$ in the horizontal and vertical directions, respectively, which covers an area of $18 \times 12\,\mathrm{cm}^2$ at 20 cm from the pickup location in the object space. A single camera is translated between the acquisition points such that it passes each location only once, and at each location a full frame image of $3072 \times 2048$ pixels is captured. The camera is translated in $x$, $y$, $z$ using off-the-shelf translation components.
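As a consistency check (our arithmetic, using only the numbers above), the quoted FOV and footprint follow from the sensor size and focal length:
\[
\mathrm{FOV}_h = 2\arctan\!\frac{L_x}{2g} = 2\arctan\!\frac{22.7}{2 \times 25} \approx 48.8^\circ,
\qquad
\mathrm{FOV}_v = 2\arctan\!\frac{15.1}{2 \times 25} \approx 33.6^\circ,
\]
and the footprint at $200\,\mathrm{mm}$ is $L_x d / g = 22.7 \times 200 / 25 \approx 182\,\mathrm{mm}$ horizontally and $15.1 \times 200 / 25 \approx 121\,\mathrm{mm}$ vertically, i.e. roughly $18 \times 12\,\mathrm{cm}^2$, matching the values quoted above.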

The 100 perspective images are used in Eqs. (4) and (5) to reconstruct the 3D scene at different distances from the viewpoint $\mathbf{P}_0 = (-0.2, 1.8, 25.9)\,\mathrm{cm}$, with $z_r$ varying over $[160\,\mathrm{mm}, 300\,\mathrm{mm}]$. Two such reconstruction planes are shown in Fig. 3, at $z_r = 185\,\mathrm{mm}$ and $z_r = 240\,\mathrm{mm}$, respectively.


Figure 2. Four elemental images taken from positions (a) $\mathbf{P}_1 = (-3.9, -3.9, 26.8)$, (b) $\mathbf{P}_2 = (1.4, 3.4, 25.0)$, (c) $\mathbf{P}_3 = (2.9, -3.2, 25.7)$ and (d) $\mathbf{P}_4 = (-2.9, 2.1, 26.4)$, all in cm.

Figure 3. Two reconstructed images from viewpoint $\mathbf{P}_0 = (-0.2, 1.8, 25.9)\,\mathrm{cm}$ at (a) $z_r = 185\,\mathrm{mm}$ and (b) $z_r = 240\,\mathrm{mm}$.

5. Conclusion
In this paper we overviewed a generalization of conventional Integral Imaging to encompass 3D multi-perspective imaging with arbitrary 3D pickup geometry and randomly distributed sensor coordinates in all three spatial dimensions [17]. A finite number of sensors with known coordinates are randomly selected from within a pickup volume and are used for 3D object reconstruction. In this approach, while the sensor distribution is random, that is, not controlled, the locations of the sensors in space are assumed to be known at the reconstruction stage. A computational reconstruction framework based on the back-projection method is developed using a variable affine transform between the image space and the object space.


References
[1] Okoshi T 1980 Proc. IEEE 68(5) 548–64
    Yang L, McCormick M and Davies N 1988 Appl. Opt. 27 4529–34
    Igarishi Y, Murata H and Ueda M 1978 Jpn. J. Appl. Phys. 17 1683–84
    Travis A R L 1997 Proc. IEEE 85(11) 1817–32
    Lipton L 1982 Foundations of Stereoscopic Cinema: A Study in Depth (New York: Van Nostrand Reinhold)
    Javidi B and Okano F 2002 Three Dimensional Television, Video and Display Technologies (Berlin: Springer)
    Javidi B 2006 Optical Imaging Sensors and Systems for Homeland Security Applications (New York: Springer)
[2] Okano F, Hoshino H, Arai J and Yuyama I 1997 Appl. Opt. 36 1598–603
    Okano F, Arai J, Hoshino H and Yuyama I 1999 Opt. Eng. 38 1072–78
    Stern A and Javidi B 2006 Proc. IEEE 94(3) 591–607
[3] Lippmann M G 1908 Comptes Rendus de l'Académie des Sciences 146 446–51
    Sokolov A P 1911 Autostereoscopy and Integral Photography by Professor Lippmann's Method (Moscow: Moscow State Univ. Press)
    Ives H E 1931 J. Opt. Soc. Am. 21 171–76
[4] Perlin K, Paxia S and Kollin J S 2000 Proc. 27th Ann. Conf. on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley) pp 319–26
[5] Levoy M and Hanrahan P 1996 Proc. ACM SIGGRAPH (New Orleans) pp 31–42
    Levoy M 2006 IEEE Computer 39 46–55
[6] Martínez R, Pons A, Saavedra G, Martínez-Corral M and Javidi B 2006 Opt. Express 14 9657–63
[7] Martínez-Cuenca R, Saavedra G, Martínez-Corral M and Javidi B 2004 Opt. Express 12 5237–42
[8] Arai J, Okano F, Hoshino H and Yuyama I 1998 Appl. Opt. 37 2034–45
[9] Jang J-S and Javidi B 2002 Opt. Lett. 27 324–26
[10] Jang J-S and Javidi B 2002 Opt. Lett. 27 1767–69
[11] Arimoto H and Javidi B 2001 Opt. Lett. 26 157–59
[12] Hong S-H, Jang J-S and Javidi B 2004 Opt. Express 12 483–91
[13] Hong S-H and Javidi B 2004 Opt. Express 12 4579–88
[14] Stern A and Javidi B 2003 Appl. Opt. 42 7036–42
[15] Jang J-S and Javidi B 2002 Opt. Lett. 27 1144–46
[16] Tavakoli B, DaneshPanah M, Javidi B and Watson E A 2007 Opt. Express 15 11889–902
[17] DaneshPanah M, Javidi B and Watson E A 2008 Opt. Express 16 6368–77
[18] Ng R 2005 Proc. ACM SIGGRAPH 24 735–44


Three dimensional multi perspective imaging withrandomly distrib uted sensors

Mehdi DaneshPanah and Bahram JavidiDeptartment of Electrical and Computer Engineering University of Connecticut Storrs CT 06269 USA

E-mail bahramengruconnedu

Abstract In this paper we review a three dimensional (3D) passive imaging system that exploits thevisual information captured from the scene from multiple perspectives to reconstruct the scene voxel byvoxel in 3D space The primary contribution of this work is to provide a computational reconstructionscheme based on randomly distributed sensor locations in space In virtually all of multi perspectivetechniques (eg integral imaging synthetic aperture integral imaging etc) there is an implicit assumptionthat the sensors lie on a simple regular pickup grid Here we relax this assumption and suggest acomputational reconstruction framework that unifies the available methods as its special cases Theimportance of this work is that it enables three dimensional imaging technology to be implemented in amultitude of novel application domains such as 3D aerial imaging collaborative imaging long range 3Dimaging and etc where sustaining a regular pickup grid is not possible andor the parallax requirements callfor a irregular or sparse synthetic aperture mode Although the sensors can be distributed in any randomarrangment we assume that the pickup position is measured at the time of caputre of each elemental imageWe demonstrate the feasibility of the methods proposed here by experimental results

1 IntroductionTraditionallly two dimensional imaging systems have been the primary method of sensing the worldvisually However in the recent years with growing demand for advanced imaging methods theinterest in three dimensional (3D) imaging and information processing systems has increased [1] Thecommon goal of most of these techniques is to capture display andor process one or more of thephysiological depth cues ie sensing the world with 3D perception Among various techniques whichcan quantitatively measure depth cues one major thrust is in Integral Imaging (II) [2] (also known asintegral photography) which is based on the original work of Lippmann with lenticular sheets [3] andis classified under passive multi-perspective 3D imaging systems The different perspective imagesare known as elemental images each one from a slightly different perspective This technique is apromising method compared to other techniques such as holography due to its interesting featuresincluding continuous viewing angle full parallax and full color display without the need for coherentsources of illumination and its relative simplicity of implementation

Integral Imaging based optical displays provide autosterioscopic image or video [4] with no need forspecial eyewear for preceiving 3D cues This is made possible essentially by recording the intensity anddirection of light rays ie the light field [5] emanated from the scene in the capture stage and laterbackpropagating the rays with the same parameters in the display stage Developments in this venueinclude aberation reduction [6] extention of depth of field [7] use of gradient index lens arrays [8]to handle the orthoscopic to pseudoscopic conversion also resolution improvement methods includinguse of moving lenslet technique (MALT) [9] and electronically synthesized moving Fresnel lenslets

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

ccopy 2008 IOP Publishing Ltd 1

[10] Nevertheless optical reconstruction approach suffers from lowresolution low sampling ratequality degradation due to diffraction limited dynamic range and low overall visual quality partly due tolimitation of electro-optical projection devices

On the other hand computational reconstruction techniques deliver a more flexible venue to extractand exploit 3D cues by digital manipulation of integral image data [11 12 13 14] This amounts tocalculation of the light ray distribution over a particular plane in the scene from the image informationin the captured elemental images The extracted infomration is then displayed on regular 2D LCDs ormanipulated further to create a depth profile from the scene Computational techniques are particularlyinteresting for object recognition and classification surface profiling digital refocusing 3D artwork andetc

Conventional Integral Imaging is historically developed based on lenticular sheets and lenslet arraysHowever limitations in the field of view resolution-parallax compromise high aberation and lowimaging resolution of each lenslet have led to development of Synthetic Aperture Integral Imaging (SAII)[15 16 17] In this technique a conventional imaging sensor (eg camera) scans an area in a regulargrid pattern and each elemental image is acquired in full frame This enables one to obtain larger field ofview highly corrected and high resolution elemental images In this pickup process a natural questionis what happens if the elemental images are taken neither on a regular grid nor on the same plane

In this paper we overview the approach in [17] for a generaliation to the Integral Imaging methodin order to acheive 3D pickup geometry with randomly distributed sensor coordinates in all three spatialdimensions For 3D reconstruction a finite number of sensors with known coordinates are randomlyselected from within this pick up volume In particular we will study the case of Synthetic ApertureIntegral Imaging (SAII) where the sensor distribution is not controlled that is it is random howeverthe locations of sensors in space are known In this study the optical axes of the cameras are assumedparallel but each sensor has a different distances from the 3D object A computational reconstructionframework based on the back projection method is developed using a variable affine transform betweenthe image space and the object space It can be shown that affine coordinate transformation correspondsto orthographic projection similar to what is needed in light back-projection based II reconstruction

2 Sparse sensor configurationVirtually all instances of integral imaging methods have been investigated under the assumption thatelemental images are captured on a known geometrical surface (eg planar concave and etc) and in aregular pattern In what follows a generic scheme for integral imaging is proposed in which the pickuplocation is random andor sparse

A synthetic aperture case is investigated in which each sensor is positioned independently andrandomly in 3D space looking at the scene [see Fig 1] The pickup location of theith elementalimageP i is measured in a universal frame of reference in Cartesian coordinates The origin of theframe of reference is rather arbitrary but it has to be fixed during all position measurements Howeversince the proposed mathematical image reconstruction framework merely relies on relative distance ofelemental images in space it stays consistent if the origin moves and all position measurements areadjusted accordingly Also a local coordinate system is defined for each sensor with its origin lying onthe sensorrsquos midpoint

We assume that the sensor size(LxLy) effective focal length of the i-th imaging opticsgi and theposition of each sensor from the pickup stage is known In our analysis we make no assumption on thedistribution of elemental images in the space to achieve a generic reconstruction scheme To demonstratethe feasibility of the proposed technique the random pickup locationsP i = (xi yi zi) are chosen fromthree independent uniform random variables Clearly the actual distribution of elemental images isdictated by the specific application of interest We have used uniform distribution to give all locations inthe pickup volume an equal chance to get selected as sensor positions The reference elemental imageie the elemental image from which perspective the reconstruction is desired is denoted asE0

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

2

Figure 1 Layout of the sparse Integral Imaging sensor The reference andith elemental images areshown with their respective field of view in blue and green

3 Three dimensional image reconstructionSeveral methods have been investigated for computational reconstruction of II data In the Fourierdomain digital refocusing has been proposed [18] by applying Fourier slice theorem in 4D light fieldsThis technique is relatively fast with complexity ofO(n2 logn) n being the total number of samplesHowever this method is intrinsically based on the assumption of periodic sampling of the light fieldand thus may require heuristic adjustments if elemental images are not ordered regularly In the spatialdomain a fast ray tracing based reconstruction from the observers point of view is proposed [11] withcomplexity ofO(m)m being the number of elemental images Although fast and simple this methodyields low resolution reconstructions Yet another spatial domain reconstruction method is based onseries of 2D image back projections [12] This method offers a much better reconstruction resolutioncomparing to [11] at the expense of an algorithm with complexity ofO(n) since usuallyn ≫ m Forinstancem is typically in the range of 100-200 elemental images whilen can be as large as 107 pixelsIn the context of this paper we stay within the spatial domain in order to provide a generic reconstructionalgorithm with minimum assumptions about the pickup geometry

Computational reconstruction based on back-projection [12 13] has certain assumptions which areonly valid for lenslet based integral imaging systems In this section we develop a more genericreconstruction method based on affine transform which has its roots in a rigorous affine space theoryand is popularly used for orthographic image projection

The relationship between the local frame of referenceΨi and global frame of referenceΦ can bewritten based on Affine transforms as

[

Φ1

]

=

[

A i P r minusP i

0 0 0 1

]

times

[

Ψi

1

]

(1)

whereΦ = [xyz]T andΨi = [ui vi wi ]T denote the points in the reconstruction space andi-th elemental

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

3

image space respectively MatrixA i andtranslation vectorP i can be written as

A i =

Mi 0 00 Mi 00 0 1

P i =

pxi

pyi

pzi

(2)

in which P i denotes the position ofi-th sensor andMi = zigi is the associated magnification betweenthe i-th elemental image and its projection at distancezi = zr + pz

i minus pz0 [See Fig 1] Also the position

of the midpoint of the plane that we are interested in reconstructing is given byP r =[

px0 py

0 pz0minuszr

]

If the i-th elemental image is given byE i = Ei(ui vi wi) the relationshiph between local and globalcoordinates using Eq [1] can be written explicitly as

x = Miui minus pxi + px

0y = Mivi minus py

i + py0

z= wi minuszi = pzi minuszi

(3)

Using Eq [3] the projection ofE i on the reconstruction plane that represents the the expression ofthe back-projectedith elemental image inΦ coordinate system at planez= pz

0minuszr is

BPi(xyz) =Ei

(

x+ pxi minus px

0

Miy+ py

i minus py0

Miz+zi

)

where Mi =zi

gi (4)

The final reconstruction plane is achieved by superimposing the back-projected elemental imageswith common field of view as

R(xyzr) = Nminus1Nminus1

sumi=0

BPi(xy pz0minuszr) (5)

whereN is the total number of elemental images The described reconstruction technique similar toits counterpart in [12] generates high resolution reconstructions however it is generalized to deal withelemental images captured in arbitrary locations in space

4 ResultsExperimental results are demonstrated with toy models of a tank and a sports car resembeling a 3D sceneThe tank can be enclosed in a box of the size (5times25times2cm3) whereas the car model fits in a volumeof 4times3times25cm3 Also the tank and the car are placed approximately 19cmand 24cmaway from thereference elemental image respectively

To obtain random pickup positions a set of 100 positionsP i = (pxi py

i pzi ) are obtained from three

uniform random variable generators Parallax inx andy is set to(minus4cm4cm)and forz to (25cm27cm)assuming the desirable reconstruction range within [19cm24cm] from the reference sensor Theithelemental image is then taken with a digital camera at its associated random pickup positionP i Thefocal length for all lenses are set to be equal iegi = 25mm The CMOS sensor size is 227times151mm2 with 7micrompixel pitch The field of view (FOV) for each elemental image is then 48

times33 in thehorizontal and vertical directions respectively which covers an area of 18times12cm2 at 20cmaway fromthe pickup location in the object space A single camera is translated between the acquisition points suchthat it only passes each location once while at each location a full frame image with size 3072times2048pixels is captured The camera is translated inxyzusing off the shelve translation components

The 100 perspective images are used in Eqs [4] and [5] to reconstruct the 3D scene at differentdistances from the viewpointP0 = (minus0218259)cmwith varyingzr isin [160mm300mm] Two of suchreconstruction planes are shown in Fig 3 atzr = 185mmandzr = 240mm respectively

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

4

Figure 2 Four elemental images taken from position (a)P1 = (minus39minus39268) (b) P2 =(1434250) (c)P3 = (29minus32257)and (d)P4 = (minus2921264)all in cm

Figure 3 Two reconstructed images from viewpointP0 = (minus0218259) at (a) zr = 185mm (b)zr = 240mm

5 ConclusionIn this paper we overviewed a generalization to conventional Integral Imaging to encompass 3D multindashperspective imaging with arbitrary 3D pickup geometry and randomly distributed sensor coordinatesin all three spatial dimensions [17] A finite number of sensors with known coordinates are randomlyselected from within a pick up volume and are used for 3D object reconstruction In this approach whilethe sensor distribution is random that is it is not controlled the locations of sensors in space are assumedto be known at the reconstruction stage A computational reconstruction framework based on the backprojection method is developed using a variable affine transform between the image space and the objectspace

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

5

References[1] Okoshi T 1980Proc of the IEEE68(5) 548-564

Yang L McCornick M and Davies N 1988Appl Optics274529-34Igarishi Y Murata H and Ueda M 1978Jpn J Appl Phys171683-84Travis A R L 1997Proc of the IEEE85(11) 1817-32Lypton L 1982Foundation of Stereoscopic Cinema A Study in Depth(New York Van Nostrand Reinhold)Javidi B and Okano F 2002Three Dimensional Television Video and Display Tech(Berlin Springer)Javidi B 2006Optical Imaging Sensors and Systems for Homeland Security App(New York Springer)

[2] Okano F Hoshino H Arai J and Yuyama I 1997Appl Optics361598ndash1603Okano F Arai J Hoshino H and Yuyama I 1999Opt Eng381072ndash78Stern A and Javidi B 2006Proc of the IEEE94(3) 591ndash607

[3] Lippmann M G 1908Comptes-rendus de lrsquoAcademie des Sciences146446ndash451Sokolov A P 1911Autostereoscpy and Integral Photography by Professor Lippmanns Method(Moscow Moscow State

Univ Press)Ives H E 1931Journal of Optical Soc of America A21171-176

[4] Perlin K Paxia S and Kollin J S 2000Proc of the 27th Ann Conf on Computer Graphics and Interactive Techniques319ndash326 (ACM PressAddison-Wesley)

[5] Levoy M Hanrahan P 1996Proc of ACM SIGGRAPH31ndash42 (New Orleans)Levoy M 2006IEEE Computer3946ndash55

[6] Martınez R Pons A Saavedra G Martinez-Corral M and Javidi B 2006Opt Express149657ndash63[7] Martınez-Cuenca R Saavedra G Martnez-Corral M and Javidi B 2004Opt Express125237ndash42[8] Arai J Okano F Hoshino H and Yuyama I 1998Appl Opt372034ndash45[9] Jang J S and Javidi B 2002Opt Lett27324ndash326

[10] Jang J S and Javidi B 2002Opt Lett271767ndash69[11] Arimoto H and Javidi B 2001Opt Lett26157ndash159[12] Hong S H Jang J S and Javidi B 2004Opt Express12483-491[13] Hong S H and Javidi B 2004Opt Express124579ndash88[14] Stern A and Javidi B 2003Appl Opt427036-42[15] Jang J S and Javidi B 2002Opt Lett271144ndash46[16] Tavakoli B DaneshPanah M Javidi B and Watson E A 2007Opt Express1511889ndash902[17] DaneshPanah M Javidi B and Watson 2008 E AOpt Express166368ndash77[18] Ng R 2005Proc of ACM SIGGRAPH24735ndash744

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

6

[10] Nevertheless optical reconstruction approach suffers from lowresolution low sampling ratequality degradation due to diffraction limited dynamic range and low overall visual quality partly due tolimitation of electro-optical projection devices

On the other hand computational reconstruction techniques deliver a more flexible venue to extractand exploit 3D cues by digital manipulation of integral image data [11 12 13 14] This amounts tocalculation of the light ray distribution over a particular plane in the scene from the image informationin the captured elemental images The extracted infomration is then displayed on regular 2D LCDs ormanipulated further to create a depth profile from the scene Computational techniques are particularlyinteresting for object recognition and classification surface profiling digital refocusing 3D artwork andetc

Conventional Integral Imaging is historically developed based on lenticular sheets and lenslet arraysHowever limitations in the field of view resolution-parallax compromise high aberation and lowimaging resolution of each lenslet have led to development of Synthetic Aperture Integral Imaging (SAII)[15 16 17] In this technique a conventional imaging sensor (eg camera) scans an area in a regulargrid pattern and each elemental image is acquired in full frame This enables one to obtain larger field ofview highly corrected and high resolution elemental images In this pickup process a natural questionis what happens if the elemental images are taken neither on a regular grid nor on the same plane

In this paper we overview the approach in [17] for a generaliation to the Integral Imaging methodin order to acheive 3D pickup geometry with randomly distributed sensor coordinates in all three spatialdimensions For 3D reconstruction a finite number of sensors with known coordinates are randomlyselected from within this pick up volume In particular we will study the case of Synthetic ApertureIntegral Imaging (SAII) where the sensor distribution is not controlled that is it is random howeverthe locations of sensors in space are known In this study the optical axes of the cameras are assumedparallel but each sensor has a different distances from the 3D object A computational reconstructionframework based on the back projection method is developed using a variable affine transform betweenthe image space and the object space It can be shown that affine coordinate transformation correspondsto orthographic projection similar to what is needed in light back-projection based II reconstruction

2 Sparse sensor configurationVirtually all instances of integral imaging methods have been investigated under the assumption thatelemental images are captured on a known geometrical surface (eg planar concave and etc) and in aregular pattern In what follows a generic scheme for integral imaging is proposed in which the pickuplocation is random andor sparse

A synthetic aperture case is investigated in which each sensor is positioned independently andrandomly in 3D space looking at the scene [see Fig 1] The pickup location of theith elementalimageP i is measured in a universal frame of reference in Cartesian coordinates The origin of theframe of reference is rather arbitrary but it has to be fixed during all position measurements Howeversince the proposed mathematical image reconstruction framework merely relies on relative distance ofelemental images in space it stays consistent if the origin moves and all position measurements areadjusted accordingly Also a local coordinate system is defined for each sensor with its origin lying onthe sensorrsquos midpoint

We assume that the sensor size(LxLy) effective focal length of the i-th imaging opticsgi and theposition of each sensor from the pickup stage is known In our analysis we make no assumption on thedistribution of elemental images in the space to achieve a generic reconstruction scheme To demonstratethe feasibility of the proposed technique the random pickup locationsP i = (xi yi zi) are chosen fromthree independent uniform random variables Clearly the actual distribution of elemental images isdictated by the specific application of interest We have used uniform distribution to give all locations inthe pickup volume an equal chance to get selected as sensor positions The reference elemental imageie the elemental image from which perspective the reconstruction is desired is denoted asE0

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

2

Figure 1 Layout of the sparse Integral Imaging sensor The reference andith elemental images areshown with their respective field of view in blue and green

3 Three dimensional image reconstructionSeveral methods have been investigated for computational reconstruction of II data In the Fourierdomain digital refocusing has been proposed [18] by applying Fourier slice theorem in 4D light fieldsThis technique is relatively fast with complexity ofO(n2 logn) n being the total number of samplesHowever this method is intrinsically based on the assumption of periodic sampling of the light fieldand thus may require heuristic adjustments if elemental images are not ordered regularly In the spatialdomain a fast ray tracing based reconstruction from the observers point of view is proposed [11] withcomplexity ofO(m)m being the number of elemental images Although fast and simple this methodyields low resolution reconstructions Yet another spatial domain reconstruction method is based onseries of 2D image back projections [12] This method offers a much better reconstruction resolutioncomparing to [11] at the expense of an algorithm with complexity ofO(n) since usuallyn ≫ m Forinstancem is typically in the range of 100-200 elemental images whilen can be as large as 107 pixelsIn the context of this paper we stay within the spatial domain in order to provide a generic reconstructionalgorithm with minimum assumptions about the pickup geometry

Computational reconstruction based on back-projection [12 13] has certain assumptions which areonly valid for lenslet based integral imaging systems In this section we develop a more genericreconstruction method based on affine transform which has its roots in a rigorous affine space theoryand is popularly used for orthographic image projection

The relationship between the local frame of referenceΨi and global frame of referenceΦ can bewritten based on Affine transforms as

[

Φ1

]

=

[

A i P r minusP i

0 0 0 1

]

times

[

Ψi

1

]

(1)

whereΦ = [xyz]T andΨi = [ui vi wi ]T denote the points in the reconstruction space andi-th elemental

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

3

image space respectively MatrixA i andtranslation vectorP i can be written as

A i =

Mi 0 00 Mi 00 0 1

P i =

pxi

pyi

pzi

(2)

in which P i denotes the position ofi-th sensor andMi = zigi is the associated magnification betweenthe i-th elemental image and its projection at distancezi = zr + pz

i minus pz0 [See Fig 1] Also the position

of the midpoint of the plane that we are interested in reconstructing is given byP r =[

px0 py

0 pz0minuszr

]

If the i-th elemental image is given byE i = Ei(ui vi wi) the relationshiph between local and globalcoordinates using Eq [1] can be written explicitly as

x = Miui minus pxi + px

0y = Mivi minus py

i + py0

z= wi minuszi = pzi minuszi

(3)

Using Eq [3] the projection ofE i on the reconstruction plane that represents the the expression ofthe back-projectedith elemental image inΦ coordinate system at planez= pz

0minuszr is

BPi(xyz) =Ei

(

x+ pxi minus px

0

Miy+ py

i minus py0

Miz+zi

)

where Mi =zi

gi (4)

The final reconstruction plane is achieved by superimposing the back-projected elemental imageswith common field of view as

R(xyzr) = Nminus1Nminus1

sumi=0

BPi(xy pz0minuszr) (5)

whereN is the total number of elemental images The described reconstruction technique similar toits counterpart in [12] generates high resolution reconstructions however it is generalized to deal withelemental images captured in arbitrary locations in space

4 ResultsExperimental results are demonstrated with toy models of a tank and a sports car resembeling a 3D sceneThe tank can be enclosed in a box of the size (5times25times2cm3) whereas the car model fits in a volumeof 4times3times25cm3 Also the tank and the car are placed approximately 19cmand 24cmaway from thereference elemental image respectively

To obtain random pickup positions a set of 100 positionsP i = (pxi py

i pzi ) are obtained from three

uniform random variable generators Parallax inx andy is set to(minus4cm4cm)and forz to (25cm27cm)assuming the desirable reconstruction range within [19cm24cm] from the reference sensor Theithelemental image is then taken with a digital camera at its associated random pickup positionP i Thefocal length for all lenses are set to be equal iegi = 25mm The CMOS sensor size is 227times151mm2 with 7micrompixel pitch The field of view (FOV) for each elemental image is then 48

times33 in thehorizontal and vertical directions respectively which covers an area of 18times12cm2 at 20cmaway fromthe pickup location in the object space A single camera is translated between the acquisition points suchthat it only passes each location once while at each location a full frame image with size 3072times2048pixels is captured The camera is translated inxyzusing off the shelve translation components

The 100 perspective images are used in Eqs [4] and [5] to reconstruct the 3D scene at differentdistances from the viewpointP0 = (minus0218259)cmwith varyingzr isin [160mm300mm] Two of suchreconstruction planes are shown in Fig 3 atzr = 185mmandzr = 240mm respectively

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

4

Figure 2 Four elemental images taken from position (a)P1 = (minus39minus39268) (b) P2 =(1434250) (c)P3 = (29minus32257)and (d)P4 = (minus2921264)all in cm

Figure 3 Two reconstructed images from viewpointP0 = (minus0218259) at (a) zr = 185mm (b)zr = 240mm

5 ConclusionIn this paper we overviewed a generalization to conventional Integral Imaging to encompass 3D multindashperspective imaging with arbitrary 3D pickup geometry and randomly distributed sensor coordinatesin all three spatial dimensions [17] A finite number of sensors with known coordinates are randomlyselected from within a pick up volume and are used for 3D object reconstruction In this approach whilethe sensor distribution is random that is it is not controlled the locations of sensors in space are assumedto be known at the reconstruction stage A computational reconstruction framework based on the backprojection method is developed using a variable affine transform between the image space and the objectspace

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

5

References[1] Okoshi T 1980Proc of the IEEE68(5) 548-564

Yang L McCornick M and Davies N 1988Appl Optics274529-34Igarishi Y Murata H and Ueda M 1978Jpn J Appl Phys171683-84Travis A R L 1997Proc of the IEEE85(11) 1817-32Lypton L 1982Foundation of Stereoscopic Cinema A Study in Depth(New York Van Nostrand Reinhold)Javidi B and Okano F 2002Three Dimensional Television Video and Display Tech(Berlin Springer)Javidi B 2006Optical Imaging Sensors and Systems for Homeland Security App(New York Springer)

[2] Okano F Hoshino H Arai J and Yuyama I 1997Appl Optics361598ndash1603Okano F Arai J Hoshino H and Yuyama I 1999Opt Eng381072ndash78Stern A and Javidi B 2006Proc of the IEEE94(3) 591ndash607

[3] Lippmann M G 1908Comptes-rendus de lrsquoAcademie des Sciences146446ndash451Sokolov A P 1911Autostereoscpy and Integral Photography by Professor Lippmanns Method(Moscow Moscow State

Univ Press)Ives H E 1931Journal of Optical Soc of America A21171-176

[4] Perlin K Paxia S and Kollin J S 2000Proc of the 27th Ann Conf on Computer Graphics and Interactive Techniques319ndash326 (ACM PressAddison-Wesley)

[5] Levoy M Hanrahan P 1996Proc of ACM SIGGRAPH31ndash42 (New Orleans)Levoy M 2006IEEE Computer3946ndash55

[6] Martınez R Pons A Saavedra G Martinez-Corral M and Javidi B 2006Opt Express149657ndash63[7] Martınez-Cuenca R Saavedra G Martnez-Corral M and Javidi B 2004Opt Express125237ndash42[8] Arai J Okano F Hoshino H and Yuyama I 1998Appl Opt372034ndash45[9] Jang J S and Javidi B 2002Opt Lett27324ndash326

[10] Jang J S and Javidi B 2002Opt Lett271767ndash69[11] Arimoto H and Javidi B 2001Opt Lett26157ndash159[12] Hong S H Jang J S and Javidi B 2004Opt Express12483-491[13] Hong S H and Javidi B 2004Opt Express124579ndash88[14] Stern A and Javidi B 2003Appl Opt427036-42[15] Jang J S and Javidi B 2002Opt Lett271144ndash46[16] Tavakoli B DaneshPanah M Javidi B and Watson E A 2007Opt Express1511889ndash902[17] DaneshPanah M Javidi B and Watson 2008 E AOpt Express166368ndash77[18] Ng R 2005Proc of ACM SIGGRAPH24735ndash744

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

6

Figure 1 Layout of the sparse Integral Imaging sensor The reference andith elemental images areshown with their respective field of view in blue and green

3 Three dimensional image reconstructionSeveral methods have been investigated for computational reconstruction of II data In the Fourierdomain digital refocusing has been proposed [18] by applying Fourier slice theorem in 4D light fieldsThis technique is relatively fast with complexity ofO(n2 logn) n being the total number of samplesHowever this method is intrinsically based on the assumption of periodic sampling of the light fieldand thus may require heuristic adjustments if elemental images are not ordered regularly In the spatialdomain a fast ray tracing based reconstruction from the observers point of view is proposed [11] withcomplexity ofO(m)m being the number of elemental images Although fast and simple this methodyields low resolution reconstructions Yet another spatial domain reconstruction method is based onseries of 2D image back projections [12] This method offers a much better reconstruction resolutioncomparing to [11] at the expense of an algorithm with complexity ofO(n) since usuallyn ≫ m Forinstancem is typically in the range of 100-200 elemental images whilen can be as large as 107 pixelsIn the context of this paper we stay within the spatial domain in order to provide a generic reconstructionalgorithm with minimum assumptions about the pickup geometry

Computational reconstruction based on back-projection [12 13] has certain assumptions which areonly valid for lenslet based integral imaging systems In this section we develop a more genericreconstruction method based on affine transform which has its roots in a rigorous affine space theoryand is popularly used for orthographic image projection

The relationship between the local frame of referenceΨi and global frame of referenceΦ can bewritten based on Affine transforms as

[

Φ1

]

=

[

A i P r minusP i

0 0 0 1

]

times

[

Ψi

1

]

(1)

whereΦ = [xyz]T andΨi = [ui vi wi ]T denote the points in the reconstruction space andi-th elemental

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

3

image space respectively MatrixA i andtranslation vectorP i can be written as

A i =

Mi 0 00 Mi 00 0 1

P i =

pxi

pyi

pzi

(2)

in which P i denotes the position ofi-th sensor andMi = zigi is the associated magnification betweenthe i-th elemental image and its projection at distancezi = zr + pz

i minus pz0 [See Fig 1] Also the position

of the midpoint of the plane that we are interested in reconstructing is given byP r =[

px0 py

0 pz0minuszr

]

If the i-th elemental image is given byE i = Ei(ui vi wi) the relationshiph between local and globalcoordinates using Eq [1] can be written explicitly as

x = Miui minus pxi + px

0y = Mivi minus py

i + py0

z= wi minuszi = pzi minuszi

(3)

Using Eq [3] the projection ofE i on the reconstruction plane that represents the the expression ofthe back-projectedith elemental image inΦ coordinate system at planez= pz

0minuszr is

BPi(xyz) =Ei

(

x+ pxi minus px

0

Miy+ py

i minus py0

Miz+zi

)

where Mi =zi

gi (4)

The final reconstruction plane is achieved by superimposing the back-projected elemental imageswith common field of view as

R(xyzr) = Nminus1Nminus1

sumi=0

BPi(xy pz0minuszr) (5)

whereN is the total number of elemental images The described reconstruction technique similar toits counterpart in [12] generates high resolution reconstructions however it is generalized to deal withelemental images captured in arbitrary locations in space

4 ResultsExperimental results are demonstrated with toy models of a tank and a sports car resembeling a 3D sceneThe tank can be enclosed in a box of the size (5times25times2cm3) whereas the car model fits in a volumeof 4times3times25cm3 Also the tank and the car are placed approximately 19cmand 24cmaway from thereference elemental image respectively

To obtain random pickup positions a set of 100 positionsP i = (pxi py

i pzi ) are obtained from three

uniform random variable generators Parallax inx andy is set to(minus4cm4cm)and forz to (25cm27cm)assuming the desirable reconstruction range within [19cm24cm] from the reference sensor Theithelemental image is then taken with a digital camera at its associated random pickup positionP i Thefocal length for all lenses are set to be equal iegi = 25mm The CMOS sensor size is 227times151mm2 with 7micrompixel pitch The field of view (FOV) for each elemental image is then 48

times33 in thehorizontal and vertical directions respectively which covers an area of 18times12cm2 at 20cmaway fromthe pickup location in the object space A single camera is translated between the acquisition points suchthat it only passes each location once while at each location a full frame image with size 3072times2048pixels is captured The camera is translated inxyzusing off the shelve translation components

The 100 perspective images are used in Eqs [4] and [5] to reconstruct the 3D scene at differentdistances from the viewpointP0 = (minus0218259)cmwith varyingzr isin [160mm300mm] Two of suchreconstruction planes are shown in Fig 3 atzr = 185mmandzr = 240mm respectively

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

4

Figure 2 Four elemental images taken from position (a)P1 = (minus39minus39268) (b) P2 =(1434250) (c)P3 = (29minus32257)and (d)P4 = (minus2921264)all in cm

Figure 3 Two reconstructed images from viewpointP0 = (minus0218259) at (a) zr = 185mm (b)zr = 240mm

5 ConclusionIn this paper we overviewed a generalization to conventional Integral Imaging to encompass 3D multindashperspective imaging with arbitrary 3D pickup geometry and randomly distributed sensor coordinatesin all three spatial dimensions [17] A finite number of sensors with known coordinates are randomlyselected from within a pick up volume and are used for 3D object reconstruction In this approach whilethe sensor distribution is random that is it is not controlled the locations of sensors in space are assumedto be known at the reconstruction stage A computational reconstruction framework based on the backprojection method is developed using a variable affine transform between the image space and the objectspace

Seventh EurondashAmerican Workshop on Information Optics IOP PublishingJournal of Physics Conference Series 139 (2008) 012017 doi1010881742-65961391012017

5

References
[1] Okoshi T 1980 Proc. IEEE 68(5) 548–64
    Yang L, McCormick M and Davies N 1988 Appl. Opt. 27 4529–34
    Igarashi Y, Murata H and Ueda M 1978 Jpn. J. Appl. Phys. 17 1683–84
    Travis A R L 1997 Proc. IEEE 85(11) 1817–32
    Lipton L 1982 Foundations of Stereoscopic Cinema: A Study in Depth (New York: Van Nostrand Reinhold)
    Javidi B and Okano F 2002 Three Dimensional Television, Video and Display Technologies (Berlin: Springer)
    Javidi B 2006 Optical Imaging Sensors and Systems for Homeland Security Applications (New York: Springer)
[2] Okano F, Hoshino H, Arai J and Yuyama I 1997 Appl. Opt. 36 1598–603
    Okano F, Arai J, Hoshino H and Yuyama I 1999 Opt. Eng. 38 1072–78
    Stern A and Javidi B 2006 Proc. IEEE 94(3) 591–607
[3] Lippmann M G 1908 Comptes Rendus de l'Académie des Sciences 146 446–51
    Sokolov A P 1911 Autostereoscopy and Integral Photography by Professor Lippmann's Method (Moscow: Moscow State Univ. Press)
    Ives H E 1931 J. Opt. Soc. Am. 21 171–76
[4] Perlin K, Paxia S and Kollin J S 2000 Proc. 27th Annual Conf. on Computer Graphics and Interactive Techniques (ACM Press/Addison-Wesley) pp 319–26
[5] Levoy M and Hanrahan P 1996 Proc. ACM SIGGRAPH (New Orleans) pp 31–42
    Levoy M 2006 IEEE Computer 39 46–55
[6] Martínez R, Pons A, Saavedra G, Martínez-Corral M and Javidi B 2006 Opt. Express 14 9657–63
[7] Martínez-Cuenca R, Saavedra G, Martínez-Corral M and Javidi B 2004 Opt. Express 12 5237–42
[8] Arai J, Okano F, Hoshino H and Yuyama I 1998 Appl. Opt. 37 2034–45
[9] Jang J S and Javidi B 2002 Opt. Lett. 27 324–26
[10] Jang J S and Javidi B 2002 Opt. Lett. 27 1767–69
[11] Arimoto H and Javidi B 2001 Opt. Lett. 26 157–59
[12] Hong S H, Jang J S and Javidi B 2004 Opt. Express 12 483–91
[13] Hong S H and Javidi B 2004 Opt. Express 12 4579–88
[14] Stern A and Javidi B 2003 Appl. Opt. 42 7036–42
[15] Jang J S and Javidi B 2002 Opt. Lett. 27 1144–46
[16] Tavakoli B, DaneshPanah M, Javidi B and Watson E A 2007 Opt. Express 15 11889–902
[17] DaneshPanah M, Javidi B and Watson E A 2008 Opt. Express 16 6368–77
[18] Ng R 2005 Proc. ACM SIGGRAPH 24 735–44


