Introduction to Modern Digital Holography with MATLAB


INTRODUCTION TO MODERN DIGITAL HOLOGRAPHY

With MATLAB

    Get up to speed with digital holography with this concise and straightforward

    introduction to modern techniques and conventions.

Building up from the basic principles of optics, this book describes key techniques in digital holography, such as phase-shifting holography, low-coherence holography, diffraction tomographic holography, and optical scanning holography. Practical applications are discussed, and accompanied by all the theory necessary to understand the underlying principles at work. A further chapter covers advanced techniques for producing computer-generated holograms. Extensive MATLAB code is integrated with the text throughout and is available for download online, illustrating both theoretical results and practical considerations such as aliasing, zero padding, and sampling.

Accompanied by end-of-chapter problems, and an online solutions manual for instructors, this is an indispensable resource for students, researchers, and engineers in the fields of optical image processing and digital holography.

Ting-Chung Poon is a Professor of Electrical and Computer Engineering at Virginia Tech, and a Visiting Professor at the Shanghai Institute of Optics and Fine Mechanics, Chinese Academy of Sciences. He is a Fellow of the OSA and SPIE.

Jung-Ping Liu is a Professor in the Department of Photonics at Feng Chia University, Taiwan.


INTRODUCTION TO MODERN DIGITAL HOLOGRAPHY

With MATLAB

TING-CHUNG POON, Virginia Tech, USA

JUNG-PING LIU, Feng Chia University, Taiwan


    University Printing House, Cambridge CB2 8BS, United Kingdom

    Published in the United States of America by Cambridge University Press, New York

    Cambridge University Press is part of the University of Cambridge.

It furthers the University's mission by disseminating knowledge in the pursuit of

    education, learning and research at the highest international levels of excellence.

    www.cambridge.org

    Information on this title: www.cambridge.org/9781107016705

© T.-C. Poon & J.-P. Liu 2014

    This publication is in copyright. Subject to statutory exception

    and to the provisions of relevant collective licensing agreements,

    no reproduction of any part may take place without the written

    permission of Cambridge University Press.

    First published 2014

Printed in the United Kingdom by TJ International Ltd, Padstow, Cornwall

    A catalog record for this publication is available from the British Library

    Library of Congress Cataloging in Publication data

    Poon, Ting-Chung.

    Introduction to modern digital holography : with MATLAB / Ting-Chung Poon, Jung-Ping Liu.

    pages cm

    ISBN 978-1-107-01670-5 (Hardback)

1. Holography - Data processing. 2. Image processing - Digital techniques. I. Liu, Jung-Ping. II. Title.

TA1542.P66 2014

621.36'075 - dc23

2013036072

ISBN 978-1-107-01670-5 Hardback

Additional resources for this publication at www.cambridge.org/digitalholography

    Cambridge University Press has no responsibility for the persistence or accuracy of

    URLs for external or third-party internet websites referred to in this publication,

    and does not guarantee that any content on such websites is, or will remain,

    accurate or appropriate.


    Contents

    Preface page ix

    1 Wave optics 1

1.1 Maxwell's equations and the wave equation 1

    1.2 Plane waves and spherical waves 3

    1.3 Scalar diffraction theory 5

    1.3.1 Fresnel diffraction 9

    1.3.2 Fraunhofer diffraction 11

    1.4 Ideal thin lens as an optical Fourier transformer 14

    1.5 Optical image processing 15

Problems 24

References 26

    2 Fundamentals of holography 27

    2.1 Photography and holography 27

    2.2 Hologram as a collection of Fresnel zone plates 28

    2.3 Three-dimensional holographic imaging 33

2.3.1 Holographic magnifications 38

    2.3.2 Translational distortion 39

    2.3.3 Chromatic aberration 40

    2.4 Temporal and spatial coherence 42

2.4.1 Temporal coherence 43

    2.4.2 Coherence time and coherence length 45

    2.4.3 Some general temporal coherence considerations 46

    2.4.4 Fourier transform spectroscopy 48

    2.4.5 Spatial coherence 51

    2.4.6 Some general spatial coherence considerations 53

    Problems 56

    References 58



    3 Types of holograms 59

    3.1 Gabor hologram and on-axis (in-line) holography 59

    3.2 Off-axis holography 61

    3.3 Image hologram 64

3.4 Fresnel and Fourier holograms 68

3.4.1 Fresnel hologram and Fourier hologram 68

    3.4.2 Lensless Fourier hologram 70

    3.5 Rainbow hologram 73

    Problems 78

    References 78

    4 Conventional digital holography 79

    4.1 Sampled signal and discrete Fourier transform 79

4.2 Recording and limitations of the image sensor 89

    4.2.1 Imager size 91

    4.2.2 Pixel pitch 91

    4.2.3 Modulation transfer function 92

    4.3 Digital calculations of scalar diffraction 95

    4.3.1 Angular spectrum method (ASM) 95

    4.3.2 Validity of the angular spectrum method 97

    4.3.3 Fresnel diffraction method (FDM) 99

    4.3.4 Validation of the Fresnel diffraction method 101

4.3.5 Backward propagation 103

4.4 Optical recording of digital holograms 105

4.4.1 Recording geometry 105

4.4.2 Removal of the twin image and the zeroth-order light 108

    4.5 Simulations of holographic recording and reconstruction 111

    Problems 116

    References 117

    5 Digital holography: special techniques 118

    5.1 Phase-shifting digital holography 118

    5.1.1 Four-step phase-shifting holography 119

    5.1.2 Three-step phase-shifting holography 120

    5.1.3 Two-step phase-shifting holography 120

    5.1.4 Phase step and phase error 122

    5.1.5 Parallel phase-shifting holography 124

    5.2 Low-coherence digital holography 126

    5.3 Diffraction tomographic holography 133

    5.4 Optical scanning holography 137



    5.4.1 Fundamental principles 138

    5.4.2 Hologram construction and reconstruction 142

    5.4.3 Intuition on optical scanning holography 144

    Problems 147

    References 148

    6 Applications in digital holography 151

    6.1 Holographic microscopy 151

    6.1.1 Microscope-based digital holographic microscopy 151

    6.1.2 Fourier-based digital holographic microscopy 154

    6.1.3 Spherical-reference-based digital holographic

    microscopy 156

    6.2 Sectioning in holography 158

    6.3 Phase extraction 164

    6.4 Optical contouring and deformation measurement 168

    6.4.1 Two-wavelength contouring 169

    6.4.2 Two-illumination contouring 172

    6.4.3 Deformation measurement 175

    Problems 175

    References 175

    7 Computer-generated holography 179

    7.1 The detour-phase hologram 179

7.2 The kinoform hologram 185

7.3 Iterative Fourier transform algorithm 187

    7.4 Modern approach for fast calculations and holographic

    information processing 189

    7.4.1 Modern approach for fast calculations 189

    7.4.2 Holographic information processing 196

    7.5 Three-dimensional holographic display using spatial light

    modulators 199

    7.5.1 Resolution 199

    7.5.2 Digital mask programmable hologram 201

    7.5.3 Real-time display 205

    7.5.4 Lack of SLMs capable of displaying a complex function 206

    Problems 210

    References 211

    Index 214



    Preface

Owing to the advance in faster electronics and digital processing power, the past decade has seen an impressive re-emergence of digital holography. Digital holography is a topic of growing interest and it finds applications in three-dimensional imaging, three-dimensional displays and systems, as well as biomedical imaging and metrology. While research in digital holography continues to be vibrant and digital holography is maturing, we find that there is a lack of textbooks in the area. The present book tries to serve this need: to promote and teach the foundations of digital holography. In addition to presenting traditional digital holography and applications in Chapters 1-4, we also discuss modern applications and techniques in digital holography such as phase-shifting holography, low-coherence holography, diffraction tomographic holography, optical scanning holography, sectioning in holography, digital holographic microscopy as well as computer-generated holography in Chapters 5-7. This book is geared towards undergraduate seniors or first-year graduate-level students in engineering and physics. The material covered is suitable for a one-semester course in Fourier optics and digital holography. The book is also useful for scientists and engineers, and for those who simply want to learn about optical image processing and digital holography.

We believe in the inclusion of MATLAB in the textbook because digital holography relies heavily on digital computations to process holographic data. MATLAB will help the reader grasp and visualize some of the important concepts in digital holography. The use of MATLAB not only helps to illustrate the theoretical results, but also makes us aware of computational issues such as aliasing, zero padding, sampling, etc. that we face in implementing them. Nevertheless, this text is not about teaching MATLAB, and some familiarity with MATLAB is required to understand the codes.



    Ting-Chung Poon would like to thank his wife, Eliza, and his children, Christina

    and Justine, for their love. This year is particularly special to him as Christina gave

    birth to a precious little one Gussie. Jung-Ping Liu would like to thank his wife,

    Hui-Chu, and his parents for their understanding and encouragement.



    1

    Wave optics

1.1 Maxwell's equations and the wave equation

In wave optics, we treat light as waves. Wave optics accounts for wave effects such as interference and diffraction. The starting point for wave optics is Maxwell's equations:

\nabla \cdot D = \rho_v,   (1.1)

\nabla \cdot B = 0,   (1.2)

\nabla \times E = -\frac{\partial B}{\partial t},   (1.3)

\nabla \times H = J = J_C + \frac{\partial D}{\partial t},   (1.4)

where we have four vector quantities called electromagnetic (EM) fields: the electric field strength E (V/m), the electric flux density D (C/m^2), the magnetic field strength H (A/m), and the magnetic flux density B (Wb/m^2). The vector quantity J_C and the scalar quantity \rho_v are the current density (A/m^2) and the electric charge density (C/m^3), respectively, and they are the sources responsible for generating the electromagnetic fields. In order to determine the four field quantities completely, we also need the constitutive relations

D = \epsilon E,   (1.5)

and

B = \mu H,   (1.6)

where \epsilon and \mu are the permittivity (F/m) and permeability (H/m) of the medium, respectively. In the case of a linear, homogeneous, and isotropic medium such as in vacuum or free space, \epsilon and \mu are scalar constants. Using Eqs. (1.3)-(1.6), we can


derive a wave equation in E or B in free space. For example, by taking the curl of E in Eq. (1.3), we can derive the wave equation in E as

\nabla^2 E - \mu\epsilon \frac{\partial^2 E}{\partial t^2} = \mu \frac{\partial J_C}{\partial t} + \frac{1}{\epsilon}\nabla \rho_v,   (1.7)

where \nabla^2 = \partial^2/\partial x^2 + \partial^2/\partial y^2 + \partial^2/\partial z^2 is the Laplacian operator in Cartesian coordinates. For a source-free medium, i.e., J_C = 0 and \rho_v = 0, Eq. (1.7) reduces to the homogeneous wave equation:

\nabla^2 E - \frac{1}{v^2}\frac{\partial^2 E}{\partial t^2} = 0.   (1.8)

Note that v = 1/\sqrt{\mu\epsilon} is the velocity of the wave in the medium. Equation (1.8) is equivalent to three scalar equations, one for every component of E. Let

E = E_x a_x + E_y a_y + E_z a_z,   (1.9)

where a_x, a_y, and a_z are the unit vectors in the x, y, and z directions, respectively. Equation (1.8) then becomes

\left(\frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}\right)(E_x a_x + E_y a_y + E_z a_z) = \frac{1}{v^2}\frac{\partial^2}{\partial t^2}(E_x a_x + E_y a_y + E_z a_z).   (1.10)

Comparing the a_x-component on both sides of the above equation gives us

\frac{\partial^2 E_x}{\partial x^2} + \frac{\partial^2 E_x}{\partial y^2} + \frac{\partial^2 E_x}{\partial z^2} = \frac{1}{v^2}\frac{\partial^2 E_x}{\partial t^2}.

Similarly, we can derive the same type of equation shown above for the E_y and E_z components by comparison with other components in Eq. (1.10). Hence we can write a compact equation for the three components as

\frac{\partial^2 \psi}{\partial x^2} + \frac{\partial^2 \psi}{\partial y^2} + \frac{\partial^2 \psi}{\partial z^2} = \frac{1}{v^2}\frac{\partial^2 \psi}{\partial t^2}   (1.11a)

or

\nabla^2 \psi = \frac{1}{v^2}\frac{\partial^2 \psi}{\partial t^2},   (1.11b)

where \psi can represent a component, E_x, E_y, or E_z, of the electric field E. Equation (1.11) is called the three-dimensional scalar wave equation. We shall look at some of its simplest solutions in the next section.


1.2 Plane waves and spherical waves

In this section, we will examine some of the simplest solutions, namely the plane wave solution and the spherical wave solution, of the three-dimensional scalar wave equation in Eq. (1.11). For simple harmonic oscillation at angular frequency \omega_0 (radian/second) of the wave, in Cartesian coordinates, the plane wave solution is

\psi(x, y, z, t) = A \exp[j(\omega_0 t - k_0 \cdot R)],   (1.12)

where j = \sqrt{-1}, k_0 = k_{0x} a_x + k_{0y} a_y + k_{0z} a_z is the propagation vector, and R = x a_x + y a_y + z a_z is the position vector. The magnitude of k_0 is called the wave number and is |k_0| = k_0 = \sqrt{k_{0x}^2 + k_{0y}^2 + k_{0z}^2} = \omega_0/v. If the medium is free space, v = c (the speed of light in vacuum) and k_0 becomes the wave number in free space. Equation (1.12) is a plane wave of amplitude A, traveling along the k_0 direction. The situation is shown in Fig. 1.1.

Figure 1.1 Plane wave propagating along the direction k_0.

If a plane wave is propagating along the positive z-direction, Eq. (1.12) becomes

\psi(z, t) = A \exp[j(\omega_0 t - k_0 z)],   (1.13)

which is a solution to the one-dimensional scalar wave equation given by

\frac{\partial^2 \psi}{\partial z^2} = \frac{1}{v^2}\frac{\partial^2 \psi}{\partial t^2}.   (1.14)

Equation (1.13) is a complex representation of a plane wave. Since the electromagnetic fields are real functions of space and time, we can represent the plane wave in real quantities by taking the real part of \psi to obtain

Re\{\psi(z, t)\} = A \cos(\omega_0 t - k_0 z).   (1.15)

Another important solution to the wave equation in Eq. (1.11b) is a spherical wave solution. The spherical wave solution is a solution which has spherical symmetry, i.e., the solution is not a function of \phi and \theta under the spherical coordinates shown in Fig. 1.2. The expression for the Laplacian operator, \nabla^2, is


\nabla^2 = \frac{\partial^2}{\partial R^2} + \frac{2}{R}\frac{\partial}{\partial R} + \frac{1}{R^2 \sin^2\theta}\frac{\partial^2}{\partial \phi^2} + \frac{1}{R^2}\frac{\partial^2}{\partial \theta^2} + \frac{\cot\theta}{R^2}\frac{\partial}{\partial \theta}.

Hence Eq. (1.11b), under spherical symmetry, becomes

\frac{\partial^2 \psi}{\partial R^2} + \frac{2}{R}\frac{\partial \psi}{\partial R} = \frac{1}{v^2}\frac{\partial^2 \psi}{\partial t^2}.   (1.16)

Since

\frac{1}{R}\frac{\partial^2 (R\psi)}{\partial R^2} = \frac{\partial^2 \psi}{\partial R^2} + \frac{2}{R}\frac{\partial \psi}{\partial R},

we can re-write Eq. (1.16) to become

\frac{\partial^2 (R\psi)}{\partial R^2} = \frac{1}{v^2}\frac{\partial^2 (R\psi)}{\partial t^2}.   (1.17)

By comparing the above equation with Eq. (1.14), which has a solution given by Eq. (1.13), we can construct a simple solution to Eq. (1.17) as

R\psi(R, t) = A \exp[j(\omega_0 t - k_0 R)],

or

\psi(R, t) = \frac{A}{R}\exp[j(\omega_0 t - k_0 R)].   (1.18)

The above equation is a spherical wave of amplitude A, which is one of the solutions to Eq. (1.16). In summary, plane waves and spherical waves are some of the simplest solutions of the three-dimensional scalar wave equation.

    Figure 1.2 Spherical coordinate system.
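As a quick consistency check (ours, not part of the original text), direct substitution shows that the plane wave of Eq. (1.12) satisfies the scalar wave equation (1.11b) whenever the dispersion relation k_0 = \omega_0/v holds:

\nabla^2 \psi = -(k_{0x}^2 + k_{0y}^2 + k_{0z}^2)\,\psi = -k_0^2\,\psi, \qquad \frac{1}{v^2}\frac{\partial^2 \psi}{\partial t^2} = -\frac{\omega_0^2}{v^2}\,\psi,

and the two sides agree precisely when k_0^2 = \omega_0^2/v^2. The spherical wave of Eq. (1.18) can be checked against Eq. (1.17) in the same way, since R\psi has the form of the one-dimensional plane wave of Eq. (1.13).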



1.3 Scalar diffraction theory

For a plane wave incident on an aperture or a diffracting screen, i.e., an opaque screen with some openings allowing light to pass through, we need to find the field distribution exiting the aperture or the diffracted field. To tackle the diffraction problem, we find the solution of the scalar wave equation under some initial condition. Let us assume the aperture is represented by a transparency with amplitude transmittance, often called the transparency function, given by t(x, y), located on the plane z = 0 as shown in Fig. 1.3.

A plane wave of amplitude A is incident on the aperture. Hence at z = 0, according to Eq. (1.13), the plane wave immediately in front of the aperture is given by A exp(j\omega_0 t). The field distribution immediately after the aperture is \psi(x, y, z = 0, t) = A t(x, y)\exp(j\omega_0 t). In general, t(x, y) is a complex function that modifies the field distribution incident on the aperture, and the transparency has been assumed to be infinitely thin. To develop \psi(x, y, z = 0, t) further mathematically, we write

\psi(x, y, z = 0, t) = A t(x, y)\exp(j\omega_0 t) = \psi_p(x, y; z = 0)\exp(j\omega_0 t) = \psi_{p0}(x, y)\exp(j\omega_0 t).   (1.19)

The quantity \psi_{p0}(x, y) is called the complex amplitude in optics. This complex amplitude is the initial condition, which is given by \psi_{p0}(x, y) = A t(x, y), the amplitude of the incident plane wave multiplied by the transparency function of the aperture. To find the field distribution at z away from the aperture, we model the solution in the form of

\psi(x, y, z, t) = \psi_p(x, y; z)\exp(j\omega_0 t),   (1.20)

where \psi_p(x, y; z) is the unknown to be found with the initial condition \psi_{p0}(x, y) given. To find \psi_p(x, y; z), we substitute Eq. (1.20) into the three-dimensional scalar wave equation given by Eq. (1.11a) to obtain the Helmholtz equation for \psi_p(x, y; z),

Figure 1.3 Diffraction geometry: t(x, y) is a diffracting screen.


\frac{\partial^2 \psi_p}{\partial x^2} + \frac{\partial^2 \psi_p}{\partial y^2} + \frac{\partial^2 \psi_p}{\partial z^2} + k_0^2 \psi_p = 0.   (1.21)

To find the solution to the above equation, we choose to use the Fourier transform technique. The two-dimensional Fourier transform of a spatial signal f(x, y) is defined as

\mathcal{F}\{f(x, y)\} = F(k_x, k_y) = \iint f(x, y)\exp(jk_x x + jk_y y)\, dx\, dy,   (1.22a)

and the inverse Fourier transform is

\mathcal{F}^{-1}\{F(k_x, k_y)\} = f(x, y) = \frac{1}{4\pi^2}\iint F(k_x, k_y)\exp(-jk_x x - jk_y y)\, dk_x\, dk_y,   (1.22b)

where k_x and k_y are called spatial radian frequencies as they have units of radians per unit length. The functions f(x, y) and F(k_x, k_y) form a Fourier transform pair. Table 1.1 shows some of the most important transform pairs.
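Transform pair 7 can also be checked numerically. The short MATLAB fragment below is ours, not one of the book's listings; the grid size, sampling period, and the parameter written here as alpha are arbitrary illustrative choices. It approximates the integral of Eq. (1.22a) with fft2; because the Gaussian is real and even, the sign of the exponent in the kernel does not affect the result, so the -j convention of fft2 can be used directly.

% Numerical check of transform pair 7 in Table 1.1:
% exp(-alpha*(x^2+y^2))  <-->  (pi/alpha)*exp(-(kx^2+ky^2)/(4*alpha))
M = 256;                         % number of samples per dimension
dx = 0.05;                       % sampling period (arbitrary units)
x = (-M/2:M/2-1)*dx;
[X, Y] = meshgrid(x, x);
alpha = 2;
f = exp(-alpha*(X.^2 + Y.^2));   % Gaussian input
F_num = fftshift(fft2(fftshift(f)))*dx^2;    % approximates the 2-D Fourier integral
dk = 2*pi/(M*dx);                % spacing in radian spatial frequency
kx = (-M/2:M/2-1)*dk;
[KX, KY] = meshgrid(kx, kx);
F_theory = (pi/alpha)*exp(-(KX.^2 + KY.^2)/(4*alpha));
max(abs(F_num(:) - F_theory(:))) % should be very small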

By taking the two-dimensional Fourier transform of Eq. (1.21) and using transform pair number 4 in Table 1.1 to obtain

\mathcal{F}\left\{\frac{\partial^2 \psi_p}{\partial x^2}\right\} = (-jk_x)^2\,\psi_p(k_x, k_y; z), \qquad \mathcal{F}\left\{\frac{\partial^2 \psi_p}{\partial y^2}\right\} = (-jk_y)^2\,\psi_p(k_x, k_y; z),   (1.23)

where \mathcal{F}\{\psi_p(x, y; z)\} = \psi_p(k_x, k_y; z), we have a differential equation in \psi_p(k_x, k_y; z) given by

\frac{d^2 \psi_p}{dz^2} + k_0^2\left(1 - \frac{k_x^2}{k_0^2} - \frac{k_y^2}{k_0^2}\right)\psi_p = 0   (1.24)

subject to the initial known condition \mathcal{F}\{\psi_p(x, y; z = 0)\} = \psi_p(k_x, k_y; z = 0) = \psi_{p0}(k_x, k_y). The solution to the above second-order ordinary differential equation is straightforward and is given by

\psi_p(k_x, k_y; z) = \psi_{p0}(k_x, k_y)\exp\left(-jk_0\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2}\; z\right)   (1.25)

as we recognize that the differential equation of the form

\frac{d^2 y(z)}{dz^2} + \beta^2 y(z) = 0


Table 1.1 Fourier transform pairs

1. f(x, y)   <-->   F(k_x, k_y)

2. Shifting
   f(x - x_0, y - y_0)   <-->   F(k_x, k_y)\exp(jk_x x_0 + jk_y y_0)

3. Scaling
   f(ax, by)   <-->   \frac{1}{|ab|}F\left(\frac{k_x}{a}, \frac{k_y}{b}\right)

4. Differentiation
   \partial f(x, y)/\partial x   <-->   -jk_x F(k_x, k_y)

5. Convolution integral
   f_1 * f_2 = \iint f_1(x', y')\, f_2(x - x', y - y')\, dx'\, dy'   <-->   product of spectra F_1(k_x, k_y) F_2(k_x, k_y),
   where \mathcal{F}\{f_1(x, y)\} = F_1(k_x, k_y) and \mathcal{F}\{f_2(x, y)\} = F_2(k_x, k_y)

6. Correlation
   f_1 \otimes f_2 = \iint f_1^*(x', y')\, f_2(x + x', y + y')\, dx'\, dy'   <-->   F_1^*(k_x, k_y) F_2(k_x, k_y)

7. Gaussian function
   \exp[-\alpha(x^2 + y^2)]   <-->   Gaussian function \frac{\pi}{\alpha}\exp\left(-\frac{k_x^2 + k_y^2}{4\alpha}\right)

8. Constant of unity
   1   <-->   delta function 4\pi^2\,\delta(k_x, k_y), where \delta(x, y) = \frac{1}{4\pi^2}\iint \exp(\pm jk_x x \pm jk_y y)\, dk_x\, dk_y

9. Delta function
   \delta(x, y)   <-->   constant of unity 1

10. Triangular function
    \Lambda\left(\frac{x}{a}, \frac{y}{b}\right) = \Lambda\left(\frac{x}{a}\right)\Lambda\left(\frac{y}{b}\right),
    where \Lambda\left(\frac{x}{a}\right) = 1 - \frac{|x|}{a} for |x| \le a and 0 otherwise
    <-->   a\,\mathrm{sinc}^2\left(\frac{k_x a}{2}\right)\, b\,\mathrm{sinc}^2\left(\frac{k_y b}{2}\right)

11. Rectangular function
    \mathrm{rect}(x, y) = \mathrm{rect}(x)\,\mathrm{rect}(y),
    where \mathrm{rect}(x) = 1 for |x| < 1/2 and 0 otherwise
    <-->   \mathrm{sinc}\left(\frac{k_x}{2}\right)\mathrm{sinc}\left(\frac{k_y}{2}\right)


has the solution given by

y(z) = y_0\exp(-j\beta z).

From Eq. (1.25), we define the spatial frequency transfer function of propagation through a distance z as [1]

H(k_x, k_y; z) = \psi_p(k_x, k_y; z)/\psi_{p0}(k_x, k_y) = \exp\left(-jk_0\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2}\; z\right).   (1.26)

Hence the complex amplitude \psi_p(x, y; z) is given by the inverse Fourier transform of Eq. (1.25):

\psi_p(x, y; z) = \mathcal{F}^{-1}\{\psi_p(k_x, k_y; z)\} = \mathcal{F}^{-1}\{\psi_{p0}(k_x, k_y) H(k_x, k_y; z)\}
= \frac{1}{4\pi^2}\iint \psi_{p0}(k_x, k_y)\exp\left(-jk_0\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2}\; z\right)\exp(-jk_x x - jk_y y)\, dk_x\, dk_y.   (1.27)

The above equation is a very important result. For a given field distribution along the z = 0 plane, i.e., \psi_p(x, y; z = 0) = \psi_{p0}(x, y), we can find the field distribution across a plane parallel to the (x, y) plane but at a distance z from it by calculating Eq. (1.27). The term \psi_{p0}(k_x, k_y) is a Fourier transform of \psi_{p0}(x, y) according to Eq. (1.22):

\psi_{p0}(x, y) = \mathcal{F}^{-1}\{\psi_{p0}(k_x, k_y)\} = \frac{1}{4\pi^2}\iint \psi_{p0}(k_x, k_y)\exp(-jk_x x - jk_y y)\, dk_x\, dk_y.   (1.28)

The physical meaning of the above integral is that we first recognize a plane wave propagating with propagation vector k_0, as illustrated in Fig. 1.1. The complex amplitude of the plane wave, according to Eq. (1.12), is given by

A\exp(-jk_{0x} x - jk_{0y} y - jk_{0z} z).   (1.29)

The field distribution at z = 0, or the plane wave component, is given by

\exp(-jk_{0x} x - jk_{0y} y).

Comparing the above equation with Eq. (1.28) and recognizing that the spatial radian frequency variables k_x and k_y of the field distribution \psi_{p0}(x, y) are k_{0x} and k_{0y} of the plane wave in Eq. (1.29), \psi_{p0}(k_x, k_y) is called the angular plane wave spectrum of the field distribution \psi_{p0}(x, y). Therefore, \psi_{p0}(k_x, k_y)\exp(-jk_x x - jk_y y) is the plane wave component with amplitude \psi_{p0}(k_x, k_y) and by summing over various directions of k_x and k_y, we have the field distribution \psi_{p0}(x, y) at z = 0 given by Eq. (1.28). To find the field distribution a distance of z away, we simply let the


various plane wave components propagate over a distance z, which means acquiring a phase shift of \exp(-jk_z z), or \exp(-jk_{0z} z) by noting that the variable k_z is k_{0z} of the plane wave, so that we have

\psi_p(x, y; z) = \frac{1}{4\pi^2}\iint \psi_{p0}(k_x, k_y)\exp(-jk_x x - jk_y y - jk_z z)\, dk_x\, dk_y
= \mathcal{F}^{-1}\{\psi_{p0}(k_x, k_y)\exp(-jk_{0z} z)\}.   (1.30)

Note that k_0 = \sqrt{k_{0x}^2 + k_{0y}^2 + k_{0z}^2} and hence k_z = k_{0z} = k_0\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2}, and with this relation in Eq. (1.29), we immediately recover Eq. (1.27) and provide physical meaning to the equation. Note that we have kept the + sign in the above relation to represent waves traveling in the positive z-direction. In addition, for propagation of plane waves, 1 - k_x^2/k_0^2 - k_y^2/k_0^2 \ge 0 or k_x^2 + k_y^2 \le k_0^2. If the reverse is true, i.e., k_x^2 + k_y^2 > k_0^2, we have evanescent waves.
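The recipe of Eqs. (1.26)-(1.30) maps almost line for line onto MATLAB. The fragment below is a minimal sketch of this angular spectrum calculation; it is ours, not one of the book's listings, and the square-aperture input, grid parameters, and variable names are our own illustrative choices. (Chapter 4 of the book treats the angular spectrum method and its sampling issues in detail.)

% Minimal angular spectrum propagation, following Eqs. (1.26)-(1.27)
lambda = 0.6e-6;                  % wavelength (m)
k0 = 2*pi/lambda;                 % wave number
dx = 5e-6;                        % sampling period (m)
M = 512;                          % number of samples per dimension
z = 0.05;                         % propagation distance (m)
x = (-M/2:M/2-1)*dx;
[X, Y] = meshgrid(x, x);
psi0 = double(abs(X) < 2e-4 & abs(Y) < 2e-4);    % square aperture, half-width 0.2 mm
dk = 2*pi/(M*dx);
kx = (-M/2:M/2-1)*dk;
[KX, KY] = meshgrid(kx, kx);
arg = 1 - (KX/k0).^2 - (KY/k0).^2;
H = zeros(M);
H(arg >= 0) = exp(-1j*k0*sqrt(arg(arg >= 0))*z); % Eq. (1.26); evanescent components dropped
PSI0 = fftshift(fft2(fftshift(psi0)));           % angular plane wave spectrum
psi_z = fftshift(ifft2(fftshift(PSI0.*H)));      % field a distance z away
figure; imshow(mat2gray(abs(psi_z)));            % display |psi_p(x,y;z)|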

1.3.1 Fresnel diffraction

When propagating waves make small angles, i.e., under the so-called paraxial approximation, we have k_x^2 + k_y^2 \ll k_0^2 and

\sqrt{1 - k_x^2/k_0^2 - k_y^2/k_0^2} \approx 1 - \frac{k_x^2}{2k_0^2} - \frac{k_y^2}{2k_0^2}.   (1.31)

Equation (1.27) becomes

\psi_p(x, y; z) = \frac{1}{4\pi^2}\iint \psi_{p0}(k_x, k_y)\exp(-jk_0 z)\exp\left[\frac{j(k_x^2 + k_y^2)z}{2k_0}\right]\exp(-jk_x x - jk_y y)\, dk_x\, dk_y,

which can be written in a compact form as

\psi_p(x, y; z) = \mathcal{F}^{-1}\{\psi_{p0}(k_x, k_y) H(k_x, k_y; z)\},   (1.32)

where

H(k_x, k_y; z) = \exp(-jk_0 z)\exp\left[\frac{j(k_x^2 + k_y^2)z}{2k_0}\right].   (1.33)

H(k_x, k_y; z) is called the spatial frequency transfer function in Fourier optics [1]. This transfer function is simply a paraxial approximation to the transfer function of propagation in Eq. (1.26). The inverse Fourier transform of H(k_x, k_y; z) is known as the spatial impulse response in Fourier optics, h(x, y; z) [1]:


h(x, y; z) = \mathcal{F}^{-1}\{H(k_x, k_y; z)\} = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right].   (1.34)

To find the inverse transform of the above equation, we have used transform pair number 13 in Table 1.1. We can express Eq. (1.32) in terms of the convolution integral by using transform pair number 5:

\psi_p(x, y; z) = \psi_{p0}(x, y) * h(x, y; z)
= \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\iint \psi_{p0}(x', y')\exp\left\{\frac{-jk_0}{2z}\left[(x - x')^2 + (y - y')^2\right]\right\} dx'\, dy'.   (1.35)

Equation (1.35) is called the Fresnel diffraction formula and describes the Fresnel diffraction of a beam during propagation which has an initial complex amplitude given by \psi_{p0}(x, y).
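A simple worked example (ours) may help here: for a point-source input \psi_{p0}(x, y) = \delta(x, y), the sifting property of the delta function reduces Eq. (1.35) to

\psi_p(x, y; z) = \delta(x, y) * h(x, y; z) = h(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right],

i.e., a paraxial (quadratic-phase) approximation to the diverging spherical wave of Eq. (1.18). The same expression reappears in Chapter 2 as the object wave of a pinhole, Eq. (2.2).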

If we wish to calculate the diffraction pattern at a distance far away from the aperture, Eq. (1.35) can be simplified. To see how, let us complete the square in the exponential function and then re-write Eq. (1.35) as

\psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]
\iint \psi_{p0}(x', y')\exp\left[\frac{-jk_0}{2z}(x'^2 + y'^2)\right]\exp\left[\frac{jk_0}{z}(xx' + yy')\right] dx'\, dy'.   (1.36)

In terms of the Fourier transform, we can write the Fresnel diffraction formula as follows:

\psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]
\mathcal{F}\left\{\psi_{p0}(x, y)\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\right\}\bigg|_{k_x = k_0 x/z,\; k_y = k_0 y/z}.   (1.37)

In the integral shown in Eq. (1.36), \psi_{p0} is considered the source, and therefore the coordinates x' and y' can be called the source plane. In order to find the field distribution \psi_p on the observation plane z away, or on the x-y plane, we need to multiply the source by the two exponential functions as shown inside the integrand of Eq. (1.36) and then integrate over the source coordinates. The result of the integration is then multiplied by the factor \exp(-jk_0 z)(jk_0/2\pi z)\exp[(-jk_0/2z)(x^2 + y^2)] to arrive at the final result on the observation plane given by Eq. (1.36).


1.3.2 Fraunhofer diffraction

Note that the integral in Eq. (1.36) can be simplified if the approximation below is true:

\frac{k_0}{2}\left[x'^2 + y'^2\right]_{max} = \frac{\pi}{\lambda_0}\left[x'^2 + y'^2\right]_{max} \ll z.   (1.38)

Figure 1.4 (a) t(x, y) is a diffracting screen in the form of circ(r/r_0), r_0 = 0.5 mm. (b) Fresnel diffraction at z = 7 cm, |\psi_p(x, y; z = 7 cm)|. (c) Fresnel diffraction at z = 9 cm, |\psi_p(x, y; z = 9 cm)|. See Table 1.2 for the MATLAB code.

Figure 1.5 (a) Three-dimensional plot of a Fraunhofer diffraction pattern at z = 1 m, |\psi_p(x, y; z = 1 m)|. (b) Gray-scale plot of |\psi_p(x, y; z = 1 m)|. See Table 1.3 for the MATLAB code.


The term [x'^2 + y'^2]_{max} is like the maximum area of the source and if this area divided by the wavelength is much less than the distance z under consideration, the term \exp\{(-jk_0/2z)[x'^2 + y'^2]\} inside the integrand can be considered to be unity, and hence Eq. (1.36) becomes

\psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\iint \psi_{p0}(x', y')\exp\left[\frac{jk_0}{z}(xx' + yy')\right] dx'\, dy'.   (1.39)

Equation (1.39) is the Fraunhofer diffraction formula and is the limiting case of Fresnel diffraction. Equation (1.39) is therefore called the Fraunhofer approximation or the far field approximation as diffraction is observed at a far distance. In terms of the Fourier transform, we can write the Fraunhofer diffraction formula as follows:

Table 1.2 MATLAB code for Fresnel diffraction of a circular aperture, see Fig. 1.4

close all; clear all;
lambda=0.6*10^-6;    % wavelength, unit: m
delta=10*lambda;     % sampling period, unit: m
z=0.07;              % propagation distance, unit: m
M=512;               % space size
c=1:M;
r=1:M;
[C, R]=meshgrid(c, r);
THOR=((R-M/2-1).^2+(C-M/2-1).^2).^0.5;
RR=THOR.*delta;
OB=zeros(M);         % object
for a=1:M;
    for b=1:M;
        if RR(a,b)
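The listing in Table 1.2 breaks off at this point in this copy. The remaining steps would finish defining the circular opening and propagate the field with the Fresnel transfer function of Eq. (1.33); the continuation below is our hedged reconstruction, not the book's original lines. The 0.5 mm radius is taken from the caption of Fig. 1.4, while the threshold test, the variable names FD and DF, and the plotting calls are our own guesses.

        if RR(a,b)<=5*10^-4;     % inside the circular opening of radius 0.5 mm
            OB(a,b)=1;
        end
    end
end
% Fresnel propagation using the spatial frequency transfer function of Eq. (1.33)
k0=2*pi/lambda;
dk=2*pi/(M*delta);
kx=(-M/2:M/2-1)*dk;
[KX, KY]=meshgrid(kx, kx);
H=exp(-1j*k0*z).*exp(1j*(KX.^2+KY.^2)*z/(2*k0));
FD=fftshift(fft2(fftshift(OB)));
DF=fftshift(ifft2(fftshift(FD.*H)));
figure; imshow(mat2gray(abs(DF)));
title('Modulus of the Fresnel diffracted field')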


\psi_p(x, y; z) = \exp(-jk_0 z)\,\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]\mathcal{F}\{\psi_{p0}(x, y)\}\bigg|_{k_x = k_0 x/z,\; k_y = k_0 y/z}.   (1.40)

Figure 1.4 shows the simulation of Fresnel diffraction of a circular aperture function circ(r/r_0), i.e., \psi_{p0}(x, y) = circ(r/r_0), where r = \sqrt{x^2 + y^2} and circ(r/r_0) denotes a value 1 within a circle of radius r_0 and 0 otherwise. The wavelength used for the simulation is 0.6 \mu m. Since \psi_p(x, y; z) is a complex function, we plot its absolute value in the figures. Physically, the situation corresponds to the incidence of a plane wave with unit amplitude on an opaque screen with a circular opening of radius r_0, as \psi_{p0}(x, y) = 1 \times t(x, y) with t(x, y) = circ(r/r_0). We would then observe the intensity pattern, which is proportional to |\psi_p(x, y; z)|^2, at distance z away from the aperture. In Fig. 1.5, we show Fraunhofer diffraction. We have chosen the distance of 1 m so that the Fraunhofer approximation from Eq. (1.38) is satisfied.

Table 1.3 MATLAB code for Fraunhofer diffraction of a circular aperture, see Fig. 1.5

close all; clear all;
lambda=0.6*10^-6;    % wavelength, unit: m
delta=80*lambda;     % sampling period, unit: m
z=1;                 % propagation distance, unit: m
M=512;               % space size
c=1:M;
r=1:M;
[C, R]=meshgrid(c, r);
THOR=(((R-M/2-1).^2+(C-M/2-1).^2).^0.5)*delta;
OB=zeros(M);         % object
for a=1:M;
    for b=1:M;
        if THOR(a,b)
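Table 1.3 is likewise cut off here. Under Fraunhofer conditions the diffracted field is, by Eq. (1.40), simply a scaled Fourier transform of the aperture, so the remaining steps reduce to a single fft2. The continuation below is our hedged reconstruction, not the book's original lines; we assume the same 0.5 mm aperture radius as in Fig. 1.4, and the variable names and plotting calls are our own guesses.

        if THOR(a,b)<=5*10^-4;   % inside the circular opening of radius 0.5 mm
            OB(a,b)=1;
        end
    end
end
% Fraunhofer diffraction, Eq. (1.40): a single (scaled) Fourier transform of the aperture
k0=2*pi/lambda;
FD=abs(fftshift(fft2(fftshift(OB))))*delta^2*k0/(2*pi*z);  % |psi_p|, leading phase factors dropped
figure; mesh(FD); title('3D plot of the Fraunhofer pattern')      % cf. Fig. 1.5(a)
figure; imshow(mat2gray(FD)); title('Gray-scale plot')            % cf. Fig. 1.5(b)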


1.4 Ideal thin lens as an optical Fourier transformer

An ideal thin lens is a phase object, which means that it will only affect the phase of the incident light. For an ideal converging lens with a focal length f, the phase function of the lens is given by

t_f(x, y) = \exp\left[\frac{jk_0}{2f}(x^2 + y^2)\right],   (1.41)

where we have assumed that the lens is of infinite extent. For a plane wave of amplitude A incident upon the lens, we can employ the Fresnel diffraction formula to calculate the field distribution in the back focal plane of the lens. Using Eq. (1.37) for z = f, we have

\psi_p(x, y; f) = \exp(-jk_0 f)\,\frac{jk_0}{2\pi f}\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]
\mathcal{F}\left\{\psi_{p0}(x, y)\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]\right\}\bigg|_{k_x = k_0 x/f,\; k_y = k_0 y/f},   (1.42)

where \psi_{p0}(x, y) is given by \psi_{p0}(x, y) = A t(x, y), the amplitude of the incident plane wave multiplied by the transparency function of the aperture. In the present case, the transparency function of the aperture is given by the lens function t_f(x, y), i.e., t(x, y) = t_f(x, y). Hence \psi_{p0}(x, y) = A t(x, y) = A t_f(x, y). The field distribution f away from the lens, according to Eq. (1.37), is then given by

\psi_p(x, y; f) = \exp(-jk_0 f)\,\frac{jk_0}{2\pi f}\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]
\mathcal{F}\left\{A\exp\left[\frac{jk_0}{2f}(x^2 + y^2)\right]\exp\left[\frac{-jk_0}{2f}(x^2 + y^2)\right]\right\}\bigg|_{k_x = k_0 x/f,\; k_y = k_0 y/f} \propto \delta(x, y).   (1.43)

We see that the lens phase function cancels out exactly the quadratic phase function associated with Fresnel diffraction, giving the Fourier transform of the constant A, which is proportional to a delta function, \delta(x, y). This is consistent with geometrical optics, which states that all input rays parallel to the optical axis converge behind the lens to a point called the back focal point. The discussion thus far in a sense justifies the functional form of the phase function of the lens given by Eq. (1.41).

    We now look at a more complicated situation shown in Fig. 1.6, where a

    transparency t(x, y) illuminated by a plane wave of unity amplitude is located in

    the front focal plane of the ideal thin lens.

We want to find the field distribution in the back focal plane. The field immediately after t(x, y) is given by 1 \times t(x, y). The resulting field is then


undergoing Fresnel diffraction over a distance f. According to Fresnel diffraction, and hence using Eq. (1.35), the diffracted field immediately in front of the lens is given by t(x, y) * h(x, y; f). The field after the lens is then [t(x, y) * h(x, y; f)]\, t_f(x, y). Finally, the field at the back focal plane is found using Fresnel diffraction one more time over a distance of f, as illustrated in Fig. 1.6. The resulting field on the back focal plane of the lens can be written in terms of a series of convolution and multiplication operations as follows [2]:

\psi_p(x, y) = \{[t(x, y) * h(x, y; f)]\, t_f(x, y)\} * h(x, y; f).   (1.44)

The above equation can be rearranged to become, apart from some constant,

\psi_p(x, y) = \mathcal{F}\{t(x, y)\}\big|_{k_x = k_0 x/f,\; k_y = k_0 y/f} = T\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right),   (1.45)

where T(k_0 x/f, k_0 y/f) is the Fourier transform, or the spectrum, of t(x, y). We see that we have the exact Fourier transform of the input, t(x, y), on the back focal plane of the lens. Hence an ideal thin lens is an optical Fourier transformer.
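A concrete illustration (ours, not the book's): place a cosine grating t(x, y) = 1/2 + (1/2)\cos(2\pi x/\Lambda) in the front focal plane, as in Problem 1.13. Using transform pair 8 and the Fourier transform of the cosine, the spectrum is

T(k_x, k_y) = 4\pi^2\left[\tfrac{1}{2}\,\delta(k_x, k_y) + \tfrac{1}{4}\,\delta(k_x - 2\pi/\Lambda, k_y) + \tfrac{1}{4}\,\delta(k_x + 2\pi/\Lambda, k_y)\right],

so Eq. (1.45), with k_x = k_0 x/f and k_y = k_0 y/f, predicts three focused spots in the back focal plane: one on the axis and two at x = \pm(2\pi/\Lambda)(f/k_0) = \pm\lambda_0 f/\Lambda. The spot separation thus measures the grating period, which is the essence of the lens acting as an optical Fourier transformer.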

1.5 Optical image processing

Figure 1.6 is the backbone of an optical image processing system. Figure 1.7 shows a standard image processing system with Fig. 1.6 as the front end of the system. The system is known as the 4-f system as lens L1 and lens L2 both have the same focal length, f. p(x, y) is called the pupil function of the optical system and it is on the confocal plane.

On the object plane, we have an input in the form of a transparency, t(x, y), which is assumed to be illuminated by a plane wave. Hence, according to Eq. (1.45), we have its spectrum on the back focal plane of lens L1, T(k_0 x/f, k_0 y/f), where T is the Fourier transform of t(x, y). Hence the confocal plane of the optical system is often called the Fourier plane. The spectrum of the input image is now modified by the pupil function, as the field immediately after the pupil function

Figure 1.6 Lens as an optical Fourier transformer.


is T(k_0 x/f, k_0 y/f)\, p(x, y). According to Eq. (1.45) again, this field will be Fourier transformed to give the field on the image plane as

\psi_{pi} = \mathcal{F}\left\{T\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) p(x, y)\right\}\bigg|_{k_x = k_0 x/f,\; k_y = k_0 y/f},   (1.46)

which can be evaluated, in terms of convolution, to give

\psi_{pi}(x, y) = t(-x, -y) * \mathcal{F}\{p(x, y)\}\big|_{k_x = k_0 x/f,\; k_y = k_0 y/f} = t(-x, -y) * P\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) = t(-x, -y) * h_c(x, y),   (1.47)

where the negative sign in the argument of t(-x, -y) shows that the original input has been flipped and inverted on the image plane. P is the Fourier transform of p. From Eq. (1.47), we define h_c(x, y) as the coherent point spread function (CPSF) in optics, which is given by [1]

h_c(x, y) = \mathcal{F}\{p(x, y)\}\big|_{k_x = k_0 x/f,\; k_y = k_0 y/f} = P\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right).   (1.48)

By definition, the Fourier transform of the coherent point spread function is the coherent transfer function (CTF) given by [1]

H_c(k_x, k_y) = \mathcal{F}\{h_c(x, y)\} = \mathcal{F}\left\{P\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right)\right\} \propto p\left(\frac{f k_x}{k_0}, \frac{f k_y}{k_0}\right).   (1.49)

The expression given by Eq. (1.47) can be interpreted as the flipped and inverted image of t(x, y) being processed by the coherent point spread function given by Eq. (1.48). Therefore, image processing capabilities can be varied by simply designing the pupil function, p(x, y). Or we can interpret this in the spatial frequency domain: spatial filtering is proportional to the functional form of the

Figure 1.7 4-f image processing system.


pupil function, as evidenced by Eq. (1.46) together with Eq. (1.49). Indeed, Eq. (1.46) is the backbone of so-called coherent image processing in optics [1].

Let us look at an example. If we take p(x, y) = 1, this means that we do not modify the spectrum of the input image according to Eq. (1.46). Or, from Eq. (1.49), the coherent transfer function becomes unity, i.e., all-pass filtering, for all spatial frequencies of the input image. Mathematically, using Eq. (1.48) and item number 8 of Table 1.1, h_c(x, y) becomes

h_c(x, y) = \mathcal{F}\{1\}\big|_{k_x = k_0 x/f,\; k_y = k_0 y/f} = 4\pi^2\,\delta\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) = 4\pi^2\left(\frac{f}{k_0}\right)^2\delta(x, y),

a delta function, and the output image from Eq. (1.47) is

\psi_{pi}(x, y) \propto t(-x, -y) * \delta\left(\frac{k_0 x}{f}, \frac{k_0 y}{f}\right) \propto t(-x, -y).   (1.50)

To obtain the last step of the result in Eq. (1.50), we have used the properties of \delta(x, y) in Table 1.4.

If we now take p(x, y) = circ(r/r_0), from the interpretation of Eq. (1.49) we see that, for this kind of chosen pupil, filtering is of lowpass characteristic as the opening of the circle on the pupil plane only allows the low spatial frequencies to physically go through. Figure 1.8 shows examples of lowpass filtering. In Fig. 1.8(a) and 1.8(b), we show the original image and its spectrum, respectively. In Fig. 1.8(c) and 1.8(e) we show the filtered images, and lowpass filtered spectra are shown in Fig. 1.8(d) and 1.8(f), respectively, where the lowpass filtered spectra are obtained by multiplying the original spectrum by circ(r/r_0) [see Eq. (1.46)]. Note that the radius r_0 in Fig. 1.8(d) is larger than that in Fig. 1.8(f). In Fig. 1.9, we show highpass filtering examples where we take p(x, y) = 1 - circ(r/r_0).

So far, we have discussed the use of coherent light, such as plane waves derived from a laser, to illuminate t(x, y) in the optical system shown in Fig. 1.7. The optical system is called a coherent optical system in that complex quantities are

Table 1.4 Properties of a delta function

Unit area property:   \iint \delta(x - x_0, y - y_0)\, dx\, dy = 1

Scaling property:     \delta(ax, by) = \frac{1}{|ab|}\,\delta(x, y)

Product property:     f(x, y)\,\delta(x - x_0, y - y_0) = f(x_0, y_0)\,\delta(x - x_0, y - y_0)

Sampling property:    \iint f(x, y)\,\delta(x - x_0, y - y_0)\, dx\, dy = f(x_0, y_0)


Figure 1.8 Lowpass filtering examples: (a) original image, (b) spectrum of (a); (c) and (e) lowpass images; (d) and (f) spectra of (c) and (e), respectively. See Table 1.5 for the MATLAB code.

Figure 1.9 Highpass filtering examples: (a) original image, (b) spectrum of (a); (c) and (e) highpass images; (d) and (f) spectra of (c) and (e), respectively. See Table 1.6 for the MATLAB code.


manipulated. Once we have found the complex field on the image plane given by Eq. (1.47), the corresponding image intensity is

I_i(x, y) = \psi_{pi}(x, y)\,\psi_{pi}^*(x, y) = |t(-x, -y) * h_c(x, y)|^2,   (1.51)

which is the basis for coherent image processing. However, light from extended sources, such as fluorescent tube lights, is incoherent. The system shown in Fig. 1.7 becomes an incoherent optical system upon illumination from an incoherent source.

Table 1.5 MATLAB code for lowpass filtering of an image, see Fig. 1.8

clear all; close all;
A=imread('lena.jpg');    % read 512x512 8-bit image
A=double(A);
A=A/255;
SP=fftshift(fft2(fftshift(A)));
D=abs(SP);
D=D(129:384,129:384);
figure; imshow(A);
title('Original image')
figure; imshow(30.*mat2gray(D));   % spectrum
title('Original spectrum')
c=1:512;
r=1:512;
[C, R]=meshgrid(c, r);
CI=((R-257).^2+(C-257).^2);
filter=zeros(512,512);
% produce a low-pass filter
for a=1:512;
    for b=1:512;
        if CI(a,b)>20^2;   % filter diameter
            filter(a,b)=0;
        else
            filter(a,b)=1;
        end
    end
end
G=abs(filter.*SP);
G=G(129:384,129:384);
figure; imshow(30.*mat2gray(G));
title('Low-pass spectrum')
SPF=SP.*filter;
E=abs(fftshift(ifft2(fftshift(SPF))));
figure; imshow(mat2gray(E));
title('Low-pass image')


The optical system manipulates intensity quantities directly. To find the image intensity, we perform convolution with the given intensity quantities as follows:

I_i(x, y) = |t(-x, -y)|^2 * |h_c(x, y)|^2.   (1.52)

Equation (1.52) is the basis for incoherent image processing [1], and |h_c(x, y)|^2 is the intensity point spread function (IPSF) [1]. Note that the IPSF is real and non-negative, which means that it is not possible to implement even the simplest enhancement and restoration algorithms (e.g., highpass, derivatives, etc.), which

Table 1.6 MATLAB code for highpass filtering of an image, see Fig. 1.9

clear all; close all;
A=imread('lena.jpg');    % read 512x512 8-bit image
A=double(A);
A=A/255;
SP=fftshift(fft2(fftshift(A)));
D=abs(SP);
D=D(129:384,129:384);
figure; imshow(A);
title('Original image')
figure; imshow(30.*mat2gray(D));   % spectrum
title('Original spectrum')
c=1:512;
r=1:512;
[C, R]=meshgrid(c, r);
CI=((R-257).^2+(C-257).^2);
filter=zeros(512,512);
% produce a high-pass filter
for a=1:512;
    for b=1:512;
        if CI(a,b)
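Table 1.6 is cut off at this point in this copy. Its remainder presumably mirrors Table 1.5 with the filter inverted (block inside the chosen radius, pass outside); the continuation below is our hedged reconstruction, not the book's original lines.

        if CI(a,b)<=20^2;   % filter diameter
            filter(a,b)=0;  % block the low spatial frequencies
        else
            filter(a,b)=1;  % pass the high spatial frequencies
        end
    end
end
G=abs(filter.*SP);
G=G(129:384,129:384);
figure; imshow(30.*mat2gray(G));
title('High-pass spectrum')
SPF=SP.*filter;
E=abs(fftshift(ifft2(fftshift(SPF))));
figure; imshow(mat2gray(E));
title('High-pass image')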


require a bipolar point spread function. Novel incoherent image processing techniques seek to realize bipolar point spread functions (see, for example, [3-6]).

The Fourier transform of the IPSF gives a transfer function known as the optical transfer function (OTF) of the incoherent optical system:

OTF(k_x, k_y) = \mathcal{F}\{|h_c(x, y)|^2\}.   (1.53)

Using Eq. (1.49), we can relate the coherent transfer function to the OTF as follows:

Figure 1.10 Incoherent spatial filtering examples using p(x, y) = circ(r/r_0): (a) original image, (b) spectrum of (a); (c) and (f) filtered images; (d) and (g) spectra of (c) and (f), respectively; (e) and (h) cross sections through the center of the OTF using different r_0 in the pupil function for the processed images in (c) and (f), respectively. The full dimension along the horizontal axis contains 256 pixels for figures (b) to (h), while figures (e) and (h) zoom in on the peak with 30 pixels plotted. See Table 1.7 for the MATLAB code.


OTF(k_x, k_y) = H_c(k_x, k_y) \otimes H_c(k_x, k_y) = \iint H_c^*(k_x', k_y')\, H_c(k_x' + k_x, k_y' + k_y)\, dk_x'\, dk_y',   (1.54)

where \otimes denotes correlation [see Table 1.1]. The modulus of the OTF is called the modulation transfer function (MTF), and it is important to note that

|OTF(k_x, k_y)| \le |OTF(0, 0)|,   (1.55)

Figure 1.11 Incoherent spatial filtering examples using p(x, y) = 1 - circ(r/r_0): (a) original image, (b) spectrum of (a); (c) and (f) filtered images; (d) and (g) spectra of (c) and (f), respectively; (e) and (h) cross sections through the center of the OTF using different r_0 in the pupil function for the processed images of (c) and (f), respectively. The full dimension along x contains 256 pixels for figures (b) to (h). See Table 1.8 for the MATLAB code.


which states that the MTF always has a central maximum. This signifies that we always have lowpass filtering characteristics regardless of the pupil function used in an incoherent optical system. In Figs. 1.10 and 1.11, we show incoherent spatial filtering results in an incoherent optical system [1] using p(x, y) = circ(r/r_0) and p(x, y) = 1 - circ(r/r_0), respectively.

Table 1.7 MATLAB code for incoherent spatial filtering, circ(r/r0), see Fig. 1.10

clear all; close all;
A=imread('lena.jpg');    % read 512x512 8-bit image
A=double(A);
A=A/255;
SP=fftshift(fft2(fftshift(A)));
D=abs(SP);
D=D(129:384,129:384);
figure; imshow(A);
title('Original image')
figure; imshow(30.*mat2gray(D));   % spectrum
title('Original spectrum')
c=1:512;
r=1:512;
[C, R]=meshgrid(c, r);
CI=((R-257).^2+(C-257).^2);
pup=zeros(512,512);
% produce a circular pupil
for a=1:512;
    for b=1:512;
        if CI(a,b)>30^2;   % pupil diameter 30 or 15
            pup(a,b)=0;
        else
            pup(a,b)=1;
        end
    end
end
h=ifft2(fftshift(pup));
OTF=fftshift(fft2(h.*conj(h)));
OTF=OTF/max(max(abs(OTF)));
G=abs(OTF.*SP);
G=G(129:384,129:384);
figure; imshow(30.*mat2gray(G));
title('Filtered spectrum')
I=abs(fftshift(ifft2(fftshift(OTF.*SP))));
figure; imshow(mat2gray(I));
title('Filtered image')


Problems

1.1 Starting from the Maxwell equations, (a) derive the wave equation for E in a linear, homogeneous, and isotropic medium characterized by \epsilon and \mu, and (b) do the same as in (a) but for H.

1.2 Verify the Fourier transform properties 2, 3 and 4 in Table 1.1.

1.3 Verify the Fourier transform pairs 5 and 6 in Table 1.1.

Table 1.8 MATLAB code for incoherent spatial filtering, 1 - circ(r/r0), see Fig. 1.11

clear all; close all;
A=imread('lena.jpg');    % read 512x512 8-bit image
A=double(A);
A=A/255;
SP=fftshift(fft2(fftshift(A)));
D=abs(SP);
D=D(129:384,129:384);
figure; imshow(A);
title('Original image')
figure; imshow(30.*mat2gray(D));   % spectrum
title('Original spectrum')
c=1:512;
r=1:512;
[C, R]=meshgrid(c, r);
CI=((R-257).^2+(C-257).^2);
pup=zeros(512,512);
% produce a circular pupil
for a=1:512;
    for b=1:512;
        if CI(a,b)>350^2;   % pupil diameter 300 or 350
            pup(a,b)=1;
        else
            pup(a,b)=0;
        end
    end
end
h=ifft2(fftshift(pup));
OTF=fftshift(fft2(h.*conj(h)));
OTF=OTF/max(max(abs(OTF)));
G=abs(OTF.*SP);
G=G(129:384,129:384);
figure; imshow(30.*mat2gray(G));
title('Filtered spectrum')
I=abs(fftshift(ifft2(fftshift(OTF.*SP))));
figure; imshow(mat2gray(I));
title('Filtered image')


1.4 Verify the Fourier transform pairs 7, 8, 9, 10, and 11 in Table 1.1.

1.5 Assume that the solution to the three-dimensional wave equation in Eq. (1.11) is given by \psi(x, y, z, t) = \psi_p(x, y; z)\exp(j\omega_0 t); verify that the Helmholtz equation for \psi_p(x, y; z) is given by

\frac{\partial^2 \psi_p}{\partial x^2} + \frac{\partial^2 \psi_p}{\partial y^2} + \frac{\partial^2 \psi_p}{\partial z^2} + k_0^2 \psi_p = 0,

where k_0 = \omega_0/v.

1.6 Write down functions of the following physical quantities in Cartesian coordinates (x, y, z).
(a) A plane wave on the x-z plane in free space. The angle between the propagation vector and the z-axis is \theta.
(b) A diverging spherical wave emitted from a point source at (x_0, y_0, z_0) under the paraxial approximation.

1.7 A rectangular aperture described by the transparency function t(x, y) = rect(x/x_0, y/y_0) is illuminated by a plane wave of unit amplitude. Determine the complex field, \psi_p(x, y; z), under Fraunhofer diffraction. Plot the intensity, |\psi_p(x, y; z)|^2, along the x-axis and label all essential points along the axis.

1.8 Repeat P7 but with the transparency function given by

t(x, y) = rect\left(\frac{x - X/2}{x_0}, \frac{y}{y_0}\right) + rect\left(\frac{x + X/2}{x_0}, \frac{y}{y_0}\right), \qquad X \gg x_0.

1.9 Assume that the pupil function in the 4-f image processing system in Fig. 1.7 is given by rect(x/x_0, y/y_0). (a) Find the coherent transfer function, (b) give an expression for the optical transfer function and express it in terms of the coherent transfer function, and (c) plot both of the transfer functions.

1.10 Repeat P9 but with the pupil function given by the transparency function in P8.

1.11 Consider a grating with transparency function t(x, y) = \frac{1}{2} + \frac{1}{2}\cos(2\pi x/\Lambda), where \Lambda is the period of the grating. Determine the complex field, \psi_p(x, y; z), under Fresnel diffraction if the grating is normally illuminated by a unit amplitude plane wave.

1.12 Consider the grating given in P11. Determine the complex field, \psi_p(x, y; z), under Fraunhofer diffraction if the grating is normally illuminated by a unit amplitude plane wave.

1.13 Consider the grating given in P11 as the input pattern in the 4-f image processing system in Fig. 1.7. Assuming coherent illumination, find the intensity distribution at the output plane when a small opaque stop is located at the center of the Fourier plane.


References

1. T.-C. Poon and P. P. Banerjee, Contemporary Optical Image Processing with MATLAB (Elsevier, Oxford, UK, 2001).
2. T.-C. Poon and T. Kim, Engineering Optics with MATLAB (World Scientific, Hackensack, NJ, 2006).
3. A. W. Lohmann and W. T. Rhodes, Two-pupil synthesis of optical transfer functions, Applied Optics 17, 1141-1151 (1978).
4. W. Stoner, Incoherent optical processing via spatially offset pupil masks, Applied Optics 17, 2454-2467 (1978).
5. T.-C. Poon and A. Korpel, Optical transfer function of an acousto-optic heterodyning image processor, Optics Letters 4, 317-319 (1979).
6. G. Indebetouw and T.-C. Poon, Novel approaches of incoherent image processing with emphasis on scanning methods, Optical Engineering 31, 2159-2167 (1992).


    2

    Fundamentals of holography

    2.1 Photography and holography

When an object is illuminated, we see the object as light is scattered to create an object wave reaching our eyes. The object wave is characterized by two quantities: the amplitude, which corresponds to brightness or intensity, and the phase, which corresponds to the shape of the object. The amplitude and phase are conveniently represented by the so-called complex amplitude introduced in Chapter 1. The complex amplitude contains complete information about the object. When the object wave illuminates a recording medium such as a photographic film or a CCD camera, what is recorded is the variation in light intensity at the plane of the recording medium, as these recording media respond only to light intensity. Mathematically, the intensity, I(x, y), is proportional to the complex amplitude squared, i.e., I(x, y) \propto |\psi_p(x, y)|^2, where \psi_p is the two-dimensional complex amplitude on the recording medium. The result of the variation in light intensity is a photograph, and if we want to make a transparency from it, the amplitude transmittance t(x, y) of the transparency can be made proportional to the recorded intensity, or we simply write as follows:

t(x, y) \propto |\psi_p(x, y)|^2.   (2.1)

Hence in photography, as a result of this intensity recording, all information about the relative phases of the light waves from the original three-dimensional scene is lost. This loss of the phase information of the light field in fact destroys the three-dimensional character of the scene, i.e., we cannot change the perspective of the image in the photograph by viewing it from a different angle (i.e., parallax) and we cannot interpret the depth of the original three-dimensional scene. In essence, a photograph is a two-dimensional recording of a three-dimensional scene.

Holography is a method invented by Gabor in 1948 [1] in which not only the amplitude but also the phase of the light field can be recorded. The word


holography combines parts of two Greek words: holos, meaning "complete," and graphein, meaning "to write" or "to record." Thus, holography means the recording of complete information. Hence, in the holographic process, the recording medium records the original complex amplitude, i.e., both the amplitude and the phase of the complex amplitude of the object wave. The result of the recorded intensity variations is now called a hologram. When a hologram is properly illuminated at a later time, our eyes observe the intensity generated by the same complex field. As long as the exact complex field is restored, we can observe the original complex field at a later time. The restored complex field preserves the entire parallax and depth information much like the original complex field and is interpreted by our brain as the same three-dimensional object.

2.2 Hologram as a collection of Fresnel zone plates

The principle of holography can be explained by recording a point object, since any object can be considered as a collection of points. Figure 2.1 shows a collimated laser split into two plane waves and recombined through the use of two mirrors (M1 and M2) and two beam splitters (BS1 and BS2).

One plane wave is used to illuminate the pinhole aperture (our point object), and the other illuminates the recording medium directly. The plane wave that is

Figure 2.1 Holographic recording of a point object (realized by a pinhole aperture).


scattered by the point object generates a diverging spherical wave toward the recording medium. This diverging wave is known as an object wave in holography. The plane wave that directly illuminates the recording medium is known as a reference wave. Let \psi_0 represent the field distribution of the object wave on the plane of the recording medium, and similarly let \psi_r represent the field distribution of the reference wave on the plane of the recording medium. The recording medium now records the interference of the reference wave and the object wave, i.e., what is recorded is given by |\psi_0 + \psi_r|^2, provided the reference wave and the object wave are mutually coherent over the recording medium. The coherence of the light waves is guaranteed by the use of a laser source (we will discuss coherence in Section 2.4). This kind of recording is known as holographic recording, distinct from a photographic recording in that the reference wave does not exist and hence only the object wave is recorded.

    We shall discuss holographic recording of a point source object mathematically.Let us consider the recording of a point object at a distancez0from the recording

    medium as shown inFig. 2.1. The pinhole aperture is modeled as a delta function,

    (x, y), which gives rise to an object wave, 0, according to Fresnel diffraction

    [seeEq. (1.35)], on the recording medium as

    0x,y;z0 x,y hx,y;z0 x,y expjk0z0 jk0

    2z0exp

    jk0

    2z0x2 y2

    exp

    jk

    0z

    0 jk0

    2z0expjk0

    2z0 x2

    y2

    : 2:2This object wave is aparaxial spherical wave. For the reference plane wave, we

    assume that the plane wave has the same initial phase as the point object at a

    distancez0away from the recording medium. Therefore, its eld distribution on the

    recording medium is r a exp(jk0z0), where a, considered real for simplicityhere, is the amplitude of the plane wave. Hence, the recorded intensity distribution

    on the recording medium or the hologram with amplitude transmittance is given by

t(x, y) = |\psi_r + \psi_0|^2 = \left| a\exp(-jk_0 z_0) + \exp(-jk_0 z_0)\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right] \right|^2

or

t(x, y) = a^2 + \left(\frac{k_0}{2\pi z_0}\right)^2 - a\frac{jk_0}{2\pi z_0}\exp\left[\frac{jk_0}{2z_0}(x^2 + y^2)\right] + a\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right].    (2.3)

Note that the last term, which is really the desirable term of the equation, is the total complex field of the original object wave [see Eq. (2.2)] aside from the constant


terms a and exp(−jk0z0). Now, Eq. (2.3) can be simplified to a real function and we have a real hologram given by

t(x, y) = A + B \sin\left[\frac{k_0}{2z_0}(x^2 + y^2)\right],    (2.4)

where A = a^2 + (k_0/2\pi z_0)^2 is some constant bias, and B = ak_0/\pi z_0. The expression in Eq. (2.4) is often called the sinusoidal Fresnel zone plate (FZP), which is the hologram of the point source object at a distance z = z0 away from the recording medium. Plots of the FZPs are shown in Fig. 2.2, where we have set k0 to be some constant but for z = z0 and z = 2z0.

When we investigate the quadratic spatial dependence of the FZP, we notice that

    the spatial rate of change of the phase of the FZP, say along the x-direction, is

    given by

f_{local} = \frac{1}{2\pi}\frac{d}{dx}\left(\frac{k_0}{2z_0}x^2\right) = \frac{k_0 x}{2\pi z_0}.    (2.5)

This is a local fringe frequency that increases linearly with the spatial coordinate, x. In other words, the further we are away from the center of the zone, the higher the local spatial frequency, which is obvious from Fig. 2.2. Note also from the figure, when we double the z value, say from z = z0 to z = 2z0, the local fringes become less dense, as is evident from Eq. (2.5) as well. Hence the local frequency carries the information on z, i.e., from the local frequency we can deduce how far the object point source is away from the recording medium, an important aspect of holography.
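Equation (2.4) is simple to visualize numerically. The following MATLAB fragment (an illustrative sketch, not taken from the text; the wavelength, distances, and grid size are assumed values chosen to avoid aliasing of the fringes) plots the sinusoidal FZP for z = z0 and z = 2z0, reproducing the behavior of Fig. 2.2.

% Sinusoidal FZP of Eq. (2.4); all parameters are illustrative
lambda0=0.6328e-6; k0=2*pi/lambda0;    % wavelength (m) and wavenumber
z0=0.5;                                % point-object distance (m)
N=512; Lx=10e-3;                       % number of samples and hologram width (m)
x=linspace(-Lx/2,Lx/2,N);
[X,Y]=meshgrid(x,x);
t1=1+sin(k0/(2*z0)*(X.^2+Y.^2));       % FZP recorded with the point at z0 (A=B=1)
t2=1+sin(k0/(2*2*z0)*(X.^2+Y.^2));     % FZP recorded with the point at 2*z0
figure;
subplot(1,2,1); imagesc(x*1e3,x*1e3,t1); colormap gray; axis image;
title('z = z_0'); xlabel('x (mm)'); ylabel('y (mm)');
subplot(1,2,2); imagesc(x*1e3,x*1e3,t2); colormap gray; axis image;
title('z = 2z_0'); xlabel('x (mm)'); ylabel('y (mm)');

Doubling z0 visibly halves the local fringe frequency at any given radius, in agreement with Eq. (2.5).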

Figure 2.2 Plots of Fresnel zone plates for z = z0 and z = 2z0.

To reconstruct the original light field from the hologram, t(x, y), we can simply illuminate the hologram with a plane wave ψrec, called the reconstruction wave in


    holography, which gives a complex amplitude at z away from the hologram,

    according to Fresnel diffraction,

\psi_{rec}\, t(x, y) * h(x, y; z) = \psi_{rec}\left\{A - a\frac{jk_0}{2\pi z_0}\exp\left[\frac{jk_0}{2z_0}(x^2 + y^2)\right] + a\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right]\right\} * h(x, y; z).    (2.6)

Evaluation of the above equation gives three light fields emerging from the hologram. The light field due to the first term in the hologram is a plane wave, as ψrec A * h(x, y; z) ∝ ψrec A, which makes sense as the plane wave propagates without diffraction. This out-going plane wave is called a zeroth-order beam in holography, which provides a uniform output at the observation plane. In the present analysis the interference is formed using a paraxial spherical wave and a plane wave, so the zeroth-order beam is uniform. However, if the object light is not a uniform field, the zeroth-order beam will not be uniform. Now, the field due to the second term is

\psi_{rec}\left[-a\frac{jk_0}{2\pi z_0}\exp\left(\frac{jk_0}{2z_0}(x^2 + y^2)\right)\right] * h(x, y; z) \propto \frac{jk_0}{2\pi z_0}\frac{jk_0}{2\pi z}\exp\left[\frac{jk_0}{2z_0}(x^2 + y^2)\right] * \exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]
\propto \exp\left[\frac{jk_0}{2(z_0 - z)}(x^2 + y^2)\right].    (2.7)

This is a converging spherical wave if z < z0. However, when z > z0, we have a diverging wave. For z = z0, the wave focuses to a real point source z0 away from the hologram and is given by a delta function, δ(x, y). Now, finally, for the last term

    in the equation, we have

\psi_{rec}\left[a\frac{jk_0}{2\pi z_0}\exp\left(\frac{-jk_0}{2z_0}(x^2 + y^2)\right)\right] * h(x, y; z) \propto \frac{jk_0}{2\pi z_0}\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right] * \exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]
\propto \exp\left[\frac{-jk_0}{2(z_0 + z)}(x^2 + y^2)\right],    (2.8)

and we have a diverging wave with its virtual point source at a distance z = z0 behind the hologram, on the opposite side to the observer. This recon-

    structed point source is at the exact location of the original point source object.


The situation is illustrated in Fig. 2.3. The reconstructed real point source is called the twin image of the virtual point source.

Although both the virtual image and the real image exhibit the depth of the object, the virtual image is usually used for applications of three-dimensional display. For the virtual image, the observer will see a reconstructed image with the same perspective as the original object. For the real image, the reconstructed image is a mirror and inside-out image of the original object, as shown in Fig. 2.4. This type of image is called the pseudoscopic image, while the image with normal perspective is called the orthoscopic image. Because the pseudoscopic image cannot provide natural parallax to the observer, it is not suitable for three-dimensional display.
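The reconstruction described above can also be checked numerically. The sketch below (an illustrative calculation with assumed parameters; it uses a standard FFT-based Fresnel transfer-function propagation rather than any routine defined in this book) builds the FZP hologram of Eq. (2.4), illuminates it with a unit-amplitude plane wave, and propagates the transmitted field by z = z0. The real image appears as a bright focused spot at the center, superposed on the zeroth-order background and the weak diverging twin-image term.

% Plane-wave reconstruction of the FZP hologram; illustrative parameters
lambda0=0.6328e-6; k0=2*pi/lambda0;
z0=0.5;                                 % recording (and reconstruction) distance (m)
N=512; Lx=10e-3; dx=Lx/N;
x=(-N/2:N/2-1)*dx; [X,Y]=meshgrid(x,x);
t=1+sin(k0/(2*z0)*(X.^2+Y.^2));         % FZP hologram, Eq. (2.4) with A=B=1
fx=(-N/2:N/2-1)/Lx; [FX,FY]=meshgrid(fx,fx);
H=exp(1i*pi*lambda0*z0*(FX.^2+FY.^2));  % Fresnel transfer function (constant phase dropped)
psi=ifft2(fft2(t).*ifftshift(H));       % field at z = z0 for unit plane-wave illumination
figure;
imagesc(x*1e3,x*1e3,abs(psi).^2); colormap gray; axis image;
title('Reconstructed intensity at z = z_0'); xlabel('x (mm)'); ylabel('y (mm)');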

Let us now see what happens if we have two point source objects given by δ(x, y) and δ(x − x1, y − y1). They are located z0 away from the recording medium. The object wave now becomes

\psi_0(x, y; z_0) = \left[b_0\,\delta(x, y) + b_1\,\delta(x - x_1, y - y_1)\right] * h(x, y; z_0),    (2.9)

where b0 and b1 denote the amplitudes of the two point sources.

Figure 2.3 Reconstruction of a FZP with the existence of the twin image (which is the real image reconstructed in the figure).

Figure 2.4 Orthoscopic and pseudoscopic images.

The hologram now becomes


t(x, y) = |\psi_r + \psi_0(x, y; z_0)|^2
= \left| a\exp(-jk_0 z_0) + b_0\exp(-jk_0 z_0)\frac{jk_0}{2\pi z_0}\exp\left[\frac{-jk_0}{2z_0}(x^2 + y^2)\right] + b_1\exp(-jk_0 z_0)\frac{jk_0}{2\pi z_0}\exp\left\{\frac{-jk_0}{2z_0}\left[(x - x_1)^2 + (y - y_1)^2\right]\right\} \right|^2.    (2.10)

    Again, the above expression can be put in a real form, i.e., we have

t(x, y) = C + \frac{ab_0 k_0}{\pi z_0}\sin\left[\frac{k_0}{2z_0}(x^2 + y^2)\right] + \frac{ab_1 k_0}{\pi z_0}\sin\left\{\frac{k_0}{2z_0}\left[(x - x_1)^2 + (y - y_1)^2\right]\right\}
+ 2b_0 b_1\left(\frac{k_0}{2\pi z_0}\right)^2\cos\left[\frac{k_0}{2z_0}\left(x_1^2 + y_1^2 - 2xx_1 - 2yy_1\right)\right],    (2.11)

    where C is again some constant bias obtained similarly as in Eq. (2.4). We

recognize that the second and third terms are our familiar FZPs associated with each

    point source, while the last term is a cosinusoidal fringe grating which comes

    about due to interference among the spherical waves. Again, only one term from

    each of the sinusoidal FZPs contains the desirable information as each contains

the original light field for the two points. The other terms in the FZPs are

    undesirable upon reconstruction, and give rise to twin images. The cosinusoidal

    grating in general introduces noise on the reconstruction plane. If we assume the

    two point objects are close together, then the spherical waves reaching the

    recording medium will intersect at small angles, giving rise to interference fringes

    of low spatial frequency. This low frequency as it appears on the recording

    medium corresponds to a coarse grating, which diffracts the light by a small

    angle, giving the zeroth-order beam some structure physically. Parker Givens has

    previously given a general form of such a hologram due to a number of point

    sources [2,3].
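The two-point hologram is also easy to generate directly from Eq. (2.10), whose expansion is Eq. (2.11). The fragment below (illustrative values only; the reference amplitude a is simply set comparable to the object-wave amplitudes so that all terms are visible) displays the resulting fringe pattern, in which the two zone-plate structures and the superposed straight-line grating can be recognized.

% Hologram of two point sources, Eq. (2.10); parameters are illustrative
lambda0=0.6328e-6; k0=2*pi/lambda0;
z0=0.5; x1=1e-3; y1=0;                  % second point displaced by 1 mm
N=512; Lx=10e-3; xv=linspace(-Lx/2,Lx/2,N);
[X,Y]=meshgrid(xv,xv);
sph=@(xc,yc) (1i*k0/(2*pi*z0))*exp(-1i*k0/(2*z0)*((X-xc).^2+(Y-yc).^2));
psi0=sph(0,0)+sph(x1,y1);               % object wave of the two points (b0=b1=1)
a=abs(1i*k0/(2*pi*z0));                 % reference amplitude, chosen comparable
t=abs(a+psi0).^2;                       % Eq. (2.10); the common exp(-j k0 z0) cancels
figure;
imagesc(xv*1e3,xv*1e3,t); colormap gray; axis image;
title('Hologram of two point sources'); xlabel('x (mm)'); ylabel('y (mm)');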

    2.3 Three-dimensional holographic imaging

In this section, we study the lateral and longitudinal magnifications in holographic

    imaging. To make the situation a bit more general, instead of using plane waves for

    recording and reconstruction as in the previous section, we use point sources. We

    consider the geometry for recording shown in Fig. 2.5. The two point objects,

    labeled 1 and 2, and the reference wave, labeled R, emit spherical waves that, on

the plane of the recording medium, contribute to complex fields, ψp1, ψp2, and ψpR,

    respectively, given by


\psi_{p1}(x, y) = \delta(x - h/2, y) * h(x, y; R) = \exp(-jk_0 R)\frac{jk_0}{2\pi R}\exp\left\{\frac{-jk_0}{2R}\left[(x - h/2)^2 + y^2\right]\right\}
\propto \exp\left\{\frac{-jk_0}{2R}\left[(x - h/2)^2 + y^2\right]\right\},    (2.12)

\psi_{p2}(x, y) = \delta(x + h/2, y) * h(x, y; R + d) = \exp\left[-jk_0(R + d)\right]\frac{jk_0}{2\pi (R + d)}\exp\left\{\frac{-jk_0}{2(R + d)}\left[(x + h/2)^2 + y^2\right]\right\}
\propto \exp\left\{\frac{-jk_0}{2(R + d)}\left[(x + h/2)^2 + y^2\right]\right\},    (2.13)

and

\psi_{pR}(x, y) = \delta(x + a, y) * h(x, y; l_1) = \exp(-jk_0 l_1)\frac{jk_0}{2\pi l_1}\exp\left\{\frac{-jk_0}{2l_1}\left[(x + a)^2 + y^2\right]\right\}
\propto \exp\left\{\frac{-jk_0}{2l_1}\left[(x + a)^2 + y^2\right]\right\}.    (2.14)

    These spherical waves interfere on the recording medium to yield a hologram

    given by

t(x, y) = |\psi_{p1}(x, y) + \psi_{p2}(x, y) + \psi_{pR}(x, y)|^2 = \left[\psi_{p1}(x, y) + \psi_{p2}(x, y) + \psi_{pR}(x, y)\right]\left[\psi_{p1}(x, y) + \psi_{p2}(x, y) + \psi_{pR}(x, y)\right]^*,    (2.15)

Figure 2.5 Recording geometry for the two point objects, 1 and 2. The reference point source is labeled R.


    where the superscript * represents the operation of a complex conjugate. Rather

than write down the complete expression for t(x, y) explicitly, we will, on the basis of our previous experience, pick out some relevant terms responsible for image reconstruction. The terms of relevance are t_{rel i}(x, y), where i = 1, 2, 3, 4:

t_{rel1}(x, y) = \psi_{p1}^*(x, y)\,\psi_{pR}(x, y) = \exp\left\{\frac{jk_0}{2R}\left[(x - h/2)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x + a)^2 + y^2\right]\right\},    (2.16a)

t_{rel2}(x, y) = \psi_{p2}^*(x, y)\,\psi_{pR}(x, y) = \exp\left\{\frac{jk_0}{2(R + d)}\left[(x + h/2)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x + a)^2 + y^2\right]\right\},    (2.16b)

t_{rel3}(x, y) = \psi_{p1}(x, y)\,\psi_{pR}^*(x, y) = t_{rel1}^*(x, y),    (2.16c)

t_{rel4}(x, y) = \psi_{p2}(x, y)\,\psi_{pR}^*(x, y) = t_{rel2}^*(x, y).    (2.16d)

Note that trel3(x, y) and trel4(x, y) contain the original wavefronts ψp1(x, y) and ψp2(x, y) of points 1 and 2, respectively, and upon reconstruction they give rise to virtual images as shown in the last section for a single point object. However, trel1(x, y) and trel2(x, y) contain the complex conjugates of the original complex amplitudes, ψ*p1(x, y) and ψ*p2(x, y), of points 1 and 2, respectively, and upon reconstruction they give rise to real images. We shall now show how these reconstructions come about mathematically for spherical reference recording and reconstruction.

Figure 2.6 Reconstruction geometry for the two point objects, 1 and 2. The reconstruction point source is labeled r.

The reconstruction geometry for real images is shown in Fig. 2.6, where the hologram just constructed is illuminated with a reconstruction spherical wave from


    a point source labeled r. For simplicity, we assume that the wavelength of the

    reconstruction wave is the same as that of the waves of the recording process.

Hence the complex field, ψr(x, y), illuminating the hologram is

\psi_r(x, y) = \delta(x - b, y) * h(x, y; l_2) = \exp(-jk_0 l_2)\frac{jk_0}{2\pi l_2}\exp\left\{\frac{-jk_0}{2l_2}\left[(x - b)^2 + y^2\right]\right\}
\propto \exp\left\{\frac{-jk_0}{2l_2}\left[(x - b)^2 + y^2\right]\right\}.    (2.17)

We find the total complex field immediately behind (away from the source) the hologram by multiplying Eq. (2.17) with Eq. (2.15), but the reconstructions due to the relevant terms are

\psi_r(x, y)\,t_{rel i}(x, y),    (2.18)

where the treli are defined in Eqs. (2.16).

Consider, first, the contribution from ψr(x, y)trel1(x, y). After propagation through a distance z behind the hologram, the complex field is transformed according to the Fresnel diffraction formula. Note that because the field is converging, it will contribute to a real image. Explicitly, the field can be written as

\psi_r(x, y)\,t_{rel1}(x, y) * h(x, y; z) = \psi_r(x, y)\,t_{rel1}(x, y) * \exp(-jk_0 z)\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]
\propto \exp\left\{\frac{-jk_0}{2l_2}\left[(x - b)^2 + y^2\right]\right\}\exp\left\{\frac{jk_0}{2R}\left[(x - h/2)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x + a)^2 + y^2\right]\right\} * \frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right].    (2.19)

From the definition of the convolution integral [see Table 1.1], we perform the integration by writing the functions involved with new independent variables x′, y′ and (x − x′, y − y′). We can then equate the coefficients of x′² and y′², appearing in the exponents, to zero, thus leaving only linear terms in x′, y′. Doing this for Eq. (2.19), we have

\frac{1}{R} - \frac{1}{l_1} - \frac{1}{l_2} - \frac{1}{z_{r1}} = 0,    (2.20)

where we have replaced z by z_{r1}. Here z_{r1} is the distance of the real image reconstruction of point object 1 behind the hologram. We can solve for z_{r1} to get

z_{r1} = \left(\frac{1}{R} - \frac{1}{l_1} - \frac{1}{l_2}\right)^{-1} = \frac{R\,l_1 l_2}{l_1 l_2 - (l_1 + l_2)R}.    (2.21)


At this distance, we can write Eq. (2.19) as

\psi_r(x, y)\,t_{rel1}(x, y) * h(x, y; z_{r1}) \propto \iint \exp\left\{jk_0\left[\left(\frac{b}{l_2} - \frac{h}{2R} - \frac{a}{l_1}\right) + \frac{x}{z_{r1}}\right]x' + jk_0\frac{y}{z_{r1}}\,y'\right\} dx'\,dy'
\propto \delta\left(x - z_{r1}\left[\frac{h}{2R} + \frac{a}{l_1} - \frac{b}{l_2}\right],\ y\right),    (2.22)

which is a δ function shifted in the lateral direction and is a real image of point object 1. The image is located z_{r1} away from the hologram and at

x = x_1 = z_{r1}\left(\frac{h}{2R} + \frac{a}{l_1} - \frac{b}{l_2}\right),\qquad y = y_1 = 0.

As for the reconstruction due to the relevant term ψr(x, y)trel2(x, y) in the hologram, we have

\psi_r(x, y)\,t_{rel2}(x, y) * h(x, y; z) = \psi_r(x, y)\,t_{rel2}(x, y) * \exp(-jk_0 z)\frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right]
\propto \exp\left\{\frac{-jk_0}{2l_2}\left[(x - b)^2 + y^2\right]\right\}\exp\left\{\frac{jk_0}{2(R + d)}\left[(x + h/2)^2 + y^2\right]\right\}\exp\left\{\frac{-jk_0}{2l_1}\left[(x + a)^2 + y^2\right]\right\} * \frac{jk_0}{2\pi z}\exp\left[\frac{-jk_0}{2z}(x^2 + y^2)\right].    (2.23)

    A similar analysis reveals that this is also responsible for a real image reconstruc-

    tion but for point object 2, expressible as

\psi_r(x, y)\,t_{rel2}(x, y) * h(x, y; z_{r2}) \propto \delta\left(x - z_{r2}\left[\frac{a}{l_1} - \frac{b}{l_2} - \frac{h}{2(R + d)}\right],\ y\right),    (2.24)

where

z_{r2} = \left(\frac{1}{R + d} - \frac{1}{l_1} - \frac{1}{l_2}\right)^{-1} = \frac{(R + d)\,l_1 l_2}{l_1 l_2 - (l_1 + l_2)(R + d)}.

Here, z_{r2} is the distance of the real image reconstruction of point object 2 behind the hologram and the image point is located at

x = x_2 = z_{r2}\left[\frac{a}{l_1} - \frac{b}{l_2} - \frac{h}{2(R + d)}\right],\qquad y = y_2 = 0.


    Equation (2.24) could be obtained alternatively by comparing Eq. (2.23) with

(2.19) and noting that we only need to change R to R + d and h to −h. The real image reconstructions of point objects 1 and 2 are shown in Fig. 2.6. The locations of the virtual images of point objects 1 and 2 can be similarly calculated starting from Eqs. (2.16c) and (2.16d).

2.3.1 Holographic magnifications

We are now in a position to evaluate the lateral and longitudinal magnifications of the holographic image, and this is best done with the point images we discussed in the last section. The longitudinal distance (along z) between the two real point images is z_{r2} − z_{r1}, so the longitudinal magnification is defined as

M_{Long}^{r} = \frac{z_{r2} - z_{r1}}{d}.    (2.25)

Using Eqs. (2.21) and (2.24) and assuming R ≫ d, the longitudinal magnification becomes

M_{Long}^{r} \simeq \frac{(l_1 l_2)^2}{\left[l_1 l_2 - R(l_1 + l_2)\right]^2}.    (2.26)

We find the lateral distance (along x) between the two image points 1 and 2 by taking the difference between the locations of the two δ-functions in Eqs. (2.22) and (2.24), so the lateral magnification is

M_{Lat}^{r} = \frac{z_{r1}\left[\frac{h}{2R} + \frac{a}{l_1} - \frac{b}{l_2}\right] - z_{r2}\left[\frac{a}{l_1} - \frac{b}{l_2} - \frac{h}{2(R + d)}\right]}{h}
\simeq \frac{(z_{r1} - z_{r2})\left(\frac{a}{l_1} - \frac{b}{l_2}\right) + (z_{r1} + z_{r2})\frac{h}{2R}}{h}    (2.27)

for R ≫ d. In order to make this magnification independent of the lateral separation between the objects, h, we set

\frac{a}{l_1} - \frac{b}{l_2} = 0,

or

\frac{b}{l_2} = \frac{a}{l_1}.    (2.28)


Then, from Eq. (2.27) and again for the condition that R ≫ d,

M_{Lat}^{r} = (z_{r1} + z_{r2})\frac{1}{2R} \simeq \frac{l_1 l_2}{l_1 l_2 - (l_1 + l_2)R}.    (2.29)

By comparing Eq. (2.26) and Eq. (2.29), we have the following important relationship between the magnifications in three-dimensional imaging:

M_{Long}^{r} = \left(M_{Lat}^{r}\right)^2.    (2.30)
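These relations are easy to check numerically. The short fragment below (an illustrative check with assumed distances in centimeters, chosen so that R ≫ d) evaluates z_{r1} and z_{r2} from Eq. (2.21), forms the magnifications of Eqs. (2.25) and (2.29), and confirms that Eq. (2.30) holds to within the R ≫ d approximation.

% Check of Eqs. (2.21), (2.25), (2.29), (2.30); distances in cm (illustrative)
R=5; d=0.01; l1=20; l2=20;              % assumed geometry with R >> d
zr1=1/(1/R-1/l1-1/l2);                  % Eq. (2.21)
zr2=1/(1/(R+d)-1/l1-1/l2);              % same form with R replaced by R+d
MLong=(zr2-zr1)/d;                      % Eq. (2.25)
MLat=(zr1+zr2)/(2*R);                   % Eq. (2.29), assuming b/l2 = a/l1
fprintf('MLong = %.4f, MLat^2 = %.4f\n', MLong, MLat^2);

With these numbers both quantities come out close to 4, the value predicted by Eq. (2.26) for this geometry.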

    2.3.2 Translational distortion

In the above analysis of magnifications, we have assumed that the condition of Eq. (2.28) is satisfied in the reconstruction. If Eq. (2.28) is violated, the lateral magnification will depend on the lateral separation between the object points. In other words, the reconstructed image will experience translational distortion. To see clearly the effect of translational distortion, let us consider point objects 1 and 2 along the z-axis by taking h = 0 and inspecting their image reconstruction locations. The situation is shown in Fig. 2.7. Points 1 and 2 are shown in the figure as a reference to show the original image locations. Points 1′ and 2′ are reconstructed real image points of object points 1 and 2, respectively, due to the reconstruction wave from point r. We notice there is a translation between the two real image points along the x-direction. The translational distance Δx is given by

\Delta x = x_1 - x_2,

where x_1 and x_2 are the locations previously found [see below Eqs. (2.22) and (2.24)]. For h = 0, we find the translational distance

\Delta x = (z_{r2} - z_{r1})\left(\frac{b}{l_2} - \frac{a}{l_1}\right).    (2.31)

    Figure 2.7 Translational distortion of the reconstructed real image.


    From the above result, we see that the image is twisted for a three-dimensional

    object. In practice we can remove the translational distortion by setting the recon-

struction point to satisfy Eq. (2.28). The distortion can also be removed by setting a = b = 0. However, by doing so, we lose the separation of the real, virtual, and zeroth-order diffraction, a situation reminiscent of what we observed in Fig. 2.3, where we used plane waves for recording and reconstruction, with both the plane waves traveling along the same direction. We will discuss this aspect more in Chapter 3.
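The size of the translational distortion is easily estimated from Eq. (2.31). In the fragment below (assumed distances in centimeters) two on-axis points separated by d are reconstructed with a reconstruction source that violates Eq. (2.28), and the resulting lateral shear between the two real image points is printed.

% Translational distortion, Eq. (2.31), for two on-axis points (h = 0); cm units
R=5; d=1; l1=20; l2=20; a=5; b=0;       % b/l2 differs from a/l1, so Eq. (2.28) is violated
zr1=1/(1/R-1/l1-1/l2);
zr2=1/(1/(R+d)-1/l1-1/l2);
dxshear=(zr2-zr1)*(b/l2-a/l1);          % Eq. (2.31)
fprintf('zr1 = %.2f cm, zr2 = %.2f cm, shear = %.3f cm\n', zr1, zr2, dxshear);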

Example 2.1: Holographic magnification

In this example, we show that by using a spherical wave for recording and plane wave reconstruction, we can produce a simple magnification imaging system. We start with a general result for M_{Lat}^{r} given by Eq. (2.27):

M_{Lat}^{r} = \frac{z_{r1}\left[\frac{h}{2R} + \frac{a}{l_1} - \frac{b}{l_2}\right] - z_{r2}\left[\frac{a}{l_1} - \frac{b}{l_2} - \frac{h}{2(R + d)}\right]}{h}.    (2.32)

For a = b = 0, i.e., the recording and reconstruction point sources are on the z-axis, and d = 0, i.e., we are considering a planar image, M_{Lat}^{r} becomes

M_{Lat}^{r} = \frac{z_{r2} + z_{r1}}{2R} = \left(1 - \frac{R}{l_1} - \frac{R}{l_2}\right)^{-1},    (2.33)

where z_{r2} = z_{r1} = \left[1/R - 1/l_1 - 1/l_2\right]^{-1}. For plane wave reconstruction, l_2 \to \infty. Equation (2.33) finally becomes a simple expression given by

M_{Lat}^{r} = \left(1 - \frac{R}{l_1}\right)^{-1}.    (2.34)

For example, taking l_1 = 2R, M_{Lat}^{r} = 2, a magnification of a factor of 2, and for l_1 = R/4 < R, M_{Lat}^{r} = −1/3, a demagnification in this case. Note that if the recording reference beam is also a plane wave, i.e., l_1 \to \infty, there is no magnification using a plane wave for recording and reconstruction.
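A short numerical check of this example (illustrative only) reproduces both cases by letting l2 become very large in Eq. (2.33):

% Check of Eq. (2.34) through Eq. (2.33) with a very large l2 (plane-wave reading)
R=5; l2=1e9;                            % distances in cm; l2 -> infinity
for l1=[2*R, R/4]
    zr=1/(1/R-1/l1-1/l2);               % zr1 = zr2 for d = 0
    MLat=zr/R;                          % Eq. (2.33)
    fprintf('l1 = %5.2f cm: MLat = %.3f\n', l1, MLat);
end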

    2.3.3 Chromatic aberration

    In the above discussion, the wavelength of the reconstruction wave was assumed to

    be the same as that of the wave used for holographic recording. If the hologram is

illuminated using a reconstruction wave with a different wavelength, λr, then the situation becomes much more complicated. Now the reconstructed field can still be found using Eq. (2.19), but ψr(x, y) and h(x, y; z) must be modified according to


\psi_r(x, y) \propto \exp\left\{\frac{-jk_r}{2l_2}\left[(x - b)^2 + y^2\right]\right\},

and

h(x, y; z) \propto \frac{jk_r}{2\pi z}\exp\left[\frac{-jk_r}{2z}(x^2 + y^2)\right],

respectively, where k_r = 2π/λ_r. Hence the longitudinal distance of the real image reconstruction [see Eq. (2.21)] is modified to become

z_{r1} = \left(\frac{\lambda_r}{\lambda_0 R} - \frac{\lambda_r}{\lambda_0 l_1} - \frac{1}{l_2}\right)^{-1} = \frac{\lambda_0 R\,l_1 l_2}{\lambda_r l_1 l_2 - \lambda_r l_2 R - \lambda_0 l_1 R}.    (2.35)

Accordingly, the transverse location can be found from Eq. (2.22) to give

x_1 = z_{r1}\left(\frac{\lambda_r}{\lambda_0}\frac{h}{2R} + \frac{\lambda_r}{\lambda_0}\frac{a}{l_1} - \frac{b}{l_2}\right),\qquad y_1 = 0.    (2.36)

    Thus in general the location of the image point depends on the wavelength of the

reconstruction wave, resulting in chromatic aberration. We can see that for R ≪ l_1 and R ≪ l_2,

z_{r1} \approx \frac{\lambda_0}{\lambda_r}\,R,    (2.37a)

x_1 \approx \frac{h}{2} + R\left(\frac{a}{l_1} - \frac{b}{l_2}\frac{\lambda_0}{\lambda_r}\right).    (2.37b)

    As a result, in chromatic aberration, the shift of the image location due to the

    difference of the wavelength used for recording and reconstruction is proportional

    to R, the distance of the object from the hologram.

    Example 2.2: Chromatic aberration calculation

    We calculate the chromatic aberration of an image point in the following case:

R = 5 cm, h = 2 cm, a = b = 5 cm, l_1 = l_2 = 20 cm, λ_0 = 632 nm. We define the longitudinal aberration distance, δz, and the transverse aberration distance, δx, as

\delta z = z_{r1}(\lambda_r) - z_{r1}(\lambda_0),    (2.38a)
\delta x = x_1(\lambda_r) - x_1(\lambda_0).    (2.38b)

δz and δx are plotted in Fig. 2.8 with the MATLAB code listed in Table 2.1. In comparison with the desired image point, z_{r1}(λ_0) = 10 cm and x_1(λ_0) = 2 cm, the amount of aberration increases as the deviation from the desired wavelength, λ_0, becomes larger, so that holograms are usually reconstructed with a single


    wavelength. In the next chapter we will see that holograms can be reconstructed

using white light in some specific kinds of geometries.

    2.4 Temporal and spatial coherence

    In the preceding discussions when we have discussed holographic recording, we

have assumed that the optical fields are completely coherent and monochromatic

Table 2.1 MATLAB code for chromatic aberration calculation, see Fig. 2.8

close all; clear all;
L=20;                % l1 and l2
R=5;
a=5;                 % a and b
h=2;
lambda0=633;         % recording wavelength
lambdaR=400:20:700;  % reconstruction wavelength
z=lambda0*R*L./(lambdaR*(L-R)-lambda0*R);
dz=R*L/(L-2*R)-z;
plot(lambdaR,dz)
title('Longitudinal chromatic aberration')
xlabel('Reconstruction wavelength (nm)')
ylabel('{\delta}z (mm)')
x=-z.*(a/L-lambdaR/lambda0*h/R/2-lambdaR*a/lambda0/L);
dx=x-R*L/(L-2*R)*(h/2/R);
figure;
plot(lambdaR,dx)
title('Transverse chromatic aberration')
xlabel('Reconstruction wavelength (nm)')
ylabel('{\delta}x (mm)')

Figure 2.8 (a) Longitudinal, and (b) transverse chromatic aberration distances when the recording wavelength is λ_0 = 632 nm.


so that the fields will always produce interference. In this section, we give a brief introduction to temporal and spatial coherence. In temporal coherence, we are concerned with the ability of a light field to interfere with a time-delayed version of itself. In spatial coherence, the ability of a light field to interfere with a spatially shifted version of itself is considered.

    2.4.1 Temporal coherence

In a simplified analysis of interference, light is considered to be monochromatic, i.e., the bandwidth of the light source is infinitesimal. In practice there is no ideal monochromatic light source. A real light source contains a range of frequencies and hence interference fringes do not always occur. An interferogram is a photographic record of intensity versus optical path difference of two interfering waves. The interferogram of two light waves at r is expressed as

I = \left\langle |A(\mathbf{r}, t) + B(\mathbf{r}, t)|^2 \right\rangle = \left\langle |A(\mathbf{r}, t)|^2 \right\rangle + \left\langle |B(\mathbf{r}, t)|^2 \right\rangle + 2\,\mathrm{Re}\left\{\left\langle A(\mathbf{r}, t)\,B^*(\mathbf{r}, t)\right\rangle\right\},    (2.39)

where \langle\cdot\rangle stands for the time-average integral

\left\langle \cdot \right\rangle = \lim_{T \to \infty}\frac{1}{T}\int_{-T/2}^{T/2} \cdot \; dt,    (2.40)

and A(r, t) and B(r, t) denote the optical fields to be superimposed. In the following discussion we will first assume that the two light fields are from an infinitesimal, quasi-monochromatic light source. We model the quasi-monochromatic light as having a specific frequency ω_0 for a certain time and then we change its phase randomly. Thus at fixed r, A(t) and B(t) can be simply expressed as

A(t) = A_0\exp\left\{j\left[\omega_0 t + \theta(t)\right]\right\},    (2.41a)
B(t) = B_0\exp\left\{j\left[\omega_0 (t + \tau) + \theta(t + \tau)\right]\right\},    (2.41b)

where τ denotes the time delay due to the optical path difference between A(t) and B(t), and θ(t) denotes the time-variant initial phase of the quasi-monochromatic light. By substituting Eq. (2.41) into Eq. (2.39), we have

I(\tau) = A_0^2 + B_0^2 + 2A_0 B_0\,\mathrm{Re}\left\{\left\langle e^{\,j\left[\theta(t+\tau) - \theta(t) + \omega_0\tau\right]}\right\rangle\right\}    (2.42)

because ⟨|A(t)|²⟩ = A_0² and ⟨|B(t)|²⟩ = B_0². In Eq. (2.42), the time-average integral is the interference term called the complex degree of coherence of the source, which is denoted as


\gamma(\tau) = \left\langle e^{\,j\left[\theta(t+\tau) - \theta(t) + \omega_0\tau\right]}\right\rangle.    (2.43)

The complex degree of coherence has the properties

\gamma(0) = 1 \quad\text{and}\quad |\gamma(\tau)| \le 1.    (2.44)

As a result, the interferogram can be expressed in terms of the complex degree of coherence as

I(\tau) = A_0^2 + B_0^2 + 2A_0 B_0\,|\gamma(\tau)|\cos\left[\arg\{\gamma(\tau)\}\right],    (2.45)

where arg{·} stands for the operation of taking the argument of the function being bracketed. It should be noted that in Eq. (2.45) the modulus of the complex degree

    of coherence comes into existence only when we measure the intensity and it is not

    directly obtainable. In fact, the modulus of the complex degree of coherence is easy

to determine by measuring the contrast between fringes in I(τ), as first performed by Michelson. The fringe contrast is called the fringe visibility ν, defined by

\nu = \frac{I_{\max} - I_{\min}}{I_{\max} + I_{\min}},    (2.46)

where I_max and I_min denote the local maximum value and the local minimum value of the interferogram, respectively. Accordingly, we can see that

I_{\max} = A_0^2 + B_0^2 + 2A_0 B_0|\gamma(\tau)|,\qquad I_{\min} = A_0^2 + B_0^2 - 2A_0 B_0|\gamma(\tau)|.

So the visibility of the interferogram in Eq. (2.46) can be expressed as

\nu = \frac{2A_0 B_0}{A_0^2 + B_0^2}\,|\gamma(\tau)|.    (2.47)

    Equation (2.47)shows that the modulus of the degree of coherence is proportional

    to the visibility of the fringe. So we can deduce the ability to form interference

    from a light source if we know its coherence property. We say that light waves

    involved in an interferometer are completely coherent, completely incoherent, or

partially coherent according to the value of |γ(τ)|:

|γ(τ)| = 1   complete coherence,
|γ(τ)| = 0   complete incoherence,
0 < |γ(τ)| < 1   partial coherence.

Let us take a simple plane wave as an example, i.e., A(t) = A_0 exp(jω_0 t) and B(t) = A_0 exp[jω_0(t + τ)]. Equation (2.43) becomes γ(τ) = exp(jω_0τ) and therefore |γ(τ)| = 1, a case of complete coherence. On the other hand, if A(t) is completely random in time, from Eq. (2.43) we have γ(τ) = 0, a case of complete incoherence.


Many natural and artificial light sources have a monotonically decreasing |γ(τ)|, starting from |γ(0)| = 1.
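The link between |γ(τ)| and fringe visibility in Eqs. (2.45)-(2.47) can be illustrated with a few lines of MATLAB (assumed amplitudes and assumed values of |γ|, purely for illustration): interferograms are generated from Eq. (2.45), their maxima and minima are measured, and the visibility of Eq. (2.46) is compared with the prediction of Eq. (2.47).

% Fringe visibility versus |gamma(tau)|, Eqs. (2.45)-(2.47); illustrative values
A0=1; B0=0.5;
gam=[1 0.7 0.3 0];                      % assumed values of |gamma(tau)|
phi=linspace(0,4*pi,1000);              % arg{gamma(tau)}, i.e., w0*tau
for g=gam
    I=A0^2+B0^2+2*A0*B0*g*cos(phi);     % Eq. (2.45)
    nu_meas=(max(I)-min(I))/(max(I)+min(I));  % Eq. (2.46)
    nu_theo=2*A0*B0*g/(A0^2+B0^2);            % Eq. (2.47)
    fprintf('|gamma| = %.1f: visibility %.3f (theory %.3f)\n', g, nu_meas, nu_theo);
end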

    2.4.2 Coherence time and coherence length

When we consider interference and diffraction of waves, we assume that the light field remains perfectly sinusoidal for all time. But this idealized situation is not true for ordinary light sources. We can model an ordinary light source as a quasi-monochromatic light oscillating at ω_0 of finite size wave trains, with the initial phase θ(t) randomly distributed between 0 and 2π within some fixed time, i.e., the phase changes randomly every time interval τ_0 and remains stable between the changes, as shown in Fig. 2.9. According to the model, the complex degree of coherence can be found by evaluating Eq. (2.43) to be

\gamma(\tau) = \Lambda\left(\frac{\tau}{\tau_0}\right)e^{\,j\omega_0\tau},    (2.48)

where Λ(τ/τ_0) is a triangle function as defined in Table 1.1 and is repeated below for convenience:

\Lambda\left(\frac{\tau}{\tau_0}\right) = \begin{cases} 1 - \left|\frac{\tau}{\tau_0}\right| & \text{for } \left|\frac{\tau}{\tau_0}\right| \le 1 \\ 0 & \text{otherwise.} \end{cases}

Substituting Eq. (2.48) into Eq. (2.45), the interferogram becomes

I(\tau) = A_0^2 + B_0^2 + 2A_0 B_0\,\Lambda\left(\frac{\tau}{\tau_0}\right)\cos\left(\frac{2\pi\,\Delta d}{\lambda_0}\right),    (2.49)


where Δd is the optical path difference corresponding to the time delay τ between the two light waves, i.e., ω_0τ = 2πΔd/λ_0.

The width of the complex degree of coherence, τ_0, is called the coherence time.

    If the time delay between the light waves involved in the interference is larger than

    the coherence time, no fringes can be observed.

Finally, we can also define the coherence length l_c as

l_c = c\,\tau_0,    (2.50)

where c is the speed of light in vacuum. In other words, the coherence length is the path the light travels in the time interval τ_0. To ensure the success of interference,

    the optical path difference in an interferometer must be smaller than the coherence

    length.
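The quasi-monochromatic model of Eqs. (2.48)-(2.50) is conveniently visualized by plotting the interferogram against the optical path difference. In the sketch below (assumed source parameters: a coherence time of 20 optical cycles), the fringe envelope is the triangle function, and the fringes vanish once the path difference exceeds the coherence length lc = cτ0.

% Interferogram of the quasi-monochromatic model; illustrative parameters
c=3e8; lambda0=0.6328e-6;
tau0=20*lambda0/c;                      % assumed coherence time (20 optical cycles)
lc=c*tau0;                              % coherence length, Eq. (2.50)
dpath=linspace(-2*lc,2*lc,4000);        % optical path difference
tau=dpath/c;                            % corresponding time delay
A0=1; B0=1;
gmag=max(1-abs(tau)/tau0,0);            % |gamma(tau)| = triangle function, Eq. (2.48)
I=A0^2+B0^2+2*A0*B0*gmag.*cos(2*pi*dpath/lambda0);   % Eq. (2.45)
figure;
plot(dpath/lambda0,I);
xlabel('Path difference (\lambda_0)'); ylabel('I');
title('Fringes vanish beyond the coherence length');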

    2.4.3 Some general temporal coherence considerations

    In the above discussion, we used the model of a quasi-monochromatic light so that

    the analysis is relatively simple. Here we will extend the theory to any kind of light

source. In Eq. (2.42), the complex degree of coherence comes from the time average of the cross term. On the other hand, we also know that γ(0) = 1 according to Eq. (2.44). Thus for any light source, we can write down the complex degree of coherence as

\gamma(\tau) = \frac{\left\langle E^*(t)\,E(t + \tau)\right\rangle}{\left\langle |E(t)|^2\right\rangle},    (2.51)

where E(t) is the complex amplitude of the light at the source point. Equation (2.51)

    is the general form of the complex degree of coherence.
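Equation (2.51) also suggests a direct numerical recipe: generate a long record of the source field E(t), correlate it with a delayed copy of itself, and normalize. The sketch below (an assumed random-phase model in which the phase jumps every τ0, not a routine from the book) estimates |γ(τ)| this way and recovers, approximately, the triangular shape predicted by Eq. (2.48).

% Numerical estimate of |gamma(tau)| from Eq. (2.51); illustrative model
w0=2*pi*4.74e14; tau0=1e-13;            % assumed optical frequency and coherence time
dt=tau0/100; t=0:dt:2000*tau0;          % long time record
phs=2*pi*rand(1,ceil(t(end)/tau0)+1);   % random phase, constant over each tau0 segment
theta=phs(floor(t/tau0)+1);             % theta(t) of the quasi-monochromatic model
E=exp(1i*(w0*t+theta));                 % source field, Eq. (2.41a) with A0 = 1
lags=0:5:300;                           % delays in samples
g=zeros(size(lags));
for k=1:numel(lags)
    m=lags(k);
    g(k)=abs(mean(conj(E(1:end-m)).*E(1+m:end)));   % Eq. (2.51), time averaged
end
figure;
plot(lags*dt/tau0,g,'o',lags*dt/tau0,max(1-lags*dt/tau0,0),'-');
xlabel('\tau / \tau_0'); ylabel('|\gamma(\tau)|');
legend('numerical estimate','triangle function');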