
Lidar backscatter signal recovery from phototransistor systematic effect by deconvolution

Tamer F. Refaat,1,* Syed Ismail,2 M. Nurul Abedin,3 Scott M. Spuler,4 Shane D. Mayor,4 and Upendra N. Singh5

1Applied Research Center, Old Dominion University, 12050 Jefferson Avenue, Newport News, Virginia 23606, USA
2Chemistry and Dynamics Branch, NASA Langley Research Center, 21 Langley Boulevard, MS 401A, Hampton, Virginia 23681, USA
3Remote Sensing Flight Systems Branch, NASA Langley Research Center, 5 Dryden Street, MS 468, Hampton, Virginia 23681, USA
4Atmospheric Technology Division, National Center for Atmospheric Research, P.O. Box 3000, Boulder, Colorado 80307, USA
5System Engineering Directorate, NASA Langley Research Center, 5 Dryden Street, MS 468, Hampton, Virginia 23681, USA

*Corresponding author: trefaat@jlab.org

Received 4 June 2008; revised 14 August 2008; accepted 15 August 2008; posted 19 August 2008 (Doc. ID 96990); published 6 October 2008

Backscatter lidar detection systems have been designed and integrated at NASA Langley Research Center using IR heterojunction phototransistors. The design focused on maximizing the system signal-to-noise ratio rather than noise minimization. The detection systems have been validated using the Raman-shifted eye-safe aerosol lidar (REAL) at the National Center for Atmospheric Research. Incorporating such devices introduces some systematic effects in the form of blurring to the backscattered signals. Characterization of the detection system transfer function aided in recovering such effects by deconvolution. The transfer function was obtained by measuring and fitting the system impulse response using a single-pole approximation. An iterative deconvolution algorithm was implemented in order to recover the system resolution, while maintaining high signal-to-noise ratio. Results indicated a full recovery of the lidar signal, with resolution matching avalanche photodiodes. Application of such a technique to atmospheric boundary and cloud layer data restores the range resolution, up to 60 m, and overcomes the blurring effects. © 2008 Optical Society of America

OCIS codes: 280.3640, 010.0280, 290.1090, 250.0040, 100.1830.

1. Introduction

Lidar operating in the infrared (IR) region is an important tool for profiling several atmospheric greenhouse constituents, such as water vapor, carbon dioxide (CO2), carbon monoxide, methane, and ethane [1]. In addition to the richness of distinctive absorption spectra for these constituents in this region, IR lidar provides a way to increase the transmitted laser energy while maintaining the eye-safety requirements [2]. This allows for longer range and higher sensitivity instruments compared to the visible region. As a consequence, sophisticated lidar methodology, such as differential absorption lidar (DIAL), could be applied to address important issues, including the study of the greenhouse effect and climate cycles on Earth, especially the carbon cycle, which is a major contributor to global warming [1,3].

Recently, the NASA Orbiting Carbon Observatory (OCO) mission has been designed to acquire global, space-based measurements of atmospheric CO2 [4]. As a passive remote sensor, the OCO measurement precision, resolution, and coverage match the requirements for defining the CO2 sources and sinks on regional scales [5]. The Greenhouse gases Observing Satellite (GOSAT) is another space-based mission. Planned to launch in 2008, GOSAT is a passive remote sensor designed to conduct CO2 column measurement with 1% relative accuracy [6]. One disadvantage of passive remote sensing is the lack of coverage during night background conditions. This problem is overcome by active remote sensors (lidar). Another disadvantage is the influence of aerosols and clouds. These systematic effects are also common to continuous wave (cw) laser remote sensing techniques. Pulsed lidar systems are void of such influences and, in addition, provide discrete profiling capability. A ground-based CO2 DIAL system is being developed at NASA Langley Research Center through the NASA Instrument Incubator Program (IIP). Such a system would be a helpful tool for validating OCO and GOSAT measurements. This system takes advantage of advances in 2-μm pulsed laser technology, suitable for CO2 DIAL transmitters, while driving up the need for sophisticated detectors operating at the same wavelength for the receiver [1,7].

One suitable detector for DIAL application would be an avalanche photodiode (APD) device, which has proven to be successful in several lidar applications in the visible and near-IR regions [8]. Currently, APDs with high sensitivity at 2-μm wavelength are commercially unavailable; however, some research efforts report on such devices with performance far beyond mature Si and InGaAs technologies [9,10]. In a recent paper, InGaAsSb/AlGaAsSb IR heterojunction phototransistors (HPTs) have been validated for lidar atmospheric remote sensing [11]. Although these HPTs were optimized for 2-μm detection, the validation was performed at 1.5 μm for direct comparison with the advanced InGaAs APD technology optimized at that particular wavelength [12,13]. By comparison, both HPTs and APDs exhibit internal gain mechanisms, leading to an increase in their signal-to-noise ratio (SNR). Although HPTs could have higher gain than APDs and overcome the excess noise problem, HPTs are associated with higher dark current and limited bandwidth. The dark current of the HPT results mainly in higher noise that can be compensated for by the higher gain of the device, leading to a SNR that closely matches the APD [11]. On the other hand, the limited bandwidth of the HPT leads to a longer settling time and introduces some nonlinearities [14,15]. These parameters could drastically influence the lidar instrument systematic effects.

Deconvolution is one process applied to recover the true measured quantity from the actual measurement records by eliminating the systematic effects of the measuring system [16]. Such a process is widely applied to lidar for recovering different systematic effects, such as laser pulse profile, multiple scattering, optical design, and detector response.

For example, Kavaya et al. [17] studied the systematic errors in lidar operating with a pulsed CO2 laser transmitter. The work focuses on correcting the influence of the laser pulse shape on the backscattered data by deconvolution. The authors noted the effect of the detection system transfer function (TF), the laser monitor, and even the digitizer, and suggested similar deconvolution processing. Using different deconvolution techniques, Gurdev et al. [18] concluded that the resolution limit of the deconvolved lidar profile is equal to or higher than the laser pulse width. Dreischuh et al. [19] managed to define a maximum-resolved lidar profile that practically corresponds to the system TF, where the lidar return is the convolution of that TF and the output laser pulse shape. Although the procedures were successful using simple Fourier deconvolution, two main effects were observed. The first is an offset to the profile, and the second is amplitude and phase distortions. To the limit, the same group applied a similar technique using a chopped cw laser. In this case, to avoid the Fourier deconvolution, an algorithm based on differentiation and an iterative procedure was applied that resulted in retrieving the actual profile, assuming rectangle-like laser pulses [20].

At the receiver end, deconvolution was also applied for correcting several instrument systematic effects. For example, Gao et al. [21] applied a blind deconvolution technique to recover the lidar signal from the multiple scattering effect, which was found to be as high as 14% of the original signal. Shipley et al. [22] recovered the lidar signal from a photomultiplier tube (PMT) afterpulse effect in a photon counting detection system using deconvolution. The work included measuring the PMT afterpulse probability function by artificially illuminating the device with short pulses and applying that function for correcting the data. On the other hand, in passive remote sensors, Matthews [23] reported the application of the Fourier deconvolution to correct for the detector–telescope combinational response in the Geostationary Earth Radiation Budget Experiment. In this work, the treated systematic effects included the response of a thermal detector array and the telescope diffraction and aberration. Keeping in mind that the deconvolution is an ill-posed problem, the process is sensitive to both the data and the algorithm itself [16].

Although different deconvolution algorithms were proposed, all originate from the basic Fourier deconvolution or inverse filtering method [16,24,25]. In such a method, the original measurement is recovered by simply dividing the data by the system TF in the frequency domain. Since most systems, including lidar, exhibit a low-pass action, a problem arises when inverting their TF. This is because the new function represents a high-pass action, which simply implies amplification of high frequency noise and attenuation of the low frequency signal. Therefore applying the deconvolution process usually requires a trade-off between improving the system resolution and deteriorating the SNR. Iterative deconvolution, first introduced in 1931 by van Cittert [24], provides controllability of such a trade-off. Iterative deconvolution is applied in the time domain using the relatively stable convolution algorithm (see Appendix A). Thus, provided the convergence of the solution, the process can be optimized for different conditions, such as highest resolution or maximum SNR [25].

The present work focuses on testing the HPT against the atmospheric boundary layer and upper troposphere clouds in lidar backscatter mode. The capability of such HPT technology for atmospheric CO2 profiling using the DIAL technique is in progress and will be presented in future publications. In this paper, iterative deconvolution is applied for recovering the lidar data from the detector systematic effects. Although the systematic effects recovery was elaborated for the HPT, iterative deconvolution improves the atmospheric data even for APD channels, as presented. In Section 2, the lidar experimental setup is presented for simultaneous operation of APD and HPT detectors for a thorough validation process. Section 3 focuses on the HPT lidar detection channel design consideration and performance, while introducing the concepts of the detector effective noise for gain selection and single-pole approximation for the system TF. In Section 4, the lidar backscattered signal recovery is presented for different pointing angles with a special focus on the resultant SNR. Finally, the research findings are summarized in Section 5.

2. Lidar Experimental Setup

The operation of a lidar system with a phototransistor was presented by incorporating the HPT into the detection system of the Raman-shifted eye-safe aerosol lidar (REAL) [2,11,26]. REAL, with the HPT replacing one of the two detection channels, was operated from the Foothills Laboratory of the National Center for Atmospheric Research in Boulder, Colorado, on 8 June 2006. In that experiment, systematic effects in the HPT detection channel caused overshoot problems in the collected data. Nevertheless, the backscattered signals were recovered simply by defining the overshoot function and subtracting it from the collected data [11]. As a proof-of-concept, the experiment demonstrated the applicability of phototransistors in lidar applications. After optimizing the detection system again, on 8 December 2006 REAL operated with the same HPT incorporated in its detection channel. The focus this time was on the HPT influence on the lidar systematic effects and on how to recover the true backscattered signal from the data of the HPT detection channel.

The HPT gain dependence on bias voltage and temperature, and their correlation to the device settling time, have been investigated [27]. Increasing the bias voltage increases both the device gain and the settling time. Cooling down the device decreases the gain while increasing the settling time. Therefore a trade-off arises in applying HPTs in lidar, between increasing the gain to enhance the SNR and deteriorating the temporal response. As a consequence, two HPT-based lidar detection channel configurations have been designed and integrated in the lidar receiver channel independently. Each configuration consists of a HPT and detection circuit electronics. In the first configuration the design focuses on increasing the system SNR by increasing the HPT gain while reducing the gain of the associated electronics. Because this design exhibited a longer settling time, the design of the second configuration exhibits a faster response by reducing the HPT device gain while increasing the electronic gain. Both designs were tested separately in comparison with one of the REAL detection channels incorporating InGaAs APDs as a reference.

A. Raman-Shifted Eye-Safe Aerosol Lidar System

REAL is a field-deployable elastic backscatter lidar system with polarization sensitivity. The REAL transmitter consists of a flash-lamp pumped 1.064 μm Nd:YAG pulsed laser and a high pressure gas cell. The 1.064 μm radiation is converted to 1.543 μm by stimulated Raman scattering in methane. The gas cell is injection seeded by a laser diode to improve the conversion efficiency and beam quality. The system pulse repetition rate is 10 Hz with a mean energy of 170 mJ and pulse duration of 4 ns at 1.543 μm. The beam divergence angle is 0.24 mrad with a 47 mm 1/e diameter at the transmitter end. The REAL receiver consists of a 40 cm diameter Newtonian telescope, with 0.54 mrad field-of-view (FOV). The collected backscattered radiation is collimated and filtered using a 5 nm band-pass filter. A half wave plate and a polarization beam splitter cube separate the backscatter into parallel and perpendicular polarization components. REAL normally uses two separate detection channels with two InGaAs APDs. The APDs are integrated with signal conditioning electronics, capable of being directly applied to a digitizer. The REAL digitizer is a 14 bit, 100 MS/s PC card (GaGe CompuScope 14100), capable of digitizing both detector channels simultaneously at 50 MS/s, with input voltage range set to ±1 V [28]. The coaxial transmitter–receiver design achieves full overlap at about 500 m. REAL employs an azimuth over elevation beam steering unit to enable full hemispherical scanning or stationary pointing [2,26,28].

REAL has been deployed in several remote sensing missions that prove its validity as a state-of-the-art backscatter lidar system. REAL provides an ideal test-bed for validating and testing the HPT in lidar for several reasons. With its simultaneous dual detection channel capability, incorporating the HPT into one detection channel, while keeping the InGaAs APD in the other as a reference, provides a means to study the HPT systematic effect on lidar and to implement recovery techniques at the data side. Although optimized for 2 μm detection, the HPTs have good sensitivity at 1.5 μm as well, which is the REAL operating wavelength. Then, the focusing capability of REAL, down to a 200 μm spot size for ranges beyond 500 m, makes it compatible with the phototransistor active area. Besides, the steering capability of REAL enables the adjustment of the return signal, whether from a hard target (pointing horizontal for alignment), clouds (pointing vertical), or the boundary layer (pointing at an angle). Finally, REAL has the capability of recording raw data, with all housekeeping information necessary for thorough data analysis.

B. Phototransistor Detection System

Figure 1 shows a schematic of the HPT detection system, designed and integrated at NASA Langley Research Center. The HPT was placed in a custom designed chamber, which is mounted on a three-dimensional translation stage for alignment purposes. The detector temperature is set using a thermoelectric cooler and temperature controller, while bias is set using computer controlled electronics. The HPT generated signal is then conditioned using the detection circuit electronics, consisting of a transimpedance amplifier (TIA), summing amplifier (SA), and voltage amplifier (VA), before being applied to the system digitizer. The TIA mainly provides current-to-voltage conversion for the HPT signal, while limiting its bandwidth. To preserve the electronics dynamic range, the TIA compensates for the HPT dark current by injecting a fixed counteracting current. The TIA output is buffered and inverted using the summing amplifier. The summing amplifier allows for the addition of external signals to the output as required, to compensate for any electronic offsets or to include a marker signal. The output of the summing amplifier is applied to the voltage amplifier (FEMTO DHPVA-100), which has different gain settings for adjusting the final gain of the whole signal to match the digitizer input range [8]. For the purpose of comparison, the REAL digitizer was set with one reference lidar detection channel (RLDC) measuring the APD output and the other accommodating the HPT detection channel.

3. Detection System Diagnostic

To investigate the influence of the HPT gain versus SNR and settling time on the lidar performance, two independent detection channel configurations have been tested. Both configurations follow the schematic of Fig. 1, including two similar HPTs with different settings and two different detection circuits. Table 1 summarizes and compares the REAL InGaAs APD with the two HPT settings used in this study. Considering room temperature operation, the first device (HPT1) was set to a high gain of 290 by increasing its bias voltage to 3.5 V. At the same temperature, the second device (HPT2) was set to a limited gain of 11 by biasing it only to 1.5 V. At a fixed temperature, increasing the bias voltage leads to an increase of both the device gain and noise but results in enhancing the SNR, as observed from the noise-equivalent-power (NEP) and detectivity (D*) values listed in Table 1 [13,27].

As a final product, the detector settings listed in Table 1 are strongly affected by the selection and settings of the detection circuit electronics. Table 2 summarizes the settings for both lidar detection circuits (LDCs) implementing the HPT devices. Comparing Tables 1 and 2, in LDC1 the detector gain was set to the high value, while the electronic gain was set to a lower value. In order to achieve a comparable overall gain, the LDC2 electronics were set to a higher value by increasing the TIA gain, G_I, and by introducing a fixed gain in the summing amplifier, which leads to an increase in the voltage gain, G_V.

Fig. 1. Schematic of the lidar detection system that consists of the lidar detection channel (LDC) and the digitizer. The LDC is formed by the heterojunction phototransistor (HPT), or the detector, and the detection circuit electronics. The electronics include a transimpedance amplifier (TIA), summing amplifier (SA), and voltage amplifier.

Table 1. Raman-Shifted Eye-Safe Aerosol Lidar InGaAs Avalanche Photodiode and Heterojunction Phototransistor Parameters at 1.5 μm

Detector   D^a (μm)   V^b (V)   T^c (°C)   R^d (A/W)   η^e (%)   Id^f (A)     In^g (A/Hz^1/2)   NEPd^h (W/Hz^1/2)   D*d^i (cm·Hz^1/2/W)
APD        200        46.7      22         10.5        75        25 × 10⁻⁹    6.5 × 10⁻¹³       1.30 × 10⁻¹³        1.4 × 10¹¹
HPT1       200        3.5       20         159.4       45.6      5.5 × 10⁻³   2.3 × 10⁻¹⁰       1.44 × 10⁻¹²        1.2 × 10¹⁰
HPT2       200        1.5       20         4.0         31.5      6.1 × 10⁻⁵   3.8 × 10⁻¹¹       9.50 × 10⁻¹²        1.9 × 10⁹

a D = device diameter.
b V = operating bias voltage.
c T = operating temperature.
d R = responsivity at the operating bias voltage and temperature.
e η = quantum efficiency at the operating temperature; η = (h · c / q) · R|V=0 / λ, where h is Planck's constant, c is the speed of light, q is the electron charge, and λ is the wavelength.
f Id = dark current at the operating bias voltage and temperature.
g In = noise current spectral density at the operating bias voltage and temperature.
h NEPd = noise-equivalent power; NEPd = In / R.
i D*d = detectivity; D*d = √A / NEPd.


The total electronic gain of any of the detection circuits, G_e (in V/A), is defined by the product of the TIA gain and the voltage gain, as listed in the third and fourth columns of Table 2, according to

$G_e = G_I \cdot G_V, \qquad (1)$

where G_V is the gain product of the summing and voltage amplifiers in V/V. Considering the HPT settings listed in Table 1, the overall detection channel gain, G_ch in V/(W/cm²), is given by

$G_{ch} = 0.5 \cdot R \cdot A \cdot G_e, \qquad (2)$

where R is the detector responsivity in A/W and A is the sensitive area in cm². A factor of 0.5 was introduced in the equation to account for the 50 Ω signal termination at the voltage amplifier output. Finally, considering the REAL digitizer, the whole detection system gain, G_d, in count/(W/cm²), is given by

$G_d = 0.5 \cdot R \cdot A \cdot G_I \cdot G_V \cdot 2^N / V_{p\text{-}p}, \qquad (3)$

where N and V_p-p are the number of bits and the full range input voltage (in V) of the digitizer, respectively. The detection system gain is listed in the fifth column of Table 2, for both detection channels at different voltage amplifier gain settings. During lidar testing, one of the HPT circuits is placed in one of the REAL detection channels, while the other channel was kept with the InGaAs APD as a reference (see Fig. 5 of Ref. [11]). In the following subsections, the SNR and temporal responses of LDC1 and LDC2 will be evaluated and compared.
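
As a quick consistency check of Eqs. (1)–(3), the gain bookkeeping can be chained in a few lines of Python. This is only an illustrative sketch (the function and variable names are not from the paper); the detector and electronics values in the example are taken from Tables 1 and 2.

```python
import math

def detection_system_gain(responsivity_a_per_w,    # R, detector responsivity (A/W)
                          area_cm2,                # A, detector sensitive area (cm^2)
                          tia_gain_v_per_a,        # G_I, transimpedance gain (V/A)
                          voltage_gain_v_per_v,    # G_V, summing + voltage amplifier gain (V/V)
                          n_bits=14,               # digitizer resolution (bits)
                          v_pp=2.0):               # digitizer full-range input voltage (V), i.e. +/-1 V
    g_e = tia_gain_v_per_a * voltage_gain_v_per_v        # Eq. (1): electronic gain, V/A
    g_ch = 0.5 * responsivity_a_per_w * area_cm2 * g_e   # Eq. (2): channel gain, V/(W/cm^2)
    g_d = g_ch * (2 ** n_bits) / v_pp                    # Eq. (3): system gain, count/(W/cm^2)
    return g_e, g_ch, g_d

# Example: HPT1 in LDC1 at the optimum setting (R = 159.4 A/W, D = 200 um,
# G_I = 1 kV/A, G_V = 1 x 27); the result should land near the 5.5e6 count/(W/cm^2) of Table 2.
area = math.pi * (200e-4 / 2) ** 2    # 200 um diameter expressed in cm
print(detection_system_gain(159.4, area, 1e3, 27.0))
```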

A. Signal-to-Noise Ratio

Ideally, the detector noise should be the dominant noise source in the lidar detection system [8]. Therefore any additional electronics should introduce less noise to the system than the detector. Figure 2(a) shows the variation of the voltage noise density with frequency for the LDC1 and LDC2 detection channels under study. The voltage noise density was measured by connecting the output of the channels to a spectrum analyzer (Stanford Research Systems SR785).

Table 2. Detection Channel Parameters, Using a 50 MS/s, 14-Bit, ±1 V_p-p Digitizer^a

Channel   Detector   G_I^b (V/A)   G_V^c (V/V)   G_d^d (count/(W/cm²))   V_n^e (V/Hz^1/2)   I_neff^f (A/Hz^1/2)   NEP_eff^g (W/Hz^1/2)   D*_eff^h (cm·Hz^1/2/W)
LDC1      HPT1       1k            1 × 1         2.1 × 10⁵               2.0 × 10⁻⁶         2.0 × 10⁻⁹            1.25 × 10⁻¹¹           1.4 × 10⁹
                                   1 × 3         6.2 × 10⁵               2.2 × 10⁻⁶         7.3 × 10⁻¹⁰           4.58 × 10⁻¹²           3.9 × 10⁹
                                   1 × 9         1.8 × 10⁶               3.2 × 10⁻⁶         3.5 × 10⁻¹⁰           2.19 × 10⁻¹²           8.0 × 10⁹
                                   1 × 27        5.5 × 10⁶               7.6 × 10⁻⁶         2.8 × 10⁻¹⁰           1.76 × 10⁻¹²           1.0 × 10¹⁰
LDC2      HPT2       10k           24 × 1        1.2 × 10⁶               1.2 × 10⁻⁵         4.8 × 10⁻¹¹           1.20 × 10⁻¹¹           1.5 × 10⁹
                                   24 × 3        3.7 × 10⁶               3.3 × 10⁻⁵         4.6 × 10⁻¹¹           1.15 × 10⁻¹¹           1.6 × 10⁹

a The detectors are operating at 1.5 μm and 20 °C. Voltage gains of 1 × 27 and 24 × 3 resulted in an optimum setting for LDC1 and LDC2, respectively, which were used in the lidar measurements.
b G_I = transimpedance amplifier gain.
c G_V = product of the summing and voltage amplifier gains.
d G_d = total gain of the detection system.
e V_n = noise voltage density of the detection system.
f I_neff = V_n / G_e, where G_e is the electronic gain of the detection circuit.
g NEP_eff = I_neff / R, where R is the detector responsivity at the operating bias voltage and temperature.
h D*_eff = √A / NEP_eff, where A is the detector sensitive area in cm².

Fig. 2. (a) Variation of the voltage noise spectral density with frequency for the LDC1 and LDC2 detection channels at the specified voltage amplifier gains. (b) Calculated effective noise current spectral density variation with frequency compared to the HPT1 and HPT2 phototransistors' measured noise current spectral density.


For each channel and for every gain setting, the noise was measured in the frequency range from 0 to 100 kHz, as presented in the figure. At low frequency the noise increases due to the flicker effect, after which it decays to a fixed value, assumed constant for higher frequencies [29]. The resultant noise voltage, V_n, is calculated by integrating the voltage noise density and is presented in column 6 of Table 2. Comparing the two channels at the lowest voltage amplifier setting, LDC2 generates higher noise than LDC1. This is expected since the electronic gain is 240 times higher, while the noise generated by device HPT2 is only about an order of magnitude lower than that of HPT1. Further increasing the electronic gain of either LDC leads to an increase of the noise by the same factor. Figure 2(a) illustrates that the lowest voltage gain setting results in the lowest noise for either LDC channel; on that basis LDC1, which generates the lowest output noise, would be the best selection. Nevertheless, in backscatter lidar applications, the highest SNR is the better criterion rather than the lowest noise.

To investigate the SNR of the detection channels, the output noise voltage is referred to the detector side (channel input). This is done by dividing the obtained output voltage noise density by the electronic gain, given by Eq. (1). Thus it is assumed that the noise of the whole detection system is contributed only by the detector. It is as if the detector generates an effective noise equal to the whole detection system noise, while associated with noiseless electronics. Figure 2(b) shows the calculated effective noise current density. The noise current density is integrated to obtain the effective detector noise, I_neff, presented in column 7 of Table 2. Once defined, an effective noise-equivalent power and detectivity, NEP_eff and D*_eff, respectively, could be calculated as presented in columns 8 and 9 of the same table. Figure 2(b) also compares the HPT1 and HPT2 detector noise as measured by the same technique. The results obtained by LDC1 indicate a poor design with low electronic gain, since the electronics effectively increase the device noise by an order of magnitude. Increasing the electronic gain leads to a decrease in the detector effective noise, to the limit that both the detector noise and the detector effective noise are practically equal. In such a case, the electronic noise contribution to the system is minimal while the gain is maximal. This gain indicates that the detector becomes the dominant noise source, with the system operating at the maximum SNR. On the other hand, LDC2 indicates a good design, as any additional electronic gain will not influence the detector effective noise, which is already close to the detector noise.
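
The referral of the measured output noise to the detector side, together with the effective NEP and detectivity defined in the footnotes of Table 2, amounts to three divisions. A minimal Python sketch (illustrative names; example values from Table 2) is:

```python
import math

def effective_detector_noise(v_noise_density,          # V_n, output voltage noise density (V/Hz^1/2)
                             electronic_gain_v_per_a,  # G_e = G_I * G_V (V/A), Eq. (1)
                             responsivity_a_per_w,     # R (A/W)
                             area_cm2):                # A (cm^2)
    """Refer the whole-channel noise to the detector input (noiseless-electronics assumption)."""
    i_n_eff = v_noise_density / electronic_gain_v_per_a   # effective noise current, A/Hz^1/2
    nep_eff = i_n_eff / responsivity_a_per_w              # effective NEP, W/Hz^1/2
    d_star_eff = math.sqrt(area_cm2) / nep_eff            # effective detectivity, cm Hz^1/2 / W
    return i_n_eff, nep_eff, d_star_eff

# Example: LDC1 at the 1 x 27 voltage-gain setting (V_n = 7.6e-6 V/Hz^1/2, G_e = 27 kV/A,
# R = 159.4 A/W); the result should be close to the 2.8e-10 A/Hz^1/2 listed in Table 2.
area = math.pi * (200e-4 / 2) ** 2
print(effective_detector_noise(7.6e-6, 27e3, 159.4, area))
```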

Although LDC2 generates a lower effective noise, the end product is a higher effective NEP than LDC1. This is clear by comparing the detectors' NEP and D* from Table 1 to the corresponding effective values from Table 2. Nevertheless, LDC1 with the higher gain setting indicates better performance regarding the SNR, due to the low NEP and high D* of the detector to start with. At this point, the focus will be toward the optimized gains of LDC1 and LDC2, as indicated in Table 2. Both design settings are further analyzed and applied to the REAL system. It should be noted here that the gain setting for both channels was optimized for the best SNR, which affects the lidar system minimum detectable signal. Other optimized settings, such as digitizer limits versus maximum detectable signal or system overload, were not considered in this study.

B. Impulse Response Function

According to control theory, a linear system can be modeled by a set of linear integro-differential equations that correlate the system output to its input. Such a system may introduce signal distortion when subjected to transients (transient nonlinearity) [30]. Those transient distortions could be corrected by deconvolution. Thus determination of the system TF is critical to define its temporal limitation and, if possible, to correct for it. Applying this concept to the lidar detection systems, the impulse response function (IRF) of the tested LDC was measured and compared to that of the APD RLDC. The measurement setup, shown in Fig. 3(a), consists of a 1.5 μm laser diode driven by a pulse generator. The laser output is coupled to a fiber optic with a fiber coupler that divides the radiation into two equal channels.

Fig. 3. (a) Schematic of the experimental setup used to measure the tested detection systems' impulse response compared to the reference APD channel. (b) Measured impulse response functions for the detection channels (continuous curves) and the selected points (solid circles) for the fitting. The fitting functions are also plotted, with the bandwidth fitting parameters listed for the detection channels.


The output radiation from each channel is then collimated and focused onto the APD RLDC and one of the tested LDCs. The outputs of both channels were applied to a digitizer. The laser driver was set to a pulsed mode, with a 10 Hz repetition rate and 70 nW output optical power for each channel, to simulate the input impulse. By setting the pulse duration long enough to settle the detection systems (200 μs), the two branches were aligned and characterized for any gain differences. Then, to measure the impulse response, the laser pulse was narrowed down to 150 ns, and the digitizer output was recorded. For each record, 1000 shot averages were applied, with a digitization frequency equal to 15 MS/s and a 500 sample record length.

The waveforms of the measured IRFs of the detection channels are shown in Fig. 3(b). The collected profiles were modified by subtracting the background signal, represented by the mean of 200 pretrigger samples. Then, after a running average of 3 data points, the IRFs were normalized with respect to the energy by dividing each record by its integral. Assuming a single-pole approximation for the LDC TF, the decaying parts of the signals can be fitted by an exponential function of the form

$O_p(t) = a \cdot e^{-b \cdot t}, \qquad (4)$

where O_p is the fitted IRF, t is the time, and a and b are the fitting parameters. Although a detailed fitting could be applied, representing additional poles and zeros of the system TF, the single-pole representation of the LDC TF is adequate and limits the complexity of the algorithms. Besides, the lidar detection system bandwidth is usually limited either by an output filter, by the bandwidth limitation of the amplifiers, or even by the digitizer sampling rate. In any of these cases a dominant pole could be defined for the system. In this analysis, the location of such a pole is acquired from the decay rate in Eq. (4). Further, fitting the IRF to Eq. (4) was achieved by taking the logarithm of some data points at the falling edge of the pulse, as indicated in Fig. 3(b). Then the logarithm is fitted to a straight line according to

$\ln\{O_p(t)\} = \ln\{a\} - b \cdot t, \qquad (5)$

where the slope of the fitted line (parameter b) represents an approximation of the dominant pole location, or bandwidth (in rad/s), of the detection system. The first term of Eq. (5), represented by an offset, can be neglected since the IRF is normalized. The selection criteria for the fitted data points consider only positive values starting from the IRF maximum to the closest point to the datum. Fitting with such criteria results in parameter b values of 9.7, 1.3, and 2.2 MHz for the APD RLDC, LDC1, and LDC2, respectively, with the fitting curves shown on the same figure. As expected, LDC2, with its lower gain and higher NEP setting, exhibited a faster response than LDC1. In the next section, application of the LDCs in REAL is presented, and the analysis results are discussed.
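
The log-linear fit of Eqs. (4) and (5) can be sketched in a few lines of Python. The point-selection details and array names below are illustrative assumptions rather than the authors' exact procedure; the synthetic example only checks that a known single-pole decay is recovered.

```python
import numpy as np

def fit_single_pole(irf, dt):
    """Fit the falling edge of a normalized IRF to O_p(t) = a * exp(-b * t), Eq. (4).

    irf : 1-D array, background-subtracted, energy-normalized impulse response
    dt  : sample interval (s)
    Returns the decay rate b (rad/s), the dominant-pole / bandwidth estimate.
    """
    peak = int(np.argmax(irf))
    tail = irf[peak:]
    # Keep only positive samples from the maximum down toward the noise floor (datum).
    n_pos = int(np.argmax(tail <= 0)) if np.any(tail <= 0) else tail.size
    y = tail[:n_pos]
    t = np.arange(y.size) * dt
    slope, _ = np.polyfit(t, np.log(y), 1)   # Eq. (5): ln(O_p) = ln(a) - b * t
    return -slope

# Synthetic check: a single-pole response with a 2.2 MHz pole, sampled at 15 MS/s.
dt = 1.0 / 15e6
t = np.arange(500) * dt
irf = np.exp(-2 * np.pi * 2.2e6 * t)
print(fit_single_pole(irf, dt) / (2 * np.pi) / 1e6, "MHz")
```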

4. Lidar Backscatter Signal Recovery

Systematic effects of the lidar detection system follow three main categories. On a single shot, the detection system affects the backscattered signal amplitude and resolution. For multiple shots, fluctuation in laser energy affects the detection system SNR for a group of averaged backscattered signals. The background signal is the simplest form of systematic effect, causing amplitude variation by introducing an offset to the backscattered signal [31]. This offset arises due to the detector dark current, amplifier offsets, and daylight background and leakage radiations. Part of the background signals can be characterized by data records taken while blocking the receiver telescope (blind records). This process ensures data recovery from any offsets time-correlated with the laser transmitter, such as electromagnetic pickup in the acquisition electronics or room radiation scattering. Another method is to acquire pretrigger samples for each shot record (pretrigger records). The mean of the pretrigger samples, giving a measure of the daylight background signal, is then subtracted from the whole record. Full recovery of the backscattered signal is achieved preferably using both methods. Lidar detection system linearity and saturation contribute to the amplitude of the measured signal. On the other hand, the resolution of the backscattered signal is directly affected by the temporal properties of the receiver system itself. For example, the system bandwidth directly limits the minimum range bin of the data, affecting the resolution. Systematic effects due to temporal properties are the hardest to recover the lidar data from. This study suggests recovering this effect by deconvolving the lidar data with the detection system TF. Fluctuation in the transmitted laser energy is another source of systematic effects, which increases the measurement uncertainty for multiple shot data. Records from a laser energy monitor usually help in correcting this effect on a shot-to-shot basis, after background subtraction. Even profiling the transmitted laser pulse can help recover the data, also by deconvolution, for longer pulse widths [17].

A major contribution to the systematic effects of the lidar detection system comes from the detector. Implementing phototransistors in such systems increases the systematic effects, which might lead to deterioration of the atmospheric data. Thus integrating the LDC with the HPT into the REAL detection system allows for improving and validating the signal processing to overcome the systematic effects. Experimentally, after installing each of the LDCs into the REAL system, alignment was obtained by maximizing a far-field hard target signal (the front range of the Rocky Mountains at about 14 km from the REAL location). Measurements from the tested LDC channels and the REAL reference APD RLDC channel included boundary layer profiling and cloud structures. For each case, data processing was performed with and without the iterative deconvolution process. For each data set, unless otherwise noted, processing the lidar returns included the following steps:

1. Background subtraction of blind records.
2. Background subtraction of pretrigger records.
3. Laser energy correction.
4. Shot averaging.
5. Running average of 10 data samples.

At this point, the iterative deconvolution process was investigated to recover the data using the system TF. Therefore, for nondeconvolved data, the final processing step is the range correction. On the other hand, for the deconvolved data, the steps proceed as follows:

6. Deconvolving the data by the TF.
7. Range correction.

The objective of the lidar measurement is to achieve a vertical resolution of 30 m (5 MHz). Consequently, the resolutions of the system TF and the lidar data were reduced to match the required measurement resolution by applying appropriate running averages (see Appendix A). For the SNR calculation, the mean of several shots (equal to the shot average) is divided by their standard deviation at each range bin, obtained after the laser energy correction [32].
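
A minimal Python sketch of this bookkeeping is given below. The function and variable names are illustrative, and the range correction is taken here as the usual multiplication by range squared, which the paper does not spell out.

```python
import numpy as np

def running_average(profile, m):
    """Boxcar average over m range samples, used to reduce the data and TF resolutions (see Appendix A)."""
    return np.convolve(profile, np.ones(m) / m, mode="same")

def per_bin_snr(shots):
    """SNR per range bin: mean of the shots divided by their standard deviation [32].

    shots : 2-D array (n_shots, n_range_bins), assumed background- and laser-energy-corrected.
    """
    return shots.mean(axis=0) / shots.std(axis=0, ddof=1)

def range_correct(profile, range_m):
    """Range correction, taken here as multiplication by range squared (assumption)."""
    return profile * range_m ** 2
```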

A. Boundary Layer

Boundary layer measurement allows for long averages due to the minimal temporal variation in atmospheric structure in that region. In such a case, shot noise can be minimized while emphasizing systematic noise. This emphasis allows for the study of the influence of the detection circuit noise on the lidar backscattered signal. Data collected by the LDC1 and LDC2 channels are presented in Figs. 4 and 5, respectively, compared to the reference APD channel. The data of both figures represent 3000 shot averages, equivalent to 5 min. To increase the signal range, the elevation angle was changed to 20° and 7° for LDC1 and LDC2, respectively, while the azimuth angle was fixed at −71°. The boundary layer thickness was about 700 m, which was virtually increased by the elevation angle to about 2 and 4 km in range. Figures 4(a) and 5(a) present the averaged raw data, after background and laser fluctuation corrections. The data are presented as digitizer counts on the vertical axis versus sampling time on the horizontal axis. The corresponding SNR is calculated for the whole 3000 shots and is shown in Figs. 4(b) and 5(b). The SNR calculation was obtained before and after the deconvolution for the phototransistor channels. Subfigures (c) and (d) of Figs. 4 and 5 compare the signal versus range, after the running average, before and after the deconvolution process, while the range is limited to 7 km. Subfigures (e) and (f) of the same figures compare the results after range correction.

LDC1 has a higher near-field SNR than LDC2, as compared to the APD channel. This result was expected, as discussed in Subsection 3.A, due to the limited gain of the HPT2 device, which limits its NEP. By definition, the NEP is the amount of power that generates unity SNR. This definition can be applied to extract the system NEP from the SNR and raw data. This extraction is indicated in Figs. 4(a) and 4(b) for LDC1 and Figs. 5(a) and 5(b) for LDC2. The NEP extracted from the data, NEP_ex, is given by the equation

$\mathrm{NEP_{ex}} = \frac{A}{G_d \cdot \sqrt{BW}} \cdot S|_{\mathrm{SNR}=1}, \qquad (6)$

where BW is the system bandwidth and S|SNR=1 is the signal, in counts, that corresponds to a unity SNR. Since no limiting filters are used in the detection systems under study, the detection system bandwidth is limited by the digitization frequency, according to the Nyquist criterion, divided by the number of running averages. For example, for a digitization frequency of 50 MHz with 10 running averages, the system bandwidth is equal to 2.5 MHz. Table 3 lists the extracted NEP and its conditions and compares it to the detector NEP and the system effective NEP from Tables 1 and 2, respectively. The results presented in Table 3 indicate that the detection systems reached optimum performance by controlling the channels' gains. However, LDC1 performs better than LDC2 due to the initial optimization of the NEP of the HPT1 detector itself. It should be mentioned here that once the detector and electronic amplifiers are optimized, additional systematic noise contributes negligibly to the total noise and hence to the NEP. This is clear from the presented results and the fact that even the digitizer noise is not included in the NEP_eff calculations, while it is included in the NEP_ex measurement. The corresponding signals for the APD reference channel are 17.0 and 18.8 counts, as marked in Figs. 4(b) and 5(b) for LDC1 and LDC2, respectively. This indicates that the shot noise has been minimized by the shot average. A final note is given regarding the range at which the NEP was evaluated. The APD channel shows a longer range compared to the tested LDCs because the APD detector has a lower NEP than the phototransistor, as listed in Table 1. The detector NEP affects the minimum detectable signal and the range at which it occurs, as indicated in the figures.
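
Equation (6), with the bandwidth taken as the Nyquist frequency divided by the running-average length, can be evaluated directly. The sketch below (illustrative names) should land near the LDC1 NEP_ex of Table 3 when fed the tabulated gain and the 50.45 count unity-SNR signal.

```python
import math

def extracted_nep(signal_at_unity_snr_counts,   # S|_{SNR=1}, counts
                  system_gain,                  # G_d, count/(W/cm^2), Eq. (3)
                  area_cm2,                     # A, detector sensitive area (cm^2)
                  digitization_hz=50e6,         # digitization frequency
                  n_running_avg=10):            # running-average length
    """NEP extracted from lidar data, Eq. (6): NEP_ex = A * S|_{SNR=1} / (G_d * sqrt(BW))."""
    bw = digitization_hz / 2.0 / n_running_avg  # Nyquist bandwidth reduced by the running average
    return area_cm2 * signal_at_unity_snr_counts / (system_gain * math.sqrt(bw))

# Example: LDC1 values from Tables 2 and 3 (G_d = 5.5e6 count/(W/cm^2), S|SNR=1 = 50.45 counts).
area = math.pi * (200e-4 / 2) ** 2
print(extracted_nep(50.45, 5.5e6, area))   # roughly 1.8e-12 W/Hz^1/2
```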

Figures 4(c) and 5(c) show the temporal profiles of both detection systems compared to the reference channel for the same data set analyzed without deconvolution, after the running average. The systematic effects are clear in these figures, causing broadening of the near-field signal. The broadening is wider for LDC1, with the higher gain setting of HPT1, due to the longer settling time of its IRF, and vice versa for the LDC2 channel. Figures 4(d) and 5(d) show the signals after deconvolving the previous signals with the detection system IRF, represented in Fig. 3(b). The process was applied with 45 iterations, which recovers the broadening effect while slightly increasing the amplitude of the peak signal.


The corresponding range corrected data are shown in the rest of the figures, illustrating a better match after the deconvolution. Also indicated is the increase in the far-field noise as an outcome of the process. The increased far-field noise is due to the lower SNR to start with at that range. In order to obtain a better match between the HPT channels and the APD reference channel, the data of the APD were deconvolved as well, using the APD IRF (Fig. 3(b)), with only 10 iterations. This process recovers the reference APD channel data from the systematic effects as well, although these might practically be neglected due to the relatively faster response of the device.

Fig. 4. (a) Data of the boundary layer profiling obtained simultaneously by LDC1 and RLDC. The data were background and laser energy corrected and shot averaged over 3000 shots. The data clearly show the broadening effect of LDC1 due to its slower response. (b) SNR calculation for the same data set and for LDC1 after deconvolution. By correlating the SNR of (b) and the data of (a), the system NEP (marked extracted NEP, NEP_ex in the text) can be obtained and compared to the detector NEP and the detection system effective NEP, NEP_eff. (c) A zoom-in to the near field, up to 7 km, of the processed data without the deconvolution and (d) with the deconvolution of both channels, using the IRF defined in Fig. 3(b). Comparing (c) and (d), the broadening effect of LDC1 is recovered by the process. (e) The corresponding range corrected data before the deconvolution and (f) after the deconvolution. Comparing (e) and (f), the deconvolution process matches the boundary layer signals, where the original SNR was high, while increasing the far-field noise, as indicated by the grassy far field between 5 and 7 km.


On the other hand, the deconvolution process causes a reduction in the SNR, as indicated in Figs. 4(b) and 5(b), that is proportional to the number of iterations. The SNR degradation is due to two main factors. First, the process increases the system noise, and second, the time response of the system is reduced due to the enhanced resolution. For example, Fig. 4(b) indicates a small variation in the SNR over the first 670 samples (about 2 km). This is attributed to the first factor, since the boundary layer provides high SNR to start with.

Fig. 5. (a) Data of the boundary layer profiling obtained simultaneously by LDC2 and RLDC. The data were background and laser energy corrected and shot averaged over 3000 shots. The data clearly show the slight broadening of LDC2 due to its relatively faster response. (b) SNR calculation for the same data set and for LDC2 after deconvolution. The SNR indicates a performance deterioration of LDC2 compared to LDC1 before the deconvolution (see Fig. 4). By correlating the SNR of (b) and the data of (a), the system NEP (NEP_ex in the text) can be obtained and compared to the detector NEP and the detection system effective NEP, NEP_eff. (c) A zoom-in to the near field, up to 7 km, of the processed data without the deconvolution and (d) with the deconvolution of both channels, using the IRF defined in Fig. 3(b). Comparing (c) and (d), a slight modification to the profile was obtained by the process. (e) The corresponding range corrected data before the deconvolution and (f) after the deconvolution. Comparing (e) and (f), the deconvolution process matches the boundary layer signals, but with higher noise, as indicated by the grassy profile in the far field.


But the same figure indicates a higher reduction rate of the SNR at sample 716 (2148 m, as marked on the figure), which is attributed to both factors, due to the edge of the boundary layer. Once the deconvolution converges, the SNR settles to a fixed value at higher iteration numbers. At this point, increasing the iterations will not affect the signal or the noise but only increase the processing time. This is an advantage of using such an iterative technique versus a noniterative one, due to the control over how well the resolution is recovered versus how much SNR can be sacrificed. As a resolution measure, the tolerance, defined as the maximum difference between signals produced by two successive iterations, was calculated for each iteration. In the presented results, the iterations were stopped after achieving a maximum tolerance of 10⁻³ counts.

B. Cloud Layer

Turning the beam steering unit to 90° elevation (i.e., pointing at zenith), Figs. 6 and 7 show the false color images of the far-field temporal variation of the return signal for the LDC1 and LDC2 detection channels, respectively, as compared to the RLDC detection channel. In this case, the system is profiling a thin cloud layer at about 10 km altitude. Data processing follows the procedures listed before, with and without deconvolution, as marked in the figures. The whole data set for each figure included 6300 shots, equivalent to a time span of 10 min and 30 s, as indicated on the horizontal axis of the figures. Data processing included 50 shot averages and 10 running averages, equivalent to a pixel (range) resolution of 30 m.

Table 3. Comparison Between the Extracted Noise-Equivalent Power (NEP), Obtained from the Lidar Data, the Effective NEP, Obtained from the Detection Systems, and the Detector NEP, Obtained Before Deconvolution

Channel        NEP (W/Hz^1/2)   NEP_eff (W/Hz^1/2)   NEP_ex (W/Hz^1/2)   Sample   Range (m)   S|SNR=1 (count)
LDC1 & HPT1    1.44 × 10⁻¹²     1.76 × 10⁻¹²         1.82 × 10⁻¹²        745      2235        50.45
LDC2 & HPT2    9.50 × 10⁻¹²     1.15 × 10⁻¹¹         1.21 × 10⁻¹¹        1230     3690        226.20

Fig. 6. False color history diagrams of the far-field temporal variation of the return signal before deconvolution (a) for the LDC1 detection channel and (b) for the RLDC reference detection channel. The same diagrams are repeated for (c) LDC1 and (d) RLDC after the deconvolution process. The vertical white line selects a sample record shown in (e) and (f), which mark some resolved peaks, down to 60 m, after the deconvolution. A good match between the two channels is a result of the higher NEP setting of HPT1. This led to artificially adding a marker on the LDC1 channel by temporarily blocking the signal, as indicated by the bold white arrows on the channel diagrams.


The false color represents the strength of the range corrected signal and is shown in arbitrary units versus the altitude on the vertical axis. As presented earlier, the temporal response of the LDC detection channels implementing the HPT causes a loss in the signal resolution, as is clear from the figures. This loss in resolution causes the blurring effect observed in the nondeconvolved data. By applying the deconvolution, the data restore the resolution, resulting in sharper images. Nevertheless, the restoration is limited by the NEP. For example, one can observe a large difference between the LDC1 and RLDC profiles in Fig. 6(f) at 10.6 km. This signal is equivalent to about 190 counts. We recall the LDC1 extracted NEP of 50 counts (Table 3), which will increase to about 300 counts after the deconvolution process. This indicates that at this range (10.6 km) the system is operating below its minimum detection limit. In such circumstances, a large error in the detected signal could be expected. On the other hand, the figures indicate higher background noise for LDC2 than for LDC1, due to the deconvolution process, as it appears in the blue background outside the cloud layer (between 10.6 and 11.0 km). A good resolution match between the LDC and the APD reference channels indicates the suitability of the HPT for lidar applications. In such applications, optimizing the detection system and modifying the data processing to include the deconvolution process is a must to achieve overall better performance.

5. Conclusion

InGaAsSb heterojunction phototransistors were fabricated and optimized for 2 μm detection [12–14]. Similar to quantum detectors, optimization of the device parameters, such as the quantum efficiency, SNR, or settling time, is mainly dependent on the operating bias voltage and temperature [27]. These phototransistors were validated for lidar applications using REAL at 1.5 μm, where the available InGaAs APD technology provided an ideal test-bed for performance comparison [2,11,26]. Phototransistors exhibit high gain, resulting in enhanced SNR while increasing the settling time [27]. The increase in the settling time introduces systematic effects to the lidar system, in the form of broadening of the backscattered signal, leading to a blurring effect [11]. To investigate such effects, two detection systems were constructed using two similar phototransistors. The gain of one of the phototransistors (HPT1) was optimized to minimize the NEP, while the other (HPT2) was optimized to minimize the settling time.

Fig. 7. False color history diagrams of the far-field temporal variation of the return signal before deconvolution (a) for the LDC2 detection channel and (b) for the RLDC reference detection channel. The same diagrams are repeated for (c) LDC2 and (d) RLDC after the deconvolution process. The vertical white line selects a sample record shown in (e) and (f), which mark some resolved peaks, down to 60 m, after the deconvolution. LDC2 shows an increase in the background noise due to the lower NEP setting of HPT2.


Both phototransistors were integrated with electronics with matched overall gain. By introducing the concept of the effective noise, the gain of both systems was optimized to minimize the overall system NEP. At these settings, the impulse response functions were obtained for both channels as well as for the reference APD channel. These impulse response functions were used to fit the detection system transfer function using a single-pole approximation. The transfer functions were applied to correct the lidar data by an iterative deconvolution process.

Lidar testing included detecting the near-field atmospheric boundary layer and the far-field cloud layer. Data analysis was presented for both the deconvolved and nondeconvolved results. For the deconvolved data, the number of iterations controls the resolution while deteriorating the SNR. The deterioration of the SNR is inversely proportional to the initial SNR; therefore optimizing the SNR of the phototransistor, or the detector in general, is more beneficial than optimizing other parameters such as the settling time. This benefit is confirmed by the results presented for HPT1 compared to HPT2. On the other hand, the deconvolution process is successful in restoring the phototransistor resolution, which results in profiles matching the APD detector, up to 60 m. Although the HPT device was optimized for 2 μm detection, its performance is comparable to the InGaAs APD at 1.5 μm. Switching the lidar wavelength to 2 μm will lead to an increase in the phototransistor quantum efficiency. Cooling the device will reduce the dark current and noise, which will further enhance the overall system SNR [13].

Appendix A: Iterative Deconvolution for Recovering Lidar Return Signals

The lidar detection system records a signal in the form of digital data (the system output), which is a measure of the atmospheric backscattered signal (the system input). Generally, for a system defined by a transfer function g(t), the input f(t) and output h(t) are related by the equation

$h(t) = f(t) * g(t), \qquad (A1)$

where the symbol * denotes a convolution process. In the frequency domain this equation takes the form

$H(s) = F(s) \cdot G(s), \qquad (A2)$

where the frequency domain functions H, F, and G are related to the time domain functions h, f, and g by the Laplace transform [30]. Equations (A1) and (A2) set a fundamental constraint on lidar systems: all three functions have to be represented by the same independent variable, t or s, respectively. In Eq. (A2), substituting the frequency for s simply implies that the digitization frequencies for both the science data and the system transfer function must be equal. Practically, this might not occur, and in such a case a running average could be used for correction toward the lowest frequency according to

$f_{sf} = f_{sh}/M_h = f_{sg}/M_g, \qquad (A3)$

where f_sf, f_sh, and f_sg are the digitization frequencies of the system input, output, and transfer function, respectively, and M_h and M_g are the running averages applied to the system output (science data) and transfer function, respectively. On the other hand, according to Eq. (A2), the output and input will be equal if and only if the system transfer function is unity or, in the time domain, if the transfer function is equal to a delta function. Otherwise, the measured quantity at the output will be deformed by the system transfer function. In order to recover from the deformation, the output has to be filtered by the reciprocal of that function in the frequency domain or deconvolved in the time domain. Frequency domain treatment of the lidar recorded data is problematic and may cause instabilities [16]. Therefore an alternative iterative deconvolution technique is used that is equivalent to controlled filtering. According to the van Cittert [24] time domain iterative deconvolution method, if both the system output and transfer function are known, the nth iteration to recover the true input is obtained by the following steps:

Step 1:

$k_1 = g * f_n. \qquad (A4)$

Step 2:

$k_2 = h - k_1.$

Step 3:

$k_3 = f_n + k_2.$

Step 4:

$f_{n+1} = k_3.$

Then, the steps are repeated for the next iteration, with f_n → f_{n+1}, where f_0 = h, and k_1, k_2, and k_3 are intermediate functions defined by the above relations. The process is repeated several times until the required convergence is achieved. The above steps describe the details of a single iteration, which can be summarized, omitting the intermediate functions, by the relation

$f_{n+1} = f_n + (h - g * f_n). \qquad (A5)$

The frequency domain representation of Eq. (A5) is of the form

$F_n = H + (1 - G) \cdot F_{n-1}, \qquad (A6)$


where F_0 = H. The second term of Eq. (A6) implies the constraint that [25]

$|1 - G| < 1, \qquad (A7)$

which requires normalizing the system transfer function for convergence. From Eq. (A6), by successive substitution we obtain [33]

$F_n = H \cdot \sum_{i=0}^{n} (1 - G)^i = [1 - (1 - G)^{n+1}] \cdot H / G, \qquad (A8)$

assuming a finite number of iterations n. The second representation of Eq. (A8) indicates the filtering action of the process, with filter properties dependent on the system transfer function and the number of iterations n. As the number of iterations tends to infinity, Eq. (A8) approaches

$F_{n\rightarrow\infty} = H \cdot \sum_{i=0}^{\infty} (1 - G)^i = H / G, \qquad (A9)$

which is the exact input function according to Eq. (A2). Practically, a limited number of iterations is used [Eq. (A8)], with a trade-off between the number of iterations and the computing time, while achieving a required tolerance.
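
For reference, the time-domain iteration of Eq. (A5), together with the tolerance-based stopping rule quoted in Section 4, can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: the "same"-mode convolution, the unit-area normalization of the measured IRF, and the handling of the profile edges are assumptions that need care in practice.

```python
import numpy as np

def van_cittert_deconvolve(h, g, max_iter=45, tol=1e-3):
    """Van Cittert iterative deconvolution, Eq. (A5): f_{n+1} = f_n + (h - g * f_n).

    h   : recorded lidar profile (system output), 1-D array
    g   : measured impulse response (transfer function), sampled at the same rate as h
    tol : stop once the maximum change between successive iterations falls below this value
    """
    g = np.asarray(g, dtype=float)
    g = g / g.sum()                         # normalize the TF so that |1 - G| < 1 can hold, Eq. (A7)
    h = np.asarray(h, dtype=float)
    f = h.copy()                            # f_0 = h
    for _ in range(max_iter):
        residual = h - np.convolve(f, g, mode="same")   # h - g * f_n
        f_next = f + residual                           # Eq. (A5)
        if np.max(np.abs(f_next - f)) < tol:            # tolerance criterion of Section 4
            return f_next
        f = f_next
    return f
```

With max_iter = 45 and tol = 10⁻³ counts this mirrors the settings quoted for the boundary layer data; fewer iterations trade resolution recovery for a smaller SNR penalty.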

The authors thank Terry L. Mack for his help in developing the detection system electronics. The authors are also grateful to Bruce M. Morley, Roger Wakimoto, and the NCAR senior staff for permitting the use of NCAR facilities to conduct tests using the REAL system. Funding for this research was provided by NASA's Earth Science Technology Office, Instrument Incubator Program, for the development of the "2 μm DIAL for CO2 Profiling System" and the NASA Laser Risk Reduction Program for 2 μm detector development.

References

1. P. Ambrico, A. Amodeo, P. Girolamo, and N. Spinelli, "Sensitivity analysis of differential absorption lidar measurements in the mid-infrared region," Appl. Opt. 39, 6847–6865 (2000).
2. S. Spuler and S. Mayor, "Raman shifter optimized for lidar at a 1.5 micron wavelength," Appl. Opt. 46, 2990–2995 (2007).
3. J. Kaiser and K. Schmidt, "Coming to grips with the world's greenhouse gases," Science 281, 504–506 (1998).
4. D. Crisp, R. M. Atlas, F.-M. Breon, L. R. Brown, J. P. Burrows, P. Ciais, B. J. Connor, S. C. Doney, I. Y. Fung, D. J. Jacob, C. E. Miller, D. O'Brien, S. Pawson, J. T. Randerson, P. Rayner, R. J. Salawitch, S. P. Sander, B. Sen, G. L. Stephens, P. P. Tans, G. C. Toon, P. O. Wennberg, S. C. Wofsy, Y. L. Yung, Z. Kuang, B. Chudasama, G. Sprague, B. Weiss, R. Pollock, D. Kenyon, and S. Schroll, "The Orbiting Carbon Observatory (OCO) mission," Adv. Space Res. 34, 700–709 (2004).
5. C. E. Miller, D. Crisp, P. L. DeCola, S. C. Olsen, J. T. Randerson, A. M. Michalak, A. Alkhaled, P. Rayner, D. J. Jacob, P. Suntharalingam, D. B. A. Jones, A. S. Denning, M. E. Nicholls, S. C. Doney, S. Pawson, H. Boesch, B. J. Connor, I. Y. Fung, D. O'Brien, R. J. Salawitch, S. P. Sander, B. Sen, P. Tans, G. C. Toon, P. O. Wennberg, S. C. Wofsy, Y. L. Yung, and R. M. Law, "Precision requirements for space-based XCO2 data," J. Geophys. Res. 112, D10314 (2007).
6. T. Hamazaki, A. Kuze, and K. Kondo, "Sensor system for greenhouse gas observing satellite (GOSAT)," Proc. SPIE 5543, 275–282 (2004).
7. J. Yu, B. C. Trieu, E. A. Modlin, U. N. Singh, M. J. Kavaya, S. Chen, Y. Bai, P. J. Petzar, and M. Petros, "1 J/pulse Q-switched 2 μm solid-state laser," Opt. Lett. 31, 462–464 (2006).
8. T. Refaat, W. Luck, and R. De Young, "Design of advanced atmospheric water vapor differential absorption lidar (DIAL) detection system," NASA-TP 209348 (NASA, 1999).
9. I. Andreev, M. Afrailov, A. Baranov, M. Mirsagatov, M. Mikhailova, and Y. Yakovlev, "GaInAsSb/GaAlAsSb avalanche photodiode with separate absorption and multiplication regions," Sov. Tech. Phys. Lett. 14, 435–437 (1988).
10. O. V. Sulima, M. G. Mauk, Z. A. Shellenbarger, J. A. Cox, J. V. Li, P. E. Sims, S. Datta, and S. B. Rafol, "Uncooled low-voltage AlGaAsSb/InGaAsSb/GaSb avalanche photodetectors," IEE Proc. Optoelectron. 151, 1–5 (2004).
11. T. Refaat, S. Ismail, T. Mack, N. Abedin, S. Mayor, S. Spuler, and U. Singh, "Infrared phototransistor validation for atmospheric remote sensing application using the Raman-shifted eye-safe aerosol lidar," Opt. Eng. 46, 086001 (2007).
12. O. Sulima, T. Refaat, M. Mauk, J. Cox, J. Li, S. Lohokare, N. Abedin, U. Singh, and J. Rand, "AlGaAsSb/InGaAsSb phototransistors for spectral range around 2 μm," Electron. Lett. 40, 766–767 (2004).
13. T. Refaat, N. Abedin, O. Sulima, S. Ismail, and U. Singh, "AlGaAsSb/InGaAsSb phototransistors for 2-μm remote sensing applications," Opt. Eng. 43, 1647–1650 (2004).
14. N. Abedin, T. Refaat, O. Sulima, and U. Singh, "AlGaAsSb/InGaAsSb HPTs with high optical gain and wide dynamic range," IEEE Trans. Electron Devices 51, 2013–2018 (2004).
15. J. Campbell, "Phototransistors for lightwave communications," in Semiconductors and Semimetals, Vol. 22 of Lightwave Communications Technology, Part D, Photodetectors, R. Willardson and A. Beer, eds. (Academic, 1985), Chap. 5.
16. S. Riad, "The deconvolution problem: an overview," Proc. IEEE 74, 82–85 (1986).
17. M. Kavaya and R. Menzies, "Lidar aerosol backscatter measurements: systematic, modeling, and calibration error considerations," Appl. Opt. 24, 3444–3453 (1985).
18. L. Gurdev, T. Dreischuh, and D. Stoyanov, "Deconvolution techniques for improving the resolution of long-pulse lidars," J. Opt. Soc. Am. A 10, 2296–2306 (1993).
19. T. Dreischuh, L. Gurdev, and D. Stoyanov, "Effect of pulse-shape uncertainty on the accuracy of deconvolved lidar profiles," J. Opt. Soc. Am. A 12, 301–306 (1996).
20. D. Stoyanov, L. Gurdev, G. Kolarov, and O. Vankov, "Lidar profiling by long rectangular-like chopped laser pulses," Opt. Eng. 39, 1556–1567 (2000).
21. J. Gao and C. Ng, "Deconvolution filtering of ground-based lidar returns from tropospheric aerosols," Appl. Phys. B 76, 587–592 (2003).
22. S. Shipley, D. Tracy, E. Eloranta, J. Trauger, J. Sroga, F. Roesler, and J. Weinman, "High spectral resolution lidar to measure optical scattering properties of atmospheric aerosols. 1. Theory and instrumentation," Appl. Opt. 22, 3716–3724 (1983).
23. G. Matthews, "Calculation of the static in-flight telescope-detector response by deconvolution applied to point-spread function for the Geostationary Earth Radiation Budget experiments," Appl. Opt. 43, 6313–6322 (2004).
24. P. van Cittert, "Zum Einfluss der Spaltbreite auf die Intensitätsverteilung in Spektrallinien," Z. Phys. 69, 298–308 (1931).
25. A. Amini, "Iterative deconvolution with variable convergence speed of the iterations," Appl. Opt. 34, 1878–1884 (1995).
26. S. Mayor and S. Spuler, "Raman-shifted eye-safe aerosol lidar," Appl. Opt. 43, 3915–3924 (2004).
27. T. Refaat, N. Abedin, O. Sulima, S. Ismail, and U. Singh, "InGaAsSb/AlGaAsSb heterojunction phototransistors for infrared applications," Proc. SPIE 6295, 629503 (2006).
28. S. Spuler and S. Mayor, "Scanning eye-safe elastic backscatter lidar at 1.54 μm," J. Atmos. Ocean. Technol. 22, 696–703 (2005).
29. K. Daugherty, Analog-to-Digital Conversion: A Practical Approach (McGraw-Hill, 1995), Chap. 9.
30. C. Nachtigal, Instrumentation and Control: Fundamentals and Applications (Wiley, 1990), Chap. 2.
31. V. Kovalev, "Distortions of the extinction coefficient profile caused by systematic errors in lidar," Appl. Opt. 43, 3191–3198 (2004).
32. L. Fiorani and E. Durieux, "Comparison among error calculations in differential absorption lidar measurements," Opt. Laser Technol. 33, 371–377 (2001).
33. K. Gieck and R. Gieck, Engineering Formulas, 7th ed. (McGraw-Hill, 1997), Chap. D.
