
Adaptive Lidar for Earth Imaging from Space Carl Weimer and Tanya Ramond

Ball Aerospace & Technologies Corporation

ABSTRACT

Laser remote sensing of the Earth from space offers many capabilities stemming from the unique properties of lasers. Lidars make possible three-dimensional characterizations that enable new scientific understanding of the natural processes that shape the planet’s oceans, surface, and atmosphere. However, the challenges to further expand on these successes remain complex. Operation of lidars from space is limited in part by the relatively low power available to the lasers, the low signal scattered back to the instrument because of the large distance to the surface, and the need for reliable and autonomous operation because of the significant investment required for satellites. The instrument complexities are compounded by the diversity of Earth scenes and the variability in albedo from cloud, ice, vegetation, desert, or ocean, combined with the highly variable transmission of the laser beam through clouds, forest canopy, or the ocean surface and near-surface. This paper discusses the development of a new approach that uses adaptive instrument techniques to dramatically enhance the capability of space-based lidars.

Keywords: Lidar, Adaptive, Beam Steering, Flash Focal Plane Arrays

1. DESCRIPTION OF CONCEPT

The Earth is a complex environment, independent of human impacts. Bare tree branches change to leafy canopies, snow builds up in drifts and melts away, phytoplankton bloom and sink to the bottom of the ocean, storm fronts blow through leaving small convective clouds to drift by. This constant global change makes space-based remote sensing critical for our comprehension but challenging in its implementation. The remote sensing problem has been tackled with a myriad of sensors, one of the most recent being laser remote sensing from space1,2. Like radar and stereo-photography, lidar sensors have begun to give a three-dimensional view of the Earth’s atmosphere, surface, and near-surface features.

Light detection and ranging (lidar) systems offer a number of advantages for Earth remote sensing. These include the ability to sense both night and day, well-collimated laser beams that yield small “footprints” on the ground, very short laser pulses that allow precise ranging to targets, and very narrow spectral linewidths at various wavelengths that enable detailed spectroscopic measurements. There are also a number of limitations that must be addressed in designing a lidar system. One is that the power for the laser has to be drawn from batteries fed by solar arrays; less than 1 kW is typically available for low Earth orbiting satellites (excluding the International Space Station)3. This places a premium on the efficient generation and use of the laser light. This limitation has driven, in part, the current lidar implementations that use single beams, which are collimated to help reduce the effects of background light on the receiver. In turn, the single-beam approach results in very sparse sampling of the Earth: a beam of pulsed light < 100 m across drawing lines (transects) around the globe at a rate of around 15 per day for common orbit altitudes. The problem becomes worse when the impact of clouds is considered. As for all optical/infrared systems, clouds can cause either complete obscuration of the Earth or partial blockage, which has the added disadvantage of introducing systematic errors into the measurements through additional scattering of the light. A mission’s objectives will include natural spatial scales across the Earth’s surface over which the identified science measurement must be made, and temporal scales for how often the phenomena must be measured. Aerosol variability for climate studies may require measurements only at a 100 km spacing, while forest studies for biodiversity may require samples every 10 m.
Trying to balance the required coverage with the sparse nature of the approach drives the required mission lifetime and therefore the level of instrument reliability – all rolling up into the mission costs. The role of clouds and their non-deterministic nature adds risk to all performance and cost estimates. Finding system-level improvements to lidar instrument designs that address these spatial sampling problems was the objective of this work.

There have been a number of approaches used to increase spatial coverage with lidars. In aircraft applications, mechanical scanners scan the single co-aligned laser and detector rapidly across a scene. Mechanical scanners are used effectively in many commercial aircraft systems, but the higher spacecraft speeds, tighter pointing requirements, and the

Lidar Remote Sensing for Environmental Monitoring XII, edited by Upendra N. Singh, Proc. of SPIE Vol. 8159, 815907 · © 2011 SPIE · CCC code: 0277-786X/11/$18 · doi: 10.1117/12.893042


need for continuous multi-year operation make this approach problematic for space-based lidars. NASA has gone to a fixed five-beam approach, each beam matched to a single-pixel detector, for its Lunar Orbiter Laser Altimeter (LOLA), which is currently mapping the Moon. In contrast to the single-pixel detector approach, the relatively recent advent of detector arrays of high sensitivity and high bandwidth has enabled the imaging lidar approach, namely a fixed staring array that allows a marked increase in spatial coverage. The problem of how to handle the extensive high-speed electronic signals has been solved a number of ways, most notably by the invention of smart-pixel Read Out Integrated Circuits (ROICs) that incorporate the timing, binning, and signal amplification into a unit cell for each pixel. These new types of Flash Focal Plane Arrays (FFPAs) allow the creation of both range and intensity images using laser light, providing dense spatial coverage. However, the question of how much spatial coverage is possible in turn becomes one of how to generate and utilize the laser light efficiently to ensure adequate signal photons are collected on each of the detector array pixels.

For short-range, high-scattering albedo applications from aircraft, flood illumination of the scene imaged onto the lidar FFPA will give the densest spatial coverage. When orbiting, however, the relatively low laser power available combined with the long range makes it impossible to collect adequate photons to fully illuminate a large lidar array. This scenario requires more effective use of the laser light. Instead of illuminating every pixel in a square array, for example, one can shape the laser light into a line that creates a “pushbroom” when imaged onto the array, concentrating the light to get adequate signal photons returned4. From orbit, this gives maximal spatial coverage cross-track, with the along-track sampling set by the laser repetition rate. Although this approach offers a relatively uncomplicated implementation, it also operates independent of scene characteristics such as variations in scene albedos—both within a scene and globally—as well as cloud obscuration.

An alternative approach to using a fixed illumination pattern is to take advantage of the extensive development in dynamic beam shaping and pointing that has occurred since the advent of the laser. This includes not only the mechanical scanners mentioned above, but also electro-optic and acousto-optic methods, MEMS mirrors, liquid crystal arrays, and a host of other innovative methods5. For a space-based system, the required pointing accuracy and reliability, and the need to work with high peak-power lasers for longer ranges, limit the choices, and each method has unique advantages and disadvantages. Acousto-optic beam deflectors (AOBDs) have been investigated as one option6,7. When a radio frequency (RF) tone is applied to a piezoelectric transducer bonded to an acousto-optic (AO) crystal, an index modulation is created that acts as a transmission grating. When laser light passes through the crystal, a significant percentage of the light is diffracted at an angle θ satisfying the Bragg condition. When multiple frequencies fi are applied, the incident light is deflected into an equal number of beams, each predominantly a first-order diffraction term, at an angle of6:

θi = λ · fi / V    (1)

where i is an integer from 1 to M, the maximum number of beams generated; λ is the wavelength of the incident beam; and V is the acoustic velocity in the crystal. The amplitude of the beam corresponding to the ith frequency is approximated by6:

Ai = Ei = Eo · sin²{[(π²/2λ²) · M2 · (l/w) · Pai]^(1/2)}    (2)

where Eo is the energy of the incident laser beam, Pai is the acoustic power at frequency fi, M2 is the acoustic figure of merit, and l/w is the transducer geometric factor. There is the additional constraint that the optical energy must be conserved

Eo = Σi Ei + Eundeflected + Eloss    (3)

where Eundeflected is the non-diffracted part of the beam and Eloss contains the losses due to coatings and diffraction into higher orders that are not utilized. Similarly, the RF power used for each tone must sum to the total emitted by the radio-frequency generator that drives the piezoelectric transducer. In the simplest case, the power Pai of each tone is equal, resulting in amplitudes Ai that are all equal, and only the maximum number of beams M is varied.
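As a numerical sketch of equations (1)-(3), the snippet below computes the deflection angle for each RF tone and an equal-power energy split. The acoustic velocity, overall diffraction efficiency, tone frequencies, and pulse energy are illustrative placeholder values, not figures from this paper:

```python
WAVELENGTH = 1064e-9   # m, Nd:YAG wavelength (illustrative)
V_ACOUSTIC = 650.0     # m/s, slow shear-wave velocity in TeO2 (approximate)

def deflection_angles(freqs_hz):
    """Equation (1): each RF tone f_i deflects a beam by theta_i = lambda*f_i/V."""
    return [WAVELENGTH * f / V_ACOUSTIC for f in freqs_hz]

def equal_split(e0, m, eta_total=0.9):
    """Equations (2)-(3) in the simplest equal-power case: a diffracted
    fraction eta_total of the incident energy E0 is shared equally among
    M beams; the remainder is undeflected light plus losses."""
    e_i = eta_total * e0 / m
    return [e_i] * m, e0 - m * e_i

angles = deflection_angles([25e6, 40e6, 55e6])  # three tones, in Hz
beams, leftover = equal_split(0.75, 10)         # e.g. 750 mJ into 10 beams
```

Higher tones deflect farther, and the per-beam energies plus the undeflected remainder conserve the incident pulse energy, as equation (3) requires.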

Varying the number and frequency of the tones sets the number and angles of the deflected beams. Setting the amplitude of the tone sets the intensity of the corresponding output beam. For some types of AO crystals, greater than 90% of the


incident laser light can be deflected. The settling time for a change to the deflected beams is dominated by the acoustic velocity and is << 1 ms, much faster than typical laser repetition rates used in lidars. Combining an AOBD with a lidar FFPA creates a lidar whose spatial sampling can be adapted in “real time,” i.e., a different transmission pattern for every pulse of the laser. This is termed here an “Electronically Steerable Flash Lidar” (ESFL).

To illustrate how the beam steering can be utilized, equation 4 is a restatement of the lidar equation for a given operating wavelength8. For the deflected beam i, with i ≤ M, the backscattered signal from that beam, Si, is given by

Si = ℜ · G · Ai · Eo · Ttotal · β(t) · Area / r(t)²    (4)

Here ℜ is the responsivity of the pixel of the array on which the beam has been imaged, or the average if the beam is imaged onto multiple pixels. G represents the total amplifier gain of the following stages, with a saturation point Ssat included. Ai is the amplitude of the ith beam (from equation 2), while Eo is the laser energy incident on the AO crystal. Ttotal is the total round-trip transmission from the lidar to the scene at the operating wavelength. Depending on the application, Ttotal can include atmospheric transmission (Tatm), cloud transmission (Tcloud), ocean surface transmission (Tsurface), forest canopy transmission (Tforest), etc., and can vary for every pulse. Area is the total area of the receiving telescope (minus obscurations), and r(t) is the range from the lidar to the scene, which can vary with time. For orbiting systems r(t) is fairly constant apart from small orbital effects and cloud height variations. The backscatter coefficient is β(t), which will vary with time due to surface and cloud albedo variations.
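A minimal sketch of equation (4), with every input a placeholder value; treating Ai as the dimensionless fraction of Eo carried by beam i is an interpretive assumption, not a statement from the paper:

```python
def backscattered_signal(resp, gain, a_frac, e0, t_total, beta, area, r, s_sat):
    """Equation (4): S_i = R * G * A_i * E0 * T_total * beta(t) * Area / r(t)^2,
    clipped at the saturation level S_sat of the amplifier chain.
    a_frac stands in for A_i, read here as the dimensionless fraction of
    the pulse energy e0 carried by beam i."""
    s_i = resp * gain * a_frac * e0 * t_total * beta * area / r**2
    return min(s_i, s_sat)
```

The clipping makes the saturation behavior explicit: once the product of the link terms exceeds s_sat, further increases in transmission or albedo no longer increase the recorded signal, which is exactly the bias mechanism discussed below.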

The range to the scene is found by measuring the round-trip time of flight of the light pulse for each beam. This can be done either with edge detection electronics, which trigger independently for each pixel, or by sampling the signal and binning the result into time bins. If there is a large range variation across the scene, this may require that a large number of samples be collected for each pixel, or a trigger can be implemented in each pixel that sets the ranges to be collected. Prior knowledge of the approximate range can be used so that the FFPA detects only over a narrow range gate where the signal is expected. For any of these schemes to work there must be adequate signal to noise on each pixel. Some systems define the range by detecting the change in slope of the signal. However, the detailed shape of the signal can vary due to effects that spread the return in time (e.g., clouds, blowing snow, forest canopy structure), causing ranging errors. In addition, accurate ranging requires that the signal stay within the dynamic range of the detector electronics, because saturation can cause significant biases in the range, reducing accuracy in ways that are difficult to remove in post-processing. Similar problems exist for lidars retrieving physical parameters beyond just the range to the scene: the retrieval requires adequate signal to noise, and saturation causes systematic errors. Simply stated:
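The time-binning arithmetic above is straightforward: a sampling rate fs gives range bins of width c/(2·fs), and a range gate set from prior knowledge bounds how many bins must be kept per pixel. A sketch (the sampling rate and gate values are illustrative):

```python
C = 299_792_458.0  # speed of light, m/s

def range_bin_width(sample_rate_hz):
    """Width of one range bin: the light covers the round trip, hence the 2."""
    return C / (2.0 * sample_rate_hz)

def bins_for_gate(gate_start_m, gate_stop_m, sample_rate_hz):
    """Number of time bins needed to cover a range gate set from prior
    knowledge of the approximate range to the scene."""
    return int(round((gate_stop_m - gate_start_m) / range_bin_width(sample_rate_hz)))
```

At a 200 MHz sampling rate each bin spans about 0.75 m, so a 33 m gate needs on the order of 44 bins.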

Sthreshold ≤ Si ≤ Ssat    (5)

where Sthreshold is the minimum signal that must be measured, derived from the system noise and a desired false-alarm rate. Equation 5 must be satisfied to meet the overall system objective.

The spatial sampling possibilities for the system are set largely by the field-of-view (FOV) of the lidar receiver, the receiver pixel instantaneous field-of-view (IFOV), and the laser “footprint” on the ground, F ≈ r(t)·θdiv (the projected size on the scene), where θdiv is the laser transmitter divergence after the AO crystal and any beam expander optic. Each laser spot can be imaged onto one or more of the receiver pixels (F = IFOV or F > IFOV). The number of pixels that can be illuminated is set by the radiometry, i.e., Sthreshold < S, to meet the mission requirements. Other constraints include the need to maintain eye safety and the fact that only a finite number of deflected beams can be generated: for a typical AOBD in the near IR, the maximum number of beams is on the order of a few hundred. With F << FOV of the receiver, the system can be configured to match the science objectives, changing even on a pulse-by-pulse basis. For example, the beams can be aligned contiguously cross-track to give dense sampling, or spread apart to sample the cross-track scene sparsely. This allows different horizontal spatial scales to be characterized.
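The footprint geometry above reduces to products of range and small angles. A sketch with illustrative numbers: a 62.5 µrad divergence at 400 km range gives a 25 m spot, consistent with the notional design in Table 1, but the divergence and IFOV values themselves are assumptions made here:

```python
def footprint_m(range_m, divergence_rad):
    """F ~ r(t) * theta_div: projected laser spot size on the scene."""
    return range_m * divergence_rad

def pixels_spanned(footprint, ifov_rad, range_m):
    """How many receiver pixels one laser spot covers; F = IFOV maps to 1."""
    pixel_ground_size = range_m * ifov_rad
    return max(1, round(footprint / pixel_ground_size))
```

With a pixel IFOV matched to the spot (F = IFOV) each beam lands on one pixel; a spot twice the IFOV spans two.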

A common approach to system design is to “design for the worst case”. For lidars this means designing for the lowest target albedo, the worst-case Ttotal, and the worst-case background noise (e.g., full daylight), an approach that has a number of significant drawbacks. For example, if the total transmission rises because there are fewer clouds, the signal can saturate, resulting in significant ranging error. Similarly, the backscatter can change on a short time scale, such as occurs when passing from a surface of liquid water to ice when working in the near infrared. Many approaches have been used to address these problems, including log amplifiers, automated gain settings, split gain channels, etc. The ESFL design offers new options to solve these types of problems. On a shot-by-shot basis the spatial sampling, the number of beams M, and the individual amplitudes of the beams can be changed in order to keep the


science return optimized. For example, equation 5 can be maintained by varying the number of beams. For clear sky, more beams can be used, ensuring that the system is not saturated while simultaneously making a larger number of measurements, thus improving the overall coverage. For regions and times of poorer transmission, the number of beams can be decreased, which increases the energy per beam, ensuring that the minimum signal to noise needed for a good range measurement is met.
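The shot-by-shot adaptation just described amounts to picking the largest beam count M that still satisfies equation 5. A hedged sketch, using a deliberately simplified signal model in which the per-beam signal is the single-full-energy-beam signal divided by M:

```python
def choose_beam_count(signal_one_beam, s_threshold, s_sat, m_max=40):
    """Pick the largest number of beams M such that the per-beam signal
    (single-beam signal / M) stays between the detection threshold and
    amplifier saturation, per equation (5). Returns 0 if even a single
    beam cannot reach threshold."""
    for m in range(m_max, 0, -1):
        s_i = signal_one_beam / m
        if s_threshold <= s_i <= s_sat:
            return m
    return 0
```

Under clear sky (large single-beam signal) the loop settles on many beams, spreading the energy; under heavy attenuation it returns few beams, or zero when the link is closed entirely.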

Figure 1 illustrates a notional concept for an ESFL using an AOBD. Three deflected beams are shown along with the one undeflected beam. Neither the transmitter telescope that sets the beam footprints on the ground nor the imaging telescope that collects the scattered laser light and images it onto the FFPA is shown. The RF generator produces the tones that drive the AOBD. Preliminary calibrations determine the receiver boresight and plate scale relative to the system attitude control system (ACS), which might also incorporate a global positioning system (GPS) receiver and an inertial measurement unit (IMU), providing knowledge of where the system is staring. The transmitted beam angles can then be calibrated relative to each pixel, providing transmitter-to-receiver boresight alignments that can be stored in look-up tables. Separate passive cameras can be co-aligned to the lidar camera, as needed for science and context. Here a “cloud camera” is shown that identifies approaching clouds.

Figure 1. Notional diagram of a space-based lidar that utilizes both a staring Flash Focal Plane Array and electronic beam steering implemented using an acousto-optic crystal. The transmitting and receiving optics and telescopes are not shown.

Figure 1 also illustrates how the system can generate knowledge of the scene, which can be used to adjust the lidar transmitter configuration via the beam deflector, allowing it to “adapt” to approaching scenes to optimize performance. If the laser shot-to-shot variation is large, the latency will prevent the lidar from optimizing based only on its own signal. A


passive “camera” – visible, hyperspectral, or infrared – combined with detection algorithms can feed information forward to change the beam pattern to be used. Clouds or plumes could be detected, as could edges between land and water or between vegetation and cleared land.

The ability to do “fine” pointing with the AOBD relaxes pointing requirements on the overall spacecraft and provides higher-speed re-pointing of the beam, assuming accurate calibration of the beam pointing relative to the spacecraft. Accurate knowledge of the geolocation of the beam opens up many possibilities for adaptive control. For example, instead of a co-boresighted camera on the same platform, the locations of a scene of interest could be passed directly from a second satellite, as has been shown in “Sensor Web” demonstrations at NASA9. Knowledge of regions of interest or results of previous passes can be stored and the pointing re-directed when these regions come within the field of view of the lidar. This can be used to re-measure the same location on multiple passes to detect change, or to optimize the number and angles of “cross-overs” to aid in determining orbit parameters10. In general, for any scientific goal there will be natural spatial scales that define the science. By matching the measurements to these scales, the overall measurement accuracy is improved and systematic errors can potentially be reduced.

As an example, forests have natural correlation distances that vary with the type of ecology. To properly derive biomass, observations are required to be separated by no more than 150 m across track11. Furthermore, to measure an accurate tree height, it is crucial to detect the variations in the ground topography below the trees in addition to the location of the top of the tree canopy. But the ability to measure the range to the ground depends on the density of the canopy cover, with forest coverage typically ranging from 85% to 100%. This translates into beam transmissions of 15% to 0%. ESFL could be used to create multiple beams spaced 150 m apart, with the number of beams constrained by dividing the available laser energy into beams that each carry enough photons to penetrate the canopy coverage of the scene at hand. Denser canopy would require more photons per beam to reach the ground and thus allow fewer beams; sparser foliage would require fewer photons per beam to detect the ground, so more beams could be spread across-track11. Note that if the same configuration is used over ice on a clear day, the system will be sub-optimal and might even saturate because of the much higher ice albedo relative to that of, say, bare earth. ESFL would then adapt by increasing the number of beams, thus decreasing the energy per beam, and by setting the beam spacing to match the measurement goal over the ice. This feedback loop can be based on data recorded on the previous laser shot, from a previous pass over the site, or from data from a separate camera.
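The forest trade-off above can be sketched numerically: the beam count is bounded both by how many 150 m-spaced positions fit in the swath and by the energy needed per beam to see the ground through the canopy. The linear signal model and all input numbers below are illustrative assumptions:

```python
def beams_through_canopy(swath_m, spacing_m, canopy_coverage,
                         signal_full_energy_clear, s_threshold):
    """Number of beams supportable at a fixed cross-track spacing for a
    given canopy coverage. Beam transmission is taken as (1 - coverage);
    per-beam signal is modeled as transmission * full-energy signal / M."""
    geometric_max = int(swath_m // spacing_m) + 1
    t_beam = 1.0 - canopy_coverage
    if t_beam <= 0:
        return 0  # fully closed canopy: no beam reaches the ground
    energy_max = int(signal_full_energy_clear * t_beam // s_threshold)
    return max(0, min(geometric_max, energy_max))
```

For light canopy the swath geometry is the binding constraint; as coverage approaches 100% the energy budget takes over and the supportable beam count collapses toward zero.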

The ever-present problem of cloud cover in the lidar FOV can be mitigated using ESFL’s capabilities. The beams can be rearranged to “steer around” clouds, provided the cloud scene within the FOV is broken and paths to the ground can be found. Clouds are troublesome because they can cause both a drop in signal to noise, up to full obscuration, and systematic errors; for laser altimeters in particular, forward and multiple scattering can bias the measured ranges12. Since most remote sensing targets do not change on the timescale of a satellite orbit, it is often sufficient to make the measurement on a different pass over the scene. Many algorithms have been developed for different types of cameras to identify clouds within a scene13. To operate both day and night, a thermal imager may be required. The most robust solution would be a forward-looking lidar, such as a pushbroom lidar, that could identify regions where the optical depth to the ground is below the maximum through which ESFL could still measure14. This would help in the case of multi-layer clouds that a passive camera would not penetrate. With a simple change in algorithm, ESFL could be reconfigured not to avoid clouds but to seek them, in order to measure cloud tops or heights of plumes15. This is valuable for cloud-top wind height assignments as well as for chemical transport models and visibility modeling.

The ESFL transmitter concept can be considered an extension of several concepts already well established in optics and communications. For example, it falls into the general category of “beam forming” that is also found in sonar, radar, and communication networks, and it can be considered one type of “phased array” laser system16. It is similar to communication systems in that it tries to improve a “link margin” and reduce “channel fade” due to atmospheric interference17,18. Lidar has the advantage of not requiring a double-ended link, as the scientific goals considered here are spatially distributed and the science can be built up over multiple passes. The ability to adapt measurements to areas of highest change can also serve adaptive models, for example in weather forecasting19. There the spatial grid of the model is altered to place more grid points in regions of high importance or change, for example along the potential paths of a hurricane or in regions of rapid deforestation or ice melt. To maximize the value of an adaptive grid, interpolation errors can be reduced by collecting the data at the same node points.


1.1 Radiometric modeling for a notional space-based ESFL

To show a significant advantage for this new approach to space-based lidar, sufficient signal-to-noise is required over typical Earth scenes. To illustrate what might be achieved, a radiometric (signal-to-noise) model was constructed based on validated Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) models20 and with upgrades to reflect ESFL system features. Table 1 shows a set of the assumed operating parameters used as inputs to the model.

Table 1. Notional concept for a space-based Electronically Steerable Flash Lidar

System Parameter                   Conceptual Design
Orbit                              400 km
Telescope aperture                 1 m
Telescope field of view            1°
Lidar cross-track swath            6.4 km
Laser energy                       750 mJ
Laser repetition rate              40 Hz
Number of laser spots on ground    1-40
Laser spot size                    25 m
Vertical range resolution          <1 m
Optical detectors                  InGaAs avalanche photodiodes (linear mode), NEP 37 photons rms
Ground albedo at 1064 nm           0.3

As an example, the model was used to estimate the ESFL performance as the transmission from the lidar to the ground was varied from 0% to 100%. This transmission could be set by clouds or by forest canopies of different densities, for example. The threshold signal-to-noise to make the measurement of the ground return was assumed to be equal to two. The result scales linearly, giving 36 beams (or pixels) for a transmission to the ground of 2%, as shown in Figure 2. The result is illustrative of the technique: a large laser pulse energy can be sub-divided into many smaller beams, each making an independent measurement of the scene, as long as each has sufficient signal-to-noise.
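The linear scaling can be written directly; the only anchor taken from the model result is the stated 36 pixels at 2% transmission, so the values at other transmissions are extrapolations of that single point:

```python
def supportable_pixels(transmission_pct, pixels_at_2pct=36):
    """Linear scaling of the radiometric model result: the number of
    illuminated pixels meeting SNR > ~2 grows linearly with the total
    transmission to the ground."""
    return int(pixels_at_2pct * transmission_pct / 2.0)
```

Extrapolating, 100% transmission would support on the order of 1800 pixels, which is what the model curve in Figure 2 reaches.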

Figure 2. Prediction of the number of illuminated lidar pixels (SNR > ~2) that could be detected with a space-based ESFL as a function of total transmission. The model and curve fit rise linearly from zero to roughly 1800 pixels as the percent transmission goes from 0 to 100.


2. ADAPTIVE LIDAR DEMONSTRATION INSTRUMENT

In order to demonstrate the potential of adaptive lidars for Earth remote sensing, a demonstration unit of ESFL for aircraft testing was funded by the NASA Earth Science Technology Office. The system was applied to the problem of remote sensing of the Earth’s forests as defined for the NRC Decadal Survey of Earth Science mission DESDynI (Deformation, Ecosystem Structure, and Dynamics of Ice). A schematic of a space-based concept of the ESFL instrument is shown in Figure 1; the airborne version has essentially all the same elements but scaled for use in low-altitude aircraft. A picture of the airborne ESFL instrument is shown in Figure 3. The transmitter begins with a 5 ns, 1064 nm pulsed laser beam. The laser beam passes through a beam expander and then enters the acousto-optic beam deflector, a commercial TeO2 shear-wave device. A custom RF driver was designed and built for the ESFL application at Ball Aerospace, which allowed us to design the direct digital synthesizer using techniques that could be easily translated to a design for space. The AOBD and custom driver allow the output of up to 10 beams placed in any configuration within the 8.6 degree FOV, which translates to a range of 25 to 55 MHz RF frequencies.

Figure 3. ESFL airborne instrument.

The focal plane is a 128 x 128 pixel InGaAs FFPA from Advanced Scientific Concepts, a technology partner for the effort. Each of the 128² pixel elements provides a full lidar waveform; in contrast to detectors that output a waveform from only a single element, this focal plane is also a waveform imager. The focal plane oscillator rate is adjustable between approximately 100 and 300 MHz, but is typically set to 200 MHz, dividing the returned signal into 44 time bins of 0.75 m each. The instrument operates at a 30 Hz frame rate.
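The quoted bin size follows directly from the oscillator rate, since the pulse travels out and back. A short check of the arithmetic:

```python
# Relation between the focal-plane oscillator rate and the range sampling
# quoted in the text: bin size = c / (2 * f_osc), the factor of 2 accounting
# for the round trip. At 200 MHz this gives ~0.75 m per bin and a
# 44-bin gate of ~33 m.

C = 299_792_458.0  # speed of light, m/s

def range_bin_size(osc_hz: float) -> float:
    """Range spanned by one time bin for a given oscillator rate (Hz)."""
    return C / (2.0 * osc_hz)

print(round(range_bin_size(200e6), 2))       # 0.75 m per bin at 200 MHz
print(round(44 * range_bin_size(200e6), 1))  # 33.0 m total gate
```

At the 100 to 300 MHz extremes of the adjustable range, the bin size runs from roughly 1.5 m down to 0.5 m.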

A visible camera is co-boresighted to the lidar to provide context to aid in data collection and processing and to provide input for adapting the beam pointing.

ESFL flies with an Applanix POS (model 510) global positioning system/inertial measurement unit (GPS/IMU) that is co-registered to the lidar receiver FOV prior to flight. The GPS position and inertial motion of the sensor are recorded in the data stream, which after post-processing allows geolocation of every pixel in the focal plane for every shot. Typical geolocation errors are sub-meter after post-processing.

To prove out the system, careful calibration of the lidar was required. The new aspects of the electronic steering and flash imaging required the development of new test techniques and test facilities. A plate-scale test of the receiver calibrates the returned beam pointing over the entire FOV and references it to the GPS/IMU coordinates. A second calibration characterizes the focal plane, converting the intensity return of every focal plane pixel into femtojoules. In this way, waveform intensity profiles can be returned as absolute energy values and used to back out the reflectivity of the target. Furthermore, large-scale focal plane sensitivity effects can be removed from the data. Other tests include range calibrations to convert time bin values into absolute range, correcting for a number of biases in the data arising from both the laser and the focal plane array.
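The range calibration reduces to a simple affine conversion from bin index to absolute range. The sketch below is illustrative only; the bias value is a hypothetical placeholder standing in for the measured laser and focal-plane timing biases mentioned above.

```python
# Illustrative range calibration of the kind described in the text: a
# time-bin index is converted to absolute range using the bin size plus a
# fixed bias. RANGE_BIAS_M is a hypothetical placeholder; in practice the
# bias terms are measured during the range-calibration tests.

BIN_SIZE_M = 0.75     # from the 200 MHz oscillator setting
RANGE_BIAS_M = -1.2   # hypothetical combined laser/FPA timing bias

def bin_to_range(gate_start_m: float, bin_index: int) -> float:
    """Absolute range (m) to the center of a given time bin."""
    return gate_start_m + (bin_index + 0.5) * BIN_SIZE_M + RANGE_BIAS_M

# First and last bins of a gate starting at 600 m:
print(bin_to_range(600.0, 0))
print(bin_to_range(600.0, 43))
```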

2.1 ESFL aircraft flights

The ESFL airborne instrument was built and then tested during three different flight campaigns on a Twin Otter aircraft. The purpose of the flights was two-fold. One was an "engineering demonstration" to prove out the unique capabilities of ESFL, such as real-time beam reconfiguration. The other was to go further and use the instrument to gather data for scientific measurements. Table 2 lists typical ESFL instrument parameters as well as typical operating parameters during these flights over forest scenes. Each beam was projected to the ground and imaged on a spot 12 pixels in diameter. A flight altitude of 2000 ft above the ground translates to a ~8.6 m beam width on the ground, creating a 90 m across-track swath when all the beams were on. A ground speed of ~45 m/s means 1.5 m between shots along-track, so with an 8.6 m beam diameter there was a large amount of overlap shot-to-shot.

Table 2. Typical operating parameters for ESFL flights

Parameter                                             Value   Unit
Frame rate                                            30      Hz
Altitude above target                                 2000    ft
Ground speed                                          45      m/s
Receiver FOV/Maximum deflection                       8.6     degree
Number of beams across-track (pseudo-pushbroom mode)  8       -
Beam diameter                                         12      pixel
Beam diameter                                         8.6     m
Swath width across-track                              90      m
Pixel footprint                                       0.7     m
Distance between shots along-track                    1.5     m
Receiver oscillator rate                              200     MHz
Bin size                                              0.75    m
Number of time bins                                   44      -
Total gate size                                       33      m
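The shot-spacing and overlap numbers in Table 2 follow from the frame rate and ground speed. A quick check of that arithmetic:

```python
# Worked numbers behind Table 2: along-track shot spacing and the
# resulting shot-to-shot overlap for the quoted flight parameters.

ground_speed_mps = 45.0   # typical ground speed
frame_rate_hz = 30.0      # lidar frame rate
beam_diameter_m = 8.6     # beam footprint on the ground

shot_spacing = ground_speed_mps / frame_rate_hz
overlap_fraction = 1.0 - shot_spacing / beam_diameter_m

print(shot_spacing)               # 1.5 m between shots along-track
print(round(overlap_fraction, 2)) # ~0.83, i.e. large shot-to-shot overlap
```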

The ESFL receiver digitizes the returned laser energy into 44 time bins. With each time bin spanning 0.75 m in range, the full gate size equals 33 m. However, some trees are taller than 33 m, and even when they are not, it is unlikely that the gate start range will be set so that the returned signal is centered exactly in the 44 bins. Therefore, more than 44 time bins are needed. One approach to solving this problem is the concept of "sliding gates." As the lidar flies above the target medium, such as a forest, the range gates are staggered on each laser shot to span a larger range. For example, gate 1 will center on the ground return, gate 2 on the middle of the trees, and gate 3 on the canopy tops, and then the series repeats. This works reasonably well for airplane speeds that are slow enough, and a target scene that varies slowly enough, that gates 1, 2, and 3 sample roughly the same scene. Successive shots are separated by 1.5 m along-track, as shown in Table 2, so for a sampling scheme with, say, three gates per stack, there is 1.5 m x 3 = 4.5 m on the ground between gate 1 of each stack. If the scene varies too rapidly, or the sampling is too sparse, however, too much information can be missed this way and additional errors are introduced.
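The sliding-gate scheme can be sketched as a shot counter cycling through a stack of gate offsets. This is a minimal illustration, not the flight software; the offsets here are chosen so that three 33 m gates tile ~99 m of range.

```python
# Sketch of the "sliding gates" scheme described above: on successive
# shots the gate start is cycled through a stack of offsets so that three
# 33 m gates together span a larger range. Offset values are illustrative.

GATE_SIZE_M = 33.0
STACK_OFFSETS_M = [0.0, 33.0, 66.0]  # ground, mid-canopy, canopy tops

def gate_start(shot_index: int, base_range_m: float) -> float:
    """Gate start range for a given shot in the repeating 3-gate stack."""
    return base_range_m - STACK_OFFSETS_M[shot_index % len(STACK_OFFSETS_M)]

# Successive shots cycle through the stack and then repeat:
print([gate_start(i, 600.0) for i in range(4)])  # [600.0, 567.0, 534.0, 600.0]
```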

The lidar camera requires knowledge of what range to start the gate; equivalently, it requires a time delay after which it begins collecting its 44 time bins of data. This value must be chosen carefully in order to center the gate on the scene. When flying over varying terrain in an airplane that also changes elevation, 'manual' adjustment of the gate by the operator is difficult and prone to errors. A simple control loop, called "terrain following," is implemented in the ESFL software. For each laser pulse, the signal per time bin is summed over all pixels to produce a conglomerate waveform. Assuming a clear ground return is observed in the form of a strong peak in the waveform, the software interprets this peak as the location of the ground and re-centers the next gate relative to that ground location. This is based on the assumption that changes in ground elevation and airplane elevation are slow relative to the 30 Hz frame rate of the system. When flying over a scene where the strongest signal is expected to come from the tops of trees instead of the ground, the algorithm can be reconfigured to assume that the biggest peak is the top of the canopy and to adjust the gate accordingly.
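The terrain-following loop can be sketched in a few lines: sum over pixels, take the strongest bin as the ground, and re-center the next gate on it. This is an illustrative reconstruction under stated assumptions; the actual ESFL software includes checks and modes this sketch omits.

```python
# Minimal sketch of the terrain-following loop described above: sum the
# return over all pixels per time bin, take the strongest bin as the
# ground, and re-center the next 44-bin gate on it.
import numpy as np

BIN_SIZE_M = 0.75
N_BINS = 44

def next_gate_start(frame: np.ndarray, current_start_m: float) -> float:
    """frame: (pixels, N_BINS) intensities for one shot; returns next gate start (m)."""
    waveform = frame.sum(axis=0)           # conglomerate waveform
    ground_bin = int(np.argmax(waveform))  # strongest peak taken as the ground
    ground_range = current_start_m + ground_bin * BIN_SIZE_M
    # Re-center the 44-bin gate on the detected ground return.
    return ground_range - (N_BINS / 2) * BIN_SIZE_M

# Synthetic shot: background noise plus a strong ground peak in bin 30.
rng = np.random.default_rng(0)
frame = rng.poisson(2.0, size=(128 * 128, N_BINS)).astype(float)
frame[:, 30] += 50.0
print(next_gate_start(frame, 600.0))  # gate re-centered around bin 30
```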

2.2 Aircraft demonstrations of concept

Figure 4 shows an example of the ESFL capability to reconfigure the projected beam pattern shot-to-shot. Three successive laser shots are shown. In the first shot, a single beam was deflected across-track. In the next, three beams are output. In the final frame, 8 beams are deflected across-track. Surface albedo variation causes, in part, the change in intensity between beams. In flight, this pattern was repeated over and over again. The positions and numbers of beams were arbitrarily chosen in this example, but it serves to illustrate a major departure from the single fixed-beam "transect" approach that is the current paradigm for space-based lidars. If instead some knowledge of the scene were gathered in real time from a separate camera, the number of beams and their positions could be chosen to maximize the desired observation.

Figure 4. Three successive laser shots showing three different beam configurations. The transmitter cycles between 1, 3, and 8 deflected beams at a rate of 30 Hz. The data have not been corrected for focal plane responsivity in this example. The light remaining in the undeflected beam was spread into a line and projected ahead of the deflected beams, allowing its use while maintaining eye-safety.

One example of such a closed-loop implementation can be seen in Figure 5. In this case, a laboratory test was performed to demonstrate how ESFL can be used to avoid clouds. Randomly generated synthetic cloud patterns of varying density were created in MATLAB and used to make a movie simulating a dynamic cloud scene, as would be expected in the lidar FOV on-orbit. This movie was projected on the laboratory wall and the lidar was aligned to the scene. The default beam configuration was chosen by the operator; a three-beam case is shown in the left panel of Figure 5. The visible context camera co-boresighted with the ESFL lidar was used to image the cloud movie. The lidar computer controller received the data from the camera and, in real time, identified the regions with and without clouds using a simple brightness-based algorithm. The AOBD was then commanded to change the deflections of the three beams independently and in real time so as to avoid the clouds. The entire scene, including the cloud pattern plus the ESFL beams, was detected on the lidar receiver; these are the data displayed in Figure 5. The approach demonstrated that system latencies were sufficiently small that the multi-beam pattern could be steered around clouds at the speeds that would be necessary for potential space missions. Future work with ESFL will include flying above clouds to demonstrate this from aircraft.
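A brightness-based avoidance step of the kind described above can be sketched as thresholding the context-camera image and moving each beam to the nearest cloud-free column. The threshold and geometry here are illustrative assumptions, not the flight algorithm.

```python
# Sketch of brightness-based cloud avoidance: threshold the context-camera
# columns, then steer each beam to the nearest cloud-free across-track
# position. CLOUD_THRESHOLD and the 1-D column geometry are illustrative.
import numpy as np

CLOUD_THRESHOLD = 0.6  # normalized brightness above which a column is "cloud"

def steer_beams(column_brightness: np.ndarray, default_columns: list) -> list:
    """Return new across-track columns for each beam, avoiding bright (cloudy) columns."""
    clear = np.flatnonzero(column_brightness < CLOUD_THRESHOLD)
    # Move each beam to the nearest clear column (a beam already clear stays put).
    return [int(clear[np.argmin(np.abs(clear - c))]) for c in default_columns]

brightness = np.full(128, 0.2)
brightness[40:70] = 0.9                        # a cloud covering columns 40-69
print(steer_beams(brightness, [20, 50, 100]))  # [20, 39, 100]
```

The beam at column 50 sits under the simulated cloud and is moved to column 39, the nearest clear position; the other two beams are left alone.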

Figure 5. Two frames of data from the ESFL lidar receiver during a laboratory cloud-avoidance test. A simulated cloud pattern movie, as would be seen from space, was projected on a wall. The ESFL visible camera detects the pattern, and the lidar then deflects its beams as needed to avoid the clouds.

A second example of a closed-loop ESFL implementation is operation in "geolocation mode," shown in Figure 6. In this mode, the user defines a flight transect by a pair of start and stop latitude and longitude values. As ESFL flies in an aircraft, the Applanix GPS/IMU registers the pointing of the lidar field of view in real time. This pointing varies with aircraft roll. When the ESFL flies over these points, the instantaneous GPS and IMU values feed directly to the ESFL computer, and the required beam deflection is sent to the AOBD such that the beam intersects the desired transect at all times. To test this, a pre-defined transect was chosen to be the centerline of a straight road. The top panel of Figure 6 shows the ESFL lidar data, with the undeflected beam spread "horizontally" around row 60 and the single deflected beam around column 80. The higher reflected intensity from the centerline paint is seen in the lidar signal. The bottom panel shows the data from the visible camera with the road being tracked. The demonstration was successful: the beam remained centered on the road throughout the defined transect despite aircraft motion. In the future this capability will be upgraded to follow any predefined geolocation coordinates rather than being limited to simple lines.
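The core of the geolocation-mode pointing can be sketched with simplified flat-earth geometry: compute the look angle from the aircraft's cross-track offset to the transect, compensate for roll, and clamp to the AOBD's angular range. This is a sketch under stated assumptions (no pitch or heading terms), not the flight implementation.

```python
# Flat-earth sketch of "geolocation mode": given the aircraft's cross-track
# offset from the desired transect and its roll, compute the AOBD deflection
# that keeps the beam on the transect. Pitch and heading terms are omitted.
import math

HALF_FOV_DEG = 4.3  # half of the 8.6 degree receiver FOV

def beam_deflection_deg(cross_track_offset_m: float,
                        altitude_m: float,
                        roll_deg: float) -> float:
    """Commanded deflection (deg), clamped to the AOBD's angular range."""
    look_angle = math.degrees(math.atan2(cross_track_offset_m, altitude_m))
    command = look_angle - roll_deg  # compensate aircraft roll
    return max(-HALF_FOV_DEG, min(HALF_FOV_DEG, command))

# 20 m off the transect at 610 m altitude (2000 ft) with 1 degree of roll:
print(round(beam_deflection_deg(20.0, 610.0, 1.0), 2))  # 0.88 degrees
```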

Figure 6. Example frame of the geolocation mode of the ESFL. The top panel shows the lidar data in pixel numbers. The spot at column 85, row 35 is the single deflected beam tracking the centerline of the road shown in the visible camera image below. The feature extending along row 60 is the stationary beam, which also shows the signal from the lines in the road being tracked.

Each beam output from the ESFL airborne instrument illuminates multiple pixels on the focal plane. Each beam was sized to image onto a spot 12 pixels in diameter, which allows the ground to be probed at multiple length scales simultaneously. For example, Figure 7 shows four panels of lidar waveform data, each panel representing the same laser pulse data processed in a different way. Panel 1 shows three successive intensity waveforms gathered on three laser pulses; in this case, only the center pixel of the 12 x 12 pixel beam is shown and the rest is discarded. Panel 2 takes the same data as panel 1, but instead bins together the data from the center 3 x 3 pixels of each beam and discards the rest. Similarly, panel 3 is a 5 x 5 subset, and panel 4 is a 7 x 7 pixel subset of the full 12 x 12 pixel beams. Because a pixel footprint corresponds to approximately 0.7 m on the ground in this implementation, this is equivalent to simultaneously probing length scales of 0.7 m, 2.1 m, 3.5 m, and 4.9 m for the four panels, respectively. In other words, the structure of the same tree, for example, can be sampled at multiple length scales on every laser pulse. This capability can serve multiple scientific communities interested in smaller-scale as well as larger-scale structure, whereas previously the different scientific needs could only be addressed by completely different sensors on the same platform.


Figure 7. ESFL intensity waveforms. The same data are binned in four different ways to probe four different length scales. 1 x 1 pixel data capture only the center pixel of a beam, corresponding to a 0.7 m footprint size. 3 x 3 pixel data bin the 9 center-most pixels of the same beam, corresponding to a length scale of 2.1 m. Likewise, 5 x 5 (7 x 7) data bin the 25 (49) center-most pixels of the same beam, corresponding to length scales of 3.5 m (4.9 m). For each waveform, therefore, these four different length scales can be probed simultaneously.
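The multi-scale binning in Figure 7 amounts to summing the center n x n pixels of the waveform cube for each beam. A minimal sketch, assuming a (rows, cols, time_bins) data layout that is illustrative rather than the actual ESFL format:

```python
# Sketch of the multi-scale binning shown in Figure 7: from a 12 x 12-pixel
# beam image with a waveform per pixel, sum the center n x n pixels to probe
# footprints from 0.7 m (1 x 1) up to 4.9 m (7 x 7). The cube layout is an
# illustrative assumption, not the actual ESFL data format.
import numpy as np

PIXEL_FOOTPRINT_M = 0.7

def center_binned_waveform(beam: np.ndarray, n: int) -> np.ndarray:
    """beam: (rows, cols, time_bins) cube; returns the summed n x n center waveform."""
    r0 = (beam.shape[0] - n) // 2
    c0 = (beam.shape[1] - n) // 2
    return beam[r0:r0 + n, c0:c0 + n, :].sum(axis=(0, 1))

beam = np.ones((12, 12, 44))  # uniform synthetic beam, 44 time bins
for n in (1, 3, 5, 7):
    wf = center_binned_waveform(beam, n)
    print(n * n, round(n * PIXEL_FOOTPRINT_M, 1))  # pixels summed, length scale (m)
```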

2.3 ESFL flight testing for science data collection

The ESFL Instrument Incubator Program (IIP) grant was proposed in collaboration with science partners Professor Michael Lefsky (Colorado State University), Ingrid Burke (University of Wyoming), and Yongxiang Hu (NASA Langley Research Center). The science co-investigators provided input during the design phase and guidance on how best to test the scientific potential of the instrument. They chose the sites to fly over, the types of trees to sample, and the sampling configurations to maximize the observations; most importantly, after the initial Level 1 processing was done, they analyzed the data and drew scientific conclusions from the ESFL measurements to evaluate its potential.

Flight testing was conducted over several forest sites in multiple areas of the United States. ESFL flew over the Manitou Experimental Forest near Colorado Springs, CO in November 2009 and May 2010. This forest had a relatively open canopy of ponderosa pine (< 80% canopy cover), and good ground returns were obtained. An example of ESFL data from Manitou is shown in Figure 8.

Figure 8. ESFL data of the Manitou Experimental Forest, containing < 80% canopy cover. Good ground returns are observed.

Figures 9 and 10 show ESFL data taken over the Stephen F. Austin Experimental Forest in Nacogdoches, TX in May 2010. This forest is a mix of pine and deciduous trees, with > 95% canopy closure. Despite the dense foliage, good ground returns are measured in the ESFL data. Figure 9 also shows the Angelina River that meanders through the forest.


Figure 9. Stephen F. Austin Forest near Nacogdoches, TX in May 2010. The Angelina River is also visible.

Figure 10. Stephen F. Austin Forest data showing tree structure and ground returns.

In September 2010, ESFL flew over the Smithsonian Environmental Research Center (SERC) test forest in Maryland. This forest is composed primarily of deciduous trees, also with greater than 95% canopy closure. Figure 11 shows ESFL data taken over the site. Again, despite the dense canopy, the ground returns were strong enough to allow comparison of canopy height and ground elevation, yielding tree height.

Figure 11. ESFL data taken over the SERC forest in Maryland.


In each of these cases, ESFL data were validated against data taken with commercially available conventional airborne lidar scanners. In a study over the Manitou and Stephen F. Austin forests, ESFL performed as well as the commercial lidar systems in estimating tree height over the different forests [21]. Furthermore, the measured rate of light penetration to the ground surface was greater with the ESFL system than with the commercial lidar. Similar studies are planned for the SERC data set. Future studies also include ground validation measurements in the Manitou Experimental Forest.

3. CONCLUSION

An Electronically Steerable Flash Lidar is an example of a new approach to adaptive space-based laser remote sensing that overcomes some of the current limitations of lidar systems and offers a path toward much higher science return without a commensurate increase in mission risk. The ability of the lidar to adapt to the scene being measured, using internal or external sources of knowledge of the scene, opens up the possibility of optimizing both science return and instrument performance. One example of this capability is altering the beam pattern to make measurements on the natural spatial scales of interest while maintaining adequate signal-to-noise. The deterministic pointing also allows the laser light to be used most efficiently for collecting the science: it can be used to point around clouds, reducing loss and biases, or to follow precise transects along the ground. Ultimately, the approach lends itself to the idea of a space-based lidar observatory, one that can be re-configured on-orbit to perform very different types of scientific studies, maximizing the scientific return on the investment in the satellite.

ACKNOWLEDGEMENTS

We gratefully acknowledge the support of Ball Aerospace's COS Office of the Chief Engineer's Active Sensing Initiative and the NASA Earth Science Technology Office (Grant # NNX08AN37G) for funding. The application of the demonstration hardware greatly benefitted from the advice and scientific analysis of Michael Lefsky, Ingrid Burke, and Yongxiang Hu, our co-investigators on the grant. Roger Stettner of Advanced Scientific Concepts was our technology partner for the Flash Focal Plane Array used in the demonstration hardware. Warren Seale of Gooch and Housego helped us to understand how to implement the AOBD. The engineering team responsible for the hardware development was Mike Adkins, Jeff Applegate, Eric Coppock, Rex Craig, Jeremy Craner, Tom Delker, Brian Donley, Scott Edfors, Bill Good, Paul Kaptchen, Lyle Ruppert, and David Waller. We also appreciate the support of Ray Demara and Shelley Petroy at Ball throughout the development.

Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Aeronautics and Space Administration.

REFERENCES

[1] Schutz, B.E., H.J. Zwally, C.A. Shuman, D. Hancock, and J.P. DiMarzio: “Overview of the ICESat Mission,” GRL Vol. 32 (2005).

[2] Winker, D.M. et al., “The CALIPSO Mission: A Global 3D View of Aerosols and Clouds,” Bulletin of the American Meteorological Society p. 1211 (2010).

[3] For example see the NASA Rapid Spacecraft Development Office at http://rsdo.gsfc.nasa.gov/catalog.html

[4] Ramond, Tanya, Eileen Saiki, Carl Weimer, Jeff Applegate, Yongxiang Hu, Thomas Delker, Lyle Ruppert, and Brian Donley: "Topographic Mapping Flash Lidar for Multiple Scattering, Terrain, and Forest Mapping," Proc. SPIE Vol. 8037 (2011).

[5] McManamon, Paul F., Philip J. Bos, Michael J. Escuti, Jason Heikenfeld, Steve Serati, Huikai Xie, and Edward A. Watson: "A Review of Phased Array Steering for Narrow-Band Electro-Optical Systems," Proc. IEEE Vol. 97, p. 1078 (2009).

[6] Jieping Xu and Robert Stroud: Acousto-Optic Devices: Principles, Design, and Applications, John Wiley & Sons, Inc. (1992).

[7] Speake, C.C. and M. Lawrence: “Dynamical precision angle measurement with an acousto-optic beam deflector,” JOSA Vol. 5, p. 1254 (1988).


[8] Raymond M. Measures: Laser Remote Sensing: Fundamentals and Applications, Krieger Publishing Company (1984).

[9] Seablom, Michael S., Stephen J. Talabac, Glenn J. Higgins, and Brice T. Womack: “Simulation for the Design of Next-Generation Global Earth Observing Systems,” Proc. of SPIE Vol. 6884 (2007).

[10] Mazarico, Erwan, G.A. Neumann, D.D. Rowlands, and D.E. Smith: “Geodetic constraints from multi-beam laser altimeter crossovers” J Geod Vol. 84 p. 343 (2010).

[11] Lefsky, M.A., T. Ramond, and C.S. Weimer: “Alternate spatial sampling approaches for ecosystem structure inventory using spaceborne lidar,” Remote Sensing of Environment, Vol 115 p. 1361 (2011).

[12] Abdalati, W., et al., “Report of the ad-hoc science definition team for the Ice Cloud and Land Elevation Satellite-II (ICESAT-II),” NASA, (2008).

[13] Baum, Bryan A. and Qing Trepte: "A Grouped Threshold Approach for Scene Identification in AVHRR Imagery," J. of Atmospheric and Oceanic Technology Vol. 16, p. 793 (1999).

[14] Simpson Weather Associates “Using GLAS/ICESat data to Derive CFLOS statistics for the Design of Future Space-Based Active Optical Remote Sensors,” (2006) available at http://esto.nasa.gov/.

[15] Luo, Zhengzhao, Graeme Stephens, Kerry A. Emanuel, Deborah G. Vane, Natalie D. Tourville, and John M. Haynes: "On the Use of Cloudsat and MODIS Data for Estimating Hurricane Intensity," IEEE Geoscience and Remote Sensing Letters Vol. 5, p. 13 (2008).

[16] Xun, Xiaodong, Xiaoguang Chang, and Robert W. Cohn: "System for demonstrating arbitrary multi-spot beam steering from spatial light modulators," Optics Express Vol. 12, p. 260 (2004).

[17] Nikulin, Vladimir V., Rahul M. Khandekar, and Jozef Sofka: "Agile acousto-optic tracking system for free-space optical communications," Opt. Eng. Vol. 47 (2008).

[18] Mandl, Dan et al., "Linking Satellites Via Earth 'Hot Spots' and the Internet to Form Ad Hoc Constellations," Proc. SPIE Vol. 5659 (2005).

[19] Morss, Rebecca E., Kerry Emanuel and Chris Snyder: “Idealized Adaptive Observation for Improving Numerical Weather Prediction,” J. Atmospheric Sciences Vol. 58, p. 210 (2001).

[20] Weimer, Carl, Ron Schwiesow, Mark LaPole: “CALIPSO: Lidar and Wide Field Camera Performance,” SPIE Vol. 5542 (2004).

[21] Duong, H.V., M.A. Lefsky, T. Ramond, and C. Weimer: “The Electronically Steerable Flash Lidar: A full waveform scanning system for topographic and ecosystem structure applications,” in preparation.
