
Single-shot compressed optical-streaking ultra-high-speed photography

XIANGLEI LIU, JINGDAN LIU, CHENG JIANG, FIORENZO VETRONE, AND JINYANG LIANG*

Centre Énergie Matériaux Télécommunications, Institut National de la Recherche Scientifique, 1650 boulevard Lionel-Boulet, Varennes, Québec J3X1S2, Canada

*Corresponding author: [email protected]

Received 5 December 2018; revised 29 January 2019; accepted 1 February 2019; posted 4 February 2019 (Doc. ID 354435); published 7 March 2019

Single-shot ultra-high-speed imaging is of great significance for capturing transient phenomena in physics, biology, and chemistry in real time. Existing techniques, however, have a restricted application scope, a low sequence depth, or a limited pixel count. To overcome these limitations, we developed single-shot compressed optical-streaking ultra-high-speed photography (COSUP) with an imaging speed of 1.5 million frames per second, a sequence depth of 500 frames, and an (x, y) pixel count of 0.5 megapixels per frame. COSUP's single-shot ultra-high-speed imaging ability was demonstrated by recording single laser pulses illuminating through transmissive targets and by tracing a fast-moving object. As a universal imaging platform, COSUP is capable of increasing the imaging speeds of a wide range of CCD and complementary metal–oxide–semiconductor cameras by four orders of magnitude. We envision COSUP to be applied in widespread applications in biomedicine and materials science. © 2019 Optical Society of America

https://doi.org/10.1364/OL.44.001387

Single-shot ultra-high-speed [i.e., 0.1–10 million frames per second (Mfps)] imaging [1] is indispensable for visualizing various instantaneous phenomena occurring in two-dimensional (2D) space, such as phosphorescence light emission [2], neural activities [3], and conformational changes in proteins [4]. Existing single-shot ultra-high-speed imaging techniques [5–13] can be generally categorized into active-detection and passive-detection domains. The active-detection approaches exploit specially designed pulse trains to probe 2D transient events [i.e., (x, y) frames that vary in time]. Representative modalities include frequency-dividing imaging [5] and time-stretching imaging [6–9]. However, these methods are not suitable for imaging self-luminescent and color-selective events. By contrast, the passive-detection approaches leverage receive-only ultra-high-speed detectors to record photons scattered and emitted from the transient scenes. Examples include rotatory-mirror-based cameras [10], beam-splitting-based framing cameras [11], the in situ storage image sensor-based CCD camera [12], and the global shutter stacked complementary metal–oxide–semiconductor (CMOS) camera [13]. Nevertheless, these cameras either have a bulky and complicated structure or have a limited sequence depth (i.e., the number of frames in one acquisition) and pixel count.

To circumvent these drawbacks, computational imaging techniques [14], which combine physical data acquisition with numerical image reconstruction, have been increasingly featured in recent years. In particular, the implementation of compressed sensing (CS) [15] for spatial and/or temporal multiplexing has allowed overcoming the speed limit with a substantial improvement in the sequence depth and pixel count [16,17]. Representative techniques in this category include the programmable pixel compressive camera (P2C2) [18,19], coded aperture compressive temporal imaging (CACTI) [20,21], and the multiple-aperture (MA)-CS CMOS camera [22]. However, despite reaching over one megapixel per frame, the imaging speeds of P2C2 and CACTI, inherently limited by the refreshing rate of spatial light modulation and the moving speed of a piezoelectric stage, are clamped at several thousand fps (kfps). The MA-CS CMOS camera, despite its ultra-high imaging speed, has a limited pixel count of 64 × 108 with a sequence depth of 32. Thus, existing computational imaging modalities fall short of simultaneously possessing satisfying specifications in frame rate, sequence depth, and pixel count for ultra-high-speed imaging.

To overcome these limitations, in this Letter, we propose single-shot compressed optical-streaking ultra-high-speed photography (COSUP), which is a passive-detection computational imaging modality with a 2D imaging speed of 1.5 Mfps, a sequence depth of 500, and an (x, y) pixel count of 1000 × 500 per frame. The schematic of the COSUP system is shown in Fig. 1. A transient scene is first imaged onto a digital micromirror device (DMD, AJD-4500, Ajile Light Industries), on which a binary pseudo-random pattern (with an encoding pixel size of 32.4 μm × 32.4 μm) is loaded to conduct spatial encoding. Subsequently, the spatially encoded frames are relayed onto a CMOS camera (GS3-U3-23S6M-C, FLIR) by a 4f system. A galvanometer scanner (GS, 6220H, Cambridge Technology), placed at the Fourier plane of this 4f system, temporally shears the spatially encoded frames linearly to different spatial locations along the x axis of the CMOS camera according to their time of arrival. The synchronization between the GS's rotation and the camera's exposure is controlled by a sinusoidal signal and a rectangular signal from a function generator (DG1022, Rigol Technologies), as depicted in the inset of Fig. 1.


Finally, via spatiotemporal integration, the CMOS camera compressively records the spatially encoded and temporally sheared scene as a 2D streak image with a single exposure. It is noted that our work is inspired by recent advances in compressed ultrafast photography [23–25]. However, instead of using a streak camera, we implement the GS [26,27] for temporal shearing and use an off-the-shelf CMOS camera for detection. This design thus avoids drawbacks present in existing streak cameras [28], such as the space-charge effect that degrades the image quality and the low quantum efficiency (e.g., ∼10% at 500 nm for the S-20 photocathode). In the following, we shall demonstrate that this optical-streaking approach can increase the imaging speed of the CMOS camera by four orders of magnitude to the Mfps level for recording single laser pulses illuminating through transmissive targets and for tracking a fast-moving object.

The operation of the COSUP system can be described by the following model:

$$E = T S_\mathrm{o} C\, I(x, y, t), \tag{1}$$

where $I(x, y, t)$ is the light intensity of the transient event, $C$ represents spatial encoding by the DMD, $S_\mathrm{o}$ represents linear temporal shearing by the GS (the subscript "o" stands for "optical"), and $T$ represents spatiotemporal integration by the CMOS camera. Because of the prior knowledge of the operators and the spatiotemporal sparsity of the transient scene, $I(x, y, t)$ can be recovered from the measurement $E$ by solving the inverse problem of

$$\hat{I} = \arg\min_{I} \left\{ \lVert E - T S_\mathrm{o} C I \rVert_2^2 + \lambda \Phi_\mathrm{TV}(I) \right\}. \tag{2}$$

Here, $\lVert \cdot \rVert_2^2$ represents the $\ell_2$ norm, $\lambda$ is a weighting coefficient, and $\Phi_\mathrm{TV}(\cdot)$ is the total variation (TV) regularizer. In practice, $I(x, y, t)$ was recovered by using a CS-based algorithm developed upon the two-step iterative shrinkage/thresholding (TwIST) algorithm [29].
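To make the operators concrete, the following is a minimal NumPy sketch of the forward model in Eq. (1) and its adjoint, the two building blocks a TwIST-type solver of Eq. (2) requires. The one-pixel-per-frame shear discretization and all function names are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cosup_forward(I, mask):
    """Forward model of Eq. (1): encode (C), shear (S_o), integrate (T).

    I    : (Nt, Ny, Nx) datacube of the transient scene I(x, y, t)
    mask : (Ny, Nx) binary pseudo-random DMD pattern
    Returns the single 2D streak image E, assuming a one-pixel shear
    per frame along the x axis.
    """
    Nt, Ny, Nx = I.shape
    E = np.zeros((Ny, Nx + Nt - 1))
    for t in range(Nt):
        E[:, t:t + Nx] += mask * I[t]   # shear frame t by t pixels, then sum
    return E

def cosup_adjoint(E, mask, Nt):
    """Adjoint operator C^T S_o^T T^T: un-shear the streak image and
    re-apply the mask, giving the gradient back-projection used by a
    TwIST-type solver of Eq. (2)."""
    Ny, width = E.shape
    Nx = width - Nt + 1
    return np.stack([mask * E[:, t:t + Nx] for t in range((Nt))])
```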

To obtain a linear temporal shearing operation, we synchronized the camera's exposure window with the GS's linear rotation region. In particular, a static target was placed at the object plane. By tuning the initial phase of the sinusoidal function, the camera's exposure window was slid to search for the peak or valley of the sinusoidal signal. The search was completed when local features of the static target were precisely matched in the streak image due to the symmetric back-and-forth scanning. Finally, 90° was added to the initial phase to find the linear-slope region of the sinusoidal function.
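For illustration, the sketch below generates the two assumed control waveforms used in this calibration; the sampling rate, waveform forms, and function names are hypothetical.

```python
import numpy as np

def drive_signals(tg, te, phi, fs=20e6):
    """Sketch of the two function-generator outputs (assumed forms).

    tg  : period of the sinusoid driving the GS [s]
    te  : camera exposure time [s]
    phi : initial phase [rad]; after the peak/valley search described
          above, pi/2 (90 deg) is added so that the exposure window
          falls on the linear slope of the sinusoid
    """
    t = np.arange(0, 2 * tg, 1 / fs)        # two periods for context
    gs = np.sin(2 * np.pi * t / tg + phi)   # sinusoidal GS control signal
    gate = ((t % tg) < te).astype(float)    # rectangular exposure gate
    return t, gs, gate
```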

The reconstructed movie has a frame rate of

$$r = \frac{\alpha U f_4}{t_g d}. \tag{3}$$

Here, $\alpha = 0.07$ rad/V is the constant that links the voltage added onto the GS, denoted as $U$, with the deflection angle in its linear rotation range, $f_4 = 75$ mm is the focal length of Lens 4, $t_g$ is the period of the sinusoidal control signal added to the GS, and $d = 5.86$ μm is the CMOS sensor's pixel size. In addition, the preset exposure time of the CMOS camera, $t_e$, determines the total length of the recording time window. If the entire streak is located within the sensor, the sequence depth can be calculated by $N_t = r t_e$. The number of pixels in the x axis of each frame, $N_x$, can be calculated by $N_x \le N_c + 1 - N_t$, where $N_c$ denotes the number of pixels in each column of the CMOS sensor. The number of pixels in the y axis of each frame, $N_y$, is less than or equal to that in each row of the CMOS sensor, $N_r$ [i.e., $N_y \le N_r$].
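As a worked example of these relations, the following sketch reproduces the reported specifications from Eq. (3). The drive voltage U, period t_g, and the assumption of 1920 sensor pixels along the shearing direction are our illustrative choices, not values stated in the Letter.

```python
ALPHA = 0.07   # rad/V, GS voltage-to-deflection constant
F4 = 75e-3     # m, focal length of Lens 4
D = 5.86e-6    # m, CMOS pixel size

def frame_rate(U, tg):
    """Eq. (3): r = alpha * U * f4 / (tg * d), in frames per second."""
    return ALPHA * U * F4 / (tg * D)

# Hypothetical drive settings chosen to reproduce the reported numbers:
U, tg = 3.35, 2e-3       # V and s (illustrative only)
r = frame_rate(U, tg)    # ~1.5e6 fps
te = 500 / r             # exposure time giving a sequence depth of 500
Nt = round(r * te)       # Nt = r * te = 500 frames
Nx_max = 1920 + 1 - Nt   # Nx <= Nc + 1 - Nt, assuming 1920 pixels along
                         # the shearing direction of the sensor
# Nx_max = 1421 >= 1000, consistent with the 1000 x 500 frame size.
```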

To characterize COSUP's spatial frequency responses, we imaged single laser pulses illuminating through a resolution target [Fig. 2(a)].

Fig. 1. Schematic of the COSUP system. Inset: synchronization between the CMOS camera's exposure (black solid line) with an exposure time of t_e and the galvanometer scanner's sinusoidal control signal (blue dashed line) with a period of t_g. Lenses 1 and 4, AC508-075, Thorlabs; Lenses 2 and 3, AC508-100, Thorlabs.

Fig. 2. Quantifying COSUP's spatial frequency responses. (a) Experimental setup. (b) Illuminated bars on the resolution target. In the first panel, the numbers represent Elements 4 to 6 in Group 2 and Elements 1 to 6 in Group 3. The rest of the panels show the projected images of illuminated bars for different laser pulse widths, calculated by summing over voxels of the reconstructed (x, y, t) datacubes along the t axis. (c) Comparison of COSUP's spatial frequency responses with different laser pulse widths. (See Visualization 1.)


In particular, a 532 nm continuous-wave laser was controlled by an external trigger to generate laser pulses with different temporal widths. Five different pulse widths (100, 300, 500, 700, and 900 μs) were used to provide decreased sparsity from 90% to 10% with a step of 20% in the temporal axis for a recording time window of 1 ms. COSUP captured these dynamic scenes at 60 kfps. The illuminated bars (i.e., Elements 4 to 6 in Group 2 and Elements 1 to 6 in Group 3) are shown in the first panel in Fig. 2(b). We reconstructed movies for each pulse width (see Visualization 1) and projected these datacubes onto the x–y plane, as shown in the rest of the panels in Fig. 2(b). These results reveal that the spatial resolution of COSUP depends on the sparsity of the transient scene. The contrast in the reconstructed images degrades with increased laser pulse widths. Moreover, longer pulse widths produce lower reconstructed intensity. To quantify the system's performance by considering both effects, we used the normalized product of the contrast and the reconstructed intensity as the merit function [Fig. 2(c)]. For the 900 μs pulse illumination, Element 3 in Group 3 in the reconstruction has a normalized product below 0.25, which was used as the threshold to determine the resolvable feature. Thus, COSUP's spatial resolution was quantified to be 50 μm.
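As a sketch of this quantification, the snippet below computes such a merit value for one bar element. Michelson contrast and peak-intensity normalization are our assumptions; the Letter does not spell out the exact definitions.

```python
import numpy as np

def merit_value(profile, peak_ref):
    """Hypothetical merit: contrast times normalized reconstructed intensity.

    profile  : 1D intensity profile across one bar element of the target
    peak_ref : peak intensity of a reference reconstruction (e.g., the
               100 us pulse) used to normalize the intensity term
    """
    i_max, i_min = profile.max(), profile.min()
    contrast = (i_max - i_min) / (i_max + i_min)   # Michelson contrast (assumed)
    return contrast * (i_max / peak_ref)

# An element is deemed resolvable when its merit value exceeds 0.25.
```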

To demonstrate COSUP's multi-scale ultra-high-speed imaging capability, we captured the transmission of single laser pulses through a mask. An incident laser pulse was divided by a beam splitter into two components. The reflected component was recorded by a photodiode, and the transmitted component illuminated a mask with four transmissive letters, "U," "S," "A," and "F," that modulated the laser pulses' spatial profiles. For the first experiment, we generated a pulse train that contained four 300 μs pulses. COSUP's imaging speed was set to 60 kfps.

While the CMOS camera, at its intrinsic imaging speed of 20 fps, could only provide a single image [red solid box in Fig. 3(a)] without any temporal information, COSUP recorded the mask's spatial profile and the laser pulse's intensity time course in a movie with 240 frames (Visualization 2). A representative frame (t = 433 μs) is shown in Fig. 3(a). Figure 3(b) depicts the normalized intensity of a selected cross section [cyan dashed line in Figs. 3(a) and 3(d)], which demonstrates that the spatial features are well reconstructed with respect to the ground truth. Moreover, we calculated the average intensity of each frame. The time course shows good agreement with the photodiode-recorded result [Fig. 3(c)]. We then increased the imaging speed to 1.5 Mfps to record a single 10 μs laser pulse. The reconstructed movie is Visualization 3, and a representative frame (t = 33 μs) is shown in Fig. 3(d). The comparison of the time courses of averaged intensity [Fig. 3(e)] confirms the consistency between the COSUP and photodiode results at this imaging speed.

To demonstrate COSUP's ability to track fast-moving objects, we imaged an animation of a fast-moving ball [Fig. 4(a)]. This animation comprised 40 patterns, which were loaded and played by another DMD (D4100, Digital Light Innovations) at 20 kHz. We shone a collimated laser beam onto the DMD at ∼24° from its surface normal.

Fig. 3. Recording the transmission of single laser pulses through a mask using COSUP. (a) A representative reconstructed frame showing a 300 μs laser pulse passing through a transmissive USAF pattern. The imaging speed was 60 kfps. Inset: the time-integrated image captured by the CMOS camera at its intrinsic imaging speed (20 fps). (b) Normalized intensity of a selected cross section [cyan dashed lines in (a) and (d)] in the ground truth (black circle) and in the representative reconstructed frames using 300 μs (green solid line) and 10 μs (magenta dashed line) laser pulses. (c) Comparison of the measured normalized average intensity of the laser pulse as a function of time using the COSUP system (red solid line) and a photodiode (black dashed line) for the 300 μs laser pulse. (d) The same as (a), but using a 10 μs laser pulse with a 1.5 Mfps imaging speed. (e) The same as (c), but for a 10 μs laser pulse. (See Visualization 2 and Visualization 3.)

Fig. 4. Tracing a fast-moving object using COSUP. (a) Experimental setup. (b) Time-integrated image of the fast-moving ball patterns, imaged at the intrinsic frame rate of the CMOS camera (20 fps). (c) Superimposed image of 10 representative time-lapse frames (with an interval of 215 μs) of the same dynamic scene in (b), imaged by using the COSUP system. (d) Comparison of the centroid positions along the x and y axes between the measurement results and the ground truths. To avoid cluttering, only one data point is shown for every seven measured data points. (See Visualization 4.)


The COSUP system perpendicularly faced the DMD's surface and collected the light diffracted by the patterns at 140 kfps. Figure 4(b) shows a time-integrated image of this dynamic event acquired by the CMOS camera at its intrinsic frame rate of 20 fps. Figure 4(c) shows a color-encoded image generated by superimposing 10 representative time-lapse frames (with an interval of 215 μs) of the moving ball from the movie reconstructed by the COSUP system (see Visualization 4). While the time-integrated image merely presents the overall trace, the time-lapse frames unambiguously show the evolution of the spatial position and the shape (especially the deformation from the round to the elliptical shape at the turning points of its trajectory) at each time point. To evaluate the reconstruction's accuracy, we traced the centroids of the bouncing ball in each reconstructed frame [Fig. 4(d)]. The measurement errors were calculated by subtracting the measured centroid positions from the preset ones. Further, the root-mean-square errors (RMSEs) of the reconstructed centroids along the x and y axes were calculated to be 22 and 9 μm, respectively. These analyses confirm that the COSUP system has good measurement accuracy with respect to the ground truth. The anisotropy of the RMSEs is attributed to the spatiotemporal mixing along the shearing direction.
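A minimal sketch of this accuracy analysis, assuming intensity-weighted centroids (the Letter does not state the centroid estimator):

```python
import numpy as np

def centroid(frame):
    """Intensity-weighted centroid (x, y) of one reconstructed frame."""
    ys, xs = np.indices(frame.shape)
    total = frame.sum()
    return (xs * frame).sum() / total, (ys * frame).sum() / total

def axis_rmse(measured, preset):
    """Per-axis RMSE between measured and preset centroid tracks.

    measured, preset : arrays of shape (N, 2) holding (x, y) positions
    Returns (rmse_x, rmse_y); the Letter reports 22 and 9 um.
    """
    err = np.asarray(measured) - np.asarray(preset)
    return tuple(np.sqrt((err ** 2).mean(axis=0)))
```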

In conclusion, we have demonstrated single-shot 2D ultra-high-speed passive imaging using COSUP. Featuring optical streaking with a GS in the 4f imaging system, COSUP endows an off-the-shelf CMOS camera with tunable imaging speeds of up to 1.5 Mfps, approximately three orders of magnitude higher than the state-of-the-art imaging speed of CS-based temporal imaging using silicon sensors [19–21]. In addition, the system is capable of reaching a sequence depth of up to 500 frames and a pixel count of 0.5 megapixels in each frame. COSUP's ultra-high-speed imaging capability was demonstrated by capturing the transmission of single laser pulses through a mask and by tracing the shape and position of a fast-moving object in real time.

As a universal imaging platform, COSUP can achieve a scalable spatial resolution by coupling with different front optics in microscopes and telescopes. Moreover, it provides a cost-effective alternative to researchers performing high-speed imaging with limited budgets. Finally, although not demonstrated in this work, COSUP can be easily applied to other CCD or CMOS cameras according to specific studies. For instance, integration of an electron-multiplying CCD camera in the COSUP system will enable high-sensitivity optical neuroimaging of action potentials propagating at tens of meters per second [30] under microscopic settings [31]. As another example, an infrared-camera-based COSUP system could enable wide-field temperature sensing in deep tissue using nanoparticles [32]. In summary, by leveraging the advantages of off-the-shelf cameras and sensors, COSUP is expected to find widespread applications in both fundamental and applied sciences.

Funding. Natural Sciences and Engineering Research Council of Canada (NSERC) (RGPAS-507845-2017, RGPIN-2017-05959); Canada Foundation for Innovation (CFI) (37146); Fonds de Recherche du Québec–Nature et Technologies (FRQNT) (2019-NC-252960).

REFERENCES

1. P. W. W. Fuller, Imaging Sci. J. 57, 293 (2009).
2. S. S. Howard, A. Straub, N. G. Horton, D. Kobat, and C. Xu, Nat. Photonics 7, 33 (2013).
3. W. Yang and R. Yuste, Nat. Methods 14, 349 (2017).
4. M. Dong, S. Husale, and O. Sahin, Nat. Nanotechnol. 4, 514 (2009).
5. A. Ehn, J. Bood, Z. Li, E. Berrocal, M. Alden, and E. Kristensson, Light: Sci. Appl. 6, e17045 (2017).
6. K. Goda, K. K. Tsia, and B. Jalali, Nature 458, 1145 (2009).
7. K. Goda, A. Ayazi, D. R. Gossett, J. Sadasivam, C. K. Lonappan, E. Sollier, A. M. Fard, S. C. Hur, J. Adam, C. Murray, C. Wang, N. Brackbill, D. Di Carlo, and B. Jalali, Proc. Natl. Acad. Sci. USA 109, 11630 (2012).
8. K. Goda and B. Jalali, Nat. Photonics 7, 102 (2013).
9. J. Wu, Y. Xu, J. Xu, X. Wei, A. C. Chan, A. H. Tang, A. K. Lau, B. M. Chung, H. C. Shum, E. Y. Lam, K. K. Wong, and K. K. Tsia, Light: Sci. Appl. 6, e16196 (2017).
10. C. T. Chin, C. Lancee, J. Borsboom, F. Mastik, M. E. Frijlink, N. de Jong, M. Versluis, and D. Lohse, Rev. Sci. Instrum. 74, 5026 (2003).
11. V. Tiwari, M. A. Sutton, and S. R. McNeill, Exp. Mech. 47, 561 (2007).
12. T. G. Etoh, D. V. Son, T. Yamada, and E. Charbon, Sensors 13, 4640 (2013).
13. M. Suzuki, M. Suzuki, R. Kuroda, and S. Sugawa, Electron. Imaging 2018, 398 (2018).
14. J. Hunt, T. Driscoll, A. Mrozack, G. Lipworth, M. Reynolds, D. Brady, and D. R. Smith, Science 339, 310 (2013).
15. D. L. Donoho, IEEE Trans. Inf. Theory 52, 1289 (2006).
16. J. Liang and L. V. Wang, Optica 5, 1113 (2018).
17. R. G. Baraniuk, T. Goldstein, A. C. Sankaranarayanan, C. Studer, A. Veeraraghavan, and M. B. Wakin, IEEE Signal Process. Mag. 34(1), 52 (2017).
18. D. Reddy, A. Veeraraghavan, and R. Chellappa, in Conference on Computer Vision and Pattern Recognition (CVPR) (2011), p. 329.
19. Y. Hitomi, J. Gu, M. Gupta, T. Mitsunaga, and S. K. Nayar, in IEEE International Conference on Computer Vision (2011), p. 287.
20. P. Llull, X. Liao, X. Yuan, J. Yang, D. Kittle, L. Carin, G. Sapiro, and D. J. Brady, Opt. Express 21, 10526 (2013).
21. R. Koller, L. Schmid, N. Matsuda, T. Niederberger, L. Spinoulas, O. Cossairt, G. Schuster, and A. K. Katsaggelos, Opt. Express 23, 15992 (2015).
22. F. Mochizuki, K. Kagawa, S. Okihara, M. W. Seo, B. Zhang, T. Takasawa, K. Yasutomi, and S. Kawahito, Opt. Express 24, 4155 (2016).
23. L. Gao, J. Liang, C. Li, and L. V. Wang, Nature 516, 74 (2014).
24. J. Liang, C. Ma, L. Zhu, Y. Chen, L. Gao, and L. V. Wang, Sci. Adv. 3, e1601814 (2017).
25. J. Liang, L. Zhu, and L. V. Wang, Light: Sci. Appl. 7, 42 (2018).
26. D. Wu, X. Zhou, B. Yao, R. Li, Y. Yang, T. Peng, M. Lei, D. Dan, and T. Ye, Appl. Opt. 54, 8632 (2015).
27. B. D. Buckner and D. L'Esperance, Opt. Eng. 52, 083105 (2013).
28. Hamamatsu Corporation, https://www.hamamatsu.com/resources/pdf/sys/SHSS0006E_STREAK.pdf.
29. J. M. Bioucas-Dias and M. A. T. Figueiredo, IEEE Trans. Image Process. 16, 2992 (2007).
30. T. Chen, T. J. Wardill, Y. Sun, S. R. Pulver, S. L. Renninger, A. Baohan, E. R. Schreiter, R. A. Kerr, M. B. Orger, V. Jayaraman, L. L. Looger, K. Svoboda, and D. S. Kim, Nature 499, 295 (2013).
31. H. Mikami, L. Gao, and K. Goda, Nanophotonics 5, 497 (2016).
32. D. Jaque and F. Vetrone, Nanoscale 4, 4301 (2012).
