

Page 1: Science Journals — AAAS · CYTOMETRY Ghostcytometry Sadao Ota1,2,3*†, Ryoichi Horisaki3,4*, Yoko Kawamura1,2*, Masashi Ugawa1*, Issei Sato1,2,3,5, Kazuki Hashimoto2,6, Ryosuke

CYTOMETRY

Ghost cytometry
Sadao Ota1,2,3*†, Ryoichi Horisaki3,4*, Yoko Kawamura1,2*, Masashi Ugawa1*, Issei Sato1,2,3,5, Kazuki Hashimoto2,6, Ryosuke Kamesawa1,2, Kotaro Setoyama1, Satoko Yamaguchi2, Katsuhito Fujiu2, Kayo Waki2, Hiroyuki Noji2,7

Ghost imaging is a technique used to produce an object's image without using a spatially resolving detector. Here we develop a technique we term "ghost cytometry," an image-free ultrafast fluorescence "imaging" cytometry based on a single-pixel detector. Spatial information obtained from the motion of cells relative to a static randomly patterned optical structure is compressively converted into signals that arrive sequentially at a single-pixel detector. Combinatorial use of the temporal waveform with the intensity distribution of the random pattern allows us to computationally reconstruct cell morphology. More importantly, we show that applying machine-learning methods directly on the compressed waveforms without image reconstruction enables efficient image-free morphology-based cytometry. Despite its compact and inexpensive instrumentation, image-free ghost cytometry achieves accurate and high-throughput cell classification and selective sorting on the basis of cell morphology without a specific biomarker, both of which have been challenging to accomplish using conventional flow cytometers.

Imaging and analyzing many single cells holds the potential to substantially increase our understanding of heterogeneous systems involved in immunology (1), cancer (2), neuroscience (3), hematology (4), and development (5). Many key applications in these fields require accurate and high-throughput isolation of specific populations of cells according to information contained in the high-content images. This raises several challenges. First, despite recent developments (6–10), simultaneously meeting the needs of high sensitivity, polychromaticity, high shutter speed and high frame rates, continuous acquisition, and low cost remains difficult. Second, ultrafast and continuous image acquisition subsequently requires computational image reconstruction and analysis that is costly in terms of both time and money (11). Given this, fluorescence imaging–activated cell sorting, for instance, has not been realized yet. Here we show that directly applying machine-learning methods to compressed imaging signals measured with a single-pixel detector enables ultrafast, sensitive, and accurate image-free (without image production), morphology-based cell analysis and sorting in real time, which we call "ghost cytometry" (GC).

In GC, as an object passes through a pseudorandom static optical structure, each randomly arranged spot in the structure sequentially excites fluorophores at different locations of the object (Fig. 1). These encoded intensities from each fluorophore are multiplexed and measured compressively and continuously as a single temporal waveform measured with a single-pixel detector (Fig. 1, bottom graphs), which, in this work, was a photomultiplier tube (PMT). Assuming the object is in constant unidirectional motion with velocity v, the signal acquisition is mathematically described as

g(t) = ∬H(x, y)I(x − vt, y)dxdy   (1)

where g(t) is the multiplexed temporal waveform, H(x, y) is the intensity distribution of the optical structure, and I(x, y) is the intensity distribution of the moving object. Note that H, acting as a spatial encoding operator, is static, so that no scanning or sequential light projection is needed in GC. We designed a binary random pattern for the optical structure as a simple implementation (figs. S1 to S3). In the measurement process of GC, the object is convolved with the optical structure along the x direction, and the resultant signals are integrated along the y direction. In the compressive sensing literature, randomized convolutions are regarded as imaging modalities (12). Given Eq. 1 as a forward model, the image-reconstruction process amounts to solving the inverse problem. This solution can be iteratively estimated by minimizing an objective function that is computed by combinatorial use of the multiplexed temporal waveform, g(t), and the intensity distribution of the optical structure, H. For sparse events in a regularization domain, we can reasonably estimate the moving object from the measured signal, g(t), by adopting a compressed-sensing algorithm, which, in this work, was two-step iterative shrinkage/thresholding (TwIST) (13) (as detailed in the supplementary methods). This reconstruction process shares its concept with ghost imaging, in which the original image is computationally recovered after sequentially projecting many random optical patterns onto the object and recording the resultant signals with a single-pixel detector (14–19). Although ghost imaging has attracted considerable attention in the scientific community, the sequential projection of light patterns makes it slow and has hampered its practical use. Even when compressive sensing was used to reduce the time required for the light projections, the method was still slower than conventional arrayed-pixel cameras (18). By contrast, GC does not require any movement of equipment, and the speed of image acquisition increases with the object's motion, up to the high bandwidth of single-pixel detectors. The use of motion thus transforms slow ghost imaging

RESEARCH

Ota et al., Science 360, 1246–1251 (2018) 15 June 2018 1 of 6

1Thinkcyte Inc., 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan. 2University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8654, Japan. 3Japan Science and Technology Agency, PRESTO, 4-1-8 Honcho, Kawaguchi-shi, Saitama 332-0012, Japan. 4Department of Information and Physical Sciences, Graduate School of Information Science and Technology, Osaka University, 1-5 Yamadaoka, Suita, Osaka 565-0871, Japan. 5RIKEN AIP, Nihonbashi 1-chome Mitsui Building, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan. 6Japan Aerospace Exploration Agency, 6-13-1 Osawa, Mitaka-shi, Tokyo 181-0015, Japan. 7ImPACT Program, Cabinet Office, Government of Japan, Chiyoda-ku, Tokyo 100-8914, Japan.
*These authors contributed equally to this work.
†Corresponding author. Email: [email protected]

Fig. 1. Schematic of the compressive sensing process in GC. The relative motion of an object across a static, pseudorandom optical structure, H(x, y), is used for compressively mapping the object's spatial information into a train of temporal signals. F1 and F2 are the representative fluorescent features in the object. According to the object's motion, the spatial modulation of H is encoded into the temporal modulation of emission intensity from each fluorophore in the object, and their sum g(t) is recorded with a single-pixel detector, as shown in the bottom graph. In the imaging mode of GC, the object's 2D image can be computationally reconstructed by a combinatorial use of the multiplexed temporal waveform, g(t), and the intensity distribution of the optical structure, H. In the image-free mode of GC, directly applying machine-learning methods to the compressive temporal waveform yields high-throughput, highly accurate, image-free morphology-based cell classification. Schematics are not to scale.
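The measurement model of Eq. 1 can be made concrete with a small numerical sketch. The code below is illustrative, not the authors' implementation: the mask `H`, the object `I`, their sizes, and the feature placements are all hypothetical stand-ins, and each time sample of `g` is the discrete overlap sum as the object slides across the mask.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16x64 binary random mask H and a 16x12 object I
# (rows index y, columns index x; values are relative intensities).
H = rng.integers(0, 2, size=(16, 64)).astype(float)
I = np.zeros((16, 12))
I[4:8, 3:6] = 1.0     # one fluorescent feature (like F1 in Fig. 1)
I[10:13, 7:10] = 0.5  # a second, dimmer feature (like F2)

def forward(H, I):
    """Discrete version of Eq. 1: as the object slides across H in x,
    each time sample is the overlap integral sum_{x,y} H[y, x] I[y, x - vt],
    summed over y because a single-pixel detector integrates everything."""
    ny, nx_h = H.shape
    _, nx_i = I.shape
    n_t = nx_h + nx_i - 1          # samples for one full transit
    g = np.zeros(n_t)
    for t in range(n_t):
        for dx in range(nx_i):
            x = t - nx_i + 1 + dx  # mask column currently over object column dx
            if 0 <= x < nx_h:
                g[t] += H[:, x] @ I[:, dx]
    return g

g = forward(H, I)                  # the single-pixel waveform g(t)
```

Equivalently, `g` is the sum over rows of the 1-D cross-correlation of each row of `H` with the corresponding row of `I`, which is why randomized convolutions from the compressive-sensing literature apply directly.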


Corrected 15 June 2018. See full text. Downloaded from http://science.sciencemag.org/ on May 12, 2020.


into a practical, ultrafast, and continuous imaging procedure: GC is 10,000 times faster than existing fluorescence ghost imaging (19–21).

As an experimental proof of concept of GC in the imaging mode, we imaged fluorescent beads mounted on a glass coverslip and moved them across a random pattern using an electronic translational stage (Fig. 2, A and B). The beads were kept in focus as they moved in the direction parallel to the row direction of H. Incorporating the random optical structure in the optical path before or after the sample is mathematically equivalent. This means that GC-based imaging can be experimentally realized by either random structured illumination (SI, shown in Fig. 2A) or random structured detection (SD, shown in Fig. 2B). These configurations correspond to computational ghost imaging (15) and single-pixel compressive imaging (22), respectively. The encoding operator H in Eq. 1 in the SI mode can be experimentally measured as the excitation intensity distribution in the sample plane; the operator H in the SD mode can be measured as the pointwise product of the excitation intensity distribution in the sample plane and the transmissibility distribution of the photomask in the conjugated plane between the sample and detector. We experimentally calibrated the exact operator, H, by placing a thin sheet of a fluorescent polymer in the sample plane and measuring its intensity distribution with a spatially resolving multipixel detector (fig. S1). A blue light-emitting diode (LED) was used as an excitation light source. During the object's motion, the PMT collected the photons that were emitted from the object as temporally modulated fluorescence intensity (Fig. 2, C and E, for the SI and SD modes, respectively). Figure 2, D and F, shows the computationally recovered fluorescence images of multiple beads for each waveform. For comparison, Fig. 2G shows the image acquired with an arrayed detector-based scientific complementary metal-oxide semiconductor (CMOS) camera. The morphological features of the beads are clear, validating GC imaging in both the SI and SD modes.

The simple design of the GC optics means that adding multiple light sources, dielectric mirrors, and optical filters enables multicolored fluorescence imaging with a single photomask (Fig. 3A). To validate GC for cell imaging, we stained MCF-7 cells (a human breast adenocarcinoma cell line) in three colors: The membranes, nuclei, and cytoplasm were stained with red (EpCAM PE-CF594), blue [4′,6-diamidino-2-phenylindole (DAPI)], and green [fixable green (FG)] dyes, respectively. Stained cells were mounted on a glass coverslip. We used a blue continuous-wave laser and ultraviolet LED light sources for exciting the fluorophores. We adopted the SD mode and experimentally estimated the operator, H, for each excitation light source (fig. S2). In the experiment, we moved the stage on which the coverslip was mounted and measured the temporal signals from each color channel using three PMTs, respectively (Fig. 3, A and B). Figure 3C, (i) to (iii), shows the computationally reconstructed fluorescence images for each color, clearly revealing the



Fig. 2. Demonstration of motion-based compressive fluorescence imaging. (A and B) Optical setups for the SI and SD modes, respectively. We moved aggregates of fluorescent beads, represented as a red sphere, on a glass coverslip (not shown) across the optical structure in the direction of the red arrow using an electronic translational stage. (C) In the SI mode, the beads go through the structured illumination according to the motion, resulting in the generation of the temporal waveform of the fluorescence intensity. (D) From this acquired temporal signal, a 2D fluorescence image is computationally reconstructed. (E) In the SD mode, the beads are illuminated by uniform light. A conjugate fluorescence image of the beads then passes through structured pinhole arrays, resulting in the generation of the temporal waveform. (F) From this temporal signal, a 2D fluorescence image is computationally reconstructed. (G) A fluorescence image of the same aggregated beads, acquired with an arrayed-pixel camera. Scale bars, 20 μm.
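The reconstructions shown in Fig. 2 use TwIST; the core idea can be illustrated with plain ISTA, the simpler one-step relative of that algorithm, on a toy problem. This sketch makes simplifying assumptions not taken from the paper: a generic random ±1 sensing matrix stands in for the flattened, calibrated operator H, and the sparse signal and its support are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy compressed-sensing problem: recover a sparse x from y = A x, with
# fewer measurements (m) than unknowns (n).
n, m = 64, 32
A = rng.choice([-1.0, 1.0], size=(m, n))  # stand-in sensing matrix
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, 0.7, 1.3]     # arbitrary sparse "object"
y = A @ x_true

def soft(v, tau):
    """Soft-thresholding: the proximal step that promotes sparsity."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, y, lam=0.1, n_iter=2000):
    """Iterative shrinkage/thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1.
    TwIST, used in the paper, is a two-step accelerated variant of this loop."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

x_hat = ista(A, y)
```

Despite having only half as many measurements as unknowns, the l1 penalty selects the sparse solution, which is the same principle that lets GC recover an image from the compressed waveform g(t).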


Fig. 3. Multicolor and high-throughput fluorescence cell imaging by GC. (A) An optical setup for the multicolor motion-based compressive fluorescence imaging (SD mode). The setup utilizes a 473-nm-wavelength blue laser and a 375-nm-wavelength ultraviolet LED as excitation light sources coupled by a dichroic mirror (fig. S2) to create a relatively uniform illumination, shown as a purple rectangle through which a cell moves in the direction of the arrow. Cultured MCF-7 cells were used in the experiment, with membranes, cytoplasm, and nuclei fluorescently stained red, green, and blue, respectively. When the labeled cells moved with an electronic translational stage, their conjugated images passed through the optical encoder and generated temporal waveforms. The signals were then split into three-color channels of (i) red, (ii) green, and (iii) blue with dichroic mirrors, and finally recorded by different PMTs. (B) The representative traces recorded by the PMTs. (C) From the temporal signals for each of the three PMTs, fluorescence images of labeled cells were computationally recovered in (i) red, (ii) green, and (iii) blue, respectively. (iv) A pseudocolored multicolor image combined from (i), (ii), and (iii). (v) A multicolor fluorescence image acquired with an arrayed-pixel color camera. (D) Multicolor submillisecond fluorescence imaging of the cells under flow at the throughput rate above 10,000 cells/s. In the experiment, 488-nm-wavelength blue and 405-nm-wavelength violet lasers passed through diffractive optical elements to generate the random structured illumination to the cell stream (fig. S3, SI mode). (i) and (ii) show the green and blue fluorescence signals from the cytoplasm and the nucleus, respectively. From the temporal signals for each PMT, fluorescence images of the labeled cells were computationally recovered in (iii) green and (iv) blue, respectively. (v) The reconstructed multicolor fluorescence image. Scale bars, 20 μm.
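Reconstruction quality in comparisons like Fig. 3C, (iv) versus (v), is summarized by a peak signal-to-noise ratio. Below is a minimal sketch of one standard PSNR definition; the paper's exact formula is eq. S3 in its supplement and may differ in detail, and the test images here are synthetic.

```python
import numpy as np

def psnr(ref, img):
    """Peak signal-to-noise ratio in dB between a reference image and a
    reconstruction: 10*log10(peak^2 / MSE), one common convention."""
    ref = np.asarray(ref, dtype=float)
    img = np.asarray(img, dtype=float)
    mse = np.mean((ref - img) ** 2)
    peak = ref.max()
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy check with a synthetic "camera" image and a noisy "reconstruction".
rng = np.random.default_rng(2)
ref = rng.random((64, 64))
rec = ref + 0.01 * rng.standard_normal(ref.shape)
value = psnr(ref, rec)
```

With per-pixel noise of standard deviation 0.01 on a unit-peak image, this evaluates to roughly 40 dB; in a multicolor comparison such as the paper's, one PSNR per channel would be computed and averaged.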




fine features of cellular membranes, cytoplasm, and nuclei. For comparison, Fig. 3C, (iv), shows an overlaid color image, and Fig. 3C, (v), shows a fluorescence image acquired with a conventional color camera (Wraycam SR 130, WRAYMER, Inc., Japan). The average of the peak signal-to-noise ratios (calculated as in eq. S3) of the red, green, and blue channels is 26.2 dB between the GC's reconstructed image, Fig. 3C, (iv), and the camera image, Fig. 3C, (v). These results demonstrate a good performance of multicolor GC imaging in delineating the morphological features of cells.

We also show that GC can achieve fast multicolor continuous fluorescence imaging of flowing cells. We used a flow cell assembly (Hamamatsu Photonics, Japan) for focusing a stream of flowing fluorescent cells in the three-dimensional (3D) space, so that the cells are focused in the plane perpendicular to the flow that is aligned parallel to the length direction of the encoder H. Using diffractive optical elements that generate structured illuminations inside the flow cell (fig. S3), we performed continuous spread optical point scans of 100 pixels perpendicular to the flow, corresponding to the image size in the y direction in the computational reconstruction. Figure 3D, (i) and (ii), shows the temporal waveforms from each color channel of a single MCF-7 cell, with its cytoplasm labeled by FG and its nucleus labeled by DAPI. Fluorescence images were computationally reconstructed for each waveform in (iii) and (iv), respectively. Figure 3D, (v), shows the computationally reconstructed multicolor fluorescence image, with clearly resolved cellular morphological features. The cells were flowed at a throughput higher than 10,000 cells/s, a rate at which arrayed-pixel cameras such as charge-coupled devices and CMOSs create completely motion-blurred images. The total input excitation intensities of the structured illuminations after the objective lens were ~58 and ~14 mW for the 488- and 405-nm lasers, where those assigned to individual random spots were <43 and <10 μW on average, respectively. By designing and adopting the appropriate diffractive optical elements, we created the light pattern with minimal loss, suggesting high sensitivity of GC imaging, calculated as the minimal number of detectable fluorophores (as detailed in the supplementary methods), close to single-molecule levels.

Using the object's motion across the static light pattern H for optical encoding and sparse sampling, we achieve blur-free, high–frame rate



Fig. 4. High-throughput and highly accurate fluorescence image-free "imaging" cytometry by GC via direct machine learning of compressive signals. (A) The procedure for training a classifier model in GC. (i) Different but morphologically similar cell types (MCF-7 and MIA PaCa-2 cells) were fluorescently labeled: For both cell types, the cytoplasm was stained in green with FG, whereas the membranes of only the MCF-7 cells were stained in blue with BV421-EpCAM. Scale bars, 20 μm. (ii) By separately flowing the different cell types through the encoding optical structure used in Fig. 3D at the throughput rate of >10,000 cells/s, (iii) compressive waveforms of each cell type were collectively extracted from the temporally modulated signals of fluorescence intensity. (iv) A library of waveforms labeled with each cell type was used as a training dataset to build a cell classifier. A support vector machine (SVM) model was used in this work. (B) Procedure for testing the classifier model. (i) The different types of cells were experimentally mixed at a variety of concentration ratios before analysis. (ii) When flowing the cell mixture through the encoder at the same throughput rate, (iii) we applied the trained model directly to the waveform for classifying the cell type. (C) In (i), blue data points are the concentration ratios of MCF-7 cells in each sample estimated by applying the trained SVM-based classification directly on the waveforms of FG intensity (iv), compared with those obtained by measuring the total intensity of BV421 (ii). Red data points are the concentration ratios of MCF-7 cells estimated by applying the same procedure of SVM-based classification to the total intensity of FG (iii), which we obtained by integrating each GC waveform over time, compared with the results from measurement with BV421 (ii). Seventy samples were measured for a variety of concentration ratios, with each sample comprising 700 tests of randomly mixed cells. The image-free GC results shown with blue data points in (C) reveal a small RMSD of 0.046 from y = x and an AUC [ROC curve shown in (D)] of 0.971 over about 50,000 cells, even though the morphologies of these cells appear similar to the human eye. (D) Each point on the ROC curve corresponds to a threshold value applied to scores from the trained SVM model, wherein red and green colors in the histogram are labeled according to the intensity of BV421 (inset derived from eq. S4). By contrast, the red data points in (C) reveal inferior classification results, with a large RMSD of 0.289 and a poor ROC-AUC of 0.596. (E) When classifying the model cancer (MCF-7) cells against a complex mixture of PBMCs, the ultrafast image-free GC recorded high values of AUC ~0.998, confirming its robust and accurate performance in a case of practical use.
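The RMSD and ROC-AUC figures of merit quoted in Fig. 4 can be computed as follows. This is a generic sketch, not the paper's evaluation code: the AUC uses the rank-sum (Mann-Whitney) identity without tie handling, and the score distributions are made up.

```python
import numpy as np

def roc_auc(scores_pos, scores_neg):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score of a random positive > score of a random negative).
    Assumes continuous scores (no tie correction)."""
    s = np.concatenate([scores_pos, scores_neg])
    ranks = np.argsort(np.argsort(s)) + 1.0       # 1-based ranks
    r_pos = ranks[: len(scores_pos)].sum()
    n_p, n_n = len(scores_pos), len(scores_neg)
    return (r_pos - n_p * (n_p + 1) / 2.0) / (n_p * n_n)

def rmsd_from_diagonal(est, true):
    """Root mean square deviation of estimated vs. true ratios from y = x."""
    est, true = np.asarray(est, float), np.asarray(true, float)
    return float(np.sqrt(np.mean((est - true) ** 2)))

# Toy SVM-score distributions for two cell populations (illustrative only).
rng = np.random.default_rng(3)
pos = rng.normal(2.0, 1.0, 1000)   # e.g., scores of one cell type
neg = rng.normal(0.0, 1.0, 1000)   # e.g., scores of the other
auc = roc_auc(pos, neg)
```

Sweeping a threshold over the pooled scores traces the ROC curve point by point, which is exactly how each point in Fig. 4D is generated from the trained SVM's scores.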




imaging with high signal-to-noise ratio. The frame rate r and pixel scan rate p are defined as

r = v/(width of H + width of I) (2)

p = v/(single-spot size in H) (3)

where I is the final image and p is the inverse of the time taken for a fluorophore to pass over each excitation spot in H. First, compressive encoding reduces the number of sampling points, defined by the length of H and required for one frame acquisition, such that we effectively lower the required bandwidth for achieving high frame rates. This reduction is important, especially in ultrafast imaging with a small number of photons, because shot noise increases as the bandwidth increases. This feature allows us to effectively reduce the excitation power for the fluorescence signals to overcome noise. Second, at a sufficient signal-to-noise ratio, GC can take full advantage of the high bandwidth of single-pixel detectors while H is temporally static, unlike in the other techniques that temporally modulate the excitation intensity before the object passes through a pixel of excitation light. Consequently, GC yields blur-free images unless the pixel scan rate exceeds two times the bandwidth of the PMT. For example, for an H spot size of 500 nm and a high PMT bandwidth of 100 MHz, GC provides blur-free images unless the flow rate surpasses 100 m/s.

Beyond a powerful imager, direct analysis of

the GC's compressively generated signals enables high-throughput and accurate classification of the cell's morphology at considerably lower computational cost, thus leading to the realization of ultrafast fluorescence "imaging"–activated cell sorting (FiCS) and analysis. This can be achieved because compressive sensing in GC substantially reduces the size of the imaging data while retaining sufficient information for reconstructing the object image. Although human recognition is not capable of classifying the waveforms directly, machine-learning methods can analyze the waveforms without image recovery. Here we show that supervised machine learning directly applied to waveforms measured at the rate of ~10,000 cells/s classifies fluorescently labeled cells with high performance, surpassing that of existing flow cytometers and human image recognition (Fig. 4).

Our image-free GC consists of two steps: (i) training and (ii) testing a model of cell classification (Fig. 4, A and B). We first built the model based on the support vector machine (SVM) algorithm (23) by computationally mixing the waveforms of fluorescence signals from different cell types. This training data of waveforms was collected by experimentally passing each cell type separately through the optical encoder (Fig. 4A and table S5). We then tested this model by flowing experimentally mixed cells and classifying the cell types (Fig. 4B). Before the experiment, two different types of cells were cultured, fixed, and fluorescently labeled: MCF-7 cells and MIA PaCa-2 cells (a human pancreatic carcinoma cell line) [Fig. 4A, (i)]. For both cell types, the cytoplasm was labeled in green with FG for classification by image-free GC [Fig. 4A, (i)], whereas the membranes were labeled in blue with BV421-EpCAM (BV421 mouse anti-human CD326 clone EBA-1) only in MCF-7 cells. MIA PaCa-2 cells had only low autofluorescence in the blue channel, providing easily distinguishable contrast in this channel (fig. S5, A to C). We used this to validate the GC's classification that relied on similar cytoplasmic labeling in both cell types. Using the same GC imaging setup (fig. S3), we used both violet and blue continuous-wave lasers for exciting these fluorophores while a digitizer (M4i.4451, Spectrum, Germany) recorded the resultant signals from the PMTs. We developed a microfluidic system to spatially control the position of the stream of cells with respect to the random optical structure, corresponding to the cells' positions in the reconstructed images (fig. S4). Using this optofluidic platform, we collected waveforms of green fluorescence intensity from the cytoplasm for each cell type. From this training dataset, we then built SVM-based classifiers with no arbitrary feature extraction. To test this trained classifier, we introduced a series of solutions containing a combination of different cell types mixed at various concentration ratios.
Each classifier then identified ~700 waveforms of the mixed cells as a single dataset and estimated the concentration ratios for each. We used a combination of MCF-7 and MIA PaCa-2 cells so that the classification results could be quantitatively scored by measuring the total fluorescence intensity of BV421 at the membrane of MCF-7 cells [Fig. 4C, (ii), and fig. S5]. A plot of the concentration ratio of MCF-7 and MIA PaCa-2 cells measured by the blue fluorescence intensity versus that measured by applying the model to the green fluorescence waveforms gives a line on the diagonal, with a small root mean square deviation (RMSD) of 0.046 from y = x. Using the BV421 measurement to evaluate the GC-based classification of ~49,000 mixed cells

Ota et al., Science 360, 1246–1251 (2018) 15 June 2018 4 of 6

Fig. 5. Demonstration of machine learning–based FiCS. (A) A microfluidic device consists of three functional sites: A flow stream of cells is first focused by 3D hydrodynamic flow-focusing (top left), then experiences the random structured-light illumination (right), and finally arrives at a sorting area (bottom left). Upon sorting action, a PZT actuator driven by input voltages bends to transversely displace a fluid toward the junction for sorting the targeted cells into a collection outlet. For real-time classification and selective isolation of the cells (right), analog signals measured at PMTs are digitized and then analyzed at an FPGA in which we implemented the trained SVM-based classifier. When a classification result is positive, the FPGA sends out a time-delayed pulse that consequently actuates the PZT device. Experiments were performed at a throughput rate of ~3000 cells/s. The cytoplasm of all MIA PaCa-2, MCF-7, and PBMC cells was labeled in green with FG. The membranes of MCF-7 cells in (B) and the cytoplasm of MIA PaCa-2 cells in (C) were labeled in blue with BV421-conjugated EpCAM antibodies and with anti–pan cytokeratin primary and AF405-conjugated secondary antibodies, respectively. (B) Accurate isolation of MIA PaCa-2 cells against morphologically similar MCF-7 cells. GC directly classified the green fluorescence waveforms without image reconstruction. (i) is a histogram of the maximum blue fluorescence intensity measured for the original cell mixture, showing a purity of 0.626 for the MIA PaCa-2 cells, whereas (ii) is a histogram for the same mixture after we applied FiCS, showing a purity of 0.951. A dashed line corresponding to a threshold value of 0.05 was used to distinguish the populations of the two cell types. (C) Accurate isolation of model cancer (MIA PaCa-2) cells against a complex mixture of PBMCs. (i) is a histogram of the maximum blue fluorescence intensity measured for the original cell mixture, showing a purity of 0.117 for the MIA PaCa-2 cells, whereas (ii) is a histogram for the same mixture after we applied FiCS, showing a purity of 0.951. A dashed line corresponding to a threshold value of 40 was used to distinguish the populations of the two cell types.


RESEARCH | REPORT

Corrected 15 June 2018. See full text.


gave an AUC [area under a receiver operating characteristic (ROC) curve] of 0.971 (Fig. 4D), confirming that cell classification by GC is accurate. Each point on the ROC curve corresponds to a threshold value applied to the score obtained from the trained SVM model (eq. S4), where red and green colors in the histogram are labeled according to the intensity of BV421 (Fig. 4D, inset, and fig. S5). To confirm that the high performance of GC is due to the spatial information encoded in the waveforms, we applied the same procedure of SVM-based classification to the total green fluorescence intensity obtained by integrating each GC waveform over time [Fig. 4C, (iii), and fig. S5D]. The results, shown as red data points in Fig. 4C, gave a poor ROC-AUC of 0.596 and a large RMSD of 0.289 from y = x, showing that the total fluorescence intensity contributes little to the high performance of GC. In addition, by computing a simple linear fit (fig. S6), we confirmed that the SVM-based classification retains its accuracy over a wide range of concentration ratios. Therefore, image-free GC is an accurate cell classifier even when the targeted cells are similar in size, total fluorescence intensity, and apparent morphological features and are present in mixtures at different concentrations. Indeed, in the absence of molecule-specific labeling, classifying such similar cell types has been a considerable challenge for existing cytometers and even for human recognition.

Besides classifying two cell types that share similar morphology, GC can accurately classify a specific cell type from a complex cell mixture at high throughput. Such technology is important, for example, in detecting rare circulating tumor cells in the peripheral blood of patients (24–27). Here we applied the image-free GC workflow to classifying model cancer cells (MCF-7) from peripheral blood mononuclear cells (PBMCs; Astarte Biologics, Inc.), a heterogeneous population of blood cells including lymphocytes and monocytes. Again, the cytoplasm of all the cells was labeled in green with FG for classification by image-free GC, whereas the membranes of only MCF-7 cells were labeled in blue with BV421-EpCAM to validate the GC classification result. We first trained the SVM classifier model by experimentally collecting the green fluorescence waveforms of labeled MCF-7 and PBMC cells and computationally mixing them. We then tested this model by flowing experimentally mixed cells and classifying their cell types one by one. All signals were measured at a throughput of greater than 10,000 cells/s by using the same experimental setup as before (fig. S3). For training the model, 1000 MCF-7 cells and 1000 PBMCs were used; for testing it, 1000 cells from a random mixture of MCF-7 cells and PBMCs were used. After 10 rounds of cross-validation, the SVM-based classifier of the GC waveforms recorded an AUC of 0.998; Fig. 4E shows one of the ROC curves, demonstrating ultrafast and accurate detection of a specific cell type from a complex cell mixture.

Reducing the data size by compressive sensing and avoiding image reconstruction in GC shortens the calculation time required for classifying each waveform. By combining this efficient signal processing with a microfluidic system, we realized ultrafast and accurate cell sorting on the basis of real-time analysis of imaging data (Fig. 5A). Here we demonstrate the ability of fluorescence imaging–activated cell sorting (FiCS) to isolate a specific cell population from a population of morphologically similar cells, as well as from a complex cell mixture, at high throughput and high accuracy. In a microfluidic device made of polydimethylsiloxane (Fig. 5A, left), we designed three functional sites (fig. S7A). First, a flow stream of cells was focused into a tight stream by a 3D hydrodynamic flow-focusing structure (28, 29) (fig. S7B). The cells then experienced the random structured-light illumination of GC and finally arrived at a junction where sorting occurs. A piezoelectric (PZT) actuator was connected to this junction through a channel perpendicular to the main flow stream (fig. S7C). For the sorting action, this actuator, driven by an input voltage, bends and transversely displaces fluid toward the junction to sort the targeted cells into a collection outlet. As shown in Fig. 5A and fig. S7D, for real-time classification and selective isolation of the cells, their fluorescence signals, recorded as analog voltages by PMTs, were digitized by an analog-to-digital converter and then analyzed by a field-programmable gate array (FPGA) in which we had implemented the SVM-based classifier in advance. When the FPGA classifies a cell of interest as positive, it sends out a time-delayed pulse that drives the PZT actuator in the chip. The computation time in the FPGA for classifying each compressive waveform was short enough (<10 μs) to enable reproducible sorting. Throughout the experiment, the width of GC's waveform was maintained at about 300 μs, corresponding to a throughput of ~3000 cells/s.
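Because a trained linear SVM reduces to one dot product and a threshold per waveform, the per-cell decision latency can be tiny, which is what makes an FPGA implementation feasible. A minimal sketch of that decision-and-trigger logic follows; the weights, bias, and delay are hypothetical placeholders, and the real design runs in fixed-point hardware rather than Python.

```python
# Sketch of the real-time sort decision: linear SVM score = w·x + b,
# compared against a threshold; a positive hit schedules a time-delayed
# pulse so the cell reaches the sorting junction as the PZT actuates.
import numpy as np

def classify_and_trigger(waveform, w, b, threshold=0.0, delay_us=100.0):
    """Return (is_positive, pulse_delay_us). The delay models the transit
    time from the optical encoder to the sorting junction (illustrative)."""
    score = float(np.dot(w, waveform) + b)  # linear SVM decision function
    if score > threshold:
        return True, delay_us  # fire the PZT after the fixed delay
    return False, None

w = np.array([0.2, -0.1, 0.4])  # toy trained weights, not real values
hit, delay = classify_and_trigger(np.array([1.0, 0.5, 2.0]), w, b=-0.3)
```

The key design point is that the classifier's cost is fixed and known in advance, so the sort pulse can be timed deterministically relative to the cell's transit.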
After measuring the green fluorescence waveforms of positive and negative cells, with their labels assigned by the maximum blue fluorescence intensity, we built the classifier model in a computer offline and implemented it in the FPGA. In the experiment, the cytoplasm of all the MIA PaCa-2 cells, MCF-7 cells, and PBMCs was labeled in green with FG; in addition, the membranes of MCF-7 cells in Fig. 5B and the cytoplasm of MIA PaCa-2 cells in Fig. 5C were labeled in blue with BV421-conjugated EpCAM antibodies and with anti–pan cytokeratin primary and AF405-conjugated secondary antibodies, respectively.

We first show that integrated FiCS enables accurate isolation of MIA PaCa-2 cells from MCF-7 cells, which are similar in size, total fluorescence intensity, and apparent morphology. Two hundred waveforms of MIA PaCa-2 cells and 200 of MCF-7 cells were used for training the SVM model. When we mixed the two cell types and then measured their maximum blue fluorescence intensity with a homebuilt flow cytometer (analyzer), two distinct peaks corresponding to the two cell types appeared in the histogram [Fig. 5B, (i)]. After we applied the machine learning–driven FiCS to the same cell mixture by classifying the green fluorescence waveforms, we measured the maximum blue fluorescence intensity of the sorted mixture in the same manner. As a result, the peak at stronger intensity, corresponding to MCF-7 cells, disappeared, and the purity of MIA PaCa-2 cells increased from 0.625 to 0.951 [compare Fig. 5B, (i) and (ii)]. We thus confirmed that, with the use of cytoplasmic staining (FG) alone, which does not specifically label targeted molecules, FiCS can recognize and physically isolate apparently similar cell types on the basis of their morphologies with high accuracy and throughput.

Finally, we show that FiCS can accurately enrich MIA PaCa-2 cells against the complex mixture of PBMCs. Two hundred waveforms of MIA PaCa-2 cells and 200 of PBMCs were used for training the SVM model. When we mixed the two cell types and then measured their maximum blue fluorescence intensity with a homebuilt flow cytometer (analyzer), the peak at stronger intensity, corresponding to the population of MIA PaCa-2 cells, was relatively small [Fig. 5C, (i)]. After we applied FiCS to the same cell mixture by classifying the green fluorescence waveforms, we measured the maximum blue fluorescence intensity of the sorted mixture in the same manner. As a result, the purity of MIA PaCa-2 cells increased from 0.117 to 0.951 [compare Fig. 5C, (i) and (ii)]. We thus confirmed that FiCS can substantially enrich the model cancer cells against the background of a complex cell mixture, without any specific biomarker, at high accuracy and throughput.

Recent research has extensively used imaging flow analyzers for the detection and/or characterization of critical cells in various fields, including oncology, immunology, and drug screening (30, 31). GC's ability to substantially increase analysis throughput and to selectively isolate cell populations according to high-content information in real time will enable integration of morphology-based analysis with comprehensive downstream "-omics" analyses at the single-cell level. Beyond conventional image generation and processing, which rely on the limited knowledge and capability of humans, we anticipate that the idea of applying machine-learning methods directly to compressive measurements will have broad applicability for real-time processing of high-volume, high-dimensional data.
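The purity figures quoted in the sorting demonstrations can be reproduced conceptually by gating each cell on its maximum blue fluorescence intensity, as sketched below. The intensities and threshold are illustrative numbers, not the experimental data.

```python
# Sketch of the purity metric: the fraction of cells on the target side of
# an intensity threshold, computed before and after sorting. Whether the
# target population lies below or above the threshold depends on which
# cell type carries the blue label (MCF-7 in Fig. 5B, MIA PaCa-2 in Fig. 5C).
import numpy as np

def purity(max_blue_intensity, threshold, target_below=True):
    """Fraction of cells classified as the target by the blue-intensity gate."""
    x = np.asarray(max_blue_intensity, dtype=float)
    hits = x < threshold if target_below else x >= threshold
    return float(hits.mean())

# Illustrative intensities for a mixture before and after sorting.
before = purity([0.01, 0.20, 0.03, 0.40, 0.02], threshold=0.05)  # 0.6
after = purity([0.01, 0.03, 0.02, 0.04, 0.20], threshold=0.05)   # 0.8
```

In the paper's validation, the blue channel plays exactly this role: an independent molecular label used only to score the morphology-based sort, never seen by the classifier.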

REFERENCES AND NOTES

1. O. Thaunat et al., Science 335, 475–479 (2012).
2. Y. Urano et al., Nat. Med. 15, 104–109 (2009).
3. E. A. Susaki et al., Cell 157, 726–739 (2014).
4. L. Samsel, J. P. McCoy Jr., J. Immunol. Methods 423, 52–59 (2015).
5. J. A. Knoblich, Nat. Rev. Mol. Cell Biol. 11, 849–860 (2010).
6. K. Goda, K. K. Tsia, B. Jalali, Nature 458, 1145–1149 (2009).
7. L. Gao, J. Liang, C. Li, L. V. Wang, Nature 516, 74–77 (2014).
8. E. D. Diebold, B. W. Buckley, D. R. Gossett, B. Jalali, Nat. Photonics 7, 806–810 (2013).
9. A. Orth, D. Schaak, E. Schonbrun, Sci. Rep. 7, 43148 (2017).
10. B. Guo et al., PLOS ONE 11, e0166214 (2016).
11. Y. Han, Y. Gu, A. C. Zhang, Y.-H. Lo, Lab Chip 16, 4639–4647 (2016).
12. J. Romberg, SIAM J. Imaging Sci. 2, 1098–1128 (2009).
13. J. M. Bioucas-Dias, M. A. T. Figueiredo, IEEE Trans. Image Process. 16, 2992–3004 (2007).
14. Y. Bromberg, O. Katz, Y. Silberberg, Phys. Rev. A 79, 053840 (2009).


15. J. H. Shapiro, Phys. Rev. A 78, 061802 (2008).
16. A. Gatti, E. Brambilla, M. Bache, L. A. Lugiato, Phys. Rev. Lett. 93, 093602 (2004).
17. B. Sun et al., Science 340, 844–847 (2013).
18. O. Katz, Y. Bromberg, Y. Silberberg, Appl. Phys. Lett. 95, 131110 (2009).
19. N. Tian, Q. Guo, A. Wang, D. Xu, L. Fu, Opt. Lett. 36, 3302–3304 (2011).
20. M. Tanha, S. Ahmadi-Kandjani, R. Kheradmand, H. Ghanbari, Eur. Phys. J. D 67, 44 (2013).
21. V. Studer et al., Proc. Natl. Acad. Sci. U.S.A. 109, E1679–E1687 (2012).
22. M. F. Duarte et al., IEEE Signal Process. Mag. 25, 83–91 (2008).
23. V. N. Vapnik, The Nature of Statistical Learning Theory (Springer, 1995).
24. M. Takao, K. Takeda, Cytometry A 79A, 107–117 (2011).
25. W. Sun et al., PLOS ONE 8, e75865 (2013).
26. B. M. Dent et al., Int. J. Cancer 138, 206–216 (2016).
27. Z. Liu et al., Sci. Rep. 6, 39808 (2016).
28. Y. Q. Wang, J. Y. Wang, H. L. Chen, Z. C. Zhu, B. Wang, Microsyst. Technol. 18, 1991–2001 (2012).
29. S. H. Cho, C. H. Chen, F. S. Tsai, J. M. Godin, Y.-H. Lo, Lab Chip 10, 1567–1573 (2010).
30. A. M. Khalil, J. C. Cambier, M. J. Shlomchik, Science 336, 1178–1181 (2012).
31. M. Doan et al., Blood 130, 1437 (2017).
32. S. Ota et al., Dataset for "ghost cytometry." Zenodo (2018).

ACKNOWLEDGMENTS

We thank H. Suzuki, T. Amemiya, and K. Nakagawa for their kind support in the material production. Funding: This work was mainly supported by JST-PRESTO, Japan, grant numbers JPMJPR14F5 to S.O., JPMJPR17PB to R.H., and JPMJPR1302 to I.S., and partially supported by funds of a visionary research program from the Takeda Science Foundation and the Mochida Memorial Foundation for Medical and Pharmaceutical Research. The work is based on results obtained from a project commissioned by the New Energy and Industrial Technology Development Organization (NEDO). S.Y., K.F., and K.W. are members of the Department of Ubiquitous Health Informatics, which is engaged in a cooperative program between the University of Tokyo and NTT DOCOMO, Inc. Author contributions: S.O., R.H., Y.K., and M.U. contributed equally to this work. R.H. and S.O. conceived and designed the concepts, experiments, data analysis, and overall research. S.O., M.U., and K.H. developed the setups and performed the optical imaging experiments. R.H. developed algorithms for the image recovery and data analysis, and M.U., R.K., K.S., and S.O. modified and used them for the cell analysis with the strong support of I.S. Y.K. developed and performed microfluidic cell sorting with the support of R.K. and S.O. M.U. and S.O. developed and performed the image-free GC analysis experiments. S.O., S.Y., Y.K., K.F., I.S., and K.W. designed the experiments for detecting cancer cells in blood. S.O., R.H., I.S., and H.N. supervised the work. S.O., R.H., Y.K., and M.U. wrote the manuscript with input from the other authors. Competing interests: S.O., R.H., and I.S. are founders and shareholders of Thinkcyte, Inc., a company engaged in the development of the ultrafast imaging cell sorter. S.O., R.H., Y.K., I.S., K.H., S.Y., K.F., and K.W. are inventors on patent applications submitted by the University of Tokyo and Osaka University covering motion-based ghost imaging as well as image-free morphology analysis. Data and materials availability: Original measurement data and codes for analysis are available in the supplementary materials and are deposited in Zenodo (32), a repository open to the public.

SUPPLEMENTARY MATERIALS

www.sciencemag.org/content/360/6394/1246/suppl/DC1
Materials and Methods
Figs. S1 to S7
References (33–40)
Movie S1

25 February 2017; resubmitted 10 March 2018
Accepted 14 May 2018
10.1126/science.aan0096


Ghost cytometry
Sadao Ota, Ryoichi Horisaki, Yoko Kawamura, Masashi Ugawa, Issei Sato, Kazuki Hashimoto, Ryosuke Kamesawa, Kotaro Setoyama, Satoko Yamaguchi, Katsuhito Fujiu, Kayo Waki and Hiroyuki Noji
Science 360 (6394), 1246–1251. DOI: 10.1126/science.aan0096

Seeing ghosts
In fluorescence-activated cell sorting, characteristic target features are labeled with a specific fluorophore, and cells displaying different fluorophores are sorted. Ota et al. describe a technique called ghost cytometry that allows cell sorting based on the morphology of the cytoplasm, labeled with a single-color fluorophore. The motion of cells relative to a patterned optical structure provides spatial information that is compressed into temporal signals, which are sequentially measured by a single-pixel detector. Images can be reconstructed from this spatial and temporal information, but this is computationally costly. Instead, using machine learning, cells are classified directly from the compressed signals, without reconstructing an image. The method was able to separate morphologically similar cell types in an ultrahigh-speed fluorescence imaging–activated cell sorter.
Science, this issue p. 1246
