Source: mosaic.mpi-cbg.de/Downloads/SplitBregmanSeg.pdf

An ImageJ/Fiji plugin for segmenting and quantifying sub-cellular structures in fluorescence microscopy images

Aurélien Rizk2,1, Grégory Paul1,5, Pietro Incardona1, Milica Bugarski2, Maysam Mansouri2, Axel Niemann4, Urs Ziegler3, Philipp Berger2*, Ivo F. Sbalzarini1*

1 MOSAIC Group, Center of Systems Biology, Max Planck Institute of Molecular Cell Biology and Genetics, 01307 Dresden, Germany. (Previously: MOSAIC Group, Department of Computer Science, ETH Zurich, 8092 Zürich, Switzerland.)
2 Paul Scherrer Institute, Biomolecular Research, Molecular Cell Biology, Villigen PSI, Switzerland.
3 University of Zurich, Center for Microscopy and Image Analysis, 8057 Zürich, Switzerland.
4 ETH Zurich, Institute of Molecular Health Sciences, 8093 Zürich, Switzerland.
5 Now at: ETH Zurich, Computer Vision Laboratory, 8092 Zürich, Switzerland.
* Philipp Berger and Ivo F. Sbalzarini are joint corresponding authors. E-mails: [email protected]; [email protected]

Abstract

Detection and quantification of fluorescently labeled molecules in sub-cellular compartments is a key step in analyzing many cell-biological processes. Pixel-wise colocalization analyses, however, are not always suitable because they do not provide object-specific information and are vulnerable to noise and background fluorescence. Here we present a versatile method for detecting, delineating, and quantifying sub-cellular structures in fluorescence microscopy images. The method is implemented as a freely available, user-friendly plugin for ImageJ and Fiji. It works on both 2D and 3D images, accounts for the microscope optics, computes cell masks, provides sub-pixel accuracy, and can be used on parallel computer clusters. The present method allows both colocalization and shape analyses. We compare it with other colocalization techniques and apply it to studying sub-cellular localization of RAB GTPases and to mitochondria morphology analysis.

1 Introduction

An increasing amount of biological and medical research relies on single-cell imaging to obtain information about the phenotypic response of cells to a variety of chemical, mechanical, and genetic perturbations. While it is possible to distinguish obvious phenotypes by eye, computational analyses allow one to process large datasets, to obtain quantitative and less biased results, and to detect more subtle changes in phenotype by statistical analysis. Finally, the ability to observe and quantify multiple fluorescent markers in the same cell under various conditions and over time opens doors for spatiotemporal modeling of biological processes [1].

Quantifying the colocalization of objects between different channels is an important tool, since the fluorescent probes are usually related to cellular markers. Colocalization can be quantified using various methods that are either pixel-based or object-based [2]. Pixel-based methods compute an overlap measure between the pixel intensities of the different color channels. This includes Pearson's correlation coefficient [3], the overlap and Manders' overlap coefficients [4], intensity correlation [5], cross correlation [6], and techniques correcting for unspecific (random) colocalization [7]. Object-based methods first detect and delineate the objects represented in the image and then quantify their overlap [3, 8] or nearest-neighbor distances [9]. This also allows correcting for the cellular context and for unspecific colocalization [8]. Methods based on intensity correlation are not suitable when fluorophores are not in ratiometric numbers [10], while methods looking for co-occurrence are very sensitive to noise and background levels. A more robust method indicating the fraction of colocalized molecules based on cross correlation and autocorrelation has been proposed, but is not automatic [11].

Object-based analyses provide access to additional features of biological relevance, such as the spatial distribution of objects within the cell, and the shapes and sizes of objects. This allows inferring interactions between objects and testing statistical hypotheses about their distribution (e.g., random vs. non-random) [9]. Object-based methods, however, require that the objects in the image be first detected and delineated, a non-trivial task that may involve image segmentation. Object detection and image segmentation are still frequently done by hand or using ad-hoc heuristics such as thresholding or hand-crafted pipelines of filters.

Recent progress in computer vision, however, has provided well-founded theories that can give justification to the methodology [12]. Here we make use of a novel segmentation method that directly connects the image-segmentation task with biological reality through prior knowledge about the imaged objects, the image-formation process, and the noise present in the image. This allows adapting the same method to a wide spectrum of images by adjusting the prior knowledge. Also, our method provides theoretical performance and robustness guarantees, is independent of manual initialization, and directly corrects for microscope blur and noise, yielding optimally deconvolved segmentations [13].

The latter is achieved by accounting for the microscope's point-spread function (PSF), improving the capacity to segment objects with sizes close to the resolution limit. Our method provides sub-pixel accurate segmentations in both 2D and 3D. Sub-pixel segmentation was so far only available in two dimensions [14, 15] and has been shown useful, e.g., for studying the live morphology of endosomes [16]. We further extend the method by cell masks that allow restricting the analysis to a sub-population of cells in an image, e.g., to transfected cells.

We evaluate the present approach against other segmentation techniques and colocalization analysis methods. Finally, we demonstrate application results on the colocalization of endosomes with sub-cellular markers and the morphological analysis of mitochondria.

2 Results

Image segmentation

Image segmentation is a well-researched topic in computer vision, and many technological advances have successfully been transferred to bio-image analysis [12]. A wealth of user-friendly software tools is available for analyzing and quantifying fluorescence microscopy images [17]. They are either based on applying various filters to the pixels of the image (e.g., CellProfiler [18]), on using machine-learning techniques to classify pixels as belonging to an object or to the background (e.g., Ilastik [19]), or on including models of the imaged objects and the image-formation process (e.g., Region Competition [15]). Filter-based approaches require the user to construct an appropriate pipeline of filters and to select all filter parameters. Machine-learning approaches require the user to manually label or segment a part of the data for the algorithm to learn the task. Model-based approaches require the user to design or choose a model that appropriately describes the type of image to be segmented.

Model-based methods aim at finding the segmentation that best explains the image. In other words, they compute the segmentation that has the highest probability of resulting in the actually observed image when imaged with the specific microscope used. We choose this framework because it is generic to a wide range of image types and segmentation tasks, and because it provides direct access to probabilistic quantities that can be used in downstream analyses. We base our work on a recent extension of a family of image-segmentation models that allows including a variety of denoising and deconvolution tasks [13]. Moreover, the present approach is independent of initialization and robustly finds the optimal solution, because the underlying optimization problem is convex and hence has only globally optimal solutions; see Ref. [13] for theoretical and algorithmic details.


We consider an image model where the intensity distribution within each object is homogeneous and where the image consists of two regions: a brighter foreground and a darker background. The noise in the image can be either Gaussian or Poisson. Three quantities are hence to be estimated for each object in a given image: the segmentation, which is a mask assigning to each pixel a score of belonging to the object or to the background; the fluorescence intensity in the local background around the object; and the fluorescence intensity in the interior of the object. The method provides optimal segmentations using prior knowledge about the microscope optics (see Online Methods for details). This strategy hence combines image denoising, deconvolution, and segmentation [13]. It is not necessary to separately denoise or deconvolve the images beforehand. The results provided by the present joint deconvolution-segmentation procedure are of higher quality and are more robust against noise than those obtained by standard deconvolution followed by segmentation [13]. This is because the two tasks naturally regularize each other when considered jointly. This helps cope with very noisy data and with small objects close to the diffraction limit.

We account for spatial variations in the background intensity and for different objects having different foreground intensities by iteratively applying the procedure to local windows in the image around each object. Hence, each object may have a different estimated intensity, and the background around each object may locally be different (see Fig. 1).

We illustrate the present segmentation procedure using an example image of cherry-tagged RAB5 endosomes (Fig. 1a,b). First, object detection and separation is performed (steps 1–4), followed by estimating the local background and object intensities and computing the optimal segmentation (steps 5–7). The individual steps in detail are:

(1) Background subtraction (Fig. 1c) is performed first, as the segmentation model assumes locally constant intensities. Background variations are non-specific signals that are not accounted for by this model. We subtract the image background using the rolling-ball algorithm [20].

(2) Segmentation is performed on the whole image (Fig. 1d). This assigns to each pixel a number between 0 and 1, representing a score for the pixel to belong to an object or to the background. For this initial, rough segmentation, we set the object intensity to the maximum intensity present in the image and estimate the background using k-means clustering.

(3) Thresholding of the score mask reveals the initial objects (Fig. 1e). For this preliminary segmentation, the threshold value is set to the user-defined minimum intensity of objects to be included in the analysis. All objects with intensities lower than this threshold are discarded. Connected regions are identified as individual objects [15].

(4) Decomposition of the image into smaller parts is done in order to allow locally different background and foreground intensities for different objects (Fig. 1e). We decompose the image by (possibly overlapping) 2D or 3D boxes around each detected object from the previous step. We ensure that objects in different boxes do not influence each other by computing the Voronoi diagram of the binary mask obtained in step 3. The image region considered for each object is then the intersection of the box around it with the Voronoi cell containing it (Fig. 1e).

(5) Local background and object intensities are estimated in each image region (Fig. 1e). This determines what will be considered background and foreground in the subsequent segmentation step, and provides local analysis of the image. Intensities are estimated by solving the optimization problem described in the Online Methods.

(6) Individual object segmentation is obtained by running the algorithm separately for each image region (Fig. 1f). This step can be done with a segmentation resolution that is higher than the resolution of the image, thus providing sub-pixel accuracy [16]. Since this oversampling is only done in local patches around the objects, rather than on the whole image, it has only moderate impact on the computational cost (cf. Table 1).

(7) The final segmentation (Fig. 1g) is obtained by thresholding the masks obtained in step 6 using the optimal threshold that minimizes the segmentation error, automatically determined as previously described [13].
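For illustration, the detection steps 2–4 above can be sketched in a few lines of Python with NumPy/SciPy. This is a simplified sketch, not the plugin's Java implementation: the rough segmentation is reduced to a plain intensity threshold, and the Voronoi diagram is computed as a nearest-object assignment via a distance transform.

```python
# Illustrative sketch of steps 2-4: thresholding, connected-component labeling,
# and Voronoi assignment of pixels to their nearest detected object.
import numpy as np
from scipy import ndimage

def detect_objects(img, min_intensity):
    """Step 3 (simplified): threshold the background-subtracted image and
    identify connected regions as individual objects."""
    mask = img >= min_intensity
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects

def voronoi_regions(labels):
    """Step 4 (simplified): assign every pixel to its nearest detected object
    (a discrete Voronoi diagram of the binary mask), so that local windows
    around different objects do not influence each other."""
    inds = ndimage.distance_transform_edt(labels == 0, return_indices=True)[1]
    return labels[tuple(inds)]

# toy image with two bright blobs on a dark background
img = np.zeros((20, 20))
img[3:6, 3:6] = 1.0
img[14:17, 14:17] = 0.8
labels, n = detect_objects(img, min_intensity=0.5)
voro = voronoi_regions(labels)
print(n)  # 2 objects detected
```

Each Voronoi cell is then intersected with the box around its object to obtain the local region used in steps 5–7.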

Colocalization analysis

Object-based colocalization is computed after segmenting the objects, using the information about the shapes and intensities of all objects in both channels. This allows straightforward calculation of the degree of overlap between objects from the different channels. We consider three different colocalization measures (see Online Methods for details). The first one, Cnumber, simply counts the number of objects that overlap between the two channels. Objects are considered overlapping if at least 50% of their volumes coincide. The second measure, Csize, quantifies the fraction of the total object volume that overlaps with objects from the other channel. The third measure, Csignal, quantifies colocalization in an intensity-dependent manner. It computes the sum of all pixel intensities in either channel one or channel two in all regions where objects overlap with objects from the respective other channel. These signal-based definitions have been described and used before [8], but here we use the estimated object intensities from the segmentation rather than the raw pixel values. This improves robustness against noise and optical blur from the microscope PSF, because the present segmentation method computes an optimally denoised and deconvolved solution. In all cases, we use one-way ANOVA [21] followed by a Tukey-Kramer [22] test in order to check the statistical significance of the observed colocalization (see Supplementary Figs. 4 and 5).
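The three measures can be sketched as follows. This is a simplified illustration working directly on binary object masks and per-pixel intensities; the plugin operates on its segmented objects and their estimated (denoised) intensities, and the function name is ours.

```python
# Simplified sketch of the three object-based colocalization measures,
# computed here for channel 1 relative to channel 2.
import numpy as np
from scipy import ndimage

def colocalization(mask1, mask2, intensities1):
    overlap = mask1 & mask2
    labels1, n1 = ndimage.label(mask1)
    # C_number: count channel-1 objects with >= 50% of their volume overlapping
    c_number = sum(
        overlap[labels1 == i].sum() >= 0.5 * (labels1 == i).sum()
        for i in range(1, n1 + 1)
    )
    # C_size: fraction of total channel-1 object volume that overlaps
    c_size = overlap.sum() / mask1.sum()
    # C_signal: fraction of channel-1 signal lying inside overlap regions
    c_signal = intensities1[overlap].sum() / intensities1[mask1].sum()
    return c_number, c_size, c_signal

# toy example: two objects in channel 1, one of them covered by channel 2
mask1 = np.zeros((10, 10), dtype=bool)
mask1[0:2, 0:2] = True      # object A
mask1[5:7, 5:7] = True      # object B
mask2 = np.zeros((10, 10), dtype=bool)
mask2[0:2, 0:2] = True      # overlaps object A only
cn, cs, csig = colocalization(mask1, mask2, mask1.astype(float))
print(cn, float(cs), float(csig))  # 1 0.5 0.5
```

Swapping the two masks gives the measures for channel 2 relative to channel 1; the two directions are generally not equal.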

Cell masks

Often, analysis should be restricted to a single cell or a subset of the cells present in an image. Examples include working with transfected cells or with mixed cell populations. Fig. 2 shows HEK293 cells that were transfected with cherry-tagged RAB5 expression constructs and then stained for the early-endosome marker EEA1. Only one cell in the image was transfected and should be considered in the analysis. Our software provides the option to compute cell masks by thresholding the original image and filling the holes of the resulting binary mask. Analysis is then restricted to the intersection of the image and the cell mask in order to ensure that only positive cells are considered in both channels. Fig. 2c shows the mask for the RAB5-positive cell, and Fig. 2d shows the overlay of the mask with the segmentation results from both channels.
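The threshold-and-fill-holes operation can be sketched as follows (a minimal Python/SciPy illustration; the plugin implements this in Java, and the normalization to [0, 1] matches the threshold convention used in its interface):

```python
# Sketch of cell-mask computation: threshold the channel, then fill holes.
import numpy as np
from scipy import ndimage

def cell_mask(channel, threshold):
    """Binary cell mask: pixels above `threshold` (on intensities normalized
    to [0, 1]), with holes in the resulting mask filled."""
    norm = (channel - channel.min()) / (channel.max() - channel.min())
    return ndimage.binary_fill_holes(norm > threshold)

# toy "cell": a bright ring with a dark interior -> the interior gets filled
img = np.zeros((9, 9))
img[2:7, 2:7] = 1.0
img[3:6, 3:6] = 0.1     # dark hole inside the cell
mask = cell_mask(img, 0.5)
print(mask[4, 4])       # True: the dark interior was filled
```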

Validation using sub-cellular localization of RAB GTPases

We analyzed the sub-cellular localization of four RAB GTPases (RAB5, 4, 11, and 7) with known sub-cellular distribution and function in order to validate our software. RAB GTPases are good markers for intracellular trafficking routes, since they are sequentially recruited to vesicles [23]. Internalized receptors first appear in early endosomes, which are positive for RAB5. To become recycling endosomes, early endosomes first acquire RAB4 and lose RAB5, and then they replace RAB4 by RAB11 [24]. In order to mature to late endosomes and finally lysosomes, they replace RAB5 by RAB4 followed by RAB7 [25, 26]. We transfected fluorescently tagged RAB GTPases in HEK293 cells and determined their sub-cellular colocalization with well-known markers for the following compartments: EEA1 as a marker for early endosomes, LBPA for late endosomes, LAMP-2 for lysosomes, and PDI for the endoplasmic reticulum (Fig. 3).

We observed that most EEA1-positive vesicles were also RAB5-positive. As expected, there was less colocalization with RAB4 vesicles and even less with RAB11 and RAB7 (Fig. 3a), suggesting a gradual loss of EEA1 during endosome maturation. LBPA is a phospholipid that is important for forming multi-vesicular bodies in late endosomes [27]. However, multi-vesicular bodies were also found in RAB5-positive compartments [27]. This is in line with our observation that colocalization with LBPA increased from RAB5 to RAB4 to RAB7. Recycling endosomes (RAB11) showed little colocalization with LBPA (Fig. 3b). LAMP-2 is used as a marker for lysosomes, which are formed from late endosomes in an irreversible process. As expected, we found the highest colocalization of LAMP-2 with RAB7 (Fig. 3c). As a negative control, we used PDI, a marker for the endoplasmic reticulum. None of the tested RAB GTPases has previously been shown to localize to this compartment, and we confirmed this result (Fig. 3d). In summary, our analysis method confirmed previously described localization data of RAB GTPases. In addition, it allows one to quantitatively monitor the maturation of endosomes, either by loss of colocalization with EEA1 or gain of colocalization with LBPA.

Infection of insect cells with baculovirus

We infected Sf21 cells with different amounts of YFP-expressing baculovirus in order to obtain a dose-response curve. At low titers, cells are infected by single viruses. The algorithm should therefore find a linear correlation. We indeed observed a linear correlation between the amount of virus and the infection rate (Fig. 3d).

Mitochondria morphology analysis

Mitochondria are elongated organelles in eukaryotic cells. Their length is controlled by fusion and fission factors. GDAP1 is a fission factor that leads to fragmented mitochondria [28]. In cells over-expressing GDAP1, we observed reduced mitochondria lengths and increased numbers of mitochondria (Fig. 3e). The total volume of mitochondria remained constant within the statistical noise (Fig. 3d). This is in line with previous manual observations [28]. Example images from both conditions are shown in Fig. 3f.

Comparison with other methods

We benchmark our segmentation method on a test set of synthetic images previously used to compare eleven segmentation methods [29]. Noisy synthetic images of cells with a nucleus and circular vesicles are considered, with out-of-focus blur and uneven background illumination. The ground truth of all vesicle outlines is available and used to evaluate the F-score, a combination of precision and recall, of the segmentation methods. With an F-score of 0.62, the present method performs as well as the Multiscale Wavelet method, which was the previous winner on this benchmark. This is remarkable, since the Multiscale Wavelet method is a spot detector, particularly tuned to detect round objects like those present in the benchmark images, whereas our method makes no assumptions about the shapes of the objects.
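As a reminder of the score used here, the F-score is the harmonic mean of precision and recall; counting correctly segmented, spurious, and missed objects gives (illustrative numbers, not the benchmark's):

```python
# F-score from true positives, false positives, and false negatives.
def f_score(true_pos, false_pos, false_neg):
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2 * precision * recall / (precision + recall)

# e.g. 50 correctly segmented vesicles, 30 spurious, 40 missed
print(round(f_score(50, 30, 40), 2))  # 0.59
```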

We benchmark the present object-based approach to colocalization analysis by comparing it with results from a pixel-based Pearson correlation analysis. Fig. 4a provides a comparison of the results obtained for the colocalization of LAMP-2 with the different RAB GTPases. As LAMP-2 is a marker for lysosomes, colocalization is mainly expected with RAB7. While both approaches capture high colocalization with RAB7, the pixel-based method results in more spurious colocalization with RAB4, 5, and 11. Introducing a first segmentation component, the cell masks, improves the picture. However, the fully object-based approach yields the best result. The differences are statistically significant. To explain these differences we take a detailed look at the results obtained in the single image shown in Fig. 4c. Two phenomena affect the Pearson-correlation results: First, many objects in the LAMP-2 channel overlap with objects in the RAB7 channel, but not vice versa (compare the two Csize in Fig. 4b). In such asymmetric situations, pixel-based colocalization analysis is not appropriate, as it lumps both sides together. Second, pixel-based analysis is more sensitive to imaging noise than object-based analysis. This is shown in Fig. 4b, where the Pearson correlation score tends to zero with increasing noise level. This is consistent with previous reports [3]. As pixel-based correlation methods directly consider pixel values, they have no way to differentiate noise from signal.
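The noise sensitivity of pixel-based correlation is easy to reproduce on synthetic data (an illustration with Gaussian noise and made-up numbers, not the paper's data): the underlying object overlap is identical in both cases, yet the Pearson score drops once per-pixel noise is added.

```python
# Pearson correlation of two channels sharing one colocalized object,
# with and without additive per-pixel noise.
import numpy as np

def pearson(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

rng = np.random.default_rng(0)
signal = np.zeros((64, 64))
signal[20:40, 20:40] = 10.0          # one object, colocalized in both channels

clean1, clean2 = signal.copy(), signal.copy()
noisy1 = signal + rng.normal(0, 5.0, signal.shape)
noisy2 = signal + rng.normal(0, 5.0, signal.shape)

print(round(pearson(clean1, clean2), 2))   # 1.0 without noise
print(pearson(noisy1, noisy2) < 0.7)       # substantially lower with noise
```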

We benchmark the accuracy of sub-pixel segmentation on a synthetic image obtained by blurring a previously determined segmentation of RAB5 endosomes with the PSF of the microscope and adding (modulatory) Poisson noise of various magnitudes. We assess the ability of the sub-pixel segmentation method to recover the ground-truth objects by computing the F-score of the obtained segmentation with respect to ground truth. Sub-pixel segmentation improves the segmentation accuracy of the outlines of the objects for signal-to-noise ratios (SNR; defined for Poisson noise according to Eq. (19) in Ref. [30]) above four (Fig. 4c). Figure 4e shows the sub-pixel segmentation results in comparison with normal, pixel-level segmentation on a synthetic image with SNR = 12.

3 Discussion

We presented a method to segment sub-cellular objects in fluorescence microscopy images and perform colocalization and morphological analysis. The algorithm is implemented as a free open-source plugin for ImageJ and Fiji and is available on all major platforms, including Linux, MacOS, and Windows. It allows automatic analysis of large numbers of images and only has a small number of intuitive parameters. By considering the point-spread function of the microscope and a noise model adapted to the acquisition technique, the present method provides more justification to the analysis than ad-hoc filter-based methods. We have shown that the presented method reproduces previously known colocalization results with high accuracy, is robust against noise, and segments sub-cellular objects at least as well as the previously best method tested on the same benchmark.

Thanks to the use of ImageJ and open-source software, the method can be customized in a number of ways. Using ImageJ macros, the procedure can be integrated into larger workflows that also include other ImageJ functions and plugins, for instance to compute additional object features after segmentation or to use different methods of background subtraction. Possible extensions of the model-based segmentation approach include shape priors to segment objects of priorly known shape, and extensions to objects with inhomogeneous intensities. Results from the present method can also be used to perform more refined analyses of the spatial patterns of sub-cellular object distributions [16].


Online Methods

Image segmentation

We use the two-region piecewise constant image model with a convolution kernel as previously described [13]. The optimal segmentation is the one that maximizes the posterior probability of having generated the actually observed image (in the Bayesian sense). The algorithm determines the segmentation mask M, which assigns to each pixel a value between 0 and 1, related to the probability that the pixel belongs to an object. In addition, the algorithm estimates for each object the fluorescence intensity within, c_int, and the local background intensity around it, c_ext. We use the alternating split-Bregman algorithm described in Ref. [13] to solve the following optimization problem:

min_{M, c_ext, c_int} [ D(u0, K * u(M, c_ext, c_int)) + λ L(M) ]    (1)

This strategy combines image denoising, deconvolution, and segmentation [13]. The data-fitting term D(u0, K * u(M, c_ext, c_int)) quantifies the discrepancy between the observed image u0 and the image u one would expect to observe if (M, c_ext, c_int) were really the correct segmentation. The latter is computed by convolution with the microscope's PSF K in order to account for the diffraction blur introduced by the optics. D(·) denotes the discrepancy measure; in the present case it is a Bregman divergence that depends on the noise model of the image [13]. For optimal denoising, different divergences are used for Gaussian and Poisson noise models as described [13]. The regularization term adds a penalty on the total length of the contour L(M) to encode the biological prior that sub-cellular objects tend to have smooth boundaries. This helps cope with very noisy data. The user-defined regularization parameter λ tunes the trade-off between fitting the image data and producing smooth results, i.e., not overfitting noise in the image.
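To make Eq. (1) concrete, the energy can be evaluated numerically as sketched below. This is a minimal illustration under assumptions of our own: we take the Gaussian-noise case (where the Bregman divergence reduces to the squared error), model K as a Gaussian PSF, and approximate the contour length L(M) by the total variation of the mask.

```python
# Numerical sketch of the energy in Eq. (1) for the Gaussian-noise case.
import numpy as np
from scipy import ndimage

def energy(u0, M, c_ext, c_int, psf_sigma, lam):
    u = c_ext + (c_int - c_ext) * M                    # piecewise-constant model image
    blurred = ndimage.gaussian_filter(u, psf_sigma)    # K * u with a Gaussian PSF K
    data_term = ((u0 - blurred) ** 2).sum()            # D(u0, K * u), Gaussian case
    gy, gx = np.gradient(M)
    length = np.sqrt(gx**2 + gy**2).sum()              # total variation ~ L(M)
    return data_term + lam * length

# sanity check: the mask that generated the image has lower energy than an empty mask
M_true = np.zeros((16, 16))
M_true[4:12, 4:12] = 1.0
u0 = ndimage.gaussian_filter(0.1 + 0.9 * M_true, 1.0)  # noise-free synthetic image
e_true = energy(u0, M_true, 0.1, 1.0, 1.0, lam=0.05)
e_empty = energy(u0, np.zeros((16, 16)), 0.1, 1.0, 1.0, lam=0.05)
print(e_true < e_empty)  # True
```

The split-Bregman solver of Ref. [13] minimizes this energy over (M, c_ext, c_int) rather than merely evaluating it; the sketch only shows what is being minimized.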

Implementation

The present method is implemented in pure Java as an open-source plugin for ImageJ and Fiji. It is thus compatible with the wide range of image file formats supported by ImageJ and runs on most computer platforms. The plugin reads image files containing one or several color channels. We provide a .lif extractor macro for ImageJ in order to convert data from .lif files. In addition, we provide a script for the R statistical analysis software to produce plots and perform statistical analyses (see Supplementary Figs. 4–7 for example output). The plugin provides a graphical user interface and supports ImageJ macros. It has also been tested with the headless ImageJ mode running in parallel on computer clusters. In order to allow fast segmentation and parameter testing on desktop computers, the software is also parallelized within single images. Multiple threads are created according to the number of available processor cores, and computation is parallelized across image sub-domains for steps 1–4 of the method and across the extracted blocks for steps 5–7. Supplementary Table 1 provides execution times and memory usage for analyzing images of various sizes on a dual-core 2.3 GHz Intel Core i5 processor with 8 GB RAM.

Usage hints and workflow

Image acquisition

1. Care has to be taken to select spectrally separated fluorophores in order to avoid cross-talk between the color channels, which would lead to spurious colocalization. Before computing colocalization between different color channels, it is important to correct for chromatic aberration as well as possible, as this biases the results [9].

File and parameter input


2. The plugin works with any image format supported by ImageJ, but all stacks and channels of a single image have to be in the same file. As part of the present package, we provide the LifExtractor ImageJ macro to automatically extract dual-channel TIFF image files from .lif files. It uses Bio-Formats [31] and the channel rearrangement tools of ImageJ. The plugin works both on single files and on folders containing multiple files for batch processing.

3. In order to correct for diffraction blur, the software needs information about the PSF of the microscope. We model the PSF as a Gaussian [32]. The parameters of the PSF can either be estimated from the image and the objective lens parameters by clicking the "Estimate PSF" button (see Supplementary Fig. 1), or they can be input by the user in the lateral x–y and axial z directions separately. Airy units only have to be entered for confocal microscopes.

4. Reduce background fluorescence using the rolling ball algorithm by selecting "Remove Background" and entering the window edge length in units of pixels. This length should be large enough that a square with that edge length cannot fit inside the objects to be detected, but smaller than the length scale of background variations.

5. Set a regularization weight on the total object perimeter. Use higher values to avoid segmenting noise-induced small intensity peaks (see Supplementary Fig. 2). Typical values are between 0.05 and 0.25.

6. Set the threshold for the minimum object intensity. Intensity values are normalized between 0 for the smallest value occurring in the image and 1 for the largest value.

7. Select "sub-pixel segmentation" to compute segmentations with sub-pixel resolution. The resolution of the segmentation is increased by an over-sampling factor of 8 for 2D images and 4 for 3D images. Activating this setting requires more computation time and memory (see Supplementary Table 1).
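
Two of the pre-processing steps above can be illustrated with short sketches: the Gaussian PSF model of step 3 and the background removal of step 4. This is illustrative Python, not the plugin's Java code; the 0.21·λ/NA prefactor is a commonly used widefield approximation (the fitted constants in [32] depend on the modality), and the grayscale opening below only approximates the rolling-ball filter with a square window.

```python
def gaussian_psf_sigma_xy(wavelength_nm, na, pixel_size_nm):
    """Lateral standard deviation of a Gaussian PSF model, in pixel units.
    Assumes the widefield approximation sigma ~ 0.21 * lambda / NA; the
    exact prefactor depends on the imaging modality (see Zhang et al.)."""
    return 0.21 * wavelength_nm / na / pixel_size_nm

def _window_filter(image, w, reduce_fn):
    """Apply min or max over a w-by-w window around each pixel (border-clipped)."""
    h, ww = len(image), len(image[0])
    r = w // 2
    return [[reduce_fn(image[a][b]
                       for a in range(max(0, i - r), min(h, i + r + 1))
                       for b in range(max(0, j - r), min(ww, j + r + 1)))
             for j in range(ww)] for i in range(h)]

def subtract_background(image, window):
    """Background removal in the spirit of the rolling-ball algorithm:
    a grayscale opening (erosion followed by dilation) estimates the
    background. Structures smaller than the window cannot contain it
    and therefore survive the subtraction."""
    background = _window_filter(_window_filter(image, window, min), window, max)
    return [[p - b for p, b in zip(prow, brow)]
            for prow, brow in zip(image, background)]
```

For example, a single bright pixel on a flat background keeps its contrast after subtraction, while the flat background itself is reduced to zero.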

Cell masks

8. A cell mask allows restricting the analysis to a certain region of an image, e.g., a transfected cell. Select "cell mask channel 1" and/or "cell mask channel 2" to compute cell masks based on the respective channel. Cell masks are computed by thresholding the respective channel and filling holes in the obtained binary mask. Set a value between 0 and 1 for the threshold in "threshold channel 1" and/or "threshold channel 2". Click "preview cell masks" for a live preview of the resulting masks.
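
The threshold-and-fill-holes procedure can be sketched as follows (a minimal Python illustration, not the plugin's Java code): pixels above the threshold form the mask, and any background region that cannot be reached from the image border is a hole and gets filled.

```python
def cell_mask(image, threshold):
    """Threshold a normalized channel and fill enclosed holes in the binary
    mask. Hole filling: flood-fill the background from the image border;
    background pixels that were not reached are enclosed and are filled."""
    h, w = len(image), len(image[0])
    mask = [[image[i][j] >= threshold for j in range(w)] for i in range(h)]
    outside = [[False] * w for _ in range(h)]
    # seed the flood fill with all background pixels on the border
    stack = [(i, j) for i in range(h) for j in range(w)
             if (i in (0, h - 1) or j in (0, w - 1)) and not mask[i][j]]
    for i, j in stack:
        outside[i][j] = True
    while stack:
        i, j = stack.pop()
        for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= a < h and 0 <= b < w and not mask[a][b] and not outside[a][b]:
                outside[a][b] = True
                stack.append((a, b))
    return [[mask[i][j] or not outside[i][j] for j in range(w)]
            for i in range(h)]
```

Applied to a hollow ring of bright pixels, the enclosed dark interior is filled, while the exterior stays outside the mask.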

Software output

9. The software offers four options to visualize the segmentation result (see Supplementary Fig. 3): With "colorized objects", each object is visualized in a different random color. "Object labeling" assigns to all pixels of an object a value that is identical to the label (index) of that object in the result file. "Object intensities" displays objects in their estimated fluorescence intensities. "Object outlines" shows an overlay of the original image with red object outlines. For two-channel images, an additional visualization is provided that attributes a distinct color to each channel in order to reveal colocalization. "Intermediate steps" allows visualizing the result both after background subtraction (step 1, Fig. 1c) and after segmentation (step 2, Fig. 1d). Checking "save results" stores the resulting visualizations in .zip files, which can be opened by ImageJ without prior decompression.

10. Click "OK" to start the analysis. The number of objects found is shown in the log window of ImageJ. The mean features of all objects from each image are stored in a .csv file ending in "Images data.csv". Individual, per-object features for each channel are stored in additional .csv files.


For each object (indexed by its segmentation label), the software stores the size (area or volume), surface (or perimeter), length (maximum extension in the most extended direction), fluorescence intensity, and the position of the center of mass. For two-channel images, we additionally store the amount of overlap with objects from the other channel, and the sizes and intensities of all colocalizing objects from the other channel. These features can then be used for object classification or colocalization analysis.

11. Optional: Use the R statistical software to plot the results into a PDF file and to analyze their statistical significance. We provide an R script that generates colocalization and object-feature graphs, performs one-way ANOVA followed by a Tukey test to evaluate statistical significance, and allows filtering of objects based on intensity and size thresholds. The output from this script is shown in Supplementary Figs. 4–7 for the RAB study presented here.

Parameter values

Since the PSF parameters are determined by the microscope characteristics, only three parameters are left for the user to set: (1) the window size for background subtraction, (2) the regularization parameter, and (3) the minimum intensity threshold. The window size for background removal should be set larger than the diameter of the objects of interest, but smaller than the diameter of structures that should be removed (for instance the cell nucleus). There are thus only two parameters, regularization and minimum intensity, that have to be tuned to obtain the desired segmentation. We provide in the above workflow ranges of values that work well in most cases. Nevertheless, some manual tuning is necessary for a given set of images. The effects of varying the minimum intensity and the regularization parameter are illustrated in Supplementary Fig. 2.

Colocalization measures

We provide here the precise mathematical formulas for the colocalization measures used in the present analysis. The first measures are based on counting the number of objects in one channel that overlap with objects from the other channel:

\[
C_{\text{number}}(\mathrm{O1O2^{+}}/\mathrm{O1}) = \frac{|\{o \in O_1 : O_o > 0.5\}|}{|O_1|},
\qquad
C_{\text{number}}(\mathrm{O2O1^{+}}/\mathrm{O2}) = \frac{|\{o \in O_2 : O_o > 0.5\}|}{|O_2|}.
\]

Here, O_1 is the set of all objects detected in channel 1 and O_2 the set of objects detected in channel 2; |·| denotes the total number of objects in the given set. O_o is the fraction of object o that overlaps with any object from the other channel. The notation O1O2+ means "objects in channel one that are positive for objects in channel two".
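
As a concrete illustration, the number-based coefficient can be computed from the per-object overlap fractions O_o. This is a minimal Python sketch of the definition, not the plugin's implementation:

```python
def c_number(overlap_fractions):
    """Number-based colocalization coefficient: the fraction of objects in
    one channel whose overlap fraction O_o with the other channel exceeds
    0.5. `overlap_fractions` lists O_o for every object in the channel."""
    positive = sum(1 for o in overlap_fractions if o > 0.5)
    return positive / len(overlap_fractions)
```

For instance, four objects with overlap fractions 0.9, 0.6, 0.1, and 0.0 give a coefficient of 0.5: two of the four objects are counted as positive.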

The size-based colocalization coefficients are defined as:

\[
C_{\text{size}}(\mathrm{O1O2^{+}}/\mathrm{O1}) = \frac{\sum_{o \in O_1} S_o O_o}{\sum_{o \in O_1} S_o},
\qquad
C_{\text{size}}(\mathrm{O2O1^{+}}/\mathrm{O2}) = \frac{\sum_{o \in O_2} S_o O_o}{\sum_{o \in O_2} S_o},
\]

where S_o is the size of object o, given by the number of pixels belonging to it. Hence, \(\sum_{o \in O_1} S_o O_o\) is the total size of the colocalizing regions, and \(\sum_{o \in O_1} S_o\) is the total number of pixels covered by any object in that channel.

The intensity-based colocalization coefficients are:

\[
C_{\text{signal}}(\mathrm{O1O2^{+}}/\mathrm{O1}) = \frac{\sum_{o \in O_1} I_o S_o O_o}{\sum_{o \in O_1} I_o S_o},
\qquad
C_{\text{signal}}(\mathrm{O2O1^{+}}/\mathrm{O2}) = \frac{\sum_{o \in O_2} I_o S_o O_o}{\sum_{o \in O_2} I_o S_o},
\]

where I_o is the estimated intensity of object o. Hence, \(\sum_{o \in O_1} I_o S_o O_o\) is the total signal in the colocalizing region in channel 1, while \(\sum_{o \in O_1} I_o S_o\) is the total signal present in channel 1 altogether.

These intensity-based definitions are the same as those used by Ramirez et al. [8], except that we use the object intensities estimated by the segmentation procedure, rather than raw pixel intensities inside the object. This leads to better noise robustness.
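
The size- and intensity-based coefficients can likewise be sketched directly from their definitions (illustrative Python, not the plugin's code; inputs are per-object lists of sizes S_o, estimated intensities I_o, and overlap fractions O_o):

```python
def c_size(sizes, overlaps):
    """Size-based coefficient: colocalizing pixels over total object pixels."""
    return sum(s * o for s, o in zip(sizes, overlaps)) / sum(sizes)

def c_signal(sizes, intensities, overlaps):
    """Signal-based coefficient: the intensity-weighted version of c_size,
    using the object intensities I_o estimated by the segmentation."""
    total = sum(i * s for i, s in zip(intensities, sizes))
    return sum(i * s * o
               for i, s, o in zip(intensities, sizes, overlaps)) / total
```

Note how the two measures can disagree: a small, fully overlapping object contributes little to C_size if a large non-overlapping object dominates the total size, and even less to C_signal if that large object is also bright.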

Cell culture and microscopy

HEK293 cells were maintained in 10% FBS/DMEM/Penicillin/Streptomycin. For transfection, cells were plated at a density of approximately 40% on glass coverslips coated with poly-L-lysine (Sigma-Aldrich P4707) and then transfected with the indicated mCherry-tagged RAB GTPases [33] with FugeneHD (Promega) according to the manufacturer's recommendations. Cells were fixed by transferring the coverslips into 4% formaldehyde/PBS for 10 minutes and then washed three times with PBS. Cells were permeabilized with 1% NP40 in PBS for 10 minutes and then incubated with the following primary antibodies for two hours: rabbit anti-EEA1 (Cell Signaling, #3288, 1:500), mouse anti-LBPA (kindly provided by Dr. Peter Greimel, 1:50), mouse anti-LAMP-2 (H4B4, Developmental Studies Hybridoma Bank, Iowa, 1:20), and mouse anti-PDI (1D3, 1:60). Secondary antibodies were labeled with DyLight488 or Cy5 (Jackson ImmunoResearch, Suffolk, UK). All antibodies were diluted with PBS. Samples were mounted in Gelvatol and analyzed on a Leica SP5 confocal microscope. We combined mCherry (excitation with 543 nm, emission collected from 585 nm to 620 nm) with DyLight488 (488, 500–535) and Cy5 (633, 640–730). The sequential scanning mode was chosen and the number of overexposed pixels was kept at a minimum. At least twenty images with 10 to 25 z-sections and a resolution of 512×512 pixels were taken (voxel size 120×120×420 nm, 16-bit resolution). Supplementary Figs. 8 to 11 give examples of the images obtained, showing maximum projections. The levels and gamma settings of images used for illustration were adjusted in Adobe Photoshop. All operations were applied to the entire image.

PAE cells were co-transfected with a MitoDsRed (Clontech) and a GDAP1 expression plasmid [28]. Cells were fixed with 4% formaldehyde/PBS. Pictures were taken as described above. Sf21 cells were cultured in serum-free SF-4 Baculo Express ICM medium (Amimed). Cells were plated on glass coverslips and infected with different amounts of a baculovirus that expresses YFP (MultiBac) [34]. Cells were fixed after 36 hours and the nuclei were stained with Hoechst 33258 (1 µg/ml in PBS for 15 minutes). Samples were mounted with Gelvatol and analyzed on an Olympus IX-81 wide-field microscope. Image size was 2219×1674 µm (1376×1038 pixels, 8 bit). Each image contained approximately 3000 cells. Fig. 3d shows a part of a typical image.


Supplementary material

Computation time and memory requirement

The software runs on desktop computers and computer clusters. Supplementary Table 1 provides typical time and memory requirements for 2D and 3D images with and without sub-pixel refinement, measured on a dual-core 2.3 GHz Intel Core i5 with 8 GB RAM. The computational cost depends on image size only for steps 1 to 3 of our method. All further steps do not depend on image size, but rather on the sizes and the number of objects detected in the image. The data in Supplementary Table 1 were obtained for images containing approximately 100 objects each. For more objects, the times scale proportionally. The overhead incurred by sub-pixel oversampling mainly depends on the number and the sizes of the objects and ranges from 10% to 40% for the images used here. All pixel intensity values are normalized between 0 and 1 and stored as 64-bit Java double variables. This renders the computational performance of the software independent of the bit depth of the original images.
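
The bit-depth-independent normalization mentioned above amounts to a min-max rescaling of the pixel values (illustrative Python, not the plugin's Java code):

```python
def normalize_intensities(pixels):
    """Min-max normalize pixel values to [0, 1], so that later processing
    is independent of the original bit depth (8-, 12-, or 16-bit)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        # a constant image carries no contrast; map it to all zeros
        return [0.0 for _ in pixels]
    return [(p - lo) / (hi - lo) for p in pixels]
```

After this rescaling, thresholds such as the minimum object intensity can be expressed on the fixed [0, 1] scale regardless of the camera's dynamic range.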

Acknowledgments

This work was supported by SystemsX.ch, the Swiss initiative in systems biology, under grant IPP-2011-113, evaluated by the Swiss National Science Foundation (SNF). The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under grant agreement #290605 (PSI-FELLOW/COFUND).

References

1. Sbalzarini, I. F. Modeling and simulation of biological systems from image data. Bioessays 35, 482–490 (2013).

2. Adler, J. & Parmryd, I. Quantifying colocalization by correlation: the Pearson correlation coefficient is superior to the Mander's overlap coefficient. Cytometry A 77, 733–742 (2010).

3. Bolte, S. & Cordelieres, F. P. A guided tour into subcellular colocalization analysis in light microscopy. J. Microsc. 224, 213–232 (2006).

4. Manders, E. M. M., Verbeek, F. J. & Aten, J. A. Measurement of co-localization of objects in dual-colour confocal images. J. Microsc. 169, 375–382 (1993).

5. Li, Q. et al. A syntaxin 1, Galpha(o), and N-type calcium channel complex at a presynaptic nerve terminal: analysis by quantitative immunocolocalization. J. Neurosci. 24, 4070–4081 (2004).

6. van Steensel, B. et al. Partial colocalization of glucocorticoid and mineralocorticoid receptors in discrete compartments in nuclei of rat hippocampus neurons. J. Cell Sci. 109 (Pt 4), 787–792 (1996).

7. Costes, S. V. et al. Automatic and quantitative measurement of protein-protein colocalization in live cells. Biophys. J. 86, 3993–4003 (2004).

8. Ramirez, O., Garcia, A., Rojas, R., Couve, A. & Hartel, S. Confined displacement algorithm determines true and random colocalization in fluorescence microscopy. J. Microsc. 239, 173–183 (2010).

9. Helmuth, J. A., Paul, G. & Sbalzarini, I. F. Beyond co-localization: inferring spatial interactions between sub-cellular structures from microscopy images. BMC Bioinformatics 11, 372 (2010).


10. Bolte, S. & Cordelieres, F. P. Reply to letter to the editor. J. Microsc. 227, 84–85 (2007).

11. Zinchuk, V., Wu, Y., Grossenbacher-Zinchuk, O. & Stefani, E. Quantifying spatial correlations of fluorescent markers using enhanced background reduction with protein proximity index and correlation coefficient estimations. Nat. Protocols 6, 1554–1567 (2011).

12. Danuser, G. Computer vision in cell biology. Cell 147, 973–978 (2011).

13. Paul, G., Cardinale, J. & Sbalzarini, I. F. Coupling image restoration and segmentation: A generalized linear model/Bregman perspective. Int. J. Comput. Vis. 1–25 (2013).

14. Helmuth, J. A. & Sbalzarini, I. F. Deconvolving active contours for fluorescence microscopy images. In Proc. Intl. Symp. Visual Computing (ISVC), vol. 5875 of Lecture Notes in Computer Science, 544–553 (Springer, Las Vegas, USA, 2009).

15. Cardinale, J., Paul, G. & Sbalzarini, I. F. Discrete region competition for unknown numbers of connected regions. IEEE Trans. Image Process. (2012).

16. Helmuth, J. A., Burckhardt, C. J., Greber, U. F. & Sbalzarini, I. F. Shape reconstruction of subcellular structures from live cell fluorescence microscopy images. J. Struct. Biol. 167, 1–10 (2009).

17. Eliceiri, K. W. et al. Biological imaging software tools. Nat. Methods 9, 697–710 (2012).

18. Carpenter, A. E. et al. CellProfiler: image analysis software for identifying and quantifying cell phenotypes. Genome Biol. 7, R100 (2006).

19. Sommer, C., Straehle, C., Koethe, U. & Hamprecht, F. A. Ilastik: Interactive learning and segmentation toolkit. In 8th IEEE International Symposium on Biomedical Imaging (ISBI 2011) (2011).

20. Sternberg, S. R. Biomedical image processing. Computer 16, 22–34 (1983).

21. Chambers, J., Freeny, A. & Heiberger, R. Analysis of variance; designed experiments. In Statistical Models in S, 145–193 (Wadsworth and Brooks/Cole Advanced Books and Software, Pacific Grove, California, 1992).

22. Miller, J. R. Simultaneous Statistical Inference (Springer-Verlag, New York, 1981).

23. Zwang, Y. & Yarden, Y. Systems biology of growth factor-induced receptor endocytosis. Traffic 10, 349–363 (2009).

24. Sonnichsen, B., De Renzis, S., Nielsen, E., Rietdorf, J. & Zerial, M. Distinct membrane domains on endosomes in the recycling pathway visualized by multicolor imaging of Rab4, Rab5, and Rab11. J. Cell Biol. 149, 901–914 (2000).

25. Rink, J., Ghigo, E., Kalaidzidis, Y. & Zerial, M. Rab conversion as a mechanism of progression from early to late endosomes. Cell 122, 735–749 (2005).

26. Ballmer-Hofer, K., Andersson, A. E., Ratcliffe, L. E. & Berger, P. Neuropilin-1 promotes VEGFR-2 trafficking through Rab11 vesicles thereby specifying signal output. Blood 118, 816–826 (2011).

27. Matsuo, H. et al. Role of LBPA and Alix in multivesicular liposome formation and endosome organization. Science 303, 531–534 (2004).


28. Niemann, A., Ruegg, M., La Padula, V., Schenone, A. & Suter, U. Ganglioside-induced differentiation associated protein 1 is a regulator of the mitochondrial network: new implications for Charcot-Marie-Tooth disease. J. Cell Biol. 170, 1067–1078 (2005).

29. Ruusuvuori, P. et al. Evaluation of methods for detection of fluorescence labeled subcellular objects in microscope images. BMC Bioinformatics 11, 248 (2010).

30. Sbalzarini, I. F. & Koumoutsakos, P. Feature point tracking and trajectory analysis for video imaging in cell biology. J. Struct. Biol. 151, 182–195 (2005).

31. Linkert, M. et al. Metadata matters: access to image data in the real world. J. Cell Biol. 189, 777–782 (2010).

32. Zhang, B., Zerubia, J. & Olivo-Marin, J. C. Gaussian approximations of fluorescence microscope point-spread function models. Appl. Opt. 46, 1819–1829 (2007).

33. Kriz, A. et al. A plasmid-based multigene expression system for mammalian cells. Nat. Commun. 1, 120 (2010).

34. Fitzgerald, D. J. et al. Protein complex expression by using multigene baculoviral vectors. Nat. Methods 3, 1021–1032 (2006).


Figure 1. Working principle of the software illustrated on endosome segmentation. (a) Original image; scale bar is 10 µm. (b) Closeup of the region highlighted by the white square in (a); scale bar is 1 µm. (c) The same closeup after background subtraction. (d) Object model image found by the present method. Intuitively, this is a denoised and deconvolved version of (c) taking into account the microscope's point-spread function. (e) Objects (white) obtained by thresholding image (d). The image is decomposed into regions (blue boxes) and objects are separated by Voronoi decomposition (red lines). (f) Refined sub-pixel object model images obtained by applying the present method inside each region with individual estimates for the local object and background intensities. (g) Final segmentation with the estimated object intensities displayed in shades of green.

Figure 2. Segmentation and colocalization of EEA1 and RAB5. All images show maximum-intensity projections along the optical axis. (a) Raw image in the RAB5 channel. (b) Raw image in the EEA1 channel. (c) Mask of a transfected cell determined using the RAB5 channel. (d) Segmentation results of the vesicles from both channels overlaid with the cell mask. EEA1 vesicles are shown in green, RAB5 vesicles in red. The colocalization coefficient computed for this image is C_number(EEA1RAB5+/EEA1) = 0.36 (C_number(EEA1RAB5+/EEA1) = 0.02 when not using a cell mask); scale bar is 10 µm. (e) Closeup view of the raw image data in the area highlighted by the black square in (d); scale bar is 2 µm. (f) Object segmentation and overlaps in this area.


[Plot panels omitted. Panel titles: size colocalization of (RABx + PDI)/PDI, (EEA1 + RABx)/EEA1, (BMP + RABx)/BMP, and (RABx + H4B4)/H4B4 for RAB5, RAB4, RAB11, and RAB7 (P value 2.34e−02, *); mitochondria length (P value 1.33e−03, **), mitochondria object number (P value 2.19e−01, ns), and mitochondria total volume (P value 6.66e−01, ns) for control versus GDAP1; percentage of infected cells versus amount of virus (µl).]
Figure 3. Validation of the method. (a) Colocalization results for sub-cellular markers with different RAB GTPases. Means and standard errors of the mean over 20 images per condition of the fraction of sub-cellular marker colocalizing with the RAB channel, as determined by the present method. (b) Example image with closeup of EEA1 (green) and RAB5 (red). (c) Example image with closeup of EEA1 (green) and RAB7 (red). Example images for all conditions as well as statistical analyses are provided in the Supplementary Material. (d) Sf21 cells were infected with YFP-expressing baculovirus and stained with Hoechst 33258. A linear correlation between the amount of virus and the fraction of infected cells is observed. (e) Morphological analysis of mitochondria without and with over-expression of the fission factor GDAP1. Means and standard errors of the mean over 8 images per condition. (f) Example images from both conditions; scale bar is 10 µm.


Figure 4. Benchmarks of the present method. (a) Pixel-based Pearson correlation (range −1 to +1) and object-based colocalization (range 0 to 100%) results for LAMP-2 with RABx. For object-based analyses, images were segmented using the present software. LAMP-2 is known to colocalize mainly with RAB7. Means and standard errors of the mean are over 20 images per condition. (b) Pearson correlation and object-based colocalization coefficients for the image shown in (c), corrupted with increasing amounts of Gaussian noise. (c) Example image with both channels shown; scale bar is 10 µm. (d) F-score of segmentation accuracy for 4-fold sub-pixel and normal pixel-level segmentation. We show the mean values over five random images for each signal-to-noise ratio. (e) Example of a benchmark image with signal-to-noise ratio 12. The closeup views below show the ground truth outlines in green and those reconstructed by the present method in red. The middle panel uses normal pixel-level segmentation, whereas the right panel uses 4-fold sub-pixel oversampling; scale bar is 1 µm for top panel and 0.5 µm for closeup panels.


x × y × z dimensions & sub-pixel oversampling    computer time (s)    memory (MB)
512×512                                           17.2                 275
512×512, 4× pixel oversampling                    23.7                 407
1024×1024                                         55.6                 478
512×512×15                                       180.7                1116
512×512×15, 4× pixel oversampling                201.3                1536

Supplementary Table 1. Computer time and memory requirements on a dual-core 2.3 GHz Intel Core i5 with 8 GB RAM.

Supplementary Figure 1. Screenshot of the graphical user interface of the present software. The parameter input masks consist of the following areas: (A) data input; (B) segmentation parameters; (C) parameters for computing the cell masks; (D) visualization options; (E) noise and intensity models; (F) sub-mask for estimating the microscope's point-spread function from the image using a Gaussian model.



Supplementary Figure 2. Illustration of how the method parameters affect segmentation results for an endosome example. (A) Segmentation with the minimum intensity threshold set to 0.100 and the regularization weight set to 0.250. (B) Segmentation with the minimum intensity threshold decreased to 0.050, resulting in a larger number of dimmer objects detected. (C) Segmentation with a minimum intensity threshold of 0.050 and the regularization weight decreased to 0.075, resulting in a closer fit of the segmentation to the image data, but an increased sensitivity to noise (see, e.g., the small islands segmented, which may correspond to noise pixels).


Supplementary Figure 3. Illustration of the different ways the software provides to visualize the results. From left to right: original image, segmented outlines overlaid with the original image, detected objects represented in false colors, and detected objects represented with an intensity heat-map, i.e., the intensity of green corresponds to the estimated fluorescence intensity of the object.


[Bar-graph panels omitted. Panel titles: object number, size, and signal colocalization of (RABx + LAMP−2)/RABx and (RABx + LAMP−2)/LAMP−2 for R5a, R4a, R11b, and R7a, and Pearson correlation with and without cell masks and background removal; all panels report P value < 1e−4 (****). Plugin parameters: background removal true, window size 10, stddev PSF xy 0.707, stddev PSF z 0.791, regularization 0.1, min intensity ch1 0.15, min intensity ch2 0.15, subpixel false, cell mask ch1 true, mask threshold ch1 0.01, cell mask ch2 false, mask threshold ch2 0.0, intensity estimation Automatic, noise model Poisson. R script analysis parameters: MinIntCh1 0.000, MinIntCh2 0.000, MaxSize 100000, MinSize 0, MinObjects 5.]
Supplementary Figure 4. Example output from the provided R analysis script, page 1/4. The script produces plots showing the object-based colocalization results (here of LAMP-2 with RABx) using number-, size-, and signal-based colocalization coefficients. In the bottom row, the pixel-based Pearson correlations with and without cell masks are shown. The bar graphs display the means and standard errors of the mean over all (here, 20) images. P values are computed using a one-way ANOVA test.


[Plot panels omitted. Panels show pairwise Tukey comparisons (95% family-wise confidence levels and P values) for object number, size, and signal colocalization and for Pearson correlation; the comparisons involving R7a are significant, whereas the pairwise differences among R5a, R4a, and R11b are mostly not significant. Plugin and R script parameters as listed for Supplementary Fig. 4.]
Supplementary Figure 5. Example output from the provided R analysis script, page 2/4. 95% confidence intervals of pairwise mean colocalization and Pearson correlation differences. Confidence intervals and P values are computed using a Tukey test.


Supplementary Figure 6. Example output from the provided R analysis script, page 3/4. Mean numbers and sizes (areas in 2D, volumes in 3D, in units of pixels/voxels) of objects segmented in both channels (here, LAMP-2 and RABx vesicles). The total size is the sum of the sizes of all individual objects in a condition. The total size ratio is the ratio between the total area occupied by objects in channel 1 and the total area occupied by objects in channel 2.


[Figure: "Objects lengths, intensities and threshold effects". Bar plots per condition (R5a, R4a, R11b, R7a) of the RABx and LAMP-2 mean object lengths and intensities, plus the object-based colocalization coefficients in both directions (RABx in LAMP-2, i.e. (RABx + LAMP-2)/RABx, and LAMP-2 in RABx, i.e. (RABx + LAMP-2)/LAMP-2) recomputed for minimum object sizes of 2, 8, 14, and 20 and minimum object intensities of 0.05, 0.2, 0.35, and 0.5.]

Plugin parameters: background removal: true; window size: 10; stddev PSF xy: 0.707; stddev PSF z: 0.791; regularization: 0.1; min intensity ch1: 0.15; min intensity ch2: 0.15; subpixel: false; cell mask ch1: true; mask threshold ch1: 0.01; cell mask ch2: false; mask threshold ch2: 0.0; intensity estimation: Automatic; noise model: Poisson. R script analysis parameters: MinIntCh1: 0.000; MinIntCh2: 0.000; MaxSize: 100000; MinSize: 0; MinObjects: 5. Date: 2013-06-08.

Supplementary Figure 7. Example output from the provided R analysis script, page 4/4. Mean lengths (in units of pixels) and intensities (in the scaled image) of objects segmented in both channels (here, LAMP-2 and RABx vesicles). Moreover, this last page of the analysis output shows the colocalization coefficients when excluding objects below different size and intensity thresholds.
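The threshold sweep on this page recomputes the object-based colocalization coefficient after discarding small or dim objects. A minimal Python sketch of that filtering step, assuming a hypothetical per-object record format (keys 'size', 'intensity', 'overlap') rather than the plugin's actual output schema:

```python
def coloc_percentage(objects, min_size=0, min_intensity=0.0):
    """Percentage of objects colocalizing with the other channel,
    after excluding objects below the size and intensity thresholds.

    `objects` is a list of dicts with keys 'size', 'intensity', and
    'overlap' (True if the object overlaps an object in the other
    channel). This record format is illustrative only.
    """
    kept = [o for o in objects
            if o["size"] >= min_size and o["intensity"] >= min_intensity]
    if not kept:
        return 0.0
    # Fraction of surviving objects that overlap the other channel.
    return 100.0 * sum(o["overlap"] for o in kept) / len(kept)
```

Sweeping `min_size` (e.g. 2, 8, 14, 20) or `min_intensity` (e.g. 0.05 to 0.5) and re-evaluating this function reproduces the threshold-effect panels above.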


Supplementary Figure 8. Example images showing the colocalization of LBPA with different RAB GTPases.


Supplementary Figure 9. Example images showing the colocalization of EEA1 with different RAB GTPases.


Supplementary Figure 10. Example images showing the colocalization of LAMP-2 with different RAB GTPases.


Supplementary Figure 11. Example images showing the colocalization of PDI with different RAB GTPases.