

HAL Id: hal-02167903
https://hal-mines-paristech.archives-ouvertes.fr/hal-02167903

Submitted on 28 Jun 2019


A new color augmentation method for deep learning segmentation of histological images

Yang Xiao, Etienne Decencière, Santiago Velasco-Forero, Hélène Burdin, Thomas Bornschlögl, Françoise Bernerd, Emilie Warrick, Thérèse Baldeweck

To cite this version: Yang Xiao, Etienne Decencière, Santiago Velasco-Forero, Hélène Burdin, Thomas Bornschlögl, et al. A new color augmentation method for deep learning segmentation of histological images. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI), Apr 2019, Venice, France. hal-02167903


A NEW COLOR AUGMENTATION METHOD FOR DEEP LEARNING SEGMENTATION OF HISTOLOGICAL IMAGES

Yang Xiao1, Etienne Decencière1, Santiago Velasco-Forero1, Hélène Burdin2

Thomas Bornschlögl3, Françoise Bernerd3, Emilie Warrick3 and Thérèse Baldeweck3

1 MINES ParisTech, PSL Research University, Centre for Mathematical Morphology, France
2 ADCIS SA, Saint-Contest, France
3 L'Oréal Research and Innovation, Aulnay-sous-Bois, France

ABSTRACT

This paper addresses the problem of labeled data insufficiency in neural network training for semantic segmentation of color-stained histological images acquired via Whole Slide Imaging. It proposes an efficient image augmentation method to alleviate the demand for a large amount of labeled data and to improve the network's generalization capacity. Typical image augmentation in bioimaging involves geometric transformation. Here, we propose a new image augmentation technique that combines the structure of one image with the color appearance of another to construct augmented images on-the-fly at each training iteration. We show that it improves performance in the segmentation of histological images of human skin, and that it offers better results still when combined with geometric transformation.

Index Terms— color-stained slide, deep learning, segmentation, color transfer, histopathology, Fontana Masson, skin.

1. INTRODUCTION

Histological images of plant and animal tissues allow us to explore their structures and functions. Image segmentation is a crucial first step in many image analysis tasks, especially in histopathology, where it aims at accurately identifying the presence, number, distribution, size, or morphology of certain tissue features (specific cells, nuclei, ...). In the dermatological field, it is used in a range of applications including melanoma detection and the assessment of histopathological damage of the skin [1]. With the recent advent of digital and whole slide imaging, the number and size of acquired images are growing, and there is a need to adapt the throughput of image quantification accordingly.

A review of segmentation methods for color-stained histological images of pathological skin (lymphoma) was presented in [2]; it covers various methods based on regions, thresholding, K-means, graph cut, and the watershed transform. Recent advances in deep learning have enabled automatic image segmentation using convolutional neural networks (CNN) [3–5]. By extending CNNs to fully convolutional networks (FCN), one can train a network that segments arbitrary-sized images without redundant computation [6–8]. Nevertheless, these deep neural networks usually require large training sets to achieve acceptable performance, while generating the segmentation ground truth necessary for supervised learning is very time-consuming. Another challenge in the segmentation of histological images using deep learning is that network generalization can be hampered by complex tissue structures and inconsistencies in sample preparation [9].

The aim of this paper is to show that a deep neural network can learn a satisfactory segmentation model from relatively little data, thanks to a convenient image augmentation method. This is achieved with an image augmentation technique that exploits the color transformation between different images, paying specific attention to the stained components within each sample. It allows the network to learn invariance to such variation without needing to see these transformations in the labeled data [10], which is particularly important for the segmentation of histological images of human skin, since color variation is one of the most common variations [11, 12]. We show below that such transformations can be efficiently implemented.

To augment the available labeled data for training, some works use simple geometric transformations such as image rotation and translation to achieve invariance to irrelevant spatial factors [5, 6], while others explore the combination of geometric and photometric augmentation techniques to increase robustness to differing illumination color and intensity [13]. In this paper, we propose a novel image augmentation method working in the color space of the images, and combine it with existing geometric augmentation techniques to generate additional variation. We demonstrate fast and accurate results on histological images of human skin and provide a direct comparison with other methods.

The contributions of the presented work are twofold: (1) it proposes a new image augmentation method adapted to histological images with various color appearances; (2) experimental results illustrate the good performance of the proposed method, which outperforms traditional ones in deep learning frameworks.

2. MATERIAL

2.1. Histological image of human skin

Skin is an epithelial tissue with a specific layered structure: a layer of stratum corneum (SC) located on top of a layer of Living Epidermis (LE), and the dermis (see Fig. 1). The three interfaces between these layers are respectively named Surface, Internal Epidermis Boundary (IEB), and Dermal-Epidermal Junction (DEJ).

In this paper we deal with histological images of normal and lesional human skin (Fontana Masson staining). Our aim is to segment the stratum corneum (labeled SC) and the living epidermis (labeled LE); for that purpose, components other than SC and LE were labeled as Background (BG).


Table 1: Description of databases

Database    Number of images   Image size (pixels)       Memory
Database1   76                 0.5 × 10^6 – 1.6 × 10^7   835 MB
Database2   52                 0.7 × 10^6 – 1.1 × 10^7   420 MB

Fig. 1: Histological image of lesional human skin (with high DEJ structural deformations) showing its main compartments and corresponding boundaries.

2.2. Database description

In order to test the generalization capabilities of different models, and in particular of the model using the proposed color augmentation method, two databases were collected. The first one, Database1 (76 images), is used for network training. It contains images from two clinical studies including paired lesional and non-lesional samples. Among the 76 images in Database1, we randomly selected 35 images for training (26 images) and validation (9 images), while the remaining 41 images are used for testing. Moreover, to ensure the independence of images across subsets, images from the same histological sample were assigned to the same subset, since they have very similar appearances.

The second one, Database2 (52 images), is used to evaluate the network's generalization capacity. It contains images from a third clinical study, which presents different color appearances compared to Database1. Table 1 summarizes the characteristics of these two databases.

3. METHOD

The field of data augmentation is not new, and various data augmentation techniques have been applied to specific problems. In image classification, data augmentation methods artificially create training images by altering available images [14]. Previous works [15–17] have shown its effectiveness in reducing overfitting, thus improving generalization on new data. As data augmentation should be adapted to the intrinsic nature of the training samples, the proposed image augmentation method focuses on the color transformation of the stained components contained in different histological images.

The main idea of color augmentation, inspired by Reinhard [18], is to impose one image's color characteristics on another using statistical analysis. In our method, instead of transferring the color of the whole image, the transformation is limited to the stained components of the histological images. Such a transformation augments the color variations of the images used in the training process. Based on these augmented images, the neural network's performance is greatly improved on histological images of human skin from outside the training set, where various color appearances are present.

3.1. Lab color space

The CIE-Lab color space endows colors with a perceptually meaningful Euclidean distance as a measure of color similarity, and it is related to the RGB color space through a complex transformation [19]. In RGB color space, the color information is separated into three channels, but the same three channels also encode brightness information. In contrast, in Lab color space, the lightness channel L is independent of color information and only encodes brightness, while the other two channels are chromatic yellow-blue and red-green opponent channels.

Another advantage of the Lab color space is that the selection of stained components can be achieved by a simple thresholding operation in the lightness channel. According to the Beer-Lambert law mentioned in [12], the transmission of light through a material can be modeled as

$I = I_0 \, e^{-\alpha c x}$    (1)

where $I_0$ is the intensity of the incident light, $I$ is the intensity of light after passing through the medium, $\alpha$ is the absorption coefficient, $c$ the concentration of absorbing substance, and $x$ the distance traveled through the medium. $\alpha$ and $x$ are assumed to be constant for a given specimen and stain, while $c$ can vary between different images and within the same image. Thus, in histological images, the principal stained components, which have a large concentration of absorbing substance, have a lower intensity than parts where little or no absorbing substance is present. In the Lab color space, as brightness information is encoded in the L channel, these components can be extracted by selecting the pixels whose L channel values are lower than a threshold (set to $0.86 \times L_h$ in our work, with $L_h$ being the highest possible value of the L channel).
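As an illustration, here is a minimal sketch of this stained-component selection, assuming scikit-image's rgb2lab conversion (whose L channel ranges from 0 to 100, so $L_h = 100$); the helper name stained_mask is ours:

```python
import numpy as np
from skimage.color import rgb2lab

def stained_mask(rgb, frac=0.86):
    """Select the principal stained components: pixels whose lightness L
    is below frac * Lh (Lh = 100 is the maximum of skimage's L channel)."""
    lab = rgb2lab(rgb)            # rgb: H x W x 3 image (uint8 or float)
    L = lab[..., 0]               # lightness channel, range [0, 100]
    return L < frac * 100.0       # boolean mask of stained pixels
```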

3.2. Color transfer

In the field of automatic image analysis, several works have addressed the problem of stain inconsistency by pre-processing images with stain normalization techniques, in which all images of a dataset are mapped to a user-selected reference image [11, 20, 21]. As this approach is very sensitive to the choice of the reference image, others proposed to normalize the stains in an adversarial framework [22], which eliminates the need for an expert to pick a representative reference image [23, 24].

In this work, we propose a stain-focused image augmentation technique that augments training images using color matching. The color variation between different histological images mainly comes from the stains, while the background remains bright (see images (a), (b) and (d) in Fig. 2). Thus, the color transformation should be applied to the stains rather than to the whole image. With this method, we aim to transfer the color appearance of one histological image towards another without modifying the background. More specifically, for each training image, a target image is randomly selected from the training set at each iteration, and an augmented image is generated through the color transfer defined below:

$C_{\text{transferred}} = C_{\text{original}} - \overline{C}_{\text{original}} + \overline{C}_{\text{target}}$    (2)

where $\overline{C}$ denotes the mean of channel C computed over the principal stained components of an image. This translation is applied to each channel of the image. With this image augmentation technique, the augmented image takes the color appearance of the target image while its structure remains that of the original image (see Fig. 2).
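A minimal sketch of the transfer of Eq. (2), reusing the hypothetical stained_mask helper from Section 3.1. The text does not fully specify whether the translation touches the full image or only the stained components; since the background should stay unmodified, this sketch applies the shift to the stained pixels only, which is one plausible reading:

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def color_transfer(original_rgb, target_rgb, frac=0.86):
    """Eq. (2): shift each Lab channel of the original's stained pixels by
    (mean over target's stained pixels) - (mean over original's stained pixels)."""
    orig_lab = rgb2lab(original_rgb)
    targ_lab = rgb2lab(target_rgb)
    m_o = stained_mask(original_rgb, frac)   # stained components, Sec. 3.1
    m_t = stained_mask(target_rgb, frac)
    out = orig_lab.copy()
    for c in range(3):                       # translate the L, a and b channels
        shift = targ_lab[..., c][m_t].mean() - orig_lab[..., c][m_o].mean()
        out[..., c][m_o] += shift            # background pixels left unchanged
    # clip, as the shifted colors can fall slightly out of the RGB gamut
    return np.clip(lab2rgb(out), 0.0, 1.0)
```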


Fig. 2: Examples of two augmented images (c and e) obtained from an original image (a) using two different target images (b and d). This augmentation is applied to crops of size 512 × 512 during training.

3.3. Data preparation and preprocessing

To use the data augmentation method proposed in Section 3, images in RGB color space are converted to Lab color space for augmentation through the color transfer. In addition, we use a geodesic reconstruction to cope with non-local information within fully convolutional networks [25]. Then, as the histological images in the datasets are of large yet variable sizes, crops of size 512 × 512 are extracted from them and used for training (158 crops) and validation (53 crops). When required by an experiment, image augmentation methods are applied to these crops on-the-fly during training. Moreover, crops containing only background are removed, since no interface appears in them.
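For illustration, a possible crop-extraction step consistent with this description; the non-overlapping grid and the exact background-filtering rule are our assumptions, not details given by the paper:

```python
import numpy as np

def extract_crops(image, labels, size=512, bg_label=0):
    """Cut non-overlapping size x size crops and drop crops that contain
    only background, since no interface appears in them (hypothetical helper)."""
    crops = []
    h, w = labels.shape[:2]
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            lab = labels[y:y + size, x:x + size]
            if np.any(lab != bg_label):      # keep crops containing SC or LE
                crops.append((image[y:y + size, x:x + size], lab))
    return crops
```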

3.4. Network training

In this paper, we consider a U-Net architecture [8], a typical FCN used in biomedical imaging, which consists of an "encoding-decoding" architecture extracting high-level information without losing object details. Here, the window size is 3 × 3 for convolution and 2 × 2 for max-pooling and upsampling. In our network, four layers of downsampling/upsampling are contained in the encoding/decoding path, with 16 output filters after the first convolution layer. At the end of the decoding path, a 1 × 1 convolution with sigmoid activation is applied so that the channel dimension equals the number of classes in the segmentation task. Thus, three probability maps with the same spatial dimensions as the input image are obtained at the output of the network.
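A sketch of such a network in Keras under the stated hyper-parameters; the two convolutions per level and the doubling of the filter count at each level are standard U-Net choices that we assume here, as the paper does not detail them:

```python
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    """Two 3x3 convolutions with ReLU, as in the original U-Net."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(512, 512, 3), n_classes=3, base_filters=16):
    inputs = layers.Input(input_shape)
    skips, x = [], inputs
    # Encoding path: four 2x2 max-pooling (downsampling) stages.
    for level in range(4):
        x = conv_block(x, base_filters * 2 ** level)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)
    x = conv_block(x, base_filters * 2 ** 4)          # bottleneck
    # Decoding path: four 2x2 upsampling stages with skip connections.
    for level in reversed(range(4)):
        x = layers.UpSampling2D(2)(x)
        x = layers.Concatenate()([x, skips[level]])
        x = conv_block(x, base_filters * 2 ** level)
    # 1x1 convolution with sigmoid: one probability map per class.
    outputs = layers.Conv2D(n_classes, 1, activation="sigmoid")(x)
    return Model(inputs, outputs)
```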

Besides the network's architecture, another essential element in deep learning is the loss function. Inspired by [26], who proposed a differentiable version of the Jaccard distance to measure the dissimilarity between two sets, we encode the ground truth into one-hot vectors and define the loss function as below:

$L_J = 1 - \dfrac{\sum_{i,j,k} t_{ijk}\, p_{ijk}}{\sum_{i,j,k} t_{ijk}^2 + \sum_{i,j,k} p_{ijk}^2 - \sum_{i,j,k} t_{ijk}\, p_{ijk}}$    (3)

where $t_{ijk} = 1$ if the true class of pixel $I_{ij}$ of the input image is $k$ ($t_{ijk} = 0$ otherwise), and $p_{ijk}$ is the estimated probability that this pixel belongs to class $k$.
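A direct transcription of Eq. (3) in TensorFlow could look as follows; the small epsilon for numerical stability is our addition:

```python
import tensorflow as tf

def jaccard_loss(t, p, eps=1e-7):
    """Differentiable Jaccard distance of Eq. (3); t is the one-hot ground
    truth and p the predicted probabilities, both of shape (H, W, K)."""
    intersection = tf.reduce_sum(t * p)
    denom = tf.reduce_sum(t * t) + tf.reduce_sum(p * p) - intersection
    return 1.0 - intersection / (denom + eps)
```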

3.5. Post-processing

After training the network on crops of size 512 × 512, we can apply it to images of arbitrary size, as long as they fit on the GPU for network prediction. Another constraint on the image size is that it has to be a multiple of 16 (2^4), since four 2 × 2 max-pooling layers are included in the network. Then, as proposed in [25], the following post-processing is applied. For the stratum corneum and the living epidermis, only the largest connected component is kept; for the background, the connected components touching the top or the bottom of the image are kept. Based on these components, the final segmentation result is obtained through a watershed transform.
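As one building block of this post-processing, the largest-connected-component step could be sketched with scikit-image as follows; the marker construction and the final watershed are omitted here:

```python
import numpy as np
from skimage.measure import label

def largest_component(mask):
    """Keep only the largest connected component of a boolean mask
    (used here for the SC and LE classes)."""
    lab = label(mask)
    if lab.max() == 0:                    # class absent from the prediction
        return mask
    sizes = np.bincount(lab.ravel())[1:]  # component sizes, label 0 skipped
    return lab == (1 + np.argmax(sizes))
```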

Table 2: Training results on Database1

Method   Time     Best train loss   Best val loss
No aug   14 min   0.0060            0.0307
Geo      44 min   0.0133            0.0243
Color    22 min   0.0114            0.0272
Mix      58 min   0.0178            0.0211

4. EXPERIMENTAL RESULTS

To illustrate the performance of the proposed augmentation method, we trained four networks with different methods on Database1. Several experiments were conducted to set the parameters of the geometric transformation: rotation range of 5 degrees, horizontal shift range of 0.1 times the total width, vertical shift range of 0.1 times the total height, random horizontal flip, and interpolation by nearest value. 'No aug' refers to no augmentation; 'Geo' to augmentation with geometric transformation; 'Color' to augmentation with the proposed color transfer; and 'Mix' combines our color transfer with geometric transformation to augment the variability of the labeled data.
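These geometric parameters map directly onto Keras' ImageDataGenerator; whether the authors used this exact class is not stated, so this configuration is an assumption:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Geometric augmentation with the parameters listed above; the same
# transformation must also be applied to the corresponding label map.
geo_aug = ImageDataGenerator(
    rotation_range=5,          # up to 5 degrees of rotation
    width_shift_range=0.1,     # horizontal shift, fraction of total width
    height_shift_range=0.1,    # vertical shift, fraction of total height
    horizontal_flip=True,      # random horizontal flip
    fill_mode="nearest",       # interpolate border pixels by nearest value
)
```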

The networks were implemented using Keras with the TensorFlow backend and trained on an NVIDIA Titan-X GPU with 11 GB of memory. All networks were trained with the same architecture and loss function. We used the Adadelta optimizer with the default parameters proposed in [27], for 200 epochs with a patience of 50. For the learning process, online learning (one training sample per iteration) was applied, and the augmented images were constructed on-the-fly.

4.1. Results on the test set from Database1 (41 images)

In Table 2, we present the best training and validation losses obtained during the training process. Convergence seems satisfactory in most cases. However, overfitting appeared in the un-augmented training process. Data augmentation mitigated this overfitting, and the mixed augmentation achieved the best validation loss, with only a tiny increase compared to the training loss.

To evaluate the networks' performance on the test set, we calculated the Jaccard index [28] for each class, also known as the Intersection over Union (IoU), which is commonly used in the evaluation of medical image segmentation. Besides, as histological images of human skin possess a specific layered structure, correct segmentation results should contain the three interfaces shown in Fig. 1. Therefore, for each interface within a slide, we can calculate a mean spatial distance D between the interface in the ground truth, Igt, and the interface predicted by the network, Ipre, defined as:

$D = \dfrac{1}{2}\left(\dfrac{1}{|I_{\text{pre}}|}\sum_{p \in I_{\text{pre}}} d(p, I_{\text{gt}}) + \dfrac{1}{|I_{\text{gt}}|}\sum_{p \in I_{\text{gt}}} d(p, I_{\text{pre}})\right)$    (4)

where d(·) is the Euclidean distance computed in pixels. Finally, for each interface, a mean distance averaged over the whole test set is calculated to assess the segmentation quality. If an interface is not detected in the segmentation result, the corresponding mean distance takes the value of the image diagonal as a penalization.
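A minimal sketch of the interface distance of Eq. (4), assuming the two interfaces are given as boolean pixel masks and using distance transforms for efficiency; the diagonal penalty for undetected interfaces follows the text above:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def mean_interface_distance(gt_mask, pred_mask):
    """Symmetric mean distance of Eq. (4) between two interfaces given as
    boolean pixel masks; distances are measured in pixels."""
    if not pred_mask.any() or not gt_mask.any():
        return float(np.hypot(*gt_mask.shape))   # undetected: diagonal penalty
    d_to_gt = distance_transform_edt(~gt_mask)   # distance of each pixel to I_gt
    d_to_pred = distance_transform_edt(~pred_mask)
    return 0.5 * (d_to_gt[pred_mask].mean() + d_to_pred[gt_mask].mean())
```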

Fig. 3: Three examples of segmentation results on Database2 obtained with networks trained with various image augmentation techniques (BG in black, SC in gray, and LE in white). From left to right: original image, ground truth, no augmentation, geometric transformation, color transfer, combined augmentation.

Table 3: Test results on Database1

         Jaccard index (per class)   D (per interface)
Method   BG     SC     LE            Surface   IEB     DEJ
No aug   0.97   0.77   0.87          54.6      311.5   7.9
Geo      0.99   0.82   0.89          16.3      30.9    5.1
Color    0.99   0.89   0.91          2.1       9.7     4.3
Mix      0.99   0.89   0.92          2.0       9.7     3.6

Table 3 presents the results on the test set from Database1. While the Jaccard index of the BG class is equally improved by the different data augmentation techniques, the networks trained with color transfer yield better Jaccard indexes for SC and LE than both the un-augmented training and the augmented training using geometric transformation.

In terms of mean distances, networks trained with data augmentation bring a great improvement over the un-augmented training. Moreover, networks trained with the proposed color augmentation give better results than geometric transformation, while the best result is obtained by combining the two augmentation techniques.

4.2. Results on Database2 used as test set (52 images)

In a real application, the trained networks would be applied to images from different studies. To verify the networks' generalization, we applied them to Database2, which consists of images coming from a different study than Database1. Fig. 3 shows examples of segmentation results on this database.

Quantitative evaluation results are given in Table 4. First, the mean distance of the IEB detected by the un-augmented network is much larger than for the other networks, a consequence of segmentation results in which no corresponding IEB is present (see the third image of the first row in Fig. 3). Second, the networks using the color transfer achieve an improvement by a large margin, especially in terms of mean distances. The performance difference between the network using only geometric data augmentation and the networks using color augmentation is much larger on Database2. This shows that the proposed method brings a welcome generalization capacity to models dealing with this kind of histological data. Last but not least, the best result on Database2 is given by the network trained with the combined augmentation, as was also the case on Database1.

Table 4: Test results on Database2

         Jaccard index (per class)   D (per interface)
Method   BG     SC     LE            Surface   IEB     DEJ
No aug   0.92   0.47   0.70          156.6     992.5   58.5
Geo      0.95   0.74   0.70          19.2      321.5   72.8
Color    0.99   0.91   0.91          1.8       9.0     5.1
Mix      0.99   0.92   0.92          1.4       8.0     4.2

5. CONCLUSIONS

We present an image augmentation method that automatically increases the color appearance variety of color-stained histological images used for neural network training, yielding improved performance compared to a typical augmentation technique. We evaluated networks trained with various methods on histological images of human skin, where color transformation is one of the most important variations between images. When applying the networks to a generalization database containing histological images from a different study, a satisfactory result was obtained with our method.

The main constraint of our algorithm is that its performance is very dependent on the color variety existing in the training set, which could be insufficient to cover all the variations observed in a real application. We consider extending our color augmentation method with elastic deformation [8] to increase the appearance variety existing in the training images. Additional future work may include combining image augmentation with stain normalization techniques, in which a generative adversarial network, such as [24, 29, 30], could be used to automate the whole process.


6. REFERENCES

[1] J. Haggerty, X. Wang, A. Dickinson, C. O'Malley, and E. Martin, "Segmentation of epidermal tissue with histopathological damage in images of haematoxylin and eosin stained human skin," BMC Medical Imaging, vol. 14, p. 7, 2014.

[2] T. Azevedo Tosta, L. Neves, and M. do Nascimento, "Segmentation methods of H&E-stained histological images of lymphoma: A review," Informatics in Medicine Unlocked, vol. 9, pp. 35–43, 2017.

[3] K. Fukushima, "Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position," Biological Cybernetics, vol. 36, no. 4, pp. 193–202, Apr. 1980.

[4] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, and L. D. Jackel, "Backpropagation applied to handwritten zip code recognition," Neural Computation, vol. 1, no. 4, pp. 541–551, Dec. 1989.

[5] D. Ciresan, A. Giusti, L. Gambardella, and J. Schmidhuber, "Deep neural networks segment neuronal membranes in electron microscopy images," in Proceedings NIPS'12, USA, 2012, Curran Associates Inc.

[6] J. Long, E. Shelhamer, and T. Darrell, "Fully convolutional networks for semantic segmentation," CoRR, vol. abs/1411.4038, 2014.

[7] V. Badrinarayanan, A. Kendall, and R. Cipolla, "SegNet: A deep convolutional encoder-decoder architecture for image segmentation," CoRR, vol. abs/1511.00561, 2015.

[8] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in MICCAI (3), 2015, vol. 9351 of Lecture Notes in Computer Science, pp. 234–241, Springer.

[9] D. Komura and S. Ishikawa, "Machine learning methods for histopathological image analysis," CoRR, vol. abs/1709.00786, 2017.

[10] H. Zhang, M. Cisse, Y. Dauphin, and D. Lopez-Paz, "mixup: Beyond empirical risk minimization," in ICLR, 2018.

[11] D. Magee, D. Treanor, D. Crellin, M. Shires, K. Smith, K. Mohee, and P. Quirke, "Colour normalisation in digital histopathology images," in MICCAI Workshop, 2009.

[12] J. Vicory, H. D. Couture, N. E. Thomas, D. Borland, J. S. Marron, J. T. Woosley, and M. Niethammer, "Appearance normalization of histology slides," Computerized Medical Imaging and Graphics, vol. 43, pp. 89–98, 2015.

[13] S. Hauberg, O. Freifeld, A. Boesen Lindbo Larsen, J. Fisher, and L. Kai Hansen, "Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation," in AISTATS, 2016.

[14] L. Perez and J. Wang, "The effectiveness of data augmentation in image classification using deep learning," CoRR, vol. abs/1712.04621, 2017.

[15] C. Nader Vasconcelos and B. Nader Vasconcelos, "Increasing deep learning melanoma classification by classical and expert knowledge based image transforms," CoRR, vol. abs/1702.07025, 2017.

[16] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," in NIPS'12, USA, 2012, pp. 1097–1105, Curran Associates Inc.

[17] Y. Xu, R. Jia, L. Mou, G. Li, Y. Chen, Y. Lu, and Z. Jin, "Improved relation classification by deep recurrent neural networks with data augmentation," in Proc. COLING 2016, 2016, pp. 1461–1470.

[18] E. Reinhard, M. Ashikhmin, B. Gooch, and P. Shirley, "Color transfer between images," IEEE Computer Graphics and Applications, vol. 21, no. 5, pp. 34–41, Sept. 2001.

[19] F. W. Billmeyer, "Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed., by Gunter Wyszecki and W. S. Stiles, John Wiley and Sons, New York, 1982," Color Research & Application, vol. 8, no. 4, pp. 262–263.

[20] M. Macenko, M. Niethammer, J. S. Marron, D. Borland, J. T. Woosley, X. Guan, C. Schmitt, and N. E. Thomas, "A method for normalizing histology slides for quantitative analysis," in ISBI 2009, June 2009, pp. 1107–1110.

[21] A. M. Khan, N. Rajpoot, D. Treanor, and D. Magee, "A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution," IEEE Transactions on Biomedical Engineering, vol. 61, no. 6, pp. 1729–1738, June 2014.

[22] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in NIPS 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Eds., pp. 2672–2680, Curran Associates, Inc., 2014.

[23] A. Bentaieb and G. Hamarneh, "Adversarial stain transfer for histopathology image analysis," IEEE Transactions on Medical Imaging, vol. 37, no. 3, pp. 792–802, March 2018.

[24] M. Tarek Shaban, C. Baur, N. Navab, and S. Albarqouni, "StainGAN: Stain style transfer for digital histological images," CoRR, vol. abs/1804.01601, 2018.

[25] E. Decencière, S. Velasco-Forero, F. Min, J. Chen, G. Gauthier, H. Burdin, B. Lay, T. Bornschloegl, and T. Baldeweck, "Dealing with topological information within a fully convolutional neural network," in ACIVS, 2018.

[26] Y. Yuan, M. Chao, and Y.-C. Lo, "Automatic skin lesion segmentation using deep fully convolutional networks with Jaccard distance," IEEE Transactions on Medical Imaging, vol. 36, pp. 1876–1886, 2017.

[27] M. D. Zeiler, "ADADELTA: An adaptive learning rate method," CoRR, vol. abs/1212.5701, 2012.

[28] M. Berman and M. B. Blaschko, "Optimization of the Jaccard index for image segmentation with the Lovász hinge," CoRR, vol. abs/1705.08790, 2017.

[29] F. Mahmood, D. Borders, R. Chen, G. N. McKay, K. J. Salimian, A. S. Baras, and N. J. Durr, "Deep adversarial training for multi-organ nuclei segmentation in histopathology images," CoRR, vol. abs/1810.00236, 2018.

[30] L. Hou, A. Agarwal, D. Samaras, T. M. Kurç, R. R. Gupta, and J. H. Saltz, "Unsupervised histopathology image synthesis," CoRR, vol. abs/1712.05021, 2017.