Submitted 20 March 2018
Accepted 28 September 2018
Published 13 November 2018

Corresponding author
Sufian A. Badawi, [email protected]

Academic editor
Henkjan Huisman

Additional Information and Declarations can be found at the end of the article

DOI 10.7717/peerj.5855

Copyright 2018 Badawi and Fraz

Distributed under Creative Commons CC-BY 4.0

How to cite this article: Badawi SA, Fraz MM. 2018. Optimizing the trainable B-COSFIRE filter for retinal blood vessel segmentation. PeerJ 6:e5855 http://doi.org/10.7717/peerj.5855

OPEN ACCESS

Optimizing the trainable B-COSFIRE filter for retinal blood vessel segmentation

Sufian A. Badawi and Muhammad Moazam Fraz
School of Electrical Engineering and Computer Science, National University of Sciences and Technology, Islamabad, Pakistan

ABSTRACT
Segmentation of the retinal blood vessels using filtering techniques is a widely used step in the development of an automated system for diagnostic retinal image analysis. This paper optimizes blood vessel segmentation by extending the trainable B-COSFIRE filter through the identification of more suitable parameter values. The filter parameters are tuned using an optimization procedure on three public datasets (STARE, DRIVE, and CHASE-DB1). The suggested approach analyzes the selection of thresholding parameters, followed by the application of background artifact removal techniques. The results are better than those of other state-of-the-art methods for vessel segmentation. ANOVA is also used to identify the parameters that most significantly affect the performance results (p-value < 0.05). The proposed enhancement has improved the vessel segmentation accuracy on DRIVE, STARE and CHASE-DB1 to 95.47%, 95.30% and 95.30%, respectively.

Subjects Ophthalmology, Radiology and Medical Imaging, Human-Computer Interaction, Computational Science
Keywords Retinal blood vessels, B-COSFIRE, Retinal images, Computer Aided Diagnosis (CAD), BCOSFIRE

INTRODUCTION
The analysis of shape, appearance, tortuosity and other morphological characteristics of blood vessels in human retinal images is a critical diagnostic indicator of eye disease. These characteristics are affected by various ophthalmic and systemic diseases, including diabetic retinopathy, age-related macular degeneration, hypertensive retinopathy, arteriolar narrowing, and arteriosclerosis (Kanski et al., 2011). The association of abnormalities in the retinal vasculature with cardiovascular diseases has also been reported in the literature (Wong et al., 2002). Therefore, accurate segmentation of the retinal blood vessels can be considered the first step in the development of an automated system for diagnostic retinal image analysis.

The segmentation of blood vessels in the retina is a challenging problem. In a retinal image captured with a fundus camera, the blood vessels emerge from the optic disc and their branches spread across the retina. The retinal vasculature comprises arterioles and venules; it appears piecewise line-shaped, and its width varies along the vessel length (Abràmoff, Garvin & Sonka, 2010). The cross-sectional gray-level intensity profile of a retinal vessel approximates a Gaussian-shaped curve, or a mixture of Gaussians when the central vessel reflex is present. The central vessel reflex appears as a strong reflection across the vessel centerline and is more evident in the arterioles of retinal images of children, due to the difference in oximetry level compared with adults (Fraz, Basit & Barman, 2013).


Crossovers of arteriole and venule branches, and their branch points, further complicate the vessel profile model. A number of pathological structures may be present in the retina, including hemorrhages, exudates, drusen, cotton-wool spots and microaneurysms. The intensity profiles of some of these pathologies may resemble those of the blood vessels. Besides variation in contrast, uneven illumination during image capture, low quality of the captured retinal image and the presence of pathologies are further challenges for the development of robust automated retinal vessel segmentation methodologies (Fraz & Barman, 2014).

Several approaches have been proposed in the literature for the segmentation of the retinal vasculature; a comprehensive review of retinal vessel segmentation methods is available in Fraz et al. (2012b). These methods can be classified into two major categories: supervised and unsupervised methods.

The supervised methods compute pixel-wise features and use labeled ground-truth data to train classifiers. The prerequisite is the availability of labeled data, which is difficult to obtain for medical images. In this regard, various classifiers have been used, including the k-Nearest Neighbor classifier (Staal et al., 2004), the Gaussian Mixture Model (Soares et al., 2006), the Support Vector Machine (Ricci & Perfetti, 2007), an ensemble classifier of Decision Trees (Fraz et al., 2012c), a Bayesian classifier in combination with multi-scale analysis of Gabor wavelets (Soares et al., 2006), and neural networks, which can be classified as shallow (Marín et al., 2011) or deep. Deep learning has brought real advancement in computer vision through improvements such as rectified linear units within convolutional neural networks (CNNs) (Long, Shelhamer & Darrell, 2015; Chen et al., 2014; Krizhevsky, Sutskever & Hinton, 2012), and a variety of CNN-based methods have been introduced for vessel segmentation in retinal images. Li et al. (2016) proposed a deep neural network that models segmentation as a cross-modality transformation problem. Fu et al. (2016) achieved vessel segmentation with a CNN combined with fully-connected Conditional Random Fields (CRFs). Maninis et al. (2016) introduced vessel and optic disc segmentation using a CNN. Dasgupta & Singh (2017) implemented pixel-wise binary classification of the retinal image using batches of 28 × 28 pixels each. Orlando, Prokofyeva & Blaschko (2017) proposed a discriminatively trained conditional random field model for vessel segmentation.

In contrast, the unsupervised methods do not use classifiers but rely on the application of matched filtering, based on kernel matching and spatial filtering with linear operations on predefined templates (Azzopardi et al., 2015; Cinsdikici & Aydın, 2009; Fraz et al., 2012a; Hari, Raj & Gopikakumari, 2017; Sofka & Stewart, 2006), vessel tracking (Yin, Adel & Bourennane, 2012), model fitting (Al-Diri, Hunter & Steel, 2009), and mathematical morphology (Fraz et al., 2012b; Fraz et al., 2012a; Wisaeng & Sa-ngiamvibool, 2018). Other unsupervised methods include the Combination Of Shifted Filter Responses (Azzopardi & Petkov, 2013b), left-invariant rotating derivative frames (Zhang et al., 2016), fuzzy convergence (Hoover, Kouznetsova & Goldbaum, 2000), local adaptive thresholding and mathematical morphology (Mendonça & Campilho, 2006), multi-scale analysis (Martinez-Perez et al., 1999), a rotation-invariant line operator with a linear-kernel support vector machine (SVM) (Lau et al., 2013), the Ribbon of Twins (Al-Diri, Hunter & Steel, 2009), and a multi-concavity modeling approach (Lam, Gao & Liew, 2010).

Recently, a trainable template-matching method for vessel segmentation was introduced (Strisciuglio & Petkov, 2017; Strisciuglio, Azzopardi & Petkov, 2017; Azzopardi & Petkov, 2013b; Azzopardi et al., 2015; Cinsdikici & Aydın, 2009; Fraz et al., 2012a; Hari, Raj & Gopikakumari, 2017; Sofka & Stewart, 2006). It segments the vascular tree via parameter configuration. It detects the shape of vessels by prototyping two patterns, in the form of a bar and a half bar, and configures the geometric features of each pattern to detect and segment the vessels in the retina. However, it needs a manual configuration for pattern detection, which is prone to errors; moreover, it was noticed that some background artifacts may increase the false positive ratio, which in turn affects the accuracy and precision. Thus, to improve the results of the B-COSFIRE method, we hypothesize that we need to optimize the thresholding parameter values and the background artifact removal mechanism.

This work aims to enhance the parameter configuration of the trainable filter so that it achieves higher vessel segmentation performance on the publicly available datasets DRIVE, STARE, and CHASE-DB1.

The rest of this paper is organized as follows: 'Trainable COSFIRE Filter' gives an overview of the trainable COSFIRE filter. 'Trainable B-COSFIRE Filter' is an overview of the B-COSFIRE filter. 'Improving B-COSFIRE Method' describes the improved B-COSFIRE filter. 'Experimental Evaluation' explains the experimental evaluation. 'Results' presents the results. 'Discussion' presents the discussion. The final section covers the conclusion and future work.

TRAINABLE COSFIRE FILTER
Combination Of Shifted Filter Responses (COSFIRE) is an unsupervised pattern detection method in computer vision. It is based on the 'Combination of Receptive Fields' (CORF) computational model of the visual cortex of the brain (Strisciuglio, Azzopardi & Petkov, 2017; Azzopardi & Petkov, 2012). The COSFIRE filter is invariant to scale, rotation, shift and reflection transformations (Azzopardi & Petkov, 2013b). Learning and recognition are comparatively fast, with high accuracy and shape selectivity (Cadieu et al., 2007). It is used for the detection of contour-based patterns (as shown in Fig. 1A).

The applications of COSFIRE in computer vision include keyword detection in handwritten text (Azzopardi & Petkov, 2014), identification of objects in complex scenes (Azzopardi & Petkov, 2014), color-based object recognition (Gecer, Azzopardi & Petkov, 2017), gender recognition from facial features (Azzopardi, Greco & Vento, 2016), edge detection (Azzopardi & Petkov, 2012), handwritten digit recognition (Azzopardi et al., 2016), traffic signal recognition (Azzopardi et al., 2016), and detection of vessel bifurcations in retinal images (Azzopardi et al., 2016).

A Difference of Gaussians (DoG) filter is used for generating the combination of filter responses. The COSFIRE method can be described as a combination of shifted filter responses, keypoint detection, and pattern recognition. It is a trainable filter that combines the responses from a group of DoG filters (represented by blobs).

Figure 1 (A) DoG functions showing the contours of each function; (B) illustration of the vessel as a bar and identification of the five points used to design the B-COSFIRE prototype.

Full-size DOI: 10.7717/peerj.5855/fig-1

TRAINABLE B-COSFIRE FILTER
Bar-Combination of Shifted Filter Responses (B-COSFIRE) (Azzopardi & Petkov, 2013b) is based on the COSFIRE filter; it is an extension of the Combination of Shifted Filter Responses. Like COSFIRE, it is an unsupervised vessel segmentation method.

B-COSFIRE is trainable to be selective for symmetric and asymmetric bar-shaped patterns. It takes as input five blobs from the DoG filter, at identified points at specific distances from the center of the kernel (as shown in Fig. 1B). Being a template-matching filter, it achieves its response by extracting the dominant orientation that has the concerned features and their geometric arrangement. Its output is computed as the weighted geometric mean of the blurred and shifted responses of the DoG filter.

The applications of B-COSFIRE in computer vision include the detection of roads, cracks, rivers, tile edges, land plots and the veins of plant leaves (Strisciuglio & Petkov, 2017), and the detection of vessels in retinal images (Strisciuglio, Azzopardi & Petkov, 2017).

The point locations are identified and enumerated in the bar and half bar via an automatic configuration process, as shown in Figs. 2 and 3. The filter is described as 'trainable' because it is configured automatically by detecting a given pattern of interest for the retinal vessels. There are two patterns; the first is for the detection of the internal part of the vessel, while the second is used to detect the end of the vessel. These two kernels are rotated in twelve orientations in order to cover all probable vessel directions. The result is a filter bank with 15° rotation steps, which is sufficient for optimal detection.

The B-COSFIRE filter uses the DoG filter for the detection of patterns (as shown in Fig. 3). However, the Gabor filter used by Strisciuglio, Azzopardi & Petkov (2017) and Azzopardi & Petkov (2013a), and the first-order derivative used by Fraz, Basit & Barman (2013), are also good alternatives that can be considered for the same task. The B-COSFIRE filter is a trainable filter which combines the responses of the DoG filters by multiplication of the response outputs.


Figure 2 Illustration of the B-COSFIRE prototype and the generated B-COSFIRE filter: (A) DoG filter; (B) automatically created B-COSFIRE filter for the designed prototype; (C) bar-shaped prototype.

Full-size DOI: 10.7717/peerj.5855/fig-2

The configuration comprises orientations, illustrated by ellipses, and a blurring function, illustrated by 2D Gaussian blobs, as shown in Fig. 4.

The B-COSFIRE filter response is computed in the following steps:
1. Compute the DoG filter responses (Eq. (1)) by convolving the DoG filter with the input image, as in Eq. (4).
2. Blur the obtained DoG responses, as in Eq. (2).
3. Shift the blurred responses toward the filter center, as in Eq. (3).
4. Obtain the B-COSFIRE filter response by computing the weighted geometric mean r_s(x,y), as in Eq. (6).


Figure 3 B-COSFIRE bar and half-bar filter prototype patterns. (A) Bar-shaped prototype pattern. (B) Half-bar-shaped prototype pattern. (C) Illustration of the designed DoG filter combinations to generate the pattern blobs for the B-COSFIRE filter.

Full-size DOI: 10.7717/peerj.5855/fig-3

Figure 4 (A) Difference of Gaussians (DoG) 3D function and the blob generated from its contours. (B) The DoG blob. (C) Multi-scale DoG 3D functions and the created blobs.

Full-size DOI: 10.7717/peerj.5855/fig-4

The details of the above-mentioned steps can be found in Azzopardi & Petkov (2013b).

\mathrm{DoG}_{\sigma}(x,y) \stackrel{\mathrm{def}}{=} \frac{1}{2\pi\sigma^{2}} \exp\left(-\frac{x^{2}+y^{2}}{2\sigma^{2}}\right) - \frac{1}{2\pi(0.5\sigma)^{2}} \exp\left(-\frac{x^{2}+y^{2}}{2(0.5\sigma)^{2}}\right) \quad (1)

The location (x,y) represents the center of the DoG function in Eq. (1), where sigma (σ) is the standard deviation (SD) and represents the spread of the intensity profile. The SD of the inner Gaussian function is 0.5σ.
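As an illustration of Eq. (1), the following is a minimal NumPy sketch of the DoG kernel; the kernel size and the truncation at roughly three standard deviations are our assumptions, not values given in the paper.

```python
import numpy as np

def dog_kernel(sigma, size=None):
    """Difference-of-Gaussians kernel of Eq. (1): an outer Gaussian with
    standard deviation sigma minus an inner Gaussian with 0.5 * sigma."""
    if size is None:
        size = int(2 * np.ceil(3 * sigma)) + 1  # truncate at ~3 sigma (assumption)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    r2 = x ** 2 + y ** 2
    outer = np.exp(-r2 / (2 * sigma ** 2)) / (2 * np.pi * sigma ** 2)
    inner = np.exp(-r2 / (2 * (0.5 * sigma) ** 2)) / (2 * np.pi * (0.5 * sigma) ** 2)
    return outer - inner
```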

\sigma' = \sigma_{0} + \alpha\rho_{i} \quad (2)


The next step is to blur the obtained responses by applying Eq. (2); the third step is to shift the blurred DoG responses toward the filter center using the shift vector given in Eq. (3).

(\Delta x_{i}, \Delta y_{i}) = \left(-\rho_{i}\cos\phi_{i},\; -\rho_{i}\sin\phi_{i}\right) \quad (3)

C_{\sigma} \stackrel{\mathrm{def}}{=} \left| I \star \mathrm{DoG}_{\sigma} \right|^{+} \quad (4)

For a given intensity distribution I(x',y') of an image I, C_σ(x,y) denotes the response of a DoG filter with kernel function DoG_σ(x−Δx−x', y−Δy−y'). If the result of the convolution is negative, it is replaced with 0 (half-wave rectification, denoted by |·|⁺), as shown in Eq. (4).

s_{\sigma_{i},\rho_{i},\phi_{i}}(x,y) = \max_{x',y'} \left\{ C_{\sigma_{i}}(x - \Delta x_{i} - x',\, y - \Delta y_{i} - y')\, G_{\sigma'}(x',y') \right\} \quad (5)

where G_{σ'} is a 2D Gaussian function with the standard deviation σ' of Eq. (2), used to blur the shifted DoG responses.

r_{s}(x,y) \stackrel{\mathrm{def}}{=} \left| \left( \prod_{i=1}^{|S|} \left( s_{\sigma_{i},\rho_{i},\phi_{i}}(x,y) \right)^{\omega_{i}} \right)^{1 / \sum_{i=1}^{|S|} \omega_{i}} \right|_{t} \quad (6)

where

0 \le t \le 1, \qquad \omega_{i} = \exp\left(-\frac{\rho_{i}^{2}}{2\tau^{2}}\right), \qquad \tau = \frac{1}{3} \max_{1 \le i \le |S|} \{\rho_{i}\}. \quad (7)

The filter threshold t is used to suppress the output of the B-COSFIRE filter in Eq. (6): responses below the fraction t of the maximum response are set to zero. The resulting B-COSFIRE filter response is obtained by combining a number of orientation-selective responses arranged around the point (x,y).
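To make the pipeline of Eqs. (1)–(7) concrete, the following is a minimal sketch, not the authors' Matlab implementation. It assumes a configured set of tuples (ρᵢ, φᵢ); the DoG response is computed as a difference of two Gaussian-filtered images, the max-weighted blurring of Eq. (5) is approximated by a plain Gaussian filter for brevity, and the default values of σ₀, α and t are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift

def b_cosfire_response(image, tuples, sigma, sigma0=3.0, alpha=0.7, t=0.3):
    """Sketch of the B-COSFIRE response of Eqs. (1)-(7).
    tuples: list of (rho_i, phi_i) pairs from the automatic configuration."""
    image = image.astype(float)
    # Eq. (1) + Eq. (4): DoG response with half-wave rectification
    c = gaussian_filter(image, sigma) - gaussian_filter(image, 0.5 * sigma)
    c = np.maximum(c, 0)
    tau = max(rho for rho, _ in tuples) / 3.0                 # Eq. (7)
    log_sum, weight_sum = np.zeros_like(c), 0.0
    for rho, phi in tuples:
        w = np.exp(-rho ** 2 / (2 * tau ** 2))                # Eq. (7) weights
        s = gaussian_filter(c, sigma0 + alpha * rho)          # Eq. (2), approximating Eq. (5)
        # Eq. (3): shift toward the filter center (sign convention is ours)
        s = nd_shift(s, (-rho * np.sin(phi), -rho * np.cos(phi)), order=1)
        log_sum += w * np.log(s + 1e-12)
        weight_sum += w
    r = np.exp(log_sum / weight_sum)                          # Eq. (6): weighted geometric mean
    return np.where(r >= t * r.max(), r, 0.0)                 # threshold at fraction t
```

Rotation tolerance, as described above, would be obtained by repeating this computation with the φᵢ values offset by multiples of 15° (twelve orientations) and taking the pixel-wise maximum of the resulting responses.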

IMPROVING B-COSFIRE METHOD
To improve the results of the B-COSFIRE method, we hypothesized that we need to optimize the thresholding parameter values and the background artifact removal mechanism, as it was noticed that some background artifacts may increase the false positive ratio, which in turn affects the accuracy and precision. In this work, we have selected three parameters for optimization: the preprocessing threshold, the filter threshold, and the background artifact size. The 'preprocessing threshold' parameter is used to suppress input filter responses that are less than a defined value, while the 'filter threshold' shown in Eq. (6) is used to suppress the response of the B-COSFIRE filter: responses below the given fraction of the maximum response are suppressed, and any connected component smaller than the 'background artifact size' is deleted. Moreover, three background artifact removal algorithms are applied in this experiment to obtain an enhanced output. The first, called the background artifact filtering algorithm, focuses on removing all small, disconnected noisy objects, leaving only the vascular tree. The second, known as K-Median, uses the K-median clustering algorithm to remove the background artifacts. The third, known as black-and-white artifact clearance, is used to remove small objects and fill small holes.
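The paper does not give the exact algorithm for the background artifact filtering step; a minimal sketch of one plausible reading, dropping connected components smaller than the 'background artifact size', is shown below.

```python
import numpy as np
from scipy import ndimage

def remove_small_artifacts(binary_map, min_size):
    """Delete connected components smaller than min_size pixels from a
    binary vessel map; a sketch of the background artifact filtering step."""
    labels, n = ndimage.label(binary_map)                 # label connected components
    sizes = ndimage.sum(binary_map, labels, index=np.arange(1, n + 1))
    keep_ids = np.flatnonzero(sizes >= min_size) + 1      # labels start at 1
    return binary_map & np.isin(labels, keep_ids)
```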


Table 1 Range of parameter values for the optimization of results.

Parameter | Range start | Range end
Preprocessing threshold | 0.1 | 0.6
Filter threshold | 25 | 50
Background artifact size | 0 | 48

To evaluate the proposed approach, each dataset is divided into two equal sets, one for evaluation and the other for testing. The evaluation set is used as input to identify the best parameters. The training images of the DRIVE dataset are used for evaluation, while for STARE and CHASE-DB1 we split each dataset into two subsets, one for evaluation and the other for testing of the proposed method.

It is pertinent to highlight the simplicity of the employed optimization technique, which is inspired by the random hyperparameter search of Bergstra & Bengio (2012). We have defined a limited search range for each parameter, shown in Table 1. Afterwards, we conducted multiple experiments to identify the best combination of parameter values. For each experiment, the sensitivity, specificity, and accuracy are calculated and documented. The parameter optimization for each dataset is illustrated in Fig. 5. Initially, we performed a number of experiments on the evaluation sets with the aim of determining the best combination of parameters for B-COSFIRE. Afterwards, a grid search is employed to identify the combination of parameters that gives the optimal score of accuracy, sensitivity and specificity on the training images.

The parameter values are incremented to cover the whole studied search space until the optimal values are reached.
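A minimal sketch of such a grid search over the ranges of Table 1 is given below; segment_and_score is a hypothetical callable that runs the filter with the given parameters on the evaluation set and returns (sensitivity, specificity, accuracy), and the step sizes and the accuracy-first ranking are our assumptions.

```python
import itertools
import numpy as np

def grid_search(segment_and_score):
    """Exhaustive search over the parameter ranges of Table 1.
    segment_and_score(prep, filt, size) -> (se, sp, acc) is supplied by the caller."""
    prep_thresholds = np.round(np.arange(0.1, 0.7, 0.1), 1)   # 0.1 .. 0.6
    filter_thresholds = range(25, 51)                         # 25 .. 50
    artifact_sizes = range(0, 49)                             # 0 .. 48
    best = None
    for p, f, a in itertools.product(prep_thresholds, filter_thresholds, artifact_sizes):
        se, sp, acc = segment_and_score(p, f, a)
        key = (acc, se, sp)            # ranking assumption: accuracy first
        if best is None or key > best[0]:
            best = (key, (p, f, a))
    return best
```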

From another angle, to illustrate the benefits of our proposed approach, one-way analysis of variance (ANOVA) is used to compare the statistical significance of the different values of each parameter on the vessel segmentation results.
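Such a one-way ANOVA can be run per parameter by grouping the scores of all experiments by that parameter's value; a small sketch using scipy is shown below, with purely illustrative numbers.

```python
from scipy.stats import f_oneway

# Accuracy scores of the experiments grouped by preprocessing-threshold value
# (illustrative numbers only, not the paper's raw data).
acc_by_value = {
    0.3: [0.954, 0.954, 0.955],
    0.4: [0.952, 0.952, 0.954],
    0.5: [0.951, 0.954, 0.955],
}
f_stat, p_value = f_oneway(*acc_by_value.values())
significant = p_value <= 0.05   # significance criterion used in the paper
print(f"F = {f_stat:.3f}, p = {p_value:.3f}, significant: {significant}")
```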

EXPERIMENTAL EVALUATION
This section briefly describes the datasets used for conducting these experiments and the performance measures used on each of them.

Materials
The optimization experiments are performed on three publicly available datasets (DRIVE, STARE, and CHASE-DB1) to find the optimal parameters.

DRIVE dataset
The DRIVE dataset (Niemeijer et al., 2004) is a standardized set of fundus images used for benchmarking the effectiveness of vessel segmentation and classification. The images were captured in the Netherlands for diabetic retinopathy screening; data were collected from 400 diabetic subjects aged between 25 and 90 years. All images are JPEG-compressed. A Canon CR5 3CCD non-mydriatic camera was used for capturing the images. The resolution of each image is 768 by 584 pixels, with 8 bits per pixel. The training and test sets each contain 20 images. The dataset includes manually prepared ground-truth images and masks made by two experts.


Figure 5 Schematic representation of our proposed parameter optimization.
Full-size DOI: 10.7717/peerj.5855/fig-5


STARE dataset
The STARE dataset (Hoover, Kouznetsova & Goldbaum, 2000) is used for blood vessel segmentation. It is composed of 20 retinal images, 10 of which contain pathologies. The images were captured using a TopCon TRV-50 fundus camera. Each image has a resolution of 605 by 700 pixels with 8-bit RGB channels. Two sets of manually segmented images are provided, prepared by two different observers.

CHASE-DB1 dataset
The CHASE-DB1 dataset (Fraz et al., 2012c) was collected in England for the Child Heart and Health Study. It contains 28 high-resolution fundus images (14 pairs), each with a resolution of 1,280 by 960 pixels. The left- and right-eye images of 14 children were captured with a 30° field of view, using a Nidek NM-200-D fundus camera. The dataset is more prone to bias because each pair of images comes from the same subject.


Table 2 Performance metrics used in this work to compare the results.

Metric | Definition
Accuracy | Percentage of pixels correctly segmented in the dataset: Accuracy = (TP + TN) / (TP + FP + FN + TN)
Sensitivity | Sensitivity (Recall) = TP / (TP + FN)
Specificity | Specificity = TN / (TN + FP)
AUC | Area under the ROC curve; it measures how well the method distinguishes whether a pixel is a vessel pixel or a background one.

Performance measures
For the performance measures used in the experiments, let the input image be 'I' and the B-COSFIRE segmented image be 'Y'. To decide whether the segmented output 'Y' is correct, it is compared with the corresponding manually hand-labeled image, customarily called the ground truth (GT) label. The GT label is usually prepared for the retinal image by an experienced human observer in order to quantify the performance of the computer-generated segmentation. The comparison yields true positives (TP; pixels detected as vessel pixels in Y that appear as vessels in the GT label), false positives (FP; pixels classified as vessel pixels in Y that belong to the background in the GT label), true negatives (TN; pixels classified as background pixels in Y that appear as non-vessel pixels in the GT label), and false negatives (FN; pixels classified as background pixels in Y that appear as vessels in the GT label). Table 2 shows the performance metrics used to compare the results.
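As an illustration of these definitions, a minimal sketch for computing the confusion counts and the metrics of Table 2 from two boolean vessel maps is given below; the optional field-of-view mask is our assumption.

```python
import numpy as np

def segmentation_metrics(pred, gt, fov_mask=None):
    """Pixel-wise confusion counts and the metrics of Table 2.
    pred, gt: boolean vessel maps; fov_mask (optional) restricts the counts
    to the field of view, an assumption not stated in the text."""
    if fov_mask is not None:
        pred, gt = pred[fov_mask], gt[fov_mask]
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    tn = np.sum(~pred & ~gt)
    fn = np.sum(~pred & gt)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    sensitivity = tp / (tp + fn)   # recall
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```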

RESULTS
This section presents the quantitative performance results obtained by the proposed algorithm, together with visual illustrations of the segmented retinal vasculature for the best- and worst-case accuracies obtained on the retinal image datasets.

Quantitative performance results
For optimization, the empirical experiments are performed on the training images of each dataset (DRIVE, STARE, and CHASE-DB1) using the parameter optimization procedure (see Fig. 5). The green channel is used in all experiments. Table 3 summarizes the achieved optimal values of the studied parameters (preprocessing threshold, filter threshold, and background artifact size) for each dataset. The selected values represent the best combination, which reflects a very sensitive balance between the measurements and highlights a critical evaluation issue for any proposed solution. It is clear from the repeated empirical experiments, applied over a range of parameter values, that the performance with the given parameters is higher than the B-COSFIRE results with respect to sensitivity, specificity, and accuracy, as summarized in Table 4.


Table 3 'Best combination' summary performance results.

Dataset | Prep. threshold | Filter threshold | Background artifact size | Sensitivity | Specificity | Accuracy
DRIVE | 0.3 | 30 | 48 | 0.790 | 0.971 | 0.955
STARE | 0.5 | 27 | 18 | 0.865 | 0.961 | 0.953
CHASE-DB1 | 0.2 | 31 | 38 | 0.800 | 0.964 | 0.953

(The first three value columns are the best parameter combination; the last three are the optimized results.)

Table 4 Summary results for the achieved improvement compared to the original B-COSFIRE results on DRIVE, STARE and CHASE-DB1.

Dataset | Approach | Sensitivity | Specificity | Accuracy
DRIVE | Our approach | 0.791 | 0.971 | 0.955
DRIVE | B-COSFIRE | 0.766 | 0.970 | 0.944
STARE | Our approach | 0.865 | 0.961 | 0.953
STARE | B-COSFIRE | 0.772 | 0.971 | 0.950
CHASE-DB1 | Our approach | 0.800 | 0.964 | 0.953
CHASE-DB1 | B-COSFIRE | 0.759 | 0.959 | 0.939

Figure 6 Optimized B-COSFIRE vs. B-COSFIRE performance results on the datasets: (A) DRIVE, (B) STARE and (C) CHASE-DB1.

Full-size DOI: 10.7717/peerj.5855/fig-6

The bar charts in Fig. 6 visualize the comparison between the B-COSFIRE method and the optimized B-COSFIRE method with respect to sensitivity, specificity, and accuracy on the DRIVE, STARE and CHASE-DB1 datasets.

Segmented vasculature
Figures 7–9 show the segmentation results on the DRIVE, STARE, and CHASE-DB1 datasets, respectively. From each dataset, the two images with the best- and worst-case accuracies are selected for the visual results.

Quantitative results with optimal parameter selection
Our approach is to optimize and select the parameters that obtain higher performance measures compared with the B-COSFIRE method. The selected best parameter combination is marked '(best)' in Tables 5–7 for DRIVE, STARE, and CHASE-DB1, respectively. The results are sorted by sensitivity in descending order.


Figure 7 DRIVE dataset segmentation result: (A) original image, (B) ground truth and (C) segmented vessels.

Full-size DOI: 10.7717/peerj.5855/fig-7


DISCUSSION
This work has improved the results of the B-COSFIRE method compared to recent work on B-COSFIRE, improving eight out of nine performance measures, as illustrated in Table 4. Moreover, the ANOVA analysis identified the parameter values that enhanced the segmentation output on DRIVE, STARE and CHASE-DB1 in the three performance measures (Acc., Se., and Sp.); it showed that the preprocessing threshold and filter threshold parameters contributed significantly to this optimization, while the contribution of the background artifact size is insignificant, as detailed in Tables 8–9. Finally, the improvement is evident in Tables 10–12 when compared with other state-of-the-art vessel segmentation results.

In this section, we have quantitatively identified the significant impact of each selectedparameter on the vessel segmentation results using ANOVA inspections. The comparisonof the optimized results with other methods is also discussed.


Figure 8 STARE dataset segmentation result: (A) original image, (B) ground truth and (C) segmented vessels.

Full-size DOI: 10.7717/peerj.5855/fig-8

ANOVA inspections
To strengthen the empirical experiment, one-way analysis of variance (ANOVA) is used to compare the statistical significance of the different values of each parameter on the segmentation results.

Tables 8 and 9 show that the filter threshold has a significant effect on the specificity (TN) for all the datasets under study, whereas the background artifact size has no significant effect on the results for any of the three datasets. The preprocessing threshold, on the other hand, has a significant impact on all the results for the DRIVE and STARE datasets, but not for CHASE-DB1.

Different possible combinations of one-way ANOVA analyses are conducted to specify the significant impact of each parameter on the results, and the conclusions are as follows:
1. The preprocessing threshold has a significant impact on the results for DRIVE and STARE, and the filter threshold has an impact on the specificity; this indicates that an error in setting the threshold value decreases the true negative rate.
2. Based on the ANOVA analysis, the background artifact size factor is found to be statistically insignificant for the accuracy and sensitivity of the improved method.
3. In the CHASE-DB1 dataset, the preprocessing threshold has an insignificant impact, which could be due to the high level of background artifacts in this dataset's images, while the filter threshold factor has a high impact for CHASE-DB1; deviating from the right balance in this threshold directly affects all the confusion matrix metrics.
4. For CHASE-DB1, the filter threshold factor is significant and has an impact on the results; therefore, the highest value of the threshold (i.e., 49) results in higher accuracy, precision, and specificity.
5. For the DRIVE and STARE datasets, the preprocessing threshold is a significant factor for accuracy, precision, and specificity, whereas the filter threshold factor is significant only for specificity.


Figure 9 CHASE-DB1 dataset segmentation result: (A) original image, (B) ground truth and (C) segmented vessels.

Full-size DOI: 10.7717/peerj.5855/fig-9

Table 5 Experiment-wise performance results on the DRIVE dataset (the best parameter combination is marked).

Experiment No. | Prep. threshold | Filter threshold | Background artifact size | Sensitivity | Specificity | Accuracy
1 | 0.4 | 30 | 18 | 0.813 | 0.965 | 0.952
2 | 0.4 | 30 | 28 | 0.811 | 0.966 | 0.952
3 | 0.5 | 30 | 18 | 0.811 | 0.965 | 0.951
4 | 0.4 | 31 | 48 | 0.798 | 0.969 | 0.954
5 | 0.4 | 32 | 18 | 0.794 | 0.970 | 0.954
6 | 0.3 | 30 | 28 | 0.794 | 0.970 | 0.954
7 | 0.4 | 32 | 28 | 0.792 | 0.970 | 0.954
8 | 0.5 | 32 | 18 | 0.792 | 0.970 | 0.954
9 | 0.3 | 30 | 38 | 0.792 | 0.970 | 0.954
10 | 0.3 | 30 | 48 | 0.790 | 0.971 | 0.955 (best)
11 | 0.4 | 32 | 48 | 0.788 | 0.971 | 0.955
12 | 0.5 | 32 | 38 | 0.788 | 0.971 | 0.954
13 | 0.5 | 32 | 48 | 0.787 | 0.971 | 0.955


Table 6 Experiment-wise performance results on the STARE dataset (the best parameter combination is marked).

Experiment No. | Prep. threshold | Filter threshold | Background artifact size | Sensitivity | Specificity | Accuracy
1 | 0.5 | 31 | 18 | 0.802 | 0.964 | 0.948
2 | 0.5 | 31 | 18 | 0.802 | 0.964 | 0.949
3 | 0.5 | 31 | 28 | 0.801 | 0.963 | 0.951
4 | 0.5 | 31 | 28 | 0.801 | 0.962 | 0.953
5 | 0.5 | 27 | 18 | 0.800 | 0.961 | 0.953 (best)
6 | 0.5 | 28 | 0 | 0.800 | 0.962 | 0.953
7 | 0.5 | 28 | 9 | 0.799 | 0.963 | 0.953
8 | 0.4 | 28 | 18 | 0.798 | 0.964 | 0.953
9 | 0.3 | 30 | 18 | 0.786 | 0.964 | 0.952
10 | 0.2 | 31 | 0 | 0.784 | 0.964 | 0.952
11 | 0.2 | 31 | 9 | 0.784 | 0.964 | 0.952
12 | 0.2 | 41 | 28 | 0.779 | 0.964 | 0.952
13 | 0.1 | 41 | 38 | 0.779 | 0.964 | 0.952

Table 7 Experiment-wise performance results on the CHASE-DB1 dataset (the best parameter combination is marked).

Experiment No. | Prep. threshold | Filter threshold | Background artifact size | Sensitivity | Specificity | Accuracy
1 | 0.1 | 31 | 18 | 0.802 | 0.964 | 0.953
2 | 0.2 | 31 | 18 | 0.802 | 0.964 | 0.953
3 | 0.1 | 31 | 28 | 0.801 | 0.964 | 0.953
4 | 0.2 | 31 | 28 | 0.801 | 0.964 | 0.953
5 | 0.1 | 31 | 38 | 0.801 | 0.964 | 0.953
6 | 0.2 | 31 | 38 | 0.800 | 0.964 | 0.953 (best)
7 | 0.3 | 31 | 18 | 0.800 | 0.964 | 0.953
8 | 0.3 | 31 | 28 | 0.799 | 0.964 | 0.953
9 | 0.3 | 31 | 38 | 0.798 | 0.965 | 0.953
10 | 0.4 | 31 | 18 | 0.786 | 0.964 | 0.952
11 | 0.4 | 31 | 28 | 0.785 | 0.964 | 0.952
12 | 0.4 | 31 | 38 | 0.784 | 0.965 | 0.952



Table 8 Analysis of variance (ANOVA) summary of parameter significance.

Parameter | Accuracy (DRIVE / STARE / CHASE-DB1) | Sensitivity (DRIVE / STARE / CHASE-DB1) | Specificity (DRIVE / STARE / CHASE-DB1)
Prep. threshold | ✓ ✓ ✗ | ✓ ✓ ✗ | ✓ ✓ ✗
Filter threshold | ✗ ✗ ✓ | ✗ ✗ ✓ | ✓ ✓ ✓
Artifact size | ✗ ✗ ✗ | ✗ ✗ ✗ | ✗ ✗ ✗

Notes: ✓ = significant factor (p-value ≤ 0.05); ✗ = insignificant factor (p-value > 0.05).

Table 9 Details of one-way analysis of variance (ANOVA).

Dataset | Parameter | Acc. | Se. | Sp. | Conclusion
DRIVE | Preprocessing threshold | ✓ 0.000 | ✓ 0.000 | ✓ 0.000 | As p-value = 0.000 ≤ 0.05, the preprocessing threshold values (0.1, 0.2, 0.3, 0.4, 0.5, 0.6) are significant with respect to accuracy as well as sensitivity and specificity.
DRIVE | Filter threshold | ✗ 1.000 | ✗ 0.252 | ✓ 0.000 | As p-value ≤ 0.05 only for specificity, the filter threshold parameter is significant only for specificity on DRIVE.
DRIVE | Artifact size | ✗ 0.995 | ✗ 0.852 | ✗ 0.886 | Insignificant, as p-value > 0.05.
STARE | Preprocessing threshold | ✓ 0.000 | ✓ 0.000 | ✓ 0.000 | As p-value = 0.000 ≤ 0.05, the preprocessing threshold values (0.1, 0.2, 0.3, 0.4, 0.5, 0.6) are significant with respect to accuracy as well as sensitivity and specificity.
STARE | Filter threshold | ✗ 0.975 | ✗ 1.000 | ✓ 0.003 | As p-value ≤ 0.05 only for specificity, the filter threshold parameter is significant only for specificity on STARE.
STARE | Artifact size | ✗ 0.999 | ✗ 1.000 | ✗ 0.992 | Insignificant, as p-value > 0.05.
CHASE-DB1 | Preprocessing threshold | ✗ 0.801 | ✗ 0.998 | ✗ 0.998 | Insignificant, as p-value > 0.05 for all three measures.
CHASE-DB1 | Filter threshold | ✓ 0.000 | ✓ 0.000 | ✓ 0.000 | As p-value = 0.000 ≤ 0.05 for all three performance measures, the filter threshold parameter is significant for accuracy, sensitivity and specificity.
CHASE-DB1 | Artifact size | ✗ 0.947 | ✗ 0.967 | ✗ 0.972 | Insignificant, as p-value > 0.05.

Notes: (✓) significant factor if p-value ≤ 0.05; (✗) insignificant factor if p-value > 0.05. The values in the Acc., Se., and Sp. columns are the p-values of the ANOVA analysis for the corresponding (dataset, parameter, performance measure) combination.

Comparison with other methods
The performance measures (sensitivity, specificity and accuracy) of the proposed approach are compared with previously published methodologies in Tables 10–12 for the DRIVE, STARE and CHASE-DB1 datasets, respectively. It is clear from these results that the proposed approach gives better results than the previous methods. Tables 10–12 show that our results are better than the other state-of-the-art supervised and unsupervised vessel segmentation methods in terms of accuracy, sensitivity and specificity; moreover, comparing the average time for processing one image, our optimization approach outperforms the majority of approaches.


Table 10 Vessel segmentation performance on the DRIVE dataset.

Method type | Authors | Se. | Sp. | Acc. | AUC | Time
Unsupervised | This work | 0.790 | 0.971 | 0.955 | – | 8 s
Unsupervised | Azzopardi & Petkov (2013b) | 0.766 | 0.970 | 0.944 | 0.961 | 10 s
Unsupervised | Zhang et al. (2016) | 0.747 | 0.976 | 0.947 | 0.952 | –
Unsupervised | Yin et al. (2015) | 0.725 | 0.979 | 0.940 | – | –
Unsupervised | Roychowdhury, Koozekanani & Parhi (2015) | 0.740 | 0.978 | 0.949 | 0.967 | 2.5 min
Unsupervised | Fraz et al. (2012a) | 0.715 | 0.976 | 0.943 | – | 5 min
Supervised | Orlando, Prokofyeva & Blaschko (2017) | 0.790 | 0.969 | – | – | –
Supervised | Dasgupta & Singh (2017) | 0.7691 | 0.9801 | 0.9533 | – | –
Supervised | Fu et al. (2016) | 0.760 | – | 0.952 | – | 1.3 s
Supervised | Strisciuglio et al. (2016) | 0.778 | 0.970 | 0.945 | 0.960 | –
Supervised | Li et al. (2016) | 0.757 | 0.982 | 0.953 | 0.974 | 1.2 min
Supervised | Fraz et al. (2012c) | 0.741 | 0.981 | 0.948 | 0.975 | 2 min
Supervised | Marín et al. (2011) | 0.707 | 0.980 | 0.945 | 0.959 | 1.5 min

Table 11 Vessel segmentation performance on the STARE dataset.

Method type | Authors | Se. | Sp. | Acc. | AUC | Time
Unsupervised | This work | 0.858 | 0.961 | 0.953 | – | 8 s
Unsupervised | Azzopardi & Petkov (2013b) | 0.772 | 0.970 | 0.950 | 0.956 | 10 s
Unsupervised | Zhang et al. (2016) | 0.768 | 0.976 | 0.955 | 0.961 | –
Unsupervised | Yin et al. (2015) | 0.854 | 0.942 | 0.933 | – | –
Unsupervised | Roychowdhury, Koozekanani & Parhi (2015) | 0.732 | 0.984 | 0.956 | 0.967 | 2.5 min
Unsupervised | Fraz et al. (2012a) | 0.731 | 0.968 | 0.944 | – | 2 min
Unsupervised | Al-Diri, Hunter & Steel (2009) | 0.752 | 0.968 | – | – | 11 min
Supervised | Orlando, Prokofyeva & Blaschko (2017) | 0.768 | 0.974 | – | – | –
Supervised | Strisciuglio et al. (2016) | 0.805 | 0.971 | 0.953 | 0.964 | –
Supervised | Li et al. (2016) | 0.773 | 0.984 | 0.963 | 0.988 | 1.2 min
Supervised | Fu et al. (2016) | 0.741 | – | 0.959 | – | –
Supervised | Fraz et al. (2012c) | 0.755 | 0.976 | 0.953 | 0.977 | 2 min
Supervised | Marín et al. (2011) | 0.694 | 0.982 | 0.953 | 0.977 | 1.5 min


It is pertinent to mention that the quantitative performance measures (sensitivity, specificity and accuracy) achieved by the proposed method are better than those of other state-of-the-art supervised and unsupervised vessel segmentation methods. We have considered multi-objective optimization, where the three performance measures are optimized simultaneously. The specificity reported by Dasgupta & Singh (2017) is higher than that of the proposed method; however, their reported sensitivity and accuracy are lower than ours. Furthermore, Dasgupta & Singh (2017) used only the DRIVE database for performance evaluation; the performance on the more challenging datasets (STARE and CHASE-DB1) is not reported. Fu et al. (2016) reported higher accuracy values, but their reported sensitivity is lower and they did not report the specificity.


Table 12 Vessel segmentation performance on the CHASE-DB1 dataset.

Method type | Authors | Se. | Sp. | Acc. | AUC | Time
Unsupervised | This work | 0.800 | 0.964 | 0.953 | – | 8 s
Unsupervised | Azzopardi & Petkov (2013b) | 0.759 | 0.959 | 0.939 | 0.949 | 10 s
Unsupervised | Zhang et al. (2016) | 0.756 | 0.966 | 0.946 | 0.956 | –
Unsupervised | Roychowdhury, Koozekanani & Parhi (2015) | 0.762 | 0.957 | 0.947 | 0.623 | 2.5 min
Unsupervised | Fraz et al. (2012a) | 0.722 | 0.971 | 0.947 | 0.971 | –
Supervised | Orlando, Prokofyeva & Blaschko (2017) | 0.728 | 0.971 | – | – | –
Supervised | Li et al. (2016) | 0.751 | 0.979 | 0.958 | 0.972 | 1.2 min
Supervised | Fu et al. (2016) | 0.713 | – | 0.948 | – | 1.3 s
Supervised | Fraz et al. (2012c) | 0.755 | 0.976 | 0.953 | 0.977 | 2 min


The approximate time required to segment one fundus image is 8 s when run on a 2.7 GHz CPU with 16 GB of RAM under the Windows 10 operating system. The proposed work is currently implemented in Matlab 2017b; the implementation could be computationally enhanced further. As shown in Tables 10–12, the processing time of this work, which includes the processing of the bar and half-bar filters, is close to that of the most recently presented approaches. It is worth noting that we used a single processor for running the proposed optimization; it could be accelerated by running its computational model using GPU programming.

CONCLUSION AND FUTURE WORK
B-COSFIRE is a generic algorithm for vessel segmentation in retinal fundus images; optimizing its parameters yields more efficient vessel segmentation with higher accuracy, and it can also be tuned to detect and recognize patterns in videos. In this work, we have introduced a mechanism that improves the results of B-COSFIRE by optimizing the suppression mechanism for the filter input and output thresholds; this idea outperformed the reported B-COSFIRE vessel segmentation results. Three parameters are chosen for optimization: the preprocessing threshold, the filter (post-processing) threshold, and the background artifact size. The analysis is done with the help of ANOVA to show the impact of each parameter on the results and to evaluate the significance or insignificance of these parameters. The ANOVA analysis of the performed experiments shows a significant impact of the first two parameters, the preprocessing threshold and the filter threshold, while the third parameter, the background artifact size, shows an insignificant impact on the results. The empirical experiments have evaluated and identified the new parameter configurations on three datasets, and the selection of these optimized parameters makes this work achieve better results than the standard B-COSFIRE algorithm. Optimization of the three parameters discussed has outperformed the standard B-COSFIRE sensitivity, specificity, and accuracy on DRIVE and CHASE-DB1. On the STARE dataset, the selected combinations achieved higher accuracy and sensitivity, while the specificity was close to the reported B-COSFIRE results. This indicates that optimizing the preprocessing threshold and the filter threshold is not the whole optimization story: although they are significant according to the ANOVA analysis, other parameters such as σ, φ, and ρ need to be optimized for better results.


ADDITIONAL INFORMATION AND DECLARATIONS

Funding
The authors received no funding for this work.

Competing Interests
The authors declare there are no competing interests.

Author Contributions
• Sufian A. Badawi conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, and approved the final draft.
• Muhammad Moazam Fraz conceived and designed the experiments, analyzed the data, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, and approved the final draft.

Data Availability
The following information was supplied regarding data availability:

http://vision.seecs.edu.pk/optimized_bcosfire/.

REFERENCES
Abràmoff MD, Garvin MK, Sonka M. 2010. Retinal imaging and image analysis. IEEE Reviews in Biomedical Engineering 3:169–208 DOI 10.1109/RBME.2010.2084567.
Al-Diri B, Hunter A, Steel D. 2009. An active contour model for segmenting and measuring retinal vessels. IEEE Transactions on Medical Imaging 28(9):1488–1497 DOI 10.1109/TMI.2009.2017941.
Azzopardi G, Fernández-Robles L, Alegre E, Petkov N. 2016. Increased generalization capability of trainable COSFIRE filters with application to machine vision. In: Pattern recognition (ICPR), 2016 23rd international conference on. Piscataway: IEEE, 3356–3361.
Azzopardi G, Greco A, Vento M. 2016. Gender recognition from face images with trainable COSFIRE filters. In: 2016 13th IEEE international conference on Advanced Video and Signal Based Surveillance (AVSS). Piscataway: IEEE, 235–241.
Azzopardi G, Petkov N. 2012. A CORF computational model of a simple cell that relies on LGN input outperforms the Gabor function model. Biological Cybernetics 106(3):177–189 DOI 10.1007/s00422-012-0486-6.
Azzopardi G, Petkov N. 2013a. Automatic detection of vascular bifurcations in segmented retinal images using trainable COSFIRE filters. Pattern Recognition Letters 34:922–933 DOI 10.1016/j.patrec.2012.11.002.
Azzopardi G, Petkov N. 2013b. Trainable COSFIRE filters for keypoint detection and pattern recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(2):490–503 DOI 10.1109/TPAMI.2012.106.
Azzopardi G, Petkov N. 2014. Ventral-stream-like shape representation: from pixel intensity values to trainable object-selective COSFIRE models. Frontiers in Computational Neuroscience 8:80 DOI 10.3389/fncom.2014.00080.
Azzopardi G, Strisciuglio N, Vento M, Petkov N. 2015. Trainable COSFIRE filters for vessel delineation with application to retinal images. Medical Image Analysis 19(1):46–57 DOI 10.1016/j.media.2014.08.002.
Bergstra J, Bengio Y. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research 13(Feb):281–305.
Cadieu C, Kouh M, Pasupathy A, Connor CE, Riesenhuber M, Poggio T. 2007. A model of V4 shape selectivity and invariance. Journal of Neurophysiology 98(3):1733–1750 DOI 10.1152/jn.01265.2006.
Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL. 2014. Semantic image segmentation with deep convolutional nets and fully connected CRFs. ArXiv preprint. arXiv:1412.7062.
Cinsdikici MG, Aydın D. 2009. Detection of blood vessels in ophthalmoscope images using MF/ant (matched filter/ant colony) algorithm. Computer Methods and Programs in Biomedicine 96(2):85–95 DOI 10.1016/j.cmpb.2009.04.005.
Dasgupta A, Singh S. 2017. A fully convolutional neural network based structured prediction approach towards the retinal vessel segmentation. In: Biomedical imaging (ISBI 2017), 2017 IEEE 14th international symposium on. Piscataway: IEEE, 248–251.
Fraz MM, Barman SA. 2014. Computer vision algorithms applied to retinal vessel segmentation and quantification of vessel caliber. Image Analysis and Modeling in Ophthalmology 49:49–84 DOI 10.1201/b16510-5.
Fraz MM, Barman SA, Remagnino P, Hoppe A, Basit A, Uyyanonvara B, Rudnicka AR, Owen CG. 2012a. An approach to localize the retinal blood vessels using bit planes and centerline detection. Computer Methods and Programs in Biomedicine 108(2):600–616 DOI 10.1016/j.cmpb.2011.08.009.
Fraz MM, Basit A, Barman S. 2013. Application of morphological bit planes in retinal blood vessel extraction. Journal of Digital Imaging 26(2):274–286 DOI 10.1007/s10278-012-9513-3.
Fraz MM, Remagnino P, Hoppe A, Uyyanonvara B, Rudnicka AR, Owen CG, Barman SA. 2012b. Blood vessel segmentation methodologies in retinal images—a survey. Computer Methods and Programs in Biomedicine 108(1):407–433 DOI 10.1016/j.cmpb.2012.03.009.
Fraz MM, Remagnino P, Hoppe A, Uyyanonvara B, Rudnicka AR, Owen CG, Barman SA. 2012c. An ensemble classification-based approach applied to retinal blood vessel segmentation. IEEE Transactions on Biomedical Engineering 59(9):2538–2548 DOI 10.1109/TBME.2012.2205687.
Fu H, Xu Y, Wong DWK, Liu J. 2016. Retinal vessel segmentation via deep learning network and fully-connected conditional random fields. In: Biomedical imaging (ISBI), 2016 IEEE 13th international symposium on. Piscataway: IEEE, 698–701.
Gecer B, Azzopardi G, Petkov N. 2017. Color-blob-based COSFIRE filters for object recognition. Image and Vision Computing 57:165–174 DOI 10.1016/j.imavis.2016.10.006.
Hari V, Raj VJ, Gopikakumari R. 2017. Quadratic filter for the enhancement of edges in retinal images for the efficient detection and localization of diabetic retinopathy. Pattern Analysis and Applications 20(1):145–165 DOI 10.1007/s10044-015-0480-4.
Hoover A, Kouznetsova V, Goldbaum M. 2000. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Transactions on Medical Imaging 19(3):203–210 DOI 10.1109/42.845178.
Kanski JJ, Bowling B, Nischal KK, Pearson A. 2011. Clinical ophthalmology: a systematic approach. Edinburgh: Elsevier/Saunders.
Krizhevsky A, Sutskever I, Hinton GE. 2012. ImageNet classification with deep convolutional neural networks. In: NIPS'12 proceedings of the 25th international conference on neural information processing systems—Volume 1. La Jolla: Neural Information Processing Systems, 1097–1105.
Lam BS, Gao Y, Liew AW-C. 2010. General retinal vessel segmentation using regularization-based multiconcavity modeling. IEEE Transactions on Medical Imaging 29(7):1369–1381 DOI 10.1109/TMI.2010.2043259.
Lau QP, Lee ML, Hsu W, Wong TY. 2013. Simultaneously identifying all true vessels from segmented retinal images. IEEE Transactions on Biomedical Engineering 60(7):1851–1858 DOI 10.1109/TBME.2013.2243447.
Li Q, Feng B, Xie L, Liang P, Zhang H, Wang T. 2016. A cross-modality learning approach for vessel segmentation in retinal images. IEEE Transactions on Medical Imaging 35(1):109–118 DOI 10.1109/TMI.2015.2457891.
Long J, Shelhamer E, Darrell T. 2015. Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 3431–3440.
Maninis K-K, Pont-Tuset J, Arbeláez P, Van Gool L. 2016. Deep retinal image understanding. In: International conference on medical image computing and computer-assisted intervention. Cham: Springer, 140–148.
Marín D, Aquino A, Gegúndez-Arias ME, Bravo JM. 2011. A new supervised method for blood vessel segmentation in retinal images by using gray-level and moment invariants-based features. IEEE Transactions on Medical Imaging 30(1):146–158 DOI 10.1109/TMI.2010.2064333.
Martinez-Perez ME, Hughes AD, Stanton AV, Thom SA, Bharath AA, Parker KH. 1999. Segmentation of retinal blood vessels based on the second directional derivative and region growing. In: Image processing, 1999. ICIP 99. Proceedings. 1999 international conference on, vol. 2. Piscataway: IEEE, 173–176.
Mendonça AM, Campilho A. 2006. Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Transactions on Medical Imaging 25(9):1200–1213.
Niemeijer M, Staal J, Van Ginneken B, Loog M, Abramoff MD. 2004. Comparative study of retinal vessel segmentation methods on a new publicly available database. In: Medical imaging 2004: image processing. Vol. 5370. Bellingham: International Society for Optics and Photonics, 648–657.
Orlando JI, Prokofyeva E, Blaschko MB. 2017. A discriminatively trained fully connected conditional random field model for blood vessel segmentation in fundus images. IEEE Transactions on Biomedical Engineering 64(1):16–27 DOI 10.1109/TBME.2016.2535311.
Ricci E, Perfetti R. 2007. Retinal blood vessel segmentation using line operators and support vector classification. IEEE Transactions on Medical Imaging 26(10):1357–1365 DOI 10.1109/TMI.2007.898551.
Roychowdhury S, Koozekanani DD, Parhi KK. 2015. Iterative vessel segmentation of fundus images. IEEE Transactions on Biomedical Engineering 62(7):1738–1749 DOI 10.1109/TBME.2015.2403295.
Soares JV, Leandro JJ, Cesar RM, Jelinek HF, Cree MJ. 2006. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Transactions on Medical Imaging 25(9):1214–1222 DOI 10.1109/TMI.2006.879967.
Sofka M, Stewart CV. 2006. Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures. IEEE Transactions on Medical Imaging 25(12):1531–1546 DOI 10.1109/TMI.2006.884190.
Staal J, Abràmoff MD, Niemeijer M, Viergever MA, Van Ginneken B. 2004. Ridge-based vessel segmentation in color images of the retina. IEEE Transactions on Medical Imaging 23(4):501–509 DOI 10.1109/TMI.2004.825627.
Strisciuglio N, Azzopardi G, Petkov N. 2017. Detection of curved lines with B-COSFIRE filters: a case study on crack delineation. In: International conference on computer analysis of images and patterns. Cham: Springer, 108–120.
Strisciuglio N, Azzopardi G, Vento M, Petkov N. 2016. Supervised vessel delineation in retinal fundus images with the automatic selection of B-COSFIRE filters. Machine Vision and Applications 27(8):1137–1149 DOI 10.1007/s00138-016-0781-7.
Strisciuglio N, Petkov N. 2017. Delineation of line patterns in images using B-COSFIRE filters. In: Bioinspired intelligence (IWOBI), 2017 international conference and workshop on. Funchal, Madeira Island: IEEE, 1–6.
Wisaeng K, Sa-ngiamvibool W. 2018. Improved fuzzy C-means clustering in the process of exudates detection using mathematical morphology. Soft Computing 22(8):2753–2764 DOI 10.1007/s00500-017-2532-8.
Wong TY, Klein R, Sharrett AR, Duncan BB, Couper DJ, Tielsch JM, Klein BE, Hubbard LD. 2002. Retinal arteriolar narrowing and risk of coronary heart disease in men and women: the Atherosclerosis Risk in Communities Study. JAMA 287(9):1153–1159.
Yin B, Li H, Sheng B, Hou X, Chen Y, Wu W, Li P, Shen R, Bao Y, Jia W. 2015. Vessel extraction from non-fluorescein fundus images using orientation-aware detector. Medical Image Analysis 26(1):232–242 DOI 10.1016/j.media.2015.09.002.
Yin Y, Adel M, Bourennane S. 2012. Retinal vessel segmentation using a probabilistic tracking method. Pattern Recognition 45(4):1235–1244 DOI 10.1016/j.patcog.2011.09.019.
Zhang J, Dashtbozorg B, Bekkers E, Pluim JP, Duits R, Ter Haar Romeny BM. 2016. Robust retinal vessel segmentation via locally adaptive derivative frames in orientation scores. IEEE Transactions on Medical Imaging 35(12):2631–2644 DOI 10.1109/TMI.2016.2587062.