
ChongLingChee_FYP2010.docx - CAPSTONE PROJECT

Student: E0604276 (PI No.)

Supervisor: Dr Rajendra Acharya Udyavara

Project Code: JAN2010/BME/0016

SIM UNIVERSITY, SCHOOL OF SCIENCE AND TECHNOLOGY

AUTOMATED DETECTION OF DIABETIC RETINOPATHY USING DIGITAL FUNDUS

IMAGES

A project report submitted to SIM University in partial fulfilment of the requirements for the degree of

Bachelor of Biomedical Engineering

November 2010

BME499 ENG499 MTD499 ICT499 MTH499 CAPSTONE PROJECT REPORT


TABLE OF CONTENTS

ABSTRACT

ACKNOWLEDGEMENTS

LIST OF FIGURES

LIST OF TABLES

CHAPTER ONE

AIMS AND INTRODUCTION

1.1 Background

1.2 Objectives

1.3 Scope

CHAPTER TWO

LITERATURE

2.1 Anatomy structure of the human eye

2.1.1 The Cornea

2.1.2 The Aqueous Humor

2.1.3 The Iris

2.1.4 The Pupil

2.1.5 The Lens

2.1.6 The Vitreous Humor

2.1.7 The Sclera

2.1.8 The Optic Disc

2.1.9 The Retina

2.1.10 Macula

2.1.11 Fovea

2.2 Diabetic Retinopathy (DR) and stages

2.3 Diabetic Retinopathy (DR) features

2.3.1 Blood Vessels


2.3.2 Microaneurysms

2.3.3 Exudates

2.4 Diabetic Retinopathy (DR) examination methods

2.4.1 Ophthalmoscopy (Indirect and Direct)

2.4.2 Fluorescein Angiography

2.4.3 Fundus Photography

2.5 Diabetic Retinopathy (DR) treatment

2.5.1 Scatter Laser treatment

2.5.2 Vitrectomy

2.5.3 Focal Laser treatment

2.5.4 Laser Photocoagulation

CHAPTER THREE

METHODS AND MATERIALS

3.1 System block diagram

3.2 Image processing techniques

3.2.1 Image preprocessing

3.2.2 Structuring Element

3.2.2.1 Disk shaped Structuring Element (SE)

3.2.2.2 Ball shaped Structuring Element (SE)

3.2.2.3 Octagon shaped Structuring Element (SE)

3.2.3 Morphological image processing

3.2.3.1 Morphological operations

3.2.3.2 Dilation and Erosion

3.2.3.3 Dilation

3.2.3.4 Erosion

3.2.3.5 Opening and Closing

3.2.4 Thresholding

3.2.5 Edge detection


3.2.6 Median filtering

3.3 Feature extraction

3.3.1 Blood vessels detection

3.3.2 Microaneurysms detection

3.3.3 Exudates detection

3.3.4 Texture analysis

3.4 Significance test

3.4.1 Student's t-test

3.5 Classification

3.5.1 Fuzzy

3.5.2 Gaussian Mixture Model (GMM)

CHAPTER FOUR

RESULTS

4.1 Graphical User Interface (GUI)

CHAPTER FIVE

CONCLUSION AND RECOMMENDATION

CHAPTER SIX

REFLECTIONS

REFERENCES

APPENDIX A: BOX PLOT FOR FEATURES (AREA)

APPENDIX B: BLOOD VESSELS MATLAB CODE

APPENDIX C: MICROANEURYSMS MATLAB CODE

APPENDIX D: EXUDATES MATLAB CODE

APPENDIX E: TEXTURES MATLAB CODE

APPENDIX F: MEETING LOGS

APPENDIX G: GANTT CHART


ABSTRACT

Diabetic retinopathy (DR) is a leading cause of blindness resulting from diabetes. The main aim of this project is to develop a system that automates the detection of DR from fundus images. The fundus images are first processed using morphological image processing techniques and texture analysis to extract features such as the areas of blood vessels, exudates, microaneurysms and textures. A significance test is then applied to determine which features are statistically significant (p ≤ 0.05). The selected features are input to fuzzy and Gaussian Mixture Model (GMM) classifiers for automatic classification. Finally, the best classifier, chosen on the basis of a correct-classification rate of 85.2% and an average classification rate of 85.2%, is used for the final graphical user interface (GUI).


ACKNOWLEDGEMENTS

I would like to thank my family for their support and encouragement.

I would like to thank the National University Hospital (NUH) of Singapore for providing the fundus images for this project.

I would like to thank UniSIM for the school facilities.

I would like to thank Fabian Pang for his patience and guidance on fuzzy and GMM

classification.

I would like to thank Jacqueline Tham, Vicky Goh, Mabel Loh, Brenda Ang and

Audrey Tan for their moral support and encouragement.

Last but most importantly, I would like to thank my project supervisor, Dr Rajendra

Acharya Udyavara for his kindness, patience, guidance, advice and enlightenment.


LIST OF FIGURES

Figure 2.1: Anatomy structure of the eye

Figure 2.1.10: Location of macula, fovea and optic disc

Figure 2.3.1: Retinal blood vessels

Figure 2.3.2: Microaneurysms in DR

Figure 2.3.3: Exudates in DR

Figure 3.1: System block diagram for the detection and classification of diabetic retinopathy

Figure 3.2.1a: Original image (left) and its histogram (right)

Figure 3.2.1b: Image after CLAHE (left) and its histogram (right)

Figure 3.2.2.1: Disk shaped structuring element

Figure 3.2.2.2: Ball shaped structuring element (nonflat ellipsoid)

Figure 3.2.2.3: Octagon shaped structuring element

Figure 3.2.3.3a: Original image

Figure 3.2.3.3b: Image after dilation with disk shaped SE

Figure 3.2.3.4a: Original image

Figure 3.2.3.4b: Image after erosion with disk shaped SE

Figure 3.2.3.5a: Opening operation with disk shaped SE image

Figure 3.2.3.5b: Closing operation with disk shaped SE image

Figure 3.2.4a: Original image

Figure 3.2.4b: Image with too high threshold value

Figure 3.2.4c: Image with too low threshold value

Figure 3.2.5a: Original image


Figure 3.2.5b: Sobel

Figure 3.2.5c: Prewitt

Figure 3.2.5d: Roberts

Figure 3.2.5e: Laplacian of Gaussian (LoG)

Figure 3.2.5f: Canny

Figure 3.2.6a: Illustration of a 3 x 3 median filter

Figure 3.2.6b: Original image (left) and image after median filtering (right)

Figure 3.3.1a: System block diagram for detecting blood vessels

Figure 3.3.1b: Normal retinal fundus image

Figure 3.3.1c: Green component

Figure 3.3.1d: Inverted green component

Figure 3.3.1e: Image after CLAHE

Figure 3.3.1f: Image after opening operation

Figure 3.3.1g: Image after subtraction

Figure 3.3.1h: Image after thresholding

Figure 3.3.1i: Image after median filtering

Figure 3.3.1j: Final image

Figure 3.3.1k: Final image (inverted)

Figure 3.3.2a: System block diagram for detecting microaneurysms

Figure 3.3.2b: Abnormal retinal fundus image

Figure 3.3.2c: Red component

Figure 3.3.2d: Inverted red component

Figure 3.3.2e: Image after Canny edge detection

Figure 3.3.2f: Image with boundary

Figure 3.3.2g: Image after boundary subtraction


Figure 3.3.2h: Image after filling up the holes or gaps

Figure 3.3.2i: Image after subtraction

Figure 3.3.2j: Blood vessels detection

Figure 3.3.2k: Blood vessels after edge detection

Figure 3.3.2l: Image after subtraction

Figure 3.3.2m: Image after filling holes or gaps

Figure 3.3.2n: Final image

Figure 3.3.3a: System block diagram for detecting exudates

Figure 3.3.3b: Abnormal retinal fundus image

Figure 3.3.3c: Green component

Figure 3.3.3d: Image after closing operation

Figure 3.3.3e: Image after column wise neighbourhood operation

Figure 3.3.3f: Image after thresholding

Figure 3.3.3g: Image after morphological closing

Figure 3.3.3h: Image after Canny edge detection

Figure 3.3.3i: Image after ROI

Figure 3.3.3j: Image after removing optic disc

Figure 3.3.3k: Image after removing border

Figure 3.3.3l: Final image

Figure 3.5: Block diagram of training and testing data

Figure 3.5.2: Block diagram of GMM method

Figure 4: Graphical plot for average percentage classification results from two classifiers

Figure 4.1: GUI


LIST OF TABLES

Table 2.2: Summary of the features of diabetic retinopathy

Table 3.2.3.2: Rules for dilation and erosion

Table 3.2.5: Methods and description of various edge detection algorithms

Table 3.4.1: Student's t-test results

Table 3.5.1a: testing1, testing2 and testing3 data output using fuzzy classifier

Table 3.5.1b: testing1 data output calculation using fuzzy classifier

Table 3.5.1c: testing2 data output calculation using fuzzy classifier

Table 3.5.1d: testing3 data output calculation using fuzzy classifier

Table 3.5.2a: testing1, testing2 and testing3 data output using GMM classifier

Table 3.5.2b: testing1 data output calculation using GMM classifier

Table 3.5.2c: testing2 data output calculation using GMM classifier

Table 3.5.2d: testing3 data output calculation using GMM classifier

Table 4a: Fuzzy classification results

Table 4b: GMM classification results


CHAPTER ONE

AIMS AND INTRODUCTION

1.1 BACKGROUND

Diabetes mellitus, commonly known as diabetes, is a chronic systemic disease of disordered metabolism of carbohydrate, protein and fat[21]. It is most notable for the condition in which a person has a high blood sugar (glucose) level, as a result of the body either being unable to produce enough insulin (type 1, insulin-dependent diabetes mellitus or IDDM[48]) or being insulin resistant (type 2, non-insulin-dependent diabetes mellitus or NIDDM[48]). Diabetes imposes a considerable disease burden[32], especially in developed countries. According to the Ministry of Health (MOH) in Singapore, 8.2% of the total population suffered from diabetes in 2004[32].

Diabetic retinopathy (DR) is one of the complications resulting from a prolonged diabetic condition, usually after ten to fifteen years of having diabetes. In DR, the high glucose level, or hyperglycemia, damages the tiny blood vessels inside the retina. These tiny blood vessels leak blood and fluid onto the retina, forming features such as microaneurysms, haemorrhages, hard exudates, cotton wool spots or venous loops[47]. In Singapore, DR affects about 60% of patients who have had diabetes for 15 years or more, and a percentage of these are at risk of developing blindness[44]. Despite these intimidating statistics, research indicates that at least 90% of these new cases could be prevented with proper and vigilant treatment and monitoring of the eyes[50].

Laser photocoagulation is an example of a surgical method that can reduce the risk of blindness in people who have proliferative retinopathy[9]. It is nevertheless of vital importance for diabetic patients to have regular eye checkups. Current examination methods used to detect and grade retinopathy include ophthalmoscopy (indirect and direct)[23], photography (fundus images) and fluorescein angiography. These methods of detecting and assessing diabetic retinopathy are manual, expensive and require trained ophthalmologists.


Therefore, it is important to have an automatic method of detecting diabetic retinopathy at an early stage, to retard its progression and prevent blindness, and thus encourage improvement in diabetic control. It can also significantly reduce the total annual economic cost of diabetes.

1.2 OBJECTIVES

The objective of this project is to implement automated detection of diabetic retinopathy (DR) using digital fundus images. MATLAB is used to extract and detect features such as blood vessels, microaneurysms, exudates and textures, which determine two general classes: normal or abnormal (DR) eye. Early detection of diabetic retinopathy enables medication or laser therapy to be performed to prevent or delay visual loss.

1.3 SCOPE

The scope of this project involves using various MATLAB imaging techniques (e.g. converting the image to binary format, erosion, dilation, boundary detection) to obtain the desired final image and the area of each feature (blood vessels, microaneurysms, exudates and textures), then applying a significance test (Student's t-test) to assess the discriminative power of the extracted values. Next, the features selected by the t-test are fed into the classifiers (fuzzy and Gaussian Mixture Model, or GMM) to obtain the average classification rate, sensitivity and specificity, and to classify the images into normal and abnormal classes.

Lastly, the data collected are used to develop a graphical user interface (GUI) that displays normal or abnormal (DR) eye images based on the best classifier.


CHAPTER TWO

LITERATURE

This chapter discusses the structure of the eye, the definition and stages of diabetic retinopathy (DR), examination and treatment methods, and DR features.

2.1 ANATOMY STRUCTURE OF THE HUMAN EYE

The eye is a hollow, spherical organ about 2.5 cm in diameter. It has a wall composed of three layers, and its interior spaces are filled with fluids that support the walls and maintain the shape of the eye[45]. Figure 2.1 shows the cross-sectional structure of the eye. The eyes are so important that four-fifths of all the information the brain receives comes from them. Section 2.1 explains some of the important parts of the eye.

Figure 2.1: Anatomy structure of the eye[3]


2.1.1 THE CORNEA

The cornea is a transparent medium situated at the front of the eye, covering the iris, pupil and anterior chamber, that helps to focus incoming light[20]. It has a water content of 78%[38] and is elliptical in shape, with vertical and horizontal diameters of 11 and 12 mm, respectively[38]. The cornea is supplied with oxygen and nutrients through the tear fluid rather than through blood vessels[28]; therefore, it contains no blood vessels. The function of the cornea is to refract and transmit light[38].

2.1.2 THE AQUEOUS HUMOR

The aqueous humor is the fluid in the front part of the eye, between the lens and the cornea. Its main function is to supply the cornea and the lens with nutrients and oxygen[28].

2.1.3 THE IRIS

The iris is a thin, pigmented, circular structure which regulates the amount of light that enters the eye[28]. Its function is to control the size of the pupil, adjusting it to the intensity of the lighting conditions[38]. By expanding the pupil, more light can enter; this reflex, known as the accommodation reflex[28], expands the pupil to allow more light in when focusing on distant objects or in darkness.

2.1.4 THE PUPIL

The pupil is a hole in the center of the iris. The size of the pupil determines the amount

of light that enters the eye. The pupil size is controlled by the dilator and sphincter muscles

of the iris[42].  It appears black because most of the light entering the pupil is absorbed by

the tissues inside the eye[36].


2.1.5 THE LENS

The lens is a transparent, biconvex structure in the eye that, along with the cornea, helps to refract light so that it is focused on the retina[27]. By changing its shape, the lens changes the focal distance of the eye so that it can focus on objects at different distances, allowing a sharp image to form on the retina.

2.1.6 THE VITREOUS HUMOR

The vitreous humor is the clear fluid that fills the eyeball between the lens and the retina. It is the largest chamber of the human eye, and its fluid is more than 95% water.

2.1.7 THE SCLERA

The sclera is the white, opaque tissue that acts as the eye's protective outer coat. Six tiny muscles connect to it around the eye and control the eye's movements. The optic nerve is attached to the sclera at the very back of the eye[42].

2.1.8 THE OPTIC DISC

The optic disc, also known as the optic nerve head, is where the optic nerve attaches to the eye[28]. There are no light-sensitive rods or cones to respond to a light stimulus at this point. This causes a break in the visual field called the "blind spot" or "physiological blind spot"[35]. Figure 2.1.10 shows the location of the optic disc.

2.1.9 THE RETINA

The retina is a thin layer of neural cells[38] that lines the inner back of the eye. It is light sensitive and absorbs light; the image signals received there are sent to the brain. The retina contains two kinds of light receptors: rods and cones. The rods absorb light in black


and white. The rods are responsible for night vision. The cones are colour sensitive and

absorb stronger light. The cones are responsible for colour vision.

2.1.10 MACULA

The macula is the area around the fovea[28]. It is an oval-shaped highly pigmented

yellow spot near the center of the retina[31] as shown in Figure 2.1.10. It is a small and

highly sensitive part of the retina responsible for detailed central vision.

Figure 2.1.10: Location of macula, fovea and optic disc

2.1.11 FOVEA

The fovea is the most central part of the macula. The visual cells located in the fovea are packed most tightly, resulting in optimal sharpness of vision. Unlike the rest of the retina, it has no blood vessels to interfere with the passage of light striking the foveal cone mosaic[15]. Figure 2.1.10 shows the location of the fovea.

2.2 DIABETIC RETINOPATHY (DR) AND STAGES

Diabetes is a chronic state caused by an abnormal increase in the blood glucose level, which damages the blood vessels. The tiny blood vessels that nourish the retina are damaged by the increased glucose level[47]. Diabetic retinopathy (DR) is one of the complications that affect the retinal capillaries: the arterial walls thicken and blockage of blood flow to the eye occurs.


DR can be broadly classified as non-proliferative diabetic retinopathy (NPDR) and

proliferative diabetic retinopathy (PDR)[47] as shown in Figure 2.2. There are four DR

stages:

1. Stage 1 – Background diabetic retinopathy (also termed mild or moderate non-proliferative retinopathy). At least one microaneurysm, with or without the presence of retinal haemorrhages, hard exudates, cotton wool spots or venous loops, will be present[6,7].

2. Stage 2 – Moderate non-proliferative retinopathy. Numerous microaneurysms and

retinal haemorrhages will be present. Cotton wool spots and a limited amount of

venous beading can also be seen[47]. Some blood vessels are starting to become

blocked.

3. Stage 3 – Severe non-proliferative retinopathy. Many features, such as haemorrhages and microaneurysms, are present in the retina. Many more blood vessels are now blocked, and although there is as yet little growth of new blood vessels, the deprived areas of the retina start to send signals to the body to grow new blood vessels for nourishment[38].

4. Stage 4 – Proliferative retinopathy. PDR is the advanced stage, in which the signals sent by the retina for nourishment trigger the growth of new blood vessels[22]. The main blood vessels become stiff and blockage of blood flow occurs. Small pockets of blood begin to form around the boundary of the main blood vessels. The new blood vessels are fragile, with thin walls, and when the walls burst, blood spatters form. Exudates (proteins and other lipids) and blood from the leakage form around the retina; in some cases the leakage may form on the fovea, resulting in sudden severe vision loss and blindness.


Figure 2.2: Stages of DR fundus images[51]


The features of each stage are summarised in Table 2.2.

Classification | Alternative terminology | Features
Background diabetic retinopathy | Mild/moderate non-proliferative diabetic retinopathy | Haemorrhages; oedema; microaneurysms; exudates; cotton wool spots; dilated veins
Pre-proliferative diabetic retinopathy | Severe/very severe non-proliferative retinopathy | Deep retinal haemorrhages in four quadrants; venous abnormalities; intraretinal microvascular abnormalities (IRMA); multiple cotton wool spots
Proliferative diabetic retinopathy | Proliferative diabetic retinopathy (PDR) | New vessels on optic disc; new vessels elsewhere
Advanced diabetic eye disease | Complications of proliferative diabetic retinopathy | Vitreous haemorrhage; retinal detachment; neovascular glaucoma

Table 2.2: Summary of the features of diabetic retinopathy[18]

2.3 DIABETIC RETINOPATHY (DR) FEATURES

Many features are present in a DR eye. However, since the main objective of this project is an automated system for early DR detection based on a subset of extracted features, only blood vessels, microaneurysms, exudates and textures (covered in the feature extraction section) are discussed here.

2.3.1 BLOOD VESSELS

In the normal retina, the main function of the blood vessels is to carry nutrients such as oxygen and blood to the eye (Figure 2.3.1). In the case of DR, the stimulation of the growth


of new fragile blood vessels is due to the blockage and thickening of the main blood vessels. When the main blood vessels are blocked, new vessels are triggered to grow in an attempt to send oxygen and nourishment to the eye. However, these new blood vessels are very fragile and abnormal. They are prone to rupture and leak fluids (proteins and lipids) and blood into the eye. This may not hinder the patient's sight if the leakage does not occur on the fovea or macula. However, if the blood spatters happen to fall on the fovea or macula, sudden loss of vision in that eye occurs, as the spatters block all light entering the eye.

Figure 2.3.1: Retinal blood vessels

Figure 2.3.2: Microaneurysms in DR

2.3.2 MICROANEURYSMS

Microaneurysms are small, sac-like outpouchings of the small vessels and capillaries[25], as shown in Figure 2.3.2. They are an early feature of DR and appear as small red dots in fundus photographs, caused by the ballooning of capillaries. They represent a small weakness in the retinal capillary wall that leaks blood and serum[18].


Figure 2.3.3: Exudates in DR

2.3.3 EXUDATES

Exudates, often described as hard exudates, are deposits of extravasated plasma proteins, especially lipoproteins, as shown in Figure 2.3.3. These proteins leak into the retinal tissue with serum and are left behind as the oedema fluid is absorbed; eventually they are cleared from the retina by macrophages[18]. They appear as yellow-white dots within the retina, seen as either individual spots or clusters[25], usually near the optic disc.

Sometimes the exudates form on the macula or fovea; as a result, there is sudden loss of vision in that eye, regardless of the diabetic retinopathy stage.

2.4 DIABETIC RETINOPATHY (DR) EXAMINATION METHODS

There are a few types of DR examination methods, mainly ophthalmoscopy (indirect and direct), fluorescein angiography and fundus photography.

2.4.1 OPHTHALMOSCOPY (INDIRECT AND DIRECT)


Direct ophthalmoscopy is an examination performed by the specialist in a dark room. A beam of light is shone through the pupil using an ophthalmoscope, allowing the specialist to view the back of the eyeball.

Indirect ophthalmoscopy is performed with a head- or spectacle-mounted source of illumination positioned in the middle of the forehead[26]. A bright light is shone into the eye using the instrument on the forehead, and a condensing lens is placed in front of the eye to intercept the fundus reflex. A real, inverted image of the fundus forms between the examiner and the patient[26].

2.4.2 FLUORESCEIN ANGIOGRAPHY

Fluorescein angiography is a test which allows the blood vessels in the back of the

eye to be photographed as a fluorescent dye is injected into the bloodstream via the hand or

arm[49]. The pupils will be dilated with eye drops and the yellow dye (Fluorescein Sodium)

is injected into a vein in the arm[49]. It is used to examine the blood circulation of the retina

using the dye tracing method.

2.4.3 FUNDUS PHOTOGRAPHY

Fundus photography is the use of a fundus camera to photograph the regions of the vitreous, retina, choroid and optic nerve[16]. Fundus photographs are only considered medically necessary where the results may influence the management of the patient. In general, fundus photography is performed to evaluate abnormalities in the fundus, follow the progress of a disease, plan the treatment for a disease, and assess the therapeutic effect of recent surgery[16]. In this report, the images used for image processing were taken with a fundus camera.

2.5 DIABETIC RETINOPATHY (DR) TREATMENT


Treatment of diabetic retinopathy varies depending on the extent of the disease [10].

During the early stages of DR, no treatment is needed unless macular oedema is present.

However, for advanced DR such as proliferative diabetic retinopathy, surgery is necessary.

2.5.1 SCATTER LASER TREATMENT

Advanced stage diabetic retinopathy is treated by performing scatter laser treatment.

During scatter laser treatment, an ophthalmologist uses a laser to "scatter" many small

burns across the retina. This causes leaking and abnormal blood vessels to shrink[10]. This

surgical method is used to reduce vision loss. However, if there is a significant amount of haemorrhage, scatter laser treatment is not suitable.

2.5.2 VITRECTOMY

A vitrectomy is performed under either local or general anesthesia. An

ophthalmologist makes a tiny incision in the eye and carefully removes the vitreous gel that

is clouded with blood. After the vitreous gel is removed from the eye, a clear salt solution is

injected to replace the contents[10].

2.5.3 FOCAL LASER TREATMENT

Leakage of fluid from blood vessels can sometimes lead to macular oedema, or

swelling of the retina. Focal laser treatment is performed to treat macular oedema. Several

hundred small burns are placed around the macula in order to reduce the amount of fluid

build-up in the macula[10].

2.5.4 LASER PHOTOCOAGULATION

Laser photocoagulation uses a powerful beam of light which, combined with ophthalmic equipment and lenses, can be focused on the retina[41]. Small bursts of laser are used to seal leaky blood vessels, destroy abnormal blood vessels, seal retinal tears, and destroy abnormal tissue in the back of the eye[41]. This procedure is used to treat patients in the proliferative diabetic retinopathy stage. The main advantage of using


this surgical method is its short duration; the patient can usually resume normal activities immediately.


CHAPTER THREE

METHODS AND MATERIALS

A total of 60 fundus images from subjects of various demographics were used in this project. The fundus images were obtained from the ophthalmology department of the National University Hospital (NUH) of Singapore and were taken at 720 x 576 pixels.

3.1 SYSTEM BLOCK DIAGRAM

Figure 3.1 shows the system block diagram for the identification of diabetic retinopathy. The input image is processed in MATLAB using the image processing techniques. Features such as the areas of blood vessels, microaneurysms, exudates and textures are extracted. The extracted features are then submitted to Student's t-test as a significance test (probability of true significance). The features with a high probability of true significance are passed to the classifiers (fuzzy and Gaussian Mixture Model, or GMM), which generate the average classification rate, sensitivity, specificity, etc. Lastly, the results generated by the classifiers determine the diabetic retinopathy (DR) class: normal or abnormal.
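As a concrete illustration of the significance-test step, the sketch below runs a two-sample Student's t-test on one feature measured in two groups. It assumes MATLAB's Statistics Toolbox (ttest2), and the feature values are made-up numbers for illustration, not the project's actual measurements.

```matlab
% Hypothetical feature values (e.g. blood vessel area), illustration only
normal_area   = [120 135 128 140 132 125];   % normal eyes
abnormal_area = [180 210 195 205 188 199];   % DR eyes

% Two-sample Student's t-test (Statistics Toolbox)
[h, p] = ttest2(normal_area, abnormal_area);

% A feature with p <= 0.05 is kept as input for the classifiers
if p <= 0.05
    fprintf('Feature is significant (p = %.4f); pass it to the classifiers\n', p);
end
```

Features whose p-value exceeds the threshold would simply be dropped before classification.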


[Figure 3.1 flow: Input Image → Image Processing Techniques → Feature Extraction (areas of blood vessels, microaneurysms, exudates, textures) → Significance Test (Student's t-test) → Classification (fuzzy and GMM classifiers) → Normal / Abnormal]

3.2 IMAGE PROCESSING TECHNIQUES

Image processing techniques are used to enhance the images and to carry out morphological image processing and texture analysis. They are also used to reduce image noise, adjust contrast and invert the images.
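A minimal sketch of these basic operations, assuming the Image Processing Toolbox is available; a synthetic image stands in for a real fundus photograph here.

```matlab
% Synthetic RGB image standing in for a fundus photograph
rgb = uint8(randi([0 255], 64, 64, 3));

green = rgb(:, :, 2);            % green component (best vessel contrast)
inv_g = imcomplement(green);     % invert the image
den   = medfilt2(inv_g, [3 3]);  % 3 x 3 median filter to reduce noise
adj   = imadjust(den);           % stretch the contrast
```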

3.2.1 IMAGE PREPROCESSING

Before image processing is carried out, the fundus images need to be preprocessed to remove the non-uniform background. Non-uniform brightness and variation across the fundus images are the main causes of this non-uniformity. The error is therefore corrected by applying contrast-limited adaptive histogram equalization (CLAHE) to the image before the image processing operations are applied[22].

Figure 3.1: System block diagram for the detection and classification of diabetic retinopathy

A histogram is a graph indicating the number of times each gray level occurs in an image. In bright images, for example, the gray levels cluster at the upper end of the graph, whereas in darker images they lie at the lower end. When the gray levels are spread evenly across the histogram, the image is well contrasted. CLAHE operates on small regions of the image, called tiles. Each tile's contrast is enhanced so that the histogram of the output region approximately matches a specified histogram[2]. Figure 3.2.1a shows the fundus image before CLAHE; its histogram shows more bright regions than dark regions. Figure 3.2.1b shows the fundus image after CLAHE; its histogram shows an evenly distributed brightness.

Figure 3.2.1a: Original image (left) and its histogram (right)

Figure 3.2.1b: Image after CLAHE (left) and its histogram (right)
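The equalization idea can be sketched in code. The project's implementation uses MATLAB's adapthisteq; the following Python snippet is only an illustrative sketch of plain (global) histogram equalization, the mapping that CLAHE applies per tile with an additional clip limit on the histogram:

```python
# Simplified global histogram equalization of an 8-bit grayscale image.
# CLAHE applies the same mapping per tile, with the histogram clipped to
# limit contrast amplification. Illustrative sketch only.

def equalize(pixels, levels=256):
    """Remap gray levels so the cumulative histogram becomes roughly linear."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # cumulative distribution function of the gray levels
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # classic equalization mapping to the full 0..levels-1 range
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

# Gray levels clustered at the dark end spread across the full range.
dark = [10, 10, 11, 12, 50, 51, 52, 60]
print(equalize(dark))
```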


3.2.2 STRUCTURING ELEMENT

A structuring element (SE) is a small binary pattern used in morphological operations to probe the image. It is a matrix consisting of only 0's and 1's that can have any arbitrary shape and size; the pixels with value 1 define the neighbourhood[34]. There are two types of SE: a two-dimensional or flat SE is usually specified by an origin, a radius and an approximation value N, while a three-dimensional or nonflat SE is usually specified by a radius (in the x-y plane), a height and an approximation value N. There are different SE shapes, but in this project disk shaped, ball shaped and octagon shaped SEs are used.

3.2.2.1 DISK SHAPED STRUCTURING ELEMENT (SE)

Disk shaped SE, SE = strel('disk', R, N) creates a flat, disk shaped structuring

element, where R specifies the radius. R must be a nonnegative integer. N must be 0, 4, 6,

or 8[8]. Figure 3.2.2.1 shows a disk shaped SE with radius 3 and its centre of origin.

Figure 3.2.2.1: Disk shaped structuring element
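An exact (unapproximated, i.e. N = 0) flat disk SE can be built directly from the Euclidean distance test. This Python sketch is illustrative only; MATLAB's strel('disk', R, N) with N > 0 instead approximates the disk with periodic lines:

```python
def disk_se(radius):
    """Flat disk shaped structuring element: a (2R+1) x (2R+1) matrix of
    0s and 1s, where 1s mark pixels within Euclidean distance R of the
    origin (the centre element)."""
    size = 2 * radius + 1
    return [[1 if (x - radius) ** 2 + (y - radius) ** 2 <= radius ** 2 else 0
             for x in range(size)]
            for y in range(size)]

# Print the radius-3 disk of Figure 3.2.2.1 as rows of 0s and 1s.
for row in disk_se(3):
    print(''.join(str(v) for v in row))
```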

3.2.2.2 BALL SHAPED STRUCTURING ELEMENT (SE)

Ball shaped SE, SE = strel('ball', R, H, N) creates a nonflat, ball-shaped structuring

element (actually an ellipsoid) whose radius in the X-Y plane is R and whose height is H.

Note that R must be a nonnegative integer, H must be a real scalar, and N must be an even

nonnegative integer[8]. Figure 3.2.2.2 shows a ball shaped SE with x-y axis as radius and z

axis as height.


Figure 3.2.2.2: Ball shaped structuring element (nonflat ellipsoid)

3.2.2.3 OCTAGON SHAPED STRUCTURING ELEMENT (SE)

Octagon shaped SE, SE = strel('octagon', R) creates a flat, octagonal structuring

element, where R specifies the distance from the structuring element origin to the sides of

the octagon, as measured along the horizontal and vertical axes. R must be a nonnegative

multiple of 3[8]. Figure 3.2.2.3 shows an octagon shaped SE with radius 3 and its centre of

origin.

Figure 3.2.2.3: Octagon shaped structuring element

3.2.3 MORPHOLOGICAL IMAGE PROCESSING

Morphological image processing is a branch of image processing that is particularly

useful for analyzing shapes in images[3]. Mathematical morphology is the foundation of

morphological image processing, which consists of a set of operators that transform images

according to size, shape, connectivity, etc.


3.2.3.1 MORPHOLOGICAL OPERATIONS

Morphological operations are used to understand the structure or form of an

image. This usually means identifying objects or boundaries within an image. Morphological

operations play a key role in applications such as machine vision and automatic object

detection[33].

Morphological operations apply a structuring element to an input image, creating an

output image of the same size. In a morphological operation, the value of each pixel in the

output image is based on a comparison of the corresponding pixel in the input image with its

neighbours. By choosing the size and shape of the neighborhood, a morphological operation

can be created that is sensitive to specific shapes in the input image [34]. There are many types

of morphological operations such as dilation, erosion, opening and closing.

3.2.3.2 DILATION AND EROSION

Dilation and erosion are basic morphological processing operations. They are defined in terms of more elementary set operations, but are employed as the basic elements of many algorithms. Both dilation and erosion are produced by the interaction of a structuring element with a set of pixels of interest in the image[19].

Dilation adds pixels to the boundaries of objects in an image, while erosion removes

pixels on object boundaries. The number of pixels added or removed from the objects in an

image depends on the size and shape of the structuring element used to process the image.

In the morphological dilation and erosion operations, the state of any given pixel in the

output image is determined by applying a rule to the corresponding pixel and its neighbours

in the input image. The rule used to process the pixels defines the operation as a dilation or

an erosion[34]. Table 3.2.3.2 shows the operations and the rules.

Operation | Rule
Dilation | The value of the output pixel is the maximum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to the value 1, the output pixel is set to 1.
Erosion | The value of the output pixel is the minimum value of all the pixels in the input pixel's neighborhood. In a binary image, if any of the pixels is set to 0, the output pixel is set to 0.

Table 3.2.3.2: Rules for dilation and erosion[34]
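The two rules amount to a neighbourhood maximum (dilation) and minimum (erosion). This minimal Python sketch applies them with a 3 x 3 square SE, skipping border pixels for brevity (the project's MATLAB code is in the appendices):

```python
# Binary dilation and erosion following the rules in Table 3.2.3.2:
# output pixel = neighbourhood maximum (dilation) or minimum (erosion),
# with the neighbourhood given by a 3 x 3 square structuring element.
# Border pixels are copied unchanged in this sketch.

def _neighbourhood_op(img, op):
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = op(img[y + dy][x + dx]
                           for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out

def dilate(img):
    return _neighbourhood_op(img, max)

def erode(img):
    return _neighbourhood_op(img, min)

img = [[0, 0, 0, 0, 0],
       [0, 0, 1, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 1, 0, 0],
       [0, 0, 0, 0, 0]]
print(dilate(img))  # the cross grows by one pixel in every direction
print(erode(img))   # the cross shrinks away
```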

3.2.3.3 DILATION

Suppose A and B are sets of pixels. Then the dilation of A by B, denoted A⊕B, is defined as A⊕B = ∪x∈B Ax; that is, for every point x∈B, A is translated by those coordinates. An equivalent definition is A⊕B = {(x, y)+(u, v) : (x, y)∈A, (u, v)∈B}, from which dilation is seen to be commutative: A⊕B = B⊕A[3]. Figure 3.2.3.3a shows an original fundus image before dilation and Figure 3.2.3.3b shows the same image after dilation with a disk shaped SE of radius 8; the optic disc becomes more prominent and exudates can also be seen near the macula.

Figure 3.2.3.3a: Original image

3.2.3.4 EROSION

Given sets A and B, the erosion of A by B, written A⊖B, is defined as A⊖B = {w : Bw ⊆ A}[3]. Figure 3.2.3.4a shows an original fundus image before erosion and Figure 3.2.3.4b shows the same image after erosion with a disk shaped SE of radius 8; the blood vessels become more prominent.


Figure 3.2.3.3b: Image after dilation with disk shaped SE


Figure 3.2.3.4a: Original image

3.2.3.5 OPENING AND CLOSING

Dilation and erosion are often used in combination to implement image processing operations[34]. Erosion followed by dilation is called an opening operation. Opening an image smoothes the contour of an object, breaks narrow isthmuses ("bridges") and eliminates thin protrusions[12]. Dilation followed by erosion is called a closing operation. Closing an image smoothes sections of contours, fuses narrow breaks and long thin gulfs, eliminates small holes in contours and fills gaps in contours[12].

The opening of an image is defined as A∘B = (A⊖B)⊕B[3]. Since opening consists of erosion followed by dilation, it can equivalently be defined as A∘B = ∪{Bw : Bw ⊆ A}[3]. The closing of an image is defined as A∙B = (A⊕B)⊖B[3]. Figure 3.2.3.5a and Figure 3.2.3.5b show the difference between the opening and closing operations on fundus images.
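The complementary behaviour of the two compound operations can be seen on a tiny example. This self-contained Python sketch (illustrative only, with zero padding at the borders) shows that opening removes an isolated speck smaller than the SE while closing preserves it:

```python
# Opening (erosion then dilation) removes specks smaller than the SE;
# closing (dilation then erosion) fills small holes. Sketch with a
# 3 x 3 square SE and zero padding outside the image.

def _morph(img, op):
    h, w = len(img), len(img[0])
    def px(y, x):
        return img[y][x] if 0 <= y < h and 0 <= x < w else 0
    return [[op(px(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1))
             for x in range(w)] for y in range(h)]

def opening(img):
    return _morph(_morph(img, min), max)   # A ∘ B = (A ⊖ B) ⊕ B

def closing(img):
    return _morph(_morph(img, max), min)   # A ∙ B = (A ⊕ B) ⊖ B

# A single-pixel speck survives closing but is erased by opening.
speck = [[0, 0, 0],
         [0, 1, 0],
         [0, 0, 0]]
print(opening(speck))
print(closing(speck))
```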


Figure 3.2.3.4b: Image after erosion with disk shaped SE


3.2.4 THRESHOLDING

Thresholding turns a colour or grayscale image into a 1-bit binary image. This is done by setting every pixel in the image to either black or white, depending on its value. The pivotal value used to decide whether any given pixel becomes black or white is the threshold[17].

Thresholding is useful for removing unnecessary detail from an image in order to concentrate on essentials[3]. In the case of the fundus image, removing all gray level information reduces the blood vessels to binary pixels, making it possible to distinguish the blood vessel foreground from the background. Thresholding can also be used to bring out hidden detail, which is particularly useful in image regions obscured by similar gray levels.

Choosing an appropriate threshold value is therefore important: a value that is too low may shrink some of the objects or reduce their number, while a value that is too high may include extra background information. Figure 3.2.4a shows the original fundus image (after CLAHE) before thresholding. Figure 3.2.4b shows the same image with too high a threshold value, resulting in too much background information, and Figure 3.2.4c shows it with too low a threshold value, resulting in missing foreground information.


Figure 3.2.3.5a: Opening operation with disk shaped image

Figure 3.2.3.5b: Closing operation with disk shaped SE image


Figure 3.2.4a: Original image

Figure 3.2.4b: Image with too high threshold value

Figure 3.2.4c: Image with too low threshold value
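The operation itself is a single comparison per pixel. This Python sketch mimics the behaviour of MATLAB's im2bw on intensities scaled to [0, 1] (the values below are made up for illustration):

```python
# Thresholding a grayscale image (values in [0, 1]) to binary:
# pixels above the threshold become white (1), the rest black (0).

def threshold(img, level):
    return [[1 if p > level else 0 for p in row] for row in img]

gray = [[0.05, 0.40, 0.90],
        [0.08, 0.15, 0.60]]
print(threshold(gray, 0.1))  # [[0, 1, 1], [0, 1, 1]]
```

The effect of the choice of level is easy to see: raising it to 0.5 would also suppress the 0.15 and 0.40 pixels, losing foreground detail as in Figure 3.2.4c.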

3.2.5 EDGE DETECTION

BME499 ENG499 MTD499 ICT499 MTH499 CAPSTONE PROJECT REPORT 36

81

Page 37: ChongLingChee_FYP2010.docx - CAPSTONE PROJECT

In an image, an edge is a curve that follows a path of rapid change in image intensity.

Edges are often associated with the boundaries of objects in a scene [4]. Edge detection refers

to the process of identifying and locating sharp discontinuities in an image[39]. It is possible

to use edges to measure the size of objects in an image, isolate particular objects from their

background, and to recognize or classify objects[3].

There are six commonly used edge detection algorithms: Sobel, Prewitt, Roberts, Laplacian of Gaussian (LoG), zero-cross and Canny. Table 3.2.5 shows the six edge detection methods and their descriptions.

Methods Descriptions

Sobel The Sobel method finds edges using the Sobel approximation to the derivative. It returns edges at those points where the gradient of I is maximum.

Prewitt The Prewitt method finds edges using the Prewitt approximation to the derivative. It returns edges at those points where the gradient of I is maximum.

Roberts The Roberts method finds edges using the Roberts approximation to the derivative. It returns edges at those points where the gradient of I is maximum.

Laplacian of Gaussian (LoG) The Laplacian of Gaussian method finds edges by looking for zero crossings after filtering I with a Laplacian of Gaussian filter.

zero-cross The zero-cross method finds edges by looking for zero crossings after filtering I with a filter the user specifies.

Canny The Canny method finds edges by looking for local maxima of the gradient of I. The gradient is calculated using the derivative of a Gaussian filter. The method uses two thresholds, to detect strong and weak edges, and includes the weak edges in the output only if they are connected to strong edges. This method is therefore less likely than the others to be fooled by noise, and more likely to detect true weak edges.

Table 3.2.5: Methods and description of various edge detection algorithms[14]

After comparing all six edge detection algorithms, the Canny method performs better than the others because it uses two thresholds to detect strong and weak edges; for this reason, the Canny algorithm is chosen for edge detection in this project.
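Canny is a multi-stage algorithm (Gaussian smoothing, gradient estimation, non-maximum suppression, two-threshold hysteresis), so only its gradient step is sketched here, using the Sobel approximation from the first row of Table 3.2.5. The step-edge test image is made up for illustration:

```python
# Gradient magnitude with the Sobel approximation to the derivative.
# Canny builds on such gradients with smoothing, non-maximum
# suppression and two-threshold hysteresis; only the gradient step
# is sketched here. Border pixels are left at zero.

GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal derivative kernel
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical derivative kernel

def sobel_magnitude(img):
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge gives a strong response along the boundary.
step = [[0, 0, 255, 255]] * 4
print(sobel_magnitude(step))
```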


Figures 3.2.5a to 3.2.5f show the original image and the results of the Sobel, Prewitt, Roberts, Laplacian of Gaussian (LoG) and Canny edge detection methods respectively. It is apparent that the Canny edge detection method can detect the weak, fine blood vessels.

Figure 3.2.5a: Original image

Figure 3.2.5b: Sobel Figure 3.2.5c: Prewitt


Figure 3.2.5d: Roberts Figure 3.2.5e: Laplacian of Gaussian (LoG)

Figure 3.2.5f: Canny

3.2.6 MEDIAN FILTERING

Median filtering is a nonlinear operation often used in image processing to reduce "salt and pepper" noise. A median filter is more effective than convolution when the goal is to simultaneously reduce noise and preserve edges[1]. The median of a set is the middle value when the values are sorted; for an even number of values, the median is the mean of the middle two[3].

Figure 3.2.6a illustrates a 3 x 3 median filter: the nine window values are sorted and the middle value is taken as the median.

55 70 57
68 260 63
66 65 62

Sorted: 55 57 62 63 65 66 68 70 260, median = 65

Figure 3.2.6a: Illustration of a 3 x 3 median filter
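The worked example above can be reproduced in a few lines. This illustrative Python sketch applies the filter to a single 3 x 3 window:

```python
# 3 x 3 median filter applied to the window from Figure 3.2.6a:
# sorting the nine values and taking the middle one replaces the
# noisy centre value 260 with 65.

def median3x3(window):
    values = sorted(v for row in window for v in row)
    return values[len(values) // 2]

window = [[55, 70, 57],
          [68, 260, 63],
          [66, 65, 62]]
print(median3x3(window))  # 65
```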

This way of obtaining the median means that very large or very small (noisy) values are replaced by a value closer to their surroundings. Figure 3.2.6b shows the difference before and after applying median filtering: the "salt and pepper" noise in the original image has clearly been reduced.




Figure 3.2.6b: Original image (left) and image after median filtering (right)

3.3 FEATURE EXTRACTION

Features, namely blood vessels, microaneurysms, exudates and textures, are extracted. The steps are explained below.

3.3.1 BLOOD VESSELS DETECTION

Figure 3.3.1a shows the system block diagram of blood vessels detection. The

detailed steps are explained below.


Figure 3.3.1a: System block diagram for detecting blood vessels

All coloured images consist of the RGB (red, green, blue) primary colour channels. Each pixel takes a particular colour from its amounts of red, green and blue. If each colour component has a range of 0-255, the three components together give 256³, i.e. more than 16 million, colours. Each pixel then occupies 24 bits, giving a 24-bit colour image. The fundus images used in this project are 24-bit, 720 x 576 pixels. A normal image, as shown in Figure 3.3.1b, basically consists of blood vessels, the optic disc and the macula, without any other abnormal features. Blood vessel detection is important for the identification of diabetic retinopathy (DR) through image processing techniques.

Figure 3.3.1b: Normal retinal fundus image


[Figure 3.3.1a steps: original image → green component of original image → invert intensity of green component → edge detection (Canny) → border detection → morphological opening using disk SE of radius 8 → image with boundary obtained (after subtracting image with border) → perform CLAHE (adaptive histogram equalization) → morphological opening using ball SE of radius and height 8 → thresholding → perform median filtering → fill holes and remove boundary → final image and area extracted]

Page 42: ChongLingChee_FYP2010.docx - CAPSTONE PROJECT

Firstly, as part of the image preprocessing step, the green component of the image is

extracted as shown in Figure 3.3.1c and the green component’s intensity is inverted as

shown in Figure 3.3.1d.

Figure 3.3.1c: Green component Figure 3.3.1d: Inverted green component
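These first two steps, channel extraction and intensity inversion, are simple per-pixel operations. The project performs them in MATLAB (Appendix B); this Python sketch, with a made-up 2 x 2 image, is only illustrative:

```python
# Extracting and inverting the green channel of a 24-bit RGB image
# (8 bits per channel). Inverting makes the dark vessels bright.

def green_channel(rgb_image):
    # each pixel is an (R, G, B) tuple; keep only the G value
    return [[pixel[1] for pixel in row] for row in rgb_image]

def invert(gray, levels=256):
    # map each intensity p to (levels - 1) - p
    return [[levels - 1 - p for p in row] for row in gray]

rgb = [[(120, 30, 10), (90, 200, 40)],
       [(10, 55, 70), (0, 0, 0)]]
g = green_channel(rgb)
print(g)           # [[30, 200], [55, 0]]
print(invert(g))   # [[225, 55], [200, 255]]
```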

After inverting the green component's intensity, edge detection is performed using the Canny method. The border is then detected: a disk shaped structuring element (SE) of radius 8 is created and a morphological opening operation (erosion then dilation) is applied. Next, the eroded image is subtracted from the original image to obtain the border or boundary.

Afterwards, adaptive histogram equalization is performed to improve the contrast of

the image and to correct uneven illumination as shown in Figure 3.3.1e. A morphological

opening operation (erosion then dilation) is performed using the ball shaped structuring

element (SE) to smooth the background and to highlight the blood vessels as shown in

Figure 3.3.1f.


Figure 3.3.1e: Image after CLAHE Figure 3.3.1f: Image after opening operation

The opened image is then subtracted from the adaptive histogram equalized (CLAHE) image. As shown in Figure 3.3.1g, the resulting image shows higher intensity in the foreground (blood vessels) than in the background.

Figure 3.3.1g: Image after subtraction

From the subtracted image, the image is converted from grayscale to binary by thresholding with a value of 0.1, as shown in Figure 3.3.1h. Median filtering is performed to remove "salt and pepper" noise, as shown in Figure 3.3.1i. The boundary is then obtained by subtracting the border (detected earlier with the disk shaped SE) from the median filtered image.


Figure 3.3.1h: Image after thresholding Figure 3.3.1i: Image after median filtering

The border is then eliminated after filling the holes that do not touch the edge to obtain

the final image as shown in Figure 3.3.1j. The pixel values of the image are inverted to get

only the blood vessels with black background as shown in Figure 3.3.1k. The detailed

MATLAB code is attached in Appendix B.

Figure 3.3.1j: Final image Figure 3.3.1k: Final image (inverted)

3.3.2 MICROANEURYSMS DETECTION


Figure 3.3.2a shows the system block diagram of microaneurysms detection. The

detailed steps are explained below.

Figure 3.3.2a: System block diagram for detecting microaneurysms

Microaneurysms appear as tiny red dots on the retinal fundus image, as shown in Figure 3.3.2b; therefore the red component of the RGB image is used to identify the microaneurysms, as shown in Figure 3.3.2c. Next, the intensity is inverted as shown in Figure 3.3.2d. As in blood vessel detection, the Canny method is used for edge detection, as shown in Figure 3.3.2e.


[Figure 3.3.2a steps: original image → red component of original image → invert intensity of red component → edge detection (Canny) → border detection → morphological opening using disk SE of radius 8 → remove boundary → fill holes → subtract image with holes from image with filled holes → blood vessel detection → edge detection (Canny) → subtract image without boundary with blood vessels after edge detection → fill holes → subtract image with filled holes from the image with microaneurysms and unwanted artifacts → final image and area extracted]


Microaneurysms

Figure 3.3.2b: Abnormal retinal fundus image

Figure 3.3.2c: Red component Figure 3.3.2d: Inverted red component

The boundary is detected by filling up the holes, and a disk shaped structuring element (SE) of radius 8 is created with a morphological opening operation (erosion then dilation), as shown in Figure 3.3.2f. The edge detected image is then subtracted from the image with the boundary to obtain an image without the boundary, as shown in Figure 3.3.2g.


Figure 3.3.2f: Image with boundary

Figure 3.3.2g: Image after boundary subtraction

Next, the holes or gaps are filled, leaving the microaneurysms and other unwanted artifacts present, as shown in Figure 3.3.2h. The image before hole filling is then subtracted from the image with filled holes, so the resulting image contains the microaneurysms and other unwanted artifacts without the edge, as shown in Figure 3.3.2i.


Figure 3.3.2h: Image after filling up the holes or gaps

Figure 3.3.2e: Image after Canny edge detection


Figure 3.3.2i: Image after subtraction

The blood vessels are detected using the same method described in section 3.3.1; Figure 3.3.2j shows the detected blood vessels. The Canny edge detection method is then applied to the blood vessel image to detect its edges, as shown in Figure 3.3.2k. This image is then subtracted from the image after boundary subtraction (Figure 3.3.2g); the resulting image is shown in Figure 3.3.2l.

Figure 3.3.2j: Blood vessels detection

Figure 3.3.2l: Image after subtraction

Finally, after filling the holes or gaps as shown in Figure 3.3.2m, this image is subtracted from the image with microaneurysms and unwanted artifacts to obtain the final image with only microaneurysms, as shown in Figure 3.3.2n. The detailed MATLAB code is attached in Appendix C.


Figure 3.3.2k: Blood vessels after edge detection


Figure 3.3.2n: Final image

3.3.3 EXUDATES DETECTION

Figure 3.3.3a shows the system block diagram of exudates detection. The detailed

steps are explained below.

Figure 3.3.3a: System block diagram for detecting exudates

Exudates appear as yellowish dots in the fundus images as shown in Figure 3.3.3b.

It is easier to spot them than microaneurysms. To detect exudates, similarly to blood vessel detection, the green component of the RGB image is first extracted as shown in Figure 3.3.3c, and an octagon shaped structuring element (SE) of size 9 is created. A morphological closing is then performed on the image using this SE, as shown in Figure 3.3.3d. As clearly shown, the exudates become more prominent than the background, although the optic disc is also present, since their grey levels are similar.

[Figure 3.3.3a steps: original image → green component of original image → morphological closing using octagon shaped SE of radius 9 → column wise neighbourhood operation → thresholding → morphological closing using disk SE of radius 10 → edge detection (Canny) → ROI of radius 82 → remove optic disc → remove border → morphological erosion operation using disk shaped SE of radius 3 → final image and area extracted]

Figure 3.3.2m: Image after filling holes or gaps

Figure 3.3.3b: Abnormal retinal fundus image

Figure 3.3.3c: Green component Figure 3.3.3d: Image after closing operation

A column wise neighbourhood operation is performed, which first rearranges the image into columns; the parameter sliding indicates that overlapping neighbourhoods are being used[3]. This operation removes most of the unwanted artifacts, leaving only the border, exudates and optic disc, as shown in Figure 3.3.3e.


Figure 3.3.3e: Image after column wise neighbourhood operation

Next, thresholding is performed on the image with a threshold value of 0.7, as shown in Figure 3.3.3f. Morphological closing with a disk shaped structuring element (SE) of size 10 is used to fill up the holes or gaps in the exudates, as shown in Figure 3.3.3g.

The optic disc contains the highest pixel values in the image. Therefore, to remove the optic disc, edge detection using the Canny method (Figure 3.3.3h) is used together with a region of interest (ROI). First, a radius of 82 is defined, as most optic discs are about 80 x 80 pixels in size, as shown in Figure 3.3.3i. Next, the optic disc is removed together with the border, as shown in Figure 3.3.3j and Figure 3.3.3k.


Figure 3.3.3f: Image after thresholding Figure 3.3.3g: Image after morphological closing


Finally, a morphological erosion operation with a disk shaped structuring element (SE) of size 3 is performed to obtain the final image with only exudates, as shown in Figure 3.3.3l. The detailed MATLAB code is attached in Appendix D.


Figure 3.3.3h: Image after Canny edge detection

Figure 3.3.3i: Image after ROI

Figure 3.3.3l: Final image

Figure 3.3.3j: Image after removing optic disc

Figure 3.3.3k: Image after removing border


3.3.4 TEXTURE ANALYSIS

Texture describes the physical structure characteristic of a material such as

smoothness and coarseness. It is a spatial concept indicating what, apart from color and the

level of gray, characterizes the visual homogeneity of a given zone of an image [24]. Texture

analysis of an image is the study of mutual relationship among intensity values of

neighbouring pixels repeated over an area larger than the size of the relationship [22]. The

main types of texture analysis are structural, statistical and spectral.

Mean, standard deviation, third moment and entropy are statistical measures. Mean, standard deviation and third moment concern properties of individual pixels. The mean is defined as µ1 = Σ_{i=0..N−1} Σ_{j=0..N−1} i·P(i,j) [6] and the standard deviation as σ1 = √( Σ_{i=0..N−1} Σ_{j=0..N−1} P(i,j)·(i − µ1)² ) [6]. The third moment is a measure of the skewness of the histogram and is defined as µ3(z) = Σ_{i=0..L−1} (z_i − m)³·p(z_i) [37], where z_i are the gray levels, p(z_i) is the normalized histogram and m is the mean gray level. Entropy is a statistical texture measure of the randomness in an image. An image that is perfectly flat has an entropy of zero and can consequently be compressed to a relatively small size. High entropy images, such as an image of heavily cratered areas on the moon, have a great deal of contrast from one pixel to the next and consequently cannot be compressed as much as low entropy images[7]. Entropy is defined as −Σ P·log2 P. The texture features used in this project are mean, standard deviation, third moment and entropy. The detailed MATLAB code is attached in Appendix E.
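The entropy definition above can be checked on toy data. This Python sketch (illustrative only; the project computes the features in MATLAB) builds the gray-level probabilities from a pixel list and confirms that a flat image has zero entropy:

```python
# Entropy −Σ P·log2(P) computed from the normalized gray-level
# histogram. A perfectly flat image has a single gray level with
# probability 1, so its entropy is zero.

from collections import Counter
from math import log2

def entropy(pixels):
    n = len(pixels)
    return -sum((c / n) * log2(c / n) for c in Counter(pixels).values())

flat = [128] * 100                     # one gray level → entropy 0
two_level = [0] * 50 + [255] * 50      # two equally likely levels
print(entropy(flat))
print(entropy(two_level))  # 1.0
```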

3.4 SIGNIFICANCE TEST

A significance test calculates statistically whether a set of data occurred by chance or represents a true occurrence, and to what level. The significance level is expressed as a p-value; the lower the p-value, the more statistically significant a set of data is. For example, if set A has a p-value of 0.1 and set B has a p-value of 0.05, then set B is said to be more statistically significant than set A, as there is only a 5% chance that its result occurred


by chance or coincidence, whereas set A has a 10% chance (5% more than set B) of having occurred by chance or coincidence. The typical level of significance is 5%, or p-value ≤ 0.05. The significance test is done prior to classification.

3.4.1 STUDENT’S T-TEST

Student’s t-test deals with the problems associated with inference based on “small”

samples[46]. When independent samples are available from each population, the procedure is often known as the independent samples t-test, and the test statistic is

t = (x̄1 − x̄2) / ( s · √(1/n1 + 1/n2) )

where x̄1 and x̄2 are the means of samples of size n1 and n2 taken from each population and s is the pooled standard deviation[5].
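The statistic can be computed directly from its definition. This Python sketch assumes the usual pooled estimate of s (in practice MATLAB's ttest2 or scipy.stats.ttest_ind would be used, which also return the p-value); the sample values below are made up for illustration:

```python
# Independent two-sample t statistic as given above:
# t = (mean1 − mean2) / ( s · sqrt(1/n1 + 1/n2) ),
# with s the pooled standard deviation of the two samples.

from math import sqrt

def t_statistic(sample1, sample2):
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    # pooled standard deviation (sums of squares, n1 + n2 − 2 denominator)
    ss1 = sum((x - m1) ** 2 for x in sample1)
    ss2 = sum((x - m2) ** 2 for x in sample2)
    s = sqrt((ss1 + ss2) / (n1 + n2 - 2))
    return (m1 - m2) / (s * sqrt(1 / n1 + 1 / n2))

normal = [3.1, 2.9, 3.0, 3.2]      # made-up feature values
abnormal = [4.0, 4.2, 3.9, 4.1]
print(t_statistic(normal, abnormal))
```

A large |t| indicates well-separated group means, which corresponds to a small p-value.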

The areas of blood vessels, microaneurysms and exudates, together with the texture features mean, standard deviation, third moment and entropy, are fed into Student's t-test to generate the significance results. Appendix A shows the box plots for the various features (area) with high, median and low values. Table 3.4.1 shows the p-values of each feature. The highlighted (yellow) rows indicate data that is statistically significant; therefore, only the statistically significant features are used in the classification (i.e. blood vessels, microaneurysms, mean and third moment).

After selecting the features, the data is normalized prior to classification. Normalization is done by dividing each value of a particular feature by the highest value of that feature, so that each value lies in (0, 1]. This improves the classification, as the data has a smaller spread.
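Divide-by-maximum normalization is a one-liner. This Python sketch uses made-up microaneurysm areas; note the (0, 1] range only holds for positive-valued features:

```python
# Normalize a feature by dividing every value by the feature's maximum,
# so all values lie in (0, 1] (for positive-valued features).

def normalize(values):
    peak = max(values)
    return [v / peak for v in values]

areas = [330, 884, 564, 238]   # made-up microaneurysm areas
print(normalize(areas))
```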


3.5 CLASSIFICATION

For this project, Fuzzy and Gaussian Mixture Model (GMM) classifiers are used for the automatic classification of diabetic retinopathy (DR). There are 42 training data and 18 testing data. Figure 3.5 shows the block diagram of the training and testing data processing prior to input to the classifier. The normalized data is first split into 70% and 30% portions. Step I consists of 70% of the normal and abnormal data and 30% of the normal and abnormal data, which are then grouped in Step III. Train1 and test1 are then further split into sets A, B, C and D (Step IV), which are mixed and split into train2, test2 and train3, test3 (Step V). Lastly, the training and testing data are exported to MATLAB as variables to be loaded into the classifier.


Table 3.4.1: Student’s t-test results

Features | Normal (Mean ± SD) | Abnormal (Mean ± SD) | P-Value
Blood Vessels | 31170 ± 7989 | 35950 ± 10430 | 0.051
Exudates | 1909 ± 1224 | 1477 ± 957 | 0.13
Microaneurysms | 330 ± 238 | 884 ± 564 | <0.0001
Textures: Mean | 74.1 ± 17.0 | 83.3 ± 21.4 | 0.072
Textures: Standard Deviation | 37.0 ± 6.42 | 39.5 ± 8.36 | 0.20
Textures: Third Moment | 0.139 ± 0.609 | −0.400 ± 0.389 | 0.0001
Textures: Entropy | 4.04 ± 0.320 | 4.13 ± 0.305 | 0.30


Figure 3.5: Block diagram of training and testing data


[Figure 3.5 data flow: Step I splits the normalized data into 70% and 30% portions; Step II separates each portion into normal and abnormal; Step III groups them into train1 (70%) and test1 (30%) for each class; Step IV splits train1 into subsets A (30%), B (30%), C (30%) and D (10%); Step V mixes these subsets to form the further training/testing pairs train2/test2 and train3/test3.]


3.5.1 FUZZY

A fuzzy classifier is any classifier which uses fuzzy sets either during its training or

during its operation[29]. Fuzzy pattern recognition is sometimes identified with fuzzy

clustering or with fuzzy if-then systems used as classifiers[29].

In a fuzzy classification system, a case or an object can be classified by applying a

set of fuzzy rules based on the linguistic values of its attributes. Every rule has a weight,

which is a number between 0 and 1, and this is applied to the number given by the

antecedent. It involves two distinct parts. The first part involves evaluating the antecedent: fuzzifying the input and applying any necessary fuzzy operators[40] such as union: μA∪B(x) = Max[μA(x), μB(x)], intersection: μA∩B(x) = Min[μA(x), μB(x)], and complement: μĀ(x) = 1 − μA(x), where μ is the membership function[40]. The second part

requires application of that result to the consequent, known as inference. A fuzzy inference

system is a rule-based system that uses fuzzy logic, rather than Boolean logic, to reason

about data[40]. Fuzzy Logic (FL) is a multivalued logic that allows intermediate values to be

defined between conventional evaluations like true/false, yes/no, high/low, etc[30]. These

fuzzy rules define the connection between input and output fuzzy variables[40].
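For example, with membership degrees μA(x) = 0.7 and μB(x) = 0.4, the three standard operators above give the following values (a minimal Python sketch, illustrative only, of the min/max/complement operators):

```python
def fuzzy_union(mu_a, mu_b):
    # mu_(A union B)(x) = Max[mu_A(x), mu_B(x)]
    return max(mu_a, mu_b)

def fuzzy_intersection(mu_a, mu_b):
    # mu_(A intersect B)(x) = Min[mu_A(x), mu_B(x)]
    return min(mu_a, mu_b)

def fuzzy_complement(mu_a):
    # mu_(not A)(x) = 1 - mu_A(x)
    return 1 - mu_a

print(fuzzy_union(0.7, 0.4))         # -> 0.7
print(fuzzy_intersection(0.7, 0.4))  # -> 0.4
print(fuzzy_complement(0.7))         # -> 0.3 (up to floating-point rounding)
```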

Table 3.5.1a shows the output of the three sets of testing data from the fuzzy classifier.

For nos. 1-9 (normal data) the correct output is [0, 1] (a true positive), so an output of

[1, 0] (a false positive) is an error; there are some such errors. Likewise, for nos. 10-18

(abnormal data) the correct output is [1, 0] (a true negative), so [0, 1] (a false negative)

is an error, and some errors occur there too. Label 1 denotes normal data and label 2 denotes

abnormal data; the correct labelling is therefore 1 for nos. 1-9 and 2 for nos. 10-18.

Tables 3.5.1b-d show the fuzzy testing data used to calculate the positive predictive

value, negative predictive value, sensitivity and specificity. TP denotes true positives, TN

true negatives, FP false positives and FN false negatives. The formulas used are[43]:

    Specificity = TN / (TN + FP) * 100%

    Sensitivity = TP / (TP + FN) * 100%

A specificity of 100% means that the test recognizes all actual negatives[43], and a

sensitivity of 100% means that the test recognizes all actual positives[43]. The positive

predictive value is the proportion of positive test results that are correctly diagnosed, and

the negative predictive value is the proportion of negative test results that are correctly

diagnosed.
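These four measures follow directly from the confusion-matrix counts. The sketch below (Python used for illustration; the project itself works in MATLAB) reproduces the testing1 figures reported in Table 3.5.1b:

```python
def metrics(tp, fp, fn, tn):
    """Return (PPV, NPV, sensitivity, specificity) as percentages."""
    ppv = tp / (tp + fp) * 100    # positive predictive value
    npv = tn / (fn + tn) * 100    # negative predictive value
    sens = tp / (tp + fn) * 100   # sensitivity
    spec = tn / (fp + tn) * 100   # specificity
    return ppv, npv, sens, spec

# Counts from Table 3.5.1b (fuzzy classifier, testing1)
ppv, npv, sens, spec = metrics(tp=5, fp=4, fn=1, tn=8)
print(round(ppv, 1), round(npv, 1), round(sens, 1), round(spec, 1))
# -> 55.6 88.9 83.3 66.7
```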

No   Fuzzy output  Label  Error   Fuzzy output  Label  Error   Fuzzy output  Label  Error
     (testing1)                   (testing2)                   (testing3)
1    0 1           1              0 1           1              0 1           1
2    0 1           1              0 1           1              0 1           1
3    1 0           2      Error   0 1           1              0 1           1
4    0 1           1              0 1           1              0 1           1
5    0 1           1              1 0           2      Error   0 1           1
6    1 0           2      Error   0 1           1              1 0           2      Error
7    0 1           1              0 1           1              0 1           1
8    1 0           2      Error   0 1           1              0 1           1
9    1 0           2      Error   0 1           1              0 1           1
10   1 0           2              1 0           2              1 0           2
11   1 0           2              1 0           2              1 0           2
12   1 0           2              1 0           2              1 0           2
13   1 0           2              1 0           2              1 0           2
14   0 1           1      Error   1 0           2              1 0           2
15   1 0           2              0 1           1      Error   1 0           2
16   1 0           2              1 0           2              0 1           1      Error
17   1 0           2              1 0           2              1 0           2
18   1 0           2              1 0           2              0 1           1      Error

Table 3.5.1a: testing1, testing2 and testing3 data output using fuzzy classifier



Fuzzy comparing testing1:

             POSITIVE   NEGATIVE
POSITIVE     TP = 5     FP = 4     Positive predictive value = TP / (TP + FP) = 5 / 9 = 55.6%
NEGATIVE     FN = 1     TN = 8     Negative predictive value = TN / (FN + TN) = 8 / 9 = 88.9%

Sensitivity = TP / (TP + FN) = 5 / (5 + 1) = 5 / 6 = 83.3%
Specificity = TN / (FP + TN) = 8 / (4 + 8) = 8 / 12 = 66.7%

Table 3.5.1b: testing1 data output calculation using fuzzy classifier

Fuzzy comparing testing2:

             POSITIVE   NEGATIVE
POSITIVE     TP = 8     FP = 1     Positive predictive value = TP / (TP + FP) = 8 / 9 = 88.9%
NEGATIVE     FN = 1     TN = 8     Negative predictive value = TN / (FN + TN) = 8 / 9 = 88.9%

Sensitivity = TP / (TP + FN) = 8 / (8 + 8) = 8 / 16 = 50%
Specificity = TN / (FP + TN) = 8 / (1 + 8) = 8 / 9 = 88.9%

Table 3.5.1c: testing2 data output calculation using fuzzy classifier

Fuzzy comparing testing3:

             POSITIVE   NEGATIVE
POSITIVE     TP = 8     FP = 1     Positive predictive value = TP / (TP + FP) = 8 / 9 = 88.9%
NEGATIVE     FN = 2     TN = 7     Negative predictive value = TN / (FN + TN) = 7 / 9 = 77.8%

Sensitivity = TP / (TP + FN) = 8 / (8 + 2) = 8 / 10 = 80%
Specificity = TN / (FP + TN) = 7 / (1 + 7) = 7 / 8 = 87.5%

Table 3.5.1d: testing3 data output calculation using fuzzy classifier

3.5.2 GAUSSIAN MIXTURE MODEL (GMM)

A Gaussian Mixture Model (GMM) is a parametric probability density function

represented as a weighted sum of Gaussian component densities. GMMs are commonly

used as a parametric model of the probability distribution of continuous measurements or

features in a biometric system[11]. A GMM is a weighted sum of Mcomponent Gaussian

densities as given by the equation: p ( x|λ )=∑i=1

M

wi g¿ where x is a D-dimensional

continuous-valued data vector, w i ,i=1 , …, M , are the mixture weights, and g¿ are the

BME499 ENG499 MTD499 ICT499 MTH499 CAPSTONE PROJECT REPORT 60

81

Page 61: ChongLingChee_FYP2010.docx - CAPSTONE PROJECT

Figure 3.5.2: Block diagram of GMM method

Normalized data

GMM Classifier

Output

TestTrain

component Gaussian densities. Each component density is a D-variate Gaussian function of

the form: g¿, with mean vector μiand covariance matrix ∑i

. The mixture weights satisfy

the constraint that ∑i=1

M

wi=1[11].
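The density above can be evaluated directly. The sketch below (pure Python for illustration, assuming diagonal covariance matrices for simplicity, which is a common restriction in GMMs) implements the weighted-sum formula and the D-variate Gaussian component:

```python
import math

def gaussian_diag(x, mu, var):
    """D-variate Gaussian g(x | mu_i, Sigma_i) with diagonal covariance."""
    d = len(x)
    det = 1.0
    expo = 0.0
    for j in range(d):
        det *= var[j]                           # |Sigma_i| for a diagonal matrix
        expo += (x[j] - mu[j]) ** 2 / var[j]    # (x - mu)' Sigma^-1 (x - mu)
    norm = 1.0 / ((2 * math.pi) ** (d / 2) * math.sqrt(det))
    return norm * math.exp(-0.5 * expo)

def gmm_density(x, weights, mus, variances):
    """p(x | lambda) = sum_i w_i * g(x | mu_i, Sigma_i)."""
    assert abs(sum(weights) - 1.0) < 1e-9       # mixture weights must sum to 1
    return sum(w * gaussian_diag(x, mu, var)
               for w, mu, var in zip(weights, mus, variances))

# Two 2-D components with unit variance; at x = mu a component contributes 1/(2*pi)
p = gmm_density([0.0, 0.0],
                weights=[0.6, 0.4],
                mus=[[0.0, 0.0], [3.0, 3.0]],
                variances=[[1.0, 1.0], [1.0, 1.0]])
print(round(p, 4))  # -> 0.0955
```

In practice the means, covariances and weights are fitted to the training features (e.g. by expectation-maximization), and a test sample is assigned to whichever class model gives the higher density.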

Figure 3.5.2 shows the GMM classification method. Table 3.5.2a shows the output of the

three sets of testing data from the GMM classifier. The column "No of incorrect normal data"

gives the false positives (there are 2 in testing1), and the column "No of incorrect abnormal

data" gives the false negatives (3 each in testing1 and testing3). The classification rate is

the percentage of correctly classified data; the higher the classification rate, the higher

the accuracy.
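Using the counts reported in Table 3.5.2a (13, 18 and 15 correct samples out of 18 per test set), a quick check in Python (illustrative only) reproduces the table's rates and their average:

```python
def classification_rate(correct, total):
    """Percentage of correctly classified samples."""
    return correct / total * 100

rates = [classification_rate(13, 18),   # testing1: 7 normal + 6 abnormal correct
         classification_rate(18, 18),   # testing2: all 18 correct
         classification_rate(15, 18)]   # testing3: 9 normal + 6 abnormal correct
average = sum(rates) / len(rates)
print([round(r, 1) for r in rates], round(average, 1))
# -> [72.2, 100.0, 83.3] 85.2
```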

Tables 3.5.2b-d show the GMM testing data used to calculate the positive predictive

value, negative predictive value, sensitivity and specificity. TP denotes true positives, TN

true negatives, FP false positives and FN false negatives. The same formulas as in Section

3.5.1 are used: Specificity = TN / (TN + FP) * 100% and Sensitivity = TP / (TP + FN) * 100%[43].

A specificity of 100% means that the test recognizes all actual negatives[43], and a

sensitivity of 100% means that the test recognizes all actual positives[43]. The positive

predictive value is the proportion of positive test results that are correctly diagnosed, and

the negative predictive value is the proportion of negative test results that are correctly

diagnosed.



                         No of correct  No of incorrect  No of correct   No of incorrect  Classification
                         normal data    normal data      abnormal data   abnormal data    rate
GMM comparing testing1   7              2                6               3                72.2%
GMM comparing testing2   9              0                9               0                100%
GMM comparing testing3   9              0                6               3                83.3%

Average classification rate = (testing1 + testing2 + testing3) / 3 = 85.2%

Table 3.5.2a: testing1, testing2 and testing3 data output using GMM classifier

GMM comparing testing1:

             POSITIVE   NEGATIVE
POSITIVE     TP = 7     FP = 2     Positive predictive value = TP / (TP + FP) = 7 / 9 = 77.8%
NEGATIVE     FN = 3     TN = 6     Negative predictive value = TN / (FN + TN) = 6 / 9 = 66.7%

Sensitivity = TP / (TP + FN) = 7 / (7 + 3) = 7 / 10 = 70%
Specificity = TN / (FP + TN) = 6 / (2 + 6) = 6 / 8 = 75%

Table 3.5.2b: testing1 data output calculation using GMM classifier



GMM comparing testing2:

             POSITIVE   NEGATIVE
POSITIVE     TP = 9     FP = 0     Positive predictive value = TP / (TP + FP) = 9 / 9 = 100%
NEGATIVE     FN = 0     TN = 9     Negative predictive value = TN / (FN + TN) = 9 / 9 = 100%

Sensitivity = TP / (TP + FN) = 9 / (9 + 0) = 9 / 9 = 100%
Specificity = TN / (FP + TN) = 9 / (0 + 9) = 9 / 9 = 100%

Table 3.5.2c: testing2 data output calculation using GMM classifier

GMM comparing testing3:

             POSITIVE   NEGATIVE
POSITIVE     TP = 9     FP = 0     Positive predictive value = TP / (TP + FP) = 9 / 9 = 100%
NEGATIVE     FN = 3     TN = 6     Negative predictive value = TN / (FN + TN) = 6 / 9 = 66.7%

Sensitivity = TP / (TP + FN) = 9 / (9 + 3) = 9 / 12 = 75%
Specificity = TN / (FP + TN) = 6 / (0 + 6) = 6 / 6 = 100%

Table 3.5.2d: testing3 data output calculation using GMM classifier

CHAPTER FOUR

RESULTS

Features such as blood vessel area, microaneurysm area, exudate area and the textures

corresponding to the three features were extracted using the proposed algorithms and

methods. Table 4a shows the results of the fuzzy classification and Table 4b the results of

the GMM classification. The fuzzy classification achieves a high percentage of correct data

over total data, 81.5%, which makes it a good classifier choice for the final graphical user

interface (GUI). However, Table 3.5.2a shows an average GMM classification rate of 85.2%

over the three testing sets, which makes it an even better choice than fuzzy. Therefore, the

GMM classifier is used for the final GUI. Figure 4 shows the graphical plot of the average

percentage classification results for the fuzzy and GMM classifiers.

                            Testing1  Testing2  Testing3  Total  % correct over total / Average
No of correct data          13        16        15        44     (44 / 54) * 100 = 81.5%
No of incorrect data        5         2         3         10
Positive predictive value   55.6%     88.9%     88.9%            77.8%
Negative predictive value   88.9%     88.9%     77.8%            85.2%
Sensitivity                 83.3%     50%       80%              71.1%
Specificity                 66.7%     88.9%     87.5%            81%


Table 4a: Fuzzy classification results


                            Testing1  Testing2  Testing3  Total  % correct over total / Average
No of correct data          13        18        15        46     (46 / 54) * 100 = 85.2%
No of incorrect data        5         0         3         8
Classification rate         72.2%     100%      83.3%            85.2%
Positive predictive value   77.8%     100%      100%             92.6%
Negative predictive value   66.7%     100%      66.7%            77.8%
Sensitivity                 70%       100%      75%              81.7%
Specificity                 75%       100%      100%             91.7%

Table 4b: GMM classification results

[Figure content: bar chart, y-axis 0-100%, comparing the two classifiers. Fuzzy: average classification rate 81.5%, positive predictive value 77.8%, negative predictive value 85.2%, sensitivity 71.1%, specificity 81%. GMM: 85.2%, 92.6%, 77.8%, 81.7%, 91.7%.]

Figure 4: Graphical plot for average percentage classification results from two classifiers


4.1 GRAPHICAL USER INTERFACE (GUI)

Figure 4.1: GUI

A graphical user interface (GUI) is a type of user interface that allows users to

interact with the program by clicking or typing. It allows the extracted image features to be

displayed for both normal and abnormal classifications.

Figure 4.1 shows a screenshot of the GUI; the list box shows the list of fundus

images. Clicking the 'Extract Features' button displays the blood vessels, microaneurysms,

textures mean and textures third moment together with their areas. The corresponding

patient's data can be shown for every fundus image by clicking 'Patient's Data'. Clicking

the 'Diagnosis' button displays either the normal or abnormal classification.


CHAPTER FIVE

CONCLUSION AND RECOMMENDATION

In this report, the system developed demonstrated a reasonably accurate average

classification rate of 85.2% (GMM), with an average sensitivity and specificity of 81.7% and

91.7% respectively. The algorithms and methods used for the significance test and

classification were fairly fast in computation, making them a good choice for comparing and

computing the two classes of fundus images. The results also demonstrate that the system can

help to detect diabetic retinopathy (DR) abnormalities at an early stage. This is important

for ophthalmologists, allowing them to detect DR and perform the necessary treatments to

prevent or delay vision loss.

However, the system can be improved further: by using more than two classifiers to

improve sensitivity and specificity, by adding more input features and more diverse

demographics, and, most importantly, by improving the quality of the original fundus images

(i.e. even background illumination) to reveal more detailed features and to improve the

overall accuracy of the significance test and classification.


CHAPTER SIX

REFLECTIONS

Doing the capstone project has been a whole new, exciting and 'thrilling' experience for

me. Although I learned quite a lot from the biomedical engineering degree, my choice of

capstone project left me feeling awed and bewildered at the beginning. I had no prior

knowledge or experience in MATLAB programming, nor did I know anything about image

processing. I began to doubt whether I could complete my project successfully.

To begin my project, I first needed to know more about diabetes and diabetic

retinopathy, a complication of diabetes. By finding more information from the internet,

journals and books, I gained a better understanding of the disease. Most importantly, the

literature review enabled me to start on my proposal.

The greatest hurdle was starting on MATLAB programming. I needed to find materials and

information to practise programming, and had to juggle between practising the programming

and reading the journals. Lastly, I needed to start writing the image processing code. I

spent most of my time practising MATLAB programming and understanding simple debugging. It

was difficult to understand all the code, and I had to look for help from the materials as

well as from my supervisor. It was quite depressing and frustrating to hit brick walls and

get stuck at some points. However, it was very rewarding when I resolved the problems.

Starting on image processing was not all that smooth, and various problems surfaced

during this time. I had to find solutions to debug these coding problems, and to find and

explore the right threshold and structuring element values. After some struggle and advice

from my supervisor, I was able to finish the feature extraction code. Initially, I had

wanted to include a haemorrhages feature in my project, but I was not able to implement it

successfully; it then dawned on me that using the texture features (together with the

others) to differentiate between normal and abnormal retinas was adequate, as normal and

abnormal retinas have different texture values.


Next, I had to find out about various significance tests and, with the advice of my

supervisor, decided to use the Student's t-test method. I also learned how the significance

p-values indicate which features are more significant than others, and then used the data to

generate normalized values for the classifiers.

The learning curve for creating classifiers was confusing and frustrating. Luckily,

with the help of my supervisor and Fabian, I was able to understand how to create the

training and testing data for my classifiers. Lastly, I needed to learn to create a graphical

user interface (GUI) for my project presentation. It was a fun and enjoyable experience,

reminiscent of my Visual Basic lessons from my polytechnic days. All in all, the capstone

project was a priceless experience, and I am quite satisfied with my efforts and outcomes.


REFERENCES

[1] 2-D median filtering – MATLAB

http://www.mathworks.com/help/toolbox/images/ref/medfilt2.html.

[2] Adjusting Pixel Intensity Values :: Analyzing and Enhancing Images (Image Processing

Toolbox™). http://www.mathworks.com/help/toolbox/images/f11-14011.html.

[3] Alasdair McAndrew Introduction to Digital Image Processing With Matlab.

[4] Analyzing Images :: Analyzing and Enhancing Images (Image Processing Toolbox™).

http://www.mathworks.com/help/toolbox/images/f11-11942.html.

[5] B.S. Everitt. The Cambridge Dictionary of Statistics in the Medical Sciences.

[6] C.M.R. Caridade, A.R.S. Marcal & T. Mendonca. The use of texture for image

classification of black & white air-photographs.

[7] Cassini Lossy Compression.

http://www.astro.cornell.edu/research/projects/compression/entropy.html.

[8] Create morphological structuring element (STREL) – MATLAB.

http://www.mathworks.com/help/toolbox/images/ref/strel.html.

[9] Diabetic Retinopathy. http://www.hoptechno.com/book45.htm.

[10] Diabetic Retinopathy Treatment - Treatment of Diabetic Retinopathy.

http://vision.about.com/od/diabeticretinopathy/a/Diabetic_Retinopathy_Treatment.htm.

[11] Douglas Reynolds. Gaussian Mixture Models.

[12] Dr Hanno Coetzer. Morphological Image Processing Lecture 21.

[13] Eye - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Eye.

[14] Find edges in grayscale image – MATLAB.

http://www.mathworks.com/help/toolbox/images/ref/edge.html.

[15] Fovea centralis - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Fovea.

[16] Fundus Photography. http://www.aetna.com/cpb/medical/data/500_599/0539.html.

[17] generation5 - Thresholding and Segmentation.

http://www.generation5.org/content/2003/segmentation.asp.

[18] Gillian C. Vafidis. Features of diabetic eye disease.

[19] Harvey Rhody, Chester F. Carlson Center for Imaging Science, Rochester Institute of

Technology. Lecture 3: Basic Morphological Image Processing.


[20] How the Eye Works - Singapore National Eye Centre. http://www.snec.com.sg/eye-conditions-and-treatments/Pages/how-the-eye-works.aspx.

[21] Ida G. Dox, B. John Melloni, Gilbert M. Eisner, June L. Melloni. Melloni’s Illustrated

Medical Dictionary (4th ed).

[22] Jagadish Nayak, P Subbanna Bhat, Rajendra Acharya U, C M Lim, Manjunath

Kagathi. Automated Identification of Diabetic Retinopathy Stages Using Digital Fundus

Images.

[23] James L. Kinyoun, Donald C. Martin, Wilfred Y. Fujimoto, Donna L. Leonetti.

Opthalmoscopy Versus Fundus Photographs for Detecting and Grading Diabetic

Retinopathy.

[24] Jean-Pascal Aribot. Texture Segmentation.

[25] John Paul Vetter. Biomedical Photography.

[26] K R Bishai An inexpensive method of indirect opthalmoscopy.

[27] Lens (anatomy) - Wikipedia, the free encyclopedia.

http://en.wikipedia.org/wiki/Lens_(anatomy).

[28] LensShopper. Anatomy of the eye.

[29] Ludmila Ilieva Kuncheva. Fuzzy classifier design.

[30] M. Hellmann. Fuzzy Logic Introduction.

[31] Macula of retina - Wikipedia, the free encyclopedia.

http://en.wikipedia.org/wiki/Macula.

[32] Ministry of Health: Disease Burden. http://www.moh.gov.sg/mohcorp/statistics.aspx?id=23712.

[33] Morphological Operations.

http://www.viz.tamu.edu/faculty/parke/ends489f00/notes/sec1_9.html.

[34] Morphology Fundamentals: Dilation and Erosion :: Morphological Operations (Image

Processing Toolbox™). http://www.mathworks.com/help/toolbox/images/f18-12508.html.

[35] Optic disc - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Optic_disc.

[36] Pupil - Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Pupil.

[37] Rafael C. Gonzalez, Richard Eugene Woods. Digital image processing.

[38] Rajendra Acharya U, Eddie Y. K. Ng, Jasjit S. Suri. Image Modeling of the Human

Eye.

[39] Raman Maini, Dr. Himanshu Aggarwal. Study and Comparison of Various Image

Edge Detection Techniques.


[40] Ravi Jain, Ajith Abraham. A Comparative Study of Fuzzy Classification Methods on

Breast Cancer Data.

[41] Retina-Vitreous Center | Procedures.

http://www.retinavitreouscenter.com/procedures_laser_photocoagulation.html.

[42] Scott & Christie and Associates Eye Diagram.

http://www.scottandchristie.com/eye.cfm?noflash=1.

[43] Sensitivity and specificity - Wikipedia, the free encyclopedia.

http://en.wikipedia.org/wiki/Sensitivity_and_specificity.

[44] Singapore Association of the Visually Handicapped.

http://www.savh.org.sg/info_cec_diseases.php.

[45] Stanley E. Gunstream. Anatomy and Physiology with Integrated Study Guide (3rd ed).

[46] Student's t-Tests. http://www.physics.csbsju.edu/stats/t-test.html.

[47] U R Acharya, C M Lim, E Y K Ng, C Chee and T Tamura. Computer-based detection

of diabetes retinopathy stages using digital fundus images.

[48] Vinod Patel. Diabetes mellitus: the disease.

[49] Wendy Strouse Watt, O.D. Fluorescein Angiogram.

[50] What is Diabetic Retinopathy? http://www.news-medical.net/health/What-is-Diabetic-Retinopathy.aspx.

[51] Wong Li Yun, Rajendra Acharya U, Y V. Venkatesh, Caroline Chee, Lim Choo Min,

E.Y.K.Ng. Identification of Different Stages Of Diabetic Retinopathy Using Retinal Optical

Images.


APPENDIX A

BOX PLOT FOR FEATURES (AREA)

Box plot for blood vessels, exudates and microaneurysms respectively


Box plot for mean, standard deviation and third moment respectively

Box plot for entropy


APPENDIX B

BLOOD VESSELS MATLAB CODE

clear all
clc

% Read original retinal image
b = imread('file name');
b = imresize(b,[576 720]);

% b(:,:,1) = red component, b(:,:,2) = green component, b(:,:,3) = blue
% Assigning green component to g1
g1 = b(:,:,2); % Extract green component
%============ Figure 3.3.1c =============%

% Inverting the green component
g2 = 255-g1;
%============ Figure 3.3.1d =============%

% Edge detection using canny method
ed = edge(g2, 'canny');

%============ Border detection (NEW) =============%
Border = imfill(ed,'holes');

[row col] = size(Border);
for x = 2:5
    for y = 100:650
        Border(x,y) = 0;
    end
end
for x = 573:575
    for y = 100:650
        Border(x,y) = 0;
    end
end

% Morphological opening using the disk structuring element
s1 = strel('disk',8);
e1 = imerode(Border,s1); % Perform erosion
d1 = imdilate(Border,s1); % Perform dilation
f1 = d1-e1; % Border created
%===============================================%

%============== Blood vessel from background ===========%
% Assigning new green component to g3
g3 = 255-g1; % Create new extracted green component
a = adapthisteq(g3); % Perform adaptive histogram equalization
%============ Figure 3.3.1e =============%

s2 = strel('ball',8,8); % Morphological opening with structuring element 'ball'
e2 = imerode(a,s2);
d2 = imdilate(e2,s2);


%============ Figure 3.3.1f =============%
f2 = a-d2; % Subtract from original image to show blood vessels vividly
%============ Figure 3.3.1g =============%
th = ~im2bw(f2,0.1);
%============ Figure 3.3.1h =============%
mf = medfilt2(th,[3 3]); % Perform median filtering to lessen noise
%============ Figure 3.3.1i =============%
f3 = mf-f1; % Image with boundary attained
Ifill = imfill(f3,'holes'); % Fill holes NOT touching edge

for x = 1:50 % Eliminate top border
    for y = 1:80
        f3(x,y) = 1;
    end
end

%================= Calculate area =================%
H = Ifill+f1;
Final = unwanted(H); % Final image
figure, imshow(Final);
%============ Figure 3.3.1j =============%
Final1 = ~Final;
figure, imshow(Final1);
%============ Figure 3.3.1k =============%

% Area calculation
L = 0;
for i = 1:size(Final)
    for j = 1:size(Final)
        if Final(i,j) == 0
            L = L+1;
        end
    end
end
L


APPENDIX C

MICROANEURYSMS MATLAB CODE

clear all
clc

% Read original retinal image
mi1 = imread('file name');
mi1 = imresize(mi1,[576 720]);

% mi1(:,:,1) = red component, mi1(:,:,2) = green component, mi1(:,:,3) = blue
r1 = mi1(:,:,1); % Extract red component
%============ Figure 3.3.2c =============%

% Inverting the red component
r2 = 255-r1;
%============ Figure 3.3.2d =============%

% Edge detection using canny method
ed = edge(r2,'canny');
%============ Figure 3.3.2e =============%

[row col] = size(ed);
for x = 2:5
    for y = 100:650
        ed(x,y) = 1;
    end
end
for x = 573:575
    for y = 100:650
        ed(x,y) = 1;
    end
end

%============= Border detection (NEW) =============%
Border = imfill(ed,'holes');

s1 = strel('disk',5);
e1 = imerode(Border,s1); % Perform erosion with disk of radius = 5
d1 = imdilate(Border,s1); % Perform dilation with disk of radius = 5
f1 = e1+(~d1); % Border created
%============ Figure 3.3.2f =============%
%===============================================%

G = f1-(~ed); % Edge detection without border
%============ Figure 3.3.2g =============%
K = imfill(G,'holes'); % Fill holes
%============ Figure 3.3.2h =============%
P = K-ed; % With unwanted artifacts
%============ Figure 3.3.2i =============%


%============= Blood vessel detection =============%
t3 = adapthisteq(r2);
se = strel('ball',8,8);
BW4 = imerode(t3,se);
BW5 = imdilate(BW4,se);
Im = t3-BW5;
BW3 = ~im2bw(Im,0.08);
B = BW3-(~f1);
Ifill = imfill(B,'holes');
%============ Figure 3.3.2j =============%

L = im2double(Ifill);
L1 = edge(L,'canny');
%============ Figure 3.3.2k =============%
%===================================================%

%================ Final improvisations =====================%
K = G-L1;
%============ Figure 3.3.2l =============%
Final = imfill(K,'holes');
%============ Figure 3.3.2m =============%
Final2 = Final-(~P);
figure, imshow(Final2);
%============ Figure 3.3.2n =============%

% Area calculation
L = 0;
for i = 1:size(Final2)
    for j = 1:size(Final2)
        if Final2(i,j) == 1
            L = L+1;
        end
    end
end
L


APPENDIX D

EXUDATES MATLAB CODE

clear all
clc

% Read original retinal image
ex1 = imread('file name');
ex1 = imresize(ex1,[576 720]);

% ex1(:,:,1) = red component, ex1(:,:,2) = green component, ex1(:,:,3) = blue
% Assigning green component to g1
g1 = ex1(:,:,2); % Extract green component
%============ Figure 3.3.3c =============%

% Morphological closing using the octagon structuring element
s1 = strel('octagon',9);
imc = imclose(g1,s1); % Morphological closing
%============ Figure 3.3.3d =============%

imc = double(imc);
fun = @var;
im2 = uint8(colfilt(imc,[11 11],'sliding',fun));
%============ Figure 3.3.3e =============%

th = im2bw(im2,0.7);
%============ Figure 3.3.3f =============%

s2 = strel('disk',10);
d1 = imdilate(th,s2); % Dilation
e1 = imerode(d1,s2); % Erosion
%============ Figure 3.3.3g =============%

ed = edge(uint8(e1),'canny');
%============ Figure 3.3.3h =============%
%===================================================%

G1 = rgb2gray(ex1); % Convert RGB image to grayscale
G2 = imadjust(G1); % Adjust image intensity values

% Detection of optic disc
max_Ie = max(max(G2)); % Finding maximum value on the image
[r, c] = find(G2 == max_Ie);
Rmed = median(r);
Cmed = median(c);
R = floor(Rmed);
C = floor(Cmed);

% Mask
IeSizeX = 576;
IeSizeY = 720;
radius = 82;
[x,y] = meshgrid(1:IeSizeY, 1:IeSizeX);
mask = sqrt((x-C).^2 + (y-R).^2) <= radius;


%============ Figure 3.3.3i =============%

% Optic disc removal
ex2 = imsubtract(e1, mask);
%============ Figure 3.3.3j =============%
%===================================================%

g2 = 255-g1; % Image inversion
bd = g2-225;
bde = edge(bd,'roberts'); % Edging
sq = ones(20,20); % Thickening of edges
d2 = imdilate(bde,sq);
fi = ex2-d2; % Subtracting edges
%============ Figure 3.3.3k =============%

for x = 1:10 % Eliminate top border
    for y = 1:720
        fi(x,y) = 0;
    end
end
for x = 560:576 % Eliminate bottom border
    for y = 1:720
        fi(x,y) = 0;
    end
end
for x = 1:576
    for y = 1:10 % Left
        fi(x,y) = 0;
    end
end
for x = 1:576
    for y = 710:720 % Right
        fi(x,y) = 0;
    end
end

s3 = strel('disk',3);
e2 = imerode(fi,s3); % Erosion
%============ Figure 3.3.3l =============%

% Area calculation
L = 0;
for i = 1:size(fi)
    for j = 1:size(fi)
        if fi(i,j) == 1
            L = L+1;
        end
    end
end
L


APPENDIX E

TEXTURES MATLAB CODE

function [output] = firstOrderStat(image)
    x = rgb2gray(image); % Convert to grayscale

    % Calculate mean
    mean = 0;
    for k = 1:255
        mean = mean + k*intenProb(k,x);
    end

    % Calculate standard deviation
    stddev = 0;
    y = int16(x)-int16(mean); % Convert from uint8 to int16 to avoid overflow
    z = y.*y;
    stddev = sum(sum(z));
    stddev = stddev/numel(x);
    stddev = sqrt(stddev);

    % Calculate third moment
    thirdMoment = 0;
    total = (y./stddev).^3; % (x(i) - mean)^3
    thirdMoment = sum(sum(total));
    thirdMoment = thirdMoment/numel(x); % Divide by number of elements N

    % Calculate entropy
    temp = 0;
    entropy = 0;
    for k = 1:255
        temp = (k-mean)*(k-mean)*intenProb(k,x);
        if (intenProb(k,x) ~= 0)
            entropy = entropy + intenProb(k,x)*log(intenProb(k,x));
        end
    end
    entropy = -1*entropy;

    % Print output
    output = struct('Mean',mean,'Deviation',stddev, ...
        'Third_Moment',thirdMoment,'Entropy',entropy);
end

function out = intenProb(i,x) % Function h(i), first-order statistic
    numOccur = 0;
    numOccur = sum(sum(x==i));
    out = numOccur/numel(x);
end


APPENDIX F

MEETING LOGS

Capstone project meeting log - 1
1 Date: 16 January 2010
2 Time: 12pm – 12.30pm
3 Duration: ½ hour
4 Minutes of current meeting: Overview of diabetes and diabetic retinopathy.
5 Action items / Targets to achieve: Find and read some related journals and online information regarding diabetic retinopathy, its stages, and diabetes.

Capstone project meeting log - 2
1 Date: 6 February 2010
2 Time: 11.15am – 11.45am
3 Duration: ½ hour
4 Minutes of current meeting: Discussed further the individual DR features such as blood vessels, exudates, microaneurysms and textures of normal and abnormal (DR) retinas at different stages. Overview of detection of DR based on the feature data and values using MATLAB.
5 Action items / Targets to achieve: Continue on literature review. Gained better understanding and had a rough idea of how to proceed with my proposal.

Capstone project meeting log - 3
Date: 13 February 2010
Time: 10.45am – 11.45am
Duration: 1 hour
Minutes of current meeting: Overview of some MATLAB commands. I am required to practise the MATLAB commands in preparation for writing image-processing code, and to begin my project proposal.
Action items / Targets to achieve: Start on proposal and practise MATLAB commands. Submitted proposal draft to supervisor for vetting before submission.


Capstone project meeting log - 4
Date: 13 March 2010
Time: 5pm – 5.30pm
Duration: ½ hour
Minutes of current meeting: Updated my ongoing MATLAB practice progress. Starting to write blood vessel extraction MATLAB code.
Action items / Targets to achieve: Continue literature review. Ongoing MATLAB practice.

Capstone project meeting log - 5
Date: 17 April 2010
Time: 11am – 11.30am
Duration: ½ hour
Minutes of current meeting: Updated my ongoing MATLAB practice progress and blood vessel code.
Action items / Targets to achieve: Continue literature review. Ongoing MATLAB practice. Start on interim report. Submitted interim report draft to supervisor for vetting before submission.

Capstone project meeting log - 6
Date: 8 May 2010
Time: 11am – 11.30am
Duration: ½ hour
Minutes of current meeting: Reported some problems with the MATLAB code (threshold value and structuring elements) for blood vessels. Received advice on finding the threshold value.
Action items / Targets to achieve: Continue literature review. Ongoing MATLAB practice. Determine the appropriate threshold and structuring element (SE) values.

Capstone project meeting log - 7
Date: 22 May 2010
Time: 11.15am – 11.45am
Duration: ½ hour
Minutes of current meeting: Finished obtaining the average threshold and structuring element values. Problems with the blood vessel area values were resolved.
Action items / Targets to achieve: Continue literature review. Ongoing MATLAB practice. Start on microaneurysm and exudate feature extraction coding.


Capstone project meeting log - 8
Date: 26 June 2010
Time: 11.30am – 12pm
Duration: ½ hour
Minutes of current meeting: Discussed the microaneurysm and exudate feature extraction code. Overview of obtaining p-values (statistical significance) for the different features across all images.
Action items / Targets to achieve: Continue literature review. Ongoing MATLAB practice. Explore the best way to obtain p-values for the different features on all images.

Capstone project meeting log - 9
Date: 10 July 2010
Time: 11.10am – 11.40am
Duration: ½ hour
Minutes of current meeting: Discussed texture feature extraction.
Action items / Targets to achieve: Continue literature review. Ongoing MATLAB practice. Start writing texture feature extraction code.

Capstone project meeting log - 10
Date: 31 July 2010
Time: 11.00am – 11.30am
Duration: ½ hour
Minutes of current meeting: Discussed the texture feature extraction code. Overview of classifiers and of using different classifiers to generate results.
Action items / Targets to achieve: Continue literature review. Ongoing MATLAB practice. Start writing classifier code and preparing training and testing data.

Capstone project meeting log - 11
Date: 14 August 2010
Time: 11.00am – 11.30am
Duration: ½ hour
Minutes of current meeting: Discussed classifier results and overview of creating a graphical user interface (GUI).
Action items / Targets to achieve: Start writing GUI code.


Capstone project meeting log - 12
Date: 21 August 2010
Time: 11.00am – 11.30am
Duration: ½ hour
Minutes of current meeting: Presented the GUI to the supervisor.
Action items / Targets to achieve: Prepare the materials to start writing the final report.


APPENDIX G

GANTT CHART
