
Page 1: PPT

Prediction of Cerebral Aneurysm Rupture

Q. Peter Lau, Wynne Hsu, Mong Li Lee
National University of Singapore
{plau, whsu, leeml}@comp.nus.edu.sg

Ying Mao, Liang Chen
Huashan Hospital, China
[email protected], [email protected]

Page 2: PPT

Cerebral Aneurysms

• Weak or thin spots on blood vessels in the brain that balloon out

• Common in the adult population (about 5%)

• Majority are small and do not rupture

Page 3: PPT

Cerebral Aneurysms

• Picture taken from: J. L. Brisman, J. K. Song, and D. W. Newell. Cerebral Aneurysms. The New England Journal of Medicine, 355(9):928–939, 2006.

Page 4: PPT

Cerebral Aneurysms

• Treatment involves invasive neurosurgery

• Surgical treatment often affects the patient’s quality of life

• However, a rupture can cause death

Page 5: PPT

Domain Experts (Neurosurgeons)

• Currently expert opinion on the factors that cause rupture is controversial

• Experts make decisions based on local experience

• There have been some studies of individual risk factors using statistical techniques

Page 6: PPT

Motivation

• At Huashan Hospital, advances in imaging technology have led to many more patients being discovered with aneurysms

• The current doctrine is surgical treatment, since it is the lesser of two evils

Page 7: PPT

Motivation

• Too many patients require surgery:
– a neurosurgeon may operate on 4 patients a day, each requiring 4 hours

• Need another way of predicting ruptures while expert opinion is still divided

Page 8: PPT

Data-mining Approach

• Patient data consists of patient records and human-annotated diagnoses from images

• Issues:
– Automatic class labeling
– Unequal misclassification cost
– Class imbalance

Page 9: PPT

Automatic Class Labeling

• Binary class:
– LR: Aneurysm will rupture within n months
– LNR: Aneurysm will not rupture within n months

• n months corresponds to the consultation interval

• n may be varied
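As a concrete illustration, the sketch below assigns these labels from hypothetical follow-up records; the field names and the exclusion of undecidable records are assumptions for the example, not the paper's exact procedure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FollowUp:
    """Hypothetical follow-up record for one aneurysm (field names are assumptions)."""
    months_to_rupture: Optional[float]  # None if no rupture was observed
    months_observed: float              # rupture-free follow-up length in months

def label(record: FollowUp, n: int) -> Optional[str]:
    """Assign LR / LNR for an n-month horizon (the consultation interval)."""
    if record.months_to_rupture is not None and record.months_to_rupture <= n:
        return "LR"    # ruptured within n months
    if record.months_observed >= n:
        return "LNR"   # observed rupture-free for at least n months
    return None        # follow-up too short to decide; excluded in this sketch

# Varying n changes the horizon, and therefore the label of some records.
print(label(FollowUp(months_to_rupture=4, months_observed=4), n=6))      # LR
print(label(FollowUp(months_to_rupture=None, months_observed=12), n=6))  # LNR
```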

Page 10: PPT

Unequal Misclassification Cost

• Case 1: Will not rupture classified as will rupture
– Patient undergoes needless surgery but is likely to survive

• Case 2: Will rupture classified as will not rupture
– Rupture occurs and is likely to be fatal

• The cost of case 2 is higher than that of case 1
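A minimal sketch of this asymmetric cost structure follows; the numeric costs are illustrative placeholders and only their ordering reflects the slide.

```python
# Illustrative cost matrix keyed by (true label, predicted label).
COST = {
    ("LNR", "LR"): 1.0,    # case 1: needless surgery, patient likely survives
    ("LR", "LNR"): 10.0,   # case 2: missed rupture, likely fatal
    ("LR", "LR"): 0.0,
    ("LNR", "LNR"): 0.0,
}

def total_cost(y_true, y_pred):
    """Sum the misclassification cost over a set of predictions."""
    return sum(COST[(t, p)] for t, p in zip(y_true, y_pred))

# One case-1 error plus one case-2 error.
print(total_cost(["LNR", "LR", "LR"], ["LR", "LNR", "LR"]))  # 1.0 + 10.0 = 11.0
```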

Page 11: PPT

Class Imbalance

• Most of the available data is from will rupture cases

• There is a tendency to focus more on the serious cases of the disease

• Thus “will not rupture” is the rare class (15% of the dataset)

Page 12: PPT

Data-mining Approach

• Much work has been done investigating these individually:
– Classification
– Rare class or rare case
– Feature selection
– Unequal misclassification costs
– Ensemble techniques

• Effects in combination not well established

Page 13: PPT

Methodology

• Run combinations of algorithms for the different tasks in order to find the “best” combination

• There are many combinations when ensembles are considered, so pruning is required

Page 14: PPT

Methodology (Filters)

[Diagram: filter combinations of Classification, Feature Selection, and Class Imbalance algorithms]

Page 15: PPT

Methodology (Ensembles)

[Diagram: Single Ensemble and Multiple Ensemble configurations]

Page 16: PPT

Methodology (Evaluation)

• Need to compare combinations

• Use an evaluation metric that reflects the unequal misclassification costs

• Score algorithm combinations accordingly

Page 17: PPT

Methodology (Evaluation)

• Precision & Recall of “will not rupture”– Precision(LNR) = T(LNR) / [T(LNR) + F(LNR)]

• The will not ruptures should not include will ruptures

– Recall (LNR) = T(LNR) / [T(LNR) + F(LR)]• We want to detect reasonable amount of will not ruptures

• Weighted F-measure (1 + b) Precision(LNR) Recall (LNR)

Fb = -------------------------------------------

b Precision(LNR) + Recall (LNR)

Page 18: PPT

Methodology (Evaluation)

• Use b = 0.5 to weight precision more favourably

• Intuition: We want to predict will not rupture cases more accurately than we want to detect them
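The sketch below computes Precision(LNR), Recall(LNR), and the weighted F-measure exactly as defined on the previous slide, with b applied as a direct weight (not squared); the helper names and toy labels are mine.

```python
def f_b(precision: float, recall: float, b: float = 0.5) -> float:
    """Weighted F-measure as on the slide: Fb = (1 + b)·P·R / (b·P + R)."""
    if precision == 0 and recall == 0:
        return 0.0
    return (1 + b) * precision * recall / (b * precision + recall)

def precision_recall_lnr(y_true, y_pred):
    """Precision and recall of the 'will not rupture' (LNR) class."""
    t_lnr = sum(t == "LNR" and p == "LNR" for t, p in zip(y_true, y_pred))
    f_lnr = sum(t == "LR" and p == "LNR" for t, p in zip(y_true, y_pred))
    f_lr = sum(t == "LNR" and p == "LR" for t, p in zip(y_true, y_pred))
    precision = t_lnr / (t_lnr + f_lnr) if t_lnr + f_lnr else 0.0
    recall = t_lnr / (t_lnr + f_lr) if t_lnr + f_lr else 0.0
    return precision, recall

p, r = precision_recall_lnr(["LNR", "LNR", "LR", "LR"], ["LNR", "LR", "LNR", "LR"])
print(p, r, f_b(p, r))  # 0.5 0.5 0.5
```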

Page 19: PPT

Methodology (Two stage)

• Stage 1: Run filter combinations with 10-fold CV

• Stage 2: Take the top few combinations and use them in single ensemble and multiple ensemble variants

• Output: combination with best F score
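A rough sketch of this two-stage search is below; the function names and the top-k cutoff are placeholders, and evaluate_cv is assumed to run 10-fold CV and return the weighted F score for one combination.

```python
from itertools import product

def two_stage_search(classifiers, feature_selectors, imbalance_handlers,
                     ensembles, evaluate_cv, top_k=3):
    # Stage 1: score every filter combination with 10-fold CV.
    stage1 = [(evaluate_cv(c, fs, im, ensemble=None), (c, fs, im))
              for c, fs, im in product(classifiers, feature_selectors,
                                       imbalance_handlers)]
    stage1.sort(key=lambda scored: scored[0], reverse=True)

    # Stage 2: wrap only the top-k stage-1 combinations in ensemble variants.
    stage2 = [(evaluate_cv(c, fs, im, ensemble=e), (c, fs, im, e))
              for _, (c, fs, im) in stage1[:top_k]
              for e in ensembles]

    # Output: the combination with the best F score across both stages.
    return max(stage1 + stage2, key=lambda scored: scored[0])
```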

Page 20: PPT

Methodology (Inputs)

• Classification:
– J48 (C4.5) decision tree
– Rule-based variant
– SMO variant of SVM
– Averaged One-Dependence Estimators (AODE)
– k-NN classifier using the Heterogeneous Value Difference Metric
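For illustration only, a rough scikit-learn analogue of these base classifiers is sketched below; the presentation used WEKA implementations, and the rule learner, AODE, and the HVDM k-NN have no direct scikit-learn counterparts, so naive Bayes and a plain Euclidean k-NN stand in.

```python
from sklearn.tree import DecisionTreeClassifier     # ~ J48 (C4.5)
from sklearn.svm import SVC                          # ~ SMO variant of SVM
from sklearn.naive_bayes import GaussianNB           # stand-in for AODE
from sklearn.neighbors import KNeighborsClassifier   # ~ k-NN (Euclidean, not HVDM)

classifiers = {
    "tree": DecisionTreeClassifier(),
    "svm": SVC(kernel="rbf"),
    "nb": GaussianNB(),
    "knn": KNeighborsClassifier(n_neighbors=5),
}
```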

Page 21: PPT

Methodology (Inputs)

• Feature selection/transformation:
– Fast Correlation-Based Filter using Symmetrical Uncertainty
– Ranking using Symmetrical Uncertainty
– Principal Component Analysis
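As a sketch of the ranking filter, the code below computes symmetrical uncertainty SU(X, Y) = 2·I(X; Y) / (H(X) + H(Y)) for nominal attributes; the toy attribute and labels are made up.

```python
from collections import Counter
from math import log2

def entropy(values):
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def symmetrical_uncertainty(x, y):
    """SU(X, Y) = 2 * I(X; Y) / (H(X) + H(Y)) for nominal attributes."""
    h_x, h_y = entropy(x), entropy(y)
    mutual_info = h_x + h_y - entropy(list(zip(x, y)))
    return 2 * mutual_info / (h_x + h_y) if h_x + h_y else 0.0

# Rank an attribute by its SU with the class label (made-up toy data).
size = ["small", "small", "large", "large"]
label = ["LNR", "LNR", "LR", "LR"]
print(symmetrical_uncertainty(size, label))  # 1.0: perfectly informative attribute
```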

Page 22: PPT

Methodology (Inputs)

• Some class imbalance algorithms (out of 17):
– Tomek Links for under-sampling
– Cluster-Based Sampling
– Synthetic Minority Over-sampling TEchnique (SMOTE)
– Cluster-Based SMOTE
– Sampling with Tomek Links

• Sample the rare class to 1:1 and 1:2 ratios
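A minimal sketch with imbalanced-learn stand-ins for a few of these samplers follows; imblearn is my assumption for the example, not the implementation used in the paper.

```python
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import TomekLinks
from imblearn.combine import SMOTETomek

# sampling_strategy is the desired minority:majority ratio after resampling,
# e.g. 1.0 for a 1:1 ratio and 0.5 for a 1:2 ratio.
samplers = {
    "tomek": TomekLinks(),
    "smote_1_1": SMOTE(sampling_strategy=1.0),
    "smote_1_2": SMOTE(sampling_strategy=0.5),
    "smote_tomek": SMOTETomek(sampling_strategy=1.0),
}
# Usage: X_res, y_res = samplers["smote_1_1"].fit_resample(X, y)
```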

Page 23: PPT

Methodology (Inputs)

• Single Ensemble:
– Bagging
– Boosting

• Multiple Ensemble:
– Stacking with various meta-classification algorithms
– Voting (un-weighted sum)
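The single- and multiple-ensemble variants can be sketched with scikit-learn meta-learners as below; these are assumed stand-ins for the WEKA meta-classifiers actually used.

```python
from sklearn.ensemble import (AdaBoostClassifier, BaggingClassifier,
                              StackingClassifier, VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

# Single ensembles wrap one base classifier.
single_ensembles = {
    "bagging": BaggingClassifier(DecisionTreeClassifier()),
    "boosting": AdaBoostClassifier(DecisionTreeClassifier(max_depth=1)),
}

# Multiple ensembles combine several base classifiers.
base_learners = [("tree", DecisionTreeClassifier()), ("lr", LogisticRegression())]
multiple_ensembles = {
    # Stacking trains a meta-classifier on the base classifiers' predictions.
    "stacking": StackingClassifier(estimators=base_learners,
                                   final_estimator=LogisticRegression()),
    # Un-weighted (majority) voting over the base classifiers' predictions.
    "voting": VotingClassifier(estimators=base_learners, voting="hard"),
}
```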

Page 24: PPT

Experiments

• Used the algorithms in WEKA, augmented with implementations of those absent

• Run the methodology on 12 other UCI datasets

• 10-fold CV for each combination is done in parallel
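A sketch of running 10-fold CV for one combination in parallel with scikit-learn is below; the synthetic data and the use of sklearn's conventional F-beta scorer (which weights by beta squared) are assumptions made for the example.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import fbeta_score, make_scorer
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced stand-in data (roughly 15% minority, as in the aneurysm set).
X, y = make_classification(n_samples=300, weights=[0.85], random_state=0)

# beta < 1 favours precision, mirroring the slides' choice of weighting
# precision more heavily for the rare class (here: label 1).
scorer = make_scorer(fbeta_score, beta=0.5, pos_label=1)

# The 10 folds are evaluated in parallel across all available cores.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y,
                         cv=10, scoring=scorer, n_jobs=-1)
print(round(scores.mean(), 3))
```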

Page 25: PPT

Experiments

• Score gain over best plain classification

• The methodology handles the rare class problem with varying success

• No dataset is worse off after stage one

[Chart: Score Gain vs. majority/minority class ratio for the datasets ecoli, glass, flags, aneurysm, sponge, zoo, vehicle, hepatitis, autos, credit-german, haberman, wine, and primary-tumor]

Page 26: PPT

Experiments

• Most of the gain is in stage 1

• For our aneurysm dataset, every bit of gain is useful

[Chart: Score Gain per dataset, split into Stage 1 Gain and Stage 2 Gain, for ecoli, glass, flags, aneurysm, sponge, zoo, vehicle, hepatitis, autos, credit-german, haberman, wine, and primary-tumor]

Page 27: PPT

Observations

• If one filter performs better than another, it is not always true that every combination containing it is better than the rest

• Taking the top combinations from stage 1 for ensemble methods is not always optimal

• The number of base algorithms combined in multiple ensembles does not directly relate to a better score

Page 28: PPT

Prediction Tool

• A prediction tool was implemented using the best algorithm combination from the methodology

• This normally involves an ensemble:
– The model is not human-understandable
– Attempt to allow a novice user to explore the prediction reasoning

Page 29: PPT

Prediction Tool

• On-the-fly prediction

Page 30: PPT

Prediction Tool

• Nearest Neighbours visualization

Page 31: PPT

Prediction Tool (Limitations)

• More robust techniques will be required to provide human-understandable reasoning

• However, this is difficult: algorithms that create such models perform poorly (in terms of accuracy) on our dataset

Page 32: PPT

Conclusion

• An application of a large variety of data-mining techniques to predict aneurysm rupture

• A systematic methodology to achieve the best combination of algorithms for prediction was presented

Page 33: PPT

Conclusion

• The prediction tool is implemented and deployed at the hospital; currently the 10-fold CV accuracy is 92%

• More advanced techniques are needed for exploring prediction reasoning to improve user confidence

• Alternatively, test more patients in a trial-run to “show” accuracy

Page 34: PPT

The End.

• Q&A

Q. Peter Lau, Wynne Hsu, Mong Li Lee
National University of Singapore
{plau, whsu, leeml}@comp.nus.edu.sg

Ying Mao, Liang Chen
Huashan Hospital, China
[email protected], [email protected]