
Page 1: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

Evaluation of Supervised Learning Algorithms on Gene Expression Data

CSCI 6505 – Machine Learning

Adan Cosgaya [email protected]

Winter 2006, Dalhousie University


Page 2: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

2 / 18

Outline

Introduction Definition of the Problem Related Work Algorithms Description of the Data Methodology of Experiments Results Relevance of Results Conclusions & Future Work

Page 3: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

3 / 18

Introduction

ML has gained attention in the biomedical field: there is a need to turn biomedical data into meaningful information.

Microarray technology is used to generate gene expression data.

Gene expression data involves a huge number of numeric attributes (gene expression measurements).

This kind of data is also characterized by a small number of instances.

This work investigates the classification problem on such data.

Page 4: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

4 / 18

Definition of the Problem

Classifying gene expression data: the number of features (n) is much greater than the number of sample instances (m), i.e. n >> m.

Typical data: n > 5000 and m < 100.

There is a high risk of overfitting the data due to the abundance of attributes and the shortage of available samples (a small demonstration follows below).

The datasets produced by microarray experiments are highly dimensional and often noisy due to the process involved in the experiments.
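To make the overfitting risk concrete, here is a minimal sketch (not part of the original slides, assuming scikit-learn and NumPy) on synthetic data with microarray-like dimensions: a classifier can fit purely random labels perfectly on the training set, while cross-validated accuracy stays near chance.

```python
# Minimal sketch (assumption: scikit-learn); illustrates why n >> m invites
# overfitting, using synthetic data with a microarray-like shape.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 5000))   # m = 72 samples, n = 5000 features
y = rng.integers(0, 2, size=72)   # labels are pure noise

tree = DecisionTreeClassifier(random_state=0)
tree.fit(X, y)
print("training accuracy:", tree.score(X, y))                             # ~1.0
print("10-fold CV accuracy:", cross_val_score(tree, X, y, cv=10).mean())  # ~0.5
```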

Page 5: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

5 / 18

Related Work

Using gene expression data for the task of classification has recently gained attention in the biomedical community.

Golub et al. describe an approach to cancer classification based on gene expression applied to human acute Leukemia (ALL vs AML).

A. Rosenwald et al. developed a model predictor of patient survival after chemotherapy (Alive vs Dead).

Furey et al. present a method to analyze microarray expression data using SVM.

Guyon et al. experiment with reducing the dimensionality of gene expression data.

Page 6: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

6 / 18

Algorithms

K-Nearest Neighbor (KNN): one of the simplest and most widely used algorithms for data classification.

Naive Bayes (NB): assumes that the effect of a feature value on a given class is independent of the values of the other features.

Decision Trees (DT): internal nodes represent tests on one or more attributes, and leaf nodes indicate decision outcomes.

Support Vector Machines (SVM): work well on high-dimensional data.

(A code sketch instantiating these four classifiers follows below.)
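The slides do not say which software was used (the GainRatio criterion on a later slide suggests WEKA); as an illustration only, here is a minimal scikit-learn sketch of the four classifiers being compared. The hyperparameter values shown are arbitrary placeholders, not the settings used in the experiments.

```python
# Minimal sketch (assumption: scikit-learn; the original experiments may have
# used other software). Instantiates the four classifiers compared in the slides.
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=3),    # simple distance-based classifier
    "NB":  GaussianNB(),                           # assumes conditional feature independence
    "DT":  DecisionTreeClassifier(random_state=0), # attribute tests at internal nodes
    "SVM": SVC(kernel="linear"),                   # linear kernels are common when n >> m
}
```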

Page 7: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

7 / 18

Description of the Data

Leukemia dataset: a collection of 72 expression measurements. The samples are divided into two variants of leukemia: 25 samples of acute myeloid leukemia (AML) and 47 samples of acute lymphoblastic leukemia (ALL).

Diffuse Large-B-Cell Lymphoma (DLBCL) dataset: biopsy samples that were examined for gene expression with the use of DNA microarrays. Each sample corresponds to the prediction of survival after chemotherapy for diffuse large-B-cell lymphoma (Alive, Dead).

Dataset     # Instances   # Classes   # Features   # Features after feature selection
Leukemia         72            2          7129                  1026
DLBCL           240            2          7399                    68

Page 8: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

8 / 18

Methodology of Experiments

Feature selection: remove irrelevant features (though they may have biological meaning); use of GainRatio.

Selecting a supervised learning method: KNN, NB, DT, SVM.

Testing methodology: evaluation over an independent test set (train/test split) with ratios 66/34, 80/20, and 90/10; 10-fold cross-validation; compare both methods and see whether they are in logical agreement. (A sketch of this pipeline follows the diagram below.)

[Diagram: all features → feature selection (gene subset) → algorithm]
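As an illustration only (not from the original slides), here is a minimal sketch of this evaluation pipeline, assuming scikit-learn. The slides use GainRatio for feature selection; scikit-learn has no gain-ratio scorer, so mutual information is used here as a stand-in, the number of selected features is an arbitrary placeholder, and the data are synthetic stand-ins for an expression matrix.

```python
# Minimal sketch of the evaluation pipeline (assumptions: scikit-learn;
# mutual information stands in for GainRatio; synthetic data stand in for
# the real expression matrix and class labels).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 7129))   # stand-in for the Leukemia expression matrix
y = rng.integers(0, 2, size=72)   # stand-in class labels (e.g., ALL vs. AML)

pipe = Pipeline([
    ("select", SelectKBest(mutual_info_classif, k=100)),  # k=100 is a placeholder
    ("clf", SVC(kernel="linear")),
])

# Evaluation over an independent test set (66/34 split shown; 80/20 and 90/10 are analogous).
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.34, stratify=y, random_state=0)
pipe.fit(X_tr, y_tr)
print("hold-out accuracy:", pipe.score(X_te, y_te))

# 10-fold cross-validation on the full dataset, for comparison with the split.
print("10-fold CV accuracy:", cross_val_score(pipe, X, y, cv=10).mean())
```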

Page 9: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

9 / 18

Methodology of Experiments (cont…)

Measuring performance: accuracy, precision (p), recall (r), and F-measure.

It is hard to compare two classifiers using two measures; the F-measure combines precision and recall into one measure.

The F-measure is the harmonic mean of precision and recall. For F to be large, both p and r must be large. (A short sketch computing these measures follows the formulas below.)

\[ \text{Accuracy} = \frac{\text{Number of correct classifications}}{\text{Total number of test cases}} \]

\[ F\text{-measure} = \frac{2\,p\,r}{p + r} \]
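As an illustration only (not in the original slides), here is a minimal sketch, assuming scikit-learn, that computes these measures on toy predictions and checks that the F-measure equals 2pr/(p + r).

```python
# Minimal sketch (assumption: scikit-learn). Computes accuracy, precision,
# recall, and F-measure for toy predictions, and verifies that the F-measure
# is the harmonic mean 2*p*r / (p + r) of precision and recall.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # toy ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # toy classifier output

acc = accuracy_score(y_true, y_pred)
p = precision_score(y_true, y_pred)
r = recall_score(y_true, y_pred)
f = f1_score(y_true, y_pred)

print(f"accuracy={acc:.3f} precision={p:.3f} recall={r:.3f} F={f:.3f}")
assert abs(f - 2 * p * r / (p + r)) < 1e-12   # harmonic mean of p and r
```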

Page 10: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

10 / 18

Results

Without Feature Selection

[Charts: classification accuracy (%) of KNN, NB, DT, and SVM on the Leukemia and DLBCL datasets without feature selection, for the train/test split and for 10-fold cross-validation.]

On Leukemia, Naive Bayes and SVM perform better; on DLBCL, KNN and SVM perform better.

Cross-validation accuracies are lower than the train/test-split accuracies; because cross-validation uses nearly all the data for both training and testing, it gives a more realistic estimate.

Page 11: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

11 / 18

Results (cont…)

With Feature Selection

[Charts: classification accuracy (%) of KNN, NB, DT, and SVM on the Leukemia and DLBCL datasets with feature selection, for the train/test split and for 10-fold cross-validation.]

On Leukemia, KNN and SVM perform better; on DLBCL, NB and SVM perform better.

There is an increase in the overall accuracy, which is more noticeable on DLBCL.

Page 12: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

12 / 18

Results (cont…)

Summary of classification accuracies with cross-validation

        Leukemia dataset                      DLBCL dataset
        All features   Feature selection     All features   Feature selection
KNN        87.500           98.611              62.917           62.250
NB         98.611           97.222              59.167           70.833
DT         86.111           84.722              56.250           64.167
SVM        98.611           98.611              57.917           71.250

F-Measures for both datasets with and without feature selection

[Chart: F-measure of KNN, NB, DT, and SVM on Leukemia and DLBCL, with all features and with feature selection (F.S.).]

Page 13: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

13 / 18

Relevance of Results

Performance depends on the characteristics of the problem, the quality of the measurements in the data, and the ability of the classifier to find regularities in the data.

Feature selection helps to minimize the use of redundant and/or noisy features.

SVMs gave the best results; they perform well with high-dimensional data and also benefit from feature selection.

Decision Trees had the worst overall performance; however, they still work at a competitive level.

Page 14: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

14 / 18

Relevance of Results (cont…)

Surprisingly, KNN behaves relatively well despite its simplicity; this simplicity allows it to scale well to large feature spaces.

On the Leukemia dataset, very high accuracies were achieved by all the algorithms, with perfect accuracy in many cases.

The DLBCL dataset shows lower accuracies, although feature selection improved them.

Overall, the observations from the accuracy results are consistent with those from the F-measure, giving us confidence in the relevance of the results obtained.

Page 15: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

15 / 18

Conclusions & Future Work

Supervised learning algorithms can be used for the classification of gene expression data from DNA microarrays with high accuracy.

SVMs, by their very nature, deal well with high-dimensional gene expression data.

We have verified that there are subsets of features (genes) that are more relevant than others and better separate the classes.

The use of one algorithm instead of another should be evaluated on a case-by-case basis.

Page 16: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

16 / 18

Conclusions & Future Work (cont…)

The use of feature selection proved beneficial in improving the overall performance of the algorithms. This idea can be extended to other feature selection methods or to data transformations such as PCA (a brief sketch follows below).
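As an illustration only (not part of the original slides), here is a minimal sketch, assuming scikit-learn, of swapping the GainRatio-style feature selection for a PCA transformation in the evaluation pipeline; the number of components and the synthetic data are placeholders.

```python
# Minimal sketch (assumption: scikit-learn). Replaces univariate feature
# selection with a PCA data transformation, as suggested for future work.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(72, 7129))   # stand-in for an expression matrix
y = rng.integers(0, 2, size=72)   # stand-in class labels

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=20)),   # 20 components is an arbitrary placeholder
    ("clf", SVC(kernel="linear")),
])
print("10-fold CV accuracy:", cross_val_score(pipe, X, y, cv=10).mean())
```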

A further direction is the analysis of the effect of noisy gene expression data on the reliability of the classifier.

While the scope of our experimental results is confined to a couple of datasets, the analysis can be used as a baseline for future use of supervised learning algorithms for gene expression data.

Page 17: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

17 / 18

References

T.R. Golub et al. Molecular classification of cancer: class discovery and class prediction by gene-expression monitoring. Science, Vol. 286, 531–537, 1999.

A. Rosenwald, G. Wright, W. C. Chan, et al. The use of molecular profiling to predict survival after chemotherapy for diffuse large B-cell lymphoma. New England Journal of Medicine, Vol. 346, 1937–1947, 2002.

Terrence S. Furey, Nello Cristianini, et al. Support vector machine classification and validation of cancer tissue samples using microarray expression data. Bioinformatics, Vol. 16, 906–914, 2001.

I. Guyon, J. Weston, S. Barnhill, and V. Vapnik. Gene selection for cancer classification using support vector machines. BIOWulf Technical Report, 2000.

Ethem Alpaydin. Introduction to Machine Learning. The MIT Press, 2004.

Ian H. Witten, Eibe Frank. Data Mining: Practical Machine Learning Tools and Techniques. Second Edition. Morgan Kaufmann Publishers, 2005.

Wikipedia: www.wikipedia.org

Alvis Brazma, Helen Parkinson, Thomas Schlitt, Mohammadreza Shojatalab. A quick introduction to elements of biology: cells, molecules, genes, functional genomics, microarrays. European Bioinformatics Institute.

Page 18: Evaluation of Supervised Learning Algorithms on Gene Expression Data CSCI 6505 – Machine Learning

18 / 18

Thank You!