Discriminative Frequent Pattern Analysis for Effective Classification


Page 1: Discriminative Frequent Pattern Analysis for Effective Classification


Discriminative Frequent Pattern Analysis for Effective Classification

Presenter: Han Liang

COURSE PRESENTATION:

Page 2: Discriminative Frequent Pattern Analysis for Effective Classification


Outline
> Motivation
> Academic Background
> Methodologies
> Experimental Study
> Contributions

Page 3: Discriminative Frequent Pattern Analysis for Effective Classification


Motivation

Frequent patterns are potentially useful in many classification tasks, such as association rule-based classification, text mining, and protein structure prediction.

Frequent patterns can accurately reflect underlying semantics among items (attribute-value pairs).

Page 4: Discriminative Frequent Pattern Analysis for Effective Classification


Introduction

This paper investigates the connection between the support of a pattern and its information gain (a discriminative measure), and proposes a pattern selection algorithm. The selected frequent patterns can be used to build high-quality classifiers.

Experiments on UCI data sets indicate that the frequent pattern-based classification framework achieves high classification accuracy and good scalability.

Page 5: Discriminative Frequent Pattern Analysis for Effective Classification


Classification - A Two-Step Process

Step 1: Classifier construction - learning the underlying class probability distributions.
> The set of instances used for building the classifier is called the training data set.
> The learned classifier can be represented as classification rules, decision trees, or mathematical formulae (e.g. Bayesian rules).

Step 2: Classifier usage - classifying unlabeled instances.
> Estimate the accuracy of the classifier: the accuracy rate is the percentage of test instances correctly classified by the classifier.
> If the accuracy is acceptable, use the classifier to classify instances whose class labels are not known.

Page 6: Discriminative Frequent Pattern Analysis for Effective Classification


Classification Process I - Classifier Construction

Training data:

NAME  RANK            YEARS  TENURED
Mike  Assistant Prof  3      No
Mary  Assistant Prof  7      Yes
Bill  Professor       2      Yes
Jim   Associate Prof  7      Yes
Dave  Assistant Prof  6      No
Anne  Associate Prof  3      No

Training data -> classification algorithm -> learned classifier:

IF rank = 'Professor' OR years > 6 THEN tenured = 'Yes'

Page 7: Discriminative Frequent Pattern Analysis for Effective Classification


Classification Process II - Use the Classifier in Prediction

Test data:

NAME     RANK            YEARS  TENURED
Tom      Assistant Prof  2      No
Merlisa  Associate Prof  7      No
George   Professor       5      Yes
Joseph   Assistant Prof  7      Yes
Russ     Associate Prof  6      Yes
Harry    Professor       3      Yes

Learned classifier: IF rank = 'Professor' OR years > 6 THEN tenured = 'Yes'

Unseen data: (Jeff, Professor, 4). Tenured?

Page 8: Discriminative Frequent Pattern Analysis for Effective Classification


Association Rule-based Classification (ARC)

Classification rule mining: discovering a small set of classification rules that forms an accurate classifier.

> The data set E is represented by a set of items (or attribute-value pairs) I = {a_1, ..., a_n} and a set of class memberships C = {c_1, ..., c_m}.
> Classification rule: X => Y, where X is the body and Y is the head.
> X is a set of items (a subset of I, denoted X ⊆ I).
> Y is a class membership item.
> Confidence of a classification rule: conf = S(X ∪ Y) / S(X).
> Support S(X): the number of training instances that satisfy X (both definitions are sketched in code below).
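As a concrete illustration of these two definitions, here is a minimal Python sketch. It assumes each training instance is a (frozenset of (attribute, value) items, class label) pair; this representation is chosen for illustration and is not prescribed by the slides.

```python
def support(itemset, instances):
    """S(X): number of training instances whose items satisfy X."""
    return sum(1 for items, _ in instances if itemset <= items)

def confidence(body, head, instances):
    """conf(X => Y) = S(X and Y) / S(X); assumes the body occurs at
    least once in the data, otherwise the division is undefined."""
    matching = [label for items, label in instances if body <= items]
    return sum(1 for label in matching if label == head) / len(matching)
```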

Page 9: Discriminative Frequent Pattern Analysis for Effective Classification


ARC-II: Mining with Apriori

Generate all classification rules with support and confidence larger than predefined values.

> Divide the training data set into several subsets, one subset per class membership.
> For each subset, use off-the-shelf rule mining algorithms (e.g. Apriori) to mine all item sets above the minimum support; these are called frequent item sets.
> Output rules by dividing each frequent item set into a rule body (attribute-value pairs) and a head (one class label).
> Check whether the confidence of each rule is above the minimum confidence.
> Merge the rules from all subsets, and sort them according to their confidences (see the mining sketch below).

mining -> pruning -> classification
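A minimal brute-force sketch of this per-class mining step, using the same hypothetical (frozenset of items, label) representation as above. Exhaustive enumeration stands in for a real level-wise Apriori implementation, and min_support is an absolute count:

```python
from collections import defaultdict
from itertools import combinations

def mine_rules(instances, min_support, min_confidence):
    """Return (body, head, confidence) rules, sorted by confidence."""
    by_class = defaultdict(list)
    for items, label in instances:
        by_class[label].append(items)

    rules = []
    for label, subset in by_class.items():
        # Count every itemset occurring in this class subset
        # (exponential enumeration; real Apriori prunes candidates level-wise).
        counts = defaultdict(int)
        for items in subset:
            for k in range(1, len(items) + 1):
                for combo in combinations(sorted(items), k):
                    counts[frozenset(combo)] += 1
        # Keep frequent itemsets, then turn each into a rule body => label
        for body, class_count in counts.items():
            if class_count < min_support:
                continue
            body_count = sum(1 for items, _ in instances if body <= items)
            conf = class_count / body_count  # conf = S(X and Y) / S(X)
            if conf >= min_confidence:
                rules.append((body, label, conf))
    return sorted(rules, key=lambda r: -r[2])
```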

Page 10: Discriminative Frequent Pattern Analysis for Effective Classification


ARC-III: Rule Pruning

Prune the classification rules with the goal of improving accuracy.

> Simple strategy: bound the number of rules.
> Pessimistic error-rate based pruning: for a rule, if removing a single item from the rule body decreases the estimated error rate, prune this rule.
> Data set coverage approach (sketched below):
> If a rule can classify at least one instance correctly, put it into the resulting classifier.
> Delete all covered instances from the training data set.

mining -> pruning -> classification
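A minimal sketch of the data set coverage approach, matching the (body, head, confidence) rule tuples produced by the mining sketch above; this illustrates the idea and is not the paper's exact procedure:

```python
def coverage_prune(sorted_rules, training_data):
    """Keep a rule only if it classifies at least one remaining instance
    correctly; then delete the instances its body covers."""
    selected = []
    remaining = list(training_data)
    for body, head, conf in sorted_rules:
        covered = [(items, label) for items, label in remaining
                   if body <= items]
        if any(label == head for _, label in covered):
            selected.append((body, head, conf))
            remaining = [inst for inst in remaining if inst not in covered]
        if not remaining:
            break
    return selected
```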

Page 11: Discriminative Frequent Pattern Analysis for Effective Classification


ARC-IV: Classification

Use the resulting classification rules to classify unseen instances (sketched below).

> Input: the pruned, sorted list of classification rules.
> Two different approaches:
> Majority vote.
> Use the first rule that is applicable to the unseen instance.

mining -> pruning -> classification
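A sketch of the second approach, again with hypothetical names; the default class returned when no rule covers the instance is an assumption, since the slide does not specify one:

```python
def classify(sorted_rules, instance_items, default_class=None):
    """Apply the first (highest-confidence) rule whose body matches."""
    for body, head, _conf in sorted_rules:
        if body <= instance_items:
            return head
    return default_class

# The tenure example from the earlier slides, with the rank condition
# of the learned rule encoded as a single-item body:
rules = [(frozenset({("rank", "Professor")}), "Yes", 1.0)]
print(classify(rules, frozenset({("rank", "Professor"), ("years", 4)})))  # Yes
```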

Page 12: Discriminative Frequent Pattern Analysis for Effective Classification


The Framework of Frequent Pattern-based Classification (FPC)

> Discriminative Power vs. Information Gain
> Pattern Selection

Page 13: Discriminative Frequent Pattern Analysis for Effective Classification

Why Are Frequent Patterns Useful?

> Frequent Pattern
- A non-linear combination of single features.
- Increases the expressive (or discriminative) power of the feature space.
* Exclusive-OR example: the data is linearly separable in (x, y, xy), but not in (x, y) (verified in the code sketch below the tables).

X  Y  C
0  0  0
0  1  1
1  0  1
1  1  0

Transform to:

X  Y  XY  C
0  0  0   0
0  1  0   1
1  0  0   1
1  1  1   0

Linear classifier: f = x + y - 2xy
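A short check that this linear function over the augmented feature space (x, y, xy) reproduces XOR exactly:

```python
# XOR truth table: (x, y) -> class c
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# Linear in the augmented feature space (x, y, xy)
def f(x, y):
    return x + y - 2 * (x * y)

for (x, y), c in data:
    assert f(x, y) == c  # f reproduces the XOR class on every row
```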

Page 14: Discriminative Frequent Pattern Analysis for Effective Classification

Discriminative Power vs. Information Gain - I

The discriminative power of a pattern is evaluated by its information gain.

Pattern-based information gain:
> Data set S is divided into several subsets, each corresponding to a class membership; S can be denoted S = {S_1, ..., S_i, ..., S_m}.
> For a pattern X, S is divided into two subsets: the group where pattern X is applicable and the group where pattern X is rejected (binary splitting).
> The information gain of pattern X is calculated via

Gain(X) = I(S_1, S_2, \ldots, S_m) - E(X)

where I(s_1, s_2, \ldots, s_m) is represented by

I(s_1, s_2, \ldots, s_m) = -\sum_{i=1}^{m} \frac{s_i}{s} \log_2 \frac{s_i}{s}

and E(X) is computed by

E(X) = \sum_{j \in \{x, \bar{x}\}} \frac{|S_j|}{|S|} I(S_j)
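A direct transcription of these formulas into Python, assuming per-class instance counts are given for the group where X applies and the group where it is rejected (a hypothetical interface for illustration):

```python
import math

def class_entropy(counts):
    """I(s_1, ..., s_m) over raw per-class counts."""
    s = sum(counts)
    return -sum(c / s * math.log2(c / s) for c in counts if c > 0)

def pattern_gain(counts_with_x, counts_without_x):
    """Gain(X) = I(S_1, ..., S_m) - E(X) for the binary split on X."""
    total = [a + b for a, b in zip(counts_with_x, counts_without_x)]
    n = sum(total)
    e_x = (sum(counts_with_x) / n) * class_entropy(counts_with_x) \
        + (sum(counts_without_x) / n) * class_entropy(counts_without_x)
    return class_entropy(total) - e_x
```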

Page 15: Discriminative Frequent Pattern Analysis for Effective Classification

Discriminative Power vs. Information Gain - II

> To simplify the analysis, assume pattern X ∈ {0,1} and C = {0,1}. Let P(x=1) = θ, P(c=1) = p, and P(c=1|x=1) = q.
> Then

E(X) = -\sum_{x \in \{0,1\}} P(x) \sum_{c \in \{0,1\}} P(c|x) \log_2 P(c|x)

can be instantiated as

E(X) = -\theta \left( q \log_2 q + (1-q) \log_2 (1-q) \right) - (1-\theta) \left( r \log_2 r + (1-r) \log_2 (1-r) \right)

with r = P(c=1|x=0) = (p - θq) / (1 - θ) by the law of total probability, and where θ and q are actually the support and confidence of pattern X.
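The same quantity written directly in terms of support and confidence; a minimal sketch assuming 0 < θ < 1 so the rejected group is non-empty:

```python
import math

def h(p):
    """Binary entropy in bits; 0 at the endpoints by convention."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def gain(theta, p, q):
    """IG of pattern X from support theta, class prior p, confidence q."""
    r = (p - theta * q) / (1 - theta)  # P(c=1 | x=0), total probability
    return h(p) - (theta * h(q) + (1 - theta) * h(r))
```

Sweeping theta for a fixed class prior p reproduces the behavior plotted on the next slide: the gain shrinks toward zero at both very low and very high support.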

Page 16: Discriminative Frequent Pattern Analysis for Effective Classification

Discriminative Power vs. Information Gain - III

> Given a data set with a fixed class probability distribution, I(S_1, S_2, ..., S_m) is a constant.
> A mathematical analysis leads to the following two conclusions:
> A pattern with low support has low discriminative power as measured by information gain. It harms classification accuracy because it over-fits the training data.
> A pattern with very high support also has low discriminative power as measured by information gain. It is useless for improving classification accuracy and will increase the bias.
> These conclusions are confirmed by experiments on UCI data sets.

[Figure: information gain vs. pattern support on the Austral and Sonar data sets. The x-axis is the support of a pattern and the y-axis its information gain; both low-support and very high-support patterns have small information gain values.]

Page 17: Discriminative Frequent Pattern Analysis for Effective Classification

Pattern Selection Algorithm MMRFS

Relevance: a relevance measure S is a function mapping a pattern X to a real value; it benchmarks how relevant pattern X is to the class label.

> Information gain can be used as a relevance measure.
> A pattern can be selected if it is relevant to the class label as measured by information gain.

Redundancy: a redundancy measure R is a function mapping two patterns X and Z to a real value, such that R(X, Z) is the redundancy value between them.

> Mapping function (sketched in code below):

R(X, Z) = \frac{P(X, Z)}{P(X) + P(Z) - P(X, Z)} \min(S(X), S(Z))

> A pattern can be chosen if it has very low redundancy with the patterns already selected.

Page 18: Discriminative Frequent Pattern Analysis for Effective Classification

Pattern Selection Algorithm MMRFS-II

The algorithm searches over the pattern space in a greedy way.

At first, the pattern with the highest relevance value (information gain) is selected. Then the algorithm selects more patterns from F one by one.

A pattern is chosen if it has the maximum gain among the remaining patterns F - F_s, where the gain is calculated by

g(X) = S(X) - \max_{Z \in F_s} R(X, Z)

The coverage parameter δ is set to ensure that each training instance is covered at least δ times by the selected patterns. In this way, the number of patterns selected is determined automatically. (A simplified greedy sketch follows.)
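A simplified greedy sketch of this selection loop; relevance S and redundancy R are the functions defined above, and a fixed budget stands in for the coverage-based stopping rule (an assumption to keep the sketch short):

```python
def mmrfs_select(patterns, S, R, budget):
    """Greedy selection: start from the most relevant pattern, then
    repeatedly add the pattern maximizing g(X) = S(X) - max_Z R(X, Z)."""
    selected = [max(patterns, key=S)]
    candidates = [p for p in patterns if p != selected[0]]
    while candidates and len(selected) < budget:
        best = max(candidates,
                   key=lambda x: S(x) - max(R(x, z) for z in selected))
        selected.append(best)
        candidates.remove(best)
    return selected
```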

Page 19: Discriminative Frequent Pattern Analysis for Effective Classification

Experimental Study

Basic learning classifiers, used to classify unseen instances:
> C4.5 and SVM.

For each data set, a set of frequent patterns F is generated.
> A basic classifier built on all patterns in F is called Pat_All.
> A basic classifier built on a set of selected patterns F_s is called Pat_FS.
> For comparison, the authors also include basic classifiers built on all single features, called Item_All, and on a set of selected single features, called Item_FS.

The evaluation measure is classification accuracy. All experimental results are obtained by ten-fold cross validation (sketched below).
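A minimal sketch of this evaluation protocol with scikit-learn, using a synthetic stand-in for the binary pattern-occurrence matrix (in the actual experiments the columns would come from the mined pattern set F or F_s):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 50))  # synthetic 0/1 pattern features
y = rng.integers(0, 2, size=200)        # synthetic class labels

# Ten-fold cross-validated accuracy, as in the experiments described above
scores = cross_val_score(LinearSVC(dual=False), X, y, cv=10)
print(f"mean accuracy over 10 folds: {scores.mean():.3f}")
```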


Page 20: Discriminative Frequent Pattern Analysis for Effective Classification

Experimental Study-II


Table 1 shows the results with SVM.

Pat_FS is better than Item_All and Item_FS. This indicates that:
> the discriminative power of some frequent patterns is higher than that of single features.

Pat_FS is better than Pat_All. This confirms that:
> redundant and non-discriminative patterns make classifiers over-fit the training data and decrease classification accuracy.

The experimental results with C4.5 are similar to SVM's.

Page 21: Discriminative Frequent Pattern Analysis for Effective Classification

Contributions

This paper proposes a framework for frequent pattern-based classification by analyzing the relation between a pattern's support and its discriminative power. It shows that frequent patterns are very useful for classification.

Frequent pattern-based classification can use state-of-the-art frequent pattern mining algorithms for pattern generation, thus achieving good scalability.

An effective and efficient pattern selection algorithm is proposed to select a set of frequent and discriminative patterns for classification.


Page 22: Discriminative Frequent Pattern Analysis for Effective Classification


Thanks!

Any Questions?