
Page 1: Aspects of Feature Selection for KDD

Kansas State University
Department of Computing and Information Sciences
CIS 830: Advanced Topics in Artificial Intelligence

Friday, February 2, 2001
Presenter: Ajay Gavade
Paper #2: Liu and Motoda, Chapter 3

Page 2: Outline

• Categories of Feature Selection Algorithms

– Feature Ranking Algorithms
– Minimum Subset Algorithms

• Basic Feature Generation Schemes & Algorithms

– How do we generate subsets?

– Forward, backward, bidirectional, random

• Search Strategies & Algorithms

– How do we systematically search for a good subset?

– Informed & Uninformed Search

– Complete search

– Heuristic search

– Nondeterministic search

• Evaluation Measure

– How do we tell how good a candidate subset is?

– Information gain, Entropy.

Page 3: The Major Aspects of Feature Selection

• Search Directions (Feature Subset Generation)

• Search Strategies

• Evaluation Measures

• A particular feature selection method combines one possibility from each of these aspects; hence each method can be represented by a point in this 3-D structure.

Page 4: Major Categories of Feature Selection Algorithms (From the Point of View of the Method's Output)

• Feature Ranking Algorithms

• These algorithms return a list of features ranked according to some evaluation measure; the ranking tells the importance (relevance) of each feature compared to the others.
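
To make this output type concrete, here is a minimal Python sketch of a feature-ranking wrapper (an illustration, not code from the chapter); the relevance measure score(feature, data) is a hypothetical caller-supplied function, for example a per-feature information gain.

    def rank_features(features, data, score):
        """Return all features ordered by a caller-supplied relevance measure.

        `score(feature, data)` is assumed to return a number where larger
        means more relevant.  The output is a ranked list of every feature,
        not a selected subset: the caller decides how many to keep.
        """
        return sorted(features, key=lambda f: score(f, data), reverse=True)

A caller would then keep the top-k features or cut the list at a chosen threshold.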

Page 5: Major Categories of Feature Selection Algorithms (From the Point of View of the Method's Output)

• Minimum Subset Algorithms

• These algorithms return a minimum feature subset, and no distinction is made among the features in the subset. These algorithms are used when we don't know the number of relevant features.

Page 6: Basic Feature Generation Schemes

• Sequential Forward Generation

• Starts with the empty set and adds features from the original set sequentially; features are added according to relevance.
• Can be run in an N-step look-ahead form; the one-step look-ahead form is the most commonly used scheme because of its good efficiency.
• A minimum feature subset or a ranked list can be obtained.
• Can deal with noise in the data.
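
A minimal Python sketch of the one-step look-ahead form (an illustration, not the chapter's pseudocode); evaluate(subset) is a hypothetical goodness measure where higher is better, and the order in which features are added doubles as a ranking.

    def sequential_forward(features, evaluate, target_size=None):
        """Sequential forward generation with one-step look-ahead.

        At every step the feature whose addition scores best is added.
        Run to completion it yields a ranked list (the order of addition);
        stopped at `target_size` it yields a candidate minimum subset.
        """
        selected, remaining = [], list(features)
        while remaining and (target_size is None or len(selected) < target_size):
            best = max(remaining, key=lambda f: evaluate(selected + [f]))
            selected.append(best)
            remaining.remove(best)
        return selected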

Page 7: Basic Feature Generation Schemes

• Sequential Backward Generation

• Starts with the full set and removes one feature at a time sequentially; the least relevant feature is removed at each step.
• This tells nothing about the ranking of the remaining relevant features.
• Doesn't guarantee an absolutely minimal subset.
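
A matching sketch of sequential backward generation (again illustrative; evaluate(subset) is the same hypothetical goodness measure, and the stopping rule shown is only one reasonable choice).

    def sequential_backward(features, evaluate, min_size=1):
        """Sequential backward generation: start from the full set and
        repeatedly drop the feature whose removal hurts the score least.
        Stops when any further removal would lower the score (one possible
        stopping rule) or when `min_size` features remain.
        """
        current = list(features)
        while len(current) > min_size:
            # candidate subsets obtained by removing one feature each
            candidates = [[g for g in current if g != f] for f in current]
            best = max(candidates, key=evaluate)
            if evaluate(best) < evaluate(current):
                break
            current = best
        return current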

Page 8: Basic Feature Generation Schemes

• Bidirectional Generation

• This runs SFG and SBG in parallel, and stops when one algorithm finds a satisfactory subset.

• Optimizes speed when the number of relevant features is unknown.
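
A sketch of the idea in Python (illustrative; here the two directions are simply interleaved rather than run as parallel processes, and evaluate / satisfactory are hypothetical caller-supplied functions).

    def bidirectional(features, evaluate, satisfactory):
        """Interleave one forward step (add the best feature) and one
        backward step (drop the least useful feature), returning as soon
        as either direction produces a subset accepted by `satisfactory`.
        """
        forward, remaining = [], list(features)
        backward = list(features)
        while remaining or len(backward) > 1:
            if remaining:                                    # SFG step
                best = max(remaining, key=lambda f: evaluate(forward + [f]))
                forward.append(best)
                remaining.remove(best)
                if satisfactory(forward):
                    return forward
            if len(backward) > 1:                            # SBG step
                drop = max(backward,
                           key=lambda f: evaluate([g for g in backward if g != f]))
                backward = [g for g in backward if g != drop]
                if satisfactory(backward):
                    return backward
        return forward                  # fallback: all features, in order of addition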

Page 9: Basic Feature Generation Schemes

• Random Generation

• Sequential generation algorithms are fast on average, but they cannot guarantee an absolute minimum valid set, i.e., the optimal feature subset, because once they hit a local minimum (the best subset at the moment) they have no way to get out.
• The random generation scheme produces subsets at random. A good random number generator is required so that, ideally, every combination of features has a chance to occur and occurs just once.
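
As a small illustration (not the chapter's code), subsets can be drawn by treating a random integer as a bitmask, which gives every non-empty combination an equal chance; avoiding repeats would additionally require remembering the masks already seen.

    import random

    def random_subsets(features, trials, seed=None):
        """Yield `trials` subsets drawn uniformly at random from the
        2^N - 1 non-empty combinations of the given features."""
        rng = random.Random(seed)
        n = len(features)
        for _ in range(trials):
            mask = rng.randrange(1, 2 ** n)       # non-empty subset as a bitmask
            yield [f for i, f in enumerate(features) if mask >> i & 1]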

Page 10: Search Strategies

• Exhaustive Search

• Depth-First Search

• Exhaustive search is complete since it covers all combinations of features, but a complete search need not be exhaustive.
• Depth-first search goes down one branch entirely and then backtracks to another branch. It uses a stack data structure (explicit or implicit).

Page 11: Depth-First Search

[Figure: depth-first search over the subset lattice of 3 features a, b, c: singletons {a}, {b}, {c}; pairs {a,b}, {a,c}, {b,c}; the full set {a,b,c}]
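
A sketch of the traversal in the figure, in Python (illustrative): an explicit stack plays the role of the recursion stack, and each subset is extended only with features that come after its last element so that every combination is generated exactly once.

    def depth_first_subsets(features):
        """Depth-first enumeration of all non-empty feature subsets."""
        stack = [[]]                              # start from the empty subset
        while stack:
            subset = stack.pop()                  # LIFO: go deep before going wide
            if subset:
                yield subset
            start = features.index(subset[-1]) + 1 if subset else 0
            for f in features[start:]:
                stack.append(subset + [f])

    # list(depth_first_subsets(['a', 'b', 'c'])) visits, in order:
    # [c], [b], [b, c], [a], [a, c], [a, b], [a, b, c]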

Page 12: Search Strategies

• Breadth-First Search

• This search moves down layer by layer, checking all subsets with one feature, then with two features, and so on. It uses a queue data structure.
• Its space complexity makes it impractical in most cases.

Page 13: Breadth-First Search

[Figure: breadth-first search over the same subset lattice of 3 features a, b, c, explored layer by layer]
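
The breadth-first counterpart in Python (a sketch): the only change from the depth-first sketch on page 11 is that a FIFO queue replaces the stack, which is also where the space cost comes from, since a whole layer of subsets can sit in the queue at once.

    from collections import deque

    def breadth_first_subsets(features):
        """Breadth-first enumeration: all 1-feature subsets, then all
        2-feature subsets, and so on."""
        queue = deque([[]])
        while queue:
            subset = queue.popleft()              # FIFO: finish a layer first
            if subset:
                yield subset
            start = features.index(subset[-1]) + 1 if subset else 0
            for f in features[start:]:
                queue.append(subset + [f])

    # list(breadth_first_subsets(['a', 'b', 'c'])) visits, in order:
    # [a], [b], [c], [a, b], [a, c], [b, c], [a, b, c]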

Page 14: Search Strategies

• Complete Search
– Branch & Bound Search
• It is a variation of depth-first search, but because it prunes branches against a bound it need not be exhaustive.
• If the evaluation measure is monotonic, this search is still complete and guarantees the optimal subset.

Page 15: Branch & Bound Search

[Figure: branch & bound search over the subset lattice of 3 features a, b, c with bound β = 12; nodes are labeled with their evaluation values, and branches whose values exceed the bound are pruned]
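
A sketch of one common formulation in Python (illustrative, not necessarily the chapter's exact procedure): cost(subset) is assumed to be a monotonic measure to minimise, meaning that removing features never decreases it, and the search keeps the smallest subset whose cost stays within the bound, pruning any branch that already exceeds it.

    def branch_and_bound(features, cost, beta):
        """Branch & bound over the subset space with bound `beta`.

        Starting from the full set, features are removed depth-first.
        If a subset's cost already exceeds the bound, every subset below
        it must too (monotonicity), so the whole branch is pruned.
        `best` defaults to the full set if nothing fits within the bound.
        """
        best = list(features)
        stack = [(list(features), 0)]             # (subset, first index allowed to drop)
        while stack:
            subset, start = stack.pop()
            if cost(subset) > beta:
                continue                          # prune this branch entirely
            if len(subset) < len(best):
                best = subset                     # smaller and still within the bound
            for i in range(start, len(subset)):
                child = subset[:i] + subset[i + 1:]
                if child:
                    stack.append((child, i))
        return best

The approximate branch & bound of page 18 can be sketched from the same loop by comparing against a relaxed bound (beta plus some slack) instead of beta itself.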

Page 16: Heuristic Search

• Best-First Search

• Quick To Find Solution (Subset of Features)

• It is derived from breadth-first search: it expands the search space layer by layer and chooses the best subset at each layer to expand.
• Finds a near-optimal solution
• More speed with little loss of optimality

• Beam Search

Page 17: Best-First Search

[Figure: best-first search over the subset lattice of 3 features a, b, c; nodes are labeled with their evaluation values and only the best subset at each layer is expanded]
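
A layer-by-layer sketch in Python matching the description on page 16 (illustrative); evaluate(subset) is a hypothetical goodness measure that accepts any sequence of feature names, beam_width=1 expands just the single best subset per layer, and larger widths give beam search.

    def best_first(features, evaluate, beam_width=1):
        """Expand the subset space layer by layer, keeping only the best
        `beam_width` subsets of each layer, and return the best subset
        seen anywhere during the search."""
        frontier = [()]                           # layer 0: the empty subset
        best, best_score = None, float('-inf')
        for _ in range(len(features)):            # one layer per added feature
            children = {tuple(sorted(s + (f,)))   # de-duplicate equal subsets
                        for s in frontier for f in features if f not in s}
            if not children:
                break
            frontier = sorted(children, key=evaluate, reverse=True)[:beam_width]
            if evaluate(frontier[0]) > best_score:
                best, best_score = frontier[0], evaluate(frontier[0])
        return list(best) if best else []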

Page 18: Search Strategies

• Approximate Branch & Bound Search
– This is an extension of Branch & Bound Search.
– The bound is relaxed by some amount; this allows the algorithm to continue and still reach an optimal subset. By changing the relaxation amount, the monotonicity of the measure can be observed.

Page 19: Approximate Branch & Bound Search

[Figure: approximate branch & bound search over the subset lattice of 3 features a, b, c, with a relaxed bound; nodes are labeled with their evaluation values]

Page 20: Nondeterministic Search

• Avoid getting stuck in local minima
• Capture the interdependence of features
• RAND
– It keeps only the current best subset.
– If a sufficiently long running period is allowed and a good random function is used, it can find the optimal subset. The problem with this algorithm is that we don't know when we have reached the optimal subset, so the stopping condition is the maximum number of loops allowed.
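
A sketch of the RAND loop in Python (illustrative); evaluate(subset) is again a hypothetical goodness measure and, as noted above, the only stopping condition is the maximum number of loops.

    import random

    def rand_search(features, evaluate, max_loops=1000, seed=None):
        """Draw a random subset each loop and keep only the current best."""
        rng = random.Random(seed)
        n = len(features)
        best, best_score = None, float('-inf')
        for _ in range(max_loops):
            mask = rng.randrange(1, 2 ** n)       # random non-empty subset
            subset = [f for i, f in enumerate(features) if mask >> i & 1]
            score = evaluate(subset)
            if score > best_score:
                best, best_score = subset, score
        return best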

Page 21: Evaluation Measures

• What is Entropy?
• A Measure of Uncertainty
– The Quantity
• Purity: how close a set of instances is to having just one label
• Impurity (disorder): how close it is to total uncertainty over labels
– The Measure: Entropy
• Directly proportional to impurity, uncertainty, irregularity, surprise
• Inversely proportional to purity, certainty, regularity, redundancy
• Example
– For simplicity, assume H = {0, 1}, distributed according to Pr(y)
• Can have (more than 2) discrete class labels
• Continuous random variables: differential entropy
– Optimal purity for y: either
• Pr(y = 0) = 1, Pr(y = 1) = 0
• Pr(y = 1) = 1, Pr(y = 0) = 0
• Entropy is 0 if all members of S belong to the same class.

[Figure: entropy H(p) plotted against p+ = Pr(y = +); it is 0 at p+ = 0 and p+ = 1 and reaches its maximum of 1.0 at p+ = 0.5]

Page 22: Entropy: Information-Theoretic Definition

• Components

– D: a set of examples {<x1, c(x1)>, <x2, c(x2)>, …, <xm, c(xm)>}

– p+ = Pr(c(x) = +), p- = Pr(c(x) = -)

• Definition

– H is defined over a probability density function p

– D contains examples whose frequency of + and - labels indicates p+ and p- for the observed data
– The entropy of D relative to c is:

H(D) = -p+ logb(p+) - p- logb(p-)

• If a target attribute can take on c different values, the entropy of S relative to this c-wise classification is defined as

Entropy(S) = Σ_{i=1..c} -p_i log2(p_i)

• where p_i is the proportion of S belonging to class i.
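
A small Python helper matching this definition (an illustrative aside, not from the slides); the convention 0·log 0 = 0 is handled implicitly because absent classes simply do not appear in the counts.

    from collections import Counter
    from math import log2

    def entropy(labels):
        """Entropy(S) = - sum_i p_i log2(p_i), with p_i taken from the
        observed class frequencies in `labels`."""
        total = len(labels)
        return -sum(n / total * log2(n / total) for n in Counter(labels).values())

    # entropy(['+'] * 9 + ['-'] * 5)   -> 0.940...  (the 9+/5- sample used later)
    # entropy(['+', '+', '-', '-'])    -> 1.0       (equal classes: maximum impurity)
    # entropy(['+', '+', '+'])         -> 0 bits    (all members in the same class)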

Page 23: Entropy

• Entropy is 1 when S contains an equal number of positive and negative examples.

• Entropy specifies the minimum number of bits of information needed to encode the classification of an arbitrary member of S.

– What is the least pure probability distribution?
• Pr(y = 0) = 0.5, Pr(y = 1) = 0.5
• Corresponds to maximum impurity/uncertainty/irregularity/surprise
• Property of entropy: concave function (“concave downward”)
• What units is H measured in?
– Depends on the base b of the log (bits for b = 2, nats for b = e, etc.)
– A single bit is required to encode each example in the worst case (p+ = 0.5)
– If there is less uncertainty (e.g., p+ = 0.8), we can use less than 1 bit each

Page 24: Information Gain

• It is a measure of the effectiveness of an attribute in classifying the training data.
• It measures the expected reduction in entropy caused by partitioning the examples according to the attribute, i.e., the uncertainty removed by splitting on the value of attribute A.
• The information gain Gain(S, A) of an attribute A, relative to a collection of examples S, is

Gain(S, A) = Entropy(S) - Σ_{v ∈ values(A)} (|S_v| / |S|) Entropy(S_v)

• where values(A) is the set of all possible values of A and S_v is the subset of S for which A has value v.
• Gain(S, A) is the information provided about the target function value, given the value of some attribute A.
• The value of Gain(S, A) is the number of bits saved when encoding the target value of an arbitrary member of S, by knowing the value of attribute A.
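
A sketch of the formula in Python (illustrative): each example is assumed to be a dict of attribute values plus a target label, which is one possible representation rather than the chapter's.

    from collections import Counter, defaultdict
    from math import log2

    def entropy(labels):
        total = len(labels)
        return -sum(n / total * log2(n / total) for n in Counter(labels).values())

    def info_gain(examples, attribute, target):
        """Gain(S, A) = Entropy(S) - sum over v of |S_v|/|S| * Entropy(S_v)."""
        labels = [ex[target] for ex in examples]
        by_value = defaultdict(list)              # S_v: labels grouped by A's value
        for ex in examples:
            by_value[ex[attribute]].append(ex[target])
        remainder = sum(len(s_v) / len(examples) * entropy(s_v)
                        for s_v in by_value.values())
        return entropy(labels) - remainder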

Page 25: An Illustrative Example

• Prior (unconditioned) distribution: 9+, 5-
– H(D) = -(9/14) lg (9/14) - (5/14) lg (5/14) bits = 0.94 bits
– H(D, Humidity = High) = -(3/7) lg (3/7) - (4/7) lg (4/7) = 0.985 bits
– H(D, Humidity = Normal) = -(6/7) lg (6/7) - (1/7) lg (1/7) = 0.592 bits
– Gain(D, Humidity) = 0.94 - (7/14) * 0.985 - (7/14) * 0.592 = 0.151 bits
– Similarly, Gain(D, Wind) = 0.94 - (8/14) * 0.811 - (6/14) * 1.0 = 0.048 bits

Day  Outlook   Temperature  Humidity  Wind    PlayTennis?
 1   Sunny     Hot          High      Light   No
 2   Sunny     Hot          High      Strong  No
 3   Overcast  Hot          High      Light   Yes
 4   Rain      Mild         High      Light   Yes
 5   Rain      Cool         Normal    Light   Yes
 6   Rain      Cool         Normal    Strong  No
 7   Overcast  Cool         Normal    Strong  Yes
 8   Sunny     Mild         High      Light   No
 9   Sunny     Cool         Normal    Light   Yes
10   Rain      Mild         Normal    Light   Yes
11   Sunny     Mild         Normal    Strong  Yes
12   Overcast  Mild         High      Strong  Yes
13   Overcast  Hot          Normal    Light   Yes
14   Rain      Mild         High      Strong  No

Gain(D, A) = H(D) - Σ_{v ∈ values(A)} (|D_v| / |D|) H(D_v)

[Figure: the two candidate splits of D = [9+, 5-]. Humidity: High → [3+, 4-], Normal → [6+, 1-]. Wind: Light → [6+, 2-], Strong → [3+, 3-]]
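
The numbers above can be checked with a few lines of Python (an illustrative aside, not part of the slides); exact arithmetic gives 0.1518 for Gain(D, Humidity), which the slide reports as 0.151 because it works from the rounded intermediate entropies.

    from math import log2

    def H(pos, neg):                              # entropy of a pos/neg split
        total = pos + neg
        return -sum(x / total * log2(x / total) for x in (pos, neg) if x)

    print(H(9, 5))                                           # 0.940...
    print(H(9, 5) - 7/14 * H(3, 4) - 7/14 * H(6, 1))         # 0.151...  Gain(D, Humidity)
    print(H(9, 5) - 8/14 * H(6, 2) - 6/14 * H(3, 3))         # 0.048...  Gain(D, Wind)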

Page 26: Attributes with Many Values

• Problem

– If an attribute has many values, Gain(•) will select it (why?)

– Imagine using Date = 06/03/1996 as an attribute!

• One Approach: Use GainRatio instead of Gain

– SplitInformation: directly proportional to c = | values(A) |

– i.e., penalizes attributes with more values

• e.g., suppose c1 = cDate = n and c2 = 2

• SplitInformation (A1) = lg(n), SplitInformation (A2) = 1

• If Gain(D, A1) = Gain(D, A2), GainRatio (D, A1) << GainRatio (D, A2)

– Thus, preference bias (for lower branch factor) expressed via GainRatio(•)

SplitInformation(D, A) = -Σ_{v ∈ values(A)} (|D_v| / |D|) lg(|D_v| / |D|)

GainRatio(D, A) = Gain(D, A) / SplitInformation(D, A)

Gain(D, A) = H(D) - Σ_{v ∈ values(A)} (|D_v| / |D|) H(D_v)
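
A sketch of the two quantities in Python (illustrative, using the same dict-of-attributes representation assumed in the info_gain sketch on page 24; that gain function, or any equivalent, is passed in as gain).

    from collections import Counter
    from math import log2

    def split_information(examples, attribute):
        """SplitInformation(D, A) = - sum over v of |D_v|/|D| * lg(|D_v|/|D|),
        i.e. the entropy of D with respect to the values of A itself."""
        total = len(examples)
        counts = Counter(ex[attribute] for ex in examples)
        return -sum(n / total * log2(n / total) for n in counts.values())

    def gain_ratio(examples, attribute, target, gain):
        """GainRatio(D, A) = Gain(D, A) / SplitInformation(D, A)."""
        return gain(examples, attribute, target) / split_information(examples, attribute)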

Page 27: Summary Points

• Heuristic : Search :: Inductive Bias : Inductive Generalization

• Entropy and Information Gain

– Goal: to measure uncertainty removed by splitting on a candidate attribute A

• Calculating information gain (change in entropy)

• Using information gain in construction of tree

• Search & Measure
– Search and measure play a dominant role in feature selection.
– Stopping criteria are usually determined by a particular combination of search & measure.
– There are different feature selection methods with different combinations of search & evaluation measures.