
Lecture 3: Data Mining

http://www.cs.kent.edu/~jin/advdatabases.html


Roadmap

■ What is data mining?

■ Data Mining Tasks

★ Classification/Decision Tree

★ Clustering

★ Association Mining

■ Data Mining Algorithms

★ Decision Tree Construction

★ Frequent 2-itemsets

★ Frequent Itemsets (Apriori)

★ Clustering/Collaborative Filtering


What is Data Mining?

■ Discovery of useful, possibly unexpected, patterns in data.

■ Subsidiary issues:

★ Data cleansing: detection of bogus data.

✔ E.g., age = 150.

★ Visualization: something better than megabyte files of output.

★ Warehousing of data (for retrieval).


Typical Kinds of Patterns

1. Decision trees: succinct ways to classify by testing properties.

2. Clusters: another succinct classification by similarity of properties.

3. Bayes, hidden-Markov, and other statistical models, frequent-itemsets: expose important associations within data.


Example: Clusters

(Figure: a scatter plot of points forming several distinct clusters, with a few outliers.)


Applications (Among Many)

■ Intelligence-gathering.

★ Total Information Awareness.

■ Web Analysis.

★ PageRank.

■ Marketing.

★ Run a sale on diapers; raise the price of beer.

■ Detective?


Cultures

■ Databases: concentrate on large-scale (non-main-memory) data.

■ AI (machine-learning): concentrate on complex methods, small data.

■ Statistics: concentrate on inferring models.


Models vs. Analytic Processing

■ To a database person, data-mining is a powerful form of analytic processing --- queries that examine large amounts of data.

★Result is the data that answers the query.

■ To a statistician, data-mining is the inference of models.

★Result is the parameters of the model.


Meaningfulness of Answers

■ A big risk when data mining is that you will “discover” patterns that are meaningless.

■ Statisticians call it Bonferroni’s principle: (roughly) if you look in more places for interesting patterns than your amount of data will support, you are bound to find crap.


Examples

■ A big objection to TIA was that it was looking for so many vague connections that it was sure to find things that were bogus and thus violate innocents’ privacy.

■ The Rhine Paradox: a great example of how not to conduct scientific research.


Rhine Paradox --- (1)

■ David Rhine was a parapsychologist in the 1950’s who hypothesized that some people had Extra-Sensory Perception.

■ He devised an experiment where subjects were asked to guess 10 hidden cards --- red or blue.

■ He discovered that almost 1 in 1000 had ESP --- they were able to get all 10 right!


Rhine Paradox --- (2)

■ He told these people they had ESP and called them in for another test of the same type.

■ Alas, he discovered that almost all of them had lost their ESP.

■ What did he conclude?

★Answer on next slide.


Rhine Paradox --- (3)

■ He concluded that you shouldn’t tell people they have ESP; it causes them to lose it.


A Concrete Example

■ This example illustrates a problem with intelligence-gathering.

■ Suppose we believe that certain groups of evil-doers are meeting occasionally in hotels to plot doing evil.

■ We want to find people who at least twice have stayed at the same hotel on the same day.


The Details

■ 10^9 people being tracked.

■ 1000 days.

■ Each person stays in a hotel 1% of the time (10 days out of 1000).

■ Hotels hold 100 people (so 10^5 hotels).

■ If everyone behaves randomly (i.e., no evil-doers), will the data mining detect anything suspicious?


Calculations --- (1)

■ Probability that persons p and q will be at the same hotel on day d:

★ 1/100 × 1/100 × 10^-5 = 10^-9.

■ Probability that p and q will be at the same hotel on two given days:

★ 10^-9 × 10^-9 = 10^-18.

■ Pairs of days:

★ ≈ 5×10^5.


Calculations --- (2)

■ Probability that p and q will be at the same hotel on some two days:

★ 5×10^5 × 10^-18 = 5×10^-13.

■ Pairs of people:

★ ≈ 5×10^17.

■ Expected number of suspicious pairs of people:

★ 5×10^17 × 5×10^-13 = 250,000.
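A quick sanity check of this arithmetic in Python (a sketch; the population, day, and hotel figures are the slide's assumptions):

# Back-of-the-envelope check of the "suspicious pairs" estimate.
people = 10**9     # people being tracked
days = 1000        # observation period
p_hotel = 0.01     # chance a given person is in some hotel on a given day
hotels = 10**5     # number of hotels

p_same_day = p_hotel * p_hotel * (1 / hotels)   # two people, same hotel, one day: 10^-9
p_two_days = p_same_day ** 2                    # same event on two specific days: 10^-18
day_pairs = days * (days - 1) / 2               # ~5 * 10^5
people_pairs = people * (people - 1) / 2        # ~5 * 10^17

print(people_pairs * day_pairs * p_two_days)    # ~250,000 pairs, with no evil-doers at all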


Conclusion

■ Suppose there are (say) 10 pairs of evil-doers who definitely stayed at the same hotel twice.

■ Analysts have to sift through 250,010 candidates to find the 10 real cases.

★Not gonna happen.

★But how can we improve the scheme?


Data Mining Tasks

■ Data mining is the process of semi-automatically analyzing large databases to find useful patterns

■ Prediction based on past history

★ Predict if a credit card applicant poses a good credit risk, based on some attributes (income, job type, age, ..) and past history

★ Predict if a pattern of phone calling card usage is likely to be fraudulent

■ Some examples of prediction mechanisms:

★ Classification

✔ Given a new item whose class is unknown, predict to which class it belongs

★ Regression formulae

✔ Given a set of mappings for an unknown function, predict the function result for a new parameter value


Data Mining (Cont.)

■ Descriptive Patterns

★ Associations

✔ Find books that are often bought by “similar” customers. If a new such customer buys one such book, suggest the others too.

★ Associations may be used as a first step in detecting causation

✔ E.g. association between exposure to chemical X and cancer.

★ Clusters

✔ E.g. typhoid cases were clustered in an area surrounding a contaminated well

✔ Detection of clusters remains important in detecting epidemics


Decision Trees

sales (training set):

custId  car     age  city  newCar
c1      taurus  27   sf    yes
c2      van     35   la    yes
c3      van     40   sf    yes
c4      taurus  22   sf    yes
c5      merc    50   la    no
c6      taurus  25   la    no

Example:
• Conducted a survey to see which customers were interested in a new model car
• Want to select customers for an advertising campaign


One Possibility

(Same training set as the previous slide.)

age < 30?
  Y: city = sf?
    Y: likely
    N: unlikely
  N: car = van?
    Y: likely
    N: unlikely


Another Possibility

(Same training set as before.)

car = taurus?
  Y: city = sf?
    Y: likely
    N: unlikely
  N: age < 45?
    Y: likely
    N: unlikely


Issues

■ Decision tree cannot be “too deep”

✔would not have statistically significant amounts of data for lower decisions

■ Need to select tree that most reliably predicts outcomes


Clustering

(Figure: points clustered in a 3-D space whose axes are age, income, and education.)


Another Example: Text

■ Each document is a vector

★ e.g., <100110...> contains words 1,4,5,...

■ Clusters contain “similar” documents

■ Useful for understanding, searching documents

(Figure: document clusters labeled "international news", "sports", and "business".)
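As a tiny illustration of the vector representation (a sketch; the six-word vocabulary and the two documents are made up), clustering can group documents whose vectors are close under a measure such as Jaccard similarity:

# Documents as binary word vectors; similar documents share many words.
doc1 = [1, 0, 0, 1, 1, 0]   # contains words 1, 4, 5
doc2 = [1, 0, 1, 1, 0, 0]   # contains words 1, 3, 4

def jaccard(a, b):
    """Size of the word-set intersection divided by the size of the union."""
    inter = sum(1 for x, y in zip(a, b) if x and y)
    union = sum(1 for x, y in zip(a, b) if x or y)
    return inter / union

print(jaccard(doc1, doc2))  # 0.5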


Issues

■ Given desired number of clusters?

■ Finding “best” clusters

■ Are clusters semantically meaningful?


Association Rule Mining

sales records (market-basket data):

transaction id  customer id  products bought
tran1           cust33       p2, p5, p8
tran2           cust45       p5, p8, p11
tran3           cust12       p1, p9
tran4           cust40       p5, p8, p11
tran5           cust12       p2, p9
tran6           cust12       p9

• Trend: products p5, p8 often bought together
• Trend: customer 12 likes product p9


Association Rule

■ Rule: {p1, p3, p8}

■ Support: number of baskets where these products appear

■ High-support set: support ≥ threshold s

■ Problem: find all high support sets


Finding High-Support Pairs

■ Baskets(basket, item)

■ SELECT I.item, J.item, COUNT(I.basket)
  FROM Baskets I, Baskets J
  WHERE I.basket = J.basket AND I.item < J.item
  GROUP BY I.item, J.item
  HAVING COUNT(I.basket) >= s;


Example

Baskets:

basket  item
t1      p2
t1      p5
t1      p8
t2      p5
t2      p8
t2      p11
...     ...

Join result (then check if count ≥ s):

basket  item1  item2
t1      p2     p5
t1      p2     p8
t1      p5     p8
t2      p5     p8
t2      p5     p11
t2      p8     p11
...     ...    ...


Issues

■ Performance for size 2 rules

(Same tables as on the previous slide: the Baskets table is big, and the joined pairs table is even bigger!)

■ Performance for size k rules


Roadmap

■ What is data mining?

■ Data Mining Tasks

★ Classification/Decision Tree

★ Clustering

★ Association Mining

■ Data Mining Algorithms

★ Decision Tree Construction

★ Frequent 2-itemsets

★ Frequent Itemsets (Apriori)

★ Clustering/Collaborative Filtering


Classification Rules

■ Classification rules help assign new objects to classes.

★ E.g., given a new automobile insurance applicant, should he or she be classified as low risk, medium risk or high risk?

■ Classification rules for above example could use a variety of data, such as educational level, salary, age, etc.

★ ∀ person P, P.degree = masters and P.income > 75,000 ⇒ P.credit = excellent

★ ∀ person P, P.degree = bachelors and (P.income ≥ 25,000 and P.income ≤ 75,000) ⇒ P.credit = good

■ Rules are not necessarily exact: there may be some misclassifications

■ Classification rules can be shown compactly as a decision tree.


Decision Tree


Decision Tree Construction

NAME   Balance  Employed  Age  Default
Matt   23,000   No        30   yes
Ben    51,100   No        48   yes
Chris  68,000   Yes       55   no
Jim    74,000   No        40   no
David  23,000   No        47   yes
John   100,000  Yes       49   no

Balance?   (root node)
  <50K:  Class = Yes (Default)   (leaf)
  >=50K: Age?
    <45:  Class = Not Default
    >=45: Employed?
      Yes: Class = Not Default
      No:  Class = Yes (Default)


Construction of Decision Trees

■ Training set: a data sample in which the classification is already known.

■ Greedy top down generation of decision trees.

★ Each internal node of the tree partitions the data into groups based on a partitioning attribute, and a partitioning condition for the node

★ Leaf node:

✔ all (or most) of the items at the node belong to the same class, or

✔ all attributes have been considered, and no further partitioning is possible.


Finding the Best Split Point for Numerical Attributes

(Figure: complete class histograms of Age for class-1 and class-2, and the gain function plotted over Age with the best split point marked. The data comes from an IBM Quest synthetic dataset, function 0.)

In-core algorithms, such as C4.5, simply sort the numerical attribute values in memory.


Best Splits

■ Pick the best attributes and conditions on which to partition

■ The purity of a set S of training instances can be measured quantitatively in several ways.

★ Notation: number of classes = k, number of instances = |S|, fraction of instances in class i = p_i.

■ The Gini measure of purity is defined as

  Gini(S) = 1 - Σ_{i=1}^{k} p_i^2

★ When all instances are in a single class, the Gini value is 0.

★ It reaches its maximum of 1 - 1/k when each class has the same number of instances.


Best Splits (Cont.)

■ Another measure of purity is the entropy measure, which is defined as

  entropy(S) = - Σ_{i=1}^{k} p_i log2 p_i

■ When a set S is split into multiple sets S_i, i = 1, 2, …, r, we can measure the purity of the resultant set of sets as:

  purity(S_1, S_2, …, S_r) = Σ_{i=1}^{r} (|S_i| / |S|) purity(S_i)

■ The information gain due to a particular split of S into S_i, i = 1, 2, …, r:

  Information-gain(S, {S_1, S_2, …, S_r}) = purity(S) - purity(S_1, S_2, …, S_r)
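These definitions translate directly into code; a minimal sketch (here `counts` is the per-class histogram of the instances in S, and lower Gini/entropy means purer):

import math

def gini(counts):
    """Gini(S) = 1 - sum_i p_i^2, where p_i is the fraction of instances in class i."""
    n = sum(counts)
    return 1 - sum((c / n) ** 2 for c in counts)

def entropy(counts):
    """entropy(S) = -sum_i p_i * log2(p_i), with 0*log(0) taken as 0."""
    n = sum(counts)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

def split_purity(subsets, measure=gini):
    """purity(S_1..S_r) = sum_i |S_i|/|S| * measure(S_i)."""
    total = sum(sum(s) for s in subsets)
    return sum(sum(s) / total * measure(s) for s in subsets)

def information_gain(counts, subsets, measure=gini):
    return measure(counts) - split_purity(subsets, measure)

# Six instances, three per class, split into a mixed and a pure subset:
print(information_gain([3, 3], [[3, 1], [0, 2]]))   # 0.25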


Finding Best Splits

■ Categorical attributes (with no meaningful order):

★ Multi-way split, one child for each value

★ Binary split: try all possible breakup of values into two sets, and pick the best

■ Continuous-valued attributes (can be sorted in a meaningful order)

★ Binary split:

✔ Sort values, try each as a split point

– E.g. if values are 1, 10, 15, 25, split at ≤1, ≤ 10, ≤ 15

✔ Pick the value that gives best split

★ Multi-way split:

✔ A series of binary splits on the same attribute has roughly equivalent effect


Decision-Tree Construction Algorithm

Procedure GrowTree(S)
    Partition(S);

Procedure Partition(S)
    if (purity(S) > δp or |S| < δs) then return;
    for each attribute A
        evaluate splits on attribute A;
    use the best split found (across all attributes) to partition S into S1, S2, …, Sr;
    for i = 1, 2, …, r
        Partition(Si);
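A runnable Python sketch of this greedy recursion (my own minimal version, not the lecture's reference code: records are (attribute-dict, class) pairs, every test is a binary "attribute = value" split, the Gini measure picks the best split, and the Age attribute from the earlier example is dropped for brevity):

from collections import Counter

GINI_CUTOFF = 0.2   # stop when the node is pure enough (low Gini = high purity)
MIN_SIZE = 2        # stop when too few records remain

def gini(records):
    n = len(records)
    return 1 - sum((c / n) ** 2
                   for c in Counter(cls for _, cls in records).values())

def majority(records):
    return Counter(cls for _, cls in records).most_common(1)[0][0]

def grow_tree(records):
    if gini(records) < GINI_CUTOFF or len(records) < MIN_SIZE:
        return majority(records)                 # leaf
    best = None
    for attrs, _ in records:                     # evaluate every attr = value test
        for a, v in attrs.items():
            yes = [r for r in records if r[0][a] == v]
            no = [r for r in records if r[0][a] != v]
            if not yes or not no:
                continue
            w = (len(yes) * gini(yes) + len(no) * gini(no)) / len(records)
            if best is None or w < best[0]:
                best = (w, a, v, yes, no)
    if best is None:                             # no further partitioning possible
        return majority(records)
    _, a, v, yes, no = best
    return {"test": (a, v), "yes": grow_tree(yes), "no": grow_tree(no)}

data = [({"balance": "<50K", "employed": "no"}, "default"),
        ({"balance": ">=50K", "employed": "no"}, "default"),
        ({"balance": ">=50K", "employed": "yes"}, "no-default"),
        ({"balance": ">=50K", "employed": "no"}, "no-default"),
        ({"balance": "<50K", "employed": "no"}, "default"),
        ({"balance": ">=50K", "employed": "yes"}, "no-default")]
print(grow_tree(data))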


Finding Association Rules

■ We are generally only interested in association rules with reasonably high support (e.g. support of 2% or greater)

■ Naïve algorithm

1. Consider all possible sets of relevant items.

2. For each set find its support (i.e. count how many transactions purchase all items in the set).

★ Large itemsets: sets with sufficiently high support


Example: Association Rules

■ How do we perform rule mining efficiently?

■ Observation: If set X has support t, then every subset of X must have support at least t

■ For 2-sets:

★ if we need support s for {i, j}

★ then each i, j must appear in at least s baskets


Algorithm for 2-Sets

(1) Find OK products

★ those appearing in s or more baskets

(2) Find high-support pairs using only OK products


Algorithm for 2-Sets

■ INSERT INTO okBaskets(basket, item)
  SELECT basket, item
  FROM Baskets
  WHERE item IN (SELECT item
                 FROM Baskets
                 GROUP BY item
                 HAVING COUNT(basket) >= s);

■ Perform mining on okBaskets:

  SELECT I.item, J.item, COUNT(I.basket)
  FROM okBaskets I, okBaskets J
  WHERE I.basket = J.basket AND I.item < J.item
  GROUP BY I.item, J.item
  HAVING COUNT(I.basket) >= s;
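The same two steps in Python, for contrast (a sketch operating on an in-memory dict of baskets rather than a relational table):

from collections import Counter
from itertools import combinations

baskets = {"t1": ["p2", "p5", "p8"],
           "t2": ["p5", "p8", "p11"],
           "t3": ["p1", "p9"]}
s = 2   # support threshold

# (1) Find OK products: those appearing in s or more baskets.
item_counts = Counter(i for items in baskets.values() for i in set(items))
ok = {i for i, c in item_counts.items() if c >= s}

# (2) Count pairs of OK products basket by basket.
pair_counts = Counter(pair
                      for items in baskets.values()
                      for pair in combinations(sorted(set(items) & ok), 2))
print([pair for pair, c in pair_counts.items() if c >= s])   # [('p5', 'p8')]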


Counting Efficiently

■ One way:

Pair list:

basket  I.item  J.item
t1      p5      p8
t2      p5      p8
t2      p8      p11
t3      p2      p3
t3      p5      p8
t3      p2      p8
...     ...     ...

→ sort →

basket  I.item  J.item
t3      p2      p3
t3      p2      p8
t1      p5      p8
t2      p5      p8
t3      p5      p8
t2      p8      p11
...     ...     ...

→ count & remove pairs below threshold (threshold = 3) →

count  I.item  J.item
3      p5      p8
5      p12     p18
...    ...     ...


Counting Efficiently

■ Another way:

Pair list:

basket  I.item  J.item
t1      p5      p8
t2      p5      p8
t2      p8      p11
t3      p2      p3
t3      p5      p8
t3      p2      p8
...     ...     ...

→ scan & count (keep the counter array in memory) →

count  I.item  J.item
1      p2      p3
2      p2      p8
3      p5      p8
5      p12     p18
1      p21     p22
2      p21     p23
...    ...     ...

→ remove pairs below threshold (threshold = 3) →

count  I.item  J.item
3      p5      p8
5      p12     p18
...    ...     ...


Yet Another Way

Pair list:

basket  I.item  J.item
t1      p5      p8
t2      p5      p8
t2      p8      p11
t3      p2      p3
t3      p5      p8
t3      p2      p8
...     ...     ...

(1) scan & hash & count: hash each pair into an in-memory hash table of bucket counters (threshold = 3):

count  bucket
1      A
5      B
2      C
1      D
8      E
1      F
...    ...

(2) scan & remove: keep only pairs that hash to a bucket with count ≥ threshold:

basket  I.item  J.item
t1      p5      p8
t2      p5      p8
t2      p8      p11
t3      p5      p8
t5      p12     p18
t8      p12     p18
...     ...     ...

(3) scan & count the surviving pairs exactly, with in-memory counters:

count  I.item  J.item
3      p5      p8
1      p8      p11     ← false positive
5      p12     p18
...    ...     ...

(4) remove pairs below threshold:

count  I.item  J.item
3      p5      p8
5      p12     p18
...    ...     ...
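A sketch of this hashing scheme in Python (my reconstruction of the idea, not code from the lecture): pass 1 hashes pairs into a small array of bucket counters, and pass 2 counts exactly only the pairs whose bucket met the threshold, after which a final filter removes the false positives.

from collections import Counter
from itertools import combinations

baskets = [["p5", "p8"], ["p5", "p8", "p11"], ["p2", "p3", "p5", "p8"]]
s = 3             # support threshold
NUM_BUCKETS = 8   # far fewer buckets than possible pairs

def bucket(pair):
    return hash(pair) % NUM_BUCKETS

# Pass 1: count per bucket, not per pair (tiny memory footprint).
bucket_counts = [0] * NUM_BUCKETS
for items in baskets:
    for pair in combinations(sorted(items), 2):
        bucket_counts[bucket(pair)] += 1

# Pass 2: exact counts, but only for pairs in frequent buckets.
# A frequent bucket may still contain infrequent pairs: false positives.
candidates = Counter()
for items in baskets:
    for pair in combinations(sorted(items), 2):
        if bucket_counts[bucket(pair)] >= s:
            candidates[pair] += 1

# Final removal step eliminates the false positives.
print([p for p, c in candidates.items() if c >= s])   # [('p5', 'p8')]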


Discussion

■ Hashing scheme: 2 (or 3) scans of data

■ Sorting scheme: requires a sort!

■ Hashing works well if few high-support pairs and many low-support ones

(Figure: item pairs ranked by frequency. The frequency curve drops steeply, and only the few pairs above the threshold matter. Such queries are called "iceberg queries".)


Review (1)

■ What is the classification problem?

■ What is a decision tree? How do we build one?

■ What is the frequent-itemset mining problem?

■ How can SQL express the "finding frequent pairs" problem?

■ What are the alternative ways to find such pairs?


Review (2): Decision Tree Construction

(Same training set and decision tree as in "Decision Tree Construction" above.)

• Recursively construct the decision tree
• Choose the best test condition for each node
  • Categorical attributes: best subset test
  • Numerical attributes: best split point
• Partition the dataset and recurse


Review (3): Finding High-Support Pairs

■ Baskets(basket, item)

■ SELECT I.item, J.item, COUNT(I.basket)
  FROM Baskets I, Baskets J
  WHERE I.basket = J.basket AND I.item < J.item
  GROUP BY I.item, J.item
  HAVING COUNT(I.basket) >= s;


Frequent Itemsets Mining

Transactions:

TID   items
100   {A, B, E}
200   {B, D}
300   {A, B, E}
400   {A, C}
500   {B, C}
600   {A, C}
700   {A, B}
800   {A, B, C, E}
900   {A, B, C}
1000  {A, C, E}

■ Desired frequency 50% (support level):

★ {A}, {B}, {C}, {A,B}, {A,C}

■ Down-closure (apriori) property:

★ If an itemset is frequent, all of its subsets must also be frequent


Lattice for Enumerating Frequent Itemsets

(Figure: the lattice of all itemsets over {A, B, C, D, E}, from null at the top through A, B, …, then AB, AC, …, down to ABCDE. Once an itemset, say AB, is found to be infrequent, all of its supersets are pruned from the search.)


Apriori

L0 = ∅
C1 = { 1-item subsets of all the transactions }
for (k = 1; Ck ≠ ∅; k++)
    { * support counting * }
    for all transactions t ∈ D
        for all k-subsets s of t
            if s ∈ Ck
                s.count++
    { * candidate generation * }
    Lk = { c ∈ Ck | c.count ≥ min_sup }
    Ck+1 = apriori_gen(Lk)
Answer = ∪k Lk
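A compact Python sketch of this loop (assuming the usual apriori_gen: join Lk with itself, then prune any candidate that has an infrequent k-subset):

from itertools import combinations

def apriori(transactions, min_sup):
    transactions = [frozenset(t) for t in transactions]
    C = list({frozenset([i]) for t in transactions for i in t})   # C1
    answer = []
    while C:
        # Support counting.
        counts = {c: sum(1 for t in transactions if c <= t) for c in C}
        L = [c for c in C if counts[c] >= min_sup]
        answer.extend(L)
        # Candidate generation (apriori_gen): join Lk with itself, then
        # prune candidates with an infrequent k-subset (down-closure).
        k = len(L[0]) if L else 0
        Lset = set(L)
        C = [c for c in {a | b for a in L for b in L if len(a | b) == k + 1}
             if all(frozenset(sub) in Lset for sub in combinations(c, k))]
    return answer

# The 10-transaction example above, with 50% support (min_sup = 5):
tx = [{"A","B","E"}, {"B","D"}, {"A","B","E"}, {"A","C"}, {"B","C"},
      {"A","C"}, {"A","B"}, {"A","B","C","E"}, {"A","B","C"}, {"A","C","E"}]
print(apriori(tx, 5))   # the frequent itemsets {A}, {B}, {C}, {A,B}, {A,C}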


Clustering

■ Clustering: Intuitively, finding clusters of points in the given data such that similar points lie in the same cluster

■ Can be formalized using distance metrics in several ways

★ Group points into k sets (for a given k) such that the average distance of points from the centroid of their assigned group is minimized

✔ Centroid: point defined by taking average of coordinates in each dimension.

★ Another metric: minimize average distance between every pair of points in a cluster

■ Has been studied extensively in statistics, but on small data sets

★ Data mining systems aim at clustering techniques that can handle very large data sets

★ E.g., the BIRCH clustering algorithm


K-Means Clustering
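A minimal sketch of the k-means procedure (assign each point to its nearest centroid, recompute each centroid as the mean of its cluster, repeat; the sample points are made up):

import random

def kmeans(points, k, iters=20, seed=0):
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins the cluster of its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: sum((a - b) ** 2
                                                for a, b in zip(p, centroids[j])))
            clusters[j].append(p)
        # Update step: each centroid moves to the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centroids[j] = tuple(sum(xs) / len(cl) for xs in zip(*cl))
    return centroids, clusters

pts = [(25, 30), (27, 32), (24, 28), (52, 80), (55, 85), (50, 78)]
print(kmeans(pts, 2)[0])   # two centroids, one per (age, income) group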


Hierarchical Clustering

■ Example from biological classification

★ (the word classification here does not mean a prediction mechanism)

chordata
  mammalia: leopards, humans
  reptilia: snakes, crocodiles

■ Other examples: Internet directory systems (e.g. Yahoo!)

■ Agglomerative clustering algorithms

★ Build small clusters, then cluster small clusters into bigger clusters, and so on

■ Divisive clustering algorithms

★ Start with all items in a single cluster, repeatedly refine (break) clusters into smaller ones
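A minimal sketch of the agglomerative strategy just described (single-linkage: the distance between two clusters is the distance between their closest pair of points; the data is made up):

def agglomerative(points, k):
    clusters = [[p] for p in points]          # every point starts as its own cluster
    while len(clusters) > k:
        best = None                           # find the two closest clusters
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(sum((a - b) ** 2 for a, b in zip(p, q))
                        for p in clusters[i] for q in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i].extend(clusters.pop(j))   # merge them
    return clusters

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
print(agglomerative(pts, 2))   # the two tight groups near (1,1) and (8,8)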


Collaborative Filtering

■ Goal: predict what movies/books/… a person may be interested in, on the basis of

★ Past preferences of the person

★ Other people with similar past preferences

★ The preferences of such people for a new movie/book/…

■ One approach based on repeated clustering

★ Cluster people on the basis of preferences for movies

★ Then cluster movies on the basis of being liked by the same clusters of people

★ Again cluster people based on their preferences for (the newly created clusters of) movies

★ Repeat above till equilibrium

■ Above problem is an instance of collaborative filtering, where users collaborate in the task of filtering information to find information of interest
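A toy sketch of the repeated-clustering idea (my own illustration under simplifying assumptions: 0/1 "liked" ratings, two clusters per side, deterministic centroid initialization):

# likes[u][m] = 1 if user u liked movie m (toy data: 4 users, 4 movies).
likes = [[1, 1, 0, 0],
         [1, 1, 0, 0],
         [0, 0, 1, 1],
         [0, 1, 1, 1]]

def two_means(rows, iters=10):
    """Cluster the row vectors into two groups; returns a label per row."""
    cents = [list(rows[0]), list(rows[-1])]   # deterministic initialization
    labels = [0] * len(rows)
    for _ in range(iters):
        for i, r in enumerate(rows):
            d = [sum((a - b) ** 2 for a, b in zip(r, c)) for c in cents]
            labels[i] = d.index(min(d))
        for c in (0, 1):
            members = [r for r, l in zip(rows, labels) if l == c]
            if members:
                cents[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels

# Step 1: cluster people on the basis of their preferences for movies.
user_cluster = two_means(likes)                          # [0, 0, 1, 1]

# Step 2: describe each movie by the mean rating from each user cluster,
# then cluster the movies on that description.
movie_profiles = [[sum(likes[u][m] for u in range(4) if user_cluster[u] == c) /
                   max(1, sum(1 for u in range(4) if user_cluster[u] == c))
                   for c in (0, 1)]
                  for m in range(4)]
movie_cluster = two_means(movie_profiles)                # [0, 0, 1, 1]
print(user_cluster, movie_cluster)
# Step 3 onward: re-describe users by movie-cluster averages, recluster,
# and repeat until the clusters stop changing (equilibrium).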


Other Types of Mining

■ Text mining: application of data mining to textual documents

★ cluster Web pages to find related pages

★ cluster pages a user has visited to organize their visit history

★ classify Web pages automatically into a Web directory

■ Data visualization systems help users examine large volumes of data and detect patterns visually

★ Can visually encode large amounts of information on a single screen

★ Humans are very good at detecting visual patterns


Data Streams

■ What are Data Streams?

★ Continuous streams

★ Huge, Fast, and Changing

■ Why Data Streams?

★ Streams arrive too fast, and carry too much data, for us to store them in full.

★ “Real-time” processing

■ Window Models

★ Landmark window (entire data stream)

★ Sliding Window

★ Damped Window

■ Mining Data Stream


A Simple Problem

■ Finding frequent items

★ Given a sequence (x1, …, xN) where each xi ∈ [1, m], and a real number θ between zero and one.

★ Looking for the items whose frequency exceeds θN.

★ Naive algorithm: m counters, one per possible item.

■ The number of frequent items is at most 1/θ: if P items each occur more than θN times, then P × (θN) ≤ N, so P ≤ 1/θ.

■ Problem: N >> m >> 1/θ.


KRP Algorithm --- Karp et al. (TODS '03)

(Worked example: θ = 0.35, so ⌈1/θ⌉ = 3 counters; N = 30 items drawn from m = 12 distinct values. The number of deletion rounds is bounded by N / ⌈1/θ⌉ ≤ Nθ.)
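A sketch of the counter-based one-pass technique as described by Karp, Shenker, and Papadimitriou (a second pass over the stream is still needed to verify the surviving candidates):

import math

def frequent_candidates(stream, theta):
    """One pass; returns a superset of the items occurring more than
    theta * N times, using at most ceil(1/theta) counters."""
    k = math.ceil(1 / theta)
    counters = {}
    for x in stream:
        if x in counters:
            counters[x] += 1
        elif len(counters) < k:
            counters[x] = 1
        else:
            # Deletion round: decrement every counter (and drop the new item).
            # Each round discards k+1 occurrences, so it runs fewer than
            # N*theta times, and no frequent item can be wiped out.
            for y in list(counters):
                counters[y] -= 1
                if counters[y] == 0:
                    del counters[y]
    return set(counters)

stream = [1, 2, 1, 3, 1, 2, 1, 4, 1, 5]    # item 1 has frequency 0.5
print(frequent_candidates(stream, 0.35))   # candidates include item 1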


Enhance the Accuracy

(Worked example: θ = 0.35 and ε = 0.5, so θ(1 - ε) = 0.175 and ⌈1/(θ × ε)⌉ = 6 counters; N = 30, m = 12, Nθ > 10. The figure marks the quantities (ε·θ)N and (θ - ε·θ)N; 9 deletion/reducing rounds occur.)


Frequent Items for Transactions with Fixed Length

(Example: each transaction has 2 items; with θ = 0.60, keep ⌈1/θ⌉ × 2 = 4 counters.)