CSE5334 DATA MINING
CSE4334/5334 Data Mining, Fall 2014
Department of Computer Science and Engineering, University of Texas at Arlington
Chengkai Li (Slides courtesy of Vipin Kumar)
Lecture 12:
Clustering (3)
Hierarchical Clustering
Hierarchical Clustering
Produces a set of nested clusters organized as a hierarchical tree
Can be visualized as a dendrogram: a tree-like diagram that records the sequences of merges or splits
[Figure: six points (1–6) in the plane and the dendrogram recording their merges, leaf order 1 3 2 5 4 6, merge heights from 0.05 up to 0.2]
Strengths of Hierarchical Clustering
Do not have to assume any particular number of clusters: any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level
They may correspond to meaningful taxonomies: examples in the biological sciences (e.g., animal kingdom, phylogeny reconstruction, …)
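As a hedged sketch of the "cutting" idea: with SciPy one can build the full hierarchy once, then cut it at whatever level yields the desired number of clusters. The data points below are hypothetical, not the ones from these slides.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Six 2-D points forming two visually separate groups (made-up data).
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
              [5.0, 5.0], [5.1, 5.2], [5.2, 5.1]])

# Build the merge hierarchy once (single-link agglomeration).
Z = linkage(X, method="single")

# 'Cut' the same dendrogram at different levels to get k = 2 or k = 3.
labels_k2 = fcluster(Z, t=2, criterion="maxclust")
labels_k3 = fcluster(Z, t=3, criterion="maxclust")
```

Note that no number of clusters was chosen before building `Z`; both cuts come from the same tree.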
Hierarchical Clustering
Two main types of hierarchical clustering:
Agglomerative: Start with the points as individual clusters. At each step, merge the closest pair of clusters, until only one cluster (or k clusters) is left.
Divisive: Start with one, all-inclusive cluster. At each step, split a cluster, until each cluster contains a single point (or there are k clusters).
Agglomerative Clustering Algorithm
The more popular of the two hierarchical clustering techniques
Basic algorithm is straightforward:
1. Compute the proximity matrix
2. Let each data point be a cluster
3. Repeat
4.     Merge the two closest clusters
5.     Update the proximity matrix
6. Until only a single cluster remains
Key operation is the computation of the proximity of two clusters
Different approaches to defining the distance between clusters distinguish the different algorithms
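The six steps above can be sketched in plain Python, here using single link (MIN) as the cluster proximity; the function and variable names are ours, not the slides':

```python
def euclidean(p, q):
    # Ordinary Euclidean distance between two points given as tuples.
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def agglomerative(points, k=1):
    """Merge the two closest clusters until only k clusters remain."""
    # Step 2: each data point starts as its own cluster.
    clusters = [[p] for p in points]

    def cluster_dist(c1, c2):
        # Single link: distance of the closest cross-cluster pair.
        return min(euclidean(p, q) for p in c1 for q in c2)

    # Steps 3-6: repeatedly merge the closest pair of clusters.
    while len(clusters) > k:
        i, j = min(
            ((a, b) for a in range(len(clusters))
                    for b in range(a + 1, len(clusters))),
            key=lambda ab: cluster_dist(clusters[ab[0]], clusters[ab[1]]),
        )
        clusters[i].extend(clusters[j])
        del clusters[j]
    return clusters

two = agglomerative([(0, 0), (0, 1), (5, 5), (5, 6)], k=2)
```

This naive version rescans all cluster pairs at every step, which is exactly where the cubic running time discussed later comes from.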
Starting Situation
Start with clusters of individual points and a proximity matrix
[Figure: each point p1, p2, … as its own singleton cluster, alongside the initial proximity matrix indexed by p1, p2, p3, …]
Intermediate Situation
After some merging steps, we have some clusters.
[Figure: intermediate clusters C1–C5, alongside the current proximity matrix indexed by C1–C5]
Intermediate Situation
We want to merge the two closest clusters (C2 and C5) and update the proximity matrix.
[Figure: clusters C1–C5, highlighting the closest pair C2 and C5, alongside the proximity matrix]
After Merging
The question is: “How do we update the proximity matrix?” That is, “How do we measure proximity (distance, similarity) between two clusters?”
[Figure: clusters C1, C3, C4, and the merged cluster C2 U C5; the proximity-matrix rows and columns for C2 U C5 are marked “?”]
How to Define Inter-Cluster Similarity
MIN (Single Linkage)
MAX (Complete Linkage)
Group Average
Distance Between Centroids
Other methods driven by an objective function
– Ward’s Method uses squared error
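A minimal sketch of the first four proximity definitions, written for clusters of 1-D points; the function names here are ours, not the slides':

```python
def dist(p, q):
    # Distance between two 1-D points.
    return abs(p - q)

def single_link(c1, c2):
    # MIN: distance of the closest cross-cluster pair.
    return min(dist(p, q) for p in c1 for q in c2)

def complete_link(c1, c2):
    # MAX: distance of the farthest cross-cluster pair.
    return max(dist(p, q) for p in c1 for q in c2)

def group_average(c1, c2):
    # Average over all |c1| * |c2| cross-cluster pairs.
    return sum(dist(p, q) for p in c1 for q in c2) / (len(c1) * len(c2))

def centroid_link(c1, c2):
    # Distance between the cluster centroids (means).
    return dist(sum(c1) / len(c1), sum(c2) / len(c2))

a, b = [0.0, 2.0], [5.0, 9.0]
```

For these two clusters the cross-pair distances are 5, 9, 3, and 7, so MIN gives 3, MAX gives 9, group average gives 6, and the centroids (1 and 7) are also 6 apart.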
Cluster Similarity: MIN or Single Link
Similarity of two clusters is based on the two most similar (closest) points in the different clusters. Determined by one pair of points, i.e., by one link in the proximity graph.

     I1   I2   I3   I4   I5
I1  1.00 0.90 0.10 0.65 0.20
I2  0.90 1.00 0.70 0.60 0.50
I3  0.10 0.70 1.00 0.40 0.30
I4  0.65 0.60 0.40 1.00 0.80
I5  0.20 0.50 0.30 0.80 1.00
Hierarchical Clustering: MIN
Nested Clusters Dendrogram
[Figure: MIN nested clusters over points 1–6 and the corresponding dendrogram, leaf order 3 6 2 5 4 1, merge heights from 0.05 up to 0.2]
Similarity vs. Distance
Similarity Matrix:

     I1   I2   I3   I4   I5
I1  1.00 0.90 0.10 0.65 0.20
I2  0.90 1.00 0.70 0.60 0.50
I3  0.10 0.70 1.00 0.40 0.30
I4  0.65 0.60 0.40 1.00 0.80
I5  0.20 0.50 0.30 0.80 1.00

Distance Matrix:

     I1   I2   I3   I4   I5
I1  0.00 0.20 1.80 0.70 1.60
I2  0.20 0.00 0.60 0.80 1.00
I3  1.80 0.60 0.00 1.20 1.40
I4  0.70 0.80 1.20 0.00 0.40
I5  1.60 1.00 1.40 0.40 0.00
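The two matrices appear related by d = 2 × (1 − s): e.g., s(I1, I2) = 0.90 gives d = 0.20, and s(I1, I3) = 0.10 gives d = 1.80. A sketch of that conversion (the exact mapping is our inference from the numbers, not stated on the slide):

```python
# A few off-diagonal entries of the similarity matrix above.
sim = {("I1", "I2"): 0.90, ("I1", "I3"): 0.10, ("I4", "I5"): 0.80}

def sim_to_dist(s):
    # Any monotonically decreasing transform turns similarities into
    # distances; this particular one matches the slide's numbers.
    return 2.0 * (1.0 - s)

dist = {pair: sim_to_dist(s) for pair, s in sim.items()}
```

The key point is only that high similarity corresponds to low distance, so either matrix can drive the same clustering.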
Strength of MIN
[Figure: original points (left) and the resulting two clusters (right)]
• Can handle non-elliptical shapes
Limitations of MIN
[Figure: original points (left) and the resulting two clusters (right)]
• Sensitive to noise and outliers
Cluster Similarity: MAX or Complete Linkage
Similarity of two clusters is based on the two least similar (most distant) points in the different clusters. Determined by all pairs of points in the two clusters.
     I1   I2   I3   I4   I5
I1  1.00 0.90 0.10 0.65 0.20
I2  0.90 1.00 0.70 0.60 0.50
I3  0.10 0.70 1.00 0.40 0.30
I4  0.65 0.60 0.40 1.00 0.80
I5  0.20 0.50 0.30 0.80 1.00
Hierarchical Clustering: MAX
Nested Clusters Dendrogram
[Figure: MAX nested clusters over points 1–6 and the corresponding dendrogram, leaf order 3 6 4 1 2 5, merge heights from 0.05 up to 0.4]
Strength of MAX
[Figure: original points (left) and the resulting two clusters (right)]
• Less susceptible to noise and outliers
Limitations of MAX
[Figure: original points (left) and the resulting two clusters (right)]
• Tends to break large clusters
• Biased towards globular clusters
Cluster Similarity: Group Average
Proximity of two clusters is the average of pairwise proximity between points in the two clusters.
Need to use average connectivity for scalability since total proximity favors large clusters
$$\mathrm{proximity}(\mathrm{Cluster}_i, \mathrm{Cluster}_j) = \frac{\displaystyle\sum_{p_i \in \mathrm{Cluster}_i,\; p_j \in \mathrm{Cluster}_j} \mathrm{proximity}(p_i, p_j)}{|\mathrm{Cluster}_i| \times |\mathrm{Cluster}_j|}$$

     I1   I2   I3   I4   I5
I1  1.00 0.90 0.10 0.65 0.20
I2  0.90 1.00 0.70 0.60 0.50
I3  0.10 0.70 1.00 0.40 0.30
I4  0.65 0.60 0.40 1.00 0.80
I5  0.20 0.50 0.30 0.80 1.00

Example group-average proximities from the figure: 0.4, 0.625, 0.35
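The three values 0.4, 0.625, and 0.35 can be reproduced from the similarity matrix; our reading (an inference, not stated on the slide) is that they are sim({I1, I2}, {I3}), sim({I1, I2}, {I4}), and sim({I3}, {I4, I5}):

```python
# Symmetric similarity matrix from the slide, items numbered 1..5 for I1..I5.
S = {
    (1, 2): 0.90, (1, 3): 0.10, (1, 4): 0.65, (1, 5): 0.20,
    (2, 3): 0.70, (2, 4): 0.60, (2, 5): 0.50,
    (3, 4): 0.40, (3, 5): 0.30, (4, 5): 0.80,
}

def sim(i, j):
    # Look up similarity; an item is fully similar to itself.
    return 1.0 if i == j else S[(min(i, j), max(i, j))]

def group_avg(c1, c2):
    # Average pairwise similarity across the two clusters.
    return sum(sim(i, j) for i in c1 for j in c2) / (len(c1) * len(c2))

v1 = group_avg({1, 2}, {3})      # (0.10 + 0.70) / 2 ≈ 0.40
v2 = group_avg({1, 2}, {4})      # (0.65 + 0.60) / 2 ≈ 0.625
v3 = group_avg({3}, {4, 5})      # (0.40 + 0.30) / 2 ≈ 0.35
```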
Hierarchical Clustering: Group Average
Nested Clusters Dendrogram
[Figure: group-average nested clusters over points 1–6 and the corresponding dendrogram, leaf order 3 6 4 2 5 1, merge heights from 0.05 up to 0.25]
Hierarchical Clustering: Group Average
Compromise between Single and Complete Link
Strengths: less susceptible to noise and outliers
Limitations: biased towards globular clusters
Hierarchical Clustering: Comparison
Group Average
MIN MAX
[Figure: the same six points (1–6) clustered by MIN, MAX, and Group Average, shown side by side]
Hierarchical Clustering: Time and Space requirements
O(N²) space, since it uses the proximity matrix; N is the number of points.
O(N³) time in many cases: there are N steps, and at each step the proximity matrix, of size N², must be updated and searched.
Complexity can be reduced to O(N² log N) time for some approaches.
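One way to see the cubic bound: there are N − 1 merge steps, and at step t the remaining proximity matrix has about (N − t)² entries to search, so the total work is

```latex
\sum_{t=1}^{N-1} O\!\left((N-t)^2\right) = O(N^3).
```

Roughly speaking, keeping the distances from each cluster in a sorted structure such as a heap replaces the quadratic per-step search with logarithmic-time updates, which is where the O(N² log N) figure for some approaches comes from.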
Hierarchical Clustering: Problems and Limitations
Once a decision is made to combine two clusters, it cannot be undone
No objective function is directly minimized
Different schemes have problems with one or more of the following:
• Sensitivity to noise and outliers
• Difficulty handling different-sized clusters and convex shapes
• Breaking large clusters