Data Mining Cluster Analysis: Basic Concepts and Algorithms
Data Mining Cluster Analysis: Basic Concepts and Algorithms Lecture Notes for Chapter 8 Introduction to Data Mining by Tan, Steinbach, Kumar Modified by S. Parthasarathy 5/01/2007 © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 1
What is Cluster Analysis? l Finding groups of objects such that the objects in a group will be similar (or related) to one another and different from (or unrelated to) the objects in other groups Inter-cluster distances are maximized Intra-cluster distances are minimized © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 2
Applications of Cluster Analysis l Understanding – Group related documents for browsing, group genes and proteins that have similar functionality, or group stocks with similar price fluctuations l Summarization – Reduce the size of large data sets Clustering precipitation in Australia © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 3
What is not Cluster Analysis? l Supervised classification – Have class label information l Simple segmentation – Dividing students into different registration groups alphabetically, by last name l Results of a query – Groupings are a result of an external specification l Graph partitioning – Some mutual relevance and synergy, but areas are not identical © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 4
Notion of a Cluster can be Ambiguous How many clusters? Six Clusters Two Clusters Four Clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 5
Types of Clusterings l A clustering is a set of clusters l Important distinction between hierarchical and partitional sets of clusters l Partitional Clustering – A division of data objects into non-overlapping subsets (clusters) such that each data object is in exactly one subset l Hierarchical clustering – A set of nested clusters organized as a hierarchical tree © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 6
Partitional Clustering (figure: Original Points and A Partitional Clustering) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 7
Hierarchical Clustering Traditional Dendrogram Non-traditional Hierarchical Clustering Non-traditional Dendrogram © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 8
Types of Clusters: Well-Separated l Well-Separated Clusters: – A cluster is a set of points such that any point in a cluster is closer (or more similar) to every other point in the cluster than to any point not in the cluster. 3 well-separated clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 9
Types of Clusters: Center-Based l Center-based – A cluster is a set of objects such that an object in a cluster is closer (more similar) to the “center” of a cluster, than to the center of any other cluster – The center of a cluster is often a centroid, the average of all the points in the cluster, or a medoid, the most “representative” point of a cluster 4 center-based clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 10
Types of Clusters: Contiguity-Based l Contiguous Cluster (Nearest neighbor or Transitive) – A cluster is a set of points such that a point in a cluster is closer (or more similar) to one or more other points in the cluster than to any point not in the cluster. 8 contiguous clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 11
Types of Clusters: Density-Based l Density-based – A cluster is a dense region of points, which is separated by low-density regions from other regions of high density. – Used when the clusters are irregular or intertwined, and when noise and outliers are present. 6 density-based clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 12
Characteristics of the Input Data Are Important l Type of proximity or density measure – This is a derived measure, but central to clustering l Sparseness – Dictates type of similarity – Adds to efficiency l Attribute type – Dictates type of similarity l Type of Data – Dictates type of similarity – Other characteristics, e.g., autocorrelation l Dimensionality l Noise and Outliers l Type of Distribution © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 13
Clustering Algorithms l K-means and its variants l Hierarchical clustering l Density-based clustering © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 14
K-means Clustering l Partitional clustering approach l Each cluster is associated with a centroid (center point) l Each point is assigned to the cluster with the closest centroid l Number of clusters, K, must be specified l The basic algorithm is very simple © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 15
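The pseudocode figure for the basic algorithm is not reproduced in these notes. Below is a minimal sketch of the standard iteration (choose K initial centroids, assign each point to its closest centroid, recompute centroids, repeat until they stop moving), written in Python with NumPy; the function and variable names are illustrative and not from the original slides.

```python
import numpy as np

def kmeans(points, k, max_iters=100, seed=0):
    """Basic K-means: assign points to the nearest centroid, then recompute centroids."""
    rng = np.random.default_rng(seed)
    # Initial centroids chosen randomly from the data (a common, simple choice)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(max_iters):
        # Assignment step: index of the closest centroid for every point
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each centroid becomes the mean of the points assigned to it
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):   # stop when centroids no longer move
            break
        centroids = new_centroids
    return centroids, labels
```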
K-means Clustering – Details l Initial centroids are often chosen randomly. – Clusters produced vary from one run to another. l The centroid is (typically) the mean of the points in the cluster. l ‘Closeness’ is measured by Euclidean distance, cosine similarity, correlation, etc. l K-means will converge for common similarity measures mentioned above. l Most of the convergence happens in the first few iterations. – Often the stopping condition is changed to ‘Until relatively few points change clusters’ l Complexity is O(n * K * I * d) – n = number of points, K = number of clusters, I = number of iterations, d = number of attributes © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 16
Two different K-means Clusterings Original Points Optimal Clustering © Tan, Steinbach, Kumar Introduction to Data Mining Sub-optimal Clustering 4/18/2004 17
Importance of Choosing Initial Centroids © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 18
Importance of Choosing Initial Centroids © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 19
Evaluating K-means Clusters l Most common measure is Sum of Squared Error (SSE) – For each point, the error is the distance to the nearest cluster centroid – To get SSE, we square these errors and sum them: SSE = Σᵢ₌₁ᴷ Σ_{x∈Cᵢ} dist²(mᵢ, x) – x is a data point in cluster Ci and mi is the representative point for cluster Ci u Can show that mi corresponds to the center (mean) of the cluster – Given two clusterings, we can choose the one with the smaller error – One easy way to reduce SSE is to increase K, the number of clusters u A good clustering with smaller K can have a lower SSE than a poor clustering with higher K © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 20
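As one possible illustration of the SSE defined above, the following sketch computes it for a given assignment of points to centroids; the array names follow the K-means sketch earlier and are assumptions, not part of the slides.

```python
import numpy as np

def sse(points, labels, centroids):
    """Sum of squared distances of each point to its assigned centroid."""
    diffs = points - centroids[labels]   # per-point error vectors
    return float(np.sum(diffs ** 2))     # square the errors and sum them
```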
Importance of Choosing Initial Centroids … © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 21
Importance of Choosing Initial Centroids … © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 22
Problems with Selecting Initial Points l If there are K ‘real’ clusters then the chance of selecting one centroid from each cluster is small. – Chance is relatively small when K is large – If clusters are the same size, n, then probability = K!nᴷ/(Kn)ᴷ = K!/Kᴷ – For example, if K = 10, then probability = 10!/10^10 = 0.00036 – Sometimes the initial centroids will readjust themselves in the ‘right’ way, and sometimes they don’t – Consider an example of five pairs of clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 23
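A quick check of the arithmetic above, assuming the stated model of K equal-size clusters and uniformly random initial centroids:

```python
from math import factorial

def prob_one_centroid_per_cluster(k):
    """P = (ways to pick one centroid per cluster) / (all ways) = K! / K^K for equal-size clusters."""
    return factorial(k) / k ** k

print(prob_one_centroid_per_cluster(10))   # ~0.00036, as on the slide
```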
Solutions to Initial Centroids Problem l Multiple runs – Helps, but probability is not on your side l Sample and use hierarchical clustering to determine initial centroids l Select more than k initial centroids and then select among these initial centroids – Select most widely separated l Postprocessing l Bisecting K-means – Not as susceptible to initialization issues © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 24
Handling Empty Clusters l Basic K-means algorithm can yield empty clusters l Several strategies – Choose the point that contributes most to SSE – Choose a point from the cluster with the highest SSE – If there are several empty clusters, the above can be repeated several times. © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 25
Updating Centers Incrementally l In the basic K-means algorithm, centroids are updated after all points are assigned to a centroid l An alternative is to update the centroids after each assignment (incremental approach) – Each assignment updates zero or two centroids – More expensive – Introduces an order dependency – Never get an empty cluster – Can use “weights” to change the impact © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 26
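A small sketch of the incremental idea, assuming running per-cluster counts are kept: moving one point between clusters touches only the two affected centroids. The `move_point` helper and its arguments are hypothetical names for illustration.

```python
import numpy as np

def move_point(x, src, dst, centroids, counts):
    """Reassign point x from cluster src to cluster dst, adjusting only those two centroids."""
    counts[src] -= 1
    if counts[src] > 0:                        # guard: keep the source centroid if its cluster empties
        centroids[src] = (centroids[src] * (counts[src] + 1) - x) / counts[src]
    counts[dst] += 1
    centroids[dst] = (centroids[dst] * (counts[dst] - 1) + x) / counts[dst]
```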
Pre-processing and Post-processing l Pre-processing – Normalize the data – Eliminate outliers l Post-processing – Eliminate small clusters that may represent outliers – Split ‘loose’ clusters, i.e., clusters with relatively high SSE – Merge clusters that are ‘close’ and that have relatively low SSE – Can use these steps during the clustering process u ISODATA © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 27
Limitations of K-means l K-means has problems when clusters are of differing – Sizes – Densities – Non-globular shapes l K-means has problems when the data contains outliers. l The mean may often not be a real point! © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 28
Limitations of K-means: Differing Density K-means (3 Clusters) Original Points © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 29
Limitations of K-means: Non-globular Shapes Original Points © Tan, Steinbach, Kumar K-means (2 Clusters) Introduction to Data Mining 4/18/2004 30
Overcoming K-means Limitations Original Points © Tan, Steinbach, Kumar K-means Clusters Introduction to Data Mining 4/18/2004 31
Hierarchical Clustering l Produces a set of nested clusters organized as a hierarchical tree l Can be visualized as a dendrogram – A tree-like diagram that records the sequences of merges or splits © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 32
Strengths of Hierarchical Clustering l Do not have to assume any particular number of clusters – Any desired number of clusters can be obtained by ‘cutting’ the dendrogram at the proper level l They may correspond to meaningful taxonomies – Example in biological sciences (e.g., animal kingdom, phylogeny reconstruction, …) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 33
Hierarchical Clustering l Two main types of hierarchical clustering – Agglomerative: u Start with the points as individual clusters u At each step, merge the closest pair of clusters until only one cluster (or k clusters) left – Divisive: u Start with one, all-inclusive cluster u At each step, split a cluster until each cluster contains a point (or there are k clusters) l Traditional hierarchical algorithms use a similarity or distance matrix – Merge or split one cluster at a time © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 34
Agglomerative Clustering Algorithm l More popular hierarchical clustering technique l Basic algorithm is straightforward 1. Compute the proximity matrix 2. Let each data point be a cluster 3. Repeat 4. Merge the two closest clusters 5. Update the proximity matrix 6. Until only a single cluster remains l Key operation is the computation of the proximity of two clusters – Different approaches to defining the distance between clusters distinguish the different algorithms © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 35
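A sketch of steps 1–6 above, assuming a NumPy array of points, Euclidean distances, and single-link (MIN) proximity as the default choice; for clarity it recomputes cluster distances on the fly rather than maintaining an explicit proximity matrix.

```python
import numpy as np
from itertools import combinations

def agglomerative(points, k=1, linkage=min):
    """Merge the two closest clusters until only k remain (single-link by default)."""
    clusters = [[i] for i in range(len(points))]                  # each point starts as its own cluster
    dist = lambda i, j: np.linalg.norm(points[i] - points[j])     # pairwise point proximity
    while len(clusters) > k:
        # Find the pair of clusters with the smallest linkage distance
        a, b = min(combinations(range(len(clusters)), 2),
                   key=lambda ab: linkage(dist(i, j)
                                          for i in clusters[ab[0]]
                                          for j in clusters[ab[1]]))
        clusters[a] += clusters[b]                                # merge the two closest clusters
        del clusters[b]
    return clusters
```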
Starting Situation l Start with clusters of individual points and a proximity matrix (figure: points p1…p5 and their proximity matrix) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 36
Intermediate Situation l After some merging steps, we have some clusters (figure: clusters C1…C5 and their proximity matrix) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 37
Intermediate Situation l We want to merge the two closest clusters (C2 and C5) and update the proximity matrix. (figure: clusters C1…C5 and their proximity matrix) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 38
After Merging l The question is “How do we update the proximity matrix?” (figure: proximity matrix with the merged cluster C2 ∪ C5; its proximities to C1, C3, and C4 are marked “?”) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 39
How to Define Inter-Cluster Similarity l MIN l MAX l Group Average l Distance Between Centroids (figure: proximity matrix for points p1…p5; this slide is repeated on slides 40–44, each highlighting one of the approaches) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 40
Cluster Similarity: MIN or Single Link l Similarity of two clusters is based on the two most similar (closest) points in the different clusters – Determined by one pair of points, i.e., by one link in the proximity graph. © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 45
Hierarchical Clustering: MIN (figure: Nested Clusters and Dendrogram for points 1–6) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 46
Strength of MIN Original Points Two Clusters • Can handle non-elliptical shapes © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 47
Limitations of MIN Original Points Two Clusters • Sensitive to noise and outliers © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 48
Cluster Similarity: MAX or Complete Linkage l Similarity of two clusters is based on the two least similar (most distant) points in the different clusters – Determined by all pairs of points in the two clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 49
Hierarchical Clustering: MAX (figure: Nested Clusters and Dendrogram for points 1–6) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 50
Strength of MAX Original Points Two Clusters • Less susceptible to noise and outliers © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 51
Limitations of MAX Original Points Two Clusters • Tends to break large clusters • Biased towards globular clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 52
Cluster Similarity: Group Average l Proximity of two clusters is the average of pairwise proximity between points in the two clusters: proximity(Cᵢ, Cⱼ) = Σ_{p∈Cᵢ, q∈Cⱼ} proximity(p, q) / (|Cᵢ|·|Cⱼ|) l Need to use average connectivity for scalability since total proximity favors large clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 53
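The three graph-based proximity definitions from the last few slides can be written directly; a sketch assuming `dist` is any pairwise distance function and clusters are collections of points:

```python
def single_link(A, B, dist):
    """MIN: distance between the two closest points in different clusters."""
    return min(dist(a, b) for a in A for b in B)

def complete_link(A, B, dist):
    """MAX: distance between the two most distant points in different clusters."""
    return max(dist(a, b) for a in A for b in B)

def group_average(A, B, dist):
    """Average pairwise distance between the two clusters."""
    return sum(dist(a, b) for a in A for b in B) / (len(A) * len(B))
```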
Hierarchical Clustering: Group Average (figure: Nested Clusters and Dendrogram for points 1–6) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 54
Hierarchical Clustering: Group Average l Compromise between Single and Complete Link l Strengths – Less susceptible to noise and outliers l Limitations – Biased towards globular clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 55
Hierarchical Clustering: Time and Space Requirements l O(N²) space since it uses the proximity matrix. – N is the number of points. l O(N³) time in many cases – There are N steps and at each step the proximity matrix, of size N², must be updated and searched – Complexity can be reduced to O(N² log(N)) time for some approaches © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 56
Hierarchical Clustering: Problems and Limitations l Once a decision is made to combine two clusters, it cannot be undone l No objective function is directly minimized l Different schemes have problems with one or more of the following: – Sensitivity to noise and outliers – Difficulty handling different sized clusters and convex shapes – Breaking large clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 57
MST: Divisive Hierarchical Clustering l Build MST (Minimum Spanning Tree) – Start with a tree that consists of any point – In successive steps, look for the closest pair of points (p, q) such that one point (p) is in the current tree but the other (q) is not – Add q to the tree and put an edge between p and q © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 58
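A sketch of the MST construction described above (Prim-style growth from an arbitrary starting point), assuming a NumPy array of points and Euclidean distances; a divisive clustering would then repeatedly remove the largest remaining edge to split the clusters.

```python
import numpy as np

def build_mst(points):
    """Grow an MST by repeatedly attaching the closest point not yet in the tree."""
    n = len(points)
    in_tree = {0}                              # start from an arbitrary point
    edges = []
    while len(in_tree) < n:
        best = None
        for p in in_tree:                      # look for the closest (p, q) pair crossing the tree boundary
            for q in range(n):
                if q not in in_tree:
                    d = np.linalg.norm(points[p] - points[q])
                    if best is None or d < best[0]:
                        best = (d, p, q)
        edges.append(best)                     # add edge p-q (stored as (distance, p, q))
        in_tree.add(best[2])
    return edges
```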
MST: Divisive Hierarchical Clustering l Use MST for constructing hierarchy of clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 59
DBSCAN l DBSCAN is a density-based algorithm. – Density = number of points within a specified radius (Eps) – A point is a core point if it has more than a specified number of points (MinPts) within Eps u These are points that are in the interior of a cluster – A border point has fewer than MinPts within Eps, but is in the neighborhood of a core point – A noise point is any point that is not a core point or a border point. © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 60
DBSCAN: Core, Border, and Noise Points © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 61
DBSCAN Algorithm l Eliminate noise points l Perform clustering on the remaining points © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 62
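A compact, simplified sketch of this procedure: find core points, grow clusters from them, attach border points, and leave the rest as noise. It follows the core/border/noise definitions from the earlier slide rather than the original paper's exact traversal; the array names are illustrative.

```python
import numpy as np

def dbscan(points, eps, min_pts):
    """Return one cluster label per point (-1 = noise), following the core/border/noise idea."""
    n = len(points)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbors = [np.where(dists[i] <= eps)[0] for i in range(n)]
    labels = np.full(n, -1)                    # noise until proven otherwise
    cluster = 0
    for c in range(n):
        if len(neighbors[c]) < min_pts or labels[c] != -1:
            continue                           # skip non-core points and already-labeled points
        labels[c] = cluster                    # start a new cluster from this core point
        stack = [c]
        while stack:
            p = stack.pop()
            for q in neighbors[p]:
                if labels[q] == -1:
                    labels[q] = cluster        # core and border neighbors join the cluster
                    if len(neighbors[q]) >= min_pts:
                        stack.append(q)        # only core points expand it further
        cluster += 1
    return labels
```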
DBSCAN: Core, Border and Noise Points (figure: Original Points; point types: core, border and noise; Eps = 10, MinPts = 4) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 63
When DBSCAN Works Well Original Points Clusters • Resistant to Noise • Can handle clusters of different shapes and sizes © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 64
When DBSCAN Does NOT Work Well (figure: Original Points; clusterings with MinPts=4, Eps=9.75 and MinPts=4, Eps=9.92) • Varying densities • High-dimensional data © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 65
Cluster Validity l For supervised classification we have a variety of measures to evaluate how good our model is – Accuracy, precision, recall l For cluster analysis, the analogous question is how to evaluate the “goodness” of the resulting clusters? l But “clusters are in the eye of the beholder”! l Then why do we want to evaluate them? – To avoid finding patterns in noise – To compare clustering algorithms – To compare two sets of clusters – To compare two clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 66
Clusters found in Random Data Random Points DBSCAN K-means © Tan, Steinbach, Kumar Complete Link Introduction to Data Mining 4/18/2004 67
Different Aspects of Cluster Validation 1. Determining the clustering tendency of a set of data, i.e., distinguishing whether non-random structure actually exists in the data. 2. Comparing the results of a cluster analysis to externally known results, e.g., to externally given class labels. 3. Evaluating how well the results of a cluster analysis fit the data without reference to external information. - Use only the data 4. Comparing the results of two different sets of cluster analyses to determine which is better. 5. Determining the ‘correct’ number of clusters. For 2, 3, and 4, we can further distinguish whether we want to evaluate the entire clustering or just individual clusters. © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 68
Using Similarity Matrix for Cluster Validation l Order the similarity matrix with respect to cluster labels and inspect visually. © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 69
Using Similarity Matrix for Cluster Validation l Clusters in random data are not so crisp DBSCAN © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 70
Intrinsic Measures of Clustering Quality © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 71
Cohesion and Separation l A proximity graph based approach can also be used for cohesion and separation. – Cluster cohesion is the sum of the weight of all links within a cluster. – Cluster separation is the sum of the weights between nodes in the cluster and nodes outside the cluster. cohesion © Tan, Steinbach, Kumar separation Introduction to Data Mining 4/18/2004 72
Silhouette Coefficient l Silhouette Coefficient combines ideas of both cohesion and separation, but for individual points, as well as clusters and clusterings l For an individual point, i – Calculate a = average distance of i to the points in its cluster – Calculate b = min (average distance of i to points in another cluster) – The silhouette coefficient for a point is then given by s = 1 – a/b if a < b (or s = b/a – 1 if a ≥ b, not the usual case) – Typically between 0 and 1. – The closer to 1 the better. l Can calculate the Average Silhouette width for a clustering © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 73
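A direct sketch of the per-point computation above; note that s = 1 − a/b (for a < b) is the same as the combined form s = (b − a)/max(a, b) used here. It assumes a NumPy array of points and at least two clusters.

```python
import numpy as np

def silhouette(points, labels):
    """Average silhouette width over all points; per-point s = (b - a) / max(a, b)."""
    labels = np.asarray(labels)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    scores = []
    for i, li in enumerate(labels):
        same = (labels == li) & (np.arange(len(labels)) != i)
        a = dists[i][same].mean() if same.any() else 0.0        # cohesion: average distance within own cluster
        b = min(dists[i][labels == lj].mean()                   # separation: closest other cluster on average
                for lj in set(labels.tolist()) if lj != li)
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```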
Other Measures of Cluster Validity l Entropy/Gini u If there is a class label – you can use the entropy/gini of the class label – similar to what we did for classification u If there is no class label – one can compute the entropy w.r.t. each attribute (dimension) and sum up or take a weighted average to compute the disorder within a cluster l Classification Error u If there is a class label one can compute this in a similar manner © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 74
Extensions: Clustering Large Databases l l l Most clustering algorithms assume a large data structure which is memory resident. Clustering may be performed first on a sample of the database then applied to the entire database. Algorithms – BIRCH – DBSCAN (we have already covered this) – CURE © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 75
Desired Features for Large Databases l l l l One scan (or less) of DB Online Suspendable, stoppable, resumable Incremental Work with limited main memory Different techniques to scan (e. g. sampling) Process each tuple once © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 76
More on Hierarchical Clustering Methods l Major weakness of agglomerative clustering methods – do not scale well: time complexity of at least O(n²), where n is the number of total objects – can never undo what was done previously l Integration of hierarchical with distance-based clustering – BIRCH (1996): uses CF-tree and incrementally adjusts the quality of sub-clusters – CURE (1998): selects well-scattered points from the cluster and then shrinks them towards the center of the cluster by a specified fraction © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 77
BIRCH l l l Balanced Iterative Reducing and Clustering using Hierarchies Incremental, hierarchical, one scan Save clustering information in a tree Each entry in the tree contains information about one cluster New nodes inserted in closest entry in tree © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 78
BIRCH (1996) l Incrementally construct a CF (Clustering Feature) tree, a hierarchical data structure for multiphase clustering – Phase 1: scan DB to build an initial in-memory CF tree (a multi-level compression of the data that tries to preserve the inherent clustering structure of the data) – Phase 2: use an arbitrary clustering algorithm to cluster the leaf nodes of the CF-tree l Scales linearly: finds a good clustering with a single scan and improves the quality with a few additional scans l Weakness: handles only numeric data, and sensitive to the order of the data record. © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 79
Clustering Feature l CF Triple: (N, LS, SS) – N: Number of points in cluster – LS: Sum of points in the cluster – SS: Sum of squares of points in the cluster l CF Tree – Balanced search tree – Node has CF triple for each child – Leaf node represents cluster and has CF value for each subcluster in it. – Subcluster has a maximum diameter © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 80
Clustering Feature Vector Clustering Feature: CF = (N, LS, SS) N: number of data points LS = Σᵢ₌₁ᴺ Xᵢ (linear sum of the points) SS = Σᵢ₌₁ᴺ Xᵢ² (square sum of the points) Example: CF = (5, (16, 30), (54, 190)) for the points (3, 4), (2, 6), (4, 5), (4, 7), (3, 8) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 81
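The CF triple is easy to reproduce; a short sketch computing (N, LS, SS) for the example points on this slide, plus the component-wise addition that lets BIRCH merge subclusters without revisiting the raw points. Function names are illustrative.

```python
import numpy as np

def clustering_feature(points):
    """CF = (N, LS, SS): count, linear sum, and square sum of the points."""
    pts = np.asarray(points, dtype=float)
    return len(pts), pts.sum(axis=0), (pts ** 2).sum(axis=0)

def merge_cf(cf1, cf2):
    """Two CFs merge by simple addition, so subclusters combine without touching the raw data."""
    return cf1[0] + cf2[0], cf1[1] + cf2[1], cf1[2] + cf2[2]

print(clustering_feature([(3, 4), (2, 6), (4, 5), (4, 7), (3, 8)]))
# (5, array([16., 30.]), array([ 54., 190.]))  -- matches the slide's example
```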
BIRCH Algorithm © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 82
Improve Clusters © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 83
CF Tree (figure: B = 7, L = 6) Root and non-leaf nodes hold CF entries CF1…CF6, each with a child pointer; leaf nodes hold CF entries for their subclusters and are chained with prev/next pointers © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 84
CURE l Clustering Using Representatives (CURE) – Stops the creation of a cluster hierarchy if a level consists of k clusters l Use many points to represent a cluster instead of only one – Uses multiple representative points to evaluate the distance between clusters, adjusts well to arbitrary shaped clusters and avoids the single-link effect – Points will be well scattered l Drawbacks of square-error based clustering method – Consider only one point as representative of a cluster – Good only for clusters that are convex shaped and of similar size and density, and if k can be reasonably estimated © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 85
CURE Approach © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 86
CURE for Large Databases © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 87
Cure: The Algorithm – Draw random sample s – Partition sample into p partitions with size s/p – Partially cluster partitions into s/pq clusters – Eliminate outliers u By random sampling u If a cluster grows too slowly, eliminate it – Cluster partial clusters – Label data on disk © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 88
Data Partitioning and Clustering – s = 50 – p = 2 – s/p = 25 – s/pq = 5 (figure: the sample partitioned into two partitions and partially clustered) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 89
Cure: Shrinking Representative Points l Shrink the multiple representative points towards the gravity center by a fraction α l Multiple representatives capture the shape of the cluster © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 90
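A sketch of the shrinking step, with the gravity center approximated here by the mean of the representative points (in CURE proper it is the cluster centroid); `alpha` is the shrink fraction α and the function name is illustrative.

```python
import numpy as np

def shrink_representatives(reps, alpha):
    """Move each representative toward the gravity center by fraction alpha (0..1)."""
    reps = np.asarray(reps, dtype=float)
    center = reps.mean(axis=0)                 # gravity center of the representatives
    return reps + alpha * (center - reps)      # alpha=0 keeps them in place, alpha=1 collapses them to the center
```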
Clustering Categorical Data: ROCK l ROCK: RObust Clustering using linKs, by S. Guha, R. Rastogi, K. Shim (ICDE ’99). – Use links to measure similarity/proximity – Not distance based u Example: (1, 0, 0, 0), (0, 1, 1, 0, 1, 1), (0, 0, 1, 0, 1) u A Euclidean-distance-based approach would cluster Pt2 with Pt3, and Pt1 with Pt4 – Problem? Pt1 and Pt4 have nothing in common © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 91
Rock: Algorithm l Links: the number of common neighbours of two points, using Jaccard similarity – Use similarities to determine neighbors u (pt1, pt4) = 0, (pt1, pt2) = 0, (pt1, pt3) = 0 u (pt2, pt3) = 0.6, (pt2, pt4) = 0.2 u (pt3, pt4) = 0.2 – Use 0.2 as threshold for neighbors u Pt2 and Pt3 have 3 common neighbors u Pt3 and Pt4 have 3 common neighbors u Pt2 and Pt4 have 3 common neighbors – Resulting clusters: (1), (2, 3, 4), which makes more sense l Algorithm – Draw random sample – Cluster with links – Label data on disk © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 92
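A sketch of the link computation on this slide: Jaccard similarity between binary records, neighbors defined by a similarity threshold, and links counted as common neighbours. The function names are illustrative.

```python
def jaccard(a, b):
    """Jaccard similarity of two binary vectors: |intersection| / |union| of the 1-positions."""
    sa, sb = {i for i, v in enumerate(a) if v}, {i for i, v in enumerate(b) if v}
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def links(records, threshold):
    """links[i][j] = number of common neighbours of records i and j."""
    n = len(records)
    neighbors = [{j for j in range(n) if j != i and jaccard(records[i], records[j]) >= threshold}
                 for i in range(n)]
    return [[len(neighbors[i] & neighbors[j]) for j in range(n)] for i in range(n)]
```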
Another example l Links: the number of common neighbours for the two points. Points: {1, 2, 3}, {1, 2, 4}, {1, 2, 5}, {1, 3, 4}, {1, 3, 5}, {1, 4, 5}, {2, 3, 4}, {2, 3, 5}, {2, 4, 5}, {3, 4, 5} – e.g., {1, 2, 3} and {1, 2, 4} have 3 common neighbours l Algorithm – Draw random sample – Cluster with links – Label data on disk © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 93
Midterm Performance (Winter 2009) (figure: bar chart of midterm scores, avg: 53.8) © Tan, Steinbach, Kumar Introduction to Data Mining 4/18/2004 94