Data Mining: Data
Lecture Notes for Chapter 2, Introduction to Data Mining, by Tan, Steinbach, Kumar
What is Data?
- A data set is a collection of data objects and their attributes.
- An attribute is a property or characteristic of an object.
  - Examples: eye color of a person, temperature, etc.
  - An attribute is also known as a variable, field, characteristic, or feature.
- A collection of attributes describes an object.
  - An object is also known as a record, point, case, sample, entity, or instance.
Data
- The types of data
- The quality of the data
- Preprocessing steps to make the data more suitable for data mining
- Analyzing data in terms of relationships (Example 2.1)
Attribute Values
- Attribute values are numbers or symbols assigned to an attribute.
- Distinction between attributes and attribute values:
  - The same attribute can be mapped to different attribute values.
    - Example: height can be measured in feet or meters.
  - Different attributes can be mapped to the same set of values.
    - Example: attribute values for ID and age are both integers, but the properties of the values differ: ID has no limit, while age has a minimum and a maximum value.
Measurement of Length
- The way you measure an attribute may not match the attribute's properties.
Types of Attributes
- There are different types of attributes:
  - Nominal. Examples: ID numbers, eye color, zip codes.
  - Ordinal. Examples: rankings (e.g., taste of potato chips on a scale from 1-10), grades, height in {tall, medium, short}.
  - Interval. Examples: calendar dates, temperatures in Celsius or Fahrenheit.
  - Ratio. Examples: temperature in Kelvin, length, time, counts.
Properties of Attribute Values
- The type of an attribute depends on which of the following properties it possesses:
  - Distinctness: = and ≠
  - Order: < and >
  - Addition: + and -
  - Multiplication: * and /
- Nominal attribute: distinctness
- Ordinal attribute: distinctness and order
- Interval attribute: distinctness, order, and addition
- Ratio attribute: all four properties
Attribute Types: Description, Examples, and Operations

Nominal:
  Description: the values of a nominal attribute are just different names, i.e., nominal attributes provide only enough information to distinguish one object from another. (=, ≠)
  Examples: zip codes, employee ID numbers, eye color, sex: {male, female}
  Operations: mode, entropy, contingency correlation, χ² test

Ordinal:
  Description: the values of an ordinal attribute provide enough information to order objects. (<, >)
  Examples: hardness of minerals, {good, better, best}, grades, street numbers
  Operations: median, percentiles, rank correlation, run tests, sign tests

Interval:
  Description: for interval attributes, the differences between values are meaningful, i.e., a unit of measurement exists. (+, -)
  Examples: calendar dates, temperature in Celsius or Fahrenheit
  Operations: mean, standard deviation, Pearson's correlation, t and F tests

Ratio:
  Description: for ratio variables, both differences and ratios are meaningful. (*, /)
  Examples: temperature in Kelvin, monetary quantities, counts, age, mass, length, electrical current
  Operations: geometric mean, harmonic mean, percent variation
Attribute Levels and Permissible Transformations

Nominal:
  Transformation: any permutation of values.
  Comment: if all employee ID numbers were reassigned, it would not make any difference.

Ordinal:
  Transformation: an order-preserving change of values, i.e., new_value = f(old_value), where f is a monotonic function.
  Comment: an attribute encompassing the notion of good, better, best can be represented equally well by the values {1, 2, 3} or by {0.5, 1, 10}.

Interval:
  Transformation: new_value = a * old_value + b, where a and b are constants.
  Comment: the Fahrenheit and Celsius temperature scales differ in where their zero value lies and in the size of a unit (degree).

Ratio:
  Transformation: new_value = a * old_value.
  Comment: length can be measured in meters or feet.
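The interval rule above can be checked numerically. A minimal sketch in plain Python (the values are invented for illustration): the Celsius-to-Fahrenheit map new_value = a * old_value + b preserves ratios of differences, but not ratios of the raw values.

```python
# Sketch: why interval attributes support differences but not ratios.
celsius = [0.0, 10.0, 20.0, 40.0]

# Interval transformation: new_value = a * old_value + b (here a = 9/5, b = 32).
fahrenheit = [9.0 / 5.0 * c + 32.0 for c in celsius]

# Ratios of differences survive the transformation...
diff_ratio_c = (celsius[3] - celsius[1]) / (celsius[1] - celsius[0])
diff_ratio_f = (fahrenheit[3] - fahrenheit[1]) / (fahrenheit[1] - fahrenheit[0])
print(diff_ratio_c, diff_ratio_f)      # 3.0 3.0

# ...but ratios of raw values do not: 20 C is not "twice as hot" as 10 C.
print(celsius[2] / celsius[1])         # 2.0
print(fahrenheit[2] / fahrenheit[1])   # ~1.36
```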
Discrete and Continuous Attributes
- Discrete attribute
  - Has only a finite or countably infinite set of values.
  - Examples: zip codes, counts, or the set of words in a collection of documents.
  - Often represented as integer variables.
  - Note: binary attributes are a special case of discrete attributes.
- Continuous attribute
  - Has real numbers as attribute values.
  - Examples: temperature, height, or weight.
  - Practically, real values can only be measured and represented using a finite number of digits.
  - Continuous attributes are typically represented as floating-point variables.
Asymmetric Attributes
- Only presence (a non-zero attribute value) is regarded as important.
  - Example: a student's record indicating whether he/she has taken a course or not.
  - Are students more similar in the number of courses they have taken, or in those they haven't?
- Asymmetric binary attributes: non-zero values are important (used in association analysis).
Types of Data Sets
- Record
  - Data Matrix
  - Document Data
  - Transaction Data
- Graph
  - World Wide Web
  - Molecular Structures
- Ordered
  - Spatial Data
  - Temporal Data
  - Sequential Data
  - Genetic Sequence Data
Important Characteristics of Structured Data
- Dimensionality
  - The curse of dimensionality.
- Sparsity (affects data with asymmetric features)
  - Only presence counts; this may work very well in data mining, since the 1's are what matter.
- Resolution
  - Patterns depend on the scale; properties are different at different resolutions (e.g., the surface of the Earth at different resolutions).
Record Data
- Data that consists of a collection of records, each of which consists of a fixed set of attributes.
  - Usually stored in flat files or relational databases.
  - No explicit relationship among records or data fields.
Data Matrix (Pattern Matrix)
- If data objects have the same fixed set of numeric attributes, then the data objects can be thought of as points in a multi-dimensional space, where each dimension represents a distinct attribute.
- Such a data set can be represented by an m by n matrix, where there are m rows, one for each object, and n columns, one for each attribute.
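A minimal sketch of an m-by-n data matrix, assuming NumPy is available; the values are invented for illustration. Each row is one object (a point in n-dimensional space), each column one attribute.

```python
import numpy as np

# m = 4 objects (rows), n = 3 numeric attributes (columns).
data = np.array([
    [1.5, 2.0, 0.5],
    [2.5, 1.0, 1.5],
    [0.5, 3.5, 2.0],
    [3.0, 2.5, 1.0],
])

m, n = data.shape
print(f"{m} objects, {n} attributes")
print("object 0 as a point:", data[0])             # one row = one data object
print("attribute 1 across objects:", data[:, 1])   # one column = one attribute
```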
Document Data (an example of a sparse data matrix)
- Each document becomes a 'term' vector:
  - each term is a component (attribute) of the vector,
  - the value of each component is the number of times the corresponding term occurs in the document.
- Non-zero = important?
Transaction Data (market basket data)
- A special type of record data, where each record (transaction) involves a set of items.
  - For example, consider a grocery store. The set of products purchased by a customer during one shopping trip constitutes a transaction, while the individual products that were purchased are the items.
Graph Data
- Examples: a generic graph and HTML links.
  - The graph captures relationships among data objects.
  - The data objects themselves may also be represented as graphs.
Graph Data (continued)
- Data with relationships among objects:
  - The relationships among objects frequently convey important information.
  - Often the data objects are mapped to nodes of the graph.
- Data with objects that are graphs:
  - When objects have structure, i.e., the objects contain subobjects that have relationships. Example: a chemical compound.
Chemical Data
- Benzene molecule: C6H6. (Figure: graph of the molecule's carbon and hydrogen atoms.)
- A graph representation makes it possible to:
  - determine which substructures occur frequently in a set of compounds, and
  - determine whether the presence of any of these substructures is associated with the presence or absence of certain chemical properties (melting point, etc.).
Ordered Data
- Attributes have relationships that involve order in time or space.
- Sequences of transactions (temporal data): each element of the sequence is a set of items/events.
Ordered Data (example)

Time | Customer | Items Purchased
t1   | C1       | A, B
t2   | C3       | A, C
t2   | C1       | C, D
t3   | C2       | A, D
t4   | C2       | E
t5   | C1       | A, E

Customer | Time and items purchased
C1       | (t1: A, B) (t2: C, D) (t5: A, E)
C2       | (t3: A, D) (t4: E)
C3       | (t2: A, C)
Sequence Data
- What is the difference between sequential data and sequence data?
- (Sequential data associates a time with each record or event; sequence data is simply an ordered series of entities, such as the letters of a genomic sequence, with no timestamps.)
Ordered Data
- Genomic sequence data.
Ordered Data
- Spatio-temporal data. Example: average monthly temperature of land and ocean.
- Spatial autocorrelation: objects that are physically close tend to be similar in other ways as well. Example: two points that are close to each other have similar rainfall.
Time Series Data
- A series of measurements taken over time.
- How do we handle non-record data?
Data Quality
- What kinds of data quality problems are there?
- How can we detect problems with the data?
- What can we do about these problems?
  - Detection and correction
  - Use of algorithms that can tolerate poor quality
- Examples of data quality problems:
  - Noise and outliers
  - Missing values
  - Duplicate data
Precision, Bias, and Accuracy
- Precision: the closeness of repeated measurements to one another.
- Bias: a systematic variation of measurements from the quantity being measured.
- Accuracy: the closeness of measurements to the true value of the quantity being measured.
Noise and Artifacts (distortion or addition)
- Noise refers to the modification of original values.
  - Examples: distortion of a person's voice when talking on a poor phone, and "snow" on a television screen.
- (Figures: two sine waves; the same two sine waves + noise.)
Outliers
- Outliers are data objects with characteristics that are considerably different from those of most of the other data objects in the data set.
Missing Values
- Reasons for missing values:
  - Information is not collected (e.g., people decline to give their age and weight).
  - Attributes may not be applicable to all cases (e.g., annual income is not applicable to children).
- Handling missing values:
  - Eliminate data objects
  - Estimate missing values
  - Ignore the missing value during analysis
  - Replace with all possible values (weighted by their probabilities)
Missing Values (continued)
- How do we estimate missing values?
  - Example: 2, 5, 11, ?, 47
- How do we replace a missing value with all possible values, weighted by their probabilities?
  - Suppose the domain is [0, 15], as in a 4-bit image.
  - Example: 2, 3, 4, 2, 8, ?, 2, 3, 4
  - Think PDF, think CDF.
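A hedged sketch of two of the strategies above, assuming NumPy: a point estimate from the observed values (here simply their mean, one of several plausible estimators), and a value weighted by the empirical probabilities (PDF) over the 4-bit domain. The slides do not prescribe a specific estimator.

```python
import numpy as np

values = np.array([2.0, 3.0, 4.0, 2.0, 8.0, np.nan, 2.0, 3.0, 4.0])
observed = values[~np.isnan(values)]

# Strategy 1: estimate the missing value, e.g. with the mean of observed values.
print("point estimate:", observed.mean())

# Strategy 2: replace with all possible values, weighted by their probabilities.
# The domain is [0, 15] (a 4-bit image); estimate the PDF from the data.
domain = np.arange(16)
counts = np.array([(observed == v).sum() for v in domain])
pdf = counts / counts.sum()            # empirical probability of each value
expected = (domain * pdf).sum()        # expectation under the empirical PDF
print("distribution-weighted value:", expected)
```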
Duplicate Data
- A data set may include data objects that are duplicates, or almost duplicates, of one another.
  - A major issue when merging data from heterogeneous sources.
  - Example: the same person with multiple email addresses.
- Data cleaning: the process of dealing with duplicate-data issues.
Application-Related Issues
- Quote: "Data is of high quality if it is suitable for its intended use."
- Timeliness: data start to age as soon as they are collected. Example: data collected on people who are buying Pentium IIs.
- Relevance: the available data must contain the information necessary for the application. Example: insurance data without age and gender. Beware of sampling bias, e.g., in surveys.
- Knowledge about the data: quality of the documentation, e.g., how missing values are encoded.
Data Preprocessing
Some of the most important ideas (all aimed at improving time, cost, and quality):
- Aggregation
- Sampling
- Dimensionality reduction
- Feature subset selection
- Feature creation
- Discretization and binarization
- Attribute transformation
These fall into two categories: selecting data objects and attributes for the analysis, or creating/changing the attributes.
Aggregation (less is more)
- Combining two or more attributes (or objects) into a single attribute (or object).
  - How would you deal with quantitative attributes? Sum or average them.
  - How would you deal with qualitative attributes? Omit them or summarize them as a set.
- Purpose (commonly used in OLAP):
  - Data reduction: fewer attributes or objects means less memory, faster processing, and room for more expensive algorithms.
  - Change of scale: cities aggregated into regions, states, countries, etc.
  - More "stable" data: aggregated data tends to have less variability.
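A brief sketch of aggregation with pandas (assumed available): a quantitative attribute is summed and averaged, a qualitative one summarized as a set, as suggested above. The table contents are invented.

```python
import pandas as pd

sales = pd.DataFrame({
    "city":    ["Perth", "Perth", "Sydney", "Sydney", "Sydney"],
    "amount":  [120.0, 80.0, 200.0, 50.0, 75.0],            # quantitative
    "product": ["milk", "bread", "milk", "eggs", "milk"],   # qualitative
})

# Aggregate individual transactions up to the city level.
by_city = sales.groupby("city").agg(
    total_amount=("amount", "sum"),          # quantitative -> sum
    avg_amount=("amount", "mean"),           # quantitative -> average
    products=("product", lambda s: set(s)),  # qualitative -> summarized as a set
)
print(by_city)
```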
Aggregation (example)
(Figures: variation of precipitation in Australia; standard deviation of average monthly precipitation vs. standard deviation of average yearly precipitation.)
Sampling
- Sampling is the main technique employed for data selection.
  - It is often used for both the preliminary investigation of the data and the final data analysis.
- Statisticians sample because obtaining the entire set of data of interest is too expensive or time consuming.
- Sampling is used in data mining because processing the entire set of data of interest is too expensive or time consuming.
  - With reduced data, better but more expensive algorithms can be used.
Sampling (continued)
- The key principle for effective sampling: using a sample will work almost as well as using the entire data set if the sample is representative.
- A sample is representative if it has approximately the same property (of interest) as the original set of data.
Types of Sampling
- Simple random sampling
  - There is an equal probability of selecting any particular item.
- Sampling without replacement
  - As each item is selected, it is removed from the population.
- Sampling with replacement
  - Objects are not removed from the population as they are selected for the sample, so the same object can be picked more than once.
- Stratified sampling
  - Split the data into several partitions, then draw random samples from each partition.
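A minimal sketch of the three schemes using NumPy's random generator; the population, group labels, and sample sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
population = np.arange(100)        # 100 object ids
groups = population % 4            # 4 strata of 25 objects each

without = rng.choice(population, size=10, replace=False)   # no repeats possible
with_repl = rng.choice(population, size=10, replace=True)  # repeats possible

# Stratified sampling: draw a fixed number of objects from each partition.
stratified = np.concatenate([
    rng.choice(population[groups == g], size=3, replace=False)
    for g in np.unique(groups)
])
print(without, with_repl, stratified, sep="\n")
```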
Sample Size
(Figures: the same data set shown with 8000, 2000, and 500 points.)
Sample Size
- What sample size is necessary to get at least one object from each of 10 groups?
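The question can be explored empirically. A hedged sketch in plain Python that assumes 10 equally sized groups (the slide does not state the group sizes) and simulates how often a sample of size n covers all 10 groups:

```python
import random

def coverage_probability(n, n_groups=10, trials=10_000, seed=0):
    """Estimate P(a sample of size n contains at least one object per group),
    assuming all groups are equally likely to be drawn from."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        seen = {rng.randrange(n_groups) for _ in range(n)}
        if len(seen) == n_groups:
            hits += 1
    return hits / trials

for n in (10, 20, 40, 60):
    print(n, coverage_probability(n))
```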
Progressive Sampling
- Start with a small sample, and then increase the sample size until a sample of sufficient size is obtained.
- Problem: we need a mechanism to determine whether the sample size is large enough.
Curse of Dimensionality
- When dimensionality increases, data become increasingly sparse in the space they occupy.
- Definitions of density and distance between points, which are critical for clustering and outlier detection, become less meaningful.
- Experiment: randomly generate 500 points and compute the difference between the maximum and minimum distance between any pair of points.
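The experiment described above can be reproduced directly. A sketch assuming NumPy: generate 500 random points for several dimensionalities d and report log10((max - min) / min) over all pairwise distances; the relative spread shrinks as d grows.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

for d in (2, 10, 50, 200):
    pts = rng.random((500, d))               # 500 random points in [0, 1]^d
    # Pairwise squared distances via |x - y|^2 = |x|^2 + |y|^2 - 2 x.y
    sq = (pts ** 2).sum(axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T, 0.0)
    dist = np.sqrt(d2[np.triu_indices(500, k=1)])   # distinct pairs only
    ratio = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:4d}  log10((max - min) / min) = {np.log10(ratio):6.2f}")
```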
Dimensionality Reduction
- Purpose:
  - Avoid the curse of dimensionality.
  - Reduce the amount of time and memory required by data mining algorithms.
  - Allow data to be more easily visualized.
  - May help to eliminate irrelevant features or reduce noise.
- Techniques:
  - Principal Component Analysis (PCA)
  - Singular Value Decomposition (SVD), closely related to PCA
  - Others: supervised and non-linear techniques
Dimensionality Reduction: PCA
- The goal is to find a projection that captures the largest amount of variation in the data.
- (Figure: data in the x1-x2 plane with principal eigenvector e.)
- Principal components: (1) are linear combinations of the original attributes, (2) are orthogonal (perpendicular) to each other, and (3) capture the maximum amount of variation in the data.
Dimensionality Reduction: PCA (continued)
- Find the eigenvectors of the covariance matrix.
- The eigenvectors define the new space.
- (Figure: data in the x1-x2 plane with eigenvector e.)
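A hedged sketch (NumPy only, synthetic data) of PCA exactly as described above: compute the covariance matrix, take its eigenvectors, and project onto the top component.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
# Correlated 2-D data: mostly spread along one direction.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]])

Xc = X - X.mean(axis=0)                  # center the data first
cov = np.cov(Xc, rowvar=False)           # covariance matrix of the attributes
eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: for symmetric matrices

order = np.argsort(eigvals)[::-1]        # sort components by variance captured
components = eigvecs[:, order]
projected = Xc @ components[:, :1]       # keep only the first principal component

print("variance captured by PC1:", eigvals[order][0] / eigvals.sum())
```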
Dimensionality Reduction: ISOMAP (Tenenbaum, de Silva, Langford, 2000)
- Construct a neighbourhood graph.
- For each pair of points in the graph, compute the shortest-path distances (geodesic distances).
Dimensionality Reduction: PCA
(Figure only.)
Feature Subset Selection
- Another way to reduce the dimensionality of data.
- Redundant features: duplicate much or all of the information contained in one or more other attributes.
  - Example: the purchase price of a product and the amount of sales tax paid.
- Irrelevant features: contain no information that is useful for the data mining task at hand.
  - Example: students' ID numbers are often irrelevant to the task of predicting students' GPA.
Feature Subset Selection (continued)
- Techniques:
  - Brute-force approach: try all possible feature subsets as input to the data mining algorithm. N attributes yield 2^N subsets, i.e., impractical.
  - Embedded approaches: feature selection occurs naturally as part of the data mining algorithm; the algorithm decides what to keep and what to eliminate.
  - Filter approaches: features are selected before the data mining algorithm is run, independently of the data mining task.
  - Wrapper approaches: use the data mining algorithm as a black box to find the best subset of attributes. Similar in spirit to the brute-force approach, but not all subsets are considered.
Feature Subset Selection Process
(Flowchart: starting from the full set of attributes, (1) a search strategy generates a subset of attributes, (2) the subset is evaluated, (3) a stopping criterion decides whether to loop back to the search, and (4) a validation procedure checks the finally selected attributes.)
Feature Creation
- Create new attributes that can capture the important information in a data set much more efficiently than the original attributes.
- Three general methodologies:
  - Feature extraction (domain-specific)
  - Mapping data to a new space
  - Feature construction (combining features)
Feature Extraction
- The creation of a new set of features from the original raw data is known as feature extraction.
  - Example: consider a set of pictures in which you are trying to separate the ones that contain a human face.
- It is highly domain-specific: whenever data mining is applied to a relatively new area, a key task is the development of new features and feature-extraction techniques.
Mapping Data to a New Space
- Fourier transform
- Wavelet transform
- (Figures: two sine waves + noise in the time domain; the same signal in the frequency domain.)
- The point is: better features can reveal important aspects of the data.
Binarization
- Some classification algorithms require the data to be in the form of categorical attributes; algorithms for association patterns require data in binary form.

Conversion of a categorical attribute to three binary attributes:
Categorical Value | Integer Value | x1 x2 x3
Awful             | 0             | 0  0  0
Poor              | 1             | 0  0  1
OK                | 2             | 0  1  0
Good              | 3             | 0  1  1
Great             | 4             | 1  0  0

Conversion of a categorical attribute to five asymmetric binary attributes:
Categorical Value | Integer Value | x1 x2 x3 x4 x5
Awful             | 0             | 1  0  0  0  0
Poor              | 1             | 0  1  0  0  0
OK                | 2             | 0  0  1  0  0
Good              | 3             | 0  0  0  1  0
Great             | 4             | 0  0  0  0  1
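A small sketch of both conversions in plain Python: the integer code turned into its binary digits, and into a one-hot (asymmetric binary) encoding.

```python
categories = ["Awful", "Poor", "OK", "Good", "Great"]

for code, name in enumerate(categories):
    # Three plain binary attributes: the bits of the integer code.
    bits = [(code >> k) & 1 for k in (2, 1, 0)]           # x1 x2 x3
    # Five asymmetric binary attributes: one-hot encoding.
    one_hot = [1 if k == code else 0 for k in range(5)]   # x1 .. x5
    print(f"{name:5s} {code}  bits={bits}  one_hot={one_hot}")
```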
Discretization Without Using Class Labels
- Discretization of continuous attributes involves two decisions: how many categories to use, and how to map attribute values to those categories.
- (Figures: the original data, and the results of equal-interval-width, equal-frequency, and K-means discretization.)
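A sketch of the first two unsupervised schemes, assuming NumPy: equal interval width via uniformly spaced bin edges, equal frequency via quantiles. (The K-means variant would cluster the values instead; it is omitted here.)

```python
import numpy as np

rng = np.random.default_rng(seed=0)
x = rng.normal(loc=10.0, scale=3.0, size=1000)   # a continuous attribute
k = 4                                            # number of categories

# Equal interval width: k bins of identical width over [min, max].
width_edges = np.linspace(x.min(), x.max(), k + 1)
width_bins = np.digitize(x, width_edges[1:-1])   # category index 0..k-1

# Equal frequency: edges at quantiles, so each bin gets roughly the same count.
freq_edges = np.quantile(x, np.linspace(0, 1, k + 1))
freq_bins = np.digitize(x, freq_edges[1:-1])

print("equal width counts:    ", np.bincount(width_bins))
print("equal frequency counts:", np.bincount(freq_bins))
```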
Discretization Using Class Labels
- Entropy-based approach.
- (Figures: discretization into 3 categories for both x and y; into 5 categories for both x and y.)
- Side note: what about categorical cases, for example departments, colleges, ...?
Attribute Transformation
- A function that maps the entire set of values of a given attribute to a new set of replacement values, such that each old value can be identified with one of the new values.
  - Simple functions: x^k, log(x), e^x, |x|
  - Standardization and normalization
- A word of caution on transformations, e.g., 1/x:
  - Does order matter?
  - Does it apply to all values?
  - What is the effect on values between 0 and 1?
Similarity and Dissimilarity
- Similarity
  - A numerical measure of how alike two data objects are.
  - Higher when objects are more alike.
  - Often falls in the range [0, 1].
- Dissimilarity
  - A numerical measure of how different two data objects are.
  - Lower when objects are more alike.
  - The minimum dissimilarity is often 0; the upper limit varies.
- Proximity refers to either a similarity or a dissimilarity.
Similarity/Dissimilarity for Simple Attributes
p and q are the attribute values for two data objects.
- Nominal: d = 0 if p = q, d = 1 if p ≠ q; s = 1 - d.
- Ordinal: d = |p - q| / (n - 1), where the values are mapped to the integers 0 to n - 1; s = 1 - d.
- Interval or ratio: d = |p - q|; s = -d, s = 1 / (1 + d), or a min-max normalized variant.
Euclidean Distance
- dist(p, q) = \sqrt{ \sum_{k=1}^{n} (p_k - q_k)^2 }
  where n is the number of dimensions (attributes), and p_k and q_k are, respectively, the kth attributes (components) of data objects p and q.
- Standardization is necessary if scales differ.
Euclidean Distance Matrix
(Figure: example points and the matrix of their pairwise Euclidean distances.)
Minkowski Distance
- Minkowski distance is a generalization of Euclidean distance:
  dist(p, q) = \left( \sum_{k=1}^{n} |p_k - q_k|^r \right)^{1/r}
  where r is a parameter, n is the number of dimensions (attributes), and p_k and q_k are, respectively, the kth attributes (components) of data objects p and q.
Minkowski Distance: Examples
- r = 1: city block (Manhattan, taxicab, L1 norm) distance.
  - A common example is the Hamming distance, which is just the number of bits that differ between two binary vectors.
- r = 2: Euclidean distance.
- r → ∞: "supremum" (L_max norm, L_∞ norm) distance.
  - This is the maximum difference between any components of the vectors.
- Do not confuse r with n: all of these distances are defined for any number of dimensions.
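A sketch of the Minkowski family with NumPy, covering r = 1 (city block), r = 2 (Euclidean), and the supremum limit; the two points are arbitrary.

```python
import numpy as np

def minkowski(p, q, r):
    """dist(p, q) = (sum_k |p_k - q_k|^r)^(1/r); r = inf gives the supremum norm."""
    diff = np.abs(np.asarray(p, float) - np.asarray(q, float))
    if np.isinf(r):
        return diff.max()          # L_inf: maximum difference in any component
    return (diff ** r).sum() ** (1.0 / r)

p, q = [0, 2], [3, 6]              # two hypothetical points
print(minkowski(p, q, 1))          # 7.0  (city block / L1)
print(minkowski(p, q, 2))          # 5.0  (Euclidean / L2)
print(minkowski(p, q, np.inf))     # 4.0  (supremum / L_inf)
```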
Minkowski Distance Matrix
(Figure: the same example points with their pairwise L1, L2, and L∞ distance matrices.)
Mahalanobis Distance
- mahalanobis(p, q) = (p - q) \, \Sigma^{-1} \, (p - q)^T
  where \Sigma is the covariance matrix of the input data X.
- Example (red points in the figure): the Euclidean distance is 14.7, the Mahalanobis distance is 6.
Mahalanobis Distance (example)
- Covariance matrix: \Sigma = [[0.3, 0.2], [0.2, 0.3]]
- A = (0.5, 0.5), B = (0, 1), C = (1.5, 1.5)
- Mahal(A, B) = 5
- Mahal(A, C) = 4
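The numbers above can be checked directly. A sketch assuming NumPy and the covariance matrix shown on this slide:

```python
import numpy as np

sigma = np.array([[0.3, 0.2],
                  [0.2, 0.3]])      # covariance matrix from the slide
sigma_inv = np.linalg.inv(sigma)

def mahal(p, q):
    """(p - q) Sigma^-1 (p - q)^T, as defined on the previous slide."""
    d = np.asarray(p, float) - np.asarray(q, float)
    return d @ sigma_inv @ d

A, B, C = (0.5, 0.5), (0.0, 1.0), (1.5, 1.5)
print(mahal(A, B))   # 5.0
print(mahal(A, C))   # 4.0
```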
Common Properties of a Distance
- Distances, such as the Euclidean distance, have some well-known properties:
  1. d(p, q) ≥ 0 for all p and q, and d(p, q) = 0 only if p = q. (Positive definiteness)
  2. d(p, q) = d(q, p) for all p and q. (Symmetry)
  3. d(p, r) ≤ d(p, q) + d(q, r) for all points p, q, and r. (Triangle inequality)
  where d(p, q) is the distance (dissimilarity) between points (data objects) p and q.
- A distance that satisfies these properties is a metric.
Common Properties of a Similarity
- Similarities also have some well-known properties:
  1. s(p, q) = 1 (or maximum similarity) only if p = q.
  2. s(p, q) = s(q, p) for all p and q. (Symmetry)
  where s(p, q) is the similarity between points (data objects) p and q.
Examples: Similarity
- Non-metric dissimilarity: set differences. Example: A = {1, 2, 3, 4} and B = {2, 3, 4}.
- Non-metric dissimilarity: time. Example: the "distance" between 1 PM and 2 PM.
- A non-symmetric similarity measure between data objects. Example: confusing the character 'O' and the digit '0'.
Similarity Between Binary Vectors
- A common situation is that objects p and q have only binary attributes.
- Compute similarities using the following quantities:
  M01 = the number of attributes where p was 0 and q was 1
  M10 = the number of attributes where p was 1 and q was 0
  M00 = the number of attributes where p was 0 and q was 0
  M11 = the number of attributes where p was 1 and q was 1
- Simple Matching Coefficient:
  SMC = number of matches / number of attributes = (M11 + M00) / (M01 + M10 + M11 + M00)
  This measures presence and absence equally. Example: students who answered questions similarly on a True/False exam.
- Jaccard Coefficient:
  J = number of 1-1 matches / number of not-both-zero attribute values = M11 / (M01 + M10 + M11)
SMC versus Jaccard: Example
p = 1000000000
q = 0000001001

M01 = 2 (the number of attributes where p was 0 and q was 1)
M10 = 1 (the number of attributes where p was 1 and q was 0)
M00 = 7 (the number of attributes where p was 0 and q was 0)
M11 = 0 (the number of attributes where p was 1 and q was 1)

SMC = (M11 + M00) / (M01 + M10 + M11 + M00) = (0 + 7) / (2 + 1 + 0 + 7) = 0.7
J = M11 / (M01 + M10 + M11) = 0 / (2 + 1 + 0) = 0
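The same example works out identically in code. A short sketch in plain Python:

```python
p = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
q = [0, 0, 0, 0, 0, 0, 1, 0, 0, 1]

m01 = sum(a == 0 and b == 1 for a, b in zip(p, q))  # 2
m10 = sum(a == 1 and b == 0 for a, b in zip(p, q))  # 1
m00 = sum(a == 0 and b == 0 for a, b in zip(p, q))  # 7
m11 = sum(a == 1 and b == 1 for a, b in zip(p, q))  # 0

smc = (m11 + m00) / (m01 + m10 + m11 + m00)  # counts all attributes equally
jac = m11 / (m01 + m10 + m11)                # ignores the 0-0 matches
print(smc, jac)                              # 0.7 0.0
```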
Cosine Similarity
- If d1 and d2 are two document vectors, then
  cos(d1, d2) = (d1 · d2) / (||d1|| ||d2||)   (recall the dot product)
  where · indicates the vector dot product and ||d|| is the length of vector d.
- Example:
  d1 = 3 2 0 5 0 0 0 2 0 0
  d2 = 1 0 0 0 0 0 0 1 0 2
  d1 · d2 = 3*1 + 2*0 + 0*0 + 5*0 + 0*0 + 0*0 + 0*0 + 2*1 + 0*0 + 0*2 = 5
  ||d1|| = (3*3 + 2*2 + 0*0 + 5*5 + 0*0 + 0*0 + 0*0 + 2*2 + 0*0 + 0*0)^0.5 = (42)^0.5 = 6.481
  ||d2|| = (1*1 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 0*0 + 1*1 + 0*0 + 2*2)^0.5 = (6)^0.5 = 2.449
  cos(d1, d2) = 0.315
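And the document-vector example in code, assuming NumPy:

```python
import numpy as np

d1 = np.array([3, 2, 0, 5, 0, 0, 0, 2, 0, 0], dtype=float)
d2 = np.array([1, 0, 0, 0, 0, 0, 0, 1, 0, 2], dtype=float)

# cos(d1, d2) = (d1 . d2) / (||d1|| ||d2||)
cos = d1 @ d2 / (np.linalg.norm(d1) * np.linalg.norm(d2))
print(round(cos, 4))   # 0.315: d1.d2 = 5, ||d1|| = sqrt(42), ||d2|| = sqrt(6)
```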
Extended Jaccard Coefficient (Tanimoto)
- A variation of Jaccard for continuous or count attributes:
  T(p, q) = \frac{p \cdot q}{\|p\|^2 + \|q\|^2 - p \cdot q}
- Reduces to Jaccard for binary attributes.
Correlation
- Correlation measures the linear relationship between objects.
- To compute correlation, we standardize the data objects p and q and then take their dot product:
  p'_k = (p_k - mean(p)) / std(p),  q'_k = (q_k - mean(q)) / std(q)
  correlation(p, q) = p' \cdot q'
- Note: as written, this dot product is not normalized; dividing by n - 1 brings the value into [-1, 1].
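A sketch of the standardize-then-dot-product recipe with NumPy, including the n - 1 normalization noted above; the data are invented and chosen to be perfectly linear.

```python
import numpy as np

def correlation(p, q):
    """Standardize both objects, take their dot product, normalize by n - 1."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    ps = (p - p.mean()) / p.std(ddof=1)   # ddof=1: sample standard deviation
    qs = (q - q.mean()) / q.std(ddof=1)
    return ps @ qs / (len(p) - 1)

p = [1, 2, 3, 4, 5]
q = [2, 4, 6, 8, 10]
print(correlation(p, q))         # 1.0 (perfect linear relationship)
print(np.corrcoef(p, q)[0, 1])   # same value from NumPy directly
```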
Correlation: A Different Approach
(Figure only.)
Visually Evaluating Correlation
(Figure: scatter plots showing correlations from -1 to 1.)
General Approach for Combining Similarities
- Sometimes attributes are of many different types, but an overall similarity is needed:
  1. For the kth attribute, compute a similarity s_k in [0, 1].
  2. Define an indicator δ_k = 0 if the kth attribute is asymmetric and both objects have value 0 (or if one object has a missing value), and δ_k = 1 otherwise.
  3. similarity(X, Y) = \frac{\sum_k \delta_k s_k}{\sum_k \delta_k}
- Example:
  X = [0 1 0 1 0 0 0 1]
  Y = [0 1 0 0 0 1 1 0 0 0]
  S_k = [1 1 1 0], δ_k = [1 1 1 0 0 1]
  Similarity(X, Y) = ?
Using Weights to Combine Similarities
- We may not want to treat all attributes the same.
  - Use weights w_k that are between 0 and 1 and sum to 1.
- Example (same X, Y, S_k, and δ_k as the previous slide):
  w = [0.2 0.1 0 0 0 0.4 0.2 0 0 0.1]
  Similarity(X, Y) = ?
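A hedged sketch of the weighted combination in plain Python. The slide's own vectors are partly garbled, so the s_k, δ_k, and w_k values below are invented; normalizing by the weight mass of the usable attributes is one reasonable choice, not necessarily the deck's exact formula.

```python
# s[k]: similarity of the k-th attribute pair, in [0, 1].
# delta[k]: 0 if the k-th comparison should be skipped (asymmetric 0-0 match
#           or a missing value), 1 otherwise.
# w[k]: attribute weights between 0 and 1 that sum to 1.
s     = [1.0, 1.0, 0.0, 1.0, 0.5, 0.0]
delta = [1,   1,   1,   0,   0,   1]
w     = [0.2, 0.1, 0.2, 0.2, 0.2, 0.1]

num = sum(wk * dk * sk for wk, dk, sk in zip(w, delta, s))
den = sum(wk * dk for wk, dk in zip(w, delta))   # weight mass of usable attributes
print(num / den)   # weighted overall similarity
```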
Density
- Density-based clustering requires a notion of density.
- Examples:
  - Euclidean density: the number of points per unit volume.
  - Probability density.
  - Graph-based density.
Euclidean Density: Cell-Based
- The simplest approach is to divide the region into a number of rectangular cells of equal volume and define density as the number of points each cell contains.
Euclidean Density: Center-Based
- Euclidean density is the number of points within a specified radius of a point.
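A sketch of center-based density with NumPy: count the points that fall within a given radius of each point. The data and radius are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
pts = rng.random((300, 2))        # 300 random points in the unit square
radius = 0.1

# Pairwise distances, then count neighbors within the radius
# (subtracting 1 so a point does not count itself).
diff = pts[:, None, :] - pts[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))
density = (dist <= radius).sum(axis=1) - 1

print("densest point:", pts[density.argmax()],
      "with", density.max(), "neighbors within the radius")
```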