Data Mining Classification: Basic Concepts, Decision Trees, and Model Evaluation
Lecture Notes for Chapter 4 (and, toward the end, material from Chapter 5) of Introduction to Data Mining by Tan, Steinbach, Kumar
Adapted and modified by Srinivasan Parthasarathy, 4/11/2007
Classification: Definition
- Given a collection of records (the training set):
  - Each record contains a set of attributes; one of the attributes is the class.
- Find a model for the class attribute as a function of the values of the other attributes.
- Goal: previously unseen records should be assigned a class as accurately as possible.
  - A test set is used to determine the accuracy of the model. Usually the given data set is divided into training and test sets: the training set is used to build the model and the test set is used to validate it.
Examples of Classification Tasks
- Classifying credit card transactions as legitimate or fraudulent
- Classifying secondary structures of proteins as alpha-helix, beta-sheet, or random coil
- Categorizing news stories as finance, weather, entertainment, sports, etc.
Classification Techniques
- Decision tree based methods
- Rule-based methods
- Memory-based reasoning
- Neural networks
- Naïve Bayes and Bayesian belief networks
- Support vector machines
Example of a Decision Tree
Training data attributes: Refund (categorical), Marital Status (categorical), Taxable Income (continuous), Cheat (class).
Model (decision tree), with Refund, Marital Status, and Taxable Income as the splitting attributes:
- Refund = Yes -> NO
- Refund = No:
  - Marital Status = Single or Divorced:
    - Taxable Income < 80K -> NO
    - Taxable Income >= 80K -> YES
  - Marital Status = Married -> NO
Decision Tree Classification Task
(Figure: a tree-induction algorithm learns a decision tree model from the training set; the model is then applied to classify the test set.)
Decision Tree Induction
Many algorithms:
- Hunt's Algorithm (one of the earliest)
- CART
- ID3, C4.5
- SLIQ, SPRINT
General Structure of Hunt's Algorithm
Let Dt be the set of training records that reach a node t.
General procedure:
- If Dt contains records that all belong to the same class yt, then t is a leaf node labeled as yt.
- If Dt is an empty set, then t is a leaf node labeled by the default class yd.
- If Dt contains records that belong to more than one class, use an attribute test to split the data into smaller subsets, then recursively apply the procedure to each subset.
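To make the recursion concrete, here is a minimal Python sketch of Hunt's procedure for multi-way splits on categorical attributes. The record format (a list of dicts with a "class" key), the choice of attribute (simply the next unused one, rather than an impurity-based choice, which comes later), and the default-class handling are illustrative assumptions, not part of the original slides.

```python
from collections import Counter

def hunt(records, attributes, default_class=None):
    """Minimal sketch of Hunt's algorithm with multi-way categorical splits."""
    # Case: Dt is empty -> leaf labeled with the default (parent majority) class
    if not records:
        return {"type": "leaf", "label": default_class}
    classes = [r["class"] for r in records]
    majority = Counter(classes).most_common(1)[0][0]
    # Case: all records belong to the same class (or no attributes left) -> leaf
    if len(set(classes)) == 1 or not attributes:
        return {"type": "leaf", "label": majority}
    # Case: more than one class -> split on an attribute test and recurse
    attr = attributes[0]
    node = {"type": "internal", "attribute": attr, "children": {}}
    for value in set(r[attr] for r in records):
        subset = [r for r in records if r[attr] == value]
        node["children"][value] = hunt(subset, attributes[1:], default_class=majority)
    return node

records = [
    {"refund": "yes", "marital": "single",  "class": "no"},
    {"refund": "no",  "marital": "married", "class": "no"},
    {"refund": "no",  "marital": "single",  "class": "yes"},
]
print(hunt(records, ["refund", "marital"]))
```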
Hunt's Algorithm (example on the Cheat data)
Progressive refinement of the tree:
1. Single leaf: Don't Cheat.
2. Split on Refund: Yes -> Don't Cheat; No -> Don't Cheat.
3. Refund = No is further split on Marital Status: Single or Divorced -> Cheat; Married -> Don't Cheat.
4. Single or Divorced is further split on Taxable Income: < 80K -> Don't Cheat; >= 80K -> Cheat.
Tree Induction
- Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
- Issues:
  - Determine how to split the records:
    - How to specify the attribute test condition?
    - How to determine the best split?
  - Determine when to stop splitting.
How to Specify the Test Condition?
- Depends on the attribute type: nominal, ordinal, continuous.
- Depends on the number of ways to split: 2-way split or multi-way split.
Splitting Based on Nominal Attributes
- Multi-way split: use as many partitions as there are distinct values. Example: CarType -> {Family}, {Sports}, {Luxury}.
- Binary split: divides the values into two subsets; need to find the optimal partitioning. Example: CarType -> {Sports, Luxury} vs. {Family}, or {Family, Luxury} vs. {Sports}.
Splitting Based on Continuous Attributes
(Figure: a binary split such as Taxable Income > 80K? versus a multi-way split into value ranges.)
How to Determine the Best Split
- Greedy approach: nodes with a homogeneous class distribution are preferred.
- Need a measure of node impurity:
  - Non-homogeneous class distribution: high degree of impurity.
  - Homogeneous class distribution: low degree of impurity.
Measures of Node Impurity
- Gini index
- Entropy
- Misclassification error
Measure of Impurity: GINI
- Gini index for a given node t: GINI(t) = 1 - Σ_j [p(j|t)]^2, where p(j|t) is the relative frequency of class j at node t.
- Maximum (1 - 1/nc) when records are equally distributed among all nc classes, implying the least interesting information.
- Minimum (0.0) when all records belong to one class, implying the most interesting information.
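As a quick check of the definition, here is a small Python helper (the function name and input format are my own) that computes GINI(t) from the class counts at a node.

```python
def gini(counts):
    """GINI(t) = 1 - sum_j p(j|t)^2, where counts are the class counts at node t."""
    n = sum(counts)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(gini([5, 5]))   # equal distribution over 2 classes -> 0.5, the maximum 1 - 1/2
print(gini([10, 0]))  # pure node -> 0.0, the minimum
```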
Splitting Based on GINI
- Used in CART, SLIQ, SPRINT.
- When a node p is split into k partitions (children), the quality of the split is computed as GINI_split = Σ_{i=1..k} (n_i / n) × GINI(i), where n_i is the number of records at child i and n is the number of records at node p.
Binary Attributes: Computing the GINI Index
- Splits into two partitions.
- Effect of weighting partitions: larger and purer partitions are sought.
Example: split on B? Yes -> Node N1 (C1: 5, C2: 2), No -> Node N2 (C1: 1, C2: 4).
- Gini(N1) = 1 - (5/7)^2 - (2/7)^2 = 0.408
- Gini(N2) = 1 - (1/5)^2 - (4/5)^2 = 0.320
- Gini(children) = 7/12 × 0.408 + 5/12 × 0.320 = 0.371
(Each child's probabilities use that child's own record count, 7 and 5 respectively; the original slide mistakenly divided by 6.)
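The weighted average can be reproduced directly; this short sketch (function names are mine) recomputes the corrected numbers from the example above.

```python
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children_counts):
    """GINI_split = sum over children of (n_i / n) * GINI(child i)."""
    n = sum(sum(c) for c in children_counts)
    return sum(sum(c) / n * gini(c) for c in children_counts)

# Split on B?: N1 has (C1=5, C2=2), N2 has (C1=1, C2=4)
print(round(gini([5, 2]), 3), round(gini([1, 4]), 3))   # 0.408 0.32
print(round(gini_split([[5, 2], [1, 4]]), 3))           # 0.371
```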
Categorical Attributes: Computing the Gini Index
- For each distinct value, gather counts for each class in the dataset.
- Use the count matrix to make decisions.
- Options: a multi-way split, or a two-way split (find the best partition of the values).
Continuous Attributes: Computing the Gini Index
- Use binary decisions based on one value.
- Several choices for the splitting value: the number of possible splitting values equals the number of distinct values.
- Each splitting value v has a count matrix associated with it: class counts in each of the partitions, A < v and A >= v.
- Simple method to choose the best v: for each v, scan the database to gather the count matrix and compute its Gini index. Computationally inefficient! Repetition of work.
Continuous Attributes: Computing the Gini Index...
For efficient computation, for each attribute:
- Sort the attribute on its values.
- Linearly scan these values, each time updating the count matrix and computing the Gini index.
- Choose the split position that has the least Gini index.
(Figure: sorted values with candidate split positions between consecutive distinct values.)
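Here is a hedged Python sketch of the sort-and-scan idea: the values are sorted once and a running count matrix is updated at each candidate split position (midpoints between consecutive distinct values). The function and variable names are mine, and the small dataset loosely follows the chapter's Taxable Income example.

```python
from collections import Counter

def gini(counter):
    n = sum(counter.values())
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in counter.values())

def best_continuous_split(values, labels):
    """Sort once, then scan: update left/right class counts at each candidate position."""
    pairs = sorted(zip(values, labels))
    n = len(pairs)
    left = Counter()
    right = Counter(label for _, label in pairs)
    best_gini, best_value = float("inf"), None
    for i in range(n - 1):
        value, label = pairs[i]
        left[label] += 1
        right[label] -= 1
        if pairs[i + 1][0] == value:       # only split between distinct values
            continue
        candidate = (value + pairs[i + 1][0]) / 2
        weighted = (i + 1) / n * gini(left) + (n - i - 1) / n * gini(right)
        if weighted < best_gini:
            best_gini, best_value = weighted, candidate
    return best_value, best_gini

income = [60, 70, 75, 85, 90, 95, 100, 120, 125, 220]
cheat  = ["No", "No", "No", "Yes", "Yes", "Yes", "No", "No", "No", "No"]
print(best_continuous_split(income, cheat))   # splits at 97.5 with weighted Gini 0.3
```

The single pass over the sorted values is what removes the repeated database scans criticized on the previous slide.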
Alternative Splitting Criteria Based on INFO
- Entropy at a given node t: Entropy(t) = - Σ_j p(j|t) log2 p(j|t), where p(j|t) is the relative frequency of class j at node t.
- Measures the homogeneity of a node.
  - Maximum (log2 nc) when records are equally distributed among all classes, implying the least information.
  - Minimum (0.0) when all records belong to one class, implying the most information.
- Entropy-based computations are similar to the GINI index computations.
Examples of Computing Entropy
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Entropy = -0 log2 0 - 1 log2 1 = 0
- P(C1) = 1/6, P(C2) = 5/6: Entropy = -(1/6) log2(1/6) - (5/6) log2(5/6) = 0.65
- P(C1) = 2/6, P(C2) = 4/6: Entropy = -(2/6) log2(2/6) - (4/6) log2(4/6) = 0.92
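A short Python check of these numbers (the function name is my own; 0·log 0 is treated as 0, as usual):

```python
import math

def entropy(counts):
    """Entropy(t) = -sum_j p(j|t) * log2 p(j|t), with 0*log(0) treated as 0."""
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

print(round(entropy([0, 6]), 2))  # 0.0
print(round(entropy([1, 5]), 2))  # 0.65
print(round(entropy([2, 4]), 2))  # 0.92
```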
Splitting Based on INFO...
- Information gain: when a parent node p is split into k partitions, with n_i the number of records in partition i, GAIN_split = Entropy(p) - Σ_{i=1..k} (n_i / n) Entropy(i).
- Measures the reduction in entropy achieved because of the split; choose the split that achieves the most reduction (maximizes GAIN).
- Used in ID3 and C4.5.
- Disadvantage: tends to prefer splits that result in a large number of partitions, each being small but pure.
Splitting Based on INFO...
- Gain ratio: when a parent node p is split into k partitions, with n_i the number of records in partition i, GainRatio_split = GAIN_split / SplitINFO, where SplitINFO = - Σ_{i=1..k} (n_i / n) log2(n_i / n).
- Adjusts information gain by the entropy of the partitioning (SplitINFO); a higher-entropy partitioning (a large number of small partitions) is penalized!
- Used in C4.5.
- Designed to overcome the disadvantage of information gain.
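A minimal sketch of information gain and gain ratio for a candidate split, assuming class counts are available for the parent and each child (helper names are mine; entropy() is repeated here so the snippet runs on its own):

```python
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def info_gain(parent_counts, children_counts):
    """GAIN_split = Entropy(parent) - sum_i (n_i / n) * Entropy(child i)."""
    n = sum(parent_counts)
    children_entropy = sum(sum(c) / n * entropy(c) for c in children_counts)
    return entropy(parent_counts) - children_entropy

def gain_ratio(parent_counts, children_counts):
    """GainRatio = GAIN_split / SplitINFO, SplitINFO = -sum_i (n_i/n) log2(n_i/n)."""
    n = sum(parent_counts)
    split_info = -sum(sum(c) / n * math.log2(sum(c) / n) for c in children_counts)
    return info_gain(parent_counts, children_counts) / split_info

# Parent with 10 records of each class, split into two children (illustrative counts)
print(info_gain([10, 10], [[10, 2], [0, 8]]))
print(gain_ratio([10, 10], [[10, 2], [0, 8]]))
```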
Splitting Criteria Based on Classification Error
- Classification error at a node t: Error(t) = 1 - max_i p(i|t).
- Measures the misclassification error made by a node.
  - Maximum (1 - 1/nc) when records are equally distributed among all classes, implying the least interesting information.
  - Minimum (0.0) when all records belong to one class, implying the most interesting information.
Examples of Computing Error
- P(C1) = 0/6 = 0, P(C2) = 6/6 = 1: Error = 1 - max(0, 1) = 1 - 1 = 0
- P(C1) = 1/6, P(C2) = 5/6: Error = 1 - max(1/6, 5/6) = 1 - 5/6 = 1/6
- P(C1) = 2/6, P(C2) = 4/6: Error = 1 - max(2/6, 4/6) = 1 - 4/6 = 1/3
Comparison Among Splitting Criteria
(Figure: for a 2-class problem, entropy, Gini, and misclassification error are plotted against the fraction p of records in one class; all three peak at p = 0.5 and are 0 at p = 0 or p = 1.)
Tree Induction (recap)
- Greedy strategy: split the records based on an attribute test that optimizes a certain criterion.
- Issues:
  - Determine how to split the records:
    - How to specify the attribute test condition?
    - How to determine the best split?
  - Determine when to stop splitting.
Stopping Criteria for Tree Induction
- Stop expanding a node when all the records belong to the same class.
- Stop expanding a node when all the records have similar attribute values.
- Early termination (to be discussed later).
Decision Tree Based Classification
Advantages:
- Inexpensive to construct.
- Extremely fast at classifying unknown records.
- Easy to interpret for small-sized trees.
- Accuracy is comparable to other classification techniques for many simple data sets.
Example: C4.5
- Simple depth-first construction.
- Uses information gain.
- Sorts continuous attributes at each node.
- Needs the entire data set to fit in memory.
- Unsuitable for large datasets: would need out-of-core sorting.
- You can download the software from: http://www.cse.unsw.edu.au/~quinlan/c4.5r8.tar.gz
Practical Issues of Classification
- Underfitting and overfitting
- Missing values
- Costs of classification
Underfitting and Overfitting
Underfitting: when the model is too simple, both training and test errors are large.
(Figure: training and test error rates plotted against the number of nodes in the tree.)
Overfitting due to Noise
The decision boundary is distorted by a noise point.
Overfitting due to Insufficient Examples
- A lack of data points in the lower half of the diagram makes it difficult to correctly predict the class labels in that region.
- An insufficient number of training records in the region causes the decision tree to predict the test examples using other training records that are irrelevant to the classification task.
Notes on Overfitting
- Overfitting results in decision trees that are more complex than necessary.
- Training error no longer provides a good estimate of how well the tree will perform on previously unseen records.
- Need new ways of estimating errors.
Estimating Generalization Errors
- Re-substitution errors: error on training, e(t).
- Generalization errors: error on testing, e'(t).
- Methods for estimating generalization errors:
  - Optimistic approach: e'(t) = e(t).
  - Pessimistic approach:
    - For each leaf node: e'(t) = e(t) + 0.5
    - Total errors: e'(T) = e(T) + N × 0.5 (N: number of leaf nodes)
    - For a tree with 30 leaf nodes and 10 errors on training (out of 1000 instances): training error = 10/1000 = 1%; estimated generalization error = (10 + 30 × 0.5)/1000 = 2.5%
  - Reduced error pruning (REP): uses a validation data set to estimate the generalization error.
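The pessimistic estimate is easy to reproduce; the numbers below are the slide's own example (the function name is mine):

```python
def pessimistic_error(train_errors, num_leaves, num_instances, penalty=0.5):
    """e'(T) = (e(T) + N * penalty) / num_instances, adding 0.5 per leaf node."""
    return (train_errors + num_leaves * penalty) / num_instances

print(10 / 1000)                         # training error = 0.01 (1%)
print(pessimistic_error(10, 30, 1000))   # generalization estimate = 0.025 (2.5%)
```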
How to Address Overfitting
Pre-pruning (early stopping rule):
- Stop the algorithm before it becomes a fully grown tree.
- Typical stopping conditions for a node:
  - Stop if all instances belong to the same class.
  - Stop if all the attribute values are the same.
- More restrictive conditions:
  - Stop if the number of instances is less than some user-specified threshold.
  - Stop if the class distribution of instances is independent of the available features (e.g., using a χ² test).
  - Stop if expanding the current node does not improve impurity measures (e.g., Gini or information gain).
How to Address Overfitting...
Post-pruning:
- Grow the decision tree to its entirety.
- Trim the nodes of the decision tree in a bottom-up fashion.
- If the generalization error improves after trimming, replace the sub-tree by a leaf node.
- The class label of the leaf node is determined from the majority class of instances in the sub-tree.
- Can use MDL for post-pruning.
Example of Post-Pruning
- Before splitting: Class = Yes: 20, Class = No: 10.
  - Training error (before splitting) = 10/30
  - Pessimistic error = (10 + 0.5)/30 = 10.5/30
- After splitting into four children with class counts (Yes 8, No 4), (Yes 3, No 4), (Yes 4, No 1), (Yes 5, No 1):
  - Training error (after splitting) = 9/30
  - Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30
- Since 11/30 > 10.5/30: PRUNE!
Examples of Post-pruning
- Case 1: children with class counts (C0: 11, C1: 3) and (C0: 2, C1: 4).
- Case 2: children with class counts (C0: 14, C1: 3) and (C0: 2, C1: 2).
- Optimistic error? Don't prune in either case.
- Pessimistic error? Don't prune case 1, prune case 2.
- Reduced error pruning? Depends on the validation set.
Occam's Razor
- Given two models with similar generalization errors, one should prefer the simpler model over the more complex one.
- For complex models, there is a greater chance that the model was fitted accidentally by errors in the data.
- Therefore, one should include model complexity when evaluating a model.
Handling Missing Attribute Values
- Missing values affect decision tree construction in three different ways:
  - How impurity measures are computed.
  - How to distribute an instance with missing values to child nodes.
  - How a test instance with missing values is classified.
- While the book describes a few ways this can be handled as part of the tree-building process, it is often best to handle it using standard statistical methods, e.g., EM-based estimation.
Other Issues
- Data fragmentation
- Search strategy
- Expressiveness
Data Fragmentation
- The number of instances gets smaller as you traverse down the tree.
- The number of instances at the leaf nodes could be too small to make any statistically significant decision.
Search Strategy
- Finding an optimal decision tree is NP-hard.
- The algorithm presented so far uses a greedy, top-down, recursive partitioning strategy to induce a reasonable solution.
- Other strategies? Bottom-up, bi-directional.
Expressiveness
- Decision trees provide an expressive representation for learning discrete-valued functions.
  - But they do not generalize well to certain types of Boolean functions, e.g., XOR or parity functions (example in the book).
- Not expressive enough for modeling continuous variables, particularly when the test condition involves only a single attribute at a time.
Expressiveness: Oblique Decision Trees
- Example test condition: x + y < 1, separating the Class = + region from the Class = - region.
- Test conditions may involve multiple attributes.
- More expressive representation.
- Finding the optimal test condition is computationally expensive.
- Needs multi-dimensional discretization.
Model Evaluation
- Metrics for performance evaluation: how to evaluate the performance of a model?
- Methods for performance evaluation: how to obtain reliable estimates?
- Methods for model comparison: how to compare the relative performance among competing models?
Metrics for Performance Evaluation
- Focus on the predictive capability of a model, rather than how fast it classifies or builds models, scalability, etc.
- Confusion matrix:
                      PREDICTED Class=Yes     PREDICTED Class=No
  ACTUAL Class=Yes    a (TP, true positive)   b (FN, false negative)
  ACTUAL Class=No     c (FP, false positive)  d (TN, true negative)
Metrics for Performance Evaluation...
- Most widely used metric: Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN).
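A one-line helper, using the a/b/c/d cells defined above (a minimal sketch; the example counts are model M1's confusion matrix from the cost slides a little further down):

```python
def accuracy(a, b, c, d):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (a + d) / (a + b + c + d)

print(accuracy(a=150, b=40, c=60, d=250))  # 0.8, i.e., 80% for model M1
```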
Limitation of Accuracy
- Consider a 2-class problem: 9990 examples of class 0 and 10 examples of class 1.
- If the model predicts everything to be class 0, accuracy is 9990/10000 = 99.9%.
  - Accuracy is misleading because the model does not detect any class 1 example.
Cost Matrix
                      PREDICTED Class=Yes   PREDICTED Class=No
  ACTUAL Class=Yes    C(Yes|Yes)            C(No|Yes)
  ACTUAL Class=No     C(Yes|No)             C(No|No)
C(i|j): cost of misclassifying a class j example as class i.
Computing the Cost of Classification
Cost matrix C(i|j) (rows = actual, columns = predicted):
  actual +:  C(+|+) = -1,  C(-|+) = 100
  actual -:  C(+|-) =  1,  C(-|-) =   0
Model M1 confusion matrix (rows = actual, columns = predicted):
  actual +: 150 predicted +, 40 predicted -
  actual -:  60 predicted +, 250 predicted -
  Accuracy = 80%, Cost = 3910
Model M2 confusion matrix:
  actual +: 250 predicted +, 45 predicted -
  actual -:   5 predicted +, 200 predicted -
  Accuracy = 90%, Cost = 4255
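A short sketch reproducing the slide's cost numbers (matrix layout as above: rows are the actual class, columns the predicted class; names are my own):

```python
def total_cost(confusion, cost_matrix):
    """Sum over cells of count * cost, rows = actual class, columns = predicted class."""
    return sum(confusion[i][j] * cost_matrix[i][j]
               for i in range(len(confusion)) for j in range(len(confusion[i])))

cost = [[-1, 100],   # actual +: C(+|+) = -1, C(-|+) = 100
        [ 1,   0]]   # actual -: C(+|-) =  1, C(-|-) =   0
m1 = [[150, 40], [60, 250]]
m2 = [[250, 45], [5, 200]]
print(total_cost(m1, cost))  # 3910
print(total_cost(m2, cost))  # 4255
```

Note how M2 has the higher accuracy but also the higher cost, because it makes more of the expensive false-negative errors.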
Cost-Sensitive Measures
- Precision: p = a / (a + c) = TP / (TP + FP)
- Recall: r = a / (a + b) = TP / (TP + FN)
- F-measure: F = 2rp / (r + p) = 2·TP / (2·TP + FP + FN)
- Precision is biased towards C(Yes|Yes) and C(Yes|No); recall is biased towards C(Yes|Yes) and C(No|Yes); the F-measure is biased towards all except C(No|No).
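The same formulas in code, using the a/b/c/d notation from the confusion matrix (a minimal sketch):

```python
def precision(a, c):
    return a / (a + c)          # TP / (TP + FP)

def recall(a, b):
    return a / (a + b)          # TP / (TP + FN)

def f_measure(a, b, c):
    p, r = precision(a, c), recall(a, b)
    return 2 * r * p / (r + p)  # harmonic mean of precision and recall

print(precision(a=150, c=60), recall(a=150, b=40), f_measure(150, 40, 60))
```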
Methods for Performance Evaluation
- How to obtain a reliable estimate of performance?
- The performance of a model may depend on factors other than the learning algorithm:
  - Class distribution
  - Cost of misclassification
  - Size of the training and test sets
Learning Curve
- A learning curve shows how accuracy changes with varying sample size.
- Requires a sampling schedule for creating the learning curve:
  - Arithmetic sampling (Langley et al.)
  - Geometric sampling (Provost et al.)
- Effect of small sample size: bias in the estimate, variance of the estimate.
Methods of Estimation
- Holdout: reserve 2/3 for training and 1/3 for testing.
- Random subsampling: repeated holdout.
- Cross validation:
  - Partition the data into k disjoint subsets.
  - k-fold: train on k-1 partitions, test on the remaining one.
  - Leave-one-out: k = n.
- Stratified sampling: oversampling vs. undersampling.
- Bootstrap: sampling with replacement.
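A hedged sketch of k-fold cross validation using only the standard library; evaluate() is a hypothetical placeholder standing in for "train a model on the training folds and return its accuracy on the test fold", not a function from the slides.

```python
import random

def k_fold_cross_validation(records, k, evaluate, seed=0):
    """Partition records into k disjoint folds; train on k-1 folds, test on the rest."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    folds = [shuffled[i::k] for i in range(k)]
    scores = []
    for i in range(k):
        test = folds[i]
        train = [r for j, fold in enumerate(folds) if j != i for r in fold]
        scores.append(evaluate(train, test))
    return sum(scores) / k

# Example with a trivial evaluate() that just reports the test-fold fraction
data = list(range(100))
print(k_fold_cross_validation(data, 10, lambda tr, te: len(te) / (len(tr) + len(te))))
```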
Model Evaluation (recap)
- Metrics for performance evaluation: how to evaluate the performance of a model?
- Methods for performance evaluation: how to obtain reliable estimates?
- Methods for model comparison: how to compare the relative performance among competing models?
ROC Curve
- Characterizes a classifier by its (TP rate, FP rate) point:
  - (0, 0): declare everything to be the negative class.
  - (1, 1): declare everything to be the positive class.
  - (1, 0): ideal.
- Diagonal line: random guessing.
  - Below the diagonal line: the prediction is the opposite of the true class.
Using ROC for Model Comparison
- No model consistently outperforms the other: M1 is better for small FPR, M2 is better for large FPR.
- Area under the ROC curve (AUC):
  - Ideal: area = 1
  - Random guess: area = 0.5
Other Classifiers (Chapter 5): Bayesian Classification
- Probabilistic learning: calculate explicit probabilities for a hypothesis; among the most practical approaches to certain types of learning problems.
- Incremental: each training example can incrementally increase or decrease the probability that a hypothesis is correct. Prior knowledge can be combined with observed data.
- Probabilistic prediction: predict multiple hypotheses, weighted by their probabilities.
- Standard: even when Bayesian methods are computationally intractable, they can provide a standard of optimal decision making against which other methods can be measured.
Bayes Theorem: Basics
- Let X be a data sample whose class label is unknown.
- Let H be the hypothesis that X belongs to class C.
- For classification problems, determine P(H|X): the probability that the hypothesis holds given the observed data sample X.
- P(H): prior probability of hypothesis H (i.e., the initial probability before we observe any data; reflects the background knowledge).
- P(X): probability that the sample data is observed.
- P(X|H): probability of observing the sample X, given that the hypothesis holds.
Bayes Theorem (Recap)
- Given training data X, the posterior probability of a hypothesis H follows Bayes' theorem: P(H|X) = P(X|H) P(H) / P(X).
- MAP (maximum a posteriori) hypothesis: h_MAP = argmax_h P(h|X) = argmax_h P(X|h) P(h).
- Practical difficulty: requires initial knowledge of many probabilities, significant computational cost; insufficient data.
Naïve Bayes Classifier
- A simplifying assumption: attributes are conditionally independent given the class, i.e., P(X|Ci) = Π_k P(x_k | Ci).
- For example, the probability of jointly observing two attribute values y1 and y2 given class C is the product of the probabilities of each element taken separately, given the same class: P([y1, y2] | C) = P(y1 | C) × P(y2 | C).
- No dependence relations between attributes; this greatly reduces the computation cost, requiring only class-conditional counts.
- Once the probability P(X|Ci) is known, assign X to the class with maximum P(X|Ci) × P(Ci).
Training Dataset
- Classes: C1: buys_computer = 'yes'; C2: buys_computer = 'no'.
- Data sample X = (age <= 30, income = medium, student = yes, credit_rating = fair).
(The training table itself is not reproduced here; it contains 9 'yes' and 5 'no' records, as used in the next slide.)
Naïve Bayesian Classifier: Example
Compute P(X|Ci) for each class:
- P(age <= 30 | buys_computer = yes) = 2/9 = 0.222
- P(age <= 30 | buys_computer = no) = 3/5 = 0.6
- P(income = medium | buys_computer = yes) = 4/9 = 0.444
- P(income = medium | buys_computer = no) = 2/5 = 0.4
- P(student = yes | buys_computer = yes) = 6/9 = 0.667
- P(student = yes | buys_computer = no) = 1/5 = 0.2
- P(credit_rating = fair | buys_computer = yes) = 6/9 = 0.667
- P(credit_rating = fair | buys_computer = no) = 2/5 = 0.4
For X = (age <= 30, income = medium, student = yes, credit_rating = fair):
- P(X | buys_computer = yes) = 0.222 × 0.444 × 0.667 × 0.667 = 0.044
- P(X | buys_computer = no) = 0.6 × 0.4 × 0.2 × 0.4 = 0.019
Multiplying by the priors P(Ci), we conclude that X belongs to the class buys_computer = yes.
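A minimal Naive Bayes sketch over categorical attributes, following the same estimate-counts-then-multiply recipe (no Laplace smoothing, as in the slides). The helper names are mine, and the tiny dataset below is illustrative only; it is not the book's 14-record buys_computer table.

```python
from collections import Counter, defaultdict

def train_naive_bayes(records, class_attr):
    """Estimate P(Ci) and P(attribute = value | Ci) from categorical records."""
    class_counts = Counter(r[class_attr] for r in records)
    cond_counts = defaultdict(Counter)   # (attribute, class) -> value counts
    for r in records:
        for attr, value in r.items():
            if attr != class_attr:
                cond_counts[(attr, r[class_attr])][value] += 1
    return class_counts, cond_counts

def classify(x, class_counts, cond_counts):
    """Pick the class maximizing P(Ci) * prod_k P(x_k | Ci)."""
    n = sum(class_counts.values())
    best_class, best_score = None, -1.0
    for c, count in class_counts.items():
        score = count / n
        for attr, value in x.items():
            score *= cond_counts[(attr, c)][value] / count
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Illustrative toy data (not the book's table)
data = [
    {"student": "yes", "credit": "fair",      "buys": "yes"},
    {"student": "no",  "credit": "fair",      "buys": "yes"},
    {"student": "no",  "credit": "excellent", "buys": "no"},
    {"student": "yes", "credit": "excellent", "buys": "yes"},
    {"student": "no",  "credit": "fair",      "buys": "no"},
]
priors, conditionals = train_naive_bayes(data, "buys")
print(classify({"student": "yes", "credit": "fair"}, priors, conditionals))  # "yes"
```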
Naïve Bayesian Classifier: Comments
- Advantages:
  - Easy to implement.
  - Good results obtained in most cases.
- Disadvantages:
  - The class-conditional independence assumption can cause a loss of accuracy.
  - In practice, dependencies exist among variables. E.g., hospital patients: profile (age, family history, etc.), symptoms (fever, cough, etc.), disease (lung cancer, diabetes, etc.). Dependencies among these cannot be modeled by the Naïve Bayesian classifier.
- How to deal with these dependencies? Bayesian belief networks.
Classification Using Distance
- Place items in the class to which they are "closest".
- Must determine a distance between an item and a class.
- Classes can be represented by:
  - Centroid: central value.
  - Medoid: representative point.
  - Individual points.
- Algorithm: KNN.
K Nearest Neighbor (KNN)
- The training set includes class labels.
- Examine the K items nearest to the item to be classified.
- The new item is placed in the class with the largest number of close items.
- O(q) for each tuple to be classified (here q is the size of the training set).
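A minimal KNN sketch: for each query, compute distances to all q training items (hence the O(q) cost per tuple noted above) and take a majority vote among the K nearest. The helper names and toy data are mine.

```python
import math
from collections import Counter

def knn_classify(train, query, k):
    """train: list of (feature_vector, label) pairs; query: a feature vector."""
    distances = sorted(
        (math.dist(x, query), label) for x, label in train   # O(q) distance computations
    )
    votes = Counter(label for _, label in distances[:k])      # vote among the k nearest
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((4.0, 4.2), "B"), ((3.8, 4.0), "B")]
print(knn_classify(train, (1.1, 0.9), k=3))  # "A": two of the three nearest are class A
```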
KNN
(Figure: a query point classified by the majority class among its K nearest training points.)