Association Analysis: Basic Concepts and Algorithms
Association Rule Mining
- Given a set of transactions, find rules that predict the occurrence of an item based on the occurrences of other items in the transaction.
- Example of association rules (market-basket transactions):
  - {Diaper} → {Beer}
  - {Beer, Bread} → {Milk}
- Implication means co-occurrence, not causality!
Definition: Frequent Itemset
- Itemset
  - A collection of one or more items, e.g., {Milk, Bread, Diaper}
  - k-itemset: an itemset that contains k items
- Support count (σ)
  - Frequency of occurrence of an itemset
  - E.g., σ({Milk, Bread, Diaper}) = 2
- Support (s)
  - Fraction of transactions that contain an itemset
  - E.g., s({Milk, Bread, Diaper}) = 2/5
- Frequent itemset
  - An itemset whose support is greater than or equal to a minsup threshold
Definition: Association Rule
- Association rule
  - An implication expression of the form X → Y, where X and Y are itemsets
  - Example: {Milk, Diaper} → {Beer}
- Rule evaluation metrics
  - Support (s): fraction of transactions that contain both X and Y
    - s(X → Y) = σ(X ∪ Y) / |T|
  - Confidence (c): measures how often items in Y appear in transactions that contain X
    - c(X → Y) = σ(X ∪ Y) / σ(X)
  - Example: s({Milk, Diaper} → {Beer}) = 2/5 = 0.4, c = 2/3 ≈ 0.67
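To make the two metrics concrete, here is a minimal Python sketch that computes support and confidence over the five-transaction market-basket table used throughout these slides (the transaction contents are reconstructed from the counts the slides quote; function names are illustrative):

```python
# The five market-basket transactions implied by the counts on these slides.
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support_count(itemset, transactions):
    """sigma(X): number of transactions containing every item in X."""
    return sum(1 for t in transactions if itemset <= t)

def support(itemset, transactions):
    """s(X) = sigma(X) / |T|."""
    return support_count(itemset, transactions) / len(transactions)

def confidence(lhs, rhs, transactions):
    """c(X -> Y) = sigma(X u Y) / sigma(X)."""
    return support_count(lhs | rhs, transactions) / support_count(lhs, transactions)

print(support({"Milk", "Diaper", "Beer"}, transactions))       # 0.4
print(confidence({"Milk", "Diaper"}, {"Beer"}, transactions))  # 0.666...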
Association Rule Mining Task
- Given a set of transactions T, the goal of association rule mining is to find all rules having
  - support ≥ minsup threshold
  - confidence ≥ minconf threshold
- Brute-force approach:
  - List all possible association rules
  - Compute the support and confidence for each rule
  - Prune rules that fail the minsup and minconf thresholds
  - Computationally prohibitive!
Mining Association Rules
- Example of rules from the itemset {Milk, Diaper, Beer}:
  - {Milk, Diaper} → {Beer} (s=0.4, c=0.67)
  - {Milk, Beer} → {Diaper} (s=0.4, c=1.0)
  - {Diaper, Beer} → {Milk} (s=0.4, c=0.67)
  - {Beer} → {Milk, Diaper} (s=0.4, c=0.67)
  - {Diaper} → {Milk, Beer} (s=0.4, c=0.5)
  - {Milk} → {Diaper, Beer} (s=0.4, c=0.5)
- Observations:
  - All the above rules are binary partitions of the same itemset: {Milk, Diaper, Beer}
  - Rules originating from the same itemset have identical support but can have different confidence
  - Thus, we may decouple the support and confidence requirements
Mining Association Rules
- Two-step approach:
  1. Frequent itemset generation: generate all itemsets whose support ≥ minsup
  2. Rule generation: generate high-confidence rules from each frequent itemset, where each rule is a binary partitioning of a frequent itemset
- Frequent itemset generation is still computationally expensive
Frequent Itemset Generation
- Given d items, there are 2^d possible candidate itemsets
Frequent Itemset Generation
- Brute-force approach:
  - Each itemset in the lattice is a candidate frequent itemset
  - Count the support of each candidate by scanning the database, matching each transaction against every candidate
  - Complexity ~ O(NMw), where N is the number of transactions, M the number of candidates, and w the maximum transaction width
  - Expensive since M = 2^d!
Computational Complexity
- Given d unique items:
  - Total number of itemsets = 2^d
  - Total number of possible association rules:
    R = 3^d − 2^(d+1) + 1
  - If d = 6, R = 602 rules
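As a sanity check, the closed form can be compared against a brute-force enumeration of all rules X → Y with nonempty, disjoint X and Y; this short sketch (function names are illustrative) confirms R = 602 for d = 6:

```python
from itertools import combinations

def rule_count(d):
    """Closed form from the slide: R = 3^d - 2^(d+1) + 1."""
    return 3**d - 2**(d + 1) + 1

def rule_count_brute(d):
    """Enumerate rules X -> Y with X and Y nonempty and disjoint."""
    items = range(d)
    total = 0
    for k in range(1, d):                       # size of the LHS
        for lhs in combinations(items, k):
            rest = [i for i in items if i not in lhs]
            # any nonempty subset of the remaining items can be the RHS
            total += 2 ** len(rest) - 1
    return total

print(rule_count(6), rule_count_brute(6))  # 602 602
```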
Factors Affecting Complexity
- Choice of minimum support threshold
  - Lowering the support threshold results in more frequent itemsets
  - This may increase the number of candidates and the max length of frequent itemsets
- Dimensionality (number of items) of the data set
  - More space is needed to store the support count of each item
  - If the number of frequent items also increases, both computation and I/O costs may increase
- Size of database
  - Since Apriori makes multiple passes, the run time of the algorithm may increase with the number of transactions
- Average transaction width
  - Transaction width increases with denser data sets
  - This may increase the max length of frequent itemsets and traversals of the hash tree (the number of subsets in a transaction increases with its width)
Frequent Itemset Generation Strategies
- Reduce the number of candidates (M)
  - Complete search: M = 2^d
  - Use pruning techniques to reduce M
- Reduce the number of transactions (N)
  - Reduce the size of N as the size of the itemset increases
  - Used by DHP and vertical-based mining algorithms
- Reduce the number of comparisons (NM)
  - Use efficient data structures to store the candidates or transactions
  - No need to match every candidate against every transaction
Reducing Number of Candidates
- Apriori principle:
  - If an itemset is frequent, then all of its subsets must also be frequent
- The Apriori principle holds due to the following property of the support measure:
  - ∀X, Y: (X ⊆ Y) ⇒ s(X) ≥ s(Y)
  - The support of an itemset never exceeds the support of its subsets
  - This is known as the anti-monotone property of support
Illustrating Apriori Principle
(Figure: itemset lattice in which one itemset is found to be infrequent, so all of its supersets are pruned.)
Illustrating Apriori Principle
- Minimum support count = 3
- Items (1-itemsets), then pairs (2-itemsets), then triplets (3-itemsets); no need to generate candidates involving Coke or Eggs, since they are infrequent
- If every subset is considered: C(6,1) + C(6,2) + C(6,3) = 6 + 15 + 20 = 41 candidates
- With support-based pruning: 6 + 6 + 1 = 13 candidates
Apriori Algorithm
- Method:
  - Let k = 1
  - Generate frequent itemsets of length 1
  - Repeat until no new frequent itemsets are identified:
    - Generate length-(k+1) candidate itemsets from length-k frequent itemsets
    - Prune candidate itemsets containing subsets of length k that are infrequent
    - Count the support of each candidate by scanning the DB
    - Eliminate candidates that are infrequent, leaving only those that are frequent
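A minimal Python sketch of this level-wise method, assuming itemsets are represented as sorted tuples and transactions as sets (the apriori name and minsup_count parameter are illustrative; candidate generation is detailed on the next slide):

```python
from itertools import combinations

def apriori(transactions, minsup_count):
    """Level-wise Apriori: returns a dict itemset -> support count."""
    # k = 1: count single items and keep the frequent ones
    counts = {}
    for t in transactions:
        for item in t:
            counts[(item,)] = counts.get((item,), 0) + 1
    frequent = {s: c for s, c in counts.items() if c >= minsup_count}
    all_frequent = dict(frequent)

    k = 1
    while frequent:
        prev = sorted(frequent)
        # generate (k+1)-candidates by joining itemsets that share
        # their first k-1 items (see the next slide) ...
        candidates = []
        for i in range(len(prev)):
            for j in range(i + 1, len(prev)):
                if prev[i][:-1] != prev[j][:-1]:
                    continue
                cand = prev[i] + (prev[j][-1],)
                # ... and prune candidates with an infrequent k-subset
                if all(sub in frequent for sub in combinations(cand, k)):
                    candidates.append(cand)
        # one database scan counts the support of every candidate
        counts = {c: 0 for c in candidates}
        for t in transactions:
            for c in candidates:
                if t.issuperset(c):
                    counts[c] += 1
        # eliminate infrequent candidates
        frequent = {s: n for s, n in counts.items() if n >= minsup_count}
        all_frequent.update(frequent)
        k += 1
    return all_frequent
```

Run on the five-transaction table from the earlier sketch, apriori(transactions, minsup_count) returns every frequent itemset together with its support count, ready for the rule-generation step.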
How to Generate Candidates?
- Suppose the items in L(k-1) are listed in an order
- Step 1: self-joining L(k-1)

  insert into Ck
  select p.item_1, p.item_2, …, p.item_(k-1), q.item_(k-1)
  from L(k-1) p, L(k-1) q
  where p.item_1 = q.item_1, …, p.item_(k-2) = q.item_(k-2), p.item_(k-1) < q.item_(k-1)

- Step 2: pruning

  forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
      if (s is not in L(k-1)) then delete c from Ck
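The same two steps in Python, assuming L_prev holds the frequent (k−1)-itemsets as sorted tuples (the generate_candidates name is illustrative):

```python
from itertools import combinations

def generate_candidates(L_prev, k):
    """F(k-1) x F(k-1) candidate generation, mirroring the pseudocode above."""
    L_prev = sorted(L_prev)
    Ck = []
    # Step 1: self-join -- p and q agree on their first k-2 items,
    # and p's last item precedes q's last item
    for p, q in combinations(L_prev, 2):
        if p[:-1] == q[:-1] and p[-1] < q[-1]:
            Ck.append(p + (q[-1],))
    # Step 2: prune -- delete c if any (k-1)-subset of c is not frequent
    frequent = set(L_prev)
    return [c for c in Ck
            if all(s in frequent for s in combinations(c, k - 1))]

L3 = [("A","B","C"), ("A","B","D"), ("A","C","D"), ("B","C","D"), ("A","C","E")]
print(generate_candidates(L3, 4))
# [('A','B','C','D')] -- ('A','C','D','E') is pruned
# because its subset ('A','D','E') is not in L3
```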
Challenges of Frequent Pattern Mining
- Challenges
  - Multiple scans of the transaction database
  - Huge number of candidates
  - Tedious workload of support counting for candidates
- Improving Apriori: general ideas
  - Reduce passes of transaction database scans
  - Shrink the number of candidates
  - Facilitate support counting of candidates
Compact Representation of Frequent Itemsets
- Some itemsets are redundant because they have identical support as their supersets
- The number of frequent itemsets can grow exponentially with the number of items
- We therefore need a compact representation
Maximal Frequent Itemset
- An itemset is maximal frequent if none of its immediate supersets is frequent
(Figure: itemset lattice showing the border between frequent and infrequent itemsets; the maximal itemsets lie just inside the border.)
Closed Itemset
- An itemset is closed if none of its immediate supersets has the same support as the itemset
Maximal vs Closed Itemsets
(Figure: itemset lattice annotated with the transaction IDs supporting each itemset; itemsets not supported by any transaction lie at the bottom of the lattice.)
Maximal vs Closed Frequent Itemsets
- Minimum support = 2
(Figure: the same lattice with itemsets marked "closed but not maximal" and "closed and maximal". # Closed = 9, # Maximal = 4.)
Maximal vs Closed Itemsets
- Maximal frequent itemsets ⊆ closed frequent itemsets ⊆ frequent itemsets
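A small sketch that derives the maximal and closed itemsets from a support-count table, assuming freq maps each frequent itemset (as a frozenset) to its support count (the function name is illustrative). It relies on the fact that an immediate superset with the same support as a frequent itemset must itself be frequent:

```python
def maximal_and_closed(freq):
    """freq: dict frozenset -> support count of every frequent itemset."""
    items = set().union(*freq)
    maximal, closed = [], []
    for s, sup in freq.items():
        # frequent immediate supersets of s
        supersets = [s | {i} for i in items - s if s | {i} in freq]
        if not supersets:                         # no frequent superset
            maximal.append(s)
        if all(freq[t] < sup for t in supersets):  # no superset ties the support
            closed.append(s)
    return maximal, closed

freq = {frozenset("A"): 4, frozenset("B"): 5, frozenset("AB"): 4}
print(maximal_and_closed(freq))
# maximal: [{'A','B'}]; closed: [{'B'}, {'A','B'}] -- {'A'} is not closed
# because its superset {'A','B'} has the same support
```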
Alternative Methods for Frequent Itemset Generation
- Representation of database
  - Horizontal data layout (each transaction lists its items) vs vertical data layout (each item lists its transaction IDs)
ECLAT
- For each item, store a list of transaction ids (tids), called a TID-list
ECLAT
- Determine the support of any k-itemset by intersecting the tid-lists of two of its (k-1)-subsets
- Three traversal approaches: top-down, bottom-up, and hybrid
- Advantage: very fast support counting
- Disadvantage: intermediate tid-lists may become too large for memory
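A minimal sketch of vertical support counting, with illustrative tid-lists (ECLAT proper intersects the tid-lists of two (k−1)-subsets incrementally; intersecting all item tid-lists, as here, yields the same result):

```python
# Vertical layout: each item maps to the set of tids containing it
# (illustrative tid-lists for a small transaction database).
tidlists = {
    "A": {1, 4, 5, 6, 7, 8, 9},
    "B": {1, 2, 5, 7, 8, 10},
}

def itemset_tids(itemset, tidlists):
    """tid-list of an itemset = intersection of its items' tid-lists."""
    result = None
    for item in itemset:
        result = tidlists[item] if result is None else result & tidlists[item]
    return result

ab = itemset_tids({"A", "B"}, tidlists)
print(sorted(ab), len(ab))  # [1, 5, 7, 8] 4 -> support count of {A, B}
```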
FP-growth Algorithm
- Use a compressed representation of the database called an FP-tree
- Once an FP-tree has been constructed, use a recursive divide-and-conquer approach to mine the frequent itemsets
FP-tree Construction
(Figure: after reading TID=1, the tree is the single path null → A:1 → B:1; after reading TID=2, a second branch null → B:1 → C:1 → D:1 is added.)
FP-Tree Construction
(Figure: the complete FP-tree for the transaction database, with node counts such as A:7, B:5, B:3, C:3, C:1, D:1, E:1, plus a header table whose pointers link the nodes of each item.)
- Pointers are used to assist frequent itemset generation
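A compact FP-tree builder, assuming transactions are iterables of items (class and function names are illustrative; ties in the support ordering are broken alphabetically):

```python
from collections import defaultdict

class Node:
    def __init__(self, item, parent):
        self.item, self.parent = item, parent
        self.count = 1
        self.children = {}

def build_fptree(transactions, minsup_count):
    # first pass: count item frequencies and drop infrequent items
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[item] += 1
    freq = {i: c for i, c in counts.items() if c >= minsup_count}

    root = Node(None, None)
    root.count = 0
    header = defaultdict(list)  # item -> its nodes (the slide's pointer chains)
    # second pass: insert each transaction, items in decreasing support order
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in freq),
                           key=lambda i: (-freq[i], i)):
            child = node.children.get(item)
            if child is None:              # start a new branch
                child = Node(item, node)
                node.children[item] = child
                header[item].append(child)
            else:                          # shared prefix: increment the count
                child.count += 1
            node = child
    return root, header
```

Shared prefixes collapse into shared paths, which is what compresses the database; the header table is what the mining phase follows to collect each item's conditional pattern base.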
FP-growth
- Conditional pattern base for D:
  P = {(A:1, B:1, C:1), (A:1, B:1), (A:1, C:1), (A:1), (B:1, C:1)}
- Recursively apply FP-growth on P
- Frequent itemsets found (with support > 1): AD, BD, CD, ACD, BCD
Rule Generation
- Given a frequent itemset L, find all nonempty subsets f ⊂ L such that f → L − f satisfies the minimum confidence requirement
- If {A, B, C, D} is a frequent itemset, the candidate rules are:
  ABC → D, ABD → C, ACD → B, BCD → A,
  A → BCD, B → ACD, C → ABD, D → ABC,
  AB → CD, AC → BD, AD → BC, BC → AD, BD → AC, CD → AB
- If |L| = k, then there are 2^k − 2 candidate association rules (ignoring L → ∅ and ∅ → L)
Rule Generation
- How to efficiently generate rules from frequent itemsets?
  - In general, confidence does not have an anti-monotone property: c(ABC → D) can be larger or smaller than c(AB → D)
  - But the confidence of rules generated from the same itemset does have an anti-monotone property
  - E.g., for L = {A, B, C, D}: c(ABC → D) ≥ c(AB → CD) ≥ c(A → BCD)
  - Confidence is anti-monotone w.r.t. the number of items on the RHS of the rule
Rule Generation for Apriori Algorithm
(Figure: lattice of rules generated from one frequent itemset; once a low-confidence rule is found, all rules below it in the lattice are pruned.)
Rule Generation for Apriori Algorithm
- A candidate rule is generated by merging two rules that share the same prefix in the rule consequent
- E.g., join(CD → AB, BD → AC) produces the candidate rule D → ABC
- Prune rule D → ABC if its subset rule AD → BC does not have high confidence
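A sketch of this consequent-merging scheme, assuming support_count is a callable returning σ for a frozenset (names are illustrative). Merging two surviving consequents that share all but one item reproduces joins such as join(CD → AB, BD → AC) = D → ABC, and because only consequents whose rule met minconf survive to be merged, the confidence-based pruning above falls out automatically:

```python
from itertools import combinations

def gen_rules(itemset, support_count, minconf):
    """Generate high-confidence rules from one frequent itemset,
    growing rule consequents level by level."""
    itemset = frozenset(itemset)
    sup = support_count(itemset)
    rules = []
    consequents = [frozenset([i]) for i in itemset]  # level 1: single items
    while consequents:
        kept = []
        for rhs in consequents:
            lhs = itemset - rhs
            if not lhs:                    # skip the empty-LHS rule
                continue
            conf = sup / support_count(lhs)
            if conf >= minconf:
                rules.append((lhs, rhs, conf))
                kept.append(rhs)           # only high-confidence RHSs survive
        # merge surviving consequents that share all but one item
        nxt = set()
        for a, b in combinations(kept, 2):
            merged = a | b
            if len(merged) == len(a) + 1:
                nxt.add(merged)
        consequents = nxt
    return rules

# e.g., with the counts dict from the earlier Apriori sketch:
# rules = gen_rules(("Beer", "Diaper", "Milk"),
#                   lambda s: counts[tuple(sorted(s))], 0.6)
```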
References
- R. Agrawal, T. Imielinski, and A. Swami. Mining association rules between sets of items in large databases. SIGMOD, 207-216, 1993.
- R. Agrawal and R. Srikant. Fast algorithms for mining association rules. VLDB, 487-499, 1994.
- R. J. Bayardo. Efficiently mining long patterns from databases. SIGMOD, 85-93, 1998.