Vector Space Model
Rong Jin
Basic Issues in a Retrieval Model
- How should text objects be represented?
- How should the query be refined according to users' feedback?
- What similarity function should be used?
Basic Issues in IR
- How to represent queries?
- How to represent documents?
- How to compute the similarity between documents and queries?
- How to utilize users' feedback to enhance retrieval performance?
IR: Formal Formulation
- Vocabulary V = {w1, w2, …, wn} of a language
- Query q = q1, …, qm, where qi ∈ V
- Document di = (di,1, …, di,mi), where di,j ∈ V
- Collection C = {d1, …, dk}
- Set of relevant documents R(q) ⊆ C
  - Generally unknown and user-dependent
  - The query is a "hint" about which documents are in R(q)
- Task: compute R'(q), an approximation of R(q)
Computing R(q)
- Strategy 1: document selection
  - Classification function f(d, q) ∈ {0, 1}
    - Outputs 1 for relevant, 0 for irrelevant
  - R(q) is determined as the set {d ∈ C | f(d, q) = 1}
  - The system must decide whether a document is relevant or not ("absolute relevance")
  - Example: Boolean retrieval
Document Selection Approach
[Figure: the classifier C(q) partitions the collection into predicted relevant (+) and irrelevant (-) documents, which only approximates the true R(q)]
Computing R(q)
- Strategy 2: document ranking
  - Similarity function f(d, q)
    - Outputs a similarity score between document d and query q
  - Cutoff θ: the minimum similarity for a document to be considered relevant
  - R(q) is determined as the set {d ∈ C | f(d, q) > θ}
  - The system must decide whether one document is more likely to be relevant than another ("relative relevance")
Document Selection vs. Ranking
[Figure: document selection makes a hard binary split of the collection, while document ranking orders the documents by score (e.g. d1: 0.98, d2: 0.95, …, d9: 0.21) and obtains R'(q) by cutting off the ranked list]
Ranking Is Often Preferred
- A similarity function is more general than a classification function
- A classifier is unlikely to be accurate
  - Ambiguous information needs, short queries
- Relevance is a subjective concept
  - Absolute relevance vs. relative relevance
Probability Ranking Principle
- As stated by Cooper: "If a reference retrieval system's response to each request is a ranking of the documents in the collection in order of decreasing probability of usefulness to the user who submitted the request, where the probabilities are estimated as accurately as possible on the basis of whatever data have been made available to the system for this purpose, the overall effectiveness of the system to its users will be the best that is obtainable on the basis of those data."
- Ranking documents by probability of relevance maximizes the utility of IR systems
Vector Space Model
- Any text object can be represented by a term vector
  - Examples: documents, queries, sentences, …
  - A query is viewed as a short document
- Similarity is determined by the relationship between two vectors
  - e.g., the cosine of the angle between the vectors, or the distance between them
- The SMART system:
  - Developed at Cornell University, 1960-1999
  - Still widely used
Vector Space Model: Illustration

         Java   Starbucks   Microsoft
  D1      1        1            0
  D2      0        1            1
  D3      1        0            1
  D4      1        1            1
  Query   1       0.1           1
Vector Space Model: Illustration
[Figure: the documents D1-D4 and the query plotted as vectors in a 3-D space with axes Java, Starbucks, and Microsoft; which document is closest to the query?]
Vector Space Model: Similarity
- Represent both documents and queries by word-histogram vectors
  - n: the number of unique words
  - A query q = (q1, q2, …, qn), where qi is the number of occurrences of the i-th word in the query
  - A document dk = (dk,1, dk,2, …, dk,n), where dk,i is the number of occurrences of the i-th word in the document
- The similarity of a query q to a document dk is measured through the two vectors q and dk
Some Background in Linear Algebra
- Dot product (scalar product): q · dk = Σi qi dk,i
- Example: for q = (1, 0.1, 1) and d1 = (1, 1, 0) from the illustration, q · d1 = 1·1 + 0.1·1 + 1·0 = 1.1
- The dot product itself can be used to measure similarity
Some Background in Linear Algebra
- Length of a vector: |q| = √(Σi qi²)
- Angle between two vectors q and dk: cos θ = (q · dk) / (|q| |dk|)
Some Background in Linear Algebra
- Example: for q = (1, 0.1, 1) and d1 = (1, 1, 0), cos θ = 1.1 / (√2.01 · √2) ≈ 0.55
- Similarity can be measured by the angle between the two vectors: the smaller the angle (the larger its cosine), the more similar the vectors
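The dot product, vector length, and angle above can be checked numerically. A minimal sketch, using the query and D1 vectors from the earlier illustration table:

```python
import math

q  = [1, 0.1, 1]   # query vector over (Java, Starbucks, Microsoft)
d1 = [1, 1, 0]     # document D1 from the illustration

# Dot product: q . d = sum_i q_i * d_i
dot = sum(qi * di for qi, di in zip(q, d1))

# Vector lengths: |v| = sqrt(sum_i v_i^2)
len_q  = math.sqrt(sum(x * x for x in q))
len_d1 = math.sqrt(sum(x * x for x in d1))

# Cosine of the angle between q and d1
cos_theta = dot / (len_q * len_d1)

print(dot)                  # 1.1
print(round(cos_theta, 2))  # 0.55
```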
Vector Space Model: Similarity
- Given
  - A query q = (q1, q2, …, qn), where qi is the occurrence count of the i-th word in the query
  - A document dk = (dk,1, dk,2, …, dk,n), where dk,i is the occurrence count of the i-th word in the document
- Similarity of q to dk: sim(q, dk) = cos θ = (q · dk) / (|q| |dk|)
Vector Space Model: Similarity
- Dot-product similarity: sim(q, dk) = q · dk = Σi qi dk,i
- Cosine similarity: sim(q, dk) = (Σi qi dk,i) / (√(Σi qi²) · √(Σi dk,i²))
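Applying the cosine formula to the document and query vectors from the earlier illustration table ranks the documents:

```python
import math

def cosine(u, v):
    """Cosine similarity sim(q, d) = (q . d) / (|q| |d|)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

query = [1, 0.1, 1]  # (Java, Starbucks, Microsoft)
docs = {"D1": [1, 1, 0], "D2": [0, 1, 1],
        "D3": [1, 0, 1], "D4": [1, 1, 1]}

ranking = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
print(ranking)  # D3 ranks first: it matches Java and Microsoft, the heavily weighted query terms
```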
Term Weighting
- wk,i: the importance of the i-th word for document dk
- Why weighting?
  - Some query terms carry more information than others
- TF·IDF weighting
  - TF (term frequency): within-document frequency
  - IDF (inverse document frequency)
  - TF normalization: avoids a bias toward long documents
TF Weighting
- A term is important if it occurs frequently in a document
- Formulas, with f(t, d) the number of occurrences of word t in document d:
  - Raw term frequency: TF(t, d) = f(t, d)
  - Maximum-frequency normalization, e.g. TF(t, d) = 0.5 + 0.5 · f(t, d) / maxw f(w, d)
TF Weighting
- A term is important if it occurs frequently in a document
- "Okapi/BM25 TF" normalization:
  TF(t, d) = (k + 1) · f(t, d) / (f(t, d) + k · (1 − b + b · doclen(d) / avg_doclen))
  - doclen(d): the length of document d
  - avg_doclen: the average document length in the collection
  - k, b: predefined constants
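A sketch of the BM25 TF formula above; the constants k = 1.2 and b = 0.75 are common defaults assumed here, not values from the slide:

```python
def bm25_tf(f, doclen, avg_doclen, k=1.2, b=0.75):
    """Okapi/BM25 TF: saturates with raw frequency and penalizes long documents.

    f: raw term frequency f(t, d); k, b: constants (common defaults assumed).
    """
    return (k + 1) * f / (f + k * (1 - b + b * doclen / avg_doclen))

# Saturation: doubling the raw frequency far less than doubles the weight
w1 = bm25_tf(1, doclen=100, avg_doclen=100)
w2 = bm25_tf(2, doclen=100, avg_doclen=100)
# Length penalty: the same raw frequency counts for less in a longer document
w_long = bm25_tf(1, doclen=300, avg_doclen=100)
print(w1 < w2 < 2 * w1, w_long < w1)  # True True
```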
TF Normalization
- Why?
  - Document length varies
  - Repeated occurrences are less informative than the first occurrence
- Two views of document length
  - A document is long because it uses more words
  - A document is long because it has more content
- Generally penalize long documents, but avoid over-penalizing them (pivoted normalization)
TF Normalization
[Figure: normalized TF plotted against raw TF; "pivoted normalization" grows sublinearly, so repeated occurrences add progressively less weight]
IDF Weighting
- A term is discriminative if it occurs in only a few documents
- Formula: IDF(t) = 1 + log(n/m)
  - n: total number of documents
  - m: number of documents containing term t (document frequency)
- Can be interpreted as a form of mutual information
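The IDF formula can be sketched directly; the natural logarithm is assumed here, since the slide does not fix the base:

```python
import math

def idf(n, m):
    """IDF(t) = 1 + log(n / m): n = total docs, m = docs containing term t."""
    return 1 + math.log(n / m)

# A rare term is far more discriminative than a ubiquitous one
print(round(idf(10000, 10), 2))     # 7.91 (rare term)
print(round(idf(10000, 10000), 2))  # 1.0  (term in every document)
```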
TF-IDF Weighting
- The importance of a term t to a document d: weight(t, d) = TF(t, d) · IDF(t)
  - Frequent in the document → high TF → high weight
  - Rare in the collection → high IDF → high weight
- In the simplest variant, both qi and dk,i are binary values, i.e. the presence or absence of a word in the query and the document
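Putting TF and IDF together on a hypothetical three-document collection (the counts below are illustrative, not from the slides):

```python
import math

# Toy collection: each dict maps term -> raw within-document frequency
docs = [
    {"java": 3, "code": 2},
    {"coffee": 1, "code": 1},
    {"code": 4},
]
n = len(docs)

def weight(term, doc):
    """weight(t, d) = TF(t, d) * IDF(t), with raw TF and IDF(t) = 1 + log(n/m)."""
    tf = doc.get(term, 0)                  # raw within-document frequency
    m = sum(1 for d in docs if term in d)  # document frequency of the term
    idf = 1 + math.log(n / m) if m else 0
    return tf * idf

# "code" appears in every document (low IDF); "java" is rarer (high IDF)
print(weight("java", docs[0]) > weight("code", docs[0]))  # True
```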
Problems with the Vector Space Model
- Still limited to word-based matching
  - A document will never be retrieved if it does not contain any query word
  - How can the vector space model be modified to address this?
Choice of Bases
[Figure sequence: a document D, a query Q, and a document D1 drawn as vectors over the axes Java, Starbucks, and Microsoft; rotating the bases maps D to D' and Q to Q', so that similarity is measured along new, concept-like directions]
Choosing Bases for VSM
- Modify the bases of the vector space
  - Each basis is a concept: a group of words
  - Every document is a vector in the concept space (a mixture of concepts)
[Table: two documents A1 and A2 represented over terms c1-c5 and m1-m4; each document uses terms from only one group]
- How should the "basic concepts" be defined or selected?
  - In the VS model, each term is viewed as an independent concept
Basics: Matrix Multiplication
- For an m×n matrix A and an n×p matrix B, the product C = AB is the m×p matrix with entries Cij = Σk Aik Bkj
Linear Algebra Basics: Eigen-Analysis
- Eigenvectors (for a square m×m matrix S): S v = λ v, where v is a (right) eigenvector and λ the corresponding eigenvalue
- Example: S = [[2, 1], [1, 2]] has eigenvalues λ = 3 and λ = 1, with eigenvectors (1, 1) and (1, −1)
Linear Algebra Basics: Eigen-Decomposition
- S = U Λ U^T
- This holds for any symmetric square matrix S
- The columns of U are the eigenvectors of S
- The diagonal elements of Λ are the eigenvalues of S
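The decomposition S = U Λ U^T can be verified numerically; a minimal sketch with NumPy, using an illustrative symmetric matrix:

```python
import numpy as np

# A small symmetric matrix (values chosen for illustration)
S = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# For a symmetric matrix, eigh returns real eigenvalues and orthonormal eigenvectors
eigenvalues, U = np.linalg.eigh(S)

# Each column u of U satisfies S u = lambda u
for lam, u in zip(eigenvalues, U.T):
    assert np.allclose(S @ u, lam * u)

# Reconstruction: S = U diag(lambda) U^T
assert np.allclose(U @ np.diag(eigenvalues) @ U.T, S)
print(eigenvalues)  # [1. 3.]
```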
Singular Value Decomposition
- For an m×n matrix A of rank r there exists a factorization (singular value decomposition, SVD): A = U Σ V^T
  - U is m×m; its columns are the left singular vectors
  - V is n×n; its columns are the right singular vectors
  - Σ is an m×n diagonal matrix whose entries are the singular values
Singular Value Decomposition
[Figure: the dimensions and sparseness of the SVD factors: U is m×m, Σ is m×n with only r nonzero diagonal entries, and V^T is n×n]
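The factor dimensions can be checked with NumPy's SVD routine; the matrix here is random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))  # an m x n matrix with m = 5, n = 3

# Full SVD: A = U @ Sigma @ V^T with U (m x m), Sigma (m x n), V (n x n)
U, s, Vt = np.linalg.svd(A, full_matrices=True)
Sigma = np.zeros((5, 3))
Sigma[:3, :3] = np.diag(s)  # only min(m, n) diagonal entries are nonzero

assert U.shape == (5, 5) and Vt.shape == (3, 3)
assert np.allclose(U @ Sigma @ Vt, A)
# Singular values come out sorted in decreasing order
assert np.all(s[:-1] >= s[1:])
```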
Low-Rank Approximation
- Approximate the matrix using only the k largest singular values and the corresponding singular vectors: A ≈ Ak = Uk Σk Vk^T
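A sketch of truncating the SVD to the k largest singular values, on an illustrative random matrix; the Frobenius error of the truncation equals the norm of the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))

U, s, Vt = np.linalg.svd(X, full_matrices=False)

k = 2  # keep the k largest singular values / vectors
X_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# The rank drops to k, and the Frobenius error equals the discarded singular values
assert np.linalg.matrix_rank(X_k) == k
err = np.linalg.norm(X - X_k, "fro")
assert np.isclose(err, np.sqrt(np.sum(s[k:] ** 2)))
```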
Latent Semantic Indexing (LSI)
- Computation: singular value decomposition (SVD), keeping the m largest singular values and singular vectors, where m is the number of concepts
- The left singular vectors represent the concepts in term space; the right singular vectors represent the concepts in document space
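An end-to-end LSI sketch: build a term-document matrix, truncate its SVD to m concepts, fold a query into the concept space, and compare it to the documents there. The terms, counts, and query below are all illustrative assumptions, and the standard folding formula q̂ = Σ⁻¹ Uᵀ q is used:

```python
import numpy as np

# Toy term-document matrix: rows = terms, columns = documents (counts assumed)
terms = ["java", "coffee", "microsoft", "windows"]
X = np.array([[2.0, 0.0, 1.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 1.0]])

# LSI: truncated SVD with the m largest singular values (m concepts)
m = 2
U, s, Vt = np.linalg.svd(X, full_matrices=False)
U_m, s_m, Vt_m = U[:, :m], s[:m], Vt[:m, :]

# Fold a query into the concept space: q_hat = Sigma^-1 U^T q
q = np.array([1.0, 0.0, 1.0, 0.0])  # query: "java microsoft"
q_hat = np.diag(1 / s_m) @ U_m.T @ q
doc_hat = Vt_m                       # columns = documents in concept space

# Rank documents by cosine similarity in the concept space
sims = [q_hat @ doc_hat[:, j] /
        (np.linalg.norm(q_hat) * np.linalg.norm(doc_hat[:, j]))
        for j in range(X.shape[1])]
best = int(np.argmax(sims))
print(best)
```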
Finding “Good Concepts”
SVD: Example (m = 2)
[Figure: a term-document matrix X factored as X ≈ U Σ V^T, keeping the two largest singular values, i.e. two concepts]
SVD: Orthogonality
- The singular vectors are mutually orthogonal: u1 · u2 = 0 and v1 · v2 = 0
SVD: Properties
- rank(S): the maximum number of row or column vectors of S that are linearly independent
- Example: the original matrix has rank(X) = 9, while its truncated reconstruction has rank(X') = 2
- SVD produces the best low-rank approximation (in the least-squares sense)
SVD: Visualization
[Figure: the term-document matrix X written out as the product of its SVD factors]
SVD: Visualization
- SVD tries to preserve the Euclidean distances between document vectors