Lecture 4: IR System Elements (cont.)
Principles of Information Retrieval
Prof. Ray Larson
University of California, Berkeley School of Information
IS 240 – Spring 2010 (2010.02.01)

Review
• Elements of IR Systems
  – Collections, queries
  – Text processing and the Zipf distribution
  – Stemmers and morphological analysis (cont…)
  – Inverted file indexes

Queries
• A query is some expression of a user’s information need
• Queries can take many forms
  – A natural-language description of the need
  – A formal query in a query language
• Queries may not be accurate expressions of the information need
  – A conversation with a person differs from a formal query expression

Collections of Documents…
• Documents
  – A document is a representation of some aggregation of information, treated as a unit
• Collection
  – A collection is some physical or logical aggregation of documents
• Let’s take the simplest case and say we are dealing with a computer file of plain ASCII text, where each line represents the “unit”, or document

How to search that collection?
• Manually?
  – cat, more
• Scan for strings?
  – grep
• Extract individual words to search???
  – “tokenize” (a Unix pipeline)
    • tr -sc 'A-Za-z' '\012' < TEXTFILE | sort | uniq -c
  – See “Unix for Poets” by Ken Church
• Put it in a DBMS and use pattern matching there…
  – assuming the lines are smaller than the text-size limits of the DBMS
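The tr | sort | uniq pipeline above can also be sketched in Python; this is a rough equivalent (the function name word_counts is mine, not from the lecture):

```python
import re
from collections import Counter

def word_counts(text):
    # Rough equivalent of: tr -sc 'A-Za-z' '\012' < TEXTFILE | sort | uniq -c
    # Split on runs of non-letters, drop empty strings, count distinct tokens.
    tokens = [t for t in re.split(r"[^A-Za-z]+", text) if t]
    return Counter(tokens)

line = "Now is the time for all good men to come to the aid of their country"
print(word_counts(line)["to"])   # 2 ("to come" and "to the")
```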

What about VERY big files?
• Scanning becomes a problem
• The nature of the problem starts to change as the scale of the collection increases
• A variant of Parkinson’s Law that applies to databases:
  – Data expands to fill the space available to store it

Document Processing Steps
(figure slide)

Structure of an IR System (adapted from Soergel, p. 19)
• Search line: interest profiles & queries → formulating the query in terms of descriptors → Store 1: profiles / search requests
• Storage line: documents & data → indexing (descriptive and subject) → Store 2: document representations
• Comparison / matching of the two stores yields potentially relevant documents
• Rules of the game = rules for subject indexing + thesaurus (which consists of a lead-in vocabulary and an indexing language)

Query Processing
• To correctly match queries and documents, queries must go through the same text processing steps as the documents did when they were stored
• In effect, the query is treated as if it were a document
• Exceptions (of course) include things like structured query languages, which must be parsed to extract the search terms and requested operations from the query
  – The search terms must still go through the same text processing steps as the documents…
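The idea that queries and documents share one analysis chain can be sketched as follows (a minimal illustration; a real system would also stem and apply the same stopword list):

```python
import re

def analyze(text):
    # One shared pipeline for documents and queries:
    # tokenize on letters, then case-fold.
    return [t.lower() for t in re.findall(r"[A-Za-z]+", text)]

doc_terms = analyze("It was a dark and stormy night in the country manor.")
query_terms = analyze("Country MANOR")   # the query takes the identical steps
print(all(q in doc_terms for q in query_terms))   # True
```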

Steps in Query Processing
• Parsing and analysis of the query text (same as done for the document text)
  – Morphological analysis
  – Statistical analysis of text

Stemming and Morphological Analysis
• Goal: “normalize” similar words
• Morphology (“form” of words)
  – Inflectional morphology
    • E.g., inflected verb endings and noun number
    • Never changes the grammatical class
      – dog, dogs
      – tengo, tienes, tiene, tenemos, tienen
  – Derivational morphology
    • Derives one word from another
    • Often changes the grammatical class
      – build, building; health, healthy

Plotting Word Frequency by Rank
• Say for a text with 100 tokens
• Count:
  – How many tokens occur 1 time (50)
  – How many tokens occur 2 times (20) …
  – How many tokens occur 7 times (10) …
  – How many tokens occur 12 times (1)
  – How many tokens occur 14 times (1)
• So things that occur most often share the highest rank (rank 1)
• Things that occur the fewest times have the lowest rank (rank n)
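The count-then-rank procedure above can be sketched directly (the helper name rank_frequency is mine, not from the slides):

```python
from collections import Counter

def rank_frequency(tokens):
    # Rank 1 = most frequent token; ties broken by first occurrence.
    counts = Counter(tokens)
    return [(rank, term, freq)
            for rank, (term, freq) in enumerate(counts.most_common(), start=1)]

tokens = "the the the of of a".split()
print(rank_frequency(tokens))   # [(1, 'the', 3), (2, 'of', 2), (3, 'a', 1)]
```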

Many similar distributions…
• Words in a text collection
• Library book checkout patterns
• Bradford’s and Lotka’s laws
• Incoming web page requests (Nielsen)
• Outgoing web page requests (Cunha & Crovella)
• Document size on the web (Cunha & Crovella)

Zipf Distribution (linear and log scale)
(figure slide)
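As a reminder (the constant C is collection-dependent, and this formula is not on the slide itself), Zipf's law says frequency falls off roughly as the inverse of rank:

```latex
f(r) \approx \frac{C}{r}, \qquad \log f(r) \approx \log C - \log r
```

which is why the rank-frequency curve is approximately a straight line on the log-log plot.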

Resolving Power (van Rijsbergen 79)
• The most frequent words are not the most descriptive

Other Models
• Poisson distribution
• 2-Poisson model
• Negative binomial
• Katz K-mixture
  – See Church (SIGIR 1995)
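For reference, Katz's K-mixture (as usually presented, e.g. in Manning & Schütze; the slide itself gives no formula) models the probability that a word occurs k times in a document as:

```latex
P(k) \;=\; (1-\alpha)\,\delta_{k,0} \;+\; \frac{\alpha}{\beta+1}\left(\frac{\beta}{\beta+1}\right)^{k}
```

where \(\delta_{k,0}\) is 1 iff \(k = 0\), and \(\alpha\) and \(\beta\) are estimated from the word's collection and document frequencies.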




Simple “S” Stemming (Harman, JASIS, Jan. 1991)
• IF a word ends in “ies”, but not “eies” or “aies”
  – THEN “ies” → “y”
• IF a word ends in “es”, but not “aes”, “ees”, or “oes”
  – THEN “es” → “e”
• IF a word ends in “s”, but not “us” or “ss”
  – THEN “s” → NULL
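The three rules translate almost directly into code; a minimal sketch of the S stemmer, applying only the first rule whose condition matches:

```python
def s_stem(word):
    # Rule 1: "ies" -> "y", unless the word ends in "eies" or "aies"
    if word.endswith("ies") and not word.endswith(("eies", "aies")):
        return word[:-3] + "y"
    # Rule 2: "es" -> "e", unless the word ends in "aes", "ees", or "oes"
    if word.endswith("es") and not word.endswith(("aes", "ees", "oes")):
        return word[:-1]
    # Rule 3: "s" -> NULL, unless the word ends in "us" or "ss"
    if word.endswith("s") and not word.endswith(("us", "ss")):
        return word[:-1]
    return word

print([s_stem(w) for w in ["ponies", "cats", "glass", "focus"]])
# ['pony', 'cat', 'glass', 'focus']
```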

Stemmer Examples

The SMART stemmer:
% tstem ate
% tstem apples
appl
% tstem formulae
formul
% tstem appendices
appendix
% tstem implementation
imple
% tstem glasses
glass

The Porter stemmer:
% pstem ate
at
% pstem apples
appl
% pstem formulae
formula
% pstem appendices
appendic
% pstem implementation
implement
% pstem glasses
glass

The IAGO! stemmer:
% stem
ate|2 eat|2 apples|1 apple|1 formula|1 appendices|1 appendix|1 implementation|1 glasses|1

Errors Generated by the Porter Stemmer (Krovetz 93)

Too aggressive:          Too timid:
organization/organ       european/europe
policy/police            cylinder/cylindrical
execute/executive        create/creation
arm/army                 search/searcher

Automated Methods
• Stemmers:
  – Very dumb rules work well (for English)
  – Porter stemmer: iteratively remove suffixes
  – Improvement: pass results through a lexicon
  – Newer stemmers are configurable (Snowball)
  – Demo…
• Powerful multilingual tools exist for morphological analysis
  – PCKimmo, Xerox lexical technology
    • Require a grammar and dictionary
    • Use “two-level” automata
  – Wordnet “morpher”

Wordnet
• Type “wn word” on a machine where wordnet is installed…
• Large exception dictionary (inflected form(s) followed by the base form), e.g.:
  aardwolves aardwolf
  abaci abacus
  abacuses
  abbacies abbacy
  abhenries abhenry
  abilities ability
  abkhaz
  abnormalities abnormality
  aboideaus aboideaux aboideau
  aboiteaus aboiteaux aboiteau
  abos abo
  abscissae abscissas abscissa
  absurdities absurdity
  …
• Demo

Using NLP
• Strzalkowski (in Reader)

Text → NLP (TAGGER → PARSER) → NLP representation → TERMS → Dbase search

Using NLP

INPUT SENTENCE
The former Soviet President has been a local hero ever since a Russian tank invaded Wisconsin.

TAGGED SENTENCE
The/dt former/jj Soviet/jj President/nn has/vbz been/vbn a/dt local/jj hero/nn ever/rb since/in a/dt Russian/jj tank/nn invaded/vbd Wisconsin/np ./per
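The word/tag notation can be reproduced with a toy lookup tagger; the mini-lexicon below is hand-made for this one example (real taggers like the one used here are statistical):

```python
# Hand-made mini-lexicon covering only this example sentence fragment;
# unknown words default to nn. This is an illustration of the output
# format, not a real part-of-speech tagger.
LEXICON = {"the": "dt", "former": "jj", "soviet": "jj", "president": "nn",
           "a": "dt", "russian": "jj", "tank": "nn", "invaded": "vbd"}

def tag(sentence):
    # Emit the same word/tag format used on the slide.
    return " ".join(f"{w}/{LEXICON.get(w.lower(), 'nn')}" for w in sentence.split())

print(tag("a Russian tank invaded"))   # a/dt Russian/jj tank/nn invaded/vbd
```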

Using NLP

TAGGED & STEMMED SENTENCE
the/dt former/jj soviet/jj president/nn have/vbz be/vbn a/dt local/jj hero/nn ever/rb since/in a/dt russian/jj tank/nn invade/vbd wisconsin/np ./per

Using NLP

PARSED SENTENCE
[assert
 [[perf [have]][[verb[BE]]
  [subject [np[n PRESIDENT][t_pos THE] [adj[FORMER]][adj[SOVIET]]]]
  [adv EVER]
  [sub_ord[SINCE [[verb[INVADE]]
   [subject [np [n TANK][t_pos A] [adj [RUSSIAN]]]]
   [object [np [name [WISCONSIN]]]]]

Using NLP

EXTRACTED TERMS & WEIGHTS
president         2.623519    soviet            5.416102
president+soviet  11.556747   president+former  14.594883
hero              7.896426    hero+local        14.314775
invade            8.435012    tank              6.848128
tank+invade       17.402237   tank+russian      16.030809
russian           7.383342    wisconsin         7.785689

Same Sentence, Different System: Enju Parser
(table slide: the Enju parser’s output for the same sentence, one row per dependency, giving each word and its base form with POS tags, the head word it attaches to, and the relation label: ROOT, ARG1, ARG2, or MOD)

Other Considerations
• Church (SIGIR 1995) looked at correlations between forms of words in texts

Assumptions in IR
• Statistical independence of terms
• Dependence approximations

Statistical Independence
• Two events x and y are statistically independent if the product of the probabilities of their happening individually equals the probability of their happening together
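In symbols, the definition above reads:

```latex
P(x \cap y) \;=\; P(x)\,P(y)
```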

Statistical Independence and Dependence
• What are examples of things that are statistically independent?
• What are examples of things that are statistically dependent?

Statistical Independence vs. Statistical Dependence
• How likely is a red car to drive by, given we’ve seen a black one?
• How likely is the word “ambulance” to appear, given that we’ve seen “car accident”?
• The colors of cars driving by are independent (although more frequent colors are more likely)
• Words in text are not independent (although, again, more frequent words are more likely)

Lexical Associations
• Subjects write the first word that comes to mind
  – doctor/nurse; black/white (Palermo & Jenkins 64)
• Text corpora yield similar associations
• One measure: mutual information (Church and Hanks 89)
• If word occurrences were independent, the numerator and denominator would be equal (if measured across a large collection)
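Church and Hanks's association ratio estimates pointwise mutual information from corpus counts. The slide's formula image did not survive extraction, so this sketch follows the standard definition:

```python
import math

def association_ratio(f_xy, f_x, f_y, N):
    # I(x, y) = log2( P(x, y) / (P(x) P(y)) ), with each probability
    # estimated as a count over the corpus size N. Under independence
    # the ratio is 1, so I(x, y) = 0.
    return math.log2((f_xy / N) / ((f_x / N) * (f_y / N)))

# Counts for honorary/doctor from the AP-corpus table on the next slide:
print(association_ratio(12, 111, 621, 15_000_000))   # about 11.35
```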

Interesting Associations with “Doctor” (AP Corpus, N=15 million, Church & Hanks 89)

I(x,y)  f(x,y)  f(x)    x         f(y)   y
11.3    12      111     honorary  621    doctor
11.3    8       1105    doctors   44     dentists
10.7    30      1105    doctors   241    nurses
9.4     8       1105    doctors   154    treating
9.0     6       275     examined  621    doctor
8.9     11      1105    doctors   317    treat
8.7     25      621     doctor    1407   bills

Un-Interesting Associations with “Doctor”

I(x,y)  f(x,y)  f(x)     x       f(y)    y
0.96    6       621      doctor  73785   with
0.95    41      284690   a       1105    doctors
0.93    12      84716    is      1105    doctors

These associations were likely to happen because the non-doctor words shown here are very common and are therefore likely to co-occur with any noun.

Query Processing
• Once the text is in a form to match to the indexes, then the fun begins
  – What approach to use?
    • Boolean?
    • Extended Boolean?
    • Ranked?
      – Fuzzy sets?
      – Vector?
      – Probabilistic?
      – Language models?
      – Neural nets?
• Most of the next few weeks will be looking at these different approaches

Display and Formatting
• Have to present the results to the user
• Lots of different options here, mostly governed by
  – How the actual document is stored
  – And whether the full document or just the metadata about it is presented

What to do with terms…
• Once terms have been extracted from the documents, they need to be stored in some way that lets you get back to the documents those terms came from
• The most common index structure used to do this in IR systems is the “inverted file”

Boolean Implementation: Inverted Files
• We will look at “vector files” in detail later. But conceptually, an inverted file is a vector file “inverted” so that rows become columns and columns become rows

How Are Inverted Files Created?
• Documents are parsed to extract words (or stems), and these are saved with the document ID

Doc 1: Now is the time for all good men to come to the aid of their country
Doc 2: It was a dark and stormy night in the country manor. The time was past midnight
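This parse-and-save step (together with the frequency merging described on the following slides) can be sketched for the two example documents; the term → {doc id: frequency} layout is one common choice, not necessarily what any particular system uses:

```python
from collections import Counter, defaultdict

def build_inverted_file(docs):
    # docs: {doc_id: text}. Parse each document into lowercased words,
    # then record, for every term, which documents it occurs in and
    # how often (the within-document frequency).
    index = defaultdict(dict)
    for doc_id, text in docs.items():
        words = text.lower().replace(".", "").split()
        for term, freq in Counter(words).items():
            index[term][doc_id] = freq
    return index

docs = {1: "Now is the time for all good men to come to the aid of their country",
        2: "It was a dark and stormy night in the country manor. The time was past midnight"}
index = build_inverted_file(docs)
print(index["country"])   # {1: 1, 2: 1}
print(index["to"])        # {1: 2}
```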

How Inverted Files Are Created
• After all documents have been parsed, the inverted file is sorted

How Inverted Files Are Created
• Multiple term entries for a single document are merged, and frequency information is added

Inverted Files
• The file is commonly split into a Dictionary and a Postings file

Inverted Files
• Permit fast search for individual terms
• The search result for each term is a list of document IDs (and optionally, frequency and/or positional information)
• These lists can be used to solve Boolean queries:
  – country: d1, d2
  – manor: d2
  – country AND manor: d2
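The Boolean AND in the example is just an intersection of two sorted postings lists; the standard merge sketch:

```python
def intersect(p1, p2):
    # Classic sorted-postings merge: advance the pointer at the
    # smaller document ID; emit IDs present in both lists.
    result, i, j = [], 0, 0
    while i < len(p1) and j < len(p2):
        if p1[i] == p2[j]:
            result.append(p1[i]); i += 1; j += 1
        elif p1[i] < p2[j]:
            i += 1
        else:
            j += 1
    return result

print(intersect([1, 2], [2]))   # [2]  -> country AND manor: d2
```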

Inverted Files
• Lots of alternative implementations
  – E.g.: Cheshire builds within-document frequency using a hash table during document parsing. Then document IDs and frequency info are stored in a BerkeleyDB B-tree index keyed by the term.

Btree (conceptual)

root:       | F || P || Z |
internal:   | B || D || F |    | H || L || P |    | R || S || Z |
leaves:     Aces, Boilers, Cars, Devils, Flyers, Hawkeyes, Hoosiers, Minors, Panthers, Seminoles

Btree with Postings
• The same tree, with each leaf term pointing to its postings list of document IDs
  – Flyers → 2, 4, 8, 12
  – Seminoles → 2, 4, 8, 120
  – (other postings lists on the slide: 2, 4, 8, 12 and 5, 7, 200)

Inverted Files
• Permit fast search for individual terms
• The search result for each term is a list of document IDs (and optionally, frequency, part of speech, and/or positional information)
• These lists can be used to solve Boolean queries:
  – country: d1, d2
  – manor: d2
  – country AND manor: d2

Query Processing
• Once the text is in a form to match to the indexes, then the fun begins
  – What approach to use?
    • Boolean?
    • Extended Boolean?
    • Ranked?
      – Fuzzy sets?
      – Vector?
      – Probabilistic?
      – Language models?
      – Neural nets?
• Most of the next few weeks will be looking at these different approaches

Display and Formatting
• Have to present the results to the user
• Lots of different options here, mostly governed by
  – How the actual document is stored
  – And whether the full document or just the metadata about it is presented