CL vs NLP: Why “Computational Linguistics (CL)” rather than “Natural Language Processing (NLP)”?

CL vs NLP: Why “Computational Linguistics (CL)” rather than “Natural Language Processing (NLP)”?
• Computational Linguistics
  – Computers dealing with language
  – Modeling what people do
• Natural Language Processing
  – Applications on the computer side

Relation of CL to Other Disciplines
CL sits at the intersection of:
• Artificial Intelligence (AI) (notions of representation, search, etc.)
• Machine Learning (particularly probabilistic or statistical ML techniques)
• Human-Computer Interaction (HCI)
• Electrical Engineering (EE) (Optical Character Recognition)
• Linguistics (Syntax, Semantics, etc.)
• Psychology
• Philosophy of Language, Formal Logic
• Theory of Computation
• Information Retrieval

A Sampling of “Other Disciplines”
• Linguistics: formal grammars, abstract characterization of what is to be learned.
• Computer Science: algorithms for efficient learning or online deployment of these systems in automata.
• Engineering: stochastic techniques for characterizing regular patterns for learning and ambiguity resolution.
• Psychology: insights into what linguistic constructions are easy or difficult for people to learn or to use.

History: 1940s-1950s
• Development of formal language theory (Chomsky, Kleene, Backus)
  – Formal characterization of classes of grammar (context-free, regular)
  – Association with relevant automata
• Probability theory: language understanding as decoding through a noisy channel (Shannon)
  – Use of information-theoretic concepts like entropy to measure the success of language models

1957-1983: Symbolic vs. Stochastic
• Symbolic
  – Use of formal grammars as the basis for natural language processing and learning systems (Chomsky, Harris)
  – Use of logic and logic-based programming for characterizing syntactic or semantic inference (Kaplan, Kay, Pereira)
  – First toy natural language understanding and generation systems (Woods, Minsky, Schank, Winograd, Colmerauer)
  – Discourse processing: role of intention, focus (Grosz, Sidner, Hobbs)
• Stochastic Modeling
  – Probabilistic methods for early speech recognition, OCR (Bledsoe and Browning, Jelinek, Black, Mercer)

1983-1993: Return of Empiricism
• Use of stochastic techniques for part-of-speech tagging, parsing, word sense disambiguation, etc.
• Comparison of stochastic, symbolic, and more or less powerful models for language understanding and learning tasks.

1993-Present
• Advances in software and hardware create NLP needs for information retrieval (web), machine translation, spelling and grammar checking, and speech recognition and synthesis.
• Stochastic and symbolic methods combine for real-world applications.

Language and Intelligence: The Turing Test
• Turing test: a machine, a human, and a human judge
• The judge asks questions of the computer and the human.
  – The machine’s job is to act like a human; the human’s job is to convince the judge that he’s not the machine.
  – The machine is judged “intelligent” if it can fool the judge.
• The judgment of “intelligence” is linked to appropriate answers to questions from the system.

ELIZA
• A remarkably simple “Rogerian psychologist”
• Uses pattern matching to carry on a limited form of conversation
• Seems to “pass the Turing Test!” (McCorduck, 1979, pp. 225-226)
• ELIZA demo: http://www.lpa.co.uk/pws_dem4.htm
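ELIZA’s pattern-matching strategy can be sketched in a few lines. This is a minimal illustration under assumptions: the rules, templates, and fallback below are invented for the example and are not Weizenbaum’s original script.

```python
import re

# Pronoun reflection so echoed fragments read naturally ("my job" -> "your job").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "your": "my"}

# Each rule pairs a trigger pattern with a response template
# (hypothetical rules, invented for this sketch).
RULES = [
    (re.compile(r"i am (.*)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"i feel (.*)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r".*\bmother\b.*", re.I), "Tell me about your family."),
]

def reflect(fragment):
    """Swap first- and second-person words in an echoed fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance):
    """Return the response for the first matching rule, else a stock reply."""
    for pattern, template in RULES:
        m = pattern.match(utterance.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."  # content-free fallback when nothing matches
```

No parsing, no semantics: the illusion of conversation comes entirely from echoing the user’s own words back inside canned templates.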

What’s involved in an “intelligent” Answer?
Analysis: decomposition of the signal (spoken or written), eventually into meaningful units. This involves …

Speech/Character Recognition
• Decomposition into words, segmentation of words into appropriate phones or letters
• Requires knowledge of phonological patterns:
  – I’m enormously proud.
  – I mean to make you proud.

Morphological Analysis
• Inflectional
  – duck + s = [N duck] + [plural s]
  – duck + s = [V duck] + [3rd person s]
• Derivational
  – kind, kindness
• Spelling changes
  – drop, dropping
  – hide, hiding
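A rule-based analyzer for the inflections above can be sketched as follows. The tiny lexicon and the two spelling-change repairs (undoubling for drop/dropping, e-restoration for hide/hiding) are illustrative assumptions, not a real morphological grammar.

```python
# Toy lexicon: word -> set of possible categories (illustrative only).
LEXICON = {"duck": {"N", "V"}, "drop": {"V"}, "hide": {"V"}, "kind": {"ADJ"}}

def analyze(word):
    """Return candidate (stem, category, feature) analyses for a surface word."""
    analyses = []
    if word in LEXICON:
        analyses += [(word, cat, "base") for cat in LEXICON[word]]
    # Inflectional -s: plural noun or 3rd-person-singular verb (duck + s).
    if word.endswith("s") and word[:-1] in LEXICON:
        stem = word[:-1]
        if "N" in LEXICON[stem]:
            analyses.append((stem, "N", "plural"))
        if "V" in LEXICON[stem]:
            analyses.append((stem, "V", "3rd person"))
    # Progressive -ing, with spelling-change repairs.
    if word.endswith("ing"):
        stem = word[:-3]
        candidates = {stem, stem + "e"}          # hiding -> hid + e = hide
        if len(stem) > 1 and stem[-1] == stem[-2]:
            candidates.add(stem[:-1])            # dropping -> dropp -> drop
        for cand in candidates:
            if "V" in LEXICON.get(cand, set()):
                analyses.append((cand, "V", "progressive"))
    return analyses
```

Note that “ducks” comes back with two analyses (plural noun, 3rd-person verb); resolving that ambiguity is the job of later stages such as part-of-speech tagging.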

Syntactic Analysis
• Associate constituent structure with string S
• Prepare for semantic interpretation
• Example, for “I watched the terrapin”:
  [S [NP I] [VP [V watched] [NP [Det the] [N terrapin]]]]
  or, in grammatical-role terms: Subject = I, Verb = watch, Object = the terrapin

Semantics
• A way of representing meaning
• Abstracts away from syntactic structure
• Example:
  – First-order logic: watch(I, terrapin)
  – Can be: “I watched the terrapin” or “The terrapin was watched by me”
• Real language is complex:
  – Who did I watch?
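The point that semantics abstracts away from syntax can be shown with a toy function (a sketch, not a real semantic parser): the active and passive variants map to the same first-order representation.

```python
def logical_form(subject, verb, obj, passive=False):
    """Map a clause's grammatical roles to a predicate-argument form."""
    if passive:
        # "The terrapin was watched by me": the grammatical subject is the
        # semantic object, so swap the roles back before building the form.
        subject, obj = obj, subject
    return f"{verb}({subject}, {obj})"
```

Both surface forms yield watch(I, terrapin); syntactic differences disappear once grammatical roles are mapped onto semantic arguments.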

Lexical Semantics
• The Terrapin is who I watched.
• Watch the Terrapin is what I do best.
• *Terrapin is what I watched the.
Semantic roles:
• I = experiencer
• watch the Terrapin = predicate
• the Terrapin = patient

Compositional Semantics
• Association of parts of a proposition with semantic roles
• Scoping
• Example:
  Proposition
    Experiencer: I (1st person, singular)
    Predicate (perception): saw
    Patient: the Terrapin

Word-Governed Semantics
• Any verb can add “able” to form an adjective.
  – I taught the class. The class is teachable.
  – I rejected the idea. The idea is rejectable.
• Association of particular words with specific semantic forms.
  – John (masculine)
  – The boys (masculine, plural, human)

Pragmatics
• Real-world knowledge, speaker intention, goal of utterance.
• Related to sociology.
• Example 1:
  – Could you turn in your assignments now? (command)
  – Could you finish the homework? (question, command)
• Example 2:
  – I couldn’t decide how to catch the crook. Then I decided to spy on the crook with binoculars.
  – To my surprise, I found out he had them too. Then I knew to just follow the crook with binoculars.
  – Two attachments: [the crook [with binoculars]] vs. [the crook] [with binoculars]

Discourse Analysis
• Discourse: how propositions fit together in a conversation; multi-sentence processing.
  – Pronoun reference: The professor told the student to finish the assignment. He was pretty aggravated at how long it was taking to pass it in.
  – Multiple reference to the same entity: George W. Bush, president of the U.S.
  – Relation between sentences: John hit the man. He had stolen his bicycle.

NLP Pipeline
• speech → Phonetic Analysis; text → OCR/Tokenization
• then: Morphological Analysis → Syntactic Analysis → Semantic Interpretation → Discourse Processing
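Architecturally the pipeline is just function composition: each stage consumes the previous stage’s output. A skeletal version with stand-in stages (the names and the crude stemming rule are illustrative assumptions; real analyzers are far more involved):

```python
def tokenize(text):
    """OCR/Tokenization stand-in: split raw text into word tokens."""
    return text.lower().split()

def morphological_analysis(tokens):
    """Morphology stand-in: pair each token with a crude stem."""
    return [(tok, tok.rstrip("s")) for tok in tokens]

def run_pipeline(text, stages):
    """Thread the input through each stage in order."""
    result = text
    for stage in stages:
        result = stage(result)
    return result
```

The same driver would accept syntactic, semantic, and discourse stages appended to the list, which is what makes the pipeline design easy to extend.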

Relation to Machine Translation
• Analysis (input side): Morphological Analysis → Syntactic Analysis → Semantic Interpretation → Interlingua
• Generation (output side): Interlingua → Lexical Selection → Syntactic Realization → Morphological Synthesis

Ambiguity
“I made her duck” can mean:
• I made duckling for her
• I made the duckling belonging to her
• I created the duck she owns
• I forced her to lower her head
• By magic, I changed her into a duck

Syntactic Disambiguation
• Structural ambiguity: two parses of “I made her duck”:
  [S [NP I] [VP [V made] [NP her] [VP [V duck]]]]
  [S [NP I] [VP [V made] [NP [Det her] [N duck]]]]

Part-of-Speech Tagging and Word Sense Disambiguation
• [verb Duck]! vs. [noun Duck] is delicious for dinner.
• I went to the bank to deposit my check.
  I went to the bank to look out at the river.
  I went to the bank of windows and chose the one dealing with last names beginning with “d”.

Resources for NLP Systems
• Dictionary
• Morphology and spelling rules
• Grammar rules
• Semantic interpretation rules
• Discourse interpretation
Natural language processing involves (1) learning or fashioning the rules for each component, (2) embedding the rules in the relevant automaton, and (3) using the automaton to efficiently process the input.

Some NLP Applications
• Machine Translation—Babelfish (AltaVista): http://babelfish.altavista.com/translate.dyn
• Question Answering—Ask Jeeves: http://www.ask.com/
• Language Summarization—MEAD (U. Michigan): http://www.summarization.com/mead
• Spoken Language Recognition—EduSpeak (SRI): http://www.eduspeak.com/
• Automatic Essay Evaluation—E-Rater (ETS): http://www.ets.org/research/erater.html
• Information Retrieval and Extraction—NetOwl (SRA): http://www.netowl.com/extractor_summary.html

What is MT?
• Definition: translation from one natural language to another by means of a computerized system
• Early failures
• Later: varying degrees of success

An Old Example
“The spirit is willing but the flesh is weak”
→ “The vodka is good but the meat is rotten”

Machine Translation History
• 1950’s: Intensive research activity in MT
• 1960’s: Direct word-for-word replacement
• 1966 (ALPAC): NRC report on MT. Conclusion: MT no longer worthy of serious scientific investigation.
• 1966-1975: “Recovery period”
• 1975-1985: Resurgence (Europe, Japan)
• 1985-present: Resurgence (US)
http://ourworld.compuserve.com/homepages/WJHutchins/MTS-93.htm

What Happened Between ALPAC and Now?
• Need for MT and other NLP applications confirmed
• Change in expectations
• Computers have become faster, more powerful
• WWW
• Political state of the world
• Maturation of linguistics
• Development of hybrid statistical/symbolic approaches

Three MT Approaches: Direct, Transfer, Interlingual
• Analysis (source side): Source Text → Morphological Analysis (word structure) → Syntactic Analysis (syntactic structure) → Semantic Analysis/Composition (semantic structure)
• Generation (target side): Semantic Decomposition (semantic structure) → Semantic Generation → Syntactic Generation (syntactic structure) → Morphological Generation (word structure) → Target Text
• Bridges between the two sides, shallowest to deepest: Direct (word structure), Syntactic Transfer, Semantic Transfer, Interlingua at the top

Examples of the Three Approaches
• Direct:
  – I checked his answers against those of the teacher → Yo comparé sus respuestas a las de la profesora
  – Rule: [check X against Y] → [comparar X a Y]
• Transfer:
  – Ich habe ihn gesehen → I have seen him
  – Rule: [clause agt aux obj pred] → [clause agt aux pred obj]
• Interlingual:
  – I like Mary → Mary me gusta a mí
  – Rep: [Be (I [AT (I, Mary)] Like+ingly)]
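The direct rule from the slide, [check X against Y] → [comparar X a Y], amounts to a structural replacement followed by word-for-word glossary lookup. A sketch under assumptions: the regex rule and glossary entries below were built just for this one sentence, which is precisely the rule-proliferation weakness of direct MT.

```python
import re

# Structural replacement rule from the slide:
# [check X against Y] -> [comparar X a Y] (here in past tense, "comparé").
RULES = [(re.compile(r"checked (.+) against (.+)"), r"comparé \1 a \2")]

# Word-for-word glossary (only what this example needs; illustrative).
GLOSSARY = {"I": "Yo", "his": "sus", "answers": "respuestas", "those": "las",
            "of": "de", "the": "la", "teacher": "profesora"}

def direct_translate(sentence):
    """Apply structural rules, then substitute remaining words one-for-one."""
    for pattern, replacement in RULES:
        sentence = pattern.sub(replacement, sentence)
    # Unknown words simply pass through untranslated.
    return " ".join(GLOSSARY.get(word, word) for word in sentence.split())
```

Every new construction needs another rule and every new word another glossary entry, with no way to state generalizations across them.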

MT Systems: 1964-1990
• Direct: GAT [Georgetown, 1964], TAUM-METEO [Colmerauer et al., 1971]
• Transfer: GETA/ARIANE [Boitet, 1978], LMT [McCord, 1989], METAL [Thurmair, 1990], MiMo [Arnold & Sadler, 1990], …
• Interlingual: MOPTRANS [Schank, 1974], KBMT [Nirenburg et al., 1992], UNITRAN [Dorr, 1990]

Statistical MT and Hybrid Symbolic/Statistical MT: 1990-Present
Candide [Brown, 1990, 1992]; Halo/Nitrogen [Langkilde and Knight, 1998], [Yamada and Knight, 2002]; GHMT [Dorr and Habash, 2002]; DUSTer [Dorr et al., 2002]

Direct MT: Pros and Cons
• Pros
  – Fast
  – Simple
  – Inexpensive
• Cons
  – Unreliable
  – Not powerful
  – Rule proliferation
  – Requires too much context
  – Major restructuring after lexical substitution

Transfer MT: Pros and Cons
• Pros
  – Don’t need to find a language-neutral representation
  – No translation rules hidden in the lexicon
  – Relatively fast
• Cons
  – N² sets of transfer rules: difficult to extend
  – Proliferation of language-specific rules in lexicon and syntax
  – Cross-language generalizations lost

Interlingual MT: Pros and Cons
• Pros
  – Portable (avoids the N² problem)
  – Lexical rules and structural transformations stated more simply on a normalized representation
  – Explanatory adequacy
• Cons
  – Difficult to deal with terms on the primitive level: universals?
  – Must decompose and reassemble concepts
  – Useful information lost (paraphrase)

Approximate IL Approach
• Tap into the richness of TL (target language) resources
• Use some, but not all, components of the IL representation
• Generate multiple sentences that are statistically pared down

Approximating IL: Handling Divergences
• Primitives
• Semantic Relations
• Lexical Information

Interlingual vs. Approximate IL
• Interlingual MT:
  – primitives & relations
  – bi-directional lexicons
  – analysis: compose IL
  – generation: decompose IL
• Approximate IL:
  – hybrid symbolic/statistical design
  – overgeneration with statistical ranking
  – uses dependency representation input and structural expansion for “deeper” overgeneration

Mapping from Input Dependency to English Dependency Tree
Mary le dio patadas a John → Mary kicked John
• Input dependency (Spanish): GIVE-V [CAUSE GO] with Agent: MARY, Theme: KICK-N, Goal: JOHN
• English dependency: KICK-V [CAUSE GO] with Agent: MARY, Goal: JOHN
Knowledge resources in English only (LVD; Dorr, 2001).

Statistical Extraction
Candidate outputs with scores:
• Mary kicked John. [0.670270]
• Mary gave a kick at John. [-2.175831]
• Mary gave the kick at John. [-3.969686]
• Mary gave an kick at John. [-4.489933]
• Mary gave a kick by John. [-4.803054]
• Mary gave a kick to John. [-5.045810]
• Mary gave a kick into John. [-5.810673]
• Mary gave a kick through John. [-5.836419]
• Mary gave a foot wound by John. [-6.041891]
• Mary gave John a foot wound. [-6.212851]

Benefits of the Approximate IL Approach
• Explaining behaviors that appear to be statistical in nature
• “Re-sourceability”: re-use of already existing components for MT from new languages
• Application to monolingual alternations

What Resources Are Required?
• Deep TL resources
• Requires an SL parser and tralex (translation lexicon)
• TL resources are richer: LVD representations, CatVar database
• Constrained overgeneration

MT Challenges: Ambiguity
• Syntactic ambiguity: I saw the man on the hill with the telescope
• Lexical ambiguity: E: book → S: libro, reservar
• Semantic ambiguity:
  – Homography: ball (E) = pelota, baile (S)
  – Polysemy: kill (E) = matar, acabar (S)
  – Semantic granularity:
    esperar (S) = wait, expect, hope (E)
    be (E) = ser, estar (S)
    fish (E) = pez, pescado (S)

How Do We Evaluate MT?
• Human-based metrics:
  – Semantic invariance
  – Pragmatic invariance
  – Lexical invariance
  – Structural invariance
  – Spatial invariance
  – Fluency
  – Accuracy
  – “Do you get it?”
• Automatic metrics: BLEU

BiLingual Evaluation Understudy (BLEU — Papineni, 2001)
http://www.research.ibm.com/people/k/kishore/RC22176.pdf
• An automatic technique, but …
• Requires the pre-existence of human (reference) translations
• Approach:
  – Produce a corpus of high-quality human translations
  – Judge “closeness” numerically (word-error rate)
  – Compare n-gram matches between the candidate translation and 1 or more reference translations

BLEU Comparison
Chinese-English translation example:
Candidate 1: It is a guide to action which ensures that the military always obeys the commands of the party.
Candidate 2: It is to insure the troops forever hearing the activity guidebook that party direct.
Reference 1: It is a guide to action that ensures that the military will forever heed Party commands.
Reference 2: It is the guiding principle which guarantees the military forces always being under the command of the Party.
Reference 3: It is the practical guide for the army always to heed the directions of the party.

How Do We Compute BLEU Scores?
• Key idea: a reference word should be considered exhausted after a matching candidate word is identified.
• For each word, compute: (1) the candidate word count, (2) the maximum reference count.
• Add the counts for each candidate word, using the lower of the two numbers.
• Divide by the number of candidate words.
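The counting procedure above is straightforward to implement. A sketch of modified (clipped) n-gram precision, returning the fraction as a (numerator, denominator) pair rather than a float so the worked examples on the following slides can be checked exactly:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, references, n=1):
    """Clipped n-gram precision: each candidate n-gram is credited at most
    as many times as it appears in the most generous single reference."""
    cand_counts = Counter(ngrams(candidate, n))
    max_ref_counts = Counter()
    for ref in references:
        for gram, count in Counter(ngrams(ref, n)).items():
            max_ref_counts[gram] = max(max_ref_counts[gram], count)
    clipped = sum(min(count, max_ref_counts[gram])
                  for gram, count in cand_counts.items())
    return clipped, sum(cand_counts.values())
```

Run on the slide’s Candidate 1 against the three references (lowercased, punctuation stripped), this reproduces the unigram count worked through on the next slide, and it also exposes the “cheating” candidate discussed at the end: a string of repeated high-frequency words scores poorly once counts are clipped.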

Modified Unigram Precision: Candidate #1
It(1) is(1) a(1) guide(1) to(1) action(1) which(1) ensures(1) that(2) the(4) military(1) always(1) obeys(0) the commands(1) of(1) the party(1)
Reference 1: It is a guide to action that ensures that the military will forever heed Party commands.
Reference 2: It is the guiding principle which guarantees the military forces always being under the command of the Party.
Reference 3: It is the practical guide for the army always to heed the directions of the party.
What’s the modified unigram precision? 17/18

Modified Unigram Precision: Candidate #2
It(1) is(1) to(1) insure(0) the(4) troops(0) forever(1) hearing(0) the activity(0) guidebook(0) that(2) party(1) direct(0)
Reference 1: It is a guide to action that ensures that the military will forever heed Party commands.
Reference 2: It is the guiding principle which guarantees the military forces always being under the command of the Party.
Reference 3: It is the practical guide for the army always to heed the directions of the party.
What’s the modified unigram precision? 8/14

Modified Bigram Precision: Candidate #1
It is(1) is a(1) a guide(1) guide to(1) to action(1) action which(0) which ensures(0) ensures that(1) that the(1) the military(1) military always(0) always obeys(0) obeys the(0) the commands(0) commands of(0) of the(1) the party(1)
Reference 1: It is a guide to action that ensures that the military will forever heed Party commands.
Reference 2: It is the guiding principle which guarantees the military forces always being under the command of the Party.
Reference 3: It is the practical guide for the army always to heed the directions of the party.
What’s the modified bigram precision? 10/17

Modified Bigram Precision: Candidate #2
It is(1) is to(0) to insure(0) insure the(0) the troops(0) troops forever(0) forever hearing(0) hearing the(0) the activity(0) activity guidebook(0) guidebook that(0) that party(0) party direct(0)
Reference 1: It is a guide to action that ensures that the military will forever heed Party commands.
Reference 2: It is the guiding principle which guarantees the military forces always being under the command of the Party.
Reference 3: It is the practical guide for the army always to heed the directions of the party.
What’s the modified bigram precision? 1/13

Catching Cheaters
Candidate: the the the the the the the
Unigram counts: the(2): “the” occurs 7 times in the candidate but at most 2 times in any single reference.
Bigram counts: the the(0): the bigram “the the” appears in no reference.
Reference 1: The cat is on the mat
Reference 2: There is a cat on the mat
What’s the unigram answer? 2/7
What’s the bigram answer? 0/6