Statistical NLP Spring 2011 Lecture 9: Word Alignment II

Dan Klein – UC Berkeley

Learning with EM
§ Hard EM: alternate between
  § E-step: Find best “completions” Y for fixed θ
  § M-step: Find best parameters θ for fixed Y
§ Example: K-Means

K-Means
§ An iterative clustering algorithm
§ Pick K random points as cluster centers (means)
§ Alternate:
  § Assign data instances to closest mean
  § Assign each mean to the average of its assigned points
§ Stop when no points’ assignments change
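A minimal Python sketch of this loop (the 2-D tuples and the toy data are illustrative choices of mine, not from the slides):

```python
import random

def kmeans(points, k, iters=100):
    """Hard-EM flavor of K-Means: alternate assignment (E-like) and mean update (M-like)."""
    means = random.sample(points, k)  # pick K random points as cluster centers
    for _ in range(iters):
        # Assign each data instance to its closest mean (squared Euclidean distance)
        clusters = [[] for _ in range(k)]
        for x in points:
            j = min(range(k),
                    key=lambda c: sum((xi - mi) ** 2 for xi, mi in zip(x, means[c])))
            clusters[j].append(x)
        # Move each mean to the average of its assigned points (keep old mean if empty)
        new_means = [tuple(sum(d) / len(c) for d in zip(*c)) if c else means[j]
                     for j, c in enumerate(clusters)]
        if new_means == means:  # means stopped moving, so assignments stopped changing
            break
        means = new_means
    return means

# Toy usage: two obvious clusters
print(kmeans([(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)], k=2))
```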

K-Means Example

K-Means Getting Stuck § A local optimum:

Naïve-Bayes Models
§ Model: pick a topic, then generate a document using a language model for that topic.
§ Naïve-Bayes assumption: all words are independent given the topic.
[Graphical model: topic y generates words x1, x2, …, xn]

Hard EM for Naïve-Bayes
§ Procedure:
  § (1) Calculate best completions:
  § (2) Compute relevant counts from the completed data:
  § (3) Compute new parameters from these counts (divide)
  § (4) Repeat until convergence
§ Can also do this when some docs are labeled
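A compact Python sketch of these four steps, assuming bag-of-words documents and add-one smoothing (both assumptions of mine, not stated on the slide); the optional labels argument covers the case where some docs are labeled:

```python
import math
from collections import Counter, defaultdict

def hard_em_nb(docs, topics, iters=10, labels=None):
    """docs: list of token lists; labels: optional {doc_index: topic} for labeled docs."""
    vocab = {w for d in docs for w in d}
    # Noisy initialization: assign documents to topics round-robin, keep any given labels
    y = {i: topics[i % len(topics)] for i in range(len(docs))}
    y.update(labels or {})
    for _ in range(iters):
        # (2)+(3): counts from the completed data, then parameters by dividing (smoothed)
        prior = Counter(y.values())
        word_counts = defaultdict(Counter)
        for i, d in enumerate(docs):
            word_counts[y[i]].update(d)
        def log_joint(d, t):
            denom = sum(word_counts[t].values()) + len(vocab)
            return (math.log((prior[t] + 1) / (len(docs) + len(topics))) +
                    sum(math.log((word_counts[t][w] + 1) / denom) for w in d))
        # (1): best completions for unlabeled docs; (4): repeat
        y = {i: (labels[i] if labels and i in labels
                 else max(topics, key=lambda t: log_joint(docs[i], t)))
             for i in range(len(docs))}
    return y
```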

Hard EM: More Formally
§ Hard EM:
  § Improve completions
  § Improve parameters
§ Each step either does nothing or increases the objective

Soft EM for Naïve-Bayes
§ Procedure:
  § (1) Calculate posteriors (soft completions):
  § (2) Compute expected counts under those posteriors:
  § (3) Compute new parameters from these counts (divide)
  § (4) Repeat until convergence
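The soft version changes only steps (1) and (2): posteriors instead of hard assignments, and fractional counts. A Python sketch of one iteration under the same assumptions as above (word_probs is assumed to be a per-topic distribution over the full vocabulary):

```python
import math
from collections import Counter, defaultdict

def soft_em_nb_step(docs, topics, prior, word_probs):
    """One soft-EM iteration: posteriors -> expected counts -> new parameters (divide)."""
    # (1) Posteriors (soft completions) P(topic | doc), computed in log space for stability
    posteriors = []
    for d in docs:
        logs = {t: math.log(prior[t]) + sum(math.log(word_probs[t][w]) for w in d)
                for t in topics}
        m = max(logs.values())
        unnorm = {t: math.exp(v - m) for t, v in logs.items()}
        z = sum(unnorm.values())
        posteriors.append({t: v / z for t, v in unnorm.items()})
    # (2) Expected (fractional) counts under those posteriors
    exp_prior, exp_words = Counter(), defaultdict(Counter)
    for d, post in zip(docs, posteriors):
        for t, p in post.items():
            exp_prior[t] += p
            for w in d:
                exp_words[t][w] += p
    # (3) New parameters from these counts (divide), with add-one smoothing
    vocab = {w for d in docs for w in d}
    new_prior = {t: exp_prior[t] / len(docs) for t in topics}
    new_word_probs = {t: {w: (exp_words[t][w] + 1) / (sum(exp_words[t].values()) + len(vocab))
                          for w in vocab}
                      for t in topics}
    return new_prior, new_word_probs
```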

EM in General
§ We’ll use EM over and over again to fill in missing data
§ Convenience scenario: we want P(x); including y just makes the model simpler (e.g. mixing weights for language models)
§ Induction scenario: we actually want to know y (e.g. clustering)
§ NLP differs from much of statistics / machine learning in that we often want to interpret or use the induced variables (which is tricky at best)
§ General approach: alternately update y and θ
  § E-step: compute posteriors P(y | x, θ)
    § This means scoring all completions with the current parameters
    § Usually, we do this implicitly with dynamic programming
  § M-step: fit θ to these completions
    § This is usually the easy part: treat the completions as (fractional) complete data
  § Initialization: start with some noisy labelings; the noise adjusts into patterns based on the data and the model
§ We’ll see lots of examples in this course
§ EM is only locally optimal (why?)
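In symbols, the two alternating updates described above, in standard EM notation (my rendering, not copied from the slide):

```latex
\text{E-step:}\quad q^{(t)}(y) = P\!\left(y \mid x, \theta^{(t)}\right)
\qquad
\text{M-step:}\quad \theta^{(t+1)} = \arg\max_{\theta}\; \mathbb{E}_{q^{(t)}}\!\left[\log P(x, y \mid \theta)\right]
```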

KL Divergence

General Setup § KL divergence to true posterior
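The standard decomposition behind this setup (notation mine). It also answers the earlier “why only locally optimal?” question: each EM step only tightens or climbs a lower bound on the data likelihood, so it can stall at a local maximum:

```latex
\log P(x \mid \theta)
= \underbrace{\mathbb{E}_{q(y)}\!\left[\log \frac{P(x, y \mid \theta)}{q(y)}\right]}_{\text{lower bound } \mathcal{L}(q,\,\theta)}
+ \underbrace{\mathrm{KL}\!\left(q(y) \,\big\|\, P(y \mid x, \theta)\right)}_{\geq\, 0}
```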

Approximations

General Solution

Example: Two-Mixture

Example Posteriors

Approximate Posteriors

IBM Models 1/2
[Alignment figure: E = “Thank you , I shall do so gladly .” aligned to F = “Gracias , lo haré de muy buen grado .”]
Model Parameters:
§ Emissions: P(F1 = Gracias | E_{A1} = Thank)
§ Transitions: P(A2 = 3)
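Written out, both models score an aligned sentence pair the same way and differ only in the alignment-position term (standard IBM Model 1/2 notation, with J French and I English positions):

```latex
P\!\left(f_1^J, a_1^J \mid e_1^I\right) = \prod_{j=1}^{J} P(a_j)\, P\!\left(f_j \mid e_{a_j}\right),
\qquad
P(a_j = i) =
\begin{cases}
\frac{1}{I+1} & \text{Model 1 (uniform)} \\
q(i \mid j, I, J) & \text{Model 2 (position-dependent)}
\end{cases}
```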

Problems with Model 1
§ There’s a reason they designed Models 2-5!
§ Problems: alignments jump around, align everything to rare words
§ Experimental setup:
  § Training data: 1.1M sentences of French-English text, Canadian Hansards
  § Evaluation metric: Alignment Error Rate (AER)
  § Evaluation data: 447 hand-aligned sentences
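For reference, AER compares a proposed alignment A against the sure links S and possible links P (with S ⊆ P) in the hand-aligned data:

```latex
\mathrm{AER}(A; S, P) = 1 - \frac{|A \cap S| + |A \cap P|}{|A| + |S|}
```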

Monotonic Translation
Japan shaken by two new quakes
Le Japon secoué par deux nouveaux séismes

Local Order Change
Japan is at the junction of four tectonic plates
Le Japon est au confluent de quatre plaques tectoniques

IBM Model 2
§ Alignments tend to the diagonal (broadly at least)
§ Other schemes for biasing alignments towards the diagonal:
  § Relative vs. absolute alignment
  § Asymmetric distances
  § Learning a full multinomial over distances
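One concrete way to encode such a diagonal bias (this particular form is an illustration of mine, not taken from the slide): penalize the normalized distance of the alignment point from the diagonal,

```latex
q(i \mid j, I, J) \;\propto\; \exp\!\left(-\lambda \left|\frac{i}{I} - \frac{j}{J}\right|\right)
```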

EM for Models 1/2
§ Model parameters: translation probabilities (1+2), distortion parameters (2 only)
§ Start with uniform, including:
§ For each sentence:
  § For each French position j:
    § Calculate posterior over English positions (or just use best single alignment)
    § Increment count of word fj with word ei by these amounts
  § Also re-estimate distortion probabilities for Model 2
§ Iterate until convergence
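A minimal Python sketch of this loop for Model 1 only (no null word, no distortion; representing sentence pairs as token lists is my own choice):

```python
from collections import defaultdict

def model1_em(pairs, iters=10):
    """pairs: list of (french_tokens, english_tokens). Returns t where t[f][e] = P(f | e)."""
    f_vocab = {f for fs, _ in pairs for f in fs}
    uniform = 1.0 / len(f_vocab)
    t = defaultdict(lambda: defaultdict(lambda: uniform))  # start with uniform translation probs
    for _ in range(iters):
        count = defaultdict(lambda: defaultdict(float))  # expected count(f aligned to e)
        total = defaultdict(float)                       # expected count(e)
        for fs, es in pairs:
            for f in fs:
                # Posterior over English positions for this French word (Model 1: uniform prior)
                z = sum(t[f][e] for e in es)
                for e in es:
                    p = t[f][e] / z
                    count[f][e] += p   # increment fractional counts
                    total[e] += p
        # Re-estimate: divide expected counts
        for f in count:
            for e in count[f]:
                t[f][e] = count[f][e] / total[e]
    return t

# Toy usage: after a few iterations, "maison" should prefer "house" over "the"
pairs = [("la maison".split(), "the house".split()),
         ("la fleur".split(), "the flower".split())]
t = model1_em(pairs)
print(t["maison"]["house"], t["maison"]["the"])
```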

Example

Phrase Movement
On Thursday Nov. 4, earthquakes rocked Japan once again
Des tremblements de terre ont à nouveau touché le Japon jeudi 4 novembre.

The HMM Model
[Alignment figure: E = “Thank you , I shall do so gladly .” aligned to F = “Gracias , lo haré de muy buen grado .”]
Model Parameters:
§ Emissions: P(F1 = Gracias | E_{A1} = Thank)
§ Transitions: P(A2 = 3 | A1 = 1)

The HMM Model
§ Model 2 preferred global monotonicity
§ We want local monotonicity:
  § Most jumps are small
§ HMM model (Vogel 96)
  [Figure: distribution over jump distances -2, -1, 0, +1, +2, +3]
§ Re-estimate using the forward-backward algorithm
§ Handling nulls requires some care
§ What are we still missing?
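The usual way to capture “most jumps are small” is to let the transition depend only on the jump distance (the standard HMM-alignment parameterization; notation mine):

```latex
P\!\left(a_j = i' \mid a_{j-1} = i, I\right) = \frac{c(i' - i)}{\sum_{i''=1}^{I} c(i'' - i)}
```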

HMM Examples

AER for HMMs

Model         AER
Model 1 INT   19.5
HMM E→F       11.4
HMM F→E       10.8
HMM AND        7.1
HMM INT        4.7
GIZA M4 AND    6.9

IBM Models 3/4/5
Mary did not slap the green witch
→ n(3|slap):  Mary not slap slap slap the green witch
→ P(NULL):    Mary not slap slap slap NULL the green witch
→ t(la|the):  Mary no daba una botefada a la verde bruja
→ d(j|i):     Mary no daba una botefada a la bruja verde
[from Al-Onaizan and Knight, 1998]

Examples: Translation and Fertility

Example: Idioms
he is nodding
il hoche la tête

Example: Morphology

Some Results § [Och and Ney 03]

Decoding
§ In these word-to-word models:
  § Finding best alignments is easy
  § Finding translations is hard (why?)

Bag “Generation” (Decoding)

Bag Generation as a TSP
§ Imagine bag generation with a bigram LM
§ Words are nodes
§ Edge weights are P(w|w’)
§ Valid sentences are Hamiltonian paths
§ Not the best news for word-based MT!
[Example bag: { is, it, not, clear, . }]
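A tiny brute-force Python illustration of the search problem (the bigram scores are made-up toy numbers): every ordering of the bag is scored with a bigram LM, which is exactly a search for the best Hamiltonian path.

```python
import itertools

# Toy bigram log-probabilities (made-up); unseen bigrams get a harsh penalty
BIGRAM = {("<s>", "it"): -0.5, ("it", "is"): -0.3, ("is", "not"): -0.7,
          ("not", "clear"): -0.4, ("clear", "."): -0.2, (".", "</s>"): -0.1}

def score(order):
    """Sum of bigram log-probs over <s> ... </s>; words are nodes, edges are P(w | w')."""
    seq = ("<s>",) + order + ("</s>",)
    return sum(BIGRAM.get(bigram, -10.0) for bigram in zip(seq, seq[1:]))

bag = ["is", "it", ".", "not", "clear"]
best = max(itertools.permutations(bag), key=score)
print(" ".join(best))  # -> "it is not clear ."  (exhaustive search is O(n!), hence the TSP view)
```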

IBM Decoding as a TSP

Phrase Weights

Phrase Scoring
§ Learning weights has been tried, several times:
  § [Marcu and Wong, 02]
  § [DeNero et al, 06]
  § … and others
  [Example phrase pair: “les chats aiment le poisson frais” ↔ “cats like fresh fish”]
§ Seems not to work well, for a variety of partially understood reasons
§ Main issue: big chunks get all the weight; obvious priors don’t help
§ Though, [DeNero et al 08]

Phrase Size
§ Phrases do help
§ But they don’t need to be long
§ Why should this be?

Lexical Weighting

Many-to-Many Alignments

Crash Course in EM