11-731 Machine Translation
Syntax-Based Translation Models – Principles, Approaches, Acquisition
Alon Lavie
16 March 2011
Outline
- Syntax-based Translation Models: Rationale and Motivation
- Resource Scenarios and Model Definitions
  - String-to-Tree, Tree-to-String and Tree-to-Tree
- Hierarchical Phrase-based Models (Chiang's Hiero)
- Syntax-Augmented Hierarchical Models (Venugopal and Zollmann)
- String-to-Tree Models
  - Phrase-Structure-based Model (Galley et al., 2004, 2006)
- Tree-to-Tree Models
  - Phrase-Structure-based Stat-XFER Model (Lavie et al., 2008)
  - DCU Tree-bank Alignment method (Tinsley, Zhechev et al.)
- Tree-to-String Models
  - Tree Transduction Models (Yamada and Knight; Gildea et al.)

11-731 Machine Translation (2011)
Syntax-based Models: Rationale
- Phrase-based models model translation at very shallow levels:
  - Translation equivalence is modeled at the multi-word lexical level
  - Phrases capture some cross-language local reordering, but only for phrases that were seen in training – no effective generalization
  - Non-local cross-language reordering is modeled only by permuting the order of phrases during decoding
  - No explicit modeling of syntax, structural divergences, or syntax-to-semantic mapping differences
- Goal: improve translation quality using syntax-based models
  - Capture generalizations, reorderings and divergences at appropriate levels of abstraction
  - Models direct the search during decoding toward more accurate translations
- Still statistical MT: acquire translation models automatically from (annotated) parallel data and model them statistically!
Syntax-based Statistical MT
- Building a syntax-based statistical MT system is similar in concept to simpler phrase-based SMT methods:
  - Model acquisition from bilingual sentence-parallel corpora
  - Decoders that, given an input string, can find the best translation according to the models
- Our focus today will be on the models and their acquisition
- Next week: Chris Dyer will cover decoding for hierarchical and syntax-based MT
Syntax-based Resources vs. Models
- Important distinction:
  1. What structural information for the parallel data is available during model acquisition and training?
  2. What type of translation models are we acquiring from the annotated parallel data?
- Structure available during acquisition – main distinctions:
  - Syntactic/structural information for the parallel training data: given by external components (parsers) or inferred from the data?
  - Syntax/structure available for one language or for both?
  - Phrase-structure or dependency-structure?
- What do we extract from parallel sentences?
  - Sub-sentential units of translation equivalence annotated with structure
  - Rules/structures that determine how these units combine into full transductions
Syntax-based Translation Models
- String-to-Tree:
  - Models explain how to transduce a string in the source language into a structural representation in the target language
  - During decoding:
    - No separate parsing on the source side
    - Decoding results in a set of possible translations, each annotated with syntactic structure
    - The best-scoring string+structure can be selected as the translation
  - Example:
    ne VB pas → (VP (AUX does) (RB not) x2)
Syntax-based Translation Models
- Tree-to-String:
  - Models explain how to transduce a structural representation of the source-language input into a string in the target language
  - During decoding:
    - Parse the source string to derive its structure
    - Decoding explores various ways of decomposing the parse tree into a sequence of composable models, each generating a translation string on the target side
    - The best-scoring string can be selected as the translation
  - Examples: (tree figures omitted)
Syntax-based Translation Models
- Tree-to-Tree:
  - Models explain how to transduce a structural representation of the source-language input into a structural representation in the target language
  - During decoding:
    - The decoder synchronously explores alternative ways of parsing the source-language input string and transducing it into corresponding target-language structural output
    - The best-scoring structure+string can be selected as the translation
  - Example:
    NP::NP [VP 的 CD 国家 之一] -> [one of the CD countries that VP]
    (
     ;; Alignments
     (X1::Y7)
     (X3::Y4)
    )
Structure Available During Acquisition
- What information/annotations are available for the bilingual sentence-parallel training data?
  - (Symmetricized) Viterbi word alignments (i.e. from GIZA++)
  - (Non-syntactic) extracted phrases for each parallel sentence
  - Parse-trees/dependencies for the "source" language
  - Parse-trees/dependencies for the "target" language
- Some major potential issues and problems:
  - GIZA++ word alignments are not aware of syntax – word-alignment errors can have bad consequences for the extracted syntactic models
  - Using external monolingual parsers is also problematic:
    - Using the single-best parse for each sentence introduces parsing errors
    - Parsers were designed for monolingual parsing, not translation
    - Parser design decisions for each language may be very different:
      - Different notions of constituency and structure
      - Different sets of POS and constituent labels
Hierarchical Phrase-Based Models
- Proposed by David Chiang in 2005
- A natural hierarchical extension to phrase-based models
- Representation: rules in the form of synchronous CFGs
  - Formally syntactic, but with no direct association to linguistic syntax
  - Single non-terminal "X"
- Acquisition scenario: similar to standard phrase-based models
  - No independent syntactic parsing on either side of the parallel data
  - Uses "symmetricized" bi-directional Viterbi word alignments
  - Extracts phrases and rules (hierarchical phrases) from each parallel sentence
  - Models the extracted phrases statistically using MLE scores
Hierarchical Phrase-Based Models
- Extraction process overview:
  1. Start with standard phrase extraction from the symmetricized Viterbi word-aligned sentence pair
  2. For each phrase-pair, find all embedded phrase-pairs, and create a hierarchical rule for each instance
  3. Accumulate the collection of all such rules from the entire corpus along with their counts
  4. Model them statistically using maximum-likelihood estimate (MLE) scores:
     - P(target|source) = count(source, target) / count(source)
     - P(source|target) = count(source, target) / count(target)
  5. Filtering:
     - Rules of length < 5 (terminals and non-terminals)
     - At most two non-terminals X
     - Non-terminals must be separated by a terminal
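The MLE scoring in step 4 is plain relative-frequency estimation over the extracted rule instances. A minimal sketch, assuming rules have already been extracted as (source, target) string pairs – a hypothetical input format, not Hiero's actual data structures:

```python
from collections import Counter

def mle_scores(rule_instances):
    """Compute (P(target|source), P(source|target)) by relative frequency.

    rule_instances: list of (source, target) pairs, one per extracted
    rule occurrence in the corpus (assumed representation).
    """
    joint = Counter(rule_instances)
    src = Counter(s for s, _ in rule_instances)
    tgt = Counter(t for _, t in rule_instances)
    return {
        (s, t): (c / src[s], c / tgt[t])   # (P(t|s), P(s|t))
        for (s, t), c in joint.items()
    }

# Toy corpus of extracted rule occurrences
rules = [("ne X1 pas", "not X1"), ("ne X1 pas", "not X1"), ("ne X1 pas", "no X1")]
scores = mle_scores(rules)
# P("not X1" | "ne X1 pas") = 2/3
```

The same two relative-frequency scores are what the slide's step 4 formulas describe; Hiero additionally combines them with lexical weights in a log-linear model.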
Hierarchical Phrase-Based Models
- Example: Chinese-to-English rules (figure omitted)
Syntax-Augmented Hierarchical Model
- Proposed by CMU's Venugopal and Zollmann in 2006
- Representation: rules in the form of synchronous CFGs
- Main goal: add linguistic syntax to the hierarchical rules that are extracted by the Hiero method:
  - Hiero's "X" labels are completely generic – they allow substituting any sub-phrase into an X-hole (if the context matches)
  - Linguistic structure has labeled constituents – the labels determine which sub-structures are allowed to combine together
- Idea: use labels derived from parse structures on one side of the parallel data to label the "X" labels in the extracted rules
  - Labels from one language (i.e. English) are "projected" to the other language (i.e. Chinese)
- Major issues/problems:
  - How to label X-holes that are not complete constituents?
  - What to do about rule "fragmentation" – rules that are the same other than the labels inside them?
Syntax-Augmented Hierarchical Model
- Extraction process overview:
  1. Parse the "strong" side of the parallel data (i.e. English)
  2. Run the Hiero extraction process on the parallel-sentence instance and find all phrase-pairs and all hierarchical rules for the parallel sentence
  3. Labeling: for each X-hole that corresponds to a parse constituent C, label X as C. For all other X-holes, assign combination labels
  4. Accumulate the collection of all such rules from the entire corpus along with their counts
  5. Model the rules statistically: Venugopal & Zollmann use six different rule score features instead of just two MLE scores
  6. Filtering: similar to Hiero rule filtering
- Advanced modeling: Preference Grammars
  - Avoid rule fragmentation: instead of explicitly labeling the X-holes in the rules with different labels, keep them as "X", with distributions over the possible labels that could fill the "X". These are used as features during decoding
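Step 3's labeling can be sketched as a lookup against the target-side parse, falling back to SAMT-style combination labels (A+B for two adjacent constituents, A/B for a constituent missing material on its right, a backslash label for material missing on the left – notation approximate). The span dictionary and toy parse below are illustrative assumptions:

```python
def samt_label(span, constituents):
    """Assign a SAMT-style label to the target span of an X-hole.

    constituents: dict mapping (start, end) word spans to labels from the
    target-side parse (a simplified, hypothetical representation).
    """
    if span in constituents:                       # exact constituent match
        return constituents[span]
    start, end = span
    n = max(e for _, e in constituents)            # sentence length
    for mid in range(start + 1, end):              # A+B: two adjacent constituents
        a, b = constituents.get((start, mid)), constituents.get((mid, end))
        if a and b:
            return a + "+" + b
    for right in range(end + 1, n + 1):            # A/B: A missing B on its right
        a, b = constituents.get((start, right)), constituents.get((end, right))
        if a and b:
            return a + "/" + b
    for left in range(start):                      # A missing B on its left (B\A)
        a, b = constituents.get((left, end)), constituents.get((left, start))
        if a and b:
            return b + "\\" + a
    return "X"                                     # generic Hiero fallback

# Hypothetical target-side parse spans for a 5-word sentence
cons = {(0, 1): "DT", (1, 2): "NN", (0, 2): "NP", (2, 3): "VBD",
        (3, 4): "DT", (4, 5): "NN", (3, 5): "NP", (2, 5): "VP", (0, 5): "S"}
# samt_label((0, 2), cons) == "NP"; samt_label((0, 4), cons) == "S/NN"
```

Spans that match none of the patterns keep the generic "X" – exactly the fragmentation-vs-coverage trade-off the Preference Grammars work addresses.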
Syntax-Augmented Hierarchical Model
- Example: (figure omitted)
Tree-to-Tree: Stat-XFER
- Developed by Lavie, Ambati and Parlikar in 2007
- Goal: extract linguistically-supported syntactic phrase-pairs and synchronous transfer rules automatically from parsed parallel corpora
- Representation: synchronous CFG rules with constituent labels, POS tags or lexical items on the RHS of rules. Syntax-labeled phrases are fully-lexicalized S-CFG rules.
- Acquisition scenario:
  - The parallel corpus is word-aligned using GIZA++ and symmetricized
  - Phrase-structure parses for the source and/or target language of each parallel sentence are obtained using monolingual parsers
Transfer Rule Formalism
;; SL: the old man, TL: ha-ish ha-zaqen
NP::NP                           ; type information
[DET ADJ N] -> [DET N DET ADJ]   ; part-of-speech/constituent sequences
(
 ;; Alignments
 (X1::Y1) (X1::Y3) (X2::Y4) (X3::Y2)
 ;; x-side constraints
 ((X1 AGR) = *3-SING)
 ((X1 DEF) = *DEF)
 ((X3 AGR) = *3-SING)
 ((X3 COUNT) = +)
 ;; y-side constraints
 ((Y1 DEF) = *DEF)
 ((Y3 DEF) = *DEF)
 ((Y2 AGR) = *3-SING)
 ((Y2 GENDER) = (Y4 GENDER))
 ;; xy-constraints, e.g.
 ((Y1 AGR) = (X1 AGR))
)
Translation Lexicon: French-to-English Examples
DET::DET |: ["le"] -> ["the"]
( (X1::Y1) )
N::N |: ["principes"] -> ["principles"]
( (X1::Y1) )
N::N |: ["respect"] -> ["accordance"]
( (X1::Y1) )
Prep::Prep |: ["dans"] -> ["in"]
( (X1::Y1) )
NP::NP |: ["le respect"] -> ["accordance"]
( )
PP::PP |: ["dans le respect"] -> ["in accordance"]
( )
PP::PP |: ["des principes"] -> ["with the principles"]
( )
French-English Transfer Grammar Example Rules (Automatically acquired)

{PP,24691}
;; SL: des principes
;; TL: with the principles
PP::PP ["des" N] -> ["with the" N]
( (X1::Y1) )

{PP,312}
;; SL: dans le respect des principes
;; TL: in accordance with the principles
PP::PP [Prep NP] -> [Prep NP]
( (X1::Y1) (X2::Y2) )
Syntax-driven Acquisition Process
- Overview of extraction process:
  1. Word-align the parallel corpus (GIZA++)
  2. Parse the sentences independently for both languages
  3. Tree-to-tree constituent alignment:
     a) Run our Constituent Aligner over the parsed sentence pairs
     b) Enhance alignments with additional constituent projections
  4. Extract all aligned constituents from the parallel trees
  5. Extract all derived synchronous transfer rules from the constituent-aligned parallel trees
  6. Construct a "database" of all extracted parallel constituents and synchronous rules with their frequencies, and model them statistically (assign them MLE probabilities)
PFA Constituent Node Aligner
- Input: a bilingual pair of parsed and word-aligned sentences
- Goal: find all sub-sentential constituent alignments between the two trees which are translation equivalents of each other
- Equivalence constraint: a pair of constituents <S, T> are considered translation equivalents if:
  - All words in the yield of <S> are aligned only to words in the yield of <T> (and vice versa)
  - If <S> has a sub-constituent <S1> that is aligned to <T1>, then <T1> must be a sub-constituent of <T> (and vice versa)
- The algorithm is a bottom-up process starting from the word level, marking nodes that satisfy the constraints
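The first equivalence constraint can be checked directly from the word alignment: no link may cross the boundary of the candidate <S, T> pair. A minimal sketch, with yields represented as index sets (an assumed encoding, not the aligner's actual data structures):

```python
def yields_consistent(s_yield, t_yield, alignment):
    """First PFA equivalence constraint: every word-alignment link lies
    either entirely inside the <S, T> pair of yields or entirely outside it.

    s_yield, t_yield: sets of word positions covered by the two constituents.
    alignment: set of (source_pos, target_pos) word-alignment links.
    """
    return all((s in s_yield) == (t in t_yield) for s, t in alignment)

# Toy example: source "je ne dors pas", target "i do not sleep"
links = {(0, 0), (1, 2), (3, 2), (2, 3)}   # je->i, ne->not, pas->not, dors->sleep
# "ne dors pas" vs "do not sleep": no link crosses the boundary -> consistent
```

The bottom-up pass applies this check (plus the nesting constraint on previously aligned sub-constituents) to every candidate node pair.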
PFA Node Alignment Algorithm Example
- Words don't have to align one-to-one
- Constituent labels can be different in each language
- Tree structures can be highly divergent
PFA Node Alignment Algorithm Example
- The aligner uses a clever arithmetic manipulation to enforce the equivalence constraints
- The resulting aligned nodes are highlighted in the figure (omitted)
PFA Node Alignment Algorithm Example
- Extraction of phrases:
  - Get the yields of the aligned nodes and add them to a phrase table, tagged with syntactic categories on both source and target sides
  - Example: NP # NP :: 澳洲 # Australia
PFA Node Alignment Algorithm Example
All phrases from this tree pair:
1. IP # S :: 澳洲 是 与 北韩 有 邦交 的 少数 国家 之一 。 # Australia is one of the few countries that have diplomatic relations with North Korea .
2. VP # VP :: 是 与 北韩 有 邦交 的 少数 国家 之一 # is one of the few countries that have diplomatic relations with North Korea
3. NP # NP :: 与 北韩 有 邦交 的 少数 国家 之一 # one of the few countries that have diplomatic relations with North Korea
4. VP # VP :: 与 北韩 有 邦交 # have diplomatic relations with North Korea
5. NP # NP :: 邦交 # diplomatic relations
6. NP # NP :: 北韩 # North Korea
7. NP # NP :: 澳洲 # Australia
Further Improvements
- The Tree-to-Tree (T2T) method is high precision but suffers from low recall
- Alternative: Tree-to-String (T2S) methods (e.g. [Galley et al., 2006]) use a tree on ONE side and project the nodes based on word alignments
  - High recall, but lower precision
- Recent work by Vamshi Ambati [Ambati and Lavie, 2008]: combine both methods (T2T*) by seeding with the T2T correspondences and then adding in additional consistent projected nodes from the T2S method
  - Can be viewed as restructuring the target tree to be maximally isomorphic to the source tree
  - Produces richer and more accurate syntactic phrase tables that improve translation quality (versus T2T and T2S)
Extracted Syntactic Phrases

T2S phrase table:
English                      French
The principles               Principes
With the principles          Principes
Accordance with the…         Respect des principes
Accordance                   Respect
In accordance with the…      Dans le respect des principes
Is all in accordance with…   Tout ceci dans le respect…
This                         et

T2T phrase table:
English                      French
The principles               Principes
With the principles          des Principes
Accordance                   Respect

T2T* phrase table:
English                      French
The principles               Principes
With the principles          des Principes
Accordance with the…         Respect des principes
Accordance                   Respect
In accordance with the…      Dans le respect des principes
Is all in accordance with…   Tout ceci dans le respect…
This                         et
Comparative Results: French-to-English
- MT experimental setup:
  - Dev set: 600 sentences, WMT 2006 data, 1 reference
  - Test set: 2000 sentences, WMT 2007 data, 1 reference
  - NO transfer rules, Stat-XFER monotonic decoder
  - SALM language model (4M words)
- (Results table omitted)
Transfer Rule Acquisition
- Input: constituent-aligned parallel trees
- Idea: aligned nodes act as possible decomposition points of the parallel trees
  - The sub-trees of any aligned pair of nodes can be broken apart at any lower-level aligned nodes, creating an inventory of "tree-fragment" correspondences
  - Synchronous "tree-frags" can be converted into synchronous rules
- Algorithm:
  - Find all possible tree-frag decompositions from the node-aligned trees
  - "Flatten" the tree-frags into synchronous CFG rules
Rule Extraction Algorithm
- Sub-treelet extraction: extract sub-tree segments, including the synchronous alignment information in the target tree. All the sub-trees and the super-tree are extracted.
Rule Extraction Algorithm
- Flat rule creation: each of the treelet pairs is flattened to create a rule in the Stat-XFER formalism. There are four major parts to the rule:
  1. Type of the rule: source- and target-side type information
  2. Constituent sequence of the synchronous flat rule
  3. Alignment information of the constituents
  4. Constraints in the rule (currently not extracted)
Rule Extraction Algorithm
- Flat rule creation – sample rule:
  IP::S [NP VP 。] -> [NP VP .]
  (
   ;; Alignments
   (X1::Y1)
   (X2::Y2)
   ;; Constraints
  )
Rule Extraction Algorithm
- Flat rule creation – sample rule:
  NP::NP [VP 的 CD 国家 之一] -> [one of the CD countries that VP]
  (
   ;; Alignments
   (X1::Y7)
   (X3::Y4)
  )
- Note:
  1. Any one-to-one aligned words are elevated to part-of-speech in the flat rule
  2. Any non-aligned words on either the source or target side remain lexicalized
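The flattening step above can be sketched as string assembly over a simplified treelet representation. The tuple encoding of lexical items and the 0-based alignment list are assumptions for illustration, not the actual Stat-XFER data structures:

```python
def flatten(src_children, tgt_children, links):
    """Flatten an aligned treelet pair into a Stat-XFER-style flat rule.

    src_children / tgt_children: RHS symbols, either constituent labels
    (e.g. "NP") or lexical items encoded as ("lex", word).
    links: list of (src_pos, tgt_pos) 0-based positions of aligned
    constituents in the two RHS lists.
    """
    def side(children):
        # Labels stay bare; lexical items are quoted, as in the rule examples
        return " ".join(c if isinstance(c, str) else f'"{c[1]}"' for c in children)
    align = " ".join(f"(X{s + 1}::Y{t + 1})" for s, t in sorted(links))
    return f"[{side(src_children)}] -> [{side(tgt_children)}] ( ;; Alignments {align} )"

rule = flatten(["NP", "VP", ("lex", "。")], ["NP", "VP", ("lex", ".")], [(0, 0), (1, 1)])
# -> '[NP VP "。"] -> [NP VP "."] ( ;; Alignments (X1::Y1) (X2::Y2) )'
```

Constraints (part 4 of the formalism) would be emitted inside the same parenthesized block, but – as the slide notes – they are not currently extracted.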
Rule Extraction Algorithm
All rules extracted:

IP::S [NP VP 。] -> [NP VP .]
( (*score* 0.5)
  ;; Alignments
  (X1::Y1) (X2::Y2) )

NP::NP [VP 的 CD 国家 之一] -> [one of the CD countries that VP]
( (*score* 0.5)
  ;; Alignments
  (X1::Y7) (X3::Y4) )

VP::VP [VC NP] -> [VBZ NP]
( (*score* 0.5)
  ;; Alignments
  (X1::Y1) (X2::Y2) )

VP::VP [与 NP VE NP] -> [VBP NP with NP]
( (*score* 0.5)
  ;; Alignments
  (X2::Y4) (X3::Y1) (X4::Y2) )

NP::NP ["北韩"] -> ["North" "Korea"]
( ; many-to-one alignment is a phrase )

NP::NP [NR] -> [NNP]
( (*score* 0.5)
  ;; Alignments
  (X1::Y1) )
Some Chinese XFER Rules

;; SL::(2,4) 对 台 贸易
;; TL::(3,5) trade to taiwan
;; Score::22
{NP,1045537}
NP::NP [PP NP] -> [NP PP]
( (*score* 0.916666667)
  (X2::Y1) (X1::Y2) )

;; SL::(2,7) 直接 提到 伟哥 的 广告
;; TL::(1,7) commercials that directly mention the name viagra
;; Score::5
{NP,1017929}
NP::NP [VP "的" NP] -> [NP "that" VP]
( (*score* 0.11111111)
  (X3::Y1) (X1::Y3) )

;; SL::(4,14) 有 一 至 多 个 高 新 技术 项目 或 产品
;; TL::(3,14) has one or more new , high level technology projects or products
;; Score::4
{VP,1021811}
VP::VP ["有" NP] -> ["has" NP]
( (*score* 0.1)
  (X2::Y2) )
DCU Tree-bank Alignment Method
- Proposed by Tinsley, Zhechev et al. in 2007
- Main idea:
  - Focus on the parallel treebank scenario: parallel sentences annotated with constituent parse-trees for both sides (obtained by parsing)
  - Same notion and idea as Lavie et al.: find sub-sentential constituent nodes across the two trees that are translation equivalents
- Main difference: does not depend on the Viterbi word alignments
  - Instead, it uses the lexical probabilities (obtained by GIZA++) to score all possible node-to-node alignments and incrementally grow the set of aligned nodes
- Various types of rules can then be extracted (e.g. Stat-XFER rules)
- Overcomes some of the problems due to incorrect and sparse word alignments
- Produces surprisingly different collections of rules than the Stat-XFER method
String-to-Tree: Galley et al. (GHKM)
- Proposed by Galley et al. in 2004 and improved in 2006
- Idea: model full syntactic structure on the target side only, in order to produce translations that are more grammatical
- Representation: synchronous hierarchical strings on the source side and their corresponding tree fragments on the target side
- Example:
  ne VB pas → (VP (AUX does) (RB not) x2)
String-to-Tree: Galley et al. (GHKM)
- Overview of extraction process:
  1. Obtain symmetricized Viterbi word alignments for the parallel sentences
  2. Parse the "strong" side of the parallel data (i.e. English)
  3. Find all constituent nodes in the target-language tree that have consistent word alignments to strings in the source language
  4. Treat these as "decomposition" points: extract tree fragments on the target side along with the corresponding "gapped" string on the source side
  5. Labeling: for each "gap" that corresponds to a parse constituent C, label the gap as C
  6. Accumulate the collection of all such rules from the entire corpus along with their counts
  7. Model the rules statistically: initially used "standard" P(tgt|src) MLE scores; later also experimented with other scores, similar to SAMT
- Advanced modeling: extraction of composed rules, not just minimal rules
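The consistency test in step 3 is essentially the "closed span" condition familiar from phrase extraction, applied to tree nodes. A simplified sketch of that check, assuming yields and alignments are given as index sets (not GHKM's full frontier-set computation):

```python
def is_decomposition_point(node_tgt_yield, alignment):
    """A target-tree node can serve as a decomposition point if the source
    words aligned to its yield form a contiguous span that no target word
    outside the node aligns into (simplified consistency condition).

    node_tgt_yield: set of target word positions under the node.
    alignment: set of (source_pos, target_pos) word-alignment links.
    """
    inside = {s for s, t in alignment if t in node_tgt_yield}
    if not inside:
        return False                         # unaligned node: no span to cut
    span = set(range(min(inside), max(inside) + 1))
    outside = {s for s, t in alignment if t not in node_tgt_yield}
    return not (span & outside)              # closure of the span is untouched

# Toy example: source "je ne dors pas", target tree over "i do not sleep"
links = {(0, 0), (1, 2), (3, 2), (2, 3)}    # je->i, ne->not, pas->not, dors->sleep
```

Here the VP over "do not sleep" is a valid cut, while the RB "not" is not, because "dors" falls inside the ne…pas span – exactly the situation the "ne VB pas" gapped rule is extracted from.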
Tree Transduction Models
- Originally proposed by Yamada and Knight, 2001. Influenced later work by Gildea et al. on Tree-to-String models
- Conceptually simpler than most other models:
  - Learn finite-state transductions on source-language parse trees in order to map them into well-ordered and well-formed target sentences, based on the Viterbi word alignments
- Representation: simple local transformations on tree structure, given contextual structure in the tree:
  - Transduce leaf words in the tree from the source to the target language
  - Delete a leaf word or a sub-tree in a given context
  - Insert a leaf word or a sub-tree in a given context
  - Transpose (invert the order of) two sub-trees in a given context
  - [Advanced model by Gildea: duplicate and insert a sub-tree]
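The operations above can be sketched as deterministic tree rewrites. In the actual model each operation is applied stochastically, with probabilities trained via EM; the table-driven form, tree encoding, labels and toy lexicon below are illustrative assumptions (delete is omitted for brevity):

```python
def transduce(tree, lexicon, reorder, insertions):
    """Apply translate, transpose and insert operations to a source parse tree.

    tree: ("LABEL", [children]) for internal nodes, a plain string for leaves.
    lexicon: word -> translated word        (translate operation)
    reorder: label -> child permutation     (transpose operation)
    insertions: label -> leftmost new word  (insert operation)
    """
    if isinstance(tree, str):
        return lexicon.get(tree, tree)           # translate a leaf word
    label, children = tree
    kids = [transduce(c, lexicon, reorder, insertions) for c in children]
    if label in reorder:
        kids = [kids[i] for i in reorder[label]]  # transpose sub-trees
    if label in insertions:
        kids = [insertions[label]] + kids         # insert a leaf word
    return (label, kids)

def leaves(tree):
    """Read off the transduced target sentence."""
    if isinstance(tree, str):
        return [tree]
    return [w for c in tree[1] for w in leaves(c)]

# Toy SVO -> SOV example with an invented lexicon
tree = ("S", [("NP", ["he"]), ("VP", [("V", ["adores"]), ("NP", ["music"])])])
out = transduce(tree, {"he": "kare", "adores": "daisuki", "music": "ongaku"},
                {"VP": [1, 0]}, {})
# leaves(out) == ["kare", "ongaku", "daisuki"]
out2 = transduce(("NP", ["music"]), {"music": "ongaku"}, {}, {"NP": "no"})
# leaves(out2) == ["no", "ongaku"]
```

In the real model the choice of permutation, insertion and translation at each node is drawn from learned distributions conditioned on the node's context, and decoding searches over these choices.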
Tree Transduction Models
- Main issues/problems:
  - Some complex reorderings and correspondences cannot be modeled using these simple tree transductions
  - Highly sensitive to errors in the source-language parse tree and to word-alignment errors
Summary
- A variety of structure- and syntax-based models: string-to-tree, tree-to-string, tree-to-tree
- Different models utilize different structural annotations on training resources and depend on different independent components (parsers, word alignments)
- Different model acquisition processes from parallel data, but several recurring themes:
  - Finding sub-sentential translation equivalents and relating them via hierarchical and/or syntax-based structure
  - Statistical modeling of the massive collections of rules acquired from the parallel data
Major Challenges
- Sparse coverage: the acquired syntax-based models are often much sparser in coverage than non-syntactic phrases
  - Because they apply additional hard constraints beyond word alignment as evidence of translation equivalence
  - Because the models fragment the data – rules are observed far fewer times in the training data, making them more difficult to model statistically
  - Consequently, "pure" syntactic models often lag behind phrase-based models in translation performance – observed and learned again and again by different groups (including our own)
  - This motivates approaches that integrate syntax-based models with phrase-based models
- Overcoming pipeline errors:
  - Adding independent components (parser output, Viterbi word alignments) introduces cumulative errors that are hard to overcome
  - Various approaches try to get around these problems
  - Also recent work on "syntax-aware" word alignment and "bilingual-aware" parsing
Major Challenges
- Optimizing for structure granularity and labels:
  - Syntactic structure in MT is heavily based on Penn Treebank structures and labels (POS and constituents) – are these needed and optimal for MT, even for MT into English?
  - Approaches range from a single abstract hierarchical "X" label to fully lexicalized constituent labels. What is optimal? How do we answer this question?
  - Alternative approaches (i.e. ITGs) aim to overcome this problem by unsupervised inference of the structure from the data
- Direct contrast and comparison of alternative approaches is extremely difficult:
  - Decoding with these syntactic models is highly complex and computationally intensive
  - Different groups/approaches develop their own decoders
  - Hard to compare anything beyond BLEU (or other metric) scores
- Different groups continue to pursue different approaches – this is at the forefront of current research in statistical MT
References
- Vamshi Ambati & Alon Lavie (2008): Improving syntax driven translation models by re-structuring divergent and non-isomorphic parse tree structures. AMTA-2008: Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, Waikiki, Hawai'i, 21-25 October 2008; pp. 235-244.
- David Chiang (2005): A hierarchical phrase-based model for statistical machine translation. ACL-2005: 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, 25-30 June 2005; pp. 263-270.
- Michel Galley, Mark Hopkins, Kevin Knight & Daniel Marcu (2004): What's in a translation rule? HLT-NAACL 2004: Human Language Technology Conference and North American Chapter of the ACL Annual Meeting, Boston, May 2-7, 2004; pp. 273-280.
- Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang & Ignacio Thayer (2006): Scalable inference and training of context-rich syntactic translation models. COLING-ACL 2006: 21st International Conference on Computational Linguistics and 44th Annual Meeting of the ACL, Sydney, 17-21 July 2006; pp. 961-968.
- Alon Lavie, Alok Parlikar & Vamshi Ambati (2008): Syntax-driven learning of sub-sentential translation equivalents and translation rules from parsed parallel corpora. Second ACL Workshop on Syntax and Structure in Statistical Translation (SSST-2), 20 June 2008, Columbus, Ohio; pp. 87-95.
- John Tinsley, Ventsislav Zhechev, Mary Hearne & Andy Way (2007): Robust language pair-independent sub-tree alignment. MT Summit XI, 10-14 September 2007, Copenhagen; pp. 467-474.
- Ashish Venugopal & Andreas Zollmann (2007): Hierarchical and syntax structured MT. First Machine Translation Marathon, Edinburgh, April 16-20, 2007.
- Kenji Yamada & Kevin Knight (2001): A syntax-based statistical translation model. ACL-EACL-2001: 39th Annual Meeting of the ACL and 10th Conference of the European Chapter, Toulouse, July 9-11, 2001; pp. 523-530.
- Andreas Zollmann & Ashish Venugopal (2006): Syntax augmented machine translation via chart parsing. HLT-NAACL 2006: Proceedings of the Workshop on Statistical Machine Translation, New York, June 2006; pp. 138-141.