Reference Resolution: Extensions
CMSC 35900-1, Discourse and Dialogue
October 2, 2006

Problem 1
• Coreference is a rare relation
  – Skewed class distributions (~2% positive instances)
  – Remedy: remove some negative instances

[Diagram: NPs NP1–NP9 in sequence, with NP4 marked as the farthest antecedent]
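The negative-instance filtering idea above can be sketched as follows. This is a minimal illustration in the style of Soon et al.-type instance creation (negatives drawn only from NPs intervening between an anaphor and its closest gold antecedent); the data layout (`nps` list, `antecedent_of` dict) is an illustrative assumption, not notation from the slides.

```python
def make_training_instances(nps, antecedent_of):
    """Create pairwise coreference training instances.

    nps: list of NP identifiers in textual order.
    antecedent_of: dict mapping an anaphoric NP to the list index of its
    closest preceding coreferent NP (gold annotation).

    Negative instances are drawn only from the NPs *between* the anaphor
    and its closest antecedent, not from the entire preceding context,
    which trims the skewed negative class.
    """
    instances = []
    for j, anaphor in enumerate(nps):
        if anaphor not in antecedent_of:
            continue
        i = antecedent_of[anaphor]
        instances.append((nps[i], anaphor, True))        # positive pair
        for k in range(i + 1, j):                        # intervening NPs only
            instances.append((nps[k], anaphor, False))   # negative pairs
    return instances
```

With five NPs where NP5's closest antecedent is NP2, this yields one positive pair and only two negatives (NP3 and NP4), instead of four negatives against the whole left context.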

Problem 2
• Coreference is a discourse-level problem
  – Different solutions for different types of NPs
    • Proper names: string matching and aliasing
  – Inclusion of "hard" positive training instances
  – Positive example selection: selects easy positive training instances (cf. Harabagiu et al. (2001))

Example: Queen Elizabeth set about transforming her husband, King George VI, into a viable monarch. Logue, the renowned speech therapist, was summoned to help the King overcome his speech impediment…

Problem 3
• Coreference is an equivalence relation
  – Loss of transitivity in pairwise classification
  – Need to tighten the connection between classification and clustering
  – Prune learned rules w.r.t. the clustering-level coreference scoring function

Example: [Queen Elizabeth] set about transforming [her] [husband], … (coref? not coref?)
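Since pairwise classification loses transitivity, one standard way to restore the equivalence relation is to take the transitive closure of the positive pairwise decisions during clustering. A minimal sketch using union-find (the mention strings and the `pairwise_coref` callback are illustrative assumptions, not the slides' actual classifier):

```python
def coref_clusters(mentions, pairwise_coref):
    """Turn pairwise coreference decisions into equivalence classes.

    mentions: list of mention identifiers.
    pairwise_coref: callable(a, b) -> bool, the pairwise classifier.
    Union-find merging enforces transitivity even when the classifier's
    pairwise decisions are inconsistent.
    """
    parent = {m: m for m in mentions}

    def find(m):
        while parent[m] != m:
            parent[m] = parent[parent[m]]   # path halving
            m = parent[m]
        return m

    for i, a in enumerate(mentions):
        for b in mentions[i + 1:]:
            if pairwise_coref(a, b):
                parent[find(b)] = find(a)   # merge the two clusters

    clusters = {}
    for m in mentions:
        clusters.setdefault(find(m), []).append(m)
    return list(clusters.values())
```

Note that if the classifier links a–b and b–c but not a–c, all three still land in one cluster: the clustering step, not the classifier, guarantees the equivalence relation.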

Weakly Supervised Learning
• Exploit a small pool of labeled training data
  – Plus a larger pool of unlabeled data
• Single-view multi-learner co-training
  – Two different learning algorithms, same feature set
  – Initially train on the small hand-labeled set
  – Each classifier labels unlabeled instances for the other classifier
  – Select new examples for training: high confidence, labeled differently by the other classifier, etc.
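The co-training loop above can be sketched with two deliberately different toy learners over 1-D data. The learners, the data representation, and plain high-confidence selection are all simplifying assumptions for illustration; the slide's variant additionally prefers examples the other classifier labels differently.

```python
class CentroidLearner:
    """Classify by the nearer class centroid; confidence = distance margin."""
    def fit(self, xs, ys):
        self.c0 = sum(x for x, y in zip(xs, ys) if y == 0) / max(1, ys.count(0))
        self.c1 = sum(x for x, y in zip(xs, ys) if y == 1) / max(1, ys.count(1))
    def predict(self, x):
        d0, d1 = abs(x - self.c0), abs(x - self.c1)
        return (0 if d0 <= d1 else 1), abs(d0 - d1)

class NearestNeighborLearner:
    """1-nearest-neighbour; confidence = inverse distance to the neighbour."""
    def fit(self, xs, ys):
        self.data = list(zip(xs, ys))
    def predict(self, x):
        nx, ny = min(self.data, key=lambda p: abs(p[0] - x))
        return ny, 1.0 / (1e-9 + abs(nx - x))

def co_train(l1, l2, labeled, unlabeled, rounds=5, per_round=2):
    """Each learner labels its most confident pool examples and hands
    them to the *other* learner's training set (single view: both see
    the same features, only the algorithms differ)."""
    pool = list(unlabeled)
    train = {id(l1): list(labeled), id(l2): list(labeled)}
    for _ in range(rounds):
        for src, dst in ((l1, l2), (l2, l1)):
            if not pool:
                return train
            xs, ys = zip(*train[id(src)])
            src.fit(list(xs), list(ys))
            scored = sorted(pool, key=lambda x: -src.predict(x)[1])
            for x in scored[:per_round]:
                label, _ = src.predict(x)
                train[id(dst)].append((x, label))
                pool.remove(x)
    return train
```

Starting from only two labeled points, each learner's confident guesses seed the other's training set; with well-separated data both end up with correctly labeled pools.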

Weakly Supervised Learning
• Self-training:
  – Small pool of labeled training data
  – Train one supervised classifier
    • Decision trees with bagging
  – Classify the previously unlabeled data
  – Use some newly labeled examples as training data
    • High confidence, etc.
  – Retrain the classifier with the newly labeled examples
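The self-training loop can be sketched similarly. Here the single supervised classifier is a toy per-class centroid model rather than bagged decision trees (a simplifying assumption), but the loop structure — train, label the pool, absorb only high-confidence predictions, retrain — matches the slide.

```python
def fit_centroids(labeled):
    """Toy stand-in for the supervised classifier: per-class centroids."""
    by_class = {}
    for x, y in labeled:
        by_class.setdefault(y, []).append(x)
    return {y: sum(v) / len(v) for y, v in by_class.items()}

def predict_centroid(model, x):
    """Return (label, confidence); confidence is the distance margin
    between the best and second-best class centroid."""
    ranked = sorted(model, key=lambda y: abs(x - model[y]))
    label = ranked[0]
    margin = (abs(x - model[ranked[1]]) - abs(x - model[label])
              if len(ranked) > 1 else float("inf"))
    return label, margin

def self_train(labeled, unlabeled, threshold, rounds=10):
    """Repeatedly retrain one classifier, absorbing only predictions
    whose confidence clears the threshold."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        model = fit_centroids(labeled)                      # (re)train
        preds = [(x,) + predict_centroid(model, x) for x in pool]
        new = [(x, y) for x, y, conf in preds if conf >= threshold]
        if not new:
            break                                           # nothing confident left
        labeled.extend(new)
        absorbed = {x for x, _ in new}
        pool = [x for x in pool if x not in absorbed]
    return labeled
```

The threshold is the key knob: too low and label noise compounds across rounds, too high and the pool is never absorbed.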

Effectiveness
• Supervised learning approaches
  – Performance comparable to knowledge-based approaches
• Weakly supervised approaches
  – Decent effectiveness, but still lags supervised learning
  – Dramatically less labeled training data: 1K vs. 500K instances
  – Multi-learner co-training peaks (65%), then overtrains
  – Self-training improves consistently (to over 65%)

Reference Resolution: Extensions
• Cross-document coreference (Bagga & Baldwin 1998)
  – Breaks "the document boundary"
  – Question: is "John Smith" in document A the same person as "John Smith" in document B?
  – Approach: integrate within-document coreference with Vector Space Model similarity

Cross-document Coreference
• Run within-document coreference (CAMP)
  – Produces chains of all terms used to refer to an entity
• Extract all sentences containing a reference to the entity
  – Yields a pseudo per-entity summary for each document
• Use Vector Space Model (VSM) distance to compute similarity between summaries
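The summary-comparison step can be sketched with a plain term-frequency vector space model. Note this is a simplification: the original system weighted the terms of the CAMP-derived summaries rather than using raw counts, and the threshold value below is an illustrative assumption.

```python
import math
from collections import Counter

def cosine_similarity(summary_a, summary_b):
    """Cosine similarity between bag-of-words vectors of two
    pseudo per-entity summaries (raw term frequencies)."""
    va = Counter(summary_a.lower().split())
    vb = Counter(summary_b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.hypot(*va.values()) * math.hypot(*vb.values())
    return dot / norm if norm else 0.0

def same_entity(summary_a, summary_b, threshold=0.5):
    """Link the two documents' 'John Smith' entities when their
    summaries are similar enough (threshold is illustrative)."""
    return cosine_similarity(summary_a, summary_b) >= threshold
```

Two articles whose John Smith summaries share vocabulary (e.g. "chairman", "GM") score high and get linked; summaries about unrelated John Smiths share little vocabulary and stay apart.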

Cross-document Coreference
• Experiments: 197 NYT articles referring to "John Smith"
  – 35 different people; 24 of them with 1 article each
• With CAMP: Precision 92%, Recall 78%
• Without CAMP: Precision 90%, Recall 76%
• Pure named-entity matching: Precision 23%, Recall 100%

Conclusions
• Coreference establishes coherence
• Reference resolution depends on coherence
• Variety of approaches:
  – Syntactic constraints, recency, frequency, role
• Similar effectiveness, different requirements
• Coreference can enable summarization within and across documents (and languages!)

Challenges
• Alternative approaches to reference resolution
  – Different constraints, rankings, combinations
• Different types of referent
  – Speech acts, propositions, actions, events
  – "Inferrables", e.g. car -> door, hood, trunk, …
  – Discontinuous sets
  – Generics
  – Time

Discourse Structure Theories
Discourse & Dialogue, CMSC 35900-1
October 4, 2006

Roadmap
• Goals of discourse structure models
  – Limitations of early approaches
• Models of discourse structure
  – Attention & Intentions (Grosz & Sidner 1986)
  – Rhetorical Structure Theory (Mann & Thompson 1987)
• Contrasts, constraints & conclusions

Why Model Discourse Structure? (Theoretical)
• Discourse is not just its constituent utterances
  – Utterances create joint meaning
  – Context guides the interpretation of constituents
  – How? What are the units? How do they combine to establish meaning?
• How can we derive structure from surface forms?
  – What makes discourse coherent vs. incoherent?
  – How does structure influence reference resolution?

Why Model Discourse Structure? (Applied)
• Design better summarization and understanding systems
• Improve speech synthesis
  – Influenced by structure
• Develop approaches for discourse generation
• Design dialogue agents for task interaction
• Guide reference resolution

Early Discourse Models
• Schemas & plans (McKeown, Reichman, Litman & Allen)
  – Task/situation model = discourse model
  – Specific -> general: "restaurant" -> AI planning
• Topic/focus theories (Grosz 76, Sidner 76)
  – Reference structure = discourse structure
• Speech act theories
  – Single-utterance intentions vs. extended discourse

Discourse Models: Common Features
• Hierarchical, sequential structure applied to subunits
  – Discourse "segments"
  – Need to detect and interpret them
• Referring expressions provide coherence
  – Explain and link
• Meaning of a discourse is more than that of its component utterances
• Meaning of units depends on context

Earlier Models
• Issues:
  – Conflate different aspects of discourse
    • Task plan vs. discourse plan
  – Ignore aspects of discourse
    • Goals & intentions vs. focus
  – Overspecific
    • Fixed plan, schema, or relation inventory

Attention, Intentions and the Structure of Discourse
• Grosz & Sidner (1986)
• Goals:
  – Integrate approaches to focus (reference resolution), plan/task structure, discourse structure, and goals
• Three-part model:
  – Linguistic structure (utterances)
  – Attentional structure (focus, reference)
  – Intentional structure (plans, purposes)

Linguistic Structure
• Utterances group into discourse segments
  – Hierarchical, not necessarily contiguous
  – Not strictly decompositional
• Two-way interaction:
  – Utterances define structure
    • Cue phrases mark segment boundaries: but, okay, fine, incidentally
  – Structure guides interpretation
    • Reference
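The cue-phrase signal can be sketched as a crude boundary detector. The inventory below contains only the four examples from the slide, and real cue phrases are ambiguous between discourse and sentential uses, so this is a rough heuristic rather than a segmentation algorithm.

```python
CUE_PHRASES = {"but", "okay", "fine", "incidentally"}  # examples from the slide

def candidate_boundaries(utterances):
    """Return indices of utterances that open with a cue phrase,
    i.e. candidate discourse-segment boundaries."""
    boundaries = []
    for i, utt in enumerate(utterances):
        words = utt.split()
        if words and words[0].strip(",.!").lower() in CUE_PHRASES:
            boundaries.append(i)
    return boundaries
```

A real segmenter would combine this surface cue with prosody and the intentional structure rather than trusting the lexical match alone.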

Intentional Structure
• Discourse & participants have an overall purpose
  – Discourse segments have purposes (DP/DSP)
    • These contribute to the overall purpose
  – The main DP/DSP is intended to be recognized

Intentional Structure: Relations
• Two relations between purposes
  – Dominance: DSP1 dominates DSP2 if satisfying DSP2 contributes to achieving DSP1
  – Satisfaction-precedence: DSP1 must be satisfied before DSP2
• Purposes:
  – Intend that someone know something, do something, believe something, etc.
  – An open-ended set
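Satisfaction-precedence induces a partial order over DSPs, so one valid satisfaction order can be recovered by topological sort. A minimal sketch (the DSP names are illustrative placeholders):

```python
from collections import deque

def satisfaction_order(dsps, precedes):
    """Return one total order consistent with satisfaction-precedence.

    (a, b) in `precedes` means DSP a must be satisfied before DSP b.
    Standard Kahn's algorithm; the result is shorter than `dsps`
    if the precedence constraints contain a cycle.
    """
    indegree = {d: 0 for d in dsps}
    successors = {d: [] for d in dsps}
    for a, b in precedes:
        successors[a].append(b)
        indegree[b] += 1
    ready = deque(d for d in dsps if indegree[d] == 0)
    order = []
    while ready:
        d = ready.popleft()
        order.append(d)
        for b in successors[d]:
            indegree[b] -= 1
            if indegree[b] == 0:
                ready.append(b)
    return order
```

Dominance would be modeled separately, as a tree or forest over the same DSPs; only satisfaction-precedence constrains temporal order.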

Attentional State
• Captures the focus of attention in discourse
  – Incremental
• Focus spaces
  – Include entities salient/evoked in the discourse
  – Include the current DSP
• Stack-structured
  – Higher -> more salient; lower spaces still accessible
  – Push: a new segment contributes to the previous DSP
  – Pop: a segment contributes to a more dominant DSP
  – Tied to the intentional structure
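The stack discipline above can be sketched directly. The DSP labels and entities in the usage below come from the interruption example later in these slides (DSP1: John/groceries, DSP2: kids), but the class names and API are illustrative assumptions.

```python
class FocusSpace:
    """Entities salient in a segment, plus the segment's purpose (DSP)."""
    def __init__(self, dsp, entities=()):
        self.dsp = dsp
        self.entities = set(entities)

class AttentionalState:
    """Stack of focus spaces; pushes and pops are driven by the
    intentional structure (dominance relations between DSPs)."""
    def __init__(self):
        self._stack = []

    def push(self, dsp, entities=()):
        # A new segment opens, contributing to the previous DSP.
        self._stack.append(FocusSpace(dsp, entities))

    def pop(self):
        # The segment closes; control returns to a more dominant DSP.
        return self._stack.pop()

    def current_dsp(self):
        return self._stack[-1].dsp if self._stack else None

    def salient_entities(self):
        """Entities from top (most salient) to bottom (still accessible)."""
        return [e for space in reversed(self._stack)
                for e in sorted(space.entities)]
```

During an interruption, the interrupting space sits on top, so its entities win the salience competition for pronoun resolution; popping it restores the interrupted segment's entities.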

Attentional State (cont'd)
• Focusing structure depends on the intentional structure: the relationships between DSPs determine pushes and pops from the stack
• Focusing structure coordinates the linguistic and intentional structures during processing
• Like the other two structures, the focusing structure evolves as the discourse proceeds

Discourse Examples
• Essay
• Task-oriented dialogue
  – The intentional structure is neither identical nor isomorphic to the general plan

0. The "movies" are so attractive to the great American public, especially to young people, that it is time to take careful thought about their effect on mind and morals.
1. Ought any parent to permit his children to attend a moving picture show often or without being quite certain of the show he permits them to see?
2. No one can deny, of course, that great educational and ethical gains may be made through the movies because of their astonishing vividness.
3. But the important fact to be determined is the total result of continuous and indiscriminate attendance on shows of this kind. Can it be other than harmful?
4. In the first place the character of the plays is seldom of the best.
5. One has only to read the ever-present "movie" billboard to see how cheap, melodramatic and vulgar most of the photoplays are.
6. Even the best plays, moreover, are bound to be exciting and over-emotional. Without spoken words, facial expression and gesture must carry the meaning: but only strong emotion or buffoonery can be represented through facial expression and gesture. The more reasonable and quiet aspects of life are necessarily neglected. How can our young people drink in through their eyes a continuous spectacle of intense and strained activity and feeling without harmful effects? Parents and teachers will do well to guard the young against overindulgence in the taste for the "movie".

H: 1. First you have to remove the flywheel.
R: 2. How do I remove the flywheel?
H: 3. First, loosen the screw, then pull it off.
R: 4. OK. 5. The tool I have is awkward. Is there another tool that I could use instead?
H: 6. Show me the tool you are using.
R: 7. OK.
H: 8. Are you sure you are using the right size key?
R: 9. I'll try some others. 10. I found an angle I can get at it. 11. The screw is loose, but I'm having trouble getting the flywheel off.
H: 12. Use the wheelpuller. Do you know how to use it?
R: 13. No.
H: 14. Do you know what it looks like?
R: 15. Yes.
H: 16. Show it to me please.
R: 17. OK.
H: 18. Good. Loosen the screw in the center and place the jaws around the hub of the flywheel, then tighten the screw onto the center of the shaft. The flywheel should slide off.

Processing Issues
• Intention recognition
  – What information can be used to recognize an intention?
  – At what point does that information become available?
• The overall processing module must be able to operate on partial information
• It must allow for incrementally constraining the range of possibilities on the basis of new information that becomes available as the segment progresses

• Information constraining the DSP:
  – Specific linguistic markers
  – Utterance-level intentions
  – General knowledge about actions and objects in the domain of discourse
• Applications of the theory:
  – Interruptions
    • Weak: not linked to the immediate DSP
    • Strong: not linked to any DSP
  – Cue words

Interruption
• John came by and left the groceries.
• Stop that, you kids!
• And I put them away after he left.

[Focus stack: DSP2 (kids) on top of DSP1 (John, groceries)]

Conclusions
• Generalizes approaches to task-oriented dialogue
  – Goal: domain independence
  – Broad, general, abstract model
• Accounts for interesting phenomena
  – Interruptions, returns, cue phrases

More Conclusions
• Asks more questions than it answers
• How do we implement these aspects of dialogue?
  – Is it remotely feasible?