Story Segmentation of Broadcast News
Mehrbod Sharifi (mehrbod@cs.columbia.edu)
Thanks to Andrew Rosenberg
~mehrbod/presentations/SSeg.Dec06.pdf
GALE (Global Autonomous Language Exploitation)
"… to absorb, analyze and interpret huge volumes of speech and text in multiple languages, eliminating the need for linguists and analysts and automatically providing relevant, distilled actionable information …"
- Transcription Engines (ASR)
- Translation Engines (MT)
- Distillation Engines (QA+IR)
http://projects.ldc.upenn.edu/gale/
http://www.darpa.mil/ipto/Programs/gale/
Task: Story Segmentation
Input:
- .sph: audio files from the TDT-4 corpus distributed by LDC
- .rttmx: output from other collaborators on the GALE project (all automated, one word per row)
  - Speaker boundaries (Chuck at ICSI)
  - ASR: words, start & end times, confidence, phone durations (Andreas at SRI/ICSI)
  - Sentence boundary probabilities (Dustin at UW)
  - Gold standard: annotated story boundaries
Output:
- .rttmx files with story boundaries (generated by a method that performs well on unseen data)
/n/squid/proj/gale1/AA/eng-tdt4/tdt4-eng-rttmx-12192005/README
Task: Story Segmentation
- Event: a specific thing that happens at a specific time and place, along with all necessary preconditions and unavoidable consequences. Example: when a "U.S. Marine jet sliced a funicular cable in Italy in February 1998", the cable car's crash to earth and the subsequent injuries were all unavoidable consequences and thus part of the same event.
- Topic: an event or activity, along with all directly related events and activities.
- Story: news stories may be of any length, even fewer than two independent clauses, as long as they constitute a complete, cohesive news report on a particular topic. Note that a single news story may discuss more than one related topic.
http://www.ldc.upenn.edu/Projects/TDT4/Annotation/annot_task_def_V1.4.pdf
Task: Story Segmentation
Example: 3898 words / 263 sentences / 26 stories (?: reject or low-confidence word)
1. ??? [headlines]
2. ?? good evening everyone... [report on war]... gillian findlay a.b.c. news
3. ? turning to politics... [election - Gore]... a.b.c. news
4. ?? this is ron claiborne... [election - Bush]... a.b.c. news
5. ??? as for the two other candidates... said the same
6. still ahead... [teaser]... camera man
7. this is world news... [commercials]... was a woman
8. turning to news overseas... [election]... no matter what its
9. just days after a deadly ferry sinking in greece... safety tests
~mehrbod/rttmx/eng/20001001_1830_1900_ABC_WNT.rttmx
~mehrbod/out/eng.ANC_WNT.txt
Task: Story Segmentation
How difficult is it?
- Topic vs. Story
- Segment classes:
  - New story
  - Teaser
  - Misc.
- Under-transcribed
- Error accumulated from previous processes
Current Approach - Summary
- Align story boundaries with sentence boundaries
- Extract sentence-level features:
  - Lexical
  - Acoustic
  - Speaker-dependent
- Train and evaluate a classifier (J48 decision tree or JRip rule learner)
http://www1.cs.columbia.edu/~amaxwell/pubs/storyseg-final-hlt.pdf
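The actual experiments use Weka; as a minimal pure-Python sketch of the classification step, a one-feature decision stump (the degenerate decision tree) learns the threshold on a single feature, here a hypothetical pause-length feature with toy values, that best separates story-initial sentences:

```python
# Sketch: a single-split "decision stump" standing in for the J48 tree.
# Feature values below are illustrative, not from the corpus.

def train_stump(values, labels):
    """Pick the threshold on one feature that minimizes training error
    for 'starts a new story' (1) vs 'same story' (0)."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(values)):
        errors = sum((v >= t) != bool(lab) for v, lab in zip(values, labels))
        if errors < best_err:
            best_t, best_err = t, errors
    return best_t

# Toy data: pause length (seconds) before each sentence.
pauses = [2.1, 0.1, 1.8, 0.2, 2.5, 0.3]
labels = [1, 0, 1, 0, 1, 0]

threshold = train_stump(pauses, labels)
print(threshold)  # -> 1.8: long pauses predict a story boundary
```

A real tree repeats this split selection recursively over all features; the stump only illustrates the top split.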
Current Approach - Features
- Lexical (*computed over various windows):
  - TextTiling*, LCSeg, keywords*, sentence position and length
- Acoustic:
  - Pitch and intensity: min, max, median, mean, std. dev., mean absolute slope
  - Pause, speaking rate (voiced frames / total)
  - Vowel duration: mean vowel length, sentence-final rhyme length
  - Second order of the above
- Speaker:
  - Speaker distribution, speaker turn, first in the show
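As a sketch of the acoustic statistics above, the per-sentence summary (min, max, median, mean, std. dev., mean absolute slope) can be computed over a sequence of per-frame pitch values; the frame values here are illustrative, not from the corpus:

```python
# Sketch: sentence-level acoustic summary statistics over per-frame
# pitch (or intensity) values.
import statistics

def acoustic_stats(frames):
    # Mean absolute slope: average absolute frame-to-frame change.
    slopes = [abs(b - a) for a, b in zip(frames, frames[1:])]
    return {
        "min": min(frames),
        "max": max(frames),
        "median": statistics.median(frames),
        "mean": statistics.mean(frames),
        "stdev": statistics.stdev(frames),
        "mean_abs_slope": statistics.mean(slopes),
    }

pitch = [180.0, 185.0, 190.0, 170.0, 175.0]  # toy F0 contour in Hz
stats = acoustic_stats(pitch)
print(stats["mean"], stats["mean_abs_slope"])  # 180.0 8.75
```

The "second order" features mentioned above would apply the same statistics to the frame-to-frame differences rather than the raw values.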
Current Approach - Results
Reported in the HLT paper for the full feature set at the sentence level:

Language   F1 (p, r)        Pk     WindowDiff  Cseg
English    .421 (.67, .32)  0.194  0.318       0.067
Mandarin   .592 (.73, .50)  0.179  0.245       0.068
Arabic     .300 (.65, .19)  0.264  0.353       0.085

Pk (Beeferman et al., 1999); WindowDiff (Pevzner and Hearst, 2002); Cseg (Doddington, 1998)
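The two windowed metrics in the table can be sketched directly from their definitions. Segmentations are represented here as per-sentence segment IDs, and k is typically set to half the mean reference segment length:

```python
# Sketch: Pk (Beeferman et al., 1999) and WindowDiff (Pevzner and
# Hearst, 2002) over per-sentence segment-ID sequences.

def pk(ref, hyp, k):
    """Fraction of windows where ref and hyp disagree on whether
    positions i and i+k fall in the same segment."""
    n = len(ref)
    errors = sum((ref[i] == ref[i + k]) != (hyp[i] == hyp[i + k])
                 for i in range(n - k))
    return errors / (n - k)

def window_diff(ref, hyp, k):
    """Fraction of windows where the number of boundaries inside the
    window differs between ref and hyp."""
    def boundaries(seg, i):
        return sum(seg[j] != seg[j + 1] for j in range(i, i + k))
    n = len(ref)
    errors = sum(boundaries(ref, i) != boundaries(hyp, i)
                 for i in range(n - k))
    return errors / (n - k)

ref = [0, 0, 0, 1, 1, 1, 2, 2, 2]
hyp = [0, 0, 0, 0, 1, 1, 2, 2, 2]  # first boundary placed one sentence late
print(pk(ref, hyp, 2), window_diff(ref, hyp, 2))
```

Both metrics penalize a near-miss less than a wholly missed or spurious boundary, which is why they complement the strict boundary-level F1.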
Improvements In Progress
- Looking for ways to reduce the negative effect of errors inherited from upstream processes (ASR, sentence-unit and speaker detection)
- Adding/modifying features to make them more robust to error
- Analyzing the current features and discarding those that are not discriminative or descriptive enough
- Improving the framework for the package
Word Level vs. Sentence Level
- Pros:
  - Eliminates the sentence boundary detection error (it becomes a feature)
  - No need for story boundary alignment
- Cons:
  - More chances for error and a lower baseline
  - Higher risk of overfitting
Word Level vs. Sentence Level
Word Level - Features
- Provide information about a window preceding, surrounding, or following the current word:
  - Acoustic features were computed over windows of five words
  - Similar idea for other features, e.g.:
    @attribute speaker_boundary {TRUE, FALSE}
    @attribute same_speaker_5 {TRUE, FALSE}
    @attribute same_speaker_10 {TRUE, FALSE}
    @attribute same_speaker_20 {TRUE, FALSE}
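A minimal sketch of how a windowed feature like same_speaker_k could be derived from a per-word speaker-label sequence (the labels below are toy data):

```python
# Sketch: same_speaker_k asks whether the k words preceding the current
# word all share the current word's speaker label.

def same_speaker(labels, i, k):
    window = labels[max(0, i - k):i]
    return all(s == labels[i] for s in window)

# Toy transcript: speaker A for 10 words, then speaker B for 10 words.
speakers = ["A"] * 10 + ["B"] * 10
print(same_speaker(speakers, 12, 5))   # window crosses the A/B turn -> False
print(same_speaker(speakers, 18, 5))   # all within B's turn -> True
```

The same pattern, evaluated at k = 5, 10, and 20, yields the three Boolean ARFF attributes listed above.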
Word Level - Features
Feature analysis for sentence-level features, e.g., for the ABC show using Weka (ordered lists):

Chi Square             Information Gain
sent_position          pauselen
start_time             keywords_after_5
speaker_distribution   keywords_after_10
end_time               keywords_after_5
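Weka's attribute evaluators implement the ranking above; as a sketch of the information-gain criterion for a binary feature against the story-boundary label (toy data, not corpus values):

```python
# Sketch: information gain of a discrete feature with respect to the
# boundary label, the quantity behind Weka's InfoGainAttributeEval.
import math

def entropy(labels):
    n = len(labels)
    counts = [labels.count(v) for v in set(labels)]
    return -sum((c / n) * math.log2(c / n) for c in counts)

def info_gain(feature, labels):
    """H(label) minus the label entropy after conditioning on the feature."""
    n = len(labels)
    gain = entropy(labels)
    for v in set(feature):
        subset = [lab for f, lab in zip(feature, labels) if f == v]
        gain -= (len(subset) / n) * entropy(subset)
    return gain

labels  = [1, 0, 1, 0, 1, 0]   # 1 = story boundary
useful  = [1, 0, 1, 0, 1, 0]   # perfectly predictive feature
useless = [1, 1, 0, 0, 1, 1]   # uninformative feature
print(info_gain(useful, labels), info_gain(useless, labels))
```

Ranking features by this score (or by a chi-square statistic) gives the ordered lists shown above; continuous features such as pauselen are discretized first.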
Word Level - Features
- Word ASR confidence: the score itself, plus low confidence (@reject@ or score < 0.8) as a Boolean and as a count over various window widths
- Word introduction
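A sketch of the windowed confidence count, assuming per-word confidences with the token @reject@ marking ASR rejects (the values are illustrative):

```python
# Sketch: count of unreliable words (ASR reject or confidence < 0.8)
# in a window centered on the current word.
REJECT = "@reject@"

def low_conf_count(confidences, i, width):
    half = width // 2
    window = confidences[max(0, i - half):i + half + 1]
    # String comparison short-circuits before the numeric test.
    return sum(c == REJECT or c < 0.8 for c in window)

conf = [0.95, REJECT, 0.6, 0.99, 0.9]
print(low_conf_count(conf, 2, 5))  # -> 2 (one reject, one score below 0.8)
```

The Boolean variant is simply low_conf_count(...) > 0 for the window width of interest.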
Word Level - Results
Future Directions
- Finding a reasonable segmentation strategy, followed by clustering on features extracted from segments:
  - Sentences => A+L+S
  - Pause => L
  - "acoustic tiling" => L+S
- Sequential modeling
- Performing more morphological analysis, particularly for Arabic
- Using the rest of the story and topic labels
- Using other parts of TDT and/or external information for training: WordNet, WSJ, etc.
- Experimenting with other classifiers: JRip, SVM, Bayesian, GMM, etc.
Thank you. Questions?