
Speech Summarization
Sameer R. Maskey

Summarization
- ‘The process of distilling the most important information from a source (or sources) to produce an abridged version for a particular user (or users) and task (or tasks)’ [Mani and Maybury, 1999]

Indicative or Informative
- Indicative
  - Suggests the contents of the document
  - Better suited for searchers
- Informative
  - Meant to represent the document
  - Better suited to users who want an overview

Speech Summarization
- Speech summarization entails ‘summarizing’ speech:
  - Identify important information relevant to users and the story
  - Represent the important information
  - Present the extracted/inferred information as an addition or substitute to the story

Are Speech and Text Summarization Similar?
- Yes:
  - Identifying important information
  - Some lexical and discourse features
  - Extraction
- No:
  - Speech signal
  - Prosodic features
  - NLP tools?
  - Segments – sentences?
  - Generation?
  - ASR transcripts
  - Data size

Text vs. Speech Summarization (NEWS)
- Text: error-free text; manual transcripts; lexical features; segmentation into sentences; many NLP tools
- Speech: speech signal; speech channels (phone, remote satellite, station); ASR and closed-captioned transcripts; some lexical features; many speakers and speaking styles; prosodic features (pitch, energy, duration); story presentation style; structure (anchor, reporter interaction); commercials, weather reports


Why Speech Summarization?
- Multimedia production and data size are increasing: we need less time-consuming ways to archive, extract, use, and browse speech data; speech summarization is a possible solution
- Due to the temporal nature of speech, it is difficult to scan like text
- User-specific summaries of broadcast news are useful
- Summarizing voicemail can help users organize it better

Some Summarization Techniques Based on Text (Lexical Features)
- Sentence extraction with similarity measures [Salton et al., 1995]
- Extraction trained with manual summaries [McKeown et al., 2001]
- Concept-level extraction of content units [Hovy & Lin, 1999]
- Generation of words/phrases [Witbrock & Mittal, 1999]
- Use of structured data [Maybury, 1995]

Summarization by Sentence Extraction with Similarity Measures [Salton et al., 1995]
- Many present-day techniques involve sentence extraction
- Extract sentences by finding those similar to the topic sentence, or dissimilar to the already-built summary (Maximal Marginal Relevance)
- Various similarity measures [Salton et al., 1995]:
  - Cosine measure
  - Vocabulary overlap
  - Topic-word overlap
  - Content-signature overlap
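As a concrete illustration of similarity-based extraction, here is a minimal sketch (not code from any of the cited systems): it greedily selects the sentences most similar to the lead sentence under a cosine measure over word counts. Treating the lead sentence as the topic sentence, and the function names, are assumptions made for illustration.

    # Minimal sketch: pick the sentences most similar to the lead (topic)
    # sentence using cosine similarity over raw word counts.
    from collections import Counter
    from math import sqrt

    def cosine(a, b):
        # cosine similarity between two word-count vectors (Counters)
        dot = sum(a[w] * b[w] for w in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def extract_summary(sentences, n=3):
        vecs = [Counter(s.lower().split()) for s in sentences]
        topic = vecs[0]  # assume the lead sentence is the topic sentence
        ranked = sorted(range(len(sentences)),
                        key=lambda i: cosine(vecs[i], topic), reverse=True)
        return [sentences[i] for i in sorted(ranked[:n])]  # keep original order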

“Automatic Text Structuring and Summarization” [Salton et al., 1995]
- Uses hypertext link generation to summarize documents
- Builds intra-document hypertext links
- Coherent topics are distinguished by separate clusters of links
- Links that are not in close proximity are removed
- Traverse the nodes to select a path that defines a summary
- Traversal order can be:
  - Bushy path: constructed from the n most bushy nodes
  - Depth-first path: traverse the bushiest node after each node
  - Segmented bushy path: construct bushy paths individually and connect them at the text level

Text relationship map [Salton et al., 1995]

Summarization by Feature-Based Statistical Models [Kupiec et al., 1995]
- Build manual summaries using the available annotators
- Extract a set of features from the manual summaries
- Train a statistical model on the feature values for the manual summaries
- Use the trained model to score each sentence in the test data
- Extract the n highest-scoring sentences
- Various statistical models / machine learning methods:
  - Regression models
  - Various classifiers
  - Bayes' rule for computing the probability of inclusion by counting [Kupiec et al., 1995], where S is the summary, given k features Fj, and P(Fj) and P(Fj | s ∈ S) can be computed by counting occurrences (see the reconstruction below)
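The scoring equation referenced above did not survive extraction; the standard Kupiec et al. (1995) naive-Bayes formulation, written with the symbols named on the slide, is:

    P(s \in S \mid F_1, \dots, F_k) \;=\;
        \frac{P(s \in S)\, \prod_{j=1}^{k} P(F_j \mid s \in S)}{\prod_{j=1}^{k} P(F_j)}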

Summarization by Concept/Content-Level Extraction and Generation [Hovy & Lin, 1999], [Witbrock & Mittal, 1999]
- Quite a few text summarizers are based on extracting concepts/content and presenting them as the summary:
  - Concept words/themes
  - Content units [Hovy & Lin, 1999]
  - Topic identification
- [Hovy & Lin, 1999] uses a Concept Wavefront to build a concept taxonomy
  - Builds concept signatures by finding relevant words in 30,000 WSJ documents, each categorized into a different topic
- Phrase concatenation of relevant concepts/content
- Sentence planning for generation

Summarization of Structured Text Databases [Maybury, 1995]
- Summarization of text represented in a structured form: databases, templates
  - Report generation of a medical history from a database is one example
- Link analysis (semantic relations within the structure)
- Domain-dependent importance of events

Speech Summarization: Present
- Speech summarization seems to be mostly based on extractive summarization
- Extraction of words, sentences, content units
- Some compression methods have also been proposed
- Generation, as in some text-summarization techniques, is not available/feasible
  - Mainly due to the nature of the content

Speech Summarization Techniques
- Sentence extraction with similarity measures [Christensen et al., 2004]
- Word scoring with dependency structure [Hori C. et al., 1999, 2002], [Hori T. et al., 2003]
- Classification [Koumpis & Renals, 2004]
- User access information [He et al., 1999]
- Removing disfluencies [Zechner, 2001]
- Weighted finite-state transducers [Hori T. et al., 2003]

Content/Context Sentence-Level Extraction for Speech Summaries [Christensen et al., 2004]
- Commonly used speech summarization techniques:
  - Finding sentences similar to the lead topic sentences
  - Using position features to find relevant nearby sentences after detecting the topic sentence
- Sim denotes a similarity measure between two sentences (used in the slide's scoring formula)

Weighted Finite-State Transducers for Speech Summarization [Hori T. et al., 2003]
- Speech summarization includes speech recognition, paraphrasing, and sentence compaction integrated into a single weighted finite-state transducer (WFST)
- Enables the decoder to employ all knowledge sources in a one-pass strategy
- Speech recognition using WFSTs, where H is the state network of triphone HMMs, C is the triphone connection rules, L is the pronunciation lexicon, and G is the trigram language model
- Paraphrasing can be viewed as a kind of machine translation with translation probability P(W|T), where W is the source language and T is the target language
- If S is the WFST representing translation rules and D is the language model of the target language, speech summarization can be viewed as the composition of the speech recognizer (H, C, L, G) with the translator (S, D), as reconstructed below
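The composition formulas on the slide are missing from the extracted text. A reconstruction consistent with the symbols defined above (the exact notation in Hori et al. may differ) is:

    % recognition network
    R \;=\; H \circ C \circ L \circ G
    % recognition composed with paraphrasing/summarization
    N \;=\; H \circ C \circ L \circ G \circ S \circ D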

User Access Information for Finding Salient Parts [He et al., 1999]
- The idea is to summarize lectures or shows by extracting the parts that have been viewed the longest
- Needs multiple users of the same show, meeting, or lecture for statistically significant training data
- For summarizing lectures, compute the time spent on each slide
- A summarizer based on user access logs did as well as summarizers that used linguistic and acoustic features
  - Average score of 4.5 on a scale of 1 to 8 (subjective evaluation)

Word-Level Extraction by Scoring/Classifying Words [Hori C. et al., 1999, 2002]
- Score each word in the sentence and extract a set of words to form a sentence whose total score is the product/sum of the scores of each word
- Example scores:
  - Word significance score I (topic words)
  - Linguistic score L (bigram probability)
  - Confidence score C (from ASR)
  - Word concatenation score T (dependency-structure grammar)
- M is the number of words to be extracted, and the weighting factors balance among L, I, C, and T (see the reconstruction below)
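The slide's scoring equation is also missing; one plausible reconstruction from the description above, with the lambdas standing in for the weighting factors (Hori et al.'s exact notation may differ), is:

    S(v_1 \dots v_M) \;=\; \sum_{m=1}^{M}
        \bigl[\, L(v_m) + \lambda_I\, I(v_m) + \lambda_C\, C(v_m)
               + \lambda_T\, T(v_m \mid v_{m-1}) \,\bigr]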

Assumptions
- There are a few assumptions made in the previously mentioned methods:
  - Segmentation
  - Information extraction
  - Automatic speech recognition
  - Manual transcripts
  - Annotation

Speech Segmentation?
- Segmentation of:
  - Sentences
  - Stories
  - Topics
  - Speakers

Information Extraction from Speech Data?
- Information extraction of:
  - Named entities
  - Relevant sentences and topics
  - Weather/sports information

Audio Segmentation
- Segment by: topics, stories, sentences, commercials, speaker types, gender, weather

Audio Segmentation Methods
- Can be roughly grouped into two categories:
  - Language models [Dharanipragada et al., 1999], [Gotoh & Renals, 2000], [Maybury, 1998], [Shriberg et al., 2000]
  - Prosody models [Gotoh & Renals, 2000], [Meinedo & Neto, 2003], [Shriberg et al., 2000]
- Different methods work better for different purposes and different styles of data [Shriberg et al., 2000]
- Discourse-cue-based methods are highly effective for broadcast news segmentation [Maybury, 1998]
- Prosodic models outperform most pure language-modeling methods [Shriberg et al., 2000], [Gotoh & Renals, 2000]
- A combined model using NLP techniques on ASR transcripts plus prosodic features seems to work best

Overview of a Few Algorithms: Statistical Model [Gotoh & Renals, 2000]
- Sentence boundary detection: a finite-state model that extracts boundary information from text and audio sources
- Uses a language model and a pause duration model
- Language model: represents each boundary position with two classes, “last word” or “not last word”
- Pause duration model: prosodic features are strongly affected by the word
- The two models can be combined
- The prosody model outperforms the language model; the combined model outperforms both
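One plausible way to combine the two models, shown purely as an assumption since the paper's exact formulation is not reproduced on the slide, is a weighted product of the language-model and pause-duration boundary probabilities:

    P(\text{boundary} \mid w, d) \;\propto\;
        P_{\mathrm{LM}}(\text{boundary} \mid w)^{\lambda}\,
        P_{\mathrm{pause}}(\text{boundary} \mid d)^{\,1-\lambda}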

Segmentation Using Discourse Cues [Maybury, 1998]
- Discourse-cue-based story segmentation
- Sentence segmentation is not possible with this method
- Discourse cues in CNN:
  - Start of broadcast
  - Anchor-to-reporter handoff, reporter-to-anchor handoff
  - Cataphoric segment (“still ahead of this news”)
  - Broadcast end
- Time-enhanced finite-state machine to represent discourse states such as anchor, reporter, advertisement, etc.
- Other features used are named entities, part of speech, and discourse shifts: “>>” speaker change, “>>>” subject change

  Source            Precision  Recall
  ABC               90         94
  CNN               95         75
  Jim Lehrer Show   77         52

Speech Segmentation
- Segmentation methods are essential for any kind of extractive speech summarization
- Sentence segmentation in speech data is hard
- Prosody models usually work better than language models
- Different prosodic features are useful for different kinds of speech data
  - Pause features are essential in broadcast news segmentation
  - Phone duration is essential in telephone speech segmentation
- A combined linguistic and prosody model works best

Information Extraction from Speech
- Different types of information need to be extracted depending on the type of speech data
- Broadcast news:
  - Stories [Merlino et al., 1997]
  - Named entities [Miller et al., 1999], [Gotoh & Renals, 2000]
  - Weather information
- Meetings:
  - Main points by a particular speaker
  - Addresses
  - Dates
- Voicemail:
  - Phone numbers [Whittaker et al., 2002]
  - Caller names [Whittaker et al., 2002]

Statistical Model for Extracting Named Entities [Miller et al., 1999], [Gotoh & Renals, 2000]
- Statistical framework: let V denote the vocabulary and C the set of name classes
- Modeling class information as a word attribute: denote e = <c, w> and model sequences of these tokens (see the reconstruction below)
- In this formulation, the same word with two different classes yields two different tokens e, which brings a data-sparsity problem
- Maximum-likelihood estimates obtained by frequency counts
- Most probable sequence of class names found with the Viterbi algorithm
- Precision and recall of 89% for manual transcripts with explicit modeling
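The model equation is missing from the extracted text; a reconstruction consistent with the description above (a bigram over class/word tokens with maximum-likelihood estimates from frequency counts; the cited papers' exact formulation may differ) is:

    P(e_1^{n}) \;\approx\; \prod_{i=1}^{n} P(e_i \mid e_{i-1}),
    \qquad
    \hat{P}(e_i \mid e_{i-1}) \;=\; \frac{f(e_{i-1}, e_i)}{f(e_{i-1})},
    \qquad e_i = \langle c_i, w_i \rangle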

Named Entity Extraction Results [Miller et al., 1999]
(Figure: BBN named entity performance as a function of WER)

Information Extraction from Speech
- Information extraction from speech data is an essential tool for speech summarization
- Named entities, phone numbers, and speaker types are some frequently extracted entities
- Named entity tagging in speech is harder than in text because ASR transcripts lack punctuation, sentence boundaries, capitalization, etc.
- Statistical models perform reasonably well on named entity tagging

Speech Summarization at Columbia
- We make a few assumptions in segmentation and extraction
- Some new techniques proposed
- 2-level summary:
  - Headlines for each story
  - Summary for each story
- Summarization client and server model

Speech Summarization (NEWS)
(The same text vs. speech comparison as before, with the properties grouped into four feature classes: ACOUSTIC, LEXICAL, DISCOURSE, STRUCTURAL)

Speech Summarization
- Input: acoustic, lexical (transcripts), discourse, and structural features
- Uses story/sentence segmentation, speaker identification, speaker clustering, manual annotation, named entity detection, and POS tagging
- Produces a 2-level summary: headlines and a summary

Corpus
- Topic Detection and Tracking corpus (TDT-2)
- We are using 20 “CNN Headline” shows for summarization
- 216 stories in total
- 10 hours of speech data
- Using manual transcripts and Dragon and BBN ASR transcripts

Annotations – Entities
- We want to detect:
  - Headlines
  - Greetings
  - Signoffs
  - SoundByte speakers
  - Interviews
- We annotated all of the above entities plus the named entities (person, place, organization)

Annotations – By Whom and How?
- We created a labeling manual following ACE standards
- Annotated by 2 annotators over the course of a year
- 48 hours of CNN Headline News in total
- We built a labeling interface, dLabel v2.5, which went through 3 revisions for this purpose

Annotations – dLabel v2.5

Annotations – ‘Building Summaries’
- 20 CNN shows annotated for extractive summaries
- A brief labeling manual; no detailed instructions on what to choose and what not to
- We built a web interface for this purpose, where annotators can click on sentences to be included in the summary
- Summaries are stored in a MySQL database

Annotations – Web Interface


Acoustic Features
- F0 features: max, min, mean, median, slope
  - A change in pitch may signal a topic shift
- RMS energy features: max, min, mean
  - Higher amplitude probably means stress on the phrase
- Duration: length of the sentence in seconds (end time – start time)
  - A very short or very long sentence might not be important for the summary
- Speaking rate: how fast the speaker is speaking
  - A slower rate may mean more emphasis on a particular sentence
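A minimal sketch of how such per-sentence features could be computed (a hypothetical helper, not the system's actual extraction code; it assumes a frame-level pitch track with unvoiced frames as NaN, the sentence waveform, start/end times, and a word count):

    import numpy as np

    def acoustic_features(f0, samples, start_s, end_s, n_words, frame_len=400):
        # f0: per-frame pitch values in Hz (NaN for unvoiced frames)
        f0 = np.asarray(f0, dtype=float)
        voiced = f0[~np.isnan(f0)]
        if voiced.size == 0:
            voiced = np.zeros(1)                      # guard: all-unvoiced sentence
        slope = np.polyfit(np.arange(voiced.size), voiced, 1)[0] if voiced.size > 1 else 0.0
        # frame-level RMS energy (frame_len assumes ~25 ms frames at 16 kHz)
        samples = np.asarray(samples, dtype=float)
        n_frames = max(1, samples.size // frame_len)
        frames = samples[: n_frames * frame_len].reshape(n_frames, -1)
        frame_rms = np.sqrt(np.mean(frames ** 2, axis=1))
        duration = end_s - start_s                    # sentence length in seconds
        return {
            "f0_max": float(voiced.max()), "f0_min": float(voiced.min()),
            "f0_mean": float(voiced.mean()), "f0_median": float(np.median(voiced)),
            "f0_slope": float(slope),
            "rms_max": float(frame_rms.max()), "rms_min": float(frame_rms.min()),
            "rms_mean": float(frame_rms.mean()),
            "duration": duration,
            "speaking_rate": n_words / duration if duration > 0 else 0.0,
        }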

Acoustic Features – Problems in Extraction
- What should the segment for extracting these features be – sentences, turns, stories?
- We do not have sentence boundaries
- A dynamic programming aligner aligns manual sentence boundaries with the ASR transcripts
- Feature values need to be normalized by speaker; we used the speaker cluster IDs available from the BBN ASR

Acoustic Features – Praat: Extraction Tool

Lexical Features
- Named entities in a sentence:
  - Person
  - People
  - Organization
  - Total count of named entities
- Number of words in the sentence
- Number of words in the previous and next sentences

Lexical Features – Issues
- For manual transcripts:
  - Sentence boundary detection using Ratnaparkhi’s mxterminator
  - Named entities annotated
- For ASR transcripts:
  - Sentence boundaries aligned
  - Named entities detected automatically using BBN’s IdentiFinder
  - Many NLP tools fail when used with ASR transcripts

Structural Features
- Position of the sentence in the story and in the turn
- Turn position in the show
- Speaker type (reporter or not)
- Previous and next speaker type
- Change in speaker type

Discourse Feature
- Given-new feature value, computed from the following quantities: n_i is the number of ‘new’ noun stems in sentence i, d is the total number of unique nouns, s_i is the number of noun stems that have already been seen, and t is the total number of nouns
- Intuition:
  - ‘Newness’: more new unique nouns in the sentence (n_i / d)
  - If many nouns in the sentence have already been seen: higher ‘givenness’ (s_i / (t − d))
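A minimal sketch of the two quantities described above (hypothetical helper; the slide's exact equation for combining them into a single score is not reproduced here, so only the components are computed):

    # sentences: list of lists of noun stems, in story order
    def given_new_components(sentences):
        all_nouns = [n for sent in sentences for n in sent]
        t = len(all_nouns)                 # total number of nouns
        d = len(set(all_nouns))            # total number of unique nouns
        seen = set()
        components = []
        for sent in sentences:
            n_i = len(set(sent) - seen)              # new noun stems in this sentence
            s_i = sum(1 for n in sent if n in seen)  # noun stems already seen
            newness = n_i / d if d else 0.0
            givenness = s_i / (t - d) if (t - d) else 0.0
            components.append((newness, givenness))
            seen.update(sent)
        return components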

Experiments
- Sentence extraction as the summary
- Binary classification problem:
  - ‘0’: not in the summary
  - ‘1’: in the summary
- 10 hours of CNN news shows
- 4 different sets of features – acoustic, lexical, structural, discourse
- 10-fold cross-validation, 90/10 train and test
- 4 different classifiers; WEKA and YALE learning tools
- Feature selection
- Evaluation using F-measure and ROUGE metrics
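A minimal sketch of this setup (hypothetical, using scikit-learn as a stand-in for the WEKA/YALE tools actually used): binary sentence classification evaluated with 10-fold cross-validation and F-measure, where X is a sentence-by-feature matrix and y holds the 0/1 summary labels.

    import numpy as np
    from sklearn.model_selection import cross_val_score, StratifiedKFold
    from sklearn.naive_bayes import GaussianNB   # stand-in classifier

    def evaluate_feature_set(X, y):
        # 10-fold cross-validation, scored by F-measure on the "in summary" class
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        scores = cross_val_score(GaussianNB(), X, y, cv=cv, scoring="f1")
        return scores.mean()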

Feature Sets
- We want to compare the various combinations of our 4 feature sets:
  - Acoustic/prosodic (A)
  - Lexical (L)
  - Structural (S)
  - Discourse (D)
- Combinations of feature sets, 15 in total:
  - L, A, …, L+A, L+S, …, L+A+S, …, L+S+D, …, L+A+S+D

Classifiers
- The choice of available classifier may affect the comparison of feature sets
- Compared 4 different classifiers by plotting threshold (ROC) curves and computing the area under the curve (AUC)
- The best possible classifier has an AUC of 1

  Classifier                AUC
  Bayesian Network          0.771
  C4.5 Decision Trees       0.647
  Ripper                    0.643
  Support Vector Machines   0.535

ROC Curves

Results – Best Combined Feature Set
- We obtained the best F-measure for 10-fold cross-validation using all acoustic (A), lexical (L), discourse (D), and structural (S) features.

             Precision  Recall  F-Measure
  Baseline   0.430      0.429
  L+S+A+D    0.489      0.613   0.544

- The F-measure is 11.5% higher than the baseline.

What Is the Baseline?
- The baseline is the first 23% of sentences in each story
  - On average, model summaries were 23% of the story length
- In summarization, selecting the first n% of sentences is a pretty standard baseline
- For our purposes this is a very strict baseline. Why?
  - Stories are short: 18.2 sentences per story on average
  - In broadcast news it is standard to summarize the story in the introduction
  - These sentences are likely to be in the summary
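A minimal sketch of the lead baseline described above (hypothetical helper name):

    import math

    def lead_baseline(story_sentences, fraction=0.23):
        # select the first 23% of sentences in a story as its baseline summary
        k = max(1, math.ceil(fraction * len(story_sentences)))
        return story_sentences[:k]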

Baseline and the Best F-measure

F-Measure for All 15 Feature Sets

Evaluation Using ROUGE
- F-measure is too strict a measure
- Predicted summary sentences have to match the reference summary sentences exactly
- What if a predicted sentence is not an exact match but has similar content?
- ROUGE takes account of this

ROUGE Metric
- Recall-Oriented Understudy for Gisting Evaluation (ROUGE)
- ROUGE-N (where N = 1, 2, 3, 4 grams)
- ROUGE-L (longest common subsequence)
- ROUGE-S (skip bigram)
- ROUGE-SU (skip bigram, counting unigrams as well)
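A minimal sketch of ROUGE-N recall (hypothetical, not the official ROUGE toolkit, which adds stemming, multiple references, and other options): the fraction of reference n-grams that also appear in the candidate summary.

    from collections import Counter

    def rouge_n_recall(candidate_tokens, reference_tokens, n=1):
        def ngrams(tokens):
            return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
        cand, ref = ngrams(candidate_tokens), ngrams(reference_tokens)
        overlap = sum(min(cnt, cand[g]) for g, cnt in ref.items())
        total = sum(ref.values())
        return overlap / total if total else 0.0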

Evaluation Using the ROUGE Metric

             ROUGE-1  ROUGE-2  ROUGE-3  ROUGE-4  ROUGE-L  ROUGE-S  ROUGE-SU
  Baseline   0.58     0.51     0.50     0.49     0.57     0.40     0.41
  L+S+A+D    0.84     0.81     0.80     0.79     0.84     0.76

- On average, L+S+A+D is 30.3% higher than the baseline

Results - ROUGE

Does the Importance of ‘What’ Is Said Correlate with ‘How’ It Is Said?
- Hypothesis: “Speakers change their amplitude, pitch, and speaking rate to signify the importance of words, phrases, and sentences.”
- If this is the case, then the labels predicted for sentences using acoustic features (A) should correlate with the labels predicted using lexical features (L)
- We found a correlation of 0.74
- This correlation is strong support for our hypothesis

Is It Possible to Build ‘Good’ Automatic Speech Summarization Without Any Transcripts?

  Feature Set  F-Measure  ROUGE-avg
  L+S+A+D      0.54       0.80
  L            0.49       0.70
  S+A          0.49       0.68
  A            0.47       0.63
  Baseline     0.43       0.50

- Just using A+S, without any lexical features, we get a 6% higher F-measure and an 18% higher ROUGE-avg than the baseline

Feature Selection
- We used feature selection to find the best feature set among all the features in the combined set
- The 5 best features are shown in the table
- These 5 features span all 4 feature sets
- Feature selection also chose these 5 features as the optimal feature set
- The F-measure using just these 5 features is 0.53, only 1% lower than using all features

  Rank  Type  Feature
  1     A     Time length in seconds
  2     L     Number of words
  3     L     Total named entities
  4     S     Normalized sentence position
  5     D     Given-new score

Problems and Future Work
- We assume we have good:
  - Sentence boundary detection
  - Speaker IDs
  - Named entities
- We obtain very good speaker IDs and named entities from BBN, but no sentence boundaries
- We have to address sentence boundary detection as a problem in its own right
  - Alternative solution: do ‘breath group’-level segmentation and build a model based on that segmentation

More Current and Future Work
- We annotated headlines, greetings, signoffs, interviews, soundbyte speakers, and interviewees
- We want to detect these entities
  - (Students involved in detecting some of these entities: Aaron Roth, Irina Likhtina)
- We want to present the summary and these entities in a unified browsable framework
  - (Student involved: Lauren Wilcox)
- The browser is implemented in a client/server framework

Summarization Architecture (block diagram)
- Input: speech, closed captions (CC), manual transcripts
- Pre-processing: SGML parser, XML parser, ASR transcript aligner, sentence parser, text pre-processor, named-entity tagger
- Feature extraction: acoustic, lexical, structure, discourse
- Segmentation/detection: story, headline, interviews, Q&A, soundbites, soundbite speakers, sign-on/off, weather forecast
- Summarizer, then model/presentation

Generation or Extraction?
- SENT 27: a trial that pits the cattle industry against tv talk show host oprah winfrey is under way in amarillo , texas.
- SENT 28: jury selection began in the defamation lawsuit began this morning.
- SENT 29: winfrey and a vegetarian activist are being sued over an exchange on her April 16, 1996 show.
- SENT 30: texas cattle producers claim the activists suggested americans could get mad cow disease from eating beef.
- SENT 31: and winfrey quipped , this has stopped me cold from eating another burger
- SENT 32: the plaintiffs say that hurt beef prices and they sued under a law banning false and disparaging statements about agricultural products
- SENT 33: what oprah has done is extremely smart and there's nothing wrong with it she has moved her show to amarillo texas , for a while
- SENT 34: people are lined up , trying to get tickets to her show so i'm not sure this hurts oprah.
- SENT 35: incidentally oprah tried to move it out of amarillo. she's failed and now she has brought her show to amarillo.
- SENT 36: the key is , can the jurors be fair
- SENT 37: when they're questioned by both sides, by the judge , they will be asked, can you be fair to both sides
- SENT 38: if they say , there's your jury panel
- SENT 39: oprah winfrey's lawyers had tried to move the case from amarillo , saying they couldn't get an impartial jury
- SENT 40: however, the judge moved against them in that matter
…

Conclusion
- We talked about different techniques for building summarization systems
- We described some speech-specific summarization algorithms
- We showed feature-comparison techniques for speech summarization
  - A model using a combination of lexical, acoustic, discourse, and structural features is one of the best models so far
  - Acoustic features correlate with the content of the sentences
- We discussed possibilities for summarizing speech without any transcribed text