1
ICASSP Paper Survey
Presenter: Chen Yi-Ting
2
• Improved Spoken Document Retrieval With Dynamic Key Term Lexicon and Probabilistic Latent Semantic Analysis (PLSA)
• Improved Spoken Document Summarization Using Probabilistic Latent Semantic Analysis (PLSA)
• Topic and Stylistic Adaptation for Speech Summarisation
• Automatic Sentence Segmentation of Speech for Automatic Summarization
3
Improved Spoken Document Retrieval With Dynamic Key Term Lexicon and Probabilistic Latent
Semantic Analysis (PLSA)
4
• In this paper, a "dynamic key term lexicon" automatically extracted from the ever-changing document archives is used as an extra feature set in the retrieval task.
5
• An important part of the proposed approach is the automatic key term extraction from the archives
• The second important part of the proposed approach is key term recognition from the user's spoken query
– Special approaches were developed to recognize the key terms in the user query correctly, including emphasizing the possible key term candidates during the search through the phone lattice, and key term matching using a phone similarity matrix with two different distance measures
– Two different versions of the lexicon can be used: the general lexicon including all terms except the deleted stop terms, and the other based on the much smaller but semantically rich key term lexicon
6
• Named Entity Recognition
– The first approach is to recognize the NEs from a text document (or the transcription of a spoken document) using global information
– The second special approach used here is, for spoken documents, to recover the OOV NEs using external knowledge
• Key Term extraction by term entropy based on PLSA
P(T_k | t_j) = P(t_j | T_k) P(T_k) / P(t_j)

H(t_j) = - Σ_{k=1}^{K} P(T_k | t_j) log P(T_k | t_j)
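A toy illustration of the two formulas above, with invented PLSA parameters (the topics, terms and probability values are all made up for the sketch):

```python
import math

# Toy PLSA parameters (all values invented): P(t|T) per latent topic
# and topic priors P(T).
p_t_given_T = {"T1": {"bank": 0.30, "river": 0.05},
               "T2": {"bank": 0.10, "river": 0.40}}
p_T = {"T1": 0.5, "T2": 0.5}

def p_T_given_t(term):
    """Bayes rule: P(T_k|t_j) = P(t_j|T_k) P(T_k) / P(t_j)."""
    p_t = sum(p_t_given_T[k][term] * p_T[k] for k in p_T)
    return {k: p_t_given_T[k][term] * p_T[k] / p_t for k in p_T}

def term_entropy(term):
    """H(t_j) = -sum_k P(T_k|t_j) log P(T_k|t_j); low entropy means the
    term concentrates on few topics and is a good key term candidate."""
    return -sum(p * math.log(p) for p in p_T_given_t(term).values() if p > 0)

# "river" leans heavily on T2, so its entropy is lower than "bank"'s.
print(term_entropy("river") < term_entropy("bank"))  # True
```

Ranking terms by ascending entropy and keeping the top ones is how the dynamic key term lexicon is populated.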
7
• Key term recognition from the user query
– The user's spoken query is transcribed not only into a word graph, as in the usual recognition process, but into a phone lattice as well
– The phone lattice is then matched against the phone sequences of the key terms in the dynamic lexicon using dynamic programming (with a threshold)
– The price paid here is, of course, that the overall word error rate may be increased
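The full lattice search is beyond a short sketch, but the dynamic-programming match with a phone similarity matrix can be illustrated on 1-best phone strings; the phones and similarity values below are invented for the example:

```python
def phone_distance(hyp, ref, sim):
    """Edit-distance DP where the substitution cost of two phones comes
    from a phone similarity matrix (0 = identical, 1 = maximally
    different); insertions and deletions cost 1."""
    m, n = len(hyp), len(ref)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = float(i)
    for j in range(1, n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if hyp[i-1] == ref[j-1] else sim.get((hyp[i-1], ref[j-1]), 1.0)
            d[i][j] = min(d[i-1][j-1] + cost, d[i-1][j] + 1.0, d[i][j-1] + 1.0)
    return d[m][n]

# Hypothetical similarity: /b/ and /p/ are acoustically confusable, so
# substituting one for the other is cheap.
sim = {("b", "p"): 0.3, ("p", "b"): 0.3}
print(phone_distance(["p", "a", "n", "k"], ["b", "a", "n", "k"], sim))  # 0.3
```

A key term is accepted when the distance falls below a tuned threshold, which is how misrecognized but acoustically close query terms can still match lexicon entries.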
• The experimental conditions
– Word error rate 27%, character error rate 14.29%, syllable error rate 8.91%
– 32 topics were used in PLSA modeling
– 1,000 news stories (test set) and 50 queries
– The length of the queries is roughly 8-11 words
– A lexicon of 61,521 words was used here
– A total of 1,708 NEs were obtained (from 9,836 news stories)
– The top 2,000 terms ranked by term entropy were picked
8
• Experimental Results
9
Improved Spoken Document Summarization Using Probabilistic Latent Semantic Analysis (PLSA)
10
• where
– s(tj) is some statistical measure (such as TF/IDF or the like)
– l(tj) is a linguistic measure (e.g., named entities)
– c(tj) is calculated from the confidence score
– g(tj) is the N-gram score for the term tj
– b(S) is calculated from the grammatical structure of the sentence S
– λ1, λ2, λ3, λ4 and λ5 are weighting parameters
• Two useful measures, referred to in this paper as topic significance and term entropy, are proposed based on PLSA modeling to determine the terms, and thus the sentences, that are important for the document, which can then be used to construct the summary
• The statistical measure s(tj), which has proved extremely useful, is called the "significance score":
I(S) = (1/n) Σ_{j=1}^{n} [ λ1 s(t_j) + λ2 l(t_j) + λ3 c(t_j) + λ4 g(t_j) + λ5 b(S) ]

s(t_j) = n(t_j, d_i) log( F_A / F_{t_j} )
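A toy computation of the significance score s(t_j), assuming made-up counts, with n(t_j, d_i) the term count in document d_i, F_{t_j} the term's corpus frequency and F_A the total corpus count:

```python
import math

def significance(n_td, F_j, F_A):
    """s(t_j) = n(t_j, d_i) * log(F_A / F_j): a term that is frequent in
    this document but rare in the whole collection scores high."""
    return n_td * math.log(F_A / F_j)

# Invented counts: "plsa" occurs 5 times in the document but only 20
# times in a 10000-term corpus; "the" occurs 8 times here, 900 corpus-wide.
print(significance(5, 20, 10000) > significance(8, 900, 10000))  # True
```

This is the familiar TF/IDF-style trade-off: the corpus-frequency term in the logarithm suppresses function words even when their in-document count is high.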
11
• Topic significance
– The topic significance score of a term t_j with respect to a topic T_k:

TS(t_j, T_k) = Σ_{d_i ∈ D} n(t_j, d_i) P(T_k | d_i) / Σ_{l ≠ k} P(T_l | d_i)

– The statistical measure:

s_TS(t_j) = Σ_{k=1}^{K} TS(t_j, T_k) P(T_k | d_i)

• Term Entropy

P(T_k | t_j) = P(t_j | T_k) P(T_k) / P(t_j)

s_EN(t_j) = n(t_j, d_i) Σ_{k=1}^{K} P(T_k | t_j) log P(T_k | t_j)

– n(t_j, d_i) is a scaling factor
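The two PLSA-based measures can be sketched on toy data. All counts and posteriors below are invented, and the topic-significance normalization is implemented under one reading of the formula (total competing-topic mass across documents):

```python
import math

# Toy PLSA quantities: term counts n(t_j, d_i) and document-topic
# posteriors P(T_k | d_i) for two documents.
docs = ["d1", "d2"]
n = {("t", "d1"): 3, ("t", "d2"): 1}
p_T_given_d = {"d1": {"T1": 0.9, "T2": 0.1},
               "d2": {"T1": 0.2, "T2": 0.8}}

def topic_significance(term, topic):
    """TS(t_j, T_k): the term's weighted mass in topic k relative to its
    mass in all competing topics l != k."""
    num = sum(n[(term, d)] * p_T_given_d[d][topic] for d in docs)
    den = sum(n[(term, d)] * p_T_given_d[d][l]
              for d in docs for l in p_T_given_d[d] if l != topic)
    return num / den

def s_entropy(term, doc, posteriors):
    """s_EN(t_j) = n(t_j, d_i) * sum_k P(T_k|t_j) log P(T_k|t_j):
    always <= 0, and closer to zero for topic-focused terms."""
    return n[(term, doc)] * sum(p * math.log(p) for p in posteriors if p > 0)

print(topic_significance("t", "T1"))  # 2.9 / 1.1
```

Terms scoring high on either measure pull their sentences up in the summary ranking.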
12
• Experiments configuration
– The test corpus included 200 broadcast news stories
– Word accuracy 66.46%, character accuracy 74.95%, syllable accuracy 81.70%
– Sentence recall/precision is the evaluation metric for automatic summarization of documents
13
• Experiments configuration
14
Topic and Stylistic Adaptation for Speech Summarisation
15
• In this paper they investigate LiM topic and stylistic adaptation using combinations of LiMs, each trained on different adaptation data
• They focus on adapting the linguistic component, which is unrelated to the language model used during the recognition process, to make it more suited for the summarisation task
• Experiments were performed both on spontaneous speech, using 9 talks from the Translanguage English Database (TED) corpus, and speech read from text, using 5 talks from CNN broadcast news from 1995
• The measure of summary quality used in this paper is summarisation accuracy (SumACCY)
Accuracy = (Len - Sub - Ins - Del) / Len * 100 [%]
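The accuracy formula above is easy to sketch; the substitution, insertion and deletion counts below are invented example values:

```python
def summarization_accuracy(ref_len, sub, ins, dele):
    """SumACCY-style word accuracy: (Len - Sub - Ins - Del) / Len * 100 [%]."""
    return (ref_len - sub - ins - dele) / ref_len * 100.0

# e.g. a 50-word reference summary with 4 substitutions, 2 insertions
# and 4 deletions against the system summary:
print(summarization_accuracy(50, 4, 2, 4))  # 80.0
```

Note that, as with word error rate, insertions can push the score below 100% even for a summary containing every reference word.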
16
• Automatic speech summarisation system
17
• Summarisation Method
• Important sentences are first extracted according to the following score for each sentence, obtained from the automatic speech recognition (ASR) output
• Starting with a baseline LiM (LiMB) we perform LiM adaptation by linearly interpolating the baseline model with other component models trained on different data
• Different types of component LiMs are built, coming from different sources of data, and using either unigram, bigram or trigram information
S(W) = (1/N) Σ_{i=1}^{N} [ λ_C C(w_i) + λ_I I(w_i) + λ_L L(w_i) ]

P(w_i | w_{i-n+1} … w_{i-1}) = Σ_k λ_k P_k(w_i | w_{i-n+1} … w_{i-1}), with Σ_k λ_k = 1

L(w_i) = log P(w_i | w_{i-n+1} … w_{i-1})
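A minimal sketch of the linear interpolation of component LiMs, here reduced to two unigram probability tables with invented values:

```python
# Two component linguistic models (invented unigram probabilities) and
# interpolation weights that must sum to 1.
baseline = {"the": 0.050, "summary": 0.001, "talk": 0.002}   # LiM_B
adapted  = {"the": 0.040, "summary": 0.010, "talk": 0.006}   # adaptation LiM
weights  = (0.7, 0.3)

def interpolated(word):
    """P(w) = lambda_1 * P_B(w) + lambda_2 * P_A(w)."""
    return weights[0] * baseline.get(word, 0.0) + weights[1] * adapted.get(word, 0.0)

print(interpolated("summary"))  # 0.7*0.001 + 0.3*0.010 = 0.0037
```

Tuning the weights on development summaries is what shifts probability mass toward summary-style and topic-relevant wording.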
18
• Experimental Setup
– Due to lack of data, the talks had to be used both for development and evaluation, with a rotating form of cross-validation: all talks but one are used for development, the remaining talk being used for testing
– Summaries from the development talks are generated automatically by the system using different sets of parameters
– For the TED data, two types of component linguistic models:
• The first type is built on the small corpus of hand-made summaries, made for the same summarisation ratio
• The second type is built from the papers in the conference proceedings for the talk to be summarised
– For the CNN data, one type of component linguistic model:
• the small corpus of hand-made summaries
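The rotating development/evaluation split described above can be sketched with hypothetical talk IDs, each talk held out once for testing while the rest serve as the development set:

```python
# Leave-one-out rotation over the talks (IDs are invented placeholders).
talks = ["talk1", "talk2", "talk3", "talk4"]
folds = [(held_out, [t for t in talks if t != held_out]) for held_out in talks]
for test_talk, dev_talks in folds:
    pass  # tune summarisation parameters on dev_talks, evaluate on test_talk
print(folds[0])  # ('talk1', ['talk2', 'talk3', 'talk4'])
```

With only a handful of talks, this rotation is what keeps parameter tuning and evaluation from touching the same talk.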
19
• Experimental Setup
– Reference results: random summarisation, the human summaries and the baseline
(Result tables for the TED and CNN data)
20
21
Automatic Sentence Segmentation of Speech for Automatic Summarization
22
• This paper presents an automatic sentence segmentation method for an automatic speech summarization system
• The segmentation method is based on combining word- and class-based statistical language models to predict sentence and non-sentence boundaries
• Both the performance of the sentence segmentation itself and the effect of the segmentation on the summarization accuracy are studied
• To judge the quality of the sentence segmentation, the F-measure metric was used
23
• Automatic sentence segmentation
• This probability was combined with the matching recursive path probability
• Three LMs were used in sentence segmentation, two word-based LMs and a class-based LM
• The LMs were combined by linear interpolation as follows:
P_{S_i}(w_1 … w_i) = P_{S_{i-1}}(w_1 … w_{i-1}) p(S | S) p(w_i | S) + P_{NOS_{i-1}}(w_1 … w_{i-1}) p(S | w_{i-2} w_{i-1}) p(w_i | S)

P_{NOS_i}(w_1 … w_i) = P_{S_{i-1}}(w_1 … w_{i-1}) p(w_i | S) + P_{NOS_{i-1}}(w_1 … w_{i-1}) p(w_i | w_{i-2} w_{i-1})

P(w_i | h_i) = Σ_m λ_m P_m(w_i | h_i)

P_m(w_i | h_i) = P(w_i | w_{i-n+1}, …, w_{i-1})  or  P_m(w_i | h_i) = P(w_i | C(w_i)) P(C(w_i) | C(w_{i-n+1}), …, C(w_{i-1}))
24
• Experimental results