
Page 1:

ON REAL-TIME MEAN-AND-VARIANCE NORMALIZATION OF SPEECH RECOGNITION FEATURES

Pere Pujol, Dušan Macho, and Climent Nadeu

National ICT / TALP Research Center

Universitat Politècnica de Catalunya, Barcelona, Spain.

Presenter: Chen, Hung-Bin

ICASSP 2006

Page 2:

Outline

• Introduction
• On-line versions of the mean and variance normalization (MVN)
• Experiments
• Conclusions

Page 3:

Introduction

• In a real-time system, however, the off-line estimation of the MVN statistics involves a long delay that is likely unacceptable

• In this paper, we report an empirical investigation of several on-line versions of the mean-and-variance normalization (MVN) technique and the factors affecting their performance

– Segment-based updating of mean & variance
– Recursive updating of mean & variance

Page 4:

INVOLVED IN REAL-TIME MVN

• Mean and variance normalization

$$\hat{X}_i[n] = \frac{X_i[n] - \mu_i[n]}{\sigma_i[n] + \theta}$$

where
– $X_i[n]$: the $i$-th component of the feature vector at frame $n$ before normalization
– $\hat{X}_i[n]$: the $i$-th component of the feature vector at frame $n$ after normalization
– $\mu_i[n]$: mean estimate at frame $n$
– $\sigma_i[n]$: variance estimate at frame $n$
– $\theta$: a floor parameter
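For concreteness, here is a minimal sketch of this normalization applied off-line to a whole utterance (the UTT-MVN reference case), assuming the features are held in a NumPy array of shape (frames, coefficients); the function name and the floor value are illustrative choices, not taken from the paper.

```python
import numpy as np

def mvn_offline(X, theta=1e-6):
    """Off-line (whole-utterance) mean-and-variance normalization sketch.

    X     : array of shape (n_frames, n_coeffs), one feature vector per frame
    theta : small floor added to the standard deviation so that nearly
            constant coefficients do not blow up (illustrative value)
    """
    mu = X.mean(axis=0)                 # per-coefficient mean over the utterance
    sigma = X.std(axis=0)               # per-coefficient standard deviation
    return (X - mu) / (sigma + theta)   # normalized features
```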

Page 5:

on-line versions issues

• Segment-based updating of mean & variance
– there is a delay of half the length of the window
• Recursive updating of mean & variance
– initialized using the first D frames of the current utterance, and then the estimates are recursively updated as new frames arrive

Segment-based estimates (sliding window of length $D$ centred on frame $n$, spanning frames $n-D/2$ to $n+D/2$):

$$\mu_i[n] = \frac{1}{D}\sum_{m=n-D/2}^{n+D/2} X_i[m], \qquad
\sigma_i^2[n] = \frac{1}{D}\sum_{m=n-D/2}^{n+D/2}\bigl(X_i[m] - \mu_i[n]\bigr)^2$$

Recursive estimates (initial values $\mu_i[0]$, $\sigma_i^2[0]$ computed on the first $D$ frames, then updated as new frames arrive):

$$\mu_i[n] = \beta\,\mu_i[n-1] + (1-\beta)\,X_i[n+D]$$

$$\sigma_i^2[n] = \beta\,\sigma_i^2[n-1] + (1-\beta)\,\bigl(X_i[n+D] - \mu_i[n]\bigr)^2$$

[Figure: sliding window from $n-D/2$ to $n+D/2$ for the segment-based case; initialization interval $[0, D]$ giving $\mu_i[0]$, later updated to $\mu_i[n]$, for the recursive case]
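The two on-line variants above can be sketched as follows, again over a (frames, coefficients) NumPy array. The default window length, look-ahead and forgetting factor are illustrative (only β = 0.992 is quoted later in the deck), and the floor θ is an assumed small constant.

```python
import numpy as np

def mvn_segment(X, D=200, theta=1e-6):
    """Segment-based MVN sketch: mean/variance from a sliding window of
    D frames centred on the current frame (look-ahead of D/2 frames)."""
    n_frames = X.shape[0]
    Y = np.empty_like(X, dtype=float)
    for n in range(n_frames):
        lo, hi = max(0, n - D // 2), min(n_frames, n + D // 2 + 1)
        seg = X[lo:hi]                                   # window, clipped at utterance edges
        Y[n] = (X[n] - seg.mean(axis=0)) / (seg.std(axis=0) + theta)
    return Y

def mvn_recursive(X, D=100, beta=0.992, theta=1e-6):
    """Recursive MVN sketch: initialize the statistics on the first D frames
    (the look-ahead interval), then update them exponentially with the
    newest available frame before normalizing the current one."""
    n_frames = X.shape[0]
    mu = X[:D].mean(axis=0)                              # initial mean estimate
    var = X[:D].var(axis=0)                              # initial variance estimate
    Y = np.empty_like(X, dtype=float)
    for n in range(n_frames):
        m = min(n + D, n_frames - 1)                     # newest frame inside the look-ahead
        mu = beta * mu + (1.0 - beta) * X[m]
        var = beta * var + (1.0 - beta) * (X[m] - mu) ** 2
        Y[n] = (X[n] - mu) / (np.sqrt(var) + theta)
    return Y
```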

Page 6:

EXPERIMENTAL SETUP AND BASELINE RESULTS

• A subset of the office-environment recordings from the Spanish version of the Speecon database
– the speech recognition experiments are carried out with digit strings

• The database includes recordings with 4 microphones:
– a head-mounted close-talk (CT)
– a Lavalier mic
– a directional mic situated at 1 meter from the speaker
– an omni-directional microphone placed at 2-3 meters from the speaker

• 125 speakers were chosen for training and 75 for testing
– both balanced in terms of sex and dialect

Page 7:

baseline system results

• the resulting parameters were used as features, along with their first- and second-order time derivatives
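A minimal sketch of appending first- and second-order time derivatives to a base feature matrix follows; the regression window of ±K frames (K = 2 here) is an assumed, conventional choice rather than a value stated on the slide. For a base stream of N coefficients this yields 3N-dimensional vectors (static + Δ + ΔΔ).

```python
import numpy as np

def add_deltas(X, K=2):
    """Append first- and second-order time derivatives (deltas and
    delta-deltas) to a feature matrix X of shape (n_frames, n_coeffs),
    using a standard regression over +/- K neighbouring frames."""
    def delta(F):
        pad = np.pad(F, ((K, K), (0, 0)), mode="edge")   # repeat edge frames
        n = F.shape[0]
        num = sum(k * (pad[K + k:K + k + n] - pad[K - k:K - k + n])
                  for k in range(1, K + 1))
        return num / (2 * sum(k * k for k in range(1, K + 1)))
    d1 = delta(X)            # first-order derivative
    d2 = delta(d1)           # second-order derivative
    return np.concatenate([X, d1, d2], axis=1)
```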

Page 8:

Segment-based updating of mean & variance results

• A sliding fixed-length window centred on the current frame
– this introduces a delay of half the length of the window

[Figure: sliding window of length D spanning frames n-D/2 to n+D/2]

Page 9:

Recursive updating of mean & variance results

• Table 3 shows the results using different look-ahead values in the recursive MVN
– the entire look-ahead interval is used to calculate the initial estimates of mean and variance
– in the case of 0 sec, the first 100 ms are used

[Figure: initial estimates computed on the interval $[0, D]$, giving $\mu_i[0]$, then recursively updated to $\mu_i[n]$]

Page 10:

Recursive updating of mean & variance results

• The initial estimates were computed as in UTT-MVN
• This reinforces the importance of good initial estimates of mean & variance
– in our case, β was experimentally set to 0.992
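For intuition about this choice (the frame-shift figure below is an assumption, since the deck does not state it): an exponential update with forgetting factor $\beta$ has an effective memory of roughly $1/(1-\beta)$ frames, so

$$\frac{1}{1 - 0.992} = 125 \ \text{frames} \approx 1.25 \ \text{s at an assumed 10 ms frame shift,}$$

meaning the recursion adapts slowly and leans heavily on the initial estimates.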

Page 11:

INITIAL MEAN & VARIANCE ESTIMATED FROM PAST DATA ONLY

• The techniques in this category do not use the current utterance to compute the initial estimates of the mean & variance of the features

• The different data sources:
– Current session:
• in the current session, with fixed microphones, environment and speaker
• utterances not included in the test set
– Set of sessions:
• using utterances from a set of sessions of the Speecon database instead of utterances from the same session

Page 12:

results with initial estimates from past data

Page 13:

Conclusions

• In that case, recursive MVN performs better than segment-based MVN
• We observed that:
– the usefulness of mean and variance updating increases when the initial estimates are not representative enough for a given utterance

Page 14:

GENERALIZED PERCEPTUAL FEATURES FOR VOCALIZATION ANALYSIS ACROSS MULTIPLE SPECIES

Patrick J. Clemins, Marek B. Trawicki, Kuntoro Adi, Jidong Tao, and Michael T. Johnson

Marquette University

Department of Electrical and Computer Engineering

Speech and Signal Processing Lab

Presenter: Chen, Hung-Bin

ICASSP 2006

Page 15:


Outline

• Introduction
• The Greenwood Function Cepstral Coefficient (GFCC) model
• Experiments
• Conclusions

Page 16:


Introduction

• This paper introduces the Greenwood Function Cepstral Coefficient (GFCC) and Generalized Perceptual Linear Prediction (GPLP) feature extraction models for the analysis of animal vocalizations across arbitrary species

• Mel-Frequency Cepstral Coefficients (MFCCs) and Perceptual Linear Prediction (PLP) coefficients are well-established feature representations for human speech analysis and recognition tasks
– to generalize the frequency warping component across arbitrary species, we build on the work of Greenwood

Page 17:


Greenwood Function Cepstral Coefficient (GFCC) model

• Greenwood found that many mammals perceive frequency on a logarithmic scale along the cochlea
– He modeled this relationship with an equation of the form

$$f = A\,(10^{a x} - k)$$

where $a$, $A$, and $k$ are species-specific constants and $x$ is the cochlear position

– This equation can be used to define a frequency warping through the following equation for real frequency $f$ and perceived frequency $f_p$:

$$F_p(f) = \frac{1}{a}\,\log_{10}\!\left(\frac{f}{A} + k\right)$$

Page 18:


Greenwood Function Cepstral Coefficient (GFCC) model

• Assuming this value for k, the other two constants can be solved for given the approximate hearing range ($f_{min}$ – $f_{max}$) of the species under study
– By setting $F_p(f_{min}) = 0$ and $F_p(f_{max}) = 1$, the following equations for $a$ and $A$ are derived:

$$A = \frac{f_{min}}{1 - k} \quad \text{(when } F_p(f_{min}) = 0\text{)}$$

$$a = \log_{10}\!\left(\frac{f_{max}}{A} + k\right) \quad \text{(when } F_p(f_{max}) = 1\text{)}$$

$$F_p(f) = \frac{1}{a}\,\log_{10}\!\left(\frac{f}{A} + k\right)$$

– Thus, using the species-specific values for $f_{min}$ and $f_{max}$ and an assumed value of $k = 0.88$, a frequency warping function can be constructed
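A small sketch of this construction, using only the equations above: given a species' approximate hearing range and the assumed k = 0.88, it solves for A and a and maps between real and perceived frequency. The function names and the hearing range in the usage lines are illustrative, not values taken from the paper.

```python
import math

def greenwood_constants(f_min, f_max, k=0.88):
    """Solve for the species-specific constants A and a from the hearing
    range (f_min, f_max), using F_p(f_min) = 0 and F_p(f_max) = 1."""
    A = f_min / (1.0 - k)
    a = math.log10(f_max / A + k)
    return A, a

def greenwood_warp(f, A, a, k=0.88):
    """Perceived frequency: F_p(f) = (1/a) * log10(f/A + k)."""
    return (1.0 / a) * math.log10(f / A + k)

def greenwood_unwarp(fp, A, a, k=0.88):
    """Inverse warp: f = A * (10**(a * fp) - k); handy for placing filter
    centres uniformly on the perceptual axis, as in a GFCC-style filterbank."""
    return A * (10.0 ** (a * fp) - k)

# Illustrative usage with a hypothetical hearing range (not from the paper):
A, a = greenwood_constants(f_min=20.0, f_max=20000.0)
filter_centres = [greenwood_unwarp(i / 11.0, A, a) for i in range(1, 11)]
```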

Page 19:


Greenwood Function Cepstral Coefficient (GFCC) model

• Figure 1 shows GFCC filter positioning for African Elephant and Passeriformes (songbird) species compared to the Mel-Frequency scale

Page 20:


Greenwood Function Cepstral Coefficient (GFCC) model

• Figure 2 shows GPLP equal loudness curves for African Elephant and Passeriformes (songbird) species compared to the human equal loudness curve

Page 21:


experiments

• Speaker independent song-type classification experiments were performed across the 5 most common song types using 50 exemplars of each song-type, each containing multiple song-type variants.

Page 22:


experiments

• Song-type dependent speaker identification experiments were performed using 25 exemplars of the most frequent song-type for each of the 6 vocalizing buntings.

Page 23:


experiments

• Call-type dependent speaker identification experiments were performed using the entire Loxodonta africana dataset
– there were a total of six elephants, with 20, 30, 14, 17, 34, and 28 rumble exemplars per elephant

Page 24:


Conclusions

• New feature extraction models have been introduced for the analysis and classification of animal vocalizations.