
Page 1

Digital Speech Processing — Lecture 16

Speech Coding Methods Based on Speech Waveform Representations and Speech Models — Adaptive and Differential Coding

Speech Waveform Coding - Summary of Part 1

1. Probability density function for speech samples

Gamma: $p(x) = \left[\dfrac{\sqrt{3}}{8\pi\sigma_x |x|}\right]^{1/2} e^{-\sqrt{3}|x|/(2\sigma_x)}$, with $p(0) = \infty$

Laplacian: $p(x) = \dfrac{1}{\sqrt{2}\,\sigma_x}\, e^{-\sqrt{2}|x|/\sigma_x}$, with $p(0) = \dfrac{1}{\sqrt{2}\,\sigma_x}$

2. Coding paradigms

• uniform -- divide the interval from $+X_{max}$ to $-X_{max}$ into $2^B$ intervals of length $\Delta = 2X_{max}/2^B$ for a B-bit quantizer, with $-X_{max} = -4\sigma_x$ and $+X_{max} = +4\sigma_x$

Speech Waveform Coding - Summary of Part 1

$$\hat{x}(n) = x(n) + e(n)$$

$$SNR\,(\mathrm{dB}) = 6B + 4.77 - 20\log_{10}\left(\frac{X_{max}}{\sigma_x}\right) \qquad \text{(uniform quantizer)}$$

The quantization error is uniformly distributed: $p(e) = 1/\Delta$ for $-\Delta/2 \le e \le \Delta/2$.

X_max/σ_x    20·log10(X_max/σ_x)    SNR in dB (B = 8)
    2               6.02                 46.75
    4              12.04                 40.73
    8              18.06                 34.71
   16              24.08                 28.69
   32              30.10                 22.67
   64              36.12                 16.65

• sensitivity to X_max/σ_x (σ_x varies a lot!!!); not a great use of bits for actual speech densities
• 30 dB loss as X_max/σ_x varies over a 32:1 range
• X_max (or equivalently σ_x) varies a lot across sounds, speakers, and environments ⇒ need to adapt the coder (Δ(n)) to the time-varying σ_x; the key question is how to adapt

Speech Waveform Coding - Summary of Part 1

• pseudo-logarithmic (constant percentage error) companding:
  - compress x(n) by a pseudo-logarithmic compander
  - quantize the companded x(n) uniformly
  - expand the quantized signal

$$y(n) = F[x(n)] = X_{max}\,\frac{\log\left[1 + \mu\,\dfrac{|x(n)|}{X_{max}}\right]}{\log(1+\mu)}\;\mathrm{sign}[x(n)]$$

For large $|x(n)|$:

$$y(n) \approx X_{max}\,\frac{\log\left[\mu\,\dfrac{|x(n)|}{X_{max}}\right]}{\log \mu}$$

Speech Waveform Coding - Summary of Part 1

$$SNR\,(\mathrm{dB}) = 6B + 4.77 - 20\log_{10}\left[\ln(1+\mu)\right] - 10\log_{10}\left[1 + \left(\frac{X_{max}}{\mu\sigma_x}\right)^2 + \sqrt{2}\,\frac{X_{max}}{\mu\sigma_x}\right]$$

• insensitive to X_max/σ_x over a wide range for large μ
• maximum SNR coding — match the signal quantization intervals to a model probability distribution (Gamma, Laplacian)
• interesting — at least theoretically
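To make the companding idea concrete, here is a minimal sketch (Python/NumPy) of μ-law compression, uniform quantization, and expansion. The test signal, μ = 255, B = 8, and X_max are illustrative assumptions, not values fixed by the slides.

```python
import numpy as np

def mu_law_compress(x, x_max, mu=255.0):
    # y(n) = X_max * log(1 + mu*|x|/X_max) / log(1 + mu) * sign(x)
    return x_max * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu) * np.sign(x)

def mu_law_expand(y, x_max, mu=255.0):
    # exact inverse of the compressor
    return (x_max / mu) * np.expm1(np.abs(y) / x_max * np.log1p(mu)) * np.sign(y)

def uniform_quantize(y, x_max, B):
    # B-bit midrise quantizer on [-X_max, +X_max] with step 2*X_max / 2^B
    delta = 2 * x_max / 2 ** B
    q = np.floor(y / delta) * delta + delta / 2
    return np.clip(q, -x_max + delta / 2, x_max - delta / 2)

# toy Laplacian-distributed signal with sigma_x well below X_max
rng = np.random.default_rng(0)
x = rng.laplace(scale=0.05, size=100_000)
x_max = 1.0
x_hat = mu_law_expand(uniform_quantize(mu_law_compress(x, x_max), x_max, B=8), x_max)
snr_db = 10 * np.log10(np.mean(x ** 2) / np.mean((x - x_hat) ** 2))
print(f"mu-law SNR: {snr_db:.1f} dB")   # stays near-constant as the input level varies
```

Rescaling `x` up or down by 10-20 dB and rerunning shows the flat-SNR behavior the formula above predicts for large μ.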

Adaptive Quantization

• linear quantization ⇒ SNR depends on σ_x being constant (this is clearly not the case)
• instantaneous companding ⇒ SNR only weakly dependent on X_max/σ_x for large μ-law compression (μ = 100-500)
• optimum SNR ⇒ minimize σ_e² when σ_x is known; non-uniform distribution of quantization levels

Quantization dilemma: we want to choose the quantization step size large enough to accommodate the maximum peak-to-peak range of x(n); at the same time we need to make the quantization step size small so as to minimize the quantization error
– the non-stationary nature of speech (variability across sounds, speakers, backgrounds) compounds this problem greatly

Page 2

Solutions to the Quantization Dilemma

Adaptive Quantization:
• Solution 1 – let Δ vary to match the variance of the input signal ⇒ Δ(n)
• Solution 2 – use a variable gain, G(n), followed by a fixed quantizer step size, Δ ⇒ keep the signal variance of y(n) = G(n)x(n) constant

Case 1: Δ(n) proportional to σ_x ⇒ the quantization levels and ranges would be linearly scaled to match σ_x ⇒ need to reliably estimate σ_x²
Case 2: G(n) proportional to 1/σ_x to give σ_y² ≈ constant
• need a reliable estimate of σ_x² for both types of adaptive quantization

Types of Adaptive Quantization

• instantaneous — amplitude changes reflect sample-to-sample variations in x[n] ⇒ rapid adaptation
• syllabic — amplitude changes reflect syllable-to-syllable variations in x[n] ⇒ slow adaptation
• feed-forward — adaptive quantizers that estimate σ_x² from x[n] itself
• feedback — adaptive quantizers that adapt the step size, Δ, on the basis of the quantized signal, x̂[n] (or equivalently the codewords, c[n])

Feed-Forward Adaptation

Variable step size:
• assume a uniform quantizer with step size Δ[n]
• x[n] is quantized using Δ[n] ⇒ c[n] and Δ[n] need to be transmitted to the decoder
• if c′[n] = c[n] and Δ′[n] = Δ[n] ⇒ no errors in the channel, and x̂′[n] = x̂[n]
• we don't have x[n] at the decoder to estimate Δ[n] ⇒ need to transmit Δ[n]; this is a major drawback of feed-forward adaptation

Feed-Forward Quantizer

• time-varying gain, G[n] ⇒ c[n] and G[n] need to be transmitted to the decoder
• we can't estimate G[n] at the decoder ⇒ it has to be transmitted

Feed-Forward Quantizers

• feed-forward systems make estimates of σ_x², then make Δ or the quantization levels proportional to σ_x, or make the gain inversely proportional to σ_x
• assume a short-time energy estimate

$$\sigma^2[n] = \sum_{m=-\infty}^{\infty} x^2[m]\,h[n-m] \Big/ \sum_{m=0}^{\infty} h[m], \qquad \text{where } h[n] \text{ is a lowpass filter}$$

• $E\left[\sigma^2[n]\right] \propto \sigma_x^2$ (this can be shown)
• consider $h[n] = \alpha^{n-1}$ for $n \ge 1$ (0 otherwise), with $0 < \alpha < 1$; then

$$\sigma^2[n] = (1-\alpha)\sum_{m=-\infty}^{n-1} \alpha^{\,n-m-1}\,x^2[m] = \alpha\,\sigma^2[n-1] + (1-\alpha)\,x^2[n-1] \quad \text{(recursion)}$$

• this gives $\Delta[n] = \Delta_0\,\sigma[n]$ and $G[n] = G_0/\sigma[n]$

Slowly Adapting Gain Control

$$y[n] = G[n]\,x[n], \qquad \hat{y}[n] = Q\{G[n]\,x[n]\}$$

$$\sigma^2[n] = \alpha\,\sigma^2[n-1] + (1-\alpha)\,x^2[n-1] = (1-\alpha)\sum_{m=-\infty}^{n-1} \alpha^{\,n-m-1}\,x^2[m]$$

$$G[n] = \frac{G_0}{\sigma[n]}, \qquad \Delta[n] = \Delta_0\,\sigma[n], \qquad \hat{x}[n] = Q_{\Delta[n]}\{x[n]\}$$

$$G_{min} \le G[n] \le G_{max}, \qquad \Delta_{min} \le \Delta[n] \le \Delta_{max}$$

α = 0.99 ⇒ brings up the level in low-amplitude regions ⇒ time constant of 100 samples (12.5 msec for an 8 kHz sampling rate) ⇒ syllabic rate
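The recursion above is a one-line update per sample. Below is a minimal sketch (Python/NumPy) of feed-forward gain control driven by the running variance estimate; G0 and the clamping limits are illustrative assumptions.

```python
import numpy as np

def adaptive_gain(x, alpha=0.99, g0=1.0, g_min=0.1, g_max=10.0):
    """Feed-forward gain control: sigma^2[n] = alpha*sigma^2[n-1] + (1-alpha)*x^2[n-1]."""
    sigma2 = np.var(x) + 1e-12              # initialize with a global estimate
    y = np.empty_like(x)
    gains = np.empty_like(x)
    for n in range(len(x)):
        g = np.clip(g0 / np.sqrt(sigma2), g_min, g_max)  # G[n] = G0/sigma[n], limited
        gains[n] = g
        y[n] = g * x[n]                     # y[n] = G[n] x[n], fed to a fixed quantizer
        sigma2 = alpha * sigma2 + (1 - alpha) * x[n] ** 2  # becomes sigma^2[n+1]
    return y, gains
```

With α = 0.99 the estimate has the ~100-sample (syllabic) time constant described above; setting α = 0.9 gives the rapid (~10 sample) adaptation of the next slide.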

Page 3

Rapidly Adapting Gain Control

Same system as above: $y[n] = G[n]x[n]$, $\sigma^2[n] = \alpha\,\sigma^2[n-1] + (1-\alpha)\,x^2[n-1]$, $G[n] = G_0/\sigma[n]$, $\Delta[n] = \Delta_0\,\sigma[n]$, $\hat{x}[n] = Q_{\Delta[n]}\{x[n]\}$, with $G_{min} \le G[n] \le G_{max}$ and $\Delta_{min} \le \Delta[n] \le \Delta_{max}$.

α = 0.9 ⇒ the system reacts to amplitude variations more rapidly ⇒ provides a better approximation to σ_y² = constant ⇒ time constant of 9 samples (about 1 msec at 8 kHz) for a change ⇒ instantaneous rate

Feed-Forward Quantizers

• Δ[n] and G[n] vary slowly compared to x[n]
  – they must be sampled and transmitted as part of the waveform coder parameters
  – the rate of sampling depends on the bandwidth of the lowpass filter, h[n]: for α = 0.99 the rate is about 13 Hz; for α = 0.9 the rate is about 135 Hz
• it is reasonable to place limits on the variation of Δ[n] or G[n], of the form

$$G_{min} \le G[n] \le G_{max}, \qquad \Delta_{min} \le \Delta[n] \le \Delta_{max}$$

• for obtaining σ_y² ≈ constant over a 40 dB range in signal levels ⇒

$$\frac{\Delta_{max}}{\Delta_{min}} = \frac{G_{max}}{G_{min}} = 100 \quad \text{(40 dB range)}$$

Feed-Forward Adaptation Gain

$$\sigma^2[n] = \frac{1}{M}\sum_{m=n}^{n+M-1} x^2[m]$$

• Δ[n] or G[n] evaluated every M samples
• used M = 128 and M = 1024 samples for the estimates
• the adaptive quantizer achieves up to 5.6 dB better SNR than non-adaptive quantizers
• can achieve this SNR with low "idle channel noise" and a wide speech dynamic range by suitable choice of Δ_min and Δ_max

(figure: feed-forward adaptation gain with B = 3 — less gain for M = 1024 than for M = 128 by 3 dB ⇒ M = 1024 is too long an interval)

Feedback Adaptation

• σ²[n] is estimated from the quantizer output (or the codewords)
• the advantage of feedback adaptation is that neither Δ[n] nor G[n] needs to be transmitted to the decoder, since they can be derived from the codewords
• the disadvantage of feedback adaptation is increased sensitivity to errors in the codewords, since such errors affect Δ[n] and G[n]

Feedback Adaptation

$$\hat{\sigma}^2[n] = \sum_{m=-\infty}^{\infty} \hat{x}^2[m]\,h[n-m] \Big/ \sum_{m=0}^{\infty} h[m]$$

• $\hat{\sigma}^2[n]$ is based only on past values of $\hat{x}[n]$
• two typical windows/filters are:

1. $h[n] = \alpha^{n-1}$, $n \ge 1$; 0 otherwise
2. $h[n] = 1/M$, $1 \le n \le M$; 0 otherwise $\;\Rightarrow\; \hat{\sigma}^2[n] = \dfrac{1}{M}\displaystyle\sum_{m=n-M}^{n-1} \hat{x}^2[m]$

• can use very short window lengths (e.g., M = 2) to achieve 12 dB SNR for a B = 3 bit quantizer

Alternative Approach to Adaptation

$$\Delta[n] = P \cdot \Delta[n-1]; \qquad P \in \{P_1, P_2, P_3, P_4\}, \qquad P \propto |c[n-1]|$$

$$\hat{x}(n) = \frac{\Delta[n]}{2}\,\mathrm{sign}\!\left(c[n]\right) + \Delta[n]\,c[n]$$

• Δ[n] only depends on Δ[n−1] and c[n−1] ⇒ only need to transmit the codewords
• it is also necessary to impose the limits $\Delta_{min} \le \Delta[n] \le \Delta_{max}$
• the ratio $\Delta_{max}/\Delta_{min}$ controls the dynamic range of the quantizer

Page 4

Adaptation Gain

• the key issue is how P should vary with |c[n−1]|:
– if c[n−1] is either the largest positive or largest negative codeword, then the quantizer is overloaded and the quantizer step size is too small ⇒ P₄ > 1
– if c[n−1] is either the smallest positive or negative codeword, then the quantization error is too large ⇒ P₁ < 1
– need choices for P₂ and P₃

Adaptation Gain

$$Q = \frac{1 + 2\,|c[n-1]|}{2^B - 1}$$

• the shaded area is the variation in the range of P values due to different speech sounds or different B values
• can see that step size increases (P > 1) are more vigorous than step size decreases (P < 1), since signal growth needs to be kept within the quantizer range to avoid 'overloads'

Optimal Step Size Multipliers

• optimal values of P for B = 2, 3, 4, 5
• improvements in SNR:
  – 4-7 dB improvement over μ-law
  – 2-4 dB improvement over non-adaptive optimum quantizers

Quantization of Speech Model Parameters

• The excitation and vocal tract (linear system) are characterized by sets of parameters which can be estimated from a speech signal by LP or cepstral processing.
• We can use the set of estimated parameters to synthesize an approximation to the speech signal whose quality depends on a range of factors.

Quantization of Speech Model Parameters

• The quality and data rate of the synthesis depend on:
  – the ability of the model to represent speech
  – the ability to reliably and accurately estimate the parameters of the model
  – the ability to quantize the parameters in order to obtain a low-data-rate digital representation that will yield a high-quality reproduction of the speech signal

Closed-Loop and Open-Loop Speech Coders

Closed-loop – used in a feedback loop where the synthetic speech output is compared to the input signal, and the resulting difference is used to determine the excitation for the vocal tract model.

Open-loop – the parameters of the model are estimated directly from the speech signal, with no feedback as to the quality of the resulting synthetic speech.

Page 5

Scalar Quantization

• Scalar quantization – treat each model parameter separately and quantize it using a fixed number of bits
  – need to measure (estimate) statistics of each parameter, i.e., mean, variance, minimum/maximum value, pdf, etc.
  – each parameter has a different quantizer with a different number of bits allocated
• Example of scalar quantization:
  – the pitch period typically ranges from 20-150 samples (at an 8 kHz sampling rate) ⇒ need about 128 values (7 bits) uniformly over the range of pitch periods, including a value of zero for unvoiced/background
  – an amplitude parameter might be quantized with a μ-law quantizer using 4-5 bits per sample
  – using a frame rate of 100 frames/sec, you would need about 700 bps for the pitch period and 400-500 bps for amplitude
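The rate budget in this example is easy to check mechanically; a tiny sketch (Python, using the slide's own numbers) follows.

```python
# scalar-quantization rate budget from the example above
frame_rate = 100                    # frames/sec
pitch_bits = 7                      # ~128 pitch values, incl. zero for unvoiced
amp_bits_lo, amp_bits_hi = 4, 5     # mu-law amplitude, 4-5 bits/frame

print(pitch_bits * frame_rate)                              # 700 bps for pitch
print(amp_bits_lo * frame_rate, amp_bits_hi * frame_rate)   # 400-500 bps for amplitude
```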

Scalar Quantization

• 20th-order LPC analysis frame.
• Each PARCOR coefficient is transformed to the range $-\pi/2 < \sin^{-1}(k_i) < \pi/2$ and then quantized with both a 4-bit and a 3-bit uniform quantizer.
• Total rate of the quantized representation of speech is about 5000 bps.

Techniques of Vector Quantization

Vector Quantization

• code a block of scalars as a vector, rather than individually
• design an optimal quantization method based on a mean-squared distortion metric
• essential for model-based and hybrid coders

Vector Quantization

Waveform Coding Vector Quantizer

VQ codes pairs of waveform samples, X[n] = (x[2n], x[2n+1]):
(b) single-element codebook with the cluster centroid (0-bit codebook)
(c) two-element codebook with two cluster centers (1-bit codebook)
(d) four-element codebook with four cluster centers (2-bit codebook)
(e) eight-element codebook with eight cluster centers (3-bit codebook)

Page 6

Toy Example of VQ Coding

• 2-pole model of the vocal tract ⇒ 4 reflection coefficients
• 4 possible vocal tract shapes ⇒ 4 sets of reflection coefficients

1. Scalar quantization – assume 4 values for each reflection coefficient ⇒ 2 bits × 4 coefficients = 8 bits/frame
2. Vector quantization – only 4 possible vectors ⇒ 2 bits to choose which of the 4 vectors to use for each frame (a pointer into a codebook)

• this works because the scalar components of each vector are highly correlated
• if the scalar components are independent ⇒ VQ offers no advantage over scalar quantization

Elements of a VQ Implementation

1. A large training set of analysis vectors, X = {X₁, X₂, ..., X_L}; L should be much larger than the size of the codebook, M, i.e., 10-100 times the size of M.
2. A measure of distance, d_ij = d(X_i, X_j), between a pair of analysis vectors, both for clustering the training set and for classifying test-set vectors into unique codebook entries.
3. A centroid computation procedure and a centroid splitting procedure.
4. A classification procedure for arbitrary analysis vectors that chooses the codebook vector closest in distance to the input vector, providing the codebook index of the resulting nearest codebook vector.

VQ Implementation

The VQ Training Set

• The VQ training set of L ≥ 10M vectors should span the anticipated range of:
  – talkers, ranging in age, accent, gender, speaking rate, speaking levels, etc.
  – speaking conditions, ranging from quiet rooms, to automobiles, to noisy work places
  – transducers and transmission systems, including a range of microphones, telephone handsets, cellphones, speakerphones, etc.
  – speech, including carefully recorded material, conversational speech, telephone queries, etc.

The VQ Distance Measure

• The VQ distance measure depends critically on the nature of the analysis vector, X.
  – If X is a log spectral vector, then a possible distance measure would be an $L_p$ log spectral distance, of the form:

$$d(X_i, X_j) = \left[\sum_{k=1}^{R} \left|x_i^k - x_j^k\right|^p\right]^{1/p}$$

  – If X is a cepstral vector, then the distance measure might well be a cepstral distance of the form:

$$d(X_i, X_j) = \left[\sum_{k=1}^{R} \left(x_i^k - x_j^k\right)^2\right]^{1/2}$$

Clustering Training Vectors

• The goal is to cluster the set of L training vectors into a set of M codebook vectors using the generalized Lloyd algorithm (also known as the K-means clustering algorithm), with the following steps (a code sketch follows this list):
1. Initialization – arbitrarily choose M vectors (initially out of the training set of L vectors) as the initial set of codewords in the codebook
2. Nearest-Neighbor Search – for each training vector, find the codeword in the current codebook that is closest (in distance) and assign that vector to the corresponding cell
3. Centroid Update – update the codeword in each cell to the centroid of all the training vectors assigned to that cell in the current iteration
4. Iteration – repeat steps 2 and 3 until the average distance between centroids at successive iterations falls below a preset threshold
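A minimal sketch of these four steps (Python/NumPy, squared-Euclidean distance; the optional `init` argument anticipates the binary-split design a few slides below):

```python
import numpy as np

def kmeans_vq(train, M, init=None, tol=1e-4, seed=0):
    """Generalized Lloyd / K-means codebook design for an (L, K) training array."""
    rng = np.random.default_rng(seed)
    # 1. initialization: a supplied codebook (e.g., from a binary split) or
    #    M vectors drawn at random from the training set
    codebook = (init.copy() if init is not None
                else train[rng.choice(len(train), M, replace=False)].astype(float))
    prev = np.inf
    while True:
        # 2. nearest-neighbor search (squared Euclidean distance to every codeword)
        d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        cells = d.argmin(axis=1)
        # 3. centroid update: the L2 centroid is the mean of each cell's vectors
        for m in range(M):
            if np.any(cells == m):
                codebook[m] = train[cells == m].mean(axis=0)
        # 4. iterate until the average distortion stops decreasing
        avg = d[np.arange(len(train)), cells].mean()
        if prev - avg < tol:
            return codebook, cells
        prev = avg
```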

Page 7

Clustering Training Vectors

(figure: Voronoi regions and centroids)

Centroid Computation

Assume we have a set of V vectors, $X^C = \{X_1^C, X_2^C, \ldots, X_V^C\}$, where all vectors are assigned to cluster C. The centroid of the set $X^C$ is defined as the vector Y that minimizes the average distortion, i.e.,

$$Y = \arg\min_{Y}\;\frac{1}{V}\sum_{i=1}^{V} d\!\left(X_i^C, Y\right)$$

The solution for the centroid is highly dependent on the choice of distance measure. When both $X_i^C$ and Y are measured in a K-dimensional space with the $L_2$ norm, the centroid is the mean of the vector set:

$$Y = \frac{1}{V}\sum_{i=1}^{V} X_i^C$$

When using an $L_1$ distance measure, the centroid is the median vector of the set of vectors assigned to the given class.

Vector Classification Procedure

The classification procedure for arbitrary test-set vectors is a full search through the codebook to find the "best" (minimum-distance) match. If we denote the codebook vectors of an M-vector codebook as $CB_i$, $1 \le i \le M$, and we denote the vector to be classified (and vector quantized) as X, then the index, $i^*$, of the best codebook entry is:

$$i^* = \arg\min_{1 \le i \le M} d\!\left(X, CB_i\right)$$

Binary Split Codebook Design

1. Design a 1-vector codebook; the single vector in the codebook is the centroid of the entire set of training vectors.
2. Double the size of the codebook by splitting each current codebook vector, $Y_m$, according to the rule:

$$Y_m^+ = Y_m(1 + \varepsilon), \qquad Y_m^- = Y_m(1 - \varepsilon)$$

where m varies from 1 to the size of the current codebook, and ε is a splitting parameter (0.01 typically).
3. Use the K-means clustering algorithm to get the best set of centroids for the split codebook.
4. Iterate steps 2 and 3 until a codebook of size M is designed.
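Building on the `kmeans_vq` sketch above, a minimal binary-split (LBG) driver might look like this (assumes M is a power of 2):

```python
import numpy as np

def lbg_codebook(train, M, eps=0.01):
    """Binary-split (LBG) codebook design: 1 -> 2 -> 4 -> ... -> M codewords."""
    codebook = train.mean(axis=0, keepdims=True)          # step 1: global centroid
    while len(codebook) < M:
        # step 2: split each codeword Y into Y(1 + eps) and Y(1 - eps)
        codebook = np.concatenate([codebook * (1 + eps), codebook * (1 - eps)])
        # step 3: refine the doubled codebook with K-means, starting from the split
        codebook, _ = kmeans_vq(train, len(codebook), init=codebook)
    return codebook                                        # step 4: stop at size M
```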

Binary Split Algorithm

(figure: flowchart of the binary-split procedure)

VQ Codebook of LPC Vectors

(figure: the 64 vectors in a codebook of spectral shapes)

Page 8

VQ Cells

(figures: VQ cells)

VQ Coding for Speech

• 'distortion' in coding is computed using a spectral distortion measure related to the difference in log spectra between the original and the codebook vectors
• 10-bit VQ is comparable to 24-bit scalar quantization for these examples

Differential Quantization

Theory and Practice

Differential Quantization

• we have carried instantaneous quantization of x[n] about as far as possible
• time to consider correlations between speech samples separated in time ⇒ differential quantization
• high correlation values ⇒ the signal does not change rapidly in time ⇒ the difference between adjacent samples should have lower variance than the signal itself
• differential quantization can increase SNR at a given bit rate, or lower the bit rate for a given SNR

Example of Difference Signal

(figure: prediction error from LPC analysis using p = 12; the prediction error is about 2.5 times smaller than the signal)

Page 9

Differential Quantization

(a) Coder: x[n] enters an adder that subtracts the prediction x̃[n] to form d[n]; d[n] is quantized by Q[·] with step size Δ to give d̂[n], which is encoded into c[n]; d̂[n] plus x̃[n] forms x̂[n], the input to the predictor P

$$d[n] = x[n] - \tilde{x}[n]$$

where
• x[n] = unquantized input sample
• x̃[n] = estimate or prediction of x[n]; x̃[n] is the output of a predictor system, P, whose input is x̂[n], a quantized version of x[n]
• d[n] = prediction error signal
• d̂[n] = quantized difference (prediction error) signal

Differential Quantization

(b) Decoder: c′[n] is decoded with step size Δ into d̂′[n], which is added to the predictor output to form x̂′[n]; the predictor P is driven by x̂′[n]

The same relations hold as at the coder: $d[n] = x[n] - \tilde{x}[n]$, with x̃[n] the output of the predictor P driven by x̂[n], d[n] the prediction error signal, and d̂[n] the quantized difference (prediction error) signal.

Differential Quantization

$$d[n] = x[n] - \tilde{x}[n], \qquad \hat{d}[n] = d[n] + e[n]$$

The feedback part of the coder reconstructs the quantized signal, x̂[n]:

$$\hat{x}[n] = \tilde{x}[n] + \hat{d}[n] \;\Rightarrow\; \hat{x}[n] = x[n] + e[n]$$

with predictor

$$P(z) = \sum_{k=1}^{p} \alpha_k z^{-k}, \qquad \tilde{x}[n] = \sum_{k=1}^{p} \alpha_k\,\hat{x}[n-k]$$

Differential Quantization

• the difference signal, d[n], is quantized, not x[n]
• the quantizer can be fixed or adaptive, uniform or non-uniform
• the quantizer parameters are adjusted to match the variance of d[n]

$$\hat{d}[n] = d[n] + e[n], \qquad e[n] = \text{quantization error}$$
$$\hat{x}[n] = \tilde{x}[n] + \hat{d}[n] \qquad \text{(predicted plus quantized)}$$
$$\hat{x}[n] = x[n] + e[n]$$

⇒ the quantized input has the same quantization error as the difference signal; if $\sigma_d^2 < \sigma_x^2$, the error is smaller
• independent of the predictor, P, the quantized x̂[n] differs from the unquantized x[n] by e[n], the quantization error of the difference signal!
⇒ good prediction gives lower quantization error than quantizing the input directly

Differential Quantization

• the quantized difference signal is encoded into c[n] for transmission
• first reconstruct the quantized difference signal from the decoder codeword, c′[n], and the step size Δ
• next reconstruct the quantized input signal using the same predictor, P, as used in the encoder

$$\hat{X}'(z) = \hat{D}'(z) + P(z)\,\hat{X}'(z)$$
$$\hat{X}'(z) = \frac{\hat{D}'(z)}{1 - P(z)} = H(z)\,\hat{D}'(z), \qquad H(z) = \frac{1}{1 - \sum_{k=1}^{p} \alpha_k z^{-k}}$$
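To make the encoder/decoder loop concrete, here is a minimal first-order DPCM sketch (Python/NumPy; the uniform quantizer, its step size, and alpha are illustrative assumptions). Note that the encoder predicts from the quantized signal x̂, exactly as in the block diagrams, so encoder and decoder stay in sync.

```python
import numpy as np

def dpcm_encode(x, alpha=0.85, delta=0.02, B=3):
    """First-order DPCM: d[n] = x[n] - alpha*x_hat[n-1]; quantize d; feed back."""
    c_max = 2 ** (B - 1) - 1
    codes = np.empty(len(x), dtype=int)
    x_hat = 0.0
    for n in range(len(x)):
        d = x[n] - alpha * x_hat                          # prediction error d[n]
        c = int(np.clip(np.round(d / delta), -c_max - 1, c_max))
        codes[n] = c
        x_hat = alpha * x_hat + c * delta                 # x_hat[n] = x_tilde + d_hat
    return codes

def dpcm_decode(codes, alpha=0.85, delta=0.02):
    """Same predictor as the encoder: x_hat'[n] = alpha*x_hat'[n-1] + d_hat'[n]."""
    out = np.empty(len(codes))
    x_hat = 0.0
    for n, c in enumerate(codes):
        x_hat = alpha * x_hat + c * delta
        out[n] = x_hat
    return out
```

With an error-free channel, `dpcm_decode(dpcm_encode(x))` differs from x only by e[n], the quantization error of the difference signal, as stated above.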

SNR for Differential Quantization

The SNR of the differential coding system is

$$SNR = \frac{E\left[x^2[n]\right]}{E\left[e^2[n]\right]} = \frac{\sigma_x^2}{\sigma_e^2} = \frac{\sigma_d^2}{\sigma_e^2}\cdot\frac{\sigma_x^2}{\sigma_d^2} = SNR_Q \cdot G_P$$

where

$$SNR_Q = \frac{\sigma_d^2}{\sigma_e^2} = \text{signal-to-quantizing-noise ratio of the quantizer}$$

$$G_P = \frac{\sigma_x^2}{\sigma_d^2} = \text{gain due to differential quantization}$$

Page 10

SNR for Differential Quantization

• $SNR_Q$ depends on the chosen quantizer and can be maximized using all of the previous quantization methods (uniform, non-uniform, optimal)
• $G_P$, hopefully > 1, is the gain in SNR due to differential coding
⇒ want to choose the predictor, P, to maximize $G_P$; since $\sigma_x^2$ is fixed, we need to minimize $\sigma_d^2$, i.e., design the best predictor

Predictor for Differential Quantization

• consider the class of linear predictors

$$\tilde{x}[n] = \sum_{k=1}^{p} \alpha_k\,\hat{x}[n-k]$$

• x̃[n] is a linear combination of previous quantized values of x[n]
• the predictor z-transform is

$$P(z) = \sum_{k=1}^{p} \alpha_k z^{-k}$$

• predictor system function with predictor impulse response coefficients (FIR filter): $p[n] = \alpha_n$, $1 \le n \le p$; 0 otherwise

Predictor for Differential Quantization

• the reconstructed signal is the output, x̂[n], of a system with system function

$$H(z) = \frac{\hat{X}(z)}{\hat{D}(z)} = \frac{1}{1 - P(z)} = \frac{1}{1 - \sum_{k=1}^{p}\alpha_k z^{-k}}$$

• where the input to the system is the quantized difference signal, d̂[n]:

$$\hat{d}[n] = \hat{x}[n] - \sum_{k=1}^{p}\alpha_k\,\hat{x}[n-k], \qquad \text{where} \qquad \tilde{x}[n] = \sum_{k=1}^{p}\alpha_k\,\hat{x}[n-k]$$

Predictor for Differential Quantization

• to solve for the optimum predictor, we need an expression for $\sigma_d^2$:

$$\sigma_d^2 = E\left[d^2[n]\right] = E\left[\left(x[n] - \tilde{x}[n]\right)^2\right] = E\left[\left(x[n] - \sum_{k=1}^{p}\alpha_k\,\hat{x}[n-k]\right)^{\!2}\right]$$

$$= E\left[\left(x[n] - \sum_{k=1}^{p}\alpha_k\left(x[n-k] + e[n-k]\right)\right)^{\!2}\right]$$

using $\hat{x}[n] = x[n] + e[n]$.

Solution for Optimum Predictor

• want to choose $\{\alpha_j\}$, $1 \le j \le p$, to minimize $\sigma_d^2$ ⇒ differentiate with respect to $\alpha_j$ and set the derivatives to zero, giving

$$\frac{\partial \sigma_d^2}{\partial \alpha_j} = -2\,E\left[\left(x[n] - \sum_{k=1}^{p}\alpha_k\left(x[n-k]+e[n-k]\right)\right)\cdot\left(x[n-j]+e[n-j]\right)\right] = 0, \quad 1 \le j \le p$$

which can be written in the more compact form

$$E\left[\left(x[n] - \tilde{x}[n]\right)\cdot\hat{x}[n-j]\right] = E\left[d[n]\cdot\hat{x}[n-j]\right] = 0, \quad 1 \le j \le p$$

• the predictor coefficients that minimize $\sigma_d^2$ are the ones that make the difference signal, d[n], uncorrelated with past values of the predictor input, $\hat{x}[n-j]$, $1 \le j \le p$

Solution for Alphas

$$E\left[\left(x[n] - \tilde{x}[n]\right)\cdot\hat{x}[n-j]\right] = E\left[d[n]\cdot\hat{x}[n-j]\right] = 0, \quad 1 \le j \le p$$

• basic equations of differential coding:

$$\hat{d}[n] = d[n] + e[n] \qquad \text{(quantization of the difference signal)}$$
$$\hat{x}[n] = x[n] + e[n] \qquad \text{(same error for the original signal)}$$
$$\hat{x}[n] = \tilde{x}[n] + \hat{d}[n] \qquad \text{(feedback loop for the signal)}$$
$$\tilde{x}[n] = \sum_{k=1}^{p}\alpha_k\,\hat{x}[n-k] \qquad \text{(prediction loop based on the quantized input)}$$
$$d[n] = x[n] - \sum_{k=1}^{p}\alpha_k\,\hat{x}[n-k] \qquad \text{(direct substitution)}$$
$$\hat{x}[n-j] = x[n-j] + e[n-j]$$
$$\tilde{x}[n] = \sum_{k=1}^{p}\alpha_k\,\hat{x}[n-k] = \sum_{k=1}^{p}\alpha_k\left(x[n-k] + e[n-k]\right)$$

Page 11

Solution for Alphas

Substituting the basic equations into $E\left[\left(x[n]-\tilde{x}[n]\right)\hat{x}[n-j]\right] = 0$, $1 \le j \le p$, and expanding:

$$E\left[x[n]\,x[n-j]\right] + E\left[x[n]\,e[n-j]\right] = E\left[\left(\sum_{k=1}^{p}\alpha_k\left(x[n-k]+e[n-k]\right)\right)\left(x[n-j]+e[n-j]\right)\right]$$

$$= \sum_{k=1}^{p}\alpha_k E\left[x[n-k]\,x[n-j]\right] + \sum_{k=1}^{p}\alpha_k E\left[e[n-k]\,x[n-j]\right] + \sum_{k=1}^{p}\alpha_k E\left[x[n-k]\,e[n-j]\right] + \sum_{k=1}^{p}\alpha_k E\left[e[n-k]\,e[n-j]\right]$$

Solution for Optimum Predictor

• solution for $\alpha_k$ — first expand terms to give

$$E\left[x[n-j]\,x[n]\right] + E\left[e[n-j]\,x[n]\right] = \sum_{k=1}^{p}\alpha_k E\left[x[n-j]\,x[n-k]\right] + \sum_{k=1}^{p}\alpha_k E\left[e[n-j]\,x[n-k]\right]$$
$$+ \sum_{k=1}^{p}\alpha_k E\left[x[n-j]\,e[n-k]\right] + \sum_{k=1}^{p}\alpha_k E\left[e[n-j]\,e[n-k]\right], \quad 1 \le j \le p$$

• assume fine quantization, so that e[n] is uncorrelated with x[n], and e[n] is stationary white noise (zero mean), giving

$$E\left[x[n-j]\cdot e[n-k]\right] = 0 \quad \forall\; n, j, k$$
$$E\left[e[n-j]\cdot e[n-k]\right] = \sigma_e^2\,\delta[j-k]$$

Solution for Optimum Predictor

• we can now simplify the solution to the form

$$\phi[j] = \sum_{k=1}^{p}\alpha_k\left[\phi[j-k] + \sigma_e^2\,\delta[j-k]\right], \quad 1 \le j \le p$$

where $\phi[j]$ is the autocorrelation of x[n]. Defining $\rho[j] = \phi[j]/\sigma_x^2$ and $SNR = \sigma_x^2/\sigma_e^2$:

$$\rho[j] = \sum_{k=1}^{p}\alpha_k\left[\rho[j-k] + \frac{\delta[j-k]}{SNR}\right], \quad 1 \le j \le p$$

or in matrix form, $\boldsymbol{\rho} = \mathbf{C}\,\boldsymbol{\alpha}$, where

$$\boldsymbol{\rho} = \begin{bmatrix}\rho[1]\\ \rho[2]\\ \vdots\\ \rho[p]\end{bmatrix}, \qquad \boldsymbol{\alpha} = \begin{bmatrix}\alpha_1\\ \alpha_2\\ \vdots\\ \alpha_p\end{bmatrix}, \qquad \mathbf{C} = \begin{bmatrix}1 + 1/SNR & \rho[1] & \cdots & \rho[p-1]\\ \rho[1] & 1 + 1/SNR & \cdots & \rho[p-2]\\ \vdots & & \ddots & \vdots\\ \rho[p-1] & \rho[p-2] & \cdots & 1 + 1/SNR\end{bmatrix}$$

Solution for Optimum Predictor

• with matrix solution (defining $SNR = \sigma_x^2/\sigma_e^2$, and $\mathbf{C}$, $\boldsymbol{\rho}$ as above)

$$\boldsymbol{\alpha} = \mathbf{C}^{-1}\boldsymbol{\rho}$$

• $\mathbf{C}$ is a Toeplitz matrix ⇒ $\mathbf{C}^{-1}$ can be computed via well-understood numerical methods
• the problem here is that $\mathbf{C}$ depends on $SNR = \sigma_x^2/\sigma_e^2$, but SNR depends on the coefficients of the predictor, which depend on $\mathbf{C}$ ⇒ bit of a dilemma

Solution for Optimum Predictor

• special case of p = 1, where we can solve directly for $\alpha_1$ of this first-order linear predictor, as

$$\alpha_1 = \frac{\rho[1]}{1 + 1/SNR}$$

• can see that $\alpha_1 < \rho[1]$; we will look further at this special case later

Solution for Optimum Predictor

• in spite of the problems in solving for the optimum predictor coefficients, we can solve for the prediction gain, $G_P$, in terms of the coefficients, as

$$\sigma_d^2 = E\left[\left(x[n]-\tilde{x}[n]\right)\cdot\left(x[n]-\tilde{x}[n]\right)\right] = E\left[\left(x[n]-\tilde{x}[n]\right)\cdot x[n]\right] - E\left[\left(x[n]-\tilde{x}[n]\right)\cdot \tilde{x}[n]\right]$$

• the term $\left(x[n]-\tilde{x}[n]\right)$ is the prediction error; we can show that the second term in the expression above is zero, i.e., the prediction error is uncorrelated with the predicted value; thus

$$\sigma_d^2 = E\left[\left(x[n]-\tilde{x}[n]\right)\cdot x[n]\right] = E\left[x^2[n]\right] - E\left[\left(\sum_{k=1}^{p}\alpha_k\left(x[n-k]+e[n-k]\right)\right)x[n]\right]$$

• assuming uncorrelated signal and noise, we get

$$\sigma_d^2 = \sigma_x^2 - \sum_{k=1}^{p}\alpha_k\,\phi[k] = \sigma_x^2\left(1 - \sum_{k=1}^{p}\alpha_k\,\rho[k]\right)$$

$$G_P\big|_{opt} = \frac{1}{1 - \sum_{k=1}^{p}\alpha_k\,\rho[k]} \qquad \text{for optimum values of } \alpha_k$$

Page 12

First-Order Predictor Solution

• For the case p = 1 we can examine the effects of a sub-optimum value of $\alpha_1$ on the quantity $G_P = \sigma_x^2/\sigma_d^2$. The optimum solution is:

$$\alpha_1\big|_{opt} = \frac{\rho[1]}{1 + 1/SNR}$$

• Consider choosing an arbitrary value for $\alpha_1$; then we get

$$\sigma_d^2 = \sigma_x^2\left[1 - 2\alpha_1\rho[1] + \alpha_1^2\right] + \alpha_1^2\,\sigma_e^2$$

giving the sub-optimum result

$$G_P\big|_{arb} = \frac{1}{1 - 2\alpha_1\rho[1] + \alpha_1^2\left(1 + 1/SNR\right)}$$

where the term $\alpha_1^2/SNR$ represents the increase in the variance of d[n] due to the feedback of the error signal e[n].

First-Order Predictor Solution

• Can reformulate as

$$G_P\big|_{arb} = \frac{1}{1 - 2\alpha_1\rho[1] + \alpha_1^2\left(1 + \dfrac{1}{SNR_Q\cdot G_P\big|_{arb}}\right)}$$

for any value of $\alpha_1$ (including the optimum value).

• Consider the case of $\alpha_1 = \rho[1]$:

$$G_P\big|_{subopt} = \frac{1}{1-\rho^2[1]}\cdot\left[1 - \frac{\rho^2[1]}{\left(1-\rho^2[1]\right)SNR_Q}\right]$$

• the gain in prediction is the product of the prediction gain without the quantizer and a loss factor due to feedback of the error signal through $SNR_Q$.

First-Order Predictor Solution

• We showed before that the optimum value of $\alpha_1$ was $\rho[1]\big/\left(1 + 1/SNR\right)$. If we neglect the term in 1/SNR (usually very small), then

$$\alpha_1 = \rho[1]$$

and the gain due to prediction is

$$G_P\big|_{opt} = \frac{1}{1 - \rho^2[1]}$$

• Thus there is a prediction gain so long as $\rho[1] \ne 0$. It is reasonable to assume that for speech $\rho[1] > 0.8$, giving

$$G_P\big|_{opt} > 2.77 \quad \text{(or 4.43 dB)}$$

Differential Quantization

(block diagram: x[n] → (+, with −x̃[n]) → d[n] → Q with step size Δ → d̂[n]; x̃[n] + d̂[n] → x̂[n]; predictor P driven by x̂[n])

$$\tilde{x}[n] = \sum_{k=1}^{p}\alpha_k\,\hat{x}[n-k], \qquad d[n] = x[n]-\tilde{x}[n], \qquad \hat{d}[n] = d[n]+e[n]$$
$$\hat{x}[n] = \tilde{x}[n]+\hat{d}[n], \qquad \hat{x}[n] = x[n]+e[n]$$

The error, e[n], in quantizing d[n] is the same as the error in representing x[n].

$$SNR = \frac{\sigma_x^2}{\sigma_e^2} = \frac{\sigma_d^2}{\sigma_e^2}\cdot\frac{\sigma_x^2}{\sigma_d^2} = SNR_Q\cdot G_P$$

First-order predictor:

$$\alpha_1 = \frac{\rho[1]}{1 + 1/SNR}, \qquad G_P = \frac{1}{1-\rho^2[1]}\left(1 - \frac{\rho^2[1]}{\left(1-\rho^2[1]\right)SNR_Q}\right) \approx \frac{1}{1-\rho^2[1]}$$

The prediction gain depends on ρ[1], the first correlation coefficient.

Long-Term Spectrum and Correlation

(figure: long-term spectrum and autocorrelation, measured with a 32-point Hamming window; measured correlation values 0.8579, 0.5680, 0.2536)
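Plugging the measured correlations above into the first-order prediction gain formula shows what they buy (a quick check in Python):

```python
import numpy as np

for rho1 in (0.8579, 0.5680, 0.2536):     # measured first-correlation values
    g_p = 1.0 / (1.0 - rho1 ** 2)         # G_P ~ 1/(1 - rho[1]^2), neglecting 1/SNR_Q
    print(f"rho[1]={rho1}: G_P={g_p:.2f} ({10 * np.log10(g_p):.2f} dB)")
# rho[1] = 0.8579 gives G_P ~ 3.79 (~5.8 dB), consistent with the
# "about 6 dB improvement in SNR" quoted two slides below
```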

Computed Prediction Gain

(figure: computed prediction gain)

Page 13

Actual Prediction Gains for Speech

• variation in gain across 4 speakers
• can get about a 6 dB improvement in SNR ⇒ equivalent to 1 extra bit in quantization, but at the price of increased complexity in quantization
• differential quantization works!!
• the gain in SNR depends on the signal correlations
• a fixed predictor cannot be optimum for all speakers and for all speech material

Delta Modulation

Linear and Adaptive

Delta Modulation

• the simplest form of differential quantization is delta modulation (DM)
• the sampling rate is chosen to be many times the Nyquist rate for the input signal ⇒ adjacent samples are highly correlated
• in the limit as T → 0, we expect $\phi[1] \to \sigma_x^2$, i.e., $\rho[1] \to 1$
• this leads to a high ability to predict x[n] from past samples, with the variance of the prediction error being very low, leading to a high prediction gain ⇒ can use a simple 1-bit (2-level) quantizer ⇒ the bit rate for DM systems is just the (high) sampling rate of the signal

Linear Delta Modulation

• 2-level quantizer with fixed step size, Δ, of the form:

$$\hat{d}[n] = +\Delta \;\text{ if } d[n] > 0 \;\;(c[n] = 0); \qquad \hat{d}[n] = -\Delta \;\text{ if } d[n] < 0 \;\;(c[n] = 1)$$

• using a simple first-order predictor, with optimum prediction gain

$$G_P\big|_{opt} = \frac{1}{1-\rho^2[1]}$$

• as $\rho[1] \to 1$, $G_P\big|_{opt} \to \infty$ (qualitatively only, since the assumptions under which the equation was derived break down as $\rho[1] \to 1$)

Illustration of DM

• basic equations of DM are

$$\hat{x}[n] = \alpha\,\hat{x}[n-1] + \hat{d}[n], \qquad \hat{d}[n] \approx \pm\Delta$$

when α = 1 this is essentially digital integration, i.e., accumulation of increments of ±Δ, and

$$d[n] = x[n] - \hat{x}[n-1] = x[n] - x[n-1] - e[n-1]$$

• d[n] is a first backward difference of x[n], or an approximation to the derivative of the input
• how big do we make Δ? — at the maximum slope of x(t) we need

$$\frac{\Delta}{T} \ge \max\left|\frac{dx_a(t)}{dt}\right|$$

or else the reconstructed signal will lag the actual signal ⇒ the 'slope overload' condition; the resulting quantization error is called 'slope overload distortion'
• since x̂[n] can only increase by fixed increments of Δ, fixed-Δ DM is called linear DM or LDM

(figure: slope overload condition and granular noise)

DM Granular Noise

• when $x_a(t)$ has small slope, Δ determines the peak error; when $x_a(t) = 0$, the quantizer output will be an alternating sequence of 0's and 1's, and x̂[n] will alternate around zero with a peak variation of Δ ⇒ this condition is called "granular noise"
• need a large step size to handle a wide dynamic range
• need a small step size to accurately represent low-level signals
• with LDM we need to worry about the dynamic range and amplitude of the difference signal ⇒ choose Δ to minimize the mean-squared quantization error (a compromise between slope overload and granular noise)
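A minimal LDM encode/decode sketch (Python/NumPy, with α = 1 so the predictor is a pure accumulator; the step size and test signal are assumptions) makes both failure modes easy to reproduce: too small a Δ shows slope overload on steep segments, too large a Δ shows granular noise on flat ones.

```python
import numpy as np

def ldm(x, delta):
    """Linear delta modulation: 1-bit codes plus the decoded staircase signal."""
    acc = 0.0
    bits = np.empty(len(x), dtype=int)
    x_hat = np.empty(len(x))
    for n in range(len(x)):
        d = x[n] - acc                        # d[n] = x[n] - x_hat[n-1]
        bits[n] = 0 if d > 0 else 1           # c[n]: sign of the difference
        acc += delta if d > 0 else -delta     # accumulate +/- delta
        x_hat[n] = acc
    return bits, x_hat

# oversampled sinusoid: slope overload occurs if delta/T < max|dx/dt|
t = np.arange(0, 0.01, 1 / 64000)             # 64 kHz sampling of a 500 Hz tone
x = np.sin(2 * np.pi * 500 * t)
bits, x_hat = ldm(x, delta=0.05)
print("mse:", np.mean((x - x_hat) ** 2))
```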

Page 14

Performance of DM Systems

• normalized step size defined as

$$\Delta \Big/ \left(E\left[\left(x[n]-x[n-1]\right)^2\right]\right)^{1/2}$$

• oversampling index defined as $F_0 = F_S/(2F_N)$, where $F_S$ is the sampling rate of the DM and $F_N$ is the Nyquist frequency of the signal
• the total bit rate of the DM is $BR = F_S = 2F_N F_0$
• can see that for a given value of $F_0$, there is an optimum value of Δ
• the optimum SNR increases by 9 dB for each doubling of $F_0$ ⇒ this is better than the 6 dB obtained by increasing the number of bits/sample by 1 bit
• the curves are very sharp around the optimum value of Δ ⇒ the SNR is very sensitive to the input level
• for SNR = 35 dB with $F_N$ = 3 kHz ⇒ 200 kbps rate
• for toll quality we need much higher rates

Adaptive Delta Mod

• step size adaptation for DM (from the codewords):

$$\Delta[n] = M \cdot \Delta[n-1], \qquad \Delta_{min} \le \Delta[n] \le \Delta_{max}$$

• M is a function of c[n] and c[n−1]; since c[n] depends only on the sign of $d[n] = x[n] - \alpha\,\hat{x}[n-1]$, the sign of d[n] can be determined before the actual quantized value d̂[n], which needs the new value of Δ[n] for evaluation
• the algorithm for choosing the step-size multiplier is

$$M = P > 1 \;\text{ if } c[n] = c[n-1]; \qquad M = Q < 1 \;\text{ if } c[n] \ne c[n-1]$$

Adaptive DM Performance

(figure: waveform showing the slope overload condition and granular noise)

• slope overload in LDM causes runs of 0's or 1's
• granularity causes runs of alternating 0's and 1's
• the figure shows how adaptive DM performs with P = 2, Q = 1/2, α = 1
• during slope overload, the step size increases exponentially to follow the increase in waveform slope
• during granularity, the step size decreases exponentially to Δ_min and stays there as long as the slope remains small
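Extending the LDM sketch with this multiplier rule gives a compact ADM simulation (Python; P = 2 and Q = 1/2 as in the figure, while the initial step size and Δ limits are assumptions):

```python
def adm(x, delta0=0.01, p_mult=2.0, q_mult=0.5, d_min=1e-4, d_max=1.0):
    """Adaptive delta modulation: Delta[n] = P*Delta[n-1] on repeated bits, Q otherwise."""
    acc, delta, prev_bit = 0.0, delta0, None
    bits, x_hat = [], []
    for sample in x:
        bit = 0 if sample - acc > 0 else 1     # sign of d[n], known before quantizing
        if prev_bit is not None:               # adapt Delta[n] from c[n], c[n-1]
            m = p_mult if bit == prev_bit else q_mult
            delta = min(max(delta * m, d_min), d_max)
        acc += delta if bit == 0 else -delta   # apply the new step size
        bits.append(bit)
        x_hat.append(acc)
        prev_bit = bit
    return bits, x_hat
```

Running it on the oversampled tone from the LDM sketch shows the step size growing exponentially through steep regions and collapsing toward `d_min` on flat ones, exactly the behavior described above.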

ADM Parameter Behavior

• the ADM parameters are P, Q, Δ_min and Δ_max
• choose Δ_min and Δ_max to provide the desired dynamic range
• choose Δ_max/Δ_min to maintain high SNR over the range of input signal levels
• Δ_min should be chosen to minimize idle channel noise
• P and Q should satisfy PQ ≤ 1 for stability; PQ is chosen to be 1
• the SNR peaks at P = 1.5, but over the range 1.25 < P < 2 the SNR varies only slightly

Comparison of LDM, ADM and log PCM

• ADM has 8 dB better SNR at 20 kbps than LDM, and 14 dB better SNR at 60 kbps than LDM
• ADM gives a 10 dB increase in SNR for each doubling of the bit rate; LDM gives about 6 dB
• for bit rates below 40 kbps, ADM has higher SNR than μ-law PCM; for higher bit rates, log PCM has higher SNR

Higher-Order Prediction in DM

• the first-order predictor gave $\tilde{x}[n] = \alpha\,\hat{x}[n-1]$, with the reconstructed signal satisfying

$$\hat{x}[n] = \alpha\,\hat{x}[n-1] + \hat{d}[n]$$

with system function

$$H(z) = \frac{\hat{X}(z)}{\hat{D}(z)} = \frac{1}{1 - \alpha z^{-1}}$$

the digital equivalent of a leaky integrator

• consider a second-order predictor with $\tilde{x}[n] = \alpha_1\,\hat{x}[n-1] + \alpha_2\,\hat{x}[n-2]$, with reconstructed signal

$$\hat{x}[n] = \alpha_1\,\hat{x}[n-1] + \alpha_2\,\hat{x}[n-2] + \hat{d}[n]$$

and system function

$$H(z) = \frac{\hat{X}(z)}{\hat{D}(z)} = \frac{1}{1 - \alpha_1 z^{-1} - \alpha_2 z^{-2}}$$

• assuming two real poles, we can write H(z) as

$$H(z) = \frac{1}{\left(1 - a z^{-1}\right)\left(1 - b z^{-1}\right)}, \qquad 0 < a, b < 1$$

• better prediction is achieved using this "double integration" system, with up to 4 dB better SNR
• there are issues of interaction between the adaptive quantizer and the predictor

Page 15

Differential PCM (DPCM)

• fixed predictors can give from 4-11 dB SNR improvement over direct quantization (PCM)
• most of the gain occurs with a first-order predictor
• prediction up to 4th or 5th order helps

DPCM with Adaptive Quantization

• quantizer step size is proportional to the variance at the quantizer input
• can use d[n] or x[n] to control the step size
• get a 5 dB improvement in SNR over μ-law non-adaptive PCM
• get a 6 dB improvement in SNR using the differential configuration with fixed prediction ⇒ ADPCM is about 10-11 dB better in SNR than a fixed quantizer

Feedback ADPCM

• can achieve the same improvement in SNR as the feed-forward system

DPCM with Adaptive Prediction

• need adaptive prediction to handle the non-stationarity of speech

DPCM with Adaptive Prediction

• the prediction coefficients are assumed to be time-dependent, of the form

$$\tilde{x}[n] = \sum_{k=1}^{p} \alpha_k[n]\,\hat{x}[n-k]$$

• assume speech properties remain fixed over short time intervals; choose $\alpha_k[n]$ to minimize the average squared prediction error over short intervals
• the optimum predictor coefficients satisfy the relationships

$$R_n[j] = \sum_{k=1}^{p} \alpha_k[n]\,R_n[j-k], \qquad j = 1, 2, \ldots, p$$

where $R_n[j]$ is the short-time autocorrelation function, of the form

$$R_n[j] = \sum_{m=-\infty}^{\infty} x[m]\,w[n-m]\;x[j+m]\,w[n-j-m], \qquad 0 \le j \le p$$

• w[n − m] is a window positioned at sample n of the input
• update the α's every 10-20 msec
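A minimal per-frame update sketch (Python; the Hamming window, hop size, and order p are illustrative assumptions) that computes the short-time autocorrelation and solves the normal equations above every 10-20 msec frame:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def adaptive_predictor(x, fs=8000, frame_ms=20, p=4):
    """Per-frame alpha_k[n] via the short-time autocorrelation normal equations."""
    N = int(fs * frame_ms / 1000)
    w = np.hamming(N)
    coeffs = []
    for start in range(0, len(x) - N, N):          # update every frame_ms
        xw = x[start:start + N] * w                # windowed segment at sample n
        R = np.array([np.dot(xw[:N - j], xw[j:]) for j in range(p + 1)])
        if R[0] > 0:
            # R_n[j] = sum_k alpha_k R_n[j-k]: symmetric Toeplitz system
            alpha = solve_toeplitz(R[:p], R[1:p + 1])
        else:
            alpha = np.zeros(p)                    # silent frame: no prediction
        coeffs.append(alpha)
    return np.array(coeffs)
```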

Prediction Gain for DPCM with Adaptive Prediction

$$G_P\,(\mathrm{dB}) = 10\log_{10}\frac{E\left[x^2[n]\right]}{E\left[d^2[n]\right]}$$

• fixed prediction → 10.5 dB prediction gain for large p
• adaptive prediction → 14 dB gain for large p
• adaptive prediction is more robust to speaker and speech material

Page 16

Comparison of Coders

• 6 dB between curves
• sharp increase in SNR with both fixed prediction and adaptive quantization
• almost no gain for adapting a first-order predictor