
Digital Signal Processing 22 (2012) 1068–1072


Mean-square error and stability analysis of a subband structure for the rapid identification of sparse impulse responses

Mariane R. Petraglia a,∗, Diego B. Haddad b

a DEL/POLI, PEE/COPPE, UFRJ, Brazil
b CEFET, Nova Iguaçu, Brazil


Article history: Available online 25 April 2012

Keywords: Sparse impulse responses; Multirate processing; Supervised adaptive filtering

In a context of supervised adaptive filtering, the sparsity of the impulse response to be identified can be employed to accelerate the convergence rate of the algorithm. This idea was first explored by the so-called proportionate NLMS (PNLMS) algorithm, where the adaptation step-sizes are made larger for the coefficients with larger magnitudes. Whereas a fast initial convergence rate is obtained with the PNLMS algorithm for white-noise input, slow convergence is observed for colored input signals. The combination of the PNLMS approach and a subband structure results in an algorithm with a better convergence rate for sparse systems and colored input signals. In this paper, the steady-state mean-square error (MSE) and the maximum value of the step-size β that allows convergence of the subband PNLMS-type algorithm are analyzed. Theoretical results are confirmed by simulations.

© 2012 Elsevier Inc. All rights reserved.

1. Adaptive sparse impulse response systems

In sparse impulse responses, a significant number of coefficients have magnitudes close to zero. Systems with such characteristics are common in acoustic, chemical and seismic processes, as well as in wireless channels [1,2]. The sparsity can be exploited to speed up the convergence of the adaptive filters employed to model such systems. Among the possible strategies are the use of the $\ell_0$ metric in the cost function to be optimized [3] and the adoption of different learning factors (or step-sizes) for the distinct coefficients of the adaptive model [4]. The latter can be interpreted as a general resource-management policy in a scarcity context, which distributes the updating energy among the coefficients.

An adaptive subband structure that employs a wavelet transform and sparse filters (WT-SF) was presented in [5]. In this approach, a significant improvement in the adaptation convergence rate is obtained with a relatively small increase in computational complexity, mainly due to the implementation of the wavelet transform [5].

Such an adaptive subband structure was first combined with a PNLMS-type approach in [6]. The well-known slow convergence of gradient algorithms for colored input signals, also observed in proportionate-type NLMS algorithms, is mitigated by a step-size strategy that uses the input power in the different frequency bands. In this paper, we investigate the mean-square error and stability characteristics of the resulting subband algorithm, which encompasses particular cases such as those presented in [1,7,8].

* Corresponding author.
E-mail addresses: [email protected] (M.R. Petraglia), [email protected] (D.B. Haddad).

2. Proportionate-type NLMS algorithms

Let $\mathbf{g} = [g(0)\ g(1)\ \cdots\ g(N-1)]$¹ be the impulse response of the system we wish to identify, assuming we have access, in an on-line manner, to its input signal $x(n)$ and to its desired output $d(n)$. The unpredictability of the ubiquitous measurement noise $\nu(n)$ prevents us from removing it. Modeling it as additive, the relation between $d(n)$ and $x(n)$ can be expressed by:

$$d(n) = \mathbf{g}\,\mathbf{x}(n) + \nu(n), \qquad (1)$$

where $\mathbf{x}(n) = [x(n)\ x(n-1)\ \cdots\ x(n-N+1)]^T$. Most proportionate-type algorithms update the estimate $\hat{\mathbf{g}}(n)$ of $\mathbf{g}$ through the equation:

$$\hat{\mathbf{g}}(n+1) = \hat{\mathbf{g}}(n) + \beta\,\frac{\mathbf{x}^T(n)\Lambda(n)\,e(n)}{\mathbf{x}^T(n)\Lambda(n)\mathbf{x}(n) + \delta}, \qquad (2)$$

where $\delta$ is a regularization parameter (a positive value close to zero), $e(n) = d(n) - \hat{\mathbf{g}}(n)\mathbf{x}(n)$ and $\Lambda(n)$ is a diagonal matrix. Replacing $\Lambda(n)$ by the identity matrix makes Eq. (2) equivalent to the update equation of the NLMS algorithm.
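For concreteness, the sketch below implements one iteration of Eq. (2) in NumPy. It is a minimal illustration, not the authors' implementation; the function name and default parameter values are our own choices, and setting all entries of `lam` to one recovers the NLMS update.

```python
import numpy as np

def proportionate_update(g_hat, x, d, lam, beta=0.25, delta=0.01):
    """One iteration of the proportionate-type update of Eq. (2).

    g_hat : current estimate of g, shape (N,)
    x     : regressor [x(n), x(n-1), ..., x(n-N+1)], shape (N,)
    d     : desired sample d(n)
    lam   : main diagonal of Lambda(n), shape (N,); all ones gives NLMS
    """
    e = d - g_hat @ x                  # a priori error e(n)
    denom = x @ (lam * x) + delta      # x^T(n) Lambda(n) x(n) + delta
    g_new = g_hat + beta * (lam * x) * e / denom
    return g_new, e
```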

¹ The unknown-system coefficient vector and its full-band and subband estimates are defined as row vectors, while all other vectors are column vectors.


Fig. 1. Adaptive subband structure composed of a wavelet transform and sparse sub-filters.

3. WT-SF structure for sparse systems

The WT-SF structure is illustrated in Fig. 1, where the wavelet transform is represented by a non-uniform analysis filter bank $H_k(z)$ and the $G_k(z^{L_k})$ are the sparse adaptive subfilters [5]. For an octave-band wavelet, the equivalent analysis filters of the $M$-channel filter bank are [9]:

$$H_0(z) = \prod_{j=0}^{M-2} H_0\!\left(z^{2^j}\right),$$
$$H_k(z) = H_1\!\left(z^{2^{M-1-k}}\right) \prod_{j=0}^{M-k-2} H_0\!\left(z^{2^j}\right), \quad k = 1, \ldots, M-1, \qquad (3)$$

where $H_0(z)$ and $H_1(z)$ are, respectively, the lowpass and highpass filters associated with the wavelet functions [9]. The sparsity factors are

$$L_0 = 2^{M-1}, \qquad L_k = 2^{M-k}, \quad k = 1, \ldots, M-1, \qquad (4)$$

and the delays $\Delta_k$ in Fig. 1, introduced with the purpose of matching the delays of the different-length analysis filters, are given by $\Delta_k = N_{H_0} - N_{H_k}$, where $N_{H_k}$ is the length of the $k$-th analysis filter (assuming $N_{H_0} \geq N_{H_k}$). This structure yields an additional system delay (compared to a direct-form FIR structure) equal to $\Delta_D = N_{H_0}$. For the modeling of a length-$N$ FIR system, the number $N_k$ of adaptive coefficients of the subfilters $G_k(z)$ (non-zero coefficients of $G_k(z^{L_k})$) should be at least $\lceil (N + N_{F_k})/L_k \rceil + 1$, where $N_{F_k}$ is the length of the corresponding synthesis filter which, when associated with the analysis filter $H_k(z)$, leads to perfect reconstruction. Note that the absence of decimation in the WT-SF structure prevents the deleterious effects of aliasing.²
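To make this structural bookkeeping concrete, the helper below (a sketch; the function name and argument layout are ours, with the filter lengths supplied by the chosen wavelet) evaluates Eq. (4) together with the delay and minimum-subfilter-length rules above. For $M = 3$, for instance, it returns the sparsity factors L = [4, 4, 2].

```python
import math

def wtsf_parameters(M, N, N_H, N_F):
    """Sparsity factors, delays and minimum subfilter lengths of the WT-SF scheme.

    M   : number of subbands
    N   : length of the FIR system to be modeled
    N_H : lengths N_{H_k} of the M equivalent analysis filters
    N_F : lengths N_{F_k} of the M synthesis filters
    """
    L = [2 ** (M - 1)] + [2 ** (M - k) for k in range(1, M)]      # Eq. (4)
    Delta = [N_H[0] - N_H[k] for k in range(M)]                   # delay matching
    N_k = [math.ceil((N + N_F[k]) / L[k]) + 1 for k in range(M)]  # minimum N_k
    return L, Delta, N_k
```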

A detailed description of the WTPNLMS-SF (wavelet-based PNLMS-family algorithm with sparse filters) is given in Table 1, where $h_k(n)$ are the coefficients of the $k$-th equivalent analysis filter (Eq. (3)) and the $\mathrm{diag}\{\cdot\}$ operator generates a diagonal matrix with the elements of a vector on its main diagonal. Note that the computation of $\lambda_{k,i}(n)$ for $i = 0, \ldots, N_k - 1$ may be carried out in several ways, corresponding to the different algorithms of the PNLMS family recently proposed in the literature for the rapid identification of sparse systems [1,8,10].

² More details regarding the delays, sparsity factors and computational complexity of the WT-SF structure, as well as its convergence behavior when its coefficients are updated by the NLMS algorithm, can be found in [5].

Table 1
WTPNLMS-SF algorithm.

Initialization:
  $\delta = 0.01$, $\beta = 0.25$ (typical values)
  For $k = 0, 1, \ldots, M-1$:
    $\hat{\mathbf{g}}_k(0) = [\hat{g}_{k,0}(0)\ \hat{g}_{k,1}(0)\ \cdots\ \hat{g}_{k,N_k-1}(0)] = \mathbf{0}$
  End

Processing and adaptation:
  For $n = 0, 1, 2, \ldots$:
    For $k = 0, 1, \ldots, M-1$:
      $x_k(n) = \sum_{i=0}^{N_{H_k}-1} h_k(i)\,x(n - i - \Delta_k)$
      $\mathbf{x}_k(n) = [x_k(n)\ x_k(n - L_k)\ \cdots\ x_k(n - (N_k-1)L_k)]^T$
      $y_k(n - \Delta_D) = \hat{\mathbf{g}}_k(n)\,\mathbf{x}_k(n)$
    End
    $y(n) = \sum_{k=0}^{M-1} y_k(n - \Delta_D)$
    $e(n) = d(n - \Delta_D) - y(n)$
    For $k = 0, 1, \ldots, M-1$:
      determine $\lambda_{k,0}(n), \ldots, \lambda_{k,N_k-1}(n)$ according to the chosen PNLMS-type algorithm
      $\Lambda_k(n) = \mathrm{diag}\{\lambda_{k,0}(n), \ldots, \lambda_{k,N_k-1}(n)\}$
      $\hat{\mathbf{g}}_k(n+1) = \hat{\mathbf{g}}_k(n) + \beta\,\dfrac{\mathbf{x}_k^T(n)\Lambda_k(n)\,e(n)}{\mathbf{x}_k^T(n)\Lambda_k(n)\mathbf{x}_k(n) + \delta}$
    End
  End
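A NumPy rendering of the loop in Table 1 is sketched below under two simplifying assumptions that are ours, not the paper's: the desired signal is taken as already aligned by the structural delay $\Delta_D$, and the gain rule is passed in as a callable, since Table 1 leaves the choice of the $\lambda_{k,i}(n)$ open.

```python
import numpy as np
from scipy.signal import lfilter

def wtpnlms_sf(x, d, h, L, Delta, N_sub, gains, beta=0.25, delta=0.01):
    """Sketch of the WTPNLMS-SF adaptation loop of Table 1.

    x, d   : input and desired signals (d pre-aligned by Delta_D), 1-D arrays
    h      : list with the M equivalent analysis filters h_k of Eq. (3)
    L, Delta, N_sub : sparsity factors, delays and subfilter lengths
    gains  : callable g_k -> diagonal of Lambda_k(n) (PNLMS-type rule)
    """
    M = len(h)
    # subband signals x_k(n) = sum_i h_k(i) x(n - i - Delta_k)
    xs = [np.concatenate((np.zeros(Delta[k]), lfilter(h[k], [1.0], x)))[:len(x)]
          for k in range(M)]
    g = [np.zeros(N_sub[k]) for k in range(M)]
    err = np.zeros(len(x))
    start = max(Delta[k] + (N_sub[k] - 1) * L[k] for k in range(M))
    for n in range(start, len(x)):
        # sparse regressors [x_k(n), x_k(n - L_k), ...] and subband outputs
        regs = [xs[k][n - L[k] * np.arange(N_sub[k])] for k in range(M)]
        y = sum(g[k] @ regs[k] for k in range(M))
        e = d[n] - y
        err[n] = e
        for k in range(M):  # proportionate update of each subfilter
            lam = gains(g[k])
            denom = regs[k] @ (lam * regs[k]) + delta
            g[k] = g[k] + beta * (lam * regs[k]) * e / denom
    return g, err
```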

The theoretical analysis described in the next section does not depend on the choice of the specific PNLMS algorithm. In the simulations, however, we adopted the improved μ-law PNLMS (IMPNLMS) algorithm proposed in [7], which produced the best convergence rate and which, by employing an adaptive measure of the sparsity of the channel, shows no performance degradation with respect to the NLMS even when the system to be identified is not sparse.
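As one concrete choice of the gains, the sketch below implements the classic proportionate rule of [4] (not the IMPNLMS variant used in our simulations); $\rho$ and $\delta_p$ are the usual small constants that keep near-zero coefficients adapting. It can be passed directly as the `gains` argument of the previous sketch.

```python
import numpy as np

def pnlms_gains(g_k, rho=0.01, delta_p=0.01):
    """Classic PNLMS gains [4]: one possible way to compute lambda_{k,i}(n)."""
    gamma = np.maximum(rho * max(delta_p, np.max(np.abs(g_k))), np.abs(g_k))
    return gamma / np.mean(gamma)   # normalized so the gains average to one
```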

4. MSE analysis of the WTPNLMS-SF algorithm

Let $\tilde{\mathbf{g}}(n) = \mathbf{g} - \hat{\mathbf{g}}(n)$ be the coefficient error vector at instant $n$, and let the a priori and a posteriori errors be defined, respectively, by $e_a(n) = \tilde{\mathbf{g}}(n)\mathbf{x}(n)$ and $e_p(n) = \tilde{\mathbf{g}}(n+1)\mathbf{x}(n)$. Using the energy-conservation approach [11], we can obtain a theoretical estimate of the MSE under the following five hypotheses:

(H.1) The adaptive filter converges, stabilizing when $n \to \infty$.
(H.2) The measurement noise $\nu(n)$ has zero mean and is independent of $\mathbf{x}(n)$, $\forall n$.
(H.3) In the limit, when $n \to \infty$, $\mathbf{x}(n)$ and $e_a(n)$ are uncorrelated.
(H.4) During the transient, the learning factor associated with each coefficient is positive.
(H.5) The order of the adaptive system is equal to or larger than the order of the system being identified.

None of the above hypotheses is overly restrictive. In order to derive the steady-state MSE estimate, we combine the update equations for the coefficients of all subfilters in a single equation:

$$\hat{\mathbf{g}}_a(n+1) = \hat{\mathbf{g}}_a(n) + \beta\,\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\,e(n), \qquad (5)$$

with the following definitions:

$$\hat{\mathbf{g}}_a(n) = [\hat{\mathbf{g}}_0(n)\ \hat{\mathbf{g}}_1(n)\ \cdots\ \hat{\mathbf{g}}_{M-1}(n)], \qquad (6)$$
$$\mathbf{H} = [\mathbf{H}_0\ \mathbf{H}_1\ \cdots\ \mathbf{H}_{M-1}], \qquad (7)$$
$$\mathbf{H}_i = [\mathbf{h}_{i,0}\ \mathbf{h}_{i,1}\ \cdots\ \mathbf{h}_{i,M-1}], \qquad (8)$$
$$\mathbf{h}_{i,j} = \left[\mathbf{0}_{1\times[\Delta_i+(j-1)L_i]}\ \ \mathbf{h}_i\ \ \mathbf{0}_{1\times[(N_i-j)L_i+N_{H_i}]}\right]^T, \qquad (9)$$
$$\mathbf{h}_i = [h_i(0)\ h_i(1)\ \cdots\ h_i(N_i-1)]^T, \qquad (10)$$


$$\Lambda_a(n) = \begin{bmatrix} \Lambda_0(n) & \mathbf{0}_{N_0\times N_1} & \cdots & \mathbf{0}_{N_0\times N_{M-1}} \\ \mathbf{0}_{N_1\times N_0} & \Lambda_1(n) & \cdots & \mathbf{0}_{N_1\times N_{M-1}} \\ \vdots & \ddots & \ddots & \vdots \\ \mathbf{0}_{N_{M-1}\times N_0} & \mathbf{0}_{N_{M-1}\times N_1} & \cdots & \Lambda_{M-1}(n) \end{bmatrix}, \qquad (11)$$

$$\mathbf{X}_H(n) = \begin{bmatrix} X_{H_0}(n) & \mathbf{0}_{N_0\times N_1} & \cdots & \mathbf{0}_{N_0\times N_{M-1}} \\ \mathbf{0}_{N_1\times N_0} & X_{H_1}(n) & \cdots & \mathbf{0}_{N_1\times N_{M-1}} \\ \vdots & \ddots & \ddots & \vdots \\ \mathbf{0}_{N_{M-1}\times N_0} & \mathbf{0}_{N_{M-1}\times N_1} & \cdots & X_{H_{M-1}}(n) \end{bmatrix}, \qquad (12)$$

$$X_{H_i}(n) = \mathbf{x}_a^T(n)\mathbf{H}_i\Lambda_i(n)\mathbf{H}_i^T\mathbf{x}_a(n)\,\mathbf{I}_{N_i}, \qquad (13)$$
$$\mathbf{x}_a(n) = [x(n)\ x(n-1)\ \cdots\ x(n-L)]^T, \qquad (14)$$
$$e(n) = d(n) - \hat{\mathbf{g}}_a(n)\mathbf{H}^T\mathbf{x}_a(n) = \tilde{\mathbf{g}}_a(n)\mathbf{H}^T\mathbf{x}_a(n) + \nu(n), \qquad (15)$$

where $\mathbf{0}_{n\times m}$ is the null matrix of dimension $n \times m$, $\mathbf{I}_n$ is the $n$-th order identity matrix and $L = \max_k\{\Delta_k + (N_k - 1)L_k + N_{H_k}\}$. Writing the update equation for the coefficients as a function of the coefficient error vector, we obtain:

$$\tilde{\mathbf{g}}_a(n+1) = \tilde{\mathbf{g}}_a(n) - \beta\,\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\,e(n). \qquad (16)$$

Post-multiplying by $\mathbf{H}^T\mathbf{x}_a(n)$, we find:

$$\underbrace{\tilde{\mathbf{g}}_a(n+1)\mathbf{H}^T\mathbf{x}_a(n)}_{e_p(n)} = \underbrace{\tilde{\mathbf{g}}_a(n)\mathbf{H}^T\mathbf{x}_a(n)}_{e_a(n)} - \beta\,\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\mathbf{H}^T\mathbf{x}_a(n)\,e(n), \qquad (17)$$

from which we get:

$$e(n) = \frac{e_a(n) - e_p(n)}{\beta\,\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\mathbf{H}^T\mathbf{x}_a(n)}. \qquad (18)$$

Using the fact that

$$\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\mathbf{H}^T\mathbf{x}_a(n) = \sum_{i=0}^{M-1} \frac{\mathbf{x}_a^T(n)\mathbf{H}_i\Lambda_i(n)\mathbf{H}_i^T\mathbf{x}_a(n)}{\mathbf{x}_a^T(n)\mathbf{H}_i\Lambda_i(n)\mathbf{H}_i^T\mathbf{x}_a(n)} = M,$$

and applying Eq. (18) in Eq. (16), we obtain:

$$\tilde{\mathbf{g}}_a(n+1) + \frac{\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\,e_a(n)}{M} = \tilde{\mathbf{g}}_a(n) + \frac{\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\,e_p(n)}{M}. \qquad (19)$$

Calculating the energy of both sides, taking the limit $n \to \infty$ and applying (H.1), we get:

$$\lim_{n\to\infty} E\!\left[\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\left[\mathbf{X}_H^{-1}\right]^2\Lambda_a(n)\mathbf{H}^T\mathbf{x}_a(n)\,e_a^2(n)\right] = \lim_{n\to\infty} E\!\left[\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\left[\mathbf{X}_H^{-1}\right]^2\Lambda_a(n)\mathbf{H}^T\mathbf{x}_a(n)\,e_p^2(n)\right]. \qquad (20)$$

Observing that $e_p(n) = (1 - \beta M)e_a(n) - \beta M \nu(n)$ and using (H.2), we obtain from the above equation,

$$\lim_{n\to\infty} \beta^2 M^2 \sigma_\nu^2\, E\!\left[\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\left[\mathbf{X}_H^{-1}\right]^2\Lambda_a(n)\mathbf{H}^T\mathbf{x}_a(n)\right] = \lim_{n\to\infty} 2\beta M\, E\!\left[\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\left[\mathbf{X}_H^{-1}\right]^2\Lambda_a(n)\mathbf{H}^T\mathbf{x}_a(n)\,e_a^2(n)\right] - \beta^2 M^2 E\!\left[\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\left[\mathbf{X}_H^{-1}\right]^2\Lambda_a(n)\mathbf{H}^T\mathbf{x}_a(n)\,e_a^2(n)\right], \qquad (21)$$

from which, applying (H.3), we get:

$$\lim_{n\to\infty} E\!\left[e_a^2(n)\right] = \frac{\beta M \sigma_\nu^2}{2 - \beta M}. \qquad (22)$$

Using (H.4) and (H.5), we observe that $\lim_{n\to\infty} E[e_a^2(n)]$ is the excess MSE (EMSE) and, therefore, the steady-state MSE is given by:

$$\lim_{n\to\infty} E\!\left[e^2(n)\right] = \sigma_\nu^2 + \frac{\beta M \sigma_\nu^2}{2 - \beta M} = \frac{\sigma_\nu^2}{1 - \frac{\beta M}{2}}. \qquad (23)$$

This is a remarkable result: it implies that the steady-state MSE is independent of the analysis filters, as well as of the input-signal correlation function. It also reflects a performance loss as the number of subbands $M$ increases, which can be interpreted as follows: the adaptive filters in the subbands can be viewed as $M$ estimators working in parallel, and the equivalent filter estimate yields a mean-square error amplified by $M$ (which acts as a gain in the effect of the learning factor $\beta$ on the MSE).

By means of the substitutions $\hat{\mathbf{g}}_a(n) \leftarrow \hat{\mathbf{g}}(n)$, $\mathbf{x}_a(n) \leftarrow \mathbf{x}(n)$, $\mathbf{H} \leftarrow \mathbf{I}_N$, $\Lambda_a(n) \leftarrow \Lambda(n)$ and $\mathbf{X}_H \leftarrow \mathbf{x}^T(n)\Lambda(n)\mathbf{x}(n) + \delta$, we see that the above analysis also applies to the full-band algorithm, with $M = 1$. Note that for this particular case the MSE estimate of Eq. (23) coincides with that obtained in [4], with the difference that our derivation uses weaker hypotheses (for instance, Gaussianity of the input signal is not required).
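Eq. (23) is straightforward to evaluate numerically. The small sketch below (names ours) guards the condition $\beta M < 2$, anticipating the stability bound derived in the next section:

```python
def steady_state_mse(sigma_nu2, beta, M):
    """Theoretical steady-state MSE of Eq. (23): sigma_nu^2 / (1 - beta*M/2)."""
    assert beta * M < 2, "beta must satisfy the bound of Eq. (24)"
    return sigma_nu2 / (1.0 - beta * M / 2.0)

# e.g., sigma_nu^2 = 1e-6, beta = 0.25, M = 3  ->  1.6e-6 (about -57.96 dB)
```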

5. Stability of the WTPNLMS-SF algorithm

As a byproduct of Eq. (23), we can derive an upper bound for the learning factor $\beta$. Such an upper-bound analysis is not rigorous, since Eq. (22) was derived assuming convergence; nevertheless, experimental results validate it. From the positiveness property of the MSE, we conclude that:

$$\beta < \frac{2}{M}. \qquad (24)$$

This result derives from the fact that $\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\mathbf{H}^T\mathbf{x}_a(n) = M$. Indeed, if we change the definition of $\mathbf{X}_H$ in order to impose the condition $\mathbf{x}_a^T(n)\mathbf{H}\Lambda_a(n)\mathbf{X}_H^{-1}(n)\mathbf{H}^T\mathbf{x}_a(n) = 1$, the steady-state MSE of the subband structure becomes very similar to that of the full-band algorithm.

6. Simulations

In the following simulations, the measurement noise was modeled as additive white Gaussian noise with variance $\sigma_\nu^2 = 10^{-6}$.

6.1. Convergence gain of the subband structure

To illustrate the gain in convergence obtained with the subband structure proposed in [6], a simulation with a colored input signal is presented. The input signal was generated by passing white Gaussian noise of unit variance through the filter:

$$H_i(z) = \frac{0.25}{1 - 1.5z^{-1} + z^{-2} - 0.25z^{-3}}. \qquad (25)$$

As a benchmark algorithm, we chose the regularized NLMS. To accelerate the convergence of the identification of sparse responses, we chose the IMPNLMS algorithm [7], used both in its original form and in the WT-SF structure with 2 and 3 subbands. The sparse channel impulse response to be identified was model 4 from the ITU-T Recommendation [12], with total length equal to 512. The regularization parameter of the NLMS algorithm was δ = 0.01. The IMPNLMS parameters were δ = 0.01, ε = 0.001, λ = 0.1 and ξ = 0.96. The learning factor was selected to produce the same steady-state MSE for each algorithm. The resulting values were β = 0.9 for the NLMS and IMPNLMS algorithms, β = 0.45 for the WTIMPNLMS-SF algorithm with M = 2 and β = 0.3 for the WTIMPNLMS-SF algorithm with M = 3.
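The colored input of Eq. (25) can be reproduced, for instance, with scipy.signal.lfilter; the seed and sample count below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from scipy.signal import lfilter

# White Gaussian noise of unit variance filtered by
# H(z) = 0.25 / (1 - 1.5 z^-1 + z^-2 - 0.25 z^-3), Eq. (25)
rng = np.random.default_rng(0)
white = rng.standard_normal(100_000)
colored = lfilter([0.25], [1.0, -1.5, 1.0, -0.25], white)
```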


Fig. 2. MSE evolutions of NLMS, IMPNLMS, and WTIMPNLMS-SF for M = 2 and 3.

Fig. 3. MSE evolutions of WTNLMS-SF and WTIMPNLMS-SF for M = 2 and 3.

Fig. 2 shows the MSE evolution obtained with the Biorthogonal 4.4 wavelet in the WT-SF structure, revealing the beneficial effect of the wavelet transform on convergence.

Fig. 3 contains the MSE evolution of the WTNLMS-SF and WTIMPNLMS-SF algorithms, showing the advantage of using the IMPNLMS algorithm in the WT-SF structure when modeling sparse systems with colored input signals.

6.2. Steady-state MSE

Using the coefficients of model 2 of the ITU-T Recommendation [12] as the impulse response to be identified, employing the Daubechies 8 wavelet and varying the number of subbands M and the learning factor β, we obtained the experimental and theoretical MSEs shown in Fig. 4. From this figure, one can observe good agreement between theoretical and experimental results.

6.3. Upper bound on the learning factor

In Fig. 5 we present the probability of divergence of the WTPNLMS-SF algorithm as a function of M and β, for model 1 of the ITU-T Recommendation [12], using the Haar wavelet. Twenty Monte Carlo runs were averaged in this experiment, with the tested values of β ranging from 0.01 to 1 in increments of 0.01. Fig. 5 shows that the upper limit of the learning factor that prevents divergence is fairly well defined, and its value is close to that obtained theoretically (see Eq. (24)), confirming the accuracy of the theoretical analysis. Note that even for M = 4 the upper limit of the learning factor (β_max = 0.5) is fairly large (β = 0.25 is a typical value), meaning that the reduction imposed by the WTPNLMS-SF structure is not too restrictive.
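A Monte Carlo harness behind such an experiment might look as follows; this is a sketch, where `run_filter` is a hypothetical wrapper around one adaptation run (e.g., the `wtpnlms_sf` sketch above) returning its final MSE.

```python
import numpy as np

def divergence_probability(run_filter, betas, trials=20, blowup=1e3):
    """Fraction of runs whose final MSE exceeds `blowup`, per value of beta."""
    prob = []
    for beta in betas:
        diverged = sum(run_filter(beta) > blowup for _ in range(trials))
        prob.append(diverged / trials)
    return np.array(prob)
```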

As indicated by the theoretical analysis and confirmed in our simulations, the steady-state MSE and the learning-factor upper bound of the WTIMPNLMS-SF algorithm do not depend on the employed wavelet function.

Fig. 4. Steady-state MSE of WTPNLMS-SF: experimental (solid lines) and theoretical (dotted lines).

Fig. 5. Probability of divergence for M = 3, 4 and 5.

However, in general, a more selective wavelet transform yields a faster convergence rate for colored input signals. Simulation results of the WTIMPNLMS-SF algorithm with other wavelet functions can be found in [6].

In our simulations on the identification of digital network channels from ITU-T Recommendation G.168, the sparsity degree of the system was always increased by the transformation. However, it is not guaranteed that a sparse system will remain sparse in the transform domain. On the other hand, since an improved version of the PNLMS algorithm (IMPNLMS) is used, which adaptively estimates the sparsity level of the system being identified and adjusts its update accordingly, the algorithm analyzed in this paper outperforms the NLMS algorithm in the vast majority of contexts.

7. Conclusions

In this paper, we elucidated some trade-offs inherent in a subband structure proposed to accelerate the convergence of identification algorithms for sparse responses. The acceleration of convergence is achieved at the expense of a degradation of the steady-state MSE, which was derived analytically. The upper bound derived for the step-size β, as a function of the number of subbands M, indicates a reduction of this limit as M increases. The theoretical results were confirmed by simulations.


Acknowledgments

This work was supported in part by CNPq, Brazil.

References

[1] B. Jelfs, D.P. Mandic, J. Benesty, A class of adaptively regularised PNLMS algorithms, in: Proc. 15th Int. Conf. on Digital Signal Processing, 2007, pp. 19–22.
[2] J. Benesty, Y. Huang, J. Chen, An exponentiated gradient adaptive algorithm for blind identification of sparse SIMO systems, in: Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, vol. 2, 2004, pp. 829–832.
[3] Y. Gu, J. Jin, S. Mei, l0 norm constraint LMS algorithm for sparse system identification, IEEE Signal Process. Lett. 16 (9) (2009) 774–777.
[4] D.L. Duttweiler, Proportionate normalized least-mean-squares adaptation in echo cancelers, IEEE Trans. Speech Audio Process. 8 (5) (2000) 508–518.
[5] M.R. Petraglia, J.C.B. Torres, Performance analysis of an adaptive filter structure employing wavelets and sparse filters, IEE Proc. Vision Image Signal Process. 149 (2) (2002) 115–119.
[6] M.R. Petraglia, G. Barboza, Improved PNLMS algorithm employing wavelet transform and sparse filters, in: Proc. 16th European Signal Process. Conf., 2008, pp. 1–5.
[7] L. Liu, M. Fukumoto, S. Saiki, An improved μ-law proportionate NLMS algorithm, in: Proc. IEEE Int. Conf. Acoustics, Speech and Signal Processing, 2008, pp. 3797–3800.
[8] H. Deng, M. Doroslovacki, Proportionate adaptive algorithms for network echo cancellation, IEEE Trans. Signal Process. 54 (5) (2006) 1794–1803.
[9] P.P. Vaidyanathan, Multirate Systems and Filter Banks, Prentice Hall, 1993.
[10] H. Deng, M. Doroslovacki, Improving convergence of the PNLMS algorithm for sparse impulse response identification, IEEE Signal Process. Lett. 12 (3) (2005) 181–184.
[11] N.R. Yousef, A.H. Sayed, A unified approach to the steady-state and tracking analyses of adaptive filters, IEEE Trans. Signal Process. 49 (2) (2001) 314–324.
[12] Digital Network Echo Cancellers, ITU-T Recommendation G.168, 2004.

Mariane R. Petraglia received the B.Sc. degree in electronic engineering from the Federal University of Rio de Janeiro, Brazil, in 1985, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of California, Santa Barbara, in 1988 and 1991, respectively. From 1992 to 1993, she was with the Department of Electrical Engineering at the Catholic University of Rio de Janeiro, Brazil. Since 1993, she has been with the Department of Electronic Engineering and with the Program of Electrical Engineering, COPPE, at the Federal University of Rio de Janeiro, where she is presently an Associate Professor. From March 2001 to February 2002, she was a Visiting Researcher with the Adaptive Systems Laboratory at the University of California, Los Angeles. Dr. Petraglia is a member of Tau Beta Pi. She served as an Associate Editor for the IEEE Transactions on Signal Processing from 2004 to 2007. Her research interests are in adaptive signal processing, blind source separation, multirate systems, and image processing.

Diego B. Haddad received the B.Sc. degree in telecommunications engineering from the State University of Rio de Janeiro, Brazil, in 2005, and the M.Sc. degree in electronic engineering at COPPE (Federal University of Rio de Janeiro), Brazil, in 2008. Since 2008, he has been with the Federal Center for Technological Education (CEFET), Brazil. He is presently pursuing the D.Sc. degree in electrical engineering at the Federal University of Rio de Janeiro. His current research interests include adaptive signal processing, blind source separation, and pattern recognition.