Transform Domain LMF Algorithm for Sparse System Identification Under Low SNR

    Murwan Bashir and Azzedine Zerguine

Abstract—In this work, a transform domain Least Mean Fourth (LMF) adaptive filter for sparse system identification, in the case of low Signal-to-Noise Ratio (SNR), is proposed. Unlike the Least Mean Square (LMS) algorithm, the LMF algorithm, because of its error nonlinearity, performs very well in these environments. Moreover, its transform domain version has an outstanding performance when the input signal is correlated. However, it lacks the capability to exploit sparsity information. To overcome this limitation, a zero-attractor mechanism based on the l1 norm is implemented, yielding the Transform-Domain Zero-Attractor LMF (TD-ZA-LMF) algorithm. The TD-ZA-LMF algorithm ensures fast convergence and attracts all the filter coefficients to zero. Simulation results substantiate these claims and show the algorithm to be very effective.

Index Terms—Least Mean Fourth (LMF), Transform Domain (TD), Zero Attractor (ZA), sparse solution.

    I. INTRODUCTION

The LMF algorithm [1] is known to perform better than the LMS algorithm in the case of non-Gaussian noise and in low SNR environments. However, neither algorithm exploits the special structure of sparsity that appears in many systems, e.g., digital transmission channels [2] and wide-area wireless channels. Several approaches have been used to endow adaptive algorithms with the ability to recognize such systems. For example, the work in [3] uses sequential updating, since sparse filters are long by nature and most of their elements are zeros. The proportionate LMF algorithm [4] and the PNLMS algorithm [5] have been applied to sparse system identification, where the update gain of each coefficient depends on the magnitude of that weight.

The advent of compressive sensing [6] and of the least absolute shrinkage and selection operator (LASSO) [7] inspired a different approach: endowing adaptive algorithms with the ability to recognize sparse structures through regularization. By adding an l1-norm penalty to the LMS cost function, the sparsity-aware zero-attractor LMS (ZA-LMS) algorithm was derived [8]. This algorithm fundamentally tries to attract all the weights to zero, hence the name zero attractor. To avoid the strong bias of the ZA-LMS algorithm when the system is not sparse, a weighted zero attractor was introduced in [8], endowing the resulting adaptation with the ability to recognize the non-zero elements and to apply only a small attraction to zero for this group of elements. The sparse LMF algorithms introduced in [9] were shown to outperform their sparsity-aware LMS counterparts in low SNR environments. However, both the sparse LMS and sparse LMF families inherit the slow convergence property in correlated environments, where the eigenvalue spread of the autocorrelation matrix of the input signal is large.

M. Bashir and A. Zerguine are with the Department of Electrical Engineering, King Fahd University of Petroleum & Minerals, Dhahran, 31261, KSA (e-mail: {g201304570,azzedine}@kfupm.edu.sa).

Applying a discrete transformation (e.g., the DCT or the DFT) accompanied by power normalization to the input is known to whiten the input and shrink the eigenvalue spread, as in the case of the transform domain LMS algorithm in [10]. Moreover, endowing the TD-LMS algorithm with a zero attractor yields the TD-ZA-LMS and TD-WZA-LMS algorithms [11], whose convergence was shown to be faster than that of their ZA-LMS and WZA-LMS counterparts.
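To make the eigenvalue-spread claim concrete, the following sketch (ours, not from the paper) compares the spread of an assumed AR(1) autocorrelation matrix before and after an orthonormal DCT with power normalization:

```python
import numpy as np
from scipy.fft import dct

N, rho = 16, 0.9
# Autocorrelation matrix of an AR(1) input: R[m, n] = rho**|m - n| (assumed model)
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

T = dct(np.eye(N), axis=0, norm='ortho')     # orthonormal DCT-II matrix
R_t = T @ R @ T.T                            # autocorrelation after the transform
D = np.diag(1.0 / np.sqrt(np.diag(R_t)))     # power normalization
R_n = D @ R_t @ D                            # transform + power normalization

def spread(A):
    lam = np.linalg.eigvalsh(A)
    return lam.max() / lam.min()

print(f"eigenvalue spread, raw AR(1) input : {spread(R):8.2f}")
print(f"eigenvalue spread, DCT + power norm: {spread(R_n):8.2f}")
```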

In this work, we investigate a sparsity-aware transform domain LMF algorithm and assess its performance in a low SNR environment. We specifically study the TD-ZA-LMF algorithm, since the l1 penalty added to the LMF cost function is convex, unlike other penalties such as the lp (0 < p < 1) and l0 penalties.

Notations: In the remainder of the paper, matrices and vectors are denoted by capital and lowercase boldface letters, respectively; the superscripts H, T, and −1 denote the Hermitian, transpose, and inverse operators, respectively; and, finally, \|\cdot\|_1 and E[\cdot] denote the l1 norm and the statistical expectation, respectively.

    II. NEW LMF ALGORITHM

    A. The Transform Domain LMF Algorithm

Consider a system identification scenario with input vector u_i and desired output d(i) defined by

d(i) = w_o^T u_i + n(i) \quad (1)

where w_o is the optimal filter of length N. The transformed input vector is defined as

x_i = u_i^T T \quad (2)

where T is an N × N transformation matrix and x_i is the transformed input; both x_i and u_i are of length N. The transform domain LMF algorithm is given by

\hat{w}_i = \hat{w}_{i-1} + \mu \Lambda_i^{-1} x_i^T e^3(i) \quad (3)

    where

\hat{w}_i = T^T w_i \quad (4)

where w_i is the time domain weight vector and \hat{w}_i is the transform domain weight vector, μ is the step size, e(i) is the error, i.e., the difference between the desired output and the output of the adaptive filter, and Λ_i is the power normalization matrix. Clearly, (3) does not exploit any sparsity information in its recursion. In order to exploit sparsity, we alter the cost function to include a sparsity-aware term and then apply the general gradient algorithm formula:

\hat{w}_i = \hat{w}_{i-1} - \mu \Lambda_i^{-1} \frac{\partial J}{\partial \hat{w}_i^T} \quad (5)

    to yield the proposed TD-ZA-LMF algorithm.
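As a concrete (hypothetical) realization of recursions (2)-(4), the sketch below implements the plain TD-LMF filter in Python with column-vector conventions, under which the transform reads x_i = T u_i and (4) becomes w_i = T^T \hat{w}_i; for an orthonormal T this is an equivalent relabeling of the row-vector forms above. The step size, the forgetting factor of the power estimate for Λ_i, and the initialization are all assumed values, not taken from the paper.

```python
import numpy as np
from scipy.fft import dct

def td_lmf(u, d, N, mu=1e-3, beta=0.99, eps=1e-6):
    """Sketch of the TD-LMF recursion (3): transform-domain LMF with
    power normalization. Column-vector convention: x_i = T u_i."""
    T = dct(np.eye(N), axis=0, norm='ortho')    # orthonormal DCT-II matrix
    w_hat = np.zeros(N)                         # transform-domain weights
    p = np.ones(N)                              # diag(Lambda_i): power estimates
    for i in range(N - 1, len(u)):
        u_i = u[i - N + 1:i + 1][::-1]          # regressor [u(i), ..., u(i-N+1)]
        x_i = T @ u_i                           # eq. (2): transformed regressor
        p = beta * p + (1 - beta) * x_i**2      # running power normalization
        e = d[i] - x_i @ w_hat                  # a-priori error e(i)
        w_hat += mu * (x_i / (p + eps)) * e**3  # eq. (3): LMF update
    return T.T @ w_hat                          # eq. (4): time-domain weights
```

The diagonal of Λ_i is estimated here by an exponentially weighted power average, a common choice for transform domain adaptive filters.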

B. The Zero-Attractor TD-LMF Algorithm

    The cost function of the zero attractor is given by

J_{ZA} = \frac{1}{4} e^4(i) + \lambda_{ZA} \| T \hat{w}_{i-1} \|_1 \quad (6)

where λ_ZA is the zero-attraction force (a Lagrange multiplier). Note that the coefficients are transformed back to the time domain in order to exploit sparsity, since the transformed weight vector itself is not sparse. Now,

\frac{\partial J_{ZA}}{\partial \hat{w}_i^T} = -e^3(i) x_i^T + \lambda_{ZA} T^T \mathrm{sgn}(T \hat{w}_{i-1}) \quad (7)

Substituting (7) into (5) gives

\hat{w}_i = \hat{w}_{i-1} + \mu \Lambda_i^{-1} x_i^T e^3(i) - \rho_{ZA} \Lambda_i^{-1} T^T \mathrm{sgn}(T \hat{w}_{i-1})

where ρ_ZA = μ λ_ZA. In contrast to the algorithm introduced in [9], this algorithm is designed for non-Gaussian noise and correlated inputs. Moreover, it has the ability to attract all the filter coefficients to zero.
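The following sketch (ours, not the authors' code) adds the zero-attractor term to the TD-LMF sketch above, using the same column-vector conventions; μ, ρ_ZA, and the toy plant, input, and noise levels in the usage example are assumptions chosen for illustration and may need tuning:

```python
import numpy as np
from scipy.fft import dct

def za_td_lmf(u, d, N, mu=1e-3, rho=1e-4, beta=0.99, eps=1e-6):
    """Sketch of the TD-ZA-LMF update: TD-LMF plus the zero attractor
    -rho * Lambda_i^{-1} * T sgn(T^T w_hat), acting on time-domain weights."""
    T = dct(np.eye(N), axis=0, norm='ortho')  # orthonormal DCT-II matrix
    w_hat = np.zeros(N)                       # transform-domain weights
    p = np.ones(N)                            # diag(Lambda_i): power estimates
    for i in range(N - 1, len(u)):
        u_i = u[i - N + 1:i + 1][::-1]        # time-domain regressor
        x_i = T @ u_i                         # transformed regressor
        p = beta * p + (1 - beta) * x_i**2
        e = d[i] - x_i @ w_hat                # a-priori error e(i)
        za = T @ np.sign(T.T @ w_hat)         # attractor on time-domain weights
        w_hat += mu * (x_i / (p + eps)) * e**3 - rho * za / (p + eps)
    return T.T @ w_hat                        # sparse time-domain estimate

# Toy usage: sparse plant, correlated (AR(1)) input, low SNR -- all assumed.
rng = np.random.default_rng(0)
N, L = 16, 50000
w_o = np.zeros(N); w_o[2], w_o[9] = 0.8, -0.5
u = np.zeros(L)
for i in range(1, L):                         # unit-variance AR(1) input
    u[i] = 0.9 * u[i - 1] + np.sqrt(1 - 0.81) * rng.standard_normal()
d = np.convolve(u, w_o)[:L] + 0.5 * rng.standard_normal(L)
print(np.round(za_td_lmf(u, d, N), 2))
```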

III. PERFORMANCE ANALYSIS OF THE TD-ZA-LMF

A. Convergence Analysis

In this section, we study the convergence in the mean of the sparsity-aware TD-LMF algorithm. We launch the analysis from the following general recursion:

w_i = w_{i-1} + \mu v_i e^{2k-1}(i) - \rho s_i \quad (8)

where v_i is the regressor, s_i is the sparsity penalty term, ρ is its weight, and k ∈ ℕ; note that k = 1 and k = 2 result in the LMS and the LMF algorithms, respectively.
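For concreteness, one step of the general recursion (8) can be sketched as follows (our illustration; the sparsity term s_i is deliberately left abstract, exactly as in the analysis):

```python
import numpy as np

def general_step(w, v, d_i, mu, rho, k, s):
    """One iteration of recursion (8); k = 1 gives the LMS error term e(i),
    k = 2 the LMF term e^3(i). s is the sparsity penalty term s_i."""
    e = d_i - v @ w                              # a-priori error e(i)
    w_new = w + mu * v * e**(2 * k - 1) - rho * s
    return w_new, e
```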

Defining the weight error vector by z_i = w_o − w_{i-1} and employing the relation between z_i and e(i), that is,

e(i) = n(i) + z_i^T v_i \quad (9)

equation (8) reads:

z_{i+1} = z_i - \mu v_i \{ n(i) + z_i^T v_i \}^{2k-1} + \rho s_i \quad (10)

Expanding the error nonlinearity, ignoring powers of z_i^T v_i higher than the first, and applying the statistical expectation to (10) results in

E[z_{i+1}] = E[z_i] - \mu E[v_i n^{2k-1}(i)] - \mu (2k-1) E[n^{2k-2}(i) v_i v_i^T z_i] + \rho E[s_i] \quad (11)

By modeling the measurement noise as a zero-mean white Gaussian process, E[v_i n^{2k-1}(i)] = 0. Under the independence assumption, the noise, the regressor, and the weight-error vector can be taken as mutually independent at steady state. The higher the plant noise power, the lower the SNR, which implies a larger weight-error vector; but for a very small step size this effect keeps decreasing, regardless of the plant noise power and the regressor power at steady state. Hence, the independence assumption is valid in the small step size scenario, and it suggests the following relation:

E[n^{2k-2}(i) v_i v_i^T z_i] = E[n^{2k-2}(i)] \, E[v_i v_i^T] \, E[z_i] = E[n^{2k-2}(i)] \, R_v \, E[z_i] \quad (12)

Finally, taking (12) into account, (11) becomes

E[z_{i+1}] = \left\{ I - \mu (2k-1) E[n^{2k-2}(i)] R_v \right\} E[z_i] + \rho E[s_i] \quad (13)

Clearly, the sparsity contribution ρ E[s_i] is bounded (the sgn term is bounded) and hence does not affect the convergence. Equation (13) is quite similar to the mean-convergence relation of the LMS algorithm introduced in [12]; hence, a necessary condition for the stability of (8) in the mean is that the step size μ satisfy the following:

0 < \mu < \frac{2}{(2k-1) \, E[n^{2k-2}(i)] \, \lambda_{\max}(R_v)}

where λ_max(R_v) denotes the largest eigenvalue of R_v.
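A quick numerical check of this bound (our sketch; k = 2 for the LMF, unit-variance Gaussian noise, and an AR(1) covariance standing in for R_v are all assumed example values) confirms that the mean recursion (13) is contractive exactly when μ lies inside the bound:

```python
import numpy as np

k, sigma2, N, rho_ar = 2, 1.0, 16, 0.9        # assumed example values
R_v = rho_ar ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
lam_max = np.linalg.eigvalsh(R_v).max()

# For k = 2 and zero-mean Gaussian noise, E[n^{2k-2}(i)] = E[n^2(i)] = sigma2.
mu_max = 2.0 / ((2 * k - 1) * sigma2 * lam_max)
print(f"mean-stability bound: 0 < mu < {mu_max:.4f}")

# Spectral radius of I - mu*(2k-1)*E[n^{2k-2}]*R_v from (13): < 1 iff stable.
for mu in (0.5 * mu_max, 1.1 * mu_max):
    B = np.eye(N) - mu * (2 * k - 1) * sigma2 * R_v
    radius = np.abs(np.linalg.eigvalsh(B)).max()
    print(f"mu = {mu:.4f}: spectral radius {radius:.3f}"
          f" -> {'stable' if radius < 1 else 'unstable'}")
```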