

Fast robust quasi-Newton algorithm for adaptive arrays

M. Klemes

Abstract: A stable, fast (order-of-N) quasi-Newton (QN) algorithm (FRQN) applicable to arbitrary array signals is presented. Its complexity is only twice that of the normalised least mean squares (NLMS) algorithm, yet its convergence is faster and smoother, proceeding along the optimal QN path as that of the order-$N^2$ recursive least squares (RLS) algorithm, albeit more slowly. The development of its recursive update equation is outlined and compared to those of the NLMS and RLS algorithms, as well as variations of the conjugate gradient (CG) algorithm. Analytic expressions for its excess mean squared error in stationary and non-stationary scenarios are presented and compared with those of the LMS algorithm. Simulations of an adaptive array in a mobile wireless base-station show that the FRQN performance falls between that of the NLMS and RLS algorithms for a wide range of signal scenarios.

1 Introduction

This paper outlines the development of a new digital adaptive array algorithm for interference cancellation. It is fast in that it requires only order-of-N computations per iteration and converges in a substantially fixed number of iterations (proportional to N), regardless of the condition of the signal scenario or geometry of the N-element array. The need for such algorithms has long been felt in dynamic signal scenarios requiring rapid spatial cancellation of co-channel interference using digital adaptive antenna arrays, as in military aircraft communications [1] and digital mobile wireless telephony [2, 3]. Existing order-of-N LMS (least mean squares) algorithms are inadequate for such applications as they suffer from very slow modes of convergence. Ordinary RLS (recursive least squares) algorithms converge much faster but require order-of-$N^2$ operations per iteration, which hampers their realisation. A fast (order-N) and stable RLS algorithm (FRLS) for general adaptive arrays does not presently exist, other than for tapped delay line (temporal FIR) adaptive filters [4].

The new algorithm is developed from an earlier continuous-time analogue formulation by the author [5], and is shown to possess order-of-N simplicity, fast quasi-Newton convergence behaviour (faster than LMS but slower than RLS), robustness and unconditional stability with respect to the type of array data; hence the name fast robust quasi-Newton (FRQN). Extensions of established analytic techniques developed in [6] were used in [7] to derive the analytic results presented herein, along with key assumptions. The FRQN algorithm is also shown to have a much smaller steady-state misadjustment than normalised LMS (NLMS) and momentum LMS (MLMS) [8]. Both the FRQN and NLMS algorithms are compared with the RLS


© IEE, 1999. IEE Proceedings online no. 19990418. DOI: 10.1049/ip-com:19990348. Paper first received 5th January and in revised form 10th December 1998. The author is with Nortel Networks, Department R412, Microwave Radio Systems Development, PO Box 3511, Station C, Ottawa, Ontario K1Y 4H7, Canada.

algorithm in terms of recursive computations, misadjustment and tracking errors, and simulated performance in a mobile wireless base-station adaptive antenna array.

2 Development

2.1 Background

This algorithm evolved from the author's implementation of a modified version of Compton's improved feedback loop [1], for which the differential equation is given in [5] as

$$\left[T_1T_2\,\frac{d^2}{dt^2} + (\cdots)\,\frac{d}{dt} + \frac{1}{\delta}I + g\,Y(t)Y^H(t)\right]W(t) = (\delta g)\,Y(t)\,r^*(t) \qquad (1)$$

where $Y(t)$ is the complex $N$-vector of superposed signals incident on the $N$ array elements, $W(t)$ is the adaptive weight vector and $r(t)$ the reference signal. $T_1$ and $T_2$ are time constants selected so that $T_2 \approx 5T_1$ and $\delta = T_2/(T_1 + T_2) \approx 1$, and $g \gg 1$ is the gain of the feedback loop. It was shown in [5] that $W(t)$ follows the optimal Newton path to the steady state, unlike the gradient path followed by the LMS algorithm.

In converting this algorithm to discrete time, dropping the $(1/\delta)I$ term from the coefficient of $W(t)$ was required when minimising averaging time, to avoid a collapse of rank in the coefficient of $Y(t)r^*(t)$ in the steady state. Dividing $g$ by $q$ in the coefficient of $dW(t)/dt$ in eqn. 1 helps to increase speed. Finite differences were used to replace the derivatives of $W(t)$, with $W(t)$ being sampled at intervals of $\Delta t$ as required by the Nyquist sampling theorem. The result corresponding to eqn. 1 is

$$\tau_1\tau_2\,(W_k - 2W_{k-1} + W_{k-2}) + \cdots \qquad (2)$$


where the subscript $k$ denotes time corresponding to $t = k\Delta t$, $\tau_{1,2} = T_{1,2}/\Delta t$, and the optimal values of $\varepsilon$ and $q$ were found empirically to be $\tau_2$ and 2, respectively. Although, according to [1], the minimum value of $\tau_1$, when combined with the Nyquist sampling-rate criterion, is bounded as $\tau_1 \geq 4N$, it was found that sufficiently quasi-Newton behaviour can be obtained even in highly ill-conditioned scenarios with $\tau_2 = 4N$ and $q = N$.
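For reference, the finite-difference substitution that produces the second-difference term visible in eqn. 2 is the standard one (a worked note added here for clarity, not part of the original text), with sampling interval $\Delta t$:

$$\frac{dW(t)}{dt}\bigg|_{t=k\Delta t} \approx \frac{W_k - W_{k-1}}{\Delta t}, \qquad \frac{d^2W(t)}{dt^2}\bigg|_{t=k\Delta t} \approx \frac{W_k - 2W_{k-1} + W_{k-2}}{\Delta t^2},$$

so that the second-derivative term of eqn. 1 becomes

$$T_1T_2\,\frac{d^2W(t)}{dt^2} \;\to\; \frac{T_1T_2}{\Delta t^2}\,(W_k - 2W_{k-1} + W_{k-2}) = \tau_1\tau_2\,(W_k - 2W_{k-1} + W_{k-2}), \qquad \tau_{1,2} = \frac{T_{1,2}}{\Delta t}.$$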

2.2 Geometric optimisation interpretation

The FRQN algorithm can be interpreted as a mean-squared-error-minimising algorithm with a weight-vector-step optimising algorithm embedded in it, which drives the weight steps toward the optimal, quasi-Newton direction. To see this, define the weight-vector step as $\tau_2(W_k - W_{k-1}) = \tau_2\,\Delta W_k$, and the instantaneous error as $e_k = W_k^H Y_k - r_k$. Then, in terms of the gradient of the instantaneous squared error (magnitude) with respect to $W_k$, eqn. 2 may be written as

$$\tau_1\,\Delta(\tau_2\Delta W_k) = -\left[\frac{1}{\delta}I + \frac{g}{q}\,Y_kY_k^H\right](\tau_2\Delta W_k) - \frac{g}{q}\,\nabla_{W_k}|e_k|^2 \qquad (3)$$

After averaging, with $Y_k$ independent of $W_k$ and $R = E\{Y_kY_k^H\}$, the embedded algorithm for $(\tau_2\,\Delta W_k)$ becomes apparent with the following rearrangement:

$$\Delta(\tau_2\Delta W_k) \approx -\mu R\left[(\tau_2\Delta W_k) - \left(-R^{-1}\,\nabla_{W_k}|e_k|^2\right)\right] \qquad (4)$$

with $\mu = g/q$ and the $1/\delta$ term being neglected. The last term in [ ] on the RHS is recognised as the quasi-Newton (QN) weight step, so the current weight step is adjusted in proportion to the (vector) amount by which it differs from the QN weight step, with time constants proportional to $q/g$ and the eigenvalues of $R$, $\{\lambda_n\}$. Thus, this process is much faster than that of the error minimisation itself. The latter process is seen to be a QN algorithm with time constant $\tau_2/q$, as is apparent from the alternative rearrangement in familiar QN form, whose time constants are readily identified, i.e. the further approximation

(5)
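To make the geometric interpretation concrete, the following Python/NumPy sketch (illustrative only, not part of the original paper; the $2\times2$ covariance matrix and weights are arbitrary example values) compares one Newton-type step $-R^{-1}\nabla\xi/2$ with one gradient step on the quadratic MSE surface $\xi(W) = \text{mmse} + (W - W_{\text{opt}})^H R (W - W_{\text{opt}})$. The Newton step lands on $W_{\text{opt}}$ in a single move regardless of the eigenvalue spread of $R$, which is the direction the FRQN weight-step optimisation drives toward; the gradient step barely moves along the small-eigenvalue axis.

```python
import numpy as np

# Arbitrary illustrative example: an ill-conditioned 2x2 correlation matrix R
# (eigenvalue spread 100:1) and an assumed optimal (Wiener) weight vector.
R = np.array([[10.0, 0.0],
              [0.0, 0.1]])
W_opt = np.array([1.0, -2.0])
W = np.zeros(2)                                # starting weights

grad = 2.0 * R @ (W - W_opt)                   # gradient of the quadratic MSE surface at W

# One Newton step: subtract R^{-1} times half the gradient -> lands exactly on W_opt.
W_newton = W - 0.5 * np.linalg.solve(R, grad)

# One steepest-descent step with the largest step size ~ 1/lambda_max (LMS-like).
mu = 1.0 / np.max(np.linalg.eigvalsh(R))
W_grad = W - 0.5 * mu * grad

print("Newton step  :", W_newton)              # [ 1. -2.]  (equals W_opt)
print("Gradient step:", W_grad)                # [ 1.   -0.02]: slow along the weak eigenvector
```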

2.3 Computation and comparison with MLMS, NLMS and RLS algorithms

To obtain a computable (causal) recursive relation for the FRQN algorithm, it must be rearranged yet again in order to isolate $W_k$ on the LHS. Fortunately it is possible, with the help of the matrix-inversion lemma, to preserve its order-N simplicity. After much tedious manipulation, with $g(\tau_2/q)\,\lambda_{\min} \gg 1$ for tracking purposes, the recursive difference equation of the FRQN algorithm emerges as

with $a_0 = \tau_2(q + \gamma)$, $b_0 = g(1 + \tau_2/q)$, $a_1 = 2\tau_2(\tau_1 + \gamma/2)$, $\gamma = 1/\tau_2$, and $b_1 = g\tau_2/q$.


It can be rewritten in a more efficient form for the purpose of computational implementation. When the original parameters (time constants, gains, etc.) are substituted, the FRQN algorithm recursion becomes

(7)

Defining the (positive, real) scalar at the $k$th iteration

allows one to eventually simplify the FRQN algorithm to

$$W_k = W_{k-1} + \cdots \qquad (9)$$

where all the parameters are determined by $N$ and $g$. The computational requirements (per iteration) are tabulated below. Table 1 shows the computations of initial parameters and Table 2 shows the computations required at every (e.g. the $k$th) iteration.

Table 1: Initial FRQN parameter computations

Table 2: FRQN computations at the kth iteration

Computation | Complex multiplications | Complex additions | Real divisions
$P_k = \mu_2 + Y_k^H Y_k$ | N | N + 1 (real) | 0
$y_k = W_{k-1}^H Y_k$ | N | N | 0
$\alpha_k = r_k - y_k$ | 0 | 1 | 0
$v_k = -(\mu_1/P_k)\,\alpha_k^*$ | 1 | 0 | 1
$\Delta W_{k-1} = W_{k-1} - W_{k-2}$ | 0 | N | 0
$\mu_0\,\Delta W_{k-1}$ | N | 0 | 0
$\varepsilon_k = Y_k^H\,\Delta W_{k-1}$ | N | N | 0
$c_k = (\mu_1/P_k)\,\varepsilon_k$ | 1 | 0 | 1
$D_k = Y_k\,v_k$ | N | 0 | 0
$G_k = Y_k\,c_k$ | N | 0 | 0
$W_k = W_{k-1} + \mu_0\,\Delta W_{k-1} - G_k + D_k$ | 0 | 3N | 0
Total | 6N + 2 | 7N + 2 | 2

In the form of eqn. 9, the FRQN resembles the momentum LMS (MLMS) and conjugate gradient (CG) algorithms discussed in [8], in that it also contains a 'momentum' term proportional to $W_{k-1} - W_{k-2}$, but effectively


with a matrix coefficient of proportionality, updated at each iteration with modest computational effort. Unlike the MLMS, the FRQN converges more rapidly than the LMS without the increase in misadjustment seen in [8]. In fact, it is shown here that the FRQN possesses much lower misadjustment than the LMS. The FRQN does not require the computational search for optimal scalar coefficients in the gradient recursion and weight recursion; it appears to inherently possess the optimal matrix coefficient of the weight 'momentum' term, thereby allowing the QN form (eqn. 5) with its optimal convergence properties.

The following equations from [7, 8] illustrate the comparison. The notations have been altered somewhat to align them for easier comparison.

The FRQN algorithm may be rewritten yet again as

$$W_k = W_{k-1} + \mu_1\,[\,\cdots\,]\,(W_{k-1} - W_{k-2}) - \frac{\mu_2}{P_k}\,\Gamma_{k-1} \qquad (10)$$

where $\Gamma_{k-1}$ is the latest available gradient estimate, with $\mu_1 = 1/(1 + q/\tau_1)$, $\mu_2 = \tau_1(\tau_1 + \cdots)/g$ and $P_k = \mu_2 + Y_k^H Y_k$. Notice that although the coefficient of the momentum term is a matrix, the FRQN algorithm is shown in Table 2 to require $6N + 2$ multiplications per iteration.
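As an illustration of the general shape of the recursion in eqn. 10, here is a minimal Python/NumPy sketch (an assumption-laden sketch, not the author's exact FRQN: the matrix momentum coefficient is replaced by a plain scalar $\mu_1$, the precise values of $\mu_1$ and $\mu_2$ are left to the caller, and the instantaneous gradient estimate $\Gamma_{k-1} = Y_k\,(Y_k^H W_{k-1} - r_k^*)$ is assumed):

```python
import numpy as np

def momentum_qn_step(W1, W2, Yk, rk, mu1, mu2):
    """One iteration of a momentum-plus-normalised-gradient recursion with the
    general shape of eqn. 10 (illustrative sketch; a scalar momentum coefficient
    mu1 stands in for the matrix coefficient of the full FRQN).

    W1 : weight vector at time k-1      W2 : weight vector at time k-2
    Yk : array snapshot, shape (N,)     rk : reference sample (complex scalar)
    """
    Pk = mu2 + np.vdot(Yk, Yk).real               # normalising scalar P_k = mu2 + Yk^H Yk
    grad = Yk * (np.vdot(Yk, W1) - np.conj(rk))   # assumed gradient estimate Gamma_{k-1}
    return W1 + mu1 * (W1 - W2) - (mu2 / Pk) * grad
```

With $\mu_1 = 0$ the momentum term disappears and the step reduces to an NLMS-like normalised-gradient update, which is one way to see why the per-iteration cost stays within roughly twice that of the NLMS.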

For comparison, the CG algorithm as discussed in [8] may be written as

$$W_k = W_{k-1} + \gamma_k\,(W_{k-1} - W_{k-2}) - \alpha_{k-1}\,\Gamma_{k-1} \qquad (12)$$

(13)

where $\Gamma_{k-1} = R\,W_{k-1} - S$

and $R = E\{Y_kY_k^H\}$ and $S = E\{Y_kr_k^*\}$ are used. Notice that the coefficients of the momentum as well as of the gradient term are time-varying scalars. The momentum LMS algorithm consists in the scalars $\alpha$ and $\gamma$ in eqn. 12 being fixed, which gives rise to greater misadjustment than the LMS. When they are adjusted to compensate, the MLMS loses the convergence speed advantage over the LMS, as shown in [8]. In light of the above, the MLMS has no advantages over the FRQN.

The CG algorithm requires more computations per iteration than the MLMS or FRQN. For the FRQN, these are counted in Table 2. Strictly speaking, they apply to real-valued data of dimension N, because not every multiplication in the complex case requires 4 real multiplies and 2 real adds. Thus, the numbers represent the maximum required number of full complex operations.

At each iteration the FRQN algorithm in its preferred form (eqn. 9) requires $6N + 2$ complex multiplications, $7N + 2$ additions and 2 real divisions. (Note that $Y_k^H W_{k-1} = y_k^*$ and $Y_k^H W_{k-2}$ are scalars.) This demonstrates that its computational complexity is indeed of order N, unlike many other QN algorithms, including the RLS, which are of order $N^2$. The FRQN algorithm is only slightly more than twice as computationally intensive as the NLMS algorithm, which is given by

$$W_k = \left[I - \mu_k\,Y_kY_k^H\right]W_{k-1} + \mu_k\,Y_k\,r_k^* \qquad (14)$$


with $\mu_k = 1/(Y_k^HY_k)$. This requires about $3N$ complex multiplies and adds per iteration. For comparison, a computationally stable version of the order-$N^2$ RLS algorithm [6] is given by

$$W_k = W_{k-1} - K_k\,\alpha_k^* \qquad (15)$$

with $K_k = (1/\kappa)P_{k-1}Y_k$, $\kappa = q + V_k^HY_k$, $V_k^H = Y_k^HP_{k-1}$, $\alpha_k = W_{k-1}^HY_k - r_k$ and $P_k = (1/q)\left[P_{k-1} - K_kV_k^H\right]$. It was observed that the weights corresponding to Fig. 6 began wandering after about 50 iterations, although that did not affect its output error or tracking performance. It is initialised with $P_0 = I/(0.001\,E\{Y_k^HY_k\}/N)$, and $q = 1 - 1/\tau$ is used for fair comparison against the FRQN algorithm with the same signals in the same scenarios. It requires $4N^2 + 4N + 2$ operations per iteration, so the FRQN is more economical for all practical values of N (> 1/2). Note that RLS is also a QN algorithm, but its equivalent time constant parameter is constrained to $0 < q < 1$ only for tracking. In stationary scenarios one may use $q = 1$ (and $g \to \infty$ in the FRQN).
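For concreteness, a minimal Python/NumPy sketch of one iteration of the NLMS update of eqn. 14 and of the exponentially weighted RLS update of eqn. 15 is given below (an illustration under the stated equations, not the author's code; variable names and the default forgetting factor are assumptions). It makes visible where the order-N and order-$N^2$ costs arise.

```python
import numpy as np

def nlms_step(W, Yk, rk):
    """NLMS iteration, eqn. 14: W_k = [I - mu_k Yk Yk^H] W_{k-1} + mu_k Yk rk*."""
    mu_k = 1.0 / np.vdot(Yk, Yk).real            # mu_k = 1/(Yk^H Yk); O(N)
    return W - mu_k * Yk * np.vdot(Yk, W) + mu_k * Yk * np.conj(rk)

def rls_step(W, P, Yk, rk, q=1.0 - 1.0 / 16.0):
    """Exponentially weighted RLS iteration in the spirit of eqn. 15
    (q is the forgetting factor; q = 1 - 1/(4N) with N = 4 as in the simulations)."""
    Vh = np.conj(Yk) @ P                          # Vk^H = Yk^H P_{k-1}; O(N^2)
    kappa = q + Vh @ Yk                           # scalar denominator
    Kk = (P @ Yk) / kappa                         # gain vector K_k = (1/kappa) P_{k-1} Yk
    alpha = np.vdot(W, Yk) - rk                   # a priori error alpha_k = W_{k-1}^H Yk - rk
    W_new = W - Kk * np.conj(alpha)               # weight update of eqn. 15
    P_new = (P - np.outer(Kk, Vh)) / q            # P_k = (1/q)[P_{k-1} - Kk Vk^H]; O(N^2)
    return W_new, P_new
```

Each RLS iteration touches the full $N \times N$ matrix $P$, whereas the NLMS (and the FRQN) work only with length-$N$ vectors.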

The useful adapted scalar output signal from each algorithm is given by

$$y_k = W_{k-1}^H\,Y_k \qquad (16)$$

which avoids using $W_k$ and incurring extra computations, and prevents the LMS output from following $r_k$ at every instant.

2.4 Mean squared output errors - the learning curves

The minimum mean squared error (MMSE) of each algorithm is given by

$$\text{mmse} = E\{r_kr_k^*\} - S^HW_{\text{opt}} \qquad (17)$$

where $S = E\{Y_kr_k^*\}$ and the steady-state optimal weight vector is

$$W_{\text{opt}} = R^{-1}S \qquad (18)$$
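As a small numerical illustration of eqns. 17 and 18 (synthetic toy data, not the paper's scenarios), the optimal weights and the MMSE can be computed directly from sample estimates of $R$ and $S$:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 4, 10_000                                   # 4-element array, K snapshots (toy values)

# Synthetic snapshots: reference symbols arriving on an example steering vector, plus noise.
a = np.exp(1j * np.pi * np.arange(N) * np.cos(np.deg2rad(60)))   # assumed steering vector
r = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N)))
Y = np.outer(r, a) + noise                         # row k is the snapshot Y_k

R = Y.T @ Y.conj() / K                             # sample estimate of R = E{Yk Yk^H}
S = Y.T @ r.conj() / K                             # sample estimate of S = E{Yk rk*}

W_opt = np.linalg.solve(R, S)                      # eqn. 18: W_opt = R^{-1} S
mmse = np.mean(np.abs(r) ** 2) - np.real(S.conj() @ W_opt)   # eqn. 17: E{rk rk*} - S^H W_opt

print("W_opt =", np.round(W_opt, 3))
print("mmse  =", round(float(mmse), 4))
```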

In the steady state, as $k \to \infty$, the excess mean squared error (EMSE) of the NLMS algorithm was approximated using the same standard approach as for the LMS algorithm in Sections 9.4-9.8 of [6], resulting in

$$\text{emse}_{\infty} = \text{mmse}\;\frac{\displaystyle\sum_{n=1}^{N}\frac{\beta\lambda_n}{2 - \beta\lambda_n}}{\displaystyle 1 - \sum_{n=1}^{N}\frac{\beta\lambda_n}{2 - \beta\lambda_n}} \qquad (19)$$

where $\beta\lambda_n < 2$ is often assumed, $\lambda_n$ being the $n$th eigenvalue of $R$, and $\sum_n\lambda_n = \mathrm{Tr}[R]$ is used in approximating $\mu_k \approx \beta = 1/\mathrm{Tr}[R]$. Then, the misadjustment of the NLMS algorithm with $\beta\lambda_n < 2$ becomes

In non-stationary scenarios, where $\sigma_w^2$ is the variance of the optimal weights and $\Phi$ is their parameter of non-stationarity according to the model (similar to Section 9.14 of [6])

$$W_{o,k} = \Phi\,W_{o,k-1} + q_k \qquad (21)$$

The reference signal is given by the output of the model (optimal) system as

$$r_k = y_{o,k} = W_{o,k}^H\,Y_k + v_k \qquad (22)$$

where $v_k$ is the additive random noise which gives the same MMSE in each algorithm.
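The non-stationary model of eqns. 21 and 22 can be stated compactly as a simulation loop; the sketch below (Python, with arbitrary example values for $\Phi$ and the noise levels, which are not quantified in the text) shows the reference-generation process that the tracking-error analysis assumes.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 4, 1000
Phi = 0.999          # non-stationarity parameter, slightly less than 1 (example value)
sigma_q = 0.01       # std of the weight-driving noise q_k (example value)
sigma_v = 0.05       # std of the additive noise v_k (example value)

def cnoise(*shape):
    """Unit-variance circular complex Gaussian samples."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

W_o = cnoise(N)                                     # initial optimal weights W_{o,0}
for k in range(K):
    Yk = cnoise(N)                                  # array snapshot (placeholder statistics)
    W_o = Phi * W_o + sigma_q * cnoise(N)           # eqn. 21: W_{o,k} = Phi W_{o,k-1} + q_k
    r_k = np.vdot(W_o, Yk) + sigma_v * cnoise(1)[0] # eqn. 22: r_k = W_{o,k}^H Yk + v_k
    # (r_k, Yk) would be fed to each adaptive algorithm under test
```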

The corresponding steady state tracking error of the NLMS algorithm using the same approximation for the


normalised step-size parameter $\beta$, is given by

which is inversely proportional to $\beta$, while the EMSE is directly proportional to it. When $g \to \infty$, the tracking error becomes

$$\text{tmse} \approx \frac{2\,\sigma_w^2\,N\sum_{n=1}^{N}\lambda_n}{(1 - \cdots)} \qquad (24)$$

The total mean squared output error is the sum of all contributions, i.e.

$$\text{mse}_k = \text{mmse} + \text{emse}_k + \text{tmse}_k \qquad (25)$$

for each of the algorithms.

In the steady state, the FRQN excess mean squared error was found in [7] to be

by laborious extension of the standard approach in [6], with the approximation $E\{W_{k-1}W_{k-2}^H\} \approx E\{W_{k-1}W_{k-1}^H + W_{k-2}W_{k-2}^H\}/2$ and the parameter

$$p = \frac{g}{\tau_2(q + \gamma) + g(1 + \tau_2/q)\,Y^HY} \qquad (27)$$

serving a role similar to that of $\beta$ in the NLMS algorithm, with a similar approximation that $Y^HY \approx \mathrm{Tr}[R]$. As $g \to \infty$ with $p\lambda_n < 2$, the FRQN algorithm has a much smaller misadjustment than the NLMS, given by

$$\frac{1}{\cdots} \;\ll\; 1 \qquad (28)$$

By averaging 100 runs of the first 1000 iterations of each algorithm, it was confirmed that for the same convergence speeds, the FRQN algorithm has a much smaller misadjustment in steady state than the NLMS algorithm, as predicted by eqn. 28 above.

Convergence of the FRQN in the mean square in a non-stationary environment was also analysed by a laborious extension of the same method as for the NLMS in eqns. 21-24. Its mean squared tracking error in the quasi-steady state was found to be approximately given by

$$M = \frac{\text{emse}}{\text{mmse}} = \frac{\cdots}{1 + 2\tau_2/q} \qquad (29)$$

where $\sigma_w^2$ is the variance of the optimal weights, and $\Phi$ is their parameter of non-stationarity. Dependence on $\tau_1$, $\tau_2$ is negligible, $\Phi$ is slightly less than 1 and $p \approx q/((q + \tau_2)\,\mathrm{Tr}[R])$.

Since $\tau_2/q > 1$, the FRQN tracking error would be greater than that of the NLMS. Indeed, simulation results show that the FRQN tracking error grows faster than that of the NLMS as the source velocity of the desired signal (and hence also its fade rate or Doppler) increases. This corresponds to decreasing $\Phi$, but since $\Phi$ and $\sigma_w^2$ are not


easily quantifiable from the scenario parameters, the contribution of the tracking error may not dominate the total MSE in eqn. 25.

3 Simulations

3.1 Comparisons of adaptation transients

Figs. 1-6 contain plots of the squared errors $|\alpha_k|^2 = |y_k - r_k|^2$, i.e. the learning curves of the FRQN, NLMS and RLS algorithms in stationary versions of the scenarios depicted in Figs. 7 and 8. A well-conditioned scenario is one in which the signal vectors which compose $Y_k$ are roughly mutually orthogonal and the received signal powers roughly equal, giving rise to a moderate eigenvalue spread of $R$ (28:1 in Fig. 7). In an ill-conditioned scenario the signal vectors are nearly parallel and powers may be widely disparate, thus causing large eigenvalue spreads ($6.5 \times 10^6$:1, or $2.5 \times 10^3$:1 for the 'significantly' contributing slowest mode of the MSE, in Fig. 8). In all the simulation results that follow, there is no delay spread due to actual multipath, only the Doppler fading modulation. The algorithm parameters were $N = 4$; $g \to \infty$ (so $\mu_k = 1/(Y_k^HY_k)$) for the NLMS; $g = 1/\lambda_{\min}$ and others as in Table 1 for the FRQN; $q = 1 - 1/(4N)$ in eqn. 15 for the RLS. They were selected for the fastest speed of the NLMS and FRQN (so they would converge equally fast in a well-conditioned scenario). Then, the forgetting factor of the RLS was selected to correspond to the same equivalent time constant as the nominal one (i.e. $\tau_2/q$) in the FRQN algorithm for a fair comparison.

The theoretical mean squared errors (MSE) and minimum mean squared errors (MMSE) for both the FRQN and LMS algorithms in two scenarios are included. (The theoretical MSE was not plotted with the RLS algorithm, as it optimises a different penalty function than the other two.) All weights begin at $0 + j0$ and the same pseudo-random sequences were used for the signals in each run of 1000 iterations. The signal sequences represented independent QPSK data at 30 ksymbols per second, or 60 kbit/s.

Each algorithm was simulated on an $N = 4$ element uniform line array with $\lambda/2$ spacing, 50 m above ground. One desired-signal source and 3 interference sources were positioned 3 km away, 2 m above ground, with EIRPs of 47, 44, 43 and 50 dBm, respectively. The first source (desired signal) was always situated at 90°, i.e. on a line perpendicular to the array. A perfect replica of it served as the reference signal. In the well-conditioned scenario the last 3 (interference) sources were situated at azimuths of 140°, 52° and 32°, and in the ill-conditioned scenario they were situated at 72°, 71° and 70°, respectively.
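The conditioning claim can be checked with a short sketch (Python/NumPy, an idealised check only: isotropic elements, no ground reflection, noise or Doppler, and powers taken relative to the desired signal's EIRP, so the numbers will not reproduce the quoted 28:1 and $6.5 \times 10^6$:1 figures exactly): steering vectors of a 4-element $\lambda/2$ line array at the quoted azimuths give a covariance matrix whose eigenvalue spread is moderate for the well-separated angles and very large for the closely spaced ones.

```python
import numpy as np

def steering(az_deg, N=4):
    """Steering vector of an N-element uniform line array with lambda/2 spacing
    (azimuth convention as in Figs. 7 and 8: 90 degrees is broadside)."""
    return np.exp(1j * np.pi * np.arange(N) * np.cos(np.deg2rad(az_deg)))

def eig_spread(azimuths_deg, eirps_dbm, N=4):
    """Eigenvalue spread of R = sum_i p_i a_i a_i^H with powers relative to source 1."""
    R = np.zeros((N, N), dtype=complex)
    for az, eirp in zip(azimuths_deg, eirps_dbm):
        p = 10.0 ** ((eirp - eirps_dbm[0]) / 10.0)
        a = steering(az, N)
        R += p * np.outer(a, a.conj())
    lam = np.linalg.eigvalsh(R)
    return lam[-1] / lam[0]

eirps = [47, 44, 43, 50]                             # desired, interferers 1-3 (dBm)
print(eig_spread([90, 140, 52, 32], eirps))          # well conditioned: moderate spread
print(eig_spread([90, 72, 71, 70], eirps))           # ill conditioned: spread many orders larger
```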

In the well-conditioned scenario both the NLMS and FRQN algorithms converge in about 250 iterations. This is the fastest speed possible for the NLMS, and the fastest attained empirically for the FRQN algorithm. Note that the latter has a much smaller MSE in the stationary scenarios than the NLMS (which was also confirmed theoretically), but not as small as that of the RLS. The RLS algorithm converges in 7 iterations, which is within its theoretical time of $2N$ for $N = 4$.

In the ill-conditioned scenario, the FRQN algorithm converges in about 600 iterations (it has slightly degenerated to partially LMS-type behaviour in accordance with [1], since $\tau_1 \geq 4N$), whereas the NLMS algorithm was nowhere near steady state, even in 1000 iterations. (It eventually took about 7500 iterations to converge.) The RLS algorithm now converges in about 30 iterations in this highly ill-conditioned scenario, also somewhat more than its theoretical time of $2N = 8$, but not as many as for the FRQN.


Fig. 1 Convergence of FRQN algorithm in well-conditioned scenario
solid curve: approximate theoretical mean squared error; dashed: minimum mean squared error; dotted: a priori squared error of single run

Fig. 2 Convergence of FRQN algorithm in ill-conditioned scenario
solid curve: approximate theoretical mean squared error; dashed: minimum mean squared error; dotted: a priori squared error of single run

The bumps in the approximate theoretical curves for the FRQN algorithm are believed to be artefacts of the approximation $E\{W_{k-1}W_{k-2}^H\} \approx E\{W_{k-1}W_{k-1}^H + W_{k-2}W_{k-2}^H\}/2$ used in deriving eqns. 26-29. They disappear in the steady state as $k \to \infty$. Averages over 100 runs (not plotted) followed the approximate theoretical curves relatively closely.

3.2 Comparisons of steady-state and tracking performance

According to more detailed simulations [Note 1] of non-stationary scenarios with the NLMS algorithm by other investigators, its performance was adequate for indoor


mobile radio using BPSK data rates of 800 kbit/s with fading rates of 5-9 Hz, but was inadequate for 32 kbit/s with a fading rate of 70 Hz in outdoor mobile radio environments. This translates to fade rates relative to data rates of 6.25-11.25 $\times 10^{-6}$ and $2.19 \times 10^{-3}$, respectively. The author of [2] found that the LMS algorithm is inadequate with a fade rate to data rate ratio of $3.3 \times 10^{-3}$ in IS-54 cellular signalling, and also with a relative fade rate of $5 \times 10^{-4}$ in GSM signalling. Such results therefore represent 'fast' fade rates.

Note 1: Dr. Salim Hanna, Industry Canada, private communication. Note that the 'fade rate' is actually the bandwidth of the Doppler spectrum modulating the mobile's emitted signal as a result of local multipath scattering.


Fig. 3 Convergence of NLMS algorithm in well-conditioned scenario
solid curve: approximate theoretical mean squared error; dashed: minimum mean squared error; dotted: a priori squared error of single run

Fig. 4 Convergence of NLMS algorithm in ill-conditioned scenario
solid curve: approximate theoretical mean squared error; dashed: minimum mean squared error; dotted: a priori squared error of single run

The simulation results in Figs. 9-12 and Table 3 represent symbol rates of 30 ksymbol/s (QPSK at 60 kbit/s), and fade rates of 47.2, 23.6, 102.3 and 78.7 Hz for the 4 sources. The relative fade rates are therefore in the region of $2 \times 10^{-3}$, i.e. 'fast' fading. The correct symbols were used as the reference in all cases and the algorithm parameters were the same as in Figs. 1-6. Each run again consisted of the same 100 blocks of 1000 samples (i.e. the same 200 000 bits). Bit errors were counted only after the adaptation transients expired, i.e. after 8000 iterations.

Each scenario was first modelled without fading to determine the BER obtained with the RLS, NLMS and FRQN algorithms as a function of received signal-to-noise ratio (SNR). Those results are summarised in Table 3.


Table 3: BER performance of algorithms in stationary scenarios

SNR, dB | Algorithm | Well conditioned | Ill conditioned
13.2 | NLMS | $1.325 \times 10^{-5}$ | $4.1 \times 10^{-3}$
13.3 | FRQN | $1.5 \times 10^{-5}$ | $3.894 \times 10^{-4}$
13.4 | RLS | $< 5 \times 10^{-6}$ | $2.041 \times 10^{-4}$

Relatively fast fading was then introduced into the same scenarios to determine its effects on the BER against SNR relationships for each algorithm. In the case of complete loss of Doppler tracking, it is expected that with moderate to high SNR, each of the quadrature QPSK symbol


Fig. 5 Convergence of RLS algorithm in well-conditioned scenario
dashed: minimum mean squared error; dotted: a priori squared error of single run

Fig. 6 Convergence of RLS algorithm in ill-conditioned scenario
dashed: minimum mean squared error; dotted: a priori squared error of single run

streams would have the wrong polarity half of the time, during which binary bit decisions would be only as good as random guesses, i.e. half of them wrong. Consequently, the BER would be $(1/2) \times (1/2)$, or close to 0.25, under complete loss of tracking of the fading modulation.

Tracking performance (BER against SNR) was simulated by varying the noise powers added in front of the receivers to cause the SNR to take on values of approximately 60, 30, 18 and 11 dB (calculated on the desired received signal), with all source velocities held fixed at the values and directions indicated in Figs. 7 and 8. The angles varied negligibly during the course of the simulations.

The results represent steady state performance and are summarised in the following plots. Note that the FRQN


algorithm consistently gave 2 to 3 times better BER for the same SNR and fade rate than did the NLMS algorithm in its fastest form. The FRQN algorithm performed almost as well as the RLS at low to moderate SNR. Fig. 9 compares the NLMS, FRQN and RLS algorithms in the well-conditioned, fast-fading scenario. Fig. 10 compares their performances in the ill-conditioned, fast-fading scenario.

Again notice that the FRQN algorithm performs almost as well as the more computationally intensive RLS algorithm at low to moderately high SNR levels, and both algorithms track better than the NLMS algorithm throughout the range of SNR. Because the same bit streams were used in each simulation run, no error bars are shown on


the plots of BER. Thus the relative BER figures are legitimate indicators of the relative tracking performance of the algorithms.


Fig. 7 Simulated well-conditioned scenario
$\rho = -0.97 + 0.03\,\angle$random phase
Interferer 1: 140°, 44 dBm, 850 MHz, 30 km/h
Interferer 2: 52°, 43 dBm, 850 MHz, 130 km/h
Interferer 3: 32°, 50 dBm, 850 MHz, 100 km/h
Desired signal: 90°, 47 dBm, 850 MHz, 60 km/h
Element patterns: azimuth, 86° wide pointed at 90°; elevation, 71° wide pointed at -5°

In the above comparisons the fade rate of the desired signal was about $1.4 \times 10^{-3}$ relative to its symbol rate, which qualifies it as fast fading with respect to the cited results of other investigators. Although the interfering signals' relative fade rates were up to twice as fast, the algorithms did not have to track them in the same sense, as they were being nulled. The slight increase in BER at high SNR in the FRQN algorithm may be due to numerical effects of ill-conditioning, and would have to be investigated before trying a fixed-point real-time digital signal processing (DSP) chip implementation.

The bit error rate at a moderate SNR is next examined as a function of fade rate for the RLS, NLMS and FRQN algorithms in the same basic scenarios. Tracking performance versus fade rate (BER against Doppler) was simulated by varying the speed of only the desired-signal source over the values 30, 60, 120 and 240 km/h, while keeping its SNR fixed at the 30 dB value. The interferers' fade rates remained the same as in Figs. 9 and 10. The SNR of the desired signal was constant and moderately high, so that noise would not mask the effects of the tracking error. (According to Figs. 9 and 10, an SNR of 30 dB and greater has only a small effect on the BER in the presence of fast fading.)



Fig. 8 Simulated ill-conditioned scenario
$\rho = -0.97 + 0.03\,\angle$random phase
Interferer 1: 72°, 44 dBm, 850 MHz, 30 km/h
Interferer 2: 71°, 43 dBm, 850 MHz, 130 km/h
Interferer 3: 70°, 50 dBm, 850 MHz, 100 km/h
Desired signal: 90°, 47 dBm, 850 MHz, 60 km/h
Element patterns: azimuth, 86° wide pointed at 90°; elevation, 71° wide pointed at -5°


Fig. 9 Tracking performance (BER) against SNR of desired signal in fast-fading well-conditioned scenario
Desired-signal Doppler = 47.2 Hz; QPSK symbol rate = 30 ksps, 100 ks total
solid: FRQN; dashed: NLMS; dotted: RLS

Note that the tracking error does not dominate the total mean squared error (which is responsible for the BER in these simulations) until the fade rate is extremely high. Therefore, the FRQN enjoys a lower mean-squared error than the NLMS at most practical fade rates, even though its tracking error is higher. Most of the errors are due to the EMSE caused by steady state stochastic misadjustment, which is much lower for the FRQN than for the NLMS.


- E Io-’r

.. ..- ..... .... ...........

....... ....... .....

10-31 1

10 20 30 LO 50 60 70 desired-signal to noise ratio (SNR),dB

Fig. 10 Tracking performance (BER) against SNR of desired signal in fast-fading ill-conditioned scenario
Desired-signal Doppler = 47.2 Hz; QPSK symbol rate = 30 ksps, 100 ks total
solid: FRQN; dashed: NLMS; dotted: RLS

1o-’r

10-41 I J

20 LO 60 80 100 120 1LO 160 180 200 desired-signal fade r a t e (Doppler1,Hz

Fig. 11 Tracking performance (BER) against fade rate of desired signal in fast-fading well-conditioned scenario
Desired-signal SNR = 31.5 ± 0.3 dB; QPSK symbol rate = 30 ksps, 100 ks total
solid: FRQN; dashed: NLMS; dotted: RLS

4 Conclusions

The FRQN algorithm possesses convergence properties superior to those of the NLMS algorithm, at the price of a modest increase in computations. It is also more economical than the RLS algorithm. Although its tracking error is greater than that of the NLMS, its steady-state misadjustment is much smaller, so it outperforms the NLMS in terms of total MSE, even in fast-fading scenarios. The

Fig. 12 Tracking performance (BER) against fade rate of desired signal in fast-fading ill-conditioned scenario
Desired-signal SNR = 30.7 ± 0.2 dB; QPSK symbol rate = 30 ksps, 100 ks total
solid: FRQN; dashed: NLMS; dotted: RLS

FRQN algorithm ran stably for more than $10^5$ iterations in MATLAB, and is robust with respect to additive errors and arbitrary array and scenario configurations.

5 Acknowledgments

The author wishes to thank his co-supervisor, Professor D.D. Falconer, for his helpful suggestions and careful review of the manuscript, and his employers, Canadian Marconi Company and later Nortel Networks, for support of his doctoral studies.

6 References

[1] COMPTON, R.T.: 'Improved feedback loop for adaptive arrays', IEEE Trans. Aerosp. Electron. Syst., 1980, AES-16, (2), pp. 159-168
[2] WINTERS, J.H.: 'Signal acquisition and tracking with adaptive arrays in wireless systems'. Proceedings of the 1993 Vehicular Technology Conference (VTC'93), Secaucus, NJ, 1993
[3] WINTERS, J.H.: 'Spread-spectrum in a four-phase communication system employing adaptive antennas', IEEE Trans. Commun., 1982, COM-30, (5)
[4] SLOCK, D.T.M., and KAILATH, T.: 'Numerically stable fast transversal filters for recursive least squares filtering', IEEE Trans. Signal Process., 1991, 39, (1), pp. 92-114
[5] KLEMES, M.: 'A practical method of obtaining constant convergence rates in LMS adaptive arrays', IEEE Trans. Antennas Propag., 1986, AP-34, (3)
[6] HAYKIN, S.: 'Adaptive filter theory' (Prentice-Hall, Toronto, 1991, 2nd edn.)
[7] KLEMES, M.: 'Fast robust quasi-Newton adaptive algorithms for general array processing'. PhD thesis, Department of Electronics, Carleton University, Ottawa, 1996
[8] ROY, S., and SHYNK, J.J.: 'Analysis of the momentum LMS algorithm', IEEE Trans. Acoust. Speech Signal Process., 1990, 38, (12), pp. 2088-2098
