


SPARSE SPECTRAL-LINE ESTIMATION FOR NONUNIFORMLY SAMPLED MULTIVARIATE TIME SERIES: SPICE, LIKES AND MSBL

Prabhu Babu, Petre Stoica

Dept. of Information Technology, Uppsala University, Uppsala, SE 75105, Sweden.

ABSTRACT

In this paper we deal with the problem of spectral-line analysis of nonuniformly sampled multivariate time series, for which we introduce two methods: the first method, named SPICE (sparse iterative covariance based estimation), is based on a covariance fitting framework, whereas the second method, named LIKES (likelihood-based estimation of sparse parameters), is a maximum likelihood technique. Both methods yield sparse spectral estimates and they do not require the choice of any hyperparameters. We numerically compare the performance of SPICE and LIKES with that of the recently introduced method of multivariate sparse Bayesian learning (MSBL).

Index Terms: Spectral analysis, multivariate data, nonuniform sampling, covariance fitting, maximum likelihood, majorization-minimization, expectation maximization.

1. DATA MODEL AND PROBLEM FORMULATION

Let $\{\mathbf{y}(t) \triangleq [y_1(t), \cdots, y_M(t)]^T \in \mathbb{C}^{M\times 1}\}$ denote an $M$-variate time series with each of its components satisfying the following spectral-line model:

$$y_m(t_n) = \sum_{l=1}^{C_m} r_{l,m}\, e^{i\Omega_{l,m} t_n} + e_m(t_n), \quad n = 1,\dots,N, \quad m = 1,\dots,M \tag{1}$$

where $\{t_n\}_{n=1}^N$ denote the sampling times, which can be nonuniformly placed. For any $m$, $\{r_{l,m} \in \mathbb{C}\}_{l=1}^{C_m}$ denote the amplitudes of the $C_m$ components located at the frequencies $\{\Omega_{l,m} \in [0,\Omega_{\max}],\ \Omega_{\max} \in \mathbb{R}\}_{l=1}^{C_m}$, respectively, and $\{e_m(t_n)\}$ denotes the noise in the data. Given $\{y_m(t_n)\}_{m=1,n=1}^{M,N}$ we want to estimate, for $m = 1,\dots,M$, the number of components $C_m$ and their corresponding amplitudes $\{r_{l,m}\}_{l=1}^{C_m}$ and frequencies $\{\Omega_{l,m}\}_{l=1}^{C_m}$. Thus the problem is to detect the components in the model as well as to estimate their amplitudes and frequencies; see e.g. [1], [2] and [3] for motivation of this problem and some possible applications.

To tackle this problem, we divide the frequency interval $[0,\Omega_{\max}]$ into a set of uniformly spaced values with a spacing of $\Delta$ and form a fine grid $\{\omega_l\}_{l=1}^K$ such that the true frequencies lie on (or, practically, close to) the grid, i.e., $\{\Omega_{l,m}\}_{l=1,m=1}^{C_m,M} \subset \{\omega_l\}_{l=1}^K$. A typical choice of $\Delta$ is $\frac{2\pi}{10(t_N - t_1)}$ [1], [4]. Making use of the grid, the data model in (1) can be re-written as follows:

$$\mathbf{y}_m \triangleq \begin{bmatrix} y_m(t_1) \\ \vdots \\ y_m(t_N) \end{bmatrix} = \begin{bmatrix} e^{i\omega_1 t_1} & \cdots & e^{i\omega_K t_1} \\ \vdots & \ddots & \vdots \\ e^{i\omega_1 t_N} & \cdots & e^{i\omega_K t_N} \end{bmatrix} \begin{bmatrix} x_{1,m} \\ \vdots \\ x_{K,m} \end{bmatrix} + \begin{bmatrix} e_m(t_1) \\ \vdots \\ e_m(t_N) \end{bmatrix}, \quad m = 1,\dots,M \tag{2}$$

for some $\{x_{l,m}\}$ (see below). Let

$$\mathbf{Y} \triangleq \begin{bmatrix} \mathbf{y}_1 & \cdots & \mathbf{y}_M \end{bmatrix}, \quad \mathbf{X} \triangleq \begin{bmatrix} x_{1,1} & \cdots & x_{1,M} \\ \vdots & \ddots & \vdots \\ x_{K,1} & \cdots & x_{K,M} \end{bmatrix}$$

$$\mathbf{A} \triangleq \begin{bmatrix} \mathbf{a}_1 & \cdots & \mathbf{a}_K \end{bmatrix} = \begin{bmatrix} e^{i\omega_1 t_1} & \cdots & e^{i\omega_K t_1} \\ \vdots & \ddots & \vdots \\ e^{i\omega_1 t_N} & \cdots & e^{i\omega_K t_N} \end{bmatrix}, \quad \mathbf{E} \triangleq \begin{bmatrix} e_1(t_1) & \cdots & e_M(t_1) \\ \vdots & \ddots & \vdots \\ e_1(t_N) & \cdots & e_M(t_N) \end{bmatrix}. \tag{3}$$

We will refer to $\{\mathbf{y}_m\}$ as the data snapshots. By using the above matrix notation, (2) can be re-written as follows:

$$\mathbf{Y} = \mathbf{A}\mathbf{X} + \mathbf{E} = [\mathbf{A}\ \ \mathbf{I}] \begin{bmatrix} \mathbf{X} \\ \mathbf{E} \end{bmatrix} \triangleq \mathbf{B}\mathbf{Z} \tag{4}$$

where $\mathbf{I}$ denotes the identity matrix of dimension $N \times N$, and $\mathbf{B} \triangleq [\mathbf{b}_1, \dots, \mathbf{b}_{K+N}] = [\mathbf{A}\ \ \mathbf{I}]$. Usually, the number of components $\{C_m\}$ is much smaller than the grid dimension $K$, so that only a few values of $\{x_{l,m}\}$ are non-zero, i.e., for any $m$ there exists a set $\{\tilde{m}_l\}$ such that $\{x_{\tilde{m}_l,m} = r_{l,m}\}$ or, in other words, the matrix $\mathbf{X}$ is row-sparse. Thus, we have transformed the nonlinear detection and estimation problem associated with (1) into a linear sparse parameter estimation problem for (4).
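As a concrete illustration of the reformulation (2)-(4), the following NumPy sketch builds the dictionary $\mathbf{A}$ from nonuniform sampling times, forms $\mathbf{B} = [\mathbf{A}\ \mathbf{I}]$, and checks that $\mathbf{Y} = \mathbf{A}\mathbf{X} + \mathbf{E}$ can indeed be written as $\mathbf{B}\mathbf{Z}$. The dimensions, seed, amplitudes and noise level here are made-up small values for illustration only (the paper's experiments use $N = 100$, $M = 10$, $K = 1000$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed) dimensions, much smaller than in the paper.
N, M, K = 20, 3, 50
t = np.sort(rng.uniform(0.0, 10.0, N))      # nonuniform sampling times t_1..t_N
omega = np.linspace(0.0, 2 * np.pi, K)      # uniform frequency grid {omega_l}

# Dictionary of (2)-(3): A[n, l] = exp(i * omega_l * t_n); then B = [A  I].
A = np.exp(1j * np.outer(t, omega))         # N x K
B = np.hstack([A, np.eye(N)])               # N x (K+N)

# A row-sparse X: a couple of active grid frequencies per snapshot.
X = np.zeros((K, M), dtype=complex)
for m in range(M):
    active = rng.choice(K, size=2, replace=False)
    X[active, m] = 5.0
E = 0.1 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))

Y = A @ X + E                               # the linear model (4): Y = AX + E
Z = np.vstack([X, E])
assert np.allclose(Y, B @ Z)                # ... which equals B Z
```

The grid makes the unknown frequencies implicit in the column index of $\mathbf{A}$, so only the (sparse) coefficients in $\mathbf{Z}$ remain to be estimated.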

By assuming that the noise sequences in the data snapshots have zero means but possibly different variances, namely $\{\sigma_k\}_{k=1}^N$, and that they are uncorrelated with each other as well as with $\mathbf{X}$, the (normalized) covariance matrix of $\mathbf{Y}$ can be expressed as:

$$\mathbf{R} \triangleq E[\mathbf{Y}\mathbf{Y}^*]/M = \sum_{l=1}^K \frac{\sum_{m=1}^M E[|x_{l,m}|^2]}{M}\, \mathbf{a}_l \mathbf{a}_l^* + \mathrm{diag}(\sigma_1, \cdots, \sigma_N) \triangleq \mathbf{B}\mathbf{P}\mathbf{B}^* \tag{5}$$

where

$$\mathbf{P} = \mathrm{diag}\left( \frac{\sum_{m=1}^M E[|x_{1,m}|^2]}{M}, \cdots, \frac{\sum_{m=1}^M E[|x_{K,m}|^2]}{M}, \sigma_1, \cdots, \sigma_N \right) \triangleq \mathrm{diag}(p_1, p_2, \cdots, p_{K+N}) \tag{6}$$

where $E(\cdot)$ denotes the expectation operation. The data sample covariance matrix can be obtained as:

$$\hat{\mathbf{R}} = \frac{1}{M} \sum_{m=1}^M \mathbf{y}_m \mathbf{y}_m^* = \mathbf{Y}\mathbf{Y}^*/M. \tag{7}$$

The quantities $\{p_l\}_{l=1}^K$ in $\mathbf{R}$ represent the powers at the frequencies $\{\omega_l\}_{l=1}^K$, respectively. Although our primary interest is in the
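A small numerical check of the covariance structure in (5)-(7): the rank-one sum $\sum_l p_l \mathbf{a}_l \mathbf{a}_l^* + \mathrm{diag}(\sigma)$ and the factored form $\mathbf{B}\,\mathrm{diag}(\mathbf{p})\,\mathbf{B}^*$ are the same matrix. All dimensions, powers and variances below are made-up illustrative values:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 12, 4, 30                          # assumed small dimensions
t = np.sort(rng.uniform(0.0, 8.0, N))
A = np.exp(1j * np.outer(t, np.linspace(0.0, 2 * np.pi, K)))
B = np.hstack([A, np.eye(N)])                # B = [A  I]

# Sample covariance (7): R_hat = Y Y^* / M (Hermitian by construction).
Y = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))
R_hat = (Y @ Y.conj().T) / M

# Model covariance (5)-(6): signal powers p_1..p_K, noise variances sigma_1..sigma_N.
p_sig = rng.uniform(0.0, 1.0, K)
sigma = rng.uniform(0.1, 0.5, N)
p = np.concatenate([p_sig, sigma])

R_sum = sum(p_sig[l] * np.outer(A[:, l], A[:, l].conj()) for l in range(K)) \
        + np.diag(sigma)
R_fact = (B * p) @ B.conj().T                # B diag(p) B^* without forming diag(p)
assert np.allclose(R_sum, R_fact)            # the two forms in (5) agree
```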

20th European Signal Processing Conference (EUSIPCO 2012) Bucharest, Romania, August 27 - 31, 2012

© EURASIP, 2012 - ISSN 2076-1465 445


estimation of $\mathbf{X}$ from $\mathbf{Y}$, we will also estimate $\{p_l\}$. In fact, in the methods described in the following section we will start with the problem of estimating $\{p_l\}$, and an estimate of $\mathbf{X}$ will be obtained as a byproduct of the method. We would also like to point out that, apart from estimating the powers at different frequencies, we also estimate the noise variances by estimating the quantities $\{p_l\}_{l=K+1}^{K+N}$.

In Section 2, we introduce two new methods, namely SPICE and LIKES, for estimating $\{p_l\}$ and $\mathbf{X}$, briefly describe the multivariate-SBL (MSBL) algorithm, and duly refer to the previous relevant literature. In Section 3, we compare the statistical performance of SPICE, LIKES and MSBL, as well as their computational complexities and convergence properties, by means of numerical simulations.

2. SPICE, LIKES AND MSBL

2.1. SPICE

Given $\hat{\mathbf{R}}$, the $\{p_k\}$ in $\mathbf{R}$ can be estimated as the solutions to the following minimization problem:

$$\min_{\mathbf{p}} \left\| \mathbf{R}^{-1/2}(\mathbf{R} - \hat{\mathbf{R}}) \right\|^2 \tag{8}$$

where $\|\cdot\|$ denotes the Frobenius matrix norm, $\mathbf{R}^{-1/2}$ denotes a Hermitian square root of $\mathbf{R}^{-1}$, and $\mathbf{p} \triangleq [p_1, \cdots, p_{K+N}]^T$ with each $p_k \ge 0$. We have used this type of covariance fitting criterion in [4] (for the spectral analysis of univariate time series, i.e. $M = 1$) and, in a related form, in [5] (for spatial spectral analysis, which is essentially equivalent to multivariate time series analysis with $M \ge N$) to derive a sparse parameter estimation technique named SPICE (sparse iterative covariance based estimation). Here we extend SPICE to the multivariate case with $M \in (1, N)$. By substituting the expression for $\mathbf{R}$ in (8) and expanding the cost function we get the following equivalent formulation of the problem:

$$\min_{\mathbf{p}} \ \mathrm{tr}(\hat{\mathbf{R}}^* \mathbf{R}^{-1} \hat{\mathbf{R}}) + \sum_{k=1}^{K+N} w_k^2\, p_k \tag{9}$$

where $w_k = \|\mathbf{b}_k\|$ and $\mathrm{tr}(\cdot)$ denotes the matrix trace. The minimization problem in (9) is convex and has a unique global minimum. In fact, (9) can be cast as the following semi-definite program (SDP):

$$\min_{\mathbf{p}, \{\alpha_l\}} \ \sum_{l=1}^N \alpha_l + \sum_{k=1}^{K+N} w_k^2\, p_k \quad \text{s.t.} \quad \begin{bmatrix} \alpha_l & \mathbf{g}_l^* \\ \mathbf{g}_l & \mathbf{R} \end{bmatrix} \ge 0, \quad l = 1,\dots,N \tag{10}$$

where $\{\alpha_l\}$ are auxiliary variables and $[\mathbf{g}_1, \cdots, \mathbf{g}_N] = \hat{\mathbf{R}}$. However, solving the SDP in (10) can be quite time consuming: for instance, this SDP cannot be solved on a general purpose PC even for relatively modest dimensions (say $N = 100$, $M = 10$ and $K = 1000$). To tackle this computational issue, we follow [4] to derive an iterative algorithm for the problem in (9).

Consider the following augmented problem:

$$\min_{\mathbf{p}, \mathbf{Q}} \ \mathrm{tr}(\mathbf{Q}^* \mathbf{P}^{-1} \mathbf{Q}) + \sum_{k=1}^{K+N} w_k^2\, p_k \quad \text{s.t.} \quad \mathbf{B}\mathbf{Q} = \hat{\mathbf{R}}. \tag{11}$$

Minimization over $\mathbf{Q}$ (for fixed $\mathbf{p}$) is straightforward: the solution is given by $\mathbf{Q}_0 = \mathbf{P}\mathbf{B}^*\mathbf{R}^{-1}\hat{\mathbf{R}}$ (see, e.g., [1], Appendix A, Result R35 and [5], Section III, page 632). It is easy to verify that substituting $\mathbf{Q}_0$ back into the cost function in (11) yields the original problem in (9). Hence the $\mathbf{p}$'s obtained from (9) and (11) must be identical.

The minimization over $\mathbf{p}$ (for a given $\mathbf{Q}$) can also be done analytically as follows. Using $\mathbf{Q} = [\boldsymbol{\beta}_1, \cdots, \boldsymbol{\beta}_{K+N}]^*$, the optimization problem in (11) (for fixed $\{\boldsymbol{\beta}_k\}$) can be reduced to

$$\min_{\mathbf{p}} \ \sum_{k=1}^{K+N} \frac{\|\boldsymbol{\beta}_k\|^2}{p_k} + \sum_{k=1}^{K+N} w_k^2\, p_k. \tag{12}$$

A simple calculation shows that

$$\sum_{k=1}^{K+N} \left( \frac{\|\boldsymbol{\beta}_k\|}{\sqrt{p_k}} - w_k \sqrt{p_k} \right)^2 \ge 0 \iff \sum_{k=1}^{K+N} \left( \frac{\|\boldsymbol{\beta}_k\|^2}{p_k} + w_k^2\, p_k - 2 w_k \|\boldsymbol{\beta}_k\| \right) \ge 0 \iff \sum_{k=1}^{K+N} \left( \frac{\|\boldsymbol{\beta}_k\|^2}{p_k} + w_k^2\, p_k \right) \ge \sum_{k=1}^{K+N} 2 w_k \|\boldsymbol{\beta}_k\|. \tag{13}$$

The left hand side in the above inequality is nothing but the cost function in (12), and the equality holds only when $p_k = \|\boldsymbol{\beta}_k\|/w_k$. Thus the minimizer of (12) is

$$p_k = \frac{\|\boldsymbol{\beta}_k\|}{w_k}, \quad k = 1,\dots,K+N. \tag{14}$$

Since the cost function in (11) is convex in both $\mathbf{Q}$ and $\mathbf{p}$, the cyclic minimization over $\mathbf{Q}$ and $\mathbf{p}$, i.e. minimization over $\mathbf{Q}$ (for fixed $\mathbf{p}$) and vice-versa, starting from any arbitrary initial point, will lead to the global minimum of (11). The $(i+1)$-th iteration of the so-obtained cyclic algorithm consists of the following steps:

$$\begin{aligned} \mathbf{Q}^{i+1} &= \mathbf{P}^i \mathbf{B}^* \mathbf{R}^{-1}(i)\, \hat{\mathbf{R}} \\ p_k^{i+1} &= \frac{\|\boldsymbol{\beta}_k^{i+1}\|}{w_k}, \quad k = 1,\dots,K+N \\ \mathbf{R}(i+1) &= \mathbf{B}\mathbf{P}^{i+1}\mathbf{B}^*. \end{aligned} \tag{15}$$

(For initialization of (15), as well as of the other yet-to-be-derived algorithms, see Section 3.) An estimate of $\mathbf{Z}$, and hence of $\mathbf{X}$, can then be obtained as follows:

$$\hat{\mathbf{Z}} = \mathbf{Q}_c\, \mathbf{Y} (\mathbf{Y}^*\mathbf{Y}/M)^{-1}, \qquad \hat{\mathbf{X}} = \text{the first } K \text{ rows of } \hat{\mathbf{Z}} \tag{16}$$

where $\mathbf{Q}_c$ denotes the value of $\mathbf{Q}$ obtained at the convergence of (15).
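The cyclic SPICE iteration above can be sketched in a few lines of NumPy. This is a sketch under assumptions, not the authors' code: the function name `spice`, the test dimensions, and the use of the periodogram initialization $p_k^0 = \mathbf{b}_k^* \hat{\mathbf{R}} \mathbf{b}_k / N$ and relative-change stopping rule (both taken from Section 3) are our choices:

```python
import numpy as np

def spice(Y, B, n_iter=500, tol=1e-3):
    """Sketch of the cyclic SPICE iteration (15)-(16).

    Y : N x M data snapshots; B = [A I] : N x (K+N) dictionary.
    Returns power estimates p (length K+N) and X-hat (first K rows of Z-hat).
    """
    N, M = Y.shape
    R_hat = (Y @ Y.conj().T) / M                       # sample covariance (7)
    w = np.linalg.norm(B, axis=0)                      # weights w_k = ||b_k||
    # Periodogram initialization p_k^0 = b_k^* R_hat b_k / N (Section 3).
    p = np.real(np.einsum('nk,nj,jk->k', B.conj(), R_hat, B)) / N
    for _ in range(n_iter):
        R = (B * p) @ B.conj().T                       # R = B diag(p) B^*
        # Q^{i+1} = P B^* R^{-1} R_hat; rows of Q are beta_k^*.
        Q = (p[:, None] * B.conj().T) @ np.linalg.solve(R, R_hat)
        p_new = np.linalg.norm(Q, axis=1) / w          # p_k = ||beta_k|| / w_k, (14)
        if np.linalg.norm(p_new - p) < tol * np.linalg.norm(p):
            p = p_new
            break
        p = p_new
    Z = Q @ Y @ np.linalg.inv(Y.conj().T @ Y / M)      # Z-hat of (16)
    K = B.shape[1] - N
    return p, Z[:K]
```

On a small simulated example with one strong on-grid sinusoid, the largest entry of `p[:K]` lands at the true grid frequency.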

2.2. LIKES

LIKES, which stands for likelihood-based estimation of sparse parameters, estimates $\mathbf{p}$ by minimizing the Gaussian negative log-likelihood (NLL) function:

$$f(\mathbf{p}) = \mathrm{tr}(\mathbf{R}^{-1}\hat{\mathbf{R}}) + \ln|\mathbf{R}| = \frac{1}{M}\,\mathrm{tr}(\mathbf{Y}^*\mathbf{R}^{-1}\mathbf{Y}) + \ln|\mathbf{R}|. \tag{17}$$

The minimization problem in (17) is non-convex. In fact, it can be shown that the two terms in $f(\mathbf{p})$ are convex and concave in $\mathbf{p}$, respectively. In [6] we have derived a LIKES iterative algorithm based on a majorization-minimization technique that minimizes the above cost function in the univariate case. Here, following a similar approach to that in [6], we extend the LIKES algorithm to the multivariate case.



Since the second term in $f(\mathbf{p})$, viz. $\ln|\mathbf{R}|$, is a concave function of $\mathbf{p}$, it can be majorized by its tangent plane at any point. Let $\tilde{\mathbf{p}}$ be an arbitrary point in the parameter space, and let $\tilde{\mathbf{R}}$ denote the corresponding covariance matrix; then:

$$\ln|\mathbf{R}| \le \ln|\tilde{\mathbf{R}}| + \sum_{k=1}^{K+N} \mathrm{tr}\left( \tilde{\mathbf{R}}^{-1} \mathbf{b}_k \mathbf{b}_k^* \right)(p_k - \tilde{p}_k) = \ln|\tilde{\mathbf{R}}| - N + \sum_{k=1}^{K+N} \tilde{w}_k^2\, p_k \tag{18}$$

where

$$\tilde{w}_k^2 = \mathbf{b}_k^* \tilde{\mathbf{R}}^{-1} \mathbf{b}_k. \tag{19}$$

It follows from (18) that:

$$f(\mathbf{p}) \le \left( \ln|\tilde{\mathbf{R}}| - N \right) + \frac{1}{M}\,\mathrm{tr}(\mathbf{Y}^*\mathbf{R}^{-1}\mathbf{Y}) + \sum_{k=1}^{K+N} \tilde{w}_k^2\, p_k \triangleq g(\mathbf{p}) \tag{20}$$

for any vectors $\mathbf{p}$ and $\tilde{\mathbf{p}}$. Note also that

$$f(\tilde{\mathbf{p}}) = g(\tilde{\mathbf{p}}). \tag{21}$$

The main implication of (20) and (21) is that we can decrease the function $f(\mathbf{p})$ from $f(\tilde{\mathbf{p}})$ to, let us say, $f(\hat{\mathbf{p}})$ by choosing $\hat{\mathbf{p}}$ as a minimizer of the majorizing function $g(\mathbf{p})$, or at least such that $g(\tilde{\mathbf{p}}) > g(\hat{\mathbf{p}})$:

$$f(\hat{\mathbf{p}}) \le g(\hat{\mathbf{p}}) < g(\tilde{\mathbf{p}}) = f(\tilde{\mathbf{p}}). \tag{22}$$

This is precisely the basic idea behind the majorization-minimization approach to solving the problem in (17); see, e.g., [7] and the references therein. The usefulness of this approach depends on how much easier the minimization (or the decrease) of $g(\mathbf{p})$ is compared to minimizing $f(\mathbf{p})$ directly. In the present case, minimizing $g(\mathbf{p})$ is much easier than minimizing $f(\mathbf{p})$ because $g(\mathbf{p})$ is (to within an additive constant) a SPICE-like convex criterion function (compare it with (9) after replacing $\hat{\mathbf{R}}$ by $\mathbf{Y}/\sqrt{M}$). Consequently, the approach used in the previous subsection can be adopted verbatim to find a vector $\hat{\mathbf{p}}$ with the above property, for any given $\tilde{\mathbf{p}}$.

Following the said approach, we consider the augmented optimization problem:

$$\min_{\mathbf{p}, \mathbf{Q}} \ \frac{1}{M}\,\mathrm{tr}(\mathbf{Q}^* \mathbf{P}^{-1} \mathbf{Q}) + \sum_{k=1}^{K+N} \tilde{w}_k^2\, p_k \quad \text{s.t.} \quad \mathbf{B}\mathbf{Q} = \mathbf{Y}. \tag{23}$$

The minimizer $\mathbf{Q}$ (for fixed $\mathbf{p}$) is given by $\mathbf{Q}_0 = \mathbf{P}\mathbf{B}^*\mathbf{R}^{-1}\mathbf{Y}$, and substituting $\mathbf{Q}_0$ into (23) yields the cost function in (20) (to within an additive constant). By using $\mathbf{Q} = [\boldsymbol{\beta}_1, \cdots, \boldsymbol{\beta}_{K+N}]^*$ and a calculation similar to (12)-(13), the minimizer $\mathbf{p}$ (for fixed $\mathbf{Q}$) can also be derived. The resulting LIKES algorithm comprises an inner loop to minimize (or to decrease) $g(\mathbf{p})$ and an outer loop to recompute the weights $\{\tilde{w}_k\}$, and can be summarized as follows:

The inner loop (at iteration $i+1$):

$$\begin{aligned} \mathbf{Q}^{i+1} &= \mathbf{P}^i \mathbf{B}^* \mathbf{R}^{-1}(i)\, \mathbf{Y} \\ p_k^{i+1} &= \frac{\|\boldsymbol{\beta}_k^{i+1}\|}{\sqrt{M}\, \tilde{w}_k}, \quad k = 1,\dots,K+N \\ \mathbf{R}(i+1) &= \mathbf{B}\mathbf{P}^{i+1}\mathbf{B}^* \end{aligned} \tag{24}$$

The outer loop: letting $\mathbf{R}_c$ ($p_k^c$) denote the $\mathbf{R}$ ($p_k$) obtained at convergence (or after a pre-specified number of iterations) of the inner loop, compute

$$\tilde{w}_k = \sqrt{\mathbf{b}_k^* \mathbf{R}_c^{-1} \mathbf{b}_k}$$

and then go to the inner loop, which will be initialized with $\{p_k^c\}$.

Final step: the estimates of $\mathbf{Z}$ and $\mathbf{X}$ are obtained as $\hat{\mathbf{Z}} = \mathbf{Q}_c$ and $\hat{\mathbf{X}} = $ the first $K$ rows of $\hat{\mathbf{Z}}$, where $\mathbf{Q}_c$ denotes the value of $\mathbf{Q}$ obtained at convergence of the outer loop.

2.3. MSBL

Besides the majorization-minimization technique, the NLL function in (17) can also be minimized by an expectation-maximization (EM) algorithm. Such an approach, named sparse Bayesian learning (SBL), has been suggested in [8] for the univariate case, and in [9] for the multivariate case, where it was called MSBL. For conciseness we present only the main steps of the MSBL algorithm:

Iterative step:

$$\begin{aligned} \mathbf{Q}^{i+1} &= \mathbf{P}^i \mathbf{B}^* \mathbf{R}^{-1}(i)\, \mathbf{Y} \\ p_k^{i+1} &= p_k^i - (p_k^i)^2\, \mathbf{b}_k^* \mathbf{R}^{-1}(i)\, \mathbf{b}_k + \|\boldsymbol{\beta}_k^{i+1}\|^2 / M, \quad k = 1,\dots,K+N \\ \mathbf{R}(i+1) &= \mathbf{B}\mathbf{P}^{i+1}\mathbf{B}^*. \end{aligned} \tag{25}$$

Final step: the estimates of $\mathbf{Z}$ and $\mathbf{X}$ are obtained as $\hat{\mathbf{Z}} = \mathbf{Q}_c$ and $\hat{\mathbf{X}} = $ the first $K$ rows of $\hat{\mathbf{Z}}$, where $\mathbf{Q}_c$ denotes the value of $\mathbf{Q}$ obtained at the convergence.

3. NUMERICAL SIMULATIONS AND CONCLUDING REMARKS

In this section we numerically compare the performance of SPICE, LIKES and MSBL. The data were generated via the model in (2) with $N = 100$, $M = 10$ and $K = 1000$. The sampling times $\{t_n\}$ were uniformly randomly distributed in the interval $[0, 20]$ sec. The value of $\Omega_{\max}$ was chosen to be $10\pi$ rad/sec. In each of the 10 data snapshots, 5 sinusoidal components were present. Each snapshot shares three common frequencies with its neighboring snapshots. Table 1 shows the values of the frequencies with non-zero amplitudes in the different snapshots. The amplitudes of all existing sinusoidal components were chosen as 5. The noise was Gaussian distributed with zero mean and variance equal to $\sigma$. The signal to



noise ratio (SNR) is defined by $10\log(25/\sigma)$. The three methods were initialized with the periodogram estimate $p_k^0 = \mathbf{b}_k^* \hat{\mathbf{R}} \mathbf{b}_k / N$. For all methods, the convergence criterion used to terminate the iterations was $\frac{\|\mathbf{p}^{i+1} - \mathbf{p}^i\|}{\|\mathbf{p}^i\|} < 10^{-3}$. In the case of LIKES, this convergence criterion has been used in the outer loop, while the inner loop has been run for 10 iterations.

3.1. Statistical performance

Figure 1 shows 100 superimposed plots of the amplitude spectra corresponding to the (randomly picked) 7-th data snapshot obtained with SPICE, LIKES and MSBL. The spectra obtained with all methods are sparse and they correctly indicate the presence of the sinusoidal components in most of the Monte-Carlo runs; furthermore, the likelihood based approaches provide more accurate amplitude estimates than SPICE. Figure 2 shows the plots of the average mean square error (AMSE) of the amplitude estimates at the true frequency locations, as well as the probability of detection of the true frequencies, vs SNR. The AMSE of the amplitude estimates was calculated as:

$$\mathrm{AMSE} = \frac{1}{5000} \sum_{k=1}^{100} \sum_{m=1}^{10} \sum_{l=1}^{5} \left| x^k_{\tilde{m}_l,m} - 5 \right|^2 \tag{26}$$

where $\{\tilde{m}_l\}_{l=1}^5$ denote the frequency indices corresponding to the 5 sinusoidal components present in the $m$-th snapshot, and the superscript $k$ in $x^k_{\tilde{m}_l,m}$ denotes the Monte-Carlo run. The probability of detection was computed by first picking the 5 dominant peaks of the estimated spectrum, sorting the frequencies corresponding to those peaks, and then calculating the mean absolute error of those frequency estimates: if the mean absolute error is equal to zero, then we have a detection; else we declare a miss. It can be observed from the plots in Figure 2 that SPICE is less accurate than LIKES and MSBL for both amplitude and frequency estimation, and that LIKES is slightly better than MSBL in the case of amplitude estimation.

3.2. Complexity and convergence rate

The computational complexity per iteration (in flops) of the considered methods is on the order of $O(2N^3 + 2N^2M + 2KMN + KN^2 + KM + NM)$, with MSBL requiring an additional $KN^2 + N^3$ flops to compute the terms in the power update formula, see (25). However, the convergence rates of the three methods differ quite a bit from one another, which leads to rather different execution times. Figure 3a shows the average computation times (in sec) per run vs SNR; it is seen from this figure that the times decrease with increasing SNR, which is primarily due to the fact that the methods converge faster as the SNR increases. For a fixed SNR, SPICE is faster than LIKES, which is faster than MSBL.

As both LIKES and MSBL minimize the same NLL criterion, it is interesting to compare their convergence rates and the values of NLL they attain at convergence. Figure 3b shows the NLL value vs the iteration number for LIKES and MSBL in one Monte-Carlo run; it can be seen from this plot that LIKES converges faster than MSBL and also that the value of NLL that it attains at convergence is slightly lower than the NLL value attained by MSBL.

3.3. Concluding remarks

To conclude, based on our recent work we have introduced two methods to solve the problem of spectral-line analysis for nonuniformly sampled multivariate time series. Both methods yield sparse

Frequencies $\times\pi$ (rad/sec) per snapshot:

Snapshot:  1     2     3     4     5     6     7     8     9     10
           0.05  0.79  0.85  1.24  1.45  1.53  1.66  1.84  2.40  2.63
           0.76  0.84  1.07  1.37  1.46  1.63  1.82  2.29  2.60  2.64
           0.79  0.85  1.24  1.45  1.53  1.66  1.84  2.40  2.63  3.12
           0.84  1.07  1.37  1.46  1.63  1.82  2.29  2.60  2.64  3.51
           0.85  1.24  1.45  1.53  1.66  1.84  2.40  2.63  3.12  4.00

Table 1. The frequencies of the sinusoidal components in the different data snapshots.

parameter estimates and they do not require any selection of hyperparameters. We have also considered the previously proposed method of MSBL, which minimizes the same NLL function as LIKES. We compared these three methods via numerical simulations and observed that LIKES and MSBL are more accurate than SPICE, but at the cost of extra computation time. Regarding LIKES and MSBL, we showed that LIKES converges faster than MSBL and also that the NLL value that LIKES attains at convergence can be slightly smaller than the NLL value attained by MSBL.

Acknowledgments

This work was partly supported by the Swedish Science Council(VR) and the European Research Council (ERC).

4. REFERENCES

[1] P. Stoica and R. Moses, Spectral Analysis of Signals. Prentice Hall, Upper Saddle River, NJ, 2005.

[2] N. Butt and A. Jakobsson, "Coherence spectrum estimation from nonuniformly sampled sequences," IEEE Signal Processing Letters, vol. 17, no. 4, pp. 339–342, 2010.

[3] A. Jakobsson, S. Alty, and J. Benesty, "Estimating and time-updating the 2-D coherence spectrum," IEEE Transactions on Signal Processing, vol. 55, no. 5, pp. 2350–2354, 2007.

[4] P. Stoica, P. Babu, and J. Li, "New method of sparse parameter estimation in separable models and its use for spectral analysis of irregularly sampled data," IEEE Transactions on Signal Processing, vol. 59, no. 1, pp. 35–47, 2011.

[5] ——, "SPICE: A sparse covariance based estimation method for array processing," IEEE Transactions on Signal Processing, vol. 59, no. 2, pp. 629–638, 2011.

[6] P. Stoica and P. Babu, "SPICE and LIKES: Two hyperparameter-free methods for sparse-parameter estimation," Signal Processing, vol. 92, no. 7, pp. 1580–1590, 2012.

[7] P. Stoica and Y. Selen, "Cyclic minimizers, majorization techniques, and the expectation-maximization algorithm: a refresher," IEEE Signal Processing Magazine, vol. 21, no. 1, pp. 112–114, 2004.

[8] D. Wipf and B. Rao, "Sparse Bayesian learning for basis selection," IEEE Transactions on Signal Processing, vol. 52, no. 8, pp. 2153–2164, 2004.

[9] ——, "An empirical Bayesian strategy for solving the simultaneous sparse approximation problem," IEEE Transactions on Signal Processing, vol. 55, no. 7, pp. 3704–3716, 2007.



Fig. 1. One hundred superimposed plots of the amplitude spectra corresponding to the 7-th data snapshot estimated via a) SPICE, b) LIKES and c) MSBL. The circles indicate the locations and amplitudes of the true components in the data. The SNR was 40 dB. The peaks at the closely-spaced frequencies of $1.82\pi$ and $1.84\pi$ appear almost merged but are in actuality distinct; the zoom-in plots show the spectrum in the interval $[1.8, 1.86] \times \pi$ rad/sec to confirm this fact. (Axes: Frequency $\times\pi$ (rad/sec), 0–10; Amplitude, 0–6.)

Fig. 2. a) AMSE of the amplitude estimates at the true frequency locations vs SNR; b) probability of detection of the true frequencies vs SNR. The number of Monte-Carlo runs was 100. (Curves: SPICE, LIKES, MSBL; SNR axis 0–40 dB; AMSE on a log scale from $10^{-3}$ to $10^2$; detection probability from 0 to 1.)

Fig. 3. a) Average computation times (sec) of SPICE, LIKES and MSBL vs SNR; b) NLL values of LIKES and MSBL vs iteration number (o: LIKES, +: MSBL). (Axes: a) SNR 0–40 dB, time 0–14 sec; b) iteration number 0–500, NLL from $-100$ to 200.)
