
Concentration of Measure for Block Diagonal Matrices

with Applications to Compressive Sensing

Jae Young Park, Han Lun Yap, Christopher J. Rozell, and Michael B. Wakin

October 2010

Abstract

Theoretical analysis of randomized, compressive operators often depends on a concentration of measure inequality for the operator in question. Typically, such inequalities quantify the likelihood that a random matrix will preserve the norm of a signal after multiplication. When this likelihood is very high for any signal, the random matrices have a variety of known uses in dimensionality reduction and Compressive Sensing. Concentration of measure results are well-established for unstructured compressive matrices, populated with independent and identically distributed (i.i.d.) random entries. Many real-world acquisition systems, however, are subject to architectural constraints that make such matrices impractical. In this paper we derive concentration of measure bounds for two types of block diagonal compressive matrices, one in which the blocks along the main diagonal are random and independent, and one in which the blocks are random but equal. For both types of matrices, we show that the likelihood of norm preservation depends on certain properties of the signal being measured, but that for the best case signals, both types of block diagonal matrices can offer concentration performance on par with their unstructured, i.i.d. counterparts. We support our theoretical results with illustrative simulations as well as (analytical and empirical) investigations of several signal classes that are highly amenable to measurement using block diagonal matrices. Finally, we discuss applications of these results in establishing performance guarantees for solving signal processing tasks in the compressed domain (e.g., signal detection), and in establishing the Restricted Isometry Property for the Toeplitz matrices that arise in compressive channel sensing.

Index Terms

Concentration of measure phenomenon, Block diagonal matrices, Compressive Sensing, Restricted Isometry Property, Toeplitz matrices

I. INTRODUCTION

Recent technological advances have enabled the sensing and storage of massive volumes of data from a dizzying array of sources. While access to such data has revolutionized fields such as signal processing, the limits of some computing and storage resources are being tested, and front-end signal acquisition devices are not always able to support the desire to measure in increasingly finer detail. To confront these challenges, many signal processing researchers have begun investigating compressive linear operators $\Phi: \mathbb{R}^N \to \mathbb{R}^M$ for high resolution signals $x \in \mathbb{R}^N$ ($M < N$), either as a method for simple dimensionality reduction or as a model for novel data acquisition devices. In settings such as these, the data $x$ is often thought to belong to some concise model class; for example, in the field of Compressive Sensing (CS) [3, 4], one assumes that $x$ has

JYP and HLY contributed equally to this work. JYP is with the Department of Electrical Engineering and Computer Science at the University of Michigan. HLY and CJR are with the School of Electrical and Computer Engineering at the Georgia Institute of Technology. MBW is with the Division of Engineering at the Colorado School of Mines. Preliminary versions of portions of this work appeared in [1] and [2]. This work was partially supported by NSF grants CCF-0830456 and CCF-0830320, DARPA grant HR0011-08-1-0078, and AFOSR grant FA9550-09-1-0465. The authors are grateful to Tyrone Vincent, Borhan Sanandaji, Armin Eftekhari, and Justin Romberg for valuable discussions about this work.


a sparse representation (having few nonzero coefficients) in the time domain or in some transform basis. If the number of "measurements" $M$ is sufficient relative to the complexity of the model, the signal $x$ can be recovered from $\Phi x$ by solving an inverse problem (often cast as a convex optimization program [3, 4]). Because of their universality and amenability to analysis, randomized compressive linear operators (i.e., random matrices with $M < N$) have drawn particular interest.

The theoretical analysis of random matrices often relies on the general notions that these matrices are well-behaved most of the time, and that we can bound the probability with which they perform poorly. Frequently, these notions are formalized using some form of the concentration of measure phenomenon [5], a powerful characterization of the tendency of certain functions of high-dimensional random processes to concentrate sharply around their mean. As one important example of this phenomenon, it is known that for any fixed signal $x \in \mathbb{R}^N$, if $\Phi$ is an $M \times N$ matrix populated with independent and identically distributed (i.i.d.) random entries drawn from a suitable distribution, then with high probability $\Phi$ will approximately preserve the norm of $x$. More precisely, for many random distributions for $\Phi$, the probability that $\left|\|\Phi x\|_2^2 - \|x\|_2^2\right|$ will exceed a small fraction of $\|x\|_2^2$ decays exponentially in the number of measurements $M$ (see Section II-B for additional details).

This likely preservation of signal norms makes random matrices remarkably useful. For example, the above mentioned concentration result has been used to prove the Johnson-Lindenstrauss (JL) lemma [6–8], which states that when applied to a finite set of points $Q \subset \mathbb{R}^N$, a randomized compressive operator $\Phi$ can provide a stable, distance preserving embedding of $Q$ in the measurement space $\mathbb{R}^M$. This enables the efficient solution of problems such as finding the nearest neighbor to a point $x$ in a database $Q$ by permitting these problems to be solved in the low-dimensional observation space. The same concentration result has also been used to prove that certain families of random matrices can satisfy the Restricted Isometry Property (RIP) [9–11], which concerns the stable, distance preserving embedding of families of sparse signals. In the field of CS, the RIP is commonly used as a sufficient condition to guarantee that a sparse signal $x$ can be recovered from the measurements $\Phi x$. In Section II, after providing a brief introduction to some preliminary ideas and notation, we survey related concentration analysis results and applications in the literature.

Despite the utility of norm preservation in dimensionality reduction, concentration analysis to date has focused almost exclusively on dense matrices that require each measurement to be a weighted linear combination of all entries of $x$. Dense random matrices with i.i.d. entries are often either impractical because of the resources required to store and work with a large unstructured matrix, or unrealistic as models of acquisition devices with architectural constraints preventing such global data aggregation. For one example, in a distributed sensing system, communication constraints may limit the dependence of each measurement to only a subset of the data. For a second example, applications involving streaming signals [12, 13] often have data rates that necessitate operating on local signal blocks rather than the entire signal simultaneously. For a third example, recent work has shown the utility of measuring $x$ by convolving it with a random pulse and downsampling. These convolution measurement systems lead to computationally efficient designs and have been shown to work almost as well as dense randomized operators [14–22]. This paradigm is particularly relevant for compressive channel sensing [16–18], where a sparse wireless channel is estimated from its convolution with a random probe signal.

The common thread in each example above is that the data is divided naturally into discrete subsections (or blocks), and each block is acquired via a local measurement operator.¹ To see the implications of this, let us model a signal $x \in \mathbb{R}^{NJ}$ as being partitioned into $J$ blocks $x_1, x_2, \dots, x_J \in \mathbb{R}^N$, and for each $j \in \{1, 2, \dots, J\}$, suppose that a local measurement operator $\Phi_j: \mathbb{R}^N \to \mathbb{R}^{M_j}$ collects the measurements $y_j = \Phi_j x_j$. Concatenating all of the measurements into a vector $y \in \mathbb{R}^{\sum_j M_j}$, we then have

$$\underbrace{\begin{bmatrix} y_1 \\ y_2 \\ \vdots \\ y_J \end{bmatrix}}_{y:\ (\sum_j M_j) \times 1} = \underbrace{\begin{bmatrix} \Phi_1 & & & \\ & \Phi_2 & & \\ & & \ddots & \\ & & & \Phi_J \end{bmatrix}}_{\Phi:\ (\sum_j M_j) \times NJ} \underbrace{\begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_J \end{bmatrix}}_{x:\ NJ \times 1}. \qquad (1)$$

In cases such as these, we see that the overall measurement operator $\Phi$ will have a characteristic block diagonal structure. In some scenarios, the local measurement operator $\Phi_j$ may be unique for each block, and we say that the resulting $\Phi$ has a Distinct Block Diagonal (DBD) structure. In other scenarios it may be appropriate or necessary to repeat a single operator across all blocks (such that $\Phi_1 = \Phi_2 = \cdots = \Phi_J$); we call the resulting $\Phi$ a Repeated Block Diagonal (RBD) matrix.
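To make this model concrete, the following minimal sketch (ours, not the authors' code; the dimensions and the SciPy-based construction are illustrative assumptions) assembles a DBD and an RBD matrix from Gaussian blocks and applies the DBD matrix as in (1):

```python
# Sketch of the block diagonal measurement model in (1), assuming illustrative sizes.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
N, J = 64, 16                                # block length and number of blocks
M = np.full(J, 4)                            # measurements per block, M_j

# DBD: distinct i.i.d. Gaussian blocks with entry variance 1/M_j
blocks = [rng.normal(0, 1 / np.sqrt(M[j]), size=(M[j], N)) for j in range(J)]
Phi_dbd = block_diag(*blocks)                # (sum_j M_j) x NJ

# RBD: a single random block repeated along the main diagonal
Phi0 = rng.normal(0, 1 / np.sqrt(M[0]), size=(M[0], N))
Phi_rbd = block_diag(*([Phi0] * J))

x = rng.normal(size=N * J)
y = Phi_dbd @ x                              # stacks the local outputs y_j = Phi_j x_j
print(np.linalg.norm(y)**2 / np.linalg.norm(x)**2)  # should concentrate near 1
```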

In Section III of this paper, we derive concentration of measure bounds both for DBD matrices populated with i.i.d. subgaussian² random variables and for RBD matrices populated with i.i.d. Gaussian random variables. Our main results essentially state that the probability of concentration depends on the "diversity" of the component signals $x_1, x_2, \dots, x_J$, where this notion of signal diversity depends on whether the matrix is DBD or RBD (we make this precise in Section III). Such nonuniform concentration behavior is markedly unlike that of dense matrices, for which concentration probabilities are signal agnostic. At one extreme, for the most favorable classes of component signals, the concentration of measure probability for block diagonal matrices scales exactly as for a fully dense random matrix. In other words, the concentration probability decays exponentially with the total number of measurements, which in this case equals $\sum_j M_j$. At the other extreme, when the signal characteristics are less favorable, the measurement operator effectiveness is diminished, possibly to the point that it is no more effective than measuring a single block. As we discuss analytically and support experimentally, our measures of diversity have clear intuitive interpretations and intimately capture the relevant phenomena dictating whether a signal $x$ is well matched to the block diagonal structure of $\Phi$. In Section IV we further discuss potential signal classes that are well-behaved for DBD and RBD matrices, and finally in Section V we discuss several possible applications of our results to tasks such as detection in the compressed space and proving RIP guarantees for the Toeplitz measurement matrices that arise in channel sensing.

II. BACKGROUND AND RELATED WORK

In this section, we begin with a brief overview of subgaussian random variables and a survey of some of their key properties that will be of relevance in our analysis. We then describe more formally several existing concentration of measure results for random matrices, and we conclude by reviewing some standard applications of these results in the literature.

¹See Section V-B1 for details on how the channel sensing problem fits into this viewpoint.
²Subgaussian random variables [23] are precisely defined in Section II-A, and can be thought of as random variables from a distribution with tails that can be bounded by a Gaussian.


A. Subgaussian Random Variables

In fields such as CS, the Gaussian distribution is often invoked for probabilistic analysis thanks to its many convenient properties. Gaussian random variables, however, are just one special case in a much broader class of so-called subgaussian distributions [23]. As we discuss below, subgaussian random variables actually retain some of the same properties that make Gaussians particularly well suited to concentration of measure analysis in CS. Thus, for the sake of generality, we will state our main results in terms of subgaussian random variables where possible. Before proceeding, however, we present a brief definition of subgaussian random variables and a short overview of their key properties.

Definition II.1. [23] A random variable $w$ is subgaussian if there exists $a \ge 0$ such that

$$\mathbb{E}\, e^{tw} \le \exp\left(\tfrac{1}{2} a^2 t^2\right) \quad \text{for all } t \in \mathbb{R}.$$

The quantity

$$\tau(w) := \inf\left\{ a \ge 0 : \mathbb{E}\, e^{tw} \le \exp\left(\tfrac{1}{2} a^2 t^2\right) \text{ for all } t \in \mathbb{R} \right\}$$

is known as the Gaussian standard of $w$.

From this definition and Jensen's inequality, it follows that subgaussian random variables must always be centered, i.e., $\mathbb{E}w = 0$, and with some simple functional analysis one can also see that the variance $\mathbb{E}w^2 \le \tau^2(w)$. The class of subgaussian random variables that achieve this bound with equality (i.e., for which $\mathbb{E}w^2 = \tau^2(w)$) are known as strictly subgaussian [23]. Examples of strictly subgaussian random variables include zero-mean Gaussian random variables, $\pm 1$ Bernoulli random variables ($p = \frac{1}{2}$), and uniform random variables on $[-1, 1]$.

As with Gaussian random variables, linear combinations of i.i.d. subgaussian random variables are also subgaussian. This fact will be useful to us when studying the matrix-vector products that arise when a compressive linear operator is applied to a signal. We provide a more formal statement in the following lemma.

Lemma II.1. [23, Theorem 1 and Lemma 3] Let $\beta \in \mathbb{R}^Z$ be a fixed vector, and suppose $w(1), w(2), \dots, w(Z)$ are a collection of i.i.d. subgaussian random variables with Gaussian standards all equal to $\tau(w)$. Then the quantity $v := \sum_{i=1}^Z \beta(i) w(i)$ is a subgaussian random variable with Gaussian standard $\tau(v) \le \tau(w)\|\beta\|_2$.

Finally, we conclude with some important concentration results involving subgaussian random variables. The first result gives a standard bound on the tail distribution of a subgaussian random variable.

Lemma II.2. [23, Lemma 4] Suppose that $w$ is a subgaussian random variable with Gaussian standard $\tau(w)$. Then

$$P\left(|w|^2 > t\right) \le 2\exp\left(-\frac{t}{2\tau^2(w)}\right) \qquad (2)$$

for all $t \ge 0$.
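As a numerical sanity check (ours, with arbitrary illustrative parameters), the sketch below combines Lemma II.1 and Lemma II.2: a fixed linear combination of Rademacher variables, which are strictly subgaussian with $\tau(w) = 1$, should obey the tail bound (2) with $\tau(v) \le \|\beta\|_2$:

```python
# Empirical check of the subgaussian tail bound (2) for v = sum_i beta(i) w(i).
import numpy as np

rng = np.random.default_rng(1)
Z, trials = 32, 200_000
beta = rng.normal(size=Z)                       # fixed coefficient vector
w = rng.choice([-1.0, 1.0], size=(trials, Z))   # i.i.d. Rademacher, tau(w) = 1
v = w @ beta

b2 = np.sum(beta**2)                            # ||beta||_2^2 >= tau^2(v)
for t in [1.0 * b2, 2.0 * b2, 4.0 * b2]:
    empirical = np.mean(v**2 > t)
    bound = 2 * np.exp(-t / (2 * b2))
    print(f"t = {t:8.2f}:  P(|v|^2 > t) ~ {empirical:.4f}  <=  {bound:.4f}")
```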

This property will also allow us to use subgaussian random variables in the following important theorem capturing the concentration of measure properties of sums of random variables.

Theorem II.1. [24, Theorem 1.4] Let $X_1, \dots, X_L$ be independent Banach space valued random variables with $P\{\|X_i\| > t\} \le a\exp(-\alpha_i t)$ for all $t$ and $i$. Let $d \ge \max_i \alpha_i^{-1}$ and $b \ge a\sum_{i=1}^L \alpha_i^{-2}$. Then setting $S = \sum_{i=1}^L X_i$ we have

$$P\left\{\left| \|S\| - \mathbb{E}\|S\| \right| > t\right\} \le \begin{cases} 2\exp\left(-t^2/32b\right), & 0 \le t \le 4b/d \\ 2\exp\left(-t/8d\right), & t \ge 4b/d. \end{cases}$$

B. Concentration Inequalities for Dense Matrices with I.I.D. Random Entries

Concentration analysis to date has focused almost exclusively on dense random matrices populated with i.i.d. entries drawn from some distribution. Commonly, when $\Phi$ has size $M \times N$ and the entries are drawn from a suitably normalized distribution, then for any fixed signal $x \in \mathbb{R}^N$ the goal is to prove for any $\epsilon \in (0, 1)$ that

$$P\left(\left|\|\Phi x\|_2^2 - \|x\|_2^2\right| > \epsilon\|x\|_2^2\right) \le 2e^{-M c_0(\epsilon)}, \qquad (3)$$

where $c_0(\epsilon)$ is some constant (depending on $\epsilon$) that is typically on the order of $\epsilon^2$. When discussing bounds such as (3) where the probability of failure scales as $e^{-X}$, we refer to $X$ as the concentration exponent.

The past several years have witnessed the derivation of concentration results for a variety of (ultimately related) random distributions for $\Phi$. A concentration result of the form (3) was originally derived for dense Gaussian matrices populated with entries having mean zero and variance $\frac{1}{M}$ [25]; one straightforward derivation of this uses standard tail bounds for chi-squared random variables [7]. Using slightly more complicated arguments, similar concentration results were then derived for Bernoulli matrices populated with random $\pm\frac{1}{\sqrt{M}}$ entries (each with probability $\frac{1}{2}$) and for a "database-friendly" variant populated with random $\{\frac{\sqrt{3}}{\sqrt{M}}, 0, -\frac{\sqrt{3}}{\sqrt{M}}\}$ entries (with probabilities $\frac{1}{6}, \frac{2}{3}, \frac{1}{6}$) [7]. Each of these distributions, however, is itself subgaussian, and more recently it has been shown that concentration results of the form (3) in fact hold for all subgaussian distributions having variance $\frac{1}{M}$ [11, 26].³ Moreover, it has been shown that a subgaussian tail bound of the form (2) is actually necessary for deriving a concentration result of the form (3) for a dense random matrix populated with i.i.d. entries [26]. However, other forms of dense random matrices, such as those populated with random orthogonal rows, can also be considered [27].

C. Applications of Concentration Inequalities

One of the prototypical applications of a concentration result of the form (3) is to prove that with high probability, $\Phi$ will provide a stable, distance preserving embedding of some particular high-dimensional signal family $Q \subset \mathbb{R}^N$ in the low-dimensional measurement space $\mathbb{R}^M$. For example, supposing that $Q$ consists of a finite number of points and that $\Phi$ is an $M \times N$ matrix populated with i.i.d. subgaussian random variables having variance $\frac{1}{M}$, it is possible to apply (3) to each vector of the form $u - v$ for $u, v \in Q$. Using the favorable probability of distance preservation for each pair, one can use a union bound to conclude that approximate isometry must hold simultaneously for all of these difference vectors with probability at least $1 - 2|Q|^2 e^{-M c_0(\epsilon)}$. From this fact one obtains the familiar JL lemma [6–8], which states that

$$(1-\epsilon)\|u-v\|_2^2 \le \|\Phi(u-v)\|_2^2 \le (1+\epsilon)\|u-v\|_2^2 \qquad (4)$$

holds for all $u, v \in Q$ with high probability supposing that $M = O\left(\frac{\log(|Q|)}{c_0(\epsilon)}\right)$. The stable embedding of one or more discrete signals can be useful for solving various inference problems in the compressive domain. Potential problems of interest include nearest neighbor search [25], binary detection [28], multi-signal classification [28], and so on. We revisit the problem of compressive domain signal detection in Section V-A.

³This fact also follows from our Theorem III.1 by considering the special case where $J = 1$.
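The union bound argument is easy to reproduce numerically. The sketch below (ours; the dimensions and point count are illustrative) measures the worst-case pairwise distortion of a finite point set under a dense Gaussian $\Phi$:

```python
# JL-style check: a dense Gaussian Phi nearly preserves all pairwise distances in Q.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N, M, num_points = 1024, 64, 20
Q = [rng.normal(size=N) for _ in range(num_points)]
Phi = rng.normal(0, 1 / np.sqrt(M), size=(M, N))   # i.i.d. entries, variance 1/M

# largest relative distortion of ||Phi(u - v)||^2 over all pairs, as in (4)
eps = max(abs(np.linalg.norm(Phi @ (u - v))**2 / np.linalg.norm(u - v)**2 - 1)
          for u, v in combinations(Q, 2))
print(f"max distortion over {num_points * (num_points - 1) // 2} pairs: {eps:.3f}")
```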

It is possible to significantly extend embedding results far beyond the JL lemma. For example, by coupling the above union bound approach with some elementary covering arguments, one can prove the RIP in CS [10, 11], which states that if $M = O(K\log(N/K))$ (with a mild dependence on $\epsilon$), the inequality (4) can hold with high probability for an (infinite) set $Q$ containing all signals with sparsity $K$ in some basis for $\mathbb{R}^N$. Supposing a matrix satisfies the RIP, one can derive deterministic bounds on the performance of CS recovery algorithms such as $\ell_1$ minimization [29]. Alternately, a concentration result of the form (3) has also been used to probabilistically analyze the performance of $\ell_1$ minimization [26]. Finally, we note that one can also generalize these covering/embedding arguments to the case where $Q$ is a low-dimensional submanifold of $\mathbb{R}^N$ [30].

III. MAIN CONCENTRATION OF MEASURE INEQUALITIES

In this section we state our main concentration of measure results for Distinct Block Diagonal (DBD) and Repeated Block Diagonal (RBD) matrices. For each type of matrix we provide a detailed examination of the derived concentration rates and use simulations to demonstrate that our results do indeed capture the salient signal characteristics that affect the concentration probability. We also discuss connections between the concentration probabilities for the two matrix types.

A. Distinct Block Diagonal (DBD) Matrices

1) Analytical Results: Before stating our first result, let us define the requisite notation. For a given signal $x \in \mathbb{R}^{NJ}$ partitioned into $J$ blocks of length $N$ as in (1), we define a vector describing the energy distribution across the blocks of $x$:

$$\gamma = \gamma(x) := \left[\|x_1\|_2^2 \ \|x_2\|_2^2 \ \cdots \ \|x_J\|_2^2\right]^T \in \mathbb{R}^J.$$

Also, letting $M_1, M_2, \dots, M_J$ denote the number of measurements to be taken of each block, we define a $J \times J$ diagonal matrix containing these numbers along the diagonal:

$$\mathbf{M} := \begin{bmatrix} M_1 & & & \\ & M_2 & & \\ & & \ddots & \\ & & & M_J \end{bmatrix}.$$

Using this notation, our first main result concerning the concentration of DBD matrices is captured in the following theorem.


Theorem III.1. Suppose $x \in \mathbb{R}^{NJ}$. For each $j \in \{1, 2, \dots, J\}$, suppose that $M_j > 0$, and let $\Phi_j$ be a random $M_j \times N$ matrix populated with i.i.d. subgaussian entries having variance $\sigma_j^2 = \frac{1}{M_j}$ and Gaussian standard $\tau(\phi_j) \in [\sigma_j, \sqrt{c}\,\sigma_j]$ for some constant $c \ge 1$. Suppose that the matrices $\{\Phi_j\}_{j=1}^J$ are drawn independently of one another (though it is not necessary that they have the same subgaussian distribution), and let $\Phi$ be a $\left(\sum_{j=1}^J M_j\right) \times NJ$ DBD matrix composed of $\{\Phi_j\}_{j=1}^J$ as in (1). Then,

$$P\left(\left|\|\Phi x\|_2^2 - \|x\|_2^2\right| > \epsilon\|x\|_2^2\right) \le \begin{cases} 2\exp\left(-\dfrac{\epsilon^2\|\gamma\|_1^2}{256\,c^2\|\mathbf{M}^{-1/2}\gamma\|_2^2}\right), & 0 \le \epsilon \le \dfrac{16c\,\|\mathbf{M}^{-1/2}\gamma\|_2^2}{\|\mathbf{M}^{-1}\gamma\|_\infty\|\gamma\|_1} \\[6pt] 2\exp\left(-\dfrac{\epsilon\|\gamma\|_1}{16c\,\|\mathbf{M}^{-1}\gamma\|_\infty}\right), & \epsilon \ge \dfrac{16c\,\|\mathbf{M}^{-1/2}\gamma\|_2^2}{\|\mathbf{M}^{-1}\gamma\|_\infty\|\gamma\|_1} \end{cases} \qquad (5)$$

Proof: See Appendix A.

As we can see from the tail bound (5), the concentration probability of interest decays exponentially as a function of $\frac{\epsilon^2\|\gamma\|_1^2}{\|\mathbf{M}^{-1/2}\gamma\|_2^2}$ in the case of small $\epsilon$ and $\frac{\epsilon\|\gamma\|_1}{\|\mathbf{M}^{-1}\gamma\|_\infty}$ in the case of larger $\epsilon$. One striking thing about Theorem III.1 is that, in contrast to analogous results for dense matrices, the concentration rate depends explicitly on characteristics of the signal $x$ being measured. To elaborate on this point, since we are frequently concerned in practice with applications where $\epsilon$ is small, let us focus on the first case listed in (5). We see that in this case, the concentration exponent scales with

$$\Gamma = \Gamma(x, \mathbf{M}) := \frac{\|\gamma\|_1^2}{\|\mathbf{M}^{-1/2}\gamma\|_2^2} = \frac{\left(\sum_{j=1}^J \|x_j\|_2^2\right)^2}{\sum_{j=1}^J \frac{\|x_j\|_2^4}{M_j}}, \qquad (6)$$

where larger values of $\Gamma$ promote sharper concentration of $\|\Phi x\|_2^2$ about its mean $\|x\|_2^2$. We can bound the range of possible $\Gamma$ values as follows.

Lemma III.1. Let $\Gamma = \Gamma(x, \mathbf{M})$ be as defined in (6). Then $\min_j M_j \le \Gamma \le \sum_{j=1}^J M_j$.

Proof: See Appendix B.

It is not difficult to see that the worst case, $\Gamma = \min_j M_j$, is achieved when all of the signal energy is concentrated into exactly one signal block where the fewest measurements are collected, i.e., when $\|x_j\|_2^2 = 0$ except for a single index $j' \in \arg\min_j M_j$ (where $\arg\min_j M_j$ is the set of indices where $M_j$ is minimum). In this case the DBD matrix exhibits significantly worse performance than a dense i.i.d. matrix of the same size $\left(\sum_{j=1}^J M_j\right) \times NJ$, for which the concentration exponent would scale with $\sum_{j=1}^J M_j$. This makes some intuitive sense, as this extreme case would correspond to only one block of the DBD matrix sensing all of the signal energy. The best case, $\Gamma = \sum_{j=1}^J M_j$, is achieved when the number of measurements collected for each block is proportional to the signal energy in that block. In other words, letting $\mathrm{diag}(\mathbf{M})$ represent the diagonal of $\mathbf{M}$, the concentration exponent scales with $\sum_{j=1}^J M_j$ just as it would for a dense i.i.d. matrix of the same size when $\mathrm{diag}(\mathbf{M}) \propto \gamma$ (i.e., when $\mathrm{diag}(\mathbf{M}) = C\gamma$ for some constant $C > 0$). This is in spite of the fact that the DBD matrix has many fewer nonzero elements.
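The quantity $\Gamma$ is a direct computation from the block energies; the short sketch below (ours, with an illustrative integer rounding of the matched allocation) evaluates (6) for an equal allocation and for one roughly proportional to $\gamma$:

```python
# Computing the DBD diversity measure Gamma(x, M) from (6).
import numpy as np

def Gamma(x, M, N, J):
    """Gamma = ||gamma||_1^2 / sum_j (gamma_j^2 / M_j), gamma_j = ||x_j||_2^2."""
    g = np.sum(x.reshape(J, N)**2, axis=1)
    return np.sum(g)**2 / np.sum(g**2 / M)

rng = np.random.default_rng(3)
N, J = 64, 16
x = rng.normal(size=N * J)
g = np.sum(x.reshape(J, N)**2, axis=1)

M_equal = np.full(J, 4)                                # equal rates, 64 total rows
M_matched = np.maximum(1, np.round(64 * g / g.sum()))  # diag(M) roughly prop. to gamma
print(Gamma(x, M_equal, N, J), M_equal.sum())          # Gamma below sum_j M_j
print(Gamma(x, M_matched, N, J), M_matched.sum())      # close to the best case
```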

The probability of concentration in the second case of (5) behaves similarly. For the ratio $\frac{\|\gamma\|_1}{\|\mathbf{M}^{-1}\gamma\|_\infty}$ appearing in the concentration exponent, one can show that $\min_j M_j \le \frac{\|\gamma\|_1}{\|\mathbf{M}^{-1}\gamma\|_\infty} \le \sqrt{J\sum_{j=1}^J M_j^2}$. The lower bound is again achieved when $\|x_j\|_2^2 = 0$ except for a single index $j' \in \arg\min_j M_j$, and the upper bound is achieved when the signal energy is uniformly distributed across the blocks and the measurement rates are constant, i.e., when $\|x_1\|_2^2 = \|x_2\|_2^2 = \cdots = \|x_J\|_2^2$ and $M_1 = M_2 = \cdots = M_J$.

Fig. 1. Test signal for concentration in a DBD matrix. (a) Test signal $x$ with length 1024. (b) Energy distribution $\gamma(x)$ when the signal is partitioned into $J = 16$ blocks of length $N = 64$.

The above discussion makes clear that the concentration performance of a DBD matrix can vary widely depending on how well matched it is to the energy distribution of the measured signal. In particular, DBD matrices can perform as well as dense i.i.d. matrices if they are constructed to match the number of measurements in each block to the distribution of signal energy.

Three final comments concerning Theorem III.1 are in order. First, the bounds in (5) are most favorable for matrices populated with strictly subgaussian random variables because this allows one to set $c = 1$. As mentioned in Section II-A, zero-mean Gaussian random variables, $\pm 1$ Bernoulli random variables with $p = \frac{1}{2}$, and uniform random variables on $[-1, 1]$ are all examples of strictly subgaussian random variables. Second, we may further characterize the range of $\epsilon$ for which the two cases of Theorem III.1 are relevant.

Lemma III.2. If $J \ge 2$, the point of demarcation between the two cases of Theorem III.1 obeys

$$16c \cdot \frac{2(\sqrt{J}-1)}{J-1} \cdot \frac{\min_j \sqrt{M_j}}{\max_j \sqrt{M_j}} \ \le\ \frac{16c\,\|\mathbf{M}^{-1/2}\gamma\|_2^2}{\|\mathbf{M}^{-1}\gamma\|_\infty\|\gamma\|_1} \ \le\ 16c.$$

Proof: See Appendix C.

Examining the bound above, we note that for $J \ge 2$ it holds that $\frac{2(\sqrt{J}-1)}{J-1} \ge \frac{1}{\sqrt{J}}$. Thus, as an example, when $M_1 = M_2 = \cdots = M_J$, the first ("small $\epsilon$") case of Theorem III.1 is guaranteed to at least permit $\epsilon \in \left[0, \frac{16c}{\sqrt{J}}\right]$. Finally, Theorem III.1 was derived by considering all signal blocks to be of equal length $N$. By close examination of the proof, one can see that the same theorem in fact holds for signals partitioned into blocks of unequal lengths.

2) Supporting Experiments: While the quantity $\Gamma$ plays a critical role in our analytical upper bound (5) on the concentration tail probabilities, it is reasonable to ask whether this quantity actually plays a central role in the empirical concentration performance of DBD matrices. We explore this question with a series of simulations. To begin, we randomly construct a signal of length 1024 partitioned into $J = 16$ blocks of length $N = 64$. The signal $x$ and its energy distribution $\gamma$ are plotted in Figures 1(a) and 1(b), respectively. For this simulation, in order to be able to ensure $\mathrm{diag}(\mathbf{M}) \propto \gamma$ with integer values for the $M_j$, we began by constructing $\mathbf{M}$ (populated with integers) and then normalized each block of a randomly generated signal to set $\gamma$ accordingly.

Fixing this signal $x$, we generate a series of 10000 random $64 \times 1024$ matrices $\Phi$ using zero-mean Gaussian random variables for the entries. In one case, the matrices are fully dense and the entries of each matrix have variance $1/64$. In another case, the matrices are DBD with $\mathrm{diag}(\mathbf{M}) \propto \gamma$ and the entries in each block have variance $1/M_j$. Thus, we have $\Gamma(x, \mathbf{M}) = \sum_{j=1}^J M_j$ and our Theorem III.1 gives the same concentration bound for this DBD matrix as for the dense i.i.d. matrix of the same size. Indeed, Figures 2(a) and 2(b) show histograms of $\|\Phi x\|_2/\|x\|_2$ for these two types of matrices and, despite the drastically different structure of the matrices, both histograms are tightly concentrated around 1 as anticipated. For each type of matrix, Figure 3 also shows the percentage of trials for which $(1-\epsilon) \le \|\Phi x\|_2/\|x\|_2 \le (1+\epsilon)$ as a function of $\epsilon$. The curves for the dense and DBD matrices are indistinguishable.

Fig. 2. Histogram of $\|\Phi x\|_2/\|x\|_2$ for fixed $x$ across 10000 randomly generated matrices $\Phi$. (a) Dense $64 \times 1024$ $\Phi$. (b) DBD $64 \times 1024$ $\Phi$ where each $\Phi_j$ has a number of rows such that $\mathrm{diag}(\mathbf{M}) \propto \gamma$. (c) DBD $64 \times 1024$ $\Phi$ where each $\Phi_j$ has 4 rows so that $\mathrm{diag}(\mathbf{M}) \not\propto \gamma$. This corresponds to $\Gamma = 32.77 \le 64$. (d) Modified DBD $128 \times 1024$ $\Phi$ where each $\Phi_j$ has 8 rows. This corresponds to $\Gamma \approx 64$.

Fig. 3. The percentage of trials for which $(1-\epsilon) \le \|\Phi x\|_2/\|x\|_2 \le (1+\epsilon)$ as a function of $\epsilon$. Note that all curves overlap except for that corresponding to the DBD matrix $\Phi$ with $\mathrm{diag}(\mathbf{M}) \not\propto \gamma$.

Finally, we consider a third scenario in which we construct 10000 random $64 \times 1024$ DBD matrices as above but with an equal number of measurements in each block. In other words, we set all $M_j = 4$, and obtain measurement matrices that are no longer matched to the signal energy distribution. We quantify this mismatch by noting that $\Gamma(x, 4 \cdot I_{J \times J}) = 32.77 < \sum_{j=1}^J M_j$. Figure 2(c) shows the histogram of $\|\Phi x\|_2/\|x\|_2$ and Figure 3 shows the concentration success probability over these 10000 random matrices. It is evident that these mismatched DBD matrices provide decidedly less sharp concentration of $\|\Phi x\|_2$.
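This experiment is straightforward to replicate at reduced scale. In the sketch below (ours; the trial count and the random integer allocation are illustrative assumptions), the matched DBD matrix shows essentially the dense matrix's spread of $\|\Phi x\|_2$, while the flat allocation is visibly worse:

```python
# Reduced-scale version of the Section III-A experiment: dense vs. DBD concentration.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(4)
N, J, trials = 64, 16, 2000

M = rng.multinomial(64 - J, np.ones(J) / J) + 1   # integer M_j >= 1 summing to 64
x = rng.normal(size=(J, N))
x *= (np.sqrt(M / 64) / np.linalg.norm(x, axis=1))[:, None]  # gamma prop. to diag(M)
xf = x.ravel()                                    # unit-norm signal

def spread(make_phi):
    return np.std([np.linalg.norm(make_phi() @ xf) for _ in range(trials)])

print("dense    :", spread(lambda: rng.normal(0, 1 / 8, size=(64, N * J))))
print("DBD match:", spread(lambda: block_diag(
    *[rng.normal(0, 1 / np.sqrt(m), size=(m, N)) for m in M])))
print("DBD flat :", spread(lambda: block_diag(
    *[rng.normal(0, 1 / 2, size=(4, N)) for _ in range(J)])))
```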

B. Repeated Block Diagonal (RBD) Matrices

1) Analytical Results: We now turn our attention to the concentration performance of the more restricted RBD matrices. Before stating our result, let us again define the requisite notation. Given a signal $x \in \mathbb{R}^{NJ}$ partitioned into $J$ blocks of length $N$, we define the $J \times N$ matrix of concatenated signal blocks

$$X := [x_1 \ x_2 \ \cdots \ x_J]^T, \qquad (7)$$

and we denote the non-negative eigenvalues of the $N \times N$ symmetric matrix $A = X^T X$ as $\{\lambda_i\}_{i=1}^N$. We let

$$\lambda = \lambda(x) := [\lambda_1, \dots, \lambda_N]^T \in \mathbb{R}^N$$

be the vector composed of these eigenvalues. Also, we let $M := M_1 = M_2 = \cdots = M_J$ denote the same number of measurements to be taken in each block, and we refer to the common measurement matrix as $\bar{\Phi} := \Phi_1 = \Phi_2 = \cdots = \Phi_J$.

Equipped with this notation, our main result concerning the concentration of RBD matrices is as follows.

Theorem III.2. Suppose $x \in \mathbb{R}^{NJ}$. Let $\bar{\Phi}$ be a random $M \times N$ matrix populated with i.i.d. zero-mean Gaussian entries having variance $\sigma^2 = \frac{1}{M}$, and let $\Phi$ be an $MJ \times NJ$ block diagonal matrix as defined in (1), with $\Phi_j = \bar{\Phi}$ for all $j$. Then

$$P\left(\left|\|\Phi x\|_2^2 - \|x\|_2^2\right| > \epsilon\|x\|_2^2\right) \le \begin{cases} 2\exp\left(-\dfrac{M\epsilon^2\|\lambda\|_1^2}{256\,\|\lambda\|_2^2}\right), & 0 \le \epsilon \le \dfrac{16\|\lambda\|_2^2}{\|\lambda\|_\infty\|\lambda\|_1} \\[6pt] 2\exp\left(-\dfrac{M\epsilon\|\lambda\|_1}{16\,\|\lambda\|_\infty}\right), & \epsilon \ge \dfrac{16\|\lambda\|_2^2}{\|\lambda\|_\infty\|\lambda\|_1} \end{cases} \qquad (8)$$

Proof: See Appendix D.

As we can see from (8), the concentration probability of interest again decays with a rate that depends on the characteristics of the signal $x$ being measured. Focusing on the first case listed in (8), we see that the concentration exponent scales with

$$\Lambda = \Lambda(x, M) := \frac{M\|\lambda\|_1^2}{\|\lambda\|_2^2}. \qquad (9)$$

From the standard relation between $\ell_1$ and $\ell_2$ norms, it follows that $M \le \Lambda \le M\min(J, N)$. The worst case, $\Lambda = M$, is achieved when $A = \sum_j x_j x_j^T$ has only one nonzero eigenvalue, implying that the blocks $x_j$ are the same modulo a scaling factor. In this case, the RBD matrix exhibits significantly worse performance than a dense i.i.d. matrix of the same size $MJ \times NJ$, for which the concentration exponent would scale with $MJ$ rather than $M$. However, this diminished performance is to be expected since the same $\bar{\Phi}$ is used to measure each identical signal block.

The other extreme case, $\Lambda = M\min(J, N)$, is favorable as long as $J \le N$, in which case the concentration exponent scales with $MJ$ just as it would for a dense i.i.d. matrix of the same size. For this case to occur, $A$ must have $J$ nonzero eigenvalues and they must all be equal. By noting that the nonzero eigenvalues of $A = X^T X$ are the same as those of the Grammian matrix $G = XX^T$, we conclude that this most favorable case can occur only when the signal blocks are mutually orthogonal and have the same energy. Alternatively, if the signal blocks span a $K$-dimensional subspace of $\mathbb{R}^N$ we will have $M \le \Lambda \le MK$. All of this can also be seen by observing that calculating the eigenvalues of $A = X^T X$ is equivalent to running Principal Component Analysis (PCA) [31] on the matrix $X$ comprised of the $J$ signal blocks.
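Since the nonzero eigenvalues of $A = X^T X$ are the squared singular values of $X$, the quantity $\Lambda$ is a one-line computation; this sketch (ours, with illustrative sizes) also exhibits the rank-one worst case:

```python
# Computing the RBD diversity measure Lambda from (7)-(9) via the SVD of X.
import numpy as np

def Lambda(x, M, N, J):
    """Lambda = M ||lambda||_1^2 / ||lambda||_2^2, lambda = eigenvalues of X^T X."""
    X = x.reshape(J, N)                            # rows are the blocks x_j^T
    lam = np.linalg.svd(X, compute_uv=False)**2
    return M * lam.sum()**2 / np.sum(lam**2)

rng = np.random.default_rng(5)
N, J, M = 64, 16, 4
print(Lambda(rng.normal(size=N * J), M, N, J))     # random blocks: close to M*J = 64

x_same = np.tile(rng.normal(size=N), J)            # identical blocks: rank-one A
print(Lambda(x_same, M, N, J))                     # collapses to the worst case M = 4
```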

We note that there is a close connection between $\Gamma$ and $\Lambda$ that is not apparent at first glance. For a fair comparison, we assume in this discussion that $M_1 = M_2 = \cdots = M_J = M$. Now, note that $\|\lambda\|_1^2 = \|\gamma\|_1^2$ and also that

$$\|\lambda\|_2^2 = \|A\|_F^2 = \|XX^T\|_F^2 = \sum_{i=1}^J \|x_i\|_2^4 + 2\sum_{i>j}(x_i^T x_j)^2 = \|\gamma\|_2^2 + 2\sum_{i>j}(x_i^T x_j)^2.$$

Fig. 4. Test signals for concentration in a RBD matrix. (a) Sig. 1 with $J = 16$ orthogonal blocks. (b) $\lambda$ for Sig. 1. (c) Sig. 2 with non-orthogonal blocks. (d) $\lambda$ for Sig. 2.

Using these two relationships, we can rewrite $\Lambda$ as

$$\Lambda = \frac{M\|\lambda\|_1^2}{\|\lambda\|_2^2} = \frac{M\|\gamma\|_1^2}{\|\gamma\|_2^2 + 2\sum_{i>j}(x_i^T x_j)^2} \le \frac{M\|\gamma\|_1^2}{\|\gamma\|_2^2} = \Gamma. \qquad (10)$$

From this relationship we see that $\Lambda$ and $\Gamma$ differ only by the additional inner-product term in the denominator of $\Lambda$, and we also see that $\Lambda = \Gamma$ if and only if the signal blocks are mutually orthogonal.

In relation to Theorem III.1, the conditions for achieving the optimal concentration exponent are more stringent in Theorem III.2. This is not surprising given the restricted nature of the RBD matrix. Indeed, it is remarkable that for some signals, an RBD matrix can yield the same concentration rate as a dense i.i.d. matrix, though as we have discussed above this happens only for signals containing enough intrinsic diversity to compensate for the lack of diversity in the measurement system.

2) Supporting Experiments: While the quantity $\Lambda$ plays a critical role in our analytical upper bound (8) on the concentration tail probabilities, it is reasonable to ask whether this quantity actually plays a central role in the empirical concentration performance of RBD matrices. We explore this question with a series of simulations. To begin, we randomly construct a signal of length 1024 partitioned into $J = 16$ blocks of length $N = 64$, and we perform Gram-Schmidt orthogonalization to ensure that the $J$ blocks are mutually orthogonal and have equal energy. The signal $x$ (denoted "Sig. 1") is plotted in Figure 4(a), and the nonzero eigenvalues of $A = X^T X$ are shown in the plot of $\lambda$ in Figure 4(b).

As we have discussed above, for signals such as Sig. 1 we should have $\Lambda = MJ$, and Theorem III.2 suggests that an RBD matrix can achieve the same concentration rate as a dense i.i.d. matrix of the same size. Fixing this signal, we generate a series of 10000 random $64 \times 1024$ matrices $\Phi$ populated with zero-mean Gaussian random variables. In one case, the matrices are dense and each entry has variance $1/64$. In another case, the matrices are RBD, with a single $4 \times 64$ block repeated along the main diagonal, comprised of i.i.d. Gaussian entries with variance $\frac{1}{4}$. Figures 5(a) and 5(b) show histograms of $\|\Phi x\|_2/\|x\|_2$ for these two types of matrices and, despite the drastically different structure of the matrices, both histograms are tightly concentrated around 1 as anticipated. For each type of matrix, Figure 6 also shows the percentage of trials for which $(1-\epsilon) \le \|\Phi x\|_2/\|x\|_2 \le (1+\epsilon)$ as a function of $\epsilon$. The curves for the dense and RBD matrices are indistinguishable.

In contrast, we also construct a second signal $x$ (denoted "Sig. 2") that has equal energy between the blocks but has non-orthogonal components (resulting in non-uniform $\lambda$); see Figures 4(c) and 4(d). This signal was constructed to ensure that the sorted entries of $\lambda$ exhibit an exponential decay. Due to the non-orthogonality of the signal blocks, we see for this signal that $\Lambda = 21.3284$, which is approximately 3 times less than the best possible value of $MJ = 64$. Consequently, Theorem III.2 provides a much weaker concentration exponent when this signal is measured using an RBD matrix than when it is measured using a dense i.i.d. matrix. Fixing this signal, we plot in Figures 5(c) and 5(d) the histograms of $\frac{\|\Phi x\|_2}{\|x\|_2}$ for the fully dense and RBD matrices constructed above, respectively. As shown in these histograms and in Figure 6, we see that the concentration performance of the full dense matrix is agnostic to this new signal structure, while the concentration is clearly not as sharp for the RBD matrix.

Fig. 5. Histogram of $\|\Phi x\|_2/\|x\|_2$ for fixed $x$ across 10000 randomly generated matrices $\Phi$. (a) Sig. 1 with orthogonal blocks, dense $64 \times 1024$ $\Phi$. (b) Sig. 1 with orthogonal blocks, RBD $64 \times 1024$ $\Phi$. (c) Sig. 2 with non-orthogonal blocks, dense $64 \times 1024$ $\Phi$. (d) Sig. 2 with non-orthogonal blocks, RBD $64 \times 1024$ $\Phi$. This corresponds to $\Lambda = 21.3284 < 64$.

Fig. 6. The percent of trials for which $(1-\epsilon) \le \|\Phi x\|_2/\|x\|_2 \le (1+\epsilon)$ as a function of $\epsilon$. Note that all curves overlap except for that corresponding to the signal with non-orthogonal blocks being measured with RBD matrices.
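Sig. 1 and Sig. 2 are simple to imitate. The sketch below (ours; the correlation level 0.3 and the trial count are arbitrary choices) contrasts the empirical spread of $\|\Phi x\|_2$ under RBD measurement for orthogonal versus correlated blocks:

```python
# RBD concentration for orthogonal blocks (Sig. 1 analogue) vs. correlated blocks.
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(6)
N, J, M, trials = 64, 16, 4, 1000

Q, _ = np.linalg.qr(rng.normal(size=(N, J)))       # J orthonormal columns
sig1 = (Q.T / np.sqrt(J)).ravel()                  # orthogonal, equal-energy blocks
base = rng.normal(size=N)
X2 = np.array([base + 0.3 * rng.normal(size=N) for _ in range(J)])
sig2 = (X2 / np.linalg.norm(X2)).ravel()           # highly correlated blocks

def rbd_spread(x):
    return np.std([np.linalg.norm(block_diag(*([B] * J)) @ x)
                   for B in (rng.normal(0, 1 / np.sqrt(M), size=(M, N))
                             for _ in range(trials))])

print("orthogonal blocks:", rbd_spread(sig1))      # small spread
print("correlated blocks:", rbd_spread(sig2))      # noticeably larger spread
```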

C. Increasing Measurement Rates to Protect Against Signal Uncertainty

As we have seen both analytically and experimentally, given a fixed number of total measurements, the concentration performance of a DBD matrix can be optimized if there is a suitable match between the measurement allocations and the structure of the observed signal $x$. In some cases, however, it may not be possible for a system designer to match the measurement ratios to the signal structure; for example, without knowledge of the exact signal to be acquired, one may wish to design a fixed measurement system with an equal number of measurements per block. In such cases, it may still be possible to guarantee suitable concentration performance for a variety of possible received signals by increasing the total number of measurements collected.

For example, recall the experiment from Section III-A involving the signal shown in Figure 1. Using an unmatched DBD matrix with $M = 4$ measurements per block, we obtained $\Gamma(x, \mathbf{M}) = \Gamma(x, M \cdot I_{J \times J}) = 32.77$, which gave rise to a smaller concentration exponent and worse empirical concentration performance than a dense i.i.d. matrix with the same total number of $MJ = 64$ rows. However, suppose that we had the resources available to acquire more measurements from each block (while keeping the number of measurements equal across the blocks). In particular, if we were to collect $M' = \frac{MJ}{\Gamma(x, M \cdot I_{J \times J})} \cdot M$ measurements from each block, we would obtain a DBD matrix whose concentration exponent scales with $\Gamma(x, \mathbf{M}') = \Gamma(x, M' \cdot I_{J \times J}) = \frac{MJ}{\Gamma(x, M \cdot I_{J \times J})} \cdot \Gamma(x, M \cdot I_{J \times J}) = MJ$, the same as a dense matrix with $MJ$ rows. For this specific signal $x$, we illustrate this by taking $M' = 8 \approx \frac{64}{32.77} \cdot M$; as shown in Figure 2(d) and Figure 3, the concentration performance of a $128 \times 1024$ DBD matrix with $M' = 8$ measurements per block (we call this a "modified DBD" matrix due to its larger size) is indistinguishable from that of a dense $64 \times 1024$ Gaussian matrix. Thus, we see that it is possible to guarantee a certain performance from DBD matrices by increasing $M$ to protect against signals with small ratios $\frac{\|\gamma\|_1^2}{\|\gamma\|_2^2}$.
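Because $\Gamma(x, M \cdot I_{J \times J})$ scales linearly in $M$, the rate adjustment is a one-line computation; a tiny sketch (ours, with an arbitrary energy profile):

```python
# Boosting the per-block rate to M' = (MJ / Gamma) * M restores Gamma(M') >= MJ.
import numpy as np

def Gamma_uniform(g, M):
    """Gamma(x, M*I) for block energies g and M measurements per block."""
    return M * np.sum(g)**2 / np.sum(g**2)

rng = np.random.default_rng(7)
J, M = 16, 4
g = rng.exponential(size=J)                   # a nonuniform block-energy profile
Gam = Gamma_uniform(g, M)
M_prime = int(np.ceil(M * J / Gam * M))       # boosted per-block measurement rate
print(Gam, Gamma_uniform(g, M_prime), M * J)  # after the boost: Gamma >= M*J
```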

Note that this apparent empirical equivalence between the concentration performance of the modified DBD matrix and the dense matrix does not mean that the concentration statistics are formally the same. It is straightforward to verify that in general, while the probability distributions of $\|\Phi x\|_2^2$ will not be the same for a dense and a modified DBD matrix, the variance of $\|\Phi x\|_2^2$ will be equal in both cases. This explains why the empirical concentration performance is very similar. In the special case that the signal blocks have equal energy and the matrices are populated with Gaussian random variables, the distributions will in fact be the same under both matrices. As a final remark, we note that similar arguments can be made for RBD matrices by increasing $M$ to protect against signals with small ratios $\frac{\|\lambda\|_1^2}{\|\lambda\|_2^2}$.

IV. FAVORABLE SIGNAL CLASSES

In Section III we demonstrated that block diagonal matrices have concentration exponents that vary significantly depending on the exact characteristics of the signal being measured. It is only natural to ask whether there are any realistic signal classes where we expect most of the constituent signals to have the properties that allow favorable concentration exponents from DBD or RBD matrices. In this section, we briefly explore some example scenarios where we demonstrate empirically or analytically that signals in a restricted family often have these favorable characteristics.

A. Frequency Sparse Signals

One of the primary signal characteristics affecting the concentration exponents in the results of Section III is the distribution of signal energy across blocks. Supposing that $M_1 = M_2 = \cdots = M_J =: M$, this effect is most easily seen in the quantity $1 \le \frac{\Gamma}{M} \le J$, where the larger the value of $\frac{\Gamma}{M}$, the better the concentration exponent when using a DBD matrix. Because of existing results on time-frequency uncertainty principles and the well-known incoherence of sinusoids and the canonical basis (i.e., "spikes and sines") [32, 33], we have intuition that most signals that are sparse in the frequency domain should have their energy distributed relatively uniformly across blocks in the time domain. In this section, we make formal the notion that frequency sparse signals are indeed likely to have a favorable energy distribution, producing values of $\Gamma$ that scale to within a log factor of its maximum value.

To be concrete, let $x \in \mathbb{C}^{N'}$ be a signal of interest⁴ that is split into $J = N'/N$ blocks of fixed length $N$ each. For simplicity, we assume that $N$ divides $N'$ and we will consider the case where $N'$ grows (implying the number of blocks $J$ is increasing, since the block size $N$ is fixed). The signal $x$ is further assumed to be frequency sparse, with $S$ nonzero coefficients in the discrete Fourier transform (DFT). In other words, if $\hat{x}$ is the DFT of $x$ and $\Omega \subset [1, N']$ denotes the support of $\hat{x}$, then $|\Omega| \le S$. We assume that the frequency locations are chosen randomly (i.e., $\Omega$ is chosen uniformly at random from $[1, N']$), but the values taken by $\hat{x}$ on $\Omega$ are completely arbitrary with no probability distribution assumed. The following theorem then gives a lower bound on the ratio $\frac{\Gamma}{M}$ for frequency sparse signals.

⁴We consider complex-valued signals for simplicity and clarity in the exposition. A result with the same spirit that holds with high probability can be derived for strictly real-valued signals, but this comes at the cost of a more cumbersome derivation.

Fig. 7. Histograms of the normalized quantity $\Gamma$ for frequency sparse signals. (a) The distribution of $\frac{\Gamma}{M}$ for randomly generated frequency sparse signals of length $N' = N \times J = 64 \times 64$ for sparsity levels $S \in \{5, 30, 64\}$. Note that $\frac{\Gamma}{M}$ accumulates near its upper bound of $J = 64$ for all three sparsity levels. (b) The distribution of $\frac{\Gamma}{MJ}$ for randomly generated frequency sparse signals with $S = 5$ and the number of signal blocks $J \in \{64, 200, 400\}$. Note that $\frac{\Gamma}{MJ}$ accumulates near its upper bound of 1.

Theorem IV.1. Let $N, \beta > 1$ be fixed and suppose $N' = NJ > 512$. Let $\Omega \subset [1, N']$ be of size $S \le N$ generated uniformly at random. Then with probability at least $1 - O\left(J(\log(N'))^{1/2}(N')^{-\beta}\right)$,⁵ every signal $x \in \mathbb{C}^{N'}$ whose DFT coefficients are supported on $\Omega$ will have $\Gamma/M$ lower bounded by:

$$\frac{\Gamma}{M} \ge \min\left\{ \frac{0.0779\,J}{(\beta+1)\log(N')},\ J\left(\frac{\sqrt{6(\beta+1)\log N'} + (\log N')^2}{N}\right)^{-2} \right\}.$$

Proof: See Appendix E.

Note that as $N'$ grows, the lower bound on $\frac{\Gamma}{M}$ scales as $J\frac{N^2}{\log^4(N')}$, which (treating the fixed value $N$ as a constant) is within $\log^4(N')$ of its maximum possible value of $J$. Thus the concentration exponent for most frequency sparse signals when measured by a DBD matrix will scale with $\epsilon^2 MJ/\log^4(N')$ for small $\epsilon$, which is close to the concentration exponent resulting from the application of a dense, i.i.d. random matrix of the same size. Also note that the failure probability in the above theorem can be bounded by $O\left(\frac{1}{N'^{\beta-2}}\right)$ since both $J$ and $\sqrt{\log(N')}$ are less than $N'$.

To explore the analysis above we use two illustrative simulations. For the first experiment, we generate 5000 signals with length $N' = NJ = 64 \times 64 = 4096$, using three different sparsity levels $S \in \{5, 30, 64\}$. The DFT coefficient locations are selected uniformly at random, and the corresponding nonzero coefficient values are drawn from the i.i.d. standard Gaussian distribution. Figure 7(a) plots the ratio $\frac{\Gamma}{M}$, showing that this quantity is indeed near the upper bound of $J = 64$, indicating a favorable energy distribution. This gives support to the fact that the theoretical value of $\frac{\Gamma}{M}$ predicted in Theorem IV.1 does not depend strongly on the exact value of $S$. For the second experiment, we fix the sparsity at $S = 5$ and vary the number of signal blocks $J \in \{64, 200, 400\}$ (and thus the total signal length $N' = NJ$ changes as well). For each $J$ we generate 5000 random signals in the same manner as before and plot in Figure 7(b) the distribution of $\frac{\Gamma}{MJ}$ (note the normalization by $J$).

⁵The $O(\cdot)$ notation is with respect to $N'$. With the component length $N$ fixed, this means that only the number of blocks $J$ is growing with increasing $N'$.


Again, it is clear that this value concentrates near the upper bound of 1, showing the relative accuracy of the prediction that $\frac{\Gamma}{M}$ scales linearly with $J$. While some of the quantities in Theorem IV.1 appear pessimistic (e.g., the scaling with $\log^4(N')$), these simulations confirm the intuition derived from the theorem that frequency sparse signals should indeed have favorable energy distributions, and therefore favorable concentration properties when measured with DBD matrices.
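The first experiment is compact enough to reproduce directly; in this sketch (ours, at a reduced number of draws), signals with $S$ random DFT tones yield $\Gamma/M$ close to its maximum $J$:

```python
# Gamma/M for frequency sparse signals with randomly located DFT support.
import numpy as np

rng = np.random.default_rng(8)
N, J, S = 64, 64, 5
Np = N * J                                   # total length N'

def gamma_over_M(x):
    g = np.sum(np.abs(x.reshape(J, N))**2, axis=1)   # block energies
    return np.sum(g)**2 / np.sum(g**2)               # = Gamma/M for equal rates

vals = []
for _ in range(1000):
    xhat = np.zeros(Np, dtype=complex)
    omega = rng.choice(Np, size=S, replace=False)    # random frequency support
    xhat[omega] = rng.normal(size=S) + 1j * rng.normal(size=S)
    vals.append(gamma_over_M(np.fft.ifft(xhat)))
print(f"mean Gamma/M = {np.mean(vals):.1f} (upper bound J = {J})")
```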

B. Delay Networks and Multi-view Imaging

Additional favorable signal classes for DBD and RBD matrices can occasionally arise in certain sensor network or multi-view imaging scenarios where signals with steeply decaying autocorrelation functions are measured under small perturbations. For example, consider a network of $J$ sensors characterized by measurement operators $\Phi_1, \Phi_2, \dots, \Phi_J$, and suppose that the received signals $x_1, x_2, \dots, x_J \in \mathbb{R}^N$ represent observations of some common underlying prototype signal $z \in \mathbb{R}^N$. However, due to the configurations of the sensors, these observations occur with different delays or translations. More formally, we might consider the one-dimensional delay parameters $\delta_1, \delta_2, \dots, \delta_J \in \mathbb{Z}$ and have that for each $j$, $x_j(n) = z(n - \delta_j)$. A similar two-dimensional translation model could be used for imaging scenarios.

The characteristics of $z$ may make it well suited to observation using block diagonal matrices. Assuming $z$ is suitably zero-padded so that border and truncation artifacts can be neglected, we will have $\|x_j\|_2 = \|z\|_2$ for all $x_j$; this gives $\Gamma = \sum_j M_j$, which is the ideal case for observation with a DBD matrix. Moreover, the correlations among the components $x_j$ can be characterized in terms of the autocorrelation function $R_z$ of $z$: we will have $\langle x_i, x_j\rangle = \sum_{n=1}^N x_i(n) x_j(n) = \sum_{n=1}^N z(n-\delta_i) z(n-\delta_j)$, which neglecting border and truncation artifacts will simply equal $R_z(|\delta_i - \delta_j|)$. Therefore, signals $z$ that exhibit strong decay in their autocorrelation function will be natural candidates for observation with RBD matrices as well. For example, if we assume that all $M_j = M$, equation (10) gives

$$\Lambda = \frac{MJ^2\|z\|_2^4}{J\|z\|_2^4 + 2\sum_{i>j} R_z(|\delta_i - \delta_j|)^2}.$$

Note in the expression above that when $R_z(|\delta_i - \delta_j|)$ is small for most $i$ and $j$, the quantity $\Lambda$ is near its optimal value of $MJ$. Finally, we note that the repeated observation of a signal from multiple translations gives rise to a structure not unlike that which arises when considering Toeplitz matrices in problems such as channel sensing. In Section V-B1, we explore this related problem in much more depth.
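The following sketch (ours; the prototype support, delay range, and sizes are illustrative assumptions) evaluates $\Lambda$ for delayed copies of a zero-padded prototype, where the block inner products reduce to samples of $R_z$:

```python
# Lambda for J delayed observations x_j(n) = z(n - delta_j) of a prototype z.
import numpy as np

rng = np.random.default_rng(9)
N, J, M = 256, 16, 4
z = np.zeros(N)
z[64:128] = rng.normal(size=64)                # zero-padded prototype signal
delays = rng.integers(0, 32, size=J)
X = np.array([np.roll(z, d) for d in delays])  # no wraparound for these delays

E = np.linalg.norm(z)**2                       # every block has energy ||z||_2^2
G = X @ X.T                                    # inner products R_z(|delta_i - delta_j|)
off = G[np.triu_indices(J, k=1)]
Lam = M * (J * E)**2 / (J * E**2 + 2 * np.sum(off**2))
print(f"Lambda = {Lam:.1f} (optimal M*J = {M * J})")
```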

C. Difference Signals

Some applications of our main results could require considering difference vectors of the form $x - y$ where $x, y \in \mathbb{R}^{NJ}$. For example, as suggested by the discussion in Section II-C, in order to guarantee that a block diagonal measurement matrix $\Phi$ provides a stable embedding of a signal family $Q \subset \mathbb{R}^{NJ}$, it may be necessary to repeatedly invoke bounds such as (5) for many or all of the difference vectors between elements of $Q$. It is interesting to determine what signal families will give rise to difference vectors that have favorable values of $\Gamma$ or $\Lambda$.

We provide a partial answer to this question by briefly exemplifying such a favorable signal family. For the sake of simplicity, let us restrict our attention to DBD matrices of the form in (1) where $M_1 = M_2 = \cdots = M_J =: M$. Consider a set $Q \subset \mathbb{R}^{JN}$ of signals that, when partitioned into $J$ blocks of length $N$, satisfy both of the following properties: (i) each $x \in Q$ has uniform energy across the $J$ blocks, i.e., $\|x_1\|_2^2 = \|x_2\|_2^2 = \cdots = \|x_J\|_2^2 = \frac{1}{J}\|x\|_2^2$, and (ii) each $x \in Q$ has highly correlated blocks, i.e., for some $a \in (0, 1]$, $\langle x_i, x_j\rangle \ge a\frac{1}{J}\|x\|_2^2$ for all $i, j \in \{1, 2, \dots, J\}$. The first of these conditions ensures that each $x \in Q$ will have $\Gamma = MJ$ and thus be highly amenable to measurement by a DBD matrix. The second condition, when taken in conjunction with the first, ensures that all difference vectors of the form $x - y$ where $x, y \in Q$ will also be highly amenable to measurement by a DBD matrix. In particular, for any $i, j \in \{1, 2, \dots, J\}$, one can show that

$$\left| \|x_i - y_i\|_2^2 - \|x_j - y_j\|_2^2 \right| \le 4\sqrt{2}\,\|x\|_2\|y\|_2\,\frac{\sqrt{1-a}}{J},$$

meaning that the per-block energies of a difference signal can themselves differ only slightly. One implication of this bound is that as $a \to 1$, $\Gamma(x - y) \to MJ$.

Signal families of the form specified above (with uniform energy blocks and high inter-block correlations) may generally arise as the result of observing some phenomenon that varies slowly as a function of time or of sensor position. As an empirical demonstration, let us consider a small database of eight real-world video signals frequently used as benchmarks in the video compression community, where we will treat each video frame as a signal block.⁶ We truncate each video to have $J = 150$ frames, each of size $N = 176 \times 144 = 25344$ pixels, and we normalize each video (not each frame) to have unit energy. Because the test videos are real-world signals, they do not have perfectly uniform energy distribution across the frames, but we do observe that most frame energies are concentrated around $\frac{1}{J} = 0.00667$.

TABLE I
The maximum and minimum inner products between all pairs of distinct frames in each video, and the quantity $\Gamma/M$ for each video. The best possible value of $\Gamma/M$ is $\Gamma/M = J = 150$.

Video name      max <x_i, x_j>   min <x_i, x_j>   Gamma/M
Akiyo           0.00682          0.00655          149.9844
Bridge close    0.00668          0.00664          149.9998
Bridge far      0.00668          0.00665          149.9999
Carphone        0.00684          0.00598          149.9287
Claire          0.00690          0.00650          149.9782
Coastguard      0.00742          0.00562          149.2561
Foreman         0.00690          0.00624          149.9329
Miss America    0.00695          0.00606          149.9301

For each video, we present in Table I the minimum and maximum inner products $\langle x_i, x_j\rangle$ over all $i \ne j$, and we also list the quantity $\Gamma/M$ for each video. As we can see, the minimum inner product for each video is indeed quite close to 0.00667, suggesting from the arguments above that the pairwise differences between various videos are likely to have highly uniform energy distributions. To verify this, we compute the quantity $\Gamma/M$ for all possible $\binom{8}{2}$ pairwise difference signals. As we are limited in space, we present in Figure 8 plots of the energies $\|x_j\|_2^2$, $\|y_j\|_2^2$, and $\|x_j - y_j\|_2^2$ as a function of the frame index $j$ for the pairs of videos $x, y$ that give the best (highest) and the worst (smallest) values of $\Gamma(x-y)/M$. We see that even the smallest $\Gamma/M$ value is quite close to the best possible value of $\Gamma/M = 150$. All of this suggests that the information required to discriminate or classify various signals within a family such as a video database may be well preserved in a small number of random measurements collected by a DBD matrix.

⁶Videos were obtained from http://trace.eas.asu.edu/yuv/.

Fig. 8. Plots of the energy distributions of individual videos and of their differences for the best video pair and the worst video pair among all $\binom{8}{2}$ possible video pairs. (a) The difference of the video pair "Bridge close" and "Bridge far", giving the best value of $\Gamma(x-y)/M = 149.9988$. (b) The difference of the video pair "Coastguard" and "Miss America", giving the worst value of $\Gamma(x-y)/M = 148.7550$.

D. Random Signals

Our discussions above have demonstrated that favorable $\Lambda$ and $\Gamma$ values can arise for signals in a variety of practical scenarios. This is no accident; indeed, as a blanket statement, it is true that a large majority of all signals $x \in \mathbb{R}^{JN}$, when partitioned into a sufficiently small number of blocks $J$ and measured uniformly, will have favorable values of both $\Lambda$ and $\Gamma$. One way of formalizing this fact is with a probabilistic treatment such as that given in the following lemma.

Lemma IV.1. Suppose x ∈ R^{NJ} is populated with i.i.d. subgaussian entries having mean zero and variance σ², and let τ = √c · σ denote the Gaussian standard of this distribution for a constant c ≥ 1. Let Γ and Λ be defined as in (6) and (9) respectively, where we assume that M_1 = M_2 = · · · = M_J =: M. Then, supposing that J ≤ N(ǫ²/(4 · 256c²) − c_1) log⁻¹(12/ǫ) for some constant c_1 > 0,

Γ ≥ Λ ≥ M + M((1 − ǫ)/(1 + ǫ))²(J − 1),

with probability at least 1 − 2e^{−c_1 N}.

Proof: See Appendix F.

We see from Lemma IV.1 that when random vectors are partitioned into a sufficiently small number of blocks, these signals will have Λ and Γ values close to their optimal value of MJ. One possible use of this lemma could be in studying the robustness of block diagonal measurement systems to noise in the signal. The lemma above tells us that when the restrictions on the number of blocks are met, random noise will tend to yield blocks that are nearly orthogonal and have highly uniform energies, thereby guaranteeing that their energy will not be amplified by the matrix.

To illustrate this phenomenon with an example, we set J = 16 and N = 64 and construct signals x ∈ R^{NJ} populated with i.i.d. Gaussian entries. Over 10000 random draws of the signal, Figures 9(a) and 9(b) plot histograms of the resulting quantities Γ/M and Λ/M, respectively. The average value of Γ/M is 15.5, while the average value of Λ/M is 12.6; both represent a large fraction of the maximum possible value of 16. In addition, we see that the two histograms concentrate sharply around their means, suggesting that a large majority of all signals will indeed be favorable for measurement with DBD and RBD matrices.


6 8 10 12 14 160

500

1000

1500

2000

2500

3000

Γ/M

(a)

6 8 10 12 14 160

500

1000

1500

2000

2500

3000

Λ/M

(b)

Fig. 9. The histograms of (a)Γ/M and (b)Λ/M for 10000 randomly drawn signals withJ = 16 andN = 64. In this setting, the maximumpossible value of bothΓ/M andΛ/M is 16 and we see that both histograms concentrate relativelyclose to this maximum value.
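A minimal sketch of this experiment follows (ours, not the authors' code); it uses the equal-block-size reductions Γ/M = ‖γ‖₁²/‖γ‖₂² and Λ/M = ‖λ‖₁²/‖λ‖₂², where γ collects the block energies and λ the nonzero eigenvalues of XᵀX (equivalently of XXᵀ) for the J × N matrix X of stacked blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
J, N, trials = 16, 64, 10000
gammas, lambdas = np.empty(trials), np.empty(trials)

for t in range(trials):
    X = rng.standard_normal((J, N))                 # rows are the J blocks
    gamma = np.sum(X ** 2, axis=1)                  # gamma_j = ||x_j||_2^2
    lam = np.linalg.svd(X, compute_uv=False) ** 2   # eigenvalues of X X^T
    gammas[t] = gamma.sum() ** 2 / np.sum(gamma ** 2)   # Gamma / M
    lambdas[t] = lam.sum() ** 2 / np.sum(lam ** 2)      # Lambda / M

print(gammas.mean(), lambdas.mean())   # roughly 15.5 and 12.6, as in Fig. 9
```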

V. APPLICATIONS

A. Detection in the Compressed Domain

In some settings, the ability of a measurement system to preserve the norm of an input signal is enough to make performance

guarantees when certain signal processing tasks are performed directly on the measurements. For example, while the canonical

CS results mostly center around reconstructing signals from compressive measurements, there is a growing interest in forgoing

this recovery process and performing the desired signal processing directly in the compressed domain. One specific example

of this involves a detection task where one is interested in detecting the presence of a signal from corrupted compressive

linear measurements. It has been shown previously [22, 28, 34, 35] that the performance of such a compressive-domain detector

depends on the norm of the signal being preserved in the compressed domain. Here we make explicit the role that concentration

of measure results play in this task by guaranteeing consistency in the detector performance, and we demonstrate that we can

determine favorable signal classes for this task when the compressive matrices are block diagonal.

Specifically, consider a binary hypothesis test where we attempt to detect the presence of a known signal by comparing the

two hypotheses:

H0 : y = z
H1 : y = Φx + z,

where Φ is a compressive measurement matrix and z is a vector of i.i.d. zero-mean Gaussian noise with variance σ². We construct a Neyman-Pearson (NP) style detector which maximizes the probability of detection, P_D = P(H1 chosen | H1 is true), given that we can tolerate a certain probability of false alarm, P_F = P(H1 chosen | H0 is true). Starting from the usual

likelihood ratio test, the decision for this problem is made based on whether or not the sufficient statistic t := yᵀΦx surpasses a threshold κ, i.e., we choose H1 when

t > κ,


Fig. 10. Histograms of P_D for 10000 random signals with a compressive Neyman-Pearson detector with constraint P_F ≤ α = 0.1. (a) Signals with uniform energy across blocks (signal class S1), with both the block measurement matrix ΦDBD and the full measurement matrix Φfull. (b) Signals with decaying energy across blocks (signal class S2), with the same matrices ΦDBD and Φfull.

where κ is chosen to meet the constraint P_F ≤ α for a specified α. It can be shown that for such a detector,

P_D(α) = Q(Q⁻¹(α) − ‖Φx‖₂/σ),     (11)

with Q(q) = (1/√(2π)) ∫_q^∞ e^{−u²/2} du denoting the tail probability of the standard Gaussian distribution. Notice from (11) that the performance of the detector depends on the norm of the signal x in the compressed domain, ‖Φx‖₂. In effect, if Φ "loses" signal energy for some signals the detector performance will suffer, and if Φ "amplifies" signal energy for some signals the detector performance will improve.
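Equation (11) is straightforward to evaluate numerically; the following sketch (ours) uses SciPy's standard-normal survival function for Q and its inverse for Q⁻¹:

```python
import numpy as np
from scipy.stats import norm

def detection_probability(alpha, compressed_norm, sigma):
    """P_D(alpha) = Q(Q^{-1}(alpha) - ||Phi x||_2 / sigma), as in (11)."""
    return norm.sf(norm.isf(alpha) - compressed_norm / sigma)

# Unit-norm signal whose norm is perfectly preserved, at SNR = 8 dB.
sigma = np.sqrt(10 ** (-0.8))            # so that 10*log10(1/sigma^2) = 8 dB
print(detection_probability(0.1, 1.0, sigma))   # approximately 0.89
```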

While achieving the best possible P_D for a given P_F is of course desirable, another very important consideration for a system designer is the reliability and consistency of that system. Large fluctuations in performance make it difficult to ascribe meaning to a particular detection result and to take action based on a known probability of being incorrect. Therefore, we see that the concentration of measure of a matrix Φ is an important factor in the consistency of this detection system, since it guarantees that ‖Φx‖₂² ≈ ‖x‖₂² with high probability. Of course, for block diagonal matrices these concentration results depend on the statistics of the signal class. Therefore, one important role of our results in this paper is to make explicit which signal classes can lead to reliable detectors for the detection problem above when Φ is constrained to be block diagonal.

As an example, we consider both a DBD measurement matrix ΦDBD ∈ R^{MJ×NJ} having an equal number of measurements M_j = M per block and a dense measurement matrix Φfull ∈ R^{MJ×NJ}. In the experiment below, M = 4, J = 16, and N = 64. For both matrices the nonzero entries are drawn as zero-mean i.i.d. Gaussian random variables; for Φfull we use variance 1/(MJ) and for ΦDBD we use variance 1/M. After generating one instance of each matrix randomly, we then fix it. We test the detection performance of these two measurement matrices by drawing 10000 unit-norm test signals randomly from two classes: S1 having uniform energy across blocks, and S2 having decaying energy across blocks. The noise variance σ² is chosen such that each test signal x has a constant signal-to-noise ratio of SNR = 10 log₁₀(‖x‖₂²/σ²) = 8 dB. Using α = 0.1, we use (11) to calculate the expected P_D for each random signal and both measurement matrices.
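A sketch of this experiment is given below (ours; the exact decay profile of class S2 is an assumption, since the text does not specify it):

```python
import numpy as np
from scipy.stats import norm
from scipy.linalg import block_diag

rng = np.random.default_rng(0)
M, J, N, alpha = 4, 16, 64, 0.1
sigma = np.sqrt(10 ** (-0.8))                    # SNR = 8 dB for unit-norm x

# One random draw of each measurement matrix, then held fixed.
Phi_full = rng.standard_normal((M * J, N * J)) / np.sqrt(M * J)
Phi_dbd = block_diag(*[rng.standard_normal((M, N)) / np.sqrt(M)
                       for _ in range(J)])

def p_d(Phi, x):
    # P_D from (11), with Q = norm.sf and Q^{-1} = norm.isf.
    return norm.sf(norm.isf(alpha) - np.linalg.norm(Phi @ x) / sigma)

def draw_signal(decay):
    # Unit-norm signal with uniform (decay = 1) or decaying block energies.
    x = rng.standard_normal(N * J) * np.repeat(decay ** np.arange(J), N)
    return x / np.linalg.norm(x)

for decay, label in [(1.0, "S1 (uniform)"), (0.7, "S2 (decaying)")]:
    for name, Phi in [("DBD", Phi_dbd), ("full", Phi_full)]:
        pd = [p_d(Phi, draw_signal(decay)) for _ in range(1000)]
        print(label, name, "mean:", np.mean(pd), "std:", np.std(pd))
```

Consistent with Figure 10, the spread (standard deviation) of P_D should be noticeably larger for the block diagonal matrix on the decaying class than in the other three cases.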

Figure 10 shows the histogram of P_D for the signal/measurement combinations. We see that for uniform energy signals S1, using both ΦDBD and Φfull results in detector performance tightly clustered around P_D = 0.9. Thus for this class of signals, block diagonal and dense measurement matrices have the same consistency in their performance. However, when using a block


measurement matrix ΦDBD, the P_D for signal class S2 is very spread out compared to using a full matrix Φfull, despite having nearly the same average performance. Although some individual signals might enjoy above average performance because the measurement matrix happens to amplify the energy of those particular signals, other signals will have very poor performance because the measurement matrix severely attenuates their energy; moreover, the composition of these "above average" and "below average" signal sets will vary depending on the random draw of the measurement matrix. Thus, this experiment illustrates how the signal statistics can affect performance reliability in compressive detection tasks when the measurement matrices have block diagonal structure.

B. The Restricted Isometry Property in Linear Systems Applications

Interestingly, the concentration results of this paper can also be used as an analytic tool to prove the RIP for certain structured matrices that arise in linear systems applications.

1) RIP for Toeplitz Matrices: Our primary example of such an application involves the sub-sampled compressive and non-compressive Toeplitz measurement matrices that arise in problems such as channel sensing [14–22]. The convolution operation defining linear time-invariant systems is equivalent to multiplication by a Toeplitz matrix, and Toeplitz matrices in turn are closely related to the RBD matrices that are the subject of this paper. Consider the channel sensing problem, where a channel has an unknown impulse response a ∈ R^N of length N and we want to estimate this channel by probing the system with a known length-P signal φ ∈ R^P and examining the full system output ȳ:

ȳ = φ ∗ a = Φlarge a =
    [ φ_1                          ]
    [  ⋮      ⋱                    ]
    [ φ_N    · · ·    φ_1          ]
    [  ⋮               ⋮           ]   a,     (12)
    [ φ_P    · · ·    φ_{P−N+1}    ]
    [          ⋱       ⋮           ]
    [                  φ_P         ]

where Φlarge is an (N + P − 1) × N Toeplitz matrix whose columns consist of shifts of φ.
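As a quick numerical sanity check of (12) (ours), the matrix Φlarge can be assembled column by column and compared against a direct convolution:

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 8, 32
a = rng.standard_normal(N)      # unknown impulse response
phi = rng.standard_normal(P)    # known probe signal

# (N + P - 1) x N Toeplitz matrix whose columns are shifts of phi.
Phi_large = np.zeros((N + P - 1, N))
for n in range(N):
    Phi_large[n:n + P, n] = phi

assert np.allclose(Phi_large @ a, np.convolve(phi, a))   # matches (12)
```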

In some applications, the channel can be assumed to be sparse (e.g., when there are only a few multipath reflections), and resource constraints lead to the desire to estimate a using as few measurements (i.e., values of ȳ) as possible. In these cases, one possible strategy is to use a random probe signal φ and only measure a subset of size J ≤ N + P − 1 of the coefficients in ȳ. In this case, the resulting measurement vector y ∈ R^J can be written as y = Φsmall a, where Φsmall is a J × N matrix whose rows are a subset of those from Φlarge. In particular, we are interested in the case when J ≤ N, making the matrix Φsmall compressive.

While the Toeplitz structure in the convolution matrix shown in (12) does not immediately appear to fall into the block diagonal structure detailed in our concentration results, careful examination reveals that it can be written in such a format. Consider that every output value of ȳ is the result of multiplying the same probe vector φ by a version of the (time-reversed)


impulse response a that has been shifted by an amount depending on the measurement index. The intuition here is that this computation can be written as an RBD matrix with each block equal to the probe φᵀ (i.e., M = 1), multiplied by a signal x where each block x_k is a time-reversed, shifted, and windowed version of the impulse response a. To make things concrete, suppose J ≤ N and let i_1, i_2, · · · , i_J denote the indices of the measured coefficients of ȳ. For simplicity, we assume that the probe length P ≥ N + J − 1 and that i_1, i_2, · · · , i_J ∈ [N, P], but note that we will not assume that these indices are contiguous.7 Using the notation that ~0_L is a row vector of L zeros, we precisely define one block x_k corresponding to the i_k-th measurement index by appropriately shifting and zero padding the time-reversed impulse response,

x_k = [~0_{(i_k − N)}   a_N · · · a_1   ~0_{(P − i_k)}]ᵀ.     (13)

With this definition, we now see that y can be written as the multiplication of a J × PJ block diagonal matrix Φ times a length-PJ signal x with J blocks:

y = Φsmall a = Φ x =
    [ φᵀ              ]   [ x_1 ]
    [      ⋱          ]   [  ⋮  ]  .     (14)
    [           φᵀ    ]   [ x_J ]
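To confirm that (13) and (14) reproduce the subsampled convolution, here is a small numeric check (ours; 0-indexed where the paper's indices are 1-indexed):

```python
import numpy as np

rng = np.random.default_rng(1)
N, P = 6, 24
a = rng.standard_normal(N)
phi = rng.standard_normal(P)
y_full = np.convolve(phi, a)          # full output of length N + P - 1

for ik in [6, 10, 15, 24]:            # measurement indices i_k in [N, P]
    # Block x_k from (13): (i_k - N) zeros, a reversed, then (P - i_k) zeros.
    xk = np.concatenate([np.zeros(ik - N), a[::-1], np.zeros(P - ik)])
    # One row of (14): the probe phi^T applied to x_k recovers y_{i_k}.
    assert np.isclose(phi @ xk, y_full[ik - 1])
```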

In the upcoming lemma, we will use the earlier results concerning the concentration of measure for RBD matrices to develop concentration results relating ‖y‖₂² to ‖a‖₂². To simplify the statement of this lemma, it will be beneficial to use operator matrices to define the shifting and windowing operations creating the signal blocks. Specifically, let e_i denote the i-th canonical basis vector of R^{N+P−1} and define the (N + P − 1) × J matrix R = [e_{i_1} e_{i_2} · · · e_{i_J}] that removes measurements from the convolution operation to isolate just the selected measurements, such that y = Rᵀȳ. Furthermore, define the windowing matrix W = [e_N . . . e_{N+P−1}]ᵀ to be the P × (N + P − 1) matrix that keeps only the last P coefficients of a length-(N + P − 1) vector. Finally, we note that we can now write the matrix of concatenated signal blocks X ∈ R^{J×P} defined in (7) as Xᵀ = WAR, where A is the (N + P − 1) × (N + P − 1) circulant matrix given by

A =
    [ a_N       0       · · ·   0      a_1     · · ·   a_{N−1} ]
    [ a_{N−1}   a_N      ⋱              ⋱       ⋱       ⋮      ]
    [  ⋮         ⋱       ⋱      ⋱               ⋱      a_1     ]
    [ a_1                 ⋱      ⋱      ⋱               0      ]
    [ 0          ⋱               ⋱      ⋱       ⋱       ⋮      ]
    [  ⋮         ⋱        ⋱              ⋱      ⋱       0      ]
    [ 0        · · ·      0     a_1    a_2    · · ·    a_N     ]

i.e., each row of A is the cyclic right-shift of the row above it.

The following lemma establishes the concentration of measure results for the subsampled output of the convolution operation, and follows directly from applying Theorem III.2 to the current problem formulation.

7 The assumptions on the probe length and index locations ensure that each measurement in y depends on all entries of a. We make these assumptions merely to simplify the subsequent computation of the value around which ‖Φsmall a‖₂² concentrates; removing them would change only the point of concentration.


Lemma V.1. Let a ∈ R^N be an arbitrary vector, φ ∈ R^P be a random vector with i.i.d. Gaussian entries having variance σ² = 1/J with J ≤ N, and ȳ be the convolution of a and φ as defined in (12). Suppose that P ≥ J + N − 1 and let i_1, i_2, · · · , i_J ∈ [N, P] denote the indices of the available measurements from this convolution, such that y_k = ȳ_{i_k}. Also, define X as a concatenation of signal blocks as in (7), with the individual signal blocks x_k (depending on the measurement indices i_k) defined as in (13). Then E‖y‖₂² = ‖a‖₂², and

P(|‖y‖₂² − ‖a‖₂²| > ǫ‖a‖₂²) ≤
    2 exp(−ǫ²J / (256 ‖λ‖∞/‖a‖₂²)),   0 ≤ ǫ ≤ 16‖λ‖₂²/(‖λ‖∞‖λ‖₁),
    2 exp(−ǫJ / (16 ‖λ‖∞/‖a‖₂²)),     ǫ ≥ 16‖λ‖₂²/(‖λ‖∞‖λ‖₁),     (15)

where λ_i are the eigenvalues of XXᵀ and λ = [λ_1 λ_2 · · · λ_J]ᵀ.

Proof: See Appendix G.

We first note that (15) relates the probability of concentration for a vector a ∈ R^N to the quantity ‖λ‖∞ (which is defined in terms of a). If the vector a is sparse, it is possible to derive a useful upper bound for ‖λ‖∞. In particular, suppose that a has no more than S nonzero components. Then letting ‖D‖ denote the standard operator norm of a matrix D (i.e., the largest singular value of D), we have

‖λ‖∞ = ‖XXᵀ‖ = ‖X‖² ≤ ‖R‖²‖A‖²‖W‖² = ‖A‖² = ‖Aᵀ‖²,

since the largest singular values of both W and R are 1. Because Aᵀ is a circulant matrix, its eigenvalues are equal to the un-normalized discrete Fourier transform (DFT) of its first row ā = [a_N . . . a_1 0 . . . 0]. Denoting the un-normalized DFT matrix by F ∈ C^{(N+P−1)×(N+P−1)}, we see that ‖Aᵀ‖² = ‖Fā‖∞². It is shown in [22] that for S-sparse vectors a, we have ‖Fā‖∞² ≤ S‖a‖₂². Thus, restricting our consideration to small values of ǫ (so that we are in the first case of (15)), a concentration rate that holds for any S-sparse vector a is:

P(|‖y‖₂² − ‖a‖₂²| > ǫ‖a‖₂²) ≤ 2 exp(−ǫ²J/(256S)).
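Both inequalities in this chain are easy to check numerically; the sketch below (ours) compares ‖λ‖∞ against the two upper bounds ‖Fā‖∞² and S‖a‖₂² for a randomly drawn sparse impulse response:

```python
import numpy as np

rng = np.random.default_rng(2)
N, P, S = 32, 80, 4
J = N                                        # measurement indices in [N, P]

a = np.zeros(N)
a[rng.choice(N, size=S, replace=False)] = rng.standard_normal(S)  # S-sparse

idx = rng.choice(np.arange(N, P + 1), size=J, replace=False)
X = np.stack([np.concatenate([np.zeros(i - N), a[::-1], np.zeros(P - i)])
              for i in idx])                 # J x P matrix of blocks, as in (7)

lam_inf = np.linalg.norm(X, 2) ** 2          # ||lambda||_inf = ||X||^2
abar = np.concatenate([a[::-1], np.zeros(P - 1)])     # first row of A^T
dft_bound = np.max(np.abs(np.fft.fft(abar))) ** 2     # ||F abar||_inf^2
print(lam_inf <= dft_bound + 1e-9, dft_bound <= S * np.sum(a ** 2) + 1e-9)
```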

Lemma V.1 basically states that arbitrary vectors a can have favorable concentration properties when multiplied by compressive Toeplitz matrices. The analysis and bound above establish that when a is a sparse vector, the concentration exponent can be stated simply in terms of the sparsity S and the number of blocks J, making the concentration bounds suitable as an analysis tool for establishing results in the CS literature. In particular, one useful application of these results is to prove the RIP for compressive Toeplitz matrices. Using standard covering arguments and following the same steps as in [10], we arrive at the following theorem establishing the RIP for the compressive Toeplitz matrices relevant to the channel sensing problem.

Theorem V.1 (RIP). Suppose Φsmall ∈ R^{J×N} is a compressive Toeplitz matrix as defined in (14) (with either contiguous or non-contiguous measurement indices). Then there exist constants c_1, c_2 such that if J ≥ c_1 S² log(N/S), Φsmall will satisfy the RIP of order S with probability at least 1 − 2 exp(−c_2 J).

The theorem above establishes the RIP for compressive Toeplitz matrices with a number of measurements J proportional to S² log N. We make a special note here that there are no restrictions in the above arguments on the measurement locations


being contiguous, so the above theorem holds equally well for non-contiguous measurement locations. This result is thus equivalent to the state-of-the-art RIP results for contiguous compressive Toeplitz matrices [17, 18, 20, 22] and non-contiguous compressive Toeplitz matrices [21] in the literature. Among these, we note that [22] also establishes the RIP by first deriving a concentration bound, but via different arguments. However, in addition to the very different technical tools we have employed, the novelty in this approach is that it provides a unified framework for proving the RIP for compressive Toeplitz matrices with both contiguous and non-contiguous measurements.

Using the same general approach, we can make a similar statement about the RIP of non-compressive Toeplitz matrices. In such cases, we are not concerned with the number of measurements J but rather with the length of the probe signal P necessary to ensure the RIP with high probability. With the methods presented here, one can guarantee the RIP with high probability by taking P proportional to S² log N. This result is comparable to the results in [18, 22] but less favorable than the state-of-the-art result of P ∼ S log⁵ N implied by [19].

2) RIP for Observability Matrices: As evidence of one other potential application, a recent paper in the control systems literature [36] reveals that our results for DBD and RBD matrices also have immediate extensions to the observability matrices that arise in the analysis of linear dynamical systems. Suppose that x_j denotes the state of a system at time j, a matrix A describes the evolution of the system such that x_j = Ax_{j−1}, and a matrix C describes the observation of the system such that y_j = Cx_j. A standard problem in observability theory is to ask whether some initial state x_1 can be identified based on a collection of successive observations [y_1ᵀ y_2ᵀ · · · y_Jᵀ]ᵀ; the answer to this question is affirmative if the corresponding

observability matrix

O =
    [ C          ]
    [ CA         ]
    [  ⋮         ]
    [ CA^{J−1}   ]

is full rank. A recent paper [36], however, details how O in fact has a straightforward relationship to a block diagonal matrix; how in certain settings (such as when A is unitary and C is random) O can satisfy the RIP; and how this can enable the recovery of sparse initial states x_1 from far fewer measurements than conventional rank-based observability theory suggests. We refer the reader to [36] for a complete discussion.
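To give a flavor of that connection (our sketch, not the construction from [36]): when A is orthogonal and C is an appropriately scaled random matrix, the observability matrix tends to preserve the norm of a state vector, which is precisely the concentration behavior studied in this paper:

```python
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(3)
n, m, J = 64, 4, 16                       # state dim, outputs per step, steps
A = ortho_group.rvs(n, random_state=rng)  # unitary (orthogonal) dynamics
C = rng.standard_normal((m, n)) / np.sqrt(m * J)   # random observations

# Stack C, CA, ..., CA^{J-1} into the observability matrix O.
blocks, Ak = [], np.eye(n)
for _ in range(J):
    blocks.append(C @ Ak)
    Ak = A @ Ak
O = np.vstack(blocks)

x1 = rng.standard_normal(n)               # an initial state
print(np.linalg.norm(O @ x1) ** 2 / np.linalg.norm(x1) ** 2)  # near 1
```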

VI. CONCLUSION

Our main technical contributions in this paper consist of novel concentration of measure bounds for random block diagonal

matrices. Our results cover two cases: DBD matrices, in which each block along the main diagonal is distinct and populated

with i.i.d. subgaussian random variables, and RBD matrices, in which one block of i.i.d. Gaussian random variables is repeated

along the diagonal. For each type of matrix, the likelihood of norm preservation depends on certain characteristics of the

signal being measured, with the relevant characteristics essentially relating to how much intrinsic diversity the signal has

to compensate for the constrained structure of the measurement matrix. For DBD matrices, suitable diversity arises when

the signal energy is distributed across its component blocks proportionally to the number of measurements allotted to each


block. For RBD matrices, suitable diversity requires not only that the signal energy be evenly distributed across its component

blocks, but also that these blocks be mutually orthogonal. Remarkably, our main theorems show that for signals exhibiting

these desirable properties, block diagonal matrices can have concentration exponents that scale at the same rate as a fully

dense matrix, despite having a fraction of the complexity (i.e., block diagonal matrices require generating many fewer random numbers). Our simulation results confirm that the main diversity measures arising in our theorems (Γ and Λ) do appear to capture the most prominent effects governing the ability of a block diagonal matrix to preserve the norm of a signal.

In addition to the main theoretical results above, this paper addresses the important question of what signal classes can

have favorable diversity characteristics and thus be highly amenable to measurement with block diagonal matrices. As it turns

out, many signal classes can have such favorable characteristics, including frequency sparse signals, signals with multiple

related measurements (e.g., as arise in delay networks or multi-view imaging), difference vectors between highly correlated

signals (e.g., between successive video frames), and random signals. Moreover, the compressive detection and compressive channel sensing problems we discuss in depth illustrate the potential for applying our main results in a variety of interesting settings.

The sum total of these results and investigations leads us to conclude that block diagonal matrices can often have sufficiently favorable concentration properties. Thus, subject to the caveat that one must be mindful of the diversity characteristics of the signals to be measured, block diagonal matrices can be exploited in a number of scenarios as a sensing architecture or as an analytical tool. These results are particularly relevant because of the broad scope that block diagonal matrices can have in describing constrained systems, including communication constraints in distributed systems and convolutional systems such as wireless channels.

There are many natural questions that arise from these results and are suitable topics for future research. For example, it would be natural to consider whether the concentration results for Gaussian RBD matrices could be extended to more general subgaussian RBD matrices (to match the distribution used in our DBD analysis). Also, as more applications are identified in the future, it will be important to examine the diversity characteristics of a broader variety of signal classes to determine their favorability for measurement via block diagonal matrices. Finally, one could derive embedding results akin to (4) when certain signal families Q are measured with block diagonal matrices. The primary challenge in extending the covering arguments from Section II-C would be in accounting for the fact that the difference vectors u − v could potentially have highly variable concentration exponents.

APPENDIX A

PROOF OF THEOREM III.1

Proof: Let y = Φx. For each matrix Φ_j, we let [Φ_j]_{i,n} denote the n-th entry of the i-th row of Φ_j. Further, we let y_j(i) denote the i-th component of measurement vector y_j, and we let x_j(n) denote the n-th entry of signal block x_j.

We begin by characterizing the point of concentration. One can write y_j(i) = Σ_{n=1}^{N} [Φ_j]_{i,n} x_j(n), and so it follows that E y_j²(i) = E(Σ_{n=1}^{N} [Φ_j]_{i,n} x_j(n))². Since the [Φ_j]_{i,n} are zero-mean and independent, all cross product terms are equal to zero, and therefore we can write E y_j²(i) = E Σ_{n=1}^{N} [Φ_j]²_{i,n} x_j²(n) = σ_j²‖x_j‖₂² = (1/M_j)‖x_j‖₂². Combining all of the measurements,


we then have E‖y‖₂² = Σ_{j=1}^{J} Σ_{i=1}^{M_j} E y_j²(i) = Σ_{j=1}^{J} Σ_{i=1}^{M_j} ‖x_j‖₂²/M_j = Σ_{j=1}^{J} ‖x_j‖₂² = ‖x‖₂².

Now, we are interested in the probability that |‖y‖₂² − ‖x‖₂²| > ǫ‖x‖₂². Since E‖y‖₂² = ‖x‖₂², this is equivalent to the condition that |‖y‖₂² − E‖y‖₂²| > ǫ E‖y‖₂². For a given j ∈ {1, 2, . . . , J} and i ∈ {1, 2, . . . , M_j}, all {[Φ_j]_{i,n}}_{n=1}^{N} are i.i.d. subgaussian random variables with Gaussian standards equal to τ(φ_j). From above, we know that y_j(i) can be expressed as a linear combination of these random variables, with weights given by the entries of x_j. Thus, from Lemma II.1 we conclude that each y_j(i) is a subgaussian random variable with Gaussian standard τ(y_j(i)) ≤ τ(φ_j)‖x_j‖₂. Lemma II.2 then implies that

P(y_j²(i) > t) ≤ 2 exp(−t/(2τ²(y_j(i)))) ≤ 2 exp(−t/(2τ²(φ_j)‖x_j‖₂²)) ≤ 2 exp(−M_j t/(2c‖x_j‖₂²))     (16)

for any t ≥ 0. We recall that ‖y‖₂² = Σ_{j=1}^{J} Σ_{i=1}^{M_j} y_j(i)² and note that all entries of this summation will be independent. Therefore, to obtain a concentration bound for ‖y‖₂², we can invoke Theorem II.1 for the random variables y_j²(i) across all j, i. From (16), we see that the assumptions of the theorem are satisfied with a = 2 and α_{j,i}⁻¹ = 2c‖x_j‖₂²/M_j. Note that α_{j,i}⁻¹ is constant for a fixed j. Hence, for d ≥ max_{j,i} α_{j,i}⁻¹ = 2c max_j ‖x_j‖₂²/M_j and b ≥ a Σ_{j=1}^{J} Σ_{i=1}^{M_j} α_{j,i}⁻² = 8c² Σ_{j=1}^{J} ‖x_j‖₂⁴/M_j,

P(|‖y‖₂² − ‖x‖₂²| > ǫ‖x‖₂²) ≤
    2 exp(−ǫ²‖x‖₂⁴/(32b)),   0 ≤ ǫ ≤ 4b/(d‖x‖₂²),
    2 exp(−ǫ‖x‖₂²/(8d)),     ǫ ≥ 4b/(d‖x‖₂²).     (17)

Note that ‖x‖₂² = ‖γ‖₁ and ‖x‖₂⁴ = ‖γ‖₁². Substituting d = 2c max_j ‖x_j‖₂²/M_j = 2c‖M⁻¹γ‖∞ and b = 8c² Σ_{j=1}^{J} ‖x_j‖₂⁴/M_j = 8c²‖M⁻¹/²γ‖₂² into (17) completes the proof.

APPENDIX B

PROOF OF LEMMA III.1

Proof: To prove the lower bound, observe that since ‖γ‖₁² ≥ ‖γ‖₂², we can write

Γ = ‖γ‖₁²/‖M⁻¹/²γ‖₂² ≥ ‖γ‖₂²/‖M⁻¹/²γ‖₂² ≥ 1/‖M⁻¹/²‖₂² = 1/(max_j 1/M_j) = min_j M_j.

To prove the upper bound, let us define γ′ := M⁻¹/²γ and m = diag(M¹/²), and observe that

Γ = ‖γ‖₁²/‖M⁻¹/²γ‖₂² = ‖M¹/²γ′‖₁²/‖γ′‖₂² = 〈m, γ′〉²/‖γ′‖₂² ≤ ‖m‖₂²‖γ′‖₂²/‖γ′‖₂² = Σ_{j=1}^{J} M_j,

where the inequality is Cauchy-Schwarz.

APPENDIX C

PROOF OF LEMMA III.2

Proof: We prove the lower bound first. For simplicity and without loss of generality, we assume that ‖M⁻¹/²γ‖∞ = max_j γ_j/√M_j = γ_1/√M_1. Ignoring the constant factor of 16c, we can then rewrite the numerator as ‖M⁻¹/²γ‖₂² = γ_1²/M_1 + Σ_{j=2}^{J} γ_j²/M_j = ‖M⁻¹/²γ‖∞² + ‖M̄⁻¹/²γ̄‖₂², where γ̄ = [γ_2, . . . , γ_J]ᵀ ∈ R^{J−1} and M̄ is a (J − 1) × (J − 1) diagonal matrix containing the values M_2, M_3, . . . , M_J along the main diagonal. For a lower bound on the numerator, we then have ‖M⁻¹/²γ‖₂² ≥ ‖M⁻¹/²γ‖∞² + (1/(J−1))‖M̄⁻¹/²γ̄‖₁², and so

‖M⁻¹/²γ‖₂² / (‖M⁻¹γ‖∞‖γ‖₁) ≥ (‖M⁻¹/²γ‖∞² + (1/(J−1))‖M̄⁻¹/²γ̄‖₁²) / (‖M⁻¹γ‖∞‖γ‖₁)
    = ‖M⁻¹/²γ‖∞² / (‖M⁻¹γ‖∞‖γ‖₁) + (1/(J−1)) · ‖M̄⁻¹/²γ̄‖₁² / (‖M⁻¹γ‖∞‖γ‖₁).     (18)

Focusing on the second term in (18),

(1/(J−1)) · ‖M̄⁻¹/²γ̄‖₁² / (‖M⁻¹γ‖∞‖γ‖₁)
    = (1/(J−1)) · (‖M⁻¹/²γ‖₁ − ‖M⁻¹/²γ‖∞)² / (‖M⁻¹γ‖∞‖γ‖₁)
    = (1/(J−1)) · (‖M⁻¹/²γ‖₁² − 2‖M⁻¹/²γ‖₁‖M⁻¹/²γ‖∞ + ‖M⁻¹/²γ‖∞²) / (‖M⁻¹γ‖∞‖γ‖₁).

Combining this with the first term in (18), we obtain

‖M⁻¹/²γ‖₂² / (‖M⁻¹γ‖∞‖γ‖₁) ≥ (1/(J−1)) · (‖M⁻¹/²γ‖₁² + J‖M⁻¹/²γ‖∞² − 2‖M⁻¹/²γ‖₁‖M⁻¹/²γ‖∞) / (‖M⁻¹γ‖∞‖γ‖₁).     (19)

To combine the first two terms in the numerator of (19), we use the fact that for any x, y ≥ 0, x + y ≥ 2√(xy), and we conclude that

‖M⁻¹/²γ‖₂² / (‖M⁻¹γ‖∞‖γ‖₁) ≥ (1/(J−1)) · (2√J‖M⁻¹/²γ‖₁‖M⁻¹/²γ‖∞ − 2‖M⁻¹/²γ‖₁‖M⁻¹/²γ‖∞) / (‖M⁻¹γ‖∞‖γ‖₁)
    = (2(√J − 1)/(J − 1)) · ‖M⁻¹/²γ‖₁‖M⁻¹/²γ‖∞ / (‖M⁻¹γ‖∞‖γ‖₁).     (20)

Now, we define γ′ := M⁻¹/²γ and observe that

‖M⁻¹/²γ‖₁‖M⁻¹/²γ‖∞ / (‖M⁻¹γ‖∞‖γ‖₁) = (‖γ′‖∞/‖M⁻¹/²γ′‖∞) · (‖γ′‖₁/‖M¹/²γ′‖₁) ≥ (1/‖M⁻¹/²‖∞) · (1/‖M¹/²‖₁) = min_j √M_j · (1/max_j √M_j).     (21)

Combining (20) and (21) completes our proof of the lower bound. The upper bound follows simply because

‖M⁻¹/²γ‖₂² = Σ_{j=1}^{J} γ_j²/M_j ≤ (max_j γ_j/M_j) · Σ_{j=1}^{J} γ_j = ‖M⁻¹γ‖∞‖γ‖₁,

keeping in mind that γ_j ≥ 0 and M_j > 0.

APPENDIX D

PROOF OF THEOREM III.2

In order to prove Theorem III.2, we will require the following two lemmas.

Lemma D.1. Suppose x ∈ R^{NJ} and Φ̂ is an M × N matrix where Φ̂ᵀ = [φ_1 φ_2 · · · φ_M] with each φ_i ∈ R^N. Let Φ be an MJ × NJ RBD matrix as defined in (1) with all Φ_j = Φ̂. If y = Φx, then ‖y‖₂² = Σ_{i=1}^{M} φ_iᵀAφ_i, where A = XᵀX with X defined in (7).

Proof of Lemma D.1: We first rewrite the block measurement equations as y′ = X′φ′, where y′ consists of a rearrangement of the elements of y, i.e., y′ := [y_1(1) · · · y_J(1) y_1(2) · · · y_J(M)]ᵀ ∈ R^{MJ}, φ′ = [φ_1ᵀ φ_2ᵀ · · · φ_Mᵀ]ᵀ ∈ R^{MN}, and X′ ∈ R^{MJ×MN} is a block diagonal measurement matrix containing replicas of X along the main diagonal and zeros elsewhere. It follows that ‖y‖₂² = ‖y′‖₂² = φ′ᵀX′ᵀX′φ′ = Σ_{i=1}^{M} φ_iᵀAφ_i.
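The identity in Lemma D.1 is easy to verify numerically; a small check (ours) follows:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(4)
M, N, J = 3, 8, 5
Phi_hat = rng.standard_normal((M, N))       # one block, repeated J times (RBD)
Phi = block_diag(*[Phi_hat] * J)
x = rng.standard_normal(N * J)

X = x.reshape(J, N)                         # rows are the blocks x_j, as in (7)
A = X.T @ X
lhs = np.linalg.norm(Phi @ x) ** 2
rhs = sum(phi_i @ A @ phi_i for phi_i in Phi_hat)   # rows of Phi_hat are phi_i^T
assert np.isclose(lhs, rhs)                 # Lemma D.1
```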

Lemma D.2. Suppose z ∈ R^N is a random vector with i.i.d. Gaussian entries each having zero mean and variance σ². For any symmetric N × N matrix A with eigenvalues {λ_i}_{i=1}^{N}, there exists a collection of independent, zero-mean Gaussian random variables {w_i}_{i=1}^{N} with variance σ² such that zᵀAz = Σ_{i=1}^{N} λ_i w_i².

Proof of Lemma D.2: Because A is symmetric, it has an eigen-decomposition A = VᵀDV, where D is a diagonal matrix of its eigenvalues {λ_i}_{i=1}^{N} and V is an orthogonal matrix of eigenvectors. Then we have zᵀAz = (Vz)ᵀD(Vz) = Σ_{i=1}^{N} λ_i w_i², where w = Vz and w = [w_1, w_2, · · · , w_N]ᵀ. Since V is an orthogonal matrix, {w_i}_{i=1}^{N} are i.i.d. Gaussian random variables with zero mean and variance σ².
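Again, a quick numerical confirmation (ours; note that numpy's eigh returns A = V D Vᵀ, so its V corresponds to Vᵀ in the lemma's notation):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                   # an arbitrary symmetric matrix
z = rng.standard_normal(n)          # i.i.d. Gaussian entries, variance 1

lam, V = np.linalg.eigh(A)          # A = V @ diag(lam) @ V.T
w = V.T @ z                         # still i.i.d. Gaussian (V.T is orthogonal)
assert np.isclose(z @ A @ z, np.sum(lam * w ** 2))   # Lemma D.2
```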

Proof of Theorem III.2: Let y = Φx. We first calculate E‖y‖₂² to determine the point of concentration. Applying Lemma D.1 to y and Lemma D.2 with z = φ_i for each i = 1, 2, . . . , M, we have ‖y‖₂² = Σ_{i=1}^{M} φ_iᵀAφ_i = Σ_{i=1}^{M} Σ_{j=1}^{N} λ_j w_{i,j}², where each w_{i,j} is an independent Gaussian random variable with zero mean and variance σ² = 1/M. After switching the order of the summations and observing that Tr(XᵀX) = Tr(XXᵀ), where Tr(·) is the trace operator, we have E‖y‖₂² = Σ_{j=1}^{N} λ_j Σ_{i=1}^{M} E w_{i,j}² = Σ_{j=1}^{N} λ_j = Tr(XXᵀ) = ‖x‖₂².

Having established the point of concentration for ‖y‖₂², let us now compute the probability that |‖y‖₂² − ‖x‖₂²| > ǫ‖x‖₂². Since E‖y‖₂² = ‖x‖₂², this is equivalent to the condition that |‖y‖₂² − E‖y‖₂²| > ǫ E‖y‖₂². Let w̄_{i,j} = √λ_j w_{i,j}. Then w̄_{i,j} is a Gaussian random variable with variance λ_jσ², and we have ‖y‖₂² = Σ_{i=1}^{M} Σ_{j=1}^{N} w̄_{i,j}². By Lemma II.2 we have

P(w̄_{i,j}² > t) ≤ 2 exp(−t/(2λ_jσ²)), ∀t ≥ 0.     (22)

To obtain a concentration bound for ‖y‖₂², we invoke Theorem II.1 for the random variables w̄_{i,j}² across all i, j. From (22), we see that the assumptions of the theorem are satisfied with a = 2 and α_{i,j}⁻¹ = 2λ_jσ² = (2/M)λ_j. Note that α_{i,j}⁻¹ is constant for a fixed j. Hence, for d ≥ max_{i,j} α_{i,j}⁻¹ = (2/M) max_j λ_j and b ≥ a Σ_{i,j} α_{i,j}⁻² = (8/M) Σ_j λ_j², the concentration of measure inequality as shown in (17) holds. Note that ‖x‖₂² = Tr(XᵀX) = ‖λ‖₁ and ‖x‖₂⁴ = ‖λ‖₁² since the eigenvalues {λ_j}_{j=1}^{N} are non-negative. Substituting d = (2/M) max_j λ_j = (2/M)‖λ‖∞ and b = (8/M) Σ_j λ_j² = (8/M)‖λ‖₂² into (17) completes the proof.

APPENDIX E

PROOF OF THEOREM IV.1

Our result follows from an application of the following theorem.

Theorem E.1. [33, Theorem 3.1] Let x ∈ C^{N′} and β > 1. Suppose N′ > 512 and choose N_T and N_Ω such that:

N_T + N_Ω ≤ 0.5583N′ / (q√((β + 1) log N′))   and   N_T + N_Ω ≤ √(2/3) N′ (1/q − (log N′)²/N′) / √((β + 1) log N′).     (23)

Fix a subset T of the time domain with |T| = N_T. Let Ω be a subset of size N_Ω of the frequency domain generated uniformly at random. Then with probability at least 1 − O((log N′)^{1/2} N′^{−β}), every signal x supported on Ω in the frequency domain has most of its energy in the time domain outside of T. In particular, ‖x_T‖₂² ≤ ‖x‖₂²/q, where x_T denotes the restriction of x to the support T.


Proof of Theorem IV.1: First, observe that ‖γ‖₁² = ‖x‖₂⁴ and ‖γ‖₂² = Σ_{k=1}^{J} ‖x_k‖₂⁴. Next, apply Theorem E.1 with N_Ω = S and N_T = N = N′/J, being careful to select a value for q such that (23) is satisfied. In particular, we require

1/q ≥ (N + S)√((β + 1) log N′) / (0.5583N′)   and   1/q ≥ ((N + S)/√(2/3))√((β + 1) log N′)/N′ + (log N′)²/N′.

This is satisfied if we choose

q ≤ min{ 0.5583N′ / ((N + S)√((β + 1) log N′)),  N′ / (((N + S)/√(2/3))√((β + 1) log N′) + (log N′)²) }.     (24)

Choosing any q satisfying (24), we have that with failure probability at most O((log N′)^{1/2}(N′)^{−β}), ‖x_k‖₂² ≤ ‖x‖₂²/q for each k = 1, · · · , J, implying that each block individually is favorable. Taking a union bound over all k to cover each block, we have that with total failure probability at most O(J(log N′)^{1/2}(N′)^{−β}), ‖γ‖₂² = Σ_{k=1}^{J} ‖x_k‖₂⁴ ≤ J‖x‖₂⁴/q². Thus with this same failure probability, Γ/M = ‖γ‖₁²/‖γ‖₂² ≥ q²/J. Combining with (24) and using the fact that S < N, we thus have:

Γ/M ≥ min{ 0.5583² N² J / ((N + S)²(β + 1) log N′),  N² J / (((N + S)/√(2/3))√((β + 1) log N′) + (log N′)²)² }
    ≥ min{ (0.5583²/2²) J / ((β + 1) log N′),  J / ((2/√(2/3))√((β + 1) log N′) + (log N′)²/N)² }.

APPENDIX F

PROOF OF LEMMA IV.1

Proof: Let X be the J × N matrix as defined in (7). Without loss of generality, we suppose the nonzero eigenvalues {λ_i}_{i=1}^{min(J,N)} of XᵀX are sorted in order of decreasing magnitude, and we let λ_max := λ_1 and λ_min := λ_{min(J,N)}. We can lower bound Λ in terms of these extremal eigenvalues by writing

Λ = M‖λ‖₁²/‖λ‖₂² = M(Σ_i λ_i² + Σ_i Σ_{j≠i} λ_iλ_j) / (Σ_i λ_i²) ≥ M + M(λ_min/λ_max) · (Σ_i Σ_{j≠i} λ_i) / (Σ_i λ_i) = M + M(λ_min/λ_max)(J − 1).     (25)

For some 0 ≤ ǫ ≤ 1, let us define the following events:

A = { Nσ²(1 − ǫ)² ≤ ‖Xᵀz‖₂²/‖z‖₂² ≤ Nσ²(1 + ǫ)², ∀z ∈ R^J },
B = { λ_max ≤ Nσ²(1 + ǫ)² } ∩ { λ_min ≥ Nσ²(1 − ǫ)² },
C = { λ_min/λ_max ≥ ((1 − ǫ)/(1 + ǫ))² },
D = { Λ ≥ M + M((1 − ǫ)/(1 + ǫ))²(J − 1) }.

These events satisfy A = B ⊆ C ⊆ D, where the last relation follows from (25). It follows that P(Dᶜ) ≤ P(Aᶜ), where Aᶜ represents the complement of event A. Because Xᵀ is populated with i.i.d. subgaussian random variables, it follows as a corollary of Theorem III.1 (by setting M ← N and J ← 1 in the context of that theorem) that for any z ∈ R^J and ǫ ∈ (0, 1), P(|‖Xᵀz‖₂² − Nσ²‖z‖₂²| > ǫNσ²‖z‖₂²) ≤ 2e^{−Nǫ²/(256c²)}. Thus, for an upper bound on P(Aᶜ), we can follow the straightforward arguments in [10, Lemma 5.1] and conclude that P(Aᶜ) ≤ 2(12/ǫ)^J e^{−Nǫ²/(4·256c²)}. For any constant c_1 > 0, we note that P(Dᶜ) ≤ 2(12/ǫ)^J e^{−Nǫ²/(4·256c²)} = 2e^{−Nǫ²/(4·256c²) + J log(12/ǫ)} ≤ 2e^{−c_1 N} holds whenever J ≤ N(ǫ²/(4·256c²) − c_1) log⁻¹(12/ǫ). Finally, the fact that Γ ≥ Λ follows from (10).

APPENDIX G

PROOF OF LEMMA V.1

Proof: To apply Theorem III.2 directly, we first suppose the entries of the random probe φ have variance σ² = 1 (since M = 1 here). In this case, we have E‖y‖₂² = ‖x‖₂² = J‖a‖₂², and Theorem III.2 implies that

P(|‖y‖₂² − J‖a‖₂²| > ǫJ‖a‖₂²) ≤
    2 exp(−ǫ²‖λ‖₁²/(256‖λ‖₂²)),   0 ≤ ǫ ≤ 16‖λ‖₂²/(‖λ‖∞‖λ‖₁),
    2 exp(−ǫ‖λ‖₁/(16‖λ‖∞)),       ǫ ≥ 16‖λ‖₂²/(‖λ‖∞‖λ‖₁).     (26)

Since J ≤ N, the nonzero eigenvalues of XXᵀ equal those of XᵀX, and so we can equivalently define λ_j as the eigenvalues of XXᵀ. Note that XXᵀ is a symmetric matrix, and so all of its eigenvalues λ_i are non-negative. Also, all of the diagonal entries of XXᵀ are equal to ‖a‖₂². Consequently, it follows that ‖λ‖₁ = Σ_{i=1}^{J} |λ_i| = Tr(XXᵀ) = J‖a‖₂². Using the norm inequality ‖λ‖₂² ≤ ‖λ‖₁‖λ‖∞, we have ‖λ‖₁²/‖λ‖₂² ≥ J/(‖λ‖∞/‖a‖₂²), and it is easy to see that ‖λ‖₁/‖λ‖∞ = J/(‖λ‖∞/‖a‖₂²). By plugging these inequalities into (26), we obtain the probabilities specified in (15). Finally, we note that

|‖y‖₂² − J‖a‖₂²| > ǫJ‖a‖₂²  ⟺  |‖(1/√J)Φsmall a‖₂² − ‖a‖₂²| > ǫ‖a‖₂²,

and so if we suppose the entries of the random probe φ actually have variance σ² = 1/J, we complete our derivation of (15).

REFERENCES

[1] M. B. Wakin, J. Y. Park, H. L. Yap, and C. J. Rozell, "Concentration of measure for block diagonal measurement matrices," in Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), March 2010.
[2] C. J. Rozell, H. L. Yap, J. Y. Park, and M. B. Wakin, "Concentration of measure for block diagonal matrices with repeated blocks," in Proc. Conf. Information Sciences and Systems (CISS), February 2010.
[3] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
[4] E. J. Candes, "Compressive sampling," in Proc. Int. Congr. Math., Madrid, Spain, August 2006, vol. 3, pp. 1433–1452.
[5] M. Ledoux, "The concentration of measure phenomenon," Mathematical Surveys and Monographs, AMS, 2001.
[6] W. B. Johnson and J. Lindenstrauss, "Extensions of Lipschitz mappings into a Hilbert space," in Proc. Conf. Modern Analysis and Probability, 1984.
[7] D. Achlioptas, "Database-friendly random projections," in Proc. 20th ACM SIGMOD-SIGACT-SIGART Symp. Principles of Database Systems (PODS), New York, NY, USA, 2001, pp. 274–281, ACM.
[8] S. Dasgupta and A. Gupta, "An elementary proof of the Johnson-Lindenstrauss lemma," Random Struct. Algor., vol. 22, no. 1, pp. 60–65, 2002.
[9] E. J. Candes and T. Tao, "Decoding by linear programming," IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4203–4215, Dec. 2005.
[10] R. G. Baraniuk, M. A. Davenport, R. A. DeVore, and M. B. Wakin, "A simple proof of the restricted isometry property for random matrices," Const. Approx., vol. 28, no. 3, pp. 253–263, Dec. 2008.
[11] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," Const. Approx., vol. 28, no. 3, pp. 277–289, 2008.
[12] M. S. Asif, D. Reddy, P. T. Boufounos, and A. Veeraraghavan, "Streaming compressive sensing for high-speed periodic videos," in Proc. Int. Conf. Image Processing (ICIP), 2009.
[13] P. Boufounos and M. S. Asif, "Compressive sampling for streaming signals with sparse frequency content," in Proc. Conf. Information Sciences and Systems (CISS), 2010.
[14] J. A. Tropp, M. B. Wakin, M. F. Duarte, D. Baron, and R. G. Baraniuk, "Random filters for compressive sampling and reconstruction," in Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2006.
[15] J. Romberg, "Compressive sensing by random convolution," SIAM J. Imaging Sciences, vol. 2, no. 4, pp. 1098–1128, 2009.
[16] W. U. Bajwa, J. D. Haupt, G. M. Raz, S. J. Wright, and R. D. Nowak, "Toeplitz-structured compressed sensing matrices," in Proc. IEEE/SP 14th Workshop Statistical Signal Processing (SSP), 2007, pp. 294–298.
[17] W. U. Bajwa, J. D. Haupt, G. M. Raz, and R. D. Nowak, "Compressed channel sensing," in Proc. Conf. Information Sciences and Systems (CISS), 2008.
[18] J. Haupt, W. U. Bajwa, G. Raz, and R. Nowak, "Toeplitz compressed sensing matrices with applications to sparse channel estimation," Preprint, 2008.
[19] J. Romberg, "A uniform uncertainty principle for Gaussian circulant matrices," in Proc. Int. Conf. Digital Signal Processing (DSP), July 2009.
[20] V. Saligrama, "Deterministic designs with deterministic guarantees: Toeplitz compressed sensing matrices, sequence designs and system identification," Preprint, 2008.
[21] H. Rauhut, "Circulant and Toeplitz matrices in compressed sensing," in Proc. Signal Processing with Adaptive Sparse Structured Representations (SPARS), 2010, vol. 9.
[22] B. M. Sanandaji, T. L. Vincent, and M. B. Wakin, "Concentration of measure inequalities for compressive Toeplitz matrices with applications to detection and system identification," in Proc. IEEE Conf. Decision and Control (CDC), 2010.
[23] V. V. Buldygin and Y. V. Kozachenko, "Sub-gaussian random variables," Ukrainian Mathematical Journal, vol. 32, Nov. 1980.
[24] R. Vershynin, "On large random almost Euclidean bases," Acta Math. Univ. Comenianae, vol. 69, no. 2, pp. 137–144, 2000.
[25] P. Indyk and R. Motwani, "Approximate nearest neighbors: Towards removing the curse of dimensionality," in Proc. ACM Symp. Theory of Computing, 1998, pp. 604–613.
[26] R. DeVore, G. Petrova, and P. Wojtaszczyk, "Instance-optimality in probability with an l1-minimization decoder," Applied and Computational Harmonic Analysis, vol. 27, no. 3, pp. 275–288, 2009.
[27] P. Frankl and H. Maehara, "The Johnson-Lindenstrauss lemma and the sphericity of some graphs," J. Combinatorial Theory, Series B, vol. 44, no. 3, pp. 355–362, 1988.
[28] M. A. Davenport, P. T. Boufounos, M. B. Wakin, and R. G. Baraniuk, "Signal processing with compressive measurements," IEEE J. Sel. Topics Signal Process., vol. 4, no. 2, pp. 445–460, 2010.
[29] E. J. Candes, "The restricted isometry property and its implications for compressed sensing," Comptes Rendus Mathematique, vol. 346, no. 9-10, pp. 589–592, 2008.
[30] R. G. Baraniuk and M. B. Wakin, "Random projections of smooth manifolds," Found. Comp. Math., vol. 9, no. 1, pp. 51–77, 2009.
[31] A. J. Izenman, Modern Multivariate Statistical Techniques, Springer, 2008.
[32] J. A. Tropp, "On the linear independence of spikes and sines," J. Fourier Analysis and Applications, vol. 14, no. 5, pp. 838–858, 2008.
[33] E. J. Candes and J. Romberg, "Quantitative robust uncertainty principles and optimally sparse decompositions," Found. Comp. Math., vol. 6, no. 2, pp. 227–254, 2006.
[34] J. D. Haupt and R. D. Nowak, "Compressive sampling for signal detection," in Proc. Int. Conf. Acoustics, Speech and Signal Processing (ICASSP), 2007.
[35] M. A. Davenport, M. F. Duarte, M. B. Wakin, J. N. Laska, D. Takhar, K. F. Kelly, and R. G. Baraniuk, "The smashed filter for compressive classification and target recognition," in Proc. IS&T/SPIE Symp. Electronic Imaging: Computational Imaging, 2007.
[36] M. B. Wakin, B. M. Sanandaji, and T. L. Vincent, "On the observability of linear systems from random, compressive measurements," in Proc. IEEE Conf. Decision and Control (CDC), 2010.