The Johnson-Lindenstrauss Lemma Meets Compressed Sensing


Page 1: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Mark Davenport, Richard Baraniuk, Ron DeVore, Michael Wakin

dsp.rice.edu/cs

Page 2: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Compressed Sensing (CS)

Observe $y = \Phi x$

$\Phi$: $M \times N$ random measurement matrix with $M \ll N$

$x \in \mathbb{R}^N$ is $K$-sparse
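A minimal numerical sketch of this measurement model (dimensions, sparsity level, and the random seed are illustrative choices, not values from the slides):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1024, 128, 10           # ambient dimension, measurements, sparsity

# K-sparse signal: K nonzero entries at random locations
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian measurement matrix, scaled so that E||Phi x||^2 = ||x||^2
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                        # M compressive measurements
print(y.shape, np.linalg.norm(Phi @ x)**2 / np.linalg.norm(x)**2)
```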

Page 3: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Randomness in CS

New signal models

New applications

Pages 4-6: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Restricted Isometry Property

What makes a “good” CS matrix?

Let $\Sigma_K$ denote the set of all $K$-sparse signals in $\mathbb{R}^N$.

$\Phi$ has the RIP of order $K$ if there exists $\delta_K \in (0,1)$ such that

$(1-\delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta_K)\|x\|_2^2$

for all $x \in \Sigma_K$.

Geometrically, $\Sigma_K$ is a union of $K$-dimensional subspaces ($K$-planes).
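A quick empirical look at how tightly a random Gaussian matrix preserves the norms of $K$-sparse vectors. This is a Monte Carlo sketch over randomly drawn supports, not a certificate of the RIP (which would require checking every support):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, K, trials = 512, 128, 8, 2000

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(trials):
    x = np.zeros(N)
    idx = rng.choice(N, K, replace=False)
    x[idx] = rng.standard_normal(K)
    ratios.append(np.linalg.norm(Phi @ x)**2 / np.linalg.norm(x)**2)

ratios = np.array(ratios)
# The empirical spread of ||Phi x||^2 / ||x||^2 hints at (but does not prove)
# a RIP constant of roughly max(|ratios - 1|) over the sampled sparse vectors.
print(ratios.min(), ratios.max())
```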

Pages 7-9: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Proof of RIP

A random matrix will satisfy the RIP for the largest possible K with high probability [Candès, Tao; Donoho]

Appeal to known results on singular values of random matrices [Davidson, Szarek; Litvak, Pajor, Rudelson, Tomczak-Jaegermann]

This is not light reading…

Page 10: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

“Proof” of RIP

“It uses a lot of newer mathematical techniques, things that were developed in the 80's and 90's. Noncommutative geometry, random matrices … the proof is very… hip.” - Hal

Page 11: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Dimensionality Reduction

Point dataset lives in high-dimensional space

Number of data points is small

Compress data to few dimensions

We do not lose information – can distinguish data points

Pages 12-13: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Johnson-Lindenstrauss Lemma

For a set $Q$ of points in $\mathbb{R}^N$ and $\epsilon \in (0,1)$, if $M = O(\epsilon^{-2}\log|Q|)$, then there is a map $\Phi: \mathbb{R}^N \to \mathbb{R}^M$ such that

$(1-\epsilon)\|u-v\|_2^2 \le \|\Phi u - \Phi v\|_2^2 \le (1+\epsilon)\|u-v\|_2^2$ for all $u, v \in Q$

Proof relies on a simple concentration of measure inequality: for a random $\Phi$ and any fixed $x$,

$\Pr\!\left[\;\big|\,\|\Phi x\|_2^2 - \|x\|_2^2\,\big| \ge \epsilon\|x\|_2^2\right] \le 2e^{-M c_0(\epsilon)}$
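A small experiment illustrating the lemma: project an arbitrary point cloud with a random Gaussian map and measure the worst pairwise distance distortion (the point count and dimensions are arbitrary choices):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
N, M, num_points = 1000, 200, 50

# Arbitrary point cloud in high dimension
Q = rng.standard_normal((num_points, N))

# Random Gaussian JL map
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
PQ = Q @ Phi.T

worst = 0.0
for i, j in combinations(range(num_points), 2):
    d_orig = np.linalg.norm(Q[i] - Q[j])**2
    d_proj = np.linalg.norm(PQ[i] - PQ[j])**2
    worst = max(worst, abs(d_proj / d_orig - 1.0))

# The worst-case distortion shrinks roughly like 1/sqrt(M) and grows only
# logarithmically with the number of points, as the lemma predicts.
print(f"worst pairwise distortion: {worst:.3f}")
```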

Page 14: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Favorable JL Distributions

Gaussian: $\phi_{ij} \sim \mathcal{N}(0, 1/M)$

Bernoulli [Achlioptas]: $\phi_{ij} = \pm 1/\sqrt{M}$ with equal probability

Page 15: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Favorable JL Distributions

“Database-friendly” [Achlioptas]: $\phi_{ij} = \sqrt{3/M}\times\{+1 \text{ w.p. } 1/6,\; 0 \text{ w.p. } 2/3,\; -1 \text{ w.p. } 1/6\}$

Fast JL Transform [Ailon, Chazelle]: $\Phi = PHD$, with $P$ a sparse Gaussian matrix, $H$ a fast Hadamard transform, and $D$ a random $\pm 1$ modulation
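Samplers for the first three distributions listed above, as a hedged sketch (the function names are mine, not from the slides); each map preserves the squared norm in expectation:

```python
import numpy as np

def gaussian_jl(M, N, rng):
    # phi_ij ~ N(0, 1/M)
    return rng.standard_normal((M, N)) / np.sqrt(M)

def bernoulli_jl(M, N, rng):
    # phi_ij = +/- 1/sqrt(M) with equal probability
    return rng.choice([-1.0, 1.0], size=(M, N)) / np.sqrt(M)

def database_friendly_jl(M, N, rng):
    # sqrt(3/M) * {+1 w.p. 1/6, 0 w.p. 2/3, -1 w.p. 1/6}  (Achlioptas)
    vals = rng.choice([1.0, 0.0, -1.0], size=(M, N), p=[1/6, 2/3, 1/6])
    return np.sqrt(3.0 / M) * vals

rng = np.random.default_rng(3)
x = rng.standard_normal(512)
for make in (gaussian_jl, bernoulli_jl, database_friendly_jl):
    Phi = make(128, 512, rng)
    print(make.__name__, np.linalg.norm(Phi @ x)**2 / np.linalg.norm(x)**2)
```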

Pages 16-19: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

JL Meets CS [Baraniuk, DeVore, Davenport, Wakin]

Theorem: Suppose $\Phi$ is drawn from a JL-favorable distribution and $M \ge c_1 K \log(N/K)$. Then with probability at least $1 - 2e^{-c_2 M}$, $\Phi$ meets the RIP of order $K$ with constant $\delta_K$ (for constants $c_1, c_2$ depending only on $\delta_K$).

Key idea:

construct a set of points Q

apply the JL lemma (union bound on concentration of measure)

show that an isometry on Q extends to an isometry on $\Sigma_K$

Pages 20-23: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Recall: RIP

$\Phi$ has the RIP of order $K$ if there exists $\delta_K \in (0,1)$ such that $(1-\delta_K)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta_K)\|x\|_2^2$ for all $x \in \Sigma_K$.

Fix a $K$-dimensional subspace $X_T$ (the signals supported on one index set $T$ with $|T| = K$)

Consider only $x \in X_T$ with $\|x\|_2 \le 1$

Pick $Q \subset X_T$ such that for any such $x$ there exists a $q \in Q$ with $\|x - q\|_2 \le \delta/4$

Pages 24-27: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Bootstrapping

Apply JL to get $(1-\delta/2)\|q\|_2^2 \le \|\Phi q\|_2^2 \le (1+\delta/2)\|q\|_2^2$ for all $q \in Q$

Define $A$ to be the smallest number such that $\|\Phi x\|_2 \le (1+A)\|x\|_2$ for all $x \in X_T$ with $\|x\|_2 \le 1$

For any such $x$, pick the closest $q \in Q$, so that $\|x - q\|_2 \le \delta/4$

Hence $\|\Phi x\|_2 \le \|\Phi q\|_2 + \|\Phi(x-q)\|_2 \le (1 + \delta/2) + (1+A)\,\delta/4$; taking the supremum over $x$ forces $A \le \delta$, and the lower bound $\|\Phi x\|_2 \ge (1-\delta)\|x\|_2$ follows the same way
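A numeric sanity check of the one line of algebra the bootstrap leaves implicit, namely that $1+A \le 1 + \delta/2 + (1+A)\delta/4$ really does imply $A \le \delta$ for every $\delta \in (0,1)$ (purely illustrative):

```python
import numpy as np

delta = np.linspace(0.01, 0.99, 99)
# Solving 1 + A = 1 + delta/2 + (1 + A) * delta/4 for the worst-case A:
A = (3 * delta / 4) / (1 - delta / 4)
print(bool(np.all(A <= delta)))   # True: the bootstrapped constant never exceeds delta
```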

Pages 28-31: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

How Many Points?

For each $K$-dimensional space, we need $|Q| \le (12/\delta)^K$ points

$\binom{N}{K} \le (eN/K)^K$ spaces to consider

How many measurements do we need to get the RIP with probability at least $1 - 2e^{-c_2 M}$?

Union bound over all points and all subspaces: $M \ge c_1\, K \log(N/K)$ suffices, with $c_1, c_2$ depending only on $\delta$
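A back-of-the-envelope version of this count, plugging the standard Gaussian concentration exponent into the union bound above. The specific constants and target failure probability are illustrative; note how pessimistic the union-bound constants are compared with what works in practice:

```python
import numpy as np
from scipy.special import gammaln

def log_n_choose_k(N, K):
    return gammaln(N + 1) - gammaln(K + 1) - gammaln(N - K + 1)

def measurements_needed(N, K, delta=0.25, fail_prob=1e-6):
    # Union bound: about C(N,K) * (12/delta)^K points, each violating the
    # concentration bound with probability at most 2*exp(-c0 * M),
    # using the Gaussian exponent c0(eps) = eps^2/4 - eps^3/6 at eps = delta/2.
    eps = delta / 2
    c0 = eps**2 / 4 - eps**3 / 6
    log_points = log_n_choose_k(N, K) + K * np.log(12 / delta)
    return int(np.ceil((log_points + np.log(2 / fail_prob)) / c0))

print(measurements_needed(N=1024, K=10))
```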

Page 32: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Universality

Easy to see why random matrices are universal with respect to the sparsity basis: if $x = \Psi\alpha$ with $\Psi$ orthonormal, then $\Phi\Psi$ is again a random matrix of the same type (see the sketch below)

Resample your points in the new basis – the JL lemma provides a guarantee for an arbitrary set of points

Gaussian

Bernoulli

Others…
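A sketch of that universality claim: the same Gaussian measurement matrix preserves the norm of a signal that is sparse in some other orthonormal basis, without knowing that basis (the random basis here stands in for any fixed sparsity basis):

```python
import numpy as np

rng = np.random.default_rng(4)
N, M, K = 256, 96, 6

# Orthonormal "sparsity basis" Psi (a random one; any fixed basis behaves the same)
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))

# Gaussian measurement matrix
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Signal sparse in Psi, not in the canonical basis
alpha = np.zeros(N)
alpha[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
x = Psi @ alpha

# Phi @ Psi is again i.i.d. Gaussian, so norms are preserved just as before
print(np.linalg.norm(Phi @ x)**2 / np.linalg.norm(x)**2)
```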

Pages 33-36: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Summary

Better understanding of the relevant geometry

provides simple proofs of key CS / n-width results

New conditions on what it takes to be a good CS matrix

concentration of measure around the mean

New signal models

manifolds

Natural setting for studying information scalability

detection

estimation

learning

Page 37: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Randomness in CS

New signal models

New applications

Page 38: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Manifold Compressive Sensing

Locally Euclidean topological space

Typically for signal processing

nonlinear $K$-dimensional “surface” in signal space $\mathbb{R}^N$

potentially very low dimensional signal model

Examples (all nonlinear)

chirps

modulation schemes

image articulations

Page 39: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Stable Manifold Embedding

Stability [Wakin, Baraniuk]: a random $\Phi$ gives a stable embedding of a smooth $K$-dimensional manifold, preserving pairwise ambient and geodesic distances up to a factor $1 \pm \epsilon$

Number of measurements required: $M$ grows linearly in the manifold dimension $K$ and only logarithmically in the ambient dimension $N$

Page 40: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Example: Linear Chirps

[Figure: original, initial guess, initial error]

N = 256

K = 2 (start & end frequencies)

M = 5: 55% success

M = 30: 99% success
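A toy version of this experiment: a linear chirp parametrized by its start and end frequencies, recovered from compressive measurements by a brute-force search over the 2-D parameter manifold. All constants (grid, frequencies, seed) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 256, 30
t = np.arange(N) / N

def chirp(f0, f1):
    # Linear chirp sweeping from f0 to f1 over the observation window
    return np.cos(2 * np.pi * (f0 * t + 0.5 * (f1 - f0) * t**2))

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# True parameters and compressive measurements
f0_true, f1_true = 17.3, 42.8
y = Phi @ chirp(f0_true, f1_true)

# Brute-force search over the 2-D parameter manifold
freqs = np.linspace(0, 64, 129)
best, best_err = None, np.inf
for f0 in freqs:
    for f1 in freqs:
        err = np.linalg.norm(y - Phi @ chirp(f0, f1))
        if err < best_err:
            best, best_err = (f0, f1), err

print("estimated (f0, f1):", best)
```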

Page 41: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Manifold Learning

Manifold learning algorithms for sampled data in $\mathbb{R}^N$

ISOMAP, LLE, HLLE, etc.

Stable embedding preserves key properties in $\mathbb{R}^M$

ambient and geodesic distances

dimension and volume of the manifold

path lengths and curvature

topology, local neighborhoods, and angles

etc…

Can we learn these properties from projections in $\mathbb{R}^M$? (see the sketch below)

savings in computation, storage, acquisition costs
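A sketch of that question using scikit-learn's Isomap on a synthetic manifold: the swiss roll stands in for the image-articulation data on the next slide, and the embedding is learned both from the ambient data and from random projections of it (dimensions and seeds are arbitrary):

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

rng = np.random.default_rng(6)

# Low-dimensional manifold lifted into a higher-dimensional ambient space
X3, color = make_swiss_roll(n_samples=1000, random_state=0)
N = 100
lift = rng.standard_normal((3, N))
X = X3 @ lift                       # samples now live in R^N

# Random projection down to R^M, then manifold learning on the projections
M = 15
Phi = rng.standard_normal((N, M)) / np.sqrt(M)
X_proj = X @ Phi

emb_full = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
emb_proj = Isomap(n_neighbors=10, n_components=2).fit_transform(X_proj)
print(emb_full.shape, emb_proj.shape)
```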

Page 42: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Example: Manifold Learning

[Figure: embeddings learned by ISOMAP, HLLE, and Laplacian Eigenmaps from image data in $\mathbb{R}^{4096}$ and from random projections in $\mathbb{R}^M$ (M = 15, 15, 20)]

Page 43: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Randomness in CS

New signal models

New applications

Page 44: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Detection – Matched Filter

Testing for the presence of a known signal $s$: $x = s + n$ (signal present) versus $x = n$ (noise only)

Sufficient statistic for detecting $s$: $t = \langle x, s \rangle$

Page 45: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Compressive Matched Filter

Now suppose we have CS measurements $y = \Phi(s + n)$

when $\Phi$ is an orthoprojector, $\Phi n$ remains white noise

the new sufficient statistic is simply $t = \langle y, \Phi s \rangle$, the compressive matched filter (smashed filter?)
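A Monte Carlo comparison of the classical and compressive matched filters (signal, noise level, and dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
N, M, sigma, trials = 1024, 128, 1.0, 2000

s = rng.standard_normal(N)
s *= 3.0 / np.linalg.norm(s)         # fix the signal energy

# Random orthoprojector: M orthonormal rows via QR
Q, _ = np.linalg.qr(rng.standard_normal((N, M)))
Phi = Q.T                            # M x N with orthonormal rows

def trial(signal_present):
    x = sigma * rng.standard_normal(N) + (s if signal_present else 0)
    t_full = x @ s                   # classical matched filter
    t_comp = (Phi @ x) @ (Phi @ s)   # compressive ("smashed") matched filter
    return t_full, t_comp

h1 = np.array([trial(True) for _ in range(trials)])
h0 = np.array([trial(False) for _ in range(trials)])

# Output SNR of each detector; the compressive version loses roughly a factor
# of M/N in SNR, matching the JL scaling quoted on the next slide.
for k, name in enumerate(["matched filter", "compressive matched filter"]):
    d = (h1[:, k].mean() - h0[:, k].mean()) / h0[:, k].std()
    print(name, round(d, 2))
```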

Page 46: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

CMF – Performance

ROC curve for the Neyman-Pearson detector: $P_D = Q\big(Q^{-1}(P_F) - \|s\|_2/\sigma\big)$

From the JL lemma, for a random orthoprojector $\|\Phi s\|_2 \approx \sqrt{M/N}\,\|s\|_2$

Thus $P_D \approx Q\big(Q^{-1}(P_F) - \sqrt{M/N}\,\|s\|_2/\sigma\big)$
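Evaluating these ROC expressions numerically, with the Q-function taken as the standard normal tail (the SNR value and dimensions are illustrative):

```python
import numpy as np
from scipy.stats import norm

Q = norm.sf            # Q-function: standard normal tail probability
Qinv = norm.isf        # its inverse

snr = 3.0              # ||s|| / sigma
N, M = 1024, 128
P_F = np.logspace(-4, -1, 7)

P_D_full = Q(Qinv(P_F) - snr)
P_D_comp = Q(Qinv(P_F) - np.sqrt(M / N) * snr)
for pf, pd_f, pd_c in zip(P_F, P_D_full, P_D_comp):
    print(f"P_F={pf:.1e}  full={pd_f:.3f}  compressive={pd_c:.3f}")
```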

Page 48: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Generalization – Classification

More generally, suppose we want to classify among several candidate signals $s_1, \dots, s_R$

by the JL lemma the pairwise distances are preserved, $\|\Phi s_i - \Phi s_j\|_2 \approx \|s_i - s_j\|_2$, so nearest-template classification still works on the measurements (see the sketch below)
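A minimal nearest-template classifier operating directly on the compressive measurements (the number of candidates, noise level, and dimensions are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(8)
N, M, R, sigma = 512, 64, 5, 0.5

signals = rng.standard_normal((R, N))             # candidate signals s_1..s_R
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
templates = signals @ Phi.T                       # compressive templates

true = 3
y = Phi @ (signals[true] + sigma * rng.standard_normal(N))

# Classify by nearest template in the measurement domain
est = int(np.argmin(np.linalg.norm(templates - y, axis=1)))
print(true, est)
```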

Page 49: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

CMF as an Estimator

How well does the compressive matched filter estimate the output of the true matched filter?

With high probability, $\langle \Phi x, \Phi s \rangle \approx \langle x, s \rangle$ (apply the JL concentration to $x + s$ and $x - s$ and use the polarization identity)

[Alon, Gibbons, Matias, Szegedy; Davenport, Baraniuk, Wakin]

Pages 50-53: The Johnson-Lindenstrauss Lemma Meets Compressed Sensing

Conclusions dsp.rice.edu/CS

Random matrices work for CS for the same reason that they work for the JL lemma

Another way to characterize “good” CS matrices

Allows us to extend CS to new signal models

manifold/parametric models

Allows us to extend CS to new settings

detection

classification/learning

estimation