Compressive Learning and Inference
Richard Baraniuk, Chinmay Hegde, Marco Duarte, Mark Davenport (Rice University), Michael Wakin (University of Michigan)

Page 1:

Richard Baraniuk, Chinmay Hegde, Marco Duarte, Mark Davenport
Rice University

Michael Wakin
University of Michigan

Compressive Learning and Inference

Page 2:

Pressure is on Digital Sensors

• Success of digital data acquisition is placing increasing pressure on signal/image processing hardware and software to support

– higher resolution / denser sampling
» ADCs, cameras, imaging systems, microarrays, …

– large numbers of sensors
» image databases, camera arrays, distributed wireless sensor networks, …

– increasing numbers of modalities
» acoustic, RF, visual, IR, UV, …

• Deluge of data
» how to acquire, store, fuse, and process it efficiently?

Page 3:

Sensing by Sampling

• Long-established paradigm for digital data acquisition
– sample data at the Nyquist rate (2x bandwidth)
– compress data (signal-dependent, nonlinear)
– brick wall to resolution/performance

[Diagram: sample → compress (sparse/compressible wavelet transform) → transmit/store → receive → decompress]

Page 4:

Compressive Sensing (CS)

• Directly acquire “compressed” data

• Replace samples by more general “measurements”

[Diagram: compressive sensing → transmit/store → receive → reconstruct]

Page 5:

Compressive Sensing

• When data is sparse/compressible, can directly acquire a condensed representation with no/little information loss

• A random projection will work [Candès-Romberg-Tao, Donoho, 2004]

[Diagram: measurements y = Φx, with Φ a random M × N matrix (M ≪ N) and x a sparse signal with K nonzero entries]
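A minimal NumPy sketch of this acquisition step (an illustration, not the authors' code; the sizes N, M, K are placeholders):

```python
# Sketch of compressive acquisition y = Phi @ x, assuming a Gaussian Phi.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1024, 128, 10                # ambient dim, measurements, sparsity

# K-sparse signal: K nonzero entries at random locations
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)

# Random M x N projection (M << N), scaled for roughly unit-norm columns
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                            # the M compressive measurements
```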

Page 6:

Why CS Works

• Random projection is not full rank, but it stably embeds signals with concise geometrical structure
– sparse signal models: x is K-sparse
– compressible signal models

with high probability, provided M is large enough

Page 7:

Why CS Works

• Random projection is not full rank, but it stably embeds signals with concise geometrical structure
– sparse signal models: x is K-sparse
– compressible signal models

with high probability, provided M is large enough

• Stable embedding: preserves structure
– distances between points, angles between vectors, …

[Diagram: the K-sparse model, a union of K-dim planes, embedded into the measurement space]
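A quick empirical check of the stable-embedding claim (a sketch under assumed sizes, not from the slides): pairwise distances between random K-sparse signals survive the projection nearly unchanged.

```python
# Empirically check that a random projection nearly preserves pairwise
# distances between K-sparse signals (assumed sizes, for illustration).
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 1024, 128, 10

def k_sparse(N, K):
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    return x

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

ratios = []
for _ in range(1000):
    u, v = k_sparse(N, K), k_sparse(N, K)
    ratios.append(np.linalg.norm(Phi @ (u - v)) / np.linalg.norm(u - v))

print(min(ratios), max(ratios))   # both concentrate near 1 for M large enough
```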

Page 8:

CS Signal Recovery

• Recover the sparse/compressible signal x from CS measurements y via optimization: the recovery program

  min ‖x‖₁  subject to  Φx = y

can be posed as a linear program

[Diagram: recovery maps y back onto the K-sparse model (union of K-dim planes)]
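A sketch of that linear program in SciPy (an assumed formulation, not the authors' solver): split x = u - v with u, v ≥ 0, so minimizing Σ(u + v) under Φ(u - v) = y is exactly basis pursuit.

```python
# l1 recovery (basis pursuit) as a linear program: x = u - v, u, v >= 0.
import numpy as np
from scipy.optimize import linprog

def bp_recover(Phi, y):
    M, N = Phi.shape
    c = np.ones(2 * N)                 # objective: sum(u) + sum(v) = ||x||_1
    A_eq = np.hstack([Phi, -Phi])      # Phi @ (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    return res.x[:N] - res.x[N:]       # x_hat = u - v

# With the acquisition sketch above: x_hat = bp_recover(Phi, y) recovers x
# (up to solver tolerance) when M is large enough relative to K.
```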

Page 9:

Information Scalability

• Many applications involve signal inference and not reconstruction

detection < classification < estimation < reconstruction

(in order of increasing computational complexity for linear programming)

Page 10:

Information Scalability

• Many applications involve signal inference and not reconstruction

detection < classification < estimation < reconstruction

• Good news: CS supports efficient learning, inference, processing directly on compressive measurements

• Random projections ~ sufficient statistics for signals with concise geometrical structure

• Extend CS theory to signal models beyond sparse/compressible

Page 11:

Application:

Compressive Detection/Classification
via Matched Filtering

Page 12:

Matched Filter

• Detection/classification with K unknown articulation parameters
– Ex: position and pose of a vehicle in an image
– Ex: time delay of a radar signal return

• Matched filter: joint parameter estimation and detection/classification
– compute a sufficient statistic for each potential target and articulation
– compare the "best" statistics to detect/classify (see the sketch below)
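A bare-bones sketch of that search (an illustration; the template arrays and parameter grids are assumed inputs): correlate the data against every template over the articulation grid, keep each class's best score, and report the winner.

```python
# Matched filter over a grid of articulations: best inner product per class.
import numpy as np

def matched_filter(data, templates_by_class):
    """data: length-N image vector.
    templates_by_class: one (num_articulations, N) array per class."""
    best = [(templates @ data).max() for templates in templates_by_class]
    return int(np.argmax(best))        # index of the most likely class
```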

Page 13:

Matched Filter Geometry

• Detection/classification with K unknown articulation parameters

• Images are points in R^N

• Classify by finding the closest target template to the data for each class (AWGN)
– distance or inner product

[Diagram: data point in R^N compared against target templates drawn from a generative model or from training data (points)]

Page 14:

Matched Filter Geometry

• Detection/classification with K unknown articulation parameters

• Images are points in R^N

• Classify by finding the closest target template to the data

• As the template articulation parameter changes, the templates map out a K-dim nonlinear manifold

• Matched filter classification = closest manifold search

[Diagram: manifold of templates swept out by the articulation parameter space, with the data point nearby]

Page 15:

CS for Manifolds

• Theorem: random measurements preserve manifold structure [Wakin et al., FOCM '08]

• Enables parameter estimation and MF detection/classification directly on compressive measurements
– K is very small in many applications

Page 16:

Example: Matched Filter

• Detection/classification with K=3 unknown articulation parameters:
1. horizontal translation
2. vertical translation
3. rotation

Page 17:

Smashed Filter

• Detection/classification with K=3 unknown articulation parameters (manifold structure)

• Dimensionally reduced matched filter operating directly on compressive measurements (a sketch follows)
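The same search as the matched-filter sketch above, now in the measurement domain (an illustration with assumed inputs): project the templates with the same Φ used for sensing and do the nearest-neighbor search in R^M.

```python
# Smashed filter: nearest projected template, computed entirely in R^M.
import numpy as np

def smashed_filter(y, Phi, templates_by_class):
    """y: length-M compressive measurement of the unknown image.
    Phi: the M x N sensing matrix. templates_by_class: per-class
    (num_articulations, N) template arrays over the parameter grid."""
    best = []
    for templates in templates_by_class:
        proj = templates @ Phi.T                   # templates mapped to R^M
        dists = np.linalg.norm(proj - y, axis=1)   # preserved w.h.p. by Phi
        best.append(dists.min())                   # closest point per manifold
    return int(np.argmin(best))                    # closest manifold = class
```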

Page 18:

Smashed Filter

• Random shift and rotation (K=3 dim. manifold)
• Noise added to measurements
• Goal: identify the most likely position for each image class; identify the most likely class using a nearest-neighbor test

[Plots: average shift-estimate error and classification rate (%) versus the number of measurements M, for increasing noise levels]

Page 19:

Application:

Compressive Data Fusion

Page 20:

Multisensor Inference

• Example: network of J cameras observing an articulating object

• Each camera's images lie on a K-dim manifold in R^N

• How to efficiently fuse imagery from the J cameras to maximize classification accuracy and minimize network communication?

Page 21:

Multisensor Fusion

• Fusion: stack the corresponding image vectors taken at the same time (see the sketch below)

• Fused images still lie on a K-dim manifold in R^{JN}:
the Joint Articulation Manifold (JAM)
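A tiny sketch of the fusion step (assumed shapes, for illustration only): concatenating the J simultaneous image vectors gives one point on the joint manifold, which can then be measured with a single Φ.

```python
# JAM fusion: stack J simultaneous images, then (optionally) measure jointly.
import numpy as np

def jam_stack(images):
    """images: list of J length-N vectors captured at the same instant."""
    return np.concatenate(images)      # one point in R^{J*N} on the JAM

def jam_measure(images, Phi):
    """Phi: an M x (J*N) random matrix; returns M joint measurements."""
    return Phi @ jam_stack(images)
```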

Page 22:

CS + JAM

• Can take CS measurements of the stacked images and process or make inferences

[Diagram: joint CS measurement of the stacked camera images, contrasted with unfused sensing]

Page 23:

CS + JAM

• Can compute the CS measurements in-network as we transmit to the collection/processing point (a sketch follows)
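A sketch of why in-network aggregation works (my decomposition, not code from the slides): partition Φ by camera as Φ = [Φ₁ … Φ_J], so y = Σ_j Φ_j x_j and only length-M partial sums ever travel through the network.

```python
# In-network CS: each hop adds its local measurements Phi_j @ x_j to a
# running length-M sum, so raw images never leave the cameras.
import numpy as np

def in_network_measurements(images, Phis):
    """images: list of J local image vectors x_j.
    Phis: list of J blocks Phi_j (each M x N_j) of the joint sensing matrix."""
    y = np.zeros(Phis[0].shape[0])
    for x_j, Phi_j in zip(images, Phis):   # one camera per network hop
        y += Phi_j @ x_j                   # accumulate and forward y
    return y                               # equals Phi @ stacked image
```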

Page 24:

Simulation Results

• J=3 CS cameras, each with N = 320×240 pixel resolution
• M=200 random measurements per camera

• Two classes:
1. truck w/ cargo
2. truck w/ no cargo

[Images: example frames from class 1 and class 2]

Page 25:

Simulation Results

• J=3 CS cameras, each with N = 320×240 pixel resolution
• M=200 random measurements per camera

• Two classes:
– truck w/ cargo
– truck w/ no cargo

• Smashed filtering:
– independent
– majority vote
– JAM fused

Page 26:

Application:

Compressive Manifold Learning

Page 27:

Manifold Learning

• Given training points in R^N, learn the mapping to the underlying K-dimensional articulation manifold

• ISOMAP, LLE, HLLE, …

• Ex: images of a rotating teapot

[Figure: teapot images arranged around the learned articulation space, a circle]

Page 28:

Compressive Manifold Learning

• The ISOMAP algorithm is based on geodesic distances between points

• Random measurements preserve these distances

• Theorem: if M is large enough, the ISOMAP residual variance in the projected domain is bounded to within an additive error factor [Hegde et al. '08]

[Figure: ISOMAP embeddings of a translating-disk manifold (K=2) learned from the full data (N=4096) and from M=100, M=50, and M=25 random measurements]
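A sketch of the experiment's shape (stand-in random data, not the paper's disk images; scikit-learn's Isomap plays the role of ISOMAP): embed the raw points, then embed random projections of them at decreasing M and compare.

```python
# Compressive manifold learning: run ISOMAP on random projections of the
# data. X below is stand-in data; the paper uses translating-disk images.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(2)
X = rng.standard_normal((500, 4096))       # placeholder for N=4096-pixel images

emb_full = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

for M in (100, 50, 25):                    # fewer and fewer measurements
    Phi = rng.standard_normal((M, X.shape[1])) / np.sqrt(M)
    emb_M = Isomap(n_neighbors=10, n_components=2).fit_transform(X @ Phi.T)
    # compare emb_M against emb_full (e.g., via residual variance)
```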

Page 29:

Conclusions

• Why CS works: stable embedding for signals with concise geometric structure
– sparse signals (K-planes), compressible signals (ℓp balls)
– smooth manifolds

• Information scalability
– detection < classification < estimation < reconstruction
– compressive measurements ~ sufficient statistics
– many fewer measurements may be required to detect/classify/estimate than to reconstruct
– leverages manifold structure, not sparsity

• Examples
– smashed filter
– JAM for data fusion
– manifold learning

dsp.rice.edu/cs

Page 30:

Page 31:

• Partnership on open educational resources
– content development
– peer review

• Contribute your course notes, tutorial article, textbook draft, out-of-print textbook, … (you must own the copyright)

• MS Word and LaTeX importers

• For more info: IEEEcnx.org

Page 32:

capetowndeclaration.org

Page 33:

Page 34:

Why CS Works (#3)

• Random projection is not full rank, but it stably embeds
– sparse/compressible signal models
– smooth manifolds
– point clouds (Q points)

into a lower-dimensional space with high probability

• Stable embedding: preserves structure
– distances between points, angles between vectors, …

provided M is large enough: Johnson-Lindenstrauss
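A quick Johnson-Lindenstrauss check (assumed sizes; the constant in M = O(log Q / ε²) is not from the slides): all pairwise distances among Q points survive the projection to within a small relative error.

```python
# JL check: worst-case relative distance distortion over a cloud of Q points.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
N, Q, M = 10000, 200, 600
X = rng.standard_normal((Q, N))                 # Q arbitrary points in R^N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
Y = X @ Phi.T                                   # projected cloud in R^M

worst = max(abs(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]) - 1)
            for i, j in combinations(range(Q), 2))
print(worst)                                    # small for M large enough
```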