
Page 1

“Random Projections on Smooth Manifolds”: A Short Summary

R. G. Baraniuk and M. K. Wakin, Foundations of Computational Mathematics

Presented to the University of Arizona Computational Sensing Journal Club

Presented by Phillip K. Poon, Optical Computing and Processing Laboratory

Page 2

The Motivation / Problem

The goal: dimensionality reduction.

Extract low-dimensional information from a signal that lives in a high-dimensional space.

Preserve critical relationships among parts of the data.

Understand the structure of the low-dimensional information.

Application: compression. Find a low-dimensional representation from which the original high-dimensional data can be reproduced.

Page 3

General overview of dimensionality reduction

Make a model (or estimate) of the expected behavior of the low-dimensional information. These models often assume some structure in the low-dimensional information.

Example: given data presented as a cube, the information probably lives on a line.

Using the constraints and assumptions imposed by the model, algorithms process the data into the desired low-dimensional information.

Page 4

Three common model classes:
Linear models
Sparse models
Manifold models

Page 5

Linear Models

The high-dimensional signal depends linearly on a low-dimensional set of parameters. These models often use a basis {ψ_i} that allows the data x to be represented with a few coefficients indexed by Ω ⊂ {1, 2, …, N}, e.g. a Fourier orthonormal basis.

This yields a signal class F = span({ψ_i} : i ∈ Ω), a K-dimensional linear subspace of R^N.

These signal classes have a linear geometry, which leads to linear algorithms for dimensionality reduction.

Principal Component Analysis (PCA) is the optimal linear algorithm, a non-adaptive technique that requires training data. (A minimal PCA sketch follows below.)

[Figure: a high-dimensional signal mapped to its low-dimensional representation]
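A minimal PCA sketch in Python (not from the paper; the data sizes and the synthetic subspace are illustrative assumptions): project centered training signals onto the top-K right singular vectors to obtain the low-dimensional representation, then map back to R^N.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, n_samples = 100, 3, 500  # illustrative sizes

# Synthetic training data lying near a K-dimensional subspace of R^N
basis = rng.standard_normal((N, K))
coeffs = rng.standard_normal((n_samples, K))
X = coeffs @ basis.T + 0.01 * rng.standard_normal((n_samples, N))

# PCA via the SVD of the centered data matrix
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
X_low = (X - mean) @ Vt[:K].T   # low-dimensional representation
X_rec = X_low @ Vt[:K] + mean   # reconstruction in R^N

print("relative reconstruction error:",
      np.linalg.norm(X - X_rec) / np.linalg.norm(X))
```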

Page 6

Sparse (Nonlinear) Models

Similar to linear models: a few coefficients in the new basis represent or approximate the high-dimensional data.

Unlike the linear model, the relevant set of basis elements may change from signal to signal. No single low-dimensional subspace suffices to represent all K-sparse signals, so the set is not closed under addition and thus NOT a linear subspace.

The set of sparse signals must be written as a nonlinear union of distinct K-dimensional subspaces:

Σ_K := F = ∪_{|Ω| = K} span({ψ_i} : i ∈ Ω)

Examples of situations calling for sparse models: natural signals, audio recordings, images, piecewise smooth signals.

The information in the signal is encoded in the location and strength of each coefficient. On the surface, sparse signals seem to require an adaptive and nonlinear technique! (A minimal sketch of the union-of-subspaces point follows below.)
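A minimal sketch of the union-of-subspaces point above (N, K, and the random supports are illustrative choices, not the paper's): two K-sparse signals with different supports sum to a signal that is no longer K-sparse.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 20, 3  # illustrative sizes

def random_k_sparse(rng, N, K):
    """Draw a signal with exactly K nonzero coefficients."""
    x = np.zeros(N)
    x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
    return x

x1 = random_k_sparse(rng, N, K)
x2 = random_k_sparse(rng, N, K)
print("sparsity of x1:", np.count_nonzero(x1))            # K
print("sparsity of x1 + x2:", np.count_nonzero(x1 + x2))  # up to 2K: not closed
```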

Page 7

Compressive Sensing and Sparse (Nonlinear) Models

CS uses nonadaptive linear methods. The encoder requires no prior knowledge of the signal; the decoder uses the sparsity model to recover the signal.

Every K-sparse signal can be recovered with high probability of success using M = O(K log(N/K)) linear measurements, y = Φx, where Φ is a measurement/encoding matrix drawn from a random distribution.

Random measurements allow for a universal, nonadaptive measurement scheme.

This invokes the Restricted Isometry Property of the measurement scheme: no two sparse points are mapped to the same location in the measurement domain, similar to the concept of an injective mapping.

A new and rapidly growing area of research! (A small end-to-end sketch follows below.)
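A small end-to-end CS sketch (sizes and the leading constant in M are illustrative; the decoder here is orthogonal matching pursuit, one simple sparse decoder, not a method prescribed by the paper): measure a K-sparse x with a random Gaussian Φ and recover it from y = Φx.

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 256, 5
M = int(4 * K * np.log(N / K))  # on the order of K log(N/K); constant illustrative

# A K-sparse signal and nonadaptive random Gaussian measurements y = Phi x
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

# Decode with orthogonal matching pursuit: greedily grow the support,
# re-fitting the coefficients by least squares at each step
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coef

x_hat = np.zeros(N)
x_hat[support] = coef
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```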

Page 8

The Restricted Isometry Property

Accurate recovery of sparse signals requires a stable embedding when encoded: no two points map to the same location, and sparse signals remain well separated in R^M.

A random measurement matrix obeying the RIP guarantees accurate recovery, and requires M = O(K log(N/K)) measurements.

The Johnson-Lindenstrauss Lemma is intimately connected to the RIP. (A minimal numerical check follows below.)
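A minimal numerical check in the Johnson-Lindenstrauss spirit (dimensions and point count are illustrative assumptions): a random Gaussian map from R^N to R^M keeps all pairwise distances close to their original values with high probability.

```python
import numpy as np

rng = np.random.default_rng(3)
N, M, n_points = 1000, 200, 50  # illustrative dimensions

X = rng.standard_normal((n_points, N))          # points in R^N
Phi = rng.standard_normal((M, N)) / np.sqrt(M)  # random projection
Y = X @ Phi.T                                   # images in R^M

# Compare every pairwise distance before and after projection
ratios = [np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j])
          for i in range(n_points) for j in range(i + 1, n_points)]
print("distance ratio range:", min(ratios), max(ratios))  # close to 1
```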

Page 9

Manifold Models

Classic example: the swiss roll. Linear models like PCA or MDS would fail; they only see the Euclidean straight-line distance!

Moreover, we can't assume sparsity, so compressive sensing fails too.

So manifold models are needed! (A small sketch of the phenomenon follows below.)
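A minimal sketch of the swiss-roll phenomenon, using a 2-D spiral as an illustrative stand-in (parameters are assumptions, not the paper's): two points one turn apart are close in Euclidean distance but far apart along the curve, approximating geodesic distance by arc length.

```python
import numpy as np

# Parameterize a 2-D spiral (a 1-D manifold in the plane)
t = np.linspace(1.5 * np.pi, 4.5 * np.pi, 1000)
spiral = np.stack([t * np.cos(t), t * np.sin(t)], axis=1)

# Approximate geodesic distance by summed segment lengths (arc length)
seg = np.linalg.norm(np.diff(spiral, axis=0), axis=1)
arc = np.concatenate([[0.0], np.cumsum(seg)])

i, j = 0, 666  # roughly one full turn apart on the spiral
print("Euclidean (straight-line) distance:", np.linalg.norm(spiral[i] - spiral[j]))
print("geodesic (along-the-curve) distance:", arc[j] - arc[i])
```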

Page 10

A few terms

What is a manifold? Manifolds are very general shapes and structures. A surface is a 2-manifold; a volume is a 3-manifold.

Manifolds are locally Euclidean: they seem "flat" if you look closely enough, e.g. the Earth looks "flat". Any object that can be "charted" is a manifold.

Geodesic distance: the shortest distance between points within the curved space.

Page 11

Manifold Models

The signal may not be representable with a sparse set of coefficients; this framework is more general than that of bases.

Manifold models arise when we believe the signal has a continuous, and often nonlinear, dependence on some parameters. A K-dimensional parameter θ carries the relevant information, and the signal x_θ ∈ R^N changes as a continuous and nonlinear function of these parameters.

The geometry of the signal class forms a nonlinear K-dimensional submanifold of R^N, F = { x_θ : θ ∈ Θ }, where Θ is a K-dimensional parameter space. (A one-parameter example follows below.)
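A minimal sketch of such a signal class, using a shifted Gaussian pulse as an illustrative example (not the paper's): one parameter θ controls the pulse position, so the signals trace out a nonlinear 1-dimensional submanifold of R^N.

```python
import numpy as np

N = 128
grid = np.linspace(0.0, 1.0, N)

def x_theta(theta, width=0.05):
    """A signal in R^N as a continuous, nonlinear function of theta."""
    return np.exp(-((grid - theta) ** 2) / (2.0 * width ** 2))

# Sweeping theta traces out a 1-D nonlinear submanifold of R^N
thetas = np.linspace(0.2, 0.8, 100)
manifold = np.stack([x_theta(th) for th in thetas])
print("sampled manifold shape:", manifold.shape)  # (100, 128)
```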

Page 12

Manifold Learning

Most manifold-modeled signals require learning the manifold structure.

Learning involves constructing nonlinear mappings from R^N to R^M that are adapted from training data. The mapping preserves a characteristic property of the manifold.

Page 13

A Classic Manifold Learning Technique: ISOMAP

ISOMAP seeks to preserve all the pairwise geodesic manifold distances! It approximates faraway distances by adding up the small interpoint distances along the shortest route (see the sketch below).

The burden of storing sampled data points increases with the native dimension N of the data.

Each manifold learning algorithm attempts to preserve a different geometric property of the underlying manifold.
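A minimal ISOMAP-style sketch of the steps just described (the data, k, and sizes are illustrative assumptions): build a k-nearest-neighbor graph, approximate geodesics by graph shortest paths, then embed with classical MDS.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

rng = np.random.default_rng(4)
n, k = 300, 8  # sample count and neighborhood size (illustrative)

# Sample a swiss roll embedded in R^3
t = 1.5 * np.pi * (1 + 2 * rng.random(n))
h = 10 * rng.random(n)
X = np.stack([t * np.cos(t), h, t * np.sin(t)], axis=1)

# Step 1: k-nearest-neighbor graph of small Euclidean distances
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
G = np.zeros_like(D)
for i in range(n):
    nbrs = np.argsort(D[i])[1 : k + 1]  # skip self at position 0
    G[i, nbrs] = D[i, nbrs]

# Step 2: geodesic estimates = shortest paths through the graph
# (assumes the k-NN graph is connected; zeros are treated as non-edges)
geo = shortest_path(G, method="D", directed=False)

# Step 3: classical MDS on the geodesic distances -> 2-D embedding
J = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * J @ (geo ** 2) @ J
vals, vecs = np.linalg.eigh(B)  # eigenvalues in ascending order
Y = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))
print("embedded shape:", Y.shape)  # (n, 2)
```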

Page 14

What do Random Projections on Manifolds do for us?

They suggest that compressive sensing is not limited to sparse signals but extends to manifold-modeled signals.

A provably small number M of random linear projections can preserve the key information of a manifold-modeled signal.

No training, non-adaptive, and the mapping to the lower dimension is linear! Significantly reduced computation. (A minimal sketch follows below.)
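A minimal sketch of this claim (M, the pulse family, and all sizes are illustrative assumptions, not the paper's bound): a nonadaptive random linear map applied to points on the pulse manifold from the earlier sketch approximately preserves their pairwise distances.

```python
import numpy as np

rng = np.random.default_rng(5)
N, M = 128, 12  # M is an illustrative choice, not the paper's bound
grid = np.linspace(0.0, 1.0, N)

# Re-create the illustrative pulse manifold
thetas = np.linspace(0.2, 0.8, 60)
X = np.stack([np.exp(-((grid - th) ** 2) / (2.0 * 0.05 ** 2)) for th in thetas])

# Nonadaptive random linear map: no training, linear, cheap to apply
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
Y = X @ Phi.T

i, j = 10, 50
print("ambient distance in R^N:", np.linalg.norm(X[i] - X[j]))
print("distance after projection to R^M:", np.linalg.norm(Y[i] - Y[j]))
```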

Page 15

Random Projections Tell Us a Lot

Manifold learning algorithms try to preserve key properties of the manifold. Random projections accurately approximate many properties of the manifold:

Ambient and geodesic distances between all point pairs
Dimension of the manifold
Topology, local neighborhoods, and local angles
Lengths and curvature of paths on the manifold
Volume of the manifold

Page 16

The punchline

As with CS for sparse models, random projections for manifold models require a number of random linear projections M that is linear in the "information level" K and logarithmic in the ambient dimension N.

This allows for a stable embedding of the submanifold under a random linear projection from the high-dimensional to the low-dimensional space.

CS can be extended from sparse signals to manifold-modeled signals!

Page 17

Conclusion

Overview of dimensionality reduction:
Linear models
Sparse models
Manifold models
Manifold learning techniques
Random projections on smooth manifolds: more efficient than manifold learning, and possibly extends CS to non-sparse signals!