

Page 1:

Random Projections of Signal Manifolds (Michael Wakin and Richard Baraniuk)

Random Projections for Manifold Learning (Chinmay Hegde, Michael Wakin and Richard Baraniuk)

Random Projections of Smooth Manifolds (Richard Baraniuk and Michael Wakin)

Presented by: John Paisley

Duke University

Page 2:

Overview/Motivation

• Random projections provide a linear, nonadaptive form of dimensionality reduction.

• If we can ensure that the manifold information is preserved in these projections, we can use all manifold learning techniques in this compressed space and know the results will be (essentially) the same.

• Therefore we can sense compressively, meaning we can bypass the overhead of acquiring and storing the full high-dimensional signal and instead directly sense the compressed (dimensionality-reduced) signal.
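As a concrete illustration (not from the slides), a random projection is simply multiplication by a random M x N matrix; the sketch below, with hypothetical dimensions N and M, compresses a signal nonadaptively. Because the matrix is drawn independently of the data, the same matrix can be applied to any signal, which is what makes the sensing nonadaptive.

```python
import numpy as np

# Hypothetical sizes: ambient dimension N, number of measurements M
N, M = 1024, 32

rng = np.random.default_rng(0)
# Random Gaussian matrix, scaled so projected lengths are preserved on average
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

x = rng.standard_normal(N)   # a signal in R^N
y = Phi @ x                  # nonadaptive compressed measurements in R^M
print(y.shape)               # (32,)
```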

Page 3:

Random Projections of Signal Manifolds (ICASSP 2006)

• This paper: If we have manifold information, we can perform compressive sensing using significantly fewer measurements.

• Whitney’s Embedding Theorem: for a noiseless manifold with intrinsic dimensionality K, this theorem implies that a signal x in R^N, projected into R^M by an M x N orthonormal matrix P (y = Px), can be recovered with high probability whenever M > 2K.

• Note that K is the intrinsic dimensionality, which is different from (and less than) the level of sparsity.

Page 4:

Random Projections of Signal Manifolds (ICASSP 2006)

• The recovery algorithm considered here is a simple search over the projected manifold for the nearest neighbor to the measurement vector (see the sketch below).

• Consider the case where the data is noisy, so the signal lies slightly off the manifold, and define the recovery error relative to the nearest point on the manifold.
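A minimal sketch of this nearest-neighbor recovery (a toy construction, not the paper's experiments): a hypothetical one-parameter manifold of shifted Gaussian pulses (K = 1) is sampled densely, projected, and recovery returns the sample whose projection is closest to the measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 256, 8                        # hypothetical ambient and projected dimensions

def pulse(theta, width=5.0, N=N):
    """A point on a 1-D signal manifold: a Gaussian pulse centered at theta."""
    t = np.arange(N)
    return np.exp(-0.5 * ((t - theta) / width) ** 2)

Phi = rng.standard_normal((M, N)) / np.sqrt(M)

# Densely sample the manifold; this grid is the search space for recovery
thetas = np.linspace(20, N - 20, 2000)
manifold = np.array([pulse(th) for th in thetas])
projected_manifold = manifold @ Phi.T

# A noisy measurement of an unknown signal slightly off the manifold
theta_true = 123.4
y = Phi @ (pulse(theta_true) + 0.01 * rng.standard_normal(N))

# Recovery: nearest neighbor in the projected (measurement) domain
i = np.argmin(np.linalg.norm(projected_manifold - y, axis=1))
print(theta_true, thetas[i])         # the estimate should be close to 123.4
```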

Page 5:

Random Projections of Signal Manifolds (ICASSP 2006)

Page 6:

Random Projections for Manifold Learning (NIPS 2007)

• How does a random projection of a manifold (from R^N into R^M) affect the ability to estimate the intrinsic dimensionality of the manifold, and to embed that manifold into a Euclidean space that preserves geodesic distances (e.g., via the Isomap algorithm)? How many projections M are needed?

• Grassberger-Procaccia (GP) algorithm: a common algorithm for estimating the intrinsic dimensionality of a manifold from sampled points.

• The dimension estimate can be written as C(r1)/C(r2) = (r1/r2)^K, where K is the intrinsic dimensionality and C(r) is the fraction of pairs of sample points within distance r. This uses the fact that the volume of the intersection of a K-dimensional object with a hypersphere of radius r is proportional to r^K.
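As a rough illustration of the GP estimate (the data, radii, and sizes below are arbitrary choices, not from the paper), the correlation sums C(r1) and C(r2) are computed from pairwise distances and K is read off from the scaling relation.

```python
import numpy as np

def gp_dimension(X, r1, r2):
    """Grassberger-Procaccia estimate: C(r) is the fraction of point pairs
    within distance r, and C(r1)/C(r2) = (r1/r2)^K gives
    K = log(C(r1)/C(r2)) / log(r1/r2)."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    d = dists[np.triu_indices(len(X), k=1)]
    C1, C2 = np.mean(d < r1), np.mean(d < r2)
    return np.log(C1 / C2) / np.log(r1 / r2)

# Example: points on a circle (intrinsic dimension K = 1) embedded in R^20
rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 500)
X = np.zeros((500, 20))
X[:, 0], X[:, 1] = np.cos(t), np.sin(t)
print(gp_dimension(X, r1=0.2, r2=0.4))   # should be close to 1
```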

Page 7:

Random Projections for Manifold Learning (NIPS 2007)

• Isomap algorithm: produces a low-dimensional mapping in which Euclidean distance in the mapped space approximates geodesic distance along the manifold in the original space.
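For context, a minimal Isomap call using scikit-learn's implementation (not the presenters' code) on the standard swiss-roll example:

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# Swiss roll: a 2-D manifold embedded in R^3
X, _ = make_swiss_roll(n_samples=1000, noise=0.05)

# Isomap builds a k-nearest-neighbor graph, computes shortest-path (geodesic)
# distances on it, then finds a low-dimensional embedding whose Euclidean
# distances approximate those geodesic distances
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (1000, 2)
```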

Page 8:

Random Projections for Manifold Learning (NIPS 2007)

• A lower bound on M sufficient for the GP algorithm is given; the proof appears in the paper.

Page 9:

Random Projections for Manifold Learning (NIPS 2007)

• A lower bound on M sufficient for the Isomap algorithm is given; the proof appears in the paper.

Page 10:

Random Projections for Manifold Learning (NIPS 2007)

• ML-RP algorithm (Manifold Learning using Random Projections): developed in the paper to adaptively find a sufficient M (a sketch follows below).
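Below is a rough sketch of the ML-RP idea under stated assumptions: it sweeps over candidate values of M rather than adding one projection at a time, and, purely for illustration, it scores each embedding against geodesic distances estimated from the uncompressed data, whereas ML-RP itself works only from the projected data (using Isomap's own residual variance as the stopping criterion).

```python
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

rng = np.random.default_rng(4)

# Hypothetical setup: a 2-D manifold (swiss roll) lifted linearly into R^N
X3, _ = make_swiss_roll(n_samples=500)
N = 60
X = X3 @ rng.standard_normal((3, N))

def residual_variance(geo, emb):
    """1 - R^2 between a geodesic-distance matrix and embedding distances."""
    d_emb = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    r = np.corrcoef(geo.ravel(), d_emb.ravel())[0, 1]
    return 1.0 - r ** 2

# Reference geodesic distances estimated from the uncompressed data
# (available here only because this is a simulation)
geo = Isomap(n_neighbors=10, n_components=2).fit(X).dist_matrix_

# Sweep over the number of random projections M; the embedding computed from
# the projected data should match the geodesic structure once M is large enough
for M in (2, 5, 10, 20):
    Phi = rng.standard_normal((M, N)) / np.sqrt(M)
    emb = Isomap(n_neighbors=10, n_components=2).fit_transform(X @ Phi.T)
    print(M, residual_variance(geo, emb))
```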

Page 11:

Random Projections for Manifold Learning (NIPS 2007)

Page 12:

Random Projections for Manifold Learning (NIPS 2007)

Page 13:

Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics)

Page 14:

Random Projections of Smooth Manifolds (in Foundations of Computational Mathematics)

Sketch of proof

• Sample points from the manifold so that the geodesic distance from any point on the manifold to the nearest sample is below a chosen threshold; also sample points from the manifold's tangent spaces, ensuring every tangent direction is close to some sample. Then use the JL lemma to guarantee that the random projection preserves the relative distances among all of these sampled points. Finally, use supporting theorems, together with the way the points were sampled, to extend this distance preservation from the samples to every point on the manifold.
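To make the JL step concrete, here is a small numerical check (with arbitrary sizes) that a random projection approximately preserves the pairwise distances of a finite set of sample points:

```python
import numpy as np

rng = np.random.default_rng(5)
N, M, n_points = 2000, 200, 50    # hypothetical ambient dim, projected dim, sample count

X = rng.standard_normal((n_points, N))            # a finite set of sampled points
Phi = rng.standard_normal((M, N)) / np.sqrt(M)    # random projection, norm-preserving on average
Y = X @ Phi.T

# Compare pairwise distances before and after projection; the JL lemma says the
# ratios concentrate near 1 once M is on the order of log(n_points) / eps^2
iu = np.triu_indices(n_points, k=1)
d_orig = np.linalg.norm(X[:, None] - X[None, :], axis=-1)[iu]
d_proj = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)[iu]
ratios = d_proj / d_orig
print(ratios.min(), ratios.max())                 # both should be close to 1
```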