
Signal Reconstruction using Least Absolute Errors

Genesis J. Islas

School of Mathematical and Statistical Sciences, Arizona State University

November 21, 2016

Overview

Inverse Problem Introduction

Regularization Models
Characteristics of different norms
Problem

Results

Gaussian Matrices
Fourier Matrices
Gaussian Noise

2 / 24

Inverse Problem:

Suppose f is an unknown function in R^n. We would like to recover f given A ∈ R^{m×n} and b ∈ R^m that satisfy the relationship

Af + e = b.

Here e ∈ R^m is a vector of errors.

The signal f can be approximated by solving the minimization problem

min_{f ∈ R^n} ||Af − b||.
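As a point of reference, here is a minimal sketch of this recovery in Python with NumPy, using the 2-norm and ordinary least squares; the dimensions and the noiseless error vector are illustrative, not taken from the slides.

```python
import numpy as np

# Illustrative dimensions (not from the slides)
m, n = 80, 50
A = np.random.randn(m, n)        # forward operator
f_true = np.random.randn(n)      # unknown signal
e = np.zeros(m)                  # error vector (noiseless for this sketch)
b = A @ f_true + e               # observations

# Solve min_f ||A f - b|| in the 2-norm with ordinary least squares
f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.linalg.norm(f_hat - f_true))
```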

3 / 24

Tikhonov Regularization

The characteristics of f determine which model can successfully solve the problem.

Example:

If f is smooth, then Tikhonov regularization can be used:

||Af − b||_2^2 + λ||Tf||_2^2.

A common choice for T is

T =

[ −1    1
        −1    1
              ⋱     ⋱
                    −1    1 ]   ∈ R^{(n−1)×n}.

T is the first-order finite-difference operator.

This model penalizes solutions with discontinuities or sharp changes.
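A minimal sketch of this model in NumPy, assuming a hand-picked λ (the slides do not specify one): build the finite-difference matrix T and solve the Tikhonov problem as an augmented least-squares system.

```python
import numpy as np

def finite_difference(n):
    """First-order finite-difference matrix T of shape (n-1, n)."""
    T = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    T[idx, idx] = -1.0
    T[idx, idx + 1] = 1.0
    return T

def tikhonov(A, b, lam=1.0):
    """Solve min_f ||A f - b||_2^2 + lam * ||T f||_2^2 via a stacked least-squares system."""
    n = A.shape[1]
    T = finite_difference(n)
    A_aug = np.vstack([A, np.sqrt(lam) * T])
    b_aug = np.concatenate([b, np.zeros(n - 1)])
    f_hat, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)
    return f_hat
```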

4 / 24

TV Regularization

The characteristics of f determine which model can successfully solve the problem.

Example:

If the derivative of f is known to be sparse (f is piecewise constant), then TV regularization can be used:

||Af − b||_2^2 + λ||Tf||_1.

This model is able to recover functions with discontinuities.
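A sketch of the TV model using the CVXPY modeling library (an assumption; the slides do not say how the problem was solved), with λ left as a user-chosen parameter.

```python
import numpy as np
import cvxpy as cp

def tv_reconstruct(A, b, lam=1.0):
    """Solve min_f ||A f - b||_2^2 + lam * ||T f||_1 with T the finite-difference matrix."""
    n = A.shape[1]
    T = np.diff(np.eye(n), axis=0)            # (n-1) x n finite-difference matrix
    f = cp.Variable(n)
    objective = cp.Minimize(cp.sum_squares(A @ f - b) + lam * cp.norm(T @ f, 1))
    cp.Problem(objective).solve()
    return f.value
```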

5 / 24

Norms

0-norm: the number of non-zero values

||x||_0 = |{k : x_k ≠ 0}|

1-norm:

||x||_1 = |x_1| + |x_2| + ... + |x_n|

2-norm:

||x||_2 = (x_1^2 + x_2^2 + ... + x_n^2)^{1/2}

[Figure: the point x̂ on the line Ax = b selected by minimizing the ℓ1 norm (left) versus the ℓ2 norm (right).]
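A quick worked example of the three norms in NumPy (the vector is illustrative):

```python
import numpy as np

x = np.array([0.0, 3.0, 0.0, -4.0])

norm0 = np.count_nonzero(x)      # "0-norm": number of non-zero entries -> 2
norm1 = np.linalg.norm(x, 1)     # 1-norm: |3| + |-4| = 7.0
norm2 = np.linalg.norm(x, 2)     # 2-norm: sqrt(3^2 + 4^2) = 5.0
```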

6 / 24

Problem

Goal: Recover f given A ∈ R^{m×n} and b ∈ R^m that satisfy the relationship

Af + e = b.

Assume ||e||_0 is small. Then the signal has only a few corruptions but the magnitudes can be large.

Perhaps we would like to solve the minimization problem

min_{f ∈ R^n} ||Af − b||_0.

However, this is not a convex problem and is not easy to solve.

7 / 24

Reconstruction using ℓ1 vs ℓ2

We have seen that there is a relationship between ℓ0 and ℓ1.

Under certain conditions we have the following equivalence:

f* = arg min_{f ∈ R^n} ||Af − b||_0  ⟺  f* = arg min_{f ∈ R^n} ||Af − b||_1.

We are interested in comparing the accuracy of the solutions to the following minimization problem using the ℓ1 vs the ℓ2 norm:

f* = arg min_{f ∈ R^n} ||Af − b||.
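A minimal sketch of the two reconstructions being compared, assuming CVXPY for the ℓ1 (least absolute errors) fit and ordinary least squares for the ℓ2 fit; the solver choice is an assumption, not stated in the slides.

```python
import numpy as np
import cvxpy as cp

def recover_l1(A, b):
    """f* = argmin_f ||A f - b||_1 (least absolute errors)."""
    f = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm(A @ f - b, 1))).solve()
    return f.value

def recover_l2(A, b):
    """f* = argmin_f ||A f - b||_2 (ordinary least squares)."""
    f_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
    return f_hat
```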

8 / 24

Single Case

Let A be an 80 × 50 matrix with values drawn from the standard normal distribution.

The signal, f ∈ R^50, has the form

f = 5.27 sin(0.35 · 2π/50 · x) + 5.08 cos(0.94/50 · x).

9 / 24

Single Case

Let ||e||_0 = 5 with e_i ∈ [0, 100].
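The single-case setup can be sketched as follows; the signal formula and the random placement of the five corruptions are my reading of the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

# 80 x 50 matrix with standard normal entries
m, n = 80, 50
A = rng.standard_normal((m, n))

# Signal on the grid x = 1, ..., 50 (formula reconstructed from the slide)
x = np.arange(1, n + 1)
f = 5.27 * np.sin(0.35 * 2 * np.pi / 50 * x) + 5.08 * np.cos(0.94 / 50 * x)

# Sparse corruption: ||e||_0 = 5 with entries in [0, 100]
e = np.zeros(m)
support = rng.choice(m, size=5, replace=False)
e[support] = rng.uniform(0.0, 100.0, size=5)

b = A @ f + e
```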

10 / 24

Single Case

11 / 24

Gaussian Standard Normal Matrix

A ∈ R^{m×50} with values drawn from a standard normal distribution

m = 50, 52, ..., 100

The signal is given by

f = A sin(k · 2π/50 · x) + B cos(j/50 · x)

for x = 1, 2, ..., 50

||e||_0 = 0, 1, ..., 20

Each case is performed 20 times for ℓ1 and ℓ2; a sketch of one cell of this experiment follows.
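A sketch of one cell of the experiment grid (fixed m and ||e||_0, 20 random trials), assuming CVXPY for the ℓ1 solve; the amplitude and frequency draws and the success threshold on the relative error are assumptions, since the slides do not state them.

```python
import numpy as np
import cvxpy as cp

def l1_recover(A, b):
    """Least absolute errors fit: argmin_f ||A f - b||_1."""
    f = cp.Variable(A.shape[1])
    cp.Problem(cp.Minimize(cp.norm(A @ f - b, 1))).solve()
    return f.value

def success_probability(m, sparsity, n=50, trials=20, seed=0):
    """Fraction of trials in which the l1 fit recovers f (threshold is an assumption)."""
    rng = np.random.default_rng(seed)
    x = np.arange(1, n + 1)
    hits = 0
    for _ in range(trials):
        A = rng.standard_normal((m, n))
        amp1, amp2 = rng.uniform(1.0, 10.0, size=2)   # hypothetical amplitude draws
        k, j = rng.uniform(0.0, 5.0, size=2)          # hypothetical frequency draws
        f = amp1 * np.sin(k * 2 * np.pi / 50 * x) + amp2 * np.cos(j / 50 * x)
        e = np.zeros(m)
        idx = rng.choice(m, size=sparsity, replace=False)
        e[idx] = rng.uniform(0.0, 100.0, size=sparsity)
        f_hat = l1_recover(A, A @ f + e)
        rel_err = np.linalg.norm(f_hat - f) / np.linalg.norm(f)
        hits += rel_err < 1e-3                        # success threshold is an assumption
    return hits / trials
```

Sweeping m over 50, 52, ..., 100 and ||e||_0 over 0, 1, ..., 20 and recording this fraction for each pair gives a recovery map of the kind shown on the next slide.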

12 / 24

Gaussian Standard Normal Matrix

Success - 1 (Yellow), Failure - 0 (Blue)

13 / 24

Gaussian Standard Normal Matrix

Relative Error: ||x − x_approx||_2 / ||x||_2

14 / 24

Fourier Matrix

A ∈ C^{m×50} is a Fourier matrix with values determined by

A_kj = (1/√m) exp(2πi kj / m)

(see the construction sketch after this slide)

m = 50, 52, ..., 100

The signal stays the same,

f = A sin(k · 2π/50 · x) + B cos(j/50 · x)

||e||_0 = 0, 1, ..., 20

The probability of recovery is calculated from 20 iterations.
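A construction of the Fourier matrix defined above might look like the following in NumPy; the 0-based index ranges are an assumption.

```python
import numpy as np

def fourier_matrix(m, n=50):
    """Matrix with entries A_kj = exp(2*pi*i*k*j / m) / sqrt(m)."""
    k = np.arange(m).reshape(-1, 1)   # row index (0-based, an assumption)
    j = np.arange(n).reshape(1, -1)   # column index (0-based, an assumption)
    return np.exp(2.0 * np.pi * 1j * k * j / m) / np.sqrt(m)

A = fourier_matrix(60)   # e.g. m = 60; A has shape (60, 50) and complex entries
```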

15 / 24

Fourier Matrix

Success - 1 (Yellow), Failure - 0 (Blue)

16 / 24

Fourier Matrix

17 / 24

Gaussian Noise

Now consider the case where e consists of a few large corruptions as well as small corruptions everywhere.

18 / 24

Gaussian Noise

A ∈ R^{m×50} with m = 50, 52, ..., 100.

The signal stays the same,

f = A sin(k · 2π/50 · x) + B cos(j/50 · x).

Now e = e1 + e2, where

||e1||_0 = 0, 1, ..., 20

e2 has values drawn from a normal distribution

Each case is performed 20 times for ℓ1 and ℓ2; a sketch of the noise construction follows.
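A sketch of this mixed error model, where the number of sparse corruptions and the variance of the Gaussian part are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sparsity = 80, 10                 # illustrative values

# e1: a few large corruptions, ||e1||_0 = sparsity
e1 = np.zeros(m)
idx = rng.choice(m, size=sparsity, replace=False)
e1[idx] = rng.uniform(0.0, 100.0, size=sparsity)

# e2: small Gaussian noise in every entry (unit variance is an assumption)
e2 = rng.standard_normal(m)

e = e1 + e2
```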

19 / 24

Gaussian Noise

20 / 24

Gaussian Noise

Residual: ||x − x_approx||_2

21 / 24

Gaussian Noise

22 / 24

Acknowledgments

Dr. Rodrigo Platte

Dr. Toby Sanders

Research Training Group Program

School of Mathematical and Statistical Sciences, ASU

23 / 24

Thank You.

Questions?

24 / 24
