Low-Complexity Parameter Estimation Algorithms using Cooperation and Sparsity


II International Workshop on Challenges and Trends on Broadband Wireless Mobile Access Networks – Beyond LTE-A


Low-Complexity Parameter Estimation Algorithms using Cooperation and Sparsity

Vítor H. Nascimento

Laboratório de Processamento de Sinais

Departamento de Eng. de Sistemas Eletrônicos

Escola Politécnica

Universidade de São Paulo

November 6, 2014


Contents

1 Introduction
   • Parameter estimation
   • Echo cancellation
   • Channel estimation

2 Prior information
   • Sparsity
   • Low-cost solutions
   • Other priors

3 Collaborative estimation
   • Distributed estimation
   • Low-complexity algorithms


Parameter estimation

Putting numbers to mathematical models.

• Adaptive filtering

• Kalman filtering

• Current research goals:

• Reduce complexity — save energy and chip area

• Improve performance — better tracking without knowledge of the underlying variation model

• Sparse system identification

• Distributed estimation


Example: acoustic echo

[Figure: speech recorded in a car and the resulting echo, amplitude versus time (s).]


Echo model

[Diagram: User B's far-end signal x(n) drives the loudspeaker; the echo path produces y(n), and the microphone picks up d(n) = s(n) + y(n) + noise, i.e., User A's speech s(n) plus the echo plus noise.]


The echo is a sum of delayed and scaled copies of x(n):

y(n) = a0 x(n − τ0) + a1 x(n − τ1) + · · ·






Sampling the delays on a uniform grid gives a long FIR model:

y(n) = a0 x(n) + a1 x(n−1) + a2 x(n−2) + · · · + a999 x(n−999)


In vector form,

y(n) = [a0 a1 · · ·] [x(n − τ0), x(n − τ1), . . .]ᵀ = aᵀxn


In a typical echo path most of the coefficients are zero, i.e., the response is sparse:

y(n) = [0 0 0 −0.4 0 · · · 0 0.2 0 · · ·] xn
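To make the model concrete, here is a minimal NumPy sketch of this sparse FIR echo path; the tap positions, values, and noise level are hypothetical, chosen only for illustration:

import numpy as np

rng = np.random.default_rng(0)

N = 1000                       # FIR model length (as in the 1000-tap example)
a = np.zeros(N)                # sparse echo impulse response
a[3], a[120] = -0.4, 0.2       # hypothetical nonzero taps at delays 3 and 120

x = rng.standard_normal(5000)  # far-end signal x(n) from User B
s = rng.standard_normal(5000)  # near-end speech s(n) from User A

y = np.convolve(x, a)[:len(x)]                  # echo: y(n) = a^T x_n
d = s + y + 0.01 * rng.standard_normal(len(x))  # microphone: d(n) = s(n) + y(n) + noise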


Echo impulse response

[Figures: two example echo impulse responses an, for n = 0 … 180; only a few coefficients differ significantly from zero.]


Challenges

• Long impulse responses ⇒ high complexity & low tracking speed
• Fast changes in the impulse response
• Fast changes in signal and noise power
• No model available for time variation, i.e., for how the ai vary with time


Channel estimation

Challenges:

• Quick acquisition (reduce the use of pilots)
• Track fast mobile users
• Keep complexity low (especially for massive MIMO)


Prior information

• Sparsity of the solution

• Smoothness of the solution

• Statistical properties

• Model for time variation (of the ai)

Advantages

• Requires less data

• Better noise rejection

• Faster tracking

• More robust against model uncertainties


Least squares

Standard method for parameter estimation (Gauß, 1795):

Minimize the squared error

min_x ‖Ax − b‖₂²

[Figure: geometric view of least squares, showing the vectors b, Ax, and the residual Ax − b.]
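As a reference point, a minimal NumPy sketch of the least-squares solution on synthetic data (the dimensions and noise level are illustrative):

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8))                 # 64 equations, 8 unknowns
x_true = rng.standard_normal(8)
b = A @ x_true + 0.01 * rng.standard_normal(64)  # noisy measurements

# Solve min_x ||A x - b||_2^2
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)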


Measures of distance — norms

How far is x = (a, b) from 0?

• Euclidean distance (ℓ2-norm): ‖x‖₂ = √(a² + b²)
• Manhattan distance (ℓ1-norm): ‖x‖₁ = |a| + |b|
• Maximum (ℓ∞-norm): ‖x‖∞ = max{|a|, |b|}
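These three norms correspond directly to NumPy's vector norms; a quick check:

import numpy as np

x = np.array([3.0, -4.0])         # x = (a, b)
l2 = np.linalg.norm(x, 2)         # sqrt(a^2 + b^2) = 5.0
l1 = np.linalg.norm(x, 1)         # |a| + |b| = 7.0
linf = np.linalg.norm(x, np.inf)  # max{|a|, |b|} = 4.0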


Regularization

— A method for incorporating prior knowledge

Regularized least squares

min_x { ‖Ax − b‖₂² + f(x) }

f(x) measures how far x is from the kind of solution we expect.

Examples:

• f(x) = ‖x‖₂²: if we know that x is not too far from the origin
• f(x) = ‖x‖₁: if we know that x is sparse
• f(x) = TV(x): if we know that x is smooth
• f(x) = ‖x‖∞: to optimize the worst-case scenario
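The first example even has a closed form: with f(x) = λ‖x‖₂² (the weight λ is left implicit on the slide), the minimizer is x = (AᵀA + λI)⁻¹ Aᵀb. A small sketch with an arbitrary λ:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 8))
b = rng.standard_normal(64)
lam = 0.1                         # regularization weight (illustrative)

# min_x ||A x - b||_2^2 + lam * ||x||_2^2
x_ridge = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)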


Sparsity

Sparse (or approximately sparse) systems occur

frequently:

• Echo impulse responses

• Channels in mobile communications

• Underwater channels

• Directions of interferers in beamforming

• Radar targets

• Image processing and acquisition


Sparsity-inducing regularizations

[Figures: a 2-D example showing the true (sparse) solution together with the noisy least-squares solution, the ℓ1-regularized solution, and the ℓ2-regularized solution; the "pointy" level sets of ℓp norms with p ≤ 1 (ℓ1, ℓ0.5, and in the limit ℓ0) are what push solutions toward sparsity.]

Solution to regularized problems

Difficulties:

• Lack of a closed-form solution
• For ℓ1: non-differentiable cost function
• For ℓp with p < 1:
   • non-convex cost function
   • finding the global minimum: NP-hard
   • finding a local minimum: polynomial time

Solution: use iterative algorithms, starting from good initial conditions.

Example: homotopy algorithms
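One standard iterative algorithm for the ℓ1 case is ISTA (proximal gradient with soft-thresholding); the sketch below is a generic illustration, not the specific solver developed in this work:

import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (element-wise soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    """Minimize (1/2) ||A x - b||_2^2 + lam * ||x||_1 by proximal gradient."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)       # gradient of the quadratic term
        x = soft_threshold(x - grad / L, lam / L)
    return x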


Homotopy

Solve a sequence of problems

min_xk { ‖A xk − b‖₂² + λk ‖xk‖p }

1 Start with a large value of λ0 ⇒ x0 = 0.
2 Reduce λk slowly, using xk−1 as the initial condition for the new optimization.

Problem: must solve a large number of systems of linear equations.

Solution: use DCD.
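A schematic sketch of the homotopy loop, with soft-thresholding iterations standing in as the inner solver; the geometric λk schedule and all constants are illustrative assumptions, not the schedule from the talk:

import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def homotopy_l1(A, b, lam_final, n_stages=20, inner_iters=50):
    """Solve a sequence of l1-regularized problems with decreasing lam_k,
    warm-starting each stage at the previous solution x_{k-1}."""
    L = np.linalg.norm(A, 2) ** 2
    lam = np.max(np.abs(A.T @ b))      # for lam_0 >= this value, x_0 = 0 is optimal
    x = np.zeros(A.shape[1])
    for _ in range(n_stages):
        lam = max(0.7 * lam, lam_final)    # reduce lam_k slowly
        for _ in range(inner_iters):       # inner solver (ISTA-style step)
            x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x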


Dichotomous coordinate-descent algorithm

Zakharov & Tozer, 2004.

• Iterative method for solving systems of linear equations
• Avoids multiplications and divisions (only shifts and adds)
• Easy to implement in hardware (FPGAs or ASICs)
• Many problems in engineering are solved by sequentially solving systems of linear equations

Why is DCD useful in our case?

• Use the previous estimate xk−1 as a warm start for DCD at iteration k
• Very few DCD iterations are necessary (1 ∼ 8)
• Each DCD iteration is very cheap
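A floating-point sketch of cyclic DCD for solving Rx = β, following the structure of Zakharov & Tozer (2004); in hardware the power-of-two step sizes turn every multiplication into a bit shift, and the parameter names here are illustrative:

import numpy as np

def dcd(R, beta, H=1.0, n_bits=15, max_updates=8, x0=None):
    """Solve R x = beta (R symmetric positive definite) by cyclic
    dichotomous coordinate descent. H bounds |x_i|; the step size is
    halved at each bit level, so updates need only shifts and adds."""
    N = len(beta)
    x = np.zeros(N) if x0 is None else x0.copy()
    r = beta - R @ x                   # residual (supports a warm start)
    alpha = H
    updates = 0
    for _ in range(n_bits):            # one pass per bit of precision
        alpha /= 2
        improved = True
        while improved:
            improved = False
            for j in range(N):         # cyclic sweep over coordinates
                if abs(r[j]) > (alpha / 2) * R[j, j]:
                    s = np.sign(r[j])
                    x[j] += s * alpha              # update one coordinate
                    r -= s * alpha * R[:, j]       # keep residual consistent
                    improved = True
                    updates += 1
                    if updates >= max_updates:
                        return x
    return x

Supporting a warm start through x0 is what makes DCD pair well with homotopy: at each stage the solver starts from xk−1, so only a few cheap updates are needed, matching the 1 ∼ 8 figure above.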


Example

[Plots: (left) MSE in dB and (right) number of operations, both versus the number of non-zeros, for Hl1-DCD with Nu = 1, 2, 8, and 32, compared with MP, YALL1, and (for the MSE) an oracle estimator.]

Parameters of the scenario: M = 64 measurements, N = 256 unknowns, noise standard deviation σ = 0.01 (variance 10⁻⁴).


Other ways of using sparsity

Split the set of coefficients and use several independent estimations.

[Diagram: a tapped delay line with coefficients w1 … w4 split into blocks, each block adapted with its own error signal e21(n), e22(n).]

[Plot: learning curves comparing NLMS, SSLMS, and the VL algorithm.]

• Needs less data to acquire a good estimate.


Regularization for smooth solutions

TV regularization (derivative): acoustic images


Blind equalization

Prof. Magno Silva, Maria Miranda & João Mendes


Collaborative estimation

[Diagram: convex combination of two adaptive filters w1(n) and w2(n) sharing the input x(n); their outputs y1(n) and y2(n) are mixed as y(n) = λ(n) y1(n) + (1 − λ(n)) y2(n) and compared with d(n) to form the overall error e(n), while each filter adapts with its own error ei(n).]

• Exchange of information between different algorithms
• Increased diversity leads to improved performance
• More robust against parameter choices and changes in the environment
• Overall performance same as or better than possible with each component filter
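A minimal sketch of one such scheme, a convex combination of two LMS filters with different step sizes; the mixing parameter follows the usual sigmoid-activation gradient rule, and all constants are illustrative:

import numpy as np

rng = np.random.default_rng(0)
N, T = 16, 20000
w_true = rng.standard_normal(N)

w1 = np.zeros(N); w2 = np.zeros(N)   # fast and slow component filters
mu1, mu2, mu_a = 0.05, 0.005, 10.0   # step sizes (illustrative)
a = 0.0                              # lambda(n) = sigmoid(a)

x_buf = np.zeros(N)
for n in range(T):
    x_buf = np.roll(x_buf, 1); x_buf[0] = rng.standard_normal()
    d = w_true @ x_buf + 0.01 * rng.standard_normal()

    y1, y2 = w1 @ x_buf, w2 @ x_buf
    lam = 1.0 / (1.0 + np.exp(-a))   # mixing parameter in (0, 1)
    y = lam * y1 + (1 - lam) * y2
    e, e1, e2 = d - y, d - y1, d - y2

    w1 += mu1 * e1 * x_buf           # independent LMS updates
    w2 += mu2 * e2 * x_buf
    a += mu_a * e * (y1 - y2) * lam * (1 - lam)   # adapt the combination

The sigmoid keeps λ(n) in (0, 1), consistent with the bullet above that the combination performs as well as or better than each component filter.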


Diversity gain

[Figure illustrating the diversity gain obtained by combining different filters.]


Robustness

[Plot: MSD versus time for the combination and two variable step-size filters (VSS a and VSS b), across scenarios µo = µf, µo = µs, µo > µf, µo < µs, and µs < µo < µf.]


Advantages

• May approach performance of model-aided algorithms using model-free methods
• Highly parallelizable
• Very robust to environment conditions
• Redundancy in filters can be used to limit complexity


Distributed estimation / sensor networks

[Diagram: a sensor network with 10 nodes; links between neighboring nodes are labeled with combination weights between 0.1 and 0.7.]

Collaboration of several nodes

• Information exchange: better performance

• How much information exchange is necessary?

• Parallel estimation algorithms
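A common model for such collaboration is diffusion adaptation; below is a minimal adapt-then-combine (ATC) diffusion LMS sketch over a small ring network, where the topology, combination weights, and step size are illustrative:

import numpy as np

rng = np.random.default_rng(0)
K, N, T = 10, 8, 5000                # nodes, filter length, iterations
w_true = rng.standard_normal(N)

# Ring topology: each node combines with itself and its two neighbours.
C = np.zeros((K, K))
for k in range(K):
    C[k, k] = 0.5
    C[k, (k - 1) % K] = 0.25
    C[k, (k + 1) % K] = 0.25         # rows sum to 1 (combination weights)

W = np.zeros((K, N))                 # one estimate per node
mu = 0.01
for _ in range(T):
    X = rng.standard_normal((K, N))               # regressors at each node
    d = X @ w_true + 0.01 * rng.standard_normal(K)
    psi = W + mu * (d - np.sum(X * W, axis=1))[:, None] * X  # adapt step
    W = C @ psi                                    # combine step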


Distributed MIMO

• Massive MIMO
• Device-to-device communications

Distributed field estimation

Prof. Cassio Lopes


Low-cost estimation algorithms

Better performance:

• Prior information

• Cooperative estimation

• Diversity gain: different algorithms

• Robustness: different settings

Lower-cost algorithms:

• Reduce redundancy → differential cooperation

• Avoid multiplications and divisions → DCD.


Thank you.
