Page 1: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Soung Chang Liew, Hongyi Yao, Xiaohang Li

Page 2: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Channel Gain or Signal-to-Noise Ratio (SNR)

The channel gain H of a wireless channel (S, R) is defined by Y = H X, where X is the signal sent by S and Y is the signal received by R.

[Figure: channel model, source S transmits X to receiver R over a channel with gain H]
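For concreteness, a minimal Python sketch of this channel model (the gain value, the signal, and the additive noise term below are illustrative assumptions; noise appears explicitly in the ADMOT slides later):

import numpy as np

rng = np.random.default_rng(0)

H = 0.8 + 0.3j                      # assumed complex channel gain of the link (S, R)
X = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)           # signal sent by S
Z = 0.1 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))   # receiver noise (assumed)

Y = H * X + Z                       # received signal: Y = H X (plus noise)

# Empirical SNR at R: received signal power over noise power
snr_db = 10 * np.log10(np.mean(np.abs(H * X) ** 2) / np.mean(np.abs(Z) ** 2))
print(round(snr_db, 1))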

Page 3: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Channel Gain Monitoring

In a wireless network, knowledge of the channel gains is needed to design high-performance communication schemes.

Due to fading, node mobility, and node power instability, channel gains vary with time.

Thus, tracking and estimating the channel gains of wireless channels is fundamentally important.

This work addresses the following question: What is the minimum communication overhead such that all network channels can be tracked?

Page 4: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Toy Example

[Figure: three transmitters S1, S2, S3 with channel gains H1, H2, H3 to a single receiver R]

Prior Knowledge: H1 = 1, H2 = 1, and H3 = 1.

Update: There exists i in {1, 2, 3} such that Hi has varied.

Monitoring Objective: The receiver R wants to recover i and Hi.

Page 5: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Toy Example

Recovering i and Hi: Unit Probing

[Figure: unit probing. Time slot 1: only S1 sends a probe of 1. Time slot 2: only S2 sends a probe of 1. Time slot 3: only S3 sends a probe of 1.]

Three time slots are required for probing.

Hi is unknown; Hj = 1 for j ≠ i.

Page 6: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Toy Example (Differential Group Probing)

[Figure: differential group probing]

Time slot 1: S1, S2, S3 send probes 1, 1, 1; R receives Y[1] = 3 + (Hi − 1).

Time slot 2: S1, S2, S3 send probes 1, 2, 3; R receives Y[2] = 6 + (Hi − 1)·i.

Using the a priori knowledge of the channel gains, R computes [Y'[1], Y'[2]] = [3, 6] and then the difference: [Y[1], Y[2]] − [Y'[1], Y'[2]] = (Hi − 1)·[1, i].

Since the vectors [1, 1], [1, 2], and [1, 3] are pairwise linearly independent, R can decode i and then Hi. One time slot saved!

Hi is unknown; Hj = 1 for j ≠ i.
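A minimal Python sketch of this toy example (the identity of the varied channel, i = 2, and its new gain, 1.7, are illustrative assumptions):

import numpy as np

H_prior = np.array([1.0, 1.0, 1.0])      # a priori knowledge: H1 = H2 = H3 = 1
H_true = np.array([1.0, 1.7, 1.0])       # assumed: channel 2 has varied (unknown to R)

probes = np.array([[1.0, 1.0, 1.0],      # time slot 1: S1, S2, S3 send 1, 1, 1
                   [1.0, 2.0, 3.0]])     # time slot 2: S1, S2, S3 send 1, 2, 3

Y = probes @ H_true                      # what R actually receives
Y_pred = probes @ H_prior                # what R would receive if nothing had changed
D = Y - Y_pred                           # difference = (Hi - 1) * [1, i]

i = int(round(D[1] / D[0]))              # the ratio of the two differences reveals i
H_i = 1.0 + D[0]                         # the first difference reveals Hi - 1
print(i, round(H_i, 3))                  # -> 2 1.7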

Page 7: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Motivation Raised by the Toy Example

Unit probing vs. differential group probing:

Unit probing (scheduling interference): since we do not know which channel varied, all channels must be sampled one by one.

Differential group probing (embracing interference): all channels are sampled simultaneously, exploiting the a priori knowledge.

Question: Does differential group probing suffice to achieve the minimum communication overhead? Answer: YES!

Page 8: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Outline of the Talk

Fundamental setting: multiple transmitters and one receiver.
  The scaling law of tracking all channel gains.
  Achieving the scaling law by ADMOT.

General setting: multiple transmitters, relay nodes, and receivers.
  The scaling law of the fundamental setting still holds.
  Achieving the scaling law by ADMOT-GENERAL.

Simulation results.

Page 9: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Fundamental Setting

Multiple transmitters and one receiver:

[Figure: n transmitters S1, S2, …, Sn with channel gains H1, H2, …, Hn to a single receiver R]

For Si, the probe in the s'th time slot is Xi[s]. R receives:

Y[s] = Σ_{i=1}^{n} Hi · Xi[s]

Definition (State): The state H is a length-n vector, with the i'th component equaling Hi. The vector H' is the a priori knowledge of H maintained by R.
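A minimal Python sketch of this probing model (toy sizes, noise omitted), showing that each time slot gives R exactly one linear sample of the state H:

import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3                          # toy sizes: n transmitters, m probing time slots

H = rng.standard_normal(n)           # current channel gains H1, ..., Hn (unknown to R)
X = rng.standard_normal((m, n))      # X[s, i] = probe Xi[s] sent by Si in time slot s

# Each time slot s gives R a single superposed sample: Y[s] = sum_i Hi * Xi[s]
Y = X @ H
print(Y.shape)                       # (3,) -- one linear sample of the state H per slot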

Page 10: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

State Variation

The state variation H-H’ is said to be approx-k-sparse if there are at most k “significant” nonzero components in H-H’.

Practical interpretation: Approx-k-sparse state variation means there are at most k channels suffering significant variations, while the variations of other channels are negligible.

Details about “approx” can be found in paper [1].


Page 11: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Main Theorem

Theorem: When the state variation H − H' is approx-k-sparse, we have:

Scaling Law: At least Ω(k·log(n/k)) time slots are required for reliably estimating all n channels.

Achievability: There exists a monitoring scheme using O(k·log(n/k)) time slots, such that R can estimate all n channels in a reliable and computationally efficient manner.

Page 12: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Proof Idea of the Scaling Law

Estimating H is equivalent to estimating the variation difference H − H'.

Due to the nature of wireless communications, each time slot's communication provides only one linear sample of H − H'.

Using the results of [2] (SODA 2010), at least Ω(k·log(n/k)) linear samples are required for reliably recovering an approx-k-sparse vector H − H'.

Page 13: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Achieving the Scaling Law by ADMOT

Systematic view of ADMOT: [system diagram in the original slides]

Core techniques in ADMOT: Differential Group Probing + Compressive Sensing.

Page 14: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

The Training Data of ADMOT

The matrix Φ of dimensions N × n consists of the training data of ADMOT. Here, N is the maximum number of time slots allowed by ADMOT, and n is the number of transmitters.

Each component of Φ is i.i.d. chosen from {−1, 1} with equal probability.

The i'th column of Φ is the training data of transmitter Si. To be concrete, in the s'th time slot, Si sends Φ(s, i).
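A minimal Python sketch of generating such a training matrix (the name Phi and the toy dimensions are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(2)
N, n = 64, 16                        # toy sizes: N = max time slots allowed by ADMOT, n = transmitters

# Each entry of Phi is i.i.d. +1 or -1 with equal probability
Phi = rng.choice([-1.0, 1.0], size=(N, n))

# Column i is the training data of transmitter Si; in slot s, Si sends Phi[s, i]
print(Phi[:3, 0])                    # the first three probes of S1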

Page 15: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Construction of ADMOT: ADMOT(m, H')

Variable Initialization: H* is the estimate of H. Vector Y is of dimension m. Matrix Φ_m consists of the 1st, 2nd, …, m'th rows of Φ.

Step A (Probing): For s = 1, 2, …, m, in the s'th time slot:
  For each i in {1, 2, …, n}, Si sends Φ_m(s, i).
  Receiver R sets Y[s] (i.e., the s'th component of Y) to be the received sample. Thus Y[s] = Σ_{i=1}^{n} Φ_m(s, i)·Hi + Z[s], and so Y = Φ_m·H + Z.

Page 16: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Construction of ADMOT: ADMOT(m, H'), continued from the previous slide

Step B (Computing Differences): Receiver R computes D = Y − Φ_m·H' = Φ_m·(H − H') + Z.

Step C (Norm-1 Sparse Recovery): Receiver R finds the solution E* of the following convex program: minimize ||E||_1 subject to ||Φ_m·E − D||_2 ≤ ε, where the threshold ε (of order √(2m) times the per-slot noise level) bounds the noise energy ||Z||_2.

Step D (Estimating): Receiver R estimates H as H* = H' + E*.

Step E: Terminate ADMOT.
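A minimal end-to-end Python sketch of Steps A-D (assuming the cvxpy package for the norm-1 convex program; the problem sizes and the noise threshold eps are illustrative assumptions):

import numpy as np
import cvxpy as cp                         # assumed available for the l1 convex program

rng = np.random.default_rng(3)
n, k, sigma = 200, 4, 0.01
m = 60                                     # on the order of C*k*log(n/k) for a modest constant C

Phi_m = rng.choice([-1.0, 1.0], size=(m, n))  # Phi_m: the first m rows of the training matrix

H_prior = rng.standard_normal(n)           # a priori knowledge H'
E_true = np.zeros(n)
E_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # k channels varied
H = H_prior + E_true                       # current (unknown) state

# Step A: m probing slots give Y = Phi_m H + Z
Y = Phi_m @ H + sigma * rng.standard_normal(m)

# Step B: subtract the predicted samples -> D = Phi_m (H - H') + Z
D = Y - Phi_m @ H_prior

# Step C: l1-minimization; eps bounds the noise energy ||Z||_2 (illustrative choice)
eps = sigma * np.sqrt(2 * m)
E = cp.Variable(n)
cp.Problem(cp.Minimize(cp.norm(E, 1)), [cp.norm(Phi_m @ E - D, 2) <= eps]).solve()

# Step D: H* = H' + E*
H_est = H_prior + E.value
print(np.max(np.abs(H_est - H)))           # small reconstruction error

Here m is chosen to be a few times k·log(n/k), consistent with the m = C·k·log(n/k) condition discussed in Comment 1.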

Page 17: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Comment 1 on ADMOT

If H − H' is approx-k-sparse, then using the results of compressive sensing [3], E* is a reliable estimate of H − H' provided that m = C·k·log(n/k) for a constant C.

This tightly matches the scaling law!

Page 18: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Comment 2 on ADMOT: Would errors be propagated?

Yes case: D_i is sparse, and D_i^e is an estimate of D_i.

No case: D_i^a is "almost" sparse, and D_i^e is an estimate of D_i^a.

Page 19: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Comment 3 on ADMOT: How to deal with the case where the sparsity parameter k is not known? Interactive estimation.

Page 20: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Outline of the Talk

Fundamental setting: multiple transmitters and one receiver.
  The scaling law of tracking all channel gains.
  Achieving the scaling law by ADMOT.

General setting: multiple transmitters, relay nodes, and receivers.
  The scaling law of the fundamental setting still holds.
  Achieving the scaling law by ADMOT-GENERAL.

Simulation results.

Page 21: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Simplified Model

The challenge of a general communication network arises from the existence of nodes that can act as both source and receiver.

For simplicity, we consider a network with nodes V = {v1, v2, …, vn}.

Thus, each node vi in V wants to estimate the channel (vj, vi) for each j = 1, 2, …, n. A complete network!

Constraint: No node in V can transmit and receive in the same time slot.

Page 22: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

The Scaling Law of the General Setting

Assume that for each node vi in V, the incoming channels of vi undergo an approx-k-sparse variation.

Directly applying the scaling law of the single-receiver scenario, at least Ω(k·log(n/k)) time slots are required.

Surprisingly, this scaling law is also tight for general communication networks.

Page 23: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

ADMOT-GENERAL

We construct ADMOT-GENERAL to achieve an overhead of 3·C'·k·log(n/k) time slots for a constant C'.

The matrix Φ of dimensions N × n consists of the training data.

Each component of Φ is i.i.d. chosen from {0, −1, 1} with probabilities {1/2, 1/4, 1/4}.

The i'th column of Φ is the training data of vi.

Page 24: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

ADMOT-GENERAL

ADMOT-GENERAL runs for m time slots.

In the s'th time slot, if Φ(s, i) = 0, node vi receives in that time slot; otherwise, vi sends Φ(s, i) in that time slot.

In the end, with high probability (by the Chernoff bound), each node vi has received at least m/3 samples.

Let the vector Yi consist of the received data of vi, and let Hi be the vector consisting of all incoming channel gains of vi.

Each component of Yi is a linear sample (with noise) of Hi. That is, Yi = Φi·Hi + Zi, where Φi consists of at least m/3 rows of Φ.
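A minimal Python sketch of this transmit/receive rule (toy sizes and channel gains are illustrative assumptions; receiver noise is omitted):

import numpy as np

rng = np.random.default_rng(4)
n, m = 6, 12                               # toy sizes: n nodes, m time slots

# Ternary training matrix: 0 with probability 1/2, -1 or +1 with probability 1/4 each
Phi = rng.choice([0.0, -1.0, 1.0], size=(m, n), p=[0.5, 0.25, 0.25])

G = rng.standard_normal((n, n))            # assumed gains: G[i, j] = gain of channel (vj, vi)
np.fill_diagonal(G, 0.0)

samples = {i: [] for i in range(n)}        # received samples per node
rows = {i: [] for i in range(n)}           # the corresponding rows of Phi
for s in range(m):
    for i in range(n):
        if Phi[s, i] == 0:                 # vi listens in this slot (it sends nothing)
            samples[i].append(Phi[s] @ G[i])   # superposition from all transmitting nodes
            rows[i].append(Phi[s])

# Each node collects about m/2 samples on average (at least m/3 with high probability);
# ADMOT-GENERAL then applies the same difference + l1-recovery steps to (rows[i], samples[i]).
print([len(samples[i]) for i in range(n)])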

Page 25: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

ADMOT-GENERAL

Node vi computes the difference Di = Yi − Φi·Hi' = Φi·(Hi − Hi') + Zi using the a priori knowledge Hi' of its incoming channel gains.

Note that each component of Φi is i.i.d. sampled from {0, −1/2, 1/2} with probabilities {0.5, 0.25, 0.25}, and is therefore a sub-Gaussian ensemble.

The approx-k-sparse Hi − Hi' can be recovered provided that RowNumber(Φi) ≥ m/3 ≥ C'·k·log(n/k) for a constant C' [4].

This tightly matches the scaling law!

Page 26: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Outline of the Talk

Fundamental setting: multiple transmitters and one receiver.
  The scaling law of tracking all channel gains.
  Achieving the scaling law by ADMOT.

General setting: multiple transmitters, relay nodes, and receivers.
  The scaling law of the fundamental setting still holds.
  Achieving the scaling law by ADMOT-GENERAL.

Simulation results.

Page 27: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Simulation Setting:

n = 500 transmitters. One receiver. Average SNR = 20 dB. Approx-k-sparse state variation. Define channel stability = 1 − k/n.

ADMOT is run in a consecutive manner (sketched below):
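A minimal Python sketch of such a consecutive run (assuming cvxpy as before; treating "consecutive" as feeding each run's estimate back as the next run's a priori knowledge H', and choosing sigma so that the average SNR is about 20 dB, both as assumptions):

import numpy as np
import cvxpy as cp                          # assumed available, as in the ADMOT sketch above

rng = np.random.default_rng(5)
n, k, sigma, rounds = 500, 10, 0.1, 5       # 500 transmitters; k channels vary per round
m = 120                                     # probing slots per run (illustrative)

def admot(Phi_m, Y, H_prior, eps):
    """One ADMOT run: difference computation + l1 recovery (Steps B-D)."""
    D = Y - Phi_m @ H_prior
    E = cp.Variable(len(H_prior))
    cp.Problem(cp.Minimize(cp.norm(E, 1)), [cp.norm(Phi_m @ E - D, 2) <= eps]).solve()
    return H_prior + E.value

H = rng.standard_normal(n)                  # true initial state; sigma = 0.1 gives roughly 20 dB SNR
H_prior = H.copy()                          # the receiver starts with accurate knowledge

for t in range(rounds):
    # k channels change between consecutive runs (approx-k-sparse variation)
    idx = rng.choice(n, k, replace=False)
    H[idx] += rng.standard_normal(k)

    Phi_m = rng.choice([-1.0, 1.0], size=(m, n))
    Y = Phi_m @ H + sigma * rng.standard_normal(m)

    # Consecutive manner (assumed): the new estimate becomes the next run's prior
    H_prior = admot(Phi_m, Y, H_prior, eps=sigma * np.sqrt(2 * m))
    print(t, float(np.max(np.abs(H_prior - H))))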

Page 28: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Simulations

[Simulation result plots in the original slides]

Page 29: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Future Work

General Setting: Network Tomography + Channel Gain Estimation?

The current ADMOT-GENERAL requires the internal nodes in V to perform a sophisticated protocol (ADMOT) for channel gain estimation.

Can we estimate internal channel gains via "tomography", in which relay nodes perform normal network transmissions and only the transmitters and receivers run sophisticated protocols?

Page 30: The Scaling Law of SNR-Monitoring in Dynamic Wireless Networks

Thanks! & Questions?

[1] H. Yao, X. Li, and S. C. Liew, "Achieving the Scaling Law of SNR-Monitoring for Dynamic Wireless Networks," arXiv:1008.0053.

[2] K. D. Ba, P. Indyk, E. Price, and D. P. Woodruff, "Lower bounds for sparse recovery," in Proc. of SODA, 2010.

[3] E. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Communications on Pure and Applied Mathematics, 2006.

[4] S. Mendelson, A. Pajor, and N. Tomczak-Jaegermann, "Uniform uncertainty principle for Bernoulli and subgaussian ensembles," Constructive Approximation, 2008.