DOMINO: Relative Scheduling in Enterprise Wireless LANs
DESCRIPTION
DOMINO: Relative Scheduling in Enterprise Wireless LANs. Wenjie Zhou (Co-Primary Author), Dong Li (Co-Primary Author), Kannan Srinivasan, Prasun Sinha. Enterprise networks; important research topics: channel assignment, AP association, power adaptation, channel access.
TRANSCRIPT
1
DOMINO: Relative Scheduling in Enterprise Wireless LANs
Wenjie Zhou (Co-Primary Author), Dong Li (Co-Primary Author),
Kannan Srinivasan, Prasun Sinha
2
Enterprise Networks
Important research topics:
- Channel assignment
- AP association
- Power adaptation
- Channel access
[Diagram: Internet → Router → AP1 … APN → Client1 … ClientN]
3
Channel access schemes in enterprise networks:
- Distributed Coordination Function (DCF)
- Downlink-only centralized schemes
- Fully centralized schemes
4
Distributed Channel Access: DCF (WiFi)
Pros:
- Simple to implement
- Robust to failures

Cons:
- Hidden and exposed terminal problems
- Low efficiency

[Diagram: AP1-C1, AP2-C2, AP3-C3 with exposed and hidden terminal cases; legend: interfering nodes, flow direction]
5
Downlink-only Centralized Schemes
• CENTAUR (MobiCom'09): downlink packets that can be sent simultaneously are forwarded to the APs at the same time.
• OmniVoice (MobiHoc'11): downlink packets are sent according to broadcast schedules.

Pros:
- No modification on clients

Cons:
- Downlink traffic only
6
Fully Centralized Scheme
• Schedule both uplink and downlink traffic
• Higher centralization, higher throughput
• Many theoretical works have been proposed
• Challenges for a fully centralized scheme:
– Queue status of clients
– Time synchronization

[Chart: Throughput (Mbps) of DCF (distributed), CENTAUR (downlink-only), and an omniscient fully centralized scheduler, for flows AP1 → C1, C2 → AP2, AP3 → C3 and overall; the omniscient scheme gains ~61% overall]

[Diagram: AP1-C1, AP2-C2, AP3-C3; legend: interfering nodes, flow direction]
7
Domino
• A practical platform to enable arbitrary centralized scheduling algorithms
• Without requiring tight time synchronization
8
DOMINO Outline
• Rapid OFDM Polling (ROP) – obtain the queue status of clients for uplink scheduling
• Relative scheduling – avoid tight time synchronization
• Schedule converter – create schedules for relative scheduling
10
DOMINO Outline
• Rapid OFDM Polling (ROP) – obtain the queue status of clients for uplink scheduling
• Relative scheduling – avoid tight time synchronization
• Schedule converter – create schedules for relative scheduling
11
ROP: Rapid OFDM Polling
Question: How can we collect the queue status of clients efficiently?
Solution: Concurrent transmission based on Orthogonal Frequency-Division Multiplexing (OFDM)

[Diagram: Central controller → AP1 … APN → Client1 … ClientN]
12
ROP: Rapid OFDM Polling
• Clients transmit queue status using subcarriers

Practical issues (details in paper):
- Frequency offset
- Time offset
- Power mismatch

Related work: PAMAC (INFOCOM'09), B2F (MobiCom'11)
14
ROP collects the queue status of all clients with little overhead:
- 40 μs (polling message) + 16 μs (OFDM symbol)
- Regular packet duration: 1000 μs
- Multiple regular transmissions per poll
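The overhead claim on this slide can be checked with a small arithmetic sketch; the per-poll model and the batch size of 10 regular transmissions are illustrative assumptions, not figures from the paper:

```python
# Sketch of the ROP overhead arithmetic (assumed model): one poll costs a
# 40 us polling message plus one 16 us OFDM reply symbol, and is amortized
# over several 1000 us regular packet transmissions.

POLL_MSG_US = 40      # polling message duration (from the slide)
OFDM_SYM_US = 16      # one OFDM symbol carrying all clients' queue states
PACKET_US = 1000      # one regular packet transmission

def rop_overhead(packets_per_poll: int) -> float:
    """Fraction of airtime spent on polling for a given batch size."""
    poll = POLL_MSG_US + OFDM_SYM_US
    return poll / (poll + packets_per_poll * PACKET_US)

# With 10 regular transmissions per poll, polling costs well under 1% of airtime.
print(f"{rop_overhead(10):.4f}")
```

Even at one packet per poll the overhead stays around 5%, which is why a single OFDM symbol per client batch is cheap.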
15
DOMINO Outline
• Rapid OFDM Polling (ROP) – obtain the queue status of clients
• Relative scheduling – avoid tight time synchronization
• Schedule converter – create schedules for relative scheduling
16
Why time synchronization?

Slot 1: AP1 → C1, AP4 → C4
Slot 2: AP2 → C2, AP3 → C3

[Timeline: data packets and ACKs on the four links; a misaligned start causes a collision]
Misalignment → Collision!
μs-level synchronization required (one Wi-Fi slot: 9 μs)

[Diagram: AP1-C1, AP2-C2, AP3-C3, AP4-C4; legend: interfering nodes, AP-client association, currently transmitting]
17
Current Time Synchronization Schemes
Network Time Protocol (NTP), Precision Time Protocol (PTP), Reference-Broadcast Synchronization (RBS) (SIGOPS'02), SourceSync (SIGCOMM'10):
– Low accuracy; or
– Expensive hardware; or
– Low accuracy in large networks
18
Can we avoid tight time synchronization?
19
Relative Scheduling

Slot 1: AP1 → C1, AP4 → C4
Slot 2: AP2 → C2, AP3 → C3

[Timeline: each link's data packet and ACK; slot-2 senders start relative to the slot-1 transmissions]
[Diagram: AP1-C1, AP2-C2, AP3-C3, AP4-C4; legend: interfering nodes, AP-client association, currently transmitting]
20
Relative Scheduling

Slots 2–3: AP1 → C1, AP4 → C4; AP2 → C2, AP3 → C3

Transmission alignment achieved

[Timeline: data packets and ACKs now aligned across all four links]
[Diagram: AP1-C1, AP2-C2, AP3-C3, AP4-C4; legend: interfering nodes, AP-client association, currently transmitting]
21
Node signatures as triggers:
– A sequence of bits with a certain length
– These sequences are orthogonal to each other
– High detection ratio even under interference
– Experimental results:
• 4 combined signatures can be decoded correctly
• 4 transmissions can be triggered by one node
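Orthogonality is what lets a receiver pick out which signatures are present even when several overlap. This sketch uses Walsh-Hadamard sequences as a stand-in; DOMINO's actual signature design and lengths may differ:

```python
# Illustrative sketch (not DOMINO's exact signature design): Walsh-Hadamard
# rows are mutually orthogonal +1/-1 sequences, so a simple correlator can
# identify which signatures were combined in the air.

def hadamard(n):
    """n x n Walsh-Hadamard matrix with +1/-1 entries (n a power of two)."""
    h = [[1]]
    while len(h) < n:
        h = [row + row for row in h] + [row + [-x for x in row] for row in h]
    return h

def detect(received, signatures, threshold=0.5):
    """Return indices of signatures whose normalized correlation exceeds threshold."""
    n = len(received)
    hits = []
    for i, sig in enumerate(signatures):
        corr = sum(r * s for r, s in zip(received, sig)) / n
        if corr > threshold:
            hits.append(i)
    return hits

sigs = hadamard(8)
# Nodes 2 and 5 transmit concurrently; their signatures add in the air.
rx = [a + b for a, b in zip(sigs[2], sigs[5])]
print(detect(rx, sigs))  # -> [2, 5]
```

Cross-correlation with any non-transmitted orthogonal signature is exactly zero here, which is the property the slide's "high detection ratio even under interference" relies on.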
22
Only APs know the schedules from the central controller
How can we ask the clients to send the triggers?
23
[Timeline: A1 sends a data packet followed by combined signatures; C1 sends an ACK followed by a signature; SIFS and one-slot gaps marked]

Legend:
– S_A: combined signatures to be sent by the AP
– S_C: combined signatures to be sent by the client
– S′: a special signature that notifies the start of transmission

[Diagram: A1-C1, A2-C2, A3-C3]
24
[Diagram: A1-C1, A2-C2, A3-C3]
[Timeline: A1 sends a data packet and its combined signatures; C1 sends an ACK and its signature; SIFS and one-slot gaps marked with S′]
25
DOMINO Outline
• Rapid OFDM Polling (ROP)– Obtain the queue status of clients
• Relative scheduling– Avoid tight time synchronization
• Schedule converter– Create schedule for relative scheduling
26
Schedule Converter
Arbitrary schedule → relative schedule?
Requirements:
– Every transmitter should be triggered
– Polling packets should also be scheduled
– Backup triggers should be included in case of transmission failure
– Details in paper
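A minimal sketch of such a conversion, under assumed data structures: each sender in slot k is given a trigger node from slot k-1, and the first slot hangs off the AP's polling packet ("POLL" is a stand-in name). The paper's converter also handles polling slots and backup triggers, which are omitted here:

```python
# Hedged sketch of a schedule converter (details differ from the paper):
# turn a centrally computed time schedule into a relative schedule by
# assigning every sender in slot k a trigger chosen from slot k-1.

def to_relative(time_schedule):
    """time_schedule: list of slots, each a list of sender names.
    Returns a list of slots mapping sender -> trigger node."""
    relative = []
    prev = ["POLL"]                 # slot 0 hangs off the polling packet
    for slot in time_schedule:
        mapping = {}
        for i, sender in enumerate(slot):
            # round-robin over the previous slot so every sender has a trigger
            mapping[sender] = prev[i % len(prev)]
        relative.append(mapping)
        prev = slot
    return relative

rel = to_relative([["AP1", "AP4"], ["AP2", "AP3"]])
print(rel)
# [{'AP1': 'POLL', 'AP4': 'POLL'}, {'AP2': 'AP1', 'AP3': 'AP4'}]
```

This satisfies the first requirement (every transmitter is triggered); a real converter must also respect the conflict map when choosing triggers.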
27
Experiment

Throughput (Kbps) of the USRP prototype, DOMINO vs DCF:

Scenario                 DOMINO   DCF    Gain
Same contention domain   4.25     2.76   >1.5X
Hidden terminal          5.42     1.62   >3X
Exposed terminal         9.18     2.72   >3X

[Photos: USRP testbed nodes]
28
Trace-Driven Simulation
• Simulation setup:
– RSS trace collected from a 40-node Wi-Fi testbed
– Randomly picked 10 APs and 2 clients per AP
• Other schemes:
– CENTAUR: downlink traffic scheduled; uses a fixed backoff to align transmissions
– DCF: 802.11 standard (Wi-Fi)
29
UDP Throughput & Delay (downlink traffic only)

[CDF of per-flow throughput (Mbps): DOMINO vs DCF vs CENTAUR; DOMINO gains 1.74X]
[CDF of packet delivery delay (μs): DOMINO's delay is 0.5X]
30
UDP Throughput & Delay (uplink and downlink traffic)

[CDF of per-flow throughput (Mbps): DOMINO vs DCF vs CENTAUR; DOMINO gains 1.24X; CENTAUR shows a heavy tail and low fairness]
31
TCP Throughput (downlink traffic only)

[CDF of per-flow TCP throughput (Mbps): DOMINO vs CENTAUR vs DCF; DOMINO gains 1.15X; TCP ACKs are handled as regular packets]
32
Conclusions
Domino: a platform to enable centralized scheduling algorithms without requiring tight time synchronization:
– Queue information of clients is efficiently collected using one OFDM symbol
– Nodes transmit relative to one another instead of according to time stamps

Future work: coexistence with existing Wi-Fi
Thank you!
33
Backup slides
34
Why does CENTAUR perform worse than DCF?
35
UDP Throughput & Fairness

[Charts: UDP throughput and fairness vs offered load (0–9.5 Mbps) for DOMINO, CENTAUR, DCF]

- 24%-74% throughput gain
- High and stable fairness
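One common way to quantify a fairness claim like the one above is Jain's fairness index; the slides do not name the metric they use, so this is an illustrative assumption:

```python
# Jain's fairness index: J = (sum x_i)^2 / (n * sum x_i^2).
# J = 1.0 when all flows get equal throughput; it falls toward 1/n
# as the allocation becomes more skewed.

def jain_index(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    return s * s / (n * sum(x * x for x in throughputs))

print(jain_index([2.0, 2.0, 2.0, 2.0]))          # perfectly fair -> 1.0
print(round(jain_index([6.0, 1.0, 0.5, 0.5]), 3))  # skewed shares -> much lower
```

"High and stable fairness" on the slide corresponds to an index staying near 1.0 as the offered load varies.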
36
TCP Throughput & Fairness

[Charts: TCP throughput and fairness vs offered load (0–9.5 Mbps) for DOMINO, CENTAUR, DCF]

- 10%-15% throughput gain
- TCP ACK handled as a regular packet
- High and stable fairness

37
Evaluation: UDP & TCP Delay

[Charts: UDP and TCP packet delivery delay (μs) vs offered load for DOMINO, CENTAUR, DCF]

DCF: 2X higher delay (queuing delay)
Similar TCP delay performance across schemes (queuing delay, TCP congestion control)
38
UDP Throughput & Delay (uplink and downlink traffic)

[CDF of per-flow throughput (Mbps): DOMINO vs DCF vs CENTAUR; DOMINO gains 1.24X; heavy tail for CENTAUR]
[CDF of packet delivery delay (μs): DCF's delay is 2X higher]
39
TCP Throughput & Delay (uplink traffic only)

[CDF of per-flow TCP throughput (Mbps): DOMINO vs CENTAUR vs DCF; DOMINO gains 1.15X]
[CDF of packet delivery delay (μs)]
40
Domino Solution Overview
• Identify hidden & exposed links – construct a link conflict map
• Co-existence with current networks

[Diagram: a Contention Free Period (Slot 1 … Slot N with concurrent transmissions AP1 → C1, AP2 → C2, …, APM → CM) followed by a Contention Period]
41
Evaluation: Misalignment

[Chart: max TX misalignment (μs) vs slot index, for wired latencies of 20/40/60/80 μs]

Alignment achieved: slot size (9 μs) > misalignment (2 μs)
42
Current Time Synchronization Scheme
• Network Time Protocol (NTP): time accuracy of about 1 ms in a quiet Ethernet network.
• Precision Time Protocol (PTP): requires specialized and expensive hardware.
• Reference-Broadcast Synchronization (RBS) (SIGOPS'02): synchronization accuracy decreases with network size.
• SourceSync (SIGCOMM'10): works within one collision domain.
43
Throughput gain of a network with 80 nodes
[Chart: ~58% overall gain]
44
ROP: Rapid OFDM Polling
• Clients transmit queue state over subcarriers
• Polling strategy

[Diagram: OFDM subcarrier layout with guard bands, guard subcarriers, the DC subcarrier, and 24 subchannels (0–23)]
[Diagram: polling packet from the AP, then clients 0 … N reply on their subchannels within one slot; FFT window and cyclic prefixes (CP) marked]
45
ROP: Rapid OFDM Polling
[Diagram: polling packet from the AP; clients 0 … N reply on subchannels 0 … N within one slot; FFT window and cyclic prefixes (CP) marked]
46
DOMINO: Example
[Diagram (a): per-link relative schedule over slots 90–94 (Batch 10, Batch 11) for links AP1↔C1 … AP4↔C4, and Batch 0 (slots 0–3) including a fake link; 24 μs marked. Diagram (b): topology AP1-C1 … AP4-C4]
47
Subcarrier Separation

[Chart: correct decoding ratio (%) vs difference in RSS (dB), for separations of 0–4 subcarriers; at 38 dB difference, 3 guard subcarriers give 99.9% correct decoding]

Tradeoff:
- Fewer guard subcarriers: less overhead
- More guard subcarriers: higher decoding ratio

[Diagram: TX1 and TX2 transmitting to RX]
48
Relative Scheduling: Node Signatures as Triggers
• Node signatures are orthogonal to each other, making them easier to detect.

[Chart: detection ratio (%) vs number of combined signatures (1–7), for 1–3 senders with same or different signatures]
49
Existing work: MIFI
50
Existing work: CENTAUR
51
Domino Solution Overview
• Identify hidden & exposed links – construct a link conflict map
• Co-existence with current networks
• ROP: Rapid OFDM Polling
• Relative Scheduling
• Schedule Converter
52
ROP: How It Performs
[Figure: decoded OFDM symbols of two clients on adjacent subchannels with a guard interval (30 dB RSSI difference at the AP)]
53
Relative Scheduling
[Diagram: slots 1–4 with AP1 → C1, AP4 → C4, AP2 → C2, AP3 → C3 repeating across slots; topology AP1-C1 … AP4-C4]
54
Relative Scheduling: Node Signatures as Triggers
• Node signatures are orthogonal to each other, making them easier to detect.
– AP to client
– Client to AP
[Timelines: data packet, ACK, and trigger signatures (S1, S2, S′) exchanged, with SIFS and one-slot gaps marked]
55
Relative Scheduling: How it performs
56
Schedule Converter
Arbitrary schedule → relative schedule?
Inbound & outbound constraints:
– 1 ≤ inbound ≤ 2
– Outbound ≤ 4
[Diagram: sender → receiver with inbound and outbound trigger edges]
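The inbound/outbound constraints can be checked mechanically; the edge-list schedule format and node names here are assumptions for illustration:

```python
# Sketch of the constraint check on a relative schedule (from the slide:
# every sender is triggered by 1-2 nodes, and no node triggers more than 4).

def check_constraints(trigger_edges):
    """trigger_edges: list of (trigger, sender) pairs.
    True iff 1 <= inbound(sender) <= 2 and outbound(node) <= 4."""
    inbound, outbound = {}, {}
    for trig, sender in trigger_edges:
        inbound[sender] = inbound.get(sender, 0) + 1
        outbound[trig] = outbound.get(trig, 0) + 1
    ok_in = all(1 <= c <= 2 for c in inbound.values())
    ok_out = all(c <= 4 for c in outbound.values())
    return ok_in and ok_out

edges = [("AP1", "AP2"), ("AP1", "AP3"), ("AP4", "AP3")]  # AP3 has a backup trigger
print(check_constraints(edges))   # True
# Overloading AP1 with three more outbound triggers violates outbound <= 4:
print(check_constraints(edges + [("AP1", "C1"), ("AP1", "C2"), ("AP1", "C3")]))  # False
```

The inbound upper bound of 2 corresponds to one primary trigger plus one backup, matching the backup-trigger requirement on the earlier slide.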
57
Schedule Converter
• Insert fake links – saturate the network with fake links at each slot
• Retain last slot – the last slot of the current schedule is used as the first slot of the next schedule
• Insert polling slots – insert polling slots between slots
58
Evaluation: Experiment

Throughput of the USRP prototype, DOMINO vs DCF:

Scenario                      DOMINO   DCF
Same contention domain (SC)   4.25     2.76
Hidden terminal (HT)          5.42     1.62
Exposed terminal (ET)         9.18     2.72

[Photos: USRP testbed nodes]
59
Evaluation: Simulation
• Implemented DCF, CENTAUR, and DOMINO in NS3
• 40 nodes with a known RSSI trace (Stanford)
• Topology: APs, each AP has clients
60
Evaluation: Misalignment
Vary wired latency from 20 to 80 μs (using T(10,2))
61
Evaluation: UDP & TCP
TCP and UDP throughput, delay, and fairness. The downlink data rate is fixed at 10 Mbps and the uplink data rate varies from 0 to 10 Mbps.
62
Discussion
• Triggering may not be easy
• Is a fixed packet size good?
• Building the conflict graph dynamically
• Low traffic, low efficiency
[Diagram: AP1-C1, AP2-C2]
63
Challenge: Time Synchronization
• Network Time Protocol (NTP): time accuracy of about 1 ms in a quiet Ethernet network.
• Precision Time Protocol (PTP): requires specialized and expensive hardware.
• Reference-Broadcast Synchronization (RBS): accuracy decreases as network size increases.
64
Centralized Scheme
• Schedule both uplink and downlink traffic
[Diagram: AP1-C1, AP2-C2, AP3-C3]
65
DOMINO: Example
[Diagram: per-link relative schedule over slots 90–94 (Batch 10, Batch 11) for links AP1↔C1 … AP4↔C4; Batch 0 (slots 0–3) including a fake link; 24 μs marked; legend: polling packet, data packet; topology AP1-C1 … AP4-C4]
66
Distributed Channel Access: DCF
Pros:
- Simple to implement
- Robust to failures

Cons:
- Hidden and exposed terminal problems
- Low efficiency

[Diagram: AP1-C1, AP2-C2, AP3-C3 with exposed and hidden terminal cases]
68
Enterprise Network
[Diagram: Internet → Central Server → AP1 … APN → Client1 … ClientM]

Control plane:
- Channel assignment
- Client association
- Power control

What about the data plane?
69
Exposed and Hidden Terminals
[Diagram: AP1-C1, AP2-C2 (exposed), AP3-C3 (hidden)]
A centralized schedule could avoid hidden terminals while utilizing exposed terminals.
70
Expected Improvement
• DCF: purely distributed
• CENTAUR: half distributed, half centralized
• DOMINO: centralized
[Chart: throughput increases with centralization, from DCF through CENTAUR and TRAIT to DOMINO]
71
Design Overview
[Diagram: Internet → Central Server (Collector → Scheduler → Converter) → AP1 … APN → Client1 … ClientM; clients report queue sizes to the Collector, the Scheduler produces a time schedule, and the Converter produces a relative schedule]

Challenges:
• Obtaining clients' queue status
• Identifying exposed and hidden links
• Time synchronization