Slide 1
MPAT: Aggregate TCP Congestion Management as a Building Block for Internet QoS
Manpreet Singh, Prashant Pradhan* and Paul Francis
Slide 2
Each TCP flow gets equal bandwidth
[Figure: congestion window size vs. time (0–50) for two competing TCP flows, red and blue]
Slide 3
Our Goal: enable bandwidth apportionment among TCP flows in a best-effort network
[Figure: congestion window size vs. time (0–50) for the red and blue flows]
Slide 4
Transparency
[Figure: two congestion window vs. time plots]
– No network support:
• ISPs, routers, gateways, etc. unmodified
• Clients unmodified
– TCP-friendliness:
• "Total" bandwidth should be the same
Slide 5
Why is it so hard?
• The fair share of a TCP flow keeps changing dynamically with time.
[Figure: server and client connected through a bottleneck carrying a lot of cross-traffic]
Slide 6
Why not open extra TCP flows?
• pTCP scheme [Sivakumar et al.]
– Open more TCP flows for a high-priority application
• Resulting behavior is unfriendly to the network
• A large number of flows active at a bottleneck leads to significant unfairness in TCP
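The unfairness claim above can be made concrete with an idealized model (an assumption for illustration, not a result from the slides): if a bottleneck divides capacity equally among active TCP connections, an application that opens k connections simply grabs k shares.

```python
# Idealized equal-share model of a bottleneck link (an assumption, not
# from the slides): each active TCP connection gets an equal share, so
# opening extra connections buys extra bandwidth at everyone's expense.

def share_with_k_flows(k, n_others):
    """Fraction of the link taken by an app that opens k connections
    while n_others background connections share the same bottleneck."""
    return k / (n_others + k)

fair = share_with_k_flows(1, 9)    # one connection among ten: 10% of the link
greedy = share_with_k_flows(8, 9)  # eight connections: roughly 47% of the link
```

Under this toy model, the aggressive sender's gain comes directly out of the background flows' shares, which is the unfriendliness the slide objects to.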
Slide 7
Why not modify the AIMD parameters?
• mulTCP scheme [Crowcroft et al.]
– Use different AIMD parameters for each flow
• Increase more aggressively on successful transmission
• Decrease more conservatively on packet loss
• Unfair to the background traffic
• Does not scale to larger differentials
– Large number of timeouts
– Two mulTCP flows running together try to "compete" with each other
Slide 8
Properties of MPAT
• Key insight: send the packets of one flow through the open congestion window of another flow.
• Scalability
– Substantial differentiation between flows (demonstrated up to 95:1)
– Holds fair share (demonstrated up to 100 flows)
• Adaptability
– Changing performance requirements
– Transient network congestion
• Transparency
– Changes only at the server side
– Friendly to other flows
Slide 9
MPAT: an illustration (server, unmodified client)

Flow     Congestion window   Target 4:1
Flow1    5                   8
Flow2    5                   2

Total congestion window = 10
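The arithmetic on this slide can be sketched as a small helper (the function name and remainder policy are illustrative; the slides only give the 4:1 example): split the aggregate window by the target ratio while preserving its total.

```python
def apportion(total_window, ratios):
    """Split an aggregate congestion window among flows by target ratio.

    Floors each share, then hands any leftover packets to the flows with
    the largest ratios so the aggregate window is preserved exactly.
    """
    weight = sum(ratios)
    shares = [total_window * r // weight for r in ratios]
    leftover = total_window - sum(shares)
    # Give remaining packets to the flows with the largest target ratios.
    for i in sorted(range(len(ratios)), key=lambda i: -ratios[i])[:leftover]:
        shares[i] += 1
    return shares
```

With the slide's numbers, `apportion(10, [4, 1])` returns `[8, 2]`: the two flows' fair-share windows of 5 each are redistributed 8:2 without changing the total of 10.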
Slide 10
MPAT: transmit processing
Send three additional red packets through the congestion window of the blue flow.
[Figure: TCP1's cwnd covers red packets 1–5; TCP2's cwnd carries red packets 6–8 plus blue packets 1–2]
Slide 11
MPAT: implementation
• New variable: MPAT window
• Actual window = min(MPAT window, recv window)
• Map each outgoing packet to one of the congestion windows (maintain a virtual mapping):

Seqno   Congestion window
1       Red
2       Red
3       Red
4       Red
5       Red
6       Blue
7       Blue
8       Blue
1       Blue
2       Blue
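The bookkeeping above can be sketched as follows (class and field names are hypothetical; the actual implementation lives inside a user-level TCP stack):

```python
class MpatFlow:
    """Per-flow state for the two rules on this slide: cap the send
    window by the receiver's window, and remember which congestion
    window carried each outgoing packet."""

    def __init__(self, recv_window):
        self.mpat_window = 0      # window granted to this flow by MPAT
        self.recv_window = recv_window
        self.mapping = {}         # seqno -> name of the cwnd that carried it

    def actual_window(self):
        # Actual window = min(MPAT window, recv window)
        return min(self.mpat_window, self.recv_window)

    def send(self, seqno, cwnd_name):
        # Charge the packet to a congestion window: the flow's own, or
        # another flow's window that has spare capacity.
        self.mapping[seqno] = cwnd_name
```

With the slide's numbers, the red flow's packets 1–5 are charged to its own window and packets 6–8 to the blue flow's window.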
Slide 12
MPAT: receive processing
For every ACK received on a flow, update the congestion window through which that packet was sent.
[Figure: ACKs for red packets 1–8 and blue packets 1–2 are matched against the Seqno → congestion window mapping of slide 11]
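A hypothetical sketch of this receive path (names are illustrative): the ACK is credited to whichever congestion window carried the packet, looked up in the virtual mapping.

```python
def on_ack(seqno, mapping, cwnds):
    """Credit an ACK to the congestion window that carried the packet.

    mapping: seqno -> cwnd name (the virtual mapping from the slides)
    cwnds:   cwnd name -> window size in packets
    """
    owner = mapping.pop(seqno)           # which window carried this packet?
    cwnds[owner] += 1.0 / cwnds[owner]   # standard TCP additive increase
    return owner
```

An ACK for red packet 6 therefore grows the blue congestion window, even though the data belonged to the red flow.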
Slide 13
TCP-friendliness
[Figure: congestion window vs. time for the red flow, the blue flow, and their total]
Invariant: each congestion window experiences the same loss rate.
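The invariant implies the simple arithmetic below (illustrative numbers from the 4:1 example on slide 9): MPAT only moves window space between its flows, so the aggregate window, and hence the bandwidth taken from competing traffic, is unchanged.

```python
# Illustrative check with the slides' 4:1 example: reallocation changes
# the split between flows but never the aggregate window.

fair_share = [5, 5]                 # cwnd each flow earns on its own
total = sum(fair_share)             # window the aggregate may spend
target = [4, 1]                     # desired apportionment
weight = sum(target)
reallocated = [total * t // weight for t in target]

assert reallocated == [8, 2]
assert sum(reallocated) == total    # TCP-friendliness: total unchanged
```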
Slide 14
MPAT decouples reliability from congestion control
• The red flow is responsible for the reliability of all red packets (e.g. buffering, retransmission).
• Does not break the "end-to-end" principle.
Slide 15
Experimental Setup
• Wide-area network test-bed
– PlanetLab
– Experiments over the real Internet
• User-level TCP implementation
• Unconstrained buffer at both ends
• Goal: test the fairness and scalability of MPAT
Slide 16
Bandwidth Apportionment
MPAT can apportion available bandwidth among its flows, irrespective of the total fair share.
[Figure: bandwidth (KBps) vs. time elapsed (sec); MPAT apportions total bandwidth in the ratio 1:2:3:4:5]
Slide 17
Scalability of MPAT
A 95:1 differential was achieved in experiments.
[Figure: achieved differential vs. target differential (up to 95) for MPAT and mulTCP]
Slide 18
Responsiveness
MPAT adapts very quickly to dynamically changing performance requirements.
[Figure: relative bandwidth vs. time elapsed (sec); the achieved differential tracks the target differential, with an inset zooming in around t = 240 s]
Slide 19
Fairness
• 16 MPAT flows
• Target ratio: 1 : 2 : 3 : … : 15 : 16
• 10 standard TCP flows in background
[Figure: relative bandwidth vs. time elapsed (sec)]
Slide 20
Applicability in the real world
• Deployment
– Enterprise network
– Grid applications
• Gold vs. silver customers
• Background transfers
Slide 21
Sample enterprise network (runs over the best-effort Internet)
[Figure: sites in San Jose (database server), Zurich (transaction server), New York (web server), and New Delhi (application server) connected over the Internet]
Slide 22
Background transfers
• Data that humans are not waiting for
– Non-deadline-critical
• Examples
– Pre-fetched traffic on the Web
– File system backup
– Large-scale data distribution services
– Background software updates
– Media file sharing
• Grid applications
Slide 23
Future work
• Benefit short flows
– Map multiple short flows onto a single long flow
– Warm start
• Middle box
– Avoid changing all the senders
• Detect shared congestion
– Subnet-based aggregation
Slide 24
Conclusions
• MPAT is a very promising approach for bandwidth apportionment.
• Highly scalable and adaptive:
– Substantial differentiation between flows (demonstrated up to 95:1)
– Adapts very quickly to transient network congestion
• Transparent to the network and clients:
– Changes only at the server side
– Friendly to other flows
Slide 25
Extra slides…
Slide 26
MPAT exhibits much lower variance in throughput than mulTCP
[Figure, left: bandwidth (KBps) vs. time elapsed (sec) of a mulTCP flow with N=16]
[Figure, right: total bandwidth (KBps) vs. time elapsed (sec) of an MPAT aggregate with N=16, showing reduced variance]
Slide 27
Multiple MPAT aggregates "cooperate" with each other
[Figure: bandwidth (KBps) vs. time elapsed (sec) for 5 MPAT aggregates (N = 2, 4, 6, 8, 10) running simultaneously; fairness across aggregates]
Slide 28
Multiple MPAT aggregates running simultaneously cooperate with each other

N          # Fast                    # Timeouts                Bandwidth (KBps)
           With agg.   No agg.       With agg.   No agg.       With agg.   No agg.
2 (MPAT)   565         540           44          28            97.6        106.7
4 (MPAT)   564         542           34          25            98.5        110.9
6 (MPAT)   555         546           39          27            98.1        108.5
8 (MPAT)   535         539           38          25            99.3        106.4
10 (MPAT)  537         531           41          26            97.9        109.5
5 (TCP)    577         538           41          30            93.6        107.1
Slide 29
Congestion Manager (CM)
[Figure: CM architecture at the sender: a congestion controller and scheduler between TCP1–TCP4 and the receiver, with data, API callback, and feedback paths; per-"aggregate" statistics (cwnd, ssthresh, rtt, etc.), per-flow scheduling, flow integration]
• An end-system architecture for congestion management.
• CM abstracts all congestion-related info into one place.
• Separates reliability from congestion control.
Goal: to ensure fairness
Slide 30
Issues with CM
• CM maintains one congestion window per "aggregate"
• Unfair allocation of bandwidth to CM flows
[Figure: five flows TCP1–TCP5 sharing a single Congestion Manager]
Slide 31
mulTCP
• Goal: design a mechanism to give N times more bandwidth to one flow over another.
• TCP throughput = f(α, β) / (rtt · sqrt(p))
– α: additive increase factor
– β: multiplicative decrease factor
– p: loss probability
– rtt: round-trip time
• Set α = N and β = 1 − 1/(2N)
– Increase more aggressively on successful transmission
– Decrease more conservatively on packet loss
• Does not scale with N
– The induced loss process is much different from that of N standard TCP flows
– Unstable controller as N increases
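Plugging mulTCP's parameters into one common form of the AIMD throughput approximation (the exact f(α, β) used here is an assumption; the slide only names the function) shows the intended roughly N-fold gain, and also that the gain drifts above N as N grows, consistent with the unfriendliness noted above:

```python
import math

def aimd_factor(alpha, beta):
    # One common AIMD approximation: throughput = f(alpha, beta) / (rtt * sqrt(p))
    # with f(alpha, beta) = sqrt(alpha * (1 + beta) / (2 * (1 - beta))).
    return math.sqrt(alpha * (1 + beta) / (2 * (1 - beta)))

def multcp_gain(n):
    """Throughput of one mulTCP flow (alpha = N, beta = 1 - 1/(2N))
    relative to one standard TCP flow (alpha = 1, beta = 1/2), assuming
    the same rtt and loss rate."""
    return aimd_factor(n, 1 - 1 / (2 * n)) / aimd_factor(1, 0.5)
```

Under this approximation the gain is about 1.08·N at N = 2 and approaches roughly 1.15·N for large N, i.e. a single mulTCP flow is somewhat more aggressive than N real TCP flows would be.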
Slide 32
Gain in throughput of mulTCP
Slide 33
Drawbacks of mulTCP
• Does not scale with N
– Large number of timeouts
– The loss process induced by a single mulTCP flow is much different
• Increased variance with N
– Amplitude increases with N
– Unstable controller as N grows
• Two mulTCP flows running together try to "compete" with each other
Slide 34
TCP Nice
• Two-level prioritization scheme
• Can only give less bandwidth to low-priority applications
• Cannot give more bandwidth to deadline-critical jobs