1
MPAT: Aggregate TCP Congestion Management as a Building Block for Internet QoS
Manpreet Singh, Prashant Pradhan* and Paul Francis
2
Each TCP flow gets equal bandwidth
[Figure: congestion window size vs. time for the red flow and the blue flow]
3
Our Goal: enable bandwidth apportionment
among TCP flows in a best-effort network
[Figure: congestion window size vs. time for the red flow and the blue flow, now apportioned unequally]
4
[Figure: two congestion window vs. time plots illustrating that the total is unchanged]
Transparency:
– No network support
  • ISPs, routers, gateways, etc.
  • Clients unmodified
– TCP-friendliness
  • "Total" bandwidth should be the same
5
Why is it so hard?
• Fair share of a TCP flow keeps changing dynamically with time.
[Diagram: server and client connected through a bottleneck link carrying a lot of cross-traffic]
6
Why not open extra TCP flows?
• pTCP scheme [Sivakumar et al.]
  – Open more TCP flows for a high-priority application
• The resulting behavior is unfriendly to the network
• A large number of flows active at a bottleneck leads to significant unfairness in TCP
7
Why not modify the AIMD parameters?
• mulTCP scheme [Crowcroft et al.]
  – Use different AIMD parameters for each flow
  – Increase more aggressively on successful transmission.
  – Decrease more conservatively on packet loss.
• Unfair to the background traffic
• Does not scale to larger differentials
  – Large number of timeouts
  – Two mulTCP flows running together try to "compete" with each other
8
Properties of MPAT
• Key insight: send the packets of one flow through the open congestion window of another flow.
• Scalability
  – Substantial differentiation between flows (demonstrated up to 95:1)
  – Hold fair share (demonstrated up to 100 flows)
• Adaptability
  – Changing performance requirements
  – Transient network congestion
• Transparency
  – Changes only at the server side
  – Friendly to other flows
9
MPAT: an illustration (server sending to an unmodified client)

          Congestion window   Target 4:1
Flow 1    5                   8
Flow 2    5                   2

Total congestion window = 10
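A minimal sketch of this apportionment step, assuming a hypothetical helper `apportion` (not from the paper): the aggregate window stays equal to the sum of the flows' own congestion windows and is redistributed in the target ratio.

```python
def apportion(cwnds, ratios):
    """Split the aggregate window (the sum of the flows' own
    congestion windows) among the flows in the target ratio."""
    total = sum(cwnds)                    # aggregate window, e.g. 5 + 5 = 10
    return [total * r / sum(ratios) for r in ratios]

# The slide's example: two flows with cwnd 5 each, target 4:1.
print(apportion([5, 5], [4, 1]))          # -> [8.0, 2.0]
```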
10
MPAT: transmit processing
Send three additional red packets through the congestion window of the blue flow.
[Diagram: red packets 1–5 fill TCP1's (red) congestion window; red packets 6–8 and blue packets 1–2 go out through TCP2's (blue) congestion window]
11
MPAT: implementation
• New variable: MPAT window
• Actual window = min(MPAT window, recv window)
• Map each outgoing packet to one of the congestion windows.

Maintain a virtual mapping:

Seqno   Congestion window
1       Red
2       Red
3       Red
4       Red
5       Red
6       Blue
7       Blue
8       Blue
1       Blue
2       Blue
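A minimal sketch of this transmit-side bookkeeping, with invented names: each outgoing packet is charged to the first congestion window with spare capacity, and the choice is recorded in the virtual mapping so the ACK can later credit the right window. (The send rate is additionally capped by actual window = min(MPAT window, recv window), as above.)

```python
# Sketch of MPAT transmit processing (illustrative, not the paper's code).
mapping = {}                             # (flow, seqno) -> charged window

def map_packet(flow, seqno, cwnd, inflight):
    """Charge one outgoing packet to a congestion window with room."""
    for owner in cwnd:
        if inflight[owner] < cwnd[owner]:
            inflight[owner] += 1
            mapping[(flow, seqno)] = owner
            return owner
    return None                          # every window is full: packet waits

cwnd     = {"Red": 5, "Blue": 5}         # each flow's own AIMD window
inflight = {"Red": 0, "Blue": 0}
for seq in range(1, 9):                  # the red flow wants to send 8 packets
    print(seq, map_packet("Red", seq, cwnd, inflight))
# -> seqnos 1-5 charged to Red, 6-8 charged to Blue, as in the table
```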
12
MPAT: receive processing
Incoming ACKs: for every ACK received on a flow, update the congestion window through which that packet was sent.
[Diagram: TCP1's and TCP2's congestion windows with the seqno-to-window mapping from the previous slide; each ACK is looked up in the mapping and credited to the window that carried the packet]
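A matching sketch of the receive side, again with invented names: the ACK is looked up in the virtual mapping and credits the congestion window the packet was charged to, which may belong to a different flow than the one the ACK arrived on.

```python
# Sketch of MPAT receive processing (illustrative, not the paper's code).
cwnd    = {"Red": 5.0, "Blue": 5.0}      # per-flow congestion windows
mapping = {("Red", 6): "Blue"}           # red packet 6 rode Blue's window

def on_ack(flow, seqno):
    owner = mapping.pop((flow, seqno))   # window charged at transmit time
    cwnd[owner] += 1.0 / cwnd[owner]     # one ACK's worth of additive increase

on_ack("Red", 6)                         # the ACK arrives on the red flow...
print(cwnd)                              # ...but Blue's window is credited
```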
13
TCP-friendliness
[Figure: congestion window vs. time for the red flow, the blue flow, and their total]
Invariant: Each congestion window experiences the same loss rate.
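Combining this invariant with the throughput model quoted later on the mulTCP slide, each of the N congestion windows earns one standard fair share, so the aggregate as a whole is TCP-friendly. A one-line check, in that slide's notation:

```latex
\[
  T_{\text{aggregate}}
    \;=\; \sum_{i=1}^{N} \frac{f(\alpha,\beta)}{\mathrm{rtt}\,\sqrt{p}}
    \;=\; N \cdot T_{\text{TCP}},
\]
% since every window runs standard AIMD and, by the invariant, sees the
% same loss rate p as any competing TCP flow.
```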
14
MPAT decouples reliability from congestion control
• Red flow is responsible for the reliability of all red packets
  – (e.g. buffering, retransmission, etc.)
• Does not break the “end-to-end” principle.
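A small sketch of what this decoupling means, with invented names: when a red packet that was charged to the blue window is lost, congestion control reacts in the blue window while reliability stays with the red flow.

```python
# Sketch of the decoupling (illustrative names): red packet 6 was
# charged to Blue's congestion window, per the mapping on slide 11.
cwnd    = {"Red": 5.0, "Blue": 5.0}
mapping = {("Red", 6): "Blue"}

def on_loss(flow, seqno):
    owner = mapping[(flow, seqno)]
    cwnd[owner] /= 2                     # congestion control: Blue's window halves
    return flow                          # reliability: Red buffered the data
                                         # and performs the retransmission

print(on_loss("Red", 6), cwnd)           # -> Red {'Red': 5.0, 'Blue': 2.5}
```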
15
Experimental Setup
• Wide-area network test-bed
  – PlanetLab
  – Experiments over the real Internet
• User-level TCP implementation
  – Unconstrained buffer at both ends
• Goal: test the fairness and scalability of MPAT
16
MPAT can apportion available bandwidth among its flows, irrespective of the total fair share
[Figure: bandwidth apportionment. Bandwidth (KBps) vs. time elapsed (sec) as the MPAT scheme apportions total bandwidth in the ratio 1:2:3:4:5]
17
Scalability of MPAT
[Figure: achieved differential vs. target differential for MPAT and mulTCP; a 95:1 differential was achieved with MPAT in experiments]
18
Responsiveness
MPAT adapts itself very quickly to dynamically changing performance requirements.
[Figure: relative bandwidth vs. time elapsed (sec); the achieved differential tracks the target differential, with an inset zoomed in around t = 240 s]
19
Fairness
• 16 MPAT flows
• Target ratio: 1 : 2 : 3 : … : 15 : 16
• 10 standard TCP flows in background
[Figure: relative bandwidth vs. time elapsed (sec); a level of 1.6 is marked]
20
Applicability in the real world
• Deployment
  – Enterprise network
  – Grid applications
• Gold vs. Silver customers
• Background transfers
21
Sample enterprise network (runs over the best-effort Internet)
[Diagram: San Jose (database server), Zurich (transaction server), New York (web server), New Delhi (application server)]
22
Background transfers
• Data that humans are not waiting for
  – Non-deadline-critical
• Examples
  – Pre-fetched traffic on the Web
  – File system backup
  – Large-scale data distribution services
  – Background software updates
  – Media file sharing
• Grid applications
23
Future work
• Benefit short flows
  – Map multiple short flows onto a single long flow
  – Warm start
• Middle box
  – Avoid changing all the senders
• Detect shared congestion
  – Subnet-based aggregation
24
Conclusions
• MPAT is a very promising approach for bandwidth apportionment
• Highly scalable and adaptive
  – Substantial differentiation between flows (demonstrated up to 95:1)
  – Adapts very quickly to transient network congestion
• Transparent to the network and clients
  – Changes only at the server side
  – Friendly to other flows
25
Extra slides…
26
MPAT exhibits much lower variance in throughput than mulTCP
[Figure, left: bandwidth (KBps) vs. time elapsed (sec) for a single mulTCP flow with N=16]
[Figure, right: total bandwidth (KBps) vs. time elapsed (sec) for an MPAT aggregate with N=16, showing reduced variance]
27
Multiple MPAT aggregates "cooperate" with each other
[Figure: fairness across aggregates. Bandwidth (KBps) vs. time elapsed (sec) for 5 MPAT aggregates (N = 2, 4, 6, 8, 10) running simultaneously]
28
Multiple MPAT aggregates running simultaneously cooperate with each other
            # Fast                  # Timeouts              Bandwidth (KBps)
N           With agg.   No agg.     With agg.   No agg.     With agg.   No agg.
2 (MPAT)    565         540         44          28          97.6        106.7
4 (MPAT)    564         542         34          25          98.5        110.9
6 (MPAT)    555         546         39          27          98.1        108.5
8 (MPAT)    535         539         38          25          99.3        106.4
10 (MPAT)   537         531         41          26          97.9        109.5
5 (TCP)     577         538         41          30          93.6        107.1
29
Congestion Manager (CM)
[Diagram: sender running TCP1–TCP4 over CM, which contains a congestion controller and a scheduler; per-"aggregate" statistics (cwnd, ssthresh, rtt, etc.); per-flow scheduling and flow integration; API callbacks, data, and feedback between sender and receiver]
• An end-system architecture for congestion management.
• CM abstracts all congestion-related info into one place.
• Separates reliability from congestion control.
Goal: to ensure fairness
30
Issues with CM
CM maintains one congestion window per "aggregate", which leads to unfair allocation of bandwidth to CM flows.
[Diagram: TCP1–TCP5 sharing a single Congestion Manager window]
31
mulTCP
• Goal: design a mechanism to give one flow N times more bandwidth than another.
• TCP throughput = f(α, β) / (rtt · sqrt(p))
  – α: additive increase factor
  – β: multiplicative decrease factor
  – p: loss probability
  – rtt: round-trip time
• Set α = N and β = 1 − 1/(2N)
  – Increase more aggressively on successful transmission.
  – Decrease more conservatively on packet loss.
• Does not scale with N
  – The induced loss process is much different from that of N standard TCP flows.
  – Unstable controller as N increases.
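Plugging a concrete N into the parameter choice above makes the scaling problem visible (simple arithmetic, not a result from the paper):

```latex
% For N = 16:
\[
  \alpha = N = 16, \qquad
  \beta = 1 - \frac{1}{2N} = \frac{31}{32} \approx 0.969 ,
\]
% so the window grows by 16 segments per loss-free RTT but shrinks by
% only about 3% on a loss, versus the 50% cut of standard TCP: the
% control loop becomes increasingly unstable as N grows.
```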
32
Gain in throughput of mulTCP
[Figure: gain in throughput of mulTCP]
33
Drawbacks of mulTCP
• Does not scale with N
  – Large number of timeouts
  – The loss process induced by a single mulTCP flow is much different from that of N standard TCP flows
• Increased variance with N
  – Amplitude increases with N
  – Unstable controller as N grows
• Two mulTCP flows running together try to "compete" with each other
34
TCP Nice
• Two-level prioritization scheme
• Can only give less bandwidth to low-priority applications
• Cannot give more bandwidth to deadline-critical jobs