
Using Edge-To-Edge Feedback Control to Make Assured Service More Assured in DiffServ Networks

K.R.R.Kumar, A.L.Ananda, Lillykutty Jacob

Centre for Internet Research

School of Computing

National University of Singapore

Outline

Introduction
– Need for QoS
– Solutions
TCP over DiffServ
– Issues
CATC
– Key Observations
– Design Considerations
– Topology
– Edge-to-Edge Feedback Architecture
– Marking Algorithm
Simulation Details
Results and Analysis
Deployment
Inferences and Future Work

Introduction

Need for QoS
– Exponential growth in traffic has resulted in deterioration of QoS.
– Over-provisioning of networks could be a solution.
– A better solution: an intelligent network service with better resource allocation and management methods.

Solutions

Integrated Services (IntServ)
– Per-flow QoS.
– Not scalable.

Differentiated Services (DiffServ)
– QoS for aggregated flows.
– Scalable.
– The philosophy: simpler at the core (with AQM), more complex at the edges.

DiffServ

[Figure: Logical view of a packet classifier and traffic conditioner. Incoming packets pass through the Classifier, Meter, and Marker/Shaper/Dropper, and are then either forwarded or dropped.]
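For orientation, a minimal sketch of how the conditioner stages in the figure might compose in code; the class and function names here are illustrative assumptions, not part of the original presentation.

```python
# Illustrative sketch of a DiffServ edge traffic conditioner pipeline (assumed interfaces).
from dataclasses import dataclass

@dataclass
class Packet:
    flow_id: int
    size: int              # bytes
    codepoint: str = "cp00"

def condition(pkt, classifier, meter, marker, shaper):
    profile = classifier(pkt)           # select the traffic profile for this packet's flow
    in_profile = meter(pkt, profile)    # compare the measured rate against the profile
    pkt = marker(pkt, in_profile)       # set the codepoint (DSCP) accordingly
    return shaper(pkt)                  # forward, delay, or drop the packet
```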

DiffServ cont’d..

Per-Hop Behaviours
– Expedited Forwarding: deterministic QoS
– Assured Forwarding: statistical QoS

Classifier and Traffic Conditioner
– Meter: Token Bucket (TB), Time Sliding Window (TSW)
– Marker
– Shaper/Dropper
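As a concrete illustration, a minimal sketch of a Time Sliding Window (TSW) rate estimator of the kind the meter could use; the class and parameter names are assumptions for illustration only. The 1 s window default matches the TSW window length used in the simulations reported later.

```python
import time

class TSWMeter:
    """Time Sliding Window rate estimator (illustrative sketch)."""

    def __init__(self, win_length=1.0, initial_rate=0.0):
        self.win_length = win_length      # averaging window in seconds
        self.avg_rate = initial_rate      # estimated rate in bytes per second
        self.t_front = time.monotonic()   # time of the last update

    def update(self, pkt_size_bytes):
        # Decay the byte count over the sliding window and fold in the new packet.
        now = time.monotonic()
        bytes_in_win = self.avg_rate * self.win_length
        new_bytes = bytes_in_win + pkt_size_bytes
        self.avg_rate = new_bytes / (now - self.t_front + self.win_length)
        self.t_front = now
        return self.avg_rate
```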

TCP over DiffServ

Recent measurements show TCP flows in the majority (approx. 95% of the byte share).
TCP flows are much more sensitive to transient congestion.
Unruly flows such as UDP can starve TCP traffic.
Bandwidth assurance is affected by the size of the target rate.
Biased against
– longer RTTs
– smaller window sizes

Congestion Aware Traffic Conditioner (CATC)

Key Observations
– Markers, one of the major building blocks of a traffic conditioner, help in resource allocation.
– A proper understanding of transient congestion in the network helps.
– Edge routers have a better understanding of the domain traffic.
– An early indication of congestion in the network helps to prioritize packets in advance.
– Existing feedback mechanisms are end-to-end, e.g. ECN.

CATC cont’d..

Design Considerations
– Markers should
  • be least sensitive to marker or TCP parameters
  • be transparent to end hosts
  • maintain optimum marking
  • minimize synchronization
  • be fair to different target sizes
  • be congestion aware

Topology

Edge-to-Edge Feedback Architecture

Two edge routers
– Control sender (CS) and control receiver (CR)

Upstream:
– At CS: the CS sends control packets (CPs) at a regular interval of time, the control packet interval (cpi). CPs are given the highest priority.
– At the core: core routers maintain the drop status of the best-effort packets. The information is kept as a status flag for at most cpi time. The CP’s congestion notification (CN) bit is set or reset based on the status flag.
– At CR: responds to an incoming CP with the CN bit set by setting the congestion echo (CE) bit of the outgoing acknowledgement.

Feedback Arch. cont’d

Downstream:
– At CS: maintains a parameter, the congestion factor (cf). cf is set to 1 or 0 based on the status of the CE bit in the acknowledgement received.
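A minimal sketch of the edge-to-edge control loop described above, assuming simple message objects for control packets and their acknowledgements; the class and field names are illustrative assumptions rather than part of the presentation.

```python
# Sketch of the edge-to-edge feedback loop (CS -> core -> CR -> CS).
from dataclasses import dataclass

@dataclass
class ControlPacket:
    cn: bool = False            # congestion notification bit, stamped by core routers

@dataclass
class ControlAck:
    ce: bool = False            # congestion echo bit, set by the control receiver

class CoreRouter:
    def __init__(self):
        self.dropped_be_recently = False     # status flag, held for at most cpi time

    def on_best_effort_drop(self):
        self.dropped_be_recently = True

    def forward_control_packet(self, cp):
        # CPs travel at highest priority; the core only stamps the CN bit.
        cp.cn = cp.cn or self.dropped_be_recently
        self.dropped_be_recently = False     # the flag covers one cpi interval
        return cp

class ControlReceiver:
    def acknowledge(self, cp):
        # Echo the congestion indication back to the control sender.
        return ControlAck(ce=cp.cn)

class ControlSender:
    def __init__(self):
        self.cf = 0                          # congestion factor used by the marker

    def emit_control_packet(self):
        return ControlPacket()               # sent every cpi seconds

    def on_ack(self, ack):
        self.cf = 1 if ack.ce else 0
```

One cycle of the loop: the CS emits a CP every cpi, each core router stamps the CN bit if it has dropped best-effort packets within the last cpi, the CR echoes the bit as CE in its acknowledgement, and the CS sets cf for the marking algorithm that follows.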

Marking Algorithm

For each packet arrival:

    if avg_rate <= cir then
        mp = mp + (1 - avg_rate/cir) * (1 + cf * (cir/cir_max));
    else if avg_rate > cir then
        mp = mp + (1 - avg_rate/cir) * (1 - cf * (cir/cir_max));

    mark the packet using:
        cp 11  w.p. mp        (marked packets)
        cp 00  w.p. (1 - mp)  (unmarked packets)

Marking Algo. Cont’d..

where
– avg_rate = the rate estimate on each packet arrival
– mp = marking probability (≤ 1)
– cir = committed information rate (target rate)
– cf = congestion factor
– cir_max = maximum committed information rate among all cirs
– cp denotes ‘codepoint’ and w.p. denotes ‘with probability’

Algo cont’d..

The marking probability computation is based on:
– cir
– avg_rate
– cf
– cir_max (the maximum among all cirs)

The effect on mp:
– (i) The flow component (1 - avg_rate/cir) constantly compares the observed average rate with the target rate, to keep the achieved rate close to the target.
– (ii) The network component cf * (cir/cir_max) provides a dynamic indication of the congestion level in the network. The marking probability increment is made in proportion to the target rate by weighting cf with the factor cir/cir_max, to mitigate the impact of different target rates.
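A minimal sketch of the CATC marking step as given by the formulas above, combined with the TSW meter sketched earlier; clamping mp to [0, 1] follows the stated bound mp ≤ 1, and everything else (names, structure) is an illustrative assumption.

```python
import random

class CATCMarker:
    """Congestion Aware Traffic Conditioner marker (illustrative sketch)."""

    def __init__(self, cir, cir_max, meter):
        self.cir = cir              # committed information rate (target rate), bytes/s
        self.cir_max = cir_max      # maximum cir among all aggregates
        self.meter = meter          # e.g. the TSWMeter sketched earlier
        self.mp = 0.0               # marking probability
        self.cf = 0                 # congestion factor from the feedback loop

    def mark(self, pkt_size_bytes):
        avg_rate = self.meter.update(pkt_size_bytes)
        flow_component = 1.0 - avg_rate / self.cir
        weight = self.cf * (self.cir / self.cir_max)
        if avg_rate <= self.cir:
            self.mp += flow_component * (1.0 + weight)
        else:
            self.mp += flow_component * (1.0 - weight)
        self.mp = min(1.0, max(0.0, self.mp))   # mp is a probability (<= 1)
        # Mark with probability mp (codepoint 11), otherwise leave unmarked (00).
        return "cp11" if random.random() < self.mp else "cp00"
```

With cf = 1, an aggregate running below its cir raises mp more aggressively, while one running above its cir lowers it more gently, so higher-priority marking is favoured precisely when the network signals congestion.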

Simulation Details

– NS (2.1b7a) simulator on Red Hat 7.0.
– Nortel’s DiffServ module modified for our architecture implementation.
– Core routers use a RIO-like mechanism.
– FTP bulk data transfer for TCP traffic.

Simulation Parameters

TCP segment size          536 bytes
RTT                       100 ms
Simulation time           210 s
TSW window length         1 s
Control packet interval   1 ms
Control packet size       41 bytes
Link bandwidth            10 Mbps

                     Marked    Unmarked
Min_th (packets)     250       150
Max_th (packets)     500       300
Max_dp               0.02      0.1
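To make the "RIO-like mechanism" at the core concrete, here is a minimal RED-with-In/Out style drop-probability sketch using the thresholds above; treating marked packets against the marked-queue average and unmarked packets against the total average is the usual RIO convention, assumed here rather than stated on the slides.

```python
def red_drop_prob(avg_q, min_th, max_th, max_dp):
    """Standard RED drop probability for a given average queue length (in packets)."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_dp * (avg_q - min_th) / (max_th - min_th)

def rio_drop_prob(packet_marked, avg_q_marked, avg_q_total):
    """RIO-like dropper: marked (in-profile) packets see gentler parameters."""
    if packet_marked:
        return red_drop_prob(avg_q_marked, min_th=250, max_th=500, max_dp=0.02)
    return red_drop_prob(avg_q_total, min_th=150, max_th=300, max_dp=0.1)
```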

Simulation details cont’d..

Experiments conducted:
– Assured services (AS) for aggregates
  • AS in the under- and well-subscribed cases
  • AS in the oversubscribed case
– Protection from BE UDP flows
– Effect of UDP flows with assured (target) rates

R&A: under- and well-subscribed

Expt #   Target rates (Mbps)   Achieved rates (Mbps)   BE TCP flow   Link goodput
         Rt1      Rt2          Ra1      Ra2            (Mbps)        (Mbps)
1        1        1            2.54     2.58           3.76          8.88
2        1        2            2.54     2.58           3.76          8.88
3        1        3            2.41     2.93           3.46          8.8
4        2        3            2.36     2.89           3.58          8.83
5        3        3            2.8      2.8            3.21          8.81
6        3        4            2.73     3.49           2.59          8.81

Average link bandwidth (Mbps): 8.835

R&A: over-subscribed

Expt #   Target rates (Mbps)   Achieved rates (Mbps)   BE TCP flow   Link goodput
         Rt1      Rt2          Ra1      Ra2            (Mbps)        (Mbps)
1        2        6            1.83     4.85           2.06          8.74
2        3        5            2.5      4.04           2.05          8.59
3        3        6            2.4      4.6            1.53          8.53
4        1        8            1.2      6              1.28          8.48
5        4        6            3.17     4.5            0.11          7.78
6        2        8            1.55     6.16           0.72          8.43

Average link bandwidth (Mbps): 8.425

R&A: Goodput vs Time graph (2/6 Mbps target rates)

[Figure: goodput (Mbps) vs time (s) plots for the two aggregates over the 0–250 s simulation interval.]

Analysis

CATC is able to:
– achieve the target rates in the under- and well-subscribed cases;
– maintain the achieved rate close to its target rate;
– keep the total link utilization more or less constant throughout.

R&A: AS in the presence of BE UDP and TCP

Expt #   Target rates (Mbps)   Achieved rates (Mbps)   BE TCP flow   BE UDP   Link goodput
         Rt1      Rt2          Ra1      Ra2            (Mbps)        (Mbps)   (Mbps)
1        2        6            1.52     4.18           0.46          3.54     6.16
2        3        5            2.08     3.41           0.44          2.52     5.93
3        3        6            2        4.42           0.13          2.12     6.55
4        1        8            0.66     6.34           0.01          1.87     7.01
5        4        6            2.65     4.6            0             1.5      7.25
6        2        8            1.21     6              0             1.6      7.21

Average link bandwidth (Mbps): 6.685

R&A: AS in the presence of AS UDP and BE TCP

Expt #   Target rates (Mbps)   Achieved rates (Mbps)   BE TCP flow   AS UDP   Link goodput
         Rt1      Rt2          Ra1      Ra2            (Mbps)        (Mbps)   (Mbps)
1        1        1            1.7      1.77           2.61          2.99     6.08
2        2        2            1.92     1.88           2.27          2.99     6.07
3        3        3            2.37     2.47           1.18          2.99     6.02
4        4        4            2.92     2.98           0.13          2.98     6.03
5        5        5            3.12     2.83           0.1           2.97     6.05

Average link bandwidth (Mbps): 6.05

Analysis

CATC
– achieves goodput close to the target rates;
– succeeds in taking the share of the BE TCP and UDP flows in the worst-case scenario;
– keeps the average link utilization reasonably good;
– the AS UDP flow gets its assured rate.

Deployment

– MPLS over DiffServ.
– The marker can be placed anywhere (owing to its lack of sensitivity to marker parameters).

Inferences and Future work

The architecture is transparent to TCP sources and hence doesn’t require any modifications at the end hosts.

The edge-to-edge feedback control loop helps the marker to take proactive measures in maintaining the assured service effectively, especially during periods of congestion.

A single feedback control is used for an aggregated flow. Hence this architecture is scalable to any number of flows between the two edge gateways.

The architecture is adaptive to changes in load and network conditions.

The marking algorithm takes care of any bursts in the flows.

Future work

Extend the present architecture to account for drops in the priority queues.

Develop a new marking algorithm to incorporate this.

Q&A

Thank You!
