The Effects of Active Queue Management on Web Performance
SIGCOMM 2003
Long Le, Jay Aikat, Kevin Jeffay, F. Donelson Smith
Presented by Sookhyun Yang, 29th January, 2004
Contents
• Introduction
• Problem Statement
• Related Work
• Experimental Methodology
• Result and Analysis
• Conclusion
Introduction

Drop policy
• Drop tail: drop when a queue overflows
• Active queue management (AQM): drop before a queue overflows

Active queue management (AQM)
• Keeps the average queue size small in routers
• RED (random early detection) algorithm: most widely studied and implemented

Various design issues of AQM
• How to detect congestion
• How to control the queue size toward a stable operating point
• How the congestion signal is delivered to the sender
  – Implicitly, by dropping packets at the router
  – Explicitly, by signaling with explicit congestion notification (ECN)
Problem Statement

Goal
• Compare the performance of control-theoretic AQM algorithms with the original randomized-dropping paradigm

Considered AQM schemes
• Control-theoretic AQM algorithms
  – Proportional integrator (PI) controller
  – Random exponential marking (REM) controller
• Original randomized-dropping paradigm
  – Adaptive random early detection (ARED) controller

Performance metrics
• Link utilization
• Loss rate
• Response time for each request/response transaction
Random Early Detection

Original RED
• Measure of congestion: weighted-average queue size (AvgQLen)
• [Figure: drop probability vs. AvgQLen — zero below minth, rising linearly to maxp at maxth, then jumping to 1 (drop all packets) beyond maxth]
Random Early Detection

Modification of the original RED
• Gentle mode: the mark/drop probability increases linearly between maxth and 2 * maxth
• [Figure: drop probability vs. AvgQLen — zero below minth, rising linearly to maxp at maxth, then rising linearly from maxp to 1 between maxth and 2 * maxth]
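The RED drop-probability curve, with and without gentle mode, can be condensed into one function. This is a minimal sketch (the function name and the threshold values in the examples are illustrative, not taken from the paper):

```python
def red_drop_probability(avg_qlen, min_th, max_th, max_p, gentle=True):
    """RED drop probability as a function of the weighted-average
    queue size (AvgQLen)."""
    if avg_qlen < min_th:
        return 0.0                      # below minth: never drop
    if avg_qlen < max_th:
        # linear ramp from 0 at minth to maxp at maxth
        return max_p * (avg_qlen - min_th) / (max_th - min_th)
    if gentle and avg_qlen < 2 * max_th:
        # gentle mode: ramp from maxp at maxth to 1 at 2*maxth
        return max_p + (1 - max_p) * (avg_qlen - max_th) / max_th
    return 1.0                          # beyond the ramp: drop all packets
```

With `gentle=False` the function reproduces the original RED curve (a discontinuous jump to 1 at maxth); with `gentle=True`, the modified one.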
Random Early Detection

Weakness of RED
• Does not consider the number of flows sharing a bottleneck link
• In the TCP congestion control mechanism, a packet mark or drop reduces the offered load by a factor of 1 - 0.5/n (n: number of flows sharing the bottleneck link)

Self-configuring RED
• Adjust maxp every time AvgQLen leaves the [minth, maxth] range: multiplicative decrease when AvgQLen falls below the range, additive/multiplicative increase when it exceeds it

ARED
• Adaptive and gentle refinements to original RED
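The maxp adaptation could be sketched as follows. This is an illustrative AIMD-style rule in the spirit of self-configuring RED/ARED; the target band, step sizes, and bounds are my assumptions, not values from the paper:

```python
def adapt_max_p(max_p, avg_qlen, min_th, max_th, alpha=0.01, beta=0.9):
    """One periodic adaptation step for RED's maxp (illustrative
    AIMD rule): steer the average queue size into a target band
    around the middle of [minth, maxth]."""
    target_low = min_th + 0.4 * (max_th - min_th)
    target_high = min_th + 0.6 * (max_th - min_th)
    if avg_qlen > target_high and max_p <= 0.5:
        max_p += alpha       # additive increase: drop more aggressively
    elif avg_qlen < target_low and max_p >= 0.01:
        max_p *= beta        # multiplicative decrease: drop less
    return max_p
```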
Control Theoretic AQM

Misra et al.
• Applied control theory to develop a model for TCP/AQM dynamics
• Used this model to analyze RED
• Limitations of RED
  – Slow response to changes in network traffic
  – Use of a weighted-average queue length

PI controller (Hollot et al.)
• Regulates the queue length to a target value called the "queue reference" (qref)
• Uses instantaneous samples of the queue length, taken at a constant sampling frequency
• Drop probability p(kT), where q(kT) is the instantaneous sample of the queue length and T = 1/sampling-frequency:

  p(kT) = a * (q(kT) - qref) - b * (q((k-1)T) - qref) + p((k-1)T)

• The coefficients a and b are derived from the link capacity, maximum RTT, and expected number of active flows
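The PI update rule can be sketched directly from the formula above. The class name and the coefficient values in the example are illustrative; in the actual controller, a and b would be derived from link capacity, maximum RTT, and the expected number of active flows:

```python
class PIController:
    """Sketch of the PI AQM update rule:
    p(kT) = a*(q(kT) - qref) - b*(q((k-1)T) - qref) + p((k-1)T)."""
    def __init__(self, a, b, q_ref):
        self.a, self.b, self.q_ref = a, b, q_ref
        self.prev_q = q_ref      # q((k-1)T); start at the target
        self.p = 0.0             # p((k-1)T)

    def update(self, q):
        """Called once per sampling period T with an instantaneous
        queue-length sample; returns the new drop probability."""
        p = (self.a * (q - self.q_ref)
             - self.b * (self.prev_q - self.q_ref)
             + self.p)
        self.p = min(max(p, 0.0), 1.0)   # clamp to a valid probability
        self.prev_q = q
        return self.p
```

Note the integrating behavior: as long as the queue sits above qref, the drop probability keeps ratcheting up, which is what drives the queue back to the reference.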
Control Theoretic AQM

REM scheme (Athuraliya et al.)
• Periodically updates a congestion measure called "price", p(t), which reflects
  – Rate mismatch: between the packet arrival and departure rates at the link
  – Queue mismatch: between the actual queue length and its target value
• Price update, where c is the link capacity, q(t) the queue length, qref the target queue size, and x(t) the packet arrival rate:

  p(t) = max(0, p(t-1) + γ * (α * (q(t) - qref) + x(t) - c))

• Drop probability:

  prob(t) = 1 - Φ^(-p(t)), where Φ > 1 is a constant
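A sketch of the REM price update and marking probability (class name and parameter values are illustrative, not from the paper):

```python
class REMController:
    """Sketch of REM: the 'price' p(t) grows with both the queue
    mismatch (q(t) - qref) and the rate mismatch (x(t) - c), and
    the mark/drop probability is prob(t) = 1 - phi**(-p(t))."""
    def __init__(self, gamma, alpha, q_ref, capacity, phi=1.001):
        self.gamma, self.alpha = gamma, alpha
        self.q_ref, self.c, self.phi = q_ref, capacity, phi
        self.price = 0.0

    def update(self, q, arrival_rate):
        """Periodic price update; returns the mark probability."""
        self.price = max(0.0, self.price + self.gamma *
                         (self.alpha * (q - self.q_ref)
                          + arrival_rate - self.c))
        return 1.0 - self.phi ** (-self.price)
```

When the link is uncongested (queue below target, arrivals below capacity) the price decays to zero and nothing is marked; sustained congestion drives the price, and hence the marking probability, up.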
Platform

Emulate one peering link carrying web traffic between sources and destinations.
• [Figure: testbed — ISP1 browser/server machines and ISP2 browser/server machines connect through Ethernet switches to the ISP1 and ISP2 routers; the 100Mbps router-to-router link is the bottleneck, the remaining 1Gbps links are uncongested; network monitors tap the links between the routers]
• End systems: Intel-based machines with FreeBSD 4.5 and 100Mbps Ethernet interfaces
  – Web request generators (browsers): 14 machines
  – Web response generators (servers): 8 machines
  – Total number of flows = 44
• Switches: 3Com 10/100/1000 Ethernet switches
• Routers: ALTQ extensions to FreeBSD (PI, REM, ARED); 1GHz Pentium III, 1GB of memory, 1000-SX fiber gigabit Ethernet NIC, 100Mbps Fast Ethernet NICs
Monitoring Program

Program 1: monitoring the router interface
• Effects of the AQM algorithms
• Log of queue size sampled every 10ms, along with
  – Number of entering packets
  – Number of dropped packets

Program 2: link-monitoring machine
• Connected to the links between the routers
  – Hubs on the 100Mbps segments
  – Fiber splitters on the Gigabit link
• Collects TCP/IP headers with a locally-modified version of the tcpdump utility
• Logs link utilization
Emulation of End-to-End Latency

The congestion control loop is influenced by RTT.

Emulate different RTTs on each TCP connection (per-flow delay)
• Locally-modified version of the dummynet component of FreeBSD
• Adds a randomly chosen minimum delay to all packets of each flow

Minimum delay
• Sampled from a discrete uniform distribution
• Models Internet RTTs within the continental U.S.

RTT
• Flow's minimum delay + additional delay (caused by queues at the routers or on the end systems)

TCP window size = 16Kbyte on all end systems (a widely used value)
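The per-flow delay assignment can be sketched as follows. The 10–150ms uniform range here is an illustrative stand-in for the continental-U.S. RTT distribution; the paper's actual dummynet configuration may differ:

```python
import random

def assign_min_delays(flow_ids, low_ms=10, high_ms=150, seed=None):
    """Sample one fixed minimum delay per flow from a discrete
    uniform distribution (range values are illustrative)."""
    rng = random.Random(seed)
    return {fid: rng.randint(low_ms, high_ms) for fid in flow_ids}

def packet_delay(min_delay_ms, queueing_delay_ms):
    """A packet's total delay: the flow's fixed minimum delay plus
    whatever queueing delay it experienced at routers/end systems."""
    return min_delay_ms + queueing_delay_ms
```

The key property is that the minimum delay is fixed per flow: every packet of a given connection sees the same base latency, so each emulated connection behaves like a path with a stable propagation delay.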
Web-Like Traffic Generation

Model of [13]
• Based on empirical data
• Empirical distributions describing the elements necessary to generate synthetic HTTP workloads

Browser program and server program
• Browser program logs the response time for each request/response pair
• [Figure: a browser alternates between "thinking" and requesting a web page; each request is issued after a random think time; the server's service time = 0]
Calibrations

Offered loads
• Network traffic resulting from emulating the browsing behavior of a fixed-size population of web users

Three critical calibrations before experiments
• Only one primary bottleneck: the 100Mbps link between the two routers
• Predictably controlled offered load on the network
• Resulting packet arrival time-series (packet counts per ms) shows long-range dependent (LRD) behavior [14]

Calibration experiment
• Configure the network connecting the routers at 1Gbps
• Drop-tail queues with 2400 queue elements
Calibrations
• [Figure: calibration traffic on one direction of the 1Gbps link]
• [Figure: heavy-tailed distributions for both user "think" time and response size [13]]
Procedure

Experimental setting
• Offered loads by user populations: 80%, 90%, 98%, or 105% of the capacity of the 100Mbps link
• Each run lasts 120 min and covers over 10,000,000 request/response exchanges; data are collected during a 90-min interval
• Repeated three times for each AQM scheme (PI, REM, ARED)

Experimental focus
• End-to-end response time for each request/response pair
• Loss rate: fraction of IP datagrams dropped at the link queue
• Link utilization on the bottleneck link
• Number of request/response exchanges completed
AQM Experiments with Packet Drops

Two target queue lengths for PI, REM, and ARED
• Tradeoff between link utilization and queuing delay
  – 24 packets for minimum latency
  – 240 packets for high link utilization
  – Recommended in [1,6,8]
• Maximum queue size set sufficiently large to ensure tail drops do not occur

Baseline
• Conventional drop-tail FIFO queues
• Queue sizes for drop-tail
  – 24 and 240 packets: for comparison with the AQM schemes
  – 2400 packets: recently favored buffering, equivalent to 100ms at the link's transmission speed (from a mailing-list discussion)
Queue Size for Drop-Tail
• Drop-tail queue size = 240
• [Figure: drop-tail performance]
Response Time at 80% Load (AQM Experiments with Packet Drops)
• ARED shows some degradation relative to the results on the un-congested link at 80% load
Response Time at 90% Load (AQM Experiments with Packet Drops)
Response Time at 98% Load (AQM Experiments with Packet Drops)
• No AQM scheme can offset the performance degradation at 98% load
Response Time at 105% Load (AQM Experiments with Packet Drops)
• All schemes degrade uniformly from the 98% case
AQM Experiments with ECN

Explicitly signal congestion to end systems with an ECN bit.

Procedure for signaling congestion with ECN
• [Router]: marks an ECN bit in the TCP/IP header of the packet
• [Receiver]: marks the TCP header of the next outbound segment (typically an ACK) destined for the sender of the original marked segment
• [Original sender]
  – Reacts as if a single segment had been lost within a send window
  – Marks the next outbound segment to confirm that it reacted to the congestion

ECN has no effect on the response time of PI, REM, and ARED up to 80% offered load.
Response Time at 90% Load (AQM Experiments with ECN)
• Both PI and REM provide response-time performance close to that on an un-congested link
Response Time at 98% Load (AQM Experiments with ECN)
• Degradation, but far superior to drop-tail
Response Time at 105% Load (AQM Experiments with ECN)
• REM shows the most significant improvement in performance with ECN
• ECN has very little effect on the performance of ARED
Loss Ratio / Completed Requests / Link Utilization (AQM Experiments with Packet Drops or with ECN)
Summary

For 80% load
• No AQM scheme provides better response-time performance than simple drop-tail FIFO queue management
• This is not changed by running the AQM schemes with ECN

For 90% load or greater, without ECN
• PI is better than drop-tail and the other AQM schemes

With ECN
• Both PI and REM provide significant response-time improvement

ARED with recommended parameter settings
• Poorest response-time performance
• Lowest link utilization
• Not changed with ECN
Discussion

Positive impact of ECN
• Response-time performance under PI and REM with ECN at loads of 90% and 98%
• At 90% load: approximately the performance achieved on an un-congested network
Discussion

The performance gap between PI and REM with packet dropping was closed through the addition of ECN.

Differences in performance between ARED and the other AQM schemes
• PI and REM operate in "byte mode" by default, but ARED in "packet mode"
• Gentle mode in REM
• PI and REM periodically sample the queue length when deciding to mark packets, but ARED uses a weighted average
Conclusion

Unlike a similar earlier negative study on the use of AQM, the AQM schemes with ECN can be realized in practice.

Limitations of this paper
• Comparison between only two classes of algorithms
  – Control-theoretic principles
  – The original randomized-dropping paradigm
• Studied a link carrying only web-like traffic; a more realistic mix would include HTTP, other TCP traffic, and UDP traffic