Network Border Patrol: Preventing Congestion Collapse and Promoting Fairness in the Internet


Célio Albuquerque†, Brett J. Vickers‡ and Tatsuya Suda†
† Dept. of Information and Computer Science, University of California, Irvine. {celio,suda}@ics.uci.edu
‡ Dept. of Computer Science, Rutgers University. bvickers@cs.rutgers.edu

Abstract

The end-to-end nature of Internet congestion control is an important factor in its scalability and robustness. However, end-to-end congestion control algorithms alone are incapable of preventing the congestion collapse and unfair bandwidth allocations created by applications that are unresponsive to network congestion. To address this flaw, we propose and investigate a novel congestion avoidance mechanism called Network Border Patrol (NBP). NBP relies on the exchange of feedback between routers at the borders of a network in order to detect and restrict unresponsive traffic flows before they enter the network. An enhanced core-stateless fair queueing mechanism is proposed in order to provide fair bandwidth allocations among competing flows. NBP is compliant with the Internet philosophy of pushing complexity toward the edges of the network whenever possible. Simulation results show that NBP effectively eliminates congestion collapse and that, when combined with fair queueing, NBP achieves approximately max-min fair bandwidth allocations for competing network flows.

Keywords: Internet, congestion control, congestion collapse, max-min fairness, end-to-end argument, core-stateless mechanisms, border control

This research is supported by the National Science Foundation through grant NCR-9628109. It has also been supported by grants from the University of California MICRO program, Hitachi America, Hitachi, Standard Microsystem Corp., Canon Information Systems Inc., Nippon Telegraph and Telephone Corp. (NTT), Nippon Steel Information and Communication Systems Inc. (ENICOM), Tokyo Electric Power Co., Fujitsu, Novell, Matsushita Electric Industrial Co. and Fundacao CAPES/Brazil.

I. INTRODUCTION

The essential philosophy behind the Internet is expressed by the scalability argument: no protocol, algorithm or service should be introduced into the Internet if it does not scale well.
A key corollary to the scalability argument is the end-to-end argument: to maintain scalability, algorithmic complexity should be pushed to the edges of the network whenever possible. Perhaps the best example of the Internet philosophy is TCP congestion control, which is achieved primarily through algorithms implemented at end systems. Unfortunately, TCP congestion control also illustrates some of the shortcomings of the end-to-end argument. As a result of its strict adherence to end-to-end congestion control, the current Internet suffers from two maladies: congestion collapse from undelivered packets, and unfair allocations of bandwidth between competing traffic flows.

The first malady, congestion collapse from undelivered packets, arises when bandwidth is continually consumed by packets that are dropped before reaching their ultimate destinations [1]. John Nagle coined the term "congestion collapse" in 1984 to describe a network that remains in a stable congested state [2]. At that time, the primary cause of congestion collapse was the poor calculation of retransmission timers by TCP sources, which led to the unnecessary retransmission of delayed packets. This problem was corrected with more recent implementations of TCP [3]. Recently, however, a potentially more serious cause of congestion collapse has become increasingly common. Network applications are now frequently written to use transport protocols, such as UDP, which are oblivious to congestion and make no attempt to reduce packet transmission rates when packets are discarded by the network [4]. In fact, during periods of congestion some applications actually increase their transmission rates, just so that they will be less sensitive to packet losses [5]. Unfortunately, the Internet currently has no effective way to regulate such applications.

The second malady, unfair bandwidth allocation, arises in the Internet for a variety of reasons, one of which is the presence of applications that do not adapt to congestion. Adaptive applications (e.g., TCP-based applications) that respond to congestion by rapidly reducing their transmission rates are likely to receive unfairly small bandwidth allocations when competing with unresponsive or malicious applications. The Internet protocols themselves also introduce unfairness. The TCP algorithm, for instance, inherently causes each TCP flow to receive a bandwidth that is inversely proportional to its round-trip time [6]. Hence, TCP connections with short round-trip times may receive unfairly large allocations of network bandwidth when compared to connections with longer round-trip times.
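As an aside not drawn from the paper itself, the inverse dependence on round-trip time can be made concrete with the well-known macroscopic model of TCP throughput due to Mathis et al., stated here purely for illustration:

    \text{throughput} \approx \frac{MSS}{RTT} \cdot \frac{C}{\sqrt{p}}, \qquad C \approx \sqrt{3/2},

where MSS is the maximum segment size and p is the packet loss probability. With MSS and p held fixed, halving a connection's RTT roughly doubles its share of a bottleneck link, which is exactly the unfairness described above.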
To address these maladies, we introduce and investigate a new Internet traffic control protocol called Network Border Patrol. The basic principle of Network Border Patrol (NBP) is to compare, at the borders of a network, the rates at which packets from each application flow are entering and leaving the network. If a flow's packets are entering the network faster than they are leaving it, then the network is likely buffering or, worse yet, discarding the flow's packets. In other words, the network is receiving more packets than it can handle. NBP prevents this scenario by "patrolling" the network's borders, ensuring that each flow's packets do not enter the network at a rate greater than they are able to leave it. This patrolling prevents congestion collapse from undelivered packets, because an unresponsive flow's otherwise undeliverable packets never enter the network in the first place.

As an option, Network Border Patrol can also realize approximately max-min fair bandwidth allocations for competing packet flows if it is used in conjunction with an appropriate fair queueing mechanism. Weighted fair queueing (WFQ) [7], [8] is an example of one such mechanism. Unfortunately, WFQ imposes significant complexity on routers by requiring them to maintain per-flow state and perform per-flow scheduling of packets. In this paper we introduce an enhanced core-stateless fair queueing mechanism in order to achieve some of the advantages of WFQ without all of its complexity.

NBP's prevention of congestion collapse comes at the expense of some additional network complexity, since routers at the border of the network are expected to monitor and control the rates of individual flows. NBP also introduces an added communication overhead, since in order for an edge router to know the rate at which its packets are leaving the network, it must exchange feedback with other edge routers. However, unlike other proposed approaches to the problem of congestion collapse, NBP's added complexity is isolated to edge routers; routers within the core do not participate in the prevention of congestion collapse. Moreover, end systems operate in total ignorance of the fact that NBP is implemented in the network, so no changes to transport protocols are necessary.
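To make the border mechanism concrete, here is a minimal Python sketch of per-flow rate control at an ingress edge router; all class, method and parameter names are assumed for illustration and are not taken from the paper. The allowed entry rate of each flow is capped at the rate most recently reported by the egress edge router, enforced here with a simple token bucket.

    import time
    from collections import defaultdict

    class IngressRateController:
        # Sketch of NBP-style per-flow rate control at an ingress edge router.
        BUCKET_DEPTH = 15000.0  # burst tolerance in bits (assumed value)

        def __init__(self, initial_rate=64000.0):  # bits per second
            self.rate = defaultdict(lambda: initial_rate)         # allowed ingress rate per flow
            self.tokens = defaultdict(lambda: self.BUCKET_DEPTH)  # token-bucket credit in bits
            self.last = defaultdict(time.monotonic)               # last refill time per flow

        def on_egress_feedback(self, flow_id, egress_rate):
            # Backward feedback from the egress edge router: never admit a flow
            # into the network faster than it has been observed leaving it.
            self.rate[flow_id] = egress_rate

        def admit(self, flow_id, packet_bits):
            # Refill the flow's token bucket, then forward only if credit suffices.
            now = time.monotonic()
            elapsed = now - self.last[flow_id]
            self.last[flow_id] = now
            self.tokens[flow_id] = min(self.BUCKET_DEPTH,
                                       self.tokens[flow_id] + self.rate[flow_id] * elapsed)
            if self.tokens[flow_id] >= packet_bits:
                self.tokens[flow_id] -= packet_bits
                return True   # packet enters the network
            return False      # packet is held back at the border

Under this scheme, an unresponsive flow whose egress rate collapses sees its allowed ingress rate collapse with it, so its undeliverable packets are stopped at the border rather than inside the network.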
The remainder of this paper is organized as follows. Section II describes why existing mechanisms are effective neither in preventing congestion collapse nor in providing fairness in the presence of unresponsive flows. In Section III, we describe the architectural components of Network Border Patrol in further detail and present the feedback and rate control algorithms used by NBP edge routers to prevent congestion collapse. Section IV explains the enhanced core-stateless fair queueing mechanism and illustrates the advantages of providing lower queueing delays to flows transmitting at lower rates. In Section V, we present the results of several simulations, which illustrate the ability of NBP to avoid congestion collapse and to provide fairness to competing network flows. The scalability of NBP is further examined. In Section VI, we discuss several implementation issues that must be addressed in order to make deployment of NBP feasible in the Internet. Finally, in Section VII we provide some concluding remarks.

II. RELATED WORK

The maladies of congestion collapse from undelivered packets and of unfair bandwidth allocations have not gone unrecognized. Some have argued that there are social incentives for multimedia applications to be friendly to the network, since an application would not want to be held responsible for throughput degradation in the Internet. However, malicious denial-of-service attacks using unresponsive UDP flows are becoming disturbingly frequent in the Internet, and they are an example of why the Internet cannot rely solely on social incentives to control congestion or to operate fairly.

Some have argued that these maladies may be mitigated through the use of improved packet scheduling [12] or queue management [13] mechanisms in network routers. For instance, per-flow packet scheduling mechanisms like Weighted Fair Queueing (WFQ) [7]
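For orientation, the core-stateless idea that the paper's enhanced mechanism builds on can be sketched briefly. What follows is a minimal Python illustration of baseline core-stateless fair queueing as proposed by Stoica et al., not the enhanced variant introduced in this paper, and the function and parameter names are assumed. Ingress edge routers stamp each packet with an estimate of its flow's arrival rate; core routers keep no per-flow state and instead drop packets probabilistically so that every flow's forwarded rate is trimmed toward an estimated fair share.

    import random

    def csfq_forward(labeled_rate, fair_share):
        # labeled_rate: the flow's arrival rate, stamped on the packet by the
        #               ingress edge router, so the core needs no per-flow state.
        # fair_share:   the core router's running estimate of the fair rate on
        #               the outgoing link.
        # Flows at or below the fair share are never dropped; faster flows are
        # dropped with probability 1 - fair_share / labeled_rate, which trims
        # each flow's forwarded rate toward the fair share.
        drop_prob = max(0.0, 1.0 - fair_share / labeled_rate)
        return random.random() >= drop_prob  # True means forward the packet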
