Congestion Control and Traffic Management in ATM Networks: Recent Advances and a Survey



    Computer Networks and ISDN Systems 28 (1996) 1723-1738

Congestion control and traffic management in ATM networks: Recent advances and a survey

Raj Jain *

Department of Computer and Information Science, The Ohio State University, Columbus, OH 43210, USA

Abstract

Congestion control mechanisms for ATM networks as selected by the ATM Forum traffic management group are described, along with the reasons behind these selections. In particular, the criteria used to choose between the rate-based and credit-based approaches and the key points of the debate between these two approaches are presented. The approach that was finally selected and several other schemes that were considered are described.

Keywords: ATM networks; Traffic management; Congestion control; Flow control; Computer networks; Congestion avoidance

1. Introduction

Future high-speed networks are expected to use the Asynchronous Transfer Mode (ATM), in which information is transmitted using short fixed-size cells consisting of 48 bytes of payload and 5 bytes of header. The fixed size of the cells reduces the variance of delay, making the networks suitable for integrated traffic consisting of voice, video, and data. Providing the desired quality of service for these various traffic types is much more complex than in the data networks of today. Proper traffic management helps ensure efficient and fair operation of networks in spite of constantly varying demand. This is particularly important for data traffic, which has very little predictability and, therefore, cannot reserve resources in advance as in the case of voice telecommunications networks.

Traffic management is concerned with ensuring that users get their desired quality of service. The problem is especially difficult during periods of heavy load,

* Email: [email protected].

particularly if the traffic demands cannot be predicted in advance. This is why congestion control, although only a part of the traffic management problem, is its most essential aspect.

Congestion control is critical in both ATM and non-ATM networks. When two bursts arrive simultaneously at a node, the queue lengths may grow very fast, resulting in buffer overflow. Also, the range of link speeds is growing fast as higher speed links are introduced into the slower networks of the past. At the points where the total input rate is larger than the output link capacity, congestion becomes a problem.

The protocols for ATM networks began being designed in 1984 when the Consultative Committee on International Telecommunication and Telegraph (CCITT), a United Nations organization responsible for telecommunications standards, selected ATM as the paradigm for its broadband integrated services digital networks (B-ISDN). Like most other telecommunications standards, the ATM standards specify the interface between various networking components. A principal focus of these standards is the user-network

0169-7552/96/$15.00 Copyright © 1996 Elsevier Science B.V. All rights reserved.
PII S0169-7552(96)00012-9


interface (UNI), which specifies how a computer system (which is owned by a user) should communicate with a switch (which is owned by the network service provider).

ATM networks are connection-oriented in the sense that before two systems on the network can communicate, they should inform all intermediate switches about their service requirements and traffic parameters. This is similar to telephone networks, where a circuit is set up from the calling party to the called party. In ATM networks, such circuits are called virtual circuits (VCs). The connections allow the network to guarantee the quality of service by limiting the number of VCs. Typically, a user declares key service requirements at the time of connection setup, declares the traffic parameters, and may agree to control these parameters dynamically as demanded by the network.

This paper is organized as follows. Section 2 defines various quality of service attributes. These attributes help define various classes of service in Section 3. We then provide a general overview of congestion control mechanisms in Section 4 and describe the generalized cell rate algorithm in Section 4.1. Section 5 describes the criteria that were set up for selecting the final approach. A number of congestion schemes are described briefly in Section 6, with the credit-based and rate-based approaches described in more detail in Sections 7 and 8. The debate that led to the eventual selection of the rate-based approach is presented in Section 9.

The description presented here is not intended to be a precise description of the standard. In order to make the concepts easy to understand, we have at times simplified the description. Those interested in precise details should consult the standards documents, which are still being developed as of this writing in January 1995.

2. Quality of Service (QoS) and traffic attributes

While setting up a connection on ATM networks, users can specify the following parameters related to the input traffic characteristics and the desired quality of service.

(1) Peak Cell Rate (PCR): The maximum instantaneous rate at which the user will transmit.

(2) Sustained Cell Rate (SCR): This is the average rate as measured over a long interval.

(3) Cell Loss Ratio (CLR): The percentage of cells that are lost in the network due to error and congestion and are not delivered to the destination:

    Cell Loss Ratio = Lost Cells / Transmitted Cells.

Each ATM cell has a Cell Loss Priority (CLP) bit in the header. During congestion, the network first drops cells that have the CLP bit set. Since the loss of a CLP = 0 cell is more harmful to the operation of the application, CLR can be specified separately for cells with CLP = 1 and for those with CLP = 0.

(4) Cell Transfer Delay (CTD): The delay experienced by a cell between the network entry and exit points is called the cell transfer delay. It includes propagation delays, queueing delays at various intermediate switches, and service times at queueing points.

(5) Cell Delay Variation (CDV): This is a measure of the variance of CTD. High variation implies larger buffering for delay-sensitive traffic such as voice and video. There are multiple ways to measure CDV. One measure, called peak-to-peak CDV, consists of computing the difference between the (1 - α)-percentile and the minimum of the cell transfer delay for some small value of α.

(6) Cell Delay Variation Tolerance (CDVT) and Burst Tolerance (BT): For sources transmitting at any given rate, a slight variation in the inter-cell time is allowed. For example, a source with a PCR of 10,000 cells per second should nominally transmit cells every 100 µs. A leaky bucket type algorithm called the Generalized Cell Rate Algorithm (GCRA) is used to determine if the variation in the inter-cell times is acceptable. This algorithm has two parameters. The first parameter is the nominal inter-cell time (inverse of the rate) and the second parameter is the allowed variation in the inter-cell time. Thus, a GCRA(100 µs, 10 µs) will allow cells to arrive no more than 10 µs earlier than their nominal scheduled time. The second parameter of the GCRA used to enforce PCR is called Cell Delay Variation Tolerance (CDVT), and that of the GCRA used to enforce SCR is called Burst Tolerance (BT).

(7) Maximum Burst Size (MBS): The maximum number of back-to-back cells that can be sent at the peak cell rate without violating the sustained cell rate is called the maximum burst size (MBS). It is related

  • 8/6/2019 Congestion Control and Traffic Management in ATM Networks Recent Advances and a Survey

    3/16

    R. Jain/Computer Networks and ISDN Systems 28 (I 996) 1723-l 738 1725

    to the PCR, SCR, and BT as follows:

    Burst Tolerance = (MBS - 1) × (1/SCR - 1/PCR).

Since MBS is more intuitive than BT, signaling messages use MBS. This means that during connection setup, a source is required to specify MBS. BT can be easily calculated from MBS, SCR, and PCR.

Note that PCR, SCR, CDVT, BT, and MBS are input traffic characteristics and are enforced by the network at the network entry. CLR, CTD, and CDV are qualities of service provided by the network and are measured at the network exit point.
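As a quick sketch of this relation (the example numbers and helper names below are ours, not from the standard; rates are in cells per second and BT in seconds):

```python
def burst_tolerance(mbs, scr, pcr):
    """BT = (MBS - 1) * (1/SCR - 1/PCR)."""
    return (mbs - 1) * (1.0 / scr - 1.0 / pcr)

def max_burst_size(bt, scr, pcr):
    """Invert the relation to recover MBS from a signaled BT."""
    return 1 + bt / (1.0 / scr - 1.0 / pcr)

# Hypothetical VC: PCR = 10,000 cells/s, SCR = 1,000 cells/s, MBS = 100
bt = burst_tolerance(100, 1_000, 10_000)
print(bt)                                  # about 0.0891 s
print(max_burst_size(bt, 1_000, 10_000))   # recovers MBS = 100
```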

(8) Minimum Cell Rate (MCR): This is the minimum rate desired by a user.

Only the first six of the above parameters were specified in UNI version 3.0. MCR has been added recently and will appear in the next version of the traffic management document.

3. Service categories

There are five categories of service. The QoS parameters for these categories are summarized in Table 1 and are explained below [46]:

(1) Constant Bit Rate (CBR): This category is used for emulating circuit switching. The cell rate is constant. Cell loss ratio is specified for CLP = 0 cells and may or may not be specified for CLP = 1 cells. Examples of applications that can use CBR are telephone, video conferencing, and television (entertainment video).

(2) Variable Bit Rate (VBR): This category allows users to send at a variable rate. Statistical multiplexing is used, and so there may be a small nonzero random loss. Depending upon whether or not the application is sensitive to cell delay variation, this category is subdivided into two categories: real-time VBR and non-real-time VBR. For non-real-time VBR, only mean delay is specified, while for real-time VBR, maximum delay and peak-to-peak CDV are specified. An example of real-time VBR is interactive compressed video, while that of non-real-time VBR is multimedia email.

(3) Available Bit Rate (ABR): This category is designed for normal data traffic such as file transfer and email. Although the standard does not require the cell transfer delay and cell loss ratio to be guaranteed or

minimized, it is desirable for switches to minimize the delay and loss as much as possible. Depending upon the congestion state of the network, the source is required to control its rate. The users are allowed to declare a minimum cell rate, which is guaranteed to the VC by the network. Most VCs will ask for an MCR of zero. Those with higher MCR may be denied connection if sufficient bandwidth is not available.

(4) Unspecified Bit Rate (UBR): This category is designed for those data applications that want to use any left-over capacity and are not sensitive to cell loss or delay. Such connections are not rejected on the basis of bandwidth shortage (no connection admission control) and are not policed for their usage behavior. During congestion, the cells are lost but the sources are not expected to reduce their cell rate. Instead, these applications may have their own higher-level cell loss recovery and retransmission mechanisms. Examples of applications that can use this service are email, file transfer, news feed, etc. Of course, these same applications can use the ABR service, if desired.

Fig. 1. Congestion techniques for various congestion durations.

    Note that only ABR traffic responds to congestionfeedback from the network. The rest of this paper isdevoted to this category of traffic.

4. Congestion control methods

Congestion happens whenever the input rate is more than the available link capacity:

    Σ Input Rate > Available link capacity.

Most congestion control schemes consist of adjusting the input rates to match the available link capacity (or rate). One way to classify congestion control schemes is by the layer of the ISO/OSI reference model at which the scheme operates. For example, there are data link, routing, and transport layer congestion control


Table 1
ATM layer service categories


Attribute            CBR          rt-VBR       nrt-VBR      ABR            UBR
PCR and CDVT         Specified    Specified    Specified    Specified(d)   Specified(c)
SCR, MBS, CDVT(e)    n/a          Specified    Specified    n/a            n/a
MCR                  n/a          n/a          n/a          Specified      n/a
Peak-to-peak CDV     Specified    Specified    Unspecified  Unspecified    Unspecified
Mean CTD             Unspecified  Unspecified  Specified    Unspecified    Unspecified
Maximum CTD          Specified    Specified    Unspecified  Unspecified    Unspecified
CLR(a)               Specified    Specified    Specified    Specified(b)   Unspecified
Feedback             Unspecified  Unspecified  Unspecified  Specified      Unspecified

a For all service categories, the cell loss ratio may be unspecified for CLP = 1.
b Minimized for sources that adjust cell flow in response to control information.
c May not be subject to connection admission control and user parameter control procedures.
d Represents the maximum rate at which the source can send as controlled by the control information.
e These parameters are either explicitly or implicitly specified.
f Different values of CDVT may be specified for SCR and PCR.

schemes. Typically, a combination of such schemes is used. The selection depends upon the severity and duration of congestion.

Fig. 1 shows how the duration of congestion affects the choice of the method. The best method for networks that are almost always congested is to install higher speed links and redesign the topology to match the demand pattern.

For sporadic congestion, one method is to route according to the load level of links and to reject new connections if all paths are highly loaded. This is called connection admission control (CAC). The busy tone on telephone networks is an example of CAC. CAC is effective only for medium duration congestion, since once the connection is admitted the congestion may persist for the duration of the connection.

For congestion lasting less than the duration of a connection, an end-to-end control scheme can be used. For example, during connection setup, the sustained and peak rates may be negotiated. Later a leaky bucket algorithm may be used by the source or the network to ensure that the input meets the negotiated parameters. Such traffic shaping algorithms are open loop in the sense that the parameters cannot be changed dynamically if congestion is detected after negotiation. In a closed loop scheme, on the other hand, sources are informed dynamically about the congestion state of the network and are asked to increase or decrease their input rate. The feedback may be used hop-by-hop (at the datalink layer) or end-to-end (transport layer). Hop-by-hop feedback is more effective for shorter term overloads than end-to-end feedback.

For very short spikes in traffic load, providing sufficient buffers in the switches is the best solution.

Notice that solutions that are good for short-term congestion are not good for long-term overload and vice versa. A combination of various techniques (rather than just one technique) is used, since overloads of various durations are experienced on all networks.

UNI 3.0 allows CAC, traffic shaping, and binary feedback (EFCI). However, the algorithms for CAC are not specified. The traffic shaping and feedback mechanisms are described next.

4.1. Generalized Cell Rate Algorithm (GCRA)

As discussed earlier, GCRA is the so-called leaky bucket algorithm, which is used to enforce regularity in the cell arrival times. Basically, all arriving cells are put into a bucket, which is drained at the specified rate. If too many cells arrive at once, the bucket may overflow. The overflowing cells are called non-conforming and may or may not be admitted into the network. If admitted, the cell loss priority (CLP) bit of the non-conforming cells may be set so that they will be the first to be dropped in case of overload. Cells of service categories specifying peak cell rate should conform to GCRA(1/PCR, CDVT), while those also specifying sustained cell rate should additionally conform to GCRA(1/SCR, BT). See Section 2 for definitions of CDVT and BT.
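The conformance test can be sketched in the standard virtual-scheduling form, in which each conforming cell advances a theoretical arrival time (TAT) by the nominal inter-cell time (a minimal sketch; the helper name and sample arrival times are ours, with parameters from the 10,000 cells per second example of Section 2):

```python
def make_gcra(T, tau):
    """GCRA(T, tau) conformance test, virtual-scheduling form:
    T is the nominal inter-cell time, tau the allowed early-arrival
    tolerance (CDVT when enforcing PCR, BT when enforcing SCR)."""
    tat = 0.0  # theoretical arrival time of the next cell

    def conforming(t):
        nonlocal tat
        if t < tat - tau:        # cell is too early: non-conforming
            return False         # state (tat) is left unchanged
        tat = max(t, tat) + T    # schedule the next expected arrival
        return True

    return conforming

# PCR = 10,000 cells/s -> T = 100 us; CDVT = 10 us
gcra = make_gcra(T=100e-6, tau=10e-6)
print(gcra(0.0))      # True: first cell conforms
print(gcra(95e-6))    # True: 5 us early, within the 10 us tolerance
print(gcra(180e-6))   # False: 20 us earlier than the next TAT allows
```

This formulation accepts exactly the same cell streams as the continuous-state leaky bucket: a non-conforming cell leaves the bucket state untouched, so one early cell does not penalize later conforming ones.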


Generalized Flow Control (4 bits) | Virtual Path ID (8) | Virtual Circuit ID (16) | Payload Type (3) | Cell Loss Priority (1) | Header Error Check (8)

Fig. 2. ATM cell header format.

4.2. Feedback facilities

UNI V3.0 specified two different facilities for feedback control:
(1) Generalized Flow Control (GFC).
(2) Explicit Forward Congestion Indication (EFCI).

As shown in Fig. 2, the first four bits of the cell header at the user-network interface (UNI) were reserved for GFC. At the network-network interface, GFC is not used and the four bits are part of an extended virtual path (VP) field. It was expected that the GFC bits would be used by the network to flow control the source. The GFC algorithm was to be designed later. This approach has been abandoned.

Switches can use the payload type field in the cell header to convey congestion information in a binary (congestion or no congestion) manner. When the first bit of this field is 0, the second bit is treated as the explicit forward congestion indication (EFCI) bit. For all cells leaving their sources, the EFCI bit is clear. Congested switches on the path, if any, can set the EFCI bit. The destinations can monitor EFCI bits and ask sources to increase or decrease their rate. The exact algorithm for use of EFCI was left for future definition. The use of EFCI is discussed later in Section 8.

5. Selection criteria

ATM network design started initially in CCITT (now known as ITU). However, the progress was rather slow and also a bit voice-centric, in the sense that many of the decisions were not suitable for data traffic. So in October 1991, four companies, Adaptive (NET), CISCO, Northern Telecom, and Sprint, formed the ATM Forum to expedite the process. Since then ATM Forum membership has grown to over 200 principal members. The traffic management working group was started in the Forum in May 1993. A number of congestion schemes were presented. To sort out these proposals, the group decided to first agree on a set of selection criteria. Since these criteria are

of general interest and apply to non-ATM networks as well, we describe some of them briefly here.

Fig. 3. Sample configuration for max-min fairness.

5.1. Scalability

Networks are generally classified based on extent (coverage), number of nodes, speed, or number of users. Since ATM networks are intended to cover a wide range along all these dimensions, it is necessary that the scheme not be limited to a particular range of speed, distance, number of switches, or number of VCs. In particular, this ensures that the same scheme can be used for local area networks (LANs) as well as wide area networks (WANs).

5.2. Optimality

In a shared environment the throughput for a source depends upon the demands of other sources. The most commonly used criterion for the correct share of bandwidth for a source in a network environment is the so-called max-min allocation [12]. It provides the maximum possible bandwidth to the source receiving the least among all contending sources. Mathematically, it is defined as follows. Given a configuration with n contending sources, suppose the ith source gets a bandwidth xi. The allocation vector {x1, x2, ..., xn} is feasible if all link load levels are less than or equal to 100%. The total number of feasible vectors is infinite. For each allocation vector, the source that is getting the least allocation is, in some sense, the unhappiest source. Given the set of all feasible vectors, find the vector that gives the maximum allocation to this unhappiest source. Actually, the number of such vectors is also infinite, although we have narrowed down the search region considerably. Now we take this unhappiest source out and reduce the problem to that of the remaining n - 1 sources operating on a network with reduced link capacities. Again, we find the unhappiest source among these n - 1 sources, give that source


    Fig. 4. Configuration after removing VC 3

the maximum allocation, and reduce the problem by one source. We keep repeating this process until all sources have been given the maximum that they could get.

The following example illustrates the above concept of max-min fairness. Fig. 3 shows a network with four switches connected via three 150 Mbps links. Four VCs are set up such that the first link L1 is shared by sources S1, S2, and S3. The second link is shared by S3 and S4. The third link is used only by S4. Let us divide the link bandwidths fairly among contending sources. On link L1, we can give 50 Mbps to each of the three contending sources S1, S2, and S3. On link L2, we would give 75 Mbps to each of the sources S3 and S4. On link L3, we would give all 150 Mbps to source S4. However, source S3 cannot use its 75 Mbps share at link L2 since it is allowed to use only 50 Mbps at link L1. Therefore, we give 50 Mbps to source S3 and construct a new configuration shown in Fig. 4, where source S3 has been removed and the link capacities have been reduced accordingly. Now we give 1/2 of link L1's remaining capacity to each of the two contending sources S1 and S2; each gets 50 Mbps. Source S4 gets the entire remaining bandwidth (100 Mbps) of link L2. Thus, the fair allocation vector for this configuration is (50, 50, 50, 100). This is the max-min allocation.

Notice that max-min allocation is both fair and efficient. It is fair in the sense that all sources get an equal share on every link provided that they can use it. It is efficient in the sense that each link is utilized to the maximum load possible.

It must be pointed out that max-min fairness is just one of several possible optimality criteria. It does not account for the guaranteed minimum (MCR). Other criteria, such as weighted fairness, have been proposed to determine the optimal allocation of resources over and above MCR.
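The iterative procedure described above, often called progressive filling, can be sketched as follows (a minimal sketch; the function name is ours, and the configuration matches Fig. 3):

```python
def max_min_allocation(capacity, routes):
    """Max-min fair shares by progressive filling: repeatedly saturate
    the most constrained (bottleneck) link, fixing the allocation of
    every VC that crosses it, until all VCs are allocated."""
    alloc = {}
    cap = dict(capacity)
    active = dict(routes)          # VCs not yet allocated
    while active:
        # unallocated VCs crossing each link
        users = {l: [v for v, r in active.items() if l in r] for l in cap}
        # fair share each still-used link could offer its VCs
        share = {l: cap[l] / len(vs) for l, vs in users.items() if vs}
        bottleneck = min(share, key=share.get)
        s = share[bottleneck]
        for vc in users[bottleneck]:
            alloc[vc] = s
            for l in active[vc]:
                cap[l] -= s        # reduce capacity along the VC's path
            del active[vc]
    return alloc

# Fig. 3: L1 carries S1, S2, S3; L2 carries S3, S4; L3 carries S4
caps = {"L1": 150, "L2": 150, "L3": 150}
routes = {"S1": ["L1"], "S2": ["L1"],
          "S3": ["L1", "L2"], "S4": ["L2", "L3"]}
print(max_min_allocation(caps, routes))
# {'S1': 50.0, 'S2': 50.0, 'S3': 50.0, 'S4': 100.0}
```

The first iteration finds L1 as the bottleneck (150/3 = 50 Mbps per VC), fixing S1, S2, and S3; the second gives S4 the 100 Mbps remaining on L2, reproducing the allocation vector derived in the text.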

Fig. 5. Theater parking lot.

5.3. Fairness index

Given any optimality criterion, one can determine the optimal allocation. If a scheme gives an allocation that is different from the optimal, its unfairness is quantified numerically as follows. Suppose a scheme allocates {â1, â2, ..., ân} instead of the optimal allocation {a1, a2, ..., an}. Then, we calculate the normalized allocations xi = âi/ai for each source and compute the fairness index as follows [13,14]:

    Fairness = (Σ xi)² / (n Σ xi²).
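This index of [13,14], f = (Σ xi)² / (n Σ xi²) on the normalized allocations, can be computed directly (a small sketch; the sample allocations are ours, with the optimal vector taken from the max-min example of Fig. 3):

```python
def fairness_index(allocated, optimal):
    """f = (sum x_i)^2 / (n * sum x_i^2) on normalized allocations
    x_i = allocated_i / optimal_i; f = 1 when every source gets
    exactly its optimal share, and decreases as shares get skewed."""
    x = [a / o for a, o in zip(allocated, optimal)]
    n = len(x)
    return sum(x) ** 2 / (n * sum(v * v for v in x))

optimal = [50, 50, 50, 100]
print(fairness_index([50, 50, 50, 100], optimal))  # 1.0
print(fairness_index([90, 30, 30, 100], optimal))  # about 0.81
```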

Since the allocations xi usually vary with time, the fairness can be plotted as a function of time. Alternatively, throughputs over a given interval can be used to compute overall fairness.

5.4. Robustness

The scheme should be insensitive to minor deviations. For example, slight mistuning of parameters or loss of control messages should not bring the network down. It should be possible to isolate misbehaving users and protect other users from them.

5.5. Implementability

The scheme should not dictate a particular switch architecture. As discussed later in Section 9, this turned out to be an important point in the final selection, since many schemes were found not to work with FIFO scheduling.

5.6. Simulation configurations

A number of network configurations were also agreed upon to compare various proposals. Most of these were straightforward serial connections of


    Fig. 6. Parking lot configuration

switches. The most popular one is the so-called Parking Lot configuration for studying fairness. The configuration and its name are derived from theater parking lots, which consist of several parking areas connected via a single exit path as shown in Fig. 5. At the end of the show, congestion occurs as cars exiting from each parking area try to join the main exit stream.

For computer networks, an n-stage parking lot configuration consists of n switches connected in a series. There are n VCs. The first VC starts from the first switch and goes to the end. Each remaining ith VC starts at the (i - 1)th switch. A 3-switch parking lot configuration is shown in Fig. 6.

5.7. Traffic patterns

Among the traffic patterns used in various simulations, the following three were most common:

(1) Persistent Sources: These sources, also known as greedy or infinite sources, always have cells to send. Thus, the network is always congested.

(2) Staggered Sources: The sources start at different times. This allows us to study the ramp-up (or ramp-down) time of the schemes.

(3) Bursty Sources: These sources oscillate between an active state and an idle state. During the active state, they generate a burst of cells [10]. This is a more realistic source model than a persistent source. With bursty sources, if the total load on the link is less than 100%, then throughput and fairness are not at issue; what is more important is the burst response time, the time from the first cell in to the last cell out. If the bursts arrive at a fixed rate, the traffic model is called open loop. A more realistic scenario is when the next burst arrives some time after the response to the previous burst has been received. In this latter case, the burst arrival is affected by network congestion and so the traffic model is called closed loop.

6. Congestion schemes

In this section, we briefly describe proposals that were presented but were discarded early at the ATM Forum. The two key proposals, the credit-based and the rate-based, which were discussed at length, are described in detail in the next two sections.

6.1. Fast resource management

This proposal from France Telecom [4] requires sources to send a resource management (RM) cell requesting the desired bandwidth before actually sending the cells. If a switch cannot grant the request, it simply drops the RM cell; the source times out and resends the request. If a switch can satisfy the request, it passes the RM cell on to the next switch. Finally, the destination returns the cell back to the source, which can then transmit the burst.

As described above, the burst has to wait for at least one round trip delay at the source even if the network is idle (as is often the case). To avoid this delay, an immediate transmission (IT) mode was also proposed in which the burst is transmitted immediately following the RM cell. If a switch cannot satisfy the request, it drops the cell and the burst and sends an indication to the source.

If cell loss, rather than bandwidth, is of concern, the resource request could contain the burst size. A switch would accept the request only if it had that many buffers available.

The fast resource management proposal was not accepted at the ATM Forum, primarily because it would either cause excessive delay during normal operation or excessive loss during congestion.

6.2. Delay-based rate control

This proposal made by Fujitsu [24] requires that the sources monitor the round trip delay by periodically sending resource management (RM) cells that contain a timestamp. The cells are returned by the destination. The source uses the timestamp to measure the round trip delay and to deduce the level of congestion. This approach, which is similar to that described in Jain [17], has the advantage that no explicit feedback is expected from the network and, therefore, it will work even if the path contains non-ATM networks.


Although the proposal was presented at the ATM Forum, it was not followed up and the precise details of how the delay would be used were not presented. Also, this method does not really require any standardization, since any source-destination pair can do this without involving the network.

6.3. Backward Explicit Congestion Notification (BECN)

This method, presented by N.E.T. [28-30], consists of switches monitoring their queue length and sending an RM cell back to the source if congested. The sources reduce their rates by half on receipt of the RM cell. If no BECN cells are received within a recovery period, the rate for that VC is doubled once each period until it reaches the peak rate. To achieve fairness, the source recovery period was made proportional to the VC's rate, so that the lower the transmission rate, the shorter the source recovery period.
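The source behaviour described above can be sketched as follows (the class name, the rate floor, and the exact timing model are our own assumptions; the proposal itself did not standardize these details):

```python
# Hypothetical sketch of a BECN source: halve the rate on each BECN
# cell; if a recovery period passes with no BECN, double the rate
# until the peak rate is reached. The recovery period shrinks with
# the current rate, so slow VCs recover sooner.
class BecnSource:
    def __init__(self, peak_rate, base_period):
        self.peak = peak_rate
        self.rate = peak_rate
        self.base = base_period
        self.last_becn = 0.0

    def recovery_period(self):
        # proportional to the VC's current rate (assumed scaling)
        return self.base * (self.rate / self.peak)

    def on_becn(self, now):
        self.rate = max(self.rate / 2, 1.0)  # halve, floor of 1 cell/s
        self.last_becn = now

    def on_tick(self, now):
        if now - self.last_becn >= self.recovery_period():
            self.rate = min(self.rate * 2, self.peak)
            self.last_becn = now

src = BecnSource(peak_rate=10000.0, base_period=1.0)
src.on_becn(now=0.0)
print(src.rate)        # 5000.0 after one BECN
src.on_tick(now=0.5)   # recovery period at 5000 cells/s is 0.5 s
print(src.rate)        # back to 10000.0
```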

This scheme was dropped because it was found to be unfair. The sources receiving BECNs were not always the ones causing the congestion [32].

6.4. Early packet discard

This method, presented by Sun Microsystems [35], is based on the observation that a packet consists of several cells. It is better to drop all cells of one packet than to randomly drop cells belonging to different packets. In AAL5, when the first bit of the payload type field in the cell header is 0, the third bit indicates end of message (EOM). When a switch's queues start getting full, it looks for the EOM marker and drops all future cells of the VC until the end of message marker is seen again.

It was pointed out [33] that the method may not be fair in the sense that the cell to arrive at a full buffer may not belong to the VC causing the congestion.

Note that this method does not require any inter-switch or source-switch communication and, therefore, it can be used without any standardization. Many switch vendors are implementing it.

6.5. Link window with end-to-end binary rate

This method, presented by Tzeng and Siu [45], consisted of combining good features of the credit-based

and rate-based proposals being discussed at the time. It consists of using window flow control on every link and binary (EFCI-based) end-to-end rate control. The window control is per-link (and not per-VC as in the credit-based scheme). It is, therefore, scalable in terms of the number of VCs and guarantees zero cell loss. Unfortunately, neither the credit-based nor the rate-based camp found it acceptable, since it contained elements from the opposite camp.

6.6. Fair queueing with rate and buffer feedback

This proposal from Xerox and CISCO [27] consists of sources periodically sending RM cells to determine the bandwidth and buffer usage at their bottlenecks. The switches compute the fair share of each VC. The minimum of the share at this switch and that from previous switches is placed in the RM cell. The switches also monitor each VC's queue length. The maximum of the queue length at this switch and those from the previous switches is placed in the same RM cell. Each switch implements fair queueing, which consists of maintaining a separate queue for each VC and computing the time at which a cell would finish transmission if the queues were served round-robin one bit at a time. The cells are scheduled to transmit in this computed time order.

The fair share of a VC is determined as the inverse of the interval between the cell arrival and its transmission. The interval reflects the number of other VCs that are active. Since the number, and hence the interval, is random, it was recommended that the average of several observed intervals be used.

This scheme requires per-VC (fair) queueing in the switches, which was considered rather complex.
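The bit-by-bit round-robin scheduling described above can be approximated with per-VC finish tags, in the style of self-clocked fair queueing (a simplified sketch with our own names; fixed-size ATM cells let each cell count as one service unit, so this is not the exact scheduler of [27]):

```python
import heapq

class FairQueue:
    """Per-VC queues served in order of virtual finish tags."""
    def __init__(self):
        self.vtime = 0.0   # virtual time reached by the server
        self.finish = {}   # last finish tag handed out per VC
        self.heap = []     # (finish_tag, seq, vc, cell)
        self.seq = 0       # tie-breaker preserving arrival order

    def enqueue(self, vc, cell):
        # a cell finishes one round-robin "round" after the later of
        # the server's current virtual time and the VC's last tag
        tag = max(self.vtime, self.finish.get(vc, 0.0)) + 1.0
        self.finish[vc] = tag
        heapq.heappush(self.heap, (tag, self.seq, vc, cell))
        self.seq += 1

    def dequeue(self):
        tag, _, vc, cell = heapq.heappop(self.heap)
        self.vtime = tag   # self-clocking: advance to the served tag
        return vc, cell

fq = FairQueue()
for vc, cell in [("A", 1), ("A", 2), ("A", 3), ("B", 1)]:
    fq.enqueue(vc, cell)
print([fq.dequeue()[0] for _ in range(4)])  # ['A', 'B', 'A', 'A']
```

Even though VC A enqueued three cells before VC B's first cell arrived, B's cell is served second: its finish tag reflects only one round of service, which is what gives each backlogged VC an equal share of the link.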

7. Credit-based approach

This was one of the two leading approaches and also the first one to be proposed, analyzed, and implemented. Originally proposed by Professor H.T. Kung, it was supported by Digital, BNR, FORE, Ascom-Timeplex, SMC, Brooktree, and Mitsubishi [26,41]. The approach consists of per-link, per-VC, window flow control. Each link consists of a sender node (which can be a source end system or a switch) and a receiver node (which can be a switch or a destination

  • 8/6/2019 Congestion Control and Traffic Management in ATM Networks Recent Advances and a Survey

    9/16

    R. JainlComputer Networks and ISDN Systems 28 (1996) 1723-I 738 1731

    end system). Each node maintains a separate queuefor each VC. The receiver monitors queue lengths ofeach VC and determines the number of cells that thesender can transmit on that VC. This number is calledcredit. The sender transmits only as many cells asallowed by the credit.

If there is only one active VC, the credit must be large enough to allow the whole link to be full at all times. In other words:

Credit ≥ Link Cell Rate × Link Round-Trip Propagation Delay.

The link cell rate can be computed by dividing the link bandwidth in Mbps by the cell size in bits.
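As a worked example of this sizing rule (the numbers are hypothetical: a 155 Mbps link, 53-byte cells, and a 10 ms round-trip delay):

```python
# Worked example of the credit sizing rule with hypothetical numbers:
# a 155 Mbps link, 53-byte (424-bit) cells, 10 ms round-trip delay.
link_bandwidth_bps = 155_000_000
cell_size_bits = 53 * 8                 # 424 bits per ATM cell
round_trip_s = 0.010

link_cell_rate = link_bandwidth_bps / cell_size_bits    # about 365,566 cells/s
min_credit = link_cell_rate * round_trip_s              # about 3,656 cells
```

So this hypothetical VC would need roughly 3,656 cells of credit (and buffers) to keep the link full by itself.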

The scheme as described so far is called the Flow Controlled Virtual Circuit (FCVC) scheme. There are two problems with this initial static version. First, if credits are lost, the sender will not know it. Second, each VC needs to reserve an entire round trip's worth of buffers even though the link is shared by many VCs. These problems were solved by introducing a credit resynchronization algorithm and an adaptive version of the scheme.

The credit resynchronization algorithm consists of both the sender and receiver maintaining counts of cells sent and received for each VC and periodically exchanging these counts. The difference between the cells sent by the sender and those received by the receiver represents the number of cells lost on the link. The receiver reissues that many additional credits for that VC.
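The counting idea behind credit resynchronization can be sketched as follows; the class and method names are ours, not from the FCVC proposal:

```python
# Sketch of credit resynchronization bookkeeping for one VC.
# Class and method names are illustrative, not from the FCVC proposal.
class Sender:
    def __init__(self, credit):
        self.sent = 0
        self.credit = credit

    def transmit(self, n, lost=0):
        n = min(n, self.credit)       # never exceed the granted credit
        self.sent += n
        self.credit -= n
        return n - lost               # cells that actually arrive

class Receiver:
    def __init__(self):
        self.received = 0

    def deliver(self, n):
        self.received += n

    def resynchronize(self, sender_sent):
        # Cells lost on the link = sent - received;
        # the receiver reissues that many credits for this VC.
        return sender_sent - self.received
```

For example, if 10 cells are sent and 2 are lost, the periodic count exchange reveals a difference of 2, and 2 credits are reissued.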

The adaptive FCVC algorithm [25] consists of giving each VC only a fraction of the round-trip delay's worth of buffer allocation. The fraction depends upon the rate at which the VC uses the credit. For highly active VCs the fraction is larger, while for less active VCs the fraction is smaller. Inactive VCs get a small fixed credit. If a VC does not use its credits, its observed usage rate over a period is low and it gets a smaller buffer allocation (and hence fewer credits) in the next cycle. The adaptive FCVC reduces the buffer requirements considerably but also introduces a ramp-up time. If a VC becomes active, it may take some time before it can use the full capacity of the link, even if there are no other users.
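A minimal sketch of the adaptive allocation idea, assuming a simple proportional-to-usage rule (the smoothing details of the actual algorithm [25] are omitted, and the minimum credit for inactive VCs is an illustrative value):

```python
# Sketch of adaptive credit allocation: each VC's share of the round-trip
# buffer pool is proportional to its observed usage rate. The min_credit
# given to inactive VCs is an illustrative assumption.
def allocate_credits(observed_rates, rtt_cells, min_credit=2):
    """observed_rates: per-VC cell rates measured over the last period.
    rtt_cells: buffers needed to cover one round trip at full link rate."""
    total = sum(observed_rates.values())
    credits = {}
    for vc, rate in observed_rates.items():
        if total == 0 or rate == 0:
            credits[vc] = min_credit            # inactive: small fixed credit
        else:
            credits[vc] = max(min_credit, int(rate / total * rtt_cells))
    return credits
```

A VC that carried 90% of the recent traffic would thus receive 90% of the round-trip buffer pool in the next cycle, while an idle VC keeps only the small fixed credit.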

8. Rate-based approach

This approach, which was eventually adopted as the standard, was proposed originally by Mike Hluchyj and was extensively modified later by representatives from 22 different companies [7].

The original proposal consisted of a rate-based version of the DECbit scheme [18], which consists of end-to-end control using a single-bit feedback from the network. In the proposal, the switches monitor their queue lengths and, if congested, set EFCI in the cell headers. The destination monitors these indications for a periodic interval and sends an RM cell back to the source. The sources use an additive increase and multiplicative decrease algorithm to adjust their rates.

This particular algorithm uses a negative polarity of feedback, in the sense that RM cells are sent only to decrease the rate; no RM cells are required to increase the rate. A positive polarity, on the other hand, would require sending RM cells for increase but not for decrease. If RM cells are sent for both increase and decrease, the algorithm is called bipolar.

The problem with negative polarity is that if the RM cells are lost due to heavy congestion on the reverse path, the sources will keep increasing their load on the forward path and eventually overload it.

This problem was fixed in the next version by using positive polarity. The sources set EFCI on every cell except the nth cell. The destination sends an increase RM cell to the source if it receives any cells with the EFCI off. The sources keep decreasing their rate until they receive a positive feedback. Since the sources decrease their rate in proportion to the current rate, this scheme was called the proportional rate control algorithm (PRCA).

PRCA was found to have a fairness problem. Given the same level of congestion at all switches, the VCs travelling more hops have a higher probability of having EFCI set than those travelling a smaller number of hops. If p is the probability of EFCI being set on one hop, then the probability of it being set for an n-hop VC is 1 - (1 - p)^n, which is approximately np for small p.
Thus, long-path VCs have fewer opportunities to increase their rates and are beaten down more often than short-path VCs. This was called the beat-down problem [3].

One solution to the beat-down problem is selective feedback [3] or intelligent marking [1], in which a congested switch takes into account the current rate of the VC in addition to its congestion level in deciding whether to set the EFCI in the cell. The switch computes a fair share and, if congested, sets EFCI bits only in cells belonging to those VCs whose rates are above this fair share. The VCs with rates below the fair share are not affected.

8.1. The MIT scheme

In July 1994, we [6] argued that binary feedback was too slow for rate-based control in high-speed networks and that an explicit rate indication would not only be faster but would offer more flexibility to switch designers.

The single-bit binary feedback can only tell the source whether it should go up or down. It was designed in 1986 for connectionless networks in which the intermediate nodes had no knowledge of flows or their demands. ATM networks are connection oriented: the switches know exactly who is using the resources, and the flow paths are rather static. This increased information is not used by the binary feedback scheme.

Secondly, and more importantly, the binary feedback schemes were designed for window-based controls and are too slow for rate-based controls. With window-based control, a slight difference between the current window and the optimal window shows up as a slight increase in queue length. With rate-based control, on the other hand, a slight difference between the current rate and the optimal rate shows up as a continuously increasing queue length [15,16]. The reaction times have to be fast. We can no longer afford the several round trips that binary feedback requires to settle to the optimal operation. Explicit rate feedback can get the source to the optimal operating point within a few round trips.

The explicit rate schemes have several additional advantages. First, policing is straightforward: the entry switches can monitor the returning RM cells and use the rate directly in their policing algorithm. Second, with a fast convergence time, the system comes to the optimal operating point quickly, so the initial rate has less impact. Third, the schemes are robust against errors in, or loss of, RM cells; the next correct RM cell will bring the system to the correct operating point.

We substantiated our arguments with simulation results for an explicit rate scheme designed by Anna Charny during her master's thesis work at the Massachusetts Institute of Technology (MIT) [5]. The MIT scheme consists of each source sending an RM cell every nth data cell. The RM cell contains the VC's current cell rate (CCR) and a desired rate. The switches monitor all VC rates and compute a fair share. Any VC whose desired rate is less than the fair share is granted the desired rate. If a VC's desired rate is more than the fair share, the desired rate field is reduced to the fair share and a reduced bit is set in the RM cell. The destination returns the RM cell to the source, which then adjusts its rate to that indicated in the RM cell. If the reduced bit is clear, the source can demand a higher desired rate in the next RM cell. If the bit is set, the source uses the current rate as the desired rate in the next RM cell.

The fair share is computed using an iterative procedure as follows. Initially, the fair share is set at the link bandwidth divided by the number of active VCs. All VCs whose rates are less than the fair share are called underloading VCs. If the number of underloading VCs increases at any iteration, the fair share is recomputed as:

Fair Share = (Link Bandwidth - Σ Bandwidth of Underloading VCs) / (Number of VCs - Number of Underloading VCs)

The iteration is then repeated until the number of underloading VCs and the fair share do not change. Charny [5] has shown that two iterations are sufficient for this procedure to converge.
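The iterative fair-share procedure can be sketched directly from the steps above (the function name is ours; rates are in the same units as the link bandwidth):

```python
# Sketch of the MIT scheme's iterative fair-share computation over the
# VCs' desired rates (a max-min allocation; function name is illustrative).
def mit_fair_share(link_bandwidth, desired_rates):
    n = len(desired_rates)
    fair = link_bandwidth / n               # initial estimate
    prev_underloading = -1
    while True:
        under = [r for r in desired_rates if r < fair]
        if len(under) == prev_underloading:
            break                           # membership unchanged: converged
        prev_underloading = len(under)
        if len(under) == n:
            break                           # every VC satisfied below fair share
        # Redistribute leftover bandwidth among the remaining VCs.
        fair = (link_bandwidth - sum(under)) / (n - len(under))
    return fair
```

For example, with a link of 155 units and desired rates of 10, 200, and 300, the first iteration grants the 10-unit VC its demand and the remaining 145 units are split equally, giving a fair share of 72.5.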

Charny also showed that the MIT scheme achieves max-min optimality in 4k round trips, where k is the number of bottlenecks.

This proposal was well received, except that the computation of the fair share requires order n operations, where n is the number of VCs. The search for an O(1) scheme led to the EPRCA algorithm discussed next.

8.2. Enhanced PRCA (EPRCA)

The merger of PRCA with the explicit rate scheme led to the Enhanced PRCA (EPRCA) scheme at the end of the July 1994 ATM Forum meeting [36,38]. In EPRCA, the sources send data cells with EFCI set to 0. After every n data cells, they send an RM cell. The RM cells contain the desired explicit rate (ER), the current cell rate (CCR), and a congestion indication (CI) bit. The sources initialize the ER field to their peak cell rate (PCR) and set the CI bit to zero.

The switches compute a fair share and reduce the ER field in the returning RM cells to the fair share if necessary. Using exponentially weighted averaging, a mean allowed cell rate (MACR) is computed, and the fair share is set at a fraction of this average:

MACR = (1 - α) × MACR + α × CCR,
Fair Share = SWDPF × MACR.

Here, α is the exponential averaging factor and SWDPF is a multiplier (called the switch down pressure factor) set close to but below 1. The suggested values of α and SWDPF are 1/16 and 7/8, respectively.

The destinations monitor the EFCI bits in data cells. If the last seen data cell had the EFCI bit set, they mark the CI bit in the RM cell. In addition to setting the explicit rate, the switches can also set the CI bit in the returning RM cells if their queue length is more than a certain threshold.

The sources decrease their rates continuously after every cell:

ACR = ACR × RDF.

Here, RDF is the reduction factor. When a source receives the returned RM cell, it increases its rate by an amount AIR if permitted:

If CI = 0 Then New ACR = Min(ACR + AIR, ER, PCR).

If the CI bit is set, the ACR is not changed. Notice that EPRCA allows both binary-feedback switches and explicit feedback switches on the path.

The main problem with EPRCA as described here is the switch congestion detection algorithm, which is based on a queue length threshold. If the queue length exceeds a certain threshold, the switch is said to be congested; if it exceeds another, higher threshold, it is said to be very highly congested. This method of congestion detection was shown to result in unfairness: sources that start up late were found to get lower throughput than those that start early. The problem was fixed by changing to the queue growth rate as the load indicator. The change in the queue length is noted after processing, say, K cells, and overload is indicated if the queue length increases [44,43].
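The EPRCA computations above can be collected into a small sketch; the constants use the suggested values of 1/16 and 7/8, while the function names and argument layout are illustrative:

```python
# Sketch of the EPRCA rate computations; constants follow the suggested
# values in the text, other names are illustrative.
ALPHA = 1 / 16          # exponential averaging factor (alpha)
SWDPF = 7 / 8           # switch down pressure factor

def switch_update(macr, ccr):
    """Update the mean allowed cell rate from an RM cell's CCR."""
    macr = (1 - ALPHA) * macr + ALPHA * ccr
    fair_share = SWDPF * macr
    return macr, fair_share

def source_update(acr, ci, er, pcr, air, rdf):
    """Continuous multiplicative decrease; additive increase is applied
    only when a returned RM cell carries CI = 0."""
    acr = acr * rdf                         # decrease after every cell
    if ci == 0:
        acr = min(acr + air, er, pcr)       # capped additive increase
    return acr
```

For instance, an RM cell carrying CCR = 200 against a MACR of 100 raises the MACR to 106.25 and sets the fair share at 92.97 (7/8 of the average).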

8.3. OSU time-based congestion avoidance

Jain, Kalyanaraman, and Viswanathan at the Ohio State University (OSU) have developed a series of explicit rate congestion avoidance schemes. The first scheme [19,20], called the OSU scheme, consists of switches measuring their input rate over a fixed averaging interval and comparing it with their target rate to compute the current load factor z:

Load Factor z = Input Rate / Target Rate

The target rate is set slightly below the link bandwidth, say at 85-95% of it. Unless the load factor is close to 1, all VCs are asked to change (divide) their load by this factor z. For example, if the load factor is 0.5, all VCs are asked to divide their rate by a factor of 0.5, that is, double their rates. On the other hand, if the load factor is 2, all VCs would be asked to halve their rates.

Note that no selective feedback is given when the switch is either highly overloaded or highly underloaded. However, if the load factor is close to one, between 1 - Δ and 1 + Δ for a small Δ, the switch gives different feedback to underloading and overloading sources. A fair share is computed as follows:

Fair Share = Target Rate / Number of Active Sources

All sources whose rate is more than the fair share are asked to divide their rates by z/(1 - Δ), while those below the fair share are asked to divide their rates by z/(1 + Δ). This algorithm, called the Target Utilization Band (TUB) algorithm, was proven to lead to fairness [19].

The OSU scheme has three distinguishing features. First, it is a congestion avoidance scheme: it gives high throughput and low delay. By keeping the target rate slightly below the capacity, the algorithm ensures that the queues are very small, typically close to 1, resulting in low delay. Second, the switches have very few parameters compared to EPRCA, and these are easy to set. Third, the time to reach the steady state is very small. The sources reach their final operating point 10 to 20 times faster than with EPRCA.
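The TUB feedback can be sketched as follows; the function name and default Δ are ours. Sources above the fair share receive the larger divisor z/(1 - Δ), moving them down toward the fair share, while underloading sources divide by the smaller z/(1 + Δ):

```python
# Sketch of the OSU Target Utilization Band (TUB) feedback.
# Function name, argument layout, and the default delta are illustrative.
def osu_feedback(input_rate, target_rate, vc_rate, n_active, delta=0.1):
    z = input_rate / target_rate            # measured load factor
    fair_share = target_rate / n_active
    if z > 1 + delta or z < 1 - delta:
        divisor = z                         # far from 1: uniform feedback
    elif vc_rate > fair_share:
        divisor = z / (1 - delta)           # larger divisor: push down
    else:
        divisor = z / (1 + delta)           # smaller divisor: let it rise
    return vc_rate / divisor                # new rate for this VC
```

At twice the target load, every VC simply halves its rate; inside the band, a VC above the fair share is nudged down while one below it is nudged up.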

In the original OSU scheme, the sources were required to send RM cells periodically at a fixed time interval. This meant that the RM cell overhead per source was fixed and increased as the number of sources increased. This was found to be unacceptable, leading to the count-based scheme described next.

In the count-based scheme [21], the sources send RM cells after every n data cells, as in EPRCA. The switch rate adjustment algorithm is changed to encourage a quick rise: sources below the fair share are asked to come up to the fair share regardless of the load level, while those above the fair share are asked to adjust their rates by the load factor. This allows the scheme to keep the three distinguishing features while making the overhead independent of the number of VCs. Newer versions of the OSU scheme, named ERICA (Explicit Rate Indication for Congestion Avoidance) and ERICA+, are count-based [22,23].

8.4. Congestion Avoidance using Proportional Control (CAPC)

Andy Barnhart from Hughes Systems has proposed a scheme called Congestion Avoidance using Proportional Control (CAPC) [2]. In this scheme, as in the OSU scheme, the switches set a target utilization slightly below 1. This helps keep the queue length small. The switches measure the input rate and the load factor z, as in the OSU scheme, and use it to update the fair share.

During underload (z < 1), the fair share is increased as follows:

Fair Share = Fair Share × Min(ERU, 1 + (1 - z) × Rup).

Here, Rup is a slope parameter in the range 0.025 to 0.1, and ERU is the maximum increase allowed, which was set to 1.5.

During overload (z > 1), the fair share is decreased as follows:

Fair Share = Fair Share × Max(ERF, 1 - (z - 1) × Rdn).

Here, Rdn is a slope parameter in the range 0.2 to 0.8, and ERF is the minimum decrease required, which was set to 0.5.

The fair share is the maximum rate that the switch will grant to any VC. In addition to the load factor, the scheme also uses a queue threshold. Whenever the queue length is over this threshold, a congestion indication (CI) bit is set in all RM cells. This prevents all sources from increasing their rate and allows the queues to drain out.

The distinguishing feature of CAPC is oscillation-free steady-state performance. The frequency of oscillations is a function of 1 - z, where z is the load factor. In steady state, z = 1, the frequency is zero, that is, the period of oscillations is infinite. This scheme is still under development.

It must be pointed out that the final specification does not mandate any particular switch algorithm. EPRCA, ERICA, CAPC, and a few other algorithms are included in the Appendix as examples of possible switch algorithms; each manufacturer is free to use its own algorithm.

8.5. Virtual source and destination

Fig. 7. Virtual source/destination.

One objection to end-to-end rate control is that the round-trip delay can be very large. This problem is fixed by segmenting the network into smaller pieces and letting the switches act as virtual sources and/or virtual destinations. Fig. 7 shows an example [8]. Switch A in the middle segment acts as a virtual destination and returns all RM cells received from the source as if it were the destination. Switch B in the same segment acts as a virtual source and generates RM cells as if it were a source.

Segmenting the network using virtual sources/destinations reduces the size of the feedback loops. Also, the intermediate segments can use any proprietary congestion control scheme. This allows public telecommunications carriers to follow the standard interface only at entry/exit switches. More importantly, virtual sources/destinations provide a more robust interface to a public network, in the sense that the resources inside the network do not have to rely on user compliance. Misbehaving users will be isolated in the first control loop. The users here include private networks with switches that may or may not be compliant. Notice that the virtual sources and destinations need to maintain per-VC queueing and may, therefore, be quite expensive.

There is no limit on the number of segments that can be created. In the extreme case, every switch could act as a virtual source/destination, and one would get hop-by-hop rate control as shown in Fig. 8.

Fig. 8. Hop-by-hop rate control.

8.6. Multicast VCs

Fig. 9. Explicit rate control for multicast VCs.

The explicit rate approach can be extended to point-to-multipoint connections. As shown in Fig. 9, the forward RM cells are copied to each branch. In the reverse direction, the returning RM cell information is combined so that the minimum of the rates allowed by the branches is fed back towards the root. In one such scheme, proposed by Roberts [39], the information from the returning RM cells is kept at the branching node and is returned whenever the next forward RM cell is received.
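The consolidation at a branch point can be sketched as follows. Taking the minimum of the branch rates follows the text; combining CI with a logical OR (a maximum) is our added assumption for completeness:

```python
# Sketch of feedback consolidation at a multicast branch point.
# The minimum-ER rule follows the text; OR-ing the CI bits is an
# added assumption.
def merge_branch_feedback(branch_rms):
    """branch_rms: list of (er, ci) pairs from the branches' returning RM cells."""
    er = min(er for er, _ in branch_rms)    # tightest branch rate wins
    ci = max(ci for _, ci in branch_rms)    # any congested branch sets CI
    return er, ci
```

The root thus never sends faster than the slowest branch can accept.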

9. Credit vs rate debate

After a considerable debate [9,11,34,37], which lasted for over a year, the ATM Forum adopted the rate-based approach and rejected the credit-based approach. The debate was quite religious in the sense that believers in each approach had quite different goals in mind and were unwilling to compromise. To achieve their goals, they were willing to make trade-offs that were unacceptable to the other side. In this section, we summarize some of the key points that were raised during this debate.

(1) Per-VC Queueing: The credit-based approach requires switches to keep a separate queue for each VC (or VP in the case of virtual path scheduling). This applies even to inactive VCs. Per-VC queueing makes switch complexity proportional to the number of VCs, so the approach was considered not scalable to a large number of VCs. Given that some large switches will support millions of VCs, this would cause considerable complexity in the switches. This was the single biggest objection to the credit-based approach and the main reason for it not being adopted. The rate-based approach does not require per-VC queueing; it can work with or without it. The choice is left to the implementers.

(2) Zero Cell Loss: Under ideal conditions, the credit-based approach can guarantee zero cell loss regardless of traffic patterns, the number of connections, buffer sizes, number of nodes, range of link bandwidths, and range of propagation and queueing delays. Even under extreme overloads, the queue lengths cannot grow beyond the credits granted. The rate-based approach cannot guarantee zero cell loss: under extreme overloads, it is possible for queues to grow large, resulting in buffer overflow and cell loss. The rate-based camp considered the loss acceptable, arguing that with large buffers the probability of loss is small. They also argued that in reality there is always some loss due to errors and, therefore, the user has to worry about loss even if there is zero congestion loss.

(3) Ramp-Up Time: The static credit-based approach allows VCs to ramp up to the full rate very fast. In fact, any free capacity can be used immediately. Some rate-based schemes (and the adaptive credit-based approach) can take several round-trip delays to ramp up.

(4) Isolation and Misbehaving Users: A side benefit of per-VC queueing is that misbehaving users cannot disrupt the operation of well-behaving users. However, this is less true for the adaptive scheme than for the static credit scheme: in the adaptive scheme, a misbehaving user can get a higher share of buffers by increasing its rate. Note that isolation is attained by per-VC queueing and not so much by credits. Thus, if required, a rate-based switch can also achieve isolation by implementing per-VC queueing.

(5) Buffer Requirements: The buffer requirements for the credit-based schemes were found to be less than those for the rate-based scheme with binary feedback. However, this advantage disappeared when explicit rate schemes were added. In the static credit-based approach, the per-VC buffer requirement is proportional to the link delay, while in the rate-based approach, the total buffer requirement is proportional to the end-to-end delay. The adaptive credit scheme allows lower buffering requirements than the static scheme, but at the cost of increased ramp-up time.

Note that the queueing delays have to be added in both cases, since they delay the feedback and add to the reaction time.

(6) Delay Estimate: Setting the congestion control parameters in the credit-based approach requires knowledge of the link round-trip delay; at a minimum, the link length and speed must be known. This knowledge is not required for rate-based approaches (although it may be helpful).

(7) Switch Design Flexibility: The explicit rate schemes provide considerable flexibility to switches in deciding how to allocate their resources. Different switches can use different mechanisms and still interoperate in the same network. For example, some switches can opt for minimizing their queue length, others can optimize their throughput, while still others can optimize their profits. On the other hand, the credit-based approach dictated that each switch use per-VC queueing with round-robin service.

(8) Switch vs End-System Complexity: The credit-based approach introduces complexity in the switches but may have made the end-system's job a bit simpler. The proponents of the credit-based approach argued that their host network interface card (NIC) is much simpler, since they do not have to schedule each and every cell: as long as credits are available, the cells can be sent at the peak rate. The proponents of the rate-based approach countered that all NIC cards have to have schedulers for their CBR and VBR traffic, and using the same mechanism for ABR does not introduce too much complexity.

There were several other points that were raised, but all of them are minor compared to the per-VC queueing required by the credit-based approach. The majority of the ATM Forum participants were unwilling to accept any per-VC action as a requirement in the switch. This is why even an integrated proposal allowing vendors to choose either of the two approaches failed. The rate-based approach won by a vote of 7 to 104.

10. Summary

Congestion control is important in high-speed networks. Due to the larger bandwidth-distance product, the amount of data lost due to simultaneous arrivals of bursts from multiple sources can be larger. For the success of ATM, it is important that it provide good traffic management for both bursty and non-bursty sources.

Based on the type of traffic and the quality of service desired, ATM applications can use one of five service categories: CBR, rt-VBR, nrt-VBR, UBR, and ABR. Of these, ABR is expected to be the most commonly used service category. It allows ATM networks to control the rates at which delay-insensitive data sources may transmit. Thus, the link bandwidth not used by CBR and VBR applications can be fairly divided among ABR sources.

After one year of intense debate on ABR traffic management, two leading approaches emerged: credit-based and rate-based. The credit-based approach uses per-hop, per-VC window flow control. The rate-based approach uses end-to-end rate-based control. The rate-based approach was eventually selected, primarily because it does not require all switches to keep per-VC queueing and allows considerable flexibility in resource allocation.

Although the ATM Forum traffic management version 4.0 specification allows older EFCI switches with binary feedback for backward compatibility, the newer explicit rate switches will provide better performance and faster control. The ATM Forum has specified a number of rules that end-systems and switches follow to adjust the ABR traffic rate. The specification is expected to be finalized by February 1996, with products appearing in mid to late 1996.

Acknowledgement

Thanks to Dr. Shirish Sathaye of FORE Systems, Dr. David Hughes of StrataCom, and Rohit Goyal and Ram Viswanathan of OSU for useful feedback on an earlier version of this paper.


References

Throughout this section, AF-TM refers to ATM Forum Traffic Management working group contributions. These contributions are available only to Forum member organizations. All our contributions and papers are available on-line at http://www.cis.ohio-state.edu/~jain/

[1] A.W. Barnhart, Use of the extended PRCA with various switch mechanisms, AF-TM 94-0898, September 1994.
[2] A.W. Barnhart, Explicit rate performance evaluation, AF-TM 94-0983R1, October 1994.
[3] J. Bennett and G. Tom Des Jardins, Comments on the July PRCA rate control baseline, AF-TM 94-0682, July 1994.
[4] P.E. Boyer and D.P. Tranchier, A reservation principle with applications to the ATM traffic control, Comput. Networks ISDN Systems 24 (1992) 321-334.
[5] A. Charny, An algorithm for rate allocation in a cell-switching network with feedback, Tech. Rept. MIT TR-601, May 1994.
[6] A. Charny, D. Clark and R. Jain, Congestion control with explicit rate indication, AF-TM 94-0692, July 1994; updated version in: Proc. ICC '95, June 1995.
[7] M. Hluchyj et al., Closed-loop rate-based traffic management, AF-TM 94-0211R3, April 1994.
[8] M. Hluchyj et al., Closed-loop rate-based traffic management, AF-TM 94-0438R2, September 1994.
[9] D. Hughes and P. Daley, Limitations of credit based flow control, AF-TM 94-0776, September 1994.
[10] D. Hughes and P. Daley, More ABR simulation results, AF-TM 94-0777, September 1994.
[11] D. Hunt, S. Sathaye and K. Brinkerhoff, The realities of flow control for ABR service, AF-TM 94-0871, September 1994.
[12] J. Jaffe, Bottleneck flow control, IEEE Trans. Comm. 29 (7) (1981) 954-962.
[13] R. Jain, D. Chiu and W. Hawe, A quantitative measure of fairness and discrimination for resource allocation in shared systems, Tech. Rept. DEC TR-301, September 1984.
[14] R. Jain, The Art of Computer Systems Performance Analysis (Wiley, New York, 1991).
[15] R. Jain, Myths about congestion management in high speed networks, Internetworking: Research and Experience 3 (1992) 101-113.
[16] R. Jain, Congestion control in computer networks: Issues and trends, IEEE Network Mag. (May 1990) 24-30.
[17] R. Jain, A delay-based approach for congestion avoidance in interconnected heterogeneous computer networks, Comput. Commun. Rev. 19 (5) (1989) 56-71.
[18] R. Jain, K.K. Ramakrishnan and D.M. Chiu, Congestion avoidance in computer networks with a connectionless network layer, Tech. Rept. DEC-TR-506, Digital Equipment Corporation, August 1987; also in: C. Partridge, ed., Innovations in Internetworking (Artech House, Norwood, MA, 1988) 140-156.
[19] R. Jain, S. Kalyanaraman and R. Viswanathan, The OSU scheme for congestion avoidance using explicit rate indication, AF-TM 94-0883, September 1994.
[20] R. Jain, S. Kalyanaraman and R. Viswanathan, The EPRCA+ scheme, AF-TM 94-0988, October 1994.
[21] R. Jain, S. Kalyanaraman and R. Viswanathan, The transient performance: EPRCA vs EPRCA++, AF-TM 94-1173, November 1994.
[22] R. Jain, S. Kalyanaraman and R. Viswanathan, A sample switch algorithm, AF-TM 95-0178R1, February 1995.
[23] R. Jain, S. Kalyanaraman, R. Goyal, S. Fahmy and F. Lu, ERICA+: Extensions to the ERICA switch algorithm, AF-TM 95-1145R1, October 1995.
[24] D. Kataria, Comments on rate-based proposal, AF-TM 94-0384, May 1994.
[25] H.T. Kung et al., Adaptive credit allocation for flow-controlled VCs, AF-TM 94-0282, March 1994.
[26] H.T. Kung et al., Flow controlled virtual connections proposal for ATM traffic management, AF-TM 94-0632R2, September 1994.
[27] B. Lyles and A. Lin, Definition and preliminary simulation of a rate-based congestion control mechanism with explicit feedback of bottleneck rates, AF-TM 94-0708, July 1994.
[28] P. Newman and G. Marshall, BECN congestion control, AF-TM 94-789R1, July 1993.
[29] P. Newman and G. Marshall, Update on BECN congestion control, AF-TM 94-855R1, September 1993.
[30] P. Newman, Traffic management for ATM local area networks, IEEE Commun. Mag. (August 1994) 44-50.
[31] K.K. Ramakrishnan, D.M. Chiu and R. Jain, Congestion avoidance in computer networks with a connectionless network layer, Part IV: A selective binary feedback scheme for general topologies, Tech. Rept. DEC-TR-510, Digital Equipment Corporation, August 1987.
[32] K.K. Ramakrishnan, Issues with backward explicit congestion notification based congestion control, 93-870, September 1993.
[33] K.K. Ramakrishnan and J. Zavgren, Preliminary simulation results of hop-by-hop/VC flow control and early packet discard, AF-TM 94-0231, March 1994.
[34] K.K. Ramakrishnan and P. Newman, Credit where credit is due, AF-TM 94-0916, September 1994.
[35] A. Romanov, A performance enhancement for packetized ABR and VBR+ data, AF-TM 94-0295, March 1994.
[36] L. Roberts, Enhanced PRCA (Proportional Rate-Control Algorithm), AF-TM 94-0735R1, August 1994.
[37] L. Roberts, The benefits of rate-based flow control for ABR service, AF-TM 94-0796, September 1994.
[38] L. Roberts et al., New pseudocode for explicit rate plus EFCI support, AF-TM 94-0974, October 1994.
[39] L. Roberts, Rate-based algorithm for point to multipoint ABR service, AF-TM 94-0772R1, November 1994.
[40] S. Sathaye, Draft ATM Forum traffic management specification, Version 4.0, AF-TM 95-0013, December 1994.
[41] J. Scott et al., Link by link, per VC credit based flow control, AF-TM 94-0168, March 1994.
[42] W. Stallings, ISDN and Broadband ISDN with Frame Relay and ATM (Prentice-Hall, Englewood Cliffs, NJ, 1995) 581.
[43] H. Tzeng and K. Siu, A class of proportional rate control schemes and simulation results, AF-TM 94-0888, September 1994.

[44] K. Siu and H. Tzeng, Adaptive proportional rate control for ABR service in ATM networks, Tech. Rept. 94-07-01, University of California, Irvine.
[45] H. Tzeng and K. Siu, Enhanced Credit-based Congestion Notification (ECCN) flow control for ATM networks, AF-TM 94-0450, May 1994.
[46] L. Wojnaroski, Baseline text for traffic management sub-working group, AF-TM 94-0394R4, September 1994.

Raj Jain is a Professor of Computer Information Sciences at The Ohio State University in Columbus, Ohio. Prior to joining the university in April 1994, he was a Senior Consulting Engineer at Digital Equipment Corporation, Littleton, Massachusetts, where he was involved in the design and analysis of many computer systems and networks including VAX Clusters, Ethernet, DECnet, OSI, FDDI and ATM networks. He is the author of The Art of Computer Systems Performance Analysis, published by Wiley, and FDDI Handbook: High-Speed Networking with Fiber and Other Media, published by Addison-Wesley. He is an IEEE Fellow, an ACM Fellow, an ACM Lecturer and an IEEE Distinguished Visitor. He is on the Editorial Boards of Computer Networks and ISDN Systems, Computer Communications (UK), Journal of High Speed Networking (USA) and Mobile Networks and Nomadic Applications (NOMAD). In the past, he was also on the Editorial Board of IEEE/ACM Transactions on Networking and is currently a member of the publication board of the IEEE Computer Society.