Ch04 - Core Network QoS



Content

Chapter 1: Network QoS
- Internet today
- QoS definition, QoS parameters
- Applications: Voice, Video and VPN

Chapter 2: Per-hop QoS
- Packet classification: ToS, Traffic Class, DSCP
- Policing, Marking, Shaping
- Queue management
- Scheduling
- Call Admission Control (CAC)


Content

Chapter 3: Edge-to-Edge Network Models
- Integrated Services model
- Differentiated Services model
- Edge-to-edge IP QoS
- QoS and end-to-end TCP performance

Chapter 4: QoS in Core Networks
- QoS in ATM networks
- QoS in MPLS networks

Chapter 5: QoS in Wireless Networks
- QoS in WLAN
- QoS in 3G cellular networks
- QoS in ad-hoc networks


Content

Chapter 6: Traffic Engineering
- M/M/1, M/M/c models
- M/G/1, M/D/1 models
- Queuing networks
- Application of queuing theory in network QoS

Chapter 7: Computer Simulation
- TCP performance evaluation
- Simulation of QoS in MPLS networks
- Simulation of QoS in ATM networks
- QoS evaluation of multimedia traffic in IP networks


    Rudimentary ATM Concepts

    Physical layer

    Signaling

Cell format

Connection types


    ATM Building Blocks

ATM signaling: UNI and NNI

Virtual connections: VCC, VP, and VC


    ATM Signaling

UNI = User-to-Network Interface
NNI = Network-to-Network Interface
Cell header content varies depending on who's talking to whom.

[Figure: a Token Ring LAN attaches to a private ATM network over a UNI; the private network connects to public ATM networks over a public UNI; switches interconnect over NNIs, and the public ATM networks interconnect over a B-ICI.]


    Virtual Path and Virtual Channels

An ATM physical link (e.g. E3, OC-12) contains multiple Virtual Paths (VPs); each VP contains multiple Virtual Channels (VCs). A Virtual Channel Connection (VCC) is a logical path between ATM end points.

Connection Identifier = VPI/VCI


    VP and VC Switching

[Figure: a VP switch forwards cells between ports by translating only the VPI; the VCIs inside each VP pass through unchanged. A VC switch terminates the VPs and translates both VPI and VCI values.]


    Virtual Channels & Virtual Paths

    This hop-by-hop forwarding is known as cell relay

[Figure: a Virtual Channel Connection (VCC) crosses UNI and NNI links through VP and VC switches; within the Virtual Path Connection (VPC) the VPI/VCI pair is rewritten hop by hop (the figure shows values such as VPI=1/VCI=1, VPI=2/VCI=44, VPI=26/VCI=44 and VPI=20/VCI=30).]


    Rudimentary ATM Concepts

    Physical layer

    Signaling

Cell format

Connection types


Creating Cells from Packets

A packet (destination address, source address, data, frame check) is segmented into cells, each carrying a header and a payload.

SAR = Segmentation and Reassembly:
- Segmentation happens at the source
- Reassembly happens at the destination
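A minimal sketch of the SAR idea in Python (not tied to a particular AAL type): the source cuts a packet into 48-byte cell payloads and the destination reassembles them. The padding and length handling shown here are simplifications; real AALs carry their own headers or trailers for this.

```python
CELL_PAYLOAD = 48

def segment(packet: bytes) -> list[bytes]:
    """Segmentation (source side): cut the packet into 48-byte payloads,
    zero-padding the last one so every cell payload is full size."""
    cells = []
    for i in range(0, len(packet), CELL_PAYLOAD):
        chunk = packet[i:i + CELL_PAYLOAD]
        cells.append(chunk.ljust(CELL_PAYLOAD, b"\x00"))
    return cells

def reassemble(cells: list[bytes], original_length: int) -> bytes:
    """Reassembly (destination side): concatenate payloads and strip padding.
    Real AALs carry the length (or an end-of-message marker) in-band."""
    return b"".join(cells)[:original_length]

packet = b"x" * 1500                  # e.g. a maximum-size Ethernet payload
cells = segment(packet)
assert reassemble(cells, len(packet)) == packet
print(len(cells), "cells")            # 32 cells for 1500 bytes
```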


ATM Cell Header

An ATM cell is 53 bytes long: a 5-byte header followed by a 48-byte payload.


ATM Cell Header Details

- GFC: Generic Flow Control (UNI cells only)
- VPI/VCI: identifies virtual paths and channels
- PTI: Payload Type Identifier, 3 bits (user/control data, congestion, last cell)
- CLP: Cell Loss Priority bit
- HEC: Header Error Check, an 8-bit CRC

ATM NNI cell: VPI (12 bits), VCI (16 bits), PTI, CLP, HEC, followed by the 48-byte payload.
ATM UNI cell: GFC (4 bits), VPI (8 bits), VCI (16 bits), PTI, CLP, HEC, followed by the 48-byte payload.
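A small Python sketch of the 5-byte UNI header layout above. The HEC calculation (a CRC-8 with polynomial x^8 + x^2 + x + 1, XORed with 0x55) follows the usual ATM definition and is included here only for illustration.

```python
def hec(first4: bytes) -> int:
    """Header Error Check: CRC-8 over the first four header bytes."""
    crc = 0
    for byte in first4:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ 0x07) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc ^ 0x55                      # coset used by ATM

def pack_uni_header(gfc: int, vpi: int, vci: int, pti: int, clp: int) -> bytes:
    # GFC(4) | VPI(8) | VCI(16) | PTI(3) | CLP(1) | HEC(8)
    bits = (gfc << 36) | (vpi << 28) | (vci << 12) | (pti << 9) | (clp << 8)
    head = bits.to_bytes(5, "big")
    return head[:4] + bytes([hec(head[:4])])

def unpack_uni_header(head: bytes) -> dict:
    bits = int.from_bytes(head, "big")
    return {"gfc": (bits >> 36) & 0xF, "vpi": (bits >> 28) & 0xFF,
            "vci": (bits >> 12) & 0xFFFF, "pti": (bits >> 9) & 0x7,
            "clp": (bits >> 8) & 0x1, "hec": bits & 0xFF}

header = pack_uni_header(gfc=0, vpi=1, vci=44, pti=0, clp=0)
print(unpack_uni_header(header))
```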


    Rudimentary ATM Concepts

    Physical layer

    Signaling

Cell format

Connection types


    ATM Connection Types

    PVC

    SVC

    Soft PVC


    Connection Types

Connectionless: packet routing
- Path 1 = S1, S2, S6, S8; Path 2 = S1, S4, S7, S8
- Data can take different paths and can arrive out of order

Connection-oriented: cell switching
- VC = S1, S4, S7, S8
- Data takes the same path and arrives in sequence

[Figure: switches S1-S8; in the connectionless case packets 1 and 2 take different paths between S1 and S8, while in the connection-oriented case all cells follow the VC through S1, S4, S7, S8.]


    Permanent Virtual Circuit (PVC)

The VPI/VCI tables in the network equipment are updated by the administrator.

[Figure: four ATM switches connect endpoints A, B, C and D; each switch holds an administrator-configured translation table mapping (input port, VPI/VCI) to (output port, VPI/VCI), and cells are relayed hop by hop according to these entries.]
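A minimal sketch, in Python, of the translation table behind a PVC as described above. The port and VPI/VCI values are illustrative (loosely following the first table row in the figure), not taken from a real configuration.

```python
# (in_port, (vpi, vci)) -> (out_port, (vpi, vci)), provisioned by hand for a PVC
translation_table = {
    (1, (0, 33)): (3, (0, 2)),
    (2, (0, 15)): (3, (0, 14)),
}

def switch_cell(in_port, vpi, vci, payload):
    """Relay one cell: look up the incoming identifier and rewrite it."""
    out_port, (new_vpi, new_vci) = translation_table[(in_port, (vpi, vci))]
    return out_port, new_vpi, new_vci, payload

print(switch_cell(1, 0, 33, b"..."))   # -> (3, 0, 2, b'...')
```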


    Switched Virtual Circuit (SVC)

    Dynamically set up connections via signaling

[Figure: endpoints A-D attached to an ATM network; UNI signaling at the edges and NNI signaling between the switches install the (port, VPI/VCI) translation table entries automatically when the connection is set up.]


    Switched Virtual Circuit (SVC)

    Dynamically tear down connections via signaling

[Figure: the same UNI and NNI signaling exchange removes the corresponding translation table entries when the connection is torn down.]


    Soft PVC

    PVC established manually across UNI and dynamically across NNI

[Figure: for a soft PVC between endpoints A-D, the UNI segments of the connection are configured manually while the NNI segments across the network are established by signaling.]


    ATM Reference model


    ATM Layer

Provides the VPI/VCI values in the header and ensures that cells stay in the correct order.

[Figure: protocol stack - the ATM Adaptation Layer (AAL) sits above the ATM layer, which sits above the physical layer; a VCC crosses UNI and NNI links through VP and VC switches, with the VPI/VCI rewritten at each hop.]


AAL

The AAL segments larger packets from Frame Relay, X.25, Ethernet, etc. into cells and reassembles them again. In addition it takes care of applications that need Constant Bit Rate (CBR) and Variable Bit Rate (VBR) service. The two sublayers within the AAL that perform these functions are the Convergence Sublayer (CS) and the Segmentation and Reassembly (SAR) sublayer. These are detailed in the adaptation layer header that sits between the ATM cell header and the payload data.

The CS provides the timing relationships between source and destination for CBR and VBR traffic, and it provides the correct mode for connection-oriented or connectionless services. The Common Part (CP) works with the SAR and provides management information, and the Service Specific (SS) sublayer is specific to the type of service.

The SAR examines the packets, determines the number of cells required for each packet and creates SAR-PDUs, which are the 48-byte payloads. The 5-byte header is then added to form the ATM cell.


ATM Adaptation Layer (AAL)

AAL = CS + SAR

[Figure: protocol stack - the AAL (Convergence Sublayer over the Segmentation and Reassembly sublayer) sits above the ATM layer and the physical layer; end systems such as a PBX attach at the AAL.]


    AAL2


    AAL3/4


    AAL5


    ATM Service Categories

Service criteria:
- Traffic descriptors
- QoS parameters

Service categories:
- Constant Bit Rate (CBR)
- Variable Bit Rate (VBR)
- Unspecified Bit Rate (UBR)
- Available Bit Rate (ABR)


ATM Service Criteria

The traffic contract agreed with the ATM network specifies:

Traffic descriptors:
- Peak cell rate
- Sustainable cell rate
- Maximum burst size
- Minimum cell rate

Quality of Service:
- Delay
- Cell loss


ATM Service Criteria - Traffic Descriptors

- Peak Cell Rate (PCR): maximum data rate a connection can handle without losing data
- Sustainable Cell Rate (SCR): average ATM cell throughput the application is permitted
- Maximum Burst Size (MBS): size of the maximum burst of contiguous cells that can be transmitted
- Minimum Cell Rate (MCR): minimum cell rate guaranteed to the connection


ATM Service Criteria - QoS (Delay)

- Maximum Cell Transfer Delay (MCTD): how long the network may take to transfer a cell from one endpoint to the other
- Cell Delay Variation Tolerance (CDVT): tolerated variation in the inter-arrival times between cells, a.k.a. jitter


    ATM Service Categories


    ATM Service Categories

Service criteria:
- Traffic parameters
- QoS parameters

Service categories:
- Constant Bit Rate (CBR)
- Variable Bit Rate (VBR)
- Unspecified Bit Rate (UBR)
- Available Bit Rate (ABR)


ATM Service Categories - Constant Bit Rate (CBR)

- Application: real-time voice and video
- Traffic descriptor: PCR (Peak Cell Rate)
- QoS: low tolerance for cell delay and cell loss


ATM Service Categories - Variable Bit Rate (VBR-RT / VBR-NRT)

- Application: packetized voice/video, SNA
- Traffic descriptors: PCR (Peak Cell Rate), SCR (Sustainable Cell Rate), MBS (Maximum Burst Size)
- QoS: low tolerance for cell loss and for cell delay in the real-time case; higher tolerance for cell delay in the non-real-time case


ATM Service Categories - Unspecified Bit Rate (UBR)

- Application: data transfer
- No guarantees: "send and pray"
- QoS: high tolerance for cell loss and cell delay


ATM Service Categories - Available Bit Rate (ABR)

- Application: LAN interconnect for data
- Traffic descriptors: PCR (Peak Cell Rate), MCR (Minimum Cell Rate)
- QoS: low tolerance for cell loss, high tolerance for cell delay
- Also uses congestion feedback mechanisms


Traffic Management

- Why traffic management?
- Traffic control techniques
- ABR congestion feedback


Why Traffic Management?

- Proactively combat congestion
- Provision for priority control
- Maintain well-behaved traffic


Why Traffic Management?

- Ethernet (1500 bytes) = 32 cells
- FDDI (4470 bytes) = 96 cells
- IP over ATM, RFC 1577 (9180 bytes) = 192 cells

Lose one cell and the rest of the packet is useless: 32 or more cells have to be retransmitted for one lost cell, and congestion collapse is the result. Cell loss is data's critical enemy.
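The cell counts above can be checked with a short calculation, assuming AAL5 encapsulation (payload plus pad plus an 8-byte trailer, cut into 48-byte cell payloads). The FDDI figure on the slide appears to include extra framing overhead, so it comes out slightly higher than this formula.

```python
import math

def aal5_cells(payload_bytes: int, trailer: int = 8) -> int:
    """Number of 48-byte cell payloads needed for an AAL5 frame."""
    return math.ceil((payload_bytes + trailer) / 48)

print(aal5_cells(1500))   # Ethernet MTU -> 32 cells
print(aal5_cells(9180))   # RFC 1577 MTU -> 192 cells
print(aal5_cells(4470))   # FDDI MTU     -> 94 cells (the slide quotes 96)
```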


Traffic Control Techniques

- Connection management: acceptance
- Traffic management: policing
- Traffic smoothing: shaping


Traffic Control Techniques - Connection Management

The traffic contract agreed with the ATM network specifies:
- Traffic parameters: peak cell rate, sustainable cell rate, burst tolerance, etc.
- Quality of Service: delay, cell loss


Traffic Control Techniques - Connection Admission Control (CAC)

The end system requests a guaranteed-QoS VC: X Mbps, Y delay, Z cell loss. CAC asks: can I support this reliably without jeopardizing the other contracts? The answer is either "No", or "Yes" and the network agrees to a traffic contract.
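A minimal sketch of the CAC decision, assuming the simplest possible policy: admit a new VC only if the sum of the requested peak rates stays within the link capacity. Real CAC also has to check the delay and loss targets against the existing mix of contracts; the class name and capacity below are illustrative.

```python
class LinkCAC:
    def __init__(self, capacity_mbps: float):
        self.capacity = capacity_mbps
        self.contracts = []                 # accepted peak rates (Mbps)

    def request_vc(self, peak_rate_mbps: float) -> bool:
        if sum(self.contracts) + peak_rate_mbps <= self.capacity:
            self.contracts.append(peak_rate_mbps)
            return True                     # "Yes, agree to a traffic contract"
        return False                        # "No"

cac = LinkCAC(capacity_mbps=155)            # e.g. an OC-3 link
print(cac.request_vc(100))                  # True  - admitted
print(cac.request_vc(100))                  # False - would jeopardize other contracts
```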


Traffic Control Techniques - Traffic Management

Usage Parameter Control (UPC), a.k.a. policing: when an application does not conform to its contract, the network decides the penalty - pass the cell, mark its CLP bit, or drop it.


Traffic Control Techniques - Traffic Management

CLP control: when congested, drop marked cells. At the public UNI, conformance is checked with the Generic Cell Rate Algorithm (GCRA); the UPC passes conforming cells and marks (CLP = 1) or drops non-conforming ones.
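A minimal sketch of the GCRA in its virtual-scheduling form, with an increment T (the nominal inter-cell spacing for the contracted rate) and a limit tau (the tolerance). Cells arriving earlier than the theoretical arrival time minus tau are non-conforming, and the UPC may then mark their CLP bit or drop them. The parameter values below are illustrative.

```python
class GCRA:
    def __init__(self, increment: float, limit: float):
        self.T = increment       # 1 / contracted cell rate
        self.tau = limit         # cell delay variation tolerance
        self.tat = 0.0           # theoretical arrival time of the next cell

    def conforms(self, arrival_time: float) -> bool:
        if arrival_time < self.tat - self.tau:
            return False         # too early: mark CLP = 1 or drop
        self.tat = max(arrival_time, self.tat) + self.T
        return True              # conforming: pass

policer = GCRA(increment=1.0, limit=0.5)
for t in [0.0, 1.0, 1.2, 1.3]:
    print(t, policer.conforms(t))   # True, True, False, False
```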


Traffic Control Techniques - Traffic Management

Tail Packet Discard (TPD): once one cell of a packet has been dropped, discard the remaining cells from that same (now useless) packet.


Traffic Control Techniques - Traffic Management

- Intelligent Tail Packet Discard: when a cell of a packet is lost at the switch's output buffer, the remaining cells of that packet (up to its EOM, end-of-message, cell) are also discarded.
- Early Packet Discard (EPD), a.k.a. UBR+: when the output queue exceeds the EPD threshold, the switch discards whole incoming packets rather than individual cells.


Traffic Control Techniques - Traffic Management

A switch with intelligent packet discard maximizes goodput compared with a switch without packet discard.


    Traffic Control Techniques


ABR Congestion Feedback

RM = Resource Management cells. Rate-based feedback mechanisms:
- EFCI marking
- Relative rate marking
- Explicit rate marking
- VS/VD


    Traffic Control Techniques


ABR Congestion Feedback

Relative rate marking: switches can set a congestion flag in backward RM cells. When congestion is experienced on the forward path, the backward RM cell tells the source to slow down.


ABR Congestion Feedback

Explicit rate marking: switches can tell the source at exactly what rate to transmit. When congestion is experienced, the backward RM cell carries "slow down by X amount" back to the source.


ABR Congestion Feedback

VS/VD (virtual source / virtual destination): breaks the feedback loop into separate segments, shortening the length of each feedback loop; each segment returns its own "congestion experienced, slow down" indication.


Buffers Are Your Friend

- Absorb traffic bursts from simultaneous connections
- Switches schedule traffic based on the priority of the traffic according to its QoS
- The switch must reallocate buffers as the traffic mix changes
- Effective buffering maximizes the throughput of usable cells as opposed to raw cells (a.k.a. goodput)


IP/ATM Model

- Mapping cells into SONET/SDH
- Uses a logical IP subnet (LIS) instead of a regular IP subnet
- Two models for IP/ATM: Classical IP/ATM (CIP) and LAN Emulation (LANE)
- Classical IP/ATM (CIP): the link driver has two concerns:
  - Address resolution: discovering the link-level next hop corresponding to a given IP next-hop address known to be on the link
  - Packet transport: actually establishing the link-level resources to transfer the packet to the indicated next hop
- AAL5 is used for encapsulation of IP packets:
  - For GS (Guaranteed Service), use CBR or rt-VBR
  - For CL (Controlled Load), use nrt-VBR or ABR with a minimum cell rate
  - For BE (Best Effort), use UBR or ABR


IP/ATM QoS

Controlling QoS in IP/ATM involves two aspects:
- Upgrading the IP node with a sufficient CQS (classify, queue, schedule) architecture to provide differentiated scheduling of traffic toward any next hop
- Defining rules for establishing and using parallel ATM VCs toward a given next hop


    Internet core network


    Router based core network

    Internet core network (cont)


Internet core network (1)

- Processing cannot meet bandwidth demands: bottleneck in software-based routers; the available router interfaces do not provide traffic aggregation
- Metric-based routing was no longer scalable: densely connected networks lead to inefficient use of network resources
- Destination-based routing tends to aggregate all traffic to the same destination, so it does not utilize all the links


    Switch based core network

    Internet core network (cont)


Internet core network (cont.)

Faster and simpler forwarding, better traffic aggregation:
- Fixed-size cells can be handled in hardware to speed up forwarding
- The connection-oriented forwarding algorithm improves performance: forwarding is based on short, fixed-length connection identifiers

But:
- With ASIC technology, IP packets can also be forwarded at high speed; ATM interfaces have even fallen behind the latest increases in optical rates (packet over SDH/SONET)
- Waste of bandwidth: 5 bytes of header per 48 bytes of payload
- Complex network management: a physical ATM switched infrastructure plus a logical IP network topology, each layer using its own addressing scheme and routing protocol


- The n-squared problem: adding or shutting down any router creates an enormous signaling load
- IGP stress: intra-domain routing is not conceived for a fully meshed topology; with a high number of routing peers, too much routing information has to be exchanged

Multi-Protocol Label Switching (MPLS) can offer solutions that combine the advantages of both of these worlds.

    MPLS NGN


    MPLS NGN (cont)


- Forwarding based on a label: speeds up processing at each node
- Forwarding mechanism: label (explicit routing) or IP header (hop-by-hop routing)
- Operates over any layer-2 technology: ATM, Ethernet, Frame Relay
- Allows for both traffic aggregation and disaggregation
- Supports VPNs: using a 64-bit VPN address (96 bits in total)
- Allows service providers to embed TE and traditional layer-2 QoS into the IP network: using the DSCP and processing queues based on packet priority
- Easy management and operation


Traditional IP Routing

- Choosing the next hop: Open Shortest Path First (OSPF) populates the routing table
- A route lookup based on the destination IP address finds the next router to which the packet has to be sent
- The layer-2 address is replaced
- Each router performs these steps


Distributing Routing Information (1)


[Figure: three routers forwarding a packet addressed to 125.50.33.85. Each router's table maps an address prefix (125.50, 145.40) to an outgoing path, and the packet is forwarded hop by hop based on a longest-prefix lookup of its destination address.]

    Disadvantages


- Header analysis is performed at each hop: increased demand on the routers
- Only the best available path is used: some links are congested while others are underutilized, causing degraded throughput, long delays and more losses
- No QoS, no service differentiation: not possible with connectionless protocols

Need for MPLS

- Rapid growth of the Internet
- New latency-dependent applications: Quality of Service (QoS), less time spent at the routers
- Traffic engineering: flexibility in routing packets
- Connection-oriented forwarding techniques combined with connectionless IP
- Utilizes the IP header information to maintain interoperability with IP-based networks
- Decides on the path of a packet before sending it

What is MPLS?

- Multi-Protocol: supports protocols other than IP
  - IPv4, IPv6, IPX, AppleTalk at the network layer
  - Ethernet, Token Ring, FDDI, ATM, Frame Relay, PPP at the link layer
- Label: a short, fixed-length identifier used to determine a route
  - Labels are added to the top of the IP packet
  - Labels are assigned when the packet enters the MPLS domain
- Switching: forwarding a packet
  - Packets are forwarded based on the label value, NOT on the basis of the IP header information

MPLS Background


Integration of layer 2 and layer 3:
- The simplified connection-oriented forwarding of layer 2
- The flexibility and scalability of layer-3 routing

MPLS does not replace IP; it supplements IP. Traffic can be marked, classified and explicitly routed, and QoS can be achieved through MPLS.

    IP/MPLS comparison


Routing decisions:
- IP routing is based on the destination IP address
- Label switching is based on labels

Full IP header analysis:
- IP routing performs it at each hop of the packet's path in the network
- Label switching performs it only at the ingress router

Support for unicast and multicast data:
- IP routing requires special multicast routing and forwarding algorithms
- Label switching requires only one forwarding algorithm


Forwarding Equivalence Class (FEC)

A group of packets that require the same forwarding treatment across the same path. Packets are grouped based on any of the following:
- Address prefix
- Host address
- Quality of Service (QoS)

The FEC is encoded as the label.


FEC example (cont'd)

Assume packets have the following destination addresses and QoS requirements:
- 124.48.45.20, qos = 1
- 143.67.25.77, qos = 1
- 143.67.84.22, qos = 3
- 124.48.66.90, qos = 4
- 143.67.12.01, qos = 3

FEC 1 (label a): 143.67.25.77
FEC 2 (label b): 124.48.45.20
FEC 3 (label c): 143.67.84.22, 143.67.12.01
FEC 4 (label d): 124.48.66.90
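A minimal sketch of this grouping in Python: packets that share a /16 prefix and a QoS requirement fall into the same FEC and therefore carry the same label. The prefixes and labels follow the example above; the 16-bit prefix match is an assumption made for the sketch.

```python
# (first two octets, qos) -> label: one label per FEC
fec_table = {
    (("143", "67"), 1): "a",
    (("124", "48"), 1): "b",
    (("143", "67"), 3): "c",
    (("124", "48"), 4): "d",
}

def classify(dst: str, qos: int) -> str:
    o1, o2, _, _ = dst.split(".")
    return fec_table.get(((o1, o2), qos), "default")

packets = [("124.48.45.20", 1), ("143.67.25.77", 1), ("143.67.84.22", 3),
           ("124.48.66.90", 4), ("143.67.12.01", 3)]
for dst, qos in packets:
    print(dst, "qos", qos, "-> FEC label", classify(dst, qos))
```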

    Example of a MPLS network



Label Switching Router (LSR)

A router/switch that supports MPLS:
- Can be a router
- Can be an ATM switch plus a label switch controller

Label swapping:
- Each LSR examines the label on top of the stack
- Uses the Label Information Base (LIB) to decide the outgoing path and the outgoing label
- Removes the old label and attaches the new label
- Forwards the packet on the predetermined path

Label Switched Path (LSP)

An LSP defines the path through the LSRs from the ingress to the egress router.
- The FEC is determined at the ingress LER
- LSPs are unidirectional
- An LSP might deviate from the IGP shortest path

LDP (Label Distribution Protocol)

    LDP


LDP - Advantages

- Explicit routing: set up an LSP between the ingress router and the egress router
- A label request is sent for each hop downstream; label mappings are returned upstream
- When errors occur, a router sends an alarm message to neighboring or operating routers so that the current LSP is redirected
- Uses fewer resources (compared with RSVP)

LDP - Disadvantages

- Slow error recovery
- Does not support dynamic re-optimization of traffic flows
- During transient periods, the efficiency of resource location can be influenced by routing traffic
- Requires a means to restore the LSP to the original routes once congestion has subsided
- FATE: uses a dynamic reroute mechanism

Shim Header

- A short, fixed-length identifier (32 bits), sent with each packet
- Labels are local between two routers; a packet can carry different labels when it enters from different routers
- One label for one FEC
- Decided by the downstream router: the LSR binds a label to an FEC and then informs the upstream LSR of the binding


Time To Live (TTL)

- The TTL value is decremented by 1 when the packet passes through an LSR
- If the TTL value reaches 0 before the destination, the packet is discarded
- Avoids loops that may exist because of misconfigurations
- Multicast scoping: limits the scope of a packet
- Supports the traceroute command

TTL (cont'd)

Shim header:
- Has an explicit TTL field
- Initially loaded from the TTL field of the IP header
- At the egress LER, the value of the TTL is copied back into the TTL field of the IP header

Data link layer header (e.g. VPI/VCI):
- Has no explicit TTL field
- The ingress LER estimates the LSP length and decrements the TTL count by the LSP length
- If the initial TTL count is less than the LSP length, the packet is discarded
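A minimal sketch of the shim-header TTL handling described above (the ingress LER copies the IP TTL into the label, LSRs decrement it, the egress LER copies it back). The three-hop LSP is illustrative.

```python
def ingress_push(ip_ttl: int) -> int:
    return ip_ttl                    # label TTL loaded from the IP header

def lsr_forward(label_ttl: int) -> int:
    label_ttl -= 1                   # decremented once per LSR
    if label_ttl == 0:
        raise ValueError("TTL expired inside the LSP: discard the packet")
    return label_ttl

def egress_pop(label_ttl: int) -> int:
    return label_ttl                 # copied back into the IP header TTL

ttl = ingress_push(ip_ttl=64)
for _ in range(3):                   # an LSP three LSRs long
    ttl = lsr_forward(ttl)
print("IP TTL restored at egress:", egress_pop(ttl))   # 61
```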


    Label stack (contd)


Label scope and uniqueness

Labels are local between two LSRs:
- Rd might give label L1 for FEC F and distribute it to Ru1
- At the same time, it might give a label L2 for FEC F and distribute it to Ru2
- L1 is not necessarily equal to L2

Can the same label be used for different FECs?
- Generally no, but there is no such specification; the LSR must have different label spaces to accommodate both
- The shim header specification requires different label spaces for unicast and multicast packets

Invalid labels

What should be done if an LSR receives an invalid label? Should it be forwarded as an unlabeled IP packet, or should it be discarded?

It MUST be discarded: forwarding it can cause a loop. The same treatment applies if there is no valid outgoing label.


Route selection (cont'd)

Hop by hop:
- Allows each LSR to individually choose the next hop
- This is the usual mode today in existing IP networks
- No extra processing overhead compared to IP

Explicit routing:
- A single router, generally the ingress LER, specifies several or all of the LSRs in the LSP (several: loosely explicitly routed; all: strictly explicitly routed)
- Provides functionality for traffic engineering and QoS
- E.g. CR-LDP, TE-RSVP

Label Information Base (LIB)

A table maintained by the LSRs. Each entry contains:
- Incoming label
- Address prefix
- Outgoing path
- Outgoing label

    LSR Forwarding Engine


MPLS forwarding

- Existing routing protocols establish routes
- LDP establishes label-to-route mappings and creates LIB entries for each LSR
- The ingress LER receives a packet and adds a label
- LSRs forward labeled packets using label swapping
- The egress LER removes the label and delivers the packet
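A minimal sketch of these steps in Python, with illustrative LIB contents taken from the forwarding example a few slides below (label 2 swapped for label 9 toward prefix 125.50).

```python
ingress_map = {"125.50": 2}        # FEC (prefix) -> label pushed at the ingress
core_lib = {2: 9}                  # in label -> out label (label swap in the core)

def forward(prefix: str) -> str:
    label = ingress_map[prefix]    # ingress LER classifies the packet and pushes a label
    label = core_lib[label]        # core LSR swaps the label using its LIB
    return f"egress LER pops label {label} and delivers the IP packet"

print(forward("125.50"))
```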

    FEC in MPLS


MPLS forwarding (cont'd)

[Figure: each LSR holds a table with columns Address Prefix, In Label, Out Path, Out Label.]


[Figure: label distribution example for the prefixes 125.50 and 145.40. The downstream routers advertise "use label 8 for 145.40", "use label 9 for 125.50" and "use label 2 for 125.50 and label 1 for 145.40"; each LSR installs the resulting bindings in its (Address Prefix, In Label, Out Path, Out Label) table.]

MPLS forwarding (cont'd)


[Figure: forwarding example. A packet addressed to 125.50.33.85 arrives at the ingress LER, which looks up the prefix 125.50, pushes label 2 and forwards it; the next LSR swaps label 2 for label 9 and forwards; the egress LER pops the label and delivers the packet.]


    DiffServ & MPLS


    DiffServ Architecture


Differentiated Services - The IETF DiffServ Model

- Uses 6 bits in the IP header to sort traffic into Behavior Aggregates, a.k.a. classes
- Defines a number of Per-Hop Behaviors (PHBs)
- Two-ingredient recipe: condition the traffic at the edges, invoke the PHBs in the core
- Use PHBs to construct services such as Virtual Leased Line

Defined PHBs

- Expedited Forwarding (EF), RFC 2598: a dedicated low-delay queue; comparable to guaranteed bandwidth in IntServ
- Assured Forwarding (AF), RFC 2597: 4 queues, 3 drop preferences; comparable to Controlled Load in IntServ
- Class Selector: compatible with IP Precedence
- Default (best effort)

AF PHB Group Definition

- 4 independently-forwarded AF classes; within each AF class, 3 levels of drop priority. This is very useful to protect traffic that conforms to a purchased, guaranteed rate, while increasing the chance that packets exceeding the contracted rate are dropped if congestion is experienced in the core.
- Codepoints (dd = drop preference):
  - AF Class 1: 001 dd 0
  - AF Class 2: 010 dd 0
  - AF Class 3: 011 dd 0
  - AF Class 4: 100 dd 0
- E.g. AF12 = Class 1, Drop 2, thus 001100
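The codepoint layout above can be computed directly: the AF class goes in the top three DSCP bits and the drop precedence in the next two, with the last bit zero. A small check in Python:

```python
def af_dscp(af_class: int, drop: int) -> str:
    """Return the 6-bit DSCP for AFxy (class x = 1..4, drop precedence y = 1..3)."""
    dscp = (af_class << 3) | (drop << 1)
    return format(dscp, "06b")

print(af_dscp(1, 2))    # AF12 -> 001100, as in the slide
print(af_dscp(3, 1))    # AF31 -> 011010
```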

DiffServ Scalability via Aggregation

DiffServ scalability comes from:
- Aggregation of traffic at the edge: many flows are associated with a class (marked with a DSCP)
- Processing of the aggregate only in the core: scheduling/dropping (the PHB) based on the DSCP

MPLS Scalability via Aggregation

MPLS scalability comes from:
- Aggregation of traffic at the edge: many flows are associated with a Forwarding Equivalence Class (marked with a label)
- Processing of the aggregate only in the core: forwarding based on the label

    MPLS & DiffServ - The Perfect Match!

    1000sof flows

  • 8/12/2019 Ch04-Core Network QoS

    119/134

    119Telecomm. Dept.Faculty of EEE

    NQoS2013HCMUT

Because of the same scalability goals, both models do:
- Aggregation of traffic at the edge: MPLS associates flows with an FEC, mapped into one label; DiffServ associates flows with a class, mapped to a DSCP
- Processing of the aggregate only in the core: MPLS switches based on the label; DiffServ schedules/drops based on the DSCP

[Figure: a non-MPLS DiffServ domain interworking with an MPLS DiffServ domain.]

    MPLS - The Shim Header!!


- The DSCP field is not directly visible to MPLS Label Switch Routers (they forward based on the MPLS header)
- Information on DiffServ must therefore be made visible to the LSR in the MPLS header (using the EXP field / the label)

The 32-bit MPLS shim header sits in front of the IPv4 packet that carries the DSCP:

 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                Label                  | EXP |S|      TTL      |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
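A minimal sketch of packing and unpacking this 32-bit shim header in Python, following the field widths shown above (Label 20 bits, EXP 3, S 1, TTL 8).

```python
def pack_shim(label: int, exp: int, s: int, ttl: int) -> bytes:
    word = (label << 12) | (exp << 9) | (s << 8) | ttl
    return word.to_bytes(4, "big")

def unpack_shim(raw: bytes) -> dict:
    word = int.from_bytes(raw, "big")
    return {"label": word >> 12, "exp": (word >> 9) & 0x7,
            "s": (word >> 8) & 0x1, "ttl": word & 0xFF}

shim = pack_shim(label=100, exp=5, s=1, ttl=64)   # EXP carries the DiffServ info
print(unpack_shim(shim))
```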

    Driving PHB at Core LSR


Driving PHB at Core LSR - QoS consideration

- Each distinct queuing and scheduling behavior may be encoded as a new FEC (LSP), ignoring the EXP field
- The EXP field encodes up to 8 queuing and scheduling behaviors for the same FEC (LSP)
- The EXP field encodes up to 8 queuing and scheduling behaviors independent of the FEC (LSP)

Driving PHB at Core LSR - Some possible approaches:

- Using the Label to select a queue (service class) and using one or more EXP bits to encode different drop precedence levels
- Using the Label to select a group of four queues, using 2 bits from EXP to select one of those 4 queues and the remaining bit from EXP to encode the drop precedence
- Ignoring the Label and using four shared queues per output interface: 2 bits from EXP select one of those 4 queues, and the remaining bit from EXP encodes the drop precedence
- Using the Label to select a group of N queues (N ...
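A minimal sketch of the EXP-bit split used in the second and third approaches above: two EXP bits select one of four queues and the remaining bit encodes the drop precedence. The label handling is omitted and the queue names are illustrative.

```python
QUEUES = ["EF", "AF-hi", "AF-lo", "best-effort"]   # illustrative queue names

def select_phb(exp: int):
    queue = QUEUES[(exp >> 1) & 0x3]   # two EXP bits pick one of four queues
    drop = exp & 0x1                   # remaining bit encodes the drop precedence
    return queue, drop

for exp in range(8):
    print(format(exp, "03b"), "->", select_phb(exp))
```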


    MPLS Traffic Engineering


    MPLS TE & QoS The RelationshipMPLS TE designed as tool to improve backboneefficiency independently of core QoS

  • 8/12/2019 Ch04-Core Network QoS

    126/134

    126Telecomm. Dept.Faculty of EEE

    NQoS2013HCMUT

MPLS TE was designed as a tool to improve backbone efficiency independently of core QoS techniques:
- MPLS TE computes routes for aggregates across all PHBs: a single chunk of bandwidth is requested for the tunnel
- MPLS TE performs admission control over a global bandwidth pool: it is unaware of the bandwidth allocated to each class / PHB

MPLS TE and MPLS DiffServ:
- Can run simultaneously in a network
- Can provide their own individual benefits: TE distributes the aggregate load, DiffServ provides differentiation
- Are unaware of each other

    Traffic Aggregate in IP networks


    Traffic Aggregate in MPLS networks


    Traffic Aggregate in MPLS networks


    TE Fast Reroute - Tunneling


    TE Global Fast Reroute (Makam)


    TE Region Fast Reroute


    TE Local Fast Reroute


    TE Haskin Fast Reroute
