New Thesis


Upload: ravindra-k-singh

Post on 26-Oct-2014



Chapter 1

Introduction

1.1 Background

Multiprotocol Label Switching (MPLS) was proposed by the Internet Engineering Task Force (IETF) as a technology that plays an important role in providing traffic engineering and quality of service in the network. An MPLS network provides ISPs with the flexibility required to handle traffic through explicitly routed label switched paths (ER-LSPs).

In conventional networking, communication is carried out in frames that travel from source to destination hop by hop in a store-and-forward manner. When a frame arrives at a node, the node performs a routing table lookup to find the next hop, ensuring that the frame makes its way towards its destination. MPLS provides a connection-oriented service for variable-length frames and is emerging as a standard for the next-generation Internet. It is a mechanism in which labels are assigned to data packets and forwarding is done based on the contents of those labels, without examining the packets themselves, permitting flexibility in the choice of protocols and allowing packets to be forwarded over any type of transport medium. MPLS is replacing existing technology and is in high demand, since it requires less overhead and provides a better solution, with the flexibility to divert and forward packets when a failure occurs. MPLS was designed keeping in mind the strengths and weaknesses of ATM (Asynchronous Transfer Mode), because ATM and Frame Relay are its ancestors.

As the Internet has grown tremendously over the last few years, a lack of availability, reliability and scalability has been found in mission-critical networking environments. In conventional routing, packets are forwarded on the basis of the destination address and a single metric such as minimum hop count or delay.

Conventional routing therefore suffers from a drawback: this approach induces traffic to converge onto the same links, so congestion increases significantly and the network ends up in a state of unbalanced resource utilization. Traffic Engineering (TE) is the solution to this problem; it provides bandwidth guarantees, explicit routes and effective utilization of network resources. Since the main demand on today's networks is backbone speed, current research focuses on traffic engineering with LSPs for better control over the distribution of traffic in the network.
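
The congestion problem described above can be illustrated with a small Python sketch (the topology, flow names and routing code are invented here purely for illustration): single-metric shortest-path routing sends every flow over the same links, while explicit routes can spread the load.

```python
from collections import deque

# Toy topology: two parallel paths from A to D.
graph = {
    "A": ["B", "C"],
    "B": ["D"],
    "C": ["E"],
    "E": ["D"],
    "D": [],
}

def min_hop_path(src, dst):
    """Plain BFS: the single-metric route that every flow converges onto."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])

# Destination-based forwarding: all three flows A->D share link B-D.
print([min_hop_path("A", "D") for _ in range(3)])

# With ER-LSPs an operator can pin one flow to the longer, under-used path.
explicit_routes = {"f1": ["A", "B", "D"], "f2": ["A", "B", "D"], "f3": ["A", "C", "E", "D"]}
```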

1.2 The Bigger Picture of Internet

In the past few years the Internet has achieved great success and become the backbone of our lives. Its size has increased significantly as the number of users has grown exponentially. Today millions of users worldwide communicate with each other through the Internet, and the number is still growing at a tremendous rate.


Figure 1.1: Users connected to each other via the Internet.

The Internet is formed by connecting many Local Area Networks (LANs), Metropolitan Area Networks (MANs) and Wide Area Networks (WANs) through a backbone. The backbone provides a trunk connection to the Internet [1] through which clients can get access by sharing a set of lines or frequencies instead of being provided with them individually.

Figure 1.2: Simple ISP [2]


A point of presence (POP) is an access point from one place to the rest of the Internet. Each Internet service provider (ISP) has points of presence on the Internet; an average ISP can have more than 50 POPs, interconnected in a ring topology to guarantee reliability. Within a POP, Border Routers (BRs) connect to other ISPs, Access Routers (ARs) connect to remote customers, hosting routers connect to the web servers, and Core Routers connect to other POPs [3].

1.3 Research Motivation and Problem

As the Internet grows rapidly, it becomes more and more difficult to satisfy the needs of customers and service providers, because people want more advanced services from the Internet than the existing best-effort service. Quality of Service (QoS), Virtual Private Networks (VPNs) and Traffic Engineering (TE) enable ISPs to provide improved services to particular traffic flows. But IP, which was designed to provide a best-effort service, is not able to provide these new features. Multiprotocol Label Switching (MPLS) is a networking technology which can handle these features in an efficient manner.

MPLS has received a lot of attention from researchers, not only because it provides new features such as QoS, VPNs and TE, but because it provides a way to control particular traffic efficiently and also improves network performance and efficiency. A crucial component of QoS is fault tolerance and recovery from network failures. Real-time applications and other critical tasks need an instant response to a network failure; otherwise the required QoS cannot be met.

In a conventional IP network, error handling is done by detecting a link or node failure at the local router and spreading this information to the whole network, so that the network topology changes accordingly. One major flaw of this recovery mechanism is its slow response, for the following reasons:

A failure can be detected at the lower layers very quickly, but those layers do not know where the failure should be reported to, as they hold no routing information.

Layer 3 is connectionless, so it does not report the failure either.

MPLS, however, deals with failures in an efficient manner: because MPLS is connection-oriented and lies between layer 2 and layer 3, it can detect failures more quickly. Unlike OSPF, MPLS knows the current sessions and connections in the network. This greatly aids the nodes of an MPLS network in notifying the edge nodes of a failure and taking immediate restoration steps for the affected paths, even before the network topology finally converges.

The main idea of this work is based on three criteria: reducing the request blocking probability, minimizing network cost, and load balancing. The Minimum Interference Routing Algorithm (MIRA) [4] is the best algorithm for reducing the request blocking probability, but MIRA suffers from computational complexity, as it frequently computes maximum flows. Minimum-network-cost algorithms suffer from a poor request rejection ratio in heavily loaded networks, and load balancing gives undesirable results in lightly loaded networks. None of these approaches is therefore sufficient on its own.
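
As a rough illustration of how such criteria might be combined, the following Python sketch scores candidate backup paths with a weighted sum of interference, cost and load; the link attributes, weights and numbers are invented and do not represent the proposed approach.

```python
def backup_path_score(path, links, w_interf=1.0, w_cost=1.0, w_load=1.0):
    """Hypothetical composite metric over three criteria: interference with
    future requests (a proxy for blocking probability), total path cost,
    and the worst link utilization. Lower is better."""
    interference = sum(links[l]["criticality"] for l in path)
    cost = sum(links[l]["cost"] for l in path)
    max_util = max(links[l]["load"] / links[l]["capacity"] for l in path)
    return w_interf * interference + w_cost * cost + w_load * max_util

# Invented link attributes for two candidate backup paths.
links = {
    "AB": {"criticality": 0.9, "cost": 1, "load": 80, "capacity": 100},
    "BD": {"criticality": 0.9, "cost": 1, "load": 90, "capacity": 100},
    "AC": {"criticality": 0.1, "cost": 2, "load": 10, "capacity": 100},
    "CD": {"criticality": 0.1, "cost": 2, "load": 20, "capacity": 100},
}
candidates = [["AB", "BD"], ["AC", "CD"]]
best = min(candidates, key=lambda p: backup_path_score(p, links))
print(best)  # the lightly loaded, low-interference path wins despite higher cost
```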


Therefore, in this thesis we propose an effective approach for enhancing fault-tolerant performance based on all three criteria. When an LSP fails, due to a link or node failure, the proposed approach is initiated to recover the traffic carried on the failed LSP, and the backup path is chosen on the basis of the three criteria above.

1.4 Benefits of MPLS

The benefits of MPLS include better performance, lower total cost of ownership, greater flexibility to accommodate new technologies, and better security and survivability. MPLS provides better performance by using classes of service and priority queuing; through these the network knows which traffic is important and ensures that it takes priority over other traffic. MPLS allows devices to handle IP traffic by enabling IP capability on those devices and forwarding packets along explicit paths using pre-calculated routes, which are not used in regular routing. MPLS interfaces to existing protocols such as RSVP and OSPF. In addition, it supports IP, ATM, Frame Relay and other layer-2 protocols. An MPLS-enabled network simplifies the overall network infrastructure through the convergence of multiple technologies. Enterprises can eliminate multiple complex overlay networks and transport a variety of new voice, video and data applications over one network. Simplification of the network greatly reduces capital and operating costs.

MPLS promises a foundation that allows ISPs to deliver new services that are not supported by conventional IP routing techniques. In order to meet the growing demand for resources, ISPs face the challenge of providing not only a superior baseline service but also the latest high-quality services for modern needs. Packet forwarding is made easy and effective, since routers simply forward packets based on fixed labels, and MPLS supports the delivery of services with quality of service (QoS) along with an appropriate level of security, while reducing overheads such as encrypting the data that must be kept secure on public IP networks.

1.5 Outline

This thesis presents an overview of MPLS, the Internet and the benefits of MPLS in Chapter 1. Chapter 2 gives a detailed overview of the MPLS architecture and its components; topics such as the significance of MPLS, MPLS labels, components, LSPs, label distribution methods and RSVP are discussed in detail. In Chapter 3 a literature survey on recovery strategies is presented. In Chapter 4 the problem statement is given and a detailed description of the proposed approach is presented. Chapter 5 gives the details of the simulation environment and the results. Finally, Chapter 6 provides a summary of the limitations and ideas concerning future work related to MPLS.


Chapter 2

Multiprotocol Label Switching

2.1 Background

This chapter explains the significance of Multiprotocol Label Switching (MPLS) as a rising multi-service Internet technology. MPLS terminology such as labels, Label Switched Paths (LSPs), Forwarding Equivalence Classes (FECs), label stacking, distribution and merging is discussed in detail.

2.2 MPLS Overview

With the increasing demand on the Internet to carry more traffic reliably and to provide support for bandwidth guarantees and Quality of Service (QoS), it is important to have a high-performance mechanism. Traffic engineering is the process of mapping traffic demand onto the network while minimizing resource utilization. Multiprotocol Label Switching (MPLS) is a tool for network traffic engineering and hence is becoming the technology of choice for the Internet backbone.

MPLS uses labels to route packets: an independent, unique label is assigned to each packet as it enters the network, so that switching can be performed on the label alone. A label is a short, fixed-length packet identifier which is used to identify a particular path. The basic idea behind MPLS was developed long ago and exists in networking technologies such as X.25, Frame Relay and ATM, which were the first to use label switching in the networking world. Label switching technology was developed in the mid 1990s to enhance the quality of service and the performance of networks; it was used earlier by Ipsilon/Nokia (IP switching). The IETF (Internet Engineering Task Force) then standardized label switching, and from this MPLS emerged as a standardized label switching technology.

In MPLS, labels are used to make forwarding decisions, i.e. labels identify a particular path, so process switching is not needed. When a packet enters an MPLS network, a layer-3 analysis is done once, and a label is assigned to the packet on the basis of its layer-3 destination address. The intermediate nodes of an MPLS network are called Label Switching Routers (LSRs), while the nodes that connect to IP routers or ATM switches outside the domain are called Label Edge Routers (LERs). The router through which a packet enters the MPLS domain is called the ingress router, and the one through which it leaves is called the egress router. The basic idea of MPLS is to bind a label to a packet at the ingress router; the label is afterwards used to make forwarding decisions instead of looking up the destination address at each hop, because the label defines a fast and effective label switched path (LSP) that directs the traffic all the way to the destination.

2.3 MPLS Objectives

MPLS is a standardized network-based technology which uses labels to make forwarding decisions, with network-layer routing in the control component.


The objective is to provide a solution in which MPLS supports the integrated services model, including RSVP, and supports operation, administration and maintenance facilities.

MPLS must run over any link-layer technology and support unicast and multicast forwarding.

MPLS must be capable of dealing with the ever-growing traffic demand on the network and provide routing capabilities that extend beyond destination-based forwarding.

It must reduce cost and offer new revenue-generating customer services in addition to providing a high-quality base service.

2.4 MPLS Concepts

Multiprotocol Label Switching (MPLS) technology developed from Cisco's tag switching. The MPLS model lies somewhere between layer 2 (the data link layer) and layer 3 (the network layer) of the OSI model, and it is therefore often known as a layer 2.5 protocol. MPLS has made its way from the service provider towards the enterprise and has become a valuable WAN connection style that overlays all existing WAN types. MPLS provides solutions to many problems faced by current IP networks. Being a rich design for the service provider, MPLS not only provides redundancy but maintains a high level of performance, allowing flexibility in the choice of protocols and routing packets across any type of transport medium, with strong support for traffic engineering and QoS. It has emerged as the standard for the next-generation Internet, eliminating dependencies and providing robust communication between remote facilities, reducing the per-packet processing time required at each router and enhancing router performance across the country or across the world.

2.5 MPLS Applications

The four main MPLS applications are as follows:

Connection-oriented QoS support

Traffic engineering (TE)

Virtual Private Networks (VPNs)

Multi-Protocol support

2.5.1 Connection-Oriented QoS Support

Network managers and users increasingly need QoS support, for many reasons. The following are fundamental requirements:

Provide guaranteed capacity for particular applications, such as audio/video conferencing.

Control response time and jitter, and provide guaranteed capacity for voice.

Configure varying degrees of QoS for multiple network customers.

A connectionless network, such as an IP-based internetwork, cannot provide truly firm QoS commitments. A Differentiated Services (DS) framework works only in a general way and upon aggregates of traffic from numerous sources. An Integrated Services (IS) framework, using the Resource Reservation Protocol (RSVP), has some of the flavor of a connection-oriented approach, but is nevertheless limited in terms of its flexibility and scalability. For services such as voice and video that require a network with high predictability, the DS and IS approaches, by themselves, may prove inadequate on a heavily loaded network. By contrast, a connection-oriented network has powerful traffic-management and QoS capabilities. MPLS imposes a connection-oriented framework on an IP-based internet and thus provides the foundation for sophisticated and reliable QoS traffic contracts [5].

2.5.2 Traffic Engineering (TE)

The ability to dynamically define routes, plan resource commitments on the basis of known demand and optimize network utilization is referred to as traffic engineering. Current IP networks provide very poor support for traffic engineering. Specifically, routing protocols such as OSPF enable routers to dynamically change the route to a given destination on a packet-by-packet basis while balancing the network load. In many cases such dynamic routing provides very little help in reducing network congestion and supporting QoS. However, by enabling a connection-oriented framework, MPLS makes all the traffic between two endpoints follow the same route, which may be changed simply by rerouting algorithms in the event of congestion. MPLS is aware not only of individual packets but also of packet flows, their QoS requirements and network traffic demands. With MPLS it is possible, for efficient load balancing, to set up routes on the basis of individual flows, or to have two different flows between the same endpoints follow different routes (maximally disjoint paths).

In practice, MPLS changes routes on a flow-by-flow basis, taking advantage of the known traffic demands for each flow, instead of simply changing the route on a packet-by-packet basis. With MPLS it is easy to optimize the use of network resources in order to balance the load and to support various levels of traffic requirements.

2.5.3 Virtual Private Networks (VPNs)

MPLS provides an effective mechanism for supporting VPNs. With a VPN, the traffic of a given enterprise or group passes transparently through an internet in a way that effectively segregates that traffic from other packets, providing performance guarantees and security [5]. A VPN treats the Internet as a transport medium for establishing secure links between business partners, and it helps extend communication to local and isolated offices for an increasingly mobile workforce while significantly decreasing communication costs, since Internet access is local and less expensive than a dedicated remote-access server connection.

2.5.4 Multi-protocol Support

It is quite obvious from the name "Multiprotocol Label Switching" that MPLS has the fascinating feature of supporting multiple protocols.

The Internet is a big place and a combination of many different technologies, including IP routers and ATM and Frame Relay switches.

The major advantage of MPLS is that it can be used together with other networking technologies, as well as in pure IP, ATM and Frame Relay networks.

MPLS can also be used with any combination of the above technologies, or even all three, because MPLS-enabled routers can coexist with pure IP networks as well as with ATM and Frame Relay switches.


Multi-protocol support gives MPLS a universal nature that not only attracts users with mixed or different networking technologies but also provides an optimal solution for maximizing resource utilization and expanding QoS support.

2.6 Working of MPLS

In MPLS the transportation of data occurs on label switched paths, and protocols are used to establish LSPs by passing information among the LSRs. The LSRs are responsible for switching and routing packets according to the labels assigned to them. The MPLS header label is bound to a packet, together forming a label stack, and packets are switched using the label without looking into the IP routing table. The route traversed by a packet within the MPLS network between the ingress and egress nodes, passing through intermediate LSRs, is called a Label Switched Path (LSP). Switching is made possible by forwarding packets along these virtual connections. The LSPs form a logical network on top of the regular physical network and guarantee connection-oriented processing over the connectionless IP network. Each MPLS packet has a header that consists of four fields: a 20-bit label value field, a 3-bit class-of-service field, a 1-bit bottom-of-stack flag, which, if set, indicates that the current label is the last label in the stack, and an 8-bit time-to-live (TTL) field. The entry and exit points of an MPLS network are called label edge routers, whereas label switching routers lie within the MPLS network; they examine only the MPLS header containing the label and forward the packet no matter what the underlying protocol is. Each LER maintains a special database mapping destination addresses to labels, used to label a packet when it enters the MPLS network.

Figure 2.1: Label Switched Path in an MPLS enabled network

The main purpose of appending a label to an incoming packet is to obviate route lookups while forwarding the packet on its way to the egress LSR. To distribute labels, the Label Distribution Protocol (LDP) or RSVP can be used, so that a label switched path (LSP) can be set up. In a hop-by-hop LSP the next interface is decided by the Label Switching Router (LSR) by analyzing the layer-3 routing topology database, and a label request is transmitted to the next hop. A Forwarding Equivalence Class (FEC) consists of a group of packets that share the same features and have the same transport requirements; they are given the same treatment and routed to the same destination. Many parameters can determine the FEC of a packet; for example, it can be determined on the basis of the source or destination IP address, the source or destination port number, the DiffServ code point or the IP protocol identity. A Label Information Base (LIB) table is constructed by each LSR on the basis of FECs; it defines how a particular packet must be forwarded in the MPLS network.
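
A minimal Python sketch of the FEC idea (the packet field names and the choice of classification tuple are illustrative only): packets that map to the same FEC receive the same label.

```python
# Classify a packet into a FEC by a tuple of header fields (illustrative choice).
def fec_of(pkt):
    return (pkt["dst_prefix"], pkt["dscp"], pkt["proto"])

lib = {}          # toy Label Information Base: FEC -> label
next_label = 16   # MPLS reserves label values 0-15, so allocation starts at 16

def label_for(pkt):
    """Return the label bound to the packet's FEC, binding a new one if needed."""
    global next_label
    fec = fec_of(pkt)
    if fec not in lib:
        lib[fec] = next_label
        next_label += 1
    return lib[fec]

a = label_for({"dst_prefix": "10.1.0.0/16", "dscp": 46, "proto": "udp"})
b = label_for({"dst_prefix": "10.1.0.0/16", "dscp": 46, "proto": "udp"})
print(a == b)  # same FEC -> same label
```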

Figure 2.2: Assignment of labels in an MPLS domain and IP forwarding [6]

The LIB contains FEC-to-label binding information. After the Forwarding Equivalence Class of a packet is determined, an appropriate label from the LIB is attached to it. Based on that label, which belongs to a particular FEC, the packet is forwarded through several LSRs on its way to its destination. As the packet traverses each LSR, the label is examined and replaced by another appropriate label, and the packet is forwarded to the next LSR along the LSP, closer to the destination.
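
The per-hop label swap just described can be sketched as a table walk (the LSR names, labels and table layout are invented for illustration):

```python
# Each LSR's toy table maps an incoming label to (outgoing label, next hop).
tables = {
    "LSR_A": {17: (33, "LSR_B")},
    "LSR_B": {33: (14, "LSR_C")},
    "LSR_C": {14: (None, "egress")},  # None: pop the label, deliver as plain IP
}

def forward(lsr, label):
    """Follow the LSP, recording (router, in-label, out-label) at each hop."""
    hops = []
    while label is not None:
        out_label, next_hop = tables[lsr][label]
        hops.append((lsr, label, out_label))
        lsr, label = next_hop, out_label
    return hops

print(forward("LSR_A", 17))
```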

Before a packet can be assigned to a FEC, an LSP must be set up and the QoS parameters must be set. The transmission of data in MPLS occurs on label switched paths (LSPs). Two protocols are used to set up LSPs, so that the necessary information can be passed among the LSRs, and there is a special procedure through which labels are distributed along the path. An LSP can be control-driven, or it can be explicitly routed (ER-LSP).

2.7 Components of MPLS

2.7.1 MPLS Header

The MPLS header is 32 bits long and is created by the ingress router [7], [8]. The 32-bit header field is embedded between the layer-2 (data link) and layer-3 (network) headers. Every incoming packet is encapsulated with this "shim" header, so called because it is located between the existing headers. The MPLS header format is shown in Figure 2.3.


Figure 2.3: MPLS Shim Header [8]
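
Assuming the field layout described earlier (20-bit label, 3-bit class of service, 1-bit bottom-of-stack flag, 8-bit TTL), the shim header can be packed and unpacked with simple bit operations. This is an illustrative sketch, not router code:

```python
def pack_shim(label, cos, bottom, ttl):
    """Pack the four shim-header fields into one 32-bit word."""
    assert 0 <= label < 2**20 and 0 <= cos < 8 and bottom in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (cos << 9) | (bottom << 8) | ttl

def unpack_shim(word):
    """Recover the four fields from a 32-bit shim-header word."""
    return {
        "label":  (word >> 12) & 0xFFFFF,  # 20 bits
        "cos":    (word >> 9) & 0x7,       # 3 bits
        "bottom": (word >> 8) & 0x1,       # 1 bit
        "ttl":    word & 0xFF,             # 8 bits
    }

hdr = pack_shim(label=100, cos=5, bottom=1, ttl=64)
print(unpack_shim(hdr))
```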

2.7.2 Label Binding and Assignment

Binding refers to an operation at an LSR that associates a label with a FEC. Labels are bound to a FEC as a result of some event that indicates a need for such a binding. Label binding can be control-driven or data-driven. In control-driven binding, control messages are used to exchange label and FEC information between peers. Data-driven bindings take place dynamically and are based on analysis of the received packets. Binding methods can further be classified as downstream or upstream bindings.

Figure 2.4: Label Binding

In downstream binding, when an upstream router (Ru) sends a packet to a downstream router (Rd), Rd examines the packet and finds that the packet is associated with a FEC with label L. Rd asks Ru to associate label L with all packets intended for that particular FEC. The downstream binding method can be further divided into unsolicited downstream label binding and downstream-on-demand label binding. In the unsolicited downstream mode, a downstream LSR locally associates a label with a FEC and advertises the binding to its neighboring peers without being asked. In the downstream-on-demand mode, the downstream LSR distributes a label only when explicitly requested by an upstream LSR to bind a label to a FEC. In upstream binding, Ru assigns a label locally to a FEC and advertises the assignment to its neighboring peers. Figure 2.4 presents a label switching domain with three LSRs (LSR A, LSR B and LSR C) and two host machines with IP addresses 192.168.1.2 and 192.168.1.1. Events 1 and 2 illustrate the downstream label assignment process between host 192.168.1.2 and LSR A; events 3 and 4 illustrate the upstream label assignment.
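
The two downstream distribution modes can be contrasted with a short sketch; the class and method names are illustrative and do not correspond to LDP message formats.

```python
class DownstreamLSR:
    """Toy downstream router that binds labels to FECs and hands them out."""
    def __init__(self):
        self.bindings = {}   # FEC -> label
        self.next = 16       # labels 0-15 are reserved in MPLS

    def bind(self, fec):
        if fec not in self.bindings:
            self.bindings[fec] = self.next
            self.next += 1
        return self.bindings[fec]

    def advertise_unsolicited(self, fec, peers):
        """Unsolicited downstream: push the mapping to peers without being asked."""
        label = self.bind(fec)
        return [(peer, fec, label) for peer in peers]

    def on_demand(self, fec):
        """Downstream on demand: reply only to an explicit label request."""
        return self.bind(fec)

rd = DownstreamLSR()
print(rd.advertise_unsolicited("10.0.0.0/8", ["Ru1", "Ru2"]))
print(rd.on_demand("10.0.0.0/8"))  # same FEC -> same label either way
```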


2.7.3 Label Distribution

MPLS defines label distribution as the set of procedures that LSRs use to negotiate label information when forwarding traffic. There are many methods available to distribute labels in MPLS networks; the Label Distribution Protocol (LDP), the Resource Reservation Protocol (RSVP) and the Border Gateway Protocol (BGP) are three popular ones. LDP is the most widely used method of label distribution, and the IETF has proposed it as a standard for label distribution in MPLS networks. Constraint-based Routing LDP (CR-LDP), derived from LDP, additionally allows network managers to set up explicitly routed LSPs, which are used for delay-sensitive traffic.

2.7.4 Label Merging

The incoming streams of traffic from different FECs can be merged together and switched using a common label if

they are intended for the same final destination. This is known as stream merging or aggregation of flows.

2.7.5 Label Stacking

One of the most powerful features of MPLS is label stacking, which is designed to scale to large networks. A labeled

packet may carry many labels, which are organized as a last-in-first-out stack. Processing is always based on the top

label. At any LSR a label can be added to the stack, which is referred to as a Push operation, or removed from the

stack, which is referred to as a Pop operation. Label stacking allows the aggregation of LSPs into a single LSP for a

portion of the route through different domains. This process is illustrated in Figure 2.5.

Figure 2.5: Label Stacking

Referring to Figure 2.5, assume LSR A, LSR B and LSR C belong to domain B (OSPF), while LSR X and LSR Y belong to domains A and C (BGP domains). The example also assumes that domain B is a transit domain. In order to set up an LSP from domain A to domain C, two levels of labels are used, since there are two types of routing protocols. LSR X sends a packet to LSR A with label 21. Upon receiving the packet, LSR A pushes label 33 onto the label stack and forwards the packet to LSR B, where label 33 is replaced with 14 and the packet is forwarded to LSR C. LSRs in domain B (OSPF) operate only on the top label, which is assigned by the exterior LSR of domain B. LSR C pops the label from the label stack before sending the packet to domain C, and LSR Y again operates on label 21, which was assigned by the BGP domain LSR.
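
The walk-through above maps directly onto push, swap and pop operations on a list used as a stack:

```python
stack = [21]         # LSR X labels the packet for the BGP domains

stack.append(33)     # LSR A: Push the domain-B label on top
assert stack == [21, 33]

stack[-1] = 14       # LSR B: swap only the top label; label 21 is untouched
assert stack == [21, 14]

stack.pop()          # LSR C: Pop before handing off to domain C
assert stack == [21] # LSR Y again operates on label 21
print(stack)
```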

2.7.6 Label Spaces

The label space refers to the manner in which labels are associated with an LSR. Label space can be categorized in two ways:

Per-platform label space: label values are unique across the complete LSR, and labels are allocated from a common pool; no two labels distributed on different interfaces have the same value.

Per-interface label space: label ranges are associated with interfaces; multiple label pools are defined for the interfaces, and the labels provided on those interfaces are allocated from separate pools, so label values provided on different interfaces can be the same. Two octets in the LDP header carry the interface label space identifier.
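
The difference between the two label spaces can be sketched with two toy allocators (MPLS reserves label values 0-15, so allocation starts at 16; the class names are ours):

```python
import itertools

class PerPlatform:
    """One shared pool: a label value is unique across the whole LSR."""
    def __init__(self):
        self._pool = itertools.count(16)
    def allocate(self, interface):
        return next(self._pool)        # interface is irrelevant to uniqueness

class PerInterface:
    """One independent pool per interface: values may repeat across interfaces."""
    def __init__(self):
        self._pools = {}
    def allocate(self, interface):
        pool = self._pools.setdefault(interface, itertools.count(16))
        return next(pool)

p = PerPlatform()
print(p.allocate("eth0"), p.allocate("eth1"))   # 16 17 -- never equal

i = PerInterface()
print(i.allocate("eth0"), i.allocate("eth1"))   # 16 16 -- same value is fine
```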

2.8 Label Distribution Methods

2.8.1 Label Distribution Protocol

The Label Distribution Protocol (LDP) [9], as the name indicates, is a protocol used to distribute labels between LSRs. The MPLS architecture does not mandate a single signaling method for label distribution. There is a set of procedures and methods for creating labels [7] through which LSPs are established by mapping layer-3 information onto layer-2 switched paths:

Topology-based methods

Request-based methods

Traffic-based methods

A label distribution protocol is a protocol defined by the IETF in [10] which allows two LSRs to exchange information about how to map incoming labels. The mapping information is required in order to make forwarding decisions. When a labeled packet reaches an intermediate LSR, the LSR uses the information stored in its table and forwards the packet to the adjacent LSR, also known as its peer. LDP operates between directly connected LSRs or between non-adjacent LSRs. LSRs that establish a session and exchange labels and mapping information with the help of LDP are called LDP peers. When a label is assigned, the LSR needs to let other LSRs know of this label, again with the help of LDP. The IETF has defined LDP as the protocol used for explicit signaling and management of the label space; it is also responsible for the advertisement and withdrawal of labels.

Figure 2.6: Logical exchange of messages in LDP [11].


LDP uses Protocol Data Units (PDUs) to exchange messages between LDP-capable LSRs. The header in the PDU indicates the length and is followed by one or more messages intended for the partner LSR. For the reliable delivery of LDP information with effective flow control and congestion management, TCP is used as the transport layer protocol. Figure 2.6 illustrates the concept of LDP message exchange: since there is an intermediate LSR between the two endpoints, the exchange is logical and is represented by a dotted line. The exchange takes place between the source and destination LSRs, whereas the intermediate LSR takes no action. LDP provides a mechanism that lets LSR peers learn of each other so that communication can take place. There are four major types of messages [8], [11].

2.8.1.1 Discovery

LDP peers are discovered dynamically. When an LSR is configured with LDP, it attempts to discover other LSRs by sending "HELLO" messages over a UDP port. Hello messages are sent periodically over all MPLS-enabled interfaces, and in response the routers with MPLS enabled on them establish a session with the router that sent the hello message. UDP is used for sending hello messages and TCP is used for establishing the session. Peers that are not directly connected can be contacted by explicitly specifying the LDP peer in the LSR's configuration via a directed LDP hello message.

HELLO MESSAGE FORMAT

Figure 2.7: LDP “Hello” message [12]

Discovery of non-adjacent peers is performed by sending the hello message to a unicast address. When one LDP peer is connected to another, they exchange different types of configuration information. Once the session is configured, label request and label binding messages are exchanged, and in this way a packet finds its way toward its destination. LDP messages support:

Establishment of a control conversation among adjacent MPLS peers and exchange of capabilities and options.

Label advertisement.

Destruction or withdrawal of labels.
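The discovery behaviour described above (hellos over UDP, session establishment over TCP) can be sketched as a toy state model; the LSR names are hypothetical and the real LDP session negotiation is far richer than this:

```python
# Toy discovery sketch: hellos arrive over UDP, sessions are then set up
# over TCP. This only models the bookkeeping, not the actual sockets.

class LSR:
    def __init__(self, name):
        self.name = name
        self.hello_adjacencies = set()   # peers heard from via hello
        self.sessions = set()            # peers with an established session

    def receive_hello(self, peer):
        self.hello_adjacencies.add(peer.name)
        # In LDP the session itself is then negotiated over TCP:
        if peer.name not in self.sessions:
            self.sessions.add(peer.name)
            peer.sessions.add(self.name)

a, b = LSR("A"), LSR("B")
a.receive_hello(b)          # B's periodic hello reaches A
print(a.sessions, b.sessions)
```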


2.8.1.2 LDP Header

Each LDP PDU consists of a header that indicates the length of the PDU. The header may be followed by one or more messages sent from one peer to the other within the MPLS network. The header within each message indicates the message length, after which the contents of the message follow. LDP uses a Type-Length-Value (TLV) scheme for encoding the contents of messages. TLVs, packaged in PDUs, make LDP highly extensible and improve backwards compatibility.

The protocol structure of LDP is shown below.

Table 1: Label Distribution Protocol (protocol structure)

Version (2 bytes) | PDU Length (2 bytes)
LDP ID (6 bytes)
LDP Messages

1. Version – the version field gives the version number of the protocol; at present the number is 1.

2. PDU Length – the total length of the PDU in octets, excluding the version and PDU length fields.

3. LDP ID – identifies the label space of the sending LSR to which this PDU applies. The first 4 octets encode an IP address assigned to the LSR; the last 2 octets identify a label space within the LSR [10].
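The 10-octet PDU header described above can be packed and parsed with a short sketch (the address and length values used below are illustrative):

```python
import struct

# Pack and parse the LDP PDU header: Version (2 octets), PDU Length
# (2 octets), LDP ID (4-octet LSR address + 2-octet label space).

def pack_ldp_header(version, pdu_length, lsr_id, label_space):
    return struct.pack("!HH4sH", version, pdu_length,
                       bytes(lsr_id), label_space)

def parse_ldp_header(data):
    version, pdu_length, lsr_id, label_space = struct.unpack("!HH4sH", data[:10])
    return {"version": version, "pdu_length": pdu_length,
            "lsr_id": ".".join(str(b) for b in lsr_id),
            "label_space": label_space}

hdr = pack_ldp_header(1, 26, [10, 0, 0, 1], 0)   # label space 0 = per-platform
print(parse_ldp_header(hdr))
```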

LDP message format

Table 2: LDP message format

U | Message Type | Message Length
Message ID
Parameters

1. U – the unknown message bit. If it is set to 1, an unrecognized message is silently discarded by the receiver.

2. Message Type – this field determines the type of message. The message types that exist are: Notification, Hello, Initialization, Keep Alive, Address, Address Withdraw, Label Request, Label Mapping, Label Withdraw, Label Release and Unknown Message.

3. Message Length – indicates the length, in octets, of the Message ID, the mandatory parameters and the optional parameters.

4. Message ID – a 32-bit message identifier; each message ID is unique.

5. Parameters – contains the TLVs, that is, the mandatory and optional parameters of the message [10].
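A minimal sketch of parsing this message header, including the silent discard of an unknown message whose U bit is set, might look as follows (only the Hello type, 0x0100, is treated as "known" in this toy example):

```python
import struct

# The U bit is the high-order bit of the first 16-bit word; the remaining
# 15 bits carry the message type, followed by the message length and the
# 32-bit message ID.

def msg_type_unknown(msg_type):
    return msg_type not in {0x0100}       # only Hello "known" in this sketch

def parse_message(data):
    first, length, msg_id = struct.unpack("!HHI", data[:8])
    u_bit = first >> 15
    msg_type = first & 0x7FFF
    if u_bit and msg_type_unknown(msg_type):
        return None                       # unknown message: silently discarded
    return {"type": msg_type, "length": length, "id": msg_id}

hello = struct.pack("!HHI", 0x0100, 4, 1)
print(parse_message(hello))
```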


TLV format

Table 3: LDP TLV format

U | F | Type | Length
Value

1. U – the unknown TLV bit.

2. F – the forward unknown TLV bit.

3. Type – encodes how the Value field is to be interpreted.

4. Length – indicates the length of the Value field in octets.

5. Value – encodes the information, interpreted as specified in the Type field [10].
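TLV encoding and decoding as described above can be sketched as follows (the type value and value bytes used below are illustrative):

```python
import struct

# U and F are the two high-order bits of the first 16-bit word; the
# remaining 14 bits carry the Type. Length counts the Value octets.

def encode_tlv(u, f, tlv_type, value):
    first = (u << 15) | (f << 14) | (tlv_type & 0x3FFF)
    return struct.pack("!HH", first, len(value)) + value

def decode_tlv(data):
    first, length = struct.unpack("!HH", data[:4])
    return {"U": first >> 15, "F": (first >> 14) & 1,
            "type": first & 0x3FFF, "value": data[4:4 + length]}

tlv = encode_tlv(0, 0, 0x0400, b"\x00\x0f\x00\x00")
print(decode_tlv(tlv))
```

The TLV wrapper is what makes LDP extensible: a receiver that does not understand a Type can still skip exactly Length octets and continue parsing.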

2.8.1.3 LDP Messages

The hello message is the discovery mechanism used to find other LSRs in the MPLS domain. Each LSR keeps a record of all the hello messages received from other LSR peers. As described in Section 2.8.1.1, hello messages are sent periodically over all MPLS-enabled interfaces using UDP, while TCP is used for session establishment, and non-adjacent peers are reached by directed, unicast hello messages configured explicitly on the LSR. The LSR uses the UDP port to multicast hello messages to all directly connected routers in the subnet.

Initialization message

The initialization message is used to establish and maintain a session between MPLS peers. When one LDP peer is connected to another, they exchange different types of configuration information. Once the session is configured, label request and label binding messages are exchanged, after which packets can follow their way toward their destinations. The configuration information that is exchanged includes the PDU size, the keep-alive time, loop detection settings, the label advertisement discipline and the path vector limit. The initialization message is sent by the session-requesting peer using the TCP port.

Advertisement message

The advertisement message is used by an LSR to advertise labels to its peer LSRs and to create, delete and change label mappings for FECs. Advertisement messages are sent by an LSR to its peers using the TCP port and can also be used to request label mappings.

Notification message

As its name suggests, this message is a notification from one LSR to another. A notification message is a notice from one LSR to its peer about some abnormal or unusual error condition: a notification is sent to an LSR when an unknown message is received, the keep-alive timer expires, a session terminates unexpectedly or a node shuts down. When an LSR receives such a message with a status code from its peer, the first thing it does is terminate the LDP session, which is done by closing the TCP connection. Once the session is terminated, the LSR discards all the state information associated with that peer LSR.

Keep alive message

The integrity of the TCP connection between peers is monitored with the help of the keep-alive message. It is a way of knowing that the other LSR is still alive, that is, that a connection still exists between them.

Address message

The address message is used by an LSR to advertise its interface addresses to its peer LSRs. Once the address message containing the interface addresses is received by a peer LSR, it updates its Label Information Base (LIB) with the mapping and the next-hop address.

Label mapping message

The label mapping message is used to advertise FEC-to-label bindings between connected peers. A label mapping message consists of IP address prefixes and their associated labels.

Label request message

With the label request message, an LSR requests a label binding for a particular FEC from its peer. An upstream peer sends a label request to its downstream peer when it recognizes a new FEC through the forwarding table.

Label abort request message

The label abort request message is used to abort an outstanding label request, for example when OSPF or BGP prefix advertisements change while the label request operation is pending.

2.8.2 Resource Reservation Protocol (RSVP)

RSVP is another protocol suitable for label distribution and end-to-end QoS. It was developed within the IETF's Integrated Services (int-serv) framework, which allows applications to request QoS parameters. An end-to-end guarantee means that the application is assured the minimum required bandwidth and/or a maximum amount of delay on the connection.

Figure 2.8: RSVP in Host and Router. (The figure shows the RSVP process in both a host and a router/switch, interacting with the application, the routing process, policy control, admission control, the packet classifier and the packet scheduler along the data path.)


RSVP is a signaling protocol: it signals QoS requirements to the network. RSVP is suitable as an extension of the MPLS network because of its end-to-end resource reservation. In short, RSVP is a network control protocol that provides resource reservation and QoS for data flows along their routes, working together with the IP protocol. Instead of treating each packet individually, RSVP manages flows of data. QoS requirements are transmitted through a flow specification. A sequence of datagrams that belongs to the same source and represents a distinct QoS requirement is called a data flow. A data flow consists of sessions that are identified by their protocol ID, destination address and destination port. Both multicast and unicast sessions are supported by RSVP.

To establish a session, a Path message is sent by RSVP toward the destination IP address, following the routing information available at each node. Once the Path message has been received, the receiver sends an appropriate Reservation-Request message upstream in response. The resource reservation request is sent in order to reserve the data path and the appropriate resources, and it is carried by RSVP as it passes through all the nodes. When the reservation-request message is received by the sender application, the sender knows that all the required resources have been reserved and starts sending data packets. QoS parameters are implemented for a particular data flow with the help of traffic control. To achieve QoS, this traffic control scheme includes admission control, a packet classifier and a packet scheduler. Each of them has its own role in guaranteeing QoS: the packet scheduler is responsible for allocating the resources required for the transmission of the data flow on the data link layer of each interface, while admission control decides locally at each node whether the requested QoS can be supported. An upstream reservation request message is processed by RSVP at each node for admission control. Once the admission control check has passed, the packet classifier and packet scheduler are configured by RSVP to deliver the desired QoS; if the admission control request fails, an error message is sent instead.
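The Path/Resv exchange and per-node admission control described above can be sketched as a toy simulation (node names, capacities and requested bandwidths are hypothetical):

```python
# Toy Path/Resv walk: the Path message records the hop-by-hop route, the
# Resv message travels back along it, and each node's admission control
# checks whether the requested bandwidth can still be reserved.

capacity = {"R1": 100, "R2": 40, "R3": 100}   # free bandwidth per node (Mb/s)

def send_path(route):
    return list(route)                  # path state: the recorded route

def send_resv(path_state, bandwidth):
    for node in reversed(path_state):   # the Resv message travels upstream
        if capacity[node] < bandwidth:
            return f"ResvErr at {node}"   # admission control failed
        capacity[node] -= bandwidth       # classifier/scheduler configured
    return "reserved"

state = send_path(["R1", "R2", "R3"])
first = send_resv(state, 30)    # succeeds: every node has at least 30 free
second = send_resv(state, 30)   # fails at R2, which has only 10 left
print(first, second)
```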

2.8.2.1 RSVP Messages

RSVP uses the following messages for the establishment of data flows, the removal of reservation information, and the reporting of confirmations or errors.

The following are the types of RSVP messages

Path message

Reservation-request message

Path tear message

Resv tear message

Error messages:

Path error message

Resv error message

Resv confirm message.

Path messages


The Path message is transmitted periodically by the sender host to refresh path state, travelling downstream along the (unicast or multicast) routes provided by the routing protocol. The Path message follows exactly the same path as the application data, maintaining path state in each router along the way and letting each router know its previous-hop and next-hop nodes in that session. This knowledge of the path from source to destination is then used for sending the reservation request message upstream.

Resv message

The reservation message is used to reserve resources. A Reservation-Request message is sent by the receiver in response to a received Path message, and it follows exactly the reverse of the path along which the Path message came. The request reserves the data path and the appropriate resources and is carried by RSVP as it passes through all the nodes. When the reservation-request message is received by the sender application, the sender knows that all the required resources have been reserved and starts sending data packets.

Path tear message

The path tear message is used by the sender application to remove, in any router along the path, the path state that was installed after the Path message was sent. The path tear message is not strictly required, yet it is used to release network resources quickly and thereby enhance network performance.

Resv tear message

Just like the path tear message, the resv tear message removes the reservations that were made when a reservation request message was sent. It is the counterpart of the reservation request message and is initiated by the receiver to de-allocate the reserved resources.

Error Messages:

Path error message:

A path error message is sent back to the sender of a Path message by a node that detects an error in it, usually because of a parameter problem in the Path message.

Resv error message:

A reservation error message is sent when there is a problem in reserving the requested resources. This message is sent to all of the receivers involved.

Resv confirm message

When the receiver has sent the reservation request message to the sender and the resources have been reserved along the way, the receiver can request a confirmation message for that reservation.

2.8.2.2 RSVP Message Format

RSVP messages begin with a common header built of 32-bit words. The RSVP header format is shown below.

Table 4: RSVP Message Format

Version (4 bits) | Flags (4) | Message Type (8) | Checksum (16)
Length (16) | Reserved (8) | Send TTL (8)
Message ID (32)
Reserved | MF (1) | Fragment Offset (24)

Version: 4-bit field that represents the protocol version number.

Flags: 4-bit field (no flag bits are defined yet).

Message Type: 8-bit field identifying the message type (see Table 5).

Checksum: 16-bit field carrying a standard TCP/IP checksum for the RSVP message.

Length: 16-bit field representing the RSVP packet size in bytes.

Send TTL: 8-bit field representing the time-to-live value of the message.

Message ID: 32-bit field providing a label that is shared by all fragments of one message from a given RSVP hop.

More Fragments (MF): 1-bit flag used for message fragmentation.

Fragment Offset: 24-bit field representing the byte offset of the fragment within the message.

RSVP message field values are shown in the following table.

Table 5: RSVP Message Field Attributes

Value Message Type

1 Path

2 Reservation request

3 Path error

4 Reservation request error

5 Path teardown

6 Reservation teardown

7 Reservation request ACK
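Parsing the fixed part of this header, using the message-type values of Table 5, can be sketched as follows; the exact byte packing used here is a simplified assumption for illustration:

```python
import struct

# Sketch of the fixed RSVP common-header fields: version/flags (1 octet),
# message type (1), checksum (2), length (2), reserved (1), send TTL (1).
# The message-type names come from Table 5.

MESSAGE_TYPES = {1: "Path", 2: "Reservation request", 3: "Path error",
                 4: "Reservation request error", 5: "Path teardown",
                 6: "Reservation teardown", 7: "Reservation request ACK"}

def parse_rsvp_header(data):
    vf, msg_type, checksum, length, _reserved, send_ttl = \
        struct.unpack("!BBHHBB", data[:8])
    return {"version": vf >> 4, "flags": vf & 0x0F,
            "type": MESSAGE_TYPES.get(msg_type, "unknown"),
            "length": length, "send_ttl": send_ttl}

hdr = struct.pack("!BBHHBB", (1 << 4), 2, 0, 8, 0, 64)   # a Resv header
print(parse_rsvp_header(hdr))
```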

2.8.2.3 RSVP Object Field

RSVP object fields are shown in the following table. Objects carry the necessary information that makes up the contents of RSVP messages. A combination of different objects can be used by RSVP to signal LSPs. Each object has a fixed-length header with a variable-length data field. The maximum object content length is 65,528 bytes.

Table 6: RSVP Object Field

Length (16 bits) | Class-Num (8) | C-Type (8) | Object data (variable length)

Length: 16-bit field that specifies the total length of the object, which must be a multiple of 4.

Class-Num: 8-bit field that identifies the class of the object, e.g. session.

C-Type: 8-bit field that represents the object type, unique within each Class-Num. It can accommodate different Internet address families such as IPv4 and IPv6.

Object data: contains the data identified by the Class-Num and C-Type fields, since together they define a unique object.
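The object header of Table 6, including the rule that the length must be a multiple of 4, can be sketched as follows (the Class-Num and C-Type values used below are illustrative):

```python
import struct

# Sketch of the RSVP object header: Length (16 bits, total object length,
# a multiple of 4), Class-Num (8), C-Type (8), then the object data.

def parse_object(data):
    length, class_num, c_type = struct.unpack("!HBB", data[:4])
    if length % 4 != 0 or length < 4:
        raise ValueError("object length must be a non-zero multiple of 4")
    return {"class_num": class_num, "c_type": c_type,
            "data": data[4:length]}

obj = struct.pack("!HBB", 8, 1, 1) + b"\x0a\x00\x00\x01"   # an 8-byte object
print(parse_object(obj))
```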

2.8.3 LSP Tunnel

LSP tunnels help in steering packets through the network. An LSP tunnel is created when an ingress router requests a label binding from a downstream router in an RSVP Path message; a LABEL_REQUEST object is then contained in the Path message. The Path message with the LABEL_REQUEST object is sent by RSVP toward the destination IP address, following the routing information available at each node. Along its way, the LABEL_REQUEST object asks each intermediate LSR to provide label binding information for that session. Once the Path message has been received, the receiver sends an appropriate Reservation-Request message upstream in response, reserving the data path and the appropriate resources; the request is carried by RSVP as it passes through all the nodes. When the reservation-request message is received by the sender application, the sender knows that all the required resources have been reserved and starts sending data packets.

Figure 2.9: Resource Reservation in RSVP [13]

QoS parameters are implemented for a particular data flow with the help of traffic control. If a node fails to provide a label binding, a path error message with an "unknown object class" code is sent back from that node to the sender of the Path message.


2.8.4 Constraint Based Routing (CBR)

Link-state routing protocols such as OSPF and IS-IS are not well suited to distributing label binding information, because they flood their information to all routers running the link-state process. Distance-vector protocols such as RIP, RIPv2 and IGRP, on the other hand, are suitable for transmitting label information between routers that are not directly connected, but extensive modification is required for them to carry label binding information. For traffic engineering in MPLS it is necessary to distribute constraint-based routing information in order to find the most suitable path through the network, avoid load and congestion, and obtain optimal delivery.

In constraint-based routing (CBR), parameters such as bandwidth, delay, QoS and hop count are also taken into consideration [7]. With the help of CBR, the traffic engineering demands of an MPLS network (such as QoS guarantees) are met. Explicit routing is an integral part of CBR: a route from source to destination is computed, based on these metrics, before the LSP is set up.

To set up explicitly routed LSPs, a derivative of LDP, Constraint-based Routing LDP (CR-LDP), is used by network managers in the management of sensitive traffic. Once the path is determined, signaling protocols such as CR-LDP or RSVP are used to set up Explicitly Routed Label Switched Paths (ER-LSPs) from the ingress to the egress node. When the underutilized links in the network are used by the ER-LSPs, proper utilization of the network takes place.

Figure 2.10: TE in MPLS Network using Explicit Routing [14]

The two protocols that can be used to establish ER-LSPs are Constraint-based Routing over LDP (CR-LDP) and RSVP with the modifications made to handle MPLS TE requirements (RSVP-TE). Through the traffic management of ER-LSPs, congestion in the network can be controlled and bandwidth-guaranteed LSPs can be achieved.
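The idea of constraint-based path computation, that is, pruning links that cannot satisfy the bandwidth constraint and then running a shortest-path search over what remains, can be sketched as follows (the topology, link costs and bandwidths are hypothetical):

```python
import heapq

# Constraint-based routing in miniature: keep only links with enough free
# bandwidth, then run Dijkstra over the pruned graph. The returned hop
# list is the explicit route that would be signaled for the ER-LSP.

def cspf(links, src, dst, bandwidth):
    # links: {(u, v): (cost, free_bw)}; links are treated as bidirectional
    adj = {}
    for (u, v), (cost, bw) in links.items():
        if bw >= bandwidth:              # constraint: enough free bandwidth
            adj.setdefault(u, []).append((v, cost))
            adj.setdefault(v, []).append((u, cost))
    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path                  # explicit route for the ER-LSP
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in adj.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None                          # no path satisfies the constraint

links = {("A", "B"): (1, 20), ("B", "D"): (1, 20),   # short but congested
         ("A", "C"): (2, 80), ("C", "D"): (2, 80)}   # longer, underutilized
print(cspf(links, "A", "D", 50))   # ['A', 'C', 'D']
```

With a 50 Mb/s request the congested A-B-D path is pruned and the underutilized A-C-D path is chosen, which is exactly the traffic-steering effect that explicit routing is meant to achieve.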


CHAPTER 3

Literature Review

3.1 Introduction

MPLS is a packet-forwarding technology that uses labels to make data forwarding decisions. When a fault occurs in an active LSP, the recovery scheme must redirect the affected traffic to a recovery path that bypasses the fault. The two basic recovery mechanisms defined by the IETF to redirect affected traffic are rerouting and protection switching. The main focus of this chapter is a survey of the various fault recovery schemes proposed for MPLS networks based on rerouting and protection switching. The chapter elaborates different aspects of these schemes, such as their limitations, their efficiency and which recovery mechanism (protection switching or rerouting) they are based on. The reviewed recovery mechanisms are classified according to a set of characteristics considered relevant. I also suggest improvements (where possible) for the recovery schemes.

3.2 Protection switching mechanism

The protection switching mechanism pre-establishes a recovery path (RP), before fault detection, for each active path (AP). When an AP fails, the affected traffic is switched to the pre-established RP. Recovery can be either local or global, and either resource-oriented or path-oriented.
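The basic protection switching idea, a pre-established recovery path to which traffic is switched on failure without any path recomputation, can be sketched as follows (the paths and the failed link are hypothetical):

```python
# Minimal sketch of protection switching: for each active path (AP) a
# recovery path (RP) is pre-established, and on failure the traffic is
# simply switched to it; no new computation happens at failure time.

lsps = {
    "lsp1": {"active": ["I", "A", "B", "E"],     # working path
             "recovery": ["I", "C", "D", "E"],   # pre-established RP
             "in_use": "active"},
}

def link_failed(lsp, failed_link):
    path = lsps[lsp][lsps[lsp]["in_use"]]
    hops = list(zip(path, path[1:]))              # links traversed by the path
    if failed_link in hops:                       # the AP is affected
        lsps[lsp]["in_use"] = "recovery"          # switch over immediately

link_failed("lsp1", ("A", "B"))
print(lsps["lsp1"]["in_use"])   # recovery
```

This is what makes protection switching fast compared with rerouting: the switchover is a table update, because the RP was computed and signaled before the fault.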

3.2.1 Local Recovery schemes

Local protection is performed in a distributed manner and its main aim is to protect against any contiguous link or node failure. In order to protect the entire active MPLS path, local protection mechanisms require significant recovery path computation and management. When a fault occurs, the traffic on the working path is switched over to the recovery path in the MPLS-based local protection mechanism. According to [15], when local recovery is used, the nearest upstream label switch router is responsible for recovery path selection and switching.

If a fault occurs and the immediate upstream node is unable to recover, either because it is not a point of repair (POR) or because it is a POR but its attempt was unsuccessful, it sends a fault indication signal (FIS) to its next upstream node, where a new local recovery attempt takes place. We still regard this as a local recovery mechanism, even if the node that successfully recovers the AP coincides with the head-end node.

3.2.1.1 Resource Oriented schemes

A recovery mechanism that focuses on protecting a node and/or link is said to be resource-oriented. Local recovery with protection switching and resource-oriented (link and/or LSR) recovery is provided by various schemes, including [16, 17, 18, 19].

P. Pan et al. [16] give two Fast Reroute (FRR) schemes that define RSVP-TE extensions to establish backup label switched path (LSP) tunnels for the local repair of LSP tunnels. If a failure occurs, these schemes enable the redirection of traffic onto backup LSP tunnels within tens of milliseconds. The one-to-one method creates a detour LSP for each protected LSP at each potential point of local repair. The facility method creates a bypass tunnel to protect a potential failure point. Facility protection is only relevant to protected LSPs with guaranteed bandwidth, as not all paths that use a resource must be protected in the same way.

Mellah and Mohamed [17] proposed a scheme (protecting only against link faults) which also uses a bypass tunnel; no RSVP-TE extension is required, and the proposed scheme protects all the LSPs that use the protected links, but it is restricted to the per-platform label space.

The approach of Hundessa and Pascual [18] uses tagging and buffering techniques to overcome the packet disorder problem. The tagging technique lets each path switch LSR (PSL) on the failed LSP know the last packet it received before the failure. The buffering technique makes each PSL actively store the incoming packets after the failure. With the help of these two techniques, the in-transit packets and new incoming packets can be carried by the disjoint and backward backup paths in order.

Another scheme, proposed by Kang and Reed [20] and dedicated to protection, uses bypass tunnels with bandwidth guarantees in MPLS networks. It uses p-cycles (pre-configured cycles). Before any fault occurs, p-cycles [10] pre-configure the spare network capacity in cycles, which allows the recovery not only of on-cycle faults but also of links connecting any two non-adjacent nodes on the p-cycle (straddling links); this contributes significantly to p-cycle efficiency.

3.2.1.2 Path Oriented schemes

When a recovery mechanism tries to recover a particular active path (AP), it is said to be path-oriented [15].

The approach of Ho and Mouftah [21] pre-establishes several backup paths for each working LSP. In this approach, a working LSP is first subdivided into several protected segments. Each protected segment forms a protection domain, which has a PSL and a PML (path merge LSR). In a protection domain, each backup path is pre-established and disjoint from its protected segment. On detecting a failure in a protected segment, the corresponding PSL switches the affected traffic to the corresponding pre-established backup path. After the affected traffic has been routed around the faulty segment of the failed LSP, the PML redirects it back to the failure-free segment of the failed LSP. The fault-tolerance idea of Ho and Mouftah [21] is not novel, being fully based on local recovery of the affected traffic; the contribution of this approach lies in the capacity allocation for the backup paths.

In the approach of Haskin and Krishnan [22], a backward backup path is pre-established for each active LSP, so there are two backup paths for an active LSP. The route of the backward backup path is the reverse of the route of the corresponding primary LSP. When a failure is detected in an active LSP, new incoming packets are carried by the disjoint backup path. As for the in-transit packets, they are sent back to the ingress LSR using the backward backup path. When the ingress LSR receives the in-transit packets, it further redirects them to the disjoint backup path. This approach introduces the packet disorder problem, in that new incoming packets can reach the disjoint backup path earlier than the in-transit packets.

Two methods are proposed in [16]. The one-to-one method creates a detour path (protection path) for each protected LSP at each potential point of local repair, while the facility method creates a bypass tunnel to protect a potential failure point; by taking advantage of MPLS label stacking, the bypass tunnel can protect a set of LSPs that have similar backup constraints. The one-to-one method requires a huge number of recovery paths. In order to reduce the number of recovery paths, path merging is implemented wherever possible, while facility backup is available only for protected LSPs with guaranteed bandwidth.

In the recovery scheme of Kodialam and Lakshman [23], both the primary and the backup path for a new LSP request are determined on-line and simultaneously. If the bandwidth requested for the primary and backup paths cannot be satisfied, the request is rejected. The proposed scheme needs prior knowledge of the bandwidth used by the primary and backup paths on each link, as well as of the free bandwidth on each link; this is called the Partial Information (PI) model (according to the authors). The proposed scheme deals with single link failures, but can be extended to manage single node failures as well.

Xu et al. [24] proposed a refined ILP model and a dynamic programming heuristic for a novel shared segment protection approach called Protection Using Multiple Segments (PROMISE) that combines the best of the link and path protection schemes. PROMISE allows overlapping active segments (ASs) and backup segments (BSs) and exploits bandwidth sharing not only among the BSs of different active paths, but also among those of the same active path. The first major contribution of Xu et al. [24] is a link-labeling scheme which enables an ILP model that determines the optimal partition of a given active path into ASs and finds the corresponding BSs; the ILP model can also be used to obtain an optimal solution for a medium-to-large network. The second contribution is a fast dynamic-programming-based heuristic for PROMISE that has polynomial time complexity and obtains a near-optimal set of ASs and their BSs for a given active path.

3.2.2 Global recovery schemes

Global protection is executed in a centralized manner and its main aim is to protect against any node or link failure on the entire path or on a segment of the path. In the case of global protection, the path switch LSR (PSL) switches the traffic from the failed working path to the recovery path. However, the PSL is generally not next to the point of failure, so in order to switch the affected traffic to the recovery path, a fault notification must be propagated to the PSL. When the failure on the working path is repaired, the traffic may be switched back to the working path [15]. Recovery schemes based on global recovery are described below.

In [23, 25] three information scenarios are described. The first is the no-information scenario, in which only the reserved and residual (free) bandwidth is known for each link. The second, the complete-information scenario, permits the best sharing but needs a large amount of information, which prevents it from being used in practical situations. The third, the partial-information scenario, is fairly modest in terms of the amount of information to be maintained. These algorithms only consider single link failures, but can easily be extended to handle single node failures as well.

Both algorithms in [23, 25] explain how to share the backup path bandwidth. In the no-information case, sharing of bandwidth is not possible; in the complete-information case, inter-demand sharing is possible; and in the partial-information case some inter-demand sharing, though not full sharing, is allowed. The active and backup paths are selected using linear programming models and are computed at the ingress node in both the no-information and partial-information cases; the actual amount of protection bandwidth can be reserved on each link provided that the backup path is signaled carrying the complete route of the corresponding protected active path.


In [26] another PI model was proposed for global path protection, in which more information is used than in the approach of [25]; because of that, less additional bandwidth has to be provided on the links belonging to the RP. The authors claim that this scheme is better than [25].

Yetginer and Karasan [27] consider the off-line computation of disjoint active and recovery paths. They first compute a maximum number of paths for each demand, and propose four different approaches for selecting active and recovery paths; the first two methods handle active and recovery path design separately. A traffic uncertainty model is developed in order to evaluate the performance of these four approaches with respect to their robustness against changing traffic patterns. They show that, by carefully distributing the traffic load over the network resources, the joint design approaches are better at carrying the additional traffic resulting from traffic uncertainty. This scheme appears to be hardly implementable for non-trivially sized networks (in any of the four approaches) because it requires solving complex integer linear programming problems. Even though its off-line character may lessen this constraint, for medium or larger networks this approach seems unwise.

Wei et al. [28] presented a novel approach to speed up the restoration of LSPs in MPLS networks. The proposed approach establishes bypass tunnels rather than backup paths, because a bypass tunnel is established to back up all protected LSPs, not one particular protected LSP. To determine which links are necessary between LSRi and LSRj, the Max-Flow Min-Cut theorem is adopted to find the links through which all paths between LSRi and LSRj must pass. The shortest augmenting path algorithm is then adopted to establish the backup path; its main purpose is to find all paths passing through all links in a min-cut. [28] also compares the pre-established bypass tunnel (PBT) algorithm and the PBT algorithm with disjoint bypass tunnels (PBT-D). The approach incurs less rerouting overhead and can allow more affected LSPs to reroute traffic than RSVP and efficient pre-qualify schemes.

Huang et al. [29] presented a scheme for minimizing the delay of notification messages when a fault is detected. This scheme proposes a reverse notification tree (RNT) structure for efficient and fast distribution of fault notification messages. A faster Hello protocol for fault detection and a lightweight transport protocol for handling notification messages are also proposed. However, the approach of Huang et al. [29] suffers from packet loss, since it does not reroute the packets currently carried in the failed LSP (the in-transit packets). [30] proposes several additional objects for the Resource Reservation Protocol (RSVP) extension, allowing the establishment of explicitly routed label switched paths using RSVP as the signaling protocol and enabling timely node failure detection. The authors also show that the RNT can be implemented in any layer [31].

3.2.2.1 Load distribution schemes

Xiaoming et al. [32] propose an adaptive load balancing scheme based on real-time measurement, which is able to maintain per-flow integrity while minimizing congestion. This approach focuses on load balancing among multiple parallel Label Switched Paths (LSPs) in Multi-Protocol Label Switching (MPLS) networks. The goal of the ingress node is to distribute the traffic across the LSPs so that the loads are balanced and congestion is thus minimized. The traffic to be balanced by the ingress node is the aggregated flows that share the same destination. The introduction of a connection admission control (CAC) function guarantees the QoS of flows already admitted into LSPs.


Dana et al. [33] propose a scheme in which pre-assigned LSPs are used for recovery when a fault occurs. The affected traffic is distributed over the pre-assigned LSPs using case-based reasoning (CBR), a method that determines the amount of traffic forwarded on each pre-assigned LSP based on past experience. The pre-assigned LSPs and the traffic-splitting percentages are calculated online based on the desired QoS and capacity constraints. Finding a solution with the CBR approach is slow when the case library is large.

Kim [34] compares protection mechanisms based on load distribution: the Fully Shared Mechanism (FSM), where all paths may be used simultaneously as active paths that double as recovery paths when needed, and the Partially Shared Mechanism (PSM), where some paths are used as active paths and others are set aside as recovery paths. The comparison of FSM and PSM is based on numerical equations representing the relationships between reliability and utilization. This study assumes homogeneous network traffic and reserved bandwidth.

3.3 Rerouting mechanism

In the rerouting mechanism, the recovery path is dynamically computed after fault detection on the protected AP.

3.3.1 Recovery schemes based on establish-on-demand

In the approach of Ahn et al. [35], each LSR has one or more corresponding candidate protection merging LSRs (candidate PMLs). The candidate PMLs of an LSR are the LSRs located downstream of it. While an LSP is established, each LSR on the LSP also gathers information about its candidate PMLs. Upon detecting a failure in an LSP, the LSR that detects the failure first calculates the costs of all possible recovery paths from itself to each of its candidate PMLs. The LSR then selects the recovery path with the least cost, and the constraint-based label distribution protocol (CR-LDP) is used to explicitly establish the route of the best recovery path. However, [35] incurs a nontrivial recovery time for finding the least-cost recovery path.

In the approach of Agarwal and Deshmukh [36], each ingress LSR assigns one of the other ingress LSRs as its backup. When an ingress LSR fails, it can no longer perform label switching to forward the packets of the source host. In that case, the source host sends the affected packets to the assigned backup ingress LSR, which forwards them along its own corresponding LSP using label stacking. When the packets arrive at the egress LSR of the corresponding LSP, the egress LSR strips off the stacked labels and forwards the packets to the destination host using normal IP routing. However, the approach is not applicable to intermediate or egress LSR failures.

Hong et al. [37] propose an LSP rerouting scheme that dynamically adjusts the restoration scope (RS) in an MPLS network, depending on the fault location, the congested location, and the overall network status. In this scheme, each iteration searches for an optimal alternative path from the PSL to the working path's egress LSR. To minimize the recovery time, the scheme begins by looking for such a path on a subset of the LSRs, where the tentative PSL is the LSR closest to the failure and the PML is always the egress LSR. This subset (delimiting the range of recovery) is successively enlarged whenever a recovery path is not found. Every time the subset is extended, in a process called hierarchical restoration, a new PSL is used (the next upstream LSR in the working path). This procedure, if unsuccessful, is repeated until the PSL is the ingress node.


These schemes are very efficient in resource utilization, which is partially explained by the rerouting model. In [37], the number of iterations required determines the speed of recovery (as the total number of operations increases, each successive iteration is harder to solve). However, as the authors of [37] pointed out, using too small a recovery range may leave available bandwidth unexploited.

3.3.2 Pre-qualified recovery approaches

Pre-qualified recovery establishes an optimized new recovery path without path-selection time, i.e., before the occurrence of a fault, namely at protection LSP setup time. There is no wasted resource overhead for the recovery path in normal operation, since resources are not reserved for the recovery path beforehand [38].

Park et al. [39] proposed a pre-qualified mechanism for MPLS networks, named dynamic path protection, to quickly recover from node or link failures. When a fault occurs in an LSP, a recovery path is selected among the existing active paths that start at or transit the detecting LSR and have the same destination (if there is more than one path to choose from, the paper proposes ranking criteria). If a fault later occurs in the previously selected recovery path, the same procedure is followed again. This scheme is fast, as it avoids signaling a new path, and needs only simple routing table changes. If no such path can be found, the LSR must create a new path (from itself to the original destination). The authors claim that this scheme does not require extensions to the signaling protocol [30].

Yoon et al. [38] proposed a pre-qualified recovery mechanism in which, when an LSR detects a failure between itself and its next LSR, the corresponding pre-qualified recovery path is actually established and bandwidth is reserved for it. Since the route of the recovery path is pre-determined, the approach of Yoon et al. [38] takes no action to find the recovery route after the failure. However, during normal operation, this approach incurs route-calculation overhead.

Jenn-Wei et al. [40] presented an efficient approach for enhancing fault-tolerance performance in MPLS networks. The proposed approach utilizes failure-free working LSPs (active LSPs not suffering from failures) to carry the traffic of the failed LSP (the affected traffic). To transmit the affected traffic along a failure-free working LSP, an IP tunneling technique is used to encapsulate each packet of the affected traffic with the forwarding equivalence class (FEC) of that LSP. With IP tunneling, no additional label assignment is required. To minimize the influence of the affected traffic on failure-free working LSPs, the proposed approach applies a minimum-cost-flow solution to determine the amount of affected traffic to be transmitted by each failure-free LSP. They also propose a permission token scheme and a piggyback method to solve the packet disorder and loss problems.


CHAPTER 4

Problem Discussion and Proposed Solution

4.1 Problem Statement

When a fault occurs in an LSP, due to the failure of a link or node in the network, the traffic carried on the failed LSP must be transmitted through a backup LSP. The backup LSP is selected based on the following criteria:

Reducing the request blocking probability

Minimizing cost of network

Load balancing

4.1.1 Reducing request blocking probability

The major task of traffic engineering is to reduce the request blocking probability, ensuring that the maximum number of requests is accepted in the network, in order to improve operator revenues and increase client satisfaction. The Minimum Interference Routing Algorithm (MIRA) [41] is one of the best constraint-based routing algorithms for reducing the request blocking probability. The basic concept of MIRA rests on the relationship between the maximum-flow value [42] between two nodes and the bandwidth that can be routed between them. In MIRA, critical links are the links whose use causes a decrease in the maximum-flow values between pairs of nodes; weights are therefore allocated to links according to their criticality. Finally, a shortest-path-like algorithm is used to evaluate the path with the fewest critical links. However, MIRA suffers from high computational complexity, as the algorithm frequently computes maximum flows.
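The interference idea can be illustrated with a small stdlib-only sketch: compute the max-flow value for each ingress-egress pair, then count how many pairs lose max-flow capacity when a given link is removed. This is a deliberately simplified notion of criticality (MIRA's actual definition and link weighting in [41] are more refined), and the graph used below is invented for illustration:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest residual paths.
    cap is a dict {(u, v): capacity} of directed links."""
    res = dict(cap)                       # residual capacities
    adj = {}
    for (u, v) in cap:
        adj.setdefault(u, set()).add(v)   # forward residual arc
        adj.setdefault(v, set()).add(u)   # backward residual arc
        res.setdefault((v, u), 0)
    flow = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in adj.get(u, ()):
                if v not in parent and res[(u, v)] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                   # no augmenting path left
        # find the bottleneck along the path, then push flow on it
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, res[(u, v)])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            res[(u, v)] -= bottleneck
            res[(v, u)] += bottleneck
            v = u
        flow += bottleneck

def criticality(cap, pairs):
    """Simplified link criticality: the number of ingress-egress pairs
    whose max-flow value drops when the link is removed."""
    base = {p: max_flow(cap, *p) for p in pairs}
    crit = {}
    for e in cap:
        reduced = dict(cap)
        reduced[e] = 0                    # remove the link
        crit[e] = sum(1 for p in pairs if max_flow(reduced, *p) < base[p])
    return crit
```

Links with nonzero criticality would then receive higher weights, steering the shortest-path computation away from them.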

4.1.2 Minimizing costs of network

To accomplish a minimum network cost, metrics like minimum hop count or link cost have conventionally been included in routing algorithms. Many algorithms have been proposed to minimize network cost, for example the minimum hop algorithm [43], and many others improve on it. Minimum hop algorithms are simple and computationally efficient, but in heavily loaded networks they give worse results [44] in terms of request refusal ratio. Link costs correspond to physical link lengths, so they are used in algorithms mainly for traffic engineering and have no strong influence on networking architectures.

4.1.3 Load balancing

Load balancing plays an important role in decreasing network congestion. The basic concept is to distribute the load in a way that improves the overall performance of the network. In a lightly loaded network, however, load balancing performs poorly, for example by routing packets over longer paths.

4.2 Proposed Solution

In this section, we present a proposed solution for fault tolerance in MPLS networks, based on the combination of the above three criteria: load balancing, MIRA, and minimizing network cost. Our main focus is to present the significance of considering these criteria jointly. Therefore, the main challenge is to define the optimal weights (W1, W2, W3) for each element of the cost function given by (Eq. 1).

Initially, the weight associated with MinHop should be increased, to exploit its good performance under light load. Equation 2 therefore makes weight W1 inversely proportional to the total network load: W1 is predominant in a lightly loaded network and decreases as the total network load approaches the total network capacity. Next, the Minimum Interference Routing Algorithm (MIRA) comes into play when link criticality is changing (links are getting rapidly loaded), so in equation 2 weight W2 is directly proportional to the network load. With these W1, W2 values, MIRA becomes prevalent compared to MinHop once the network load passes twenty-five percent of the total capacity (due to the multiplicative constant 16). Finally, equation 3 introduces a threshold for the load metric element that controls the load-balancing influence in the overall cost function by limiting its undesirable effects under light load. Moreover, constants a, b and c are used to scale the numeric values to a comparable range. In the simulations, the performance of the proposed method is good overall: the request blocking probability is comparable with MIRA's results, and the load standard deviation values are comparable with those of load balancing under light load. Under high load, the proposed scheme achieves performance bounded by load balancing (upper standard deviation) and MIRA (lower standard deviation), due to the equally combined effect of these algorithms. Finally, we see the influence of the MinHop element under light load: the integrated solution performs well compared to MinHop. These results justify our weighting approach; even with a set of intuitive weights, we show the relevance of the three objectives and the benefit of their combination.

Cost(e) = W1 + W2 × criticality(e) + W3 × load(e) …………………………(1)

W1 = a × (Cnet / Lnet);  W2 = 16 × b × (Lnet / Cnet);  W3 = c …………………………(2)

load(e) = 0, if load(e) < threshold = cap(e)/3
load(e) / (cap(e) − load(e)), otherwise …………………………(3)

where Lnet denotes the total network load and Cnet the total network capacity.
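Assuming the weight forms W1 = a·Cnet/Lnet and W2 = 16·b·Lnet/Cnet (chosen so that, with a = b, the two weights cross exactly at 25% of total capacity, consistent with the text), the cost function of Eqs. 1-3 can be sketched as follows; the congestion term load(e)/(cap(e) − load(e)) is likewise a reconstructed assumption, not taken verbatim from the thesis:

```python
def load_metric(load_e, cap_e):
    """Eq. 3 (reconstructed form): no penalty below one third of the
    link capacity, a congestion-sensitive penalty above it."""
    if load_e < cap_e / 3:             # threshold = cap(e)/3
        return 0.0
    if load_e >= cap_e:                # saturated link: prohibitive cost
        return float("inf")
    return load_e / (cap_e - load_e)   # assumed congestion form

def link_cost(crit_e, load_e, cap_e, net_load, net_cap, a=1.0, b=1.0, c=1.0):
    """Eq. 1-2 (reconstructed): the MinHop-like term dominates under
    light load, the MIRA term past 25% of total capacity (constant 16).
    net_load must be positive."""
    w1 = a * net_cap / net_load         # inversely proportional to load
    w2 = 16.0 * b * net_load / net_cap  # directly proportional to load
    w3 = c
    return w1 + w2 * crit_e + w3 * load_metric(load_e, cap_e)
```

At 25% network load with a = b = 1, w1 and w2 both equal 4, after which the criticality term takes over.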


CHAPTER 5

Simulation and Analysis

5.1 Simulation Environment

In this thesis, before deciding which simulation environment to use, I examined several available network simulators, such as J-Sim, NS2 and AMPL. After studying them, I decided to go with Tarvos and AMPL. In this section I describe why I chose AMPL as the simulation environment for this thesis.

5.1.1 J-Sim

J-Sim [45] (formerly known as JavaSim) is an open-source, component-based network simulator written in Java, developed by Hung-ying Tyan and others at the Ohio State University. It provides MPLS support through a third-party extension, but it does not include the RSVP-TE signaling protocol. The documentation, available from the simulator's website, includes good descriptions of the native code implementation, the philosophy behind the simulator, and some tutorials and guides for new implementations.

Installation of the simulator, as seems usual with Java applications, requires setting environment variables and compiling the source code with third-party tools more common on Linux/Unix platforms, then applying patches needed by extensions such as MPLS. J-Sim is a dual-language environment in which the user manipulates classes written in Java using Tcl scripts, much like NS-2. This poses the same problems as NS-2, i.e., the need to know both Java and Tcl in order to use the simulator and implement missing features.

5.1.2 NS2

NS-2 [46] is a discrete event simulator targeted at network research. It is open source, developed mainly by the VINT project, Xerox PARC, UCB and USC/ISI, with contributions from several other researchers and users.

NS-2 is coded in C++ in a modular fashion. The user interfaces with the simulator using the object-oriented scripting language OTcl. It was conceived to run natively under Unix systems (including Linux), although it is possible to install it under Microsoft Windows.

MPLS and RSVP-TE are not available as standard libraries in NS-2; they were implemented through contributions from other researchers. The MNS (MPLS Network Simulator) module was developed by Gaeil Ahn [47][48], its original location no longer being available on the internet. This module contains MPLS and CR-LDP, but not RSVP-TE. The MNS module was further extended by [49] and [50][51] to include RSVP-TE functionality. These modules cannot be obtained directly from their authors' websites, but only by email request or from users who already own them.

The NS-2 learning curve is significantly steep. One has to know the OTcl scripting language and learn how to build scripts that interface with the simulation objects coded in C++. The available documentation is not written in a didactic style, making it difficult for a beginner to build initial simulations without investing a considerable amount of time in trial and error. The documentation is especially poor for the MPLS and RSVP-TE modules, requiring the user to read the source code to learn how to interface with them and discover the offered capabilities. The code is open, but implementing new functions or modifications demands studying large portions of the source. Generation of results and statistics is not automatic: one has to produce a trace file from the simulation and post-process it, calculating the desired statistics with a processing language such as awk. Simulations can easily produce very large trace files, demanding significant post-processing time.
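The post-processing step described above can equally be done in a few lines of Python instead of awk. This sketch assumes the standard ns-2 wired trace layout, in which the first whitespace-separated field is the event flag ('+' enqueue, '-' dequeue, 'r' receive, 'd' drop); other trace configurations may differ:

```python
def loss_ratio(trace_lines):
    """Count enqueue ('+') and drop ('d') events in an ns-2 trace and
    return the fraction of enqueue events that ended in a drop —
    a rough per-link loss measure."""
    sent = dropped = 0
    for line in trace_lines:
        fields = line.split()
        if not fields:
            continue                 # skip blank lines
        event = fields[0]
        if event == "+":
            sent += 1
        elif event == "d":
            dropped += 1
    return dropped / sent if sent else 0.0
```

Reading the trace file line by line this way avoids loading the whole (possibly very large) file into memory.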

Due to those characteristics, and to the fact that the main module needed for the simulations, MNS, is still not fully supported, NS-2 also motivated the development of the new simulator.

5.1.4 AMPL

AMPL (A Mathematical Programming Language) is a modeling language for mathematical programming. It is a commercial tool, but a student version with fair limitations exists and can be downloaded free from www.ampl.com. AMPL is used to pass the model to a solver, free or commercial, to be optimized, that is, maximized or minimized with respect to a number of constraints. It is important to note that AMPL does not solve the problem by itself; it is simply used to model a system and pass it to a solver. AMPL can define system variables, objectives, and constraints. Constraints may be linear or non-linear, equalities or inequalities.

5.1.4.1 AMPL uses three file types:

.mod files are used to store the system model, containing constraints, objective and variables. Equations are written in AMPL syntax and can contain a large variety of functions and operators. Groups of similar constraints and variables can be formed easily. A model file must start with a model; command.

.dat files define system data like parameter values, initial points, etc. A data file must start with a data; command.

.ampl files are AMPL script files. They are a sort of batch file used to encapsulate different commands and execute them one after another. The command file can be passed to AMPL as an argument; for example, the command ampl myModel.ampl executes the AMPL commands in the file myModel.ampl one by one.
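As an illustration of the file layout described above, a hypothetical minimal .mod file for a per-link cost objective might look as follows (the set, parameter and variable names are invented for the example, not taken from the thesis models):

```ampl
model;                            # a model file starts with this command

set EDGES;                        # the network links
param cost {EDGES} >= 0;          # per-link cost, supplied by the .dat file
var x {EDGES} binary;             # 1 if the link is on the chosen path

minimize TotalCost: sum {e in EDGES} cost[e] * x[e];
```

A matching .dat file would then list the members of EDGES and the cost values, and an .ampl script would load both files and invoke the solver.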

5.2. Simulation

In the following, Minimum hop and Min Length refer respectively to a minimal hop [42] and a minimal length

algorithm minimizing respectively the number of hops and the physical length of the chosen path. Whereas, MIRA

and load balancing refer respectively to the approaches described in previous chapter. These algorithms are evaluated

in a simulation environment. Traffic demands are uniformly distributed between all ingress/egress pairs and the

associated bandwidth request is uniformly distributed between [0, 10] Kbps. We use an integer linear programming

(ILP) approach to calculate the LSP route according to each algorithm. The objective (Eq. 4) is to find the path with

minimal cost where the cost function is giver by (Eq. 7). Note that for MinLength the cost is equal to the link length,

while the MIRA cost is consistent with the definition introduced in [41]. A flow conservation constraint (Eq. 5)

ensures that the algebraic sum of the flows at each node is null except (Eq. 7) for the source and destination nodes of

the LSP. Moreover, a capacity limitation constraint (Eq. 6) ensures that the resulting bandwidth on each link does not

violate the edge capacity.


5.2.1 Simulation Model

Model:

G = (V,E) is a directed graph representing the network with:

V is the set of vertices (MPLS router)

E is the set of edges

Action

Determine the optimal set of binary variables x(e) and y(e) that:

Minimize: Σ e∈E cost(e) × [x(e) + y(e)] …………………………(4)

Subject to: Σ e∈out(v) [x(e) − y(e)] − Σ e∈in(v) [x(e) − y(e)] = ε(v) for v ∈ V …………………………(5)

[x(e) + y(e)] × [load(e) + b] ≤ cap(e) for e ∈ E …………………………(6)

with: ε(v) = +1 if v = O; −1 if v = D; 0 if v ∉ {O, D}

cost(e) = 1 (MinHop); length(e) (MinLength); load(e) / (cap(e) − load(e)) (load balancing); criticality(e) (MIRA) …………………………(7)

where O and D denote the origin and destination of the LSP and b is the requested bandwidth.
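For a single LSP request with positive link costs, the ILP above amounts to a minimum-cost path search restricted to the links that can still carry the requested bandwidth (Eq. 6). A stdlib-only sketch of that behavior, with Dijkstra standing in for the ILP solver and an invented topology in the example, might look like this:

```python
import heapq

def min_cost_path(edges, cost, load, cap, src, dst, bw):
    """Pick the minimum-cost path (Eq. 4) among links that can still
    carry bw without violating capacity (Eq. 6); None if blocked."""
    adj = {}
    for (u, v) in edges:
        if load[(u, v)] + bw <= cap[(u, v)]:   # capacity check, Eq. 6
            adj.setdefault(u, []).append(v)
    # Dijkstra over the admissible links
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                           # stale heap entry
        for v in adj.get(u, ()):
            nd = d + cost[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None                            # request blocked
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

Plugging in the cost definitions of Eq. 7 (hop count, length, load or criticality) reproduces the per-algorithm behavior compared in the simulations.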

Figure 5.1: Network Topology


Figure 5.2: Average number of hop

Figure 5.3: Average length


Figure 5.4: Average Load

Figure 5.5: Blocking Probability


Figure 5.6: Load Standard Deviation

In Figures 5.2-5.6, we can see that the performance of the proposed method is good overall. The blocking probability (Fig. 5.5) of the proposed scheme is comparable with MIRA's results. The load standard deviation (Fig. 5.6) values are comparable with those for load balancing under light load. Under high load, the proposed scheme achieves performance bounded by load balancing (upper standard deviation) and MIRA (lower standard deviation), due to the equally combined effect of these algorithms.


CHAPTER 6

Conclusion and Future Work

In this thesis, we identified relevant objectives for fault tolerance in MPLS networks. We set up clear common criteria for the algorithms, namely: reducing request blocking probability, minimizing network cost, and load balancing. We categorized and evaluated the approaches relevant to this problem. The study shows the drawbacks of partial considerations and the need for a global solution. Finally, we proposed a solution that covers the different criteria presented in the thesis. Our formulation helps clarify the trade-offs involved in CBR, thus enabling the design of more complete solutions. Our approach shows that combining our set of objectives achieves better overall results. The simulations presented in this thesis could be extended to a discrete-event approach taking into account LSPs with limited lifetimes. Moreover, the objectives we fixed can be the basis for further studies of CBR, with emphasis on techniques for on-line design of survivable networks with multi-priority traffic.


References

[1]. Wikipedia, “Internet backbone”. Free encyclopedia [Online]. Available: http://en.wikipedia.org/wiki/Internet_backbone

[2]. H. Jonathan Chao, Xiaolei Guo. “Quality of service control in high-speed networks”, illustrated ed.

Reading, MA: Wiley-IEEE, 2001.

[3]. Xipeng Xiao, A. Hannan, B. Bailey, L.M. Ni. “Traffic engineering with MPLS in the Internet”. IEEE Network Magazine, Mar/Apr 2000, Vol. 14, Issue 2, pp. 28-33.

[4]. K. Kar, M. Kodialam, T.V. Lakshman, “Minimum Interference Routing of Bandwidth Guaranteed Tunnels

with MPLS Traffic Engineering Applications”, IEEE Journal on Selected Areas in Communications,

December 2000.

[5]. Callon, R., Doolan, P., Feldman, N., Fredette, A., Swallow, G., Viswanathan, A., "A Framework for Multiprotocol Label Switching", Internet Draft, Sep 1999.

[6]. I Gallagher, M Robinson, A Smith, S Semnani and J Mackenzie, " Multi-protocol label switching as the

basis for a converged core network ", BT Technology Journal, vol. 22, No 2, April 2004.

[7]. International Engineering Consortium, “White Papers: Multiprotocol Label Switching (MPLS)”,

International Engineering Consortium. IEC, 2007.

[8]. B. S. Davie and A. Farrel, MPLS: Next Steps, Volume 1, “The Morgan Kaufmann Series in Networking”,

2008.

[9]. L. Andersson, P. Doolan, N. Feldman, A. Fredette and B. Thomas. “LDP Specification”, IETF RFC 3036,

January 2001

[10]. L. Andersson, I. Minei, Ed and B. Thomas. “LDP Specification”, RFC 5036, October 2007.

[11]. V. Alwayn, “Advanced MPLS Design and Implementation”, Volume 1, the Cisco Press, USA September

2001.

[12]. Cisco Learning System: “Implementing Cisco MPLS”, Vol 1, Ver Student Guide 2.1, Cisco Press, 2004

[13]. N. Yamanaka, K. Shiomoto, E. Oki, “GMPLS Technologies”, Volume 1, Taylor & Francis Group CRC Press, 2006.

[14]. J. M. O. Petersson, “MPLS Based Recovery Mechanisms”, University of Oslo, May 2005.

[15]. V. Sharma, F. Hellstrand, B. Mack-Crane, S. Makam, K. Owens, C. Huang, J. Weil, B. Cain, L.

Anderson, B. Jamoussi, A. Chiu, and S. Civanlar. “Framework for multi-protocol label switching

(MPLS)-based recovery”. IETF RFC 3469, February 2003.

[16]. P. Pan, G. Swallow, and A. Atlas (Eds.). “Fast reroute extensions to RSVP-TE for LSP tunnels”. IETF RFC 4090, May 2005.

[17]. H. Mellah and A. F. Mohamed. “Local path protection/restoration in MPLS-based networks”. In the Ninth Asia-Pacific Conference on Communications - APCC 2003, volume 2, pages 620-622, September 2003.


[18]. Hundessa, L., Pascual, J.D. “Fast rerouting mechanism for a protected label switched path”. In

Proceedings of the 10th International Conference on Computer Communications and Networks,

pages 527–530, April 2001.

[19]. J. Kang and M. J. Reed. “Bandwidth protection in MPLS networks using p-cycle structure”. In Design of Reliable Communication Networks (DRCN) 2003, pages 356-362, Banff, Alberta, Canada, October 2003.

[20]. W. D. Grover and D. Stamatelakis. “Cycle-oriented distributed pre-configuration: Ring-like

speed with mesh-like capacity for self-planning network restoration”. In Proceedings of IEEE

ICC'98, pages 537-543, Atlanta, Georgia, June 1998.

[21]. Pin-Han Ho, H.T. Mouftah. “Reconfiguration of spare capacity for MPLS-based recovery in the internet backbone networks”. IEEE/ACM Transactions on Networking (TON), pages 73–84, February 2004.

[22]. Haskin, D., Krishnan, R. “A method for setting an alternative label switched path to handle fast reroute”. IETF Internet Draft, 2000.

[23]. M. Kodialam and T. V. Lakshman. “Dynamic routing of locally restorable bandwidth

guaranteed tunnels using aggregated link usage information”. In Proceedings of IEEE

INFOCOM 2001, pages 376-385, April 2001.

[24]. D. Xu, Y. Xiong, and C. Qiao. “Novel algorithms for shared segment protection”. IEEE Journal on Selected Areas in Communications, 21(8): 1320-1331, 2003.

[25]. M. Kodialam and T. V. Lakshman. “Restorable dynamic quality of service routing”. IEEE Communications Magazine, pages 72-81, June 2002.

[26]. C. Qiao and D. Xu. “Distributed partial information management (DPIM) schemes for survivable networks – part I”. In Proceedings of IEEE INFOCOM 2002, pages 302-311, 2002.

[27]. E. Yetginer and E. Karasan. “Robust path design algorithms for traffic engineering with

restoration in MPLS networks”. In Proceedings of the Seventh International Symposium on

Computer and Communications – ISCC 2002, pages 933-938, 2002.

[28]. Wei Kuang Lai, Zhen Chang Zheng, Chen-Da Tsai. “Fast reroute with pre-established bypass tunnel in MPLS”. Computer Communications, 31, pages 1660–1671, 2008.

[29]. C. Huang, V. Sharma, K. Owens, and S. Makam. “Building reliable MPLS networks using a

path protection mechanism”. IEEE Communications Magazine, pages 156 - 162, March 2002.

[30]. D. Awduche, L. Berger, D. Gan, T. Li, V. Srinivasan, and G. Swallow. “RSVP-TE: Extensions

to RSVP for LSP tunnels”. IETF RFC 3209, December 2001.

[31]. R. Aggarwal, D. Papadimitriou, and S. Yasukawa (Eds.). “Extensions to RSVP-TE for point to multipoint TE LSPs”. IETF Draft, April 2005.


[32]. Xiaoming He, Hong Tang, Mingying Zhu, Qingxin Chu. “Flow-level based adaptive load balancing in MPLS networks”. In Fourth IEEE International Conference on Communications and Networking, pages 1-6, China, 2009.

[33]. A. Dana, A. K. Zadeh, K. Badie, M. E. Kalantari, and N. Reyhani. “LSP restoration in MPLS

network using case-based reasoning approach”. In Proceedings of ICCT2003, pages 462-468,

2003.

[34]. S.-Y. Kim. “Effect of load distribution in path protection of MPLS”. International Journal of

Communication Systems, 16(4):321-335, February 2003.

[35]. G. Ahn, J. Jang, and W. Chun. “An efficient rerouting scheme for MPLS-based recovery and its performance evaluation”. Telecommunication Systems, pages 481-495, 2002.

[36]. Agarwal, A., Deshmukh, R., 2002. “Ingress failure recovery mechanisms in MPLS networks”.

In Proceedings MILCOM, pages 1150–1153, October 2002.

[37]. D.-K. Hong, C. S. Hong, and Dongsik-Yun. “A hierarchical restoration scheme with dynamic

adjustment of restoration scope in an MPLS network”. In Network Operations and Management

Symposium, pages 191-204, April 2004.

[38]. S. Yoon, H. Lee, D. Choi, Y. Kim, G. Lee, and M. Lee. “An efficient recovery mechanism for

MPLS-based protection LSP”. In Joint 4th IEEE International Conference on ATM (ICATM

2001), pages 75-79, Seoul, Korea, April 2001.

[39]. P.-K. Park, H.-S. Yoon, S. C. Kim, J. Park, and S. Yang. “Design of a dynamic path protection mechanism in MPLS networks”. In The 6th International Conference on Advanced Communication Technology, volume 2, pages 857-861, 2004.

[40]. Jenn-Wei Lin, Huang-Yu Liu. “Redirection based recovery for MPLS network systems”. Journal of Systems and Software, April 2010.

[41]. K. Kar, M. Kodialam, T.V. Lakshman, “Minimum Interference Routing of Bandwidth Guaranteed Tunnels with MPLS Traffic Engineering Applications”, IEEE Journal on Selected Areas in Communications, December 2000.

[42]. J. Moy, “OSPF: Anatomy of an Internet Routing Protocol”, New York: Addison-Wesley, 1998

[43]. R. Guerin, D. Williams, A. Orda, “QoS routing mechanisms and OSPF extensions”, in Proc. Globecom,

1997.

[44]. Q. Ma, P. Steenkiste, “On path selection for traffic with bandwidth guarantees”, In Proceedings of IEEE

International Conference on Network Protocols, Atlanta, GA, October 1997.

[45]. Hung-ying Tyan. J-Sim Simulator. http://www.j-sim.org.

[46]. NS Network Simulator. http://www.isi.edu/nsnam.

[47]. Gaeil Ahn. MNS (MPLS Network Simulator). http://flower.ce.cnu.ac.kr/~fog1/mns/.

[48]. Gaeil Ahn; Woojik Chun. “Design and implementation of MPLS network simulator supporting LDP and

CR-LDP”. IEEE International Conference on Networks (ICON 2000), p. 441-446.

[49]. Boeringer, R. RSVP-TE patch for MNS/ns-2. http://www.ideo-labs.com/index.php?structureID=44.


[50]. Adami, D. et al. “Signalling protocols in diffserv-aware MPLS networks: design and implementation of

RSVP-TE network simulator”. IEEE Global Telecommunications Conference (GLOBECOM’05), vol. 2, p.

792-796.

[51]. Callegari, C. Vitucci, F. RSVP-TE patch for MNS/ns-2. http://netgroup-serv.iet.unipi.it/rsvp-te_ns/.


APPENDIX A

PUBLICATIONS / COMMUNICATED

[1]. Rahul Tyagi “Importance of Traffic Engineering in MPLS Networks” in International Conference on Electronic

Communication & Instrumentation (e-Manthan), April 2012.

[2]. Rahul Tyagi “Survey of Fault Tolerance Strategies in MPLS Networks” in International Journal of Computer

Science Issues, in IJCSI Volume 9, Issue 3, May 2012. Accepted


Personal Detail

I am Rahul Tyagi, pursuing Master of Technology in Computer Science and

Engineering from Jaypee University of Engineering and Technology, Guna

(M.P.).

I have completed Bachelor of Engineering in Computer Science and Engineering

from Global Institute of Technology, under Rajasthan University, Jaipur (RAJ)

with 62% in 2009.

My area of interest includes Database Management System, Data Structure,

Algorithms, Object Oriented Programming System and Computer Network.

Address: D/O Mr. S.C Tyagi Flat no. H-2, Mehak Arcade 8/16, Rajendra nagar,

sec-03, Sahibabad, U.P. INDIA

Email: [email protected]