
CHAPTER 2

LITERATURE REVIEW

2.1 INTRODUCTION

Chapter 2 reviews the various studies carried out in Active Networks and describes the existing approaches and techniques that have been applied to networks to achieve Quality of Service. The review also focuses in detail on the application of Congestion Control and QoS Routing.

2.2 QUALITY OF SERVICE IN NETWORKS

2.2.1 Queuing Techniques

Bigdeli and Haeri (2008) introduced Predictive Functional Control (PFC) as a new active queue management (AQM) method for dynamic TCP networks supporting ECN. The ability of PFC controllers to handle system delay, together with their simplicity and low computational load, makes PFC applicable as an AQM method that achieves good performance in both queue regulation and compensation of dynamic variations in high-speed networks, given only a rough estimate of the network model. The controller is designed for the small-signal linearized fluid-flow model of TCP/AQM networks, and a closed-form transfer function representation of the developed PFC/AQM scheme is then derived to analyze the robustness of the closed-loop system with respect to variations in the system and controller parameters. The control scheme performs very well in regulating the queue length to its desired value in all situations for


both single and multiple bottleneck topologies. Fast response, low queue

fluctuations and consequently low delay, jitter, high link utilization,

scalability and low marking probability are other features of the developed

method with respect to the other well-known AQM methods.
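
For orientation, AQM designs of this kind are usually derived from the standard TCP/AQM fluid-flow model; a widely used form (given here as background, and not necessarily the exact model adopted by Bigdeli and Haeri) is

    \dot{W}(t) = \frac{1}{R(t)} - \frac{W(t)\,W(t-R(t))}{2\,R(t-R(t))}\,p(t-R(t)), \qquad
    \dot{q}(t) = \frac{N(t)}{R(t)}\,W(t) - C, \qquad
    R(t) = \frac{q(t)}{C} + T_p,

where W is the TCP window size, q the queue length, R the round-trip time, N the number of flows, C the link capacity, T_p the propagation delay and p the marking/dropping probability computed by the AQM controller. Linearizing these equations around an operating point yields the small-signal model on which such controller designs are based.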

Stylianos Dimitriou and Vassilis Tsaoussidis (2010) proposed an Active Queue Management scheme, namely Size-oriented Queue Management (SQM), which realizes service differentiation based on the Less Impact–Better Service principle. SQM manages to broadly satisfy the quality constraints of real-time applications without compromising the performance of bulk data applications. Using packet size as a criterion, it is able to distinguish time-sensitive flows and apply different dropping and scheduling policies to favor time-sensitive traffic. They also demonstrated how SQM classifies traffic and how it applies different policies to each packet depending on its size, the sizes of the packets currently in the queue and the contention level in the router. The results indicated that SQM is indeed practical and efficient. Their future plans include integrating SQM in a routing device in order to obtain results that calibrate their scheme to more realistic behavior.

Nir Perel and Uri Yechiali (2009) introduced and analyzed customers’

impatience that arises as a result of a slowdown in the servers’ service rate.

They analyzed three Markovian models: the single server case, the multiple

server case and the infinite-server case. For each model they derived explicit

expressions for the PGF of the number of customers in the system, both when

the servers are slow and when the system functions normally. They also

calculated the mean total number of customers in the system. In the M/M/1 and M/M/c (c < ∞) queues they solved a differential equation in order to derive the PGFs. When analyzing the M/M/∞ queue, they made use of a related model. They concentrated on deriving analytic solutions to the queue-length distributions. They derived, for each case of c, the corresponding


probability generating function, and calculated the mean queue size. Several

extreme cases were examined and numerical results were presented.
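
For reference, in the ordinary M/M/1 queue without server slowdown, the quantities that the above analysis generalizes have the familiar closed forms (with arrival rate \lambda, service rate \mu and \rho = \lambda/\mu < 1):

    P(N = n) = (1-\rho)\,\rho^{\,n}, \qquad
    \Pi(z) = E\!\left[z^{N}\right] = \frac{1-\rho}{1-\rho z}, \qquad
    E[N] = \frac{\rho}{1-\rho}.

The slowdown models above extend such PGFs to the case where the service rate alternates between a normal and a slow phase.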

Louvros et al (2006) implemented two traffic models for cellular networks. The first model, the classical one, is a typical queueing model with a single overall queue for the cell under consideration. The newly proposed model partitions the cell queue so that a separate queue is considered for each TRX of the cell. Fixed channel assignment is assumed in both models. Performance characteristics based on blocking probabilities, mean waiting time in the queue and cost functions are derived in order to compare the two models. In both models, a number of channels are reserved exclusively for handoff calls while the remaining channels are used for both new and handoff calls. Blocked calls are cleared from the system immediately. They compared the two models using the blocking probabilities of the system and found that these probabilities are lower for the new model for all values of offered load and queue size. They also obtained the average waiting time for queued handoff calls and found a small increase in the new model.

Sabato Manfredi et al (2006) were concerned with the design of improved active queue management (AQM) control schemes for time-delay systems, explicitly taking into account the presence of delays in the controller design.

A robust controller coping with uncertainties on the network parameters such

as round-trip time and load variations was proposed. This is based on an

appropriate robust reduction method for time delay systems. A robust

observer for time-delay systems is used to estimate online the average

transmission window resulting in a robust output feedback stabilization

scheme for AQM. In particular, a robust output feedback scheme based on

reduction method for AQM was proposed. The design proceeded in two

stages. Firstly, a set of transformations were considered to render the system

robust against parameter uncertainties. Then, the resulting feedback


controller, relying on full availability of the states, was modified by inserting

an appropriately designed robust observer for time-delay systems. The

observer allowed the online estimation of the average transmission window of

the overall sources accessing to the bottleneck. Thus, a robust AQM was

synthesized and tested numerically.

Misja Nuyens et al (2007) analyzed results from two research streams to provide a survey of state-of-the-art theoretical results characterizing the performance of the Foreground–Background (FB) discipline. They were concerned primarily with

traditional queueing metrics such as measures of the queue-length distribution

and response-time distribution. With respect to these measures, they

repeatedly found that FB performs well when the service distribution is

heavy-tailed, but that it can behave very poorly if the service distribution is

light-tailed. Since many computer applications have service distributions that

are typically modeled as heavy-tailed distributions, these results suggest that

FB is quite applicable in practical applications. FB has mean response times

that are as large as possible under any work-conserving policy, and for light-

tailed service distributions, FB has a tail behavior that matches the heaviest

tail possible under work conserving policies. However, they found that FB

can perform badly for distributions with high variability; thus, an important

task that remains is to better characterize under which classes of service

distributions FB performs badly.

Nima Sanajian et al (2008) exploited the distributional Little’s law to

obtain the steady-state distribution of the number of customers in a GI/G/1

make-to-stock queueing system. Non-exponential service times in make-to-

stock queue modeling are usually avoided or, at best, considered in

approximations due to difficulties in developing an exact method. They

analyzed make-to-stock queues to study the impact of production time

variability (leading to variability in production lead time). It is difficult to


obtain the steady-state distribution of the number in the GI/G/1 system; such

an analysis could not be incorporated in many of the studies in this field. They

provided a solution for this problem, which is an exact approach that involves

numerical approximations. The numerical results showed that ignoring the

variability of the production times results in tremendous errors. They also

pointed out that incorporating demand variability correctly into the analysis is

more crucial when production time variability is low.
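
The classical Little's law relates only the mean values,

    L = \lambda W,

where L is the mean number in system, \lambda the arrival rate and W the mean sojourn time; the distributional Little's law used above strengthens this to a relation between the full distributions. For instance, in a FIFO queue with Poisson arrivals the PGF of the number in system equals the Laplace-Stieltjes transform of the sojourn time evaluated at \lambda(1-z), and it is this type of identity that is exploited in the GI/G/1 analysis.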

Banik (2009) obtained queue length distributions at various epochs

such as service completion/vacation termination, pre-arrival, arbitrary,

departure, etc. Some important performance measures like mean queue

lengths and mean waiting times etc. have been obtained. He analyzed the

BMAP/G/1/1 queue with a variant of multiple vacation policy. He suggested a

procedure to obtain the steady state distributions of the number of customers

in the queue at service completion-, vacation termination-, departure-,

arbitrary- and pre-arrival-epochs.

Jau-Chuan Ke et al (2009) examined an M[x]/G/1 queueing system with a randomized vacation policy and at most J vacations. Whenever the system is empty, the server immediately takes a vacation. A cost model was developed to determine the optimum vacation policy. By using the analytic properties of the cost function, they developed an efficient decision criterion for searching for the jointly suitable value of (p, J). Some numerical examples were performed to investigate the effects of some parameters on the expected number of customers in the system and the expected waiting time of customers in the system.

Wall and Worthington (2006) obtained a new exact model for the

time-dependent behavior of virtual waiting time in discrete queuing systems

of the form M(t)/G/c. They extended these models to include the time-

dependent behavior of virtual waiting time. Statistical approximations for the


distributional form of virtual waiting times are then developed and tested.

These approximations reduced the computational effort involved in the

evaluation of the exact expressions by a factor of over 1000 while still

maintaining very high accuracy levels.

Joris Walraevens et al (2008) presented the transient analysis of the

system content in a two-class discrete-time MX /D/1 priority queue. Packets

of two types arrive in the system and packets of type 1 have priority over

packets of type 2. Using generating functions, they analyzed the transient

generating functions of the system contents of both classes at the beginning of

slots. Furthermore, they showed how to calculate the moments of the transient

system contents of both types and of the total system contents. They

illustrated their analytic approach by means of a couple of examples.

Jong-hwan Kim et al (2011) explained that the two major goals of queue management are flow fairness and queue-length stability; however, most prior works dealt with these goals independently. They showed that both

goals can be effectively achieved at the same time. They proposed a novel

scheme that realizes flow fairness and queue-length stability. In the proposed

scheme, high-bandwidth flows are identified via a multilevel caching

technique. They calculated the base drop probability for resolving congestion

with a stable queue, and applied it to individual flows differently depending

on their sending rates. Via extensive simulations, they showed that the

proposed scheme effectively realizes flow fairness between unresponsive and

TCP flows, and among heterogeneous TCP flows, while maintaining a stable

queue. They also proposed a new queue management scheme to realize both

flow fairness and queue-length stability. The scheme consisted of a multilevel

caching technique to detect high-bandwidth flows accurately with minimal

overhead; and a drop policy for achieving both flow fairness and queue-length

stability at the same time. Performance evaluation showed that 1) high-


bandwidth unresponsive flows can be effectively controlled with the proposed

scheme, 2) fairness among heterogeneous TCP flows can be significantly

improved, and 3) the proposed scheme can effectively deal with short-lived

flows. Based on the evaluations, it was observed that the proposed scheme

can maintain queue-length stability.
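
A minimal sketch of the rate-dependent drop policy described above is given below in Python; the function and its parameters are illustrative assumptions rather than the authors' implementation, but they capture the idea of applying a base drop probability more aggressively to flows exceeding their fair share.

    def per_flow_drop_probability(base_drop_prob, flow_rate, fair_share):
        """Scale the congestion-level drop probability by how far a flow
        exceeds its fair share (illustrative, not the authors' exact rule)."""
        if flow_rate <= fair_share:
            return base_drop_prob          # conforming flows see the base probability
        excess = flow_rate / fair_share    # > 1 for high-bandwidth flows
        return min(1.0, base_drop_prob * excess)

    # Example: a flow sending at 3x its fair share under a 2% base drop probability
    print(per_flow_drop_probability(0.02, 3.0, 1.0))   # prints the scaled probability (about 0.06)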

Kanchan Chavan et al (2011) studied AQM for wireless

networks. Unlike a wired link, which is assumed to have a fixed capacity, a

wireless link has a capacity that is time-varying due to fading. Thus, the

controller is required to meet performance objectives in the presence of these

capacity variations. They proposed a robust controller design that maintains

the queue length close to an operating point. They treated capacity variations

as an external disturbance and designed a robust controller using control

techniques. They also considered the effect of round-trip time in their model. Their method of incorporating the delay into the discretized model simplifies

controller design by allowing direct use of systematic controller design

methods and/or design packages. They demonstrated the robustness of the

controller to changes in the load condition and in the round-trip time through

simulations. AQM algorithms have been extensively studied for wired

networks. However, the design of AQM for wireless network has not been

adequately addressed. They proposed the design of AQM for wireless

networks. Specifically, they proposed a way to address capacity variations of

the wireless link by treating it as an external input. The controller design was

done offline using a linearized model about a suitably chosen operating point.

There is no online tuning or adjustment of parameters to be done by the user

while monitoring the network performance. Their simulation results on the

IEEE 802.11-based wireless network demonstrated that the proposed

algorithm achieves significant advantage in terms of queue stability over

various AQM algorithms with a minor increase in packet drop. Further, the


controller is robust to wide variations in the capacity and the simulations also

demonstrated the robustness to significant variations in the round trip time.

Heejung Byun et al (2011) proposed a control-based approach to the

duty cycle adaptation for wireless sensor networks. The proposed method

controlled the duty cycle through the queue management in order to achieve

high performance under variable traffic rates. To achieve energy efficiency

while minimizing the delay, they designed a feedback controller, which

adapted the sleep time to the traffic change dynamically by constraining the

queue length at a predetermined value. In addition, they proposed an efficient

synchronization scheme using an active pattern, which represented the active

time slot schedule for synchronization among sensor nodes, without affecting

neighboring schedules. Based on the control theory, they analyzed the

adaptation behavior of the proposed controller and demonstrated system

stability. The simulation results showed that the proposed method

outperforms existing schemes by achieving more power savings while

minimizing the delay. They proposed a control-based approach to the

adaptive duty cycle control for wireless sensor networks. The proposed

approach controlled the duty cycle through the queue management in order to

achieve high performance under network condition changes. To achieve

energy efficiency while minimizing the delay, they designed a feedback

controller, which changed the sleep time dynamically by constraining the

queue length at the predetermined value. This results in lower power

consumption and faster adaptation to traffic changes. Generally, some

limitations on scalability are imposed by the fact that it requires explicit state

information for each flow in each intermediate node. However, their scheme

only requires the local queue length for computing the duty cycle, which adds

good scalability to the system. In addition, they proposed a new

synchronization scheme so that the receiver and sender nodes are active at the

same time, while keeping the duty cycles different from those of all other


nodes. Their simulation results showed that the proposed algorithm improves

significantly both energy efficiency and delay performance by adapting the

duty cycle properly under network changes.
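
A minimal sketch of the queue-length-constrained sleep-time adaptation described above is given below; the simple proportional update rule and the parameter names are assumptions for illustration and do not reproduce the authors' actual control law.

    def adapt_sleep_time(sleep_time, queue_length, queue_setpoint,
                         gain=0.1, min_sleep=0.01, max_sleep=1.0):
        """Lengthen sleep when the queue is below its set-point (save energy),
        shorten it when the queue grows (reduce delay). Illustrative only."""
        error = queue_length - queue_setpoint
        new_sleep = sleep_time - gain * error * sleep_time
        return max(min_sleep, min(max_sleep, new_sleep))

    # Example: queue above its set-point, so the sleep time is reduced
    print(adapt_sleep_time(0.5, queue_length=12, queue_setpoint=8))   # -> 0.3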

2.2.2 Congestion Control Mechanism

Thushara Weerawardane et al (2009) analyzed the theoretical and

modelling aspects of the TNL congestion control algorithm in the HSPA

simulation model. The effects of the TNL congestion control algorithm for the

HSUPA performance are presented and validated using different HSUPA

traffic deployments. The simulation results confirmed (for both traffic

models: FTP worst and 3GPP FTP traffic) that TNL congestion control

algorithm can achieve much better overall performance in HSUPA network

compared to other simulation configurations. Such combined (preventive and

reactive) congestion control mechanisms can optimize the effective link

utilization by minimizing the higher layer retransmissions and also can

achieve high end user throughputs and high fairness.

Nishanth Sastry and Simon Lam (2005) presented CYRF, a novel

approach to protocol design that is guaranteed to converge to fairness and

efficiency. It allowed protocol designers to choose the appropriate response

function given the application and network issues at hand, without having to

worry about fairness and efficiency. Such protocols can also be made TCP-

friendly easily. Using this framework, they designed and evaluated a protocol

called LOG, with intermediate smoothness and aggressiveness.

Soohyun Cho and Riccardo Bettati (2005) proposed a new,

measurement based, collaborative congestion control scheme called

TCP/DCA-C for parallel, or quasi-parallel, TCP flows, which exchange

indicator signals about imminent congestion within the group in order to

improve the performance of all the flows in the group. In TCP/DCA-C, flows in


a group can manage their data sending rates more accurately to achieve better

performance by taking advantage of information that comes from other TCP

flows, which experience congestion earlier, and by treating their congestion

signals as indicators of imminent congestion in the network.

Feng Xie et al (2005) analyzed and evaluated the influences of

NAK-based retransmission mechanism on congestion control. They also

developed an accurate mathematical model for the steady-state throughput of

RMCC schemes by capturing the congestion avoidance and fast

retransmit/fast recovery procedures. With the equations obtained they

predicted the multicast throughput with the round-trip time and loss rate of the

“worst” receiver. Moreover, selecting the “worst” receiver as the nominee to send congestion control feedback is critical to ensure fairness and

congestion avoidance in single-rate multicast congestion control schemes. As

the TCP throughput equations are used in existing schemes for nominee

selection, the obtained multicast throughput equations can be used to enhance

the mechanisms since they predict the multicast throughput more accurately

than TCP throughput equations.
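
For context, the TCP throughput equations referred to here are typically variants of the well-known macroscopic TCP model; a simplified (Mathis-style) form, shown for orientation rather than as the authors' multicast equation, is

    T \approx \frac{MSS}{RTT}\sqrt{\frac{3}{2p}},

where MSS is the maximum segment size, RTT the round-trip time and p the packet loss probability of the "worst" receiver. The multicast throughput equations derived by the authors refine this kind of relation for NAK-based RMCC schemes.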

Mihail Sichitiu and Peter Bauer (2006) considered a detailed model of

a class of congestion control systems. For the considered systems, they

proved a theorem which states that, for computer congestion control systems with linear controllers, the stability of the system with a single source is equivalent

to the stability of the system with multiple sources. The proof is based on a

well-known result on stability of time-variant systems. The usefulness of the

theoretical result was accentuated by the NP hard nature of stability tests for

time-variant systems. A key ingredient in the proof of the small-gain theorem

is the theory of asymptotically autonomous systems, which requires in

particular that the equilibria of the subsystems in the interconnection contain

no chains.


Rohan de Silva (2005) presented the approach which uses historical

data to find the congestion-price of a connection. Once the price is found, it

can be used to calculate the appropriate rate of the connection. This method

used only the naturally available Round Trip Time (RTT) and historical data

to find the congestion-price of a connection. They put forward a plausible

method of congestion pricing-based congestion control which uses historical

data to compute the price. This eliminated the problem of unfair rate

allocation resulting from the need of all sources to reduce or increase their

rates in the same proportion. If the path is congested, this method first

decreases the rates of the sources and then once the congestion in the path is

relieved, the rates are increased to a value that is sufficient to leave a small

fixed number of packets in the queues along the path.

Hsu-Feng Hsiao et al (2006) proposed a congestion control algorithm

for user datagram protocol rate-based layered streaming of scalable video,

e.g., 3-D wavelet based scalable video streaming, which provides a variety of

video bit rates. The proposed congestion control mechanism is an extension of the explicit control protocol, a newly proposed congestion control protocol believed to be superior to the transmission control protocol, and accommodates both window-based and rate-based flows in heterogeneous network environments that can include wired and wireless channels. It also introduced the notion of reserved packet length so that the traffic of layered video can better share the bandwidth of a network, taking max-min fairness with other traffic into account.

Jin Wu et al (2006) discussed a new approach for network

congestion control. By using Artificial Intelligence methods in network

congestion control, a knowledge-driven approach is introduced to tackle the congestion control problem. A congestion control framework used to organize knowledge is proposed, and its application in solving the TCP Inter-Flavour-


Friendliness problem is discussed. The work shows that the knowledge-driven approach has potential for: 1) improving the performance of existing congestion control algorithms; and 2) tackling the congestion control problem in varying and uncertain environments by constructing complex congestion controllers in an easy and reliable way.

Yueping Zhang et al (2007) proved that single-link congestion control

methods with a stable radial Jacobian remain stable under arbitrary feedback

delay (including heterogeneous directional delays) and that the stability

condition of such methods does not involve any of the delays. They extended

this result to generic networks with fixed consistent bottleneck assignments

and max–min network feedback. They investigated the properties of Internet

congestion controls under non-negligible directional feedback delays. They

focused on the class of control methods with radial Jacobians and showed that

all such systems are stable under heterogeneous delays. To construct a

practical congestion control system with a radial (symmetric in particular)

Jacobian, they made two changes to the classic discrete Kelly control and

created a max–min version they called MKC. Combining the latter with a

negative packet-loss feedback, they developed a new controller EMKC and

showed in theory and simulations that it offers smooth sending rate and fast

convergence to efficiency.

Siddharth Ramesh and Sneha Kumar Kasera (2007) proposed two best

effort, search-based, session (or flow) level congestion control strategies for the

Internet, to complement existing packet-level congestion control schemes.

Their strategies controlled the number of competing flows to optimize for the

flow completion rate and the flow completion time. Furthermore, their session

control mechanisms do not require any per-flow state or computation at the

routers, make no assumption about input traffic characteristics and

requirements, avoid starvation of new flows when existing flows do not leave


the system, and do not require any end host TCP modifications. Using

evaluations under a wide variety of static and varying traffic load conditions,

they demonstrated the significant performance and fairness gains that their

session control mechanisms provide. They introduced two novel search-based

session-level congestion control mechanisms (GSS + GA and CP) to

complement existing packet level mechanisms. Comparing both algorithms,

they found that overall, CP performs better than GSS+GA, not only in terms

of significantly reducing the flow completion time, but also in terms of

maximizing the flow completion rate. It was fairly robust to all changes in

traffic patterns including pulse-like variations. Hence, they recommend CP as

a viable session control mechanism for the Internet.

Qiao-Yan Kang et al (2007) proposed an expert-control-based

multicast congestion control mechanism for wireless networks, termed

ECBMCC. In this mechanism, multicast receivers sent their feedback

information to the expert controller rather than the sender, and the expert

controller determined the state of the TCP connection by inference from the feedback information. Multicast congestion control is one of the key factors restricting the development of multicast applications, especially in the wireless environment. They analyzed the performance of ECBMCC by simulation, and the results showed that ECBMCC adapts well to the wireless environment and works normally in wireless environments with a high BER. Moreover, ECBMCC achieved excellent performance in terms of TCP-friendliness on low-BER wireless channels.

Mingyu Che et al (2009) observed that, based on the type of feedback that is primarily used as a congestion measure, congestion control methods can be generally classified into two categories: marking/loss-based or delay-based.


While both marking and queueing delay provide information about the

congestion state of a network, they have been largely treated with separate

control strategies. They proposed the notion of the NQD which serves as a

combined congestion measure of delay and marking information. Utilizing

normalized queueing delay (NQD), they proposed an approach to congestion

control that allows a source to scale its rate dynamically to prevailing network

conditions through the use of a time-variant set-point. By incorporating NQD

into a congestion avoidance strategy, delay-based TCP window controllers are

able to dynamically determine a buffer set-point that is scalable with respect

to the number of users, link capacity, and buffering resources. This addresses

an important open problem. Therefore, NQD is a useful congestion measure for practical congestion

control in ECN-enabled networks.

Chong Liu et al (2008) introduced the traditional congestion control policies and analysed their principles and feasibility. Building on the traditional policies, the active network congestion control policy introduces an active detection and passive notification mechanism, Random Early Detection (RED) queue management and load-balancing technology. They surveyed the application of active network congestion control and analysed the advantages and disadvantages of the traditional congestion control policy compared with the active network congestion control policy. From the direction of development, active networks have greater flexibility and can provide performance superior to that of traditional networks, and users can be provided with customized services or applications. Active network congestion control policy therefore has great potential.
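
Since RED queue management is central to the active-network congestion control policies discussed above, a minimal sketch of the classical RED drop-probability rule is given below in Python; the threshold and weight values are illustrative, and this is the textbook RED computation rather than the authors' specific policy.

    def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
        """Classical RED: no drops below min_th, probabilistic drops that grow
        linearly between min_th and max_th, forced drop above max_th."""
        if avg_queue < min_th:
            return 0.0
        if avg_queue >= max_th:
            return 1.0
        return max_p * (avg_queue - min_th) / (max_th - min_th)

    def update_average_queue(avg_queue, instant_queue, weight=0.002):
        """Exponentially weighted moving average of the instantaneous queue."""
        return (1.0 - weight) * avg_queue + weight * instant_queue

    # Example: an average queue of 10 packets falls midway between the thresholds
    print(red_drop_probability(10.0))   # -> 0.05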

Israr Ullah and Raees Khan (2008) developed a new application

level protocol above UDP, which is named UDP based Data Transfer protocol

(UDT). UDT has its own congestion control mechanism to achieve the


efficiency, fairness and stability objectives, whereas its application level

nature enables it to be deployed with the lowest cost, without any changes in

the network infrastructure and operating systems. They concluded that an additional cause of packet loss is the end system's interaction with the operating system and context switching with other applications.

Emmanuel Jammeh et al (2007) proposed an interval type-2 FLC that

achieved a superior delivered video quality compared with existing traditional

controllers and a T1 FLC. To show the response in different network

scenarios, tests demonstrated the response both in the presence of typical

Internet cross-traffic as well as when other video streams occupy a bottleneck

on an All-Internet Protocol (IP) network. Since All-IP networks are intended for multimedia traffic, it is important to develop a form of congestion control that

can transfer to them from the mixed traffic environment of the Internet. It was

found that the proposed type-2 FLC, although it is specifically designed for

Internet conditions, can also successfully react to the network conditions of an

All-IP network. When the control inputs were subject to noise, the type-2

FLC resulted in an order of magnitude performance improvement in

comparison with the T1 FLC. The type-2 FLC also showed reduced packet

loss when compared with the other controllers, again resulting in superior

delivered video quality. When judged by established criteria, such as TCP-

friendliness and delayed feedback, fuzzy logic congestion control offers a

flexible solution to network bottlenecks. These findings offered the type-2

FLC as a way forward for congestion control of video streaming across

packet-switched IP networks.

Krishna Jagannathan et al (2009) characterized the tradeoff between

the rate of control and network congestion for flow control policies. They

considered a simple model of a single server queue with congestion-based

flow control. The input rate at any instant is decided by a flow control policy,


based on the queue occupancy. They identified a simple ‘two threshold’

control policy, which achieves the best possible congestion probability, for

any rate of control. They determined the optimal amount of error protection to

apply to the control signals by using a simple bandwidth sharing model.
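
A minimal sketch of a 'two threshold', occupancy-based flow control rule of the kind analyzed above is shown below; the hysteresis logic and parameter values are illustrative assumptions, not the authors' exact policy.

    def flow_control_decision(queue_occupancy, sending_on,
                              low_threshold=20, high_threshold=80):
        """Hysteresis control: switch the source off above the high threshold,
        back on below the low threshold, otherwise keep the current state."""
        if queue_occupancy >= high_threshold:
            return False            # congestion imminent: stop admitting traffic
        if queue_occupancy <= low_threshold:
            return True             # queue has drained: resume the input
        return sending_on           # between thresholds: no control signal needed

    # Example: a source that was on stays on at occupancy 50, turns off at 90
    print(flow_control_decision(50, True), flow_control_decision(90, True))   # -> True False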

Shahram Jamali et al (2009) developed a stable congestion control algorithm that is inspired by nature. Toward this purpose, the window size of any connection is viewed as the population size of a flow species (W), and the congestion control problem is redefined as population control of these flow species. In order to control the population size of the W species, they used a realistic predator-prey model and mapped it to the Internet congestion control problem.

Liu Pingping et al (2009) proposed a new congestion control algorithm that predicts the instantaneous queue length at the next time step in order to decide whether congestion is imminent. It detects congestion at the earliest possible time and makes the best use of network resources. They reviewed and compared several AQM schemes and proposed a new method that can effectively absorb network oscillations and keep the packet dropping probability and the output smooth and close to ideal.

Weili Huang and Xiangguang Kong (2009) introduced the basic

principle of layered multicast and congestion control, and analyzed the goals

of layered multicast congestion control. They then discussed the advantages and shortcomings of several typical layered multicast congestion control methods and compared them in detail, focusing mainly on how to solve the crucial problems of layered multicast congestion control. The research deepened the study of layered multicast congestion control, widened the range of ideas for multicast congestion control solutions, and laid a foundation for related research in the future.


Kai Shi et al (2009) proposed a receiver-assisted congestion control mechanism (RACC), in which the sender still performs loss-based control while the receiver performs delay-based control. The receiver measures the network bandwidth based on the inter-packet delay gaps, computes an appropriate congestion window size according to the measured bandwidth and RTT, and then feeds this value back to the sender. The sender adjusts its congestion window according to the advertised window of the receiver. Through this receiver-assisted method, the sender can increase the congestion window quickly to the available bandwidth, thus improving network utilization. On the other hand, when a timeout happens, the receiver can promptly feed this information back to the sender to relieve the impact of the timeout on TCP performance.

Kai Shi et al (2009) proposed fuzzy logic congestion control

mechanism to improve throughput performance of transmission control

protocol (TCP) in wireless high-bandwidth delay networks. It is based on

receiver centric method of which the available bandwidth is measured at the

receiver. The receiver centric fuzzy logic congestion control method considers

bandwidth utilization and variation besides the packet loss. The mechanism

uses one-way packet interval to estimate bandwidth. Compared with TCP

Westwood, the bandwidth estimation is more accurate, and the mechanism can make better use of the bandwidth. Fuzzy logic congestion control can judge network congestion accurately, thus reducing the timeout probability of TCP.

Ning Jia et al (2009) proposed a congestion control mechanism

based on bandwidth estimation and packet’s arrival rate forecast for wireless

networks. This mechanism estimates a node’s available bandwidth by

monitoring the working state of node’s wireless link in real-time and forecasts


arrival rate of each packet by monitoring network traffic, then the node’s

congestion degree indicator can be obtained, and congestion can be controlled

in accordance with the type of packet.

Kai Shi et al (2010) proposed a sender and receiver combined

congestion control mechanism. The receiver estimates a congestion window

deemed to be appropriate from the measured bandwidth and RTT, and then

advertises the window size (feeds this information back) to the sender. The

sender then adjusts its congestion window according to the advertised window

of the receiver. Through this receiver-assisted method, the sender can increase

the congestion window quickly to the available bandwidth, thus improving the network utilization.
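
A minimal sketch of the receiver-side computation implied above, which sets the advertised window to roughly one bandwidth-delay product, is given below; the function name, units and default segment size are illustrative assumptions rather than the authors' exact mechanism.

    def receiver_advertised_window(measured_bandwidth_bps, rtt_s, mss_bytes=1460):
        """Advertise a window of roughly one bandwidth-delay product, in packets."""
        bdp_bytes = measured_bandwidth_bps * rtt_s / 8.0
        return max(1, int(bdp_bytes / mss_bytes))

    # Example: a 10 Mbit/s path with a 100 ms RTT supports about 85 full-size packets in flight
    print(receiver_advertised_window(10e6, 0.1))   # -> 85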

Guangxue Wang and Kai Liu (2009) proposed an upstream hop-by-hop

congestion control (UHCC) protocol based on cross-layer design to achieve

precise congestion control in many-to-one wireless sensor networks. It takes

advantage of the unoccupied buffer size and traffic rate at the MAC layer of each node as a congestion level indication, based on which every upstream traffic rate is adjusted according to its node priority to mitigate congestion hop by hop.

Finally, simulation results show that the UHCC protocol achieved higher

throughput, better priority-based fairness and lower packet loss ratio than both

CCF and PCCP protocols.

Guang et al (2010) divided studies on congestion control in the Internet into congestion control based on measurement, improvements to AIMD mathematical models, and congestion control based on control theory. The current state and development of congestion control research were summarized under these three categories. Congestion control algorithms exhibit the characteristics of multiplicity and agility, but improving the QoS of the Internet with large bandwidth-delay products and heterogeneous flows is the common goal of all the algorithms.


Xiaolong Li et al (2009) proposed a distributed ECN-based

congestion control protocol which is referred to as the Double-Packet Congestion

Control Protocol (DPCP). DPCP is capable of relaying a more precise

congestion feedback compared to earlier proposed Variable-structure

Congestion-control Protocol (VCP) yet preserving the utilization of the two

ECN bits. By distributing (extracting) congestion related information into

(from) a series of packets, DPCP is able to circumvent the limitations of VCP

related to the use of three congestion levels encoded into two ECN bits. They

implemented DPCP in Linux and demonstrate its performance improvements

compared to VCP through experimental studies.

Lei Ye et al (2008) aimed at designing a family of optimization

based, end-to-end transport layer protocols to support various QoS

requirements of real-time applications. It enables a set of classes of service (CoS) including Assured Forwarding Service (AFS), Minimum Rate

Guaranteed Service (MRGS), and Minimum Rate Guaranteed and Upper

Bounded Rate Service (MRGUBS). These control laws maximize the same

utility function as the TCP congestion control protocol does. As a result, they

are by design TCP friendly. These control laws are implemented as window-

based congestion control protocols, similar to the window-based TCP

congestion control protocol. First, they reverse engineered the utility function

of TCP. Second, they derived a family of QoS aware control laws sharing the

same utility function with the TCP. These control laws were then mapped into

a family of packet-based control protocols.

Filipe Abrantes et al (2011) explored the problem of operating

XCC mechanisms in transmission media with variable or unknown capacity.

Explicit congestion control (XCC) is emerging as one potential solution for

overcoming limitations inherent to the current TCP algorithm, characterized

by unstable throughput, high queuing delay, RTT-limited fairness, and a static


dynamic range that does not scale well to high bandwidth delay product

networks. In XCC, routers provide multibit feedback to sources, which, in

turn, adapt throughput more accurately to the path bandwidth with potentially

faster convergence times. Such systems, however, require precise knowledge

of link capacity for efficient operation. In the presence of variable-capacity

media, e.g., 802.11, such information is not entirely obvious or may be

difficult to extract. They explored three possible algorithms for XCC which

retain efficiency under such conditions by inferring available bandwidth from

queue dynamics and test them through simulations with two relevant XCC

protocols: XCP and RCP. They proposed three alternative control algorithms:

Blind, ErrorS, and MAC, which were evaluated both through simulation and

in a FreeBSD testbed. Blind and ErrorS use queue properties such as queue

speed or queue accumulation to infer the instantaneous capacity of the

medium while the MAC algorithm uses information from the MAC layer,

such as idle and busy periods. They showed that these algorithms maintained

most of XCC properties, such as stable throughput, low queuing delay,

accurate flow fairness, and high efficiency regardless of the network BDP,

making these algorithms suitable for multimedia transport in high-speed

variable-capacity networks, such as IEEE 802.11n.

Przemyslaw Ignaciuk et al (2011) addressed the problem of

congestion control in communication networks from a control-theoretic

perspective. In this type of complex, dynamical systems, the primary obstacle

in the design of efficient control is the delay in the feedback loop which may

be subject to significant fluctuations during the control process. They

presented a new approach to solving the congestion problem in multisource

networks, in which each flow is characterized by different and time-varying

delay, with the application of discrete-time sliding-mode control. The

proposed controller, operating at a network node, guarantees that in the

considered networks the packet losses are eliminated and all of the available


bandwidth at the node output interface is used for the data transfer. The

controller is demonstrated to be robust with respect to the abrupt and

unpredictable changes of networking conditions, such as delay and bandwidth

variations, which need not be correlated with each other. The controller

parameters are selected by minimizing a quadratic cost functional. A closed-

form solution of the optimization problem allows for a straightforward and

operationally efficient implementation of the proposed congestion control

strategy in real network nodes. Their accurate control-theoretic approach was used for the design of a robust congestion controller for communication networks. The controller ensures that packet losses related to congestion are eliminated and the available bandwidth is entirely used for the transmission of data. The focus was placed on analysing the controller's robustness to the time-varying input-output delay. It was shown that the proposed

nonlinear controller guarantees the maximum throughput in the

communication system serving multiple flows with different and variable

latency. The designed controller quickly reacts to the bandwidth changes and

keeps the oscillations of the queue length (induced by delay variability)

minimal. The controller can provide faster dynamics and smaller buffer space

than other robust algorithms previously proposed for a similar network model.

The simple form of the proposed algorithm, which is the result of an

analytical solution of the optimization problem, ensures straightforward

implementation, easy dynamics tuning, and operational efficiency in real

network nodes. Moreover, since in the proposed algorithm fairness control is

separated from flow control, various rate-partitioning algorithms can be

introduced, such as max-min, or proportionally fair, or policy-based user

differentiation, without downgrading its performance related to the traffic

flow regulation. Finally, in the proposed scheme, signaling generates very

limited network traffic. This is due to the discrete nature of the proposed

controller which requires feedback information only once in every control

period.


Miguel Sepulcre et al (2011) proposed a new congestion control policy for the wide-scale deployment of cooperative vehicular ad hoc networks, which requires efficient congestion control that guarantees stable and reliable communications between vehicles and with infrastructure nodes. These policies should reduce the load on the communications channel while satisfying the applications' strict reliability

requirements. They proposed a contextual cooperative congestion control

policy that exploits the traffic context information of each vehicle to reduce

the communications channel load without sacrificing the traffic application’s

reliability. With the proposed policy, vehicles cooperate and are able to

reduce unnecessary interferences by exploiting the knowledge of the traffic

context obtained through the periodic exchange of broadcast messages. In

addition, they proposed a framework to extend the proposed policy to multi-

application scenarios through the design of a novel communications

adaptation layer.

Marios Lestas et al (2011) introduced a novel estimation algorithm

that is based on online parameter identification techniques and is shown

through analysis and simulations to converge to the effective number of users

utilizing each link. The algorithm does not require maintenance of per-flow

states within the network or additional fields in the packet header, and it is

shown to outperform previous proposals that were based on point wise

division in time. The estimation scheme is designed independently from the

control functions of the protocols and is thus universal in the sense that it

operates effectively in a number of congestion control protocols. It can thus

be used in the design of new congestion control protocols. They illustrated its

universality, by using the proposed estimation scheme to design a

representative set of Internet congestion control protocols. Using simulations,

they demonstrated that these protocols satisfy key design requirements. They

guided the network to a stable equilibrium that is characterized by high


network utilization, small queue sizes, and max-min fairness. In addition, they

are scalable with respect to changing bandwidths, delays, and number of

users, and they generate smooth responses that converge quickly to the

desired equilibrium. Their main contribution is to design a novel estimation

scheme of the effective number of users utilizing a link or multiple bottleneck

links, which is based on online parameter identification techniques and is

shown to work effectively, outperforming previous proposals. The estimation

scheme is designed independently from the control functions of the protocol

and is thus universal in the sense that it operates effectively in a number of

congestion control protocols. It can thus be successfully used to improve

recently proposed congestion control protocols and also to design new ones.

Here they use the proposed estimation scheme to design three representative

Internet congestion control protocols and also to demonstrate through

simulations that all three representative protocols satisfy key design

requirements.

Sumit Rangwala et al (2011) explored mechanisms for achieving

fair and efficient congestion control for multihop wireless mesh networks.

First, they designed an AIMD-based rate-control protocol called Wireless

Control Protocol (WCP), which recognizes that wireless congestion is a

neighborhood phenomenon, not a node-local one, and appropriately reacts to

such congestion. Second, they designed a distributed rate controller that

estimates the available capacity within each neighborhood and divides this

capacity among contending flows, a scheme called Wireless Control Protocol with

Capacity estimation (WCPCap). Using analysis, simulations, and real

deployments, they found that the designs yield rates that are both fair and

efficient. WCP assigns rates inversely proportional to the number of

bottlenecks a flow passes through while remaining extremely easy to

implement. An idealized version of WCPCap is max-min fair, whereas a

practical implementation of the scheme achieves rates within 15% of the max-


min optimal rates while still being distributed and amenable to real

implementation. The work is a significant step in understanding congestion

control for mesh networks. Their main contributions include: the first

implementation of fair and efficient rate control for mesh networks that yields

nearly optimal throughputs, a plausibly implementable available capacity

estimation technique that gives near-optimal max-min fair rates for the

topologies and insights into the impact of various factors on performance.
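
Since WCP is AIMD-based, a generic additive-increase/multiplicative-decrease rate update is sketched below for orientation; the constants and the per-neighborhood congestion signal are illustrative, and this is not the WCP protocol itself.

    def aimd_update(rate, neighborhood_congested,
                    alpha=0.1, beta=0.5, min_rate=0.01):
        """Additive increase when the neighborhood is uncongested,
        multiplicative decrease when congestion is signalled."""
        if neighborhood_congested:
            return max(min_rate, rate * beta)   # back off sharply
        return rate + alpha                     # probe for spare capacity

    # Example: two uncongested rounds followed by one congested round
    r = 1.0
    for congested in (False, False, True):
        r = aimd_update(r, congested)
    print(round(r, 2))   # -> 0.6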

2.2.3 QoS Routing

Iftekhar Ahmad et al (2005) presented a new preemption-aware

QoS routing algorithm for Instantaneous Request (IR) call connections in a

QoS-enabled network where resources are shared between Instantaneous

Request (IR) and Book-Ahead (BA) call connections. They adopted a new

strategy to incorporate future BA and current IR load information to make a

routing decision. A mathematical derivation is presented to calculate the

preemption probability of an incoming IR call at each link. The calculated preemption probability is used as a metric to formulate a new link cost

function for least cost routing.

Saad Alabbad and Woodward (2006) presented a simple credit-based localized algorithm (CBR) for QoS routing that performs routing using only flow statistics collected locally. They compared its performance against the PSR algorithm and demonstrated through extensive simulations that their algorithm outperforms PSR. They also compared its performance against the WSP algorithm and showed that CBR gives comparable performance with better time complexity and very low communication overhead, which confirms the effectiveness of the localized approach.


Venkatesh Sarangan et al (2006) introduced a new framework for estimating the routing capacity of a domain in an internetwork based on “network-flow” techniques; this routing capacity is advertised as an aggregate parameter along with the conventional widest-path bandwidth. The proposed routing capacity helps to achieve better performance by reducing the contention for resources along the shorter paths.

Eva Marín-Tordera et al (2006) proposed a new QoS routing mechanism called Prediction-Based Routing, based on predicting the availability of links and routes independently of the network state information. Consequently, update messages are not required, hence reducing signalling overhead and providing a major enhancement in terms of scalability.

Himanshu Agrawal et al (2007) proposed an algorithm for delay-

constrained problems. Multimedia applications have stringent constraints on delay, delay-jitter, cost, etc. The main purpose of QoS routing is to find a

feasible path that has sufficient resources to satisfy the constraints. The delay

and cost constrained routing problem is NP-complete. They presented a

technique called E-LARAC based on Lagrange Relaxation that gives a lower bound on the theoretical optimal solution.
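
A schematic of the Lagrangian relaxation idea underlying LARAC-style algorithms such as E-LARAC is sketched below: the delay constraint is folded into the link cost through a multiplier lambda, and a shortest-path search on the aggregated cost is repeated while lambda is adjusted. The graph representation, helper function and update rule are assumptions for illustration only.

    import heapq

    def cheapest_path(graph, src, dst, weight):
        """Dijkstra over edges (next_node, cost, delay); weight maps (cost, delay) to a scalar."""
        heap = [(0.0, 0.0, 0.0, src, [src])]   # (aggregated, cost, delay, node, path)
        best = {}
        while heap:
            agg, cost, delay, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, delay, path
            if best.get(node, float("inf")) <= agg:
                continue
            best[node] = agg
            for nxt, c, d in graph.get(node, []):
                heapq.heappush(heap, (agg + weight(c, d), cost + c, delay + d, nxt, path + [nxt]))
        return None

    def larac_style_path(graph, src, dst, delay_bound, lam=0.0, step=0.5, rounds=20):
        """Fold the delay constraint into the cost with multiplier lam and adjust lam
        until the cheapest aggregated path respects the delay bound (illustrative)."""
        feasible = None
        for _ in range(rounds):
            result = cheapest_path(graph, src, dst, lambda c, d: c + lam * d)
            if result is None:
                break
            cost, delay, path = result
            if delay <= delay_bound:
                feasible = (cost, delay, path)
                lam = max(0.0, lam - step)   # relax the delay penalty, look for cheaper paths
            else:
                lam += step                  # penalize delay more heavily
        return feasible

    # Example: the direct edge a->c is cheap but too slow, so the two-hop path is chosen
    g = {"a": [("b", 1.0, 1.0), ("c", 1.0, 10.0)], "b": [("c", 1.0, 1.0)]}
    print(larac_style_path(g, "a", "c", delay_bound=5.0))   # -> (2.0, 2.0, ['a', 'b', 'c'])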

Ahmed Alzahrani and Michael Woodward (2008) analyzed a localized delay-based QoS routing (DBR) algorithm, which relies on the delay constraint that each path satisfies in order to make routing decisions. They demonstrated through simulation that the two proposed algorithms, although

simple, outperform global routing schemes under different traffic loads and network topologies, even for a small update interval of link state.

Thriveni et al (2008) proposed a QoS Preemptive Routing protocol

with Bandwidth estimation (QPRB) that computes the available bandwidth in

the route and then sets up the route based on the network traffic and maintains

the route using preemptive routing procedure. This protocol provides QoS


support to the real time applications by providing a feedback about the

network status. The algorithm improves network performance and performs well under route breakage conditions, as better routes are found in advance of route breakage. The cost incurred to detect the route breakage and to find a new route is avoided.

Xing-Wei Wang et al (2009) proposed, drawing on fuzzy mathematics, probability theory and game theory, a QoS multicast routing scheme with ABC support. It uses intervals to describe the user QoS requirements and the edge (link) parameters, introducing user satisfaction degree and edge evaluation functions. With the help of game analysis and based on the small-world optimization algorithm, it tries to find a QoS multicast tree for which the Pareto optimum under the Nash equilibrium on both the network provider utility and the user utility is achieved or approached.

Abdulbaset Mohammad and Michael Woodward (2010) proposed a

congestion avoidance routing (CAR) scheme which combines the concept of

localized QoS routing with admission control in order to avoid congestion.

The CAR algorithm is designed to make routing decisions for each connection request and they have compared the CAR algorithm with other

localized CBR and QBR schemes, and demonstrated through simulations that the scheme outperforms both CBR and QBR in all situations considered.

Turki Al Ghamdi and Michel Woodward (2009a) presented a new localized routing algorithm called Highest Minimum Bandwidth routing (HMB), which avoids the issues associated with existing localized QoS routing techniques and so delivers better performance. They analyzed the functionality of CBR, which is the best among existing global and localized routing algorithms, offered two methods to improve its performance, and introduced the new localized routing algorithm HMB. In three types of networks, ISP, RAND45 and RAND80, their


algorithm consistently performed better than CBR. The proposed algorithm performs well using bandwidth as the QoS metric.
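
The selection rule of an HMB-style scheme can be illustrated as follows (an assumed illustration, not the published algorithm): compute the bottleneck, i.e. minimum, residual bandwidth of each candidate path, choose the path with the highest bottleneck, and admit the flow only if that bottleneck covers the request.

    # Illustrative sketch of a Highest Minimum Bandwidth (HMB)-style selection
    # rule (assumed behaviour): choose the candidate path whose bottleneck
    # residual bandwidth is largest, and admit the flow only if it fits.

    def bottleneck(path, residual_bw):
        # Bottleneck bandwidth of a path = minimum residual bandwidth of its links.
        return min(residual_bw[u, v] for u, v in zip(path, path[1:]))

    def select_hmb(candidate_paths, residual_bw, requested_bw):
        best = max(candidate_paths, key=lambda p: bottleneck(p, residual_bw))
        if bottleneck(best, residual_bw) >= requested_bw:
            return best        # admit on the widest candidate
        return None            # reject: no candidate can carry the request

    # Hypothetical example with two candidate paths (bandwidths in Mbit/s):
    bw = {("s", "a"): 20, ("a", "d"): 8, ("s", "b"): 12, ("b", "d"): 15}
    paths = [("s", "a", "d"), ("s", "b", "d")]
    print(select_hmb(paths, bw, requested_bw=10))   # -> ('s', 'b', 'd')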

Abdulbaset Mohammad and Michael Woodward (2009) proposed the concept of localized QoS routing with admission control to overcome the problems associated with global QoS routing. They presented a congestion avoidance routing (CAR) scheme that combines localized QoS routing with admission control in order to avoid congestion. The CAR algorithm is designed to make a routing decision for each connection request, and its performance was compared against the localized CBR and QBR schemes.

Turki Al Ghamdi and Michel Woodward (2009b) offered two

methods to improve the performance of the CBR algorithm and introduced

three new localized routing algorithms, HMB, HLABH and BRB. They

analyzed their performance and compared them to CBR for different network topologies. The methods offered for selecting the candidate paths, namely disjoint paths and recalculation, not only improved the functioning of the CBR algorithm but also allowed the three proposed algorithms to perform better.

Abdulbaset Mohammad and Michael Woodward (2008) proposed a Quality Based Routing algorithm (QBR) that uses average path quality to select a path from the set of candidate paths and to distribute traffic among them. They used different network topologies to compare the performance of their algorithm against the Credit Based Routing (CBR) algorithm under a wide range of traffic loads, and showed that the proposed algorithm outperforms CBR with the same time complexity.

Lajos Hanzo II et al (2011) proposed and evaluated new solutions for providing quality of service (QoS) assurances in a mobile ad hoc network (MANET), addressing the difficulties caused by node mobility, contention for channel access, the lack of centralized co-ordination, and the unreliable nature of the


wireless channel. A QoS-aware routing (QAR) protocol and an admission control (AC) protocol are two of the most important components of a system attempting to provide QoS guarantees in the face of these difficulties. Many QAR and AC-based solutions have been proposed,

but such network layer solutions are often designed and studied with idealized

lower layer models in mind. This means that existing solutions are not designed for dealing with practical phenomena such as shadow fading and the

link-quality-dependent fluctuation of link transmission rates. Their work proposes and evaluates new solutions for improving the performance of QAR and AC protocols in the face of mobility, shadowing, and varying link SINR.

It is found that proactively maintaining backup routes for active sessions,

adapting transmission rates, and routing around temporarily low-SINR links

can noticeably improve the reliability of assured throughput services. They proposed several new protocols, related to the StAC protocol, and evaluated

their performance in the face of increasingly severe shadowing attenuation

fluctuations. First, the StACbackup protocol added a feature that attempts to provide a pre-capacity-tested backup route to each active data session. The

novelty lay in the method of maintaining the status of backup routes regarding

their capacity at data source nodes without incurring any test packet overhead,

as well as in the combination with StAC. Use of such backup routes allowed the elimination of “available capacity” status update packets used by StAC

while reducing the risk of rerouting to routes for which there is no knowledge

of their free capacity. However, it was found that with severe shadowing-induced signal-strength fluctuations, the pretesting of backup routes was less significant, although merely proactively seeking backup routes still improved

the achieved QoS. This suggests that routing protocols benefit from

proactively requiring that backup routes exist. However, pretesting more than one backup route is counterproductive (when all traffic is throughput

sensitive) due to the excess overhead incurred when initiating state

information setup.


The required level of link disjointness between data sessions’

primary and backup routes was also studied. With severe shadowing fluctuations, the parameter does not have much effect because pretested

backup routes often break before they come into use, and instead, fast

rerouting using cached routing information at intermediate nodes is used to

better effect. This can be done without fear of using overly congested routes because poor link quality guarantees that there will always be some free

channel time in the system, since nodes can only transmit in a reduced

fraction of the time. The pretesting of backup routes’ available capacity is more important with lower shadowing standard deviation values and when

there is other non-admission-controlled background traffic using capacity in

the network. Second, their proposed StAC-multirate protocol adds awareness of multiple link transmission rates to the AC and routing process, as well as features to route around temporarily low-quality links. Adaptive

modulation enables higher SINR links to be exploited by StAC-multirate for

admitting more traffic, as well as facilitating the adaptation of the packet reception probability to the shadowing-dependent time variant link quality.

2.3 ACTIVE NETWORKS

Tilman Wolf et al (2000) proposed the use of “selectors for (active) packet flows”, similar to the tags employed in the IP world. They built an Active Network Node that implements the selector-based Simple Active Packet Format (SAPF). They described SAPF, a tag-based protocol for active networks that allows very efficient demultiplexing of packets to their handler code or execution environment (EE), and demonstrated how tags are exchanged between active TAN nodes and how regular IP routers and TAN nodes interoperate.
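
The efficiency of such tag-based demultiplexing comes from using a fixed selector field in the packet header as a direct index into a table of handlers or execution environments, so that no active code needs to be parsed on the fast path. A minimal sketch, with an assumed field layout and hypothetical handlers, is:

    # Illustrative sketch of selector/tag-based demultiplexing as used by
    # SAPF-like packet formats (field layout and handler names are assumptions).

    import struct

    # Table mapping a selector value to handler code / an execution environment.
    HANDLERS = {
        0x0001: lambda payload: print("forward via default EE:", payload),
        0x0002: lambda payload: print("run congestion-control service:", payload),
    }

    def demux(packet: bytes):
        # Assume the first two bytes carry the selector (tag); the rest is payload.
        selector, = struct.unpack_from("!H", packet, 0)
        handler = HANDLERS.get(selector)
        if handler is None:
            raise ValueError("unknown selector %#06x" % selector)
        handler(packet[2:])

    # Example: a packet tagged with selector 0x0002 is dispatched in O(1) time,
    # without interpreting any active code in the forwarding path.
    demux(struct.pack("!H", 0x0002) + b"payload-bytes")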

Dhananjai Madhava Rao and Philip Wilsey (2000) analyzed the design and implementation of a parallel co-simulation framework along with the results obtained from their co-simulation studies. The results also indicate that parallel simulation techniques considerably reduce simulation times for even


small-sized network models. They clearly highlighted the performance

improvements that can be achieved by employing optimistic parallel simulation techniques for co-simulation of conventional and active networks.

Shigehiro Ano et al (2001) proposed an active internetwork system using the Stream Code based active network. They developed the execution environment of Stream Code, called SC Engine, as well as a dynamic routing mechanism similar to OSPF. They implemented the execution environment, evaluated its performance, and showed how a link-state routing protocol such as OSPF can be implemented using Stream Code.

Kenneth Calvert et al (2001) described the Concast service and showed how it can be implemented in a backward-compatible manner in the Internet. They presented Concast, a many-to-one network service that is in many ways symmetric with multicast. Concast offers a solution to a problem arising in many networked group communication applications: how to collect feedback while avoiding implosion. It is especially useful in the context of reliable multicast but does not rely on multicast in any way for its implementation.
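
The implosion-avoidance idea is that each concast-capable node merges the responses arriving from its children into a single upstream message using an application-supplied merge function. A small illustrative sketch, in which the message fields and merge rule are assumptions, is:

    # Illustrative sketch of per-node merging in a Concast-like many-to-one
    # service (assumed message format and merge rule): each node combines the
    # feedback of its children into one message before forwarding upstream, so
    # the root receives a single aggregate instead of one packet per receiver.

    def merge(messages):
        # Example merge rule: count receivers and keep the worst (lowest)
        # reported receive rate; a real deployment supplies its own merge spec.
        return {
            "receivers": sum(m["receivers"] for m in messages),
            "min_rate": min(m["min_rate"] for m in messages),
        }

    # Feedback arriving at an intermediate node from three downstream children:
    children = [
        {"receivers": 1, "min_rate": 4.0},
        {"receivers": 5, "min_rate": 2.5},
        {"receivers": 2, "min_rate": 3.1},
    ]
    print(merge(children))   # -> {'receivers': 8, 'min_rate': 2.5}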

Maxemchuk and Low (2001) showed how active routing can

extend the capabilities of MPLS. They established a framework to

quantitatively compare networks and service providers. They demonstrated two mechanisms, sandboxes and pricing. Sandboxes protect the resources that

have been assigned to a customer and also prevent that customer from

acquiring unassigned resources. Pricing can be used to make a customer’s best interest and the network’s best interest the same.

Bond et al (2002) discussed the challenges of programming active

networks and then presented four new active network services, PAMcast,

Concast, ESP, and LWP, that simplify the task of programming active


networks. They presented four new “application friendly” active network

services, PAMcast, Concast, Ephemeral State Processing, and Lightweight

group communication applications, to access and utilize active network services in a scalable way.

Zhaoyu Liu et al (2003) presented a design and a description of the implementation for securing the node of an active network using active

networking principles. The secure node architecture includes an active node

operating system security API, an active security guardian, and quality of protection (QoP) provisions. It is based on active network principles and

provides authentication, authorization, integrity, dynamic access control, and

quality of protection for active applications. The architecture supports highly

customized and situational policies created by users and applications dynamically. It permits active nodes to satisfy the application-specific

dynamic security and protection requirements. They integrated the secure node architecture into an active network software system to demonstrate its flexible and innovative features.

Les Cottrell et al (2006) proposed new techniques to detect network performance problems proactively in close to real time. They implemented methods to detect persistent network problems using anomalous-variation analysis of real end-to-end Internet performance measurements, and provided guidance on how to set the user-settable parameters. The measurements are based on active probes running on 40 production network paths with bottlenecks varying from 0.5 Mbit/s to 1000 Mbit/s.

Hashim et al (2002) described the architectural framework for active networks and reviewed and compared the design of several experimental implementations of active networks that exist today, outlining the different approaches used in each design. In the process, the differences between two approaches to active networks, i.e., a discrete


approach and an integrated approach, are illustrated. Finally, they provided

examples of efforts to improve performance of applications using active networking.

2.4 ACTIVE NETWORK IMPLEMENTATION IN NETWORK PROCESSOR

Yuhong Li and Lars Wolf (2003) discussed a new approach for managing the resources in active network (AN) nodes by focusing on adaptations among the same and different types of resources. Resource Vectors (RVs) are used to describe the various resource usages in the node system. An adaptive resource management mechanism based on RVs is proposed, and its implementation in an active node is given. The adaptable RV space is applied to describe the adaptation capabilities of active applications. Both provide a basis for adaptation among the same and different types of resources.
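
One way to picture the Resource Vector idea is to treat the usage of each resource type (CPU, memory, bandwidth) as one component of a vector and to scale an application's demand down when any component would exceed the node's capacity. The following sketch is an assumed illustration of such bookkeeping, not the mechanism of Li and Wolf:

    # Illustrative sketch of Resource Vector (RV) style bookkeeping on an active
    # node (assumed model): resource usage is a vector over resource types, and
    # an application's demand is scaled down when any component would overflow.

    NODE_CAPACITY = {"cpu": 1.0, "memory": 1.0, "bandwidth": 1.0}   # normalised

    def admit_or_adapt(current_usage, demand):
        # Compute how much of the demand fits in every dimension simultaneously.
        headroom = {r: NODE_CAPACITY[r] - current_usage[r] for r in NODE_CAPACITY}
        scale = min(1.0, *(headroom[r] / demand[r] for r in demand if demand[r] > 0))
        if scale <= 0:
            return None                               # nothing can be admitted
        return {r: demand[r] * scale for r in demand}  # possibly adapted demand

    usage = {"cpu": 0.6, "memory": 0.3, "bandwidth": 0.8}
    wanted = {"cpu": 0.3, "memory": 0.2, "bandwidth": 0.4}
    print(admit_or_adapt(usage, wanted))
    # headroom: cpu 0.4, memory 0.7, bandwidth 0.2 -> scale 0.5, demand halved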

Farshad Khunjush et al (2003) examined the current status of network processor unit (NPU) development. Immediate challenges in the design and implementation of NPUs, with respect to rapid network expansion and increasing traffic demand, are also discussed. They discussed the issues and challenges that face an NPU designer and, having described the requirements for an ideal NPU, defined some common networking tasks that exist in all network applications.

David Fuin et al (2004) presented the two approaches usually chosen to obtain quality of service in active networks. The first one, called the “active approach”, allows protocols (or services) to be defined that are adapted to the payloads of flows (in particular their semantics) but does not permit priorities to be set on flows. The second one uses the QoS provided by the underlying IP layer (beneath the active network).


Takahiro Murooka et al (2004) introduced a versatile network node architecture called A-BOX that can be used to deploy various new network applications in an operating network. The key is the flexibility in processing packet headers, i.e., processing with either hardware or software, while also offering software-based payload data processing. The implemented system was evaluated with several new applications, and the results indicate its effectiveness for them.

Hang Qiu et al (2009) explained the design and implementation of an active node and discussed a code distribution scheme based on mobile code, demand loading, and caching techniques. The proposed architecture is capable of managing both active nodes and traditional nodes. Compared with NTS, EANTS provides higher efficiency. The navigation models free the management application programmer from developing distributed algorithms; the programmer can focus on the specific management application and select a navigation model that captures the requirements for running the management program.
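
The demand-loading and caching behaviour can be pictured as follows: when a packet requests a service whose code is not cached locally, the node fetches the mobile code from a code server, caches it (evicting the least recently used entry if necessary), and then executes it. The names and the cache policy in the sketch below are assumptions for illustration only.

    # Illustrative sketch of demand loading with an LRU code cache on an active
    # node (assumed behaviour; fetch_from_code_server is a hypothetical stub).

    from collections import OrderedDict

    def fetch_from_code_server(service_id):
        # Hypothetical stub standing in for a mobile-code download.
        return lambda packet: ("handled %s by %s" % (packet, service_id))

    class CodeCache:
        def __init__(self, capacity=4):
            self.capacity = capacity
            self.cache = OrderedDict()          # service_id -> executable code

        def get(self, service_id):
            if service_id in self.cache:
                self.cache.move_to_end(service_id)      # mark as recently used
                return self.cache[service_id]
            code = fetch_from_code_server(service_id)   # demand loading
            self.cache[service_id] = code
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)          # evict least recently used
            return code

    node_cache = CodeCache()
    handler = node_cache.get("compression-service")     # fetched on first use
    print(handler("pkt-42"))                             # served from cache later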

Ali Ahmadi and Timothy Green (2009) presented an optimal power flow (OPF) solution based on maximizing Distributed Generator (DG) real

power output with a constraint on network losses for radial and meshed distribution networks. They concentrated on the determination of initial points and effective adjustment of the barrier parameter to maintain accuracy and speed

of convergence. Several implementation issues such as initial points,

calculation of barrier parameter and stopping criterion are discussed and

investigated to evaluate the performance of the algorithm when applied to meshed and radial distribution networks.