A Comparative Analysis of Different Real Time Applications over
Various Queuing Techniques of VOIP
Chapter 1
INTRODUCTION
1.1 General Introduction
QoS is a network mechanism that effectively controls traffic-flood scenarios generated by a wide range of advanced network applications. This is possible through the allocation of priorities to each type of data stream.[8] Quality of Service allows control of data-transmission quality in networks and, at the same time, improves the organization of data traffic flows that pass through many different network technologies.[1] Such technologies include ATM (Asynchronous Transfer Mode), Ethernet and 802.1 technologies, IP-based units, etc.; several of them can even be used together.[7]
The Internet is expanding on a daily basis, and the number of network infrastructure components is rapidly increasing. Routers are most commonly used to interconnect different networks.[3] One of their tasks is to maintain the proper quality-of-service level. The leading network equipment manufacturers, such as Cisco Systems, provide on their routers mechanisms for the reliable transfer of time-sensitive applications from one network segment to another. In the case of VoIP, the requirement is to deliver packets in less than 150 ms.[2] This limit is set at a level where the human ear cannot recognize variations in voice quality. This is one of the main reasons why leading network equipment manufacturers implement QoS functionality in their solutions. QoS is a very complex and comprehensive system which belongs to the area of priority congestion management. It is implemented by using different queuing mechanisms, which take care of arranging traffic into waiting queues.[5]
Time-sensitive traffic should be given the highest possible priority. However, if a proper queuing mechanism (FIFO, CQ, WFQ, etc.) is not used, the priority loses its initial meaning.[4] It is also a well-known fact that all elements with memory capability introduce additional delays during data transfer from one network segment to another, so a proper queuing mechanism and a proper buffer length should be used, or the VoIP quality will deteriorate.[6]
QoS identification is intended for the recognition of data flows and of their priority. To ensure the priority, a single data stream must first be identified and then marked (if needed). These two functions together partly relate to the classifying process, which will be described in detail in the next section. Identification is executed with access control lists (ACLs).[9] An ACL identifies the traffic for the purpose of the waiting-queue mechanisms, for example PQ (Priority Queuing) or CQ (Custom Queuing). These two mechanisms are implemented in the router and represent one of its most important subparts.
1.2 Goal of the Research Work
QoS classification is designed for executing priority services for a specific type of traffic. The
traffic must first be pre-identified and then marked (tagged).
Classification is defined by the mechanism for providing priority service and by the marking mechanism. At the point when the packet has already been identified but not yet marked, the classification mechanism decides which queuing mechanism will be used at a specific moment (for example, on the per-hop principle). Such an approach is typical in cases where the classification belongs to a particular device and is not transferred to the next router.
Such a situation may arise in the case of priority queuing (PQ) or custom queuing (CQ). The remainder of this paper is organized as follows: Chapter 2 introduces QoS, Chapter 3 reviews typical queuing disciplines, Chapter 4 describes the simulation, Chapter 5 presents the results, and Chapter 6 concludes.
Chapter 2
QOS (QUALITY OF SERVICE)
2.1 What is QOS?
Quality of Service allows control of data transmission quality in networks, and at the same
time improves the organization of data traffic flows, which go through many different
network technologies. Such a group of network technologies includes ATM (asynchronous
transfer mode), Ethernet and 802.1 technologies, IP based units, etc.; and even several of the
abovementioned technologies can be used together. An everyday illustration of what happens when excessive traffic appears during peak periods is filling a bottle with a jet of water. The maximum flow of water into the bottle is limited by its narrowest part (the throat). If the maximum possible throughput is exceeded, a spill occurs (loss of data). A funnel used for pouring water into the bottle corresponds, in the case of data transfer, to the waiting queues: it smooths the flow and at the same time prevents the loss of data. A problem remains in the worst-case scenario, where the waiting queues overflow, which again leads to loss of data (too high a water flow rate into the funnel would again result in spills).
Priorities are the basic mechanism of the QoS operating regime, which also affects bandwidth allocation. QoS has the ability to control and influence the delays which can appear during data transmission. Higher-priority data flows are granted preferential treatment and a sufficient portion of bandwidth (if the desired amount of bandwidth is available). QoS also has a direct impact on the time variation of the sampled signals transmitted across the network; such sampling-time variation is called jitter (T. & S. Subash Indira Gandhi, 2006). Both properties have a crucial impact on the quality of data and information throughput, because such flows must reach their destination in strict real time. A typical example is the interactive-media market. QoS improves data-transfer characteristics in terms of smaller data losses for higher-priority data streams. The fact that
QoS can provide priorities to one or more data streams simultaneously, and also ensure the
existence of all remaining (lower-priority) data streams, is very important. Today, network
equipment companies integrate QoS mechanisms into routers and switches, both representing
fundamental parts of Wide Area Networks (WAN), Service Provider Networks (SPN), and
finally, Local Area Networks (LAN).
Based on the abovementioned points, the following conclusion can be given: QoS is a
network mechanism, which successfully controls traffic flood scenarios, generated by a wide
range of advanced network applications. This is possible through the priorities allocation for
each type of data stream.
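The jitter mentioned above can be made concrete with a small sketch. The following is a minimal, illustrative estimator in the style of the running interarrival-jitter computation used by RTP receivers (RFC 3550); the timestamps are invented sample data, not measurements from this work.

```python
# Hedged sketch: estimating interarrival jitter with an RFC 3550-style
# running average. All timestamps below are invented for illustration.

def update_jitter(jitter, prev_transit, transit):
    """One smoothing step: jitter += (|D| - jitter) / 16."""
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

# hypothetical send/receive timestamps for five packets, in seconds
send_times = [0.00, 0.02, 0.04, 0.06, 0.08]
recv_times = [0.10, 0.125, 0.142, 0.168, 0.181]

# transit time of each packet; its variation is the jitter
transits = [r - s for s, r in zip(send_times, recv_times)]

jitter = 0.0
for prev, cur in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, cur)

print(f"estimated jitter: {jitter * 1000:.3f} ms")
```

The divisor 16 is the smoothing gain RFC 3550 suggests; a larger divisor reacts more slowly to isolated delay spikes.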
2.2 How Does QoS Work?
The QoS mechanism, observed as a whole, roughly represents an intermediate supervising
element placed between different networks, or between the network and workstations or
servers that may be independent or grouped together in local networks. The position of the
QoS system in the network is shown in Figure 2.1. This mechanism ensures that the applications with the highest priorities (VoIP, Skype, etc.) receive priority treatment. The QoS architecture consists of the following fundamental parts: QoS identification, QoS classification, the QoS congestion-management mechanism, and the QoS queue-management mechanism.
Figure 2.1: QoS System's Position in the Network
2.2.1 QoS Identification
QoS identification is intended for the recognition of data flows and of their priority. To ensure the priority, a single data stream must first be identified and then marked (if needed). These two functions together partly relate to the classifying process, which will be described in detail in the next section. Identification is executed with access control lists (ACLs). An ACL identifies the traffic for the purpose of the waiting-queue mechanisms, for example PQ (Priority Queuing) or CQ (Custom Queuing). These two mechanisms are
implemented in the router and represent one of its most important subparts. Their operation is based on the per-hop principle ("hop by hop"), meaning that the QoS priority settings belong only to this router and are not transferred to the neighboring routers that form the network as a whole. Packet identification is then performed within each router with QoS support.
An example where classification is intended for only one router can be found in the CBWFQ (Class-Based Weighted Fair Queuing) mechanism. There are also
techniques which are based on extended control access-list identities. This method allows
considerable flexibility of priorities allocation, including the allocation for applications,
users, destinations, etc. Typically, such functionality is installed close to the edge of the
network or administrative domain, because only in this case each network element provides
the following services on the basis of a particular QoS policy.
2.2.2 QoS Classification
QoS classification is designed for executing priority services for a specific type of traffic. The
traffic must first be pre-identified and then marked (tagged). Classification is defined by the
mechanism for providing priority service and by the marking mechanism. At the point when the packet has already been identified but not yet marked, the classification mechanism decides which queuing mechanism will be used at a specific moment (for example, on the per-hop principle). Such an approach is typical in cases where the classification belongs to a particular device and is not transferred to the next router. Such a situation may arise in the case of priority queuing (PQ) or custom queuing (CQ). When the packets are marked for use in a wider network, the IP priorities can be set in the ToS field of the IP packet header.
The main task of classification is identification of the data flow, allocation of priorities and
marking of specific data flow packets.
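As a concrete illustration of marking, the sketch below builds a ToS byte from an IP precedence value and applies it to a socket. This is a hedged, illustrative example: the flag layout follows the classic ToS interpretation, the chosen precedence value is an assumption, and whether the mark is honored end-to-end depends on every router's policy.

```python
# Hedged sketch: marking the IP ToS byte from user space. The layout used
# here is the classic one: 3 precedence bits followed by D/T/R flag bits.
import socket

def tos_byte(precedence, low_delay=False):
    """Build a ToS byte: 3 precedence bits, optionally the 'minimize delay' flag."""
    tos = (precedence & 0x7) << 5
    if low_delay:
        tos |= 0x10  # 'minimize delay' bit in the classic ToS layout
    return tos

# IP precedence 5 ("critical") is commonly associated with voice traffic
voip_tos = tos_byte(5, low_delay=True)
print(f"ToS byte for VoIP: 0x{voip_tos:02x}")  # prints 0xb0

# Apply the mark to outgoing datagrams (platform support may vary)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, voip_tos)
sock.close()
```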
2.3 QoS queuing management mechanism
When the queue is full, it cannot accept any new packets, meaning that a new packet will be rejected. The reason for rejection has already been identified: the router simply cannot avoid discarding packets when the queue is full, regardless of which priority is applied in the ToS field of the packet. From this perspective the queue-management mechanism must execute two very important tasks:
- Try to ensure a place in the round-robin queue, or try to prevent the queue from becoming full. With this approach, the queue-management mechanism provides the necessary space for high-priority frames;
- Enable the criterion for rejecting packets. The priority level applied in the packet must be checked first, after which the mechanism decides which packets will be rejected and which will not. Packets with lower priority are rejected earlier than those with higher priority. This allows undisturbed movement of high-priority traffic flows, and if some additional bandwidth remains available, other low-priority traffic flows can also pass through the network. Both described methods are included in the Weighted Random Early Detection mechanism, found in various sources under the acronym WRED.
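The WRED drop decision described above can be sketched in a few lines: each priority class gets its own thresholds, so lower-priority packets start being dropped at a shallower average queue depth. The threshold values below are illustrative assumptions, not values from this work.

```python
# Hedged sketch of a WRED-style drop decision: the drop probability ramps
# linearly from 0 at min_th to max_p at max_th, and is 1 beyond max_th.

def wred_drop_probability(avg_depth, min_th, max_th, max_p):
    """Linear ramp between the two thresholds; drop everything beyond max_th."""
    if avg_depth < min_th:
        return 0.0
    if avg_depth >= max_th:
        return 1.0
    return max_p * (avg_depth - min_th) / (max_th - min_th)

# Per-priority profiles: (min threshold, max threshold, max probability).
# Lower priority -> lower thresholds -> dropped earlier as the queue fills.
profiles = {"low": (10, 30, 0.5), "high": (25, 40, 0.1)}

avg_depth = 28  # hypothetical averaged queue depth, in packets
for prio, (mn, mx, mp) in profiles.items():
    p = wred_drop_probability(avg_depth, mn, mx, mp)
    print(f"{prio}-priority drop probability at depth {avg_depth}: {p:.3f}")
```

At a depth of 28 packets, the low-priority class is already dropping almost half of its arrivals while the high-priority class drops only 2%, which is exactly the differentiated behavior the text describes.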
2.4 QoS Overview
In summary, Quality of Service allows control of data-transmission quality across many different network technologies (ATM, Ethernet and 802.1 technologies, IP-based units, etc.), even when several of them are used together. The QoS mechanism acts as an intermediate supervising component placed between different networks, or between the network and workstations or servers that may be autonomous or grouped together in local networks (Figure 2.1), and ensures that the applications with the highest priorities (VoIP, Skype, Fring, etc.) receive priority treatment. Its architecture consists of QoS identification, QoS classification, the QoS congestion-management mechanism, and the QoS queue-management mechanism.
Chapter 3
TYPICAL QUEUING DISCIPLINES
3.1 Typical Queuing Disciplines
As part of the resource allocation mechanisms, each router must implement some queuing
discipline that governs how packets are buffered while waiting to be transmitted. Various
queuing disciplines can be used to control which packets get transmitted (bandwidth
allocation) and which packets get dropped (buffer space). The queuing discipline also affects
the latency experienced by a packet, by determining how long a packet waits to be
transmitted. Examples of the common queuing disciplines are first-in first- out (FIFO)
queuing, priority queuing (PQ), and weighted-fair queuing (WFQ).
3.2 FIFO
FIFO is an acronym for First In, First Out. This expression describes the principle of a queue, or first-come, first-served behavior: what comes in first is handled first, and what comes in next waits until the first is finished. It is thus analogous to the behavior of persons "standing in a line" or "queue", where the persons leave the queue in the order they arrive. First In, First Out (FIFO) is the most basic queuing discipline. In FIFO queuing, all packets are treated equally by placing them into a single queue and then servicing them in the same order in which they were placed in the queue.
FIFO queuing is also referred to as First Come First Serve (FCFS) queuing. Generally, FIFO
queuing is supported on an output port when no other queue scheduling discipline is
configured. In some cases, router vendors implement two queues on an output port when no
other queue scheduling discipline is configured: a high-priority queue that is dedicated to
scheduling network control traffic and a FIFO queue that schedules all other types of traffic.
Figure 3.1: First In, First Out (FIFO)
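The FIFO behavior described above, including the tail-drop that occurs when the single buffer fills, can be sketched as follows (buffer size and packet names are illustrative):

```python
# Hedged sketch: a FIFO (first-come, first-served) output queue with a
# finite buffer. When the buffer is full, the arriving packet is dropped
# ("tail drop") regardless of any priority mark it may carry.
from collections import deque

class FifoQueue:
    def __init__(self, capacity):
        self.capacity = capacity
        self.buffer = deque()

    def enqueue(self, packet):
        if len(self.buffer) >= self.capacity:
            return False  # buffer full: tail drop
        self.buffer.append(packet)
        return True

    def dequeue(self):
        # Serve packets strictly in arrival order
        return self.buffer.popleft() if self.buffer else None

q = FifoQueue(capacity=3)
for pkt in ["p1", "p2", "p3", "p4"]:
    accepted = q.enqueue(pkt)
    print(f"{pkt}: {'queued' if accepted else 'dropped'}")  # p4 is dropped

print(q.dequeue())  # prints p1: the first arrival leaves first
```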
3.3 PQ
PQ is a simple variation of the basic FIFO queuing. The idea is to mark each packet with a
priority; the mark could be carried, for example, in the IP Type of Service (ToS) field. The
routers then implement multiple FIFO queues, one for each priority class. Within each
priority, packets are still managed in a FIFO manner. This queuing discipline allows high
priority packets to cut to the front of the line. Priority queuing (PQ) is the basis for a class of
queue scheduling algorithms that are designed to provide a relatively simple method of
supporting differentiated service classes. In classic PQ, packets are first classified by the
system and then placed into different priority queues. Packets are scheduled from the head of
a given queue only if all queues of higher priority are empty. Within each of the priority
queues, packets are scheduled in FIFO order.
The priority queuing mechanism provides smooth transit of important traffic (packets) through the network, using management at all intermediate points. PQ works by giving priority to the most important traffic, and it can be flexible regarding the allocation
of different traffic parameters such as: the network protocols (IP, IPX, AppleTalk, etc.), input
interfaces, the size of packets, source/destination addresses, and so on. In the PQ case, each packet (according to the priority entered in the ToS field) is classified into one of four queues distinguished by different levels (priorities). The lowest level is labeled "low", and the levels then go up in the following order: "normal", "medium", and "high".
"medium" and "high". Packets are individually sorted into appropriate queues according to
the declared priority. Packets which are not classified, or have not yet been classified (see the section on data-flow classification) through the above-described classification mechanism, automatically fall into the "normal" waiting queue, as shown in Figure 3.2. During data transmission the algorithm first handles the high-priority queues and then the low-priority ones.
Figure 3.2: Priority queuing (PQ)
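The four-level scheme just described can be sketched directly: packets are classified into "high"/"medium"/"normal"/"low" queues, unclassified traffic falls into "normal", and the scheduler always serves the highest non-empty queue first, FIFO within each level. Packet names are illustrative.

```python
# Hedged sketch of classic PQ with the four levels named in the text.
from collections import deque

LEVELS = ["high", "medium", "normal", "low"]  # service order

class PriorityQueuing:
    def __init__(self):
        self.queues = {level: deque() for level in LEVELS}

    def enqueue(self, packet, level=None):
        if level not in self.queues:
            level = "normal"  # unclassified traffic defaults to "normal"
        self.queues[level].append(packet)

    def dequeue(self):
        # Serve a queue only if all higher-priority queues are empty
        for level in LEVELS:
            if self.queues[level]:
                return self.queues[level].popleft()
        return None

pq = PriorityQueuing()
pq.enqueue("web-1", "low")
pq.enqueue("voip-1", "high")
pq.enqueue("mail-1")          # unclassified -> "normal"
pq.enqueue("voip-2", "high")

order = [pq.dequeue() for _ in range(4)]
print(order)  # ['voip-1', 'voip-2', 'mail-1', 'web-1']
```

Note the characteristic risk of classic PQ, visible even in this toy: as long as "high" packets keep arriving, the lower queues are never served (starvation).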
3.4 WFQ
Weighted fair queuing (WFQ) was developed independently in 1989 by Lixia Zhang and by Alan Demers, Srinivasan Keshav, and Scott Shenker. WFQ is the basis for a class of queue
scheduling disciplines that are designed to address limitations of the FQ model. The idea of
the fair queuing (FQ) discipline is to maintain a separate queue for each flow currently being
handled by the router. The router then services these queues in a round-robin manner. WFQ
allows a weight to be assigned to each flow (queue). This weight effectively controls the
percentage of the link’s bandwidth each flow will get. We could use ToS bits in the IP header
to identify that weight.
Figure 3.3: Weighted fair queuing (WFQ) with round robin
Figure 3.3 shows a weighted bit-by-bit round-robin scheduler servicing three queues. Assume that queue 1 is assigned 50 percent of the output-port bandwidth and that queue 2 and queue 3 are each assigned 25 percent. The scheduler transmits two bits from queue 1, one bit from queue 2, and one bit from queue 3, and then returns to queue 1. As a
result of the weighted scheduling discipline, the last bit of the 600-byte packet is transmitted
before the last bit of the 350-byte packet, and the last bit of the 350-byte packet is transmitted
before the last bit of the 450-byte packet. This causes the 600-byte packet to finish (complete
reassembly) before the 350-byte packet, and the 350-byte packet to finish before the 450-byte
packet.
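The arithmetic behind that finishing order can be checked in a few lines: with 2, 1, and 1 bits served per round, the round in which a packet's last bit leaves is simply its size in bits divided by its per-round rate.

```python
# Hedged sketch reproducing the weighted round-robin example above:
# queue 1 gets 2 bits per round, queues 2 and 3 get 1 bit each.
packets = {  # queue -> (packet size in bytes, bits served per round)
    "queue1": (600, 2),
    "queue2": (350, 1),
    "queue3": (450, 1),
}

# finish round = packet bits / bits-per-round
finish_rounds = {q: (size * 8) / rate for q, (size, rate) in packets.items()}

for q, rounds in sorted(finish_rounds.items(), key=lambda kv: kv[1]):
    print(f"{q}: last bit sent in round {rounds:.0f}")
```

The 600-byte packet finishes in round 2400, ahead of the 350-byte packet (round 2800) and the 450-byte packet (round 3600), matching the order stated in the text.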
Figure 3.4: Weighted Fair queuing (WFQ)
When each packet is classified and placed into its queue, the scheduler calculates and assigns
a finish time for the packet. As the WFQ scheduler services its queues, it selects the packet
with the earliest (smallest) finish time as the next packet for transmission on the output port.
For example, if WFQ determines that packet A has a finish time of 30, packet B a finish time of 70, and packet C a finish time of 135, then packet A is transmitted before packet B or packet C, as in Figure 3.4. Observe that the appropriate weighting of queues allows a WFQ scheduler to transmit two or more consecutive packets from the same queue.
Class-based weighted fair queuing (CBWFQ): CBWFQ represents the newest scheduling mechanism intended for handling congestion while providing greater flexibility. It is usable
in situations where we want to provide a proper amount of the bandwidth to a specific
application (in our case VoIP application). In these cases, the network administrator must
provide classes with defined bandwidth amounts, where one of the classes is, for example,
intended for a videoconferencing application, another for VoIP application, and so on.
Instead of assuring a waiting queue for each individual traffic flow, CBWFQ defines traffic classes, and a minimal bandwidth is assured for each such class. One case where a majority of lower-priority traffic flows can crowd out the highest-priority flow is a video transmission that needs half of the T1-connection bandwidth. Sufficient link bandwidth would be assured using the WFQ mechanism, but only while two traffic flows are present. Where more than two traffic flows appear, the video session loses bandwidth, because the WFQ mechanism works on the fairness principle. For example, if nine additional traffic flows demand the same bandwidth, the video session will get only 1/10th of the whole bandwidth, which is insufficient under WFQ. Even if we put an IP priority level of 5 into the ToS field of the IP packet header, the circumstances do not change much: the video conference would only get 6/15 of the bandwidth (a weight of 5+1 against nine flows of weight 1 each), which is still not the required half of the available T1 bandwidth. This can be provided by using the CBWFQ
mechanism. The network administrator just defines, for example, the video-conferencing
class and installs a video session into that class. The same principle can be used for all other
applications which need specific amounts of the bandwidth. Such classes are served by a
flow-based WFQ algorithm which allocates the remaining bandwidth to other active
applications within the network.
Chapter 4
SIMULATION
4.1 Simulation
The simulation's testing network architecture is an imitation of a real network. The main aim of this simulation is to improve the network's performance. The highest level in Figure 4.1 represents the network server architecture designed by the Internet service provider (ISP). The Server_subnet consists of five Intel servers, each with its own profile: a web profile (web server), VoIP, e-mail, FTP, and a video profile. These servers are connected through a 24-port switch and through a wired link to the private company's router.
Company’s network consists of four LAN segments including different kinds of users. In the
lower left wing of the company are the VoIP users who symbolize technical support to the
company’s Customers.
In the lower right wing of the company is a conference room where employees have meetings; two places here are meant for two concurrent sessions. In the upper left wing there is a small office with only 10 employees, who represent the expansion part of the company and use different applications needed for their work: for example, searching for information on the web, calling their suppliers, exchanging files over FTP, and so on. The remaining upper right wing includes fifty disloyal employees who surf the web during work time, download files, etc. (heavy browsing).
Each of the company’s wings is connected through a 100BaseT link to the Cisco 7505 router.
This router is further connected to the ISP servers' switch through a wired (VDSL2) 10 Mbit/s ISP link. Connections between the servers and the switch are also 100BaseT. The wired link in this case represents a bottleneck, where we have to introduce a QoS system and apply different queuing disciplines.
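A quick back-of-the-envelope check makes the bottleneck obvious: four 100BaseT LAN segments feed a single 10 Mbit/s link, so even a modest fraction of the LAN traffic saturates it. The per-wing loads below are invented for illustration, not measured values from this simulation.

```python
# Hedged sketch: offered load vs. link capacity at the bottleneck.
LINK_CAPACITY_MBPS = 10.0  # the wired (VDSL2) ISP link

# hypothetical average offered load per company wing, in Mbit/s
offered_load = {
    "voip_wing": 1.2,
    "conference": 2.0,
    "office": 1.5,
    "heavy_browsers": 9.0,
}

total = sum(offered_load.values())
print(f"offered load: {total:.1f} Mbit/s on a {LINK_CAPACITY_MBPS:.0f} Mbit/s link")
print(f"oversubscribed: {total > LINK_CAPACITY_MBPS}")  # prints True
```

Whenever the offered load exceeds the link capacity, some traffic must queue or be dropped, which is precisely where the choice of queuing discipline (FIFO, PQ, CQ, WFQ) determines who waits.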
4.2 Network Server Architecture:
Figure 4.1: The Network Server Architecture
4.3 QoS Attribute Configuration:
Figure 4.2: QoS Attribute Configuration
Chapter 5
RESULTS
5.1 Results
From this simulation we have collected delay statistics from various typical queuing-discipline scenarios (CQ, WFQ, PQ, FIFO) for two different active applications (VoIP and HTTP) in the network, with different priorities applied via the ToS field of the IP packet header. VoIP traffic flows have been defined between clients, where such flows symbolize
high-priority traffic; while HTTP traffic represents low-priority flow, based on a best effort
type of service. In our scenarios, 82.09% of users use the lower-priority HTTP traffic and only 17.91% use the high-priority VoIP application. In Figure 5.5 we can see that this 17.91% of users takes up the majority of the bandwidth, so the lower-priority HTTP traffic, which represents the majority of all traffic, must wait. This is the reason why
delays rapidly increase, as can be clearly seen in Figure 5.4. Obviously, VoIP traffic has lower delays than HTTP traffic. The best results are obtained with the queuing method that ensures the requisite bandwidth at possible blockage points and serves all traffic fairly. Similar results are also obtained in the case of HTTP. If the WFQ scheme
is in use, high-priority traffic will be ensured with a fixed amount of available bandwidth
defined by the network administrator. For example, the network administrator, using WFQ,
defines 9Mbit/s for VoIP, in which case only 1Mbit/s remains for all other applications; so
the majority of low-level traffic will be affected by rapid increasing of delays, as shown in
(Figure 5.6).
5.2 FTP Download Response Time (sec):
Figure 5.2: FTP Download Response Time (sec)
Figure 5.2 shows the FTP download response-time statistics. As the traffic increases, the performance lines change: the PQ line is better than those of FIFO and CQ, PQ is higher than WFQ, and FIFO, WFQ, and PQ remain close to one another.
5.3 FTP Upload Response Time (sec):
Figure 5.3: FTP Upload Response Time (sec)
Figure 5.3 shows the FTP upload response-time statistics, where it can be observed that the performance lines change as the traffic increases: the WFQ line is better than those of FIFO and CQ, CQ is higher than FIFO, and PQ is the best of the four.
5.4 User Object Response Time (sec):
Figure 5.4: User Object Response Time (sec)
Figure 5.4 shows the user object response-time statistics for HTTP users. As the traffic increases, the performance lines change: the PQ line is close to that of WFQ, CQ and FIFO are higher than PQ and WFQ, and WFQ is the best of the four.
5.5 User Page Response Time (sec):
Figure 5.5: User Page Response Time (sec)
Figure 5.5 shows the user page response-time statistics for HTTP users, where it can be observed that the performance lines change as the traffic increases: the PQ line is close to that of WFQ, CQ is higher than PQ and FIFO, and WFQ is the best of the four.
5.6 Video Conference Packet End-to-End Delay:
Figure 5.6: Video Conference Packet End-to-End Delay
Figure 5.6 shows that, as with VoIP, the packet end-to-end delay of CQ and FIFO is always higher for video conferencing, while the delay of PQ and WFQ stays near zero; WFQ and PQ are therefore the better choices here.
5.7 Video Conference Packet Delay Variation:
Figure 5.7: Video Conference Packet Delay Variation
Figure 5.7 shows the packet delay variation for video conferencing, where it can be observed that as the traffic increases, the delay variation for WFQ, FIFO, and PQ decreases and approaches zero, while CQ remains much higher.
5.8 VoIP Packet End-to-End Delay (sec):
Figure 5.8: VoIP Packet End-to-End Delay (sec)
Figure 5.8 shows the packet end-to-end delay for VoIP. In both cases, as time or traffic increases, the FIFO and WFQ delay lines are always higher, while the PQ and CQ end-to-end delay stays near zero.
5.9 VoIP Packet Delay Variation:
Figure 5.9: VoIP Packet Delay Variation
Figure 5.9 shows the packet delay variation for VoIP. In all cases, as time or traffic increases, the FIFO and WFQ delay-variation lines are higher than those of PQ and CQ, which stay near zero.
5.10 FTP Traffic Sent (bytes/sec):
Figure 5.10: FTP Traffic Sent (bytes/sec)
Figure 5.10 shows the FTP traffic-sent statistics, where it can be observed that the performance lines change as the traffic increases: PQ and WFQ are higher than FIFO and CQ, and PQ is the best of the four.
5.11 User Traffic Sent (bytes/sec):
Figure 5.11: User Traffic Sent (bytes/sec)
Figure 5.11 shows the user traffic-sent statistics for HTTP users, where it can be observed that the performance lines change as the traffic increases: the PQ line is close to that of FIFO, PQ is higher than FIFO, and WFQ is the best of the four.
5.12 Video Traffic Sent (bytes/sec):
Figure 5.12: Video Traffic Sent (bytes/sec)
Figure 5.12 shows the video traffic sent (bytes/sec); in this case the FIFO and WFQ lines are higher than those of CQ and PQ, which both stay near zero.
5.13 VoIP Traffic Sent (bytes/sec):
Figure 5.13: VoIP Traffic Sent (bytes/sec)
Figure 5.13 shows the VoIP traffic sent; in all these cases the FIFO and WFQ lines are higher than those of CQ and PQ, which stay near zero.
5.14 FTP Traffic Received (bytes/sec):
Figure 5.14: FTP Traffic Received (bytes/sec)
Figure 5.14 shows the traffic-received statistics for FTP, where it can be observed that as the traffic increases, the receiving-performance lines of FIFO, WFQ, and PQ stay close to one another, while CQ is higher than the other three.
5.15 User Traffic Received (bytes/sec):
Figure 5.15: User Traffic Received (bytes/sec)
Figure 5.15 shows the user traffic received; in these cases the CQ and FIFO lines are higher than those of WFQ and PQ.
5.16 Video Traffic Received (bytes/sec):
Figure 5.16: Video Traffic Received (bytes/sec)
Figure 5.16 shows the traffic-received statistics for video conferencing; the FIFO and WFQ receiving-rate lines are higher than those of CQ and PQ, which stay near zero.
5.17 VoIP Traffic Received (bytes/sec):
Figure 5.17: VoIP Traffic Received (bytes/sec)
Figure 5.17 shows the traffic-received statistics for VoIP, where it can be observed that as the traffic increases, the received-traffic lines of both FIFO and WFQ increase; the FIFO line is always higher than those of PQ and CQ.
Chapter 6
CONCLUSION
6.1 Conclusion
In this simulation we have observed the effect of typical queuing mechanisms on real-time applications such as FTP, VoIP, and user (HTTP or web) traffic. We have compared the results with one another and found that, among all the techniques, every application except HTTP is strongly affected by the PQ queuing technique on the real-time network. After comparing the results, it is clear that the performance of PQ always depends on the amount of traffic; it shows better output even when the number of clients increases, and its performance is better than FIFO in some cases, while FIFO performs better than PQ in others. According to the dropped-packet graph, it is also found that, among all the queuing methods, PQ and WFQ have the optimum packet delay. If we compare WFQ with FIFO and PQ in the cases of traffic drop, file receiving, voice-data receiving, and video conferencing, WFQ always shows the best performance among them.
Some problems remain with the integration of the VoIP architecture and the Internet. For any meaningful exchange, voice-data communication must be a real-time stream. This is in contrast with the Internet architecture, which is made up of many routers that can have a very high round-trip time (RTT); this can delay the voice data in reaching the proper address at a continuous rate. But according to the simulation, it has already been shown that a modernized form of fair queuing, WFQ, can perform better. So it can be said with confidence that user traffic streams such as voice, video, and data can be transferred efficiently by using the Weighted Fair Queuing algorithm in the routers through which the voice, video, and data streams are routed to their desired destinations.
In future work, a hybrid queuing method can be implemented in the network and its overall impact observed.
References
[1] M. J. Karam and F. A. Tobagi, "Analysis of the Delay and Jitter of Voice Traffic Over the Internet," IEEE INFOCOM, 2001.
[2] T. Velmurugan, H. Chandra, and S. Balaji, "Comparison of Queuing Disciplines for Differentiated Services Using OPNET," Advances in Recent Technologies in Communication and Computing (ARTCom '09), pp. 744-746, 27-28 Oct. 2009.
[3] M. J. Fischer, D. M. Bevilacqua Masi, and P. V. McGregor, "Efficient Integrated Services in an Enterprise Network," IT Professional, vol. 9, no. 5, pp. 28-35, Sept.-Oct. 2007; Cisco Systems, Understanding Jitter in Packet Voice Networks (Cisco IOS Platforms).
[4] S. Klampfer, J. Mohorko, and Ž. Čučej, "Impact of Hybrid Queuing Disciplines on the VoIP Traffic Delay," Faculty of Electrical Engineering and Computer Science, 2000 Maribor, Slovenia.
[5] M. M. G. Rashed and M. Kabir, "A Comparative Study of Different Queuing Techniques in VoIP, Video Conferencing and File Transfer," Department of ETE, Daffodil International University.
[6] Internetworking Technology Handbook – Quality of Service (QoS), Cisco Systems.
[7] G.729 Data Sheet.
[8] http://en.wikipedia.org/wiki/Type_of_Service
[9] A. H. Muhamad Amin, "VoIP Performance Measurement Using QoS Parameters," IT/IS Department, Universiti Teknologi PETRONAS, 31750 Tronoh, Perak Darul Ridzuan, Malaysia.