2014 IEEE 27th Canadian Conference on Electrical and Computer Engineering (CCECE), Toronto, Canada


CCECE 2014 1569888451

An Adaptive Compression Technique Based on

Real-Time RTT Feedback

Fuad Shamieh, Ahmed Refaey, Xianbin Wang Dept. of Electrical and Computer Engineering

The University of Western Ontario, London, Canada Email: {fshamieh, ahusse7, xianbin.wang}@uwo.ca

Abstract-The dynamic nature of traffic over Internet protocol (IP) networks often induces high end-to-end latency and packet loss rates. These problems hamper the Quality of Service (QoS) of various conventional and emerging applications over the Internet. In order to mitigate these challenges and improve network efficiency, an adaptive compression technique (ACT) is proposed. ACT exploits lossless data compression algorithms, where compression is applied seamlessly to a packet's payload. Our adaptive compression is based on the situational awareness of a given network, derived from gathered network statistics such as the varying Round Trip Time (RTT) and the packet loss rate during a transmission session. The real-time observation of the varying RTT and packet loss rate triggers ACT compression when a defined threshold, which is compared to the observed values, is crossed. Using Network Simulator 3 (NS3), two different real-time latency reduction schemes using ACT were compared with an uncompressed transmission. The results show that ACT improves network conditions, reducing the number of dropped packets by approximately 30% and reducing delayed packet transmissions by 26.5%, which increases the TCP efficiency by approximately 3%.

I. INTRODUCTION

Real-time applications over information and communications technology (ICT) infrastructure are becoming more popular. Emerging real-time applications, such as data retrieval from data centers, are delay-sensitive; thus, certain network performance metrics are expected for Quality of Service (QoS) provisioning. Maintaining these expectations is becoming more difficult within any given network as the overall end-to-end delay increases.

The challenge is to minimize the instigated end-to-end delay as the behaviour and usage of a given network changes. The rapid growth in the number of clients and their dynamic behaviour within an IP network induces large, varying data flows, and thus high data congestion occurs. In highly congested networks, large end-to-end delays are inevitable. In order to improve the usage of real-time applications, end-to-end delays must be reduced [1], [2].

In this paper, the focus is on the design and validation of adaptive payload compression. Payload compression reduces the size of the largest portion of a packet, which allows enhanced network bandwidth utilization. As the packet size decreases, the congestion within a network decreases and, as a result, the packet loss rate is reduced. Also, due to the enhanced bandwidth utilization, the number of simultaneous network applications can increase while maintaining a smooth communication session. However, it is important to compress packets' payloads in an adaptive manner rather than applying constant compression, because there is a slight possibility that compressing a packet's payload might increase the overall transmission time [3].

978-1-4799-3010-9/14/$31.00 ©2014 IEEE

Additionally, there are several schemes that propose packet compression inside and outside the network. For example, the scheme proposed in [4], which compresses data within the network, achieves a low packet delay and packet discard rate. However, [4] does not show the optimal number of nodes required to achieve optimal compression for different sized networks. Moreover, the scheme proposed in [5] improves the end-to-end throughput. However, in [5], data is compressed in blocks, which may result in high delays if packets are dropped. In [6], the authors proposed a header compression technique that uses Software Defined Networking (SDN) concepts. However, the technique proposed in [6] is designed for packets with a relatively minute size.

Furthermore, there are various applications of Round Trip Time (RTT) in real-time scenarios. In [7] and [8], RTT is used for real-time delay measurement between sensors and for detecting stepping stone attacks, respectively. In this paper, however, real-time RTT measurements are used to enhance network communications in an Ethernet-based network.

The primary goal of this paper is to propose a new set of adaptive compression techniques for reducing network congestion and delays. These techniques improve the communication conditions for real-time applications by means of packet-by-packet payload compression using lossless algorithms. The proposed schemes are characterized as two different modes, passive and intermediate, which use real-time RTT feedback to decide whether to be active or inactive. The RTT values are observed during a transmission session and are used to trigger payload compression. Using the proposed techniques, the condition of a given network is improved, as shown in the results section of this paper.

The payload is compressed according to the selected mode, where all modes have an initial phase which precedes the transmission phase. In the passive mode, compression is applied based on the history of the network's previous transmissions, while in the intermediate mode, compression is applied according to a set of conditions. These rules are related to crossing two different thresholds. The first threshold is the instantaneous RTT (IRTT) crossing the baseline RTT (BLRTT), which is the RTT of the network when it is congestion-free, and the second threshold is the number of dropped packets exceeding a predefined value.

The remainder of this paper is organized as follows: Section II gives an overview of the problem definition, payload identification, and the types of compression. Section III describes how the passive mode and the intermediate mode are employed, respectively. Section IV discusses the simulated model, the obtained results, and their analysis. Finally, Section V states the concluding remarks.

II. SYSTEM STRUCTURE

A. Problem Definition

A packet's payload contains a large amount of data that may be compressed. Successfully compressing this data reduces a network's congestion, which ultimately leads to reduced packet loss and increased bandwidth utilization efficiency. The proposed techniques were developed to work alongside any network utilizing the IEEE 802.3 standard. The proof of concept is provided in Section IV.

Also, it is important to note that in the proposed solution, the packet-by-packet payload compression process occurs before transmission. Two different lossless data compression algorithms, Lempel-Ziv-Oberhumer (LZO) and ZLIB, are used in this paper. To improve communication efficiency by easily differentiating between uncompressed packets, compressed packets, and the employed compression algorithm, the sender and receiver use the identification technique proposed in Subsection B of Section II.

To minimize implementation costs, the proposed techniques can be deployed using software installations at the end-host.

B. Compressed and Uncompressed Packet Identification

The receiver is able to identify whether the packet's payload is compressed by identifying and using the value of the EtherType header field, which is located in the MAC sub-layer of the Data Link layer header. The value of the header field indicates the condition of the payload, as shown in Figure 1.

The decision tree shown in Figure 1 helps the receiver identify the type of compression, if any, that is present. A header is added to the beginning of a payload which includes two identification bits. More precisely, the added header contains the value 00 if the payload is not compressed, and 10 or 11 if it is compressed using LZO or ZLIB, respectively.
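The identification scheme above can be sketched as follows. The one-byte carrier for the two identification bits and the helper names are assumptions for illustration; the paper only specifies the bit values themselves.

```python
# Two-bit compression identifiers from the decision tree (Figure 1).
NO_COMPRESSION = 0b00
LZO_COMPRESSED = 0b10
ZLIB_COMPRESSED = 0b11

def tag_payload(payload: bytes, algo_id: int) -> bytes:
    """Prepend a one-byte header whose two low bits carry the identifier.
    (The paper specifies two identification bits; packing them into a
    whole byte is an assumption made here for simplicity.)"""
    return bytes([algo_id]) + payload

def identify_payload(packet_data: bytes):
    """Split a tagged payload back into (identifier, raw payload)."""
    algo_id = packet_data[0] & 0b11
    return algo_id, packet_data[1:]

tagged = tag_payload(b"hello", ZLIB_COMPRESSED)
algo, payload = identify_payload(tagged)
assert algo == ZLIB_COMPRESSED and payload == b"hello"
```

The receiver would first check the EtherType field and then dispatch on the identifier to the matching decompressor.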

6 Bytes | 6 Bytes | 2 Bytes   | 46-1500 Bytes | 4 Bytes
DMAC    | SMAC    | EtherType | Data          | CRC

Fig. 1: Decision Tree to Identify a Packet's Compression Status. (Decision-tree graphic: an uncompressed payload is tagged 00; a compressed payload branches to LZO or ZLIB.)

C. Packet-by-Packet vs. Block Compression

There are different types of payload compression. The payload of a packet may be compressed either individually or in blocks. Each kind of payload compression has certain advantages and disadvantages.

The term block compression refers to a conglomerate of data being compressed at once. A lossless compression algorithm with an excellent compression ratio is considered here to better explain the difference between block and packet-by-packet compression. This algorithm, known as DEFLATE, is used by ZLIB to compress data. When using the DEFLATE compression algorithm, payload compression in blocks provides a better compression ratio because the longer stream of data provides better matches of duplicate strings [9]. However, when transmitting data that was compressed as a block, the receiver must wait for the entire stream of data to arrive before processing it. If the receiver holds on to data before processing it, two major problems arise. The first is that the receiver must have a rather large buffer to store the received packets. The second is the possibility of dropped packets, which will result in a large delay [3]. This delay breaks down into processing, decompression, and transmission delay. Dropped packets must either be retransmitted, or the received data is rendered useless. Also, holding packets in the buffer while waiting for a retransmitted packet may result in other packets being dropped due to a full buffer.

The second type of compression is compressing the payload of packets individually, also known as packet-by-packet compression. For the sake of consistency, assume the same lossless compression algorithm as in the previous paragraph is employed. The compression ratio will not be as large as when compressing data in blocks, because there will be worse matches of duplicate strings since there is less data. However, there will be less delay, since the receiver does not have to wait until the entire stream of data arrives to process it; the receiver may decompress data immediately as it is received [3]. In this work, each packet is compressed individually.
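The trade-off described in the last two paragraphs can be observed directly with zlib, the DEFLATE implementation named in the text. The payload contents below are illustrative.

```python
import zlib

# Twenty identical packets with internally repetitive payloads,
# standing in for a stream of similar sensor data.
packets = [b"sensor reading: temperature=21.5 humidity=40 " * 10
           for _ in range(20)]

# Block compression: one DEFLATE stream over the concatenated payloads.
# The receiver must collect the whole block before decompressing.
block = zlib.compress(b"".join(packets))

# Packet-by-packet: each payload compressed independently, so the
# receiver can decompress every packet as soon as it arrives.
per_packet = [zlib.compress(p) for p in packets]

# Block compression yields the better ratio because DEFLATE finds
# duplicate strings across the whole stream, not just within a packet.
assert len(block) < sum(len(c) for c in per_packet)
```

The size advantage of the block grows with the amount of cross-packet redundancy, while the per-packet variant keeps the receiver's buffering and loss-recovery delay small, as argued above.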

III. MODE SELECTION

The following subsections are accompanied by a flow chart showing the behavior of the two distinct modes. The purpose of the flow chart is to show how the different modes transition from the initial testing phase to the data transmission phase.

In Figure 2, the hatched boxes refer to core algorithmic steps that are shared between the passive mode and the intermediate mode. The boxes with a dashed outline are passive-mode specific, while the boxes with a solid outline are intermediate-mode specific.

A. Passive Mode

Passive mode is a history-based technique in which certain network data is stored and utilized for future transmissions. Knowing the network history gives future transmissions the necessary edge to improve communication sessions. Only a fraction of the total data is necessary for this mode.

The only negative aspect of this technique is that it requires memory space on the sender's side, hence making it unusable by end-hosts with an extremely small amount of memory.

When passive mode is employed to optimize the conditions of a network, the transmitter is required to do the following:

1 - The history of the network during a regular transmission or test session is observed and utilized in the future.

2 - The history considered is the RTT values of the network from the beginning of the transmission session.

3 - The history considered must be a sample of substantial size relative to the entire transmission session; otherwise, a bigger sample is required.

In order to determine when to apply compression, a new BLRTT value must be set. Figure 2 shows that the BLRTT value is calculated, based on the observed network history, during the initial phase. In the initial phase, multiple test packets are sent over the simulated network. Once the history is observed, the RTT values are stored within an IRTTarray, where the values are sorted in ascending order. Only 10%-15% of the entire array's values, assuming the array contains a large history, are used to calculate the required IRTTcurrent value.

The new BLRTT value is equal to the calculated IRTTcurrent. In the centre of Figure 2, there is a comparator that continuously compares the newly defined BLRTT value against the IRTT value, with two possible outcomes. If the IRTT value is greater than BLRTT, a compression algorithm is employed before transmission; if the IRTT value is less than or equal to BLRTT, the algorithm jumps back to the comparison block.

The following algorithm describes the relationship between the observed history and BLRTT:

Algorithm 1 Calculate BLRTT

1: Create IRTTarray from history
2: Sort IRTTarray from min to max
3: Consider 15% of sorted data
4: Find IRTTcurrent = median(sorted IRTTarray)
5: Set BLRTT = IRTTcurrent

The reason why only 10% to 15% of the entire IRTTarray was considered was a phenomenon observed while testing the technique: regardless of the number of packets transmitted, the behaviour of the network was consistent.
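Algorithm 1 can be sketched as follows. It assumes the considered 15% is the lowest portion of the sorted array, which is consistent with estimating a congestion-free baseline; the paper does not state which portion is taken.

```python
from statistics import median

def calculate_blrtt(irtt_history, fraction=0.15):
    """Algorithm 1 sketch: BLRTT is the median of a fraction (10%-15%,
    per the text) of the sorted IRTT history. Taking the lowest values
    is an assumption made here."""
    irtt_array = sorted(irtt_history)            # steps 1-2: sort min to max
    cutoff = max(1, int(len(irtt_array) * fraction))
    sample = irtt_array[:cutoff]                 # step 3: consider 15%
    irtt_current = median(sample)                # step 4: median of sample
    return irtt_current                          # step 5: BLRTT = IRTTcurrent

# Illustrative history in milliseconds (not the paper's data):
history_ms = [14.1, 13.8, 29.3, 14.0, 41.7, 13.9, 15.2, 33.0, 14.4, 13.7]
blrtt = calculate_blrtt(history_ms)
```

The comparator in Figure 2 would then trigger compression whenever an observed IRTT exceeds this `blrtt` value.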

B. Intermediate Mode

The intermediate mode combines both compression algorithms, LZO and ZLIB, in two different methods. The first method merges the mode's conditions like an AND logic gate: both conditions must be true before any action is taken. The second method merges the mode's conditions like an OR logic gate: one condition must be true before any action is taken. These combinational methods are jointly referred to as the AND/OR method.

Fig. 2: Passive mode and Intermediate mode flow chart. (Flowchart graphic; recoverable box labels: "Calculate Baseline RTT (median value)", "Set IRTTcurrent = RTT", "Keep track of # of packets dropped", "Apply Compression", "If IRTT <= BLRTT && # of dropped packets >= 1".)

The mode's conditions are watched for after an initial BLRTT is crossed by a certain threshold. As shown in Figure 2, the first BLRTT is calculated to be the network's congestion-free RTT value. Once again, the comparison block continuously compares the IRTT value against the initial BLRTT value. If the initial BLRTT is crossed, the LZO compression algorithm is applied to the payload of the packets before transmission. Afterwards, a newBLRTT is set, and based on the conditions below, the ZLIB compression algorithm is applied to the payload. In the case where the number of dropped packets is greater than a decided value while the IRTT value is below the initial BLRTT, the ZLIB compression algorithm is applied and the conditions below are ignored.

The conditions of the AND/OR combinations are respectively:

1 - IRTT > newBLRTT, AND/OR

2 - The number of packets dropped > certain decided value.
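The AND/OR trigger can be sketched as a small predicate. The function and parameter names are illustrative, not from the paper.

```python
def should_apply_zlib(irtt, new_blrtt, dropped, drop_limit, mode="AND"):
    """Intermediate-mode trigger for the heavier ZLIB compression.
    mode="AND" requires both conditions to hold; mode="OR" requires
    either, mirroring the AND/OR method described above."""
    rtt_exceeded = irtt > new_blrtt          # condition 1
    drops_exceeded = dropped > drop_limit    # condition 2
    if mode == "AND":
        return rtt_exceeded and drops_exceeded
    return rtt_exceeded or drops_exceeded

# In a high-bandwidth network the stricter AND mode is preferred,
# so a high IRTT alone does not trigger ZLIB:
assert should_apply_zlib(20.0, 15.0, 0, 2, mode="AND") is False
# In a bandwidth-limited network the OR mode fires on one condition:
assert should_apply_zlib(20.0, 15.0, 0, 2, mode="OR") is True
```

In the full scheme, crossing the initial BLRTT would first enable LZO, and this predicate would then decide the escalation to ZLIB.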

The method this mode follows is based on progressive stages in a ladder-like fashion. The initial phase sets the BLRTT to the RTT value of the congestion-free network. Once this value is crossed by a certain threshold, the LZO compression algorithm is applied to the payload of the packets. As the transmission continues, a newBLRTT is configured. This value is based upon the previous history of the network, calculated using the method described in Subsection A, or an arbitrary value.

Depending on the network load, available bandwidth, type of data being transmitted, and payload size, the combinational mode is chosen. On one hand, if a network has a large amount of bandwidth available, the AND mode is preferred. On the other hand, if a network has very limited bandwidth, the OR mode is preferred. In fact, in small-bandwidth networks, the likelihood of crossing either the BLRTT threshold or the accepted number of dropped packets is higher than in a network with a large bandwidth, due to the network's minimal tolerance.

IV. SIMULATION MODEL

In order to test and analyze the proposed techniques on a valid network model, a simulation was conducted on a well-known platform, Network Simulator 3, which was installed and operated on a Linux machine.

As shown in Figure 3, the network model used for the proof of concept is known as a parking-lot topology network. All of the proposed techniques were tested on the same model. The proposed techniques are efficient in this simple network topology, which suggests the possibility of using them efficiently in highly complex networks as well.

In this network, node 0 and node 4 are the packet senders while node 3 and node 5 are the packet receivers. Stream 1, which is node 4 sending packets to node 5, will be observed and analyzed. The packets were being transmitted at a constant rate of 0.6 Mbps while the link speeds were 1.0 Mbps. All packet transmission delays in the network were set to 0 ms.

Each packet sent used Transmission Control Protocol (TCP) for the reliability provided by the protocol and IPv4 for addressing. The size of each uncompressed packet payload was 500 bytes. The total number of packets sent by each sending node was 1500.

A. Results

There are different ways of measuring the efficiency and the QoS of a TCP transmission session. In this paper, a number of different attributes are considered: the number of dropped packets, average IRTT, buffer delay, TCP efficiency, and duration of recovery.

The number of dropped packets shows how many packets were lost during the overall transmission session, which indicates the reliability of the data transfer. There are some cases in which dropped packets may be ignored if the application is loss-tolerant, such as voice-over-Internet-protocol (VoIP) and video transmission. However, other applications, such as banking services, cannot risk having any dropped packets. Therefore, reducing the number of dropped packets is a necessary and valid step in improving network conditions [10].

The average IRTT indicates the overall response time of the network. A lower RTT value is generally better; however, it does not necessarily mean the network condition is at its best. Even though the average IRTT may be low, there is still the possibility of a high number of dropped packets and the necessity of retransmitting them. There are also cases where the IRTT is high but the number of packets dropped is lower [11]. The following equations are used to calculate the average IRTT.

AvgIRTT = TotalRTT / Transfer_time, (1)

RTT = TTP + TTA, (2)

where TTP is the total time it takes to transmit a packet with its payload from the sender to the receiver, and TTA is the time it takes to transmit the acknowledgement from the receiver to the sender.


The buffer delay is a ratio that indicates the increase in the IRTT value over the BLRTT value. This ratio links the IRTT with the BLRTT [11].

BD(%) = [(AvgIRTT - BLRTT) / BLRTT] * 100. (3)

In (3), AvgIRTT and BLRTT are calculated using equations (1) and (2), where BLRTT is the RTT value of a congestion-free network.

The TCP efficiency percentage is an important ratio to calculate. It indicates the amount of bytes that were transmitted successfully. There are some cases where the TCP efficiency ratio is considered excellent, but at the cost of a high buffer delay ratio. In general, the higher the TCP efficiency ratio, the lower the number of packets dropped, which means the network is less congested [11].

TCPEff(%) = [(TxD - ReTxD) / TxD] * 100. (4)

The TCP efficiency equation has two important variables, TxD and ReTxD. TxD is the total transmitted data during a session, while ReTxD is the retransmitted data during the same session.

Finally, the duration of recovery is the period during which the transmission of the sender is throttled, due to a lost segment or duplicate ACKs, until the sender's transmission rate recovers.

DurationOfRecovery = trec - ttout. (5)

In (5), trec is the time at which the sender recovers from being throttled and ttout is the time at which the sender was throttled.
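Equations (1)-(5) can be collected into a small set of helpers. The function names are illustrative, and the sample values are not the paper's measurements.

```python
def avg_irtt(total_rtt, transfer_time):
    """Equation (1): average IRTT over a transmission session."""
    return total_rtt / transfer_time

def rtt(ttp, tta):
    """Equation (2): packet transit time plus acknowledgement time."""
    return ttp + tta

def buffer_delay(avg_rtt, blrtt):
    """Equation (3): percentage increase of the average IRTT over BLRTT."""
    return (avg_rtt - blrtt) / blrtt * 100

def tcp_efficiency(txd, retxd):
    """Equation (4): share of transmitted bytes not retransmitted."""
    return (txd - retxd) / txd * 100

def duration_of_recovery(t_rec, t_tout):
    """Equation (5): how long the sender's rate stayed throttled."""
    return t_rec - t_tout

# Illustrative example: a session that transmits 1000 kB and
# retransmits 28 kB of it is 97.2% efficient.
efficiency = tcp_efficiency(1000, 28)
```

Each metric is computed independently, so a session can score well on one (e.g. efficiency) while scoring poorly on another (e.g. buffer delay), as the text notes.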

The results of simulating the network are shown in Figure 4, and the quantitative analysis is given in Tables I and II. These items are discussed in the following section.

Fig. 3: Network Topology

Fig. 4: Passive mode Stream 1. (Plot of IRTT against elapsed time for the NC, LZO, and ZLIB sessions.)

TABLE I: Internal Network Values (Passive Mode)

Attribute       | NC           | LZO          | ZLIB
Packets dropped | 35           | 24           | 19
Time            | 12.0 seconds | 11.4 seconds | 11.3 seconds

TABLE II: Calculated Network Values (Passive Mode)

Stream 1 | AVG RTT (ms) | TCP Efficiency (%) | Buffer Delay (%) | Recovery (s)
NC       | 28.0820      | 97.2               | 65.2             | 3.4
LZO      | 27.4902      | 98.1               | 61.7             | 2.7
ZLIB     | 27.3293      | 98.5               | 60.8             | 2.5

B. Analysis

In this subsection, only passive mode is analyzed. The BLRTT value is calculated to be 13.92 ms. This BLRTT value represents the calculated BLRTT of the initial phase.

The NC line in Figure 4 represents the behavior of sending uncompressed packet payloads and receiving ACKs in return. The NC session of Stream 1 and Stream 2 lost a total of 35 packets, where Stream 1 was responsible for dropping 34 of the 35 packets. This was the highest number of packet drops among all of the conducted simulations.

The average RTT of the NC session was approximately 28 ms, which is about 14 ms higher than the BLRTT. This increase is shown by a buffer delay value of 65.2%. On one hand, the buffer delay indicates that the RTT of the network increased by 65.2%, which is not necessarily a negative aspect; however, it may indicate network congestion. On the other hand, the TCP efficiency was only 97.2%. The TCP efficiency value is directly related to the number of packets dropped; it shows that 97.2% of the entire data stream was successfully transmitted.

When passive mode was activated with the LZO algorithm employed, Stream 1 and Stream 2 lost a total of 24 packets, where Stream 1 was responsible for dropping 23 of the 24. The decrease in packet drops indicates a reduction of data congestion within the network. The decrease in congestion is directly reflected in the TCP efficiency percentage, which increased by 0.9%. This indicates that a higher number of successful transmissions were made while using the LZO algorithm to decrease the size of the packets.

When passive mode was activated with the ZLIB algorithm employed, Stream 1 lost a total of 19 packets. This large decrease in the number of packet drops indicates a less congested network. The decrease in network congestion is directly reflected in the TCP efficiency percentage, which increased by 1.3%, indicating a higher rate of successful transmissions.

The major difference between all of the previous scenarios is the packet payload size. During the original uncompressed transmission, the packet payload is 500 bytes. When LZO and ZLIB are used to compress the data, the payload size is 483 bytes and 444 bytes, respectively. This is a clear indication that the number of packet drops and the network congestion are directly related to the packet size. As the payload size decreased, the network became less congested, resulting in fewer packet drops. The relationship is not linear, but the quantities are directly correlated.

Looking at the recovery time for the three cases in passive mode, there is a sharp decrease from the NC session to the LZO session. The recovery time for the NC session is 3.4 s, while during the sessions where LZO and ZLIB are used, the recovery time is 2.7 s and 2.5 s, respectively. The decrease is a positive implication for transmission recovery, indicating that the congestion window is no longer throttling the sender's transmission speed.

Finally, it is important to note that the time to compress and decompress a packet's payload using LZO was approximately 30 µs and 20 µs, respectively. The time to compress and decompress a packet's payload using ZLIB was approximately 100 µs and 30 µs, respectively. Clearly, the time spent compressing and decompressing a payload was worth it, given the improved transmission results.
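As a rough check on these magnitudes, the ZLIB costs can be measured with a simple micro-benchmark. The payload is illustrative and absolute numbers depend on the machine; LZO is not in the Python standard library, so only the ZLIB side of the comparison is sketched.

```python
import time
import zlib

# Per-packet timing of zlib (DEFLATE) compression and decompression,
# analogous to the microsecond costs reported in the text.
payload = bytes(range(256)) * 2  # ~500 bytes, mildly compressible
n = 2000                         # iterations to average over

start = time.perf_counter()
for _ in range(n):
    compressed = zlib.compress(payload)
compress_us = (time.perf_counter() - start) / n * 1e6

start = time.perf_counter()
for _ in range(n):
    restored = zlib.decompress(compressed)
decompress_us = (time.perf_counter() - start) / n * 1e6

print(f"zlib compress:   ~{compress_us:.0f} us per payload")
print(f"zlib decompress: ~{decompress_us:.0f} us per payload")
```

Decompression is typically much cheaper than compression, which matches the asymmetry in the figures above and is why the receiver-side cost of per-packet decompression stays small.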

V. CONCLUSION

In this paper, adaptive compression techniques have been proposed and used to reduce network latency and the number of dropped packets. The proposed techniques are deployed based on real-time observation of RTT feedback and the number of dropped packets. The new techniques use lossless compression algorithms, LZO and ZLIB, to ensure the integrity of the data. Using NS3 as a simulation platform, it is shown that the proposed techniques reduced the number of dropped packets and improved network conditions when compared with an uncompressed transmission session. The number of dropped packets is reduced by approximately 30% and transmission delays are decreased by 26.5%.

REFERENCES

[1] W. Liu, L. Parziale, and C. Mathews, "Chapter 8: Quality of Service," in TCP/IP Tutorial and Technical Overview, USA: IBM International Technical Support Organization, 2006, ch. 8, pp. 287-289.

[2] R. Braden, D. Clark, and S. Shenker, "Integrated Services in the Internet Architecture: an Overview," IETF RFC 1633, 1994.

[3] Y. Matias and R. Refua, "Delayed-dictionary compression for packet networks," in Annual Joint Conference of the IEEE Computer and Communications Societies, pp. 1443-1454, 2005.

[4] M. Shimamura, T. Ikenaga, and M. Tsuru, "Compressing Packets Adaptively Inside Networks," in Ninth Annual International Symposium on Applications and the Internet, pp. 92-99, 2009.

[5] C. Krintz and S. Sucu, "Adaptive on-the-fly compression," in IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 1, pp. 15-24, 2006.

[6] S. Jivorasetkul, M. Shimamura, and K. Iida, "Better network latency with end-to-end header compression in SDN architecture," in Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM), pp. 183-188, 2013.

[7] H. Yi, H. Kim, S. Kwon, and J. Choi, "Design of networked control system using RTT measurement over WSN," in IEEE International Conference on Wireless Information Technology and Systems, pp. 1-4, 2012.

[8] P. Li, W. Zhou, and Y. Wang, "Getting the Real-Time Precise Round-Trip Time for Stepping Stone Detection," in International Conference on Network and System Security, pp. 377-382, 2010.

[9] P. Deutsch, "DEFLATE Compressed Data Format Specification version 1.3," IETF RFC 1951, May 1996.

[10] J. F. Kurose and K. W. Ross, "Chapter 7: Multimedia Networking Applications," in Computer Networking: A Top-Down Approach, USA: Addison-Wesley Publishing Company, 2009, p. 592.

[11] B. Constantine, G. Forget, R. Geib, and R. Schrage, "Framework for TCP Throughput Testing," IETF RFC 6349, 2011.