
Page 1:

Low Latency Adaptive Streaming over TCP

Authors: Ashvin Goel, Charles Krasic, Jonathan Walpole

Presented by: Sudeep Rege, Sachin Edlabadkar

Page 2:

Introduction

• TCP is the most popular transport protocol for media streaming because it efficiently handles congestion, flow control, and packet loss on the network.

• However, applications must adapt their streams to the bandwidth estimated by TCP.

• Adaptive streaming protocols use techniques such as dynamic rate shaping and packet dropping to adapt media quality. The effectiveness of these techniques depends on the bandwidth feedback delay.

Page 3:

Introduction

• TCP introduces considerable latency at the application level, so the quality of the adapted stream is quite poor.

• This latency occurs at the sender side due to throughput optimizations made by TCP.

• The paper proposes adaptive tuning of the TCP send-buffer size to drastically reduce end-to-end latency.

• The aim is to improve the responsiveness of adaptive protocols and hence the quality of the media.

Page 4:

Related Work

• Various protocols, such as DCCP, have been proposed as alternatives to TCP.

• DCCP provides congestion control but does not handle packet retransmissions, leaving loss recovery to higher layers.

• Loss-recovery schemes such as FEC then have to be implemented, incurring high bandwidth overhead.

• Other applications, such as Microsoft NetMeeting (VoIP), used UDP to provide best-effort service.

• Active queue management and Explicit Congestion Notification (ECN) can reduce packet loss in TCP, thereby reducing latency.

Page 5:

Features of TCP

• TCP is a sliding-window protocol: the window slides over a sender buffer of fixed size.

• The window represents unacknowledged packets in flight over the network.

• When the sender receives an acknowledgement for a packet, TCP sends a new packet from the buffer, just beyond the sliding window.

• Dropped packets can be retransmitted from the sender buffer.

• The throughput of TCP is approximately CWND / RTT, where CWND is the congestion window size and RTT is the round-trip time (worked example below).
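A small worked example of the throughput relation above; the packet size, window, and RTT values are assumed for illustration and are not from the paper.

# Throughput ~= CWND / RTT, expressed here in bits per second.
MSS = 1500       # bytes per packet (typical Ethernet payload, assumed)
cwnd = 30        # congestion window, in packets (assumed)
rtt = 0.100      # round-trip time in seconds (assumed)

throughput_bps = cwnd * MSS * 8 / rtt
print(f"throughput ~= {throughput_bps / 1e6:.1f} Mbit/s")   # ~3.6 Mbit/s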

Page 6:

TCP Bandwidth Estimation

• TCP estimates the bandwidth that a best-effort network can sustain without packet loss.

• It does so by increasing the window size every RTT, i.e. CWND = CWND + 1.

• When congestion (packet loss) occurs, TCP takes this as the bandwidth limit of the network and halves the window size, i.e. CWND = CWND / 2 (sketch below).

• Latencies in TCP arise from packet retransmissions, congestion control, and sender-side buffering.
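A minimal sketch of the additive-increase/multiplicative-decrease probing described above; the function is illustrative and works in whole packets rather than bytes.

def aimd_update(cwnd, loss_detected):
    """One round trip of TCP's bandwidth probing (simplified AIMD)."""
    if loss_detected:
        return max(cwnd // 2, 1)   # multiplicative decrease after congestion
    return cwnd + 1                # additive increase: one extra packet per RTT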

Page 7:

Latency in TCP

• Packet retransmissions incur a delay of 1 RTT, the time it takes for congestion-event feedback to reach the sender.

• TCP congestion control halves CWND, so the first lost packet takes about 1 ½ RTT to be retransmitted, and subsequent packets take about ½ RTT to transmit.

• These latencies can be reduced with active queue management, which sends ECN bits to the sender when congestion is imminent.

• Instead of dropping packets, the router sets the ECN bit, which propagates back to the sender in acknowledgements; the sender then reduces its window size (sketch below).
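A minimal sketch of the ECN-capable queueing behaviour described above; the threshold-based marking and field names are illustrative simplifications, not a real AQM implementation.

def enqueue(packet, queue, capacity, ecn_threshold):
    """Router with ECN: mark packets instead of dropping when congestion is imminent."""
    if len(queue) >= capacity:
        return False                # queue full: the packet is dropped
    if len(queue) >= ecn_threshold:
        packet.ce = True            # set Congestion Experienced; receiver echoes it in ACKs
    queue.append(packet)
    return True                     # on seeing the echoed mark, the sender halves CWND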

Page 8:

Latency in Sender Buffer

• A packet in the send buffer cannot be sent until the window slides onto it, i.e. until the CWND packets ahead of it are in flight and an ACK opens the window.

• Sender-buffer latency is caused by the presence of such blocked packets.

• If the application writes packets into the buffer faster than the network throughput, a large number of blocked packets accumulate.

• The latency due to these packets is typically in the range of 10-20 RTT, much higher than the latency caused by packet loss and congestion control (estimate below).
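A back-of-the-envelope estimate of why blocked packets dominate: a newly written packet must wait for everything ahead of it to drain at roughly CWND packets per RTT. The buffer and window sizes below are assumed for illustration only.

def sender_buffer_delay_rtts(buffered_packets, cwnd):
    """Approximate queueing delay, in RTTs, seen by a newly written packet."""
    blocked = max(buffered_packets - cwnd, 0)
    return blocked / cwnd

print(sender_buffer_delay_rtts(buffered_packets=64, cwnd=4))   # 15.0 RTTs of buffering delay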

Page 9:

Latency in Sender Buffer (figure)

Page 10:

Adaptive Buffer Tuning

• Solution: ensure the sender buffer stores only as many packets as CWND.

• As CWND changes over time, the sender-buffer size should change with it.

• The buffer size should not drop below CWND, or TCP throughput will suffer.

• Thus, adaptive buffer tuning ensures there are no blocked packets.

• It moves the latency to the application level, which has more control over the packet rate thanks to techniques such as prioritized packet sending (sketch below).
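A minimal sketch of the idea as stated above; the write path below is illustrative (not the paper's kernel patch) and counts packets rather than bytes.

def app_write(packet, send_buffer, cwnd):
    """Admit a packet only while the buffer holds fewer than CWND packets;
    otherwise the application keeps it, where it can still be re-prioritized or dropped."""
    if len(send_buffer) < cwnd:
        send_buffer.append(packet)
        return True
    return False   # blocked at the application level instead of inside TCP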

Page 11:

Evaluation

• The authors evaluated MIN_BUF TCP under varying and heavy network loads.

• The evaluation was performed on a Linux 2.4 test bed that simulated WAN conditions by introducing delay at an intermediate router.

• The topology used in these experiments was a "dumbbell" topology, with a router in the middle to simulate latency and to handle forward and reverse congestion flows.

Page 12:

Evaluation

• Heavy load was simulated by running a varying number of long-lived TCP streams, short-burst TCP streams, and a constant-bit-rate UDP stream.

• Latency was measured from the application's write time on the sender side to its read time on the receiver side.

• The authors chose a round-trip delay of 100 ms on a 30 Mbps link to simulate a connection between the East and West coasts of the United States.

• Both TCP and MIN_BUF TCP were run while the other streams mentioned above were started and stopped at random times; both forward and reverse flows were simulated.

Page 13:

Results

• MIN_BUF TCP performs better than TCP under both forward and reverse congestion flows.

• MIN_BUF TCP under reverse congestion performs somewhat worse than under forward congestion, probably due to loss of ACKs.

Page 14:

Results

• Both protocols were also tested with ECN enabled, using DRD active queue management, which marks a percentage of packets with ECN when the queue length in the router exceeds a threshold.

• Both protocols perform better with ECN than without, and MIN_BUF TCP still shows lower latency than TCP.

• Each spike in the TCP graph is due to blocked packets; the spikes in the MIN_BUF TCP graph are due to decreases in CWND.

Page 15:

Effect On Throughput

• The MIN_BUF approach impacts network throughput because there are no new packets waiting in the send buffer.

• The TCP stack has to wait for the application to write more bytes before new data can be sent.

• A standard TCP implementation has no such issue because the send buffer is large enough to already hold new packets.

• Slightly increasing the send-buffer size can fix the problem.

• This requires studying the events that trigger the sending of new packets.

Page 16:

Effect On Throughput

• ACK arrival – An ACK is received for the first packet in the TCP window; one new packet can be sent. Required send-buffer size: CWND + 1.

• Delayed ACK – To save bandwidth, one ACK is sent for every two packets; two new packets can be sent. Required send-buffer size: CWND + 2.

• CWND increase – During the additive-increase phase of TCP steady state, CWND is incremented for each ACK; two new packets can be sent. Required send-buffer size: CWND + 2, or CWND + 3 with delayed ACKs.

• ACK compression – The sender receives ACKs in bursts from routers; in the worst case, CWND packets are ACKed together. Required send-buffer size: 2 * CWND, or 2 * CWND + 1 with CWND increase.

Page 17:

• To study the impact of the send-buffer size on latency and throughput, two parameters A (> 0) and B (>= 0) are introduced.

• Send-buffer size = A * CWND + B.

• A MIN_BUF stream with parameters A and B is denoted MIN_BUF(A, B) (sketch below).
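A minimal sketch of the MIN_BUF(A, B) limit defined above; the helper is illustrative, counts packets rather than bytes, and the parameter choices in the comment are examples rather than the paper's recommendations.

def min_buf_limit(cwnd, A=1, B=0):
    """Send-buffer size (in packets) for a MIN_BUF(A, B) stream: A * CWND + B."""
    assert A > 0 and B >= 0
    return A * cwnd + B

# MIN_BUF(1, 0) keeps only the in-flight packets; a small B (e.g. MIN_BUF(1, 3))
# leaves room for delayed ACKs and CWND growth, per the events on the previous slide.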

Page 18:

Protocol Latency Distribution for Forward-Path Congestion Topology (figure)

Page 19:

Protocol Latency Distribution for Reverse-Path Congestion Topology (figure)

Page 20:

Normalized Throughput (figure)

Page 21:

System Overload

• MIN_BUF TCP reduces latency and allows the application to write data to the kernel at a fine granularity.

• This can cause higher system overhead, because more system calls are invoked to write the same amount of data as with standard TCP.

• The write and poll system calls are the costliest.

Page 22:

Implementation

• The TCP stack can be modified to limit the send-buffer size to A * CWND + MIN(B, CWND).

• The application writes data to the buffer when at least one new packet can be admitted.

• SACK correction – For selective acknowledgements, a sacked_out term is introduced to count the selectively acknowledged packets (sketch below).

• Because the application has finer control over the data, outgoing data can be aligned into MSS-sized packets to minimize fragmenting or coalescing latency.
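A minimal sketch of the admission check described above; the function and field names are illustrative, not the actual Linux TCP variables, and the treatment of sacked_out is one plausible reading of the slide's SACK correction.

def send_buffer_limit(cwnd, sacked_out, A=1, B=0):
    """Packets the send buffer may hold: A * CWND + min(B, CWND), plus packets that
    are already selectively ACKed (they still occupy the buffer but are not in flight)."""
    return A * cwnd + min(B, cwnd) + sacked_out

def can_admit(packets_in_buffer, cwnd, sacked_out, A=1, B=0):
    """Let the application write when at least one new packet fits in the buffer."""
    return packets_in_buffer < send_buffer_limit(cwnd, sacked_out, A, B)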

Page 23:

Application Level Evaluation

• Qstream – an open-source adaptive streaming application.

• Uses Scalable MPEG (SPEG), similar to MPEG-1, with conceptual data layers: a base layer of lowest quality, with each subsequent layer improving on it.

• Uses Priority-Progress Streaming (PPS).

• Adaptation period – the period in which the sender sends data in prioritized order; the base layer has the highest priority.

• Adaptation window – the data within an adaptation period; unsent data from this window is dropped (sketch below).

• Dropped windows – when sender-side bandwidth is low, entire windows can be dropped.
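A minimal sketch of priority-progress sending as described above: within each adaptation window, packets go out in priority order and whatever is unsent at the deadline is dropped. The structure and names are illustrative, not Qstream's actual API; packets are assumed to carry a numeric priority where 0 is the base layer (highest priority).

import time

def send_adaptation_window(packets, deadline, send):
    """Send packets in priority order (base layer first); drop the rest at the deadline."""
    for pkt in sorted(packets, key=lambda p: p.priority):
        if time.monotonic() >= deadline:
            break                 # window expired: remaining lower-priority data is dropped
        send(pkt)                 # with MIN_BUF, this blocks only while the TCP window is full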

Page 24:

• Latency distribution vs. latency tolerance.

• Adaptation window = 4 frames, or 133.3 ms.

• The figures show that, with increasing load, the percentage of transmitted packets that arrive in time is only marginally affected for MIN_BUF.

Page 25:

• Adaptation window = 2 frames, or 66.6 ms.

• The latency tolerance can be made tighter when the adaptation window is made smaller; the trade-off is more variation in video quality.

Page 26:

Page 27:

Conclusion

• The paper shows that low-latency streaming over TCP is feasible by tuning TCP's send buffer so that it keeps just the packets that are currently in flight.

• A few extra packets in the send buffer help recover much of the lost network throughput.

• This approach can be used in any application that prioritizes its data.

• As an example, the Qstream application was used to show that TCP buffer tuning yields significant benefits in end-to-end latency.

Page 28:

Questions?