TRANSCRIPT
Electronic Visualization Laboratory University of Illinois at Chicago
EMERGE Deep Tech Mtg
Oliver Yu, Jason Leigh, Alan Verlo
Performance Parameters
• Latency = Recv Time - Send Time
– Note: the Recv host and Send host clocks are synchronized.
• Jitter = E[|Li - E[L]|]
– Note: E[ ] is the expectation over the data set.
– L is the set of the 100 most recent latency samples.
• Packet Loss Rate
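A minimal sketch of how these three parameters could be tracked in code (illustrative only; the class and method names are not from the talk, and jitter is computed as the mean absolute deviation over the 100-sample window):

```python
from collections import deque

class FlowMonitor:
    """Tracks the performance parameters from the slide: latency,
    jitter over the 100 most recent samples, and packet loss rate."""

    def __init__(self, window=100):
        self.latencies = deque(maxlen=window)  # L: 100 most recent samples
        self.sent = 0
        self.received = 0

    def record(self, send_time, recv_time):
        # Latency = Recv Time - Send Time (hosts assumed clock-synchronized)
        self.sent += 1
        self.received += 1
        self.latencies.append(recv_time - send_time)

    def record_loss(self):
        # A packet was sent but never arrived
        self.sent += 1

    def jitter(self):
        # Jitter = E[|Li - E[L]|]: mean absolute deviation of L
        mean = sum(self.latencies) / len(self.latencies)
        return sum(abs(l - mean) for l in self.latencies) / len(self.latencies)

    def loss_rate(self):
        return 1.0 - self.received / self.sent
```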
[Chart: Latency vs. Time for background traffic loads of 20, 40, 60, and 80 Mbps; y-axis latency runs 0 to 0.009.]
[Chart: Jitter vs. Time for background traffic loads of 20, 40, 60, and 80 Mbps; y-axis jitter runs 0 to 0.008.]
Packet Loss Rate vs. Background Traffic

[Chart: packet loss rate (%) vs. background traffic load, for loads of 20, 40, 45, 50, 60, and 80 Mbps; y-axis runs 0 to 0.5. Two curves: foreground traffic load of 250 Kbps and foreground traffic load of 3 Mbps.]

Note: These experiments were run on a best-effort platform. They will be repeated on a DiffServ platform when one is available.
Forward error correction scheme for low-latency delivery of error-sensitive data
• Ray Fang, Dan Schonfeld, Rashid Ansari
• Transmit redundant data over high-bandwidth networks so that UDP streams can be error-corrected, achieving lower latency than TCP.
• Transmit redundant data to improve the quality of streamed video by correcting for lost packets.
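The specific coding scheme is not detailed in these slides; as an illustration of the idea, the simplest FEC appends a single XOR parity packet to each group of data packets, which lets the receiver rebuild any one lost packet in the group without a TCP-style retransmission round trip (function names here are hypothetical):

```python
def xor_bytes(blocks):
    # XOR equal-length byte blocks together
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

def encode_group(packets):
    """Append one XOR parity packet to a group of data packets."""
    return packets + [xor_bytes(packets)]

def recover(received):
    """received: the group with exactly one entry set to None (the lost
    packet). XORing the survivors, parity included, rebuilds the loss."""
    lost = received.index(None)
    rebuilt = xor_bytes([p for p in received if p is not None])
    data = received[:-1]          # drop the parity slot
    if lost < len(data):          # if only the parity was lost, data is intact
        data[lost] = rebuilt
    return data
```

With one parity per three data packets this mirrors the 3:1 redundancy used in the experiments below, at the cost of one extra packet and the encoding delay per group.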
FEC Experiments
• EVL to SARA, Amsterdam (40 Mb/s, 200 ms round-trip latency)
• Broader questions:
– Can FEC provide a benefit? How much?
– What is the tradeoff between redundancy and benefit?
• Specific questions:
– TCP vs. UDP vs. FEC/UDP
– How much jitter does FEC introduce?
– High-throughput UDP vs. FEC/UDP, to observe loss & recovery
– UDP vs. FEC with background traffic
– FEC over QoS: WFQ or WRED congestion management; hypothesis: WRED is bad for FEC
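As a sketch of the basic measurement behind these questions, a one-way latency probe can carry the send timestamp in the UDP payload. Here it runs over loopback, where sender and receiver trivially share a clock; the real experiments ran between synchronized hosts, and this is not the actual EMERGE test harness:

```python
import socket
import time

# Receiver: bind a UDP socket to an ephemeral loopback port
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))
recv_sock.settimeout(2.0)
port = recv_sock.getsockname()[1]

# Sender: put the send timestamp in the payload
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(repr(time.time()).encode(), ("127.0.0.1", port))

# Latency = Recv Time - Send Time, recovered from the payload
payload, _ = recv_sock.recvfrom(2048)
latency = time.time() - float(payload)
```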
UDP vs TCP vs FEC/UDP with 3:1 redundancy

Packet size (bytes)   UDP latency (ms)   TCP latency (ms)   FEC over UDP latency (ms)
128                   77.0               115                90.3
256                   81.7               121                95.3
512                   101.0              150.8              126.0
1024                  143.0              210                189.0
2048                  227.3              339                314.3
Latency of transmitting 100 packets under UDP, TCP, and FEC/UDP with 3:1 redundancy.

[Chart: 1-way latency in ms (0 to 400) vs. packet size in bytes (0 to 2500), with curves for UDP, TCP, and FEC over UDP.]
FEC's greatest benefit is with small packets; larger packets impose greater overhead. As redundancy decreases, FEC performance approaches plain UDP.
Packet Loss over UDP vs FEC/UDP

Data rate (bits/s)   Packet size (bytes)   Loss rate, UDP (%)   Loss rate, FEC over UDP (%)
1M                   128                   0.4                  0
1M                   256                   0.2                  0
1M                   1024                  0.2                  0
10M                  128                   30                   4
10M                  256                   25                   3
10M                  1024                  21                   1.5
Application Level Experiments
• Two possible candidates for instrumentation and testing over EMERGE:
– Teleimmersive Data Explorer (TIDE) – Nikita Sawant, Chris Scharver
– Collaborative Image Based Rendering Viewer (CIBR View) – Jason Leigh, Steve Lau [LBL]
TIDE
CIBR View
Common Characteristics of both Teleimmersive Applications
[Diagram: CAVE and ImmersaDesk tele-immersion clients connect to a tele-immersion server, which fronts remote data & computation services. A compute or database query is spawned by a tele-immersion client and managed by the tele-immersion server.]
• Research Goal:
– Hope to see improved performance with QoS and/or TCP tuning enabled.
– Monitor the applications and characterize their network behavior as it stands over non-QoS-enabled networks.
– Identify & remove bottlenecks in the application.
– Monitor again to verify the bottlenecks are removed.
– Monitor over QoS-enabled networks.
– End result: a collection of techniques and tools to help tune similar classes of collaborative distributed applications.
• Instrumentation: Time, Info (to identify a flow), Event (to mark a special event), Inter-msg delay, 1-way latency, Read bw, Send bw, Total read, Total sent. Sample record:
• TIME=944767519.360357 INFO=Idesk_cray_avatar EVENT=new_avatar_entered MIN_IMD=0.000254 AVG_IMD=0.218938 MAX_IMD=1.170086 INST_IMD=0.134204 MIN_LAT=0.055527 AVG_LAT=0.169372 MAX_LAT=0.377978 INST_LAT=0.114098 AVG_RBW=74.292828 INST_RBW=750.061367 AVG_SBW=429.815557 INST_SBW=704.138274 TOTAL_READ=19019 TOTAL_SENT=110033
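The record above is a flat sequence of KEY=value fields, so a few lines of Python suffice to parse it back into typed fields (an illustrative helper, not part of the actual instrumentation):

```python
def parse_record(line):
    """Split a KEY=value instrumentation record into a dict,
    converting numeric fields to int or float where possible."""
    out = {}
    for field in line.split():
        key, _, value = field.partition("=")
        # Try int first so counters like TOTAL_READ stay integers,
        # then float for timestamps/latencies, else keep the string.
        try:
            out[key] = int(value)
        except ValueError:
            try:
                out[key] = float(value)
            except ValueError:
                out[key] = value
    return out
```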
Characterization of TIDE & CIBRview streams

Stream                     Estimated bandwidth (bits/s)   DiffServ type               Burstiness        Latency sensitive   Jitter sensitive   Error sensitive
UDP avatar                 6K x n (15 fps)                Interactive real-time       Constant          Y                   Y                  N
UDP audio stream           64K x n                        Interactive real-time       Brief             Y                   Y                  N
UDP video stream           10M (2-way only)               Interactive real-time       Constant          Y                   Y                  Y/N
UDP stream with playback   depends                        Non-interactive real-time   Constant          Y                   N                  Y/N
TCP control data           7K x n                         Reliable                    Brief             Y/N                 Y/N                Y
TCP bulk data              depends                        Best effort                 Sustained burst   N                   N                  Y
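Once a DiffServ platform is available, flows like these would be classified by marking each socket's IP TOS/DSCP byte per stream class. A minimal sketch; the codepoint assignments below are hypothetical choices for illustration, not values given in the slides:

```python
import socket

# Hypothetical per-class DiffServ codepoints (not from the slides):
DSCP_EF = 46    # Expedited Forwarding: latency/jitter-sensitive avatar & audio
DSCP_AF41 = 34  # Assured Forwarding: video
DSCP_BE = 0     # Best effort: TCP bulk data

def mark_socket(sock, dscp):
    """Set the IP TOS byte so routers can classify the flow.
    The DSCP occupies the top 6 bits of the TOS byte, hence the shift."""
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, dscp << 2)

# e.g. mark an avatar stream's socket as Expedited Forwarding
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
mark_socket(udp, DSCP_EF)
```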
QoSiMoto: QoS Internet Monitoring Tool
• Kyoung Park
• Reads NetLogger data sets from a file or from the netlogger daemon.
• CAVE application; runs on SGI and Linux.
• An information visualization problem: how to leverage 3D; averaging of data points over long traces.
• www.evl.uic.edu/cavern/qosimoto