
Center for Computation & Technology CCT Technical Report Series CCT-TR-2009-18

Strategies for Remote Visualization on a Dynamically Configurable Testbed— The eaviv Project

Andrei Hutanu, Department of Computer Science and Center for Computation & Technology, Louisiana State University, Baton Rouge, LA USA Gabrielle Allen, Department of Computer Science and Center for Computation & Technology, Louisiana State University, Baton Rouge, LA USA Bart Semeraro, National Center for Supercomputing Applications at the University of Illinois, Champaign, IL USA Jinghua Ge, Center for Computation & Technology, Louisiana State University, Baton Rouge, LA USA


Posted December 2009.

cct.lsu.edu/CCT-TR/CCT-TR-2009-18

The author(s) retain all copyright privileges for this article and accompanying materials. Nothing may be copied or republished without the written consent of the author(s).


STRATEGIES FOR REMOTE VISUALIZATION ON A DYNAMICALLY CONFIGURABLE TESTBED – THE EAVIV PROJECT

ANDREI HUTANU(1,2), GABRIELLE ALLEN(1,2), BART SEMERARO(3), JINGHUA GE(1)

(1) Center for Computation & Technology, Louisiana State University, Baton Rouge, LA 70803, USA.
(2) Department of Computer Science, Louisiana State University, Baton Rouge, LA 70803, USA.
(3) National Center for Supercomputing Applications at the University of Illinois, Champaign, IL 61820, USA.

Abstract. This report lays out a plan to create a dynamic network-connected testbed used to develop distributed visualization applications as a part of the eaviv project. The eaviv project addresses challenges in building efficient and reliable cyberinfrastructure for distributed scientific applications by focusing on innovative use of network technologies to coherently optimize systems of compute and data resources. Focusing on the use case of distributed visualization, the eaviv project works on designing strategies for distributed applications on configurable high speed networks, with a vision of considering networks at the same level as compute and storage resources in designing and optimizing cyberinfrastructure. Working with distributed national (LSU, NCSA, ORNL, TACC) and international (Masaryk University) partners, as well as network providers (Internet2, LONI), the project will build a real world production testbed for implementation and evaluation of strategies. This document is based on a proposal submitted to the NSF EArly-concept Grants for Exploratory Research (EAGER) program titled “Strategies for Remote Visualization on a Dynamically Configurable Testbed”, now funded as NSF #0947825.

1. Introduction

Our work addresses fundamental issues in designing and implementing distributed applications capable of enabling new modes of scientific enquiry through the use of modern compute, data, network and collaborative services. The eaviv project is developing distributed visualization applications that will help us define general strategies and optimizations for distributed applications, where network services are considered as a first class resource. We investigate challenges for building usable tools and strategies, taking into consideration issues that arise when solving real scientific problems using real machines connected with real networks. One of the main issues application developers face today is that there are no available testbeds that enable coordinated use of dynamically configurable networks and compute resources.

The NSF OptIPuter and other projects have developed and continue to develop a series of tools that depend on and are able to take advantage of a certain type of network service [1–3]. Now there is a need for theoretical analysis and development of strategies for how these technologies should be combined and used in particular scenarios to solve existing problems with the large scale resources provided through NSF and other agencies, such as TeraGrid and Blue Waters.

This document describes a plan for researching and building an experimental visualization application, distributing the visualization pipeline into three components (data, rendering, video streaming) using high-speed network transport protocols for data transfer, parallel and distributed rendering software, and high-performance video streaming to large displays. This application is designed to run on a new testbed combining application-controlled network resources, high-speed data servers and visualization clusters. A prototype of the distributed visualization application was part of the winning entry at the SCALE 2009 International Scalable Computing Challenge [4].

To achieve our vision we have built a partnership between universities, national centers (NSF and DOE), network providers, and application developers. We are collaborating with ORNL, TACC, Internet2, Masaryk University and network providers such as LONI^1, ESNet^2, ICCN^3 and OmniPoP^4.

The eaviv project's core thrusts are:

(1) Develop and research strategies for designing and optimizing distributed applications on configurable high speed networks, where the networks are considered of the same importance as compute and storage resources, with a focus on distributed visualization;

(2) Build and evaluate a real world production testbed for implementation and evaluation of strategies.

This work will pave the way in showing how dedicated networks can be immediately used with national resources such as NSF TeraGrid/XD and Blue Waters to provide new levels of service for scientific applications. We expect the findings from our research to have immediate impact in these projects, and to potentially influence the design of future cyberinfrastructure and distributed applications.

2. Background and Challenges

2.1. Large Scale Visualization. Dealing with the challenge of analyzing large data is a persistent problem in computational science. The panel session on terascale visualization held at the 1997 IEEE Visualization conference concluded that:

“Massively Parallel Supercomputers are once again quickly outpacing our ability to organize, manage, and understand the prodigious amounts of data they generate. Graphics technology and algorithms have greatly aided in analyzing the modest datasets of years past, but rarely with enough interactivity to squelch the end-user's exploratory questions. ... What is the architecture of tomorrow's high-end visualization systems? How much data can we even expect to pull off of these massively parallel machines? What are the new computer graphics technologies that can aid in terascale visualization?”

We are asking many of the same large scale visualization questions now, in the era of petascale computing, that we asked a decade ago. What constitutes large data is largely determined by the capability of state of the art computing and observational facilities to produce data that is beyond the processing ability of a visualization system.

The measure of “large” data has changed over the years, from gigabytes in the early to mid 1990s to the current petabyte, and beyond in the near future. During this time the nature of many of the challenges imposed by data size has remained substantially unchanged. What has changed is the computing environment, or cyberinfrastructure. Our goal is to understand the challenges of large scale visualization in the context of the current and future scientific computing environment and examine possible strategies for improving the state of the art.

Visualization Pipeline. The visualization pipeline (see Fig. 1) is a well-known method of describing the visualization process. We can consider the pipeline to be composed of five stages: a data source (disk or memory), a data filter (selection of data of interest), geometry generation (for example, creating triangles), rendering, and image display on a screen. Rendering is the process that transforms visualization primitives (such as points or triangles) into images. For large datasets, a common approach used to create visualization systems is to build distributed visualization pipelines [5, 6], a technique that we utilize in the eaviv project; a sketch of the five stages follows Fig. 1 below.

^1 Louisiana Optical Network Initiative (http://www.loni.org)
^2 Energy Sciences Network (http://www.es.net/)
^3 Intercampus Communications Network (http://www.cites.illinois.edu/projects/iccn/index.html)
^4 OmniPoP (http://www.cic.net/Home/Projects/Technology/OmniPoP/Introduction.aspx)


Figure 1. Visualization pipeline with five stages
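
The five-stage structure maps naturally onto composable code. The following is a minimal, illustrative Python sketch of the pipeline stages; the synthetic data and the point-splatting "renderer" are stand-ins chosen for brevity, not the eaviv implementation.

    import numpy as np

    def source():
        # Stage 1 -- data source: a synthetic scalar field standing in
        # for simulation output read from disk or memory.
        return np.random.rand(64, 64, 64)

    def data_filter(volume, threshold=0.9):
        # Stage 2 -- filter: keep only the data of interest.
        return np.where(volume > threshold, volume, 0.0)

    def geometry(volume):
        # Stage 3 -- geometry generation: in practice triangles (e.g. an
        # isosurface); here just the coordinates of the selected cells.
        return np.argwhere(volume > 0.0)

    def render(primitives):
        # Stage 4 -- rendering: turn primitives into an image.
        image = np.zeros((512, 512))
        for x, y, _ in primitives:
            image[(x * 8) % 512, (y * 8) % 512] += 1.0
        return image

    def display(image):
        # Stage 5 -- display: deliver pixels to the screen (stubbed).
        print(f"displaying image with {int(image.sum())} splatted points")

    display(render(geometry(data_filter(source()))))

In a distributed pipeline, the function boundaries above become network links between data servers, rendering clusters and displays, which is exactly where the transport issues of Sec. 3.3 arise.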

Visualization Process. The visualization challenges cannot be understood without understanding the visualization process. This process involves translating observed or simulated data into a visual representation and then displaying that representation to the user. The visual representation illustrates some characteristic of the data of interest to scientists. This part of the process encompasses everything that happens to raw data between its origin and the pixels that represent it on the display surface. Another, often overlooked, part of the process is the interaction of the user with image data. At this stage, the user examines the visualization and decides what information it holds. He or she then uses that information to adjust the parameters used to create the visualization, altering the image to gain further insight. The quality of this interactive experience is the measure of the performance of a scientific visualization system.

Challenges. As described in [7], the challenges of large scale data are related to data scale and complexity. Scale challenges are related to the difficulty in managing large amounts of data. Complexity challenges relate to the problem of reducing complex data to a comprehensible form.

To illustrate the challenges associated with data size we consider the visualization of the results of a large scale direct simulation of fluid turbulence. The solution domain is a regular cube with 12,288 points in each direction, and the output results in over 44 terabytes of data per time step, for 200 time steps. Transferring a single time step of this data over a 10 Gbps connection would require nearly 10 hours. 44 terabytes of data cannot be visualized on a single workstation even if the data transfer were possible. If only ten percent of the cells contributed a single triangle to an isosurface representation of the data, that would amount to over 185 billion triangles. With current high-end graphics cards this gives a single frame rendering rate of about 10 minutes (the arithmetic is reproduced in the sketch after the list below). This example illustrates some of the fundamental problems associated with large data. These are:

• Insufficient memory. An analysis system capable of handling terabytes of data must have a large main memory footprint in order to avoid the I/O penalty of out-of-core analysis. eaviv will use clusters to have access to larger memory capacity.

• I/O cost. Reading and writing large data requires a high performance parallel file system. eaviv will use distributed resources connected with high-speed networks to address this.

• Impracticality of data transfer. It is impractical to move large amounts of data from one location to another over a network. We are looking at visualization of subsections of data, using data filtering techniques on data servers and progressive visualization on the rendering cluster to reduce the amount of data that is transferred.

• Insufficient rendering capability. The previous example highlighted the inability of common workstations to handle large data. eaviv will use large visualization clusters for parallel rendering in order to achieve interactive rendering rates.
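
The figures in the turbulence example above can be checked with straightforward arithmetic. In the Python sketch below, only the assumed rendering rate (300 million triangles per second, plausible for a high-end 2009 GPU) is our own assumption; the rest comes from the text.

    points = 12_288 ** 3          # regular cube, 12,288 points per direction
    timestep_bytes = 44e12        # over 44 TB per time step (from the text)

    # Transfer time for one time step over a 10 Gbps connection.
    transfer_s = timestep_bytes * 8 / 10e9
    print(f"transfer: {transfer_s / 3600:.1f} hours")       # ~9.8 hours

    # Ten percent of cells contributing one triangle each.
    triangles = 0.10 * points
    print(f"triangles: {triangles / 1e9:.1f} billion")      # ~185.5 billion

    # Single-frame rendering time at an assumed 300 Mtri/s.
    print(f"frame: {triangles / 300e6 / 60:.0f} minutes")   # ~10 minutes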

2.2. Networks. Networks have traditionally been considered opaque to applications. The TCP/IP design that stands at the base of the Internet architecture has adopted a layered approach where the network operations and network services are almost completely hidden from applications. In the high-end computing area, we are witnessing an explosion of data generated by simulations and scientific experiments. The effect of this on the network is an increase in data transfer volumes that cannot be managed using regular Internet services.


To move towards providing sufficient network capacity for scientific applications, new networks (e.g. Internet2, LONI) have been deployed for the scientific community. The network capacity, however, is only a part of the problem for applications. Increased capacity increases the expectations on the performance of network tools. Existing tools based on the current Internet architecture and protocols fail to provide the expected performance [8] (see Transport Protocols, Sec. 3.3).

Another issue is the non-deterministic nature of the shared infrastructure. In the compute world most scientific applications are run in a time-share mode where each application has exclusive access over the allocated resources for the time it is executed. A similar method can also be used for network resources. Time sharing and reserved, exclusive access to network resources requires a mechanism that allocates, reserves and separates network resources for individual applications. A promising approach, on-demand provisioning of lightpaths, has been used in a series of experiments around the world (Phosphorus, G-Lambda, Enlightened). Another approach separates traffic on the same physical medium using Virtual LANs^5. Some of the first networks that provided dedicated dynamic connections (50 Mbps to 10 Gbps) were DOE UltraScienceNet [9, 10] and NSF CHEETAH [11]. Internet2 has introduced the ION service that allows users and applications to dynamically configure and allocate network circuits that provide guaranteed, dedicated network bandwidth. ESnet has recently activated a dynamic circuit network dedicated solely to scientific research, the Science Data Network (SDN).

eaviv analyzes this new, emerging network infrastructure, which uses a new architecture based on determinism, schedules and application-level control of network resources, where the network is no longer hidden from users and applications behind layers of abstraction.

Challenges. This project will address the following challenges:

• Transport protocol performance. TCP is not able to provide adequate network transport performance for high-bandwidth, high-latency network circuits. We investigate variants and alternatives that enable us to use the available network capacity to the maximum. Image delivery (video streaming) is particularly sensitive to transport protocol performance, and we are planning to use UDP-based video transmission software to maximize the quality of video transmission.

• Network latency. For long distance wide area networks, methods to hide and overcome the damaging effect of network latency need to be explored. We address this by using a pipelined remote data access system optimized for high-throughput execution.

• Resource selection. If multiple network circuits or multiple types of service are available, the application needs to select the most suitable for the work to be performed.

3. Research and Development Activities

The eaviv testbed will form an essential component of infrastructure for this work, supporting the research activities relating to visualization and the use of network technologies.

3.1. Testbed. We will build a wide-area testbed combining network, compute and graphics resources from four locations in the US (CCT, NCSA, ORNL and TACC) and Masaryk University in the Czech Republic (Fig. 2).

Network. Internet2's ION is a revolutionary optical circuit network that provides dedicated bandwidth for the most demanding applications. ION enables users to create point-to-point circuits across the Internet2 infrastructure using automated control software, and will be available for our project use at the three testbed sites as offered by our network providers.

The Louisiana Optical Network Initiative (LONI) is a state-of-the-art optical network infrastructure that connects research universities in Louisiana and Mississippi to each other and to external networks. LONI will provide access to ION services to CCT at LSU.

^5 Defined as a part of the IEEE 802 standards: http://standards.ieee.org/getieee802/


Figure 2. Schematic of the experimental testbed connecting CCT, NCSA, ORNL, TACC and MU

The DOE's Energy Sciences Network (ESnet) is a high-speed network serving DOE scientists and collaborators worldwide. ESnet has completed installations for a dynamic circuit network, similar to ION, dedicated solely to scientific research, called the Science Data Network (SDN). SDN is partnering with ION, and ESnet will provide dynamically provisioned capabilities to ORNL.

Four sites (LSU, ORNL, NCSA and TACC) are also connected by the high-speed TeraGrid network, which can be used for visualization experiments not needing dedicated network services.

Visualization Resources. This work will focus on the production compute and visualization resources provided by TeraGrid, LONI and others. Additionally, we are assembling a set of visualization resources, not part of a production testbed, that will be dedicated to research and development.

3.2. Distributed Visualization. We look first at the case where simulation data is stored on a local data server attached to the supercomputer used to generate it. The user is connected to the data server using a high-speed network infrastructure. At various locations in the network, compute and rendering resources are available that can be used to build a distributed visualization pipeline.

Various strategies can be used to tackle the problem, such as transferring the data to machines local to the user to be rendered locally, or rendering on a machine close to the storage and streaming images to the user. In many cases, the graphics resources available at either location (user or data) are insufficient, and more powerful rendering clusters in the network need to be utilized. Realistically, a single workstation in a rendering cluster will not be able to render more than a few gigabytes of data. Because of the overhead of parallelism, the amount of data rendered interactively on a single cluster does not scale linearly with the number of nodes, so even a single cluster may not be able to render enough data.
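
The trade-off between these placements can be framed as a simple comparison of what each option asks of the network. A rough sketch, with illustrative numbers of our own choosing:

    def transfer_minutes(data_bytes, link_bps):
        # Option A: move the data to the user and render locally.
        return data_bytes * 8 / link_bps / 60

    def stream_gbps(width, height, fps, bits_per_pixel=24):
        # Option B: render near the data and stream uncompressed video.
        return width * height * fps * bits_per_pixel / 1e9

    print(f"moving a 2 TB subset over 10 Gbps: "
          f"{transfer_minutes(2e12, 10e9):.0f} min")
    print(f"streaming 4096x2048 at 30 fps: "
          f"{stream_gbps(4096, 2048, 30):.1f} Gbps")

Even option B consumes a large fraction of a 10 Gbps circuit for a tiled display, which is why the video streaming path needs the same attention to transport protocols as the data path.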

eaviv will build an advanced prototype of a distributed volume rendering application that goes beyond the state of the art in existing distributed visualization systems and combines features such as optimized remote data access, distributed rendering, network awareness and high-end video streaming, not available in this combination in any other system today.

The application will use multiple clusters, with each cluster rendering a different section of the dataset, or a different dataset. The results of the rendering processes will be streamed in parallel to the user(s), who will see all the data together on a large display. Users will be able to simultaneously interact with all the rendering processes, and this will enable some of the visualization parameters (such as the viewpoint, or the timestep of a time-dependent dataset) to be kept synchronized; a sketch of this control fan-out follows.
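
As an illustration of the interaction fan-out, the sketch below broadcasts a control update to every rendering session. The JSON message format and the endpoint list are hypothetical, not the eaviv wire protocol.

    import json
    import socket

    RENDER_ENDPOINTS = [("127.0.0.1", 7000),   # placeholders: in practice
                        ("127.0.0.1", 7001)]   # the head node of each cluster

    def broadcast_view(viewpoint, timestep):
        # Send the same update everywhere so shared parameters
        # (viewpoint, timestep) stay synchronized across clusters.
        msg = json.dumps({"viewpoint": viewpoint,
                          "timestep": timestep}).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            for endpoint in RENDER_ENDPOINTS:
                sock.sendto(msg, endpoint)

    broadcast_view(viewpoint=[0.0, 0.0, 5.0], timestep=42)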


Figure 3. Architecture of the proposed distributed volume rendering system. Blue lines indicate video streaming, green lines indicate data transfer, and dotted lines indicate the flow of interaction/control.

In addition to parallel and distributed rendering, we will investigate methods of speeding up data transfer from the data source to the rendering processes. High-capacity, possibly dedicated links connecting the data source to the rendering processes cannot be efficiently utilized using standard TCP. Our system will support and tune experimental protocols so that it will be able to achieve maximum throughput in any network. A useful application optimization is to pre-distribute the data into the network and load it in the main memory of the compute resources, to reduce the load time experienced by the visualization application. Our proposed approach is illustrated in Fig. 3.
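
Whatever protocol is used, tuning on long fat networks starts from the bandwidth-delay product, which dictates how much data must be kept in flight. A small sketch with illustrative link parameters:

    def bdp_bytes(capacity_bps, rtt_s):
        # Bytes that must be in flight to keep a link fully utilized.
        return capacity_bps * rtt_s / 8

    # 10 Gbps circuit at metro, national and transatlantic round trips.
    for rtt_ms in (10, 50, 120):
        print(f"{rtt_ms:>3} ms RTT -> "
              f"{bdp_bytes(10e9, rtt_ms / 1e3) / 1e6:.1f} MB in flight")

Default TCP buffers are orders of magnitude smaller than the tens to hundreds of megabytes these circuits require, which is one concrete reason standard TCP underutilizes them.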

Parallel Rendering. As the data from computational experiments increases, we need rendering tools that run on parallel computers with large distributed memories, and possibly graphics accelerators, to enable efficient data exploration and interaction. Existing parallel visualization software packages, such as VisIt [12] and ParaView [13], provide various parallel visualization algorithms on scalar, vector, or tensor data that run the computationally intensive rendering tasks on parallel supercomputers. In addition to investigating these existing visualization tools, eaviv will use and build upon a parallel rendering tool developed at CCT that has been designed to use remote rendering and distributed I/O.

Video Streaming. Video streaming connects the output of the rendering component to end-users. The OptIPuter project has developed SAGE [3], a video streaming mechanism that supports high-resolution displays and high-performance UDP-based network transport, which eaviv uses to implement video streaming in the distributed visualization applications.


3.3. Emerging Network Infrastructure. A critical component of our system is the integration of compute, graphics and network resources. A new type of infrastructure is emerging, in which applications can directly interact with and make the best possible use of deterministic network resources. Internet2 now offers ION, a new type of service where users and applications can directly control network resources and are able to create point-to-point circuits with guaranteed reserved network bandwidth. This opens the path for a new class of applications to be developed: applications such as the one proposed here, which can rely on a deterministic and dedicated network service and will be adapted to take advantage of it.

Transport Protocols. Data transport performance, that is, the utilization of high-bandwidth links, is an important issue outside of short distance LANs.

It has been shown that TCP, the standard reliable data transport protocol of the Internet, has performance issues when used on long-distance high-capacity network links, and as a result variants of the TCP protocol have been proposed. Most of the TCP variants modify TCP's congestion avoidance algorithm; examples include Scalable TCP [14], HighSpeed TCP [15], TCP Vegas [16], FAST TCP [17], Compound TCP [18], BIC-TCP [19], CUBIC [20], and TCP Westwood and Westwood+ [21].

Other protocols, such as LambdaStream [22] and UDT [23] (one of the protocols our prototype system already supports), use UDP as the underlying protocol and add reliability on top of it. GTP [24] is a protocol designed for multiple-sender, single-receiver data transfers.

Our approach uses dedicated network connections, so congestion control mechanisms may be unnecessary. UDP-based protocols that use reliable transmission, such as RBUDP [25] and the Visapult [26] network transport, may provide better performance. As we work with optical network links that usually have very low packet loss (as low as 0.00001%), using UDP directly without reliability enforcement can be a reasonable alternative.
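
A paced UDP sender in the spirit of these blast-style protocols is illustrated below. This is a sketch under our own simplifications (no receiver-side loss accounting or retransmission), not RBUDP or the Visapult transport.

    import socket
    import time

    def blast(data: bytes, dest, rate_bps=1e9, payload=8192):
        # 8 KB datagrams assume jumbo frames (9000-byte MTU), common on
        # optical research networks; use ~1400 bytes on standard Ethernet.
        interval = payload * 8 / rate_bps   # pacing gap between datagrams
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            for seq, off in enumerate(range(0, len(data), payload)):
                # A sequence-number header lets the receiver detect the
                # (rare) losses and request the missing blocks again.
                sock.sendto(seq.to_bytes(4, "big")
                            + data[off:off + payload], dest)
                time.sleep(interval)

    blast(b"\x00" * 1_000_000, ("127.0.0.1", 9000), rate_bps=1e8)

Real implementations pace with finer-grained timers or burst scheduling; time.sleep is only adequate for illustration.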

We will measure the performance of existing protocol implementations on our network testbed, and propose strategies on which protocols should be used on the different network connections, considering various scenarios (point-to-point communication, many-to-one communication), various data sizes, and various network conditions (round-trip time, available capacity).

Remote Data Access. Efficient remote data access is a crucial capability for using network-connected resources to their full capacity. We are focused on providing high performance to applications through attention to non-blocking parallel I/O.

Our team, following careful analysis of existing options, has developed a remote data access system that uses a non-blocking, pipelined architecture which provides high throughput for execution of remote operations and optimizes latency hiding for distributed applications. The system supports configurable network transport protocols (currently UDT and TCP) to be able to take advantage of the network transport protocol analysis described above. It uses a flexible RPC encoding mechanism based on XML that enables a variety of data access patterns to be supported.
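
The benefit of the pipelined design can be seen from a simple cost model: with blocking RPC every operation pays a full round trip, while with pipelining the round trip is paid roughly once. The numbers below are illustrative.

    def blocking_time(n_ops, rtt_s, service_s):
        # Each remote operation waits for its own round trip.
        return n_ops * (rtt_s + service_s)

    def pipelined_time(n_ops, rtt_s, service_s):
        # Requests are kept in flight; after one round trip the
        # responses stream back continuously.
        return rtt_s + n_ops * service_s

    n, rtt, svc = 1000, 0.050, 0.001   # 1000 reads, 50 ms RTT, 1 ms/read
    print(f"blocking:  {blocking_time(n, rtt, svc):.1f} s")    # 51.0 s
    print(f"pipelined: {pipelined_time(n, rtt, svc):.2f} s")   # 1.05 s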

Network Control – Future Network Service. The current ION service allows users and applications to create single point-to-point dedicated connections across the Internet2 backbone. Applications can directly use a web service interface to automatically configure network services, or network configuration can be undertaken by users via a web interface. We will work together with the ION group to integrate network control in existing applications, as well as provide feedback and collaborate on improving ION services.
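
Programmatic circuit setup would then reduce to a web-service call from the application. The sketch below is hypothetical: the URL, request fields and response are placeholders and do not reflect the actual ION web-service interface.

    import json
    import urllib.request

    def reserve_circuit(src, dst, bandwidth_mbps, duration_s,
                        service_url="https://ion.example.net/reserve"):
        # Placeholder endpoint and schema; a real client would use the
        # service's published interface and authentication.
        body = json.dumps({"src": src, "dst": dst,
                           "bandwidthMbps": bandwidth_mbps,
                           "durationSeconds": duration_s}).encode()
        req = urllib.request.Request(
            service_url, data=body,
            headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)   # e.g. a circuit id, used to tear down

    # Typical flow: reserve the circuit, run the transfer or
    # visualization session, then release the reservation.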

3.4. Experimental Environment for Distributed Applications. The lack of experimental systems such as the proposed testbed limits the imagination of application developers; access to such a system could challenge application developers to think more broadly and to see what novel types of applications they can design that are able to take advantage of dynamic networks. The testbed will be available to other application and system developers for experiments with their systems.


4. Conclusion

This document provides an initial plan for the eaviv project, which researches distributed applications with a focus on visualization and will build an advanced distributed testbed for application development.

The eaviv project has the potential to motivate broad changes in scientific computing. Our innovative uses of new networking technologies could influence the deployment of advanced services and novel applications across national networks, as well as the design of next generation Internet systems. Our focus on distributed visualization will contribute to new analysis services for large scale computing that are crucial for petascale systems.

References

[1] Venkatram Vishwanath, Robert Burns, Jason Leigh, and Michael Seablom. Accelerating tropical cyclone analysis using LambdaRAM, a distributed data cache over wide-area ultra-fast networks. Future Gener. Comput. Syst., 25(2):184–191, 2009.

[2] Thomas A. DeFanti, Jason Leigh, Luc Renambot, Byungil Jeong, Alan Verlo, Lance Long, Maxine Brown, Daniel J. Sandin, Venkatram Vishwanath, Qian Liu, Mason J. Katz, Philip Papadopoulos, Joseph P. Keefe, Gregory R. Hidley, Gregory L. Dawe, Ian Kaufman, Bryan Glogowski, Kai-Uwe Doerr, Rajvikram Singh, Javier Girado, Jurgen P. Schulze, Falko Kuester, and Larry Smarr. The OptIPortal, a scalable visualization, storage, and computing interface device for the OptIPuter. Future Gener. Comput. Syst., 25(2):114–123, 2009.

[3] Luc Renambot, Byungil Jeong, Hyejung Hur, Andrew Johnson, and Jason Leigh. Enabling high resolution collaborative visualization in display rich virtual organizations. Future Gener. Comput. Syst., 25(2):161–168, 2009.

[4] Andrei Hutanu, Erik Schnetter, Werner Benger, Eloisa Bentivegna, Alex Clary, Peter Diener, Jinghua Ge, Robert Kooima, Oleg Korobkin, Kexi Liu, Frank Loffler, Ravi Paruchuri, Jian Tao, Cornelius Toole, Jr., Adam Yates, and Gabrielle Allen. Large-scale Problem Solving Using Automatic Code Generation and Distributed Visualization. Technical Report CCT-TR-2009-11, Center for Computation & Technology, 2009. http://www.cct.lsu.edu/CCT-TR/CCT-TR-2009-11.

[5] J. Shalf and E. W. Bethel. The grid and future visualization system architectures. IEEE Computer Graphics and Applications, 23(2):6–9, 2003.

[6] K. W. Brodlie, D. A. Duce, J. R. Gallop, J. P. R. B. Walton, and J. D. Wood. Distributed and collaborative visualization. Comput. Graph. Forum, 23(2):223–251, 2004.

[7] Hank Childs. Architectural challenges and solutions for petascale postprocessing. Journal of Physics: Conference Series, 78, 2007.

[8] Nageswara S. V. Rao, Weikuan Yu, William R. Wing, Stephen W. Poole, and Jeffrey S. Vetter. Wide-area performance profiling of 10GigE and InfiniBand technologies. In SC '08: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, pages 1–12, Piscataway, NJ, USA, 2008. IEEE Press.

[9] N. S. V. Rao, W. R. Wing, S. M. Carter, and Q. Wu. UltraScience Net: network testbed for large-scale science applications. IEEE Communications Magazine, 43(11):S12–S17, Nov. 2005.

[10] N. S. V. Rao, W. R. Wing, S. Hicks, S. Poole, F. Denap, S. M. Carter, and Q. Wu. UltraScience Net: research testbed for high-performance networking. In Proceedings of the International Symposium on Computer and Sensor Network Systems, April 2008.

[11] Xuan Zheng, M. Veeraraghavan, N. S. V. Rao, Qishi Wu, and Mengxia Zhu. CHEETAH: circuit-switched high-speed end-to-end transport architecture testbed. IEEE Communications Magazine, 43(8):s11–s17, Aug. 2005.

[12] H. Childs, E. Brugger, K. Bonnell, J. Meredith, M. Miller, B. Whitlock, and N. Max. A Contract Based System For Large Data Visualization. pages 25–25, 2005.


[13] A. Cedilnik, B. Geveci, K. Moreland, J. Ahrens, and J. Favre. Remote Large Data Visualization in the ParaView Framework. pages 162–170, 2006.

[14] Tom Kelly. Scalable TCP: improving performance in highspeed wide area networks. SIGCOMM Comput. Commun. Rev., 33(2):83–91, 2003.

[15] Sally Floyd. HighSpeed TCP for large congestion windows. ftp://ftp.rfc-editor.org/in-notes/rfc3649.txt, December 2003.

[16] Lawrence S. Brakmo, Sean W. O'Malley, and Larry L. Peterson. TCP Vegas: new techniques for congestion detection and avoidance. In SIGCOMM '94: Proceedings of the Conference on Communications Architectures, Protocols and Applications, pages 24–35, New York, NY, USA, 1994. ACM.

[17] David X. Wei, Cheng Jin, Steven H. Low, and Sanjay Hegde. FAST TCP: motivation, architecture, algorithms, performance. IEEE/ACM Trans. Netw., 14(6):1246–1259, 2006.

[18] Kun Tan, Jingmin Song, Qian Zhang, and Murari Sridharan. Compound TCP: A scalable and TCP-friendly congestion control for high-speed networks. In Fourth International Workshop on Protocols for Fast Long-Distance Networks, 2006.

[19] Lisong Xu, Khaled Harfoush, and Injong Rhee. Binary increase congestion control for fast long distance networks. In INFOCOM 2004, volume 4, pages 2514–2524, March 2004.

[20] Injong Rhee and Lisong Xu. CUBIC: A new TCP-friendly high-speed TCP variant. In Third International Workshop on Protocols for Fast Long-Distance Networks, February 2005.

[21] S. Mascolo, L. A. Grieco, R. Ferorelli, P. Camarda, and G. Piscitelli. Performance evaluation of Westwood+ TCP congestion control. Perform. Eval., 55(1–2):93–111, 2004.

[22] Chaoyue Xiong, Jason Leigh, Eric He, Venkatram Vishwanath, Tadao Murata, Luc Renambot, and Thomas A. DeFanti. LambdaStream – a data transport protocol for streaming network-intensive applications over photonic networks. In Third International Workshop on Protocols for Fast Long-Distance Networks, February 2005.

[23] Yunhong Gu and Robert L. Grossman. UDT: UDP-based data transfer for high-speed wide area networks. Comput. Networks, 51(7):1777–1799, 2007.

[24] Ryan X. Wu and Andrew A. Chien. GTP: Group transport protocol for lambda-grids. In Cluster Computing and the Grid, pages 228–238, April 2004.

[25] Eric He, Jason Leigh, Oliver Yu, and Thomas A. DeFanti. Reliable Blast UDP: Predictable high performance bulk data transfer. In CLUSTER '02: Proceedings of the IEEE International Conference on Cluster Computing, page 317, Washington, DC, USA, 2002. IEEE Computer Society.

[26] E. Wes Bethel and John Shalf. Consuming Network Bandwidth with Visapult. In Chuck Hansen and Chris Johnson, editors, The Visualization Handbook, pages 569–589. Elsevier, 2005.