
The OptIPuter and NLR

Tom DeFanti, Maxine Brown, Jason Leigh, Alan Verlo, Linda Winkler, Joe Mambretti
Chicago

Larry Smarr, Mark Ellisman, Phil Papadopoulos, Greg Hidley
San Diego

Ron Johnson, Dave Richardson
Seattle

What is the OptIPuter?

• The name combines Optical networking, Internet Protocol, and computer storage, processing and visualization technologies

• Tightly couples computational resources over parallel optical networks using the IP communication mechanism

• The OptIPuter exploits a new world in which the central architectural element is optical networking, not computers – creating "supernetworks"

What is the OptIPuter?

• The goal of this new architecture is to enable scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks.

Why Now?

• e-Science requires OptIPuter capabilities
  – Many 2D GigaPixel and 3D GigaZone data objects
  – Cyberinfrastructure plans are being made; influence designs NOW!

• We can build on Global Grid Research
  – Grid Middleware has much of what we need
  – But the Grid is built on a stochastic foundation
• “Best Effort” Internet means unpredictable latency
  – Need determinism: predictable, reservable lambdas

• Availability of Technology for Dedicated Light Pipes
  – State, National, and International Dark Fiber Nets Being Turned On
  – Lots of Parallel High-Bandwidth Clusters for Endpoints of Users
  – Shakeout of Optical Switch Market
    • 302 Companies, Both OptIPuter Anchors, Inexpensive (~$1000/port)
  – StarLight/PacificWave Have Driven Global Lambda Connectivity
  – Cost of Bandwidth Is No Longer the Bottleneck

George Seweryniak, U.S. Department of Energy

• “Optical Networks are critical to the Federal Agencies
  – Need to work closely with private industry from development to deployment
  – Federal agencies have stepped up to the plate
• Current Science trends need them now
• No sign of abating requirements for big pipes and dynamic provisioning”

10GE CAVEwave on the National LambdaRail

Map Source: John Silvester, Dave Reese, Tom West, Ron Johnson

I-WIRE in Illinois connects EVL and NCSA to StarLight. CAVEwave connects StarLight in Chicago to Seattle and then joins Pacific Wave to Cal-(IT)2 in San Diego and Irvine (later the Bay Area and Los Angeles) for OptIPuter/GLIF experiments.

What’s a 10GE CAVEwave Cost?

• Less than a 1GE Internet connection
• About what a network engineer or full professor costs, fully loaded
• Much less than the computers at each end to keep it busy
• Much less than the router capacity needed at each end to accept the 10GE
• Your institution needs to be an NLR member (priceless)

Actual TransLight Experimental Lit-Up Lambdas Today

European lambdas to US (red)
  – 10Gb Amsterdam–Chicago
  – 10Gb London–Chicago
  – 10Gb Amsterdam–NYC

Canadian lambdas to US (white)
  – 30Gb Chicago–Canada–NYC
  – 30Gb Chicago–Canada–Seattle

US sublambdas to Europe (grey)
  – 6Gb Chicago–Amsterdam

Japan JGN II lambda to US (cyan)
  – 10Gb Chicago–Tokyo

European lambdas (yellow)
  – 10Gb Amsterdam–CERN
  – 2.5Gb Prague–Amsterdam
  – 2.5Gb Stockholm–Amsterdam
  – 10Gb London–Amsterdam

IEEAF lambdas (blue)
  – 10Gb NYC–Amsterdam
  – 10Gb Seattle–Tokyo

CAVEwave/PacificWave (purple)
  – 10Gb Chicago–Seattle
  – 10Gb Seattle–LA–San Diego
  – 10Gb Seattle–LA

[World map; labeled exchange points include Northern Light, UKLight, CERN, Japan, PNWGP, and Manhattan Landing]

OptIPuter Experiment #1

Wide-Area Vol-a-Tile and JuxtaView Applications
  – Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
  – University of Amsterdam

JuxtaView displays ultra-high resolution 2D images, such as USGS maps. It invokes LambdaRAM, which pre-fetches portions of large datasets before an application needs them and stores the data in the local cluster's memory for display.

Vol-a-Tile dynamically retrieves, renders and displays large volumetric datasets from remote storage. It invokes OptiStore, which extracts relevant information from raw volumetric datasets and produces visual objects for display.

Aggressive pre-fetching and large bandwidth utilization can overcome network latency that hinders interactive applications.

Datasets are stored on a cluster at the University of Amsterdam, streamed on demand over a 4Gbps transatlantic link, and displayed on EVL’s GeoWall2 visualization cluster.

www.evl.uic.edu/cavern/optiputer
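The pre-fetching strategy is easy to sketch. Below is a minimal, hypothetical illustration in Python, not EVL’s actual LambdaRAM code: the TileCache class and its fetch_tile callback are invented names. The idea is that tiles in a halo around the current viewport are fetched in the background and held in local cluster memory, so a pan finds its data already resident.

```python
# Hypothetical sketch of LambdaRAM-style pre-fetching; the names and the
# fetch_tile() transport call are illustrative, not EVL's actual API.
from concurrent.futures import ThreadPoolExecutor

class TileCache:
    def __init__(self, fetch_tile, prefetch_radius=1):
        self.fetch_tile = fetch_tile   # e.g., pulls one tile over the WAN
        self.radius = prefetch_radius  # how far beyond the viewport to read ahead
        self.cache = {}                # (row, col) -> tile bytes held in local RAM
        self.pool = ThreadPoolExecutor(max_workers=8)

    def get(self, row, col):
        """Return a tile, fetching synchronously only on a cache miss."""
        if (row, col) not in self.cache:
            self.cache[(row, col)] = self.fetch_tile(row, col)
        return self.cache[(row, col)]

    def on_viewport(self, rows, cols):
        """Pre-fetch a halo of tiles around the visible region in background
        threads, so subsequent pans hit local memory instead of the network."""
        for r in range(min(rows) - self.radius, max(rows) + self.radius + 1):
            for c in range(min(cols) - self.radius, max(cols) + self.radius + 1):
                if (r, c) not in self.cache:
                    # setdefault keeps at most one copy if two fetches race
                    self.pool.submit(lambda r=r, c=c: self.cache.setdefault(
                        (r, c), self.fetch_tile(r, c)))
```

With enough bandwidth, the background fetches complete before the user pans to them, which is exactly how wide bandwidth is traded against fixed transatlantic latency.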

30 Megapixel Viewport into a 10 GigaPixel Dataset

OptIPuter Experiment #2

Terabyte Data Juggling and DVC Framework over the OptIPuter Network
  – Concurrent Systems Architecture Group (CSAG), UCSD
  – Cal-(IT)2/JSOE, UCSD

Group Transport Protocol (GTP) is a transport protocol that efficiently manages the receiver contention likely to arise in high-bandwidth networks. GTP achieves both high transfer bandwidth for single flows and efficient sharing and coordination among multiple convergent flows, enabling efficient data fetching from multiple distributed data sources. A multi-gigabyte SIO dataset is moved across four UCSD OptIPuter node sites and between UIC and Amsterdam.
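The core idea, receiver-side arbitration among convergent flows, can be sketched as follows. This is an illustrative simplification, not GTP’s actual code: allocate_rates is a hypothetical helper that splits the receiver’s link capacity max-min fairly among senders, whereas real GTP manages rates continuously over the flows’ lifetimes.

```python
# Illustrative sketch of receiver-driven rate allocation in the spirit of GTP;
# not the actual GTP protocol implementation.
def allocate_rates(capacity_mbps, demands_mbps):
    """Max-min fair split of the receiver's capacity among convergent flows.

    Senders demanding less than a fair share keep their demand; leftover
    capacity is redistributed among the remaining senders."""
    rates = {}
    remaining = dict(demands_mbps)
    capacity = capacity_mbps
    while remaining:
        share = capacity / len(remaining)
        satisfied = {s: d for s, d in remaining.items() if d <= share}
        if not satisfied:
            # Everyone wants more than a fair share: give each exactly the share.
            for s in remaining:
                rates[s] = share
            break
        for s, d in satisfied.items():
            rates[s] = d
            capacity -= d
            del remaining[s]
    return rates

# A receiver pulling a dataset from three distributed sources over a 10 Gb link:
print(allocate_rates(10_000, {"ucsd": 6_000, "uic": 6_000, "uva": 1_000}))
# -> {'uva': 1000, 'ucsd': 4500.0, 'uic': 4500.0}
```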

Dynamic Virtual Computer (DVC) middleware ties together novel configurable optical network capabilities with traditional Grid resources. Using an interactive GUI, a prototype implementation of DVC demonstrates dynamic resource aggregation – resource discovery, selection and binding – as well as the execution of a message-passing application across these resources. Resources are assembled from several UCSD sites and from the University of Amsterdam.
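A hedged sketch of that "discover, select, bind" flow follows; Resource, provision_lightpath() and build_dvc() are invented stand-ins, not the DVC middleware’s real interfaces.

```python
# Hypothetical sketch of a DVC-style aggregation flow, not the DVC API.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    site: str
    cores: int

def provision_lightpath(a, b):
    # Stand-in for the optical control plane; real DVC middleware would
    # signal the configurable network here.
    print(f"lightpath up: {a} <-> {b}")

def build_dvc(catalog, sites_wanted, cores_needed):
    """Aggregate Grid resources into one dynamic virtual computer."""
    # Discovery: what do the candidate sites advertise?
    candidates = [r for r in catalog if r.site in sites_wanted]
    # Selection: take the largest nodes until the core budget is met.
    selected, total = [], 0
    for r in sorted(candidates, key=lambda r: -r.cores):
        if total >= cores_needed:
            break
        selected.append(r)
        total += r.cores
    if total < cores_needed:
        raise RuntimeError("not enough resources discovered")
    # Binding: bring up lightpaths between every pair of chosen sites before
    # launching the message-passing application across the selected nodes.
    for a, b in {(r.site, s.site) for r in selected for s in selected
                 if r.site < s.site}:
        provision_lightpath(a, b)
    return selected

nodes = build_dvc([Resource("n0", "ucsd-sdsc", 16), Resource("n1", "ucsd-jsoe", 16),
                   Resource("n2", "uva", 8)],
                  {"ucsd-sdsc", "ucsd-jsoe", "uva"}, cores_needed=32)
```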

www-csag.ucsd.edu/projects/Optiputer.html

OptIPuter Experiment #3

Grid-Based Visualization Pipeline for OptIPuter Clusters
  – Information Sciences Institute (ISI), University of Southern California
  – Electronic Visualization Laboratory (EVL), UIC

The Grid Visualization Utility (GVU), built on the Globus Toolkit, facilitates the construction of scalable visualization pipelines. GVU is used to enable real-time interactive viewing of high-resolution time-series volume datasets.

Current efforts examine how load-balancing issues can be alleviated by fine-grained decomposition and distribution of datasets across clusters and other distributed compute resources.
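The decomposition idea can be pictured with a short sketch. The code below is illustrative, not GVU’s implementation: a volume is cut into many more blocks than there are nodes and dealt out round-robin, so no node is stuck with one large, expensive region of the dataset.

```python
# Illustrative sketch of fine-grained dataset decomposition for load
# balancing; not GVU's actual implementation.
import numpy as np

def decompose(volume, block, nodes):
    """Split a 3D volume into small blocks and deal them round-robin to nodes.

    Many small blocks per node smooth out load imbalance: expensive regions
    of the dataset end up spread across the whole cluster."""
    blocks = []
    zs, ys, xs = volume.shape
    for z in range(0, zs, block):
        for y in range(0, ys, block):
            for x in range(0, xs, block):
                blocks.append(volume[z:z+block, y:y+block, x:x+block])
    assignments = {n: [] for n in range(nodes)}
    for i, b in enumerate(blocks):
        assignments[i % nodes].append(b)
    return assignments

vol = np.zeros((256, 256, 256), dtype=np.uint8)   # stand-in volume dataset
parts = decompose(vol, block=64, nodes=5)
print({n: len(bs) for n, bs in parts.items()})    # 64 blocks spread over 5 nodes
```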

GVU currently supports interactive viewing of 3D structures culled from large datasets on a single display; Bolivia Earthquake data is used. Future work will focus on tiled displays, such as GeoWall2, for distributing the rendering load to handle the interactive viewing of multiple, complex structures.

www.isi.edu/~thiebaux/gvu

OptIPuter Experiment #4

Trans-Pacific HDTV Feedback & Remote-Control Scenarios of Remote Instrumentation

– National Center for Microscopy and Imaging Research (NCMIR) and Biomedical Informatics Research Network (BIRN), UCSD

– Osaka University
– KDDI R&D Labs

NCMIR researchers demonstrate live streaming HDTV from the world’s largest microscope, in Osaka, Japan. The video is streamed to UIC/EVL and UCSD while the microscope is controlled by project scientists in San Diego.

High-quality video is essential for resolving useful information, such as changes in gradients in a high-noise, low-contrast environment. HDTV combined with dedicated lambdas will provide lower latencies and control of network jitter, especially important in these large streams of video data. This step is the first in a data acquisition feedback loop for instrumentation steering, control, computation and visualization.
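A playout buffer is the standard way such a stream trades a little latency for jitter control; the generic sketch below is illustrative only, not the code used in this demonstration.

```python
# Minimal sketch of a playout (jitter) buffer for a video stream;
# illustrative, not the actual HDTV streaming code.
import heapq
import itertools

class JitterBuffer:
    """Hold incoming frames for a fixed delay so irregular network arrival
    times turn back into a steady display cadence."""
    def __init__(self, playout_delay):
        self.delay = playout_delay    # seconds of buffering to absorb jitter
        self.heap = []                # (timestamp, seq, frame), timestamp-ordered
        self.seq = itertools.count()  # tie-breaker so frames never get compared

    def push(self, timestamp, frame):
        heapq.heappush(self.heap, (timestamp, next(self.seq), frame))

    def pop_ready(self, now):
        """Release every frame whose timestamp is at least `delay` old."""
        ready = []
        while self.heap and self.heap[0][0] <= now - self.delay:
            ready.append(heapq.heappop(self.heap)[2])
        return ready
```

On a dedicated lambda, arrival-time variation is small, so the playout delay, and with it the control latency the remote microscope operator feels, can be kept short.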

http://ncmir.ucsd.edu

OptIPuter Experiment #5

Application-Controlled Light-Path Provisioning over Multi-Domain OptIPuter Environments

– Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
– International Center for Advanced Internet Research (iCAIR), Northwestern University
– University of Amsterdam (UvA)

www.evl.uic.edu/research/res_project.php3?indi=217

TeraVision streamed fractal animation by Dan Sandin, EVL

In this demonstration, end-to-end light paths are provisioned across multiple domains, and then local domain controllers are invoked to access and set up optical switches, using the following tools:
• EVL’s Photonic Inter-domain Negotiator (PIN)
• EVL’s Photonic Data Controller (PDC)
• iCAIR’s Optical Dynamic Intelligent Network (ODIN) over the Chicago optical metro network OMNInet
• UvA’s Inter-domain Generic Authorization, Authentication, and Accounting (AAA) procedures

Using EVL’s TeraVision application for capturing and streaming high-resolution computer graphics over gigabit networks, an animation is streamed, first between two domains (UvA to EVL) over multi-gigabit transoceanic links, and then among three domains (UvA, NU via OMNInet, and EVL).
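The control pattern is easier to see in a sketch. In the hypothetical Python below, the class and function names only echo the roles of PIN, PDC/ODIN, and AAA; they are not those systems’ real interfaces. The pattern: authorize the request, then ask each domain’s controller in turn to cross-connect its segment of the end-to-end path, tearing down on failure.

```python
# Hypothetical sketch of multi-domain lightpath setup; the classes mimic the
# roles of AAA, PIN, and per-domain controllers (PDC/ODIN) but are invented.
class DomainController:
    def __init__(self, name):
        self.name = name

    def setup_segment(self, src_port, dst_port):
        # A real controller (e.g., PDC or ODIN) would configure optical
        # switches here; we just report the cross-connect.
        print(f"{self.name}: cross-connect {src_port} -> {dst_port}")
        return True

def provision_path(user, segments, authorize):
    """Negotiate an end-to-end light path, one domain segment at a time."""
    if not authorize(user):                       # AAA check before any setup
        raise PermissionError(f"{user} not authorized")
    configured = []
    for controller, src, dst in segments:
        if not controller.setup_segment(src, dst):
            for c, s, d in reversed(configured):  # tear down on failure
                print(f"{c.name}: release {s} -> {d}")
            raise RuntimeError(f"setup failed in {controller.name}")
        configured.append((controller, src, dst))
    return configured

# Example: UvA -> StarLight -> OMNInet -> EVL (ports are made up)
path = [(DomainController("NetherLight"), "uva-cluster", "translight-in"),
        (DomainController("StarLight"),   "translight-in", "omninet-in"),
        (DomainController("OMNInet"),     "omninet-in", "evl-cluster")]
provision_path("evl-demo", path, authorize=lambda user: True)
```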

Multi-Domain Lightpaths: Photonic Interdomain Negotiator

[Network diagram: PIN instances at the University of Illinois at Chicago, StarLight (Chicago), OMNInet (all-optical MAN spanning Chicago and Northwestern at Evanston, controlled by ODIN/GMPLS), and the University of Amsterdam/NetherLight negotiate over signalling links; PDC and BOD/AAA controllers configure OC-192 lightpaths linking clusters on all-optical LANs]

OptIPuter Experiment #6

JuxtaView, Vol-a-Tile and GVU at SIO, and SIO Visual Objects Distribution
  – Scripps Institution of Oceanography (SIO), UCSD
  – SDSC, UCSD
  – Cal-(IT)2/JSOE, UCSD
  – Electronic Visualization Laboratory (EVL), University of Illinois at Chicago
  – Information Sciences Institute (ISI), University of Southern California

http://siovizcenter.ucsd.edu

This demonstration showcases OptIPuter applications that display high-resolution imagery (JuxtaView), volume rendering (Vol-a-Tile and GVU) and scene files composed of heterogeneous geologic datasets (SIO Visual Objects).

Here, SIO scientist Debi Kilb interacts with IKONOS satellite imagery using JuxtaView (on the left panel) and a scene file that combines the same imagery with topography, bathymetry and seismic images (displayed on the right panel). Both datasets are fetched over the campus OptIPuter fiber from the UCSD storage cluster.

OptIPuter Experiment #7

Online Brain Maps: Deposition, Distribution and Visualization of Large-Scale Brain Maps in a Near-Real-Time Environment

– National Center for Microscopy and Imaging Research (NCMIR) and Biomedical Informatics Research Network (BIRN), UCSD

– SDSC, UCSD
– Cal-(IT)2/JSOE, UCSD

http://ncmir.ucsd.edu, https://telescience.ucsd.edu

NCMIR researchers demonstrate a system within the Telescience Portal that allows users to do biological studies involving electron microscopic tomography by guiding them through the process, from acquisition to analysis.

GridFTP is used to distribute very large (>1Gb) brain maps from the UCSD multi-photon microscope to other OptIPuter online sites in Southern California (UCSD, UCI, SDSU, and USC). Data is sent from the OptIPuter’s IBM storage cluster to local resources, such as the GeoWall2 or computational resources.
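As a rough illustration of the transfer step, the sketch below shells out to globus-url-copy, the Globus Toolkit’s GridFTP client; the hostnames and paths are made up.

```python
# Sketch of fanning a dataset out to multiple sites with GridFTP; hosts and
# paths are hypothetical. globus-url-copy is the Globus Toolkit's transfer
# client; -p sets the number of parallel TCP streams per transfer.
import subprocess

SITES = ["ucsd", "uci", "sdsu", "usc"]    # OptIPuter sites in Southern California

def distribute(local_file, remote_path):
    for site in SITES:
        dest = f"gsiftp://optiputer.{site}.example.edu{remote_path}"
        subprocess.run(
            ["globus-url-copy", "-p", "8",   # 8 parallel streams over the lambda
             f"file://{local_file}", dest],
            check=True)

distribute("/data/brainmaps/map-001.img", "/data/brainmaps/map-001.img")
```

Parallel streams are the usual way single-host transfers fill a high bandwidth-delay-product path that one TCP connection cannot saturate.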

The goal is to be able to transfer data from an instrument to OptIPuter-connected resources in real time, enabling researchers to steer the data acquisition process and monitor progress; end to end, this process can currently take as long as 22 days.

The Crisis Response Room of the Future

SHD Streaming TV – Immersive Virtual Reality – 100 Megapixel Displays

The Living Room of 2010?
The CineGrid Home Office and Game Room

Global Lambda Integrated Facility
GLIF World Map – December 2004

Predicted international Research & Education Network bandwidth, to be made available for scheduled application and middleware research experiments by December 2004.

www.glif.is
Visualization by Bob Patterson, NCSA.

Global Lambda Integrated Facility
Bandwidth for Experiments, December 2004

www.glif.is
Visualization by Bob Patterson, NCSA.


Announcing…

iGrid 2005
THE GLOBAL LAMBDA INTEGRATED FACILITY

September 26-30, 2005
University of California, San Diego
California Institute for Telecommunications and Information Technology [Cal-(IT)2]
United States

Thank You!

• TransLight planning, research, collaborations, and outreach efforts are made possible, in major part, by funding from:
  – National Science Foundation (NSF) awards SCI-9980480, SCI-9730202, CNS-9802090, CNS-9871058, SCI-0225642, and CNS-0115809
  – State of Illinois I-WIRE Program, and major UIC cost sharing
  – Northwestern University for providing space, power, fiber, engineering and management
  – Pacific Wave, StarLight, National LambdaRail, CENIC, PNWGP, CANARIE, SURFnet, UKERNA, and IEEAF for Lightpaths
• DoE/Argonne National Laboratory for StarLight and I-WIRE network engineering and design