The Pacific Research Platform: A Science-Driven Big-Data Freeway System
TRANSCRIPT
“The Pacific Research Platform: A Science-Driven Big-Data Freeway System.”
Opening Presentation, Pacific Research Platform Workshop
Calit2’s Qualcomm Institute, University of California, San Diego
October 14, 2015
Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD
http://lsmarr.calit2.net
Vision: Creating a West Coast “Big Data Freeway” Connected by CENIC/Pacific Wave to Internet2 & GLIF
Use Lightpaths to Connect All Data Generators and Consumers,
Creating a “Big Data” Freeway Integrated with High-Performance Global Networks
“The Bisection Bandwidth of a Cluster Interconnect, but Deployed on a 20-Campus Scale.”
This Vision Has Been Building for Over a Decade
NSF’s OptIPuter Project: Using Supernetworks to Meet the Needs of Data-Intensive Researchers
OptIPortal – Termination Device for the OptIPuter Global Backplane
Calit2 (UCSD, UCI), SDSC, and UIC Leads; Larry Smarr, PI. Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST
Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
2003-2009, $13,500,000
In August 2003, Jason Leigh and his students used RBUDP (Reliable Blast UDP) to blast data from NCSA to SDSC over the TeraGrid DTFnet, achieving an 18 Gbps file transfer out of the available 20 Gbps
LS Slide 2005
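For context, RBUDP pairs an unreliable UDP “blast” of the payload with a reliable TCP control channel on which the receiver reports which packets it missed, so only losses are resent. Below is a toy sender-side sketch of that idea in Python; the hosts, ports, and the one-line JSON loss-report protocol are illustrative assumptions, not the actual RBUDP implementation.

    # Toy sketch of an RBUDP-style sender: blast data over UDP, then use a
    # TCP control channel to learn which sequence numbers were lost and
    # resend only those. The receiver protocol assumed here (a JSON list of
    # missing sequence numbers per round) is a simplification for illustration.
    import json
    import socket

    PAYLOAD = 8192  # data bytes per UDP datagram, after a 4-byte sequence header

    def rbudp_send(data: bytes, host: str, udp_port: int, tcp_port: int) -> None:
        """Blast `data` to a hypothetical RBUDP-style receiver at `host`."""
        chunks = [data[i:i + PAYLOAD] for i in range(0, len(data), PAYLOAD)]
        udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        ctrl = socket.create_connection((host, tcp_port))  # reliable control channel
        ctrl_in = ctrl.makefile("r")

        missing = list(range(len(chunks)))
        while missing:
            # Blast phase: fire every still-missing chunk over UDP at full rate.
            for seq in missing:
                udp.sendto(seq.to_bytes(4, "big") + chunks[seq], (host, udp_port))
            # Control phase: signal end of round; the receiver replies with the
            # sequence numbers it did not get, as a JSON list on one line.
            ctrl.sendall(b"ROUND_DONE\n")
            missing = json.loads(ctrl_in.readline())
        ctrl.close()
        udp.close()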
Quartzite: The Optical Core of the UCSD Campus-Scale Testbed -- Evaluating Packet Routing versus Lambda Switching
Goals by 2007:
>= 50 endpoints at 10 GigE
>= 32 Packet switched
>= 32 Switched wavelengths
>= 300 Connected endpoints
Approximately 0.5 Tbit/s Arrive at the “Optical” Center of Campus
Switching Will Be a Hybrid Combination of Packet, Lambda, and Circuit -- OOO and Packet Switches Already in Place
Source: Phil Papadopoulos, SDSC, Calit2
Funded by NSF MRI Grant
Switching hardware shown in the testbed diagram: Lucent, Glimmerglass, Chiaro Networks
LS Slide 2005
Integrated “OptIPlatform” Cyberinfrastructure System: A 10Gbps Lightpath Cloud
Diagram components: National LambdaRail; Campus Optical Switch; Data Repositories & Clusters; HPC; HD/4k Video Images; HD/4k Video Cams; End User OptIPortal; 10G Lightpath; HD/4k Telepresence; Instruments
LS 2009 Slide
So Why Don’t We Have a National Big Data Cyberinfrastructure?
“Research is being stalled by ‘information overload,’ Mr. Bement said, because data from digital instruments are piling up far faster than researchers can study them. In particular, he said, campus networks need to be improved. High-speed data lines crossing the nation are the equivalent of six-lane superhighways, he said. But networks at colleges and universities are not so capable. ‘Those massive conduits are reduced to two-lane roads at most college and university campuses,’ he said. Improving cyberinfrastructure, he said, ‘will transform the capabilities of campus-based scientists.’”
-- Arden Bement, Director of the National Science Foundation, May 2005
DOE ESnet’s Science DMZ: A Scalable Network Design Model for Optimizing Science Data Transfers
• A Science DMZ integrates 4 key concepts into a unified whole:
– A network architecture designed for high-performance applications, with the science network distinct from the general-purpose network
– The use of dedicated systems for data transfer
– Performance measurement and network testing systems that are regularly used to characterize and troubleshoot the network (see the sketch below)
– Security policies and enforcement mechanisms that are tailored for high-performance science environments
http://fasterdata.es.net/science-dmz/ -- “Science DMZ” Coined in 2010
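As a concrete illustration of the measurement concept above, here is a minimal sketch (not ESnet or PRP code) of a scripted throughput check between Data Transfer Nodes. It assumes iperf3 is installed and that an iperf3 server is already listening on the hypothetical host dtn.example.edu.

    # Minimal sketch: run a parallel iperf3 TCP test against a remote DTN
    # and report the received throughput, the kind of routine measurement a
    # Science DMZ uses to characterize and troubleshoot its network paths.
    import json
    import subprocess

    def measure_throughput(server: str, streams: int = 4, seconds: int = 10) -> float:
        """Run a parallel iperf3 TCP test and return received throughput in Gbps."""
        result = subprocess.run(
            ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "--json"],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(result.stdout)
        return report["end"]["sum_received"]["bits_per_second"] / 1e9

    if __name__ == "__main__":
        # dtn.example.edu is a placeholder for a real DTN running "iperf3 -s".
        print(f"Achieved {measure_throughput('dtn.example.edu'):.2f} Gbps")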
The DOE ESnet Science DMZ and the NSF “Campus Bridging” Taskforce Report Formed the Basis for the NSF Campus Cyberinfrastructure Network Infrastructure and Engineering (CC-NIE) Program
Greg Bell, Director, ESnet, on Panel
Based on Community Input and on ESnet’s Science DMZ Concept, NSF Has Funded Over 100 Campuses to Build Local Big Data Freeways
Red: 2012 CC-NIE Awardees; Yellow: 2013 CC-NIE Awardees; Green: 2014 CC*IIE Awardees; Blue: 2015 CC*DNI Awardees; Purple: Multiple-Time Awardees
Source: NSF
Creating a “Big Data” Freeway on Campus: NSF-Funded CC-NIE Grants Prism@UCSD and CHERuB
Prism@UCSD: Phil Papadopoulos, SDSC, Calit2, PI (2013-15)
CHERuB: Mike Norman, SDSC, PI
The Pacific Research Platform Creates a Regional End-to-End Science-Driven “Big Data Freeway System”
NSF CC*DNI $5M 10/2015-10/2020
PI: Larry Smarr, UC San Diego Calit2
Co-PIs:
• Camille Crittenden, UC Berkeley CITRIS
• Tom DeFanti, UC San Diego Calit2
• Philip Papadopoulos, UC San Diego SDSC
• Frank Wuerthwein, UC San Diego Physics and SDSC
Amy Walton, PRP NSF Program Officer, on Panel
CENIC/PW Backplane – Louis Fox, CEO CENIC, on Panel
FIONA – Flash I/O Network Appliance: Linux PCs Optimized for Big Data
UCOP Rack-Mount Build:
FIONAs Are Science DMZ Data Transfer Nodes & Optical Network Termination Devices
UCSD CC-NIE Prism Award & UCOP; Phil Papadopoulos & Tom DeFanti
Joe Keefe & John Graham
Cost: $8,000 | $20,000
CPU (Intel Xeon Haswell Multicore): E5-1650 v3, 6-Core | 2x E5-2697 v3, 14-Core
RAM: 128 GB | 256 GB
SSD: SATA 3.8 TB | SATA 3.8 TB
Network Interface: 10/40GbE Mellanox | 2x 40GbE Chelsio+Mellanox
GPU: NVIDIA Tesla K80
RAID Drives: 0 to 112TB (add ~$100/TB)
A UCSD Integrated Digital Infrastructure Project for the Big Data Requirements of Rob Knight’s Lab – PRP Does This on a Sub-National Scale
FIONA: 12 Cores/GPU, 128 GB RAM, 3.5 TB SSD, 48 TB Disk, 10Gbps NIC
Diagram components: Knight Lab (10Gbps); Prism@UCSD; Gordon; Data Oasis (7.5PB, 200GB/s); Knight 1024 Cluster in SDSC Co-Lo; CHERuB (100Gbps); Emperor & Other Vis Tools; 64Mpixel Data Analysis Wall; link rates of 40Gbps and 120Gbps
FIONAs as Uniform DTN End Points
Map legend: Existing DTNs; FIONA DTNs (as of October 2015)
UC FIONAs Funded by UCOP “Momentum” Grant -- Tom Andriola, UCOP CIO, on Panel
Ten-Week Sprint to Demonstrate the West Coast Big Data Freeway System: PRPv0
Presented at CENIC 2015, March 9, 2015
FIONA DTNs Now Deployed to All UC Campuses and Most PRP Sites
PRP First Application: Distributed IPython/Jupyter Notebooks: Cross-Platform, Browser-Based Application Interleaves Code, Text, & Images
IJulia, IHaskell, IFSharp, IRuby, IGo, IScala, IMathics, Ialdor, LuaJIT/Torch, Lua Kernel, IRKernel (for the R language), IErlang, IOCaml, IForth, IPerl, IPerl6, Ioctave, Calico Project (kernels implemented in Mono, including Java, IronPython, Boo, Logo, BASIC, and many others), IScilab, IMatlab, ICSharp, Bash, Clojure Kernel, Hy Kernel, Redis Kernel, jove (a kernel for io.js), IJavascript, Calysto Scheme, Calysto Processing, idl_kernel, Mochi Kernel, Lua (used in Splash), Spark Kernel, Skulpt Python Kernel, MetaKernel Bash, MetaKernel Python, Brython Kernel, IVisual VPython Kernel
Source: John Graham, QI
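To make the “interleaves code, text, and images” point concrete, here is the kind of minimal cell a notebook user might run, with the plot rendering inline beneath the cell in the browser. This is illustrative only and assumes the numpy and matplotlib packages are installed.

    # A notebook code cell: executable code whose output (here, a plot)
    # is rendered inline alongside the surrounding text cells.
    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 2 * np.pi, 200)
    plt.plot(t, np.sin(t), label="sin(t)")
    plt.legend()
    plt.title("Inline figure in a Jupyter notebook")
    plt.show()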
PRP Has Deployed Powerful FIONA Servers at UCSD and UC Berkeley to Create a UC-Jupyter Hub Backplane
FIONAs Have GPUs and Can Spawn Jobs to SDSC’s Comet
Using the InCommon CILogon Authenticator Module for Jupyter. Deep Learning Libraries Have Been Installed.
Source: John Graham, QI
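As a sketch of how a hub can be wired to CILogon/InCommon sign-in, here is a minimal jupyterhub_config.py fragment using the CILogon authenticator from the open-source oauthenticator package. The callback URL, client ID, and secret are hypothetical placeholders, and this is one plausible configuration rather than the PRP's actual one.

    # Minimal JupyterHub configuration sketch for CILogon/InCommon login.
    # The values below are placeholders, not real registered credentials.
    from oauthenticator.cilogon import CILogonOAuthenticator

    c = get_config()  # injected by JupyterHub when it loads this file

    c.JupyterHub.authenticator_class = CILogonOAuthenticator
    c.CILogonOAuthenticator.oauth_callback_url = (
        "https://hub.example.edu/hub/oauth_callback"
    )
    c.CILogonOAuthenticator.client_id = "cilogon:/client_id/EXAMPLE"
    c.CILogonOAuthenticator.client_secret = "EXAMPLE-SECRET"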
Pacific Research Platform Multi-Campus Science Driver Teams
• Particle Physics
• Astronomy and Astrophysics
– Telescope Surveys
– Galaxy Evolution
– Gravitational Wave Astronomy
• Biomedical
– Cancer Genomics Hub/Browser
– Microbiome and Integrative ‘Omics
– Integrative Structural Biology
• Earth Sciences
– Data Analysis and Simulation for Earthquakes and Natural Disasters
– Climate Modeling: NCAR/UCAR
– California/Nevada Regional Climate Data Analysis
– CO2 Subsurface Modeling
• Scalable Visualization, Virtual Reality, and Ultra-Resolution Video
Key Task for This Workshop: Determine the Big Data Needs of These Teams and Translate Them into PRP Cyberinfrastructure Requirements
Science Teams Require High Bandwidth Across Campus and Between Campuses and National Facilities
• Connecting Scientific Instrument Data Production to Remote Campus Compute & Storage Clusters
• Providing Access to Remote Data Repositories
• Bringing Supercomputer Data to Local Users
• Enabling Remote Collaborations
• MORE?
Big Data Flows Add to Commodity Internet to Fully Utilize CENIC’s 100G Campus Connection
PRP Timeline
• PRPv1
– A Layer 3 System
– Completed in 2 Years
– Tested, Measured, Optimized, with Multi-Domain Science Data
– Bring Many of Our Science Teams Up
– Each Community Thus Will Have Its Own Certificate-Based Access to Its Specific Federated Data Infrastructure
• PRPv2
– Advanced IPv6-Only Version with Robust Security Features, e.g., Trusted Platform Module Hardware and SDN/SDX Software
– Support Rates up to 100Gb/s in Bursts and Streams
– Develop Means to Operate a Shared Federation of Caches