Magellan Experiences with OpenStack
Narayan Desai, [email protected], Argonne National Lab


DESCRIPTION

Narayan Desai, Thurs, 4/19, 3:50 talk, DOE Magellan

TRANSCRIPT

Page 1: DOE Magellan OpenStack user story

Magellan Experiences with OpenStack

Narayan Desai, [email protected], Argonne National Lab

Page 2: DOE Magellan OpenStack user story

The Challenge of High Performance Computing

Scientific progress is predicated on the use of computational models, simulation, or large-scale data analysis
– Conceptually similar to (or enabling of) traditional experiments

Progress is also limited by the computational capacity usable by applications

Applications often use large quantities of resources

– 100s to 100,000s of processors in concert
– High-bandwidth network links
– Low-latency communication between processors
– Massive data sets

The largest problems often ride the ragged edge of available resources
– Inefficiency reduces the scope and efficacy of computational approaches to particular large-scale problems

Historically driven by applications, not services

Page 3: DOE Magellan OpenStack user story

The Technical Computing Bottleneck

Page 4: DOE Magellan OpenStack user story

DOE Magellan Project (2009-2011)

Joint project between Argonne and Berkeley Labs

ARRA funded

Goal: to assess “cloud” approaches for mid-range technical computing

– Comparison of private/public clouds to HPC systems
– Evaluation of Hadoop for scientific computing
– Application performance comparison
– User productivity assessment

Approach: build a system with an HPC configuration, but operate it as a private cloud
– 504 IBM iDataPlex nodes
– 200 IBM 3650 storage nodes (8 disks, 4 SSDs)
– 12 HP 1 TB memory nodes
– 133 NVIDIA Fermi GPU nodes
– QDR InfiniBand
– Connected to the ESNet research network

Page 5: DOE Magellan OpenStack user story

Initial Approach

Set up Magellan as a testbed
– Several hardware types, many software configurations

Chose Eucalyptus 1.6 as the cloud software stack
– Mindshare leader in 2009
– Had previous deployment experience with it
– Supported the widest range of EC2 APIs at the time (see the sketch below)
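
As a rough illustration (not from the deck) of what EC2 API support buys users, here is how a client of that era could drive such a private cloud with boto 2.x; the endpoint, port, path, and credentials below are placeholders, though OpenStack Nova of this period did expose its EC2 API on port 8773 at /services/Cloud.

# Hypothetical example: talking to a private cloud's EC2-compatible API with boto 2.x.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

conn = EC2Connection(
    aws_access_key_id="EC2_ACCESS_KEY",          # placeholder credentials
    aws_secret_access_key="EC2_SECRET_KEY",
    is_secure=False,
    region=RegionInfo(name="nova", endpoint="cloud.example.org"),  # placeholder endpoint
    port=8773,
    path="/services/Cloud",
)

# List registered images, then boot one instance from the first image found.
images = conn.get_all_images()
print([image.id for image in images])
reservation = conn.run_instances(images[0].id, instance_type="m1.large", key_name="mykey")
print(reservation.instances[0].id)

Because both Eucalyptus and OpenStack expose EC2-compatible endpoints, client code along these lines could stay essentially unchanged across the transition described later.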

Planned to deploy 500 nodes into the private cloud portion of the system
– Bare-metal provisioning for the rest, due to lack of virtualization support for GPUs, etc.

Page 6: DOE Magellan OpenStack user story

Initial Results

Page 7: DOE Magellan OpenStack user story

Detailed Initial Experiences (2009-2010)

Had serious stability and scalability problems once we hit 84 nodes

Eucalyptus showed its research-project heritage

– Implemented in multiple languages
– Questionable architecture decisions

Managed to get the system into a usable state, but barely

Began evaluating potential replacements (11/2010)

– Eucalyptus 2.0
– Nimbus
– OpenStack (Bexar+)

Page 8: DOE Magellan OpenStack user story

Evaluation Results

Eucalyptus 2.0 was better, but more of the same

OpenStack fared much better

– Poor documentation
– Solid architecture
– Good scalability
– High-quality code

• Good enough to function as a documentation surrogate in many cases
– Amazing community

• (Thanks Vish!)

Decided to deploy OpenStack Nova in 1/2011
– Started with the Cactus beta codebase and tracked changes through release
– By February, we had deployed 168 nodes and began moving users over
– Turned off the old system by 3/2011
– Scaled to 336, then 420 nodes over the following few months

Page 9: DOE Magellan OpenStack user story

Early OpenStack Compute Operational Experiences

Cactus

– Our configuration was unusual, due to scale
• Multiple network servers
• Splitting services out to individual service nodes (an illustrative layout is sketched below)
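
Purely for illustration, one way such a split-out deployment might be laid out; host names and exact service placement here are assumptions, not taken from the slides.

# Hypothetical service layout for a split-out Nova deployment of this era.
SERVICE_LAYOUT = {
    "cloud-controller":  ["nova-api", "nova-scheduler", "rabbitmq", "mysql"],
    "image-server":      ["glance-api", "glance-registry"],
    "network-01":        ["nova-network"],   # multiple network servers, per the slide
    "network-02":        ["nova-network"],
    "compute-001..168":  ["nova-compute"],   # later grown to 336, then 420 nodes
}

for host, services in sorted(SERVICE_LAYOUT.items()):
    print("%-18s %s" % (host, ", ".join(services)))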

– Once things were set up, the system mainly ran
– Little administrative intervention was required to keep the system running

User productivity
– Most scientific users aren’t used to managing systems
– The typical usage model is application-centric, not service-centric
– The private cloud model has a higher barrier to entry
– The model also enabled aggressive disintermediation, which users liked
– It also turned out there was substantial unmet demand for services in scientific computing

Due to the user productivity benefits, we decided to transition the system to production at the end of the testbed project, in support of the DOE Systems Biology Knowledgebase project

Page 10: DOE Magellan OpenStack user story

Enable DOE Mission Science

Microbes

Plants

Communities

Page 11: DOE Magellan OpenStack user story
Page 12: DOE Magellan OpenStack user story

Transitioning into Production (11/2011)

Production meant new priorities
– Stability
– Serviceability
– Performance

And a new operations team

Initial build based on Diablo

– Nova
– Glance
– Keystone*
– Horizon*

Started to develop operational processes
– Maintenance
– Troubleshooting
– Appropriate monitoring (a minimal check is sketched below)
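
A minimal sketch of the kind of monitoring check this implies (an assumption, not the team's actual tooling): nova-manage service list in the Diablo/Essex era marked healthy services with a smiley and stale ones with "XXX", which is easy to scrape.

# Hypothetical check: flag nova services that nova-manage reports as down.
import subprocess
import sys

def down_services():
    out = subprocess.check_output(["nova-manage", "service", "list"]).decode()
    return [line.strip() for line in out.splitlines() if "XXX" in line]

if __name__ == "__main__":
    failed = down_services()
    for line in failed:
        print("DOWN: %s" % line)
    sys.exit(1 if failed else 0)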

Performed a full software-stack shakedown
– Scaled rack by rack up to 504 compute nodes

Vanilla system ready by late 12/2011

Page 13: DOE Magellan OpenStack user story

Building Towards HPC Efficiency

HPC platforms target peak performance
– Virtualization is not a natural choice

How close can we get to HPC performance while maintaining cloud feature benefits?

Several major areas of concern
– Storage I/O
– Network bandwidth
– Network latency
– Driver support for accelerators/GPUs

The goal is to build multi-tenant, on-demand, high-performance computational infrastructure
– Support for wide-area data movement
– Large-scale computations
– Scalable services hosting bioinformatics data integrations

Page 14: DOE Magellan OpenStack user story

Network Performance Expedition

Goal: to determine the limits of OpenStack infrastructure for wide-area network transfers
– Want small numbers of large flows, as opposed to large numbers of slow flows

Built a new Essex test deployment
– 15 compute nodes, with one 10GE link each
– Had 15 more in reserve
– Expected to need 20 nodes
– KVM hypervisor

Used a FlatManager network setup (configuration sketch below)
– Multi-host configuration
– Each hypervisor ran Ethernet bridging and IP firewalling for its guest(s)

Nodes connected to the DOE ESNet Advanced Networking Initiative
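
A sketch of the per-hypervisor network flags such a setup might use; the flag names are genuine Nova options of that era, but the bridge, interface, and address values here are assumptions rather than Magellan's actual configuration.

# Hypothetical: emit a nova.conf fragment for a multi-host FlatManager deployment.
FLAGS = {
    "network_manager":     "nova.network.manager.FlatManager",
    "multi_host":          "True",        # each hypervisor handles its own guests' traffic
    "flat_network_bridge": "br100",       # guests bridged onto the node's 10GE link
    "public_interface":    "eth0",        # assumed interface name
    "fixed_range":         "10.1.0.0/22", # assumed guest address range
}

with open("nova-network.conf", "w") as conf:
    conf.write("[DEFAULT]\n")
    for flag, value in sorted(FLAGS.items()):
        conf.write("%s=%s\n" % (flag, value))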

Page 15: DOE Magellan OpenStack user story

ESNet Advanced Networking Infrastructure

Page 16: DOE Magellan OpenStack user story

Setup and Tuning

Standard instance type
– 8 vCPUs
– 4 vNICs bridged to the same 10GE Ethernet
– virtio

Standard tuning for wide-area, high-bandwidth transfers (see the sketch below)
– Jumbo frames (9K MTU)
– Increased TX queue length on the hypervisor
– Increased buffer sizes on the guest
– 32-64 MB TCP window size on the guest
– fasterdata.es.net rocks!
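
A rough sketch of applying that tuning with standard Linux tools; the interface name and exact values are assumptions, and fasterdata.es.net documents the reasoning behind them. For scale, 10 Gbit/s over a 50 ms RTT is a bandwidth-delay product of roughly 62.5 MB, hence TCP windows in the tens of megabytes.

# Hypothetical tuning script (run as root on the hypervisor and/or guest).
import subprocess

IFACE = "eth0"  # assumed interface name
COMMANDS = [
    ["ip", "link", "set", "dev", IFACE, "mtu", "9000"],          # jumbo frames
    ["ip", "link", "set", "dev", IFACE, "txqueuelen", "10000"],  # deeper transmit queue
    ["sysctl", "-w", "net.core.rmem_max=67108864"],              # allow 64 MB receive buffers
    ["sysctl", "-w", "net.core.wmem_max=67108864"],              # allow 64 MB send buffers
    ["sysctl", "-w", "net.ipv4.tcp_rmem=4096 87380 67108864"],
    ["sysctl", "-w", "net.ipv4.tcp_wmem=4096 65536 67108864"],
]

for command in COMMANDS:
    subprocess.check_call(command)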

Remote data sinks
– 3 nodes with 4x10GE
– No virtualization

Settled on 10 VMs for testing (a driver sketch follows)
– 4 TCP flows each (ANL -> LBL)
– Memory-to-memory transfers
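
The slides don't name the measurement tool; assuming iperf and placeholder host names, the memory-to-memory test could be driven roughly like this.

# Hypothetical test driver: 10 VMs, 4 parallel TCP flows each, 32 MB windows.
import subprocess

SENDERS = ["vm%02d" % i for i in range(1, 11)]   # assumed VM host names at ANL
SINKS = ["sink1", "sink2", "sink3"]              # assumed bare-metal 4x10GE sinks at LBL

procs = []
for i, vm in enumerate(SENDERS):
    sink = SINKS[i % len(SINKS)]
    cmd = ["ssh", vm, "iperf", "-c", sink, "-P", "4", "-w", "32M", "-t", "60"]
    procs.append(subprocess.Popen(cmd))

for proc in procs:
    proc.wait()

Running several parallel flows per VM is what sidesteps the single-stream ceiling noted on the results slide.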

Page 17: DOE Magellan OpenStack user story

Network Performance Results

Page 18: DOE Magellan OpenStack user story

Results and comments

95 gigabits/s consistently
– 98 Gb/s peak!
– ~12 GB/s across 50 ms of latency!

Single-node performance was far higher than we expected
– CPU utilization even suggests we could handle more bandwidth (5-10 more?)
– Might be able to improve further with EoIB or SR-IOV

Single-stream performance was worse than native
– Topped out at 3.5-4 gigabits/s

Exotic tuning wasn’t really required

OpenStack performed beautifully

– Was able to cleanly configure this networking setup
– All of the APIs are usable in their intended ways
– No duct tape involved!

Page 19: DOE Magellan OpenStack user story

Conclusions

OpenStack has been a key enabler of on-demand computing for us
– Even in technical computing, where these techniques are less common

OpenStack is definitely ready for prime time
– It even supports crazy experimentation

Experimental results show that on-demand, high-bandwidth data transfers are feasible
– Our next step is to build OpenStack storage that can source/sink data at that rate

Eventually, multi-tenant data transfer infrastructure will be possible

This is just one example of the potential of mixed cloud/HPC systems

Page 20: DOE Magellan OpenStack user story

Acknowledgements

Argonne Team
– Jason Hedden
– Linda Winkler

ESNet
– Jon Dugan
– Brian Tierney
– Patrick Dorn
– Chris Tracy

Original Magellan Team
• Susan Coghlan
• Adam Scovel
• Piotr Zbiegel
• Rick Bradshaw
• Anping Liu
• Ed Holohan