
  • The UberCloud HPC Experiment: Paving the way to HPC as a Service

    HPC Advisory Council Switzerland Conference 2013

    http://www.hpcadvisorycouncil.com/index.php
    http://www.hpcadvisorycouncil.com/events/2013/Switzerland-Workshop/

  • Example: Cloud Computing Usage in Biotech & Pharma Market Study

    Insight Pharma Reports Market Study, 2012

    Which cloud-based platforms is your organization using?

    What do you envision your company's biggest concerns to be on migrating your current platforms to cloud-based platforms?

    Do you feel that the current vendors offer the solutions that your organization needs?

    What proportion of your IT spend is devoted to Cloud Computing?

  • Cloud usage in Pharma/Bio, 2012

  • Cloud-based platform interfaces, 2012

  • The UberCloud HPC Experiment: An open humanitarian collaborative community

    Objective:

    Making HPC as a Service available for everybody, on demand, at your fingertips

    How?

    For SMEs and their engineering applications

    to explore the end-to-end process

    of using remote computing resources,

    as a service, on demand, at your fingertips,

    and to learn how to resolve the many roadblocks.

  • Why this Experiment ?

    Foster the use of HPC in Digital Manufacturing for 360,000 SMEs in the US

    By using remote resources in HPC Centers & in HPC Clouds

    Support initiatives from Intel, NCMS, and many other organizations to uncover and support the missing middle

    Observation: business clouds are becoming widely accepted, but acceptance of simulation clouds in industry is still in the early-adopter stage (CAE, Bio, Finance, Oil & Gas, DCC)

    Barriers today: Complexity, IP, data transfer, software licenses, performance, specific system requirements, data security, interoperability, cost, etc.

  • The Industry End-User Benefits

    Free, on-demand access to hardware, software, and expertise, with a one-stop resource shopping experience

    No hunting for resources in the complex emerging cloud market

    Professional match-making of end-users with service providers

    Perfectly tuned end-to-end, step-by-step process to the HPC Cloud

    Lowering barriers & risks for frictionless entry into HPC in the Cloud

    Leading to increasing competitiveness, agility, innovation.

    Crowdsourcing: End-Users build relationships with other community members and actively contribute to improvements

    No getting left behind in the emerging world of Cloud Computing

  • The UberCloud Advantages

    HPC Experiment: a growing database, with 350 organizations and 61 teams after 6 months

    Resource providers: Amazon, Bull, Gompute, Penguin, Nimbix, SGI, Ohio SC, San Diego SC, Univs Indiana, Michigan, Rutgers, Stanford...

    Software providers: Ansys, CD-adapco, Bright Computing, CEI, CST, Cycle, ESI, Globus, NICE, OpenFOAM, Simulia, Univa, etc.

    Consulting: Analisis, BioTeam, Dacolt, EASi, ForTech IT, Intel, S&C, SimuTech, StillWater, VorTech, ZenoTech, Universities, etc.

    Plus 60+ companies exploring and testing The UberCloud platform

    Plus tools like Drive and Basecamp for hiring, matching, managing

    Plus MarCom presence in many media outlets

  • Where are we with the experiment?

    Started last August; now in Round 2, currently with 350 participating organizations and individuals

    The Experiment reaches every corner of the globe; participants come from 31 countries

    Registration at: www.hpcexperiment.com and www.cfdexperiment.com and www.compbioexperiment.com

    60 teams have been formed in Rounds 1 & 2

    Round 3 in preparation, starting April 1, 2013

    http://www.hpcexperiment.com/
    http://www.cfdexperiment.com/
    http://www.compbioexperiment.com/

  • Some statistics of Round 1 (completed)

    160 registered participants (today: 330) from 25 countries

    36 end-users, 38 experts, 20 ISVs, 17 resource providers, 20 offering edu & training, 29 observers = 99 HPC + 61 CAE

    Status of the Round 1 Teams, end of October:

    15 successfully completed in time

    1 monitoring application execution

    1 starting first job execution

    3 setting up the team (late-comers)

    5 stalled

  • Participants: Some of our Resource Providers

    Media Sponsor

    Some of our Providers want to be anonymous

    http://www.digitalmanufacturingreport.com/
    http://www.sicos-bw.de/index.php

  • Participants: Some of our Software Providers

    Some of our ISVs want to be anonymous

  • Participants: Some of our HPC Experts

    Some of our HPC Experts want to be anonymous

    and 45 more.

  • Building the teams

    You register as an End-User, Software Provider, Resource Provider, or Expert, and provide your profile

    End-User joins the experiment; we ask the ISV to join

    We select a suitable Team Expert from our database

    End-User and Expert analyze resource requirements

    We suggest a computational Resource Provider

    After all four team members agree, the team is ready to go (a toy sketch of this flow follows below)
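    To make the four-role flow above concrete, here is a toy sketch in Python (purely illustrative; the class names, role strings, and readiness check are assumptions, not UberCloud's actual system):

    # Toy model of the team-building flow described above (illustrative only).
    from dataclasses import dataclass, field

    @dataclass
    class Participant:
        name: str
        role: str            # "end-user", "software", "resource", or "expert"
        profile: str = ""    # short profile provided at registration

    @dataclass
    class Team:
        members: list = field(default_factory=list)

        def add(self, participant: Participant) -> None:
            self.members.append(participant)

        def ready_to_go(self) -> bool:
            # A team is ready once all four roles have agreed to join.
            roles = {p.role for p in self.members}
            return roles == {"end-user", "software", "resource", "expert"}

    team = Team()
    for p in (Participant("SME engineer", "end-user"),
              Participant("ISV contact", "software"),
              Participant("HPC expert", "expert"),
              Participant("Cloud provider", "resource")):
        team.add(p)
    print("Ready to go:", team.ready_to_go())   # True once all four roles have joined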

  • Teams, it's all about teams: 20 teams from Round 1:

    Anchor Bolt

    Resonance

    Radiofrequency

    Supersonic

    Liquid-Gas

    Wing-Flow

    Ship-Hull

    Cement Flow

    Sprinkler

    Space Capsule

    Car Acoustics

    Dosimetry

    Weathermen

    Wind Turbines

    Combustion

    Blood Flow

    ChinaCFD

    Gas Bubbles

    Side impact

    ColombiaBio

  • Teams in Round 2

    Simulating stent deployment

    Ventilation benchmark - Simulating free convection in a room

    Two-phase flow simulation of a separation column

    CFD simulations of vertical and horizontal wind turbines

    Remote Visualization

    Simulating acoustic field around a person's head (near-field HRTF)

    Drifting snow in urban environments and on building rooftops

    Simulation of flow around a hull of the ship

    Simulating steel to concrete fastening capacity for an anchor bolt

    Simulating water flow through an irrigation water sprinkler

    Numerical EMC and Dosimetry with high-resolution models

    Ensemble simulation of weather at 20km and higher resolution

    Simulating radial and axial fan performance

    Gas turbine gas dilution analysis

    Simulating wind tunnel flow around bicycle and rider

    Interactive Genomic Data Analysis in the Cloud

  • Team 41: Heavy Duty ABAQUS Structural Analysis in the Cloud

    The Team: Frank Ding, Simpson Strong-Tie; Matt Dunbar, SIMULIA; Steve Hebert, Nimbix; Sharan Kalwani, Intel

    The application: solving anchorage tensile capacity, steel and wood connector load capacity, and special moment frame cyclic pushover

    The Use Case:

    ABAQUS/Explicit and ABAQUS/Standard

    The HPC cluster at Simpson Strong-Tie is modest: 32 cores of Intel x86-based gear

    Cloud bursting is critical

    Sudden large data transfers are a challenge

    Need to perform visualization to ensure the design simulation is proceeding correctly

  • Team 41: The HPC Cloud workflow

    Pre-processing happens on the end user's local workstation to prepare the CAE model

    Files transferred to the HPC cloud data staging area, using a secured FTP process

    Submit the job through the Nimbix.net web portal

    Once the job finishes, a notification email is received

    Result files can be transferred back for post-processing,

    or the post-processing can be done using a remote desktop tool like HP RGS on the HPC provider's visualization node (a minimal sketch of the file-transfer step follows below).
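    To illustrate the secured file-transfer step of this workflow, here is a minimal Python sketch using SFTP via the paramiko library; the host name, account, key file, file names, and staging path are placeholder assumptions, and job submission itself still happens through the Nimbix.net web portal as described above.

    # Minimal sketch: upload a pre-processed CAE model to the cloud staging
    # area over SFTP. All names and paths below are placeholders.
    import paramiko

    HOST = "staging.example-hpc-cloud.com"    # hypothetical staging host
    USER = "team41"                           # hypothetical account
    KEYFILE = "/home/engineer/.ssh/id_rsa"    # local private key

    LOCAL_MODEL = "anchor_bolt.inp"           # input deck prepared locally
    REMOTE_DIR = "/staging/team41/"           # cloud data staging area

    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(HOST, username=USER, key_filename=KEYFILE)

    sftp = ssh.open_sftp()
    sftp.put(LOCAL_MODEL, REMOTE_DIR + LOCAL_MODEL)   # transfer to staging
    sftp.close()
    ssh.close()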

  • Team 41: Challenges

    Scheduling a weekly meeting (the Experiment is not our day job)

    Needed a fast interconnect (e.g. InfiniBand), which was not available.

    Solved with fat nodes, which were sufficient for testing out the cloud HPC workflow; the actual interconnect performance of the cluster was not a concern.

    A further challenge was to address the need for simple and secure file storage and transfer. This was accomplished by using Globus technology (a minimal transfer sketch follows at the end of this slide). Cloud-based storage is mature and ready for prime-time HPC, especially in the CAE arena.

    Another challenge was to push the limits and stream several jobs simultaneously to the remote HPC cloud resource. This provided solid evidence that bursting was indeed feasible; to the whole team's surprise it worked admirably and had no adverse impact whatsoever overall.

    Perhaps the most critical challenge turned out to be the end user's perception and acceptance of the cloud as a smooth part of the workflow.

    Remote visualization was necessary to see whether the simulation results (held remotely in the cloud) could be viewed and manipulated as if they were local on the end user's desktop.
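    For the Globus-based transfer mentioned above, a minimal sketch using the Globus Python SDK could look like the following; the endpoint IDs, file paths, and access token are placeholder assumptions, and obtaining the token (e.g. via the SDK's OAuth2 flow) is assumed to have happened beforehand.

    # Minimal sketch: move a large result file between two Globus endpoints.
    # Endpoint IDs, paths, and the token below are placeholders.
    import globus_sdk

    TRANSFER_TOKEN = "REPLACE_WITH_GLOBUS_TRANSFER_ACCESS_TOKEN"
    CLOUD_ENDPOINT = "aaaaaaaa-1111-2222-3333-bbbbbbbbbbbb"   # provider staging endpoint
    LOCAL_ENDPOINT = "cccccccc-4444-5555-6666-dddddddddddd"   # end user's machine

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN))

    task = globus_sdk.TransferData(tc, CLOUD_ENDPOINT, LOCAL_ENDPOINT,
                                   label="Team 41 result files")
    task.add_item("/staging/team41/results.odb", "/home/engineer/results.odb")

    response = tc.submit_transfer(task)
    print("Submitted Globus transfer, task id:", response["task_id"])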

  • Team 41: What the end user saw...

    With the right tuning and useful remote visualization!

  • Team 41: What did we all learn and prove?

    Benefits:

    Clearly established: the HPC cloud model can indeed be made to work.

    Recommendations: A few key necessary factors emerged:

    Result file transfers are a major source of concern, since most CAE result files can easily be over several gigabytes. Depending upon the individual use, a minimum of 2-4 MB/sec sustained and
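    To put that sustained rate in perspective, here is a quick back-of-the-envelope estimate in Python (the 4 GB file size is an assumed, illustrative value):

    # Rough transfer-time estimate for a multi-gigabyte CAE result file
    # at the sustained rates quoted above.
    file_size_mb = 4 * 1024                   # assume a 4 GB result file
    for rate_mb_per_s in (2, 4):
        minutes = file_size_mb / rate_mb_per_s / 60
        print(f"At {rate_mb_per_s} MB/s: about {minutes:.0f} minutes")
    # Prints roughly 34 minutes at 2 MB/s and 17 minutes at 4 MB/s.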
