
Big Questions for the Simulation Community
Althea de Souza
Benchmark Commissioning Editor
[email protected]


The NAFEMS World Congress is an aspect of NAFEMS that I particularly enjoy and, having missed the last one in 2015 due to ill health, I am counting down the days to NWC17. For this event, NAFEMS has been giving specific thought to the benefits that delegates take away, above and beyond the obvious opportunities: hearing presentations from world-class analysis experts, meeting all the vendors in one location, experiencing the range of applications and industries where simulation is being used, and plenty of networking.

There are some big questions bouncing around the simulation world, both within NAFEMS working and steering groups and in the wider simulation community, so I have spent a very interesting and enjoyable time speaking with a number of experts within NAFEMS and several of the keynote speakers for NWC17 to get their views on these topics. The material here has been distilled from a number of group discussions, interviews and written responses, and captures the opinions and thoughts of some of the prominent activists in the simulation community, who will be at NWC17. The keynote presentations and several discussion sessions during the conference will further explore these themes, and everyone is encouraged to consider the issues.


Learn How Organisations are Generating Confidence in their Simulation Capability

Having a simulation capability is no longer a differentiator. To remain competitive, companies need to have a capability and to be able to use it effectively to produce results that are reliable and repeatable. Generating confidence in the capability of the simulation team is essential to move analysis from being a 'tick box' in the design process to a strategic capability.

How then to move towards a position of confidence, rather than allowing the difficulties and inconsistencies to discourage improvement? It is important to understand that this is a progression towards a level of confidence, with a number of steps that need to be taken, rather than an immediate activity. This progressive path for verification and validation is being discussed by the Analysis Management Working Group as part of the wider topic of simulation governance. The ASME V&V 10 requirement for validation is the gold standard; however, there are some areas, such as large civil engineering projects, where comparable tests of in-service behaviour cannot be performed. In these areas, integrating what testing can be done with analysis, to provide confidence in the simulations, is even more important.

When establishing simulation credibility, both over-estimating and under-estimating credibility are problematic. Blind confidence in simulation results, without adequate V&V activities, and the sophistication of the presented results can blind engineers and product managers, leading them to be over-confident in the quality of results. Conversely, concerns about the challenges of V&V activities, and previous experience with poor-quality simulations, can lead decision makers to lack confidence in their results. These contrasting views are often found within a single organisation, requiring an explicit formulation of the credibility requirement based around three key aspects:

1. The effective integration of testing and simulations.

2. An assessment of simulation maturity.

3. Communication of credibility and limitations to the decision maker.

Simulation maturity and readiness can be assessed in a structured manner, such as that developed in the SimReady project, presented elsewhere in this publication.

Verification and Validation (V&V)

• Code verification determines whether the mathematics is correctly implemented in the code.

• Solution verification determines whether a specific simulation has been correctly solved; this includes checks such as mesh independence and convergence (see the sketch after this list).

• Validation determines whether the physics in the model of interest correctly represents the physical reality observed in the experimental results.
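As a concrete illustration of solution verification, the sketch below implements a common grid-refinement check: Roache's grid convergence index (GCI), based on Richardson extrapolation and used in standards such as ASME V&V 20. The values are illustrative, not taken from any particular simulation.

```python
import math

def grid_convergence(f_coarse, f_medium, f_fine, r=2.0, Fs=1.25):
    """Estimate the observed order of accuracy and a grid convergence
    index (GCI) from a quantity of interest computed on three meshes
    refined by a constant ratio r. Fs is the usual safety factor."""
    # Observed order of accuracy from the three solutions
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
    # Richardson-extrapolated (mesh-independent) estimate
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    # Relative error on the fine mesh and its GCI uncertainty band
    e_fine = abs((f_fine - f_medium) / f_fine)
    gci_fine = Fs * e_fine / (r**p - 1.0)
    return p, f_exact, gci_fine

# Illustrative drag coefficients from coarse, medium and fine meshes
p, f_ext, gci = grid_convergence(0.3420, 0.3360, 0.3345, r=2.0)
print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_ext:.4f} +/- {100 * gci:.2f}%")
```

A smart application can run such a check automatically after every solve and decline to report results that have not demonstrated mesh convergence.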

A recent concept which may drive progress is that of the digital twin [1]: a computational representation of all the individual components in a product and the in-service conditions they experience. The approach is used to support decisions on operation and on activities such as maintenance. To implement the concept, realistic, high-fidelity, credible models are needed, as well as a great deal of in-service measurement data. Integrating the simulation and test data is the most significant difficulty; it sounds easy but is actually very hard, and achieving it requires a substantial commitment from management to integrate information from diverse cultures within an organisation. It is a long-term goal that cannot be achieved quickly, but it should include through-life, in-service information for all components. Although there is some scepticism around the approach because of these difficulties, there are impressive examples from the aircraft engine manufacturers, who record the history of all the individual fan, compressor and turbine blades in an engine throughout its life. The catastrophic nature of failure of these components, and the difficulty of testing rare failure occurrences, make validating the simulations difficult, which is why progress is needed towards much more credible simulation models.

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it." Max Planck

"Despite recognition of the importance of simulation, there is not much short-term prioritisation from businesses for V&V improvements. As a result, we must consider how to strengthen managers' awareness of V&V and credibility assurance." Jean-François Imbert

RELEVANT CONGRESS SESSIONS
Keynote – Tuesday June 13th
On the Formulation and Application of Design Rules
B. Szabó (Engineering Software Research & Development, USA)

Keynote – Tuesday June 13th
On the Balance and Integration of Simulation and Test in Engineering Structural Dynamics
D. Ewins (Imperial College London, GBR)

Session 5B – Tuesday June 13th – 11:00
Simulation Governance Discussion Session

Training Course – Sunday June 11th – 13:15
Simulation Credibility

Invited Presentation – Tuesday June 13th
Introducing PSE within ASML – Lessons Learned and the Way Forward
F. Huizinga (ASML, NED)

References
[1] E.H. Glaessgen and D.S. Stargel, "The Digital Twin Paradigm for Future NASA and U.S. Air Force Vehicles", 53rd AIAA Structures, Structural Dynamics and Materials Conference, 2012.

Learn How Organisations are Extending the Benefits of Simulation to their Non-Expert Users

Putting simulation into the hands of the non-expert is a subject that elicits much discussion within NAFEMS committees. NAFEMS aims to act as an advocate for the deployment of simulation; however, the concern is that if the capabilities are not controlled, errors and incorrect assumptions will lead to simulation being viewed with suspicion, or to improper decisions. A crucial element is the relationship between the simulation expert and the non-expert: where the responsibility of the expert ends and the ability of the non-expert to use simulation safely starts. This requires simulation experts to design smart simulation applications and is somewhat analogous to the traditional handbooks, where experts would develop solutions in parametric form and a working engineer didn't have to know how those solutions were developed, but would use a formula, and perhaps some graphs in conjunction with the formula, to come to some kind of prediction. For this approach to work with simulation, it is extremely important for the simulation application to have a solution verification procedure incorporated into it, since inexperienced and non-expert users are less likely to be able to judge accuracy.
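To make the handbook analogy concrete, here is a minimal sketch of such a 'smart application': a hypothetical end-loaded cantilever formula wrapped in a scope guard, so that a non-expert cannot apply it outside its validated range. The formula, limit and names are illustrative assumptions, standing in for the built-in checks an expert would design into a real application.

```python
from dataclasses import dataclass

@dataclass
class CantileverApp:
    """A hypothetical 'smart app': a validated handbook formula (tip
    deflection of an end-loaded cantilever) wrapped in scope checks so
    that it cannot be applied outside its range of applicability."""
    E: float  # Young's modulus [Pa]
    I: float  # second moment of area [m^4]
    L: float  # beam length [m]

    def tip_deflection(self, load: float) -> float:
        deflection = load * self.L**3 / (3.0 * self.E * self.I)
        # Scope guard: small-deflection beam theory only (illustrative limit)
        if deflection > 0.1 * self.L:
            raise ValueError("deflection exceeds 10% of span: outside "
                             "the validated small-deflection range")
        return deflection

app = CantileverApp(E=210e9, I=8.3e-6, L=2.0)
print(f"tip deflection = {app.tip_deflection(5000.0):.4e} m")
```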

The development of the most effective smart applications requires both external software and internal company design processes. In this case, the experience and knowledge of the in-house application experts can be embedded into the interface for the simulation tool. While an expert in simulation is required to develop these context-specific tools, SMEs often do not employ dedicated simulation engineers and so need to rely on external consultants. The most difficult part of developing an application for non-expert users is getting the technical requirements from the organisation defined in such a way that they can be distilled into an application that can then be released to the general population. It requires a process with a clear statement of what senior engineering staff expect these applications to do and how they define success and acceptance for them. Once the tool is created, the benefits are huge, as it can be inserted into the workflow, saving engineering time and making solutions operator-independent, but it requires acceptance into the culture of the organisation.

According to Oxford Dictionaries, democratisation is the action of making something available to everyone. In the context of engineering simulation, it is making simulation tools available to everyone. It could be argued (and has been) that the tools have been available to everyone for some time, including free and open-source codes, tools that can run on laptop computers with relatively modest specifications, and tools with easy-to-use interfaces and GUIs. The limiting factor, though, is the ability to use those tools effectively and responsibly. Another approach to democratisation is increasingly referred to as 'appification', where smart applications are created (often based on more general-purpose tools) with an easy-to-use interface, and sometimes with reduced computational demands, for a very specific and restricted use case. The complexities of the simulation are generally hidden from the user (no need to see the mesh or select the turbulence model) and the terminology used is that of the application engineers rather than the simulation engineers. There are now a number of organisations offering CAE app-building facilities, whether they are mainstream code vendors allowing users to create apps from their more general-purpose code or independent organisations.

There has been a growing call for software vendors to provide users with tools for solution verification. These tools can be incorporated into the approved workflow for use by non-experts. Validation remains a problem: it is directly tied to the user. A strong argument is that this is as it should be, since validation depends entirely on how the tool is used and has little to do with the mathematical solution procedure. This is particularly important with general-purpose codes that are used for a wide range of applications. Can vendors put anything into the tools to guide the users? Asking for demonstration of simulation validation from non-expert users will always prove challenging, so tools designed for this class of user should be built very carefully around a proven simulation approach and tightly restricted to prevent use outside the scope of applicability.

Distinguishing between the development of design rules and the application of design rules can help to determine when an expert or a non-expert user can undertake an activity. Developing design rules requires validation and expert involvement, whereas applying design rules can be done by non-experts using smart tools. Smart applications should already have been verified and should have embedded modelling rules, so users are not expected to carry out verification. Validation of both the software and the tightly constrained modelling approach is a pre-requisite for smart tools and should be conducted before they are released to users. Is there a role for OEMs in ensuring that all users in their tier suppliers are using the tools correctly? In a perfect world, perhaps the OEMs would develop and distribute these smart tools. There is a role for NAFEMS here: to find the route to an optimum simulation environment and the steps along the way.

"To standardise the [simulation] workflow is a very big benefit to an organisation, but they have to develop a culture to support that." Barna Szabó

"A smart application can't be commercial software only. It is a tool which combines commercial software and embedded knowledge-based processes and rules in order to secure the product simulation and design." Jean-François Imbert

RELEVANT CONGRESS SESSIONS
Session 2B – Monday June 12th – 13:45
Democratisation 1

Session 7J – Wednesday June 14th – 10:45
Democratisation 2

Invited Presentation – Tuesday June 13th
Introducing PSE within ASML – Lessons Learned and the Way Forward
F. Huizinga (ASML, NED)


Learn How Leading Companies are using their Simulation Capability to Support Product Certification

Engineering analysis and simulation offer significant cost and time savings by reducing the amount of experimental testing required to design a product so that it is fit for purpose.

Validation activities are key to providing confidence in simulation credibility, which is a pre-requisite for the use of simulation in product certification. Strictly, validation is simulation of the experiment itself: aerodynamic measurements, for example, are often taken with pressure probes, and the presence of the probes influences the flow (although the effect is usually considered negligible), so strict validation requires that the probes present in the experiment are also modelled in the simulation. Where the experiment differs significantly in scale or detail from the physical system of interest, as with a dam, a skyscraper, a bridge, atmospheric pollution or nuclear power, the simulation is being used to predict significantly outside the zone of physical knowledge, and these simulations cannot be validated. People need to understand that difference. It makes the prediction of error bounds particularly difficult.

In safety-critical applications, a better understanding of the predictive capability of simulations is needed, and in these areas simulations are increasingly used to support decisions regarding safety. However, regulators will not say that certification relies on simulation, although it might be part of it, combined with permitted experiments and databases of knowledge. Regulators need to understand simulation, its benefits and its risks. There is top-level industry recognition of the need to move towards more simulation-based certification.

There are high-level targets within Europe for simulations to be accepted as a means for aerospace certification [2]. In the US, innovative certification methodologies for aircraft seats have been developed; there is a short article on this elsewhere in this issue. At the moment, high-risk industries have conservative, test-orientated certification, but there is increasing recognition of the need to move towards simulation-based certification and to support certification bodies in doing so; for example, reducing certification tests at system level and relying on simulation to demonstrate compliance, plus some smart physical testing at the lower component levels. Using simulation as part of the certification process is a combined approach; it does not mean the suppression of physical testing.

Organisations such as NAFEMS have a responsibility to help regulators better understand the strengths and weaknesses of simulation. There is work to do to explore how to change the test-based culture in regulatory bodies and organisations so that they better understand and use simulation.

"Most regulators are uncomfortable with simulation-based certification; generally they are happier with simulation-informed certification – a balance between experiment, simulation, and credibility assurance." William Oberkampf

RELEVANT CONGRESS SESSIONS

Keynote – Monday June 12th
Smarter Testing Through Simulation for Efficient Design and Attainment of Regulatory Compliance
S. Chisholm (Boeing Commercial Airplanes, USA)

Session 1B – Monday June 12th – 11:00
Generating Confidence in Results

References
[2] "Flightpath 2050: Europe's Vision for Aviation", Report of the High Level Group on Aviation Research, EUR088EN, 2011.


Learn Why Simulation Engineers should be Aware of the Role Played by the Systems Engineer

Systems Engineering (SE) involves the integration of multiple disciplines to form a development process that proceeds from concept to production. With increased complexity in modern products, this is happening not just at component level but at system level, across both products and processes, and thus the role of the systems engineer has become more prevalent. The CAE discipline has a key role in the Systems Engineering framework. With the integration of SE into the development process, there is a driver for CAE experts to broaden their scope and skill sets to incorporate capabilities such as multi-domain co-simulation and optimisation, in order to interact with a wide range of disciplines.

The "traditional" CAE expert was and often still is focusedon high-fidelity simulation in areas like CFD and FEA. Tosupport a normal engineering process entirely on suchlevel is commercially, and from a time-to-marketperspective, not sustainable. One option to address this isto make simulation tools available to non-simulationengineers through automating any repeating simulationprocesses as much as possible. A more efficientapproach though is to utilise the upfront screening powerof lower-fidelity modelling and simulation to narrowdown the final design and then apply higher-fidelitymodelling. At that point the highly qualified CAE expertfocuses mainly on those higher fidelity projects and isresponsible for developing new simulation techniquesand process which will be rolled out to the otherengineering areas.

The tools in the different areas should be able to be connected with each other; in addition, the various tools in SE need to support standards in order to allow interoperability between the different process steps. Depending on the process supported, the standard can be that of a description language, a transfer or transport protocol, or a data-exchange format. Standards therefore play a vital role in Systems Engineering.
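As one illustration of a data-exchange standard in action, the sketch below runs a model packaged to the FMI (Functional Mock-up Interface) standard using the open-source FMPy package; any FMI-compliant tool can export such an FMU. The file name, parameter and output variable are hypothetical.

```python
from fmpy import simulate_fmu

# 'Drivetrain.fmu' could have been exported from any FMI-compliant
# modelling tool; the variable names below are hypothetical.
result = simulate_fmu(
    'Drivetrain.fmu',
    start_time=0.0,
    stop_time=10.0,
    start_values={'load_torque': 50.0},  # hypothetical parameter
    output=['shaft_speed'],              # hypothetical output variable
)
print(result['time'][-1], result['shaft_speed'][-1])
```

Because the interface is standardised, the same script works regardless of which tool produced the model, which is exactly the interoperability the SE process steps require.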

According to INCOSE (the International Council on Systems Engineering), SE is an interdisciplinary approach that enables the realization of successful (usually, these days, complex) systems. It focuses on defining customer needs and required functionality early in the development cycle, documenting requirements, and then proceeding with design synthesis and system validation while considering the complete problem. SE considers both the business and the technical needs of all customers, with the goal of providing a quality product that meets the user needs.

These definitions can also be found among the terms and definitions of the SMSWG: nafe.ms/2nTIkcq

"Systems engineering is a holistic approach to product development over the lifecycle of a product. As in any current product development process for cyber-mechanical systems, simulation plays an important role."

"In order to have the tools in the different areas connected with each other, a data backbone and management system should be implemented. These are referred to as platforms, with the leading solutions at the level of the innovation platform." Frank Popielas

RELEVANT CONGRESS SESSIONS
Session 7K–8K – Wednesday June 14th – 10:45 & 13:15
Systems Modelling & Simulation 1 & 2



Join The Discussion on what Considerations Should be taken into Account when Implementing an SDM System

Companies wishing to implement Simulation Data Management (SDM) often find it very difficult to put off-the-shelf offerings into production. Most managers and practitioners in engineering simulation organisations have little experience in designing information systems and are ill-equipped to run a selection process that ensures the proposed solution will deliver the expected benefits.

The implementation of an SDM system is a process, with several steps along the path towards achieving a fully automated system. A fundamental part is data storage, with electronic documentation providing traceability, data heritage and credibility. Without an effective, well-functioning SDM system, effective simulation governance is difficult or even impossible. There are even greater benefits if the SDM system, which should already be linked to the CAD and materials databases (to ensure the correct data is used for a simulation), is also linked to the test data management system.
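What traceability can mean in practice is illustrated by the minimal sketch below: every stored result carries content hashes of the exact inputs it used, so any result can be traced back to its data heritage. The record fields and names are illustrative assumptions, not a prescribed SDM schema.

```python
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timezone

def content_hash(data: bytes) -> str:
    """Content hash: identical input data always yields the same id,
    so a result can be traced back to the exact data it used."""
    return hashlib.sha256(data).hexdigest()

@dataclass
class SimulationRecord:
    """Hypothetical minimal SDM record linking a result to its heritage:
    CAD geometry, material card, input deck and solver version."""
    analyst: str
    solver_version: str
    cad_hash: str
    material_hash: str
    deck_hash: str
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# In practice the bytes would be read from the CAD, material and deck
# files; literal strings are used here so the sketch runs stand-alone.
record = SimulationRecord(
    analyst='jsmith',
    solver_version='SolverX 2017.1',  # hypothetical solver
    cad_hash=content_hash(b'bracket.step contents'),
    material_hash=content_hash(b'al6061 material card'),
    deck_hash=content_hash(b'bracket.inp contents'),
)
print(record)
```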

So, what are the ingredients for a successful SDM implementation?

• Define your long-term scope and choose a platform that can accommodate that future scope.

• Decide on the scope and boundaries of your SDM system with respect to existing PDM systems and functionalities. Minimize overlap with PDM systems to prevent confusion for the users.

• Decide whether you are willing to change the CAE processes to fit your SDM system, or whether you need to modify your SDM system to meet your processes. Users are likely to resist changing existing processes.

• Obtain strong management involvement and commitment from the beginning of the project.

• Obtain key user involvement and commitment from the beginning of the project.

• Think big but start small. Have a vision for an SDM-supported development process, but start with small, incremental steps to achieve quick results for management and users, maintaining a high momentum during the first 2-3 years.

• Manage expectations – not everything users want can be economically implemented, but it helps to provide a certain level of customization to suit users' needs.

Implementing an SDM system in a large corporation is more like a marathon than a sprint. It takes years to get the many different CAE disciplines to move their processes to an SDM-supported process. As a result, an SDM implementation project is at least 50% a change project, not merely an IT project.

The implementation of an SDM system is most effective when an agile alignment approach is used. Since most organisations and users are not using SDM right now, it is difficult to specify requirements; if they try, they may discover once the system arrives that it isn't what they want after all. An agile alignment approach requires managers, simulation engineers and the wider business all to have input, which drives requirements and solutions to evolve through a continuous improvement process flexible enough to allow for modification. Consider what functionality is available and how it can be used positively. Start with an SDM platform on which you can build a system. If a system can be built (involving the user community) within a number of weeks using the available functionality, then it is the right approach.

SDM is a subset of the wider concept of Product Data Management (PDM). However, the SDM market is less mature than other parts of the PLM market. Most PLM or CAE vendors don't offer SDM, so obtaining a system is more complex than simply adding another module to an existing software suite. The coverage of simulation types, such as FEA, CFD, MBD, electromagnetics, etc., may be limited, and the way you use analysis might not be covered. Add to this the small number of people with experience in the field and it is not difficult to see why organisations are struggling to reap the expected benefits.

The desired SDM solution needs to fit the corporate IT landscape in terms of platform standards and legacy infrastructure. Since mature SDM systems are few and far between, it is unlikely that an off-the-shelf product will fully meet all your needs, so find an SDM vendor that is flexible and can customize the SDM solution. A focus on modularity and reusability of software components can help, and a system that supports different application programming interfaces (APIs), such as Python and web services, can be useful to meet your changing process requirements. Choose an SDM IT architecture that allows for scaling up and maintaining performance with growing data volumes and user numbers, as well as supporting multiple data formats and interface standards.
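A web-service API typically makes such a system scriptable. The sketch below shows the pattern; the endpoint, fields and authentication scheme are entirely hypothetical, and real systems will differ.

```python
import requests

BASE = "https://sdm.example.com/api/v1"        # hypothetical endpoint
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder token

# Find all crash runs in a project that used a specific material card
# version, then pull a key result from each for comparison.
query = {"project": "P123", "discipline": "crash",
         "material_card": "al6061-rev4"}
runs = requests.get(f"{BASE}/runs", params=query, headers=HEADERS).json()

for run in runs:
    result = requests.get(f"{BASE}/runs/{run['id']}/results",
                          headers=HEADERS).json()
    print(run["id"], result.get("peak_acceleration"))
```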

Find an SDM vendor that knows and understands your business processes, has the resources to manage a lengthy SDM implementation project, and has a medium-term product strategy. They need proven software development skills, including agile software development, automated testing and automated deployment capabilities. Visit their other clients and look at their solution concepts and experiences.

In the light of these challenges, it is worth taking the time to investigate whether anyone else has successfully implemented an SDM system to meet your specific SDM challenge. Although the tools are developing, the underlying methods are defined, so case studies can provide insight from past experience. If there are success stories that mirror your use of simulation and what you want to achieve with SDM, then consider which class of SDM was used: CAE vendor, PLM vendor or pure SDM vendor. Otherwise you will be helping your supplier to develop the solution, which will be time consuming and expensive, and success is not guaranteed. There are mature SDM solutions doing a good job in a range of areas, but you must keep in mind the specific problem you wish to solve. A key selection criterion is: will it work with my data and my processes? The solution must solve the problem of interest.

As discussed elsewhere, there is a need to access test results to enable comparison with simulation, so why aren't organisations combining their test and simulation data into a single system? Firstly, most organisations already have a test data management system, and the effort to transfer that into another, different type of system would be considerable. Secondly, test data systems have a big data-acquisition front end, which you would not find in an SDM system. For an SDM system, the key feature is traceability of results through the process from input data. This is much easier to achieve with test data: there are more steps in doing a simulation than in a test. In a test, you start the process and then nature takes over. The systems can be made to work together if a common format for graphing results is used and test results are then pulled into the SDM system to allow comparisons between the two. There are examples of people storing both types of data together in the same system, but there is no indication of anyone trying to extend a test data management system to store simulation data.
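In its simplest form, the 'common format' idea reduces to putting both histories on one axis and computing a deviation metric. A minimal sketch, with synthetic data standing in for measured and simulated strain histories:

```python
import numpy as np

def compare_curves(t_test, y_test, t_sim, y_sim):
    """Interpolate a simulation history onto the test time base (the
    common axis for graphing) and report a normalised RMS deviation."""
    y_sim_on_test = np.interp(t_test, t_sim, y_sim)
    rms = np.sqrt(np.mean((y_sim_on_test - y_test) ** 2))
    return rms / np.max(np.abs(y_test))

# Synthetic stand-ins for a test-rig channel and a simulation output
t_test = np.linspace(0.0, 1.0, 50)
y_test = 100.0 * np.sin(2.0 * np.pi * t_test)
t_sim = np.linspace(0.0, 1.0, 200)
y_sim = 98.0 * np.sin(2.0 * np.pi * t_sim)

dev = compare_curves(t_test, y_test, t_sim, y_sim)
print(f"normalised RMS deviation: {dev:.1%}")
```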

Getting simulation teams and engineers engaged with using an SDM system can be particularly challenging; there are, however, some strategies to help. Involve key users from the beginning of the project, starting with use cases and projects that provide an immediate benefit to the users. Use these success stories to apply the change to other CAE disciplines. If necessary, delay deployment to all categories of user until the experience has been refined to be a clear benefit rather than an overhead. Middle management should also be involved from the beginning of the project, using the SDM system for reviewing CAE results and frequently reporting on usage. SDM systems are very complex, so thorough training should be provided for new users, and sufficient, knowledgeable support must be available to resolve problems quickly. Early guidance on best practices and standards for setting up the users' processes is essential to maintain a uniform data structure.

The benefits of an effective SDM system should be apparent to both simulation engineers and management. It must improve effectiveness and efficiency; if the system doesn't yield benefits to users, it will not be used.

NAFEMS is developing a growing body of knowledge in this area: a 'What is SDM?' flyer; a white paper on 'Business Value from Simulation Data Management – a Decade of Production Experience'; a publication in progress on considerations when implementing an SDM capability; and a training course, next scheduled for Stockholm after NWC17.

"Think big but start small. Have a vision for an SDM-supported development process, but start with small, incremental steps to achieve quick results for management and users."

"The benefits of an effective SDM system should be apparent to both simulation engineers and management. It must improve effectiveness and efficiency; if the system doesn't yield benefits to users, it will not be used." Mark Norris

RELEVANT CONGRESS SESSIONS
SPDM Conference – Track 8 – Monday 12th and Tuesday 13th June

Keynote – Tuesday June 13th
Simulation Data Management – The Next Challenges
D. Ruschmeier (Dr. Ing. h.c. F. Porsche, GER)

Training Course – Sunday 11th June – 13:15
Introduction to SDM


Learn How the Major Engineering Analysis Codes are Evolving to Address New HPC Trends

The last decade has seen the cost of hardware fall dramatically, meaning that the cost of software licences is now a significant consideration when selecting hardware to support an organisation's analysis capability. Many numerical codes scale extremely well and are taking advantage of recent advances in utilising GPUs. With many experts predicting that the end is finally nigh for Moore's law, where does that leave the simulation engineer?

Terminology can be off-putting, and HPC (High Performance Computing) is one of those terms around which there are mixed feelings and sometimes a degree of fear. Although the potential advantages of running on larger and faster computer systems are apparent to simulation engineers, the use of HPC capabilities is not as prevalent as might be expected, due to difficulties with simulation speed-up, cost, ease of use and ease of access.

Simulation speed-up

Access to the increased computational resources provided by HPC allows users to run bigger simulations and/or run the same-size simulations faster. However, there is a view, particularly in academic circles, that simulation codes don't scale well. It is true that there are significant differences between codes built to run on multiple cores and those (usually the older codes) that were not designed for multiple cores and are having to be adapted to them. Explicit simulation codes tend to scale better than implicit codes, with older codes perhaps scaling adequately only up to 16 or 32 cores. This can result in users shrinking their models to fit the available computer resources, leading to coarser and less accurate simulations.
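Amdahl's law gives the underlying reason: the serial fraction of a code caps its speed-up no matter how many cores are added. A short worked sketch, with illustrative parallel fractions:

```python
def amdahl_speedup(parallel_fraction: float, cores: int) -> float:
    """Amdahl's law: speedup = 1 / ((1 - p) + p / n). Even a small
    serial fraction caps the achievable speed-up, which is why older,
    partially parallelised codes stall at modest core counts."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

for p in (0.95, 0.999):  # 5% serial work vs 0.1% serial work
    for n in (16, 32, 1024):
        print(f"p={p:.3f}  cores={n:5d}  speedup={amdahl_speedup(p, n):7.1f}")
```

A code that is 95% parallel never exceeds a twenty-fold speed-up however many cores it is given, while one that is 99.9% parallel still scales usefully at over 1000 cores.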

To improve the situation when larger models or faster results are needed, the older algorithms that were not designed for parallelisation need to be replaced with ones that were. Alternatively, new codes that are fundamentally designed to be highly parallelisable can be used. Most widely used engineering simulation tools are continually being developed, so the view that they can't be changed is a myth. The vendors have teams of people working on these issues, often limited only by the type and size of system to which they have access. Many codes are now scaling well on very large numbers of cores, well in excess of 1000.

Cost

For commercial codes, the licence cost is significant, and one of the biggest criticisms of software vendors in recent years concerns how they charge for licensing HPC use. Some vendors have made significant improvements and now provide flexible licensing and charging methods that are almost independent of the number of cores used. The argument from the users' viewpoint is that it doesn't cost the vendor any more whether their software is run on multiple cores or just one. The argument from the vendors is that there is increased value in running on many cores. An IDC study on the financial return on investment (RoI) for HPC states that every US dollar invested in HPC (across all application areas, including CAE) returns $515. This is being used by some vendors to justify their charging structures: the charge is based on the value provided rather than the cost of providing it. They claim this is necessary to stay in the market and continue to develop their software.

Licence charging is not a technical challenge but a business decision, so competition in the market should address this and drive down prices. A software vendor needs to extract the maximum value from its products but also needs to stay competitive, so if the business environment requires it, there will be change. The lag in progress may be due to the significant issue of vendor lock-in within engineering simulation. Organisations have a lot invested in their simulation tools in terms of historical data, experience and user expertise, so the decision to change tools is not taken lightly.

For many organisations, the requirement for HPC capabilities is sporadic rather than continuous; licensing models to support this would be a valuable improvement. This can be achieved using a SaaS (Software as a Service) or pay-as-you-simulate approach; a number of cloud providers offer this, and in some cases the software licences are also available on that basis.

The cost of licences and the need for flexibility in usage are among the reasons that open-source and freely available tools are growing in popularity. Ten years ago, few presentations at NAFEMS events mentioned open-source simulation tools being used for industrial simulation, but now they appear with regularity. If commercial vendors do not modify their approach to licensing for HPC, customers will start looking towards open-source alternatives. This has implications for the quality of results. It is one thing to expect paid-for software to be verified, but when you and others have access to change the code, and you haven't paid for it, how can you know it is safe? There are claims that the quality is still good and well controlled, but the V&V evidence is in poor shape. Software development is not free; it is paid for by someone, and without the revenue to pay for it, the quality won't be the same. Having said that, open-source is a great area for consulting firms, provided they undertake the necessary V&V activities.

Aside from the licensing costs, there are other cost barriers to the use of HPC; the user requires much more than the software. System management, maintenance, installation and energy all need to be accounted for, as does the HPC hardware, which is usually only about one fifth of the total cost of ownership (TCO). This is significant, often not understood, and makes an in-house HPC facility out of reach for most SMEs. Onsite HPC is good for organisations that have their own internal software teams. These are large organisations, often OEMs, who have the capabilities to control software development, performance verification and quality assurance activities. They are also more likely to be able to fully utilise the resource. Below a certain utilisation, somewhere between 40 and 80%, it is likely to be cheaper to use a cloud service to access HPC. The switchover point is not clear-cut and is difficult to predict globally, since it depends on individual circumstances. The challenge is often in assessing the cost, which is difficult if you are not already using HPC. There are organisations offering TCO analysis for CAE users in order to address this issue.
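A back-of-envelope model shows how such a utilisation threshold arises. All figures below are illustrative assumptions rather than quoted prices; the factor of five reflects the rule of thumb above that hardware is about one fifth of TCO.

```python
def inhouse_cost_per_core_hour(hardware_cost, tco_multiplier,
                               lifetime_years, cores, utilisation):
    """Back-of-envelope in-house cost per *used* core-hour, spreading
    the total cost of ownership over the core-hours actually consumed."""
    total_cost = hardware_cost * tco_multiplier
    available_core_hours = lifetime_years * 365 * 24 * cores
    return total_cost / (available_core_hours * utilisation)

CLOUD_RATE = 0.10  # assumed cloud price per core-hour (hypothetical)
for u in (0.2, 0.4, 0.6, 0.8):
    c = inhouse_cost_per_core_hour(200_000, 5.0, 4.0, 512, u)
    cheaper = "in-house" if c < CLOUD_RATE else "cloud"
    print(f"utilisation {u:.0%}: in-house {c:.3f}/core-h -> {cheaper} cheaper")
```

With these assumed numbers the break-even falls between 40% and 60% utilisation; with different costs it moves, which is why the switchover point cannot be predicted globally.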

Ease of use and ease of access

There should be less distinction in the user experience between HPC and LPC (low-performance computing, or desktop computers), and there are trends towards blurring this. Cloud suppliers are redefining the use of HPC with easy-to-use interfaces and the idea of 'time to solve', rather than requiring users to deal with specifying the complexities of HPC.

Using HPC should not be difficult. In the same way that most car drivers don't understand all the details of the engine, suspension and engine management in their vehicle, simulation engineers shouldn't need to understand the details of how an HPC system works in order to benefit from it. The responsibility for this lies with neither the hardware suppliers nor the simulation software vendors, so integrators (such as the cloud providers) are working on it. Currently, users see the benefits of moving off desktops onto an HPC server, but although HPC can be bought, there are fewer options for acquiring the integration and training needed to allow users to run their simulations. The service provided by cloud providers is therefore very valuable, and perhaps there should be a similar service for those who choose to buy their own in-house HPC systems.

Datacentres have made HPC available to the masses for many years, and the user base is definitely widening. Security is often cited as a reason not to go into the cloud; however, unless an industrial-grade security team is part of your organisation's datacentre, the cloud provider may actually be the more secure storage solution. Although traditional methods of accessing HPC are still available and in use, primarily by academic institutions, improved interfaces are bringing HPC capabilities to a wider set of users. Cloud services provide growth potential for engineers using workstations; where HPC is already being used, the question is whether it is more cost-effective to run on the cloud or in-house.

The message is that there are significant benefits in running simulations on larger and more powerful computer systems, and the barriers that have held back progress are being removed. The cloud is extremely valuable for some sets of users, and it is growing enormously because it provides both ease of access and better cost models.

"The question is not, and in most cases never should be, HPC vs Cloud. It really should be about Cloud as a growth potential for people using workstations." Andrew Jones

"Many end users are quite blind to the total cost of ownership [of in-house HPC], which, as soon as you know it, is an even bigger hurdle." Wolfgang Gentzsch

RELEVANT CONGRESS SESSIONS
Keynote – Monday 12th June
Fighting Bugs with HPC
U. Lindblad (Tetra Pak, SWE)

Session 1D – Monday 12th June – 11:00
Cloud Computing 1

Session 2D – Monday 12th June – 13:55
Cloud Computing 2

Session 5D – Tuesday 13th June – 11:00
HPC

Session 4J – Monday 12th June – 17:30
Discussion Session – Trends in HPC
With thanks to:
Trevor Dutton (Dutton Simulation)
Wolfgang Gentzsch (The UberCloud)
Peter Giddings (National Composites Centre)
Jean-François Imbert (SIMconcept Consulting)
Andrew Jones (NAG)
Lee Margetts (University of Manchester)
Mark Norris (The SDM Consultancy)
William Oberkampf (W.L. Oberkampf Consulting)
Frank Popielas (Popielas Engineering Consulting, LLC)
Chris Rogers (CREA Consultants)
Dirk Ruschmeier (Porsche)
Barna Szabó (Engineering Software Research and Development)
Anas Yaghi (The Manufacturing Technology Centre)

Our thanks are also due to all of the NWC17 keynote speakers, who were willing to contribute but were prevented from doing so by the finite, uni-directional nature of time.