
4pp NAFEMS & ASME V&V

What is Verification and Validation?

www.asme.org www.nafems.org


Engineering Simulation involves three types of models, namely Conceptual, Mathematical and Computational, as indicated in the flow diagram. In relation to these model types, the widely accepted definitions of Verification and Validation (V&V) are:

• Verification:

The process of determining that a computational model accurately represents the underlying mathematical model and its solution.

• Validation:

The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model.

Put most simply, Verification is the domain of mathematics and Validation is the domain of physics.

Verification

It follows from these definitions that confidence in the Computational model must be established by carrying out two fundamental processes to collect evidence that:

1 Code Verification – the mathematical model and solution algorithms are working correctly.

2 Calculation Verification – the discrete solution of the mathematical model is accurate.

Code Verification

In general, Code Verification is the domain of software developers, who hopefully use modern Software Quality Assurance techniques along with testing of each released version of the software. Users of software also share in the responsibility for code verification,


even though they typically do not have access to the software source.

Among the Code Verification techniques, the most popular method is to compare code outputs with analytical solutions; this type of comparison is the mainstay of regression testing. Unfortunately, the complexity of most available analytical solutions pales compared to even rather routine applications of most commercial software. One Code Verification method with the potential to greatly expand the number and complexity of analytical solutions is what is termed in the V&V literature as manufactured solutions.
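To illustrate the manufactured-solutions idea, the sketch below (a hypothetical Python/NumPy example, not taken from the guide) chooses the exact solution first, derives the matching source term analytically, and then checks that a second-order finite-difference solver reproduces it:

```python
import numpy as np

# Method of manufactured solutions for -u''(x) = f(x) on [0, 1], u(0) = u(1) = 0.
# Choose the "manufactured" exact solution first, then derive the source term.
def u_exact(x):
    return np.sin(np.pi * x)              # chosen solution

def f_source(x):
    return np.pi**2 * np.sin(np.pi * x)   # f = -u'' for the chosen u

def solve_poisson(n):
    """Second-order central-difference solve on n interior points."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Tridiagonal matrix for -u'' with central differences
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f_source(x))
    return x, u

x, u = solve_poisson(50)
err = np.max(np.abs(u - u_exact(x)))
print(f"max error on 50-point mesh: {err:.2e}")
```

Because the exact solution is known by construction, the error can be measured directly, and the same machinery works for source terms far more complex than any tabulated analytical solution.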

Calculation Verification

The other half of Verification is what is termed Calculation Verification: estimating the errors in the numerical solution due to discretization. However, any comparison of the numerical and analytical results will contain some error, as the discrete solution, by definition, is only an approximation of the analytical solution. So the goal of Calculation Verification is to estimate the amount of error in the comparison that can be attributed to the discretization.

Discretization error is most often estimated by comparing numerical solutions at two or more discretizations (meshes) with increasing mesh resolution, i.e. decreasing element size. The objective of these mesh-to-mesh comparisons is to determine the rate of convergence of the solution.

The main responsibility for Calculation Verification rests with the analyst, or user of the software. While it is clearly the responsibility of the software developers to assure that their algorithms are implemented correctly, they cannot provide any assurance that a user-developed mesh is adequate to obtain the available algorithmic accuracy, i.e. large solution errors due to use of a coarse (unresolved) mesh are attributable to the software user.

The lack of mesh-refinement studies in solid mechanics is often the largest omission in the verification process. This is particularly distressing, since it is relatively easy to remedy using available adaptive meshing techniques.
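The mesh-to-mesh comparison described above can be sketched as follows. This hypothetical Python example (the deflection values and refinement ratio are invented for illustration) estimates the observed order of convergence from three meshes and uses Richardson extrapolation to estimate the discretization error remaining on the finest mesh:

```python
import numpy as np

# Estimate the observed order of convergence p from a scalar quantity q
# computed on coarse, medium and fine meshes with constant refinement
# ratio r (here the element size is halved at each step).
def observed_order(q_coarse, q_medium, q_fine, r=2.0):
    return np.log(abs(q_coarse - q_medium) / abs(q_medium - q_fine)) / np.log(r)

def richardson_extrapolate(q_medium, q_fine, p, r=2.0):
    """Estimate the mesh-independent value and the discretization
    error remaining in the fine-mesh solution."""
    q_est = q_fine + (q_fine - q_medium) / (r**p - 1.0)
    return q_est, q_est - q_fine

# Hypothetical tip deflections (mm) from three meshes, h = 4, 2, 1:
q = [1.802, 1.950, 1.987]
p = observed_order(*q)
q_est, err = richardson_extrapolate(q[1], q[2], p)
print(f"observed order ~ {p:.2f}, extrapolated value ~ {q_est:.3f} mm")
```

If the observed order matches the theoretical order of the element formulation, the meshes are in the asymptotic range and the extrapolated error estimate can be trusted; if not, further refinement is needed before any error claim is made.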

Validation

Neither part of Verification addresses the question of the adequacy of the selected models for representing the reality of interest. Answering the adequacy question is the domain of Validation, i.e. are the mechanics (physics) included in the models sufficient to provide reliable answers to the questions posed in the problem statement?

The manner in which the mathematics and physics interact in the V&V process is illustrated in the flow chart. After the selection of the Conceptual model, the V&V process has two branches: the left branch contains the modeling elements and the right branch the physical testing (experimental) elements.

This figure is intentionally designed to illustrate the paramount importance of physical testing in the V&V process, as ultimately, it is only through physical observations (experimentation) that assessments about the adequacy of the selected Conceptual and Mathematical models for representing the reality of interest can be made.

Close cooperation among modelers and experimentalists is required during all stages of the V&V process, until the experimental outcomes are obtained. Close cooperation is required because the mathematical and physical models will often differ. Consider, as an example, the fixed-end (clamped) boundary condition for a cantilever beam: the two groups will have quite different views of this part of the Conceptual model. Mathematically the boundary condition is quite easy to specify, but in the laboratory there is no such thing as a 'clamped' boundary. In general, some parts of the Conceptual model will be relatively easy to include in either the mathematical or physical model, and others more difficult. A dialogue between the modelers and experimentalists is critical to resolve these differences. To aid in this dialogue, the 'cross-talk' activity labeled as "Preliminary Calculations" in the chart is intended to emphasize the goal that both numerical modelers and experimentalists attempt to model the same Conceptual model.

Further Reading: Guide for Verification and Validation in Computational Solid Mechanics, available through ASME publications as V&V 10-2006.

Of equal importance is the idea that the experimental outcomes should not be revealed to the modelers until they have completed the simulation outcomes. The chief reason for segregation of the outcomes is to enhance the confidence in the model's predictive capability. When experimental outcomes are made available to modelers prior to establishing their simulation outcomes, the human tendency is to 'tune' the model to the experimental outcomes to produce a favorable comparison. This tendency decreases the level of confidence in the model's ability to predict, and moves the focus to the model's ability to mimic the provided experimental outcomes.

Lastly, the role of uncertainty quantification (UQ), again for both modelers and experimentalists, is emphasized. It is expected that when more than one experiment is performed they will produce somewhat different results. It is the role of UQ to quantify "somewhat" in a meaningful way. Similarly, every computation involves both numerical and physical parameters that have ranges, and likely distributions, of values. Uncertainty quantification techniques attempt to quantify the effect of these parameter variations on the simulation outcomes.
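To make both halves of this concrete, the sketch below (a hypothetical Python/NumPy illustration with made-up numbers, not from the guide) summarizes the scatter across repeat experiments and propagates an assumed input uncertainty through a stand-in simulation model by Monte Carlo sampling:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical peak-stress outcomes (MPa) from five nominally identical tests:
experiments = np.array([312.0, 318.5, 309.2, 315.7, 313.1])
exp_mean = experiments.mean()
exp_std = experiments.std(ddof=1)   # sample standard deviation quantifies "somewhat"

# Propagate an uncertain input (Young's modulus, assumed +/-3% scatter)
# through a stand-in simulation model by Monte Carlo sampling:
def simulate_peak_stress(E):
    # Placeholder response function, not a real structural model
    return 314.0 * (E / 210e9) ** 0.5

E_samples = rng.normal(210e9, 0.03 * 210e9, 10_000)
sim = simulate_peak_stress(E_samples)

print(f"experiment: {exp_mean:.1f} +/- {exp_std:.1f} MPa")
print(f"simulation: {sim.mean():.1f} +/- {sim.std():.1f} MPa")
```

Expressing both the experimental and simulation outcomes as a mean with a spread puts the model-to-test comparison on a common statistical footing, which is exactly the comparison the validation branch of the flow chart calls for.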

www.asme.org www.nafems.org