
AIAA-2000-1437


VALIDATION OF STRUCTURAL DYNAMICS MODELS AT LOS ALAMOS NATIONAL LABORATORY

François M. Hemez* and Scott W. Doebling
Engineering Analysis Group (ESA-EA)

Los Alamos National Laboratory
P.O. Box 1663, M/S P946

    Los Alamos, New Mexico 87545

    ABSTRACT

This publication proposes a discussion of the general problem of validating numerical models for nonlinear, transient dynamics. The predictive quality of a numerical model is generally assessed by comparing the computed response to test data. If the correlation is not satisfactory, an inverse problem must be formulated and solved to identify the sources of discrepancy between test and analysis data. Some of the most recent work summarized in this publication has focused on developing test-analysis correlation and inverse problem solving capabilities for nonlinear vibrations. Among the difficulties encountered, we cite the necessity to satisfy continuity of the response when several finite element optimizations are carried out successively and the need to propagate variability throughout the optimization of the model's parameters. After a brief discussion of the formulation of inverse problems for nonlinear dynamics, the general principles which, we believe, should guide future developments of inverse problem solving are discussed. In particular, it is proposed to replace the resolution of an inverse problem with multiple forward, stochastic problems. The issue of defining an adequate metric for test-analysis correlation is also addressed. Our approach is illustrated using data from a nonlinear vibration testbed and an impact experiment, both conducted at Los Alamos National Laboratory in support of the Advanced Strategic Computing Initiative and our code validation and verification program.

    1. INTRODUCTION

Advances in computational and modeling capabilities make it possible to simulate a wide range of difficult problems that would have been off-limits just a few decades ago. However, developing models and obtaining numerical solutions do not necessarily imply that the resulting predictions are correct. Weather forecasting, prediction of acoustic levels and reliability analysis of mechanical systems are a few examples that illustrate this difficulty on a daily basis.

This work addresses the general problem of model validation, that is, how to assess the predictive accuracy of a numerical simulation and its ability to capture the dynamics or physics of interest. Model validation includes classes of problems that have been and continue to be extensively studied, among which we cite health monitoring, damage detection and finite element model updating. All of these share the need to fit a parametrized model to a reference solution or test data, thereby defining an inverse problem. Some of the general principles which, we believe, should guide the development of inverse problem solving for the 21st century are discussed. These include the development of general-purpose, nonlinear models; the analysis of transient, time-domain data; greater imagination in feature extraction and the definition of test-analysis metrics; dedicated numerical analysis tools for increasing the efficiency of inverse problem solving; decentralized measurement and computational strategies; the propagation of variability information during the direct and inverse calculations; and the formulation of statistical hypothesis tests to assess the consistency between test data and multiple numerical simulations.

* Technical staff member, AIAA member, [email protected], 505-665-7955.
Senior technical staff member, AIAA member, [email protected], 505-667-6950.

This material is declared a work of the U.S. Government and is not subject to copyright protection in the United States. This publication has been approved for unlimited, public release. LA-UR-99-4542. Unclassified.

For publication in the Proceedings of the 41st AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Materials Conference, April 3-6, 2000, Atlanta, Georgia.


Some aspects of this philosophy are illustrated using nonlinear vibration data and an impact experiment designed for characterizing the behavior of a highly nonlinear hyperelastic material. Various metrics for test-analysis correlation are compared, response surfaces are defined to optimize the design parameters, and fast probability integration is used for assessing the consistency of various models with respect to test data in a statistical sense.

    2. MOTIVATION

The main reason why numerical models have become so popular is that it is much less expensive to use computational time than it is to run a sophisticated experiment. Many practical situations also occur where the phenomenon of interest cannot be measured directly. For example, this is the case with large space antennas developed for observation and communication purposes that cannot withstand their own weight in a 1-g gravity environment. The analyst must therefore rely on numerical simulations to establish the dynamic characteristics of the structure and to validate control laws.1 Another example is the diagnosis of cracks or faulty mechanical components in civil engineering structures or airplanes. In this case, testing methodologies are simply not yet available due to the complexity of such systems, and engineers must rely on localized screening or component testing, which turns out to be time-consuming and very expensive. Modal testing-based health monitoring appears to be a promising alternative.2 Hence, the scientific community has turned to numerical models that can be parametrized and used to study a wide variety of situations.

This argument has been reinforced in recent years by the increasing efficiency of processors, the greater availability of memory, and the breakthrough of object-oriented data structures, together with the growing popularity of parallel processing, whether it involves computers with massively parallel architectures or networks of single-CPU workstations. Interestingly enough, the miniaturization of CPUs and their greater efficiency have greatly influenced testing procedures, making it possible to instrument structures with hundreds of transducers. Powerful data analysis and friendly computer graphics are also a driving force behind the development of non-intrusive, optical measurement systems such as holography and laser vibrometry. These technological breakthroughs are not without major consequences on the way engineers analyze structural systems today and on their conception of test-analysis correlation and inverse problem solving. In the first case, an illustration is the rapid development of modeling and computational procedures for nonlinear dynamics. In the second case, modal-based updating techniques developed originally to refine linear structural dynamics models are evolving into the broader notion of model validation. This work defines and explores this last concept.

The articulation between testing, modeling and inverse problem solving is illustrated in Figure 1, where arrows represent the flow of information. Here, inverse problem solving is replaced by a methodology where response surfaces are generated from the resolution of a large number of forward analyses. This best utilizes our capabilities for modeling nonlinear systems and our parallel processing resources. Two other important contributions to this work are 1) the ability to derive high-accuracy, physics-based material models and 2) fast probability integration for large-scale structural analysis. The first one is not discussed in this paper but is briefly mentioned because physics-based models of material behavior are generally obtained from a microscopic description of the material. As such, they depend on parameters that cannot be measured with great accuracy and that are best characterized by probabilistic distributions. This explains why fast probability integration techniques are critical to our work and why optimization algorithms are required, not only to adjust parameters of the models, but also to assess the quality of models in a probabilistic sense.

Figure 1. Flow chart describing the different steps of testing, modeling, analysis and validation.

[Figure 1 blocks: Nonlinear Vibration or Transient Testing; Development of Several Structural Models; High Accuracy, Physics-based Material Behavior; Test-Analysis Correlation; Parametric Optimization; Probabilistic Analysis.]


    3. DESCRIPTION OF LANL TESTBEDS FOR MODEL VALIDATION

To illustrate our views of model validation, two experiments performed at Los Alamos National Laboratory (LANL) are briefly described. The first testbed is an eight-degree-of-freedom vibrating system that exhibits significant friction and nonlinear oscillations. The purpose of the second testbed is to characterize the behavior of an elastomeric layer of material subjected to a short-duration impact. Both experiments are designed to provide test data that can be studied to quantify the variability of a component-level experiment and to assess the adequacy of our model validation procedures.

    3.1. TESTBED FOR NONLINEAR VIBRATIONS

Our testbed for the validation of nonlinear vibration modeling is the LANL 8-DOF system (which stands for Los Alamos National Laboratory eight degrees of freedom) illustrated in Figure 2. It consists of eight masses connected by linear springs. The masses are free to slide along a center rod that provides support for the whole system. Modal tests are performed on the nominal system and on a damaged version where the stiffness of various springs is reduced by 14% or 24%. The exercise consists of identifying the location and extent of structural damage by optimizing the stiffness of each spring of the numerical model. This procedure illustrates the conventional approach to model updating, where test-analysis correlation requires the definition of modal-based features such as, for example, the difference between identified and predicted frequencies or the modal assurance criterion (MAC) formed between test and analysis mode shapes. Obviously, this approach is justified for linear models when the dynamics is dominated by the system's low-frequency modes.
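
For reference, a minimal sketch of the MAC computation mentioned above is shown below; the mode-shape vectors are hypothetical placeholders and the formula assumes real-valued shapes.

```python
import numpy as np

def mac(phi_test, phi_fe):
    """Modal assurance criterion between two real-valued mode shapes."""
    num = np.abs(phi_test @ phi_fe) ** 2
    den = (phi_test @ phi_test) * (phi_fe @ phi_fe)
    return num / den

# Hypothetical pair of 8-DOF mode shapes
phi_a = np.array([0.1, 0.3, 0.5, 0.7, 0.7, 0.5, 0.3, 0.1])
phi_b = phi_a + 0.05 * np.random.default_rng(0).standard_normal(8)
print(mac(phi_a, phi_b))  # close to 1.0 for well-correlated shapes
```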

    Figure 2. LANL 8-DOF testbed.

Even though the original, linear model is in good agreement with the measured modal parameters, friction introduces an unambiguous nonlinearity in the system's response, as shown in Figures 3 and 4. They represent the changes in identified modal frequencies (Figure 3) and damping ratios (Figure 4) as the level of force used to excite the system is increased.

Figure 3. Evolution of modal frequencies identified with the 8-DOF testbed as the input level is increased.

Figure 4. Evolution of damping ratios identified with the 8-DOF testbed as the input level is increased.

Frequencies do not vary significantly because the distribution of mass and stiffness is unchanged. The amount of damping in the system, however, is reduced overall because higher forcing levels tend to reduce the stick-and-slip phenomenon. As a result, our attempts to identify the damage based on conventional model updating techniques fail as long as friction is not accounted for in the numerical model.3,4 An important conclusion is that modal parameters, although popular and widely used in the modal analysis community, may not be the best indicators when it comes to assessing the dynamics of a system.


A contact mechanism can also be added between two masses to induce a source of contact/impact. It is pictured in Figure 5. When the system is used in this nonlinear mode, acceleration data are measured at each one of the eight masses. Then, features extracted from the time series can be compared to their numerical counterparts to assess the predictive quality of a particular model or family of models. Examples of such features are, again, the modal parameters identified from the measurements. They can also be defined using the difference of time series, polynomial fits, principal component decomposition, etc.

Figure 5. Contact mechanism of the LANL 8-DOF testbed for nonlinear vibrations.

Figure 6. Accelerations measured at sensor 1 (top) and sensor 5 (bottom) for the LANL 8-DOF testbed.

Figure 6 illustrates the raw test data. Accelerations measured at locations 1 and 5 are shown when the system is configured with the impact mechanism and excited by a random signal at location 1. Several examples of features such as those mentioned previously are given, and their ability to discriminate good models from poor models is illustrated in Section 5.3. This issue is critical because the transient oscillations featured by these data make it difficult to establish a comparison based, for example, on the root mean square (RMS) error between measured and predicted time series.

With the nonlinear configuration, the problem we are interested in is two-fold. First, the best possible friction model must be obtained. Then, the ability of inverse problem solving to identify a damaged spring and to discriminate between structural damage and impact nonlinearity is investigated. This is achieved by building a parametric, explicit finite element model of the system; generating the time-domain responses; and minimizing the distance between test data and predictions, whether the distance is evaluated in the time or frequency domain. This optimization problem can be formulated as the minimization of the cost function shown in equation (1), where the first contribution represents the metric used for test-analysis correlation and the second contribution serves the purpose of regularization and promotes minimum-change solutions

\min_{\{\delta p\}} \; \sum_{j=1}^{N} \{R_j(p+\delta p)\}^T [S_{RR}]^{-1} \{R_j(p+\delta p)\} + \{\delta p\}^T [S_{pp}]^{-1} \{\delta p\}   (1)

where the residual vectors are defined as \{R_j(p+\delta p)\} = \{d_j^{test}\} - \{d_j(p+\delta p)\}. Constraints such as p_{min} \le (p_e + \delta p_e) \le p_{max} are added to the formulation to eliminate any local minimum that would not be acceptable from a physical standpoint. The weighting matrices in equation (1) are generally kept constant and diagonal for computational efficiency. They can also be defined as general covariance matrices, which then formulates a Bayesian correction procedure.5
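
As a rough illustration of this formulation, the sketch below evaluates a cost of the form of equation (1) for a candidate parameter update; the residual function, weighting matrices and starting point are hypothetical stand-ins rather than the actual LANL implementation.

```python
import numpy as np

def updating_cost(dp, residuals, S_RR_inv, S_pp_inv, p0):
    """Cost of eq. (1): weighted residuals summed over N data sets plus a
    minimum-change (regularization) penalty on the parameter update dp."""
    J = sum(r(p0 + dp) @ S_RR_inv @ r(p0 + dp) for r in residuals)
    return J + dp @ S_pp_inv @ dp

# Hypothetical two-parameter example with a single residual vector
p0 = np.array([1.0, 2.0])
residuals = [lambda p: np.array([p[0] - 1.2, p[1] - 1.8])]   # d_test - d(p)
S_RR_inv = np.eye(2)          # kept diagonal for efficiency, as in the text
S_pp_inv = 0.1 * np.eye(2)
print(updating_cost(np.array([0.1, -0.1]), residuals, S_RR_inv, S_pp_inv, p0))
```

Bound constraints of the type p_min ≤ p_e + δp_e ≤ p_max could be handled by any constrained optimizer, for example scipy.optimize.minimize with bounds.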

    3.2. TESTBED FOR TRANSIENT IMPACT

The purpose of this experiment is to provide us with test data that can be used for validating the predictive quality of a numerical model based on explicit finite element (FE) simulations. The application targeted is a high-frequency shock test that features a component characterized by a nonlinear, viscoelastic material. Major differences compared to the previous 8-DOF system are the dynamics observed (transient as opposed to nonlinear vibrations); the larger computational effort required to simulate the response (the numerical model consists of several thousand degrees of freedom and Lagrange multipliers); and the need to develop stochastic models that account for sources of variability and uncertainty.

  • American Institute of Aeronautics and Astronautics5

    3.2.1. Numerical Modeling

The setup is illustrated in Figure 7. It can be observed that the two main components (steel impactor and foam layer) are assembled on a mounting plate that is attached to the carriage. The center of the steel cylinder is hollow and it is fixed with a rigid collar to restrict the motion of the impactor to the vertical direction. This assures perfectly bilinear contact between the steel and foam components, allowing the structure to be modeled axi-symmetrically. In spite of this, a full three-dimensional model is also developed to verify this assumption's validity.

Figure 7. Description of the assembly of the cylindrical impactor and carriage.

Figure 8. 3D model of the impact testbed.

Figure 8 illustrates one of the discretized models developed for numerical simulation. The analysis program used for these calculations is HKS/Abaqus-Explicit, a general-purpose package for finite element modeling of nonlinear structural dynamics.6 It features an explicit time integration algorithm, which is convenient when dealing with nonlinear materials, impact or contact, and high-frequency excitations.

In an effort to match the test data, several FE models are developed by varying, among other things, the constitutive law and the type of modeling. Therefore, the optimization variables consist of the usual design variables augmented with structural form parameters such as kinematic assumptions, geometry description (2D or 3D), contact modeling and numerical viscosity. Another important parameter is the amount of preload applied by the bolt used to hold this assembly together. The torque applied was not measured during testing and it may have varied from test to test. The resulting difficulty is that the amount of preload applied must be considered a random variable because it is believed to have contributed to the variability of the experiment. In contrast, variables describing the material are unknown but deterministic because they do not vary as long as the same sample of material is tested. This implies that the analysis tools must be able to handle random variables and that test-analysis correlation must be recast into a more general stochastic framework.

    3.2.2. Experiment Setup

During the actual test, the carriage, which weighs 955 lbm (433 kg), is dropped from various heights and impacts a rigid floor.7 The input acceleration is measured on the top surface of the carriage and three output accelerations are measured on top of the steel impactor, which weighs 24 lbm (11 kg). Figure 9 provides an illustration of the test setup and instrumentation. This impact test is repeated several times to collect multiple data sets from which the experiment's repeatability can be assessed. At impact, the steel cylinder compresses the foam, causing elastic and plastic strains during a few microseconds as shown in Figures 10 and 11.

Figure 9. LANL impact test setup.

[Figure 9 labels: Steel Impactor; Foam Layer; Mounting Plate; Carriage (Impact Table); Tightening Bolt.]


Typical accelerations measured during the impact tests are depicted in Figures 10 and 11. Both data sets are generated by dropping the carriage from an initial height of 13 inches (0.33 meters). The response of a 1/4 inch-thick (6.3 mm) layer of foam is shown in Figure 10 and the response of a 1/2 inch-thick (12.6 mm) layer is shown in Figure 11. It can be observed that over a thousand g's are measured on top of the impact cylinder, which yields large deformations in the foam layer. The time scale also indicates that the associated strain rates are significant. Lastly, the variation of peak acceleration observed in Figure 10 suggests that a non-zero angle of impact is involved, making it necessary to model this system with a 3D discretization. Clearly, modal superposition techniques would fail to model this system because 1) contact cannot be represented efficiently with mode shapes; 2) nonlinear hyperfoam models are needed to represent the foam's hardening behavior; and 3) very refined meshes would be required to capture the frequency content well over 10,000 Hertz.

Figure 10. Accelerations measured during a low-velocity impact on a thin foam layer.

Figure 11. Accelerations measured during a low-velocity impact on a thick foam layer.

    3.2.3. Variability of the Experiment

Table 1 gives the number of data sets collected for each configuration tested. The reason why fewer data sets are available at high impact velocity is that these tests proved to be destructive to the elastomeric material and could not be repeated. Figure 12 shows the variability observed during the impacts when the same configuration of the system (same sample of elastomeric material and impact velocity) is tested ten times.

Table 1. Data collected with the impact testbed.

Number of Data Sets Collected    Low Velocity Impact (13 in. Drop)    High Velocity Impact (155 in. Drop)
Thin Layer (0.25 in.)            10 Tests                             5 Tests
Thick Layer (0.50 in.)           10 Tests                             5 Tests

Figure 12. Accelerations measured during 10 similar impact tests (top: input; bottom: output 1).

Although the environment of this experiment was very well controlled, a small spread in both input and output signals is obtained. This justifies our point that model correlation and model validation must be formulated as statistical pattern recognition problems. From Figure 12, the variability of the test data can be assessed and represented in a number of ways, an illustration of which is provided in Figure 13. It shows the peak acceleration probability density functions (PDF) for each measurement. Such a representation tells us, for example, that 17% of the values measured at output sensor 1 are equal to 1,520 g's when similar experiments are repeated. According to Figure 13, this value is the most probable peak acceleration. What is therefore important is that the correlated models predict the acceleration levels with the same probability of occurrence as the one inferred from test data.

Figure 13. Probability density functions of the peak acceleration measured during 10 impact tests.

    4. DIRECT CORRELATION OF TIME SERIES

One major difficulty of time-domain model validation is the reconstruction of continuous solution fields during the optimization. This issue is fundamental because, if the inverse problem is not formulated correctly, the optimized numerical model yields discontinuous acceleration, velocity and displacement fields, which contradicts the laws of mechanics for the class of problems investigated here.

With the conventional approach for solving inverse problems, parametric optimization is formulated by selecting a test-analysis correlation metric denoted by the vector {R} in equation (1). Implementing successive optimizations produces several optimized models, one for each time window considered. This is necessary not only for computational purposes but also because some of the parameters being optimized may vary in time, and following such evolution as it occurs may be critical to model validation. However, nothing in the formulation of the inverse problem enforces continuity between the solution fields obtained from models optimized within the i-th and (i+1)-th time windows. Since the design variables can converge to different solutions in successive time windows, the discontinuity of the solution can be written, for example, in terms of the displacement field as

\lim_{t \to t_i^-} x(p^{(i)}, t) \ne \lim_{t \to t_i^+} x(p^{(i+1)}, t)   (2)

The only solution currently available is to re-formulate the inverse problem as a constrained optimization where the continuity of the solution field is enforced explicitly. This strategy is based on the theory of optimal control and it relies on the resolution of multiple two-point boundary value problems (BVP).8,9 When satisfactory solutions of the two-point BVPs are obtained, the numerical model is guaranteed to match the measured data at the beginning and at the end of the time window considered. In addition, a parametric adjustment can be brought to the model to improve the correlation with test data, and a non-parametric residue is best-fitted that can be used for identifying any nonlinearity, source of variability or modeling error not accounted for by the model. We emphasize that the idea of optimal error control is not original. Full credit must be given to the authors of References 8 and 9, although their original motivation was somewhat different.

Our application of this technique to a single-degree-of-freedom system and a four-degree-of-freedom system shows that the optimal control approach does indeed resolve the discontinuity. This improvement comes with the additional cost of formulating a two-point BVP to guarantee continuity of the solution. Since the procedure is embedded within an optimization solver, multiple two-point BVPs must be solved. Unfortunately, the impact on the computational requirement is enormous and practical applications currently remain out of reach. (Typically, identifying an unknown nonlinear force with a single-degree-of-freedom Duffing oscillator may require up to 20 hours of CPU time on a workstation.) For this reason, we adopt the approach of replacing the resolution of inverse problems with multiple forward, stochastic calculations.

    5. VALIDATION OF NUMERICAL MODELS

Even though model updating and health monitoring have been prolific fields of research for many years, their applications have mostly been restricted to linear systems that can be accurately described by a subset of low-frequency modes. Modal-based techniques rapidly become obsolete when systems are subjected to high-frequency excitation, when variability is an issue of concern or when the dynamics of interest is strongly nonlinear. In the remainder, several issues are discussed that, we believe, are critical to the success of model validation. They are the following:

• Extracting features from the data;
• Developing fast probability integration tools;
• Solving stochastic optimization problems;
• Assessing the statistical consistency of data sets.

After a short discussion of the basic concepts of model validation (Section 5.1) and a description of the overall computational procedure (Section 5.2), these four issues are addressed in Sections 5.3 to 5.6.

    5.1. BASIC CONCEPTS

The philosophy presented here is to replace the formulation of inverse problems with a methodology where error surfaces are generated from the resolution of a large number of forward, stochastic analyses and then optimized to identify the source of modeling error. This is the only alternative to the correct yet computationally impractical formulation discussed briefly in Section 4. Besides having to account for uncertain inputs, imperfect material characterization and modeling errors during a design cycle, the other reason for this approach is to recast model updating as a problem of hypothesis testing. When the predictive quality of a model is assessed, we believe that three fundamental questions must be answered:

Are results from the experiment(s) and simulation(s) consistent statistically?

What is the degree of confidence associated with the first answer?

If additional data sets are available, by how much does the confidence increase?

Hypothesis testing permits answering these questions. The difficulty, however, is to assess the minimum amount of data necessary to formulate a meaningful test and to implement such a test for large-scale numerical simulations. Although hypothesis testing is well known, very little literature is available on the subject of "population versus population" testing. Moreover, applying conventional tools to the multivariate case is not straightforward.

    5.2. OVERVIEW OF THE PROCEDURE

According to the procedure illustrated in Figure 14, optimization parameters and random variables are first defined. Multiple FE solutions and multi-dimensional error surfaces are generated from statistical sampling. Error surfaces provide a metric for test-analysis correlation and model updating. The first useful tool is sensitivity analysis, employed to reduce the subset of potential optimization variables down to the most sensitive ones. Then, the best possible model is sought through the optimization of its design parameters. When these consist of random variables, the procedure must either search for the most likely values (case where distributions are known) or optimize the statistics (case where distributions are somewhat unknown). Finally, Figure 14 shows that, rather than comparing response levels, the ability of a probabilistic model to reproduce test data must be assessed using the response's statistics.

Figure 14. Flow chart showing the successive steps of model validation.

[Figure 14 blocks: Material Modeling (PDF); Geometry (PDF); Input Loading (PDF); Statistical Sampling; Explicit FE Analysis; Sensitivity Analysis; Response Surface; Parametric Optimization; Fast Probability Integration; CDF of Peak Acceleration.]

Software integration is an important part of the procedure described previously. Three software packages involving four different programming languages are interfaced. The test-analysis correlation procedure is controlled by a library of Matlab functions.10 The reason for this choice is flexibility and the possibility to develop a graphical user interface easily. Depending on the type of analysis requested by the user, the Matlab-based software writes and compiles Fortran77 routines that are used for generating the Abaqus input deck. Drivers written in the script language Python are also generated and used for piloting the FE analyses.11 Finally, results are uploaded back into Matlab for test-analysis correlation and parametric optimization. This architecture should enable the interfacing, in the near future, of a variety of engineering analysis software, including parallel FE processing packages for running large-dimensional, nonlinear engineering simulations on high-performance computing platforms.
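
The paper does not reproduce its driver scripts; the sketch below is a hedged illustration, in the spirit of the Python drivers mentioned above, of how a batch of explicit FE runs might be launched from a controlling script. The directory layout, file names and the `abaqus` command-line options are assumptions, not details taken from the paper.

```python
import shutil
import subprocess
from pathlib import Path

def run_design(job_name, input_deck, work_dir="runs"):
    """Copy one input deck into a scratch directory and launch the solver.
    The 'abaqus' command and its flags are assumed for illustration only."""
    run_dir = Path(work_dir) / job_name
    run_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy(input_deck, run_dir / f"{job_name}.inp")
    subprocess.run(["abaqus", f"job={job_name}", "analysis", "interactive"],
                   cwd=run_dir, check=True)
    return run_dir

# Hypothetical loop over a family of candidate designs written by the controller
for k, deck in enumerate(sorted(Path("decks").glob("design_*.inp"))):
    run_design(f"design_{k:04d}", deck)
```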

    5.3. DATA CORRELATION METRICS

Large computer simulations tend to generate enormous amounts of output that must be synthesized into a small number of indicators for the analysis. This step is referred to as data reduction or feature extraction in the literature. These features are typically used to define the test-analysis correlation metrics that may be optimized to improve the predictive quality of the model. The main issue in feature extraction is to define indicators that provide meaningful insight regarding the ability of the model to capture the dynamics of the system investigated. Some of the features defined for nonlinear structural dynamics are reviewed below.

    RMS error of time series:

The simplest of test-analysis correlation metrics is the difference between measured and predicted time series. Equation (3) shows the RMS error between peak acceleration responses cumulated over several sensors. The total simulation error is defined in equation (4).

J(p) = \sum_{s \in sensors} \left( x_s^{peak,measured} - x_s^{peak}(p) \right)^2   (3)

J(p) = \sum_{s \in sensors} \sum_{t \in times} \left( x_{measured,s}(t) - x_s(t; p) \right)^2   (4)

    Principal component decomposition:

The principal component decomposition (PCD) is a comparison of manifolds.12 Instead of comparing the signals directly, the angles between the nonlinear subspaces spanned by the responses are estimated. To do so, time responses are collected into data matrices whose singular value decomposition efficiently compares multi-dimensional data sets with automatic normalization. The PCD metric may be defined as

J(p) = \sum_i (\Delta\sigma_i)^2 + \sum_i \sum_j (\Delta U_{ij})^2 + \sum_i \sum_j (\Delta V_{ij})^2   (5)

In equation (5), [\Delta U], [\Delta\Sigma] and [\Delta V] represent normalized differences between the singular values and vectors of the analysis and test data matrices, defined as

\begin{bmatrix} x_1(t_1; p) & \cdots & x_1(t_m; p) \\ \vdots & \ddots & \vdots \\ x_n(t_1; p) & \cdots & x_n(t_m; p) \end{bmatrix} = [U(p)] [\Sigma(p)] [V(p)]^T   (6)

[\Delta U] = [U_{test}]^T [U(p)] - [I]   (7)

[\Delta V] = [V_{test}]^T [V(p)] - [I]   (8)

[\Delta\Sigma] = [\Sigma_{test}]^{-1} \left( [\Sigma_{test}] - [\Sigma(p)] \right)   (9)

Although more computationally intensive, this feature provides an elegant framework for interpreting the data by generalizing the notion of mode shapes and modal contributions to nonlinear systems. It may also filter out measurement noise, which is typically associated with small singular values.
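
A numpy sketch of this metric is given below; the response matrices are hypothetical, and aligning the arbitrary signs of the singular vectors before differencing is an added practical assumption.

```python
import numpy as np

def pcd_metric(X_test, X_model):
    """PCD metric of eqs. (5)-(9); X_* are (n_sensors, n_samples) matrices."""
    Ut, st, Vth = np.linalg.svd(X_test, full_matrices=False)
    Um, sm, Vmh = np.linalg.svd(X_model, full_matrices=False)
    # SVD signs are arbitrary: flip model vectors to best match the test ones
    signs = np.sign(np.sum(Ut * Um, axis=0))
    Um, Vmh = Um * signs, Vmh * signs[:, None]
    dU = Ut.T @ Um - np.eye(len(st))        # eq. (7)
    dV = Vth @ Vmh.T - np.eye(len(st))      # eq. (8)
    ds = (st - sm) / st                     # eq. (9), diagonal entries
    return np.sum(ds**2) + np.sum(dU**2) + np.sum(dV**2)   # eq. (5)
```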

    Shock response spectrum:

The shock response spectrum (SRS) is obtained by calculating the response of a single-degree-of-freedom system to a known input such as, for example, the acceleration signal denoted by I(t) below. The response is then characterized by a given criterion. For example, an acceleration spectrum is defined by plotting

SRS_s(\omega) = x_s^{peak}(\omega)   (10)

versus \omega, the system's frequency, after the acceleration response has been obtained by integrating the equation

\ddot{x}(t) + 2\zeta\omega \dot{x}(t) + \omega^2 x(t) = I(t)   (11)

The purpose of SRS analysis is to avoid selecting modal characteristics that may result in significant response levels when designing a sub-component. A correlation metric based on SRS data is defined as

J(p) = \sum_{s \in sensors} \sum_{i \in design} \left( SRS_{test,s}(\omega_i) - SRS_s(\omega_i; p) \right)^2   (12)
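
A hedged sketch of this computation follows: for each oscillator frequency, equation (11) is integrated with scipy's linear-system simulator and the peak absolute response is recorded, per equation (10). The damping ratio and the choice of peak quantity are assumptions.

```python
import numpy as np
from scipy.signal import lsim

def srs(accel_input, dt, freqs_hz, zeta=0.05):
    """Shock response spectrum per eqs. (10)-(11): integrate a damped SDOF
    oscillator driven by the measured acceleration at each frequency and
    keep the peak absolute response (assumed damping ratio zeta)."""
    t = np.arange(len(accel_input)) * dt
    peaks = []
    for f in freqs_hz:
        w = 2.0 * np.pi * f
        # State-space form of  x'' + 2*zeta*w*x' + w^2*x = I(t)
        A = [[0.0, 1.0], [-w**2, -2.0 * zeta * w]]
        B = [[0.0], [1.0]]
        C = [[1.0, 0.0]]
        D = [[0.0]]
        _, y, _ = lsim((A, B, C, D), accel_input, t)
        peaks.append(np.max(np.abs(y)))
    return np.array(peaks)
```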


The advantage of the previous three features (RMS error, PCD and SRS) is that no assumption is made regarding the dynamics encountered. They apply to linear and nonlinear systems alike. The remaining features presented below assume specific model forms, from first-order to second-order representations. Thus, they are relevant to the analysis of linear systems only.

    ARMA-based features:

An auto-regressive, moving average (ARMA) model can always be best-fitted to the data, whether the system is linear or not. To do so, coefficients of the following linear combination are calculated

x_s(t) = \sum_{i=1}^{N_{AR}} \alpha_{si} \, x_s(t - i\Delta t) + \sum_{j=1}^{N_{MA}} \beta_{sj} \, F_s(t - j\Delta t)   (13)

Coefficients of the models obtained from test and analysis data can be compared to define the correlation metric. Another possibility is to employ the coefficients obtained by fitting the test data to predict the simulation response and to estimate the error between the predicted and actual simulation responses. These alternatives are illustrated in equations (14) and (15), respectively

J(p) = \sum_{s \in sensors} \sum_{i \in order} \left( \alpha_{test,si} - \alpha_{si}(p) \right)^2   (14)

J(p) = \sum_{s \in sensors} \sum_{t \in times} \left( \hat{x}_s(t) - x_s(t; p) \right)^2   (15)

    Frequency response functions:

The frequency response function (FRF) of a linear system is defined as the inverse of the dynamic stiffness matrix at a given forcing frequency \Omega

[H(\Omega)] = \left( [K] + j\Omega [D] - \Omega^2 [M] \right)^{-1}   (16)

Equation (16) constitutes the basic tool for calculating a model's FRF data between specified input and output locations. Similarly, a system's FRF can be identified from measurements by dividing the cross-correlation function of a given input-output pair by the input's auto-correlation function. The RMS error between the two sets of FRF curves can be formed as another metric for test-analysis correlation

J(p) = \sum_{i,j \in sensors} \sum_{k \in frequencies} \left( H_{ij}^{measured}(\Omega_k) - H_{ij}(\Omega_k; p) \right)^2   (17)
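
A short sketch of equation (16) for a small linear system is shown below; the mass, damping and stiffness matrices are hypothetical.

```python
import numpy as np

def frf(M, D, K, omegas):
    """Eq. (16): H(w) = (K + j*w*D - w^2*M)^-1 at each forcing frequency."""
    return np.array([np.linalg.inv(K + 1j * w * D - w**2 * M) for w in omegas])

# Hypothetical 2-DOF example with stiffness-proportional damping
M = np.eye(2)
K = 1.0e4 * np.array([[2.0, -1.0], [-1.0, 2.0]])
D = 0.002 * K
H = frf(M, D, K, 2 * np.pi * np.linspace(1.0, 300.0, 200))
print(H.shape)   # (200, 2, 2): one FRF matrix per frequency
```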

    ERA-based features:

Finally, a second-order, linear model can be formulated by representing the input-output FRF data as a superposition of modal contributions

H_{ij}(\Omega) = \sum_{k \in modes} \frac{\Phi_{ik} \Phi_{jk}}{\omega_k^2 + 2 j \zeta_k \omega_k \Omega - \Omega^2}   (18)

Numerous algorithms are available for time-domain or frequency-domain system identification, among which we cite the Eigensystem Realization Algorithm (ERA).13 Its advantage is that it can be automated to a large extent to extract the resonant frequencies \omega_k, modal damping ratios \zeta_k and mode shapes \{\Phi_k\} directly from the measured or predicted time signals. Then, various metrics can be defined as

J(p) = \sum_{k \in modes} \left( \frac{\omega_{test,k}^2 - \omega_k^2(p)}{\omega_{test,k}^2} \right)^2   (19)

J(p) = \sum_{k \in modes} \left( \frac{\zeta_{test,k} - \zeta_k(p)}{\zeta_{test,k}} \right)^2   (20)

J(p) = \sum_{s \in sensors} \sum_{k \in modes} \left( \Phi_{test,sk} - \Phi_{sk}(p) \right)^2   (21)

Figure 15 compares the six metrics defined by equations (4), (5), (12), (15), (17) and (19) for test-analysis correlation based on RMS error, PCD, SRS, ARMA, FRF and ERA features, respectively. This illustration is provided with the 8-DOF testbed because its measured responses are difficult to characterize (see Figure 6, where the responses look like random, zero-mean signals). Assessing the predictive quality of a numerical simulation is therefore a difficult task.

Figure 15. Comparison of various features used to refine the explicit FE model of the 8-DOF system.


In Figure 15, the horizontal axis represents 175 designs evaluated and compared to the test data during a parametric optimization procedure. The objective is to identify the best possible friction model. Here, "best model" refers to a design that minimizes a particular test-analysis correlation metric. It can be observed that only the SRS-based metric (12) segregates good from poor designs. This illustrates the importance of feature selection for test-analysis correlation and optimization.

    5.4. FAST PROBABILITY INTEGRATION

Fast probability integration (FPI) is used to efficiently propagate variability information during structural analysis. Our FPI capability relies on NESSUS (which stands for Numerical Evaluation of Stochastic Structures Under Stress), a software package for analyzing the reliability of mechanical systems that provides a practical way of propagating uncertainty throughout the calculations.14 Using a software package for reliability analysis to address test-analysis correlation is achieved by following the steps detailed below.

First, it is assumed that the model's random variables, collected in a vector {X}, are defined. These may include uncertain input forces, random parameters for material modeling, manufacturing tolerances, etc. We also define a response function Z, and the objective of the FE calculation is to estimate the value of Z for a given sample {X} of our random variables. Finally, a limit state function g(X) is defined that describes the correlation with test data. Success is defined if g(X) = 0, that is, if the response measured during the test is matched by the model in a probabilistic sense. It means that the problem of model validation consists of calculating the PDF or the cumulative density function (CDF) of the Z-response, respectively defined as

p_Z(a) = \mathrm{Prob}[Z = a], \qquad F_Z(a) = \mathrm{Prob}[Z \le a] = \int_{-\infty}^{a} p_Z(z) \, dz   (22)

The central aspect of FPI is the search for the most probable prediction (MPP) of the model in the presence of uncertainty. To obtain the MPP, the Z-response's joint PDF is maximized under the constraint g(X) = 0. This optimization is solved by converting the original variables {X} into standardized normal variables {u}, that is, variables described by the unit normal CDF

\Phi(u) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{u} e^{-s^2/2} \, ds   (23)

Once the MPP has been determined, the response surface can be explored to reconstruct the entire PDF or CDF. The transformation from {X} to {u} and its inverse are achieved using the Rosenblatt transform15

u = \Phi^{-1}\left( F_X(X) \right), \qquad X = F_X^{-1}\left( \Phi(u) \right)   (24)

An illustration is presented with the impact testbed discussed in Section 3.2. For this application, several 2D and 3D models are developed. Among the parameters varied are the type of elements used in the discretization; the size of the mesh; the type of contact conditions implemented; the material modeling; the preload applied when the center bolt is tightened; the angles of the steel impactor at impact; and the input acceleration. The two types of information obtained by FPI are illustrated in Figures 16 and 17. Here, we are interested in predicting the probability distribution of the peak acceleration at output sensor 1 at the time of impact. From Figure 16, it can be seen, for example, that the probability that the peak acceleration is less than 1,520 g's is equal to 90%.
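
NESSUS itself is not reproduced here. As a rough, brute-force stand-in for the FPI step, the sketch below maps standard normal samples to physical variables through their marginal CDFs (equation (24), assuming independent inputs) and estimates the peak-acceleration CDF by direct sampling of a user-supplied response function; the distributions and the response function are placeholders.

```python
import numpy as np
from scipy import stats

def sample_inputs(n, rng):
    """Map standard normal samples u to physical variables X via eq. (24),
    assuming independent marginals (placeholder distributions)."""
    u = rng.standard_normal((n, 2))
    preload = stats.lognorm(s=0.2, scale=500.0).ppf(stats.norm.cdf(u[:, 0]))
    modulus = stats.norm(loc=1.0, scale=0.05).ppf(stats.norm.cdf(u[:, 1]))
    return preload, modulus

def peak_response(preload, modulus):
    """Placeholder for the FE prediction of the peak acceleration (in g's)."""
    return 1500.0 * modulus + 0.05 * preload

rng = np.random.default_rng(0)
z = peak_response(*sample_inputs(10_000, rng))
print("Prob[Z <= 1520 g] ~", np.mean(z <= 1520.0))   # empirical CDF value
```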

Figure 16. CDF of the peak acceleration predicted by numerical simulation of the impact test.

Figure 17. Sensitivity of the CDF with respect to various random design parameters.


The second type of information obtained from FPI is sensitivity data for comparing the influence of each random variable. Figure 17 summarizes a study where the influence of five variables (impact velocity, foam thickness, foam density and parameters of the stress-strain, hyperfoam model) is investigated.

    5.5. OPTIMIZING STOCHASTIC MODELS

In this section, we discuss results obtained when the explicit, nonlinear FE simulations are optimized to match test data. The illustration provided in the remainder involves the impact testbed. When correlation with test data is not satisfactory, Z-response surfaces are used to generate fast-running models. These, in turn, provide the core of the parametric optimization algorithm that fine-tunes a subset of the model's design variables to improve the correlation with test data.

Figure 18 pictures a typical Z-response surface obtained with the 3D model: the two horizontal axes represent the values spanned by two parameters and the vertical axis represents the PCD cost function (5) on a log scale. For clarity, the surface is shown while only two of the seven optimization variables are varied. The complete set includes two coefficients of the hyperfoam material model; two angles of impact that simulate a small free-play in the alignment of the carriage and steel impactor; the bolt preload; the input acceleration scaling factor; and a numerical bulk viscosity parameter. A total of 1,845 FE models are analyzed to generate a fast-running model after having determined the approximate location of the MPP from probabilistic analysis.
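
As a simple illustration of the fast-running model idea, a least-squares quadratic response surface can be fitted to the cost values returned by the FE runs and then minimized in place of further FE analyses; the design points and cost values below are placeholders, not the 1,845 actual runs.

```python
import numpy as np
from scipy.optimize import minimize

def quad_features(P):
    """Quadratic polynomial basis in the design variables; P is (n_runs, n_vars)."""
    n, d = P.shape
    cross = [P[:, i] * P[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack([np.ones(n), P] + cross)

# Placeholder design points and cost values standing in for the FE results
rng = np.random.default_rng(1)
P = rng.random((200, 2))
J = 1.0 + (P[:, 0] - 0.3)**2 + 2.0 * (P[:, 1] - 0.7)**2 + 0.01 * rng.standard_normal(200)

coef, *_ = np.linalg.lstsq(quad_features(P), J, rcond=None)
surrogate = lambda p: (quad_features(np.atleast_2d(p)) @ coef).item()
best = minimize(surrogate, x0=[0.5, 0.5], bounds=[(0.0, 1.0), (0.0, 1.0)])
print(best.x)   # fast-running estimate of the best design
```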

Figure 18. Z-response surface. (The metric defined is the PCD cost (5) based on three accelerations.)

Figure 19 depicts the correlation before and after parametric optimization. A clear improvement of the model's predictive quality is witnessed, which, in turn, leads to a more accurate representation of the viscoelastic material. Note that the metric employed for optimizing the parameters (PCD, shown in Figure 18) is different from the correlation metric, which consists of comparing the time-domain acceleration signals. One difficulty is that of determining the optimal distribution of an input random variable. An analyst may be faced with this problem when no a priori information is available regarding the definition of a variable. The optimization of unknown distributions is still, to the best of our knowledge, an area of open research. Subjective probability and Bayesian belief networks may resolve this difficulty.16 They define an attractive framework for assessing the influence of prior distributions on posterior test-analysis correlation indicators.

    Figure 19. Correlation of the 3D model.

The last aspect of model validation addressed in this section consists of verifying that the optimized model is indeed correct. This is achieved by comparing predictions of various models to measured data sets for configurations different from the one used during FPI and optimization. For example, the 3D models are optimized using the thin pad/low impact velocity setup. Then, the 2D, axi-symmetric models are verified with the thick pad/low impact velocity setup.

In Figure 20, predictions of the original and final 2D models are compared to test data measured during a low-velocity impact using the 0.25 in. (6.3 mm) thick foam pad. In Figure 21, the response of a 0.50 in. (12.6 mm) thick foam pad is shown. Despite small oscillations attributed to numerical noise generated by the contact algorithm, the models predict the acceleration levels measured during the test. We believe that these independent checks constitute the only valid indication that the modeling is correct.


Figure 20. Verification of the predictions: response of a thin pad (1/4 in., 6.3 mm).

Figure 21. Verification of the predictions: response of a thick pad (1/2 in., 12.6 mm).

    5.6. STATISTICAL HYPOTHESIS TESTING

One of the open research issues that this work has identified is the problem of establishing a correlation between multiple data sets. By this we mean assessing the degree to which two populations are consistent with each other. Our literature review seems to indicate that tools for assessing the distance between multiple data sets are not readily available in the context of statistical correlation and multivariate analysis.

This difficulty is illustrated in Figure 22. It shows the peak acceleration values for channels 1 and 2 of the impact testbed plotted against each other. The data of ten independent, identical tests are shown together with simulation results from two different models. For each one of the two models, a particular design is generated by varying the angles of impact and the bolt preload. Then, each design is analyzed using the ten different input acceleration signals measured. The three ellipsoids shown in Figure 22 illustrate the 95% confidence intervals for the test data and the two models. The predictive quality of one of the models is better because most of its data points (68 of 100) fall within the 95% confidence interval of the test data. The other model predicts 34 of 100 points within the test's 95% confidence interval.
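
One way to reproduce counts of this kind is to form the test population's 95% confidence ellipse from its sample mean and covariance and count the model points whose Mahalanobis distance falls below the corresponding chi-square threshold; the sketch below assumes a two-feature, approximately Gaussian test population.

```python
import numpy as np
from scipy.stats import chi2

def fraction_inside(test_pts, model_pts, level=0.95):
    """Fraction of model points inside the test data's confidence ellipse.
    Both arrays have shape (n_points, n_features)."""
    mu = test_pts.mean(axis=0)
    S_inv = np.linalg.inv(np.cov(test_pts, rowvar=False))
    diff = model_pts - mu
    d2 = np.einsum("ij,jk,ik->i", diff, S_inv, diff)   # Mahalanobis distances
    return np.mean(d2 <= chi2.ppf(level, df=test_pts.shape[1]))
```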

Figure 22. Comparison of test and analysis data in a two-feature space. (The 2D space represents the peak accelerations measured or predicted at sensors 1 and 2.)

By inspection of Figure 22, it is apparent that the peak magnitudes of measured accelerations 1 and 2 are uncorrelated because the 95% confidence interval is nearly circular. Thus, we suspect that one of the greater sources of variability is a source that affects the channels differently. This conclusion, however, is not confirmed by data generated from the two models. A better illustration is provided in Figure 23, where the joint CDF interpolated from test data is compared to the CDFs of the two models. It illustrates the disagreement between the test data and the simulations.

Figure 23. Comparison of cumulative density functions for the test data and two simulations.


This example illustrates that plotting several features against each other defines a powerful analysis tool. Unfortunately, higher-order graphics are difficult to represent, therefore requiring quantitative indicators of the model's fit to test data. Such statistical consistency can be assessed using a standard, multivariate Hotelling's T² test. First, statistics are calculated from the distributions of features. In the following, the vectors of mean values are denoted by {\mu} and covariance matrices are denoted by [\Sigma]. Hotelling's T² test states that the mean vector of the model features is an estimate of the mean vector of the test features to the (100-\alpha)% confidence level if

\left( \{\mu_{test}\} - \{\mu(p)\} \right)^T [\Sigma_{test}]^{-1} \left( \{\mu_{test}\} - \{\mu(p)\} \right) \;\le\; \frac{N_p (N_s - 1)}{N_s (N_s - N_p)} \, F_\alpha(N_p, N_s - N_p)   (25)

Applied to the data shown in Figures 22-23 and characterized by Np = 2 features and Ns = 100 samples, the statistic (25) sets the acceptance ratio to 1.0035 at the 95% confidence level. The Mahalanobis distance in the left-hand side of equation (25) is equal to 4.0 for the first model, which clearly indicates that it fails the test. The Mahalanobis distance of the second model is equal to 0.2. This establishes that the mean response predicted by our second model has converged. It can alternatively be stated that we are 95% certain that the average peak accelerations predicted by this model are consistent with test data, given the sources of variability of the experiment and given the sources of uncertainty of the model. However, this conclusion remains of limited practical use for model validation as long as the variance of the population has not converged as well.
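
A sketch of evaluating both sides of equation (25) is given below; the sample statistics passed to the function are placeholders, and the threshold is taken directly from the F-distribution as written in the equation.

```python
import numpy as np
from scipy.stats import f as f_dist

def hotelling_consistent(mu_test, S_test, mu_model, n_s, alpha=0.05):
    """Evaluate eq. (25): Mahalanobis distance of the model mean from the
    test mean versus the F-distribution acceptance threshold."""
    n_p = len(mu_test)
    diff = np.asarray(mu_test) - np.asarray(mu_model)
    d2 = diff @ np.linalg.inv(S_test) @ diff
    threshold = ((n_p * (n_s - 1)) / (n_s * (n_s - n_p))
                 * f_dist.ppf(1.0 - alpha, n_p, n_s - n_p))
    return d2, threshold, d2 <= threshold
```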

One of the only possibilities available for testing both the mean and the variance is to calculate the Kullback-Leibler relative entropy, defined as the expected value of the logarithm of the ratio between the PDFs of the two populations

I(\mathrm{Model} \,\|\, \mathrm{Test}) = E\!\left[ \log \frac{p_Z^{model}(a)}{p_Z^{test}(a)} \right]   (26)

If the features {Z} used are normally distributed, or if enough data points are available to justify the application of the central limit theorem, the relative entropy can be approximated using Gaussian PDFs to represent the test and analysis distributions. Then, this entropy may be used for assessing the consistency between two populations of features and for optimizing the parameters of a statistical model. We emphasize that the computational requirement associated with this procedure may become important, since the probability distribution of each feature considered for test-analysis correlation must be assessed for each candidate design evaluated during the optimization. This, however, is the only possibility to guarantee at a given confidence level that the numerical simulation is validated in the context of uncertainty propagation. For all practical purposes, the normal approximation to definition (26) is stated as

I(\mathrm{Model} \,\|\, \mathrm{Test}) \approx \frac{1}{2} \mathrm{Trace}\!\left( [\Sigma_{test}]^{-1} [\Sigma(p)] \right) - \frac{1}{2} N_p - \frac{1}{2} \log \frac{\det [\Sigma(p)]}{\det [\Sigma_{test}]} + \frac{1}{2} \left( \{\mu_{test}\} - \{\mu(p)\} \right)^T [\Sigma_{test}]^{-1} \left( \{\mu_{test}\} - \{\mu(p)\} \right)   (27)
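
Equation (27) can be evaluated directly from the feature statistics; a short sketch follows, with hypothetical mean vectors and covariance matrices.

```python
import numpy as np

def gaussian_kl(mu_model, S_model, mu_test, S_test):
    """Normal approximation of the relative entropy, eq. (27)."""
    S_test_inv = np.linalg.inv(S_test)
    diff = np.asarray(mu_test) - np.asarray(mu_model)
    n_p = len(diff)
    return 0.5 * (np.trace(S_test_inv @ S_model) - n_p
                  - np.log(np.linalg.det(S_model) / np.linalg.det(S_test))
                  + diff @ S_test_inv @ diff)

# Hypothetical two-feature statistics for a model and the test population
print(gaussian_kl([1500.0, 900.0], np.diag([40.0**2, 30.0**2]),
                  [1520.0, 910.0], np.diag([35.0**2, 28.0**2])))
```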

An illustration is provided in Figure 24, where the value of the Kullback-Leibler entropy is reported for four different models (the original material model of the impact testbed and three others obtained after successive optimization steps). Almost two orders of magnitude separate the entropy of the original and final models. This demonstrates the efficiency of this statistical indicator for characterizing the predictive quality of a model based on multivariate data features.

Figure 24. Evolution of the Kullback-Leibler entropy when optimizing the hyperfoam material.

Unfortunately, statistical tests for verifying a pass/fail hypothesis based on the relative entropy (27) are not available in the general case. This limitation is currently being addressed by investigating the efficiency of conventional hypothesis testing.


    CONCLUSION

In this publication, a general framework is proposed for validating numerical models for nonlinear, transient dynamics. To bypass difficulties identified when applying test-analysis correlation methods to nonlinear vibration data, inverse problems are replaced with multiple forward, stochastic problems. After a metric has been defined for comparing test and analysis data, response surfaces are generated that can be used for assessing, in a probabilistic sense, the quality of a particular simulation with respect to reference or test data and for optimizing the model's design parameters to improve its predictive quality. Data sets from several experiments conducted at Los Alamos National Laboratory in support of our code validation and verification program are used to illustrate the advantages and drawbacks of this approach. Several directions of research are stated throughout this work. One of them is to implement methods of statistical hypothesis testing to assess the consistency between test data and numerical simulations using multivariate test-analysis correlation. Combining the parametrized uncertainty approach with the estimation of the experiment's total uncertainty is also a direction that may be pursued in the future. Finally, we mention the demonstration of the entire procedure with a complex experiment during which nonlinear structural systems are subjected to transient, explosive loading.

    REFERENCES

1 Belvin, W.K., "Second-Order State Estimation Experiments Using Acceleration Measurements," AIAA Journal of Guidance, Control and Dynamics, Vol. 17, No. 1, Jan.-Feb. 1994, pp. 97-103.

2 Doebling, S.W., Farrar, C.R., Prime, M.B., Shevitz, D.W., "Damage Identification and Health Monitoring of Structural and Mechanical Systems From Changes in Their Vibration Characteristics: A Literature Review," Report #LA-13070-MS, Los Alamos National Laboratory, Los Alamos, NM, May 1996.

3 Hemez, F.M., Doebling, S.W., "Test-Analysis Correlation and Finite Element Model Updating For Nonlinear, Transient Dynamics," 17th SEM International Modal Analysis Conference, Kissimmee, FL, Feb. 8-11, 1999, pp. 1501-1510.

4 Hemez, F.M., Doebling, S.W., "Validation of Nonlinear Modeling From Impact Test Data Using Probability Integration," 18th SEM International Modal Analysis Conference, San Antonio, TX, Feb. 7-10, 2000, to be published.

5 Hemez, F.M., Doebling, S.W., "A Validation of Bayesian Finite Element Model Updating For Linear Dynamics," 17th SEM International Modal Analysis Conference, Kissimmee, FL, Feb. 8-11, 1999, pp. 1545-1555.

6 Abaqus/Explicit User's Manual, Version 5.8, Hibbitt, Karlsson & Sorensen, Pawtucket, RI, 1997.

7 Instruction Manual For the Lansmont Model 610 Shock Test Machine, Version 1.0KJG, Lansmont Corp., Pacific Grove, CA.

8 Mook, D.J., "Estimation and Identification of Nonlinear Dynamic Systems," AIAA Journal, Vol. 27, No. 7, July 1989, pp. 968-974.

9 Dippery, K.D., Smith, S.W., "An Optimal Control Approach to Nonlinear System Identification," 16th SEM International Modal Analysis Conference, Santa Barbara, CA, Feb. 2-5, 1998, pp. 637-643.

10 Matlab User's Manual, Version 5.3, The MathWorks, Inc., Natick, Massachusetts, 1999.

11 Lutz, M., Programming Python, O'Reilly & Associates, Inc., 1996.

12 Hasselman, T.K., Anderson, M.C., Wenshui, G., "Principal Components Analysis For Nonlinear Model Correlation, Updating and Uncertainty Evaluation," 16th SEM International Modal Analysis Conference, Santa Barbara, CA, Feb. 2-5, 1998, pp. 664-651.

13 Juang, J.N., Applied System Identification, Prentice Hall, Englewood Cliffs, NJ, 1994.

14 NESSUS User's Manual, Version 2.3, Southwest Research Institute, San Antonio, TX, 1996.

15 Rosenblatt, M., "Remarks on a Multivariate Transformation," The Annals of Mathematical Statistics, Vol. 23, No. 3, 1952, pp. 470-472.

16 Hanson, K.M., "A Framework For Assessing Uncertainties in Simulation Predictions," Physica D, Elsevier Science, May 1999, to be published.