
Corus UK Limited
Swinden Technology Centre
Moorgate
Rotherham S60 3AR
Telephone: (01709) 820166
Fax: (01709) 825337

Methods, Applications and Software for Structural Reliability Assessment

Report No.: SL/WEM/R/M8663/5/01/C
Date: 21 August 2001
Classification: OPEN

CONTENTS

SUMMARY

1. INTRODUCTION
   1.1 Background
   1.2 Historical Development of Methods
   1.3 Scope of Review

2. BASIC CONCEPTS
   2.1 Definitions and Acceptance of Risk
   2.2 Failure Modes

3. QUANTIFICATION OF RELIABILITY
   3.1 Hierarchy of Structural Reliability Methods
   3.2 Limit States and Definitions
   3.3 Types of Uncertainties
   3.4 Types of Analysis

4. REVIEW OF APPLICATIONS IN DIFFERENT INDUSTRIES
   4.1 Overview
   4.2 Nuclear
   4.3 Offshore Structures
   4.4 Transport
   4.5 Bridges and Buildings
   4.6 Power, Process and Chemical Plant
   4.7 Pipelines

5. PROBABILISTIC TREATMENT OF FRACTURE AND COLLAPSE
   5.1 Description of Failure Assessment Diagram
   5.2 Status of Current FAD-Based Approaches
   5.3 Inherent Safety Level of FAD Approach and Use of Partial Safety Factors
   5.4 Model Uncertainty in the Failure Assessment Diagram
   5.5 Probabilistic Treatment of Failure Assessment Diagram

6. TARGET RELIABILITY LEVELS IN DIFFERENT CODES AND INDUSTRIES
   6.1 Overview
   6.2 Quantifying Societal Consequence
   6.3 Treatment of Consequence in Three Major Codes
   6.4 Comparison of Target Reliability Levels in Different Industries

7. SOFTWARE FOR RELIABILITY ANALYSIS
   7.1 Scope
   7.2 STRUREL
   7.3 ProSINTAP
   7.4 CALREL
   7.5 PROBAN
   7.6 COMPASS
   7.7 NESSUS
   7.8 ISPUD and COSSAN
   7.9 STAR 6
   7.10 UMFRAP
   7.11 FORM and MONTE

8. FURTHER DEVELOPMENT OF PROBABILISTIC METHODS
   8.1 Generic Methods
   8.2 Failure Assessment Diagrams
   8.3 Distributions of Material Properties
   8.4 Reduction of Data Uncertainty

9. CONCLUSIONS

REFERENCES

TABLES

FIGURES

APPENDIX 1  STRUCTURE OF VARIOUS RELIABILITY SOFTWARE PACKAGES


    SUMMARY

METHODS, APPLICATIONS AND SOFTWARE FOR STRUCTURAL RELIABILITY ASSESSMENT

    S.E. Webster and A.C. Bannister

Changes in legislation, the trend to life extension and increasing computing power have led to an increase in the use of reliability methods in many industrial sectors. The advantages of these approaches are that overdesign can be avoided, uncertainties can be handled in a logical way, sensitivity to variables can be assessed and a more rational basis for decision making can be followed. The methods have been extensively applied in the nuclear, offshore, rail, shipping, aerospace, bridge, building, process plant and pipeline industries. Failure processes that can be addressed include fracture, collapse, fatigue, creep, corrosion, bursting, buckling, third party damage, stress corrosion and seismic damage.

In this report, basic concepts of risk, reliability and consequences are first introduced. The types of failure modes that can be addressed probabilistically are then described with reference to global and local effects and time-dependency. Types of calculation methods are covered, with emphasis on Monte-Carlo Simulation and the First Order Reliability Method, and the sources and treatment of uncertainty are described.

A review of codes providing guidance on target reliability levels related to consequence of failure, and of industry practice in defining acceptable failure probability, is then presented. The levels generally depend on the reliability of the input data, the consequences of failure and the cost of reducing the risk. The capabilities of various commercial and development software packages are assessed; a range of reliability analysis software is available for general applications, covering any failure mode, and also for fracture-specific applications.

Current trends include refinement of calculations of risk throughout a structure's lifetime: 'reliability updating' coupled with structural health monitoring with sensors enables real-time reliability status to be defined. Risk consideration as a primary input in component/structure design is becoming more widespread, and the use of the methods for optimisation of materials selection, design and cost is increasing. Future developments include design and material selection optimisation through reliability methods, interaction of failure modes to more accurately reflect real materials' behaviour, standardisation of consequence scenarios, increased use of time-dependent reliability analysis and benchmarking of methods and software.


METHODS, APPLICATIONS AND SOFTWARE FOR STRUCTURAL RELIABILITY ASSESSMENT

    1. INTRODUCTION

    1.1 Background

There are numerous sources of uncertainty in structural design, and the absolute safety of a structure cannot be guaranteed due to the unpredictability of future loading, variations of material properties as they exist in the structure, simplifications to analysis methods for predicting behaviour, and human factors. However, the risk of a failure with unacceptable consequences can be reduced to an acceptably low level; estimation of this level of risk is the subject of this report.

The advantages of a reliability approach are twofold. First, it enables uncertainties to be handled in a rational and logical way in design and assessment; in particular, it enables the sensitivity of the result to uncertainty in the various design variables to be determined. Secondly, while decisions are seldom clear cut and are never perfect, it provides a more rational basis for decision making than a purely deterministic analysis.

The fundamental concept of reliability analysis is that resistance and load factors are statistical quantities with a central tendency (mean), dispersion about the mean (variance) and some form of distribution (probability density function, e.g. Normal). When these are combined via an expression describing the limit state (such as fracture or collapse), there is a finite probability that the load will exceed the resistance; this defines the probability of failure (Pf) and, since reliability equals 1 - Pf, the inherent reliability of the component against a particular failure mode, with given resistance properties, is defined. The basic definition is shown in Fig. 1.
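In symbols (a standard formulation of the overlap shown in Fig. 1): if the resistance R has probability density f_R and cumulative distribution F_R, and the load effect S has density f_S, then

\[ P_f = P(R \le S) = \int_{-\infty}^{\infty} F_R(s)\, f_S(s)\, \mathrm{d}s, \qquad \text{Reliability} = 1 - P_f . \]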

The use of probabilistic methods in structural design and analysis has grown rapidly in the past five years, in parallel with increasing computing power. There is now general agreement on the philosophy behind the use of probabilistic methods in decision making, methods of uncertainty modelling are accepted and being unified, and numerical techniques have been developed to compute failure probabilities and sensitivity factors efficiently(1).

Probabilistic methods were originally used for the calibration of safety factors in structural codes and technical standards. One of the first calibrations was for the 1974 Canadian Standards Association offshore code, and since then almost all major codes for land-based and offshore structures have been developed through a formal calibration process involving some element of probabilistic analysis. In recent years probabilistic methods have also been used directly in design to account for various failure modes for which there was little previous experience, for very costly structures, or for those with very large failure consequences. These methods are now being extended to cover time-dependent failure modes and to link component reliability with system reliability.

Recently, probabilistic methods have been further developed to account for new information becoming available after the design stage, a process known as reliability updating. Such information may come from fabrication, such as control of materials and welding, or from service experience, where inspection and monitoring provide important additional information. With this additional information much of the uncertainty present at the design stage is removed, and improved decisions on repair, strengthening, inspection planning and change of use can be made in a quantitative manner which would not be possible if based only on deterministic design information. This has particular relevance for fatigue-loaded structures such as bridges and offshore structures.

    1.2 Historical Development of Methods


While probabilistic methods can be applied to any aspect of structural design or operation, it is their use in failure prevention and safety analyses which is the subject of this review. Initial concepts in this area of probabilistic fracture mechanics were developed in the nuclear and offshore industries in the 1980s, applications which have a very high consequence of failure associated with them. More recently these methods have begun to be used in more conventional structures, and guidance now exists in many design and integrity analysis codes. This may be in the form of a direct reference to such methods, their use to derive partial safety factors, or their application to maintenance and inspection. Public perception and understanding of risk, the associated role of regulatory bodies and the necessity for a common basis on policy where safety is an issue have further strengthened the move to reliability-based methods(2).

The advantage of such methods in integrity analyses is that the use of pessimistic assumptions for data inputs is avoided, and the compounding effect of such assumptions can be minimised. This compounding effect often makes the results of deterministic analyses very conservative, leading to a lack of credibility in their results. The methods can be applied to any mode of failure (e.g. fracture, collapse, fatigue, corrosion, creep and buckling) provided that the limit state can be described by one or more equations and that one or more of the variables in those equations is statistically distributed.

    1.3 Scope of Review

Basic concepts of risk, reliability and consequences are first introduced. The types of failure modes that can be addressed probabilistically are then described with reference to global and local effects and time-dependency. Types of calculation methods are covered, with emphasis on Monte-Carlo Simulation and the First Order Reliability Method, and the source and treatment of uncertainty are described.

The general treatment in various industries of fracture, fatigue, corrosion and high temperature failure is then covered. The probabilistic definition of the failure assessment diagram (FAD) is addressed in some detail, and a review of codes providing guidance on target reliability levels related to consequence of failure is summarised. Finally, the capabilities of various commercial and development software packages are presented, followed by a view on future developments and conclusions.

    2. BASIC CONCEPTS

    2.1 Definitions and Acceptance of Risk

For structural applications, the probability of failure is assessed in the context of 'consequence of failure' such that 'risk' can be defined as:

Risk = Probability × Consequence

A high probability of failure can be accepted where the consequences of that failure are low; conversely, a high consequence of failure must be allied to a low probability of occurrence. Societal and governmental acceptance of risk dictates that different industries and structures will have different combinations of probability of failure and consequence of failure, Fig. 2. There are also governmental targets for what is considered to be negligible risk, unacceptable risk and a region in between where risk is treated in terms of 'ALARP' (As Low As Reasonably Practicable). High risk can be mitigated either by reducing the frequency of occurrence or by reducing the consequence.
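As a purely illustrative numerical example of the risk relation (the figures are assumed here, not taken from this report): a component with Pf = 10⁻⁴ per year and a monetised failure consequence of £10⁶ carries a risk, in expected-loss terms, of

\[ \mathrm{Risk} = 10^{-4}\,\mathrm{yr}^{-1} \times £10^{6} = £100 \text{ per year}. \]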


The interpretation of failure probability must be made in the context of the type of structure or component(3). Mass-produced components (pumps, valves, electrical devices) can be assessed in terms of failure frequency, or time to failure, due to the numbers involved and the fact that they generally comprise parts which wear out, rather than fail by some unexpected or complex mechanism which may involve human factors. In contrast, engineering structures tend to be unique in their structural form and location and are subjected to a range of operating conditions which can cause failure by one failure mode or a combination of many.

The concept used for structures is therefore to sample from the input distributions many times and theoretically create similar structures under the full range of operating conditions. For the case of an existing structure, information can be gained on its behaviour and used to refine the calculations of risk, a form of reliability updating which is not possible with newly designed structures. The assessed reliability is not solely a function of the structure itself but is also dependent on the amount and quality of information available for the structure(4).

The perception and acceptance of risk depend on the level of understanding of the particular activity or structure, the level of confidence in the source of information and the freedom of choice that an individual has; it would normally be expected that if a definite choice was made, then a higher level of risk would be tolerated. A summary of certain societal risks and broad indicators of what is considered tolerable is given in Tables 1 and 2(5). Failure rates for populations of buildings and bridges(5) are given in Table 3 for comparison.

    2.2 Failure Modes

The general concept behind all probabilistic methods is that some or all of the inputs contain inherent uncertainty, and these can combine to give an uncertainty rating for structural performance. For general structural assessment purposes it is standard practice to assess safety by comparison of load and resistance effects using established design rules to predict the likelihood of failure. Where there are uncertainties in the input variables, or scatter in the material properties, reliability-based methods can be employed to determine the probability that the load effects will exceed the resistance effects. Inherent scatter in a material property will affect the failure probability; it is therefore not only the mean value of a property which is important, as in deterministic analysis, but also its variance and the type of distribution used to represent the dataset.

Depending on the failure mode, material properties, temperature, geometry and loading will influence the reliability of the component. It is more usual to assess failure modes which contribute to the ultimate limit states rather than serviceability limit states. These include yielding, fracture, fatigue, creep, corrosion, stress-corrosion cracking, bursting and buckling.

These can be divided firstly into those which act only at a crack tip and those which act globally, and secondly into those which have a time element associated with them (time-variant) and those which are time-invariant. This leads to a 2 x 2 matrix, Fig. 3. However, since human factors also play a major role in risk, structural reliability should also acknowledge, if not quantify, those factors which are not directly incorporated in the calculation procedure but will affect risk; this adds a third dimension to the matrix. An example of the complex interrelationship between these different factors is shown in Fig. 4(6).

A schematic of a reliability analysis for failure of a corroded pipe is shown in Fig. 5(7). This shows the interaction of materials and operating data, together with inspection data and model uncertainties, and illustrates the range of inputs needed for a typical reliability analysis.

    3. QUANTIFICATION OF RELIABILITY


3.1 Hierarchy of Structural Reliability Methods

The underlying principles for reliability analysis were defined in the 1950s by Pugsley and Freudenthal(8,9). The subsequent evolution of methods was initially slow, first order methods being developed in the 1970s, and only in the 1980s and 1990s were the methods extended to structural systems. The reasons for this include(3):

The traditional approach of overdesign, which carries relatively low cost penalties at the design stage.

    The priority for understanding modes of failure rather than risk of failure.

    Probabilistic methods were not considered relevant in traditional engineering.

    The increased use of risk-based approaches is thought to be due to:

Changes in safety legislation leading to the need to quantify risk.

The trend of life extension of existing plant and structures, many of which do not meet the requirements of current codes.

    Increased experience with probabilistic approaches and increased computing power.

    The potential cost savings which can be made when applying risk-based methods.

    It is generally accepted that reliability methods can be characterised into one of 4 levels:

    Level 1 uses partial safety factors to imply reliability and is used in simple codes.

Level 2 is known as the second moment, or First Order Reliability, Method (FORM). The random variables are defined in terms of means and variances and are considered to be Normally distributed. The measure of reliability is based on the reliability index β. In advanced Level 2 methods the design variables can have any form of probability distribution.

Level 3 methods use multi-dimensional joint probability distributions. System effects and time-variance may be incorporated. They include numerical integration and simulation techniques.

Level 4 includes any of the above, together with economic data for prediction of maximum benefit or minimum cost.

All methods are approximate, and the problems become more difficult as the number of random variables and the complexity of the limit state function increase, and when statistical dependence between random variables is present. The asymptotic approximate methods such as FORM are the most suitable for a large variety of structural reliability problems, although simulation methods are useful as complementary methods. A summary of this hierarchy is given in Table 4; in the present report most attention is given to the methods of Monte-Carlo Simulation (MCS) and the advanced First Order Reliability Method (FORM).

    3.2 Limit States and Definitions


The fundamental notion is the limit state function, which gives a discretised assessment of the state of a structure or structural element as being either failed or safe(10). The limit state function is obtained from traditional deterministic analysis, but uncertain input parameters are identified and quantified, as shown for a pipeline analysis in Table 5(17). Interpretation of what is considered to be an acceptable failure probability is made with consideration of the consequences of failure, which can be societal, environmental or financial.

    The general case of reliability, shown in Fig. 6, enables definition of the following parameters:

    Safety Margin.

    Limit state.

    Probability of Failure (Pf).

    Reliability.

Reliability index (β).

The limit state, M, is a function of material properties, loads and dimensions; M > 0 represents safety, M < 0 represents failure, and M = 0 defines the limit state itself.
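For the simplest case of a resistance R and a load effect S, these definitions take the standard form

\[ M = R - S, \qquad P_f = P(M < 0), \]

and, for independent Normally distributed R and S,

\[ \beta = \frac{\mu_M}{\sigma_M} = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad P_f = \Phi(-\beta), \]

where Φ is the standard Normal cumulative distribution; this is the β-Pf relationship referred to in Fig. 7.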

3.3 Types of Uncertainties

There will always be a gap between predicted and experienced risks; this gap is generally 1 to 3 orders of magnitude(11), although accounting for modelling uncertainty alone in fracture analyses gives a difference of one order of magnitude(12). For these reasons, predicted reliability levels are best referred to as notional, rather than absolute, levels and are better suited to comparison purposes.

    3.4 Types of Analysis

    3.4.1 Simulation v Transformation Methods

In simulation methods, a number of random samples are made and the probability determined by simple ratios; in transformation methods, the integrand is transformed into a standard type of distribution which can then be analysed using the particular properties of that distribution. The decision on relevant failure modes and their limit states is common to both classes of analysis, as is interpretation of the consequences of failure. The methods differ in the middle step of determining the failure probability from the distributions of applied and resistance factors.

    3.4.2 Monte-Carlo Simulation (MCS)

MCS is a relatively simple method which uses the fact that the failure probability can be expressed as the mean value of the result of a large number of random combinations of input data. An estimate is therefore given by averaging a suitably large number of independent outcomes (simulations) of this experiment.

The basic building block of this sampling is the generation of random numbers from a uniform distribution. Simple algorithms repeat themselves after approximately 2 × 10³ to 2 × 10⁹ simulations and are therefore not suitable for calculating medium to small failure probabilities.

Once a random number u, between 0 and 1, has been generated, it can be used to generate a value of the desired random variable with a given distribution. A common method is the inverse transform method. To calculate the failure probability, one performs N deterministic simulations and for every simulation checks whether the component analysed has failed. If the number of failures is NF, an estimate of the mean probability of failure is the ratio of NF to N. A schematic of the MCS method is shown in Fig. 8.
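As a concrete illustration of this estimator, the following minimal Python sketch estimates Pf as NF/N for a simple R - S margin; the Normal distributions and their parameters are illustrative assumptions, not values from this report:

    import numpy as np
    from math import erfc, sqrt

    rng = np.random.default_rng(seed=1)

    N = 1_000_000
    # Illustrative (assumed) input distributions:
    R = rng.normal(350.0, 25.0, N)   # resistance, e.g. strength (MPa)
    S = rng.normal(250.0, 30.0, N)   # load effect (MPa)

    M = R - S                        # safety margin; failure when M < 0
    NF = np.count_nonzero(M < 0.0)   # number of failed simulations
    Pf = NF / N                      # estimate of mean failure probability

    beta = (350.0 - 250.0) / sqrt(25.0**2 + 30.0**2)
    print(f"MCS Pf = {Pf:.2e}; exact Phi(-beta) = {0.5 * erfc(beta / sqrt(2)):.2e}")

For this linear Normal case the exact answer (about 5.2 × 10⁻³) is available for comparison; for realistic, non-linear limit states only the simulation estimate is available.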

An advantage of MCS is that it is robust and easy to implement in a computer program, and for a sample size tending to infinity the estimated probability converges to the exact result. Another advantage is that MCS works with any distribution of the random variables and there are no restrictions on the limit state functions.

However, MCS is rather inefficient when calculating failure probabilities, since most of the contribution to Pf comes from a limited part of the integration interval. In addition, for very low failure probabilities, a large number of simulations is required for the result to converge to the actual value; in these cases FORM is preferred, or the method of Importance Sampling (MCS-IS) can be used.

    3.4.3 Monte-Carlo Simulation with Importance Sampling (MCS-IS)

MCS-IS is an algorithm that concentrates the samples in the most important part of the integration interval. Instead of sampling around the mean values, as in MCS, sampling is made around the most probable point of failure. This point, called the MPP, is generally evaluated using information from a FORM/SORM analysis, and as such MCS-IS has limited application except for cases where convergence in FORM cannot be achieved due to complexity of the limit state.
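A minimal sketch of the importance-sampling estimator, reusing the illustrative one-dimensional Normal margin from the MCS sketch above and placing the MPP at m = 0 (in practice the MPP would come from a FORM/SORM analysis, as noted):

    import numpy as np

    rng = np.random.default_rng(seed=1)

    def norm_pdf(x, mu, sd):
        # Normal probability density function
        return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

    # One-dimensional margin M ~ Normal(100, sqrt(1525)) (illustrative
    # values, as in the MCS sketch); failure when m < 0.
    mu, sd = 100.0, 1525.0 ** 0.5

    N = 20_000
    m = rng.normal(0.0, sd, N)               # sample around the MPP, not the mean

    w = norm_pdf(m, mu, sd) / norm_pdf(m, 0.0, sd)   # importance weights f/h
    Pf = np.mean((m < 0.0) * w)
    print(f"MCS-IS Pf = {Pf:.2e}")           # close to 5.2e-3 with far fewer samples

Because roughly half of the samples now land in the failure region, a few thousand simulations give an estimate that plain MCS would need of the order of a million samples to match.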


3.4.4 First Order Reliability Method (FORM)

FORM uses a combination of analytical and approximation methods and comprises three stages. Firstly, regardless of whether each parameter has been defined with a Normal, log-Normal or Weibull distribution, all variables are transformed into equivalent Normal space with zero mean and unit variance, and the original limit state surface is mapped onto the new limit state surface. Secondly, the shortest distance between the origin and the limit state surface, termed the reliability index β, is evaluated; the nearest point on the surface is termed the design point, or point of maximum likelihood, and gives the most likely combination of basic variables to cause failure. Finally, the failure probability associated with this point is calculated via the relationship between β and Pf. This is shown schematically for the case of a linear safety margin in Fig. 9. For non-linear limit states, the failure surface is linearised at the design point, Fig. 10, the error in β depending on the non-linearity of the function at this point.

The variables are transformed into equivalent Normal variables in standard Normal space (mean = 0 and standard deviation = 1), which gives the joint probability density function as the standardised multivariate Normal, which has many useful properties. This is known as the Hasofer-Lind transformation(14), and by its application the original limit state surface g(x) = 0 is mapped onto the new limit state surface gU(u) = 0. Calculation of the shortest distance between the origin and the limit state surface, β, requires an appropriate non-linear optimisation algorithm. A modified Rackwitz and Fiessler(15) algorithm, which works by damping the gradient contribution of the limit state function, is used as the default in most reliability analyses; it is robust and converges quite quickly for most cases. Finally, the failure probability is calculated using an approximation of the limit state surface at the most probable point of failure, using the relationship shown in Fig. 7.
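The following Python sketch shows a basic, undamped Hasofer-Lind/Rackwitz-Fiessler iteration of the kind described above (production algorithms add the gradient damping mentioned in the text; the linear limit state and its coefficients are illustrative assumptions, continuing the R - S example used earlier):

    import numpy as np
    from math import erfc, sqrt

    def form_hlrf(g, grad_g, n_dim, tol=1.0e-8, max_iter=100):
        # Basic (undamped) Hasofer-Lind / Rackwitz-Fiessler iteration in
        # standard Normal space; returns the reliability index beta and
        # the design point u*.
        u = np.zeros(n_dim)                        # start at the origin
        for _ in range(max_iter):
            G, dG = g(u), grad_g(u)
            u_new = (dG @ u - G) * dG / (dG @ dG)  # project onto linearised surface
            if np.linalg.norm(u_new - u) < tol:
                return np.linalg.norm(u_new), u_new
            u = u_new
        return np.linalg.norm(u), u

    # Illustrative linear margin already in standard Normal space:
    # g(u) = 100 + 25*u1 - 30*u2 (the M = R - S example used earlier).
    g = lambda u: 100.0 + 25.0 * u[0] - 30.0 * u[1]
    grad_g = lambda u: np.array([25.0, -30.0])

    beta, u_star = form_hlrf(g, grad_g, n_dim=2)
    Pf = 0.5 * erfc(beta / sqrt(2.0))              # Pf = Phi(-beta)
    print(f"beta = {beta:.3f}, Pf = {Pf:.2e}")     # beta = 2.561 for this case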

FORM is more efficient than MCS in terms of computing time, and accurate results can be obtained even when the failure probability is low. All the random parameters must, however, be continuous, and large errors can result if there are local minima in the limit state or high non-linearity at the design point(16). Despite these limitations, FORM is the most popular reliability analysis method; it can be easily extended to non-linear limit states and has a reasonable balance between ease of use and accuracy.

    3.4.5 Second Order Reliability Method (SORM)

The approximation of the limit state at the design point as a straight line is a step which leads to errors in FORM analyses, the magnitude of which depends on the degree of non-linearity of the limit state equation. In SORM, a parabolic, quadratic or higher order polynomial is used to describe the limit state surface, centred on the design point. This leads to higher accuracy but is not generally considered necessary for the majority of engineering applications.

Examples of calculations made using MCS, FORM and SORM for one analysis case are presented in Reference (17).

4. REVIEW OF APPLICATIONS IN DIFFERENT INDUSTRIES

    4.1 Overview

In the following sections the application of reliability methods in a number of different industries is reviewed. This is limited to those industries in which the methods form an integral part of design, construction and operation, and covers predominantly fracture, fatigue and corrosion failure processes. Examples of application to inspection scheduling, life extension, design and change in operating conditions are described.

    4.2 Nuclear

    4.2.1 General Characteristics

The application of reliability-based methods in the nuclear industry is widely documented and only a selection of representative literature is reviewed here. The R5(18) and R6(19) methodologies are the most widely applied for high and low temperature failure assessments respectively, and both can be treated probabilistically. R5 reliability analysis is currently at the development stage, while examples of R6 application in reliability are well documented.

Examples of the application of each in a reliability context are given in Reference (20). The use of the R6 method in support of safety cases, and in the determination of acceptable levels of reserve factors, has however demonstrated that it is usually the lack of high quality input data, particularly defect size distributions, that limits the usefulness of the approaches, rather than any inherent limitation of the methods themselves(21). Similar approaches for AGRs(22) emphasise the application of the Nuclear Safety Principles (NSPs) of prevention, protection and mitigation to initiating events in various combinations depending on the acceptable probability of occurrence, which in turn is inversely related to the severity of consequences of the event. In order of increasing consequence, the failure probabilities (Pf) and protection against the event(22) are:

Frequent: Pf > 10⁻³ per year; protection and mitigation with two lines of protection.

Infrequent: 10⁻³ > Pf > 10⁻⁵ per year; protection and mitigation with one line of protection with redundancy.

High Integrity: 10⁻⁵ > Pf > 10⁻⁷ per year; demonstration that all reasonably practical steps have been taken to provide a line of protection.

Incredibility of Failure (IoF): Pf < 10⁻⁷ per year.

4.2.2 Creep Analysis

The R5 method enables analysis of creep crack growth. The material properties describing creep crack growth and creep strain responses are usually treated probabilistically, while all other parameters are handled as deterministic quantities. Monte-Carlo simulation is generally used due to the complexity of the equations describing the limit state.

An example of this application is given in Reference (20): MCS was used with the creep crack growth properties and the creep strain defined as probability density functions and all other inputs as deterministic. The example showed how an initial distribution of defects would change by creep crack growth in one-year increments over a ten-year period. Conditional probabilities were also addressed, since ductile materials tend to have higher creep strains and lower crack growth rates; by accounting for this interrelationship of material properties, the failure probability at the end of the ten-year period was reduced by an order of magnitude.

    4.2.3 Fracture and Collapse Analysis

The R6 method uses the well-known Failure Assessment Diagram (FAD), which enables simultaneous analysis of fracture and collapse for a component with a flaw, Fig. 11. Material properties and flaw sizes are usually treated probabilistically, while applied and residual stresses are deterministic. MCS is relatively straightforward with the R6 method and other codes using the FAD, although FORM analysis can also be applied, with some limitations to ensure convergence of solutions. The FAD approach and its treatment from a reliability aspect are covered in more detail in Section 5.

Ideally, full conditional probabilities for material properties should be established, since strength and toughness are related; alternatively, realistic lower tails can be imposed on the distributions to reduce the level of pessimism. This approach is also described in the context of corrosion performance in relation to steel composition(23): by using the actual composition from a test certificate, and based on knowledge of the performance of different compositions, a more realistic estimate of failure probability can be obtained than if the minimum or maximum allowable limits of each element had been assumed.

The dependence of failure probability on the quality of flaw and fracture toughness data is emphasised in many publications(15). An example of the effect of the quality of NDE data on the resultant failure probability is shown in Fig. 12(13), while in Fig. 13 the effect of increasing scatter in fracture toughness on the resultant Pf is highlighted(24), an increase in COV of toughness from 8 to 10% leading to an increase in Pf of two orders of magnitude. Time dependence is also relevant, since irradiation embrittlement over time can lead to reduced toughness and hence increased failure probability; a schematic of this effect is shown in Fig. 14 for the case of reducing toughness (e.g. due to irradiation) and increasing stress (e.g. due to loss of area by corrosion) with time.


4.2.4 Sensitivity and Benchmark Studies

A further issue in most safety assessments, and for nuclear components in particular, is that there is high reliability and a relatively small number (in statistical terms) of components or plants. The concept of actual failure probabilities is therefore better interpreted as relative, or notional, failure probability(25). Round robin analyses have been carried out to investigate the reproducibility of these approaches(25,26). The study in Japan(26) indicated that while seven different computer programs gave failure probabilities within a factor of 2-5 of each other, the sensitivity of the result to the assumed fracture toughness was such that the degree of neutron irradiation greatly influences the judgement on plant life extension. Such sensitivities are further studied in Reference (25), where FORM, SORM and a round robin MCS were compared, Fig. 15. The treatment of failure probabilities as relative values, indicating where the greatest sensitivity to inputs lies, and the fact that only 'orders of magnitude' of Pf values are of interest, seem to be the main conclusions of comparative studies.

The US Heavy Section Steel Technology (HSST) Programme(27) concentrated on the effects of materials and flaw data distributions on calculated failure probabilities. The input flaw size distribution has a dominant influence on Pf; a detailed analysis of inspection capabilities(28) shows that the probability of detection (POD) and probability of correct sizing (POS) vary significantly with inspection method and quality, and with the type and size of flaw, Fig. 16. The advantages of applying constraint corrections for the case of shallow cracks have also been identified(27). Inclusion of ductile tearing prior to cleavage fracture was seen as a mode that should be treated probabilistically, and the possibility of including both constraint and tearing analysis has been demonstrated based on the Weibull stress concept(29). As well as the influence of the actual flaw distribution on failure probability, the effects of using varying qualities of inspection on the resultant failure probability have been demonstrated probabilistically, and specific uses of reliability-based methods for the definition of inspection and maintenance schedules are also widely documented. It is predicted(30) that reliability approaches will play an increasing part in structural integrity safety cases.

    4.3 Offshore Structures

    4.3.1 General Characteristics

Probabilistic methods have also been applied for many years in the offshore industry, although due to the loading regimes there is more emphasis on linking fatigue and fracture than in the nuclear industry. Safety cases are now required for offshore structures in the North Sea, and inspection plans are linked to these. The ALARP principle is a cornerstone of reliability analysis of offshore structures, and most analyses are linked to the optimisation of inspection plans in terms of location, frequency and reliability of detection methods. Floating Production, Storage and Offloading (FPSO) structures are covered in Section 4.4 of this report.

    4.3.2 System Reliability

Due to the nature of welded jacket structures, one of the key issues is to link individual member failure to overall system failure(31). The degree of redundancy, consequences of failure, warning time of impending collapse, accessibility of a joint for inspection and the limits of the probabilistic model are usually linked to define the potential probability of failure by different paths. Redundancy must also be defined in terms of ultimate strength, fracture and fatigue limit states. These interrelationships inevitably lead to complex analyses, with the number of failure paths increasing factorially with the number of members and joints; methods for assessing this complexity are described in Reference (32).


4.3.3 Reliability Updating and Inspection Scheduling

Since fatigue is the dominant damage mechanism in offshore jacket structures, there have been numerous examples of reliability assessments linking crack growth with inspection requirements. Most approaches involve a reliability-based interpretation of the S-N curve in conjunction with Miner's Rule. This method can be used to define the required inspection interval to maintain a specific level of reliability(31), Fig. 17, or for reliability updating to refine the original crack growth calculations based on knowledge obtained by inspection of the actual, or similar, joints. An example of the second application(33) shows the potential advantages of carrying out remnant life assessments of existing structures and applies the following steps:

(a) Estimate the fatigue reliability, β, for each joint.

(b) Identify critical joints where β is less than the target value (Pf higher).

(c) Identify a subset of critical joints to be inspected.

(d) Carry out inspections.

(e) Update the reliability estimates for all joints.

(f) Plan the following inspection surveys.

Activity (c) requires a correlation to be established between joints so that the results of the inspection can be extended to the non-inspected joints; this is achieved by the following:

A reliability model based on S-N data to account for fatigue failure and crack detection during inspection.

    A rational correlation between joints based on type, geometry and response to loads.

A Bayesian updating of the reliability estimates for all joints, based on the results for the inspected joints. This enables system reliability updating using FORM/MCS in real time and uses all information made available by the surveys.

A method of accounting for inspection uncertainty by modifying the PDFs for crack sizes.

This method has proved successful for life extension, significant improvements in reliability being obtained due to the updating method, Fig. 18.

Reliability approaches have also been used to demonstrate the suitability of inspection techniques. Flooded Member Detection (FMD) uses the presence of water in a normally air-filled structural member to indicate through-wall damage. For this method to be viable the structure must tolerate the damage without an increase in the overall probability of failure, and risk-based methods have been used to demonstrate this(34). The required size of an inspection sample needed to update reliability based on the original safety index of an uninspected joint(33) can also be determined, Fig. 19.


4.3.4 Application of LEFM Crack Growth Methods

Although the above methods mainly apply S-N approaches to fatigue, LEFM-based crack growth methods have also been applied. In Reference (35) the da/dN approach was used mainly to calibrate S-N design lines which were not available for the particular joint configuration, and environmental effects were also incorporated. In Reference (36) the crack growth law was treated deterministically but with local 'hot-spot' stress ranges determined from FE analysis, factored by probabilistically defined wave exceedance data, giving a Weibull distribution for the hot-spot stress range. The method used is summarised in Fig. 20; the mean fatigue life of each joint was characterised by one of four states:

    Time to first detectable crack growth.

    Time to 20% through-thickness cracking.

    Time to through-thickness cracking.

Time to joint failure, defined as when joint stiffness is reduced to 50% of its intact value.

4.3.5 Other Failure Modes

Although fatigue crack growth parameters show inherent scatter, the loading and the characterisation of realistic load spectra dominate most fatigue analyses, and this is where most probabilistic effort tends to be focused in structures where fatigue is the dominant limit state. Hence, this review has concentrated on these effects.

Probabilistic methods have also been applied to offshore corrosion problems, fire and blast, and sea state modelling, but these are not addressed here.

    4.4 Transport

    4.4.1 General Characteristics

In most transport applications the variable which is most difficult to define with any level of certainty is the loading, and most reliability work is focused on fatigue rather than fracture. In many applications reliability methods are used to set inspection regimes optimised for safety and cost but, unlike nuclear and offshore structures, many transport applications directly involve public use, and the target reliabilities reflect the duty of care that this entails.

    4.4.2 Aircraft

Aircraft structures are subjected to a wide variety of live loadings, and the principal issue is one of fatigue life estimation and the linking of this to inspection schedules. Consequently, the loading spectrum forms the largest single parameter to be treated probabilistically in aircraft applications(37). The approach used for fatigue estimation often involves full scale testing of components which, coupled with reliability analysis, enables optimum inspection scheduling. The issue of engine reliability is beyond the scope of this review.


4.4.3 Rail

In rail applications, probabilistic methods are being increasingly used to set track and vehicle inspection intervals. The former tend to be based on traffic composition (axle load and speed), the fatigue crack growth characteristics of the rail and the interaction with other degradation modes such as wear and corrosion. The identification of factors that increase the risk of failure, coupled with the definition of high risk joints, enables rail NDE to be applied where it is most needed(38). The optimisation of inspection strategies for safety critical rail vehicles has also been made based on fatigue analysis, linked to a fracture approach for defining the critical flaw size. One approach(39) is to use MCS with the fatigue crack growth parameters in the Paris Law, the threshold stress intensity factor and the loadings defined as probability density functions and all other parameters defined deterministically, as sketched below. A risk-cost benefit analysis is then used to set optimum inspection intervals as a function of mileage.
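A minimal sketch of this type of analysis, with the Paris-law constants, threshold and stress range sampled as random variables and everything else deterministic; all distributions, parameter values and the ten-year horizon are illustrative assumptions, not values from Reference (39):

    import numpy as np

    rng = np.random.default_rng(seed=1)

    N = 20_000
    a0, a_crit = 1.0e-3, 20.0e-3                 # initial / critical crack depth (m)
    Y = 1.0                                       # geometry factor (deterministic)
    cycles_per_year = 2.0e6                       # load cycles per year of traffic

    C = rng.lognormal(np.log(3.0e-12), 0.3, N)   # Paris coefficient
    m = rng.normal(3.0, 0.1, N)                  # Paris exponent
    dS = rng.lognormal(np.log(80.0), 0.2, N)     # stress range (MPa)
    dK_th = rng.normal(5.0, 0.5, N)              # threshold stress intensity range

    failed = 0
    for Ci, mi, si, kthi in zip(C, m, dS, dK_th):
        a = a0
        for _ in range(10):                              # 10 one-year steps
            dK = Y * si * np.sqrt(np.pi * a)             # MPa sqrt(m)
            if dK > kthi:
                a += Ci * dK ** mi * cycles_per_year     # coarse annual increment
            if a >= a_crit:
                failed += 1
                break
    print(f"Pf(10 yr) = {failed / N:.2e}")

Repeating the calculation as a function of inspection mileage, rather than years, would give the failure probability curves that feed the risk-cost benefit analysis mentioned above.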

    4.4.4 Ships

In ships, the issue again is the complexity and unpredictability of loading, coupled with the presence of huge lengths of weld runs, largely uninspectable regions and failure mode interaction. Loading spectra in ships are dependent on factors such as trading patterns, cargo loading arrangements, speed, heading angles and time in port. A recommended method for estimation of wave loadings, frequencies and structural response from a fatigue perspective is given in Reference (40), while a study of estimation methods for the statistical characteristics of random load variables in ships(41) provides guidance on treating wave-induced bending loads and recommends COVs for these, Table 7. A prototype code for probability-based design requirements for fatigue of ship structures has been developed(42), in which the S-N curve is described probabilistically. The strength modelling error, in this case the uncertainty in Miner's Rule, is quantified by letting the damage at failure be a log-Normally distributed random variable with median 1.0 and a COV of 30%. Four levels of sophistication of calculation are presented, with target safety levels defined according to a three-level ranking based on the consequence of fatigue cracking.
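For the log-Normal damage model quoted above, the parameters of the underlying Normal distribution follow directly from the stated median and COV, as this short sketch shows:

    import numpy as np

    # Miner's damage at failure: log-Normal, median 1.0, COV 30% (as cited above).
    median, cov = 1.0, 0.30
    sigma = np.sqrt(np.log(1.0 + cov ** 2))   # std dev of ln(damage), ~0.294
    mu = np.log(median)                        # mean of ln(damage) = 0.0

    rng = np.random.default_rng(seed=1)
    damage = rng.lognormal(mu, sigma, 1_000_000)
    print(np.median(damage), damage.std() / damage.mean())   # ~1.0 and ~0.30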

    4.4.5 Floating Production, Storage and Offloading Vessels

The use of converted tankers or new-build vessels as Floating Production, Storage and Offloading (FPSO) units offshore is an increasing trend. The integrity of such units is usually assessed on a case-by-case basis, as the environmental loadings are site-specific. Rules for tankers involved in world trade were previously used for this, although reliability-based codes for FPSO assessment have now been published by both Lloyd's Register and ABS. A Probability of Exceedance (POE) approach is often used for quantifying wave-induced dynamic loads(44,45), which are then applied in probabilistic fatigue analysis. Reliability analyses for assessment of the safety levels of FPSO hulls involve five failure modes(44): hull girder collapse, hull girder yield, failure of unstiffened panels, failure of stiffened panels and fatigue. FORM analyses have been used(44) taking all inputs (dimensions, flaw sizes, material properties, loadings) as random variables with specific distributions and coefficients of variation from various studies(41). Deterministic analysis was first carried out to identify the most critical regions, and a probabilistic analysis was then carried out on the failure mode with the lowest reliability index. The resulting reliability indices were compared with recommended target reliability indices for ships and FPSOs and indicated that current codes have adequate, but not excessive, safety margins.


4.5 Bridges and Buildings

    4.5.1 Fatigue Loading

Reliability methods applied to bridges usually address fatigue issues and life extension, but the increase in failure probability throughout life due, for example, to increased traffic load and loss of load-bearing area through corrosion must also be considered. A comprehensive description of the development of a method for assessing the risk of fatigue failure in highway bridges is given in Reference (46). Initial crack shape, fatigue crack growth data and fracture toughness were treated as statistical distributions. Over 100 stress range histograms were obtained from 40 bridges and equivalent constant amplitude stress ranges determined. An MCS approach was used with a Variance Reduction Technique (similar to importance sampling) to reduce the required number of simulations when the risk of failure is small. The simulation output gave the risk of fatigue failure for specific bridge details, the total system failure probability then being calculated from the total number of details in the system and the correlation coefficients between them. The model enabled definition of the maximum length of service life extension, specified inspection intervals and the maximum fatigue failure risk at which the bridge must be inspected, Fig. 23.

Methods for incorporating random variable amplitude traffic loading with data from inspection have been combined into a probabilistic fatigue analysis to identify the statistical properties of the damage contribution(47). The fatigue lifetime of each critical detail was then assessed; the method has potential for assessing future damage due to traffic growth and increased truck weights. The methods have also been applied to general bridge management programmes(48) and bridge deterioration(49).

    4.5.2 Seismic Loading

Assessment of structural failure due to seismic activity, both for bridges(50) and buildings(51,52), has been carried out probabilistically. In Reference (50) a force-based seismic code with dynamic reliability theory was applied; this showed the complex nature of the problem of integrating structural response with seismic response, although the probability of failure generally increased as the period between the vibration peaks increased, Fig. 24. The effect of variability of material ductility was studied from the viewpoint of the vertical deflection capacity of a cantilever beam, as would occur in an earthquake(51). Probabilistic models for yield stress and hardening capacity (as indicated by the inverse of the yield/tensile ratio) were introduced into a FORM analysis and studied for a wide range of steel beam and column geometries. The results can be applied in design by the definition of generalised parameters involving cross-sectional properties and material uncertainty coefficients.

A FORM method has also been used for determining the required fracture toughness of columns used in direct-welded moment connections in seismic areas(53). The target reliability was that quoted in the design code for a severe earthquake with a low return period; stresses were determined from a finite element analysis and fracture toughness requirements evaluated using a FORM analysis of the Failure Assessment Diagram.

    4.5.3 Static Strength Modelling

Methods have been applied to the static strength modelling of structures which are highly sensitive to geometrical variation. The dominant failure mode in these cases is buckling, and a variety of stiffened steel structures have been analysed(54,55). The application is, however, not limited to steel: concrete structures have also been assessed from the aspect of material property variations, and the effect of loading cases on the buckling of GFRP cylinders has also been analysed(56).


4.6 Power, Process and Chemical Plant

The main application of reliability methods in fossil power, process and chemical plant appears to be in the area of risk-based maintenance and inspection (RBMI). The range of plant to which RBMI is applied is vast, but the majority of failures are associated with pressure equipment(57) such as tanks, storage vessels, pumps, towers, heat exchangers and boilers, Fig. 25. Formal approaches detailing such methods have been, or are soon to be, adopted in the USA(58,59,60). An extensive review of industries applying RBMI was carried out in the preparation of the RIMAP proposal(61), which aims to provide a unified approach for a probabilistic lifetime and consequence analysis method, Fig. 26.

One criticism of current quantitative methods for RBMI, which limits their wider use, is their complexity and the unrealistic expectation that such analyses can be carried out by plant engineers in short timescales. A complicating factor is the vast number of damage mechanisms (thinning, fatigue, stress corrosion cracking, metallurgical damage, mechanical damage) and the complex interactions between them. Furthermore, while the definition of the consequences of failure will always be semi-qualitative and open to individual interpretation, RBMI is an expanding area because of the benefits it confers, and current thinking is that computational methods and individual experience should go together to form a workable RBMI strategy(57). A move towards this has already been made with the development of appropriate software to formalise the process of consequence analysis so as to target maintenance effectively(62).

    4.7 Pipelines

    4.7.1 General Characteristics

As for nuclear applications, there is a large amount of literature relating to risk and reliability assessments of pipelines, and only a brief overview is given here. Probabilistic analysis of pipelines is introduced in Reference (63); risk analysis approaches for these applications were pioneered by British Gas and are now applied widely in this field. Despite there being over 100,000 km of pipelines in western Europe alone, the failure rate is extremely small. Extensive legislation governs the manufacture, installation, operation and inspection of pipelines, and the aim of this regulation is to limit the likelihood and consequences of any failure. Consequently, pipelines are the safest method of transporting energy(64). Where flaws are detected, fitness-for-purpose must be demonstrated, and this is the main aspect of pipelines in which risk-based methods are applied. FAD-type approaches (R6, BS 7910) form the backbone of such assessments, although obtaining representative data is usually the limiting factor.

The major cause of damage in onshore pipelines is third party interference, although ground movement can also affect some locations and fabrication flaws may be an issue in ageing pipelines; 70% of pre-1968 pipelines would be classed as unacceptable to current standards, whereas this value reduces to 10% for those fabricated between 1968 and 1972(63).

Particular emphasis is placed on consequence analysis, a function of the stored energy in the system and the human population density in the vicinity. The probabilistic methods are applied to the assessment of damage or flaws, the setting and optimisation of inspection frequencies, the setting of maintenance schedules, life extension and pressure uprating of existing pipelines. Examples of each are described briefly below. It is noted in many of the references cited that the codification of limit states and reliability-based pipeline design in the USA has trailed behind that of Europe. Corporate interest in this issue in the USA varies widely, evidently due to concerns regarding public perception and liability.

    4.7.2 Third Party Damage


Use of the FORM method for estimating pipeline failure frequencies has demonstrated that this method represents a suitable compromise between accuracy and usability(65). The probability of failure, given the presence of third party damage, can be estimated using this method, although the frequency of occurrence of impacts can only be determined through the use of historical data. Interrelationships between dents, gouges and failures are determined from existing 'models'(66,67), and a two-parameter Weibull distribution is fitted to damage data. However, the fitting of lines to these distributions is itself subject to confidence limits, and with different fitting methods the failure probabilities were found to change by a factor of 1.5-3, highlighting the sensitivity to distribution fitting.

A reliability-based limit state approach has also been developed for the design of pipelines to resist third party mechanical damage. This involved statistical quantification of damage, Fig. 27, coupled with strength properties defined as random variables, which were then applied to existing puncture models in a probabilistic manner and compared with test results. This enabled sensitivity studies to be made which characterise the failure probability as a function of each input variable. Applied load and excavator tooth contact length were found to be the most significant variables, mainly due to the high COV of the load (45%). Examples of reliability levels for various design factors are shown in Fig. 28.

    4.7.3 Pressure Uprating

The effect on reliability of pressure uprating of pipelines has been demonstrated in several studies in order to justify the safe use of higher pressure levels. An extensive study for the justification of a pressure uprating of a sub-sea pipeline is reported in Reference (69). This involved a probabilistic treatment of all credible failure modes: yielding (a serviceability limit state) and, for ultimate limit states, bursting, external corrosion, internal corrosion and fatigue crack growth of weld defects. Using such an approach it was demonstrated that only for the fatigue limit state was the probability of reaching a failure condition greater than 'negligible'. At the time of the study the pipeline was operating at 100 bar g; uprating to 130 and 135 bar g gave calculated increases in failure probability of 12 and 156 times respectively, demonstrating the sensitivity of certain limit states to small changes in input values, Fig. 29. Based on experience of the pipeline, the uprating to 130 bar g was considered acceptable.

The application of limit state reliability methods to pipe operation above 80% SMYS has been demonstrated using FORM analysis with limit state equations describing rupture of new pipe, rupture from corrosion damage and rupture from dent-gouge damage(70). Yield strength, Charpy impact energy, pipe diameter, wall thickness, operating pressure, flaw/gouge size and corrosion rate were considered to be random variables, with all other parameters treated deterministically. This showed that the burst of defect-free pipe was not a credible failure mode and that it is important to consider damage and time-dependent deterioration. The failure rates for corrosion defects were expressed as a time-dependent function, Fig. 30(a), while those for dent-gouge failure were represented as a function of wall thickness, Fig. 30(b). Overall, the suitability of the FORM method for justifying pressure uprating was demonstrated, although the importance of linking this with a pipeline management system for maintaining, monitoring and controlling structural integrity through the full life was highlighted.


4.7.4 Fracture and Collapse

In a study of the reliability of pipeline girth welds, both plastic collapse and unstable fracture limit states were assessed(71) using the Failure Assessment Diagram with respect to specific target reliability levels (β = 1-5), although an industry-wide acceptable target was not thought to be currently feasible.

The necessity for accurate flaw sizing and the definition of realistic COVs for material properties is emphasised(71); fracture toughness was treated as a distribution with a mean and a standard deviation of 50% of this value, in order to account for variability in CTOD not revealed by the limited number of test data available. Once the target reliability levels had been established the FORM method was again used, but with the aim of deriving appropriate partial safety factors for application to the main data inputs (flow stress, toughness, stress, flaw size) in a limit state approach. The dominant overall uncertainty for both limit states was found to be flaw size, and a high degree of conservatism was noted in the fracture analysis since a simple LEFM approach was used. Furthermore, the difficulty of predicting the occurrence and magnitude of ground movement meant that its contribution to the overall failure probability could not be quantified. The analysis is applicable on a 'per-weld' basis, and the concepts of system reliability and time-dependency were not addressed.

FORM/SORM have also been applied in a study to define the upper limits of yield/tensile (Y/T) ratio for reliable pipeline operation(7). Following a review of ultimate and serviceability limit states, and of stress- and strain-controlled cases, the three main failure modes in which Y/T plays a major role were defined as pipe burst, local buckling and axial rupture. Models describing each of these failure modes are being assessed probabilistically, with particular emphasis on the definition of flow stress, after first demonstrating suitable model accuracy by comparison with existing test data.

FORM and MCS methods based on the failure assessment diagram have been used to justify the safe use of pipes containing girth welds with low weld metal fracture toughness(72). A system reliability approach was adopted, the total failure probability being determined from the product of the individual probabilities of a flaw giving a failure prediction, a flaw existing in a tension zone and a flaw existing in the weld, multiplied by the number of welds considered in the pipeline; a worked version of this composition is sketched below.
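
All numbers in the sketch are hypothetical; multiplying by the weld count is a first-order approximation to the series-system result, valid while the probabilities involved are small.

```python
# Hypothetical numbers only: pipeline failure probability composed from the
# conditional probabilities described above. Multiplying by the weld count is a
# first-order (small-probability) approximation to the series-system result.
p_fail_given_flaw = 1e-3   # FAD predicts failure, given a flaw in a weld tension zone
p_tension_zone = 0.5       # probability that a flaw lies in a tension zone
p_flaw_in_weld = 0.05      # probability that a weld contains a flaw
n_welds = 800

p_pipeline = n_welds * p_fail_given_flaw * p_tension_zone * p_flaw_in_weld
print(f"approximate pipeline failure probability: {p_pipeline:.1e}")  # 2.0e-02
```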

A similar approach has also been used to investigate the probability of pipe failure during hydrotesting(73), using TWI software(74).

4.7.5 Fabrication Hydrogen Cracking

Although not limited to pipeline applications, a reliability-based approach has also been used to assess HAZ fabrication hydrogen cracking(75). Nomograms for preheat requirements as a function of CEV, heat input, combined thicknesses and hydrogen scales were combined with a probabilistic treatment of HAZ hardness to derive probability-of-cracking values as a function of heat input. The methodology and typical output are shown in Figs. 32 and 33 respectively.

    4.7.6 Inspection Planning

Since third party damage is the principal cause of pipeline failure, reliability-based methods have also been applied to evaluate the effectiveness of the different design and maintenance practices used to protect pipelines(76). Mechanical damage assessment statistics were collated from numerous sources and a two-component model based on a fault tree applied. The first component relates to the frequency of mechanical interference, the second to the probability of puncture when interference occurs. Probabilities of occurrence were determined from historical statistics, while the cost implications of pipeline outage were compared with the costs of mitigation measures to give a cost-benefit analysis for maintenance/repair measures compared to preventative measures. Similar approaches are also used for scheduling inspection for corrosion damage.


5. PROBABILISTIC TREATMENT OF FRACTURE AND COLLAPSE

    5.1 Description of Failure Assessment Diagram

The Failure Assessment Diagram (FAD) gives a graphical representation of the potential effect of a defect on the integrity of a structure. The FAD is a two-dimensional plot and indicates the propensity of the defect to cause failure by plastic collapse and brittle fracture. The basic FAD has two axes, Kr and Lr, where:

    Kr = Applied stress intensity/fracture toughness.

    Lr = Applied stress/yield stress.

Kr is known as the brittle fracture parameter and Lr the plastic collapse parameter. The three principal inputs necessary for a basic deterministic calculation to be performed are crack size, stress and fracture toughness. If all three are known the safety of a structure can be evaluated, while if any two are known the critical level of the third parameter can be determined. The brittle fracture parameter can also be defined in terms of J or CTOD-based fracture toughness.

Once the co-ordinates of the analysis point have been evaluated and plotted on the FAD, further information can be gained depending on the relative position of the analysis point in FAD space. The FAD locus divides this space into 'safe' and 'unsafe' regions, the shape of the locus allowing for the interaction of yielding and fracture. Furthermore, depending on where the analysis point falls, the most likely failure mode can be estimated; the regions of 'fracture-dominated', 'collapse-dominated' and 'intermediate' behaviour are divided up according to the ratio Kr/Lr. Another feature of the FAD is that some element of work hardening is allowed for, since the Lr cut-off represents an allowable maximum stress equal to the mean of the yield stress and UTS. A minimal worked evaluation is sketched below.
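
The sketch below is ours; all input values are invented, and the locus used is an indicative curve of the widely quoted R6/BS 7910 'Option 1' type. It evaluates a single assessment point for a centre-cracked plate under membrane stress.

```python
# Deterministic FAD check for a centre-cracked plate under membrane stress,
# with K = sigma*sqrt(pi*a). The locus is an indicative 'Option 1' type curve
# and all input values are invented for the example.
import numpy as np

stress, sy, su = 200.0, 355.0, 510.0   # MPa
K_mat = 120.0                          # fracture toughness, MPa*sqrt(m)
a = 0.01                               # crack half-length, m

Kr = stress * np.sqrt(np.pi * a) / K_mat
Lr = stress / sy
Lr_max = (sy + su) / (2.0 * sy)        # cut-off: allowable stress = mean of yield and UTS

def f(Lr):
    return (1.0 - 0.14 * Lr**2) * (0.3 + 0.7 * np.exp(-0.65 * Lr**6))

safe = (Lr <= Lr_max) and (Kr <= f(Lr))
print(f"Lr = {Lr:.2f}, Kr = {Kr:.2f}, locus = {f(Lr):.2f}, safe = {safe}")
```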

    5.2 Status of Current FAD-Based Approaches

The main analysis methods using the FAD are R6(19), BS 7910(77), SINTAP(78) and API 579(79). BS 7910 is a new standard which replaces the former BS PD 6493. While changes to the scope of the standard have been made and the treatment of data inputs revised, the basic concept of the FAD remains unchanged. The plastic collapse parameter in the FAD is now defined in terms of yield stress, with a consequent change to the shape of the FAD and a definition of the cut-off in terms of the yield stress/ultimate tensile stress ratio. The SINTAP method also applies a FAD based on yield stress but has alternative options for addressing cases such as weld strength mismatch and the treatment of constraint.

Establishing the significance of a result in FAD space can involve one, or a combination, of the following concepts:

    Sensitivity analysis.

    Definition of reserve factors.

    Use of partial safety factors.

    Probabilistic analysis.


The definition of reserve factors and sensitivity analysis can be linked by determining how sensitive the final result is to variations in the input data: a higher reserve factor is needed in those situations where the result is very sensitive to realistic variations in these data.

Similarly, if a high reserve factor is deemed necessary, this can also be achieved by applying partial safety factors to those variables to which the analysis point is shown to be most sensitive.

    5.3 Inherent Safety Level of FAD Approach and Use of Partial Safety Factors

The results of analyses using the above methods are based on the assumption that failure will occur when an assessed defect gives rise to a point which falls on the failure assessment diagram, whereas in practice it is often found that the FAD gives safe predictions, due to its inherent conservatism, rather than critical ones. Data from wide plate test programmes to validate the failure assessment diagram approach were used to investigate the effects of this inherent conservatism and to derive appropriate partial safety factors(80,81), Fig. 34.

A relatively high level of safety can be observed in the FAD, indicating that the method is inherently safe. By expressing the distance from the origin to the failure locus as a ratio of the distance from the origin to each data point, the inherent safety factor can be determined for each test result at different angles around the FAD, Fig. 35; the construction is sketched below. The region of the FAD is expressed as the angle θ, where θ = 90° equates to pure brittle fracture (Kr = 1) and θ = 0° corresponds to pure plastic collapse (Sr = 1). The resultant plot, Fig. 36, shows that in all except two cases (R/r < 1) the method is safe and that the highest safety factor is obtained in the brittle fracture region of the FAD. In BS 7910 and SINTAP no allowance has been made for the inherent level of conservatism of the FAD; further studies would be needed before this could be included in fitness-for-purpose analyses, as discussed in Section 5.4.
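
A sketch of this radial construction (ours, using the same indicative locus as in Section 5.1 and a deliberately simplified treatment of the cut-off) is:

```python
# Radial reserve-factor construction: extend the ray from the origin through an
# assessment point to the locus and take the ratio of the two radii (R/r).
# Indicative locus as before; the cut-off handling is simplified.
import numpy as np
from scipy.optimize import brentq

def f(Lr):
    return (1.0 - 0.14 * Lr**2) * (0.3 + 0.7 * np.exp(-0.65 * Lr**6))

def radial_factor(Lr_pt, Kr_pt, Lr_max=1.25):
    theta = np.arctan2(Kr_pt, Lr_pt)       # 90 deg = pure fracture, 0 = pure collapse
    r_pt = np.hypot(Lr_pt, Kr_pt)
    cross = lambda r: r * np.sin(theta) - f(min(r * np.cos(theta), Lr_max))
    r_locus = brentq(cross, 1e-6, 10.0)    # radius where the ray meets the locus
    return np.degrees(theta), r_locus / r_pt

theta, R_over_r = radial_factor(0.4, 0.5)
print(f"theta = {theta:.0f} deg, R/r = {R_over_r:.2f}")  # R/r > 1 lies inside the locus
```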

The resulting recommendations for partial safety factors, to be applied to the best estimate (mean) values of maximum tensile stresses and flaw sizes and to the characteristic (i.e. minimum specified) values of toughness and yield strength, are given in Table 8. It is emphasised that the partial safety factors will not always give the exact target reliability indicated but should not give a probability of failure higher than this target value, although this premise has been questioned(82).

Additionally, there is no unique solution for partial safety factors: even when a preliminary separation is made into load and resistance groups, there are still many alternative combinations of partial factors which could be applied to the separate input variables to give the same required target reliability. The most appropriate solutions are those for which the partial safety factors remain approximately constant over a wide range of input values. The ratios between the different factors should be primarily dependent on the relative COVs of the input data but, as noted in Reference (80), there was some effect of the absolute values of some input variables. It has been suggested that a probabilistic assessment should be used in conjunction with the partial safety factor approach, since the latter may not always give the target failure probability, and where this is the case the results are likely to be unconservative(82). This is because the relationship between probability of failure and reserve factor depends on the standard deviation of the variable and the position of the point in the FAD: Fig. 37 shows how the position of constant failure probability within the FAD varies with the standard deviation in flaw size.

It should be noted that the partial safety factors on fracture toughness are applicable to mean-minus-one-standard-deviation values, as an approximate estimate of the lowest of three results. It is recommended that sufficient fracture toughness tests be carried out to enable the distribution, and the mean minus one standard deviation, to be estimated satisfactorily.

    5.4 Model Uncertainty in the Failure Assessment Diagram

The relationship between partial safety factor and overall failure probability is linked directly to the variability and uncertainty of specific random data inputs and was studied in detail in Reference (80). Furthermore, the consequence of failure may also affect both the target reliability and the weighting given to the partial safety factors (given in Table 8 as BS 7910 recommended values without modelling uncertainty) needed to achieve this target reliability. The conservatisms of the FAD may arise from a number of effects, but under loading conditions similar to those of the validation dataset the inherent conservatism can be treated as a modelling error. Where it is desired to remove these uncertainties, and where they are known to be represented by the conditions of the wide plate tests, including the modelling uncertainties in the calculation of partial factors leads to the modified set of safety factors given in Table 9(80).

Typically, removal of the modelling uncertainty allows a reduction in the general recommended partial safety factors of the order of 0.05 to 0.1 on stress, and 0.2 to 1.0 on fracture toughness. It is noted in Reference (81) that incorporating the modelling uncertainty into such assessments reduced the failure probability by generally less than one order of magnitude, and that any improvements are most likely at low failure probabilities and in the elastic-plastic (knee region and higher Lr values) region of the FAD, Table 10. Furthermore, the modelling uncertainty can be characterised by a three-parameter Weibull distribution, although where there is confidence regarding the dominant region of the FAD the appropriate factor for that region can be used.

    5.5 Probabilistic Treatment of Failure Assessment Diagram

Probabilistic fracture mechanics is based on the concept that all or some of the input parameters for an FAD analysis contain inherent uncertainty, for example due to lack of detailed information, testing variation or material variability. The uncertainty in the data inputs manifests itself as an uncertainty in the resulting analysis point; the uncertainties in flaw size and material toughness are generally considered to have the greatest effect on the uncertainty of the final result. The most likely analysis point on the FAD, its associated statistical distribution and the relationship between these two aspects and the failure/no-failure boundary of the FAD enable the probability of failure to be determined for a given set of inputs and their distributions.

MCS and FORM are the two most widely used methods for a reliability-based interpretation of the FAD. Various programs are available for automating the analyses, including TWI's FORM/MONTE program, British Energy's STAR6 program and the SINTAP consortium's proSINTAP software. Within proSINTAP, the following parameters are treated as random parameters:

    Fracture toughness.

    Yield strength.

    Ultimate tensile strength.

    Defect size given by NDE.

These random parameters are treated as uncorrelated with one another and can follow a Normal, log-Normal, Weibull or exponential distribution. This and other software is covered in more detail in Section 7.
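
As a concrete illustration of the probabilistic FAD idea, and not of the proSINTAP implementation itself, the sketch below runs a direct Monte Carlo version of the calculation with invented distributions for the four random parameters listed above:

```python
# Direct Monte Carlo sketch of a probabilistic FAD calculation (not proSINTAP
# itself). The four random inputs follow invented distributions; the applied
# stress is held deterministic for brevity.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
stress = 180.0                                   # MPa

Kmat = rng.lognormal(np.log(80.0), 0.25, n)      # fracture toughness, MPa*sqrt(m)
sy = rng.normal(355.0, 18.0, n)                  # yield strength, MPa
su = rng.normal(510.0, 20.0, n)                  # ultimate tensile strength, MPa
a = rng.lognormal(np.log(0.008), 0.40, n)        # NDE-reported flaw half-size, m

Kr = stress * np.sqrt(np.pi * a) / Kmat
Lr = stress / sy
locus = (1.0 - 0.14 * Lr**2) * (0.3 + 0.7 * np.exp(-0.65 * Lr**6))
fail = (Kr > locus) | (Lr > (sy + su) / (2.0 * sy))   # outside curve or past cut-off

print(f"estimated failure probability: {fail.mean():.1e}")  # ~1e-3 with these inputs
```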


6. TARGET RELIABILITY LEVELS IN DIFFERENT CODES AND INDUSTRIES

    6.1 Overview

Target reliability levels depend on the consequence and the nature of failure, the economic losses, the social loss or inconvenience, the environmental consequences and the amount of expense and effort required to reduce the probability of failure. Target levels are usually calibrated against well-established cases that are known from past experience to have adequate reliability, although novel types of structure require formal approaches to define appropriate levels. The reliability index of a structure is often quoted rather than the failure probability, since there is a substantial difference between the notional probability of failure in the design procedure and the actual failure probability. Most codes apply the ALARP principle (As Low As Reasonably Practicable), which recognises that the return on incremental safety improvement diminishes with increasing reliability. The process is therefore one of optimising safety against cost.

    6.2 Quantifying Societal Consequence

One of the more qualitative aspects of reliability analysis is the estimation of consequences. Attempts to formalise this have only been partially successful, due to the difficulty in assigning 'typical' scenarios and an unwillingness in some industries to be seen to assign any fatality as being an acceptable condition. The current convention of the HSE(2) is a benchmark value of ~£1m for the 'Value of a Statistical Life' (VOSL). This concept is usually interpreted as that which people are prepared to pay to secure a certain averaged risk reduction, and equates to a reduction of individual risk of 1 × 10⁻⁵ being worth ~£10; it is not the value assigned to compensation for loss of life. Structural reliability is important first and foremost if people may be killed or injured as a result of collapse. In ISO 2394(83) it is suggested that an acceptable maximum value for the failure probability in such cases might be found from a comparison with risks resulting from other activities. Taking the overall individual lethal accident rate of 10⁻⁴ per year as a reference, a value of 10⁻⁶ seems reasonable to use. The maximum allowable probability of failure of the structure then depends on the conditional probability of a person being killed, given the failure of the structure(83):

P(f | year) × P(d | f) ≤ 10⁻⁶

Pf = 10⁻⁴ Ks nd / nr . . . (3)

where Ks is a social criterion varying from 0.005 for structures which pose a threat to general society to 5 for structures which do not affect the general public, nd is the design lifetime in years and nr is the number of people at risk. Alternatively, one expression has been developed which takes account of activity type (e.g. normal, high exposure) and warning factor(85):

Pft = 10⁻⁵ (A/W) nd / √nr . . . (4)

where A is an activity factor varying from 0.3 to 10 for low and high exposure structures respectively, and W is a warning factor varying from 0.01 for fail-safe conditions to 1.0 for failure modes which have no prior warning.
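
The two expressions can be checked with hypothetical inputs as follows:

```python
# Worked check of expressions (3) and (4); all input values are hypothetical.
def pf_social(Ks, nd, nr):
    """Eq. (3): acceptable failure probability from the social criterion."""
    return 1e-4 * Ks * nd / nr

def pf_activity(A, W, nd, nr):
    """Eq. (4): activity/warning-factor form."""
    return 1e-5 * (A / W) * nd / nr**0.5

# e.g. a structure posing a threat to general society, 50-year life, 200 people at risk
print(f"eq. (3): {pf_social(Ks=0.005, nd=50, nr=200):.1e}")          # 1.2e-07
print(f"eq. (4): {pf_activity(A=1.0, W=0.3, nd=50, nr=200):.1e}")    # 1.2e-04
```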

It is, however, noted in Reference (5) that the use of such expressions is open to wide interpretation; they do not account for many other relevant issues, and comparisons are difficult without specific information and context for the calculation.

For loading situations which occur with low frequency, such as earthquakes, the aim reliability level is generally lower. If this were not the case, the cost of guaranteeing very low failure probabilities for events which are unlikely to occur would be prohibitively high. It is therefore recognised that the occupancy and functionality of buildings should be considered together with the frequency of the damaging event when defining safety levels for individual elements of buildings(86).

    6.3 Treatment of Consequence in Three Major Codes

    6.3.1 ISO 2394: General Principles on Reliability for Structures

ISO 2394(83) is a recently introduced (1998) standard, the remit of which is to provide a common basis for defining design rules relevant to the construction and use of the wide majority of buildings and civil engineering works. It is emphasised in this code that structural reliability is an overall concept comprising models for describing actions, design rules, reliability elements, structural response and resistance, workmanship, quality control procedures and national requirements, all of which are interrelated. The standard provides a full description of the methods to be applied, including models, limit state design, probability-based design, the partial safety factors approach and the assessment of existing structures.

    In the context of ultimate limit states, the following points are stated in ISO 2394:

Consequences of failure are defined at three levels and incorporate economic, social and environmental consequences.

    Failure is considered to occur by four methods:

    (i) an unfavourable combination of circumstances within normal use.

    (ii) exceptional but foreseeable actions (e.g. climatic).

    (iii) consequence of error or misunderstanding.

    (iv) unforeseen influences.

Any foreseeable scope of damage should be limited to an extent not disproportionate to the original cause.


A consideration is given to durability by classification of the structure into one of four classes which have notional design working lives of 1-5 years, 25 years, 50 years or 100+ years.

The composition, properties and performance of materials, the shape and detailing of members and the quality/control of workmanship (fabrication) are considered key issues from a durability point of view.

From a probabilistic point of view, an element can be considered to have one single dominating failure mode. A system may have more than one failure mode and/or consist of two or more elements, each one with a single failure mode.

Probabilistic structural design is primarily applied to element behaviour and limit states (serviceability and ultimate failure). Systems behaviour is of concern because systems failure is usually the most serious consequence of localised component failure. It is therefore of interest to assess the likelihood of systems failure following an initial element failure. In particular, it is necessary to determine the system characteristics in relation to damage tolerance or structural integrity with respect to accidental events. The element reliability requirements should depend upon the system characteristics.

Properties of materials should be described by measurable physical quantities and should correspond to the properties considered in the calculation model. Generally, material properties and their variability should be determined from tests on appropriate test specimens, based on random samples which are representative of the population under consideration. By means of appropriately specified conversion factors or functions, the properties obtained from test specimens should be converted to properties corresponding to the assumptions made in calculation models, and the uncertainties of the conversion factors should be considered.

The recommended target reliability levels are a function of the relative costs of safety measures and the consequences of failure, and are summarised in Table 11. For 'great consequences', the maximum acceptable probabilities of failure for the cases of high, moderate and low costs of implementing safety measures are 10⁻³, 7 × 10⁻⁵ and 10⁻⁵ respectively. The middle of these values is comparable with that implied in Eurocode 3.

    6.3.2 Eurocode 3

Eurocode 3(87) for steel structures was published in 1993, although it is not yet in widespread use. A review of the partial safety factors and reliability levels associated with the material toughness requirements of this code(80) has demonstrated that a reliability approach has been used, although there is disagreement within the EU on the underlying inputs used for this. Partial safety factors for live loads are higher than for permanent loads due to the increased uncertainty of the former. The target reliability index of EC3 Annex C (Fracture Avoidance) is 3.8, corresponding to a failure probability of 7 × 10⁻⁵.


6.3.3 British Standard BS 7910

In BS 7910(77) the consequences of failure are defined as moderate, severe and very severe which, in combination with two levels of structural redundancy, gives six levels of target failure probability, Table 12. All values refer to the probability of failure of individual components; the overall objective is to protect the complete structure against failure, accepting that it may be possible to tolerate local damage in some locations of redundant structures.

In redundant structures, failure of a single component may be accommodated by alternative load paths and, although undesirable and expensive, it may be possible to make a case for a higher target probability of failure for such a component compared to a critical one which would cause complete failure. 'Moderate' consequences are interpreted as potential financial costs without threat to life. If failure is predicted to be by brittle fracture, which will by its nature occur without warning, the consequences should be interpreted as 'severe' or 'very severe'. In other respects, 'severe' consequences should be interpreted as any potential threat to human life and 'very severe' consequences as a potential threat to multiple lives. If failure is expected to be by plastic collapse, and provided that there is no threat to human life, the consequences may be interpreted as 'moderate'. In order to achieve these reliability levels a system of partial safety factors is used in BS 7910 which, when used in combination for the data inputs, is intended to give a specific reliability level in the FAD analysis, Table 8.

As scatter in material properties increases, the COV increases, and a higher safety factor must be used to maintain the same failure probability. In addition to this type of material variability, the assessed state may be close to a mode change that could drastically alter material properties. In particular, the ductile-brittle transition may induce cleavage in an otherwise ductile process, and higher factors may be required in these conditions. A given level of reliability can also be achieved through different combinations of partial safety factors, so reliability is not uniquely defined in this respect.

There are many other circumstances listed in BS 7910 that might lead to the requirement for increased reserve factors:

The true loading system has to be simplified or assumptions have to be made in order to analyse the component.

The capabilities of the non-destructive examination are uncertain.

    Flaw characterisation is difficult or uncertain.

    The assessed loading condition is frequently applied or approached.

Little pre-warning of failure is expected, forewarning being likely in cases of ductile failure.

There is a possibility of time-dependent effects (fatigue, creep, corrosion).

    Changes of operational requirements are possible in the future.

    The consequences of failure are unacceptable.


6.4 Comparison of Target Reliability Levels in Different Industries

In the context of what constitutes an acceptable level of risk, it is accepted(88) that the likelihood of failure due to the coincidence of under-strength material, constructional inaccuracies and overloading is acceptably small; by 'acceptable' it is meant that the frequency of occurrence should not be greater than it has been in the recent past.

The public acceptance of risk over which they have a choice is different to that over which they do not, just as a service which is paid for (e.g. air travel) carries with it a duty of care which voluntary risk-taking (car travel, leisure) does not. The definition of appropriate target reliability levels is therefore a difficult area and must be made with consideration of:

    Level of choice over whether to take risk.

    Consequences (societal, environmental, financial).

    Structural redundancy level.

    Prior warning of failure.

In addition, the appropriate measure of failure probability must be considered: for example, reliability over the planned life span, reliability per year, per inspection interval or per operating unit (/km/year in the case of pipelines). The conventional approach to reliability-based code development is to calibrate codes against existing practice and the implied levels of structural safety. This is summarised in Fig. 38(5), following an iterative procedure to define the required combination of partial safety factors deemed sufficient to achieve a specific target reliability index (β), and hence a maximum aim failure probability. However, this approach has limitations in terms of accounting for human factors, and in the past has been subject to a certain amount of fitting (known as the 'gap factor') to ensure consistency with existing codes(88).

For new or novel structures, this method may not be appropriate and a more structured analysis addressing all credible failure modes may be needed. An example of this was the move to floating offshore structures, where neither existing codes for fixed platforms nor classification society rules for ships were considered appropriate(89).

    Similarly, target reliability can also be re-defined for existing structures in circumstances including:

    Change of use, including increased load requirements.

    Concern about design or construction errors.

    Concern over quality of materials and workmanship.

    Effects of deterioration.

Damage following an extreme loading event (storm or earthquake).

    Concern over serviceability.

In these cases, load factors may have increased, but very often the additional information gathered during the life of the structure can be used for reliability updating using Bayes' theorem(90), thus offsetting the effect of increased loadings or decreased resistance on the calculated reliability.
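
A minimal sketch of such an update follows; the Beta-Bernoulli model and all values are our illustrative choices, not those of Reference (90).

```python
# Reliability updating with Bayes' theorem: a Beta prior on the annual failure
# probability is updated by an observed failure-free service history. The
# Beta-Bernoulli model and all values are illustrative.
from scipy import stats

prior = stats.beta(a=1.0, b=999.0)                  # prior mean Pf ~ 1e-3 per year
survived_years = 30                                 # no failures observed in 30 years
posterior = stats.beta(a=1.0, b=999.0 + survived_years)

print(f"prior mean Pf:     {prior.mean():.2e}")     # 1.00e-03
print(f"posterior mean Pf: {posterior.mean():.2e}") # 9.71e-04
```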


A comparison of target reliability levels and corresponding maximum acceptable failure probabilities defined for various consequences in different structures is given in Table 13 and summarised in Fig. 39. For the three codes ISO 2394, BS 7910 and EC3, there is reasonable agreement that the aim acceptable probability of failure for a structural element is 7 × 10⁻⁵ for 'severe' consequences and 1 × 10⁻⁵ for 'very severe' consequences. For ships, failure probabilities vary between 10⁻⁵ and 10⁻³ depending on failure mode and consequence, while for FPSO and TLP (Tension Leg Platform) floating structures 10⁻⁴ is generally adopted. The UK offshore target is 10⁻⁴, and is similar to the mean aim values of the API and DNV offshore codes. Building codes have variable target levels depending on materials of construction, occupancy and loading modes (dead, live, wind, snow, earthquake loads) but are as low as 5 × 10⁻² for survivability in earthquakes. Pipeline reliability depends on the nature of the medium (gas or oil), whether the line is off- or onshore, and in the latter case on population density. For onshore gas lines, target maxima are typically 10⁻⁴ to 10⁻⁶ per km per year. The nuclear industry has one of the highest general target reliability levels of any industry (β = 5.2), giving a target maximum failure probability of 10⁻⁷.

    7. SOFTWARE FOR RELIABILITY ANALYSIS

    7.1 Scope

Reliability analysis software is available for general applications, in which any failure mode can be addressed using the appropriate limit states, and for fracture-specific applications. Validation of such software is usually carried out by benchmark exercises between different programs, since it is not feasible to compare results with real failure statistics.

A summary of all the software reviewed is given in Table 14, and the structure of the different programs, where available, is given in Appendix 1.

    7.2 STRUREL

STRUREL (STRUctural RELiability) is a general-purpose reliability software series that has been developed to perform computational tasks in a Windows environment using the most recent theoretical findings. It is owned and developed by Reliability Consulting Programs GmbH, based at the University of Munich (http://www.strurel.de).

    It comprises several independent but interrelated programs:

    STATREL: Statistical analysis of data, simulation, distribution fitting and analysis of time series.

    COMREL: Time-invariant and time-variant analysis of component reliability.

    SYSREL: Reliability analysis of systems.

    NASCOM: Finite element code for structural analysis.

    NASREL: Module combining COMREL with NASCOM.

STATREL enables appropriate distributions to be derived for datasets input from, e.g., spreadsheets. Goodness-of-fit tests are also included to demonstrate the best fitting method to be used. COMREL comprises 44 models, and limit state equations can be input for failure modes not addressed. It includes MCS, FORM and SORM methods and, in the case of the time-variant version, includes methods for incorporating random and point-in-time events. SYSREL enables multiple failure criteria for parallel and series systems to be evaluated, including conditional events. It links directly with COMREL, making it straightforward to check individual failure criteria before combining them in a system analysis.
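
For illustration only, a rough scipy analogue of this fit-and-test workflow (not STATREL itself, and on synthetic data) might look as follows:

```python
# Rough scipy analogue of a fit-and-test workflow: fit candidate distributions
# to a dataset and rank them by the Kolmogorov-Smirnov statistic. Data synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
data = rng.lognormal(5.9, 0.08, 60)        # e.g. yield strength readings, MPa

candidates = {"normal": stats.norm, "lognormal": stats.lognorm, "Weibull": stats.weibull_min}
for name, dist in candidates.items():
    params = dist.fit(data)
    ks = stats.kstest(data, dist.name, args=params)
    print(f"{name:10s} KS = {ks.statistic:.3f}, p = {ks.pvalue:.2f}")
```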

STRUREL has applications in many fields, and many examples of its use for fracture, fatigue, collapse, corrosion and general strength problems exist. A joint industry project is planned for 2001 in which a variety of reliability problems will be assessed using STRUREL, as a benchmarking exercise in comparison with other software(97).

    7.3 ProSINTAP

ProSINTAP (PRObabilistic Structural INTegrity Assessment Procedure) automates MCS and FORM analysis of the failure assessment diagram and is applicable only to fracture and collapse failure modes(13,98). It consists of five input decks:

    Geometry.

    Loading.

    Material.

    NDE (Non-Destructive Evaluation).

    Analysis.

The Geometry section comprises stress intensity factor solutions for a range of plate and cylinder geometries with surface and through-thickness cracks.

The load module enables through-thickness distributions of applied and welding residual stress to be incorporated.

In the material module, yield strength, UTS and fracture toughness and their associated statistical distributions are input. This requires the mean