Seismic hazard analysis — Quo vadis?


Review

Jens-Uwe Klügel

Kernkraftwerk Goesgen-Daeniken, 4658 Daeniken, Switzerland
E-mail address: [email protected]

Earth-Science Reviews 88 (2008) 1-32. Available online at www.sciencedirect.com (www.elsevier.com/locate/earscirev)
Received 19 June 2007; accepted 3 January 2008; available online 4 February 2008
0012-8252/$ - see front matter. © 2008 Elsevier B.V. All rights reserved. doi:10.1016/j.earscirev.2008.01.003

Abstract

The paper is dedicated to the review of methods of seismic hazard analysis currently in use, analyzing the strengths and weaknesses of the different approaches. The review is performed from the perspective of a user of the results of seismic hazard analysis for different applications, such as the design of critical and of general (non-critical) civil infrastructures and technical and financial risk analysis. A set of criteria is developed for, and applied to, an objective assessment of the capabilities of the different analysis methods. It is demonstrated that traditional probabilistic seismic hazard analysis (PSHA) methods have significant deficiencies, thus limiting their practical applications. These deficiencies have their roots in the use of inadequate probabilistic models and an insufficient understanding of modern concepts of risk analysis, as has been revealed in some recent large scale studies. These deficiencies result in the inability to treat dependencies between physical parameters correctly and, finally, in an incorrect treatment of uncertainties. As a consequence, the results of PSHA studies have been found to be unrealistic in comparison with empirical information from the real world. The attempt to compensate for these problems by a systematic use of expert elicitation has, so far, not resulted in any improvement of the situation. It is also shown that scenario earthquakes developed by disaggregation from the results of a traditional PSHA may not be conservative with respect to energy conservation and should not be used for the design of critical infrastructures without validation. Because the assessment of technical as well as of financial risks associated with the potential damage from earthquakes requires a risk analysis, the method currently used is based on a probabilistic approach, with its unsolved deficiencies.

Traditional deterministic or scenario-based seismic hazard analysis methods provide a reliable and in general robust design basis for applications such as the design of critical infrastructures, especially when combined with systematic sensitivity analyses based on validated phenomenological models. Deterministic seismic hazard analysis incorporates uncertainties in the safety factors. These factors are derived from experience as well as from expert judgment. Deterministic methods associated with high safety factors may lead to overly conservative results, especially if applied to generally short-lived civil structures. Scenarios used in deterministic seismic hazard analysis have a clear physical basis. They are related to seismic sources discovered by geological, geomorphologic, geodetic and seismological investigations or derived from historical references. Scenario-based methods can be expanded for risk analysis applications with an extended data analysis providing the frequency of seismic events. Such an extension provides a better informed risk model that is suitable for risk-informed decision making.

Keywords: seismic hazard analysis; probabilistic risk assessment; financial risk analysis; scenario-based seismic risk analysis; modeling seismic hazard analysis

Contents

1. Introduction
2. Areas of application and the principle of an economic seismic hazard analysis
3. Additional performance criteria for the selection of seismic hazard analysis method
   3.1. Requirement of conservative realism
   3.2. Requirement of validation of results. Empirical control
   3.3. Requirement of robustness or time-invariance
   3.4. Requirement of the minimization of interface issues between seismic hazard analysis and the intended use of the results
   3.5. Requirement of traceability and logical consistency
4. Overview on approaches to seismic hazard analysis
   4.1. Deterministic seismic hazard analysis
   4.2. Strengths and weaknesses of the deterministic seismic hazard analysis
        4.2.1. General discussion
        4.2.2. Discussion of conservatism of deterministic seismic hazard analysis
        4.2.3. Evaluation of the location of anticipated earthquakes
        4.2.4. Strength of the background source
        4.2.5. Summary of the discussion
   4.3. Probabilistic seismic hazard analysis (PSHA)
        4.3.1. General definitions and procedures
        4.3.2. Discussion of the SSHAC-methodology and the PEGASOS-project
        4.3.3. Classification of PSHA-methods
   4.4. Strengths and weaknesses of Probabilistic Seismic Hazard Analysis
        4.4.1. Applicability of the model of a stationary homogeneous Poisson process
        4.4.2. Uncertainty, probability and subjective probability
        4.4.3. Dependency, aleatory variability and epistemic uncertainty
        4.4.4. PSHA, energy conservation principle and structural reliability
        4.4.5. Other known problems of PSHA
5. Evaluation of the performance criteria for the different approaches of seismic hazard analysis
   5.1. Conservative realism
   5.2. Validation of the results. Empirical control
   5.3. Robustness and time-invariance
   5.4. Minimization of interface issues between seismic hazard analysis and the intended use of the results
   5.5. Traceability and logical consistency
6. Recommendations and concluding remarks
Concluding statement
Acknowledgements
References

1. Introduction

The discussion by seismologists about the pros and cons of deterministic and probabilistic seismic hazard analysis has a long history. The seismic design of structures, especially of critical infrastructures such as nuclear power plants or dams, was and in most countries still is largely based on deterministic seismic hazard analysis. Other applications, such as financial planning for earthquake losses (insurance or reinsurance questions) or the gradual introduction of a risk-informed nuclear reactor oversight process (e.g., in the U.S.A.), created the need to perform risk assessments in terms of the likelihood of significant losses or damage, or explicitly in terms of financial risk. This need to quantify the likelihood prompted the development of probabilistic methods. From the various ways of performing probabilistic seismic hazard analysis, a specific method, called here traditional probabilistic seismic hazard analysis (PSHA), prevailed in practical applications. This method has its origin in the fundamental work of Cornell (1968). Its later development is characterized by an increasing degree of sophistication, especially with respect to the treatment of uncertainties. It has found its way into the national regulations of many countries, mainly in the format of probabilistic seismic hazard maps. It is not the intention of this paper to continue the discussion by seismologists on which method is preferable for seismic hazard analysis in general terms. In practice, all methods of seismic hazard analysis are hybrids, because they combine deterministic models and elements of probabilistic treatment to varying degrees. The aim of this paper is rather to facilitate the interests of different potential users of the results of seismic hazard analysis by providing a set of objective criteria for the selection of an appropriate analysis method, considering the specifics of the intended application. The recommended methods are selected based on a thorough review of the strengths and weaknesses of the different approaches to seismic hazard analysis.

2. Areas of application and the principle of an economic seismic hazard analysis

The most frequent and most important application of seismic hazard analysis results is the earthquake resistant design of structures for dwelling and lifelines. Historically, this task was first performed in areas of high seismic activity, where it became an urgent challenge to avoid disastrous losses of life and capital. Many key principles of earthquake resistant design have been developed on the basis of empirical observations and simple structural models (e.g., Sieberg, 1943). The degree of implementation of earthquake resistant design principles has always depended on the wealth of a society. In many countries, housing is still a problem in general, let alone earthquake resistant housing. The death toll from earthquakes in these countries has been observed to be extraordinarily high, such as in the Xian 1556, Tangshan 1976, Mexico 1985, Bam 2003, Andaman-Nicobar 2004 and Kashmir 2005 earthquakes, because to a large extent they do not implement even basic principles of earthquake resistant design.

A special application case is the earthquake resistant design of critical infrastructures. In this paper, an infrastructure is called critical if its failure due to an earthquake could potentially amplify the damage and negative impacts caused by the earthquake directly. Typical critical infrastructures are:

- nuclear power plants and research reactors,
- radioactive waste repositories,
- chemical plants,
- bridges,
- military plants,
- liquefied gas pipelines and pressurized gas storage tanks, and
- dams.

Frequently, essential lifelines like thermal power plants, important highways, hospitals, police and fire stations, telephone exchanges and the like, electrical power distribution systems and railways are included in the list of critical infrastructures due to their importance in emergency situations and their high capital value.

From the perspective of seismic hazard analysis, it appears at first glance that these different applications do not differ from each other with respect to the seismic load input provided by seismic hazard analysis. But this is not true. These applications are characterized by different spatial and temporal scales. Typical dwelling structures are designed for a lifetime of 40 to 60 years, buildings of nuclear power plants for a lifetime of 60 to 100 years, while a radioactive waste repository like the Yucca Mountain project in the U.S.A., or similar projects under investigation in Switzerland, has to be designed for safe operation over hundreds of thousands of years. Therefore, the time scale to be considered in seismic hazard analysis should be completely different. The same applies to the spatial scale. Pipelines, highways and railways extend over hundreds of kilometers, with completely different seismotectonic and geotechnical conditions in the regions they traverse.

Similarly, we have to consider the different possible design procedures, ranging from a simple equivalent static force design to a nonlinear dynamic analysis. The output required from seismic hazard analysis for these different methods varies. For simple dwelling structures, and even for less important infrastructures (simple bridges), we may decide to permit some controlled damage to the structure in the event of an earthquake, which will later be repaired. For the equipment of a nuclear power plant needed to shut down the reactor safely and to remove its residual heat, we have to ensure safe operation after the earthquake. In many design codes (e.g., IAEA NS-G-3.3, 2002; EUROCODE 8) a dual level seismic design procedure is implemented to address different design goals. These levels are defined differently, for example by distinguishing a serviceability level (for loss prevention in case of a frequent event) and an ultimate limit level for life protection, as defined by EUROCODE 8 and similarly proposed by Collins (1995). Alternatively, they can be expressed in terms of performance goals depending on the function of the infrastructure as important or as common, as adopted by Caltrans according to Housner and Thiel (1995). To apply the same methodology of seismic hazard analysis to all different applications may be possible, but it may not be economic. The question of an economic seismic hazard analysis is an important one, because clients wishing to find a solution to their design or other problems do not want to spend money and time excessively to obtain hazard analysis results. We formulate here a general principle: seismic hazard analysis should be performed in a way which minimizes the effort needed to obtain the results requested by the client. The kind of results depends on the application. Therefore, seismic hazard analysis methods should be commensurate to their applications.

Another class of applications is related to risk analysis. In general, for seismic hazard analysis we have to distinguish between two types of risk analysis:

- financial risk analysis for estimating capital and life losses caused by earthquakes (risk insurance problem; see, for example, Bendimerad, 2001), and
- technical risk analysis evaluating the risk associated with the operation of a critical infrastructure with respect to a possible environmental impact.

Although both applications have some common roots, at least the mathematical basis (the theory of probability), the scale of the problems to be covered is different. While insurance companies typically deal with the insurance of large dwelling areas and sometimes of industrial facilities of diverse types, their main concern consists in the limitation of capital losses. Technical risk analysis aims to evaluate the risk of a given plant with respect to consequences to the environment. Therefore, technical risk analysis is much more focused on a specific plant, while financial risk analysis (insurance risk) has to consider a large variety of different technical facilities. Once again, using the same methods of seismic hazard analysis for both types of risk analysis may not be economic. While for the design of dwelling buildings and critical infrastructures we may develop a seismic design basis without evaluating the frequency of earthquakes directly, risk analysis requires an evaluation of the frequency of events as well as an assessment of the probability of damage. Nevertheless, even deterministic design methods implicitly consider the frequency or probability of events. This is expressed by terms such as the Maximum Credible Earthquake (MCE) or by the requirement in the IAEA guide (IAEA NS-G-3.3, 2002) that the seismic design of nuclear power plants for safety level 2 (SL2) should be based on a sufficiently rare event.¹ Especially in areas of low seismicity, this requirement leads to the application of probabilistic techniques to define the maximum credible earthquake (Klügel et al., 2006) used in the deterministic seismic hazard analysis. In many cases, it may be more convenient and less laborious to use a probabilistic method of seismic hazard analysis. This allows the use of simple models treating the resulting lack-of-completeness problem as a random one (i.e., an uncertainty problem) instead of developing more realistic or complex models. Such a simplified approach will be beneficial if we have to deal with a large variability of conditions to be considered or objects to be analyzed. The uncertainty reflects the variability of the individual properties of the objects for analysis, which are lumped together into one population (or one class of objects) for simplicity.

¹ The IAEA guide allows the use of probabilistic or deterministic methods for the development of the design basis of a nuclear power plant.

Fig. 1 (left hand side) summarizes this discussion by a generic representation of the dependency of the degree of sophistication of the seismic hazard analysis on the degree of criticality of infrastructures, under the condition that the seismic hazard analysis should be performed in an economic way. Fig. 1 (right hand side) shows, in a similar generic representation, the potential benefit from incorporating probabilistic tools into the analysis in dependence on the variability of conditions to be covered. Starting from a certain point, the benefit from the probabilistic simplification begins to decrease, because the variability of objects analyzed together as part of a common population (or class) increases to the point that the results of a probabilistic treatment become meaningless for practical applications. This occurs if the data for analysis is insufficiently classified, lumping together too many different objects into one class. As a consequence, the relationship between the results of the probabilistic analysis and the performance of a single object within the considered population of objects is lost: the information on the properties of the individual objects.

[Fig. 1. Dependency of the complexity of analysis on the degree of criticality of infrastructure (left) and dependency of the potential benefit from probabilistic modeling on the degree of variability of analysis conditions (right).]

From the above we can draw simple conclusions for an economic seismic hazard analysis:

- For a short-lived critical infrastructure the most appropriate seismic hazard analysis method is a detailed deterministic analysis, because the gain achievable by using probabilistic methods is low. There is no trade-off between the simplification of our structural models and the efficiency of analysis. In other words, the use of probabilistic methods simply leads to a lack of information on the behavior of our object. We lose the information on its individual properties, but this is just the information we want to keep to obtain meaningful and realistic results for developing an economic design.

- With increasing lifetime of structures our ability to model their behavior is reduced. The simple reason is that our capability to make accurate predictions into the far future is limited. We face a lack-of-knowledge problem, which we somehow need to address. We have to deal with large uncertainties. Because they are of the lack-of-knowledge type, these uncertainties are called epistemic uncertainties. It is worth remembering that in the final end all uncertainties are fundamentally epistemic, because all our statements, propositions and analyses are conditioned on the knowledge available at the time when we make or perform them. Probability is supposedly a direct quantitative measure of this type of prediction uncertainty. Therefore, we obtain some benefit from treating the problem probabilistically (or by using fuzzy set mathematics). Accordingly, for long-lived critical infrastructures the incorporation of probabilistic methods for the treatment of uncertainties makes some sense. This does not mean that deterministic methods should be abandoned, because a deeper understanding of physical phenomena allows us to reduce the amount of uncertainty which has to be treated as random, as unpredictable in a deterministic sense. With respect to design problems it still remains a meaningful option to deal with uncertainties by incorporating safety factors into a deterministic seismic hazard analysis. The difficulty is that the uncertainties are then not explicitly quantified. An alternative is to use a fraction of the standard deviation of the hazard estimate to arrive at a reasonable design load. However, the approach used for seismic hazard analysis should be sufficiently sophisticated and able to cope with the indeterminate uncertainties. The methods best suited for this type of application are sometimes called (probabilistic or deterministic) modeling seismic hazard analysis (Modeling SHA).

- With respect to general (non-critical) applications, such as the seismic design of dwelling buildings in a seismically active region, or to objects which show a significant spatial extent, the effort to perform a complex deterministic analysis becomes increasingly greater. The spatial scale and/or the variability of conditions to be considered are large. This means that the incorporation of probabilistic models based on a simplified deterministic representation of the behavior of objects during earthquakes may make some sense, because it reduces the analysis effort. The use of deterministic methods in the format of deterministic seismic hazard maps based on more generic models is a feasible alternative, too. The problem of the variability of conditions to be considered is then solved by making adjustments from experience or empirical results (e.g., using information from historical seismicity). The usual solution for deterministic seismic hazard analysis consists again in the incorporation of safety factors. In general, if both analysis techniques are performed adequately, they should result in a similar seismic design basis.

- For risk analysis (technical or insurance risk) it is unavoidable to use probabilistic methods; otherwise it is not possible to quantify risk. Again, there are differences between applications. For technical risk analysis the level of complexity and detail should be significantly higher than for the evaluation of insurance risk for buildings and constructions covering a large area. So it is reasonable to suggest that for technical risk analysis an informed probabilistic analysis method should be used, which to a very large extent should be based on an in-depth deterministic analysis. Otherwise, we would not be able to capture the individual features of the object of investigation.

3. Additional performance criteria for the selection of seismic hazard analysis method

Besides the principle of economy to be fulfilled in seismic hazard analysis, there are additional requirements that any method should meet.

3.1. Requirement of conservative realism

The results of a seismic hazard analysis must be realistic and reflect the specific site conditions, both with respect to seismic activity and to site ground conditions, to the extent required by the application. Excessive conservatism must be avoided both for design applications and for risk analysis, to ensure informed decision making.

    3.2. Requirement of validation of results. Empirical control

The results of a seismic hazard analysis must be validated as far as possible. A validation is possible based on formalized mathematical procedures (e.g., generation of synthetic catalogues) as well as on the basis of comparison with real world observations. This may require the development of a special validation procedure, including tests which can be judged by engineering common sense. The validation supports the transparency of the results to the user.
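As a minimal illustration of the synthetic catalogue idea mentioned above (not taken from this paper; all parameter values are invented), the following sketch draws a synthetic catalogue from a Poisson occurrence model with truncated Gutenberg-Richter magnitudes and checks a predicted exceedance rate against its simulated counterpart:

```python
import numpy as np

# Hypothetical occurrence model (parameter values invented for illustration)
rate = 0.5                        # annual rate of events with m >= m_min
m_min, m_max, b = 4.0, 7.5, 1.0   # magnitude bounds and G-R b-value
years = 10_000                    # length of the synthetic catalogue
rng = np.random.default_rng(0)

beta = b * np.log(10.0)
c = 1.0 - np.exp(-beta * (m_max - m_min))

# Poisson number of events; truncated G-R magnitudes via the inverse CDF
n = rng.poisson(rate * years)
m = m_min - np.log(1.0 - rng.uniform(size=n) * c) / beta

# Empirical control: synthetic exceedance rate versus the model value
m_test = 6.0
model = rate * (np.exp(-beta * (m_test - m_min)) - np.exp(-beta * (m_max - m_min))) / c
synthetic = np.count_nonzero(m >= m_test) / years
print(f"annual rate of M >= {m_test}: model {model:.5f}, synthetic {synthetic:.5f}")
```

The same comparison, run against a real catalogue rather than a simulated one, is the kind of empirical control referred to above.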

    3.3. Requirement of robustness or time-invariance

The results of a seismic hazard analysis must be robust, meaning that they do not change significantly during the design life of the infrastructure concerned. A consequence of this requirement is that different methodologies for seismic hazard analysis may have to be used for infrastructures with different lifetimes, as discussed in Section 2.

3.4. Requirement of the minimization of interface issues between seismic hazard analysis and the intended use of the results

The results of a seismic hazard analysis must be presented in a format which is directly applicable for the intended use (design analysis or risk assessment). Typically this means that the output of a seismic hazard analysis shall be presented in the format of the required input parameters of the subsequent design or risk analysis.

    3.5. Requirement of traceability and logical consistency

The analysis must be documented in a way that makes it possible for the user to identify the key assumptions made in the analysis and their impact on the results. The analysis must be performed in a way consistent with its assumptions (modeling consistency). The best way of meeting the traceability requirement consists in keeping the analysis as simple as possible. This in general corresponds to the principle of economy discussed in Section 2.

Practical experience has shown that simplicity and flawlessness of the analysis are closely related.

    4. Overview on approaches to seismic hazard analysis

In this section, I provide an overview of the different approaches to seismic hazard analysis, emphasizing their characteristic features. A common part of any meaningful seismic hazard analysis approach consists in collecting the appropriate input information. The IAEA Safety Guide on the Evaluation of Seismic Hazards for Nuclear Power Plants (IAEA, 2002) denotes this step as the preparation of a geological, geophysical, geotechnical and seismological database. The safety guide provides guidance on what type of information should be considered and evaluated. The scope of the analysis to be performed again depends on the application. While for general purpose applications, such as the development of a residential area, it may be sufficient to review the available information provided by national seismological services, a more comprehensive approach is requested for critical infrastructures. As a part of the preparatory analysis, a seismotectonic model of the region of interest is developed (or adopted from available information). The geotechnical information is required to characterize the site conditions and to derive their potential effects. As has been well understood since the Mexico earthquake of 1985, soil characteristics near the surface, as well as the depth of soil sediments above hard bedrock, can significantly affect seismic wave propagation and its potential amplification. Improved modeling techniques have shown that effects such as wave interference, wave trapping and nonlinear soil behavior represent significantly more complex phenomena than can be captured by just a simple soil amplification factor. A factor frequently underestimated by some seismologists is that the geotechnical information, and especially the spatial variability of soil characteristics, has a significant impact on soil structure interaction (SSI). Soil structure interaction can significantly affect the ground motion characteristics as observed at the site near massive structures.

    4.1. Deterministic seismic hazard analysis

The term deterministic seismic hazard analysis is frequently used with different meanings: in a narrow sense, defining a very specific seismic hazard analysis procedure as outlined in national or international regulations (e.g., NRC RG 1.60 (NRC, 1973), KTA rule 2201.1 or IAEA NS-G-3.3, 2002, to mention some regulations in the area of nuclear energy), and in a broader sense as defined by Klügel et al. (2006): deterministic seismic hazard analysis is called deterministic because it is based on facts, data and physical models describing the behavior of earthquakes. Statistical (probabilistic) techniques that are based on data analysis are a natural part of this type of deterministic analysis. I am going to use the term in the broader sense of the definition if not indicated otherwise. It is also possible to define deterministic seismic hazard analysis by what distinguishes it from a probabilistic approach: deterministic analysis does not try to quantify the frequency of earthquakes, and it focuses on credible critical earthquake scenarios enveloping the effects of smaller events because of their higher energy content and the higher input energy which these events can impart into a structure. Due to the low number of scenarios to be considered, it is possible to study them in detail. This allows the use of more sophisticated analysis methods for the application considered. It is worth underlining that the use of deterministic methods does not imply the suggestion that the world is deterministic in the Laplacian sense, as indicated by McGuire and Cornell (2005, p. 881) or by Woo (1999) and Atkinson (2004). In the sense of the broader definition of deterministic seismic hazard analysis, a stochastic process model can be regarded as a deterministic model of the world (SSHAC, 1997; Wen et al., 2003). The difference to a probabilistic approach consists in the different, more implicit treatment of uncertainties and randomness by developing credible earthquake scenarios corresponding to engineering experience. It is very easy to show that the assumptions made by some of the key proponents of the deterministic method (Krinitzsky, 2002) have a sound quantitative equivalent in the theory of probability and statistics. For example, the frequently used assumption to define the hazard ground motion at the confidence level ε = 1 (see Eq. (2) in Section 4.2) has its basis in Chebyshev's inequality:

\Pr\{ |X - f| \geq k\sigma \} \leq \frac{1}{k^2} \qquad (1)

Here X is a random variable with the mean f (represented by the function f in Eq. (2); see Klügel, 2007a) and standard deviation σ. Eq. (1) simply says that the main bulk of earthquake ground motion recordings will lie within an interval defined as (f − σ, f + σ).
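As a numerical side note (not part of the original text), the distribution-free bound of Eq. (1) can be compared with the exceedance probability obtained when the residual is additionally assumed to be normal, as in the regression context of Eq. (2):

```python
from scipy.stats import norm

# Chebyshev's bound from Eq. (1) versus the exact exceedance probability
# when the regression residual is additionally assumed to be normal.
for k in (1.0, 1.5, 2.0, 3.0):
    chebyshev = min(1.0, 1.0 / k ** 2)  # distribution-free upper bound
    normal = 2.0 * norm.sf(k)           # P(|X - f| >= k*sigma) for normal X
    print(f"k = {k:.1f}: Chebyshev bound {chebyshev:.3f}, normal model {normal:.3f}")
```

The Chebyshev bound is conservative by construction; under the normal model the interval (f − σ, f + σ) contains about 68% of the recordings.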

Traditionally (Reiter, 1990), deterministic seismic hazard analysis is represented as consisting of four basic steps:

- definition of an earthquake source or sources,
- selection of the controlling earthquake,
- determination of the earthquake effect (usually some type of ground motion at the site),
- determination of the hazard at the site (the output of step 3).

Reiter (1990) acknowledged the possibility of iteration between steps 2 and 3 to select the controlling earthquake based upon its resulting in the largest ground motion at the site. Deterministic scenario-based methods (Klügel et al., 2006) have refined this approach:

- characterization of seismic sources with respect to seismic capacity/potential and location,
- selection of parameter(s) to characterize the impact of an earthquake on the infrastructure,
- development of an attenuation model to derive the values of the selected parameter(s) at the site,
- incorporation of site effects, and of near-field and potential directivity/focusing factors,
- definition of the scenario earthquake(s).

The difference to the presentation by Reiter seems to be small, but it is not unimportant. The selection procedure is in general an iterative one, requiring screening and/or additional sensitivity analysis for the selection of the final set of scenarios (a minimal sketch of such a screening loop is given below). Furthermore, the hazard parameter to be used has to be defined as a part of the procedure. The parameter(s) selected depend on the application and have to consider the requirements of the user. This requires user involvement in the hazard analysis. Furthermore, modern scenario-based seismic hazard analysis may provide its results either as an envelope of several scenario events (similar to the traditional Newmark-Hall approach) or as a set of several controlling events reflecting possibly different contents of high frequency energy in a spectral acceleration response spectrum (e.g., distinguishing a far-field and a near-field controlling event, Krinitzsky et al., 1993). Fig. 2, showing the general work-flow, is taken from Klügel et al. (2006).
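The screening loop referred to above can be sketched as follows; this is an illustration only, not the procedure of Klügel et al. (2006), and the scenario catalogue and attenuation coefficients are invented:

```python
import math

# Hypothetical scenario catalogue: (name, magnitude, shortest distance in km)
scenarios = [
    ("fault A", 6.5, 12.0),
    ("fault B", 7.2, 45.0),
    ("areal background", 5.5, 5.0),
]

def ln_sa(m, r_km):
    """Toy attenuation relation ln Sa = f(m, r); coefficients are invented."""
    return -3.5 + 1.0 * m - 1.5 * math.log(r_km + 10.0)

# Screening: evaluate the hazard parameter at the site for every candidate
# scenario and rank the events; the top entry is the controlling event.
ranked = sorted(scenarios, key=lambda s: ln_sa(s[1], s[2]), reverse=True)
for name, m, r in ranked:
    print(f"{name:>16}: M={m}, R={r} km, Sa={math.exp(ln_sa(m, r)):.3f} g")
print("controlling event:", ranked[0][0])
```

In a real analysis the ranking parameter, the attenuation model and the treatment of site effects all follow from the preceding steps of the list, and the loop is repeated after each sensitivity study.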

A further refinement step of the deterministic scenario-based method consists in the direct use of different wave form modeling methods instead of empirical attenuation relations or point source stochastic models. This approach is sometimes called modeling seismic hazard analysis (Modeling SHA). The results of modeling SHA can be represented directly by a large set of time histories (a set of several thousands is feasible with the help of modern computer technology) reflecting the possible variability (the scatter) of earthquake time histories caused by the controlling earthquake events. Of course, the effort to perform such an analysis is significantly greater than for other deterministic methods. Nevertheless, for critical infrastructures, especially those with a long lifetime, the effort might be justified. Care must be taken (and this is from experience of using phenomenological models in other areas of risk analysis, for example in thermo-hydraulics) that the models used result in realistic time histories. This can be established by comparing modeling results with the wave forms of recorded earthquakes from the same region or from areas with similar seismotectonic conditions and seismogenic features.

In Klügel et al. (2006) it was shown that scenario-based methods (and this is valid for modeling SHA as well) can be expanded for use in risk analysis by defining the frequency of scenario events for a set of discrete magnitude ranges using modern tools of data analysis (see Section 4.3).

It is interesting to note that deterministic seismic hazard analysis has been the most widespread analysis method used to develop the seismic design basis for critical infrastructures. For example, deterministic seismic hazard analysis methods were used for the seismic design of the operating nuclear power plants in the U.S.A., France, Sweden, Germany, Switzerland and Japan. Deterministic seismic hazard analysis was used by Caltrans to develop the seismic design for critical infrastructures in California, based on deterministic seismic hazard maps (Mualchin, 1996). Infrastructures upgraded according to this procedure were found to have performed well during the 1994 Northridge earthquake (Housner and Thiel, 1995). The actual European utility requirements for new nuclear power plants are based on a deterministic seismic hazard analysis.

[Fig. 2. Work-flow for the selection of scenario earthquakes (from Klügel et al., 2006).]

Probabilistic methods were implemented into regulations at a later stage. They are used mainly for risk analysis purposes or to verify the deterministic seismic design basis by a second method. The current IAEA safety guide (IAEA, 2002) allows the use of either probabilistic or deterministic methods for the development of the design basis. The seismic design of dams is based on the scenario-based approach. Deterministic seismic hazard analysis methods (modeling SHA based on synthetic seismograms) have been successfully applied to develop deterministic seismic hazard maps, proving their predictive capabilities wherever new strong earthquakes have occurred (Aoudia et al., 2000; Orozova-Stanishkova et al., 1996; Parvez et al., 2003). In this approach the spatial maximum magnitude distribution required as an input is derived from an actual earthquake catalogue. Because only the information on extreme events is required, the deterministic approach is significantly less vulnerable to the question of catalogue completeness than approaches which are based on the Gutenberg-Richter equation. Uncertainties in the definition of the location of historical events can be incorporated, too (Kronrod et al., 2001). Unfortunately, the new developments of deterministic seismic hazard analysis methods have not found their way into an update of current regulations. Deterministic seismic hazard analysis methods in current regulations still reflect the state of the art of the early eighties of the past century, and not the recent advances in the deterministic method.

4.2. Strengths and weaknesses of the deterministic seismic hazard analysis

4.2.1. General discussion

The main strength of deterministic seismic hazard analysis is expressed directly by the definition introduced in Section 4.1. Deterministic seismic hazard analysis attempts to make use of the available information on earthquakes and their properties to the maximal extent. It is based on facts, data and physical models which can be compared with empirical information and can therefore be validated. Contemporary deterministic seismic hazard analysis aims to use methods which reflect our current phenomenological understanding of earthquakes. Deterministic seismic hazard analysis is very transparent. Seismic sources as well as (controlling) scenario earthquakes can be traced directly to historical or instrumental observations or to findings from geological and paleoseismological investigations.

The adverse side of this strength consists in an increasing analysis effort as soon as modeling techniques are applied to evaluate ground motions at the site of interest. But this problem can easily be resolved by adjusting the method to the needs of the application. For less critical applications the effort can be reduced, for example by directly using the insights from a regional deterministic seismic hazard map. For critical applications a site specific investigation is mandatory, but here the effort is certainly not higher than for a large scale probabilistic study.

4.2.2. Discussion of conservatism of deterministic seismic hazard analysis

Deterministic seismic hazard analyses have been blamed for being too conservative as well as too optimistic. The most curious argument in this area is the frequently told story about the discovery of the offshore Hosgri fault near the construction site of the nuclear power plant Diablo Canyon in California. The geological and seismological site investigation for Diablo Canyon stopped at the shore of the Pacific Ocean. Therefore, the capable Hosgri fault was originally not considered as a seismic source for Diablo Canyon. The incorporation of this fault into a deterministic seismic hazard analysis, as defined at that time by US NRC regulations, would have had a tremendous impact on the design basis of the plant, devaluating the original design work and already erected constructions. The reason for this consists in the following conservative assumptions made in deterministic seismic hazard analysis at that time:

- For the calculation of ground motion at the site by using an attenuation relation, it was required to use the shortest distance between the fault and the site of interest, which for Diablo Canyon amounts to only about 5 km.

- The deterministic hazard analysis procedure required the design basis hazard for the safe shutdown earthquake to be defined at a confidence level of ε = 1 above the regression mean (the attenuation equation). I refer here to the following simple form of an attenuation equation:

\ln S_a = f(m, r, X_i) + \varepsilon\sigma \qquad (2)

Here S_a is the spectral acceleration, f is a function derived from nonlinear regression of registered earthquake time histories (the ground motion model), m is the magnitude, r is the distance to the site, and the parameters X_i represent additional explanatory variables which may have been incorporated into the ground motion model f. Considering the possible large magnitude (maximum credible magnitude) associated with the Hosgri fault (approximately 7.5) and the short distance to the site, the deterministic procedure would have resulted in a very high seismic hazard at the site. Reiter (1990, p. 164) reports that the use of a peak ground acceleration of 1.25 g was seriously discussed, based on recordings at Pacoima Dam during the 1971 San Fernando earthquake. It was argued by earthquake engineers that the parameter specified for design purposes should be an effective acceleration, not a spike peak ground acceleration, because peaks, when associated with isolated, non-repetitive high frequency motion, have little or no impact on large engineered structures. Such an effective acceleration was defined for the Diablo Canyon plant to be 0.75 g. Later it was concluded that deterministic seismic hazard analysis is an inappropriate seismic hazard procedure, because it had missed the Hosgri fault. Therefore, the deterministic seismic hazard analysis was abandoned by the US NRC (NRC RG 1.165, 1997; Braverman et al., 2007) in favor of the Cornell-McGuire (Cornell, 1968; McGuire, 1976) approach to PSHA. It is obvious that this conclusion was not justified. An insufficient site investigation will affect any type of seismic hazard analysis; PSHA cannot correctly incorporate an unknown source either. Note that in a PSHA the contribution of the deterministic maximum credible earthquake (a controlling event) shows up in the uniform hazard spectrum (UHS) only in a weighted form, multiplied by an estimate of its frequency of occurrence (which is significantly smaller than 1). The problem is that if the MCE event at the Hosgri fault (at the originally unknown source) occurs, the uniform hazard spectrum used for the design of the Diablo Canyon plant will most likely be exceeded for some spectral frequencies, because then the weight, the frequency of occurrence, equals 1. It has to be mentioned here that a large effort was made to improve the seismic load capacity of the safety-related structures of Diablo Canyon. However, after the Chuetsu-Oki earthquake in Japan affecting the nuclear power plant at the Kashiwazaki site, the question may have to be revisited. This example shows that the deterministic seismic hazard analysis still found in the regulations may be defined as conservative in comparison with the results of a PSHA (performed for a certain probability of exceedance) and at the same time argued to be non-conservative (inappropriate) by proponents of the PSHA methodology. This discussion also illustrates one of the problems of deterministic seismic hazard analysis, namely that the lifetime of the infrastructure is not considered. The discussed MCE from the Hosgri fault may never occur during the lifetime of the Diablo Canyon plant, and designing the plant to withstand such an earthquake with an additional safety margin of +1σ most certainly cost a lot of money. The trade-off for the savings in capital costs is the risk associated with the possibility that the maximum credible earthquake from the Hosgri fault may occur during the residual lifetime of the Diablo Canyon nuclear power plant. At the same time, it may happen at any moment, because the timing of any earthquake is indeterminate.

From the discussion we can derive the following conservative elements of deterministic seismic hazard analysis as still defined in the regulations:

- the selection of a relatively high magnitude value for the controlling event (a credible, not a worst case value),
- the selection of the shortest distance between the seismic source and the site (the fault, or the shortest distance between the boundary of an areal source and the site); it is frequently claimed for the site area that the MCE should be located directly beneath the site of interest (IAEA, 2002),
- the selection of the ground motion at the site at the ε = 1 confidence level in Eq. (2).

It is worth commenting on the selection of the confidence level in the deterministic seismic hazard analysis method. This selection is the result of an engineering combination of two empirical observations:

- the comparison of observed earthquake recordings with simple ground motion models such as Eq. (2) (frequently only two parameters, magnitude and distance, were used) shows some significant scatter of ground motion (Krinitzsky et al., 1993),
- the strong motion duration for a given content of elastic energy of an earthquake, measured by the Arias intensity (Arias, 1970), is nearly inversely proportional to the observed peak ground acceleration (Vanmarcke and Lai, 1980).
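For reference (the definition is not spelled out in the text above), the Arias intensity of an accelerogram a(t) of total duration T is

I_A = \frac{\pi}{2g} \int_0^{T} a(t)^2 \, dt

so that, for a fixed energy content I_A, larger peak accelerations tend to go together with shorter strong motion durations, which is the second observation in the list.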

Because strong motion duration is an important characteristic of the damaging potential of earthquakes, it does not make much sense to increase the confidence level to a value higher than ε = 1 while keeping the same strong motion duration (the mean value from empirical correlations) for the obtained level of ground accelerations (as is done in the engineering structural analysis subsequent to the seismic hazard analysis by developing spectrum compatible time histories, Bommer et al., 2000). A further increase would lead to a significant overestimation of the energy content transmitted to the structure in comparison with empirical observations. Using a confidence level of ε = 1 (84th percentile estimate) and developing spectrum compatible time histories without reducing the strong motion duration (e.g., using the mean value from empirical observations) provides an intended safety factor against the observed data scatter. It can easily be demonstrated that this approach leads to a very large safety factor. Let us assume that Eq. (2) was derived by a nonlinear regression analysis from empirical data (from n recorded earthquake time histories). For significantly large values of n, the probability distribution of ε can be approximated by a normal distribution with a zero mean and a standard deviation σ. Now let us assume that σ is constant and does not depend on the explanatory variables used in our model f. This is the general assumption in probabilistic seismic hazard analysis, although this assumption is not correct (see discussion in Section 4.4). For fixed values of m_0 and r_0, the spectral acceleration S_a has a lognormal distribution with the expected value:

E[S_a \mid m_0, r_0] = \exp\left( f(m_0, r_0) + \sigma^2/2 \right) \qquad (3)

The spectral acceleration selected according to the deterministic seismic hazard analysis procedure corresponds to an 84th percentile and is calculated as:

P_{84}[S_a \mid m_0, r_0] = \exp\left( f(m_0, r_0) + \sigma \right) \qquad (4)

By equating the right hand sides of Eqs. (3) and (4) we can calculate, from the resulting quadratic equation, the values of σ for which the expected value of the spectral acceleration (the mean hazard in PSHA) intersects with the 84th percentile. The solution is obvious: σ = 0 (the trivial case, no uncertainties) and σ = 2, the relevant solution. This means that only for a standard deviation (characterizing ground motion variability) larger than or equal to 2 should the mean hazard according to the probabilistic model exceed the deterministic ground motion defined at the ε = 1 confidence level (84th percentile). As is known from many empirical investigations, the observed value of the ground motion variability depends on the spectral frequency and amounts to between 0.65 and 0.7. Nevertheless, it is observed that the mean seismic hazard according to PSHA results intersects the 84th percentile hazard (Savy et al., 2002; BECHTEL/SAIC, 2004; Abrahamson et al., 2004; Zuidema, 2006). This means that these studies consider a ground motion variability higher than 2. To illustrate this by another comparison, a ground motion variability of σ = 0.7 approximately corresponds to the linguistic statement (frequently used by experts in the PEGASOS study, Abrahamson et al., 2004; Zuidema, 2006) of an accuracy of ground motion predictions within a factor of 2 for comparable site conditions and given magnitude and distance (or one intensity unit on the MCS scale). A ground motion variability of σ = 2 corresponds to a factor of 7.4 (or nearly 3 intensity units on the MCS scale).
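The crossover at σ = 2 and the factor comparisons above are easy to verify numerically; the following minimal sketch evaluates Eqs. (3) and (4) for a unit median, i.e., f(m_0, r_0) = 0 (illustrative only):

```python
import math

# Evaluate Eqs. (3) and (4) for a lognormal Sa with median 1 (f(m0, r0) = 0).
for sigma in (0.65, 0.7, 2.0):
    mean = math.exp(sigma ** 2 / 2.0)  # Eq. (3): mean of the lognormal
    p84 = math.exp(sigma)              # Eq. (4): 84th percentile
    print(f"sigma={sigma:4.2f}: mean = {mean:4.2f} x median, "
          f"84th percentile = {p84:4.2f} x median")
```

For σ = 0.7 the 84th percentile lies a factor of about 2 above the median while the mean lies only a factor of about 1.3 above it; at σ = 2 both equal e² ≈ 7.4 times the median, which is the crossover derived above.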

It is worth mentioning that other, less conservative methods for incorporating a meaningful safety factor into deterministic seismic hazard analysis have been applied. One of them consists in the selection of the most conservative attenuation model supported by data from the region of interest and/or in developing a design basis response spectrum enveloping a larger set of controlling earthquake events.

The replacement of empirical attenuation models by the direct use of time histories in modeling SHA has the potential to eliminate the need for the implementation of safety factors, because the variability (the data scatter) of ground motion is captured directly in the large number of time histories considered for the later structural design.

4.2.3. Evaluation of the location of anticipated earthquakes

The definition of the location of anticipated earthquakes is an important step in any seismic hazard analysis method. In combination with the use of simple amplitude decay attenuation equations, the definition of the location of the scenario earthquakes can have a decisive impact on the hazard estimates and can constitute a significant source of conservatism. In general, the methods used in deterministic seismic hazard analysis (and in some probabilistic analysis methods as well) for defining the location of earthquakes can be subdivided into informative and non-informative methods. This classification depends on the amount of use made of available information regarding seismicity in the region of interest. The terminology, informative and non-informative, also has its equivalent in the theory of probability in the use of informative or non-informative distributions for the locations of earthquakes. Non-informative methods are still more popular than informative methods. For known faults it is usually assumed that the location of the next earthquake will be situated at the shortest distance to the site. This is a non-informative approach because it implicitly assumes a constrained non-informative² (Atwood, 1996) spatial distribution of earthquakes. For example, a Beta-distribution with parameters α and β smaller than 1 for the distance between any possible location along the fault and the site represents a constrained non-informative prior. The modal values of this distribution correspond to the shortest and to the largest distance from the fault to the site. From these possible alternatives the more conservative one, the shortest distance, is selected. Similarly, for areal sources the shortest distance between the boundary of the source area and the site is selected. Such a definition is conservative as long as simple amplitude-decay models are used to describe seismic wave attenuation.

² A non-informative distribution is a distribution which maximizes entropy (and therefore minimizes information) under the given boundary conditions. Entropy can be measured, for example, by Shannon's entropy; information can be measured by Fisher's information.
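The bimodality of such a constrained non-informative prior is easy to visualize; a minimal sketch (illustrative only), assuming a Beta(0.5, 0.5) distribution over the normalized fault-to-site distance:

```python
import numpy as np
from scipy.stats import beta

# Constrained non-informative prior over the normalized source-to-site
# distance (0 = shortest, 1 = largest possible distance along the fault).
a, b = 0.5, 0.5  # both parameters < 1 -> U-shaped, bimodal at the endpoints
x = np.linspace(0.05, 0.95, 10)
for xi, pdf in zip(x, beta.pdf(x, a, b)):
    print(f"normalized distance {xi:4.2f}: density {pdf:5.2f}")
# The density rises toward both endpoints; the conservative engineering
# choice takes the mode at the shortest distance.
```

Setting both parameters to 1 instead recovers the uniform distribution discussed in the next paragraphs.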

An alternative non-informative method for defining the location of earthquakes was proposed by Klügel (2005a). He assumed that the site of interest is located just at the boundary of the rupture plane of the considered undetectable earthquake and calculated the resulting response spectrum using an empirical attenuation equation with half of the fault rupture length (length-to-width scaling 1:1) as the distance between the site and the earthquake location. This approximately corresponds to the assumption of a uniform distribution for the location of the earthquake epicenter within the boundaries of the rupture plane. This method can also be used to develop a completely non-informative design basis hazard by evaluating bounding response spectra. These spectra are based on regionally validated correlations for the relationship between fault rupture length and magnitude and for the attenuation of seismic waves, and they represent an envelope over all possible magnitude values. Although such an approach does not seem to make much sense at first glance (indeed, very limited information on the seismicity of the area is used), it can be used as a tool to measure the performance of the results of other site specific seismic hazard studies. It allows a judgment regarding how much the client of a seismic hazard analysis project gains in information by ordering the use of more sophisticated analysis methods.

An alternative non-informed approach for the definition of the location of earthquakes considers the assumption of a uniform spatial distribution of seismicity within an areal source. This approach is frequently used in PSHA. This assumption is also used in the development of deterministic seismic hazard maps with respect to the location of the seismic source within a mesh grid cell (e.g., the source-receiver model in Parvez et al., 2003). The uniform distribution is a special case of the Beta-distribution with the parameters α and β equal to 1. This hypothesis on the spatial distribution of earthquake location results in a location of the earthquake in the centre of the areal source, which corresponds to the expected value of the uniform distribution.

Informative approaches for defining earthquake locations in areas with diffuse seismicity rely on statistical models for the description of the spatial variability of seismicity, based on different types of kernel estimations of the activity rate density. Such approaches can very well be applied to describe the spatial distribution within a seismotectonic province or areal source and represent more than merely a zoneless alternative for representing seismic sources, as originally suggested by Woo (1996). Another, essentially nonparametric approach was suggested by Habenberger et al. (2004). They directly use the spatial distribution of known epicentres for deriving the spatial density distribution by constructing Voronoi polygons around each (single) known epicentre. This can be represented in a Voronoi diagram. The spatial density of seismicity is then assumed to be inversely proportional to the areas of the polygons constructed around the epicentres.
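A minimal sketch of the Voronoi idea just described (the epicentre coordinates are invented, and unbounded boundary cells are simply skipped here rather than clipped to the source region, as a real implementation would do):

```python
import numpy as np
from scipy.spatial import Voronoi

# Invented epicentre coordinates (km) within a region of diffuse seismicity
epicentres = np.array([[10, 12], [14, 9], [22, 30], [25, 27], [40, 15],
                       [8, 40], [33, 33], [18, 18], [27, 8], [36, 24]])
vor = Voronoi(epicentres)

def polygon_area(pts):
    """Shoelace formula for the area of a simple polygon."""
    x, y = pts[:, 0], pts[:, 1]
    return 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

for i, region_idx in enumerate(vor.point_region):
    region = vor.regions[region_idx]
    if not region or -1 in region:   # unbounded boundary cell: skip (clip in practice)
        continue
    area = polygon_area(vor.vertices[region])
    print(f"epicentre {epicentres[i]}: cell area {area:7.2f} km^2, "
          f"relative density {1.0 / area:.4f}")
```

Small Voronoi cells (dense clusters of epicentres) thus translate into high activity rate density, in line with the inverse proportionality assumed by Habenberger et al. (2004).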

Other smoothing techniques based on historically observed seismicity are available, too (Frankel et al., 1996). Although kernel estimation techniques have been developed in conjunction with the development of probabilistic seismic hazard analysis, there is no restriction against using them in a deterministic seismic hazard analysis. The location selected for the scenario earthquake should correspond to the expected value of the derived spatial distribution of seismicity within the areal source or seismotectonic province.

It is interesting to note that key proponents of deterministic seismic hazard analysis do not require the location of an earthquake to be assumed directly beneath the site, except for the case that there is evidence of a capable fault underneath the site. Such conservative assumptions have been introduced by regulators to increase the degree of conservatism of the analysis (IAEA, 2002), or they have been inherited from probabilistic seismic hazard analysis which, beginning with Cornell (1968), separated the size distribution of earthquakes (the magnitude-frequency relationship) from the spatial distribution of earthquake occurrence. Such a simplified approach to the definition of the location of earthquakes contradicts the basic concept of deterministic seismic hazard analysis (to be based on facts, data and physical models). Deterministic seismic hazard analysis only requires the consideration of a sufficiently strong background source defining a minimal design against earthquakes (Anderson, 1997).

4.2.4. Strength of the background source

The problem of definition of the strength of the background source is solved by incorporating an additional constraint into the analysis: an estimate of the upper value of magnitude or intensity of the near site earthquake. For this purpose, it can be assumed that the fault causing the maximum credible near field earthquake is located beneath the site, and its magnitude corresponds to the resolution limits of the currently available detection methods. It is usually assumed that such faults will not lead to surface rupture, because otherwise their activity would have been detected. The resolution limits for faults are different for interplate and intraplate conditions because of the different scaling laws acting under these different conditions (Scholz, 2002, Table 4.1). The definition of resolution limits should also consider the resolution limits of historical observations. In countries with a long history of written civilization this limit corresponds to a maximal epicentral intensity (MMI) of VII or VIII. From geological, geomorphologic and paleoseismological evidence the resolution limit for interplate conditions is in the range of a magnitude of 5 to 5.5 (Mw) and for intraplate conditions in the range of a magnitude of 6. These resolution limits define the minimal strength of the background source to be considered in the seismic hazard analysis. The design basis response spectrum can be defined either by the use of a kinematic fault model or by converting intensity into an effective ground acceleration (EGA) of a typical near field earthquake. The IAEA recommendation (IAEA, 2002, Section 5.5) of considering a minimum design level of a horizontal peak ground acceleration of 0.1 g corresponding to the zero period of the design response spectrum for the safety level 2 earthquake (it therefore has the meaning of an effective ground acceleration) is at the lower end of the minimal strength of the background source defined by the resolution limits of currently available detection methods. On the other hand, statistically only a part of the near-site earthquakes with a magnitude Mw of between 5 and 5.5 is able to exceed the damaging threshold of earthquakes defined for nuclear power plants by a critical value of Cumulative Absolute Velocity (CAV) of 0.16 g·s (EPRI, 2005; Klügel, 2006a, 2007b):

\[
\mathrm{CAV} = \sum_{i=1}^{N} H\!\left(\mathrm{pga}_i - 0.025\right) \int_{t_i}^{t_{i+1}} \left| a(t) \right| dt \;\geq\; 0.16\ \mathrm{g\,s} \qquad (5)
\]

Here H represents the Heaviside function (jump function), obtaining the value 1 if the argument of the function is larger than 0; otherwise it obtains the value 0. The peak ground acceleration pga_i in Eq. (5) (the peak within window i) is expressed in units of g. Summation is performed over time windows of 1 s. The Heaviside function term was introduced to eliminate effects of long-lasting coda waves from the analysis.
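A minimal sketch of Eq. (5) in code, assuming an accelerogram given in units of g; the record below is synthetic and the helper name is hypothetical:

```python
import numpy as np

def standardized_cav(accel_g, dt):
    """Standardized CAV in g*s, summed over 1-s windows; only windows whose
    peak acceleration exceeds 0.025 g contribute (the Heaviside gate that
    removes the effect of long-lasting coda waves)."""
    n_per_window = int(round(1.0 / dt))
    n_windows = len(accel_g) // n_per_window
    cav = 0.0
    for i in range(n_windows):
        window = accel_g[i * n_per_window:(i + 1) * n_per_window]
        if np.max(np.abs(window)) > 0.025:        # H(pga_i - 0.025)
            cav += np.sum(np.abs(window)) * dt    # integral of |a(t)| dt
    return cav

dt = 0.01                                          # 100 Hz sampling (assumed)
t = np.arange(0.0, 20.0, dt)
accel = 0.1 * np.exp(-0.2 * t) * np.sin(2.0 * np.pi * 2.0 * t)  # toy record

cav = standardized_cav(accel, dt)
print(f"CAV = {cav:.3f} g*s; damaging threshold exceeded: {cav >= 0.16}")
```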

4.2.5. Summary of the discussion

Summarizing the discussion, we can conclude that modern deterministic seismic hazard analysis possesses the tools to provide meaningful, realistic seismic hazard estimates. Existing regulations definitely need to be adapted to accommodate this significantly improved, state-of-the-art deterministic seismic hazard analysis method.

    4.3. Probabilistic seismic hazard analysis (PSHA)

4.3.1. General definitions and procedures

With respect to probabilistic seismic hazard analysis we face a similar problem as with deterministic seismic hazard analysis. There are many different uses of the term probabilistic seismic hazard analysis. Therefore, we define the term probabilistic seismic hazard analysis in its narrow sense as meaning a specific PSHA model, the Cornell–McGuire model (Cornell, 1968; McGuire, 1976), which found its present culmination point in the SSHAC procedures (SSHAC, 1997) and its most sophisticated application in the Yucca Mountain project study (Stepp et al., 2001; BECHTEL/SAIC, 2004) and the PEGASOS project (Abrahamson et al., 2004; Klügel, 2005a; Zuidema, 2006). The term probabilistic seismic hazard analysis in a broader sense is used for all seismic hazard analysis methods which attempt to define either the frequency of events (of earthquakes) or the frequency of exceedance of secondary parameters, for example specified ground motion levels.

In general, any probabilistic hazard analysis method includes the following steps:

• the definition of earthquake sources (based on the available geological and seismological database and the seismotectonic model),
• the decision on a ground motion parameter, a selection of ground motion parameters or a damage index directly to be used in the study (spectral accelerations, spectral velocities, intensity measures or directly time histories),
• the development of a ground motion attenuation model (the model can be empirical (parametric), phenomenological or a combination of both approaches) for each of the sources (or the assumption to use a model for a larger region),
• the decision on a probabilistic model to reflect either the frequency of earthquake occurrences or to describe the probability of exceedance of secondary parameters, defined as certain levels of the ground motion parameter(s) selected for the study,
• the development of scenario-earthquakes either by disaggregation of a uniform hazard spectrum (SSHAC, 1997; NRC RG 1.165) or of a uniform confidence spectrum (IAEA, 2002) into scenarios or by directly using the information on seismic sources (Ishikawa and Kameda, 1988; Klügel et al., 2006).

Fig. 3 shows the typical workflow for a modern PSHA following the Cornell–McGuire model. Step 1 is dedicated to the definition of earthquake sources. Step 2 is the definition of seismicity recurrence characteristics for each source. The SSHAC model (SSHAC, 1997) prefers the use of the exponentially truncated Gutenberg–Richter relation, although other models can be used, too. Step 3 comprises the development of a ground motion model including the treatment of ground motion variability.

Step 4 includes the development of uniform hazard spectra for different probabilities of exceedance and the development of hazard curves. Step 5 involves the disaggregation of the hazard to develop controlling scenario events defined by magnitude–distance pairs.

The key difference to the probabilistic scenario-based approach suggested by Klügel et al. (2006) consists in the definition of the scenario earthquakes. In the PSHA model the scenarios are merely mathematical artifacts which result directly from the simplified physical models used as the basis of PSHA. In the scenario-based approach (Klügel et al., 2006) the scenarios have a clear physical basis.

Additionally, a decision should be made on how to deal with uncertainties in risk analysis. In general this leads to the questions put forward by Aven (2003):

• How do we express risk and uncertainty?
• How do we understand probabilities?
• How do we understand and use models?
• How do we understand and use parametric distribution classes and parameters?
• How do we use historical data and expert opinions?

This point is closely linked to the question which decisions shall be made on the basis of the study results, which depends on the application and on how informed the decision making shall be. In engineering (design) we require an informed decision making, which implies the use of appropriate models (Krinitzsky, 1998). With respect to important societal or political decisions we may have to use other models, which intentionally are not informative (SSHAC, 1997) and should not be compared with real world data, as expressed by some authors of the SSHAC report (SSHAC, 1997) in a discussion (Budnitz et al., 2005). With respect to how the questions by Aven (2003) are answered, we can distinguish four different schools (Aven, 2003):

• the classical approach, based on classical statistics using a best estimate interpretation of data and a general, sometimes qualitative, assessment of uncertainty bounds,
• the probability of frequency approach, which is claimed to be the basis of the SSHAC procedures (SSHAC, 1997), expressing uncertainty of the underlying "true" risk numbers by subjective probability distributions,
• the Bayesian approach, and
• a modernization of the Bayesian approach, the "predictive" Bayesian approach suggested by Aven (2003) as a basis for risk-informed decision making.

Fig. 3. Workflow for a modern PSHA study following the Cornell–McGuire model.

Fig. 4. Example (partial) logic tree from the PEGASOS project, subproject 1, expert group EG1a.

PSHA in the narrow sense has evolved from the classical approach to the probability of frequency approach, with the specific feature that the link to the real world is (intentionally?) clipped (Budnitz et al., 2005).

4.3.2. Discussion of the SSHAC-methodology and the PEGASOS-project

Because of its practical importance, the SSHAC-methodology is discussed in more detail. The SSHAC approach (SSHAC, 1997) divides the total uncertainty associated with our predictive capability of (future) earthquake effects into two categories (revised definitions according to Klügel, 2007c):

• Epistemic Uncertainty – Uncertainty attributable to incomplete knowledge about a phenomenon which affects our ability to model it. Epistemic uncertainty can be represented in different ways, for example by a range of viable models, multiple expert interpretations, statistical uncertainty of modeling parameters or a combination of these approaches.
• Aleatory Variability – the residual part of the total uncertainty which is not captured by our model of the world after epistemic uncertainty was explicitly modeled.

The separation between aleatory variability and epistemic uncertainty is model dependent. Aleatory variability sometimes is called modeling uncertainty. It should be clearly understood that in the end all uncertainty is fundamentally epistemic. For practical decision making it is not necessary to distinguish between these different types of uncertainty, as long as the total uncertainty is captured correctly. In practical applications the attempt to separate these different types of uncertainty has led to problems, because the dependency between the different random model parameters in PSHA models was not considered (Klügel, 2005a). According to the SSHAC procedures epistemic uncertainty is treated by presenting different modeling alternatives for the hazard parameters as different branches of a logic tree. Experts are asked to propose and to evaluate different modeling alternatives by assigning different subjective weights (probabilities) to each of the possible alternatives. These weights represent the different degree of belief of the experts in the different modeling alternatives (the weights, therefore, should sum up to 1 for each of the nodes of the logic tree). In general experts perform differently in such assessment tasks. The SSHAC procedures allow the use of different methods for the aggregation of expert opinions, albeit the preferred approach uses equal weights. This aggregation approach corresponds to the assumption of "infallible experts" (Klügel, 2005b), because it assumes implicitly that experts can make bias-free estimates. This assumption is in general not justified (Kahneman et al., 1982; Cooke, 1991). Figs. 4 and 5 show some typical logic trees, to some extent simplified, as used in the PEGASOS project (Abrahamson et al., 2004). The PEGASOS project was the first European trial application of the SSHAC procedures (SSHAC, 1997) at its most elaborate level (level 4) to develop a site specific seismic hazard for the sites of Swiss nuclear power plants (Abrahamson et al., 2004; Zuidema, 2006). The Swiss nuclear power plants sponsored the study based on a request by the Swiss regulator HSK. This request was derived from discussions with a limited set of US-American consultants and NRC officers. The PEGASOS project was subdivided into 4 subprojects:

• Subproject 1 (SP1) – Seismic source characterization – 4 groups of experts, each group consisting of three experts,
• Subproject 2 (SP2) – Ground motion characteristics – 5 experts,
• Subproject 3 (SP3) – Site response characteristics – 4 experts,
• Subproject 4 (SP4) – Hazard quantification.

Therefore, the study followed the convolution approach separating source, ground motion and site response characteristics (see discussion in Section 4.4.5). Two leading American experts in probabilistic seismic hazard analysis acted as TFIs (Technical Facilitator and Integrator).

Fig. 5. Generalized logic tree for the PEGASOS subproject 2, ground motion characteristics.

A set of 15 candidate ground motion models was suggested to the experts of subproject 2 for evaluation and weighting. These equations were based on different magnitude and distance measure definitions. Therefore, conversion formulae were required to convert the characteristics used to moment magnitude and Joyner–Boore distance, defined as the standard for the study. Most experts provided an assessment of uncertainties for the use of these conversion formulae (epistemic by comparing different models, sometimes also aleatory). Most of the attenuation equations were empirical correlations expressing attenuation in dependence of magnitude and distance, and sometimes soil conditions and faulting style. All models were simple one-dimensional amplitude decay equations. The form of the regression function used in most of the models corresponds to the far-field solution of elastic seismic wave propagation in homogeneous media (Aki and Richards, 2002). Directivity and topographical effects as well as other source specific characteristics (e.g. the dominant stress mechanisms, extensional or compressional tectonic regimes) were not considered. Hanging wall and footwall effects (for normal or reverse faulting) were not distinguished. The rather complicated geological conditions in Switzerland, investigated by the experts of subproject 1 and described by different seismic zone and source models, have not found any corresponding differentiation in the attenuation models used. In other words, for each seismic source and for each of the four different sites of nuclear power plants the same set of weighted attenuation equations was employed. In doing so, the dependency of the attenuation characteristics on the geological and seismotectonic characteristics was completely lost.

It is worth mentioning that the SSHAC procedures (SSHAC, 1997) allow the use of source specific attenuation models, but do not demand this. Therefore, the treatment of ground motion characteristics in the PEGASOS project was in compliance with the SSHAC procedures, although such a treatment of the problem is without any doubt questionable. A possible correlation between the proposed candidate attenuation models, resulting from using the same databases and at least partially the same recorded time histories, was neither investigated nor considered in the project. The original time histories used for the development of the different attenuation equations were not even retrieved and analyzed with respect to their applicability for Swiss and especially for the site specific conditions.

A reference rock shear wave velocity of 2000 m/s was used, which is a long way off from the data support used for the development of most of the attenuation equations, especially the European ones. It is also a long way off from the site conditions at the four nuclear power plant sites in Switzerland. This approach requires a transfer of the results derived from the original attenuation models to the reference rock conditions and later an additional transfer from the reference rock conditions to site conditions. This transfer was performed without any reduction of the uncertainty, which would have been required because the attenuation models, based on recordings from largely differing site characteristics (e.g. scatter of shear wave velocities), were transferred to constant rock shear wave conditions (no scatter at all). Instead of reducing the uncertainty as would have been required (moving from large variability to zero variability of site conditions), additional uncertainty terms were introduced (see discussion in Klügel, 2005a).

A new feature in comparison to the only other full scope application of the SSHAC procedures (Stepp et al., 2001) was the introduction of an upper boundary of ground motion used for hazard truncation. This avoided the development of completely meaningless (for engineering applications) results, as have been derived for the Yucca Mountain project (BECHTEL/SAIC, 2004). But again, the dependency of these upper ground motion levels on source characteristics was ignored. This means, for example, that the experts in general admitted the possibility that a far distant seismic point source, separated from the site of interest by many mountain chains and valleys, can create the same upper ground motion levels as a source significantly closer to the site.

A specific feature of the PEGASOS project was that experts also provided estimates for the epistemic uncertainty on aleatory variability, strictly separating these two types of uncertainty as independent contributors to the overall uncertainty, without considering that the total amount of uncertainty modeled in the PEGASOS logic trees must reflect the total uncertainty as observed in the real world. This approach reflects the lack of understanding of the meaning of subjective probability as a direct measure of the degree of belief in a specific model by some of the key proponents of this PSHA approach (Abrahamson and Bommer, 2005; Bommer and Abrahamson, 2006).

As Aven showed (Aven, 2003, pp. 48, 68, 146), modeling uncertainty does not exist in the mathematical framework of subjective probability used in risk analysis for decision making. Subjective probabilities assigned by an expert to a specific model themselves reflect the degree of belief (the assumed degree of appropriateness) of a specific model and, therefore, the uncertainty associated with its use.

The approach used in the PEGASOS project manifested another problem associated with the SSHAC-methodology. The SSHAC-methodology does not pay attention to the principle of empirical control, which needs to be complied with for a meaningful engineering decision making (Cooke, 1991; Klügel, 2005b). On several occasions the experts decided to ignore or to overturn even readily available information. For example, the experts decided to neglect the available borehole information from the original site investigations of the Goesgen plant, indicating that the characteristic shear wave velocity of the geological bed rock is significantly lower than assumed in the originally used shear wave velocity estimates (Klügel, 2007b). They also neglected the only available earthquake recordings (seven recordings) registered at the Swiss nuclear power plant sites, although scaling techniques (Fukushima, 1996) were available to allow scaling from weaker earthquakes into the range of stronger earthquakes.

    4.3.3. Classification of PSHA-methods

Different probabilistic seismic hazard analysis methods differ with respect to the stochastic process models used in the probabilistic framework. There are models based on the modeling assumption of a homogeneous, stationary Poisson process for the arrival of earthquakes. In this model earthquakes have a constant occurrence rate. The homogeneous, stationary Poisson process is a memoryless stochastic process. The use of this model is equivalent to the assumption that the occurrence of earthquakes does not depend on the occurrence of any previous earthquake. This process model is the basis for the Cornell–McGuire PSHA model.

There are other methods removing this assumption. One approach consists in the assumption of a non-homogeneous Poisson process, leading either to a Weibull-distribution for the interarrival time of occurrence of earthquakes (e.g. using a power function representation of the non-homogeneous Poisson process) or to models with time-dependent frequencies of occurrence (renewal model, WGCEP, 1995). Time or slip predictable models represent a further alternative, usually leading to the model of a lognormal distribution for the interarrival time between two earthquakes. The latter approach in general reflects the model of the seismic cycle and of characteristic earthquakes (Wesnousky et al., 1983; Wesnousky, 1994; Anderson et al., 1996; Scholz, 2002).
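The practical difference between the memoryless and the renewal view can be made concrete with a short simulation; the parameters below are illustrative, not calibrated to any fault.

```python
import numpy as np

# Contrast the memoryless Poisson assumption with a lognormal renewal
# model of the seismic cycle. Both share the same mean recurrence
# interval; only the conditional behavior differs.

rng = np.random.default_rng(0)
mean_interval = 100.0                       # years (assumed)

# Homogeneous Poisson: exponential interarrival times, memoryless.
poisson_gaps = rng.exponential(mean_interval, size=100_000)

# Renewal model: lognormal interarrival times with the same mean.
sigma = 0.47                                # shape parameter (assumed)
mu = np.log(mean_interval) - 0.5 * sigma**2
renewal_gaps = rng.lognormal(mu, sigma, size=100_000)

def conditional_prob(gaps, elapsed, horizon=30.0):
    """P(event within `horizon` years | quiet for `elapsed` years)."""
    survivors = gaps[gaps > elapsed]
    return np.mean(survivors <= elapsed + horizon)

for elapsed in (0.0, 50.0, 100.0, 150.0):
    p_pois = conditional_prob(poisson_gaps, elapsed)
    p_ren = conditional_prob(renewal_gaps, elapsed)
    print(f"quiet {elapsed:5.1f} a: Poisson {p_pois:.3f} vs renewal {p_ren:.3f}")
```

The Poisson column stays constant regardless of the elapsed quiet time, while the renewal column rises as strain accumulates through the cycle; this is exactly the dependence on the previous earthquake that the Cornell–McGuire model discards.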

The last alternative considered here is Markov or general semi-Markov models (Patwardhan et al., 1980), where the occurrence of an earthquake depends only on the previous earthquake.

A broader class of probabilistic methods is described by Klügel et al. (2006). They suggest expanding deterministic scenario-based methods for use in risk analysis applications. Here the frequency of earthquake events is represented by a frequency density depending on magnitude, location and time. From this frequency density, time-averaged frequencies of occurrence of earthquakes within an areal source (or from a fault) assigned to different magnitude classes are obtained. The time averaging is based on the design life time of the infrastructure considered. This method avoids the decomposition of the multivariate distribution of earthquake occurrence (characterized by magnitude, time and location) into separate and (assumedly) independent distributions for magnitude, location and time introduced by Cornell (1968). The frequency density is defined from data analysis. It can be derived by parametric and non-parametric estimation methods. New information can be incorporated using a Bayesian approach or by a direct parametric estimate. Uncertainty is represented by subjective probability distributions derived from the interpretation of real world data and propagated parametrically through the model. In my understanding this is in line with the predictive approach by Aven (2003), because the key principle of his approach, the principle of empirical control, is implemented. This approach found application in a risk analysis performed for the decision on a production loss insurance problem (Klügel, 2006b).

Another classification of PSHA methods is possible with respect to the type of probability distributions which are used to form the model. Once again it is possible to distinguish between informative and non-informative approaches. The Cornell–McGuire model and especially its culmination point, the SSHAC-method (SSHAC, 1997), belong to the class of non-informative approaches. This is because non-informative probability distributions are used for the main modeling parameters, such as

• the magnitude-frequency relation (the exponentially truncated Gutenberg–Richter relationship represents a maximum entropy distribution; therefore, it minimizes the information used in the analysis, as formalized below),
• the preferred use of a uniform distribution of seismicity within an areal source.
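To spell out the first point: among all densities supported on [m_min, m_max] with a fixed mean magnitude, the maximum entropy solution is the truncated exponential, which is exactly the exponentially truncated Gutenberg–Richter magnitude distribution (with β = b ln 10):

\[
f(m) = \frac{\beta\, e^{-\beta (m - m_{\min})}}{1 - e^{-\beta (m_{\max} - m_{\min})}},
\qquad m_{\min} \le m \le m_{\max}.
\]

Maximizing entropy under these constraints deliberately injects no information beyond the magnitude bounds and the mean; this is the precise sense in which the choice is non-informative.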

The treatment of uncertainty is also non-informative, because the results are not compared with real world data. The aggregation of expert opinions used to evaluate epistemic uncertainties is also non-informative, because the weighting of the opinions is not performance based.

The broader classes of probabilistic scenario-based methods presented by Klügel et al. (2006) are informative methods, as are (direct) Bayesian methods (Rüttener, 1995). Both approaches are based on data analysis leading to informative probability distributions for the key modeling parameters. The parametric-historical approach (Kijko and Graham, 1998) can also be assigned to the informative approaches, although geological information may have been used incompletely with respect to seismic source modeling in some applications.

There are a few recent papers commenting on the development of probabilistic seismic hazard analysis that limit PSHA to a specific PSHA model, the Cornell–McGuire model, for example Atkinson (2004) and Bommer and Abrahamson (2006). It is worth mentioning that there are a few meaningful deviations from this model which are not covered by these reviews. One of them, which has found application in design regulations for nuclear reactors in the former COMECON countries, is worth explaining. The author happened to be a member of the rulemaking committee for the development of the standards covering the fundamentals of seismic design of nuclear power plants in the COMECON countries (about 20 years ago). The safe shutdown earthquake (safety level 2 according to IAEA, 2002) was defined as the hazard corresponding to a ground motion which will not be exceeded with a probability of 10⁻⁴/a at a statistical confidence of 50%. Here Eq. (2) is interpreted from the point of view of experimental physics, as a regression equation representing the mean value of the logarithm of ground acceleration for different confidence levels (different levels of knowledge). The 50% confidence level corresponds to ε = 0 in Eq. (2). It is worth mentioning that the IAEA guide (IAEA, 2002, page 22) permits the use of uniform confidence response spectra instead of uniform hazard spectra with respect to the application of the PSHA method. This method has found some kind of revival in the KY (Kentucky)-PSHA as suggested by Wang (2006a). The advantage of this method is that the temporal characteristics of earthquake recurrence are maintained (removal of one of the many ergodic assumptions in the Cornell–McGuire/SSHAC model of PSHA, Wang, 2006b).
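To make the confidence interpretation explicit, assume (an assumption here, since Eq. (2) itself appears earlier in the paper) that Eq. (2) has the generic regression form of empirical attenuation relations,

\[
\ln Y = f(M, R) + \varepsilon\, \sigma_{\ln Y},
\]

so that setting ε = 0 selects the mean regression curve of ln Y. This is a statement of 50% statistical confidence in the regression itself, i.e. in our knowledge, rather than a percentile of a probability distribution of the ground motion, which is the root of the naming dispute discussed next.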

Proponents of the Cornell–McGuire PSHA model began to call the seismic hazard estimate evaluated at the 50%-confidence level a median. Some authors seem to understand that this is not correct but regard it as a naming convention (Campbell, 2003). Unfortunately, this naming is confusing and mathematically incorrect (see Section 4.4).

4.4. Strengths and weaknesses of Probabilistic Seismic Hazard Analysis

In my discussion I focus on the Cornell–McGuire/SSHAC model of PSHA (traditional PSHA) because of its popularity and its frequent use in regulations.

4.4.1. Applicability of the model of a stationary homogeneous Poisson process

It is worth comparing the PSHA methods with the observed behavior of earthquakes. Bolt (2004) distinguishes between self-exciting and self-correcting behavior. The self-exciting behavior assumes that the conditional rate of earthquake occurrence increases as more earthquakes occur. According to this model the self-exciting effect will taper off as both time and epicentral distance from the previous earthquake increase. Such behavior cannot be described by a memoryless homogeneous Poisson process as is usually assumed in PSHA, because the rate of earthquake occurrence depends on the previous earthquake.

According to the self-correcting model the conditional rate of earthquakes at a given point in space y and time t2 depends on the strain present at this point. As the occurrence of an earthquake at a nearby point x at a previous time t1 decreases the strain at point y and time t2, such an event will generally decrease the conditional rate of earthquake occurrence at point y for time t2. Such behavior cannot be described by a memoryless homogeneous Poisson process either. The rate of earthquake occurrence depends on the previous earthquake and its location. Similarly, Klügel (2005a) discussed the applicability of stationary stochastic processes for modeling earthquake occurrence. The mathematical prerequisite for a stochastic process to be stationary is that the process is ergodic. Klügel (2005a) came to the conclusion that earthquake behavior significantly differs from the properties characterizing an ergodic stochastic process. The main reasons are the observed (at least temporarily) cyclic behavior of earthquake occurrence, the observation of aftershocks and triggered earthquakes, and the discrete values of earthquake magnitudes observed in a constrained area. These observations contradict the property of positive Harris recurrence of an ergodic stochastic process (Gill, 2002). This conclusion complies with the characterization of earthquake behavior provided by Bolt (2004). In a comment on my paper (Klügel, 2005a), a group of key proponents of the PSHA method (Musson et al., 2005) repeated an argument originally introduced by Woo (1999) to justify the assumption of a homogeneous Poisson process. They referred to a theorem by Khintchine (1933, 1955). According to this theorem the superposition of a large number of independent processes can be approximately described as a Poisson process, as long as the points of each of the individual processes are sufficiently sparse and provided that no processes dominate the others. A.Y. Khintchine (1894–1959) made a huge contribution to the theory of ergodic and Markov processes. The problem is that even the most generalized version of his theorem, expressed by the Lévy–Khintchine formula, which allows representing a stochastic process as comprising three components (a drift, a Brownian motion and a jump component), is only valid for a specific class of stochastic processes, the Lévy processes. A Lévy process is defined as follows (Applebaum, 2004):

A Lévy process X = (X(t), t ≥ 0) is a stochastic process satisfying the following:

(L1) X has independent and stationary increments,
(L2) X(0) = 0 (with probability one),
(L3) X is stochastically continuous, i.e., for all a > 0 and for all s ≥ 0, $\lim_{t \to s} P(|X(t) - X(s)| > a) = 0$.

Lévy processes are frequently known as processes with stationary and independent increments. They form a subclass of Markov processes. Poisson processes and Wiener processes are typical examples of Lévy processes.

Referring to the two distinctions of earthquake behavior provided by Bolt (2004), it becomes obvious that the property of stationary and independent increments is not fulfilled. Indeed, the rate of earthquake occurrence according to the self-exciting model rises with an increasing rate depending on previous earthquakes, while according to the self-correcting model it decreases. So the arguments provided by Woo (1999) and Musson et al. (2005), and implicitly also used by Wen (2004), in support of the model of a stationary homogeneous Poisson process as a basis for PSHA are invalid.

The other argument frequently used in support of the Poissonian assumption is based on the declustering of catalogues. The removal of fore- and aftershocks indeed sometimes allows describing the remaining main shocks by a homogeneous Poisson process. Let us assume that such a declustering was performed successfully and we obtain at least a set of weakly non-homogeneous Poisson processes. Then, indeed, the superposition can be represented or approximated by a Poisson process according to the quoted theorem of Khintchine (here in the Palm–Khintchine version, Cinlar, 1972; Daley and Vere-Jones, 2002). This theorem is used widely in operations research (e.g. modeling call center or computer server workloads). After declustering, we can treat the problem of earthquake arrivals as a classical queuing problem. Unfortunately, here the problem arises that the theorem of Palm–Khintchine refers to discrete probability distributions, while PSHA tries to evaluate the probability distribution for ground motion parameters at a given site in dependence of time. Because ground motion parameters can take any value between zero and a physical upper limit, they represent continuous random variables. Once again, the theorem of Khintchine (here in its Palm–Khintchine version) is not applicable. It could only be used to calculate the frequency of earthquake observations (frequency of observed events) at a given site as a superposition of earthquake occurrences from surrounding sources (which we merely assume to be weakly non-homogeneous, see Fig. 6). Cornell (1968) in his original work tried to circumvent the problem by introducing special earthquake events exceeding a specified level of ground motion (e.g., spectral acceleration). Taking the limit by introducing an infinite number of ground motion levels, it is possible to

large scale PSHA studies show some remarkable contradictions to the modeling assumptions used.

The theorem of Khintchine (in the Palm–Khintchine version) also prescribes how to calculate the statistical characteristics of the resulting superposed Poisson process from the statistical characteristics of the underlying stochastic processes describing the different sources (denoted as sub-processes in our context). For example, the mean frequency of t