
FORUM

http://mitpress.mit.edu/JIE  Journal of Industrial Ecology

Copyright 2002 by the Massachusetts Institute of Technology and Yale University

    Volume 5, Number 4

Decision Analysis Frameworks for Life-Cycle Impact Assessment
Jyri Seppälä, Lauren Basson, and Gregory A. Norris

Keywords
decision analysis
life-cycle impact assessment (LCIA) methods
multiple attribute
multiple-criteria decision analysis (MCDA)
normalization

Address correspondence to:
Jyri Seppälä
Finnish Environment Institute
Kesäkatu 6, PB 140, SF-00251 Helsinki, Finland
jyri.seppala@environment.fi
www.environment.fi/syke

    Summary

Life-cycle impact assessments (LCIAs) are complex because they almost always involve uncertain consequences relative to multiple criteria. Several authors have noticed that this is precisely the sort of problem addressed by methods of decision analysis. Despite several experiences of using multiple-attribute decision analysis (MADA) methods in LCIA, the possibilities of MADA methods in LCIA are rather poorly elaborated in the field of life-cycle assessment. In this article we provide an overview of the commonly used MADA methods and discuss LCIA in relation to them. The article also presents how different frames and tools developed by the MADA community can be applied in conducting LCIAs. Although the exact framing of LCIA using decision analysis still merits debate, we show that the similarities between generic decision analysis steps and their LCIA counterparts are clear. Structuring of an assessment problem according to a value tree offers a basis for the definition of impact categories and classification. Value trees can thus be used to ensure that all relevant impact categories and interventions are taken into account in the appropriate manner. The similarities between multiattribute value theory (MAVT) and the current calculation rule applied in LCIA mean that techniques, knowledge, and experiences derived from MAVT can be applied to LCIA. For example, MAVT offers a general solution for the calculation of overall impact values and it can be applied to help discern sound from unsound approaches to value measurement, normalization, weighting, and aggregation in the LCIA model. In addition, the MAVT framework can assist in the methodological development of LCIA because of its well-established theoretical foundation. The relationship between MAVT and the current LCIA methodology does not preclude application of other MADA methods in the context of LCIA. A need exists to analyze the weaknesses and the strengths of different multiple-criteria decision analysis methods in order to identify those methods most appropriate for different LCIA applications.


    Introduction

What is a good decision? A distinction can be made between a good decision and a good decision outcome (von Winterfeldt and Edwards 1986). The decision outcome refers to the consequence of the decision and may be regarded as good if it is to the satisfaction of the decision maker(s). A good decision, on the other hand, is one that is produced by a quality decision-making process. The characteristics of a quality decision-making process include that it involves the appropriate people, identifies good alternatives, collects the right amount of information, is logically sound, uses resources efficiently, and produces choices that are consistent with the preferences of the responsible decision makers (Merkhofer 1999).

Decision making under multiple objectives with difficult trade-offs and uncertain outcomes is an iterative and complex process. In these situations, methods of decision analysis can help decision makers to make better decisions. Decision analysis is a merger of decision theory and systems analysis. Decision theory provides a foundation for a logical and rational approach to decision making. Systems analysis provides methodologies for systems representation and modeling to capture the interactions and dynamics of complex problems (Huang et al. 1995). The term encompasses a variety of activities and methods ranging from those that focus on facilitating the decision process itself, to those that can be used to provide and collate information required for decision making.

Several authors (e.g., Miettinen and Hämäläinen 1997; Seppälä 1997, 1999; Azapagic and Clift 1998; Spengler et al. 1998; Basson and Petrie 1999a, 1999b, 2000; Stewart 1999; Hertwich and Hammitt 2001a, 2001b) have noted that decision analysis can be applied in the context of life-cycle assessment (LCA). Similarities exist between the different stages of an LCA and the phases of a structured decision analytic approach to decision making, which allows LCA to benefit from the approaches and tools developed within the decision analysis field. Furthermore, life-cycle impact assessment (LCIA), the phase of LCA that assists in improving the understanding of the information gathered during the inventory phase, can benefit in particular from the application of decision analysis. LCIA typically aims to assist in obtaining an overall impression of the environmental impacts caused by a complex array of interventions (emissions, land use, and resource extractions) identified during inventory analysis. Decision analysis provides guidance on how such an overall impression (or several different impressions) of environmental impact may be constructed.

In this article, we aim to present decision analysis frameworks for LCIA and to provide an overview of some of the multiple-attribute decision analysis (MADA) methods that may be applied in LCIA. We do not attempt to evaluate the entire LCA process from a decision analytic perspective, nor do we purport this article to be comprehensive in its consideration of MADA methods. The purpose is rather to apply the problem framing and problem analysis concepts developed by the multicriteria decision analysis community to the task of LCIA. The article also presents how different frames and tools developed by the MADA community can be applied in conducting different stages of LCIA, such as normalization and weighting. We hope that application of these concepts to LCIA will enhance the clarity and quality of decisions based on LCA.

    Overview of MADA Methods

    Background

In many decision-making situations it is desirable to achieve or respond to several objectives at once. For example, in evaluating alternatives for proposed streets, one wishes simultaneously to minimize the cost of construction and maintenance, maximize positive social impacts of transportation and land use, minimize environmental impact, and so forth. Because different alternatives have different levels of performance with respect to different objectives, it is rare to find a single alternative that performs best with respect to all objectives at once. For this reason, members of the decision analysis community have developed a number of different methods to help decision makers identify and select preferred alternatives when faced with a complex decision problem characterized by multiple objectives. These so-called multiple-criteria decision analysis (MCDA) methods structure and model multidimensional decision problems in terms of a number of individual criteria where each criterion represents a particular dimension of the problem to be taken into account.¹

MCDA methods are based on the assumption that decision makers strive to make rational choices. The term rational here refers to the choice to approach a decision in a structured and logical manner. The decision maker² behaves rationally if he or she evaluates all the alternatives and chooses the one that maximizes his or her satisfaction (Guitouni and Martel 1998). It is, however, difficult to structure and model the decision-making situation to enable rational decision making because information on alternatives is typically incomplete and many factors affect the desirability of alternatives in different decision situations. In addition, experimental studies in psychology and behavior have revealed that in practice, the decision maker does not always analyze all the alternatives or maximize his or her welfare (e.g., Simon 1957; Zeleny 1992). For these reasons, there has been a need to develop various methods for different decision-making situations in which rules for rational decision making are structured and modeled in different ways.

A distinction can be made between discrete and continuous decision problems. Discrete decision problems involve a finite set of alternatives and are often referred to as selection problems. Continuous decision problems are characterized by an infinite number of feasible alternatives and are referred to as synthesis problems. It is customary to refer to MCDA methods developed for the sorting or ranking of a finite set of alternatives as multiple-attribute decision analysis (MADA), and those that assist in the synthesis of a preferred solution when the potential solution set is described by continuous variables (or a mix of discrete and continuous variables) as multiple-objective optimization (MOO). Examples of the application of MADA methods in LCA or in conjunction with LCA-based environmental indicators to support multiple-criteria decision making can be found in work by Miettinen and Hämäläinen (1997), Seppälä (1997, 1999), Spengler and colleagues (1998), Basson and Petrie (1999a, 1999b, 2000), and Basson et al. (2000). Examples of the application of MOO in LCA or in conjunction with LCA-based environmental indicators are found in work by Azapagic (1996), Azapagic and Clift (1998), Stewart (1999), and Alexander and colleagues (2000).

The emphasis in this article is on discrete decision problems, because most common LCA applications, especially product comparisons, involve the evaluation of a finite set of alternatives; hence this article is limited to consideration of MADA methods. The general principles of decision analysis and MADA methods are equally germane to MOO methods, and the analysis could readily be extended to MOO methods and synthesis type problems involving continuous variables.

    General Procedural Framework

Although the particular methods used in decision analysis may vary according to the specific situation, there is a general procedural framework for decision analysis. The problem is decomposed into components, each of which is subjected to evaluation by the decision maker. The individual components are then recomposed to give overall insights and recommendations on the original problem (Bunn 1984).

The range of objectives for consideration is typically identified during the first phase of a decision analysis exercise. This phase is known as the problem structuring phase and involves the definition of the decision problem, and the identification of stakeholders, their objectives, the alternatives for consideration, and the performance measures (attributes) that will be used to evaluate the degree to which a particular objective is achieved. Objectives are typically structured according to a value tree, also known as an objectives hierarchy, as is demonstrated later in the Problem Structuring section.

The second phase of a decision analysis process is construction of the preference model. This involves the evaluation and comparison of the performance of the alternatives. It is unlikely that one alternative will perform best with regard to all attributes, so it is necessary to determine a decision rule to apply in order to identify and select the alternative that best meets the objectives of the stakeholders in some overall sense.

Because there may be many uncertainties present in the information used to support decision making and because the decision analysis process itself introduces uncertainties (e.g., through the choice of particular model structures and assumptions), the third phase of a decision analysis process involves sensitivity analyses to determine the robustness of the choice of the preferred alternative(s). It involves the determination of the changes in the model response as a result of changes in model data input. Sensitivity analysis allows the identification of critical inputs/judgments and the identification of any close competitors to the preferred alternative.
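To make the sensitivity idea concrete, the following minimal Python sketch (not from the article; the scores, weights, and the simple weighted-sum model are purely illustrative) perturbs one weight at a time and reports whether the preferred alternative changes.

```python
# Hypothetical sketch: one-at-a-time sensitivity of a ranking to weight changes.
# Scores and weights are illustrative, not taken from the article.

def rank(weights, scores):
    """Return alternatives ordered by weighted-sum value (higher is better)."""
    totals = {a: sum(w * s for w, s in zip(weights, vals)) for a, vals in scores.items()}
    return sorted(totals, key=totals.get, reverse=True)

scores = {                       # criterion scores on a common 0-1 scale
    "a1": [0.9, 0.2, 0.6],
    "a2": [0.5, 0.8, 0.5],
    "a3": [0.4, 0.6, 0.9],
}
base_weights = [0.5, 0.3, 0.2]
base_best = rank(base_weights, scores)[0]

for i in range(len(base_weights)):
    for delta in (-0.1, +0.1):                  # perturb one weight at a time
        w = base_weights.copy()
        w[i] = max(0.0, w[i] + delta)
        total = sum(w)
        w = [x / total for x in w]              # renormalise so the weights sum to 1
        best = rank(w, scores)[0]
        if best != base_best:
            print(f"weight {i} changed by {delta:+.1f}: preferred alternative {base_best} -> {best}")
```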

Clearly a decision analysis process is an iterative, exploratory process. Iterations are done both between the different steps of the three phases and for the cycle as a whole, as the information used to support decision making becomes more clearly defined.

Approaches and Basic Elements of MADA Methods

The starting point for modeling a discrete decision situation under multiple objectives is that the set of alternatives, aj (j = 1, . . . , n), and attributes, Xi (i = 1, . . . , m), must be identified. Attributes should be chosen so that their values are sufficiently indicative of the degree to which an objective is met. In addition, they have to be measurable. Measurability implies that a measurement scale³ that allows ordering of the alternatives for a particular objective can be constructed.

In general, the problem addressed in discrete MADA methods is to judge the attractiveness of alternatives on the basis of the scores of the attributes, xi(aj). The information on the scores of alternatives, aj, with respect to attributes, Xi, can be expressed by the following matrix:

                      Attributes
          X1        X2        X3        ...     Xm
    a1    x1(a1)    x2(a1)    x3(a1)    ...     xm(a1)
    a2    x1(a2)    x2(a2)    x3(a2)    ...     xm(a2)
    ...
    an    x1(an)    x2(an)    x3(an)    ...     xm(an)

(rows = alternatives a1, . . . , an)
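As a minimal illustration (not from the article), the decision matrix can be held in a small data structure and checked for an alternative that is best on every attribute; the attribute names and scores below are hypothetical, and higher values are assumed to be preferable.

```python
# Hypothetical sketch: the decision matrix x_i(a_j) as a nested dict and a
# check for an alternative that is best on every attribute (values are
# illustrative; "higher is better" is assumed for all attributes here).

attributes = ["X1", "X2", "X3"]
matrix = {
    "a1": {"X1": 0.9, "X2": 0.2, "X3": 0.6},
    "a2": {"X1": 0.5, "X2": 0.8, "X3": 0.5},
    "a3": {"X1": 0.4, "X2": 0.6, "X3": 0.9},
}

best_on_all = [
    alt for alt, row in matrix.items()
    if all(row[x] >= other[x] for other in matrix.values() for x in attributes)
]
print(best_on_all or "no alternative is best on every attribute")
```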

This representation of the decision situation assumes that it is possible to precisely forecast impact; in other words, we can associate one consequence, x = (x1, x2, x3, . . . , xm), with certainty to each alternative. This is known as a decision problem under certainty. Unfortunately, the problem is usually not so simple because of uncertainties about the eventual consequences. Decision under uncertainty means that for each alternative, the set of possible consequences and the probabilities that each will occur can be determined. These determinations can be conducted by means of a probability distribution function, Pj(x), over the set of attributes for each alternative, aj. For practical reasons, a common simplification is to consider the decision situation as a problem under certainty where Pj(x) assigns a probability one to a particular x and zero to all others. In decision under strict uncertainty the decision maker feels that he or she cannot quantify the uncertainty associated with alternatives in any way. In this article, decision under certainty is assumed, unless stated otherwise. For an exposition of decision making when the consequences of alternatives are uncertain, see work by French (1988).

MADA methods differ with respect to input data and aggregation procedures. An aggregation procedure specifies the set of rules that is used to process this information and generate a ranking of alternatives. Most MADA methods are designed to process cardinal information on attributes. Methods such as the regime (Hinloopen and Nijkamp 1990) and EVAMIX (Voogd 1983) methods have been developed to process ordinal or mixed information on attributes. In practice, attributes may be expressed in a variety of measurement units (dollars, kilograms of pollutants, etc.). To make their scores comparable, they must be transformed into a common dimension such as monetary value or into a common dimensionless scale. The latter type of transformation can include processes in which the original scale of an attribute is converted into a scale that reflects the decision maker's relative preferences for different levels of that attribute. Thus, the results of the transformation can be considered as criteria scores, ci(aj), that describe the performance of the alternatives within each criterion. Finally, weights, wi, are used to reflect the trade-offs that decision makers are willing to accept between performances in different objectives. Alternative MADA methods differ with respect to how ci(aj) and wi are determined.

Depending upon the multiple-criteria aggregation procedure applied, a MADA method can identify the following (Janssen 1992):

A complete ranking: a1 ≻ a2 ≻ a3 ≻ a4
The best alternative: a1 ≻ (a2, a3, a4)
A set of acceptable alternatives: (a1, a2, a3) ≻ a4
An incomplete ranking of alternatives: [a1 ≻ (a2, a3, a4)] or [(a1, a2) ≻ (a3, a4)]

Thus, the procedures create complete preorders, complete orders, or partial orders. In a complete order, all alternatives are ranked relative to one another, and no two alternatives are regarded as equal. The order is a preorder if some alternatives are regarded as equal. In partial orders, some alternatives may not be ranked relative to others. On the basis of order structures, MADA methods are designed for use in either screening or selection problems, although selection methods could conceivably be used to screen alternatives, and under certain circumstances a screening method allows only a single alternative to pass the screening test.⁴ A review of MADA methods specifically designed for solving selection problems has been given by Olson (1996).

An important property in discrete multiple-criteria aggregation procedures is the manner in which these procedures model decision maker preferences. In most methods the decision maker preference structure is modeled according to the following basic binary relations:

Strict preference (P): a ≻ b ⇔ a P b ⇔ a is preferred to b    (1)

Indifference (I): a ∼ b ⇔ a I b ⇔ a is as preferable as b    (2)

Weak preference (Q): a ≿ b ⇔ a Q b ⇔ a is at least as preferable as b    (3)

where a and b are anything over which it is possible to express a preference. For example, they can be consequences, such as xi(aj); or they can be the alternatives, aj, themselves. For these relations to describe a rational person's preference, they must satisfy specific consistency conditions (e.g., transitivity, comparability, asymmetry; see French 1988). For example, typically, it is assumed that a rational person's preferences should be transitive. In the case of strict preference this means that if a ≻ b and b ≻ c, then a ≻ c. Turning to the concept of indifference, if the decision maker holds a ∼ b and b ∼ c, then he or she must also hold a ∼ c. Operationally, weak preference means that neither strict preference nor indifference holds. Any relation that is transitive is known as an order. For example, assume that the decision maker can compare alternatives aj in terms of weak preference. Then, alternatives aj can be arranged according to a complete preorder (i.e., a ranking that makes some distinction between alternatives but allows some alternatives to be of equal rank) (French 1988).

Each MADA method uses a specific approach to model the decision maker's preferences. According to Guitouni and Martel (1998), methods can be divided into performance aggregation and preference aggregation approaches. In the first set of methods the various criteria scores are aggregated into a single score using an aggregation function (F) that best represents the decision maker's preferences. In the latter set of methods, information on the relative preference for good performance in different criteria is aggregated to determine which alternatives can justifiably be regarded as better than others.

Furthermore, MADA methods can be compensatory, noncompensatory, or partially compensatory. Compensatory methods allow good performance relative to one attribute to compensate for low performance relative to another attribute. Noncompensatory methods do not allow such compensation between performances in different criteria. Most of the MADA methods fall within the partially compensatory category (Guitouni and Martel 1998).

Figure 1  Relation among multiple-criteria decision analysis (MCDA), multiple-attribute decision analysis (MADA), multiple-objective optimization (MOO), and commonly used MADA methods considered in this article. [Figure not reproduced; the MADA methods shown include MAUT, MAVT, SMART, and weighted summation.]

Note that many of the preference aggregation methods cannot be regarded as noncompensatory in the absolute sense of the term. This is a function of the manner in which the preference relations between alternatives are defined and how the information is aggregated with information about the relative preference for good performance in different criteria (weights). As acknowledged by Guitouni and Martel (1998), the issue is not so much compensatory versus noncompensatory, but rather of the degree of compensation that different methods allow. The degree of compensation that is effected by the manner in which (1) the preference relations are defined and (2) the relative preference for good performances is aggregated is the subject of current research (Basson 1999).

    Characterization of the MADA Methods

In this article, we have limited the presentation of MADA methods only to some commonly used methods (figure 1) that provide, in our opinion, a sufficient basis for a discussion about possibilities of applying MADA methods in LCIA. The methods are classified into four groups according to their theoretical bases or practical features. Most of these methods are described briefly by Stewart (1992), Chen and Hwang (1992), Norris and Marshall (1995), and Guitouni and Martel (1998). Most of them are described in depth by Hwang and Yoon (1981), Yoon and Hwang (1995), and Olson (1996).

Elementary Methods

Elementary methods do not require explicit evaluation of quantitative trade-offs between criteria/attributes in order to select or rank alternatives. In other words, no intercriteria weighting is required and in some cases not even a relative ranking of criteria is required. Elementary methods are described in detail by Yoon and Hwang (1995), and the discussion below draws mainly from this work.

The viewpoint underlying the maximin method is one that assigns total importance to the attribute with respect to which an alternative performs worst. Another way of expressing this viewpoint is the common saying that a chain is only as strong as its weakest link. Effectively, the method gives each alternative a score equal to the strength of its weakest link, where the links are the attributes. In multiple-attribute decision making the maximin method can be used only when values of different attributes are comparable. Thus, all attributes must be measured on a common scale.

The maximax method is analogous to the maximin method. The viewpoint underlying the maximax method is one that assigns total importance to the attribute with respect to which an alternative performs best.

The conjunctive method is purely a screening method. The requirement embodied by the conjunctive screening approach is that in order to be acceptable, an alternative must exceed given performance thresholds for all attributes. The attributes (and thus the thresholds) need not be measured in commensurate units. These methods require satisfactory rather than best possible performance in each criterion because any alternative passing the screen is acceptable. One use of this simple screening rule would be to select a subset of alternatives for subsequent analysis by selection methods.

Figure 2  Utility function ui(xi) for attribute Xi, indicating utility values ui(xi(a1)) and ui(xi(a2)) for attribute values of alternative 1 (xi(a1)) and alternative 2 (xi(a2)), respectively. [Figure not reproduced.]

The disjunctive method is also purely a screening method. It is the complement of the conjunctive method: whereas the conjunctive method requires an alternative to exceed the given thresholds in all attributes (the "and" rule), the disjunctive method requires that the alternative exceed the given thresholds for at least one attribute (the "or" rule). Like the conjunctive method, the disjunctive method does not require attributes to be measured in commensurate units.

The lexicographic method takes its name from alphabetical ordering such as that found in dictionaries. Using this method, attributes are first rank-ordered in terms of importance. The alternative with the best performance on the most important attribute is chosen. If there are ties with respect to this attribute, the next most important attribute is considered, and so on. In MADA problems with few alternatives, quantitative input data, and in which uncertainty has been neglected, the chance of ties is virtually nil, so that the lexicographic method ends up being a selection based on a single attribute. If uncertainty is acknowledged and significant, however, the lexicographic method's features come more fully into play.
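The sketch below illustrates the maximin, maximax, conjunctive, and lexicographic rules on a small hypothetical score table (all values, thresholds, and the importance order are invented for illustration; higher scores are assumed better and all attributes share a common 0-1 scale).

```python
# Hypothetical sketch of four elementary rules on a common 0-1 scale
# (scores are illustrative; higher is better on every attribute).

scores = {
    "a1": [0.9, 0.2, 0.6],
    "a2": [0.5, 0.8, 0.5],
    "a3": [0.4, 0.6, 0.9],
}

# Maximin: score each alternative by its weakest attribute.
maximin = max(scores, key=lambda a: min(scores[a]))

# Maximax: score each alternative by its strongest attribute.
maximax = max(scores, key=lambda a: max(scores[a]))

# Conjunctive screening: keep alternatives exceeding a threshold on all attributes.
thresholds = [0.3, 0.3, 0.4]
passed = [a for a, s in scores.items() if all(v >= t for v, t in zip(s, thresholds))]

# Lexicographic: attributes ranked by importance, ties broken by the next attribute.
importance_order = [1, 2, 0]          # attribute indices, most important first
lexicographic = sorted(scores, key=lambda a: [scores[a][i] for i in importance_order], reverse=True)

print(maximin, maximax, passed, lexicographic)
```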

Multiattribute Utility Theory Methods

The multiattribute utility theory (MAUT) provides a clear axiomatic foundation for rational decision making under multiple objectives. The theory assumes that the decision maker is able to articulate his or her preferences according to the strict preference or indifference relations, and that he or she always prefers the solution that maximizes his or her welfare. MAUT is one of the most commonly used MADA methods that aim to produce a total preorder of alternatives. Note that multiattribute value theory (MAVT) can be considered as a multiattribute theory for value measurement in which there are no uncertainties about the consequences of the alternatives, whereas MAUT explicitly considers that the consequence of the alternatives may be uncertain (Keeney and Raiffa 1976; French 1988).

According to MAUT, the value scores of attributes measured on different measurement scales must be normalized to a common dimensionless unit using single-attribute utility functions, ui(.). By using single-attribute utility functions, the scale of an attribute is converted into a scale that should reflect the decision maker's relative preferences for different levels of that attribute (figure 2). The single-attribute functions are usually normalized onto the [0,1] range, with the poorest performance exhibited by an alternative on an attribute scaled to zero and the best exhibited performance on that attribute scaled to 1.


Procedures for assessing single-attribute utility functions are well developed. Utility functions can be assessed, for example, using the certainty equivalent or the probability method (see Keeney and Raiffa 1976; von Winterfeldt and Edwards 1986). These procedures take into account the probabilities of the consequences in the value judgments and lead to functions that represent the risk attitude of the decision maker.

Assuming certain consequences of the alternatives, functions for single attributes can be constructed by easy measurement techniques such as direct ratings, category estimation, and so forth (see von Winterfeldt and Edwards 1986). These functions obtained by methods not based on the use of probabilities are called value functions, vi(.). In essence, utility functions incorporate risk attitudes, whereas value functions do not.

Single-attribute utility/value functions can be used in constructing a multiattribute utility/value function that is needed in a decision situation under multiple objectives. In discussing decisions in which the decision maker can predict consequences of alternatives with certainty, the computationally easiest form of the decompositions is the additive function

V(a_j) = \sum_{i=1}^{m} w_i \, v_i(x_i(a_j))    (4)

where vi(.) are single-attribute value functions, and wi are attribute weights. In MAUT/MAVT, attribute weights are considered as scaling constants that relate to the relative desirability of specified changes of different attribute levels. The higher V(aj) is, the more desirable the alternative. Thus, the magnitudes of V(aj) can be used to establish a ranking that indicates the decision maker's preferences for the alternatives. A necessary condition for an additive decomposition of the multiattribute value (or utility) function is mutual preferential independence of the attributes (see the Aggregation section of this article, and work by Keeney and Raiffa (1976) for more detail).

In an additive value or utility function, the values of wi indicate the relative importance of changing each attribute from its least desirable to its most desirable level. To assess these scaling constants, one generates data representing stated value judgments of the decision maker. In the case of uncertain consequences of the alternatives and the single-utility functions, the variable probability method or the variable certainty equivalent method can be used for determining attribute weights. In MAVT there are numerous procedures for the determination of weights. The trade-off procedure has a strong theoretical foundation (Keeney and Raiffa 1976; Weber and Borcherding 1993), but it is rather difficult to use. Easier methods such as ratio estimation and the swing weighting procedure are, therefore, more widely used (von Winterfeldt and Edwards 1986; see also the Weighting section).

If a direct rating technique is used for constructing single-attribute value functions, and attribute weights are defined using ratio estimation, the additive weighting method (equation 4) is called the simple multiattribute rating technique (SMART) (see Edwards 1977). SMART, however, has different variations in which simple value measurements for single-attribute values and weighting procedures can vary (see von Winterfeldt and Edwards 1986). SMART is one of the most commonly used MAVT techniques.

A simple MAVT method, sometimes called weighted summation (Janssen 1992), is an additive aggregation model (equation 4) in which the value function elements vi(xi(aj)) are replaced by xi(aj)/xi* or (xi(aj) − xi⁻)/(xi* − xi⁻). The first transformation factor scales the scores for each attribute according to the relative distance between the origin and the maximum score (xi*). The second factor scales these scores according to their relative position on the interval between the lowest (xi⁻) and the highest (xi*) scores (assuming higher scores are preferred to lower ones).
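As a worked illustration of equation (4) combined with the second weighted-summation transformation, the following sketch (hypothetical attribute values and weights; higher raw scores assumed preferable) computes overall values V(aj) using min-max scaled value functions.

```python
# Hypothetical sketch of equation (4) with the min-max value functions used in
# weighted summation: v_i(x) = (x - x_i^-) / (x_i^* - x_i^-). Attribute values
# and weights are illustrative only; higher raw scores are assumed preferable.

raw = {
    "a1": [120.0, 3.5, 0.8],
    "a2": [ 95.0, 4.1, 1.2],
    "a3": [110.0, 2.9, 0.7],
}
weights = [0.5, 0.3, 0.2]                       # scaling constants w_i, summing to 1

m = len(weights)
lo = [min(r[i] for r in raw.values()) for i in range(m)]    # x_i^-
hi = [max(r[i] for r in raw.values()) for i in range(m)]    # x_i^*

def value(alt):
    """Overall value V(a_j) = sum_i w_i * v_i(x_i(a_j))."""
    return sum(
        w * (x - l) / (h - l)
        for w, x, l, h in zip(weights, raw[alt], lo, hi)
    )

for alt in sorted(raw, key=value, reverse=True):
    print(alt, round(value(alt), 3))
```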

Outranking Methods

In so-called outranking methods, it is assumed that the decision maker can express strict preference or indifference or weak preference when comparing one alternative to another for each criterion. The outranking relation (a S b) holds when there is a strong reason to believe that considering all n criteria, an alternative a is at least as good as b (or a is not worse than b) (Guitouni and Martel 1998). According to such a relation it is possible to conduct pairwise comparisons between each pair of alternatives under consideration for each of the criteria/attributes to support or refute the hypothesis that one alternative is better than the other. For example, in an outranking method known as ELECTRE II (Roy 1973) a dominance relationship for each pair of alternatives is derived using both an index of concordance and an index of discordance. The concordance index represents the degree to which alternative a1 is better than alternative a2, whereas the discordance index reflects the degree to which alternative a1 is worse than alternative a2. The method requires that concordance and discordance threshold values be specified by the decision maker. Whether both concordance and discordance indices are used and how these are defined constitutes the main difference between the different outranking methods. Concordance indices typically add the importance weights when there is preference for one alternative over the other, hence the term "performance aggregation" methods. These weights are determined during the weight elicitation step and should reflect the relative preference that stakeholders have for good performance in the different criteria (where "good" is defined through the use of thresholds when establishing preference relations). Finally, the concordance (and/or discordance) indices are used to sort or rank the alternatives. The different outranking methods use a range of procedures of differing complexity to do this.
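The sketch below conveys the concordance/discordance idea for a single ordered pair of alternatives; it uses a deliberately simplified form rather than the exact ELECTRE II definitions, and all scores, weights, and thresholds are hypothetical.

```python
# Hypothetical sketch of concordance and discordance indices for one ordered
# pair of alternatives; a simplified form, not the exact ELECTRE II definitions.

weights = [0.5, 0.3, 0.2]                   # relative importance of the criteria
a = [0.9, 0.2, 0.6]                         # criterion scores of alternative a (illustrative)
b = [0.5, 0.8, 0.5]                         # criterion scores of alternative b (illustrative)

# Concordance: total weight of the criteria on which a is at least as good as b.
concordance = sum(w for w, sa, sb in zip(weights, a, b) if sa >= sb)

# Discordance: largest (normalised) amount by which a is worse than b on any criterion.
scale = 1.0                                 # range of the common 0-1 score scale
discordance = max(max(sb - sa, 0.0) for sa, sb in zip(a, b)) / scale

# a outranks b if concordance is high enough and discordance low enough
# (the thresholds below are illustrative decision-maker choices).
outranks = concordance >= 0.7 and discordance <= 0.3
print(concordance, discordance, outranks)
```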

There exist many versions of the élimination et choix en traduisant la réalité (ELECTRE) method, which can be translated roughly as "elimination and choice reflecting the reality" (see, e.g., Roy and Vanderpooten 1996), and other methods such as the preference ranking organization method of enrichment evaluations (PROMETHEE) (Brans et al. 1984; Brans and Vincke 1985). Other outranking methods such as ORESTE (Roubens 1980), the regime method (Hinloopen and Nijkamp 1990), and MELCHIOR (Leclerc 1984) are based on the same concepts as ELECTRE. The methods produce different order structures of alternatives depending on the preference relations considered, the hypotheses about the properties of these relations (transitivity, etc.), and use of thresholds (veto, preference, etc.). All methods, however, use calculation rules reflecting the idea that, beyond a certain level, bad performance on one criterion cannot be compensated for by good performance on another criterion.

Outranking methods have been promoted for their noncompensatory approach to decision making and for the ease with which uncertainties can be incorporated explicitly into the evaluation of the differences between alternatives; however, the outranking methods lack a strong theoretical foundation (Guitouni and Martel 1998).

Other Methods

The analytical hierarchy process (AHP) developed by Saaty (1980) is a popular MADA method. In principle, AHP has close connections with MAVT because both nonprobabilistic methods use a hierarchical structure and an additive preference model. On the other hand, they have different evaluation scales for determination of criteria weights and criteria scores of alternatives with respect to each criterion. In AHP, criteria scores are also called weights. All weights are elicited through pairwise comparisons, utilizing a prespecified one- to nine-point scale for quantifying verbally expressed descriptions of strength of importance among attributes or strength of preference among alternatives. Much criticism has been leveled at the interpretation of the scale used in AHP; instead of the nine-point scale proposed by Saaty (1980), various other scales have been suggested by several authors (e.g., Ma and Zheng 1991; Salo and Hämäläinen 1997). Having judged the scores of each pair, final weights are calculated according to a so-called principal eigenvalue method. In AHP it is possible to check consistency of judgments on the basis of an index obtained from the calculation method.
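A minimal sketch of the principal eigenvalue step is given below; the pairwise comparison matrix is invented for illustration, and the consistency index shown is the standard CI = (lambda_max - n)/(n - 1).

```python
# Hypothetical sketch: deriving AHP weights from a pairwise comparison matrix
# via the principal eigenvector (the comparison judgments are illustrative).
import numpy as np

# A[i, j] = judged importance of criterion i relative to criterion j on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)                       # index of the principal eigenvalue
w = np.abs(eigenvectors[:, k].real)
weights = w / w.sum()                                 # normalise so the weights sum to 1

# Consistency index CI = (lambda_max - n) / (n - 1); values near 0 indicate consistent judgments.
n = A.shape[0]
CI = (eigenvalues.real[k] - n) / (n - 1)
print(weights.round(3), round(float(CI), 4))
```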

The principle behind the technique for order preference by similarity to ideal solution (TOPSIS) (Hwang and Yoon 1981) is simple: The chosen alternative should be as close to the ideal solution as possible and as far from the negative-ideal solution as possible. The ideal solution is formed as a composite of the best performance values exhibited by any alternative for each attribute. The negative-ideal solution is the composite of the worst performance values. Proximity to each of these performance poles is measured in the Euclidean sense (i.e., square root of the sum of the squared distances along each axis in the attribute space), with optional weighting of each attribute. The method and its results are straightforward to depict graphically.
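The following sketch implements the TOPSIS idea on a small hypothetical decision matrix (values and weights are invented, and all attributes are treated as benefit criteria).

```python
# Hypothetical sketch of TOPSIS on a small decision matrix (values and weights
# are illustrative; all attributes are treated as benefit criteria here).
import numpy as np

X = np.array([            # rows = alternatives a1..a3, columns = attributes
    [0.9, 0.2, 0.6],
    [0.5, 0.8, 0.5],
    [0.4, 0.6, 0.9],
])
w = np.array([0.5, 0.3, 0.2])

V = X / np.linalg.norm(X, axis=0) * w            # normalise columns, then weight
ideal, anti_ideal = V.max(axis=0), V.min(axis=0)

d_plus = np.linalg.norm(V - ideal, axis=1)       # Euclidean distance to the ideal solution
d_minus = np.linalg.norm(V - anti_ideal, axis=1) # distance to the negative-ideal solution
closeness = d_minus / (d_plus + d_minus)         # higher = closer to the ideal

print(closeness.round(3), "best alternative:", int(np.argmax(closeness)) + 1)
```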

Choice of a MADA Method for LCIA

All methods described here support the ranking of alternatives in different ways. The choice of a MADA method can be based in part on the type of decision, the stakeholders involved in the decision-making process, and the information available to support decision making. As a first step in the development of a comprehensive framework for the selection of an appropriate MADA method for a particular decision-making situation, Guitouni and Martel (1998) provided tentative guidelines for this choice and a comparative study of 29 MADA methods. They emphasized, however, that their results are far from satisfactory because they do not facilitate a clear unequivocal choice of method and that much work is still required to show the strengths and weaknesses of different MADA methods. Examples of such studies, and particularly of the influence of the type of decision, the stakeholders involved in the decision-making process, and the information available to support decision making, have been provided by Basson (1999), Basson and Petrie (1999a, 1999b, 2000), and Basson et al. (2000).

The essential questions that need to be answered are (1) which type and level of support is required, and (2) which method is most appropriate in LCIA. The central ethical question, particularly in the context of sustainability, is whether a compensatory approach to environmental performance is an acceptable approach for the evaluation of alternatives in LCA. Can a good performance with regard to climate change (global-level impact) offset a bad performance with regard to acidification (regional-level impact), or stated more simply, can clean air compensate for dirty water?

Three conditions exist where noncompensatory MADA methods may be preferred:

1. Single, all-important impact category. If there is one criterion (impact category) whose importance is deemed by the decision maker(s) to be overriding, then this by definition precludes compensation, and a method such as the lexicographic method can be used.

2. Categories of ranked importance along with performance uncertainty. If uncertainty in the results has been quantified and the decision makers can agree to thresholds of difference and confidence required to distinguish performance among alternatives, that is, to identify a lack of a "tie" with respect to a criterion (e.g., an impact category), then again, noncompensatory methods such as the lexicographic semiorder and some of the ELECTRE and PROMETHEE methods may be used. Under this second condition, the categories need only be rank-ordered in terms of importance, rather than weighted.

3. Performance thresholds. If thresholds of performance are identifiable, such that either differences in performance above the threshold are unimportant, or differences in performance below the threshold are unimportant, then these are circumstances under which compensation does not apply. In such cases, it might be possible to use screening methods such as the conjunctive or disjunctive methods or the classification method ELECTRE TRI to eliminate some alternatives.

It is not immediately clear how LCA results, which are tied to the functional unit and so are often cited as being scale independent, could be assessed with respect to thresholds. This may, however, be possible, either by testing for threshold exceedance at the level of physical processes, or perhaps by testing normalized results where a normalization regime such as total market size is used to scale the functional unit. In this latter case, for example, one might choose to equate to zero all impacts in categories whose normalized results fall below some threshold.

One way to obtain the answer to the suitability of different MADA methods for LCIA is to compare the current methodological choices applied in LCIA with MADA methods. According to the International Standards Organization (ISO 1997, 2000), LCIA is divided into mandatory and optional phases. The first phase is the selection of impact categories (e.g., climate change and acidification), indicators for the categories (e.g., radiative forcing in climate change, H⁺ release in acidification), and models to quantify the contributions of different environmental interventions to the impact categories. The second phase, classification, is an assignment of the inventory data to the impact categories. The third phase, characterization, is a quantification of the contributions to the chosen impacts from the product system. This phase produces results for so-called impact category indicator results. After these mandatory phases, LCIA can continue, depending on the goal and scope of the LCA study. Normalization relates the magnitude of the impacts in the different impact categories to reference values. Weighting is the process of converting indicator results (i.e., results from the characterization or normalization), typically, into a single score by using numerical factors based on value choices. In LCIAs, a typical calculation rule for a single score called total environmental impact (EI) is a simple model

EI(a_j) = \sum_{i=1}^{n} W_i \, \frac{I_i(a_j)}{R_i}    (5)

where Wi is the weighting factor of impact category i, Ii(aj) is the impact category indicator result of impact category i caused by product system (alternative) aj, and Ri is the reference value, that is, the indicator result of impact category i of the reference area.

It can be said that equation (5) corresponds to the current LCIA methodology, although a wide range of competing LCIA methods exist. Equation (5) is typical of the methodology supported by the Society for Environmental Toxicology and Chemistry (SETAC) (Consoli et al. 1993; Udo de Haes 1996) and the ISO (2000). In addition, many popular methods such as environmental theme (Baumann and Rydberg 1994), Eco-indicator 95 (Goedkoop 1995), and Eco-indicator 99 (Goedkoop and Spriensma 1999) are based on the use of equation (5), although they differ from each other in terms of overall assessment philosophy. The equation is relevant for the methods using midpoint or endpoint modeling to calculate impact category indicator results.⁵ For example, the environmental theme and the CML methods (Heijungs et al. 1992) are based on midpoint modeling, whereas Eco-indicator 99 and the environmental priority system (EPS) (Steen and Ryding 1992; Steen 1999) try to predict damage according to endpoint modeling. Note that in the calculation rule for EPS, the end point is expressed in monetary terms, and hence there is no normalization phase, that is, Ri is omitted in equation (5).

Equation (5) can be interpreted as the preference model derived from the decision analysis framework. The assessment problem of total environmental impact is decomposed into impact categories, each of which is subjected to evaluation by the decision maker. The impact category indicator results are recomposed to give overall insights on the original problem. Seppälä (1997, 1999) has shown that equation (5) corresponds to the simple additive-weighted model in which linear value functions, vi(Ii(a)) = Ii(a)/Ri, are used. In addition, Seppälä and Hämäläinen (2001) pointed out that the calculation rule of total environmental impact, EI, which does not include normalization (cf. EPS), that is,

EI(a_j) = \sum_{i=1}^{n} w_i \, I_i(a_j)    (6)

is consistent with MAVT if the weighting factor, wi, corresponds to Wi/Ri in equation (5).
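A small numerical sketch (indicator results, reference values, and weighting factors all hypothetical) illustrates this equivalence: computing EI with equation (5) and with equation (6) using wi = Wi/Ri gives identical totals.

```python
# Hypothetical sketch showing that equation (5) and equation (6) give the same
# total environmental impact when w_i = W_i / R_i. Indicator results, reference
# values, and weighting factors are purely illustrative.

impacts = {                 # I_i(a_j): impact category indicator results per alternative
    "a1": {"climate change": 250.0, "acidification": 1.8},
    "a2": {"climate change": 310.0, "acidification": 1.1},
}
R = {"climate change": 1.0e4, "acidification": 80.0}       # reference values R_i
W = {"climate change": 0.6, "acidification": 0.4}          # weighting factors W_i

def EI_eq5(alt):
    """Equation (5): weighted sum of normalized indicator results."""
    return sum(W[i] * impacts[alt][i] / R[i] for i in W)

w = {i: W[i] / R[i] for i in W}                             # w_i = W_i / R_i

def EI_eq6(alt):
    """Equation (6): weighted sum of raw indicator results."""
    return sum(w[i] * impacts[alt][i] for i in w)

for alt in impacts:
    print(alt, round(EI_eq5(alt), 6), round(EI_eq6(alt), 6))
```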

The similarities between equation (5) and MAVT mean that techniques, knowledge, and experience for evaluating the alternatives developed in MAVT can be applied in the LCIA model. In addition, the MAVT framework offers a foundation for methodological development in LCIA because of its well-established theoretical basis.

The relationship between MAVT and the current LCIA methodology does not mean that it is not worthwhile to apply other MADA methods in the context of LCIA; however, the applicability of the elementary methods to LCIA seems to be relatively limited. The maximin, maximax, and lexicographic methods utilize only a small part of the available information in making a final choice: only one attribute (which equals one category indicator result or intervention in the context of LCIA; see the Problem Structuring section) per alternative. To apply the conjunctive and disjunctive methods, the decision maker must supply the minimal or maximal attribute values acceptable for each of the attributes. This may lead to methodological difficulties in the context of LCIA, as we pointed out above. On the other hand, for example, Spengler and colleagues (1998) and Basson et al. (2000) illustrated the successful use of the outranking methods. Analysis of the weaknesses and the strengths of each method is needed in order to help users select appropriate MADA and MOO methods for different LCIA applications.

The Decision Analysis Process in the Context of LCIA

As indicated in the General Procedural Framework section, the main phases of decision analysis are structuring of the problem, construction of the preference model, and sensitivity analyses. These are discussed in turn, with specific focus on the elements that are of interest for LCIA.

    Problem Structuring

Structuring of a decision problem includes defining and organizing the objectives in order to compare the alternative systems. Many similarities exist between decision analytic approaches to problem structuring and LCIA. In general, the aim of LCIA is to assess differences between alternative systems with regard to potential environmental impacts. The alternatives considered include different product types, life-cycle stages, and so on. The main objective in the LCIA framework is to assess the total environmental impact with the help of impact categories, and a starting point is that these impact categories give a fair and relevant description of the studied product's environmental effects. Interventions caused by the alternatives result in a certain level of impact within these impact categories, and so interventions have to also be considered as elements of the assessment problem.

In LCIA with several objectives, the objectives can be structured according to a value tree, or objectives hierarchy. This is one of the major tools for the structuring of a decision problem in the presence of multiple objectives. A value tree captures the aspects that those involved in the decision-making process deem to be important in the selection or ranking of the alternatives (i.e., the objectives) and arranges them in a manner that shows the relationships between them. This tool leads to a hierarchical representation of the objectives. A rule for building the value tree is that higher-level objectives are specified by lower-level objectives in a hierarchy. For example, in a two-level value tree, a useful distinction is to refer to the attributes at the higher levels of the value tree as main attributes, and those at the lowest level, which are the aspects of the performance of the alternatives relative to these main attributes that are in fact assessed (qualitatively or quantitatively), as subattributes. Attributes thus express the performance of the alternatives under consideration in the different objectives (impact categories).

A value tree can be built using a top-down or a bottom-up approach (see, e.g., von Winterfeldt and Edwards 1986). The top-down approach starts from the areas of general concern (i.e., impact categories relevant to the assessment, such as climate change), whereas in the bottom-up approach the idea is to identify relevant impact categories (e.g., acidification) related to the interventions (e.g., NOx, SO2) caused by alternatives. Stated more generally, the bottom-up approach looks at those characteristics that distinguish alternatives from one another in order to evaluate them, whereas the top-down approach considers those characteristics that are important to the decision makers when evaluating the differences between the alternatives. The latter is thus a value-based approach and is more consistent with an evaluation of alternatives driven by the objectives and preferences of decision makers.

In the field of LCIA, the top-down approach for problem structuring has been used, for example, in the Eco-indicator 99 method (Goedkoop and Spriensma 1999), in which the results of assessments are always expressed in terms of general environmental damage categories, for example, damage to human health, ecosystem quality, and resources. These impact categories have their own indicators (e.g., disability-adjusted life years, in the case of human health) calculated by end-point models. On the other hand, the free selection of impact categories on the basis of differences in the inventory data is consistent with a bottom-up approach.

Figure 3  Value tree for a decision on the total environmental impact for different product alternatives, where the interventions are regarded as measurable subattributes. [Figure not reproduced.]

Figure 4  Value tree for a decision on the total environmental impact for different product alternatives, where the impact category indicator results are regarded as measurable attributes. [Figure not reproduced.]

The selection of what are to be regarded as the measurable attributes is a matter of choice. One approach for building the value tree in LCIA is for the final value tree to have impact categories (e.g., climate change and acidification) at the top, and for the lowest-level objectives, or subattributes, in the hierarchy to be interventions (e.g., emissions, extractions, and land use). Values of subattributes can be directly derived from interventions caused by the alternatives (figure 3) (Seppälä 1997, 1999). Another approach suggested for LCIA (e.g., Miettinen and Hämäläinen 1997; Basson and Petrie 1999a, 1999b) is for the impact category indicator results (i.e., global warming potential in CO2 equivalents, or acidification potential in SO2 equivalents) obtained for the alternatives to be attributes. Then the value tree of environmental impacts is only a single-level hierarchy in which impact category indicator results can be directly used as measures for damage levels of impact categories (e.g., climate change and acidification) (figure 4). This structure can also be applied for end-point-oriented LCIA methods such as Eco-indicator 99 (Goedkoop and Spriensma 1999). The value tree of Eco-indicator 99 consists of the three impact categories (human health, ecosystem quality, and resources) with their indicator results (attributes) (expressed as disability-adjusted life years, potentially affected fraction of species, and megajoules of surplus energy, respectively).
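As a minimal sketch of the two structuring choices (category and intervention names are only examples, not the figures' exact contents), the value trees of figures 3 and 4 could be represented as follows.

```python
# Hypothetical sketch of the two value-tree structures discussed above.
# Category and intervention names are examples only.

# Figure 3 style: impact categories with interventions as measurable subattributes.
value_tree_interventions = {
    "total environmental impact": {
        "climate change": ["CO2", "CH4", "N2O"],
        "acidification": ["SO2", "NOx", "NH3"],
    }
}

# Figure 4 style: a single-level hierarchy whose attributes are the
# impact category indicator results themselves.
value_tree_indicators = {
    "total environmental impact": [
        "global warming potential (kg CO2-eq)",
        "acidification potential (kg SO2-eq)",
    ]
}

print(value_tree_interventions)
print(value_tree_indicators)
```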

Keeney and Raiffa (1976) have proposed a set of desirable properties of objectives and attributes in a value tree. A set of objectives and attributes should be complete, operational, decomposable, nonredundant, and minimal. Completeness requires that all relevant values be included in the superstructure of the tree and that the substructure completely defines the higher-level values. Operationality requires that the lowest-level values or attributes be meaningful and assessable. Decomposability means that the attributes can be analyzed one or two at a time, that is, that they are judgmentally independent. Absence of redundancy means that no two attributes or values mean the same thing. The minimum size requirement refers to the necessity of keeping the number of attributes small enough to manage. Note that the requirements conflict. Operationality often requires further decomposition, thus increasing the number of attributes. Completeness may lead to redundancy, because true value independence is often an unattainable ideal (von Winterfeldt and Edwards 1986).

In practice, a selection of impact categories and an assignment of the inventory data to the impact categories (classification) in LCIA correspond to the structuring of the problem in decision analysis. An assessment of total environmental impact (e.g., Eco-indicator 99 (Goedkoop and Spriensma 1999)) implies a complete hierarchy from the attributes to the final overall objective. Furthermore, the similarity means that all requirements mentioned above are also relevant for the set of impact categories identified and for classification. Because LCIA developed without substantial regard for the expertise developed in decision analysis, little effort has been devoted to evaluating whether these conditions are indeed upheld by the set of impact categories defined for LCA studies.

Awareness of these conditions may contribute to selecting appropriate impact categories and attributes for evaluations and may improve the quality of assessments. A condition such as the importance of the absence of redundancy, that is, the avoidance of double counting, is intuitively an obvious requirement, but it can be easily forgotten in the assessment. For example, methane emission and the emission of volatile organic compounds are not a suitable pair of subattributes (figure 3) under the impact category of tropospheric ozone formation because methane is included under volatile organic compounds, and including methane as a separate subattribute under tropospheric ozone formation would lead to double counting. Another requirement easily forgotten is that attributes should be judgmentally independent. For example, this requirement is not fulfilled in the case where indicator results of aquatic eutrophication caused by nutrients and of oxygen depletion caused by organic material in water bodies are considered as separate attributes. Oxygen depletion is one consequence of aquatic eutrophication because decomposition of the dead algal populations reduces the oxygen concentration in the water. These attributes are therefore judgmentally dependent because the evaluation of an alternative with respect to oxygen depletion depends in part on how the alternative performs with respect to aquatic eutrophication.

Completeness is another important issue addressed explicitly by the decision analysis framework. Creating a value tree helps to identify attribute sets used in an overall evaluation of the alternatives (see the Construction of the Preference Model section). In this way, value trees facilitate the process of ensuring that all relevant attributes are taken into account, thus ensuring completeness of the attribute set. Assume, for example, that the task is to determine the best product plan among three alternatives from the point of view of environmental effects. Characterization results of climate change and acidification for all three alternatives are identified as being at the same level; however, one product alternative has a very bad odor problem, whereas the other two alternatives do not. If only climate change and acidification are included in the assessment, priorities between the alternatives may be based on random selection and may differ from those if the assessment also covers odor. Thus, if the assessment does not cover the most important impacts caused by the alternatives, the assessment will not appropriately reflect the differences between the alternatives and, hence, may produce misleading results.


A hierarchy's structure and scope embody value judgments and perspectives. Decision analysis research shows that the structure of the hierarchy alone (given a fixed scope) can influence conclusions in some instances. In particular, this influences the degree of trade-off that is allowed between performance in different attributes. Awareness of how the structure of the hierarchy may influence conclusions of a study is important for LCA applications.

    Construction of the Preference Model

Values of Attributes

The elements of the model(s) are obtained from the structuring phase. The elements are alternatives a1, . . . , an and a value tree with m lowest-level objectives, the attributes X1, . . . , Xm. Furthermore, xi is defined to be a specific level of Xi, so that the possible impact of selecting an alternative can be characterized by the consequence x = (x1, . . . , xm). The structuring of the model and the quantification methods depend on the approach. For example, in the case in figure 4, impact category indicator results, Ii, are defined as attributes, Xi, whereas in the case in figure 3, interventions are defined as subattributes.

Note that as a result of the hierarchy in figure 4, each alternative can be represented as a vector of four numbers: x = (x1, x2, x3, x4), where x is the evaluation object (the total environmental impact) and xi is the measured value (e.g., global warming potential in CO2 equivalents) of x on attribute Xi (e.g., indicator result for climate change). In the case in figure 3 we have x equal to the evaluation object (the total environmental impact), and xi equal to the measured value (e.g., global warming potential in CO2 equivalents) of x on the main attribute, Xi (e.g., indicator result for climate change). Outcomes of each alternative within impact category i can be represented as a vector of K numbers: xi = (b1,i, . . . , bK,i), where bk,i is the measurement (e.g., emission of substance k) of xi on subattribute Bk,i (e.g., NOx under impact category i) (k = 1, . . . , K).
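To make the notation concrete, the following Python sketch (not part of the original analysis; all category names, substances, and numbers are hypothetical) shows how alternatives can be represented in the two framings: as vectors of impact category indicator results (figure 4) or as per-category vectors of interventions (figure 3).

    # Figure 4 framing: attributes X_i are impact category indicator results I_i.
    alternatives_fig4 = {
        "plan_a": {"climate_change": 120.0,   # kg CO2 equivalents
                   "acidification": 0.78},    # kg SO2 equivalents
        "plan_b": {"climate_change": 95.0,
                   "acidification": 1.10},
    }

    # Figure 3 framing: attributes are impact categories; subattributes B_{k,i}
    # are interventions (e.g., emissions) assigned to each category.
    alternatives_fig3 = {
        "plan_a": {"climate_change": {"CO2": 100.0, "CH4": 0.8},   # kg emitted
                   "acidification": {"SO2": 0.5, "NOx": 0.4}},
        "plan_b": {"climate_change": {"CO2": 90.0, "CH4": 0.2},
                   "acidification": {"SO2": 1.0, "NOx": 0.15}},
    }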

In the case in figure 4, where the category indicator results are regarded as the measurable attributes, the enumeration of the attributes is done via the classic LCIA characterization step based on the quantitative inventory results and characterization factors. Thus, the decision framework does not explicitly involve the characterization phase.

In the case in figure 3, where inventory items are subattributes, the values of the subattributes are obtained directly from the inventory results (e.g., emission of NOx) or from the modified inventory results. In the latter situation, only those emissions that cause adverse effects related to impact categories are taken into account by using results obtained from scientific models, empirical data, or expert judgments (Seppala 1999). In both cases the determination of the values of the main attributes is regarded as part of the MADA task itself. This task includes the choice of an aggregation model, normalization, and a weighting procedure. In this case, the characterization is carried out within the decision analysis framework.
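For the figure 3 framing, this characterization-like step can be written out explicitly. The sketch below is illustrative only: the characterization factors are placeholder values, and the data structure follows the hypothetical example given earlier.

    # Illustrative characterization factors CF_{k,i} (kg reference substance per kg).
    char_factors = {
        "climate_change": {"CO2": 1.0, "CH4": 25.0},
        "acidification": {"SO2": 1.0, "NOx": 0.7},
    }

    def category_indicator_results(interventions, cfs):
        """I_i = sum_k CF_{k,i} * b_{k,i}: aggregate interventions per impact category."""
        return {
            category: sum(cfs[category][sub] * amount for sub, amount in subs.items())
            for category, subs in interventions.items()
        }

    plan_a_interventions = {
        "climate_change": {"CO2": 100.0, "CH4": 0.8},   # kg emitted (hypothetical)
        "acidification": {"SO2": 0.5, "NOx": 0.4},
    }
    print(category_indicator_results(plan_a_interventions, char_factors))
    # roughly {'climate_change': 120.0, 'acidification': 0.78}, up to floating-point rounding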

In principle, the structure in figure 3 offers the possibility to handle subattributes (interventions) that do not have natural value scales. For LCIA purposes there are many value measurement techniques that can be used in order to express performance information in a quantitative manner (see, e.g., von Winterfeldt and Edwards 1986). For example, in the crudest version of the direct rating technique, one directly assigns subattribute values to alternatives. A subattribute is defined in qualitative terms. Next, the alternatives that seem best and worst with respect to that attribute are identified. All other alternatives are ranked between the two extremes.

The above rating step can be useful in some streamlined LCA applications; however, it introduces subjective judgment and leads to subjective characterization (see the Weighting section). Seppala (1999) showed how to conduct such subjective characterization with weighting factors and attribute values obtained from expert judgments.
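A crude direct rating of a qualitatively defined subattribute can be sketched as follows; the judged scores are hypothetical and, as noted above, entirely subjective.

    def direct_rating(judged_scores):
        """Rescale judged scores so the best alternative gets 1 and the worst 0."""
        best, worst = max(judged_scores.values()), min(judged_scores.values())
        return {alt: (score - worst) / (best - worst)
                for alt, score in judged_scores.items()}

    # Hypothetical judgments on, say, "odor nuisance" (higher score = better performance).
    print(direct_rating({"plan_a": 7, "plan_b": 2, "plan_c": 5}))
    # -> {'plan_a': 1.0, 'plan_b': 0.0, 'plan_c': 0.6}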

Normalization and Transformation of Attribute Values

In LCIA, normalization relates the magnitude of the impacts in the different impact categories to reference values. The purpose of normalization is to express the characterization results in terms that communicate the significance of the impact indicator results. A variety of approaches to normalization may be used, including ones based on external reference values (e.g., total contribution to impact category in a region or by an industrial sector) or a project-specific relative reference point such as a base case scenario (Norris 2001).

In decision analysis, the aim of normalization is to convert the different scales of the attributes onto the same range (e.g., [0,1]), which is an essential step if the attribute values are to be aggregated into a single number. The scale of [0,1] is established in such a way that the end points (0 and 1) are meaningful to the decision makers and assist them in the evaluation of trade-offs between scores in different impact categories.

In the MAUT/MAVT methods, normalization is carried out by using utility or value functions. It is assumed that the decision maker can express a preference for different levels of performance in each criterion separately and that this can be expressed quantitatively in terms of utility or value functions to create individual or single-attribute utility/value functions. Depending on the shapes of single-attribute value (or utility) functions and the ranges of each attribute taken into account, we can get different normalization factors (see Seppala and Hamalainen 2001). For example, if the value function for attribute Xi is set up to be linear and to range from no environmental impact (i.e., vi(xi0) = 0, where attribute value xi0 = 0) to the impact caused by a particular reference system (i.e., vi(xi(reference system)) = 1), the typical normalization factor (1/Ri) used in the calculation rules of LCIA (see equation 5 and Seppala 1999) is obtained. Using the normalization terminology of LCIA suggested by Norris (2001), an external normalization corresponds to a situation in which the reference system is larger than a production system of the application (such as all activities from Holland and western Europe). On the other hand, MAUT/MAVT also provides normalization rules for a case-specific normalization in which an attribute range of single-attribute value (or utility) functions is chosen from the best and the worst attribute values of the alternatives. In the case of linear single-attribute value functions, the internal normalization leads to normalization factors (transformation factors) represented in the description of the weighted summation method.
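The two normalization styles can be sketched as follows, with hypothetical reference totals and indicator results; the linear value functions are the simplest case discussed above.

    def external_normalization(indicator_results, reference_totals):
        """Linear value function anchored at zero impact and at a reference system:
        v_i = I_i / R_i, i.e., the familiar 1/R_i normalization factor of LCIA."""
        return {cat: value / reference_totals[cat]
                for cat, value in indicator_results.items()}

    def internal_normalization(results_per_alternative):
        """Case-specific normalization: rescale each category between the best (lowest)
        and worst (highest) indicator results found among the alternatives."""
        categories = next(iter(results_per_alternative.values()))
        normalized = {}
        for alt, results in results_per_alternative.items():
            normalized[alt] = {}
            for cat in categories:
                values = [r[cat] for r in results_per_alternative.values()]
                lo, hi = min(values), max(values)
                normalized[alt][cat] = (results[cat] - lo) / (hi - lo) if hi > lo else 0.0
        return normalized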

In AHP and outranking methods, the transformation of attribute values into a common dimensionless unit is typically carried out by normalization procedures using pairwise comparisons. In practice in these approaches, normalization can be considered internal. The normalization is related to the alternatives of each assessment problem and aggregation rules applied in each method.

In practice, damage functions describing relationships between category indicator results (or values of interventions) and the effects (damages) are assumed to be linear in LCIA methods. This means that the same incremental attribute value causes the same response at all attribute value levels; however, the scientific bases for using such linear damage functions are rather weak. In the MAUT/MAVT framework, different shapes of damage functions can be taken into account. Damage functions are replaced by value/utility functions. This offers a general solution for the calculation of the overall impact values which includes the considerations of normalization and weighting (Seppala and Hamalainen 2001).
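To illustrate how a value function can replace the linearity assumption, the following sketch contrasts a linear function with a hypothetical convex one in which later increments of the indicator result are valued as more damaging; the exponential form and its parameter are purely illustrative.

    import math

    R = 1000.0  # hypothetical reference total for one impact category

    def linear_value(x, ref=R):
        """Linear damage assumption: equal increments matter equally."""
        return x / ref

    def convex_value(x, ref=R, c=3.0):
        """A convex alternative: marginal damage grows with the indicator result."""
        return (math.exp(c * x / ref) - 1.0) / (math.exp(c) - 1.0)

    for x in (0.0, 250.0, 500.0, 1000.0):
        print(x, round(linear_value(x), 3), round(convex_value(x), 3))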

Weighting

In impact assessment it is almost always impossible to find one alternative that causes the lowest impact category indicator results with respect to all impact categories. This leads to a problem: What is the best alternative from the point of view of environmental impact? The issue is one of trade-offs between different impact categories chosen in the model. Using the terminology of the LCA community, the question is how to conduct weighting or valuation. This is regarded as a process of assessing the relative importance of impacts in LCA by focusing on either end-point or damage modeling (with subsequent aggregation of damages, e.g., in monetary terms) or on elicitation of weights to express the relative preference for performance in different impact categories. In the decision analysis framework the issue is the same: weighting is a process in which trade-offs among attributes are quantified as importance weights or other scaling factors.

In general, MADA offers techniques, knowledge, and experience for evaluating the weighting factors in LCIA. This is especially relevant to the so-called panel methods in which people are asked to give weighting factors (attribute weights) using different elicitation procedures. Elicitation is the process of gathering judgments concerning the problem through specially designed methods of verbal or written communication (Meyer and Booker 1990). Different MADA methods, however, have different requirements for elicitation of weighting factors because of their various methodological bases. Clearly, practitioners have to understand relationships between weighting factors and the other aggregation elements in the weighting methods used when elicitation procedures are chosen. In practice, this relationship is often ignored, which can lead to the choice of preferred alternatives that do not accurately reflect the preferences of those surveyed using the panel method.

As mentioned earlier, in MAUT/MAVT there are numerous procedures for the determination of weighting factors. Even though weighting is an important issue in MAUT/MAVT methods, many researchers are incapable of justifying clearly their choice of one elicitation technique over another. The choice is partly a trade-off between comprehensiveness and simplicity. The decision analysis literature includes some studies on advantages and weaknesses of different elicitation techniques in different decision situations, which can assist in selection of an appropriate technique (e.g., Borcherding et al. 1991; Weber and Borcherding 1993). Only a few examples of using simple techniques for constructing weights in LCIA currently exist. In an LCA of the Finnish forest industry, ratio estimation was used for elicitation of impact category weights from 58 experts working with environmental issues (Seppala 1999). The ratio method requires the panelist to first rank the relevant impact categories according to their importance. The least important impact category is assigned a weighting factor of 10, and all others are judged as multiples of 10. The resulting raw weighting factors are normalized to sum to one. In a case study involving the recommissioning of a power station, Basson and Petrie (2000) used an indifference weighting technique (see Keeney and Raiffa 1976). The cost of the project was used as a reference point for all trade-off questions, and respondents were asked to indicate the cost margin that would be acceptable in order to improve the environmental and social performance of the design alternatives from worst to potentially best levels. This technique enabled the project designers to think critically about the trade-offs that would have to be made in terms that they could relate to (i.e., project cost) and gave a clear, quantitative expression of the relative importance of different economic, social, and environmental aspects of the project.
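The ratio estimation step can be sketched as follows, with hypothetical panelist responses.

    # Raw ratio judgments from one panelist: the least important category is set
    # to 10 and the others are stated as multiples of 10 (hypothetical values).
    raw_ratio_judgments = {
        "climate_change": 40,
        "acidification": 20,
        "tropospheric_ozone": 10,   # ranked least important
    }

    def normalize_weights(raw):
        """Normalize raw ratio judgments so the weighting factors sum to one."""
        total = sum(raw.values())
        return {category: value / total for category, value in raw.items()}

    print(normalize_weights(raw_ratio_judgments))
    # -> climate_change about 0.571, acidification about 0.286, tropospheric_ozone about 0.143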

MAVT can precisely explain how attribute weights should be assessed in the context of different normalization regimes. For example, weights in equation (5) (the equation that represents the aggregation function most typically used in LCIA) reflect the damages caused by Ri; this feature has to be taken into account in the determination of the attribute weights. In the elicitation situation, the question format has to be adjusted so that panelists express their opinions about the importance of different impacts caused by the reference values, Ri (Seppala 1997, 1999; Seppala and Hamalainen 2001). For example, assume that the whole of Europe is chosen as the reference area for normalization. Then, the question "By how much would you prefer the emissions causing acidification to be reduced compared to those causing tropospheric ozone formation?" does not constitute an acceptable phrasing of the question for preference elicitation if panelists do not know that their response will be interpreted with reference to total emissions for Europe.

In decision analysis there are also so-called ranking methods that can be used if the panelists are able only to rank the criteria (impact categories) in order of importance. Ranking methods such as the expected value method (Rietveld 1984a, 1984b), the extreme value method (Paelinck 1974, 1977), and the random value method (Voogd 1983) have different assumptions and calculation procedures that produce weighting factors from the rank-order information obtained for the criteria.
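As an illustration of deriving weights from rank-order information alone, the sketch below uses rank-order centroid weights; this is a common rank-based scheme and is not necessarily identical to the specific methods cited above.

    def rank_order_centroid_weights(ranked_categories):
        """w_j = (1/m) * sum_{k=j}^{m} 1/k, where rank 1 is the most important of m."""
        m = len(ranked_categories)
        return {category: sum(1.0 / k for k in range(j, m + 1)) / m
                for j, category in enumerate(ranked_categories, start=1)}

    # Hypothetical ranking, from most to least important.
    print(rank_order_centroid_weights(["climate_change", "acidification", "tropospheric_ozone"]))
    # -> climate_change about 0.611, acidification about 0.278, tropospheric_ozone about 0.111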

It is important to understand that if the LCIA application is constructed according to MADA methods other than the additive-weighted methods (such as SMART and AHP), different aggregation models, compared to equation (5), may be obtained. Examples of other aggregation methods are the outranking methods (e.g., ELECTRE and PROMETHEE methods). Different methodological bases of MADA methods also mean that each method has its own requirements for elicitation of weighting factors, which have to be taken into account in the weighting.

In figure 3, where interventions are used as subattributes, MADA methods can offer a foundation to generate the impact category indicator results from various values of interventions. This can be useful if characterization factors are not known for all interventions within impact categories. Decision analysis techniques provide a solution in which subjective attribute weights are determined according to rational rules, and objective attribute weights are directly derived from characterization factors and values of interventions (see Seppala 1997, 1999). Note that in the case in figure 3, the impact category weights also need to be determined in order to calculate total environmental impact scores.

In the decision analysis framework, weights are thus subjective data, and input regarding the decision maker's preferences is required in order to decide which alternative would be preferred. It is also clear that the result of the weighting task is also dependent on many things other than the techniques applied. Panel procedures may differ with respect to the size of the panel and type of panelists (environmental experts, experts from other sciences, stakeholders, lay people, or a representative mix); the elicitation situation (questionnaires, interviews, interactive group, and Delphi (see Dalkey 1969)); whether a one-round procedure or a multiround procedure (with or without feedback) is used; the question format; the presentation of background information; and the type of aggregation (a consensus, use of mathematical methods to combine multiple panelists' data into a single estimate, or a single distribution of estimates) (e.g., Meyer and Booker 1990; Seppala 1999); and so forth. Although much knowledge and experience has accumulated about how these different factors can affect the weighting, further work is needed to establish the implications of these findings for the application of different weighting techniques in the field of LCA.

Aggregation

In decision analysis, a preference model ranks the decision alternatives on the basis of input data (such as interventions, category indicator results, thresholds, or weights), and the aggregation rule is applied. Typically, the preference models produce an overall score for each alternative as a final result.

In an LCIA application in which characterized scores of impact categories (category indicator results) are aggregated into a single score, the higher the score, the more undesirable the alternative.6 The aggregation is usually implicitly incorporated, with weighting, under the heading "valuation." Despite the importance of aggregation rules, this subject has not yet received much attention in the LCA community.

As mentioned earlier, the typical aggregation rule for the calculation of the total environmental impact applied in LCIA is the additive-weighted model derived from MAVT.7 From the point of view of MAVT, a necessary condition for an additive decomposition of the multiattribute value function is mutual preferential independence of attributes. In the decision analysis framing of LCIA illustrated in figure 4, preferential independence between two attributes, say the impact category indicator results I1 (e.g., global warming potential in CO2 equivalents) and I2 (e.g., acidification potential in SO2 equivalents), would hold if the preferences for the specific value of attribute I1 do not depend on the value level of attribute I2. If any pair of attributes is preferentially independent of the others, then the attributes are mutually preferentially independent. If the additive model fails because of dependencies, the multiplicative or multilinear model may still be appropriate (see Keeney and Raiffa 1976; von Winterfeldt and Edwards 1986). In LCA applications to date, little effort has been made to verify whether impact category attributes are indeed mutually preferentially independent and whether the additive aggregation function is thus appropriate. Multiplicative and multilinear aggregation forms have not been used.

Note that following MAVT rules and assuming simple conditions, we can construct equation (5) from the different starting points in figures 3 and 4 (see Seppala 1997, 1999). It is important to understand that if the LCIA application is constructed according to MADA methods (such as ELECTRE and PROMETHEE) other than the additive-weighted methods (such as SMART and AHP), we may get different aggregation models compared to equation (5). Different methodological bases of aggregation models also mean that each model has its own input data and requirements for normalization/transformation procedures and elicitation of weighting factors that have to be taken into account in the assessment. Because of the relationships between aggregation rules and other elements of preference models, an aggregation model has to be selected before the gathering of input data begins.
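As a concrete sketch of the additive-weighted calculation, assume (as is typical in LCIA, although equation (5) itself is not reproduced here) that the overall score is the weighted sum of normalized indicator results, EI = sum_i w_i (I_i / R_i). All weights, reference totals, and indicator results below are hypothetical.

    weights = {"climate_change": 0.6, "acidification": 0.4}               # sum to one
    reference_totals = {"climate_change": 1.0e6, "acidification": 5.0e3}  # e.g., regional totals
    indicator_results = {
        "plan_a": {"climate_change": 120.0, "acidification": 0.78},
        "plan_b": {"climate_change": 95.0, "acidification": 1.10},
    }

    def total_environmental_impact(results, w, ref):
        """Additive-weighted model: EI = sum_i w_i * (I_i / R_i)."""
        return sum(w[cat] * results[cat] / ref[cat] for cat in w)

    scores = {alt: total_environmental_impact(r, weights, reference_totals)
              for alt, r in indicator_results.items()}
    print(scores, min(scores, key=scores.get))   # lower total impact is preferred here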

    Sensitivity Analysis

To make better-informed decisions, it is extremely important to carry out a sensitivity analysis. The aim of sensitivity studies is to identify values of input variables that result in a preference for different alternatives. In this regard it is important to distinguish between the different sources and types of uncertainties that are present. Some uncertainties may legitimately be quantified in a probabilistic sense, whereas others cannot be interpreted as having true values. In the case of the former, the uncertainty can be propagated through the analysis using analytical methods or random sampling techniques (e.g., Monte Carlo simulation, in which the properties and behaviors of a variable are investigated by repeated random sampling from a distribution (normal, lognormal, etc.) representing the variable). In the case of the latter, the effect of the uncertainty in the variable on the choice of preferred alternative(s) can be investigated by varying the value of each individual variable over a realistic range of values.

Inventory data are uncertain estimates. Much of this uncertainty relates to empirical parameters and can be quantified; however, at present, this uncertainty is not well characterized in most LCA studies. Characterization introduces more uncertainties of different kinds (including substantial model uncertainties). These may be difficult to express quantitatively and may be of different magnitudes for different impact categories. Although some work has been done in this area (e.g., Meier 1997; Seppala 1999; Hertwich 1999; Huijbregts 2001), uncertainties introduced during characterization are not managed well at present.

Weighting introduces value-based judgments or subjective preferences and is a significant source of uncertainty because it may be very difficult for respondents to express preferences accurately based on the information provided to them and to accurately reflect these preferences in the relative assessment of attribute scores.

In principle, the combined influence of uncertainty in inventory data, characterization factors, and weights can be analyzed using a Monte Carlo simulation (e.g., Seppala 1999; Huijbregts et al. 2001). The approach needs distributions of these model variables as uncertainty input in order to produce distributions of value scores of alternatives. Determinations of the distributions can be based on expert judgments (see Seppala 1999).
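A minimal Monte Carlo sketch of this idea is given below: common perturbed weights and independently perturbed normalized indicator results are propagated through the additive model, and the frequency with which one alternative outranks the other is reported. The distributions, spreads, and numbers are illustrative assumptions only.

    import random

    random.seed(0)

    mean_normalized = {   # hypothetical normalized indicator results I_i / R_i
        "plan_a": {"climate_change": 7.2e-5, "acidification": 6.2e-5},
        "plan_b": {"climate_change": 5.7e-5, "acidification": 8.8e-5},
    }
    mean_weights = {"climate_change": 0.6, "acidification": 0.4}
    categories = list(mean_weights)

    def one_draw(spread=0.2):
        """One draw: perturb the weights (shared by both alternatives) and the
        normalized results, then aggregate with the additive model."""
        raw = {c: max(1e-9, random.gauss(mean_weights[c], spread * mean_weights[c]))
               for c in categories}
        total = sum(raw.values())
        w = {c: raw[c] / total for c in categories}   # renormalize to sum to one
        return {alt: sum(w[c] * norm[c] * random.lognormvariate(0.0, spread)
                         for c in categories)
                for alt, norm in mean_normalized.items()}

    n = 5000
    wins_a = 0
    for _ in range(n):
        s = one_draw()
        if s["plan_a"] < s["plan_b"]:
            wins_a += 1
    print(f"plan_a preferred (lower impact) in {wins_a / n:.0%} of {n} draws")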

A well-known feature of hierarchical preference models is that changing elements at lower levels of the value tree has only minor effects on results, and weights at the top are the most important factors for sensitivity analysis (von Winterfeldt and Edwards 1986). For this reason, a commonly used approach in sensitivity analysis is to calculate certainty intervals for one set of weights. In the context of LCIA the procedure can reveal an interval of the chosen impact category weight in which the order of preference of the alternatives is not changed (Seppala 1999). The so-called turning points (changes in the order of preference for, or ranking of, the alternatives) are found by varying values of the particular weight and calculating the corresponding results. In the calculation, the ratios between all other weights remain unaltered. A search procedure also exists for a turning point when all weights are allowed to vary (Rietveld and Janssen 1989).
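The one-weight procedure can be sketched as follows: the chosen impact category weight is swept over [0, 1], the remaining weights are rescaled so that their mutual ratios stay fixed, and the values at which the preferred alternative changes (the turning points) are reported. All numbers are hypothetical.

    normalized = {   # hypothetical normalized indicator results (lower is better)
        "plan_a": {"climate_change": 0.40, "acidification": 0.70, "ozone": 0.20},
        "plan_b": {"climate_change": 0.55, "acidification": 0.30, "ozone": 0.60},
    }
    base_weights = {"climate_change": 0.5, "acidification": 0.3, "ozone": 0.2}

    def best_alternative(weights):
        scores = {alt: sum(weights[c] * v for c, v in res.items())
                  for alt, res in normalized.items()}
        return min(scores, key=scores.get)

    def turning_points(varied, steps=1000):
        """Sweep one weight; rescale the others so their ratios remain unaltered."""
        others = {c: w for c, w in base_weights.items() if c != varied}
        rest_total = sum(others.values())
        points, previous = [], None
        for i in range(steps + 1):
            w_var = i / steps
            weights = {varied: w_var}
            weights.update({c: (1.0 - w_var) * w / rest_total for c, w in others.items()})
            winner = best_alternative(weights)
            if previous is not None and winner != previous:
                points.append((round(w_var, 3), winner))
            previous = winner
        return points

    print(turning_points("climate_change"))   # weight values where the ranking switches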

The above approaches to dealing with uncertainty do not cover model errors. Selection and aggregation of variables, boundaries, and assumptions in the LCIA model contribute to the credibility of results. For defining this uncertainty, the use and comparison of alternative models are needed. Work on the management of uncertainty in decision making supported by decision analytic decision support models is underway (Basson 1999).

    Conclusions

The assessment of total environmental impact in LCA can be considered as decision making under multiple objectives with difficult trade-offs and uncertain outcomes. In these complex decision situations, the methods of decision analysis can offer different approaches to structure and model the decision, and thus enable rational decision making. MADA methods developed for discrete decision problems, in particular, are useful for assisting the choice or screening of the best alternative or a good set of alternatives in LCA applications. The normative findings of the MCDA field can be usefully applied to help discern good and bad approaches to LCIA.

The framing of LCIA (and LCA) using decision analysis can make an important contribution to the development of LCA. This could promote a more structured and consistent approach to LCA and decision making. Because decision analysis is in essence a systems tool, it is consistent with the systemic intent of LCA. Furthermore, the decision analysis community has a wealth of experience and has generated an abundance of decision support, including tools to support problem structuring, preference elicitation, and problem analysis, as well as uncertainty and sensitivity analysis. By framing LC(I)A using decision analysis, it becomes clearer which of these tools are relevant to support particular aspects of LCA and overall decision making in a variety of contexts.

Although the exact framing of LCIA using decision analysis still merits debate, the generic decision analysis steps and their LCIA counterparts are clear. Firstly, a selection of impact categories and an assignment of the inventory data to the impact categories (classification) in LCIA correspond to the structuring of the problem in decision analysis. The assessment problem can be structured according to a value tree in order to ensure that all relevant impact categories and interventions/impact category indicator results are taken into account in the appropriate way. Secondly, construction of the impact assessment model according to a decision analysis framework can assist in deciding which aggregation rule is required for the calculation of overall environmental impact. The framework can offer tools to assess values of attributes, that is, values of interventions or category indicator results, depending on the structure of the model. Furthermore, the aggregation rule can answer questions such as "What does the concept of normalization mean and how should it be conducted?" and "What is the relationship between normalization and weights and how should it be taken into account in weight elicitation?" In addition, MADA methods can clarify how to conduct weighting in the context of panel methods. Further research is needed to ensure the effective use of weighting in the field of LCA.

A typical calculation rule for total environmental impact used in LCIA can be interpreted as the additive-weighted model derived from MAVT. The similarities mean that especially the techniques, knowledge, and experience for evaluating the alternatives developed in MAVT/MAUT can be applied to the LCIA model. The strength of this theory compared to many non-MAVT/MAUT approaches is its well-established theoretical foundation. Thus, MAVT offers a basis for methodological development in LCIA.

The choice between compensatory and noncompensatory approaches to decision making raises some fundamental ethical questions for LCA. The challenge is to determine when compensatory approaches to decision making are acceptable. On the other hand, noncompensatory methods, such as lexicographic, conjunctive, and outranking methods, require the establishment of thresholds and absolute performance standards; this provides substantial methodological challenges for LCA, which bases its evaluation of environmental damage on per-unit contributions to proxy environmental indicators.

In summary, LCIA methods can be, and have been, usefully framed as instances of multiple-attribute approaches to decision making and applications of MADA methodology; however, it is important to understand that the different methodological bases of different MADA methods mean that each method has its own requirements for the structuring and modeling of the assessment problem and that these have to be taken into account in normalization, weighting, and similar procedures. It is essential that practitioners understand the relationships between different elements in the various methods when applying them.

    Notes

1. This field has an abundance of terminology. The words "objectives" and "attributes" may be used synonymously with the word "criteria." A useful distinction is to refer to more general statements about aspects that would be considered in the evaluation of alternatives as "objectives" when they explicitly have a direction of preference associated with them (e.g., minimize environmental impact) and as "criteria" when the direction of preference is not stated and merely implied. The term "attributes" is then reserved for that which is actually evaluated (qualitatively or quantitatively) about the performance of these alternatives relative to the more general objectives/criteria.

2. The term "decision maker" is used in the general sense here and refers to those involved in the decision-making process. It is customary in decision analysis to refer to a hypothetical decision maker who represents the consensual point of view that is developed during the decision analysis process.

3. Scales of measurement that can be employed for the measurement of quantities are ordinal or cardinal. Ordinal scales are ranking scales, which can be used to indicate order preference, that is, A is first or best, B is second or second best, C is third or third best, and so forth. Such scales, however, do not indicate the magnitude of the difference in preference, that is, the magnitude of the difference in preference between A and B and between B and C is not expressed in an ordinal scale and is not necessarily the same. Interval or ratio scales are cardinal scales. Interval scales have constant units of measure, but the scale is not proportional and the zero point of the scale (if defined) is arbitrary. The temperature scale is a typical interval scale. One can say that an object at 100°C is 50°C hotter than one at 50°C, but it is not meaningful to say that the former object is twice as hot as the latter. Ratio scales, such as those used for length and mass, have a natural zero and constant units of measurement, and the difference between units is proportional, for example, an object with a mass of 100 kg is twice as heavy as an object with a mass of 50 kg.

4. In MCDA, screening and selection methods are also referred to as sorting and choice methods, respectively, where the aim of the former is to classify the set of alternatives into a minimum of two sets (where at least one set merits further consideration), whereas the purpose of the latter is to find a single, preferred alternative.

5. According to a United Nations Environment Program (UNEP) workshop (Bare et al. 2000), end-point modeling refers to characterization describing impact category indicators at the end of the cause-consequence chain, such as years of life lost, whereas midpoint modeling is related to indicators located somewhere along the cause-consequence chain, such as proton (H+) release in the case of acidification.

6. This is a matter of choice, and the value functions may equally well be defined to indicate the extent to which performance scores meet the objectives of the decision maker, for example, best performance (lowest impact) could be given a score of 1, and worst performance (highest impact) a value of zero; hence, higher scores will be preferred to lower ones.

7. Note that there are also LCIA methods such as eco-scarcity (Ahbe et al. 1990; BUWAL 1998) in which the aggregation rule does not correspond to the rules of MAVT (Seppala and Hamalainen 2001).

    References

Ahbe, S., A. Braunschweig, and R. Muller-Wenk. 1990. Methodik fur Okobilanzen auf der Basis okologischer Optimierung. [Methodology for ecobalances on the basis of ecological optimisation.] In Schriftenreihe Umwelt, no. 133. Bern: Bundesamt fur Umwelt, Wald und Landschaft (BUWAL).

Alexander, B., G. Barton, J. Petrie, and J. A. Romagnoli. 2000. Process synthesis and optimisation tools for environmental design: Methodology and structure. Computers and Chemical Engineering 24(2-7): 1195-1200.

Azapagic, A. 1996. Environmental system analysis: The application of linear programming to life cycle assessment. Ph.D. dissertation, University of Surrey, Surrey, England.

Azapagic, A. and R. Clift. 1998. Linear programming as a tool for life cycle assessment. International Journal of Life Cycle Assessment 3(6): 305-316.


Bare, J. C., P. Hofstetter, D. W. Pennington, and H. A. Udo de Haes. 2000. Life cycle impact assessment workshop summary: Midpoints versus endpoints: The sacrifices and benefits. International Journal of Life Cycle Assessment 5(6): 319-326.

Basson, L. 1999. Decision-making for effective environmental management: Multiple criteria decision analysis and management of uncertainty. Ph.D. thesis proposal, Department of Chemical Engineering, University of Sydney, Sydney, Australia.

Basson, L. and J. G. Petrie. 1999a. Multiple criteria approaches for valuation in life cycle assessment. Paper presented at the annual meeting of the American Institute of Chemical Engineers (AIChE), 31 October-5 November, Dallas, Texas.

Basson, L. and J. G. Petrie. 1999b. Decision-making during early stages of a project life cycle: Roles for multiple criteria decision analysis, life cycle assessment and ecological risk assessment. Paper presented at the 20th annual meeting of the Society for Environmental Toxicology and Chemistry (SETAC North America), 14-18 November, Philadelphia, Pennsylvania.

Basson, L. and J. G. Petrie. 2000. The development of a decision support framework for fossil fuel based power generation. Presentation record 225c. In Proceedings of the annual meeting of the American Institute of Chemical Engineers (AIChE). New York: AIChE Manuscript Center.

Basson, L., A. R. Perkins, and J. G. Petrie. 2000. The evaluation of pollution prevention alternatives using non-compensatory multiple criteria decision analysis methods. Presentation record 230c. In Proceedings of the annual meeting of the American Institute of Chemical Engineers (AIChE). New York: AIChE Manuscript Center.

Baumann, H. and T. Rydberg. 1994. Life cycle assessment: A comparison of three methods for impact analysis and evaluation. Journal of Cleaner Production 2(1): 13-20.

Borcherding, K., T. Eppel, and D. von Winterfeldt. 1991. Comparison of weighting judgements in multiattribute utility measurement. Management Science 37(12): 1603-1619.

Brans, J. P. and Ph. Vincke. 1985. A preference ranking organisation method (t