Global Environmental Change 11 (2001) 311–333
Environmental assessments: four under-appreciated elements of design
Alex Farrella,*, Stacy D. VanDeveerb, Jill Jägerc
aDepartment of Engineering & Public Policy, Carnegie Mellon University, Pittsburgh, PA 15213-3890, USA
bDepartment of Political Science, University of New Hampshire, Durham, NH 03824, USA
cInternational Human Dimensions Programme on Global Environmental Change, Bonn D-53113, Germany
Received 28 November 2000
Abstract
Environmental assessments generate and/or collect individual research efforts to answer policy-relevant questions and otherwise
provide technical advice to decision-makers, typically legislators, international negotiators and regulators. Though one might think
first of assessments in terms of the reports that they often produce, the implications of scientific assessment are better understood by
viewing assessments as social processes, rather than principally as documents. Assessment processes are embedded in different
sorts of institutional settings, within which scientists, decision-makers, and advocates communicate to define relevant questions for
analysis, mobilize certain kinds of experts and expertise, and interpret findings in particular ways. This social process perspective on
assessment directs attention beyond the content of assessment reports to encompass questions regarding the design of the social process. In this
paper, we focus on four elements of assessment design that are too frequently under-appreciated: assessment context and initiation,
science–policy interaction, participation in assessment processes, and assessment capacity. We show how widely these elements vary
across five different assessments and discuss the implications of this variation. © 2001 Elsevier Science Ltd. All rights reserved.
1. Introduction
As environmental issues such as acid deposition, stratospheric ozone depletion and climatic change receive increasing attention in both national and international scientific and political circles, assessment processes that connect scientific research and policy have become increasingly common and important.1
Assessment designers, organizers, and administrators have a large array of concerns at the outset of assessment processes and projects. The usual list includes such things as the technical expertise and capacity of participants, funding and resources, time frames, deadlines, structure of the assessment and reporting out, and data availability and quality (just to name a few). This paper identifies under-appreciated elements of assessment design that can affect assessment outcomes, shows that these elements are actually very important to the outcome of assessments, and provides some thoughts on how to improve environmental assessments by paying greater attention to these elements. This work is part of the ongoing, multi-year and multidisciplinary Global Environmental Assessment (GEA) Project, which is examining assessments of global and regional environmental issues.2 Two fundamental concepts developed by the GEA Project underlie this paper: (1) assessments are fundamentally social processes, and (2) assessment processes share many
*Corresponding author. Tel.: +1-412-268-2670; fax: +1-412-268-3757.
E-mail address: [email protected] (A. Farrell).
1 The earliest and most common form of environmental assessment
is project-based, which is now institutionalized in over 100 countries
and in international organizations such as the World Bank (Sadler,
1996). An international treaty on project-level transboundary environ-
mental assessment was signed in 1991, but it has been ratified only by
Sweden and the European Economic Community (now the European
Union) and is not yet in force (United Nations Economic Commission
for Europe, 1991). This paper deals with less narrow activities that
examine large-scale phenomena beyond the scope of a single project,
sometimes called ‘‘ecosystem-level’’ or ‘‘strategic’’ environmental
assessments (Merkle and Kaupenjohann, 2000; Partid!aario and Clark,
2000). A key feature of the assessments discussed here are that they
examine environmental phenomena that cross political boundaries that
matter for policy-making, usually international boundaries but some-
times jurisdictional boundaries within a single nation (i.e. state
boundaries within the US). They often accompany international (or
inter-state) negotiations on issues such as regional and global air
pollution, ocean pollution, fisheries management, and the environ-
mental impacts of international trade.
2 See http://environment.harvard.edu/gea for more information.
0959-3780/00/$ - see front matter © 2001 Elsevier Science Ltd. All rights reserved.
PII: S0959-3780(01)00009-7
important features irrespective of topic or discipline, making generalizations possible (Miller et al., 1997).

Environmental assessments generate and/or collect
individual research efforts to answer policy-relevant questions and otherwise provide technical advice to decision-makers, typically legislators, international negotiators and regulators. They are often created to inform decision-makers about issues that are new to policy-makers or controversial. Though one might think first of assessments in terms of the reports that they often produce, the implications of scientific assessment are better understood by viewing assessments as a communication process, rather than principally as a document. These assessment processes are embedded in different sorts of institutional settings, within which scientists, decision-makers, and advocates communicate to define relevant questions for analysis, mobilize certain kinds of experts and expertise, and interpret findings in particular ways. This social process perspective on assessment directs attention beyond the content of assessment reports to encompass questions regarding participation, context, presentation, evaluation and the negotiation and legitimization of boundaries between scientific and policy dimensions.

The study of environmental assessments is by no means
unique to the GEA Project. Numerous researchers have addressed similar issues, but most have looked at one particular assessment, not across topic areas (Boehmer-Christiansen, 1994a,b; Cohen, 1997; Elzinga, 1997; Winstanley et al., 1998; Cowling and Nilsson, 1995; Oversight Review Board, 1991; Rubin, 1991; Castells and Funtowicz, 1997; Tuinstra et al., 1999; Wettestad, 1997; Christoffersen et al., 2000; Eckley, 2000; Oxford Economic Research Associates, 2000). What is new in the GEA Project is that a systematic research effort has been undertaken to develop general observations and recommendations about environmental assessment processes. This view implies that assessment is a separate, independent activity from either scientific research or political choice, with its own features, norms of behavior, and limitations. The evidence presented in this paper is meant to support this claim, and to extend the insights that have been made so far on the basis of evaluations of single assessments. In particular, we hope to show that there is great variety in how an assessment can be designed, and what some of those choices mean in terms of effectiveness, variously defined.

In the research presented here, we are particularly
interested in environmental assessments involving multiple jurisdictions, typically international efforts. These assessments are often associated with political agreements between jurisdictions that address transboundary environmental problems. Such assessments may precede formal political agreements or occur simultaneously with policy negotiations. This paper draws on our examination of assessment processes embodied in the
following five cases: the Intergovernmental Panel on Climate Change (IPCC), the United States (US) Ozone Transport Assessment Group (OTAG), the Long-Range Transboundary Air Pollution regime (LRTAP) in Europe, the US National Acidic Precipitation Assessment Program (NAPAP), and the Helsinki Commission (HELCOM) for the protection of the Baltic Sea. The paper defines environmental assessment, surveys critiques of assessment, and identifies four important and under-appreciated elements in the design of environmental assessments. The third section briefly presents the five cases, and section four analyzes the different roles our four under-appreciated elements played in each case. The final section draws conclusions across the cases.
2. Defining environmental assessment
Scientific and engineering research has a major impact on environmental policy, but the processes by which this occurs are complex, contested, and poorly understood (Jasanoff, 1990; van Eijndhoven et al., 2001). Environmental assessment is the entire social process by which expert knowledge related to a policy problem is organized, evaluated, integrated, and presented in documents and otherwise to inform decision-making. Assessment processes are important mechanisms for bridging the gap between people and institutions that create and hold scientific and engineering information and those that may wish to use it (and are sometimes required to use it by law) in public policy and private sector decision-making (Levin, 1992). Environmental assessments generate and/or collect individual research efforts to answer policy-relevant questions and otherwise provide technical advice to decision-makers, typically legislators and regulators. They are often created to inform decision-makers about new and/or controversial issues.

Assessments are often key means of developing and
collectively articulating scientific and technical consensus statements, as in the case of US National Academy of Sciences panels, but they can also be important forums for political negotiation and interaction between scientists and policy-makers. Assessments are a powerful means of developing credible knowledge for policy-making. The choices upon which credible knowledge bears often have very high stakes, resulting in a "forced marriage of science and politics" (Jasanoff, 1990). Because credible knowledge is a complex and crucial basis for environmental policy, it is important to understand how it is created.

Experts involved in an assessment process generally
recognize that they are involved in a hybrid activity in which scientific expertise is accompanied by a considerable amount of social and political judgment. Some observers, however, maintain that their contributions to
assessment processes should be "objective scientific facts" uninfluenced by any value judgment (Bolin, 1994). However, our examination of multiple assessment processes below shows clearly that value judgments are made at all stages of an assessment process. For example, when assessments are initiated, value judgments are made in the framing of central question(s) and problems to be considered, in decisions about what will and will not be considered and assumed, and in decisions about who will be involved (and how and why they will participate) in the assessment. Value judgments are also embedded in decisions about which results will be used in the assessment and in choices about data analysis and interpretation. Value judgments are also crucial in the making of assessment summaries and the specification of recommendations.
2.1. Four under-appreciated elements
Environmental assessments, having grown in number and influence, have also become the subject of a growing analytical literature (assessments of assessments, if you will). Such work offers many useful suggestions for improving assessment practice, such as the need for self-evaluation, a focus on quantifying and analyzing uncertainties, and the importance of clarifying assumptions (especially those embodied in model formulation) (Morgan and Dowlatabadi, 1996; Oversight Review Board, 1991; Tuinstra, Hordijk and Amann, 1999; Keating and Farrell, 1998). Other researchers make deeper critiques, including the observation that a separation of "science" from "politics" in environmental assessment is essentially impossible, despite being highly sought-after (Jasanoff, 1990; Herrick and Jamieson, 1995; Castells and Funtowicz, 1997). In this paper, our focus lies on four practical issues: assessment context and initiation, science–policy interaction, participation in assessment processes, and assessment capacity. These under-appreciated elements of environmental assessment design and practice manifest many of the deeper critiques in specific cases. To clarify our arguments about these elements, we briefly describe the four elements before turning to discussion of five cases of assessment. The ways in which these elements were present in the case studies are given in Table 1.
2.1.1. Assessment initiation and context
In examining and designing assessment processes, important influences on the characteristics and outcomes of assessment processes, and the roles played by individuals and institutions, can be traced back to the origins of particular assessment processes. Who called for a particular assessment process and why? Do participants share an understanding of why assessment processes were initiated, or do they hold different views? What is the organizational context of the assessment process? Does it take place within a particular organization, such as an environmental policy bureaucracy, or does the assessment process cross numerous levels of jurisdiction and types of organizations? The first column of Table 2 lists many of the kinds of goals that users and assessors often have in mind when calling for an assessment. In the table, "assessors" are those individuals and organizations that conduct assessments, usually associated with research communities. "Users" mostly refers to the involved decision-makers, although many people use assessments, especially those who wish to learn more about the issue or influence the decision-makers. As the table illustrates, considerable variation exists in the rationale for initiating assessment. This variation may influence assessment practice and outcomes.

The diverse goals, justifications and institutional
contexts present at the inception of assessment processes "frame" assessment conduct and outcomes. Understanding assessments as communicative processes draws attention to the importance of interpretive "frames" that are articulated and institutionalized within assessment practice. Framing denotes processes of organizing understanding contingent on collectively held perceptual lenses, worldviews or underlying assumptions. Understood in this way, framing is crucial for the everyday activities of practitioners of assessment, policy-making and scientific research. They make (implicit and explicit) decisions within particular frames, or when choosing among alternative frames. For example, assessment processes are framed by the initial understandings of "the problem" under examination, participants' ideas about the "stakes" associated with assessment, the organization(s) sponsoring the assessment (its rules, membership, culture, etc.), and so on.

The framing of assessments is an important determinant in the selection of people involved in the assessment process and the design of the process itself. It is clear that assessments rarely have one widely agreed-upon goal. In many cases, multiple actors want particular types of assessment processes for a variety of reasons. That many different actors and goals co-exist within "an assessment" essentially requires a process view of assessments. The task for analysts, in part, is to parse out how certain strategic interests, frames and patterns of participation arise or come to dominate particular assessment processes. Contextual factors may include the organizational bodies administering assessments, the kind of decisions they may be intended to inform, or the level of perceived crisis among participants.
2.1.2. Science–policy interactions
The structure of interactions between the scientists and the policy-makers within assessment processes can take on different forms. These structures can be thought of as falling on a spectrum ranging from attempts to isolate scientists from policy processes, on one end, to
Table 1
Summary of four important elements of assessment design across selected examples

Helsinki Commission (HELCOM)

Context and initiation:
- HELCOM was established in the 1974 Convention on the Protection of the Marine Environment of the Baltic Sea Area (Helsinki Convention). This was negotiated among all seven of the region's states (at that time) as a response to growing scientific and public concern about the state of the Baltic Sea environment and following the 1972 Stockholm convention.
- The 1974 agreement was only possible after the easing of Cold War tensions (détente) and after resolving the "German Question."
- HELCOM became fully operational in 1979 following the entry into force of the Helsinki Convention and has expanded its areas of assessment and policy interest since then.
- In 1992, the Baltic region's nine states agreed on a new Convention, greatly revising, expanding and strengthening the 1974 version.

Scientist–politician interaction:
- HELCOM assessment participants have frequent formal and informal links to international and domestic policy-makers.
- HELCOM has helped build a broad network of leading scientific and technical institutions around the region, studying many issues and linking them to state bureaucracies and to some industry researchers.
- HELCOM helps to organize periodic high-profile regional Ministerial Conferences, attracting media and political attention, assessing progress toward environmental goals and addressing continuing and/or new environmental challenges.
- HELCOM coordinates cooperation and collaboration among scientific and technical experts, NGOs, multilateral development banks, domestic policy makers and international assistance programs.

Participation:
- HELCOM assessment participation is open to state and non-state experts from all of the Baltic littoral states and additional states within the Baltic watershed. Individuals from the wealthier states in the region constitute a majority of participants, but post-communist states and expert communities rarely lack representation.
- HELCOM draws broadly from an array of scientific, technical, legal and policy fields for assessment activities.
- HELCOM participation has expanded over time, including experts in marine science, engineering, law, political science and economics. Selection is somewhat informal and participation is paid for by national governments.
- In theory all HELCOM members support it and pay for their own participation, with an extra burden on Finland due to hosting the Secretariat. In practice, Western countries pay for most of HELCOM and subsidize participation of and implementation by post-communist nations.

Assessment capacity:
- HELCOM significantly increased Baltic region assessment capacity for marine pollution and protection and it improved monitoring and data gathering around the region. Furthermore, HELCOM assessment has improved the state of knowledge about the Baltic regional ecosystem and many of its processes and constituent organisms.
- Assessment capacity was improved in all Baltic littoral states through HELCOM activities. Many HELCOM activities include training components specifically designed to increase capacity, raise awareness and enhance understanding.
- Assessment capacity is maintained through ongoing assessment activities and an organizational structure of committees and working groups.
- HELCOM successfully disseminates scientific and technical information and recommendations throughout the Baltic region.
Convention on the Long-Range Transport of Air Pollution (LRTAP)

Context and initiation:
- LRTAP emerged out of a desire for East/West cooperation and a recognition that both sides had polluted air, spurred on by the 1972 Stockholm Conference on the Environment. It was preceded by original Swedish and OECD concerns. LRTAP was signed in 1979 and is housed in the UN Economic Commission for Europe (UN-ECE), to which all Parties already belonged.
- Numerous assessment activities were developed within the LRTAP framework, notably EMEP, an international air pollution monitoring program, and RAINS, an air pollution and effects model. Many LRTAP-initiated assessment activities were later supported by (or taken over by) the European Union (EU).
- LRTAP assessments began in 1984 and were initially concerned with acidification, but now also address tropospheric ozone, heavy metals, and persistent organic pollutants.
- The initial focus of LRTAP was on acidification, although only a few northern West European countries thought it was a serious problem at the time.

Scientist–politician interaction:
- LRTAP assessments involved many prominent research scientists and scientific organizations, but these were mostly organizations from pro-environmental protection nations, especially NILU, RIVM, and IIASA. Important interactions between scientists and politicians in the Soviet Union also occurred.
- During LRTAP negotiations, information developed by EMEP and by RAINS is often the point of departure, and the model is frequently relied upon to forecast expected outcomes of possible emission reduction agreements. These negotiations generally take place during "technical" meetings; the actual negotiators would sometimes just add final editing to the text of agreements that had largely been made previously.
- In the leading states there is significant scientist–politician interaction at the domestic level, both formal and informal. This is not the case in the other nations, creating an asymmetry in the way scientists and politicians interact at the international level; non-leading states are at a disadvantage during negotiations because their delegations are less familiar with the technical dimensions of the issue.

Participation:
- LRTAP assessments are open to all European nations, but they are influenced strongly by countries interested in pollution control, partly because they devote so many resources to the issue and produce so much research. This group of "lead countries" includes Norway, the Netherlands, Germany, Sweden, and Austria.
- The United States and Canada participate in LRTAP assessments to a very limited extent; these nations mostly rely on domestic monitoring and modeling.
- Since LRTAP was a product of the Cold War, it has always been important to include Soviet/Russian participation.
- The UN-ECE supports some travel to LRTAP negotiations by representatives from post-communist nations, but rarely supports participation in the assessment activities themselves.

Assessment capacity:
- LRTAP assessment activities significantly increased European assessment capacity for transboundary air pollutants, mostly by installing many monitoring sites in countries that otherwise would not have done so.
- Assessment capacity in poorer European countries (i.e. those in the South and East) did not develop much due to LRTAP assessments, although the EU fostered capacity development in indirect ways.
- Several important concepts and activities were promoted by LRTAP assessment activities, including the discovery of Waldsterben and the development of Critical Load maps.
- LRTAP assessment activities led to important advances in scientific understanding of acidification effects, and have helped significantly in the development of larger formal assessment efforts, many under the auspices of the European Union (especially the EUROTRAC efforts).
Inter-Governmental Panel on Climate Change (IPCC)

Context and initiation:
- IPCC was established by the World Meteorological Organization (WMO) and the United Nations Environment Program (UNEP) in 1988, in response to increasing concern about anthropogenic climatic change.
- The mandate of the IPCC was to produce a state-of-the-art assessment on the scientific basis of concern, possible impacts of climate change and possible response options.
- The IPCC publishes state-of-the-art assessment reports (in 1990, 1992, and 1995 to date) and, since the mid-1990s, special reports on issues raised in the negotiations on the UN Framework Convention on Climate Change (UNFCCC).
- The panel was established as "Intergovernmental" to ensure that nations would have control of it, and that it would not develop its own power.

Scientist–politician interaction:
- IPCC is an intergovernmental body, so authors of the reports are nominated by governments.
- The nominated authors work together to produce draft chapters which undergo expert review and government review, and these reviews are taken into account in the final draft chapters.
- A policy-makers summary of each report is drafted and approved line-by-line by governmental officials in a final plenary session.
- The roles of the IPCC and the Subsidiary Body for Scientific and Technological Advice of the UNFCCC (SBSTA) are not clearly separable.

Participation:
- Participants in the assessment are nominated by governments and selected on the basis of their academic qualifications, with consideration given to geographical balance.
- Even before the first IPCC report was published, extra efforts had to be made to increase developing country participation. A Trust Fund was established to fund developing countries' participants at IPCC authors' meetings, plenary sessions, etc.
- The first two reports (1990 and 1995) were strongly oriented to literature published in the English language.

Assessment capacity:
- IPCC has the mandate to report on the state of the art, not to do new research. From the outset, it was decided that IPCC would only include peer-reviewed published material in its reviews.
- For the Third Assessment Report, the need for regionally explicit impact assessments led to a dilemma, since much of the indigenous knowledge about impacts and how societies adapt to them is not published in peer-reviewed scientific literature.
National Acidic Precipitation Assessment Program (NAPAP)

Context and initiation:
- NAPAP was started due to a growing concern about acidification in the US and Canada and a strong interest in the newly elected Reagan Administration to delay further environmental regulation.
- The initial efforts to create an acidification assessment were a report commissioned by the Council on Environmental Quality in 1978 and a Presidential Initiative started by President Carter in 1979.
- NAPAP was authorized and funded by Congress in 1980 to "increase our understanding of the causes and effects of acid precipitation". It lasted about ten years (1981–1990) and involved 12 federal departments and agencies.
- NAPAP has a very small continuing role to provide quadrennial reports to Congress.

Scientist–politician interaction:
- NAPAP funded a great deal of research by research scientists, but had few mechanisms for interaction with politicians.
- The NAPAP director from 1980 to 1986 was the principal author of a widely criticized Interim Assessment Report that was used by those against pollution control.
- Accounts of the importance of NAPAP to policy-making vary widely.

Participation:
- Almost all NAPAP participants were U.S. government employees or research scientists in universities.
- NAPAP was governed by a federal interagency coordinating committee.
- Participants were scientists, with few, if any, policy analysts, State representatives, or decision-makers.

Assessment capacity:
- The most important innovation in the NAPAP process was the development of a unique mechanism for federal interagency collaboration. After 10 years of working together, many agency representatives felt that they were in a better position to make joint policy decisions.
- NAPAP did not create any formal or informal successor institutions.
- NAPAP contributed significantly to advances in atmospheric, aquatic, and soil science.
Ozone Transport Assessment Group (OTAG)

Context and initiation:
- OTAG was a limited-time assessment (1995–97) that occurred largely outside of the US Clean Air Act's legal framework of conjoint federalism.
- OTAG was initiated due to twin crises: a technical crisis caused by the inadequacy of state-by-state analysis to accurately represent the regional ozone problem, and the political crisis for environmental law during the 104th Congress.
- OTAG was strongly influenced by the concurrent debate about the deregulation of the US electric power sector, a trend towards reliance on incentive-based regulation, and widespread support for participatory (i.e. stakeholder) processes for regulatory development.

Scientist–politician interaction:
- The interactions between the politicians and technical experts were frequent, extended, and more or less the point of the assessment.
- Interactions within peer groups (e.g. people developing emissions inventories in state agencies) were new in that they included people from various states and industry for the first time.
- Most of the negotiations (and all of the analysis, obviously) occurred at "technical" meetings of the various Sub-Groups and Work Groups, which were headed by political leaders. Technical experts would act in their professional capacity, verify the integrity of the analysis for their organizations and represent the interests of their organizations at the same time.
- The formal presentation of technical information came during Policy Group meetings, which developed a reputation for elaborate computer animations. However, most of the Policy Group members were already aware to some degree of what would be presented, from reports by their technical staff or discussions with the political leaders that headed the technical groups.

Participation:
- The decision-making body of OTAG (i.e. the Policy Group) was made up only of State representatives. The various working groups in OTAG were open to participation by all, but States (and groups of States) and industry were the main actors.
- The USEPA did not overtly participate in OTAG, but it planned the process (including deciding on rules for participation) with state-level leaders, funded most activities, acted as an observer, and responded to specific inquiries.
- NGOs had only very minor and narrow roles in OTAG, except for NRDC's role in initiating the process by agreeing not to sue the EPA.
- OTAG involved few research scientists (i.e. those publishing in peer-reviewed journals); instead the analysis was led by technically trained experts with advanced education or sometimes long field experience.

Assessment capacity:
- OTAG greatly increased assessment capacity in those States that did not previously have it; States with severe ozone problems did not improve much.
- The ability to use air quality data analysis (as opposed to atmospheric modeling) for thinking about ozone policy was greatly increased (in recognition, at least) during OTAG.
- The development of an informal professional network of State and industry air management professionals improved the overall assessment capacity in the eastern U.S.
- OTAG did not create any formal or informal successor institutions, although it was important in establishing the reputation of a new organization, the Environmental Council of the States (ECOS).
- OTAG did not have as a formal mandate to advance science or engineering research, but during the assessment a previously unappreciated approach, the statistical analysis of large air quality monitoring data sets, was improved and became more widely accepted. Several other examples of such "incidental" advances occurred, such as improved biogenic emissions estimates.
highly institutionalized collaboration between individuals from groups on the other end. At this end of the spectrum, some people may even be both a scientist and a decision-maker. No matter where on the spectrum the science–policy interaction falls for any given assessment, each group must maintain its self-identity and protect its sources of legitimacy and credibility, so boundaries are commonly negotiated, articulated and maintained by assessment participants. It is important to note that these descriptions apply to the formal interactions between scientists and policy-makers. There may be important informal interactions as well, as described below.

One view on where assessments should be located on
that spectrum is articulated by Kai Lee, who argues that "Science and politics serve different purposes. Politics aims at the responsible use of power; in a democracy, 'responsible' means accountable, eventually to voters. Science aims at finding truths – results that withstand the scrutiny of one's fellow scientists" (Lee, 1993). Lee uses an idea of occupational roles and social functions originally proposed by Price to identify distinctive contributions made by different groups to a technological society (Price, 1965). Lee argues that the roles of individuals as politician, administrator, professional analyst and scientist are separate and that a single person cannot play several different roles at once, at least not without the risk of losing legitimacy. The cases discussed here suggest that this is not always true; in several very effective assessments the participants served more than one role, although doing so was often not easy.

An analytically useful approach treats environmental
assessments as "boundary organizations". This concept emerges from scholarship in the social studies of science which shows how scientists assert and maintain their authority to speak definitively about the character of the world around us. Jasanoff (1990) extended this analysis by demonstrating the need for scientists and policy-makers to "negotiate" boundaries between their domains in any given assessment or advisory process. These boundaries are agreements about which issues each group is authoritative on; lines between concepts, not lines on a map. Her work uses examples of standard setting for pollutants and decisions about drug approval to illustrate the challenges inherent in the production of politically legitimate and relevant advice that is also scientifically credible. These challenges are resolved through ongoing "boundary negotiation". Boundary negotiations involve agreements between regulatory bodies and expert advisory groups as to what issues each will deal with and what issues will be shared between them (Guston, 1999).

Jasanoff concludes that since the issue is not whether
there are boundaries between "science" and "policy" but where they are, and how and why they are put there, one of the key roles of expert advisory bodies is to provide a forum to define these boundaries. Allowing
expert bodies to perform this function within given institutional contexts is crucial for obtaining the political and scientific acceptability of advice. When the process is successful (and scientific credibility is obtained), it is not possible for political adversaries to deconstruct the results or attack them as "bad science". Thus, Jasanoff notes that the problem of politically motivated bias in appointing members of expert committees is unavoidable, and should be countered with administrative devices that limit it rather than with doomed attempts to banish it (pp. 244–245).3
2.1.3. Participation: who, when and how

Questions surrounding "participation" are central to
any discussion of assessment process design. By participation, we refer to which individuals and organizations are involved in an environmental assessment, and when and how they are involved. We distinguish between nominal and engaged participation. Nominal participants are formally part of assessment processes, but may or may not have much understanding of the issues at hand or much influence on outcomes. Nominal participation often results from resource constraints or from a lack of real interest in the relevant issues on the part of a participant who nonetheless needs to appear engaged. Engaged participation refers to active participation in meetings, attempts to influence decisions, contributions to the writing and editing of reports, and so forth.

Participation in the various phases of an assessment
depends on the assessment goals and design. Participation in different phases of an assessment can vary substantially, from developing the initial scope of work for the assessment, to the day-to-day conduct of the assessment, to the communication of its results. In transboundary assessments, the governments of the nations whose borders are involved are the principal participants. In some cases, special concessions are made to allow particular actor groups, such as firms or environmental NGOs, to participate directly in the assessment process. Other assessments use more formal mechanisms such as public hearings.

There are various reasons for choosing different levels
and types of participation in designing an assessment process. Most obviously, broad participation will increase the diversity of the actors involved in an assessment process, which is valuable if the views or suggestions of different groups are desired. Broad participation is seen by some as a way of permitting power to be shared. It may ease implementation of measures, by ensuring that the interests of important groups are taken into account in
3 Jasanoff's "constructivist" approach has been criticized by positivists such as Aaron Wildavsky, who claimed in a review of Jasanoff's work that "the best thing for scientists to do when they are far apart is to speak the truth exactly as they understand it to the powers that be" (Wildavsky, 1992, p. 512).
designing those measures, and thus make it easier to gain their agreement on the final decision. However, if participation is used to engage a wider array of groups in a dialogue, capacity constraints may limit the ability of some to participate and/or the ability of assessment organizers to include diverse voices and expertise. More generally, expanding participation in an assessment process can increase input (or appear to do so). Decisions about participation may also be stimulated by non-technical considerations, such as the perceived need to build a political constituency, to influence the policy agenda, or to create or remove political cover.

The self-interest of assessment organizers is another
important motivation for participation decisions, as assessors attempt to use participation to preempt criticisms. Participation can be expanded to generate new insights, shape research agendas, practices and methods, and/or to build and expand issue networks and professional communities. Of course, there are also reasons to limit participation, especially if assessment organizers and designers feel it necessary to separate scientists and engineers from the decision-makers.

Lastly, there is an interplay between decisions regarding
who participates and decisions about the rules and norms of participation in particular assessments. If, for example, assessment decisions are to be made by consensus, then incorporating critical (or opposition) perspectives becomes more problematic. Consensus decision making, in effect, gives each participant a kind of veto power. As such, incorporating critical perspectives into consensual decision making processes may lead to an inability to act or say anything specific.
2.1.4. Assessment capacity

"Assessment capacity" refers to the ability of relevant
groups, organizations or particular political jurisdictions to meaningfully engage and participate in an assessment (i.e. to get past nominal participation) and to sustain that ability over time. Most obviously, this requires possessing the necessary linguistic, scientific and technical skills (i.e. knowledgeable personnel), material capabilities (i.e. financial resources and equipment), and organizational support. Developing and maintaining the technical and organizational aspects of assessment capacity requires resources, so differences in wealth are often an important cause of differences in assessment capacity among relevant and/or participating jurisdictions. Differences in resource allocation may also result from the fact that not all participants have the same level of internal (i.e. domestic) interest in the issues under assessment.
3. The cases
This research compares experience across five cases of environmental assessment, spanning the range of subnational, national, international, and global levels.4 Each case is briefly described below. Table 1 presents specific information regarding the four under-appreciated elements of design for all five cases. This list of cases, largely determined by the authors' previous research, covers a set of assessment efforts that vary in geographic and political scope, contain relatively mature examples of assessment processes, and deal with a host of environmental phenomena. Thus, we argue that insights gained from looking at this set of assessment cases have a reasonably high level of validity.5
3.1. LRTAP
The technical advisory bodies associated with the 1979 Convention on Long-Range Transboundary Air Pollution (LRTAP) are a classic example of an international environmental assessment process. LRTAP's roots are in Swedish concerns about increases in freshwater acidity over the 1950s and 1960s, which drove extensive European research on the issue (Oden, 1967; Bolin, 1972; Organisation for Economic Cooperation and Development, 1979; Cowling, 1982; Lundgren, 1998, pp. 74–78). To some degree, LRTAP was actually created as part of Cold War maneuvering in Europe; the Soviet Union desired a means of continuing the process of détente and chose environmental issues (among others) as a topic through which to do so. The appropriate body for this task was thought to be the United Nations Economic Commission for Europe (UN-ECE). The Geneva-based UN-ECE is one of five regional UN bodies that collect and distribute information and facilitate cooperation between nations. It covers 34 nations, including the United States and Canada.

The initial convention was negotiated for several
years before being signed in 1979 and entering into force in March of 1983. It simply identified transboundary air pollutants, especially SO2, as an important issue and provided a framework for cooperative international action. There is a very small administrative body, called the LRTAP Secretariat, housed within the Air Pollution Unit of the ECE's Environment and Human Settlements Division. The primary job of the Secretariat's
4 We admit to an emphasis on United States and European examples, and that possible generalizations of our findings may thus be limited. On the other hand, the similarities across the rather different cases we do have give us some confidence that our conclusions are fairly robust.

5 In the interest of brevity, this section gives only a cursory treatment to the well-known and better-documented cases (IPCC, NAPAP, and LRTAP) under the assumption that most readers will already have some familiarity with them (although references to more complete treatments are given). For more information on the cases, see: http://www.oar.noaa.gov/organization/napap.html, http://www.ipcc.ch/, http://www.unece.org, http://www.helcom.fi/oldhc.html, http://www.epa.gov/ttn/rto/otag/index.html
five-person staff is to organize meetings of various LRTAP bodies – some two to three score per year. The substantive work of LRTAP is done by government officials of signatory nations through a system of Working Groups and Task Forces.

The policy-making group is the Executive Body (EB)
that is made up of government officials and meets once per year. The Working Groups are open to representatives from all signatories and primarily draft the regulatory protocols and manage collective research projects (most notably EMEP, the air quality monitoring network of LRTAP). Below the Working Groups are various Task Forces that carry out specific activities for the EB. These Task Forces are open to any willing participant, but in practice all of the Working Groups and Task Forces are headed and predominantly staffed by representatives from the northern, western countries. Detailed discussions and critiques of the structure, processes, and outcomes of LRTAP are contained in Boehmer-Christiansen and Skea (1991), Levy (1994), Wettestad (1997), Castells and Funtowicz (1997), and Connolly (1999).
3.2. NAPAP
Concerns about acidification emerged in the United States in the 1970s as well, and remained controversial for many years (Cowling, 1982; Forster, 1993). These concerns led to the creation of a national-level assessment effort that eventually lasted from 1982 to 1991, the National Acid Precipitation Assessment Program (NAPAP).6 As with the topic of acidification itself, NAPAP was quite controversial (Cowling, 1988; National Acid Precipitation Assessment Program, 1991; Rubin, 1991; Oversight Review Board, 1991; Cowling and Nilsson, 1995; Herrick and Jamieson, 1995).

NAPAP was first proposed in 1978, legislation to
fund the program was enacted the following year, and research actually began in 1982 (Galloway and Cowling, 1978). Most of the funding for this effort came from the US Federal government and amounted to over half a billion dollars over the period 1980–1991. NAPAP sponsored investigator-initiated research by researchers in many different organizations: research universities, national laboratories, federal government agencies, and others. At least four US states and two Canadian provinces participated in NAPAP by funding research, notably California, Florida, New York, Ontario, Quebec, and Wisconsin. The Electric Power Research Institute, the Mellon Foundation, and the Natural Resources Defense Council provided private funds for NAPAP-associated research.
Although it mostly occurred within the context of a single nation, NAPAP is considered a transboundary environmental assessment because it did have some Canadian participation in addition to the very substantial US activities. Moreover, acidification is an intensely regionalized issue in the US: most of the emissions come from the Ohio Valley while most of the damage occurs in New York and New England, and coal interests (mine owners and miners) in the eastern and western parts of the country also find the issue divisive due to differences in low- and high-sulfur coal resources. The strong federalism of the US political system means that such a regionalized issue will have many elements apparent in transboundary assessments, especially if several state governments participate, as was the case for NAPAP.
3.3. HELCOM
The Baltic Sea has experienced a number of significant environmental changes, including increased levels of toxins; increased levels of nutrients (eutrophication); decreased oxygen levels; increasing salinity; and increasing temperatures. Regional international cooperation has been most concerned with the first three of these (increasing toxins and nutrients and declining oxygen levels) as well as threats to biodiversity. Scientific study of the Baltic Sea, much of it organized by international scientific groups, began in the early 20th century. Bilateral and multilateral international environmental protection arrangements for the Baltic Sea date back to the late 1960s. Twice, representatives of the Baltic Sea littoral states gathered in Helsinki to sign comprehensive environmental protection treaties: first in 1974 and again in 1992.

The 1974 Helsinki Convention was the first regional
international agreement limiting marine pollution from both land- and sea-based sources, whether air or water borne. The 1974 Convention established the Helsinki Commission (HELCOM) as the regime's secretariat and central organization. The Baltic Sea environmental protection regime was constructed and operated across the ideological and strategic divide between East and West, becoming a model for other regional environmental protection regimes and conventions. Scientific and technological discourse and scientific "experts" dominated participation and activities during regime formulation and in the structure and operation of HELCOM for most of its 20-plus years of operation (Haas, 1993; Hjorth, 1992, 1996; VanDeveer, 1997).

HELCOM issues non-binding environmental policy
"Recommendations" with the unanimous support of the parties. State representatives with relevant scientific, technical and legal expertise work out recommendations in committee. Thus, HELCOM "recommends" common (regional) environmental policy standards and
6 Technically, NAPAP still exists, but it has a tiny budget and no longer supports research.
procedures to participant states. These HELCOM recommendations, over 175 of them, are based on the work of the scientific and technical advisory bodies described below and on state practice and regulatory experience. They recommend environmental policy content and practice to the member states of the regime. The full Commission meets annually to administer the Convention and pass recommendations. Since decisions are taken by consensus at all stages, disagreements from working groups and committees do not progress to the Commission. The fact that recommendations to the full Commission must pass out of the permanent committees of experts helps to maintain the appearance of a separation of scientific and political spheres, by relegating disagreements to further work among these "scientific experts" instead of passing them along to the "more political" Commission. This adds to the science-based legitimacy of decisions because, ostensibly, the direct representatives of states at the level of the Commission do not tinker with scientific recommendations during political meetings. In reality, plenty of "politicking" goes on within the scientific and technological working groups.

The early organization of HELCOM and its permanent committees, established under the auspices of the Interim Commission, was simple and small. For most of its history, HELCOM had three permanent committees (Maritime, Combating, and Scientific-Technological) as well as numerous working groups of experts. Subsidiary groups, usually committees, formulate recommendations and proposals for adoption by the full Commission. The Maritime Committee (MC) advises the Commission mainly on pollution from ships, offshore platforms and waste disposal in ports. It works closely with the International Maritime Organization (IMO) in London. The Combating Committee (CC) formulates rules and guidelines for combating spillage of oil and other harmful substances.

The Scientific-Technological Committee (STC) dealt
primarily with issues concerning the monitoring and assessment of pollution, its sources and its means of entry into the sea. The STC also promoted scientific and technical cooperation with relevant regional international bodies. Recently the STC was divided into the Environment Committee (EC), which concentrates on environmental monitoring and assessment, and the Technological Committee (TC), which focuses on restricting point and non-point source pollutant discharges. As HELCOM's organizational structure and activities expanded, it became the center of both regional "scientific" and "political" activities and expectations concerning environmental protection of the Baltic Sea (VanDeveer, 1997).

Professional staff positions are filled with nationals
from Baltic littoral states with marine-related scientificand technical training and experience. They are rarely
Foreign Ministry officials, and they generally have no formal diplomatic training. In recent years, these positions have tended to go to people with extensive previous experience within the HELCOM organizational structure and within national HELCOM offices. Other than a handful of full-time administrative positions at HELCOM, individual participants in the organization fulfill their HELCOM-related duties in addition to (or in conjunction with) full-time positions within state bureaucracies or research institutes. Furthermore, full-time HELCOM administrators serve limited terms. Few make long careers of HELCOM service.

HELCOM lacks formal enforcement powers and the
convention makes implementation the responsibility of member states. Generally, states use national Baltic Sea committees or HELCOM offices to implement HELCOM decisions. The Commission does not formally monitor compliance. Rather, it coordinates environmental monitoring and national discharge reporting. In recent years, however, HELCOM officials have become more explicit about which recommendations and reporting requirements have been implemented by states. The Commission bases these assessments on state self-reporting (or the lack thereof). Thus HELCOM does not monitor states; states monitor themselves. States sometimes submit "independent" assessments of their compliance and implementation to HELCOM with their reports, however.
3.4. IPCC
The Intergovernmental Panel on Climate Change was established by the World Meteorological Organisation and the United Nations Environment Programme in 1988. It was set up in view of the increasing concern that human activities are causing climatic change and was requested to provide a state-of-the-art assessment of the scientific issues related to human-induced climatic change, the potential impacts of climatic change and the possible response options. In order to do this, three working groups were set up. The Chairs of the Working Groups were nominated by governmental representatives, with careful attention to geographic balance. A small secretariat was established in Geneva. For detailed treatments of the creation of the IPCC see Boehmer-Christiansen (1994a, b) and Agrawala (1998a, b). The IPCC assessment process involves the scientific community that actually carries out the state-of-the-art assessment (until recently this was based only on consideration of literature that had appeared in peer-reviewed publications) and the governmental representatives that approve the "Policy Makers Summary" of each report line by line in plenary session.

The first report of the IPCC was published in 1990, an
interim report in 1992, and a second full report was
published in 1995 (Intergovernmental Panel on Climate Change, 1990). At the present time a third report is being prepared and should be finished in early 2001. Participation in the process has changed over time. In particular, there has been an attempt to include more scientists from developing countries as time has progressed. In addition, scientists from the community of environmental non-governmental organizations (NGOs) and from industry have been added over time. The topics of the three working groups have been modified, but Working Group 1 has remained focused entirely on "the science of climate change". The process has changed in terms of review procedures, with an elaborate process now in place for expert review, governmental review and formal responses to the reviewers' comments. The process remains one in which a consensus among the participants is sought, although increasing attention is being given to the question of "uncertainty".
3.5. OTAG
The OTAG process was part of a lengthy effort to control air pollution in the United States (Keating and Farrell, 1999; Farrell and Keating, 1998). It dealt with one of the most complex and intractable of air pollution problems, tropospheric ozone (or photochemical smog). The main question at issue in OTAG was about the long-range aspect of this problem: How do nitrogen oxide (NOx) emissions from existing coal-fired power plants in the Ohio River Valley and the Southeast (i.e. in "upwind" states) affect ozone concentrations in Northeastern cities (i.e. in "downwind" states)?

The institutional setting of the OTAG process is the
conjoint federalism of the US Clean Air Act (CAA), in which the EPA sets national health-based air quality standards, emissions standards for consumer products (like cars) and emissions standards for new industrial facilities, including power plants. All other decisions, such as transportation planning and emissions standards for existing facilities, are left to state discretion. States that fail to attain the national standard have some requirements imposed on them by the CAA, most importantly to submit to the EPA their State Implementation Plans (SIPs) detailing how they will reach attainment. The EPA has the power to withhold substantial federal transportation funds from states that fail to plan for and reach attainment and to impose a federal plan, a highly contentious and rarely taken action.

OTAG was initiated due to a perceived crisis created
by the election in November 1994 of a Congressional majority with an anti-environmental and pro-state agenda, combined with a virtually uniform failure of the states to submit new SIPs, which were due the same month (Gillespie and Schellhas, 1994; Pagano and Bowman, 1995). These SIPs were meant to finally (after
over two decades of effort) allow the metropolitan area stretching from Washington, DC to Boston to achieve the national ozone standard. However, the downwind states found that doing so would be either impossible or unacceptably expensive unless the upwind states also reduced their NOx emissions, which they were unwilling to do. Traditional methods of addressing this type of problem, such as federal lawsuits or EPA orders, were not pursued due to the fear of reprisals by the newly elected Congress, plus the belief on the part of state environmental agencies that they, not the federal government, were the appropriate group to address this problem. Thus in the spring of 1995 the EPA quietly agreed with a few key states and NGOs to hold off on lawsuits and instead engage in a "consultative process" to be completed by the end of 1996. The purpose of this process would be "to reach consensus on the additional regional, local and national emission reductions that are needed for . . . the attainment [of the ozone standard]" (Nichols, 1995).

The structure of participation in OTAG evolved to
address the various needs and issues identified by the participants. The top decision-making body was the Policy Group, whose membership was limited to state officials. Eventually 23 hierarchical subsidiary working bodies were created, which were open to all comers. The technical analysis accomplished during OTAG was performed by these lower-level groups, in which experts from different organizations, who previously had little chance to interact (except perhaps in the courtroom), worked together to develop analyses that all believed were well supported by scientific and engineering evidence. State officials and their contractors did most of the work in OTAG, although the electric power industry played a large role. The federal government generally observed the OTAG process silently while supplying funds to the states.

One of the key features of OTAG was that participation in these subsidiary groups was as broad as the range of interests in the Policy Group, and that information flowed "up" to the Policy Group in parallel informal and formal pathways. The formal pathway might be, for instance, from the Biogenics Ad Hoc Group to the Emissions Inventory Workgroup to the Modeling and Assessment Subgroup and finally to the Policy Group. In each group, a technical expert (or experts) would present the information, receive comments, perform further analysis if necessary, and then be authorized to present to the next higher group. The informal pathways for the same information would operate unseen but more quickly, allowing for considerable review and debate. Several different groups (e.g. the EPA or the electric power industry) had representatives on most of the 23 working groups and would convene privately to discuss what was going on across OTAG at any given point. This process of repeated public discussions of
particular issues, shadowed by multiple informal analyses, served somewhat like peer review. The main result was that by the time information was presented to the Policy Group, the question had been thoroughly negotiated and the answer thoroughly vetted, so the information was generally regarded as both legitimate and credible.

National politics largely brought OTAG to a conclusion when the 1996 Congressional elections repudiated the anti-environmental agenda of the Republicans. The OTAG process culminated in mid-1997 with an agreement by almost all of the participating states on a rather vague recommendation that a range of controls might be necessary, from the status quo (minor reductions for some plants) to 85% emission reductions. Subsequent to OTAG, the issue re-entered the authoritative decision-making processes of the US federal government through regulatory rules and numerous lawsuits (US Environmental Protection Agency, 1997; Kelley, 2000; US Court of Appeals – District of Columbia, 2000; Arrandale, 2000; Wald, 1999).

Nonetheless, there were some long-term consequences
of OTAG. Many important issues that were highly contentious at the outset of the OTAG process were resolved, such as establishing consistent, high-quality emissions inventories nationwide. In addition, some firms and some states re-assessed their positions on various issues based on the information revealed by the OTAG process (Farrell, 2001). Finally, many of the upwind states had little or no capacity to model ozone or write a SIP before OTAG, and the process (especially the EPA funding) gave them the chance to do so and to develop regional modeling centers.
4. Discussion
This section discusses the four under-appreciated elements of assessment design and operation in detail, using examples from, and comparing across, the five cases described above.
4.1. Initiation, context and ‘‘framing’’
As the cases illustrate, assessment processes are framed by the initial understandings of "the problem" under examination, participant ideas about the "stakes" associated with the assessment, the organization(s) sponsoring the assessment (its rules, membership, culture, etc.), and so on. Because assessments are complex social processes designed to create and improve collective understandings of environmental problems, framing effects may be obvious or subtle to participants or analysts, and can arise in many different ways.

Often framing derives rather directly from the
political institutions that ask for expert advice. For
example, the Cold War era divisions between "East" and "West" in Europe influenced the assessment processes, institutional contexts, participation patterns and assessment outcomes in both HELCOM and LRTAP. In HELCOM, the security concerns of Cold War adversaries resulted in restrictions on examination, assessment and policy recommendations in coastal waters (the most polluted areas of the Baltic Sea), and they limited the data available for use in assessment. Furthermore, the early focus on pollution in open waters left many major pollution sources, such as agriculture and atmospheric transport, outside of the purview of regional assessment bodies. In LRTAP cooperation, the initial framing of the "problems" and "causes" of transboundary air pollution in Europe entirely in terms of "acid rain" and its ecological impacts left many of the environmental problems of greatest concern to Eastern European states outside of LRTAP's activities. This early "acid rain" frame shaped national participation patterns in LRTAP. In recent years, as LRTAP turned its attention to pollution issues such as heavy metals and persistent organic pollutants (POPs), participation patterns established around acidification issues persisted, despite the different geographic distribution of the POPs and heavy metals problems (VanDeveer, 1998; Eckley, 1999).

An example of a more complex framing effect can be
seen by comparing how one important air pollutant, tropospheric ozone (i.e. smog), was dealt with by LRTAP and by the US assessment processes of which OTAG was a part (Keating and Farrell, 1998). In both the US and Europe, efforts had been underway since the late 1970s to understand and model regional air pollutants, including ozone. However, the initial priorities and policy-relevant research questions were quite different between the two. In the US, concerns about human health drove the policy process, so the initial focus was on understanding the complex photochemistry of ozone production. This was accomplished with the help of sophisticated, computationally intensive Eulerian grid models of the atmosphere, which could predict atmospheric ozone concentrations with good accuracy and resolution but could not determine source/receptor relationships.7 As the long-range character of ozone emerged, the local-scale grid models were scaled up accordingly. These atmospheric chemistry models were supported by separate cost-benefit analyses that examined national (or sometimes targeted national) policies.
7 In a Eulerian model, the atmosphere is divided into a three-
dimensional grid. The model tracks the chemical reactions of
pollutants within each grid cell and the movement of pollutants
between grid cells over time. While the Eulerian formulation allows for
the representation of complex three-dimensional wind fields, the
models are limited by the scarcity of wind speed and direction
observational data, particularly above the ground.
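For readers unfamiliar with these model families, the Eulerian scheme described in footnote 7 can be sketched in a few lines of code. The following toy one-dimensional model is purely illustrative (all names and parameter values are our own, not those of any model used in the assessments discussed here): pollutant is advected downwind between fixed grid cells and decays by first-order chemistry within each cell.

```python
import numpy as np

def eulerian_step(conc, wind_speed, dx, dt, decay_rate):
    """Advance grid-cell concentrations one time step
    (upwind advection plus first-order chemical loss)."""
    courant = wind_speed * dt / dx           # fraction of each cell blown downwind
    assert 0.0 <= courant <= 1.0, "time step too large for this grid spacing"
    moved = courant * conc
    new = conc - moved                       # pollutant leaving each cell
    new[1:] += moved[:-1]                    # pollutant arriving from the upwind cell
    return new * np.exp(-decay_rate * dt)    # chemical decay within each cell

# Emission pulse in the first of ten cells, tracked for five steps.
conc = np.zeros(10)
conc[0] = 100.0
for _ in range(5):
    conc = eulerian_step(conc, wind_speed=5.0, dx=10.0, dt=1.0, decay_rate=0.01)
```

The grid formulation yields a full concentration field, but, as the footnote notes, it says nothing by itself about which source caused which concentration.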
A. Farrell et al. / Global Environmental Change 11 (2001) 311–333 323
By contrast, ecological concerns, especially acidification, first drove European assessment processes: the initial focus was on long-range transport of sulfur dioxide and on measuring its ecological impacts. Since from the outset the scale of the problem was large and a key issue was understanding which country's emissions were contributing to which country's acidification problem, Lagrangian plume models were developed.8
These models are simpler and less finely resolved than grid models, but they can calculate source/receptor values. These atmospheric chemistry models were integrated into a single modeling framework (RAINS) that also optimized emissions control policies on a country-by-country basis. As European policy-makers became more concerned about ozone in the late 1980s, the RAINS framework and its Lagrangian modeling were adapted to study ozone. During the 1990s, the United States and the European
LRTAP nations struggled during joint assessments for future ozone control strategies, including LRTAP Protocols (the US is a signatory to LRTAP), largely because of the differences inherent in the framings created by the two modeling approaches. The scientists on the two sides had built up different standards of credibility for providing advice to policy-makers (a requirement for accurate, high-resolution concentration estimates vs. a requirement for integrated analysis that included economic concerns and could show source/receptor relationships) and thus disagreed sharply with each other about the quality of each other's modeling. This slowed down the development of policy mechanisms for some time.
4.2. Scientist–politician interaction
The cases presented above illustrate the spectrum of possibilities for designing the structure of interactions between the scientists and the policy-makers within assessment processes. At one extreme is the IPCC, which is most explicitly structured to attempt to isolate scientists from policy processes. At the other, HELCOM and LRTAP feature highly institutionalized collaboration between scientific advisors and policy-makers. The IPCC itself produces assessments based on the
state of the art of knowledge about the climate issue, e.g.
(Intergovernmental Panel on Climate Change, 1990), but it does not make policy recommendations per se, so the scientific work remains insulated from policy recommendation and decision-making processes (Bolin, 1994). The most widely read product of the IPCC, however, is not the Panel's multi-volume assessment report but the much shorter Summary for Policymakers. The scientists do not write this document at all; rather, political representatives, who must agree on every word, negotiate the Summary text. This process greatly diminishes the influence of scientists on, arguably, the most important outcome of the IPCC. Similarly, during the OTAG process a conscious
effort was made to divide "technical" issues from "political" ones and to keep subsidiary working groups focused on the former while the Policy Group dealt with the latter. This is a good example of "boundary negotiations", and the existence of different groupings to deal authoritatively with each realm within a single organization is evidence that OTAG was a boundary organization. Also similar to the IPCC was the way in which the final report was written: the Executive Report (the only complete and official document produced by OTAG) was negotiated on a word-by-word basis by the Policy Group.9
In contrast to these two examples, the assessment processes within the bodies of LRTAP and HELCOM provide multiple opportunities for formal and informal interactions between scientists, engineers, modelers and domestic and international policy-makers. In both of these cases the various groups cooperate by sharing information and by collectively redefining and elaborating questions for assessment, modeling and policy. In the OTAG process there were intensive interactions between politicians and technical staff aimed at improving decision-makers' understanding of the physical characteristics of the problems at issue, but these almost all occurred in the informal information pathways described above. This proved to be somewhat successful; the OTAG Executive Report contains many detailed statements describing important phenomena that were the source of wide disagreement when the process began (Ozone Transport Assessment Group, 1997). Thus, unlike the IPCC, the LRTAP, HELCOM and OTAG assessment processes are characterized by relatively porous and ill-defined boundaries between the science and policy realms. Despite these institutionalized linkages, neither LRTAP's nor OTAG's scientific and technical expertise has been successfully attacked by political opponents or "captured" by policy-makers representing particular interests. Furthermore, HELCOM participants (technical and political) have used
8 Lagrangian models track changes in concentrations within a single
column of air as it is moved across a two-dimensional horizontal grid.
As the column moves, the initial concentrations are affected by
emissions, deposition, chemical reactions, and the exchange of air
through the top of the column. A concentration field is developed by
sequentially tracking multiple columns of air whose trajectories end
within the domain of interest. The fundamental assumption in the
Lagrangian formulation is that the column of air remains intact over
the length of the trajectory. The advantage of the Lagrangian
formulation is that it greatly simplifies the mathematical representation
by eliminating the need to compute many wind speed and direction
terms.
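The Lagrangian formulation in footnote 8 can be sketched just as briefly. In this toy version (again our own illustration, with invented names and parameter values, not EMEP's actual trajectory model), a single air column is followed across a row of emission cells; because each source's contribution is tracked separately, the calculation yields a source/receptor attribution directly, which a concentration field from a grid model does not provide.

```python
import numpy as np

def lagrangian_trajectory(emissions, decay_rate, dt=1.0):
    """Follow one air column across a row of source cells and return
    each source's contribution to the pollutant aboard the column
    when it reaches the receptor at the end of the trajectory."""
    contrib = np.zeros(len(emissions))       # pollutant aboard, by source cell
    for step, rate in enumerate(emissions):
        contrib *= np.exp(-decay_rate * dt)  # losses to chemistry and deposition
        contrib[step] += rate * dt           # emissions picked up over this cell
    return contrib

# A column crossing three source regions on its way to a receptor;
# the earliest source has decayed the most by the time it arrives.
shares = lagrangian_trajectory(np.array([10.0, 0.0, 5.0]), decay_rate=0.1)
```

Summing `shares` gives the concentration at the receptor, while the individual entries attribute it to the source cells along the trajectory.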
9 The OTAG Technical Supporting Document containing the detailed analyses was only "published" in electronic form on the World Wide Web; no final printed version was ever produced (see the OTAG website given in a preceding footnote).
this close cooperation to establish HELCOM as the authoritative source for regional environmental science and policy information. Clearly, the characteristics of the boundary between
the science and policy realms influence assessment outcomes. In LRTAP, for example, the negotiators of the second sulfur protocol had direct interaction with the scientific and technical participants and were able to explore the implications of various policy options as proposed. As a result, negotiators were able to agree on a protocol containing differentiated emission reductions for each of the signatory states. Through this interaction many participating policy-makers appeared to increase their scientific, technical and policy knowledge vis-à-vis acidic deposition and transport. IPCC Working Group 1, in contrast, produces large scientific reports based on available peer-reviewed literature. In a separate process, governmental delegates approve a summary of these reports and official state negotiators of the climate regime are briefed about the summaries' content. Few, if any, governmental negotiators participate in the assessment process itself, and generally they have infrequent, ad hoc and informal contact with IPCC participants. Further research is needed to develop a scheme to characterize science/policy boundaries and examine how the nature of these boundaries influences assessment process outcomes. In OTAG, institutions play multiple roles without
much loss of legitimacy because of a relatively open and transparent process, although other factors had negative influences on legitimacy (Keating and Farrell, 1999). Alternatively, individuals and institutions may have been able to play multiple roles in NAPAP without an apparent loss of legitimacy because of the low salience of the acid precipitation issue. In other words, if the issue were higher on the political agenda, then it would matter much more if individuals were playing multiple roles. In the IPCC, scientists in Working Group 1 were intended to be insulated from direct influence from the policy realm, leaving their scientific and technical legitimacy dependent on the perceptions of their work within scientific communities. LRTAP provides another example of judgments made
during assessment processes. During 1991 and 1992, there was a dip in interest in LRTAP negotiations. Participating modelers thought that this might be due, in part, to the use of three different models to support the negotiations, thereby potentially increasing the sense of uncertainty. The Task Force on Integrated Assessment Modeling decided to use only the RAINS model to reduce the perceived uncertainty, without any explicit selection criteria for choosing this particular model.10
Another example of judgments made in the LRTAP process is provided by Patt, who describes how involved actors chose to center analysis on ecosystem damage rather than on human health impacts (Patt, 1999a, b). This choice significantly influenced the assessment process and the design of policy solutions. The decision to base policy on critical loads for ecosystems, rather than on emission reductions to improve human health, appears to have been based on political considerations. According to studies cited by Patt, the economic damages from health effects in Europe far exceed those from damages to materials or crops.
Quantitative modeling efforts often lie at the intersection of public policy-making and scientific and technical expertise. The use of models in assessment processes introduces many important value judgments. Assessment processes in LRTAP, NAPAP, IPCC, and OTAG all involve modeling in some form. Modelers must produce mathematical simplifications of physical (and sometimes economic) phenomena with the tools available to them. Producing a model is a selective effort, necessarily involving judgments about what to include in the model and what to exclude. In order to communicate with people outside of specialized modeling or technical communities, modelers perform a second stage of simplification as they describe their work and its results. Here, another set of judgments is made. Thus, assessment efforts could benefit from more explicit reflection and documentation of the many judgments made throughout these processes.
The assessment processes examined here show that
individuals and institutions can and do play multiple roles in (and around) assessment processes. They often participate in scientific and technical assessment, policy recommendation, and policy-making, though usually in different forums. However, the norms and expectations regarding what is allowed of such individuals and groups vary across assessment processes and professional communities. This variance may be associated with the political salience of the issues at hand, the levels and nature of pre-existing and constructed trust and credibility among participants, and the prevalence of scientific and technical organization and consensus. In addition, the multiple role players serve as boundary spanners or linkages between scientific/technical and policy-making realms, frequently framing or "brokering" information among participants.11
4.3. Participation
The five assessments studied exhibit substantial variance in what sorts of people participated and how they did so. It appears that this variance results from several factors, including differential access to resources
10 Leen Hordijk presented this example. He pointed out that although this selection was based on a value judgment, the participants did not seem to be aware of this at the time.
11 On the importance of "knowledge brokers" see (Litfin, 1994).
and interest in issues under assessment. In LRTAP assessment, lack of resources (financial and/or technical) reduces the participation of individuals and organizations from the countries of Central, Eastern and Southern Europe ("the European periphery") (VanDeveer, 1998; Farrell, 1999). Furthermore, the breadth of participation in LRTAP is often exaggerated by LRTAP officials and scholars, making national participation appear to be more symmetrical across countries than it actually is. LRTAP has institutionalized the environmental interests of northwestern European nations (e.g. Germany, Norway, the Netherlands and Sweden), especially regarding acidification concerns. The peripheral European nations have little influence on LRTAP decision-making. To some degree, this is not too important to most peripheral state officials, because their LRTAP participation is more about image than about environmental protection. These countries are often more interested in becoming, or appearing to become, "good Europeans" than in transboundary environmental protection per se. Environmental cooperation is thus a vehicle for demonstrating "European-ness", and their LRTAP participation occurs in this light (VanDeveer, 1998, 1997; Levy, 1994; Farrell, 1999). LRTAP's prioritization of acidification effects had implications for national participation as well. Eastern and Southern European countries simply did not perceive these issues to be important problems for them. Central and Eastern European pollution concerns, such as urban air pollution and human health effects, were not at the center of LRTAP. Today, LRTAP is expanding the scope of its interests to include POPs, heavy metals, eutrophication, and tropospheric ozone, yet participation patterns in its assessment processes largely reflect national interests in acidification (VanDeveer, 1998; Eckley, 1999). Participation in IPCC has been broadened over time
by IPCC officials and organizers to include more individuals from many different countries (especially developing countries) in an attempt to address perceived Western dominance and bias (Kandlikar and Ambuj, 1999). Diversification of national participation costs money, which must come from sponsoring organizations and states, and it often creates tensions among various criteria for vetting participants. Liliana Botcheva discusses mechanisms through
which the nature of participation is likely to affect the credibility of the information being communicated through an assessment process (Botcheva, 1998). The study is based on information about the use and perceived credibility of different economic assessments of European air pollution standards in Poland and Bulgaria. Botcheva shows that in Poland the inclusion of multiple political perspectives in a knowledge-building process enhances its credibility and communication power to multiple audiences. Building such a
participatory assessment process involves complex interactions between relevant actors and technical experts and requires considerable institutional capacity to facilitate such interactions without sacrificing academic quality. Domestic institutional and expert capacity played an important role in this participatory process. Botcheva shows that, while wide political participation might be a commendable quality of an assessment process, it is often difficult to achieve, and the ability to design a sufficiently participatory process without undermining its technical quality depends on the institutional framework within which the assessment is embedded. These arguments should be discussed further, particularly with regard to their implications for assessment design. The actors participating in assessments also vary in
different aspects or stages of assessment processes (e.g. scoping and initiating an assessment, conducting it, and communicating the results). For example, in the OTAG process officials from US states and USEPA carried out the initial scoping during closed-door negotiations in early 1995 (Keating and Farrell, 1999, pp. 29–33). Indeed, industry was surprised to learn of the existence of OTAG several months later, and was largely excluded from the process throughout its duration (pp. 91–94).12 Yet the OTAG process evolved over time, and industry was eventually able to have some scenarios included in the discussions that were not included in the scope of work. They did this by influencing receptive states, by funding some analysis in loose cooperation with the OTAG analysis, and by conducting parallel activities. With a few notable (but limited) exceptions, NGOs played little part in OTAG, including the scoping of the assessment. However, one of those exceptions was the threat of a lawsuit that helped create OTAG in the first place (pp. 29–33). The objectives of participating in scoping differed
among groups, with some states and industries taking a defensive stance, and other states participating in order to fulfill ideals of federalism, that is, proving that they could be trusted to act responsibly on interstate air pollution problems without the need to turn to the federal government. Participation in the OTAG assessment broadened over time and included technocrats from state air offices, contractors, a variety of businesses and other actors (notably, very few academic researchers participated in OTAG; see Keating and Farrell, 1998, pp. 96–97). On the one hand, the broad participation meant that, at the end of the process, more people possessed a better understanding of the technical issues of ozone transport. On the other hand, achieving consensus among 37 states about the assessment's conclusions resulted in
12 OTAG was an important exercise in federalism, so it was crucial that the states were seen as in control of the process, not the federal government and not industry.
long delays and weak recommendations. All actor groups participated in the communication of assessment results, but the results were not well communicated outside the assessment community. For example, the executive summary was distributed only to participants, in an electronic version. Analysis of a range of assessment processes highlights
the influence of participation patterns on the communication of assessment outcomes. Often, participation in this phase was very politically influenced. Decisions about the content of reports or executive summaries (or about whether to have them at all) were made at this stage. The "line-by-line" review of the IPCC Policy-Makers Summaries, discussed in Section 4.2, is a case in point. In cases such as LRTAP and HELCOM, where assessment outcomes are iterated and are not high-profile summations, scientific and technical assessors seem to retain more control over the communication of results. In sum, participation is an important design criterion
of assessment processes. We reemphasize the importance of recognizing that participation and the role it plays are not constant throughout an assessment process. The selection of participants at the various stages of the assessment process depends on the objectives at that stage and very much upon the context within which the assessment is being carried out. Decisions about participation involve concerns beyond the need for particular kinds of technical expertise. An assessment's credibility, political acceptability, utility and influence on knowledge creation and policy-making depend heavily on the content and forms of participation (who participates, when and how) within assessment processes. An important element of assessment design related to
assessment participation concerns the way in which the different perspectives and backgrounds of the participants are incorporated. Assessment processes, as bridges between science and policy, bring together knowledge from more than one disciplinary approach in ways that attempt to be useful for policy-making. How do assessment processes deal with dissenting views about the state of knowledge both within and between disciplines? How do they address differences between groups with different strategic interests (as appears to have been the case in OTAG)? Can assessment processes incorporate dissenting views about the interpretation of knowledge? The assessments examined here all used consensus-based approaches among the participants. However, there are a number of different interpretations (or definitions) of "consensus". Some assessment processes deal with dissent through the production of a "minority report", but this remains rare for global environmental issues. Others deal with differences by widening parameters for uncertainty or the range of possible outcomes or developments.
The OTAG process was consensus-based, but the meaning of consensus varied across sub-groups within the assessment. In addition, dominant OTAG participants (the US states) viewed the success of the process as important, partly to demonstrate that the states involved in the process could work together and partly to constrain future actions by the US federal government. The IPCC process is consensus-based, but a vocal group of "skeptics" emerged to disagree with the views presented by the IPCC (and labeled as consensus) on the science of climate change. Some scientists who do not participate in the IPCC process pointed out that the broad consensus developed by the IPCC did not represent a consensus of all scientists knowledgeable about climate change.
goals associated with an assessment process or product.Patt shows that assessment processes based on con-sensus tend to omit information and discussion aboutextreme outcomes in order to facilitate consensusbuilding (Patt, 1999a,b). Yet, omitting these lowprobability, high impact outcomes may facilitate con-sensus at the expense of accuracy in the presentation ofpossible scenarios or options.Examination of multiple assessment processes de-
monstrates that consensus does not always give assess-ments greater legitimacy. Much depends on the decisionrules used to reach consensus, which vary given thevarious ways to define consensus and judge when it hasbeen reached. For example, in some assessment processconsensus is simply claimed if nobody speaks against aproposition. In others, majority votes are accepted asconsensus. Consensus may mean that ‘‘nobody arguedloudly enough against a point’’ or that no powerfulactors spoke against it, or it could mean that ‘‘everyonefelt that they could live with the point in one way oranother’’. In the IPCC, reaching consensus on scientificissues often leads to homogenization and ‘‘lowestcommon denominator’’ language on controversialissues.Consensus within LRTAP policy negotiation and
scientific assessment forums frequently relies on con-sensus between a small number of the most interestedand influential states and national research communities(VanDeveer, 1998). Once these parties reach agreement,the others react and tend to follow. Participants fromcountries on the European periphery generally recognizethe central importance of ‘‘big player’’ consensus. Whilethe consensus-based approach has helped LRTAPachieve a high level of agreement and cooperation, itlikely overestimates the degree of the actual sharedcommitments and agreements among participants.Some consensus positions among HELCOM partici-pants manifest similar dynamics, with participants fromthe post-communist countries generally following thelead of West European participants once the West
European participants have achieved agreement among themselves (VanDeveer, 1997). LRTAP's experience with consensus-based approaches
parallels the experiences of other international organizations. Consensus decision-making often results in informal "vote trading" and side agreements to keep all participants on board, as well as in a lack of transparency about decisions (particularly about the reasons underlying the consensus position) (Woods, 1999). Furthermore, consensus positions reflect the consensus of those present or participating. Consensus decision-making excludes the views of those not present, such as those who do not have the resources (or permission) to participate, and it gives process organizers incentives to exclude those with dissenting views.
can be important in creating a shared view of a problem(i.e. ‘‘buy in’’), not enough attention has been given inglobal environmental assessments to other possibleapproaches for dealing with disagreement within anassessment process. For example, rather than a con-sensus-based/lowest common denominator outcome onscientific and technical topics, expert elicitation (e.g.Morgan and Keith, 1995) could be used to show therange of opinions. Testing of multiple, divergenthypotheses, an approach with a long tradition inscience, is another approach.
4.4. Assessment capacity
Some assessment processes are sustained over the long term, even when popular and political interest in the issues under assessment wanes. For example, the LRTAP assessment process addressed issues related to the negotiation of the second NOx protocol in the late 1990s, although transboundary air pollution received little public and political attention in this period. Certainly, not all assessment processes need to be sustained over the long term. But there are a number of good reasons for sustaining the capacity for performing assessments. First, it has proven hard enough to create qualified assessment teams to deal with the complex issues of global environmental change, and expertise can be difficult to replace if employment opportunities appear unstable. Furthermore, if assessment capacity is lost, then institutional memory may also be lost, thereby limiting learning from past experience. Importantly, maintaining capacity helps to ensure continued monitoring of environmental conditions and relevant human activities (such as pollution emissions). For instance, at a 1997 meeting to review the World Climate Research Program, several participants noted that the capacity to monitor the global climate system was actually deteriorating. They suggested that in the year 2007, climate scientists would be able to say less about the climate of the past 10 years than they were
in 1997. While the 1992 UN Framework Convention on Climate Change commits signatory states to maintain monitoring systems and exchange data, there are no funding commitments. In contrast, LRTAP cooperation includes a 1984 protocol to provide long-term financial support for the European Monitoring and Evaluation Programme (EMEP). EMEP remains a valuable resource for monitoring and data collection, calibration and analysis. This shared monitoring system is frequently cited as an important factor in explaining the successful and ongoing political and policy-making cooperation among LRTAP countries and air pollution experts across Europe. Sustaining assessment capacities over time also
involves sustaining personal and professional relationships, networks and communities. Assessment practitioners frequently cite factors such as trust, respect, personal commitment to particular issues, and previous interpersonal and professional experience when explaining why they chose to work with other assessment participants. While formal credentials (e.g. training, education, and work history) are often important prerequisites for assessment participation, the "human factors" inevitably play important roles. If individual and community relationships are not maintained, they must be rebuilt if a new assessment is started. Thus, sustaining assessment capacities over time is not merely a question of "keeping your skills up". It also includes the maintenance of networks and organizations that connect people who are personally committed to relevant issues and who have established and iterated relations with one another. Maintaining long-term assessment capacity also has
disadvantages, including cost and the potential for ossification. The challenge is to design processes that remain flexible and adaptive over the long term and continue to provide useful input to policy-making processes. Clearly, there are opportunity costs to maintaining a long-term capacity for performing assessments, especially in societies with greater resource scarcity. The importance of maintaining assessment capacity suggests that international assessment processes ought to include funding programs for capacity building and assessment maintenance in less developed countries and for underrepresented groups and interests. For example, such funding within the Mediterranean Action Plan programs maintains assessment capacities (networks, training programs, necessary equipment, etc.) and enhances environmental monitoring systems around the Mediterranean Sea vis-à-vis many marine pollution issues (VanDeveer, 2000).
What factors contribute to the maintenance of long-term assessment capacity? The cases examined here suggest that one important factor is "broad framing". Within the LRTAP regime, the broad frame of "air pollution" has meant that a series of protocols on sulfur
dioxide, nitrogen oxides, volatile organic compounds, persistent organic pollutants and multiple pollutants could be researched and negotiated. Initially, LRTAP focused almost exclusively on acidification issues; however, broad framing allowed assessors and policy-makers to pursue connections between various air pollution issues. Broad framing provides opportunities for entrepreneurial leadership over time.
the dedication of resources. Thus, institutionalizedmechanisms for long-term funding of assessment andmonitoring should be considered where there is aperceived long-term need for these activities. Whilemaintenance costs may be small in comparison torebuilding lost capacity, resources for such projectsremain extremely scarce in many parts of the world andin some issue areas. In such cases, explicit programmaticefforts to build and maintain assessment and monitoringcapacity across borders may be necessary to ensurelong-term capacity.
5. Summary and implications for assessment design
This paper demonstrates that our four under-appreciated elements of the design of environmental assessments (context and initiation, science–policy interface, participation, and assessment capacity) can vary across cases. It also highlights numerous implications of that variation. Because of their influence on the effectiveness (variously defined) of assessments, these issues deserve greater attention than they have often received in the past. None of these elements is fixed. All are negotiated at the outset of (and sometimes re-negotiated during) an assessment, as the assessment is organized to serve both decision-makers and scientists as a boundary organization. Our findings are summarized below.
Context and initiation play important roles in attracting relevant actors into assessment processes, as well as in ensuring that capacity for assessment of complex, interdisciplinary issues is maintained over the long term. Open dialogue about research and policy questions at the beginning of assessment processes helps to ensure that a broad framing is achieved. In practice, participants in assessments sometimes see framing as a crucial concept; even the idea that a problem needs to be assessed is a frame and can be contested. Thus, participants often contest frames. Furthermore, frames also operate as limits or boundaries. The definitions (or boundaries) of "problems" and "causes" shape assessment participation, design, conduct and communication. These framing issues influence determinations about who and what is relevant in (and to) the assessment. They may delimit issues and stakeholders, thereby shaping participation. Assessment process design also depends very much on the kind of
assessment that is envisioned. In the early phase of the assessment it is also advisable to consider what breadth and depth of assessment would be enough to achieve the assessment objectives. Finding out how much detail is enough, or too much, requires an iterative process. The historical and institutional contexts within which
an assessment process is to be carried out should be taken into account in its design. The implication of this insight, that contexts matter for assessment practice and outcomes, cautions against the common practice of transplanting assessment design ideas and models across contexts with the expectation that they will operate similarly and yield similar results. For example, in assessment processes which involve socially and economically diverse sets of actors (e.g. international scientific experts, local communities, representatives from developed and developing countries), it is often necessary to institutionalize training programs or to build research and management capacities within the assessment processes themselves. However, critiques of these efforts suggest that one cannot simply "add science" to societies which lack the cultural and technological support for scientific and technical advice (Miller et al., 1997). In other words, simply trying to replicate a particular organizational model of assessment across varying institutional contexts is unlikely to yield the expected outcomes. The structure of the science/policy interface can have
many different forms in assessment processes. For example, in some assessments there is regular interaction between the scientists and decision-makers participating in the process, with a two-way exchange of information. In other assessments, scientists and decision-makers interact rarely, if at all, and the interaction may be unidirectional. Especially given the complex issues of global environmental change and the "turbulent" policy realm in which decisions about these issues have to be made, careful consideration (and reexamination) must be given to the design of the interface. Furthermore, the negotiated boundaries and institutionalized bridges between the policy and the scientific/technical realms vary over time and across issue areas and cultures. This cautions against attempting to impose rigid and universal institutional models of assessment practice in large and complex global and regional settings. One of the most important parts of this interface is quantitative modeling, which must be designed to answer the questions that decision-makers ask while at the same time satisfying the scientific community's demands for rigor and an empirical basis.
Participation should be designed to achieve the objectives of the assessment and to account for the different kinds of participation needed at different phases of assessment (scoping, conducting and communicating). Table 2 shows, in fact, a difference between the national and international assessments in terms of participation.
A. Farrell et al. / Global Environmental Change 11 (2001) 311–333 329
In the international assessment processes (LRTAP, HELCOM and IPCC), participation is by definition drawn from many countries, but it is also broader, not restricted to government scientists (NAPAP) or to State representatives and industry (OTAG). A common feature of the international assessments, however, is that countries with fewer financial resources and less scientific capacity find it difficult to participate in the assessment process to the extent that their ultimate interest in the outcome of the process would warrant. As pointed out above, expanding participation does not necessarily benefit the assessment process, particularly in the short term. It can reduce the assessment's quality, make the assessment logistically unmanageable and/or increase the difficulty of reaching consensus. In IPCC, for example, the inclusion of more scientists from developing countries has increased discussion on topics such as the historical responsibility for greenhouse gas emissions and the economic valuation of human life. Yet expanding participation and debate within IPCC has also improved the quality of data and research in areas of particular concern to developing countries and (perhaps) begun to address IPCC's legitimacy problems within scientific and policy communities in developing countries. Designing participation so that dissenting views can be internalized in the assessment process is extremely important, as are community building and dialogue among participants. Assessment participation is not merely a matter of getting a group of people with technical expertise into the same room to draft a report. Assessment processes require attention to the social, cultural and political factors that bear on technical considerations and cannot be separated from assessment.
Table 2
Common reasons for initiating assessment processes
From the assessor's point of view
Raise awareness among decision-makers
Synthesize/compare disciplinary perspectives
Determine whether targets can be met
Enhance research capacities
Obtain funding
Enhance personal (or organizational) power and status
Conduct interesting research and learn
Find issues for further research
Generate increased support for further research
From the user's point of view
Formulate policy/policy options
Provide information to help make decisions, or to justify decisions already made
Influence technological change
Prevent or delay action
Educate assessors
Expand constituency
Promote political support for action
Influence assessment content
Potentially shared goals of assessor and user communities
Engage in boundary work in order to legitimize organizational existence and goals
Increase interaction/connections between users and assessors
Address and reduce uncertainties
Ascertain status of science/technology
Build support via engagement
Support strategic national interest
Prepare for negotiations
Educate publics
Determine political feasibility
Frame debates
Discover new facts and link scientific findings
Further unrelated political ends
Link to other issues
Solve a problem or mitigate its effects
Keep an issue on the agenda
Develop a shared understanding of the central scientific issues
Define new research frontiers
Look for missing links in or between science and policy fields
Deal with skeptics
Lastly, if assessment processes continue over the long term, the forms and content of participation should be examined and revisited to ensure that they reflect the current, not prior, needs of the assessment process.
Assessment capacity, its construction and maintenance over time, emerges from comparison across assessment cases as a centrally important issue of assessment design. Both iterated and ad hoc (or one-shot) assessment processes tend to expand the size and broaden the scope of assessment capacities over time. However, only iterated assessment is likely to sustain assessment capacity over time; the inability to sustain capacity for future needs is a problem for "one-shot" assessment exercises. The construction and maintenance of assessment capacities rely heavily on the creation and maintenance of networks of individuals and organizations. Assessment processes which pay explicit attention to expanding assessment capacity, through training, education and demonstration programs, equipment provisions and small financial transfers, can build assessment capacity in communities (and geographic areas) previously unable to participate. Assessment capacity-building efforts can improve the
quality and scope of assessment outputs by regularizing and expanding important functions such as monitoring, data collection, calibration and analysis. In short, all of the large-scale assessment processes studied here influenced the conduct of scientific and technical research over time and, when they tried to do so, raised the capacities of previously uninvolved actors to participate in assessment activities and to generate information for environmental policy-making. A few final recommendations on the design of
environmental assessments emerge from these observations. First, it is important to recognize that each environmental assessment is part of many larger social processes. Assessments are part of the processes by which societies identify, understand, and deal with environmental problems, and part of the processes by which scientists participate in society, thus helping to justify the research funding and respect given to them. Assessments are also part of the processes by which public-sector actors justify their decisions. Thus, the content of an assessment report will never be the sole determinant of policy outcomes or assessment effectiveness. Scientists hoping to have their efforts accurately reflected in policy decisions should therefore not assume that their role is done when their report is delivered. The highly inconsistent track record of assessment
effectiveness suggests that the under-appreciated elements described here are simply too important to remain so. This analysis therefore suggests that incorporating formal methods of review and self-evaluation into the design of environmental assessments is advisable. Such reflective assessment processes would allow for more "adaptive" assessment practice and ensure that the various design elements are given their proper place.
References
Agrawala, S., 1998. Context and early origins of the intergovernmental
panel on climate change. Climatic Change 39 (4), 605–620.
Agrawala, S., 1998. Structural and process history of the intergovern-
mental panel on climate change. Climatic Change 39 (4), 621–642.
Arrandale, T., 2000. Balking on Air. Governing, 26–29.
Boehmer-Christiansen, S., 1994a. Global climate protection policy:
The limits of scientific advice 1. Global Environmental
Change - Human and Policy Dimensions 4 (2), 140–159.
Boehmer-Christiansen, S., 1994b. Global climate protection policy:
The limits of scientific advice 2. Global Environmental
Change - Human and Policy Dimensions 4 (3), 185–200.
Boehmer-Christiansen, S., Skea, J., 1991. Acid Politics: Environmental
and Energy Policies in Britain and Germany. Belhaven Press, New
York.
Bolin, B., 1972. Sweden’s Case Study for the United Nations
Conference on the Human Environment: Air Pollution Across
National Boundaries: The Impact on the Environment of Sulfur in
Air and Precipitation. Kungl. Boktryckeriet, Stockholm.
Bolin, B., 1994. Science and policy making. Ambio 23 (1), 25–29.
Botcheva, L., 1998. Doing is Believing: Use and Participation in
Economic Assessments in the Approximation of EU Environmental
Legislation in Eastern Europe. Global Environmental Assessment
Project, Kennedy School of Government, Cambridge, MA.
Castells, N., Funtowicz, S., 1997. Use of scientific inputs for
environmental policymaking: The RAINS model and the sulfur
protocols. International Journal of Environment and Pollution 7 (4),
512–525.
Christoffersen, L., Denisov, N., Folgen, K., Heberlein, C., Hislop, L.,
2000. Impact of Information on Decision-making Processes. United
Nations Environment Programme, Arendal, NOR.
Cohen, S.J., 1997. Scientist-stakeholder collaboration in integrated
assessment of climate change: lessons from a case study of
Northwest Canada. Environmental Modeling and Assessment 2,
281–293.
Connolly, B., 1999. Asymmetrical rivalry in common pool resources
and European responses to acid rain. In: Barkin, J.S., Shambaugh,
G.E. (Eds.), Anarchy and the Environment. State University of New
York Press, Albany, NY.
Cowling, E., 1988. Scientific integrity in the NAPAP interim
assessment. Paper read at International Conference on Acid
Precipitation: A Technical Amplification of NAPAP’s Findings,
January 26–28, Pittsburgh, PA.
Cowling, E.B., 1982. Acid precipitation in historical perspective.
Environmental Science & Technology 16, 110–123.
Cowling, E., Nilsson, J., 1995. Acidification research: lessons from
history and visions of environmental futures. Water, Air and Soil
Pollution 85 (1), 279–292.
Eckley, N., 1999. Drawing Lessons About Science-Policy Institutions:
Persistent Organic Pollutants (POPs) under the LRTAP Conven-
tion. Kennedy School of Government, Harvard University, Cam-
bridge, MA.
Eckley, N., 2000. Effectiveness and Scientific Assessment: Lessons for
the European Environment Agency from the Global Environmental
Assessment Project. European Environment Agency, Copenhagen,
DEN.
Elzinga, A., 1997. From Arrhenius to megascience: interplay between
science and public decisionmaking. Ambio 26 (1), 72–80.
Farrell, A., 1999. Environmental protection in an industrializing,
democratizing nation: air pollution in Spain. Paper read at Annual
Meeting of the American Political Science Association,
Atlanta, GA.
Farrell, A., 2001. Electricity restructuring and ozone transport: the political
economy of inter-state public policy. In: Farrow, S., Fischbeck, P. (Eds.),
Improving Regulation. RFF Press, Washington, DC.
Farrell, A., Keating, T.J., 1998. Multi-Jurisdictional Air Pollution
Assessment: A Comparison of the Eastern United States and
Western Europe. Belfer Center for Science and International Affairs,
Harvard University, Cambridge, MA.
Forster, B.A., 1993. The acid rain debate: science and special interests
in policy formation. In: Wagner, F.H. (Ed.), Contemporary Issues in
Natural Resources and Environmental Policy. Iowa State University
Press, Ames, IA.
Galloway, J.N., Cowling, E., 1978. The effects of acid precipitation on
aquatic and terrestrial ecosystems - A proposed precipitation
chemistry network. Journal of the Air Pollution Control Association
28, 229–235.
Gillespie, E., Schellhas, B. (Eds.), 1994. Contract With America:
The Bold Plan by Representative Newt Gingrich, Representative
Dick Armey and the House Republicans to Change the Nation.
Times Books, New York.
Guston, D.H., 1999. Introducing boundary organizations. Paper read
at Workshop on Boundary Organizations in Environmental Science
and Policy, December, New Brunswick, NJ.
Haas, P.M., 1993. Protecting the Baltic and North Seas. In: Haas,
P.M., Keohane, R., Levy, M. (Eds.), Institutions for the Earth. MIT
Press, Cambridge, MA.
Herrick, C., Jamieson, D., 1995. The social construction of acid rain.
Global Environmental Change 5 (2), 105–112.
Hjorth, R., 1992. Building International Institutions for Environ-
mental Protection: The Case of Baltic Sea Environmental Protec-
tion. Linkoping University Department of Water and
Environmental Studies, Linkoping, Sweden.
Hjorth, R., 1996. Baltic Environmental Cooperation: A Regime in
Transition. Linkoping University Department of Water and
Environmental Studies, Linkoping, Sweden.
Intergovernmental Panel on Climate Change, 1990. In: Houghton, J.,
Jenkins, G., Ephraums, J. (Eds.), Climate Change: The IPCC
Scientific Assessment. Cambridge University Press, Cambridge.
Jasanoff, S., 1990. The Fifth Branch: Science Advisors as Policy-
makers. Harvard University Press, Cambridge, MA.
Kandlikar, M., Sagar, A., 1999. Climate change research and analysis
in India: an integrated assessment of a South-North divide. Global
Environmental Change - Human and Policy Dimensions 9 (2), 119–
138.
Keating, T.J., Farrell, A., 1998. Problem framing and model formulation:
the regionality of tropospheric ozone in the US and Europe. GEA
Discussion Paper. Harvard University, Cambridge, MA.
Keating, T.J., Farrell, A., 1999. Transboundary Environmental
Assessment: Lessons from the Ozone Transport Assessment Group.
National Center for Environmental Decision-Making Research,
Knoxville, TN.
Kelley, T., 2000. Appeals Court Upholds E.P.A. Rules to Reduce
Smog in the Northeast. New York Times, March 4, p. A11.
Lee, K.N., 1993. Compass and gyroscope: Integrating Science and
Politics for the Environment. Island Press, Washington, DC.
Levin, S., 1992. Orchestrating environmental research and assessment.
Ecological Applications 2 (2), 103–106.
Levy, M.A., 1994. European acid rain: the power of tote-board
diplomacy. In: Haas, P.M., Keohane, R.O., Levy, M.A. (Eds.),
Institutions for the Earth: Sources of Effective International
Environmental Protection. MIT Press, Cambridge, MA.
Litfin, K., 1994. Ozone Discourses. Columbia University Press, New
York.
Lundgren, L.J., 1998. Acid Rain on the Agenda. Lund University
Press, Lund, Sweden.
Merkle, A., Kaupenjohann, M., 2000. Derivation of ecosystemic effect
indicators}method. Ecological Modeling 130 (1–3), 39–46.
Miller, C., Jasanoff, S., Long, M., Clark, W.C., Dickson, N., Iles, A.,
Parris, T., 1997. Global Environmental Assessment Project Working
Group 2 Background Paper: Assessment as Communications
Process. In: A Critical Evaluation of Global Environmental
Assessments: The Climate Experience. Global Environmental
Assessment Project, CARE, Calverton, MD.
Morgan, G., Keith, D., 1995. Climate change: subjective judgments
by climate experts. Environmental Science & Technology 29 (10),
A468–A476.
Morgan, M.G., Dowlatabadi, H., 1996. Learning from integrated
assessment of climate change. Climatic Change 34, 337–368.
National Acid Precipitation Assessment Program, 1991. Summary
Report of the U.S. National Acid Precipitation Assessment
Program. In: Irving, P.M. (Ed.), Acidic Deposition: State of Science
and Technology. National Acid Precipitation Assessment Program,
Washington, DC.
Nichols, M., 1995. MEMORANDUM: Ozone Attainment Demon-
strations, Office of Air and Radiation. U.S. Environmental
Protection Agency, Washington, DC.
Oden, S., 1967. Nederbördens försurning (The acidification of precipitation).
Dagens Nyheter, October 24.
Organisation for Economic Cooperation and Development, 1979. The
OECD Programme on Long Range Transport of Air Pollutants,
Measurement and Findings, 2nd Edition. OECD, Paris.
Oversight Review Board, 1991. The Experience and Legacy of
NAPAP. National Acid Precipitation Assessment Program, Wa-
shington, DC.
Oxford Economic Research Associates, Ltd, 2000. Policy, Risk and
Science: Securing and using Scientific Advice. Health and Safety
Executive, Oxford, UK.
Ozone Transport Assessment Group, 1997. OTAG Executive Report.
Environmental Council of the States, Washington, DC.
Pagano, M.A., Bowman, A., 1995. The State of American Federalism
1994–1995. Publius: The Journal of Federalism 25 (3), 1–21.
Partidário, M.R., Clark, R. (Eds.), 2000. Perspectives on Strategic
Environmental Assessment. Lewis Publishers, New York.
Patt, A., 1999a. Extreme outcomes: the strategic treatment of low
probability events in scientific assessments. Risk, Decision, and
Policy 4 (1), 1–15.
Patt, A., 1999b. Separating analysis from politics: acid rain in Europe.
Policy Studies Review 16 (3/4), 104–137.
Price, D.K., 1965. The Scientific Estate. Oxford University Press, New
York.
Rubin, E.S., 1991. Benefit–cost implications of acid rain controls: an
evaluation of the NAPAP integrated assessment. Journal of the Air
& Waste Management Association 41 (7), 914–921.
Sadler, B., 1996. International Study of the Effectiveness of Environ-
mental Assessment. Environment Australia, Barton, Australia.
Tuinstra, W., Hordijk, L., Amman, M., 1999. Using computer models
in international negotiations: acidification in Europe. Environment
41 (9), 32–42.
U.S. Court of Appeals}District of Columbia, 2000. State of Michigan
and State of West Virginia vs. U.S. Environmental Protection
Agency. In: 213 F. 3d 2000. U.S. Court of Appeals, Washington,
DC.
U.S. Environmental Protection Agency, 1997. Proposed Rule for
Reducing Regional Transport of Ground-Level Ozone (Smog):
Federal Register Notice 62FR60317. Government Printing Office,
Washington, DC.
United Nations Economic Commission for Europe, 1991. Convention
on Environmental Impact Assessment in a Transboundary Context.
van Eijndhoven, J., Clark, W.C., Jaeger, J., 2001. Meeting the
changing challenge of managing global environmental risks.
In: Clark, W.C., Jaeger, J., van Eijndhoven, J., Dickson, N.M.
(Eds.), Learning to Manage Global Environmental Risks: A
Comparative History of Social Responses to Climate Change,
Ozone Depletion, and Acid Rain. MIT Press, Cambridge, MA.
VanDeveer, S.D., 1997. Normative Force: The State, Transnational
Norms, and International Environmental Regimes. Ph.D., Political
Science, University of Maryland, College Park, MD.
VanDeveer, S.D., 1998. European Politics with a Scientific Face:
Transition Countries, International Environmental Assessment, and
Long Range Transboundary Air Pollution. Harvard University
Global Environmental Assessment Program, Cambridge, MA.
VanDeveer, S.D., 2000. Protecting Europe’s seas: Lessons from the last
25 years. Environment 42 (6), 10–26.
Wald, M.L., 1999. E.P.A. is Ordering 392 Plants to Cut Pollution in
Half. New York Times, December 18, pp. A1, A10.
Wettestad, J., 1997. Acid Lessons? LRTAP implementation and
effectiveness. Global Environmental Change 7 (3), 235–249.
Wildavsky, A., 1992. Book review: The Fifth Branch: Science Advisors as Policymakers. Journal of Policy Analysis and Management 11 (3), 505–513.
Winstanley, D., Lackey, R.T., Warnick, W.L., Malanchuk, J., 1998.
Acid rain: science and policy making. Environmental Science &
Policy 1 (1), 51–57.
Woods, N., 1999. Good governance in international organizations.
Global Governance 5 (1), 39–61.