TITLE: The value of information for integrated assessment models of climate change

AUTHORS:

Stephen C. Newbold
U.S. Environmental Protection Agency, National Center for Environmental Economics
1200 Pennsylvania Ave NW, EPA West 4316-T, MC 1809T, Washington, DC 20460
Telephone: (202) 566-2293; Fax: (202) 566-2338; Email: [email protected]

Alex L. Marten
U.S. Environmental Protection Agency, National Center for Environmental Economics
1200 Pennsylvania Ave NW, EPA West 4316-T, MC 1809T, Washington, DC 20460
Telephone: (202) 566-2301; Fax: (202) 566-2338; Email: [email protected]

RUNNING TITLE: Value of information for climate change

TITLE FOOTNOTE: The findings, conclusions, and views expressed in this paper are those of the authors and do not necessarily represent those of the U.S. EPA. No Agency endorsement should be inferred.
ABSTRACT:
We estimate the value of information (VOI) for three key parameters of climate integrated
assessment models (IAMs): marginal damages at low temperature anomalies, marginal
damages at high temperature anomalies, and equilibrium climate sensitivity. Most empirical
studies of climate damages have examined temperature anomalies up to 3°C, while some
recent theoretical studies emphasize the risks of “climate catastrophes,” which depend on
climate sensitivity and on marginal damages at higher temperature anomalies. We use a
new IAM to estimate the VOI for each parameter over a range of assumed levels of study
precision based on prior probability distributions calibrated using results from previous
studies. We measure the VOI as the maximum fixed fraction of consumption that a social
planner would be willing to pay to conduct a new study before setting a carbon tax. Our
central results suggest that the VOI is greatest for marginal damages at high temperature
anomalies.
KEYWORDS:
Climate change, integrated assessment model, value of information, uncertainty, climate sensitivity
JEL CODES:
Q54, Q51
1 Introduction

The systematic analysis of climate change policies involves combining natural
science and economic models to forecast the accumulation of greenhouse gases in the
atmosphere, the response of the climate to these gases, and the impacts of subsequent
climate changes on the natural environment and human well‐being across the globe over the
course of centuries [1,2,3]. Due to the inherent complexity of this task, the “integrated
assessment models” (IAMs) typically used for these analyses are highly uncertain.1 It is
possible that some of these uncertainties can be reduced through further scientific and
economic research. However, the resources available for such studies are limited, so it is
important to prioritize new research efforts among the uncertain components of climate
IAMs to help achieve the highest possible return on these investments.
Research priorities typically evolve through an ad hoc and unstructured process of
collaboration and competition among researchers, grant funding agencies, and the editors of
scholarly journals. When researchers explicitly aim to inform research priorities in a
systematic way, this is usually done through sensitivity analysis, where two or more model
parameters are varied and the influence of these variations on key outputs of the model is
recorded. The parameters that have the strongest influence on the model outputs are then
highlighted as top candidates for further research.
A number of previous studies have conducted sensitivity analyses using IAMs. For
example, Stern found the Policy Analysis of the Greenhouse Effect (PAGE) model to be highly
sensitive to the exponent of the damage function, the elasticity of the marginal utility of
consumption, and the pure rate of time preference [4].2 Hope conducted additional
sensitivity analyses using PAGE and found it to be most sensitive to equilibrium climate
sensitivity,3 the pure rate of time preference, the exponent of the non-economic damage
function, the elasticity of the marginal utility of consumption, and the decay rate of CO2 in the
atmosphere [5 p 118]. Nordhaus conducted a sensitivity analysis using the Dynamic
Integrated Climate Economy (DICE) model [6]. Varying parameters over a range of minus to
plus six standard deviations showed that the projected global temperature rise by 2100 was
most sensitive to the growth rate of total factor productivity, with the equilibrium climate
sensitivity parameter a close second. The temperature rise was found to be nearly invariant
to the cost of the backstop technology, the damage coefficient, and the fossil fuel resource
limit. Yohe and Hope [7] used PAGE to compare the expected value of the social cost of
carbon before and after simulated adjustments to some of the probability density functions
representing uncertainty about economic parameters upon which the SCC depends. Yohe and
Hope found only very small changes to the SCC from reasonably large reductions in the
range of underlying damage function parameters.

1 Integrated assessment models can be defined "broadly as any model which combines scientific and socio-economic aspects of climate change primarily for the purpose of assessing policy options for climate change control" [2].

2 In the climate economics literature the term "damage function" is used to describe a mapping between the climate impacts of anthropogenic activity (e.g., temperature increases, precipitation changes, sea level rise) and consumption-equivalent monetized losses.

3 "Equilibrium climate sensitivity" represents the long-run steady-state temperature anomaly (increase above the pre-industrial level) that would be reached with global radiative forcing equivalent to a sustained doubling of the atmospheric carbon concentration above pre-industrial levels. As such, the equilibrium climate sensitivity provides a summary measure of the responsiveness of the climate to anthropogenic activity.
The results of these and similar exercises help to identify those parameters that have
the strongest influence on IAM outputs. However, the results of such sensitivity analyses do
not necessarily translate directly into the optimal priorities for future research. The value of
additional research depends not only on how sensitive model outputs are to a particular
parameter, but also on how strongly the parameter influences the optimal choice of policy
variables, how much is currently known about the parameter, and the cost of learning more
about the parameter.
In this paper we use a value of information (VOI) framework to formally examine the
optimal allocation of effort across areas of research related to climate change damage
assessments.4 Specifically, we examine the relative value of additional research on three
crucial components of any IAM: marginal damages at low temperature anomalies, marginal
damages at high temperature anomalies, and equilibrium climate sensitivity. Most empirical
research on climate change impacts has estimated the economic damages at relatively low
temperature anomalies, up to 2.5 or 3°C. In contrast to this traditional research focus, some
recent studies have emphasized the risks of “climate catastrophes,” or “fat‐tailed” climate
risks, which depend on potential damages at higher temperature anomalies and scientific
uncertainty about the sensitivity of the climate to greenhouse gas (GHG) emissions [15,16].
The growing appreciation for these low-probability high-impact outcomes and uncertainty
about the slope of the damage function at high temperature anomalies [17] raise an
important question for future research: Should we continue “searching under the lamp post”
by studying impacts at low temperature anomalies where data are relatively plentiful and
existing models can be reasonably well calibrated, or should we begin searching in dimmer
territory by shifting some or much of our research effort toward studying impacts at higher
temperature anomalies farther outside of the range of historical experience? This question
has been highlighted as one of the pathways for improving official U.S. Government
estimates of the social cost of carbon [19] and is the main motivation for this paper.5,6
4 The value of information is closely related to option value [8,9,10,11]. In most studies of option value the information is obtained passively through observations over time. In this paper we focus on the value of active information collection through the funding of new research. For a general overview of VOI studies in the area of environmental health risk management see Yokota and Thompson [12], for an application to medical decision-making and research design see Willan et al. [13], and for an application to climate thresholds see Keller et al. [14].

5 Some researchers have suggested that impacts at high temperature anomalies are not only unknown but may be "unknowable" (Pindyck 2013). This claim could be interpreted in at least two ways. A "strong" interpretation is that it is not possible to conduct a study that will provide any additional information on climate damages at high temperature anomalies, i.e., no research we can undertake will allow us to update our prior over b. A "weak" interpretation is that new studies can provide some information on potential climate damages at high temperature anomalies, but repeated studies will not cause the prior to converge to a degenerate probability density function over the unique true value of b; i.e., there may be a limit to how far we can narrow the probability density function over b no matter how much effort we devote to studying it. The strong interpretation would obviate the usefulness of our study with respect to damages at high temperature anomalies, since it implies the marginal cost of additional information about b is infinite. The weak interpretation, on the other hand, does not materially diminish the relevance of this aspect of our study. Our main results are principally based on the marginal value of information about each parameter and so are not compromised by the weak interpretation of "unknowable." That is, our main results rely only on the assumption that some updating of the prior over each uncertain parameter is possible, not that perfect knowledge is ultimately attainable.

6 Improving our estimates of damages at low temperatures should be possible with additional data collected within the contemporary ranges of spatial and temporal variability in temperatures. There are far fewer opportunities to collect data on damages at very high temperatures, so learning about high temperature damages may require expanded applications of mechanistic models that can produce realistic representations of spatial and temporal climate variability as well as the physiological and behavioral responses of humans and other species to those changes. For example, Sherwood and Huber examined the direct impact of climate change on humans and other mammals in the form of heat stress using an approach based on first principles of thermodynamics that is "relatively well-constrained by physical laws" [18 p 1]. Additional learning about potential climate damages at high temperatures will presumably require more research using the same basic approach of combining detailed integrated simulation models to forecast climate and economic response variables of interest, where the constituent models are parameterized using a combination of scientific and economic first principles where possible (e.g., mass balance, conservation of energy, constrained profit maximization, etc.) and statistically robust empirical relationships to fill the inevitable gaps based on our incomplete understanding of the fundamental biophysical and economic processes that will combine to determine the nature and magnitude of the climate impacts.
To examine the benefits of additional research on economic damages from climate
change and climate sensitivity, we address two specific questions. First, if we could obtain
perfect knowledge about one of these three uncertain parameters, which parameter should
we choose? Second, since any new study will produce far less than perfect knowledge, which
of the three parameters would give the highest “rate of return” to additional research
investments, conditional on the accuracy and precision of the respective study designs? The
first question is about the total value of information (or the value of perfect information),
and the second is about the marginal value of information (or the value of sample
information).
In contrast to our focus in this paper, which is to estimate the value of active learning
in the short run, a few previous studies have examined the influence of passive learning over
time on optimal climate policy. For example, Kelly and Kolstad [20] added Bayesian learning
to a dynamic optimization model. They found that learning whether climate sensitivity is
either “low” or “high” with 95% confidence will take nearly 100 years, so passive learning
had little impact on the optimal policy. Leach [22] developed a similar model but added an
additional uncertain parameter governing the autocorrelation of stochastic temperature
shocks. As a result, the time required to learn about the climate system parameters was
found to be extended by an order of magnitude. Roe and Baker [23] explained why the
distribution of potential temperature increases conditional on a given level of radiative
forcing is relatively insensitive to learning about the underlying climate processes. An ideal
model would allow us to examine the value of active learning in the short run with continued
active or passive learning over time. However, this would require a substantially more
sophisticated and numerically demanding modeling framework, so we leave these extensions
for future work (and possibly future advances in computing power). In the meantime, based
on the previous research discussed just above, we would not expect the addition of passive
learning about climate sensitivity to substantially change our results. If anything, as explained
later in the paper, we would expect the addition of passive learning about climate damages to
reinforce our central results.
Before proceeding to the model, we should clarify the scope of the paper. In this study
we focus on the demand side of the information “market.” Specifically, we address the
question: How much would a Bayesian rational decision‐maker be willing to pay for a new
study using an estimator with known consistency and efficiency properties? We do not
estimate an information cost function, so to fully optimize a portfolio of research
expenditures a more complete model would be needed. However, our estimates of the
marginal value of information are intended to represent the willingness to pay for one
additional unit of research effort, where the units of effort are defined to have equal costs
across research areas. For example, one unit of research effort could represent an average
sized research grant from the National Science Foundation. We do not give point estimates
for the normalized marginal costs since this would require a detailed examination of the costs
and estimation uncertainties of many previous research studies across several disparate
(sub‐)disciplines of economics and climate science, which is well beyond the scope of this
paper. However, we do examine a wide span of assumed sampling errors for each parameter
in an attempt to cover the plausible range of relative precision per standardized unit of
research effort among the parameters. Therefore, this study is akin to a cost‐effectiveness
analysis rather than a complete benefit‐cost analysis: our results indicate where the marginal
dollar invested in climate change IAM research should be targeted among the three
parameters of interest, not how many dollars in total should be devoted to each of these
areas of research.
2 Model

This section summarizes the formal definitions of the value of information used in
this paper, a customized integrated assessment model, prior probability distributions that
represent current knowledge about each uncertain parameter, and a generic sampling
distribution function to characterize the precision and accuracy of new studies designed to
estimate each uncertain parameter. Complete technical documentation of the model,
parameter values, and source studies used for calibration is provided in the online
supporting information.
2.1 The value of information defined
To define the value of information, let $W(x, y, z)$ be a social welfare function that
describes the decision maker's preferences over policy outcomes, where $x$ represents the
policy variables under the control of the decision maker (such as a carbon tax), $y$ is
aggregate income, and $z$ represents an uncertain "state of the world" that will partly
determine the policy outcome but cannot be influenced by the decision maker. Denote the
prior probability distribution over $z$, which describes current knowledge, as $p(z)$. The
decision maker's goal is to choose a policy that maximizes expected welfare given current
knowledge about the uncertain state of the world. The income-equivalent value of perfect
information, $VPI$, is defined as the maximum amount of income the decision maker would be
willing to sacrifice to learn the true state of the world before setting the policy. Given that the
decision maker aims to maximize expected social welfare, $VPI$ is defined by:
$$\max_x \left[ \int W(x, y, z)\, p(z)\, dz \right] = \int \max_x \left[ W(x, y - VPI, z) \right] p(z)\, dz . \qquad (1)$$
The left hand side of equation (1) is the maximum expected social welfare under the
baseline scenario, characterized by current knowledge, where the policy is chosen
conditional on $p(z)$, and the right hand side is the expected maximum social welfare, where the
decision maker will choose $x$ after the true value of $z$ is learned and aggregate income is
reduced by the amount $VPI$. Assuming welfare is increasing in income, $\partial W / \partial y \geq 0$, $VPI$
will be non-negative and equal to the maximum willingness to pay for perfect
information.
Now consider the case where this one-time learning event, a new study, provides the
decision maker with only partial information about the uncertain state of the world prior to
setting the policy. Let $l(\hat z \mid z)$ represent the sampling distribution of the estimator, i.e., the
probability that a new study will produce an estimate $\hat z$ when the true state of the world is
$z$. The sampling distribution characterizes the precision and accuracy of the study design.
After the study is conducted, the decision maker will update her prior over $z$ using Bayes'
rule: $p'(z \mid \hat z) = p(z)\, l(\hat z \mid z) / q(\hat z)$, where $q(\hat z) = \int l(\hat z \mid z)\, p(z)\, dz$ is the unconditional
probability of the study outcome $\hat z$. The income-equivalent value of study information, $VSI$,
is then defined by:
$$\max_x \left[ \int W(x, y, z)\, p(z)\, dz \right] = \int q(\hat z) \max_x \left[ \int W(x, y - VSI, z)\, p'(z \mid \hat z)\, dz \right] d\hat z . \qquad (2)$$
In this case the baseline scenario on the left hand side, in which the decision maker sets a
policy that maximizes expected welfare conditional on the current state of knowledge,
remains the same. However, on the right hand side the decision maker will choose $x$ after
the study outcome $\hat z$ is observed and the decision maker updates her prior from $p(z)$ to
the posterior $p'(z \mid \hat z)$. Therefore, $VSI$ is the maximum amount of income the decision
maker would be willing to pay to fund the study and learn its outcome before setting the
policy.
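To make these definitions concrete, the following minimal sketch computes $VPI$ and $VSI$ by Monte Carlo for a stylized decision problem, not the VOICE model: welfare is quasi-linear in income and quadratic in the gap between the policy and the state, the prior and the study error are normal, and all numbers are illustrative assumptions. With quasi-linear welfare the income-equivalent values reduce to simple differences in expected welfare, which keeps the sketch short.

```python
# Toy illustration of equations (1)-(2), not the paper's model. Welfare is
# W(x, y, z) = y - (x - z)^2, so income enters linearly and the VOI measures
# reduce to differences in expected welfare. Prior: z ~ N(1, 0.5^2).
import numpy as np

rng = np.random.default_rng(0)
y = 10.0                                  # aggregate income (illustrative)
mu0, sd0 = 1.0, 0.5                       # prior mean and std dev over z
z = rng.normal(mu0, sd0, 200_000)         # draws from the prior p(z)

# Baseline: choose x before learning z. With quadratic loss the optimal
# policy is the prior mean, so max_x E[W] = y - Var(z).
ev_baseline = y - ((mu0 - z) ** 2).mean()

# Value of perfect information: x is chosen after observing z, the loss is
# zero, so E[max_x W] = y and VPI = Var(z).
vpi = y - ev_baseline

# Value of study information: observe a noisy estimate z_hat = z + e and
# update by Bayes' rule (normal-normal, so the posterior mean is a
# precision-weighted average of the prior mean and the estimate).
sd_e = 0.5
z_hat = z + rng.normal(0.0, sd_e, z.size)
k = (1 / sd0**2) / (1 / sd0**2 + 1 / sd_e**2)   # weight on the prior mean
post_mean = k * mu0 + (1 - k) * z_hat
ev_study = y - ((post_mean - z) ** 2).mean()
vsi = ev_study - ev_baseline

print(f"VPI = {vpi:.3f}, VSI = {vsi:.3f}")      # VSI < VPI, as expected
```

In the model used in this paper welfare is nonlinear in consumption, so $VPI$ and $VSI$ must instead be found by solving equations (1) and (2) implicitly, but the structure of the calculation is the same: an outer expectation over states (and study outcomes) wrapped around an inner policy optimization.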
This framework makes three important simplifying assumptions to facilitate
estimating the value of information. First, it assumes that in both the baseline scenario (with
current knowledge) and in the counter‐factual scenario (with additional knowledge based on
a new study) the decision maker will implement the first‐best policy that maximizes
expected welfare conditional on the existing state of knowledge. Second, it does not account
for passive learning over time. And third, it assumes that the new study providing the
additional information is a one‐time event that occurs in the very short run, before the policy
is set, and is not repeated in the future. These are strong simplifying assumptions, but they
allow us to develop a sufficiently rich yet tractable model that is suitable for examining
strategic questions about the value of information and for providing reasonable initial
estimates of the relative value of information across key dimensions of the climate change
problem. In Section 4 we discuss how relaxing these assumptions in future work might
modify our results.
2.2 Integrated assessment model
To estimate the value of information measures defined above, we use a customized
IAM, the Value of Information for Climate Economics (VOICE) model, designed to solve for
the global carbon tax path that maximizes an expected social welfare function under
uncertainty. This dynamic optimization model includes the key elements of the climate
change problem—linkages between economic growth, CO2 emissions, accumulation of CO2 in
the atmosphere, global warming from the growing atmospheric stock of CO2, economic
damages from global warming, and the cost of GHG emissions abatement—but remains
solvable under uncertainty. This section summarizes the main components of the model; full
documentation including all functional forms, parameter values, and calibration notes are
provided in the online supplemental information.
As in the DICE model [6], the global economy is represented by an aggregate
production function that uses labor and physical capital to produce a single commodity as
output. In each time period, a fraction of gross output is lost due to damages from climate
change, represented by the global average surface temperature anomaly (the difference
between the current temperature and the pre‐industrial average temperature). To simplify
our examination of learning about damages at “low” and “high” temperature anomalies
independently, we use a piecewise linear damage function bounded between zero and one
with two parameters, a and b , which are the slopes below and above 3°C, respectively.7 A
fraction of output (net of climate damages) is consumed in each period and the remainder is
invested to maintain and grow the physical capital stock. The growth rates of population,
total factor productivity, and the carbon emissions intensity of output are exogenously
specified to grow or shrink from their initial values toward their respective long‐run levels at
exponentially declining rates.
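As a concrete sketch, the damage function just described might be implemented as follows; the clipping of damages to the unit interval is our assumption about how the bounds are enforced, and the parameter values are placeholders rather than the model's calibrated values.

```python
# A minimal sketch of the two-parameter piecewise linear damage function:
# slope a below the 3°C breakpoint and slope b above it, with the damage
# fraction bounded between zero and one (the bounding rule is our assumption;
# see the supplemental information for the actual functional form).
def damage_fraction(T, a, b, breakpoint=3.0):
    """Fraction of gross output lost at temperature anomaly T (°C)."""
    if T <= breakpoint:
        d = a * T
    else:
        d = a * breakpoint + b * (T - breakpoint)
    return min(max(d, 0.0), 1.0)

# Illustrative values only: a = 0.005 and b = 0.02 per °C imply damages of
# 1.5% of output at 3°C and 5.5% at 5°C.
print(damage_fraction(3.0, 0.005, 0.02), damage_fraction(5.0, 0.005, 0.02))
```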
The use of a piecewise linear damage function with one breakpoint has two
significant advantages in this application. First, we believe that data on marginal damages at
low temperatures typically will provide very little or no information about marginal damages
at high temperatures (and vice versa), and this functional form allows us to respect this
assumption. It is the most parsimonious continuous function for which the values of
additional information on damages at different temperature levels—below and above 3°C in
this case—are independent. This would not be the case, for example, for a power function,
bL aT= , since in this case the marginal damage at each level of T is a function of both
parameters, a and b . Therefore, additional data on damages not only above but also below
3°C would lead to improved estimates of both a and b and thereby improve our predictions
of damages above 3°C. So if we were to use this functional form, or something similar, then
part of the value of learning more about damages at low (high) temperatures would arise
from the additional precision this would allow in estimating damages at high (low)
temperatures. The piece‐wise linear functional form we use allows us to avoid this spurious
attribution of the value of information. Second, the inclusion of only a single break point
greatly reduces the computational burden associated with solving for the optimal tax when
trying to maximize expected welfare under uncertainty over multiple parameters and the
outcome of the study providing the new information. In other words, a piece‐wise linear
damage function with a single break point is general enough to allow for increasing marginal
damages yet simple enough to keep the optimization problem under uncertainty tractable.

7 The choice of 3°C as the breakpoint is based on the makeup of the empirical literature studying the impacts of climate change, where most studies analyzing near-term effects or using cross-section variation have focused on anomalies less than or equal to this level.
We use a simple two‐compartment model to represent the dynamics of the
atmospheric carbon stock, calibrated to match the relationship between the carbon
concentrations and harmonized emissions from the Representative Concentration Pathways
(RCPs) developed for the IPCC Fifth Assessment Report [25]. We use a one‐dimensional
energy balance model to forecast the response of the global average temperature to changes
in radiative forcing [22,26]. A key parameter that determines the response of the global
surface temperature to elevated greenhouse gases is the “equilibrium climate sensitivity
parameter,” denoted in this paper as c , which represents the long‐run temperature anomaly
arising from a sustained doubling of the atmospheric CO2 concentration relative to pre‐
industrial levels. This is the third of the three key parameters treated as uncertain in this
paper.
We assume that the marginal abatement cost (MAC) curve is a power function of the
mass of carbon emissions abated in each period. We also assume that the prices of low‐
carbon or carbon‐neutral fuels and technologies will decline over time, represented by an
exponential decline in the coefficient of the MAC curve over time. We calibrated the
parameters of the MAC curve to approximate recent results from the MIT Emissions
Predictions and Policy Analysis (EPPA) model [27].
Climate change policy is represented in the VOICE model through a revenue neutral
carbon tax. The social welfare function is the discounted sum of total utility in each time
period, where total utility in a period is the product of the population size and a power
function of per‐capita consumption. The decision maker chooses the time path of the carbon
tax to maximize the expected value of the social welfare function, conditional on the current
level of uncertainty over the parameters of the model. The carbon tax is the control variable
for the policy maker in the optimization problem.
We use this model to calculate the value of information in terms of the fixed fraction
of consumption that, if subtracted from the baseline consumption path, makes expected
social welfare with the additional information equal to that without the new information.
This is directly analogous to the VOI definitions given above, but measured as a fixed fraction
of consumption in all periods rather than an absolute level of income in one or more periods.
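A minimal sketch of this FFCE calculation, under assumed functional forms: discounted population-weighted CRRA utility, a discrete set of scenarios with probabilities, and a root finder that locates the consumption fraction $v$ equating welfare across the two information scenarios. All names and parameter values here are illustrative, not the VOICE model's.

```python
# Sketch of the FFCE value of information: find the fraction v such that
# expected welfare with the new information, after cutting consumption in
# every period by the factor (1 - v), equals expected welfare without it.
import numpy as np
from scipy.optimize import brentq

eta, rho = 2.0, 0.01   # elasticity of marginal utility; pure rate of time preference

def welfare(cons, pop, probs):
    """Expected discounted utility; cons has shape (scenarios, periods)."""
    t = np.arange(cons.shape[1])
    disc = np.exp(-rho * t)
    u = pop * cons ** (1 - eta) / (1 - eta)       # population x per-capita CRRA utility
    return np.sum(probs[:, None] * u * disc[None, :])

def ffce_value(cons_info, cons_base, pop, probs):
    w_base = welfare(cons_base, pop, probs)
    # welfare((1 - v) * cons_info) is decreasing in v; the bracket assumes the
    # information is worth between 0% and 50% of consumption.
    return brentq(lambda v: welfare((1 - v) * cons_info, pop, probs) - w_base, 0.0, 0.5)

# Example with two equally likely states and a small consumption gain from
# better-informed policy in each period:
cons_base = np.array([[1.00, 1.02], [0.90, 0.92]])   # per-capita consumption paths
cons_info = 1.001 * cons_base
print(ffce_value(cons_info, cons_base, pop=7e9, probs=np.array([0.5, 0.5])))  # ~0.001
```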
2.3 What we know now
Before we can estimate the value of new information, we must first take stock of
what we know now. That is, we must specify prior probability distributions for each
uncertain parameter under investigation. To characterize existing information regarding the
damage function parameters, we assembled published estimates of the potential future loss
of global economic output under various benchmark global warming scenarios from 17
previous studies. Results from these studies expressed as a percentage of global GDP at a
given global temperature anomaly are indicated by the points in Figure 1.
[Figure 1 about here.]
To specify priors over a and b , we calibrated two lognormal probability
distributions using this set of damage estimates. Specifically, we calibrated a prior
distribution over a by finding the parameters of a lognormal pdf that maximize the joint
probability of observing the set of reported or implied marginal damages below 3°C, treating
each study result as an independent draw from the underlying distribution. Analogously, we
calibrated a prior distribution over b by finding the parameters of a lognormal pdf that
maximize the joint probability of observing the reported or implied marginal damages above
3°C.8 This procedure results in the prior pdfs shown in the top graph of Figure 2. The 95%
confidence region for the damage function based on these calibrated prior distributions for
a and b is represented by the shaded region in Figure 1 and the mean is shown by the
solid line.
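The calibration step for each prior can be sketched in a few lines: with a lognormal likelihood and independent draws, the maximum likelihood parameters are simply the mean and standard deviation of the logged marginal damage estimates. The numbers below are placeholders, not the estimates assembled from the 17 studies.

```python
# Sketch of the prior-calibration step: fit a lognormal pdf to a set of
# implied marginal damage estimates by maximum likelihood, treating each
# study result as an independent draw from the underlying distribution.
import numpy as np

implied_marginal_damages = np.array([0.002, 0.004, 0.005, 0.008, 0.012])  # hypothetical

# For a lognormal distribution, the MLE of (mu, sigma) is the sample mean
# and standard deviation of the logged observations.
logs = np.log(implied_marginal_damages)
mu_hat, sigma_hat = logs.mean(), logs.std(ddof=0)
print(f"calibrated lognormal prior: mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
```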
We note that the two distributions are assumed to be independent for the purposes of
this study. This was a deliberate strategic simplifying assumption, which allows us to
maintain the independence between the VOI for a and b . For example, if we know a
priori that b is at least as large as a , then a new study on a could help us refine not only our
pdf over a but also that over b . This is not necessarily implausible, but we think that to a
first approximation new information on a would tell us nothing or very little about b , and
vice versa. For example, learning about climate change impacts in the agriculture sector at
low temperature anomalies, when low cost adaptation measures such as crop switching and
modifications in planting schedules are available, may provide very little or no indication
about the severity of impacts in a state of the world where CO2‐fertilization benefits have
diminished because nitrogen has become a limiting nutrient for plants or the tolerable
thresholds for plant growth have been exceeded in many areas. So‐called “Ricardian” studies
of the agricultural sector can be used to learn about damages at low temperature anomalies
[28], but approaches based less on statistical modeling and more on simulation modeling
might be required to make reliable predictions at higher temperature anomalies outside of
the range of temperature variations observed in historical data. We also note that assuming
the pdfs for a and b are independent results in roughly a 20% probability that a is greater
than b , in which case the damage function would be concave. This means that damages (net
of adaptation) would exhibit a saturation effect. We discuss the implications of this
simplifying assumption for our main results in section 4 below.

8 This is an admittedly simplistic procedure for specifying prior probability distributions for the damage function parameters. An ideal method would account for the precision and accuracy of the estimation approaches used in each study and any correlations that may exist among studies. The simple approach used here is sufficient for our goals in this paper, which are to illustrate the value of information framework and to develop preliminary VOI estimates for climate change damage assessment research.
To specify a prior distribution over c , we adopt the equilibrium climate sensitivity
distribution used by Roe and Baker [23], calibrated to match the statements of the IPCC in
their fourth assessment report (AR4) assuming a median value of 3°C and a two‐thirds
probability that the value lies between 2°C and 4.5°C.9 The prior over c is shown in the
bottom graph of Figure 2.

9 For practical purposes, to implement the learning process in our computations we use mean-preserving discretizations of the distributions with six nodal points over each distribution's domain. Sensitivity analyses, not reported here, indicate that the results are reasonably robust to increases in the number of nodal points.
[Figure 2 about here.]
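One simple way to construct the mean-preserving discretization mentioned in footnote 9 is sketched below under our own assumptions (the paper does not spell out its procedure): split the distribution into six equal-probability bins and use each bin's conditional mean as a nodal point, which preserves the overall mean by the law of total expectation. The lognormal is a placeholder stand-in for the actual prior.

```python
# Sketch of a mean-preserving six-point discretization: equal-probability
# bins, each represented by its conditional mean with weight 1/6.
import numpy as np
from scipy import integrate
from scipy.stats import lognorm

dist = lognorm(s=0.5, scale=3.0)          # placeholder, not the Roe-Baker prior
n = 6
edges = dist.ppf(np.linspace(0.0, 1.0, n + 1))
edges[-1] = dist.ppf(1 - 1e-9)            # finite upper edge for the quadrature
nodes = np.array([
    n * integrate.quad(lambda x: x * dist.pdf(x), lo, hi)[0]   # E[X | bin]
    for lo, hi in zip(edges[:-1], edges[1:])
])
weights = np.full(n, 1.0 / n)
print(nodes @ weights, dist.mean())       # discretized mean ~= true mean
```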
2.4 What we might learn
To estimate the marginal value of information—the value of one additional study—
we must specify a sampling distribution for new estimates of each uncertain parameter.
Consider the uncertain parameter $a$, the marginal damages from climate change at low
temperature anomalies. The sampling distribution for $a$, $l(\hat a \mid a)$, represents the probability
that a new study on climate change damages at low temperatures will return an estimate $\hat a$
if the true value is $a$. The sampling distribution characterizes the precision and accuracy of
the study design and estimation methods. If the study methods are accurate (i.e.,
asymptotically unbiased) then the sampling distribution will be peaked at the true value of
the parameter. If the study methods are precise then the variance of the estimator used by
the study will be relatively small and so the sampling distribution will be sharply peaked
around its mode. The characteristics of the sampling distribution for a particular study will
depend on the details of the research design, including the sample size and variations of and
correlations among treatment and control variables and so on. For the illustrative purposes of
this paper, and to make our comparisons across parameters as clean as possible, we use a
generic sampling distribution function that assumes future studies will be accurate and
proportionally symmetric—that is, the probability that the estimate $\hat a$ is $\psi \times 100$ percent
above or below the true value is the same. Specifically, we use the following form for the
sampling distribution function:
$$l(\hat a \mid a) = \frac{e^{-\zeta_a \left( (\hat a - a)/a \right)^2}}{\int_0^\infty e^{-\zeta_a \left( (\hat a' - a)/a \right)^2}\, d\hat a'} . \qquad (3)$$
The parameter $\zeta_a$ measures the precision of the study. If $\zeta_a$ is close to zero then the
sampling distribution will be very flat, close to a uniform probability distribution, and so
each new study gives very little additional information. If $\zeta_a$ is very large, then the sampling
distribution will be sharply peaked around the true value, so one new study can deliver
nearly perfect information.
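Equation (3) can be implemented directly by numerical quadrature; the following sketch checks the proportional-symmetry property (estimates the same proportion above and below the true value are equally likely). The parameter values are illustrative.

```python
# Direct numerical implementation of the sampling distribution in equation
# (3): a proportionally symmetric kernel around the true value a, normalized
# over the positive half-line by quadrature.
import numpy as np
from scipy import integrate

def sampling_pdf(a_hat, a, zeta):
    """l(a_hat | a) from equation (3), normalized numerically."""
    kernel = lambda x: np.exp(-zeta * ((x - a) / a) ** 2)
    norm, _ = integrate.quad(kernel, 0.0, np.inf)
    return kernel(a_hat) / norm

# Proportional symmetry: estimates 50% above and 50% below the true value
# are equally likely.
a = 0.01
print(sampling_pdf(1.5 * a, a, zeta=2.0), sampling_pdf(0.5 * a, a, zeta=2.0))
```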
As noted above, the accuracy and precision of any new study will depend on the
details of the experimental design and the natural variability of the phenomenon of interest.
These factors may differ greatly among the parameters $a$, $b$, and $c$, so $\zeta_a$, $\zeta_b$, and $\zeta_c$
generally will take different values. We define these ranges by calibrating each precision
parameter in terms of the cumulative probability, $\omega$, that a study estimate will lie within plus
and minus $\psi \times 100$ percent of the true value:
$$\omega = \int_{(1-\psi)a}^{(1+\psi)a} l(\hat a \mid a)\, d\hat a . \qquad (4)$$
For example, calibrating the precision parameter using ω = 0.25 and ψ = 0.5 implies
that there is a 25% chance that the study estimate will fall within an interval 50% above or
below the true value. A convenient feature of this approach is that the calibrated precision
parameters will not depend on the units or magnitudes of the unknown parameters, making
our VOI comparisons among the parameters relatively straightforward.
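The calibration in equation (4) then amounts to a one-dimensional root-finding problem: choose the precision parameter so that the coverage probability of the interval $[(1-\psi)a, (1+\psi)a]$ equals $\omega$. A sketch follows; the bracket endpoints are our assumptions.

```python
# Sketch of the calibration in equation (4): solve for zeta such that a study
# estimate falls within plus or minus psi*100 percent of the true value with
# probability omega.
import numpy as np
from scipy import integrate, optimize

def calibrate_zeta(omega, psi, a=1.0):
    """Find zeta so that P[(1-psi)a <= a_hat <= (1+psi)a] = omega under eq. (3)."""
    def coverage(zeta):
        kernel = lambda x: np.exp(-zeta * ((x - a) / a) ** 2)
        norm, _ = integrate.quad(kernel, 0.0, np.inf)
        prob, _ = integrate.quad(kernel, (1 - psi) * a, (1 + psi) * a)
        return prob / norm - omega
    # Coverage rises from ~0 (flat pdf) toward 1 (sharply peaked) as zeta grows.
    return optimize.brentq(coverage, 1e-6, 1e4)

# Example from the text: omega = 0.25, psi = 0.5 means a 25% chance that the
# estimate lies within plus or minus 50% of the true value.
print(calibrate_zeta(omega=0.25, psi=0.5))
```

Because the kernel depends only on $(\hat a - a)/a$, the calibrated precision is the same whatever the magnitude of $a$, which is the unit-invariance property noted above.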
3 Results

In this section we present the results of our main analysis and a number of sensitivity
analyses. We discuss the results in Section 4.
3.1 Baseline results and model sensitivity analysis
To produce a set of initial benchmark results that can be compared to previous
studies, we solved the model with all uncertain parameters set at the expected values of their
prior probability distributions. In this case, the optimal carbon tax in the initial year is $\tau_0$ =
$12/tCO2, the average growth rate of the carbon tax over the first 50 years of the planning
horizon is $g_\tau$ = 3.0% per year, the fixed-fraction-of-consumption-equivalent (FFCE) value of
all future climate damages is $v_0$ = 3.2%, and the FFCE net benefits of the optimal carbon tax
are $v^*$ = 0.17%.10 Our estimate of the initial optimal carbon tax is reasonably close to the social
cost of carbon (SCC) in the most recent version of the RICE model, which was $12/tCO2 [29],
and is well within the central range of estimates of the SCC from previous studies [30,31].
Also, our estimate of $v^*$ is close to the net benefits of the optimal emissions path divided by
the net present value of all future income in DICE2007, which was 0.17% [6 p 84].

10 Recall that the FFCE value is the maximum fixed fraction of consumption that the decision maker would be willing to forego to move from the baseline scenario to the specified counterfactual scenario. For some intuition about this quantity using a numerical example, let $PV$ denote the present value of a consumption stream $c_t, c_{t+1}, c_{t+2}, \ldots$ The present value of the fixed fraction $v$ of the stream is simply $v \times PV$. Assume that present-day global per-capita consumption is $7,000 per person per year, and assume the global population is 7×10^9 people, so present-day aggregate annual consumption is $4.9×10^13 per year. If the global population were to remain constant, if per-capita consumption were to grow forever at 2% per year, and if $\eta$ = 2 and $\rho$ = 0.01, then the Ramsey discount rate would be a constant $r = \eta g + \rho$ = 2×0.02 + 0.01 = 0.05 per year. The present value of the future stream of consumption would be $4.9×10^13/(0.05 − 0.02) ≈ $1.6×10^15, and the present value of 0.17% of consumption in every period would be 0.0017×$1.6×10^15 ≈ $2.72×10^12, which is about 5.6% of present-day aggregate annual consumption.
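The arithmetic in footnote 10 can be verified in a few lines, using the footnote's own illustrative numbers:

```python
# Check of the present-value arithmetic in footnote 10.
C0 = 7_000 * 7e9            # present-day aggregate annual consumption ($/yr)
g, eta, rho = 0.02, 2.0, 0.01
r = eta * g + rho           # Ramsey discount rate = 0.05
pv = C0 / (r - g)           # PV of consumption growing at g forever
v = 0.0017                  # FFCE net benefit of the optimal tax, 0.17%
print(f"PV = {pv:.3e}, v*PV = {v * pv:.3e}, share = {v * pv / C0:.3f}")
# Prints PV ~ 1.63e15, v*PV ~ 2.78e12, share ~ 0.057; the footnote's $2.72e12
# and 5.6% reflect rounding PV to $1.6e15.
```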
To examine the sensitivity of these results to the uncertain parameters, we re‐ran the
model multiple times with a , b , or c set at either the 5th or the 95th percentile of its
respective prior probability distribution while the other two uncertain parameters were held
fixed at their expected values. Figure 3 shows results for the FFCE values $v_0$ and $v^*$ resulting
from these sensitivity analyses.
[Figure 3 about here.]
The left panel in Figure 3 shows that $v_0$ is substantially more sensitive to $a$ than to the
other uncertain parameters, and is slightly more sensitive to $c$ than to $b$. However, the right
panel of Figure 3, based on $v^*$, shows a completely different ordering for the influence of the
uncertain parameters: $v^*$ is most sensitive to $b$, followed by $c$ and $a$, respectively.
Another key model output, the initial carbon tax, $\tau_0$, is most sensitive to $a$, followed by $c$
and $b$, respectively (see Table S4 in the supplemental information). This illustrates an
important limitation of traditional sensitivity analysis as a tool for prioritizing future
research efforts: the results can be crucially dependent on which model output or set of
outputs is used as the target of the analysis. On the other hand, value of information
measures give direct indications of research priorities and do not suffer from this
indeterminacy.
For the remainder of this paper we consider our baseline scenario to be one in which
all three parameters (a , b , and c ) are uncertain as defined by their respective prior
probability distributions, and the carbon tax implemented in this baseline scenario is the one
that maximizes the expected net present value of social welfare. The time path of the optimal
carbon tax in this case and three standard deviation plots of key state variables are presented
in Figure 4.
[Figure 4 about here.]
3.2 VOI estimates
To examine the value of information for each uncertain parameter we calculated the
VOI at four assumed levels of study precision: ω = 0.25, 0.5, and 0.75 with ψ = 0.5 and ω = 1
with ψ = 0 (perfect information). Recall that if ω = 0.25 and ψ = 0.5 then there is a 25%
chance that the estimate from a new study will fall within plus or minus 50% of the true
value. The top left panel of Figure 5 shows a plot of the FFCE value of one additional study for
all three parameters in turn at each assumed level of study precision. These results indicate
that the value of perfect information is largest for b , followed by c and a , respectively. The
FFCE value of perfect information for b is roughly 0.09%. To put that in perspective, it is
equivalent to approximately 3% of the value of all future climate damages in an uncontrolled
setting under uncertainty or about 30% of the net benefits of implementing an optimal tax
policy given existing information.
[Figure 5 about here.]
Note that the lack of horizontal overlap of the gray bars representing the range of VOI
estimates for a and b in the top left panel of Figure 5 means that the value of a new study
on b at the lowest examined level of study precision is greater than the value of perfect
information about a. While the precision of additional studies will likely vary across the
parameters, it is interesting to note that the VOI ranking changes over the range of study
precision examined: at the lowest assumed level the VOI for a is slightly greater than for c ,
but at the highest level (perfect information) the VOI for c is greater than for a .
3.3 VOI sensitivity analysis
In this section we examine the influence of three key modeling assumptions on the
VOI estimates: the width of the prior probability distribution over marginal damages at high
temperature anomalies, the pure rate of time preference, and the costs of emissions
abatement. The sensitivity analysis results are shown in the remaining panels of Figure 5.
The top right panel shows results from a mean‐preserving compression of the prior
distribution over b such that the variance is 10% of the default case. (This change in
variance is roughly equivalent to removing the two highest damage estimates used to
calibrate the prior.) This significantly decreases the VOI for b at each level of ω . In this case
there is near complete horizontal overlap across the three uncertain parameters.
The middle left panel considers a lower pure rate of time preference of 0.005, one half
of the default value of 0.01. In this case the VOI for all three parameters increases, especially
for b (which now goes off the chart) and c , since more weight is placed on future outcomes
making climate damages more important overall. The VOI for b and c increase relatively
more than that for a because b and c are more relevant than a for far future outcomes.
The middle right panel considers a higher pure rate of time preference of 0.02, two times the
default value. In this case the VOI estimates for all three parameters shrink substantially, but
the VOI for b and c shrink relatively more than that for a . This result can be understood by
reversing the reasoning given just above for the case with a lower pure rate of time
preference.
To examine the influence of abatement costs on the VOI estimates we re‐ran the
model with a lower and then a higher MAC curve. The default MAC curve used for our central
results was calibrated to outputs from MIT’s EPPA model. (Details of the calibration procedure
are provided in the supplementary information.) For our low‐cost sensitivity case, we re‐
calibrated the MAC curve to roughly match the DICE and RICE models [6,32], which are
substantially more optimistic about the pace of improvements in control technologies than
the EPPA model. The MAC curve in DICE2010 implies that reducing global carbon emissions
by 50% in 2015 would cost slightly less than 1% of gross global economic output, compared
to 16% under our default parameters. This version of the MAC curve may seem overly
optimistic, but it has been used in previous studies and serves as an illustrative bounding
case for our analysis. For our high‐cost sensitivity case we re‐calibrated the MAC curve by
increasing all of the marginal abatement costs from the EPPA model by fifty percent and then
re‐calibrating the parameters of the MAC curve.
The bottom left panel of Figure 5 shows the VOI estimates for the lower MAC curve.
These results are in stark contrast to those using our default parameter values, where the VOI
for b was higher than that of the other two uncertain parameters over nearly the entire
range of study precision considered. The more optimistic MAC curve reverses the ordering of
those results. In the lower abatement cost case, learning about damages at low temperatures
becomes significantly more valuable, and in most cases more valuable than information
about damages at higher temperatures. This is because with lower abatement costs the
optimal carbon tax is expected to prevent the temperature anomaly from exceeding 3°C by
very much or for very long in most states of the world, thereby making additional information
about b nearly moot. With the lower MAC curve, the path of the temperature anomaly under
the optimal tax is expected to peak slightly above 3°C and then begin to decline, making
damages at lower temperatures the more relevant parameter. The value of information about c also
increases relative to the default case, as c now plays a larger role in the tradeoff between near-
term damages and the decreasing cost of abatement over time.
4 Discussion

The central results of this paper are illustrated in the top left panel of Figure 5. The
VOI for b is substantially greater than that for a and c over most of the range of study
precision levels examined, including the case of perfect information. These results are driven
largely by the fact that, given the prior distributions calibrated in section 2.3, the optimal
carbon tax policy must balance the high costs of abatement against the substantial risk of
severe climate damages if the global average temperature anomaly exceeds 3°C by much and
for very long. The optimal tax path would look very different if it were known that b is near
the high end of its prior pdf rather than the low end. Learning more about a is far less
valuable because additional information about a has relatively little influence on the optimal
tax policy in light of the existing uncertainty about b . As noted above, given our calibrated
priors over a and b there is a roughly 20% probability that the damage function is concave,
which means that damages net of adaptation measures would exhibit a saturation effect.11
This makes a finding of high VOI for b less likely than if we were to constrain the damage
function to be always convex. Therefore, we would expect that modifying the model to
include such a constraint would strengthen our main result.

11 A concave damage function strikes us as unlikely but not necessarily implausible, so we do not want to rule it out a priori. To use a simplistic example, the damage function could be concave if the main effect of climate change were to cause a large fraction of households in currently mild climates, where indoor air conditioning is not ubiquitous, to install air conditioning at a large fixed cost, after which additional increases in temperature only cause households to run their air conditioners more often at a modest marginal cost. More abstractly, think of the global economy as composed of many distinct sectors, each with its own temperature threshold above which the sector is no longer viable. If we assume that these temperature thresholds are distributed normally among the sectors—or, more generally, according to any distribution with an initially increasing then decreasing probability density—then a few sectors will be eliminated at low temperature anomalies, many more will be eliminated at intermediate temperature anomalies, and a few will be eliminated only at high temperature anomalies. This would result in an S-shaped aggregate damage function, which is convex at low temperature anomalies and concave at high temperature anomalies.
If we consider the supply of research effort as fixed, if only in the short run, this result
implies that some research effort currently devoted to understanding the climate response
and especially the impacts at low levels of climate change would be put to better use if shifted
towards learning about the welfare impacts at moderate to high levels of climate change.
There is substantial horizontal overlap of VOI estimates for a and c (and to a lesser degree
b and c ). Thus, there are combinations of values for ω that would yield equivalent VOI
estimates for these parameters. So while b seems to be the most valuable in our default case,
the ranking of the other two parameters is less clear-cut. These are the main take-home
messages of this paper.
We can now reinforce a point we made in the introduction of this paper, that
sensitivity analysis alone may not be sufficient to inform priorities for future research. Recall
that Yohe and Hope [7] used the PAGE model to examine what they call the “value added”
from improved understanding of economic damages. They did this, in part, by comparing the
expected value of the social cost of carbon before and after three simulated adjustments to
some of the uncertain economic damage function parameters in the model: first, they
simulated a mean‐preserving compression of the damage parameters; second, they shifted
the means down by 50%; and third, they shifted the means up by 50%. They found only
small to modest changes to the SCC in these experiments and concluded that there is
“minimal value added from improved economic damage estimates.”
Yohe and Hope’s results suggest that the value of additional information about the
damage function parameters in PAGE will be low because, all else equal, if a model output
upon which a policy choice variable depends is highly insensitive to changes in one or more
input parameters over a wide range of plausible values, then pinning down those parameter
values more precisely will not help the decision maker improve her policy choices very
much. However, as illustrated by the results in the present paper, the value of information
depends not only on the sensitivity of a single model output to variations in the uncertain
input parameters, but also on the response of the decision‐maker to variations in possibly
multiple model outputs as well as the cost of abatement and the cost of additional research
on the parameter. All of these factors combine to transform the parametric sensitivity of a
model’s output to the formal value of information for additional research on that parameter.
For example, if the effect of the simulated changes in damage function parameters on the SCC
grows over time, and if the decision‐maker will use the estimated SCC to set a global carbon
tax, and if the marginal abatement cost curve is reasonably flat in each time period, then
even a modest change in the SCC in the first period might be accompanied by larger changes
in the future and reasonably large changes in the time path of the carbon tax, the present
value of which could be very large indeed. Recall that the initial carbon tax in our model was
least sensitive to b and most sensitive to a , the exact reverse of the ordering implied by our
central VOI estimates shown in the top left panel of Figure 5. We do not know if this would be
the result of a more complete VOI study using the PAGE model, but the point here is that we
cannot know for sure unless we explicitly account for these other elements in the analysis. It
may be difficult to judge the substantive economic significance of the magnitude of the model
sensitivity without a formal VOI analysis.
In the remainder of the paper we discuss some additional findings from our VOI
model, including important caveats to our central results, and we highlight what may be
fruitful directions for future research. Perhaps the least rigorously estimated inputs to our
VOI estimates are the priors for the parameters a and b . While better than raw subjective
judgments, our method of combining prior study estimates is still rather simplistic. The
importance of the width of the prior over b was revealed by the sensitivity analysis in
section 3.3. When the prior over b was compressed, the VOI for a increased slightly and the
VOI for b decreased substantially. So the value of information is determined partly by the
interaction between the current level of uncertainty about the parameter and the sensitivity
of the optimal policy response to variations in the parameter.
In light of previous research in this area, it is natural to consider how the absence of
passive learning about the uncertain parameters over time may influence our results. As
noted earlier, the findings of Kelly and Kolstad [20] and Leach [22] suggest that passive
learning about equilibrium climate sensitivity through observations of the temperature
anomaly is relatively slow and therefore does not substantially influence estimates of the
optimal climate policy. Therefore, we would not expect passive learning about c to lead to a
substantive change in our VOI results. In terms of the uncertain parameters representing the
strength of climate damages, passive learning will first provide information on damages at
low temperature anomalies. Under the reasonable assumption that damages at low
temperature anomalies provide little information about damages at relatively high
anomalies, the incorporation of passive learning about damages should push the VOI for
active learning further towards damages at high anomalies, thereby strengthening our main
result.
The sensitivity analysis based on variations in our assumptions about abatement
costs also provides important lessons. Our central result was essentially reversed when we
re‐calibrated the MAC curve to match the more optimistic DICE model. In this case
abatement technologies are assumed to be relatively affordable and so the optimal carbon tax
is expected to prevent the temperature from increasing substantially under most possible
states of the world. Therefore, learning more about the welfare impacts of moderate to high
levels of climate change would be of relatively little use to the decision maker. This illustrates
that the value of learning more about the benefits of a policy will depend on, among other
things, the costs of the policy.
Both of these sensitivity analyses point to an important message: The value of
information about an uncertain parameter will be highest, all else equal, when the expected
net benefits of the policy are close to zero and the variance of net benefits is high. It is under
these conditions that the expected costs of a “policy error” (from an ex post perspective) are
relatively high, and so the benefits of gathering additional information before the decision
must be made also are high. To sharpen this point, consider an extreme scenario where the
cost of a pollution control device is arbitrarily close to zero. Unless the pollutant is known
with certainty to cause no damages, then the optimal policy would be to install the control
device and the value of additional information about the precise magnitude of the damages
would be negligible, because there is very little cost to save by avoiding the installation of the
control device if it is not in fact needed. In the current case, if we already know enough about
climate change damages to decisively rule in or rule out policies that will avoid temperature
anomalies greater than 3°C in light of the costs, then little would be gained by learning more
about damages beyond that level.
Throughout this paper we calculated the VOI for each uncertain parameter under a
wide range of assumptions regarding the precision of a new study on each. We did this in
part to illustrate the influence of study precision on the VOI, but also because we do not know
the precision of new studies in each of these areas. Determining this would require a form of
statistical power analysis where features of the experimental design and assumptions about
the natural variability of the phenomena under investigation are combined to produce an ex
ante estimate of the standard error of the estimator. Such an analysis is beyond the scope of
this paper, but will be important for narrowing the VOI estimates in future studies. In the
meantime we would speculate that the precision of studies focused on a should be greater
than that of studies focused on b, because the latter necessarily requires extrapolation well
outside the range of temperature variations observed in the historical record. However, it is
not clear that this discrepancy, all else equal, would be large enough to overturn our central
result above, since we found that the VOI for b was nearly as large as that for a even when
we assumed the lowest level of study precision for the former and perfect information for the
latter.
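As a rough indication of what such a power analysis might involve (a sketch under our own simplifying assumptions, not a design for any actual study), suppose a were estimated as the slope of an OLS regression of damages on temperature. The ex ante standard error of the slope is sigma_eps / (sd(T) * sqrt(n)), so the attainable precision is governed by the residual noise, the sample size, and the spread of temperatures in the data; it is the lack of any spread at high anomalies that makes b harder to pin down. All numerical inputs below are hypothetical:

    import math

    def ex_ante_se(sigma_eps, sd_temp, n):
        """Ex ante standard error of an OLS slope: sigma_eps / (sd_temp * sqrt(n)).
        sigma_eps: assumed residual noise in damages (% of GDP);
        sd_temp:   standard deviation of temperature anomalies in the sample (deg C);
        n:         number of observations in the planned study."""
        return sigma_eps / (sd_temp * math.sqrt(n))

    sigma_eps = 2.0  # hypothetical residual noise
    for sd_temp in [1.0, 0.5]:
        for n in [10, 50, 200]:
            print(f"sd(T)={sd_temp:.1f}, n={n:3d}: "
                  f"se = {ex_ante_se(sigma_eps, sd_temp, n):.2f}")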
It is more difficult to speculate about the relative precision of studies designed to
estimate c compared to a and b, since this involves a comparison between very different
kinds of natural science and economic studies. Judging by the number of published studies
that have examined the historical record of global average temperatures and GHG emissions,
and the several large research teams around the world developing and refining large-scale
general circulation models of the climate, compared to the relatively few studies that examine
economic damages at high temperature changes (see Table S3 in the Supplemental
Information), it seems that much more research effort has been devoted to estimating
climate sensitivity than to estimating the economic damages of extreme climate change.
Furthermore, the current conventional wisdom seems to be that it will be difficult to pin down
climate sensitivity much more precisely, at least in the short term [23]. On the other hand,
the body of scientific theory that can serve as a basis for estimating c seems to be far better
developed than the economic theory that serves as the basis for estimating a and b.
References
1. Parson EA, Fisher-Vanden K (1997) Integrated assessment models of global climate
change. Annual Review of Energy and the Environment 22:589-628.
2. Kelly D, Kolstad C (2000) in International Yearbook of Environmental and Resource
Economics 1999/2000: A Survey of Current Issues, eds Folmer H, Tietenberg T (Edward
Elgar, Cheltenham), pp 171‐197.
3. Sarofim MC, Reilly JM (2010) Applications of integrated assessment modeling to climate
change. Wiley Interdiscip Rev Clim Change 2(1):27‐44.
4. Stern N (2006) The Economics of Climate Change: The Stern Review (Cambridge
University Press, Cambridge).
5. Hope C (2008) Optimal carbon emissions and the social cost of carbon under
uncertainty. The Integrated Assessment Journal 8(1):107‐122.
6. Nordhaus W (2008) A Question of Balance: Weighing the Options on Global Warming
Policies, (Yale University Press, New Haven).
7. Yohe G, Hope C (2013) Some thoughts on the value added from a new round of climate
change damage estimates. Climatic Change 117:451‐465.
8. Conrad JM (1980) Quasi‐option value and the expected value of information. The
Quarterly Journal of Economics 94(4):813‐820.
9. Hanemann WM (1989) Information and the concept of option value. Journal of
Environmental Economics and Management 16:23‐37.
10. Pindyck RS (2002) Optimal timing problems in environmental economics. Journal of
Economic Dynamics & Control 26:1677‐1697.
11. Pindyck RS (2007) Uncertainty in environmental economics. Review of Environmental
Economics and Policy 1(1):45‐65.
12. Yokota F, Thompson KM (2004) Value of information analysis in environmental health
risk management decisions: past, present, and future. Risk Anal 24(3):635‐650.
13. Willan AR, Goeree R, Boutis K (2012) Value of information methods for planning and
analyzing clinical studies optimize decision making and research planning. Journal of
Clinical Epidemiology 65:870‐876.
14. Keller K, Kim S-R, Baehr J, Bradford DF, Oppenheimer M (2007) in Integrated Assessment
of Human-Induced Climate Change, eds Schlesinger M, et al. (Cambridge University
Press, Cambridge), pp 343-354.
15. Weitzman ML (2009) On modeling and interpreting the economics of catastrophic
climate change. Rev Econ Stat 91:1‐19.
16. Weitzman ML (2010) GHG targets as insurance against catastrophic climate damages.
Harvard Environmental Economics Program Discussion Paper 10‐20.
17. Kopp RE, Golub A, Keohane NO, Onda C (2011) The influence of the specification of
climate change damages on the social cost of carbon. Economics: The Open-Access,
Open-Assessment E-Journal, No. 2011-22.
18. Sherwood SC, Huber M (2010) An adaptability limit to climate change due to heat stress.
Proc Natl Acad Sci USA 107(21):9552-9555.
19. Kopp RE, Mignone BK (2011) The U.S. Government's social cost of carbon estimates
after their first year: pathways for improvement. Economics: The Open-Access,
Open-Assessment E-Journal, No. 2011-16.
20. Kelly DL, Kolstad CD (1999) Bayesian learning, growth, and pollution. Journal of
Economic Dynamics and Control 23:491‐518.
21. Smith JB, Schellnhuber H-J, Mirza MMQ, Fankhauser S, Leemans R, et al. (2001) in
Climate Change 2001: Impacts, Adaptation, and Vulnerability, eds McCarthy J, Canziani O,
Leary N, Dokken D, White K (Cambridge University Press, New York), pp 915-967.
22. Leach AJ (2007) The climate change learning curve. Journal of Economic Dynamics and
Control 31:1728‐1752.
23. Roe GH, Baker MB (2007) Why is climate sensitivity so unpredictable? Science 318:629‐
632.
24. Weitzman ML (2010) What is the “damages function” for global warming—and what
difference might it make? Climate Change Economics 1(1):57‐69.
25. Meinshausen M, Smith SJ, Calvin K, Daniel JS, Kainuma MLT, et al. (2011) The RCP
greenhouse gas concentrations and their extensions from 1765 to 2300. Climatic Change
109:213-241.
26. Marten AL (2011) Transient temperature response modeling in IAMs: the effects of over
simplification on the SCC. Economics: The Open-Access, Open-Assessment E-Journal,
No. 2011-18.
27. Morris J, Paltsev S, Reilly J (2012) Marginal abatement costs and marginal welfare costs
for greenhouse gas emissions reductions: results from the EPPA model. Environ Model
Assess, forthcoming.
28. Mendelsohn R, Nordhaus WD, Shaw D (1994) The impact of global warming on
agriculture: a Ricardian analysis. The American Economic Review 84(4):753-771.
29. Nordhaus W (2011) Estimates of the social cost of carbon: background and results from
the RICE‐2011 model. Cowles Foundation discussion paper no. 1826.
30. Tol RSJ (2005) The marginal damage costs of carbon dioxide emissions: an assessment
of the uncertainties. Energ Policy 33(16):2064‐2074.
31. Tol RSJ (2008) The social cost of carbon: trends, outliers, and catastrophes. Economics:
The Open-Access, Open-Assessment E-Journal 2(25):1-24.
32. Nordhaus WD (2010) Economic aspects of global warming in a post-Copenhagen
environment. Proc Natl Acad Sci USA 107(26):11721-11726.
Tables and figures
Figure 1. Estimates of potential global economic damages from climate change based on
previous studies. Circles denote estimates from studies summarized in the IPCC’s Third
Assessment Report [21], plus signs denote estimates from more recent studies, the lower
dashed line is the damage function from DICE2007 [6], and the higher dashed line is a
damage function from Weitzman [24]. The shaded region is the 95% range of damage
functions used in the present study and the solid line represents the mean damage function.
[Figure 1 image: damages (% of GDP, 0-100) versus temperature anomaly (0-10°C)]
Figure 2. Prior probability distributions over the uncertain parameters a (marginal
damages below 3°C), b (marginal damages above 3°C), and c (equilibrium climate
sensitivity). The means of the calibrated lognormal probability distributions for a, b, and
c are 1.1, 3.6, and 3.5, and the variances are 11.1, 34.3, and 4.0, respectively.
[Figure 2 image: prior densities for a and b over damage function slope (% GDP/°C, 0-6) and for c over equilibrium climate sensitivity (0-10°C)]
Figure 3. Ranges for the fixed fraction of consumption equivalent value of all future climate
changes, v0, and net benefits of the optimal carbon tax policy, v*, between the 5th and 95th
percentile values of each uncertain input parameter, a, b, and c, in turn, with all other
parameters held fixed at their expected values.
[Figure 3 image: bar charts showing the ranges of v0 (roughly 0 to 0.09) and v* (roughly 0 to 0.014) for the parameters a, b, and c]

[Figure 4 image: six panels showing Optimal Tax, Economic Output, Carbon Emissions, Atmospheric Carbon Concentration, Temperature Anomaly, and Climate Change Damages]
Figure 4. Optimal tax and key state variables in the baseline scenario with full uncertainty over the
damage function parameters and equilibrium climate sensitivity. The optimal tax maximizes
expected welfare conditional on priors over the uncertain parameters. The solid line represents the
expected value of the state variable, with the shaded regions representing up to three standard
deviations from the mean.
[Figure 5 image: six panels showing Baseline parameters; Narrower prior over b; Lower ρ; Higher ρ; Lower MAC; Higher MAC]
Figure 5. VOI estimates using default parameters (top left) and under five parameter variations: the
prior over b compressed (top right), ρ decreased from 0.01 to 0.005 (middle left), ρ increased to
0.02 (middle right), MAC curve decreased to approximate DICE (bottom left), and MAC curve increased
by 50 percent (bottom right). The four horizontal line breaks in each bar correspond to the four
assumed levels of study precision for each parameter. The bottom line of each bar corresponds to
the lowest level of precision, and higher lines correspond to progressively higher assumed levels of
precision.