
Judging the credibility of climate projections

Chris Ferro (University of Exeter), Tom Fricker, Fredi Otto, Emma Suckling

Uncertainty in Weather, Climate and Impacts (13 March 2013, Royal Society, London)

Credibility and performance

Many factors may influence credibility judgments, but they should do so if and only if they affect our expectations about the performance of the predictions.

Identify credibility with predicted performance.

We must be able to justify and quantify (roughly) our predictions of performance if they are to be useful.

Performance-based arguments

Extrapolate past performance on the basis of knowledge of the climate model and the real climate (Parker 2010).

Define a reference class of predictions (including the prediction in question) whose performances you cannot reasonably order in advance, measure the performance of some members of the class, and infer the performance of the prediction in question.

Popular for weather forecasts (many similar forecasts) but less useful for climate predictions (Frame et al. 2007).
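A minimal sketch of this reference-class reasoning, with purely illustrative numbers rather than scores from any real forecast system: if the prediction in question is exchangeable with the measured members of the class, the empirical distribution of their scores becomes our prediction of its performance.

```python
import numpy as np

# Hypothetical scores (e.g., absolute errors; smaller is better) measured
# for some members of a reference class whose performances we could not
# order in advance.
measured_scores = np.array([0.8, 1.1, 0.6, 0.9, 1.3, 0.7])

# If the prediction in question is exchangeable with the measured ones,
# the empirical distribution of measured scores is our prediction of its
# performance.
print(f"Predicted expected score: {measured_scores.mean():.2f}")
print(f"Pr(score < 1.0) approx {np.mean(measured_scores < 1.0):.2f}")
```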

Climate predictions

Few past predictions are similar to future predictions, so performance-based arguments are weak for climate.

Other data may still be useful: short-range predictions, in-sample hindcasts, imperfect-model experiments, etc.

These data are used by climate scientists, but typically to make qualitative judgments about performance.

We propose to use these data explicitly to make quantitative judgments about future performance.

Bounding arguments

1. Form a reference class of predictions that does not contain the prediction in question.

2. Judge if the prediction problem in question is harder or easier than those in the reference class.

3. Measure the performance of some members of the reference class.

This provides a bound for your expectations about the performance of the prediction in question.

Bounding arguments

S = performance of a prediction from reference class C

S′ = performance of the prediction in question, from class C′

Let performance be positive, with smaller values better.

Infer probabilities Pr(S > s) from a sample from class C.

If C′ is harder than C then Pr(S′ > s) > Pr(S > s) for all s.

If C′ is easier than C then Pr(S′ > s) < Pr(S > s) for all s.
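A minimal sketch of the bounding argument with hypothetical scores: estimate Pr(S > s) empirically from the reference class, then read it as a lower or upper bound on Pr(S′ > s) according to the harder/easier judgment.

```python
import numpy as np

def survival(scores, s):
    """Empirical estimate of Pr(S > s) from reference-class scores."""
    return np.mean(np.asarray(scores) > s)

# Hypothetical scores (smaller is better) measured for members of a
# reference class C that does NOT contain the prediction in question.
reference_scores = [0.6, 0.9, 1.2, 0.8, 1.5, 1.0]

s = 1.0
p_ref = survival(reference_scores, s)
print(f"Pr(S > {s}) approx {p_ref:.2f}")
# If C' is judged harder than C: Pr(S' > s) > p_ref (lower bound).
# If C' is judged easier than C: Pr(S' > s) < p_ref (upper bound).
```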

Hindcast example

Global-mean, annual-mean surface air temperatures. Initial-condition ensembles of HadCM3 launched every year from 1960 to 2000. Performance is measured by absolute errors at a lead time of 10 years.

1. Perfect model: try to predict ensemble member 1

2. Imperfect model: try to predict CNRM-CM5 model

3. Reality: try to predict HadCRUT3 observations
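The following sketch mimics this design with synthetic numbers standing in for the HadCM3 ensembles, the CNRM-CM5 run, and the HadCRUT3 observations; everything here is illustrative, not the actual hindcast data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: one hindcast launched per start year 1960-2000,
# each a 9-member initial-condition ensemble of global-mean annual-mean
# temperature anomalies at a 10-year lead time.
n_starts, n_members = 41, 9
trend = 0.015 * np.arange(n_starts)  # toy warming signal (deg C)
ensembles = trend[:, None] + 0.1 * rng.standard_normal((n_starts, n_members))

# Predict with the mean of members 2..9, holding member 1 out so the
# perfect-model target is not part of its own prediction.
prediction = ensembles[:, 1:].mean(axis=1)

targets = {
    "perfect model (member 1)": ensembles[:, 0],
    "imperfect model (stand-in for CNRM-CM5)":
        trend + 0.05 + 0.1 * rng.standard_normal(n_starts),
    "reality (stand-in for HadCRUT3)":
        trend + 0.12 * rng.standard_normal(n_starts),
}

# Performance: absolute error at the 10-year lead, per start year.
for name, target in targets.items():
    abs_err = np.abs(prediction - target)
    print(f"{name}: mean absolute error {abs_err.mean():.3f}")
```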

Hindcast example

1. Errors when predicting HadCM3

2. Errors when predicting CNRM-CM5

3. Errors when predicting reality

Recommendations

Use existing data explicitly to justify quantitative predictions of the performance of climate projections.

Collect data on more predictions, covering a range of physical processes and conditions, to tighten bounds.

Design hindcasts and imperfect model experiments to be as similar as possible to future prediction problems.

Train ourselves to be better judges of relative performance, especially to avoid over-confidence.

References

Otto FEL, Ferro CAT, Fricker TE, Suckling EB (2012) On judging the credibility of climate projections. Climatic Change, submitted

Allen M, Frame D, Kettleborough J, Stainforth D (2006) Model error in weather and climate forecasting. In Predictability of Weather and Climate (eds T Palmer, R Hagedorn) Cambridge University Press, 391-427

Frame DJ, Faull NE, Joshi MM, Allen MR (2007) Probabilistic climate forecasts and inductive problems. Phil. Trans. Roy. Soc. A 365, 1971-1992

Knutti R (2008) Should we believe model predictions of future climate change? Phil. Trans. Roy. Soc. A 366, 4647-4664

Parker WS (2010) Predicting weather and climate: uncertainty, ensembles and probability. Stud. Hist. Philos. Mod. Phys. 41, 263-272

Parker WS (2011) When climate models agree: the significance of robust model predictions. Philos. Sci. 78, 579-600

Smith LA (2002) What might we learn from climate forecasts? Proc. Natl. Acad. Sci. 99, 2487-2492

Stainforth DA, Allen MR, Tredger ER, Smith LA (2007) Confidence, uncertainty and decision-support relevance in climate predictions. Phil. Trans. Roy. Soc. A 365, 2145-2161

Future developments

Bounding arguments may help us to form fully probabilistic judgments about performance.

Let s = (s1, ..., sn) be a sample from S ~ F(·|p).

Let S′ ~ F(·|cp) with priors p ~ g(·) and c ~ h(·).

Then Pr(S′ ≤ s|s) = ∫∫ F(s|cp) h(c) g(p|s) dc dp.

Bounding arguments refer to prior beliefs about S′ directly rather than indirectly through beliefs about c.
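A toy numerical version of this calculation, assuming (purely for illustration) an exponential score distribution F(·|p), a flat grid prior g for p, and a uniform prior h on c over [1, 2] encoding the judgment that C′ is harder than C:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: scores S ~ F(.|p) = Exponential(mean p), smaller is better.
# Observe a sample s = (s1, ..., sn) from the reference class and compute
# Pr(S' <= s0 | s) for S' ~ F(.|cp), integrating over the posterior
# g(p|s) and the prior h(c).
s_obs = rng.exponential(scale=1.0, size=20)  # hypothetical observed scores
s0 = 1.0

# Posterior for p on a grid (flat prior g).
p_grid = np.linspace(0.1, 5.0, 500)
log_lik = -len(s_obs) * np.log(p_grid) - s_obs.sum() / p_grid
post_p = np.exp(log_lik - log_lik.max())
post_p /= post_p.sum()

# Prior h for the difficulty factor c: uniform on [1, 2], i.e. the
# problem in question (class C') is judged harder than C.
c_samples = rng.uniform(1.0, 2.0, size=2000)
p_samples = rng.choice(p_grid, size=2000, p=post_p)

# Pr(S' <= s0 | s) = E[ F(s0 | c p) ], with F the exponential CDF.
prob = np.mean(1.0 - np.exp(-s0 / (c_samples * p_samples)))
print(f"Pr(S' <= {s0} | s) approx {prob:.3f}")
```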

Predicting performance

We might try to predict performance by forming our own prediction of the predictand.

If we incorporate information about the prediction in question then we must already have judged its credibility; if not then we ignore relevant information.

Consider predicting a coin toss. Our own prediction is Pr(head) = 0.5, so our prediction of the performance of another prediction is bound to be Pr(correct) = Pr(it says head) × 0.5 + Pr(it says tail) × 0.5 = 0.5, regardless of other information about that prediction.
