
Sample Sizes for Monte Carlo Partial EVPI Calculations

Jeremy E. Oakley
Department of Probability and Statistics, University of Sheffield, UK

Alan Brennan, Paul Tappenden and Jim Chilcott
School of Health and Related Research, University of Sheffield, UK

February 5, 2007

Running title: Partial EVPI sample sizes.

Address for Correspondence: Dr Jeremy Oakley, Department of Probability and Statistics, University of Sheffield, The Hicks Building, Hounsfield Road, Sheffield, S3 7RH, UK. E-mail: j.oakley@sheffield.ac.uk

Summary

Partial expected value of perfect information (EVPI) quantifies the economic value of removing uncertainty concerning parameters of interest in a decision model. EVPIs can be computed via Monte Carlo methods. An outer loop samples values of the parameters of interest, and for each of these, an inner loop samples the remaining parameters from the appropriate conditional distribution. This nested Monte Carlo approach can result in biased estimates if very small numbers of inner samples are used, and can require a large number of model runs for accurate partial EVPI estimates. We present a simple algorithm to estimate the EVPI bias and EVPI confidence interval for a specified number of inner and outer samples. The algorithm uses a relatively small number of model runs (we suggest 630), is quick to compute, and can help determine how many outer and inner iterations to use for a desired level of accuracy. The algorithm is tested using two case studies: a simple illustrative cost-effectiveness model with easily computed partial EVPIs, and a much more complex health economic model of multiple sclerosis. The simple case study results demonstrate that our algorithm produces robust estimates of the EVPI bias and EVPI confidence interval for different numbers of inner and outer samples. In the complex case study, it is established that very large Monte Carlo sample sizes would be required, and that Monte Carlo estimation would not be feasible due to the computing time involved. This algorithm to establish the number of model runs required for partial EVPI estimation will be of use in any decision model in which parameter uncertainty is important.

KEY WORDS: Economic Model, Expected Value of Perfect Information, Monte Carlo estimation, Bayesian Decision Theory.

1 Introduction

It is common practice to use decision models to estimate the expected net benefit of alternative strategy options open to a decision maker [Raiffa, 1968], [Brennan and Akehurst, 2000]. Invariably in health economic cost-effectiveness models, the input parameters' true values are not known with certainty. A probabilistic sensitivity analysis will then be required to investigate the consequences of this input parameter uncertainty [Briggs and Gray, 1999]. Simple Monte Carlo propagation of input uncertainty through the model can provide an estimate of the distribution of net benefit, thus giving its expected value, and the probability that the incremental net benefit will be positive [Van Hout et al., 1994].

More detailed analysis can compute the partial expected value of perfect information (partial EVPI), that is, the value to the decision maker of learning the true value of an uncertain input parameter before deciding whether to adopt the new treatment [Raiffa, 1968], [Claxton and Posnett, 1996]. Partial EVPIs are recommended because they quantify the importance of different parameters using decision-theoretic arguments [Claxton, 1999], [Meltzer, 2001], and thus can quantify the societal value of further data collection to help design and prioritise research projects [Claxton and Thompson, 2001], [Chilcott et al., 2003a]. Unfortunately, evaluating partial EVPIs is not a trivial computational exercise.


Generally, a two-level Monte Carlo procedure is needed, which will involve many runs of the economic model. A developing literature has described this more and more clearly [Felli and Hazen, 1998], [Brennan et al., 2002], [Felli and Hazen, 2003], [Ades et al., 2004], [Yokota and Thompson, 2004], [Koerkamp et al., 2006], [Brennan et al., 2007], [Brennan and Kharroubi, 2007b]. The procedure begins with an outer loop sampling values of the parameters of interest; for each of these, an inner loop samples the remaining parameters from their conditional distribution [Brennan et al., 2002], [Ades et al., 2004].

Increasing the inner and outer loop sample sizes will clearly result in more accurate estimates of partial EVPI. It is also known that the nested Monte Carlo approach can result in biased estimates if the inner loop sample size is small [Brennan and Kharroubi, 2007a]. Most studies recommend 'large enough' inner and outer samples, e.g. 1,000 or 10,000, in order to produce 'accurate' partial EVPI estimates. If the decision model is complex, requiring substantial computation time for each model run, then this can become infeasible. One can instead use a Gaussian process meta-model, which approximates the economic model, enabling more efficient Monte Carlo sampling, but with the disadvantages that it is far more complex to program initially and is not always feasible for models with large numbers of uncertain input parameters (e.g., in excess of 100) [Oakley, 2003], [Oakley and O'Hagan, 2004], [Stevenson et al., 2004]. None of the existing studies, however, discusses in detail how to decide on the number of inner and outer samples required, nor their relationship with the scale of any bias in the partial EVPI estimate.

In this paper we present an algorithm for determining the inner and outer loop sample sizes needed to estimate a partial EVPI to a required level of accuracy. We first review the theory of estimating partial EVPIs via Monte Carlo. We then propose an algorithm, using a moderate number of model runs, to produce an estimate of the bias and confidence interval for partial EVPI estimates, and hence to help determine the inner and outer sample sizes needed. The results of applying and testing the algorithm's performance in two case studies are given, followed by a discussion of how this approach can be applied more generally to support Monte Carlo estimation of partial EVPI.

2 Methods

2.1 Evaluating Partial EVPI via Monte Carlo Sampling

Suppose we have T treatment options, and our economic model computes the net benefit NB(t, x) for treatment t = 1, ..., T when provided with input parameters x. We denote the true, uncertain values of the input parameters by X = {X_1, ..., X_d}, so that the true, uncertain net benefit of treatment t is given by NB(t, X). When considering the partial EVPI of a particular parameter X_i, we use the notation X_{-i} = {X_1, ..., X_{i-1}, X_{i+1}, ..., X_d} to denote all the inputs in X except X_i. Note that all of the equations presented, and indeed the proposed algorithm, are the same if we consider a group of parameters X_i rather than a single scalar parameter. We write NB(t, X) = NB(t, X_i, X_{-i}), and E_X denotes an expectation over the full joint distribution of X, that is, E_X{f(X)} = \int_X f(X) dX. The partial EVPI for the ith parameter X_i is given by

EVPI(X_i) = E_{X_i}[ \max_t E_{X_{-i}|X_i}{NB(t, X_i, X_{-i})} ] - \max_t E_X{NB(t, X)}.    (1)

The second term on the RHS of (1) can be estimated by Monte Carlo. We sample X_1, ..., X_N from the distribution of X, evaluate NB(t, X_n) for t = 1, ..., T and n = 1, ..., N, and then estimate NB* = \max_t E_X{NB(t, X)} by

\widehat{NB}* = \max_t (1/N) \sum_{n=1}^{N} NB(t, X_n).    (2)

We do not consider the choice of N in this paper. In practice, it is usually feasible to take N large enough that \widehat{NB}* is an accurate estimate of NB*, and it can then be used in the partial EVPI estimate of any parameter or group of parameters.

In this paper we concentrate on estimating the first term on the RHS of (1), which has the form of a maximum of inner expectations nested within an outer expectation. We define the maximum of the conditional expected net benefits given X_i as

m(X_i) = \max_t E_{X_{-i}|X_i}{NB(t, X_i, X_{-i})},    (3)

and hence the first term on the RHS of equation (1) can be written as E_{X_i}{m(X_i)}.

Typically, we cannot evaluate m(X_i) analytically, and so we estimate it using Monte Carlo. We randomly sample J values of X_{-i} from the conditional distribution of X_{-i}|X_i to obtain X_{-i,1}, ..., X_{-i,J}. We then run the economic model for each of the J sampled inputs to obtain NB(t, X_i, X_{-i,j}) for t = 1, ..., T and j = 1, ..., J, and estimate m(X_i) by

\hat{m}(X_i) = \max_t (1/J) \sum_{j=1}^{J} NB(t, X_i, X_{-i,j}).    (4)

We now approximate E_{X_i}{m(X_i)} by E_{X_i}{\hat{m}(X_i)}, and estimate E_{X_i}{\hat{m}(X_i)} by randomly sampling K values X_{i,1}, ..., X_{i,K} from the distribution of X_i, to compute the estimator

\hat{E}_{X_i}{\hat{m}(X_i)} = (1/K) \sum_{k=1}^{K} \hat{m}(X_{i,k}).    (5)

We refer to the process of obtaining the maximum conditional net benefit estimator \hat{m}(X_i) for a single given X_i, using J samples from the distribution of X_{-i}|X_i, as the inner level Monte Carlo procedure. The process of calculating (5) (given the values \hat{m}(X_{i,1}), ..., \hat{m}(X_{i,K})) using the K samples X_{i,1}, ..., X_{i,K} is the outer level Monte Carlo procedure. The full description of our Monte Carlo estimate for the partial EVPI for parameter X_i is therefore given by

\widehat{EVPI}(X_i) = \hat{E}_{X_i}{\hat{m}(X_i)} - \widehat{NB}*    (6)
                    = (1/K) \sum_{k=1}^{K} [ \max_t (1/J) \sum_{j=1}^{J} NB(t, X_{i,k}, X_{-i,j|k}) ] - \widehat{NB}*.    (7)

The inner and outer level sample sizes, J and K respectively, mean that the total number of runs of the economic model required for the partial EVPI estimate is J x K. The objective in this paper is to determine what values of J and K should be used in order to obtain a sufficiently accurate estimate of the partial EVPI.
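The nested estimator in equations (2)-(7) can be sketched in a few lines of code. The following is a minimal illustration on a made-up two-treatment model with unit-normal, independent inputs (our own toy example, not the paper's case study); `nb` stands in for the economic model.

```python
import numpy as np

def nb(t, xi, x_minus_i):
    # Toy net benefit: treatment 0 is a safe constant, treatment 1 depends on
    # the parameter of interest X_i and on the remaining parameter X_{-i}.
    if t == 0:
        return 10.0 + 0.0 * xi
    return 9.5 + 1.0 * xi + 0.5 * x_minus_i

def partial_evpi(J, K, N=100_000, seed=0):
    rng = np.random.default_rng(seed)
    # Second term of (1), estimated as in (2) with a single large sample.
    xi_s, xmi_s = rng.normal(0, 1, N), rng.normal(0, 1, N)
    nb_star = max(np.mean(nb(t, xi_s, xmi_s)) for t in (0, 1))
    # First term of (1): outer loop over K draws of X_i (equation (5)) ...
    outer = 0.0
    for _ in range(K):
        xi = rng.normal(0, 1)
        # ... with an inner loop of J draws of X_{-i} | X_i (equation (4));
        # the two inputs are independent in this toy model.
        xmi = rng.normal(0, 1, J)
        outer += max(np.mean(nb(t, xi, xmi)) for t in (0, 1))
    return outer / K - nb_star
```

The total model-run count is J x K for the first term, which is exactly the cost the algorithm below is designed to manage.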

2.2 Uncertainty and Bias in Monte Carlo Estimates of Partial EVPI

Any Monte Carlo estimate of an expectation is subject to uncertainty due to random sampling, and the larger the number of samples, the more this uncertainty is reduced. We will assume that N is sufficiently large that uncertainty in \widehat{NB}* is small relative to uncertainty in \hat{E}_{X_i}{\hat{m}(X_i)}. If K is sufficiently large then a normal approximation will apply, and a 95% confidence interval for EVPI(X_i) is given by

(1/K) \sum_{k=1}^{K} \hat{m}(X_{i,k}) - \widehat{NB}* \pm 1.96 \sqrt{ Var{\hat{m}(X_i)} / K }.    (8)

The Monte Carlo process described earlier provides the first two terms of (8). For the final term, we can estimate Var{\hat{m}(X_i)} by simply using the sample variance of \hat{m}(X_{i,1}), ..., \hat{m}(X_{i,K}). We note from equation (4) that \hat{m}(X_i), and hence the variance Var{\hat{m}(X_i)}, will depend on the value of J. Equation (8) appears to suggest that, to obtain a sufficiently accurate estimate of the partial EVPI, we just need K to be very large, thereby making the third term of (8) negligible. Unfortunately, this is not the case because, as we now show, \hat{m}(X_i) is an upwardly biased estimator of m(X_i). The bias is independent of K, the number of outer samples, but it does depend on J, and can be reduced to an acceptably small level if we choose J sufficiently large. We define the conditional expected net benefit given a particular X_i for each treatment t as

\mu_t(X_i) = E_{X_{-i}|X_i}{NB(t, X_i, X_{-i})},    (9)

and its estimator based on J random samples as

\hat{\mu}_t(X_i) = (1/J) \sum_{j=1}^{J} NB(t, X_i, X_{-i,j}).    (10)

In our estimate of EVPI in equation (7), we estimate the maximum conditional expected net benefit over the T treatments given a particular X_i,

m(X_i) = \max{\mu_1(X_i), ..., \mu_T(X_i)},    (11)

by the maximum of the Monte Carlo estimates for each treatment,

\hat{m}(X_i) = \max{\hat{\mu}_1(X_i), ..., \hat{\mu}_T(X_i)}.    (12)

The problem is that, although \hat{\mu}_t(X_i) is an unbiased estimator of \mu_t(X_i), i.e. E_{X_{-i}|X_i}{\hat{\mu}_t(X_i)} = \mu_t(X_i), when the maximisation is applied to the estimators, \hat{m}(X_i) in (12) is not an unbiased estimator of m(X_i) in (11). In fact, it is upwardly biased, i.e. it tends to over-estimate m(X_i). This is because, for random variables Z_1, ..., Z_n, it is straightforward to show that E{max(Z_1, ..., Z_n)} >= max{E(Z_1), ..., E(Z_n)}. Hence, for the expectation of the estimator \hat{m}(X_i), we have

E_{X_{-i}|X_i}{\hat{m}(X_i)} = E_{X_{-i}|X_i}[ \max{\hat{\mu}_1(X_i), ..., \hat{\mu}_T(X_i)} ]    (13)
                            >= \max[ E_{X_{-i}|X_i}{\hat{\mu}_1(X_i)}, ..., E_{X_{-i}|X_i}{\hat{\mu}_T(X_i)} ]    (14)
                             = \max{\mu_1(X_i), ..., \mu_T(X_i)}    (15)
                             = m(X_i).    (16)

That is, E_{X_{-i}|X_i}{\hat{m}(X_i)} >= m(X_i). Thus we expect \hat{m}(X_i) to overestimate m(X_i) for any value of X_i, and consequently we expect \widehat{EVPI}(X_i) to be an over-estimate of the true partial EVPI. The bias is the difference between the two terms, i.e.

Bias(X_i) = E_{X_{-i}|X_i}{\hat{m}(X_i)} - m(X_i).    (17)

2.2.1 Example

We illustrate this with a simple example. Suppose we have two treatments (T = 2), and

\hat{\mu}(X_i) ~ N_2( (0, 0.5)', (1/J) ( 1, 0.1; 0.1, 1 ) ).    (18)

In Figure 1 we plot the density functions of \hat{\mu}_1(X_i) and \hat{\mu}_2(X_i) for J = 1 and J = 100. When J = 1, there is considerable overlap between the two densities, with the result that E[max{\hat{\mu}_1(X_i), \hat{\mu}_2(X_i)}] (marked as the vertical line) is noticeably greater than max[E{\hat{\mu}_1(X_i)}, E{\hat{\mu}_2(X_i)}] = 0.5. When J = 100, the overlap is reduced and E[max{\hat{\mu}_1(X_i), \hat{\mu}_2(X_i)}] is much closer to 0.5, i.e. the bias is negligible.
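The bias in this example is easy to reproduce numerically. The sketch below (NumPy assumed) draws the pair of estimators from the distribution in (18) and compares the expected maximum with max of the expectations, 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([0.0, 0.5])          # E{mu_hat_1}, E{mu_hat_2} in (18)
cov = np.array([[1.0, 0.1],
                [0.1, 1.0]])

def expected_max(J, n=200_000):
    # Draw mu_hat from N_2(mean, cov / J), then estimate E[max(mu_hat_1, mu_hat_2)].
    draws = rng.multivariate_normal(mean, cov / J, size=n)
    return draws.max(axis=1).mean()

# The gap between expected_max(J) and 0.5 is the bias: large for J = 1,
# negligible for J = 100.
```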

2.3 Estimating the bias

To understand and quantify the bias in the estimator \hat{m}(X_i), we can investigate the sampling distribution (the probability distribution under repeated sampling of the population) of the vector of Monte Carlo estimators for conditional expected net benefit, \hat{\mu}(X_i) = {\hat{\mu}_1(X_i), ..., \hat{\mu}_T(X_i)}'. If J is sufficiently large, then the distribution of \hat{\mu}(X_i) will be approximately multivariate normal:

\hat{\mu}(X_i) ~ N_T{ \mu(X_i), (1/J) V(X_i) },    (19)

Figure 1: The sampling distributions of \hat{\mu}_1(X_i) (solid line) and \hat{\mu}_2(X_i) (dashed line) for J = 1 and J = 100. The vertical line shows the expectation of the maximum of \hat{\mu}_1(X_i) and \hat{\mu}_2(X_i) in each case. (Both panels plot density against net benefit.)

where \mu(X_i) = {\mu_1(X_i), ..., \mu_T(X_i)}' and element (p, q) of the matrix V(X_i) is given by

V_{p,q}(X_i) = Cov_{X_{-i}|X_i}{ NB(p, X_i, X_{-i}), NB(q, X_i, X_{-i}) }.    (20)

Recall that the bias is given by

E_{X_{-i}|X_i}[ \max{\hat{\mu}_1(X_i), ..., \hat{\mu}_T(X_i)} ] - \max{\mu_1(X_i), ..., \mu_T(X_i)},    (21)

and so will depend on \mu(X_i), V(X_i) and J through the distribution of \hat{\mu}(X_i). Note that the bias will decrease as J increases, since increasing J decreases the variance of \hat{\mu}(X_i). In the next section we describe an algorithm for estimating \mu(X_i) and V(X_i) using a moderate number of model runs. This leads to an estimate of the bias in the EVPI estimate for a specified J, and then to a 95% confidence interval for \widehat{EVPI}(X_i) for a specified J and K.

3 Algorithm to estimate bias and CI in \widehat{EVPI}(X_i) for a specified J and K

We present the algorithm in three stages. In the first stage we run the economic model a moderate number of times (630 runs are proposed) to obtain estimates of \mu(X_i) and V(X_i) for different values of X_i. In the second stage, we use these estimates to sample from the distribution of the conditional expected net benefit estimators \hat{\mu}, and estimate the bias in \widehat{EVPI}(X_i) for different specified values of J. In the third stage, we estimate the variance of the maximum conditional expected net benefits, Var{\hat{m}(X_i)}, enabling us to estimate the confidence interval for \widehat{EVPI}(X_i) using equation (8) for different specified values of K.

Stage 1: Estimating \mu(X_i) and V(X_i)

1. Choose a small number of design points (say 21), X_{i,1}, ..., X_{i,21}, to cover the sample space of X_i. If X_i is a scalar, these can be evenly spaced; if X_i is a vector, a Latin hypercube sample can be used.

2. For each X_{i,k}, generate a random sample (say 30) of the remaining inputs from their conditional distribution X_{-i}|X_{i,k}. Denote these by X_{-i,k,1}, ..., X_{-i,k,30}.

3. Run the economic model to obtain the net benefits NB(t, X_{i,k}, X_{-i,k,j}) for t = 1, ..., T; k = 1, ..., 21; j = 1, ..., 30.

4. Estimate \mu_t(X_{i,k}) by

\hat{\mu}_t(X_{i,k}) = (1/30) \sum_{j=1}^{30} NB(t, X_{i,k}, X_{-i,k,j}),    (22)

for t = 1, ..., T; k = 1, ..., 21, to obtain

\hat{\mu}(X_{i,k}) = {\hat{\mu}_1(X_{i,k}), ..., \hat{\mu}_T(X_{i,k})}',    (23)

for k = 1, ..., 21.

5. Estimate V(X_{i,k}) for k = 1, ..., 21 by \hat{V}(X_{i,k}), where element (p, q) of \hat{V}(X_{i,k}) is given by

(1/29) \sum_{j=1}^{30} { NB(p, X_{i,k}, X_{-i,k,j}) - \hat{\mu}_p(X_{i,k}) } { NB(q, X_{i,k}, X_{-i,k,j}) - \hat{\mu}_q(X_{i,k}) }.    (24)

We now have estimates of \mu(X_i) and V(X_i) at a range of different values of X_i, and can now do a simulation to identify a suitable value of J.
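Stage 1 can be sketched as follows. This is a minimal illustration under assumed independent-normal inputs; `model` is a hypothetical stand-in for the economic model, not the paper's case study.

```python
import numpy as np

rng = np.random.default_rng(42)
T, N_DESIGN, J0 = 2, 21, 30  # treatments, design points, inner samples per point

def model(t, xi, x_minus_i):
    # Hypothetical toy net benefit standing in for the economic model.
    return 10.0 if t == 0 else 9.5 + xi + 0.5 * x_minus_i

# Step 1: design points covering the sample space of a scalar X_i (evenly spaced).
xi_design = np.linspace(-2.5, 2.5, N_DESIGN)

mu_hat = np.zeros((N_DESIGN, T))
V_hat = np.zeros((N_DESIGN, T, T))
for k, xi in enumerate(xi_design):
    # Step 2: 30 draws of X_{-i} | X_i (independent standard normal here).
    x_minus = rng.normal(0.0, 1.0, J0)
    # Step 3: net benefits for every treatment and inner sample (21 x 30 = 630
    # model runs, each run returning all T treatments' net benefits).
    nb = np.array([[model(t, xi, x) for x in x_minus] for t in range(T)])
    # Steps 4 and 5: sample means (22)-(23) and sample covariance (24).
    mu_hat[k] = nb.mean(axis=1)
    V_hat[k] = np.cov(nb)  # divides by 30 - 1 = 29, matching (24)
```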

Stage 2: Estimating bias and determining J

1. Choose a candidate value of J.

2. For k = 1, ..., 21:

(a) Approximate the distribution of \hat{\mu}(X_{i,k}) under inner sample size J by

\hat{\mu}(X_{i,k}) ~ N_T{ \hat{\mu}(X_{i,k}), (1/J) \hat{V}(X_{i,k}) },    (25)

where \hat{\mu}(X_{i,k}) and \hat{V}(X_{i,k}) on the right-hand side are the Stage 1 estimates.

(b) Generate \hat{\mu}_1(X_{i,k}), ..., \hat{\mu}_N(X_{i,k}) for large N (say 10,000) from the distribution of \hat{\mu}(X_{i,k}), with

\hat{\mu}_n(X_{i,k}) = {\hat{\mu}_{1,n}(X_{i,k}), ..., \hat{\mu}_{T,n}(X_{i,k})}',    (26)

for n = 1, ..., N.

(c) Estimate the bias in \hat{m}(X_{i,k}) for each of the 21 sampled X_{i,k} by

\hat{b}(X_{i,k}) = (1/N) \sum_{n=1}^{N} \max{\hat{\mu}_{1,n}(X_{i,k}), ..., \hat{\mu}_{T,n}(X_{i,k})} - \max{\hat{\mu}_1(X_{i,k}), ..., \hat{\mu}_T(X_{i,k})}.    (27)

3. Estimate the expected bias in \hat{m}(X_i) by

\hat{b}(X_i) = (1/21) \sum_{k=1}^{21} \hat{b}(X_{i,k}).    (28)

Note that if X_{i,1}, ..., X_{i,21} are chosen deterministically, numerical integration can be used to estimate the expected bias, rather than Monte Carlo sampling as in (28). This process is repeated for different values of J in order to determine the relationship between J and the scale of the bias. When put into the context of the overall EVPI and the partial estimate \widehat{EVPI}(X_i), this can help determine a J that produces an acceptably small bias.
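Stage 2 reduces to re-sampling from (25) and averaging (27) over the design points. A minimal sketch, using assumed toy values in place of genuine Stage 1 output:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical Stage 1 output for T = 2 treatments at 21 design points.
mu_hat = np.column_stack([np.full(21, 10.0), np.linspace(8.0, 12.0, 21)])
V_hat = np.stack([np.eye(2)] * 21)

def expected_bias(J, n_sim=10_000):
    biases = []
    for k in range(21):
        # (25): sample the inner-level estimators from N_T(mu_hat, V_hat / J).
        draws = rng.multivariate_normal(mu_hat[k], V_hat[k] / J, size=n_sim)
        # (27): mean of the max of the estimators, minus the max of the means.
        biases.append(draws.max(axis=1).mean() - mu_hat[k].max())
    return float(np.mean(biases))  # (28): average over the design points
```

Repeating `expected_bias(J)` for increasing J traces out how quickly the bias decays, which is the information needed to choose J.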

Finally, we consider estimating the width of the confidence interval for the estimator in (5) that will result from choosing an outer sample size K.

Stage 3: Estimating the 95% CI for \widehat{EVPI}(X_i) to determine K

1. Calculate \hat{\sigma}^2(X_i), the estimated variance of \hat{m}(X_{i,k}), by

\hat{\sigma}^2(X_i) = (1/(K-1)) \sum_{k=1}^{K} { \hat{m}(X_{i,k}) - \bar{m}(X_i) }^2,    (29)

with

\bar{m}(X_i) = (1/K) \sum_{k=1}^{K} \hat{m}(X_{i,k}).    (30)

2. Estimate the width of the 95% confidence interval for \widehat{EVPI}(X_i) as \pm 1.96 \sqrt{ \hat{\sigma}^2(X_i) / K }.

Technically, this gives us the estimated width of the confidence interval when J = 30; the variance of \hat{m}(X_i) will decrease with J, though we would not expect it to change substantially relative to \hat{\sigma}^2(X_i). In practice, once J has been chosen, we can re-estimate \hat{\sigma}^2(X_i).
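Stage 3 is a one-line variance calculation followed by the usual normal-approximation scaling. A sketch, with hypothetical \hat{m} values standing in for Stage 1 output:

```python
import numpy as np

# Hypothetical m-hat values at the 21 design points (Stage 1 output stand-ins).
rng = np.random.default_rng(3)
m_hat = 10.0 + rng.normal(0.0, 0.5, 21)

def ci_half_width(m_hat, K):
    sigma2 = m_hat.var(ddof=1)         # (29)-(30): sample variance about the mean
    return 1.96 * np.sqrt(sigma2 / K)  # half-width of the 95% interval for a given K
```

Evaluating `ci_half_width` over a grid of candidate K values shows the familiar 1/sqrt(K) shrinkage, so quadrupling K halves the interval width.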

The most important purpose of the algorithm is to estimate the bias and determine J. This is because the size of the bias cannot be observed when we come to actually estimate a partial EVPI using Monte Carlo sampling. Although it is useful at Stage 3 to estimate the width of the confidence interval for \widehat{EVPI}(X_i), it is possible to observe directly the variability in EVPI estimates as more outer samples are used in the two-level procedure.

4 Case studies

Two case studies have been used to explore the feasibility, accuracy and usefulness of the proposed algorithm in predicting the bias and confidence interval of EVPI estimates for a specified J and K.

4.1 Case study 1: A simple decision tree model

The first case study concerns a simple hypothetical cost-effectiveness model which compares two strategies: treatment with drug T0 versus treatment with drug T1. Table 1 shows the nineteen uncertain model parameters, with prior mean values shown for T0 (column a), T1 (column b) and hence the incremental analysis (column c). Costs include the cost of the drug and the cost of hospitalisations, i.e. the product of the percentage of patients admitted to hospital, days in hospital, and cost per day. Thus, the mean cost of strategy T0 = $1000 + 10% x 5.20 x $400 = $1208. Health benefits are measured as QALYs gained and come from two sources: responders receive a utility improvement for a specified duration, and some patients have side effects with a utility decrement for a specified duration; i.e. the QALY gain for strategy T0 = 70% responders x 0.3 x 3 years + 25% side effects x -0.1 x 0.5 years = 0.6175. The illustrative willingness to pay, i.e. the threshold cost per QALY, is set at λw = $10,000. (Note that this is a purely illustrative model; the value of λw is arbitrarily chosen and is lower than the threshold of many western countries.) Thus, the net benefit of T0 is $10,000 x 0.6175 - $1208 = $4967. Effectively this is a simple decision tree model with a net benefit function of sum-product form, i.e.

NB(T0) = λw(θ5 θ6 θ7 + θ8 θ9 θ10) - (θ1 + θ2 θ3 θ4)
NB(T1) = λw(θ14 θ15 θ16 + θ17 θ18 θ19) - (θ11 + θ12 θ13 θ4)
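The central estimates quoted above follow directly from these sum-product formulas at the prior means in Table 1, and can be checked with a few lines of plain Python:

```python
LAMBDA = 10_000  # illustrative willingness-to-pay threshold per QALY

# Strategy T0 at the prior means: drug cost plus hospitalisation cost,
# QALYs from responders minus the side-effect decrement.
cost_t0 = 1000 + 0.10 * 5.20 * 400             # = 1208
qaly_t0 = 0.70 * 0.3 * 3 + 0.25 * -0.1 * 0.5   # = 0.6175
nb_t0 = LAMBDA * qaly_t0 - cost_t0             # = 4967

# Strategy T1, same structure with the T1 parameter means.
cost_t1 = 1500 + 0.08 * 6.10 * 400             # = 1695.2
qaly_t1 = 0.80 * 0.3 * 3 + 0.20 * -0.1 * 0.5   # = 0.71
nb_t1 = LAMBDA * qaly_t1 - cost_t1             # = 5404.8, i.e. about $5405

print(round(nb_t0), round(nb_t1), round(nb_t1 - nb_t0, 1))  # 4967 5405 437.8
```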

Uncertainty about the parameters is described with independent normal distributions. Standard deviations for the model parameters are shown in columns (d) and (e). Each parameter can be informed by the collection of further data on individual patients. Given current knowledge, the basic model results show $5405 expected net benefit for T1 compared with $4967 for T0 (difference = $437.80), which means that our baseline decision given current information should be to adopt strategy T1. Probabilistic sensitivity analysis [Briggs and Gray, 1999] shows that T1 provides greater net benefits than T0 on only 54.5% of 1000 Monte Carlo samples. This suggests that knowing more about some of the uncertain parameters could affect our decision. The results of the partial EVPI analyses we have undertaken are described on an indexed scale, where the overall expected value of perfect information, $1319 per patient in our model, is indexed to 100.

Our analysis of the accuracy of the algorithm in predicting the expected bias and confidence interval of EVPI estimates focuses on parameter θ5, the percentage responding to treatment T0. For inner sample sizes of J = 10, 100 and 500, and outer sample sizes of K = 10, 100 and 500, we undertake comparisons between the predictions of our algorithm and the results of re-running the Monte Carlo estimation of EVPI 1000 different times. By generating 1000 real estimates of \widehat{EVPI}(θ5), using for example J = 10 inner and K = 10 outer samples, we can compute the mean bias and the 95% confidence interval exhibited in these EVPI estimates. We then compare the mean bias exhibited with that predicted by the algorithm, and similarly the exhibited confidence interval width with that predicted by the algorithm.

                                             Mean Values given Existing Evidence    Uncertainty in Means (SDs)
Parameter                                    (a) T0    (b) T1    (c) (T1-T0)        (d) T0    (e) T1
Cost of Drug (θ1, θ11)                       $1000     $1500     $500               $1        $1
% Admissions (θ2, θ12)                       10%       8%        -2%                2%        2%
Days in Hospital (θ3, θ13)                   5.20      6.10      0.90               1.00      1.00
Cost per day (θ4)                            $400      $400      ...                $200      $200
% Responding (θ5, θ14)                       70%       80%       10%                10%       10%
Utility Change if respond (θ6, θ15)          0.30      0.30      ...                0.10      0.05
Duration of response (years) (θ7, θ16)       3.0       3.0       ...                0.5       1.0
% Side effects (θ8, θ17)                     25%       20%       -5%                10%       5%
Change in utility if side effect (θ9, θ18)   -0.10     -0.10     -0.00              0.02      0.02
Duration of side effect (years) (θ10, θ19)   0.50      0.50      ...                0.20      0.20

Central Estimate Results
Total Cost                                   $1208     $1695     $487
Total QALY                                   0.6175    0.7100    0.0925
Cost per QALY                                                    $5267
Net Benefit (given threshold λ = $10000)     $4967     $5405     $438

Table 1: Summary of Model and Parameters for Case Study 1


4.2 Case study 2: A complex cost-effectiveness model

The second case study uses a published health economic model developed on behalf of the National Institute for Clinical Excellence (NICE) to evaluate the cost-effectiveness of beta interferon and glatiramer acetate in the management of multiple sclerosis in the UK. Full details are given elsewhere [Chilcott et al., 2003b]. Briefly, the model uses a state transition methodology to describe the clinical course of patients with relapsing/remitting MS and secondary progressive MS through the Expanded Disability Status Scale (EDSS). Owing to the time-dependence of the probabilities of transiting between health states, a single run of the model required around 7 seconds, meaning that probabilistic sensitivity analysis using the model was highly computationally expensive. In fact, if 10,000 samples were used for both the inner and outer levels, then calculating the partial EVPI for just one of the model's 128 uncertain parameters using the two-level sampling algorithm would require around 22.2 years of computation time. Using 10,000 random samples we estimated the overall EVPI for the model to be £8,855 per patient. One-way sensitivity analyses identified several model parameters as important, and we chose to focus our analysis of the bias and confidence intervals for EVPI estimates on one parameter: the mean cost associated with the EDSS 9.5 health state.
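The 22.2-year figure follows from simple arithmetic on the two-level run count (our own back-of-envelope check, not a calculation from the paper):

```python
# 10,000 outer x 10,000 inner model runs, at about 7 seconds per run.
runs = 10_000 * 10_000
years = runs * 7 / (60 * 60 * 24 * 365)
print(round(years, 1))  # about 22.2 years
```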

5 Results

5.1 Results using Case Study 1

The true EVPI for the parameter of interest θ5 is estimated at around $199, which is 15.1 when indexed against the overall EVPI ($1319 = 100). The results of running the algorithm to predict bias and confidence intervals are shown in Table 2.

                          J = 10   J = 100   J = 500   J = 1000   J = 5000   J = 10000
Bias (independent of K)   15.61    1.58      0.25      0.17       0.05       0.02
95% CI
  K = 10                  ±43.9    ±27.7     ±25.5     ±25.1      ±24.9      ±24.8
  K = 100                 ±13.9    ±8.8      ±8.1      ±7.9       ±7.9       ±7.9
  K = 500                 ±6.2     ±3.9      ±3.6      ±3.6       ±3.5       ±3.5
  K = 1000                ±4.4     ±2.8      ±2.5      ±2.5       ±2.5       ±2.5
  K = 5000                ±2.0     ±1.2      ±1.1      ±1.1       ±1.1       ±1.1
  K = 10000               ±1.4     ±0.9      ±0.8      ±0.8       ±0.8       ±0.8

Table 2: Predicted bias and 95% CI for the Monte Carlo partial EVPI estimate in case study 1 using our proposed algorithm

The predicted bias in the Monte Carlo estimate for a given inner sample size is314

compared with the actual bias exhibited in Figure 1. The order of magnitude of315

the predicted expected bias is very similar to that mean bias actually exhibited316

using 1000 repeated runs of the two level partial EVPI. The absolute scale of the317


bias is quite high for very small numbers of inner samples. For example, with just 10 inner runs, the bias in this case study is approximately 15.5 when indexed to the overall EVPI = 100, which would more than double our estimate of the indexed partial EVPI for parameter θ5. When larger numbers of inner runs are used, the bias falls substantially, with an expected indexed bias of just 0.25 when using J = 500 inner iterations. This suggests that, on average, if we used J = 500 iterations we would estimate the indexed partial EVPI for parameter θ5 at 15.35 rather than 15.1. The near equivalence of the predicted and actual values for the different numbers of inner iterations tested suggests that our algorithm provides a robust estimate of the expected bias due to small numbers of iterations, at least in this case study.
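The upward bias from small inner samples has a simple source: the maximisation step is applied to noisy inner-loop means, and the expectation of a maximum exceeds the maximum of expectations. A small self-contained demonstration (all numbers illustrative, unrelated to the case-study model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two decisions with equal true expected net benefit (0.0), so the
# true value of the maximum is 0. Any positive average below is pure
# small-sample bias from maximising over noisy inner-loop means.
def mean_max_of_means(J, reps=2000):
    a = rng.normal(0.0, 1.0, size=(reps, J)).mean(axis=1)
    b = rng.normal(0.0, 1.0, size=(reps, J)).mean(axis=1)
    return np.maximum(a, b).mean()

bias_small = mean_max_of_means(J=10)    # noticeably positive
bias_large = mean_max_of_means(J=1000)  # close to zero
```

Increasing J shrinks the noise in each inner mean, and the bias of the maximum shrinks with it, mirroring the pattern in Table 2.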

[Chart: expected bias, indexed to overall EVPI = 100, plotted against the number of inner samples J = 10, 100 and 500, comparing the predicted bias with the bias actually exhibited.]

Figure 2: Comparison of bias predicted by proposed algorithm versus exhibited bias in Case Study 1.

Figure 3 shows that the predicted confidence intervals for the partial EVPI estimates are similar to, but slightly wider than, the confidence intervals actually exhibited when using 1000 repeated computations of EVPI(θ5). The estimated


widths are of sufficient accuracy to be informative and, as discussed earlier, K can be refined once we have actually chosen J and obtained a partial EVPI estimate. With the smallest numbers tested (K = J = 10), the predicted 95% confidence interval for the indexed partial EVPI is ±43.9; that is, an estimate could be expected to be biased by 15.5 indexed points, as discussed earlier, and to lie anywhere between −13 and +74 when indexed to overall EVPI = 100. Clearly, using such small numbers to produce one estimate of EVPI(θ5) would give almost meaningless results. As larger numbers of iterations are used, the predicted confidence intervals for EVPI(θ5) narrow. When K = J = 500 is used, the predicted 95% confidence interval is 15.35 ± 3.6. Even higher numbers of iterations may be required if this level of accuracy does not meet analysts' or decision makers' needs. It can be seen that increasing the outer sample size K narrows the confidence interval in proportion to 1/√K. Again, the near equivalence of the predicted and actual values for the different numbers of outer and inner iterations tested suggests that our algorithm provides a robust estimate of the order of magnitude of 95% confidence intervals for partial EVPI estimates, at least in this case study.
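The 1/√K behaviour noted above is just the standard Monte Carlo error rate for the outer average. A quick sketch; the per-draw standard deviation of 40 on the indexed scale is an assumed illustrative figure, not a value from the case study:

```python
import math

sd_per_draw = 40.0  # assumed SD of one outer-loop draw, indexed scale

def ci_half_width(K):
    # 95% CI half-width of a mean of K independent draws
    return 1.96 * sd_per_draw / math.sqrt(K)

w_500 = ci_half_width(500)
w_1000 = ci_half_width(1000)
ratio = w_500 / w_1000  # doubling K shrinks the width by sqrt(2)
```

This is why quadrupling K only halves the interval width: accuracy gains are paid for at a quadratic rate in model runs.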

5.2 Results using Case Study 2

For case study 2, the predicted bias in EVPI estimates produced by our algorithm for different numbers of inner samples J is shown in Table 3. The analysis suggests that an inner sample of 1,000 would still produce an appreciable bias


[Chart: 95% confidence interval half-widths, indexed to overall EVPI = 100, for each combination of outer K and inner J samples used (K × J from 10 × 10 up to 500 × 500), comparing predicted with actually exhibited intervals.]

Figure 3: Comparison of confidence intervals predicted by proposed algorithm versus exhibited confidence intervals in Case Study 1.

(2.42 on the indexed scale).

In this case study, we tested different versions of our algorithm, extending the number of inner iterations used within the algorithm from our originally specified J = 30 up to 40, 50 and 60. We found some small variations but no appreciable differences in the assessed order of magnitude of the predicted bias when using more than 30 inner iterations in our algorithm.

Table 4 shows the estimated width of a 95% confidence interval for the partial EVPI for various outer sample sizes, given an inner sample size of J = 1000. We can see that the confidence intervals are fairly wide here, indicating that a relatively large outer sample size will also be needed, certainly in excess of K = 1000. The results of our case study 2 analyses, on the basis of 630 model evaluations, led us to establish that a likely minimum sample size for an accurate 2-level Monte


Inner samples    Predicted bias    Indexed (overall EVPI = 100)
J = 500          £456              5.25
J = 1000         £214              2.42
J = 5000         £45               0.51
J = 10000        £20               0.22

Table 3: The predicted size of the bias in a Monte Carlo partial EVPI estimate for the parameter cost associated with health state EDSS 9.5, in case study 2.

Outer samples K             K = 100    K = 1,000    K = 10,000    K = 100,000
Absolute value              £11,729    £3,709       £1,173        £371
Indexed to overall EVPI     132.45     41.88        13.25         4.19

Table 4: Estimated width of the 95% confidence interval for a Monte Carlo partial EVPI estimate for the parameter cost associated with health state EDSS 9.5, for an inner sample size J = 1000 and outer sample size K.

Carlo estimate will be of the order of 1 million model runs. Partial EVPI estimation using the Monte Carlo approach was clearly infeasible, because this would require 81 days of computation time. At this point we moved on to utilise the more efficient Gaussian process model to approximate partial EVPI, the details of which are given elsewhere [Tappenden et al., 2004].
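The feasibility judgement above is simple arithmetic on the nested design: the two-level scheme needs K × J model evaluations in total. A sketch, where the per-evaluation time is an assumption chosen only to reproduce the order of magnitude quoted, not a measured figure:

```python
seconds_per_run = 7.0   # assumed time for one model evaluation
K, J = 1000, 1000       # outer and inner sample sizes
total_runs = K * J      # one full inner loop per outer draw
days = total_runs * seconds_per_run / (60 * 60 * 24)  # about 81 days
```

At roughly a few seconds per evaluation, a million-run design sits well beyond practical budgets, which is what motivates the meta-model approach.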


6 Discussion

We have presented an algorithm for predicting the expected bias and confidence interval of a Monte Carlo estimate of partial EVPI. Testing the algorithm against the bias and confidence intervals exhibited in our first case study shows that it provides predictions of the correct order of magnitude. Our second case study provides an example where infeasibly long computation times would be required for accurate EVPI estimates, and a Gaussian process meta-model was required to emulate the original model and produce much quicker model runs [Tappenden et al., 2004].

The algorithm can be applied generally to any decision model, and is not limited to the economic evaluation of health technologies. It can be applied to patient simulation models [Brennan et al., 2006], where there is the additional complication of how many patient simulations to use per parameter set [O'Hagan et al., 2007]. It is also applicable to models with any characterisation of uncertainty, and is not limited to continuous or parametric distributions. Although both of our case studies examined the partial EVPI of a single parameter, the algorithm applies identically to computing EVPI for a group of parameters Xi.

The algorithm is very quick to compute. The time taken to compute predicted biases and confidence intervals for the 36 combinations of inner sample size J and outer sample size K in Table 2 for case study 1 was just over 8 seconds. This compares with several days of computation to produce the 36,000 runs of actual two-level partial EVPIs against which the predictions in Figures 2 and 3 were compared.


The main limitation of the approach is that it requires the computation of the conditional distribution X−i | Xi for the parameters not of interest. In case study 1 these were multivariate normally distributed, and so the conditional mean and variance matrix were available from analytic formulae. This may not be the case for other characterisations of uncertainty. Of course, this problem applies just as much to the computation of EVPI via Monte Carlo itself as it does to the prediction algorithm for the bias and confidence intervals of those estimates. A second limitation is that this procedure will be most useful for models that run fairly quickly (i.e. a few seconds per model run). For very computationally expensive models, such as patient simulation models that can take an hour per run, we believe a Gaussian process meta-model approach is currently the only viable option for computing EVPI.
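For the multivariate normal case mentioned above, the conditional distribution of X−i given Xi follows from the standard partitioned-covariance formulas: the conditional mean is μ−i + Σ−i,i Σii⁻¹ (xi − μi) and the conditional covariance is Σ−i,−i − Σ−i,i Σii⁻¹ Σi,−i. A sketch with illustrative numbers (not the case-study parameters):

```python
import numpy as np

# Illustrative joint distribution over (X_i, X_-i); X_i is element 0.
mu = np.array([1.0, 2.0, 0.5])
Sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 2.0, 0.5],
                  [0.2, 0.5, 1.5]])
i = 0          # index of the parameter of interest X_i
rest = [1, 2]  # indices of the remaining parameters X_-i

def conditional(x_i):
    """Mean and covariance of X_-i | X_i = x_i (partitioned formulas)."""
    S_ii = Sigma[i, i]
    S_ri = Sigma[np.ix_(rest, [i])]      # Cov(X_-i, X_i), shape (2, 1)
    S_rr = Sigma[np.ix_(rest, rest)]
    mean = mu[rest] + (S_ri / S_ii).ravel() * (x_i - mu[i])
    cov = S_rr - S_ri @ S_ri.T / S_ii
    return mean, cov

m, C = conditional(x_i=1.5)
```

Each inner loop then draws the remaining parameters from N(m, C) given the outer draw of Xi, which is exactly the step that lacks a closed form under other characterisations of uncertainty.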

One implication of considering the uncertainty in EVPI estimates explicitly is that decision makers and analysts may need to define some limit on how accurately they wish to know the true EVPI for parameters of interest. Both case studies suggest that relatively small numbers of inner and outer loops, such as K = J = 500 and a total of 250,000 model runs, provide a reasonable estimate of the order of magnitude of the partial EVPI. The diminishing returns of higher numbers of inner and outer iterations for EVPI accuracy are an unavoidable feature of the Monte Carlo estimation process. In case study 1, moving from K = J = 500 to K = J = 1000 (an additional 750,000 model runs, taking 4 times longer) reduces the 95% CI from ±3.6 to ±2.5 on the indexed scale. At a further extreme, moving from


K = J = 5,000 to K = J = 10,000, an additional 75 million model runs reduces the 95% CI by just 0.3, from ±1.1 to ±0.8.

As partial EVPI calculations become more common in practice, there is a need to be as efficient as possible in producing estimates [Claxton et al., 2004]. We have tested the algorithm on only two case studies here, and further research would be useful to test its performance in other models. The field of value of information analysis is widening to consider the expected value of sample information (EVSI) [Ades et al., 2004], [Brennan and Kharroubi, 2007a] and the expected value of perfect implementation (EVPImp) [Fenwick and Claxton, 2005]. In many of these analyses the same issue of inner and outer sampling exists, and it would be useful to adapt our algorithm to these contexts and investigate its accuracy and usefulness.

In conclusion, this novel algorithm is easily and generally applied to compute predicted bias and confidence intervals for Monte Carlo based estimates of partial EVPI in decision models. The algorithm is a relatively simple tool, using standard results concerning uncertainty in Monte Carlo estimates, but extending them to the context of nested expectations with intervening maxima. Our judgement is that it should provide robust estimates in all kinds of decision model, although further testing of this may be useful. The algorithm is particularly useful when relatively small numbers of inner and outer iterations are planned, and is recommended for use when reporting all Monte Carlo based partial EVPI calculations.


References

A. E. Ades, G. Lu, and K. Claxton. Expected value of sample information calculations in medical decision modeling. Medical Decision Making, 24:207–227, 2004.

A. Brennan and R. Akehurst. Modelling in health economic evaluation. What is its place? What is its value? Pharmacoeconomics, 17, 2000.

A. Brennan and S. A. Kharroubi. Efficient computation of partial expected value of sample information using Bayesian approximation. Journal of Health Economics, 26, 2007a. Available at http://dx.doi.org/10.1016/j.jhealeco.2006.06.002.

A. Brennan and S. A. Kharroubi. Expected value of sample information for Weibull survival data. Health Economics, 2007b (accepted).

A. Brennan, S. A. Kharroubi, J. Chilcott, and A. O'Hagan. A two level Monte Carlo approach to calculating expected value of perfect information: resolution of the uncertainty in methods. Discussion paper, University of Sheffield, 2002. Available at http://www.shef.ac.uk/content/1/c6/02/96/05/brennan.doc.

A. Brennan, S. E. Chick, and R. Davies. A taxonomy of model structures for economic evaluation of health technologies. Health Economics, 15:1295–1310, 2006. Available at http://dx.doi.org/10.1002/hec.1148.

A. Brennan, S. A. Kharroubi, A. O'Hagan, and J. Chilcott. Calculating partial expected value of perfect information via Monte Carlo sampling algorithms. Medical Decision Making, 2007 (provisionally accepted).

A. H. Briggs and A. Gray. Handling uncertainty when performing economic evaluation of healthcare interventions. Health Technology Assessment, 1999.

J. Chilcott, A. Brennan, A. Booth, J. Karnon, and P. Tappenden. The role of modelling in prioritising and planning clinical trials. Health Technology Assessment, 7, 2003a.

J. Chilcott, C. McCabe, P. Tappenden, A. O'Hagan, et al. Modelling the cost effectiveness of interferon beta and glatiramer acetate in the management of multiple sclerosis. BMJ, 326, 2003b.

K. Claxton. Bayesian approaches to the value of information: implications for the regulation of new health care technologies. Health Economics, 8:269–274, 1999.

K. Claxton and J. Posnett. An economic approach to clinical trial design and research priority-setting. Health Economics, 5:513–524, 1996.

K. Claxton and K. Thompson. A dynamic programming approach to efficient clinical trial design. Journal of Health Economics, 20:797–822, 2001.

K. Claxton, L. Ginnelly, M. Sculpher, Z. Philips, and S. Palmer. A pilot study on the use of decision theory and value of information analysis as part of the NHS Health Technology Assessment programme. Health Technology Assessment, 8, 2004.

J. C. Felli and G. B. Hazen. Sensitivity analysis and the expected value of perfect information. Medical Decision Making, 18:95–109, 1998.

J. C. Felli and G. B. Hazen. Correction: sensitivity analysis and the expected value of perfect information. Medical Decision Making, 23:97, 2003.

E. Fenwick, K. Claxton, and M. Sculpher. The value of implementation and the value of information: combined and uneven development. CHE Discussion Paper 5, 2005.

B. G. Koerkamp, M. Hunink, T. Stijnen, and M. C. Weinstein. Identifying key parameters in cost-effectiveness analysis using value of information: a comparison of methods. Health Economics, 15, 2006.

D. Meltzer. Addressing uncertainty in medical cost effectiveness analysis: implications of expected utility maximization for methods to perform sensitivity analysis and the use of cost-effectiveness analysis to set priorities for medical research. Journal of Health Economics, 20:109–129, 2001.

J. E. Oakley. Sensitivity analysis for computationally expensive economic models. Technical report, Department of Probability and Statistics, University of Sheffield, 2003.

J. E. Oakley and A. O'Hagan. Probabilistic sensitivity of complex models: a Bayesian approach. Journal of the Royal Statistical Society, Series B, 66:751–769, 2004.

A. O'Hagan, M. Stevenson, and J. Madan. Monte Carlo probabilistic sensitivity analysis for patient level simulation models: efficient estimation of mean and variance using ANOVA. Health Economics, 2007.

H. Raiffa. Decision Analysis: Introductory Lectures on Choice Under Uncertainty. Addison-Wesley, Reading, Mass., 1968.

M. D. Stevenson, J. E. Oakley, and J. B. Chilcott. Gaussian process modelling in conjunction with individual patient simulation modelling: a case study describing the calculation of cost-effectiveness ratios for the treatment of osteoporosis. Medical Decision Making, 24, 2004.

P. Tappenden, J. Chilcott, S. Eggington, J. Oakley, and C. McCabe. Methods for expected value of information analysis in complex health economic models: developments on the health economics of interferon beta and glatiramer acetate for multiple sclerosis. Health Technology Assessment, 8, 2004.

B. A. Van Hout, M. J. Al, G. S. Gordon, and F. F. Rutten. Costs, effects and C/E-ratios alongside a clinical trial. Health Economics, 1994.

F. Yokota and K. M. Thompson. Value of information literature analysis (VOILA): a review of applications in health risk management. Medical Decision Making, 24, 2004.