
Use of Monte-Carlo particle filters to fit and compare models for the dynamics of wild animal populations

Len Thomas

Newton Inst., 21st Nov 2006

I always wanted to be a model….

Outline

1. Introduction

2. Basic particle filtering

3. Tricks to make it work in practice

4. Applications

– (i) PF, Obs error fixed

– (ii) PF vs KF, One colony model

– (iii) PF vs MCMC

5. Discussion

References

Our work: http://www.creem.st-and.ac.uk/len/

Joint work with…

Methods and framework:

– Ken Newman, Steve Buckland: NCSE St Andrews

Seal models:

– John Harwood, Jason Matthiopoulos: NCSE & Sea Mammal Research Unit

– Many others at SMRU

Comparison with Kalman filter:

– Takis Besbeas, Byron Morgan: NCSE Kent

Comparison with MCMC:

– Carmen Fernández: Univ. Lancaster

1. Introduction

Answering questions about wildlife systems

How many?

Population trends

Vital rates

What if?

– scenario planning

– risk assessment

– decision support

Survey design

– adaptive management

State space model

State process density: g_t(n_t | n_{t-1}; Θ)

Observation process density: f_t(y_t | n_t; Θ)

Initial state density: g_0(n_0; Θ)

Bayesian approach, so:

– Priors on Θ

– Initial state density + state process density give a prior on n_{1:T}

British grey seal

Population in recovery from historical exploitation

NERC Special Committee on Seals

Data

Aerial surveys of breeding colonies since the 1960s count pups

Other data: intensive studies, radio tracking, genetics, counts at haul-outs

Pup production estimates

[Figure: pup production estimates, 1985–2005, for the four regions: North Sea, Inner Hebrides, Outer Hebrides, Orkneys]

Orkney example colonies

[Figure: pup counts over time, 1960–2000, for example Orkney colonies: Faraholm, Faray, Copinsay, Calf of Eday, Muckle Greenholm, Little Linga, Wartholm, Point of Spurness]

State process model: life cycle graph representation

[Life cycle graph: age classes pup, 1, 2, 3, 4, 5, 6+; pup production 0.5 α_{r,t} from the 6+ class; pup survival φ_{p,t}; adult survival φ_a on the older transitions; density dependence can act here (pup survival)… or here (fecundity)]

Density dependence, e.g. in pup survival

[Figure: pup survival (0.2–0.8) and number of 1-year-olds as functions of pup numbers (0–100,000), showing the density dependent response and the resulting carrying capacity χ_r; example values φ_{p,max} = 0.8, β_r = 0.0001]

φ_{p,r,t} = φ_{p,max} / (1 + β_r n_{0,r,t-1})

More flexible models of density dependence generalize this response, keeping the same overall form.

State process model: 4 regions

[Life cycle graph replicated for the four regions – North Sea, Inner Hebrides, Outer Hebrides, Orkneys – each with age classes pup, 1, 2, 3, 4, 5, 6+, linked by movement rates a_{11,t}, a_{21,t}, a_{31,t}, a_{41,t}, …]

Movement depends on:

• distance

• density dependence

• site faithfulness

SSMs of wildlife population dynamics: summary of features

State vector is high dimensional (seal model: 7 x 4 x 22 = 616).

Observations are only available on a subset of these states (seal model: 1 x 4 x 22 = 88).

State process density is a convolution of sub-processes, so hard to evaluate.

Parameter vector is often quite large (seal model: 11–12).

Parameters are often partially confounded, and some are poorly informed by the data.

Fitting state-space models

Analytic approaches

– Kalman filter (Gaussian linear model; Besbeas et al.)

– Extended Kalman filter (Gaussian nonlinear model – approximate) + other KF variations

– Numerical maximization of the likelihood

Monte Carlo approximations

– Likelihood-based (Geyer; de Valpine)

– Bayesian

  Rejection sampling (Damien Clancy)

  Markov chain Monte Carlo (MCMC; Bob O'Hara, Ruth King)

  Sequential Importance Sampling (SIS), a.k.a. Monte Carlo particle filtering

Inference tasks for time series data

Observe data y_{1:t} = (y_1, ..., y_t)

We wish to infer the unobserved states n_{1:t} = (n_1, ..., n_t) and parameters Θ

Fundamental inference tasks:

– Smoothing: p(n_{1:t}, Θ | y_{1:t})

– Filtering: p(n_t, Θ_t | y_{1:t})

– Prediction: p(n_{t+x} | y_{1:t}), x > 0

Filtering

Filtering forms the basis for the other inference tasks

Filtering is easier than smoothing (and can be very fast)

– Filtering recursion: a divide and conquer approach that considers each new data point one at a time

p(n_{t+1} | y_{1:t+1}) = f_{t+1}(y_{t+1} | n_{t+1}) p(n_{t+1} | y_{1:t}) / p(y_{t+1} | y_{1:t})

where the prediction density is

p(n_{t+1} | y_{1:t}) = ∫ g_{t+1}(n_{t+1} | n_t) p(n_t | y_{1:t}) dn_t

Only need to integrate over n_t, not n_{1:t}

p(n_0) → p(n_1 | y_1) → p(n_2 | y_{1:2}) → p(n_3 | y_{1:3}) → p(n_4 | y_{1:4}), with observation y_t incorporated at each arrow

Monte-Carlo particle filters: online inference for evolving datasets

Particle filtering is used when fast online methods are required to produce updated (filtered) estimates as new data arrive:

– Tracking applications in radar, sonar, etc.

– Finance: stock prices and exchange rates arrive sequentially; online update of portfolios

– Medical monitoring: online monitoring of ECG data for sick patients

– Digital communications

– Speech recognition and processing

2. Monte Carlo Particle Filtering

Variants/synonyms:

– Sequential Monte Carlo methods

– Sequential Importance Sampling (SIS)

– Sequential Importance Sampling with Resampling (SISR)

– Bootstrap filter

– Interacting particle filter

– Auxiliary particle filter

Importance sampling

Want to make inferences about some density p(), but cannot sample from it directly

Solution:

– Sample from another density q() (the importance function) that has the same support as p() (or wider support)

– Correct using importance weights w = p() / q()

Example:

[Figure: target p(x); proposal q(x); sample from q(x); sample weights w(x) = p(x)/q(x); weighted kernel density estimate p̂(x) approximating the target]
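To make the weighting step concrete, here is a minimal Python sketch of importance sampling. The Gamma target and Normal proposal are invented for illustration; they are not the densities plotted in the original figure:

```python
import numpy as np
from scipy.stats import gamma, norm

rng = np.random.default_rng(1)

# Hypothetical target p(x): a Gamma density we pretend we can only
# evaluate pointwise, not sample from directly.
p = gamma(a=9, scale=5).pdf          # target density p(x), mean 45
proposal = norm(loc=50, scale=20)    # proposal q(x) with wider support

# 1. Sample from the proposal q(x)
x = proposal.rvs(size=10_000, random_state=rng)

# 2. Correct with importance weights w(x) = p(x) / q(x)
w = p(x) / proposal.pdf(x)
w /= w.sum()                         # normalize

# The weighted sample now approximates p(): e.g. estimate E[X] under p
print("importance-sampling estimate of E[X]:", np.sum(w * x))
print("true E[X] under the Gamma target:   ", 9 * 5)
```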

Importance sampling algorithm

Given p(n_t | y_{1:t}) and y_{t+1}, want to update to p(n_{t+1} | y_{1:t+1})

Prediction step: make K random draws (i.e., simulate K "particles") from the importance function,

  ñ_{t+1}^(i) ~ q(·), i = 1, ..., K

Correction step: calculate

  w_{t+1}^(i) = p(ñ_{t+1}^(i) | y_{1:t+1}) / q(ñ_{t+1}^(i))

Normalize the weights so that Σ_{i=1}^K w_{t+1}^(i) = 1

Approximate the target density:

  p(n_{t+1} | y_{1:t+1}) ≈ Σ_{i=1}^K w_{t+1}^(i) δ(n_{t+1} − ñ_{t+1}^(i))


Importance sampling: take home message

The key to successful importance sampling is finding a proposal q() that:

– we can generate random values from

– has weights p()/q() that can be evaluated

The key to efficient importance sampling is finding a proposal q() that:

– we can easily/quickly generate random values from

– has weights p()/q() that can be evaluated easily/quickly

– is close to the target distribution

Sequential importance sampling

SIS is just repeated application of importance sampling at each time step

Basic sequential importance sampling:

– Proposal distribution q(·) = g(n_{t+1} | n_t)

– Leads to weights w_{t+1}^(i) = w_t^(i) f(y_{t+1} | n_{t+1}^(i))

To do basic SIS, need to be able to:

– Simulate forward from the state process

– Evaluate the observation process density (the likelihood)

Basic SIS algorithm

Generate K "particles" from the prior on {n_0, Θ}, with weights 1/K:

  {n_0^(i), Θ^(i), w_0^(i) = 1/K}, i = 1, ..., K

For each time period t = 0, ..., T − 1:

– For each particle i = 1, ..., K:

  Prediction step: draw n_{t+1}^(i) ~ g(n_{t+1} | n_t^(i))

  Correction step: set w_{t+1}^(i) = w_t^(i) f(y_{t+1} | n_{t+1}^(i))

Justification of weights

w_{t+1}^(i) = p(n_{t+1}^(i) | y_{1:t+1}) / q(·)

  = f(y_{t+1} | n_{t+1}^(i)) p(n_{t+1}^(i) | y_{1:t}) / [ g(n_{t+1}^(i) | n_t^(i)) p(y_{t+1} | y_{1:t}) ]

  ∝ f(y_{t+1} | n_{t+1}^(i)) p(n_{t+1}^(i) | y_{1:t}) / g(n_{t+1}^(i) | n_t^(i))

  ≈ f(y_{t+1} | n_{t+1}^(i)) p(n_t^(i) | y_{1:t}) g(n_{t+1}^(i) | n_t^(i)) / g(n_{t+1}^(i) | n_t^(i))

  = f(y_{t+1} | n_{t+1}^(i)) p(n_t^(i) | y_{1:t})

  ∝ f(y_{t+1} | n_{t+1}^(i)) w_t^(i)

Example of basic SIS

State-space model of exponential population growth:

– State model: n_{t+1} ~ Pois(Θ n_t)

– Observation model: y_t ~ N(n_t, (0.15 n_t)^2)

– Priors: n_0 ~ Pois(14), Θ ~ N(1.08, 0.1^2)
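A minimal Python sketch of the basic SIS algorithm applied to this model. The observations 12 and 14 are the ones used in the worked ten-particle example on the next slides; here K is taken much larger so the estimates are stable:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

y = [12, 14]               # observations from the worked example
K = 10_000                 # number of particles (the slides use K = 10)

# Generate K particles from the priors, with weights 1/K
n = rng.poisson(14, size=K).astype(float)   # n_0 ~ Pois(14)
theta = rng.normal(1.08, 0.1, size=K)       # Theta ~ N(1.08, 0.1^2)
w = np.full(K, 1.0 / K)

for t, y_t in enumerate(y, start=1):
    # Prediction step: simulate forward from the state process
    n = rng.poisson(theta * n).astype(float)    # n_t ~ Pois(Theta * n_{t-1})
    # Correction step: multiply the weights by the observation density
    sd = 0.15 * np.maximum(n, 1e-9)             # guard against n = 0 particles
    w *= norm.pdf(y_t, loc=n, scale=sd)
    w /= w.sum()
    print(f"t={t}: E[n_t | y_1:t] = {np.sum(w * n):.1f}, "
          f"ESS = {1.0 / np.sum(w ** 2):.0f}")
```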

Example of basic SIS: t = 1

Obs: y_1 = 12

Sample from prior:

  n_0: 11    12    14    13    16    16    20    14    9     16
  Θ_0: 1.055 1.107 1.195 0.974 0.936 1.029 1.081 1.201 1.000 0.958
  w_0: 0.1   0.1   0.1   0.1   0.1   0.1   0.1   0.1   0.1   0.1

Predict (prior at t = 1): simulate each particle forward, n_1^(i) ~ Pois(Θ^(i) n_0^(i)); Θ and w are unchanged

  n_1: 17    18    11    15    20    17    17    7     6     22

Correct (Obs gives f): evaluate f(y_1 | n_1^(i))

  f:   0.028 0.012 0.201 0.073 0.038 0.029 0.029 0.000 0.000 0.012

Posterior at t = 1 (weights updated and renormalized):

  n_1: 17    18    11    15    20    17    17    7     6     22
  Θ_1: 1.055 1.107 1.195 0.974 0.936 1.029 1.081 1.201 1.000 0.958
  w_1: 0.063 0.034 0.558 0.202 0.010 0.063 0.063 0.000 0.000 0.003

Example of basic SIS: t = 2

Obs: y_2 = 14

Posterior at t = 1 (from the previous slide):

  n_1: 17    18    11    15    20    17    17    7     6     22
  Θ_1: 1.055 1.107 1.195 0.974 0.936 1.029 1.081 1.201 1.000 0.958
  w_1: 0.063 0.034 0.558 0.202 0.010 0.063 0.063 0.000 0.000 0.003

Predict (prior at t = 2): n_2^(i) ~ Pois(Θ^(i) n_1^(i)); Θ and w are unchanged

  n_2: 15    14    12    10    11    15    21    9     11    20

Correct (Obs gives f): evaluate f(y_2 | n_2^(i))

  f:   0.160 0.190 0.112 0.008 0.046 0.160 0.011 0.000 0.046 0.007

Posterior at t = 2 (w_2 ∝ w_1 · f, renormalized):

  n_2: 15    14    12    10    11    15    21    9     11    20
  Θ_2: 1.055 1.107 1.195 0.974 0.936 1.029 1.081 1.201 1.000 0.958
  w_2: 0.105 0.068 0.691 0.015 0.005 0.105 0.007 0.000 0.000 0.000

Problem: particle depletion

Variance of the weights increases with time, until a few particles have almost all the weight

Results in large Monte Carlo error in the approximation

  p(n_{t+1} | y_{1:t+1}) ≈ Σ_{i=1}^K w_{t+1}^(i) δ(n_{t+1} − n_{t+1}^(i))

Can quantify: effective sample size = K / (1 + CV(w_t)^2)

From the previous example:

  Time  0     1    2
  ESS   10.0  2.5  1.8
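The effective sample size is easy to compute from the normalized weights. A quick Python check using the (rounded) weights displayed in the worked example gives ESS of about 10.0, 2.7 and 2.0; the small differences from the 10.0 / 2.5 / 1.8 in the table come from rounding in the displayed weights:

```python
import numpy as np

def ess(w):
    """Effective sample size K / (1 + CV(w)^2); equals 1 / sum(w^2)
    when the weights are normalized to sum to one."""
    w = np.asarray(w, dtype=float)
    cv2 = np.var(w) / np.mean(w) ** 2
    return len(w) / (1.0 + cv2)

w0 = [0.1] * 10
w1 = [0.063, 0.034, 0.558, 0.202, 0.010, 0.063, 0.063, 0.000, 0.000, 0.003]
w2 = [0.105, 0.068, 0.691, 0.015, 0.005, 0.105, 0.007, 0.000, 0.000, 0.000]
for t, w in enumerate([w0, w1, w2]):
    print(f"t={t}: ESS = {ess(w):.1f}")
```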

Problem: particle depletion

Worse when:

– Observation error is small

– Lots of data at any one time point

– State process has little stochasticity

– Priors are diffuse or not congruent with observations

– State process model incorrect (e.g., time varying)

– Outliers in the data

Some intuition

In a (basic) PF, we simulate particles from the prior, and gradually focus in on the full posterior by filtering the particles using data from one time period at a time

Analogies with MCMC:

– In MCMC, we take correlated samples from the posterior. We make proposals that are accepted stochastically. Problem is to find a "good" proposal. Limitation is time – has the sampler converged yet?

– In PF, we get an importance sample from the posterior. We generate particles from a proposal, which are assigned weights (and other stuff – see later). Problem is to find a "good" proposal. Limitation is memory – do we have enough particles?

So, for each "trick" in MCMC, there is probably an analogous "trick" in PF (and vice versa)

3. Particle filtering “tricks”

An advanced randomization technique

Tricks: solutions to the problem of particle depletion

Pruning: throw out “bad” particles (rejection)

Enrichment: boost “good” particles (resampling)

– Directed enrichment (auxiliary particle filter)

– Mutation (kernel smoothing)

Other stuff

– Better proposals

– Better resampling schemes

– …

Rejection control

Idea: throw out particles with low weights

Basic algorithm, at time t:

– Have a pre-determined threshold c_t, where 0 < c_t <= 1

– For i = 1, ..., K, accept particle i with probability

    r^(i) = min(1, w_t^(i) / c_t)

– If the particle is accepted, update its weight to

    w_t^{*(i)} = max(w_t^(i), c_t)

– Now we have fewer than K samples. Can make up samples by sampling from the priors, projecting forward to the current time point, and repeating the rejection control

Rejection control – discussion

Particularly useful at t = 1 with diffuse priors

Can have a sequence of control points (not necessarily every year)

Check points don't need to be fixed – can trigger when the variance of the weights gets too high

Thresholds c_t don't need to be set in advance but can be set adaptively (e.g., mean of the weights)

Instead of restarting at time t = 0, can restart by sampling from the particles at the previous check point (= partial rejection control)

Resampling: pruning and enrichment

Idea: allow "good" particles to amplify themselves while killing off "bad" particles

Algorithm: before and/or after each time step (not necessarily every time step)

– For j = 1, ..., K:

  Sample {ñ_t^(j), Θ̃_t^(j), w̃_t^(j)} independently from the set of particles {n_t^(i), Θ_t^(i), w_t^(i)}, i = 1, ..., K, according to the probabilities a_t^(1), ..., a_t^(K)

  Assign new weights w̃_t^(j) = w_t^(i) / a_t^(i)

Reduces particle depletion of the states, as "children" particles with the same "parent" now evolve independently

Resample probabilities

Should be related to the weights:

– a_t^(i) ∝ w_t^(i) (as in the bootstrap filter)

– a_t^(i) ∝ (w_t^(i))^α, where 0 <= α <= 1

  α could vary according to the variance of the weights; α = ½ has been suggested

– a_t^(i) related to "future trend" – as in the auxiliary particle filter

Directed resampling: auxiliary particle filter

Idea: pre-select particles likely to have high weights in the future

Example algorithm:

– For j = 1, ..., K:

  Sample {ñ_t^(j), Θ̃_t^(j), w̃_t^(j)} independently from the set of particles {n_t^(i), Θ_t^(i), w_t^(i)}, i = 1, ..., K, according to the probabilities

    a_t^(i) ∝ w_t^(i) f(y_{t+1} | E(n_{t+1} | n_t^(i)))

  (E(n_{t+1} | n_t^(i)) can be obtained by projecting forward deterministically)

  Predict: n_{t+1}^(j) ~ g(n_{t+1} | ñ_t^(j))

  Correct: w_{t+1}^(j) = f(y_{t+1} | n_{t+1}^(j)) / a_t^(j)

If "future" observations are available, can extend to look more than one time step ahead – e.g., protein folding application
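A sketch of one auxiliary-particle-filter update in Python. The functions predict_mean (deterministic projection giving E(n_{t+1} | n_t)), sim_state (a draw from g) and obs_pdf (evaluating f) are hypothetical placeholders that the caller must supply for the model at hand:

```python
import numpy as np

rng = np.random.default_rng(0)

def apf_step(n, w, y_next, predict_mean, sim_state, obs_pdf):
    """One auxiliary-particle-filter update from t to t+1."""
    # Pre-selection: probabilities proportional to the current weight
    # times the likelihood of y_{t+1} at a deterministic projection
    a = w * obs_pdf(y_next, predict_mean(n))
    a /= a.sum()
    idx = rng.choice(len(n), size=len(n), p=a)

    # Predict: simulate the selected particles forward stochastically
    n_new = sim_state(n[idx])

    # Correct: weight by f(y_{t+1} | n_{t+1}) / a for the chosen parent
    w_new = obs_pdf(y_next, n_new) / a[idx]
    return n_new, w_new / w_new.sum()
```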

Kernel smoothing: enrichment of parameters through mutation

Idea: introduce small "mutations" into parameter values when resampling

Algorithm:

– Given particles {n_t^(i), Θ_t^(i), w_t^(i)}, i = 1, ..., K

– Let V_t be the variance matrix of the Θ_t^(i)s

– For i = 1, ..., K, sample Θ_t^{*(i)} from N(Θ_t^(i), h^2 V_t), where h controls the size of the perturbations

– Variance of the parameters is now (1 + h^2) V_t, so need shrinkage to preserve the first two moments

Kernel smoothing – discussion

The previous algorithm does not preserve the relationship between parameters and states

– Leads to poor smoothing inference

– Possibly unreliable filtered inference?

– Pragmatically – use as small a value of h as possible

Extensions:

– Kernel smooth states as well as parameters

– Local kernel smoothing

Other "tricks"

Reducing dimension:

– Rao-Blackwellization – integrating out some part of the model

Better proposals:

– Start with an importance sample (rather than from the priors)

– Conditional proposals

Better resampling:

– Residual resampling

– Stratified resampling

Alternative "mutation" algorithms:

– MCMC within PF

Gradual focussing on the posterior:

– Tempering/annealing

4. Applications

(i) Faray example

Motivation: comparison with the Kalman Filter (KF) via the Integrated Population Modelling methods of Besbeas et al.

[Figure: Faray pup counts, 1985–2000]

Example state process model: density dependent emigration

[Life cycle graph: pup, 1, 2, 3, 4, 5, 6+, with pup production 0.5 φ_p, adult survival φ_a on the older transitions, and density dependent emigration from the colony]

Emigration is modelled in two regimes, t = 1984, ..., τ−1 and t = τ, ..., 2004, with the density dependence driven by pup numbers n_{0,t-1}; τ is fixed at 1991.

Observation process model

y_{0,t} ~ N(n_{0,t}, Ψ^2 n_{0,t}^2)

Ψ = CV of observations

Priors

Parameters:

– Informative priors on survival rates from intensive studies (mark-recapture)

– Informative priors on fecundity, carrying capacity and observation CV from expert opinion

Initial values for states in 1984:

– For pups, assume n_{0,1984} ~ N(y_1984, Ψ^2 y_1984^2)

– For other ages: stable age prior, or a more diffuse prior

Fitting the Faray data

One colony: a relatively low dimensional problem, so few "tricks" required

– Pruning (rejection control) in the first time period

– Multiple runs of the sampler until the required accuracy is reached (note – ideal for parallelization)

– Pruning of the final results (to reduce the number of particles stored)

Results – smoothed states

[Figure: Faray pup production, 1985–2000, with smoothed state estimates: KF result vs SIS result, more diffuse prior]

Posterior parameter estimates

       Param 1   Param 2
φa     0.67      0.81
φp     0.17      0.49
α      0.19      0.48
ψ      0.19      0.05
β      0.23      0.33

Sensitivity to priors

(Method of Millar, 2004)

[Figure: prior and posterior densities for each parameter, with the posterior medians marked – phi_a 0.961, phi_j 0.829, alpha 0.855, psi 0.0178, beta_faray 0.000158 – together with the median ML estimate from the KF]

Results – SIS, stable age prior

[Figure: Faray pup production, 1985–2000: KF result vs SIS result with stable age prior]

(ii) Extension to regional model

[Life cycle graph replicated for the four regions – North Sea, Inner Hebrides, Outer Hebrides, Orkneys – linked by movement rates a_{11,t}, a_{21,t}, a_{31,t}, a_{41,t}, …, with density dependent juvenile survival]

Movement depends on:

• distance

• density dependence

• site faithfulness

Fitting the regional data

Higher dimensional problem (7 x 4 x N.years states; 11 parameters)

More "tricks" required for an efficient sampler:

– Pruning (rejection control) in the first time period

– Multiple runs with rejection control of the final results

– Directed enrichment (auxiliary particle filter with kernel smoothing of parameters)

Estimated pup production

[Figure: estimated pup production, 1985–2000, for North Sea, Inner Hebrides, Outer Hebrides, Orkneys]

Posterior parameter estimates

[Figure: posterior densities with medians – phi.adult 0.966, phi.juv.max 0.734, alpha 0.973, psi 0.07, gamma.dd 3.32, gamma.dist 0.792, gamma.sf 0.355, beta.ns 0.000906, beta.ih 0.00127, beta.oh 0.000304, beta.ork 0.000183]

Predicted adults

[Figure: predicted adult numbers, 2004–2012, for North Sea, Inner Hebrides, Outer Hebrides, Orkneys]

(iii) Comparison with MCMC

Motivation:

– Which is more efficient?

– Which is more general?

– Do the "tricks" used in SIS cause bias?

Example applications:

– Simulated data for Coho salmon

– Grey seal data – 4 region model with movement and density dependent pup survival

Summary of findings

To be efficient, the MCMC sampler had to be highly customized, so it was not at all general

We also used an additional "trick" in SIS: integrating out the observation CV parameter. The SIS algorithm was still quite general, however.

MCMC was more efficient (lower MC variation per unit CPU time)

The SIS algorithm was less efficient, but was not significantly biased

Update: kernel smoothing bias

[Figure: smoothed pup production estimates, 1985–2005, for North Sea, Inner Hebrides, Outer Hebrides and Orkneys, under two kernel smoothing discount factors: KS discount = 0.999999 vs KS discount = 0.997]

Can't we discuss this?

5. Discussion

I'll make you fit into my model!!!

Modelling framework

State-space framework:

– Can explicitly incorporate knowledge of biology into state process models

– Explicitly model sources of uncertainty in the system

– Bring together diverse sources of information

Bayesian approach:

– Expert knowledge is frequently useful, since the data are often uninformative

– (In theory) can fit models of arbitrary complexity

SIS vs KF

Like SIS, use of the KF and its extensions is still an active research topic

KF is certainly faster – but is it accurate and flexible enough?

May be complementary:

– KF could be used for initial model investigation/selection

– KF could provide a starting importance sample for a particle filter

SIS vs MCMC

SIS:

– In other fields, widely used for "on-line" problems – where the emphasis is on fast filtered estimates (foot and mouth outbreak? N. American west coast salmon harvest openings?)

– Can the general algorithms be made more efficient?

MCMC:

– Better for "off-line" problems? – plenty of time to develop and run highly customized, efficient samplers

– Are general, efficient samplers possible for this class of problems?

Current disadvantages of SIS:

– Methods less well developed than for MCMC?

– No general software (no WinBUGS equivalent – "WinSIS")

Current / future research

SIS:

– Efficient general algorithms (and software)

– Comparison with MCMC and Kalman filter

– Parallelization

– Model selection and multi-model inference

– Diagnostics

Wildlife population models:

– Other seal models (random effects, covariates, colony-level analysis, more data…)

– Other applications (salmon, sika deer, Canadian seals, killer whales, …)

Just another particle…

Inference from different models

Population size estimates¹ (in thousands), with intervals:

Region            DDS                  EDDS
North Sea         12.0 (9.3–16.3)      18.2 (9.9–26.2)
Inner Hebrides    8.9 (6.9–11.7)       10.5 (7–14.3)
Outer Hebrides    32.2 (23.8–43.3)     41.3 (27.4–55.2)
Orkney            52.2 (39.2–70.4)     74.1 (44.3–98.4)
Total             105.2 (79.3–141.7)   144.1 (88.6–194.1)

Region            DDF                  EDDF
North Sea         26.6 (19.3–38.6)     21.9 (16.4–29.7)
Inner Hebrides    21.9 (15.3–33.4)     15.2 (11.5–25.6)
Outer Hebrides    85.8 (58.1–135.8)    59.5 (44.5–95.6)
Orkney            106.6 (77.9–153.1)   83.8 (64.4–119.4)
Total             240.9 (170.5–361)    180.3 (136.9–270.3)

¹ Assuming N adult males is 0.73 × N adult females

Model selection

Model   LnL       AIC       ΔAIC   AIC weight   AICc      ΔAICc   AICc weight
DDS     -719.55   1459.01   1.70   0.21         1461.96   1.79    0.22
EDDS    -718.67   1459.35   2.04   0.18         1462.82   2.66    0.14
DDF     -718.65   1457.31   0.00   0.50         1460.17   0.00    0.55
EDDF    -719.21   1460.41   3.10   0.10         1463.89   3.72    0.09
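The Akaike weights in the table can be reproduced from the AIC column alone; a quick Python check (differences in the last digit are rounding):

```python
import numpy as np

models = ["DDS", "EDDS", "DDF", "EDDF"]
aic = np.array([1459.01, 1459.35, 1457.31, 1460.41])

delta = aic - aic.min()        # Delta-AIC relative to the best model
w = np.exp(-0.5 * delta)
w /= w.sum()                   # Akaike weights
for m, d, wi in zip(models, delta, w):
    print(f"{m}: dAIC = {d:.2f}, weight = {wi:.2f}")
# -> DDS 0.21, EDDS 0.18, DDF 0.50, EDDF 0.11 (table rounds EDDF to 0.10)
```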

Effect of independent estimate of total population size

DDS & DDF models

Assumes the independent estimate is normally distributed with 15% CV.

Calculations based on data from 1984–2004.
