Part 24: Bayesian Estimation  24-1/35
Econometrics I
Professor William Greene
Stern School of Business
Department of Economics
Part 24 – Bayesian Estimation
Bayesian Estimators
“Random Parameters” vs. Randomly Distributed Parameters
Models of Individual Heterogeneity
Random Effects: Consumer Brand Choice
Fixed Effects: Hospital Costs
Bayesian Estimation
Specification of the conditional likelihood: f(data | parameters)
Specification of the priors: g(parameters)
Posterior density of the parameters:
  f(parameters | data) = f(data | parameters) g(parameters) / f(data)
Posterior mean = E[parameters | data]
The Marginal Density for the Data is Irrelevant
f(θ | data) = f(data | θ)p(θ) / f(data) = L(data | θ)p(θ) / f(data)
The joint density of θ and the data is f(data, θ) = L(data | θ)p(θ).
The marginal density of the data is f(data) = ∫ f(data, θ) dθ = ∫ L(data | θ)p(θ) dθ.
Thus, f(θ | data) = L(data | θ)p(θ) / ∫ L(data | θ)p(θ) dθ
Posterior mean = ∫ θ p(θ | data) dθ = ∫ θ L(data | θ)p(θ) dθ / ∫ L(data | θ)p(θ) dθ
This requires specification of the likelihood and the prior.
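As a concrete check of the posterior mean formula, the following Python sketch uses an assumed toy setup (Bernoulli data with a uniform Beta(1,1) prior; none of these numbers come from the slides). It computes ∫θL(data|θ)p(θ)dθ / ∫L(data|θ)p(θ)dθ by brute-force numerical integration and compares it with the known closed-form Beta posterior mean.

```python
# Toy check of: Posterior mean = ∫θ L(data|θ)p(θ)dθ / ∫ L(data|θ)p(θ)dθ
# Assumed data: n Bernoulli trials with s successes, uniform prior on θ.
n, s = 10, 7

def likelihood(theta):
    # L(data|theta): binomial kernel for s successes in n trials
    return theta**s * (1.0 - theta)**(n - s)

def prior(theta):
    # Uniform (Beta(1,1)) prior on [0,1]
    return 1.0

# Numerical integration on a fine grid (Riemann sum; normalization cancels)
m = 100000
grid = [i / m for i in range(m + 1)]
num = sum(t * likelihood(t) * prior(t) for t in grid)
den = sum(likelihood(t) * prior(t) for t in grid)
post_mean = num / den

# With a Beta(1,1) prior the posterior is Beta(s+1, n-s+1), mean (s+1)/(n+2)
exact = (s + 1) / (n + 2)
assert abs(post_mean - exact) < 1e-3
```

The point is only that the marginal density f(data) never needs to be computed separately: it is the normalizing constant, and it cancels out of the ratio.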
Computing Bayesian Estimators
First generation: Do the integration (math).
Contemporary - simulation:
(1) Deduce the posterior.
(2) Draw random samples from the posterior and compute the sample means and variances of the samples. (Relies on the law of large numbers.)
E[θ | data] = ∫ θ [f(data | θ)g(θ) / f(data)] dθ
Modeling Issues
As N → ∞, the likelihood dominates and the prior disappears: the Bayesian estimator and the classical MLE converge. (This needs the mode of the posterior to converge to the mean.)
Priors:
Diffuse (noninformative) priors: large variances imply little prior information.
Informative priors: finite variances that appear in the posterior. This "taints" any final results.
A Practical Problem
Sampling from the joint posterior may be impossible. E.g., linear regression:
f(β, σ² | y, X) = { [vs²]^(v/2) (1/σ²)^(v/2+1) e^(-vs²/(2σ²)) / Γ(v/2) }
                  × [2πσ²]^(-K/2) |(X'X)^(-1)|^(-1/2) exp( -(1/2)(β-b)'[σ²(X'X)^(-1)]^(-1)(β-b) )
What is this???
To do 'simulation based estimation' here, we need joint observations on (β, σ²).
A Solution to the Sampling Problem
The joint posterior, p(β, σ² | data), is intractable. But:
For inference about β, a sample from the marginal posterior, p(β | data), would suffice.
For inference about σ², a sample from the marginal posterior, p(σ² | data), would suffice.
Can we deduce these? For this problem, we do have the conditionals:
p(β | σ², data) = N[b, σ²(X'X)^(-1)]
p(σ² | β, data) = a gamma distribution based on Σ_i (y_i - x_i'β)²
Can we use this information to sample from p(β | data) and p(σ² | data)?
The Gibbs Sampler
Target: Sample from the marginals of f(x1, x2) = the joint distribution.
The joint distribution is unknown, or it is not possible to sample from it.
Assumed: f(x1|x2) and f(x2|x1) are both known, and samples can be drawn from both.
Gibbs sampling: Obtain one draw from (x1, x2) by many cycles between x1|x2 and x2|x1:
Start x1,0 anywhere in the right range. Draw x2,0 from x2|x1,0. Return for x1,1 from x1|x2,0, and so on.
Several thousand cycles produce the draws. Discard the first several thousand to avoid the influence of the initial conditions ("burn in").
Average the draws to estimate the marginal means.
Bivariate Normal Sampling
Draw a random sample from the bivariate normal:
(v1, v2)' ~ N[ (0, 0)', [1 ρ; ρ 1] ]
(1) Direct approach: (v1, v2)' = Γ(u1, u2)', where u1 and u2 are two independent standard normal draws (easy) and Γ = [1 0; ρ sqrt(1-ρ²)], such that ΓΓ' = [1 ρ; ρ 1].
(2) Gibbs sampler:
v1 | v2 ~ N[ρv2, 1-ρ²]
v2 | v1 ~ N[ρv1, 1-ρ²]
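The Gibbs route can be sketched in a few lines of Python. Everything numeric here (ρ = 0.7, the seed, the run lengths) is an assumed value for the illustration; cycling between the two conditionals recovers the means and the correlation of the joint distribution.

```python
import math
import random

rho = 0.7                          # assumed correlation for the illustration
sd = math.sqrt(1.0 - rho**2)       # conditional std. dev., sqrt(1 - rho^2)
random.seed(42)

v1, v2 = 0.0, 0.0                  # start anywhere in the right range
draws = []
for r in range(60000):
    v1 = random.gauss(rho * v2, sd)    # v1 | v2 ~ N[rho*v2, 1-rho^2]
    v2 = random.gauss(rho * v1, sd)    # v2 | v1 ~ N[rho*v1, 1-rho^2]
    if r >= 10000:                     # discard the burn in
        draws.append((v1, v2))

n = len(draws)
m1 = sum(d[0] for d in draws) / n
m2 = sum(d[1] for d in draws) / n
cov = sum((d[0] - m1) * (d[1] - m2) for d in draws) / n

# Marginal means should be near 0; covariance (variances = 1) near rho
assert abs(m1) < 0.05 and abs(m2) < 0.05
assert abs(cov - rho) < 0.06
```

Note that the draws are autocorrelated across cycles, which is why the run is long and the averages, not individual draws, are the useful output.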
Gibbs Sampling for the Linear Regression Model
p(β | σ², data) = N[b, σ²(X'X)^(-1)]
p(σ² | β, data) = a gamma distribution based on Σ_i (y_i - x_i'β)²
Iterate back and forth between these two distributions.
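A minimal Python sketch of this back-and-forth, under assumptions not in the slides: a small simulated dataset, a fixed seed, and the standard diffuse-prior conditionals (σ² | β drawn as the residual sum of squares over a chi-squared(n) variate, β | σ² drawn from N[b, σ²(X'X)^(-1)]).

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated data (assumed for the illustration): y = 1 + 2*x + e
n = 500
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.standard_normal(n)

XtXinv = np.linalg.inv(X.T @ X)
b_ols = XtXinv @ (X.T @ y)          # OLS coefficients = posterior mean of beta|sigma^2

beta = np.zeros(2)                  # starting value
keep = []
for r in range(6000):
    # sigma^2 | beta, data: e'e divided by a chi-squared(n) draw
    e = y - X @ beta
    sigma2 = (e @ e) / rng.chisquare(n)
    # beta | sigma^2, data ~ N[b, sigma^2 (X'X)^-1]
    C = np.linalg.cholesky(sigma2 * XtXinv)
    beta = b_ols + C @ rng.standard_normal(2)
    if r >= 1000:                   # discard the burn in
        keep.append(beta)

post_mean = np.mean(keep, axis=0)
# With a diffuse prior the posterior mean sits essentially on OLS
assert np.allclose(post_mean, b_ols, atol=0.02)
```

For this particular model the Gibbs machinery is overkill, since the marginal posteriors are known analytically; it is shown because the same two-block iteration carries over directly to the probit case that follows, where the marginals are not tractable.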
Application – the Probit Model
(a) y*_i = x_i'β + ε_i, ε_i ~ N[0,1]
(b) y_i = 1 if y*_i > 0, 0 otherwise
Consider estimation of β and y*_i (data augmentation):
(1) If y*_i were observed, this would be a linear regression. (y_i would not be useful since it is just sgn(y*_i).) We saw in the linear model before: p(β | y*, y).
(2) If (only) β were observed, y*_i would be a draw from the normal distribution with mean x_i'β and variance 1. But y_i gives the sign of y*_i. y*_i | β, y_i is a draw from the truncated normal (truncated from above if y_i = 0, from below if y_i = 1).
Gibbs Sampling for the Probit Model
(1) Choose an initial value for β (maybe the MLE).
(2) Generate y*_i by sampling N observations from the truncated normal with mean x_i'β and variance 1, truncated from above at 0 if y_i = 0, from below if y_i = 1.
(3) Generate β by drawing a random normal vector with mean vector (X'X)^(-1)X'y* and variance matrix (X'X)^(-1).
(4) Return to (2) 10,000 times, retaining the last 5,000 draws - the first 5,000 are the 'burn in.'
(5) Estimate the posterior mean of β by averaging the last 5,000 draws.
(This corresponds to a uniform prior over β.)
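The five steps can be sketched in Python. The simulated data, seed, and (shortened) run lengths are assumptions for the illustration, and the truncated normal is drawn by the inverse-CDF formulas used elsewhere in these slides.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(13579)

# Simulated probit data (assumed): y* = .2 + .5*x1 - .5*x2 + e
n = 250
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
X = np.column_stack([np.ones(n), x1, x2])
y = (0.2 + 0.5 * x1 - 0.5 * x2 + rng.standard_normal(n) > 0)

XtXinv = np.linalg.inv(X.T @ X)
C = np.linalg.cholesky(XtXinv)          # for N[., (X'X)^-1] draws
beta = np.zeros(3)                       # step (1): initial value
keep = []
for r in range(1000):
    # step (2): y* | beta from the truncated normal, by inverse CDF
    mu = X @ beta
    u = rng.random(n)
    ystar = np.empty(n)
    for i in range(n):
        if y[i]:                         # y=1: truncated from below at 0
            p = 1.0 - (1.0 - u[i]) * nd.cdf(mu[i])
        else:                            # y=0: truncated from above at 0
            p = u[i] * nd.cdf(-mu[i])
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard the inverse CDF
        ystar[i] = mu[i] + nd.inv_cdf(p)
    # step (3): beta | y* ~ N[(X'X)^-1 X'y*, (X'X)^-1]
    beta = XtXinv @ (X.T @ ystar) + C @ rng.standard_normal(3)
    if r >= 200:                         # step (4): discard the burn in
        keep.append(beta)

bbar = np.mean(keep, axis=0)             # step (5): posterior mean
```

With this setup `bbar` lands near the data-generating coefficients (.2, .5, -.5), up to sampling error, just as the MLE would.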
Generating Random Draws from f(X)
The inverse probability method of sampling random draws:
If F(x) is the CDF of random variable x, then a random draw on x may be obtained as F^(-1)(u), where u is a draw from the standard uniform (0,1).
Examples:
Exponential: f(x) = θ exp(-θx); F(x) = 1 - exp(-θx); x = -(1/θ)log(1-u)
Normal: F(x) = Φ(x); x = Φ^(-1)(u)
Truncated normal: x_i = μ_i + Φ^(-1)[1 - (1-u)·Φ(μ_i)] for y = 1;
                  x_i = μ_i + Φ^(-1)[u·Φ(-μ_i)] for y = 0.
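A quick Python check of the first two formulas (θ = 2 and the sample sizes are assumed values; only the standard library is used):

```python
import math
import random
from statistics import NormalDist

random.seed(1)
# Uniform draws, kept strictly inside (0,1) so the inverse CDFs are defined
U = [min(max(random.random(), 1e-12), 1 - 1e-12) for _ in range(200000)]

# Exponential(theta): F(x) = 1 - exp(-theta x)  =>  x = -(1/theta) log(1-u)
theta = 2.0
expo = [-(1.0 / theta) * math.log(1.0 - u) for u in U]
assert abs(sum(expo) / len(expo) - 1.0 / theta) < 0.01   # E[x] = 1/theta

# Normal: x = Phi^{-1}(u)
nd = NormalDist()
norm = [nd.inv_cdf(u) for u in U]
assert abs(sum(norm) / len(norm)) < 0.01                 # E[x] = 0
```

The truncated normal formulas are the same idea: the uniform draw is first mapped into the sub-interval of (0,1) that F assigns to the allowed range of x, then pushed through Φ^(-1).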
? Generate raw data
Calc    ; Ran(13579) $
Sample  ; 1 - 250 $
Create  ; x1 = rnn(0,1) ; x2 = rnn(0,1) $
Create  ; ys = .2 + .5*x1 - .5*x2 + rnn(0,1) ; y = ys > 0 $
Namelist; x = one,x1,x2 $
Matrix  ; xxi = <x'x> $
Calc    ; Rep = 200 ; Ri = 1/(Rep-25) $
? Starting values and accumulate mean and variance matrices
Matrix  ; beta = [0/0/0] ; bbar = init(3,1,0) ; bv = init(3,3,0) $
Proc = gibbs $    ? Markov Chain - Monte Carlo iterations
Do for  ; simulate ; r = 1,Rep $
? ------- [ Sample y* | beta ] --------------------------
Create  ; mui = x'beta ; f = rnu(0,1)
        ; if(y=1) ysg = mui + inp(1-(1-f)*phi( mui))
        ; (else)  ysg = mui + inp(    f *phi(-mui)) $
? ------- [ Sample beta | y* ] --------------------------
Matrix  ; mb = xxi*x'ysg ; beta = rndm(mb,xxi) $
? ------- [ Sum posterior mean and variance. Discard burn in. ]
Matrix  ; if[r > 25] ; bbar = bbar+beta ; bv = bv+beta*beta' $
Enddo   ; simulate $
Endproc $
Execute ; Proc = Gibbs $
Matrix  ; bbar = ri*bbar ; bv = ri*bv - bbar*bbar' $
Probit  ; lhs = y ; rhs = x $
Matrix  ; Stat(bbar,bv,x) $
Example: Probit MLE vs. Gibbs
--> Matrix ; Stat(bbar,bv); Stat(b,varb) $
+---------------------------------------------------+
|Number of observations in current sample =    1000 |
|Number of parameters computed here       =       3 |
|Number of degrees of freedom             =     997 |
+---------------------------------------------------+
+---------+--------------+----------------+--------+---------+
|Variable | Coefficient  | Standard Error |b/St.Er.|P[|Z|>z] |
+---------+--------------+----------------+--------+---------+
 BBAR_1       .21483281      .05076663       4.232   .0000
 BBAR_2       .40815611      .04779292       8.540   .0000
 BBAR_3      -.49692480      .04508507     -11.022   .0000
+---------+--------------+----------------+--------+---------+
|Variable | Coefficient  | Standard Error |b/St.Er.|P[|Z|>z] |
+---------+--------------+----------------+--------+---------+
 B_1          .22696546      .04276520       5.307   .0000
 B_2          .40038880      .04671773       8.570   .0000
 B_3         -.50012787      .04705345     -10.629   .0000
A Random Effects Approach
Allenby and Rossi, "Marketing Models of Consumer Heterogeneity"
Discrete Choice Model - Brand Choice
"Hierarchical Bayes"
Multinomial Probit
Panel Data: Purchases of 4 brands of ketchup
Structure
Conditional data generation mechanism:
y*_it,j = β_i'x_it,j + ε_it,j = utility for consumer i, choice situation t, brand j
Y_it = j if y*_it,j is the maximum utility among the J choices
x_it,j = (constant, log price, "availability," "featured")
ε_it,j ~ N[0, σ_j], σ_1 = 1
Implies a J outcome multinomial probit model.
Bayesian Priors
Prior densities:
β_i ~ N[β̄, V_β]
Implies β_i = β̄ + w_i, w_i ~ N[0, V_β]
σ_j ~ Inverse Gamma[v, s_j] (looks like chi-squared), v = 3, s_j = 1
Priors over model parameters:
β̄ ~ N[β̄_0, aV_β], β̄_0 = 0
V_β ~ Wishart[v_0, V_0], v_0 = 8, V_0 = 8I
Bayesian Estimator
Joint posterior = E[β_1, ..., β_N, β̄, V_β, σ_1, ..., σ_J | data]
The integral does not exist in closed form. Estimate it by random samples from the joint posterior.
The full joint posterior is not known, so it is not possible to sample from the joint posterior directly.
Gibbs Cycles for the MNP Model
Samples from the marginal posteriors:
Marginal posterior for the individual parameters (known and can be sampled):
β_i | β̄, V_β, σ, data
Marginal posteriors for the common parameters (each known and each can be sampled):
β̄ | β_1, ..., β_N, V_β, σ, data
V_β | β_1, ..., β_N, β̄, σ, data
σ | β_1, ..., β_N, β̄, V_β, data
Results
Individual parameter vectors and disturbance variances
Individual estimates of choice probabilities
The same as the "random parameters model," with slightly different weights.
Allenby and Rossi call the classical method an "approximate Bayesian" approach.
(Greene calls the Bayesian estimator an "approximate random parameters model.")
Who's right?
The Bayesian approach layers on implausible uninformative priors and calls the maximum likelihood results "exact" Bayesian estimators.
The classical approach is strongly parametric and a slave to the distributional assumptions.
The Bayesian approach is even more strongly parametric than the classical one.
Neither is right - both are right.
Comparison of Maximum Simulated Likelihood and Hierarchical Bayes
Ken Train: “A Comparison of Hierarchical Bayes and Maximum Simulated Likelihood for Mixed Logit”
Mixed Logit
U(i,t,j) = β_i'x(i,t,j) + ε(i,t,j)
i = 1, ..., N individuals
t = 1, ..., T_i choice situations
j = 1, ..., J alternatives (may also vary)
Stochastic Structure – Conditional Likelihood
Prob(i,j,t) = exp(β_i'x_i,j,t) / Σ_j exp(β_i'x_i,j,t)
Likelihood for individual i = Π_t [ exp(β_i'x_i,j*,t) / Σ_j exp(β_i'x_i,j,t) ]
j* = indicator for the specific choice made by i at time t.
Note the individual specific parameter vector, β_i.
Classical Approach
β_i ~ N[b, Ω]; write β_i = b + w_i,
where w_i = Ω^(1/2)v_i and Ω = diag(σ_1², ..., σ_K²) (uncorrelated).
Log likelihood = Σ_i log ∫ Π_t [ exp((b + w_i)'x_i,j*,t) / Σ_j exp((b + w_i)'x_i,j,t) ] f(w_i) dw_i
Maximize over b, Ω using maximum simulated likelihood.
(random parameters model)
Bayesian Approach – Gibbs Sampling and Metropolis-Hastings
Posterior(β_1, ..., β_N, b, Ω | data) ∝ Π_i L(data_i | β_i) × priors
Priors:
p(β_1, ..., β_N | b, Ω) = Π_i N[b, Ω]  (normal)
p(σ_1, ..., σ_K) = Π_k IG(parameters)  (inverse gamma)
g(b) = N(assumed parameters)  (normal with large variance)
Gibbs Sampling from Posteriors: b
p(b | β_1, ..., β_N, Ω) = Normal[β̄, (1/N)Ω]
β̄ = (1/N) Σ_i β_i
Easy to sample from the normal with known mean and variance by transforming a set of draws from the standard normal.
Gibbs Sampling from Posteriors: Ω
p(σ_k | b, β_1, ..., β_N) ~ Inverse Gamma[1+N, 1+N·V_k]
V_k = (1/N) Σ_i (β_i,k - b_k)² for each k = 1, ..., K
Draw from the inverse gamma for each k:
Draw 1+N draws from N[0,1] = h_r,k; then the draw is (1+N·V_k) / Σ_r h_r,k².
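The recipe can be sketched in Python (N, V_k, the seed, and the replication count are assumed values for the illustration). A useful sanity check: a draw of the form c divided by a chi-squared(d) variate has mean c/(d-2), so the simulated mean should land there.

```python
import random

random.seed(3)
N, V_k = 50, 2.0                 # assumed values for the illustration
scale = 1.0 + N * V_k            # the numerator 1 + N*V_k from the slide
d = 1 + N                        # 1+N squared standard normals = chi-squared(1+N)

def draw_sigma2():
    # Sum of 1+N squared N[0,1] draws is a chi-squared(1+N) variate
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(d))
    return scale / chi2          # the inverse gamma draw

draws = [draw_sigma2() for _ in range(40000)]
mean = sum(draws) / len(draws)

# c / chi-squared(d) has mean c/(d-2)
assert abs(mean - scale / (d - 2)) < 0.1
```

This is the same inverse-CDF-free trick used for σ² in the linear regression sampler earlier: an inverse gamma draw is just a constant over a chi-squared draw.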
Gibbs Sampling from Posteriors: β_i
p(β_i | b, Ω, data) = M × L(data_i | β_i) g(β_i | b, Ω)
M = a constant, L = likelihood, g = prior
(This is the definition of the posterior.)
Not clear how to sample. Use the Metropolis-Hastings algorithm.
Metropolis-Hastings Method
Define:
β_i,0 = an 'old' draw (vector)
β_i,1 = the 'new' draw (vector)
d_r = σΓv_r
σ = a constant (see below)
Γ = the diagonal matrix of standard deviations
v_r = a vector of K draws from the standard normal
Metropolis-Hastings: A Draw of β_i
Trial value: β_i,1 = β_i,0 + d_r
R = Posterior(β_i,1) / Posterior(β_i,0)  (the Ms cancel)
U = a random draw from U(0,1)
If U < R, use β_i,1; else keep β_i,0.
During the Gibbs iterations, draw d_r each cycle. σ controls the acceptance rate; try for .4.
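The accept/reject step can be sketched in Python. Everything here is an assumed toy setup: a standard normal kernel stands in for the posterior L(data_i|β_i)g(β_i|b,Ω) (the constant M cancels in R, so only the kernel matters), with scalar β, a fixed step scale, and arbitrary run lengths.

```python
import math
import random

random.seed(9)

def log_post(b):
    # Toy stand-in for log[L(data_i|b) g(b)]: a standard normal kernel.
    # Any normalizing constant would cancel in the ratio R below.
    return -0.5 * b * b

sigma_step = 2.4                 # the constant controlling the acceptance rate
b = 0.0                          # the 'old' draw
keep, accepts = [], 0
for r in range(60000):
    trial = b + sigma_step * random.gauss(0.0, 1.0)   # beta_new = beta_old + d_r
    R = math.exp(log_post(trial) - log_post(b))       # posterior ratio
    if random.random() < R:                           # if U < R, use the new draw
        b = trial
        accepts += 1
    if r >= 10000:                                    # discard the burn in
        keep.append(b)

rate = accepts / 60000
mean = sum(keep) / len(keep)
var = sum((x - mean) ** 2 for x in keep) / len(keep)

assert 0.2 < rate < 0.6          # step scale tuned toward the target rate
assert abs(mean) < 0.05          # retained draws reproduce the target's mean...
assert abs(var - 1.0) < 0.1      # ...and variance
```

Shrinking `sigma_step` raises the acceptance rate but slows the chain's exploration; enlarging it does the reverse, which is why the rate, not the step size itself, is what gets tuned.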
Application: Energy Suppliers
N = 361 individuals, 2 to 12 hypothetical suppliers
X = (1) fixed rates, (2) contract length, (3) local (0,1), (4) well known company (0,1), (5) offer TOD rates (0,1), (6) offer seasonal rates (0,1).
Estimates: Mean of Individual β_i

              MSL Estimate        Bayes Posterior Mean
Price         -1.04   (0.396)     -1.04   (0.0374)
Contract      -0.208  (0.0240)    -0.194  (0.0224)
Local          2.40   (0.127)      2.41   (0.140)
Well Known     1.74   (0.0927)     1.71   (0.100)
TOD           -9.94   (0.337)    -10.0    (0.315)
Seasonal     -10.2    (0.333)    -10.2    (0.310)
Reconciliation: A Theorem (Bernstein-Von Mises)
The posterior distribution converges to normal with covariance matrix equal to 1/N times the information matrix, the same as the classical MLE. (The distribution that is converging is the posterior itself, not the sampling distribution of the estimator of the posterior mean.)
The (empirical) posterior mean converges to the mode of the likelihood function, the same as the MLE. A proper prior disappears asymptotically.
The asymptotic sampling distribution of the posterior mean is the same as that of the MLE.