
Page 1: Sources of Entropy in Dynamic Representative Agent Models (pages.stern.nyu.edu/~dbackus/BCZ/ms/BCZ_MC_slides_GVA1012.pdf)

Sources of Entropy in Dynamic Representative Agent Models

David Backus (NYU), Mikhail Chernov (LBS and LSE), and Stanley Zin (NYU)

University of Geneva | December 2010

This version: December 5, 2010

1 / 39

Page 2:

Understanding dynamic models

How does one discern critical features of modern dynamic models?

The size of the equity premium is no longer an overidentifying restriction

1 / 39

Page 3:

Excess returns, log r^j − log r^1

Variable                        Mean     Standard Deviation   Skewness   Excess Kurtosis
Equity
  S&P 500                        0.40          5.56            -0.40          7.90
  Fama-French (small, high)      0.90          8.94             1.00         12.80
  Fama-French (large, high)      0.60          7.75            -0.64         11.57
Nominal bonds
  1 year                         0.11          1.00             0.57          9.92
  4 years                        0.15          2.00             0.10          4.87
Currencies
  AUD                           -0.15          3.32            -0.90          2.50
  GBP                            0.35          3.16            -0.50          1.50
Options
  S&P 500 ATM straddles        -62.15        119.40            -1.61          6.52

2 / 39

Page 4:

Understanding dynamic models

How does one discern critical features of modern dynamic models?

The size of the equity premium is no longer an overidentifying restriction

3 / 39

Page 5:

Understanding dynamic models

How does one discern critical features of modern dynamic models?

The size of the equity premium is no longer an overidentifying restriction

The models are built up from different state variables

Evidence points to an important role of extreme events

Which pieces are most important quantitatively?

We start by thinking about how risk is priced in these models

We find the concept of entropy and entropy bounds useful for this task

3 / 39

Page 6:

Outline

Preliminaries: entropy, entropy bound, cumulants

Example: Additive preferences, i.i.d. consumption growth

Generalizing to the non-i.i.d. case

Example: External habit, i.i.d. consumption growth

Example: Recursive preferences, non-i.i.d. consumption growth

4 / 39

Page 7:

Entropy bound

Pricing relation

E_t( m_{t+1} r_{t+1}^j ) = 1

Entropy: for any x > 0

L(x) ≡ log E x − E log x ≥ 0

Entropy bound, i.i.d. case (Bansal and Lehmann, 1997; Alvarez and Jermann, 2005; Backus, Chernov, and Martin, 2009)

L(m) ≥ E( log r^j − log r^1 )

5 / 39

Page 8:

Entropy bound

Pricing relation

E_t( m_{t+1} r_{t+1}^j ) = 1

Entropy: for any x > 0

L(x) ≡ log E x − E log x ≥ 0

Entropy bound, i.i.d. case (Bansal and Lehmann, 1997; Alvarez and Jermann, 2005; Backus, Chernov, and Martin, 2009)

L(m) ≥ E( log r^j − log r^1 )

Name: it is the relative entropy of Q w.r.t. P

L(m) = L(q/p) = log E(q/p) − E log(q/p) = −E log(q/p)

5 / 39
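
A minimal numerical illustration of these definitions (Python; the lognormal kernel and all numbers are assumptions for illustration, not the paper's calibration). The return r^j = 1/m satisfies E(m r^j) = 1 and attains the i.i.d. bound, so the two printed quantities should roughly agree.

import numpy as np

# Entropy of a hypothetical i.i.d. lognormal pricing kernel and the entropy bound.
rng = np.random.default_rng(0)
log_m = rng.normal(-0.02, 0.15, size=1_000_000)   # illustrative log pricing kernel
m = np.exp(log_m)

L_m = np.log(m.mean()) - log_m.mean()             # L(m) = log E m - E log m
r1 = 1.0 / m.mean()                               # riskless return: E(m r^1) = 1
rj = 1.0 / m                                      # a return with E(m r^j) = 1 that attains the bound
excess = np.mean(np.log(rj)) - np.log(r1)         # E(log r^j - log r^1)
print(L_m, excess)                                # both are approximately 0.15^2 / 2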

Page 9:

Connection to HJ bound

Net return: r^{net,j} = r^j − 1

HJ bound

HJ(m) ≡ σ(m)/E(m) ≥ E( r^{net,j} − r^{net,1} ) / σ( r^{net,j} − r^{net,1} )

6 / 39

Page 10:

Connection to HJ bound

Net return: r^{net,j} = r^j − 1

HJ bound

HJ(m) ≡ σ(m)/E(m) ≥ E( r^{net,j} − r^{net,1} ) / σ( r^{net,j} − r^{net,1} )

Entropy bound

L(m) ≥ E( log r^j − log r^1 ) ≈ E( r^{net,j} − r^{net,1} ) − E( (r^{net,j})² − (r^{net,1})² )/2

6 / 39

Page 11:

Connection to HJ bound

Net return: r^{net,j} = r^j − 1

HJ bound

HJ(m) ≡ σ(m)/E(m) ≥ E( r^{net,j} − r^{net,1} ) / σ( r^{net,j} − r^{net,1} )

Entropy bound

L(m) ≥ E( log r^j − log r^1 ) ≈ E( r^{net,j} − r^{net,1} ) − E( (r^{net,j})² − (r^{net,1})² )/2

The two coincide when m is lognormal

6 / 39

Page 12:

Cumulants

Cumulant generating function

k(s; x) = log E e^{sx} = Σ_{j=1}^∞ κ_j(x) s^j / j!

Cumulants are almost moments

mean = κ_1(x)

variance = κ_2(x)

skewness = κ_3(x) / κ_2(x)^{3/2}

excess kurtosis = κ_4(x) / κ_2(x)²

If x is normal, κ_j(x) = 0 for j > 2

7 / 39

Page 13:

Entropy and cumulants

Computation

L(m) = log E e^{log m} − E log m = k(1; log m) − κ_1(log m)

8 / 39

Page 14:

Entropy and cumulants

Computation

L(m) = log E e^{log m} − E log m = k(1; log m) − κ_1(log m)

Intuition

L(m) = κ_2(log m)/2! + κ_3(log m)/3! + κ_4(log m)/4! + · · ·

where κ_2(log m)/2! is the (log)normal term and the remaining terms are the high-order cumulants

8 / 39
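
A quick numerical check of this expansion, for a random variable x with known cumulants (a gamma distribution is assumed purely for illustration: k(s; x) = −a log(1 − cs) and κ_j(x) = a c^j (j−1)!).

from math import factorial, log

# Verify that k(1; x) - kappa_1(x) equals kappa_2/2! + kappa_3/3! + kappa_4/4! + ...
a, c = 2.0, 0.1                                   # illustrative gamma shape and scale
exact = -a * log(1 - c) - a * c                   # k(1; x) - kappa_1(x)
kappa = lambda j: a * c**j * factorial(j - 1)     # j-th cumulant of the gamma
normal_term = kappa(2) / factorial(2)
high_order = sum(kappa(j) / factorial(j) for j in range(3, 15))
print(exact, normal_term, normal_term + high_order)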

Page 15:

Barro’s disasters

Consumption growth, i.i.d.

log g_{t+1} = w_{t+1} + z_{t+1}

w_{t+1} ∼ N(µ, v)

z_{t+1} | j ∼ N(jθ, jδ²)

j ≥ 0 has probability e^{−h} h^j / j!

Pricing kernel and entropy

log m_{t+1} = log β + (α−1) log g_{t+1}

L(m) = log E e^{log m} − E log m
     = k(α−1; log g) − (α−1) κ_1(log g)

(the two terms come from k(1; log β + (α−1) log g_{t+1}) and κ_1(log β + (α−1) log g_{t+1}); the log β pieces cancel)

9 / 39

Page 16:

Barro’s disasters

Consumption growth, i.i.d.

log g_{t+1} = w_{t+1} + z_{t+1}

w_{t+1} ∼ N(µ, v)

z_{t+1} | j ∼ N(jθ, jδ²)

j ≥ 0 has probability e^{−h} h^j / j!

Pricing kernel and entropy

log m_{t+1} = log β + (α−1) log g_{t+1}

L(m) = log E e^{log m} − E log m
     = k(α−1; log g) − (α−1) κ_1(log g)
     = (α−1)² v/2 + h( e^{(α−1)θ + (α−1)²δ²/2} − (α−1)θ − 1 )

(the second line's terms come from k(1; log β + (α−1) log g_{t+1}) and κ_1(log β + (α−1) log g_{t+1}); the log β pieces cancel)

9 / 39
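
A short sketch that evaluates this closed form and separates the normal term from the disaster term. The parameter values are illustrative placeholders, not the calibration behind the figures that follow.

import numpy as np

# L(m) = (alpha-1)^2 v/2 + h*(exp((alpha-1)*theta + (alpha-1)^2*delta^2/2) - (alpha-1)*theta - 1)
def entropy_disasters(alpha, v, h, theta, delta):
    a = alpha - 1.0
    normal = a**2 * v / 2.0
    jumps = h * (np.exp(a * theta + a**2 * delta**2 / 2.0) - a * theta - 1.0)
    return normal, jumps

# illustrative placeholder parameters
for alpha in (-1.0, -4.0, -9.0):
    normal, jumps = entropy_disasters(alpha, v=0.035**2, h=0.01, theta=-0.30, delta=0.15)
    print(f"alpha = {alpha:5.1f}  normal = {normal:.4f}  disasters = {jumps:.4f}  total = {normal + jumps:.4f}")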

Page 17:

Entropy

[Figure: entropy of the pricing kernel, L(m), against risk aversion α (0 to 12); shown: Alvarez-Jermann lower bound and the normal case]

10 / 39

Page 18:

Entropy

[Figure: entropy of the pricing kernel, L(m), against risk aversion α (0 to 12); shown: Alvarez-Jermann lower bound, the normal case, and the disasters case]

10 / 39

Page 19:

Entropy bound, non-i.i.d. case

Entropy bound

L(m) ≥ E( log r^j − log r^1 ) + L(q^1), where L(q^1) is the non-i.i.d. piece

q^1 = price of a one-period riskless bond

Conditional entropy:

L_t(m_{t+1}) = log E_t m_{t+1} − E_t log m_{t+1}

Average conditional entropy (ACE)

L(m) = E L_t(m_{t+1}) + L( E_t(m_{t+1}) ) = E L_t(m_{t+1}) + L(q^1)

E L_t(m_{t+1}) ≥ E( log r^j − log r^1 )

11 / 39
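
A numerical check of the decomposition L(m) = E L_t(m_{t+1}) + L(q^1), using a loglinear normal kernel of the kind introduced on slide 14 below; the MA weights are arbitrary illustrative numbers.

import numpy as np

# log m_{t+1} = delta + sum_j a_j w_{t+1-j}; then q^1_t = E_t m_{t+1} is lognormal,
# E L_t(m_{t+1}) = a_0^2/2, and total entropy should equal a_0^2/2 + L(q^1).
rng = np.random.default_rng(1)
delta = -0.01
a = np.array([0.10, -0.02, -0.015, -0.010, -0.005])    # illustrative MA weights
T, J = 1_000_000, len(a)
w = rng.standard_normal(T + J)
W = np.stack([w[J - j: T + J - j] for j in range(J)])   # row j holds w lagged j periods
log_m = delta + a @ W                                   # log m_{t+1}
log_q1 = delta + a[1:] @ W[1:] + a[0]**2 / 2            # log E_t m_{t+1}

L = lambda x: np.log(np.mean(np.exp(x))) - np.mean(x)   # entropy of exp(x)
print(L(log_m), a[0]**2 / 2 + L(log_q1))                # the two should be close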

Page 20:

Entropy and the Yield Curve

There is a tight connection between the dynamic properties of the pricing kernel and the yield curve

One should be able to use the yield curve as an additional information source about the entropy of the pricing kernel

We illustrate both points using the Vasicek model and then extend to the representative agent models

12 / 39

Page 21:

Multi-period ACE

One period:

L_t(m_{t+1}) = log E_t m_{t+1} − E_t log m_{t+1} = −y^1_t − E_t log m_{t+1}

E L_t(m_{t+1}) = −E y^1 − E log m.

Two periods:

L_t(m_{t+1} m_{t+2}) = log E_t(m_{t+1} m_{t+2}) − E_t log(m_{t+1} m_{t+2})

E L_t(m_{t+1} m_{t+2}) = −2 E y^2 − 2 E log m

2 E L_t(m_{t+1}) − E L_t(m_{t+1} m_{t+2}) = 2 E( y^2 − y^1 )

n periods:

E L_t(m_{t+1}) − E L_t(m_{t,t+n})/n = E( y^n − y^1 ),

where m_{t,t+n} = Π_{j=1}^n m_{t+j}.

13 / 39

Page 22:

Loglinear pricing kernel

Let the pricing kernel be

log m_t = δ + Σ_{j=0}^∞ a_j w_{t−j} + b u_t = δ + a(L) w_t + b u_t

with {w_t} ∼ NID(0,1) and {u_t} ∼ NID(0,1).

The serial covariance of order k is

cov(log m_t, log m_{t−k}) = Σ_{j=0}^∞ a_j a_{j+k}.

ACE (with A_n = Σ_{j=0}^n a_j)

E L_t(m_{t+1}) = L_t(m_{t+1}) = (a_0² + b²)/2 = (A_0² + b²)/2,

E L_t(m_{t,t+n}) = L_t(m_{t,t+n}) = (1/2)( Σ_{j=0}^{n−1} A_j² + (n−1) b² ).

14 / 39

Page 23:

Vasicek

ARMA(1,1) kernel

log m_{t+1} = δ(1−ϕ) + ϕ log m_t + v^{1/2} w_{t+1} + θ v^{1/2} w_t,

a_0 = v^{1/2},  a_1 = (θ+ϕ) v^{1/2},  a_{j+1} = ϕ a_j for j ≥ 1,

A_n = a_0 + a_1 (1−ϕ^n)/(1−ϕ)

Vasicek interest rate

r_t = −L_t(m_{t+1}) − E_t(log m_{t+1})
    = −(δ + v/2)(1−ϕ) + ϕ r_{t−1} + (θ+ϕ) v^{1/2} w_t

15 / 39
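
A sketch of the Vasicek calculations: build the weights a_j and partial sums A_n as above, then use the relation on slide 13, E(y^n − y^1) = E L_t(m_{t+1}) − E L_t(m_{t,t+n})/n, to trace out a mean term spread. The parameter values are illustrative placeholders, not the calibration behind the figure on the next page.

import numpy as np

# ARMA(1,1) kernel: a_0 = v^(1/2), a_1 = (theta + phi) v^(1/2), a_{j+1} = phi a_j.
phi, theta, v = 0.85, -0.90, 0.09**2        # illustrative monthly values
N = 120
a = np.empty(N + 1)
a[0] = np.sqrt(v)
a[1] = (theta + phi) * np.sqrt(v)
for j in range(1, N):
    a[j + 1] = phi * a[j]
A = np.cumsum(a)                            # A_n = a_0 + ... + a_n

ace_1 = A[0]**2 / 2                         # E L_t(m_{t+1})
ace_n = np.array([A[:n].dot(A[:n]) / (2 * n) for n in range(1, N + 1)])   # E L_t(m_{t,t+n})/n with b = 0
spread = ace_1 - ace_n                      # E(y^n - y^1) per period
print("E(y^120 - y^1) in annual %:", 1200 * spread[-1])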

Page 24:

Multi-Period Entropy

[Figure, four panels: a_j for j = 0, 1, ...; a_j for j = 1, 2, ... (scale ×10⁻⁴); A_j against maturity in months; E(y^n − y^1) in annual % against maturity in months]

16 / 39

Page 25:

Advantages of average conditional entropy (ACE)

Transparent lower bound: expected excess return (in logs) – helps with quantitative assessment

Alternatively, ACE measures the highest risk premium in the economy – helps with qualitative assessment

A tight link between the one-period and n-period ACEs: the term spread

Conditional entropy is easy to compute; to compute ACE, evaluate conditional entropy at steady-state values (for loglinear models or approximations)

ACE is comparable across models with different state variables, preferences, etc.

17 / 39

Page 26:

Key models

External habit

Recursive preferences

Heterogeneous preferences

....

18 / 39

Page 27:

External habit

Equations (Abel / Campbell-Cochrane / Chan-Kogan / Heaton)

U_t = Σ_{j=0}^∞ β^j u(c_{t+j}, x_{t+j}),   u(c_t, x_t) = ( f(c_t, x_t)^α − 1 )/α.

Habit is a function of past consumption: x_t = h(c_{t−1}), e.g., Abel: x_t = c_{t−1}.

Dependence on habit

Abel: f(c_t, x_t) = c_t / x_t

Campbell-Cochrane: f(c_t, x_t) = c_t − x_t

Pricing kernel:

m_{t+1} = β u_c(c_{t+1}, x_{t+1}) / u_c(c_t, x_t)
        = β [ f(c_{t+1}, x_{t+1}) / f(c_t, x_t) ]^{α−1} [ f_c(c_{t+1}, x_{t+1}) / f_c(c_t, x_t) ]

19 / 39

Page 28:

Example 1: Abel (1990) + Chan and Kogan (2002)

Preferences: f(c_t, x_t) = c_t / x_t

Chan and Kogan have extended the habit formulation:

log x_{t+1} = (1−φ) Σ_{i=0}^∞ φ^i log c_{t−i} = φ log x_t + (1−φ) log c_t

Relative (log) consumption

log s_t ≡ log(c_t / x_t) = φ log s_{t−1} + log g_t

Pricing kernel:

log m_{t+1} = log β + (α−1) log g_{t+1} − α log(x_{t+1}/x_t)
            = log β + (α−1) log g_{t+1} − α(1−φ) log s_t

20 / 39

Page 29:

ACE: Abel-Chan-Kogan

Pricing kernel:

log m_{t+1} = log β + (α−1) log g_{t+1} − α(1−φ) log s_t

Conditional entropy: L_t(m_{t+1}) = log E_t e^{log m_{t+1}} − E_t log m_{t+1}

log E_t e^{log m_{t+1}} = log β + k(α−1; log g) − α(1−φ) log s_t

E_t log m_{t+1} = log β + (α−1) κ_1(log g) − α(1−φ) log s_t

L_t(m_{t+1}) = k(α−1; log g) − (α−1) κ_1(log g)

21 / 39

Page 30:

ACE: Abel-Chan-Kogan

Pricing kernel:

log m_{t+1} = log β + (α−1) log g_{t+1} − α(1−φ) log s_t

Conditional entropy: L_t(m_{t+1}) = log E_t e^{log m_{t+1}} − E_t log m_{t+1}

log E_t e^{log m_{t+1}} = log β + k(α−1; log g) − α(1−φ) log s_t

E_t log m_{t+1} = log β + (α−1) κ_1(log g) − α(1−φ) log s_t

L_t(m_{t+1}) = k(α−1; log g) − (α−1) κ_1(log g)

ACE: E L_t(m_{t+1}) = k(α−1; log g) − (α−1) κ_1(log g)

It is exactly the same as in the CRRA case

21 / 39
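
A Monte Carlo check of this point: in the kernel above the habit term −α(1−φ) log s_t is known at t, so conditional entropy is the same for any value of log s_t. The i.i.d. normal log g and all parameter values are assumptions for illustration only.

import numpy as np

# log m_{t+1} = log(beta) + (alpha - 1) log g_{t+1} - alpha (1 - phi) log s_t
rng = np.random.default_rng(0)
beta, alpha, phi = 0.98, -1.0, 0.90
mu, sigma = 0.02, 0.035
log_g = rng.normal(mu, sigma, size=2_000_000)

def cond_entropy(log_s):
    log_m = np.log(beta) + (alpha - 1) * log_g - alpha * (1 - phi) * log_s
    return np.log(np.mean(np.exp(log_m))) - np.mean(log_m)

# both values are approximately (alpha - 1)^2 sigma^2 / 2, independent of log s_t
print(cond_entropy(0.0), cond_entropy(-0.5))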

Page 31:

Example 2: Campbell and Cochrane (1999)

Preferences: f(c_t, x_t) = c_t − x_t

Campbell and Cochrane specify the (log) surplus consumption ratio directly:

log s_t = log[ (c_t − x_t)/c_t ]

log s_t − log s̄ = φ( log s_{t−1} − log s̄ ) + λ(log s_{t−1})( log g_t − κ_1(log g) ).

22 / 39

Page 32:

Example 2: Campbell and Cochrane (1999)

Preferences: f(c_t, x_t) = c_t − x_t

Campbell and Cochrane specify the (log) surplus consumption ratio directly:

log s_t = log[ (c_t − x_t)/c_t ]

log s_t − log s̄ = φ( log s_{t−1} − log s̄ ) + λ(log s_{t−1})( log g_t − κ_1(log g) ).

Compare to relative (log) consumption in Chan and Kogan

log s_t ≡ log(c_t / x_t) = φ log s_{t−1} + log g_t

22 / 39

Page 33:

Example 2: Campbell and Cochrane (1999)

Preferences: f(c_t, x_t) = c_t − x_t

Campbell and Cochrane specify the (log) surplus consumption ratio directly:

log s_t = log[ (c_t − x_t)/c_t ]

log s_t − log s̄ = φ( log s_{t−1} − log s̄ ) + λ(log s_{t−1})( log g_t − κ_1(log g) ).

Pricing kernel:

log m_{t+1} = log β + (α−1) log g_{t+1} + (α−1) log(s_{t+1}/s_t)
            = log β − (α−1) λ(log s_t) κ_1(log g)
              + (α−1)( 1 + λ(log s_t) ) log g_{t+1}
              + (α−1)(φ−1)( log s_t − log s̄ )

23 / 39

Page 34:

Example 2: Campbell and Cochrane (1999)

Preferences: f(c_t, x_t) = c_t − x_t

Pricing kernel:

log m_{t+1} = log β + (α−1) log g_{t+1} + (α−1) log(s_{t+1}/s_t)
            = log β − (α−1) λ(log s_t) κ_1(log g)
              + (α−1)( 1 + λ(log s_t) ) log g_{t+1}
              + (α−1)(φ−1)( log s_t − log s̄ )

Conditional entropy:

L_t(m_{t+1}) = k( (α−1)(1 + λ(log s_t)); log g ) − (α−1)(1 + λ(log s_t)) κ_1(log g)

24 / 39

Page 35:

Additional assumptions

To compute ACE, we have to specify λ and log g

Conditional volatility of the consumption surplus ratio

λ(log s_t) = (1/σ) [ (1−φ−b/(1−α)) / (1−α) ]^{1/2} [ 1 − 2(log s_t − log s̄) ]^{1/2} − 1

In discrete time, there is an upper bound on log s_t to ensure positivity of λ

In continuous time, this bound never binds, so we will ignore it

In Campbell and Cochrane, b = 0 to ensure a constant log r^1

Consumption growth is i.i.d.

Case 1: log g_{t+1} = w_{t+1} ∼ N(µ, σ²)

Case 2: log g_{t+1} = w_{t+1} − z_{t+1},  z_{t+1} | j ∼ Gamma(j, θ^{−1}),  j̄ = h

25 / 39

Page 36:

ACE: Campbell and Cochrane, Case 1

Conditional entropy:

L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2 + b( log s_t − log s̄ )

ACE: E L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2

26 / 39

Page 37:

ACE: Campbell and Cochrane, Case 1

Conditional entropy:

L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2 + b( log s_t − log s̄ )

ACE: E L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2

All authors use α = −1

ACE for different calibrations (quarterly)

26 / 39

Page 38:

ACE: Campbell and Cochrane, Case 1

Conditional entropy:

L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2 + b( log s_t − log s̄ )

ACE: E L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2

All authors use α = −1

ACE for different calibrations (quarterly)

Campbell and Cochrane (1999): φ = 0.97, b = 0;  E L_t(m_{t+1}) = 0.0300 (0.120 annual)

26 / 39

Page 39:

ACE: Campbell and Cochrane, Case 1

Conditional entropy:

L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2 + b( log s_t − log s̄ )

ACE: E L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2

All authors use α = −1

ACE for different calibrations (quarterly)

Campbell and Cochrane (1999): φ = 0.97, b = 0;  E L_t(m_{t+1}) = 0.0300 (0.120 annual)

Wachter (2006): φ = 0.97, b = 0.011;  E L_t(m_{t+1}) = 0.0245 (0.098 annual)

26 / 39

Page 40:

ACE: Campbell and Cochrane, Case 1

Conditional entropy:

L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2 + b( log s_t − log s̄ )

ACE: E L_t(m_{t+1}) = ( (α−1)(φ−1) − b )/2

All authors use α = −1

ACE for different calibrations (quarterly)

Campbell and Cochrane (1999): φ = 0.97, b = 0;  E L_t(m_{t+1}) = 0.0300 (0.120 annual)

Wachter (2006): φ = 0.97, b = 0.011;  E L_t(m_{t+1}) = 0.0245 (0.098 annual)

Verdelhan (2009): φ = 0.99, b = −0.011;  E L_t(m_{t+1}) = 0.0155 (0.062 annual)

26 / 39
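
The quarterly numbers above follow directly from E L_t(m_{t+1}) = ((α−1)(φ−1) − b)/2 with α = −1; a quick check in Python:

alpha = -1.0
calibrations = {
    "Campbell-Cochrane (1999)": (0.97,  0.000),
    "Wachter (2006)":           (0.97,  0.011),
    "Verdelhan (2009)":         (0.99, -0.011),
}
for name, (phi, b) in calibrations.items():
    ace = ((alpha - 1) * (phi - 1) - b) / 2
    print(f"{name:26s} quarterly {ace:.4f}  annual {4 * ace:.3f}")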

Page 41:

ACE: Campbell and Cochrane, Case 2

Conditional entropy:

L_t(m_{t+1}) = (α−1)(1 + λ(log s_t)) hθ + ( [1 + (α−1)(1 + λ(log s_t))θ]^{−1} − 1 ) h
             + ( (α−1)(φ−1) − b )/2 + b( log s_t − log s̄ )

ACE: use log-linearization around log s̄

E L_t(m_{t+1}) = h d²/(1+d) + ( (α−1)(φ−1) − b )/2, where the second term is the no-jump case

d = (θ/σ) [ (α−1)(φ−1) − b ]^{1/2}

27 / 39

Page 42:

ACE: Campbell and Cochrane, Case 2

Conditional entropy:

L_t(m_{t+1}) = (α−1)(1 + λ(log s_t)) hθ + ( [1 + (α−1)(1 + λ(log s_t))θ]^{−1} − 1 ) h
             + ( (α−1)(φ−1) − b )/2 + b( log s_t − log s̄ )

ACE: use log-linearization around log s̄

E L_t(m_{t+1}) = h d²/(1+d) + ( (α−1)(φ−1) − b )/2, where the second term is the no-jump case

d = (θ/σ) [ (α−1)(φ−1) − b ]^{1/2}

Calibration as above + volatility of log g + jump parameters: σ² = (0.035)²/4 − hθ²

Barro-Nakamura-Steinsson-Ursua: h = 0.01/4, θ = 0.15

Backus-Chernov-Martin: h = 1.3864/4, θ = 0.0229

27 / 39

Page 43:

ACE: Campbell and Cochrane, Case 2

Calibration     ACE       ACE (Case 1)   ACE jumps
CC + BNSU       0.0341    0.0300         0.0041
W  + BNSU       0.0281    0.0245         0.0036
V  + BNSU       0.0181    0.0155         0.0026
CC + BCM        0.0883    0.0300         0.0583
W  + BCM        0.0737    0.0245         0.0492
V  + BCM        0.0487    0.0155         0.0332

28 / 39
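
The table can be reproduced from the formulas on the previous slide: σ² = (0.035)²/4 − hθ², d = (θ/σ)[(α−1)(φ−1) − b]^{1/2}, and ACE = h d²/(1+d) + ((α−1)(φ−1) − b)/2, with α = −1 (quarterly). A sketch:

import numpy as np

alpha = -1.0
habits = {"CC": (0.97, 0.000), "W": (0.97, 0.011), "V": (0.99, -0.011)}   # (phi, b)
jumps = {"BNSU": (0.01 / 4, 0.15), "BCM": (1.3864 / 4, 0.0229)}           # (h, theta)

for jname, (h, theta) in jumps.items():
    sigma = np.sqrt(0.035**2 / 4 - h * theta**2)          # keeps the total variance of log g fixed
    for hname, (phi, b) in habits.items():
        nojump = ((alpha - 1) * (phi - 1) - b) / 2
        d = (theta / sigma) * np.sqrt((alpha - 1) * (phi - 1) - b)
        jump = h * d**2 / (1 + d)
        print(f"{hname} + {jname}: ACE = {nojump + jump:.4f}  no jumps = {nojump:.4f}  jumps = {jump:.4f}")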

Page 44:

Recursive preferences

Equations (Kreps-Porteus/Epstein-Zin/Weil)

U_t = [ (1−β) c_t^ρ + β µ_t(U_{t+1})^ρ ]^{1/ρ}

µ_t(U_{t+1}) = ( E_t U_{t+1}^α )^{1/α}

EIS = 1/(1−ρ)

CRRA = 1−α

α = ρ ⇒ additive preferences

29 / 39

Page 45:

Recursive preferences: pricing kernel

Scale the problem by c_t (u_t = U_t/c_t, g_{t+1} = c_{t+1}/c_t)

u_t = [ (1−β) + β µ_t(g_{t+1} u_{t+1})^ρ ]^{1/ρ}

Pricing kernel (MRS)

m_{t+1} = β ( c_{t+1}/c_t )^{ρ−1} ( U_{t+1} / µ_t(U_{t+1}) )^{α−ρ}
        = β g_{t+1}^{ρ−1} ( g_{t+1} u_{t+1} / µ_t(g_{t+1} u_{t+1}) )^{α−ρ}

where g_{t+1}^{ρ−1} is the short-run risk term and the last factor is the long-run risk term

Note the role of recursive preferences: if α = ρ, the second term disappears and there is no role for predictable consumption growth or volatility (coming)

30 / 39

Page 46:

Loglinear approximation

Loglinear approximation

log u_t = ρ^{−1} log[ (1−β) + β µ_t(g_{t+1} u_{t+1})^ρ ]
        = ρ^{−1} log[ (1−β) + β e^{ρ log µ_t(g_{t+1} u_{t+1})} ]
        ≈ b_0 + b_1 log µ_t(g_{t+1} u_{t+1}).

Exact if ρ = 0: b_0 = 0, b_1 = β

Otherwise, b_1 = β e^{ρ log µ} / [ (1−β) + β e^{ρ log µ} ]

Important: in general, b1 reflects preferences

Solve by guess and verify

31 / 39
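
A small sketch of guess-and-verify in the simplest case: i.i.d. lognormal consumption growth, so u_t is a constant u, µ_t(g_{t+1} u_{t+1}) = u exp(ḡ + ασ²/2), and b_1 follows from the formula above. Parameter values are illustrative assumptions, not a calibration from these slides.

import numpy as np

beta, rho, alpha = 0.998, 1.0 / 3.0, -9.0        # illustrative preference parameters
g_bar, sigma = 0.0015, 0.01                      # illustrative i.i.d. log g ~ N(g_bar, sigma^2)

u = 1.0
for _ in range(20_000):                          # fixed point of u = [(1-beta) + beta mu(gu)^rho]^(1/rho)
    mu = u * np.exp(g_bar + alpha * sigma**2 / 2)   # certainty equivalent of g_{t+1} u_{t+1}
    u = ((1 - beta) + beta * mu**rho) ** (1 / rho)

log_mu = np.log(u) + g_bar + alpha * sigma**2 / 2
b1 = beta * np.exp(rho * log_mu) / ((1 - beta) + beta * np.exp(rho * log_mu))
print(u, b1)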

Page 47:

Data-generating processes

Use an MA representation for the variables, e.g.,

x_t = x + γ(L) w_t.

An important term that shows up in many expressions:

γ(b_1) ≡ Σ_{i=0}^∞ b_1^i γ_i.

This term captures the interaction of a variable's persistence γ and preferences b_1.

32 / 39

Page 48:

Example 1: Long-Run Risk and SV

Consumption growth

log g_t = g + γ_w(L) v_{t−1}^{1/2} w_{1t}

v_t = v + ν_w(L) w_{2t}

(w_{1t}, w_{2t}) ∼ NID(0, I)

Guess value function

log u_t = u + ω_g(L) v_{t−1}^{1/2} w_{1t} + ω_v(L) w_{2t}

Solution includes

ω_{g,0} + γ_{w,0} = γ_w(b_1)

ω_{v,0} = b_1 (α/2) γ_w(b_1)² ν_w(b_1)

33 / 39

Page 49:

ACE: Long-Run Risk and SV

Conditional entropy

L_t(m_{t+1}) = [ (ρ−1) γ_{w,0} + (α−ρ) γ_w(b_1) ]² v_t / 2
             + (α−ρ)² [ b_1 (α/2) γ_w(b_1)² ν_w(b_1) ]² / 2

Two calibrations (Bansal and Yaron, 2004; Bansal, Kiku, and Yaron, 2009)

ν_{w,j} = σ_v ν^j,  ν_BY = 0.985,  ν_BKY = 0.999

ACE

BY: 0.0715 + 0.0432 = 0.1147 (1.38 annual)

34 / 39

Page 50:

ACE: Long-Run Risk and SV

Conditional entropy

L_t(m_{t+1}) = [ (ρ−1) γ_{w,0} + (α−ρ) γ_w(b_1) ]² v_t / 2
             + (α−ρ)² [ b_1 (α/2) γ_w(b_1)² ν_w(b_1) ]² / 2

Two calibrations (Bansal and Yaron, 2004; Bansal, Kiku, and Yaron, 2009)

ν_{w,j} = σ_v ν^j,  ν_BY = 0.985,  ν_BKY = 0.999

ACE

BY: 0.0715 + 0.0432 = 0.1147 (1.38 annual)

BKY: 0.0592 + 0.1912 = 0.2504 (3.00 annual)

34 / 39

Page 51:

Vasicek vs Bansal-Yaron

Both models have a loglinear pricing kernel (BY with constant volatility)

Vasicek:

log m_{t+1} = δ(1−ϕ) + ϕ log m_t + v^{1/2} w_{t+1} + θ v^{1/2} w_t,

a_0 = v^{1/2},  a_1 = (θ+ϕ) v^{1/2},  a_{j+1} = ϕ a_j for j ≥ 1,

A_n = a_0 + a_1 (1−ϕ^n)/(1−ϕ)

Bansal-Yaron (constant volatility):

a_0 = [ (α−ρ) γ_w(b_1) + (ρ−1) γ_{w,0} ] v^{1/2},

a_{j+1} = (ρ−1) γ_{w,j+1} = (ρ−1) δ γ^j = γ a_j,

A_n = a_0 + (ρ−1) δ (1−γ^n)/(1−γ)

35 / 39

Page 52:

Multi-Period Entropy

[Figure, four panels comparing the two kernels: a_j for j = 0, 1, ...; a_j for j = 1, 2, ... (scale ×10⁻⁴); A_j against maturity in months; E(y^n − y^1) in annual % against maturity in months]

36 / 39

Page 53:

Example 2: Disasters

Consumption growth

log g_t = g + v^{1/2} w_{1t} + γ_{z,0} z_t

h_t = h + ω_w(L) w_{3t}

(w_{1t}, w_{3t}) ∼ NID(0, I)

z_t | j ∼ N(jθ, j)

j ≥ 0 has jump intensity h_t

Guess value function

log u_t = u + ω_h(L) w_{3t}

Solution includes

ω_{h,0} = b_1 ω_w(b_1) ( e^{αγ_{z,0}θ + α²γ_{z,0}²/2} − αγ_{z,0}θ − 1 ) / α

37 / 39

Page 54:

ACE: Disasters

Conditional entropy

L_t(m_{t+1}) = (α−1)² v/2
             + ( e^{(α−1)γ_{z,0}θ + (α−1)²γ_{z,0}²/2} − (α−1)γ_{z,0}θ − 1 ) h_t
             + (α−ρ)² [ b_1 ω_w(b_1) ( e^{αγ_{z,0}θ + α²γ_{z,0}²/2} − 1 ) / α ]² / 2

Two calibrations (Bansal and Yaron, 2004; Wachter, 2009)

α_BY = −9, ρ_BY = 1/3;  α_W = −2, ρ_W = 0

ACE

W: 0.0002 + 0.0012 + 0.0019 = 0.0033 (0.04 annual)

38 / 39

Page 55:

ACE: Disasters

Conditional entropy

L_t(m_{t+1}) = (α−1)² v/2
             + ( e^{(α−1)γ_{z,0}θ + (α−1)²γ_{z,0}²/2} − (α−1)γ_{z,0}θ − 1 ) h_t
             + (α−ρ)² [ b_1 ω_w(b_1) ( e^{αγ_{z,0}θ + α²γ_{z,0}²/2} − 1 ) / α ]² / 2

Two calibrations (Bansal and Yaron, 2004; Wachter, 2009)

α_BY = −9, ρ_BY = 1/3;  α_W = −2, ρ_W = 0

ACE

W: 0.0002 + 0.0012 + 0.0019 = 0.0033 (0.04 annual)

BY: 0.0026 + 0.0810 + 2.6509 = 2.7345 (32.81 annual)

38 / 39

Page 56:

Conclusions

Time dependence via external habit

No time-dependence in consumption growth

Nevertheless: habit with varying volatility (Campbell and Cochrane, 1999) may have a substantial impact on the entropy of the pricing kernel

39 / 39

Page 57:

Conclusions

Time dependence via external habit

No time-dependence in consumption growth

Nevertheless: habit with varying volatility (Campbell and Cochrane, 1999) may have a substantial impact on the entropy of the pricing kernel

Time dependence via recursive preferences

Little time-dependence in the pricing kernel

Nevertheless: the interaction of (modest) dynamics in consumption growth and recursive preferences can have a substantial impact on the entropy of the pricing kernel

39 / 39

Page 58:

Conclusions

Time dependence via external habit

No time-dependence in consumption growth

Nevertheless: habit with varying volatility (Campbell and Cochrane, 1999) may have a substantial impact on the entropy of the pricing kernel

Time dependence via recursive preferences

Little time-dependence in the pricing kernel

Nevertheless: the interaction of (modest) dynamics in consumption growth and recursive preferences can have a substantial impact on the entropy of the pricing kernel

More broadly, entropy offers an intuitive way to assess complicated models with various forms of non-normalities

39 / 39