
Optimal Monetary Policy under Uncertainty in DSGE Models:

A Markov Jump-Linear-Quadratic Approach

Lars E.O. Svensson (Sveriges Riksbank)

Noah Williams (University of Wisconsin - Madison)

November 2009


Introduction

I have long been interested in the analysis of monetary policy under uncertainty. The problems arise from what we do not know; we must deal with the uncertainty from the base of what we do know. [...]

The Fed faces many uncertainties, and must adjust its one policy instrument to navigate as best it can this sea of uncertainty. Our fundamental principle is that we must use that one policy instrument to achieve long-run price stability. [...]

My bottom line is that market participants should concentrate on the fundamentals. If the bond traders can get it right, they’ll do most of the stabilization work for us, and we at the Fed can sit back and enjoy life.

William Poole (1998), “A Policymaker Confronts Uncertainty”


Overview

- Develop methods for policy analysis under uncertainty. Methods have broad potential applications.
- Consider optimal policy when policymakers don't observe the true economic structure and must learn from observations.
- Classic problem of learning and control: actions have an informational component. Motive to alter actions to mitigate future uncertainty ("experimentation").
- Unlike most previous literature, we consider forward-looking models. Particular focus on DSGE models.
- Issues:
  - How does uncertainty affect policy?
  - How does learning affect losses?
  - How does the experimentation motive affect policy and losses?


Some Related Literature

- This paper: application of our work in Svensson-Williams (2007-...)
- Aoki (1967), Chow (1973): multiplicative uncertainty in LQ model (only backward-looking/control case)
- Control theory: Costa-Fragoso-Marques (2005), others
- Recursive saddlepoint method: Marcet-Marimon (1998)
- Blake-Zampolli (2005), Zampolli (2005): similar observable-modes case, less general
- Wieland (2000, 2006), Beck and Wieland (2002): optimal experimentation with backward-looking models
- Cogley, Colacito, Sargent (2007): adaptive policy as approximation to Bayesian, expectational variables
- Tesfaselassie, Schaling, Eijffinger (2006), Ellison (2006): similar, less general


The Model

Standard linear rational expectations framework:

$$X_{t+1} = A_{11} X_t + A_{12} x_t + B_1 i_t + C_1 \varepsilon_{t+1}$$

$$\mathrm{E}_t H x_{t+1} = A_{21} X_t + A_{22} x_t + B_2 i_t + C_2 \varepsilon_t$$

- $X_t$ predetermined, $x_t$ forward-looking, $i_t$ CB instruments (controls)
- $\varepsilon_t$: i.i.d. shocks, $N(0, I)$
- Matrices can take $N_j$ different values in period $t$, corresponding to modes $j_t = 1, 2, \ldots, N_j$.
- Modes $j_t$ follow a Markov chain with transition matrix $P = [P_{jk}]$.


The Model

Markov jump-linear-quadratic framework:

$$X_{t+1} = A_{11,t+1} X_t + A_{12,t+1} x_t + B_{1,t+1} i_t + C_{1,t+1} \varepsilon_{t+1}$$

$$\mathrm{E}_t H_{t+1} x_{t+1} = A_{21,t} X_t + A_{22,t} x_t + B_{2,t} i_t + C_{2,t} \varepsilon_t$$

- $X_t$ predetermined, $x_t$ forward-looking, $i_t$ CB instruments (controls)
- $\varepsilon_t$: i.i.d. shocks, $N(0, I)$
- Matrices can take $N_j$ different values in period $t$, corresponding to modes $j_t = 1, 2, \ldots, N_j$.
- Modes $j_t$ follow a Markov chain with transition matrix $P = [P_{jk}]$.
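To make the timing concrete (the period-$t+1$ matrices are indexed by the realized mode $j_{t+1}$), here is a minimal simulation sketch for a purely backward-looking special case with no forward-looking block $x_t$; the feedback rule `F` and all numerical values are hypothetical placeholders, not the paper's calibration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mjlq(A, B, C, P, F, X0, j0, T):
    """Simulate X_{t+1} = A[j_{t+1}] X_t + B[j_{t+1}] i_t + C[j_{t+1}] eps_{t+1}
    with a mode-dependent feedback rule i_t = F[j_t] X_t.

    A, B, C, F are lists of mode-dependent matrices; P is the mode
    transition matrix. Purely backward-looking special case (no x_t block).
    """
    X = np.asarray(X0, dtype=float)
    j = j0
    path = np.empty((T + 1, X.size)); path[0] = X
    modes = np.empty(T + 1, dtype=int); modes[0] = j
    for t in range(T):
        i_t = F[j] @ X                      # policy reacts to the current mode
        j = rng.choice(len(P), p=P[j])      # draw j_{t+1} from row j of P
        eps = rng.standard_normal(C[j].shape[1])
        X = A[j] @ X + B[j] @ i_t + C[j] @ eps   # next-period matrices use j_{t+1}
        path[t + 1], modes[t + 1] = X, j
    return path, modes

# Hypothetical two-mode scalar example with a persistent chain:
A = [np.array([[0.9]]), np.array([[1.1]])]
B = [np.array([[0.5]])] * 2
C = [np.array([[0.3]])] * 2
F = [np.array([[-0.4]])] * 2
P = np.array([[0.98, 0.02], [0.02, 0.98]])
path, modes = simulate_mjlq(A, B, C, P, F, X0=[1.0], j0=0, T=200)
```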


Beliefs and Loss

- Central bank (and aggregate private sector) observe $X_t$ and $i_t$; they do not (in general) observe $j_t$ or $\varepsilon_t$.
- $p_{t|t}$: perceived probabilities of the modes in period $t$
- Prediction equation: $p_{t+1|t} = P' p_{t|t}$

CB intertemporal loss function:

$$\mathrm{E}_t \sum_{\tau=0}^{\infty} \delta^{\tau} L_{t+\tau} \qquad (1)$$

Period loss:

$$L_t \equiv \frac{1}{2}
\begin{bmatrix} X_t \\ x_t \\ i_t \end{bmatrix}'
W_t
\begin{bmatrix} X_t \\ x_t \\ i_t \end{bmatrix} \qquad (2)$$
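A minimal sketch of the prediction equation and the period loss (2), assuming the mode-dependent weight matrices are collected in a hypothetical list `W_modes`:

```python
import numpy as np

def predict_beliefs(P, p):
    """Prediction step: p_{t+1|t} = P' p_{t|t}."""
    return P.T @ p

def period_loss(W, X, x, i):
    """Quadratic period loss (2): L_t = 0.5 * s' W s with s = (X_t, x_t, i_t)."""
    s = np.concatenate([X, x, i])
    return 0.5 * s @ W @ s

def expected_period_loss(W_modes, p, X, x, i):
    """E_t L_t when the weight matrix W_t is mode-dependent and the mode is
    unobserved: probability-weighted average over the modes."""
    return sum(pj * period_loss(Wj, X, x, i) for pj, Wj in zip(p, W_modes))
```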


General and Tractable Way to Model Uncertainty

A large variety of uncertainty configurations; approximates most (all?) relevant kinds of model uncertainty:

- Regime-switching models
- i.i.d. and serially correlated random model coefficients (generalized Brainard-type uncertainty)
- Different structural models
  - Different variables, different numbers of leads and lags
  - Backward- or forward-looking models
    - Particular variable
    - Private-sector expectations
- Ambiguity aversion, robust control ($P \in \mathcal{P}$)
- Different forms of CB judgment (for instance, perceived uncertainty)
- And many more...


Approximate MJLQ Models

- MJLQ models provide convenient approximations for nonlinear DSGE models.
- Underlying function of interest: $f(X, \theta)$, where $X$ is continuous and $\theta \in \{\theta_1, \ldots, \theta_{n_j}\}$.
- Taylor approximation around $(\bar{X}, \bar{\theta})$:

$$f(X, \theta_j) \approx f(\bar{X}, \bar{\theta}) + f_X(\bar{X}, \bar{\theta})(X - \bar{X}) + f_\theta(\bar{X}, \bar{\theta})(\theta_j - \bar{\theta}).$$

- Valid as $X \to \bar{X}$ and $\theta \to \bar{\theta}$: small shocks to $X$ and $\theta$.
- MJLQ approximation around $(\bar{X}_j, \theta_j)$:

$$f(X, \theta_j) \approx f(\bar{X}_j, \theta_j) + f_X(\bar{X}_j, \theta_j)(X - \bar{X}_j).$$

- Valid as $X \to \bar{X}_j$: small shocks to $X$, slow variation in $\theta$ ($P \to I$).
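A scalar numerical illustration of the two expansions, under an assumed toy function $f(X, \theta) = \theta \log X$ (purely hypothetical, not from the paper); the mode-by-mode MJLQ expansion is exact in $\theta_j$, so its error comes only from movements in $X$:

```python
import numpy as np

# Hypothetical scalar illustration: f(X, theta) = theta * log(X).
f   = lambda X, th: th * np.log(X)
fX  = lambda X, th: th / X           # partial derivative in X
fth = lambda X, th: np.log(X)        # partial derivative in theta

thetas = np.array([0.8, 1.2])        # the two mode values of theta
Xbar, thbar = 2.0, thetas.mean()     # common expansion point (Xbar, thbar)

def taylor_common(X, j):
    """Single expansion around (Xbar, thbar), linear in both X and theta_j."""
    return (f(Xbar, thbar) + fX(Xbar, thbar) * (X - Xbar)
            + fth(Xbar, thbar) * (thetas[j] - thbar))

def mjlq_approx(X, j, Xbar_j=2.0):
    """Mode-by-mode expansion around (Xbar_j, theta_j): exact in theta_j,
    so only small shocks to X (and slow mode variation) are required."""
    return f(Xbar_j, thetas[j]) + fX(Xbar_j, thetas[j]) * (X - Xbar_j)

X = 2.1
for j in range(2):
    print(j, f(X, thetas[j]), taylor_common(X, j), mjlq_approx(X, j))
```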


Four Different Cases

In each case we assume commitment under a timeless perspective. Follow Marcet-Marimon (1999): convert to a saddlepoint/min-max problem. The extended state vector includes lagged Lagrange multipliers, and the controls include the current multipliers.

1. Observable modes (OBS): current mode known, uncertainty about future modes.
2. Optimal policy with no learning (NL): naive updating equation $p_{t+1|t+1} = P' p_{t|t}$.
3. Adaptive optimal policy (AOP): policy as in NL, with Bayesian updating of $p_{t+1|t+1}$ each period; no experimentation.
4. Bayesian optimal policy (BOP): optimal policy taking Bayesian updating into account; optimal experimentation.


1. Observable modes (OBS)

- Policymakers (and public) observe $j_t$ and know $j_{t+1}$ is drawn according to $P$.
- Analogue of regime-switching models in econometrics.
- Law of motion for $X_t$ linear, preferences quadratic in $X_t$, conditional on modes.
- Solution linear in $X_t$ for given $j$:

$$i_t = F_{j_t} X_t,$$

- Value function quadratic in $X_t$ for given $j$:

$$V(X_t, j_t) \equiv \frac{1}{2} X_t' V_{XX,j_t} X_t + w_{j_t}.$$
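For intuition, a minimal sketch of how $F_{j}$ and $V_{XX,j}$ can be computed in a purely backward-looking special case (no forward-looking block, so the recursive saddlepoint machinery is not needed): iterate mode-coupled Riccati equations, with `Q` and `R` standing in for the mode-dependent blocks of $W$. All names are ours, not the paper's.

```python
import numpy as np

def solve_obs_mjlq(A, B, C, Q, R, P, delta, iters=2000, tol=1e-10):
    """Coupled Riccati iteration for the observed-modes, backward-looking case.

    Model (special case of the setup above, with no forward-looking block):
        X_{t+1} = A[j_{t+1}] X_t + B[j_{t+1}] i_t + C[j_{t+1}] eps_{t+1},
        L_t = 0.5 * (X_t' Q[j_t] X_t + i_t' R[j_t] i_t).
    Returns mode-dependent feedback F[j] (i_t = F[j] X_t), value V[j], and w[j].
    """
    n, nX = len(A), A[0].shape[0]
    V = [np.zeros((nX, nX)) for _ in range(n)]
    w = np.zeros(n)
    F = [None] * n
    for _ in range(iters):
        V_new, w_new = [], np.zeros(n)
        for j in range(n):
            # Expected continuation terms, weighting next-period matrices by P[j, .]
            Qb = Q[j] + delta * sum(P[j, k] * A[k].T @ V[k] @ A[k] for k in range(n))
            Sb = delta * sum(P[j, k] * B[k].T @ V[k] @ A[k] for k in range(n))
            Rb = R[j] + delta * sum(P[j, k] * B[k].T @ V[k] @ B[k] for k in range(n))
            F[j] = -np.linalg.solve(Rb, Sb)          # optimal feedback for mode j
            V_new.append(Qb + Sb.T @ F[j])           # = Qb - Sb' Rb^{-1} Sb
            w_new[j] = delta * sum(P[j, k] * (0.5 * np.trace(C[k].T @ V[k] @ C[k]) + w[k])
                                   for k in range(n))
        converged = max(np.max(np.abs(Vn - Vo)) for Vn, Vo in zip(V_new, V)) < tol
        V, w = V_new, w_new
        if converged:
            break
    return F, V, w
```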


2. Optimal policy with no learning (NL)

- Interpretation: policymakers forget past $X_{t-1}, \ldots$ in period $t$ when choosing $i_t$.
- Allows for persistence of modes, but means beliefs don't satisfy the law of iterated expectations. Requires a slightly more complicated Bellman equation.
- Law of motion linear in $X_t$, dual preferences quadratic in $X_t$; $p_{t|t}$ exogenous.
- Solution linear in $X_t$ for given $p_{t|t}$:

$$i_t = F_i(p_{t|t}) X_t,$$

- Value function quadratic in $X_t$ for given $p_{t|t}$:

$$V(X_t, p_{t|t}) \equiv \frac{1}{2} X_t' V_{XX}(p_{t|t}) X_t + w(p_{t|t}).$$


3. Adaptive optimal policy (AOP)

- Similar to adaptive learning, anticipated utility, passive learning.
- Policy as under NL (disregarding Bayesian updating): $i_t = i(X_t, p_{t|t})$, $x_t = z(X_t, p_{t|t})$.

- Transition equation for $p_{t+1|t+1}$ from Bayes rule:

$$p_{t+1|t+1} = Q(X_t, p_{t|t}, x_t, i_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1}).$$

- Nonlinear, interacts with $X_t$. The true AOP value function is not quadratic in $X_t$.
- Evaluation of the loss is more complex numerically, but the recursive implementation is simple.
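A minimal sketch of one Bayesian updating step, assuming the backward-looking case in which $X_{t+1}$ is Gaussian conditional on the realized mode $j_{t+1}$; `means` and `covs` are the one-step-ahead predictions under each mode (hypothetical names):

```python
import numpy as np
from scipy.stats import multivariate_normal

def bayes_update(P, p, X_next, means, covs):
    """One Bayesian updating step for the mode probabilities.

    Assumes X_{t+1} | (j_{t+1} = k) ~ N(means[k], covs[k]), e.g.
    means[k] = A[k] X_t + B[k] i_t and covs[k] = C[k] C[k]'.
    """
    p_pred = P.T @ p                     # prediction: p_{t+1|t} = P' p_{t|t}
    lik = np.array([multivariate_normal.pdf(X_next, mean=means[k], cov=covs[k])
                    for k in range(len(p))])
    post = p_pred * lik                  # Bayes rule: prior times likelihood
    return post / post.sum()             # renormalize to get p_{t+1|t+1}
```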


Bayesian Updating Makes Beliefs Random

Ex post,

$$p_{t+1|t+1} = Q(X_t, p_{t|t}, x_t, i_t, j_t, \varepsilon_t, j_{t+1}, \varepsilon_{t+1})$$

is a random variable, depending on $j_{t+1}$ and $\varepsilon_{t+1}$. Note that

$$\mathrm{E}_t\, p_{t+1|t+1} = p_{t+1|t} = P' p_{t|t}.$$

- Bayesian updating gives a mean-preserving spread of $p_{t+1|t+1}$.
- If $V(X_t, p_{t|t})$ is concave in $p_{t|t}$, loss is lower under AOP, and it is beneficial to learn.
- Note that we assume symmetric beliefs. Learning by the public changes the nature of the policy problem and may make stabilization more difficult.
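A quick Monte Carlo check of the mean-preserving-spread property, with hypothetical mode-dependent means and volatilities for $X_{t+1}$: the simulated average of $p_{t+1|t+1}$ should match $P' p_{t|t}$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar two-mode example; all numerical values are hypothetical.
P = np.array([[0.98, 0.02], [0.02, 0.98]])
p = np.array([0.7, 0.3])
means, sds = np.array([0.0, 1.0]), np.array([0.5, 0.5])

p_pred = P.T @ p
posts = np.empty((50_000, 2))
for s in range(posts.shape[0]):
    k = rng.choice(2, p=p_pred)                  # draw j_{t+1} from p_{t+1|t}
    x = rng.normal(means[k], sds[k])             # draw X_{t+1} given the mode
    lik = np.exp(-0.5 * ((x - means) / sds) ** 2) / sds   # Gaussian likelihoods
    post = p_pred * lik                          # common 1/sqrt(2*pi) cancels
    posts[s] = post / post.sum()
print(posts.mean(axis=0), p_pred)                # the two should (nearly) agree
```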


4. Bayesian optimal policy (BOP)

- Optimal "experimentation" incorporated: may alter actions to mitigate future uncertainty. More complex numerically.
- Dual Bellman equation as in AOP, but now with the belief updating equation incorporated in the optimization.
- Because of the nonlinearity of Bayesian updating, the solution is no longer linear in $X_t$ for given $p_{t|t}$.
- Dual value function $\tilde{V}(X_t, p_{t|t})$ and primal value function $V(X_t, p_{t|t})$ no longer quadratic in $X_t$ for given $p_{t|t}$.
- Always weakly better than AOP in backward-looking models. Not necessarily true in forward-looking models: experimentation by the public changes the policymaker's constraints.

Backward: $V = \min_{i \in I} \mathrm{E}_t \left[ L + \delta V \right]$

Forward: $\tilde{V} = \max_{\gamma \in \Gamma} \min_{i \in I} \mathrm{E}_t \left[ L + \delta \tilde{V} \right]$
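To illustrate what "incorporating the updating equation in the optimization" means, here is a toy grid-based value iteration for a scalar, purely backward-looking model: the belief enters the state, and the Bayes update sits inside the minimization, which is what separates BOP from AOP. All names and parameter values are hypothetical, and the code is deliberately unoptimized (shrink the grids for a quick run).

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Toy model: X' = a[j'] X + b i + c eps, two modes, belief p = Pr(mode 1).
a, b, c = np.array([0.9, 1.3]), 0.5, 0.3
P = np.array([[0.98, 0.02], [0.02, 0.98]])
delta, lam = 0.95, 0.1

Xg = np.linspace(-3, 3, 21)                      # grid for the state X
pg = np.linspace(0.01, 0.99, 11)                 # grid for the belief p
ig = np.linspace(-5, 5, 41)                      # candidate controls
nodes, wts = np.polynomial.hermite_e.hermegauss(5)   # quadrature for eps ~ N(0,1)
wts = wts / wts.sum()

V = np.zeros((Xg.size, pg.size))
for sweep in range(200):
    interp = RegularGridInterpolator((Xg, pg), V, bounds_error=False, fill_value=None)
    Vn = np.empty_like(V)
    for ix, X in enumerate(Xg):
        for ip, p in enumerate(pg):
            p_pred = P.T @ np.array([p, 1.0 - p])        # p_{t+1|t}
            best = np.inf
            for i in ig:
                mu = a * X + b * i                       # mode-dependent means
                ev = 0.0
                for jn in range(2):                      # realized next mode j'
                    for e, w in zip(nodes, wts):
                        Xn = mu[jn] + c * e
                        lik = np.exp(-0.5 * ((Xn - mu) / c) ** 2)
                        post = p_pred * lik
                        pn = post[0] / post.sum()        # Bayes update inside min
                        ev += p_pred[jn] * w * interp([[np.clip(Xn, -3, 3), pn]])[0]
                best = min(best, 0.5 * (X**2 + lam * i**2) + delta * ev)
            Vn[ix, ip] = best
    if np.max(np.abs(Vn - V)) < 1e-5:
        V = Vn
        break
    V = Vn
```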


Numerical Methods & Summary of Results

- A suite of programs is available on my website for the OBS and NL cases. Very fast, efficient, and adaptable.
- For AOP and BOP, use Miranda-Fackler collocation methods (CompEcon toolbox).
- Under NL, $V(X_t, p_{t|t})$ is not always concave in $p_{t|t}$.
- AOP significantly different from NL, but its loss is not necessarily lower. Learning is typically beneficial in backward-looking models, not always in forward-looking ones.
- It may be easier to control expectations when agents don't learn: the bond traders may get it (more) right, but that doesn't always improve welfare.
- BOP gives modestly lower loss than AOP.
- Ethical and other issues with BOP relative to AOP: perhaps not much of a practical problem?


New Keynesian Phillips Curve Examples

$$\pi_t = (1 - \omega_{j_t}) \pi_{t-1} + \omega_{j_t} \mathrm{E}_t \pi_{t+1} + \gamma_{j_t} y_t + c_{j_t} \varepsilon_t$$

- Assume policymakers directly control the output gap $y_t$.
- Period loss function:

$$L_t = \pi_t^2 + 0.1\, y_t^2, \qquad \delta = 0.98$$

- Example 1: How forward-looking is inflation? Assume $\omega_1 = 0.2$, $\omega_2 = 0.8$, so $\mathrm{E}(\omega_j) = 0.5$. Fix the other parameters: $\gamma = 0.1$, $c = 0.5$.
- Example 2: What is the slope of the Phillips curve? Assume $\gamma_1 = 0.05$, $\gamma_2 = 0.25$, so $\mathrm{E}(\gamma_j) = 0.15$. Fix the other parameters: $\omega = 0.5$, $c = 0.5$.
- In both cases, highly persistent modes:

$$P = \begin{bmatrix} 0.98 & 0.02 \\ 0.02 & 0.98 \end{bmatrix}$$
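A two-line check on what this persistence means (our own arithmetic, taking only $P$ from the slide): the ergodic mode distribution and the expected duration of each mode.

```python
import numpy as np

# With P below, each mode is left with probability 0.02 per period, so the
# expected duration of a mode is 1/0.02 = 50 periods, and the ergodic
# distribution puts equal weight on the two modes.
P = np.array([[0.98, 0.02], [0.02, 0.98]])

vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])   # eigenvector for eigenvalue 1
pi /= pi.sum()
print(pi)                         # -> [0.5 0.5]
print(1.0 / (1.0 - np.diag(P)))   # expected mode durations -> [50. 50.]
```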


Example 1: Effect of Uncertainty
Constant coefficients vs. OBS

[Figure: two panels plotting against $\pi_t$. Left: "Policy: OBS and Constant Modes" ($y_t$). Right: "Loss: OBS and Constant Modes". Series: OBS 1, OBS 2, E(OBS), Constant.]


Example 1: Value functions

[Figure: three panels plotting loss against $p_{1t}$: "Loss: NL", "Loss: BOP", "Loss: AOP". Series: $\pi_t = 0$, $\pi_t = -5$, $\pi_t = 3.33$.]


Example 1: Loss Differences

[Figure: two panels plotting loss differences against $p_{1t}$: "Loss difference: BOP−NL" (up to about 1.5) and "Loss differences: BOP−AOP" (on the order of $10^{-9}$).]


Example 1: Optimal Policies

[Figure: two panels plotting $y_t$ against $\pi_t$: "Policy: AOP" and "Policy: BOP". Series: $p_{1t} = 0.89$, $p_{1t} = 0.5$, $p_{1t} = 0.11$.]


Example 1: Policy Differences: BOP-AOP

[Figure: left panel plots the policy difference $y_t$ (BOP−AOP) against $\pi_t$; right panel shows the same difference as a surface over $(\pi_t, p_{1t})$. Both are on the order of $10^{-9}$.]


Example 2: Effect of Uncertainty
Constant coefficients vs. OBS

[Figure: two panels plotting against $\pi_t$: "Policy: OBS and Constant Modes" ($y_t$) and "Loss: OBS and Constant Modes". Series: OBS 1, OBS 2, E(OBS), Constant.]


Example 2: Value functions

[Figure: three panels plotting loss against $p_{1t}$: "Loss: NL", "Loss: BOP", "Loss: AOP". Series: $\pi_t = 0$, $\pi_t = -2$, $\pi_t = 3$.]


Example 2: Loss Differences

[Figure: two panels plotting loss differences against $p_{1t}$: "Loss difference: BOP−NL" (roughly 0.25 to 0.7) and "Loss differences: BOP−AOP" (roughly −0.026 to −0.012).]


Example 2: Optimal Policies

[Figure: two panels plotting $y_t$ against $\pi_t$: "Policy: AOP" and "Policy: BOP". Series: $p_{1t} = 0.92$, $p_{1t} = 0.5$, $p_{1t} = 0.08$.]


Example 2: Policy Differences: BOP-AOP

[Figure: left panel plots the policy difference $y_t$ (BOP−AOP) against $\pi_t$ (roughly ±0.4); right panel shows the same difference as a surface over $(\pi_t, p_{1t})$.]


Example 3: Estimated New Keynesian Model

$$\pi_t = \omega_{fj} \mathrm{E}_t \pi_{t+1} + (1 - \omega_{fj}) \pi_{t-1} + \gamma_j y_t + c_{\pi j} \varepsilon_{\pi t},$$

$$y_t = \beta_{fj} \mathrm{E}_t y_{t+1} + (1 - \beta_{fj}) \left[ \beta_{yj} y_{t-1} + (1 - \beta_{yj}) y_{t-2} \right] - \beta_{rj} (i_t - \mathrm{E}_t \pi_{t+1}) + c_{yj} \varepsilon_{yt}.$$

Estimated hybrid model, constrained to have one mode backward-looking and one partially forward-looking.

Parameter    Mean      Mode 1    Mode 2
ωf           0.0938    0.3272    0
γ            0.0474    0.0580    0.0432
βf           0.1375    0.4801    0
βr           0.0304    0.0114    0.0380
βy           1.3331    1.5308    1.2538
cπ           0.8966    1.0621    0.8301
cy           0.5572    0.5080    0.5769


Example 3: More Detail

Estimated transition probabilities

$$P = \begin{bmatrix} 0.9579 & 0.0421 \\ 0.0169 & 0.9831 \end{bmatrix}$$

Loss function:

$$L_t = \pi_t^2 + y_t^2 + 0.2\, (i_t - i_{t-1})^2, \qquad \delta = 1$$

Only feasible to consider NL and AOP. Evaluate them via 1000 simulations of 1000 periods each.
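A minimal sketch of this evaluation loop; `simulate_one` is a hypothetical callback that returns one simulated path of $(\pi_t, y_t, i_t)$ under a given policy (NL or AOP):

```python
import numpy as np

rng = np.random.default_rng(3)

def average_loss(simulate_one, n_sims=1000, T=1000):
    """Monte Carlo policy evaluation mirroring the slide: 1000 simulations of
    1000 periods, with mean period loss
        L_t = pi_t^2 + y_t^2 + 0.2 * (i_t - i_{t-1})^2
    (delta = 1, so the undiscounted average is the relevant statistic).
    """
    losses = np.empty(n_sims)
    for s in range(n_sims):
        pi, y, i = simulate_one(T, rng)       # arrays of length T for one path
        losses[s] = np.mean(pi[1:] ** 2 + y[1:] ** 2 + 0.2 * np.diff(i) ** 2)
    return losses.mean(), losses
```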


Example 3: Simulated Impulse Responses

[Figure: six panels of impulse responses over 50 periods: responses of $\pi$, $y$, and $i$ to a $\pi$ shock and to a $y$ shock. Series: Constant, AOP Median, NL Median.]


Example 3: Simulated Distributions

[Figure: four panels of simulated distributions of $\mathrm{E}\pi_t^2$, $\mathrm{E}y_t^2$, $\mathrm{E}i_t^2$, and $\mathrm{E}L_t$ under AOP and NL.]


Example 3: Representative Simulation

[Figure: four panels of a 1000-period simulated path: inflation, output gap, interest rate, and the probability of being in mode 1, under AOP and NL.]


Conclusion

- MJLQ framework: a flexible, powerful, yet tractable way of handling model uncertainty and non-certainty-equivalence.
- Handles a large variety of uncertainty configurations; also able to incorporate a large variety of CB judgment.
- Extension to forward-looking variables via the recursive saddlepoint method.
- Straightforward to incorporate unobservable modes without learning.
- Adaptive optimal policy is as easy to implement, but harder to evaluate.
- Bayesian optimal policy is more complex, particularly in forward-looking cases.
- Learning has sizeable effects, which may or may not be beneficial.
- Experimentation seems to have relatively little effect.
