Finance and Economics Discussion Series
Divisions of Research & Statistics and Monetary Affairs
Federal Reserve Board, Washington, D.C.

Reliably Computing Nonlinear Dynamic Stochastic Model Solutions: An Algorithm with Error Formulas
Gary S Anderson
2018-070
Please cite this paper as: Anderson, Gary S. (2018). "Reliably Computing Nonlinear Dynamic Stochastic Model Solutions: An Algorithm with Error Formulas," Finance and Economics Discussion Series 2018-070. Washington: Board of Governors of the Federal Reserve System, https://doi.org/10.17016/FEDS.2018.070.
NOTE: Staff working papers in the Finance and Economics Discussion Series (FEDS) are preliminary materials circulated to stimulate discussion and critical comment. The analysis and conclusions set forth are those of the authors and do not indicate concurrence by other members of the research staff or the Board of Governors. References in publications to the Finance and Economics Discussion Series (other than acknowledgement) should be cleared with the author(s) to protect the tentative character of these papers.
Reliably Computing Nonlinear Dynamic Stochastic Model Solutions: An Algorithm with Error Formulas

Gary S. Anderson∗

Friday 28th September, 2018, 11:18am
Abstract
This paper provides a new technique for representing discrete time nonlinear dynamic stochastic time invariant maps. Using this new series representation, the paper shows how to augment a solution strategy based on projection methods with an additional set of constraints, thereby enhancing its reliability. The paper also provides general formulas for evaluating the accuracy of proposed solutions. The technique can readily accommodate models with occasionally binding constraints and regime switching. The algorithm uses Smolyak polynomial function approximation in a way which makes it possible to exploit a high degree of parallelism.
∗The analysis and conclusions set forth are those of the author and do not indicate concurrence by other members of the research staff or the Board of Governors. I would like to thank Luca Guerrieri, Christopher Gust, Hess Chung, Benjamin Johannsen, Robert Tetlow, and Etienne Gagnon for their comments and suggestions. I would also like to thank Stephanie Harrington for her help with document preparation.
Contents

1 Introduction 3

2 A New Series Representation For Bounded Time Series 5
2.1 Linear Reference Models and a Formula for "Anticipated Shocks" 5
2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series 6
2.3 Assessing x_t Errors 8
2.3.1 Truncation Error 8
2.3.2 Path Error 9

3 Dynamic Stochastic Time Invariant Maps 10
3.1 An RBC Example 10
3.2 A Model Specification Constraint 13

4 Approximating Model Solution Errors 15
4.1 Approximating a Global Decision Rule Error Bound 15
4.2 Approximating Local Decision Rule Errors 16
4.3 Assessing the Error Approximation 16

5 Improving Proposed Solutions 22
5.1 Some Practical Considerations for Applying the Series Formula 22
5.2 How My Approach Differs From the Usual Approach 22
5.3 Algorithm Overview 23
5.3.1 Single Equation System Case 23
5.3.2 Multiple Equation System Case 24
5.4 Approximating a Known Solution: η = 1, U(c) = log(c) 26
5.5 Approximating an Unknown Solution: η = 3 37
6 A Model with Occasionally Binding Constraints 44
7 Conclusions 52
A Error Approximation Formula Proof 56
B A Regime Switching Example 59
C Algorithm Pseudo-code 62
1 Introduction

This paper provides a new technique for the solution of discrete time dynamic stochastic infinite-horizon economies with bounded time-invariant solutions. It describes how to obtain decision rules satisfying a system of first-order conditions ("Euler equations") and how to assess their accuracy.¹ For an overview of techniques for solving nonlinear models, see (Judd 1992; Christiano and Fisher 2000; Doraszelskiy and Judd 2004; Gaspar and Judd 1997; Judd et al. 2014; Marcet and Lorenzoni 1999; Judd et al. 2011; Maliar and Maliar 2001; Binning and Maih 2017; Judd et al. 2017b).
This new series representation for bounded time series, adapted from (Anderson 2010),

x(x_{t-1}, ε_t) = B x_{t-1} + φ ψ_ε ε_t + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ Z_{t+ν}(x_{t-1}, ε_t),

makes it possible to construct a series representation for any bounded time invariant discrete time map. As yet, the author has found no comparable use of a linear reference dynamical system for conveniently transforming one bounded infinite dimensional series into another.
It turns out that this representation provides a way to use Euler equation errors to approximate the error in components of decision rules.² This paper provides approximation formulas explicitly relating the Euler equation errors to the components of errors in the decision rule.
In addition, the series representation provides exploitable constraints on model solutions that enhance an algorithm's reliability. Practitioners using projection methods and other techniques have often found models difficult to solve. They find that it can be difficult to find an initial guess for a solution so that the calculations converge, and that sometimes the convergent decision rules lack appropriate long run properties like stability. Furthermore, economists have an increasing interest in crafting models with occasionally binding constraints and regime switching, where unfortunately these problems seem to arise more often. The series representation overcomes these problems by making it easier to choose an initial guess that converges and by ensuring that the solutions have appropriate long run properties.
The numerical implementation of the algorithm builds upon the work of (Judd et al. 2011, 2017b). In particular, it uses the anisotropic Smolyak method and the adaptive parallelotope method (Judd et al. 2014), and precomputes all integrals required by the calculations (Judd et al. 2017b). The algorithm should scale well to large models, as many of the algorithm's components can be computed in parallel with limited amounts of information flowing between compute nodes.
The paper is organized as follows. Section 2 presents a useful new series representation for any bounded time series. Section 3 shows how to use this series representation to characterize time invariant maps. Section 4 provides formulas for approximating dynamic stochastic model errors in proposed solutions. Section 5 presents a new solution algorithm for
¹The series representation presented here should also prove useful for applications based on value function iteration, but I defer treating this topic to future work.
²Others have also studied approximation error for dynamic stochastic models (Judd et al. 2017a; Santos and Peralta-Alva 2005; Santos 2000).
improving proposed solutions. Section 6 applies the technique to a model with occasionally binding constraints. Section 7 concludes.
2 A New Series Representation For Bounded Time Series

Since (Blanchard and Kahn 1980), numerous authors have developed solution algorithms and formulas for solving linear rational expectations models. It turns out that the solutions associated with these linear saddle point models, by quantifying the potential impact of anticipated future shocks, can play a new and somewhat surprising role in characterizing the solutions for highly nonlinear models. In preparation for discussing how these calculations can be useful for representing time invariant maps, this section describes the relationship between linear reference models and bounded time series.
2.1 Linear Reference Models and a Formula for "Anticipated Shocks"

For any linear homogeneous L dimensional deterministic system that produces a unique stable solution

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = 0,

inhomogeneous solutions

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c

can be computed as

x_t = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c,

where

φ = (H_0 + H_1 B)^{-1} and F = -φ H_1

(Anderson 2010).

It will be useful to collect the components of this linear reference model solution, L ≡ {H, ψ_ε, ψ_c, B, φ, F}, for later use.³
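To make the mapping from (H_{-1}, H_0, H_1) to (B, φ, F) concrete, the sketch below finds the stable B numerically. It is an illustrative stand-in, not the paper's implementation: it uses simple linear time iteration rather than the QZ-based method of Anderson (2010), and then forms φ and F from the formulas above.

```python
import numpy as np

def solve_reference_model(Hm1, H0, H1, tol=1e-12, max_iter=10_000):
    """Find the stable B solving H_{-1} + H0 B + H1 B^2 = 0 by fixed-point
    (linear time) iteration, then form phi = (H0 + H1 B)^{-1} and F = -phi H1."""
    B = np.zeros_like(H0)
    for _ in range(max_iter):
        # x_t = B x_{t-1} implies (H0 + H1 B) B = -H_{-1}
        B_next = -np.linalg.solve(H0 + H1 @ B, Hm1)
        if np.max(np.abs(B_next - B)) < tol:
            phi = np.linalg.inv(H0 + H1 @ B_next)
            return B_next, phi, -phi @ H1
        B = B_next
    raise RuntimeError("no convergence; model may lack a unique stable solution")
```

When the linear model has a unique saddle-path stable solution, the eigenvalues of the returned F lie inside the unit circle, which is what makes the series formula below converge.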
Theorem 2.1. Consider any bounded discrete time path

x_t ∈ R^L with ‖x_t‖_∞ ≤ X̄ ∀t > 0. (1)

Now, given the trajectories (1), define z_t as

z_t ≡ H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} - ψ_c - ψ_ε ε_t.

³In Section 2.2 I will use these matrices to construct a series representation for any bounded path, and in Section 5 to compute improvements in proposed model solutions.
One can then express the x_t solving

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c + z_t

using

x_t = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν} (2)

and

x_{t+k+1} = B x_{t+k} + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+k+ν+1} ∀t, k ≥ 0.

Proof. See (Anderson 2010).
Consequently, given a bounded time series (1) and a stable linear homogeneous system L, one can easily compute a corresponding series representation like (2) for x_t. Interestingly, the linear model H, the constant term ψ_c, and the impact of the stochastic shocks ψ_ε can be chosen rather arbitrarily: the only constraints are the existence of a saddle-point solution for the linear system and that all z-values fall in the row space of H_1. Note that since the eigenvalues of F are all less than one in magnitude, the formula supports the intuition that distant future shocks influence current conditions less than imminent shocks.
The transformation {x_t, x_{t+1}, x_{t+2}, …} → L(x_{t-1}, ε_t) → {z_t, z_{t+1}, z_{t+2}, …} is invertible, {z_t, z_{t+1}, z_{t+2}, …} → L(x_{t-1}, ε_t)^{-1} → {x_t, x_{t+1}, x_{t+2}, …}, and the inverse can be interpreted as giving the impact of "fully anticipated future shocks" on the path of x_t in a linear perfect foresight model. Note that the reference model is deterministic, so that the z_t(x_{t-1}, ε_t) functions will fully account for any stochastic aspects encountered in the models I analyze.
A key feature to note is that the series representation can accommodate arbitrarily complex time series trajectories so long as these trajectories are bounded. Later, this observation will give us some confidence in the robustness of the algorithm for constructing series representations for unknown families of functions satisfying complicated systems of dynamic non-linear equations.
2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series

Consider the following matrix constructed from "almost" arbitrary coefficients:

[H_{-1} H_0 H_1] =
[ 0.1   0.5   -0.5   1     0.4    0.9   1     1     0.9
  0.2   0.2   -0.5   7     0.4    0.8   3     2     0.6
  0.1  -0.25  -1.5   2.1   0.47   1.9   2.1   2.1   3.9 ]   (3)

with ψ_c = ψ_ε = 0, ψ_z = I. I have chosen these coefficients so that the linear model has a unique stable solution, and since H_1 is full rank, its column space will span the column space for all non zero values of z_t ∈ R³.⁴ It turns out that the algorithm is very forgiving about the choice of L.

⁴The flexibility in choosing L will become more important later in the paper, where we must choose a linear reference model in a context where we will have little guidance about what might make a good choice.
The solution for this system is

B = [ -0.0282384  -0.0552487   0.00939369
      -0.0664679  -0.700462   -0.0718527
      -0.163638   -1.39868     0.331726 ]

φ = [  0.0210079   0.15727    -0.0531634
       1.20712    -0.0553003  -0.431842
       2.58165    -0.183521   -0.578227 ]

F = [ -0.381174   -0.223904    0.0940684
      -0.134352   -0.189653    0.630956
      -0.816814   -1.00033     0.0417094 ]
It is straightforward to apply (2) to the following three bounded families of time series paths:

x¹_t = (x_{t-1,1} + ε_t) / D_π(t), where D_π(t) gives the t-th digit of π,

x²_t = x_{t-1,1}(1 + ε_t)² (-1)^t,

x³_t = ε̃_t,

where the ε̃_t are a sequence of pseudo random draws from the uniform distribution U(-4, 4) produced subsequent to the selection of a random seed,

randomseed(x_{t-1,1} + ε_t).
The three panels in Figure 1 display these three time series generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set of trajectories oscillates between two values determined by a nonlinear function of the initial conditions x_{t-1} and the shock. The third set of trajectories characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model L.⁵

⁵Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
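The z-to-x inversion can be made concrete with a minimal scalar sketch. The coefficients below are hypothetical (not the example in (3)): the code maps an arbitrary bounded path to z_t with a reference model and then recovers x_t from the series (2).

```python
import numpy as np

# Hypothetical scalar reference model with a unique stable solution.
Hm1, H0, H1 = 0.5, -1.1, 0.1
B = (1.1 - np.sqrt(1.1**2 - 4 * H1 * Hm1)) / (2 * H1)  # stable root of H1 B^2 + H0 B + Hm1 = 0
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1                                          # |F| < 1

# An arbitrary bounded path; it need not solve any model.
T = 400
x = np.sin(0.3 * np.arange(T))

# Forward transform (psi_c = psi_eps = 0): z_t = H_{-1} x_{t-1} + H0 x_t + H1 x_{t+1}
z = Hm1 * x[:-2] + H0 * x[1:-1] + H1 * x[2:]           # z[j] is z_{j+1}

# Inverse transform via the series: x_1 = B x_0 + sum_nu F^nu phi z_{1+nu}
x1 = B * x[0] + sum(F**nu * phi * z[nu] for nu in range(200))
# x1 matches x[1] to near machine precision because |F| < 1
```

Because |F| < 1, the weights on distant z terms decay geometrically, mirroring the remark above that distant future shocks matter less than imminent ones.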
[Figure: three panels over 50 periods, titled "Pi Digits Path(10 10 105 5 5)", "Oscillatory Path(10 10 105 5 5)", and "Pseudo Random Path(10 10 105 5 5)".]

Figure 1: Arbitrary Bounded Time Series Paths
2.3 Assessing x_t Errors

The formula (2) computes the impact on the current state of fully anticipated future shocks. It characterizes the impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of z_t, or one might have imprecise values of z_t along the path.
2.3.1 Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating x_t by truncating the series at a finite number of terms:

x̄_t ≡ B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{s=0}^k F^s φ z_{t+s}.

We can bound the series approximation truncation errors. Since

Σ_{s=k+1}^∞ F^s φ ψ_z = (I - F)^{-1} F^{k+1} φ ψ_z,

we have

‖x(x_{t-1}, ε) - x̄(x_{t-1}, ε, k)‖_∞ ≤ ‖(I - F)^{-1} F^{k+1} φ ψ_z‖_∞ (‖H_{-1}‖_∞ + ‖H_0‖_∞ + ‖H_1‖_∞) ‖X̄‖_∞.
Figure 2 shows that this truncation error bound can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between x_t from the full series and a truncated series for different lengths of the truncation. The blue line shows the infinity norm of the actual difference between the x_t computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
[Plot: truncation error bound and actual error against the number of retained terms.]

Figure 2: x_t Error Approximation Versus Actual Error
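As a sketch of how the bound can be evaluated in practice (illustrative code, with X̄ treated as a known constant and the three H norms pre-summed):

```python
import numpy as np

def truncation_bound(F, phi, psi_z, H_norm_sum, X_bar, k):
    """||(I-F)^{-1} F^{k+1} phi psi_z||_inf * (||H_{-1}|| + ||H0|| + ||H1||) * ||X_bar||:
    the worst-case error from keeping only terms 0..k of the series."""
    n = F.shape[0]
    tail = np.linalg.inv(np.eye(n) - F) @ np.linalg.matrix_power(F, k + 1) @ phi @ psi_z
    return np.linalg.norm(tail, np.inf) * H_norm_sum * X_bar
```

Because the bound is proportional to F^{k+1}, it decays geometrically in k, consistent with the message of Figure 2 that only the first few terms matter.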
2.3.2 Path Error

We can assess the impact of perturbed values for z_t by computing the maximum discrepancy Δz_t impacting the z_t and applying the partial sum formula. One can approximate x_t using

x̃_t ≡ B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{s=0}^∞ F^s φ (z_{t+s} + Δz_{t+s}).

Note yet again that, for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for those realizations which are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible Δz_t in the formula:

‖x(x_{t-1}, ε) - x̃(x_{t-1}, ε)‖_∞ ≤ ‖(I - F)^{-1} φ ψ_z‖_∞ ‖Δz_t‖_∞.
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).⁶ As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that give us an approximation for the model solutions. This section presents a familiar RBC model as an example.⁷
3.1 An RBC Example

Consider the model described in (Maliar and Maliar 2005):⁸

max u(c_t) + E_t Σ_{t=1}^∞ δ^t u(c_{t+1})

subject to

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1}), f(k) = k^α,

u(c) = (c^{1-η} - 1) / (1 - η).
The first order conditions for the model are

1/c_t^η = α δ k_t^{α-1} E_t (θ_{t+1} / c_{t+1}^η),

c_t + k_t = θ_t k_{t-1}^α,

θ_t = θ_{t-1}^ρ e^{ε_t}.

It is well known that when η = d = 1 we have a closed form solution (Lettau 2003):

x_t(x_{t-1}, ε_t) ≡ D(x_{t-1}, ε_t) ≡
[ c_t(x_{t-1}, ε_t)      [ (1 - αδ) θ_t k_{t-1}^α
  k_t(x_{t-1}, ε_t)   =    αδ θ_t k_{t-1}^α
  θ_t(x_{t-1}, ε_t) ]      θ_{t-1}^ρ e^{ε_t} ]
⁶Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter 2000; Koop et al. 1996).
⁷Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.
⁸Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
For mean zero i.i.d. ε_t, we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

D̄(x_{t+k}) ≡
[ E_t(c_{t+k+1} | θ_{t+k}, k_{t+k})      [ (1 - αδ) e^{σ²/2} θ_{t+k}^ρ k_{t+k}^α
  E_t(k_{t+k+1} | θ_{t+k}, k_{t+k})   =    αδ e^{σ²/2} θ_{t+k}^ρ k_{t+k}^α
  E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k}) ]      e^{σ²/2} θ_{t+k}^ρ ]
For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule D and the decision rule conditional expectation D̄ can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t),

x_t(x_{t-1}, ε_t) = D(x_{t-1}, ε_t),
x_{t+k+1}(x_{t-1}, ε_t) = D̄(x_{t+k}(x_{t-1}, ε_t)),

and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)),

z_{t+k} ≡ H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}.

Formula (2) requires that

x(x_{t-1}, ε_t) = B [c_{t-1}; k_{t-1}; θ_{t-1}] + φ ψ_ε ε_t + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(k_{t-1}, θ_{t-1}, ε_t).
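The closed-form rule D and its conditional expectation D̄ can be transcribed directly; the sketch below is illustrative, using the parameter values from the paper's example (η = d = 1).

```python
import numpy as np

alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01   # eta = d = 1

def D(k_lag, theta_lag, eps):
    """Closed-form decision rule: returns (c_t, k_t, theta_t)."""
    theta = theta_lag**rho * np.exp(eps)
    y = theta * k_lag**alpha
    return ((1 - alpha * delta) * y, alpha * delta * y, theta)

def D_bar(c, k, theta):
    """One-step-ahead conditional expectations given (k_t, theta_t)."""
    m = np.exp(sigma**2 / 2) * theta**rho           # E_t[theta_{t+1}]
    y = m * k**alpha
    return ((1 - alpha * delta) * y, alpha * delta * y, m)

def expectation_path(k_lag, theta_lag, eps, T=30):
    """x_t from D, then iterate D_bar for the conditional expectations path."""
    path = [D(k_lag, theta_lag, eps)]
    for _ in range(T - 1):
        path.append(D_bar(*path[-1]))
    return np.array(path)

path = expectation_path(0.18, 1.1, 0.01)            # initial point from the example
```

Iterating D̄ drives the path toward the stochastic steady state values reported in the text.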
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

α → 0.36, δ → 0.95, ρ → 0.95, σ → 0.01,

we have

[c_ss; k_ss; θ_ss] = [0.360408; 0.187324; 1.001], [k_{t-1}; θ_{t-1}; ε_t] = [0.18; 1.1; 0.01]. (4)
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation (4).

By including enough terms we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude for the error, B_n, shown in red, is again very pessimistic compared to the actual error, Z_n = ‖x_t - x̄_t‖_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine precision accurate value for the time t state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves the approximation Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter but still pessimistic bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
[Plot of the model variables over 30 periods.]

Figure 3: Model variable values
[Plot: "RBC Model Bounds versus Actual", approximate error B_n versus actual error Z_n over 30 terms.]

Figure 4: RBC Known Solution Truncation Approximate Error Versus Actual, "Almost Arbitrary" Linear Reference Model
[Plot: "RBC Model Bounds versus Actual", approximate error B_n versus actual error Z_n over 30 terms.]

Figure 5: RBC Known Solution Truncation Approximate Error Versus Actual, Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h^{det}_{i0}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [h^{det}_{ij}(x_{t-1}, x_t, ε_t) h^{nondet}_{ij}(x_{t+1})] = 0.
This is a very broad class of models, including most widely used macroeconomics models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

h^{det}_{10}(·) = 1/c_t^η, h^{det}_{11}(·) = αδ k_t^{α-1}, h^{nondet}_{11}(·) = E_t (θ_{t+1} / c_{t+1}^η),

h^{det}_{20}(·) = c_t + k_t - θ_t k_{t-1}^α, h^{det}_{21}(·) = 0,

h^{det}_{30}(·) = ln θ_t - (ρ ln θ_{t-1} + ε_t), h^{det}_{31}(·) = 0.
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c_t^η,

substituting N_{t+1} for θ_{t+1}/c_{t+1}^η in the first equation, and linearizing the RBC model about the ergodic mean.

This leads to

[H_{-1} H_0 H_1] =
[ 0  0         0  0          -7.6986  9.47963  0  0          0  0  -0.999  0
  0  -1.05263  0  0           1       1        0  -0.547185  0  0   0      0
  0  0         0  0           7.7063  0        1  -2.77463   0  0   0      0
  0  0         0  -0.949953   0       0        0   1         0  0   0      0 ]

with

ψ_ε = [0; 0; 1; 0], ψ_z = I.
These coefficients produce a unique stable linear solution:
B = [ 0   0.692632  0   0.342028
      0   0.36      0   0.177771
      0  -5.33763   0   0
      0   0         0   0.949953 ]

φ = [ -0.0444237   0.658     0  0.360048
       0.0444237   0.342     0  0.187137
       0.342342   -5.07075   1  0
       0           0         0  1 ]

F = [ 0  0  -0.0443793  0
      0  0   0.0443793  0
      0  0   0.342      0
      0  0   0          0 ]

ψ_c = [ -3.7735; -0.197184; 2.77741; 0.0500976 ]
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.⁹

4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];

then, with

E_t x_{t+1} = G(g(x_{t-1}, ε_t)),

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0 ∀(x_{t-1}, ε_t),

x_t(x_{t-1}, ε_t) ∈ R^L, ‖x_t(x_{t-1}, ε_t)‖_2 ≤ X̄ ∀t > 0.
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t)),
e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Then

‖x_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x⁻,ε} ‖(I - F)^{-1} φ M(x⁻, g^p(x⁻, ε), G^p(g^p(x⁻, ε)), ε)‖_2,

which can be approximated by

max_{x_{t-1},ε_t} ‖(I - F)^{-1} φ e^p_t(x_{t-1}, ε_t)‖_2.
For proof, see Appendix A.

⁹This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
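Evaluating this approximation on a sample of points is straightforward; the sketch below is illustrative, with the Euler equation residuals e^p_t assumed to have been computed elsewhere.

```python
import numpy as np

def global_error_bound(F, phi, euler_errors):
    """max over sample points of ||(I - F)^{-1} phi e^p_t||_2, where
    euler_errors is an (n_points, L) array of Euler equation residuals."""
    A = np.linalg.inv(np.eye(F.shape[0]) - F) @ phi
    return max(np.linalg.norm(A @ e) for e in euler_errors)
```

In practice the max would be taken over a fine grid or quasi-random sample of (x_{t-1}, ε_t) points in the bounded region A.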
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t),

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) - x^p(x_{t-1}, ε_t),

compute the conditional expectations functions for the decision rule, and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in (2) to approximate e:

X^p(x) ≡ E[x^p(x, ε)],
x_{t+k+1} = X^p(x_{t+k}), k = 1, …, K.

We can define functions z^p_t by

z^p_t ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t),
z^p_{t+k} ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0),

e(x_{t-1}, ε_t) ≈ Σ_{ν=0}^K F^ν φ z^p_{t+ν}.
4.3 Assessing the Error Approximation

This section reports on the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40, 0.85 < δ < 0.99, 0.85 < ρ < 0.99, 0.005 < σ < 0.015,

producing the following 10 model specifications:

α       δ         η  ρ         σ
0.35    0.92875   1  0.957188  0.00875
0.275   0.9725    1  0.906875  0.01375
0.375   0.8675    1  0.924375  0.01125
0.225   0.94625   1  0.891563  0.00625
0.325   0.91125   1  0.944063  0.006875
0.2625  0.922187  1  0.98125   0.011875
0.3625  0.887187  1  0.85875   0.014375
0.2125  0.965938  1  0.965938  0.009375
0.3125  0.860938  1  0.878438  0.008125
0.2375  0.904688  1  0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearization. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i.

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).¹⁰ The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.

¹⁰I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
[Four scatter plots of (σ², R²) for the c_t error regressions, K = 0, 1, 10, 30.]

Figure 6: c_t (σ², R²)
[Four scatter plots of (σ², R²) for the k_t error regressions, K = 0, 1, 10, 30.]

Figure 7: k_t (σ², R²)
[Four scatter plots of (α, β) for the c_t error regressions, K = 0, 1, 10, 30.]

Figure 8: c_t (α, β)
[Four scatter plots of (α, β) for the k_t error regressions, K = 0, 1, 10, 30.]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c)},

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))

represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
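The gatekeeper structure can be sketched as follows. This is a hypothetical Python rendering of {C_1, …, C_M, d}: the names b, solve, and c mirror the notation above, the toy solvers stand in for the M_i systems, and pick_first is a deliberately simple d.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GatedSystem:
    b: Callable      # precondition on (x_lag, eps): should we try this system?
    solve: Callable  # solve M_i(x_lag, eps, x_t, E[x_{t+1}]) = 0 for x_t
    c: Callable      # post-condition on a candidate x_t: is it acceptable?

def decision_rule(systems, d, x_lag, eps):
    """Collect candidates from gated systems; d must select exactly one x_t."""
    candidates = []
    for s in systems:
        if not s.b(x_lag, eps):
            continue                        # precondition false: skip system i
        x_t = s.solve(x_lag, eps)
        if s.c(x_lag, eps, x_t):            # keep only admissible candidates
            candidates.append(x_t)
    return d(candidates, x_lag, eps)

# Toy occasionally binding constraint: x_t = 0.5 x_lag + eps unless that is negative.
unconstrained = GatedSystem(lambda x, e: True,
                            lambda x, e: 0.5 * x + e,
                            lambda x, e, xt: xt >= 0)
constrained = GatedSystem(lambda x, e: True,
                          lambda x, e: 0.0,
                          lambda x, e, xt: True)
pick_first = lambda cands, x, e: cands[0]
```

Here decision_rule([unconstrained, constrained], pick_first, 1.0, 0.0) yields the unconstrained candidate, while a state such as x_lag = -3 trips the post-condition and yields the constrained solution.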
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as for the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use anisotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations in Section 5.1 are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms yields more accuracy when the decision rule is accurate but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution. If the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case
For now, consider a single model equation system with no Boolean gates. A description of the actual multi-system implementation follows. We seek
M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0   ∀ (x_{t-1}, ε_t)
Given a proposed model solution
x_t = x^p(x_{t-1}, ε_t)
compute
X^p(x_{t-1}) ≡ E [x^p(x_{t-1}, ε_t)]
We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.
We can define the functions z^p, Z^p by
z^p(x_{t-1}, ε_t) ≡ H [x_{t-1}; x^p(x_{t-1}, ε_t); X^p(x^p(x_{t-1}, ε_t))] + ψ_ε ε_t + ψ_c
Z^p(x_{t-1}) ≡ E [z^p(x_{t-1}, ε_t)]
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),   z_{t+k+1} = Z(x_{t+k})   ∀ k ≥ 0
• Using the z^p(x_{t-1}, ε_t) series, we get expressions for x_t and E [x_{t+1}] consistent with L and the conditional expectations path:

x_t = B x_{t-1} + φ ψ_ε ε_t + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^{∞} F^ν φ Z(x_{t+ν})

E [x_{t+1}] = B x_t + (I − F)^{−1} φ ψ_c + Σ_{ν=1}^{∞} F^{ν−1} φ Z(x_{t+ν})
• Use the model equations M(x_{t-1}, x_t, E [x_{t+1}], ε_t) = 0 together with x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε_t), z^{p′}(x_{t-1}, ε_t)

• Set x^p(x_{t-1}, ε_t) = x^{p′}(x_{t-1}, ε_t), z^p(x_{t-1}, ε_t) = z^{p′}(x_{t-1}, ε_t); repeat the loop
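The loop above can be sketched in scalar form. This illustrative Python fragment truncates the infinite sum at K terms and treats all reference-model objects (B, F, φ, ψ_ε, ψ_c) as scalars; it is a sketch under simplifying assumptions, not the paper's implementation.

```python
# Scalar sketch of the series expression for x_t, given a conditional
# expectations path x_{t+k+1} = X(x_{t+k}) and a Z function built from the
# linear reference model.  All parameter names are illustrative scalars.

def x_from_series(x_prev, eps, X, Z, B, F, phi, psi_eps, psi_c, K, iters=50):
    """Solve x_t = B x_{t-1} + phi psi_eps eps + (1-F)^{-1} phi psi_c
                 + sum_{nu=0}^{K} F^nu phi Z(x_{t+nu})
    by fixed-point iteration (the nu = 0 term involves x_t itself)."""
    x_t = B * x_prev  # crude initial guess
    for _ in range(iters):
        # walk the conditional expectations path from the current x_t guess
        path = [x_t]
        for _k in range(K):
            path.append(X(path[-1]))
        tail = sum(F ** nu * phi * Z(path[nu]) for nu in range(K + 1))
        x_t = B * x_prev + phi * psi_eps * eps + phi * psi_c / (1.0 - F) + tail
    return x_t
```

With Z ≡ 0 the formula collapses to the linear reference model solution, which provides a quick sanity check on an implementation.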
5.3.2 Multiple Equation System Case
An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F} — The linear reference model

x^p(x_{t-1}, ε_t) — The proposed decision rule, x_t components

z^p(x_{t-1}, ε_t) — The proposed decision rule, z_t components

X^p(x_{t-1}) — The proposed decision rule conditional expectations, x_t components

Z^p(x_{t-1}) — The proposed decision rule conditional expectations, z_t components

κ — One less than the number of terms in the series representation (κ ≥ 0)

S — Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T — Model evaluation points: a Niederreiter sequence in the model variable ergodic region

γ — {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}: collects the decision rule components

G — {X(x_{t-1}), Z(x_{t-1})}: collects the decision rule conditional expectations components

H — {γ, G}: collects decision rule information

{C_1, …, C_M, d}(x_{t-1}, ε_t, x_t, E [x_{t+1}])|(b, c) — The equation system ℝ^{3N_x+N_ε} → ℝ^{N_x}

Table 1 Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot, genLilXkZkFunc
          fSumC, genXtOfXtm1, genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10 Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_M, d}(x_{t-1}, ε_t, x_t, E [x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
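The per-system parallelism described for genFindRootFuncs can be sketched with a thread pool. The function names below mirror the paper's call tree, but the bodies are placeholders, not the actual symbolic constructions.

```python
# Sketch: the (b_i, c_i, solver_i) triples are independent, so their
# construction can run in parallel before the selector is assembled.
from concurrent.futures import ThreadPoolExecutor

def gen_find_root_worker(system_id):
    # Placeholder for the symbolic work done per equation system:
    # return a (precondition b_i, post-condition c_i, solver) triple.
    b = lambda state: True
    c = lambda state, x_t: True
    solve = lambda state: state + system_id
    return (b, c, solve)

def gen_find_root_funcs(n_systems):
    # Each triple is generated independently, in parallel.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(gen_find_root_worker, range(n_systems)))
```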
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).

makeInterpFuncs — Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
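A minimal sketch of the genInterpData / interpDataToFunc steps, assuming a one-dimensional state, a caller-supplied solver for the model equations at each node, and plain Lagrange interpolation standing in for the Smolyak machinery; all names are illustrative.

```python
# Sketch: solve at collocation nodes, then build an interpolating decision rule.

def gen_interp_data(nodes, solve_at):
    """Solve the model equations at each collocation node (parallelizable)."""
    return [solve_at(x) for x in nodes]

def interp_data_to_func(nodes, values):
    """Return the Lagrange interpolating function through (nodes, values)."""
    def f(x):
        total = 0.0
        for i, xi in enumerate(nodes):
            w = 1.0
            for j, xj in enumerate(nodes):
                if j != i:
                    w *= (x - xj) / (xi - xj)
            total += values[i] * w
        return total
    return f

def make_interp_funcs(nodes, solve_at):
    """Combine the two steps: interpolation data, then interpolating function."""
    return interp_data_to_func(nodes, gen_interp_data(nodes, solve_at))
```

For example, `make_interp_funcs([0.0, 1.0, 2.0], solve_at)` returns a callable approximation to the decision rule; with three nodes it reproduces any quadratic solve_at exactly.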
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure: two scatter plots, each titled "Ergodic Values for K and θ." Left panel: the simulated (k_t, θ_t) pairs. Right panel: the same points in the transformed coordinates.]

Figure 11 The Ergodic Values for k_t, θ_t
X̄ = (0.18853, 1.00466, 0)

σ_X = (0.0112443, 0.0387807, 0.01)

V =
[ −0.707107  −0.707107  0 ]
[ −0.707107   0.707107  0 ]
[  0          0         1 ]

maxU = (3.02524, 0.199124, 3)

minU = (−3.16879, −0.17629, −3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
(k_i, θ_i, ε_i), i = 1, …, 30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2 RBC Known Solution Model Evaluation Points d=(111)
(c_i, k_i, θ_i), i = 1, …, 30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3 RBC Known Solution Values at Evaluation Points d=(111)
(ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|), i = 1, …, 30
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4 RBC Known Solution Actual Errors d=(111)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
(k_i, θ_i, ε_i), i = 1, …, 30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6 RBC Known Solution Model Evaluation Points d=(222)
(c_i, k_i, θ_i), i = 1, …, 30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
(ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|), i = 1, …, 30
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8 RBC Known Solution Actual Errors d=(222)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9 RBC Known Solution Error Approximations d=(222)
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  −6.51367    −12.2771    −16.8947
0          −6.0793     −6.26833    −6.29843
1          −6.00805    −6.24202    −6.49835
2          −6.52219    −7.43876    −7.04785
3          −6.57578    −8.03864    −8.32589
4          −6.62831    −9.1522     −9.49314
5          −6.77305    −10.5516    −10.4247
6          −6.38499    −11.6399    −11.455
7          −6.63849    −11.3627    −13.0213
8          −6.38735    −11.7288    −13.9976
9          −6.46759    −11.9826    −15.3372
10         −6.45966    −12.1345    −15.4157
10         −6.77776    −11.5025    −15.8211
15         −6.67778    −12.4593    −15.8899
20         −6.40802    −11.7296    −17.1652
25         −6.53265    −12.1441    −17.1518
30         −6.81949    −11.6097    −16.3748
35         −6.33508    −12.1446    −16.6963
40         −6.52442    −11.4042    −15.6302

Table 10 RBC Known Solution Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
(k_i, θ_i, ε_i), i = 1, …, 30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
(c_i, k_i, θ_i), i = 1, …, 30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
(k_i, θ_i, ε_i), i = 1, …, 30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
(c_i, k_i, θ_i), i = 1, …, 30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16 RBC Approximated Solution Error Approximations d=(222)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher 2000), a host of authors have described a variety of approaches:11 (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding the constraint

I_t ≥ υ I_ss

to the simple RBC model:

max u(c_t) + E_t Σ_{t=0}^{∞} δ^{t+1} u(c_{t+1})

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})

θ_t = θ_{t−1}^ρ e^{ε_t}

f(k_t) = k_t^α

u(c) = (c^{1−η} − 1) / (1 − η)
The Lagrangian is

L = u(c_t) + E_t Σ_{t=0}^{∞} δ^t u(c_{t+1}) + Σ_{t=0}^{∞} [ δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1}))) + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss) ]

so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss
The first order conditions become

1/c_t^η + λ_t = 0

λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as

11 The algorithms described in (Holden 2015) and (Guerrieri and Iacoviello 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If (μ_t > 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε_t}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
If (μ_t = 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε_t}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
(k_i, θ_i, ε_i), i = 1, …, 30
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
(c_i, I_i, k_i), i = 1, …, 30
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18 Occasionally Binding Constraints Values at Evaluation Points d=(111)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
(k_i, θ_i, ε_i), i = 1, …, 30
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
(c_i, I_i, k_i), i = 1, …, 30
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21 Occasionally Binding Constraints Values at Evaluation Points d=(222)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
52
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000. URL http://www.sciencedirect.com/science/article/B6V85-408BVCF-2/1/217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections: Parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbc/d69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-6/2/f8b1713c81765453e1a5e9a52593b52e.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-3/1/3f27e3e4d4b890df59d4f83850c1c3d1.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x^*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];

then with

E_t x^*_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, ε_t) = 0  ∀ (x_{t-1}, ε_t).

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z^*_t(x_{t-1}, ε):

x^*_t(x_{t-1}, ε_t) ∈ R^L,  ‖x^*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0

z^*_t(x_{t-1}, ε_t) ≡ H_{-1} x^*_{t-1}(x_{t-1}, ε_t) + H_0 x^*_t(x_{t-1}, ε_t) + H_1 x^*_{t+1}(x_{t-1}, ε_t).

Consequently, the exact solution has a representation given by

x^*_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^*_{t+ν}(x_{t-1}, ε)

and

E[x^*_{t+k+1}(x_{t-1}, ε)] = B x^*_{t+k} + Σ_{ν=0}^∞ F^ν φ E[z^*_{t+k+1+ν}(x_{t-1}, ε)] + (I − F)^{-1} φ ψ_c,

with

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, ε_t) = 0  ∀ (x_{t-1}, ε_t).
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t)),

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, ε):12

x^p_t(x_{t-1}, ε_t) ∈ R^L,  ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0

z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_1 x^p_{t+1}(x_{t-1}, ε_t).

The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+k+1}(x_{t-1}, ε)] = B x^p_{t+k} + Σ_{ν=0}^∞ F^ν φ E[z^p_{t+k+1+ν}(x_{t-1}, ε)] + (I − F)^{-1} φ ψ_c,

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Subtracting the two representations gives

x^*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z^*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε)).

Defining

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z^*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε),

we have

x^*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x^*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞.

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
Δz_t ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x^*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{-1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2.

Writing H_− = H_{-1} and H_+ = H_1,

z^*_t = H_− x^*_{t-1} + H_0 x^*_t + H_+ x^*_{t+1}

z^p_t = H_− x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1}

0 = M(x_{t-1}, x^*_t, x^*_{t+1}, ε_t)

e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t).

Since both paths share the initial conditions (x_{t-1}, ε_t), the H_− terms cancel in the difference, so that

max_{x_{t-1}, ε_t} (‖φ Δz_t‖_2)^2 = max_{x_{t-1}, ε_t} (‖φ (H_0 (x^*_t − x^p_t) + H_+ (x^*_{t+1} − x^p_{t+1}))‖_2)^2,

with

M(x_{t-1}, x^*_t, x^*_{t+1}, ε_t) = 0,

Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x^*_t − x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x^*_{t+1} − x^p_{t+1}).
When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

(x^*_t − x^p_t) ≈ (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x^*_{t+1} − x^p_{t+1})).

Collecting results,

(‖φ Δz_t‖_2)^2
= (‖φ (H_0 (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x^*_{t+1} − x^p_{t+1})) + H_+ (x^*_{t+1} − x^p_{t+1}))‖_2)^2
= (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1})(x^*_{t+1} − x^p_{t+1}) + H_+ (x^*_{t+1} − x^p_{t+1}))‖_2)^2
= (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − (H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) − H_+)(x^*_{t+1} − x^p_{t+1}))‖_2)^2.

Approximating the derivatives using the H matrices,

∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,  ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+,

the coefficient on (x^*_{t+1} − x^p_{t+1}) vanishes, producing

max_{x_{t-1}, ε_t} (‖φ Δe^p_t(x_{t-1}, ε)‖_2)^2.
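In the special case of a linear model, the steps above can be checked end to end: perturb the true decision rule, compute the Euler-equation residual, and compare the realized path deviation to the bound. The sketch below uses illustrative scalar coefficients of my own choosing, not any model from the paper:

```python
import numpy as np

# Deterministic scalar linear model H_{-1} x_{t-1} + H0 x_t + H1 x_{t+1} = 0;
# illustrative coefficients.  Exact decision rule: x_t = B x_{t-1}.
Hm1, H0, H1 = 0.2, -1.0, 0.5
B = min(np.roots([H1, H0, Hm1]), key=abs).real   # stable root
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1

# A deliberately wrong proposed rule x_t = (B + eta) x_{t-1}.
eta = 0.01
Bp = B + eta

# Euler-equation residual of the proposed rule, per unit of x_{t-1}
# (the exact rule makes this zero).
ep = Hm1 + H0 * Bp + H1 * Bp**2

# Bound from the text: ||x* - x^p||_inf <= max |(I - F)^{-1} phi e^p|.
bound = abs(phi * ep / (1.0 - F))

# Realized worst-case path deviation starting from x_0 = 1.
t = np.arange(1, 51)
actual = np.max(np.abs(B**t - Bp**t))
```

As the text indicates, the bound is conservative: here the realized deviation is below the bound but of the same order of magnitude.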
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial x(x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j).

If we know the current state, we can use the known conditional expectation function for that state:

E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N).

For the next state, we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1) ; … ; E_t(x_{t+2}(x_t) | s_t = N) ] = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1) ; … ; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ].
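A quick numeric check of this stacking identity; the transition matrix and expectation values below are made up for illustration:

```python
import numpy as np

# Two regimes; made-up transition matrix, p_ij = Prob(s_{t+1} = j | s_t = i).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Toy values of E_t(x_{t+2} | s_{t+1} = j) for a 2-dimensional x.
v = [np.array([0.2, 0.4]), np.array([-0.1, 0.3])]

# Stacked expectations conditional on today's regime, two equivalent ways:
# a direct probability-weighted sum, and the (P kron I) matrix product.
w_loop = np.concatenate([P[i, 0] * v[0] + P[i, 1] * v[1] for i in range(2)])
w_kron = np.kron(P, np.eye(2)) @ np.concatenate(v)
```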
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1:

Prob(s_t = j | s_{t-1} = i) = p_{ij}.
if (s_t = 0 and μ_t > 0 and k_t − (1 − d_0) k_{t-1} − υ I_ss = 0):

  λ_t − 1/c_t
  c_t + k_t − θ_t k_{t-1}^α
  N_t − λ_t θ_t
  θ_t − e^{ρ ln(θ_{t-1}) + ε}
  λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0))
  I_t − (k_t − (1 − d_0) k_{t-1})
  μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)

if (s_t = 0 and μ_t = 0 and k_t − (1 − d_0) k_{t-1} − υ I_ss ≥ 0):

  λ_t − 1/c_t
  c_t + k_t − θ_t k_{t-1}^α
  N_t − λ_t θ_t
  θ_t − e^{ρ ln(θ_{t-1}) + ε}
  λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0))
  I_t − (k_t − (1 − d_0) k_{t-1})
  μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)

if (s_t = 1 and μ_t > 0 and k_t − (1 − d_1) k_{t-1} − υ I_ss = 0):

  λ_t − 1/c_t
  c_t + k_t − θ_t k_{t-1}^α
  N_t − λ_t θ_t
  θ_t − e^{ρ ln(θ_{t-1}) + ε}
  λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1))
  I_t − (k_t − (1 − d_1) k_{t-1})
  μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)

if (s_t = 1 and μ_t = 0 and k_t − (1 − d_1) k_{t-1} − υ I_ss ≥ 0):

  λ_t − 1/c_t
  c_t + k_t − θ_t k_{t-1}^α
  N_t − λ_t θ_t
  θ_t − e^{ρ ln(θ_{t-1}) + ε}
  λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1))
  I_t − (k_t − (1 − d_1) k_{t-1})
  μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)
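Each listing above pairs a regime with a gate (μ_t > 0 with the constraint holding with equality, or μ_t = 0 with the constraint slack). A minimal sketch of that try-each-branch-and-check-the-gate pattern, on a hypothetical scalar problem (the function and variable names are my own, not the paper's):

```python
# Occasionally binding constraint x >= lb with multiplier mu.
# Hypothetical problem: the unconstrained optimum is x = target.

def solve_with_constraint(target, lb):
    # Branch 1: constraint slack (mu = 0); keep it only if the gate x >= lb holds.
    x, mu = target, 0.0
    if x - lb >= 0.0:
        return x, mu
    # Branch 2: constraint binds (x = lb); the gate requires mu > 0.
    return lb, lb - target    # shadow value of the binding constraint
```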
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model.

x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components.

z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components.

X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components.

Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components.

κ: one less than the number of terms in the series representation (κ ≥ 0).

S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x)).

T: model evaluation points; Niederreiter sequence in the model variable ergodic region.

γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components.

G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components.

H: {γ, G} collects decision rule information.

{C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x + N_ε} → R^{N_x}.
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by a default ("normConvTol" → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
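The NestWhile fixed-point pattern of Algorithm 1 can be sketched in a few lines of Python (the prototype itself is Mathematica). The functional equation, collocation grid, and cubic interpolant below are illustrative stand-ins for the model equations and the Smolyak polynomials:

```python
import numpy as np

def nest_interp(update_rule, h0, colloc, test_pts, tol=1e-12, max_iter=200):
    """NestWhile-style loop: re-solve at collocation points, re-fit an
    interpolant, and stop when values at the test points settle
    (the notConvergedAt check of Algorithm 1)."""
    h = h0
    for _ in range(max_iter):
        vals = update_rule(h, colloc)                  # "doInterp" step
        h_new = np.poly1d(np.polyfit(colloc, vals, 3))
        if np.max(np.abs(h_new(test_pts) - h(test_pts))) < tol:
            return h_new
        h = h_new
    return h

# Toy functional equation h(x) = 0.5 x + 0.2 h(0.5 x); exact solution h(x) = (5/9) x.
update = lambda h, x: 0.5 * x + 0.2 * h(0.5 * x)
colloc = np.linspace(-1.0, 1.0, 7)
test_pts = np.linspace(-1.0, 1.0, 11)
h = nest_interp(update, np.poly1d([0.0]), colloc, test_pts)
```

Because the toy map is a contraction, the iteration converges to the exact linear rule regardless of the zero initial guess, mirroring the role the linear reference model solution plays as a starting point in the paper's algorithm.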
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ − 1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5 end
6 Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})}
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [B x + (I − F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x}]
3 G(x, ε) = [B x + (I − F)^{-1} φ ψ_c ; 0_{N_x}]
4 H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, {x_{t-1}, ε_t, z_t}, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc

input: (L, {x_{t-1}, ε_t, z_t}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_{t+1} = B x_t + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series:
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC

input: ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
output: {γ, G}

Algorithm 12: genInterpData

input: (C(x_{t-1}, ε_t))
1 Evaluate the model equations, obeying pre and post conditions.
output: x_t, or "failed"

Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
Reliably Computing Nonlinear Dynamic Stochastic Model Solutions: An Algorithm with Error Formulas

Gary S. Anderson∗

Friday 28th September 2018, 11:18am

Abstract

This paper provides a new technique for representing discrete time nonlinear dynamic stochastic time invariant maps. Using this new series representation, the paper shows how to augment a solution strategy based on projection methods with an additional set of constraints, thereby enhancing its reliability. The paper also provides general formulas for evaluating the accuracy of proposed solutions. The technique can readily accommodate models with occasionally binding constraints and regime switching. The algorithm uses Smolyak polynomial function approximation in a way which makes it possible to exploit a high degree of parallelism.

∗The analysis and conclusions set forth are those of the author and do not indicate concurrence by other members of the research staff or the Board of Governors. I would like to thank Luca Guerrieri, Christopher Gust, Hess Chung, Benjamin Johannsen, Robert Tetlow, and Etienne Gagnon for their comments and suggestions. I would also like to thank Stephanie Harrington for her help with document preparation.
Contents

1 Introduction 3

2 A New Series Representation For Bounded Time Series 5
  2.1 Linear Reference Models and a Formula for "Anticipated Shocks" 5
  2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series 6
  2.3 Assessing x_t Errors 8
    2.3.1 Truncation Error 8
    2.3.2 Path Error 9

3 Dynamic Stochastic Time Invariant Maps 10
  3.1 An RBC Example 10
  3.2 A Model Specification Constraint 13

4 Approximating Model Solution Errors 15
  4.1 Approximating a Global Decision Rule Error Bound 15
  4.2 Approximating Local Decision Rule Errors 16
  4.3 Assessing the Error Approximation 16

5 Improving Proposed Solutions 22
  5.1 Some Practical Considerations for Applying the Series Formula 22
  5.2 How My Approach Differs From the Usual Approach 22
  5.3 Algorithm Overview 23
    5.3.1 Single Equation System Case 23
    5.3.2 Multiple Equation System Case 24
  5.4 Approximating a Known Solution η = 1, U(c) = Log(c) 26
  5.5 Approximating an Unknown Solution η = 3 37

6 A Model with Occasionally Binding Constraints 44

7 Conclusions 52

A Error Approximation Formula Proof 56

B A Regime Switching Example 59

C Algorithm Pseudo-code 62
1 Introduction

This paper provides a new technique for the solution of discrete time dynamic stochastic infinite-horizon economies with bounded time-invariant solutions. It describes how to obtain decision rules satisfying a system of first-order conditions, "Euler equations," and how to assess their accuracy.1 For an overview of techniques for solving nonlinear models see (Judd, 1992; Christiano and Fisher, 2000; Doraszelski and Judd, 2004; Gaspar and Judd, 1997; Judd et al., 2014; Marcet and Lorenzoni, 1999; Judd et al., 2011; Maliar and Maliar, 2001; Binning and Maih, 2017; Judd et al., 2017b).
This new series representation for bounded time series, adapted from (Anderson, 2010),

x(x_{t-1}, ε_t) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ Z_{t+ν}(x_{t-1}, ε_t),

makes it possible to construct a series representation for any bounded time invariant discrete time map. As yet, the author has found no comparable use of a linear reference dynamical system for conveniently transforming one bounded infinite dimensional series into another.
It turns out that this representation provides a way to use Euler equation errors to approximate the error in components of decision rules.2 This paper provides approximation formulas explicitly relating the Euler equation errors to the components of errors in the decision rule.
In addition, the series representation provides exploitable constraints on model solutions that enhance an algorithm's reliability. Practitioners using projection methods and other techniques have often found models difficult to solve. They find that it can be difficult to find an initial guess for a solution so that the calculations converge, and that sometimes the convergent decision rules lack appropriate long run properties like stability. Furthermore, economists have an increasing interest in crafting models with occasionally binding constraints and regime switching, where unfortunately these problems seem to arise more often. The series representation overcomes these problems by making it easier to choose an initial guess that converges and by ensuring that the solutions have appropriate long run properties.
The numerical implementation of the algorithm builds upon the work of (Judd et al., 2011, 2017b). In particular, it uses the an-isotropic Smolyak method and the adaptive parallelotope method (Judd et al., 2014), and precomputes all integrals required by the calculations (Judd et al., 2017b). The algorithm should scale well to large models, as many of the algorithm's components can be computed in parallel, with limited amounts of information flowing between compute nodes.
The paper is organized as follows. Section 2 presents a useful new series representation for any bounded time series. Section 3 shows how to use this series representation to characterize time invariant maps. Section 4 provides formulas for approximating dynamic stochastic model errors in proposed solutions. Section 5 presents a new solution algorithm for

1The series representation presented here should also prove useful for applications based on value function iteration, but I defer treating this topic for future work.

2Others have also studied approximation error for dynamic stochastic models (Judd et al., 2017a; Santos and Peralta-Alva, 2005; Santos, 2000).
improving proposed solutions. Section 6 applies the technique to a model with occasionally binding constraints. Section 7 concludes.
2 A New Series Representation For Bounded Time Series

Since (Blanchard and Kahn, 1980), numerous authors have developed solution algorithms and formulas for solving linear rational expectations models. It turns out that the solutions associated with these linear saddle point models, by quantifying the potential impact of anticipated future shocks, can play a new and somewhat surprising role in characterizing the solutions for highly nonlinear models. In preparation for discussing how these calculations can be useful for representing time invariant maps, this section describes the relationship between linear reference models and bounded time series.
2.1 Linear Reference Models and a Formula for "Anticipated Shocks"

For any linear homogeneous L dimensional deterministic system that produces a unique stable solution,

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = 0,

inhomogeneous solutions

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c

can be computed as

x_t = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c,

where

φ = (H_0 + H_1 B)^{-1} and F = −φ H_1

(Anderson, 2010).
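As a numerical sketch, B, φ, and F can be recovered from H by solving the quadratic matrix equation H_{-1} + H_0 B + H_1 B^2 = 0 for its stable solvent. The plain fixed-point iteration below is a simple stand-in for the AMA computation of (Anderson, 2010), adequate for a well-behaved example; the 2×2 coefficients are made up for illustration:

```python
import numpy as np

def solve_linear_reference(Hm1, H0, H1, iters=300):
    """Stable solvent B of H_{-1} + H0 B + H1 B^2 = 0, then phi and F.

    Iterates B <- -(H0 + H1 B)^{-1} H_{-1}; converges here because the
    example is strongly diagonally dominant."""
    B = np.zeros_like(Hm1)
    for _ in range(iters):
        B = -np.linalg.solve(H0 + H1 @ B, Hm1)
    phi = np.linalg.inv(H0 + H1 @ B)
    F = -phi @ H1
    return B, phi, F

# Made-up 2x2 coefficients (not the paper's example matrix (3)).
Hm1 = np.array([[0.2, 0.0], [0.05, 0.1]])
H0 = -np.eye(2)
H1 = np.array([[0.5, 0.1], [0.0, 0.4]])
B, phi, F = solve_linear_reference(Hm1, H0, H1)
```

The returned B satisfies the quadratic equation to machine precision and has all eigenvalues inside the unit circle, as required for a stable solution.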
It will be useful to collect the components of this linear reference model solution, L ≡ {H, ψ_ε, ψ_c, B, φ, F}, for later use.3

Theorem 2.1. Consider any bounded discrete time path

x_t ∈ R^L with ‖x_t‖_∞ ≤ X̄ ∀ t > 0. (1)

Now, given the trajectories (1), define z_t as

z_t ≡ H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} − ψ_c − ψ_ε ε_t.

3In Section 2.2 I will use these matrices to construct a series representation for any bounded path, and in Section 5 to compute improvements in proposed model solutions.
One can then express the x_t solving

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c + z_t

using

x_t = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν} (2)

and

x_{t+k+1} = B x_{t+k} + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+k+ν+1} ∀ t, k ≥ 0.

Proof. See (Anderson, 2010).
Consequently, given a bounded time series (1) and a stable linear homogeneous system L, one can easily compute a corresponding series representation like (2) for x_t. Interestingly, the linear model H, the constant term ψ_c, and the impact of the stochastic shocks ψ_ε can be chosen rather arbitrarily; the only constraints are the existence of a saddle-point solution for the linear system and that all z-values fall in the row space of H_1. Note that since the eigenvalues of F are all less than one in magnitude, the formula supports the intuition that distant future shocks influence current conditions less than imminent shocks.

The transformation {x_t, x_{t+1}, x_{t+2}, …} → L(x_{t-1}, ε_t) → {z_t, z_{t+1}, z_{t+2}, …} is invertible, {z_t, z_{t+1}, z_{t+2}, …} → L(x_{t-1}, ε_t)^{-1} → {x_t, x_{t+1}, x_{t+2}, …}, and the inverse can be interpreted as giving the impact of "fully anticipated future shocks" on the path of x_t in a linear perfect foresight model. Note that the reference model is deterministic, so that the z_t(x_{t-1}, ε_t) functions will fully account for any stochastic aspects encountered in the models I analyze.

A key feature to note is that the series representation can accommodate arbitrarily complex time series trajectories so long as these trajectories are bounded. Later, this observation will give us some confidence in the robustness of the algorithm for constructing series representations for unknown families of functions satisfying complicated systems of dynamic non-linear equations.
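Theorem 2.1 is easy to verify numerically: pick a linear reference model, an arbitrary bounded path, back out the z_t, and reconstruct x_t from the (truncated) series. The scalar coefficients below are illustrative, chosen only to give a unique stable solution:

```python
import numpy as np

# Scalar linear reference model H_{-1} x_{t-1} + H0 x_t + H1 x_{t+1} = z_t
# with psi_c = psi_eps = 0; illustrative coefficients.
Hm1, H0, H1 = 0.2, -1.0, 0.5
B = min(np.roots([H1, H0, Hm1]), key=abs).real   # stable root
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1                                    # |F| < 1

# An arbitrary bounded path x_0, ..., x_{T+1} (here a sine wave).
T = 80
x = np.sin(0.7 * np.arange(T + 2))

# z_t implied by the path: z[i] corresponds to t = i + 1.
z = Hm1 * x[:-2] + H0 * x[1:-1] + H1 * x[2:]

# Reconstruct x_1 from the (truncated) series
# x_t = B x_{t-1} + sum_nu F^nu phi z_{t+nu}.
x1_series = B * x[0] + sum(F**nu * phi * z[nu] for nu in range(61))
```

Because |F| < 1, the truncation error decays geometrically, and the reconstructed value matches the original path to near machine precision.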
22 The Linear Reference Model in Action Arbitrary BoundedTime Series
Consider the following matrix constructed from ldquoalmostrdquo arbitrary coefficients[Hminus1 H0 H1
]=
01 05 -05 1 04 09 1 1 09
02 02 -05 7 04 08 3 2 06
01 -025 -15 21 047 19 21 21 39(3)
with ψc = ψε = 0 ψz = I I have chosen these coefficients so that the linear model hasa unique stable solution and since H1 is full rank its column space will span the columnspace for all non zero values for zt isin R34 It turns out that the algorithm is very forgivingabout the choice of L
4The flexibility in choosing L will become more important in later in the paper where we must choose alinear reference model in a context where we will have little guidance about what might make a good choice
The solution for this system is

B = [ -0.0282384  -0.0552487   0.00939369
      -0.0664679  -0.700462   -0.0718527
      -0.163638   -1.39868     0.331726 ]

φ = [  0.0210079   0.15727    -0.0531634
       1.20712    -0.0553003  -0.431842
       2.58165    -0.183521   -0.578227 ]

F = [ -0.381174   -0.223904    0.0940684
      -0.134352   -0.189653    0.630956
      -0.816814   -1.00033     0.0417094 ]
It is straightforward to apply 2 to the following three bounded families of time series paths:

x^1_t = (x_{t-1,1} + ε_t) D_π(t),  where D_π(t) gives the t-th digit of π,

x^2_t = x_{t-1} (1 + ε_t)^2 (-1)^t,

x^3_t = ε̃_t,

where the ε̃_t are a sequence of pseudo-random draws from the uniform distribution U(-4, 4) produced subsequent to the selection of a random seed, randomseed(x_{t-1} + ε_t).
The three panels in Figure 1 display these three time series generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set of trajectories oscillates between two values determined by a nonlinear function of the initial conditions x_{t-1} and the shock. The third set of trajectories characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model.5
Though unintuitive and perhaps lacking a compelling interpretation, the values for z_t exist, provide a series for the x_t, and are easy to compute. One can apply the calculations for any given initial condition to produce a z series exactly replicating any bounded set of trajectories. Consequently, these numerical linear algebra calculations can easily transform a z_t series into an x_t series and vice versa. Since this can be done for any path starting from any particular initial condition, any discrete time invariant map that generates families of bounded paths has a series representation.
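The z-to-x round trip can be sketched in a few lines. The sketch below is a scalar analogue of the multivariate construction in the text: the reference-model coefficients and the bounded path are hypothetical, chosen only so that a stable saddle-point solution exists. In the scalar case the stable root b, the weight φ = 1/(h_0 + h_1 b), and the decay factor f = -h_1 φ play the roles of B, φ, and F.

```python
import numpy as np

# Scalar linear reference model:  h_m1*x[t-1] + h0*x[t] + h1*x[t+1] = z[t]
# (illustrative coefficients, chosen so a stable saddle-point solution exists)
h_m1, h0, h1 = 0.2, 1.0, 0.3

# Stable root b of h_m1 + h0*b + h1*b**2 = 0 and series weights phi, f
roots = np.roots([h1, h0, h_m1])
b = roots[np.abs(roots) < 1][0]
phi = 1.0 / (h0 + h1 * b)
f = -h1 * phi                       # |f| < 1, so distant terms matter less

# An "almost arbitrary" bounded path: pseudo-random draws, as in the text
rng = np.random.default_rng(0)
x = rng.uniform(-4, 4, size=60)
x_prev = 0.5                        # initial condition x_{t-1}

# z[t] implied by the path and the reference model
xs = np.concatenate(([x_prev], x))
z = h_m1 * xs[:-2] + h0 * xs[1:-1] + h1 * xs[2:]

# Reconstruct x[0] from the series  x_t = b x_{t-1} + sum_s f^s phi z_{t+s}
x0_series = b * x_prev + sum(f**s * phi * z[s] for s in range(len(z)))
print(abs(x0_series - x[0]))        # tiny: the series replicates the path
```

Running the two transformations in sequence reproduces the arbitrary path to machine precision, which is the invertibility claim above in miniature.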
5 Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
[Figure 1: three panels plotting 50 periods of the paths titled "Pi Digits Path", "Oscillatory Path", and "Pseudo Random Path".]

Figure 1: Arbitrary Bounded Time Series Paths
2.3 Assessing x_t Errors

The formula 2 computes the impact on the current state of fully anticipated future shocks. It characterizes the impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of z_t, or one might have imprecise values of z_t along the path.
2.3.1 Truncation Error
The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating x_t by truncating the series at a finite number of terms:
x̂_t ≡ B x_{t-1} + φψ_ε ε_t + (I - F)^{-1} φψ_c + Σ_{s=0}^{k} F^s φ z_{t+s}
We can bound the series approximation truncation error. Since

Σ_{s=k+1}^{∞} F^s φψ_z = (I - F)^{-1} F^{k+1} φψ_z,

we have

‖x(x_{t-1}, ε) - x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I - F)^{-1} F^{k+1} φψ_z‖_∞ (‖H_{-1}‖_∞ + ‖H_0‖_∞ + ‖H_1‖_∞) ‖X̄‖_∞
Figure 2 shows that this truncation error can be a very conservative measure of theaccuracy of the truncated series The orange line represents the computed approximationof the infinity norm of the difference between xt from the full series and a truncated seriesfor different lengths of the truncation The blue line shows the infinity norm of the actualdifference between the xt computed using the full series and the value obtained using atruncated series The series requires only the first 20 terms to compute the initial value ofthe state vector to high precision
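The conservativeness of the truncation bound is easy to reproduce in a scalar analogue. The coefficients below are illustrative (not the paper's reference model); the bound replaces each omitted term by its worst case |f|^s |φ| max|z|, so it always weakly exceeds the actual truncation error.

```python
import numpy as np

# Scalar analogue of the bound:
#   |x - x_hat(k)| <= |f|^(k+1) / (1 - |f|) * |phi| * max|z|
h_m1, h0, h1 = 0.2, 1.0, 0.3          # illustrative reference-model coefficients
roots = np.roots([h1, h0, h_m1])
b = roots[np.abs(roots) < 1][0]
phi = 1.0 / (h0 + h1 * b)
f = -h1 * phi

rng = np.random.default_rng(1)
z = rng.uniform(-1, 1, size=200)      # a bounded z path
terms = phi * f ** np.arange(len(z)) * z
x_full = terms.sum()                  # "exact" value using (almost) all terms

for k in (0, 5, 10, 20):
    actual = abs(x_full - terms[: k + 1].sum())       # true truncation error
    bound = abs(f) ** (k + 1) / (1 - abs(f)) * abs(phi) * np.abs(z).max()
    assert actual <= bound            # the bound is conservative
    print(k, actual, bound)
```

As in Figure 2, the bound decays geometrically with k but typically overstates the actual error by a sizable factor, since the omitted z values rarely all sit at their extreme.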
[Figure 2: plot titled "Truncation Error Bound and Actual Error", comparing the bound and the actual error across truncation lengths.]

Figure 2: x_t Error Approximation Versus Actual Error
2.3.2 Path Error
We can assess the impact of perturbed values for z_t by computing the maximum discrepancy ‖Δz_t‖ impacting the z_t and applying the partial sum formula. One can approximate x_t using

x̂_t ≡ B x_{t-1} + φψ_ε ε_t + (I - F)^{-1} φψ_c + Σ_{s=0}^{∞} F^s φ (z_{t+s} + Δz_{t+s})
Note yet again that for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for realizations that are more temporally distant. However, we can conservatively bound the series approximation error by using the largest possible Δz_t in the formula:
‖x(x_{t-1}, ε) - x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I - F)^{-1} φψ_z‖_∞ ‖Δz_t‖_∞
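The path-error bound can be checked the same way in a scalar sketch (illustrative coefficients again, not the paper's model): perturbing every z along the path by at most max|Δz| can move x by at most |φ|/(1 - |f|) times that maximum.

```python
import numpy as np

# Scalar check of the path-error bound: |x - x_hat| <= |phi|/(1-|f|) * max|dz|
h_m1, h0, h1 = 0.2, 1.0, 0.3          # illustrative reference-model coefficients
roots = np.roots([h1, h0, h_m1])
b = roots[np.abs(roots) < 1][0]
phi = 1.0 / (h0 + h1 * b)
f = -h1 * phi

rng = np.random.default_rng(2)
z = rng.uniform(-1, 1, size=300)      # a bounded z path
dz = rng.uniform(-1e-3, 1e-3, size=300)   # imprecision along the path

w = phi * f ** np.arange(len(z))      # series weights f^s * phi
actual = abs(w @ (z + dz) - w @ z)    # effect of the perturbed path on x_t
bound = abs(phi) / (1 - abs(f)) * np.abs(dz).max()
assert actual <= bound
print(actual, bound)
```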
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).6 As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.
So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that give us an approximation for the model solutions. This section presents a familiar RBC model as an example.7
3.1 An RBC Example

Consider the model described in (Maliar and Maliar 2005):8
max u(c_t) + E_t Σ_{s=1}^{∞} δ^s u(c_{t+s})

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1}),  f(k) = k^α

u(c) = (c^{1-η} - 1) / (1 - η)
The first order conditions for the model are

1/c_t^η = αδ k_t^{α-1} E_t (θ_{t+1} / c_{t+1}^η)

c_t + k_t = θ_t k_{t-1}^α

θ_t = θ_{t-1}^ρ e^{ε_t}
It is well known that when η = d = 1 we have a closed form solution (Lettau 2003):

x_t(x_{t-1}, ε_t) ≡ D(x_{t-1}, ε_t) ≡ [ c_t(x_{t-1}, ε_t)
                                        k_t(x_{t-1}, ε_t)
                                        θ_t(x_{t-1}, ε_t) ] = [ (1 - αδ) θ_t k_{t-1}^α
                                                                αδ θ_t k_{t-1}^α
                                                                θ_{t-1}^ρ e^{ε_t} ]
6 Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter 2000; Koop et al. 1996).
7 Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and approximate the errors for these models as well.
8Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency
For mean zero i.i.d. ε_t, we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):
D̄(x_{t+k}) ≡ [ E_t(c_{t+k+1} | θ_{t+k}, k_{t+k})
               E_t(k_{t+k+1} | θ_{t+k}, k_{t+k})
               E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k}) ] = [ (1 - αδ) k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ
                                                       αδ k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ
                                                       e^{σ²/2} θ_{t+k}^ρ ]
For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule (D) and the decision rule conditional expectation (D̄) can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t):

x_t(x_{t-1}, ε_t) = D(x_{t-1}, ε_t)
x_{t+k+1}(x_{t-1}, ε_t) = D̄(x_{t+k}(x_{t-1}, ε_t))
and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)):

z_{t+k} ≡ H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}
Formula 2 requires that

x(x_{t-1}, ε_t) = B [ c_{t-1}
                      k_{t-1}
                      θ_{t-1} ] + φψ_ε ε_t + (I - F)^{-1} φψ_c + Σ_{s=0}^{∞} F^s φ z_{t+s}(k_{t-1}, θ_{t-1}, ε_t)
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

α → 0.36,  δ → 0.95,  ρ → 0.95,  σ → 0.01

we have

[ c_ss      [ 0.360408        [ k_{t-1}      [ 0.18
  k_ss    =   0.187324          θ_{t-1}   =    1.1
  θ_ss ]      1.001    ],       ε_t     ]      0.01 ]        (4)
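With the closed form in hand, the conditional expectations path is cheap to generate. The sketch below uses the parameter values above; as a simplification it sets future shocks to zero after the initial one (a deterministic stand-in for the expectations path that ignores the small e^{σ²/2} correction), so it is an illustration, not the paper's exact construction.

```python
import numpy as np

alpha, delta, rho = 0.36, 0.95, 0.95

def decision_rule(k_prev, theta_prev, eps):
    # Closed-form rule for eta = d = 1:
    #   theta_t = theta_{t-1}^rho * exp(eps_t)
    #   c_t = (1 - alpha*delta) * theta_t * k_{t-1}^alpha
    #   k_t =      alpha*delta  * theta_t * k_{t-1}^alpha
    theta = theta_prev ** rho * np.exp(eps)
    y = theta * k_prev ** alpha
    return (1 - alpha * delta) * y, alpha * delta * y, theta

# Path forward from the initial values in Equation 4; after the initial
# shock, future eps are set to zero.
k, theta, eps = 0.18, 1.1, 0.01
for t in range(100):
    c, k, theta = decision_rule(k, theta, eps)
    eps = 0.0

# The path settles near the deterministic steady state
k_ss = (alpha * delta) ** (1.0 / (1.0 - alpha))
print(c, k, theta, k_ss)
```

The simulated capital stock converges to (αδ)^{1/(1-α)} ≈ 0.187, consistent with the steady-state values reported above (which carry the small stochastic correction).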
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation 4.
By including enough terms, we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude for the error B_n, shown in red, is again very pessimistic compared to the actual error Z_n = ‖x_t - x̂_t‖_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine-precision accurate value for the time t state vector.
Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves both the approximation Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter but still pessimistic bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
[Figure 3: paths of the model variables over 30 periods.]

Figure 3: Model variable values
[Figure 4: plot titled "RBC Model Bounds versus Actual", showing B_n and Z_n across truncation lengths.]

Figure 4: RBC Known Solution Truncation Approximate Error Versus Actual, "Almost Arbitrary" Linear Reference Model
[Figure 5: plot titled "RBC Model Bounds versus Actual", showing B_n and Z_n across truncation lengths.]

Figure 5: RBC Known Solution Truncation Approximate Error Versus Actual, Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form
h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h^det_{i0}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [h^det_{ij}(x_{t-1}, x_t, ε_t) h^nondet_{ij}(x_{t+1})] = 0
This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as
h^det_{10}(·) = 1/c_t^η,  h^det_{11}(·) = αδ k_t^{α-1},  h^nondet_{11}(·) = E_t (θ_{t+1} / c_{t+1}^η)

h^det_{20}(·) = c_t + k_t - θ_t k_{t-1}^α,  h^det_{21}(·) = 0

h^det_{30}(·) = ln θ_t - (ρ ln θ_{t-1} + ε_t),  h^det_{31}(·) = 0
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c_t

substituting N_{t+1} for θ_{t+1}/c_{t+1} in the first equation, and linearizing the RBC model about the ergodic mean.
This leads to

[H_{-1} H_0 H_1] =
  0   0         0   0          | -7.6986  9.47963  0  0          | 0  0  -0.999  0
  0  -1.05263   0   0          |  1       1        0  -0.547185  | 0  0   0      0
  0   0         0   0          |  7.7063  0        1  -2.77463   | 0  0   0      0
  0   0         0  -0.949953   |  0       0        0   1         | 0  0   0      0

with

ψ_ε = [ 0
        0
        1
        0 ],  ψ_z = I
These coefficients produce a unique stable linear solution
B = [ 0   0.692632  0  0.342028
      0   0.36      0  0.177771
      0  -5.33763   0  0
      0   0         0  0.949953 ]

φ = [ -0.0444237   0.658     0  0.360048
       0.0444237   0.342     0  0.187137
       0.342342   -5.07075   1  0
       0           0         0  1 ]

F = [ 0  0  -0.0443793  0
      0  0   0.0443793  0
      0  0   0.342      0
      0  0   0          0 ]

ψ_c = [ -3.7735
        -0.197184
         2.77741
         0.0500976 ]
Recomputing the truncation errors with the expanded model produces nearly identical resultsfor variable approximation errors
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.
Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

Then, with

E_t x_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)

x_t(x_{t-1}, ε_t) ∈ R^L,  ‖x_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t))
e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

Then

‖x_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I - F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2

which can be approximated by

max_{x_{t-1}, ε_t} ‖(I - F)^{-1} φ e^p_t(x_{t-1}, ε)‖_2
For proof see Appendix A.

9 This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) - x^p(x_{t-1}, ε_t)

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in 2 to approximate e:

X^p(x) ≡ E[x^p(x, ε)]

x_{t+k+1} = X^p(x_{t+k}),  k = 1, ..., K

We can define functions z^p_t by

z^p_t ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t)
z^p_{t+k} ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)

ê(x_{t-1}, ε_t) ≡ Σ_{s=0}^{K} F^s φ z^p_{t+s}
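In a linear scalar case the local error formula can be checked in closed form: feeding the residuals of a deliberately inaccurate rule through the series weights recovers the rule's error exactly. The coefficients and the perturbed rule below are illustrative, and the sign of ê depends on the sign convention chosen for the residual M.

```python
import numpy as np

# Scalar linear model  h_m1 x_{t-1} + h0 x_t + h1 x_{t+1} = 0;
# the exact rule is x_t = b x_{t-1} with b the stable root.
h_m1, h0, h1 = 0.2, 1.0, 0.3
roots = np.roots([h1, h0, h_m1])
b = roots[np.abs(roots) < 1][0]
phi = 1.0 / (h0 + h1 * b)
f = -h1 * phi

bp = b + 0.05                     # deliberately inaccurate proposed rule
x_prev = 1.0

# Iterated path under the proposed rule and its residuals z^p_{t+k}
K = 200
xp = x_prev * bp ** np.arange(K + 3)          # x^p_{t-1+j}, j = 0, 1, ...
zp = h_m1 * xp[:-2] + h0 * xp[1:-1] + h1 * xp[2:]

e_actual = (b - bp) * x_prev                  # x_t - x^p_t
e_hat = sum(f ** s * phi * zp[s] for s in range(K))
print(e_actual, e_hat)            # same magnitude; sign set by residual convention
```

Here |ê| equals |x_t - x^p_t| to machine precision, which is why the regressions in the next subsection deliver β near one.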
4.3 Assessing the Error Approximation

This section reports on the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.
I generate 10 other settings for the model parameters by choosing parameters from thefollowing ranges
0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015
producing the following 10 model specifications
α       δ         η  ρ         σ
0.35    0.92875   1  0.957188  0.00875
0.275   0.9725    1  0.906875  0.01375
0.375   0.8675    1  0.924375  0.01125
0.225   0.94625   1  0.891563  0.00625
0.325   0.91125   1  0.944063  0.006875
0.2625  0.922187  1  0.98125   0.011875
0.3625  0.887187  1  0.85875   0.014375
0.2125  0.965938  1  0.965938  0.009375
0.3125  0.860938  1  0.878438  0.008125
0.2375  0.904688  1  0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.
I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i
Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.
Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.
10 I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
[Figure 6: four scatter panels of (σ², R²) pairs from the c error regressions, one panel for each of K = 0, 1, 10, and 30.]

Figure 6: c_t (σ², R²)
[Figure 7: four scatter panels of (σ², R²) pairs from the k error regressions, one panel for each of K = 0, 1, 10, and 30.]

Figure 7: k_t (σ², R²)
[Figure 8: four scatter panels of (α, β) regression coefficient pairs from the c error regressions, one panel for each of K = 0, 1, 10, and 30.]

Figure 8: c_t (α, β)
[Figure 9: four scatter panels of (α, β) regression coefficient pairs from the k error regressions, one panel for each of K = 0, 1, 10, and 30.]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given (x_{t-1}, ε_t) this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c)

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))
represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus, we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
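The gating convention can be sketched concretely. The code below is a hypothetical illustration (the names, the toy law of motion, and the constraint are mine, not the paper's implementation): each candidate system carries a precondition b_i, a solver for M_i, and a post-condition c_i, and the gatekeeper d requires exactly one acceptable solution.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GatedSystem:
    pre: Callable[[float, float], bool]     # b_i(x_{t-1}, eps): try this system?
    solve: Callable[[float, float], float]  # solves M_i(...) = 0 for x_t
    post: Callable[[float], bool]           # c_i(x_t): accept the solution?

def d_select(systems, x_prev, eps):
    """The gatekeeper d: keep solutions passing both gates; require uniqueness."""
    accepted = []
    for s in systems:
        if not s.pre(x_prev, eps):          # b_i false: no need to solve system i
            continue
        x = s.solve(x_prev, eps)
        if s.post(x):                       # c_i false: discard unacceptable x_t
            accepted.append(x)
    assert len(accepted) == 1, "model must yield one and only one x_t"
    return accepted[0]

systems = [
    # Unconstrained regime: x_t = 0.9 x_{t-1} + eps, acceptable only if x_t >= 0
    GatedSystem(lambda xp, e: True, lambda xp, e: 0.9 * xp + e, lambda x: x >= 0),
    # Constrained regime: the constraint binds and x_t = 0
    GatedSystem(lambda xp, e: 0.9 * xp + e < 0, lambda xp, e: 0.0, lambda x: True),
]
print(d_select(systems, 1.0, 0.1), d_select(systems, 1.0, -0.95))  # prints: 1.0 0.0
```

The complementary-slackness structure of occasionally binding constraints maps naturally onto this pattern: the interior system's post-condition rejects solutions that violate the constraint, and the binding system's precondition activates exactly then.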
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Section B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on 2, linking the conditional expectations for future values in the proposed solution with the time t value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.
There are a number of new steps. One must specify a linear reference model. One must also choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error; more terms corresponds to more accuracy when the decision rule is accurate, but more terms is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution. If the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case
For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t)
Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t)

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)]

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.
We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [ x_{t-1}
                      x^p(x_{t-1}, ε)
                      X^p(x^p(x_{t-1}, ε)) ] + ψ_ε ε_t + ψ_c

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)]
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

  x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0

• Using the z^p(x_{t-1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

  x_t = B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c + Σ_{s=0}^{∞} F^s φ Z(x_{t+s})

  E[x_{t+1}] = B x_t + (I - F)^{-1} φψ_c + Σ_{s=1}^{∞} F^{s-1} φ Z(x_{t+s})

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p'}(x_{t-1}, ε), z^{p'}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p'}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p'}(x_{t-1}, ε). Repeat loop.
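The outer fixed-point structure of the loop, guess a rule, solve the time t equations against the implied expectations, update, repeat, can be illustrated on the η = d = 1 RBC model, where the iteration collapses to a scalar map. This sketch omits the Smolyak interpolation and the series constraint entirely; it only shows the iteration converging to the known answer.

```python
# Time iteration on the eta = d = 1 RBC model.  With c_t = (1-s) y_t and
# k_t = s y_t, solving today's Euler equation against tomorrow's guessed
# saving rate s reduces to the scalar update  s' = alpha*delta / (1 - s + alpha*delta),
# whose stable fixed point is the known solution s = alpha*delta.
alpha, delta = 0.36, 0.95

s = 0.5                        # initial guess for the saving rate
for _ in range(200):
    s = alpha * delta / (1.0 - s + alpha * delta)

print(s, alpha * delta)        # the iteration converges to alpha*delta
```

In the full algorithm the "update" step is the collocation solve over the Smolyak grid, and the series-based z constraints keep the iterates bounded in exactly the situations where plain iteration of this kind can drift.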
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F} : The linear reference model

x^p(x_{t-1}, ε_t) : The proposed decision rule, x_t components

z^p(x_{t-1}, ε_t) : The proposed decision rule, z_t components

X^p(x_{t-1}) : The proposed decision rule conditional expectations, x_t components

Z^p(x_{t-1}) : The proposed decision rule conditional expectations, z_t components

κ : One less than the number of terms in the series representation (κ ≥ 0)

S : Smolyak an-isotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))

T : Model evaluation points, a Niederreiter sequence in the model variable ergodic region

γ : {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components

G : collects the decision rule conditional expectations components

H : {γ, G} collects decision rule information

{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c) : The equation system, R^{3N_x + N_ε + N_x} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
[Figure 10: function call tree involving nestInterp, doInterp, genFindRootFuncs, genFindRootWorker, genZsForFindRoot, genLilXkZkFunc, fSumC, genXtOfXtm1, genXtp1OfXt, makeInterpFuncs, genInterpData, evaluateTriple, and interpDataToFunc.]

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs: The function can use 2 or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
25
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression 2.
makeInterpFuncs: Solves the model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, u(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and to compare the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11: two scatter panels titled "Ergodic Values for K and θ"; the left shows the simulated (k_t, θ_t) pairs, the right the same points after the parallelotope transformation.]

Figure 11: The Ergodic Values for k_t, θ_t
X̄ = [ 0.18853
       1.00466
       0       ],  σ_X = [ 0.0112443
                           0.0387807
                           0.01      ]

V = [ -0.707107  -0.707107  0
      -0.707107   0.707107  0
       0          0         1 ]

maxU = [  3.02524
          0.199124
          3        ],  minU = [ -3.16879
                                -0.17629
                                -3       ]
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
k_i       θ_i       ε_i        (i = 1, ..., 30)
0.16818   0.942376  -0.0251104
0.210392  1.08282    0.0232085
0.181393  0.968212  -0.0090041
0.1949    1.017      0.0111288
0.188668  1.00876   -0.0210838
0.20145   1.06007    0.0272351
0.17245   0.945462  -0.00497752
0.214179  1.0876     0.0191819
0.195059  1.03442   -0.0130307
0.182278  0.983104   0.00307563
0.208273  1.06025   -0.029137
0.166906  0.916853   0.00106234
0.20192   1.05244   -0.0311503
0.187205  1.00787    0.0171686
0.213199  1.08502   -0.015044
0.17147   0.942887   0.0252218
0.179847  0.985589  -0.00699081
0.194563  1.03016    0.00911549
0.165563  0.915543  -0.0230971
0.20705   1.05852    0.0211952
0.173322  0.954406  -0.0110174
0.2136    1.1016     0.00508892
0.184601  0.986988  -0.0271237
0.198833  1.03324    0.0131421
0.20721   1.07594   -0.0190705
0.166932  0.928749   0.0292484
0.192926  1.0059    -0.000296424
0.179118  0.958169   0.0141487
0.172672  0.949185  -0.0180639
0.212949  1.09638    0.030255

Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1)
c_i       k_i       θ_i        (i = 1, ..., 30)
0.318386  0.16551   0.920204
0.413447  0.214841  1.10215
0.341848  0.177697  0.960777
0.375209  0.195014  1.02742
0.35634   0.185242  0.987273
0.400686  0.208232  1.08481
0.329563  0.171293  0.943165
0.416315  0.216325  1.1025
0.372502  0.193618  1.01959
0.351946  0.182927  0.98707
0.384948  0.200107  1.02813
0.31841   0.165481  0.92202
0.377048  0.196011  1.01878
0.368995  0.19178   1.02495
0.402167  0.209001  1.06547
0.339185  0.176282  0.97149
0.347449  0.180596  0.979286
0.378776  0.196859  1.03785
0.308332  0.160276  0.896658
0.401836  0.208829  1.07707
0.330982  0.172039  0.945622
0.415597  0.215941  1.10137
0.343927  0.178809  0.960659
0.384305  0.199732  1.04488
0.393281  0.204401  1.05291
0.332894  0.173006  0.962198
0.36484   0.189639  1.00262
0.345376  0.179513  0.974651
0.326232  0.169582  0.933652
0.422656  0.219618  1.12227

Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1)
ln|e_{c,i}|  ln|e_{k,i}|  ln|e_{θ,i}|   (i = 1, ..., 30)
-6.86423   -7.61023   -6.2776
-6.77469   -7.25654   -6.193
-8.27193   -9.26988   -7.90952
-9.53366   -9.94641   -9.07674
-9.70109  -11.0692   -10.8193
-7.01131   -7.52677   -6.4226
-8.62144   -9.43568   -8.12434
-6.93069   -7.36841   -6.2991
-8.82105   -9.39136   -7.96867
-9.48549   -9.94897   -9.04695
-6.83337   -7.45765   -6.41963
-11.193   -13.9426    -8.33167
-7.13458   -7.70834   -6.51919
-9.46667  -10.6151   -10.154
-7.13317   -7.97018   -6.71715
-6.76168   -7.44531   -6.20438
-9.46294  -10.7345    -8.60101
-8.99962   -9.32606   -8.36754
-6.5744    -7.28112   -6.0606
-7.26443   -7.74753   -6.65645
-7.95047   -8.75065   -7.34261
-7.90465   -8.02499   -7.40682
-7.78668   -8.88177   -7.3262
-8.44035   -8.86441   -7.83514
-7.21479   -7.97368   -6.58639
-6.40774   -7.09877   -5.86537
-11.8365  -11.1009   -11.8397
-7.81106   -8.43094   -7.01803
-7.28745   -8.06534   -6.74087
-6.33557   -6.84989   -5.76803

Table 4: RBC Known Solution Actual Errors, d = (1,1,1)
30
(ln|ê_c|, ln|ê_k|, ln|ê_θ|) at evaluation points 1, …, 30
[30 rows of numeric values]

Table 5: RBC Known Solution, Error Approximations, d=(1,1,1)
(k_1, θ_1, ε_1) … (k_30, θ_30, ε_30)
[30 rows of numeric values]

Table 6: RBC Known Solution, Model Evaluation Points, d=(2,2,2)
(k_1, θ_1, ε_1) … (k_30, θ_30, ε_30)
[30 rows of numeric values]

Table 7: RBC Known Solution, Values at Evaluation Points, d=(2,2,2)
(ln|e_c|, ln|e_k|, ln|e_θ|) at evaluation points 1, …, 30
[30 rows of numeric values]

Table 8: RBC Known Solution, Actual Errors, d=(2,2,2)
(ln|ê_c|, ln|ê_k|, ln|ê_θ|) at evaluation points 1, …, 30
[30 rows of numeric values]

Table 9: RBC Known Solution, Error Approximations, d=(2,2,2)
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  -6.51367    -12.2771    -16.8947
0          -6.0793     -6.26833    -6.29843
1          -6.00805    -6.24202    -6.49835
2          -6.52219    -7.43876    -7.04785
3          -6.57578    -8.03864    -8.32589
4          -6.62831    -9.1522     -9.49314
5          -6.77305    -10.5516    -10.4247
6          -6.38499    -11.6399    -11.455
7          -6.63849    -11.3627    -13.0213
8          -6.38735    -11.7288    -13.9976
9          -6.46759    -11.9826    -15.3372
10         -6.45966    -12.1345    -15.4157
10         -6.77776    -11.5025    -15.8211
15         -6.67778    -12.4593    -15.8899
20         -6.40802    -11.7296    -17.1652
25         -6.53265    -12.1441    -17.1518
30         -6.81949    -11.6097    -16.3748
35         -6.33508    -12.1446    -16.6963
40         -6.52442    -11.4042    -15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
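The pattern in Table 10 — approximation error shrinking as the series length K grows — can be illustrated with a scalar sketch. Everything below (the H coefficients and the forcing path z) is an illustrative assumption rather than the paper's RBC model; B, φ, and F are the scalar analogues of the objects in the series representation of Appendix A.

```python
import numpy as np

# Scalar analogue of z_t = H_m x_{t-1} + H_0 x_t + H_p x_{t+1};
# these coefficient values are assumptions for the sketch only.
H_m, H_0, H_p = 0.2, 1.0, 0.3

# B: stable root of H_m + H_0*B + H_p*B^2 = 0; then phi and F follow.
B = (-H_0 + np.sqrt(H_0**2 - 4.0 * H_p * H_m)) / (2.0 * H_p)
phi = 1.0 / (H_0 + H_p * B)
F = -phi * H_p                      # |F| < 1 for a stable model

z = 0.9 ** np.arange(200) * 0.01    # assumed bounded forcing path
x_prev = 0.5                        # assumed initial condition

def x_truncated(K):
    """Truncate the series x_t = B x_{t-1} + sum_{v=0}^{K} F^v phi z_{t+v}."""
    v = np.arange(K + 1)
    return B * x_prev + np.sum(F**v * phi * z[v])

exact = x_truncated(199)            # effectively the infinite sum
errors = [abs(x_truncated(K) - exact) for K in (0, 2, 5, 10)]
print(errors)                       # shrinks as K grows, as in Table 10
```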
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
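The "known solution" benchmark behind the earlier tables requires a model whose decision rule is available in closed form. A minimal sketch of producing component-wise ln|e| entries of this kind, assuming the familiar Brock–Mirman case (log utility, full depreciation) — the standard source of closed-form RBC decision rules — with illustrative parameter values and a deliberately perturbed "proposed" rule:

```python
import numpy as np

# Brock-Mirman closed form (log utility, full depreciation); alpha, delta,
# and the evaluation point are illustrative assumptions for this sketch.
alpha, delta = 0.36, 0.95

def exact_rule(k_lag, theta):
    y = theta * k_lag**alpha
    return (1.0 - alpha * delta) * y, alpha * delta * y   # (c_t, k_t)

def proposed_rule(k_lag, theta):
    c, k = exact_rule(k_lag, theta)
    return c * 1.001, k * 0.999          # deliberately slightly off

k_lag, theta = 0.18, 1.02
c_e, k_e = exact_rule(k_lag, theta)
c_p, k_p = proposed_rule(k_lag, theta)

# Component-by-component log absolute errors, analogous to the table entries
log_err = [np.log(abs(c_p - c_e)), np.log(abs(k_p - k_e))]
print(log_err)
```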
(k_1, θ_1, ε_1) … (k_30, θ_30, ε_30)
[30 rows of numeric values]

Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1)
(c_1, k_1, θ_1) … (c_30, k_30, θ_30)
[30 rows of numeric values]

Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1)
(ln|ê_c|, ln|ê_k|, ln|ê_θ|) at evaluation points 1, …, 30
[30 rows of numeric values]

Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1)
(k_1, θ_1, ε_1) … (k_30, θ_30, ε_30)
[30 rows of numeric values]

Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
[30 rows of decision-rule values]

Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
(ln|ê_c|, ln|ê_k|, ln|ê_θ|) at evaluation points 1, …, 30
[30 rows of numeric values]

Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches[11] (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} - 1) / (1 - η)
L = u(c_t) + E_t [ Σ_{t=0}^∞ δ^t u(c_{t+1})
    + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t - ((1 - d) k_{t-1} + θ_t f(k_{t-1})))
    + δ^t μ_t ((k_t - (1 - d) k_{t-1}) - υ I_ss) ]
so that we can impose

I_t = (k_t - (1 - d) k_{t-1}) ≥ υ I_ss

The first order conditions become

1/c_t^η + λ_t

λ_t - δ λ_{t+1} θ_{t+1} α k_t^{α-1} - δ(1 - d) λ_{t+1} + μ_t - δ(1 - d) μ_{t+1}
For example, the Euler equations for the neoclassical growth model can be written as

[11] The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) = 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)

if (μ_t = 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) ≥ 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)
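A sketch of evaluating the regime-dependent residual system above at a candidate point. The parameter values, the candidate values, and the treatment of the t+1 terms as given numbers are all illustrative assumptions; a real solver would supply them from the approximated decision rule.

```python
import numpy as np

# Illustrative parameters; not calibrated to the paper's examples.
alpha, delta, d, upsilon, rho = 0.36, 0.95, 0.10, 0.2, 0.95

def residuals(c, I, k, lam, N, mu, k_lag, theta, theta_lag, eps,
              N_next, lam_next, mu_next, I_ss):
    """Residuals of the seven-equation system above at a candidate point."""
    slack = k - (1.0 - d) * k_lag - upsilon * I_ss
    r = np.array([
        lam - 1.0 / c,
        c + I - theta * k_lag**alpha,
        N - lam * theta,
        theta - np.exp(rho * np.log(theta_lag) + eps),
        lam + mu - (alpha * k**(alpha - 1.0) * delta * N_next
                    + delta * (1.0 - d) * lam_next
                    + delta * (1.0 - d) * mu_next),
        I - (k - (1.0 - d) * k_lag),
        mu * slack,          # complementary slackness
    ])
    return r, slack

# Slack regime candidate: mu = 0 and the constraint holds with slack.
r, slack = residuals(c=1.0, I=0.4, k=3.5, lam=1.0, N=1.0, mu=0.0,
                     k_lag=3.5, theta=1.0, theta_lag=1.0, eps=0.0,
                     N_next=1.0, lam_next=1.0, mu_next=0.0, I_ss=0.4)
print(r, slack)
```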
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
(k_1, θ_1, ε_1) … (k_30, θ_30, ε_30)
[30 rows of numeric values]

Table 17: Occasionally Binding Constraints, Model Evaluation Points, d=(1,1,1)
(c_1, I_1, k_1) … (c_30, I_30, k_30)
[30 rows of numeric values]

Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d=(1,1,1)
(ln|ê_c|, ln|ê_k|, ln|ê_θ|) at evaluation points 1, …, 30
[30 rows of numeric values]

Table 19: Occasionally Binding Constraints, Error Approximations, d=(1,1,1)
(k_1, θ_1, ε_1) … (k_30, θ_30, ε_30)
[30 rows of numeric values]

Table 20: Occasionally Binding Constraints, Model Evaluation Points, d=(2,2,2)
[30 rows of decision-rule values]

Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d=(2,2,2)
(ln|ê_c|, ln|ê_k|, ln|ê_θ|) at evaluation points 1, …, 30
[30 rows of numeric values]

Table 22: Occasionally Binding Constraints, Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Paper No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. doi: 10.1016/j.jmoneco.2014.08.005.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.
Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. doi: 10.1016/j.jedc.2014.03.003.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. doi: 10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b.
Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008.

A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then, with

E_t x*_{t+1} = G(g(x_{t-1}, ε_t)),

we have

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z*_t(x_{t-1}, ε):

{x*_t(x_{t-1}, ε_t) ∈ R^L : ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z*_t(x_{t-1}, ε_t) ≡ H_- x*_{t-1}(x_{t-1}, ε_t) + H_0 x*_t(x_{t-1}, ε_t) + H_+ x*_{t+1}(x_{t-1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t-1}, ε) = B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t-1}, ε)

and

E[x*_{t+1}(x_{t-1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t-1}, ε)] + (I - F)^{-1} φψ_c

with

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)
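A scalar numeric check of this representation under assumed values for H_-, H_0, H_+ and an assumed bounded forcing path (taking ψ_ε = ψ_c = 0 for simplicity). One consistent choice of the series objects in the scalar linear case is B the stable root of H_- + H_0 B + H_+ B² = 0, φ = (H_0 + H_+ B)^{-1}, and F = -φ H_+; the path built from the series then satisfies the linear model equation at every date.

```python
import numpy as np

# Assumed scalar coefficients of z_t = H_m x_{t-1} + H_0 x_t + H_p x_{t+1}.
H_m, H_0, H_p = 0.15, 1.0, 0.25
B = (-H_0 + np.sqrt(H_0**2 - 4.0 * H_p * H_m)) / (2.0 * H_p)  # stable root
phi = 1.0 / (H_0 + H_p * B)
F = -phi * H_p

T, tail = 40, 400
z = 0.8 ** np.arange(T + tail) * 0.02      # assumed bounded forcing path

x = np.empty(T + 1)
x[0] = 0.3                                 # initial condition
for t in range(1, T + 1):
    v = np.arange(tail)
    x[t] = B * x[t - 1] + np.sum(F**v * phi * z[t - 1 + v])

# The model equation should hold along the whole constructed path.
res = H_m * x[:-2] + H_0 * x[1:-1] + H_p * x[2:] - z[:T - 1]
print(np.max(np.abs(res)))                 # ~ machine precision
```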
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t))

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, ε):[12]

{x^p_t(x_{t-1}, ε_t) ∈ R^L : ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z^p_t(x_{t-1}, ε_t) ≡ H_- x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_+ x^p_{t+1}(x_{t-1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I - F)^{-1} φψ_c

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the two representations gives

x*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t-1}, ε) - z^p_{t+ν}(x_{t-1}, ε))

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z*_{t+ν}(x_{t-1}, ε) - z^p_{t+ν}(x_{t-1}, ε)

x*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x*_t.[13] The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.
[12] The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

[13] Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_{-}, \varepsilon} \left\|\phi M(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon)\right\|_2$$

$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1}\phi M(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon)\right\|_2$$
$$z^*_t = H_{-1} x^*_{t-1} + H_0 x^*_t + H_1 x^*_{t+1}$$

$$z^p_t = H_{-1} x^p_{t-1} + H_0 x^p_t + H_1 x^p_{t+1}$$

$$0 = M(x_{t-1},\, x^*_t,\, x^*_{t+1},\, \varepsilon_t)$$

$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1},\, x^p_t,\, x^p_{t+1},\, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left(\|\phi \Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\left(H_0 (x^*_t - x^p_t) + H_1 (x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$$
with

$$M(x_{t-1},\, x^*_t,\, x^*_{t+1},\, \varepsilon_t) = 0,$$

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$(x^*_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1} \left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right)$$
Collecting results,

$$\left(\|\phi \Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right) + H_1(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_1\right)(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_1,$$

the coefficient on $(x^*_{t+1} - x^p_{t+1})$ vanishes, and the maximization produces

$$\max_{x_{t-1}, \varepsilon_t} \left(\|\phi\, \Delta e^p_t(x_{t-1}, \varepsilon)\|_2\right)^2$$
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:
$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+1}(x_t) \mid s_{t+1} = j)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j)$$
If we know the current state, we can use the known conditional expectation function for the state:

$$\begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix}$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
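The stacked-expectations identity can be exercised numerically. Below is a minimal sketch assuming a hypothetical two-state chain, with linear maps $A_j$ standing in for the regime-conditional expectation functions; every number is invented for illustration:

```python
import numpy as np

# Hypothetical 2-state transition matrix: p_ij = Prob(s_{t+1} = j | s_t = i).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Stand-in linear regime-conditional expectation maps for a 2-dimensional x:
# E(x_{t+1} | x_t, next regime = j) = A[j] @ x_t.
A = [np.array([[0.5, 0.0], [0.1, 0.3]]),
     np.array([[0.4, 0.2], [0.0, 0.6]])]
x_t = np.array([1.0, 2.0])

# Inner stack: E_t(x_{t+2} | s_{t+1} = j), averaging over the t+2 regime.
inner = np.concatenate(
    [sum(P[j, m] * A[m] @ (A[j] @ x_t) for m in (0, 1)) for j in (0, 1)])

# Outer stack via (P kron I): E_t(x_{t+2} | s_t = i) for each current regime i.
outer = np.kron(P, np.eye(2)) @ inner

# Brute-force check for i = 0: weight the inner expectations by row 0 of P.
direct = sum(P[0, j] * inner[2 * j:2 * j + 2] for j in (0, 1))
```

The brute-force row-0 computation matches the corresponding block of the $(P \otimes I)$ product, which is all the stacking identity asserts.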
Consider two states $s_t \in \{0, 1\}$ in which the depreciation rates differ, $d_0 > d_1$, with

$$\text{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
If $s_t = 0$, $\mu_t > 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right) \\
&I_t - (k_t - (1 - d_0)k_{t-1}) \\
&\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $s_t = 0$, $\mu_t = 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right) \\
&I_t - (k_t - (1 - d_0)k_{t-1}) \\
&\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $s_t = 1$, $\mu_t > 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right) \\
&I_t - (k_t - (1 - d_1)k_{t-1}) \\
&\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $s_t = 1$, $\mu_t = 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right) \\
&I_t - (k_t - (1 - d_1)k_{t-1}) \\
&\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
C Algorithm Pseudo-code

$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components
$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
$\mathbb{S}$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$
$\mathbb{T}$: model evaluation points, a Niederreiter sequence in the model variable ergodic region
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$ collects the decision rule components
$\mathcal{G}$: $\{X(x_{t-1}), Z(x_{t-1})\}$ collects the decision rule conditional expectations components
$\mathcal{H}$: $\{\gamma, \mathcal{G}\}$ collects the decision rule information
$\{C_1, \ldots, C_M\}$, $d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c)$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon} \to \mathbb{R}^{N_x}$
input: $(\mathcal{L}, \mathcal{H}_0, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), \mathbb{S}, \mathbb{T})$
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by a default setting (``normConvTol'' $\to 10^{-10}$).
2: NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M\}$, $d(\cdot)|(b, c)$, $\mathbb{S}$)), $\mathcal{H}_0$, notConvergedAt(v, $\mathbb{T}$))
output: $\mathcal{H}_0, \mathcal{H}_1, \ldots, \mathcal{H}_c$
Algorithm 1: nestInterp
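A hedged Python sketch of the outer fixed-point loop in Algorithm 1: iterate the interpolation step until the rule evaluated at the test points stops changing. The `do_interp` and `evaluate` below are invented stand-ins for the paper's routines, and the toy contraction exists only so the loop can be exercised:

```python
def nest_interp(do_interp, H0, test_points, evaluate, tol=1e-10, max_iter=200):
    """Iterate rule -> do_interp(rule) until values at test points converge."""
    H = H0
    prev = evaluate(H, test_points)
    for _ in range(max_iter):
        H = do_interp(H)
        cur = evaluate(H, test_points)
        if max(abs(a - b) for a, b in zip(cur, prev)) < tol:
            break
        prev = cur
    return H

# Toy stand-ins: do_interp is a contraction with fixed point H = 2.0,
# and evaluate applies the "rule" at each test point.
do_interp = lambda H: 0.5 * H + 1.0
evaluate = lambda H, pts: [H * p for p in pts]
H_star = nest_interp(do_interp, 1.0, [0.5, 1.0, 2.0], evaluate)
```

The convergence criterion mirrors the paper's `normConvTol` idea: compare successive evaluations at fixed test points rather than the rule's parameters.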
input: $(\mathcal{L}, \mathcal{H}_i, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), \mathbb{S})$
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ and one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs($\mathcal{L}$, $\mathcal{H}$, $\kappa$, $\{C_1, \ldots, C_M\}$, $d(\cdot)|(b, c)$)
3: $\mathcal{H}_{i+1}$ = makeInterpFuncs(theFindRootFuncs, $\mathbb{S}$)
output: $\mathcal{H}_{i+1}$
Algorithm 2: doInterp
input: $(\mathcal{L}, \mathcal{H}, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), \mathbb{S})$
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one of the model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker($\mathcal{L}$, $\mathcal{H}$, $\kappa$, v[2]), v[3]}), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: $(\mathcal{L}, \mathcal{G}, \kappa, M)$
1: Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2: $\mathcal{Z} = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, $\mathcal{G}$)
3: modEqnArgsFunc = genLilXkZkFunc($\mathcal{L}$, $\mathcal{Z}$)
4: $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}$, $\varepsilon_t$, $z_t$)
5: funcOfXtZt = FindRoot(($x^p_t$, $z_t$), $\{M(\mathcal{X}),\, x_t - x^p_t\}$)
6: funcOfXtm1Eps = Function(($x_{t-1}$, $\varepsilon_t$), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: $(x^p_t, \mathcal{L}, \mathcal{G}, \kappa, M)$
1: Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (the empty list for $\kappa = 1$).
2: $X_t = x^p_t$
3: for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4: $\quad \{X_{t+i}, Z_{t+i}\} = \mathcal{G}(Z_{t+i-1})$
5: end
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$
Algorithm 5: genZsForFindRoot
input: $(\mathcal{L} = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: $\gamma(x, \varepsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c + \phi\psi_\varepsilon \varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3: $\mathcal{G}(x) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}$
4: $\mathcal{H} = \{\gamma, \mathcal{G}\}$
output: $\mathcal{H}$
Algorithm 6: genBothX0Z0Funcs
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: fCon = fSumC($\phi$, $F$, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3: xtVals = genXtOfXtm1($\mathcal{L}$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
4: xtp1Vals = genXtp1OfXt($\mathcal{L}$, xtVals, fCon)
5: fullVec = $\{xtm1Vars, xtVals, xtp1Vals, epsVars\}$
output: fullVec
Algorithm 7: genLilXkZkFunc
input: $(\mathcal{L}, x_{t-1}, \varepsilon_t, z_t, fCon)$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_t = Bx_{t-1} + (I - F)^{-1}\phi\psi_c + \phi\psi_\varepsilon \varepsilon_t + \phi\psi_z z_t + F \cdot fCon$
output: $x_t$
Algorithm 8: genXtOfXtm1
input: $(\mathcal{L}, x_t, fCon)$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_{t+1} = Bx_t + (I - F)^{-1}\phi\psi_c + fCon$
output: $x_{t+1}$
Algorithm 9: genXtp1OfXt
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Numerically sums the $F$ contribution for the series.
2: $S = \sum_{\nu=1}^{\kappa-1} F^{\nu-1}\phi Z_{t+\nu}$
output: $S$
Algorithm 10: fSumC
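The relationship between the fSumC partial sum and the one-step construction of $x_t$ can be checked in a scalar sketch: with $fCon = \sum_{\nu=1}^{\kappa-1} F^{\nu-1}\phi Z_{t+\nu}$, the expression $Bx_{t-1} + \phi\psi_\varepsilon\varepsilon_t + \phi\psi_z z_t + F \cdot fCon$ reproduces the series formula truncated at $\kappa$ terms. All numbers below are hypothetical stand-ins:

```python
# Scalar stand-ins for the reference-model objects (hypothetical values).
B, phi, F, psi_c, psi_eps, psi_z = 0.5, -0.5, 0.5, 0.0, 1.0, 1.0
x_lag, eps, kappa = 0.3, 0.02, 6
Z = [0.1, -0.2, 0.15, 0.05, -0.1, 0.2]   # z_t, z_{t+1}, ..., z_{t+kappa-1}

# fSumC: S = sum_{nu=1}^{kappa-1} F^(nu-1) * phi * Z_{t+nu}
f_con = sum(F ** (nu - 1) * phi * Z[nu] for nu in range(1, kappa))

# genXtOfXtm1-style construction: the z_t term plus F * fCon
x_t = (B * x_lag + phi * psi_c / (1.0 - F) + phi * psi_eps * eps
       + phi * psi_z * Z[0] + F * f_con)

# Direct truncation of the series formula at kappa terms
x_t_direct = (B * x_lag + phi * psi_c / (1.0 - F) + phi * psi_eps * eps
              + sum(F ** nu * phi * Z[nu] for nu in range(kappa)))
```

The two computations agree because $F \sum_{\nu=1}^{\kappa-1} F^{\nu-1}\phi Z_{t+\nu} = \sum_{\nu=1}^{\kappa-1} F^{\nu}\phi Z_{t+\nu}$.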
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), \mathbb{S})$
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2: interpData = genInterpData($\{C_1, \ldots, C_M\}$, $d(\cdot)|(b, c)$, $\mathbb{S}$)
3: $\{\gamma, \mathcal{G}\}$ = interpDataToFunc(interpData, $\mathbb{S}$)
output: $\{\gamma, \mathcal{G}\}$
Algorithm 11: makeInterpFuncs
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), \mathbb{S})$
1: Solves the model equations at the collocation points, producing the interpolation data.
output: interpData
Algorithm 12: genInterpData
input: $(C(x_{t-1}, \varepsilon_t))$
1: Evaluate the model equations, obeying pre- and post-conditions.
output: $x_t$ or failed
Algorithm 13: evaluateTriple
input: $(V, \mathbb{S})$
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, \mathcal{G}\}$
Algorithm 14: interpDataToFunc
input: $(approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)$
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: $\{xPts, PsiMat, polys, intPolys\}$
Algorithm 15: smolyakInterpolationPrep
input: $(approxLevels, varRanges, distributions)$
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: $\{xPts, PsiMat, polys, intPolys\}$
Algorithm 16: smolyakInterpolationPrep
input: $(approxLevels, varRanges, distributions)$
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: $\{xPts, PsiMat, polys, intPolys\}$
Algorithm 17: genSolutionErrXtEps

input: $(approxLevels, varRanges, distributions)$
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: $\{xPts, PsiMat, polys, intPolys\}$
Algorithm 18: genSolutionErrXtEpsZero

input: $(approxLevels, varRanges, distributions)$
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: $\{xPts, PsiMat, polys, intPolys\}$
Algorithm 19: genSolutionPath
Contents

1 Introduction 3
2 A New Series Representation For Bounded Time Series 5
  2.1 Linear Reference Models and a Formula for "Anticipated Shocks" 5
  2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series 6
  2.3 Assessing x_t Errors 8
    2.3.1 Truncation Error 8
    2.3.2 Path Error 9
3 Dynamic Stochastic Time Invariant Maps 10
  3.1 An RBC Example 10
  3.2 A Model Specification Constraint 13
4 Approximating Model Solution Errors 15
  4.1 Approximating a Global Decision Rule Error Bound 15
  4.2 Approximating Local Decision Rule Errors 16
  4.3 Assessing the Error Approximation 16
5 Improving Proposed Solutions 22
  5.1 Some Practical Considerations for Applying the Series Formula 22
  5.2 How My Approach Differs From the Usual Approach 22
  5.3 Algorithm Overview 23
    5.3.1 Single Equation System Case 23
    5.3.2 Multiple Equation System Case 24
  5.4 Approximating a Known Solution η = 1, U(c) = Log(c) 26
  5.5 Approximating an Unknown Solution η = 3 37
6 A Model with Occasionally Binding Constraints 44
7 Conclusions 52
A Error Approximation Formula Proof 56
B A Regime Switching Example 59
C Algorithm Pseudo-code 62
1 Introduction

This paper provides a new technique for the solution of discrete time dynamic stochastic infinite-horizon economies with bounded time-invariant solutions. It describes how to obtain decision rules satisfying a system of first-order conditions ("Euler equations") and how to assess their accuracy.¹ For an overview of techniques for solving nonlinear models, see (Judd, 1992; Christiano and Fisher, 2000; Doraszelskiy and Judd, 2004; Gaspar and Judd, 1997; Judd et al., 2014; Marcet and Lorenzoni, 1999; Judd et al., 2011; Maliar and Maliar, 2001; Binning and Maih, 2017; Judd et al., 2017b).
This new series representation for bounded time series, adapted from (Anderson, 2010),

$$x(x_{t-1}, \varepsilon_t) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi Z_{t+\nu}(x_{t-1}, \varepsilon_t)$$
makes it possible to construct a series representation for any bounded time invariant discrete time map. As yet, the author has found no comparable use of a linear reference dynamical system for conveniently transforming one bounded infinite dimensional series into another.
It turns out that this representation provides a way to use Euler equation errors to approximate the error in components of decision rules.² This paper provides approximation formulas explicitly relating the Euler equation errors to the components of the errors in the decision rule.
In addition, the series representation provides exploitable constraints on model solutions that enhance an algorithm's reliability. Practitioners using projection methods and other techniques have often found models difficult to solve. They find that it can be difficult to find an initial guess for a solution so that the calculations converge, and that sometimes the convergent decision rules lack appropriate long run properties like stability. Furthermore, economists have an increasing interest in crafting models with occasionally binding constraints and regime switching, where unfortunately these problems seem to arise more often. The series representation overcomes these problems by making it easier to choose an initial guess that converges and by ensuring that the solutions have appropriate long run properties.
The numerical implementation of the algorithm builds upon the work of (Judd et al., 2011, 2017b). In particular, it uses the an-isotropic Smolyak method and the adaptive parallelotope method (Judd et al., 2014), and precomputes all integrals required by the calculations (Judd et al., 2017b). The algorithm should scale well to large models, as many of the algorithm's components can be computed in parallel, with limited amounts of information flowing between compute nodes.
The paper is organized as follows. Section 2 presents a useful new series representation for any bounded time series. Section 3 shows how to use this series representation to characterize time invariant maps. Section 4 provides formulas for approximating dynamic stochastic model errors in proposed solutions. Section 5 presents a new solution algorithm for
¹The series representation presented here should also prove useful for applications based on value function iteration, but I defer treating this topic for future work.
²Others have also studied approximation error for dynamic stochastic models (Judd et al., 2017a; Santos and Peralta-Alva, 2005; Santos, 2000).
improving proposed solutions. Section 6 applies the technique to a model with occasionally binding constraints. Section 7 concludes.
2 A New Series Representation For Bounded Time Series
Since (Blanchard and Kahn, 1980), numerous authors have developed solution algorithms and formulas for solving linear rational expectations models. It turns out that the solutions associated with these linear saddle point models, by quantifying the potential impact of anticipated future shocks, can play a new and somewhat surprising role in characterizing the solutions of highly nonlinear models. In preparation for discussing how these calculations can be useful for representing time invariant maps, this section describes the relationship between linear reference models and bounded time series.
2.1 Linear Reference Models and a Formula for "Anticipated Shocks"

For any linear homogeneous $L$-dimensional deterministic system that produces a unique stable solution,

$$H_{-1}x_{t-1} + H_0x_t + H_1x_{t+1} = 0,$$

inhomogeneous solutions

$$H_{-1}x_{t-1} + H_0x_t + H_1x_{t+1} = \psi_\varepsilon\varepsilon + \psi_c$$

can be computed as

$$x_t = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c$$

where

$$\phi = (H_0 + H_1B)^{-1} \quad \text{and} \quad F = -\phi H_1$$

(Anderson, 2010).

It will be useful to collect the components of this linear reference model solution, $\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$, for later use.³
Theorem 2.1. Consider any bounded discrete time path

$$x_t \in \mathbb{R}^L, \quad \|x_t\|_\infty \le \bar{X} \quad \forall t > 0 \tag{1}$$

Now, given the trajectories (1), define $z_t$ as

$$z_t \equiv H_{-1}x_{t-1} + H_0x_t + H_1x_{t+1} - \psi_c - \psi_\varepsilon\varepsilon_t$$

One can then express the $x_t$ solving

$$H_{-1}x_{t-1} + H_0x_t + H_1x_{t+1} = \psi_\varepsilon\varepsilon + \psi_c + z_t$$

using

$$x_t = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi z_{t+\nu} \tag{2}$$

and

$$x_{t+k+1} = Bx_{t+k} + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi z_{t+k+\nu+1} \quad \forall t, k \ge 0$$

Proof. See (Anderson, 2010).

³In Section 2.2 I will use these matrices to construct a series representation for any bounded path, and in Section 5 to compute improvements in proposed model solutions.
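Theorem 2.1 can be checked numerically. The sketch below uses a hypothetical scalar reference model with characteristic roots 0.5 (stable) and 2.0 (unstable), solves for $B$ by fixed-point iteration, and verifies that the series formula produces a path satisfying the inhomogeneous difference equation; all coefficients are invented for illustration:

```python
import numpy as np

# Hypothetical scalar reference model H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = z_t
# with characteristic roots a (stable) and b (unstable).
a, b = 0.5, 2.0
Hm1, H0, H1 = a * b, -(a + b), 1.0

# Stable solution of H_{-1} + H_0 B + H_1 B^2 = 0 via fixed-point iteration.
B = 0.0
for _ in range(200):
    B = -Hm1 / (H0 + H1 * B)
phi = 1.0 / (H0 + H1 * B)       # phi = (H_0 + H_1 B)^{-1}
F = -phi * H1                   # F = -phi H_1

# Build a path from formula (2) for an arbitrary bounded z sequence ...
rng = np.random.default_rng(0)
T = 400
z = rng.uniform(-1.0, 1.0, T)

def x_next(x_lag, t):
    # x_t = B x_{t-1} + sum_nu F^nu phi z_{t+nu}  (psi_c = psi_eps = 0 here)
    return B * x_lag + sum(F ** nu * phi * z[t + nu] for nu in range(T - t))

x = [0.3]                       # arbitrary initial condition x_{t-1}
for t in range(3):
    x.append(x_next(x[-1], t))

# ... and confirm it satisfies the difference equation at an interior date.
resid = Hm1 * x[1] + H0 * x[2] + H1 * x[3] - z[1]
```

The residual is zero up to floating-point rounding because the coefficient on the lagged state telescopes through $H_{-1} + H_0 B + H_1 B^2 = 0$ and the $z$ sums cancel term by term.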
Consequently, given a bounded time series (1) and a stable linear homogeneous system $\mathcal{L}$, one can easily compute a corresponding series representation like (2) for $x_t$. Interestingly, the linear model $H$, the constant term $\psi_c$, and the impact of the stochastic shocks $\psi_\varepsilon$ can be chosen rather arbitrarily; the only constraints are the existence of a saddle-point solution for the linear system and that all $z$-values fall in the row space of $H_1$. Note that since the eigenvalues of $F$ are all less than one in magnitude, the formula supports the intuition that distant future shocks influence current conditions less than imminent shocks.

The transformation $\{x_t, x_{t+1}, x_{t+2}, \ldots\} \to \mathcal{L}(x_{t-1}, \varepsilon_t) \to \{z_t, z_{t+1}, z_{t+2}, \ldots\}$ is invertible, $\{z_t, z_{t+1}, z_{t+2}, \ldots\} \to \mathcal{L}(x_{t-1}, \varepsilon_t)^{-1} \to \{x_t, x_{t+1}, x_{t+2}, \ldots\}$, and the inverse can be interpreted as giving the impact of "fully anticipated future shocks" on the path of $x_t$ in a linear perfect foresight model. Note that the reference model is deterministic, so the $z_t(x_{t-1}, \varepsilon_t)$ functions will fully account for any stochastic aspects encountered in the models I analyze.

A key feature to note is that the series representation can accommodate arbitrarily complex time series trajectories so long as these trajectories are bounded. Later, this observation will give us some confidence in the robustness of the algorithm for constructing series representations for unknown families of functions satisfying complicated systems of dynamic non-linear equations.
2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series

Consider the following matrix, constructed from "almost" arbitrary coefficients:

$$\begin{bmatrix} H_{-1} & H_0 & H_1 \end{bmatrix} = \begin{bmatrix} 0.1 & 0.5 & -0.5 & 1 & 0.4 & 0.9 & 1 & 1 & 0.9 \\ 0.2 & 0.2 & -0.5 & 7 & 0.4 & 0.8 & 3 & 2 & 0.6 \\ 0.1 & -0.25 & -1.5 & 2.1 & 0.47 & 1.9 & 2.1 & 2.1 & 3.9 \end{bmatrix} \tag{3}$$

with $\psi_c = \psi_\varepsilon = 0$, $\psi_z = I$. I have chosen these coefficients so that the linear model has a unique stable solution, and since $H_1$ is full rank, its column space will span the column space for all non-zero values of $z_t \in \mathbb{R}^3$.⁴ It turns out that the algorithm is very forgiving about the choice of $\mathcal{L}$.

⁴The flexibility in choosing $\mathcal{L}$ will become more important later in the paper, where we must choose a linear reference model in a context with little guidance about what might make a good choice.
The solution for this system is

$$B = \begin{bmatrix} -0.0282384 & -0.0552487 & 0.00939369 \\ -0.0664679 & -0.700462 & -0.0718527 \\ -0.163638 & -1.39868 & 0.331726 \end{bmatrix}$$

$$\phi = \begin{bmatrix} 0.0210079 & 0.15727 & -0.0531634 \\ 1.20712 & -0.0553003 & -0.431842 \\ 2.58165 & -0.183521 & -0.578227 \end{bmatrix}$$

$$F = \begin{bmatrix} -0.381174 & -0.223904 & 0.0940684 \\ -0.134352 & -0.189653 & 0.630956 \\ -0.816814 & -1.00033 & 0.0417094 \end{bmatrix}$$
It is straightforward to apply (2) to the following three bounded families of time series paths:

$$x^1_t = (x_{t-1,1} + \varepsilon_t)\, D_\pi(t)$$

where $D_\pi(t)$ gives the $t$-th digit of $\pi$;

$$x^2_t = (x_{t-1,1}(1 + \varepsilon_t))^2\, (-1)^t$$

$$x^3_t = \tilde{\varepsilon}_t$$

where the $\tilde{\varepsilon}_t$ are a sequence of pseudo-random draws from the uniform distribution $U(-4, 4)$ produced subsequent to the selection of a random seed,

$$\text{randomSeed}(x_{t-1,1} + \varepsilon_t)$$
The three panels in Figure 1 display these three time series generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set oscillates between two values determined by a nonlinear function of the initial conditions $x_{t-1}$ and the shock. The third characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model $\mathcal{L}$.⁵

Though unintuitive, and perhaps lacking a compelling interpretation, the values for $z_t$ exist, provide a series for the $x_t$, and are easy to compute. One can apply the calculations for any given initial condition to produce a $z$ series exactly replicating any bounded set of trajectories. Consequently, these numerical linear algebra calculations can easily transform a $z_t$ series into an $x_t$ series and vice versa. Since this can be done for any path starting from any particular initial conditions, any discrete time invariant map that generates families of bounded paths has a series representation.
⁵Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
Figure 1: Arbitrary Bounded Time Series Paths (panels: Pi Digits Path, Oscillatory Path, Pseudo Random Path)
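The forward and inverse transformations can be sketched for an arbitrary bounded, discontinuous path. The scalar reference model here is hypothetical (stable root 0.5, unstable root 2.0), chosen only so the block is self-contained:

```python
import numpy as np

# Hypothetical scalar reference model with stable root 0.5, unstable root 2.0.
Hm1, H0, H1 = 1.0, -2.5, 1.0
B = 0.5                          # stable solution of Hm1 + H0*B + H1*B^2 = 0
phi = 1.0 / (H0 + H1 * B)        # = -0.5
F = -phi * H1                    # = 0.5

# An arbitrary bounded, discontinuous path that no smooth dynamics generate.
T = 300
x = np.array([(-1.0) ** t * (1.0 + 0.3 * np.sin(3.7 * t)) for t in range(T)])
x_init = 0.7                     # the initial condition x_{t-1}

# Forward map: z_t = Hm1*x_{t-1} + H0*x_t + H1*x_{t+1}  (psi_c = psi_eps = 0)
x_lagged = np.concatenate([[x_init], x[:-1]])
z = Hm1 * x_lagged[:-1] + H0 * x[:-1] + H1 * x[1:]

# Inverse map: the series formula recovers x_0 from x_init and the z path.
x0_series = B * x_init + sum(F ** nu * phi * z[nu] for nu in range(len(z)))
```

The recovered `x0_series` matches `x[0]` to machine precision, illustrating that any bounded path, however irregular, has this representation.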
2.3 Assessing $x_t$ Errors

The formula (2) computes the impact on the current state of fully anticipated future shocks. It characterizes the impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of $z_t$, or one might have imprecise values of $z_t$ along the path.

2.3.1 Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating $x_t$ by truncating the series at a finite number of terms:

$$\breve{x}_t \equiv Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{s=0}^{k} F^{s}\phi z_{t+s}$$

We can bound the series approximation truncation errors. Since

$$\sum_{s=k+1}^{\infty} F^{s}\phi\psi_z = (I - F)^{-1}F^{k+1}\phi\psi_z,$$

$$\|x(x_{t-1}, \varepsilon) - \breve{x}(x_{t-1}, \varepsilon, k)\|_\infty \le \left\|(I - F)^{-1}F^{k+1}\phi\psi_z\right\|_\infty \left(\|H_{-1}\|_\infty + \|H_0\|_\infty + \|H_1\|_\infty\right) \bar{X}$$

Figure 2 shows that this truncation error can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between $x_t$ from the full series and a truncated series for different lengths of the truncation. The blue line shows the infinity norm of the actual difference between the $x_t$ computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
Figure 2: $x_t$ Error Approximation Versus Actual Error (plot: Truncation Error Bound and Actual Error)
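The conservatism of the truncation bound can be reproduced in a small sketch. A hypothetical scalar reference model and an arbitrary bounded path with $\bar{X} = 1$ are assumed; the actual truncation error never exceeds the bound:

```python
import numpy as np

# Hypothetical scalar reference model (stable root 0.5, unstable root 2.0).
Hm1, H0, H1 = 1.0, -2.5, 1.0
B, psi_z = 0.5, 1.0
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1

rng = np.random.default_rng(1)
T = 200
x = rng.uniform(-1.0, 1.0, T)        # arbitrary bounded path, |x_t| <= Xbar = 1
x_init, Xbar = 0.2, 1.0
x_lagged = np.concatenate([[x_init], x[:-1]])
z = Hm1 * x_lagged[:-1] + H0 * x[:-1] + H1 * x[1:]

full = B * x_init + sum(F ** nu * phi * z[nu] for nu in range(len(z)))
checks = []
for k in (5, 10, 20):
    trunc = B * x_init + sum(F ** nu * phi * z[nu] for nu in range(k + 1))
    actual = abs(full - trunc)
    # Bound: ||(I-F)^{-1} F^{k+1} phi psi_z|| * (|Hm1| + |H0| + |H1|) * Xbar
    bound = (abs(F ** (k + 1) / (1.0 - F) * phi * psi_z)
             * (abs(Hm1) + abs(H0) + abs(H1)) * Xbar)
    checks.append(actual <= bound)
```

The bound holds termwise: each dropped term $F^s \phi z_{t+s}$ is dominated by $F^s \phi \psi_z$ times the worst-case $|z|$ implied by the path bound.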
2.3.2 Path Error

We can assess the impact of perturbed values for $z_t$ by computing the maximum discrepancy $\|\Delta z_t\|$ impacting the $z_t$ and applying the partial sum formula. One can approximate $x_t$ using

$$\hat{x}_t \equiv Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{s=0}^{\infty} F^{s}\phi(z_{t+s} + \Delta z_{t+s})$$

Note yet again that, for approximating $x(x_{t-1}, \varepsilon)$, the impact of a given realization along the path declines for realizations that are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible $\Delta z_t$ in the formula:

$$\|x(x_{t-1}, \varepsilon) - \hat{x}(x_{t-1}, \varepsilon)\|_\infty \le \left\|(I - F)^{-1}\phi\psi_z\right\|_\infty \|\Delta z_t\|_\infty$$
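The path-error bound can likewise be checked in a scalar sketch (hypothetical coefficients, invented perturbations):

```python
import numpy as np

# Hypothetical scalar reference model (stable root 0.5, unstable root 2.0).
H1 = 1.0
B, phi = 0.5, -0.5               # phi = (H0 + H1*B)^{-1} with H0 = -2.5
F, psi_z = 0.5, 1.0              # F = -phi*H1

rng = np.random.default_rng(2)
T = 200
z = rng.uniform(-1.0, 1.0, T)           # a z path
dz = rng.uniform(-0.05, 0.05, T)        # perturbations along the path

x0 = sum(F ** nu * phi * z[nu] for nu in range(T))
x0_perturbed = sum(F ** nu * phi * (z[nu] + dz[nu]) for nu in range(T))
actual = abs(x0_perturbed - x0)

# Conservative bound: ||(I - F)^{-1} phi psi_z|| * max |Delta z|
bound = abs(phi * psi_z / (1.0 - F)) * np.max(np.abs(dz))
```

Because the perturbation impact is a geometrically discounted sum, replacing every perturbation by the largest one yields the stated bound.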
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of $(x_{t-1}, \varepsilon_t)$.⁶ As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of $z_t(x_{t-1}, \varepsilon_t)$ functions that gives us an approximation for the model solutions. This section presents a familiar RBC model as an example.⁷
3.1 An RBC Example

Consider the model described in (Maliar and Maliar, 2005):⁸

$$\max \; u(c_t) + E_t \sum_{\nu=1}^{\infty} \delta^{\nu} u(c_{t+\nu})$$

$$c_t + k_t = (1 - d)k_{t-1} + \theta_t f(k_{t-1}), \qquad f(k) = k^{\alpha}$$

$$u(c) = \frac{c^{1-\eta} - 1}{1 - \eta}$$

The first order conditions for the model are

$$\frac{1}{c_t^{\eta}} = \alpha\delta k_t^{\alpha-1} E_t\left(\frac{\theta_{t+1}}{c_{t+1}^{\eta}}\right)$$

$$c_t + k_t = \theta_t k_{t-1}^{\alpha}$$

$$\theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$

It is well known that when $\eta = d = 1$ we have a closed form solution (Lettau, 2003):

$$x_t(x_{t-1}, \varepsilon_t) \equiv D(x_{t-1}, \varepsilon_t) \equiv \begin{bmatrix} c_t(x_{t-1}, \varepsilon_t) \\ k_t(x_{t-1}, \varepsilon_t) \\ \theta_t(x_{t-1}, \varepsilon_t) \end{bmatrix} = \begin{bmatrix} (1 - \alpha\delta)\theta_t k_{t-1}^{\alpha} \\ \alpha\delta\theta_t k_{t-1}^{\alpha} \\ \theta_{t-1}^{\rho} e^{\varepsilon_t} \end{bmatrix}$$
⁶Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter, 2000; Koop et al., 1996).
⁷Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.
⁸Here I set their $\beta = 1$ and do not discuss quasi-geometric discounting or time-inconsistency.
For mean zero iid $\varepsilon_t$, we can easily compute the conditional expectation of the model variables for any given $(\theta_{t+k}, k_{t+k})$:

$$\bar{D}(x_{t+k}) \equiv \begin{bmatrix} E_t(c_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \\ E_t(k_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \\ E_t(\theta_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \end{bmatrix} = \begin{bmatrix} (1 - \alpha\delta)k_{t+k}^{\alpha}\, e^{\frac{\sigma^2}{2}}\, \theta_{t+k}^{\rho} \\ \alpha\delta k_{t+k}^{\alpha}\, e^{\frac{\sigma^2}{2}}\, \theta_{t+k}^{\rho} \\ e^{\frac{\sigma^2}{2}}\, \theta_{t+k}^{\rho} \end{bmatrix}$$
For any given values of $(k_{t-1}, \theta_{t-1}, \varepsilon_t)$, the model decision rule ($D$) and the decision rule conditional expectation ($\bar{D}$) can generate a conditional expectations path forward from any given initial $(x_{t-1}, \varepsilon_t)$,

$$x_t(x_{t-1}, \varepsilon_t) = D(x_{t-1}, \varepsilon_t), \qquad x_{t+k+1}(x_{t-1}, \varepsilon_t) = \bar{D}(x_{t+k}(x_{t-1}, \varepsilon_t)),$$

and corresponding paths for $(z_{1t}(x_{t-1}, \varepsilon_t), z_{2t}(x_{t-1}, \varepsilon_t), z_{3t}(x_{t-1}, \varepsilon_t))$:

$$z_{t+k} \equiv H_{-1}x_{t+k-1} + H_0x_{t+k} + H_1x_{t+k+1}$$

Formula (2) requires that

$$x(x_{t-1}, \varepsilon_t) = B\begin{bmatrix} c_{t-1} \\ k_{t-1} \\ \theta_{t-1} \end{bmatrix} + \phi\psi_\varepsilon\varepsilon_t + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi z_{t+\nu}(k_{t-1}, \theta_{t-1}, \varepsilon_t)$$
For example, using $d = 1$, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

$$\alpha = 0.36, \quad \delta = 0.95, \quad \rho = 0.95, \quad \sigma = 0.01,$$

we have

$$\begin{bmatrix} c_{ss} \\ k_{ss} \\ \theta_{ss} \end{bmatrix} = \begin{bmatrix} 0.360408 \\ 0.187324 \\ 1.001 \end{bmatrix}, \qquad \begin{bmatrix} k_{t-1} \\ \theta_{t-1} \\ \varepsilon_t \end{bmatrix} = \begin{bmatrix} 0.18 \\ 1.1 \\ 0.01 \end{bmatrix} \tag{4}$$
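The closed-form rule $D$ and its conditional expectation $\bar{D}$ can be sketched directly in code; iterating $\bar{D}$ from the initial point in (4) settles at the steady-state values reported above. This is an illustrative check, not the paper's implementation:

```python
import numpy as np

alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01

def D(k_lag, theta_lag, eps):
    """Closed-form decision rule (eta = d = 1): returns (c_t, k_t, theta_t)."""
    theta = theta_lag ** rho * np.exp(eps)
    c = (1.0 - alpha * delta) * theta * k_lag ** alpha
    k = alpha * delta * theta * k_lag ** alpha
    return c, k, theta

def Dbar(k, theta):
    """Conditional expectation of (c_{t+1}, k_{t+1}, theta_{t+1}) given (k, theta)."""
    m = np.exp(sigma ** 2 / 2.0) * theta ** rho   # E_t[theta_{t+1}]
    return ((1.0 - alpha * delta) * k ** alpha * m,
            alpha * delta * k ** alpha * m,
            m)

# Conditional expectations path from the initial point in (4).
c, k, theta = D(0.18, 1.1, 0.01)
for _ in range(500):
    c, k, theta = Dbar(k, theta)
```

The limit of the iteration reproduces $(c_{ss}, k_{ss}, \theta_{ss}) \approx (0.360408, 0.187324, 1.001)$, tying the conditional expectations path construction back to the reported stochastic steady state.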
Figure 3 shows, from top to bottom, the paths of $\theta_t$, $c_t$, and $k_t$ from the initial values given in Equation (4).

By including enough terms, we can compute the solution for $x_t$ exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time $t$ values of the state variables. The approximated magnitude of the error, $B_n$, shown in red, is again very pessimistic compared to the actual error, $Z_n = \|x_t - \breve{x}_t\|_\infty$ (the gap between the full-series and truncated-series values), shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine-precision-accurate value for the time $t$ state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves the approximation $Z_n$, shown in blue, and the approximate error $B_n$, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter, but still pessimistic, bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
Figure 3: Model variable values
Figure 4: RBC Known Solution Truncation: Approximate Error ($B_n$) Versus Actual ($Z_n$), "Almost Arbitrary" Linear Reference Model
Figure 5: RBC Known Solution Truncation: Approximate Error ($B_n$) Versus Actual ($Z_n$), Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

$$h_i(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = h^{det}_{i0}(x_{t-1}, x_t, \varepsilon_t) + \sum_{j=1}^{p_i} \left[h^{det}_{ij}(x_{t-1}, x_t, \varepsilon_t)\, h^{nondet}_{ij}(x_{t+1})\right] = 0$$

This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the descriptions of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as
$$h^{det}_{10}(\cdot) = \frac{1}{c_t^{\eta}}, \qquad h^{det}_{11}(\cdot) = -\alpha\delta k_t^{\alpha-1}, \qquad h^{nondet}_{11}(\cdot) = E_t\left(\frac{\theta_{t+1}}{c_{t+1}^{\eta}}\right)$$

$$h^{det}_{20}(\cdot) = c_t + k_t - \theta_t k_{t-1}^{\alpha}, \qquad h^{det}_{21}(\cdot) = 0$$

$$h^{det}_{30}(\cdot) = \ln\theta_t - (\rho\ln\theta_{t-1} + \varepsilon_t), \qquad h^{det}_{31}(\cdot) = 0$$
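Under the closed form ($\eta = d = 1$), the Euler component written in this $h^{det}/h^{nondet}$ form can be checked at an arbitrary state: the expectation term is deterministic because $\theta_{t+1}/c_{t+1} = 1/((1-\alpha\delta)k_t^{\alpha})$, so the residual is exactly zero. A small illustrative sketch:

```python
alpha, delta = 0.36, 0.95

k_lag, theta = 0.18, 1.1          # an arbitrary state (theta_t already realized)
c = (1.0 - alpha * delta) * theta * k_lag ** alpha   # closed-form c_t
k = alpha * delta * theta * k_lag ** alpha           # closed-form k_t

# h^nondet_11 = E_t[theta_{t+1}/c_{t+1}]; under the closed form
# c_{t+1} = (1 - alpha*delta)*theta_{t+1}*k_t^alpha, the ratio is known at t.
h_nondet_11 = 1.0 / ((1.0 - alpha * delta) * k ** alpha)
h_det_10 = 1.0 / c
h_det_11 = -alpha * delta * k ** (alpha - 1.0)

resid = h_det_10 + h_det_11 * h_nondet_11            # Euler residual
```

Writing the equation as $h^{det}_{10} + h^{det}_{11}\, h^{nondet}_{11} = 0$ isolates the only term requiring an expectation, which is what the auxiliary-variable construction below exploits.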
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time $t$ with $\varepsilon_t$ known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

$$N_t = \frac{\theta_t}{c_t^{\eta}},$$

substituting $N_{t+1}$ for $E_t\left(\frac{\theta_{t+1}}{c_{t+1}^{\eta}}\right)$ in the first equation, and linearizing the augmented RBC model about the ergodic mean.
This leads to

$$\begin{bmatrix} H_{-1} & H_0 & H_1 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 & 0 & -7.6986 & 9.47963 & 0 & 0 & 0 & 0 & -0.999 & 0 \\ 0 & -1.05263 & 0 & 0 & 1 & 1 & 0 & -0.547185 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 7.7063 & 0 & 1 & -2.77463 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & -0.949953 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}$$

with

$$\psi_\varepsilon = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}, \qquad \psi_z = I$$

These coefficients produce a unique stable linear solution:
B =
[ 0  0.692632  0  0.342028 ]
[ 0  0.36      0  0.177771 ]
[ 0  -5.33763  0  0        ]
[ 0  0         0  0.949953 ]

φ =
[ -0.0444237  0.658     0  0.360048 ]
[ 0.0444237   0.342     0  0.187137 ]
[ 0.342342    -5.07075  1  0        ]
[ 0           0         0  1        ]

F =
[ 0  0  -0.0443793  0 ]
[ 0  0  0.0443793   0 ]
[ 0  0  0.342       0 ]
[ 0  0  0           0 ]

ψ_c = (-3.7735, -0.197184, 2.77741, 0.0500976)ᵀ
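One way to verify the claimed uniqueness and stability of the linear solution is to confirm that the spectral radius of the transition matrix B is below one. The following pure-Python power iteration is a sketch; B's entries are the paper's, with decimal points reinserted by hand, so treat the numbers as approximate.

```python
# Illustrative stability check for the reference-model transition matrix B:
# the "unique stable linear solution" requires spectral radius < 1.
# Entries are the paper's B with decimal points reinserted (approximate).
B = [[0.0,  0.692632, 0.0, 0.342028],
     [0.0,  0.36,     0.0, 0.177771],
     [0.0, -5.33763,  0.0, 0.0],
     [0.0,  0.0,      0.0, 0.949953]]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def spectral_radius(M, iters=200):
    # Power iteration with max-norm normalization.
    v = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        w = matvec(M, v)
        lam = max(abs(x) for x in w)
        if lam == 0.0:
            return 0.0
        v = [x / lam for x in w]
    return lam
```

Here the dominant eigenvalue is the persistence of the technology shock (about 0.95), so the reference model is stable.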
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz_t^p, we can bound the largest deviation of a proposed solution from the exact solution.
Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];

then, with

E_t x_{t+1} = G(g(x_{t-1}, ε_t)),

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)
x_t(x_{t-1}, ε_t) ∈ R^L,  ‖x_t(x_{t-1}, ε_t)‖_2 ≤ X  ∀t > 0.

Now consider a proposed solution for the model, x_t^p = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x_{t+1}^p = G^p(g^p(x_{t-1}, ε_t)),   e_t^p(x_{t-1}, ε) ≡ M(x_{t-1}, x_t^p, E_t x_{t+1}^p, ε_t).

Then

‖x_t(x_{t-1}, ε) - x_t^p(x_{t-1}, ε)‖_∞ ≤ max_{x⁻, ε} ‖(I - F)⁻¹ φ M(x⁻, g^p(x⁻, ε), G^p(g^p(x⁻, ε)), ε)‖_2,

which can be approximated by

max_{x_{t-1}, ε_t} ‖φ e_t^p(x_{t-1}, ε)‖_2.
For proof see Appendix A.

9 This work contrasts with the analyses provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) - x^p(x_{t-1}, ε_t).

Compute the conditional expectations functions for the decision rule, and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in 2 to approximate e:

X^p(x) ≡ E[x^p(x, ε)]
x_{t+k+1} = X^p(x_{t+k}),  k = 1, ..., K.

We can define functions z_t^p by

z_t^p ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t),   z_{t+k}^p ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0),

so that

e(x_{t-1}, ε_t) ≈ Σ_{ν=0}^{K} F^ν φ z_{t+ν}^p.
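The truncated series above is simple to evaluate mechanically. The following Python sketch computes Σ_{ν=0}^{K} F^ν φ z_{t+ν}^p for small illustrative matrices; the particular F, φ, and residual path in the test are hypothetical, not the paper's.

```python
# Minimal sketch of the truncated local error formula
# e(x_{t-1}, eps_t) ≈ sum_{nu=0}^{K} F^nu * phi * z^p_{t+nu}.
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def matmul(A, B):
    n, m = len(A), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(len(B))) for j in range(m)]
            for i in range(n)]

def local_error(F, phi, z_path):
    # z_path holds z^p_t, z^p_{t+1}, ..., z^p_{t+K}
    n = len(F)
    Fpow = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # F^0 = I
    e = [0.0] * n
    for z in z_path:                                 # nu = 0, 1, ..., K
        e = [a + b for a, b in zip(e, matvec(Fpow, matvec(phi, z)))]
        Fpow = matmul(Fpow, F)                       # advance to F^(nu+1)
    return e
```

Because F's powers shrink when the reference model is stable, later terms contribute progressively less, which is why a modest K already gives a useful approximation.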
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015,

producing the following 10 model specifications:
α       δ         η  ρ         σ
0.35    0.92875   1  0.957188  0.00875
0.275   0.9725    1  0.906875  0.01375
0.375   0.8675    1  0.924375  0.01125
0.225   0.94625   1  0.891563  0.00625
0.325   0.91125   1  0.944063  0.006875
0.2625  0.922187  1  0.98125   0.011875
0.3625  0.887187  1  0.85875   0.014375
0.2125  0.965938  1  0.965938  0.009375
0.3125  0.860938  1  0.878438  0.008125
0.2375  0.904688  1  0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i.

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 (σ², R²) pairs, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.
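The accuracy regression described above is ordinary least squares of actual errors on predicted errors. A self-contained Python sketch (the data in the test are synthetic; a well-calibrated error formula would yield α ≈ 0, β ≈ 1, R² ≈ 1):

```python
# Pure-Python OLS for the accuracy regression e_i = alpha + beta * ehat_i + u_i.
# x holds the predicted errors ehat_i, y the actual errors e_i.
def ols(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    beta = sxy / sxx
    alpha = my - beta * mx
    ss_res = sum((b - (alpha + beta * a)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return alpha, beta, 1.0 - ss_res / ss_tot  # (intercept, slope, R^2)
```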
10 I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since θ_t is determined by a backward looking equation.
[Figure: four scatter panels of (σ², R²) from the c error regressions, for K = 0, 1, 10, 30]

Figure 6: c_t (σ², R²)
[Figure: four scatter panels of (σ², R²) from the k error regressions, for K = 0, 1, 10, 30]

Figure 7: k_t (σ², R²)
[Figure: four scatter panels of (α, β) from the c error regressions, for K = 0, 1, 10, 30]

Figure 8: c_t (α, β)
[Figure: four scatter panels of (α, β) from the k error regressions, for K = 0, 1, 10, 30]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, ..., C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c),

where each

C_i ≡ ( b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]) )

represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The function d chooses among the candidate x_t(x_{t-1}, ε_t) values: it uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
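The gated dispatch just described can be sketched in a few lines of Python. All names here are illustrative stand-ins, not the paper's implementation: each system is a (precondition b_i, solver M_i, postcondition c_i) triple, and `select` plays the role of d.

```python
# Hedged sketch of the {C_1,...,C_M, d} | (b, c) dispatch.
def solve_with_gates(systems, select, x_prev, eps):
    candidates = []
    for pre, solve, post in systems:
        if not pre(x_prev, eps):
            continue                        # gate b_i closed: skip system i
        x_t = solve(x_prev, eps)
        if x_t is not None and post(x_prev, eps, x_t):
            candidates.append(x_t)          # post-condition c_i accepted it
    return select(candidates, x_prev, eps)  # d picks the unique x_t
```

Because each (b_i, M_i, c_i) triple can be evaluated independently, this structure is what makes the candidate solves easy to parallelize.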
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on 2 linking the conditional expectations of future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as for the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error; more terms gives more accuracy when the decision rule is accurate, but is also computationally more expensive. The initial guess is typically some distance away from the solution, so the terms in the series will initially be incorrect. The Smolyak grid adds further error to the calculation: if there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates; a description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t).

Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t),

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)].

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [ x_{t-1}; x^p(x_{t-1}, ε); X^p(x^p(x_{t-1}, ε)) ] + ψ_ε ε_t + ψ_c
Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)].

• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0.

• Using the z^p(x_{t-1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t-1} + φ ψ_ε ε + (I - F)⁻¹ φ ψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})
E[X_{t+1}] = B X_t + (I - F)⁻¹ φ ψ_c + Σ_{ν=1}^∞ F^{ν-1} φ Z(x_{t+ν})  ∀t, k ≥ 0.

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε). Repeat loop.
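At its core, the loop above is a fixed-point iteration on the decision rule, terminating when the rule stops changing at a set of test points. A minimal Python schematic (the updater, tolerance, and test points are hypothetical placeholders for the model-specific steps):

```python
# Schematic outer loop: iterate rule <- update(rule) until the change at
# the test points falls below a tolerance. All arguments are stand-ins.
def iterate_rule(update, rule, test_points, tol=1e-8, max_iter=100):
    for _ in range(max_iter):
        new_rule = update(rule)
        gap = max(abs(new_rule(p) - rule(p)) for p in test_points)
        rule = new_rule
        if gap < tol:
            break
    return rule
```

With a contraction-like updater the gap shrinks geometrically, mirroring how the series constraints keep the iterates bounded.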
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation. Figure 10 provides a graphic depiction of the function call tree. For more

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collecting the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})}, collecting the decision rule conditional expectations components
H: {γ, G}, collecting decision rule information
{C_1, ..., C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c): the equation system, R^{3N_x+N_ε} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments to other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot genLilXkZkFunc
fSumC genXtOfXtm1 genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc
Figure 10 Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, ..., C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs — The function can use 2 or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function; each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression 2.
makeInterpFuncs — Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
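The parallelotope transformation is, at heart, a principal-components change of variables: deviations from the ergodic mean are rotated into the directions of greatest variation. The sketch below uses the mean X̄ and rotation V reported for this example (decimal points reinserted by hand, so the numbers are approximate); the scaling step is omitted for brevity.

```python
# Hedged sketch of the adaptive-domain change of variables: u = V^T (x - xbar).
# XBAR and V are the reported RBC values with decimal points reinserted.
XBAR = [0.18853, 1.00466, 0.0]
V = [[-0.707107, -0.707107, 0.0],
     [-0.707107,  0.707107, 0.0],
     [ 0.0,       0.0,      1.0]]

def to_principal(x, xbar, V):
    # u_j = sum_i V[i][j] * (x_i - xbar_i)
    d = [xi - bi for xi, bi in zip(x, xbar)]
    n = len(d)
    return [sum(V[i][j] * d[i] for i in range(n)) for j in range(n)]
```

In the rotated coordinates the strongly correlated (k_t, θ_t) cloud of the left panel becomes the better-conditioned domain of the right panel, on which the Smolyak grid is built.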
[Figure: left panel, the 200 simulated ergodic values of k_t (roughly 0.17 to 0.21) and θ_t (roughly 0.95 to 1.10); right panel, the same points in the transformed coordinates]

Figure 11: The Ergodic Values for k_t, θ_t
X̄ = (0.18853, 1.00466, 0)ᵀ

σ_X = (0.0112443, 0.0387807, 0.01)ᵀ

V =
[ -0.707107  -0.707107  0 ]
[ -0.707107   0.707107  0 ]
[  0          0         1 ]

maxU = (3.02524, 0.199124, 3)ᵀ

minU = (-3.16879, -0.17629, -3)ᵀ
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing values of K. Generally, increasing K increases the accuracy of the solution; the value of K has more effect when the Smolyak grid is more densely populated with collocation points.
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2 RBC Known Solution Model Evaluation Points d=(111)
c1 k1 θ1
c30 k30 θ30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3 RBC Known Solution Values at Evaluation Points d=(111)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4 RBC Known Solution Actual Errors d=(111)
ln(|ê_c1|) ln(|ê_k1|) ln(|ê_θ1|)
⋮
ln(|ê_c30|) ln(|ê_k30|) ln(|ê_θ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6 RBC Known Solution Model Evaluation Points d=(222)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution Actual Errors, d=(2,2,2)
ln(|ê_c1|) ln(|ê_k1|) ln(|ê_θ1|)
⋮
ln(|ê_c30|) ln(|ê_k30|) ln(|ê_θ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
K          d=(1,1,1)  d=(2,2,2)  d=(3,3,3)
NonSeries  -6.51367   -12.2771   -16.8947
0          -6.0793    -6.26833   -6.29843
1          -6.00805   -6.24202   -6.49835
2          -6.52219   -7.43876   -7.04785
3          -6.57578   -8.03864   -8.32589
4          -6.62831   -9.1522    -9.49314
5          -6.77305   -10.5516   -10.4247
6          -6.38499   -11.6399   -11.455
7          -6.63849   -11.3627   -13.0213
8          -6.38735   -11.7288   -13.9976
9          -6.46759   -11.9826   -15.3372
10         -6.45966   -12.1345   -15.4157
10         -6.77776   -11.5025   -15.8211
15         -6.67778   -12.4593   -15.8899
20         -6.40802   -11.7296   -17.1652
25         -6.53265   -12.1441   -17.1518
30         -6.81949   -11.6097   -16.3748
35         -6.33508   -12.1446   -16.6963
40         -6.52442   -11.4042   -15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point, and Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
c1 k1 θ1
c30 k30 θ30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
ln(|ê_c1|) ln(|ê_k1|) ln(|ê_θ1|)
⋮
ln(|ê_c30|) ln(|ê_k30|) ln(|ê_θ30|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
c1 k1 θ1
⋮
c30 k30 θ30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
42
ln|ê_c,i|  ln|ê_k,i|  ln|ê_θ,i|  (i = 1, …, 30)
[Table 16 data: 30 rows of log absolute error approximations; the numerical entries are not recoverable from the extracted text.]

Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:
$$I_t \ge \upsilon I_{ss}$$
$$\max\ u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$
subject to
$$c_t + k_t = (1 - d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\epsilon_t}$$
with
$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1 - \eta}$$
The Lagrangian is
$$\mathcal{L} = u(c_t) + E_t\left[\sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^{t}\lambda_t\left(c_t + k_t - \left((1 - d)k_{t-1} + \theta_t f(k_{t-1})\right)\right) + \delta^{t}\mu_t\left((k_t - (1 - d)k_{t-1}) - \upsilon I_{ss}\right)\right]$$
so that we can impose
$$I_t = (k_t - (1 - d)k_{t-1}) \ge \upsilon I_{ss}$$
The first order conditions become
$$\frac{1}{c_t^{\eta}} + \lambda_t$$
$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1 - d)\lambda_{t+1} + \mu_t - \delta(1 - d)\mu_{t+1}$$
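At the slack steady state the Euler condition reduces to $1 = \delta(\alpha k_{ss}^{\alpha-1} + 1 - d)$, which pins down $k_{ss}$ and hence the investment floor $\upsilon I_{ss}$. A minimal numerical check, sketched in Python rather than the paper's Mathematica prototype; the parameter values are illustrative assumptions, not the paper's calibration:

```python
# Steady state of the RBC model under log utility (eta = 1) and the implied
# investment floor upsilon * I_ss.  Parameter values are illustrative
# assumptions, not the paper's calibration.
alpha, delta, d, upsilon = 0.36, 0.95, 0.025, 0.975

# Slack-branch Euler equation at steady state: 1 = delta*(alpha*k^(alpha-1) + 1 - d)
k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))
i_ss = d * k_ss                 # steady-state investment just replaces depreciation
c_ss = k_ss ** alpha - i_ss     # resource constraint with theta = 1
floor = upsilon * i_ss          # the occasionally binding floor on investment

# The Euler residual vanishes at the steady state
resid = 1.0 - delta * (alpha * k_ss ** (alpha - 1.0) + 1.0 - d)
```

Because $\upsilon < 1$, the floor sits strictly below steady-state investment, so the constraint is slack at the steady state and can bind only along transition paths.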
¹¹ The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.

For example, the Euler equations for the neoclassical growth model can be written as follows.
If $\mu_t > 0$ and $k_t - (1 - d)k_{t-1} - \upsilon I_{ss} = 0$ (the constraint binds):
$$\lambda_t - \frac{1}{c_t} = 0$$
$$c_t + I_t - \theta_t k_{t-1}^{\alpha} = 0$$
$$N_t - \lambda_t \theta_t = 0$$
$$\theta_t - e^{\rho \ln(\theta_{t-1}) + \epsilon} = 0$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d) + \delta(1 - d)\mu_{t+1}\right) = 0$$
$$I_t - (k_t - (1 - d)k_{t-1}) = 0$$
$$\mu_t\left(k_t - (1 - d)k_{t-1} - \upsilon I_{ss}\right) = 0$$

If $\mu_t = 0$ and $k_t - (1 - d)k_{t-1} - \upsilon I_{ss} \ge 0$ (the constraint is slack), the same equation system applies with $\mu_t = 0$.
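The two branches lend themselves to guess-and-verify evaluation: solve the $\mu_t = 0$ system first, accept it if the implied investment clears the floor, and otherwise impose $I_t = \upsilon I_{ss}$ and back out the multiplier. The sketch below (Python, a stand-in for the paper's FindRoot-based triples) freezes the next-period values $\lambda_{t+1}$, $N_{t+1}$, $\mu_{t+1}$ at illustrative steady-state levels, and signs the multiplier so that $\mu_t \ge 0$ when the floor binds; both are assumptions of the sketch, not the paper's implementation:

```python
# Guess-and-verify evaluation of the occasionally-binding first-order
# conditions at one point (eta = 1).  Next-period values lam1, N1, mu1 are
# frozen at illustrative steady-state levels -- an assumption of this sketch.
alpha, delta, d, upsilon = 0.36, 0.95, 0.025, 0.975
k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))
i_ss = d * k_ss
c_ss = k_ss ** alpha - i_ss
lam1, N1, mu1 = 1.0 / c_ss, 1.0 / c_ss, 0.0   # lambda', N' = lambda'*theta', mu'


def solve_point(km1, theta):
    """Return (c, k, I, mu) for state (k_{t-1}, theta_t), picking a branch."""
    resources = theta * km1 ** alpha + (1.0 - d) * km1
    rhs = lambda k: (alpha * k ** (alpha - 1.0) * delta * N1
                     + delta * (1.0 - d) * lam1 + delta * (1.0 - d) * mu1)

    def euler_resid(k):
        return 1.0 / (resources - k) - rhs(k)   # lambda_t minus discounted terms

    # Slack branch (mu_t = 0): bisect the monotone Euler residual in k_t.
    lo, hi = 1e-6, resources - 1e-6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if euler_resid(lo) * euler_resid(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    k = 0.5 * (lo + hi)
    inv = k - (1.0 - d) * km1
    if inv >= upsilon * i_ss:               # gate: investment floor is slack
        return resources - k, k, inv, 0.0

    # Binding branch: impose I_t = upsilon * I_ss, recover mu_t from the Euler eq.
    inv = upsilon * i_ss
    k = (1.0 - d) * km1 + inv
    c = resources - k
    mu = 1.0 / c - rhs(k)                   # positive exactly when the floor binds
    return c, k, inv, mu
```

At the steady state the gate selects the slack branch; at a low enough productivity draw, desired investment falls below the floor and the binding branch delivers a positive multiplier.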
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
k_i  θ_i  ε_i  (i = 1, …, 30)
[Table 17 data: 30 rows of evaluation points; the numerical entries are not recoverable from the extracted text.]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
c_i  I_i  k_i  (i = 1, …, 30)
[Table 18 data: 30 rows of approximated solution values; the numerical entries are not recoverable from the extracted text.]

Table 18: Occasionally Binding Constraints Model: Values at Evaluation Points, d=(1,1,1)
ln|ê_c,i|  ln|ê_k,i|  ln|ê_θ,i|  (i = 1, …, 30)
[Table 19 data: 30 rows of log absolute error approximations; the numerical entries are not recoverable from the extracted text.]

Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
k_i  θ_i  ε_i  (i = 1, …, 30)
[Table 20 data: 30 rows of evaluation points; the numerical entries are not recoverable from the extracted text.]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
k_i  θ_i  ε_i  (i = 1, …, 30)
[Table 21 data: 30 rows of approximated solution values; the numerical entries are not recoverable from the extracted text.]

Table 21: Occasionally Binding Constraints Model: Values at Evaluation Points, d=(2,2,2)
ln|ê_c,i|  ln|ê_k,i|  ln|ê_θ,i|  (i = 1, …, 30)
[Table 22 data: 30 rows of log absolute error approximations; the numerical entries are not recoverable from the extracted text.]

Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time $t$ solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. URL https://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution $x_t^* = g(x_{t-1}, \epsilon_t)$, define the conditional expectations function
$$G(x) \equiv E\left[g(x, \epsilon)\right]$$
Then, with
$$E_t x_{t+1}^* = G(g(x_{t-1}, \epsilon_t)),$$
we have
$$M(x_{t-1}, x_t^*, E_t x_{t+1}^*, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding $z_t^*(x_{t-1}, \epsilon)$:
$$\{x_t^*(x_{t-1}, \epsilon_t)\} \in \left\{\mathbb{R}^L : \|x_t^*(x_{t-1}, \epsilon_t)\|_2 \le \bar{X} \ \forall t > 0\right\}$$
$$z_t^*(x_{t-1}, \epsilon_t) \equiv H_{-1} x_{t-1}^*(x_{t-1}, \epsilon_t) + H_0 x_t^*(x_{t-1}, \epsilon_t) + H_1 x_{t+1}^*(x_{t-1}, \epsilon_t)$$
Consequently, the exact solution has a representation given by
$$x_t^*(x_{t-1}, \epsilon) = Bx_{t-1} + \phi\psi_\epsilon \epsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^*(x_{t-1}, \epsilon)$$
and
$$E\left[x_{t+1}^*(x_{t-1}, \epsilon)\right] = Bx_t^* + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z_{t+1+\nu}^*(x_{t-1}, \epsilon)\right] + (I - F)^{-1}\phi\psi_c$$
with
$$M(x_{t-1}, x_t^*, E_t x_{t+1}^*, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t)$$
Now consider a proposed solution for the model, $x_t^p = g^p(x_{t-1}, \epsilon_t)$, and define $G^p(x) \equiv E[g^p(x, \epsilon)]$, so that
$$E_t x_{t+1}^p = G^p(g^p(x_{t-1}, \epsilon_t)), \qquad e_t^p(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \epsilon_t)$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and L, construct the family of trajectories and the corresponding $z_t^p(x_{t-1}, \epsilon)$:¹²
$$\{x_t^p(x_{t-1}, \epsilon_t)\} \in \left\{\mathbb{R}^L : \|x_t^p(x_{t-1}, \epsilon_t)\|_2 \le \bar{X} \ \forall t > 0\right\}$$
$$z_t^p(x_{t-1}, \epsilon_t) \equiv H_{-1} x_{t-1}^p(x_{t-1}, \epsilon_t) + H_0 x_t^p(x_{t-1}, \epsilon_t) + H_1 x_{t+1}^p(x_{t-1}, \epsilon_t)$$
The proposed solution has a representation given by
$$x_t^p(x_{t-1}, \epsilon) = Bx_{t-1} + \phi\psi_\epsilon \epsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^p(x_{t-1}, \epsilon)$$
and
$$E\left[x_{t+1}^p(x_{t-1}, \epsilon)\right] = Bx_t^p + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+1+\nu}^p(x_{t-1}, \epsilon) + (I - F)^{-1}\phi\psi_c$$
with
$$e_t^p(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \epsilon_t)$$
Subtracting the representations for the exact and the proposed solutions gives
$$x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\left(z_{t+\nu}^*(x_{t-1}, \epsilon) - z_{t+\nu}^p(x_{t-1}, \epsilon)\right)$$
Defining
$$\Delta z_{t+\nu}^p(x_{t-1}, \epsilon_t) \equiv z_{t+\nu}^*(x_{t-1}, \epsilon) - z_{t+\nu}^p(x_{t-1}, \epsilon)$$
we have
$$x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\, \Delta z_{t+\nu}^p(x_{t-1}, \epsilon_t)$$
$$\left\|x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu}\phi\right\| \left\|\Delta z_{t+\nu}^p(x_{t-1}, \epsilon_t)\right\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z_t^p$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.

¹² The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³ Since the future values are probability weighted averages of the $\Delta z_t^p$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z_{t+k}^p$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_{-}, \epsilon} \left\|\phi\, M(x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon)\right\|_2$$
$$\left\|x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon)\right\|_\infty \le \max_{x_{-}, \epsilon} \left\|(I - F)^{-1}\phi\, M(x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon)\right\|_2$$
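Either bound can be evaluated numerically by sweeping the weighted Euler residual of the proposed rule over a grid of test points. A schematic sketch for a scalar toy model, where B, φ, F, the rules g_p and G_p, and the residual are all illustrative assumptions rather than the paper's model:

```python
import numpy as np

# Schematic evaluation of the decision-rule error bound
#   max_{x,eps} || (I-F)^{-1} phi M(x, g_p(x,eps), G_p(g_p(x,eps)), eps) ||_2
# for an illustrative scalar model.
B, phi, F = 0.9, 1.0, 0.5            # linear reference model pieces (assumed)
g_p = lambda x, e: 0.9 * x + e       # proposed decision rule (assumed)
G_p = lambda x: 0.9 * x              # its conditional expectation (E[e] = 0)

def euler_resid(x, e):
    """Residual M(x, g_p(x,e), G_p(g_p(x,e)), e) for the toy model."""
    x1 = g_p(x, e)
    Ex2 = G_p(x1)
    # toy M: x1 should equal 0.88*x + e and Ex2 should equal 0.9*x1;
    # only the first piece is (deliberately) misspecified here
    return (x1 - (0.88 * x + e)) + (Ex2 - 0.9 * x1)

points = [(x, e) for x in np.linspace(-1, 1, 21) for e in (-0.1, 0.0, 0.1)]
scale = abs(phi) / (1.0 - F)         # ||(I-F)^{-1} phi|| in the scalar case
bound = max(scale * abs(euler_resid(x, e)) for x, e in points)
```

With the deliberate 0.02x misspecification, the residual peaks at the edges of the state grid and the bound scales it by the geometric-sum factor 1/(1 − F).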
For the local, component-by-component approximation, write
$$z_t^* = H_- x_{t-1}^* + H_0 x_t^* + H_+ x_{t+1}^*$$
$$z_t^p = H_- x_{t-1}^p + H_0 x_t^p + H_+ x_{t+1}^p$$
$$0 = M(x_{t-1}, x_t^*, x_{t+1}^*, \epsilon_t)$$
$$e_t^p(x_{t-1}, \epsilon) = M(x_{t-1}^p, x_t^p, x_{t+1}^p, \epsilon_t)$$
so that
$$\max_{x_{t-1}, \epsilon_t} \left(\|\phi \Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \epsilon_t} \left(\left\|\phi\left(H_0 (x_t^p - x_t^*) + H_+ (x_{t+1}^p - x_{t+1}^*)\right)\right\|_2\right)^2$$
With
$$M(x_{t-1}, x_t^*, x_{t+1}^*, \epsilon_t) = 0,$$
a first-order expansion gives
$$\Delta e_t^p(x_{t-1}, \epsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t}(x_t^* - x_t^p) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p)$$
When $\frac{\partial M}{\partial x_t}$ is non-singular we can write
$$(x_t^* - x_t^p) \approx \left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e_t^p(x_{t-1}, \epsilon) - \frac{\partial M}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p)\right)$$
Collecting results,
$$\left(\|\phi \Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e_t^p(x_{t-1}, \epsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_+\right)(x_{t+1}^* - x_{t+1}^p)\right)\right\|_2\right)^2$$
Approximating the derivatives using the H matrices,
$$\frac{\partial M}{\partial x_t} \approx H_0, \qquad \frac{\partial M}{\partial x_{t+1}} \approx H_+,$$
the second term vanishes, producing
$$\max_{x_{t-1}, \epsilon_t} \left(\left\|\phi\, \Delta e_t^p(x_{t-1}, \epsilon)\right\|_2\right)^2$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \epsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \epsilon_t, x_t)$:
$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+1}(x_t) \,|\, s_{t+1} = j)$$
$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+2} = j)$$
If we know the current state, we can use the known conditional expectation function for that state:
$$E_t(x_{t+1}(x_t) \,|\, s_t = 1), \ldots, E_t(x_{t+1}(x_t) \,|\, s_t = N)$$
For the next state we can compute expectations if the errors are independent:
$$\begin{bmatrix} E_t(x_{t+2}(x_t) \,|\, s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \,|\, s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = N) \end{bmatrix}$$
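For constant transition probabilities the stacked relation above is a direct Kronecker-product computation. A small numpy check with an illustrative two-state chain and illustrative per-state expectation vectors:

```python
import numpy as np

# Stacked conditional expectations under regime switching: E_t[x_{t+2} | s_t = i]
# is the probability-weighted mix of next-state expectations, computed here
# via (P kron I).  P and the expectation vectors are illustrative assumptions.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # Prob(s_{t+1} = j | s_t = i)
Ex_state1 = np.array([1.0, 2.0])    # E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1)
Ex_state2 = np.array([3.0, 5.0])    # E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 2)

stacked = np.concatenate([Ex_state1, Ex_state2])
mixed = np.kron(P, np.eye(2)) @ stacked   # stacked E_t(x_{t+2}(x_t) | s_t = i)
```

The first block of `mixed` equals 0.9·Ex_state1 + 0.1·Ex_state2, the row-wise weighted sum for state 1, and similarly for state 2.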
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, and
$$\mathrm{Prob}(s_t = j \,|\, s_{t-1} = i) = p_{ij}$$
In state $s_t = i$ (for $i \in \{0, 1\}$) the system below applies with depreciation rate $d_i$; the binding branch applies when $\mu_t > 0$ and $k_t - (1 - d_i)k_{t-1} - \upsilon I_{ss} = 0$, and the slack branch when $\mu_t = 0$ and $k_t - (1 - d_i)k_{t-1} - \upsilon I_{ss} \ge 0$:
$$\lambda_t - \frac{1}{c_t} = 0$$
$$c_t + k_t - \theta_t k_{t-1}^{\alpha} = 0$$
$$N_t - \lambda_t \theta_t = 0$$
$$\theta_t - e^{\rho \ln(\theta_{t-1}) + \epsilon} = 0$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_i) + \delta(1 - d_i)\mu_{t+1}\right) = 0$$
$$I_t - (k_t - (1 - d_i)k_{t-1}) = 0$$
$$\mu_t\left(k_t - (1 - d_i)k_{t-1} - \upsilon I_{ss}\right) = 0$$
C Algorithm Pseudo-code

Notation:
- L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
- x^p(x_{t-1}, ε_t): the proposed decision rule x_t components
- z^p(x_{t-1}, ε_t): the proposed decision rule z_t components
- X^p(x_{t-1}): the proposed decision rule conditional expectations x_t components
- Z^p(x_{t-1}): the proposed decision rule conditional expectations z_t components
- κ: one less than the number of terms in the series representation (κ ≥ 0)
- S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
- T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
- γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, the decision rule components
- G: {X(x_{t-1}), Z(x_{t-1})}, the decision rule conditional expectations components
- H: {γ, G}, the collected decision rule information
- {C_1, …, C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system R^{3N_x + N_ε} → R^{N_x}

Algorithm 1: nestInterp
input: (L, H_0, κ, {C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by default ("normConvTol" → 10^{-10}).
2. NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
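The outer loop of Algorithm 1 is a fixed-point iteration with a convergence test at the evaluation points. A toy Python rendering of the NestWhile pattern, where the contraction `do_interp_step` is an assumed stand-in for doInterp and the decision rule is a two-coefficient linear rule:

```python
# Outer fixed-point iteration in the spirit of nestInterp/NestWhile: update a
# proposed decision rule until its values at test points stop changing.
# do_interp_step is a toy contraction standing in for doInterp; normConvTol
# mirrors the 1e-10 default described above.

def do_interp_step(coeffs):
    # Stand-in for doInterp: a contraction toward the fixed point (0.5, 0.25).
    return [0.5 + 0.5 * (coeffs[0] - 0.5), 0.25 + 0.5 * (coeffs[1] - 0.25)]

def rule(coeffs, x):
    return coeffs[0] + coeffs[1] * x

test_points = [-1.0, 0.0, 1.0]
h, norm_conv_tol = [0.0, 0.0], 1e-10
for _ in range(200):                      # NestWhile with a safety cap
    h_next = do_interp_step(h)
    diff = max(abs(rule(h_next, x) - rule(h, x)) for x in test_points)
    h = h_next
    if diff < norm_conv_tol:
        break
```

Convergence is judged on rule values at the test points rather than on the coefficients directly, matching the notConvergedAt(v, T) test above.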
Algorithm 2: doInterp
input: (L, H_i, κ, {C_1, …, C_M}, S)
1. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M})
3. H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 3: genFindRootFuncs
input: (L, H, κ, {C_1, …, C_M}, S)
1. Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 4: genFindRootWorker
input: (L, G, κ, M)
1. Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2. Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x_t^p, G)
3. modEqnArgsFunc = genLilXkZkFunc(L, Z)
4. X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5. funcOfXtZt = FindRoot((x_t^p, z_t), {M(X), x_t - x_t^p})
6. funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 5: genZsForFindRoot
input: (x_t^p, L, G, κ, M)
1. Symbolically compute the κ-1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2. X_t = x_t^p
3. for i = 1; i < κ-1; i++ do
4.   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
5. end
output: {Z_{t+1}, …, Z_{t+κ-1}}

Algorithm 6: genBothX0Z0Funcs
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1. Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2. γ(x, ε) = [Bx + (I - F)^{-1}φψ_c + ψ_ε ε_t ; 0_{N_x}]
3. G(x, ε) = [Bx + (I - F)^{-1}φψ_c + ψ_ε ; 0_{N_x}]
4. H = {γ, G}
output: H
Algorithm 7: genLilXkZkFunc
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3. xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4. xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5. fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, ε_t, z_t, fCon)
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_t = Bx_{t-1} + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + F·fCon
output: x_t

Algorithm 9: genXtp1OfXt
input: (L, x_t, fCon)
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_{t+1} = Bx_t + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_{t+1}
Algorithm 10: fSumC
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1. Numerically sums the F contribution for the series.
2. S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S
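fSumC is a forward accumulation of powers of F applied to the weighted Z path. A small numpy sketch; the matrices and the Z path are illustrative assumptions:

```python
import numpy as np

# fSumC: numerically sum the F contribution  S = sum_nu F^(nu-1) phi Z_{t+nu}
# over a finite path of anticipated-shock vectors Z.  All values illustrative.
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
phi = np.array([[1.0, 0.0],
                [0.2, 1.0]])
Z_path = [np.array([0.01, -0.02]),
          np.array([0.005, 0.0]),
          np.array([0.0, 0.003])]

S = np.zeros(2)
F_pow = np.eye(2)                  # F^(nu-1) starts at the identity for nu = 1
for Z in Z_path:
    S += F_pow @ (phi @ Z)
    F_pow = F_pow @ F              # advance to the next power of F
```

Accumulating the running power of F avoids recomputing matrix powers from scratch at each term of the series.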
Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2. interpData = genInterpData({C_1, …, C_M}, S)
3. {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 12: genInterpData
input: ({C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Solves the model equations at the collocation points, producing interpolation data.
output: interpData

Algorithm 13: evaluateTriple
input: (C, (x_{t-1}, ε_t))
1. Evaluate the model equations obeying pre- and post-conditions.
output: x_t or failed
Algorithm 14: interpDataToFunc
input: (V, S)
1. Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
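A one-dimensional building block behind smolyakInterpolationPrep is Chebyshev collocation: choose extrema nodes on the variable range, form the Psi matrix of basis polynomials at those nodes, and solve for interpolation weights. A pure-Python sketch; the full anisotropic sparse-grid tensoring and the polynomial integrals that smolyakInterpolationPrep also returns are omitted:

```python
import math

def cheb_nodes(n, a, b):
    """Chebyshev-extrema collocation points mapped to [a, b]."""
    return [0.5 * (a + b) + 0.5 * (b - a) * math.cos(math.pi * i / (n - 1))
            for i in range(n)]

def cheb_basis(x, n, a, b):
    """First n Chebyshev polynomials T_0..T_{n-1} at x, rescaled from [a, b]."""
    z = (2.0 * x - a - b) / (b - a)
    T = [1.0, z]
    for _ in range(n - 2):
        T.append(2.0 * z * T[-1] - T[-2])
    return T[:n]

def fit(f, n, a, b):
    """Solve Psi * w = f(nodes) by Gaussian elimination (tiny n only)."""
    xs = cheb_nodes(n, a, b)
    A = [cheb_basis(x, n, a, b) + [f(x)] for x in xs]     # augmented Psi matrix
    for col in range(n):                                   # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            A[r] = [a_r - m * a_c for a_r, a_c in zip(A[r], A[col])]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):                         # back substitution
        w[r] = (A[r][n] - sum(A[r][c] * w[c] for c in range(r + 1, n))) / A[r][r]
    return w, xs

# Interpolate an illustrative decision-rule-like function on [0.5, 1.5].
w, xs = fit(lambda k: math.log(k) + 0.1 * k, 6, 0.5, 1.5)
approx = lambda x: sum(wi * ti for wi, ti in zip(w, cheb_basis(x, 6, 0.5, 1.5)))
```

The interpolant reproduces the function exactly at the collocation nodes and stays close between them, the property the Psi matrix construction relies on.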
Algorithm 17: genSolutionErrXtEps
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 18: genSolutionErrXtEpsZero
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 19: genSolutionPath
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
- Algorithm Overview
  - Single Equation System Case
  - Multiple Equation System Case
- Approximating a Known Solution: η = 1, U(c) = Log(c)
- Approximating an Unknown Solution: η = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
1 Introduction

This paper provides a new technique for the solution of discrete time dynamic stochastic infinite-horizon economies with bounded time-invariant solutions. It describes how to obtain decision rules satisfying a system of first-order conditions ("Euler equations") and how to assess their accuracy.1 For an overview of techniques for solving nonlinear models see (Judd 1992; Christiano and Fisher 2000; Doraszelskiy and Judd 2004; Gaspar and Judd 1997; Judd et al. 2014; Marcet and Lorenzoni 1999; Judd et al. 2011; Maliar and Maliar 2001; Binning and Maih 2017; Judd et al. 2017b).
This new series representation for bounded time series, adapted from (Anderson 2010),

x(x_{t-1}, ε_t) = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(x_{t-1}, ε_t)
makes it possible to construct a series representation for any bounded time invariant discrete-time map. As yet, the author has found no comparable use of a linear reference dynamical system for conveniently transforming one bounded infinite dimensional series into another.
It turns out that this representation provides a way to use Euler equation errors to approximate the error in components of decision rules.2 This paper provides approximation formulas explicitly relating the Euler equation errors to the components of errors in the decision rule.
In addition, the series representation provides exploitable constraints on model solutions that enhance an algorithm's reliability. Practitioners using projection methods and other techniques have often found models difficult to solve: it can be difficult to find an initial guess for a solution from which the calculations converge, and sometimes the convergent decision rules lack appropriate long run properties like stability. Furthermore, economists have an increasing interest in crafting models with occasionally binding constraints and regime switching, where, unfortunately, these problems seem to arise more often. The series representation overcomes these problems by making it easier to choose an initial guess that converges and by ensuring that the solutions have appropriate long run properties.
The numerical implementation of the algorithm builds upon the work of (Judd et al. 2011, 2017b). In particular, it uses the an-isotropic Smolyak method and the adaptive parallelotope method (Judd et al. 2014), and precomputes all integrals required by the calculations (Judd et al. 2017b). The algorithm should scale well to large models, as many of the algorithm's components can be computed in parallel, with limited amounts of information flowing between compute nodes.
The paper is organized as follows. Section 2 presents a useful new series representation for any bounded time series. Section 3 shows how to use this series representation to characterize time invariant maps. Section 4 provides formulas for approximating dynamic stochastic model errors in proposed solutions. Section 5 presents a new solution algorithm for
1The series representation presented here should also prove useful for applications based on value function iteration, but I defer treating this topic to future work.
2Others have also studied approximation error for dynamic stochastic models (Judd et al. 2017a; Santos and Peralta-Alva 2005; Santos 2000).
improving proposed solutions. Section 6 applies the technique to a model with occasionally binding constraints. Section 7 concludes.
2 A New Series Representation For Bounded Time Series
Since (Blanchard and Kahn 1980), numerous authors have developed solution algorithms and formulas for solving linear rational expectations models. It turns out that the solutions associated with these linear saddle point models, by quantifying the potential impact of anticipated future shocks, can play a new and somewhat surprising role in characterizing the solutions for highly nonlinear models. In preparation for discussing how these calculations can be useful for representing time invariant maps, this section describes the relationship between linear reference models and bounded time series.
2.1 Linear Reference Models and a Formula for "Anticipated Shocks"

For any linear homogeneous L-dimensional deterministic system that produces a unique stable solution,

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = 0,

inhomogeneous solutions

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c

can be computed as

x_t = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c

where

φ = (H_0 + H_1 B)^{-1}  and  F = −φ H_1

(Anderson 2010).
It will be useful to collect the components of this linear reference model solution, L ≡ {H, ψ_ε, ψ_c, B, φ, F}, for later use.3
Theorem 2.1. Consider any bounded discrete time path

x_t ∈ R^L with ||x_t||_∞ ≤ X̄  ∀t > 0    (1)

Now, given the trajectories (1), define z_t as

z_t ≡ H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} − ψ_c − ψ_ε ε_t

3In Section 2.2 I will use these matrices to construct a series representation for any bounded path, and in Section 5 to compute improvements in proposed model solutions.
One can then express the x_t solving

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c + z_t

using

x_t = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}    (2)

and

x_{t+k+1} = B x_{t+k} + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+k+ν+1}  ∀t, k ≥ 0

Proof. See (Anderson 2010).
Consequently, given a bounded time series (1) and a stable linear homogeneous system L, one can easily compute a corresponding series representation like (2) for x_t. Interestingly, the linear model H, the constant term ψ_c, and the impact of the stochastic shocks ψ_ε can be chosen rather arbitrarily; the only constraints are the existence of a saddle-point solution for the linear system and that all z-values fall in the row space of H_1. Note that since the eigenvalues of F are all less than one in magnitude, the formula supports the intuition that distant future shocks influence current conditions less than imminent shocks.
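The z ↔ x correspondence in Theorem 2.1 can be checked numerically. The following sketch is my own illustrative one-dimensional example (not code from the paper, and the coefficients are arbitrary assumptions): it builds a scalar linear reference model, computes z_t from an arbitrary bounded path, and reconstructs the path with the truncated series (2).

```python
import math

# Scalar linear reference model h_m*x[t-1] + h0*x[t] + h1*x[t+1] = 0
# (illustrative coefficients; psi_c = psi_eps = 0)
h_m, h0, h1 = 0.5, -1.3, 0.5

# B is the stable root of h1*B^2 + h0*B + h_m = 0
disc = math.sqrt(h0 * h0 - 4.0 * h1 * h_m)
B = min((-h0 - disc) / (2 * h1), (-h0 + disc) / (2 * h1), key=abs)
phi = 1.0 / (h0 + h1 * B)   # phi = (H0 + H1*B)^{-1}
F = -phi * h1               # F = -phi*H1, with |F| < 1 here

# An arbitrary bounded path x_0, x_1, ...
T = 200
x = [math.sin(0.7 * t) + 0.3 * math.cos(3.1 * t) for t in range(T)]

# z_t computed from the path as in Theorem 2.1 (z[0] corresponds to t = 1)
z = [h_m * x[t - 1] + h0 * x[t] + h1 * x[t + 1] for t in range(1, T - 1)]

# Reconstruct x_1 from x_0 and the z series, truncated at K terms
K = 60
recon = B * x[0] + sum(F**nu * phi * z[nu] for nu in range(K))
print(abs(recon - x[1]))  # tiny: the series reproduces the path
```

The same check works for any bounded path; only boundedness and the stability of the reference model matter.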
The transformation {x_t, x_{t+1}, x_{t+2}, ...} → L(x_{t-1}, ε_t) → {z_t, z_{t+1}, z_{t+2}, ...} is invertible, {z_t, z_{t+1}, z_{t+2}, ...} → L(x_{t-1}, ε_t)^{-1} → {x_t, x_{t+1}, x_{t+2}, ...}, and the inverse can be interpreted as giving the impact of "fully anticipated future shocks" on the path of x_t in a linear perfect foresight model. Note that the reference model is deterministic, so that the z_t(x_{t-1}, ε_t) functions will fully account for any stochastic aspects encountered in the models I analyze.
A key feature to note is that the series representation can accommodate arbitrarily complex time series trajectories so long as these trajectories are bounded. Later, this observation will give us some confidence in the robustness of the algorithm for constructing series representations for unknown families of functions satisfying complicated systems of dynamic non-linear equations.
2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series

Consider the following matrix constructed from "almost" arbitrary coefficients

[H_{-1} H_0 H_1] =
  [ 0.1   0.5   -0.5 |  1    0.4   0.9 |  1    1    0.9 ]
  [ 0.2   0.2   -0.5 |  7    0.4   0.8 |  3    2    0.6 ]
  [ 0.1  -0.25  -1.5 |  2.1  0.47  1.9 |  2.1  2.1  3.9 ]    (3)

with ψ_c = ψ_ε = 0, ψ_z = I. I have chosen these coefficients so that the linear model has a unique stable solution, and, since H_1 is full rank, its column space will span the column space for all non-zero values of z_t ∈ R^3.4 It turns out that the algorithm is very forgiving about the choice of L.

4The flexibility in choosing L will become more important later in the paper, where we must choose a linear reference model in a context where we will have little guidance about what might make a good choice.
The solution for this system is

B = [ -0.0282384  -0.0552487   0.00939369
      -0.0664679  -0.700462   -0.0718527
      -0.163638   -1.39868     0.331726  ]

φ = [  0.0210079   0.15727    -0.0531634
       1.20712    -0.0553003  -0.431842
       2.58165    -0.183521   -0.578227  ]

F = [ -0.381174   -0.223904    0.0940684
      -0.134352   -0.189653    0.630956
      -0.816814   -1.00033     0.0417094 ]
It is straightforward to apply (2) to the following three bounded families of time series paths:

x_{1t} = (x_{t-1,1} + ε_t) D_π(t)

where D_π(t) gives the t-th digit of π;

x_{2t} = x_{t-1}(1 + ε_t)^2 (−1)^t

and

x_{3t} = ε̃_t

where the ε̃_t are a sequence of pseudo-random draws from the uniform distribution U(−4, 4), produced subsequent to the selection of a random seed,

randomSeed(x_{t-1} + ε_t).
The three panels in Figure 1 display these three time series, generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set of trajectories oscillates between two values determined by a nonlinear function of the initial conditions x_{t-1} and the shock. The third set of trajectories characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model L.5
Though unintuitive, and perhaps lacking a compelling interpretation, the values for z_t exist, provide a series for the x_t, and are easy to compute. One can apply the calculations for any given initial condition to produce a z series exactly replicating any bounded set of trajectories. Consequently, these numerical linear algebra calculations can easily transform a z_t series into an x_t series and vice versa. Since this can be done for any path starting from any particular initial condition, any discrete time invariant map that generates families of bounded paths has a series representation.
5Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
[Figure 1 panels: Pi Digits Path, Oscillatory Path, Pseudo Random Path]

Figure 1: Arbitrary Bounded Time Series Paths
2.3 Assessing x_t Errors

The formula (2) computes the impact on the current state of fully anticipated future shocks. It characterizes the impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of z_t, or one might have imprecise values of z_t along the path.
2.3.1 Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating x_t by truncating the series at a finite number of terms:

x_t(k) ≡ B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{s=0}^k F^s φ z_{t+s}

We can bound the series approximation truncation error. Since

Σ_{s=k+1}^∞ F^s φψ_z = (I − F)^{-1} F^{k+1} φψ_z,

||x(x_{t-1}, ε) − x(x_{t-1}, ε, k)||_∞ ≤ ||(I − F)^{-1} F^{k+1} φψ_z||_∞ (||H_{-1}||_∞ + ||H_0||_∞ + ||H_1||_∞) X̄
Figure 2 shows that this truncation error bound can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between x_t from the full series and from a truncated series, for different truncation lengths. The blue line shows the infinity norm of the actual difference between the x_t computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
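The conservatism of the bound is easy to reproduce. The sketch below is my own scalar illustration (arbitrary assumed coefficients, with ψ_z = 1), comparing the actual truncation error with the bound above:

```python
import math

# Illustrative scalar reference model h_m*x[t-1] + h0*x[t] + h1*x[t+1] = 0
h_m, h0, h1 = 0.5, -1.3, 0.5
disc = math.sqrt(h0 * h0 - 4.0 * h1 * h_m)
B = min((-h0 - disc) / (2 * h1), (-h0 + disc) / (2 * h1), key=abs)
phi = 1.0 / (h0 + h1 * B)
F = -phi * h1

T = 300
x = [math.sin(0.7 * t) for t in range(T)]
xbar = max(abs(v) for v in x)
z = [h_m * x[t - 1] + h0 * x[t] + h1 * x[t + 1] for t in range(1, T - 1)]

full = B * x[0] + sum(F**nu * phi * z[nu] for nu in range(len(z)))
k = 10
trunc = B * x[0] + sum(F**nu * phi * z[nu] for nu in range(k + 1))
actual = abs(full - trunc)
# ||(I-F)^{-1} F^{k+1} phi psi_z|| * (|H_-1|+|H_0|+|H_1|) * ||x||_inf
bound = abs(F ** (k + 1) * phi / (1.0 - F)) * (abs(h_m) + abs(h0) + abs(h1)) * xbar
print(actual, bound)  # the bound holds but overstates the actual error
```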
Figure 2: x_t Error Approximation Versus Actual Error
2.3.2 Path Error

We can assess the impact of perturbed values for z_t by computing the maximum discrepancy Δz_t impacting the z_t, and applying the partial sum formula. One can approximate x_t using

x_t ≡ B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{s=0}^∞ F^s φ (z_{t+s} + Δz_{t+s})

Note yet again that, for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for those realizations which are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible Δz_t in the formula:

||x(x_{t-1}, ε) − x(x_{t-1}, ε, k)||_∞ ≤ ||(I − F)^{-1} φψ_z||_∞ ||Δz_t||_∞
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).6 As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.
So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that gives us an approximation for the model solutions. This section presents a familiar RBC model as an example.7
3.1 An RBC Example

Consider the model described in (Maliar and Maliar 2005):8

max u(c_t) + E_t Σ_{ν=1}^∞ δ^ν u(c_{t+ν})

c_t + k_t = (1 − d) k_{t-1} + θ_t f(k_{t-1}),  f(k) = k^α

u(c) = (c^{1−η} − 1) / (1 − η)
The first order conditions for the model are

1/c_t^η = αδ k_t^{α−1} E_t(θ_{t+1} / c_{t+1}^η)

c_t + k_t = θ_t k_{t-1}^α

θ_t = θ_{t-1}^ρ e^{ε_t}
It is well known that when η = d = 1 we have a closed form solution (Lettau 2003):

x_t(x_{t-1}, ε_t) ≡ D(x_{t-1}, ε_t) ≡ [ c_t(x_{t-1}, ε_t); k_t(x_{t-1}, ε_t); θ_t(x_{t-1}, ε_t) ]
  = [ (1 − αδ) θ_t k_{t-1}^α;  αδ θ_t k_{t-1}^α;  θ_{t-1}^ρ e^{ε_t} ]
6Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter 2000; Koop et al. 1996).

7Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.

8Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
For mean zero iid ε_t, we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

D̄(x_{t+k}) ≡ [ E_t(c_{t+k+1} | θ_{t+k}, k_{t+k}); E_t(k_{t+k+1} | θ_{t+k}, k_{t+k}); E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k}) ]
  = [ (1 − αδ) k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ;  αδ k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ;  e^{σ²/2} θ_{t+k}^ρ ]
For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule D and the decision rule conditional expectation D̄ can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t):

x_t(x_{t-1}, ε_t) = D(x_{t-1}, ε_t)
x_{t+k+1}(x_{t-1}, ε_t) = D̄(x_{t+k}(x_{t-1}, ε_t))

and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)):

z_{t+k} ≡ H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}

Formula (2) requires that

x(x_{t-1}, ε_t) = B [c_{t-1}; k_{t-1}; θ_{t-1}] + φψ_ε ε_t + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(k_{t-1}, θ_{t-1}, ε_t)
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3):

α = 0.36, δ = 0.95, ρ = 0.95, σ = 0.01

we have

[c_ss; k_ss; θ_ss] = [0.360408; 0.187324; 1.001],  [k_{t-1}; θ_{t-1}; ε_t] = [0.18; 1.1; 0.01]    (4)
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation (4).
By including enough terms, we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude of the error B_n, shown in red, is again very pessimistic compared to the actual error Z_n = ||x_t − x̃_t||_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine-precision-accurate value for the time t state vector.
Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves the approximation Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter, but still pessimistic, bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
Figure 3: Model variable values
Figure 4: RBC Known Solution Truncation: Approximate Error (B_n) Versus Actual (Z_n), "Almost Arbitrary" Linear Reference Model
Figure 5: RBC Known Solution Truncation: Approximate Error (B_n) Versus Actual (Z_n), Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h_{i0}^{det}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [ h_{ij}^{det}(x_{t-1}, x_t, ε_t) h_{ij}^{nondet}(x_{t+1}) ] = 0
This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as
h_{10}^{det}(·) = 1/c_t^η,  h_{11}^{det}(·) = αδ k_t^{α−1},  h_{11}^{nondet}(·) = E_t(θ_{t+1} / c_{t+1}^η)

h_{20}^{det}(·) = c_t + k_t − θ_t k_{t-1}^α,  h_{21}^{det}(·) = 0

h_{30}^{det}(·) = ln θ_t − (ρ ln θ_{t-1} + ε_t),  h_{31}^{det}(·) = 0
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c_t^η

substituting N_{t+1} for θ_{t+1} / c_{t+1}^η in the first equation, and linearizing the RBC model about the ergodic mean.
This leads to

[H_{-1} H_0 H_1] =
  [ 0   0         0  0         | -7.6986  9.47963  0  0         | 0  0  -0.999  0 ]
  [ 0  -1.05263   0  0         |  1       1        0  -0.547185 | 0  0   0      0 ]
  [ 0   0         0  0         |  7.7063  0        1  -2.77463  | 0  0   0      0 ]
  [ 0   0         0  -0.949953 |  0       0        0   1        | 0  0   0      0 ]

with

ψ_ε = [0; 0; 0; 1],  ψ_z = I

These coefficients produce a unique stable linear solution:
B = [ 0   0.692632   0   0.342028
      0   0.36       0   0.177771
      0  -5.33763    0   0
      0   0          0   0.949953 ]

φ = [ -0.0444237   0.658     0   0.360048
       0.0444237   0.342     0   0.187137
       0.342342   -5.07075   1   0
       0           0         0   1 ]

F = [ 0  0  -0.0443793  0
      0  0   0.0443793  0
      0  0   0.342      0
      0  0   0          0 ]

ψ_c = [ -3.7735; -0.197184; 2.77741; 0.0500976 ]
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz_t^p, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

Then, with

E_t x_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)
x_t(x_{t-1}, ε_t) ∈ R^L,  ||x_t(x_{t-1}, ε_t)||_2 ≤ X̄  ∀t > 0

Now consider a proposed solution for the model, x_t^p = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x_{t+1}^p = G^p(g^p(x_{t-1}, ε_t))
e_t^p(x_{t-1}, ε) ≡ M(x_{t-1}, x_t^p, E_t x_{t+1}^p, ε_t)

Then

||x_t(x_{t-1}, ε) − x_t^p(x_{t-1}, ε)||_∞ ≤ max_{x,ε} ||(I − F)^{-1} φ M(x, g^p(x, ε), G^p(g^p(x, ε)), ε)||_2

which can be approximated by

max_{x_{t-1}, ε_t} ||φ e_t^p(x_{t-1}, ε)||_2

For proof see Appendix A.

9This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
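To make the maximand concrete, here is a deterministic σ → 0 sketch of my own (not the paper's implementation): a proposed rule that perturbs the true RBC savings rate αδ produces nonzero Euler equation errors e^p over a grid of states, while the exact rule's errors vanish.

```python
alpha, delta, rho = 0.36, 0.95, 0.95

def euler_error(k_prev, theta, kappa):
    # proposed rule k_t = kappa*theta_t*k_{t-1}^alpha; exact when kappa = alpha*delta
    k = kappa * theta * k_prev**alpha
    c = theta * k_prev**alpha - k
    tnext = theta**rho                                   # sigma -> 0: eps' = 0
    cnext = tnext * k**alpha - kappa * tnext * k**alpha  # next-period consumption
    return 1.0 / c - alpha * delta * k**(alpha - 1.0) * (tnext / cnext)

# coarse stand-in for the evaluation points (the paper uses Niederreiter points)
grid = [(0.1 + 0.02 * i, 0.9 + 0.02 * j) for i in range(10) for j in range(10)]
e_exact = max(abs(euler_error(kp, th, alpha * delta)) for kp, th in grid)
e_bad = max(abs(euler_error(kp, th, 1.05 * alpha * delta)) for kp, th in grid)
print(e_exact, e_bad)  # ~0 for the exact rule, clearly positive for the perturbed rule
```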
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) − x^p(x_{t-1}, ε_t)

Compute the conditional expectations functions for the decision rule, and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in (2) to approximate e:

X^p(x) ≡ E[x^p(x, ε)]
x_{t+k+1} = X^p(x_{t+k}),  k = 1, ..., K

We can define functions z_t^p by

z_t^p ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t)
z_{t+k}^p ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)

e(x_{t-1}, ε_t) ≡ Σ_{ν=0}^K F^ν φ z_{t+ν}^p
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015

producing the following 10 model specifications:

  α       δ         η   ρ         σ
  0.35    0.92875   1   0.957188  0.00875
  0.275   0.9725    1   0.906875  0.01375
  0.375   0.8675    1   0.924375  0.01125
  0.225   0.94625   1   0.891563  0.00625
  0.325   0.91125   1   0.944063  0.006875
  0.2625  0.922187  1   0.98125   0.011875
  0.3625  0.887187  1   0.85875   0.014375
  0.2125  0.965938  1   0.965938  0.009375
  0.3125  0.860938  1   0.878438  0.008125
  0.2375  0.904688  1   0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.
Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.
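The regression step itself is ordinary least squares of actual on predicted errors. A self-contained sketch with synthetic stand-in data (the real experiment uses the 30 ergodic-set points; the numbers below are hypothetical): a predictor that tracks the actual error yields β near one and R² near one, the pattern the figures report.

```python
import random

random.seed(3)
n = 30
# stand-in 'predicted' errors from the formula and 'actual' errors
e_hat = [random.uniform(-0.01, 0.01) for _ in range(n)]
e_act = [0.001 + 1.0 * v + random.gauss(0.0, 0.0005) for v in e_hat]

# OLS: e_act_i = a + b*e_hat_i + u_i
mx = sum(e_hat) / n
my = sum(e_act) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(e_hat, e_act)) / \
    sum((xi - mx) ** 2 for xi in e_hat)
a = my - b * mx
ss_res = sum((yi - a - b * xi) ** 2 for xi, yi in zip(e_hat, e_act))
ss_tot = sum((yi - my) ** 2 for yi in e_act)
r2 = 1.0 - ss_res / ss_tot
print(a, b, r2)
```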
10I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
[Figure 6 panels: c error regressions for K = 0, 1, 10, 30]

Figure 6: c_t (σ², R²)
[Figure 7 panels: k error regressions for K = 0, 1, 10, 30]

Figure 7: k_t (σ², R²)
[Figure 8 panels: c error regressions for K = 0, 1, 10, 30]

Figure 8: c_t (α, β)
[Figure 9 panels: k error regressions for K = 0, 1, 10, 30]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c)

where each

C_i ≡ ( b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]) )

represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition, and the c_i will be a Boolean function post-condition, for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values: d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
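A minimal sketch of the gating logic (a toy example of my own with one state variable and an occasionally binding non-negativity constraint; the function names are hypothetical): each candidate system carries gates, and d selects the candidate whose post-condition passes.

```python
def solve_gated(x_prev, eps):
    """Toy occasionally binding constraint: x_t = max(0.5*x_prev + eps, 0)."""
    # C1: unconstrained system; post-condition c1: x >= 0
    def system1():
        x = 0.5 * x_prev + eps
        return x, x >= 0.0
    # C2: constraint binds (x = 0); post-condition c2: latent value <= 0
    def system2():
        return 0.0, 0.5 * x_prev + eps <= 0.0
    # b_i: both pre-conditions are trivially true here; d picks the candidate
    # whose post-condition holds (complementary slackness makes it unique
    # away from the kink)
    candidates = [s() for s in (system1, system2)]
    accepted = [x for x, ok in candidates if ok]
    return accepted[0]

print(solve_gated(1.0, 0.2), solve_gated(1.0, -0.8))
```

In the paper's setting the systems are nonlinear equation solves rather than closed forms, but the pre-condition/solve/post-condition/select structure is the same.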
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations of future values in the proposed solution with the time t value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as for the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
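For contrast, the usual approach alone looks like the time-iteration sketch below (my own simplification: a deterministic growth model, piecewise-linear interpolation in place of Smolyak polynomials, bisection in place of a Newton solver, and no z_t constraints). It converges here to the known policy k' = αδ k^α, but it carries none of the boundedness safeguards discussed above.

```python
alpha, delta = 0.36, 0.95
step = 0.01
grid = [0.05 + step * i for i in range(40)]  # capital grid

def interp(g, k):
    # piecewise-linear interpolation of the policy g on the grid
    if k <= grid[0]:
        return g[0]
    if k >= grid[-1]:
        return g[-1]
    i = int((k - grid[0]) / step)
    w = (k - grid[i]) / step
    return (1.0 - w) * g[i] + w * g[i + 1]

g = [0.3 * k**alpha for k in grid]  # initial guess for k' = g(k)
for _ in range(200):
    new_g = []
    for k in grid:
        lo, hi = 1e-8, k**alpha - 1e-8
        for _ in range(60):  # bisection on the Euler equation residual
            kp = 0.5 * (lo + hi)
            c = k**alpha - kp
            cp = kp**alpha - interp(g, kp)
            resid = 1.0 / c - alpha * delta * kp**(alpha - 1.0) / cp
            if resid > 0.0:  # saving too much: lower kp
                hi = kp
            else:
                lo = kp
        new_g.append(0.5 * (lo + hi))
    g = new_g

err = max(abs(gk - alpha * delta * k**alpha) for gk, k in zip(g, grid))
print(err)  # small: converged near the closed-form policy
```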
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.
There are a number of new steps. One must specify a linear reference model. One must also choose a value for K, the number of terms in the series representation, and there are trade-offs: fewer terms means more truncation error; more terms yield more accuracy when the decision rule is accurate, but are computationally more expensive. The initial guess is typically some distance away from the solution, so the terms in the series will initially be incorrect. The Smolyak grid adds more error to the calculation: if there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. (A description of the actual multi-system implementation follows.) We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t)

Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t)

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)]

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.

We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [x_{t-1}; x^p(x_{t-1}, ε); X^p(x^p(x_{t-1}, ε))] − ψ_ε ε_t − ψ_c

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)]
- Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

  x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0

- Using the z^p(x_{t-1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

  x_t = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})

  E[x_{t+1}] = B x_t + (I − F)^{-1} φψ_c + Σ_{ν=1}^∞ F^{ν−1} φ Z(x_{t+ν})  ∀t, k ≥ 0

- Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p'}(x_{t-1}, ε), z^{p'}(x_{t-1}, ε).

- Set x^p(x_{t-1}, ε) = x^{p'}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p'}(x_{t-1}, ε); repeat the loop.
532 Multiple Equation System Case
An outline of the current Mathematica implementation of the algorithm follows Table 1provides notation Figure 10 provides a graphic depiction of the function call tree For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic-grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})}, collects the decision rule conditional expectations components
H: {γ, G}, collects decision rule information
{C_1, …, C_M, d}(x_{t-1}, ε_t, x_t, E[x_{t+1}])|(b, c): the equation system, ℝ^{3N_x+N_ε} → ℝ^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
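The grid-point computations are embarrassingly parallel: each collocation point can be solved independently. A minimal Python illustration using the standard library (the Mathematica implementation uses its own parallel primitives; `solve_point` is a stand-in, not the actual routine):

```python
from concurrent.futures import ThreadPoolExecutor

def solve_point(point):
    """Stand-in for solving the model equations at one collocation point."""
    k, theta = point
    return k * theta  # placeholder computation

grid = [(0.18, 1.00), (0.19, 1.01), (0.20, 0.99)]

# Each grid point is independent, so the map can run in parallel.
with ThreadPoolExecutor() as pool:
    interp_data = list(pool.map(solve_point, grid))
```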
nestInterp
    doInterp
        genFindRootFuncs
            genFindRootWorker
                genZsForFindRoot
                genLilXkZkFunc
                    fSumC
                    genXtOfXtm1
                    genXtp1OfXt
        makeInterpFuncs
            genInterpData
                evaluateTriple
            interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_M, d}(x_{t-1}, ε_t, x_t, E[x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs: The function can use the series constraint (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value of x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t[x_{t+1}], ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).
makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
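These two steps, solve at collocation points and then interpolate, can be sketched with a one-dimensional Chebyshev fit standing in for the Smolyak basis (a deliberate simplification: the actual code uses anisotropic Smolyak polynomials in several dimensions, and the names here are illustrative):

```python
import numpy as np

def make_interp_func(solve_at, lo, hi, degree=8):
    """Solve the model at Chebyshev collocation points on [lo, hi] and
    return an interpolating function for the decision rule."""
    k = np.arange(degree + 1)
    nodes = 0.5 * (lo + hi) + 0.5 * (hi - lo) * np.cos(
        (2 * k + 1) * np.pi / (2 * (degree + 1)))
    data = np.array([solve_at(x) for x in nodes])        # genInterpData step
    return np.polynomial.chebyshev.Chebyshev.fit(
        nodes, data, degree, domain=[lo, hi])            # interpDataToFunc step

# Stand-in "solver": a smooth decision rule to recover.
rule = make_interp_func(lambda x: np.log(1.0 + x), 0.1, 0.3)
```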
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for a case in which the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
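The transformation can be obtained by principal components of the simulated ergodic cloud, as in Judd et al. (2013): center the (k_t, θ_t, ε_t) draws and rotate by the eigenvectors of their covariance (the rotation plays the role of the V matrix reported below). A hedged two-dimensional sketch with made-up simulated data, not the paper's draws:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the 200 simulated (k_t, theta_t) ergodic draws.
sim = rng.multivariate_normal([0.19, 1.0],
                              [[1.2e-4, 3.5e-4], [3.5e-4, 1.5e-3]], size=200)

# Principal-component (parallelotope) transformation: center, then rotate.
center = sim.mean(axis=0)
cov = np.cov(sim, rowvar=False)
eigvals, V = np.linalg.eigh(cov)       # columns of V span the principal axes
transformed = (sim - center) @ V       # de-correlated coordinates
```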
Figure 11: The Ergodic Values for k_t, θ_t
X̄ = (0.18853, 1.00466, 0)

σ_X = (0.0112443, 0.0387807, 0.01)

V = [ −0.707107  −0.707107  0
      −0.707107   0.707107  0
       0          0         1 ]

max U = (3.02524, 0.199124, 3)

min U = (−3.16879, −0.17629, −3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2 RBC Known Solution Model Evaluation Points d=(111)
c1 k1 θ1
c30 k30 θ30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3 RBC Known Solution Values at Evaluation Points d=(111)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4 RBC Known Solution Actual Errors d=(111)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6 RBC Known Solution Model Evaluation Points d=(222)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution Actual Errors, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
K          d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries  −6.51367      −12.2771      −16.8947
0          −6.0793       −6.26833      −6.29843
1          −6.00805      −6.24202      −6.49835
2          −6.52219      −7.43876      −7.04785
3          −6.57578      −8.03864      −8.32589
4          −6.62831      −9.1522       −9.49314
5          −6.77305      −10.5516      −10.4247
6          −6.38499      −11.6399      −11.455
7          −6.63849      −11.3627      −13.0213
8          −6.38735      −11.7288      −13.9976
9          −6.46759      −11.9826      −15.3372
10         −6.45966      −12.1345      −15.4157
10         −6.77776      −11.5025      −15.8211
15         −6.67778      −12.4593      −15.8899
20         −6.40802      −11.7296      −17.1652
25         −6.53265      −12.1441      −17.1518
30         −6.81949      −11.6097      −16.3748
35         −6.33508      −12.1446      −16.6963
40         −6.52442      −11.4042      −15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
c1 k1 θ1
c30 k30 θ30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
c1 k1 θ1

c30 k30 θ30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches^11 (Holden 2015, Guerrieri and Iacoviello 2015b, Benigno et al. 2009, Hintermaier and Koeniger 2010b, Brumm and Grill 2010, Nakov 2008, Haefke 1998, Nakata 2012, Gordon 2011, Billi 2011, Hintermaier and Koeniger 2010a, Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

The optimization problem becomes

max u(c_t) + E_t Σ_{t=0}^{∞} δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^{ρ} e^{ε_t}
f(k_t) = k_t^{α}
u(c) = (c^{1−η} − 1) / (1 − η)

with Lagrangian

L = u(c_t) + E_t Σ_{t=0}^{∞} δ^t u(c_{t+1})
    + Σ_{t=0}^{∞} δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1})))
    + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss)

so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss

The first order conditions become

1/c_t^{η} + λ_t = 0
λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ (1 − d) λ_{t+1} + μ_t − δ (1 − d) μ_{t+1} = 0
[Footnote 11: The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.]

For example, the Euler equations for the neoclassical growth model can be written as
if (μ_t > 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^{α}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε_t}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ (1 − d) + δ (1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)

if (μ_t = 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^{α}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε_t}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ (1 − d) + δ (1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
c1 I1 k1
c30 I30 k30
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
c1 I1 k1

c30 I30 k30
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)].

Then, with

E_t x*_{t+1} = G(g(x_{t-1}, ε_t)),

we have

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t).

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t-1}, ε):

x*_t(x_{t-1}, ε_t) ∈ ℝ^L,  ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0

z*_t(x_{t-1}, ε_t) ≡ H_{-1} x*_{t-1}(x_{t-1}, ε_t) + H_0 x*_t(x_{t-1}, ε_t) + H_1 x*_{t+1}(x_{t-1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t-1}, ε) = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t-1}, ε)

and

E[x*_{t+k+1}(x_{t-1}, ε)] = B x*_{t+k} + Σ_{ν=0}^∞ F^ν φ E[z*_{t+k+ν+1}(x_{t-1}, ε)] + (I − F)^{-1} φψ_c

with

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t).

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t). Define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t)),
e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t-1}, ε):¹²

x^p_t(x_{t-1}, ε_t) ∈ ℝ^L,  ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0

z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_1 x^p_{t+1}(x_{t-1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+k+1}(x_{t-1}, ε)] = B x^p_{t+k} + Σ_{ν=0}^∞ F^ν φ E[z^p_{t+k+ν+1}(x_{t-1}, ε)] + (I − F)^{-1} φψ_c

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Subtracting the two representations gives

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε)).

Defining

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε),

we have

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞.

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.¹³ The exact solution satisfies the model equations exactly; the error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz_t‖ ≤ max_{x₋, ε} ‖φ M(x₋, g^p(x₋, ε), G^p(g^p(x₋, ε)), ε)‖_2

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x₋, ε} ‖(I − F)^{-1} φ M(x₋, g^p(x₋, ε), G^p(g^p(x₋, ε)), ε)‖_2

Here

z*_t = H₋ x*_{t-1} + H₀ x*_t + H₊ x*_{t+1}
z^p_t = H₋ x^p_{t-1} + H₀ x^p_t + H₊ x^p_{t+1}
0 = M(x_{t-1}, x*_t, x*_{t+1}, ε_t)
e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t)

so that

max_{x_{t-1}, ε_t} (‖φ Δz_t‖_2)² = max_{x_{t-1}, ε_t} (‖φ(H₀(x*_t − x^p_t) + H₊(x*_{t+1} − x^p_{t+1}))‖_2)²

with

M(x_{t-1}, x*_t, x*_{t+1}, ε_t) = 0.

A first order expansion of the model equations gives

Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1}).

When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular, we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})).

Collecting results,

(‖φ Δz_t‖_2)² = (‖φ(H₀ (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})) + H₊(x*_{t+1} − x^p_{t+1}))‖_2)²
             = (‖φ(H₀ (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − (H₀ (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) − H₊)(x*_{t+1} − x^p_{t+1}))‖_2)².
Approximating the derivatives using the H matrices,

∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H₀,  ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H₊,

produces

max_{x_{t-1}, ε_t} ‖φ Δe^p_t(x_{t-1}, ε)‖_2.
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for the state:

[ E_t(x_{t+1}(x_t) | s_t = 1); … ; E_t(x_{t+1}(x_t) | s_t = N) ]

For the next state, we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1); … ; E_t(x_{t+2}(x_t) | s_t = N) ] = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); … ; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]
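A scalar sketch of this composition with two regimes. The linear per-regime rules E[x_{t+1} | x_t, s_{t+1} = j] = a[j]·x_t and the transition matrix below are illustrative assumptions; with one state variable, the (P ⊗ I) weighting reduces to P.

```python
# Two-regime scalar example: per-regime conditional expectation rules a[j]*x
# and constant transition probabilities (all numbers are illustrative).
P = [[0.9, 0.1],
     [0.2, 0.8]]           # P[i][j] = Prob(s_{t+1} = j | s_t = i)
a = [0.5, -0.3]            # assumed rule: E[x_{t+1} | x_t, s_{t+1} = j] = a[j] * x_t
x_t = 2.0
N = 2

# Stack E_t(x_{t+2} | s_{t+1} = j): advance x to a[j]*x_t, then take the
# next period's expectation starting from regime j.
inner = [sum(P[j][k] * a[k] for k in range(N)) * (a[j] * x_t) for j in range(N)]

# Weight the stacked vector by today's transition probabilities (the (P ⊗ I) step).
two_step = [sum(P[i][j] * inner[j] for j in range(N)) for i in range(N)]

# Cross-check: enumerate all regime paths (j, k) directly.
brute = [sum(P[i][j] * P[j][k] * a[j] * a[k] * x_t
             for j in range(N) for k in range(N)) for i in range(N)]
```

Stacking next-period expectations and weighting by P reproduces the brute-force enumeration over regime paths.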
Consider two states s_t ∈ {0, 1} where the depreciation rates are different, d_0 > d_1, and

Prob(s_t = j | s_{t-1} = i) = p_ij.
If (s_t = 0 ∧ µ > 0 ∧ (k_t − (1 − d_0)k_{t−1} − υI_ss) = 0), the equation system is (each residual set to zero):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + µ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + µ_{t+1} δ(1 − d_0))
I_t − (k_t − (1 − d_0)k_{t−1})
µ_t (k_t − (1 − d_0)k_{t−1} − υI_ss)

If (s_t = 0 ∧ µ = 0 ∧ (k_t − (1 − d_0)k_{t−1} − υI_ss) ≥ 0), the equation system is:

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + µ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + µ_{t+1} δ(1 − d_0))
I_t − (k_t − (1 − d_0)k_{t−1})
µ_t (k_t − (1 − d_0)k_{t−1} − υI_ss)

If (s_t = 1 ∧ µ > 0 ∧ (k_t − (1 − d_1)k_{t−1} − υI_ss) = 0), the equation system is:

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + µ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + µ_{t+1} δ(1 − d_1))
I_t − (k_t − (1 − d_1)k_{t−1})
µ_t (k_t − (1 − d_1)k_{t−1} − υI_ss)
If (s_t = 1 ∧ µ = 0 ∧ (k_t − (1 − d_1)k_{t−1} − υI_ss) ≥ 0), the equation system is (each residual set to zero):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + µ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + µ_{t+1} δ(1 − d_1))
I_t − (k_t − (1 − d_1)k_{t−1})
µ_t (k_t − (1 − d_1)k_{t−1} − υI_ss)
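Each of the four cases above pairs a Boolean gate with its own equation system. The toy sketch below illustrates that gate-plus-branch pattern for the investment floor k_t − (1 − d)k_{t−1} − υI_ss ≥ 0; the numbers and the simple closed-form branches are assumptions for illustration, not the paper's FindRoot-based solution of the full system.

```python
def solve_gated(k_unconstrained, k_lag, d, upsilon, I_ss):
    """Pick the branch (mu = 0 vs mu > 0) whose guard is satisfied.

    Branch 1 (mu = 0): the constraint is slack, k_t = k_unconstrained.
    Branch 2 (mu > 0): the constraint binds, k_t - (1-d)k_lag - upsilon*I_ss = 0.
    The multiplier formula below is a toy assumption, only its sign matters here.
    """
    floor = (1 - d) * k_lag + upsilon * I_ss
    if k_unconstrained >= floor:
        return k_unconstrained, 0.0       # mu = 0 branch: constraint slack
    mu = floor - k_unconstrained          # toy positive multiplier (assumption)
    return floor, mu                      # mu > 0 branch: constraint binds

# Illustrative numbers: one slack case and one binding case.
slack_k, slack_mu = solve_gated(1.5, 1.0, 0.1, 0.975, 0.2)
bound_k, bound_mu = solve_gated(0.8, 1.0, 0.1, 0.975, 0.2)
```

Complementary slackness holds on both branches: either the multiplier or the constraint residual is zero, mirroring the µ_t(k_t − (1 − d)k_{t−1} − υI_ss) equations above.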
C Algorithm Pseudo-code

Notation:

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ = {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}: collects the decision rule components
G = {X(x_{t−1}), Z(x_{t−1})}: collects the decision rule conditional expectations components
H = {γ, G}: collects decision rule information
{C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, ℝ^{3N_x+N_ε} → ℝ^{N_x}

input: (L, H_0, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by a default setting (``normConvTol'' → 10^{−10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
Algorithm 1: nestInterp
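A schematic Python analogue of this outer loop; the toy do_interp below is a stand-in contraction, not the paper's interpolation step.

```python
# Repeatedly apply the rule-updating step and stop when evaluations at the
# test points stop changing (a NestWhile-style loop with a norm tolerance).
def nest_while(do_interp, h0, test_points, norm_conv_tol=1e-10, max_iter=500):
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        gap = max(abs(h_next(p) - h(p)) for p in test_points)
        h = h_next
        if gap <= norm_conv_tol:
            return h
    raise RuntimeError("nest_while failed to converge")

def do_interp(h):
    # Toy update whose fixed point is h(x) = x (stands in for the real doInterp).
    return lambda x: 0.5 * (h(x) + x)

h_star = nest_while(do_interp, lambda x: 0.0, test_points=[0.0, 0.25, 1.0])
```

nestInterp's NestWhile plays the same role: keep applying doInterp until evaluations at the test points move by less than normConvTol.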
input: (L, H_i, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) and one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one of the model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5 funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ − 1; i++ do: {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1}); end
4 Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [B x + φψ_ε ε_t + (I − F)^{−1} φψ_c ; 0_{N_x}]
3 G(x) = [B x + (I − F)^{−1} φψ_c ; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, x_{t−1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t−1} + (I − F)^{−1} φψ_c + φψ_ε ε_t + φψ_z z_t + F · fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_{t+1} = B x_t + (I − F)^{−1} φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC

input: ({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing the interpolation data used to construct the interpolating functions for the decision rule and the decision rule conditional expectation.
output: interpData
Algorithm 12: genInterpData

input: (C(x_{t−1}, ε_t))
1 Evaluate the model equations obeying pre and post conditions.
output: x_t or Failed
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
  - Algorithm Overview
    - Single Equation System Case
    - Multiple Equation System Case
  - Approximating a Known Solution: η = 1, U(c) = log(c)
  - Approximating an Unknown Solution: η = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
improving proposed solutions. Section 6 applies the technique to a model with occasionally binding constraints. Section 7 concludes.
2 A New Series Representation For Bounded Time Series

Since (Blanchard and Kahn, 1980), numerous authors have developed solution algorithms and formulas for solving linear rational expectations models. It turns out that the solutions associated with these linear saddle point models, by quantifying the potential impact of anticipated future shocks, can play a new and somewhat surprising role in characterizing the solutions for highly nonlinear models. In preparation for discussing how these calculations can be useful for representing time invariant maps, this section describes the relationship between linear reference models and bounded time series.
2.1 Linear Reference Models and a Formula for "Anticipated Shocks"

For any linear homogeneous L-dimensional deterministic system that produces a unique stable solution,

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = 0,

inhomogeneous solutions

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c

can be computed as

x_t = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c

where

φ = (H_0 + H_1 B)^{-1} and F = −φ H_1

(Anderson, 2010).

It will be useful to collect the components of this linear reference model solution, L ≡ {H, ψ_ε, ψ_c, B, φ, F}, for later use.³
Theorem 2.1. Consider any bounded discrete time path

x_t ∈ ℝ^L with ‖x_t‖_∞ ≤ X̄  ∀t > 0. (1)

Now, given the trajectories (1), define z_t as

z_t ≡ H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} − ψ_c − ψ_ε ε_t.

³In Section 2.2 I will use these matrices to construct a series representation for any bounded path, and in Section 5 to compute improvements in proposed model solutions.
One can then express the x_t solving

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c + z_t

using

x_t = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν} (2)

and

x_{t+k+1} = B x_{t+k} + (I − F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+k+ν+1}  ∀t, k ≥ 0.

Proof. See (Anderson, 2010).

Consequently, given a bounded time series (1) and a stable linear homogeneous system L, one can easily compute a corresponding series representation like (2) for x_t. Interestingly, the linear model H, the constant term ψ_c, and the impact of the stochastic shocks ψ_ε can be chosen rather arbitrarily; the only constraints are the existence of a saddle-point solution for the linear system and that all z-values fall in the row space of H_1. Note that since the eigenvalues of F are all less than one in magnitude, the formula supports the intuition that distant future shocks influence current conditions less than imminent shocks.

The transformation {x_t, x_{t+1}, x_{t+2}, …} → {z_t, z_{t+1}, z_{t+2}, …}, computed via L(x_{t-1}, ε_t), is invertible: {z_t, z_{t+1}, z_{t+2}, …} → {x_t, x_{t+1}, x_{t+2}, …} via L(x_{t-1}, ε_t)^{-1}. The inverse can be interpreted as giving the impact of "fully anticipated future shocks" on the path of x_t in a linear perfect foresight model. Note that the reference model is deterministic, so that the z_t(x_{t-1}, ε_t) functions will fully account for any stochastic aspects encountered in the models I analyze.

A key feature to note is that the series representation can accommodate arbitrarily complex time series trajectories so long as these trajectories are bounded. Later, this observation will give us some confidence in the robustness of the algorithm for constructing series representations for unknown families of functions satisfying complicated systems of dynamic non-linear equations.
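As a concrete illustration of the transformation and its inverse, the scalar sketch below builds a toy linear reference model, maps an arbitrary bounded path into its z series, and then recovers the path from the series formula. All coefficient values are assumptions chosen so that a saddle-point solution exists.

```python
import math

# Toy scalar reference model H_{-1} x_{t-1} + H0 x_t + H1 x_{t+1} = z_t
# (illustrative coefficients with a unique stable solution).
Hm1, H0, H1 = 0.2, -1.1, 0.3

# Stable root B of H1*B^2 + H0*B + Hm1 = 0; then phi = (H0 + H1*B)^(-1), F = -phi*H1.
roots = [(-H0 + s * math.sqrt(H0**2 - 4 * H1 * Hm1)) / (2 * H1) for s in (1, -1)]
B = min(roots, key=abs)        # |B| < 1 picks the stable solution
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1                  # |F| < 1, so the series converges

# An arbitrary bounded path (psi_c = psi_eps = 0 here for simplicity).
T = 400
x = [math.sin(0.7 * t) + 0.5 * (-1) ** t for t in range(T)]

# Forward map: z_t from the path.
z = [Hm1 * x[t - 1] + H0 * x[t] + H1 * x[t + 1] for t in range(1, T - 1)]

# Inverse map: recover x_t from x_{t-1} and the future z's via the series formula.
def x_from_series(t, k):
    return B * x[t - 1] + sum(F ** nu * phi * z[(t - 1) + nu] for nu in range(k + 1))

err = max(abs(x_from_series(t, 100) - x[t]) for t in range(1, 200))
```

With 100 terms the reconstruction agrees with the original path to machine precision, mirroring the invertibility claim.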
2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series

Consider the following matrix constructed from "almost" arbitrary coefficients:

[H_{-1} H_0 H_1] =
[ 0.1   0.5   -0.5    1     0.4   0.9   1    1    0.9 ]
[ 0.2   0.2   -0.5    7     0.4   0.8   3    2    0.6 ]    (3)
[ 0.1  -0.25  -1.5    2.1   0.47  1.9   2.1  2.1  3.9 ]

with ψ_c = ψ_ε = 0, ψ_z = I. I have chosen these coefficients so that the linear model has a unique stable solution and, since H_1 is full rank, its column space will span the column space for all non-zero values of z_t ∈ ℝ^3.⁴ It turns out that the algorithm is very forgiving about the choice of L.

⁴The flexibility in choosing L will become more important later in the paper, where we must choose a linear reference model in a context where we will have little guidance about what might make a good choice.
The solution for this system is

B =
[ -0.0282384   -0.0552487    0.00939369 ]
[ -0.0664679   -0.700462    -0.0718527  ]
[ -0.163638    -1.39868      0.331726   ]

φ =
[  0.0210079    0.15727     -0.0531634  ]
[  1.20712     -0.0553003   -0.431842   ]
[  2.58165     -0.183521    -0.578227   ]

F =
[ -0.381174    -0.223904     0.0940684  ]
[ -0.134352    -0.189653     0.630956   ]
[ -0.816814    -1.00033      0.0417094  ]
It is straightforward to apply (2) to the following three bounded families of time series paths:

x¹_t = x_{t-1,1} + ε_t D_π(t),  where D_π(t) gives the t-th digit of π,

x²_t = x_{t-1,1} (1 + ε_t)² (−1)^t,

x³_t = ε̃_t,

where the ε̃_t are a sequence of pseudo-random draws from the uniform distribution U(−4, 4) produced subsequent to the selection of a random seed,

randomSeed(x_{t-1} + ε_t).

The three panels in Figure 1 display these three time series generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set of trajectories oscillates between two values determined by a nonlinear function of the initial conditions x_{t-1} and the shock. The third set of trajectories characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model L.⁵

Though unintuitive and perhaps lacking a compelling interpretation, the values for z_t exist, provide a series for the x_t, and are easy to compute. One can apply the calculations for any given initial condition to produce a z series exactly replicating any bounded set of trajectories. Consequently, these numerical linear algebra calculations can easily transform a z_t series into an x_t series and vice versa. Since this can be done for any path starting from any particular initial conditions, any discrete time invariant map that generates families of bounded paths has a series representation.

⁵Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
[Figure 1 about here: three panels, "Pi Digits Path(10 10 105 5 5)", "Oscillatory Path(10 10 105 5 5)", and "Pseudo Random Path(10 10 105 5 5)".]

Figure 1: Arbitrary Bounded Time Series Paths
2.3 Assessing x_t Errors

The formula (2) computes the impact on the current state of fully anticipated future shocks. It characterizes the impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of z_t, or one might have imprecise values of z_t along the path.

2.3.1 Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating x_t by truncating the series at a finite number of terms:

x̂_t ≡ B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{s=0}^k F^s φ z_{t+s}

We can bound the series approximation truncation errors. Since

Σ_{s=k+1}^∞ F^s φψ_z = (I − F)^{-1} F^{k+1} φψ_z,

‖x(x_{t-1}, ε) − x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I − F)^{-1} F^{k+1} φψ_z‖_∞ (‖H_{-1}‖_∞ + ‖H_0‖_∞ + ‖H_1‖_∞) ‖X̄‖_∞.

Figure 2 shows that this truncation error bound can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between x_t from the full series and a truncated series for different lengths of the truncation. The blue line shows the infinity norm of the actual difference between the x_t computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
[Figure 2 about here: plot titled "Truncation Error Bound and Actual Error".]

Figure 2: x_t Error Approximation Versus Actual Error
2.3.2 Path Error

We can assess the impact of perturbed values for z_t by computing the maximum discrepancy Δz_t impacting the z_t and applying the partial sum formula. One can approximate x_t using

x̃_t ≡ B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{s=0}^∞ F^s φ (z_{t+s} + Δz_{t+s}).

Note yet again that, for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for those realizations which are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible Δz_t in the formula:

‖x(x_{t-1}, ε) − x̃(x_{t-1}, ε)‖_∞ ≤ ‖(I − F)^{-1} φψ_z‖_∞ ‖Δz_t‖_∞.
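A scalar sketch of the path-error bound (coefficients are illustrative assumptions): perturb every z along the path and compare the induced deviation with the bound.

```python
import math, random

# Illustrative scalar reference model (assumed coefficients, stable by construction).
Hm1, H0, H1 = 0.2, -1.1, 0.3
B = min([(-H0 + s * math.sqrt(H0**2 - 4 * H1 * Hm1)) / (2 * H1) for s in (1, -1)], key=abs)
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1

random.seed(0)
z = [random.uniform(-1, 1) for _ in range(300)]
dz = [random.uniform(-0.05, 0.05) for _ in range(300)]  # imprecision along the path

x_exact = sum(F ** s * phi * z[s] for s in range(300))
x_pert = sum(F ** s * phi * (z[s] + dz[s]) for s in range(300))

actual = abs(x_exact - x_pert)
# conservative bound: |(1 - F)^{-1} phi| * max_s |dz_s|  (psi_z = 1 here)
bound = abs(phi) / (1 - abs(F)) * max(abs(d) for d in dz)
```

The realized deviation sits inside the bound because the weights F^s φ decay geometrically while the bound treats every Δz as if it took its largest value.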
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).⁶ As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that gives us an approximation for the model solutions. This section presents a familiar RBC model as an example.⁷
3.1 An RBC Example

Consider the model described in (Maliar and Maliar, 2005):⁸

max u(c_t) + E_t Σ_{s=1}^∞ δ^s u(c_{t+s})

subject to

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1}),  f(k) = k^α,
u(c) = (c^{1−η} − 1)/(1 − η).

The first order conditions for the model are

1/c_t^η = αδ k_t^{α−1} E_t (θ_{t+1}/c_{t+1}^η)
c_t + k_t = θ_t k_{t−1}^α
θ_t = θ_{t−1}^ρ e^{ε_t}

It is well known that when η = d = 1 we have a closed form solution (Lettau, 2003):

x_t(x_{t−1}, ε_t) ≡ D(x_{t−1}, ε_t) ≡ [c_t(x_{t−1}, ε_t); k_t(x_{t−1}, ε_t); θ_t(x_{t−1}, ε_t)] = [(1 − αδ) θ_t k_{t−1}^α; αδ θ_t k_{t−1}^α; θ_{t−1}^ρ e^{ε_t}]

⁶Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter, 2000; Koop et al., 1996).
⁷Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.
⁸Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
For mean zero iid ε_t we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

D̄(x_{t+k+1}) ≡ [E_t(c_{t+k+1} | θ_{t+k}, k_{t+k}); E_t(k_{t+k+1} | θ_{t+k}, k_{t+k}); E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k})] = [(1 − αδ) e^{σ²/2} θ_{t+k}^ρ k_{t+k}^α; αδ e^{σ²/2} θ_{t+k}^ρ k_{t+k}^α; e^{σ²/2} θ_{t+k}^ρ]

For any given values of (k_{t−1}, θ_{t−1}, ε_t), the model decision rule (D) and decision rule conditional expectation (D̄) can generate a conditional expectations path forward from any given initial (x_{t−1}, ε_t):

x_t(x_{t−1}, ε_t) = D(x_{t−1}, ε_t)
x_{t+k+1}(x_{t−1}, ε_t) = D̄(x_{t+k}(x_{t−1}, ε_t))
and corresponding paths for (z_{1t}(x_{t−1}, ε_t), z_{2t}(x_{t−1}, ε_t), z_{3t}(x_{t−1}, ε_t)):

z_{t+k} ≡ H_{−1} x_{t+k−1} + H_0 x_{t+k} + H_1 x_{t+k+1}
Formula 2 requires that

x_t(x_{t−1}, ε_t) = B [c_{t−1}; k_{t−1}; θ_{t−1}] + φψ_ε ε_t + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(k_{t−1}, θ_{t−1}, ε_t)
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions:

α → 0.36,  δ → 0.95,  ρ → 0.95,  σ → 0.01
we have

[c_ss; k_ss; θ_ss] = [0.360408; 0.187324; 1.001]

and initial values

[k_{t−1}; θ_{t−1}; ε_t] = [0.18; 1.1; 0.01]    (4)
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation 4.
By including enough terms we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude for the error, B_n, shown in red, is again very pessimistic compared to the actual error Z_n = ‖x_t − x_{t,n}‖_∞ (with x_{t,n} the n-term truncation), shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine-precision accurate value for the time t state vector.
Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves the approximation Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter, but still pessimistic, bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
Figure 3: Model variable values (paths of θ_t, c_t, k_t over periods 1–30)
Figure 4: RBC Known Solution Truncation Approximate Error versus Actual, "Almost Arbitrary" Linear Reference Model (RBC model bounds B_n versus actual Z_n, periods 1–30)
Figure 5: RBC Known Solution Truncation Approximate Error versus Actual, Stochastic Steady State Linearization Linear Reference Model (RBC model bounds B_n versus actual Z_n, periods 1–30)
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form
h_i(x_{t−1}, x_t, x_{t+1}, ε_t) = h^det_{i0}(x_{t−1}, x_t, ε_t) + Σ_{j=1}^{p_i} [h^det_{ij}(x_{t−1}, x_t, ε_t) h^nondet_{ij}(x_{t+1})] = 0
This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as
h^det_{10}(·) = 1/c_t^η,   h^det_{11}(·) = −αδ k_t^{α−1},   h^nondet_{11}(·) = E_t(θ_{t+1} / c_{t+1}^η)

h^det_{20}(·) = c_t + k_t − θ_t k_{t−1}^α,   h^det_{21}(·) = 0

h^det_{30}(·) = ln θ_t − (ρ ln θ_{t−1} + ε_t),   h^det_{31}(·) = 0
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c_t,

substituting N_{t+1} for θ_{t+1}/c_{t+1} in the first equation, and linearizing the RBC model about the ergodic mean.
This leads to

[H_{−1} | H_0 | H_1] =

0  0         0  0          | −7.6986   9.47963  0  0          | 0  0  −0.999  0
0  −1.05263  0  0          |  1        1        0  −0.547185  | 0  0   0      0
0  0         0  0          |  7.7063   0        1  −2.77463   | 0  0   0      0
0  0         0  −0.949953  |  0        0        0   1         | 0  0   0      0
with

ψ_ε = [0; 0; 0; 1],   ψ_z = I

These coefficients produce a unique stable linear solution:
B =
0   0.692632  0  0.342028
0   0.36      0  0.177771
0  −5.33763   0  0
0   0         0  0.949953

φ =
−0.0444237   0.658    0  0.360048
 0.0444237   0.342    0  0.187137
 0.342342   −5.07075  1  0
 0           0        0  1

F =
0  0  −0.0443793  0
0  0   0.0443793  0
0  0   0.342      0
0  0   0          0

ψ_c =
−3.7735
−0.197184
 2.77741
 0.0500976
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
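As a quick sanity check on these quantities, the truncated sums Σ_{ν=0}^{n} F^ν converge rapidly to (I − F)^{−1}, since the spectral radius of F here is well below one (0.342). A sketch using the F values above with decimal points restored (numpy; illustrative only):

```python
import numpy as np

# F as printed above (decimal points restored); only the N column is nonzero
F = np.array([[0.0, 0.0, -0.0443793, 0.0],
              [0.0, 0.0,  0.0443793, 0.0],
              [0.0, 0.0,  0.342,     0.0],
              [0.0, 0.0,  0.0,       0.0]])

# partial sums of sum_nu F^nu versus the closed form (I - F)^{-1}
S, P = np.zeros((4, 4)), np.eye(4)
for _ in range(30):
    S += P          # add F^nu
    P = P @ F       # advance to the next power

assert np.allclose(S, np.linalg.inv(np.eye(4) - F), atol=1e-12)
```

Thirty terms already agree with the closed form to machine precision, because 0.342^30 is on the order of 1e-14; this is the same geometric decay that makes the truncated series representation usable.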
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.
Given an exact solution x_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];
then with

E_t x_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x_t, E_t x_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t)

x_t(x_{t−1}, ε_t) ∈ R^L,   ‖x_t(x_{t−1}, ε_t)‖_2 ≤ X   ∀t > 0
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))
e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t).

Then

‖x_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{(x_{t−1}, ε)} ‖(I − F)^{−1} φ M(x_{t−1}, g^p(x_{t−1}, ε), G^p(g^p(x_{t−1}, ε)), ε)‖_2,

which can be approximated by

max_{(x_{t−1}, ε_t)} ‖φ e^p_t(x_{t−1}, ε_t)‖_2.
For proof see Appendix A.

9 This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
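For the log-utility RBC case, the residual e^p_t that enters the bound can be evaluated exactly for any proposed rule of the form c_t = ψ θ_t k_{t−1}^α, since θ_{t+1}/c_{t+1} is then nonstochastic and no quadrature is needed. A sketch contrasting the exact rule (zero residual) with a deliberately distorted, budget-feasible rule (Python; the "warped" coefficients are hypothetical, chosen only to illustrate a nonzero residual):

```python
import math

ALPHA, DELTA = 0.36, 0.95

def euler_residual(policy, k_lag, theta):
    """Residual of 1/c_t - alpha*delta*k_t^(alpha-1)*E_t[theta_{t+1}/c_{t+1}].

    Assumes the policy has the form c = psi*theta*k_lag^alpha, so that
    theta'/c' = 1/(psi*k_t^alpha) is known without integration.
    """
    c, k = policy(k_lag, theta)
    cp_over_thetap = policy(k, 1.0)[0]          # c'/theta' for this policy class
    return 1.0 / c - ALPHA * DELTA * k ** (ALPHA - 1.0) / cp_over_thetap

exact = lambda k_lag, theta: ((1 - ALPHA * DELTA) * theta * k_lag ** ALPHA,
                              ALPHA * DELTA * theta * k_lag ** ALPHA)
warped = lambda k_lag, theta: (0.70 * theta * k_lag ** ALPHA,   # hypothetical
                               0.30 * theta * k_lag ** ALPHA)   # distorted shares

assert abs(euler_residual(exact, 0.18, 1.1)) < 1e-10
assert abs(euler_residual(warped, 0.18, 1.1)) > 1e-2
```

The warped rule still satisfies the budget constraint (the shares sum to one), so its nonzero residual is purely an Euler-equation error of the kind the bound is designed to capture.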
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t−1}, ε_t):

e(x_{t−1}, ε_t) ≡ x(x_{t−1}, ε_t) − x^p(x_{t−1}, ε_t)
Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in (2) to approximate e:

X^p(x) ≡ E[x^p(x, ε)]
x_{t+k+1} = X^p(x_{t+k}),   k = 1, …, K

We can define functions z^p_t by

z^p_t ≡ M(x_{t−1}, x_t, x_{t+1}, ε_t)
z^p_{t+k} ≡ M(x_{t+k−1}, x_{t+k}, x_{t+k+1}, 0)

e(x_{t−1}, ε_t) ≈ Σ_{ν=0}^K F^ν φ z^p_{t+ν}
4.3 Assessing the Error Approximation

This section reports on the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.
I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,   0.85 < δ < 0.99,   0.85 < ρ < 0.99,   0.005 < σ < 0.015,
producing the following 10 model specifications:

α       δ         η  ρ         σ
0.35    0.92875   1  0.957188  0.00875
0.275   0.9725    1  0.906875  0.01375
0.375   0.8675    1  0.924375  0.01125
0.225   0.94625   1  0.891563  0.00625
0.325   0.91125   1  0.944063  0.006875
0.2625  0.922187  1  0.98125   0.011875
0.3625  0.887187  1  0.85875   0.014375
0.2125  0.965938  1  0.965938  0.009375
0.3125  0.860938  1  0.878438  0.008125
0.2375  0.904688  1  0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearization. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.
I generate 30 representative points (k_{i,t−1}, θ_{i,t−1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i
Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.
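The regression underlying these figures is ordinary least squares of actual on predicted errors. A stdlib-only sketch of that regression, run here on hypothetical toy data rather than the paper's experimental draws (a perfect predictor should return α ≈ 0, β ≈ 1, R² ≈ 1):

```python
# OLS for e_i = a + b*ehat_i + u_i, stdlib only
def ols(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1.0 - ss_res / ss_tot

# hypothetical predicted errors; regressing a series on itself is the
# perfect-predictor benchmark: intercept 0, slope 1, R^2 = 1
xs = [0.001, -0.002, 0.0035, 0.0007, -0.0012]
a, b, r2 = ols(xs, xs)
assert abs(a) < 1e-12 and abs(b - 1) < 1e-12 and abs(r2 - 1) < 1e-12
```

In the paper's experiment the same regression is run once per (decision rule, reference model) pair over the 30 representative points.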
10 I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward-looking equation.
Figure 6: c_t (σ², R²). Scatter plots of estimated variance σ² versus R² for the c error regressions, K = 0, 1, 10, 30.
Figure 7: k_t (σ², R²). Scatter plots of estimated variance σ² versus R² for the k error regressions, K = 0, 1, 10, 30.
Figure 8: c_t (α, β). Scatter plots of the regression coefficients (α, β) for the c error regressions, K = 0, 1, 10, 30.
Figure 9: k_t (α, β). Scatter plots of the regression coefficients (α, β) for the k error regressions, K = 0, 1, 10, 30.
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given (x_{t−1}, ε_t) this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t−1}, ε). Given a decision rule x(x_{t−1}, ε), denote its conditional expectation X(x_{t−1}) ≡ E[x(x_{t−1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])} | (b, c)

where each

C_i ≡ (b_i(x_{t−1}, ε_t), M_i(x_{t−1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t−1}, ε, x_t, E[x_{t+1}]))
represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t−1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t−1}, ε_t).
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Section B for an example of a specification for regime switching.
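The gating idea can be sketched concretely: each candidate system carries a precondition b_i, a solver for M_i, and a post-condition c_i, and d selects the unique admissible solution. A toy illustration with a lower-bound constraint (Python; the one-variable "systems" and all names are hypothetical, not the paper's implementation):

```python
def solve_gated(systems, select, state):
    """Solve a collection of gated equation systems at one state point."""
    candidates = []
    for pre, solve, post in systems:
        if not pre(state):           # b_i: skip systems that cannot apply
            continue
        x = solve(state)
        if post(state, x):           # c_i: discard unacceptable solutions
            candidates.append(x)
    return select(candidates)        # d: pick the unique admissible x_t

# two regimes for a variable with lower bound x >= 0:
# unconstrained regime x = state - 1; binding regime x = 0
systems = [
    (lambda s: True,          lambda s: s - 1.0, lambda s, x: x >= 0.0),
    (lambda s: s - 1.0 < 0.0, lambda s: 0.0,     lambda s, x: True),
]
select = lambda cands: cands[0]

assert solve_gated(systems, select, 3.0) == 2.0   # interior solution
assert solve_gated(systems, select, 0.5) == 0.0   # constraint binds
```

The complementary-slackness structure of an occasionally binding constraint maps directly onto this pattern: the interior system's post-condition fails exactly where the binding system's precondition holds.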
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t−1}, ε_t) as well as new z_t(x_{t−1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t−1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with adaptive domain. I precompute all of the conditional expectations for the integrals as outlined in (Judd et al. 2017b), so that the time t computations (5.1) are all deterministic.
There are a number of new steps. One must specify a linear reference model. One must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error; more terms corresponds to more accuracy when the decision rule is accurate, but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution. If the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t−1}, x(x_{t−1}, ε_t), X(x(x_{t−1}, ε_t)), ε_t) = 0   ∀(x_{t−1}, ε_t)
Given a proposed model solution

x_t = x^p(x_{t−1}, ε_t),

compute

X^p(x_{t−1}) ≡ E[x^p(x_{t−1}, ε_t)].

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define functions z^p, Z^p by

z^p(x_{t−1}, ε) ≡ H [x_{t−1}; x^p(x_{t−1}, ε); X^p(x^p(x_{t−1}, ε))] + ψ_ε ε_t + ψ_c

Z^p(x_{t−1}) ≡ E[z^p(x_{t−1}, ε)]
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),   z_{t+k+1} = Z(x_{t+k})   ∀k ≥ 0

• Using the z^p(x_{t−1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

x_t = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})

E[x_{t+1}] = B x_t + (I − F)^{−1} φψ_c + Σ_{ν=1}^∞ F^{ν−1} φ Z(x_{t+ν})   for all t, k ≥ 0

• Use the model equations M(x_{t−1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t−1}, ε_t) = X_t(x_{t−1}, ε_t) to determine x^{p′}(x_{t−1}, ε), z^{p′}(x_{t−1}, ε).

• Set x^p(x_{t−1}, ε) = x^{p′}(x_{t−1}, ε) and z^p(x_{t−1}, ε) = z^{p′}(x_{t−1}, ε). Repeat loop.
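For the log-utility, full-depreciation RBC model the loop can be followed analytically. Parameterizing the proposed rule as c_t = ψ θ_t k_{t−1}^α, one pass of the Euler-equation update maps ψ to ψ/(αδ + ψ), whose fixed point is the known coefficient 1 − αδ. A sketch of that stylized iteration (Python; the closed-form update is special to this case and stands in for the full collocation solve):

```python
ALPHA, DELTA = 0.36, 0.95

psi = 0.5                      # arbitrary initial guess for the rule coefficient
for _ in range(200):
    # one time-iteration pass: solve the Euler equation and budget constraint
    # assuming next period's consumption follows the current guessed rule
    psi = psi / (ALPHA * DELTA + psi)

# converges to the exact coefficient 1 - alpha*delta = 0.658
assert abs(psi - (1 - ALPHA * DELTA)) < 1e-12
```

The contraction factor at the fixed point is αδ = 0.342, which mirrors the geometric decay of the series terms: most of the progress comes from the first few iterations, just as most of the series accuracy comes from the first few terms.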
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}   The linear reference model

x^p(x_{t−1}, ε_t)   The proposed decision rule, x_t components

z^p(x_{t−1}, ε_t)   The proposed decision rule, z_t components

X^p(x_{t−1})   The proposed decision rule conditional expectations, x_t components

Z^p(x_{t−1})   The proposed decision rule conditional expectations, z_t components

κ   One less than the number of terms in the series representation (κ ≥ 0)

S   Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T   Model evaluation points: Niederreiter sequence in the model variable ergodic region

γ   {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}: collects the decision rule components

G   {X(x_{t−1}), Z(x_{t−1})}: collects the decision rule conditional expectations components

H   {γ, G}: collects decision rule information

{C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])} | (b, c)   The equation system, R^{3N_x+N_ε} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot genLilXkZkFunc
fSumC genXtOfXtm1 genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc
Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t−1}, ε_t) for the family of model equations {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])} | (b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs: The function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t−1}, ε_t) using the series expression (2).

makeInterpFuncs: Solves the model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This can be done in parallel.
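makeInterpFuncs fits interpolating functions at collocation points. The paper's grids are multi-dimensional an-isotropic Smolyak grids; the one-dimensional Chebyshev-node sketch below (stdlib Python, with a hypothetical target rule) only illustrates the underlying collocation mechanics of fitting at chosen nodes and evaluating elsewhere:

```python
import math

# 1-D collocation on Chebyshev nodes (a stand-in for the Smolyak machinery)
def cheb_nodes(n, lo, hi):
    return [lo + (hi - lo) * 0.5 * (1.0 - math.cos(math.pi * (2 * i + 1) / (2 * n)))
            for i in range(n)]

def fit_poly(xs, ys):
    """Interpolating polynomial via Newton's divided differences."""
    coef = list(ys)
    for j in range(1, len(xs)):
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def p(x):
        acc = coef[-1]
        for i in range(len(xs) - 2, -1, -1):
            acc = acc * (x - xs[i]) + coef[i]   # Horner form on Newton basis
        return acc
    return p

rule = lambda k: 0.658 * k ** 0.36          # hypothetical smooth target rule
xs = cheb_nodes(12, 0.15, 0.25)
approx = fit_poly(xs, [rule(x) for x in xs])
assert max(abs(approx(k) - rule(k)) for k in (0.16, 0.19, 0.22)) < 1e-9
```

Because the target rule is smooth on the approximation interval, a modest number of Chebyshev nodes already yields near machine-precision accuracy; Smolyak grids extend this idea to several state variables without the full tensor-product cost.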
5.4 Approximating a Known Solution: η = 1, u(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and to compare the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables that constitute an improved set of variables for constructing function approximation values.
Figure 11: The Ergodic Values for k_t, θ_t (left panel: simulated (k, θ) pairs; right panel: the same points in the transformed coordinates)
X = [0.18853; 1.00466; 0],   σ_X = [0.0112443; 0.0387807; 0.01]

V =
−0.707107  −0.707107  0
−0.707107   0.707107  0
 0          0         1
maxU = [3.02524; 0.199124; 3],   minU = [−3.16879; −0.17629; −3]
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
k_i       θ_i       ε_i          (rows i = 1, …, 30)
0.16818   0.942376  −0.0251104
0.210392  1.08282    0.0232085
0.181393  0.968212  −0.0090041
0.1949    1.017      0.0111288
0.188668  1.00876   −0.0210838
0.20145   1.06007    0.0272351
0.17245   0.945462  −0.00497752
0.214179  1.0876     0.0191819
0.195059  1.03442   −0.0130307
0.182278  0.983104   0.000307563
0.208273  1.06025   −0.029137
0.166906  0.916853   0.00106234
0.20192   1.05244   −0.0311503
0.187205  1.00787    0.0171686
0.213199  1.08502   −0.015044
0.17147   0.942887   0.0252218
0.179847  0.985589  −0.000699081
0.194563  1.03016    0.000911549
0.165563  0.915543  −0.0230971
0.20705   1.05852    0.0211952
0.173322  0.954406  −0.0110174
0.2136    1.1016     0.00508892
0.184601  0.986988  −0.0271237
0.198833  1.03324    0.0131421
0.20721   1.07594   −0.0190705
0.166932  0.928749   0.0292484
0.192926  1.0059    −0.000296424
0.179118  0.958169   0.0141487
0.172672  0.949185  −0.0180639
0.212949  1.09638    0.030255

Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1)
c_i       k_i       θ_i          (rows i = 1, …, 30)
0.318386  0.16551   0.920204
0.413447  0.214841  1.10215
0.341848  0.177697  0.960777
0.375209  0.195014  1.02742
0.35634   0.185242  0.987273
0.400686  0.208232  1.08481
0.329563  0.171293  0.943165
0.416315  0.216325  1.1025
0.372502  0.193618  1.01959
0.351946  0.182927  0.98707
0.384948  0.200107  1.02813
0.31841   0.165481  0.92202
0.377048  0.196011  1.01878
0.368995  0.19178   1.02495
0.402167  0.209001  1.06547
0.339185  0.176282  0.97149
0.347449  0.180596  0.979286
0.378776  0.196859  1.03785
0.308332  0.160276  0.896658
0.401836  0.208829  1.07707
0.330982  0.172039  0.945622
0.415597  0.215941  1.10137
0.343927  0.178809  0.960659
0.384305  0.199732  1.04488
0.393281  0.204401  1.05291
0.332894  0.173006  0.962198
0.36484   0.189639  1.00262
0.345376  0.179513  0.974651
0.326232  0.169582  0.933652
0.422656  0.219618  1.12227

Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1)
ln|e_{c,i}|  ln|e_{k,i}|  ln|e_{θ,i}|   (rows i = 1, …, 30)
−6.86423  −7.61023  −6.2776
−6.77469  −7.25654  −6.193
−8.27193  −9.26988  −7.90952
−9.53366  −9.94641  −9.07674
−9.70109  −11.0692  −10.8193
−7.01131  −7.52677  −6.4226
−8.62144  −9.43568  −8.12434
−6.93069  −7.36841  −6.2991
−8.82105  −9.39136  −7.96867
−9.48549  −9.94897  −9.04695
−6.83337  −7.45765  −6.41963
−11.193   −13.9426  −8.33167
−7.13458  −7.70834  −6.51919
−9.46667  −10.6151  −10.154
−7.13317  −7.97018  −6.71715
−6.76168  −7.44531  −6.20438
−9.46294  −10.7345  −8.60101
−8.99962  −9.32606  −8.36754
−6.5744   −7.28112  −6.0606
−7.26443  −7.74753  −6.65645
−7.95047  −8.75065  −7.34261
−7.90465  −8.02499  −7.40682
−7.78668  −8.88177  −7.3262
−8.44035  −8.86441  −7.83514
−7.21479  −7.97368  −6.58639
−6.40774  −7.09877  −5.86537
−11.8365  −11.1009  −11.8397
−7.81106  −8.43094  −7.01803
−7.28745  −8.06534  −6.74087
−6.33557  −6.84989  −5.76803

Table 4: RBC Known Solution Actual Errors, d = (1,1,1)
ln|ê_{c,i}|  ln|ê_{k,i}|  ln|ê_{θ,i}|   (rows i = 1, …, 30)
−6.84011  −7.59703  −6.2776
−6.79189  −7.33194  −6.193
−8.27296  −9.26585  −7.90952
−9.54759  −9.96466  −9.07674
−9.72214  −11.142   −10.8193
−7.019    −7.58967  −6.4226
−8.61034  −9.41771  −8.12434
−6.95566  −7.44593  −6.2991
−8.83746  −9.43331  −7.96867
−9.48388  −9.95406  −9.04695
−6.8561   −7.50703  −6.41963
−10.8845  −13.6119  −8.33167
−7.15654  −7.75234  −6.51919
−9.48715  −10.5641  −10.154
−7.16809  −8.02412  −6.71715
−6.74694  −7.43005  −6.20438
−9.46173  −10.7234  −8.60101
−9.00034  −9.36818  −8.36754
−6.55256  −7.25512  −6.0606
−7.28911  −7.80039  −6.65645
−7.93653  −8.73939  −7.34261
−7.87653  −8.13406  −7.40682
−7.79071  −8.88802  −7.3262
−8.45665  −8.89927  −7.83514
−7.25059  −8.02416  −6.58639
−6.38709  −7.07562  −5.86537
−11.9887  −11.1639  −11.8397
−7.8057   −8.42757  −7.01803
−7.27379  −8.05278  −6.74087
−6.35248  −6.93629  −5.76803

Table 5: RBC Known Solution Error Approximations, d = (1,1,1)
k_i       θ_i       ε_i          (rows i = 1, …, 30)
0.175411  1.00081   −0.00861854
0.182233  1.01265   −0.00121566
0.181807  0.987487  −0.00615091
0.183086  0.994888  −0.00306638
0.179142  1.00488   −0.00800163
0.179142  1.01672   −0.000598756
0.178715  0.991558  −0.00553401
0.184685  1.00636   −0.00183257
0.179142  1.0108    −0.00676782
0.179142  0.998959  −0.00430019
0.185538  0.997479  −0.00923544
0.180208  0.980456  −0.00460865
0.18138   1.00821   −0.0095439
0.177969  1.00821   −0.00214102
0.184365  1.00673   −0.00707627
0.178396  0.991928  −0.000907209
0.176263  1.00821   −0.00584246
0.179675  1.00821   −0.00337483
0.179248  0.983046  −0.00831008
0.184792  0.999329  −0.00152412
0.177436  0.997479  −0.00645937
0.180847  1.02116   −0.00399174
0.180421  0.995999  −0.00892699
0.182979  0.998959  −0.00275793
0.180847  1.01524   −0.00769318
0.177436  0.991558  −0.000290303
0.183832  0.990077  −0.00522555
0.18202   0.984526  −0.0026037
0.178049  0.994426  −0.00753895
0.18146   1.01811   −0.000136076

Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2)
k_i       θ_i       ε_i          (rows i = 1, …, 30)
0.175411  1.00081   −0.00861854
0.182233  1.01265   −0.00121566
0.181807  0.987487  −0.00615091
0.183086  0.994888  −0.00306638
0.179142  1.00488   −0.00800163
0.179142  1.01672   −0.000598756
0.178715  0.991558  −0.00553401
0.184685  1.00636   −0.00183257
0.179142  1.0108    −0.00676782
0.179142  0.998959  −0.00430019
0.185538  0.997479  −0.00923544
0.180208  0.980456  −0.00460865
0.18138   1.00821   −0.0095439
0.177969  1.00821   −0.00214102
0.184365  1.00673   −0.00707627
0.178396  0.991928  −0.000907209
0.176263  1.00821   −0.00584246
0.179675  1.00821   −0.00337483
0.179248  0.983046  −0.00831008
0.184792  0.999329  −0.00152412
0.177436  0.997479  −0.00645937
0.180847  1.02116   −0.00399174
0.180421  0.995999  −0.00892699
0.182979  0.998959  −0.00275793
0.180847  1.01524   −0.00769318
0.177436  0.991558  −0.000290303
0.183832  0.990077  −0.00522555
0.18202   0.984526  −0.0026037
0.178049  0.994426  −0.00753895
0.18146   1.01811   −0.000136076

Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2)
ln|e_{c,i}|  ln|e_{k,i}|  ln|e_{θ,i}|   (rows i = 1, …, 30)
−13.6548  −13.6548  −14.1969
−18.0614  −13.7477  −14.7744
−17.2538  −16.064   −17.1189
−18.2334  −14.931   −18.4609
−14.9375  −15.0484  −18.06
−13.3718  −13.4466  −14.3339
−15.3357  −15.1409  −16.6528
−13.4818  −17.055   −18.6856
−16.5151  −17.974   −16.0224
−16.8629  −17.1652  −16.9262
−15.0336  −13.0546  −15.0929
−14.8104  −15.1068  −17.0829
−13.9454  −15.0733  −15.3665
−17.0059  −15.0225  −16.8123
−14.3533  −13.6955  −16.1402
−14.697   −15.2052  −15.5889
−15.6999  −16.5298  −16.5502
−15.469   −15.8159  −16.1557
−12.9005  −17.0451  −15.5094
−13.407   −14.5127  −15.9061
−15.605   −15.0689  −15.5069
−14.5434  −14.4278  −15.1171
−14.8235  −14.8132  −16.6327
−15.0699  −15.1443  −17.7803
−13.5813  −14.7228  −14.6813
−15.1304  −15.2556  −15.467
−17.3355  −17.4808  −20.5415
−13.2332  −15.2877  −16.1059
−15.668   −14.9417  −15.2354
−14.2264  −12.8483  −14.2733

Table 8: RBC Known Solution Actual Errors, d = (2,2,2)
ln|ê_{c,i}|  ln|ê_{k,i}|  ln|ê_{θ,i}|   (rows i = 1, …, 30)
−13.5994  −13.7316  −14.1969
−19.713   −13.7563  −14.7744
−17.8538  −15.9394  −17.1189
−18.4186  −14.9247  −18.4609
−14.9453  −15.0569  −18.06
−13.3661  −13.4484  −14.3339
−15.2186  −15.0483  −16.6528
−13.4591  −16.4547  −18.6856
−16.6757  −17.4658  −16.0224
−16.7507  −17.3581  −16.9262
−17.6809  −13.212   −15.0929
−14.8476  −15.0629  −17.0829
−14.1282  −15.8166  −15.3665
−16.8176  −14.9953  −16.8123
−14.7882  −13.8997  −16.1402
−14.5928  −15.0521  −15.5889
−15.8049  −16.3126  −16.5502
−15.479   −15.8001  −16.1557
−12.8385  −15.3986  −15.5094
−13.3789  −14.598   −15.9061
−15.6645  −15.1186  −15.5069
−14.4965  −14.459   −15.1171
−14.7832  −14.7719  −16.6327
−15.0335  −15.1856  −17.7803
−13.7298  −15.3206  −14.6813
−15.2349  −15.1718  −15.467
−17.3423  −17.473   −20.5415
−13.2297  −15.3227  −16.1059
−15.7978  −15.0212  −15.2354
−14.0272  −12.8995  −14.2733

Table 9: RBC Known Solution Error Approximations, d = (2,2,2)
K          d = (1,1,1)  d = (2,2,2)  d = (3,3,3)
NonSeries  −6.51367     −12.2771     −16.8947
0          −6.0793      −6.26833     −6.29843
1          −6.00805     −6.24202     −6.49835
2          −6.52219     −7.43876     −7.04785
3          −6.57578     −8.03864     −8.32589
4          −6.62831     −9.1522      −9.49314
5          −6.77305     −10.5516     −10.4247
6          −6.38499     −11.6399     −11.455
7          −6.63849     −11.3627     −13.0213
8          −6.38735     −11.7288     −13.9976
9          −6.46759     −11.9826     −15.3372
10         −6.45966     −12.1345     −15.4157
10         −6.77776     −11.5025     −15.8211
15         −6.67778     −12.4593     −15.8899
20         −6.40802     −11.7296     −17.1652
25         −6.53265     −12.1441     −17.1518
30         −6.81949     −11.6097     −16.3748
35         −6.33508     −12.1446     −16.6963
40         −6.52442     −11.4042     −15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
c_1 k_1 θ_1
⋮
c_30 k_30 θ_30

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 12: RBC Approximated Solution: Values at Evaluation Points, d = (1,1,1)
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 13: RBC Approximated Solution: Error Approximations, d = (1,1,1)
k_1 θ_1 ε_1
⋮
k_30 θ_30 ε_30

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 14: RBC Approximated Solution: Model Evaluation Points, d = (2,2,2)
c_1 k_1 θ_1
⋮
c_30 k_30 θ_30

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 15: RBC Approximated Solution: Values at Evaluation Points, d = (2,2,2)
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 16: RBC Approximated Solution: Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υI_ss

max u(c_t) + E_t ∑_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 − d)k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1)/(1 − η)
L = u(c_t) + E_t [ ∑_{t=0}^∞ δ^t u(c_{t+1}) + ∑_{t=0}^∞ δ^t λ_t (c_t + k_t − ((1 − d)k_{t−1} + θ_t f(k_{t−1}))) + δ^t μ_t ((k_t − (1 − d)k_{t−1}) − υI_ss) ]

so that we can impose

I_t = (k_t − (1 − d)k_{t−1}) ≥ υI_ss
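The constraint threshold depends on the steady-state investment level I_ss. As a hedged illustration (the parameter values below are mine, not the paper's), the deterministic steady state of this RBC model solves the Euler condition 1 = δ(αk^{α−1} + 1 − d), which pins down I_ss = d·k_ss:

```python
# Hedged sketch (not the paper's code): deterministic steady state of the RBC
# model above, used to pin down I_ss in the constraint I_t >= upsilon * I_ss.
# Parameter values are illustrative.

def rbc_steady_state(alpha, delta, d):
    """Solve the steady-state Euler equation 1 = delta*(alpha*k^(alpha-1) + 1 - d)."""
    k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))
    c_ss = k_ss ** alpha - d * k_ss   # resource constraint with theta = 1, k_t = k_{t-1}
    i_ss = d * k_ss                   # steady-state investment
    return k_ss, c_ss, i_ss

k_ss, c_ss, i_ss = rbc_steady_state(alpha=0.36, delta=0.95, d=0.1)
euler_resid = 1.0 - 0.95 * (0.36 * k_ss ** (0.36 - 1.0) + 1.0 - 0.1)
```

The residual check confirms the closed form satisfies the Euler condition at the computed k_ss.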
The first order conditions become

1/c_t^η + λ_t

λ_t − δλ_{t+1}θ_{t+1}α k_t^{α−1} − δ(1 − d)λ_{t+1} + μ_t − δ(1 − d)μ_{t+1}
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ > 0 and (k_t − (1 − d)k_{t−1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{(ρ ln(θ_{t−1}) + ε)}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + μ_{t+1} + δ(1 − d)μ_{t+1})
I_t − (k_t − (1 − d)k_{t−1})
μ_t (k_t − (1 − d)k_{t−1} − υI_ss)
if (μ = 0 and (k_t − (1 − d)k_{t−1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{(ρ ln(θ_{t−1}) + ε)}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + μ_{t+1} + δ(1 − d)μ_{t+1})
I_t − (k_t − (1 − d)k_{t−1})
μ_t (k_t − (1 − d)k_{t−1} − υI_ss)
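A minimal sketch of the branch selection implied by these two equation systems: a candidate x_t is kept only when exactly one of the gates holds. The function name and tolerance below are illustrative, not taken from the paper's code.

```python
# Hedged sketch of the complementarity gates above: either mu_t > 0 with the
# investment constraint binding, or mu_t = 0 with the constraint slack.
# In both branches the product mu_t * (I_t - upsilon*I_ss) must vanish.

def obc_gate(mu_t, k_t, k_tm1, d, upsilon_iss, tol=1e-10):
    """Return 'binding', 'slack', or None when neither gate is consistent."""
    slack = (k_t - (1.0 - d) * k_tm1) - upsilon_iss   # I_t - upsilon*I_ss
    if mu_t > tol and abs(slack) <= tol:
        return "binding"
    if abs(mu_t) <= tol and slack >= -tol:
        return "slack"
    return None   # reject this candidate x_t
```

The solver tries each branch's equation system and uses the gate to decide which solution is admissible.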
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k_1 θ_1 ε_1
⋮
k_30 θ_30 ε_30

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 17: Occasionally Binding Constraints: Model Evaluation Points, d = (1,1,1)
c_1 I_1 k_1
⋮
c_30 I_30 k_30

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 18: Occasionally Binding Constraints: Values at Evaluation Points, d = (1,1,1)
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 19: Occasionally Binding Constraints: Error Approximations, d = (1,1,1)
k_1 θ_1 ε_1
⋮
k_30 θ_30 ε_30

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 20: Occasionally Binding Constraints: Model Evaluation Points, d = (2,2,2)
c_1 I_1 k_1
⋮
c_30 I_30 k_30

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 21: Occasionally Binding Constraints: Values at Evaluation Points, d = (2,2,2)
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)

[30 × 3 table of numeric values; entries garbled in extraction and not reproduced.]

Table 22: Occasionally Binding Constraints: Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft: November 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.
Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbc/d69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x⋆_t = g⋆(x_{t−1}, ε_t), define the conditional expectations function

G⋆(x) ≡ E[g⋆(x, ε)]

then with

E_t x⋆_{t+1} = G⋆(g⋆(x_{t−1}, ε_t))

we have

M(x_{t−1}, x⋆_t, E_t x⋆_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G⋆ and L, construct the family of trajectories and the corresponding z⋆_t(x_{t−1}, ε):

{x⋆_t(x_{t−1}, ε_t) ∈ R^L : ‖x⋆_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z⋆_t(x_{t−1}, ε_t) ≡ H_{−1} x⋆_{t−1}(x_{t−1}, ε_t) + H_0 x⋆_t(x_{t−1}, ε_t) + H_1 x⋆_{t+1}(x_{t−1}, ε_t)
Consequently, the exact solution has a representation given by

x⋆_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + ∑_{ν=0}^∞ F^ν φ z⋆_{t+ν}(x_{t−1}, ε)

and

E[x⋆_{t+1}(x_{t−1}, ε)] = B x⋆_t + ∑_{ν=0}^∞ F^ν φ E[z⋆_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φψ_c

with

M(x_{t−1}, x⋆_t, E_t x⋆_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t)
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t−1}, ε):¹²

{x^p_t(x_{t−1}, ε_t) ∈ R^L : ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z^p_t(x_{t−1}, ε_t) ≡ H_{−1} x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_1 x^p_{t+1}(x_{t−1}, ε_t)
The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + ∑_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + ∑_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φψ_c

with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the representation of the proposed solution from that of the exact solution gives

x⋆_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = ∑_{ν=0}^∞ F^ν φ (z⋆_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε))

Defining

Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ z⋆_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε)

we have

x⋆_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = ∑_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

‖x⋆_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ ∑_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞
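A quick numeric check of this inequality in the scalar case; F, φ, and the Δz sequence below are illustrative numbers, not values from the paper.

```python
# Numeric check (scalar case) of the sup-norm bound above:
# x*_t - x^p_t = sum_nu F^nu * phi * dz_nu  implies
# |x*_t - x^p_t| <= sum_nu |F|^nu * |phi| * max|dz| = |phi| * max|dz| / (1 - |F|).
F, phi = 0.6, 0.8
dz = [0.05, -0.03, 0.02, -0.01]          # deviations dz_{t+nu}, zero afterwards
diff = sum((F ** nu) * phi * dz[nu] for nu in range(len(dz)))
bound = phi * max(abs(v) for v in dz) / (1.0 - F)
```

The geometric decay of F^ν is what makes the truncated series usable in practice.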
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x⋆_t.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz⋆_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x⋆_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
z⋆_t = H_− x⋆_{t−1} + H_0 x⋆_t + H_+ x⋆_{t+1}
z^p_t = H_− x^p_{t−1} + H_0 x^p_t + H_+ x^p_{t+1}
0 = M(x_{t−1}, x⋆_t, x⋆_{t+1}, ε_t)
e^p_t(x_{t−1}, ε) = M(x^p_{t−1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t−1}, ε_t} (‖φ Δz_t‖_2)² = max_{x_{t−1}, ε_t} (‖φ (H_0(x^p_t − x⋆_t) + H_+(x^p_{t+1} − x⋆_{t+1}))‖_2)²
with

M(x_{t−1}, x⋆_t, x⋆_{t+1}, ε_t) = 0

a first-order expansion gives

Δe^p_t(x_{t−1}, ε) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t)(x⋆_t − x^p_t) + (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x⋆_{t+1} − x^p_{t+1})

When ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

x⋆_t − x^p_t ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x⋆_{t+1} − x^p_{t+1}))
Collecting results,

(‖φ Δz_t‖_2)² = (‖φ (H_0(x^p_t − x⋆_t) + H_+(x^p_{t+1} − x⋆_{t+1}))‖_2)²

= (‖φ (−H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) + H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1})(x⋆_{t+1} − x^p_{t+1}) + H_+(x^p_{t+1} − x⋆_{t+1}))‖_2)²

= (‖φ (−H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) + (H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_+)(x⋆_{t+1} − x^p_{t+1}))‖_2)²
Approximating the derivatives using the H matrices,

∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,   ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+

so that the second term vanishes, produces

max_{x_{t−1}, ε_t} (‖φ Δz_t‖_2)² = max_{x_{t−1}, ε_t} (‖φ Δe^p_t(x_{t−1}, ε)‖_2)²
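A hedged sketch of the resulting error estimate: for a linear M the first-order relation is exact, so solving H_0 · err = e^p recovers the decision-rule deviation when the period-t+1 inputs of the exact and proposed rules agree. The 2×2 matrices below are illustrative, not from the paper.

```python
# Illustrative linear example of the error formula above.
import numpy as np

H_m = np.array([[0.2, 0.0], [0.1, 0.3]])   # coefficients on x_{t-1}
H_0 = np.array([[1.0, 0.2], [0.0, 1.1]])   # coefficients on x_t
H_p = np.array([[0.3, 0.0], [0.0, 0.25]])  # coefficients on x_{t+1}

def M(x_tm1, x_t, x_tp1, eps):
    return H_m @ x_tm1 + H_0 @ x_t + H_p @ x_tp1 - eps

x_tm1 = np.array([0.5, -0.2])
x_star = np.array([0.1, 0.2])              # treat as the exact x*_t
x_tp1 = np.array([0.05, 0.07])             # common E_t x_{t+1} input
eps = H_m @ x_tm1 + H_0 @ x_star + H_p @ x_tp1   # makes M(..) = 0 at x*_t

x_prop = x_star + np.array([0.03, -0.02])  # perturbed proposed rule value
e_p = M(x_tm1, x_prop, x_tp1, eps)         # Euler-equation residual e^p_t
err_est = np.linalg.solve(H_0, e_p)        # (dM/dx_t)^{-1} e^p_t
```

For nonlinear M this recovery is only first-order accurate, which is why the paper treats it as an error approximation rather than an exact correction.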
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial x(x_{t−1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t−1}, ε_t, x_t):

E_t[x_{t+1}] = ∑_{j=1}^N p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = ∑_{j=1}^N p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)
If we know the current state, we can use the known conditional expectation function for that state:

[E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N)]

For the next state, we can compute expectations if the errors are independent:

[E_t(x_{t+2}(x_t) | s_t = 1), …, E_t(x_{t+2}(x_t) | s_t = N)]ᵀ = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1), …, E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)]ᵀ
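A small numeric illustration of the stacked identity, using numpy's kron for P ⊗ I; the two-state, two-variable numbers are invented for the example.

```python
# Illustrative 2-state, 2-variable example of the stacked identity above:
# the vector of E_t(x_{t+2} | s_t = i) blocks equals (P kron I) times the
# vector of E_t(x_{t+2} | s_{t+1} = j) blocks.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])               # p_ij = Prob(s_{t+1} = j | s_t = i)
cond_next = np.array([0.4, 1.0,          # block for s_{t+1} = 1
                      0.7, -0.3])        # block for s_{t+1} = 2
stacked = np.kron(P, np.eye(2)) @ cond_next
# block i of the result is sum_j p_ij * E_t(x_{t+2} | s_{t+1} = j)
block1 = 0.9 * cond_next[:2] + 0.1 * cond_next[2:]
```

Because P is constant, the same Kronecker-product step can be iterated to push expectations further into the future.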
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1, and

Prob(s_t = j | s_{t−1} = i) = p_{ij}
if (s_t = 0 and μ > 0 and (k_t − (1 − d_0)k_{t−1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + k_t − (1 − d_0)k_{t−1} − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{(ρ ln(θ_{t−1}) + ε)}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} + δ(1 − d_0)μ_{t+1})
I_t − (k_t − (1 − d_0)k_{t−1})
μ_t (k_t − (1 − d_0)k_{t−1} − υI_ss)

if (s_t = 0 and μ = 0 and (k_t − (1 − d_0)k_{t−1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − (1 − d_0)k_{t−1} − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{(ρ ln(θ_{t−1}) + ε)}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} + δ(1 − d_0)μ_{t+1})
I_t − (k_t − (1 − d_0)k_{t−1})
μ_t (k_t − (1 − d_0)k_{t−1} − υI_ss)
if (s_t = 1 and μ > 0 and (k_t − (1 − d_1)k_{t−1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + k_t − (1 − d_1)k_{t−1} − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{(ρ ln(θ_{t−1}) + ε)}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} + δ(1 − d_1)μ_{t+1})
I_t − (k_t − (1 − d_1)k_{t−1})
μ_t (k_t − (1 − d_1)k_{t−1} − υI_ss)
if (s_t = 1 and μ = 0 and (k_t − (1 − d_1)k_{t−1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − (1 − d_1)k_{t−1} − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{(ρ ln(θ_{t−1}) + ε)}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} + δ(1 − d_1)μ_{t+1})
I_t − (k_t − (1 − d_1)k_{t−1})
μ_t (k_t − (1 − d_1)k_{t−1} − υI_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ = {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}: collects the decision rule components
G = {X(x_{t−1}), Z(x_{t−1})}: collects the decision rule conditional expectations components
H = {γ, G}: collects the decision rule information
{C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system R^{3N_x + N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S, T)
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option "normConvTol" (default 10^{−10}).
2. NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
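A hedged Python analogue of the outer NestWhile loop in Algorithm 1 (the paper's prototype is written in Mathematica); the toy update map below is an illustrative contraction standing in for doInterp, and normConvTol plays the role it has in the text.

```python
# Hedged sketch of the Algorithm 1 outer loop: iterate rule -> update(rule)
# until the rule's values at the test points stop changing.

def nest_while(update, rule0, test_points, norm_conv_tol=1e-10, max_iter=500):
    """Fixed-point iteration with convergence checked at test_points."""
    rule = rule0
    prev = [rule(x) for x in test_points]
    for _ in range(max_iter):
        rule = update(rule)
        cur = [rule(x) for x in test_points]
        if max(abs(a - b) for a, b in zip(cur, prev)) < norm_conv_tol:
            return rule
        prev = cur
    raise RuntimeError("no convergence")

# toy update whose fixed point is the rule x -> 0.5*x + 1
update = lambda r: (lambda x: 0.5 * r(x) + 0.25 * x + 0.5)
sol = nest_while(update, lambda x: 0.0, [0.0, 1.0, 2.0])
```

Checking convergence at a fixed set of test points, rather than in function space, is what makes the stopping rule cheap to evaluate.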
input: (L, H_i, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1. Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) and one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)})
3. H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1. Apply genFindRootWorker to each model component. Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1. Symbolically construct functions that return x_t for a given (x_{t−1}, ε_t).
2. Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3. modEqnArgsFunc = genLilXkZkFunc(L, Z)
4. X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5. funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6. funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker

input: (x^p_t, L, G, κ, M)
1. Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (the empty list for κ = 1).
2. X_t = x^p_t
3. for i = 1; i < κ − 1; i++ do
4.   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5. end
6. {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x ε) =[Bx+ (I minus F )minus1φψc + ψεεt
0Nx
]
3 G(x ε) =[Bx+ (I minus F )minus1φψc + ψε
0Nx
]4 H = γG
output HAlgorithm 6 genBothX0Z0Funcs
input (L Zt+1 Zt+kminus1)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolic )inputs (xtminus1 xt Et(xt+1) εt)for model system equations M
2 fCon = fSumC(φ F ψz Zt+1 Zt+kminus1)xtV als = genXtOfXtm1(L xtminus1 εt zt fCon)xtp1V als = genXtp1OfXt(L xtV als fCon)fullV ec = xtm1V ars xtV als xtp1V als epsV arsoutput fullV ec
Algorithm 7 genLilXkZkFunc
input: (L, x_{t-1}, ε_t, z_t, fCon)
1. Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_t = Bx_{t-1} + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + F·fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1. Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_{t+1} = Bx_t + (I - F)^{-1}φψ_c + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1. Numerically sum the F contribution for the series.
2. S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
input: ({C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c), S)
1. Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2. interpData = genInterpData({C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c), S)
3. {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c), S)
1. Evaluate the model equation systems at each collocation point to produce the interpolation data.
output: interpData
Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1. Evaluate the model equations obeying pre- and post-conditions.
output: x_t or "failed"
Algorithm 13: evaluateTriple
input: (V, S)
1. Compute a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
  - Algorithm Overview
    - Single Equation System Case
    - Multiple Equation System Case
  - Approximating a Known Solution: η = 1, U(c) = Log(c)
  - Approximating an Unknown Solution: η = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
2 A New Series Representation For Bounded Time Series
Since (Blanchard and Kahn 1980), numerous authors have developed solution algorithms and formulas for solving linear rational expectations models. It turns out that the solutions associated with these linear saddle point models, by quantifying the potential impact of anticipated future shocks, can play a new and somewhat surprising role in characterizing the solutions for highly nonlinear models. In preparation for discussing how these calculations can be useful for representing time invariant maps, this section describes the relationship between linear reference models and bounded time series.
2.1 Linear Reference Models and a Formula for "Anticipated Shocks"

For any linear homogeneous L-dimensional deterministic system that produces a unique stable solution,

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = 0,

inhomogeneous solutions

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c

can be computed as

x_t = B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c,

where

φ = (H_0 + H_1 B)^{-1}  and  F = -φ H_1

(Anderson 2010).
It will be useful to collect the components of this linear reference model solution, L ≡ {H, ψ_ε, ψ_c, B, φ, F}, for later use.³
Theorem 2.1. Consider any bounded discrete time path

x_t ∈ R^L with ‖x_t‖_∞ ≤ X̄  ∀t > 0.  (1)

Now, given the trajectories (1), define z_t as

z_t ≡ H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} - ψ_c - ψ_ε ε_t.

One can then express the x_t solving

H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c + z_t

using

x_t = B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}  (2)

and

x_{t+k+1} = B x_{t+k} + (I - F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+k+ν+1}  ∀t, k ≥ 0.

³ In Section 2.2 I will use these matrices to construct a series representation for any bounded path, and in Section 5 to compute improvements in proposed model solutions.
Proof: See (Anderson 2010).
Consequently, given a bounded time series (1) and a stable linear homogeneous system L, one can easily compute a corresponding series representation like (2) for x_t. Interestingly, the linear model H, the constant term ψ_c, and the impact of the stochastic shocks ψ_ε can be chosen rather arbitrarily; the only constraints are the existence of a saddle-point solution for the linear system and that all z-values fall in the row space of H_1. Note that since the eigenvalues of F are all less than one in magnitude, the formula supports the intuition that distant future shocks influence current conditions less than imminent shocks.

The transformation {x_t, x_{t+1}, x_{t+2}, ...} → L(x_{t-1}, ε_t) → {z_t, z_{t+1}, z_{t+2}, ...} is invertible, {z_t, z_{t+1}, z_{t+2}, ...} → L(x_{t-1}, ε_t)^{-1} → {x_t, x_{t+1}, x_{t+2}, ...}, and the inverse can be interpreted as giving the impact of "fully anticipated future shocks" on the path of x_t in a linear perfect foresight model. Note that the reference model is deterministic, so the z_t(x_{t-1}, ε_t) functions fully account for any stochastic aspects encountered in the models I analyze.

A key feature to note is that the series representation can accommodate arbitrarily complex time series trajectories so long as these trajectories are bounded. Later, this observation will give us some confidence in the robustness of the algorithm for constructing series representations for unknown families of functions satisfying complicated systems of dynamic nonlinear equations.
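The theorem can be exercised directly. The sketch below uses a hypothetical scalar (L = 1) reference model, with coefficients chosen only so that a unique stable root exists, builds z_t from an arbitrary bounded path, and checks that the truncated series in (2) reproduces x_t:

```python
import math

# Scalar linear reference model H_{-1} x_{t-1} + H0 x_t + H1 x_{t+1} = 0,
# psi_c = psi_eps = 0. The coefficients are hypothetical, picked so that one
# root of the characteristic polynomial is stable and one is unstable.
Hm1, H0, H1 = 0.2, -1.0, 0.5

# Stable root B of H_{-1} + H0 B + H1 B^2 = 0, then phi and F as in the text
disc = math.sqrt(H0 ** 2 - 4.0 * H1 * Hm1)
B = min(((-H0 + disc) / (2.0 * H1), (-H0 - disc) / (2.0 * H1)), key=abs)
phi = 1.0 / (H0 + H1 * B)          # phi = (H0 + H1 B)^{-1}
F = -phi * H1                      # |F| < 1, so the series converges

# An arbitrary bounded path (a sine wave here) and its implied z_t values
T = 200
x = [math.sin(0.7 * t) for t in range(T)]
z = [Hm1 * x[t - 1] + H0 * x[t] + H1 * x[t + 1] for t in range(1, T - 1)]

# Formula (2): x_t = B x_{t-1} + sum_{nu>=0} F^nu phi z_{t+nu}, truncated at K
t0, K = 1, 60
x_series = B * x[t0 - 1] + sum(F ** nu * phi * z[nu] for nu in range(K))
assert abs(x_series - x[t0]) < 1e-9   # the series reproduces the path
```

Because |F| < 1 the partial sums converge geometrically, so K = 60 terms already reproduce x_t to near machine precision here.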
2.2 The Linear Reference Model in Action: Arbitrary Bounded Time Series

Consider the following matrix constructed from "almost" arbitrary coefficients:

[H_{-1} H_0 H_1] =
[ 0.1   0.5   -0.5  |  1    0.4   0.9  |  1    1    0.9 ]
[ 0.2   0.2   -0.5  |  7    0.4   0.8  |  3    2    0.6 ]
[ 0.1  -0.25  -1.5  |  2.1  0.47  1.9  |  2.1  2.1  3.9 ]   (3)

with ψ_c = ψ_ε = 0, ψ_z = I. I have chosen these coefficients so that the linear model has a unique stable solution and, since H_1 is full rank, its column space will span the column space for all nonzero values of z_t ∈ R³.⁴ It turns out that the algorithm is very forgiving about the choice of L.

⁴ The flexibility in choosing L will become more important later in the paper, where we must choose a linear reference model in a context where we have little guidance about what might make a good choice.
The solution for this system is

B =
[ -0.0282384  -0.0552487   0.00939369 ]
[ -0.0664679  -0.700462   -0.0718527  ]
[ -0.163638   -1.39868     0.331726   ]

φ =
[  0.0210079   0.15727    -0.0531634  ]
[  1.20712    -0.0553003  -0.431842   ]
[  2.58165    -0.183521   -0.578227   ]

F =
[ -0.381174   -0.223904    0.0940684  ]
[ -0.134352   -0.189653    0.630956   ]
[ -0.816814   -1.00033     0.0417094  ]
It is straightforward to apply (2) to the following three bounded families of time series paths:

x¹_t = (‖x_{t-1}‖₁ + ε_t) D_π(t),  where D_π(t) gives the t-th digit of π,

x²_t = (-1)^t ‖x_{t-1}(1 + ε_t)‖₂,

x³_t = ε̃_t,

where the ε̃_t are a sequence of pseudo-random draws from the uniform distribution U(-4, 4) produced subsequent to the selection of a random seed, randomSeed(‖x_{t-1}‖ + ε_t).

The three panels in Figure 1 display these three time series generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set of trajectories oscillates between two values determined by a nonlinear function of the initial conditions x_{t-1} and the shock. The third set of trajectories characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model L.⁵
Though unintuitive, and perhaps lacking a compelling interpretation, the values for z_t exist, provide a series for the x_t, and are easy to compute. One can apply the calculations for any given initial condition to produce a z series exactly replicating any bounded set of trajectories. Consequently, these numerical linear algebra calculations can easily transform a z_t series into an x_t series and vice versa. Since this can be done for any path starting from any particular initial conditions, any discrete time invariant map that generates families of bounded paths has a series representation.

⁵ Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
[Figure 1: Arbitrary Bounded Time Series Paths. Three panels: Pi Digits Path, Oscillatory Path, and Pseudo Random Path.]
2.3 Assessing x_t Errors

Formula (2) computes the impact on the current state of fully anticipated future shocks. It characterizes the impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of z_t, or one might have imprecise values of z_t along the path.
2.3.1 Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating x_t by truncating the series at a finite number of terms:

x̂_t ≡ B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c + Σ_{s=0}^k F^s φ z_{t+s}

We can bound the series truncation error. Since

Σ_{s=k+1}^∞ F^s φψ_z = (I - F)^{-1} F^{k+1} φψ_z,

we have

‖x(x_{t-1}, ε) - x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I - F)^{-1} F^{k+1} φψ_z‖_∞ (‖H_{-1}‖_∞ + ‖H_0‖_∞ + ‖H_1‖_∞) ‖X̄‖_∞
Figure 2 shows that this truncation error bound can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between x_t from the full series and a truncated series, for different lengths of the truncation. The blue line shows the infinity norm of the actual difference between the x_t computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
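The conservatism of the bound is easy to see numerically. The following sketch, using a hypothetical scalar reference model with ψ_z = 1 and a bounded sine-wave path, compares the actual truncation error with the bound for several truncation lengths:

```python
import math

# Hypothetical scalar reference model (unique stable root), psi_z = 1
Hm1, H0, H1 = 0.2, -1.0, 0.5
disc = math.sqrt(H0 ** 2 - 4.0 * H1 * Hm1)
B = min(((-H0 + disc) / (2.0 * H1), (-H0 - disc) / (2.0 * H1)), key=abs)
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1

# Bounded path and its z_t values
T = 300
x = [math.sin(0.7 * t) for t in range(T)]
z = [Hm1 * x[t - 1] + H0 * x[t] + H1 * x[t + 1] for t in range(1, T - 1)]

x_full = B * x[0] + sum(F ** nu * phi * z[nu] for nu in range(T - 2))
Xbar = max(abs(v) for v in x)      # bound on the path
for k in range(5, 40, 5):
    x_trunc = B * x[0] + sum(F ** nu * phi * z[nu] for nu in range(k + 1))
    actual = abs(x_full - x_trunc)
    bound = (abs(F ** (k + 1) * phi / (1.0 - F))
             * (abs(Hm1) + abs(H0) + abs(H1)) * Xbar)
    assert actual <= bound         # conservative, as in Figure 2
```

The actual error is typically several times smaller than the bound because the z terms partially cancel, which is exactly the gap between the blue and orange lines in Figure 2.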
[Figure 2: x_t Error Approximation Versus Actual Error (truncation error bound and actual error).]
2.3.2 Path Error

We can assess the impact of perturbed values for z_t by computing the maximum discrepancy ∆z_t impacting the z_t and applying the partial sum formula. One can approximate x_t using

x̂_t ≡ B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c + Σ_{s=0}^∞ F^s φ (z_{t+s} + ∆z_{t+s})

Note yet again that, for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for those realizations which are more temporally distant. However, we can conservatively bound the series approximation error by using the largest possible ∆z_t in the formula:

‖x(x_{t-1}, ε) - x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I - F)^{-1} φψ_z‖_∞ ‖∆z_t‖_∞
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).⁶ As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that gives us an approximation for the model solutions. This section presents a familiar RBC model as an example.⁷
3.1 An RBC Example

Consider the model described in (Maliar and Maliar 2005):⁸

max u(c_t) + E_t Σ_{t=1}^∞ δ^t u(c_{t+1})

subject to

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1}),  f(k) = k^α,  u(c) = (c^{1-η} - 1)/(1 - η)

The first order conditions for the model are

1/c_t^η = αδ k_t^{α-1} E_t(θ_{t+1}/c_{t+1}^η)

c_t + k_t = θ_t k_{t-1}^α

θ_t = θ_{t-1}^ρ e^{ε_t}
It is well known that when η = d = 1 we have a closed form solution (Lettau 2003):

x_t(x_{t-1}, ε_t) ≡ D(x_{t-1}, ε_t) ≡
[ c_t(x_{t-1}, ε_t) ]   [ (1 - αδ) θ_t k_{t-1}^α ]
[ k_t(x_{t-1}, ε_t) ] = [ αδ θ_t k_{t-1}^α       ]
[ θ_t(x_{t-1}, ε_t) ]   [ θ_{t-1}^ρ e^{ε_t}      ]
⁶ Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter 2000; Koop et al. 1996).
⁷ Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.
⁸ Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
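A short sketch (with illustrative parameter values) confirms that the closed-form rule satisfies the budget constraint and the Euler equation exactly. With η = d = 1 the ratio θ_{t+1}/c_{t+1} = 1/((1 - αδ) k_t^α) is known at time t, so the conditional expectation in the Euler equation drops out:

```python
import math

alpha, delta, rho = 0.36, 0.95, 0.95       # illustrative parameter values

def decision_rule(k_lag, theta_lag, eps):
    # closed-form rule for eta = d = 1
    theta = theta_lag ** rho * math.exp(eps)
    c = (1.0 - alpha * delta) * theta * k_lag ** alpha
    k = alpha * delta * theta * k_lag ** alpha
    return c, k, theta

c, k, theta = decision_rule(0.18, 1.1, 0.01)

# budget constraint: c_t + k_t = theta_t * k_{t-1}^alpha
assert abs(c + k - theta * 0.18 ** alpha) < 1e-12

# Euler equation: 1/c_t = alpha*delta*k_t^(alpha-1)*E_t[theta_{t+1}/c_{t+1}];
# the ratio theta_{t+1}/c_{t+1} = 1/((1-alpha*delta)*k_t^alpha) is deterministic
ratio = 1.0 / ((1.0 - alpha * delta) * k ** alpha)
assert abs(1.0 / c - alpha * delta * k ** (alpha - 1.0) * ratio) < 1e-9
```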
For mean zero iid ε_t, we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

D̄(x_{t+k+1}) ≡
[ E_t(c_{t+k+1} | θ_{t+k}, k_{t+k}) ]   [ (1 - αδ) k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ ]
[ E_t(k_{t+k+1} | θ_{t+k}, k_{t+k}) ] = [ αδ k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ       ]
[ E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k}) ]   [ e^{σ²/2} θ_{t+k}^ρ                    ]
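The e^{σ²/2} factor is the lognormal correction E[e^ε] for ε ~ N(0, σ²). A Monte Carlo sketch can confirm it; σ is set larger here than the calibrated 0.01 only so the correction is visible against sampling noise:

```python
import math
import random

# Check E[theta_{t+1} | theta_t] = e^{sigma^2/2} * theta_t^rho for
# eps ~ N(0, sigma^2). Parameter values are illustrative.
rho, sigma, theta_t = 0.95, 0.5, 1.1
random.seed(12345)
draws = [theta_t ** rho * math.exp(random.gauss(0.0, sigma))
         for _ in range(200_000)]
mc_mean = sum(draws) / len(draws)
exact = math.exp(sigma ** 2 / 2.0) * theta_t ** rho
assert abs(mc_mean - exact) / exact < 0.01   # agree to about 1%
```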
For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule D and decision rule conditional expectation D̄ can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t),

x_t(x_{t-1}, ε_t) = D(x_{t-1}, ε_t)
x_{t+k+1}(x_{t-1}, ε_t) = D̄(x_{t+k}(x_{t-1}, ε_t)),

and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)):

z_{t+k} ≡ H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}

Formula (2) requires that

x(x_{t-1}, ε_t) = B [ c_{t-1} ; k_{t-1} ; θ_{t-1} ] + φψ_ε ε_t + (I - F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(k_{t-1}, θ_{t-1}, ε_t)
For example, using d = 1 and the following parameter values, and using the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

α = 0.36,  δ = 0.95,  ρ = 0.95,  σ = 0.01,

we have

[ c_ss ]   [ 0.360408 ]      [ k_{t-1} ]   [ 0.18 ]
[ k_ss ] = [ 0.187324 ],     [ θ_{t-1} ] = [ 1.1  ]   (4)
[ θ_ss ]   [ 1.001    ]      [ ε_t     ]   [ 0.01 ]
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation (4).

By including enough terms, we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude of the error B_n, shown in red, is again very pessimistic compared to the actual error Z_n = ‖x_t - x̂_t‖_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine precision accurate value for the time t state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves the approximation Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter, but still pessimistic, bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
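The Figure 4 experiment can be sketched numerically: generate the conditional expectations path with the closed-form rule, form z_t with the "almost arbitrary" model (3), and confirm that the truncation error shrinks as terms are added. The snippet below uses the printed (rounded) solution matrices; c_{t-1} is set to its steady state value, a free choice since z_t adjusts to whatever initial state is used:

```python
import numpy as np

alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01
Hm1 = np.array([[0.1, 0.5, -0.5], [0.2, 0.2, -0.5], [0.1, -0.25, -1.5]])
H0 = np.array([[1.0, 0.4, 0.9], [7.0, 0.4, 0.8], [2.1, 0.47, 1.9]])
H1 = np.array([[1.0, 1.0, 0.9], [3.0, 2.0, 0.6], [2.1, 2.1, 3.9]])
B = np.array([[-0.0282384, -0.0552487, 0.00939369],
              [-0.0664679, -0.700462, -0.0718527],
              [-0.163638, -1.39868, 0.331726]])
phi = np.array([[0.0210079, 0.15727, -0.0531634],
                [1.20712, -0.0553003, -0.431842],
                [2.58165, -0.183521, -0.578227]])
F = -phi @ H1                      # F = -phi H1

def D(x, eps):                     # decision rule, variables ordered (c, k, theta)
    theta = x[2] ** rho * np.exp(eps)
    prod = theta * x[1] ** alpha
    return np.array([(1 - alpha * delta) * prod, alpha * delta * prod, theta])

def Dbar(x):                       # decision rule conditional expectation
    theta = np.exp(sigma ** 2 / 2) * x[2] ** rho
    prod = theta * x[1] ** alpha
    return np.array([(1 - alpha * delta) * prod, alpha * delta * prod, theta])

T = 100
xs = [np.array([0.360408, 0.18, 1.1])]    # (c_{t-1}, k_{t-1}, theta_{t-1})
xs.append(D(xs[0], 0.01))                 # x_t from eps_t = 0.01
for _ in range(T):
    xs.append(Dbar(xs[-1]))               # conditional expectations path
zs = [Hm1 @ xs[j - 1] + H0 @ xs[j] + H1 @ xs[j + 1] for j in range(1, T)]

def series_x_t(K):                        # truncated formula (2), psi_c = psi_eps = 0
    acc, Fp = np.zeros(3), np.eye(3)
    for nu in range(K):
        acc += Fp @ phi @ zs[nu]
        Fp = Fp @ F
    return B @ xs[0] + acc

def err(K):
    return np.max(np.abs(series_x_t(K) - xs[1]))

assert err(60) < err(5)                   # more terms, smaller truncation error
```

The rounding of the printed matrices puts a small floor under the achievable accuracy here, but the qualitative pattern of Figure 4, rapid error decay in the first terms, comes through.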
[Figure 3: Model variable values.]
[Figure 4: RBC Known Solution Truncation: Approximate Error (B_n) Versus Actual Error (Z_n), "Almost Arbitrary" Linear Reference Model.]
[Figure 5: RBC Known Solution Truncation: Approximate Error (B_n) Versus Actual Error (Z_n), Stochastic Steady State Linearization Linear Reference Model.]
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h^{det}_{i0}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [ h^{det}_{ij}(x_{t-1}, x_t, ε_t) h^{nondet}_{ij}(x_{t+1}) ] = 0

This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the descriptions of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

h^{det}_{10}(·) = 1/c_t^η,  h^{det}_{11}(·) = -αδ k_t^{α-1},  h^{nondet}_{11}(·) = E_t(θ_{t+1}/c_{t+1}^η)

h^{det}_{20}(·) = c_t + k_t - θ_t k_{t-1}^α,  h^{det}_{21}(·) = 0

h^{det}_{30}(·) = ln θ_t - (ρ ln θ_{t-1} + ε_t),  h^{det}_{31}(·) = 0
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c_t^η,

substituting N_{t+1} for θ_{t+1}/c_{t+1}^η in the first equation, and linearizing the RBC model about the ergodic mean.

This leads to

[H_{-1} H_0 H_1] =
[ 0   0         0  0         | -7.6986  9.47963  0  0         | 0  0  -0.999  0 ]
[ 0  -1.05263   0  0         |  1       1        0  -0.547185 | 0  0   0      0 ]
[ 0   0         0  0         |  7.7063  0        1  -2.77463  | 0  0   0      0 ]
[ 0   0         0  -0.949953 |  0       0        0   1        | 0  0   0      0 ]

with

ψ_ε = [ 0 ; 0 ; 1 ; 0 ],  ψ_z = I.

These coefficients produce a unique stable linear solution:
B =
[ 0   0.692632  0  0.342028 ]
[ 0   0.36      0  0.177771 ]
[ 0  -5.33763   0  0        ]
[ 0   0         0  0.949953 ]

φ =
[ -0.0444237   0.658    0  0.360048 ]
[  0.0444237   0.342    0  0.187137 ]
[  0.342342   -5.07075  1  0        ]
[  0           0        0  1        ]

F =
[ 0  0  -0.0443793  0 ]
[ 0  0   0.0443793  0 ]
[ 0  0   0.342      0 ]
[ 0  0   0          0 ]

ψ_c =
[ -3.7735    ]
[ -0.197184  ]
[  2.77741   ]
[  0.0500976 ]
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.⁹

4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the ∆z^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];

then, with

E_t x_{t+1} = G(g(x_{t-1}, ε_t)),

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)

x_t(x_{t-1}, ε_t) ∈ R^L,  ‖x_t(x_{t-1}, ε_t)‖₂ ≤ X̄  ∀t > 0.

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t). Define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t))
e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Then

‖x_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x₋, ε} ‖(I - F)^{-1} φ M(x₋, g^p(x₋, ε), G^p(g^p(x₋, ε)), ε)‖₂,

which can be approximated by

max_{x_{t-1}, ε_t} ‖(I - F)^{-1} φ e^p_t(x_{t-1}, ε)‖₂.

For a proof, see Appendix A.

⁹ This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) - x^p(x_{t-1}, ε_t)

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in (2) to approximate e:

X^p(x) ≡ E[x^p(x, ε)]
x_{t+k+1} = X^p(x_{t+k}),  k = 1, ..., K

We can define functions z^p_t by

z^p_t ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t)
z^p_{t+k} ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0),

so that

e(x_{t-1}, ε_t) ≈ Σ_{ν=0}^K F^ν φ z^p_{t+ν}
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015,

producing the following 10 model specifications:

α       δ         η  ρ         σ
0.35    0.92875   1  0.957188  0.00875
0.275   0.9725    1  0.906875  0.01375
0.375   0.8675    1  0.924375  0.01125
0.225   0.94625   1  0.891563  0.00625
0.325   0.91125   1  0.944063  0.006875
0.2625  0.922187  1  0.98125   0.011875
0.3625  0.887187  1  0.85875   0.014375
0.2125  0.965938  1  0.965938  0.009375
0.3125  0.860938  1  0.878438  0.008125
0.2375  0.904688  1  0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² and estimated variance values, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).¹⁰ The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.

¹⁰ I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
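For concreteness, the accuracy regression can be sketched with ordinary least squares on synthetic stand-in data; the paper's e and ê come from the 100 rule/reference-model pairings, while the data-generating numbers below are hypothetical:

```python
import random

# Regress actual error e_i on predicted error e_hat_i: e_i = a + b*e_hat_i + u_i.
# Synthetic stand-ins for the 30 representative points; e_hat is a noisy
# affine transform of the true error, mimicking a good error formula.
random.seed(7)
e = [random.uniform(-0.01, 0.01) for _ in range(30)]                # actual errors
e_hat = [0.0005 + 0.95 * v + random.gauss(0.0, 0.0004) for v in e]  # predictions

n = len(e)
mxh, me = sum(e_hat) / n, sum(e) / n
sxx = sum((v - mxh) ** 2 for v in e_hat)
sxy = sum((v - mxh) * (w - me) for v, w in zip(e_hat, e))
beta = sxy / sxx                      # slope estimate
a = me - beta * mxh                   # intercept estimate
resid = [w - (a + beta * v) for v, w in zip(e_hat, e)]
r2 = 1.0 - sum(r * r for r in resid) / sum((w - me) ** 2 for w in e)
assert r2 > 0.6 and 0.5 < beta < 1.5  # formula tracks the actual error
```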
[Figure 6: c_t error regressions, (σ², R²) scatter plots for K = 0, 1, 10, 30.]
[Figure 7: k_t error regressions, (σ², R²) scatter plots for K = 0, 1, 10, 30.]
[Figure 8: c_t error regressions, (α, β) scatter plots for K = 0, 1, 10, 30.]
[Figure 9: k_t error regressions, (α, β) scatter plots for K = 0, 1, 10, 30.]
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c),

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))

represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
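A minimal sketch of this gated specification, for a toy occasionally binding model x_t = max(0, a x_{t-1} + ε_t); all names and the parameter a are hypothetical. Each branch carries a precondition b_i, equations M_i, and a post-condition c_i, and d selects the single candidate that passes its gates:

```python
a = 0.9                              # hypothetical persistence parameter

def slack_branch(x_lag, eps):
    # b: always worth trying; M: x_t - (a*x_lag + eps) = 0; c: x_t > 0
    x_t = a * x_lag + eps
    return (True, x_t, x_t > 0.0)

def binding_branch(x_lag, eps):
    # b: always worth trying; M: x_t = 0; c: the constraint actually binds
    return (True, 0.0, a * x_lag + eps <= 0.0)

def d(candidates):
    # selection function: exactly one branch should pass its (b, c) gates
    passing = [x for ok_pre, x, ok_post in candidates if ok_pre and ok_post]
    assert len(passing) == 1
    return passing[0]

def x_t(x_lag, eps):
    return d([slack_branch(x_lag, eps), binding_branch(x_lag, eps)])

assert abs(x_t(1.0, 0.2) - 1.1) < 1e-12   # slack: a*x + eps = 1.1 > 0
assert x_t(0.1, -0.5) == 0.0              # binding: a*x + eps = -0.41 <= 0
```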
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations of future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use anisotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.

There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error; more terms yields more accuracy when the decision rule is accurate, but is computationally more expensive. The initial guess is typically some distance away from the solution, so initially the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider the case of a single model equation system with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t)

Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t),

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)].

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.

We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [ x_{t-1} ; x^p(x_{t-1}, ε) ; X^p(x^p(x_{t-1}, ε)) ] - ψ_ε ε_t - ψ_c

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)]

• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0

• Using the z^p(x_{t-1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t-1} + φψ_ε ε + (I - F)^{-1} φψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})

E[X_{t+1}] = B X_t + (I - F)^{-1} φψ_c + Σ_{ν=1}^∞ F^{ν-1} φ Z(x_{t+ν})

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p'}(x_{t-1}, ε), z^{p'}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p'}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p'}(x_{t-1}, ε). Repeat the loop.
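The loop's fixed-point flavor can be seen in a scalar toy model M = x_t - a x_{t-1} - b E_t[x_{t+1}] - ε_t (hypothetical a, b; the z-series constraints and Smolyak interpolation of the full algorithm are omitted here):

```python
a, b = 0.5, 0.3            # hypothetical model coefficients, 4*a*b < 1

# Guess a decision rule x_t = g*x_{t-1} (+ shock term) and iterate: given
# E_t[x_{t+1}] = g*x_t, solving M = 0 yields x_t = (a*x_{t-1} + eps)/(1 - b*g),
# so the updated slope is g' = a/(1 - b*g).
g = 0.0
for _ in range(50):
    g = a / (1.0 - b * g)

# the exact stable rule solves b*g^2 - g + a = 0
g_true = (1.0 - (1.0 - 4.0 * a * b) ** 0.5) / (2.0 * b)
assert abs(g - g_true) < 1e-10
```

Each pass plays the role of one trip through the loop above: the proposed rule generates conditional expectations, and the model equations are re-solved to update the rule until it stops changing.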
532 Multiple Equation System Case
An outline of the current Mathematica implementation of the algorithm follows Table 1provides notation Figure 10 provides a graphic depiction of the function call tree For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F} : The linear reference model
x^p(x_{t-1}, ε_t) : The proposed decision rule, x_t components
z^p(x_{t-1}, ε_t) : The proposed decision rule, z_t components
X^p(x_{t-1}) : The proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}) : The proposed decision rule conditional expectations, z_t components
κ : One less than the number of terms in the series representation (κ ≥ 0)
S : Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T : Model evaluation points: a Niederreiter sequence in the model variable ergodic region
γ : {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}; collects the decision rule components
G : {X(x_{t-1}), Z(x_{t-1})}; collects the decision rule conditional expectations components
H : {γ, G}; collects decision rule information
{C_1, ..., C_{M_d}}(x_{t-1}, ε, x_t, E[x_{t+1}])|_{(b,c)} : The equation system, R^{3N_x + N_ε} → R^{N_x}

Table 1: Algorithm Notation
The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
[Figure 10 depicts the call tree relating nestInterp, doInterp, genFindRootFuncs, genFindRootWorker, genZsForFindRoot, genLilXkZkFunc, fSumC, genXtOfXtm1, genXtp1OfXt, makeInterpFuncs, genInterpData, evaluateTriple, and interpDataToFunc.]

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, ..., C_{M_d}}(x_{t-1}, ε, x_t, E[x_{t+1}])|_{(b,c)} and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs: The function can use the series expression (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).
makeInterpFuncs: Solves the model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
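In one dimension, the collocate-then-interpolate pattern of makeInterpFuncs can be sketched as below. The Chebyshev grid, the toy pointwise solver, and the function names are illustrative stand-ins for the Smolyak machinery of the actual Mathematica implementation; the pointwise solves are the step that the implementation parallelizes:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def make_interp_func(solve_at, lo, hi, deg):
    """Solve a model equation at Chebyshev collocation points on [lo, hi] and
    return an interpolating decision rule (1-D sketch of makeInterpFuncs).
    `solve_at` maps a state value to the policy that zeros the residual there."""
    nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))  # points in (-1, 1)
    states = lo + (nodes + 1) * (hi - lo) / 2                        # map to state space
    data = np.array([solve_at(s) for s in states])   # independent pointwise solves
    coefs = C.chebfit(nodes, data, deg)              # exact interpolation: deg+1 points
    return lambda s: C.chebval(2 * (s - lo) / (hi - lo) - 1, coefs)

# toy pointwise "solver": the policy c(k) solving c**2 - k = 0, i.e. c = sqrt(k)
rule = make_interp_func(lambda k: np.sqrt(k), 0.5, 2.0, 8)
```

Because each collocation solve is independent of the others, the `solve_at` calls map naturally onto parallel workers, which is how the Mathematica version exploits parallelism at this step.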
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11 panels: left, the 200 simulated ergodic values of k and θ; right, the same points in the transformed coordinates.]

Figure 11: The Ergodic Values for k_t, θ_t
The simulated data have mean, standard deviations, and principal directions

X = (0.18853, 1.00466, 0),   σ_X = (0.0112443, 0.0387807, 0.01)

V =
[ -0.707107  -0.707107   0 ]
[ -0.707107   0.707107   0 ]
[  0          0          1 ]

with transformed-coordinate bounds

maxU = (3.02524, 0.199124, 3),   minU = (-3.16879, -0.17629, -3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
[Table data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 2: RBC Known Solution Model Evaluation Points, d=(1,1,1)
[Table data: decision rule values (c_i, k_i, θ_i) at each of the 30 evaluation points.]

Table 3: RBC Known Solution Values at Evaluation Points, d=(1,1,1)
[Table data: actual errors (ln|e_c|, ln|e_k|, ln|e_θ|) at each of the 30 evaluation points.]

Table 4: RBC Known Solution Actual Errors, d=(1,1,1)
[Table data: estimated errors (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at each of the 30 evaluation points.]

Table 5: RBC Known Solution Error Approximations, d=(1,1,1)
[Table data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 6: RBC Known Solution Model Evaluation Points, d=(2,2,2)
[Table data: decision rule values (c_i, k_i, θ_i) at each of the 30 evaluation points.]

Table 7: RBC Known Solution Values at Evaluation Points, d=(2,2,2)
[Table data: actual errors (ln|e_c|, ln|e_k|, ln|e_θ|) at each of the 30 evaluation points.]

Table 8: RBC Known Solution Actual Errors, d=(2,2,2)
[Table data: estimated errors (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at each of the 30 evaluation points.]

Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
K           d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries   -6.51367    -12.2771    -16.8947
0           -6.0793     -6.26833    -6.29843
1           -6.00805    -6.24202    -6.49835
2           -6.52219    -7.43876    -7.04785
3           -6.57578    -8.03864    -8.32589
4           -6.62831    -9.1522     -9.49314
5           -6.77305    -10.5516    -10.4247
6           -6.38499    -11.6399    -11.455
7           -6.63849    -11.3627    -13.0213
8           -6.38735    -11.7288    -13.9976
9           -6.46759    -11.9826    -15.3372
10          -6.45966    -12.1345    -15.4157
10          -6.77776    -11.5025    -15.8211
15          -6.67778    -12.4593    -15.8899
20          -6.40802    -11.7296    -17.1652
25          -6.53265    -12.1441    -17.1518
30          -6.81949    -11.6097    -16.3748
35          -6.33508    -12.1446    -16.6963
40          -6.52442    -11.4042    -15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 11: RBC Approximated Solution Model Evaluation Points, d=(1,1,1)
[Table data: decision rule values (c_i, k_i, θ_i) at each of the 30 evaluation points.]

Table 12: RBC Approximated Solution Values at Evaluation Points, d=(1,1,1)
[Table data: estimated errors (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at each of the 30 evaluation points.]

Table 13: RBC Approximated Solution Error Approximations, d=(1,1,1)
[Table data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 14: RBC Approximated Solution Model Evaluation Points, d=(2,2,2)
[Table data: decision rule values (c_i, k_i, θ_i) at each of the 30 evaluation points.]

Table 15: RBC Approximated Solution Values at Evaluation Points, d=(2,2,2)
[Table data: estimated errors (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at each of the 30 evaluation points.]

Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss
max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} - 1) / (1 - η)

The Lagrangian is

L = u(c_t) + E_t [ Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t - ((1 - d) k_{t-1} + θ_t f(k_{t-1}))) + δ^t μ_t ((k_t - (1 - d) k_{t-1}) - υ I_ss) ]

so that we can impose

I_t = k_t - (1 - d) k_{t-1} ≥ υ I_ss
The first order conditions become

1/c_t^η + λ_t = 0

λ_t - δ λ_{t+1} θ_{t+1} α k_t^{α-1} - δ(1 - d) λ_{t+1} + μ_t - δ(1 - d) μ_{t+1} = 0
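A quick numerical sanity check of the Euler condition at the deterministic steady state (θ = 1, all time subscripts dropped, constraint slack so μ = 0) can be sketched as follows. The parameter values are illustrative, not the paper's calibration:

```python
# Deterministic steady state implied by the Euler condition
# 1 = delta * (alpha * k**(alpha-1) + 1 - d) with theta = 1 and mu = 0,
# plus the resource constraint c = k**alpha - d*k. Parameters are illustrative.
alpha, delta, d, eta = 0.36, 0.95, 0.10, 3.0
k_ss = (alpha / (1.0 / delta - (1.0 - d))) ** (1.0 / (1.0 - alpha))
c_ss = k_ss**alpha - d * k_ss              # steady-state resource constraint
lam_ss = c_ss**(-eta)                      # marginal utility at the fixed point
euler_resid = lam_ss - delta * lam_ss * (alpha * k_ss**(alpha - 1) + 1 - d)
```

At the computed k_ss the Euler residual is zero up to floating-point error, which is the kind of pointwise residual check the error tables in this section report on the Smolyak evaluation points.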
For example, the Euler equations for the neoclassical growth model can be written as:

Footnote 11: The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and k_t - (1 - d) k_{t-1} - υ I_ss = 0):

λ_t - 1/c_t = 0
c_t + I_t - θ_t k_{t-1}^α = 0
N_t - λ_t θ_t = 0
θ_t - e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1}) = 0
I_t - (k_t - (1 - d) k_{t-1}) = 0
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss) = 0

if (μ_t = 0 and k_t - (1 - d) k_{t-1} - υ I_ss ≥ 0):

λ_t - 1/c_t = 0
c_t + I_t - θ_t k_{t-1}^α = 0
N_t - λ_t θ_t = 0
θ_t - e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1}) = 0
I_t - (k_t - (1 - d) k_{t-1}) = 0
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss) = 0
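The two Boolean-gated branches can be organized as a guess-and-verify step at a single point: try the slack branch first, and switch to the binding branch when the implied investment violates the floor. The function below is a one-period sketch with a hypothetical consumption guess, not the paper's full fixed-point solver:

```python
def investment_branch(k_prev, theta, c_guess, alpha, upsilon, I_ss):
    """Return (I_t, binding) for the complementarity condition I_t >= upsilon * I_ss.
    Budget: c_t + I_t = theta_t * k_prev**alpha, where I_t = k_t - (1 - d) * k_prev."""
    investment = theta * k_prev**alpha - c_guess   # slack-branch investment (mu_t = 0)
    if investment >= upsilon * I_ss:
        return investment, False                   # constraint slack: mu_t = 0 system
    return upsilon * I_ss, True                    # constraint binds: mu_t > 0 system
```

In the full algorithm, the selection function generated by genFindRootFuncs plays this role across all gate combinations, choosing which equation-system triple applies at each evaluation point.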
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
[Table data: decision rule values (c_i, I_i, k_i) at each of the 30 evaluation points.]

Table 18: Occasionally Binding Constraints: Values at Evaluation Points, d=(1,1,1)
[Table data: estimated errors (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at each of the 30 evaluation points.]

Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
[Table data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
[Table data: decision rule values (c_i, I_i, k_i) at each of the 30 evaluation points.]

Table 21: Occasionally Binding Constraints: Values at Evaluation Points, d=(2,2,2)
[Table data: estimated errors (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at each of the 30 evaluation points.]

Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Paper 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000. URL http://www.sciencedirect.com/science/article/B6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
54
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar Parametrized Expectations Algorithm And The Mov-ing Bounds Working Papers Serie AD 2001-23 Instituto Valenciano de InvestigacionesEconAtildeşmicas SA (Ivie) August 2001 URL httpsideasrepecorgpiviwpasad2001-23html
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A Peralta-Alva and Manuel Santos Schmedders K and Judd K volume 3 chapter 9 pages517ndash556 Elsevier Science 2014
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
By Manuel S Santos and Adrian Peralta-alva Accuracy of simulations for stochastic dy-namic models Econometrica 73(6)1939ndash1976 11 2005 ISSN 1468-0262 doi 101111j1468-0262200500642x URL httphttpsdoiorg101111j1468-0262200500642x
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
55
A    Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function

    G(x) ≡ E[g(x, ε)]

then, with

    E_t x*_{t+1} = G(g(x_{t-1}, ε_t)),

we have

    M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0    ∀ (x_{t-1}, ε_t).

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t-1}, ε):

    x*_t(x_{t-1}, ε_t) ∈ R^L,    ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X̄    ∀ t > 0

    z*_t(x_{t-1}, ε_t) ≡ H_{-1} x*_{t-1}(x_{t-1}, ε_t) + H_0 x*_t(x_{t-1}, ε_t) + H_1 x*_{t+1}(x_{t-1}, ε_t)

Consequently, the exact solution has a representation given by

    x*_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t-1}, ε)

and

    E[x*_{t+1}(x_{t-1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t-1}, ε)] + (I − F)^{-1} φ ψ_c

with

    M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0    ∀ (x_{t-1}, ε_t).
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

    E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t)),
    e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

By construction the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t-1}, ε):^12

    x^p_t(x_{t-1}, ε_t) ∈ R^L,    ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄    ∀ t > 0

    z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_1 x^p_{t+1}(x_{t-1}, ε_t)

The proposed solution has a representation given by

    x^p_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

    E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I − F)^{-1} φ ψ_c

with

    e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Subtracting the two representations gives

    x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε)).

Defining

    Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε),

we have

    x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

    ‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞.

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.^13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution:

    ‖Δz_t‖ ≤ max_{x₋, ε} ‖φ M(x₋, g^p(x₋, ε), G^p(g^p(x₋, ε)), ε)‖_2

    ‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x₋, ε} ‖(I − F)^{-1} φ M(x₋, g^p(x₋, ε), G^p(g^p(x₋, ε)), ε)‖_2

^12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
^13 Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for the Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
With

    z*_t = H_{-1} x*_{t-1} + H_0 x*_t + H_1 x*_{t+1}
    z^p_t = H_{-1} x^p_{t-1} + H_0 x^p_t + H_1 x^p_{t+1}

and

    0 = M(x_{t-1}, x*_t, x*_{t+1}, ε_t)
    e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t),

we obtain

    max_{x_{t-1}, ε_t} (‖φ Δz_t‖_2)^2 = max_{x_{t-1}, ε_t} (‖φ (H_0 (x^p_t − x*_t) + H_1 (x^p_{t+1} − x*_{t+1}))‖_2)^2

with

    M(x_{t-1}, x*_t, x*_{t+1}, ε_t) = 0,

    Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1}).

When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

    (x*_t − x^p_t) ≈ (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})).

Collecting results,

    (‖φ Δz_t‖_2)^2
      = (‖φ (H_0 (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})) + H_1 (x^p_{t+1} − x*_{t+1}))‖_2)^2
      = (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − (H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) − H_1)(x*_{t+1} − x^p_{t+1}))‖_2)^2.

Approximating the derivatives using the H matrices,

    ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,    ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_1,

produces

    max_{x_{t-1}, ε_t} (‖φ Δe^p_t(x_{t-1}, ε)‖_2)^2.
B    A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t-1}, ε_t, x_t):

    E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

    E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for the state:

    E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N)

For the next state, we can compute expectations if the errors are independent:

    [ E_t(x_{t+2}(x_t) | s_t = 1) ]             [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1) ]
    [            ⋮                ] = (P ⊗ I)   [                ⋮                     ]
    [ E_t(x_{t+2}(x_t) | s_t = N) ]             [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]
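For a small concrete case, the stacked display above reduces, for scalar x and two regimes, to weighting regime-conditional expectation rules by the rows of P; in the scalar case the Kronecker product (P ⊗ I) is just P itself. The rules and numbers below are made up for illustration, not taken from the paper's model:

```python
# Two-regime sketch: mixing regime-conditional expectation functions with the
# transition matrix. Hypothetical linear rules; scalar x, so (P (x) I) = P.
P = [[0.9, 0.1],
     [0.2, 0.8]]          # p_ij = Prob(s_{t+1} = j | s_t = i); rows sum to one

cond_exp = [lambda x: 0.5 * x + 1.0,    # E(x_{t+1}(x_t) | next regime = 1)
            lambda x: 0.7 * x - 0.2]    # E(x_{t+1}(x_t) | next regime = 2)

def one_ahead(x, i):
    # E_t[x_{t+1}] from current regime i: probability-weight the regime rules
    return sum(P[i][j] * cond_exp[j](x) for j in range(2))

def two_ahead(x, i):
    # E_t[x_{t+2}]: apply the law of iterated expectations regime by regime
    # (exact here because the illustrative rules are linear)
    return sum(P[i][j] * one_ahead(cond_exp[j](x), j) for j in range(2))

e1 = one_ahead(2.0, 0)    # 0.9 * rule1(2.0) + 0.1 * rule2(2.0)
e2 = two_ahead(2.0, 0)
assert abs(e1 - (0.9 * 2.0 + 0.1 * 1.2)) < 1e-12
```

With state-dependent rules the same pattern applies, but the iterated expectation must integrate over the shock distribution rather than simply composing the rules.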
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1, with

    Prob(s_t = j | s_{t-1} = i) = p_ij.

The model system for each regime and constraint configuration is:

if (s_t = 0 and μ_t > 0 and k_t − (1 − d_0) k_{t-1} − υ I_ss = 0):

    λ_t − 1/c_t
    c_t + k_t − θ_t k^α_{t-1}
    N_t − λ_t θ_t
    θ_t − e^{ρ ln(θ_{t-1}) + ε}
    λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0))
    I_t − (k_t − (1 − d_0) k_{t-1})
    μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)

if (s_t = 0 and μ_t = 0 and k_t − (1 − d_0) k_{t-1} − υ I_ss ≥ 0):

    λ_t − 1/c_t
    c_t + k_t − θ_t k^α_{t-1}
    N_t − λ_t θ_t
    θ_t − e^{ρ ln(θ_{t-1}) + ε}
    λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0))
    I_t − (k_t − (1 − d_0) k_{t-1})
    μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)

if (s_t = 1 and μ_t > 0 and k_t − (1 − d_1) k_{t-1} − υ I_ss = 0):

    λ_t − 1/c_t
    c_t + k_t − θ_t k^α_{t-1}
    N_t − λ_t θ_t
    θ_t − e^{ρ ln(θ_{t-1}) + ε}
    λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1))
    I_t − (k_t − (1 − d_1) k_{t-1})
    μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)

if (s_t = 1 and μ_t = 0 and k_t − (1 − d_1) k_{t-1} − υ I_ss ≥ 0):

    λ_t − 1/c_t
    c_t + k_t − θ_t k^α_{t-1}
    N_t − λ_t θ_t
    θ_t − e^{ρ ln(θ_{t-1}) + ε}
    λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1))
    I_t − (k_t − (1 − d_1) k_{t-1})
    μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)
C    Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components
G: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule conditional expectations components
H: {γ, G}, collects decision rule information
{C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, R^{3N_x + N_ε} → R^{N_x}

input: (L, H_0, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by default ("normConvTol" → 10^{-10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}

Algorithm 1: nestInterp
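A minimal Python sketch of Algorithm 1's outer loop (the Mathematica NestWhile pattern), with a toy contraction standing in for doInterp and scalar test points standing in for T; all names and the toy update rule are illustrative only:

```python
# Toy version of nestInterp: repeatedly update a proposed rule until its
# values at the test points stop changing ("normConvTol" -> 1e-10).
norm_conv_tol = 1e-10
test_points = [0.0, 0.5, 1.0, 1.5]

def do_interp(rule):
    # Stand-in for Algorithm 2: returns an improved rule. This toy update
    # is a contraction toward the fixed point g(x) = x/2 + 1.
    return lambda x, r=rule: 0.5 * (r(x) + 0.5 * x + 1.0)

def not_converged(old, new):
    return max(abs(old(p) - new(p)) for p in test_points) > norm_conv_tol

rule = lambda x: 0.0          # H0: initial proposal (e.g. from the linear model)
new = do_interp(rule)
iters = 0
while not_converged(rule, new):
    rule, new = new, do_interp(new)
    iters += 1

# After convergence, new(x) is close to the fixed point x/2 + 1
assert abs(new(1.0) - 1.5) < 1e-8
```

In the paper's setting the rule being iterated is the pair of interpolated decision-rule and conditional-expectation functions, and the test points T come from a low-discrepancy sequence in the ergodic region.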
input: (L, H_i, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)})
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2: {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function({x_{t-1}, ε_t}, funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x^p_t
   for i = 1; i < κ − 1; i++ do
3:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
4: end
5: {(X_{t+1}, Z_{t+1}), …, (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ-1}}

Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [B x + (I − F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x}]
3: G(x, ε) = [B x + (I − F)^{-1} φ ψ_c ; 0_{N_x}]
4: H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs

input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
   xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
   xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
   fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
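A scalar numerical sketch of how Algorithm 10's anticipated-shock sum feeds Algorithm 8's update; all parameter values here are made up for illustration, with ψ_c = 0 and ψ_z = ψ_ε = 1:

```python
# Scalar sketch of fSumC (Algorithm 10) feeding genXtOfXtm1 (Algorithm 8).
B, phi, F = 0.5, 0.9, 0.4                 # illustrative scalar reference model
psi_c, psi_eps, psi_z = 0.0, 1.0, 1.0

def f_sum_c(Z_future):
    # fCon = sum_nu F^(nu-1) * phi * psi_z * Z_{t+nu}, nu = 1..kappa-1
    return sum(F ** (nu - 1) * phi * psi_z * Z
               for nu, Z in enumerate(Z_future, start=1))

def x_t_of_xtm1(x_prev, eps, z_t, f_con):
    inv = 1.0 / (1.0 - F)                 # scalar analogue of (I - F)^(-1)
    return (B * x_prev + inv * phi * psi_c + phi * psi_eps * eps
            + phi * psi_z * z_t + F * f_con)

Z_future = [0.2, 0.1, 0.05]               # kappa - 1 = 3 anticipated z terms
f_con = f_sum_c(Z_future)                 # 0.18 + 0.036 + 0.0072 = 0.2232
x_t = x_t_of_xtm1(1.0, 0.01, 0.3, f_con)
```

The paper's algorithms carry out the same computation symbolically and with matrices, so that the resulting x_t expression can be handed to a root finder.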
input: (L, x_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_{t+1} = B x_t + (I − F)^{-1} φ ψ_c + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1: Numerically sums the F contribution for the series.
2: S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC

input: ({C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
   {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Solves model equations at collocation points, producing the interpolation data used by Algorithm 11.
output: interpData

Algorithm 12: genInterpData

input: (C(x_{t-1}, ε_t))
1: Evaluate the model equations obeying pre and post conditions.
output: x_t or "failed"

Algorithm 13: evaluateTriple

input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, ΨMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, ΨMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, ΨMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, ΨMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, ΨMat, polys, intPolys}

Algorithm 19: genSolutionPath
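Algorithms 15 and 16 construct Smolyak collocation objects (nodes, the Ψ matrix, polynomials, and their integrals). As a simplified one-dimensional stand-in (not the paper's code, and without the sparse-grid structure), Chebyshev collocation illustrates the same node/fit/evaluate pattern:

```python
import math

# One-dimensional Chebyshev collocation: nodes, coefficient fit via discrete
# orthogonality, and evaluation of the interpolant (illustrative only).
n = 12
nodes = [math.cos(math.pi * (k + 0.5) / n) for k in range(n)]

def fit(f):
    # a_j = (2/n) * sum_k f(x_k) T_j(x_k), with T_j(x_k) = cos(j*pi*(k+0.5)/n)
    vals = [f(x) for x in nodes]
    return [2.0 / n * sum(vals[k] * math.cos(j * math.pi * (k + 0.5) / n)
                          for k in range(n)) for j in range(n)]

def evaluate(coeffs, x):
    # p(x) = a_0/2 + sum_{j>=1} a_j cos(j * arccos(x))
    theta = math.acos(max(-1.0, min(1.0, x)))
    return coeffs[0] / 2.0 + sum(c * math.cos(j * theta)
                                 for j, c in enumerate(coeffs) if j > 0)

coeffs = fit(math.exp)
err = max(abs(evaluate(coeffs, x) - math.exp(x))
          for x in [-0.95, -0.3, 0.1, 0.77])
assert err < 1e-8   # smooth functions are captured to near machine precision
```

The Smolyak construction combines such one-dimensional families across dimensions with far fewer points than a tensor grid, which is what makes the collocation step tractable for multi-state models.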
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
- Algorithm Overview
  - Single Equation System Case
  - Multiple Equation System Case
    - Approximating a Known Solution: η = 1, U(c) = Log(c)
    - Approximating an Unknown Solution: η = 3
  - A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
One can then express the x_t solving

    H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = ψ_ε ε + ψ_c + z_t

using

    x_t = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}    (2)

and

    x_{t+k+1} = B x_{t+k} + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+k+ν+1}    ∀ t, k ≥ 0.

Proof. See (Anderson, 2010).
Consequently, given a bounded time series (1) and a stable linear homogeneous system L, one can easily compute a corresponding series representation like (2) for x_t. Interestingly, the linear model H, the constant term ψ_c, and the impact of the stochastic shocks ψ_ε can be chosen rather arbitrarily: the only constraints are the existence of a saddle-point solution for the linear system and that all z-values fall in the row space of H_1. Note that since the eigenvalues of F are all less than one in magnitude, the formula supports the intuition that distant future shocks influence current conditions less than imminent shocks.
The transformation {x_t, x_{t+1}, x_{t+2}, …} → L(x_{t-1}, ε_t) → {z_t, z_{t+1}, z_{t+2}, …} is invertible: {z_t, z_{t+1}, z_{t+2}, …} → L(x_{t-1}, ε_t)^{-1} → {x_t, x_{t+1}, x_{t+2}, …}, and the inverse can be interpreted as giving the impact of "fully anticipated future shocks" on the path of x_t in a linear perfect foresight model. Note that the reference model is deterministic, so that the z_t(x_{t-1}, ε_t) functions will fully account for any stochastic aspects encountered in the models I analyze.
A key feature to note is that the series representation can accommodate arbitrarily complex time series trajectories so long as these trajectories are bounded. Later, this observation will give us some confidence in the robustness of the algorithm for constructing series representations for unknown families of functions satisfying complicated systems of dynamic non-linear equations.
2.2    The Linear Reference Model in Action: Arbitrary Bounded Time Series

Consider the following matrix constructed from "almost" arbitrary coefficients:

    [H_{-1} H_0 H_1] =
        [ 0.1   0.5   -0.5  |  1     0.4    0.9  |  1    1    0.9 ]
        [ 0.2   0.2   -0.5  |  7     0.4    0.8  |  3    2    0.6 ]     (3)
        [ 0.1  -0.25  -1.5  |  2.1   0.47   1.9  |  2.1  2.1  3.9 ]

with ψ_c = ψ_ε = 0, ψ_z = I. I have chosen these coefficients so that the linear model has a unique stable solution, and since H_1 is full rank, its column space will span the column space for all non-zero values of z_t ∈ R^3.^4 It turns out that the algorithm is very forgiving about the choice of L.
^4 The flexibility in choosing L will become more important later in the paper, where we must choose a linear reference model in a context where we will have little guidance about what might make a good choice.
The solution for this system is

    B = [ -0.0282384   -0.0552487    0.00939369 ]
        [ -0.0664679   -0.700462    -0.0718527  ]
        [ -0.163638    -1.39868      0.331726   ]

    φ = [  0.0210079    0.15727     -0.0531634  ]
        [  1.20712     -0.0553003   -0.431842   ]
        [  2.58165     -0.183521    -0.578227   ]

    F = [ -0.381174    -0.223904     0.0940684  ]
        [ -0.134352    -0.189653     0.630956   ]
        [ -0.816814    -1.00033      0.0417094  ]
It is straightforward to apply (2) to the following three bounded families of time series paths:

    x¹_t = (x_{t-1,1} + ε_t) / D_π(t),    where D_π(t) gives the t-th digit of π

    x²_t = (x_{t-1,1} / (1 + ε_t)²) (−1)^t

    x³_t = ε̃_t

where the ε̃_t are a sequence of pseudo-random draws from the uniform distribution U(−4, 4) produced subsequent to the selection of a random seed, randomseed(x_{t-1,1} + ε_t).
The three panels in Figure 1 display these three time series generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set of trajectories oscillates between two values determined by a nonlinear function of the initial conditions x_{t-1} and the shock. The third set of trajectories characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model L.^5

Though unintuitive, and perhaps lacking a compelling interpretation, the values for z_t exist, provide a series for the x_t, and are easy to compute. One can apply the calculations for any given initial condition to produce a z series exactly replicating any bounded set of trajectories. Consequently, these numerical linear algebra calculations can easily transform a z_t series into an x_t series and vice versa. Since this can be done for any path starting from any particular initial conditions, any discrete time invariant map that generates families of bounded paths has a series representation.
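The z-to-x and x-to-z transformations can be illustrated in a scalar version of the setup (a hypothetical parameterization, not the paper's three-variable example). In the scalar case B is the stable root of H_1 B² + H_0 B + H_{-1} = 0, and the scalar analogues φ = 1/(H_0 + H_1 B), F = −φ H_1 reproduce formula (2); the sketch recovers z from an arbitrary bounded path and then reconstructs x_t from the series:

```python
import math

# Scalar linear reference model: H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = z_t
Hm1, H0, H1 = 0.2, 1.0, -0.5    # illustrative coefficients

# Stable root B of H1*B^2 + H0*B + Hm1 = 0, then phi and F (scalar analogues
# of the matrices appearing in formula (2))
disc = math.sqrt(H0 * H0 - 4.0 * H1 * Hm1)
roots = [(-H0 + disc) / (2.0 * H1), (-H0 - disc) / (2.0 * H1)]
B = min(roots, key=abs)          # stable root, |B| < 1
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1                    # |F| < 1 is required for convergence
assert abs(B) < 1 and abs(F) < 1

# An arbitrary bounded path starting from x_{t-1} = 1.0
x_prev = 1.0
T = 400
x = [math.sin(0.7 * t) + 0.25 * (-1) ** t for t in range(T)]

# Forward map: z_t computed from the path
path = [x_prev] + x
z = [Hm1 * path[t] + H0 * path[t + 1] + H1 * path[t + 2] for t in range(T - 1)]

# Inverse map: the series formula x_t = B x_{t-1} + sum_nu F^nu phi z_{t+nu}
x0_series = B * x_prev + sum(F ** nu * phi * z[nu] for nu in range(T - 1))
assert abs(x0_series - x[0]) < 1e-9   # only a negligible truncation remainder
```

The same round trip works with matrices once B, φ, and F are obtained from the linear rational expectations solution of L, which is what the paper's calculations do.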
^5 Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
Figure 1: Arbitrary Bounded Time Series Paths. Three panels, each from initial conditions (10, 10, 10.5, 5, 5): "Pi Digits Path", "Oscillatory Path", and "Pseudo Random Path".
2.3    Assessing x_t Errors

The formula (2) computes the impact on the current state of fully anticipated future shocks. It characterizes the impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of z_t, or one might have imprecise values of z_t along the path.

2.3.1    Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating x_t by truncating the series at a finite number of terms:

    x̂_t ≡ B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{s=0}^k F^s φ z_{t+s}

We can bound the series approximation truncation errors. Since

    Σ_{s=k+1}^∞ F^s φ ψ_z = (I − F)^{-1} F^{k+1} φ ψ_z,

we have

    ‖x(x_{t-1}, ε) − x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I − F)^{-1} F^{k+1} φ ψ_z‖_∞ (‖H_{-1}‖_∞ + ‖H_0‖_∞ + ‖H_1‖_∞) ‖X̄‖_∞.

Figure 2 shows that this truncation error can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between x_t from the full series and a truncated series for different lengths of the truncation. The blue line shows the infinity norm of the actual difference between the x_t computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
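In a scalar version of the setup (illustrative numbers only), the bound specializes to |F|^{k+1}/(1 − |F|) · |φ| · max|z|, and one can check directly that it holds and is conservative:

```python
import math

# Scalar truncation-bound check: |x_t - xhat_t(k)| <= |F|^(k+1)/(1-|F|) * |phi| * max|z|
Hm1, H0, H1 = 0.2, 1.0, -0.5          # hypothetical scalar reference model
disc = math.sqrt(H0 ** 2 - 4 * H1 * Hm1)
B = min([(-H0 + disc) / (2 * H1), (-H0 - disc) / (2 * H1)], key=abs)
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1                          # |F| < 1

# A bounded z path; the "exact" x_t is taken as a very long partial sum
z = [math.cos(1.3 * n) for n in range(2000)]
x_prev = 0.5
full = B * x_prev + sum(F ** n * phi * z[n] for n in range(2000))

zmax = max(abs(v) for v in z)
for k in (5, 10, 20):
    trunc = B * x_prev + sum(F ** n * phi * z[n] for n in range(k + 1))
    bound = abs(F) ** (k + 1) / (1 - abs(F)) * abs(phi) * zmax
    # The bound always holds; for typical z paths it is quite conservative
    assert abs(full - trunc) <= bound + 1e-12
```

The matrix version replaces the scalar geometric tail with ‖(I − F)^{-1} F^{k+1} φ ψ_z‖, which is why the red bound lines in Figures 2, 4, and 5 sit well above the actual blue error lines.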
Figure 2: x_t Error Approximation Versus Actual Error ("Truncation Error Bound and Actual Error").
2.3.2    Path Error

We can assess the impact of perturbed values for z_t by computing the maximum discrepancy Δz_t impacting the z_t and applying the partial sum formula. One can approximate x_t using

    x̂_t ≡ B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{s=0}^∞ F^s φ (z_{t+s} + Δz_{t+s}).

Note yet again that, for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for those realizations which are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible Δz_t in the formula:

    ‖x(x_{t-1}, ε) − x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I − F)^{-1} φ ψ_z‖_∞ ‖Δz_t‖_∞.
3    Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).^6 As a result, we can use the family of conditional expectations paths along with a contrived linear reference model to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that give us an approximation for the model solutions. This section presents a familiar RBC model as an example.^7

3.1    An RBC Example

Consider the model described in (Maliar and Maliar, 2005):^8

    max u(c_t) + E_t Σ_{t=1}^∞ δ^t u(c_{t+1})

    c_t + k_t = (1 − d) k_{t-1} + θ_t f(k_{t-1})
    f(k) = k^α
    u(c) = (c^{1−η} − 1) / (1 − η)

The first order conditions for the model are

    1 / c^η_t = α δ k^{α−1}_t E_t (θ_{t+1} / c^η_{t+1})

    c_t + k_t = θ_t k^α_{t-1}

    θ_t = θ^ρ_{t-1} e^{ε_t}

It is well known that when η = d = 1 we have a closed form solution (Lettau, 2003):

    x_t(x_{t-1}, ε_t) ≡ D(x_{t-1}, ε_t) ≡ [ c_t(x_{t-1}, ε_t) ]   [ (1 − αδ) θ_t k^α_{t-1} ]
                                          [ k_t(x_{t-1}, ε_t) ] = [ αδ θ_t k^α_{t-1}      ]
                                          [ θ_t(x_{t-1}, ε_t) ]   [ θ^ρ_{t-1} e^{ε_t}     ]

^6 Economists have long used similar manipulation of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter, 2000; Koop et al., 1996).
^7 Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.
^8 Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
For mean-zero iid ε_t, we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

    D̄(x_{t+k+1}) ≡ [ E_t(c_{t+k+1} | θ_{t+k}, k_{t+k}) ]   [ (1 − αδ) k^α_{t+k} e^{σ²/2} θ^ρ_{t+k} ]
                   [ E_t(k_{t+k+1} | θ_{t+k}, k_{t+k}) ] = [ αδ k^α_{t+k} e^{σ²/2} θ^ρ_{t+k}       ]
                   [ E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k}) ]   [ e^{σ²/2} θ^ρ_{t+k}                    ]

For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule (D) and decision rule conditional expectation (D̄) can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t):

    x_t(x_{t-1}, ε_t) = D(x_{t-1}, ε_t)
    x_{t+k+1}(x_{t-1}, ε_t) = D̄(x_{t+k}(x_{t-1}, ε_t))

and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)):

    z_{t+k} ≡ H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}

Formula (2) requires that

    x(x_{t-1}, ε_t) = B [ c_{t-1} ]
                        [ k_{t-1} ] + φ ψ_ε ε_t + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(k_{t-1}, θ_{t-1}, ε_t).
                        [ θ_{t-1} ]
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

    α → 0.36,    δ → 0.95,    ρ → 0.95,    σ → 0.01

we have

    [ c_ss ]   [ 0.360408 ]        [ k_{t-1} ]   [ 0.18 ]
    [ k_ss ] = [ 0.187324 ],       [ θ_{t-1} ] = [ 1.1  ]     (4)
    [ θ_ss ]   [ 1.001    ]        [ ε_t     ]   [ 0.01 ]
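The closed-form rules D and D̄ can be iterated directly. The following sketch (pure Python, using the parameter values just given) builds the conditional expectations path from the initial conditions in Equation (4) and checks that it settles at the steady state reported above:

```python
import math

# Closed-form RBC rules (eta = d = 1): D gives time-t values, Dbar propagates
# the conditional expectations path. Parameters from the text.
alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01

def D(k_prev, th_prev, eps):
    th = th_prev ** rho * math.exp(eps)
    c = (1 - alpha * delta) * th * k_prev ** alpha
    k = alpha * delta * th * k_prev ** alpha
    return c, k, th

def Dbar(k, th):
    # E_t of (c, k, theta) one period ahead given (k_t, theta_t)
    g = math.exp(sigma ** 2 / 2) * th ** rho
    return ((1 - alpha * delta) * k ** alpha * g,
            alpha * delta * k ** alpha * g,
            g)

# Conditional expectations path from (k_{t-1}, theta_{t-1}, eps_t) = (0.18, 1.1, 0.01)
c, k, th = D(0.18, 1.1, 0.01)
for _ in range(300):
    c, k, th = Dbar(k, th)

# The path converges to the steady state values reported in the text
assert abs(c - 0.360408) < 1e-4
assert abs(k - 0.187324) < 1e-4
assert abs(th - 1.001) < 1e-3
```

Evaluating z_{t+k} along this path and feeding the result into formula (2) is exactly the construction used for Figures 3 through 5.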
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation (4).

By including enough terms, we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude for the error, B̄_n, shown in red, is again very pessimistic compared to the actual error, Z̄_n = ‖x_t − x̂_t‖_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine-precision accurate value for the time t state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves the approximation Z̄_n, shown in blue, and the approximate error B̄_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter, but still pessimistic, bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
5 10 15 20 25 30
02
04
06
08
10
Figure 3 model variable values
[Plot omitted: "RBC Model Bounds versus Actual", series B_n and Z_n]
Figure 4: RBC Known Solution, Truncation Approximate Error Versus Actual, "Almost Arbitrary" Linear Reference Model
[Plot omitted: "RBC Model Bounds versus Actual", series B_n and Z_n]
Figure 5: RBC Known Solution, Truncation Approximate Error Versus Actual, Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h^{det}_{i0}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [h^{det}_{ij}(x_{t-1}, x_t, ε_t) h^{nondet}_{ij}(x_{t+1})] = 0
This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as
h^{det}_{10}(·) = 1/c^η_t,   h^{det}_{11}(·) = αδ k^{α−1}_t,   h^{nondet}_{11}(·) = E_t (θ_{t+1}/c^η_{t+1})
h^{det}_{20}(·) = c_t + k_t − θ_t k^α_{t−1},   h^{det}_{21}(·) = 0
h^{det}_{30}(·) = ln θ_t − (ρ ln θ_{t−1} + ε_t),   h^{det}_{31}(·) = 0
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c_t

substituting N_{t+1} for θ_{t+1}/c_{t+1} in the first equation, and linearizing the augmented RBC model about the ergodic mean.
This leads to

[H_{-1} H_0 H_1] =
[ 0  0  0  0         | −7.6986   9.47963  0  0          | 0  0  −0.999  0 ]
[ 0  −1.05263  0  0  | 1  1  0  −0.547185               | 0  0  0  0 ]
[ 0  0  0  0         | 7.7063  0  1  −2.77463           | 0  0  0  0 ]
[ 0  0  0  −0.949953 | 0  0  0  1                       | 0  0  0  0 ]

with

ψ_ε = [0; 0; 1; 0],   ψ_z = I

These coefficients produce a unique stable linear solution:

B =
[ 0  0.692632  0  0.342028 ]
[ 0  0.36      0  0.177771 ]
[ 0  −5.33763  0  0        ]
[ 0  0         0  0.949953 ]

φ =
[ −0.0444237  0.658     0  0.360048 ]
[ 0.0444237   0.342     0  0.187137 ]
[ 0.342342    −5.07075  1  0        ]
[ 0           0         0  1        ]

F =
[ 0  0  −0.0443793  0 ]
[ 0  0  0.0443793   0 ]
[ 0  0  0.342       0 ]
[ 0  0  0           0 ]

ψ_c = [−3.7735; −0.197184; 2.77741; 0.0500976]
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.
Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then, with

E_t x_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)
x_t(x_{t-1}, ε_t) ∈ R^L,  ‖x_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t))
e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

Then

‖x_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{(x, ε)} ‖(I − F)^{-1} φ M(x, g^p(x, ε), G^p(g^p(x, ε)), ε)‖_2

which can be approximated by

max_{(x_{t-1}, ε_t)} ‖φ e^p_t(x_{t-1}, ε)‖_2
For proof, see Appendix A.

9This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
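Assuming the equation residuals e^p_t have already been evaluated at a collection of sample points, the bound above reduces to a maximum of weighted residual norms. A minimal NumPy sketch (function and variable names are mine, not the paper's):

```python
import numpy as np

def global_error_bound(phi, F, residuals):
    """Approximate the global decision-rule error bound: the max over
    sample points of ||(I - F)^{-1} phi e^p||_2, where each row of
    `residuals` is the model-equation residual e^p at one (x_{t-1}, eps_t).
    Illustrative sketch, not the paper's implementation."""
    n = phi.shape[0]
    A = np.linalg.solve(np.eye(n) - F, phi)      # (I - F)^{-1} phi
    norms = np.linalg.norm(residuals @ A.T, axis=1)
    return norms.max()
```

In practice the sample points would come from the ergodic region (for example, a Niederreiter sequence, as in Section 4.3).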
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) − x^p(x_{t-1}, ε_t)
Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in (2) to approximate e:

X^p(x) ≡ E[x^p(x, ε)]
x_{t+k+1} = X^p(x_{t+k}),  k = 1, ..., K

We can define functions z^p_t by

z^p_t ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t)
z^p_{t+k} ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)

e(x_{t-1}, ε_t) ≡ Σ_{ν=0}^{K} F^ν φ z^p_{t+ν}
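The finite sum defining e(x_{t-1}, ε_t) can be evaluated directly once the residuals z^p_{t+ν} along the iterated conditional expectations path are in hand. A hedged NumPy sketch (names are illustrative):

```python
import numpy as np

def local_error(phi, F, z_path):
    """Evaluate e(x_{t-1}, eps_t) = sum_{nu=0}^{K} F^nu phi z^p_{t+nu},
    where z_path[nu] is the model-equation residual at step nu of the
    iterated conditional-expectations path. Illustrative sketch."""
    n = phi.shape[0]
    e = np.zeros(n)
    F_nu = np.eye(n)                 # F^0
    for z in z_path:                 # K + 1 residual terms
        e += F_nu @ (phi @ z)
        F_nu = F_nu @ F
    return e
```

Because F has eigenvalues inside the unit circle for a stable linear reference model, later terms are discounted, which is why the first few terms carry most of the correction.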
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015
producing the following 10 model specifications
α        δ          η    ρ          σ
0.35     0.92875    1    0.957188   0.00875
0.275    0.9725     1    0.906875   0.01375
0.375    0.8675     1    0.924375   0.01125
0.225    0.94625    1    0.891563   0.00625
0.325    0.91125    1    0.944063   0.006875
0.2625   0.922187   1    0.98125    0.011875
0.3625   0.887187   1    0.85875    0.014375
0.2125   0.965938   1    0.965938   0.009375
0.3125   0.860938   1    0.878438   0.008125
0.2375   0.904688   1    0.933125   0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.
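The regression statistics reported in the figures can be reproduced with ordinary least squares. The sketch below (illustrative, not the paper's code) returns the constant, slope, and R² for one pairing of actual and predicted errors:

```python
import numpy as np

def error_regression(actual, predicted):
    """OLS of actual errors on predicted errors, e_i = a + b*ehat_i + u_i,
    returning (a, b, R^2); an informative error formula gives a near 0,
    b near 1, and R^2 near one. Illustrative sketch."""
    X = np.column_stack([np.ones_like(predicted), predicted])
    coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
    fitted = X @ coef
    ss_res = np.sum((actual - fitted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return coef[0], coef[1], 1.0 - ss_res / ss_tot
```

In the experiment this would be run once per (decision rule, reference model) pairing over the 30 representative points, yielding the 100 (α, β, R²) triples plotted in Figures 6 through 9.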
10I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since θ_t is determined by a backward-looking equation.
[Figure omitted: four scatter panels of (σ², R²) for the c error regressions, K = 0, 1, 10, 30]
Figure 6: c_t (σ², R²)
[Figure omitted: four scatter panels of (σ², R²) for the k error regressions, K = 0, 1, 10, 30]
Figure 7: k_t (σ², R²)
[Figure omitted: four scatter panels of (α, β) for the c error regressions, K = 0, 1, 10, 30]
Figure 8: c_t (α, β)
[Figure omitted: four scatter panels of (α, β) for the k error regressions, K = 0, 1, 10, 30]
Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])}|(b, c)

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))

represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d is a function for choosing among the candidate x_t(x_{t-1}, ε_t) values: it uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we have a collection of equation systems and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
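The gatekeeper logic can be sketched in a few lines. In this hypothetical Python rendering (not the paper's Mathematica code), each system C_i is a (precondition, solver, post-condition) triple, and the selection function d simply returns the first admissible candidate:

```python
def solve_gated(systems, x_prev, eps):
    """Select x_t from gated equation systems C_i = (b_i, solver_i, c_i):
    skip systems whose precondition b_i fails, solve the rest, discard
    candidates failing the post-condition c_i, then pick one.
    All names are illustrative."""
    candidates = []
    for pre, solve, post in systems:
        if not pre(x_prev, eps):            # gate b_i: no need to solve
            continue
        xt = solve(x_prev, eps)             # root of M_i(x_{t-1}, eps, x_t, ...)
        if post(x_prev, eps, xt):           # gate c_i: acceptable solution?
            candidates.append(xt)
    assert candidates, "no system produced an admissible x_t"
    return candidates[0]                    # d(): here, the first admissible
```

Because each (pre, solve, post) triple is independent, the candidate solves can run in parallel, which is the parallelism opportunity noted below.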
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as for the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations of Section 5.1 are all deterministic.

There are a number of new steps. One must specify a linear reference model. One must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms yields more accuracy when the decision rule is accurate but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will initially be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates; a description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t)

Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t)

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)]

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.
We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [x_{t-1}; x^p(x_{t-1}, ε); X^p(x^p(x_{t-1}, ε))] + ψ_ε ε_t + ψ_c

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)]
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0

• Using the z^p(x_{t-1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^{∞} F^ν φ Z(x_{t+ν})

E[X_{t+1}] = B X_t + (I − F)^{-1} φψ_c + Σ_{ν=1}^{∞} F^{ν−1} φ Z(x_{t+ν})  ∀t, k ≥ 0

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε). Repeat loop.
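Structurally, the loop is a fixed-point iteration on the decision rule coefficients. A generic sketch, where the `update` callable stands in for the steps above (build z^p, form the series for x_t, re-solve M at the collocation points); all names are illustrative:

```python
def solve_fixed_point(update, x0, tol=1e-10, max_iter=500):
    """Iterate the decision-rule update x^{p'} = T(x^p) until the
    coefficient vector stops changing. Illustrative scaffolding only;
    `update` encapsulates one pass of the algorithm loop."""
    x = x0
    for _ in range(max_iter):
        x_new = update(x)
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("decision-rule iteration did not converge")
```

The series-based constraints enter through `update`: they keep each iterate's implied paths bounded, which is what improves reliability relative to iterating on the Euler equations alone.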
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation, and Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})}, collects the decision rule conditional expectations components
H: {γ, G}, collects decision rule information
{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])}|(b, c): the equation system, R^{3N_x+N_ε+N_x} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot genLilXkZkFunc
fSumC genXtOfXtm1 genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc
Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])}|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs: The function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function; each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).
makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
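As a stand-in for the Smolyak machinery, the following one-dimensional sketch illustrates the makeInterpFuncs pattern: take decision rule values already solved at collocation points, fit polynomial coefficients, and return a callable approximation. The Chebyshev basis here is a simplified, illustrative substitute for the paper's an-isotropic Smolyak interpolant:

```python
import numpy as np

def fit_decision_rule(points, values, degree):
    """Fit Chebyshev coefficients to decision-rule values at collocation
    points and return a callable approximation (a 1-D stand-in for the
    Smolyak interpolating functions)."""
    coefs = np.polynomial.chebyshev.chebfit(points, values, degree)
    return lambda x: np.polynomial.chebyshev.chebval(x, coefs)
```

Because each collocation point's solve is independent, the interpolation data can be generated in parallel, as the text notes.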
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of coordinates for constructing function approximations.
[Figure omitted: left panel, ergodic values for k_t and θ_t; right panel, the same points in the transformed coordinates]

Figure 11: The Ergodic Values for k_t, θ_t
X = [0.18853; 1.00466; 0]

σ_X = [0.0112443; 0.0387807; 0.01]

V =
[ −0.707107  −0.707107  0 ]
[ −0.707107  0.707107   0 ]
[ 0          0          1 ]

maxU = [3.02524; 0.199124; 3]

minU = [−3.16879; −0.17629; −3]
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing K. Generally, increasing K increases the accuracy of the solution, and K has more effect when the Smolyak grid is more densely populated with collocation points.
(k_i, θ_i, ε_i) for i = 1, ..., 30:
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2: RBC Known Solution, Model Evaluation Points, d = (1,1,1)
(c_i, k_i, θ_i) for i = 1, ..., 30:
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3: RBC Known Solution, Values at Evaluation Points, d = (1,1,1)
(ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|) for i = 1, ..., 30:
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4: RBC Known Solution, Actual Errors, d = (1,1,1)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) for i = 1, ..., 30:
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5: RBC Known Solution, Error Approximations, d = (1,1,1)
(k_i, θ_i, ε_i) for i = 1, ..., 30:
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6: RBC Known Solution, Model Evaluation Points, d = (2,2,2)
(k_i, θ_i, ε_i) for i = 1, ..., 30:
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7: RBC Known Solution, Values at Evaluation Points, d = (2,2,2)
(ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|) for i = 1, ..., 30:
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution, Actual Errors, d = (2,2,2)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) for i = 1, ..., 30:
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution, Error Approximations, d = (2,2,2)
K          d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries  −6.51367      −12.2771      −16.8947
0          −6.0793       −6.26833      −6.29843
1          −6.00805      −6.24202      −6.49835
2          −6.52219      −7.43876      −7.04785
3          −6.57578      −8.03864      −8.32589
4          −6.62831      −9.1522       −9.49314
5          −6.77305      −10.5516      −10.4247
6          −6.38499      −11.6399      −11.455
7          −6.63849      −11.3627      −13.0213
8          −6.38735      −11.7288      −13.9976
9          −6.46759      −11.9826      −15.3372
10         −6.45966      −12.1345      −15.4157
10         −6.77776      −11.5025      −15.8211
15         −6.67778      −12.4593      −15.8899
20         −6.40802      −11.7296      −17.1652
25         −6.53265      −12.1441      −17.1518
30         −6.81949      −11.6097      −16.3748
35         −6.33508      −12.1446      −16.6963
40         −6.52442      −11.4042      −15.6302
Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point, and the final table shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
(k_i, θ_i, ε_i) for i = 1, ..., 30:
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11: RBC Approximated Solution, Model Evaluation Points, d = (1,1,1)
(c_i, k_i, θ_i) for i = 1, ..., 30:
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12: RBC Approximated Solution, Values at Evaluation Points, d = (1,1,1)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) for i = 1, ..., 30:
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
40
[Table 14 data: 30 rows of model evaluation points (k_i, θ_i, ε_i); the numeric entries are garbled in the extracted source.]
Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
[Table 15 data: 30 rows of approximated solution values (c_i, k_i, θ_i); the numeric entries are garbled in the extracted source.]
Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
[Table 16 data: 30 rows of error approximations (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|); the numeric entries are garbled in the extracted source.]
Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:
$$I_t \ge \upsilon I_{ss}$$
$$\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$
subject to
$$c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$
with
$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}$$
$$\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^t \lambda_t\left(c_t + k_t - \left((1-d)k_{t-1} + \theta_t f(k_{t-1})\right)\right) + \delta^t \mu_t\left((k_t - (1-d)k_{t-1}) - \upsilon I_{ss}\right)$$
so that we can impose
$$I_t = \left(k_t - (1-d)k_{t-1}\right) \ge \upsilon I_{ss}$$
The first order conditions become
$$\frac{1}{c_t^{\eta}} + \lambda_t = 0$$
$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} = 0$$
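Under log utility (η = 1, matching the λ_t = 1/c_t condition used below), the Euler equation pins down the deterministic steady state, and with it the investment floor υI_ss. A minimal sketch; the parameter values are illustrative assumptions, not the paper's calibration:

```python
# Steady state of the RBC model above (log utility, theta_ss = 1).
# Parameter values are illustrative, not taken from the paper.
alpha, delta, d = 0.36, 0.95, 0.10

# Euler equation at the steady state: 1 = delta * (alpha*k**(alpha-1) + 1 - d)
k_ss = (alpha / (1.0 / delta - (1.0 - d))) ** (1.0 / (1.0 - alpha))
i_ss = d * k_ss                 # steady-state investment: the floor is upsilon*i_ss
c_ss = k_ss ** alpha - i_ss     # resource constraint at the steady state

# residual of the steady-state Euler equation (should be ~0 by construction)
resid = 1.0 - delta * (alpha * k_ss ** (alpha - 1.0) + 1.0 - d)
```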
For example, the Euler equations for the neoclassical growth model with this constraint can be written as follows.

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If $\mu_t > 0$ and $k_t - (1-d)k_{t-1} - \upsilon I_{ss} = 0$ (the constraint binds), the system is
$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1}\right)\\
&I_t - \left(k_t - (1-d)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $\mu_t = 0$ and $k_t - (1-d)k_{t-1} - \upsilon I_{ss} \ge 0$ (the constraint is slack), the same equation system applies with $\mu_t = 0$.
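The two guarded systems above can be organized as a guess-and-verify step: solve one branch, check its guard, and switch branches when the guard fails. A minimal sketch of that branch selection, where `i_unconstrained` is a hypothetical stand-in for solving the μ_t = 0 system (the paper's algorithm instead solves the full equation system under each guard):

```python
# Guard logic for the two branches of the constrained system.
upsilon, i_ss = 0.975, 0.38   # illustrative values, not from the paper

def i_unconstrained(k_lag, theta):
    # hypothetical slack-branch (mu_t = 0) investment rule
    return 0.1 * k_lag * theta

def investment(k_lag, theta):
    floor = upsilon * i_ss
    i_slack = i_unconstrained(k_lag, theta)
    binding = i_slack < floor        # mu_t > 0 only when the floor binds
    # binding branch pins I_t at the floor; mu_t then adjusts in the FOCs
    return (floor if binding else i_slack), binding
```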
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error component by component.
[Table 17 data: 30 rows of model evaluation points (k_i, θ_i, ε_i); the numeric entries are garbled in the extracted source.]
Table 17: Occasionally Binding Constraints Model, Evaluation Points, d=(1,1,1)
[Table 18 data: 30 rows of approximated solution values (c_i, I_i, k_i); the numeric entries are garbled in the extracted source.]
Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(1,1,1)
[Table 19 data: 30 rows of error approximations (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|); the numeric entries are garbled in the extracted source.]
Table 19: Occasionally Binding Constraints Model, Error Approximations, d=(1,1,1)
[Table 20 data: 30 rows of model evaluation points (k_i, θ_i, ε_i); the numeric entries are garbled in the extracted source.]
Table 20: Occasionally Binding Constraints Model, Evaluation Points, d=(2,2,2)
[Table 21 data: 30 rows of approximated solution values (c_i, I_i, k_i); the numeric entries are garbled in the extracted source.]
Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(2,2,2)
[Table 22 data: 30 rows of error approximations (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|); the numeric entries are garbled in the extracted source.]
Table 22: Occasionally Binding Constraints Model, Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft November 2007; April 2009.
Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.
Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc, February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.
Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm And The Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A Peralta-Alva and Manuel Santos Schmedders K and Judd K volume 3 chapter 9 pages517ndash556 Elsevier Science 2014
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrián Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x.
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function
$$G(x) \equiv E\left[g(x, \varepsilon)\right]$$
Then with
$$E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$
we have
$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding $z^*_t(x_{t-1}, \varepsilon)$:
$$x^*_t(x_{t-1}, \varepsilon_t) \in R^L, \qquad \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \quad \forall t > 0$$
$$z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1}x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t)$$
Consequently, the exact solution has a representation given by
$$x^*_t(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_{\varepsilon}\varepsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty}F^{\nu}\phi\, z^*_{t+\nu}(x_{t-1}, \varepsilon)$$
and
$$E\left[x^*_{t+1}(x_{t-1}, \varepsilon)\right] = Bx^*_t + \sum_{\nu=0}^{\infty}F^{\nu}\phi\, E\left[z^*_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I-F)^{-1}\phi\psi_c$$
with
$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
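The representation can be checked numerically in the scalar case: for any bounded sequence, computing z_t from the H matrices and resumming the series recovers x_t up to a truncation term of order F^K. A sketch, assuming the conventions φ = (H_0 + H_1 B)^{-1} and F = −φH_1 (under which the identity holds exactly for any bounded path):

```python
# Scalar check: x_t = B x_{t-1} + sum_nu F^nu phi z_{t+nu}
import random

Hm, H0, Hp = 1.0, -2.5, 1.0   # Hp*b^2 + H0*b + Hm = 0 has roots 0.5 and 2.0
B = 0.5                        # stable root
phi = 1.0 / (H0 + Hp * B)      # assumed convention: phi = (H0 + Hp*B)^(-1)
F = -phi * Hp                  # assumed convention: F = -phi*Hp, |F| < 1

random.seed(0)
x = [random.uniform(-1.0, 1.0) for _ in range(220)]          # bounded path
z = [Hm * x[t - 1] + H0 * x[t] + Hp * x[t + 1] for t in range(1, 219)]
# z[j] holds z_t for t = j + 1

t, K = 1, 150                  # reconstruct x_1 with K series terms
series = sum(F ** nu * phi * z[t - 1 + nu] for nu in range(K))
x_rec = B * x[t - 1] + series
err = abs(x_rec - x[t])        # truncation error is O(F^K)
```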
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$ so that
$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$
By construction the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and L, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²
$$x^p_t(x_{t-1}, \varepsilon_t) \in R^L, \qquad \|x^p_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \quad \forall t > 0$$
$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1}x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$$
The proposed solution has a representation given by
$$x^p_t(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_{\varepsilon}\varepsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty}F^{\nu}\phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$
and
$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = Bx^p_t + \sum_{\nu=0}^{\infty}F^{\nu}\phi\, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I-F)^{-1}\phi\psi_c$$
with
$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$
Differencing the two representations,
$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty}F^{\nu}\phi\left(z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right)$$
Defining
$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)$$
gives
$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty}F^{\nu}\phi\,\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$
$$\left\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_{\infty} \le \sum_{\nu=0}^{\infty}\left\|F^{\nu}\phi\right\|\,\left\|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\right\|_{\infty}$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\left\|\Delta z_t\right\| \le \max_{x_{-},\varepsilon}\left\|\phi\, M\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$
$$\left\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_{\infty} \le \max_{x_{-},\varepsilon}\left\|(I-F)^{-1}\phi\, M\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$
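In the scalar case the bound above reduces to |(1 − F)^{-1}φ| times the largest model-equation residual over the test region. A sketch with a hypothetical residual function standing in for M evaluated at a proposed rule:

```python
# Scalar sketch of the global error bound:
# |x* - x^p|_inf <= max over (x, eps) of |(1 - F)^(-1) * phi * e^p(x, eps)|.
phi, F = -0.5, 0.5   # illustrative linear-reference-model quantities

def residual(x_lag, eps):
    # hypothetical e^p_t(x_{t-1}, eps): model residual at the proposed rule
    return 1e-4 * (x_lag ** 2 - 0.3 * eps)

# test points covering the (x, eps) region of interest
grid = [(x / 10.0, e / 10.0) for x in range(-10, 11) for e in range(-3, 4)]
bound = max(abs(phi / (1.0 - F) * residual(x, e)) for x, e in grid)
```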
$$z^*_t = H_{-}x^*_{t-1} + H_0 x^*_t + H_{+}x^*_{t+1}$$
$$z^p_t = H_{-}x^p_{t-1} + H_0 x^p_t + H_{+}x^p_{t+1}$$
$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t)$$
$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$
$$\max_{x_{t-1},\varepsilon_t}\left(\|\phi\Delta z_t\|_2\right)^2 = \max_{x_{t-1},\varepsilon_t}\left(\left\|\phi\left(H_0(x^p_t - x^*_t) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$
with
$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0$$
$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\left(x^*_t - x^p_t\right) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^*_{t+1} - x^p_{t+1}\right)$$
When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular we can write
$$\left(x^*_t - x^p_t\right) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^*_{t+1} - x^p_{t+1}\right)\right)$$
Collecting results, and using $\|{-v}\|_2 = \|v\|_2$,
$$\left(\|\phi\Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_{+}\right)\left(x^*_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2$$
Approximating the derivatives using the H matrices,
$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}$$
makes the coefficient on $\left(x^*_{t+1} - x^p_{t+1}\right)$ vanish, and produces
$$\max_{x_{t-1},\varepsilon_t}\left(\left\|\phi\,\Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2\right)^2$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:
$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)$$
$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)$$
If we know the current state, we can use the known conditional expectation function for the state:
$$E_t(x_{t+1}(x_t) \mid s_t = 1),\; \ldots,\; E_t(x_{t+1}(x_t) \mid s_t = N)$$
For the next state we can compute expectations if the errors are independent:
$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I)\begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
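A minimal two-regime sketch of the probability-weighted expectation $E_t[x_{t+1}] = \sum_j p_{ij} E_t(x_{t+1} \mid s_{t+1} = j)$, with hypothetical per-regime expectation functions (the transition matrix and rules are illustrative, not from the paper):

```python
# Conditional expectation across regimes with constant transition probabilities.
P = [[0.9, 0.1],
     [0.2, 0.8]]   # illustrative transition matrix, rows sum to 1

# hypothetical per-regime expectation functions E_t(x_{t+1}(x_t) | s_{t+1} = j)
cond_exp = [lambda x: 0.5 * x + 0.1,    # regime 0 (e.g. depreciation d0)
            lambda x: 0.6 * x - 0.05]   # regime 1 (e.g. depreciation d1)

def expected_next(x_t, s_t):
    # E_t[x_{t+1}] = sum_j p_{s_t, j} * E_t(x_{t+1}(x_t) | s_{t+1} = j)
    return sum(P[s_t][j] * cond_exp[j](x_t) for j in range(len(P)))
```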
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, and
$$\text{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
If $s_t = 0$, $\mu_t > 0$, and $k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} = 0$, the system is
$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d_0)\lambda_{t+1} + \delta(1-d_0)\mu_{t+1}\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $s_t = 0$, $\mu_t = 0$, and $k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \ge 0$, the same system applies with $\mu_t = 0$. The $s_t = 1$ cases are identical with $d_1$ in place of $d_0$ throughout.
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, R^{3N_x+N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10^{−10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
Algorithm 1 nestInterp
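The NestWhile structure of Algorithm 1 amounts to iterating an interpolation operator to a fixed point under a sup-norm stopping rule at the test points. A Python sketch, where `do_interp` is a toy contraction standing in for the paper's doInterp step:

```python
# Fixed-point iteration with a sup-norm convergence test at test points,
# mirroring NestWhile(..., notConvergedAt(v, T)) in Algorithm 1.
def nest_interp(do_interp, h0, test_points, tol=1e-10, max_iter=500):
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        # sup-norm of the change in the rule at the test points
        gap = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if gap < tol:
            break
    return h

# toy contraction: each pass moves the rule halfway toward x -> 0.5*x
def do_interp(h):
    return lambda x, h=h: 0.5 * (h(x) + 0.5 * x)

rule = nest_interp(do_interp, lambda x: x, [0.0, 0.5, 1.0])
```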
input: (L, H_i, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)})
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2 doInterp
input: (L, H, κ, {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5 funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4 genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ − 1; i++ do: {X_{t+i}, Z_{t+i}} = G(X_{t+i−1})
4 end
5 {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5 genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [Bx + (I − F)^{−1}φψ_c + φψ_ε ε; 0_{N_x}]
3 G(x) = [Bx + (I − F)^{−1}φψ_c; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6 genBothX0Z0Funcs
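A scalar sketch of the initial proposal in Algorithm 6, assuming the φψ_ε ε loading from the series representation and illustrative parameter values (not the paper's):

```python
# Initial proposed rule from the linear reference model:
# gamma(x, eps) = B*x + (I - F)^(-1)*phi*psi_c + phi*psi_eps*eps,
# with the z components started at zero.
B, F, phi, psi_c, psi_eps = 0.5, 0.5, -0.5, 0.02, 1.0  # illustrative

def gamma0(x_lag, eps):
    x0 = B * x_lag + phi * psi_c / (1.0 - F) + phi * psi_eps * eps
    z0 = 0.0   # proposed z component starts at zero
    return x0, z0
```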
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}}); xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon); xtp1Vals = genXtp1OfXt(L, xtVals, fCon); fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7 genLilXkZkFunc
input: (L, x_{t−1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F·fCon
output: x_t
Algorithm 8 genXtOfXtm1
input: (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_{t+1} = Bx_t + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_{t+1}
Algorithm 9 genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10 fSumC
input: ({C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S); {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11 makeInterpFuncs
input: ({C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points, producing interpolation data. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
output: interpData
Algorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 17: genSolutionErrXtEps
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 18: genSolutionErrXtEpsZero
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 19: genSolutionPath
The solution for this system is

B = [ -0.0282384   -0.0552487    0.00939369
      -0.0664679   -0.700462    -0.0718527
      -0.163638    -1.39868      0.331726  ]

φ = [  0.0210079    0.15727     -0.0531634
       1.20712     -0.0553003   -0.431842
       2.58165     -0.183521    -0.578227  ]

F = [ -0.381174    -0.223904     0.0940684
      -0.134352    -0.189653     0.630956
      -0.816814    -1.00033      0.0417094 ]
It is straightforward to apply Formula 2 to the following three bounded families of time series paths:

x_{1,t} = D_π(t + x_{1,t-1} + ε_t), where D_π(s) gives the s-th digit of π;

x_{2,t} = (x_{2,t-1}(1 + ε_t)/2) (−1)^t;

x_{3,t} = ε̃_t,

where the ε̃_t are a sequence of pseudo-random draws from the uniform distribution U(−4, 4) produced subsequent to the selection of a random seed, randomSeed(x_{t-1} + ε_t).
The three panels in Figure 1 display these three time series, generated for a particular initial state vector and shock value. The first set of trajectories characterizes a function of the digits in the decimal representation of π. The second set of trajectories oscillates between two values determined by a nonlinear function of the initial conditions x_{t-1} and the shock. The third set of trajectories characterizes a sequence of uniformly distributed random numbers based on a seed determined by a nonlinear function of the initial conditions and the shock. These paths were chosen to emphasize that continuity is not important for the existence of the series representation, that the trajectories need not converge to a fixed point, and that they need not be produced by some linear rational expectations solution or by the iteration of a discrete-time map. The spanning condition and the boundedness of the paths are sufficient conditions for the existence of the series representation based on the linear reference model.5
Though unintuitive and perhaps lacking a compelling interpretation, the values for z_t exist, provide a series for the x_t, and are easy to compute. One can apply the calculations for any given initial condition to produce a z series exactly replicating any bounded set of trajectories. Consequently, these numerical linear algebra calculations can easily transform a z_t series into an x_t series and vice versa. Since this can be done for any path starting from any particular initial conditions, any discrete time-invariant map that generates families of bounded paths has a series representation.
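The z_t ↔ x_t transformation is easy to verify numerically. The sketch below (Python/NumPy, not the paper's Mathematica implementation) uses a hypothetical scalar reference model H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = z_t with one stable root λ1 and one unstable root λ2, for which the series weights have the closed forms B = λ1, F = 1/λ2, and φ = −1/(H_1 λ2). It computes the z_t implied by an arbitrary bounded path and then recovers the initial x_t from the series:

```python
import numpy as np

# Hypothetical scalar linear reference model: Hm1*x[t-1] + H0*x[t] + H1*x[t+1] = z[t]
lam1, lam2 = 0.5, 2.0                 # one stable, one unstable root (saddle path)
H1 = 1.0
H0 = -H1 * (lam1 + lam2)
Hm1 = H1 * lam1 * lam2
B, F = lam1, 1.0 / lam2               # backward weight and forward decay
phi = -1.0 / (H1 * lam2)

rng = np.random.default_rng(0)
T = 200
x = rng.uniform(-1.0, 1.0, T)         # an arbitrary bounded path ...
x_init = 0.3                          # ... starting from state x_{t-1} = x_init
xs = np.concatenate(([x_init], x, [0.0]))            # path taken as zero after T
z = Hm1 * xs[:-2] + H0 * xs[1:-1] + H1 * xs[2:]      # implied z_t, t = 0..T-1

# the series representation recovers the first element of the arbitrary path
x0_series = B * x_init + sum(F**s * phi * z[s] for s in range(T))
assert abs(x0_series - x[0]) < 1e-9
```

Because F = 1/λ2 has modulus below one, the weights decay geometrically, which is what makes the truncation analysis of the next subsection work.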
5 Although potentially useful in some contexts, this paper will not investigate representations for families of unbounded but slowly diverging trajectories.
[Figure 1: three panels, "Pi Digits Path", "Oscillatory Path", and "Pseudo Random Path", each showing 50 periods of trajectories from the same initial state vector and shock value.]

Figure 1: Arbitrary Bounded Time Series Paths
2.3 Assessing x_t Errors

Formula 2 computes the impact on the current state of fully anticipated future shocks, and it characterizes that impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of z_t, or one might have imprecise values of z_t along the path.
2.3.1 Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating x_t by truncating the series at a finite number of terms:

x̂_t(x_{t-1}, ε, k) ≡ B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{s=0}^{k} F^s φψ_z z_{t+s}

We can bound the series approximation truncation error. Since

Σ_{s=k+1}^{∞} F^s φψ_z = (I − F)^{-1} F^{k+1} φψ_z,

we have

‖x(x_{t-1}, ε) − x̂(x_{t-1}, ε, k)‖_∞ ≤ ‖(I − F)^{-1} F^{k+1} φψ_z‖_∞ (‖H_{-1}‖_∞ + ‖H_0‖_∞ + ‖H_1‖_∞) ‖X̄‖_∞
Figure 2 shows that this truncation error bound can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between the x_t from the full series and from a truncated series, for different truncation lengths. The blue line shows the infinity norm of the actual difference between the x_t computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
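Continuing the hypothetical scalar reference model from above, the truncation bound can be compared with the actual truncation error; in the scalar case the bound reduces to F^{k+1}(1 − F)^{-1}|φ| max|z|:

```python
import numpy as np

lam1, lam2, H1 = 0.5, 2.0, 1.0        # hypothetical saddle-path scalar model
H0, Hm1 = -H1 * (lam1 + lam2), H1 * lam1 * lam2
B, F, phi = lam1, 1.0 / lam2, -1.0 / (H1 * lam2)

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 400)       # arbitrary bounded path
x_init = 0.3
xs = np.concatenate(([x_init], x, [0.0]))
z = Hm1 * xs[:-2] + H0 * xs[1:-1] + H1 * xs[2:]

full = B * x_init + sum(F**s * phi * z[s] for s in range(len(z)))
for k in (0, 5, 10, 20):
    trunc = B * x_init + sum(F**s * phi * z[s] for s in range(k + 1))
    bound = F**(k + 1) / (1.0 - F) * abs(phi) * np.max(np.abs(z))
    # the bound always holds, and is typically loose, as the figure suggests
    assert abs(full - trunc) <= bound + 1e-12
```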
[Figure 2: "Truncation Error Bound and Actual Error", plotted against the number of terms retained.]

Figure 2: x_t Error Approximation Versus Actual Error
2.3.2 Path Error

We can assess the impact of perturbed values for z_t by computing the maximum discrepancy Δz_t impacting the z_t and applying the partial sum formula. One can approximate x_t using

x̂_t ≡ B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{s=0}^{∞} F^s φ (z_{t+s} + Δz_{t+s})

Note yet again that, for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for realizations that are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible Δz_t in the formula:

‖x(x_{t-1}, ε) − x̂(x_{t-1}, ε)‖_∞ ≤ ‖(I − F)^{-1} φψ_z‖_∞ ‖Δz_t‖_∞
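The same hypothetical scalar reference model illustrates the path-error bound: perturbing every z_t by at most max|Δz| moves x_t by no more than (1 − F)^{-1}|φ| max|Δz|:

```python
import numpy as np

lam1, lam2, H1 = 0.5, 2.0, 1.0        # hypothetical saddle-path scalar model
B, F, phi = lam1, 1.0 / lam2, -1.0 / (H1 * lam2)

rng = np.random.default_rng(2)
z = rng.uniform(-1.0, 1.0, 300)       # "true" forcing path
dz = rng.uniform(-1e-3, 1e-3, 300)    # perturbations, |dz| <= 1e-3
x_init = 0.3
x_exact = B * x_init + sum(F**s * phi * z[s] for s in range(300))
x_pert = B * x_init + sum(F**s * phi * (z[s] + dz[s]) for s in range(300))
bound = abs(phi) / (1.0 - F) * np.max(np.abs(dz))
assert abs(x_pert - x_exact) <= bound + 1e-15
```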
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time-invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).6 As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time-invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that gives us an approximation for the model solutions. This section presents a familiar RBC model as an example.7
3.1 An RBC Example

Consider the model described in (Maliar and Maliar 2005):8

max u(c_t) + E_t Σ_{s=1}^{∞} δ^s u(c_{t+s})

subject to

c_t + k_t = (1 − d) k_{t-1} + θ_t f(k_{t-1}),  f(k) = k^α

u(c) = (c^{1−η} − 1) / (1 − η)

The first-order conditions for the model are

1/c_t^η = αδ k_t^{α−1} E_t(θ_{t+1} / c_{t+1}^η)

c_t + k_t = θ_t k_{t-1}^α

θ_t = θ_{t-1}^ρ e^{ε_t}

It is well known that when η = d = 1 we have a closed-form solution (Lettau 2003):

x_t(x_{t-1}, ε_t) ≡ D(x_{t-1}, ε_t) ≡ [c_t(x_{t-1}, ε_t); k_t(x_{t-1}, ε_t); θ_t(x_{t-1}, ε_t)] = [(1 − αδ) θ_t k_{t-1}^α;  αδ θ_t k_{t-1}^α;  θ_{t-1}^ρ e^{ε_t}]
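The closed-form rule is easy to check numerically. The sketch below (a Python stand-in; the paper's own implementation is in Mathematica) verifies that the rule satisfies the budget constraint and the Euler equation, using the fact that θ_{t+1}/c_{t+1} is deterministic when η = d = 1:

```python
import numpy as np

# Closed-form decision rule for eta = d = 1 (Lettau 2003), paper's parameters
alpha, delta, rho = 0.36, 0.95, 0.95

def D(k_lag, th_lag, eps):
    th = th_lag**rho * np.exp(eps)
    y = th * k_lag**alpha
    return (1.0 - alpha * delta) * y, alpha * delta * y, th   # c_t, k_t, theta_t

c, k, th = D(0.18, 1.1, 0.01)
# budget constraint c_t + k_t = theta_t k_{t-1}^alpha holds
assert abs(c + k - th * 0.18**alpha) < 1e-12
# Euler equation: with c_{t+1} = (1 - alpha*delta) theta_{t+1} k_t^alpha, the
# ratio theta_{t+1}/c_{t+1} is deterministic, so the expectation drops out
lhs = 1.0 / c
rhs = alpha * delta * k**(alpha - 1.0) / ((1.0 - alpha * delta) * k**alpha)
assert abs(lhs - rhs) < 1e-9
```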
6 Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter 2000; Koop et al. 1996).

7 Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.

8 Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
For mean-zero iid ε_t we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

D̄(x_{t+k}) ≡ [E_t(c_{t+k+1} | θ_{t+k}, k_{t+k});  E_t(k_{t+k+1} | θ_{t+k}, k_{t+k});  E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k})] = [(1 − αδ) k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ;  αδ k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ;  e^{σ²/2} θ_{t+k}^ρ]
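The e^{σ²/2} factor in D̄ is just E[e^ε] for ε ~ N(0, σ²). A quick sketch checks it with Gauss-Hermite quadrature, the same kind of precomputed-expectation calculation used later for the collocation approximations:

```python
import numpy as np

# E[e^eps] = e^{sigma^2/2} for eps ~ N(0, sigma^2), via Gauss-Hermite quadrature
sigma = 0.01
nodes, weights = np.polynomial.hermite_e.hermegauss(11)   # probabilists' rule
Ee = (weights / np.sqrt(2.0 * np.pi)) @ np.exp(sigma * nodes)
assert abs(Ee - np.exp(sigma**2 / 2.0)) < 1e-12

# hence E_t[theta_{t+k+1} | theta_{t+k}] = e^{sigma^2/2} theta_{t+k}^rho, as in Dbar
rho, theta = 0.95, 1.1
assert abs(Ee * theta**rho - np.exp(sigma**2 / 2.0) * theta**rho) < 1e-12
```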
For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule D and the decision rule conditional expectation D̄ can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t):

x_t(x_{t-1}, ε_t) = D(x_{t-1}, ε_t)
x_{t+k+1}(x_{t-1}, ε_t) = D̄(x_{t+k}(x_{t-1}, ε_t))

and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)):

z_{t+k} ≡ H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}

Formula 2 requires that

x(x_{t-1}, ε_t) = B [c_{t-1}; k_{t-1}; θ_{t-1}] + φψ_ε ε_t + (I − F)^{-1} φψ_c + Σ_{ν=0}^{∞} F^ν φ z_{t+ν}(k_{t-1}, θ_{t-1}, ε_t)
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

α → 0.36,  δ → 0.95,  ρ → 0.95,  σ → 0.01

we have

[c_ss; k_ss; θ_ss] = [0.360408; 0.187324; 1.001]

and initial conditions

[k_{t-1}; θ_{t-1}; ε_t] = [0.18; 1.1; 0.01]    (4)
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation 4.

By including enough terms we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude for the error B_n, shown in red, is again very pessimistic compared to the actual error Z_n = ‖x_t − x̂_t‖_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine-precision-accurate value for the time t state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves both the approximation Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter but still pessimistic bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
[Figure 3: paths of the model variables over 30 periods.]

Figure 3: Model variable values
[Figure 4: "RBC Model Bounds versus Actual", showing the approximate error B_n and the actual error Z_n against the number of terms.]

Figure 4: RBC Known Solution Truncation Approximate Error Versus Actual, "Almost Arbitrary" Linear Reference Model
[Figure 5: "RBC Model Bounds versus Actual", showing the approximate error B_n and the actual error Z_n against the number of terms.]

Figure 5: RBC Known Solution Truncation Approximate Error Versus Actual, Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h^det_{i0}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [h^det_{ij}(x_{t-1}, x_t, ε_t) h^nondet_{ij}(x_{t+1})] = 0

This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the descriptions of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

h^det_{10}(·) = 1/c_t^η,  h^det_{11}(·) = αδ k_t^{α−1},  h^nondet_{11}(·) = E_t(θ_{t+1}/c_{t+1}^η)

h^det_{20}(·) = c_t + k_t − θ_t k_{t-1}^α,  h^det_{21}(·) = 0

h^det_{30}(·) = ln θ_t − (ρ ln θ_{t-1} + ε_t),  h^det_{31}(·) = 0
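The decomposition can be sketched in code: each residual combines deterministic time-t factors with at most one nondeterministic t+1 factor whose conditional expectation enters multiplicatively. Signs below are chosen so each residual vanishes at the exact solution (the listing above leaves them implicit), and the code checks this at the known closed-form solution:

```python
import numpy as np

# Residuals for the RBC system, split as h_det + h_det * E[h_nondet] = 0
alpha, delta, eta, rho = 0.36, 0.95, 1.0, 0.95

def residuals(k_lag, th_lag, eps, c_t, k_t, th_t, E_th_over_c):
    r1 = 1.0/c_t**eta - alpha*delta*k_t**(alpha-1.0) * E_th_over_c  # Euler
    r2 = c_t + k_t - th_t * k_lag**alpha                            # budget
    r3 = np.log(th_t) - (rho*np.log(th_lag) + eps)                  # technology
    return r1, r2, r3

# evaluate at the known closed-form solution (eta = d = 1)
k_lag, th_lag, eps = 0.18, 1.1, 0.01
th = th_lag**rho * np.exp(eps)
y = th * k_lag**alpha
c, k = (1.0 - alpha*delta)*y, alpha*delta*y
Eterm = 1.0 / ((1.0 - alpha*delta) * k**alpha)  # E_t[theta/c] is deterministic here
r = residuals(k_lag, th_lag, eps, c, k, th, Eterm)
assert max(abs(v) for v in r) < 1e-12
```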
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c_t^η

substituting N_{t+1} for θ_{t+1}/c_{t+1}^η in the first equation, and linearizing the augmented RBC model about the ergodic mean.
This leads to

[H_{-1} H_0 H_1] =
  [ 0   0         0   0          -7.6986  9.47963  0  0          0  0  -0.999  0
    0  -1.05263   0   0           1       1        0  -0.547185  0  0   0      0
    0   0         0   0           7.7063  0        1  -2.77463   0  0   0      0
    0   0         0  -0.949953    0       0        0   1         0  0   0      0 ]

with

ψ_ε = [0; 0; 1; 0],  ψ_z = I

These coefficients produce a unique stable linear solution:
B = [ 0   0.692632   0   0.342028
      0   0.36       0   0.177771
      0  -5.33763    0   0
      0   0          0   0.949953 ]

φ = [ -0.0444237   0.658     0   0.360048
       0.0444237   0.342     0   0.187137
       0.342342   -5.07075   1   0
       0           0         0   1        ]

F = [ 0   0  -0.0443793   0
      0   0   0.0443793   0
      0   0   0.342       0
      0   0   0           0 ]

ψ_c = [ -3.7735; -0.197184; 2.77741; 0.0500976 ]
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

Ḡ(x) ≡ E[g(x, ε)]

Then, with

E_t x_{t+1} = Ḡ(g(x_{t-1}, ε_t)),

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)

x_t(x_{t-1}, ε_t) ∈ R^L,  ‖x_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define Ḡ^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = Ḡ^p(g^p(x_{t-1}, ε_t))

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

Then

‖x_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x⁻, ε} ‖(I − F)^{-1} φ M(x⁻, g^p(x⁻, ε), Ḡ^p(g^p(x⁻, ε)), ε)‖_2

which can be approximated by

max_{x_{t-1}, ε_t} ‖(I − F)^{-1} φ e^p_t(x_{t-1}, ε)‖_2
For proof, see Appendix A.

9 This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) − x^p(x_{t-1}, ε_t)

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in Formula 2 to approximate e:

X̄^p(x) ≡ E[x^p(x, ε)]

x_{t+k+1} = X̄^p(x_{t+k}),  k = 1, ..., K

We can define functions z^p_t by

z^p_t ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t)
z^p_{t+k} ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)

e(x_{t-1}, ε_t) ≈ Σ_{ν=0}^{K} F^ν φ z^p_{t+ν}
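For a model that is itself linear, the formula is exact, which gives a convenient unit test. The sketch below (the hypothetical scalar reference model again) feeds a deliberately inaccurate decision rule through its own expected path, accumulates F^ν φ z^p_{t+ν}, and recovers the decision rule error; the overall sign depends on the sign convention for the residual M, here taken as the negative of the reference-model left-hand side:

```python
import numpy as np

lam1, lam2, H1 = 0.5, 2.0, 1.0        # hypothetical saddle-path scalar model
H0, Hm1 = -H1 * (lam1 + lam2), H1 * lam1 * lam2
B, F, phi = lam1, 1.0 / lam2, -1.0 / (H1 * lam2)

b_hat = B + 0.1                        # deliberately inaccurate proposed rule
x0 = 1.7                               # state x_{t-1}
K = 200
path = x0 * b_hat ** np.arange(K + 3)  # x_{t-1}, x_t^p, x_{t+1}^p, ...
zp = Hm1 * path[:-2] + H0 * path[1:-1] + H1 * path[2:]   # residuals on the path
e_est = -sum(F**nu * phi * zp[nu] for nu in range(K + 1))  # M = -(model LHS)
e_act = B * x0 - b_hat * x0            # true x_t minus proposed x_t^p
assert abs(e_est - e_act) < 1e-12
```

For the nonlinear models of the experiments below, the same calculation is only approximate, which is what Section 4.3 quantifies.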
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015

producing the following 10 model specifications:

α       δ         η  ρ         σ
0.35    0.92875   1  0.957188  0.00875
0.275   0.9725    1  0.906875  0.01375
0.375   0.8675    1  0.924375  0.01125
0.225   0.94625   1  0.891563  0.00625
0.325   0.91125   1  0.944063  0.006875
0.2625  0.922187  1  0.98125   0.011875
0.3625  0.887187  1  0.85875   0.014375
0.2125  0.965938  1  0.965938  0.009375
0.3125  0.860938  1  0.878438  0.008125
0.2375  0.904688  1  0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.

10 I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
[Figure 6: four scatter panels of (σ², R²) from the c_t error regressions, "c error regression", for K = 0, 1, 10, 30.]

Figure 6: c_t (σ², R²)
[Figure 7: four scatter panels of (σ², R²) from the k_t error regressions, "k error regression", for K = 0, 1, 10, 30.]

Figure 7: k_t (σ², R²)
[Figure 8: four scatter panels of (α, β) from the c_t error regressions, "c error regression", for K = 0, 1, 10, 30.]

Figure 8: c_t (α, β)
[Figure 9: four scatter panels of (α, β) from the k_t error regressions, "k error regression", for K = 0, 1, 10, 30.]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems, and I require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation allows the technique to be applied to models with occasionally binding constraints and models with regime switching. I will only consider time-invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X̄(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c)

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))

represents a set of model equations M_i with Boolean-valued gate functions b_i and c_i. In the algorithm's implementation, b_i is a Boolean precondition and c_i a Boolean post-condition for a given equation system. If some b_i is false, there is no need to try to solve system i; if c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The function d chooses among the candidate x_t(x_{t-1}, ε_t) values: it uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we have a collection of equation systems and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).

Examples of such systems include the simple RBC model from Section 3.2, models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a regime-switching specification.
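The gating logic can be sketched in a few lines. The example below is hypothetical (a piecewise-linear rule with a floor at zero standing in for an occasionally binding constraint): each candidate system carries a precondition b_i and a post-condition c_i, and d simply demands that exactly one admissible x_t survive:

```python
# Gated equation-system selection: each system is (b, solve, c)
def solve_gated(x_lag, eps, systems):
    candidates = []
    for b, solve, c in systems:
        if not b(x_lag, eps):          # precondition: skip inapplicable system
            continue
        x_t = solve(x_lag, eps)
        if c(x_lag, eps, x_t):         # post-condition: discard bad solutions
            candidates.append(x_t)
    assert len(candidates) == 1        # d: the model must select a unique x_t
    return candidates[0]

# hypothetical piecewise-linear model: x_t = 0.9 x_{t-1} + eps, floored at 0
unconstrained = (lambda x, e: True,
                 lambda x, e: 0.9 * x + e,
                 lambda x, e, xt: xt >= 0)
bound_binds = (lambda x, e: True,
               lambda x, e: 0.0,
               lambda x, e, xt: 0.9 * x + e <= 0)
systems = [unconstrained, bound_binds]
assert abs(solve_gated(1.0, 0.05, systems) - 0.95) < 1e-12   # interior solution
assert solve_gated(0.1, -0.5, systems) == 0.0                # constraint binds
```

At a kink point where both post-conditions hold, d would have to break the tie; the sketch assumes the evaluation points avoid that knife edge.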
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on Formula 2, linking the conditional expectations of future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as for the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations of Section 5.1 are all deterministic.

There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms give more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution, so the terms in the series will initially be incorrect. The Smolyak grid adds further error to the calculation: if there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates; a description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X̄(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t)

Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t)

compute

X̄^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)]
We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define functions z^p, Z̄^p by

z^p(x_{t-1}, ε) ≡ H [x_{t-1}; x^p(x_{t-1}, ε); X̄^p(x^p(x_{t-1}, ε))] + ψ_ε ε_t + ψ_c

Z̄^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)]
• The algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X̄(x_{t+k}),  z_{t+k+1} = Z̄(x_{t+k})  ∀k ≥ 0

• Using the z^p(x_{t-1}, ε) series, we get expressions for x_t and E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^{∞} F^ν φ Z̄(x_{t+ν})

E[X_{t+1}] = B X_t + (I − F)^{-1} φψ_c + Σ_{ν=1}^{∞} F^{ν−1} φ Z̄(x_{t+ν})  ∀t, k ≥ 0

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε). Repeat the loop.
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation, and Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model

x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components

z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components

X̄^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components

Z̄^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak an-isotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P̄_1(x), ..., P̄_N(x))

T: model evaluation points, a Niederreiter sequence in the model variable ergodic region

γ ≡ {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}: collects the decision rule components

G ≡ {X̄(x_{t-1}), Z̄(x_{t-1})}: collects the decision rule conditional expectations components

H ≡ {γ, G}: collects decision rule information

{C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c): the equation system, R^{3N_x + N_ε + N_x} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points; the parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs: Can use Formula 2 or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value of x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.

genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression of Formula 2.

makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
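As an illustration of the makeInterpFuncs interface only: the paper uses Smolyak an-isotropic grids in Mathematica, but the fit-at-collocation-points-then-interpolate pattern looks like this one-dimensional Chebyshev sketch, where the range, node count, and stand-in decision-rule data are all hypothetical:

```python
import numpy as np

# Hypothetical 1-D stand-in for the fit-then-interpolate pattern of makeInterpFuncs
a, b = 0.15, 0.22                                   # hypothetical capital range
n = 9
nodes = 0.5*(a + b) + 0.5*(b - a)*np.cos(np.pi*(2*np.arange(n) + 1)/(2*n))
vals = np.log(nodes)                                # stand-in "solved" rule data
coefs = np.polynomial.chebyshev.chebfit(2*(nodes - a)/(b - a) - 1, vals, n - 1)

def decision_rule(k):
    # evaluate the fitted interpolant at an off-grid point
    return np.polynomial.chebyshev.chebval(2*(k - a)/(b - a) - 1, coefs)

assert abs(decision_rule(0.18) - np.log(0.18)) < 1e-6
```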
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11: two scatter panels of the 200 simulated (k_t, θ_t) pairs, "Ergodic Values for k and θ", before (left) and after (right) the parallelotope transformation.]

Figure 11: The Ergodic Values for k_t, θ_t
X̄ = [0.18853; 1.00466; 0]

σ_X = [0.0112443; 0.0387807; 0.01]

V = [ -0.707107  -0.707107   0
      -0.707107   0.707107   0
       0          0          1 ]

maxU = [3.02524; 0.199124; 3]

minU = [-3.16879; -0.17629; -3]
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution; the value of K has more effect when the Smolyak grid is more densely populated with collocation points.
(k_1, θ_1, ε_1) through (k_30, θ_30, ε_30):

0.16818    0.942376   -0.0251104
0.210392   1.08282     0.0232085
0.181393   0.968212   -0.00900410
0.1949     1.017       0.0111288
0.188668   1.00876    -0.0210838
0.20145    1.06007     0.0272351
0.17245    0.945462   -0.00497752
0.214179   1.0876      0.0191819
0.195059   1.03442    -0.0130307
0.182278   0.983104    0.00307563
0.208273   1.06025    -0.029137
0.166906   0.916853    0.00106234
0.20192    1.05244    -0.0311503
0.187205   1.00787     0.0171686
0.213199   1.08502    -0.015044
0.17147    0.942887    0.0252218
0.179847   0.985589   -0.00699081
0.194563   1.03016     0.00911549
0.165563   0.915543   -0.0230971
0.20705    1.05852     0.0211952
0.173322   0.954406   -0.0110174
0.2136     1.1016      0.00508892
0.184601   0.986988   -0.0271237
0.198833   1.03324     0.0131421
0.20721    1.07594    -0.0190705
0.166932   0.928749    0.0292484
0.192926   1.0059     -0.00296424
0.179118   0.958169    0.0141487
0.172672   0.949185   -0.0180639
0.212949   1.09638     0.030255

Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1)
28
[Table 3 lists the decision-rule values (c_i, k_i, θ_i) at each of the 30 evaluation points; numeric entries omitted.]

Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1)
[Table 4 lists ln(|e_c,i|), ln(|e_k,i|), ln(|e_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 4: RBC Known Solution Actual Errors, d = (1,1,1)
[Table 5 lists the estimated errors ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 5: RBC Known Solution Error Approximations, d = (1,1,1)
[Table 6 lists the 30 Smolyak evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries omitted.]

Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2)
[Table 7 lists the decision-rule values at each of the 30 evaluation points; numeric entries omitted.]

Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2)
[Table 8 lists ln(|e_c,i|), ln(|e_k,i|), ln(|e_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 8: RBC Known Solution Actual Errors, d = (2,2,2)
[Table 9 lists the estimated errors ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 9: RBC Known Solution Error Approximations, d = (2,2,2)
K          d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries  -6.51367      -12.2771      -16.8947
0          -6.0793       -6.26833      -6.29843
1          -6.00805      -6.24202      -6.49835
2          -6.52219      -7.43876      -7.04785
3          -6.57578      -8.03864      -8.32589
4          -6.62831      -9.1522       -9.49314
5          -6.77305      -10.5516      -10.4247
6          -6.38499      -11.6399      -11.455
7          -6.63849      -11.3627      -13.0213
8          -6.38735      -11.7288      -13.9976
9          -6.46759      -11.9826      -15.3372
10         -6.45966      -12.1345      -15.4157
10         -6.77776      -11.5025      -15.8211
15         -6.67778      -12.4593      -15.8899
20         -6.40802      -11.7296      -17.1652
25         -6.53265      -12.1441      -17.1518
30         -6.81949      -11.6097      -16.3748
35         -6.33508      -12.1446      -16.6963
40         -6.52442      -11.4042      -15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 11 lists the 30 Smolyak evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries omitted.]

Table 11: RBC Approximated Solution Model Evaluation Points, d = (1,1,1)
[Table 12 lists the decision-rule values (c_i, k_i, θ_i) at each of the 30 evaluation points; numeric entries omitted.]

Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1,1,1)
[Table 13 lists the estimated errors ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 13: RBC Approximated Solution Error Approximations, d = (1,1,1)
[Table 14 lists the 30 Smolyak evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries omitted.]

Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2)
[Table 15 lists the decision-rule values at each of the 30 evaluation points; numeric entries omitted.]

Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2,2,2)
[Table 16 lists the estimated errors ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t ∑_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} - 1) / (1 - η)
L = u(c_t) + E_t ∑_{t=0}^∞ δ^t u(c_{t+1}) + ∑_{t=0}^∞ δ^t [ λ_t (c_t + k_t - ((1 - d) k_{t-1} + θ_t f(k_{t-1}))) + μ_t ((k_t - (1 - d) k_{t-1}) - υ I_ss) ]

so that we can impose

I_t = (k_t - (1 - d) k_{t-1}) ≥ υ I_ss

The first order conditions become

1/c_t^η + λ_t = 0

λ_t - δ λ_{t+1} θ_{t+1} α k_t^{α-1} - δ(1 - d) λ_{t+1} + μ_t - δ(1 - d) μ_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If μ_t > 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) = 0:

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)

If μ_t = 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) ≥ 0:

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)
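A solver needs the two branches above as a single residual function that switches on the complementarity condition. The sketch below (Python, not the paper's Mathematica prototype) stacks the equations for one collocation point; the function name, argument layout, and parameter values are illustrative assumptions, not the paper's code.

```python
import numpy as np

# Illustrative parameter values (not the paper's calibration)
alpha, delta, d, upsilon, rho = 0.36, 0.95, 0.02, 0.975, 0.95

def residuals(state_lag, state, state_exp, eps, I_ss):
    """Stack the OBC model equations, selecting the branch by complementarity."""
    k_lag, theta_lag = state_lag
    c, I, k, lam, N, mu, theta = state
    lam_next, N_next, mu_next = state_exp      # expected t+1 quantities
    inv_gap = k - (1.0 - d) * k_lag - upsilon * I_ss
    r = [
        lam - 1.0 / c,
        c + I - theta * k_lag**alpha,
        N - lam * theta,
        theta - np.exp(rho * np.log(theta_lag) + eps),
        lam + mu - (alpha * k**(alpha - 1.0) * delta * N_next
                    + delta * (1.0 - d) * lam_next
                    + delta * (1.0 - d) * mu_next),
        I - (k - (1.0 - d) * k_lag),
    ]
    # complementary slackness: either the constraint binds (mu > 0, inv_gap = 0)
    # or it is slack (mu = 0, inv_gap >= 0)
    r.append(inv_gap if mu > 0 else mu)
    return np.array(r)
```

A projection solver would drive these residuals to zero at every collocation point, trying each branch and keeping the one whose solution satisfies the corresponding inequality.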
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 17 lists the 30 Smolyak evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries omitted.]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1)
[Table 18 lists the decision-rule values (c_i, I_i, k_i) at each of the 30 evaluation points; numeric entries omitted.]

Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1,1,1)
[Table 19 lists the estimated errors ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1)
[Table 20 lists the 30 Smolyak evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries omitted.]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2,2,2)
[Table 21 lists the decision-rule values at each of the 30 evaluation points; numeric entries omitted.]

Table 21: Occasionally Binding Constraints Values at Evaluation Points, d = (2,2,2)
[Table 22 lists the estimated errors ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) for i = 1, …, 30; numeric entries omitted.]

Table 22: Occasionally Binding Constraints Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000. URL http://www.sciencedirect.com/science/article/B6V85-408BVCF-2/1/217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-6/2/f8b1713c81765453e1a5e9a52593b52e.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-3/1/3f27e3e4d4b890df59d4f83850c1c3d1.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g*(x_{t-1}, ε_t), define the conditional expectations function

G*(x) ≡ E[g*(x, ε)]

then with

E_t x*_{t+1} = G*(g*(x_{t-1}, ε_t))

we have

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t-1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G* and L, construct the family of trajectories and the corresponding z*_t(x_{t-1}, ε):

{ x*_t(x_{t-1}, ε_t) ∈ R^L : ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X, ∀t > 0 }

z*_t(x_{t-1}, ε_t) ≡ H_{-1} x*_{t-1}(x_{t-1}, ε_t) + H_0 x*_t(x_{t-1}, ε_t) + H_1 x*_{t+1}(x_{t-1}, ε_t)
Consequently the exact solution has a representation given by

x*_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + ∑_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t-1}, ε)

and

E[x*_{t+1}(x_{t-1}, ε)] = B x*_t + ∑_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t-1}, ε)] + (I - F)^{-1} φ ψ_c

with

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t-1}, ε_t)
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$

By construction the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and the corresponding $z^p_t(x_{t-1}, \varepsilon)$:12

$$\left\{x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x^p_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X}\ \ \forall t > 0\right\}$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$$
The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \varphi \psi_\varepsilon \varepsilon + (I - F)^{-1} \varphi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \varphi\, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \varphi\, E\left[z^p_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \varphi \psi_c$$

with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$

Subtracting the two representations,

$$x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \varphi \left(z^\ast_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right)$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^\ast_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

we have

$$x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \varphi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\left\|x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu} \varphi\right\|\, \left\|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\right\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.13 The exact solution satisfies the model equations exactly, so the error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
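The geometric decay behind this bound is easy to illustrate numerically. The sketch below is a toy example (the nonnegative matrices $F$ and $\varphi$ and the $\Delta z$ path are made-up values, not taken from any model in the paper): it accumulates the series of $\Delta z$ contributions and checks the result against the conservative $(I-F)^{-1}$ bound.

```python
import numpy as np

# Toy linear reference model pieces (assumed illustrative values):
# F must have spectral radius below one for the series to converge.
F = np.array([[0.3, 0.1],
              [0.0, 0.2]])
phi = np.array([[1.0, 0.5],
                [0.2, 1.0]])

rng = np.random.default_rng(0)
dzs = [rng.uniform(-1.0, 1.0, size=2) for _ in range(200)]  # Delta z path

# Accumulate sum_nu F^nu phi Delta z_{t+nu}
err = np.zeros(2)
F_pow = np.eye(2)
for dz in dzs:
    err += F_pow @ phi @ dz
    F_pow = F_pow @ F

# Conservative bound: ||(I - F)^{-1} phi||_inf times the largest ||Delta z||_inf
dz_max = max(np.max(np.abs(dz)) for dz in dzs)
bound = np.linalg.norm(np.linalg.inv(np.eye(2) - F) @ phi, ord=np.inf) * dz_max
```

With nonnegative $F$ and $\varphi$, the accumulated difference always stays inside the bound, typically by a wide margin, which mirrors the "pessimistic bound" language above.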
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\left\|\Delta z_t\right\| \le \max_{x_{-}, \varepsilon} \left\|\varphi\, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$

$$\left\|x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1} \varphi\, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$

To see this, write

$$z^\ast_t = H_{-1} x^\ast_{t-1} + H_0 x^\ast_t + H_1 x^\ast_{t+1}, \qquad z^p_t = H_{-1} x^p_{t-1} + H_0 x^p_t + H_1 x^p_{t+1}$$

$$0 = M(x_{t-1}, x^\ast_t, x^\ast_{t+1}, \varepsilon_t), \qquad e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

so that, since both paths share the initial condition,

$$\max_{x_{t-1}, \varepsilon_t} \left\|\varphi\, \Delta z_t\right\|_2 = \max_{x_{t-1}, \varepsilon_t} \left\|\varphi\left(H_0 (x^p_t - x^\ast_t) + H_1 (x^p_{t+1} - x^\ast_{t+1})\right)\right\|_2$$
with

$$M(x_{t-1}, x^\ast_t, x^\ast_{t+1}, \varepsilon_t) = 0,$$

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\left(x^\ast_t - x^p_t\right) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^\ast_{t+1} - x^p_{t+1}\right)$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$\left(x^\ast_t - x^p_t\right) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^\ast_{t+1} - x^p_{t+1}\right)\right)$$
Collecting results,

$$\left\|\varphi\, \Delta z_t\right\|_2 = \left\|\varphi\left(-H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}\left(x^\ast_{t+1} - x^p_{t+1}\right)\right) + H_1\left(x^p_{t+1} - x^\ast_{t+1}\right)\right)\right\|_2$$

$$= \left\|\varphi\left(-H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) + \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_1\right)\left(x^\ast_{t+1} - x^p_{t+1}\right)\right)\right\|_2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_1,$$

makes the coefficient on $(x^\ast_{t+1} - x^p_{t+1})$ vanish and produces

$$\max_{x_{t-1}, \varepsilon_t} \left\|\varphi\, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)$$

If we know the current state, we can use the known conditional expectation function for that state:

$$E_t(x_{t+1}(x_t) \mid s_t = 1), \ \ldots, \ E_t(x_{t+1}(x_t) \mid s_t = N)$$

For the next state we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
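The stacking identity above is mechanical and easy to verify numerically. A minimal sketch (hypothetical per-regime expectation vectors, two regimes, three model variables; none of the numbers come from the paper) checks the regime-by-regime weighted sum against the single $(P \otimes I)$ product:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # regime transition probabilities p_ij (rows sum to 1)
n = 3                        # number of model variables

rng = np.random.default_rng(1)
# v[j] stands in for E_t(x_{t+2} | s_{t+1} = j), one vector per regime
v = [rng.normal(size=n) for _ in range(2)]

# probability-weighted sums, computed regime by regime
direct = np.concatenate([sum(P[i, j] * v[j] for j in range(2)) for i in range(2)])

# the same map written as a single Kronecker-product matrix
stacked = np.kron(P, np.eye(n)) @ np.concatenate(v)
```

The two computations agree exactly; the Kronecker form is convenient when the whole stacked system is iterated forward many periods.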
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, with

$$\text{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
If $(s_t = 0$ and $\mu_t > 0$ and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} = 0)$:

$$\begin{bmatrix}
\lambda_t - \frac{1}{c_t} \\
c_t + k_t - \theta_t k^{\alpha}_{t-1} \\
N_t - \lambda_t \theta_t \\
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
\lambda_t + \mu_t - \left(\alpha \delta k^{\alpha-1}_t N_{t+1} + \delta(1 - d_0)\left(\lambda_{t+1} + \mu_{t+1}\right)\right) \\
I_t - \left(k_t - (1 - d_0)k_{t-1}\right) \\
\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{bmatrix}$$

If $(s_t = 0$ and $\mu_t = 0$ and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} \ge 0)$:

$$\begin{bmatrix}
\lambda_t - \frac{1}{c_t} \\
c_t + k_t - \theta_t k^{\alpha}_{t-1} \\
N_t - \lambda_t \theta_t \\
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
\lambda_t + \mu_t - \left(\alpha \delta k^{\alpha-1}_t N_{t+1} + \delta(1 - d_0)\left(\lambda_{t+1} + \mu_{t+1}\right)\right) \\
I_t - \left(k_t - (1 - d_0)k_{t-1}\right) \\
\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{bmatrix}$$

If $(s_t = 1$ and $\mu_t > 0$ and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} = 0)$:

$$\begin{bmatrix}
\lambda_t - \frac{1}{c_t} \\
c_t + k_t - \theta_t k^{\alpha}_{t-1} \\
N_t - \lambda_t \theta_t \\
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
\lambda_t + \mu_t - \left(\alpha \delta k^{\alpha-1}_t N_{t+1} + \delta(1 - d_1)\left(\lambda_{t+1} + \mu_{t+1}\right)\right) \\
I_t - \left(k_t - (1 - d_1)k_{t-1}\right) \\
\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{bmatrix}$$

If $(s_t = 1$ and $\mu_t = 0$ and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} \ge 0)$:

$$\begin{bmatrix}
\lambda_t - \frac{1}{c_t} \\
c_t + k_t - \theta_t k^{\alpha}_{t-1} \\
N_t - \lambda_t \theta_t \\
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
\lambda_t + \mu_t - \left(\alpha \delta k^{\alpha-1}_t N_{t+1} + \delta(1 - d_1)\left(\lambda_{t+1} + \mu_{t+1}\right)\right) \\
I_t - \left(k_t - (1 - d_1)k_{t-1}\right) \\
\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{bmatrix}$$
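Each of the four systems above is guarded by a precondition (which regime applies, whether $\mu_t$ is positive) and a postcondition (the complementary slackness requirement actually holds). A minimal sketch of this branch selection, using a scalar toy problem with a floor constraint rather than the full model (all names and numbers here are illustrative):

```python
def solve_with_floor(target, floor):
    """Pick the branch consistent with complementary slackness:
    either mu = 0 and x = target >= floor, or mu > 0 and x = floor."""
    branches = [
        # (precondition, solver, postcondition)
        (lambda: True, lambda: (target, 0.0), lambda x, mu: mu == 0.0 and x >= floor),
        (lambda: True, lambda: (floor, floor - target), lambda x, mu: mu > 0.0),
    ]
    for pre, solve, post in branches:
        if pre():
            x, mu = solve()
            if post(x, mu):
                return x, mu
    raise ValueError("no branch satisfied its post-condition")
```

The postconditions make the selection unambiguous: exactly one branch yields an acceptable $(x, \mu)$ pair for any input.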
C Algorithm Pseudo-code

Notation:

$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \varphi, F\}$: the linear reference model
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components
$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
$S$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$
$T$: model evaluation points, a Niederreiter sequence in the model variable ergodic region
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, collects the decision rule components
$G$: $\{X(x_{t-1}), Z(x_{t-1})\}$, collects the decision rule conditional expectations components
$H$: $\{\gamma, G\}$, collects decision rule information
$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\}|(b, c)$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon} \rightarrow \mathbb{R}^{N_x}$
input: $(\mathcal{L}, H_0, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\}|(b, c), S, T)$
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option ``normConvTol'' (default $10^{-10}$).
2 NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M, d\}|(b, c)$, S)), $H_0$, notConvergedAt(v, T))
output: $H_0, H_1, \ldots, H_c$

Algorithm 1: nestInterp
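nestInterp is essentially a fixed-point iteration with a convergence test at sample points; Mathematica's NestWhile plays the role of the loop below. A hedged Python sketch, with a toy contraction map standing in for doInterp and a tolerance analogous to normConvTol:

```python
def nest_while(update, h0, test_points, tol=1e-10, max_iter=500):
    """Iterate h -> update(h) until evaluations at test_points stop changing."""
    h = h0
    for _ in range(max_iter):
        h_next = update(h)
        gap = max(abs(h_next(p) - h(p)) for p in test_points)
        h = h_next
        if gap < tol:
            return h
    raise RuntimeError("no convergence within max_iter iterations")

# toy contraction standing in for doInterp: h(x) = 0.5*h_prev(x) + x,
# whose fixed point is h(x) = 2x
update = lambda h: (lambda x, h=h: 0.5 * h(x) + x)
h_star = nest_while(update, lambda x: 0.0, test_points=[0.3, 1.0, 2.5])
```

In the actual algorithm the iterate is the pair of interpolated decision rule and conditional expectation functions, and the test points are the Niederreiter points $T$.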
input: $(\mathcal{L}, H_i, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\}|(b, c), S)$
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one of the model equation systems, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs($\mathcal{L}$, $H$, $\kappa$, $\{C_1, \ldots, C_M, d\}|(b, c)$)
3 $H_{i+1}$ = makeInterpFuncs(theFindRootFuncs, S)
output: $H_{i+1}$

Algorithm 2: doInterp

input: $(\mathcal{L}, H, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\}|(b, c), S)$
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one from the set of model equation systems. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, $\{$v[1], genFindRootWorker($\mathcal{L}$, $H$, $\kappa$, v[2]), v[3]$\}$), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples, $d$

Algorithm 3: genFindRootFuncs

input: $(\mathcal{L}, G, \kappa, M)$
1 Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2 $Z = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, $G$)
3 modEqnArgsFunc = genLilXkZkFunc($\mathcal{L}$, $Z$)
4 $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}$, $\varepsilon_t$, $z_t$)
5 funcOfXtZt = FindRoot($\{x^p_t, z_t\}$, $\{M(\mathcal{X}),\ x_t - x^p_t\}$)
6 funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: $(x^p_t, \mathcal{L}, G, \kappa, M)$
1 Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (the empty list for $\kappa = 1$).
2 $X_t = x^p_t$
  for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
3   $\{X_{t+i}, Z_{t+i}\} = G(X_{t+i-1})$
4 end
5 $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, $G$)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$

Algorithm 5: genZsForFindRoot
input: $(\mathcal{L} = \{H, \psi_\varepsilon, \psi_c, B, \varphi, F\})$
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 $\gamma(x, \varepsilon) = \begin{bmatrix} B x + (I - F)^{-1} \varphi \psi_c + \varphi \psi_\varepsilon \varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3 $G(x) = \begin{bmatrix} B x + (I - F)^{-1} \varphi \psi_c \\ 0_{N_x} \end{bmatrix}$
4 $H = \{\gamma, G\}$
output: $H$

Algorithm 6: genBothX0Z0Funcs
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1 Symbolically creates a function that uses an initial $Z$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 fCon = fSumC($\varphi$, $F$, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
  xtVals = genXtOfXtm1($\mathcal{L}$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
  xtp1Vals = genXtp1OfXt($\mathcal{L}$, xtVals, fCon)
  fullVec = $\{$xtm1Vars, xtVals, xtp1Vals, epsVars$\}$
output: fullVec

Algorithm 7: genLilXkZkFunc

input: $(\mathcal{L}, x_{t-1}, \varepsilon_t, z_t, \text{fCon})$
1 Symbolically creates a function that uses an initial $Z$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 $x_t = B x_{t-1} + (I - F)^{-1} \varphi \psi_c + \varphi \psi_\varepsilon \varepsilon_t + \varphi \psi_z z_t + F\, \text{fCon}$
output: $x_t$

Algorithm 8: genXtOfXtm1
input: $(\mathcal{L}, x_{t-1}, \text{fCon})$
1 Symbolically creates a function that uses an initial $Z$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 $x_{t+1} = B x_t + (I - F)^{-1} \varphi \psi_c + \varphi \psi_z z_{t+1} + \text{fCon}$
output: $x_{t+1}$

Algorithm 9: genXtp1OfXt

input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1 Numerically sums the $F$ contribution for the series:
2 $S = \sum_{\nu=1}^{\kappa-1} F^{\nu-1} \varphi Z_{t+\nu}$
output: $S$

Algorithm 10: fSumC
input: $(\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\}|(b, c), S)$
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
2 interpData = genInterpData($\{C_1, \ldots, C_M, d\}|(b, c)$, S)
  $\{\gamma, G\}$ = interpDataToFunc(interpData, S)
output: $\{\gamma, G\}$

Algorithm 11: makeInterpFuncs

input: $(\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\}|(b, c), S)$
1 Solves the model equations at the collocation points, producing the interpolation data.
output: interpData

Algorithm 12: genInterpData

input: $(C(x_{t-1}, \varepsilon_t))$
1 Evaluate the model equations, obeying pre and post conditions.
output: $x_t$, or failed

Algorithm 13: evaluateTriple
input: $(V, S)$
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$

Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
Figure 1: Arbitrary Bounded Time Series Paths (three panels: ``Pi Digits Path'', ``Oscillatory Path'', and ``Pseudo Random Path'')
2.3 Assessing $x_t$ Errors

Formula 2 computes the impact on the current state of fully anticipated future shocks, and it characterizes that impact exactly. However, one can contemplate the impact of at least two deviations from the exact calculation: one could truncate a series of correct values of $z_t$, or one might have imprecise values of $z_t$ along the path.
2.3.1 Truncation Error

The series representation can compute the entire series exactly if all the terms are included, but it will at times be useful to exploit the fact that the terms for state vectors closer to the initial time have the most important impact. One could consider approximating $x_t$ by truncating the series at a finite number of terms:

$$\hat{x}_t \equiv B x_{t-1} + \varphi \psi_\varepsilon \varepsilon + (I - F)^{-1} \varphi \psi_c + \sum_{s=0}^{k} F^{s} \varphi\, z_{t+s}$$
We can bound the series approximation truncation error. Since

$$\sum_{s=k+1}^{\infty} F^{s} \varphi \psi_z = (I - F)^{-1} F^{k+1} \varphi \psi_z,$$

we have

$$\left\|x(x_{t-1}, \varepsilon) - \hat{x}(x_{t-1}, \varepsilon, k)\right\|_\infty \le \left\|(I - F)^{-1} F^{k+1} \varphi \psi_z\right\|_\infty \left(\|H_{-1}\|_\infty + \|H_0\|_\infty + \|H_1\|_\infty\right) \left\|\bar{X}\right\|_\infty$$
Figure 2 shows that this truncation error bound can be a very conservative measure of the accuracy of the truncated series. The orange line represents the computed approximation of the infinity norm of the difference between $x_t$ from the full series and a truncated series, for different lengths of the truncation. The blue line shows the infinity norm of the actual difference between the $x_t$ computed using the full series and the value obtained using a truncated series. The series requires only the first 20 terms to compute the initial value of the state vector to high precision.
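A sketch of the truncated series and its tail bound on a toy linear reference model (all matrices below are assumed illustrative values with nonnegative entries, not the paper's; here $\psi_z = I$ and the observed $\max_s \|z_{t+s}\|_\infty$ stands in for the $(\|H_{-1}\| + \|H_0\| + \|H_1\|)\|\bar{X}\|$ bound on $z$):

```python
import numpy as np

# toy linear reference model (assumed values)
B = np.array([[0.5, 0.0], [0.1, 0.3]])
phi = np.eye(2)
F = np.array([[0.4, 0.0], [0.0, 0.2]])
psi_eps = np.array([1.0, 0.0])
psi_c = np.array([0.1, 0.2])

rng = np.random.default_rng(3)
z_path = [rng.uniform(-1.0, 1.0, size=2) for _ in range(201)]

def truncated_x(x_prev, eps, k):
    """x_t from the constant, shock, and first k+1 series terms."""
    x = B @ x_prev + phi @ psi_eps * eps + np.linalg.inv(np.eye(2) - F) @ phi @ psi_c
    F_pow = np.eye(2)
    for s in range(k + 1):
        x = x + F_pow @ phi @ z_path[s]
        F_pow = F_pow @ F
    return x

x_full = truncated_x(np.array([0.2, 0.1]), 0.05, k=200)
x_trunc = truncated_x(np.array([0.2, 0.1]), 0.05, k=5)

# tail bound: ||(I - F)^{-1} F^{k+1} phi||_inf * max_s ||z_{t+s}||_inf, with k = 5
z_max = max(np.max(np.abs(z)) for z in z_path)
tail = np.linalg.inv(np.eye(2) - F) @ np.linalg.matrix_power(F, 6) @ phi
bound = np.linalg.norm(tail, ord=np.inf) * z_max
```

Because the spectral radius of $F$ is well below one here, even $k = 5$ already puts the truncation error far inside the bound.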
Figure 2: $x_t$ Error Approximation Versus Actual Error (plot: ``Truncation Error Bound and Actual Error'')
2.3.2 Path Error

We can assess the impact of perturbed values for $z_t$ by computing the maximum discrepancy $\Delta z_t$ impacting the $z_t$ and applying the partial sum formula. One can approximate $x_t$ using

$$\hat{x}_t \equiv B x_{t-1} + \varphi \psi_\varepsilon \varepsilon + (I - F)^{-1} \varphi \psi_c + \sum_{s=0}^{\infty} F^{s} \varphi \left(z_{t+s} + \Delta z_{t+s}\right)$$

Note yet again that for approximating $x(x_{t-1}, \varepsilon)$ the impact of a given realization along the path declines for those realizations which are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible $\Delta z_t$ in the formula:

$$\left\|x(x_{t-1}, \varepsilon) - \hat{x}(x_{t-1}, \varepsilon, k)\right\|_\infty \le \left\|(I - F)^{-1} \varphi \psi_z\right\|_\infty \left\|\Delta z_t\right\|_\infty$$
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of $(x_{t-1}, \varepsilon_t)$.6 As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation provides a linearly weighted sum of $z_t(x_{t-1}, \varepsilon_t)$ functions that gives us an approximation for the model solutions. This section presents a familiar RBC model as an example.7
3.1 An RBC Example

Consider the model described in (Maliar and Maliar, 2005):8

$$\max\ u(c_t) + E_t \sum_{\nu=1}^{\infty} \delta^{\nu} u(c_{t+\nu})$$

$$c_t + k_t = (1 - d) k_{t-1} + \theta_t f(k_{t-1}), \qquad f(k) = k^{\alpha}$$

$$u(c) = \frac{c^{1-\eta} - 1}{1 - \eta}$$

The first order conditions for the model are

$$\frac{1}{c^{\eta}_t} = \alpha \delta k^{\alpha-1}_t E_t\left(\frac{\theta_{t+1}}{c^{\eta}_{t+1}}\right)$$

$$c_t + k_t = \theta_t k^{\alpha}_{t-1}$$

$$\theta_t = \theta^{\rho}_{t-1} e^{\varepsilon_t}$$
It is well known that when $\eta = d = 1$ we have a closed form solution (Lettau, 2003):

$$x_t(x_{t-1}, \varepsilon_t) \equiv D(x_{t-1}, \varepsilon_t) \equiv \begin{bmatrix} c_t(x_{t-1}, \varepsilon_t) \\ k_t(x_{t-1}, \varepsilon_t) \\ \theta_t(x_{t-1}, \varepsilon_t) \end{bmatrix} = \begin{bmatrix} (1 - \alpha\delta)\,\theta_t k^{\alpha}_{t-1} \\ \alpha\delta\,\theta_t k^{\alpha}_{t-1} \\ \theta^{\rho}_{t-1} e^{\varepsilon_t} \end{bmatrix}$$

6 Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter, 2000; Koop et al., 1996).

7 Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.

8 Here I set their $\beta = 1$ and do not discuss quasi-geometric discounting or time-inconsistency.
For mean zero iid $\varepsilon_t$ we can easily compute the conditional expectation of the model variables for any given $(\theta_{t+k}, k_{t+k})$:

$$\bar{D}(x_{t+k}) \equiv \begin{bmatrix} E_t(c_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \\ E_t(k_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \\ E_t(\theta_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \end{bmatrix} = \begin{bmatrix} (1 - \alpha\delta)\, k^{\alpha}_{t+k}\, e^{\sigma^2/2}\, \theta^{\rho}_{t+k} \\ \alpha\delta\, k^{\alpha}_{t+k}\, e^{\sigma^2/2}\, \theta^{\rho}_{t+k} \\ e^{\sigma^2/2}\, \theta^{\rho}_{t+k} \end{bmatrix}$$

For any given values of $(k_{t-1}, \theta_{t-1}, \varepsilon_t)$, the model decision rule ($D$) and decision rule conditional expectation ($\bar{D}$) can generate a conditional expectations path forward from any given initial $(x_{t-1}, \varepsilon_t)$:

$$x_t(x_{t-1}, \varepsilon_t) = D(x_{t-1}, \varepsilon_t), \qquad x_{t+k+1}(x_{t-1}, \varepsilon_t) = \bar{D}(x_{t+k}(x_{t-1}, \varepsilon_t))$$

and corresponding paths for $(z_{1t}(x_{t-1}, \varepsilon_t), z_{2t}(x_{t-1}, \varepsilon_t), z_{3t}(x_{t-1}, \varepsilon_t))$:

$$z_{t+k} \equiv H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}$$
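These recursions are straightforward to implement. A sketch for the closed-form case ($\eta = d = 1$, with the parameter and initial values used later in this section):

```python
import numpy as np

alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01

def D(k_prev, th_prev, eps):
    """Closed-form decision rule for eta = d = 1: returns (c_t, k_t, theta_t)."""
    th = th_prev ** rho * np.exp(eps)
    y = th * k_prev ** alpha
    return (1 - alpha * delta) * y, alpha * delta * y, th

def Dbar(k, th):
    """One-step-ahead conditional expectations of (c, k, theta) given (k, theta)."""
    m = np.exp(sigma ** 2 / 2) * th ** rho   # E[theta'] given theta
    y = m * k ** alpha
    return (1 - alpha * delta) * y, alpha * delta * y, m

# conditional expectations path from (k_{t-1}, theta_{t-1}, eps_t) = (0.18, 1.1, 0.01)
c, k, th = D(0.18, 1.1, 0.01)
path = [(c, k, th)]
for _ in range(30):
    path.append(Dbar(path[-1][1], path[-1][2]))
```

Each point on the path feeds the $z_{t+k}$ construction above through the $H$ matrices of whatever linear reference model is in use.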
Formula 2 requires that

$$x(x_{t-1}, \varepsilon_t) = B \begin{bmatrix} c_{t-1} \\ k_{t-1} \\ \theta_{t-1} \end{bmatrix} + \varphi \psi_\varepsilon \varepsilon_t + (I - F)^{-1} \varphi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \varphi\, z_{t+\nu}(k_{t-1}, \theta_{t-1}, \varepsilon_t)$$

For example, using $d = 1$, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

$$\alpha = 0.36, \quad \delta = 0.95, \quad \rho = 0.95, \quad \sigma = 0.01$$

we have

$$\begin{bmatrix} c_{ss} \\ k_{ss} \\ \theta_{ss} \end{bmatrix} = \begin{bmatrix} 0.360408 \\ 0.187324 \\ 1.001 \end{bmatrix}, \qquad \begin{bmatrix} k_{t-1} \\ \theta_{t-1} \\ \varepsilon_t \end{bmatrix} = \begin{bmatrix} 0.18 \\ 1.1 \\ 0.01 \end{bmatrix} \tag{4}$$
Figure 3 shows, from top to bottom, the paths of $\theta_t$, $c_t$, and $k_t$ from the initial values given in Equation 4.

By including enough terms we can compute the solution for $x_t$ exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time $t$ values of the state variables. The approximated magnitude for the error $B_n$, shown in red, is again very pessimistic compared to the actual error $Z_n = \|x_t - \hat{x}_t\|_\infty$, shown in blue. With enough terms, even using an ``almost'' arbitrarily chosen linear model, the series approximation provides a machine precision accurate value for the time $t$ state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves both the approximation $Z_n$, shown in blue, and the approximate error $B_n$, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter but still pessimistic bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
Figure 3: model variable values

Figure 4: RBC Known Solution Truncation Approximate Error Versus Actual, ``Almost Arbitrary'' Linear Reference Model (plot: ``RBC Model Bounds versus Actual'', series $B_n$ and $Z_n$)

Figure 5: RBC Known Solution Truncation Approximate Error Versus Actual, Stochastic Steady State Linearization Linear Reference Model (plot: ``RBC Model Bounds versus Actual'', series $B_n$ and $Z_n$)
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

$$h_i(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = h^{det}_{i0}(x_{t-1}, x_t, \varepsilon_t) + \sum_{j=1}^{p_i} \left[h^{det}_{ij}(x_{t-1}, x_t, \varepsilon_t)\, h^{nondet}_{ij}(x_{t+1})\right] = 0$$

This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

$$h^{det}_{10}(\cdot) = \frac{1}{c^{\eta}_t}, \qquad h^{det}_{11}(\cdot) = -\alpha\delta k^{\alpha-1}_t, \qquad h^{nondet}_{11}(\cdot) = E_t\left(\frac{\theta_{t+1}}{c^{\eta}_{t+1}}\right)$$

$$h^{det}_{20}(\cdot) = c_t + k_t - \theta_t k^{\alpha}_{t-1}, \qquad h^{det}_{21}(\cdot) = 0$$

$$h^{det}_{30}(\cdot) = \ln \theta_t - (\rho \ln \theta_{t-1} + \varepsilon_t), \qquad h^{det}_{31}(\cdot) = 0$$
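A sketch of how the split separates what can be precomputed from what requires an expectation, using the $\eta = d = 1$ closed form from Section 3.1 (the sign convention here puts the minus on $h^{det}_{11}$; parameter and state values are the illustrative ones used earlier):

```python
import numpy as np

alpha, delta, eta, rho = 0.36, 0.95, 1.0, 0.95

# Euler equation split: h_1 = h10_det + h11_det * E_t[h11_nondet(x_{t+1})] = 0
h10_det = lambda c_t: 1.0 / c_t ** eta
h11_det = lambda k_t: -alpha * delta * k_t ** (alpha - 1)

# At the known solution, c_{t+1} = (1 - alpha*delta) * theta_{t+1} * k_t^alpha,
# so theta_{t+1}/c_{t+1} is deterministic given k_t and the expectation is exact.
k_prev, th_prev, eps = 0.18, 1.1, 0.01
th_t = th_prev ** rho * np.exp(eps)
c_t = (1 - alpha * delta) * th_t * k_prev ** alpha
k_t = alpha * delta * th_t * k_prev ** alpha
E_nondet = 1.0 / ((1 - alpha * delta) * k_t ** alpha)

residual = h10_det(c_t) + h11_det(k_t) * E_nondet
```

The residual vanishes at the closed-form solution, confirming that the deterministic factors multiply a single conditional expectation term, which is exactly the structure the precomputation in later sections exploits.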
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time $t$ with $\varepsilon_t$ known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

$$N_t = \frac{\theta_t}{c_t}$$

substituting $N_{t+1}$ for $\frac{\theta_{t+1}}{c_{t+1}}$ in the first equation, and linearizing the RBC model about the ergodic mean.
This leads to (with variable ordering $(c_t, k_t, N_t, \theta_t)$)

$$\left[H_{-1} \mid H_0 \mid H_1\right] = \left[\begin{array}{cccc|cccc|cccc}
0 & 0 & 0 & 0 & -7.6986 & 9.47963 & 0 & 0 & 0 & 0 & -0.999 & 0 \\
0 & -1.05263 & 0 & 0 & 1 & 1 & 0 & -0.547185 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 7.7063 & 0 & 1 & -2.77463 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -0.949953 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{array}\right]$$

with

$$\psi_\varepsilon = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \psi_z = I$$

These coefficients produce a unique stable linear solution:
$$B = \begin{bmatrix}
0 & 0.692632 & 0 & 0.342028 \\
0 & 0.36 & 0 & 0.177771 \\
0 & -5.33763 & 0 & 0 \\
0 & 0 & 0 & 0.949953
\end{bmatrix}, \qquad
\varphi = \begin{bmatrix}
-0.0444237 & 0.658 & 0 & 0.360048 \\
0.0444237 & 0.342 & 0 & 0.187137 \\
0.342342 & -5.07075 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}$$

$$F = \begin{bmatrix}
0 & 0 & -0.0443793 & 0 \\
0 & 0 & 0.0443793 & 0 \\
0 & 0 & 0.342 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}, \qquad
\psi_c = \begin{bmatrix}
-3.7735 \\
-0.197184 \\
2.77741 \\
0.0500976
\end{bmatrix}$$

Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
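As a sanity check on these matrices (the decimal points were restored here by inspection, so treat the exact digits with caution), both $B$ and $F$ should be stable: $B$ generates the backward dynamics and $F$ discounts the future $z$ terms in the series:

```python
import numpy as np

# B and F as reconstructed above (decimal placement is an assumption)
B = np.array([[0,  0.692632, 0,          0.342028],
              [0,  0.36,     0,          0.177771],
              [0, -5.33763,  0,          0],
              [0,  0,        0,          0.949953]])
F = np.array([[0, 0, -0.0443793, 0],
              [0, 0,  0.0443793, 0],
              [0, 0,  0.342,     0],
              [0, 0,  0,         0]])

# spectral radii must be below one for a unique stable solution
rho_B = np.max(np.abs(np.linalg.eigvals(B)))
rho_F = np.max(np.abs(np.linalg.eigvals(F)))
```

The eigenvalues of $B$ include $\rho = 0.949953$ (the technology persistence), and the spectral radius of $F$ is 0.342, so the series terms decay quickly.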
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region $A$ that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution $x^\ast_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right]$$

then with

$$E_t x^\ast_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^\ast_t, E_t x^\ast_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$

$$\left\{x^\ast_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x^\ast_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X}\ \ \forall t > 0\right\}$$

Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$

Then

$$\left\|x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1} \varphi\, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$

which can be approximated by

$$\max_{x_{t-1}, \varepsilon_t} \left\|\varphi\, e^p_t(x_{t-1}, \varepsilon)\right\|_2$$
For proof see Appendix A.

9 This work contrasts with the analysis provided in (Judd et al., 2017a; Peralta-Alva and Santos, 2014; Santos and Peralta-Alva, 2005; Santos, 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
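A sketch of evaluating this bound by brute force over test points, using a scalar AR(1) toy model where the exact decision rule is known (all names and values here are illustrative, not from the paper):

```python
import numpy as np

def global_error_bound(M, gp, Gp, F, phi, points):
    """max over (x_{t-1}, eps) test points of ||(I-F)^{-1} phi M(...)||_2."""
    n = F.shape[0]
    W = np.linalg.inv(np.eye(n) - F) @ phi
    worst = 0.0
    for x_prev, eps in points:
        x_t = gp(x_prev, eps)
        resid = M(x_prev, x_t, Gp(x_t), eps)
        worst = max(worst, np.linalg.norm(W @ resid))
    return worst

# toy model: x_t = 0.5 x_{t-1} + eps_t, mean-zero shocks
M = lambda x_prev, x_t, Ex_next, eps: np.array([x_t - 0.5 * x_prev - eps])
gp_exact = lambda x_prev, eps: 0.5 * x_prev + eps
gp_bad = lambda x_prev, eps: 0.5 * x_prev      # a bad rule ignoring the shock
Gp = lambda x_t: 0.5 * x_t
F = np.array([[0.0]])
phi = np.array([[1.0]])
pts = [(0.3, 0.1), (1.0, -0.2), (-0.5, 0.15)]

bound_exact = global_error_bound(M, gp_exact, Gp, F, phi, pts)
bound_bad = global_error_bound(M, gp_bad, Gp, F, phi, pts)
```

The exact rule produces a zero bound, while the deliberately wrong rule's bound equals the largest shock it ignores, which is precisely the model-equation residual the formula is built on.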
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in $x_t$ at $(x_{t-1}, \varepsilon_t)$:

$$e(x_{t-1}, \varepsilon_t) \equiv x(x_{t-1}, \varepsilon_t) - x^p(x_{t-1}, \varepsilon_t)$$

Compute the conditional expectations functions for the decision rule, and use them to generate a path to use with a linear reference model $\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \varphi, F\}$ in Formula 2 to approximate $e$:

$$X^p(x) \equiv E\left[x^p(x, \varepsilon)\right], \qquad x_{t+k+1} = X^p(x_{t+k}), \quad k = 1, \ldots, K$$

We can define functions $z^p_t$ by

$$z^p_t \equiv M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t), \qquad z^p_{t+k} \equiv M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)$$

$$e(x_{t-1}, \varepsilon_t) \equiv \sum_{\nu=0}^{K} F^{\nu} \varphi\, z^p_{t+\nu}$$
4.3 Assessing the Error Approximation

This section reports on the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters $\alpha = 0.36$, $\delta = 0.95$, $\eta = 1$, $\rho = 0.95$, $\sigma = 0.01$. I consider this the fixed ``true'' model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

$$0.20 < \alpha < 0.40, \quad 0.85 < \delta < 0.99, \quad 0.85 < \rho < 0.99, \quad 0.005 < \sigma < 0.015$$

producing the following 10 model specifications:

alpha    delta     eta   rho       sigma
0.35     0.92875   1     0.957188  0.00875
0.275    0.9725    1     0.906875  0.01375
0.375    0.8675    1     0.924375  0.01125
0.225    0.94625   1     0.891563  0.00625
0.325    0.91125   1     0.944063  0.006875
0.2625   0.922187  1     0.98125   0.011875
0.3625   0.887187  1     0.85875   0.014375
0.2125   0.965938  1     0.965938  0.009375
0.3125   0.860938  1     0.878438  0.008125
0.2375   0.904688  1     0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models $\mathcal{L}_i$ and 10 decision rules based on the linearization. As a result, I have $100 = 10 \times 10$ combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points $(k_{i,t-1}, \theta_{i,t-1}, \varepsilon_{it})$ from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

$$e_i = \alpha + \beta \hat{e}_i + u_i$$

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 $R^2$ and estimated variance values, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of $K$, the number of terms in the error formula ($K = 0, 1, 10, 30$).10 The regressions with $K = 0$ correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with $K = 0$, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The $R^2$ values are all above 60 percent and are often near one. The $\beta$ coefficients are generally close to one, and the constant tends to be close to zero.

10 I don't display results for $\theta_t$ because the formula predicts the error in $\theta_t$ perfectly, since it is determined by a backward looking equation.
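The validation regressions themselves are ordinary least squares of actual on predicted errors. A sketch with synthetic data (the noise level is made up, chosen only so that the fit resembles the reported pattern of $\beta \approx 1$, $\alpha \approx 0$, $R^2$ near one):

```python
import numpy as np

rng = np.random.default_rng(7)
e_hat = rng.normal(size=30)                               # predicted errors
e_act = 0.02 + 1.0 * e_hat + 0.05 * rng.normal(size=30)   # synthetic actual errors

# OLS of actual on predicted: e_i = alpha + beta * e_hat_i + u_i
X = np.column_stack([np.ones_like(e_hat), e_hat])
(alpha_h, beta_h), _, _, _ = np.linalg.lstsq(X, e_act, rcond=None)
fitted = X @ np.array([alpha_h, beta_h])
r2 = 1 - np.sum((e_act - fitted) ** 2) / np.sum((e_act - np.mean(e_act)) ** 2)
```

In the paper's experiment this regression is run once per (decision rule, reference model) pair, giving the 100 $(\alpha, \beta, R^2)$ triples plotted in Figures 6 through 9.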
Figure 6: $c_t$ $(\sigma^2, R^2)$ (scatter plots of the $c$ error regressions for $K = 0, 1, 10, 30$)
Figure 7: $k_t$ $(\sigma^2, R^2)$ (scatter plots of the $k$ error regressions for $K = 0, 1, 10, 30$)
Figure 8: $c_t$ $(\alpha, \beta)$ (scatter plots of the $c$ error regressions for $K = 0, 1, 10, 30$)
Figure 9: $k_t$ $(\alpha, \beta)$ (scatter plots of the $k$ error regressions for $K = 0, 1, 10, 30$)
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given $(x_{t-1}, \varepsilon_t)$ this collection of equation systems produces a unique solution for $x_t$. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions $x_t = x(x_{t-1}, \varepsilon)$. Given a decision rule $x(x_{t-1}, \varepsilon)$, denote its conditional expectation $X(x_{t-1}) \equiv E\left[x(x_{t-1}, \varepsilon)\right]$. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.

It will be convenient for describing the algorithms to write the models in the form

$$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\}|(b, c)$$

where each

$$C_i \equiv \left(b_i(x_{t-1}, \varepsilon_t),\ M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\ c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\right)$$

represents a set of model equations $M_i$ with Boolean valued gate functions $b_i$, $c_i$. In the algorithm's implementation, the $b_i$ will be a Boolean function precondition and the $c_i$ will be a Boolean function post-condition for a given equation system. If some $b_i$ is false, then there is no need to try to solve model $i$. If $c_i$ is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The $d$ will be a function for choosing among the candidate $x_t(x_{t-1}, \varepsilon_t)$ values. The function $d$ uses the truth values from the $(b, c)$ and the possible values of the state variables to select one and only one $x_t$. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given $(x_{t-1}, \varepsilon_t)$.

Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
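A minimal sketch of the $(b_i, M_i, c_i)$ triples and the selector $d$ (a toy two-branch model with illustrative names; the paper's implementation is in Mathematica):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class EquationSystem:
    b: Callable  # precondition b_i(x_prev, eps): is this system worth attempting?
    M: Callable  # solver for the equations M_i, returning a candidate x_t
    c: Callable  # postcondition c_i(x_prev, eps, x_t): is the candidate acceptable?

def d(systems, x_prev, eps):
    """Selector: return the unique x_t accepted by some (b, M, c) triple."""
    candidates = []
    for s in systems:
        if s.b(x_prev, eps):
            x_t = s.M(x_prev, eps)
            if s.c(x_prev, eps, x_t):
                candidates.append(x_t)
    assert len(candidates) == 1, "the model must produce a unique x_t"
    return candidates[0]

# toy two-branch model: x_t = 0.9 (x_{t-1} + eps) when that is nonnegative, else 0
branch_pos = EquationSystem(
    b=lambda x, e: x + e >= 0,
    M=lambda x, e: 0.9 * (x + e),
    c=lambda x, e, xt: xt >= 0)
branch_zero = EquationSystem(
    b=lambda x, e: x + e < 0,
    M=lambda x, e: 0.0,
    c=lambda x, e, xt: True)
systems = [branch_pos, branch_zero]
```

The uniqueness assertion inside $d$ enforces the specification constraint directly, and because each triple is self-contained, the $M_i$ solves can be dispatched in parallel before $d$ makes its selection.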
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time $t$ value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables $x_t(x_{t-1}, \varepsilon_t)$ as well as new $z_t(x_{t-1}, \varepsilon_t)$ variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al., 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule $x(x_{t-1}, \varepsilon, k)$ and obtain conditional expectations functions. I use anisotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals as outlined in (Judd et al., 2017b), so that the time-$t$ computations of Section 5.1 are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for $K$, the number of terms in the series representation. There are trade-offs: fewer terms mean more truncation error, while more terms yield more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will initially be incorrect. The Smolyak grid adds further error to the calculation. If there are many grid points, it is more likely that increasing $K$ can improve the quality of the solution; if the grid is too sparse, increasing $K$ will likely be less useful.
5.3.1 Single Equation System Case
For now, consider a single model equation system with no Boolean gates. A description of the actual multi-system implementation follows. We seek
$M(x_{t-1},\; x(x_{t-1}, \varepsilon_t),\; X(x(x_{t-1}, \varepsilon_t)),\; \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$
Given a proposed model solution
$x_t = x^p(x_{t-1}, \varepsilon_t)$
compute
$X^p(x_{t-1}) \equiv E[x^p(x_{t-1}, \varepsilon_t)]$
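Because $\varepsilon$ enters only through an expectation, $X^p$ can be precomputed with quadrature, in the spirit of the precomputation in (Judd et al., 2017b). A minimal sketch, assuming a scalar Gaussian shock and using the standard 3-point Gauss-Hermite rule (the function names are illustrative, not from the paper's code):

```python
import math

# Standard 3-point Gauss-Hermite rule, with weights already divided by sqrt(pi),
# so that E[f(eps)] ~= sum_i w_i * f(sqrt(2)*sigma*n_i) for eps ~ N(0, sigma^2).
GH_NODES = [-math.sqrt(1.5), 0.0, math.sqrt(1.5)]
GH_WEIGHTS = [1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0]

def conditional_expectation(x_rule, x_prev, sigma):
    """X^p(x_{t-1}) = E[x^p(x_{t-1}, eps)] for a proposed decision rule x_rule."""
    return sum(w * x_rule(x_prev, math.sqrt(2.0) * sigma * n)
               for n, w in zip(GH_NODES, GH_WEIGHTS))
```

The 3-point rule integrates polynomials in $\varepsilon$ up to degree 5 exactly, so for example $E[\varepsilon^2] = \sigma^2$ is recovered to machine precision; nonlinearities in $\varepsilon$ beyond that are approximated.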
We will use a linear reference model $\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$ to construct a series of $z^p$ functions that help improve the accuracy of the proposed solution.
We can define functions $z^p, Z^p$ by

$z^p(x_{t-1}, \varepsilon) \equiv H \begin{bmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x(x_{t-1}, \varepsilon)) \end{bmatrix} + \psi_\varepsilon \varepsilon_t + \psi_c$

$Z^p(x_{t-1}) \equiv E[z^p(x_{t-1}, \varepsilon)]$
• Algorithm loop begins here. Define conditional expectations paths for $x_t, z_t$:

$x_{t+k+1} = X(x_{t+k}), \quad z_{t+k+1} = Z(x_{t+k}) \quad \forall k \ge 0$
Using the $z^p(x_{t-1}, \varepsilon)$ series:
• We get expressions for $x_t, E[x_{t+1}]$ consistent with $\mathcal{L}$ and the conditional expectations path:

$X_t = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi Z(x_{t+\nu})$

$E[X_{t+1}] = B X_t + (I - F)^{-1} \phi \psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1} \phi Z(x_{t+\nu}) \quad \forall t, k \ge 0$
• Use the model equations $M(x_{t-1}, x_t, E[x_{t+1}], \varepsilon) = 0$ and $x_t(x_{t-1}, \varepsilon_t) = X_t(x_{t-1}, \varepsilon_t)$ to determine $x^{p\prime}(x_{t-1}, \varepsilon)$ and $z^{p\prime}(x_{t-1}, \varepsilon)$.

• Set $x^p(x_{t-1}, \varepsilon) = x^{p\prime}(x_{t-1}, \varepsilon)$ and $z^p(x_{t-1}, \varepsilon) = z^{p\prime}(x_{t-1}, \varepsilon)$; repeat the loop.
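In practice the infinite sums are truncated at $K$ terms, which is where the truncation error discussed earlier enters. A scalar sketch (treating $F$ and $\phi$ as scalars with a constant $Z$ path; the function name is illustrative) shows the truncated tail approaching the $(I - F)^{-1}$ limit as $K$ grows:

```python
def truncated_tail(F, phi, Z_path, K):
    """sum_{nu=0}^{K} F^nu * phi * Z_path[nu], the truncated series tail."""
    return sum((F ** nu) * phi * Z_path[nu] for nu in range(K + 1))

# With a constant path Z_path = [Z, Z, ...] the tail is a geometric sum that
# converges to phi * Z / (1 - F), the scalar analogue of (I - F)^{-1} * phi * Z.
```

For example, with $F = 0.5$, $\phi = 1$, and $Z \equiv 1$, the tail at $K = 10$ equals $2(1 - 0.5^{11})$, already within $10^{-3}$ of the limit 2; a stable reference model ($|F| < 1$) makes the truncation error shrink geometrically in $K$.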
5.3.2 Multiple Equation System Case
An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more algorithmic detail, see Appendix C.
$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: The linear reference model

$x^p(x_{t-1}, \varepsilon_t)$: The proposed decision rule, $x_t$ components

$z^p(x_{t-1}, \varepsilon_t)$: The proposed decision rule, $z_t$ components

$X^p(x_{t-1})$: The proposed decision rule conditional expectations, $x_t$ components

$Z^p(x_{t-1})$: The proposed decision rule conditional expectations, $z_t$ components

$\kappa$: One less than the number of terms in the series representation ($\kappa \ge 0$)

$S$: Smolyak anisotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$

$T$: Model evaluation points, a Niederreiter sequence in the model variable ergodic region

$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, collects the decision rule components

$G$: Collects the decision rule conditional expectations components

$\mathcal{H}$: $\{\gamma, G\}$, collects decision rule information

$\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c)$: The equation system, $\mathbb{R}^{3N_x + N_\varepsilon} \rightarrow \mathbb{R}^{N_x}$

Table 1: Algorithm Notation
The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points; the parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp: Symbolically generates functions that return a unique $x_t$ for any value of $(x_{t-1}, \varepsilon_t)$ for the family of model equations $\{C_1, \ldots, C_M\}, d(\cdot)|(b, c)$, and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs: Can use Equation (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function; each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$; the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t[x_{t+1}], \varepsilon_t)$ for the model system equations $M$. This routine provides the information for constructing $x_t(x_{t-1}, \varepsilon_t)$ using the series expression (2).
makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
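The interpolation step can be pictured with one-dimensional stand-ins for the Smolyak machinery: solve the model at collocation points (the step that parallelizes naturally), then interpolate through the resulting data. This is a hedged sketch; `chebyshev_points`, `make_interp_func`, and the Lagrange form are illustrative choices, not the paper's implementation.

```python
import math

def chebyshev_points(n, lo, hi):
    """n Chebyshev collocation points mapped onto [lo, hi]."""
    return [0.5 * (lo + hi)
            + 0.5 * (hi - lo) * math.cos(math.pi * (2 * i + 1) / (2 * n))
            for i in range(n)]

def make_interp_func(solve_at, pts):
    """Solve the model at each collocation point, return an interpolant."""
    data = [solve_at(x) for x in pts]   # independent solves: parallelizable
    def interp(x):
        total = 0.0                     # Lagrange interpolation through data
        for i, (xi, yi) in enumerate(zip(pts, data)):
            w = 1.0
            for j, xj in enumerate(pts):
                if j != i:
                    w *= (x - xj) / (xi - xj)
            total += yi * w
        return total
    return interp
```

Because the per-point solves are independent, this is exactly the place where the Mathematica implementation dispatches work in parallel; the interpolation itself is a cheap serial pass over the collected data.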
5.4 Approximating a Known Solution: $\eta = 1$, $U(c) = \log(c)$

This subsection uses the series representation to compute the solution for a case where the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of $k_t$ and $\theta_t$ resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11: The Ergodic Values for $k_t, \theta_t$. Left panel: the simulated ergodic values of $k_t$ and $\theta_t$; right panel: the same points in the transformed coordinates.]
$X = \begin{bmatrix} 0.18853 \\ 1.00466 \\ 0 \end{bmatrix} \qquad \sigma_X = \begin{bmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{bmatrix} \qquad V = \begin{bmatrix} -0.707107 & -0.707107 & 0 \\ -0.707107 & 0.707107 & 0 \\ 0 & 0 & 1 \end{bmatrix}$
$\max U = \begin{bmatrix} 3.02524 \\ 0.199124 \\ 3 \end{bmatrix} \qquad \min U = \begin{bmatrix} -3.16879 \\ -0.17629 \\ -3 \end{bmatrix}$
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Each of the estimated errors was computed with $K = 5$. Table 10 shows the effect of increasing the value of $K$. Generally, increasing $K$ increases the accuracy of the solution. The value of $K$ has more effect when the Smolyak grid is more densely populated with collocation points.
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2: RBC Known Solution, Model Evaluation Points, d=(1,1,1)
c1 k1 θ1
c30 k30 θ30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3: RBC Known Solution, Values at Evaluation Points, d=(1,1,1)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4: RBC Known Solution, Actual Errors, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5: RBC Known Solution, Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6: RBC Known Solution, Model Evaluation Points, d=(2,2,2)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7: RBC Known Solution, Values at Evaluation Points, d=(2,2,2)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution, Actual Errors, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution, Error Approximations, d=(2,2,2)
K         | d=(1,1,1) | d=(2,2,2) | d=(3,3,3)
NonSeries |  -6.51367 | -12.2771  | -16.8947
0         |  -6.0793  |  -6.26833 |  -6.29843
1         |  -6.00805 |  -6.24202 |  -6.49835
2         |  -6.52219 |  -7.43876 |  -7.04785
3         |  -6.57578 |  -8.03864 |  -8.32589
4         |  -6.62831 |  -9.1522  |  -9.49314
5         |  -6.77305 | -10.5516  | -10.4247
6         |  -6.38499 | -11.6399  | -11.455
7         |  -6.63849 | -11.3627  | -13.0213
8         |  -6.38735 | -11.7288  | -13.9976
9         |  -6.46759 | -11.9826  | -15.3372
10        |  -6.45966 | -12.1345  | -15.4157
10        |  -6.77776 | -11.5025  | -15.8211
15        |  -6.67778 | -12.4593  | -15.8899
20        |  -6.40802 | -11.7296  | -17.1652
25        |  -6.53265 | -12.1441  | -17.1518
30        |  -6.81949 | -11.6097  | -16.3748
35        |  -6.33508 | -12.1446  | -16.6963
40        |  -6.52442 | -11.4042  | -15.6302
Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: $\eta = 3$

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1)
c1 k1 θ1
c30 k30 θ30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
k1 θ1 ε1
k30 θ30 ε30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$I_t \ge \upsilon I_{ss}$

$\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$

$c_t + k_t = (1 - d) k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$

$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1 - \eta}$

$\mathcal{L} = u(c_t) + E_t \left[ \sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) \right] + \sum_{t=0}^{\infty} \delta^{t} \lambda_t \big( c_t + k_t - ((1 - d) k_{t-1} + \theta_t f(k_{t-1})) \big) + \delta^{t} \mu_t \big( (k_t - (1 - d) k_{t-1}) - \upsilon I_{ss} \big)$

so that we can impose

$I_t = (k_t - (1 - d) k_{t-1}) \ge \upsilon I_{ss}$

The first order conditions become

$\frac{1}{c_t^{\eta}} + \lambda_t$

$\lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha - 1} - \delta (1 - d) \lambda_{t+1} + \mu_t - \delta (1 - d) \mu_{t+1}$
For example, the Euler equations for the neoclassical growth model can be written as:

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If $\mu_t > 0$ and $(k_t - (1 - d) k_{t-1} - \upsilon I_{ss}) = 0$:

$\lambda_t - \frac{1}{c_t}$
$c_t + I_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - (\alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d) + \mu_{t+1} + \delta (1 - d) \mu_{t+1})$
$I_t - (k_t - (1 - d) k_{t-1})$
$\mu_t (k_t - (1 - d) k_{t-1} - \upsilon I_{ss})$

If $\mu_t = 0$ and $(k_t - (1 - d) k_{t-1} - \upsilon I_{ss}) \ge 0$:

$\lambda_t - \frac{1}{c_t}$
$c_t + I_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - (\alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d) + \mu_{t+1} + \delta (1 - d) \mu_{t+1})$
$I_t - (k_t - (1 - d) k_{t-1})$
$\mu_t (k_t - (1 - d) k_{t-1} - \upsilon I_{ss})$
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
k1 θ1 ε1
k30 θ30 ε30
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17: Occasionally Binding Constraints Model, Evaluation Points, d=(1,1,1)
c1 I1 k1
c30 I30 k30
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19: Occasionally Binding Constraints Model, Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20: Occasionally Binding Constraints Model, Evaluation Points, d=(2,2,2)
k1 θ1 ε1
k30 θ30 ε30
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(2,2,2)
ln(|ê_{c,i}|) ln(|ê_{k,i}|) ln(|ê_{θ,i}|), i = 1, ..., 30

-4.3713 -4.37518 -4.37518
-4.80064 -4.79951 -4.79951
-4.51635 -4.51745 -4.51745
-4.97684 -4.97675 -4.97675
-4.76238 -4.76248 -4.76248
-5.20213 -5.19772 -5.19772
-4.42504 -4.42526 -4.42526
-4.74432 -4.74585 -4.74585
-5.81449 -5.81175 -5.81175
-5.44103 -5.4409 -5.4409
-3.41118 -3.41245 -3.41245
-2.35772 -2.35772 -2.35772
-4.10566 -4.10512 -4.10512
-7.55599 -7.55774 -7.55774
-4.76462 -4.76603 -4.76603
-3.88786 -3.88797 -3.88797
-4.30233 -4.30326 -4.30326
-5.00949 -5.00948 -5.00948
-2.04364 -2.04336 -2.04336
-5.59945 -5.60358 -5.60358
-5.12234 -5.12392 -5.12392
-5.73503 -5.7328 -5.7328
-4.80694 -4.80739 -4.80739
-7.06764 -7.06949 -7.06949
-5.20027 -5.196 -5.196
-3.4838 -3.48367 -3.48367
-3.61222 -3.61225 -3.61225
-4.37205 -4.3713 -4.3713
-4.80664 -4.80713 -4.80713
-4.63986 -4.63587 -4.63587

Table 22: Occasionally Binding Constraints: Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-1311, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections: Parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbc/d69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x^*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)],

then with

E_t x^*_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, ε_t) = 0  ∀ (x_{t-1}, ε_t).

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z^*_t(x_{t-1}, ε):

x^*_t(x_{t-1}, ε_t) ∈ R^L,  ‖x^*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0

z^*_t(x_{t-1}, ε_t) ≡ H_{-1} x^*_{t-1}(x_{t-1}, ε_t) + H_0 x^*_t(x_{t-1}, ε_t) + H_1 x^*_{t+1}(x_{t-1}, ε_t).

Consequently, the exact solution has a representation given by

x^*_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^*_{t+ν}(x_{t-1}, ε)

and

E[x^*_{t+1}(x_{t-1}, ε)] = B x^*_t + Σ_{ν=0}^∞ F^ν φ E[z^*_{t+1+ν}(x_{t-1}, ε)] + (I - F)^{-1} φ ψ_c

with

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, ε_t) = 0  ∀ (x_{t-1}, ε_t).

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)] so that

E_t x_{t+1} = G^p(g^p(x_{t-1}, ε_t)),  e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, ε):[12]

x^p_t(x_{t-1}, ε_t) ∈ R^L,  ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0

z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_1 x^p_{t+1}(x_{t-1}, ε_t).

The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I - F)^{-1} φ ψ_c

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Differencing the two representations,

x^*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z^*_{t+ν}(x_{t-1}, ε) - z^p_{t+ν}(x_{t-1}, ε)).

Defining

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z^*_{t+ν}(x_{t-1}, ε) - z^p_{t+ν}(x_{t-1}, ε),

x^*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x^*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞.

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.[13] The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

[12] The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
[13] Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for the Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz^*_t‖ ≤ max_{x_-, ε} ‖φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2

‖x^*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I - F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2.

Since

z^*_t = H_- x^*_{t-1} + H_0 x^*_t + H_+ x^*_{t+1}
z^p_t = H_- x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1}
0 = M(x_{t-1}, x^*_t, x^*_{t+1}, ε_t)
e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t),

we have

max_{x_{t-1}, ε_t} (‖φ Δz^*_t‖_2)² = max_{x_{t-1}, ε_t} (‖φ(H_0(x^p_t - x^*_t) + H_+(x^p_{t+1} - x^*_{t+1}))‖_2)²

with

M(x_{t-1}, x^*_t, x^*_{t+1}, ε_t) = 0.

A first-order expansion of the model equations gives

Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x^*_t - x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x^*_{t+1} - x^p_{t+1}).

When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular, we can write

(x^*_t - x^p_t) ≈ (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) - (∂M/∂x_{t+1})(x^*_{t+1} - x^p_{t+1})).

Collecting results,

(‖φ Δz^*_t‖_2)² = (‖φ(H_0 (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) - (∂M/∂x_{t+1})(x^*_{t+1} - x^p_{t+1})) + H_+(x^p_{t+1} - x^*_{t+1}))‖_2)²

= (‖φ(H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) - (H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) - H_+)(x^*_{t+1} - x^p_{t+1}))‖_2)².
Approximating the derivatives using the H matrices,

∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,  ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+,

the terms involving (x^*_{t+1} - x^p_{t+1}) cancel, and the expression reduces to

max_{x_{t-1}, ε_t} ‖φ Δe^p_t(x_{t-1}, ε)‖_2.
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j).

If we know the current state, we can use the known conditional expectation function for the state:

[E_t(x_{t+1}(x_t) | s_t = 1); ...; E_t(x_{t+1}(x_t) | s_t = N)].

For the next state, we can compute expectations if the errors are independent:

[E_t(x_{t+2}(x_t) | s_t = 1); ...; E_t(x_{t+2}(x_t) | s_t = N)] = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); ...; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)].
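Under constant transition probabilities, the one-step expectation is just a probability-weighted average over next-period regimes. A minimal numerical sketch (not from the paper; the transition matrix and per-state linear rules are hypothetical stand-ins):

```python
import numpy as np

# Sketch: iterate conditional expectations under a 2-state Markov chain
# with constant transition probabilities p_ij. Each state j has a
# hypothetical linear rule E[x_{t+1} | x_t, s_{t+1}=j] = A[j] @ x_t + b[j].
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # row i: current state, column j: next state
A = [np.array([[0.5]]), np.array([[0.7]])]
b = [np.array([0.1]), np.array([0.0])]

def step_expectation(x, i):
    """E_t[x_{t+1}] given x_t = x and current state s_t = i:
    probability-weighted average over next-period states."""
    return sum(P[i, j] * (A[j] @ x + b[j]) for j in range(2))

x0 = np.array([1.0])
Ex1 = step_expectation(x0, 0)
```

Iterating `step_expectation` along a path, weighted by the chain's transition probabilities, reproduces the stacked (P ⊗ I) computation above one step at a time.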
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1, with

Prob(s_t = j | s_{t-1} = i) = p_ij.
if (s_t = 0 ∧ μ_t > 0 ∧ k_t - (1 - d_0)k_{t-1} - υI_ss = 0):

λ_t - 1/c_t
c_t + k_t - θ_t k^α_{t-1}
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 - d_0) + μ_{t+1} δ(1 - d_0))
I_t - (k_t - (1 - d_0)k_{t-1})
μ_t (k_t - (1 - d_0)k_{t-1} - υ I_ss)

if (s_t = 0 ∧ μ_t = 0 ∧ k_t - (1 - d_0)k_{t-1} - υI_ss ≥ 0):

λ_t - 1/c_t
c_t + k_t - θ_t k^α_{t-1}
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 - d_0) + μ_{t+1} δ(1 - d_0))
I_t - (k_t - (1 - d_0)k_{t-1})
μ_t (k_t - (1 - d_0)k_{t-1} - υ I_ss)

if (s_t = 1 ∧ μ_t > 0 ∧ k_t - (1 - d_1)k_{t-1} - υI_ss = 0):

λ_t - 1/c_t
c_t + k_t - θ_t k^α_{t-1}
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 - d_1) + μ_{t+1} δ(1 - d_1))
I_t - (k_t - (1 - d_1)k_{t-1})
μ_t (k_t - (1 - d_1)k_{t-1} - υ I_ss)
if (s_t = 1 ∧ μ_t = 0 ∧ k_t - (1 - d_1)k_{t-1} - υI_ss ≥ 0):

λ_t - 1/c_t
c_t + k_t - θ_t k^α_{t-1}
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 - d_1) + μ_{t+1} δ(1 - d_1))
I_t - (k_t - (1 - d_1)k_{t-1})
μ_t (k_t - (1 - d_1)k_{t-1} - υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points: Niederreiter sequence in the model variable ergodic region
γ = {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}: collects the decision rule components
G = {X(x_{t-1}), Z(x_{t-1})}: collects the decision rule conditional expectations components
H = {γ, G}: collects decision rule information
{C_1, ..., C_M} = {d(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c)}: the equation system, R^{3N_x+N_ε} → R^{N_x}

input: (L, H_0, κ, {C_1, ..., C_M}, S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option "normConvTol" (default 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, ..., H_c}
Algorithm 1: nestInterp
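The outer loop of Algorithm 1 can be sketched in a few lines. This is a stand-in, not the Mathematica prototype: `do_interp` below is a hypothetical contraction used only to exercise the convergence test at fixed evaluation points.

```python
import numpy as np

# Sketch of the nestInterp outer loop: repeatedly refine an approximate
# decision rule until its values at a fixed set of test points change by
# less than a tolerance (the paper's "normConvTol" default is 1e-10).
norm_conv_tol = 1e-10
test_points = np.linspace(0.1, 1.0, 7)

def do_interp(rule):
    # placeholder refinement step: average the current rule with a target
    # map (here sqrt); stands in for the collocation/interpolation step
    return lambda x, f=rule: 0.5 * (f(x) + np.sqrt(x))

rule = lambda x: x                     # H_0: initial rule from linear model
vals = rule(test_points)
for _ in range(200):
    rule = do_interp(rule)
    new_vals = rule(test_points)
    if np.max(np.abs(new_vals - vals)) < norm_conv_tol:  # notConvergedAt
        break
    vals = new_vals
```

The design point is that convergence is judged at the evaluation points T, not by comparing function objects, matching `NestWhile` with `notConvergedAt` above.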
input: (L, H_i, κ, {C_1, ..., C_M}, S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M})
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp

input: (L, H, κ, {C_1, ..., C_M}, S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
  for i = 1; i < κ - 1; i++ do
3     {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
4 end
5 {(X_{t+1}, Z_{t+1}), ..., (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, ..., Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [B x + (I - F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x}]
3 G(x) = [B x + (I - F)^{-1} φ ψ_c ; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs

input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ-1}})
  xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
  xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
  fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_{t+1} = B x_t + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
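fSumC's partial sum can be evaluated directly; the matrices F, φ and the Z vectors below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Sketch of fSumC: accumulate S = sum_{nu=1}^{k} F^(nu-1) phi Z_{t+nu}
# for a short list of future Z vectors.
F = np.array([[0.5, 0.0],
              [0.1, 0.2]])
phi = np.eye(2)

def f_sum_c(F, phi, Zs):
    S = np.zeros(F.shape[0])
    Fp = np.eye(F.shape[0])          # F^0
    for Z in Zs:                     # Z_{t+1}, Z_{t+2}, ...
        S += Fp @ (phi @ Z)
        Fp = Fp @ F                  # advance to the next power of F
    return S

Zs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
S = f_sum_c(F, phi, Zs)              # = phi @ Z1 + F @ phi @ Z2
```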
input: ({C_1, ..., C_M}, S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, ..., C_M}, S); {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs

input: ({C_1, ..., C_M}, S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
output: {γ, G}
Algorithm 12: genInterpData

input: (C(x_{t-1}, ε_t))
1 Evaluate the model equations obeying pre and post conditions.
output: x_t, or "failed"
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
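A one-dimensional stand-in for this step, using Chebyshev collocation in place of the paper's multidimensional Smolyak grid (the target function exp is just a test case, not one of the model equations):

```python
import numpy as np

# 1-D sketch of building an interpolating function from values at
# collocation points; the paper's prototype uses Smolyak grids in
# several dimensions, of which this is the one-dimensional building block.
deg = 8
# Chebyshev collocation points on [-1, 1]
nodes = np.cos((2 * np.arange(deg + 1) + 1) * np.pi / (2 * (deg + 1)))
vals = np.exp(nodes)                               # "interpolation data"
coeffs = np.polynomial.chebyshev.chebfit(nodes, vals, deg)

approx = np.polynomial.chebyshev.chebval(0.3, coeffs)
```

With deg + 1 nodes and a degree-deg fit, `chebfit` interpolates the data exactly, and evaluation off the nodes is accurate to roughly the truncation error of the Chebyshev series.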
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
[Figure omitted: "Truncation Error Bound and Actual Error".]

Figure 2: x_t Error Approximation Versus Actual Error
2.3.2 Path Error

We can assess the impact of perturbed values for z_t by computing the maximum discrepancy ‖Δz_t‖ impacting the z_t and applying the partial sum formula. One can approximate x_t using

x̃_t ≡ B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{s=0}^∞ F^s φ (z_{t+s} + Δz_{t+s}).

Note yet again that for approximating x(x_{t-1}, ε), the impact of a given realization along the path declines for those realizations which are more temporally distant. However, we can conservatively bound the series approximation errors by using the largest possible Δz_t in the formula:

‖x(x_{t-1}, ε) - x̃(x_{t-1}, ε, k)‖_∞ ≤ ‖(I - F)^{-1} φ ψ_z‖_∞ ‖Δz_t‖_∞.
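Numerically, the bound is a single matrix norm times the largest perturbation. A sketch with illustrative matrices (not values from the paper):

```python
import numpy as np

# Sketch: evaluate the conservative path-error bound
# ||(I - F)^{-1} phi psi_z||_inf * ||dz||_inf.
F = np.array([[0.0, 0.3],
              [0.0, 0.2]])
phi = np.eye(2)
psi_z = np.eye(2)
dz_max = 1e-4                       # largest perturbation of any z_t

M = np.linalg.inv(np.eye(2) - F) @ phi @ psi_z
bound = np.linalg.norm(M, np.inf) * dz_max   # inf-norm: max abs row sum
```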
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).[6] As a result, we can use the family of conditional expectations paths, along with a contrived linear reference model, to generate a series representation for the model solutions and to approximate errors.

So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that give us an approximation for the model solutions. This section presents a familiar RBC model as an example.[7]
3.1 An RBC Example

Consider the model described in (Maliar and Maliar, 2005):[8]

max u(c_t) + E_t Σ_{t=1}^∞ δ^t u(c_{t+1})

c_t + k_t = (1 - d)k_{t-1} + θ_t f(k_{t-1})
f(k) = k^α
u(c) = (c^{1-η} - 1)/(1 - η)

The first order conditions for the model are

1/c_t^η = α δ k_t^{α-1} E_t(θ_{t+1}/c_{t+1}^η)
c_t + k_t = θ_t k_{t-1}^α
θ_t = θ_{t-1}^ρ e^{ε_t}

It is well known that when η = d = 1 we have a closed form solution (Lettau, 2003):

x_t(x_{t-1}, ε_t) ≡ D(x_{t-1}, ε_t) ≡ [c_t(x_{t-1}, ε_t); k_t(x_{t-1}, ε_t); θ_t(x_{t-1}, ε_t)] = [(1 - αδ) θ_t k_{t-1}^α; αδ θ_t k_{t-1}^α; θ_{t-1}^ρ e^{ε_t}]

[6] Economists have long used similar manipulation of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter, 2000; Koop et al., 1996).
[7] Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.
[8] Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
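The closed-form rule can be coded directly; the parameters and initial conditions below are the ones used in equation (4):

```python
import numpy as np

# Closed-form decision rule for the eta = d = 1 case (Lettau 2003):
# c_t = (1 - alpha*delta) theta_t k_{t-1}^alpha,
# k_t = alpha*delta theta_t k_{t-1}^alpha,
# theta_t = theta_{t-1}^rho exp(eps_t).
alpha, delta, rho = 0.36, 0.95, 0.95

def decision_rule(k_prev, theta_prev, eps):
    theta = theta_prev ** rho * np.exp(eps)
    y = theta * k_prev ** alpha        # output theta_t * k_{t-1}^alpha
    return (1.0 - alpha * delta) * y, alpha * delta * y, theta

# initial conditions from equation (4)
c, k, theta = decision_rule(0.18, 1.1, 0.01)
```

With d = 1 the resource constraint c_t + k_t = θ_t k_{t-1}^α holds exactly along the rule, which makes it a convenient check on any numerical approximation.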
For mean zero iid ε_t, we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

D̄(x_{t+k}) ≡ [E_t(c_{t+k+1} | θ_{t+k}, k_{t+k}); E_t(k_{t+k+1} | θ_{t+k}, k_{t+k}); E_t(θ_{t+k+1} | θ_{t+k}, k_{t+k})] = [(1 - αδ) k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ; αδ k_{t+k}^α e^{σ²/2} θ_{t+k}^ρ; e^{σ²/2} θ_{t+k}^ρ]

For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule (D) and decision rule conditional expectation (D̄) can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t):

x_t(x_{t-1}, ε_t) = D(x_{t-1}, ε_t)
x_{t+k+1}(x_{t-1}, ε_t) = D̄(x_{t+k}(x_{t-1}, ε_t))

and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)):

z_{t+k} ≡ H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}.

Formula 2 requires that

x(x_{t-1}, ε_t) = B [c_{t-1}; k_{t-1}; θ_{t-1}] + φ ψ_ε ε_t + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(k_{t-1}, θ_{t-1}, ε_t).
For example, using d = 1 and the following parameter values, and using the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

α → 0.36, δ → 0.95, ρ → 0.95, σ → 0.01

we have

[c_ss; k_ss; θ_ss] = [0.360408; 0.187324; 1.001]   [k_{t-1}; θ_{t-1}; ε_t] = [0.18; 1.1; 0.01]   (4)

Figure 3 shows, from top to bottom, the paths of θ_t, c_t and k_t from the initial values given in Equation 4.

By including enough terms, we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on approximation of the time t values of the state variables. The approximated magnitude for the error B_n, shown in red, is again very pessimistic compared to the actual error Z_n = ‖x_t - x̃_t‖_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine precision accurate value for the time t state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves the approximation Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter but still pessimistic bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
[Figure omitted: time paths of the model variables.]

Figure 3: Model variable values
[Figure omitted: "RBC Model Bounds versus Actual", showing B_n and Z_n.]

Figure 4: RBC Known Solution Truncation Approximate Error Versus Actual: "Almost Arbitrary" Linear Reference Model
[Figure omitted: "RBC Model Bounds versus Actual", showing B_n and Z_n.]

Figure 5: RBC Known Solution Truncation Approximate Error Versus Actual: Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h^det_{i0}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [h^det_{ij}(x_{t-1}, x_t, ε_t) h^nondet_{ij}(x_{t+1})] = 0.

This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

h^det_{10}(·) = 1/c_t^η,  h^det_{11}(·) = α δ k_t^{α-1},  h^nondet_{11}(·) = E_t(θ_{t+1}/c_{t+1}^η)
h^det_{20}(·) = c_t + k_t - θ_t k_{t-1}^α,  h^det_{21}(·) = 0
h^det_{30}(·) = ln θ_t - (ρ ln θ_{t-1} + ε_t),  h^det_{31}(·) = 0.

Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t/c_t,

substituting N_{t+1} for θ_{t+1}/c_{t+1} in the first equation, and linearizing the RBC model about the ergodic mean.
This leads to

[H_{-1} | H_0 | H_1] =
[ 0 0 0 0         | -7.6986 9.47963 0 0     | 0 0 -0.999 0 ]
[ 0 -1.05263 0 0  | 1 1 0 -0.547185         | 0 0 0 0 ]
[ 0 0 0 0         | 7.7063 0 1 -2.77463     | 0 0 0 0 ]
[ 0 0 0 -0.949953 | 0 0 0 1                 | 0 0 0 0 ]

with

ψ_ε = [0; 0; 1; 0],  ψ_z = I.

These coefficients produce a unique stable linear solution:
B =
[ 0 0.692632 0 0.342028 ]
[ 0 0.36     0 0.177771 ]
[ 0 -5.33763 0 0        ]
[ 0 0        0 0.949953 ]

φ =
[ -0.0444237 0.658    0 0.360048 ]
[ 0.0444237  0.342    0 0.187137 ]
[ 0.342342   -5.07075 1 0        ]
[ 0          0        0 1        ]

F =
[ 0 0 -0.0443793 0 ]
[ 0 0 0.0443793  0 ]
[ 0 0 0.342      0 ]
[ 0 0 0          0 ]

ψ_c = [-3.7735; -0.197184; 2.77741; 0.0500976]
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
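As a quick sanity check on the reported coefficients (a sketch; decimal placement of the B entries follows the table above), a unique stable linear solution requires the eigenvalues of B to lie inside the unit circle:

```python
import numpy as np

# Stability check for the reported solution matrix B of the augmented
# RBC system: all eigenvalues of B should lie inside the unit circle.
B = np.array([[0.0, 0.692632, 0.0, 0.342028],
              [0.0, 0.36,     0.0, 0.177771],
              [0.0, -5.33763, 0.0, 0.0],
              [0.0, 0.0,      0.0, 0.949953]])
spectral_radius = max(abs(np.linalg.eigvals(B)))
```

The two zero columns contribute zero eigenvalues, so the spectral radius is governed by the (k, θ)-block, whose largest root matches the ρ-like entry 0.949953.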
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.[9]
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x^*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)],

then with

E_t x^*_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, ε_t) = 0  ∀ (x_{t-1}, ε_t)

x^*_t(x_{t-1}, ε_t) ∈ R^L,  ‖x^*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0.

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)] so that

E_t x_{t+1} = G^p(g^p(x_{t-1}, ε_t)),  e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Then

‖x^*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I - F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2,

which can be approximated by

max_{x_{t-1}, ε_t} ‖φ e^p_t(x_{t-1}, ε)‖_2.
For proof, see Appendix A.

[9] This work contrasts with the analysis provided in (Judd et al., 2017a; Peralta-Alva and Santos, 2014; Santos and Peralta-Alva, 2005; Santos, 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):
e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) − x^p(x_{t-1}, ε_t)
Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψε, ψc, B, φ, F} in (2) to approximate e:
X^p(x) ≡ E[x^p(x, ε)]

x_{t+k+1} = X^p(x_{t+k}),  k = 1, …, K
We can define functions z_t^p by

z_t^p ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t)

z_{t+k}^p ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)

and then

e(x_{t-1}, ε_t) ≈ Σ_{ν=0}^{K} F^ν φ z_{t+ν}^p
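The truncated series can be evaluated directly. A sketch, assuming `z_path` holds the residuals z^p_{t+ν}, ν = 0, …, K, already evaluated along the conditional expectations path:

```python
import numpy as np

def local_error_approx(F, phi, z_path):
    """Truncated-series error approximation: sum over nu of F^nu phi z_{t+nu},
    where z_path[nu] is the model-equation residual z^p_{t+nu} computed along
    the conditional-expectations path."""
    n = F.shape[0]
    err = np.zeros(n)
    F_pow = np.eye(n)              # F^0
    for z in z_path:
        err += F_pow @ (phi @ z)   # accumulate F^nu phi z_{t+nu}
        F_pow = F_pow @ F          # advance to F^{nu+1}
    return err
```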
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.
I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015
producing the following 10 model specifications:

α        δ         η   ρ         σ
0.35     0.92875   1   0.957188  0.00875
0.275    0.9725    1   0.906875  0.01375
0.375    0.8675    1   0.924375  0.01125
0.225    0.94625   1   0.891563  0.00625
0.325    0.91125   1   0.944063  0.006875
0.2625   0.922187  1   0.98125   0.011875
0.3625   0.887187  1   0.85875   0.014375
0.2125   0.965938  1   0.965938  0.009375
0.3125   0.860938  1   0.878438  0.008125
0.2375   0.904688  1   0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.
I generate 30 representative points (k_{t-1}, θ_{t-1}, ε_t) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i
Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² and estimated variance values, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.
10 I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since θ_t is determined by a backward-looking equation.
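The validation regressions above can be reproduced with ordinary least squares. This is an illustrative helper, not the paper's code; `actual` and `predicted` are assumed to be arrays of errors across the evaluation points:

```python
import numpy as np

def error_formula_regression(actual, predicted):
    """Regress actual errors on formula-predicted errors, e_i = a + b ehat_i + u_i,
    returning (a, b, R^2); b near one and a near zero indicate that the error
    formula tracks the true error well."""
    X = np.column_stack([np.ones_like(predicted), predicted])
    coef, *_ = np.linalg.lstsq(X, actual, rcond=None)
    fitted = X @ coef
    ss_res = np.sum((actual - fitted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return coef[0], coef[1], 1.0 - ss_res / ss_tot
```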
[Four scatter plots of (σ², R²) pairs from the c_t error regressions, one panel for each of K = 0, 1, 10, 30.]

Figure 6: c_t (σ², R²)
[Four scatter plots of (σ², R²) pairs from the k_t error regressions, one panel for each of K = 0, 1, 10, 30.]

Figure 7: k_t (σ², R²)
[Four scatter plots of (α, β) regression coefficient pairs from the c_t error regressions, one panel for each of K = 0, 1, 10, 30.]

Figure 8: c_t (α, β)
[Four scatter plots of (α, β) regression coefficient pairs from the k_t error regressions, one panel for each of K = 0, 1, 10, 30.]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time-invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c)

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))

represents a set of model equations M_i with Boolean-valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus, we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
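The gate structure can be sketched as a dispatch over (b_i, M_i, c_i) triples. This is an illustrative Python outline, not the paper's Mathematica implementation; `systems` holds hypothetical (precondition, solver, post-condition) callables, and for brevity the selector d simply returns the first acceptable candidate:

```python
def solve_gated_system(systems, x_lag, eps):
    """Dispatch over gated equation systems (b_i, solve_i, c_i): skip system i
    when its precondition b_i is false, solve otherwise, and keep the candidate
    x_t only when the post-condition c_i accepts it."""
    candidates = []
    for pre, solve, post in systems:
        if not pre(x_lag, eps):
            continue                      # gate b_i false: do not attempt system i
        x_t = solve(x_lag, eps)
        if post(x_lag, eps, x_t):         # gate c_i: acceptable solution
            candidates.append(x_t)
    if not candidates:
        raise ValueError("no gated system produced an acceptable x_t")
    return candidates[0]                  # stand-in for the selector d
```

Because each (b_i, solve_i, c_i) evaluation is independent, the loop body is the natural unit of parallel work, which is the point made in the text about pre- and post-conditions aiding parallelization.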
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations, based on (2), linking the conditional expectations of future values in the proposed solution with the time t values
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use anisotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.
There are a number of new steps. One must specify a linear reference model. One must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error; more terms yields more accuracy when the decision rule is accurate, but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution. If the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t)
Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t)

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)]

We will use a linear reference model L ≡ {H, ψε, ψc, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.
We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H (x_{t-1}, x^p(x_{t-1}, ε), X^p(x^p(x_{t-1}, ε)))′ + ψε ε_t + ψc

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)]
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0

Using the z^p(x_{t-1}, ε) series:

• We get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t-1} + φψε ε + (I − F)^{-1} φψc + Σ_{ν=0}^{∞} F^ν φ Z(x_{t+ν})

E[X_{t+1}] = B X_t + (I − F)^{-1} φψc + Σ_{ν=1}^{∞} F^{ν-1} φ Z(x_{t+ν})  ∀t, k ≥ 0

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε). Repeat loop.
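The outer loop can be sketched as a fixed-point iteration on the decision rule, with the collocation solve and interpolation abstracted into a single `update` map (a hypothetical stand-in for the steps above):

```python
def iterate_decision_rule(update, x_rule0, test_points, tol=1e-8, max_iter=50):
    """Outer loop of the algorithm: repeatedly map the current proposed
    decision rule to an updated one (collocation solve plus interpolation,
    abstracted as `update`) until values at test points stop changing."""
    rule = x_rule0
    for _ in range(max_iter):
        new_rule = update(rule)
        # convergence check: largest change at the test points
        gap = max(abs(new_rule(p) - rule(p)) for p in test_points)
        rule = new_rule
        if gap < tol:
            break
    return rule
```

The termination test mirrors the paper's nestInterp routine, which stops when the decision rule evaluated at the test points no longer changes much.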
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation. Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψε, ψc, B, φ, F} : The linear reference model
x^p(x_{t-1}, ε_t) : The proposed decision rule, x_t components
z^p(x_{t-1}, ε_t) : The proposed decision rule, z_t components
X^p(x_{t-1}) : The proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}) : The proposed decision rule conditional expectations, z_t components
κ : One less than the number of terms in the series representation (κ ≥ 0)
S : Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T : Model evaluation points: a Niederreiter sequence in the model variable ergodic region
γ ≡ {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} : Collects the decision rule components
G ≡ {X(x_{t-1}), Z(x_{t-1})} : Collects the decision rule conditional expectation components
H ≡ {γ, G} : Collects decision rule information
{C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c) : The equation system, R^{3Nx+Nε} → R^{Nx}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
[Call tree relating nestInterp, doInterp, genFindRootFuncs, genFindRootWorker, genZsForFindRoot, genLilXkZkFunc, fSumC, genXtOfXtm1, genXtp1OfXt, makeInterpFuncs, genInterpData, evaluateTriple, and interpDataToFunc.]

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}]) | (b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs: Can use (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each triple returns the Boolean gate values and a unique value of x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).

makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
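A stylized one-dimensional analogue of the makeInterpFuncs step, assuming a scalar state; a full implementation would solve at Smolyak collocation points and fit the anisotropic polynomial basis rather than a 1-D polynomial, and `solve_at` is a hypothetical stand-in for the collocation solver:

```python
import numpy as np

def make_interp_func(solve_at, grid_points, degree):
    """Solve the model equations at each collocation point, then fit a
    polynomial interpolant to the resulting (point, value) data and return
    it as a callable decision-rule approximation."""
    values = np.array([solve_at(p) for p in grid_points])  # interpolation data
    coeffs = np.polyfit(grid_points, values, degree)       # fit the basis
    return lambda x: np.polyval(coeffs, x)
```

Since each collocation solve is independent, the list comprehension is the natural place to exploit parallelism, as the text notes for the Mathematica implementation.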
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Left panel: scatter of ergodic values for k_t and θ_t. Right panel: the same points in the transformed coordinates.]

Figure 11: The Ergodic Values for k_t, θ_t
X̄ = (0.18853, 1.00466, 0)′

σX = (0.0112443, 0.0387807, 0.01)′

V =
−0.707107   −0.707107   0
−0.707107    0.707107   0
 0           0          1

maxU = (3.02524, 0.199124, 3)′

minU = (−3.16879, −0.17629, −3)′
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2: RBC Known Solution, Model Evaluation Points, d=(1,1,1)
c1 k1 θ1
c30 k30 θ30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3: RBC Known Solution, Values at Evaluation Points, d=(1,1,1)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4: RBC Known Solution, Actual Errors, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5: RBC Known Solution, Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6: RBC Known Solution, Model Evaluation Points, d=(2,2,2)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7: RBC Known Solution, Values at Evaluation Points, d=(2,2,2)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution, Actual Errors, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution, Error Approximations, d=(2,2,2)
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  −6.51367    −12.2771    −16.8947
0          −6.0793     −6.26833    −6.29843
1          −6.00805    −6.24202    −6.49835
2          −6.52219    −7.43876    −7.04785
3          −6.57578    −8.03864    −8.32589
4          −6.62831    −9.1522     −9.49314
5          −6.77305    −10.5516    −10.4247
6          −6.38499    −11.6399    −11.455
7          −6.63849    −11.3627    −13.0213
8          −6.38735    −11.7288    −13.9976
9          −6.46759    −11.9826    −15.3372
10         −6.45966    −12.1345    −15.4157
10         −6.77776    −11.5025    −15.8211
15         −6.67778    −12.4593    −15.8899
20         −6.40802    −11.7296    −17.1652
25         −6.53265    −12.1441    −17.1518
30         −6.81949    −11.6097    −16.3748
35         −6.33508    −12.1446    −16.6963
40         −6.52442    −11.4042    −15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1)
c1 k1 θ1
c30 k30 θ30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
c1 k1 θ1

c30 k30 θ30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher 2000), a host of authors have described a variety of approaches:11 (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model
I_t \ge \upsilon I_{ss}
\max u(c_t) + E_t \sum_{s=0}^{\infty} \delta^{s+1} u(c_{t+s+1})

c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}

f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}
\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^{t} \left[ \lambda_t \left( c_t + k_t - (1-d)k_{t-1} - \theta_t f(k_{t-1}) \right) + \mu_t \left( (k_t - (1-d)k_{t-1}) - \upsilon I_{ss} \right) \right]
so that we can impose
I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}
The first order conditions become:

\frac{1}{c_t^{\eta}} + \lambda_t = 0

\lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as:11

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (\mu_t > 0 and k_t - (1-d)k_{t-1} - \upsilon I_{ss} = 0):

\lambda_t - \frac{1}{c_t}
c_t + I_t - \theta_t k_{t-1}^{\alpha}
N_t - \lambda_t \theta_t
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}
\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1} \right)
I_t - (k_t - (1-d)k_{t-1})
\mu_t (k_t - (1-d)k_{t-1} - \upsilon I_{ss})
if (\mu_t = 0 and k_t - (1-d)k_{t-1} - \upsilon I_{ss} \ge 0):

\lambda_t - \frac{1}{c_t}
c_t + I_t - \theta_t k_{t-1}^{\alpha}
N_t - \lambda_t \theta_t
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}
\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1} \right)
I_t - (k_t - (1-d)k_{t-1})
\mu_t (k_t - (1-d)k_{t-1} - \upsilon I_{ss})
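The two guarded systems above implement a complementarity condition: either the multiplier is positive and the constraint binds with equality, or the multiplier is zero and the constraint holds with slack. The branch-selection logic can be sketched in Python on a deliberately simplified static analog (the quantities `y`, `c_unconstrained`, and `I_min` are illustrative stand-ins, not the paper's Mathematica prototype):

```python
def solve_with_obc(y, c_unconstrained, I_min):
    """Two-branch evaluation mirroring the guarded systems above:
    try the slack branch (mu = 0); if its guard I >= I_min fails,
    switch to the binding branch (I = I_min, mu > 0)."""
    # Branch 1: mu = 0, constraint assumed slack.
    c = c_unconstrained
    I = y - c
    mu = 0.0
    if I >= I_min:            # guard for the slack branch holds
        return c, I, mu
    # Branch 2: constraint binds, I - I_min = 0, mu adjusts.
    I = I_min
    c = y - I
    mu = c_unconstrained - c  # illustrative shadow value
    return c, I, mu
```

In the full algorithm each branch is a nonlinear system solved by `FindRoot`, and the guard (the Boolean gate) decides which solution is admissible; the sketch only shows the gating pattern.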
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
k_1  θ_1  ε_1  …  k_30  θ_30  ε_30

[Table body: 30 rows of evaluation points omitted.]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
c_1  I_1  k_1  …  c_30  I_30  k_30

[Table body: 30 rows of solution values omitted.]

Table 18: Occasionally Binding Constraints Model Values at Evaluation Points, d=(1,1,1)
ln(|ê_{c,1}|)  ln(|ê_{k,1}|)  ln(|ê_{θ,1}|)  …  ln(|ê_{c,30}|)  ln(|ê_{k,30}|)  ln(|ê_{θ,30}|)

[Table body: 30 rows of log absolute error approximations omitted.]

Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
k_1  θ_1  ε_1  …  k_30  θ_30  ε_30

[Table body: 30 rows of evaluation points omitted.]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
c_1  I_1  k_1  …  c_30  I_30  k_30

[Table body: 30 rows of solution values omitted.]

Table 21: Occasionally Binding Constraints Model Values at Evaluation Points, d=(2,2,2)
ln(|ê_{c,1}|)  ln(|ê_{k,1}|)  ln(|ê_{θ,1}|)  …  ln(|ê_{c,30}|)  ln(|ê_{k,30}|)  ln(|ê_{θ,30}|)

[Table body: 30 rows of log absolute error approximations omitted.]

Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.
Christian Haefke. Projections: parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbc/d69.html

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html
Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm And The Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x^*_t = g(x_{t-1}, \varepsilon_t), define the conditional expectations function
G(x) \equiv E\left[ g(x, \varepsilon) \right]

then with

E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))

we have

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z^*_t(x_{t-1}, \varepsilon):

\left\{ x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \| x^*_t(x_{t-1}, \varepsilon_t) \|_2 \le \bar{X} \;\; \forall t > 0 \right\}

z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t)
Consequently the exact solution has a representation given by

x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_{\varepsilon} \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^*_{t+\nu}(x_{t-1}, \varepsilon)

and

E\left[ x^*_{t+k+1}(x_{t-1}, \varepsilon) \right] = B x^*_{t+k} + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[ z^*_{t+k+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c
with

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)

Now consider a proposed solution for the model x^p_t = g^p(x_{t-1}, \varepsilon_t); define G^p(x) \equiv E\left[ g^p(x, \varepsilon) \right] so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t))

e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)
By construction the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, \varepsilon):12

\left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \| x^p_t(x_{t-1}, \varepsilon_t) \|_2 \le \bar{X} \;\; \forall t > 0 \right\}

z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)

The proposed solution has a representation given by

x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_{\varepsilon} \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^p_{t+\nu}(x_{t-1}, \varepsilon)

and

E\left[ x^p_{t+k+1}(x_{t-1}, \varepsilon) \right] = B x^p_{t+k} + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[ z^p_{t+k+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c
with

e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)

x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right)

Defining \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon),

x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)

\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \|_{\infty} \le \sum_{\nu=0}^{\infty} \| F^{\nu} \phi \| \, \| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \|_{\infty}
By bounding the largest deviation in the paths for the \Delta z^p_t, we can bound the largest difference in x^*_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the \Delta z^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for \Delta z^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
\| \Delta z^*_t \| \le \max_{x_{-}, \varepsilon} \left\| \phi M\left( x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon \right) \right\|_2

\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \|_{\infty} \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi M\left( x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon \right) \right\|_2
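The global bound above amplifies the worst model-equation residual by the linear reference model matrices. A minimal numerical sketch of that computation (the matrices F and φ and the residual vectors are illustrative stand-ins, not output of the paper's prototype):

```python
import numpy as np

def decision_rule_error_bound(F, phi, residuals):
    """Bound ||x*_t - x^p_t||_inf by max over test points of
    ||(I - F)^{-1} phi e^p||_2, where `residuals` holds the
    model-equation residuals e^p evaluated at a set of
    (x_{t-1}, eps_t) test points (one vector per point)."""
    n = F.shape[0]
    amplifier = np.linalg.solve(np.eye(n) - F, phi)   # (I - F)^{-1} phi
    return max(np.linalg.norm(amplifier @ e) for e in residuals)
```

For the bound to be finite the spectral radius of F must be below one, which holds for the stable component of the linear reference model.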
z^*_t = H_{-} x^*_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}

z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}

0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t)

e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)

\max_{x_{t-1}, \varepsilon_t} \left( \| \phi \Delta z_t \|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^*_t - x^p_t) + H_{+} (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2
with

M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0

\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})

When \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} is non-singular we can write

(x^*_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right)
Collecting results,

\left( \| \phi \Delta z_t \|_2 \right)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) + H_{+} (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2

= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_{+} \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2
Approximating the derivatives using the H matrices,

\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}

so that the (x^*_{t+1} - x^p_{t+1}) term drops out, produces

\max_{x_{t-1}, \varepsilon_t} \left\| \phi \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial x(x_{t-1}, \varepsilon_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, \varepsilon_t, x_t):
E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \mid s_{t+1} = j)

E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j)
If we know the current state, we can use the known conditional expectation function for the state:

\begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix}
For the next state, we can compute expectations if the errors are independent:

\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}
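The iterated-expectations step across regimes can be sketched numerically. The example below uses two regimes with linear one-step conditional expectation rules (the matrices A and the transition probabilities P are illustrative assumptions; with linear rules the law of iterated expectations applies exactly):

```python
import numpy as np

# Illustrative two-regime example: in regime j the one-step conditional
# expectation is E[x_{t+1} | x_t, s_{t+1} = j] = A[j] @ x_t.
A = [np.array([[0.9]]), np.array([[0.5]])]
P = np.array([[0.8, 0.2],    # p_ij = Prob(s_{t+1} = j | s_t = i)
              [0.3, 0.7]])

def one_step(i, x):
    """E_t[x_{t+1}] given current regime i: probability-weighted
    average over next-period regimes."""
    return sum(P[i, j] * (A[j] @ x) for j in range(2))

def two_step(i, x):
    """E_t[x_{t+2}] by the law of iterated expectations: weight each
    regime-j one-step expectation applied to the regime-j forecast."""
    return sum(P[i, j] * one_step(j, A[j] @ x) for j in range(2))
```

Stacking `one_step` over all current regimes reproduces the (P ⊗ I) operation in the display above.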
Consider two states s_t \in \{0, 1\} where the depreciation rates are different, d_0 > d_1:

Prob(s_t = j \mid s_{t-1} = i) = p_{ij}
if (s_t = 0 and \mu_t > 0 and k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} = 0):

\lambda_t - \frac{1}{c_t}
c_t + I_t - \theta_t k_{t-1}^{\alpha}
N_t - \lambda_t \theta_t
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}
\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta(1-d_0)\lambda_{t+1} + \delta(1-d_0)\mu_{t+1} \right)
I_t - (k_t - (1-d_0)k_{t-1})
\mu_t (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})
if (s_t = 0 and \mu_t = 0 and k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \ge 0):

\lambda_t - \frac{1}{c_t}
c_t + I_t - \theta_t k_{t-1}^{\alpha}
N_t - \lambda_t \theta_t
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}
\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta(1-d_0)\lambda_{t+1} + \delta(1-d_0)\mu_{t+1} \right)
I_t - (k_t - (1-d_0)k_{t-1})
\mu_t (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})
if (s_t = 1 and \mu_t > 0 and k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} = 0):

\lambda_t - \frac{1}{c_t}
c_t + I_t - \theta_t k_{t-1}^{\alpha}
N_t - \lambda_t \theta_t
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}
\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta(1-d_1)\lambda_{t+1} + \delta(1-d_1)\mu_{t+1} \right)
I_t - (k_t - (1-d_1)k_{t-1})
\mu_t (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})
if (s_t = 1 and \mu_t = 0 and k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \ge 0):

\lambda_t - \frac{1}{c_t}
c_t + I_t - \theta_t k_{t-1}^{\alpha}
N_t - \lambda_t \theta_t
\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}
\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta(1-d_1)\lambda_{t+1} + \delta(1-d_1)\mu_{t+1} \right)
I_t - (k_t - (1-d_1)k_{t-1})
\mu_t (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})
C Algorithm Pseudo-code

L \equiv \{H, \psi_{\varepsilon}, \psi_c, B, \phi, F\}: The linear reference model
xp(xtminus1 εt) The proposed decision rule xt components
zp(xtminus1 εt) The proposed decision rule zt components
Xp(xtminus1) The proposed decision rule conditional expectations xt components
Zp(xtminus1) The proposed decision rule their conditional expectations zt components
κ: One less than the number of terms in the series representation (κ \ge 0)

S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T: Model evaluation points; a Niederreiter sequence in the model variable ergodic region
γ x(xtminus1 εt) z(xtminus1 εt) collects the decision rule components
G x(xtminus1 εt) z(xtminus1 εt) collects the decision rule conditional expectations components
H γG collects decision rule information
C1 CMd(xtminus1 ε xt E [xt+1])|(b c) The equation system R3Nx+Nε+Nx rarr RNx
input (L, H_0, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option "normConvTol" (default 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output H_0, H_1, …, H_c
Algorithm 1 nestInterp
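The outer fixed-point loop of Algorithm 1 can be sketched in Python. Here `do_interp` stands in for the symbolic interpolation step, and the convergence test at the evaluation points mirrors the Mathematica `NestWhile` call with its `normConvTol` default (the function names and the scalar test are illustrative, not the prototype's API):

```python
def nest_interp(do_interp, h0, test_points, tol=1e-10, max_iter=200):
    """Iterate the interpolation map until decision-rule values at the
    test points stop changing (sup-norm below `tol`).  `do_interp`
    maps a candidate rule to an updated rule; rules are callables on
    (x_prev, eps) pairs returning a scalar in this sketch."""
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        gap = max(abs(h_next(x, e) - h(x, e)) for (x, e) in test_points)
        h = h_next
        if gap < tol:
            return h
    raise RuntimeError("no convergence after max_iter iterations")
```

For example, iterating the contraction h ↦ 0.5 h + 1 from the zero rule converges to the constant rule 2 to within the tolerance.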
input (L, H_i, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output H_{i+1}
Algorithm 2 doInterp
input (L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t)
2 Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output funcOfXtm1Eps
input (x^p_t, L, G, κ, M)
1 Symbolically compute the κ-1 future conditional Z vectors needed for the series formula (empty list for κ = 1)
2 X_t = x^p_t
3 for i = 1; i < κ-1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
5 end
6 Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output {Z_{t+1}, …, Z_{t+κ-1}}
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x, ε) = [ Bx + (I - F)^{-1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3 G(x, ε) = [ Bx + (I - F)^{-1}φψ_c ; 0_{N_x} ]
4 H = {γ, G}
output HAlgorithm 6 genBothX0Z0Funcs
input (L Zt+1 Zt+kminus1)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolic )inputs (xtminus1 xt Et(xt+1) εt)for model system equations M
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output fullVec
Algorithm 7 genLilXkZkFunc
input (L xtminus1 εt zt fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically) the xt component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + FfConoutput xt
Algorithm 8 genXtOfXtm1
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 x_{t+1} = B x_t + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output x_{t+1}
Algorithm 9 genXtp1OfXt
input (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Numerically sums the F contribution for the series
2 S = Σ_ν F^{ν-1} φ Z_{t+ν}
output S
Algorithm 10 fSumC
input ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points producing interpolation data and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output {γ, G}
Algorithm 11 makeInterpFuncs
input ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points producing interpolation data and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
output {γ, G}
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
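The preparation step returns collocation points, the Psi matrix of basis evaluations, the polynomials, and their integrals. A deliberately simplified one-dimensional stand-in in Python, using a Chebyshev-extrema grid and a uniform error density, shows the shape of those four outputs (this is not the anisotropic Smolyak construction of Judd et al. (2014); the function name and the uniform density are illustrative assumptions):

```python
import numpy as np

def collocation_prep(n, lo, hi):
    """One-dimensional stand-in for the preparation step: Chebyshev
    extrema mapped into [lo, hi], the Psi matrix of Chebyshev basis
    evaluations at those points, and the integral of each basis
    polynomial under a uniform density on the range."""
    j = np.arange(n)
    nodes = np.cos(np.pi * j / (n - 1))               # extrema on [-1, 1]
    x_pts = lo + (hi - lo) * (nodes + 1.0) / 2.0
    psi = np.polynomial.chebyshev.chebvander(nodes, n - 1)   # Psi matrix
    # Integral of T_k over [-1, 1] against the uniform density 1/2:
    # zero for odd k, 1/(1 - k^2) for even k.
    ints = np.zeros(n)
    even = (j % 2 == 0)
    ints[even] = 1.0 / (1.0 - j[even] ** 2)
    return x_pts, psi, ints
```

Solving the model equations at `x_pts` and regressing on `psi` yields the interpolation coefficients, while `ints` supplies the conditional expectations of the basis, which is the role the integrated polynomials play in the algorithm.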
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
3 Dynamic Stochastic Time Invariant Maps

Many dynamic stochastic models have solutions that fall in the class of bounded time invariant maps. Consequently, if we take care (see Section 3.2), we can apply the law of iterated expectations to compute conditional expected solution paths forward from any initial value and realization of (x_{t-1}, ε_t).6 As a result, we can use the family of conditional expectations paths along with a contrived linear reference model to generate a series representation for the model solutions and to approximate errors.
So long as the trajectories are bounded, the time invariant maps can be represented using the framework from Section 2. The series representation will provide a linearly weighted sum of z_t(x_{t-1}, ε_t) functions that give us an approximation for the model solutions. This section presents a familiar RBC model as an example.7
3.1 An RBC Example

Consider the model described in (Maliar and Maliar, 2005):8

\max u(c_t) + E_t \sum_{s=1}^{\infty} \delta^{s} u(c_{t+s})

c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1})

f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}
The first order conditions for the model are

\[
\frac{1}{c_t^{\eta}} = \alpha\delta k_t^{\alpha-1} E_t\!\left(\frac{\theta_{t+1}}{c_{t+1}^{\eta}}\right)
\]
\[
c_t + k_t = \theta_t k_{t-1}^{\alpha}
\]
\[
\theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}
\]
It is well known that when η = d = 1 we have a closed form solution (Lettau 2003):

\[
x_t(x_{t-1},\varepsilon_t) \equiv D(x_{t-1},\varepsilon_t) \equiv
\begin{bmatrix} c_t(x_{t-1},\varepsilon_t) \\ k_t(x_{t-1},\varepsilon_t) \\ \theta_t(x_{t-1},\varepsilon_t) \end{bmatrix}
=
\begin{bmatrix} (1-\alpha\delta)\,\theta_t k_{t-1}^{\alpha} \\ \alpha\delta\,\theta_t k_{t-1}^{\alpha} \\ \theta_{t-1}^{\rho} e^{\varepsilon_t} \end{bmatrix}
\]
6 Economists have long used similar manipulations of time series paths for dynamic models. Indeed, Potter constructs his generalized impulse response functions using differences in conditional expectations paths (Potter 2000; Koop et al. 1996).

7 Later, in order to handle models with regime switching and occasionally binding constraints, I will need to consider more complicated collections of equation systems with Boolean gates. I will show how to apply the series formulation and to approximate the errors for these models as well.

8 Here I set their β = 1 and do not discuss quasi-geometric discounting or time-inconsistency.
For mean zero i.i.d. ε_t we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

\[
\bar{D}(x_{t+k+1}) \equiv
\begin{bmatrix}
E_t(c_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \\
E_t(k_{t+k+1} \mid \theta_{t+k}, k_{t+k}) \\
E_t(\theta_{t+k+1} \mid \theta_{t+k}, k_{t+k})
\end{bmatrix}
=
\begin{bmatrix}
(1-\alpha\delta)\, e^{\sigma^2/2}\,\theta_{t+k}^{\rho} k_{t+k}^{\alpha} \\
\alpha\delta\, e^{\sigma^2/2}\,\theta_{t+k}^{\rho} k_{t+k}^{\alpha} \\
e^{\sigma^2/2}\,\theta_{t+k}^{\rho}
\end{bmatrix}
\]
For any given values of (k_{t-1}, θ_{t-1}, ε_t), the model decision rule (D) and the decision rule conditional expectation (D̄) can generate a conditional expectations path forward from any given initial (x_{t-1}, ε_t):

\[
x_t(x_{t-1},\varepsilon_t) = D(x_{t-1},\varepsilon_t)
\]
\[
x_{t+k+1}(x_{t-1},\varepsilon_t) = \bar{D}(x_{t+k}(x_{t-1},\varepsilon_t))
\]
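As an illustration, the two maps above can be sketched in a few lines of Python (a hypothetical rendering of the closed-form rule using the parameter values given in the text; the names D, Dbar, and expected_path are mine, not the paper's):

```python
import math

# Sketch (not the paper's code) of the closed-form decision rule D and its
# conditional expectation Dbar for the eta = d = 1 case, with the parameter
# values used in the text: alpha = 0.36, delta = 0.95, rho = 0.95, sigma = 0.01.
ALPHA, DELTA, RHO, SIGMA = 0.36, 0.95, 0.95, 0.01

def D(k_prev, th_prev, eps):
    """Exact decision rule: (x_{t-1}, eps_t) -> (c_t, k_t, theta_t)."""
    th = th_prev ** RHO * math.exp(eps)
    y = th * k_prev ** ALPHA
    return (1.0 - ALPHA * DELTA) * y, ALPHA * DELTA * y, th

def Dbar(k, th):
    """One-step-ahead conditional expectation; E[exp(eps)] = exp(sigma^2 / 2)."""
    m = math.exp(SIGMA ** 2 / 2.0) * th ** RHO
    y = m * k ** ALPHA
    return (1.0 - ALPHA * DELTA) * y, ALPHA * DELTA * y, m

def expected_path(k_prev, th_prev, eps, horizon):
    """Conditional expectations path from any initial (x_{t-1}, eps_t)."""
    c, k, th = D(k_prev, th_prev, eps)
    path = [(c, k, th)]
    for _ in range(horizon):
        c, k, th = Dbar(k, th)
        path.append((c, k, th))
    return path

# Initial values from Equation 4: k_{t-1} = 0.18, theta_{t-1} = 1.1, eps_t = 0.01.
path = expected_path(0.18, 1.1, 0.01, 30)
```

One application of D followed by repeated applications of D̄ produces exactly the family of conditional expectations paths the series representation is built from.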
and corresponding paths for (z_{1t}(x_{t-1}, ε_t), z_{2t}(x_{t-1}, ε_t), z_{3t}(x_{t-1}, ε_t)):

\[
z_{t+k} \equiv H_{-1} x_{t+k-1} + H_0 x_{t+k} + H_1 x_{t+k+1}
\]
Formula 2 requires that

\[
x_t(x_{t-1},\varepsilon_t) = B \begin{bmatrix} c_{t-1} \\ k_{t-1} \\ \theta_{t-1} \end{bmatrix}
+ \phi\psi_{\varepsilon}\varepsilon_t + (I-F)^{-1}\phi\psi_c
+ \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}(k_{t-1},\theta_{t-1},\varepsilon_t)
\]
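A minimal numpy sketch of evaluating this truncated series follows; the small matrices in the check are hypothetical placeholders (in the text, B, φ, F, ψ_ε, ψ_c come from the linear reference model):

```python
import numpy as np

# Minimal sketch of the truncated series representation: the matrices used in
# the check below are toy placeholders, not the RBC reference model.

def series_xt(B, phi, F, psi_eps, psi_c, x_prev, eps, z_path):
    """x_t ~ B x_{t-1} + phi psi_eps eps + (I-F)^{-1} phi psi_c
             + sum_nu F^nu phi z_{t+nu}, truncated at len(z_path) terms."""
    n = B.shape[0]
    xt = B @ x_prev + phi @ (psi_eps * eps)
    xt = xt + np.linalg.solve(np.eye(n) - F, phi @ psi_c)
    F_nu = np.eye(n)                      # F^0
    for z in z_path:
        xt = xt + F_nu @ phi @ z
        F_nu = F_nu @ F                   # advance to F^{nu+1}
    return xt

# Check with a nilpotent F (F^2 = 0), so the series terminates after two terms.
B = np.zeros((2, 2)); phi = np.eye(2)
F = np.array([[0.0, 1.0], [0.0, 0.0]])
xt = series_xt(B, phi, F, np.zeros(2), np.zeros(2), np.zeros(2), 0.0,
               [np.array([1.0, 2.0]), np.array([3.0, 4.0])])
```

With a nilpotent F the sum is exact after two terms; for a stable F the neglected tail is what the truncation-error bounds in the text control.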
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions. With

\[
\alpha \rightarrow 0.36, \quad \delta \rightarrow 0.95, \quad \rho \rightarrow 0.95, \quad \sigma \rightarrow 0.01
\]

we have

\[
\begin{bmatrix} c_{ss} \\ k_{ss} \\ \theta_{ss} \end{bmatrix} = \begin{bmatrix} 0.360408 \\ 0.187324 \\ 1.001 \end{bmatrix}
\qquad
\begin{bmatrix} k_{t-1} \\ \theta_{t-1} \\ \varepsilon_t \end{bmatrix} = \begin{bmatrix} 0.18 \\ 1.1 \\ 0.01 \end{bmatrix}
\tag{4}
\]
Figure 3 shows, from top to bottom, the paths of θ_t, c_t and k_t from the initial values given in Equation 4.

By including enough terms, we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude for the error, B_n, shown in red, is again very pessimistic compared to the actual error Z_n = ‖x_t − x̃_t^n‖_∞ (with x̃_t^n the n-term truncation), shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine precision accurate value for the time t state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves both the approximation error Z_n, shown in blue, and the approximate error B_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter, but still pessimistic, bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
[Plot: model variable paths over t = 1, ..., 30]

Figure 3: Model variable values
[Plot: "RBC Model Bounds versus Actual", series B_n and Z_n]

Figure 4: RBC Known Solution Truncation Approximate Error Versus Actual, "Almost Arbitrary" Linear Reference Model
[Plot: "RBC Model Bounds versus Actual", series B_n and Z_n]

Figure 5: RBC Known Solution Truncation Approximate Error Versus Actual, Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

\[
h_i(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = h^{det}_{i0}(x_{t-1}, x_t, \varepsilon_t)
+ \sum_{j=1}^{p_i} \left[ h^{det}_{ij}(x_{t-1}, x_t, \varepsilon_t)\, h^{nondet}_{ij}(x_{t+1}) \right] = 0
\]
This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

\[
h^{det}_{10}(\cdot) = \frac{1}{c_t^{\eta}}, \qquad
h^{det}_{11}(\cdot) = -\alpha\delta k_t^{\alpha-1}, \qquad
h^{nondet}_{11}(\cdot) = E_t\!\left(\frac{\theta_{t+1}}{c_{t+1}^{\eta}}\right)
\]
\[
h^{det}_{20}(\cdot) = c_t + k_t - \theta_t k_{t-1}^{\alpha}, \qquad h^{det}_{21}(\cdot) = 0
\]
\[
h^{det}_{30}(\cdot) = \ln\theta_t - (\rho\ln\theta_{t-1} + \varepsilon_t), \qquad h^{det}_{31}(\cdot) = 0
\]
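To make the decomposition concrete, here is a sketch in Python of the Euler-equation split for the η = d = 1 case, where the conditional expectation is available in closed form (the function names are illustrative, not the paper's):

```python
import math

ALPHA, DELTA = 0.36, 0.95  # parameter values from the text

# Euler equation in the constrained form h_det_10 + h_det_11 * h_nondet_11 = 0,
# for eta = d = 1 (the sign on h_det_11 moves the expectation term left).
h_det_10 = lambda c_t, k_t: 1.0 / c_t
h_det_11 = lambda c_t, k_t: -ALPHA * DELTA * k_t ** (ALPHA - 1.0)

def h_nondet_11(k_t):
    # E_t[theta_{t+1} / c_{t+1}] under the known rule c = (1 - ad) theta k^a;
    # the theta_{t+1} terms cancel, so the expectation has a closed form.
    return 1.0 / ((1.0 - ALPHA * DELTA) * k_t ** ALPHA)

def euler_residual(c_t, k_t):
    return h_det_10(c_t, k_t) + h_det_11(c_t, k_t) * h_nondet_11(k_t)

# At the exact decision rule the residual vanishes for any state:
y = 1.05 * 0.18 ** ALPHA               # theta_t * k_{t-1}^alpha, arbitrary state
c, k = (1 - ALPHA * DELTA) * y, ALPHA * DELTA * y
```

Isolating the nondeterministic factor this way is what lets the expectations be precomputed or replaced by auxiliary variables, as the next paragraph describes.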
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

\[
N_t = \frac{\theta_t}{c_t^{\eta}}
\]

substituting N_{t+1} for θ_{t+1}/c_{t+1}^η in the first equation, and linearizing the augmented RBC model about the ergodic mean.
This leads to

\[
\begin{bmatrix} H_{-1} & H_0 & H_1 \end{bmatrix} =
\begin{bmatrix}
0 & 0 & 0 & 0 & -7.6986 & 9.47963 & 0 & 0 & 0 & 0 & -0.999 & 0 \\
0 & -1.05263 & 0 & 0 & 1 & 1 & 0 & -0.547185 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 7.7063 & 0 & 1 & -2.77463 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -0.949953 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{bmatrix}
\]

with

\[
\psi_{\varepsilon} = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \psi_z = I
\]

These coefficients produce a unique stable linear solution:

\[
B = \begin{bmatrix}
0 & 0.692632 & 0 & 0.342028 \\
0 & 0.36 & 0 & 0.177771 \\
0 & -5.33763 & 0 & 0 \\
0 & 0 & 0 & 0.949953
\end{bmatrix}
\qquad
\phi = \begin{bmatrix}
-0.0444237 & 0.658 & 0 & 0.360048 \\
0.0444237 & 0.342 & 0 & 0.187137 \\
0.342342 & -5.07075 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\]
\[
F = \begin{bmatrix}
0 & 0 & -0.0443793 & 0 \\
0 & 0 & 0.0443793 & 0 \\
0 & 0 & 0.342 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}
\qquad
\psi_c = \begin{bmatrix} -3.7735 \\ -0.197184 \\ 2.77741 \\ 0.0500976 \end{bmatrix}
\]
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the ∆z^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

\[
G(x) \equiv E\left[g(x, \varepsilon)\right]
\]

then with

\[
E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t))
\]

we have

\[
M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)
\]
\[
x_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \qquad \|x_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \quad \forall t > 0
\]
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

\[
E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t))
\]
\[
e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)
\]
\[
\|x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_{\infty} \le
\max_{x,\,\varepsilon}\, \left\| (I-F)^{-1}\phi\, M(x,\, g^p(x, \varepsilon),\, G^p(g^p(x, \varepsilon)),\, \varepsilon) \right\|_2
\]

which can be approximated by

\[
\max_{x_{t-1},\,\varepsilon_t}\, \left\| \phi\, e^p_t(x_{t-1}, \varepsilon_t) \right\|_2
\]
For proof see Appendix A.

9 This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

\[
e(x_{t-1}, \varepsilon_t) \equiv x(x_{t-1}, \varepsilon_t) - x^p(x_{t-1}, \varepsilon_t)
\]

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in (2) to approximate e:

\[
X^p(x) \equiv E\left[x^p(x, \varepsilon)\right]
\]
\[
x_{t+k+1} = X^p(x_{t+k}), \quad k = 1, \ldots, K
\]

We can define functions z^p_t by

\[
z^p_t \equiv M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t), \qquad
z^p_{t+k} \equiv M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)
\]
\[
\hat{e}(x_{t-1}, \varepsilon_t) \equiv \sum_{\nu=0}^{K} F^{\nu}\phi\, z^p_{t+\nu}
\]
4.3 Assessing the Error Approximation

This section reports on the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

\[
0.20 < \alpha < 0.40, \quad 0.85 < \delta < 0.99, \quad 0.85 < \rho < 0.99, \quad 0.005 < \sigma < 0.015
\]

producing the following 10 model specifications:

   α        δ         η   ρ         σ
 0.35     0.92875    1  0.957188  0.00875
 0.275    0.9725     1  0.906875  0.01375
 0.375    0.8675     1  0.924375  0.01125
 0.225    0.94625    1  0.891563  0.00625
 0.325    0.91125    1  0.944063  0.006875
 0.2625   0.922187   1  0.98125   0.011875
 0.3625   0.887187   1  0.85875   0.014375
 0.2125   0.965938   1  0.965938  0.009375
 0.3125   0.860938   1  0.878438  0.008125
 0.2375   0.904688   1  0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

\[
e_i = \alpha + \beta \hat{e}_i + u_i
\]

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.
Figures 6 and 7 present the 100 R² and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.

10 I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
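The regression step of the experiment can be sketched as follows (with synthetic stand-in data; in the paper the actual and predicted errors come from the 100 decision-rule/reference-model pairings evaluated at the 30 Niederreiter points):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the formula-predicted errors (ehat) and the actual
# errors (e) at 30 representative points; by construction e tracks ehat with
# intercept 0 and slope 1, mimicking an accurate error formula.
ehat = rng.normal(scale=1e-4, size=30)
e = 0.0 + 1.0 * ehat + rng.normal(scale=1e-6, size=30)

# OLS regression e_i = a + b * ehat_i + u_i, plus R^2.
X = np.column_stack([np.ones_like(ehat), ehat])
(a, b), _, _, _ = np.linalg.lstsq(X, e, rcond=None)
fitted = X @ np.array([a, b])
r2 = 1.0 - np.sum((e - fitted) ** 2) / np.sum((e - e.mean()) ** 2)
```

An accurate error formula shows up exactly as the paper reports: slope near one, intercept near zero, and R² near one.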
[Four scatter panels: "c error regression" for K = 0, 1, 10, 30; axes σ² vs R²]

Figure 6: c_t (σ², R²)
[Four scatter panels: "k error regression" for K = 0, 1, 10, 30; axes σ² vs R²]

Figure 7: k_t (σ², R²)
[Four scatter panels: "c error regression" for K = 0, 1, 10, 30; axes α vs β]

Figure 8: c_t (α, β)
[Four scatter panels: "k error regression" for K = 0, 1, 10, 30; axes α vs β]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

\[
\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\} \mid (b, c)
\]

where each

\[
C_i \equiv \left(\, b_i(x_{t-1}, \varepsilon_t),\; M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\; c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) \,\right)
\]

represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
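A minimal sketch of this gated selection follows, with a toy occasionally-binding system (the helper names and the toy equations are illustrative, not the paper's model):

```python
# Sketch of the gated-equation-system scheme {C_1,...,C_M, d}: each system is a
# triple (b_i, solve_i, c_i) of precondition, solver for M_i = 0, and
# post-condition; d selects among the surviving candidates.

def solve_gated(systems, select, x_prev, eps):
    """systems: list of (b_i, solve_i, c_i); select: the function d."""
    candidates = []
    for b, solve, c in systems:
        if not b(x_prev, eps):          # precondition: skip inapplicable systems
            continue
        x_t = solve(x_prev, eps)
        if c(x_prev, eps, x_t):         # post-condition: keep acceptable solutions
            candidates.append(x_t)
    return select(candidates)

# Toy occasionally-binding law of motion: x_t = max(0.5 * x_prev + eps, 0).
interior = (lambda x, e: True,
            lambda x, e: 0.5 * x + e,
            lambda x, e, xt: xt >= 0.0)      # interior solution must be feasible
bound = (lambda x, e: True,
         lambda x, e: 0.0,
         lambda x, e, xt: 0.5 * x + e <= 0.0)  # bound valid only when binding
d = lambda cands: cands[0]  # exactly one system should survive the gates
```

The complementary-slackness post-conditions guarantee that one and only one candidate survives for each state, which is exactly the uniqueness requirement stated above.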
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.

There are a number of new steps. One must specify a linear reference model. One must also choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms corresponds to more accuracy when the decision rule is accurate but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution. If the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

\[
M(x_{t-1},\, x(x_{t-1}, \varepsilon_t),\, X(x(x_{t-1}, \varepsilon_t)),\, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)
\]

Given a proposed model solution

\[
x_t = x^p(x_{t-1}, \varepsilon_t)
\]

compute

\[
X^p(x_{t-1}) \equiv E\left[x^p(x_{t-1}, \varepsilon_t)\right]
\]

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.

We can define functions z^p, Z^p by

\[
z^p(x_{t-1}, \varepsilon) \equiv H \begin{bmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x^p(x_{t-1}, \varepsilon)) \end{bmatrix} + \psi_{\varepsilon}\varepsilon_t + \psi_c
\]
\[
Z^p(x_{t-1}) \equiv E\left[z^p(x_{t-1}, \varepsilon)\right]
\]
- Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

\[
x_{t+k+1} = X(x_{t+k}), \quad z_{t+k+1} = Z(x_{t+k}) \quad \forall k \ge 0
\]

- Using the z^p(x_{t-1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

\[
X_t = B x_{t-1} + \phi\psi_{\varepsilon}\varepsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, Z(x_{t+\nu})
\]
\[
E[X_{t+1}] = B X_t + (I-F)^{-1}\phi\psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1}\phi\, Z(x_{t+\nu}) \quad \forall t,\, k \ge 0
\]

- Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p'}(x_{t-1}, ε), z^{p'}(x_{t-1}, ε).

- Set x^p(x_{t-1}, ε) = x^{p'}(x_{t-1}, ε) and z^p(x_{t-1}, ε) = z^{p'}(x_{t-1}, ε). Repeat the loop.
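For intuition, here is a deliberately simplified instance of this loop, not the paper's Smolyak implementation: for the η = d = 1 RBC model, guessing c_t = A θ_t k_{t-1}^α reduces the update implied by the Euler equation and the budget constraint to a scalar fixed point A → A/(αδ + A), whose fixed point is the known coefficient 1 − αδ.

```python
# Miniature version of the solution loop: iterate on a proposed decision rule
# until it reproduces itself through the model equations. With the guess
# c_t = A * theta_t * k_{t-1}^alpha, substituting into the Euler equation and
# the resource constraint yields the scalar update A -> A / (alpha*delta + A).
ALPHA, DELTA = 0.36, 0.95

def iterate_rule(A0, tol=1e-12, max_iter=200):
    A = A0
    for _ in range(max_iter):
        A_new = A / (ALPHA * DELTA + A)   # update implied by the Euler equation
        if abs(A_new - A) < tol:
            return A_new
        A = A_new
    return A

A_star = iterate_rule(0.5)   # converges to 1 - alpha*delta = 0.658
```

The full algorithm does the analogous fixed-point iteration on the Smolyak polynomial coefficients of x^p and z^p jointly, rather than on a single scalar.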
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation and Figure 10 provides a graphic depiction of the function call tree. For more

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model

x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components

z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components

X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components

Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak an-isotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))

T: model evaluation points, a Niederreiter sequence in the model variable ergodic region

γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components

G: {X(x_{t-1}), Z(x_{t-1})}, collects the decision rule conditional expectations components

H: {γ, G}, collects decision rule information

{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])}|(b, c): the equation system, R^{3N_x+N_ε} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])}|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs: This function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).
makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Two panels: simulated ergodic values of k_t and θ_t (left) and the same points in transformed coordinates (right)]

Figure 11: The Ergodic Values for k_t, θ_t
\[
\bar{X} = \begin{bmatrix} 0.18853 \\ 1.00466 \\ 0 \end{bmatrix}, \quad
\sigma_X = \begin{bmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{bmatrix}, \quad
V = \begin{bmatrix} -0.707107 & -0.707107 & 0 \\ -0.707107 & 0.707107 & 0 \\ 0 & 0 & 1 \end{bmatrix}
\]
\[
\max U = \begin{bmatrix} 3.02524 \\ 0.199124 \\ 3 \end{bmatrix}, \quad
\min U = \begin{bmatrix} -3.16879 \\ -0.17629 \\ -3 \end{bmatrix}
\]
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
   k_{t-1}    θ_{t-1}    ε_t
 0.16818    0.942376  -0.0251104
 0.210392   1.08282    0.0232085
 0.181393   0.968212  -0.0090041
 0.1949     1.017      0.0111288
 0.188668   1.00876   -0.0210838
 0.20145    1.06007    0.0272351
 0.17245    0.945462  -0.000497752
 0.214179   1.0876     0.0191819
 0.195059   1.03442   -0.0130307
 0.182278   0.983104   0.00307563
 0.208273   1.06025   -0.029137
 0.166906   0.916853   0.00106234
 0.20192    1.05244   -0.0311503
 0.187205   1.00787    0.0171686
 0.213199   1.08502   -0.015044
 0.17147    0.942887   0.0252218
 0.179847   0.985589  -0.000699081
 0.194563   1.03016    0.00911549
 0.165563   0.915543  -0.0230971
 0.20705    1.05852    0.0211952
 0.173322   0.954406  -0.0110174
 0.2136     1.1016     0.00508892
 0.184601   0.986988  -0.0271237
 0.198833   1.03324    0.0131421
 0.20721    1.07594   -0.0190705
 0.166932   0.928749   0.0292484
 0.192926   1.0059    -0.000296424
 0.179118   0.958169   0.0141487
 0.172672   0.949185  -0.0180639
 0.212949   1.09638    0.030255

Table 2: RBC Known Solution Model Evaluation Points, d=(1,1,1)
   c_t       k_t       θ_t
 0.318386  0.16551   0.92020
 0.413447  0.214841  1.10215
 0.341848  0.177697  0.960777
 0.375209  0.195014  1.02742
 0.35634   0.185242  0.987273
 0.400686  0.208232  1.08481
 0.329563  0.171293  0.943165
 0.416315  0.216325  1.1025
 0.372502  0.193618  1.01959
 0.351946  0.182927  0.98707
 0.384948  0.200107  1.02813
 0.31841   0.165481  0.92202
 0.377048  0.196011  1.01878
 0.368995  0.19178   1.02495
 0.402167  0.209001  1.06547
 0.339185  0.176282  0.97149
 0.347449  0.180596  0.979286
 0.378776  0.196859  1.03785
 0.308332  0.160276  0.896658
 0.401836  0.208829  1.07707
 0.330982  0.172039  0.945622
 0.415597  0.215941  1.10137
 0.343927  0.178809  0.960659
 0.384305  0.199732  1.04488
 0.393281  0.204401  1.05291
 0.332894  0.173006  0.962198
 0.36484   0.189639  1.00262
 0.345376  0.179513  0.974651
 0.326232  0.169582  0.933652
 0.422656  0.219618  1.12227

Table 3: RBC Known Solution Values at Evaluation Points, d=(1,1,1)
  ln|e_c|    ln|e_k|    ln|e_θ|
  -6.86423   -7.61023   -6.2776
  -6.77469   -7.25654   -6.193
  -8.27193   -9.26988   -7.90952
  -9.53366   -9.94641   -9.07674
  -9.70109  -11.0692   -10.8193
  -7.01131   -7.52677   -6.4226
  -8.62144   -9.43568   -8.12434
  -6.93069   -7.36841   -6.2991
  -8.82105   -9.39136   -7.96867
  -9.48549   -9.94897   -9.04695
  -6.83337   -7.45765   -6.41963
 -11.193    -13.9426    -8.33167
  -7.13458   -7.70834   -6.51919
  -9.46667  -10.6151   -10.154
  -7.13317   -7.97018   -6.71715
  -6.76168   -7.44531   -6.20438
  -9.46294  -10.7345    -8.60101
  -8.99962   -9.32606   -8.36754
  -6.5744    -7.28112   -6.0606
  -7.26443   -7.74753   -6.65645
  -7.95047   -8.75065   -7.34261
  -7.90465   -8.02499   -7.40682
  -7.78668   -8.88177   -7.3262
  -8.44035   -8.86441   -7.83514
  -7.21479   -7.97368   -6.58639
  -6.40774   -7.09877   -5.86537
 -11.8365   -11.1009   -11.8397
  -7.81106   -8.43094   -7.01803
  -7.28745   -8.06534   -6.74087
  -6.33557   -6.84989   -5.76803

Table 4: RBC Known Solution Actual Errors, d=(1,1,1)
  ln|ê_c|    ln|ê_k|    ln|ê_θ|
  -6.84011   -7.59703   -6.2776
  -6.79189   -7.33194   -6.193
  -8.27296   -9.26585   -7.90952
  -9.54759   -9.96466   -9.07674
  -9.72214  -11.142    -10.8193
  -7.019     -7.58967   -6.4226
  -8.61034   -9.41771   -8.12434
  -6.95566   -7.44593   -6.2991
  -8.83746   -9.43331   -7.96867
  -9.48388   -9.95406   -9.04695
  -6.8561    -7.50703   -6.41963
 -10.8845   -13.6119    -8.33167
  -7.15654   -7.75234   -6.51919
  -9.48715  -10.5641   -10.154
  -7.16809   -8.02412   -6.71715
  -6.74694   -7.43005   -6.20438
  -9.46173  -10.7234    -8.60101
  -9.00034   -9.36818   -8.36754
  -6.55256   -7.25512   -6.0606
  -7.28911   -7.80039   -6.65645
  -7.93653   -8.73939   -7.34261
  -7.87653   -8.13406   -7.40682
  -7.79071   -8.88802   -7.3262
  -8.45665   -8.89927   -7.83514
  -7.25059   -8.02416   -6.58639
  -6.38709   -7.07562   -5.86537
 -11.9887   -11.1639   -11.8397
  -7.8057    -8.42757   -7.01803
  -7.27379   -8.05278   -6.74087
  -6.35248   -6.93629   -5.76803

Table 5: RBC Known Solution Error Approximations, d=(1,1,1)
   k_{t-1}    θ_{t-1}    ε_t
 0.175411  1.00081   -0.00861854
 0.182233  1.01265   -0.00121566
 0.181807  0.987487  -0.00615091
 0.183086  0.994888  -0.00306638
 0.179142  1.00488   -0.00800163
 0.179142  1.01672   -0.000598756
 0.178715  0.991558  -0.00553401
 0.184685  1.00636   -0.00183257
 0.179142  1.0108    -0.00676782
 0.179142  0.998959  -0.00430019
 0.185538  0.997479  -0.00923544
 0.180208  0.980456  -0.00460865
 0.18138   1.00821   -0.0095439
 0.177969  1.00821   -0.00214102
 0.184365  1.00673   -0.00707627
 0.178396  0.991928  -0.000907209
 0.176263  1.00821   -0.00584246
 0.179675  1.00821   -0.00337483
 0.179248  0.983046  -0.00831008
 0.184792  0.999329  -0.00152412
 0.177436  0.997479  -0.00645937
 0.180847  1.02116   -0.00399174
 0.180421  0.995999  -0.00892699
 0.182979  0.998959  -0.00275793
 0.180847  1.01524   -0.00769318
 0.177436  0.991558  -0.000290303
 0.183832  0.990077  -0.00522555
 0.18202   0.984526  -0.0026037
 0.178049  0.994426  -0.00753895
 0.18146   1.01811   -0.000136076

Table 6: RBC Known Solution Model Evaluation Points, d=(2,2,2)
Table 7: RBC Known Solution Values at Evaluation Points, d=(2,2,2)
  ln|e_c|    ln|e_k|    ln|e_θ|
 -13.6548   -13.6548   -14.1969
 -18.0614   -13.7477   -14.7744
 -17.2538   -16.064    -17.1189
 -18.2334   -14.931    -18.4609
 -14.9375   -15.0484   -18.06
 -13.3718   -13.4466   -14.3339
 -15.3357   -15.1409   -16.6528
 -13.4818   -17.055    -18.6856
 -16.5151   -17.974    -16.0224
 -16.8629   -17.1652   -16.9262
 -15.0336   -13.0546   -15.0929
 -14.8104   -15.1068   -17.0829
 -13.9454   -15.0733   -15.3665
 -17.0059   -15.0225   -16.8123
 -14.3533   -13.6955   -16.1402
 -14.697    -15.2052   -15.5889
 -15.6999   -16.5298   -16.5502
 -15.469    -15.8159   -16.1557
 -12.9005   -17.0451   -15.5094
 -13.407    -14.5127   -15.9061
 -15.605    -15.0689   -15.5069
 -14.5434   -14.4278   -15.1171
 -14.8235   -14.8132   -16.6327
 -15.0699   -15.1443   -17.7803
 -13.5813   -14.7228   -14.6813
 -15.1304   -15.2556   -15.467
 -17.3355   -17.4808   -20.5415
 -13.2332   -15.2877   -16.1059
 -15.668    -14.9417   -15.2354
 -14.2264   -12.8483   -14.2733

Table 8: RBC Known Solution Actual Errors, d=(2,2,2)
  ln|ê_c|    ln|ê_k|    ln|ê_θ|
 -13.5994   -13.7316   -14.1969
 -19.713    -13.7563   -14.7744
 -17.8538   -15.9394   -17.1189
 -18.4186   -14.9247   -18.4609
 -14.9453   -15.0569   -18.06
 -13.3661   -13.4484   -14.3339
 -15.2186   -15.0483   -16.6528
 -13.4591   -16.4547   -18.6856
 -16.6757   -17.4658   -16.0224
 -16.7507   -17.3581   -16.9262
 -17.6809   -13.212    -15.0929
 -14.8476   -15.0629   -17.0829
 -14.1282   -15.8166   -15.3665
 -16.8176   -14.9953   -16.8123
 -14.7882   -13.8997   -16.1402
 -14.5928   -15.0521   -15.5889
 -15.8049   -16.3126   -16.5502
 -15.479    -15.8001   -16.1557
 -12.8385   -15.3986   -15.5094
 -13.3789   -14.598    -15.9061
 -15.6645   -15.1186   -15.5069
 -14.4965   -14.459    -15.1171
 -14.7832   -14.7719   -16.6327
 -15.0335   -15.1856   -17.7803
 -13.7298   -15.3206   -14.6813
 -15.2349   -15.1718   -15.467
 -17.3423   -17.473    -20.5415
 -13.2297   -15.3227   -16.1059
 -15.7978   -15.0212   -15.2354
 -14.0272   -12.8995   -14.2733

Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
[K d = (1 1 1) d = (2 2 2) d = (3 3 3)
]
NonSeries minus651367 minus122771 minus1689470 minus60793 minus626833 minus6298431 minus600805 minus624202 minus6498352 minus652219 minus743876 minus7047853 minus657578 minus803864 minus8325894 minus662831 minus91522 minus9493145 minus677305 minus105516 minus1042476 minus638499 minus116399 minus114557 minus663849 minus113627 minus1302138 minus638735 minus117288 minus1399769 minus646759 minus119826 minus15337210 minus645966 minus121345 minus15415710 minus677776 minus115025 minus15821115 minus667778 minus124593 minus15889920 minus640802 minus117296 minus17165225 minus653265 minus121441 minus17151830 minus681949 minus116097 minus16374835 minus633508 minus121446 minus16696340 minus652442 minus114042 minus156302
Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 rows: evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]
Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1)
[30 rows: decision rule values (c_i, k_i, θ_i), i = 1, …, 30]
Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1)
[30 rows: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) at each of the 30 evaluation points]
Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1)
[30 rows: evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]
Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
[30 rows: decision rule values (c_i, k_i, θ_i), i = 1, …, 30]
Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
[30 rows: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) at each of the 30 evaluation points]
Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} - 1) / (1 - η)
L = u(c_t) + E_t [ Σ_{t=0}^∞ δ^t u(c_{t+1})
  + Σ_{t=0}^∞ ( δ^t λ_t (c_t + k_t - ((1 - d) k_{t-1} + θ_t f(k_{t-1})))
  + δ^t μ_t ((k_t - (1 - d) k_{t-1}) - υ I_ss) ) ]
so that we can impose

I_t = (k_t - (1 - d) k_{t-1}) ≥ υ I_ss
The first order conditions become

1/c_t^η + λ_t

λ_t - δ λ_{t+1} θ_{t+1} α k_t^{α-1} - δ(1 - d) λ_{t+1} + μ_t - δ(1 - d) μ_{t+1}
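When the constraint is slack (μ = 0), the deterministic steady state implied by the first order conditions above has a closed form. A minimal sketch, assuming illustrative parameter values for α, δ, and d rather than any calibration from the paper:

```python
# Illustrative parameters (NOT the paper's calibration).
alpha, delta, d = 0.36, 0.95, 0.10

# With theta_ss = 1 and mu = 0 the Euler equation reduces to
#   1 = delta * (alpha * k_ss**(alpha - 1) + (1 - d)),
# which solves in closed form for the steady-state capital stock.
k_ss = (alpha * delta / (1.0 - delta * (1.0 - d))) ** (1.0 / (1.0 - alpha))
i_ss = d * k_ss                      # steady-state investment
c_ss = k_ss ** alpha - i_ss          # resource constraint at the steady state

# Euler equation residual at the computed steady state (numerically zero).
resid = 1.0 - delta * (alpha * k_ss ** (alpha - 1) + (1.0 - d))
```

The steady-state investment level i_ss is the quantity the occasionally binding floor υ I_ss is measured against.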
For example, the Euler equations for the neoclassical growth model can be written as

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) = 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)
if (μ_t = 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) ≥ 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d) λ_{t+1} + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 rows: evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]
Table 17: Occasionally Binding Constraints Model, Evaluation Points, d=(1,1,1)
[30 rows: decision rule values (c_i, I_i, k_i), i = 1, …, 30]
Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(1,1,1)
[30 rows: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) at each of the 30 evaluation points]
Table 19: Occasionally Binding Constraints Model, Error Approximations, d=(1,1,1)
[30 rows: evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]
Table 20: Occasionally Binding Constraints Model, Evaluation Points, d=(2,2,2)
[30 rows: decision rule values (c_i, I_i, k_i), i = 1, …, 30]
Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(2,2,2)
[30 rows: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|) at each of the 30 evaluation points]
Table 22: Occasionally Binding Constraints Model, Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc, February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
53
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
54
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x.
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function
G(x) ≡ E[g(x, ε)]
then with
E_t x*_{t+1} = G(g(x_{t-1}, ε_t))
we have
M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀ (x_{t-1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t-1}, ε):

{x*_t(x_{t-1}, ε_t) ∈ R^L : ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0}

z*_t(x_{t-1}, ε_t) ≡ H_{-1} x*_{t-1}(x_{t-1}, ε_t) + H_0 x*_t(x_{t-1}, ε_t) + H_1 x*_{t+1}(x_{t-1}, ε_t)
Consequently the exact solution has a representation given by

x*_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t-1}, ε)

and

E[x*_{t+1}(x_{t-1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t-1}, ε)] + (I - F)^{-1} φ ψ_c
with

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀ (x_{t-1}, ε_t)
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x_{t+1} = G^p(g^p(x_{t-1}, ε_t))
e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t-1}, ε):¹²

{x^p_t(x_{t-1}, ε_t) ∈ R^L : ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0}

z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_1 x^p_{t+1}(x_{t-1}, ε_t)
The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I - F)^{-1} φ ψ_c
with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

x*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t-1}, ε) - z^p_{t+ν}(x_{t-1}, ε))

Defining Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ (z*_{t+ν}(x_{t-1}, ε) - z^p_{t+ν}(x_{t-1}, ε)),

x*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz^p_t we can bound the largest difference in x*_t.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz*_t‖ ≤ max_{x_-, ε} ‖φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2

‖x*_t(x_{t-1}, ε) - x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I - F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2
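For a stable F (spectral radius below one), the series bound above can be evaluated numerically by truncating the geometrically decaying tail. A minimal sketch with illustrative F and φ and a hypothetical worst-case residual magnitude, none taken from any model in the paper:

```python
import numpy as np

# Illustrative stable F (spectral radius 0.5) and loading phi.
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
phi = np.eye(2)
dz_max = 1e-6   # hypothetical max_nu ||Delta z^p_{t+nu}||_inf

# Accumulate sum_nu ||F^nu phi||_inf * dz_max; 200 terms is ample here
# because the terms decay geometrically.
bound, Fnu = 0.0, np.eye(2)
for _ in range(200):
    bound += np.linalg.norm(Fnu @ phi, np.inf) * dz_max
    Fnu = Fnu @ F
```

The accumulated `bound` then caps the sup-norm gap between the exact and proposed decision rules, in the spirit of the inequality above.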
z*_t = H_- x*_{t-1} + H_0 x*_t + H_+ x*_{t+1}

z^p_t = H_- x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1}

0 = M(x_{t-1}, x*_t, x*_{t+1}, ε_t)

e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t-1}, ε_t} (‖φ Δz_t‖_2)² = max_{x_{t-1}, ε_t} (‖φ (H_0 (x^p_t - x*_t) + H_+ (x^p_{t+1} - x*_{t+1}))‖_2)²
with

M(x_{t-1}, x*_t, x*_{t+1}, ε_t) = 0

Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t - x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} - x^p_{t+1})

When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

(x*_t - x^p_t) ≈ (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) - (∂M/∂x_{t+1})(x*_{t+1} - x^p_{t+1}))
Collecting results,

(‖φ Δz_t‖_2)² = (‖φ (H_0 (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) - (∂M/∂x_{t+1})(x*_{t+1} - x^p_{t+1})) + H_+ (x^p_{t+1} - x*_{t+1}))‖_2)²

= (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) - (H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) - H_+)(x*_{t+1} - x^p_{t+1}))‖_2)²
Approximating the derivatives using the H matrices,

∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,   ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+

produces

max_{x_{t-1}, ε_t} (‖φ Δe^p_t(x_{t-1}, ε)‖_2)²
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t-1}, ε_t, x_t).
E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)
If we know the current state, we can use the known conditional expectation function for the state:

[ E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N) ]
For the next state we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1); …; E_t(x_{t+2}(x_t) | s_t = N) ] = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); …; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]
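The stacked (P ⊗ I) step above is a single matrix-vector product. A minimal sketch with an illustrative two-regime transition matrix and three model variables (none taken from the paper):

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # p_ij = Prob(s_{t+1} = j | s_t = i)
I = np.eye(3)                       # three model variables

Ex_next = np.array([1.0, 2.0, 3.0,  # E(x_{t+2} | s_{t+1} = 1)
                    4.0, 5.0, 6.0]) # E(x_{t+2} | s_{t+1} = 2)

# Mix the regime-conditional expectations with the transition probabilities:
# block i of the result is sum_j p_ij * E(x_{t+2} | s_{t+1} = j).
Ex_now = np.kron(P, I) @ Ex_next
```

Each N·L block of `Ex_now` is the expectation conditional on the current regime, ready to be chained one more step back in the expectations path.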
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1:

Prob(s_t = j | s_{t-1} = i) = p_ij
if (s_t = 0 and μ_t > 0 and (k_t - (1 - d_0) k_{t-1} - υ I_ss) = 0):

λ_t - 1/c_t
c_t + k_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d_0) λ_{t+1} + δ(1 - d_0) μ_{t+1})
I_t - (k_t - (1 - d_0) k_{t-1})
μ_t (k_t - (1 - d_0) k_{t-1} - υ I_ss)
if (s_t = 0 and μ_t = 0 and (k_t - (1 - d_0) k_{t-1} - υ I_ss) ≥ 0):

λ_t - 1/c_t
c_t + k_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d_0) λ_{t+1} + δ(1 - d_0) μ_{t+1})
I_t - (k_t - (1 - d_0) k_{t-1})
μ_t (k_t - (1 - d_0) k_{t-1} - υ I_ss)
if (s_t = 1 and μ_t > 0 and (k_t - (1 - d_1) k_{t-1} - υ I_ss) = 0):

λ_t - 1/c_t
c_t + k_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d_1) λ_{t+1} + δ(1 - d_1) μ_{t+1})
I_t - (k_t - (1 - d_1) k_{t-1})
μ_t (k_t - (1 - d_1) k_{t-1} - υ I_ss)
if (s_t = 1 and μ_t = 0 and (k_t - (1 - d_1) k_{t-1} - υ I_ss) ≥ 0):

λ_t - 1/c_t
c_t + k_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ(1 - d_1) λ_{t+1} + δ(1 - d_1) μ_{t+1})
I_t - (k_t - (1 - d_1) k_{t-1})
μ_t (k_t - (1 - d_1) k_{t-1} - υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system R^{3N_x+N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by default ("normConvTol" → 10^-10).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+k-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
63
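The outer loop in nestInterp is a simple fixed-point iteration over decision-rule functions. A minimal Python sketch of that loop (the paper's implementation uses Mathematica's NestWhile; the `doInterp` update and test points here are illustrative stand-ins, not the paper's code):

```python
def nest_interp(do_interp, h0, test_points, norm_conv_tol=1e-10, max_iter=100):
    """Iterate h <- do_interp(h) until decision-rule values at the
    test points stop changing (sup-norm difference below norm_conv_tol)."""
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        diff = max(abs(h_next(p) - h(p)) for p in test_points)
        h = h_next
        if diff < norm_conv_tol:
            break
    return h

# Toy usage: each "doInterp" step moves the rule halfway toward a fixed
# target rule x -> 2 + 0.5*x, so the iteration converges to that target.
def make_rule(a, b):
    return lambda x: a + b * x

def toy_do_interp(h):
    target = make_rule(2.0, 0.5)
    return lambda x: 0.5 * h(x) + 0.5 * target(x)

rule = nest_interp(toy_do_interp, make_rule(0.0, 0.0), test_points=[0.0, 1.0, 2.0])
```

In the paper's setting the update is the collocation step of Algorithm 2 rather than this damped average; only the termination logic is the same.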
Algorithm 5: genZsForFindRoot
input: (x^p_t, L, G, κ, M)
1. Symbolically compute the κ−1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2. X_t = x^p_t
3. for i = 1; i ≤ κ−1; i++ do
4.   {X_{t+i}, Z_{t+i}} = G(X_{t+i−1})
5. end
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 6: genBothX0Z0Funcs
input: (L ≡ {H, ψ_ε, ψ_c, B, φ, F})
1. Create functions that use the solution from the linear reference model as an initial proposed decision rule function and decision rule conditional expectation function.
2. γ(x, ε) = [B x + (I − F)^{−1} φψ_c + φψ_ε ε_t ; 0_{N_x}]
3. G(x) = [B x + (I − F)^{−1} φψ_c ; 0_{N_x}]
4. H = {γ, G}
output: H

Algorithm 7: genLilXkZkFunc
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3. xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
4. xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5. fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 8: genXtOfXtm1
input: (L, {x_{t−1}, ε_t, z_t}, fCon)
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_t = B x_{t−1} + (I − F)^{−1} φψ_c + φψ_ε ε_t + φψ_z z_t + F · fCon
output: x_t

Algorithm 9: genXtp1OfXt
input: (L, x_t, fCon)
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_{t+1} = B x_t + (I − F)^{−1} φψ_c + fCon
output: x_{t+1}

Algorithm 10: fSumC
input: (φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
1. Numerically sums the F contribution for the series:
2. S = Σ_{ν=1}^{κ−1} F^{ν−1} φψ_z Z_{t+ν}
output: S

Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2. interpData = genInterpData({C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3. {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 12: genInterpData
input: ({C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Solves model equations at collocation points, producing interpolation data.
output: interpData

Algorithm 13: evaluateTriple
input: (C, (x_{t−1}, ε_t))
1. Evaluate the model equations, obeying pre and post conditions.
output: x_t or "failed"
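The sum computed by fSumC (Algorithm 10) is a finite weighted sum of future conditional Z vectors, S = Σ_{ν=1}^{κ−1} F^{ν−1} φψ_z Z_{t+ν}. A pure-Python sketch with small dense matrices and ψ_z = I (illustrative only; the paper's routine operates on the Mathematica objects):

```python
def mat_vec(A, v):
    """Matrix-vector product for lists of lists."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def mat_mul(A, B):
    """Matrix-matrix product for square lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def f_sum_c(F, phi, Zs):
    """S = sum_{nu=1}^{kappa-1} F^(nu-1) * phi * Z_{t+nu}  (psi_z = I here).
    Zs = [Z_{t+1}, ..., Z_{t+kappa-1}]."""
    n = len(F)
    Fpow = [[float(i == j) for j in range(n)] for i in range(n)]  # F^0 = I
    S = [0.0] * n
    for Z in Zs:
        term = mat_vec(Fpow, mat_vec(phi, Z))
        S = [s + t for s, t in zip(S, term)]
        Fpow = mat_mul(Fpow, F)  # advance to the next power of F
    return S

# usage: with F = [[0.5, 0], [0, 0]], phi = I, and two identical Z vectors,
# S = Z + F Z
S = f_sum_c([[0.5, 0.0], [0.0, 0.0]],
            [[1.0, 0.0], [0.0, 1.0]],
            [[1.0, 0.0], [1.0, 0.0]])
```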
Algorithm 14: interpDataToFunc
input: (V, S)
1. Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations V; each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 17: genSolutionErrXtEps
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 18: genSolutionErrXtEpsZero
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 19: genSolutionPath
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
- Introduction
- A New Series Representation for Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
  - Algorithm Overview
    - Single Equation System Case
    - Multiple Equation System Case
  - Approximating a Known Solution: η = 1, U(c) = log(c)
  - Approximating an Unknown Solution: η = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
For mean zero i.i.d. ε_t we can easily compute the conditional expectation of the model variables for any given (θ_{t+k}, k_{t+k}):

$$\bar{D}(x_{t+k+1}) \equiv \begin{bmatrix} E_t(c_{t+k+1}\,|\,\theta_{t+k}, k_{t+k}) \\ E_t(k_{t+k+1}\,|\,\theta_{t+k}, k_{t+k}) \\ E_t(\theta_{t+k+1}\,|\,\theta_{t+k}, k_{t+k}) \end{bmatrix} = \begin{bmatrix} (1-\alpha\delta)\,k_{t+k}^{\alpha}\, e^{\sigma^2/2}\,\theta_{t+k}^{\rho} \\ \alpha\delta\, k_{t+k}^{\alpha}\, e^{\sigma^2/2}\,\theta_{t+k}^{\rho} \\ e^{\sigma^2/2}\,\theta_{t+k}^{\rho} \end{bmatrix}$$

For any given values of (k_{t−1}, θ_{t−1}, ε_t), the model decision rule (D) and decision rule conditional expectation (D̄) can generate a conditional expectations path forward from any given initial (x_{t−1}, ε_t):

x_t(x_{t−1}, ε_t) = D(x_{t−1}, ε_t)
x_{t+k+1}(x_{t−1}, ε_t) = D̄(x_{t+k}(x_{t−1}, ε_t))

and corresponding paths for (z_{1t}(x_{t−1}, ε_t), z_{2t}(x_{t−1}, ε_t), z_{3t}(x_{t−1}, ε_t)):

z_{t+k} ≡ H_{−1} x_{t+k−1} + H_0 x_{t+k} + H_1 x_{t+k+1}
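The conditional-expectation map D̄ and the forward iteration can be coded directly. A minimal Python sketch, assuming the RBC calibration used below (α = 0.36, δ = 0.95, ρ = 0.95, σ = 0.01); the function and variable names are illustrative, not the paper's:

```python
import math

alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01

def dbar(c, k, theta):
    """One step of the conditional-expectation map D-bar:
    E_t of (c, k, theta) one period ahead, given (theta, k) today."""
    adj = math.exp(sigma ** 2 / 2.0) * theta ** rho   # E_t[theta next period]
    Ec = (1.0 - alpha * delta) * k ** alpha * adj
    Ek = alpha * delta * k ** alpha * adj
    return (Ec, Ek, adj)

def expectations_path(c0, k0, th0, horizon):
    """Iterate D-bar forward to build the conditional-expectations path."""
    path = [(c0, k0, th0)]
    for _ in range(horizon):
        path.append(dbar(*path[-1]))
    return path

# starting near the steady state, the path stays near the steady state
path = expectations_path(0.36, 0.187324, 1.001, 30)
```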
Formula 2 requires that

$$x(x_{t-1}, \varepsilon_t) = B \begin{bmatrix} c_{t-1} \\ k_{t-1} \\ \theta_{t-1} \end{bmatrix} + \phi\psi_{\varepsilon}\varepsilon_t + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}(k_{t-1}, \theta_{t-1}, \varepsilon_t)$$
For example, using d = 1, the following parameter values, and the arbitrary linear reference model (3), we can generate a series representation for the model solutions:

α → 0.36, δ → 0.95, ρ → 0.95, σ → 0.01.

We have

$$\begin{bmatrix} c_{ss} \\ k_{ss} \\ \theta_{ss} \end{bmatrix} = \begin{bmatrix} 0.360408 \\ 0.187324 \\ 1.001 \end{bmatrix}, \qquad \begin{bmatrix} k_{t-1} \\ \theta_{t-1} \\ \varepsilon_t \end{bmatrix} = \begin{bmatrix} 0.18 \\ 1.1 \\ 0.01 \end{bmatrix} \tag{4}$$
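The steady-state values quoted above follow (approximately) from the closed forms for this model, k_ss = (αδ)^{1/(1−α)}, c_ss = (1−αδ) k_ss^α θ_ss, and θ_ss = e^{(σ²/2)/(1−ρ)}. A quick check, treating the closed forms as an assumption about this calibration (small discrepancies reflect the stochastic steady-state adjustments):

```python
import math

alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01

k_ss = (alpha * delta) ** (1.0 / (1.0 - alpha))          # capital steady state
theta_ss = math.exp((sigma ** 2 / 2.0) / (1.0 - rho))    # ergodic mean of theta
c_ss = (1.0 - alpha * delta) * k_ss ** alpha * theta_ss  # consumption
```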
Figure 3 shows, from top to bottom, the paths of θ_t, c_t, and k_t from the initial values given in Equation 4.

By including enough terms, we can compute the solution for x_t exactly. Figure 4 shows the impact that truncating the series has on the approximation of the time t values of the state variables. The approximated magnitude for the error, B̄_n, shown in red, is again very pessimistic compared to the actual error, Z_n = ‖x_t − x̂_t‖_∞, shown in blue. With enough terms, even using an "almost" arbitrarily chosen linear model, the series approximation provides a machine-precision-accurate value for the time t state vector.

Figure 5 shows that using a linearization that better tracks the nonlinear model paths improves both the approximation error Z_n, shown in blue, and the approximate error B̄_n, shown in red. Using the linearization of the RBC model around the ergodic mean produces a tighter but still pessimistic bound on the errors for the initial state vector. Again, the first few terms make most of the difference in approximating the value of the state variables.
[Plot omitted: paths of the model variables over 30 periods.]
Figure 3: Model variable values
[Plot omitted: "RBC Model Bounds versus Actual", comparing B̄_n and Z_n over 30 terms.]
Figure 4: RBC Known Solution Truncation Approximate Error versus Actual, "Almost Arbitrary" Linear Reference Model
[Plot omitted: "RBC Model Bounds versus Actual", comparing B̄_n and Z_n over 30 terms.]
Figure 5: RBC Known Solution Truncation Approximate Error versus Actual, Stochastic Steady State Linearization Linear Reference Model
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

$$h_i(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = h^{det}_{i0}(x_{t-1}, x_t, \varepsilon_t) + \sum_{j=1}^{p_i} \left[h^{det}_{ij}(x_{t-1}, x_t, \varepsilon_t)\, h^{nondet}_{ij}(x_{t+1})\right] = 0$$

This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

$$h^{det}_{10}(\cdot) = \frac{1}{c^{\eta}_t}, \quad h^{det}_{11}(\cdot) = -\alpha\delta k^{\alpha-1}_t, \quad h^{nondet}_{11}(\cdot) = \frac{\theta_{t+1}}{c^{\eta}_{t+1}}$$
$$h^{det}_{20}(\cdot) = c_t + k_t - \theta_t k^{\alpha}_{t-1}, \quad h^{det}_{21}(\cdot) = 0$$
$$h^{det}_{30}(\cdot) = \ln\theta_t - (\rho\ln\theta_{t-1} + \varepsilon_t), \quad h^{det}_{31}(\cdot) = 0$$
Since we need to compute the conditional expectation of nonlinear expressions, this setup will make it possible for us to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

$$N_t = \frac{\theta_t}{c_t},$$

substituting N_{t+1} for θ_{t+1}/c_{t+1} in the first equation, and linearizing the augmented RBC model about the ergodic mean.
This leads to

[H_{−1} | H_0 | H_1] =

0   0         0   0          | −7.6986   9.47963   0   0          | 0   0   −0.999   0
0   −1.05263  0   0          |  1        1         0   −0.547185  | 0   0    0       0
0   0         0   0          |  7.7063   0         1   −2.77463   | 0   0    0       0
0   0         0   −0.949953  |  0        0         0    1         | 0   0    0       0

with

ψ_ε = [0; 0; 1; 0],  ψ_z = I
These coefficients produce a unique stable linear solution:

B =
0   0.692632   0   0.342028
0   0.36       0   0.177771
0   −5.33763   0   0
0   0          0   0.949953

φ =
−0.0444237   0.658      0   0.360048
 0.0444237   0.342      0   0.187137
 0.342342   −5.07075    1   0
 0           0          0   1

F =
0   0   −0.0443793   0
0   0    0.0443793   0
0   0    0.342       0
0   0    0           0

ψ_c =
−3.7735
−0.197184
 2.77741
 0.0500976
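The constant term (I − F)^{−1} φψ_c in the series representation is well defined here because the spectral radius of F (0.342) is below one, so the Neumann series Σ_k F^k (φψ_c) converges. A pure-Python check using the matrices above (the decimal placements are reconstructed from the text, so treat the numbers as illustrative):

```python
F = [[0.0, 0.0, -0.0443793, 0.0],
     [0.0, 0.0,  0.0443793, 0.0],
     [0.0, 0.0,  0.342,     0.0],
     [0.0, 0.0,  0.0,       0.0]]
phi = [[-0.0444237,  0.658,   0.0, 0.360048],
       [ 0.0444237,  0.342,   0.0, 0.187137],
       [ 0.342342,  -5.07075, 1.0, 0.0],
       [ 0.0,        0.0,     0.0, 1.0]]
psi_c = [-3.7735, -0.197184, 2.77741, 0.0500976]

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

b = mat_vec(phi, psi_c)      # phi * psi_c
# Neumann series: (I - F)^(-1) b = b + F b + F^2 b + ...
s, term = list(b), list(b)
for _ in range(200):
    term = mat_vec(F, term)
    s = [si + ti for si, ti in zip(s, term)]

# residual check: (I - F) s should reproduce b
resid = [si - fsi - bi for si, fsi, bi in zip(s, mat_vec(F, s), b)]
```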
Recomputing the truncation errors with the expanded model produces nearly identical resultsfor variable approximation errors
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.⁹

4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];

then with

E_t x_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t−1}, ε_t),
x_t(x_{t−1}, ε_t) ∈ R^L,  ‖x_t(x_{t−1}, ε_t)‖₂ ≤ X̄  ∀t > 0.

Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t)),
e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t).

Then

‖x_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{x⁻, ε} ‖(I − F)^{−1} φ M(x⁻, g^p(x⁻, ε), G^p(g^p(x⁻, ε)), ε)‖₂,

which can be approximated by

max_{x_{t−1}, ε_t} ‖φ e^p_t(x_{t−1}, ε)‖₂.

For proof see Appendix A.

⁹This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t−1}, ε_t):

e(x_{t−1}, ε_t) ≡ x(x_{t−1}, ε_t) − x^p(x_{t−1}, ε_t).

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in Formula 2 to approximate e:

X^p(x) ≡ E[x^p(x, ε)],
x_{t+k+1} = X^p(x_{t+k}),  k = 1, …, K.

We can define functions z^p_t by

z^p_t ≡ M(x_{t−1}, x_t, x_{t+1}, ε_t),
z^p_{t+k} ≡ M(x_{t+k−1}, x_{t+k}, x_{t+k+1}, 0),

$$\hat{e}(x_{t-1}, \varepsilon_t) \equiv \sum_{\nu=0}^{K} F^{\nu}\phi\, z^p_{t+\nu}.$$
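The truncated sum above is a finite version of the series formula, with the model-equation residuals z^p playing the role of anticipated shocks. A scalar sketch of the mechanics (F and φ scalar, residuals hypothetical; with geometric residuals the infinite sum has a closed form to compare against):

```python
def approx_error(F, phi, residuals):
    """e-hat = sum_{nu=0}^{K} F^nu * phi * z_{t+nu} for scalar F, phi.
    residuals = [z_t, z_{t+1}, ..., z_{t+K}]."""
    e_hat, Fpow = 0.0, 1.0
    for z in residuals:
        e_hat += Fpow * phi * z
        Fpow *= F
    return e_hat

# With residuals z_{t+nu} = z0 * g^nu the infinite sum is phi*z0/(1 - F*g);
# the truncation should approach it as K grows.
z0, g, F, phi = 0.1, 0.5, 0.3, 2.0
truncated = approx_error(F, phi, [z0 * g ** nu for nu in range(40)])
exact = phi * z0 / (1.0 - F * g)
```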
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015,

producing the following 10 model specifications:

α        δ         η   ρ         σ
0.35     0.92875   1   0.957188  0.00875
0.275    0.9725    1   0.906875  0.01375
0.375    0.8675    1   0.924375  0.01125
0.225    0.94625   1   0.891563  0.00625
0.325    0.91125   1   0.944063  0.006875
0.2625   0.922187  1   0.98125   0.011875
0.3625   0.887187  1   0.85875   0.014375
0.2125   0.965938  1   0.965938  0.009375
0.3125   0.860938  1   0.878438  0.008125
0.2375   0.904688  1   0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t−1}, θ_{i,t−1}, ε_{i,t}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i.

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).¹⁰ The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.

¹⁰I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
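Each of these regressions is ordinary least squares of the actual error on the predicted error. A self-contained sketch of the one-regressor fit and the R² statistic (the data here are synthetic, not the paper's; a well-behaved error formula should give α near 0 and β near 1):

```python
def ols_1d(x, y):
    """Fit y = a + b*x by least squares; return (a, b, R^2)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

e_hat = [0.01 * i for i in range(30)]        # predicted errors (illustrative)
e_act = [0.002 + 1.0 * eh for eh in e_hat]   # actual errors (exact fit here)
a, b, r2 = ols_1d(e_hat, e_act)
```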
[Plots omitted: four scatter panels, "c error regression" for K = 0, 1, 10, 30, plotting R² against the estimated variance σ².]
Figure 6: c_t (σ², R²)
[Plots omitted: four scatter panels, "k error regression" for K = 0, 1, 10, 30, plotting R² against the estimated variance σ².]
Figure 7: k_t (σ², R²)
[Plots omitted: four scatter panels, "c error regression" for K = 0, 1, 10, 30, plotting the slope β against the intercept α.]
Figure 8: c_t (α, β)
[Plots omitted: four scatter panels, "k error regression" for K = 0, 1, 10, 30, plotting the slope β against the intercept α.]
Figure 9: k_t (α, β)
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given (x_{t−1}, ε_t) this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t−1}, ε). Given a decision rule x(x_{t−1}, ε), denote its conditional expectation X(x_{t−1}) ≡ E[x(x_{t−1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c),

where each

C_i ≡ (b_i(x_{t−1}, ε_t), M_i(x_{t−1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t−1}, ε, x_t, E[x_{t+1}]))

represents a set of model equations M_i with Boolean valued gate functions (b_i, c_i). In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t−1}, ε_t) values: d uses the truth values from the (b_i, c_i) and the possible values of the state variables to select one and only one x_t. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t−1}, ε_t).
Examples of such systems include the simple RBC model from Section 32 Other exam-ples include models with occasionally binding constraints where solutions exhibit comple-mentary slackness conditions or models with regime switching See Section 6 for a modelwith occasionally binding constraints and Section B for an example of a specification forregime switching
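The gatekeeper structure {C_1, …, C_M, d} can be sketched as a list of (b_i, M_i-solver, c_i) triples plus a selector. The occasionally-binding example below (a rule constrained by x_t ≥ 0) is purely illustrative, not a model from the paper:

```python
def solve_piecewise(x_lag, eps):
    """Pick the unique x_t among candidate systems C_i = (b_i, M_i, c_i):
    b_i is a precondition, M_i solves for x_t, c_i is a post-condition."""
    triples = [
        # unconstrained system: x_t = 0.9*x_{t-1} + eps, valid if nonnegative
        (lambda s, e: True,
         lambda s, e: 0.9 * s + e,
         lambda s, e, xt: xt >= 0.0),
        # constraint binds: x_t pinned at the bound, valid when the
        # unconstrained value would be negative
        (lambda s, e: True,
         lambda s, e: 0.0,
         lambda s, e, xt: 0.9 * s + e < 0.0),
    ]
    candidates = []
    for b, M, c in triples:
        if not b(x_lag, eps):
            continue                 # precondition fails: skip this system
        xt = M(x_lag, eps)
        if c(x_lag, eps, xt):
            candidates.append(xt)    # post-condition holds: admissible
    # the selector d: here admissibility is mutually exclusive, so
    # exactly one candidate survives
    assert len(candidates) == 1
    return candidates[0]
```

Because each triple can be evaluated independently, this organization also exposes the parallelism noted in the text.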
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on Formula 2, linking the conditional expectations of future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t−1}, ε_t) as well as for new z_t(x_{t−1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t−1}, ε) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.

There are a number of new steps. One must specify a linear reference model. One must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms mean more truncation error, while more terms provide more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution, so the terms in the series will initially be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t−1}, x(x_{t−1}, ε_t), X(x(x_{t−1}, ε_t)), ε_t) = 0  ∀(x_{t−1}, ε_t).

Given a proposed model solution

x_t = x^p(x_{t−1}, ε_t),

compute

X^p(x_{t−1}) ≡ E[x^p(x_{t−1}, ε_t)].

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define functions z^p, Z^p by

z^p(x_{t−1}, ε) ≡ H [x_{t−1}; x^p(x_{t−1}, ε); X^p(x^p(x_{t−1}, ε))] + ψ_ε ε_t + ψ_c,
Z^p(x_{t−1}) ≡ E[z^p(x_{t−1}, ε)].

• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}), z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0.

• Using the z^p(x_{t−1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

$$x_t = Bx_{t-1} + \phi\psi_{\varepsilon}\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, Z(x_{t+\nu}),$$
$$E[x_{t+1}] = Bx_t + (I - F)^{-1}\phi\psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1}\phi\, Z(x_{t+\nu}) \quad \forall t, k \ge 0.$$

• Use the model equations M(x_{t−1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t−1}, ε_t) = X_t(x_{t−1}, ε_t) to determine x^{p′}(x_{t−1}, ε), z^{p′}(x_{t−1}, ε).

• Set x^p(x_{t−1}, ε) = x^{p′}(x_{t−1}, ε), z^p(x_{t−1}, ε) = z^{p′}(x_{t−1}, ε). Repeat the loop.
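The logic of this loop, propose a rule, form the implied expectations, re-solve the model equation, and repeat, can be exercised on a stylized scalar model. The toy below assumes a hypothetical model x_t = a x_{t−1} + b E_t x_{t+1} + ε_t with a linear guess x_t = λ x_{t−1} + m ε_t; it illustrates only the iteration-on-expectations idea, not the paper's Smolyak machinery:

```python
import math

a, b = 0.2, 0.5   # toy model: x_t = a*x_{t-1} + b*E_t[x_{t+1}] + eps_t

lam, m = 0.0, 1.0  # initial proposed rule x_t = lam*x_{t-1} + m*eps_t
for _ in range(200):
    # under the proposed rule, E_t[x_{t+1}] = lam * x_t, so solving the
    # model equation for x_t yields the updated rule coefficients:
    lam_new = a / (1.0 - b * lam)
    m_new = 1.0 / (1.0 - b * lam)
    converged = abs(lam_new - lam) < 1e-14 and abs(m_new - m) < 1e-14
    lam, m = lam_new, m_new
    if converged:
        break

# the stable fixed point solves b*lam^2 - lam + a = 0
lam_exact = (1.0 - math.sqrt(1.0 - 4.0 * a * b)) / (2.0 * b)
```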
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more

L ≡ {H, ψ_ε, ψ_c, B, φ, F} : The linear reference model
x^p(x_{t−1}, ε_t) : The proposed decision rule, x_t components
z^p(x_{t−1}, ε_t) : The proposed decision rule, z_t components
X^p(x_{t−1}) : The proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}) : The proposed decision rule conditional expectations, z_t components
κ : One less than the number of terms in the series representation (κ ≥ 0)
S : Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T : Model evaluation points: Niederreiter sequence in the model variable ergodic region
γ : {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}, collects the decision rule components
G : {X(x_{t−1}), Z(x_{t−1})}, collects the decision rule conditional expectations components
H : {γ, G}, collects decision rule information
{C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c) : The equation system, R^{3N_x+N_ε} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t−1}, ε_t) for the family of model equations {C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c) and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use Formula 2 or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t−1}, ε_t) using the series expression 2.

makeInterpFuncs — Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This can be done in parallel.
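The data-to-function step in makeInterpFuncs, turning values at collocation points into an interpolating function, can be illustrated in one dimension with Newton divided differences on Chebyshev points. This is only a stand-in for the paper's multi-dimensional Smolyak construction; the "interpolation data" here come from a known decision-rule shape:

```python
import math

def chebyshev_points(n, lo, hi):
    """n Chebyshev collocation points on [lo, hi]."""
    return [0.5 * (lo + hi) + 0.5 * (hi - lo) * math.cos(math.pi * (2 * i + 1) / (2 * n))
            for i in range(n)]

def newton_interp(xs, ys):
    """Polynomial interpolant through (xs, ys) via divided differences."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    def p(x):
        acc = coef[-1]
        for i in range(n - 2, -1, -1):   # Horner-style nested evaluation
            acc = acc * (x - xs[i]) + coef[i]
        return acc
    return p

# "solve the model" at each collocation point (here a known k-rule shape)
xs = chebyshev_points(9, 0.15, 0.22)
ys = [0.342 * x ** 0.36 for x in xs]
rule = newton_interp(xs, ys)
```

Away from the collocation points the interpolant tracks the smooth rule closely, which is the property the collocation step relies on.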
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Plots omitted: left panel, "Ergodic Values for K and θ", the simulated (k_t, θ_t) cloud; right panel, the same points in the transformed coordinates.]
Figure 11: The Ergodic Values for k_t, θ_t
X̄ =
0.18853
1.00466
0

σ_X =
0.0112443
0.0387807
0.01

V =
−0.707107   −0.707107   0
−0.707107    0.707107   0
 0           0          1

maxU =
 3.02524
 0.199124
 3

minU =
−3.16879
−0.17629
−3
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
 i   k_i        θ_i        ε_i
 1   0.16818    0.942376   -0.0251104
 2   0.210392   1.08282     0.0232085
 3   0.181393   0.968212   -0.0090041
 4   0.1949     1.017       0.0111288
 5   0.188668   1.00876    -0.0210838
 6   0.20145    1.06007     0.0272351
 7   0.17245    0.945462   -0.00497752
 8   0.214179   1.0876      0.0191819
 9   0.195059   1.03442    -0.0130307
10   0.182278   0.983104    0.00307563
11   0.208273   1.06025    -0.029137
12   0.166906   0.916853    0.00106234
13   0.20192    1.05244    -0.0311503
14   0.187205   1.00787     0.0171686
15   0.213199   1.08502    -0.015044
16   0.17147    0.942887    0.0252218
17   0.179847   0.985589   -0.000699081
18   0.194563   1.03016     0.00911549
19   0.165563   0.915543   -0.0230971
20   0.20705    1.05852     0.0211952
21   0.173322   0.954406   -0.0110174
22   0.2136     1.1016      0.00508892
23   0.184601   0.986988   -0.0271237
24   0.198833   1.03324     0.0131421
25   0.20721    1.07594    -0.0190705
26   0.166932   0.928749    0.0292484
27   0.192926   1.0059     -0.000296424
28   0.179118   0.958169    0.0141487
29   0.172672   0.949185   -0.0180639
30   0.212949   1.09638     0.030255

Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1)
 i   c_i        k_i        θ_i
 1   0.318386   0.16551    0.9202
 2   0.413447   0.214841   1.10215
 3   0.341848   0.177697   0.960777
 4   0.375209   0.195014   1.02742
 5   0.35634    0.185242   0.987273
 6   0.400686   0.208232   1.08481
 7   0.329563   0.171293   0.943165
 8   0.416315   0.216325   1.1025
 9   0.372502   0.193618   1.01959
10   0.351946   0.182927   0.98707
11   0.384948   0.200107   1.02813
12   0.31841    0.165481   0.92202
13   0.377048   0.196011   1.01878
14   0.368995   0.19178    1.02495
15   0.402167   0.209001   1.06547
16   0.339185   0.176282   0.97149
17   0.347449   0.180596   0.979286
18   0.378776   0.196859   1.03785
19   0.308332   0.160276   0.896658
20   0.401836   0.208829   1.07707
21   0.330982   0.172039   0.945622
22   0.415597   0.215941   1.10137
23   0.343927   0.178809   0.960659
24   0.384305   0.199732   1.04488
25   0.393281   0.204401   1.05291
26   0.332894   0.173006   0.962198
27   0.36484    0.189639   1.00262
28   0.345376   0.179513   0.974651
29   0.326232   0.169582   0.933652
30   0.422656   0.219618   1.12227

Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1)
 i   ln(|e_{c,i}|)   ln(|e_{k,i}|)   ln(|e_{θ,i}|)
 1   -6.86423        -7.61023        -6.2776
 2   -6.77469        -7.25654        -6.193
 3   -8.27193        -9.26988        -7.90952
 4   -9.53366        -9.94641        -9.07674
 5   -9.70109       -11.0692        -10.8193
 6   -7.01131        -7.52677        -6.4226
 7   -8.62144        -9.43568        -8.12434
 8   -6.93069        -7.36841        -6.2991
 9   -8.82105        -9.39136        -7.96867
10   -9.48549        -9.94897        -9.04695
11   -6.83337        -7.45765        -6.41963
12  -11.193         -13.9426         -8.33167
13   -7.13458        -7.70834        -6.51919
14   -9.46667       -10.6151        -10.154
15   -7.13317        -7.97018        -6.71715
16   -6.76168        -7.44531        -6.20438
17   -9.46294       -10.7345         -8.60101
18   -8.99962        -9.32606        -8.36754
19   -6.5744         -7.28112        -6.0606
20   -7.26443        -7.74753        -6.65645
21   -7.95047        -8.75065        -7.34261
22   -7.90465        -8.02499        -7.40682
23   -7.78668        -8.88177        -7.3262
24   -8.44035        -8.86441        -7.83514
25   -7.21479        -7.97368        -6.58639
26   -6.40774        -7.09877        -5.86537
27  -11.8365        -11.1009        -11.8397
28   -7.81106        -8.43094        -7.01803
29   -7.28745        -8.06534        -6.74087
30   -6.33557        -6.84989        -5.76803

Table 4: RBC Known Solution Actual Errors, d = (1,1,1)
 i   ln(|ê_{c,i}|)   ln(|ê_{k,i}|)   ln(|ê_{θ,i}|)
 1   -6.84011        -7.59703        -6.2776
 2   -6.79189        -7.33194        -6.193
 3   -8.27296        -9.26585        -7.90952
 4   -9.54759        -9.96466        -9.07674
 5   -9.72214       -11.142         -10.8193
 6   -7.019          -7.58967        -6.4226
 7   -8.61034        -9.41771        -8.12434
 8   -6.95566        -7.44593        -6.2991
 9   -8.83746        -9.43331        -7.96867
10   -9.48388        -9.95406        -9.04695
11   -6.8561         -7.50703        -6.41963
12  -10.8845        -13.6119         -8.33167
13   -7.15654        -7.75234        -6.51919
14   -9.48715       -10.5641        -10.154
15   -7.16809        -8.02412        -6.71715
16   -6.74694        -7.43005        -6.20438
17   -9.46173       -10.7234         -8.60101
18   -9.00034        -9.36818        -8.36754
19   -6.55256        -7.25512        -6.0606
20   -7.28911        -7.80039        -6.65645
21   -7.93653        -8.73939        -7.34261
22   -7.87653        -8.13406        -7.40682
23   -7.79071        -8.88802        -7.3262
24   -8.45665        -8.89927        -7.83514
25   -7.25059        -8.02416        -6.58639
26   -6.38709        -7.07562        -5.86537
27  -11.9887        -11.1639        -11.8397
28   -7.8057         -8.42757        -7.01803
29   -7.27379        -8.05278        -6.74087
30   -6.35248        -6.93629        -5.76803

Table 5: RBC Known Solution Error Approximations, d = (1,1,1)
[Table body omitted: 30 rows of evaluation points (k_t, θ_t, ε_t), t = 1, …, 30.]
Table 6: RBC Known Solution, Model Evaluation Points, d=(2,2,2)
[Table body omitted: 30 rows of solution values at the evaluation points.]
Table 7: RBC Known Solution, Values at Evaluation Points, d=(2,2,2)
[Table body omitted: 30 rows of ln(|e_{c,t}|), ln(|e_{k,t}|), ln(|e_{θ,t}|), t = 1, …, 30.]
Table 8: RBC Known Solution, Actual Errors, d=(2,2,2)
[Table body omitted: 30 rows of ln(|ê_{c,t}|), ln(|ê_{k,t}|), ln(|ê_{θ,t}|), t = 1, …, 30.]
Table 9: RBC Known Solution, Error Approximations, d=(2,2,2)
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  -6.51367    -12.2771    -16.8947
0          -6.0793     -6.26833    -6.29843
1          -6.00805    -6.24202    -6.49835
2          -6.52219    -7.43876    -7.04785
3          -6.57578    -8.03864    -8.32589
4          -6.62831    -9.1522     -9.49314
5          -6.77305    -10.5516    -10.4247
6          -6.38499    -11.6399    -11.455
7          -6.63849    -11.3627    -13.0213
8          -6.38735    -11.7288    -13.9976
9          -6.46759    -11.9826    -15.3372
10         -6.45966    -12.1345    -15.4157
10         -6.77776    -11.5025    -15.8211
15         -6.67778    -12.4593    -15.8899
20         -6.40802    -11.7296    -17.1652
25         -6.53265    -12.1441    -17.1518
30         -6.81949    -11.6097    -16.3748
35         -6.33508    -12.1446    -16.6963
40         -6.52442    -11.4042    -15.6302
Table 10: RBC Known Solution, Impact of Series Length K on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
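A table entry like ln(|ê_{c,t}|) is simply the natural log of the absolute gap between the true and approximated decision rules at an evaluation point. A minimal sketch of producing such an entry; both rules below are made up for illustration and are not the paper's RBC solution or its Smolyak approximation:

```python
# Sketch: produce a log-absolute-error entry like the ln(|e_c|) columns above.
# Both decision rules are hypothetical, NOT the paper's known RBC solution.
from math import log

def rule_true(k, theta):
    return 0.3 * theta * k ** 0.36          # hypothetical "known" rule

def rule_approx(k, theta):
    return 0.3 * theta * k ** 0.36 + 1e-7   # hypothetical approximation

k, theta = 0.18, 1.0                         # one evaluation point
err = abs(rule_approx(k, theta) - rule_true(k, theta))
entry = log(err)                             # natural log, as in the tables
print(entry)
```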
[Table body omitted: 30 rows of evaluation points (k_t, θ_t, ε_t), t = 1, …, 30.]
Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1)
[Table body omitted: 30 rows of decision rule values (c_t, k_t, θ_t), t = 1, …, 30.]
Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1)
[Table body omitted: 30 rows of ln(|ê_{c,t}|), ln(|ê_{k,t}|), ln(|ê_{θ,t}|), t = 1, …, 30.]
Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1)
[Table body omitted: 30 rows of evaluation points (k_t, θ_t, ε_t), t = 1, …, 30.]
Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
[Table body omitted: 30 rows of decision rule values at the evaluation points.]
Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
[Table body omitted: 30 rows of ln(|ê_{c,t}|), ln(|ê_{k,t}|), ln(|ê_{θ,t}|), t = 1, …, 30.]
Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \ge \upsilon I_{ss}$$

$$\max \; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$

subject to

$$c_t + k_t = (1-d) k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\epsilon_t}$$

$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}$$
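With these functional forms the model's nonstochastic steady state is available in closed form from the Euler equation, which is what the constraint threshold υI_ss refers to. A minimal sketch; the parameter values are illustrative placeholders, not the paper's calibration:

```python
# Closed-form nonstochastic steady state for the RBC model above.
# Parameter values are illustrative placeholders, not the paper's calibration.
from math import isclose

alpha, delta, d = 0.36, 0.95, 0.1  # capital share, discount factor, depreciation

# Euler equation at the steady state (theta_ss = 1):
#   1 = delta * (alpha * k^(alpha - 1) + 1 - d)
k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))
i_ss = d * k_ss                      # steady-state investment
c_ss = k_ss ** alpha - i_ss          # resource constraint: c + k' = (1-d)k + k^alpha

# Check the Euler equation residual at (k_ss, c_ss) is numerically zero
euler_residual = 1.0 - delta * (alpha * k_ss ** (alpha - 1.0) + 1.0 - d)
assert isclose(euler_residual, 0.0, abs_tol=1e-12)
print(k_ss, i_ss, c_ss)
```

The investment floor υI_ss in the constraint is then a fraction υ of the steady-state investment computed this way.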
$$\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^t \lambda_t \left( c_t + k_t - \left( (1-d) k_{t-1} + \theta_t f(k_{t-1}) \right) \right) + \delta^t \mu_t \left( (k_t - (1-d) k_{t-1}) - \upsilon I_{ss} \right)$$

so that we can impose

$$I_t = (k_t - (1-d) k_{t-1}) \ge \upsilon I_{ss}$$
The first order conditions become

$$\frac{1}{c_t^{\eta}} + \lambda_t = 0$$

$$\lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta (1-d) \lambda_{t+1} + \mu_t - \delta (1-d) \mu_{t+1} = 0$$

For example, the Euler equations for the neoclassical growth model can be written as

11The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if $\mu > 0$ and $k_t - (1-d) k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \epsilon)} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d) + \delta (1-d) \mu_{t+1} \right) \\
&I_t - (k_t - (1-d) k_{t-1}) \\
&\mu_t \left( k_t - (1-d) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}$$

if $\mu = 0$ and $k_t - (1-d) k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \epsilon)} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d) + \delta (1-d) \mu_{t+1} \right) \\
&I_t - (k_t - (1-d) k_{t-1}) \\
&\mu_t \left( k_t - (1-d) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}$$
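The two branches above encode the complementarity conditions μ_t ≥ 0, I_t − υI_ss ≥ 0, and μ_t(I_t − υI_ss) = 0. A sketch of how a solver might evaluate the constraint block branch by branch; the function and variable names are hypothetical and this is not the paper's Mathematica prototype:

```python
# Sketch: evaluate the occasionally binding investment floor I_t >= upsilon*I_ss
# through its two complementarity branches. Names are hypothetical.

def constraint_residuals(k_t, k_prev, mu_t, d, upsilon, I_ss):
    """Residuals of the constraint block for the branch implied by mu_t.

    Binding branch (mu_t > 0): investment must sit exactly on the floor.
    Slack branch   (mu_t = 0): the multiplier is zero and the floor holds.
    """
    I_t = k_t - (1.0 - d) * k_prev          # investment identity
    gap = I_t - upsilon * I_ss              # distance to the floor
    if mu_t > 0.0:
        # binding: the gap itself is a residual that must be driven to zero
        return [gap, mu_t * gap]
    # slack: multiplier residual; the point must be feasible
    assert gap >= 0.0, "proposed point violates the investment floor"
    return [mu_t, mu_t * gap]

# slack example: investment comfortably above the floor, mu = 0
r = constraint_residuals(k_t=3.8, k_prev=3.8, mu_t=0.0, d=0.1,
                         upsilon=0.975, I_ss=0.38)
assert r == [0.0, 0.0]
```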
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table body omitted: 30 rows of evaluation points (k_t, θ_t, ε_t), t = 1, …, 30.]
Table 17: Occasionally Binding Constraints Model, Evaluation Points, d=(1,1,1)
[Table body omitted: 30 rows of decision rule values (c_t, I_t, k_t), t = 1, …, 30.]
Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(1,1,1)
[Table body omitted: 30 rows of ln(|ê_{c,t}|), ln(|ê_{k,t}|), ln(|ê_{θ,t}|), t = 1, …, 30.]
Table 19: Occasionally Binding Constraints Model, Error Approximations, d=(1,1,1)
[Table body omitted: 30 rows of evaluation points (k_t, θ_t, ε_t), t = 1, …, 30.]
Table 20: Occasionally Binding Constraints Model, Evaluation Points, d=(2,2,2)
[Table body omitted: 30 rows of decision rule values at the evaluation points.]
Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(2,2,2)
[Table body omitted: 30 rows of ln(|ê_{c,t}|), ln(|ê_{k,t}|), ln(|ê_{θ,t}|), t = 1, …, 30.]
Table 22: Occasionally Binding Constraints Model, Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc, February 1997. URL http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function

$$G(x) \equiv E\left[ g(x, \epsilon) \right]$$

then with

$$E_t x^*_{t+1} = G(g(x_{t-1}, \epsilon_t))$$

we have

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t)$$
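For any candidate map g, the conditional expectation function G can be approximated by averaging over shock draws. A minimal sketch with a made-up univariate linear g; the paper's applications are multivariate and use quadrature or polynomial approximation rather than this brute-force Monte Carlo:

```python
# Sketch: G(x) = E[g(x, eps)] approximated by Monte Carlo over eps draws.
# The map g is a made-up univariate example, not a model from the paper.
import random
from math import isclose

def g(x, eps):
    return 0.9 * x + eps          # a stable linear map with an additive shock

def G(x, n_draws=200_000, sigma=0.01, seed=0):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        total += g(x, rng.gauss(0.0, sigma))
    return total / n_draws

# With mean-zero shocks, G(x) should be close to 0.9 * x
assert isclose(G(1.0), 0.9, abs_tol=1e-3)
```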
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z*_t(x_{t-1}, ε):

$$\left\{ x^*_t(x_{t-1}, \epsilon_t) \in \mathbb{R}^L : \; \| x^*_t(x_{t-1}, \epsilon_t) \|_2 \le X \;\; \forall t > 0 \right\}$$

$$z^*_t(x_{t-1}, \epsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \epsilon_t) + H_0 x^*_t(x_{t-1}, \epsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \epsilon_t)$$
Consequently, the exact solution has a representation given by

$$x^*_t(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_{\epsilon} \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^*_{t+\nu}(x_{t-1}, \epsilon)$$

and

$$E\left[ x^*_{t+1}(x_{t-1}, \epsilon) \right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, E\left[ z^*_{t+1+\nu}(x_{t-1}, \epsilon) \right] + (I - F)^{-1} \phi \psi_c$$
with

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t)$$

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)] so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \epsilon_t)), \qquad e^p_t(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \epsilon_t)$$
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, ε):12

$$\left\{ x^p_t(x_{t-1}, \epsilon_t) \in \mathbb{R}^L : \; \| x^p_t(x_{t-1}, \epsilon_t) \|_2 \le X \;\; \forall t > 0 \right\}$$

$$z^p_t(x_{t-1}, \epsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \epsilon_t) + H_0 x^p_t(x_{t-1}, \epsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \epsilon_t)$$

The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_{\epsilon} \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+\nu}(x_{t-1}, \epsilon)$$

and

$$E\left[ x^p_{t+1}(x_{t-1}, \epsilon) \right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+1+\nu}(x_{t-1}, \epsilon) + (I - F)^{-1} \phi \psi_c$$
with

$$e^p_t(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \epsilon_t)$$

Subtracting the two representations,

$$x^*_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^*_{t+\nu}(x_{t-1}, \epsilon) - z^p_{t+\nu}(x_{t-1}, \epsilon) \right)$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \epsilon) - z^p_{t+\nu}(x_{t-1}, \epsilon)$$

gives

$$x^*_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t)$$

$$\left\| x^*_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) \right\|_{\infty} \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \, \left\| \Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t) \right\|_{\infty}$$
By bounding the largest deviation in the paths for the ∆z^p_t, we can bound the largest difference in x*_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
13Since the future values are probability weighted averages of the ∆z^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for ∆z^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
$$\left\| \Delta z_t \right\| \le \max_{x_{-}, \epsilon} \left\| \phi \, M\left( x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon \right) \right\|_2$$

$$\left\| x^*_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) \right\|_{\infty} \le \max_{x_{-}, \epsilon} \left\| (I - F)^{-1} \phi \, M\left( x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon \right) \right\|_2$$
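The right-hand sides above require only the linear-model matrices and the largest one-period residual. A small numeric sketch of accumulating the geometric-series bound, with made-up 2×2 matrices F and φ (F a contraction, as the boundedness argument requires) and a given residual norm:

```python
# Sketch: bound ||x*_t - x^p_t||_inf by sum_nu ||F^nu phi|| * max ||dz||,
# using made-up 2x2 matrices with ||F||_inf < 1. Pure-python helpers keep
# the example self-contained.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inf_norm(A):  # maximum absolute row sum
    return max(sum(abs(v) for v in row) for row in A)

F   = [[0.5, 0.1], [0.0, 0.4]]   # contraction: ||F||_inf = 0.6 < 1
phi = [[1.0, 0.0], [0.2, 1.0]]
max_dz = 1e-6                     # largest one-period residual norm (given)

# Truncated geometric sum of ||F^nu * phi||_inf times the residual bound
bound, Fnu_phi = 0.0, phi
for _ in range(200):
    bound += inf_norm(Fnu_phi) * max_dz
    Fnu_phi = matmul(F, Fnu_phi)

# Crude cross-check: the sum cannot exceed ||phi|| / (1 - ||F||) * max_dz
assert bound < (1.0 / (1.0 - 0.6)) * inf_norm(phi) * max_dz + 1e-12
print(bound)
```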
$$z^*_t = H_{-1} x^*_{t-1} + H_0 x^*_t + H_1 x^*_{t+1}$$

$$z^p_t = H_{-1} x^p_{t-1} + H_0 x^p_t + H_1 x^p_{t+1}$$

$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \epsilon_t)$$

$$e^p_t(x_{t-1}, \epsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \epsilon_t)$$

$$\max_{x_{t-1}, \epsilon_t} \left( \| \phi \Delta z_t \|_2 \right)^2 = \max_{x_{t-1}, \epsilon_t} \left( \left\| \phi \left( H_0 (x^*_t - x^p_t) + H_1 (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$
with

$$M(x_{t-1}, x^*_t, x^*_{t+1}, \epsilon_t) = 0$$

$$\Delta e^p_t(x_{t-1}, \epsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t}$ is non-singular we can write

$$(x^*_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \epsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right)$$
Collecting results,

$$\left( \| \phi \Delta z_t \|_2 \right)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \epsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) + H_1 (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$

$$= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \epsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_1 \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$
Approximating the derivatives using the \(H\) matrices,
\[
\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}} \approx H_+,
\]
produces
\[
\max_{x_{t-1},\varepsilon_t} \left\|\phi\,\Delta e^p_t(x_{t-1},\varepsilon)\right\|_2
\]
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial \((x_{t-1}, \varepsilon_t)\). Consider the case when the transition probabilities \(p_{ij}\) are constant and do not depend on \((x_{t-1}, \varepsilon_t, x_t)\):
\[
E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+1}(x_t)\,|\,s_{t+1} = j)
\]
\[
E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+2}(x_{t+1})\,|\,s_{t+2} = j)
\]
If we know the current state, we can use the known conditional expectation function for the state:
\[
\begin{bmatrix}
E_t(x_{t+1}(x_t)\,|\,s_t = 1)\\
\vdots\\
E_t(x_{t+1}(x_t)\,|\,s_t = N)
\end{bmatrix}
\]
For the next state, we can compute expectations if the errors are independent:
\[
\begin{bmatrix}
E_t(x_{t+2}(x_t)\,|\,s_t = 1)\\
\vdots\\
E_t(x_{t+2}(x_t)\,|\,s_t = N)
\end{bmatrix}
= (P \otimes I)
\begin{bmatrix}
E_t(x_{t+2}(x_{t+1})\,|\,s_{t+1} = 1)\\
\vdots\\
E_t(x_{t+2}(x_{t+1})\,|\,s_{t+1} = N)
\end{bmatrix}
\]
Consider two states \(s_t \in \{0, 1\}\) where the depreciation rates differ, with \(d_0 > d_1\), and
\[
\mathrm{Prob}(s_t = j \,|\, s_{t-1} = i) = p_{ij}
\]
If \(s_t = 0\), \(\mu_t > 0\), and \(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} = 0\):
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \mu_{t+1}\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]

If \(s_t = 0\), \(\mu_t = 0\), and \(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \ge 0\):
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \mu_{t+1}\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]

If \(s_t = 1\), \(\mu_t > 0\), and \(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} = 0\):
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \mu_{t+1}\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
If \(s_t = 1\), \(\mu_t = 0\), and \(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \ge 0\):
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \mu_{t+1}\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
C Algorithm Pseudo-code

\(\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}\): the linear reference model
\(x^p(x_{t-1}, \varepsilon_t)\): the proposed decision rule, \(x_t\) components
\(z^p(x_{t-1}, \varepsilon_t)\): the proposed decision rule, \(z_t\) components
\(X^p(x_{t-1})\): the proposed decision rule conditional expectations, \(x_t\) components
\(Z^p(x_{t-1})\): the proposed decision rule conditional expectations, \(z_t\) components
\(\kappa\): one less than the number of terms in the series representation (\(\kappa \ge 0\))
S: Smolyak an-isotropic grid polynomials \((p_1(x,\varepsilon), \ldots, p_N(x,\varepsilon))\) and their conditional expectations \((P_1(x), \ldots, P_N(x))\)
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
\(\gamma\): \(\{x(x_{t-1},\varepsilon_t), z(x_{t-1},\varepsilon_t)\}\), collects the decision rule components
G: \(\{X(x_{t-1}), Z(x_{t-1})\}\), collects the decision rule conditional expectations components
H: \(\{\gamma, G\}\), collects decision rule information
\(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\): the equation system, \(\mathbb{R}^{3N_x+N_\varepsilon+N_x} \to \mathbb{R}^{N_x}\)
input: \((\mathcal{L}, H_0, \kappa, \{C_1,\ldots,C_M\}, d(x_{t-1},\varepsilon,x_t,E[x_{t+1}])|(b,c), S, T)\)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option ("normConvTol" -> \(10^{-10}\) by default).
2 NestWhile(Function(v, doInterp(\(\mathcal{L}\), v, \(\kappa\), \(\{C_1,\ldots,C_M\}, d(\cdot)|(b,c)\), S)), \(H_0\), notConvergedAt(v, T))
output: \(H_0, H_1, \ldots, H_c\)
Algorithm 1: nestInterp
input: \((\mathcal{L}, H_i, \kappa, \{C_1,\ldots,C_M\}, d(x_{t-1},\varepsilon,x_t,E[x_{t+1}])|(b,c), S)\)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for \(x_t\) for any given value of \((x_{t-1}, \varepsilon_t)\) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(\(\mathcal{L}\), H, \(\kappa\), \(\{C_1,\ldots,C_M\}, d(\cdot)|(b,c)\))
3 \(H_{i+1}\) = makeInterpFuncs(theFindRootFuncs, S)
output: \(H_{i+1}\)
Algorithm 2 doInterp
input: \((\mathcal{L}, H, \kappa, \{C_1,\ldots,C_M\}, d(x_{t-1},\varepsilon,x_t,E[x_{t+1}])|(b,c), S)\)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for \(x_t\) for any given value of \((x_{t-1}, \varepsilon_t)\) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(\(\mathcal{L}\), H, \(\kappa\), v[2]), v[3]}), \(\{C_1,\ldots,C_M\}\))
output: theFuncOptionsTriples, d
Algorithm 3 genFindRootFuncs
input: \((\mathcal{L}, G, \kappa, M)\)
1 Symbolically constructs functions that return \(x_t\) for a given \((x_{t-1}, \varepsilon_t)\).
2 \(Z = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}\) = genZsForFindRoot(\(\mathcal{L}\), \(x^p_t\), G)
3 modEqnArgsFunc = genLilXkZkFunc(\(\mathcal{L}\), Z)
4 \(\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}\) = modEqnArgsFunc(\(x_{t-1}, \varepsilon_t, z_t\))
5 funcOfXtZt = FindRoot(\((x^p_t, z_t)\), \(\{M(\mathcal{X}), x_t - x^p_t\}\))
6 funcOfXtm1Eps = Function(\((x_{t-1}, \varepsilon_t)\), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: \((x^p_t, \mathcal{L}, G, \kappa, M)\)
1 Symbolically compute the \(\kappa - 1\) future conditional Z vectors needed for the series formula (an empty list for \(\kappa = 1\)).
2 \(X_t = x^p_t\)
3 for \(i = 1\); \(i < \kappa - 1\); \(i{+}{+}\) do
4   \(\{X_{t+i}, Z_{t+i}\} = G(X_{t+i-1})\)
5 end
output: \(\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}\)
Algorithm 5: genZsForFindRoot
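The loop in Algorithm 5 amounts to iterating the conditional-expectation rule forward and collecting the Z components along the way. A minimal sketch in Python; the function `G` and its return convention are stand-ins for the paper's Mathematica code:

```python
def gen_z_path(G, x_t, kappa):
    """Iterate the expectation rule G forward kappa - 1 times from x_t,
    collecting the future Z vectors the series formula needs.
    Returns an empty list for kappa = 1."""
    zs = []
    x = x_t
    for _ in range(kappa - 1):
        x, z = G(x)  # G returns the next expected state and its z component
        zs.append(z)
    return zs
```

The collected list is then handed to the series-summation step (fSumC below).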
input: \((\mathcal{L} = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})\)
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 \(\gamma(x, \varepsilon) = \begin{bmatrix} Bx + (I-F)^{-1}\phi\psi_c + \psi_\varepsilon\varepsilon_t \\ 0_{N_x} \end{bmatrix}\)
3 \(G(x) = \begin{bmatrix} Bx + (I-F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}\)
4 \(H = \{\gamma, G\}\)
output: H
Algorithm 6: genBothX0Z0Funcs
input: \((\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})\)
1 Symbolically creates a function that uses an initial \(Z_t\) path to generate a function of \((x_{t-1}, \varepsilon_t, z_t)\) providing (potentially symbolic) inputs \((x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)\) for the model system equations M.
2 fCon = fSumC(\(\phi, F, \psi_z, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}\))
3 xtVals = genXtOfXtm1(\(\mathcal{L}, x_{t-1}, \varepsilon_t, z_t\), fCon)
4 xtp1Vals = genXtp1OfXt(\(\mathcal{L}\), xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: \((\mathcal{L}, x_{t-1}, \varepsilon_t, z_t, \mathrm{fCon})\)
1 Symbolically creates a function that uses an initial \(Z_t\) path to generate a function of \((x_{t-1}, \varepsilon_t, z_t)\) providing (potentially symbolically) the \(x_t\) component of the inputs \((x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)\) for model system equations M.
2 \(x_t = Bx_{t-1} + (I-F)^{-1}\phi\psi_c + \phi\psi_\varepsilon\varepsilon_t + \phi\psi_z z_t + F\,\mathrm{fCon}\)
output: \(x_t\)
Algorithm 8: genXtOfXtm1
input: \((\mathcal{L}, x_{t-1}, \mathrm{fCon})\)
1 Symbolically creates a function that uses an initial \(Z_t\) path to generate a function of \((x_{t-1}, \varepsilon_t, z_t)\) providing (potentially symbolically) the \(x_{t+1}\) component of the inputs \((x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)\) for model system equations M.
2 \(x_t = Bx_{t-1} + (I-F)^{-1}\phi\psi_c + \phi\psi_\varepsilon\varepsilon_t + \phi\psi_z z_t + \mathrm{fCon}\)
output: \(x_t\)
Algorithm 9: genXtp1OfXt
input: \((\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})\)
1 Numerically sums the F contribution for the series.
2 \(S = \sum_{\nu} F^{\nu-1}\phi Z_{t+\nu}\)
output: S
Algorithm 10: fSumC
input: \((\{C_1,\ldots,C_M\}, d(x_{t-1},\varepsilon,x_t,E[x_{t+1}])|(b,c), S)\)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2 interpData = genInterpData(\(\{C_1,\ldots,C_M\}, d(\cdot)|(b,c)\), S)
3 \(\{\gamma, G\}\) = interpDataToFunc(interpData, S)
output: \(\{\gamma, G\}\)
Algorithm 11: makeInterpFuncs
input: \((\{C_1,\ldots,C_M\}, d(x_{t-1},\varepsilon,x_t,E[x_{t+1}])|(b,c), S)\)
1 Solves the model equations at the collocation points, producing the interpolation data used by makeInterpFuncs.
output: interpData
Algorithm 12: genInterpData
input: \((C(x_{t-1}, \varepsilon_t))\)
1 Evaluate the model equations, obeying pre- and post-conditions.
output: \(x_t\) or failed
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: \(\{\gamma, G\}\)
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 15: smolyakInterpolationPrep
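As a point of reference for what such a preparation routine returns, here is a one-dimensional Chebyshev analogue: collocation points plus the basis ("Psi") matrix whose inverse maps function values to interpolation coefficients. This is a sketch only; the paper's routine builds an an-isotropic Smolyak grid with a PCA-adapted domain, not this simple one-dimensional basis.

```python
import numpy as np

def cheb_prep(degree):
    """Collocation points and basis matrix for a 1-D Chebyshev interpolant."""
    # Chebyshev-Gauss-Lobatto points on [-1, 1]
    pts = np.cos(np.pi * np.arange(degree + 1) / degree)
    # psi[i, j] = T_j(pts[i]); solving psi @ c = f(pts) yields coefficients c
    psi = np.polynomial.chebyshev.chebvander(pts, degree)
    return pts, psi
```

Solving `psi @ c = f(pts)` and evaluating with `chebval` reproduces any polynomial up to the chosen degree exactly.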
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 19: genSolutionPath
Figure 3: model variable values
Figure 4: RBC Known Solution Truncation Approximate Error Versus Actual, "Almost Arbitrary" Linear Reference Model (RBC model bounds \(B_n\) versus actual \(Z_n\))
Figure 5: RBC Known Solution Truncation Approximate Error Versus Actual, Stochastic Steady State Linearization Linear Reference Model (RBC model bounds \(B_n\) versus actual \(Z_n\))
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form
\[
h_i(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = h^{det}_{i0}(x_{t-1}, x_t, \varepsilon_t) + \sum_{j=1}^{p_i}\left[h^{det}_{ij}(x_{t-1}, x_t, \varepsilon_t)\,h^{nondet}_{ij}(x_{t+1})\right] = 0
\]
This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as
\[
h^{det}_{10}(\cdot) = \frac{1}{c^{\eta}_t}, \qquad h^{det}_{11}(\cdot) = \alpha\delta k^{\alpha-1}_t, \qquad h^{nondet}_{11}(\cdot) = E_t\!\left(\frac{\theta_{t+1}}{c^{\eta}_{t+1}}\right)
\]
\[
h^{det}_{20}(\cdot) = c_t + k_t - \theta_t k^{\alpha}_{t-1}, \qquad h^{det}_{21}(\cdot) = 0
\]
\[
h^{det}_{30}(\cdot) = \ln\theta_t - (\rho\ln\theta_{t-1} + \varepsilon_t), \qquad h^{det}_{31}(\cdot) = 0
\]
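To illustrate the decomposition, here is a sketch of the first Euler component for \(\eta = 1\), with the nondeterministic factor entering only through its precomputed conditional expectation. The argument name `E_theta_over_c_next` and the sign convention, which folds the minus sign into the product term, are assumptions of this sketch, not the paper's code:

```python
def euler_residual(c_t, k_t, E_theta_over_c_next, alpha, delta):
    """Residual of the RBC Euler equation with eta = 1:
    1/c_t - alpha*delta*k_t^(alpha-1) * E[theta_{t+1}/c_{t+1}],
    built from the h_10^det, h_11^det, and E[h_11^nondet] components."""
    h_det0 = 1.0 / c_t                               # h_10^det
    h_det1 = alpha * delta * k_t ** (alpha - 1.0)    # h_11^det
    return h_det0 - h_det1 * E_theta_over_c_next     # zero at a solution
```

At a point where \(\alpha\delta k^{\alpha-1} = 1\) (i.e. \(k = (\alpha\delta)^{1/(1-\alpha)}\)) and the expectation equals \(1/c\), the residual vanishes, as the Euler equation requires.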
Since we need to compute the conditional expectations of nonlinear expressions, this setup makes it possible to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time \(t\) with \(\varepsilon_t\) known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation
\[
N_t = \frac{\theta_t}{c_t},
\]
substituting \(N_{t+1}\) for \(\frac{\theta_{t+1}}{c_{t+1}}\) in the first equation, and linearizing the RBC model about the ergodic mean.
This leads to
\[
\left[H_- \;\; H_0 \;\; H_+\right] =
\begin{bmatrix}
0 & 0 & 0 & 0 & -7.6986 & 9.47963 & 0 & 0 & 0 & 0 & -0.999 & 0\\
0 & -1.05263 & 0 & 0 & 1 & 1 & 0 & -0.547185 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 7.7063 & 0 & 1 & -2.77463 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & -0.949953 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0
\end{bmatrix}
\]
with
\[
\psi_\varepsilon = \begin{bmatrix} 0 \\ 0 \\ 1 \\ 0 \end{bmatrix}, \qquad \psi_z = I.
\]
These coefficients produce a unique stable linear solution:
\[
B = \begin{bmatrix}
0 & 0.692632 & 0 & 0.342028\\
0 & 0.36 & 0 & 0.177771\\
0 & -5.33763 & 0 & 0\\
0 & 0 & 0 & 0.949953
\end{bmatrix}, \qquad
\phi = \begin{bmatrix}
-0.0444237 & 0.658 & 0 & 0.360048\\
0.0444237 & 0.342 & 0 & 0.187137\\
0.342342 & -5.07075 & 1 & 0\\
0 & 0 & 0 & 1
\end{bmatrix},
\]
\[
F = \begin{bmatrix}
0 & 0 & -0.0443793 & 0\\
0 & 0 & 0.0443793 & 0\\
0 & 0 & 0.342 & 0\\
0 & 0 & 0 & 0
\end{bmatrix}, \qquad
\psi_c = \begin{bmatrix}
-3.7735\\
-0.197184\\
2.77741\\
0.0500976
\end{bmatrix}.
\]
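The claim that these coefficients produce a unique stable linear solution can be spot-checked by computing the spectral radius of the transition matrix B reported above. This is a quick numerical check, not part of the paper's algorithm:

```python
import numpy as np

# Transition matrix B of the linear reference model, as reported above
B = np.array([[0.0,  0.692632, 0.0, 0.342028],
              [0.0,  0.36,     0.0, 0.177771],
              [0.0, -5.33763,  0.0, 0.0     ],
              [0.0,  0.0,      0.0, 0.949953]])

# Stability of x_t = B x_{t-1} + ... requires all eigenvalues inside the unit circle
spectral_radius = max(abs(np.linalg.eigvals(B)))
```

The largest eigenvalue (about 0.95) comes from the persistent technology process, so simulated paths from the reference rule remain bounded.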
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.\(^9\)
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the \(\Delta z^p_t\), we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution \(x^*_t = g(x_{t-1}, \varepsilon_t)\), define the conditional expectations function
\[
G(x) \equiv E[g(x, \varepsilon)];
\]
then with
\[
E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))
\]
we have
\[
M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t),
\]
\[
x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \qquad \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le X \quad \forall t > 0.
\]
Now consider a proposed solution for the model, \(x^p_t = g^p(x_{t-1}, \varepsilon_t)\), and define \(G^p(x) \equiv E[g^p(x, \varepsilon)]\), so that
\[
E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).
\]
Then
\[
\|x^*_t(x_{t-1},\varepsilon) - x^p_t(x_{t-1},\varepsilon)\|_\infty \le \max_{x,\varepsilon} \left\|(I-F)^{-1}\phi\,M(x, g^p(x,\varepsilon), G^p(g^p(x,\varepsilon)), \varepsilon)\right\|_2,
\]
which can be approximated by
\[
\max_{x_{t-1},\varepsilon_t} \left\|\phi\,e^p_t(x_{t-1}, \varepsilon)\right\|_2.
\]
For proof, see Appendix A.

\(^9\)This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
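In practice, the maximization in the bound is approximated over a finite set of representative points. A minimal numerical sketch, assuming a residual function `e_p` that returns the model-equation error along the proposed rule; all names are illustrative:

```python
import numpy as np

def global_error_bound(F, phi, e_p, points):
    """Approximate max over sample points of ||(I - F)^{-1} phi e^p(x, eps)||_2."""
    amp = np.linalg.solve(np.eye(F.shape[0]) - F, phi)  # (I - F)^{-1} phi
    return max(np.linalg.norm(amp @ e_p(x, eps)) for x, eps in points)
```

The cheaper approximation in the text simply drops the \((I-F)^{-1}\) amplification, evaluating \(\max \|\phi\, e^p\|_2\) over the same points.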
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in \(x_t\) at \((x_{t-1}, \varepsilon_t)\):
\[
e(x_{t-1}, \varepsilon_t) \equiv x(x_{t-1}, \varepsilon_t) - x^p(x_{t-1}, \varepsilon_t).
\]
Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model \(\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}\) in (2) to approximate \(e\):
\[
X^p(x) \equiv E[x^p(x, \varepsilon)],
\]
\[
x_{t+k+1} = X^p(x_{t+k}), \quad k = 1, \ldots, K.
\]
We can define functions \(z^p_t\) by
\[
z^p_t \equiv M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t), \qquad z^p_{t+k} \equiv M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0),
\]
\[
e(x_{t-1}, \varepsilon_t) \approx \sum_{\nu=0}^{K} F^{\nu}\phi\,z^p_{t+\nu}.
\]
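The truncated sum can be evaluated directly once the residual path is in hand. A sketch, assuming `z_path` holds the residual vectors \(z^p_t, \ldots, z^p_{t+K}\):

```python
import numpy as np

def series_error(F, phi, z_path):
    """Evaluate sum_{nu=0}^{K} F^nu phi z^p_{t+nu} for a list of residuals."""
    err = np.zeros(phi.shape[0])
    F_pow = np.eye(F.shape[0])
    for z in z_path:
        err += F_pow @ phi @ z
        F_pow = F_pow @ F          # advance to the next power of F
    return err
```

Because the spectral radius of F is below one, later terms shrink geometrically, which is what makes a small K usable.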
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters \(\alpha = 0.36\), \(\delta = 0.95\), \(\eta = 1\), \(\rho = 0.95\), \(\sigma = 0.01\). I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:
\[
0.20 < \alpha < 0.40, \qquad 0.85 < \delta < 0.99, \qquad 0.85 < \rho < 0.99, \qquad 0.005 < \sigma < 0.015,
\]
producing the following 10 model specifications:

α        δ         η   ρ         σ
0.35     0.92875   1   0.957188  0.00875
0.275    0.9725    1   0.906875  0.01375
0.375    0.8675    1   0.924375  0.01125
0.225    0.94625   1   0.891563  0.00625
0.325    0.91125   1   0.944063  0.006875
0.2625   0.922187  1   0.98125   0.011875
0.3625   0.887187  1   0.85875   0.014375
0.2125   0.965938  1   0.965938  0.009375
0.3125   0.860938  1   0.878438  0.008125
0.2375   0.904688  1   0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models \(\mathcal{L}_i\) and 10 decision rules based on the linearizations. As a result, I have \(100 = 10 \times 10\) combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points \((k_{i,t-1}, \theta_{i,t-1}, \varepsilon_{i,t})\) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:
\[
e_i = \alpha + \beta\hat{e}_i + u_i.
\]
Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 \(R^2\) and estimated variance values, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of \(K\), the number of terms in the error formula (\(K = 0, 1, 10, 30\)).\(^{10}\) The regressions with \(K = 0\) correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with \(K = 0\), using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The \(R^2\) values are all above 60 percent and are often near one, the \(\beta\) coefficients are generally close to one, and the constant tends to be close to zero.

\(^{10}\)I don't display results for \(\theta_t\) because the formula predicts the error in \(\theta_t\) perfectly, since it is determined by a backward-looking equation.
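The accuracy regressions can be reproduced with ordinary least squares. A sketch on synthetic data (the experiment itself uses the 30 Niederreiter points per pairing):

```python
import numpy as np

def error_regression(e_actual, e_predicted):
    """OLS fit of e_i = alpha + beta * ehat_i + u_i; returns (alpha, beta, R^2)."""
    X = np.column_stack([np.ones_like(e_predicted), e_predicted])
    coef, *_ = np.linalg.lstsq(X, e_actual, rcond=None)
    resid = e_actual - X @ coef
    r2 = 1.0 - resid.var() / e_actual.var()
    return coef[0], coef[1], r2
```

A well-calibrated error formula yields \(\alpha\) near zero, \(\beta\) near one, and \(R^2\) near one, which is the pattern reported in Figures 6 through 9.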
Figure 6: \(c_t\) \((\sigma^2, R^2)\) scatter plots for the \(c\) error regressions, \(K = 0, 1, 10, 30\)
Figure 7: \(k_t\) \((\sigma^2, R^2)\) scatter plots for the \(k\) error regressions, \(K = 0, 1, 10, 30\)
Figure 8: \(c_t\) \((\alpha, \beta)\) scatter plots for the \(c\) error regressions, \(K = 0, 1, 10, 30\)
Figure 9: \(k_t\) \((\alpha, \beta)\) scatter plots for the \(k\) error regressions, \(K = 0, 1, 10, 30\)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given \((x_{t-1}, \varepsilon_t)\), this collection of equation systems produces a unique solution for \(x_t\). This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time-invariant model solutions \(x_t = x(x_{t-1}, \varepsilon)\). Given a decision rule \(x(x_{t-1}, \varepsilon)\), denote its conditional expectation \(X(x_{t-1}) \equiv E[x(x_{t-1}, \varepsilon)]\). Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.

It will be convenient for describing the algorithms to write the models in the form
\[
\{C_1, \ldots, C_M\}, \quad d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c),
\]
where each
\[
C_i \equiv \left(b_i(x_{t-1}, \varepsilon_t),\; M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\; c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\right)
\]
represents a set of model equations \(M_i\) with Boolean-valued gate functions \((b_i, c_i)\). In the algorithm's implementation, the \(b_i\) will be a Boolean function precondition and the \(c_i\) will be a Boolean function post-condition for a given equation system. If some \(b_i\) is false, then there is no need to try to solve system \(i\). If \(c_i\) is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The \(d\) will be a function for choosing among the candidate \(x_t(x_{t-1}, \varepsilon_t)\) values. The function \(d\) uses the truth values from the \((b, c)\) and the possible values of the state variables to select one and only one \(x_t\). Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given \((x_{t-1}, \varepsilon_t)\).

Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
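The gatekeeper structure can be sketched as a dispatch over \((b_i, M_i, c_i)\) triples. Everything here (the names, the scalar state, the selection rule) is illustrative rather than the paper's implementation:

```python
def solve_gated(triples, select, x_prev, eps):
    """Try each (precondition, solver, postcondition) triple; the selection
    rule d then picks one and only one x_t from the acceptable candidates."""
    candidates = []
    for pre, solve_M, post in triples:
        if not pre(x_prev, eps):
            continue                      # b_i false: skip this system
        x_t = solve_M(x_prev, eps)
        if post(x_prev, eps, x_t):        # c_i true: acceptable solution
            candidates.append(x_t)
    return select(candidates)             # d: one and only one x_t
```

For a complementary-slackness model, one triple would hold the binding-constraint equations and another the slack equations, with the pre- and post-conditions enforcing the sign restrictions.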
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations of future values in the proposed solution with the time \(t\) values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables \(x_t(x_{t-1}, \varepsilon_t)\) as well as the new \(z_t(x_{t-1}, \varepsilon_t)\) variables. These new constraints help keep proposed solutions bounded during the solution iterations, improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule \(x(x_{t-1}, \varepsilon)\) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain, and I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time \(t\) computations are all deterministic.

There are a number of new steps. One must specify a linear reference model, and one must choose a value for \(K\), the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms correspond to more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution, so the terms in the series will initially be incorrect. The Smolyak grid adds more error to the calculation: if there are many grid points, it is more likely that increasing \(K\) can improve the quality of the solution; if the grid is too sparse, increasing \(K\) will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates; a description of the actual multi-system implementation follows. We seek
\[
M(x_{t-1}, x(x_{t-1}, \varepsilon_t), X(x(x_{t-1}, \varepsilon_t)), \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).
\]
Given a proposed model solution
\[
x_t = x^p(x_{t-1}, \varepsilon_t),
\]
compute
\[
X^p(x_{t-1}) \equiv E[x^p(x_{t-1}, \varepsilon_t)].
\]
We will use a linear reference model \(\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}\) to construct a series of \(z^p\) functions that help improve the accuracy of the proposed solution.
We can define functions \(z^p, Z^p\) by
\[
z^p(x_{t-1}, \varepsilon) \equiv H
\begin{bmatrix}
x_{t-1}\\
x^p(x_{t-1}, \varepsilon)\\
X^p(x^p(x_{t-1}, \varepsilon))
\end{bmatrix}
+ \psi_\varepsilon\varepsilon_t + \psi_c,
\]
\[
Z^p(x_{t-1}) \equiv E[z^p(x_{t-1}, \varepsilon)].
\]
• The algorithm loop begins here. Define conditional expectations paths for \(x_t, z_t\):
\[
x_{t+k+1} = X(x_{t+k}), \qquad z_{t+k+1} = Z(x_{t+k}) \quad \forall k \ge 0.
\]
• Using the \(z^p(x_{t-1}, \varepsilon)\) series, we get expressions for \(x_t\) and \(E[x_{t+1}]\) consistent with \(\mathcal{L}\) and the conditional expectations path:
\[
X_t = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi Z(x_{t+\nu}),
\]
\[
E[X_{t+1}] = BX_t + (I-F)^{-1}\phi\psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1}\phi Z(x_{t+\nu}) \qquad \forall t, k \ge 0.
\]
• Use the model equations \(M(x_{t-1}, x_t, E[x_{t+1}], \varepsilon) = 0\) and \(x_t(x_{t-1}, \varepsilon_t) = X_t(x_{t-1}, \varepsilon_t)\) to determine \(x^{p'}(x_{t-1}, \varepsilon), z^{p'}(x_{t-1}, \varepsilon)\).
• Set \(x^p(x_{t-1}, \varepsilon) = x^{p'}(x_{t-1}, \varepsilon)\) and \(z^p(x_{t-1}, \varepsilon) = z^{p'}(x_{t-1}, \varepsilon)\), and repeat the loop.
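The loop above is a fixed-point iteration on the decision rule, stopped when the rule's values at test points settle. A minimal sketch with a stand-in update map; the real update solves the model equations at collocation points:

```python
import numpy as np

def iterate_rule(update, rule, test_points, tol=1e-10, max_iter=500):
    """Repeat rule <- update(rule) until values at test_points stop moving."""
    vals = np.array([rule(p) for p in test_points])
    for _ in range(max_iter):
        rule = update(rule)
        new_vals = np.array([rule(p) for p in test_points])
        if np.max(np.abs(new_vals - vals)) < tol:
            break
        vals = new_vals
    return rule
```

With a contraction as the update, the iteration converges to the unique fixed point; the z-constraints in the paper's version are there to keep the iterates bounded on the way.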
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation, and Figure 10 provides a graphic depiction of the function call tree. For more
\(\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}\): the linear reference model
\(x^p(x_{t-1}, \varepsilon_t)\): the proposed decision rule, \(x_t\) components
\(z^p(x_{t-1}, \varepsilon_t)\): the proposed decision rule, \(z_t\) components
\(X^p(x_{t-1})\): the proposed decision rule conditional expectations, \(x_t\) components
\(Z^p(x_{t-1})\): the proposed decision rule conditional expectations, \(z_t\) components
\(\kappa\): one less than the number of terms in the series representation (\(\kappa \ge 0\))
S: Smolyak an-isotropic grid polynomials \((p_1(x,\varepsilon), \ldots, p_N(x,\varepsilon))\) and their conditional expectations \((P_1(x), \ldots, P_N(x))\)
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
\(\gamma\): \(\{x(x_{t-1},\varepsilon_t), z(x_{t-1},\varepsilon_t)\}\), collects the decision rule components
G: \(\{X(x_{t-1}), Z(x_{t-1})\}\), collects the decision rule conditional expectations components
H: \(\{\gamma, G\}\), collects decision rule information
\(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\): the equation system, \(\mathbb{R}^{3N_x+N_\varepsilon+N_x} \to \mathbb{R}^{N_x}\)

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features: many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions, and the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot genLilXkZkFunc
fSumC genXtOfXtm1 genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc
Figure 10 Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp — Symbolically generates functions that return a unique \(x_t\) for any value of \((x_{t-1}, \varepsilon_t)\) for the family of model equations \(\{C_1,\ldots,C_M\}, d(x_{t-1},\varepsilon,x_t,E[x_{t+1}])|(b,c)\), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs — The function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function; each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for \(x_t\) for any given value of \((x_{t-1}, \varepsilon_t)\); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc — Symbolically creates a function that uses an initial \(Z_t\) path to generate a function of \((x_{t-1}, \varepsilon_t, z_t)\) providing (potentially symbolic) inputs \((x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)\) for the model system equations M. This routine provides the information for constructing \(x_t(x_{t-1}, \varepsilon_t)\) using the series expression (2).
makeInterpFuncs — Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: \(\eta = 1\), \(U(c) = \log(c)\)

This subsection uses the series representation to compute the solution for a case where the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of \(k_t\) and \(\theta_t\) resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
Figure 11: The Ergodic Values for \(k_t, \theta_t\) (left: the simulated ergodic values of \(k\) and \(\theta\); right: the transformed values)
\[
\bar{X} = \begin{bmatrix} 0.18853 \\ 1.00466 \\ 0 \end{bmatrix}, \qquad
\sigma_X = \begin{bmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{bmatrix}, \qquad
V = \begin{bmatrix}
-0.707107 & -0.707107 & 0\\
-0.707107 & 0.707107 & 0\\
0 & 0 & 1
\end{bmatrix},
\]
\[
\mathrm{maxU} = \begin{bmatrix} 3.02524 \\ 0.199124 \\ 3 \end{bmatrix}, \qquad
\mathrm{minU} = \begin{bmatrix} -3.16879 \\ -0.17629 \\ -3 \end{bmatrix}.
\]
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for \((c_t, k_t, \theta_t)\) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Tables 6 through 9 repeat this information when the Smolyak polynomial degrees of approximation were (2,2,2): Table 6 shows the evaluation points, Table 7 the decision rule prescriptions, Table 8 the actual errors, and Table 9 the estimated errors. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with \(K = 5\). Table 10 shows the effect of increasing the value of \(K\). Generally, increasing \(K\) increases the accuracy of the solution; the value of \(K\) has more effect when the Smolyak grid is more densely populated with collocation points.
\(k_i\)      \(\theta_i\)    \(\varepsilon_i\)
0.16818   0.942376  -0.0251104
0.210392  1.08282    0.0232085
0.181393  0.968212  -0.0090041
0.1949    1.017      0.0111288
0.188668  1.00876   -0.0210838
0.20145   1.06007    0.0272351
0.17245   0.945462  -0.00497752
0.214179  1.0876     0.0191819
0.195059  1.03442   -0.0130307
0.182278  0.983104   0.00307563
0.208273  1.06025   -0.029137
0.166906  0.916853   0.00106234
0.20192   1.05244   -0.0311503
0.187205  1.00787    0.0171686
0.213199  1.08502   -0.015044
0.17147   0.942887   0.0252218
0.179847  0.985589  -0.000699081
0.194563  1.03016    0.000911549
0.165563  0.915543  -0.0230971
0.20705   1.05852    0.0211952
0.173322  0.954406  -0.0110174
0.2136    1.1016     0.00508892
0.184601  0.986988  -0.0271237
0.198833  1.03324    0.0131421
0.20721   1.07594   -0.0190705
0.166932  0.928749   0.0292484
0.192926  1.0059    -0.000296424
0.179118  0.958169   0.0141487
0.172672  0.949185  -0.0180639
0.212949  1.09638    0.030255

Table 2: RBC Known Solution Model Evaluation Points, d=(1,1,1)
[30 rows of (c_i, k_i, θ_i) values omitted]

Table 3: RBC Known Solution, Values at Evaluation Points, d=(1,1,1)
[30 rows of (ln|e_c|, ln|e_k|, ln|e_θ|) values omitted]

Table 4: RBC Known Solution, Actual Errors, d=(1,1,1)
[30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values omitted]

Table 5: RBC Known Solution, Error Approximations, d=(1,1,1)
[30 rows of (k_i, θ_i, ε_i) evaluation-point values omitted]

Table 6: RBC Known Solution, Model Evaluation Points, d=(2,2,2)
[30 rows of (c_i, k_i, θ_i) values omitted]

Table 7: RBC Known Solution, Values at Evaluation Points, d=(2,2,2)
[30 rows of (ln|e_c|, ln|e_k|, ln|e_θ|) values omitted]

Table 8: RBC Known Solution, Actual Errors, d=(2,2,2)
[30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values omitted]

Table 9: RBC Known Solution, Error Approximations, d=(2,2,2)
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  -6.51367    -12.2771    -16.8947
0          -6.0793     -6.26833    -6.29843
1          -6.00805    -6.24202    -6.49835
2          -6.52219    -7.43876    -7.04785
3          -6.57578    -8.03864    -8.32589
4          -6.62831    -9.1522     -9.49314
5          -6.77305    -10.5516    -10.4247
6          -6.38499    -11.6399    -11.455
7          -6.63849    -11.3627    -13.0213
8          -6.38735    -11.7288    -13.9976
9          -6.46759    -11.9826    -15.3372
10         -6.45966    -12.1345    -15.4157
10         -6.77776    -11.5025    -15.8211
15         -6.67778    -12.4593    -15.8899
20         -6.40802    -11.7296    -17.1652
25         -6.53265    -12.1441    -17.1518
30         -6.81949    -11.6097    -16.3748
35         -6.33508    -12.1446    -16.6963
40         -6.52442    -11.4042    -15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 rows of (k_i, θ_i, ε_i) evaluation-point values omitted]

Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1)
[30 rows of (c_i, k_i, θ_i) values omitted]

Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1)
[30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values omitted]

Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1)
[30 rows of (k_i, θ_i, ε_i) evaluation-point values omitted]

Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
[30 rows of (c_i, k_i, θ_i) values omitted]

Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
[30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values omitted]

Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches[11] (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

The optimization problem becomes

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} - 1) / (1 - η)

The Lagrangian is

L = u(c_t) + E_t [ Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t - ((1 - d) k_{t-1} + θ_t f(k_{t-1}))) + δ^t μ_t ((k_t - (1 - d) k_{t-1}) - υ I_ss) ]

so that we can impose

I_t = (k_t - (1 - d) k_{t-1}) ≥ υ I_ss

The first order conditions become

1/c_t^η + λ_t = 0
λ_t - δ λ_{t+1} θ_{t+1} α k_t^{α-1} - δ(1 - d) λ_{t+1} + μ_t - δ(1 - d) μ_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as

[11] The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) = 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 - d) + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)

if (μ_t = 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) ≥ 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 - d) + δ(1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)
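A minimal sketch of how the two regime branches can be evaluated numerically follows. All parameter values and the helper function are illustrative assumptions (η = 1, with N_t = λ_t θ_t as in the system above), not the paper's implementation.

```python
import math

# Illustrative parameter values (assumed, not calibrated from the paper)
alpha, d, delta, rho, upsilon, I_ss = 0.36, 0.10, 0.95, 0.95, 0.975, 0.05

def obc_residuals(state, guess):
    """Residual vector of the regime system above; zero at an exact solution.
    state = (k_{t-1}, theta_{t-1}, eps); guess stacks time-t and t+1 unknowns."""
    k_lag, theta_lag, eps = state
    c, I, k, lam, N, mu, theta, lam_next, N_next, mu_next = guess
    slack = k - (1.0 - d) * k_lag - upsilon * I_ss        # I_t - upsilon * I_ss
    return [
        lam - 1.0 / c,                                     # lambda_t - 1/c_t (eta = 1)
        c + I - theta * k_lag ** alpha,                    # resource constraint
        N - lam * theta,                                   # N_t - lambda_t theta_t
        theta - math.exp(rho * math.log(theta_lag) + eps), # technology process
        lam + mu - (alpha * k ** (alpha - 1.0) * delta * N_next
                    + lam_next * delta * (1.0 - d)
                    + delta * (1.0 - d) * mu_next),        # Euler equation with multiplier
        I - (k - (1.0 - d) * k_lag),                       # investment identity
        slack if mu > 0.0 else mu,                         # complementary slackness branch
    ]
```

With μ_t > 0 the last entry enforces I_t = υ I_ss; with μ_t = 0 it is trivially zero, and the solver checks the slack condition I_t ≥ υ I_ss separately.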
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 rows of (k_i, θ_i, ε_i) evaluation-point values omitted]

Table 17: Occasionally Binding Constraints Model, Evaluation Points, d=(1,1,1)
[30 rows of (c_i, I_i, k_i) values omitted]

Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(1,1,1)
[30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values omitted]

Table 19: Occasionally Binding Constraints Model, Error Approximations, d=(1,1,1)
[30 rows of (k_i, θ_i, ε_i) evaluation-point values omitted]

Table 20: Occasionally Binding Constraints Model, Evaluation Points, d=(2,2,2)
[30 rows of (c_i, I_i, k_i) values omitted]

Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(2,2,2)
[30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values omitted]

Table 22: Occasionally Binding Constraints Model, Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–1311, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then with

E_t x_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z_t(x_{t-1}, ε):

{x_t(x_{t-1}, ε_t) ∈ R^L : ‖x_t(x_{t-1}, ε_t)‖_2 ≤ X  ∀t > 0}

z_t(x_{t-1}, ε_t) ≡ H_{-1} x_{t-1}(x_{t-1}, ε_t) + H_0 x_t(x_{t-1}, ε_t) + H_1 x_{t+1}(x_{t-1}, ε_t)

Consequently, the exact solution has a representation given by

x_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I - F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}(x_{t-1}, ε)

and

E[x_{t+1}(x_{t-1}, ε)] = B x_t + Σ_{ν=0}^∞ F^ν φ E[z_{t+1+ν}(x_{t-1}, ε)] + (I - F)^{-1} φ ψ_c

with

M(x_{t-1}, x_t, E_t x_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)
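To see why such a representation holds, consider a scalar linear sketch (the coefficients below are toy assumptions, and the shock and constant terms φψ_ε ε and (I - F)^{-1} φψ_c are dropped): with B the stable root of H_{-1} + H_0 B + H_1 B² = 0, φ = (H_0 + H_1 B)^{-1}, and F = -φ H_1, the series rule satisfies the model equation up to the truncation tail.

```python
# Scalar sketch of x_t = B x_{t-1} + sum_{nu} F^nu phi z_{t+nu} for the linear
# model H_{-1} x_{t-1} + H_0 x_t + H_1 x_{t+1} = z_t.
# Toy coefficients (assumed) chosen so the characteristic roots are 0.3 and 2.
H1, H0, Hm1 = 1.0, -2.3, 0.6
B = 0.3                                  # stable root: Hm1 + H0*B + H1*B**2 = 0
phi = 1.0 / (H0 + H1 * B)                # -0.5
F = -phi * H1                            # 0.5 = inverse of the unstable root

T = 40
z = [0.8 ** t for t in range(T + 60)]    # a bounded forcing path

def x_rep(x_lag, t):
    """Decision rule from the series representation, truncated far out."""
    return B * x_lag + sum(F ** nu * phi * z[t + nu] for nu in range(50))

x = [1.0]                                # arbitrary initial condition x_{-1}
for t in range(T + 2):
    x.append(x_rep(x[-1], t))

# The representation satisfies the model equation along the whole path:
residuals = [Hm1 * x[t] + H0 * x[t + 1] + H1 * x[t + 2] - z[t] for t in range(T)]
max_resid = max(abs(r) for r in residuals)
```

In the matrix case B, φ, and F come from the linear reference model L exactly as in the representation above; the scalar case simply makes the algebra visible.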
Now consider a proposed solution for the model xpt = gp(xtminus1 εt) define Gp(x) equivE [gp(x ε)] so that
Etxt+1 = Gp(gp(xtminus1 εt))ept (xtminus1 ε) equivM(xtminus1 x
pt Etx
pt+1 εt)
By construction, the conditional-expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, ε):12

{ x^p_t(x_{t-1}, ε_t) ∈ R^L : ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0 },

z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_1 x^p_{t+1}(x_{t-1}, ε_t).

The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^{∞} F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^{∞} F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I − F)^{-1} φ ψ_c

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Subtracting the two representations gives

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^{∞} F^ν φ (z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε)).

Defining

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε),

we have

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^{∞} F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t),

so that

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^{∞} ‖F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞.

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
13Since the future values are probability-weighted averages of the Δz^p_t values for the given initial conditions, and the conditional-expectations paths remain in the region A^p, the largest errors for the Δz^p_{t+k} are pessimistic bounds for the errors from the conditional-expectations path.
‖Δz_t‖ ≤ max_{x_-, ε} ‖φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2,

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I − F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2.
With

z*_t = H_- x*_{t-1} + H_0 x*_t + H_+ x*_{t+1},
z^p_t = H_- x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1},

0 = M(x_{t-1}, x*_t, x*_{t+1}, ε_t),
e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t),

we have

max_{x_{t-1}, ε_t} (‖φ Δz_t‖_2)^2 = max_{x_{t-1}, ε_t} (‖φ (H_0 (x*_t − x^p_t) + H_+ (x*_{t+1} − x^p_{t+1}))‖_2)^2.

Since

M(x_{t-1}, x*_t, x*_{t+1}, ε_t) = 0,

Δe^p_t(x_{t-1}, ε) ≈ (∂M/∂x_t)(x*_t − x^p_t) + (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}).

When ∂M/∂x_t is non-singular, we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{-1} ( Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}) ).

Collecting results,

(‖φ Δz_t‖_2)^2 = (‖φ ( H_0 (∂M/∂x_t)^{-1} ( Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}) ) + H_+ (x*_{t+1} − x^p_{t+1}) )‖_2)^2
= (‖φ ( H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − ( H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) − H_+ )(x*_{t+1} − x^p_{t+1}) )‖_2)^2.

Approximating the derivatives using the H matrices,

∂M/∂x_t ≈ H_0,   ∂M/∂x_{t+1} ≈ H_+,

the term multiplying (x*_{t+1} − x^p_{t+1}) vanishes, producing

max_{x_{t-1}, ε_t} (‖φ Δe^p_t(x_{t-1}, ε)‖_2)^2.
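The bound and its one-term approximation can be exercised numerically on a toy linear reference model. The sketch below is a Python illustration only: the matrices F and φ and the residual function are made-up assumptions standing in for the paper's M, g^p, and G^p. It compares the global bound max ‖(I − F)^{-1} φ e^p‖_2 with the one-term approximation max ‖φ e^p‖_2 over sampled points.

```python
import numpy as np

# Toy linear reference model pieces (illustrative, not the paper's calibration).
F = np.array([[0.5, 0.1],
              [0.0, 0.3]])      # stable: spectral radius < 1
phi = np.eye(2)

def model_residual(x_prev, eps):
    """Stand-in for M(x_-, g^p(x_-, eps), G^p(g^p(x_-, eps)), eps):
    the one-period equation error of a proposed (inexact) decision rule."""
    return 0.01 * np.array([np.sin(x_prev[0] + eps), x_prev[1] ** 2])

# Evaluate the residual over sampled (x_{t-1}, eps) points.
rng = np.random.default_rng(0)
points = [(rng.normal(size=2), rng.normal()) for _ in range(200)]

# Global bound: max over points of ||(I - F)^{-1} phi e^p||_2.
IF_inv = np.linalg.inv(np.eye(2) - F)
bound = max(np.linalg.norm(IF_inv @ phi @ model_residual(x, e))
            for x, e in points)

# One-term approximation: max ||phi e^p||_2 (the K = 0 case).
approx = max(np.linalg.norm(phi @ model_residual(x, e)) for x, e in points)

print(bound, approx)
```

For this stable F, the (I − F)^{-1} factor only inflates the residual, so the one-term quantity is the cheaper, more conservative-looking screen while the full expression delivers the bound.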
B A Regime Switching Example

To apply the series formula, one must construct conditional-expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j),

E_t[x_{t+2}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j).

If we know the current state, we can use the known conditional expectation function for that state:

[ E_t(x_{t+1}(x_t) | s_t = 1); … ; E_t(x_{t+1}(x_t) | s_t = N) ].

For the next state, we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1); … ; E_t(x_{t+2}(x_t) | s_t = N) ] = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); … ; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ].
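The stacking identity above can be checked with a small numeric sketch. The Python example below assumes a two-regime chain with a made-up transition matrix and arbitrary illustrative expectation vectors for a three-variable state.

```python
import numpy as np

# Two regimes, constant transition probabilities p_ij = Prob(s_{t+1}=j | s_t=i).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Illustrative conditional expectations E_t(x_{t+2} | s_{t+1} = j):
# one length-3 vector per regime, stacked vertically.
E_next = np.array([1.0, 2.0, 3.0,     # regime 1
                   4.0, 5.0, 6.0])    # regime 2

# Stacked expectations conditional on the *current* regime:
# [E_t(.|s_t=1); E_t(.|s_t=2)] = (P kron I) [E_t(.|s_{t+1}=1); E_t(.|s_{t+1}=2)]
E_now = np.kron(P, np.eye(3)) @ E_next

print(E_now)
```

Each block of the result is the probability-weighted mixture of the regime-conditional expectations, which is exactly the per-regime sum in the displayed formula.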
Consider two states s_t ∈ {0, 1} in which the depreciation rates differ, with d_0 > d_1 and

Prob(s_t = j | s_{t-1} = i) = p_{ij}.
If s_t = 0, μ_t > 0, and k_t − (1 − d_0) k_{t-1} − υ I_ss = 0:

λ_t − 1/c_t = 0
c_t + k_t − θ_t k^α_{t-1} = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0)) = 0
I_t − (k_t − (1 − d_0) k_{t-1}) = 0
μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss) = 0

If s_t = 0, μ_t = 0, and k_t − (1 − d_0) k_{t-1} − υ I_ss ≥ 0:

λ_t − 1/c_t = 0
c_t + k_t − θ_t k^α_{t-1} = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0)) = 0
I_t − (k_t − (1 − d_0) k_{t-1}) = 0
μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss) = 0

If s_t = 1, μ_t > 0, and k_t − (1 − d_1) k_{t-1} − υ I_ss = 0:

λ_t − 1/c_t = 0
c_t + k_t − θ_t k^α_{t-1} = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1)) = 0
I_t − (k_t − (1 − d_1) k_{t-1}) = 0
μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss) = 0

If s_t = 1, μ_t = 0, and k_t − (1 − d_1) k_{t-1} − υ I_ss ≥ 0:

λ_t − 1/c_t = 0
c_t + k_t − θ_t k^α_{t-1} = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1)) = 0
I_t − (k_t − (1 − d_1) k_{t-1}) = 0
μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss) = 0
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid, polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points; Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x + N_ε + N_x} → R^{N_x}

input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by a default tolerance (''normConvTol'' → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations; subsequently use these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Apply genFindRootWorker to each model component. Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. Subsequently use these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples, d

Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1 Symbolically construct functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (the empty list for κ = 1).
2 X_t = x^p_t
   for i = 1; i < κ − 1; i++ do
3    {X_{t+1}, Z_{t+i}} = G(Z_{t+i−1})
4 end
5 Z = {(X_{t+i}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [ B x + (I − F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x} ]
3 G(x) = [ B x + (I − F)^{-1} φ ψ_c ; 0_{N_x} ]
4 H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t

Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sum the F contribution for the series:
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
input: ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ''backward looking'' equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ''backward looking'' equations with user-provided pre-computed functions.
output: {γ, G}

Algorithm 12: genInterpData

input: (C(x_{t-1}, ε_t))
1 Evaluate the model equations, obeying pre- and post-conditions.
output: x_t or failed

Algorithm 13: evaluateTriple
input: (V, S)
1 Compute a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
3.2 A Model Specification Constraint

For convenience of notation, in what follows I will focus on models built up from components of the form

h_i(x_{t-1}, x_t, x_{t+1}, ε_t) = h^det_{i0}(x_{t-1}, x_t, ε_t) + Σ_{j=1}^{p_i} [ h^det_{ij}(x_{t-1}, x_t, ε_t) h^nondet_{ij}(x_{t+1}) ] = 0.

This is a very broad class of models, including most widely used macroeconomic models. This constraint will make it easier to write down and manipulate the conditional-expectations expressions in the description of the algorithms in subsequent sections. For example, the Euler equations for the neoclassical growth model can be written as

h^det_{10}(·) = 1/c^η_t,   h^det_{11}(·) = α δ k^{α−1}_t,   h^nondet_{11}(·) = E_t(θ_{t+1}/c^η_{t+1}),
h^det_{20}(·) = c_t + k_t − θ_t k^α_{t-1},   h^det_{21}(·) = 0,
h^det_{30}(·) = ln θ_t − (ρ ln θ_{t-1} + ε_t),   h^det_{31}(·) = 0.
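A residual split this way can have the expectation of each nondeterministic term precomputed by quadrature and reused at every collocation point. The Python sketch below is an illustration under stated assumptions, not the paper's RBC system: the functional forms, the normal shock, and the 11-node rule are all made up for the example. It precomputes E[h^nondet] once with Gauss–Hermite quadrature and then evaluates a residual of the form h^det_0 + h^det_1 · E[h^nondet].

```python
import numpy as np

# Probabilists' Gauss-Hermite nodes/weights: integrate against exp(-x^2/2).
nodes, weights = np.polynomial.hermite_e.hermegauss(11)

def expect(f):
    """E[f(eps)] for eps ~ N(0, 1), via Gauss-Hermite quadrature."""
    return np.sum(weights * f(nodes)) / np.sqrt(2 * np.pi)

sigma = 0.1
# Precompute E[exp(sigma * eps)] once; the exact value is exp(sigma^2 / 2).
E_nondet = expect(lambda e: np.exp(sigma * e))

def residual(x):
    """Illustrative residual h0_det(x) + h1_det(x) * E[h_nondet]."""
    h0_det = -1.0 / x
    h1_det = x
    return h0_det + h1_det * E_nondet

print(E_nondet, residual(1.0))
```

Because the expectation is a number computed up front, every later residual evaluation is purely deterministic, which is the point of the auxiliary-variable setup described next.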
Since we need to compute the conditional expectation of nonlinear expressions, this setup makes it possible to use auxiliary variables to correctly compute the required expected values. We will be working with models where expectations are computed at time t with ε_t known. We can construct a linear reference model for the modified RBC system by augmenting the RBC model with the equation

N_t = θ_t / c^η_t,

substituting N_{t+1} for θ_{t+1}/c^η_{t+1} in the first equation, and linearizing the RBC model about the ergodic mean.
This leads to

[ H_{-1} | H_0 | H_1 ] =

0  0  0  0   | −7.6986   9.47963  0  0         | 0  0  −0.999  0
0 −1.05263 0 0 | 1        1        0 −0.547185  | 0  0   0      0
0  0  0  0   |  7.7063   0        1 −2.77463   | 0  0   0      0
0  0  0 −0.949953 | 0     0        0  1         | 0  0   0      0

with

ψ_ε = [0; 0; 1; 0],   ψ_z = I.

These coefficients produce a unique stable linear solution:
B =
0  0.692632  0  0.342028
0  0.36      0  0.177771
0 −5.33763   0  0
0  0         0  0.949953

φ =
−0.0444237  0.658    0  0.360048
 0.0444237  0.342    0  0.187137
 0.342342  −5.07075  1  0
 0          0        0  1

F =
0  0  −0.0443793  0
0  0   0.0443793  0
0  0   0.342      0
0  0   0          0

ψ_c =
−3.7735
−0.197184
 2.77741
 0.0500976
Recomputing the truncation errors with the expanded model produces nearly identical resultsfor variable approximation errors
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9

4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional-expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)],

then with

E_t x*_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀ (x_{t-1}, ε_t),

{ x*_t(x_{t-1}, ε_t) ∈ R^L : ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀ t > 0 }.

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t)),   e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).

Then

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I − F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2,

which can be approximated by

max_{x_{t-1}, ε_t} ‖φ e^p_t(x_{t-1}, ε)‖_2.
For proof, see Appendix A.

9This work contrasts with the analysis provided in (Judd et al., 2017a; Peralta-Alva and Santos, 2014; Santos and Peralta-Alva, 2005; Santos, 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) − x^p(x_{t-1}, ε_t).

Compute the conditional-expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in (2) to approximate e:

X^p(x) ≡ E[x^p(x, ε)],   x_{t+k+1} = X^p(x_{t+k}),  k = 1, …, K.

We can define functions z^p_t by

z^p_t ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t),   z^p_{t+k} ≡ M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0),

e(x_{t-1}, ε_t) ≈ Σ_{ν=0}^{K} F^ν φ z^p_{t+ν}.
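The truncated series can be accumulated directly. The Python sketch below is illustrative only: F, φ, and the residual path are toy stand-ins rather than values from the paper. It shows that, for a stable F and residuals that die out along the expectations path, the K-term approximation settles down quickly as K grows.

```python
import numpy as np

F = np.array([[0.4, 0.1],
              [0.0, 0.2]])   # stable linear-reference-model block (illustrative)
phi = np.eye(2)

def local_error(z_path, K):
    """Truncated series  e ~ sum_{nu=0}^{K} F^nu phi z_{t+nu},
    where z_path[nu] is the model-equation residual nu periods ahead
    along the conditional-expectations path."""
    e = np.zeros(2)
    Fpow = np.eye(2)
    for nu in range(K + 1):
        e += Fpow @ phi @ z_path[nu]
        Fpow = Fpow @ F
    return e

# Illustrative residual path: errors decay along the expectations path.
z_path = [0.01 * 0.5 ** nu * np.ones(2) for nu in range(40)]

e5 = local_error(z_path, K=5)
e30 = local_error(z_path, K=30)
print(e5, e30)
```

The gap between the K = 5 and K = 30 sums is the truncation error; with a stable F it shrinks geometrically, which is why small K often suffices in practice.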
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed "true" model.

I generate 10 other settings of the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,   0.85 < δ < 0.99,   0.85 < ρ < 0.99,   0.005 < σ < 0.015,

producing the following 10 model specifications:

α      | δ        | η | ρ        | σ
0.35   | 0.92875  | 1 | 0.957188 | 0.00875
0.275  | 0.9725   | 1 | 0.906875 | 0.01375
0.375  | 0.8675   | 1 | 0.924375 | 0.01125
0.225  | 0.94625  | 1 | 0.891563 | 0.00625
0.325  | 0.91125  | 1 | 0.944063 | 0.006875
0.2625 | 0.922187 | 1 | 0.98125  | 0.011875
0.3625 | 0.887187 | 1 | 0.85875  | 0.014375
0.2125 | 0.965938 | 1 | 0.965938 | 0.009375
0.3125 | 0.860938 | 1 | 0.878438 | 0.008125
0.2375 | 0.904688 | 1 | 0.933125 | 0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearizations. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i.

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.

10I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward-looking equation.
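The regression step itself is easy to reproduce. The sketch below uses synthetic stand-in data (not the paper's RBC errors) to run the OLS of actual errors on formula-predicted errors; when the formula tracks the truth, it recovers α near zero, β near one, and R² near one, matching the pattern described above.

```python
import numpy as np

# Synthetic stand-in for the experiment: "predicted" errors from the formula
# and "actual" errors at 30 representative points (illustrative numbers only).
rng = np.random.default_rng(1)
e_pred = rng.normal(scale=1e-3, size=30)                        # \hat e_i
e_true = e_pred + rng.normal(scale=1e-5, size=30)               # actual e_i

# OLS of actual on predicted:  e_i = alpha + beta * \hat e_i + u_i
X = np.column_stack([np.ones_like(e_pred), e_pred])
(alpha, beta), *_ = np.linalg.lstsq(X, e_true, rcond=None)

# R^2 from the regression residuals.
resid = e_true - X @ np.array([alpha, beta])
tss = (e_true - e_true.mean()) @ (e_true - e_true.mean())
r2 = 1 - (resid @ resid) / tss
print(alpha, beta, r2)
```

With the small noise assumed here the fit is nearly perfect; degrading the predictor (e.g., truncating at K = 0) would show up as a lower R², as in the figures.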
[Four scatter plots of (σ², R²) from the c error regressions, one panel each for K = 0, 1, 10, 30; plotted points omitted.]

Figure 6: c_t (σ², R²)
[Four scatter plots of (σ², R²) from the k error regressions, one panel each for K = 0, 1, 10, 30; plotted points omitted.]

Figure 7: k_t (σ², R²)
[Four scatter plots of (α, β) from the c error regressions, one panel each for K = 0, 1, 10, 30; plotted points omitted.]

Figure 8: c_t (α, β)
[Four scatter plots of (α, β) from the k error regressions, one panel each for K = 0, 1, 10, 30; plotted points omitted.]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation allows us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time-invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.

It will be convenient for describing the algorithms to write the models in the form

{C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c),

where each

C_i ≡ ( b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]) )

represents a set of model equations M_i with Boolean-valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition, and the c_i will be a Boolean function post-condition, for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The function d chooses among the candidate x_t(x_{t-1}, ε_t) values: it uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).

Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
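The gatekeeper structure — a precondition b_i, a solver for M_i, a post-condition c_i, and a selection function d — can be sketched directly in code. The Python stand-in below uses two toy scalar "systems" that mimic complementary slackness; none of it is the paper's Mathematica implementation, and all functional forms are made up for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class SystemTriple:
    """One (b_i, M_i-solver, c_i) triple for a gated equation system."""
    b: Callable[[float, float], bool]        # precondition on (x_{t-1}, eps)
    solve: Callable[[float, float], float]   # candidate x_t for this system
    c: Callable[[float, float, float], bool] # post-condition on the candidate

# Complementary-slackness flavour: either the constraint x_t >= 0 is slack,
# or it binds and x_t = 0 (toy scalar example).
triples = [
    SystemTriple(b=lambda x, e: True,
                 solve=lambda x, e: 0.9 * x + e,           # unconstrained rule
                 c=lambda x, e, xt: xt >= 0.0),            # slack region
    SystemTriple(b=lambda x, e: True,
                 solve=lambda x, e: 0.0,                   # constraint binds
                 c=lambda x, e, xt: 0.9 * x + e <= 0.0),   # binding region
]

def d(x_prev: float, eps: float) -> float:
    """Selection function: evaluate each gated system, skip those whose
    precondition fails, discard candidates failing the post-condition,
    and return the unique accepted x_t."""
    accepted = []
    for s in triples:
        if not s.b(x_prev, eps):
            continue
        xt = s.solve(x_prev, eps)
        if s.c(x_prev, eps, xt):
            accepted.append(xt)
    assert len(accepted) == 1, "gates must select exactly one solution"
    return accepted[0]

print(d(1.0, 0.05), d(1.0, -2.0))
```

Because each triple is independent, the candidate solves can run in parallel, with only the cheap selection step d serialized — the design motivation noted in the text.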
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations of future values in the proposed solution with the time-t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al., 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use anisotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al., 2017b), so that the time-t computations in Section 5.1 are all deterministic.

There are a number of new steps. One must specify a linear reference model. One must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms yield more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0   ∀ (x_{t-1}, ε_t).

Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t),

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)].

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [ x_{t-1} ; x^p(x_{t-1}, ε) ; X^p(x^p(x_{t-1}, ε)) ] + ψ_ε ε_t + ψ_c,

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)].

• Algorithm loop begins here. Define conditional-expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),   z_{t+k+1} = Z(x_{t+k})   ∀ k ≥ 0.

Using the z^p(x_{t-1}, ε) series:

• We get expressions for x_t, E[x_{t+1}] consistent with L and the conditional-expectations path:

X_t = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^{∞} F^ν φ Z(x_{t+ν}),

E[X_{t+1}] = B X_t + (I − F)^{-1} φ ψ_c + Σ_{ν=1}^{∞} F^{ν−1} φ Z(x_{t+ν})   ∀ t, k ≥ 0.

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε) — repeat loop.
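The loop above can be caricatured as a fixed-point iteration on decision-rule coefficients that stops when the rule's values at test points stop changing. The Python sketch below shows only the control flow; the scalar "model" and its contraction-style update are toy stand-ins for the Smolyak interpolation and root-finding machinery, and all numbers are illustrative.

```python
import numpy as np

B_true = 0.7        # the unknown "exact" linear rule x_t = B_true * x_{t-1}

def solve_model_given_expectations(B_guess):
    """Stand-in for 'use M(...) and the Z path to get an updated rule':
    here the update simply contracts halfway toward B_true."""
    return 0.5 * B_guess + 0.5 * B_true

B = 0.0                               # initial proposed rule (e.g., from the LRM)
test_points = np.linspace(-1, 1, 5)   # evaluation points T
for it in range(200):
    B_new = solve_model_given_expectations(B)
    # Convergence test: rule values at test points no longer change much.
    if np.max(np.abs(B_new * test_points - B * test_points)) < 1e-10:
        B = B_new
        break
    B = B_new

print(B, it)
```

The convergence criterion mirrors nestInterp's: compare successive rules at fixed test points rather than comparing coefficient vectors directly, so rules of different polynomial composition remain comparable.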
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation, and Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid, polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points; Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x + N_ε + N_x} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot genLilXkZkFunc
fSumC genXtOfXtm1 genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc
Figure 10 Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
25
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).

makeInterpFuncs — Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
Figure 11: Ergodic values of k_t and θ_t. Left panel: the simulated (k_t, θ_t) values; right panel: the parallelotope-transformed variables.
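For η = 1 with log utility and full depreciation, the model has the textbook closed-form decision rule k_t = αδ θ_t k_{t−1}^α and c_t = (1 − αδ) θ_t k_{t−1}^α. A short simulation of that rule (with illustrative parameter values, not necessarily those used in the paper) produces an ergodic cloud of (k_t, θ_t) like the left panel of Figure 11:

```python
import numpy as np

alpha, delta, rho, sigma = 0.36, 0.95, 0.95, 0.01   # illustrative parameters
rng = np.random.default_rng(0)

T = 200
k = np.empty(T)
theta = np.empty(T)
k_prev = (alpha * delta) ** (1.0 / (1.0 - alpha))   # nonstochastic steady state
th_prev = 1.0
for t in range(T):
    th_prev = th_prev ** rho * np.exp(sigma * rng.standard_normal())
    k_prev = alpha * delta * th_prev * k_prev ** alpha   # exact decision rule
    theta[t], k[t] = th_prev, k_prev

print(round(k.mean(), 3), round(theta.mean(), 3))   # near (0.19, 1.0)
```

The simulated means sit close to the X̄ values reported below, consistent with a steady-state capital stock (αδ)^{1/(1−α)} ≈ 0.187 at these parameter values.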
X̄ = (0.18853, 1.00466, 0),    σ_X = (0.0112443, 0.0387807, 0.01)

V = [ −0.707107  −0.707107  0
      −0.707107   0.707107  0
       0          0         1 ]

max U = (3.02524, 0.199124, 3),    min U = (−3.16879, −0.17629, −3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
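K here plays the role of a truncation length for a geometrically convergent operator series (compare the (I − F)^{−1} term in the representation of Appendix A), so the truncation error shrinks roughly like ‖F‖^K. A minimal numerical illustration with an arbitrary stable matrix F:

```python
import numpy as np

F = np.array([[0.5, 0.1],
              [0.0, 0.4]])                       # arbitrary stable matrix
target = np.linalg.inv(np.eye(2) - F)            # limit of sum_{v >= 0} F^v

errs = []
for K in (1, 5, 10, 20):
    partial = sum(np.linalg.matrix_power(F, v) for v in range(K + 1))
    errs.append(np.abs(partial - target).max())  # truncation error at length K
print(errs)                                      # shrinks geometrically in K
```

The error at truncation length K is exactly F^{K+1}(I − F)^{−1}, which is why accuracy improves steadily as K grows.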
[Table body: thirty (k_i, θ_i, ε_i) triples.]
Table 2: RBC Known Solution Model Evaluation Points, d = (1, 1, 1)
[Table body: thirty (c_i, k_i, θ_i) triples.]
Table 3: RBC Known Solution Values at Evaluation Points, d = (1, 1, 1)
[Table body: thirty rows of (ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|).]
Table 4: RBC Known Solution Actual Errors, d = (1, 1, 1)
[Table body: thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|).]
Table 5: RBC Known Solution Error Approximations, d = (1, 1, 1)
[Table body: thirty (k_i, θ_i, ε_i) triples.]
Table 6: RBC Known Solution Model Evaluation Points, d = (2, 2, 2)
[Table body: thirty (c_i, k_i, θ_i) triples.]
Table 7: RBC Known Solution Values at Evaluation Points, d = (2, 2, 2)
[Table body: thirty rows of (ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|).]
Table 8: RBC Known Solution Actual Errors, d = (2, 2, 2)
[Table body: thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|).]
Table 9: RBC Known Solution Error Approximations, d = (2, 2, 2)
K            d = (1, 1, 1)   d = (2, 2, 2)   d = (3, 3, 3)
Non-series   −6.51367        −12.2771        −16.8947
0            −6.0793         −6.26833        −6.29843
1            −6.00805        −6.24202        −6.49835
2            −6.52219        −7.43876        −7.04785
3            −6.57578        −8.03864        −8.32589
4            −6.62831        −9.1522         −9.49314
5            −6.77305        −10.5516        −10.4247
6            −6.38499        −11.6399        −11.455
7            −6.63849        −11.3627        −13.0213
8            −6.38735        −11.7288        −13.9976
9            −6.46759        −11.9826        −15.3372
10           −6.45966        −12.1345        −15.4157
10           −6.77776        −11.5025        −15.8211
15           −6.67778        −12.4593        −15.8899
20           −6.40802        −11.7296        −17.1652
25           −6.53265        −12.1441        −17.1518
30           −6.81949        −11.6097        −16.3748
35           −6.33508        −12.1446        −16.6963
40           −6.52442        −11.4042        −15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point, and Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point, and Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
[Table body: thirty (k_i, θ_i, ε_i) triples.]
Table 11: RBC Approximated Solution Model Evaluation Points, d = (1, 1, 1)
[Table body: thirty (c_i, k_i, θ_i) triples.]
Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1, 1, 1)
[Table body: thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|).]
Table 13: RBC Approximated Solution Error Approximations, d = (1, 1, 1)
[Table body: thirty (k_i, θ_i, ε_i) triples.]
Table 14: RBC Approximated Solution Model Evaluation Points, d = (2, 2, 2)
[Table body: thirty (c_i, k_i, θ_i) triples.]
Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2, 2, 2)
[Table body: thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|).]
Table 16: RBC Approximated Solution Error Approximations, d = (2, 2, 2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^{∞} δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1 − η)
L = u(c_t) + E_t [ Σ_{t=0}^{∞} δ^t u(c_{t+1})
    + Σ_{t=0}^{∞} δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1})))
    + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss) ]

so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss
The first-order conditions become

1/c_t^η + λ_t = 0
λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and k_t − (1 − d) k_{t−1} − υ I_ss = 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t−1}) = 0
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss) = 0
if (μ_t = 0 and k_t − (1 − d) k_{t−1} − υ I_ss ≥ 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t−1}) = 0
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss) = 0
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point, and Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point, and Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
[Table body: thirty (k_i, θ_i, ε_i) triples.]
Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1, 1, 1)
[Table body: thirty (c_i, I_i, k_i) triples.]
Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1, 1, 1)
[Table body: thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|).]
Table 19: Occasionally Binding Constraints Error Approximations, d = (1, 1, 1)
[Table body: thirty (k_i, θ_i, ε_i) triples.]
Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2, 2, 2)
[Table body: thirty (c_i, I_i, k_i) triples.]
Table 21: Occasionally Binding Constraints Values at Evaluation Points, d = (2, 2, 2)
[Table body: thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|).]
Table 22: Occasionally Binding Constraints Error Approximations, d = (2, 2, 2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time-invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. URL http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.
Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbc/d69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers, Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers, Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
55
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right],$$

then with

$$E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $L$, construct the family of trajectories and the corresponding $z^*_t(x_{t-1}, \varepsilon)$:

$$\left\{x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X}\ \forall t > 0\right\},$$

$$z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-}x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_{0}x^*_t(x_{t-1}, \varepsilon_t) + H_{+}x^*_{t+1}(x_{t-1}, \varepsilon_t).$$
Consequently, the exact solution has a representation given by

$$x^*_t(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z^*_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^*_{t+1}(x_{t-1}, \varepsilon)\right] = Bx^*_t + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z^*_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1}\phi\psi_c,$$

with

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $L$, construct the family of trajectories and the corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²

$$\left\{x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x^p_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X}\ \forall t > 0\right\},$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-}x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_{0}x^p_t(x_{t-1}, \varepsilon_t) + H_{+}x^p_{t+1}(x_{t-1}, \varepsilon_t).$$
The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = Bx^p_t + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z^p_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1}\phi\psi_c,$$

with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$

Subtracting the two representations gives

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\left(z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right).$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon),$$

we have

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t),$$

$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \sum_{\nu=0}^{\infty} \|F^{\nu}\phi\|\,\|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\|_\infty.$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.¹³ Since the exact solution satisfies the model equations exactly, the error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
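The role of the geometric decay in $F$ can be made concrete with a small numerical sketch. The matrices below are illustrative stand-ins, not taken from any model in the paper; the sketch compares a truncated sum of the $\|F^\nu\phi\|$ weights against the closed-form $(I-F)^{-1}\phi$ magnitude that the bound relies on:

```python
import numpy as np

# Illustrative stand-ins for the linear reference model pieces; the series
# converges because the spectral radius of F is below one.
F = np.array([[0.0, 0.3],
              [0.0, 0.4]])
phi = np.eye(2)

max_dz = 0.01  # assumed worst-case sup-norm deviation of the z paths

# Truncated version of sum_nu ||F^nu phi|| * max ||Delta z||
bound_series = sum(
    np.linalg.norm(np.linalg.matrix_power(F, nu) @ phi, np.inf)
    for nu in range(200)) * max_dz

# Magnitude of the closed-form weight (I - F)^{-1} phi for comparison;
# the term-by-term norm sum can only be larger (triangle inequality).
bound_closed = np.linalg.norm(
    np.linalg.inv(np.eye(2) - F) @ phi, np.inf) * max_dz

print(bound_series, bound_closed)
```

Because every entry here is nonnegative, the two numbers coincide; in general the summed-norm version is the more conservative of the two.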
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_{-}, \varepsilon}\left\|\phi\, M\left(x_{-},\ g^p(x_{-}, \varepsilon),\ G^p(g^p(x_{-}, \varepsilon)),\ \varepsilon\right)\right\|_2,$$

$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \max_{x_{-}, \varepsilon}\left\|(I - F)^{-1}\phi\, M\left(x_{-},\ g^p(x_{-}, \varepsilon),\ G^p(g^p(x_{-}, \varepsilon)),\ \varepsilon\right)\right\|_2.$$
$$z^*_t = H_{-}x^*_{t-1} + H_{0}x^*_t + H_{+}x^*_{t+1}, \qquad z^p_t = H_{-}x^p_{t-1} + H_{0}x^p_t + H_{+}x^p_{t+1},$$

$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t), \qquad e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t).$$

$$\max_{x_{t-1}, \varepsilon_t}\left(\|\phi\Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t}\left(\left\|\phi\left(H_{0}(x^*_t - x^p_t) + H_{+}(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2,$$

with

$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0.$$

A first-order expansion of the model equations gives

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1}).$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$(x^*_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right).$$
Collecting results,

$$\left(\|\phi\Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_{0}\left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right) + H_{+}(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_{0}\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_{0}\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_{+}\right)(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2.$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_{0}, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+},$$

the coefficient on $(x^*_{t+1} - x^p_{t+1})$ vanishes, and maximizing over the state space produces

$$\max_{x_{t-1}, \varepsilon_t}\left(\left\|\phi\, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2\right)^2.$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\left(x_{t+1}(x_t) \mid s_{t+1} = j\right),$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right).$$

If we know the current state, we can use the known conditional expectation function for the state:

$$E_t\left(x_{t+1}(x_t) \mid s_t = 1\right), \ldots, E_t\left(x_{t+1}(x_t) \mid s_t = N\right).$$

For the next state we can compute expectations if the errors are independent:

$$\begin{pmatrix} E_t\left(x_{t+2}(x_t) \mid s_t = 1\right) \\ \vdots \\ E_t\left(x_{t+2}(x_t) \mid s_t = N\right) \end{pmatrix} = (P \otimes I)\begin{pmatrix} E_t\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1\right) \\ \vdots \\ E_t\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = N\right) \end{pmatrix}.$$
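A minimal numerical sketch of the stacking identity above, using `numpy.kron` for $P \otimes I$ (the regime count, state dimension, and numbers are all illustrative):

```python
import numpy as np

# Transition matrix: p_ij = Prob(s_{t+1} = j | s_t = i); two regimes.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
nx = 2  # number of model variables

# Stacked E_t(x_{t+2} | s_{t+1} = j) blocks, one block per regime j.
cond_on_next_regime = np.array([1.0, 2.0,   # regime j = 1 block
                                3.0, 4.0])  # regime j = 2 block

# Stacked E_t(x_{t+2} | s_t = i) via (P kron I).
stacked = np.kron(P, np.eye(nx)) @ cond_on_next_regime
print(stacked)
```

Block $i$ of the result equals $\sum_j p_{ij} E_t(x_{t+2} \mid s_{t+1} = j)$, matching the probability-weighted sum written above.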
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, and

$$\text{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}.$$
If $s_t = 0$, $\mu > 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k^{\alpha}_{t-1}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right)\\
&I_t - (k_t - (1 - d_0)k_{t-1})\\
&\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 0$, $\mu = 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k^{\alpha}_{t-1}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right)\\
&I_t - (k_t - (1 - d_0)k_{t-1})\\
&\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 1$, $\mu > 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k^{\alpha}_{t-1}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right)\\
&I_t - (k_t - (1 - d_1)k_{t-1})\\
&\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 1$, $\mu = 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k^{\alpha}_{t-1}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right)\\
&I_t - (k_t - (1 - d_1)k_{t-1})\\
&\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
C Algorithm Pseudo-code

$L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model.

$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components.

$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components.

$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components.

$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components.

$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$).

$S$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$.

$T$: model evaluation points; a Niederreiter sequence in the model variable ergodic region.

$\gamma = \{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$: collects the decision rule components.

$G = \{X(x_{t-1}), Z(x_{t-1})\}$: collects the decision rule conditional expectations components.

$H = \{\gamma, G\}$: collects decision rule information.

$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon + N_x} \rightarrow \mathbb{R}^{N_x}$.
input: $(L, H_0, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S, T)$
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option with default (``normConvTol'' $\rightarrow 10^{-10}$).
2. NestWhile(Function(v, doInterp($L$, $v$, $\kappa$, $\{C_1, \ldots, C_M, d(\cdot)\,|\,(b, c)\}$, $S$)), $H_0$, notConvergedAt($v$, $T$))
output: $H_0, H_1, \ldots, H_c$
Algorithm 1: nestInterp
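A stripped-down sketch of the NestWhile loop in Algorithm 1, in Python rather than the paper's Mathematica. Here `update` is a toy contraction standing in for doInterp, and the stopping rule compares successive rules at fixed test points against a $10^{-10}$ tolerance:

```python
# Iterate a rule-updating map until the rule stops changing at test points.
def nest_while(update, rule0, test_points, tol=1e-10, max_iter=500):
    rule = rule0
    for _ in range(max_iter):
        new_rule = update(rule)
        gap = max(abs(new_rule(x) - rule(x)) for x in test_points)
        rule = new_rule
        if gap < tol:
            break
    return rule

# Toy stand-in for doInterp: the fixed point of r(x) = 0.5*r(x) + x is 2x.
update = lambda r: (lambda x, r=r: 0.5 * r(x) + x)
rule = nest_while(update, lambda x: 0.0, test_points=[0.0, 1.0, 2.0])
print(rule(1.0))  # converges to 2.0
```

The real loop returns interpolating functions rather than closures, but the termination logic is the same.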
input: $(L, H_i, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one of the model equation systems, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs($L$, $H$, $\kappa$, $\{C_1, \ldots, C_M, d(\cdot)\,|\,(b, c)\}$)
3. $H_{i+1}$ = makeInterpFuncs(theFindRootFuncs, $S$)
output: $H_{i+1}$
Algorithm 2: doInterp

input: $(L, H, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1. Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker($L$, $H$, $\kappa$, v[2]), v[3]}), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples, d
Algorithm 3: genFindRootFuncs

input: $(L, G, \kappa, M)$
1. Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2. $Z = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($L$, $x^p_t$, $G$)
3. modEqnArgsFunc = genLilXkZkFunc($L$, $Z$)
4. $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}$, $\varepsilon_t$, $z_t$)
5. funcOfXtZt = FindRoot($(x^p_t, z_t)$, $\{M(\mathcal{X}),\ x_t - x^p_t\}$)
6. funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: $(x^p_t, L, G, \kappa, M)$
1. Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (the empty list for $\kappa = 1$).
2. $X_t = x^p_t$
3. for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4. $\quad \{X_{t+i}, Z_{t+i}\} = G(Z_{t+i-1})$
5. end
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$
Algorithm 5: genZsForFindRoot

input: $(L = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1. Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2. $\gamma(x, \varepsilon) = \begin{bmatrix} Bx + \phi\psi_\varepsilon\varepsilon_t + (I - F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}$
3. $G(x) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}$
4. $H = \{\gamma, G\}$
output: $H$
Algorithm 6: genBothX0Z0Funcs

input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1. Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2. fCon = fSumC($\phi$, $F$, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3. xtVals = genXtOfXtm1($L$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
4. xtp1Vals = genXtp1OfXt($L$, xtVals, fCon)
5. fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: $(L, x_{t-1}, \varepsilon_t, z_t, \text{fCon})$
1. Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for model system equations $M$.
2. $x_t = Bx_{t-1} + (I - F)^{-1}\phi\psi_c + \phi\psi_\varepsilon\varepsilon_t + \phi\psi_z z_t + F\,\text{fCon}$
output: $x_t$
Algorithm 8: genXtOfXtm1
input: $(L, x_t, \text{fCon})$
1. Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for model system equations $M$.
2. $x_{t+1} = Bx_t + (I - F)^{-1}\phi\psi_c + \text{fCon}$
output: $x_{t+1}$
Algorithm 9: genXtp1OfXt

input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1. Numerically sums the $F$ contribution for the series.
2. $S = \sum_{\nu} F^{\nu-1}\phi Z_{t+\nu}$
output: $S$
Algorithm 10: fSumC

input: $(\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1. Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2. interpData = genInterpData($\{C_1, \ldots, C_M, d(\cdot)\,|\,(b, c)\}$, $S$)
3. $\{\gamma, G\}$ = interpDataToFunc(interpData, $S$)
output: $\{\gamma, G\}$
Algorithm 11: makeInterpFuncs

input: $(\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1. Solves the model equations at the collocation points, producing the interpolation data used to construct the decision rule and conditional expectation functions.
output: interpData
Algorithm 12: genInterpData

input: $(C, (x_{t-1}, \varepsilon_t))$
1. Evaluate the model equations obeying the pre and post conditions.
output: $x_t$ or failed
Algorithm 13: evaluateTriple
input: $(V, S)$
1. Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for ``Anticipated Shocks''
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing $x_t$ Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
  - Algorithm Overview
    - Single Equation System Case
    - Multiple Equation System Case
  - Approximating a Known Solution: $\eta = 1$, $U(c) = \mathrm{Log}(c)$
  - Approximating an Unknown Solution: $\eta = 3$
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
$$B = \begin{pmatrix} 0 & 0.692632 & 0 & 0.342028 \\ 0 & 0.36 & 0 & 0.177771 \\ 0 & -5.33763 & 0 & 0 \\ 0 & 0 & 0 & 0.949953 \end{pmatrix}, \quad
\phi = \begin{pmatrix} -0.0444237 & 0.658 & 0 & 0.360048 \\ 0.0444237 & 0.342 & 0 & 0.187137 \\ 0.342342 & -5.07075 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$

$$F = \begin{pmatrix} 0 & 0 & -0.0443793 & 0 \\ 0 & 0 & 0.0443793 & 0 \\ 0 & 0 & 0.342 & 0 \\ 0 & 0 & 0 & 0 \end{pmatrix}, \quad
\psi_c = \begin{pmatrix} -3.7735 \\ -0.197184 \\ 2.77741 \\ 0.0500976 \end{pmatrix}.$$
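As a sanity check on these matrices (decimal points were restored by inspection from the extracted text, so treat the exact values as approximate), the series weights are well defined here because the spectral radius of $F$ is $0.342 < 1$, so $(I - F)^{-1}$ exists and the constant term can be computed directly:

```python
import numpy as np

phi = np.array([[-0.0444237,  0.658,   0, 0.360048],
                [ 0.0444237,  0.342,   0, 0.187137],
                [ 0.342342,  -5.07075, 1, 0       ],
                [ 0,          0,       0, 1       ]])
F = np.array([[0, 0, -0.0443793, 0],
              [0, 0,  0.0443793, 0],
              [0, 0,  0.342,     0],
              [0, 0,  0,         0]])
psi_c = np.array([-3.7735, -0.197184, 2.77741, 0.0500976])

rho = max(abs(np.linalg.eigvals(F)))                  # spectral radius of F
const = np.linalg.solve(np.eye(4) - F, phi @ psi_c)   # (I-F)^{-1} phi psi_c
print(rho)
```

A spectral radius below one is exactly the condition under which the infinite sums in the series representation converge.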
Recomputing the truncation errors with the expanded model produces nearly identical results for the variable approximation errors.
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.⁹
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region $A$ that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right],$$

then with

$$E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t),$$

$$\left\{x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X}\ \forall t > 0\right\}.$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$

Then

$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \max_{x_{-}, \varepsilon}\left\|(I - F)^{-1}\phi\, M\left(x_{-},\ g^p(x_{-}, \varepsilon),\ G^p(g^p(x_{-}, \varepsilon)),\ \varepsilon\right)\right\|_2,$$

which can be approximated by

$$\max_{x_{t-1}, \varepsilon_t}\left\|\phi\, e^p_t(x_{t-1}, \varepsilon)\right\|_2.$$
For proof, see Appendix A.

⁹This work contrasts with the analysis provided in (Judd et al. 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
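Operationally, the global bound is approximated by scanning the weighted residuals over sample points. The sketch below uses a hypothetical residual function in place of $M$ evaluated at a proposed rule $g^p$; only the scanning logic reflects the formula above:

```python
import numpy as np

rng = np.random.default_rng(0)
phi = np.array([[1.0, 0.2],
                [0.0, 1.0]])  # illustrative weighting matrix

def residual(x, eps):
    # Hypothetical stand-in for M(x, g^p(x, eps), G^p(g^p(x, eps)), eps).
    return np.array([0.01 * np.sin(x), 0.005 * x * eps])

# Scan (state, shock) sample points and keep the largest ||phi e^p||_2.
states = rng.uniform(-1.0, 1.0, 100)
shocks = rng.normal(0.0, 1.0, 100)
bound = max(np.linalg.norm(phi @ residual(x, e)) for x, e in zip(states, shocks))
print(bound)
```

In the paper's setting the sample points would come from a quasi-random sequence over the ergodic region rather than a pseudo-random draw.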
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in $x_t$ at $(x_{t-1}, \varepsilon_t)$:

$$e(x_{t-1}, \varepsilon_t) \equiv x(x_{t-1}, \varepsilon_t) - x^p(x_{t-1}, \varepsilon_t).$$

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model $L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$ in (2) to approximate $e$:

$$X^p(x) \equiv E\left[x^p(x, \varepsilon)\right], \qquad x_{t+k+1} = X^p(x_{t+k}),\ k = 1, \ldots, K.$$

We can define functions $z^p_t$ by

$$z^p_t \equiv M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t), \qquad z^p_{t+k} \equiv M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0),$$

$$e(x_{t-1}, \varepsilon_t) \approx \sum_{\nu=0}^{K} F^{\nu}\phi\, z^p_{t+\nu}.$$
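The local error sum can be sketched directly (illustrative $F$, $\phi$, and residual path; the first residual uses the realized shock, later ones set the shock to zero, as in the definitions above):

```python
import numpy as np

F = np.array([[0.0, 0.2],
              [0.0, 0.3]])
phi = np.eye(2)

# z^p residuals along the conditional-expectations path (illustrative).
z_path = [np.array([0.01, -0.02]),   # z^p_t, realized shock
          np.array([0.004, 0.001]),  # z^p_{t+1}, shock set to zero
          np.array([0.002, 0.0])]    # z^p_{t+2}

K = len(z_path) - 1
err = sum(np.linalg.matrix_power(F, nu) @ (phi @ z_path[nu])
          for nu in range(K + 1))
print(err)
```

Because $F$ here damps quickly, the $\nu = 0$ term dominates, which is the same effect that makes even the $K = 0$ version informative in the experiments below.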
4.3 Assessing the Error Approximation

This section reports the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters $\alpha = 0.36$, $\delta = 0.95$, $\eta = 1$, $\rho = 0.95$, $\sigma = 0.01$. I consider this the fixed ``true'' model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

$$0.20 < \alpha < 0.40, \quad 0.85 < \delta < 0.99, \quad 0.85 < \rho < 0.99, \quad 0.005 < \sigma < 0.015,$$

producing the following 10 model specifications:

$\alpha$   $\delta$    $\eta$  $\rho$      $\sigma$
0.35     0.92875    1     0.957188   0.00875
0.275    0.9725     1     0.906875   0.01375
0.375    0.8675     1     0.924375   0.01125
0.225    0.94625    1     0.891563   0.00625
0.325    0.91125    1     0.944063   0.006875
0.2625   0.922187   1     0.98125    0.011875
0.3625   0.887187   1     0.85875    0.014375
0.2125   0.965938   1     0.965938   0.009375
0.3125   0.860938   1     0.878438   0.008125
0.2375   0.904688   1     0.933125   0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models $L_i$ and 10 decision rules based on the linearization. As a result, I have $100 = 10 \times 10$ combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points $(k_{i,t-1}, \theta_{i,t-1}, \varepsilon_{i,t})$ from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:
$$e_i = \alpha + \beta\,\hat{e}_i + u_i,$$

where $\hat{e}_i$ denotes the error predicted by the formula.
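The regression itself is a plain OLS of actual on predicted errors. A sketch with synthetic data standing in for the 30 representative points (`ehat` plays the role of the formula's predictions; none of these numbers come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
ehat = rng.normal(0.0, 1e-3, 30)       # predicted errors at 30 points
e = ehat + rng.normal(0.0, 1e-5, 30)   # actual errors, nearly identical

X = np.column_stack([np.ones_like(ehat), ehat])   # regressors [1, ehat]
(alpha, beta), *_ = np.linalg.lstsq(X, e, rcond=None)
r2 = 1 - np.sum((e - X @ np.array([alpha, beta]))**2) / np.sum((e - e.mean())**2)
print(alpha, beta, r2)
```

An accurate error formula shows up as $\alpha \approx 0$, $\beta \approx 1$, and $R^2$ near one, the pattern reported in the figures below.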
Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 $R^2$ and estimated variance values, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of $K$, the number of terms in the error formula ($K = 0, 1, 10, 30$).¹⁰ The regressions with $K = 0$ correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with $K = 0$, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The $R^2$ values are all above 60 percent and are often near one. The $\beta$ coefficients are generally close to one, and the constant tends to be close to zero.

¹⁰I don't display results for $\theta_t$ because the formula predicts the error in $\theta_t$ perfectly, since it is determined by a backward looking equation.
[Figure 6: $c_t$ $(\sigma^2, R^2)$ — scatter plots of estimated regression variance against $R^2$ for the $c_t$ error regressions, $K = 0, 1, 10, 30$.]
[Figure 7: $k_t$ $(\sigma^2, R^2)$ — scatter plots of estimated regression variance against $R^2$ for the $k_t$ error regressions, $K = 0, 1, 10, 30$.]
[Figure 8: $c_t$ $(\alpha, \beta)$ — scatter plots of the estimated regression coefficients for the $c_t$ error regressions, $K = 0, 1, 10, 30$.]
[Figure 9: $k_t$ $(\alpha, \beta)$ — scatter plots of the estimated regression coefficients for the $k_t$ error regressions, $K = 0, 1, 10, 30$.]
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given $(x_{t-1}, \varepsilon_t)$, this collection of equation systems produces a unique solution for $x_t$. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions $x_t = x(x_{t-1}, \varepsilon)$. Given a decision rule $x(x_{t-1}, \varepsilon)$, denote its conditional expectation $X(x_{t-1}) \equiv E\left[x(x_{t-1}, \varepsilon)\right]$. Using the convention described in section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.

It will be convenient for describing the algorithms to write the models in the form

$$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\},$$

where each

$$C_i \equiv \left(b_i(x_{t-1}, \varepsilon_t),\ M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\ c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\right)$$

represents a set of model equations $M_i$ with Boolean valued gate functions $b_i$, $c_i$. In the algorithm's implementation, the $b_i$ will be a Boolean function precondition and the $c_i$ will be a Boolean function post-condition for a given equation system. If some $b_i$ is false, then there is no need to try to solve model $i$. If $c_i$ is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The $d$ will be a function for choosing among the candidate $x_t(x_{t-1}, \varepsilon_t)$ values. The function $d$ uses the truth values from the $(b, c)$ and the possible values of the state variables to select one and only one $x_t$. Thus we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given $(x_{t-1}, \varepsilon_t)$.

Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
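The $(b_i, M_i, c_i)$ organization with an outer selector $d$ can be sketched with a toy complementary-slackness pair. The numbers and the closed-form "solves" below are invented for illustration; in the paper each $M_i$ is a system solved numerically:

```python
# Each candidate system: precondition b, solve of M_i, post-condition c.
def slack_branch(state):             # multiplier mu = 0
    b = True                         # precondition: always worth trying
    x = 0.5 * state                  # closed-form stand-in for solving M_1 = 0
    c = x >= 0.2                     # post-condition: constraint not binding
    return b, x, c

def binding_branch(state):           # multiplier mu > 0
    b = True
    x = 0.2                          # constraint holds with equality
    c = 0.5 * state <= 0.2           # post-condition: multiplier nonnegative
    return b, x, c

def d(state, systems):
    # Keep candidates whose gates both pass; exactly one must survive.
    admissible = [x for f in systems for b, x, c in [f(state)] if b and c]
    assert len(admissible) == 1
    return admissible[0]

systems = [slack_branch, binding_branch]
print(d(1.0, systems), d(0.2, systems))
```

Because each branch can be attempted independently, this structure parallelizes naturally, with or without the series representation.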
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time $t$ values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables $x_t(x_{t-1}, \varepsilon_t)$ as well as the new $z_t(x_{t-1}, \varepsilon_t)$ variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule $x(x_{t-1}, \varepsilon)$ and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals as outlined in (Judd et al. 2017b), so that the time $t$ computations are all deterministic.

There are a number of new steps. One must specify a linear reference model. One must also choose a value for $K$, the number of terms in the series representation, and there are trade-offs: fewer terms means more truncation error, while more terms yield more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution, so the early terms in the series will be incorrect. The Smolyak grid adds further error to the calculation: if there are many grid points, it is more likely that increasing $K$ can improve the quality of the solution; if the grid is too sparse, increasing $K$ will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

$$M\left(x_{t-1},\ x(x_{t-1}, \varepsilon_t),\ X(x(x_{t-1}, \varepsilon_t)),\ \varepsilon_t\right) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$

Given a proposed model solution

$$x_t = x^p(x_{t-1}, \varepsilon_t),$$

compute

$$X^p(x_{t-1}) \equiv E\left[x^p(x_{t-1}, \varepsilon_t)\right].$$

We will use a linear reference model $L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$ to construct a series of $z^p$ functions that help improve the accuracy of the proposed solution. We can define functions $z^p$, $Z^p$ by

$$z^p(x_{t-1}, \varepsilon) \equiv H\begin{pmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x^p(x_{t-1}, \varepsilon)) \end{pmatrix} + \psi_\varepsilon\varepsilon_t + \psi_c,$$

$$Z^p(x_{t-1}) \equiv E\left[z^p(x_{t-1}, \varepsilon)\right].$$
• Algorithm loop begins here. Define conditional expectations paths for $x_t$, $z_t$:

$$x_{t+k+1} = X(x_{t+k}), \qquad z_{t+k+1} = Z(x_{t+k}) \qquad \forall k \ge 0.$$

• Using the $z^p(x_{t-1}, \varepsilon)$ series, we get expressions for $x_t$, $E[x_{t+1}]$ consistent with $L$ and the conditional expectations path:

$$X_t = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, Z(x_{t+\nu}),$$

$$E[X_{t+1}] = BX_t + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, Z(x_{t+\nu+1}) \qquad \forall t, k \ge 0.$$

• Use the model equations $M(x_{t-1}, x_t, E[x_{t+1}], \varepsilon) = 0$ and $x_t(x_{t-1}, \varepsilon_t) = X_t(x_{t-1}, \varepsilon_t)$ to determine $x^{p\prime}(x_{t-1}, \varepsilon)$, $z^{p\prime}(x_{t-1}, \varepsilon)$.

• Set $x^p(x_{t-1}, \varepsilon) = x^{p\prime}(x_{t-1}, \varepsilon)$, $z^p(x_{t-1}, \varepsilon) = z^{p\prime}(x_{t-1}, \varepsilon)$; repeat the loop.
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation, and Figure 10 provides a graphic depiction of the function call tree. For more
$L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model.

$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components.

$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components.

$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components.

$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components.

$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$).

$S$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$.

$T$: model evaluation points; a Niederreiter sequence in the model variable ergodic region.

$\gamma = \{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$: collects the decision rule components.

$G = \{X(x_{t-1}), Z(x_{t-1})\}$: collects the decision rule conditional expectations components.

$H = \{\gamma, G\}$: collects decision rule information.

$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon + N_x} \rightarrow \mathbb{R}^{N_x}$.

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
└── doInterp
    ├── genFindRootFuncs
    │   └── genFindRootWorker
    │       ├── genZsForFindRoot
    │       └── genLilXkZkFunc
    │           ├── fSumC
    │           ├── genXtOfXtm1
    │           └── genXtp1OfXt
    └── makeInterpFuncs
        ├── genInterpData
        │   └── evaluateTriple
        └── interpDataToFunc

Figure 10: Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique $x_t$ for any value of $(x_{t-1}, \varepsilon_t)$ for the family of model equations $\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}$, and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$; the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc mdash Symbolically creates a function that uses an initial Zt path to gen-erate a function of (xtminus1 εt zt) providing (potentially symbolic ) inputs xtminus1
xtEt(xt+1) εt)
for model system equations M This routine provides the information for constructingxt(xtminus1 εt) using the series expression 2
makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule's conditional expectation. This can be done in parallel.
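Taken together, these routines implement an outer fixed-point iteration on the interpolated decision rule. A sketch of the nestInterp-style control flow, assuming a hypothetical `update_rule` that stands in for the solve-and-interpolate step:

```python
import numpy as np

def nest_interp(update_rule, test_points, rule, tol=1e-8, max_iter=100):
    """Iterate rule -> update_rule(rule) until the rule's values at the
    test points stop changing (the nestInterp termination criterion)."""
    prev = np.array([rule(p) for p in test_points])
    for _ in range(max_iter):
        rule = update_rule(rule)
        cur = np.array([rule(p) for p in test_points])
        if np.max(np.abs(cur - prev)) < tol:
            break
        prev = cur
    return rule

# Toy illustration: an updating step that is a contraction toward k**0.36,
# standing in for "solve the model equations given the current rule".
def update_rule(rule):
    return lambda p, r=rule: 0.5 * (r(p) + p ** 0.36)

final = nest_interp(update_rule, [0.5, 1.0, 1.5], lambda p: 1.0)
```

The real algorithm replaces the toy contraction with the root-finding and interpolation steps described above; the termination test at the test points is the same.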
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
Figure 11: The Ergodic Values for k_t and θ_t (left panel: simulated values of (k_t, θ_t); right panel: the same points in transformed coordinates)
The transformation statistics are

X̄ = (0.18853, 1.00466, 0),    σ_X = (0.0112443, 0.0387807, 0.01),

V =
  −0.707107  −0.707107  0
  −0.707107   0.707107  0
   0          0         1

max U = (3.02524, 0.199124, 3),    min U = (−3.16879, −0.17629, −3).
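The reported X̄, σ_X, V, and the max/min of U come from rotating the simulated cloud into its principal axes so that the interpolation hypercube tracks the elongated ergodic set rather than a loose bounding box. One reasonable version of that computation, sketched on synthetic data (the mean and covariance below are illustrative, not the paper's simulation):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for 200 simulated (k_t, theta_t, eps_t) draws;
# k and theta are strongly correlated along the ergodic set.
cov_kt = 0.9 * 0.0112 * 0.0388
sim = rng.multivariate_normal(
    mean=[0.1885, 1.0047, 0.0],
    cov=[[0.0112 ** 2, cov_kt, 0.0],
         [cov_kt, 0.0388 ** 2, 0.0],
         [0.0, 0.0, 0.01 ** 2]],
    size=200)

xbar = sim.mean(axis=0)                    # plays the role of Xbar
sigma = sim.std(axis=0)                    # plays the role of sigma_X
corr = np.corrcoef(sim, rowvar=False)
_, V = np.linalg.eigh(corr)                # principal axes, like V above
U = ((sim - xbar) / sigma) @ V             # rotated, standardized coordinates
box_lo, box_hi = U.min(axis=0), U.max(axis=0)  # cf. min U and max U
```

The ±0.707107 entries of the reported V correspond to a 45-degree rotation of the standardized (k, θ) pair, exactly what a strongly correlated cloud produces.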
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
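The broad pattern in Table 10, errors falling in K until they reach the accuracy floor of the polynomial grid, mirrors the textbook behavior of truncated discounted sums. A toy illustration (the discount factor and the series are illustrative, not the model's):

```python
# Truncating sum_{j=0}^{inf} delta**j after K terms leaves an error of
# delta**K / (1 - delta), so each extra term shrinks the error geometrically.
delta = 0.95
exact = 1.0 / (1.0 - delta)
err = {K: abs(exact - sum(delta ** j for j in range(K))) for K in (5, 10, 20, 40)}
```

In the model, the analogous floor is set by the polynomial degree d: once the truncation error falls below the interpolation error, larger K buys nothing further.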
[30 rows of collocation points (k_i, θ_i, ε_i)]
Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1)
[30 rows of decision-rule values (c_i, k_i, θ_i)]
Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1)
[30 rows of actual errors ln|e_c|, ln|e_k|, ln|e_θ|]
Table 4: RBC Known Solution Actual Errors, d = (1,1,1)
[30 rows of estimated errors ln|ê_c|, ln|ê_k|, ln|ê_θ|]
Table 5: RBC Known Solution Error Approximations, d = (1,1,1)
[30 rows of collocation points (k_i, θ_i, ε_i)]
Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2)
[30 rows of decision-rule values (c_i, k_i, θ_i)]
Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2)
[30 rows of actual errors ln|e_c|, ln|e_k|, ln|e_θ|]
Table 8: RBC Known Solution Actual Errors, d = (2,2,2)
[30 rows of estimated errors ln|ê_c|, ln|ê_k|, ln|ê_θ|]
Table 9: RBC Known Solution Error Approximations, d = (2,2,2)
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  -6.51367    -12.2771    -16.8947
0          -6.0793     -6.26833    -6.29843
1          -6.00805    -6.24202    -6.49835
2          -6.52219    -7.43876    -7.04785
3          -6.57578    -8.03864    -8.32589
4          -6.62831    -9.1522     -9.49314
5          -6.77305    -10.5516    -10.4247
6          -6.38499    -11.6399    -11.455
7          -6.63849    -11.3627    -13.0213
8          -6.38735    -11.7288    -13.9976
9          -6.46759    -11.9826    -15.3372
10         -6.45966    -12.1345    -15.4157
10         -6.77776    -11.5025    -15.8211
15         -6.67778    -12.4593    -15.8899
20         -6.40802    -11.7296    -17.1652
25         -6.53265    -12.1441    -17.1518
30         -6.81949    -11.6097    -16.3748
35         -6.33508    -12.1446    -16.6963
40         -6.52442    -11.4042    -15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 rows of collocation points (k_i, θ_i, ε_i)]
Table 11: RBC Approximated Solution Model Evaluation Points, d = (1,1,1)
[30 rows of decision-rule values (c_i, k_i, θ_i)]
Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1,1,1)
[30 rows of estimated errors ln|ê_c|, ln|ê_k|, ln|ê_θ|]
Table 13: RBC Approximated Solution Error Approximations, d = (1,1,1)
[30 rows of collocation points (k_i, θ_i, ε_i)]
Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2)
[30 rows of decision-rule values (c_i, k_i, θ_i)]
Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2,2,2)
[30 rows of estimated errors ln|ê_c|, ln|ê_k|, ln|ê_θ|]
Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1 − η)

The Lagrangian is

L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1})
    + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1})))
    + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss)

so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss.

The first order conditions become

1/c_t^η + λ_t = 0
λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1} = 0
11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.

For example, the Euler equations for the neoclassical growth model can be written as
if (μ_t > 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t−1}) = 0
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss) = 0

if (μ_t = 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t−1}) = 0
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss) = 0
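Solving the model then amounts to guessing which regime holds at a given state, solving that regime's equation system, and checking the side conditions. A minimal sketch of the selection logic; `solve_binding` and `solve_slack` are hypothetical solvers for the two systems above:

```python
def select_regime(solve_binding, solve_slack, state, upsilon, I_ss):
    """Pick the regime whose solution satisfies its own side condition."""
    # Regime 1: constraint binds (I_t = upsilon * I_ss); valid if mu_t > 0.
    sol = solve_binding(state)
    if sol["mu"] > 0:
        return sol
    # Regime 2: constraint slack (mu_t = 0); valid if I_t >= upsilon * I_ss.
    sol = solve_slack(state)
    if sol["I"] >= upsilon * I_ss:
        return sol
    raise ValueError("no regime consistent at this state")

# Toy stand-in solvers: at this state the constraint turns out to be slack.
binding = lambda s: {"mu": -0.2, "I": 0.0390}
slack = lambda s: {"mu": 0.0, "I": 0.0500}
sol = select_regime(binding, slack, state=None, upsilon=0.975, I_ss=0.04)
```

The Boolean gate values returned by the function triples in genFindRootFuncs play exactly this role: they record which regime's side conditions are satisfied so the selection function can choose among the candidate solutions.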
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 rows of collocation points (k_i, θ_i, ε_i)]
Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1)
[30 rows of decision-rule values (c_i, I_i, k_i)]
Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1,1,1)
[30 rows of estimated errors ln|ê_c|, ln|ê_k|, ln|ê_θ|]
Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1)
[30 rows of collocation points (k_i, θ_i, ε_i)]
Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2,2,2)
[30 rows of decision-rule values (c_i, I_i, k_i)]
Table 21: Occasionally Binding Constraints Values at Evaluation Points, d = (2,2,2)
[30 rows of estimated errors ln|ê_c|, ln|ê_k|, ln|ê_θ|]
Table 22: Occasionally Binding Constraints Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time-invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft November 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

Adrian Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
55
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then with

E_t x*_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t-1}, ε):

{x*_t(x_{t-1}, ε_t) ∈ R^L : ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z*_t(x_{t-1}, ε_t) ≡ H_{-1} x*_{t-1}(x_{t-1}, ε_t) + H_0 x*_t(x_{t-1}, ε_t) + H_{+1} x*_{t+1}(x_{t-1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t-1}, ε)

and

E[x*_{t+1}(x_{t-1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t-1}, ε)] + (I − F)^{-1} φ ψ_c

with

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x_{t+1} = G^p(g^p(x_{t-1}, ε_t)),   e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t-1}, ε):12

{x^p_t(x_{t-1}, ε_t) ∈ R^L : ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_{+1} x^p_{t+1}(x_{t-1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I − F)^{-1} φ ψ_c

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the two representations gives

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε))

Defining

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε)

we have

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution:
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz*_t‖ ≤ max_{x_-, ε} ‖φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I − F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2
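The bound above can be checked numerically for a small stable system. The sketch below is illustrative Python (the paper's implementation is Mathematica) with hypothetical values: F is chosen stable with nonnegative entries so the comparison against the conservative (I − F)^{-1} worst-case bound holds entrywise, and the infinite sum is truncated.

```python
import numpy as np

# Hypothetical 2x2 linear reference model pieces (F stable, entries >= 0)
# and a bounded sequence of z-discrepancies Delta z_{t+nu}.
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
phi = np.eye(2)
rng = np.random.default_rng(0)
dz = rng.uniform(-1.0, 1.0, size=(200, 2))   # truncate the infinite sum at 200 terms

# Series for the solution discrepancy: sum_nu F^nu phi dz_{t+nu}
diff = np.zeros(2)
Fpow = np.eye(2)
for nu in range(dz.shape[0]):
    diff += Fpow @ phi @ dz[nu]
    Fpow = Fpow @ F

# Conservative bound: (I - F)^{-1} phi applied to the worst-case |dz|
bound = np.linalg.inv(np.eye(2) - F) @ phi @ np.full(2, np.abs(dz).max())
```

With nonnegative entries in F and φ = I, each component of the accumulated series is dominated by the corresponding component of `bound`, mirroring the (I − F)^{-1} inequality.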
z*_t = H_{-1} x_{t-1} + H_0 x*_t + H_{+1} x*_{t+1}

z^p_t = H_{-1} x^p_{t-1} + H_0 x^p_t + H_{+1} x^p_{t+1}

0 = M(x_{t-1}, x*_t, x*_{t+1}, ε_t)

e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t-1}, ε_t} (‖φ Δz_t‖_2)^2 = max_{x_{t-1}, ε_t} (‖φ (H_0 (x^p_t − x*_t) + H_{+1} (x^p_{t+1} − x*_{t+1}))‖_2)^2

with

M(x_{t-1}, x*_t, x*_{t+1}, ε_t) = 0

Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1})

When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

(x*_t − x^p_t) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1}))
Collecting results,

(‖φ Δz_t‖_2)^2
= (‖φ (H_0 (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})) + H_{+1}(x^p_{t+1} − x*_{t+1}))‖_2)^2
= (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}) + H_{+1}(x^p_{t+1} − x*_{t+1}))‖_2)^2
= (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − (H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) − H_{+1})(x*_{t+1} − x^p_{t+1}))‖_2)^2
Approximating the derivatives using the H matrices,

∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,   ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_{+1}

produces

max_{x_{t-1}, ε_t} (‖φ Δe^p_t(x_{t-1}, ε)‖_2)^2
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial x(x_{t-1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for the state:

[E_t(x_{t+1}(x_t) | s_t = 1); …; E_t(x_{t+1}(x_t) | s_t = N)]

For the next state, we can compute expectations if the errors are independent:

[E_t(x_{t+2}(x_t) | s_t = 1); …; E_t(x_{t+2}(x_t) | s_t = N)] = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); …; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)]
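The (P ⊗ I) stacking can be illustrated numerically. The Python sketch below uses hypothetical values (the paper's implementation is Mathematica): N = 2 regimes and a 3-variable model.

```python
import numpy as np

# Transition probabilities p_ij = Prob(s_{t+1} = j | s_t = i); values hypothetical.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Stacked blocks E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j) for a 3-variable model, j = 1, 2.
Ex_next = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])

# (P kron I) maps next-state conditionals into current-state conditionals.
stacked = np.kron(P, np.eye(3)) @ Ex_next
```

Row block i of `stacked` is E_t(x_{t+2} | s_t = i), the p_ij-weighted average of the next-state blocks.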
Consider two states s_t ∈ {0, 1} where the depreciation rates are different, d_0 > d_1, and

Prob(s_t = j | s_{t-1} = i) = p_ij
If (s_t = 0 and μ_t > 0 and k_t − (1 − d_0) k_{t-1} − υ I_ss = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} δ(1 − d_0))
I_t − (k_t − (1 − d_0) k_{t-1})
μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)

If (s_t = 0 and μ_t = 0 and k_t − (1 − d_0) k_{t-1} − υ I_ss ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} δ(1 − d_0))
I_t − (k_t − (1 − d_0) k_{t-1})
μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)

If (s_t = 1 and μ_t > 0 and k_t − (1 − d_1) k_{t-1} − υ I_ss = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} δ(1 − d_1))
I_t − (k_t − (1 − d_1) k_{t-1})
μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)

If (s_t = 1 and μ_t = 0 and k_t − (1 − d_1) k_{t-1} − υ I_ss ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} δ(1 − d_1))
I_t − (k_t − (1 − d_1) k_{t-1})
μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model

x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components

z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components

X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components

Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T: model evaluation points, a Niederreiter sequence in the model variable ergodic region

γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components

G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components

H: {γ, G} collects decision rule information

{C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, R^{3N_x+N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default ``normConvTol'' → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
Algorithm 1: nestInterp
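A Python analogue of this NestWhile iteration (the paper's implementation is Mathematica; the update map below is a hypothetical stand-in for doInterp, not the paper's interpolation step):

```python
import numpy as np

def nest_while(update, h0, test_points, tol=1e-10, max_iter=500):
    """Iterate a rule-updating map until evaluations at test points converge."""
    h = h0
    for _ in range(max_iter):
        h_next = update(h)
        if np.linalg.norm(h_next(test_points) - h(test_points)) < tol:
            return h_next
        h = h_next
    raise RuntimeError("no convergence within max_iter")

# Toy stand-in for doInterp: damped update whose fixed point is h(x) = 0.5*x + 1.
update = lambda h: (lambda x: 0.5 * (h(x) + 0.5 * x + 1.0))
rule = nest_while(update, lambda x: np.zeros_like(x), np.linspace(0.0, 1.0, 5))
```

The stopping rule mirrors notConvergedAt: the candidate rules are compared only at the fixed test points, not everywhere on the domain.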
input: (L, H_i, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equation systems, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)})
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equation systems. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples, d
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ−1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
   for i = 1; i < κ−1; i++ do
3    {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
4 end
5 Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [B x + (I − F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x}]
3 G(x) = [B x + (I − F)^{-1} φ ψ_c ; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
   xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
   xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
   fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_{t+1} = B x_t + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
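In Python, fSumC amounts to accumulating matrix powers against the Z vectors. The sketch below uses hypothetical F, φ, and Z values (the paper's routine is Mathematica):

```python
import numpy as np

def f_sum_c(F, phi, Zs):
    """Sum the F-weighted series contribution S = sum_k F^k @ phi @ Zs[k]."""
    S = np.zeros(F.shape[0])
    Fpow = np.eye(F.shape[0])
    for z in Zs:                 # Zs[0] gets weight I, Zs[1] weight F, ...
        S += Fpow @ phi @ z
        Fpow = Fpow @ F
    return S

F = np.array([[0.5, 0.0],
              [0.1, 0.3]])
phi = np.eye(2)
Zs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
S = f_sum_c(F, phi, Zs)
```

Keeping a running `Fpow` avoids recomputing F^k from scratch at each term.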
input: ({C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
   {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points, producing the interpolation data used to construct the decision rule and conditional expectation functions.
output: interpData
Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1 Evaluates the model equations, obeying the pre- and post-conditions.
output: x_t or failed
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
4 Approximating Model Solution Errors

This section shows how to use the model equation errors to construct an approximate error for the components of any proposed model solution.9
4.1 Approximating a Global Decision Rule Error Bound

Consider a bounded region A that contains all iterated conditional expectations paths. By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest deviation of a proposed solution from the exact solution.

Given an exact solution x*_t = g(x_{t-1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then with

E_t x*_{t+1} = G(g(x_{t-1}, ε_t))

we have

M(x_{t-1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t-1}, ε_t)

{x*_t(x_{t-1}, ε_t) ∈ R^L : ‖x*_t(x_{t-1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x_{t+1} = G^p(g^p(x_{t-1}, ε_t)),   e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

Then

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_-, ε} ‖(I − F)^{-1} φ M(x_-, g^p(x_-, ε), G^p(g^p(x_-, ε)), ε)‖_2

which can be approximated by

max_{x_{t-1}, ε_t} ‖φ e^p_t(x_{t-1}, ε)‖_2

For proof see Appendix A.

9 This work contrasts with the analysis provided in (Judd et al 2017a; Peralta-Alva and Santos 2014; Santos and Peralta-Alva 2005; Santos 2000). Their approaches identify Euler equation errors but use statistical techniques to estimate the impact of these errors on solution accuracy. Here I provide a formula for the approximate error that does not require statistical inference.
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in x_t at (x_{t-1}, ε_t):

e(x_{t-1}, ε_t) ≡ x(x_{t-1}, ε_t) − x^p(x_{t-1}, ε_t)

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} in equation 2 to approximate e:

X^p(x) ≡ E[x^p(x, ε)]

x_{t+k+1} = X^p(x_{t+k}),  k = 1, …, K

We can define functions z^p_t by

z^p_t ≡ M(x_{t-1}, x_t, x_{t+1}, ε_t)
z^p_{t+k} ≡ M(x_{t+k−1}, x_{t+k}, x_{t+k+1}, 0)

e(x_{t-1}, ε_t) ≡ Σ_{ν=0}^K F^ν φ z^p_{t+ν}
4.3 Assessing the Error Approximation

This section reports on the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters α = 0.36, δ = 0.95, η = 1, ρ = 0.95, σ = 0.01. I consider this the fixed ``true'' model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

0.20 < α < 0.40,  0.85 < δ < 0.99,  0.85 < ρ < 0.99,  0.005 < σ < 0.015

producing the following 10 model specifications:
α       δ         η   ρ         σ
0.35    0.92875   1   0.957188  0.00875
0.275   0.9725    1   0.906875  0.01375
0.375   0.8675    1   0.924375  0.01125
0.225   0.94625   1   0.891563  0.00625
0.325   0.91125   1   0.944063  0.006875
0.2625  0.922187  1   0.98125   0.011875
0.3625  0.887187  1   0.85875   0.014375
0.2125  0.965938  1   0.965938  0.009375
0.3125  0.860938  1   0.878438  0.008125
0.2375  0.904688  1   0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearization. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{t-1}, θ_{t-1}, ε_t) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:

e_i = α + β ê_i + u_i

Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).10 The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one, and the constant tends to be close to zero.

10 I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
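The accuracy regression can be sketched in Python as follows (synthetic data standing in for the 30 representative points; in the experiment the actual and predicted errors come from the analytic solution and the error formula):

```python
import numpy as np

# Synthetic stand-in: predicted errors ehat and actual errors e that lie
# close to the 45-degree line, as in the paper's experiment.
rng = np.random.default_rng(1)
ehat = rng.normal(0.0, 1e-3, size=30)            # formula-predicted errors
e = 1.0 * ehat + rng.normal(0.0, 1e-5, size=30)  # actual errors plus small noise

# OLS of e_i = alpha + beta * ehat_i + u_i
X = np.column_stack([np.ones_like(ehat), ehat])
(alpha, beta), *_ = np.linalg.lstsq(X, e, rcond=None)
resid = e - X @ np.array([alpha, beta])
r2 = 1.0 - resid @ resid / ((e - e.mean()) @ (e - e.mean()))
```

A slope near one and R² near one indicate the predicted errors track the actual errors closely, which is the pattern reported in Figures 6 through 9.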
[Figure 6: c_t (σ², R²). Scatter plots of regression R² against residual variance σ² for the c_t error regressions, one panel per K ∈ {0, 1, 10, 30}; plot data omitted.]
[Figure 7: k_t (σ², R²). Scatter plots of regression R² against residual variance σ² for the k_t error regressions, one panel per K ∈ {0, 1, 10, 30}; plot data omitted.]
[Figure 8: c_t (α, β). Scatter plots of the estimated regression intercepts α and slopes β for the c_t error regressions, one panel per K ∈ {0, 1, 10, 30}; plot data omitted.]
[Figure 9: k_t (α, β). Scatter plots of the estimated regression intercepts α and slopes β for the k_t error regressions, one panel per K ∈ {0, 1, 10, 30}; plot data omitted.]
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula

I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given (x_{t-1}, ε_t) this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))

represents a set of model equations M_i with Boolean valued gate functions (b_i, c_i). In the algorithm's implementation, the b_i will be a Boolean function precondition, and the c_i will be a Boolean function post-condition, for a given equation system. If some b_i is false, then there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values: d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus, we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
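A minimal Python sketch of this gated dispatch (hypothetical one-equation systems, not the paper's Mathematica implementation): each triple carries a precondition b_i, a solver for M_i, and a post-condition c_i, and the selector keeps the unique admissible x_t.

```python
def solve_gated(triples, state):
    """Dispatch over (b_i, M_i-solver, c_i) triples; the role of d here is
    reduced to asserting that exactly one system is admissible."""
    candidates = []
    for b, solve, c in triples:
        if not b(state):          # precondition: skip systems that cannot apply
            continue
        x = solve(state)
        if c(state, x):           # post-condition: keep acceptable solutions only
            candidates.append(x)
    assert len(candidates) == 1, "gates must select exactly one system"
    return candidates[0]

# Occasionally binding floor x = max(state, 0), split into two gated systems.
triples = [
    (lambda s: True, lambda s: s,   lambda s, x: x >= 0),  # interior: x = s when s >= 0
    (lambda s: True, lambda s: 0.0, lambda s, x: s < 0),   # bound: x = 0 when s < 0
]
```

The post-conditions implement the complementary slackness split: exactly one branch validates for any state, so the dispatch is single-valued.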
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm, I augment the model Euler equations with equations based on equation 2 linking the conditional expectations for future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals as outlined in (Judd et al 2017b), so that the time t computations are all deterministic.
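Precomputing expectations can be illustrated with a one-shock polynomial basis (a Python sketch with hypothetical coefficients, not the paper's Mathematica code): with ε ~ N(0, σ²), each basis monomial ε^k is replaced once by its moment, so later evaluations of conditional expectations involve no quadrature.

```python
import numpy as np

sigma = 0.01
# Moments E[eps^k] of a N(0, sigma^2) shock, computed once up front.
moments = {0: 1.0, 1: 0.0, 2: sigma**2, 3: 0.0}

# Time-t evaluation of E[a0 + a1*eps + a2*eps^2] is then purely deterministic:
coeffs = np.array([2.0, 5.0, 3.0])   # hypothetical polynomial coefficients
expected = sum(a * moments[k] for k, a in enumerate(coeffs))
```

The same idea extends to a multi-dimensional Smolyak basis: the integral of each basis polynomial against the shock distribution is stored alongside the polynomial itself.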
There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error; more terms corresponds to more accuracy when the decision rule is accurate, but is also computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution. If the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀(x_{t-1}, ε_t)
Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t)

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)]

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [x_{t-1}; x^p(x_{t-1}, ε); X^p(x^p(x_{t-1}, ε))] + ψ_ε ε_t + ψ_c

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)]
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0

using the z^p(x_{t-1}, ε) series.

• We get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t-1} + φ ψ_ε ε + (I − F)^{-1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})

E[X_{t+1}] = B X_t + (I − F)^{-1} φ ψ_c + Σ_{ν=1}^∞ F^{ν−1} φ Z(x_{t+ν})  ∀t, k ≥ 0

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε) and z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε); repeat the loop.
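For intuition, the loop can be reduced to a toy scalar model without the z-series augmentation (all values hypothetical): guess a rule x_t = θ x_{t-1} + ε_t, form E_t[x_{t+1}] = θ x_t from the guess, solve the model equation for the implied new θ, and repeat.

```python
import math

# Toy model M = x_t - a*x_{t-1} - b*E_t[x_{t+1}] - eps_t = 0, with the
# guessed rule x_t = theta*x_{t-1} + psi*eps_t (a, b hypothetical).
a, b = 0.6, 0.3
theta = 0.0                      # initial proposed decision rule coefficient
for _ in range(200):
    # Given E_t[x_{t+1}] = theta*x_t, solving M gives x_t = (a*x_{t-1}+eps)/(1-b*theta),
    # so the updated coefficient on x_{t-1} is:
    theta = a / (1.0 - b * theta)

# Stable root of b*theta^2 - theta + a = 0, for comparison.
exact = (1.0 - math.sqrt(1.0 - 4.0 * a * b)) / (2.0 * b)
```

The iteration is a contraction for these parameter values, so θ converges to the stable root; the full algorithm iterates on polynomial coefficients rather than a single scalar.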
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation, and Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model

x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components

z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components

X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components

Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T: model evaluation points, a Niederreiter sequence in the model variable ergodic region

γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components

G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components

H: {γ, G} collects decision rule information

{C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, R^{3N_x+N_ε} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use equation 2 or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
25
genLilXkZkFunc: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing the (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the $M$ model system equations. This routine provides the information for constructing $x_t(x_{t-1}, \varepsilon_t)$ using the series expression 2.
makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
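Taken together, these routines implement a fixed-point loop: solve the model equations at each collocation point against the current interpolated continuation rule, refit the interpolant, and stop when the rule settles at the test points. A hedged Python sketch of that loop on a model with a known answer (log utility and full depreciation, with piecewise-linear interpolation standing in for Smolyak polynomials; none of this is the paper's code):

```python
# Time-iteration sketch of the doInterp/makeInterpFuncs/nestInterp loop on
# the Brock-Mirman model, whose exact policy is c(k) = (1 - alpha*delta)*k**alpha.
alpha, delta = 0.36, 0.95
grid = [0.05 + i * (0.45 / 19) for i in range(20)]   # collocation points

def interp(xs, ys, x):
    # piecewise-linear interpolation with flat extrapolation
    if x <= xs[0]:
        return ys[0]
    for i in range(1, len(xs)):
        if x <= xs[i]:
            w = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return (1 - w) * ys[i - 1] + w * ys[i]
    return ys[-1]

def solve_point(k, c_old):
    # root of the Euler residual in c at one collocation point, using the
    # current interpolated rule c_old for the continuation value
    y = k ** alpha
    def resid(c):
        kp = y - c                     # capital implied by consuming c
        return 1.0 / c - delta * alpha * kp ** (alpha - 1) \
            / max(interp(grid, c_old, kp), 1e-9)
    lo, hi = 1e-6, y - 1e-6            # resid > 0 at lo, < 0 at hi
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if resid(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

c_vals = [0.9 * k ** alpha for k in grid]     # initial guess
for _ in range(200):
    c_new = [solve_point(k, c_vals) for k in grid]
    change = max(abs(a - b) for a, b in zip(c_new, c_vals))
    c_vals = c_new
    if change < 1e-10:                 # nestInterp-style stopping rule
        break
```

At the fixed point the interpolated rule sits close to the exact policy $(1-\alpha\delta)k^{\alpha}$; the Smolyak version replaces both the grid and the interpolant but keeps the same outer structure.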
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for a case where the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of $k_t$ and $\theta_t$ resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
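For reference, the closed form that makes this case checkable, under the standard assumption of log utility with full depreciation ($d = 1$; see, e.g., Lettau (2003)), is

$$c_t = (1-\alpha\delta)\,\theta_t k_{t-1}^{\alpha}, \qquad k_t = \alpha\delta\,\theta_t k_{t-1}^{\alpha}.$$

Substituting into the Euler equation $\frac{1}{c_t} = \delta E_t\left[\frac{\alpha\theta_{t+1}k_t^{\alpha-1}}{c_{t+1}}\right]$ gives $\delta E_t\left[\frac{\alpha\theta_{t+1}k_t^{\alpha-1}}{(1-\alpha\delta)\theta_{t+1}k_t^{\alpha}}\right] = \frac{\delta\alpha}{(1-\alpha\delta)k_t} = \frac{1}{(1-\alpha\delta)\theta_t k_{t-1}^{\alpha}} = \frac{1}{c_t}$, so the rule solves the model exactly and the reported errors measure the approximation alone.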
[Figure 11: The Ergodic Values for $k_t$, $\theta_t$. Both panels are titled "Ergodic Values for K and θ": the left panel plots the simulated $(k_t, \theta_t)$ pairs; the right panel plots the transformed variables.]
$$X = \begin{pmatrix} 0.18853 \\ 1.00466 \\ 0 \end{pmatrix}, \qquad \sigma_X = \begin{pmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{pmatrix}, \qquad V = \begin{pmatrix} -0.707107 & -0.707107 & 0 \\ -0.707107 & 0.707107 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

$$\max U = \begin{pmatrix} 3.02524 \\ 0.199124 \\ 3 \end{pmatrix}, \qquad \min U = \begin{pmatrix} -3.16879 \\ -0.17629 \\ -3 \end{pmatrix}$$
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
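For concreteness, the standard isotropic Smolyak grid can be built from nested sets of Chebyshev extrema; a hedged Python sketch follows. The anisotropic Lagrange-interpolation variant of Judd et al. (2014) used in the paper generalizes this by allowing a different accuracy level in each dimension, so the point counts it produces differ from the isotropic ones below.

```python
# Isotropic Smolyak sparse grid: union of tensor products of nested
# 1-D Chebyshev-extrema sets whose level indices sum to at most d + mu.
import itertools
import math

def cheb_extrema(m):
    """m Chebyshev-Gauss-Lobatto points on [-1, 1] (nested for m = 1, 3, 5, ...)."""
    if m == 1:
        return [0.0]
    return [-math.cos(math.pi * j / (m - 1)) for j in range(m)]

def smolyak_points(d, mu):
    def m(i):                                  # nested set sizes 1, 3, 5, 9, ...
        return 1 if i == 1 else 2 ** (i - 1) + 1
    pts = set()
    for levels in itertools.product(range(1, mu + 2), repeat=d):
        if sum(levels) <= d + mu:
            for p in itertools.product(*(cheb_extrema(m(i)) for i in levels)):
                pts.add(tuple(round(x, 12) for x in p))  # dedupe nested points
    return sorted(pts)

grid3 = smolyak_points(3, 1)   # three state variables, approximation level 1
```

For two dimensions this construction reproduces the familiar point counts 5 and 13 at levels 1 and 2.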
Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
[Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1); 30 rows of evaluation points $(k_i, \theta_i, \varepsilon_i)$, $i = 1, \ldots, 30$.]
[Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1); 30 rows of decision rule values $(c_i, k_i, \theta_i)$.]
[Table 4: RBC Known Solution Actual Errors, d = (1,1,1); 30 rows of $(\ln|e_{c_i}|, \ln|e_{k_i}|, \ln|e_{\theta_i}|)$.]
[Table 5: RBC Known Solution Error Approximations, d = (1,1,1); 30 rows of $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$.]
[Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2); 30 rows of evaluation points $(k_i, \theta_i, \varepsilon_i)$.]
[Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2); 30 rows of decision rule values.]
[Table 8: RBC Known Solution Actual Errors, d = (2,2,2); 30 rows of $(\ln|e_{c_i}|, \ln|e_{k_i}|, \ln|e_{\theta_i}|)$.]
[Table 9: RBC Known Solution Error Approximations, d = (2,2,2); 30 rows of $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$.]
K           d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries   -6.51367      -12.2771      -16.8947
0           -6.0793       -6.26833      -6.29843
1           -6.00805      -6.24202      -6.49835
2           -6.52219      -7.43876      -7.04785
3           -6.57578      -8.03864      -8.32589
4           -6.62831      -9.1522       -9.49314
5           -6.77305      -10.5516      -10.4247
6           -6.38499      -11.6399      -11.455
7           -6.63849      -11.3627      -13.0213
8           -6.38735      -11.7288      -13.9976
9           -6.46759      -11.9826      -15.3372
10          -6.45966      -12.1345      -15.4157
10          -6.77776      -11.5025      -15.8211
15          -6.67778      -12.4593      -15.8899
20          -6.40802      -11.7296      -17.1652
25          -6.53265      -12.1441      -17.1518
30          -6.81949      -11.6097      -16.3748
35          -6.33508      -12.1446      -16.6963
40          -6.52442      -11.4042      -15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 11: RBC Approximated Solution Model Evaluation Points, d = (1,1,1); 30 rows of evaluation points $(k_i, \theta_i, \varepsilon_i)$.]
[Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1,1,1); 30 rows of decision rule values $(c_i, k_i, \theta_i)$.]
[Table 13: RBC Approximated Solution Error Approximations, d = (1,1,1); 30 rows of $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$.]
[Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2); 30 rows of evaluation points $(k_i, \theta_i, \varepsilon_i)$.]
41
k1 θ1 ε1
k30 θ30 ε30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
42
[Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2); 30 rows of $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$.]
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches[11] (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \geq \upsilon I_{ss}$$

$$\max\ u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$

$$c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$

$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}$$
$$\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \left[\delta^t \lambda_t \left(c_t + k_t - (1-d)k_{t-1} - \theta_t f(k_{t-1})\right) + \delta^t \mu_t \left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)\right]$$
so that we can impose

$$I_t = k_t - (1-d)k_{t-1} \geq \upsilon I_{ss}$$
The first order conditions become

$$\frac{1}{c_t^{\eta}} + \lambda_t = 0$$

$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} = 0$$
For example, the Euler equations for the neoclassical growth model can be written as

[Footnote 11: The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.]
if $\left(\mu_t > 0 \text{ and } k_t - (1-d)k_{t-1} - \upsilon I_{ss} = 0\right)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1}\right) \\
&I_t - (k_t - (1-d)k_{t-1}) \\
&\mu_t\left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

if $\left(\mu_t = 0 \text{ and } k_t - (1-d)k_{t-1} - \upsilon I_{ss} \geq 0\right)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1}\right) \\
&I_t - (k_t - (1-d)k_{t-1}) \\
&\mu_t\left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
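The two branches form a complementarity pair: either the multiplier is positive and investment sits exactly at the floor, or the multiplier is zero and the floor is slack. A toy Python illustration of the branch-selection ("Boolean gate") logic, using a made-up one-line objective $\log(y - I) + bI$ in place of the model's Euler system:

```python
# Toy two-branch solve for an investment floor I >= floor. The objective
# log(y - I) + b*I is a hypothetical stand-in, not the paper's model.
def solve_with_floor(y, b, floor):
    i_slack = y - 1.0 / b            # FOC with mu = 0: -1/(y - I) + b = 0
    if i_slack >= floor:             # gate: slack branch is admissible
        return i_slack, 0.0
    mu = 1.0 / (y - floor) - b       # binding branch: I = floor, with
    return floor, mu                 # mu recovered from the FOC

i1, mu1 = solve_with_floor(y=2.0, b=2.0, floor=1.0)   # slack case
i2, mu2 = solve_with_floor(y=2.0, b=0.8, floor=1.0)   # binding case
```

Either way the complementarity condition $\mu_t(I_t - \upsilon I_{ss}) = 0$ holds, which is exactly what the pair of equation systems above encodes; in the algorithm each branch is one member of a function triple, and the selection function plays the role of the gate.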
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1); 30 rows of evaluation points $(k_i, \theta_i, \varepsilon_i)$.]
[Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1,1,1); 30 rows of decision rule values $(c_i, I_i, k_i)$.]
[Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1); 30 rows of $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$.]
[Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2,2,2); 30 rows of evaluation points $(k_i, \theta_i, \varepsilon_i)$.]
[Table 21: Occasionally Binding Constraints Values at Evaluation Points, d = (2,2,2); 30 rows of decision rule values.]
[Table 22: Occasionally Binding Constraints Error Approximations, d = (2,2,2); 30 rows of $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$.]
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time $t$ solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000. URL http://www.sciencedirect.com/science/article/B6V85-408BVCF-2/1/217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd Lilia Maliar and Serguei Maliar Numerically stable and accurate stochasticsimulation approaches for solving dynamic economic models Working Papers Serie AD2011-15 Instituto Valenciano de Investigaciones EconAtildeşmicas SA (Ivie) July 2011 URLhttpsideasrepecorgpiviwpasad2011-15html
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right]$$

then with

$$E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z_t(x_{t-1}, \varepsilon)$:

$$\left\{x_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x_t(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}(x_{t-1}, \varepsilon_t)$$
Consequently, the exact solution has a representation given by

$$x_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x_{t+1}(x_{t-1}, \varepsilon)\right] = B x_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[z_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$

with

$$M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
Now consider a proposed solution for the model, $x_t^p = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$ so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e_t^p(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \varepsilon_t)$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z_t^p(x_{t-1}, \varepsilon)$:¹²

$$\left\{x_t^p(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x_t^p(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z_t^p(x_{t-1}, \varepsilon_t) \equiv H_{-1} x_{t-1}^p(x_{t-1}, \varepsilon_t) + H_0 x_t^p(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}^p(x_{t-1}, \varepsilon_t)$$

The proposed solution has a representation given by

$$x_t^p(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+\nu}^p(x_{t-1}, \varepsilon)$$

and

$$E\left[x_{t+1}^p(x_{t-1}, \varepsilon)\right] = B x_t^p + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+1+\nu}^p(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c$$

with

$$e_t^p(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \varepsilon_t)$$

Subtracting the two representations gives

$$x_t(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z_{t+\nu}(x_{t-1}, \varepsilon) - z_{t+\nu}^p(x_{t-1}, \varepsilon)\right)$$

Defining

$$\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t) \equiv z_{t+\nu}(x_{t-1}, \varepsilon) - z_{t+\nu}^p(x_{t-1}, \varepsilon)$$

we have

$$x_t(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t)$$

$$\left\|x_t(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu} \phi\right\| \left\|\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t)\right\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z_t^p$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution therefore leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability weighted averages of the $\Delta z_t^p$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z_{t+k}^p$ are pessimistic bounds for the errors from the conditional expectations path.
$$\left\|\Delta z_t\right\| \le \max_{x_{-}, \varepsilon} \left\|\phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon)\right\|_2$$

$$\left\|x_t(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1} \phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon)\right\|_2$$
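As a numerical illustration, the bound above is easy to evaluate once a linear reference model is in hand. The scalar sketch below uses made-up values for $F$, $\phi$, and the model residuals (none from the paper's calibration) to compute the conservative bound $\|(I-F)^{-1}\phi\| \cdot \max \|e^p\|$:

```python
# Scalar sketch of the global error bound: the weight (I - F)^{-1} phi
# multiplies the largest model residual over the evaluation points.
# F, phi, and the residual list are illustrative assumptions.

def geometric_weight(F, phi):
    """(I - F)^{-1} * phi for scalar F with |F| < 1."""
    assert abs(F) < 1.0, "the series formula requires |F| < 1"
    return phi / (1.0 - F)

def global_error_bound(F, phi, residuals):
    """Conservative bound: weight times the largest residual norm."""
    return abs(geometric_weight(F, phi)) * max(abs(e) for e in residuals)

# hypothetical residuals e^p evaluated at a grid of (x_{t-1}, eps) points
residuals = [1e-4, -3e-4, 2e-4]
bound = global_error_bound(F=0.5, phi=0.9, residuals=residuals)
```

With these toy values the weight is $0.9/(1-0.5) = 1.8$, so the bound is $1.8 \times 3 \times 10^{-4}$.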
$$z_t = H_{-} x_{t-1} + H_0 x_t + H_{+} x_{t+1}$$

$$z_t^p = H_{-} x_{t-1}^p + H_0 x_t^p + H_{+} x_{t+1}^p$$

$$0 = M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$$

$$e_t^p(x_{t-1}, \varepsilon) = M(x_{t-1}^p, x_t^p, x_{t+1}^p, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi \Delta z_t\right\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\left(H_0 (x_t^p - x_t) + H_{+} (x_{t+1}^p - x_{t+1})\right)\right\|_2\right)^2$$
with

$$M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = 0$$

$$\Delta e_t^p(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x_t - x_t^p) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1} - x_{t+1}^p)$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$(x_t - x_t^p) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1} \left(\Delta e_t^p(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1} - x_{t+1}^p)\right)$$
Collecting results, and suppressing the arguments $(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$ of $M$:

$$\left(\left\|\phi \Delta z_t\right\|_2\right)^2 = \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \left(\Delta e_t^p(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x_{t+1} - x_{t+1}^p)\right) + H_{+}(x_{t+1}^p - x_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \Delta e_t^p(x_{t-1}, \varepsilon) - H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \frac{\partial M}{\partial x_{t+1}}(x_{t+1} - x_{t+1}^p) + H_{+}(x_{t+1}^p - x_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \Delta e_t^p(x_{t-1}, \varepsilon) - \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \frac{\partial M}{\partial x_{t+1}} + H_{+}\right)(x_{t+1} - x_{t+1}^p)\right)\right\|_2\right)^2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}$$

produces

$$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\, \Delta e_t^p(x_{t-1}, \varepsilon)\right\|_2\right)^2$$
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \,|\, s_{t+1} = j)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+2} = j)$$

If we know the current state, we can use the known conditional expectation function for that state:

$$E_t(x_{t+1}(x_t) \,|\, s_t = 1), \; \ldots, \; E_t(x_{t+1}(x_t) \,|\, s_t = N)$$
For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \,|\, s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \,|\, s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = N) \end{bmatrix}$$
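The probability-weighted mixing of per-regime expectations above can be sketched in a few lines. The transition matrix and the per-regime conditional expectation functions below are hypothetical stand-ins, not the paper's model:

```python
# Sketch of E_t[x_{t+1}] under regime switching with constant
# transition probabilities p_ij: expectations conditional on the next
# regime are weighted by the row of P for the current regime.

P = [[0.9, 0.1],
     [0.2, 0.8]]  # p_ij = Prob(s_{t+1} = j | s_t = i), rows sum to one

# hypothetical conditional expectation functions, one per regime
regime_rules = [lambda x: 0.95 * x,   # regime 0
                lambda x: 0.90 * x]   # regime 1

def expectation_next(x, current_regime):
    """E_t[x_{t+1}] = sum_j p_ij * E(x_{t+1} | s_{t+1} = j)."""
    return sum(p * rule(x)
               for p, rule in zip(P[current_regime], regime_rules))

ex_next = expectation_next(1.0, 0)  # 0.9*0.95 + 0.1*0.90
```

Iterating this mixing one period further is exactly the $(P \otimes I)$ stacking shown above.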
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates are different, $d_0 > d_1$:

$$\text{Prob}(s_t = j \,|\, s_{t-1} = i) = p_{ij}$$
If $(s_t = 0 \wedge \mu_t > 0 \wedge (k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0)\right)\\
&I_t - (k_t - (1 - d_0)k_{t-1})\\
&\mu_t \left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $(s_t = 0 \wedge \mu_t = 0 \wedge (k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0)\right)\\
&I_t - (k_t - (1 - d_0)k_{t-1})\\
&\mu_t \left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $(s_t = 1 \wedge \mu_t > 0 \wedge (k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1)\right)\\
&I_t - (k_t - (1 - d_1)k_{t-1})\\
&\mu_t \left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $(s_t = 1 \wedge \mu_t = 0 \wedge (k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1)\right)\\
&I_t - (k_t - (1 - d_1)k_{t-1})\\
&\mu_t \left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
C Algorithm Pseudo-code

$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components
$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
$S$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$
$T$: model evaluation points, a Niederreiter sequence in the model variable ergodic region
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, collects the decision rule components
$G$: collects the decision rule conditional expectations components
$\mathcal{H}$: $\{\gamma, G\}$, collects decision rule information
$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon + N_x} \rightarrow \mathbb{R}^{N_x}$
input: $(\mathcal{L}, \mathcal{H}_0, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S, T)$
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option, with default ``normConvTol'' $\rightarrow 10^{-10}$.
2: NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M, d\,|\,(b, c)\}$, S)), $\mathcal{H}_0$, notConvergedAt(v, T))
output: $\mathcal{H}_0, \mathcal{H}_1, \ldots, \mathcal{H}_c$

Algorithm 1: nestInterp
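Algorithm 1's NestWhile pattern can be sketched outside Mathematica as an ordinary fixed-point loop. The `do_interp` step and the test points below are hypothetical placeholders for the real doInterp machinery; only the convergence logic mirrors the pseudo-code:

```python
# Rough Python analogue of Algorithm 1: iterate the interpolation step
# until decision-rule values at the test points change by less than
# NORM_CONV_TOL in sup-norm.

NORM_CONV_TOL = 1e-10

def nest_interp(do_interp, h0, test_points, max_iter=200):
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        # sup-norm of the change at the test points
        diff = max(abs(h_next(p) - h(p)) for p in test_points)
        h = h_next
        if diff < NORM_CONV_TOL:
            break
    return h

# toy "interpolation" step: a contraction whose fixed rule is p -> 2p
def do_interp(h):
    return lambda p: 0.5 * (h(p) + 2.0 * p)

rule = nest_interp(do_interp, lambda p: 0.0, test_points=[0.5, 1.0])
```

The toy contraction halves the distance to its fixed rule each pass, so the loop terminates well inside the iteration cap.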
input: $(\mathcal{L}, \mathcal{H}_i, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1: Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$, one of the model equation systems, and subsequently these functions are used to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs($\mathcal{L}$, $\mathcal{H}$, $\kappa$, $\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}$)
3: $\mathcal{H}_{i+1}$ = makeInterpFuncs(theFindRootFuncs, S)
output: $\mathcal{H}_{i+1}$

Algorithm 2: doInterp

input: $(\mathcal{L}, \mathcal{H}, \kappa, \{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, $\{$v[1], genFindRootWorker($\mathcal{L}$, $\mathcal{H}$, $\kappa$, v[2]), v[3]$\}$), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples, d

Algorithm 3: genFindRootFuncs

input: $(\mathcal{L}, G, \kappa, M)$
1: Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2: $\mathcal{Z} = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($\mathcal{L}$, $x_t^p$, G)
3: modEqnArgsFunc = genLilXkZkFunc($\mathcal{L}$, $\mathcal{Z}$)
4: $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}$, $\varepsilon_t$, $z_t$)
5: funcOfXtZt = FindRoot($(x_t^p, z_t)$, $\{M(\mathcal{X}), x_t - x_t^p\}$)
6: funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: $(x_t^p, \mathcal{L}, G, \kappa, M)$
1: Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (empty list for $\kappa = 1$).
2: $X_t = x_t^p$
3: for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4:   $\{X_{t+i}, Z_{t+i}\}$ = G($Z_{t+i-1}$)
5: end
6: $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot($\mathcal{L}$, $x_t^p$, G)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$

Algorithm 5: genZsForFindRoot

input: $(\mathcal{L} = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: $\gamma(x, \varepsilon) = \begin{bmatrix} B x + (I - F)^{-1} \phi \psi_c + \psi_\varepsilon \varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3: $G(x) = \begin{bmatrix} B x + (I - F)^{-1} \phi \psi_c \\ 0_{N_x} \end{bmatrix}$
4: $\mathcal{H} = \{\gamma, G\}$
output: $\mathcal{H}$

Algorithm 6: genBothX0Z0Funcs

input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: fCon = fSumC($\phi$, $F$, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3: xtVals = genXtOfXtm1($\mathcal{L}$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
4: xtp1Vals = genXtp1OfXt($\mathcal{L}$, xtVals, fCon)
5: fullVec = $\{$xtm1Vars, xtVals, xtp1Vals, epsVars$\}$
output: fullVec

Algorithm 7: genLilXkZkFunc

input: $(\mathcal{L}, x_{t-1}, \varepsilon_t, z_t, \text{fCon})$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_t = B x_{t-1} + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t + \phi \psi_z z_t + F \,\text{fCon}$
output: $x_t$

Algorithm 8: genXtOfXtm1
input: $(\mathcal{L}, x_{t-1}, \text{fCon})$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_t = B x_{t-1} + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t + \phi \psi_z z_t + \text{fCon}$
output: $x_t$

Algorithm 9: genXtp1OfXt

input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Numerically sums the $F$ contribution for the series.
2: $S = \sum_{\nu} F^{\nu-1} \phi Z_{t+\nu}$
output: $S$

Algorithm 10: fSumC

input: $(\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
2: interpData = genInterpData($\{C_1, \ldots, C_M, d\,|\,(b, c)\}$, S)
3: $\{\gamma, G\}$ = interpDataToFunc(interpData, S)
output: $\{\gamma, G\}$

Algorithm 11: makeInterpFuncs

input: $(\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}, S)$
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
output: $\{\gamma, G\}$

Algorithm 12: genInterpData

input: $(C(x_{t-1}, \varepsilon_t))$
1: Evaluate the model equations, obeying the pre and post conditions.
output: $x_t$ or failed

Algorithm 13: evaluateTriple
input: $(V, S)$
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$

Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 19: genSolutionPath
4.2 Approximating Local Decision Rule Errors

Given a proposed model solution, define the error in $x_t$ at $(x_{t-1}, \varepsilon_t)$:

$$e(x_{t-1}, \varepsilon_t) \equiv x(x_{t-1}, \varepsilon_t) - x^p(x_{t-1}, \varepsilon_t)$$

Compute the conditional expectations functions for the decision rule and use them to generate a path to use with a linear reference model $\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$ in equation (2) to approximate $e$:

$$X^p(x) \equiv E\left[x^p(x, \varepsilon)\right], \qquad x_{t+k+1} = X^p(x_{t+k}), \quad k = 1, \ldots, K$$

We can define functions $z_t^p$ by

$$z_t^p \equiv M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t), \qquad z_{t+k}^p \equiv M(x_{t+k-1}, x_{t+k}, x_{t+k+1}, 0)$$

$$e(x_{t-1}, \varepsilon_t) \approx \sum_{\nu=0}^{K} F^{\nu} \phi\, z_{t+\nu}^p$$
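For a scalar model the truncated series above is a few lines of arithmetic. The residual path below is a hypothetical input, not output from the paper's RBC example:

```python
# Sketch of the truncated local error approximation
# e(x_{t-1}, eps_t) ~= sum_{nu=0}^{K} F^nu * phi * z^p_{t+nu}:
# residuals along the conditional-expectations path are discounted
# by powers of F.

def local_error(F, phi, z_path):
    """Truncated series with K + 1 = len(z_path) terms."""
    return sum((F ** nu) * phi * z for nu, z in enumerate(z_path))

# hypothetical residuals z^p along the expectations path (K = 3)
z_path = [2e-3, 1e-3, 5e-4, 2.5e-4]
e_approx = local_error(F=0.5, phi=1.0, z_path=z_path)
```

Because $|F| < 1$, the omitted tail is geometrically small, which is the sense in which larger $K$ reduces truncation error.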
4.3 Assessing the Error Approximation

This section reports on the results of an experiment evaluating how well the formula performs. I compute the known decision rule for the RBC model with parameters $\alpha = 0.36$, $\delta = 0.95$, $\eta = 1$, $\rho = 0.95$, $\sigma = 0.01$. I consider this the fixed "true" model.

I generate 10 other settings for the model parameters by choosing parameters from the following ranges:

$$0.20 < \alpha < 0.40, \quad 0.85 < \delta < 0.99, \quad 0.85 < \rho < 0.99, \quad 0.005 < \sigma < 0.015$$

producing the following 10 model specifications:

α        δ         η   ρ         σ
0.35     0.92875   1   0.957188  0.00875
0.275    0.9725    1   0.906875  0.01375
0.375    0.8675    1   0.924375  0.01125
0.225    0.94625   1   0.891563  0.00625
0.325    0.91125   1   0.944063  0.006875
0.2625   0.922187  1   0.98125   0.011875
0.3625   0.887187  1   0.85875   0.014375
0.2125   0.965938  1   0.965938  0.009375
0.3125   0.860938  1   0.878438  0.008125
0.2375   0.904688  1   0.933125  0.013125
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models $\mathcal{L}_i$ and 10 decision rules based on the linearization. As a result, I have $100 = 10 \times 10$ combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points $(k_{t-1}^i, \theta_{t-1}^i, \varepsilon_t^i)$ from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:
$$e_i = \alpha + \beta \hat{e}_i + u_i$$
Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.

Figures 6 and 7 present the 100 $R^2$ values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of $K$, the number of terms in the error formula ($K = 0, 1, 10, 30$).¹⁰ The regressions with $K = 0$ correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with $K = 0$, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The $R^2$ values are all above 60 percent and are often near one. The $\beta$ coefficients are generally close to one, and the constant tends to be close to zero.

¹⁰I don't display results for $\theta_t$ because the formula predicts the error in $\theta_t$ perfectly, since it is determined by a backward looking equation.
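The accuracy check itself is an ordinary least-squares fit of actual on predicted errors. A minimal sketch, with made-up data standing in for the experiment's 30 points:

```python
# Regress actual errors on the errors predicted by the series formula,
# e_i = alpha + beta * ehat_i + u_i. An accurate formula yields beta
# near one and alpha near zero. The data are illustrative only.

def ols(x, y):
    """Intercept and slope of a simple least-squares regression."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta

predicted = [0.001, 0.002, 0.003, 0.004]          # hypothetical ehat_i
actual = [0.0011, 0.0019, 0.0031, 0.0040]         # hypothetical e_i
alpha, beta = ols(predicted, actual)              # near (0, 1)
```

In the experiment this regression is run once per decision-rule/reference-model pairing, giving the 100 $(\alpha, \beta, R^2)$ triples plotted in Figures 6 through 9.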
[Figure 6: $c_t$ $(\sigma^2, R^2)$. Four panels plot $R^2$ against the regression variance $\sigma^2$ for the $c_t$ error regressions, $K = 0, 1, 10, 30$.]
[Figure 7: $k_t$ $(\sigma^2, R^2)$. Four panels plot $R^2$ against the regression variance $\sigma^2$ for the $k_t$ error regressions, $K = 0, 1, 10, 30$.]
[Figure 8: $c_t$ $(\alpha, \beta)$. Four panels plot the estimated regression coefficients $\beta$ against $\alpha$ for the $c_t$ error regressions, $K = 0, 1, 10, 30$.]
[Figure 9: $k_t$ $(\alpha, \beta)$. Four panels plot the estimated regression coefficients $\beta$ against $\alpha$ for the $k_t$ error regressions, $K = 0, 1, 10, 30$.]
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given $(x_{t-1}, \varepsilon_t)$, this collection of equation systems produces a unique solution for $x_t$. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions $x_t = x(x_{t-1}, \varepsilon)$. Given a decision rule $x(x_{t-1}, \varepsilon)$, denote its conditional expectation $X(x_{t-1}) \equiv E\left[x(x_{t-1}, \varepsilon)\right]$. Using the convention described in section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

$$\left\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\right\}$$

where each

$$C_i \equiv \left(b_i(x_{t-1}, \varepsilon_t),\; M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\; c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\right)$$
represents a set of model equations $M_i$ with Boolean valued gate functions $b_i$, $c_i$. In the algorithm's implementation, the $b_i$ will be a Boolean function precondition and the $c_i$ will be a Boolean function post-condition for a given equation system. If some $b_i$ is false, then there is no need to try to solve system $i$. If $c_i$ is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The $d$ will be a function for choosing among the candidate $x_t(x_{t-1}, \varepsilon_t)$ values. The function $d$ uses the truth values from the $(b, c)$ and the possible values of the state variables to select one and only one $x_t$. Thus, we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given $(x_{t-1}, \varepsilon_t)$.
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
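The $(b_i, M_i, c_i)$ organization can be sketched directly. The two branches below are a stylized stand-in for an occasionally binding constraint, not the paper's RBC system; all names and values are hypothetical:

```python
# Sketch of {C_1, ..., C_M, d}: each system carries a precondition b_i,
# equations M_i (solved in closed form here), and a postcondition c_i.
# The selector d keeps the unique admissible x_t.

LOWER_BOUND = 0.0  # hypothetical bound on x_t

def m_interior(state, eps):
    return 0.9 * state + eps         # unconstrained "M_i" solution

def c_interior(x):
    return x >= LOWER_BOUND          # post-condition: admissible?

def m_bound(state, eps):
    return LOWER_BOUND               # constraint binds

def c_bound(x):
    return True

# triples (b_i, M_i, c_i); preconditions trivially true in this sketch
SYSTEMS = [(lambda s, e: True, m_interior, c_interior),
           (lambda s, e: True, m_bound, c_bound)]

def d(state, eps):
    """Selector: first candidate whose pre and post conditions hold."""
    for b, M, c in SYSTEMS:
        if b(state, eps):
            x = M(state, eps)
            if c(x):
                return x
    raise ValueError("no admissible branch")
```

The pre/post structure is what makes the candidate solves independent, and hence parallelizable, as noted above.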
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time $t$ value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables $x_t(x_{t-1}, \varepsilon_t)$ as well as the new $z_t(x_{t-1}, \varepsilon_t)$ variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al., 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule $x(x_{t-1}, \varepsilon)$ and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al., 2017b), so that the time $t$ computations are all deterministic.

There are a number of new steps. One must specify a linear reference model. One must also choose a value for $K$, the number of terms in the series representation, and there are trade-offs. Fewer terms means more truncation error. More terms correspond to more accuracy when the decision rule is accurate, but more terms are computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing $K$ can improve the quality of the solution. If the grid is too sparse, increasing $K$ will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

$$M(x_{t-1}, x(x_{t-1}, \varepsilon_t), X(x(x_{t-1}, \varepsilon_t)), \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$

Given a proposed model solution

$$x_t = x^p(x_{t-1}, \varepsilon_t)$$

compute

$$X^p(x_{t-1}) \equiv E\left[x^p(x_{t-1}, \varepsilon_t)\right]$$

We will use a linear reference model $\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$ to construct a series of $z^p$ functions that help improve the accuracy of the proposed solution. We can define functions $z^p$, $Z^p$ by

$$z^p(x_{t-1}, \varepsilon) \equiv H \begin{bmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x(x_{t-1}, \varepsilon)) \end{bmatrix} + \psi_\varepsilon \varepsilon_t + \psi_c$$

$$Z^p(x_{t-1}) \equiv E\left[z^p(x_{t-1}, \varepsilon)\right]$$
- Algorithm loop begins here. Define conditional expectations paths for $x_t$, $z_t$:

$$x_{t+k+1} = X(x_{t+k}), \quad z_{t+k+1} = Z(x_{t+k}) \quad \forall k \ge 0$$

- Using the $z^p(x_{t-1}, \varepsilon)$ series, we get expressions for $x_t$, $E[x_{t+1}]$ consistent with $\mathcal{L}$ and the conditional expectations path:

$$X_t = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi Z(x_{t+\nu})$$

$$E[X_{t+1}] = B X_t + (I - F)^{-1} \phi \psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1} \phi Z(x_{t+\nu}) \quad \forall t, k \ge 0$$

- Use the model equations $M(x_{t-1}, x_t, E[x_{t+1}], \varepsilon) = 0$ and $x_t(x_{t-1}, \varepsilon_t) = X_t(x_{t-1}, \varepsilon_t)$ to determine $x^{p\prime}(x_{t-1}, \varepsilon)$, $z^{p\prime}(x_{t-1}, \varepsilon)$.

- Set $x^p(x_{t-1}, \varepsilon) = x^{p\prime}(x_{t-1}, \varepsilon)$ and $z^p(x_{t-1}, \varepsilon) = z^{p\prime}(x_{t-1}, \varepsilon)$. Repeat loop.
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more

$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components
$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
$S$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$
$T$: model evaluation points, a Niederreiter sequence in the model variable ergodic region
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, collects the decision rule components
$G$: collects the decision rule conditional expectations components
$\mathcal{H}$: $\{\gamma, G\}$, collects decision rule information
$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon + N_x} \rightarrow \mathbb{R}^{N_x}$

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
└─ doInterp
   ├─ genFindRootFuncs
   │  └─ genFindRootWorker
   │     ├─ genZsForFindRoot
   │     └─ genLilXkZkFunc
   │        ├─ fSumC
   │        ├─ genXtOfXtm1
   │        └─ genXtp1OfXt
   └─ makeInterpFuncs
      ├─ genInterpData
      │  └─ evaluateTriple
      └─ interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp: Symbolically generates functions that return a unique $x_t$ for any value of $(x_{t-1}, \varepsilon_t)$ for the family of model equations $\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)\}$, and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs: The function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$; the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$. This routine provides the information for constructing $x_t(x_{t-1}, \varepsilon_t)$ using the series expression (2).

makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = Log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and to compare the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of $k_t$ and $\theta_t$ resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11: two scatter panels, each titled "Ergodic Values for K and θ"; left panel, the simulated (k_t, θ_t) values; right panel, the transformed variables]

Figure 11 The Ergodic Values for k_t and θ_t
X̄ = (0.18853, 1.00466, 0)

σ_X = (0.0112443, 0.0387807, 0.01)

V =
  −0.707107  −0.707107  0
  −0.707107   0.707107  0
   0          0         1

max U = (3.02524, 0.199124, 3)

min U = (−3.16879, −0.17629, −3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
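The log-absolute-error comparisons reported in Tables 4 and 5 can be sketched in miniature. The sketch below assumes, for the sake of an exact benchmark, the textbook closed form available under log utility with full depreciation (d = 1): k_t = αδθ_t k_{t−1}^α. The full-depreciation assumption and the grid are ours, not the paper's.

```python
import numpy as np

# Closed-form decision rule for eta = 1 (log utility) with full depreciation
# (d = 1, an assumption made here so an exact benchmark is available):
#   k_t = alpha * delta * theta_t * k_{t-1}^alpha
alpha, delta = 0.36, 0.95

def k_exact(k_prev, theta):
    return alpha * delta * theta * k_prev ** alpha

# Fit a log-linear rule on a grid; since the exact rule is linear in logs,
# absolute errors at off-grid test points should be near machine precision.
grid = np.linspace(0.1, 0.3, 25)
coef = np.polyfit(np.log(grid), np.log(k_exact(grid, 1.0)), 1)
k_approx = lambda k_prev, theta: theta * np.exp(np.polyval(coef, np.log(k_prev)))

test_pts = np.linspace(0.12, 0.28, 7)
abs_err = np.abs(k_approx(test_pts, 1.0) - k_exact(test_pts, 1.0))
log10_err = np.log10(np.maximum(abs_err, 1e-300))  # guard against exact zeros
```

The tables report exactly this kind of quantity: the log of the absolute gap between a proposed decision rule and the known rule, evaluated point by point.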
[Table body: columns k_i, θ_i, ε_i for i = 1, …, 30]
Table 2 RBC Known Solution Model Evaluation Points d=(111)
[Table body: columns c_i, k_i, θ_i for i = 1, …, 30]
Table 3 RBC Known Solution Values at Evaluation Points d=(111)
[Table body: columns ln(|e_{c,i}|), ln(|e_{k,i}|), ln(|e_{θ,i}|) for i = 1, …, 30]
Table 4 RBC Known Solution Actual Errors d=(111)
[Table body: columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for i = 1, …, 30]
Table 5 RBC Known Solution Error Approximations d=(111)
[Table body: columns k_i, θ_i, ε_i for i = 1, …, 30]
Table 6 RBC Known Solution Model Evaluation Points d=(222)
[Table body: columns c_i, k_i, θ_i for i = 1, …, 30]
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
[Table body: columns ln(|e_{c,i}|), ln(|e_{k,i}|), ln(|e_{θ,i}|) for i = 1, …, 30]

Table 8 RBC Known Solution Actual Errors d=(222)
[Table body: columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for i = 1, …, 30]

Table 9 RBC Known Solution Error Approximations d=(222)
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  −6.51367    −12.2771    −16.8947
0          −6.0793     −6.26833    −6.29843
1          −6.00805    −6.24202    −6.49835
2          −6.52219    −7.43876    −7.04785
3          −6.57578    −8.03864    −8.32589
4          −6.62831    −9.1522     −9.49314
5          −6.77305    −10.5516    −10.4247
6          −6.38499    −11.6399    −11.455
7          −6.63849    −11.3627    −13.0213
8          −6.38735    −11.7288    −13.9976
9          −6.46759    −11.9826    −15.3372
10         −6.45966    −12.1345    −15.4157
10         −6.77776    −11.5025    −15.8211
15         −6.67778    −12.4593    −15.8899
20         −6.40802    −11.7296    −17.1652
25         −6.53265    −12.1441    −17.1518
30         −6.81949    −11.6097    −16.3748
35         −6.33508    −12.1446    −16.6963
40         −6.52442    −11.4042    −15.6302
Table 10 RBC Known Solution Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table body: columns k_i, θ_i, ε_i for i = 1, …, 30]
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
[Table body: columns c_i, k_i, θ_i for i = 1, …, 30]
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
[Table body: columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for i = 1, …, 30]
Table 13 RBC Approximated Solution Error Approximations d=(111)
[Table body: columns k_i, θ_i, ε_i for i = 1, …, 30]
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
[Table body: columns c_i, k_i, θ_i for i = 1, …, 30]
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
[Table body: columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for i = 1, …, 30]

Table 16 RBC Approximated Solution Error Approximations d=(222)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1 − η)
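The laws of motion above can be simulated directly once a decision rule is in hand. The sketch below is a minimal Python illustration: the parameter values and the fixed saving rate (standing in for the true decision rule) are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

# Forward simulation of the RBC laws of motion; parameters are illustrative.
alpha, d, rho, sigma = 0.36, 0.025, 0.95, 0.01
s = 0.2                      # fixed saving rate, a stand-in decision rule
rng = np.random.default_rng(1)

T = 200
theta = np.empty(T); k = np.empty(T); c = np.empty(T)
theta_prev, k_prev = 1.0, 0.19
for t in range(T):
    theta[t] = theta_prev ** rho * np.exp(sigma * rng.standard_normal())
    y = theta[t] * k_prev ** alpha        # theta_t * f(k_{t-1}), f(k) = k^alpha
    k[t] = (1.0 - d) * k_prev + s * y     # investment = s * y
    c[t] = (1.0 - d) * k_prev + y - k[t]  # budget: c_t + k_t = (1-d)k_{t-1} + y
    theta_prev, k_prev = theta[t], k[t]
```

With c_t computed as the residual of the resource constraint, consumption equals (1 − s)·y each period, so the path stays strictly positive by construction.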
L = u(c_t) + E_t [ Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1}))) + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss) ]
so that we can impose

I_t = k_t − (1 − d) k_{t−1} ≥ υ I_ss.

The first-order conditions (with each expression set to zero) become

1/c_t^η + λ_t

λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1}
For example, the Euler equations for the neoclassical growth model can be written as

11The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and k_t − (1 − d) k_{t−1} − υ I_ss = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε_t}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)

if (μ_t = 0 and k_t − (1 − d) k_{t−1} − υ I_ss ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε_t}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
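The guess-and-verify logic implicit in the two regimes above can be sketched with a stylized one-period analogue. The sketch assumes a hypothetical unconstrained target I = s·y and a floor i_min standing in for υ·I_ss; both are illustrative, not the model's actual equations.

```python
# Stylized one-period analogue of the two regimes: solve the mu = 0 branch
# first; if the candidate violates the floor, switch to the binding branch
# where I equals the floor and the multiplier mu turns positive. The saving
# rate s and the floor i_min are hypothetical stand-ins.

def solve_obc(y, s, i_min):
    """Return (investment, multiplier, regime) for output y."""
    i_unc = s * y                    # candidate from the mu = 0 branch
    if i_unc >= i_min:
        return i_unc, 0.0, "slack"   # constraint not binding
    mu = i_min - i_unc               # stylized shadow value of the floor
    return i_min, mu, "binding"

slack = solve_obc(y=1.0, s=0.3, i_min=0.2)    # 0.3 >= 0.2 -> mu = 0
binding = solve_obc(y=1.0, s=0.1, i_min=0.2)  # 0.1 < 0.2 -> floor binds
```

In each regime exactly one side of the complementarity condition μ_t · (I_t − υI_ss) = 0 is active, which is what the Boolean gate values in genFindRootFuncs select between.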
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table body: columns k_i, θ_i, ε_i for i = 1, …, 30]
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
[Table body: columns c_i, I_i, k_i for i = 1, …, 30]

Table 18 Occasionally Binding Constraints Values at Evaluation Points d=(111)
[Table body: columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for i = 1, …, 30]
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
[Table body: columns k_i, θ_i, ε_i for i = 1, …, 30]
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
[Table body: columns c_i, I_i, k_i for i = 1, …, 30]

Table 21 Occasionally Binding Constraints Values at Evaluation Points d=(222)
[Table body: columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for i = 1, …, 30]
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L Judd Solving large scale rational expectations models TechnicalReport 0207 National Bureau of Economic Research Inc February 1997 URL filePt0207pdf available at httpideasrepecorgpnbrnberte0207html
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A Peralta-Alva and Manuel Santos Schmedders K and Judd K volume 3 chapter 9 pages517ndash556 Elsevier Science 2014
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
By Manuel S Santos and Adrian Peralta-alva Accuracy of simulations for stochastic dy-namic models Econometrica 73(6)1939ndash1976 11 2005 ISSN 1468-0262 doi 101111j1468-0262200500642x URL httphttpsdoiorg101111j1468-0262200500642x
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
55
A  Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function
$$G(x) \equiv E\left[g(x, \varepsilon)\right],$$
then with
$$E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$
we have
$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^*_t(x_{t-1}, \varepsilon)$:
$$\left\{ x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$
$$z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t).$$
Consequently, the exact solution has a representation given by
$$x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^*_{t+\nu}(x_{t-1}, \varepsilon)$$
and
$$E\left[x^*_{t+1}(x_{t-1}, \varepsilon)\right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[z^*_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$
with
$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$ so that
$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²
$$\left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x^p_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$
$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t).$$
The proposed solution has a representation given by
$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$
and
$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c$$
with
$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$
Differencing the two representations gives
$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, (z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)).$$
Defining
$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon),$$
we have
$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$
$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \sum_{\nu=0}^{\infty} \|F^{\nu} \phi\|\, \|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\|_\infty.$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly; the error associated with the proposed solution therefore leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_{-}, \varepsilon} \left\|\phi\, M(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon)\right\|_2$$
$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1} \phi\, M(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon)\right\|_2$$
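To fix ideas, the bound can be computed directly in a scalar analogue (the values of F, φ, and the residual bound below are hypothetical, not taken from the paper): with $|F| < 1$, the series $\sum_\nu |F|^\nu |\phi|\,\overline{\Delta z}$ collapses to the closed form $(1-F)^{-1}|\phi|\,\overline{\Delta z}$.

```python
# Scalar sketch of the error bound (hypothetical numbers, not the paper's):
#   |x*_t - x^p_t| <= sum_v |F|^v |phi| * max|dz| = |phi| * max|dz| / (1 - |F|)
F = 0.8          # stable linear-reference-model coefficient, |F| < 1
phi = 0.5
max_dz = 1e-4    # largest model-equation residual of the proposed rule

# The truncated series converges to the closed form (I - F)^{-1} phi * max|dz|
bound_series = sum(abs(F) ** v * abs(phi) * max_dz for v in range(200))
bound_closed = abs(phi) * max_dz / (1.0 - abs(F))

print(bound_series)   # ~2.5e-4
print(bound_closed)   # ~2.5e-4
```

The closed form shows why a stable linear reference model (eigenvalues of F inside the unit circle) is essential: the factor $(1-|F|)^{-1}$ stays finite.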
$$z^*_t = H_{-} x_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}$$
$$z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}$$
$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t)$$
$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$
$$\max_{x_{t-1}, \varepsilon_t} (\|\phi\, \Delta z_t\|_2)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\left(H_0 (x^p_t - x^*_t) + H_{+} (x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$
with
$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0.$$
$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})$$
When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular we can write
$$(x^*_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right).$$
Collecting results,
$$\begin{aligned}
(\|\phi\, \Delta z_t\|_2)^2 &= \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2 \\
&= \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1}) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2 \\
&= \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} + H_{+}\right)(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2
\end{aligned}$$
Approximating the derivatives using the $H$ matrices,
$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+},$$
produces
$$\max_{x_{t-1}, \varepsilon_t} (\|\phi\, \Delta e^p_t(x_{t-1}, \varepsilon)\|_2)^2.$$
B  A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:
$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+1}(x_t)\,|\,s_{t+1} = j)$$
$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+2}(x_{t+1})\,|\,s_{t+2} = j)$$
If we know the current state, we can use the known conditional expectation function for the state:
$$E_t(x_{t+1}(x_t)\,|\,s_t = 1), \;\ldots,\; E_t(x_{t+1}(x_t)\,|\,s_t = N).$$
For the next state we can compute expectations if the errors are independent:
$$\begin{bmatrix} E_t(x_{t+2}(x_t)\,|\,s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t)\,|\,s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1})\,|\,s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1})\,|\,s_{t+1} = N) \end{bmatrix}$$
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates are different, $d_0 > d_1$, and
$$\mathrm{Prob}(s_t = j\,|\,s_{t-1} = i) = p_{ij}.$$
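The composition of regime-conditional expectations above can be sketched numerically; the two scalar regime rules and the transition matrix below are hypothetical stand-ins, with constant transition probabilities as the text assumes.

```python
# Sketch of composing conditional expectations under regime switching
# (hypothetical scalar rules; transition probabilities do not depend on
# the state variables, as in the text).
p = [[0.9, 0.1],
     [0.2, 0.8]]                      # p[i][j] = Prob(s_{t+1} = j | s_t = i)

# Regime-conditional expectation functions X_j(x) = E_t[x_{t+1} | s_{t+1} = j]
X = [lambda x: 0.5 * x,               # regime 0
     lambda x: 0.7 * x + 0.1]         # regime 1

def exp_one_step(i, x):
    """E_t[x_{t+1}] given current regime i and state x."""
    return sum(p[i][j] * X[j](x) for j in range(2))

def exp_two_step(i, x):
    """E_t[x_{t+2}]: probability-weight the regime-conditional expectations
    evaluated at the regime-conditional x_{t+1} values, as in (P kron I)."""
    total = 0.0
    for j in range(2):                # regime at t+1
        x_next = X[j](x)
        for k in range(2):            # regime at t+2
            total += p[i][j] * p[j][k] * X[k](x_next)
    return total

print(exp_one_step(0, 1.0))           # 0.53
print(exp_two_step(0, 1.0))           # 0.3038
```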
If $(s_t = 0 \,\wedge\, \mu_t > 0 \,\wedge\, (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) = 0)$:
$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}\\
&\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \mu_{t+1}\delta(1-d_0))\\
&I_t - (k_t - (1-d_0)k_{t-1})\\
&\mu_t (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

If $(s_t = 0 \,\wedge\, \mu_t = 0 \,\wedge\, (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \ge 0)$:
$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}\\
&\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \mu_{t+1}\delta(1-d_0))\\
&I_t - (k_t - (1-d_0)k_{t-1})\\
&\mu_t (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

If $(s_t = 1 \,\wedge\, \mu_t > 0 \,\wedge\, (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) = 0)$:
$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}\\
&\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \mu_{t+1}\delta(1-d_1))\\
&I_t - (k_t - (1-d_1)k_{t-1})\\
&\mu_t (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

If $(s_t = 1 \,\wedge\, \mu_t = 0 \,\wedge\, (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \ge 0)$:
$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}\\
&\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \mu_{t+1}\delta(1-d_1))\\
&I_t - (k_t - (1-d_1)k_{t-1})\\
&\mu_t (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$
C  Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})}, collects the decision rule conditional expectations components
H: {γ, G}, collects decision rule information
{C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x+N_ε} → R^{N_x}

input: (L, H_0, κ, {C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (``normConvTol'' → 10^{-10} by default).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
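The outer loop of Algorithm 1 can be sketched as follows; the update map standing in for doInterp and the test points are hypothetical, but the $10^{-10}$ convergence tolerance matches the text.

```python
# Sketch of the outer loop of Algorithm 1: iterate the interpolation map
# until decision-rule values at the test points stop changing.  A toy
# contraction stands in for doInterp; test points T stand in for the
# Niederreiter sequence.
NORM_CONV_TOL = 1e-10                 # ``normConvTol'' default from the text

def do_interp(rule):
    """Stand-in for doInterp: returns an updated decision rule function."""
    return lambda x: 0.5 * rule(x) + 0.25 * x   # hypothetical update map

def nest_interp(rule0, T, max_iter=200):
    rule = rule0
    for _ in range(max_iter):
        new_rule = do_interp(rule)
        # convergence check: largest change at the test points
        diff = max(abs(new_rule(x) - rule(x)) for x in T)
        rule = new_rule
        if diff < NORM_CONV_TOL:
            break
    return rule

T = [0.1 * k for k in range(1, 11)]   # hypothetical test points
rule = nest_interp(lambda x: 0.0, T)
print(rule(1.0))                      # fixed point of r = 0.5 r + 0.25 x, i.e. ~0.5
```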
input: (L, H_i, κ, {C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples, d

Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t)
2: Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1)
2: X_t = x^p_t
   for i = 1; i < κ − 1; i++ do
3:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
4: end
5: Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})}
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function
2: γ(x, ε) = [B x + (I − F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x}]
3: G(x, ε) = [B x + (I − F)^{-1} φ ψ_c ; 0_{N_x}]
4: H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
   xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
   xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
   fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc

input: (L, x_{t−1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2: x_t = B x_{t−1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2: x_{t+1} = B x_t + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sums the F contribution for the series
2: S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
input: ({C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
   {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing the interpolation data used to construct the decision rule and conditional expectation functions.
output: interpData

Algorithm 12: genInterpData

input: (C, (x_{t−1}, ε_t))
1: Evaluate the model equations obeying pre and post conditions
output: x_t, or failed

Algorithm 13: evaluateTriple

input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for ``Anticipated Shocks''
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
- Assessing x_t Errors
  - Truncation Error
  - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
  - Algorithm Overview
    - Single Equation System Case
    - Multiple Equation System Case
  - Approximating a Known Solution: η = 1, U(c) = Log(c)
  - Approximating an Unknown Solution: η = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
I linearized each of the ten models at their respective stochastic steady states, producing 10 linear reference models L_i and 10 decision rules based on the linearization. As a result, I have 100 = 10 × 10 combinations of (inaccurate by design) decision rule functions and linear reference models to use in the error formula experiments.

I generate 30 representative points (k_{i,t-1}, θ_{i,t-1}, ε_{it}) from the ergodic set for the fixed true reference model using the Niederreiter quasi-random algorithm. For each of the 100 pairings of decision rule and linear reference model, I compute the error approximation at each of the 30 representative points and regress the actual error, computed using the known analytic solution, on the predicted error from the error formula:
$$e_i = \alpha + \beta \hat{e}_i + u_i$$
Although the true model and the true decision rule stay fixed in the experiment, the proposed decision rules and the linear reference model both change.
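The validation regression can be sketched on synthetic data (hypothetical numbers, not the paper's actual errors): generate predicted errors, add noise to form "actual" errors, and recover α, β, and R² by ordinary least squares.

```python
# Sketch of the validation regression e_i = alpha + beta * ehat_i + u_i
# on synthetic data (not the paper's): regress actual errors on the
# formula's predicted errors and report alpha, beta, R^2.
import random

random.seed(0)
ehat = [random.uniform(-1e-3, 1e-3) for _ in range(30)]   # predicted errors
e = [1.02 * p + random.gauss(0.0, 1e-5) for p in ehat]    # "actual" errors

n = len(e)
mx = sum(ehat) / n
my = sum(e) / n
sxx = sum((x - mx) ** 2 for x in ehat)
sxy = sum((x - mx) * (y - my) for x, y in zip(ehat, e))
beta = sxy / sxx                         # OLS slope
alpha = my - beta * mx                   # OLS intercept
ss_res = sum((y - alpha - beta * x) ** 2 for x, y in zip(ehat, e))
ss_tot = sum((y - my) ** 2 for y in e)
r2 = 1.0 - ss_res / ss_tot

print(beta, alpha, r2)   # beta near 1, alpha near 0, R^2 near 1
```

An accurate error formula shows up exactly as in the paper's figures: β close to one, α close to zero, and R² near one.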
Figures 6 and 7 present the 100 R² values and estimated variances, and Figures 8 and 9 present the 100 resulting regression coefficients, for a variety of values of K, the number of terms in the error formula (K = 0, 1, 10, 30).¹⁰ The regressions with K = 0 correspond to the maximand of the global error formula. The figures show that the formula does a good job of predicting the error produced by the inaccurate decision rule functions. Even with K = 0, using only one term in the error formula still provides useful information about the magnitude of the discrepancy between the proposed decision rule values and the true values. The R² values are all above 60 percent and are often near one. The β coefficients are generally close to one and the constant tends to be close to zero.

¹⁰I don't display results for θ_t because the formula predicts the error in θ_t perfectly, since it is determined by a backward looking equation.
[Figure: four scatter panels plotting R² against the regression error variance σ² for the c error regressions, K = 0, 1, 10, 30.]

Figure 6: c_t (σ², R²)
[Figure: four scatter panels plotting R² against the regression error variance σ² for the k error regressions, K = 0, 1, 10, 30.]

Figure 7: k_t (σ², R²)
[Figure: four scatter panels plotting the regression coefficients β against α for the c error regressions, K = 0, 1, 10, 30.]

Figure 8: c_t (α, β)
[Figure: four scatter panels plotting the regression coefficients β against α for the k error regressions, K = 0, 1, 10, 30.]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given (x_{t-1}, ε_t) this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form
$$\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c)$$
where each
$$C_i \equiv (b_i(x_{t-1}, \varepsilon_t),\; M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\; c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]))$$
represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
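The triple-plus-selector structure can be sketched as follows; the binding/slack pair of systems, the transition rule, and the constraint level are all hypothetical stand-ins, not the paper's model.

```python
# Sketch of the gatekeeper logic: each candidate system is a triple
# (b, solve, c) of precondition, solver, and post-condition, and d
# selects the unique acceptable x_t.  The bind/no-bind pair below is a
# hypothetical occasionally-binding-constraint example.
X_BAR = 1.0    # hypothetical constraint level

def b_bind(x_prev, eps):  return True
def solve_bind(x_prev, eps):  return X_BAR                  # constraint binds
def c_bind(x_prev, eps, x_t):  return 0.9 * x_prev + eps >= X_BAR

def b_slack(x_prev, eps):  return True
def solve_slack(x_prev, eps):  return 0.9 * x_prev + eps    # interior solution
def c_slack(x_prev, eps, x_t):  return x_t < X_BAR

systems = [(b_bind, solve_bind, c_bind), (b_slack, solve_slack, c_slack)]

def d(x_prev, eps):
    """Return the unique x_t passing its system's pre- and post-conditions."""
    candidates = []
    for b, solve, c in systems:
        if not b(x_prev, eps):        # precondition false: skip this system
            continue
        x_t = solve(x_prev, eps)
        if c(x_prev, eps, x_t):       # post-condition true: keep the solution
            candidates.append(x_t)
    assert len(candidates) == 1, "gates must select exactly one solution"
    return candidates[0]

print(d(0.5, 0.0))    # interior: 0.45
print(d(2.0, 0.0))    # constraint binds: 1.0
```

Because each (b, solve, c) triple is independent, the candidate systems can be evaluated in parallel, which is the point made in the text.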
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t value of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al., 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al., 2017b), so that the time t computations in Section 5.1 are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms corresponds to more accuracy when the decision rule is accurate but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will initially be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
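The truncation side of this trade-off can be quantified for a scalar reference model (hypothetical F, not the paper's calibration): the relative weight of the tail dropped after K terms is bounded by $\sum_{\nu > K} |F|^{\nu} = |F|^{K+1}/(1-|F|)$.

```python
# Illustration of the truncation trade-off (hypothetical scalar F): the
# series tail dropped after K terms is bounded by |F|^(K+1) / (1 - |F|),
# so fewer terms means more truncation error.
F = 0.9

def tail(K):
    return abs(F) ** (K + 1) / (1.0 - abs(F))

for K in (0, 1, 10, 30):       # the K values used in the error experiments
    print(K, tail(K))
```

With F = 0.9 the tail shrinks slowly, which is consistent with the observation that increasing K matters more when the rest of the approximation (the grid) is accurate enough for the extra terms to help.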
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates. A description of the actual multi-system implementation follows. We seek
$$M(x_{t-1},\, x(x_{t-1}, \varepsilon_t),\, X(x(x_{t-1}, \varepsilon_t)),\, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
Given a proposed model solution
$$x_t = x^p(x_{t-1}, \varepsilon_t),$$
compute
$$X^p(x_{t-1}) \equiv E\left[x^p(x_{t-1}, \varepsilon_t)\right].$$
We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.
We can define functions $z^p, Z^p$ by
$$z^p(x_{t-1}, \varepsilon) \equiv H \begin{bmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x^p(x_{t-1}, \varepsilon)) \end{bmatrix} + \psi_\varepsilon \varepsilon_t + \psi_c$$
$$Z^p(x_{t-1}) \equiv E\left[z^p(x_{t-1}, \varepsilon)\right]$$
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:
$$x_{t+k+1} = X(x_{t+k}), \qquad z_{t+k+1} = Z(x_{t+k}) \quad \forall k \ge 0,$$
using the $z^p(x_{t-1}, \varepsilon)$ series.

• We get expressions for $x_t$, $E[x_{t+1}]$ consistent with L and the conditional expectations path:
$$X_t = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, Z(x_{t+\nu})$$
$$E[X_{t+1}] = B X_t + (I - F)^{-1} \phi \psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1} \phi\, Z(x_{t+\nu}) \quad \forall t, k \ge 0$$

• Use the model equations $M(x_{t-1}, x_t, E[x_{t+1}], \varepsilon) = 0$ and $x_t(x_{t-1}, \varepsilon_t) = X_t(x_{t-1}, \varepsilon_t)$ to determine $x^{p\prime}(x_{t-1}, \varepsilon)$, $z^{p\prime}(x_{t-1}, \varepsilon)$.

• Set $x^p(x_{t-1}, \varepsilon) = x^{p\prime}(x_{t-1}, \varepsilon)$, $z^p(x_{t-1}, \varepsilon) = z^{p\prime}(x_{t-1}, \varepsilon)$; repeat loop.
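The series expression for X_t above can be sketched in a scalar setting (all coefficients hypothetical): with a constant z-expectation the infinite sum has a closed form, so the truncated sum should approach it as the number of retained terms grows.

```python
# Scalar sketch of the series expression for X_t (hypothetical coefficients):
# with a constant z-expectation Z(x) = zbar, the infinite sum collapses to
# (1 - F)^{-1} phi zbar, which the truncated sum approaches as K grows.
B, F, phi = 0.9, 0.5, 1.0
psi_eps, psi_c = 1.0, 0.02
zbar = 0.001

def x_series(x_prev, eps, K):
    x = B * x_prev + phi * psi_eps * eps + phi * psi_c / (1.0 - F)
    x += sum(F ** v * phi * zbar for v in range(K + 1))   # truncated series
    return x

# Closed form of the limit for comparison
exact = B * 0.5 + phi * psi_eps * 0.1 + phi * (psi_c + zbar) / (1.0 - F)

print(abs(x_series(0.5, 0.1, K=5) - exact))    # visible truncation error
print(abs(x_series(0.5, 0.1, K=50) - exact))   # negligible
```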
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})}, collects the decision rule conditional expectations components
H: {γ, G}, collects decision rule information
{C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x+N_ε} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_M, d}(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs: The function can use (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.

genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).

makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
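In one dimension the interpolation step reduces to ordinary polynomial collocation; the sketch below uses hypothetical decision-rule values and Chebyshev points, whereas the paper's implementation uses multi-dimensional Smolyak grids in Mathematica.

```python
# One-dimensional sketch of the makeInterpFuncs step: solve for polynomial
# coefficients that reproduce decision-rule values at Chebyshev collocation
# points (the paper's Smolyak construction generalizes this to several
# dimensions).
import math

def cheb_nodes(n, lo, hi):
    """Chebyshev collocation points mapped onto [lo, hi]."""
    return [(lo + hi) / 2 + (hi - lo) / 2 * math.cos((2 * k + 1) * math.pi / (2 * n))
            for k in range(n)]

def fit_poly(xs, ys):
    """Solve the Vandermonde system for interpolating coefficients."""
    n = len(xs)
    A = [[x ** j for j in range(n)] for x in xs]
    for col in range(n):                       # Gaussian elimination w/ pivoting
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        ys[col], ys[piv] = ys[piv], ys[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            ys[r] -= f * ys[col]
    coef = [0.0] * n                           # back substitution
    for r in range(n - 1, -1, -1):
        s = ys[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))
        coef[r] = s / A[r][r]
    return coef

xs = cheb_nodes(5, 0.1, 0.3)                   # collocation points for k
ys = [x ** 0.36 for x in xs]                   # hypothetical rule values
coef = fit_poly(xs, ys[:])
approx = sum(c * 0.25 ** j for j, c in enumerate(coef))
print(abs(approx - 0.25 ** 0.36))              # small interpolation error
```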
5.4 Approximating a Known Solution: η = 1, U(c) = Log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and to compare the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure: two scatter panels of the ergodic values for k and θ, in original coordinates (left) and transformed coordinates (right).]

Figure 11: The Ergodic Values for k_t, θ_t
$$\bar{X} = \begin{bmatrix} 0.18853 \\ 1.00466 \\ 0 \end{bmatrix}, \qquad \sigma_X = \begin{bmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{bmatrix}, \qquad V = \begin{bmatrix} -0.707107 & -0.707107 & 0 \\ -0.707107 & 0.707107 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
$$\max U = \begin{bmatrix} 3.02524 \\ 0.199124 \\ 3 \end{bmatrix}, \qquad \min U = \begin{bmatrix} -3.16879 \\ -0.17629 \\ -3 \end{bmatrix}$$
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2 RBC Known Solution Model Evaluation Points d=(111)
(table body: 30 rows of (c_i, k_i, θ_i) values)

Table 3: RBC Known Solution, Values at Evaluation Points, d = (1,1,1)
(table body: 30 rows of (ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|) values)

Table 4: RBC Known Solution, Actual Errors, d = (1,1,1)
(table body: 30 rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) values)

Table 5: RBC Known Solution, Error Approximations, d = (1,1,1)
(table body: 30 rows of (k_i, θ_i, ε_i) values)

Table 6: RBC Known Solution, Model Evaluation Points, d = (2,2,2)
(table body: 30 rows of (c_i, k_i, θ_i) values)

Table 7: RBC Known Solution, Values at Evaluation Points, d = (2,2,2)
(table body: 30 rows of (ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|) values)

Table 8: RBC Known Solution, Actual Errors, d = (2,2,2)
(table body: 30 rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) values)

Table 9: RBC Known Solution, Error Approximations, d = (2,2,2)
K          d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries  -6.51367      -12.2771      -16.8947
0          -6.0793       -6.26833      -6.29843
1          -6.00805      -6.24202      -6.49835
2          -6.52219      -7.43876      -7.04785
3          -6.57578      -8.03864      -8.32589
4          -6.62831      -9.1522       -9.49314
5          -6.77305      -10.5516      -10.4247
6          -6.38499      -11.6399      -11.455
7          -6.63849      -11.3627      -13.0213
8          -6.38735      -11.7288      -13.9976
9          -6.46759      -11.9826      -15.3372
10         -6.45966      -12.1345      -15.4157
10         -6.77776      -11.5025      -15.8211
15         -6.67778      -12.4593      -15.8899
20         -6.40802      -11.7296      -17.1652
25         -6.53265      -12.1441      -17.1518
30         -6.81949      -11.6097      -16.3748
35         -6.33508      -12.1446      -16.6963
40         -6.52442      -11.4042      -15.6302
Table 10 RBC Known Solution Impact of Series Length on Approximation Error
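The role of K can be illustrated with a scalar toy version of the series computation: truncating the infinite sum Σ_{ν=0}^∞ F^ν φ z at K terms leaves a geometric tail, so the omitted mass shrinks as K grows. The values of F, φ, and z below are hypothetical stand-ins, not quantities from the paper's models:

```python
# Scalar illustration of why a longer series (larger K) tightens the
# approximation: the truncation error of sum_{v=0}^K F^v * phi * z is
# bounded by |F|^(K+1) / (1 - |F|) * |phi * z|.  Values are hypothetical.
F, phi, z = 0.95, 1.0, 0.01

def tail_bound(K):
    return abs(F) ** (K + 1) / (1 - abs(F)) * abs(phi * z)

exact = phi * z / (1 - F)              # closed form of the full geometric sum
for K in (0, 5, 10, 40):
    truncated = sum(F ** v * phi * z for v in range(K + 1))
    assert abs(exact - truncated) <= tail_bound(K) + 1e-12
    print(K, tail_bound(K))
```

In the matrix case the same logic applies with the spectral radius of F playing the role of |F|, which is consistent with larger K mattering more where the solution calls on more terms of the series.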
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
(table body: 30 rows of (k_i, θ_i, ε_i) values)

Table 11: RBC Approximated Solution, Model Evaluation Points, d = (1,1,1)
(table body: 30 rows of (c_i, k_i, θ_i) values)

Table 12: RBC Approximated Solution, Values at Evaluation Points, d = (1,1,1)
(table body: 30 rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) values)

Table 13: RBC Approximated Solution, Error Approximations, d = (1,1,1)
(table body: 30 rows of (k_i, θ_i, ε_i) values)

Table 14: RBC Approximated Solution, Model Evaluation Points, d = (2,2,2)
(table body: 30 rows of (c_i, k_i, θ_i) values)

Table 15: RBC Approximated Solution, Values at Evaluation Points, d = (2,2,2)
(table body: 30 rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) values)

Table 16: RBC Approximated Solution, Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1 − η)

L = u(c_t) + E_t Σ_{t=0}^∞ [ δ^t u(c_{t+1}) + δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1}))) + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss) ]

so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss.

The first order conditions become

1/c_t^η + λ_t = 0
λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1} = 0
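The constraint threshold υ I_ss depends on the model's deterministic steady state. A minimal sketch of computing it from the primitives above (the parameter values are illustrative assumptions, not the paper's calibration):

```python
# RBC primitives: f(k) = k^alpha, u(c) = (c^(1-eta) - 1)/(1 - eta),
# capital accumulation k_t = (1-d) k_{t-1} + I_t.
# Parameter values below are illustrative assumptions.
alpha, delta, d, eta = 0.36, 0.95, 0.10, 3.0

# Deterministic steady state from the Euler equation
#   1 = delta * (alpha * k^(alpha - 1) + 1 - d)
k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))
I_ss = d * k_ss                      # steady-state investment replaces depreciation
c_ss = k_ss ** alpha - I_ss          # steady-state consumption from the budget

upsilon = 0.975                      # constraint floor: I_t >= upsilon * I_ss
print(k_ss, I_ss, c_ss, upsilon * I_ss)
```

With these assumed parameters the floor binds whenever the optimal investment response to a shock falls even slightly below its steady-state level.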
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)

if (μ_t = 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
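A minimal sketch of the regime logic these two branches encode: guess the constraint is slack (μ_t = 0), and if the implied investment violates I_t ≥ υ I_ss, switch to the binding branch, set I_t = υ I_ss, and let μ_t > 0 absorb the difference. The unconstrained rule below is a hypothetical stand-in, not the paper's solved decision rule:

```python
# Guess-and-verify regime selection for the occasionally binding
# investment constraint I_t >= upsilon * I_ss.
# Parameters and the unconstrained rule are illustrative assumptions.
alpha, d, upsilon, I_ss = 0.36, 0.10, 0.975, 0.382

def unconstrained_investment(k_prev, theta):
    # Hypothetical stand-in for the slack-regime decision rule.
    return d * k_prev * theta

def step(k_prev, theta):
    I = unconstrained_investment(k_prev, theta)
    if I >= upsilon * I_ss:            # slack branch: mu_t = 0
        binding = False
    else:                              # binding branch: impose I_t = upsilon * I_ss
        I = upsilon * I_ss
        binding = True                 # mu_t > 0 enforces the floor
    k = (1 - d) * k_prev + I
    c = theta * k_prev ** alpha - I    # budget: c_t + I_t = theta_t k_{t-1}^alpha
    return c, I, k, binding

print(step(3.82, 1.02))   # favorable shock: constraint slack
print(step(3.82, 0.90))   # adverse shock: constraint binds
```

In the full algorithm the branch selection is made jointly with the equation system above, but the slack-then-verify structure is the same.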
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
(table body: 30 rows of (k_i, θ_i, ε_i) values)

Table 17: Occasionally Binding Constraints Model, Evaluation Points, d = (1,1,1)
(table body: 30 rows of (c_i, I_i, k_i) values)

Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d = (1,1,1)
(table body: 30 rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) values)

Table 19: Occasionally Binding Constraints Model, Error Approximations, d = (1,1,1)
(table body: 30 rows of (k_i, θ_i, ε_i) values)

Table 20: Occasionally Binding Constraints Model, Evaluation Points, d = (2,2,2)
(table body: 30 rows of (c_i, I_i, k_i) values)

Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d = (2,2,2)
(table body: 30 rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|) values)

Table 22: Occasionally Binding Constraints Model, Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function
G(x) ≡ E[g(x, ε)]
then, with

E_t x*_{t+1} = G(g(x_{t−1}, ε_t)),

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀ (x_{t−1}, ε_t).
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z*_t(x_{t−1}, ε):

{ x*_t(x_{t−1}, ε_t) ∈ R^L : ‖x*_t(x_{t−1}, ε_t)‖₂ ≤ X̄ }  ∀ t > 0

z*_t(x_{t−1}, ε_t) ≡ H_{−1} x*_{t−1}(x_{t−1}, ε_t) + H₀ x*_t(x_{t−1}, ε_t) + H₁ x*_{t+1}(x_{t−1}, ε_t)
Consequently, the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t−1}, ε)

and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φ ψ_c
with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀ (x_{t−1}, ε_t).
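The conditional expectations function G(x) ≡ E[g(x, ε)] can be evaluated numerically by quadrature over the shock distribution. A sketch for Gaussian ε using Gauss–Hermite nodes (the decision rule g below is a hypothetical stand-in, not a solved model rule):

```python
import numpy as np

def conditional_expectation(g, x, sigma=0.01, nodes=7):
    # E[g(x, eps)] for eps ~ N(0, sigma^2) via Gauss-Hermite quadrature.
    # hermgauss targets integral of e^{-u^2} f(u) du, so rescale:
    # eps = sqrt(2) * sigma * u, and divide the weighted sum by sqrt(pi).
    u, w = np.polynomial.hermite.hermgauss(nodes)
    eps = np.sqrt(2.0) * sigma * u
    return sum(wi * g(x, ei) for wi, ei in zip(w, eps)) / np.sqrt(np.pi)

# Hypothetical stand-in decision rule for illustration.
g = lambda x, e: x * np.exp(e)

# For lognormal shocks, E[exp(eps)] = exp(sigma^2 / 2).
print(conditional_expectation(g, 1.0))
```

The same quadrature evaluates G^p for a proposed rule g^p, which is what the error formula below requires at each candidate point.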
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t-1}, ε_t)),   e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, ε):[12]

{x^p_t(x_{t-1}, ε_t) ∈ R^L : ‖x^p_t(x_{t-1}, ε_t)‖₂ ≤ X̄  ∀ t > 0}

z^p_t(x_{t-1}, ε_t) ≡ H₋₁ x^p_{t-1}(x_{t-1}, ε_t) + H₀ x^p_t(x_{t-1}, ε_t) + H₁ x^p_{t+1}(x_{t-1}, ε_t)
The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φψ_ε ε + (I − F)⁻¹ φψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I − F)⁻¹ φψ_c

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t).
x_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ (z_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε))

Defining

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ z_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε),

x_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.[13] The exact solution satisfies the model equations exactly; the error associated with the proposed solution therefore leads to a conservative bound on the largest change in z needed to match the exact solution.
[12] The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

[13] Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for the Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
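The conservative bound above is straightforward to evaluate once the linear reference model and the Euler-equation residuals at a set of sample points are in hand. A minimal numerical sketch (the matrices and residual values here are hypothetical, not the paper's code):

```python
import numpy as np

def decision_rule_error_bound(F, phi, residuals):
    """Conservative bound on ||x_t - x_t^p||_inf from Euler-equation
    residuals: the largest value of ||(I - F)^{-1} phi e||_2 over the
    sampled states, following the series-based bound in the text.

    F, phi    : matrices from the linear reference model L
    residuals : iterable of residual vectors e^p(x_{t-1}, eps)
    """
    amplifier = np.linalg.solve(np.eye(F.shape[0]) - F, phi)
    return max(np.linalg.norm(amplifier @ e, 2) for e in residuals)

# Toy illustration with a stable 2x2 reference model
F = np.array([[0.5, 0.1], [0.0, 0.4]])
phi = np.eye(2)
res = [np.array([1e-6, 0.0]), np.array([0.0, 2e-6])]
bound = decision_rule_error_bound(F, phi, res)
```

Because (I − F)⁻¹ amplifies the worst residual over all sampled states, the bound is pessimistic by construction.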
‖Δz_t‖ ≤ max_{x₋, ε} ‖φ M(x₋, g^p(x₋, ε), G^p(g^p(x₋, ε)), ε)‖₂

‖x_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x₋, ε} ‖(I − F)⁻¹ φ M(x₋, g^p(x₋, ε), G^p(g^p(x₋, ε)), ε)‖₂
z_t = H₋ x_{t-1} + H₀ x_t + H₊ x_{t+1}

z^p_t = H₋ x^p_{t-1} + H₀ x^p_t + H₊ x^p_{t+1}

0 = M(x_{t-1}, x_t, x_{t+1}, ε_t)

e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t)
max_{x_{t-1}, ε_t} (‖φ Δz_t‖₂)² = max_{x_{t-1}, ε_t} (‖φ(H₀(x_t − x^p_t) + H₊(x_{t+1} − x^p_{t+1}))‖₂)²

with

M(x_{t-1}, x_t, x_{t+1}, ε_t) = 0,

Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x_t − x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x_{t+1} − x^p_{t+1}).

When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular, we can write

(x_t − x^p_t) ≈ (∂M/∂x_t)⁻¹ (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x_{t+1} − x^p_{t+1})).
Collecting results,

(‖φ Δz_t‖₂)²
= (‖φ(H₀(∂M/∂x_t)⁻¹(Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x_{t+1} − x^p_{t+1})) + H₊(x_{t+1} − x^p_{t+1}))‖₂)²
= (‖φ(H₀(∂M/∂x_t)⁻¹ Δe^p_t(x_{t-1}, ε) − H₀(∂M/∂x_t)⁻¹(∂M/∂x_{t+1})(x_{t+1} − x^p_{t+1}) + H₊(x_{t+1} − x^p_{t+1}))‖₂)²
= (‖φ(H₀(∂M/∂x_t)⁻¹ Δe^p_t(x_{t-1}, ε) − (H₀(∂M/∂x_t)⁻¹(∂M/∂x_{t+1}) − H₊)(x_{t+1} − x^p_{t+1}))‖₂)²
Approximating the derivatives using the H matrices,

∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H₀,   ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H₊,

produces

max_{x_{t-1}, ε_t} (‖φ Δe^p_t(x_{t-1}, ε)‖₂)².
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t-1}, ε_t, x_t):
E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)
If we know the current state, we can use the known conditional expectation function for that state:

E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N)
For the next state, we can compute expectations if the errors are independent:

[E_t(x_{t+2}(x_t) | s_t = 1); …; E_t(x_{t+2}(x_t) | s_t = N)] = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); …; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)]
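The stacked (P ⊗ I) step can be carried out directly with a Kronecker product. A small sketch with hypothetical values (two regimes, three model variables):

```python
import numpy as np

# Two regimes; constant transition probabilities p_ij = Prob(s_{t+1} = j | s_t = i)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
nx = 3  # number of model variables

# Hypothetical stacked regime-conditional expectations
# [E_t(x_{t+2} | s_{t+1} = 1); E_t(x_{t+2} | s_{t+1} = 2)]
Ex_next = np.arange(P.shape[0] * nx, dtype=float)

# Probability-weight the regime-conditional expectations with (P kron I)
Ex = np.kron(P, np.eye(nx)) @ Ex_next
```

Each nx-long block of `Ex` mixes the regime-conditional expectations with the transition probabilities from the corresponding current state.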
Consider two states s_t ∈ {0, 1} in which the depreciation rates differ, d₀ > d₁, with

Prob(s_t = j | s_{t-1} = i) = p_ij.
If (s_t = 0, μ_t > 0, and k_t − (1 − d₀)k_{t-1} − υ I_ss = 0):

λ_t − 1/c_t = 0
c_t + k_t − θ_t k_{t-1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d₀) + μ_{t+1} δ(1 − d₀)) = 0
I_t − (k_t − (1 − d₀)k_{t-1}) = 0
μ_t (k_t − (1 − d₀)k_{t-1} − υ I_ss) = 0
If (s_t = 0, μ_t = 0, and k_t − (1 − d₀)k_{t-1} − υ I_ss ≥ 0):

λ_t − 1/c_t = 0
c_t + k_t − θ_t k_{t-1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d₀) + μ_{t+1} δ(1 − d₀)) = 0
I_t − (k_t − (1 − d₀)k_{t-1}) = 0
μ_t (k_t − (1 − d₀)k_{t-1} − υ I_ss) = 0
If (s_t = 1, μ_t > 0, and k_t − (1 − d₁)k_{t-1} − υ I_ss = 0):

λ_t − 1/c_t = 0
c_t + k_t − θ_t k_{t-1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d₁) + μ_{t+1} δ(1 − d₁)) = 0
I_t − (k_t − (1 − d₁)k_{t-1}) = 0
μ_t (k_t − (1 − d₁)k_{t-1} − υ I_ss) = 0
If (s_t = 1, μ_t = 0, and k_t − (1 − d₁)k_{t-1} − υ I_ss ≥ 0):

λ_t − 1/c_t = 0
c_t + k_t − θ_t k_{t-1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d₁) + μ_{t+1} δ(1 − d₁)) = 0
I_t − (k_t − (1 − d₁)k_{t-1}) = 0
μ_t (k_t − (1 − d₁)k_{t-1} − υ I_ss) = 0
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components

z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components

X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components

Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak an-isotropic grid, polynomials (p₁(x, ε), …, p_N(x, ε)) and their conditional expectations (P₁(x), …, P_N(x))

T: model evaluation points; Niederreiter sequence in the model variable ergodic region

γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components

G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components

H: {γ, G} collects the decision rule information

{C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, R^{3N_x + N_ε} → R^{N_x}
input: (L, H₀, κ, {C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10⁻¹⁰).
2 NestWhile(Function(v, doInterp(L, v, κ, {C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)), H₀, notConvergedAt(v, T))
output: H₀, H₁, …, H_c

Algorithm 1: nestInterp
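The NestWhile step in Algorithm 1 is an ordinary fixed-point iteration with a convergence test at the model evaluation points. A sketch of the same control flow in Python (the names are hypothetical; in the paper the step function is doInterp and the rule is a vector of interpolating functions):

```python
import numpy as np

def nest_while(step, h0, test_points, eval_rule, tol=1e-10, max_iter=200):
    """Iterate decision-rule updates h -> step(h) until the rule's values
    at the test points stop changing (analogue of Mathematica's NestWhile).
    eval_rule(h, pts) returns the rule evaluated at the test points."""
    h, prev = h0, eval_rule(h0, test_points)
    for _ in range(max_iter):
        h = step(h)
        cur = eval_rule(h, test_points)
        if np.max(np.abs(cur - prev)) < tol:  # the "normConvTol" test
            return h
        prev = cur
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contraction: h is a scalar coefficient with fixed point 0.5
h_star = nest_while(lambda h: 0.5 * h + 0.25, 0.0,
                    np.array([1.0, 2.0]), lambda h, p: h * p)
```

The convergence test compares successive rule evaluations at fixed test points rather than the rule's coefficients, matching the description in Algorithm 1.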
input: (L, H_i, κ, {C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) from one of the model equation systems, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)})
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the set of model equation systems. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C₁, …, C_M})
output: {theFuncOptionsTriples, d}

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ − 1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(X_{t+i−1})
5 end
6 {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model as an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [Bx + (I − F)⁻¹ φψ_c + ψ_ε ε_t ; 0_{N_x}]
3 G(x) = [Bx + (I − F)⁻¹ φψ_c ; 0_{N_x}]
4 H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc
input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I − F)⁻¹ φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the E_t(x_{t+1}) component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_{t+1} = B x_t + (I − F)⁻¹ φψ_c + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series:
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
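For a finite path of conditional-expectation vectors, the sum in Algorithm 10 is a short loop. A sketch, assuming dense numpy arrays for F and φ (the values are illustrative, not taken from the paper):

```python
import numpy as np

def f_sum_c(F, phi, Z_path):
    """Numerically sum the F contribution for the series:
    S = sum_{nu=1}^{K} F^{nu-1} phi Z_{t+nu} for a finite path of Z vectors."""
    S = np.zeros(F.shape[0])
    F_pow = np.eye(F.shape[0])  # holds F^{nu-1}, starting at F^0
    for Z in Z_path:
        S += F_pow @ (phi @ Z)
        F_pow = F_pow @ F
    return S

F = np.array([[0.5, 0.0], [0.0, 0.25]])
phi = np.eye(2)
Zs = [np.array([1.0, 1.0]), np.array([1.0, 1.0])]
S = f_sum_c(F, phi, Zs)
```

Accumulating the matrix power inside the loop avoids recomputing F^{ν−1} from scratch at each term.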
input: ({C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs
input: ({C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points, producing the interpolation data used to construct the decision rule and conditional expectation approximations.
output: interpData

Algorithm 12: genInterpData
input: (C(x_{t-1}, ε_t))
1 Evaluate the model equations, obeying the pre and post conditions.
output: x_t or failed

Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
[Figure 6: c_t (σ², R²). c error regressions, R² against σ², for K = 0, 1, 10, and 30.]
[Figure 7: k_t (σ², R²). k error regressions, R² against σ², for K = 0, 1, 10, and 30.]
[Figure 8: c_t (α, β). c error regressions, β against α, for K = 0, 1, 10, and 30.]
[Figure 9: k_t (α, β). k error regressions, β against α, for K = 0, 1, 10, and 30.]
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that, for any given (x_{t-1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions x_t = x(x_{t-1}, ε). Given a decision rule x(x_{t-1}, ε), denote its conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)]. Using the convention described in section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}

where each

C_i ≡ (b_i(x_{t-1}, ε_t), M_i(x_{t-1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t-1}, ε, x_t, E[x_{t+1}]))
represents a set of model equations M_i with Boolean valued gate functions b_i, c_i. In the algorithm's implementation, each b_i is a Boolean function precondition and each c_i is a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values: d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus, we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
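The {C_i ≡ (b_i, M_i, c_i)} organization with a selector can be sketched as follows. This is a toy scalar example with hypothetical names; the real systems are vector-valued and solved with a root finder:

```python
def solve_gated(systems, x_prev, eps):
    """systems: list of (b, solve, c) triples.
    b(x_prev, eps) -> bool precondition: skip the system if False.
    solve(x_prev, eps) -> candidate x_t (the root of M_i = 0).
    c(x_prev, eps, x_t) -> bool post-condition: reject the candidate if False.
    The selection step (the role of d) requires exactly one survivor."""
    candidates = []
    for b, solve, c in systems:
        if not b(x_prev, eps):
            continue
        x_t = solve(x_prev, eps)
        if c(x_prev, eps, x_t):
            candidates.append(x_t)
    assert len(candidates) == 1, "model must produce a unique x_t"
    return candidates[0]

# Toy complementary-slackness pair: either the constraint binds (x = 0)
# or it is slack (x = x_prev + eps > 0).
systems = [
    (lambda xp, e: True, lambda xp, e: 0.0,    lambda xp, e, x: xp + e <= 0),
    (lambda xp, e: True, lambda xp, e: xp + e, lambda xp, e, x: x > 0),
]
x_t = solve_gated(systems, 1.0, 0.5)
```

Because each triple can be evaluated independently, the candidate solves parallelize naturally, which is the point made in the text.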
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t value of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as the new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al., 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals as outlined in (Judd et al., 2017b), so that the time t computations are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms yield more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution, so the terms in the series will initially be incorrect. The Smolyak grid also adds error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

M(x_{t-1}, x(x_{t-1}, ε_t), X(x(x_{t-1}, ε_t)), ε_t) = 0  ∀ (x_{t-1}, ε_t).
Given a proposed model solution

x_t = x^p(x_{t-1}, ε_t),

compute

X^p(x_{t-1}) ≡ E[x^p(x_{t-1}, ε_t)].

We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define functions z^p, Z^p by

z^p(x_{t-1}, ε) ≡ H [x_{t-1}; x^p(x_{t-1}, ε); X^p(x^p(x_{t-1}, ε))] + ψ_ε ε_t + ψ_c

Z^p(x_{t-1}) ≡ E[z^p(x_{t-1}, ε)].
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀ k ≥ 0.

Using the z^p(x_{t-1}, ε) series:

• We get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t-1} + φψ_ε ε + (I − F)⁻¹ φψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})

E[X_{t+1}] = B X_t + (I − F)⁻¹ φψ_c + Σ_{ν=1}^∞ F^{ν−1} φ Z(x_{t+ν})  ∀ t, k ≥ 0

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε) and z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε); repeat the loop.
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model

x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components

z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components

X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components

Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak an-isotropic grid, polynomials (p₁(x, ε), …, p_N(x, ε)) and their conditional expectations (P₁(x), …, P_N(x))

T: model evaluation points; Niederreiter sequence in the model variable ergodic region

γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components

G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components

H: {γ, G} collects the decision rule information

{C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, R^{3N_x + N_ε} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C₁, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function; each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).

makeInterpFuncs — Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = Log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11: The Ergodic Values for k_t, θ_t. Left panel: simulated ergodic values of k and θ; right panel: the same points in the transformed (principal component) coordinates.]
X̄ = (0.18853, 1.00466, 0)

σ_X = (0.0112443, 0.0387807, 0.01)

V =
[ −0.707107  −0.707107  0 ]
[ −0.707107   0.707107  0 ]
[  0          0         1 ]

maxU = (3.02524, 0.199124, 3)

minU = (−3.16879, −0.17629, −3)
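The principal-components (parallelotope) change of variables behind these statistics can be sketched as follows, using synthetic draws in place of the model's simulated ergodic set (all numbers here are illustrative):

```python
import numpy as np

# Rotate simulated ergodic draws into uncorrelated coordinates so that a
# rectangular interpolation domain fits the cloud tightly. Data are synthetic.
rng = np.random.default_rng(0)
sims = rng.multivariate_normal([0.18853, 1.00466],
                               [[1.3e-4, 4.2e-4], [4.2e-4, 1.5e-3]], size=200)

xbar = sims.mean(axis=0)
# Rows of Vt are the principal directions of the centered ergodic cloud
_, _, Vt = np.linalg.svd(sims - xbar, full_matrices=False)
u = (sims - xbar) @ Vt.T               # rotated (decorrelated) coordinates
lo, hi = u.min(axis=0), u.max(axis=0)  # box enclosing the transformed cloud
```

The box [lo, hi] in the rotated coordinates plays the role of the (minU, maxU) bounds reported above.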
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
(k_i, θ_i, ε_i), i = 1, …, 30:

0.16818  0.942376  −0.0251104
0.210392  1.08282  0.0232085
0.181393  0.968212  −0.0090041
0.1949  1.017  0.0111288
0.188668  1.00876  −0.0210838
0.20145  1.06007  0.0272351
0.17245  0.945462  −0.00497752
0.214179  1.0876  0.0191819
0.195059  1.03442  −0.0130307
0.182278  0.983104  0.00307563
0.208273  1.06025  −0.0029137
0.166906  0.916853  0.00106234
0.20192  1.05244  −0.0311503
0.187205  1.00787  0.0171686
0.213199  1.08502  −0.015044
0.17147  0.942887  0.0252218
0.179847  0.985589  −0.000699081
0.194563  1.03016  0.00911549
0.165563  0.915543  −0.0230971
0.20705  1.05852  0.0211952
0.173322  0.954406  −0.0110174
0.2136  1.1016  0.00508892
0.184601  0.986988  −0.0271237
0.198833  1.03324  0.0131421
0.20721  1.07594  −0.0190705
0.166932  0.928749  0.0292484
0.192926  1.0059  −0.000296424
0.179118  0.958169  0.0141487
0.172672  0.949185  −0.0180639
0.212949  1.09638  0.030255

Table 2: RBC Known Solution Model Evaluation Points, d=(1,1,1)
(c_i, k_i, θ_i), i = 1, …, 30:

0.318386  0.16551  0.9202
0.413447  0.214841  1.10215
0.341848  0.177697  0.960777
0.375209  0.195014  1.02742
0.35634  0.185242  0.987273
0.400686  0.208232  1.08481
0.329563  0.171293  0.943165
0.416315  0.216325  1.1025
0.372502  0.193618  1.01959
0.351946  0.182927  0.98707
0.384948  0.200107  1.02813
0.31841  0.165481  0.92202
0.377048  0.196011  1.01878
0.368995  0.19178  1.02495
0.402167  0.209001  1.06547
0.339185  0.176282  0.97149
0.347449  0.180596  0.979286
0.378776  0.196859  1.03785
0.308332  0.160276  0.896658
0.401836  0.208829  1.07707
0.330982  0.172039  0.945622
0.415597  0.215941  1.10137
0.343927  0.178809  0.960659
0.384305  0.199732  1.04488
0.393281  0.204401  1.05291
0.332894  0.173006  0.962198
0.36484  0.189639  1.00262
0.345376  0.179513  0.974651
0.326232  0.169582  0.933652
0.422656  0.219618  1.12227

Table 3: RBC Known Solution Values at Evaluation Points, d=(1,1,1)
(ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}|), i = 1, …, 30:

−6.86423  −7.61023  −6.2776
−6.77469  −7.25654  −6.193
−8.27193  −9.26988  −7.90952
−9.53366  −9.94641  −9.07674
−9.70109  −11.0692  −10.8193
−7.01131  −7.52677  −6.4226
−8.62144  −9.43568  −8.12434
−6.93069  −7.36841  −6.2991
−8.82105  −9.39136  −7.96867
−9.48549  −9.94897  −9.04695
−6.83337  −7.45765  −6.41963
−11.193  −13.9426  −8.33167
−7.13458  −7.70834  −6.51919
−9.46667  −10.6151  −10.154
−7.13317  −7.97018  −6.71715
−6.76168  −7.44531  −6.20438
−9.46294  −10.7345  −8.60101
−8.99962  −9.32606  −8.36754
−6.5744  −7.28112  −6.0606
−7.26443  −7.74753  −6.65645
−7.95047  −8.75065  −7.34261
−7.90465  −8.02499  −7.40682
−7.78668  −8.88177  −7.3262
−8.44035  −8.86441  −7.83514
−7.21479  −7.97368  −6.58639
−6.40774  −7.09877  −5.86537
−11.8365  −11.1009  −11.8397
−7.81106  −8.43094  −7.01803
−7.28745  −8.06534  −6.74087
−6.33557  −6.84989  −5.76803

Table 4: RBC Known Solution Actual Errors, d=(1,1,1)
(ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30:

−6.84011  −7.59703  −6.2776
−6.79189  −7.33194  −6.193
−8.27296  −9.26585  −7.90952
−9.54759  −9.96466  −9.07674
−9.72214  −11.142  −10.8193
−7.019  −7.58967  −6.4226
−8.61034  −9.41771  −8.12434
−6.95566  −7.44593  −6.2991
−8.83746  −9.43331  −7.96867
−9.48388  −9.95406  −9.04695
−6.8561  −7.50703  −6.41963
−10.8845  −13.6119  −8.33167
−7.15654  −7.75234  −6.51919
−9.48715  −10.5641  −10.154
−7.16809  −8.02412  −6.71715
−6.74694  −7.43005  −6.20438
−9.46173  −10.7234  −8.60101
−9.00034  −9.36818  −8.36754
−6.55256  −7.25512  −6.0606
−7.28911  −7.80039  −6.65645
−7.93653  −8.73939  −7.34261
−7.87653  −8.13406  −7.40682
−7.79071  −8.88802  −7.3262
−8.45665  −8.89927  −7.83514
−7.25059  −8.02416  −6.58639
−6.38709  −7.07562  −5.86537
−11.9887  −11.1639  −11.8397
−7.8057  −8.42757  −7.01803
−7.27379  −8.05278  −6.74087
−6.35248  −6.93629  −5.76803

Table 5: RBC Known Solution Error Approximations, d=(1,1,1)
(k_i, θ_i, ε_i), i = 1, …, 30:

0.175411  1.00081  −0.00861854
0.182233  1.01265  −0.00121566
0.181807  0.987487  −0.00615091
0.183086  0.994888  −0.00306638
0.179142  1.00488  −0.00800163
0.179142  1.01672  −0.000598756
0.178715  0.991558  −0.00553401
0.184685  1.00636  −0.00183257
0.179142  1.0108  −0.00676782
0.179142  0.998959  −0.00430019
0.185538  0.997479  −0.00923544
0.180208  0.980456  −0.00460865
0.18138  1.00821  −0.0095439
0.177969  1.00821  −0.00214102
0.184365  1.00673  −0.00707627
0.178396  0.991928  −0.000907209
0.176263  1.00821  −0.00584246
0.179675  1.00821  −0.00337483
0.179248  0.983046  −0.00831008
0.184792  0.999329  −0.00152412
0.177436  0.997479  −0.00645937
0.180847  1.02116  −0.00399174
0.180421  0.995999  −0.00892699
0.182979  0.998959  −0.00275793
0.180847  1.01524  −0.00769318
0.177436  0.991558  −0.000290303
0.183832  0.990077  −0.00522555
0.18202  0.984526  −0.0026037
0.178049  0.994426  −0.00753895
0.18146  1.01811  −0.000136076

Table 6: RBC Known Solution Model Evaluation Points, d=(2,2,2)
32
[Values at the 30 evaluation points; the extracted numbers duplicate those of Table 6]

Table 7: RBC Known Solution Values at Evaluation Points, d=(2,2,2)
[30 rows of actual errors ln(|e_c,i|), ln(|e_k,i|), ln(|e_θ,i|), ranging from about -12.8 to -20.5]

Table 8: RBC Known Solution Actual Errors, d=(2,2,2)
[30 rows of error approximations ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), closely matching the actual errors in Table 8]

Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
K           d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries   -6.51367    -12.2771    -16.8947
0           -6.0793     -6.26833    -6.29843
1           -6.00805    -6.24202    -6.49835
2           -6.52219    -7.43876    -7.04785
3           -6.57578    -8.03864    -8.32589
4           -6.62831    -9.1522     -9.49314
5           -6.77305    -10.5516    -10.4247
6           -6.38499    -11.6399    -11.455
7           -6.63849    -11.3627    -13.0213
8           -6.38735    -11.7288    -13.9976
9           -6.46759    -11.9826    -15.3372
10          -6.45966    -12.1345    -15.4157
10          -6.77776    -11.5025    -15.8211
15          -6.67778    -12.4593    -15.8899
20          -6.40802    -11.7296    -17.1652
25          -6.53265    -12.1441    -17.1518
30          -6.81949    -11.6097    -16.3748
35          -6.33508    -12.1446    -16.6963
40          -6.52442    -11.4042    -15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 evaluation points (k_i, θ_i, ε_i); approximately k ∈ [0.14, 0.25], θ ∈ [0.83, 1.21], ε ∈ [-0.058, 0.075]]

Table 11: RBC Approximated Solution Model Evaluation Points, d=(1,1,1)
[Decision rule values (c_i, k_i, θ_i) at the 30 evaluation points; approximately c ∈ [0.19, 0.48], k ∈ [0.20, 0.27], θ ∈ [0.80, 1.27]]

Table 12: RBC Approximated Solution Values at Evaluation Points, d=(1,1,1)
[30 rows of error approximations ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), ranging from about -4.1 to -10.3]

Table 13: RBC Approximated Solution Error Approximations, d=(1,1,1)
[30 evaluation points (k_i, θ_i, ε_i); approximately k ∈ [0.15, 0.25], θ ∈ [0.85, 1.22], ε ∈ [-0.059, 0.072]]

Table 14: RBC Approximated Solution Model Evaluation Points, d=(2,2,2)
[Decision rule values (c_i, k_i, θ_i) at the 30 evaluation points; approximately c ∈ [0.22, 0.47], k ∈ [0.21, 0.27], θ ∈ [0.84, 1.22]]

Table 15: RBC Approximated Solution Values at Evaluation Points, d=(2,2,2)
[30 rows of error approximations ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), ranging from about -4.9 to -20.4]

Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches11 (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

\[ I_t \ge \upsilon I_{ss} \]

\[ \max\, u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1}) \]

\[ c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t} \]

\[ f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta} \]
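The steady state of this model pins down the investment benchmark I_ss used in the constraint. A minimal sketch of the steady-state computation, assuming illustrative parameter values (α = 0.36, δ = 0.95, d = 0.025) rather than the paper's calibration:

```python
# Deterministic steady state of the RBC model above (theta_ss = 1), a sketch
# with illustrative parameters. The Euler equation at the steady state gives
#   1 = delta * (alpha * k_ss**(alpha-1) + 1 - d).
def rbc_steady_state(alpha=0.36, delta=0.95, d=0.025):
    k_ss = (alpha * delta / (1.0 - delta * (1.0 - d))) ** (1.0 / (1.0 - alpha))
    c_ss = k_ss ** alpha - d * k_ss   # resource constraint at the steady state
    i_ss = d * k_ss                   # investment benchmark for I_t >= v*I_ss
    return k_ss, c_ss, i_ss
```

The returned `i_ss` is the quantity scaled by υ in the occasionally binding constraint.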
\[
\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^t u(c_{t+1})
+ \sum_{t=0}^{\infty} \delta^t \lambda_t \left( c_t + k_t - \left((1-d)k_{t-1} + \theta_t f(k_{t-1})\right) \right)
+ \delta^t \mu_t \left( (k_t - (1-d)k_{t-1}) - \upsilon I_{ss} \right)
\]

so that we can impose

\[ I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}. \]
The first order conditions become

\[ \frac{1}{c_t^{\eta}} + \lambda_t \]

\[ \lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} \]
For example, the Euler equations for the neoclassical growth model can be written as

11 The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If \( \mu > 0 \) and \( k_t - (1-d)k_{t-1} - \upsilon I_{ss} = 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \mu_{t+1} + \delta(1-d)\mu_{t+1} \right) \\
&I_t - (k_t - (1-d)k_{t-1}) \\
&\mu_t \left( k_t - (1-d)k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]
If \( \mu = 0 \) and \( k_t - (1-d)k_{t-1} - \upsilon I_{ss} \ge 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \mu_{t+1} + \delta(1-d)\mu_{t+1} \right) \\
&I_t - (k_t - (1-d)k_{t-1}) \\
&\mu_t \left( k_t - (1-d)k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]
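The two branches above can be evaluated as residual functions whose branch is chosen by the complementarity conditions on (μ_t, I_t). The sketch below illustrates this; the function name and all parameter values (α, δ, d, ρ, υ, I_ss) are illustrative stand-ins, not the paper's calibration.

```python
# Residuals for the occasionally-binding-constraint system above. The branch
# is selected by complementarity: either mu_t > 0 and investment sits at the
# floor v*I_ss, or mu_t = 0 and the constraint is slack.
import math

def obc_residuals(k_lag, theta_lag, eps, c, k, theta, lam, mu, N,
                  lam_next, mu_next, N_next,
                  alpha=0.36, delta=0.95, d=0.025, rho=0.95,
                  v=0.975, I_ss=0.275):
    I = k - (1.0 - d) * k_lag                     # investment identity
    res = [
        lam - 1.0 / c,                            # marginal-utility condition
        c + I - theta * k_lag ** alpha,           # resource constraint
        N - lam * theta,                          # auxiliary variable N_t
        theta - math.exp(rho * math.log(theta_lag) + eps),  # technology shock
        lam + mu - (alpha * k ** (alpha - 1.0) * delta * N_next
                    + lam_next * delta * (1.0 - d)
                    + mu_next + delta * (1.0 - d) * mu_next),  # Euler equation
    ]
    if mu > 0.0:
        res.append(I - v * I_ss)                  # binding: I_t = v * I_ss
    else:
        res.append(mu * (I - v * I_ss))           # slack: mu_t * gap = 0
    return res
```

A solver can drive all six residuals to zero on whichever branch the complementarity conditions select.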
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[30 evaluation points (k_i, θ_i, ε_i); approximately k ∈ [3.27, 4.26], θ ∈ [0.89, 1.10], ε ∈ [-0.033, 0.035]]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
[Decision rule values (c_i, I_i, k_i) at the 30 evaluation points; approximately c ∈ [1.00, 1.39], I ∈ [0.36, 0.46], k ∈ [3.35, 4.26]]

Table 18: Occasionally Binding Constraints Values at Evaluation Points, d=(1,1,1)
[30 rows of error approximations ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), ranging from about -3.3 to -8.4]

Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
[30 evaluation points (k_i, θ_i, ε_i); approximately k ∈ [3.12, 4.20], θ ∈ [0.87, 1.09], ε ∈ [-0.034, 0.038]]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
[Decision rule values (c_i, I_i, k_i) at the 30 evaluation points; approximately c ∈ [0.93, 1.39], I ∈ [0.37, 0.48], k ∈ [3.18, 4.22]]

Table 21: Occasionally Binding Constraints Values at Evaluation Points, d=(2,2,2)
[30 rows of error approximations ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), ranging from about -2.4 to -20.4]

Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time-invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL httpwwwsciencedirectcomsciencearticlepiiS0165188909001924
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelskiy and Kenneth L Judd Avoiding the curse of dimensionality in dynamicstochastic games Harvard University and Hoover Institution and NBER November 2004URL filePcompmpepdf
Jess Gaspar and Kenneth L Judd Solving large scale rational expectations models TechnicalReport 0207 National Bureau of Economic Research Inc February 1997 URL filePt0207pdf available at httpideasrepecorgpnbrnberte0207html
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL httpsideasrepecorgpiviwpasad2011-15html
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL httpsideasrepecorgpiviwpasad2001-23html
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
Adrian Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi 101111j1468-0262200500642x URL httphttpsdoiorg101111j1468-0262200500642x
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution \( x^*_t = g(x_{t-1}, \varepsilon_t) \), define the conditional expectations function
\[ G(x) \equiv E\left[ g(x, \varepsilon) \right], \]

then with

\[ E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t)) \]

we have

\[ M(x^*_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding \( z^*_t(x_{t-1}, \varepsilon) \):

\[ \left\{ x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\| x^*_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \ \forall t > 0 \right\} \]

\[ z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t) \]
Consequently, the exact solution has a representation given by

\[ x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^*_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[ x^*_{t+1}(x_{t-1}, \varepsilon) \right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[ z^*_{t+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1}\phi\psi_c \]

with

\[ M(x^*_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]
Now consider a proposed solution for the model, \( x^p_t = g^p(x_{t-1}, \varepsilon_t) \); define \( G^p(x) \equiv E\left[ g^p(x, \varepsilon) \right] \) so that

\[ E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t). \]
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding \( z^p_t(x_{t-1}, \varepsilon) \):12

\[ \left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\| x^p_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \ \forall t > 0 \right\} \]

\[ z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t) \]

The proposed solution has a representation given by

\[ x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[ x^p_{t+1}(x_{t-1}, \varepsilon) \right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1}\phi\psi_c \]
with

\[ e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t). \]

\[ x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right) \]

Defining

\[ \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon), \]

we have

\[ x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \]

\[ \left\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \left\| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_\infty \]
By bounding the largest deviation in the paths for the \( \Delta z^p_t \), we can bound the largest difference in \( x^*_t \).13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability-weighted averages of the \( \Delta z^p_t \) values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for \( \Delta z^p_{t+k} \) are pessimistic bounds for the errors from the conditional expectations path.
\[ \left\| \Delta z_t \right\| \le \max_{x_{-}, \varepsilon} \left\| \phi\, M\!\left( x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon \right) \right\|_2 \]

\[ \left\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi\, M\!\left( x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon \right) \right\|_2 \]
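This bound can be computed directly once F, φ, and the model-equation residuals at a set of test points are in hand. A numeric sketch with made-up matrices and residuals (not the paper's model), using the closed form (I − F)^{-1}φ in place of the truncated series:

```python
# Numeric sketch of the decision-rule error bound
#   || x*_t - x^p_t ||  <=  max_{x, eps} || (I - F)^{-1} phi e^p(x, eps) ||,
# with illustrative F, phi, and residuals.
import numpy as np

rng = np.random.default_rng(0)
n = 3
F = 0.5 * np.eye(n)                      # stable linear part: spectral radius < 1
phi = np.eye(n)
residuals = rng.uniform(-1e-4, 1e-4, size=(50, n))   # e^p at 50 test points

lever = np.linalg.inv(np.eye(n) - F) @ phi           # closed form of sum_nu F^nu phi
bound = max(np.linalg.norm(lever @ e) for e in residuals)
```

With F = 0.5 I the multiplier `lever` is exactly 2I, so the bound is twice the largest residual norm over the test points.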
\[ z^*_t = H_{-} x^*_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1} \]

\[ z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1} \]

\[ 0 = M(x^*_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) \]

\[ e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t) \]

\[ \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi\, \Delta z_t \right\|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi\left( H_0 (x^p_t - x^*_t) + H_{+} (x^p_{t+1} - x^*_{t+1}) \right) \right\|_2 \right)^2 \]
with

\[ M(x^*_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0. \]

\[ \Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \]

When \( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \) is non-singular, we can write

\[ (x^*_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) \]
Collecting results, and writing \( M_t \equiv \frac{\partial M}{\partial x_t} \) and \( M_{t+1} \equiv \frac{\partial M}{\partial x_{t+1}} \),

\[
\begin{aligned}
\left( \left\| \phi\, \Delta z_t \right\|_2 \right)^2
&= \left( \left\| \phi\left( H_0 M_t^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - M_{t+1} (x^*_{t+1} - x^p_{t+1}) \right) + H_{+} (x^p_{t+1} - x^*_{t+1}) \right) \right\|_2 \right)^2 \\
&= \left( \left\| \phi\left( H_0 M_t^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 M_t^{-1} M_{t+1} + H_{+} \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2
\end{aligned}
\]
Approximating the derivatives using the H matrices,

\[ \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}, \]

produces

\[ \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi\, \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2 \right)^2 . \]
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial \( x(x_{t-1}, \varepsilon_t) \). Consider the case when the transition probabilities \( p_{ij} \) are constant and do not depend on \( (x_{t-1}, \varepsilon_t, x_t) \):
\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left( x_{t+1}(x_t) \mid s_{t+1} = j \right) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left( x_{t+2}(x_{t+1}) \mid s_{t+2} = j \right) \]
If we know the current state, we can use the known conditional expectation function for the state:

\[ E_t(x_{t+1}(x_t) \mid s_t = 1), \ldots, E_t(x_{t+1}(x_t) \mid s_t = N). \]

For the next state, we can compute expectations if the errors are independent:

\[
\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix}
= (P \otimes I)
\begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}
\]
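For constant transition probabilities this stacked update is a single Kronecker-product multiplication. A sketch with a two-regime, two-variable example (all numbers illustrative):

```python
# Stacked two-step expectation via (P kron I): each regime-i block of the
# output mixes the regime-conditional expectations with weights p_ij.
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])            # p_ij = Prob(s_t = j | s_{t-1} = i)
I2 = np.eye(2)                        # x_t has two components here

# E_t(x_{t+2} | s_{t+1} = j), stacked by regime j (illustrative values)
cond = np.array([1.0, 0.5,            # regime 1
                 2.0, 1.5])           # regime 2

stacked = np.kron(P, I2) @ cond       # E_t(x_{t+2} | s_t = i), stacked by i
```

Each block of `stacked` is the probability-weighted mix of the regime-conditional expectations, exactly as in the displayed equation.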
Consider two states $s_t \in \{0, 1\}$ in which the depreciation rates differ, $d_0 > d_1$, with
\[
\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}.
\]
If $\bigl(s_t = 0 \,\wedge\, \mu_t > 0 \,\wedge\, k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} = 0\bigr)$:
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\bigr) \\
&I_t - \bigl(k_t - (1 - d_0)k_{t-1}\bigr) \\
&\mu_t \bigl(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]
If $\bigl(s_t = 0 \,\wedge\, \mu_t = 0 \,\wedge\, k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} \ge 0\bigr)$:
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\bigr) \\
&I_t - \bigl(k_t - (1 - d_0)k_{t-1}\bigr) \\
&\mu_t \bigl(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]
If $\bigl(s_t = 1 \,\wedge\, \mu_t > 0 \,\wedge\, k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} = 0\bigr)$:
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\bigr) \\
&I_t - \bigl(k_t - (1 - d_1)k_{t-1}\bigr) \\
&\mu_t \bigl(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]
If $\bigl(s_t = 1 \,\wedge\, \mu_t = 0 \,\wedge\, k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} \ge 0\bigr)$:
\[
\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\bigr) \\
&I_t - \bigl(k_t - (1 - d_1)k_{t-1}\bigr) \\
&\mu_t \bigl(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]
C Algorithm Pseudo-code

$L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model.
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components.
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components.
$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components.
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components.
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$).
$S$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$.
$T$: model evaluation points, a Niederreiter sequence in the model-variable ergodic region.
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, the decision rule components.
$G$: $\{X(x_{t-1}), Z(x_{t-1})\}$, the decision rule conditional expectation components.
$H$: $\{\gamma, G\}$, the collected decision rule information.
$\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon + N_x} \to \mathbb{R}^{N_x}$.
input: $(L, H_0, \kappa, \{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c), S, T)$
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by a default tolerance ("normConvTol" $\to 10^{-10}$).
2: NestWhile(Function(v, doInterp($L$, v, $\kappa$, $\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)$, $S$)), $H_0$, notConvergedAt(v, $T$))
output: $\{H_0, H_1, \ldots, H_c\}$
Algorithm 1: nestInterp
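A rough Python sketch of the outer loop's structure. The paper's implementation uses Mathematica's NestWhile; here `toy_update` is only a stand-in contraction, not the actual doInterp pass, and the tolerance mirrors the "normConvTol" default.

```python
# Hedged sketch of nestInterp's structure: iterate the decision-rule update
# until evaluations at the test points stop changing.
NORM_CONV_TOL = 1e-10   # mirrors the paper's "normConvTol" default

def nest_while(update, rule0, test_points):
    """Apply `update` repeatedly; stop when test-point values converge."""
    rule, prev = rule0, [rule0(p) for p in test_points]
    while True:
        rule = update(rule)
        cur = [rule(p) for p in test_points]
        if max(abs(a - b) for a, b in zip(cur, prev)) < NORM_CONV_TOL:
            return rule
        prev = cur

# Toy stand-in for doInterp: v -> (v + p)/2 pointwise, converging to rule(p) = p.
def toy_update(rule):
    return lambda p, r=rule: 0.5 * (r(p) + p)

fixed = nest_while(toy_update, lambda p: 0.0, [0.5, 1.0, 2.0])
```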
input: $(L, H_i, \kappa, \{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c), S)$
1: Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value of $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one of the model equation systems; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs($L$, $H$, $\kappa$, $\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)$)
3: $H_{i+1}$ = makeInterpFuncs(theFindRootFuncs, $S$)
output: $H_{i+1}$
Algorithm 2: doInterp

input: $(L, H, \kappa, \{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c), S)$
1: Apply genFindRootWorker to each model component. Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value of $x_t$ for any given $(x_{t-1}, \varepsilon_t)$ for one from the set of model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, $\{$v[1], genFindRootWorker($L$, $H$, $\kappa$, v[2]), v[3]$\}$), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples, $d$
Algorithm 3: genFindRootFuncs

input: $(L, G, \kappa, M)$
1: Symbolically construct functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2: $Z = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($L$, $x^p_t$, $G$)
3: modEqnArgsFunc = genLilXkZkFunc($L$, $Z$)
4: $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}$, $\varepsilon_t$, $z_t$)
5: funcOfXtZt = FindRoot($(x^p_t, z_t)$, $\{M(\mathcal{X}), x_t - x^p_t\}$)
6: funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: $(x^p_t, L, G, \kappa, M)$
1: Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (the empty list for $\kappa = 1$).
2: $X_t = x^p_t$
3: for $i = 1$; $i < \kappa - 1$; $i$++ do
4:   $\{X_{t+i}, Z_{t+i}\}$ = $G(Z_{t+i-1})$
5: end
6: $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot($L$, $x^p_t$, $G$)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$
Algorithm 5: genZsForFindRoot

input: $(L = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1: Create functions that use the solution from the linear reference model as an initial proposed decision rule function and decision rule conditional expectation function.
2: $\gamma(x, \varepsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c + \psi_\varepsilon \varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3: $G(x, \varepsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}$
4: $H = \{\gamma, G\}$
output: $H$
Algorithm 6: genBothX0Z0Funcs

input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Symbolically create a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: fCon = fSumC($\phi$, $F$, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
   xtVals = genXtOfXtm1($L$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
   xtp1Vals = genXtp1OfXt($L$, xtVals, fCon)
   fullVec = $\{$xtm1Vars, xtVals, xtp1Vals, epsVars$\}$
output: fullVec
Algorithm 7: genLilXkZkFunc

input: $(L, x_{t-1}, \varepsilon_t, z_t, \mathrm{fCon})$
1: Symbolically create a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_t = Bx_{t-1} + (I - F)^{-1}\phi\psi_c + \phi\psi_\varepsilon \varepsilon_t + \phi\psi_z z_t + F\,\mathrm{fCon}$
output: $x_t$
Algorithm 8: genXtOfXtm1
input: $(L, x_{t-1}, \mathrm{fCon})$
1: Symbolically create a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_t = Bx_{t-1} + (I - F)^{-1}\phi\psi_c + \phi\psi_\varepsilon \varepsilon_t + \phi\psi_z z_t + \mathrm{fCon}$
output: $x_t$
Algorithm 9: genXtp1OfXt

input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Numerically sum the $F$ contribution for the series.
2: $S = \sum_{\nu} F^{\nu-1}\phi Z_{t+\nu}$
output: $S$
Algorithm 10: fSumC

input: $(\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c), S)$
1: Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData($\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)$, $S$)
   $\{\gamma, G\}$ = interpDataToFunc(interpData, $S$)
output: $\{\gamma, G\}$
Algorithm 11: makeInterpFuncs

input: $(\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c), S)$
1: Solve the model equations at the collocation points, producing interpolation data for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
output: $\{\gamma, G\}$
Algorithm 12: genInterpData

input: $(C(x_{t-1}, \varepsilon_t))$
1: Evaluate the model equations, obeying pre- and post-conditions.
output: $x_t$ or failed
Algorithm 13: evaluateTriple
input: $(V, S)$
1: Compute a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
[Figure 7: $k_t$ error regressions $(\sigma^2, R^2)$, panels K = 0, 1, 10, 30.]

[Figure 8: $c_t$ error regressions $(\alpha, \beta)$, panels K = 0, 1, 10, 30.]

[Figure 9: $k_t$ error regressions $(\alpha, \beta)$, panels K = 0, 1, 10, 30.]
5 Improving Proposed Solutions

5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I require that, for any given $(x_{t-1}, \varepsilon_t)$, this collection of equation systems produces a unique solution for $x_t$. This formulation allows the technique to be applied to models with occasionally binding constraints and models with regime switching. I consider only time-invariant model solutions $x_t = x(x_{t-1}, \varepsilon)$. Given a decision rule $x(x_{t-1}, \varepsilon)$, denote its conditional expectation $X(x_{t-1}) \equiv E[x(x_{t-1}, \varepsilon)]$. Using the convention described in Section 3.2, I include enough auxiliary variables that expected values of model variables can be computed correctly by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form
\[
\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c),
\]
where each
\[
C_i \equiv \bigl(b_i(x_{t-1}, \varepsilon_t),\; M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\; c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\bigr)
\]
represents a set of model equations $M_i$ with Boolean-valued gate functions $b_i, c_i$. In the algorithm's implementation, $b_i$ is a Boolean precondition and $c_i$ a Boolean post-condition for a given equation system. If some $b_i$ is false, there is no need to try to solve system $i$; if $c_i$ is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether one uses my series representation. The function $d$ chooses among the candidate $x_t(x_{t-1}, \varepsilon_t)$ values: it uses the truth values from the $(b, c)$ and the possible values of the state variables to select one and only one $x_t$. Thus we have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given $(x_{t-1}, \varepsilon_t)$.
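A sketch of this gating logic with hypothetical branch functions. The toy example encodes $x_t = \max(\text{state}, 0)$ as two gated systems, standing in for the paper's $\{C_i = (b_i, M_i, c_i)\}$ with a $d$-style uniqueness requirement.

```python
def solve_gated(systems, state):
    """systems: list of (pre, solve, post) callables; returns the unique x_t."""
    candidates = []
    for pre, solve, post in systems:
        if not pre(state):
            continue            # b_i false: no need to attempt system i
        xt = solve(state)
        if post(state, xt):     # c_i false: discard unacceptable solution
            candidates.append(xt)
    assert len(candidates) == 1, "gates must select a unique x_t"
    return candidates[0]

# Toy complementarity: x = max(state, 0) split into two gated branches.
branch_pos = (lambda s: True, lambda s: s,   lambda s, x: x >= 0)
branch_neg = (lambda s: True, lambda s: 0.0, lambda s, x: s <= 0)
```

Each branch can be attempted independently, which is one way the pre/post-condition organization helps with parallelism.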
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example specification of regime switching.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations of future values in the proposed solution with the time $t$ value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories of the usual variables $x_t(x_{t-1}, \varepsilon_t)$ as well as the new $z_t(x_{t-1}, \varepsilon_t)$ variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule $x(x_{t-1}, \varepsilon)$ and obtain conditional expectation functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time $t$ computations (51) are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for $K$, the number of terms in the series representation. There are trade-offs: fewer terms mean more truncation error; more terms yield more accuracy when the decision rule is accurate, but are computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds further error to the calculation. If there are many grid points, it is more likely that increasing $K$ can improve the quality of the solution; if the grid is too sparse, increasing $K$ will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates; a description of the actual multi-system implementation follows. We seek
\[
M\bigl(x_{t-1},\, x(x_{t-1}, \varepsilon_t),\, X(x(x_{t-1}, \varepsilon_t)),\, \varepsilon_t\bigr) = 0 \quad \forall (x_{t-1}, \varepsilon_t).
\]
Given a proposed model solution
\[
x_t = x^p(x_{t-1}, \varepsilon_t),
\]
compute
\[
X^p(x_{t-1}) \equiv E[x^p(x_{t-1}, \varepsilon_t)].
\]
We will use a linear reference model $L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$ to construct a series of $z^p$ functions that help improve the accuracy of the proposed solution. We can define functions $z^p, Z^p$ by
\[
z^p(x_{t-1}, \varepsilon) \equiv H
\begin{bmatrix}
x_{t-1} \\
x^p(x_{t-1}, \varepsilon) \\
X^p(x(x_{t-1}, \varepsilon))
\end{bmatrix}
+ \phi_\varepsilon \varepsilon_t + \phi_c,
\qquad
Z^p(x_{t-1}) \equiv E[z^p(x_{t-1}, \varepsilon)].
\]
bull (The algorithm loop begins here.) Define conditional expectation paths for $x_t, z_t$:
\[
x_{t+k+1} = X(x_{t+k}), \qquad z_{t+k+1} = Z(x_{t+k}) \quad \forall k \ge 0,
\]
using the $z^p(x_{t-1}, \varepsilon)$ series.

bull We get expressions for $x_t, E[x_{t+1}]$ consistent with $L$ and the conditional expectations path:
\[
X_t = Bx_{t-1} + \phi\psi_\varepsilon \varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi Z(x_{t+\nu}),
\]
\[
E[X_{t+1}] = BX_t + (I - F)^{-1}\phi\psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1}\phi Z(x_{t+\nu}) \quad \forall t,\, k \ge 0.
\]

bull Use the model equations $M(x_{t-1}, x_t, E[x_{t+1}], \varepsilon) = 0$ and $x_t(x_{t-1}, \varepsilon_t) = X_t(x_{t-1}, \varepsilon_t)$ to determine $x^{p\prime}(x_{t-1}, \varepsilon),\, z^{p\prime}(x_{t-1}, \varepsilon)$.

bull Set $x^p(x_{t-1}, \varepsilon) = x^{p\prime}(x_{t-1}, \varepsilon)$ and $z^p(x_{t-1}, \varepsilon) = z^{p\prime}(x_{t-1}, \varepsilon)$; repeat the loop.
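A truncated version of the first series above can be sketched as follows. The scalar matrices stand in for an illustrative linear reference model (not a calibrated one), and `Z_path` plays the role of the first few values of $Z(x_{t+\nu})$ along the conditional expectations path.

```python
import numpy as np

# Truncated sketch of
#   X_t = B x_{t-1} + phi psi_eps eps + (I-F)^{-1} phi psi_c
#         + sum_{nu=0}^{kappa} F^nu phi Z(x_{t+nu});
# all values below are illustrative stand-ins.
B = np.array([[0.9]]); F = np.array([[0.4]]); phi = np.array([[1.0]])
psi_eps = np.array([[1.0]]); psi_c = np.array([0.0])

def x_series(x_prev, eps, Z_path):
    """Sum the first len(Z_path) terms of the anticipated-shocks series."""
    n = B.shape[0]
    const = np.linalg.solve(np.eye(n) - F, phi @ psi_c)
    tail = sum(np.linalg.matrix_power(F, nu) @ (phi @ z)
               for nu, z in enumerate(Z_path))
    return B @ x_prev + phi @ (psi_eps @ eps) + const + tail

xt = x_series(np.array([1.0]), np.array([0.1]),
              [np.array([0.01]), np.array([0.02])])
# 0.9*1.0 + 0.1 + 0 + (0.01 + 0.4*0.02) = 1.018
```

Truncating the sum at $\kappa$ terms is the source of the truncation error discussed above; the geometric decay in $F$ is what makes the truncation tolerable.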
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more
$L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model.
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components.
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components.
$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components.
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components.
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$).
$S$: Smolyak an-isotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$.
$T$: model evaluation points, a Niederreiter sequence in the model-variable ergodic region.
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, the decision rule components.
$G$: $\{X(x_{t-1}), Z(x_{t-1})\}$, the decision rule conditional expectation components.
$H$: $\{\gamma, G\}$, the collected decision rule information.
$\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon + N_x} \to \mathbb{R}^{N_x}$.

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments of other functions. In addition, the implementation exploits parallelism at several points; the parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp: Symbolically generates functions that return a unique $x_t$ for any value of $(x_{t-1}, \varepsilon_t)$ for the family of model equations $\{C_1, \ldots, C_M, d\}(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)$, and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs: Can use (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function; each of the function constructions can be done in parallel. Each triple returns the Boolean gate values and a unique value of $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$, and the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$. This routine provides the information for constructing $x_t(x_{t-1}, \varepsilon_t)$ using the series expression (2).

makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
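A dense one-dimensional stand-in for this collocation step (Chebyshev interpolation in place of the Smolyak basis; this is an illustration of the fit-at-collocation-points idea, not the paper's implementation):

```python
import numpy as np

# Solve for polynomial coefficients from decision-rule values at collocation
# points; Psi[i, j] = basis_j(point_i), as in the paper's Psi matrix.
pts = np.cos(np.pi * (2 * np.arange(4) + 1) / 8)   # 4 Chebyshev nodes
Psi = np.polynomial.chebyshev.chebvander(pts, 3)   # 4 basis functions
vals = pts**3                                      # toy "rule" values at nodes

coefs = np.linalg.solve(Psi, vals)                 # interpolation data

def rule(x):
    """Interpolating function built from the collocation solve."""
    return np.polynomial.chebyshev.chebval(x, coefs)
```

Because the toy target $x^3$ lies in the span of the four basis functions, the interpolant reproduces it exactly; in the algorithm the analogous solve is repeated each outer iteration with updated collocation values.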
5.4 Approximating a Known Solution: $\eta = 1$, $U(c) = \log(c)$

This subsection uses the series representation to compute the solution for a case in which the exact solution is known, and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of $k_t$ and $\theta_t$ resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11: The Ergodic Values for $k_t, \theta_t$. Left panel: simulated values of $k$ and $\theta$; right panel: the same values in the transformed coordinates.]
\[
X = \begin{bmatrix} 0.18853 \\ 1.00466 \\ 0 \end{bmatrix}, \quad
\sigma_X = \begin{bmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{bmatrix}, \quad
V = \begin{bmatrix}
-0.707107 & -0.707107 & 0 \\
-0.707107 & 0.707107 & 0 \\
0 & 0 & 1
\end{bmatrix},
\]
\[
\max U = \begin{bmatrix} 3.02524 \\ 0.199124 \\ 3 \end{bmatrix}, \quad
\min U = \begin{bmatrix} -3.16879 \\ -0.17629 \\ -3 \end{bmatrix}.
\]
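The transformation underlying these quantities (center the ergodic sample, then rotate by the principal axes $V$) can be sketched as follows; the sample moments below are illustrative stand-ins, not the paper's simulated data.

```python
import numpy as np

# PCA / parallelotope-style adaptive domain: map ergodic simulation draws
# to principal coordinates before constructing the approximation grid.
rng = np.random.default_rng(0)
sample = rng.normal([0.19, 1.0], [0.011, 0.039], size=(200, 2))  # toy (k, theta)

mean = sample.mean(axis=0)
_, _, Vt = np.linalg.svd(sample - mean, full_matrices=False)  # principal axes
transformed = (sample - mean) @ Vt.T   # uncorrelated coordinates

cov = np.cov(transformed.T)            # off-diagonal entries are ~0
```

Working in the rotated coordinates is what makes the rectangular (or parallelotope) approximation domain fit the ergodic cloud more tightly.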
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with $K = 5$. Table 10 shows the effect of increasing $K$. Generally, increasing $K$ increases the accuracy of the solution; the value of $K$ has more effect when the Smolyak grid is more densely populated with collocation points.
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2 RBC Known Solution Model Evaluation Points d=(111)
c1 k1 θ1
c30 k30 θ30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3 RBC Known Solution Values at Evaluation Points d=(111)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4 RBC Known Solution Actual Errors d=(111)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6 RBC Known Solution Model Evaluation Points d=(222)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8 RBC Known Solution Actual Errors d=(222)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9 RBC Known Solution Error Approximations d=(222)
K | d = (1,1,1) | d = (2,2,2) | d = (3,3,3)

Non-series: -6.51367  -12.2771  -16.8947
0:  -6.0793   -6.26833  -6.29843
1:  -6.00805  -6.24202  -6.49835
2:  -6.52219  -7.43876  -7.04785
3:  -6.57578  -8.03864  -8.32589
4:  -6.62831  -9.1522   -9.49314
5:  -6.77305  -10.5516  -10.4247
6:  -6.38499  -11.6399  -11.455
7:  -6.63849  -11.3627  -13.0213
8:  -6.38735  -11.7288  -13.9976
9:  -6.46759  -11.9826  -15.3372
10: -6.45966  -12.1345  -15.4157
10: -6.77776  -11.5025  -15.8211
15: -6.67778  -12.4593  -15.8899
20: -6.40802  -11.7296  -17.1652
25: -6.53265  -12.1441  -17.1518
30: -6.81949  -11.6097  -16.3748
35: -6.33508  -12.1446  -16.6963
40: -6.52442  -11.4042  -15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
55 Approximating an Unknown Solution η = 3Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximationwere (111) Table 12 shows the decision rule prescription for (ct kt θt) at each evaluationpoint Table 13 shows the log of the absolute value of the estimated error in the decision ruleat each evaluation points Note that the formula provides accurate estimates of the errorcomponent by component
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
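The component-by-component error tabulations reported in these tables can be reproduced schematically as follows — a minimal sketch, where `approx_rule` and `exact_rule` are hypothetical stand-ins for the computed and reference decision rules (the paper's actual computations use Mathematica):

```python
import numpy as np

def error_table(approx_rule, exact_rule, points):
    """Natural log of the absolute decision-rule error, component by
    component, at each (k, theta, eps) evaluation point."""
    rows = []
    for (k, theta, eps) in points:
        err = np.abs(np.asarray(approx_rule(k, theta, eps)) -
                     np.asarray(exact_rule(k, theta, eps)))
        rows.append(np.log(err))
    return np.array(rows)
```

Each row of the returned array corresponds to one evaluation point, matching the layout of the error tables.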
k         θ         ε
0.1489    0.887266  -0.044574
0.233573  1.16936   0.0595096
0.175324  0.939453  -0.00987946
0.202434  1.03737   0.0334887
0.18999   1.02063   -0.0359003
0.215667  1.12355   0.0681832
0.157417  0.893645  -0.00120583
0.241135  1.17907   0.0508359
0.202828  1.07209   -0.0185531
0.177152  0.96917   0.0161414
0.229252  1.12428   -0.0532476
0.146251  0.836349  0.0118046
0.21657   1.10837   -0.0575844
0.187072  1.01878   0.0464991
0.239172  1.17389   -0.0228899
0.155454  0.888463  0.0638464
0.172322  0.973989  -0.000554264
0.201821  1.06358   0.0291519
0.143572  0.833667  -0.0402371
0.226812  1.12076   0.0551727
0.159193  0.911511  -0.0142163
0.240045  1.20694   0.0204782
0.181795  0.977034  -0.0489108
0.210338  1.06995   0.0378255
0.227206  1.15548   -0.0315635
0.146355  0.86005   0.007252
0.198455  1.01516   0.000313098
0.170749  0.919322  0.0399939
0.157874  0.901074  -0.0293951
0.238725  1.19651   0.0746884
Table 11: RBC Approximated Solution Model Evaluation Points, d=(1,1,1)
c         k         θ
0.21611   0.208557  0.847027
0.452893  0.269857  1.22393
0.267239  0.230108  0.931775
0.35028   0.251768  1.07036
0.299961  0.240849  0.983569
0.420887  0.262256  1.18984
0.243253  0.217177  0.896521
0.459129  0.272277  1.22374
0.340827  0.250513  1.04998
0.294783  0.234714  0.986909
0.368953  0.260446  1.06547
0.221402  0.20607   0.854741
0.349843  0.25471   1.04621
0.339335  0.244203  1.0664
0.416676  0.267429  1.14247
0.27496   0.218765  0.960064
0.283425  0.231206  0.96923
0.360306  0.252522  1.09083
0.193846  0.200678  0.799735
0.420092  0.266042  1.17298
0.245256  0.218836  0.900489
0.456834  0.270845  1.21817
0.26908   0.233193  0.929114
0.373581  0.256909  1.10605
0.393659  0.262194  1.11644
0.264319  0.211099  0.942148
0.321357  0.247159  1.01754
0.281918  0.228897  0.964162
0.232855  0.216163  0.875304
0.479992  0.272575  1.26627
Table 12: RBC Approximated Solution Values at Evaluation Points, d=(1,1,1)
ln(|ê_c|)  ln(|ê_k|)  ln(|ê_θ|)
-4.38747  -5.       -5.01321
-5.68045  -5.805    -4.89809
-5.10224  -5.33003  -6.6064
-6.34158  -6.62031  -7.87535
-5.60248  -5.64831  -9.6263
-5.7969   -6.18996  -5.11512
-4.92963  -5.07008  -6.83086
-5.88332  -5.88269  -5.01359
-6.63272  -6.15415  -6.68716
-5.69579  -5.56877  -7.76351
-6.081    -5.72439  -5.16437
-4.82324  -4.80882  -7.05272
-6.86202  -5.74095  -5.25257
-6.47222  -6.25808  -9.00805
-5.96599  -6.66276  -5.44834
-8.66584  -4.99997  -4.90498
-5.4139   -5.50185  -7.33409
-6.42036  -7.03866  -7.08005
-4.13978  -4.80089  -4.78296
-6.06664  -6.39354  -5.377
-4.86241  -5.1415   -6.06258
-7.33554  -6.38356  -6.11469
-5.02079  -5.37422  -6.04755
-6.43212  -7.99418  -6.57026
-6.17638  -6.42884  -5.31385
-6.75784  -4.79589  -4.56474
-5.93264  -5.92418  -10.3455
-5.92431  -5.29301  -5.71515
-4.62067  -5.0846   -5.46376
-5.22287  -5.41884  -4.46492
Table 13: RBC Approximated Solution Error Approximations, d=(1,1,1)
k         θ         ε
0.154714  0.909409  0.0583516
0.238295  1.1837    -0.0441935
0.181916  0.956081  0.0241699
0.208439  1.05216   -0.0185573
0.195384  1.03868   0.0498062
0.220186  1.14073   -0.052739
0.163808  0.913113  0.0156245
0.246242  1.19138   -0.0356481
0.207785  1.08971   0.0327153
0.182983  0.987658  -0.00146639
0.234987  1.13638   0.0668971
0.153414  0.855121  0.000280632
0.221646  1.1239    0.0711698
0.192257  1.03778   -0.0313754
0.244262  1.1865    0.036988
0.161828  0.908228  -0.0484663
0.177563  0.994718  0.0198972
0.206952  1.08084   -0.0142845
0.150574  0.853223  0.0540789
0.232434  1.13348   -0.0399208
0.165188  0.931842  0.0284426
0.244182  1.22206   -0.000573911
0.187804  0.994441  0.0626243
0.216046  1.08454   -0.02283
0.231781  1.17103   0.0455335
0.152787  0.880818  -0.0570117
0.204792  1.02954   0.0113518
0.177553  0.935951  -0.0249663
0.164075  0.921007  0.0433971
0.243068  1.21122   -0.0591481
Table 14: RBC Approximated Solution Model Evaluation Points, d=(2,2,2)
c         k         θ
0.274683  0.220094  0.968634
0.40339   0.266729  1.12301
0.296394  0.23513   0.981672
0.335137  0.250659  1.03019
0.358182  0.247172  1.08965
0.365685  0.257823  1.07502
0.263846  0.22194   0.931716
0.417917  0.27018   1.13964
0.3831    0.253685  1.12112
0.299405  0.23603   0.986824
0.451246  0.26553   1.20725
0.23049   0.209622  0.864261
0.437392  0.260092  1.19978
0.312821  0.241638  1.00386
0.465984  0.26895   1.22073
0.236754  0.21458   0.869437
0.309625  0.235153  1.01498
0.350497  0.251486  1.06137
0.246402  0.212757  0.907811
0.376735  0.263313  1.08232
0.277956  0.225197  0.962114
0.454981  0.269165  1.20294
0.337604  0.2424    1.059
0.352851  0.255287  1.05577
0.454299  0.264058  1.21595
0.219639  0.206105  0.837304
0.337981  0.249525  1.03978
0.26425   0.227363  0.9159
0.27897   0.224894  0.965819
0.410798  0.268804  1.13077
Table 15: RBC Approximated Solution Values at Evaluation Points, d=(2,2,2)
ln(|ê_c|)  ln(|ê_k|)  ln(|ê_θ|)
-5.34255  -5.33875  -11.8565
-6.15664  -6.15255  -12.3108
-5.41646  -5.41585  -13.8565
-5.64275  -5.64369  -18.1473
-5.9222   -5.92211  -16.0029
-5.87248  -5.86293  -12.0622
-5.20976  -5.20965  -13.8815
-6.26734  -6.27376  -13.3781
-6.11431  -6.11424  -13.5244
-5.43751  -5.43768  -14.3313
-6.84735  -6.83506  -13.4062
-4.96411  -4.96311  -15.7406
-6.74538  -6.74996  -13.0128
-5.51523  -5.51584  -14.6129
-7.01914  -7.01377  -13.0087
-4.98907  -4.9888   -12.8895
-5.54918  -5.5502   -14.2886
-5.78761  -5.7866   -13.6307
-5.10822  -5.11197  -16.4254
-5.91312  -5.91964  -15.971
-5.32484  -5.32492  -12.9666
-6.81071  -6.80294  -12.6755
-5.75943  -5.75928  -13.8696
-5.76735  -5.76907  -14.3413
-6.93921  -6.94492  -12.2327
-4.87233  -4.87293  -13.0072
-5.68297  -5.68316  -20.3557
-5.16688  -5.16342  -17.0775
-5.33829  -5.33817  -12.6149
-6.21494  -6.20121  -12.1051
Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t ∑_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1}),   θ_t = θ_{t−1}^ρ e^{ε_t}

f(k_t) = k_t^α

u(c) = (c^{1−η} − 1) / (1 − η)
L = u(c_t) + E_t [ ∑_{t=0}^∞ δ^t u(c_{t+1}) + ∑_{t=0}^∞ δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1}))) + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss) ]

so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss
The first order conditions become

1/c_t^η + λ_t = 0

λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as:

11The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
if (μ_t = 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
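The two systems above encode a complementarity condition: either the constraint binds (μ_t > 0, investment pinned at υ I_ss) or it is slack (μ_t = 0). A minimal sketch of the branch-selection logic, with hypothetical solvers `solve_binding` and `solve_slack` standing in for the paper's equation-system solves and illustrative parameter values:

```python
def solve_occasionally_binding(k_lag, theta, solve_binding, solve_slack,
                               d=0.025, upsilon=0.975, I_ss=0.4):
    """Try the binding branch (I_t = upsilon*I_ss, solve for mu_t);
    if the implied multiplier is negative, fall back to the slack
    branch (mu_t = 0) and verify the constraint holds."""
    c, k, mu = solve_binding(k_lag, theta)   # imposes I = upsilon * I_ss
    if mu > 0:
        return c, k, mu                      # constraint binds
    c, k = solve_slack(k_lag, theta)         # imposes mu = 0
    assert k - (1 - d) * k_lag >= upsilon * I_ss - 1e-12
    return c, k, 0.0
```

The parameter values (d, υ, I_ss) are illustrative placeholders, not the paper's calibration.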
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
k        θ         ε
3.26643  0.944454  -0.0261085
4.12098  1.06801   0.0270199
3.67835  0.940854  -0.00839901
3.92114  0.989356  0.0137378
3.69543  1.00026   -0.0216811
3.88415  1.05817   0.0314473
3.44151  0.931012  -0.00397164
4.26001  1.06084   0.0225926
3.78979  1.02922   -0.0128264
3.60107  0.971308  0.0048831
4.20171  1.02562   -0.0305358
3.41024  0.891085  0.00266941
3.96698  1.03809   -0.0327495
3.63406  1.00526   0.0203789
4.2347   1.05957   -0.0150401
3.41619  0.929745  0.0292336
3.4676   0.988852  -0.000618532
3.80052  1.02168   0.0115241
3.35789  0.89452   -0.0238948
4.15837  1.02748   0.0248063
3.41102  0.947657  -0.0106127
4.12137  1.0963    0.00709678
3.67874  0.96914   -0.0283222
3.97561  1.00824   0.0159515
4.02702  1.06734   -0.0194674
3.31666  0.918703  0.0033661
3.9173   0.973011  -0.000175796
3.65197  0.928429  0.0170584
3.42219  0.938626  -0.0183606
4.13254  1.08727   0.0347678
Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
c         I         k
1.03662   0.379903  3.34624
1.36955   0.431143  4.13607
1.13338   0.361691  3.69353
1.25263   0.381718  3.92139
1.18795   0.376988  3.71505
1.32486   0.433045  3.92962
1.08991   0.364941  3.48842
1.37645   0.426962  4.25615
1.24202   0.39202   3.80933
1.17598   0.373255  3.63206
1.26718   0.39439   4.1772
1.03482   0.3675    3.46926
1.25075   0.392683  3.96566
1.22878   0.398246  3.68118
1.33116   0.409411  4.21636
1.11619   0.384274  3.48551
1.15026   0.38833   3.52625
1.26672   0.394799  3.82276
0.997682  0.362814  3.41768
1.33254   0.40944   4.15357
1.095     0.369696  3.46366
1.36855   0.443297  4.14433
1.14269   0.364517  3.69245
1.28393   0.389658  3.97475
1.30274   0.41229   4.03407
1.08632   0.391331  3.40604
1.21329   0.372403  3.91112
1.13942   0.372397  3.68273
1.07662   0.365006  3.47022
1.38964   0.453375  4.16566
Table 18: Occasionally Binding Constraints Model: Values at Evaluation Points, d=(1,1,1)
ln(|ê_c|)  ln(|ê_k|)  ln(|ê_θ|)
-4.31395  -4.55496  -4.55496
-4.49983  -4.13333  -4.13333
-5.5352   -5.24857  -5.24857
-5.8198   -5.84969  -5.84969
-4.60287  -4.60328  -4.60328
-4.41983  -4.10467  -4.10467
-4.62802  -4.55181  -4.55181
-5.26804  -4.74828  -4.74828
-8.43803  -8.15898  -8.15898
-5.35434  -5.28501  -5.28501
-3.85154  -3.66516  -3.66516
-4.38957  -4.32099  -4.32099
-4.47738  -4.23689  -4.23689
-5.01928  -5.04091  -5.04091
-5.51293  -4.94452  -4.94452
-3.83164  -3.64992  -3.64992
-4.53368  -4.51412  -4.51412
-4.74133  -4.66482  -4.66482
-7.3604   -5.00693  -5.00693
-7.71361  -5.96299  -5.96299
-4.71182  -4.60582  -4.60582
-4.81987  -4.52925  -4.52925
-6.36037  -5.73385  -5.73385
-6.04304  -5.80405  -5.80405
-6.58347  -5.58301  -5.58301
-3.45988  -3.27868  -3.27868
-4.15596  -4.15415  -4.15415
-4.2487   -4.33841  -4.33841
-5.03715  -4.72138  -4.72138
-4.38481  -3.89053  -3.89053
Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
k        θ         ε
3.11653  0.930006  -0.0265567
4.04206  1.06199   0.0292736
3.58012  0.922498  -0.00794662
3.83937  0.975085  0.015316
3.58288  0.98926   -0.0219042
3.77881  1.05289   0.0339261
3.31687  0.9134    -0.0032941
4.20017  1.05275   0.0246211
3.68084  1.02108   -0.0125991
3.48492  0.957443  0.00601095
4.14443  1.01357   -0.0312093
3.29279  0.868696  0.00368469
3.87738  1.02958   -0.0335355
3.51259  0.995409  0.0222948
4.17209  1.05153   -0.0149254
3.28879  0.912185  0.0315998
3.33019  0.978321  -0.00562036
3.69499  1.0125    0.0129897
3.23305  0.873004  -0.0242305
4.09524  1.01604   0.0269473
3.27803  0.932401  -0.0102729
4.03468  1.09384   0.00833722
3.57274  0.954351  -0.028883
3.89532  0.995891  0.0176423
3.93671  1.06203   -0.0195779
3.18007  0.900584  0.0362524
3.83958  0.95671   -0.000967836
3.55394  0.908726  0.0188054
3.29307  0.922137  -0.0184148
4.04972  1.08358   0.0374155
Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
c         I         k
0.996234  0.372233  3.17711
1.35273   0.449889  4.08774
1.08239   0.37197   3.59408
1.22794   0.381138  3.83657
1.15723   0.375819  3.60041
1.29529   0.457947  3.85887
1.03658   0.371604  3.35678
1.37221   0.431977  4.21213
1.21472   0.395466  3.70822
1.14148   0.37158   3.50801
1.24362   0.394261  4.12425
0.972304  0.376186  3.33969
1.22692   0.392432  3.88208
1.19298   0.407354  3.56868
1.31345   0.414672  4.16956
1.06882   0.383074  3.34298
1.1142    0.387566  3.38473
1.23929   0.401702  3.72719
0.926116  0.382809  3.29255
1.32171   0.410856  4.09657
1.04696   0.373008  3.32323
1.35156   0.46278   4.09399
1.09896   0.370843  3.58631
1.26258   0.39151   3.8973
1.28036   0.420156  3.9632
1.04154   0.382172  3.24423
1.18102   0.373737  3.82935
1.09669   0.372006  3.57055
1.02297   0.373053  3.33681
1.38105   0.472609  4.11735
Table 21: Occasionally Binding Constraints Model: Values at Evaluation Points, d=(2,2,2)
ln(|ê_c|)  ln(|ê_k|)  ln(|ê_θ|)
-4.3713   -4.37518  -4.37518
-4.80064  -4.79951  -4.79951
-4.51635  -4.51745  -4.51745
-4.97684  -4.97675  -4.97675
-4.76238  -4.76248  -4.76248
-5.20213  -5.19772  -5.19772
-4.42504  -4.42526  -4.42526
-4.74432  -4.74585  -4.74585
-5.81449  -5.81175  -5.81175
-5.44103  -5.4409   -5.4409
-3.41118  -3.41245  -3.41245
-2.35772  -2.35772  -2.35772
-4.10566  -4.10512  -4.10512
-7.55599  -7.55774  -7.55774
-4.76462  -4.76603  -4.76603
-3.88786  -3.88797  -3.88797
-4.30233  -4.30326  -4.30326
-5.00949  -5.00948  -5.00948
-2.04364  -2.04336  -2.04336
-5.59945  -5.60358  -5.60358
-5.12234  -5.12392  -5.12392
-5.73503  -5.7328   -5.7328
-4.80694  -4.80739  -4.80739
-7.06764  -7.06949  -7.06949
-5.20027  -5.196    -5.196
-3.4838   -3.48367  -3.48367
-3.61222  -3.61225  -3.61225
-4.37205  -4.3713   -4.3713
-4.80664  -4.80713  -4.80713
-4.63986  -4.63587  -4.63587
Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]
then with

E_t x*_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z*_t(x_{t−1}, ε):

x*_t(x_{t−1}, ε_t) ∈ R^L,   ‖x*_t(x_{t−1}, ε_t)‖_2 ≤ X̄   ∀t > 0

z*_t(x_{t−1}, ε_t) ≡ H_{−1} x*_{t−1}(x_{t−1}, ε_t) + H_0 x*_t(x_{t−1}, ε_t) + H_1 x*_{t+1}(x_{t−1}, ε_t)
Consequently the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + ∑_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t−1}, ε)

and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + ∑_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φψ_c

with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t)
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t−1}, ε):12

x^p_t(x_{t−1}, ε_t) ∈ R^L,   ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X̄   ∀t > 0

z^p_t(x_{t−1}, ε_t) ≡ H_{−1} x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_1 x^p_{t+1}(x_{t−1}, ε_t)
The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + ∑_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + ∑_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φψ_c

with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the two representations gives

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = ∑_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε))

Defining Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ (z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε)),

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = ∑_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ ∑_{ν=0}^∞ ‖F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x*_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz*_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
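The bound above can be evaluated numerically by scanning the model residual over a grid of initial conditions and shocks. A minimal sketch, where `M`, `g_p`, `G_p`, `phi`, and `F` are hypothetical stand-ins for the model equations, the proposed decision rule, its conditional expectation, and the linear reference model matrices:

```python
import numpy as np

def solution_error_bound(M, g_p, G_p, phi, F, grid):
    """max over the grid of || (I-F)^{-1} phi * residual ||_2, where the
    residual is M evaluated at the proposed rule and its expectation."""
    scale = np.linalg.solve(np.eye(F.shape[0]) - F, phi)  # (I-F)^{-1} phi
    worst = 0.0
    for (x_, eps) in grid:
        x_t = g_p(x_, eps)
        resid = M(x_, x_t, G_p(x_t), eps)
        worst = max(worst, np.linalg.norm(scale @ resid))
    return worst
```

A coarse grid only gives an estimate of the max; the paper's formula is exact over the bounded region.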
z*_t = H_− x*_{t−1} + H_0 x*_t + H_+ x*_{t+1}

z^p_t = H_− x^p_{t−1} + H_0 x^p_t + H_+ x^p_{t+1}

0 = M(x_{t−1}, x*_t, x*_{t+1}, ε_t)

e^p_t(x_{t−1}, ε) = M(x^p_{t−1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t−1}, ε_t} ‖φ Δz_t‖_2^2 = max_{x_{t−1}, ε_t} ‖φ(H_0 (x^p_t − x*_t) + H_+ (x^p_{t+1} − x*_{t+1}))‖_2^2
with

M(x_{t−1}, x*_t, x*_{t+1}, ε_t) = 0

Δe^p_t(x_{t−1}, ε) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1})

When ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular, we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}))
Collecting results,

‖φ Δz_t‖_2^2 = ‖φ(H_0 (∂M/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})) + H_+ (x^p_{t+1} − x*_{t+1}))‖_2^2

= ‖φ(H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) − (H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_+)(x*_{t+1} − x^p_{t+1}))‖_2^2
Approximating the derivatives using the H matrices,

∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,   ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+,

so that the coefficient on (x*_{t+1} − x^p_{t+1}) vanishes, produces

max_{x_{t−1}, ε_t} ‖φ Δe^p_t(x_{t−1}, ε)‖_2^2
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t−1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t−1}, ε_t, x_t):

E_t[x_{t+1}] = ∑_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = ∑_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)
If we know the current state, we can use the known conditional expectation function for the state:

[ E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N) ]

For the next state, we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1), …, E_t(x_{t+2}(x_t) | s_t = N) ] = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1), …, E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]
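The stacked-expectations identity above amounts to a single matrix multiply. A minimal sketch, assuming a constant transition matrix P and a stacked vector of per-regime conditional expectations:

```python
import numpy as np

def stack_expectations(P, exp_by_next_regime):
    """Apply (P kron I) to the stacked per-regime conditional
    expectations, producing expectations conditioned on the current
    regime; exp_by_next_regime stacks the N regime-specific blocks."""
    n_x = exp_by_next_regime.shape[0] // P.shape[0]
    return np.kron(P, np.eye(n_x)) @ exp_by_next_regime
```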
Consider two states s_t ∈ {0, 1} where the depreciation rates are different, d_0 > d_1, and

Prob(s_t = j | s_{t−1} = i) = p_ij
if (s_t = 0 and μ_t > 0 and (k_t − (1 − d_0) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + δ(1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0) k_{t−1})
μ_t (k_t − (1 − d_0) k_{t−1} − υ I_ss)
if (s_t = 0 and μ_t = 0 and (k_t − (1 − d_0) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + δ(1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0) k_{t−1})
μ_t (k_t − (1 − d_0) k_{t−1} − υ I_ss)
if (s_t = 1 and μ_t > 0 and (k_t − (1 − d_1) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + δ(1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1) k_{t−1})
μ_t (k_t − (1 − d_1) k_{t−1} − υ I_ss)
if (s_t = 1 and μ_t = 0 and (k_t − (1 − d_1) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + δ(1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1) k_{t−1})
μ_t (k_t − (1 − d_1) k_{t−1} − υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model

x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components

z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components

X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components

Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T: model evaluation points; Niederreiter sequence in the model variable ergodic region

γ = {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}: collects the decision rule components

G = {X(x_{t−1}), Z(x_{t−1})}: collects the decision rule conditional expectations components

H = {γ, G}: collects decision rule information

{C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x+N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by default ("normConvTol" → 10^{−10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
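The NestWhile call in Algorithm 1 is an ordinary fixed-point loop. A minimal Python sketch (the prototype itself is Mathematica), with `do_interp` a hypothetical stand-in for Algorithm 2 and the tolerance matching the default "normConvTol" of 10^{−10}:

```python
import numpy as np

def nest_interp(do_interp, h0, test_points, tol=1e-10, max_iter=100):
    """Iterate the interpolation step until the decision rule stops
    changing at the test points (sup-norm of the difference < tol)."""
    h = h0
    vals = np.array([h(p) for p in test_points])
    for _ in range(max_iter):
        h_next = do_interp(h)
        vals_next = np.array([h_next(p) for p in test_points])
        if np.max(np.abs(vals_next - vals)) < tol:
            return h_next
        h, vals = h_next, vals_next
    raise RuntimeError("no convergence within max_iter")
```

Convergence is judged only at the test points T, mirroring the paper's notConvergedAt check.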
input: (L, H_i, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generate a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t), for one of the model equations; subsequently use these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t), for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically construct functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1, i < κ − 1, i++ do
4:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5: end
6: {{X_{t+1}, Z_{t+1}}, …, {X_{t+κ−1}, Z_{t+κ−1}}} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I − F)^{−1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x, ε) = [ Bx + (I − F)^{−1}φψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
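In concrete terms, the genBothX0Z0Funcs construction can be sketched in Python. Every numeric matrix below (B, F, φ, ψ_c, ψ_ε) is an illustrative placeholder, not a value from the paper:

```python
import numpy as np

# Illustrative 2-variable linear reference model L = {H, psi_eps, psi_c, B, phi, F};
# all matrix values here are placeholders, not taken from the paper.
B = np.array([[0.9, 0.1],
              [0.0, 0.5]])
F = np.array([[0.2, 0.0],
              [0.0, 0.1]])
phi = np.eye(2)
psi_c = np.array([0.05, 0.02])
psi_eps = np.array([[1.0],
                    [0.3]])

# (I - F)^{-1} phi psi_c, the constant term shared by both rules
const = np.linalg.solve(np.eye(2) - F, phi @ psi_c)

def gamma(x, eps):
    """Initial proposed decision rule from the linear reference model."""
    return B @ x + const + (psi_eps @ eps)

def G(x):
    """Its conditional expectation: the mean-zero shock term drops out."""
    return B @ x + const
```

With a zero shock the two functions coincide, which is what makes the pair {γ, G} a mutually consistent starting point H_0 for the iterations.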
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3: xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc
input: (L, x_{t−1}, ε_t, z_t, fCon)
1: Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F · fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1: Symbolically create a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_{t+1} = Bx_t + (I − F)^{−1}φψ_c + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sum the F contribution for the series:
2: S = Σ_{ν=1}^{κ−1} F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
input: ({C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}]) | (b, c)}, S)
1: Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}]) | (b, c)}, S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}]) | (b, c)}, S)
1: Solve the model equations at the collocation points, producing the interpolation data.
output: interpData

Algorithm 12: genInterpData
input: (C, (x_{t−1}, ε_t))
1: Evaluate the model equations, obeying the pre- and post-conditions.
output: x_t, or "failed"

Algorithm 13: evaluateTriple
input: (V, S)
1: Compute a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations V. Each row of V corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1: Given the approximation levels, the PCA information, and the probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given the approximation levels, the variable ranges, and the probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 16: smolyakInterpolationPrep
Algorithm 17: genSolutionErrXtEps

Algorithm 18: genSolutionErrXtEpsZero

Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation for Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
  - Algorithm Overview
    - Single Equation System Case
    - Multiple Equation System Case
  - Approximating a Known Solution η = 1, U(c) = Log(c)
  - Approximating an Unknown Solution η = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
[Figure 8 here: four regression scatter panels of (α, β) for the c_t error approximation, one panel per series length: "c error regression" for K = 0, 1, 10, and 30.]

Figure 8: c_t (α, β)
[Figure 9 here: four regression scatter panels of (α, β) for the k_t error approximation, one panel per series length: "k error regression" for K = 0, 1, 10, and 30.]

Figure 9: k_t (α, β)
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I require that, for any given (x_{t−1}, ε_t), this collection of equation systems produces a unique solution for x_t. This formulation allows the technique to be applied to models with occasionally binding constraints and models with regime switching. I consider only time invariant model solutions x_t = x(x_{t−1}, ε_t). Given a decision rule x(x_{t−1}, ε_t), denote its conditional expectation X(x_{t−1}) ≡ E[x(x_{t−1}, ε_t)]. Using the convention described in Section 3.2, I include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

{C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}]) | (b, c)}

where each

C_i ≡ (b_i(x_{t−1}, ε_t), M_i(x_{t−1}, ε, x_t, E[x_{t+1}]) = 0, c_i(x_{t−1}, ε, x_t, E[x_{t+1}]))
represents a set of model equations M_i with Boolean-valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition and the c_i a Boolean function post-condition for a given equation system. If some b_i is false, there is no need to try to solve system i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The function d chooses among the candidate x_t(x_{t−1}, ε_t) values: it uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus we have a family of equation systems and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t−1}, ε_t).
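The triples C_i = (b_i, M_i, c_i) and the selector d can be sketched as follows; the piecewise map and all names are illustrative, not the paper's model:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class EquationSystem:
    """One triple C_i = (b_i, M_i, c_i) for the gated model family."""
    pre: Callable     # b_i(x_prev, eps): worth attempting this system?
    solve: Callable   # returns a candidate x_t solving M_i(...) = 0
    post: Callable    # c_i(x_prev, eps, x_t): is the candidate admissible?

def d_select(systems: List[EquationSystem], x_prev, eps):
    """The selector d: return the unique admissible x_t among the candidates."""
    candidates = []
    for system in systems:
        if not system.pre(x_prev, eps):
            continue                      # precondition false: skip entirely
        x_t = system.solve(x_prev, eps)
        if system.post(x_prev, eps, x_t):
            candidates.append(x_t)
    assert len(candidates) == 1, "model must yield one and only one x_t"
    return candidates[0]

# Illustrative piecewise map x_t = max(x_prev + eps, 0), split into two gated systems
unconstrained = EquationSystem(pre=lambda x, e: True,
                               solve=lambda x, e: x + e,
                               post=lambda x, e, xt: xt >= 0)
constrained = EquationSystem(pre=lambda x, e: x + e < 0,
                             solve=lambda x, e: 0.0,
                             post=lambda x, e, xt: True)
print(d_select([unconstrained, constrained], 1.0, -2.0))   # -> 0.0
```

The skip-on-failed-precondition structure is also what makes the per-system work embarrassingly parallel: each candidate solve is independent of the others.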
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, and models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a regime-switching specification.
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on the series representation (2), linking the conditional expectations of future values in the proposed solution with the time t value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories of the usual variables x_t(x_{t−1}, ε_t) as well as the new z_t(x_{t−1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving the algorithm's reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t−1}, ε) and obtain conditional expectation functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in (Judd et al. 2017b), so that the time t computations are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms yields more accuracy when the decision rule is accurate but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case
For now, consider the case of a single model equation system with no Boolean gates; a description of the actual multi-system implementation follows. We seek

M(x_{t−1}, x(x_{t−1}, ε_t), X(x(x_{t−1}, ε_t)), ε_t) = 0  ∀(x_{t−1}, ε_t)
Given a proposed model solution

x_t = x^p(x_{t−1}, ε_t)

compute

X^p(x_{t−1}) ≡ E[x^p(x_{t−1}, ε_t)]
We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution. We can define the functions z^p, Z^p by

z^p(x_{t−1}, ε) ≡ H [ x_{t−1} ; x^p(x_{t−1}, ε) ; X^p(x^p(x_{t−1}, ε)) ] + ψ_ε ε_t + ψ_c

Z^p(x_{t−1}) ≡ E[z^p(x_{t−1}, ε)]
• Algorithm loop begins here. Define conditional expectation paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀k ≥ 0

• Using the z^p(x_{t−1}, ε) series, we get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectation paths:

X_t = Bx_{t−1} + φψ_ε ε + (I − F)^{−1}φψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})

E[X_{t+1}] = BX_t + (I − F)^{−1}φψ_c + Σ_{ν=1}^∞ F^{ν−1} φ Z(x_{t+ν})  ∀t, k ≥ 0

• Use the model equations M(x_{t−1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t−1}, ε_t) = X_t(x_{t−1}, ε_t) to determine x^{p′}(x_{t−1}, ε), z^{p′}(x_{t−1}, ε).

• Set x^p(x_{t−1}, ε) = x^{p′}(x_{t−1}, ε) and z^p(x_{t−1}, ε) = z^{p′}(x_{t−1}, ε). Repeat the loop.
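The loop just described is a nested fixed-point iteration: update the proposed rule, evaluate it at a set of test points, and stop when successive proposals agree. A minimal Python sketch of that outer structure, with a scalar contraction standing in for the model-solving step (all specifics are illustrative):

```python
import numpy as np

def nest_while(step, h0, tol=1e-10, max_iter=500):
    """Outer loop: update the proposed rule until its values at the test
    points stop changing (the role of NestWhile / notConvergedAt)."""
    h = h0
    for _ in range(max_iter):
        h_next = step(h)
        if np.max(np.abs(h_next - h)) < tol:
            return h_next
        h = h_next
    raise RuntimeError("proposed solutions did not converge")

# Illustrative stand-in for the update step: the value of a proposed rule at
# three test points, each revised by the contraction x -> cos(x) (not the model).
test_points = np.array([0.0, 0.5, 1.0])
fixed = nest_while(np.cos, test_points)
```

Because the stand-in update is a contraction, every test point converges to the same fixed point; in the actual algorithm the z-based constraints play the analogous role of keeping the iterates from drifting off.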
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation, and Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}, collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})}, collects the decision rule conditional expectation components
H: {γ, G}, collects the decision rule information
{C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}]) | (b, c)}: the equation systems, ℝ^{3N_x+N_ε} → ℝ^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points; the parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot genLilXkZkFunc
fSumC genXtOfXtm1 genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc
Figure 10 Function Call Tree
nestInterp – Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp – Symbolically generates functions that return a unique x_t for any value of (x_{t−1}, ε_t) for the family of model equations {C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}]) | (b, c)}, and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs – The function can use (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function; each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t); the selection function chooses one solution from the set of model equation systems. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc – Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t−1}, ε_t) using the series expression (2).
makeInterpFuncs – Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This step can be done in parallel.
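Because each triple's construction is independent of the others, the Map over the model systems parallelizes naturally. A minimal Python sketch using a thread pool, with closures standing in for the symbolic per-system work (every model system here is an illustrative placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

def gen_find_root_worker(model_eq):
    """Stand-in for the per-system symbolic construction (illustrative)."""
    return lambda x_prev, eps: model_eq(x_prev, eps)

# Hypothetical gated systems; each closure stands in for one C_i
model_eqs = [lambda x, e, i=i: i * x + e for i in range(4)]

with ThreadPoolExecutor() as pool:          # the Map over {C_1, ..., C_M}
    solvers = list(pool.map(gen_find_root_worker, model_eqs))

print([f(2.0, 1.0) for f in solvers])       # -> [1.0, 3.0, 5.0, 7.0]
```

`Executor.map` preserves the order of its inputs, so the i-th returned solver always corresponds to the i-th model system, mirroring the positional bookkeeping of theFuncOptionsTriples.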
5.4 Approximating a Known Solution: η = 1, U(c) = Log(c)

This subsection uses the series representation to compute the solution for a case in which the exact solution is known, and compares the various approximations to the actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11 here: left panel, the 200 simulated ergodic values of (k_t, θ_t); right panel, the same values after the parallelotope transformation.]

Figure 11: The Ergodic Values for k_t, θ_t
X = (0.18853, 1.00466, 0)

σ_X = (0.0112443, 0.0387807, 0.01)

V =
[ −0.707107  −0.707107  0 ]
[ −0.707107   0.707107  0 ]
[  0          0         1 ]

maxU = (3.02524, 0.199124, 3)

minU = (−3.16879, −0.17629, −3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing K. Generally, increasing K increases the accuracy of the solution; the value of K has more effect when the Smolyak grid is more densely populated with collocation points.
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2: RBC Known Solution, Model Evaluation Points, d = (1,1,1)
c1 k1 θ1
c30 k30 θ30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3: RBC Known Solution, Values at Evaluation Points, d = (1,1,1)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4: RBC Known Solution, Actual Errors, d = (1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5: RBC Known Solution, Error Approximations, d = (1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6: RBC Known Solution, Model Evaluation Points, d = (2,2,2)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7: RBC Known Solution, Values at Evaluation Points, d = (2,2,2)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution, Actual Errors, d = (2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution, Error Approximations, d = (2,2,2)
K          d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries   −6.51367     −12.2771      −16.8947
0           −6.0793       −6.26833      −6.29843
1           −6.00805      −6.24202      −6.49835
2           −6.52219      −7.43876      −7.04785
3           −6.57578      −8.03864      −8.32589
4           −6.62831      −9.1522       −9.49314
5           −6.77305     −10.5516      −10.4247
6           −6.38499     −11.6399      −11.455
7           −6.63849     −11.3627      −13.0213
8           −6.38735     −11.7288      −13.9976
9           −6.46759     −11.9826      −15.3372
10          −6.45966     −12.1345      −15.4157
10          −6.77776     −11.5025      −15.8211
15          −6.67778     −12.4593      −15.8899
20          −6.40802     −11.7296      −17.1652
25          −6.53265     −12.1441      −17.1518
30          −6.81949     −11.6097      −16.3748
35          −6.33508     −12.1446      −16.6963
40          −6.52442     −11.4042      −15.6302
Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11: RBC Approximated Solution, Model Evaluation Points, d = (1,1,1)
c1 k1 θ1
c30 k30 θ30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12: RBC Approximated Solution, Values at Evaluation Points, d = (1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13: RBC Approximated Solution, Error Approximations, d = (1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14: RBC Approximated Solution, Model Evaluation Points, d = (2,2,2)
c1 k1 θ1
c30 k30 θ30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15: RBC Approximated Solution, Values at Evaluation Points, d = (2,2,2)
(Columns $\ln(|\hat{e}_{c_i}|)$, $\ln(|\hat{e}_{k_i}|)$, $\ln(|\hat{e}_{\theta_i}|)$ for i = 1, …, 30; numeric entries not legibly recoverable from the source.)

Table 16: RBC Approximated Solution Error Approximations, d = (2, 2, 2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding the constraint
\[ I_t \ge \upsilon I_{ss} \]
to the simple RBC model:
\[ \max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1}) \]
\[ c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t} \]
\[ f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta} \]
\[ L = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^t \lambda_t \big( c_t + k_t - ((1-d)k_{t-1} + \theta_t f(k_{t-1})) \big) + \delta^t \mu_t \big( (k_t - (1-d)k_{t-1}) - \upsilon I_{ss} \big) \]
so that we can impose
\[ I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}. \]
The first order conditions become
\[ \frac{1}{c_t^{\eta}} + \lambda_t \]
\[ \lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} \]
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in (Holden 2015) and (Guerrieri and Iacoviello 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if $\big( \mu > 0$ and $(k_t - (1-d)k_{t-1} - \upsilon I_{ss}) = 0 \big)$:
\[ \lambda_t - \frac{1}{c_t} \]
\[ c_t + I_t - \theta_t k_{t-1}^{\alpha} \]
\[ N_t - \lambda_t \theta_t \]
\[ \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \]
\[ \lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1} \big) \]
\[ I_t - (k_t - (1-d)k_{t-1}) \]
\[ \mu_t (k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \]
if $\big( \mu = 0$ and $(k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \ge 0 \big)$:
\[ \lambda_t - \frac{1}{c_t} \]
\[ c_t + I_t - \theta_t k_{t-1}^{\alpha} \]
\[ N_t - \lambda_t \theta_t \]
\[ \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \]
\[ \lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1} \big) \]
\[ I_t - (k_t - (1-d)k_{t-1}) \]
\[ \mu_t (k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \]
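The branch logic above can be sketched numerically. This is a minimal illustration, not the paper's algorithm: the unconstrained investment rule, the positive multiplier value, and the parameter settings are placeholders chosen only to show how the gate conditions select between the slack (μ = 0) and binding (μ > 0) systems while preserving complementary slackness.

```python
# Sketch: guess-and-verify branch selection for the occasionally binding
# investment constraint I_t >= upsilon * I_ss.  All numbers and the
# "unconstrained" rule are illustrative stand-ins, not the paper's model.

def solve_period(k_prev, theta, d=0.025, alpha=0.36, upsilon=0.975, I_ss=0.7):
    """Return (c, I, k, mu) satisfying the budget and complementary slackness."""
    y = theta * k_prev ** alpha          # output theta_t * k_{t-1}^alpha

    # placeholder decision rule: invest a fixed share of output
    I = 0.25 * y
    if I >= upsilon * I_ss:              # post-condition of the mu = 0 branch
        mu = 0.0
    else:                                # constraint binds: mu > 0 branch
        I = upsilon * I_ss
        mu = 0.01                        # some positive multiplier (placeholder)
    k = I + (1 - d) * k_prev             # I_t = k_t - (1-d) k_{t-1}
    c = y - I                            # budget: c_t + I_t = theta_t k_{t-1}^alpha
    # complementary slackness holds in either branch
    assert abs(mu * (I - upsilon * I_ss)) < 1e-12
    return c, I, k, mu
```

For a low productivity draw the unconstrained rule violates the floor, so the binding system is selected; for a high draw the slack system's post-condition holds.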
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
(Columns k_i, θ_i, ε_i for i = 1, …, 30; numeric entries not legibly recoverable from the source.)

Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1, 1, 1)
(Columns c_i, I_i, k_i for i = 1, …, 30; numeric entries not legibly recoverable from the source.)

Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d = (1, 1, 1)
(Columns $\ln(|\hat{e}_{c_i}|)$, $\ln(|\hat{e}_{k_i}|)$, $\ln(|\hat{e}_{\theta_i}|)$ for i = 1, …, 30; numeric entries not legibly recoverable from the source.)

Table 19: Occasionally Binding Constraints Error Approximations, d = (1, 1, 1)
(Columns k_i, θ_i, ε_i for i = 1, …, 30; numeric entries not legibly recoverable from the source.)

Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2, 2, 2)
(30 rows of numeric values; entries not legibly recoverable from the source.)

Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d = (2, 2, 2)
(Columns $\ln(|\hat{e}_{c_i}|)$, $\ln(|\hat{e}_{k_i}|)$, $\ln(|\hat{e}_{\theta_i}|)$ for i = 1, …, 30; numeric entries not legibly recoverable from the source.)

Table 22: Occasionally Binding Constraints Error Approximations, d = (2, 2, 2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelskiy and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbc/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^{\ast}_t = g^{\ast}(x_{t-1}, \varepsilon_t)$, define the conditional expectations function
\[ G^{\ast}(x) \equiv E\left[ g^{\ast}(x, \varepsilon) \right] \]
then with
\[ E_t x^{\ast}_{t+1} = G^{\ast}(g^{\ast}(x_{t-1}, \varepsilon_t)) \]
we have
\[ M(x_{t-1}, x^{\ast}_t, E_t x^{\ast}_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t) \]
In words, the exact solution exactly satisfies the model equations. Using $G^{\ast}$ and $L$, construct the family of trajectories and corresponding $z^{\ast}_t(x_{t-1}, \varepsilon)$:
\[ x^{\ast}_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \qquad \left\| x^{\ast}_t(x_{t-1}, \varepsilon_t) \right\|_2 \le X \quad \forall t > 0 \]
\[ z^{\ast}_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^{\ast}_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^{\ast}_t(x_{t-1}, \varepsilon_t) + H_1 x^{\ast}_{t+1}(x_{t-1}, \varepsilon_t) \]
Consequently, the exact solution has a representation given by
\[ x^{\ast}_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_{\varepsilon} \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^{\ast}_{t+\nu}(x_{t-1}, \varepsilon) \]
and
\[ E\left[ x^{\ast}_{t+1}(x_{t-1}, \varepsilon) \right] = B x^{\ast}_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[ z^{\ast}_{t+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c \]
with
\[ M(x_{t-1}, x^{\ast}_t, E_t x^{\ast}_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t) \]
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv E\left[ g^p(x, \varepsilon) \right]$ so that
\[ E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)) \]
\[ e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t) \]
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $L$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²
\[ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \qquad \left\| x^p_t(x_{t-1}, \varepsilon_t) \right\|_2 \le X \quad \forall t > 0 \]
\[ z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t) \]
The proposed solution has a representation given by
\[ x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_{\varepsilon} \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^p_{t+\nu}(x_{t-1}, \varepsilon) \]
and
\[ E\left[ x^p_{t+1}(x_{t-1}, \varepsilon) \right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c \]
with
\[ e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t) \]
Subtracting the two representations gives
\[ x^{\ast}_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^{\ast}_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right) \]
Defining
\[ \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^{\ast}_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \]
we have
\[ x^{\ast}_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \]
\[ \left\| x^{\ast}_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_{\infty} \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \left\| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_{\infty} \]
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x^{\ast}_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
\[ \left\| \Delta z_t \right\| \le \max_{x_{-}, \varepsilon} \left\| \phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon) \right\|_2 \]
\[ \left\| x^{\ast}_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_{\infty} \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon) \right\|_2 \]
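The second inequality can be evaluated numerically once a proposed rule is in hand. Below is a one-dimensional sketch with stand-in values for $F$, $\phi$, and the model residual; in the scalar case $(I - F)^{-1}$ collapses to $1/(1 - F)$, and the residual function plays the role of $M(x, g^p(x,\varepsilon), G^p(g^p(x,\varepsilon)), \varepsilon)$.

```python
# Sketch: evaluating the decision-rule error bound
#   ||x*_t - x^p_t||_inf <= max_{x,eps} || (I-F)^{-1} phi M(...) ||
# in one dimension.  F, phi, the residual, and the grid are illustrative.

def error_bound(F, phi, residual, grid):
    """Bound from the worst model-equation residual over (x_{t-1}, eps) points."""
    assert abs(F) < 1.0, "series requires a convergent (I-F)^{-1}"
    worst = max(abs(residual(x, e)) for x, e in grid)
    return abs(phi / (1.0 - F)) * worst

grid = [(x / 10.0, e / 10.0) for x in range(-10, 11) for e in range(-3, 4)]
bound = error_bound(F=0.5, phi=1.0,
                    residual=lambda x, e: 1e-4 * (x * x + e),  # stand-in residual
                    grid=grid)
# worst residual 1.3e-4 at (x, e) = (+-1, 0.3); bound = 2 * 1.3e-4 = 2.6e-4
```

In the matrix case the same maximization runs over a grid or a Neidereiter-style sequence of evaluation points, with the scalar division replaced by a linear solve.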
\[ z^{\ast}_t = H_{-} x^{\ast}_{t-1} + H_0 x^{\ast}_t + H_{+} x^{\ast}_{t+1} \]
\[ z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1} \]
\[ 0 = M(x_{t-1}, x^{\ast}_t, x^{\ast}_{t+1}, \varepsilon_t) \]
\[ e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t) \]
\[ \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^{\ast}_t - x^p_t) + H_{+} (x^{\ast}_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2 \]
with
\[ M(x_{t-1}, x^{\ast}_t, x^{\ast}_{t+1}, \varepsilon_t) = 0 \]
\[ \Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^{\ast}_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^{\ast}_{t+1} - x^p_{t+1}) \]
When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write
\[ (x^{\ast}_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^{\ast}_{t+1} - x^p_{t+1}) \right) \]
Collecting results,
\[ \left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^{\ast}_{t+1} - x^p_{t+1}) \right) + H_{+} (x^{\ast}_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2 \]
\[ = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_{+} \right) (x^{\ast}_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2 \]
Approximating the derivatives using the $H$ matrices,
\[ \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+} \]
produces
\[ \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2 \right)^2 \]
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial x(x_{t−1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t−1}, ε_t, x_t).
\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \mid s_{t+1} = j) \]
\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j) \]
If we know the current state, we can use the known conditional expectation function for the state:
\[ \begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix} \]
For the next state, we can compute expectations if the errors are independent:
\[ \begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix} \]
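The $(P \otimes I)$ stacking can be checked with a small numeric example. The transition matrix, the number of regimes and variables, and the expectation values below are all illustrative:

```python
# Sketch: stacking per-regime conditional expectations with (P kron I).
# Two regimes, two model variables; all numbers are illustrative.

def kron(A, B):
    """Kronecker product of two matrices given as lists of lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

P = [[0.9, 0.1],          # regime transition probabilities p_ij
     [0.2, 0.8]]
I2 = [[1.0, 0.0],
      [0.0, 1.0]]

# E_t(x_{t+2} | s_{t+1} = j), stacked by regime: 2 regimes x 2 variables
stacked = [1.0, 2.0,      # regime 1 block
           3.0, 4.0]      # regime 2 block
PI = kron(P, I2)
# E_t(x_{t+2} | s_t = i) = sum_j p_ij E_t(x_{t+2} | s_{t+1} = j)
result = [sum(PI[r][c] * stacked[c] for c in range(4)) for r in range(4)]
```

Each block of `result` is the probability weighted average of the regime-conditional blocks, which is exactly what the law of iterated expectations requires here.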
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates are different, $d_0 > d_1$:
\[ \mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij} \]
if $\big( s_t = 0$ and $\mu > 0$ and $(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) = 0 \big)$:
\[ \lambda_t - \frac{1}{c_t} \]
\[ c_t + I_t - \theta_t k_{t-1}^{\alpha} \]
\[ N_t - \lambda_t \theta_t \]
\[ \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \]
\[ \lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \delta(1-d_0)\mu_{t+1} \big) \]
\[ I_t - (k_t - (1-d_0)k_{t-1}) \]
\[ \mu_t (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \]

if $\big( s_t = 0$ and $\mu = 0$ and $(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \ge 0 \big)$:
\[ \lambda_t - \frac{1}{c_t} \]
\[ c_t + I_t - \theta_t k_{t-1}^{\alpha} \]
\[ N_t - \lambda_t \theta_t \]
\[ \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \]
\[ \lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \delta(1-d_0)\mu_{t+1} \big) \]
\[ I_t - (k_t - (1-d_0)k_{t-1}) \]
\[ \mu_t (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \]

if $\big( s_t = 1$ and $\mu > 0$ and $(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) = 0 \big)$:
\[ \lambda_t - \frac{1}{c_t} \]
\[ c_t + I_t - \theta_t k_{t-1}^{\alpha} \]
\[ N_t - \lambda_t \theta_t \]
\[ \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \]
\[ \lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \delta(1-d_1)\mu_{t+1} \big) \]
\[ I_t - (k_t - (1-d_1)k_{t-1}) \]
\[ \mu_t (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \]

if $\big( s_t = 1$ and $\mu = 0$ and $(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \ge 0 \big)$:
\[ \lambda_t - \frac{1}{c_t} \]
\[ c_t + I_t - \theta_t k_{t-1}^{\alpha} \]
\[ N_t - \lambda_t \theta_t \]
\[ \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \]
\[ \lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \delta(1-d_1)\mu_{t+1} \big) \]
\[ I_t - (k_t - (1-d_1)k_{t-1}) \]
\[ \mu_t (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \]
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points; a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, ℝ^{3N_x + N_ε} → ℝ^{N_x}
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10^{−10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}

Algorithm 1: nestInterp
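The outer loop of Algorithm 1 is an ordinary fixed-point iteration on decision rules. A sketch in Python: the `improve` step here is a toy contraction standing in for `doInterp`, and the loop with a sup-norm test over test points mimics `NestWhile` with `notConvergedAt`. All numbers are illustrative.

```python
# Sketch of nestInterp's convergence loop: update a proposed decision rule
# until its values at the test points stop changing (sup norm < tol).

def nest_while(improve, rule, test_points, tol=1e-10, max_iter=500):
    vals = [rule(x) for x in test_points]
    for _ in range(max_iter):
        rule = improve(rule)
        new_vals = [rule(x) for x in test_points]
        if max(abs(a - b) for a, b in zip(new_vals, vals)) < tol:
            return rule
        vals = new_vals
    raise RuntimeError("no convergence")

def improve(rule):
    # toy contraction: move the linear rule's slope halfway toward 0.96,
    # standing in for one doInterp pass
    slope = rule(1.0)
    new_slope = 0.5 * slope + 0.5 * 0.96
    return lambda x: new_slope * x

final = nest_while(improve, lambda x: 0.5 * x, [0.1 * i for i in range(1, 11)])
```

Because each update is independent across collocation points, the expensive inner step (here `improve`, in the paper the equation solves at each point) is the natural place to exploit parallelism.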
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t), for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t), for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples, d

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x^p_t
   for i = 1; i < κ − 1; i++ do
3:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
4: end
5: {{X_{t+1}, Z_{t+1}}, …, {X_{t+κ−1}, Z_{t+κ−1}}} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I − F)^{−1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x, ε) = [ Bx + (I − F)^{−1}φψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
   xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
   xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
   fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc

input: (L, x_{t−1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_{t−1}, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_t

Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sums the F contribution for the series.
2: S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
   {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
output: {γ, G}

Algorithm 12: genInterpData

input: (C(x_{t−1}, ε_t))
1: Evaluate the model equations obeying pre and post conditions.
output: x_t or failed

Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 19: genSolutionPath
Figure 9: $k^{\ast}_t(\alpha, \beta)$. Four panels plot $\beta$ against $\alpha$ for the k error regressions with K = 0, 1, 10, 30.
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equationssystems I will require that for any given (xtminus1 εt) this collection of equation systemsproduces a unique solution for xt This formulation will allow us to apply the technique tomodels with occasionally binding constraints and models with regime switching I will onlyconsider time invariant model solutions xt = x(xtminus1 ε) Given a decision rule x(xtminus1 ε)denote its conditional expectation X(xtminus1) equiv E [x(xtminus1 ε)] Using the convention describedin section 32 I will include enough auxiliary variables so that I can correctly computeexpected values for model variables by applying the law of iterated expectations
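Since the conditional expectation X(x_{t-1}) ≡ E[x(x_{t-1}, ε)] appears throughout what follows, it may help to see how such an expectation can be computed. A minimal sketch, assuming Gaussian shocks and a scalar rule; the function names and quadrature order are illustrative, not the paper's implementation:

```python
import numpy as np

def conditional_expectation(x_rule, x_lag, sigma=0.01, n_nodes=7):
    # Approximate X(x_{t-1}) = E[x(x_{t-1}, eps)], eps ~ N(0, sigma^2),
    # with Gauss-Hermite quadrature (probabilists' weight exp(-x^2/2)).
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)  # normalize to a probability measure
    return sum(w * x_rule(x_lag, sigma * n) for n, w in zip(nodes, weights))
```

For example, with the toy rule x(k, ε) = (k + ε)², the quadrature reproduces E[x] = k² + σ² up to rounding, since the rule is polynomial in ε.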
It will be convenient for describing the algorithms to write the models in the form

$$\{C_1, \ldots, C_M, d(x_{t-1}, \varepsilon_t, x_t, E[x_{t+1}])\}\,\big|\,(b, c)$$

where each

$$C_i \equiv \big(b_i(x_{t-1}, \varepsilon_t),\; M_i(x_{t-1}, \varepsilon_t, x_t, E[x_{t+1}]) = 0,\; c_i(x_{t-1}, \varepsilon_t, x_t, E[x_{t+1}])\big)$$

represents a set of model equations M_i with Boolean-valued gate functions b_i, c_i. In the algorithm's implementation, the b_i will be a Boolean function precondition, and the c_i will be a Boolean function post-condition for a given equation system. If some b_i is false, then there is no need to try to solve model i. If c_i is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre and post conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The d will be a function for choosing among the candidate x_t(x_{t-1}, ε_t) values. The function d uses the truth values from the (b, c) and the possible values of the state variables to select one and only one x_t. Thus, we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given (x_{t-1}, ε_t).
Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
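The gatekeeper logic just described, (b_i, M_i, c_i) triples plus the selection function d, can be sketched as follows; the callables and the toy constraint in the usage note are hypothetical stand-ins for the paper's Mathematica implementation:

```python
def select_solution(systems, x_lag, eps):
    # `systems` is a list of (b, solve, c) triples standing in for
    # (b_i, M_i, c_i): precondition, solver, post-condition.
    candidates = []
    for b, solve, c in systems:
        if not b(x_lag, eps):          # gate closed: no need to solve system i
            continue
        x_t = solve(x_lag, eps)        # solve M_i(...) = 0 for x_t
        if c(x_lag, eps, x_t):         # keep only admissible solutions
            candidates.append(x_t)
    # the role of d: exactly one system must be in force for each state
    if len(candidates) != 1:
        raise ValueError("expected a unique admissible x_t")
    return candidates[0]
```

For a toy law of motion x_t = max(x_{t-1} + ε_t, 0), two guarded systems (an interior branch whose post-condition requires x_t ≥ 0, and a bound branch whose precondition requires x_{t-1} + ε_t < 0) cover the state space, and the selection step picks whichever post-condition holds.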
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t value
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al. 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use anisotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals as outlined in (Judd et al. 2017b), so that the time-t computations (51) are all deterministic.
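Precomputing the integrals of the basis polynomials once, so that each time-t collocation equation contains no integrals, can be sketched as below; the helper names are illustrative, and the quadrature assumes Gaussian shocks:

```python
import numpy as np

def precompute_basis_expectations(basis_polys, sigma, n_nodes=9):
    # For each basis polynomial p_j(x, eps), build the map x -> E[p_j(x, eps)]
    # once, before the solution iterations, so the time-t collocation
    # equations are deterministic functions of the state alone.
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)  # eps ~ N(0, sigma^2)

    def expected(p):
        return lambda x: sum(w * p(x, sigma * n) for n, w in zip(nodes, weights))

    return [expected(p) for p in basis_polys]
```

Each returned function can then be evaluated repeatedly inside the solver at no integration cost.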
There are a number of new steps. One must specify a linear reference model. One must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error; more terms yields more accuracy when the decision rule is accurate, but is computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will initially be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution. If the grid is too sparse, increasing K will likely be less useful.
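The truncation side of this trade-off is geometric: the dropped tail of the series involves powers F^ν with ν ≥ K. A rough back-of-the-envelope bound, assuming the spectral radius of F is below one and a uniform bound on the forcing terms (both assumptions, not results from the paper):

```python
import numpy as np

def truncation_tail_bound(F, phi_norm, z_bound, K):
    # Crude bound on the dropped tail sum_{nu >= K} F^nu * phi * Z(.),
    # treating ||F^nu|| ~ rho(F)^nu with rho(F) < 1 and
    # ||phi Z|| <= phi_norm * z_bound; geometric in K.
    rho = max(abs(np.linalg.eigvals(F)))
    return phi_norm * z_bound * rho ** K / (1.0 - rho)
```

Doubling K roughly squares the factor ρ(F)^K, which is why extra terms help only once the proposed decision rule itself is reasonably accurate.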
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek
$$M(x_{t-1},\, x(x_{t-1}, \varepsilon_t),\, X(x(x_{t-1}, \varepsilon_t)),\, \varepsilon_t) = 0 \qquad \forall (x_{t-1}, \varepsilon_t)$$
Given a proposed model solution

$$x_t = x^p(x_{t-1}, \varepsilon_t)$$

compute

$$X^p(x_{t-1}) \equiv E[x^p(x_{t-1}, \varepsilon_t)]$$
We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.
We can define functions z^p, Z^p by

$$z^p(x_{t-1}, \varepsilon) \equiv H \begin{bmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x^p(x_{t-1}, \varepsilon)) \end{bmatrix} + \psi_\varepsilon \varepsilon_t + \psi_c$$

$$Z^p(x_{t-1}) \equiv E[z^p(x_{t-1}, \varepsilon)]$$
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

$$x_{t+k+1} = X(x_{t+k}), \qquad z_{t+k+1} = Z(x_{t+k}) \qquad \forall k \ge 0$$

Using the z^p(x_{t-1}, ε) series:

• We get expressions for x_t and E[x_{t+1}] consistent with L and the conditional expectations path:

$$x_t = B x_{t-1} + \varphi \psi_\varepsilon \varepsilon + (I - F)^{-1} \varphi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \varphi Z(x_{t+\nu})$$

$$E[x_{t+1}] = B x_t + (I - F)^{-1} \varphi \psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1} \varphi Z(x_{t+\nu}) \qquad \forall t, k \ge 0$$

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p'}(x_{t-1}, ε), z^{p'}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p'}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p'}(x_{t-1}, ε); repeat the loop.
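The loop above is a fixed-point iteration on the proposed rule. A minimal sketch; `solve_collocation`, which produces the updated rule x^{p'} (and z^{p'}) from a proposed rule, is a hypothetical stand-in for the collocation solve:

```python
def solve_model(solve_collocation, initial_rule, test_points, tol=1e-8, max_iter=200):
    # Outer iteration: re-solve the collocation system until the proposed
    # decision rule stops moving at the test points T.
    rule = initial_rule
    for _ in range(max_iter):
        new_rule = solve_collocation(rule)  # x^{p'} from M and the series
        gap = max(abs(new_rule(x, e) - rule(x, e)) for x, e in test_points)
        rule = new_rule
        if gap < tol:
            return rule
    raise RuntimeError("proposed rules failed to converge")
```

With a contraction standing in for the collocation solve, the iteration settles on its fixed point; in the paper's setting the added z_t constraints are what keep the iterates bounded.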
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F} — the linear reference model
x^p(x_{t-1}, ε_t) — the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t) — the proposed decision rule, z_t components
X^p(x_{t-1}) — the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}) — the proposed decision rule conditional expectations, z_t components
κ — one less than the number of terms in the series representation (κ ≥ 0)
S — Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T — model evaluation points: a Niederreiter sequence in the model variable ergodic region
γ — {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components
G — {X(x_{t-1}), Z(x_{t-1})}, collects the decision rule conditional expectations components
𝓗 — {γ, G}, collects decision rule information
{C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c) — the equation system, ℝ^{3N_x + N_ε} → ℝ^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
  doInterp
    genFindRootFuncs
      genFindRootWorker
        genZsForFindRoot
        genLilXkZkFunc
          fSumC
          genXtOfXtm1
          genXtp1OfXt
    makeInterpFuncs
      genInterpData
        evaluateTriple
      interpDataToFunc

Figure 10: Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])} | (b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t[x_{t+1}], ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).

makeInterpFuncs — Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
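The final interpolation step (interpDataToFunc) maps solved values at the collocation points into a callable rule. A least-squares polynomial fit sketched below stands in for the paper's Smolyak Lagrange interpolation; the function name mirrors the call tree, but the construction is illustrative only:

```python
import numpy as np

def interp_data_to_func(points, values, degree=3):
    # Fit rule(x) ~ sum_j c_j x^j to the solved values at the collocation
    # points by ordinary least squares (a one-dimensional stand-in for the
    # anisotropic Smolyak interpolation used in the paper).
    basis = np.vander(np.asarray(points, dtype=float), degree + 1)
    coefs, *_ = np.linalg.lstsq(basis, np.asarray(values, dtype=float), rcond=None)
    # the returned callable accepts scalars or arrays; output is an array
    return lambda x: np.vander(np.atleast_1d(np.asarray(x, dtype=float)), degree + 1) @ coefs
```

When the solved values are exactly polynomial of the fitted degree, the fit reproduces them; otherwise it returns the least-squares projection onto the basis.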
5.4 Approximating a Known Solution: η = 1, u(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al. 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables that constitute an improved set of variables for constructing function approximation values.
[Figure: two panels titled "Ergodic Values for K and θ": left, the simulated (k_t, θ_t) pairs; right, the transformed variables]
Figure 11: The Ergodic Values for k_t, θ_t
$$X = \begin{pmatrix} 0.18853 \\ 1.00466 \\ 0 \end{pmatrix}, \qquad \sigma_X = \begin{pmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{pmatrix}, \qquad V = \begin{pmatrix} -0.707107 & -0.707107 & 0 \\ -0.707107 & 0.707107 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

$$\max U = \begin{pmatrix} 3.02524 \\ 0.199124 \\ 3 \end{pmatrix}, \qquad \min U = \begin{pmatrix} -3.16879 \\ -0.17629 \\ -3 \end{pmatrix}$$
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has a larger effect when the Smolyak grid is more densely populated with collocation points.
[Table 2 data: 30 model evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1)
[Table 3 data: decision rule values (c_i, k_i, θ_i) at the 30 evaluation points]

Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1)
[Table 4 data: (ln|e_c|, ln|e_k|, ln|e_θ|) at the 30 evaluation points]

Table 4: RBC Known Solution Actual Errors, d = (1,1,1)
[Table 5 data: (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at the 30 evaluation points]

Table 5: RBC Known Solution Error Approximations, d = (1,1,1)
[Table 6 data: 30 model evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2)
[Table 7 data: decision rule values at the 30 evaluation points]

Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2)
[Table 8 data: (ln|e_c|, ln|e_k|, ln|e_θ|) at the 30 evaluation points]

Table 8: RBC Known Solution Actual Errors, d = (2,2,2)
[Table 9 data: (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at the 30 evaluation points]

Table 9: RBC Known Solution Error Approximations, d = (2,2,2)
K | d = (1,1,1) | d = (2,2,2) | d = (3,3,3)
NonSeries | -6.51367 | -12.2771 | -16.8947
0 | -6.0793 | -6.26833 | -6.29843
1 | -6.00805 | -6.24202 | -6.49835
2 | -6.52219 | -7.43876 | -7.04785
3 | -6.57578 | -8.03864 | -8.32589
4 | -6.62831 | -9.1522 | -9.49314
5 | -6.77305 | -10.5516 | -10.4247
6 | -6.38499 | -11.6399 | -11.455
7 | -6.63849 | -11.3627 | -13.0213
8 | -6.38735 | -11.7288 | -13.9976
9 | -6.46759 | -11.9826 | -15.3372
10 | -6.45966 | -12.1345 | -15.4157
10 | -6.77776 | -11.5025 | -15.8211
15 | -6.67778 | -12.4593 | -15.8899
20 | -6.40802 | -11.7296 | -17.1652
25 | -6.53265 | -12.1441 | -17.1518
30 | -6.81949 | -11.6097 | -16.3748
35 | -6.33508 | -12.1446 | -16.6963
40 | -6.52442 | -11.4042 | -15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
[Table 11 data: 30 model evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 11: RBC Approximated Solution Model Evaluation Points, d = (1,1,1)
[Table 12 data: decision rule values (c_i, k_i, θ_i) at the 30 evaluation points]

Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1,1,1)
[Table 13 data: (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at the 30 evaluation points]

Table 13: RBC Approximated Solution Error Approximations, d = (1,1,1)
[Table 14 data: 30 model evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2)
[Table 15 data: decision rule values at the 30 evaluation points]

Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2,2,2)
[Table 16 data: (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at the 30 evaluation points]

Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher 2000), a host of authors have described a variety of approaches^11 (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \ge \upsilon I_{ss}$$

$$\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$

$$c_t + k_t = (1-d) k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$

$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}$$

$$\mathcal{L} = u(c_t) + E_t \Big[ \sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) \Big] + \sum_{t=0}^{\infty} \delta^{t} \lambda_t \big( c_t + k_t - ((1-d) k_{t-1} + \theta_t f(k_{t-1})) \big) + \delta^{t} \mu_t \big( (k_t - (1-d) k_{t-1}) - \upsilon I_{ss} \big)$$

so that we can impose

$$I_t = (k_t - (1-d) k_{t-1}) \ge \upsilon I_{ss}$$

The first order conditions become

$$\frac{1}{c_t^{\eta}} + \lambda_t$$

$$\lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta (1-d) \lambda_{t+1} + \mu_t - \delta (1-d) \mu_{t+1}$$

For example, the Euler equations for the neoclassical growth model can be written as:

11. The algorithms described in (Holden 2015) and (Guerrieri and Iacoviello 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If $\mu_t > 0$ and $(k_t - (1-d)k_{t-1} - \upsilon I_{ss}) = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d) \lambda_{t+1} + \delta (1-d) \mu_{t+1} \big) \\
&I_t - (k_t - (1-d) k_{t-1}) \\
&\mu_t \big( k_t - (1-d) k_{t-1} - \upsilon I_{ss} \big)
\end{aligned}$$

If $\mu_t = 0$ and $(k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \ge 0$: the same equation system applies with $\mu_t = 0$.
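The two guarded systems amount to a guess-and-verify scheme: solve the binding branch and accept it only if its multiplier is positive; otherwise solve the slack branch and verify the investment floor. A sketch, with `solve_branch` a hypothetical nonlinear solver for one branch and all parameter values illustrative:

```python
def solve_obc_step(solve_branch, k_lag, theta_lag, eps, upsilon=0.975, i_ss=0.2):
    # Binding branch first: I_t = upsilon * I_ss, accepted only when mu_t > 0.
    sol = solve_branch(binding=True, k_lag=k_lag, theta_lag=theta_lag, eps=eps)
    if sol["mu"] > 0:
        return sol
    # Slack branch: mu_t = 0, and the floor must hold (post-condition c_i).
    sol = solve_branch(binding=False, k_lag=k_lag, theta_lag=theta_lag, eps=eps)
    assert sol["I"] >= upsilon * i_ss - 1e-12
    return sol
```

Exactly one branch is admissible at each state, matching the gatekeeper structure of Section 5.1.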
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
[Table 17 data: 30 model evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1)
[Table 18 data: decision rule values (c_i, I_i, k_i) at the 30 evaluation points]

Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1,1,1)
[Table 19 data: (ln|ê_c|, ln|ê_k|, ln|ê_θ|) at the 30 evaluation points]

Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1)
48
[Table data: columns k_i, θ_i, ε_i for i = 1, …, 30; the numeric values are not recoverable from the extracted text.]

Table 20: Occasionally Binding Constraints: Model Evaluation Points, d = (2,2,2)
[Table data: 30 rows of values at the evaluation points; the numeric values are not recoverable from the extracted text.]

Table 21: Occasionally Binding Constraints: Values at Evaluation Points for Occasionally Binding Constraints, d = (2,2,2)
[Table data: columns $\ln(|\hat{e}_{c,i}|)$, $\ln(|\hat{e}_{k,i}|)$, $\ln(|\hat{e}_{\theta,i}|)$ for i = 1, …, 30; the numeric values are not recoverable from the extracted text.]

Table 22: Occasionally Binding Constraints: Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Paper No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–1311, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large-scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections: Parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^\star_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right];$$

then with

$$E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^\star_t, E_t x^\star_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $L$, construct the family of trajectories and corresponding $z^\star_t(x_{t-1}, \varepsilon)$:

$$\left\{x^\star_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^{L} : \left\|x^\star_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \ \forall t > 0\right\}$$

$$z^\star_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^\star_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^\star_t(x_{t-1}, \varepsilon_t) + H_1 x^\star_{t+1}(x_{t-1}, \varepsilon_t).$$

Consequently, the exact solution has a representation given by

$$x^\star_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^\star_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^\star_{t+1}(x_{t-1}, \varepsilon)\right] = B x^\star_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[z^\star_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c,$$

with

$$M(x_{t-1}, x^\star_t, E_t x^\star_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$

By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $L$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:12

$$\left\{x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^{L} : \left\|x^p_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \ \forall t > 0\right\}$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t).$$

The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[z^p_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c,$$

with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$

Differencing the two representations,

$$x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z^\star_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right).$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^\star_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon),$$

we have

$$x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\left\|x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu} \phi\right\| \left\|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\right\|_\infty.$$

By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
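The series bound above can be checked numerically. The sketch below builds a hypothetical stable linear reference model (the matrices F and φ and the Δz sequence are illustrative choices, not taken from the paper's models) and verifies that the infinity norm of the accumulated series difference never exceeds the geometric-sum bound:

```python
# Sketch: verify ||x* - x^p||_inf <= sum_nu ||F^nu phi|| * max ||dz||_inf
# on a hypothetical 2-variable linear reference model.
import numpy as np

rng = np.random.default_rng(0)

F = np.array([[0.5, 0.1],
              [0.0, 0.4]])       # stable: spectral radius < 1
phi = np.eye(2)

# a bounded sequence of z deviations (Delta z^p in the paper's notation)
dz = [rng.uniform(-1.0, 1.0, size=2) for _ in range(200)]

# x*_t - x^p_t = sum_nu F^nu phi Delta z^p_{t+nu}
diff = sum(np.linalg.matrix_power(F, nu) @ phi @ dz[nu]
           for nu in range(len(dz)))

# conservative bound: sum of induced inf-norms times the largest deviation
dz_max = max(np.max(np.abs(d)) for d in dz)
bound = sum(np.linalg.norm(np.linalg.matrix_power(F, nu) @ phi, ord=np.inf)
            for nu in range(len(dz))) * dz_max
```

Because F is stable, the norms of its powers decay geometrically, so the bound is finite even for an infinite Δz path.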
$$\left\|\Delta z_t\right\| \le \max_{x_{-}, \varepsilon} \left\|\phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon)\right\|_2$$

$$\left\|x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1} \phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon)\right\|_2$$

$$z^\star_t = H_{-1} x_{t-1} + H_0 x^\star_t + H_1 x^\star_{t+1}, \qquad z^p_t = H_{-1} x^p_{t-1} + H_0 x^p_t + H_1 x^p_{t+1}$$

$$0 = M(x_{t-1}, x^\star_t, x^\star_{t+1}, \varepsilon_t), \qquad e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left\|\phi \Delta z_t\right\|_2^2 = \max_{x_{t-1}, \varepsilon_t} \left\|\phi \left(H_0 (x^p_t - x^\star_t) + H_1 (x^p_{t+1} - x^\star_{t+1})\right)\right\|_2^2$$
with

$$M(x_{t-1}, x^\star_t, x^\star_{t+1}, \varepsilon_t) = 0,$$

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^\star_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^\star_{t+1} - x^p_{t+1}).$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$(x^\star_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1} \left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^\star_{t+1} - x^p_{t+1})\right).$$
Collecting results, and substituting the expression for $(x^\star_t - x^p_t)$ into the norm above,

$$\max_{x_{t-1}, \varepsilon_t} \left\|\phi \Delta z_t\right\|_2^2 = \max_{x_{t-1}, \varepsilon_t} \left\|\phi \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^\star_{t+1} - x^p_{t+1})\right) + H_1 (x^\star_{t+1} - x^p_{t+1})\right)\right\|_2^2$$

$$= \max_{x_{t-1}, \varepsilon_t} \left\|\phi \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_1\right) (x^\star_{t+1} - x^p_{t+1})\right)\right\|_2^2.$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_1,$$

produces

$$\max_{x_{t-1}, \varepsilon_t} \left\|\phi \, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2^2.$$
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \mid s_{t+1} = j)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j)$$

If we know the current state, we can use the known conditional expectation function for the state:

$$\begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix}$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
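The $(P \otimes I)$ stacking can be sketched numerically. The per-regime expectation rules below are hypothetical linear stand-ins; the point is only that the Kronecker product mixes the stacked regime-conditional expectations with the transition weights $p_{ij}$:

```python
# Sketch: mix regime-conditional expectations with (P kron I).
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # p_ij = Prob(s_{t+1} = j | s_t = i)
nx = 3
I = np.eye(nx)

# hypothetical per-regime linear expectation rules x -> A_j x
A = [0.5 * np.eye(nx), 0.7 * np.eye(nx)]
x_t1 = np.array([1.0, 2.0, 3.0])

# stacked E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j), j = 1..N
stacked_next = np.concatenate([A[j] @ x_t1 for j in range(2)])

# stacked E_t(x_{t+2}(x_t) | s_t = i) = (P kron I) times the vector above
stacked_now = np.kron(P, I) @ stacked_next

# row block i of the result mixes the regime-j expectations with weights p_ij
expected_row0 = 0.9 * A[0] @ x_t1 + 0.1 * A[1] @ x_t1
```

Each $n_x$-block of `stacked_now` is the probability-weighted average of the regime-conditional blocks, exactly the summation displayed above.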
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$:

$$\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
If $s_t = 0$, $\mu_t > 0$, and $k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d_0) + \mu_{t+1} \delta (1-d_0)\right)\\
&I_t - (k_t - (1-d_0) k_{t-1})\\
&\mu_t \left(k_t - (1-d_0) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 0$, $\mu_t = 0$, and $k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d_0) + \mu_{t+1} \delta (1-d_0)\right)\\
&I_t - (k_t - (1-d_0) k_{t-1})\\
&\mu_t \left(k_t - (1-d_0) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 1$, $\mu_t > 0$, and $k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d_1) + \mu_{t+1} \delta (1-d_1)\right)\\
&I_t - (k_t - (1-d_1) k_{t-1})\\
&\mu_t \left(k_t - (1-d_1) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 1$, $\mu_t = 0$, and $k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d_1) + \mu_{t+1} \delta (1-d_1)\right)\\
&I_t - (k_t - (1-d_1) k_{t-1})\\
&\mu_t \left(k_t - (1-d_1) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
C Algorithm Pseudo-code

$L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule $x_t$ components
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule $z_t$ components
$X^p(x_{t-1})$: the proposed decision rule conditional expectations $x_t$ components
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations $z_t$ components
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
$S$: Smolyak anisotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$
$T$: model evaluation points: Niederreiter sequence in the model variable ergodic region
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$ collects the decision rule components
$G$: collects the decision rule conditional expectations components
$H$: $\{\gamma, G\}$ collects decision rule information
$\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c)$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon} \to \mathbb{R}^{N_x}$

input: $(L, H_0, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), S, T)$
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option ("normConvTol", default $10^{-10}$).
2. NestWhile(Function(v, doInterp(L, v, $\kappa$, $\{C_1, \ldots, C_M\}, d(\cdot)|(b, c)$, S)), $H_0$, notConvergedAt(v, T))
output: $H_0, H_1, \ldots, H_c$

Algorithm 1: nestInterp
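The NestWhile driver in nestInterp is a generic fixed-point loop: keep applying the update map until the rule's values at the test points stop changing. A minimal sketch, in which the update map and test points are hypothetical stand-ins for doInterp and T:

```python
# Sketch of the nestInterp outer loop: iterate rule -> update(rule) until the
# maximum change at the test points falls below a convergence tolerance.
def nest_while(update, rule, test_points, tol=1e-10, max_iter=500):
    """Return the first iterate whose values at test_points stop moving."""
    vals = [rule(x) for x in test_points]
    for _ in range(max_iter):
        rule = update(rule)
        new_vals = [rule(x) for x in test_points]
        if max(abs(a - b) for a, b in zip(new_vals, vals)) < tol:
            return rule
        vals = new_vals
    raise RuntimeError("no convergence within max_iter")

# toy contraction: linear rules x -> c*x; the update halves the gap to c = 0.25
def update(rule):
    return lambda x: 0.5 * (rule(x) + 0.25 * x)

solved = nest_while(update, lambda x: x, [0.5, 1.0, 2.0])
```

Because the toy update is a contraction, the loop converges geometrically; the fixed point is the rule $x \mapsto 0.25x$.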
input: $(L, H_i, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), S)$
1. Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ and one of the model equation systems, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs(L, H, $\kappa$, $\{C_1, \ldots, C_M\}, d(\cdot)|(b, c)$)
3. $H_{i+1}$ = makeInterpFuncs(theFindRootFuncs, S)
output: $H_{i+1}$

Algorithm 2: doInterp

input: $(L, H, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), S)$
1. Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one of the model equation systems. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, $\kappa$, v[2]), v[3]}), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs

input: $(L, G, \kappa, M)$
1. Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2. $Z = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot(L, $x^p_t$, G)
3. modEqnArgsFunc = genLilXkZkFunc(L, Z)
4. $X = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}, \varepsilon_t, z_t$)
5. funcOfXtZt = FindRoot($\{x^p_t, z_t\}$, $\{M(X), x_t - x^p_t\}$)
6. funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: $(x^p_t, L, G, \kappa, M)$
1. Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (empty list for $\kappa = 1$).
2. $X_t = x^p_t$
3. for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4.   $\{X_{t+i}, Z_{t+i}\}$ = G($Z_{t+i-1}$)
5. end
6. $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot(L, $x^p_t$, G)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$

Algorithm 5: genZsForFindRoot

input: $(L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1. Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2. $\gamma(x, \varepsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c + \psi_\varepsilon \varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3. $G(x, \varepsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}$
4. $H = \{\gamma, G\}$
output: $H$

Algorithm 6: genBothX0Z0Funcs

input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1. Symbolically creates a function that uses an initial $Z$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model equation system $M$.
2. fCon = fSumC($\phi$, F, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3. xtVals = genXtOfXtm1(L, $x_{t-1}, \varepsilon_t, z_t$, fCon)
4. xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5. fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc

input: $(L, x_{t-1}, \varepsilon_t, z_t, fCon)$
1. Symbolically creates a function that uses an initial $Z$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model equation system $M$.
2. $x_t = Bx_{t-1} + (I - F)^{-1}\phi\psi_c + \phi\psi_\varepsilon\varepsilon_t + \phi\psi_z z_t + F \cdot \mathrm{fCon}$
output: $x_t$

Algorithm 8: genXtOfXtm1
input: $(L, x_t, \mathrm{fCon})$
1. Symbolically creates a function that uses an initial $Z$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model equation system $M$.
2. $x_{t+1} = Bx_t + (I - F)^{-1}\phi\psi_c + \mathrm{fCon}$
output: $x_{t+1}$

Algorithm 9: genXtp1OfXt

input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1. Numerically sums the $F$ contribution for the series.
2. $S = \sum_{\nu} F^{\nu-1}\phi Z_{t+\nu}$
output: $S$

Algorithm 10: fSumC
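The finite sum computed by fSumC can be sketched directly. The matrices F and φ and the Z path below are hypothetical values chosen only to illustrate the accumulation $S = \sum_{\nu} F^{\nu-1}\phi Z_{t+\nu}$:

```python
# Sketch of fSumC: accumulate the F contribution over a finite Z path.
import numpy as np

def f_sum_c(F, phi, Z_path):
    """Numerically sum S = sum_{nu>=1} F^{nu-1} phi Z_{t+nu} for a finite path."""
    S = np.zeros(F.shape[0])
    Fpow = np.eye(F.shape[0])            # F^0
    for Z in Z_path:                     # Z_{t+1}, ..., Z_{t+kappa-1}
        S += Fpow @ phi @ Z
        Fpow = Fpow @ F                  # advance to the next power of F
    return S

F = np.array([[0.3, 0.0],
              [0.1, 0.2]])
phi = np.eye(2)
Z_path = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
S = f_sum_c(F, phi, Z_path)
```

Keeping a running power of F avoids recomputing `matrix_power` at every term, which matters when $\kappa$ is large.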
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), S)$
1. Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2. interpData = genInterpData($\{C_1, \ldots, C_M\}, d(\cdot)|(b, c)$, S); $\{\gamma, G\}$ = interpDataToFunc(interpData, S)
output: $\{\gamma, G\}$

Algorithm 11: makeInterpFuncs

input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|(b, c), S)$
1. Solves the model equations at the collocation points, producing the interpolation data used to construct the decision rule and decision rule conditional expectation functions.
output: interpData

Algorithm 12: genInterpData

input: $(C, (x_{t-1}, \varepsilon_t))$
1. Evaluate the model equations, obeying the pre- and post-conditions.
output: $x_t$ or failed

Algorithm 13: evaluateTriple
input: $(V, S)$
1. Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$

Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 16: smolyakInterpolationPrep
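The points-plus-basis-matrix pattern behind smolyakInterpolationPrep can be sketched in one dimension. The sketch below is a simplified hypothetical stand-in: it uses an ordinary Chebyshev grid rather than the paper's anisotropic Smolyak grid, but it produces the same kind of output, collocation points mapped into a variable range and the $\Psi$ matrix of basis values at those points:

```python
# Simplified 1-D stand-in for the collocation-prep step: Chebyshev nodes
# mapped into [lo, hi] plus the n x n basis (Psi) matrix.
import numpy as np

def colloc_prep(n, lo, hi):
    """Return collocation points in [lo, hi] and Psi[i, j] = T_j(node_i)."""
    k = np.arange(n)
    nodes01 = np.cos((2 * k + 1) * np.pi / (2 * n))   # Chebyshev nodes in [-1, 1]
    x = lo + (hi - lo) * (nodes01 + 1) / 2            # map into the variable range
    psi = np.cos(np.outer(np.arccos(nodes01), k))     # Chebyshev basis values
    return x, psi

x_pts, psi = colloc_prep(5, 0.8, 1.2)
coeffs = np.linalg.solve(psi, x_pts ** 2)   # fit f(x) = x^2 at the nodes
```

Solving the linear system against function values at the nodes gives the interpolating coefficients; this is the step that interpDataToFunc performs with Smolyak polynomials in the full algorithm.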
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 19: genSolutionPath
5 Improving Proposed Solutions
5.1 Some Practical Considerations for Applying the Series Formula
I assume that all models of interest can be characterized by a finite number of equation systems. I will require that for any given $(x_{t-1}, \varepsilon_t)$ this collection of equation systems produces a unique solution for $x_t$. This formulation will allow us to apply the technique to models with occasionally binding constraints and models with regime switching. I will only consider time invariant model solutions $x_t = x(x_{t-1}, \varepsilon)$. Given a decision rule $x(x_{t-1}, \varepsilon)$, denote its conditional expectation $X(x_{t-1}) \equiv E[x(x_{t-1}, \varepsilon)]$. Using the convention described in Section 3.2, I will include enough auxiliary variables so that I can correctly compute expected values for model variables by applying the law of iterated expectations.
It will be convenient for describing the algorithms to write the models in the form

$$\{C_1, \ldots, C_M\},\ d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\,|\,(b, c)$$

where each

$$C_i \equiv \left(b_i(x_{t-1}, \varepsilon_t),\ M_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}]) = 0,\ c_i(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])\right)$$
represents a set of model equations $M_i$ with Boolean-valued gate functions $b_i$, $c_i$. In the algorithm's implementation, the $b_i$ will be a Boolean function precondition and the $c_i$ will be a Boolean function post-condition for a given equation system. If some $b_i$ is false, then there is no need to try to solve system $i$. If $c_i$ is false, the system has produced an unacceptable solution, which should be ignored. I suggest organizing equations using pre- and post-conditions, as this can help with parallelizing a solution regardless of whether or not one uses my series representation. The $d$ will be a function for choosing among the candidate $x_t(x_{t-1}, \varepsilon_t)$ values. The function $d$ uses the truth values from the $(b, c)$ and the possible values of the state variables to select one and only one $x_t$. Thus, we will have an equation system and corresponding gatekeeper logical expressions indicating which equation system is in force for producing the unique solution for a given $(x_{t-1}, \varepsilon_t)$.

Examples of such systems include the simple RBC model from Section 3.2. Other examples include models with occasionally binding constraints, where solutions exhibit complementary slackness conditions, or models with regime switching. See Section 6 for a model with occasionally binding constraints and Appendix B for an example of a specification for regime switching.
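The gated-system organization can be sketched concretely. The constraint and candidate equation systems below are hypothetical stand-ins (a one-variable floor constraint, not any model from the paper); what the sketch shows is the pattern of preconditions, post-conditions, and the selector keeping exactly one candidate $x_t$:

```python
# Sketch: pre/post-condition-gated equation systems and a selector d.
def solve_gated(systems, state):
    """Return the unique x_t accepted by exactly one (b, solve, c) triple."""
    accepted = []
    for b, solve, c in systems:
        if not b(state):          # precondition: skip systems not in force
            continue
        x = solve(state)
        if c(state, x):           # post-condition: reject unacceptable roots
            accepted.append(x)
    assert len(accepted) == 1, "specification must select one and only one x_t"
    return accepted[0]

floor = 0.0
systems = [
    # unconstrained branch: x = state - 1, acceptable only above the floor
    (lambda s: True, lambda s: s - 1.0, lambda s, x: x >= floor),
    # constrained branch: x pinned at the floor, acceptable only if it binds
    (lambda s: True, lambda s: floor, lambda s, x: s - 1.0 < floor),
]
```

The complementary post-conditions play the role of the slackness conditions in an occasionally binding constraint: exactly one branch is valid for each state.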
5.2 How My Approach Differs From the Usual Approach

It may be worthwhile at this point to briefly compare and contrast the approach I propose here with what is typically done for solving dynamic stochastic models. As with the standard approach, I characterize the solutions using linearly weighted sums of orthogonal polynomials and solve the model equations at predetermined collocation points. However, in this new algorithm I augment the model Euler equations with equations based on (2), linking the conditional expectations for future values in the proposed solution with the time t values of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables $x_t(x_{t-1}, \varepsilon_t)$ as well as the new $z_t(x_{t-1}, \varepsilon_t)$ variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in Judd et al. (2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule $x(x_{t-1}, \varepsilon, k)$ and obtain conditional expectations functions. I use anisotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals, as outlined in Judd et al. (2017b), so that the time t computations of Section 5.1 are all deterministic.

There are a number of new steps. One must specify a linear reference model. One must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms mean more truncation error; more terms correspond to more accuracy when the decision rule is accurate, but they are computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will initially be incorrect. The Smolyak grid adds further approximation error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system with no Boolean gates. A description of the actual multi-system implementation follows. We seek

$$M(x_{t-1}, x(x_{t-1}, \varepsilon_t), X(x(x_{t-1}, \varepsilon_t)), \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$

Given a proposed model solution

$$x_t = x^p(x_{t-1}, \varepsilon_t),$$

compute

$$X^p(x_{t-1}) \equiv E\left[x^p(x_{t-1}, \varepsilon_t)\right].$$
We will use a linear reference model $L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$ to construct a series of $z^p$ functions that help improve the accuracy of the proposed solution. We can define functions $z^p, Z^p$ by

$$z^p(x_{t-1}, \varepsilon) \equiv H \begin{bmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x^p(x_{t-1}, \varepsilon)) \end{bmatrix} + \psi_\varepsilon \varepsilon_t + \psi_c$$

$$Z^p(x_{t-1}) \equiv E\left[z^p(x_{t-1}, \varepsilon)\right].$$
• Algorithm loop begins here. Define conditional expectations paths for $x_t, z_t$:

$$x_{t+k+1} = X(x_{t+k}), \qquad z_{t+k+1} = Z(x_{t+k}) \quad \forall k \ge 0,$$

using the $z^p(x_{t-1}, \varepsilon)$ series.

• We get expressions for $x_t, E[x_{t+1}]$ consistent with $L$ and the conditional expectations path:

$$X_t = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi Z(x_{t+\nu})$$

$$E[X_{t+1}] = B X_t + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi Z(x_{t+1+\nu}) \quad \forall t, k \ge 0$$

• Use the model equations $M(x_{t-1}, x_t, E[x_{t+1}], \varepsilon) = 0$ and $x_t(x_{t-1}, \varepsilon_t) = X_t(x_{t-1}, \varepsilon_t)$ to determine $x^{p\prime}(x_{t-1}, \varepsilon)$, $z^{p\prime}(x_{t-1}, \varepsilon)$.

• Set $x^p(x_{t-1}, \varepsilon) = x^{p\prime}(x_{t-1}, \varepsilon)$ and $z^p(x_{t-1}, \varepsilon) = z^{p\prime}(x_{t-1}, \varepsilon)$; repeat the loop.
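The loop structure can be sketched on a hypothetical scalar model. The model below, $x_t = a x_{t-1} + b\,E_t[x_{t+1}] + \varepsilon_t$, is an illustrative stand-in, not one of the paper's models; under a linear rule guess $x_t = \theta x_{t-1}$ (plus the shock), computing the expectation and re-solving the model equation updates $\theta$, and iterating drives $\theta$ to the fixed point of $\theta = a/(1 - b\theta)$:

```python
# Sketch of the single-system loop: propose a rule, form its conditional
# expectation, re-solve the model equation, and repeat until the rule
# stops changing.
a, b = 0.5, 0.2

theta = 0.0                       # initial proposed rule: x_t = theta * x_{t-1}
for _ in range(200):
    # E_t[x_{t+1}] = theta * x_t under the proposed rule (shocks mean zero);
    # solving x_t = a x_{t-1} + b theta x_t for x_t gives the updated slope:
    new_theta = a / (1.0 - b * theta)
    if abs(new_theta - theta) < 1e-12:
        break
    theta = new_theta

# the stable root of b theta^2 - theta + a = 0
exact = (1 - (1 - 4 * a * b) ** 0.5) / (2 * b)
```

The iteration converges because the update map is a contraction near the stable root; the full algorithm replaces the scalar update with interpolated decision rules solved at collocation points.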
5.3.2 Multiple Equation System Case
An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation. Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model

x_p(x_{t-1}, ε_t): the proposed decision rule, x_t components

z_p(x_{t-1}, ε_t): the proposed decision rule, z_t components

X_p(x_{t-1}): the proposed decision rule conditional expectations, x_t components

Z_p(x_{t-1}): the proposed decision rule conditional expectations, z_t components

κ: one less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T: model evaluation points, a Niederreiter sequence in the model variable ergodic region

γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components

G: {X(x_{t-1}), Z(x_{t-1})}, collects the decision rule conditional expectations components

H: {γ, G}, collects decision rule information

{C_1, …, C_{M_d}}(x_{t-1}, ε_t, x_t, E[x_{t+1}])|(b, c): the equation system, ℝ^{3N_x+N_ε} → ℝ^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot genLilXkZkFunc
fSumC genXtOfXtm1 genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc
Figure 10 Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, …, C_{M_d}}(x_{t-1}, ε_t, x_t, E[x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs: The function can either impose the series expression (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value of x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
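Because each of these per-point constructions is independent of the others, the step parallelizes naturally. A minimal Python analogue (illustrative only; the actual implementation uses Mathematica's parallel primitives, and `solve_point` is a hypothetical stand-in for one root-solving task):

```python
import math
from concurrent.futures import ThreadPoolExecutor

def solve_point(x):
    # hypothetical stand-in for one (x_{t-1}, eps_t) root-solving task:
    # invert y -> tanh(y) at the point x
    return math.atanh(x)

# the per-point tasks share no state, so they can be mapped across workers
points = [i / 10 for i in range(-9, 10)]
with ThreadPoolExecutor(max_workers=4) as pool:
    solutions = list(pool.map(solve_point, points))
```

The same map-over-independent-tasks shape applies whether the worker solves a toy root-finding problem, as here, or one triple of the model equation system.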
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t[x_{t+1}], ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).
makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
Figure 11: The Ergodic Values for k_t, θ_t
X = (0.18853, 1.00466, 0)

σ_X = (0.0112443, 0.0387807, 0.01)

V =
  −0.707107  −0.707107  0
  −0.707107   0.707107  0
   0          0         1

max U = (3.02524, 0.199124, 3)

min U = (−3.16879, −0.17629, −3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
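The role of K can be seen in a scalar caricature of the series: with a stable F, the terms dropped by truncating at K shrink geometrically, which is why larger K helps once the Z values along the conditional expectations path are themselves accurate. Illustrative Python, with F, φ, and z as hypothetical scalars:

```python
# Scalar caricature of truncating sum_{nu=0}^{infinity} F**nu * phi * z at
# K terms: the omitted tail equals F**(K+1) * phi * z / (1 - F), so each
# extra term shrinks the truncation error by the factor F.
F, phi, z = 0.5, 1.0, 1.0
exact = phi * z / (1 - F)                       # full geometric sum

errors = []
for K in range(6):
    partial = sum(F**nu * phi * z for nu in range(K + 1))
    errors.append(exact - partial)              # = F**(K+1) / (1 - F)
```

When the Z values are themselves wrong (an inaccurate proposed rule or a too-sparse grid), this geometric gain is swamped by the error in the terms being summed, matching the pattern in Table 10.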
k_i  θ_i  ε_i  (rows i = 1, …, 30)
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2 RBC Known Solution Model Evaluation Points d=(111)
c_i  k_i  θ_i  (rows i = 1, …, 30)
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3 RBC Known Solution Values at Evaluation Points d=(111)
ln(|e_{c,i}|)  ln(|e_{k,i}|)  ln(|e_{θ,i}|)  (rows i = 1, …, 30)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4 RBC Known Solution Actual Errors d=(111)
ln(|ê_{c,i}|)  ln(|ê_{k,i}|)  ln(|ê_{θ,i}|)  (rows i = 1, …, 30)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
k_i  θ_i  ε_i  (rows i = 1, …, 30)
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6 RBC Known Solution Model Evaluation Points d=(222)
k_i  θ_i  ε_i  (rows i = 1, …, 30)
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
ln(|e_{c,i}|)  ln(|e_{k,i}|)  ln(|e_{θ,i}|)  (rows i = 1, …, 30)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8 RBC Known Solution Actual Errorsd=(222)
ln(|ê_{c,i}|)  ln(|ê_{k,i}|)  ln(|ê_{θ,i}|)  (rows i = 1, …, 30)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9 RBC Known Solution Error Approximationsd=(222)
K           d = (1, 1, 1)   d = (2, 2, 2)   d = (3, 3, 3)

NonSeries   −6.51367        −12.2771        −16.8947
0           −6.0793         −6.26833        −6.29843
1           −6.00805        −6.24202        −6.49835
2           −6.52219        −7.43876        −7.04785
3           −6.57578        −8.03864        −8.32589
4           −6.62831        −9.1522         −9.49314
5           −6.77305        −10.5516        −10.4247
6           −6.38499        −11.6399        −11.455
7           −6.63849        −11.3627        −13.0213
8           −6.38735        −11.7288        −13.9976
9           −6.46759        −11.9826        −15.3372
10          −6.45966        −12.1345        −15.4157
10          −6.77776        −11.5025        −15.8211
15          −6.67778        −12.4593        −15.8899
20          −6.40802        −11.7296        −17.1652
25          −6.53265        −12.1441        −17.1518
30          −6.81949        −11.6097        −16.3748
35          −6.33508        −12.1446        −16.6963
40          −6.52442        −11.4042        −15.6302
Table 10 RBC Known Solution Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k_i  θ_i  ε_i  (rows i = 1, …, 30)
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
c_i  k_i  θ_i  (rows i = 1, …, 30)
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
ln(|ê_{c,i}|)  ln(|ê_{k,i}|)  ln(|ê_{θ,i}|)  (rows i = 1, …, 30)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
k_i  θ_i  ε_i  (rows i = 1, …, 30)
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
c_i  k_i  θ_i  (rows i = 1, …, 30)
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
ln(|ê_{c,i}|)  ln(|ê_{k,i}|)  ln(|ê_{θ,i}|)  (rows i = 1, …, 30)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16 RBC Approximated Solution Error Approximationsd=(222)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d) k_{t-1} + θ_t f(k_{t-1})

θ_t = θ_{t-1}^ρ e^{ε_t}

f(k_t) = k_t^α

u(c) = (c^{1−η} − 1) / (1 − η)

The Lagrangian is

L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t − ((1 − d) k_{t-1} + θ_t f(k_{t-1}))) + δ^t μ_t ((k_t − (1 − d) k_{t-1}) − υ I_ss)

so that we can impose

I_t = (k_t − (1 − d) k_{t-1}) ≥ υ I_ss.

The first-order conditions become

1/c_t^η + λ_t = 0

λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1 − d) k_{t-1} − υ I_ss) = 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t-1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t-1}) = 0
μ_t (k_t − (1 − d) k_{t-1} − υ I_ss) = 0

if (μ_t = 0 and (k_t − (1 − d) k_{t-1} − υ I_ss) ≥ 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t-1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t-1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d) λ_{t+1} + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t-1}) = 0
μ_t (k_t − (1 − d) k_{t-1} − υ I_ss) = 0
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k_i  θ_i  ε_i  (rows i = 1, …, 30)
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
c_i  I_i  k_i  (rows i = 1, …, 30)
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(111)
ln(|ê_{c,i}|)  ln(|ê_{k,i}|)  ln(|ê_{θ,i}|)  (rows i = 1, …, 30)
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
k_i  θ_i  ε_i  (rows i = 1, …, 30)
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
c_i  I_i  k_i  (rows i = 1, …, 30)
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(222)
ln(|ê_{c,i}|)  ln(|ê_{k,i}|)  ln(|ê_{θ,i}|)  (rows i = 1, …, 30)
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
ReferencesGary Anderson A reliable and computationally efficient algorithm for imposing the sad-
dle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc, February 1997. URL http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections: parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^\star_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

\[ G(x) \equiv E\left[g(x, \varepsilon)\right] \]

then with

\[ E_t x^\star_{t+1} = G(g(x_{t-1}, \varepsilon_t)) \]

we have

\[ M(x_{t-1}, x^\star_t, E_t x^\star_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]
In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^\star_t(x_{t-1}, \varepsilon)$:

\[ x^\star_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \qquad \left\|x^\star_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \quad \forall t > 0 \]

\[ z^\star_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^\star_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^\star_t(x_{t-1}, \varepsilon_t) + H_1 x^\star_{t+1}(x_{t-1}, \varepsilon_t) \]
Consequently, the exact solution has a representation given by

\[ x^\star_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^\star_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[x^\star_{t+1}(x_{t-1}, \varepsilon)\right] = B x^\star_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[z^\star_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c \]

with

\[ M(x_{t-1}, x^\star_t, E_t x^\star_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]
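To make the series representation concrete, here is a minimal scalar sketch that truncates the infinite sum at a finite number of terms. All names and numerical values (B, F, phi, psi_eps, psi_c) are illustrative stand-ins for the matrices defined above, not values from the paper.

```python
# Scalar stand-in for the series representation
#   x_t = B x_{t-1} + phi psi_eps eps + (I - F)^{-1} phi psi_c
#         + sum_nu F^nu phi z_{t+nu}
# (hypothetical parameter values, |F| < 1 so the sum converges).
B, F, phi, psi_eps, psi_c = 0.9, 0.5, 1.0, 1.0, 0.1

def x_series(x_prev, eps, z_path):
    """Evaluate the representation with the infinite sum truncated
    to the terms supplied in z_path = [z_t, z_{t+1}, ...]."""
    const = phi * psi_c / (1.0 - F)                      # (I - F)^{-1} phi psi_c
    tail = sum(F ** nu * phi * z for nu, z in enumerate(z_path))
    return B * x_prev + phi * psi_eps * eps + const + tail

# With z identically zero the formula collapses to the linear reference model.
x_lin = x_series(1.0, 0.0, [0.0] * 10)
```

With a zero z path the expression reduces to the linear reference model solution, which matches the interpretation of the z terms as the nonlinear deviations from that model.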
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

\[ E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t). \]
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:12

\[ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \qquad \left\|x^p_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \quad \forall t > 0 \]

\[ z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t) \]
The proposed solution has a representation given by

\[ x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c \]

with

\[ e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t). \]

Subtracting the proposed-solution representation from the exact-solution representation,

\[ x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z^\star_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right). \]

Defining

\[ \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^\star_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \]

gives

\[ x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \]

\[ \left\|x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\right\|_\infty \]
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.13 The exact solution satisfies the model equations exactly; the error associated with the proposed solution therefore leads to a conservative bound on the largest change in $z$ needed to match the exact solution:
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
\[ \left\|\Delta z_t\right\| \le \max_{x_-, \varepsilon} \left\|\phi\, M\!\left(x_-,\, g^p(x_-, \varepsilon),\, G^p(g^p(x_-, \varepsilon)),\, \varepsilon\right)\right\|_2 \]

\[ \left\|x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_-, \varepsilon} \left\|(I - F)^{-1} \phi\, M\!\left(x_-,\, g^p(x_-, \varepsilon),\, G^p(g^p(x_-, \varepsilon)),\, \varepsilon\right)\right\|_2 \]
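A minimal numerical sketch of this conservative bound, using scalar stand-ins for $F$ and $\phi$ (hypothetical values) and a handful of hypothetical equation residuals evaluated at test points:

```python
# Scalar stand-in for the error bound
#   ||x* - x^p||_inf <= max over test points of ||(I - F)^{-1} phi residual||
# where each residual is M evaluated at the proposed solution.
F, phi = 0.5, 1.0

def error_bound(residuals):
    """Geometric-sum bound: (I - F)^{-1} phi times the largest residual."""
    return (phi / (1.0 - F)) * max(abs(r) for r in residuals)

# Hypothetical residuals of a proposed rule at a few (x_{t-1}, eps) points.
bound = error_bound([1e-4, -3e-4, 2e-4])
```

Because every future $\Delta z^p$ is replaced by the worst-case residual, the bound is pessimistic by construction, as the footnote above notes.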
\[ z^\star_t = H_- x_{t-1} + H_0 x^\star_t + H_+ x^\star_{t+1} \]

\[ z^p_t = H_- x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1} \]

\[ 0 = M(x_{t-1}, x^\star_t, x^\star_{t+1}, \varepsilon_t) \]

\[ e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t) \]

Since the two paths share the initial condition $x_{t-1}$,

\[ \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\, \Delta z_t\right\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\left(H_0\left(x^\star_t - x^p_t\right) + H_+\left(x^\star_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2 \]

with

\[ M(x_{t-1}, x^\star_t, x^\star_{t+1}, \varepsilon_t) = 0. \]
\[ \Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\left(x^\star_t - x^p_t\right) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^\star_{t+1} - x^p_{t+1}\right) \]

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

\[ \left(x^\star_t - x^p_t\right) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^\star_{t+1} - x^p_{t+1}\right)\right) \]
Collecting results,

\[ \left(\left\|\phi\, \Delta z_t\right\|_2\right)^2 = \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}\left(x^\star_{t+1} - x^p_{t+1}\right)\right) + H_+\left(x^\star_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2 \]

\[ = \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_+\right)\left(x^\star_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2 \]

where the partial derivatives of $M$ are evaluated at $(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$.
Approximating the derivatives using the $H$ matrices,

\[ \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_+ \]

so that the coefficient on $\left(x^\star_{t+1} - x^p_{t+1}\right)$ vanishes, produces

\[ \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2\right)^2 \]
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $x_{t-1}, \varepsilon_t, x_t$:

\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+1}(x_t) \mid s_{t+1} = j\right) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right) \]
If we know the current state, we can use the known conditional expectation function for the state:

\[ \begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix} \]

For the next state, we can compute expectations if the errors are independent:

\[ \begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix} \]
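The probability-weighted mixing above can be sketched in a few lines. The transition matrix, the per-regime conditional expectation functions, and all numbers below are illustrative stand-ins, not values from the paper:

```python
# Two-regime sketch of E_t[x_{t+1}] = sum_j p_{ij} E(x_{t+1} | s_{t+1} = j)
# with constant transition probabilities P[i][j] = Prob(s_{t+1}=j | s_t=i).
P = [[0.9, 0.1],
     [0.2, 0.8]]

# Hypothetical per-regime conditional expectation functions E(x_{t+1}|s=j).
regime_expectations = [lambda x: 0.95 * x,   # regime 0
                       lambda x: 0.90 * x]   # regime 1

def expected_next(x_t, s_t):
    """Mix the regime-conditional expectations with row s_t of P."""
    return sum(P[s_t][j] * regime_expectations[j](x_t) for j in range(len(P)))

e0 = expected_next(1.0, 0)   # 0.9 * 0.95 + 0.1 * 0.90
```

Iterating this mixing one step further, with independent errors, is exactly the $(P \otimes I)$ stacking shown above.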
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates are different, $d_0 > d_1$:

\[ \mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij} \]
If $s_t = 0$, $\mu_t > 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} = 0$:

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} + \delta (1 - d_0)\right)\\
&I_t - \left(k_t - (1 - d_0) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]
If $s_t = 0$, $\mu_t = 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} \ge 0$:

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} + \delta (1 - d_0)\right)\\
&I_t - \left(k_t - (1 - d_0) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]
If $s_t = 1$, $\mu_t > 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} = 0$:

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} + \delta (1 - d_1)\right)\\
&I_t - \left(k_t - (1 - d_1) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]
If $s_t = 1$, $\mu_t = 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} \ge 0$:

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} + \delta (1 - d_1)\right)\\
&I_t - \left(k_t - (1 - d_1) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]
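The four gated systems above implement complementary slackness for the investment constraint in each regime: either the multiplier is positive and the constraint binds with equality, or the multiplier is zero and the constraint holds with slack. The sketch below illustrates only that gate logic; the parameter values and function names are mine, not from the paper's code.

```python
# Gate logic for the occasionally binding investment constraint.
# d[0] > d[1] are the regime-dependent depreciation rates (illustrative),
# and upsilon_Iss stands in for the floor upsilon * I_ss.
d = [0.10, 0.05]
upsilon_Iss = 0.02

def constraint_gap(k_t, k_prev, s_t):
    """k_t - (1 - d_s) k_{t-1} - upsilon * I_ss."""
    return k_t - (1.0 - d[s_t]) * k_prev - upsilon_Iss

def branch(k_t, k_prev, mu_t, s_t, tol=1e-12):
    """Which complementary-slackness branch applies at this point."""
    gap = constraint_gap(k_t, k_prev, s_t)
    if mu_t > tol and abs(gap) <= tol:
        return "binding"          # mu > 0 and the constraint holds exactly
    if abs(mu_t) <= tol and gap >= -tol:
        return "slack"            # mu = 0 and the constraint is not binding
    return "infeasible"           # the proposed point violates both gates

b = branch(k_t=0.2, k_prev=0.2, mu_t=0.0, s_t=0)
```

In the paper's algorithm each gated system returns its Boolean gate values alongside a candidate $x_t$, and an outer selection function picks the branch whose gates are satisfied.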
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: The linear reference model
x^p(x_{t-1}, ε_t): The proposed decision rule x_t components
z^p(x_{t-1}, ε_t): The proposed decision rule z_t components
X^p(x_{t-1}): The proposed decision rule conditional expectations x_t components
Z^p(x_{t-1}): The proposed decision rule conditional expectations z_t components
κ: One less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: Model evaluation points; Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): The equation system R^{3N_x + N_ε + N_x} → R^{N_x}
input: (L, H_0, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option ``normConvTol'' (default 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, ..., H_c
Algorithm 1: nestInterp
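The outer loop of nestInterp is a fixed-point iteration on the decision-rule functions. The toy sketch below mirrors the NestWhile structure in plain Python; the update rule, test points, and tolerance default are illustrative stand-ins (the tolerance echoes the paper's normConvTol of $10^{-10}$).

```python
# Fixed-point iteration in the style of nestInterp: repeatedly apply the
# rule-updating step until decision-rule values at the test points settle.
def nest_interp(update, h0, test_points, tol=1e-10, max_iter=200):
    h = h0
    for _ in range(max_iter):
        h_next = update(h)
        # Sup-norm change of the rule over the test points.
        diff = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if diff < tol:
            break
    return h

# Toy stand-in for doInterp whose fixed point is h(x) = 2x (illustrative).
def toy_update(h):
    return lambda x, h=h: 0.5 * h(x) + x

h_star = nest_interp(toy_update, lambda x: 0.0, [1.0, 2.0, 3.0])
```

The contraction here converges geometrically, which is the behavior the convergence test at the evaluation points is designed to detect.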
input: (L, H_i, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples, d
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
  for i = 1; i < κ - 1; i++ do
3   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
4 end
5 {(X_{t+1}, Z_{t+1}), ..., (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, ..., Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [ Bx + (I - F)^{-1} φ ψ_c + ψ_ε ε ; 0_{N_x} ]
3 G(x) = [ Bx + (I - F)^{-1} φ ψ_c ; 0_{N_x} ]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ-1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2 x_{t+1} = B x_t + (I - F)^{-1} φ ψ_c + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
output: {γ, G}
Algorithm 12: genInterpData
input: (C(x_{t-1}, ε_t))
1 Evaluate the model equations obeying pre and post conditions.
output: x_t or failed
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 19: genSolutionPath
of the model variables. As a result, the algorithm determines linearly weighted polynomials characterizing the trajectories for the usual variables x_t(x_{t-1}, ε_t) as well as new z_t(x_{t-1}, ε_t) variables. These new constraints help keep proposed solutions bounded during the solution iterations, thus improving algorithm reliability and accuracy.
5.3 Algorithm Overview

I closely follow the techniques outlined in (Judd et al., 2013, 2014). As a result, much of what must be done to use this algorithm is the same as for the usual approach. One must specify an initial guess for the decision rule x(x_{t-1}, ε, k) and obtain conditional expectations functions. I use an-isotropic Smolyak Lagrange interpolation with an adaptive domain. I precompute all of the conditional expectations for the integrals as outlined in (Judd et al., 2017b), so that the time t computations (51) are all deterministic.
There are a number of new steps. One must specify a linear reference model, and one must choose a value for K, the number of terms in the series representation. There are trade-offs: fewer terms means more truncation error, while more terms correspond to more accuracy when the decision rule is accurate but are computationally more expensive. The initial guess is typically some distance away from the solution; consequently, the terms in the series will be incorrect. The Smolyak grid adds more error to the calculation. If there are many grid points, it is more likely that increasing K can improve the quality of the solution; if the grid is too sparse, increasing K will likely be less useful.
5.3.1 Single Equation System Case

For now, consider a single model equation system case with no Boolean gates. A description of the actual multi-system implementation follows. We seek

\[ M(x_{t-1}, x(x_{t-1}, \varepsilon_t), X(x(x_{t-1}, \varepsilon_t)), \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t) \]
Given a proposed model solution

\[ x_t = x^p(x_{t-1}, \varepsilon_t) \]

compute

\[ X^p(x_{t-1}) \equiv E\left[x^p(x_{t-1}, \varepsilon_t)\right]. \]
We will use a linear reference model L ≡ {H, ψ_ε, ψ_c, B, φ, F} to construct a series of z^p functions that help improve the accuracy of the proposed solution.

We can define functions z^p, Z^p by

\[ z^p(x_{t-1}, \varepsilon) \equiv H \begin{bmatrix} x_{t-1} \\ x^p(x_{t-1}, \varepsilon) \\ X^p(x(x_{t-1}, \varepsilon)) \end{bmatrix} + \psi_\varepsilon \varepsilon_t + \psi_c \]

\[ Z^p(x_{t-1}) \equiv E\left[z^p(x_{t-1}, \varepsilon)\right] \]
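Evaluating z^p at a point is just the stacked linear map applied to the lagged state, the proposed rule, and its conditional expectation. A scalar sketch, where H_-, H_0, H_+ and the proposed rule are illustrative stand-ins (not values from the paper):

```python
# Scalar stand-in for z^p(x_{t-1}, eps): apply the linear reference model's
# H blocks to (x_{t-1}, x^p_t, X^p(x^p_t)) and add the shock/constant loadings.
Hm, H0, Hp = 0.3, -1.0, 0.6        # H_{-1}, H_0, H_1 in the text
psi_eps, psi_c = 1.0, 0.0

def x_proposed(x_prev, eps):        # hypothetical proposed decision rule
    return 0.8 * x_prev + eps

def X_proposed(x):                  # its conditional expectation (E[eps] = 0)
    return 0.8 * x

def z_proposed(x_prev, eps):
    x_t = x_proposed(x_prev, eps)
    return (Hm * x_prev + H0 * x_t + Hp * X_proposed(x_t)
            + psi_eps * eps + psi_c)

z0 = z_proposed(1.0, 0.0)
```

If the proposed rule solved the linear reference model exactly, z^p would be identically zero; nonzero values are what the series terms feed back into the next iterate.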
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

\[ x_{t+k+1} = X(x_{t+k}), \qquad z_{t+k+1} = Z(x_{t+k}) \qquad \forall k \ge 0 \]
Using the z^p(x_{t-1}, ε) series:

• We get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

\[ X_t = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, Z(x_{t+\nu}) \]

\[ E[X_{t+1}] = B X_t + (I - F)^{-1} \phi \psi_c + \sum_{\nu=1}^{\infty} F^{\nu-1} \phi\, Z(x_{t+\nu}) \qquad \forall t, k \ge 0 \]

• Use the model equations M(x_{t-1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t-1}, ε_t) = X_t(x_{t-1}, ε_t) to determine x^{p′}(x_{t-1}, ε), z^{p′}(x_{t-1}, ε).

• Set x^p(x_{t-1}, ε) = x^{p′}(x_{t-1}, ε), z^p(x_{t-1}, ε) = z^{p′}(x_{t-1}, ε). Repeat loop.
5.3.2 Multiple Equation System Case

An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation. Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: The linear reference model
x^p(x_{t-1}, ε_t): The proposed decision rule x_t components
z^p(x_{t-1}, ε_t): The proposed decision rule z_t components
X^p(x_{t-1}): The proposed decision rule conditional expectations x_t components
Z^p(x_{t-1}): The proposed decision rule conditional expectations z_t components
κ: One less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak an-isotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: Model evaluation points; Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): The equation system R^{3N_x + N_ε + N_x} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
nestInterp
 └─ doInterp
     ├─ genFindRootFuncs
     │   └─ genFindRootWorker
     │       ├─ genZsForFindRoot
     │       └─ genLilXkZkFunc
     │           ├─ fSumC
     │           ├─ genXtOfXtm1
     │           └─ genXtp1OfXt
     └─ makeInterpFuncs
         ├─ genInterpData
         │   └─ evaluateTriple
         └─ interpDataToFunc

Figure 10: Function Call Tree
nestInterp: Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp: Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs: The function can either use equation (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel, and sub-components of this step exploit parallelism.
genLilXkZkFunc: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).
makeInterpFuncs: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = Log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known, and to compare the various approximations to the known actual solution. In what follows, both the traditional and the new approach use an-isotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables that constitute an improved set of variables for constructing function approximation values.
[Two panels, each titled "Ergodic Values for K and θ": the left panel plots the simulated ergodic values of (k_t, θ_t); the right panel plots the transformed variables.]

Figure 11: The Ergodic Values for k_t, θ_t
\[ \bar{X} = \begin{bmatrix} 0.18853 \\ 1.00466 \\ 0 \end{bmatrix} \qquad \sigma_X = \begin{bmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{bmatrix} \qquad V = \begin{bmatrix} -0.707107 & -0.707107 & 0 \\ -0.707107 & 0.707107 & 0 \\ 0 & 0 & 1 \end{bmatrix} \]

\[ \max U = \begin{bmatrix} 3.02524 \\ 0.199124 \\ 3 \end{bmatrix} \qquad \min U = \begin{bmatrix} -3.16879 \\ -0.17629 \\ -3 \end{bmatrix} \]
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
Columns: k_i, θ_i, ε_i for rows i = 1, ..., 30:
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2: RBC Known Solution Model Evaluation Points, d=(1,1,1)
Columns: c_i, k_i, θ_i for rows i = 1, ..., 30:
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3: RBC Known Solution Values at Evaluation Points, d=(1,1,1)
Columns: ln|e_{c,i}|, ln|e_{k,i}|, ln|e_{θ,i}| for rows i = 1, ..., 30:
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4: RBC Known Solution Actual Errors, d=(1,1,1)
Columns: ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}| for rows i = 1, ..., 30:
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
[Numeric body omitted: a 30 × 3 table of ($k_i$, $\theta_i$, $\varepsilon_i$) at evaluation points $i = 1,\dots,30$.]

Table 6: RBC Known Solution, Model Evaluation Points, d = (2, 2, 2)
[Numeric body omitted: a 30 × 3 table of ($k_i$, $\theta_i$, $\varepsilon_i$) at evaluation points $i = 1,\dots,30$.]

Table 7: RBC Known Solution, Values at Evaluation Points, d = (2, 2, 2)
[Numeric body omitted: a 30 × 3 table of ($\ln|e_{c,i}|$, $\ln|e_{k,i}|$, $\ln|e_{\theta,i}|$) at evaluation points $i = 1,\dots,30$.]

Table 8: RBC Known Solution, Actual Errors, d = (2, 2, 2)
[Numeric body omitted: a 30 × 3 table of ($\ln|\hat{e}_{c,i}|$, $\ln|\hat{e}_{k,i}|$, $\ln|\hat{e}_{\theta,i}|$) at evaluation points $i = 1,\dots,30$.]

Table 9: RBC Known Solution, Error Approximations, d = (2, 2, 2)
K           d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries   -6.51367      -12.2771      -16.8947
0           -6.0793       -6.26833      -6.29843
1           -6.00805      -6.24202      -6.49835
2           -6.52219      -7.43876      -7.04785
3           -6.57578      -8.03864      -8.32589
4           -6.62831      -9.1522       -9.49314
5           -6.77305      -10.5516      -10.4247
6           -6.38499      -11.6399      -11.455
7           -6.63849      -11.3627      -13.0213
8           -6.38735      -11.7288      -13.9976
9           -6.46759      -11.9826      -15.3372
10          -6.45966      -12.1345      -15.4157
10          -6.77776      -11.5025      -15.8211
15          -6.67778      -12.4593      -15.8899
20          -6.40802      -11.7296      -17.1652
25          -6.53265      -12.1441      -17.1518
30          -6.81949      -11.6097      -16.3748
35          -6.33508      -12.1446      -16.6963
40          -6.52442      -11.4042      -15.6302

Table 10: RBC Known Solution, Impact of Series Length K on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 12 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 15 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
[Numeric body omitted: a 30 × 3 table of ($k_i$, $\theta_i$, $\varepsilon_i$) at evaluation points $i = 1,\dots,30$.]

Table 11: RBC Approximated Solution, Model Evaluation Points, d = (1, 1, 1)
[Numeric body omitted: a 30 × 3 table of ($c_i$, $k_i$, $\theta_i$) at evaluation points $i = 1,\dots,30$.]

Table 12: RBC Approximated Solution, Values at Evaluation Points, d = (1, 1, 1)
[Numeric body omitted: a 30 × 3 table of ($\ln|\hat{e}_{c,i}|$, $\ln|\hat{e}_{k,i}|$, $\ln|\hat{e}_{\theta,i}|$) at evaluation points $i = 1,\dots,30$.]

Table 13: RBC Approximated Solution, Error Approximations, d = (1, 1, 1)
[Numeric body omitted: a 30 × 3 table of ($k_i$, $\theta_i$, $\varepsilon_i$) at evaluation points $i = 1,\dots,30$.]

Table 14: RBC Approximated Solution, Model Evaluation Points, d = (2, 2, 2)
[Numeric body omitted: a 30 × 3 table of decision-rule values at evaluation points $i = 1,\dots,30$.]

Table 15: RBC Approximated Solution, Values at Evaluation Points, d = (2, 2, 2)
[Numeric body omitted: a 30 × 3 table of ($\ln|\hat{e}_{c,i}|$, $\ln|\hat{e}_{k,i}|$, $\ln|\hat{e}_{\theta,i}|$) at evaluation points $i = 1,\dots,30$.]

Table 16: RBC Approximated Solution, Error Approximations, d = (2, 2, 2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \ge \upsilon I_{ss}$$

$$\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$

$$c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$

$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta}-1}{1-\eta}$$
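These functional forms pin down a deterministic steady state, which anchors the constraint level $\upsilon I_{ss}$. As a quick illustration, the steady state can be computed in a few lines; this is a minimal Python sketch (the paper's prototype is in Mathematica), and the parameter values are illustrative assumptions, not the paper's calibration:

```python
# Steady state of the RBC block with f(k) = k^alpha, theta = 1.
# alpha, delta, d below are illustrative assumptions, not the paper's values.
alpha, delta, d = 0.36, 0.95, 0.1

# Steady-state Euler equation: 1 = delta * (alpha * k^(alpha-1) + 1 - d)
k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))

# Resource constraint: c + k = (1-d) k + k^alpha  =>  c = k^alpha - d k
c_ss = k_ss ** alpha - d * k_ss

# Steady-state investment, the reference level in the constraint I_t >= upsilon * I_ss
I_ss = d * k_ss

# Euler residual at the computed steady state should be ~0
resid = 1.0 - delta * (alpha * k_ss ** (alpha - 1.0) + 1.0 - d)
print(k_ss, c_ss, I_ss, resid)
```
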
$$\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) + \sum_{t=0}^{\infty} \Bigl[ \delta^t \lambda_t \bigl(c_t + k_t - ((1-d)k_{t-1} + \theta_t f(k_{t-1}))\bigr) + \delta^t \mu_t \bigl((k_t - (1-d)k_{t-1}) - \upsilon I_{ss}\bigr) \Bigr]$$
so that we can impose

$$I_t = k_t - (1-d)k_{t-1} \ge \upsilon I_{ss}.$$
The first-order conditions become

$$\frac{1}{c_t^{\eta}} + \lambda_t = 0$$

$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} = 0$$
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If $\mu_t > 0$ and $k_t - (1-d)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon}\\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\bigr)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t\bigl(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}$$
If $\mu_t = 0$ and $k_t - (1-d)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon}\\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\bigr)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t\bigl(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}$$
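The two branches above form a complementarity problem: at each point either the multiplier is positive and the investment constraint binds with equality, or the multiplier is zero and the constraint is slack. A hedged sketch of that branch-selection logic follows; the function name, default parameter values, and tolerance are illustrative assumptions, not the paper's code:

```python
def investment_constraint_branch(k_t, k_prev, mu_t, I_ss,
                                 d=0.1, upsilon=0.975, tol=1e-10):
    """Classify which complementarity branch holds at a candidate point.

    Returns "binding" when mu_t > 0 and I_t - upsilon*I_ss = 0 (up to tol),
    "slack" when mu_t = 0 and I_t - upsilon*I_ss >= 0, "violated" otherwise.
    All parameter defaults are illustrative, not the paper's calibration.
    """
    # I_t = k_t - (1-d) k_{t-1}; gap = I_t - upsilon * I_ss
    gap = (k_t - (1.0 - d) * k_prev) - upsilon * I_ss
    if mu_t > tol and abs(gap) <= tol:
        return "binding"
    if abs(mu_t) <= tol and gap >= -tol:
        return "slack"
    return "violated"
```

A solver can use such a classifier to decide which block of equations to impose at each evaluation point.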
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
[Numeric body omitted: a 30 × 3 table of ($k_i$, $\theta_i$, $\varepsilon_i$) at evaluation points $i = 1,\dots,30$.]

Table 17: Occasionally Binding Constraints, Model Evaluation Points, d = (1, 1, 1)
[Numeric body omitted: a 30 × 3 table of ($c_i$, $I_i$, $k_i$) at evaluation points $i = 1,\dots,30$.]

Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d = (1, 1, 1)
[Numeric body omitted: a 30 × 3 table of ($\ln|\hat{e}_{c,i}|$, $\ln|\hat{e}_{k,i}|$, $\ln|\hat{e}_{\theta,i}|$) at evaluation points $i = 1,\dots,30$.]

Table 19: Occasionally Binding Constraints, Error Approximations, d = (1, 1, 1)
[Numeric body omitted: a 30 × 3 table of ($k_i$, $\theta_i$, $\varepsilon_i$) at evaluation points $i = 1,\dots,30$.]

Table 20: Occasionally Binding Constraints, Model Evaluation Points, d = (2, 2, 2)
[Numeric body omitted: a 30 × 3 table of decision-rule values at evaluation points $i = 1,\dots,30$.]

Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d = (2, 2, 2)
[Numeric body omitted: a 30 × 3 table of ($\ln|\hat{e}_{c,i}|$, $\ln|\hat{e}_{k,i}|$, $\ln|\hat{e}_{\theta,i}|$) at evaluation points $i = 1,\dots,30$.]

Table 22: Occasionally Binding Constraints, Error Approximations, d = (2, 2, 2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time-invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time $t$ solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc, February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, 5 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm And The Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, 11 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time-invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x_t^* = g^*(x_{t-1}, \varepsilon_t)$, define the conditional expectations function
$$G^*(x) \equiv E\left[g^*(x, \varepsilon)\right];$$

then with

$$E_t x_{t+1} = G^*(g^*(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x_t^*, E_t x_{t+1}^*, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
In words, the exact solution exactly satisfies the model equations. Using $G^*$ and $L$, construct the family of trajectories and the corresponding $z_t^*(x_{t-1}, \varepsilon)$:

$$\left\{x_t^*(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x_t^*(x_{t-1}, \varepsilon_t)\|_2 \le X \ \forall t > 0\right\}$$

$$z_t^*(x_{t-1}, \varepsilon_t) \equiv H_{-1}x_{t-1}^*(x_{t-1}, \varepsilon_t) + H_0 x_t^*(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}^*(x_{t-1}, \varepsilon_t)$$
Consequently, the exact solution has a representation given by

$$x_t^*(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^*(x_{t-1}, \varepsilon)$$

and

$$E\left[x_{t+1}^*(x_{t-1}, \varepsilon)\right] = Bx_t^* + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z_{t+1+\nu}^*(x_{t-1}, \varepsilon)\right] + (I - F)^{-1}\phi\psi_c$$

with

$$M(x_{t-1}, x_t^*, E_t x_{t+1}^*, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
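The infinite sums in the representation above converge because $F$ is a stable matrix (spectral radius below one), in which case $\sum_{\nu} F^{\nu} = (I-F)^{-1}$; that identity is what makes the closed-form bounds below computable. A small numerical check, using an arbitrary stable $F$ and $\phi$ chosen purely for illustration:

```python
import numpy as np

# Arbitrary stable F (spectral radius < 1) and weighting matrix phi;
# these values are illustrative only, not taken from the paper.
F = np.array([[0.5, 0.2],
              [0.1, 0.4]])
phi = np.eye(2)

assert max(abs(np.linalg.eigvals(F))) < 1.0  # convergence requirement

# Truncated sum of F^nu @ phi versus the closed form (I - F)^{-1} @ phi
S = np.zeros_like(F)
F_nu = np.eye(2)
for _ in range(200):
    S += F_nu @ phi
    F_nu = F_nu @ F

closed = np.linalg.inv(np.eye(2) - F) @ phi
print(np.max(np.abs(S - closed)))  # negligible truncation error
```
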
Now consider a proposed solution for the model, $x_t^p = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$ so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e_t^p(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \varepsilon_t).$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $L$, construct the family of trajectories and the corresponding $z_t^p(x_{t-1}, \varepsilon)$:¹²

$$\left\{x_t^p(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x_t^p(x_{t-1}, \varepsilon_t)\|_2 \le X \ \forall t > 0\right\}$$

$$z_t^p(x_{t-1}, \varepsilon_t) \equiv H_{-1}x_{t-1}^p(x_{t-1}, \varepsilon_t) + H_0 x_t^p(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}^p(x_{t-1}, \varepsilon_t)$$
The proposed solution has a representation given by

$$x_t^p(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^p(x_{t-1}, \varepsilon)$$

and

$$E\left[x_{t+1}^p(x_{t-1}, \varepsilon)\right] = Bx_t^p + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+1+\nu}^p(x_{t-1}, \varepsilon) + (I - F)^{-1}\phi\psi_c$$
with

$$e_t^p(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \varepsilon_t).$$

Subtracting the two representations,

$$x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\,(z_{t+\nu}^*(x_{t-1}, \varepsilon) - z_{t+\nu}^p(x_{t-1}, \varepsilon)).$$

Defining

$$\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t) \equiv z_{t+\nu}^*(x_{t-1}, \varepsilon) - z_{t+\nu}^p(x_{t-1}, \varepsilon),$$

we have

$$x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\,\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t)$$

$$\|x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon)\|_\infty \le \sum_{\nu=0}^{\infty} \|F^{\nu}\phi\|\,\|\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t)\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z_t^p$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution therefore leads to a conservative bound on the largest change in $z$ needed to match the exact solution:
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability-weighted averages of the $\Delta z_t^p$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z_{t+k}^p$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_-, \varepsilon}\ \left\|\phi\, M\bigl(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon\bigr)\right\|_2$$

$$\|x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon)\|_\infty \le \max_{x_-, \varepsilon}\ \left\|(I - F)^{-1}\phi\, M\bigl(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon\bigr)\right\|_2$$
$$z_t^* = H_- x_{t-1}^* + H_0 x_t^* + H_+ x_{t+1}^*$$

$$z_t^p = H_- x_{t-1}^p + H_0 x_t^p + H_+ x_{t+1}^p$$

$$0 = M(x_{t-1}, x_t^*, x_{t+1}^*, \varepsilon_t)$$

$$e_t^p(x_{t-1}, \varepsilon) = M(x_{t-1}^p, x_t^p, x_{t+1}^p, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left(\|\phi\,\Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\bigl(H_0(x_t^* - x_t^p) + H_+(x_{t+1}^* - x_{t+1}^p)\bigr)\right\|_2\right)^2$$
with

$$M(x_{t-1}, x_t^*, x_{t+1}^*, \varepsilon_t) = 0,$$

$$\Delta e_t^p(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x_t^* - x_t^p) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p).$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular we can write

$$(x_t^* - x_t^p) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e_t^p(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p)\right).$$
Collecting results,

$$\left(\|\phi\,\Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e_t^p(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p)\right) + H_+(x_{t+1}^* - x_{t+1}^p)\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e_t^p(x_{t-1}, \varepsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_+\right)(x_{t+1}^* - x_{t+1}^p)\right)\right\|_2\right)^2.$$
Approximating the derivatives using the H matrices,

∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,  ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+

produces

max_{x_{t−1}, ε_t} (‖φ Δe^p_t(x_{t−1}, ε)‖_2)^2
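To make the error formulas concrete, the following Python sketch evaluates both the local approximation max ‖φ e^p_t‖₂ and the conservative global bound max ‖(I − F)^{−1} φ e^p_t‖₂ over a set of test points. The dimensions, matrices, and residual function are illustrative stand-ins, not the paper's model:

```python
import numpy as np

# Illustrative stand-in dimensions and matrices (not the paper's model).
rng = np.random.default_rng(0)
n = 3
phi = 0.1 * rng.standard_normal((n, n))   # phi from the linear reference model
F = np.diag([0.5, 0.3, 0.1])              # stable: spectral radius < 1

def residual(x_lag, eps):
    """Stand-in for the model residual e_t^p = M(x^p_{t-1}, x^p_t, x^p_{t+1}, eps_t)."""
    return 0.01 * np.tanh(x_lag) + 0.001 * eps

test_points = [(rng.standard_normal(n), rng.standard_normal(n))
               for _ in range(200)]

# Local error approximation: max over test points of ||phi e_t^p||_2
local_err = max(np.linalg.norm(phi @ residual(x, e)) for x, e in test_points)

# Conservative global bound: max over test points of ||(I - F)^{-1} phi e_t^p||_2
IF_inv = np.linalg.inv(np.eye(n) - F)
global_err = max(np.linalg.norm(IF_inv @ phi @ residual(x, e))
                 for x, e in test_points)

print(local_err, global_err)
```

In practice the test points would come from the model-variable ergodic region rather than a standard normal draw.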
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial x(x_{t−1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t−1}, ε_t, x_t).
E_t[x_{t+1}] = Σ_{j=1}^N p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)
If we know the current state, we can use the known conditional expectation function for the state:

[ E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N) ]′

For the next state, we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1), …, E_t(x_{t+2}(x_t) | s_t = N) ]′ = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1), …, E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]′
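Under the assumption of constant transition probabilities, the stacked-expectations identity above is a Kronecker-product multiplication. A small numpy sketch with made-up values for P and the regime-conditional expectations:

```python
import numpy as np

# Sketch: combining per-regime conditional expectations with constant
# transition probabilities. Dimensions and values are illustrative.
N, n = 2, 3                       # number of regimes, state dimension
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # P[i, j] = Prob(s_{t+1} = j | s_t = i)

# E_t(x_{t+2} | s_{t+1} = j): one vector per possible next-period regime
cond = np.vstack([np.full(n, 1.0),    # expectation conditional on regime 0
                  np.full(n, 2.0)])   # expectation conditional on regime 1

# Stacked expectations: block row i gives E_t[x_{t+2}] when the current regime is i
stacked = np.kron(P, np.eye(n)) @ cond.reshape(N * n)
# regime 0: 0.9*1 + 0.1*2 = 1.1 ;  regime 1: 0.2*1 + 0.8*2 = 1.8
print(stacked.reshape(N, n))
```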
Consider two states s_t ∈ {0, 1}, where the depreciation rates differ, d_0 > d_1, and

Prob(s_t = j | s_{t−1} = i) = p_{ij}
If (s_t = 0 and μ > 0 and (k_t − (1 − d_0)k_{t−1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_0) + δ(1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0)k_{t−1})
μ_t (k_t − (1 − d_0)k_{t−1} − υI_ss)
If (s_t = 0 and μ = 0 and (k_t − (1 − d_0)k_{t−1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_0) + δ(1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0)k_{t−1})
μ_t (k_t − (1 − d_0)k_{t−1} − υI_ss)
If (s_t = 1 and μ > 0 and (k_t − (1 − d_1)k_{t−1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_1) + δ(1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1)k_{t−1})
μ_t (k_t − (1 − d_1)k_{t−1} − υI_ss)
If (s_t = 1 and μ = 0 and (k_t − (1 − d_1)k_{t−1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_1) + δ(1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1)k_{t−1})
μ_t (k_t − (1 − d_1)k_{t−1} − υI_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system R^{3N_x+N_ε+N_x} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10^{−10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
input: (L, H_i, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) from one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one equation from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1; i < κ − 1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5: end
6: {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ B x + (I − F)^{−1} φ ψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x, ε) = [ B x + (I − F)^{−1} φ ψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
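genBothX0Z0Funcs seeds the iteration with the linear reference model's rule. A numeric sketch of that initial proposal, with illustrative matrices that are not taken from the paper:

```python
import numpy as np

# Numeric sketch of the initial proposal: x = B x_{t-1} + (I-F)^{-1} phi psi_c
# + psi_eps eps, with the z components initialized to zero.
# All matrices below are illustrative placeholders.
n = 2
B = np.array([[0.9, 0.0], [0.1, 0.8]])
F = np.array([[0.4, 0.0], [0.0, 0.2]])
phi = np.eye(n)
psi_c = np.array([0.1, 0.1])
psi_eps = np.eye(n)

def gamma0(x, eps):
    """Initial proposed decision rule from the linear reference model."""
    x_part = B @ x + np.linalg.inv(np.eye(n) - F) @ (phi @ psi_c) + psi_eps @ eps
    z_part = np.zeros(n)          # 0_{N_x}: proposed z components start at zero
    return np.concatenate([x_part, z_part])

print(gamma0(np.zeros(n), np.zeros(n)))
```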
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3: xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc
input: (L, {x_{t−1}, ε_t, z_t}, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2: x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_{t−1}, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2: x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t

Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sums the F contribution for the series.
2: S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
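A minimal numeric analogue of fSumC: truncating the anticipated-shocks contribution S = Σ_ν F^{ν−1} φ Z_{t+ν} at the available path length. The matrices and the Z path below are illustrative placeholders:

```python
import numpy as np

# Truncated series sum S = sum_{nu=1}^{kappa-1} F^{nu-1} phi Z_{t+nu}.
n = 2
F = np.array([[0.5, 0.0], [0.1, 0.4]])
phi = np.eye(n)
Zpath = [np.array([1.0, 0.0]),    # Z_{t+1}
         np.array([0.0, 1.0])]    # Z_{t+2}

S = np.zeros(n)
Fpow = np.eye(n)                  # F^{nu-1}, starting at F^0
for Z in Zpath:
    S += Fpow @ (phi @ Z)
    Fpow = Fpow @ F

print(S)                          # F^0 phi Z_{t+1} + F^1 phi Z_{t+2}
```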
input: ({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at collocation points, producing interpolation data for the decision rule and decision rule conditional expectation.
output: interpData

Algorithm 12: genInterpData
input: (C(x_{t−1}, ε_t))
1: Evaluate the model equations obeying pre- and post-conditions.
output: x_t or "failed"

Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
• Algorithm loop begins here. Define conditional expectations paths for x_t, z_t:

x_{t+k+1} = X(x_{t+k}),  z_{t+k+1} = Z(x_{t+k})  ∀ k ≥ 0,

using the z^p(x_{t−1}, ε) series.

• We get expressions for x_t, E[x_{t+1}] consistent with L and the conditional expectations path:

X_t = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ Z(x_{t+ν})

E[X_{t+1}] = B X_t + (I − F)^{−1} φ ψ_c + Σ_{ν=1}^∞ F^{ν−1} φ Z(x_{t+ν})  ∀ t, k ≥ 0

• Use the model equations M(x_{t−1}, x_t, E[x_{t+1}], ε) = 0 and x_t(x_{t−1}, ε_t) = X_t(x_{t−1}, ε_t) to determine x^{p′}(x_{t−1}, ε), z^{p′}(x_{t−1}, ε).

• Set x^p(x_{t−1}, ε) = x^{p′}(x_{t−1}, ε), z^p(x_{t−1}, ε) = z^{p′}(x_{t−1}, ε); repeat the loop.
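The loop above is a fixed-point iteration on the proposed decision rule: start from the linear reference model's rule, rebuild the conditional expectations path, re-solve the model equations, and stop when the rule no longer changes at test points. A minimal Python sketch of that control flow, with a toy contraction standing in for the model-solving step (nothing here is the paper's Mathematica code):

```python
# Schematic fixed-point iteration on a decision rule, judged at test points.
def iterate_rule(update, rule0, test_pts, tol=1e-10, max_iter=200):
    """Iterate rule -> update(rule) until the rule stops changing at test points."""
    rule = rule0
    for _ in range(max_iter):
        new_rule = update(rule)
        gap = max(abs(new_rule(p) - rule(p)) for p in test_pts)
        rule = new_rule
        if gap < tol:
            break
    return rule

def update(rule):
    # Toy contraction standing in for "solve M given the expectations path":
    return lambda p: 0.5 * rule(p) + 1.0   # fixed point: rule(p) == 2

sol = iterate_rule(update, lambda p: 0.0, test_pts=[0.0, 1.0])
print(sol(0.0))
```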
5.3.2 Multiple Equation System Case
An outline of the current Mathematica implementation of the algorithm follows. Table 1 provides notation; Figure 10 provides a graphic depiction of the function call tree. For more
L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system R^{3N_x+N_ε+N_x} → R^{N_x}

Table 1: Algorithm Notation
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below, I note where these features play a role.
[Figure 10 depicts the call tree: nestInterp calls doInterp, which calls genFindRootFuncs and makeInterpFuncs; genFindRootFuncs calls genFindRootWorker, which calls genZsForFindRoot and genLilXkZkFunc (itself calling fSumC, genXtOfXtm1, and genXtp1OfXt); makeInterpFuncs calls genInterpData (which calls evaluateTriple) and interpDataToFunc.]

Figure 10: Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.

doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t−1}, ε_t) for the family of model equations {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.

genFindRootFuncs — The function can use the series expression (2) or omit it, thus implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
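The parallel construction noted for genFindRootFuncs can be sketched generically: each per-equation solver is built independently and the results are collected afterward. This Python thread-pool sketch uses placeholder builders and is not the paper's Mathematica implementation:

```python
from concurrent.futures import ThreadPoolExecutor

def build_solver(eq_id):
    """Placeholder for symbolically building one equation's root-finder."""
    return lambda x: x + eq_id      # stand-in for the generated function

# Build the per-equation solvers in parallel; each construction is independent.
with ThreadPoolExecutor() as pool:
    solvers = list(pool.map(build_solver, range(4)))

print([f(10) for f in solvers])     # [10, 11, 12, 13]
```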
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M. This routine provides the information for constructing x_t(x_{t−1}, ε_t) using the series expression (2).
makeInterpFuncs mdash Solves model equations at collocation points producing interpolationdata and subsequently returns the interpolating functions for the decision rule anddecision rule conditional expectation This can be done in parallel
5.4 Approximating a Known Solution: η = 1, U(c) = Log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
[Figure 11: two scatter plots of the simulated ergodic values; the left panel plots (k, θ) in levels (roughly k ∈ [0.17, 0.21], θ ∈ [0.95, 1.10]), and the right panel plots the same points after the parallelotope transformation.]

Figure 11: The Ergodic Values for k_t, θ_t
X = (0.18853, 1.00466, 0)′,  σ_X = (0.0112443, 0.0387807, 0.01)′

V =
−0.707107  −0.707107  0
−0.707107   0.707107  0
 0          0         1

maxU = (3.02524, 0.199124, 3)′,  minU = (−3.16879, −0.17629, −3)′
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
[Table 2 data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 2: RBC Known Solution Model Evaluation Points, d = (1,1,1)
[Table 3 data: 30 rows of decision rule values (c_i, k_i, θ_i), i = 1, …, 30]

Table 3: RBC Known Solution Values at Evaluation Points, d = (1,1,1)
[Table 4 data: 30 rows of actual errors (ln|e_{c_i}|, ln|e_{k_i}|, ln|e_{θ_i}|), i = 1, …, 30]

Table 4: RBC Known Solution Actual Errors, d = (1,1,1)
[Table 5 data: 30 rows of estimated errors (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, …, 30]

Table 5: RBC Known Solution Error Approximations, d = (1,1,1)
[Table 6 data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2)
[Table 7 data: 30 rows of decision rule values at the evaluation points]

Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2)
[Table 8 data: 30 rows of actual errors (ln|e_{c_i}|, ln|e_{k_i}|, ln|e_{θ_i}|), i = 1, …, 30]

Table 8: RBC Known Solution Actual Errors, d = (2,2,2)
[Table 9 data: 30 rows of estimated errors (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, …, 30]

Table 9: RBC Known Solution Error Approximations, d = (2,2,2)
K          d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries  −6.51367      −12.2771      −16.8947
0          −6.0793       −6.26833      −6.29843
1          −6.00805      −6.24202      −6.49835
2          −6.52219      −7.43876      −7.04785
3          −6.57578      −8.03864      −8.32589
4          −6.62831      −9.1522       −9.49314
5          −6.77305      −10.5516      −10.4247
6          −6.38499      −11.6399      −11.455
7          −6.63849      −11.3627      −13.0213
8          −6.38735      −11.7288      −13.9976
9          −6.46759      −11.9826      −15.3372
10         −6.45966      −12.1345      −15.4157
10         −6.77776      −11.5025      −15.8211
15         −6.67778      −12.4593      −15.8899
20         −6.40802      −11.7296      −17.1652
25         −6.53265      −12.1441      −17.1518
30         −6.81949      −11.6097      −16.3748
35         −6.33508      −12.1446      −16.6963
40         −6.52442      −11.4042      −15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 11 data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 11: RBC Approximated Solution Model Evaluation Points, d = (1,1,1)
[Table 12 data: 30 rows of decision rule values (c_i, k_i, θ_i), i = 1, …, 30]

Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1,1,1)
[Table 13 data: 30 rows of estimated errors (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, …, 30]

Table 13: RBC Approximated Solution Error Approximations, d = (1,1,1)
[Table 14 data: 30 evaluation points (k_i, θ_i, ε_i), i = 1, …, 30]

Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2)
[Table 15 data: 30 rows of decision rule values at the evaluation points]

Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2,2,2)
[Table 16 data: 30 rows of estimated errors (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, …, 30]

Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2)
43
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

\[ I_t \ge \upsilon I_{ss} \]

\[ \max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1}) \]

\[ c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t} \]

\[ f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta}-1}{1-\eta} \]
\[ \mathcal{L} = u(c_t) + E_t\left[\sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^{t}\lambda_t\bigl(c_t + k_t - ((1-d)k_{t-1} + \theta_t f(k_{t-1}))\bigr) + \delta^{t}\mu_t\bigl((k_t - (1-d)k_{t-1}) - \upsilon I_{ss}\bigr)\right] \]

so that we can impose

\[ I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}. \]
The first order conditions become

\[ \frac{1}{c_t^{\eta}} + \lambda_t \]

\[ \lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} \]
For example, the Euler equations for the neoclassical growth model can be written as:

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If \(\mu > 0\) and \(k_t - (1-d)k_{t-1} - \upsilon I_{ss} = 0\):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t\theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1}\bigr) \\
&I_t - (k_t - (1-d)k_{t-1}) \\
&\mu_t\bigl(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]

If \(\mu = 0\) and \(k_t - (1-d)k_{t-1} - \upsilon I_{ss} \ge 0\):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t\theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1}\bigr) \\
&I_t - (k_t - (1-d)k_{t-1}) \\
&\mu_t\bigl(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]
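The logic behind these paired gate/equation systems can be sketched in a few lines. The example below is a stand-in toy model, not the paper's RBC system; `solve_with_floor` and its shadow-price rule are hypothetical, chosen only to illustrate trying the binding branch first and accepting it only when its gate conditions hold.

```python
# A minimal sketch of the branch logic behind the piecewise system above:
# solve the "binding" branch first; accept it only if its gate (mu > 0 and
# I_t = upsilon * I_ss) holds, otherwise accept the "slack" branch
# (mu = 0 and I_t >= upsilon * I_ss).
def solve_with_floor(I_unconstrained, floor):
    # Binding branch: I_t = floor; mu is a stand-in shadow price.
    mu = floor - I_unconstrained
    if mu > 0.0:
        return floor, mu          # gate satisfied: constraint binds
    # Slack branch: mu = 0 and the unconstrained choice respects the floor.
    assert I_unconstrained >= floor
    return I_unconstrained, 0.0

I_bind, mu_bind = solve_with_floor(0.15, 0.20)    # floor binds
I_slack, mu_slack = solve_with_floor(0.35, 0.20)  # floor slack
```

In the full algorithm each candidate branch is one equation system handed to the root finder, and the selection function picks the branch whose gate values are consistent.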
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error component by component.
Columns: k_i, θ_i, ε_i, for i = 1, …, 30.

[Thirty rows of numeric values; not reliably recoverable from the source extraction.]

Table 17: Occasionally Binding Constraints: Model Evaluation Points, d = (1,1,1)
Columns: c_i, I_i, k_i, for i = 1, …, 30.

[Thirty rows of numeric values; not reliably recoverable from the source extraction.]

Table 18: Occasionally Binding Constraints: Values at Evaluation Points, d = (1,1,1)
Columns: ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|), for i = 1, …, 30.

[Thirty rows of numeric values; not reliably recoverable from the source extraction.]

Table 19: Occasionally Binding Constraints: Error Approximations, d = (1,1,1)
Columns: k_i, θ_i, ε_i, for i = 1, …, 30.

[Thirty rows of numeric values; not reliably recoverable from the source extraction.]

Table 20: Occasionally Binding Constraints: Model Evaluation Points, d = (2,2,2)
Columns: k_i, θ_i, ε_i, for i = 1, …, 30.

[Thirty rows of numeric values; not reliably recoverable from the source extraction.]

Table 21: Occasionally Binding Constraints: Values at Evaluation Points, d = (2,2,2)
Columns: ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|), for i = 1, …, 30.

[Thirty rows of numeric values; not reliably recoverable from the source extraction.]

Table 22: Occasionally Binding Constraints: Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelskiy and Kenneth L Judd Avoiding the curse of dimensionality in dynamicstochastic games Harvard University and Hoover Institution and NBER November 2004URL filePcompmpepdf
Jess Gaspar and Kenneth L Judd Solving large scale rational expectations models TechnicalReport 0207 National Bureau of Economic Research Inc February 1997 URL filePt0207pdf available at httpideasrepecorgpnbrnberte0207html
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A Peralta-Alva and Manuel Santos Schmedders K and Judd K volume 3 chapter 9 pages517ndash556 Elsevier Science 2014
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as those that often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x^*_t = g(x_{t-1}, ε_t), define the conditional expectations function

\[ G(x) \equiv E\left[g(x,\epsilon)\right] \]

then with

\[ E_t x^*_{t+1} = G(g(x_{t-1},\epsilon_t)) \]

we have

\[ M(x_{t-1},\, x^*_t,\, E_t x^*_{t+1},\, \epsilon_t) = 0 \quad \forall\,(x_{t-1},\epsilon_t). \]
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z^*_t(x_{t-1}, ε):

\[ x^*_t(x_{t-1},\epsilon_t) \in \mathbb{R}^{L}, \quad \|x^*_t(x_{t-1},\epsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \]

\[ z^*_t(x_{t-1},\epsilon_t) \equiv H_{-1}\, x^*_{t-1}(x_{t-1},\epsilon_t) + H_0\, x^*_t(x_{t-1},\epsilon_t) + H_1\, x^*_{t+1}(x_{t-1},\epsilon_t) \]
Consequently, the exact solution has a representation given by

\[ x^*_t(x_{t-1},\epsilon) = B x_{t-1} + \phi\psi_\epsilon\epsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z^*_{t+\nu}(x_{t-1},\epsilon) \]

and

\[ E\left[x^*_{t+1}(x_{t-1},\epsilon)\right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z^*_{t+1+\nu}(x_{t-1},\epsilon)\right] + (I-F)^{-1}\phi\psi_c \]

with

\[ M(x_{t-1},\, x^*_t,\, E_t x^*_{t+1},\, \epsilon_t) = 0 \quad \forall\,(x_{t-1},\epsilon_t). \]
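To make the representation concrete, the sketch below evaluates the truncated series for a hypothetical scalar linear reference model (all matrix values are illustrative, not taken from the paper) and confirms that, with a stable F and a bounded z path, the tail contribution decays geometrically.

```python
import numpy as np

# Hypothetical scalar linear reference model; names mirror the paper's
# L = {H, psi_eps, psi_c, B, phi, F}. Values are illustrative only.
B = np.array([[0.9]])
phi = np.array([[1.0]])
F = np.array([[0.5]])        # stable forward component, |F| < 1
psi_eps = np.array([[1.0]])
psi_c = np.array([0.1])

def series_x(x_prev, eps, z_path):
    """x_t = B x_{t-1} + phi psi_eps eps + (I-F)^{-1} phi psi_c
             + sum_nu F^nu phi z_{t+nu}, truncated at len(z_path)."""
    n = B.shape[0]
    x = B @ x_prev + phi @ psi_eps @ eps
    x = x + np.linalg.inv(np.eye(n) - F) @ phi @ psi_c
    Fpow = np.eye(n)
    for z in z_path:
        x = x + Fpow @ phi @ z
        Fpow = Fpow @ F
    return x

x_prev = np.array([1.0])
eps = np.array([0.05])
# A bounded z path: the tail beyond the truncation shrinks like F^kappa.
z_path = [np.array([0.01]) for _ in range(50)]
x_short = series_x(x_prev, eps, z_path[:5])
x_long = series_x(x_prev, eps, z_path)
tail = abs(x_long - x_short)[0]   # contribution of terms nu = 5..49
```

Here the tail equals the geometric sum of the omitted terms, which is why truncation at modest κ can be accurate when F is strongly stable.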
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

\[ E_t x^p_{t+1} = G^p(g^p(x_{t-1},\epsilon_t)), \qquad e^p_t(x_{t-1},\epsilon) \equiv M(x_{t-1},\, x^p_t,\, E_t x^p_{t+1},\, \epsilon_t). \]
By construction, the conditional expectations paths will all be bounded in the region A_p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, ε):12

\[ x^p_t(x_{t-1},\epsilon_t) \in \mathbb{R}^{L}, \quad \|x^p_t(x_{t-1},\epsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \]

\[ z^p_t(x_{t-1},\epsilon_t) \equiv H_{-1}\, x^p_{t-1}(x_{t-1},\epsilon_t) + H_0\, x^p_t(x_{t-1},\epsilon_t) + H_1\, x^p_{t+1}(x_{t-1},\epsilon_t) \]
The proposed solution has a representation given by

\[ x^p_t(x_{t-1},\epsilon) = B x_{t-1} + \phi\psi_\epsilon\epsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z^p_{t+\nu}(x_{t-1},\epsilon) \]

and

\[ E\left[x^p_{t+1}(x_{t-1},\epsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z^p_{t+1+\nu}(x_{t-1},\epsilon) + (I-F)^{-1}\phi\psi_c \]

with

\[ e^p_t(x_{t-1},\epsilon) \equiv M(x_{t-1},\, x^p_t,\, E_t x^p_{t+1},\, \epsilon_t). \]
\[ x^*_t(x_{t-1},\epsilon) - x^p_t(x_{t-1},\epsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\bigl(z^*_{t+\nu}(x_{t-1},\epsilon) - z^p_{t+\nu}(x_{t-1},\epsilon)\bigr) \]

Defining

\[ \Delta z^p_{t+\nu}(x_{t-1},\epsilon_t) \equiv z^*_{t+\nu}(x_{t-1},\epsilon) - z^p_{t+\nu}(x_{t-1},\epsilon), \]

we have

\[ x^*_t(x_{t-1},\epsilon) - x^p_t(x_{t-1},\epsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\,\Delta z^p_{t+\nu}(x_{t-1},\epsilon_t) \]

\[ \left\|x^*_t(x_{t-1},\epsilon) - x^p_t(x_{t-1},\epsilon)\right\|_{\infty} \le \sum_{\nu=0}^{\infty} \left\|F^{\nu}\phi\right\|\, \left\|\Delta z^p_{t+\nu}(x_{t-1},\epsilon_t)\right\|_{\infty} \]
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x^*_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A_p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
\[ \left\|\Delta z_t\right\| \le \max_{x_{-},\epsilon}\left\|\phi\, M\bigl(x_{-},\, g^p(x_{-},\epsilon),\, G^p(g^p(x_{-},\epsilon)),\, \epsilon\bigr)\right\|_2 \]

\[ \left\|x^*_t(x_{t-1},\epsilon) - x^p_t(x_{t-1},\epsilon)\right\|_{\infty} \le \max_{x_{-},\epsilon}\left\|(I-F)^{-1}\phi\, M\bigl(x_{-},\, g^p(x_{-},\epsilon),\, G^p(g^p(x_{-},\epsilon)),\, \epsilon\bigr)\right\|_2 \]
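In practice this bound can be approximated by scanning the model residual over a grid of (x_{t-1}, ε) values. The sketch below uses a made-up residual function and a scalar reference model (both hypothetical, for illustration only) to show the computation.

```python
import numpy as np

# Hypothetical residual standing in for
# M(x_, g^p(x_, eps), G^p(g^p(x_, eps)), eps) for some proposed rule g^p.
def residual(x_prev, eps):
    return np.array([0.001 * np.sin(3.0 * x_prev) * np.cos(eps)])

phi = np.array([[1.0]])
F = np.array([[0.5]])
IminusFinv_phi = np.linalg.inv(np.eye(1) - F) @ phi

# Conservative decision-rule error bound: max over a grid of (x_{t-1}, eps)
# of ||(I-F)^{-1} phi e^p(x_{t-1}, eps)||_2.
grid_x = np.linspace(-1.0, 1.0, 101)
grid_e = np.linspace(-0.1, 0.1, 11)
bound = max(np.linalg.norm(IminusFinv_phi @ residual(x, e))
            for x in grid_x for e in grid_e)
```

With residuals of size 0.001 and (I-F)^{-1}φ = 2 in this toy setting, the bound is roughly twice the largest residual, which illustrates why the bound is conservative.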
\[ z^*_t = H_{-}\, x^*_{t-1} + H_0\, x^*_t + H_{+}\, x^*_{t+1} \]

\[ z^p_t = H_{-}\, x^p_{t-1} + H_0\, x^p_t + H_{+}\, x^p_{t+1} \]

\[ 0 = M(x_{t-1},\, x^*_t,\, x^*_{t+1},\, \epsilon_t) \]

\[ e^p_t(x_{t-1},\epsilon) = M(x^p_{t-1},\, x^p_t,\, x^p_{t+1},\, \epsilon_t) \]

\[ \max_{x_{t-1},\epsilon_t}\bigl(\|\phi\,\Delta z_t\|_2\bigr)^2 = \max_{x_{t-1},\epsilon_t}\Bigl(\bigl\|\phi\bigl(H_0(x^*_t - x^p_t) + H_{+}(x^*_{t+1} - x^p_{t+1})\bigr)\bigr\|_2\Bigr)^2 \]
with

\[ M(x_{t-1},\, x^*_t,\, x^*_{t+1},\, \epsilon_t) = 0 \]

\[ \Delta e^p_t(x_{t-1},\epsilon) \approx \frac{\partial M(x_{t-1},x_t,x_{t+1},\epsilon_t)}{\partial x_t}(x^*_t - x^p_t) + \frac{\partial M(x_{t-1},x_t,x_{t+1},\epsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1}) \]
When \(\frac{\partial M(x_{t-1},x_t,x_{t+1},\epsilon_t)}{\partial x_t}\) is non-singular, we can write

\[ (x^*_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1},x_t,x_{t+1},\epsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1},\epsilon) - \frac{\partial M(x_{t-1},x_t,x_{t+1},\epsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right) \]
Collecting results,

\[
\max_{x_{t-1},\epsilon_t}\bigl(\|\phi\,\Delta z_t\|_2\bigr)^2 =
\left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1},\epsilon) - \frac{\partial M}{\partial x_{t+1}}(x^*_{t+1}-x^p_{t+1})\right) + H_{+}(x^*_{t+1}-x^p_{t+1})\right)\right\|_2\right)^2
\]

\[
= \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1},\epsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_{+}\right)(x^*_{t+1}-x^p_{t+1})\right)\right\|_2\right)^2
\]
Approximating the derivatives using the H matrices,

\[ \frac{\partial M(x_{t-1},x_t,x_{t+1},\epsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1},x_t,x_{t+1},\epsilon_t)}{\partial x_{t+1}} \approx H_{+}, \]

produces

\[ \max_{x_{t-1},\epsilon_t}\bigl(\|\phi\,\Delta e^p_t(x_{t-1},\epsilon)\|_2\bigr)^2. \]
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):
\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\bigl(x_{t+1}(x_t)\,\big|\, s_{t+1} = j\bigr) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\bigl(x_{t+2}(x_{t+1})\,\big|\, s_{t+2} = j\bigr) \]
If we know the current state, we can use the known conditional expectation function for the state:

\[ \begin{bmatrix} E_t(x_{t+1}(x_t)\,|\,s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t)\,|\,s_t = N) \end{bmatrix} \]

For the next state, we can compute expectations if the errors are independent:

\[ \begin{bmatrix} E_t(x_{t+2}(x_t)\,|\,s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t)\,|\,s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1})\,|\,s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1})\,|\,s_{t+1} = N) \end{bmatrix} \]
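As a small numeric check of the stacking identity: with N = 2 regimes and a single variable, the (P ⊗ I) product reduces to a matrix-vector multiply. The transition matrix and expectation values below are illustrative, not taken from the paper.

```python
import numpy as np

# p_ij = Prob(s_{t+1} = j | s_t = i); rows sum to one.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# Stacked E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j) for j = 1, 2 (illustrative).
E_next = np.array([1.0, 2.0])
# Stacked E_t(x_{t+2}(x_t) | s_t = i) via the (P kron I) identity; with one
# variable per regime, kron(P, I_1) is just P itself.
E_now = np.kron(P, np.eye(1)) @ E_next
```

Each row of the result is the probability-weighted average of the next-state conditional expectations, exactly the sum in the displayed formula.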
Consider two states \(s_t \in \{0, 1\}\) where the depreciation rates differ, \(d_0 > d_1\), and

\[ \mathrm{Prob}(s_t = j\,|\,s_{t-1} = i) = p_{ij}. \]
If \(s_t = 0\), \(\mu > 0\), and \(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} = 0\):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t\theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d_0)\lambda_{t+1} + \delta(1-d_0)\mu_{t+1}\bigr) \\
&I_t - (k_t - (1-d_0)k_{t-1}) \\
&\mu_t\bigl(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]

If \(s_t = 0\), \(\mu = 0\), and \(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \ge 0\):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t\theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d_0)\lambda_{t+1} + \delta(1-d_0)\mu_{t+1}\bigr) \\
&I_t - (k_t - (1-d_0)k_{t-1}) \\
&\mu_t\bigl(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]

If \(s_t = 1\), \(\mu > 0\), and \(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} = 0\):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t\theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d_1)\lambda_{t+1} + \delta(1-d_1)\mu_{t+1}\bigr) \\
&I_t - (k_t - (1-d_1)k_{t-1}) \\
&\mu_t\bigl(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]

If \(s_t = 1\), \(\mu = 0\), and \(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \ge 0\):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t\theta_t \\
&\theta_t - e^{\rho\ln(\theta_{t-1})+\varepsilon} \\
&\lambda_t + \mu_t - \bigl(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d_1)\lambda_{t+1} + \delta(1-d_1)\mu_{t+1}\bigr) \\
&I_t - (k_t - (1-d_1)k_{t-1}) \\
&\mu_t\bigl(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\bigr)
\end{aligned}
\]
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, collects the decision rule components
G: collects the decision rule conditional expectations components
H: {γ, G}, collects decision rule information
{C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system R^{3N_x+N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1 nestInterp
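The outer loop of Algorithm 1 is an ordinary fixed-point iteration on function space. A minimal Python sketch captures the termination rule; the toy contraction `do_interp` below stands in for the real doInterp, and all names and values are illustrative.

```python
# Skeleton of the NestWhile loop in Algorithm 1: improve the rule until its
# values at the test points stop changing by more than a tolerance.
def nest_interp(do_interp, rule, test_points, tol=1e-10, max_iter=200):
    for _ in range(max_iter):
        new_rule = do_interp(rule)
        # Convergence check at the test points ("normConvTol" in the prototype).
        diff = max(abs(new_rule(x) - rule(x)) for x in test_points)
        rule = new_rule
        if diff < tol:
            return rule
    raise RuntimeError("no convergence")

# Toy rule-improvement operator whose fixed point is rule(x) = 2 * x.
def do_interp(rule):
    return lambda x, r=rule: x + 0.5 * r(x)

rule = nest_interp(do_interp, lambda x: 0.0, [0.0, 0.5, 1.0])
```

The real algorithm iterates over interpolating-function pairs {γ, G} rather than a scalar rule, but the stopping logic is the same.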
input: (L, H_i, κ, {C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations; doInterp subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2 doInterp
input: (L, H, κ, {C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4 genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ-1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ-1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
5 end
6 Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ-1}}
Algorithm 5 genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [ Bx + (I - F)^{-1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3 G(x, ε) = [ Bx + (I - F)^{-1}φψ_c ; 0_{N_x} ]
4 H = {γ, G}
output: H
Algorithm 6 genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7 genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t
Algorithm 8 genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_t
Algorithm 9 genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Numerically sums the F contribution for the series:
2 S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10 fSumC
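Algorithm 10 is a straightforward truncated matrix power series. A direct Python transcription, with illustrative scalar inputs:

```python
import numpy as np

# fSumC: S = sum over nu of F^{nu-1} phi Z_{t+nu}, truncated at len(Z_path).
def f_sum_c(F, phi, Z_path):
    n = F.shape[0]
    S = np.zeros(n)
    Fpow = np.eye(n)          # F^0 corresponds to the nu = 1 term
    for Z in Z_path:
        S += Fpow @ phi @ Z
        Fpow = Fpow @ F
    return S

# Illustrative values: F = 0.5, phi = 2, three unit Z vectors.
F = np.array([[0.5]])
phi = np.array([[2.0]])
S = f_sum_c(F, phi, [np.array([1.0]), np.array([1.0]), np.array([1.0])])
```

Accumulating powers of F incrementally avoids recomputing F^{ν-1} from scratch at each term.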
input: ({C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S); {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
algorithmic detail, see Appendix C. The current implementation has a couple of atypical features. Many of the calculations exploit Mathematica's capability to blend numerical and symbolic computation and to easily use functions as arguments for other functions. In addition, the implementation exploits parallelism at several points. The parallel features also apply when employing the usual technique for solving these types of models. Below I note where these features play a role.
nestInterp
doInterp
genFindRootFuncs
genFindRootWorker
genZsForFindRoot
genLilXkZkFunc
fSumC
genXtOfXtm1
genXtp1OfXt
makeInterpFuncs
genInterpData
evaluateTriple
interpDataToFunc

Figure 10: Function Call Tree
nestInterp — Computes a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the decision rule functions at the test points no longer changes much.
doInterp — Symbolically generates functions that return a unique x_t for any value of (x_{t-1}, ε_t) for the family of model equations C_1 … C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), and uses these functions to construct interpolating functions approximating the decision rules.
genFindRootFuncs — The function can use the series expression (2) or omit it, the latter implementing the usual approach for solving these models. The routine symbolically generates a collection of function triples and an outer model solution selection function. Each of the function constructions can be done in parallel. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t); the selection function chooses one solution from the set of model equations. The routine subsequently uses these functions to construct interpolating functions approximating the decision rules. Sub-components of this step exploit parallelism.
genLilXkZkFunc — Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M. This routine provides the information for constructing x_t(x_{t-1}, ε_t) using the series expression (2).
makeInterpFuncs — Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This can be done in parallel.
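The outer nestInterp loop described above can be sketched as a fixed-point iteration over decision-rule functions. This is an illustrative Python sketch inferred from the descriptions, not the Mathematica prototype; `make_rule` stands in for the doInterp/makeInterpFuncs step that rebuilds the interpolating functions:

```python
def nest_interp(make_rule, rule0, test_points, tol=1e-8, max_iter=50):
    """Iterate rule -> make_rule(rule) until the decision rule stops
    changing (by more than tol) at a fixed set of test points."""
    rule = rule0
    for _ in range(max_iter):
        new_rule = make_rule(rule)  # rebuild the interpolated decision rule
        gap = max(abs(new_rule(p) - rule(p)) for p in test_points)
        rule = new_rule
        if gap < tol:
            break                    # converged at the test points
    return rule

# usage: a toy contraction whose fixed point is the identity rule
damped = lambda r: (lambda p: 0.5 * (r(p) + p))
rule = nest_interp(damped, lambda p: 0.0, test_points=[0.1, 0.2, 0.3])
```

The termination test mirrors the text: iteration stops once evaluations at the test points no longer change much.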
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for the case when the exact solution is known and compares the various approximations to the known actual solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in (Judd et al., 2013). The left panel shows the 200 values of k_t and θ_t resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function approximation values.
Figure 11: The Ergodic Values for k_t, θ_t (left panel: simulated ergodic values of k and θ; right panel: the transformed variables)
X̄ = (0.18853, 1.00466, 0)

σ_X = (0.0112443, 0.0387807, 0.01)

V =
  -0.707107  -0.707107  0
  -0.707107   0.707107  0
   0          0         1

maxU = (3.02524, 0.199124, 3)

minU = (-3.16879, -0.17629, -3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing values of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
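The component-by-component agreement between the actual errors (Table 4) and the estimated errors (Table 5) can be checked directly. Taking the first evaluation point's row from each table:

```python
# first row of Table 4 (actual) and Table 5 (estimated): ln|e| for (c, k, theta)
actual    = [-6.86423, -7.61023, -6.27760]
estimated = [-6.84011, -7.59703, -6.27760]

# the estimates track the actual log-errors to roughly two decimal places
gaps = [abs(a - e) for a, e in zip(actual, estimated)]
```

The largest discrepancy at this point is about 0.024 in log terms, illustrating the accuracy of the error formula.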
k1 θ1 ε1
k30 θ30 ε30
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2: RBC Known Solution Model Evaluation Points, d=(1,1,1)
c1 k1 θ1
c30 k30 θ30
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3: RBC Known Solution Values at Evaluation Points, d=(1,1,1)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4: RBC Known Solution Actual Errors, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5: RBC Known Solution Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6: RBC Known Solution Model Evaluation Points, d=(2,2,2)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7: RBC Known Solution Values at Evaluation Points, d=(2,2,2)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution Actual Errors, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
K | d=(1,1,1) | d=(2,2,2) | d=(3,3,3)
NonSeries minus651367 minus122771 minus1689470 minus60793 minus626833 minus6298431 minus600805 minus624202 minus6498352 minus652219 minus743876 minus7047853 minus657578 minus803864 minus8325894 minus662831 minus91522 minus9493145 minus677305 minus105516 minus1042476 minus638499 minus116399 minus114557 minus663849 minus113627 minus1302138 minus638735 minus117288 minus1399769 minus646759 minus119826 minus15337210 minus645966 minus121345 minus15415710 minus677776 minus115025 minus15821115 minus667778 minus124593 minus15889920 minus640802 minus117296 minus17165225 minus653265 minus121441 minus17151830 minus681949 minus116097 minus16374835 minus633508 minus121446 minus16696340 minus652442 minus114042 minus156302
Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11: RBC Approximated Solution Model Evaluation Points, d=(1,1,1)
c1 k1 θ1
c30 k30 θ30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12: RBC Approximated Solution Values at Evaluation Points, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13: RBC Approximated Solution Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14: RBC Approximated Solution Model Evaluation Points, d=(2,2,2)
c1 k1 θ1
c30 k30 θ30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15: RBC Approximated Solution Values at Evaluation Points, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} - 1) / (1 - η)

The Lagrangian is

L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1})
  + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t - ((1 - d) k_{t-1} + θ_t f(k_{t-1})))
  + δ^t μ_t ((k_t - (1 - d) k_{t-1}) - υ I_ss)

so that we can impose

I_t = (k_t - (1 - d) k_{t-1}) ≥ υ I_ss

The first order conditions become

1/c_t^η + λ_t
λ_t - δ λ_{t+1} θ_{t+1} α k_t^{α-1} - δ (1 - d) λ_{t+1} + μ_t - δ (1 - d) μ_{t+1}

For example, the Euler equations for the neoclassical growth model can be written as:11

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If (μ_t > 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) = 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ (1 - d) + μ_{t+1} + δ (1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)

If (μ_t = 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) ≥ 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ (1 - d) + μ_{t+1} + δ (1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
c1 I1 k1
c30 I30 k30
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18: Occasionally Binding Constraints Values at Evaluation Points, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
c1 I1 k1
c30 I30 k30
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21: Occasionally Binding Constraints Values at Evaluation Points, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Paper No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. doi: 10.1016/j.jmoneco.2014.08.005.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Paper Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. doi: 10.1016/j.jedc.2014.03.003.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. doi: 10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parameterized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. doi:10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. doi:10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000.
A Error Approximation Formula Proof

We consider time-invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region $A$ in which all iterated conditional-expectations paths remain. Given an exact solution $x^\ast_t = g(x_{t-1}, \epsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \epsilon)\right];$$

then, with

$$E_t x^\ast_{t+1} = G(g(x_{t-1}, \epsilon_t)),$$

we have

$$M(x_{t-1}, x^\ast_t, E_t x^\ast_{t+1}, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t).$$

In words, the exact solution exactly satisfies the model equations. Using $G$ and the linear reference model $\mathcal{L}$, construct the family of trajectories and the corresponding $z^\ast_t(x_{t-1}, \epsilon)$:
$$\left\{ x^\ast_t(x_{t-1}, \epsilon_t) \in \mathbb{R}^L : \left\| x^\ast_t(x_{t-1}, \epsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$

$$z^\ast_t(x_{t-1}, \epsilon_t) \equiv H_{-1} x^\ast_{t-1}(x_{t-1}, \epsilon_t) + H_0 x^\ast_t(x_{t-1}, \epsilon_t) + H_1 x^\ast_{t+1}(x_{t-1}, \epsilon_t)$$
Consequently, the exact solution has the representation

$$x^\ast_t(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_\epsilon \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^\ast_{t+\nu}(x_{t-1}, \epsilon)$$

and

$$E\left[x^\ast_{t+1}(x_{t-1}, \epsilon)\right] = B x^\ast_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[z^\ast_{t+1+\nu}(x_{t-1}, \epsilon)\right] + (I - F)^{-1} \phi \psi_c$$

with

$$M(x_{t-1}, x^\ast_t, E_t x^\ast_{t+1}, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t).$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \epsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \epsilon)\right]$, so that

$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \epsilon_t)), \qquad e^p_t(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \epsilon_t).$$
By construction, the conditional-expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and the corresponding $z^p_t(x_{t-1}, \epsilon)$:12

$$\left\{ x^p_t(x_{t-1}, \epsilon_t) \in \mathbb{R}^L : \left\| x^p_t(x_{t-1}, \epsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$

$$z^p_t(x_{t-1}, \epsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \epsilon_t) + H_0 x^p_t(x_{t-1}, \epsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \epsilon_t)$$
The proposed solution has the representation

$$x^p_t(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_\epsilon \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \epsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \epsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+1+\nu}(x_{t-1}, \epsilon) + (I - F)^{-1} \phi \psi_c$$

with

$$e^p_t(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \epsilon_t).$$

Subtracting the two representations,

$$x^\ast_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z^\ast_{t+\nu}(x_{t-1}, \epsilon) - z^p_{t+\nu}(x_{t-1}, \epsilon)\right).$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t) \equiv z^\ast_{t+\nu}(x_{t-1}, \epsilon) - z^p_{t+\nu}(x_{t-1}, \epsilon),$$

we have

$$x^\ast_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t)$$

$$\left\|x^\ast_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu} \phi\right\| \left\|\Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t)\right\|_\infty$$
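The last inequality can be evaluated numerically once a uniform bound on $\|\Delta z^p_{t+\nu}\|_\infty$ is available. A minimal sketch (hypothetical helper name and illustrative matrices, not the paper's code):

```python
import numpy as np

def series_error_bound(F, phi, dz_bound, tol=1e-12, max_terms=10_000):
    """Bound the decision-rule error via sum_nu ||F^nu phi|| * dz_bound.

    Converges when the spectral radius of F is below one (assumed here)."""
    total = 0.0
    term = np.eye(F.shape[0])           # accumulates F^nu
    for _ in range(max_terms):
        contrib = np.linalg.norm(term @ phi, np.inf) * dz_bound
        total += contrib
        if contrib < tol:
            break
        term = F @ term
    return total

# Illustrative stable scalar reference model: geometric sum 0.01 / (1 - 0.5)
F = np.array([[0.5]])
phi = np.array([[1.0]])
bound = series_error_bound(F, phi, 0.01)
```

With these illustrative values the bound converges to about 0.02, the closed-form geometric sum.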
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.13 The exact solution satisfies the model equations exactly; the error associated with the proposed solution therefore leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
12. The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13. Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional-expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional-expectations path.
$$\left\|\Delta z^\ast_t\right\| \le \max_{x_{-}, \epsilon} \left\|\phi\, M\!\left(x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon\right)\right\|_2$$

$$\left\|x^\ast_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon)\right\|_\infty \le \max_{x_{-}, \epsilon} \left\|(I - F)^{-1} \phi\, M\!\left(x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon\right)\right\|_2$$
$$z^\ast_t = H_{-} x^\ast_{t-1} + H_0 x^\ast_t + H_{+} x^\ast_{t+1}$$

$$z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}$$

$$0 = M(x_{t-1}, x^\ast_t, x^\ast_{t+1}, \epsilon_t)$$

$$e^p_t(x_{t-1}, \epsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \epsilon_t)$$

$$\max_{x_{t-1}, \epsilon_t} \left(\left\|\phi\, \Delta z_t\right\|_2\right)^2 = \max_{x_{t-1}, \epsilon_t} \left(\left\|\phi\left(H_0\left(x^p_t - x^\ast_t\right) + H_{+}\left(x^p_{t+1} - x^\ast_{t+1}\right)\right)\right\|_2\right)^2$$
with

$$M(x_{t-1}, x^\ast_t, x^\ast_{t+1}, \epsilon_t) = 0,$$

a first-order expansion gives

$$\Delta e^p_t(x_{t-1}, \epsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t}\left(x^\ast_t - x^p_t\right) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}}\left(x^\ast_{t+1} - x^p_{t+1}\right).$$

When $\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)/\partial x_t$ is non-singular, we can write

$$\left(x^\ast_t - x^p_t\right) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \epsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}}\left(x^\ast_{t+1} - x^p_{t+1}\right)\right).$$
Collecting results,

$$\left(\left\|\phi\, \Delta z_t\right\|_2\right)^2 = \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \epsilon) - \frac{\partial M}{\partial x_{t+1}}\left(x^\ast_{t+1} - x^p_{t+1}\right)\right) + H_{+}\left(x^\ast_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \epsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_{+}\right)\left(x^\ast_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2,$$

where all partial derivatives of $M$ are evaluated at $(x_{t-1}, x_t, x_{t+1}, \epsilon_t)$.
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}} \approx H_{+},$$

so that the second term vanishes, produces

$$\max_{x_{t-1}, \epsilon_t} \left(\left\|\phi\, \Delta e^p_t(x_{t-1}, \epsilon)\right\|_2\right)^2.$$
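Under the $H$-matrix approximation, the local decision-rule error reduces to a linear solve against $H_0$ once the $x_{t+1}$ term is dropped. A sketch with illustrative matrices (not taken from the paper):

```python
import numpy as np

def local_error_estimate(H0, e_p):
    """First-order local decision-rule error: with dM/dx_t ~ H0 and the
    x_{t+1} term dropped, x*_t - x^p_t ~ H0^{-1} e^p_t (a sketch)."""
    return np.linalg.solve(H0, e_p)

H0 = np.array([[2.0, 0.0],
               [0.0, 4.0]])          # hypothetical structural derivative
e_p = np.array([0.02, 0.04])         # model-equation residuals at (x_{t-1}, eps)
dx = local_error_estimate(H0, e_p)   # estimated component-wise error
```

Component-wise estimates like `dx` are what the error tables below report (in logs of absolute values).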
B A Regime Switching Example

To apply the series formula, one must construct conditional-expectations paths from each initial $(x_{t-1}, \epsilon_t)$. Consider the case in which the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \epsilon_t, x_t)$:
$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)$$
If we know the current state, we can use the known conditional-expectation function for that state:

$$\begin{bmatrix} E_t\!\left(x_{t+1}(x_t) \mid s_t = 1\right) \\ \vdots \\ E_t\!\left(x_{t+1}(x_t) \mid s_t = N\right) \end{bmatrix}$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t\!\left(x_{t+2}(x_t) \mid s_t = 1\right) \\ \vdots \\ E_t\!\left(x_{t+2}(x_t) \mid s_t = N\right) \end{bmatrix} = (P \otimes I)\begin{bmatrix} E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1\right) \\ \vdots \\ E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = N\right) \end{bmatrix}$$
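The $(P \otimes I)$ step stacks the per-regime conditional expectations and mixes them with the transition probabilities. A sketch with a hypothetical two-regime chain:

```python
import numpy as np

# Markov mixture of per-regime expectations via a Kronecker product:
# given v_j = E[x_{t+2} | s_{t+1} = j] stacked into one vector, the
# expectation conditional on the current regime i is (P kron I) v.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])             # hypothetical transition matrix p_ij
nx = 3                                  # number of model variables
v = np.concatenate([np.full(nx, 1.0),   # E[x_{t+2} | s_{t+1} = 1]
                    np.full(nx, 2.0)])  # E[x_{t+2} | s_{t+1} = 2]
mixed = np.kron(P, np.eye(nx)) @ v      # E[x_{t+2} | s_t = i], stacked over i
```

Here the first block of `mixed` is 0.9·1 + 0.1·2 = 1.1 and the second is 0.2·1 + 0.8·2 = 1.8, the probability-weighted averages.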
Consider two states $s_t \in \{0, 1\}$ in which the depreciation rates differ, $d_0 > d_1$, with

$$\operatorname{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}.$$
If $s_t = 0$, $\mu_t > 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} = 0$, the system comprises the residuals

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right)\\
&I_t - \left(k_t - (1 - d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 0$, $\mu_t = 0$, and $k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right)\\
&I_t - \left(k_t - (1 - d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 1$, $\mu_t > 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right)\\
&I_t - \left(k_t - (1 - d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 1$, $\mu_t = 0$, and $k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right)\\
&I_t - \left(k_t - (1 - d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
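The four branches are selected by the regime $s_t$ together with the complementarity conditions on $\mu_t$ and the investment constraint. A sketch of the gate logic (function and parameter names are hypothetical):

```python
def investment_gap(k, k_lag, d, upsilon_Iss):
    """Slack in the occasionally binding investment constraint,
    k_t - (1 - d) k_{t-1} - upsilon * I_ss (hypothetical parameter names)."""
    return k - (1.0 - d) * k_lag - upsilon_Iss

def select_branch(s, mu, k, k_lag, d0, d1, upsilon_Iss, tol=1e-9):
    """Pick which of the four equation systems applies: regime s in {0, 1}
    crossed with the constraint binding (mu > 0, gap = 0) or slack
    (mu = 0, gap >= 0)."""
    d = d0 if s == 0 else d1
    gap = investment_gap(k, k_lag, d, upsilon_Iss)
    binding = mu > tol and abs(gap) <= tol
    slack = abs(mu) <= tol and gap >= -tol
    if not (binding or slack):
        raise ValueError("complementarity conditions violated")
    return (s, "binding" if binding else "slack")

branch = select_branch(s=0, mu=0.0, k=0.2, k_lag=0.19, d0=0.1, d1=0.05,
                       upsilon_Iss=0.01)
```

With these illustrative values the constraint is slack in regime 0, so the second system above would apply.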
C Algorithm Pseudo-code

$\mathcal{L} \equiv \{H, \psi_\epsilon, \psi_c, B, \phi, F\}$: the linear reference model
$x^p(x_{t-1}, \epsilon_t)$: the proposed decision rule, $x_t$ components
$z^p(x_{t-1}, \epsilon_t)$: the proposed decision rule, $z_t$ components
$X^p(x_{t-1})$: the proposed decision rule, conditional expectations of the $x_t$ components
$Z^p(x_{t-1})$: the proposed decision rule, conditional expectations of the $z_t$ components
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
$S$: Smolyak anisotropic-grid polynomials $(p_1(x, \epsilon), \ldots, p_N(x, \epsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$
$T$: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
$\gamma$: $\{x(x_{t-1}, \epsilon_t), z(x_{t-1}, \epsilon_t)\}$, collects the decision-rule components
$G$: collects the decision-rule conditional-expectations components
$\mathcal{H}$: $\{\gamma, G\}$, collects the decision-rule information
$\{C_1, \ldots, C_M\}$: $d(x_{t-1}, \epsilon, x_t, E[x_{t+1}])|_{(b, c)}$, the equation system $\mathbb{R}^{3N_x + N_\epsilon} \rightarrow \mathbb{R}^{N_x}$
input: $(\mathcal{L}, \mathcal{H}_0, \kappa, \{C_1, \ldots, C_M\}, S, T)$
1: Compute a sequence of decision-rule and conditional-expectation-rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the default setting ("normConvTol" set to $10^{-10}$).
2: NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M\}$, S)), $\mathcal{H}_0$, notConvergedAt(v, T))
output: $\mathcal{H}_0, \mathcal{H}_1, \ldots, \mathcal{H}_c$
Algorithm 1 nestInterp
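The outer loop of nestInterp is a fixed-point iteration on the decision rule, stopped when the rule's values at the test points settle. A sketch with a generic update map standing in for doInterp (a toy contraction, not the paper's code):

```python
import numpy as np

def nest_while(step, h0, test_points, tol=1e-10, max_iter=500):
    """Iterate a decision-rule update until the rule's values at the test
    points stop changing, the nestInterp stopping rule. `step` is a
    hypothetical stand-in for one doInterp pass."""
    h = h0
    prev = np.array([h(x) for x in test_points])
    for _ in range(max_iter):
        h = step(h)
        cur = np.array([h(x) for x in test_points])
        if np.max(np.abs(cur - prev)) < tol:
            return h
        prev = cur
    raise RuntimeError("no convergence at test points")

# Toy contraction whose fixed point is the identity rule h(x) = x
step = lambda h: (lambda x, h=h: 0.5 * (h(x) + x))
h = nest_while(step, lambda x: 0.0, test_points=[1.0, 2.0])
```

In the paper's algorithm the update is doInterp and the test points T come from a Niederreiter sequence; here both are replaced by toys to show the control flow only.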
input: $(\mathcal{L}, \mathcal{H}_i, \kappa, \{C_1, \ldots, C_M\}, S)$
1: Symbolically generates a collection of function triples and an outer model-solution selection function. Each triple returns the Boolean gate values and a unique value of $x_t$ for any given value of $(x_{t-1}, \epsilon_t)$ for one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2: $theFindRootFuncs$ = genFindRootFuncs($\mathcal{L}$, $\mathcal{H}$, $\kappa$, $\{C_1, \ldots, C_M\}$)
3: $\mathcal{H}_{i+1}$ = makeInterpFuncs($theFindRootFuncs$, S)
output: $\mathcal{H}_{i+1}$
Algorithm 2 doInterp
input: $(\mathcal{L}, \mathcal{H}, \kappa, \{C_1, \ldots, C_M\}, S)$
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model-solution selection function. Each triple returns the Boolean gate values and a unique value of $x_t$ for any given value of $(x_{t-1}, \epsilon_t)$ for one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: $theFuncOptionsTriples$ = Map(Function(v, $\{$v[1], genFindRootWorker($\mathcal{L}$, $\mathcal{H}$, $\kappa$, v[2]), v[3]$\}$), $\{C_1, \ldots, C_M\}$)
output: $theFuncOptionsTriples$
Algorithm 3 genFindRootFuncs
input: $(\mathcal{L}, G, \kappa, M)$
1: Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \epsilon_t)$.
2: $\mathcal{Z} = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, G)
3: modEqnArgsFunc = genLilXkZkFunc($\mathcal{L}$, $\mathcal{Z}$)
4: $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \epsilon_t\}$ = modEqnArgsFunc($x_{t-1}$, $\epsilon_t$, $z_t$)
5: funcOfXtZt = FindRoot($(x^p_t, z_t)$, $\{M(\mathcal{X}), x_t - x^p_t\}$)
6: funcOfXtm1Eps = Function($(x_{t-1}, \epsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: $(x^p_t, \mathcal{L}, G, \kappa, M)$
1: Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (the empty list for $\kappa = 1$).
2: $X_t = x^p_t$; for $i = 1$, $i < \kappa - 1$, $i{+}{+}$ do
3: $\{X_{t+i}, Z_{t+i}\} = G(Z_{t+i-1})$
4: end
5: $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, G)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$
Algorithm 5: genZsForFindRoot
input: $(\mathcal{L} = \{H, \psi_\epsilon, \psi_c, B, \phi, F\})$
1: Create functions that use the solution from the linear reference model as an initial proposed decision-rule function and decision-rule conditional-expectation function.
2: $\gamma(x, \epsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c + \phi\psi_\epsilon\epsilon_t \\ 0_{N_x} \end{bmatrix}$
3: $G(x, \epsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}$
4: $\mathcal{H} = \{\gamma, G\}$
output: $\mathcal{H}$
Algorithm 6: genBothX0Z0Funcs
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \epsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \epsilon_t)$ for the model system equations $M$.
2: fCon = fSumC($\phi$, F, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
   xtVals = genXtOfXtm1($\mathcal{L}$, $x_{t-1}$, $\epsilon_t$, $z_t$, fCon)
   xtp1Vals = genXtp1OfXt($\mathcal{L}$, xtVals, fCon)
   fullVec = $\{xtm1Vars, xtVals, xtp1Vals, epsVars\}$
output: fullVec
Algorithm 7 genLilXkZkFunc
input: $(\mathcal{L}, x_{t-1}, \epsilon_t, z_t, fCon)$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \epsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \epsilon_t)$ for the model system equations $M$.
2: $x_t = Bx_{t-1} + (I - F)^{-1}\phi\psi_c + \phi\psi_\epsilon\epsilon_t + \phi\psi_z z_t + F \cdot fCon$
output: $x_t$
Algorithm 8 genXtOfXtm1
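The update in step 2 can be sketched with dense arrays (an illustrative scalar reference model; all names and values here are assumptions, not the paper's code):

```python
import numpy as np

def x_t_of_xtm1(B, F, phi, psi_c, psi_eps, psi_z, x_lag, eps, z, f_con):
    """The genXtOfXtm1 update, sketched:
    x_t = B x_{t-1} + (I - F)^{-1} phi psi_c + phi psi_eps eps
          + phi psi_z z + F f_con."""
    n = B.shape[0]
    const = np.linalg.solve(np.eye(n) - F, phi @ psi_c)  # (I - F)^{-1} phi psi_c
    return (B @ x_lag + const + phi @ (psi_eps @ eps)
            + phi @ (psi_z @ z) + F @ f_con)

# Scalar illustration with hypothetical reference-model matrices
B = np.array([[0.9]]); F = np.array([[0.5]]); phi = np.array([[1.0]])
psi_c = np.array([0.1]); psi_eps = np.array([[1.0]]); psi_z = np.array([[1.0]])
x = x_t_of_xtm1(B, F, phi, psi_c, psi_eps, psi_z,
                x_lag=np.array([1.0]), eps=np.array([0.0]),
                z=np.array([0.0]), f_con=np.array([0.0]))
```

With zero shock, zero $z_t$, and zero future contribution, the update reduces to $Bx_{t-1}$ plus the deterministic constant.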
input: $(\mathcal{L}, x_{t-1}, fCon)$
1: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \epsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \epsilon_t)$ for the model system equations $M$.
2: $x_{t+1} = Bx_t + (I - F)^{-1}\phi\psi_c + fCon$
output: $x_{t+1}$
Algorithm 9 genXtp1OfXt
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Numerically sums the $F$ contribution for the series:
2: $S = \sum_{\nu} F^{\nu-1}\phi Z_{t+\nu}$
output: $S$
Algorithm 10 fSumC
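The truncated sum in fSumC can be computed by accumulating powers of $F$ rather than re-forming $F^{\nu-1}$ at each term. A sketch with illustrative matrices:

```python
import numpy as np

def f_sum_c(F, phi, Zs):
    """Truncated series contribution sum_{nu=1}^{len(Zs)} F^(nu-1) phi Z_{t+nu}
    (the fSumC step, sketched; Zs holds the conditional-expectation path)."""
    S = np.zeros(F.shape[0])
    Fpow = np.eye(F.shape[0])   # holds F^(nu-1), starting at F^0 = I
    for Z in Zs:
        S += Fpow @ phi @ Z
        Fpow = F @ Fpow
    return S

F = np.array([[0.5]])
phi = np.array([[2.0]])
S = f_sum_c(F, phi, [np.array([1.0]), np.array([1.0])])
```

For this two-term path the sum is $\phi Z_{t+1} + F\phi Z_{t+2} = 2 + 1 = 3$.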
input: $(\{C_1, \ldots, C_M\}, S)$
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision-rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward-looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData($\{C_1, \ldots, C_M\}$, S); $\{\gamma, G\}$ = interpDataToFunc(interpData, S)
output: $\{\gamma, G\}$
Algorithm 11 makeInterpFuncs
input: $(\{C_1, \ldots, C_M\}, S)$
1: Solves the model equations at the collocation points, producing the interpolation data for the decision rule and the decision-rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward-looking" equations with user-provided pre-computed functions.
output: interpData
Algorithm 12: genInterpData
input: $(C, (x_{t-1}, \epsilon_t))$
1: Evaluate the model equations, obeying pre- and post-conditions.
output: $x_t$, or failed
Algorithm 13: evaluateTriple
input: $(V, S)$
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 18: genSolutionErrXtEpsZero
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 19: genSolutionPath
genLilXkZkFunc: Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \epsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \epsilon_t)$ for the model system equations $M$. This routine provides the information for constructing $x_t(x_{t-1}, \epsilon_t)$ using the series expression (2).

makeInterpFuncs: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision-rule conditional expectation. This can be done in parallel.
5.4 Approximating a Known Solution: η = 1, U(c) = log(c)

This subsection uses the series representation to compute the solution for a case in which the exact solution is known and compares the various approximations to the known solution. In what follows, both the traditional and the new approach use anisotropic Smolyak polynomial function approximation with precomputed expectations. Figure 11 characterizes the use of the parallelotope transformation described in Judd et al. (2013). The left panel shows the 200 values of $k_t$ and $\theta_t$ resulting from a stochastic simulation of the known decision rule. The right panel shows the transformed variables, which constitute an improved set of variables for constructing function-approximation values.
[Figure 11: two scatter panels, each titled "Ergodic Values for K and θ". The left panel plots the simulated $(k_t, \theta_t)$ pairs, with $k_t$ roughly between 0.17 and 0.21 and $\theta_t$ roughly between 0.95 and 1.10; the right panel plots the same points in the transformed coordinates, spanning roughly $\pm 3$ on the first axis and $\pm 0.2$ on the second.]

Figure 11: The Ergodic Values for $k_t$, $\theta_t$
$$\bar{X} = \begin{bmatrix} 0.18853 \\ 1.00466 \\ 0 \end{bmatrix}, \qquad \sigma_X = \begin{bmatrix} 0.0112443 \\ 0.0387807 \\ 0.01 \end{bmatrix}, \qquad V = \begin{bmatrix} -0.707107 & -0.707107 & 0 \\ -0.707107 & 0.707107 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

$$\max U = \begin{bmatrix} 3.02524 \\ 0.199124 \\ 3 \end{bmatrix}, \qquad \min U = \begin{bmatrix} -3.16879 \\ -0.17629 \\ -3 \end{bmatrix}$$
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision-rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 5 shows the log of the absolute value of the estimated error at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision-rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing values of K. Generally, increasing K increases the accuracy of the solution; the value of K has more effect when the Smolyak grid is more densely populated with collocation points.
$k_i \quad \theta_i \quad \epsilon_i$, rows $i = 1, \ldots, 30$:
016818 0942376 minus002511040210392 108282 002320850181393 0968212 minus0009004101949 1017 00111288
0188668 100876 minus00210838020145 106007 00272351017245 0945462 minus0004977520214179 10876 001918190195059 103442 minus001303070182278 0983104 0003075630208273 106025 minus00291370166906 0916853 000106234020192 105244 minus003115030187205 100787 001716860213199 108502 minus0015044017147 0942887 002522180179847 0985589 minus0006990810194563 103016 0009115490165563 0915543 minus00230971020705 105852 002119520173322 0954406 minus0011017402136 11016 000508892
0184601 0986988 minus002712370198833 103324 00131421020721 107594 minus001907050166932 0928749 002924840192926 10059 minus0002964240179118 0958169 001414870172672 0949185 minus001806390212949 109638 0030255
Table 2: RBC Known Solution, Model Evaluation Points, d = (1,1,1)
$c_i \quad k_i \quad \theta_i$, rows $i = 1, \ldots, 30$:
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3: RBC Known Solution, Values at Evaluation Points, d = (1,1,1)
$\ln|e_{c,i}| \quad \ln|e_{k,i}| \quad \ln|e_{\theta,i}|$, rows $i = 1, \ldots, 30$:
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4: RBC Known Solution, Actual Errors, d = (1,1,1)
$\ln|\hat{e}_{c,i}| \quad \ln|\hat{e}_{k,i}| \quad \ln|\hat{e}_{\theta,i}|$, rows $i = 1, \ldots, 30$:
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5: RBC Known Solution, Error Approximations, d = (1,1,1)
$k_i \quad \theta_i \quad \epsilon_i$, rows $i = 1, \ldots, 30$:
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6: RBC Known Solution, Model Evaluation Points, d = (2,2,2)
$k_i \quad \theta_i \quad \epsilon_i$, rows $i = 1, \ldots, 30$:
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7: RBC Known Solution, Values at Evaluation Points, d = (2,2,2)
$\ln|e_{c,i}| \quad \ln|e_{k,i}| \quad \ln|e_{\theta,i}|$, rows $i = 1, \ldots, 30$:
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution, Actual Errors, d = (2,2,2)
$\ln|\hat{e}_{c,i}| \quad \ln|\hat{e}_{k,i}| \quad \ln|\hat{e}_{\theta,i}|$, rows $i = 1, \ldots, 30$:
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution, Error Approximations, d = (2,2,2)
K            d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
Non-series   -6.51367      -12.2771      -16.8947
0            -6.0793       -6.26833      -6.29843
1            -6.00805      -6.24202      -6.49835
2            -6.52219      -7.43876      -7.04785
3            -6.57578      -8.03864      -8.32589
4            -6.62831      -9.1522       -9.49314
5            -6.77305      -10.5516      -10.4247
6            -6.38499      -11.6399      -11.455
7            -6.63849      -11.3627      -13.0213
8            -6.38735      -11.7288      -13.9976
9            -6.46759      -11.9826      -15.3372
10           -6.45966      -12.1345      -15.4157
10           -6.77776      -11.5025      -15.8211
15           -6.67778      -12.4593      -15.8899
20           -6.40802      -11.7296      -17.1652
25           -6.53265      -12.1441      -17.1518
30           -6.81949      -11.6097      -16.3748
35           -6.33508      -12.1446      -16.6963
40           -6.52442      -11.4042      -15.6302

Table 10: RBC Known Solution, Impact of Series Length K on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision-rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision-rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
$k_i \quad \theta_i \quad \epsilon_i$, rows $i = 1, \ldots, 30$:
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
[Table body not recoverable from extraction: decision rule values (c_i, k_i, θ_i) at the 30 evaluation points.]

Table 12: RBC Approximated Solution, Values at Evaluation Points, d = (1, 1, 1)
[Table body not recoverable from extraction: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points.]

Table 13: RBC Approximated Solution, Error Approximations, d = (1, 1, 1)
[Table body not recoverable from extraction: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 14: RBC Approximated Solution, Model Evaluation Points, d = (2, 2, 2)
[Table body not recoverable from extraction: decision rule values (c_i, k_i, θ_i) at the 30 evaluation points.]

Table 15: RBC Approximated Solution, Values at Evaluation Points, d = (2, 2, 2)
[Table body not recoverable from extraction: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points.]

Table 16: RBC Approximated Solution, Error Approximations, d = (2, 2, 2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding the constraint

I_t ≥ υ I_ss

to the simple RBC model:

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1 − η)
L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1})
    + Σ_{t=0}^∞ [ δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1})))
    + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss) ]
so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss
The first order conditions become

1/c_t^η + λ_t

λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1}
For example, the Euler equations for the neoclassical growth model can be written as:

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ > 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + μ_{t+1} + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
if (μ = 0 and (k_t − (1 − d) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + μ_{t+1} + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t−1})
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss)
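The two guarded systems above implement complementary slackness: either μ_t > 0 with investment exactly at the floor, or μ_t = 0 with the constraint slack. A minimal Python sketch of that branch selection, with toy parameter values and a hypothetical unconstrained investment rule (the paper instead solves each full FOC system with FindRoot):

```python
ALPHA, D, UPSILON = 0.36, 0.025, 0.975   # toy parameter values

def branch_solve(k_prev, theta, unconstrained_inv, i_ss):
    """Try the mu_t = 0 branch first; if it violates I_t >= upsilon*I_ss,
    switch to the binding branch where I_t = upsilon*I_ss exactly."""
    resources = theta * k_prev ** ALPHA + (1 - D) * k_prev
    i = unconstrained_inv(k_prev, theta)
    binding = i < UPSILON * i_ss          # complementary slackness gate
    if binding:                            # mu_t > 0 branch
        i = UPSILON * i_ss
    k = (1 - D) * k_prev + i               # I_t = k_t - (1-d) k_{t-1}
    c = resources - k                      # c_t + k_t = (1-d)k_{t-1} + theta f(k_{t-1})
    return c, k, binding

# A deliberately low hypothetical investment rule triggers the constraint.
rule = lambda k, th: 0.2 * th
c, k, binding = branch_solve(10.0, 1.0, rule, i_ss=0.25)
```

With a low productivity draw the rule prescribes investment below the floor, so the binding branch is selected; a higher draw leaves the constraint slack.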
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table body not recoverable from extraction: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 17: Occasionally Binding Constraints, Model Evaluation Points, d = (1, 1, 1)
[Table body not recoverable from extraction: decision rule values (c_i, I_i, k_i) at the 30 evaluation points.]

Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d = (1, 1, 1)
[Table body not recoverable from extraction: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points.]

Table 19: Occasionally Binding Constraints, Error Approximations, d = (1, 1, 1)
[Table body not recoverable from extraction: 30 evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30.]

Table 20: Occasionally Binding Constraints, Model Evaluation Points, d = (2, 2, 2)
[Table body not recoverable from extraction: decision rule values (c_i, I_i, k_i) at the 30 evaluation points.]

Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d = (2, 2, 2)
[Table body not recoverable from extraction: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points.]

Table 22: Occasionally Binding Constraints, Error Approximations, d = (2, 2, 2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–1311, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x_t^* = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then with

E_t x_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x_t^*, E_t x_{t+1}^*, ε_t) = 0  ∀ (x_{t−1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z_t^*(x_{t−1}, ε):

{ x_t^*(x_{t−1}, ε_t) ∈ R^L : ‖x_t^*(x_{t−1}, ε_t)‖_2 ≤ X̄ }  ∀ t > 0

z_t^*(x_{t−1}, ε_t) ≡ H_{−1} x_{t−1}^*(x_{t−1}, ε_t) + H_0 x_t^*(x_{t−1}, ε_t) + H_1 x_{t+1}^*(x_{t−1}, ε_t)
Consequently the exact solution has a representation given by

x_t^*(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}^*(x_{t−1}, ε)

and

E[x_{t+1}^*(x_{t−1}, ε)] = B x_t^* + Σ_{ν=0}^∞ F^ν φ E[z_{t+1+ν}^*(x_{t−1}, ε)] + (I − F)^{−1} φ ψ_c

with

M(x_{t−1}, x_t^*, E_t x_{t+1}^*, ε_t) = 0  ∀ (x_{t−1}, ε_t)
Now consider a proposed solution for the model, x_t^p = g^p(x_{t−1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)] so that

E_t x_{t+1} = G^p(g^p(x_{t−1}, ε_t))

e_t^p(x_{t−1}, ε) ≡ M(x_{t−1}, x_t^p, E_t x_{t+1}^p, ε_t)
By construction the conditional expectations paths will all be bounded in the region A_p. Using G^p and L, construct the family of trajectories and corresponding z_t^p(x_{t−1}, ε):12

{ x_t^p(x_{t−1}, ε_t) ∈ R^L : ‖x_t^p(x_{t−1}, ε_t)‖_2 ≤ X̄ }  ∀ t > 0

z_t^p(x_{t−1}, ε_t) ≡ H_{−1} x_{t−1}^p(x_{t−1}, ε_t) + H_0 x_t^p(x_{t−1}, ε_t) + H_1 x_{t+1}^p(x_{t−1}, ε_t)
The proposed solution has a representation given by

x_t^p(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z_{t+ν}^p(x_{t−1}, ε)

and

E[x_{t+1}^p(x_{t−1}, ε)] = B x_t^p + Σ_{ν=0}^∞ F^ν φ z_{t+1+ν}^p(x_{t−1}, ε) + (I − F)^{−1} φ ψ_c

with

e_t^p(x_{t−1}, ε) ≡ M(x_{t−1}, x_t^p, E_t x_{t+1}^p, ε_t)

Subtracting the two representations gives

x_t^*(x_{t−1}, ε) − x_t^p(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ (z_{t+ν}^*(x_{t−1}, ε) − z_{t+ν}^p(x_{t−1}, ε))

Defining

Δz_{t+ν}^p(x_{t−1}, ε_t) ≡ z_{t+ν}^*(x_{t−1}, ε) − z_{t+ν}^p(x_{t−1}, ε)

we have

x_t^*(x_{t−1}, ε) − x_t^p(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz_{t+ν}^p(x_{t−1}, ε_t)

‖x_t^*(x_{t−1}, ε) − x_t^p(x_{t−1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz_{t+ν}^p(x_{t−1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz_t^p we can bound the largest difference in x_t^*.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the Δz_t^p values for the given initial conditions, and the conditional expectations paths remain in the region A_p, the largest errors for Δz_{t+k}^p are pessimistic bounds for the errors from the conditional expectations path.
‖Δz_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x_t^*(x_{t−1}, ε) − x_t^p(x_{t−1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
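The bound just stated can be checked numerically: evaluate the weighted model residual at many candidate (x_−, ε) points and take the maximum. A small Python sketch under toy assumptions (square matrices; `residual` is a hypothetical stand-in for the model residual e_p):

```python
import numpy as np

def decision_rule_error_bound(F, phi, residual, points):
    """max over points of || (I - F)^{-1} phi e_p(x_-, eps) ||_2,
    where residual(x_-, eps) returns the model residual e_p."""
    weight = np.linalg.solve(np.eye(F.shape[0]) - F, phi)
    return max(np.linalg.norm(weight @ residual(x, e)) for x, e in points)

# Toy check: F = 0.5 I and phi = I give weight (I - F)^{-1} = 2 I, so a
# constant residual of size 1e-4 yields a bound of 2e-4.
F = 0.5 * np.eye(2)
phi = np.eye(2)
bound = decision_rule_error_bound(F, phi,
                                  lambda x, e: np.array([1e-4, 0.0]),
                                  [(0.0, 0.0), (1.0, 0.5)])
```

In practice the `points` would cover the bounded region A_p, for example the model evaluation points used elsewhere in the paper.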
z_t^* = H_− x_{t−1}^* + H_0 x_t^* + H_+ x_{t+1}^*

z_t^p = H_− x_{t−1}^p + H_0 x_t^p + H_+ x_{t+1}^p

0 = M(x_{t−1}, x_t^*, x_{t+1}^*, ε_t)

e_t^p(x_{t−1}, ε) = M(x_{t−1}^p, x_t^p, x_{t+1}^p, ε_t)

max_{x_{t−1}, ε_t} (‖φ Δz_t‖_2)^2 = max_{x_{t−1}, ε_t} (‖φ (H_0 (x_t^* − x_t^p) + H_+ (x_{t+1}^* − x_{t+1}^p))‖_2)^2
with

M(x_{t−1}, x_t^*, x_{t+1}^*, ε_t) = 0

Δe_t^p(x_{t−1}, ε) ≈ (∂M/∂x_t) (x_t^* − x_t^p) + (∂M/∂x_{t+1}) (x_{t+1}^* − x_{t+1}^p)

When ∂M/∂x_t is non-singular we can write

(x_t^* − x_t^p) ≈ (∂M/∂x_t)^{−1} ( Δe_t^p(x_{t−1}, ε) − (∂M/∂x_{t+1}) (x_{t+1}^* − x_{t+1}^p) )
Collecting results,

(‖φ Δz_t‖_2)^2
  = (‖φ ( H_0 (∂M/∂x_t)^{−1} ( Δe_t^p(x_{t−1}, ε) − (∂M/∂x_{t+1}) (x_{t+1}^* − x_{t+1}^p) ) + H_+ (x_{t+1}^* − x_{t+1}^p) )‖_2)^2
  = (‖φ ( H_0 (∂M/∂x_t)^{−1} Δe_t^p(x_{t−1}, ε) − ( H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_+ ) (x_{t+1}^* − x_{t+1}^p) )‖_2)^2
Approximating the derivatives using the H matrices,

∂M/∂x_t ≈ H_0,   ∂M/∂x_{t+1} ≈ H_+

so that H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_+ ≈ 0, produces

max_{x_{t−1}, ε_t} (‖φ Δe_t^p(x_{t−1}, ε)‖_2)^2
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial (x_{t−1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t−1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)
If we know the current state, we can use the known conditional expectation function for that state:

E_t(x_{t+1}(x_t) | s_t = 1), ..., E_t(x_{t+1}(x_t) | s_t = N)
For the next state we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1)
  ⋮
  E_t(x_{t+2}(x_t) | s_t = N) ]
  = (P ⊗ I)
[ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1)
  ⋮
  E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]
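The stacked two-step expectation above is a single Kronecker-product multiply. A minimal numpy sketch with hypothetical two-regime numbers:

```python
import numpy as np

def stack_regime_expectations(P, cond_exps):
    """Apply (P kron I) to the stacked regime-conditional expectations
    E_t(x_{t+2} | s_{t+1} = j) to get E_t(x_{t+2} | s_t = i)."""
    n_x = cond_exps.size // P.shape[0]
    return np.kron(P, np.eye(n_x)) @ cond_exps

# Two regimes, two state variables: stacked [E(.|s=1); E(.|s=2)].
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
stacked = np.array([1.0, 0.0,    # conditional on s_{t+1} = 1
                    2.0, 0.0])   # conditional on s_{t+1} = 2
result = stack_regime_expectations(P, stacked)
# result[:2] mixes the regimes with row 1 of P: 0.9*1 + 0.1*2 = 1.1
```

The block sizes are illustrative; in the model each block would be the full N_x-vector of state-conditional expectations.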
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1, with

Prob(s_t = j | s_{t−1} = i) = p_{ij}
if (s_t = 0 and μ > 0 and (k_t − (1 − d_0) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} + δ(1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0) k_{t−1})
μ_t (k_t − (1 − d_0) k_{t−1} − υ I_ss)
if (s_t = 0 and μ = 0 and (k_t − (1 − d_0) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} + δ(1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0) k_{t−1})
μ_t (k_t − (1 − d_0) k_{t−1} − υ I_ss)
if (s_t = 1 and μ > 0 and (k_t − (1 − d_1) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} + δ(1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1) k_{t−1})
μ_t (k_t − (1 − d_1) k_{t−1} − υ I_ss)
if (s_t = 1 and μ = 0 and (k_t − (1 − d_1) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} + δ(1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1) k_{t−1})
μ_t (k_t − (1 − d_1) k_{t−1} − υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, ..., C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system R^{3N_x + N_ε} → R^{N_x}

input: (L, H_0, κ, {C_1, ..., C_M}, S, T)
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much; the norm of the difference is controlled by default ("normConvTol" -> 10^{−10}).
2. NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, ..., H_c
Algorithm 1 nestInterp
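Algorithm 1's NestWhile loop is an ordinary fixed-point iteration on the decision rule, stopping when the rule's values at the test points stop changing. A schematic Python version (the `improve` step stands in for doInterp; the names are illustrative, not the prototype's Mathematica API):

```python
import numpy as np

def nest_interp(initial_rule, improve, test_points,
                norm_conv_tol=1e-10, max_iter=500):
    """Iterate rule -> improve(rule) until the sup-norm change of the
    rule's values at the test points falls below norm_conv_tol."""
    rule = initial_rule
    prev = np.array([rule(p) for p in test_points])
    for _ in range(max_iter):
        rule = improve(rule)
        cur = np.array([rule(p) for p in test_points])
        if np.max(np.abs(cur - prev)) < norm_conv_tol:
            return rule
        prev = cur
    raise RuntimeError("nestInterp did not converge")

# Toy contraction whose fixed point is the identity rule f(p) = p.
step = lambda f: (lambda p: 0.5 * (f(p) + p))
rule = nest_interp(lambda p: 0.0, step, [1.0, 2.0])
```

In the paper the initial rule comes from the linear reference model (Algorithm 6) and the test points are the Niederreiter sequence T.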
input: (L, H_i, κ, {C_1, ..., C_M}, S)
1. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t, for any given value of (x_{t−1}, ε_t), from one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M})
3. H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2 doInterp
input: (L, H, κ, {C_1, ..., C_M}, S)
1. Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t, for any given value of (x_{t−1}, ε_t), from one of the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input: (L, G, κ, M)
1. Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2. Z = {Z_{t+1}, ..., Z_{t+κ−1}} = genZsForFindRoot(L, x_t^p, G)
3. modEqnArgsFunc = genLilXkZkFunc(L, Z)
4. X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5. funcOfXtZt = FindRoot((x_t^p, z_t), {M(X), x_t − x_t^p})
6. funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4 genFindRootWorker
input: (x_t^p, L, G, κ, M)
1. Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2. X_t = x_t^p
3. for i = 1; i < κ − 1; i++ do
4.   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5. end
6. Z = {(X_{t+1}, Z_{t+1}), ..., (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x_t^p, G)
output: {Z_{t+1}, ..., Z_{t+κ−1}}
Algorithm 5 genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1. Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2. γ(x, ε) = [ B x + (I − F)^{−1} φ ψ_c + ψ_ε ε_t ; 0_{N_x} ]
3. G(x) = [ B x + (I − F)^{−1} φ ψ_c ; 0_{N_x} ]
4. H = {γ, G}
output: H
Algorithm 6 genBothX0Z0Funcs
input: (L, {Z_{t+1}, ..., Z_{t+κ−1}})
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ−1}})
3. xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
4. xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5. fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7 genLilXkZkFunc
input: (L, {x_{t−1}, ε_t, z_t}, fCon)
1. Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8 genXtOfXtm1
input: (L, x_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2: E_t(x_{t+1}) = Bx_t + (I - F)^{-1}φψ_c + fCon
output: E_t(x_{t+1})
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1: Numerically sums the F contribution for the series:
2: S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
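The sum in fSumC can be accumulated without forming matrix powers explicitly; a Python sketch with made-up F, φ, and Z path:

```python
import numpy as np

# Sketch of fSumC: accumulate S = sum_{nu=1}^{kappa-1} F^{nu-1} phi Z_{t+nu},
# the finite "anticipated shocks" contribution of the future Z path.
def f_sum_c(F, phi, Zpath):
    S = np.zeros(F.shape[0])
    Fpow = np.eye(F.shape[0])        # current power of F, starting at F^0
    for Z in Zpath:                  # Z_{t+1}, ..., Z_{t+kappa-1}
        S += Fpow @ (phi @ Z)
        Fpow = Fpow @ F              # advance to the next power of F
    return S

# illustrative stand-ins for F, phi, and a constant Z path
F = np.array([[0.5, 0.0], [0.0, 0.25]])
phi = np.eye(2)
Z = np.array([1.0, 1.0])
S = f_sum_c(F, phi, [Z] * 3)         # kappa - 1 = 3 terms
# with a constant Z this equals (I + F + F^2) phi Z
```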
input: ({C_1, ..., C_M}, d, (x_{t-1}, ε, x_t, E[x_{t+1}])|_{(b,c)}, S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, ..., C_M}, d, (x_{t-1}, ε, x_t, E[x_{t+1}])|_{(b,c)}, S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, ..., C_M}, d, (x_{t-1}, ε, x_t, E[x_{t+1}])|_{(b,c)}, S)
1: Solves the model equations at the collocation points, producing the interpolation data
output: interpData
Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1: Evaluate the model equations obeying pre- and post-conditions
output: x_t or "failed"
Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep
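The one-dimensional ingredients a Smolyak preparation step assembles — nested Chebyshev-extrema node sets per approximation level, and the matrix of basis polynomials evaluated at the nodes (the "Ψ matrix") — can be sketched as follows. The level-to-node-count rule shown is the standard nested choice, assumed here rather than taken from the prototype.

```python
import numpy as np

# 1-D building blocks of a Smolyak grid: nested Chebyshev-extrema nodes
# and the matrix of Chebyshev basis polynomials evaluated at those nodes.
def cheb_nodes(level):
    m = 1 if level == 1 else 2 ** (level - 1) + 1   # nested counts 1, 3, 5, 9, ...
    if m == 1:
        return np.zeros(1)
    return np.cos(np.pi * np.arange(m) / (m - 1))   # extrema of T_{m-1}

def psi_matrix(nodes):
    # column j holds Chebyshev polynomial T_j evaluated at every node
    return np.polynomial.chebyshev.chebvander(nodes, len(nodes) - 1)

nodes = cheb_nodes(3)      # 5 nodes on [-1, 1], containing the level-2 nodes
Psi = psi_matrix(nodes)    # 5 x 5 and invertible, so interpolation is well posed
```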
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
maxU = (3.02524, 0.199124, 3)
minU = (-3.16879, -0.17629, -3)
Table 2 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 3 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 4 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 5 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 6 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 7 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 8 shows the log of the absolute value of the actual error in the decision rule at each evaluation point. Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Each of the estimated errors was computed with K = 5. Table 10 shows the effect of increasing the value of K. Generally, increasing K increases the accuracy of the solution. The value of K has more effect when the Smolyak grid is more densely populated with collocation points.
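The improvement with K is what one would expect if the omitted tail of the series beyond horizon K shrinks geometrically, since the tail terms involve successively higher powers of F. A quick numeric check with an illustrative stable F (not a matrix from the paper):

```python
import numpy as np

# Illustrative F with spectral radius below one: the truncated tail of the
# series involves terms of order F^K, so its size should fall roughly
# geometrically as K grows, matching the pattern in Table 10.
F = np.array([[0.5, 0.1], [0.0, 0.4]])
tail_norms = [np.linalg.norm(np.linalg.matrix_power(F, K)) for K in range(1, 8)]
# tail_norms decreases monotonically for this F
```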
(k_i, θ_i, ε_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 2: RBC Known Solution, Model Evaluation Points, d=(1,1,1)
(c_i, k_i, θ_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 3: RBC Known Solution, Values at Evaluation Points, d=(1,1,1)
(ln|e_{c_i}|, ln|e_{k_i}|, ln|e_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 4: RBC Known Solution, Actual Errors, d=(1,1,1)
(ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 5: RBC Known Solution, Error Approximations, d=(1,1,1)
(k_i, θ_i, ε_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 6: RBC Known Solution, Model Evaluation Points, d=(2,2,2)
[table body: 30 rows × 3 columns of numeric values]
Table 7: RBC Known Solution, Values at Evaluation Points, d=(2,2,2)
(ln|e_{c_i}|, ln|e_{k_i}|, ln|e_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 8: RBC Known Solution, Actual Errors, d=(2,2,2)
(ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 9: RBC Known Solution, Error Approximations, d=(2,2,2)
K           d=(1,1,1)    d=(2,2,2)    d=(3,3,3)
NonSeries   -6.51367     -12.2771     -16.8947
0           -6.0793      -6.26833     -6.29843
1           -6.00805     -6.24202     -6.49835
2           -6.52219     -7.43876     -7.04785
3           -6.57578     -8.03864     -8.32589
4           -6.62831     -9.1522      -9.49314
5           -6.77305     -10.5516     -10.4247
6           -6.38499     -11.6399     -11.455
7           -6.63849     -11.3627     -13.0213
8           -6.38735     -11.7288     -13.9976
9           -6.46759     -11.9826     -15.3372
10          -6.45966     -12.1345     -15.4157
10          -6.77776     -11.5025     -15.8211
15          -6.67778     -12.4593     -15.8899
20          -6.40802     -11.7296     -17.1652
25          -6.53265     -12.1441     -17.1518
30          -6.81949     -11.6097     -16.3748
35          -6.33508     -12.1446     -16.6963
40          -6.52442     -11.4042     -15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
(k_i, θ_i, ε_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1)
(c_i, k_i, θ_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1)
(ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1)
(k_i, θ_i, ε_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2)
[table body: 30 rows × 3 columns of numeric values]
Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2)
(ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 - d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} - 1) / (1 - η)
The Lagrangian is

L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1})
    + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t - ((1 - d) k_{t-1} + θ_t f(k_{t-1})))
    + δ^t μ_t ((k_t - (1 - d) k_{t-1}) - υ I_ss)

so that we can impose

I_t = (k_t - (1 - d) k_{t-1}) ≥ υ I_ss
The first order conditions become

1/c_t^η + λ_t = 0
λ_t - δ λ_{t+1} θ_{t+1} α k_t^{α-1} - δ (1 - d) λ_{t+1} + μ_t - δ (1 - d) μ_{t+1} = 0

For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) = 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ (1 - d) λ_{t+1} + δ (1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)

if (μ_t = 0 and (k_t - (1 - d) k_{t-1} - υ I_ss) ≥ 0):

λ_t - 1/c_t
c_t + I_t - θ_t k_{t-1}^α
N_t - λ_t θ_t
θ_t - e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t - (α k_t^{α-1} δ N_{t+1} + δ (1 - d) λ_{t+1} + δ (1 - d) μ_{t+1})
I_t - (k_t - (1 - d) k_{t-1})
μ_t (k_t - (1 - d) k_{t-1} - υ I_ss)
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
(k_i, θ_i, ε_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 17: Occasionally Binding Constraints, Model Evaluation Points, d=(1,1,1)
(c_i, I_i, k_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d=(1,1,1)
(ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 19: Occasionally Binding Constraints, Error Approximations, d=(1,1,1)
(k_i, θ_i, ε_i), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 20: Occasionally Binding Constraints, Model Evaluation Points, d=(2,2,2)
[table body: 30 rows × 3 columns of numeric values]
Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d=(2,2,2)
(ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), i = 1, ..., 30
[table body: 30 rows × 3 columns of numeric values]
Table 22: Occasionally Binding Constraints, Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
A Error Approximation Formula Proof

We consider time invariant maps such as those that often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right];$$

then, with

$$E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t)),$$

we have

$$M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and the corresponding $z_t(x_{t-1}, \varepsilon)$:

$$\left\{ x_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\| x_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$

$$z_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x_t(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}(x_{t-1}, \varepsilon_t)$$
Consequently, the exact solution has a representation given by

$$x_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x_{t+1}(x_{t-1}, \varepsilon)\right] = B x_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, E\left[z_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$

with

$$M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$. Define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t))$$

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and the corresponding $z^p_t(x_{t-1}, \varepsilon)$ (see footnote 12):

$$\left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\| x^p_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$$
The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c$$
with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$

Differencing the two representations gives

$$x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right).$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon),$$

we have

$$x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\left\| x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$ (see footnote 13). The exact solution satisfies the model equations exactly, so the residual error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
12. The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13. Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest error for $\Delta z^p_{t+k}$ is a pessimistic bound for the errors from the conditional expectations path.
$$\left\| \Delta z_t \right\| \le \max_{x_{-}, \varepsilon} \left\| \phi \, M\!\left( x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon \right) \right\|_2$$

$$\left\| x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi \, M\!\left( x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon \right) \right\|_2$$
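The worst-case propagation of the model residuals through $(I - F)^{-1}\phi$ can be sketched numerically. The following minimal Python sketch is not the paper's Mathematica prototype; the matrices $F$ and $\phi$ and the residual values are made-up illustrative numbers standing in for residual evaluations $e^p$ at a few $(x_{t-1}, \varepsilon)$ points.

```python
# Sketch of the error bound: the largest residual, propagated through
# (I - F)^{-1} phi, bounds the decision-rule error.  All numbers here
# are illustrative, not values from the paper.

def inv2(m):
    """Inverse of a 2x2 matrix [[a, b], [c, d]]."""
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def norm2(v):
    return sum(vi * vi for vi in v) ** 0.5

F = [[0.5, 0.1], [0.0, 0.4]]      # stable linear reference transition
phi = [[1.0, 0.0], [0.0, 1.0]]    # phi taken as identity for simplicity
ImF_inv = inv2([[1.0 - F[0][0], -F[0][1]],
                [-F[1][0], 1.0 - F[1][1]]])

# Model residuals e^p evaluated at a few (x_{t-1}, eps) points.
residuals = [[0.01, 0.02], [-0.03, 0.005], [0.002, -0.01]]

# Worst-case bound on || x_t - x^p_t ||.
bound = max(norm2(matvec(ImF_inv, matvec(phi, r))) for r in residuals)
```

The bound is dominated by the residual with the largest propagated norm, not necessarily the largest raw residual.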
$$z_t = H_{-} x_{t-1} + H_0 x_t + H_{+} x_{t+1}$$

$$z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}$$

$$0 = M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$$

$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^p_t - x_t) + H_{+} (x^p_{t+1} - x_{t+1}) \right) \right\|_2 \right)^2$$
with

$$M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = 0,$$

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x_{t+1} - x^p_{t+1}).$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$(x_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x_{t+1} - x^p_{t+1}) \right).$$
Collecting results and substituting for $(x_t - x^p_t)$,

$$\left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_{+} \right) (x_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+},$$

the term in $(x_{t+1} - x^p_{t+1})$ drops out, producing

$$\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2 \right)^2.$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} \, E_t\!\left( x_{t+1}(x_t) \mid s_{t+1} = j \right)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} \, E_t\!\left( x_{t+2}(x_{t+1}) \mid s_{t+2} = j \right)$$

If we know the current state, we can use the known conditional expectation function for that state:

$$E_t\!\left( x_{t+1}(x_t) \mid s_t = 1 \right), \;\ldots,\; E_t\!\left( x_{t+1}(x_t) \mid s_t = N \right).$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t\!\left( x_{t+2}(x_t) \mid s_t = 1 \right) \\ \vdots \\ E_t\!\left( x_{t+2}(x_t) \mid s_t = N \right) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t\!\left( x_{t+2}(x_{t+1}) \mid s_{t+1} = 1 \right) \\ \vdots \\ E_t\!\left( x_{t+2}(x_{t+1}) \mid s_{t+1} = N \right) \end{bmatrix}$$

Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, with

$$\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}.$$
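The stacking step above can be sketched in a few lines. This is a toy illustration, not the paper's code: the transition matrix and the per-regime expectation values below are made-up numbers for a 2-state chain and a 2-variable model.

```python
# Sketch of advancing stacked per-regime conditional expectations one
# period with P kron I, as in the formula above.  P and the expectation
# values are illustrative.

def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    rb, cb = len(b), len(b[0])
    return [[a[i // rb][j // cb] * b[i % rb][j % cb]
             for j in range(len(a[0]) * cb)]
            for i in range(len(a) * rb)]

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

P = [[0.9, 0.1], [0.2, 0.8]]       # p_ij = Prob(s_{t+1} = j | s_t = i)
I2 = [[1.0, 0.0], [0.0, 1.0]]

# Stacked E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j) for j = 1, 2.
stacked_next = [1.0, 2.0, 3.0, 4.0]

# Stacked E_t(x_{t+2}(x_t) | s_t = i) for i = 1, 2.
stacked_now = matvec(kron(P, I2), stacked_next)
```

Each block of the result is the probability-weighted mix of the regime-conditional expectations, which is exactly what the $(P \otimes I)$ product encodes.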
If $\left( s_t = 0 \,\wedge\, \mu_t > 0 \,\wedge\, (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) = 0 \right)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} + \delta (1 - d_0) \right) \\
&I_t - \left( k_t - (1 - d_0) k_{t-1} \right) \\
&\mu_t \left( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}$$

If $\left( s_t = 0 \,\wedge\, \mu_t = 0 \,\wedge\, (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) \ge 0 \right)$: the same system of equations applies.

If $\left( s_t = 1 \,\wedge\, \mu_t > 0 \,\wedge\, (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) = 0 \right)$, or $\left( s_t = 1 \,\wedge\, \mu_t = 0 \,\wedge\, (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) \ge 0 \right)$: the same system applies with $d_0$ replaced by $d_1$.
C Algorithm Pseudo-code

$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model

$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components

$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components

$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components

$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components

$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)

$S$: Smolyak anisotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$

$T$: model evaluation points, a Niederreiter sequence in the model variable ergodic region

$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$ collects the decision rule components

$G$: $\{X(x_{t-1}), Z(x_{t-1})\}$ collects the decision rule conditional expectations components

$\mathcal{H}$: $\{\gamma, G\}$ collects decision rule information

$\{C_1, \ldots, C_M\}$: $d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b, c)}$, the equation system $\mathbb{R}^{3N_x + N_\varepsilon} \rightarrow \mathbb{R}^{N_x}$
input: $(\mathcal{L}, \mathcal{H}_0, \kappa, \{C_1, \ldots, C_M\}, S, T)$
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option ``normConvTol'' (default $10^{-10}$).
2. NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M\}$, S)), $\mathcal{H}_0$, notConvergedAt(v, T))
output: $\mathcal{H}_0, \mathcal{H}_1, \ldots, \mathcal{H}_c$
Algorithm 1: nestInterp
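Algorithm 1's outer loop is Mathematica's NestWhile. A minimal Python analogue of this structure is sketched below; the toy update operator standing in for doInterp and the scalar "rule" are assumptions for illustration only.

```python
# Minimal analogue of Algorithm 1's NestWhile loop: apply an update
# operator (standing in for doInterp) until evaluations at the test
# points change by less than normConvTol.

NORM_CONV_TOL = 1e-10

def nest_while(update, initial_rule, test_points, tol=NORM_CONV_TOL,
               max_iter=200):
    rule = initial_rule
    for _ in range(max_iter):
        new_rule = update(rule)
        # Largest change at the test points between successive rules.
        gap = max(abs(new_rule(p) - rule(p)) for p in test_points)
        rule = new_rule
        if gap <= tol:               # converged at the test points
            break
    return rule

# Toy contraction standing in for doInterp: its fixed point is the
# identity map, so the iteration converges to rule(x) = x.
def toy_update(h):
    return lambda x, h=h: 0.5 * (h(x) + x)

rule = nest_while(toy_update, lambda x: 0.0,
                  test_points=[0.25, 0.5, 1.0])
```

The convergence test at test points, rather than on coefficients, mirrors the pseudo-code's "evaluation of the functions at the test points no longer changes much."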
input: $(\mathcal{L}, \mathcal{H}_i, \kappa, \{C_1, \ldots, C_M\}, S)$
1. Symbolically generate a collection of function triples and an outer model solution selection function. Each triple returns the boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ and one of the model equation systems; subsequently use these functions to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs($\mathcal{L}$, $\mathcal{H}_i$, $\kappa$, $\{C_1, \ldots, C_M\}$)
3. $\mathcal{H}_{i+1}$ = makeInterpFuncs(theFindRootFuncs, S)
output: $\mathcal{H}_{i+1}$
Algorithm 2: doInterp
input: $(\mathcal{L}, \mathcal{H}, \kappa, \{C_1, \ldots, C_M\}, S)$
1. Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one of the model equation systems; it subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker($\mathcal{L}$, $\mathcal{H}$, $\kappa$, v[2]), v[3]}), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: $(\mathcal{L}, G, \kappa, M)$
1. Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2. $\mathcal{Z} = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, $G$)
3. modEqnArgsFunc = genLilXkZkFunc($\mathcal{L}$, $\mathcal{Z}$)
4. $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}, \varepsilon_t, z_t$)
5. funcOfXtZt = FindRoot(($x^p_t, z_t$), $\{M(\mathcal{X}), x_t - x^p_t\}$)
6. funcOfXtm1Eps = Function(($x_{t-1}, \varepsilon_t$), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: $(x^p_t, \mathcal{L}, G, \kappa, M)$
1. Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (an empty list for $\kappa = 1$).
2. $X_t = x^p_t$
3. for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4. $\quad \{X_{t+i}, Z_{t+i}\} = G(X_{t+i-1})$
5. end
6. $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, $G$)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$
Algorithm 5: genZsForFindRoot
input: $(\mathcal{L} = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1. Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2. $\gamma(x, \varepsilon) = \begin{bmatrix} Bx + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3. $G(x) = \begin{bmatrix} Bx + (I - F)^{-1} \phi \psi_c \\ 0_{N_x} \end{bmatrix}$
4. $\mathcal{H} = \{\gamma, G\}$
output: $\mathcal{H}$
Algorithm 6: genBothX0Z0Funcs
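In scalar form, the two starting rules in Algorithm 6 reduce to simple closed-form expressions. The sketch below uses made-up scalar parameter values (not from the paper) purely to illustrate the construction.

```python
# Scalar sketch of genBothX0Z0Funcs: the linear reference model supplies
# the initial proposed decision rule and its conditional expectation,
# with the z components initialized to zero.  Parameter values are
# illustrative only.

B, F, phi, psi_c, psi_eps = 0.9, 0.5, 1.0, 0.02, 1.0

def gamma0(x_prev, eps):
    """Initial decision rule gamma(x, eps)."""
    x_t = B * x_prev + phi * psi_c / (1.0 - F) + phi * psi_eps * eps
    return x_t, 0.0                 # (x component, z component = 0)

def G0(x_prev):
    """Initial conditional expectation rule: mean-zero shocks drop out."""
    return B * x_prev + phi * psi_c / (1.0 - F), 0.0
```

The expectation rule G0 differs from gamma0 only by dropping the shock term, since the shocks are mean zero.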
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1. Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2. fCon = fSumC($\phi$, $F$, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3. xtVals = genXtOfXtm1($\mathcal{L}$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
4. xtp1Vals = genXtp1OfXt($\mathcal{L}$, xtVals, fCon)
5. fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: $(\mathcal{L}, x_{t-1}, \varepsilon_t, z_t, fCon)$
1. Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2. $x_t = B x_{t-1} + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t + \phi \psi_z z_t + F \cdot fCon$
output: $x_t$
Algorithm 8: genXtOfXtm1
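The formula in Algorithm 8 is a one-line computation once fCon has been summed. A scalar sketch with illustrative parameter values (all assumptions, with $\psi_z$ and fCon taken as scalars):

```python
# Scalar sketch of genXtOfXtm1: assemble the x_t input for the model
# equations from x_{t-1}, eps_t, z_t, and the pre-summed future
# contribution fCon.  Parameter values are illustrative only.

B, F, phi, psi_c, psi_eps, psi_z = 0.9, 0.5, 1.0, 0.02, 1.0, 1.0

def x_t_of_xtm1(x_prev, eps, z, f_con):
    return (B * x_prev + phi * psi_c / (1.0 - F)
            + phi * psi_eps * eps + phi * psi_z * z + F * f_con)
```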
input: $(\mathcal{L}, x_t, fCon)$
1. Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2. $x_{t+1} = B x_t + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t + \phi \psi_z z_t + fCon$
output: $x_{t+1}$
Algorithm 9: genXtp1OfXt
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1. Numerically sums the $F$ contribution for the series.
2. $S = \sum_{\nu} F^{\nu - 1} \phi Z_{t+\nu}$
output: $S$
Algorithm 10: fSumC
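A plain-Python sketch of the fSumC summation (the actual prototype is Mathematica; the matrices below are small illustrative values):

```python
# Sketch of fSumC: numerically sum F^{nu-1} phi Z_{t+nu} over a finite
# path of Z vectors, where nu = 1 corresponds to the first element.

def matvec(m, v):
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def f_sum_c(F, phi, z_path):
    n = len(F)
    f_pow = [[float(i == j) for j in range(n)] for i in range(n)]  # F^0
    total = [0.0] * n
    for z in z_path:                    # nu = 1, 2, ...
        term = matvec(f_pow, matvec(phi, z))
        total = [t + s for t, s in zip(total, term)]
        f_pow = matmul(f_pow, F)        # advance to the next power of F
    return total

# Illustrative: F = 0.5*I, phi = I, Z path = [(1,0), (0,1)].
s = f_sum_c([[0.5, 0.0], [0.0, 0.5]],
            [[1.0, 0.0], [0.0, 1.0]],
            [[1.0, 0.0], [0.0, 1.0]])
```

Because $F$ is stable, the later terms of the path contribute geometrically less, which is what makes the truncation at $\kappa$ terms workable.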
input: $(\{C_1, \ldots, C_M\}, S)$
1. Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2. interpData = genInterpData($\{C_1, \ldots, C_M\}$, S); $\{\gamma, G\}$ = interpDataToFunc(interpData, S)
output: $\{\gamma, G\}$
Algorithm 11: makeInterpFuncs
input: $(\{C_1, \ldots, C_M\}, S)$
1. Solves the model equations at the collocation points, producing the interpolation data used by interpDataToFunc.
output: interpData
Algorithm 12: genInterpData
input: $(C(x_{t-1}, \varepsilon_t))$
1. Evaluate the model equations, obeying pre- and post-conditions.
output: $x_t$, or ``failed''
Algorithm 13: evaluateTriple
input: $(V, S)$
1. Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$
Algorithm 14: interpDataToFunc
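interpDataToFunc turns values at collocation points into callable rules. As a deliberately simplified one-dimensional stand-in for the Smolyak machinery (not the paper's implementation), Lagrange interpolation over a few collocation points illustrates the data-to-function step:

```python
# One-dimensional stand-in for interpDataToFunc: build a callable rule
# from values at collocation points via Lagrange interpolation.  The
# paper uses multidimensional Smolyak polynomials; this sketch only
# illustrates turning interpolation data into a function.

def lagrange_rule(points, values):
    def rule(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(points, values)):
            w = 1.0
            for j, xj in enumerate(points):
                if j != i:
                    w *= (x - xj) / (xi - xj)   # Lagrange basis weight
            total += yi * w
        return total
    return rule

pts = [-1.0, 0.0, 1.0]        # collocation points
vals = [1.0, 0.0, 1.0]        # sampled rule values (here, x**2)
rule = lagrange_rule(pts, vals)
```

The returned rule reproduces the data exactly at the collocation points and interpolates polynomially in between.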
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
[Table 2: RBC Known Solution Model Evaluation Points, d = (1, 1, 1). Columns $(k_i, \theta_i, \varepsilon_i)$ for evaluation points $i = 1, \ldots, 30$.]
[Table 3: RBC Known Solution Values at Evaluation Points, d = (1, 1, 1). Columns $(c_i, k_i, \theta_i)$ for $i = 1, \ldots, 30$.]
[Table 4: RBC Known Solution Actual Errors, d = (1, 1, 1). Columns $(\ln|e_{c_i}|, \ln|e_{k_i}|, \ln|e_{\theta_i}|)$ for $i = 1, \ldots, 30$.]
[Table 5: RBC Known Solution Error Approximations, d = (1, 1, 1). Columns $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$ for $i = 1, \ldots, 30$.]
[Table 6: RBC Known Solution Model Evaluation Points, d = (2, 2, 2). Columns $(k_i, \theta_i, \varepsilon_i)$ for $i = 1, \ldots, 30$.]
[Table 7: RBC Known Solution Values at Evaluation Points, d = (2, 2, 2).]
[Table 8: RBC Known Solution Actual Errors, d = (2, 2, 2). Columns $(\ln|e_{c_i}|, \ln|e_{k_i}|, \ln|e_{\theta_i}|)$ for $i = 1, \ldots, 30$.]
[Table 9: RBC Known Solution Error Approximations, d = (2, 2, 2). Columns $(\ln|\hat{e}_{c_i}|, \ln|\hat{e}_{k_i}|, \ln|\hat{e}_{\theta_i}|)$ for $i = 1, \ldots, 30$.]
K          d=(1,1,1)   d=(2,2,2)   d=(3,3,3)
NonSeries  -6.51367    -12.2771    -16.8947
0          -6.0793     -6.26833    -6.29843
1          -6.00805    -6.24202    -6.49835
2          -6.52219    -7.43876    -7.04785
3          -6.57578    -8.03864    -8.32589
4          -6.62831    -9.1522     -9.49314
5          -6.77305    -10.5516    -10.4247
6          -6.38499    -11.6399    -11.455
7          -6.63849    -11.3627    -13.0213
8          -6.38735    -11.7288    -13.9976
9          -6.46759    -11.9826    -15.3372
10         -6.45966    -12.1345    -15.4157
10         -6.77776    -11.5025    -15.8211
15         -6.67778    -12.4593    -15.8899
20         -6.40802    -11.7296    -17.1652
25         -6.53265    -12.1441    -17.1518
30         -6.81949    -11.6097    -16.3748
35         -6.33508    -12.1446    -16.6963
40         -6.52442    -11.4042    -15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 12 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 15 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point, and Table 9 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error component by component.
[Table 11: RBC Approximated Solution Model Evaluation Points, d = (1, 1, 1). Columns $(k_i, \theta_i, \varepsilon_i)$ for $i = 1, \ldots, 30$.]
[Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1, 1, 1). Columns $(c_i, k_i, \theta_i)$ for $i = 1, \ldots, 30$.]
[30 rows of error estimates (ln |ê_{c,i}|, ln |ê_{k,i}|, ln |ê_{θ,i}|), i = 1, …, 30; numeric entries garbled in extraction.]
Table 13 RBC Approximated Solution Error Approximations d=(111)
[30 evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries garbled in extraction.]
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
[30 rows of solution values at the evaluation points; numeric entries garbled in extraction.]
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
[30 rows of error estimates (ln |ê_{c,i}|, ln |ê_{k,i}|, ln |ê_{θ,i}|), i = 1, …, 30; numeric entries garbled in extraction.]

Table 16 RBC Approximated Solution Error Approximations d=(222)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \ge \upsilon I_{ss}$$
$$\max u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$

subject to

$$c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$

$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}$$
$$\mathcal{L} = u(c_t) + E_t\left[\sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^t \lambda_t \left(c_t + k_t - \left((1-d)k_{t-1} + \theta_t f(k_{t-1})\right)\right) + \delta^t \mu_t \left((k_t - (1-d)k_{t-1}) - \upsilon I_{ss}\right)\right]$$
so that we can impose

$$I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}$$
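For intuition, the deterministic steady state that pins down the investment floor follows directly from the Euler equation $1 = \delta(\alpha k^{\alpha-1} + 1 - d)$. A minimal Python sketch; the parameter values for α, δ, d, and υ below are illustrative assumptions, not the paper's calibration:

```python
# Deterministic steady state of the RBC model and the investment floor
# I_t >= upsilon * I_ss.  Parameter values are illustrative assumptions.
alpha, delta, d, upsilon = 0.36, 0.95, 0.10, 0.975

# Euler equation at the steady state (theta = 1): 1 = delta*(alpha*k^(alpha-1) + 1 - d)
k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))
I_ss = d * k_ss            # steady-state investment just replaces depreciation
I_floor = upsilon * I_ss   # the occasionally binding lower bound on investment

# sanity check: the Euler residual vanishes at k_ss
euler_residual = 1.0 - delta * (alpha * k_ss ** (alpha - 1.0) + 1.0 - d)
```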
The first order conditions become

$$\lambda_t - \frac{1}{c_t^{\eta}} = 0$$

$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{(\alpha-1)} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1} = 0$$
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If $(\mu > 0 \text{ and } (k_t - (1-d)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k_{t-1}^{\alpha}$$
$$N_t - \lambda_t \theta_t$$
$$\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d)k_{t-1})$$
$$\mu_t(k_t - (1-d)k_{t-1} - \upsilon I_{ss})$$
If $(\mu = 0 \text{ and } (k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k_{t-1}^{\alpha}$$
$$N_t - \lambda_t \theta_t$$
$$\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d)k_{t-1})$$
$$\mu_t(k_t - (1-d)k_{t-1} - \upsilon I_{ss})$$
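The two branches above form a complementarity condition: either the multiplier is positive and investment sits at the floor, or the multiplier is zero and the floor is slack. A compact equivalent check often used in practice (an illustrative encoding, not the paper's formulation) is min(μ_t, I_t − υI_ss) = 0:

```python
def occ_residual(mu, I, I_floor):
    """Complementarity residual for the investment floor: zero exactly when
    either mu > 0 and I is at the floor, or mu = 0 and I is above the floor."""
    return min(mu, I - I_floor)

# binding branch: positive multiplier, investment at the floor
binding = occ_residual(mu=0.3, I=1.0, I_floor=1.0)
# slack branch: zero multiplier, investment above the floor
slack = occ_residual(mu=0.0, I=1.2, I_floor=1.0)
# infeasible point: investment below the floor with zero multiplier
violation = occ_residual(mu=0.0, I=0.9, I_floor=1.0)
```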
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
[30 evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries garbled in extraction.]
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
[30 rows of solution values (c_i, I_i, k_i), i = 1, …, 30; numeric entries garbled in extraction.]
Table 18 Occasionally Binding Constraints Model Values at Evaluation Points d=(111)
[30 rows of error estimates (ln |ê_{c,i}|, ln |ê_{k,i}|, ln |ê_{θ,i}|), i = 1, …, 30; numeric entries garbled in extraction.]
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
[30 evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; numeric entries garbled in extraction.]
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
[30 rows of solution values at the evaluation points; numeric entries garbled in extraction.]
Table 21 Occasionally Binding Constraints Model Values at Evaluation Points d=(222)
[30 rows of error estimates (ln |ê_{c,i}|, ln |ê_{k,i}|, ln |ê_{θ,i}|), i = 1, …, 30; numeric entries garbled in extraction.]
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler Equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38,
2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm And The Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A Peralta-Alva and Manuel Santos Schmedders K and Judd K volume 3 chapter 9 pages517ndash556 Elsevier Science 2014
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x.
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $\mathcal{A}$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right]$$

then with

$$E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $L$, construct the family of trajectories and corresponding $z^*_t(x_{t-1}, \varepsilon)$:

$$x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \quad \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \quad \forall t > 0$$

$$z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t)$$
Consequently the exact solution has a representation given by

$$x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^*_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^*_{t+1}(x_{t-1}, \varepsilon)\right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, E\left[z^*_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$

with

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$ so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t))$$

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$
By construction the conditional expectations paths will all be bounded in the region $\mathcal{A}^p$. Using $G^p$ and $L$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²

$$x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \quad \|x^p_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \quad \forall t > 0$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$$
The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c$$
with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right)$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv \left(z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right)$$

we have

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \sum_{\nu=0}^{\infty} \|F^{\nu} \phi\| \, \|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $\mathcal{A}^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z^*_t\| \le \max_{x_{-}, \varepsilon} \left\| \phi \, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right) \right\|_2$$

$$\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi \, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right) \right\|_2$$
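The series bound can be checked numerically on a small linear example. The matrices below are illustrative: F is entrywise nonnegative with spectral radius below one, so the elementwise geometric comparison behind the bound applies:

```python
import numpy as np

rng = np.random.default_rng(0)
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])        # stable and entrywise nonnegative (illustrative)
phi = np.eye(2)

# a bounded Delta z path and its (truncated) series contribution
dz = rng.uniform(-1.0, 1.0, size=(200, 2))
series = sum(np.linalg.matrix_power(F, k) @ phi @ dz[k] for k in range(200))

# geometric bound: ||(I - F)^{-1} phi||_inf times the largest |Delta z| entry
bound = np.linalg.norm(np.linalg.inv(np.eye(2) - F) @ phi, ord=np.inf) * np.abs(dz).max()
```

The truncated sum always stays inside the geometric bound, which is what makes the worst-case error formula conservative.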
$$z^*_t = H_{-} x^*_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}$$

$$z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}$$

$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t)$$

$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left(\|\phi \Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\left(H_0 (x^p_t - x^*_t) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$
with

$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0$$

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\left(x^*_t - x^p_t\right) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^*_{t+1} - x^p_{t+1}\right)$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular we can write

$$\left(x^*_t - x^p_t\right) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^*_{t+1} - x^p_{t+1}\right)\right)$$
Collecting results, and abbreviating the derivatives of $M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$ as $\frac{\partial M}{\partial x_t}$ and $\frac{\partial M}{\partial x_{t+1}}$:

$$\left(\|\phi \Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1},\varepsilon) - \tfrac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1},\varepsilon) - H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\tfrac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1}) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1},\varepsilon) - \left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\tfrac{\partial M}{\partial x_{t+1}} - H_{+}\right)(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}$$

produces

$$\max_{x_{t-1}, \varepsilon_t} \left(\|\phi \, \Delta e^p_t(x_{t-1}, \varepsilon)\|_2\right)^2$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:
$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} \, E_t\!\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} \, E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)$$
If we know the current state, we can use the known conditional expectation function for the state:

$$E_t\!\left(x_{t+1}(x_t) \mid s_t = 1\right), \; \ldots, \; E_t\!\left(x_{t+1}(x_t) \mid s_t = N\right)$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
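A minimal numerical check of the stacked identity, with two regimes, a scalar state (so $I$ is $1 \times 1$ and $P \otimes I = P$), and illustrative per-regime linear rules $x_{t+1} = a_s x_t$:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])      # p_ij = Prob(s_{t+1} = j | s_t = i)
a = np.array([0.95, 0.80])      # per-regime rules x_{t+1} = a_s x_t (illustrative)
x_t = 2.0

# conditional expectations given next period's regime, stacked over j
u = a * x_t
# expectations given the current regime, via the (P kron I) stacking
v = np.kron(P, np.eye(1)) @ u

# direct probability-weighted average for current regime i = 0
direct = P[0, 0] * a[0] * x_t + P[0, 1] * a[1] * x_t
```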
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates are different, $d_0 > d_1$:

$$\text{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
If $(s_t = 0 \text{ and } \mu > 0 \text{ and } (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k_{t-1}^{\alpha}$$
$$N_t - \lambda_t \theta_t$$
$$\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \delta(1-d_0)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_0)k_{t-1})$$
$$\mu_t(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})$$
If $(s_t = 0 \text{ and } \mu = 0 \text{ and } (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k_{t-1}^{\alpha}$$
$$N_t - \lambda_t \theta_t$$
$$\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \delta(1-d_0)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_0)k_{t-1})$$
$$\mu_t(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})$$
If $(s_t = 1 \text{ and } \mu > 0 \text{ and } (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k_{t-1}^{\alpha}$$
$$N_t - \lambda_t \theta_t$$
$$\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \delta(1-d_1)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_1)k_{t-1})$$
$$\mu_t(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})$$
If $(s_t = 1 \text{ and } \mu = 0 \text{ and } (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k_{t-1}^{\alpha}$$
$$N_t - \lambda_t \theta_t$$
$$\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \delta(1-d_1)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_1)k_{t-1})$$
$$\mu_t(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})$$
C Algorithm Pseudo-code

$L \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: The linear reference model
xp(xtminus1 εt) The proposed decision rule xt components
zp(xtminus1 εt) The proposed decision rule zt components
Xp(xtminus1) The proposed decision rule conditional expectations xt components
Zp(xtminus1) The proposed decision rule their conditional expectations zt components
κ: One less than the number of terms in the series representation (κ ≥ 0)

S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))

T: Model evaluation points: Niederreiter sequence in the model variable ergodic region
γ x(xtminus1 εt) z(xtminus1 εt) collects the decision rule components
G x(xtminus1 εt) z(xtminus1 εt) collects the decision rule conditional expectations components
H γG collects decision rule information
{C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): The equation system $\mathbb{R}^{3N_x + N_\varepsilon} \rightarrow \mathbb{R}^{N_x}$
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. Norm of the difference controlled by default ("normConvTol" → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
Algorithm 1 nestInterp
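The NestWhile call in Algorithm 1 is an ordinary fixed-point iteration whose stopping rule compares successive rules at the test points. A Python analogue of the control flow (illustrative; the prototype itself is written in Mathematica, and the toy contraction below merely stands in for the doInterp update):

```python
def nest_while(step, initial, converged, max_iter=500):
    """Iterate `step` until `converged(prev, curr)` holds, mirroring the
    NestWhile loop that drives the decision-rule updates."""
    prev = initial
    for _ in range(max_iter):
        curr = step(prev)
        if converged(prev, curr):
            return curr
        prev = curr
    raise RuntimeError("no convergence within max_iter iterations")

# toy contraction standing in for the doInterp decision-rule update
fixed_point = nest_while(
    step=lambda v: 0.5 * v + 1.0,
    initial=0.0,
    converged=lambda a, b: abs(a - b) < 1e-10,   # "normConvTol" analogue
)
```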
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2 doInterp
input: (L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ-1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ-1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
5 end
6 {(X_{t+1}, Z_{t+1}), …, (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ-1}}
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [B x + (I - F)^{-1} φ ψ_c + ψ_ε ε_t; 0_{N_x}]
3 G(x, ε) = [B x + (I - F)^{-1} φ ψ_c; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6 genBothX0Z0Funcs
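A sketch of Algorithm 6 in Python (names and shapes are illustrative assumptions): the initial proposal is just the linear reference model's decision rule, with the z component set to zero:

```python
import numpy as np

def gen_x0_rule(B, phi, F, psi_eps, psi_c):
    """Initial decision rule from the linear reference model:
    x_t = B x_{t-1} + (I - F)^{-1} phi psi_c + psi_eps eps_t, with z_t = 0."""
    n = B.shape[0]
    const = np.linalg.solve(np.eye(n) - F, phi @ psi_c)    # (I - F)^{-1} phi psi_c
    def gamma(x, eps):
        return B @ x + const + psi_eps @ eps, np.zeros(n)  # (x part, z part)
    return gamma

# one-variable illustrative reference model
gamma = gen_x0_rule(B=np.array([[0.9]]), phi=np.array([[1.0]]),
                    F=np.array([[0.5]]), psi_eps=np.array([[1.0]]),
                    psi_c=np.array([0.1]))
x_next, z0 = gamma(np.array([1.0]), np.array([0.0]))
```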
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F · fCon
output: x_t
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Numerically sums the F contribution for the series:
2 S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10 fSumC
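Algorithm 10's sum is a finite matrix-geometric series over the supplied Z path. A direct Python rendering (illustrative):

```python
import numpy as np

def f_sum_c(F, phi, Z_future):
    """Sum the F contribution S = sum_{nu>=1} F^(nu-1) phi Z_{t+nu}
    over a finite path Z_future = [Z_{t+1}, ..., Z_{t+k-1}]."""
    S = np.zeros(phi.shape[0])
    for nu, Z in enumerate(Z_future, start=1):
        S += np.linalg.matrix_power(F, nu - 1) @ phi @ Z
    return S

F = np.array([[0.5]])
phi = np.array([[2.0]])
Z_path = [np.array([1.0])] * 3
S = f_sum_c(F, phi, Z_path)   # (1 + 0.5 + 0.25) * 2.0 = 3.5
```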
input: ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Input: (C_1, …, C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_{(b,c)}, S)
1. Solves the model equations at the collocation points, producing the interpolation data.
Output: interpData
Algorithm 12: genInterpData
Input: (C, (x_{t−1}, ε_t))
1. Evaluate the model equations, obeying the pre- and post-conditions.
Output: x_t, or failed
Algorithm 13: evaluateTriple
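The "x_t or failed" contract of Algorithm 13 maps naturally onto a guarded solver call. A sketch under assumed interfaces (the `solve_model`, `pre`, and `post` callables are hypothetical stand-ins for the prototype's Mathematica functions):

```python
def evaluate_triple(solve_model, x_tm1, eps_t, pre=None, post=None):
    """Solve the model equations for x_t at (x_{t-1}, eps_t), returning
    None (the 'failed' outcome) when a pre- or post-condition is violated
    or when the inner solver does not converge."""
    if pre is not None and not pre(x_tm1, eps_t):
        return None                      # precondition violated
    try:
        x_t = solve_model(x_tm1, eps_t)  # e.g. a nonlinear root-finder call
    except (ValueError, ArithmeticError):
        return None                      # solver failure counts as 'failed'
    if post is not None and not post(x_t):
        return None                      # postcondition (e.g. bounds) violated
    return x_t
```

Signaling failure explicitly lets the outer collocation loop discard bad points rather than silently contaminating the interpolation data.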
Input: (V, S)
1. Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations V; each row corresponds to a vector of values at a collocation point.
Output: {γ, G}
Algorithm 14: interpDataToFunc
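The core of Algorithm 14 is solving a square linear system: basis polynomials evaluated at the collocation points against the function values there. A generic sketch (any basis callable works; the paper uses a Smolyak polynomial family, which this does not reproduce):

```python
import numpy as np

def interp_data_to_func(points, values, basis):
    """Given collocation points, function values at those points, and a
    callable basis(p) -> vector of basis-polynomial values, return the
    interpolating function.  Assumes #points == #basis functions so the
    collocation matrix is square."""
    Psi = np.array([basis(p) for p in points])      # collocation (Psi) matrix
    coefs = np.linalg.solve(Psi, np.array(values))  # interpolation coefficients
    return lambda p: basis(p) @ coefs
```

With more evaluation points than basis functions, `np.linalg.lstsq` would replace `solve`, giving a regression step instead of exact interpolation.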
Input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
Output: xPts, PsiMat, polys, intPolys
Algorithm 15: smolyakInterpolationPrep
Input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
Output: xPts, PsiMat, polys, intPolys
Algorithm 16: smolyakInterpolationPrep
Input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
Output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps

Input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
Output: xPts, PsiMat, polys, intPolys
Algorithm 18: genSolutionErrXtEpsZero

Input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
Output: xPts, PsiMat, polys, intPolys
Algorithm 19: genSolutionPath
c_i   k_i   θ_i      (rows i = 1, …, 30)
0318386 016551 092020413447 0214841 1102150341848 0177697 09607770375209 0195014 102742035634 0185242 09872730400686 0208232 1084810329563 0171293 09431650416315 0216325 110250372502 0193618 1019590351946 0182927 0987070384948 0200107 102813031841 0165481 0922020377048 0196011 1018780368995 019178 1024950402167 0209001 1065470339185 0176282 0971490347449 0180596 09792860378776 0196859 1037850308332 0160276 08966580401836 0208829 1077070330982 0172039 09456220415597 0215941 1101370343927 0178809 09606590384305 0199732 1044880393281 0204401 1052910332894 0173006 0962198036484 0189639 1002620345376 0179513 09746510326232 0169582 09336520422656 0219618 112227
Table 3 RBC Known Solution Values at Evaluation Points d=(111)
ln|e_{c,i}|   ln|e_{k,i}|   ln|e_{θ,i}|      (rows i = 1, …, 30)

-6.86423   -7.61023   -6.2776
-6.77469   -7.25654   -6.193
-8.27193   -9.26988   -7.90952
-9.53366   -9.94641   -9.07674
-9.70109   -11.0692   -10.8193
-7.01131   -7.52677   -6.4226
-8.62144   -9.43568   -8.12434
-6.93069   -7.36841   -6.2991
-8.82105   -9.39136   -7.96867
-9.48549   -9.94897   -9.04695
-6.83337   -7.45765   -6.41963
-11.193    -13.9426   -8.33167
-7.13458   -7.70834   -6.51919
-9.46667   -10.6151   -10.154
-7.13317   -7.97018   -6.71715
-6.76168   -7.44531   -6.20438
-9.46294   -10.7345   -8.60101
-8.99962   -9.32606   -8.36754
-6.5744    -7.28112   -6.0606
-7.26443   -7.74753   -6.65645
-7.95047   -8.75065   -7.34261
-7.90465   -8.02499   -7.40682
-7.78668   -8.88177   -7.3262
-8.44035   -8.86441   -7.83514
-7.21479   -7.97368   -6.58639
-6.40774   -7.09877   -5.86537
-11.8365   -11.1009   -11.8397
-7.81106   -8.43094   -7.01803
-7.28745   -8.06534   -6.74087
-6.33557   -6.84989   -5.76803
Table 4 RBC Known Solution Actual Errors d=(111)
ln|ê_{c,i}|   ln|ê_{k,i}|   ln|ê_{θ,i}|      (rows i = 1, …, 30)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
k_i   θ_i   ε_i      (rows i = 1, …, 30)
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6 RBC Known Solution Model Evaluation Points d=(222)
k_i   θ_i   ε_i      (rows i = 1, …, 30)
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
ln|e_{c,i}|   ln|e_{k,i}|   ln|e_{θ,i}|      (rows i = 1, …, 30)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8: RBC Known Solution Actual Errors, d=(2,2,2)
ln|ê_{c,i}|   ln|ê_{k,i}|   ln|ê_{θ,i}|      (rows i = 1, …, 30)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
K           d = (1,1,1)   d = (2,2,2)   d = (3,3,3)
NonSeries   -6.51367      -12.2771      -16.8947
0           -6.0793       -6.26833      -6.29843
1           -6.00805      -6.24202      -6.49835
2           -6.52219      -7.43876      -7.04785
3           -6.57578      -8.03864      -8.32589
4           -6.62831      -9.1522       -9.49314
5           -6.77305      -10.5516      -10.4247
6           -6.38499      -11.6399      -11.455
7           -6.63849      -11.3627      -13.0213
8           -6.38735      -11.7288      -13.9976
9           -6.46759      -11.9826      -15.3372
10          -6.45966      -12.1345      -15.4157
10          -6.77776      -11.5025      -15.8211
15          -6.67778      -12.4593      -15.8899
20          -6.40802      -11.7296      -17.1652
25          -6.53265      -12.1441      -17.1518
30          -6.81949      -11.6097      -16.3748
35          -6.33508      -12.1446      -16.6963
40          -6.52442      -11.4042      -15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
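The error tables report the natural log of the absolute value of each componentwise error. A minimal helper in this spirit (an illustrative sketch, not the paper's Mathematica code; for the unknown-solution case "truth" would be a more accurate benchmark solution rather than a closed form):

```python
import numpy as np

def log_abs_errors(approx, truth):
    """Componentwise ln|error| between a proposed decision rule's values
    and reference values, as reported in the error tables."""
    a = np.asarray(approx, dtype=float)
    t = np.asarray(truth, dtype=float)
    return np.log(np.abs(a - t))
```

Entries near -15 thus correspond to absolute errors around 3e-7, which is how the tables make the gain from higher Smolyak degrees and longer series visible at a glance.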
k_i   θ_i   ε_i      (rows i = 1, …, 30)
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
c_i   k_i   θ_i      (rows i = 1, …, 30)
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
ln|ê_{c,i}|   ln|ê_{k,i}|   ln|ê_{θ,i}|      (rows i = 1, …, 30)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
k_i   θ_i   ε_i      (rows i = 1, …, 30)
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
c_i   k_i   θ_i      (rows i = 1, …, 30)
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
ln|ê_{c,i}|   ln|ê_{k,i}|   ln|ê_{θ,i}|      (rows i = 1, …, 30)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches.11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a)
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

The problem becomes

max u(c_t) + E_t Σ_{ν=0}^{∞} δ^{ν+1} u(c_{t+ν+1})

subject to

c_t + k_t = (1 − d)k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1)/(1 − η)

with Lagrangian

L = u(c_t) + E_t [ Σ_{ν=1}^{∞} δ^ν u(c_{t+ν}) + Σ_{ν=0}^{∞} δ^ν λ_{t+ν}(c_{t+ν} + k_{t+ν} − ((1 − d)k_{t+ν−1} + θ_{t+ν} f(k_{t+ν−1}))) + δ^ν μ_{t+ν}((k_{t+ν} − (1 − d)k_{t+ν−1}) − υ I_ss) ]

so that we can impose

I_t = (k_t − (1 − d)k_{t−1}) ≥ υ I_ss
The first order conditions become

1/c_t^η + λ_t = 0

λ_t − δλ_{t+1}θ_{t+1}α k_t^{α−1} − δ(1 − d)λ_{t+1} + μ_t − δ(1 − d)μ_{t+1} = 0
For example, the Euler equations for the neoclassical growth model with this constraint can be written as:

11. The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1 − d)k_{t−1} − υI_ss) = 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d)λ_{t+1} + δ(1 − d)μ_{t+1}) = 0
I_t − (k_t − (1 − d)k_{t−1}) = 0
μ_t (k_t − (1 − d)k_{t−1} − υI_ss) = 0

if (μ_t = 0 and (k_t − (1 − d)k_{t−1} − υI_ss) ≥ 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + δ(1 − d)λ_{t+1} + δ(1 − d)μ_{t+1}) = 0
I_t − (k_t − (1 − d)k_{t−1}) = 0
μ_t (k_t − (1 − d)k_{t−1} − υI_ss) = 0
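The branch system above can be evaluated as a residual function in which the sign of μ_t selects the binding or slack complementarity condition. A sketch for the log utility case (variable packaging and names are illustrative; the paper's prototype handles this in Mathematica):

```python
import numpy as np

def obc_residuals(lam, c, I, N, th, k, mu, k_lag, th_lag, eps,
                  lam_p, mu_p, N_p, alpha, delta, d, rho, upsilon, Iss):
    """Residual vector for the occasionally-binding-constraint system;
    _p suffixes denote period t+1 values, _lag denotes t-1 values."""
    return np.array([
        lam - 1.0 / c,
        c + I - th * k_lag ** alpha,
        N - lam * th,
        th - np.exp(rho * np.log(th_lag) + eps),
        lam + mu - (alpha * k ** (alpha - 1.0) * delta * N_p
                    + delta * (1.0 - d) * lam_p + delta * (1.0 - d) * mu_p),
        I - (k - (1.0 - d) * k_lag),
        # complementary slackness: binding branch forces investment to the
        # floor; slack branch forces the multiplier itself to zero
        (k - (1.0 - d) * k_lag - upsilon * Iss) if mu > 0 else mu,
    ])
```

A root of this residual function, branch by branch, delivers the decision rule values at each collocation point in the OBC tables that follow.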
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k_i   θ_i   ε_i      (rows i = 1, …, 30)
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
c_i   I_i   k_i      (rows i = 1, …, 30)
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18: Occasionally Binding Constraints Model: Values at Evaluation Points, d=(1,1,1)
ln|ê_{c,i}|   ln|ê_{k,i}|   ln|ê_{θ,i}|      (rows i = 1, …, 30)
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
k_i   θ_i   ε_i      (rows i = 1, …, 30)
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
c_i   I_i   k_i      (rows i = 1, …, 30)
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21: Occasionally Binding Constraints Model: Values at Evaluation Points, d=(2,2,2)
ln|ê_{c,i}|   ln|ê_{k,i}|   ln|ê_{θ,i}|      (rows i = 1, …, 30)
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
ReferencesGary Anderson A reliable and computationally efficient algorithm for imposing the sad-
dle point proprerty in dynamic models Journal of Economic Dynamics and Control34472ndash489 June 2010 URL httpwwwsciencedirectcomsciencearticlepiiS0165188909001924
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelskiy and Kenneth L Judd Avoiding the curse of dimensionality in dynamicstochastic games Harvard University and Hoover Institution and NBER November 2004URL filePcompmpepdf
Jess Gaspar and Kenneth L Judd Solving large scale rational expectations models TechnicalReport 0207 National Bureau of Economic Research Inc February 1997 URL filePt0207pdf available at httpideasrepecorgpnbrnberte0207html
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
53
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd Lilia Maliar and Serguei Maliar Numerically stable and accurate stochasticsimulation approaches for solving dynamic economic models Working Papers Serie AD2011-15 Instituto Valenciano de Investigaciones EconAtildeşmicas SA (Ivie) July 2011 URLhttpsideasrepecorgpiviwpasad2011-15html
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
54
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.
Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-6/2/f8b1713c81765453e1a5e9a52593b52e.
Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.
Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. doi: 10.1007/s10614-005-1732-y.
Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.
Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.
Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.
A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-3/1/3f27e3e4d4b890df59d4f83850c1c3d1.
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x.
Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

Then, with

E_t x*_{t+1} = G(g(x_{t−1}, ε_t)),

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t−1}, ε):

{x*_t(x_{t−1}, ε_t) ∈ R^L : ‖x*_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z*_t(x_{t−1}, ε_t) ≡ H_{−1} x*_{t−1}(x_{t−1}, ε_t) + H_0 x*_t(x_{t−1}, ε_t) + H_1 x*_{t+1}(x_{t−1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t−1}, ε)
and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φψ_c

with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t)
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t−1}, ε):12

{x^p_t(x_{t−1}, ε_t) ∈ R^L : ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z^p_t(x_{t−1}, ε_t) ≡ H_{−1} x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_1 x^p_{t+1}(x_{t−1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φψ_c
with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε))

Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε)

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.13 The exact solution satisfies the model equations exactly; the error associated with the proposed solution therefore leads to a conservative bound on the largest change in z needed to match the exact solution.
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
13 Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
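The bound above can be exercised numerically in a one-dimensional setting. The sketch below uses illustrative scalar values for B, F, and φ (not any calibration from the paper) and checks that the gap between the series representations of an "exact" and a "proposed" z-path respects the (I − F)^{−1} closed-form bound; the constant and shock terms cancel in the difference, so they are omitted.

```python
# Toy check of the series error bound: with |F| < 1, a uniform z-path error
# of size max|dz| moves x_t by at most |phi| * max|dz| / (1 - |F|).
# All numbers here are illustrative stand-ins.
B, F, phi = 0.9, 0.5, 1.0

def series_x(z_path, x_prev=0.0):
    """x_t from the truncated series representation sum_v F^v phi z_{t+v}."""
    return B * x_prev + sum(F ** v * phi * z for v, z in enumerate(z_path))

z_exact = [0.02 * (-1) ** v for v in range(200)]
z_prop = [z + 0.001 for z in z_exact]        # proposed z-path, off by 1e-3
gap = abs(series_x(z_exact) - series_x(z_prop))
bound = phi * 0.001 / (1 - F)                # closed-form (I - F)^(-1) bound
assert gap <= bound + 1e-12
```

The geometric decay of F^ν is what makes the truncation in the algorithm's κ-term series workable.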
z*_t = H_{−1} x_{t−1} + H_0 x*_t + H_1 x*_{t+1}

z^p_t = H_{−1} x^p_{t−1} + H_0 x^p_t + H_1 x^p_{t+1}

0 = M(x_{t−1}, x*_t, x*_{t+1}, ε_t)

e^p_t(x_{t−1}, ε) = M(x^p_{t−1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t−1}, ε_t} (‖φ Δz_t‖_2)^2 = max_{x_{t−1}, ε_t} (‖φ (H_0 (x^p_t − x*_t) + H_1 (x^p_{t+1} − x*_{t+1}))‖_2)^2
with

M(x_{t−1}, x*_t, x*_{t+1}, ε_t) = 0

Δe^p_t(x_{t−1}, ε) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1})

When ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

(x*_t − x^p_t) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1}))
Collecting results:

(‖φ Δz_t‖_2)^2 = (‖φ (H_0 (∂M/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})) + H_1 (x^p_{t+1} − x*_{t+1}))‖_2)^2

= (‖φ (H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) − H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}) + H_1 (x^p_{t+1} − x*_{t+1}))‖_2)^2

= (‖φ (H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) − (H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_1)(x*_{t+1} − x^p_{t+1}))‖_2)^2

where each partial derivative of M is evaluated at (x_{t−1}, x_t, x_{t+1}, ε_t).
Approximating the derivatives using the H matrices,

∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0

∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_1

produces

max_{x_{t−1}, ε_t} (‖φ Δe^p_t(x_{t−1}, ε)‖_2)
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial (x_{t−1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t−1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for that state:

E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N)

For the next state, we can compute expectations if the errors are independent:

[E_t(x_{t+2}(x_t) | s_t = 1); …; E_t(x_{t+2}(x_t) | s_t = N)] = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); …; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)]
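The stacked-expectations step can be sketched for a scalar state with two regimes. The transition matrix P and the per-regime rules below are illustrative stand-ins; with a vector-valued x the weighting would use (P ⊗ I) on the stacked per-regime expectation functions, as in the formula above.

```python
# Two-step-ahead expectations with constant transition probabilities:
# weight the per-regime conditional expectation rules by the rows of P.
P = [[0.9, 0.1],
     [0.2, 0.8]]                      # p_ij = Prob(s_{t+1} = j | s_t = i)

def E_next(x, regime):
    """E(x_{t+1} | regime): a stand-in per-regime linear rule."""
    return (0.95 if regime == 0 else 0.90) * x

def E_two_ahead(x, current_regime):
    """E_t[x_{t+2}] = sum_j p_ij * E(x_{t+2} | s_{t+1} = j), errors independent."""
    return sum(P[current_regime][j] * E_next(E_next(x, current_regime), j)
               for j in range(2))
```

Iterating this composition produces the conditional expectations path required by the series formula, regime by regime.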
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1:

Prob(s_t = j | s_{t−1} = i) = p_{ij}
if (s_t = 0 and μ > 0 and (k_t − (1 − d_0)k_{t−1} − υI_{ss}) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} δ(1 − d_0))
I_t − (k_t − (1 − d_0)k_{t−1})
μ_t (k_t − (1 − d_0)k_{t−1} − υI_{ss})
if (s_t = 0 and μ = 0 and (k_t − (1 − d_0)k_{t−1} − υI_{ss}) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_0) + μ_{t+1} δ(1 − d_0))
I_t − (k_t − (1 − d_0)k_{t−1})
μ_t (k_t − (1 − d_0)k_{t−1} − υI_{ss})
if (s_t = 1 and μ > 0 and (k_t − (1 − d_1)k_{t−1} − υI_{ss}) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} δ(1 − d_1))
I_t − (k_t − (1 − d_1)k_{t−1})
μ_t (k_t − (1 − d_1)k_{t−1} − υI_{ss})
if (s_t = 1 and μ = 0 and (k_t − (1 − d_1)k_{t−1} − υI_{ss}) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ(1 − d_1) + μ_{t+1} δ(1 − d_1))
I_t − (k_t − (1 − d_1)k_{t−1})
μ_t (k_t − (1 − d_1)k_{t−1} − υI_{ss})
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}, collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})}, collects the decision rule conditional expectations components
H: {γ, G}, collects the decision rule information
{C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x + N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by default (``normConvTol'' → 10^{−10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1: nestInterp
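The outer loop of Algorithm 1 is a fixed-point iteration on the rule functions. The sketch below mirrors the NestWhile pattern in plain Python; the toy `do_interp` update is an illustrative contraction standing in for the Smolyak re-fitting step, and `NORM_CONV_TOL` matches the default tolerance named above.

```python
# Minimal sketch of the nestInterp outer loop: re-apply the interpolation
# update until rule values at the test points stop changing.
NORM_CONV_TOL = 1e-10

def nest_while(do_interp, h0, test_points):
    h = h0
    while True:
        h_next = do_interp(h)
        # sup-norm of the change at the test points
        diff = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if diff < NORM_CONV_TOL:
            return h

# Toy update with fixed point h*(x) = 2: each pass halves the distance.
rule = nest_while(lambda h: (lambda x: 0.5 * h(x) + 1.0),
                  lambda x: 0.0, [0.0, 1.0])
```

In the paper's algorithm the update rebuilds interpolants for both the decision rule and its conditional expectation, but the stopping logic is the same.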
input: (L, H_i, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) from one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ − 1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(X_{t+i−1})
5 end
6 {{X_{t+1}, Z_{t+1}}, …, {X_{t+κ−1}, Z_{t+κ−1}}} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [B x + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t; 0_{N_x}]
3 G(x, ε) = [B x + (I − F)^{−1} φ ψ_c; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, x_{t−1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_{t−1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
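Algorithm 10 is a short accumulation loop. The sketch below shows it with scalar F and φ for readability; the paper's version is matrix-valued, and the horizon Z = [Z_{t+1}, …, Z_{t+κ−1}] is whatever genZsForFindRoot produced.

```python
# fSumC in miniature: accumulate S = sum_v F^(v-1) * phi * Z_{t+v}
# over a short horizon of anticipated-shock terms.
def f_sum_c(F, phi, Z):
    S, F_pow = 0.0, 1.0          # F_pow holds F**(v-1), starting at F**0
    for z in Z:                  # Z = [Z_{t+1}, ..., Z_{t+k-1}]
        S += F_pow * phi * z
        F_pow *= F
    return S
```

For example, `f_sum_c(0.5, 2.0, [1.0, 1.0, 1.0])` accumulates 2·(1 + 0.5 + 0.25) = 3.5.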
Algorithm 10 fSumC
input: ({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
Algorithm 11 makeInterpFuncs
input: ({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing the interpolation data.
output: interpData
Algorithm 12: genInterpData
input: (C, (x_{t−1}, ε_t))
1 Evaluate the model equations obeying pre- and post-conditions.
output: x_t or failed
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
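As a stand-in for the interpolation-prep step, the sketch below builds one-dimensional Chebyshev collocation points, the matrix of basis values at those points (the analogue of the Ψ matrix), and a fitted interpolant. The paper's routine performs the corresponding construction on Smolyak anisotropic grids in several dimensions; the function names here are illustrative.

```python
# One-dimensional Chebyshev collocation prep and fit.
import math

def cheb_prep(n, lo, hi):
    """Collocation points on [lo, hi], Psi matrix of basis values, basis T."""
    xs = [0.5 * (lo + hi) + 0.5 * (hi - lo) * math.cos(math.pi * (2 * k + 1) / (2 * n))
          for k in range(n)]                       # Chebyshev nodes, mapped
    def T(j, x):                                   # Chebyshev basis on [lo, hi]
        z = (2 * x - lo - hi) / (hi - lo)
        return math.cos(j * math.acos(max(-1.0, min(1.0, z))))
    psi = [[T(j, x) for j in range(n)] for x in xs]
    return xs, psi, T

def cheb_fit(f, n, lo, hi):
    """Interpolant of f via discrete Chebyshev orthogonality at the nodes."""
    xs, psi, T = cheb_prep(n, lo, hi)
    fs = [f(x) for x in xs]
    c = [(2.0 / n) * sum(fs[k] * psi[k][j] for k in range(n)) for j in range(n)]
    c[0] *= 0.5                                    # constant term is halved
    return lambda x: sum(cj * T(j, x) for j, cj in enumerate(c))
```

With n nodes the fit reproduces polynomials of degree below n exactly; Smolyak grids extend this idea to many dimensions while keeping the point count manageable.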
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus686423 minus761023 minus62776minus677469 minus725654 minus6193minus827193 minus926988 minus790952minus953366 minus994641 minus907674minus970109 minus110692 minus108193minus701131 minus752677 minus64226minus862144 minus943568 minus812434minus693069 minus736841 minus62991minus882105 minus939136 minus796867minus948549 minus994897 minus904695minus683337 minus745765 minus641963minus11193 minus139426 minus833167minus713458 minus770834 minus651919minus946667 minus106151 minus10154minus713317 minus797018 minus671715minus676168 minus744531 minus620438minus946294 minus107345 minus860101minus899962 minus932606 minus836754minus65744 minus728112 minus60606minus726443 minus774753 minus665645minus795047 minus875065 minus734261minus790465 minus802499 minus740682minus778668 minus888177 minus73262minus844035 minus886441 minus783514minus721479 minus797368 minus658639minus640774 minus709877 minus586537minus118365 minus111009 minus118397minus781106 minus843094 minus701803minus728745 minus806534 minus674087minus633557 minus684989 minus576803
Table 4 RBC Known Solution Actual Errors d=(111)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus684011 minus759703 minus62776minus679189 minus733194 minus6193minus827296 minus926585 minus790952minus954759 minus996466 minus907674minus972214 minus11142 minus108193minus7019 minus758967 minus64226minus861034 minus941771 minus812434minus695566 minus744593 minus62991minus883746 minus943331 minus796867minus948388 minus995406 minus904695minus68561 minus750703 minus641963minus108845 minus136119 minus833167minus715654 minus775234 minus651919minus948715 minus105641 minus10154minus716809 minus802412 minus671715minus674694 minus743005 minus620438minus946173 minus107234 minus860101minus900034 minus936818 minus836754minus655256 minus725512 minus60606minus728911 minus780039 minus665645minus793653 minus873939 minus734261minus787653 minus813406 minus740682minus779071 minus888802 minus73262minus845665 minus889927 minus783514minus725059 minus802416 minus658639minus638709 minus707562 minus586537minus119887 minus111639 minus118397minus78057 minus842757 minus701803minus727379 minus805278 minus674087minus635248 minus693629 minus576803
Table 5 RBC Known Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 6 RBC Known Solution Model Evaluation Points d=(222)
k1 θ1 ε1
k30 θ30 ε30
0175411 100081 minus0008618540182233 101265 minus0001215660181807 0987487 minus0006150910183086 0994888 minus0003066380179142 100488 minus0008001630179142 101672 minus00005987560178715 0991558 minus0005534010184685 100636 minus0001832570179142 10108 minus0006767820179142 0998959 minus0004300190185538 0997479 minus0009235440180208 0980456 minus000460865018138 100821 minus000954390177969 100821 minus0002141020184365 100673 minus0007076270178396 0991928 minus00009072090176263 100821 minus0005842460179675 100821 minus0003374830179248 0983046 minus0008310080184792 0999329 minus0001524120177436 0997479 minus0006459370180847 102116 minus0003991740180421 0995999 minus0008926990182979 0998959 minus0002757930180847 101524 minus0007693180177436 0991558 minus00002903030183832 0990077 minus000522555018202 0984526 minus000260370178049 0994426 minus000753895018146 101811 minus0000136076
Table 7 RBC Known Solution Values at Evaluation Points d=(222)
ln(|ec1|) ln(|ek1|) ln(|eθ1|)
ln(|ec30|) ln(|ek30|) ln(|eθ30|)
minus136548 minus136548 minus141969minus180614 minus137477 minus147744minus172538 minus16064 minus171189minus182334 minus14931 minus184609minus149375 minus150484 minus1806minus133718 minus134466 minus143339minus153357 minus151409 minus166528minus134818 minus17055 minus186856minus165151 minus17974 minus160224minus168629 minus171652 minus169262minus150336 minus130546 minus150929minus148104 minus151068 minus170829minus139454 minus150733 minus153665minus170059 minus150225 minus168123minus143533 minus136955 minus161402minus14697 minus152052 minus155889minus156999 minus165298 minus165502minus15469 minus158159 minus161557minus129005 minus170451 minus155094minus13407 minus145127 minus159061minus15605 minus150689 minus155069minus145434 minus144278 minus151171minus148235 minus148132 minus166327minus150699 minus151443 minus177803minus135813 minus147228 minus146813minus151304 minus152556 minus15467minus173355 minus174808 minus205415minus132332 minus152877 minus161059minus15668 minus149417 minus152354minus142264 minus128483 minus142733
Table 8 RBC Known Solution Actual Errorsd=(222)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus135994 minus137316 minus141969minus19713 minus137563 minus147744minus178538 minus159394 minus171189minus184186 minus149247 minus184609minus149453 minus150569 minus1806minus133661 minus134484 minus143339minus152186 minus150483 minus166528minus134591 minus164547 minus186856minus166757 minus174658 minus160224minus167507 minus173581 minus169262minus176809 minus13212 minus150929minus148476 minus150629 minus170829minus141282 minus158166 minus153665minus168176 minus149953 minus168123minus147882 minus138997 minus161402minus145928 minus150521 minus155889minus158049 minus163126 minus165502minus15479 minus158001 minus161557minus128385 minus153986 minus155094minus133789 minus14598 minus159061minus156645 minus151186 minus155069minus144965 minus14459 minus151171minus147832 minus147719 minus166327minus150335 minus151856 minus177803minus137298 minus153206 minus146813minus152349 minus151718 minus15467minus173423 minus17473 minus205415minus132297 minus153227 minus161059minus157978 minus150212 minus152354minus140272 minus128995 minus142733
Table 9 RBC Known Solution Error Approximationsd=(222)
K | d=(1,1,1) | d=(2,2,2) | d=(3,3,3)
NonSeries | −6.51367 | −12.2771 | −16.8947
0 | −6.0793 | −6.26833 | −6.29843
1 | −6.00805 | −6.24202 | −6.49835
2 | −6.52219 | −7.43876 | −7.04785
3 | −6.57578 | −8.03864 | −8.32589
4 | −6.62831 | −9.1522 | −9.49314
5 | −6.77305 | −10.5516 | −10.4247
6 | −6.38499 | −11.6399 | −11.455
7 | −6.63849 | −11.3627 | −13.0213
8 | −6.38735 | −11.7288 | −13.9976
9 | −6.46759 | −11.9826 | −15.3372
10 | −6.45966 | −12.1345 | −15.4157
10 | −6.77776 | −11.5025 | −15.8211
15 | −6.67778 | −12.4593 | −15.8899
20 | −6.40802 | −11.7296 | −17.1652
25 | −6.53265 | −12.1441 | −17.1518
30 | −6.81949 | −11.6097 | −16.3748
35 | −6.33508 | −12.1446 | −16.6963
40 | −6.52442 | −11.4042 | −15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
c1 k1 θ1
c30 k30 θ30
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
k1 θ1 ε1
k30 θ30 ε30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16 RBC Approximated Solution Error Approximationsd=(222)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_{ss}

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 − d)k_{t−1} + θ_t f(k_{t−1})
θ_t = θ^ρ_{t−1} e^{ε_t}
f(k_t) = k^α_t
u(c) = (c^{1−η} − 1)/(1 − η)
$$\mathcal{L} = u(c_t) + E_t \left[ \sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^t \lambda_t \big( c_t + k_t - ((1-d) k_{t-1} + \theta_t f(k_{t-1})) \big) + \delta^t \mu_t \big( (k_t - (1-d) k_{t-1}) - \upsilon I_{ss} \big) \right]$$
so that we can impose

$$I_t = k_t - (1-d) k_{t-1} \ge \upsilon I_{ss}.$$
The first order conditions become

$$\frac{1}{c_t^{\eta}} + \lambda_t = 0$$

$$\lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta (1-d) \lambda_{t+1} + \mu_t - \delta (1-d) \mu_{t+1} = 0$$
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If $\mu_t > 0$ and $k_t - (1-d) k_{t-1} - \upsilon I_{ss} = 0$ (constraint binding):

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} = 0\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} = 0\\
&N_t - \lambda_t \theta_t = 0\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t} = 0\\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d) \lambda_{t+1} + \delta (1-d) \mu_{t+1} \right) = 0\\
&I_t - (k_t - (1-d) k_{t-1}) = 0\\
&\mu_t (k_t - (1-d) k_{t-1} - \upsilon I_{ss}) = 0
\end{aligned}$$

If $\mu_t = 0$ and $k_t - (1-d) k_{t-1} - \upsilon I_{ss} \ge 0$ (constraint slack), the same equation system applies with $\mu_t = 0$.
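The two branches above implement complementary slackness for the investment constraint. Below is a minimal Python sketch (the paper's prototype is in Mathematica; the parameter values are illustrative assumptions, not the paper's calibration) of how a solver can select and evaluate the appropriate branch:

```python
# Sketch of the complementary-slackness branch selection above.
# d, upsilon, I_ss are assumed illustrative values, not the paper's calibration.
d, upsilon, I_ss = 0.025, 0.975, 0.5

def investment(k_t, k_tm1):
    """I_t = k_t - (1 - d) * k_{t-1}."""
    return k_t - (1.0 - d) * k_tm1

def constraint_branch(k_t, k_tm1, mu_t):
    """Select the complementary-slackness branch and report its residual.

    mu_t > 0  -> the constraint must bind:     I_t - upsilon * I_ss = 0
    mu_t == 0 -> the constraint must be slack: I_t - upsilon * I_ss >= 0
    """
    gap = investment(k_t, k_tm1) - upsilon * I_ss
    if mu_t > 0.0:
        return "binding", abs(gap)       # should be ~0 at a solution
    return "slack", max(0.0, -gap)       # positive only when the branch is violated

branch, residual = constraint_branch(k_t=10.0, k_tm1=5.0, mu_t=0.0)
```

At a candidate solution, a root finder would drive the reported residual of the selected branch to zero.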
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error component by component.
Columns: $k_i$, $\theta_i$, $\varepsilon_i$, rows $i = 1, \dots, 30$.
[Table body: numeric values garbled in extraction; omitted.]
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
Columns: $c_i$, $I_i$, $k_i$, rows $i = 1, \dots, 30$.
[Table body: numeric values garbled in extraction; omitted.]
Table 18 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(111)
Columns: $\ln(|\hat{e}_{c,i}|)$, $\ln(|\hat{e}_{k,i}|)$, $\ln(|\hat{e}_{\theta,i}|)$, rows $i = 1, \dots, 30$.
[Table body: numeric values garbled in extraction; omitted.]
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
Columns: $k_i$, $\theta_i$, $\varepsilon_i$, rows $i = 1, \dots, 30$.
[Table body: numeric values garbled in extraction; omitted.]
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
[Table body: 30 rows of three numeric columns; headers and values garbled in extraction, omitted.]
Table 21 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(222)
Columns: $\ln(|\hat{e}_{c,i}|)$, $\ln(|\hat{e}_{k,i}|)$, $\ln(|\hat{e}_{\theta,i}|)$, rows $i = 1, \dots, 30$.
[Table body: numeric values garbled in extraction; omitted.]
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
Adrian Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[ g(x, \varepsilon) \right]$$

Then with

$$E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding $z^*_t(x_{t-1}, \varepsilon)$:

$$x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \quad \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0$$

$$z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t)$$
Consequently, the exact solution has a representation given by

$$x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^*_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[ x^*_{t+1}(x_{t-1}, \varepsilon) \right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, E\left[ z^*_{t+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c$$
with

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv E\left[ g^p(x, \varepsilon) \right]$ so that

$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and L, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²

$$x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \quad \|x^p_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$$
The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[ x^p_{t+1}(x_{t-1}, \varepsilon) \right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c$$
with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right)$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon),$$

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\left\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \, \left\| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_-, \varepsilon} \left\| \phi \, M(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon) \right\|_2$$

$$\left\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \max_{x_-, \varepsilon} \left\| (I - F)^{-1} \phi \, M(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon) \right\|_2$$
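As a minimal numeric sketch of the bound above (in Python rather than the paper's Mathematica; `F`, `phi`, and the residual function below are toy assumptions standing in for the linear reference model and the model equations M):

```python
import numpy as np

# Sketch of the global decision-rule error bound: take the largest
# model-equation residual over a grid of (x_{t-1}, eps) points and propagate
# it through (I - F)^{-1} phi.  F and phi are assumed toy matrices here.
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])          # assumed stable (spectral radius < 1)
phi = np.eye(2)

def error_bound(residual_fn, points):
    """max over test points of ||(I - F)^{-1} phi residual||_2."""
    IminusF_inv = np.linalg.inv(np.eye(2) - F)
    norms = [np.linalg.norm(IminusF_inv @ phi @ residual_fn(x, e))
             for (x, e) in points]
    return max(norms)

# Toy residual standing in for M(x_-, g^p(x_-,eps), G^p(g^p(x_-,eps)), eps).
toy_resid = lambda x, e: np.array([0.01 * x, 0.02 * e])
bound = error_bound(toy_resid, [(1.0, 0.5), (2.0, -1.0)])
```

In practice the maximization would run over the Niederreiter test points in the ergodic region rather than a two-point toy grid.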
$$z^*_t = H_- x^*_{t-1} + H_0 x^*_t + H_+ x^*_{t+1}$$

$$z^p_t = H_- x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1}$$

$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t)$$

$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left( \|\phi \Delta z_t\|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^p_t - x^*_t) + H_+ (x^p_{t+1} - x^*_{t+1}) \right) \right\|_2 \right)^2$$
with

$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0$$

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular we can write

$$(x^*_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right)$$
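A small sketch of this local error estimate, dropping the $x_{t+1}$ term and using assumed toy values for $H_0$ (standing in for $\partial M / \partial x_t$) and the residual:

```python
import numpy as np

# Sketch of the local decision-rule error estimate: with dM/dx_t approximated
# by H0 and the x_{t+1} term dropped, the error is roughly H0^{-1} e^p_t.
# H0 and e_p below are assumed toy values for illustration.
H0 = np.array([[2.0, 0.0],
               [0.5, 1.0]])          # assumed nonsingular
e_p = np.array([0.02, -0.01])        # model-equation residual at (x_{t-1}, eps)

x_err_estimate = np.linalg.solve(H0, e_p)   # ~ (x_t^* - x_t^p)
```

Solving the linear system rather than inverting $H_0$ explicitly is the standard numerically stable choice.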
Collecting results:

$$\left( \|\phi \Delta z_t\|_2 \right)^2 = \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) + H_+ (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2^2$$

$$= \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_+ \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2^2$$
Approximating the derivatives using the H matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_+$$

produces

$$\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2 \right)^2$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} \, E_t(x_{t+1}(x_t) \mid s_{t+1} = j)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} \, E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j)$$

If we know the current state, we can use the known conditional expectation function for that state:

$$\begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix}$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
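A brief Python sketch of this constant-probability mixing (the transition matrix and conditional values below are toy assumptions):

```python
import numpy as np

# Sketch: with regime-conditional expectation values E[x_{t+1} | s_{t+1} = j],
# the expectation from current regime i mixes them with row i of the
# transition matrix P.  P and the conditional values are assumed toy inputs.
P = np.array([[0.9, 0.1],     # rows sum to 1
              [0.2, 0.8]])

def mixed_expectation(i, cond_exps):
    """E_t[x_{t+1}] from regime i: sum_j p_ij * E[x_{t+1} | s_{t+1} = j]."""
    return sum(P[i, j] * cond_exps[j] for j in range(P.shape[0]))

cond = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]   # toy conditional values
ex = mixed_expectation(0, cond)   # 0.9*[1, 2] + 0.1*[3, 4] = [1.2, 2.2]
```

Stacking the regimes and applying $(P \otimes I)$, as in the displayed matrix equation, computes all rows of this mixture at once.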
Consider two states $s_t \in \{0, 1\}$ whose depreciation rates differ, with $d_0 > d_1$:

$$\text{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
In state $s_t = j$ (for $j \in \{0, 1\}$), the model uses depreciation rate $d_j$, and the two complementary-slackness branches are as follows.

If $\mu_t > 0$ and $k_t - (1-d_j) k_{t-1} - \upsilon I_{ss} = 0$ (constraint binding):

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t} = 0\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} = 0\\
&N_t - \lambda_t \theta_t = 0\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t} = 0\\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d_j) \lambda_{t+1} + \delta (1-d_j) \mu_{t+1} \right) = 0\\
&I_t - (k_t - (1-d_j) k_{t-1}) = 0\\
&\mu_t (k_t - (1-d_j) k_{t-1} - \upsilon I_{ss}) = 0
\end{aligned}$$

If $\mu_t = 0$ and $k_t - (1-d_j) k_{t-1} - \upsilon I_{ss} \ge 0$ (constraint slack), the same equation system applies with $\mu_t = 0$.
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic-grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}, collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})}, collects the decision rule conditional expectations components
H: {γ, G}, collects decision rule information
{C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x + N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option "normConvTol" (default 10^{−10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
Algorithm 1 nestInterp
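A Python sketch of this outer loop (the paper's prototype uses Mathematica's NestWhile; the refinement map below is a toy contraction standing in for doInterp):

```python
import numpy as np

# Sketch of Algorithm 1's outer loop: repeatedly refine a proposed decision
# rule until its values at a fixed set of test points stop changing,
# mirroring NestWhile with a normConvTol-style convergence check.
def nest_while(refine, h0, test_points, tol=1e-10, max_iter=500):
    h = h0
    prev = np.array([h(x) for x in test_points])
    for _ in range(max_iter):
        h = refine(h)                                 # one doInterp-style step
        cur = np.array([h(x) for x in test_points])
        if np.max(np.abs(cur - prev)) < tol:          # converged at test points
            return h
        prev = cur
    raise RuntimeError("decision rule iteration did not converge")

# Toy contraction standing in for doInterp; its fixed point is h(x) = 2.
refine = lambda h: (lambda x: 0.5 * h(x) + 1.0)
h_star = nest_while(refine, lambda x: 0.0, test_points=[0.0, 0.5, 1.0])
```

In the paper's setting, `refine` would rebuild the Smolyak interpolants from the collocation solves, and the test points would be the Niederreiter sequence T.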
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) and one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2 doInterp
input: (L, H, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1; i < κ − 1; i++ do
4:   (X_{t+i}, Z_{t+i}) = G(Z_{t+i−1})
5: end
6: {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [Bx + (I − F)^{−1}φψ_c + ψ_ε ε_t; 0_{N_x}]
3: G(x) = [Bx + (I − F)^{−1}φψ_c; 0_{N_x}]
4: H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3: xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7 genLilXkZkFunc
input: (L, x_{t−1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t
Algorithm 8 genXtOfXtm1
input: (L, x_{t−1}, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_{t+1} = Bx_t + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_{t+1}
Algorithm 9 genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sums the F contribution for the series:
2: S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10 fSumC
input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S); {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11 makeInterpFuncs
input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing interpolation data. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
output: interpData
Algorithm 12: genInterpData
input: (C(x_{t−1}, ε_t))
1: Evaluate the model equations, obeying pre- and post-conditions.
output: x_t, or failed
Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
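A sketch of the coefficient-recovery step behind interpDataToFunc, using a one-dimensional Chebyshev basis as a stand-in for the Smolyak polynomials (an assumption for illustration):

```python
import numpy as np

# Sketch: given the Psi matrix of basis-polynomial values at collocation
# points and the function values at those points, recover interpolation
# coefficients by solving Psi @ coeffs = values.  The 1-D Chebyshev basis
# here is a stand-in for the paper's Smolyak anisotropic-grid polynomials.
pts = np.cos(np.pi * (np.arange(4) + 0.5) / 4)       # Chebyshev nodes on [-1, 1]
Psi = np.polynomial.chebyshev.chebvander(pts, 3)     # 4x4 basis matrix
vals = pts**2                                        # interpolate f(x) = x^2
coeffs = np.linalg.solve(Psi, vals)                  # exact: x^2 = (T0 + T2)/2

interp = lambda x: np.polynomial.chebyshev.chebval(x, coeffs)
```

In the multidimensional Smolyak case the same linear solve applies, with one row of Ψ per collocation point and one column per sparse-grid basis polynomial.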
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
Columns: $\ln(|\hat{e}_{c,i}|)$, $\ln(|\hat{e}_{k,i}|)$, $\ln(|\hat{e}_{\theta,i}|)$, rows $i = 1, \dots, 30$.
[Table body: numeric values garbled in extraction; omitted.]
Table 5 RBC Known Solution Error Approximations d=(111)
31
Columns: k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2)
Columns: k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2)
Columns: ln(|e_c,i|), ln(|e_k,i|), ln(|e_θ,i|), for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 8: RBC Known Solution Actual Errors, d = (2,2,2)
Columns: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 9: RBC Known Solution Error Approximations, d = (2,2,2)
Rows: series length K (NonSeries, then K = 0, 1, …, 40). Columns: d = (1,1,1), d = (2,2,2), d = (3,3,3).
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
Columns: k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 11: RBC Approximated Solution Model Evaluation Points, d = (1,1,1)
Columns: c_i, k_i, θ_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1,1,1)
Columns: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 13: RBC Approximated Solution Error Approximations, d = (1,1,1)
Columns: k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2)
Columns (as extracted): k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2,2,2)
Columns: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1 − η)

The Lagrangian is

L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ [ δ^t λ_t (c_t + k_t − ((1 − d) k_{t−1} + θ_t f(k_{t−1}))) + δ^t μ_t ((k_t − (1 − d) k_{t−1}) − υ I_ss) ],

so that we can impose

I_t = (k_t − (1 − d) k_{t−1}) ≥ υ I_ss.

The first order conditions become

λ_t − 1/c_t^η = 0
λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1} = 0
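The floor υ I_ss in the constraint is tied to the model's deterministic steady state. As a quick check of that calculation, here is a minimal sketch, assuming only the calibration parameters α, δ, d as inputs (the function name is illustrative, not from the paper's code):

```python
def steady_state(alpha, delta, d):
    """Deterministic steady state (theta = 1, mu = 0) of the RBC model.

    The Euler equation lambda_t = delta*lambda_{t+1}*(alpha*k^(alpha-1) + 1 - d)
    implies alpha * k_ss^(alpha-1) = 1/delta - (1 - d) in the steady state.
    """
    k_ss = (alpha / (1.0 / delta - (1.0 - d))) ** (1.0 / (1.0 - alpha))
    i_ss = d * k_ss                  # steady-state investment I_ss
    c_ss = k_ss ** alpha - i_ss      # resource constraint: c + I = theta * k^alpha
    return k_ss, i_ss, c_ss
```

With, say, α = 0.36, δ = 0.95, d = 0.1, the investment floor is then υ·i_ss for a chosen υ < 1.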
For example, the Euler equations for the neoclassical growth model can be written as:

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If (μ_t > 0 and k_t − (1 − d) k_{t−1} − υ I_ss = 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t−1}) = 0
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss) = 0

If (μ_t = 0 and k_t − (1 − d) k_{t−1} − υ I_ss ≥ 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε_t} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d) k_{t−1}) = 0
μ_t (k_t − (1 − d) k_{t−1} − υ I_ss) = 0
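Numerically, selecting between the two branches above is a guess-and-verify step: solve under μ_t = 0 and, if the implied investment violates the floor, switch to the binding branch. A minimal sketch (the function name and return convention are illustrative, not the paper's implementation):

```python
def apply_investment_floor(i_unconstrained, upsilon, i_ss):
    """Pick the complementarity branch for I_t >= upsilon * I_ss.

    Returns (i_t, binding). If the unconstrained prescription already
    satisfies the floor, the slack branch applies (mu_t = 0); otherwise
    investment is pinned at the floor and mu_t > 0 must then be recovered
    from the Euler equation of the binding branch.
    """
    floor = upsilon * i_ss
    if i_unconstrained >= floor:
        return i_unconstrained, False   # slack: mu_t = 0
    return floor, True                  # binding: I_t = upsilon * I_ss
```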
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
Columns: k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1)
Columns: c_i, I_i, k_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 18: Occasionally Binding Constraints: Values at Evaluation Points, d = (1,1,1)
Columns: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1)
Columns: k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2,2,2)
Columns (as extracted): k_i, θ_i, ε_i, for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 21: Occasionally Binding Constraints: Values at Evaluation Points, d = (2,2,2)
Columns: ln(|ê_c,i|), ln(|ê_k,i|), ln(|ê_θ,i|), for evaluation points i = 1, …, 30.
[Numeric entries omitted: the decimal separators were lost in extraction, so the values cannot be reliably reconstructed.]

Table 22: Occasionally Binding Constraints Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft November 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. URL http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 03043932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections: parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 01651889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 01651889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.
Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 09277099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];

then with

E_t x*_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀ (x_{t−1}, ε_t).
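In practice G must itself be approximated, since it integrates the decision rule over the shock distribution. A dependency-free sketch using Monte Carlo averaging (the paper's Mathematica prototype uses quadrature instead; the function and the example law of motion below are illustrative assumptions):

```python
import math
import random

def expectation_G(g, x, sigma, draws=20000, seed=0):
    """Approximate G(x) = E[g(x, eps)] with eps ~ N(0, sigma^2)
    by averaging over pseudo-random normal draws."""
    rng = random.Random(seed)
    return sum(g(x, rng.gauss(0.0, sigma)) for _ in range(draws)) / draws
```

For the technology process θ′ = θ^ρ e^ε the exact conditional mean is θ^ρ e^{σ²/2}, which the average reproduces to Monte Carlo accuracy.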
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z*_t(x_{t−1}, ε):

{ x*_t(x_{t−1}, ε_t) ∈ R^L : ‖x*_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀ t > 0 }

z*_t(x_{t−1}, ε_t) ≡ H_{−1} x*_{t−1}(x_{t−1}, ε_t) + H_0 x*_t(x_{t−1}, ε_t) + H_1 x*_{t+1}(x_{t−1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t−1}, ε)

and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φ ψ_c

with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀ (x_{t−1}, ε_t).
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))
e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t−1}, ε):12

{ x^p_t(x_{t−1}, ε_t) ∈ R^L : ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀ t > 0 }

z^p_t(x_{t−1}, ε_t) ≡ H_{−1} x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_1 x^p_{t+1}(x_{t−1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φ ψ_c

with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t).

Subtracting the two representations,

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε)).

Defining Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε), we have

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
13 Since the future values are probability-weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
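In the scalar case the geometric structure of these bounds is transparent: ‖F^ν φ‖ decays geometrically, so the infinite sum collapses to |φ|/(1 − |F|) times the worst residual. A small illustrative sketch (symbols follow the formulas above; scalar case only, function name is illustrative):

```python
def decision_rule_error_bound(F, phi, max_dz):
    """Scalar version of the bound
        ||x*_t - x^p_t||_inf <= sum_nu |F|^nu * |phi| * max ||Delta z^p||,
    which sums to |phi| * max_dz / (1 - |F|) when |F| < 1."""
    if abs(F) >= 1.0:
        raise ValueError("the series representation requires |F| < 1")
    return abs(phi) * max_dz / (1.0 - abs(F))
```

A truncated partial sum of the series converges to the same value, which is a convenient sanity check on any implementation.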
z*_t = H_{−1} x*_{t−1} + H_0 x*_t + H_1 x*_{t+1}
z^p_t = H_{−1} x^p_{t−1} + H_0 x^p_t + H_1 x^p_{t+1}

0 = M(x_{t−1}, x*_t, x*_{t+1}, ε_t)
e^p_t(x_{t−1}, ε) = M(x^p_{t−1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t−1}, ε_t} (‖φ Δz_t‖_2)² = max_{x_{t−1}, ε_t} (‖φ (H_0 (x^p_t − x*_t) + H_1 (x^p_{t+1} − x*_{t+1}))‖_2)²
with

M(x_{t−1}, x*_t, x*_{t+1}, ε_t) = 0,

Δe^p_t(x_{t−1}, ε) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1}).

When ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{−1} ( Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}) ).
collecting results
(φ∆zt2)2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε)minus
partM(xtminus1 xt xt+1 εt)partxt+1
(xt+1 minus xpt+1)
)+H+(xpt+1 minus xt+1)))2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε))minusH0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1partM(xtminus1 xt xt+1 εt)
partxt+1(xt+1 minus x
pt+1)
+H+(xpt+1 minus xt+1)))2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε))minusH0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1partM(xtminus1 xt xt+1 εt)
partxt+1minusH+
(xt+1 minus xpt+1)))2 =
58
approximating the derivatives using the H matrices
partM(xtminus1 xt xt+1 εt)partxt
asymp H0
partM(xtminus1 xt xt+1 εt)partxt+1
asymp H+
produces
maxxtminus1εt
(φ∆ept (xtminus1 ε))2
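The cancellation used in this derivation can be checked numerically: when ∂M/∂x_t = H_0 and ∂M/∂x_{t+1} = H_+ exactly, the future-error term drops out of φ∆z, leaving only the model-equation residual. The sketch below uses random stand-in matrices (not any particular model):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
H0 = rng.normal(size=(n, n)) + 5 * np.eye(n)   # well-conditioned stand-in
Hp = rng.normal(size=(n, n))
phi = rng.normal(size=(n, n))
de = rng.normal(size=n)        # model-equation residual, Delta e_t
dx_next = rng.normal(size=n)   # future decision-rule error x_{t+1} - x^p_{t+1}

# x_t - x^p_t implied by linearizing M around the proposed solution
dx = np.linalg.solve(H0, de - Hp @ dx_next)
# z error: phi * (H0 (x^p_t - x_t) + Hp (x^p_{t+1} - x_{t+1}))
dz = phi @ (H0 @ (-dx) + Hp @ (-dx_next))
print(np.allclose(dz, -phi @ de))   # the future-error term cancels
```

Up to the sign (which the squared norm ignores), the z error reduces to φ∆e, exactly as in the formula above.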
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial condition (x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for that state:

[ E_t(x_{t+1}(x_t) | s_t = 1)
  ⋮
  E_t(x_{t+1}(x_t) | s_t = N) ]

For the next state, we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1)
  ⋮
  E_t(x_{t+2}(x_t) | s_t = N) ]  =  (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1)
  ⋮
  E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]
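The stacked (P ⊗ I) calculation above is mechanical. A minimal Python sketch, with a made-up two-regime transition matrix and hypothetical per-regime expectations:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # p_ij = Prob(s_{t+1} = j | s_t = i)
n = 2                             # number of x variables
# hypothetical conditional expectations given next period's regime
E_next = [np.array([1.0, 2.0]),   # s_{t+1} = first regime
          np.array([3.0, 4.0])]   # s_{t+1} = second regime

stacked = np.concatenate(E_next)
result = np.kron(P, np.eye(n)) @ stacked
# row i holds E_t[x_{t+2} | s_t = i] = sum_j p_ij * E_next[j]
E_given_current = result.reshape(len(P), n)
print(E_given_current)
```

Each row of the result is the probability-weighted average of the next-regime expectations, which is exactly what the series formula needs at each horizon.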
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1, and

Prob(s_t = j | s_{t-1} = i) = p_{ij}
if (s_t = 0 and μ_t > 0 and (k_t − (1−d_0)k_{t-1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_0) + δ(1−d_0) μ_{t+1})
I_t − (k_t − (1−d_0)k_{t-1})
μ_t (k_t − (1−d_0)k_{t-1} − υI_ss)

if (s_t = 0 and μ_t = 0 and (k_t − (1−d_0)k_{t-1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_0) + δ(1−d_0) μ_{t+1})
I_t − (k_t − (1−d_0)k_{t-1})
μ_t (k_t − (1−d_0)k_{t-1} − υI_ss)

if (s_t = 1 and μ_t > 0 and (k_t − (1−d_1)k_{t-1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_1) + δ(1−d_1) μ_{t+1})
I_t − (k_t − (1−d_1)k_{t-1})
μ_t (k_t − (1−d_1)k_{t-1} − υI_ss)

if (s_t = 1 and μ_t = 0 and (k_t − (1−d_1)k_{t-1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_1) + δ(1−d_1) μ_{t+1})
I_t − (k_t − (1−d_1)k_{t-1})
μ_t (k_t − (1−d_1)k_{t-1} − υI_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points: a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M} = d(x_{t-1}, ε, x_t, E[x_{t+1}])|_{(b,c)}: the equation system R^{3N_x + N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M}, S, T)
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default ``normConvTol'' -> 10^{-10}).
2. NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}

Algorithm 1: nestInterp
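The outer loop of Algorithm 1 is a fixed-point iteration on function objects. A loose Python analog of the NestWhile construct (the paper's prototype is in Mathematica) is sketched below; `do_interp` here is a toy contraction standing in for the real interpolation update, and all names are illustrative:

```python
def nest_interp(do_interp, h0, test_points, tol=1e-10, max_iter=500):
    """Iterate h -> do_interp(h) until values at test points stop changing."""
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        # analog of notConvergedAt: largest change at the test points
        diff = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if diff < tol:
            return h
    raise RuntimeError("no convergence")

# toy update: h -> 0.5*(h + x/h), whose fixed point is sqrt(x)
do_interp = lambda h: (lambda x, h=h: 0.5 * (h(x) + x / h(x)))
h = nest_interp(do_interp, lambda x: 1.0, test_points=[2.0, 9.0])
print(h(9.0))
```

The convergence test mirrors the pseudo-code: iteration stops when the rule's values at the test points move by less than the norm tolerance.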
input: (L, H_i, κ, {C_1, …, C_M}, S)
1. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value of x_t for any given value of (x_{t-1}, ε_t) for one of the model equations; the routine subsequently uses these functions to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M})
3. H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M}, S)
1. Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value of x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations; it subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1. Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2. Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3. modEqnArgsFunc = genLilXkZkFunc(L, Z)
4. X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5. funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6. funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1. Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (the empty list for κ = 1).
2. X_t = x^p_t
3. for i = 1; i < κ − 1; i++ do
4.   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5. end
6. {{X_{t+1}, Z_{t+1}}, …, {X_{t+κ-1}, Z_{t+κ-1}}} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ-1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1. Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2. γ(x, ε) = [ Bx + (I − F)^{−1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3. G(x) = [ Bx + (I − F)^{−1}φψ_c ; 0_{N_x} ]
4. H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
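A Python sketch of this initialization step, assuming the linear-reference-model form x = Bx_{-} + (I − F)^{−1}φψ_c + φψ_ε ε used in the algorithms below (the prototype itself is in Mathematica, and all matrices here are made-up stand-ins):

```python
import numpy as np

def gen_both_x0_z0(B, F, phi, psi_c, psi_eps):
    """Initial decision rule and its conditional expectation from the
    linear reference model; the z components start at zero."""
    n = B.shape[0]
    const = np.linalg.solve(np.eye(n) - F, phi @ psi_c)  # (I-F)^{-1} phi psi_c
    gamma = lambda x, eps: np.concatenate(
        [B @ x + const + phi @ psi_eps @ eps, np.zeros(n)])
    # conditional expectation: the shock integrates to zero
    G = lambda x: np.concatenate([B @ x + const, np.zeros(n)])
    return gamma, G

B = np.array([[0.9, 0.0], [0.0, 0.5]])
F = np.array([[0.2, 0.0], [0.0, 0.1]])
phi = np.eye(2); psi_c = np.array([0.1, 0.2]); psi_eps = np.eye(2)
gamma, G = gen_both_x0_z0(B, F, phi, psi_c, psi_eps)
print(gamma(np.array([1.0, 1.0]), np.zeros(2)))
```

With a zero shock the rule and its conditional expectation coincide, which is the property the proposed-solution iteration starts from.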
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1. Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
   xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
   xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
   fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc
input: (L, x_{t-1}, ε_t, z_t, fCon)
1. Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_t = Bx_{t-1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F·fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1. Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_{t+1} = Bx_t + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1. Numerically sums the F contribution for the series.
2. S = Σ_{ν=1}^{κ-1} F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
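The sum in Algorithm 10 is the same geometric accumulation as in the error bound. A minimal Python stand-in, with illustrative matrices and Z path:

```python
import numpy as np

def f_sum_c(F, phi, Z_path):
    """Accumulate S = sum_{nu=1}^{kappa-1} F^(nu-1) @ phi @ Z_{t+nu}."""
    S = np.zeros(F.shape[0])
    Fpow = np.eye(F.shape[0])        # F^0 for the first term
    for Z in Z_path:                 # Z_path = [Z_{t+1}, ..., Z_{t+kappa-1}]
        S += Fpow @ phi @ Z
        Fpow = Fpow @ F
    return S

F = np.array([[0.3, 0.0], [0.1, 0.2]])
phi = np.eye(2)
Z_path = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
print(f_sum_c(F, phi, Z_path))
```

Keeping a running power of F avoids recomputing F^ν from scratch at each term.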
input: ({C_1, …, C_M}, S)
1. Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2. interpData = genInterpData({C_1, …, C_M}, S)
   {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M}, S)
1. Solves the model equations at the collocation points, producing the interpolation data used to construct the decision rule and conditional expectation interpolating functions.
output: interpData

Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1. Evaluate the model equations obeying pre- and post-conditions.
output: x_t or ``failed''

Algorithm 13: evaluateTriple
input: (V, S)
1. Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
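As a simplified one-dimensional stand-in for this preparation step (the actual routine builds an anisotropic Smolyak grid over several variables), the sketch below computes Chebyshev collocation nodes on a variable range and the Ψ matrix of polynomial values at those nodes; all names are illustrative:

```python
import numpy as np

def collocation_prep(degree, lo, hi):
    """Chebyshev-Gauss nodes mapped to [lo, hi] and the Psi matrix
    Psi[i, j] = T_j(node_i) of Chebyshev polynomial values."""
    k = np.arange(degree + 1)
    nodes01 = np.cos((2 * k + 1) * np.pi / (2 * (degree + 1)))  # on [-1, 1]
    x_pts = lo + (hi - lo) * (nodes01 + 1) / 2
    # T_j(x) = cos(j * arccos(x)) evaluated at every node
    psi = np.cos(np.outer(np.arccos(nodes01), k))
    return x_pts, psi

x_pts, psi = collocation_prep(3, 0.1, 0.3)
print(psi.shape)
```

Solving Ψ c = f for the coefficient vector c then yields the interpolating polynomial; the Smolyak construction plays the same role in higher dimensions with far fewer points than a tensor grid.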
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
[Table: 30 rows of (k, θ, ε) evaluation points]

Table 6: RBC Known Solution Model Evaluation Points, d = (2,2,2)
[Table: values at the 30 evaluation points]

Table 7: RBC Known Solution Values at Evaluation Points, d = (2,2,2)
[Table: ln|e_c|, ln|e_k|, ln|e_θ| at the 30 evaluation points]

Table 8: RBC Known Solution Actual Errors, d = (2,2,2)
[Table: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points]

Table 9: RBC Known Solution Error Approximations, d = (2,2,2)
[Table: log approximation errors by series length K (NonSeries and K = 0, …, 40) for d = (1,1,1), (2,2,2), (3,3,3)]

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
[Table: 30 rows of (k, θ, ε) evaluation points]

Table 11: RBC Approximated Solution Model Evaluation Points, d = (1,1,1)
[Table: (c, k, θ) values at the 30 evaluation points]

Table 12: RBC Approximated Solution Values at Evaluation Points, d = (1,1,1)
[Table: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points]

Table 13: RBC Approximated Solution Error Approximations, d = (1,1,1)
[Table: 30 rows of (k, θ, ε) evaluation points]

Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2)
[Table: values at the 30 evaluation points]

Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2,2,2)
[Table: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points]

Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υI_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1−d)k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1−η)

The Lagrangian is

L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t − ((1−d)k_{t-1} + θ_t f(k_{t-1}))) + δ^t μ_t ((k_t − (1−d)k_{t-1}) − υI_ss)

so that we can impose

I_t = (k_t − (1−d)k_{t-1}) ≥ υI_ss

The first order conditions become

1/c_t^η + λ_t

λ_t − δλ_{t+1}θ_{t+1}α k_t^{α−1} − δ(1−d)λ_{t+1} + μ_t − δ(1−d)μ_{t+1}
For example, the Euler equations for the neoclassical growth model can be written as:

11. The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of ``anticipated shocks'' but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1−d)k_{t-1} − υI_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d) + δ(1−d) μ_{t+1})
I_t − (k_t − (1−d)k_{t-1})
μ_t (k_t − (1−d)k_{t-1} − υI_ss)

if (μ_t = 0 and (k_t − (1−d)k_{t-1} − υI_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d) + δ(1−d) μ_{t+1})
I_t − (k_t − (1−d)k_{t-1})
μ_t (k_t − (1−d)k_{t-1} − υI_ss)
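The branch logic above (try the binding branch, gate on the multiplier; otherwise take the slack branch, gate on the inequality) can be illustrated with a toy complementarity problem. The "model" here is a made-up quadratic, not the RBC system: choose I to minimize (I − I*)² subject to I ≥ I_min, with first-order condition 2(I − I*) − μ = 0:

```python
def solve_with_obc(I_star, I_min):
    """Return (I, mu) satisfying the complementarity conditions."""
    # binding branch: I = I_min, mu recovered from the FOC
    mu = 2.0 * (I_min - I_star)
    if mu > 0:                    # gate: multiplier must be positive
        return I_min, mu
    # slack branch: mu = 0, unconstrained optimum; gate: I >= I_min
    assert I_star >= I_min
    return I_star, 0.0

print(solve_with_obc(1.0, 2.0))   # constraint binds
print(solve_with_obc(3.0, 2.0))   # constraint slack
```

Exactly one branch's gates hold at any state, which is what lets the algorithm's outer selection function pick a unique x_t for each (x_{t-1}, ε_t).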
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
[Table: 30 rows of (k, θ, ε) evaluation points]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1)
[Table: (c, I, k) values at the 30 evaluation points]

Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1,1,1)
[Table: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points]

Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1)
[Table: 30 rows of (k, θ, ε) evaluation points]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2,2,2)
[Table: values at the 30 evaluation points]

Table 21: Occasionally Binding Constraints Values at Evaluation Points, d = (2,2,2)
[Table: ln|ê_c|, ln|ê_k|, ln|ê_θ| at the 30 evaluation points]

Table 22: Occasionally Binding Constraints Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual ``Euler equation'' model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-1311, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. doi:10.1016/j.jmoneco.2014.08.005.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections: Parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. doi:10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. doi:10.1016/j.jedc.2014.03.003.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. doi:10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. doi:10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. doi:10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x_t = g(x_{t-1}, \varepsilon_t), define the conditional expectations function

\[ G(x) \equiv E\left[ g(x, \varepsilon) \right] \]

then with

\[ E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t)) \]

we have

\[ M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]

In words, the exact solution exactly satisfies the model equations. Using G and \mathcal{L}, construct the family of trajectories and corresponding z_t(x_{t-1}, \varepsilon):

\[ \left\{ x_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\| x_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \quad \forall t > 0 \right\} \]

\[ z_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x_t(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}(x_{t-1}, \varepsilon_t) \]

Consequently, the exact solution has a representation given by

\[ x_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[ x_{t+1}(x_{t-1}, \varepsilon) \right] = B x_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[ z_{t+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c \]

with

\[ M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, \varepsilon_t); define G^p(x) \equiv E[g^p(x, \varepsilon)], so that

\[ E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t). \]
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and \mathcal{L}, construct the family of trajectories and corresponding z^p_t(x_{t-1}, \varepsilon):^{12}

\[ \left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\| x^p_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \quad \forall t > 0 \right\} \]

\[ z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t) \]

The proposed solution has a representation given by

\[ x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^p_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[ x^p_{t+1}(x_{t-1}, \varepsilon) \right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c \]

with

\[ e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t). \]

Subtracting the two representations gives

\[ x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right) \]

Defining

\[ \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \]

we have

\[ x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \]

\[ \left\| x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \, \left\| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_\infty \]

By bounding the largest deviation in the paths for the \Delta z^p_t, we can bound the largest difference in x_t.^{13} The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

^{12} The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

^{13} Since the future values are probability weighted averages of the \Delta z^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for \Delta z^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
\[ \left\| \Delta z_t \right\| \le \max_{x_{-}, \varepsilon} \left\| \phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon) \right\|_2 \]

\[ \left\| x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi M(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon) \right\|_2 \]

Since

\[ z_t = H_{-} x_{t-1} + H_0 x_t + H_{+} x_{t+1}, \qquad z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1} \]

\[ 0 = M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t), \qquad e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t) \]

we have

\[ \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^p_t - x_t) + H_{+} (x^p_{t+1} - x_{t+1}) \right) \right\|_2 \right)^2 \]
With

\[ M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = 0 \]

a first order expansion gives

\[ \Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x_{t+1} - x^p_{t+1}) \]

When \partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) / \partial x_t is non-singular, we can write

\[ (x_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x_{t+1} - x^p_{t+1}) \right) \]
Collecting results, and writing M for M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t),

\[ \left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x_{t+1} - x^p_{t+1}) \right) + H_{+} (x^p_{t+1} - x_{t+1}) \right) \right\|_2 \right)^2 \]

\[ = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} + H_{+} \right) (x_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2 \]
Approximating the derivatives using the H matrices,

\[ \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+} \]

produces

\[ \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2 \right)^2 \]
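The final formula can be exercised numerically. Below is a minimal sketch for a scalar linear "model" with a deliberately perturbed proposed rule; the coefficients, function names, and grid are illustrative assumptions, not the paper's Mathematica implementation:

```python
import numpy as np

# Toy scalar model: the exact law of motion is x_t = B x_{t-1} + eps_t, so the
# model residual M vanishes at the exact rule.  We perturb the rule's slope by
# 0.01 and evaluate the conservative sup-norm error bound
#   max_{x, eps} || (I - F)^{-1} phi M(x, g_p(x, eps), G_p(g_p(x, eps)), eps) ||
# over a grid of initial conditions and shocks.
B, phi, F = 0.9, 1.0, 0.5                     # linear reference pieces (assumed)

def g_p(x, eps):                               # proposed rule, slope off by 0.01
    return (B + 0.01) * x + eps

def G_p(x):                                    # E[g_p(x, eps)] with E[eps] = 0
    return (B + 0.01) * x

def M(x_lag, x, Ex_next, eps):                 # model residual function
    return x - B * x_lag - eps

grid_x = np.linspace(-1.0, 1.0, 41)
grid_e = np.linspace(-0.1, 0.1, 41)
bound = max(abs(phi * M(x, g_p(x, e), G_p(g_p(x, e)), e)) / (1.0 - F)
            for x in grid_x for e in grid_e)
print(bound)  # the 0.01 slope error, amplified by (1 - F)^{-1} = 2
```

The bound reproduces the intuition of the proof: the residual of the proposed rule, propagated through (I - F)^{-1} phi, delivers a conservative estimate of the decision-rule error.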
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t-1}, \varepsilon_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, \varepsilon_t, x_t):

\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} \, E_t(x_{t+1}(x_t) \mid s_{t+1} = j) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} \, E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j) \]

If we know the current state, we can use the known conditional expectation function for the state:

\[ E_t(x_{t+1}(x_t) \mid s_t = 1), \; \ldots, \; E_t(x_{t+1}(x_t) \mid s_t = N) \]

For the next state, we can compute expectations if the errors are independent:

\[ \begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix} \]

Consider two states s_t \in \{0, 1\} where the depreciation rates differ, d_0 > d_1, with

\[ \mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}. \]
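The stacked expectation calculation can be sketched directly. The two-regime transition matrix and the linear regime-conditional expectation functions below are illustrative placeholders, not the calibrated model:

```python
import numpy as np

# Two regimes with constant transition probabilities p_ij.  The regime-
# conditional expectation functions are taken to be linear,
# E(x_{t+1}(x_t) | s = j) = B_j x_t.  Stacking the next-regime expectations
# and premultiplying by (P kron I) yields expectations conditional on the
# current regime, as in the (P otimes I) identity above.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                        # p_ij (assumed values)
B = [np.array([[0.8, 0.0], [0.1, 0.7]]),          # regime 0 map (assumed)
     np.array([[0.6, 0.1], [0.0, 0.9]])]          # regime 1 map (assumed)
nx = 2                                            # number of model variables

x_t = np.array([1.0, 2.0])
stacked = np.concatenate([B[j] @ x_t for j in range(2)])   # E(. | s_{t+1} = j)
cond_on_current = np.kron(P, np.eye(nx)) @ stacked         # E(. | s_t = i)
```

Each nx-sized block of `cond_on_current` is the probability-weighted average of the regime-conditional expectations, one block per current regime.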
If \( s_t = 0 \wedge \mu > 0 \wedge (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) = 0 \):

\[ \begin{aligned}
& \lambda_t - \frac{1}{c_t} \\
& c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
& N_t - \lambda_t \theta_t \\
& \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
& \lambda_t + \mu_t - \left( \alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \delta (1 - d_0) \mu_{t+1} \right) \\
& I_t - (k_t - (1 - d_0) k_{t-1}) \\
& \mu_t (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss})
\end{aligned} \]

If \( s_t = 0 \wedge \mu = 0 \wedge (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) \ge 0 \):

\[ \begin{aligned}
& \lambda_t - \frac{1}{c_t} \\
& c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
& N_t - \lambda_t \theta_t \\
& \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
& \lambda_t + \mu_t - \left( \alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \delta (1 - d_0) \mu_{t+1} \right) \\
& I_t - (k_t - (1 - d_0) k_{t-1}) \\
& \mu_t (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss})
\end{aligned} \]

If \( s_t = 1 \wedge \mu > 0 \wedge (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) = 0 \):

\[ \begin{aligned}
& \lambda_t - \frac{1}{c_t} \\
& c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
& N_t - \lambda_t \theta_t \\
& \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
& \lambda_t + \mu_t - \left( \alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \delta (1 - d_1) \mu_{t+1} \right) \\
& I_t - (k_t - (1 - d_1) k_{t-1}) \\
& \mu_t (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss})
\end{aligned} \]
If \( s_t = 1 \wedge \mu = 0 \wedge (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) \ge 0 \):

\[ \begin{aligned}
& \lambda_t - \frac{1}{c_t} \\
& c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
& N_t - \lambda_t \theta_t \\
& \theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
& \lambda_t + \mu_t - \left( \alpha k_t^{\alpha - 1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \delta (1 - d_1) \mu_{t+1} \right) \\
& I_t - (k_t - (1 - d_1) k_{t-1}) \\
& \mu_t (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss})
\end{aligned} \]
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points; a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components
H: {γ, G} collects the decision rule information
{C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system R^{3N_x + N_ε} → R^{N_x}
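The role of S can be illustrated in one dimension with ordinary Chebyshev collocation (the Smolyak construction generalizes this to sparse anisotropic tensor grids); the interpolated function below is an arbitrary stand-in for a decision rule:

```python
import numpy as np
from numpy.polynomial import chebyshev as cheb

# Fit a degree-4 Chebyshev interpolant at 5 Chebyshev collocation points on
# [-1, 1] and evaluate it off-grid.  Smolyak grids play this role in several
# dimensions, with a separate approximation level per variable.
nodes = np.cos((2 * np.arange(5) + 1) * np.pi / 10)   # Chebyshev points
f = np.exp                                            # stand-in decision rule
coeffs = cheb.chebfit(nodes, f(nodes), deg=4)         # interpolation data
approx = cheb.chebval(0.3, coeffs)                    # evaluate off the grid
```

For smooth functions the approximation error decays rapidly in the polynomial degree, which is why a modest sparse grid can represent decision rules accurately.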
input: (L, H_0, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" -> 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H

Algorithm 1: nestInterp
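A minimal Python analogue of this outer loop (the Mathematica prototype uses NestWhile; the update map, test points, and tolerance below are stand-ins):

```python
import numpy as np

def nest_while(update, h0, test_points, tol=1e-10, max_iter=500):
    """Iterate h -> update(h) until values at test_points change by < tol."""
    h = h0
    vals = np.array([h(x) for x in test_points])
    for _ in range(max_iter):
        h = update(h)
        new_vals = np.array([h(x) for x in test_points])
        if np.max(np.abs(new_vals - vals)) < tol:   # notConvergedAt analogue
            return h
        vals = new_vals
    raise RuntimeError("decision rule iteration did not converge")

# Stand-in for doInterp: a damped update whose fixed point is h*(x) = x / 2.
update = lambda h: (lambda x, h=h: 0.5 * (h(x) + x / 2.0))
rule = nest_while(update, lambda x: 0.0 * x, np.linspace(0.0, 1.0, 5))
```

The convergence test monitors the rule's values at a fixed set of test points rather than the rule's coefficients, mirroring the pseudo-code.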
input: (L, H_i, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ - 1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5 end
6 {{X_{t+1}, Z_{t+1}}, ..., {X_{t+κ-1}, Z_{t+κ-1}}} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, ..., Z_{t+κ-1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [ B x + (I - F)^{-1} φ ψ_c + ψ_ε ε ; 0_{N_x} ]
3 G(x, ε) = [ B x + (I - F)^{-1} φ ψ_c ; 0_{N_x} ]
4 H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
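A sketch of the initial rule in Algorithm 6 with placeholder matrices (the values are illustrative, not a calibrated model):

```python
import numpy as np

# The linear reference model supplies the starting guess: the x_t components
# follow the linear decision rule, and the z_t components are identically zero.
Nx = 2
B       = np.array([[0.9, 0.0], [0.1, 0.8]])   # assumed
F       = np.array([[0.5, 0.0], [0.0, 0.4]])   # assumed
phi     = np.eye(Nx)
psi_c   = np.array([0.1, 0.2])
psi_eps = np.eye(Nx)

const = np.linalg.solve(np.eye(Nx) - F, phi @ psi_c)   # (I-F)^{-1} phi psi_c

def gamma0(x, eps):
    """Initial proposed rule: [B x + (I-F)^{-1} phi psi_c + psi_eps eps; 0]."""
    return np.concatenate([B @ x + const + psi_eps @ eps, np.zeros(Nx)])

def G0(x):
    """Its conditional expectation: the same rule with the shock set to zero."""
    return np.concatenate([B @ x + const, np.zeros(Nx)])
```

Starting from this pair lets the nonlinear solver improve on an already-consistent linear approximation rather than an arbitrary guess.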
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ-1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc
input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t

Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Numerically sums the F contribution for the series:
2 S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
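The finite sum in Algorithm 10 can be sketched as follows; F, φ, and the Z path below are illustrative values, not model output:

```python
import numpy as np

def f_sum_c(F, phi, Z_path):
    """Accumulate S = sum_{nu=1}^{k-1} F^{nu-1} phi Z_{t+nu}."""
    S = np.zeros(F.shape[0])
    F_pow = np.eye(F.shape[0])        # F^0
    for Z in Z_path:                  # Z_{t+1}, Z_{t+2}, ...
        S = S + F_pow @ (phi @ Z)
        F_pow = F_pow @ F             # advance to the next power of F
    return S

F   = np.array([[0.5, 0.1], [0.0, 0.4]])
phi = np.eye(2)
S = f_sum_c(F, phi, [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
```

Accumulating the running power of F avoids recomputing matrix powers from scratch at each term of the series.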
input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs
input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing the interpolation data for the decision rule and the decision rule conditional expectation.
output: interpData

Algorithm 12: genInterpData
input: (C(x_{t-1}, ε_t))
1 Evaluates the model equations, obeying the pre and post conditions.
output: x_t or "failed"

Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
Table 7: RBC Known Solution, Values at Evaluation Points, d=(2,2,2). [Table body not reproduced: 30 rows of (k, θ, ε) values; the numeric columns were garbled in extraction.]
Table 8: RBC Known Solution, Actual Errors, d=(2,2,2). [Table body not reproduced: 30 rows of (ln|e_c|, ln|e_k|, ln|e_θ|) values; the numeric columns were garbled in extraction.]
Table 9: RBC Known Solution, Error Approximations, d=(2,2,2). [Table body not reproduced: 30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values; the numeric columns were garbled in extraction.]
Table 10: RBC Known Solution, Impact of Series Length on Approximation Error. [Table body not reproduced: rows for K = NonSeries, 0-10, 15, 20, 25, 30, 35, 40 with columns d=(1,1,1), d=(2,2,2), d=(3,3,3); the numeric values were garbled in extraction.]
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again the formula provides accurate estimates of the error, component by component.
Table 11: RBC Approximated Solution, Model Evaluation Points, d=(1,1,1). [Table body not reproduced: 30 rows of (k, θ, ε) values; the numeric columns were garbled in extraction.]
Table 12: RBC Approximated Solution, Values at Evaluation Points, d=(1,1,1). [Table body not reproduced: 30 rows of (c, k, θ) values; the numeric columns were garbled in extraction.]
Table 13: RBC Approximated Solution, Error Approximations, d=(1,1,1). [Table body not reproduced: 30 rows of (ln|ê_c|, ln|ê_k|, ln|ê_θ|) values; the numeric columns were garbled in extraction.]
Table 14: RBC Approximated Solution, Model Evaluation Points, d=(2,2,2). [Table body not reproduced: 30 rows of (k, θ, ε) values; the numeric columns were garbled in extraction.]
Table 15: RBC Approximated Solution, Values at Evaluation Points, d=(2,2,2). [Table body not reproduced: 30 rows of values; the numeric columns were garbled in extraction.]
Columns: $\ln(|\hat{e}_{c,i}|)$, $\ln(|\hat{e}_{k,i}|)$, $\ln(|\hat{e}_{\theta,i}|)$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 16: RBC Approximated Solution, Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \ge \upsilon I_{ss}$$

$$\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$

$$c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$

$$f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta}-1}{1-\eta}$$
$$\mathcal{L} = u(c_t) + E_t\Bigg[\sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) + \sum_{t=0}^{\infty} \Big(\delta^{t}\lambda_t\big(c_t + k_t - ((1-d)k_{t-1} + \theta_t f(k_{t-1}))\big) + \delta^{t}\mu_t\big((k_t - (1-d)k_{t-1}) - \upsilon I_{ss}\big)\Big)\Bigg]$$

so that we can impose

$$I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}$$
The first order conditions become

$$\frac{1}{c_t^{\eta}} + \lambda_t$$

$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1}$$
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
$$\text{if } \big(\mu > 0 \text{ and } (k_t - (1-d)k_{t-1} - \upsilon I_{ss}) = 0\big):$$

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}\\
&\lambda_t + \mu_t - \big(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\big)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t(k_t - (1-d)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

$$\text{if } \big(\mu = 0 \text{ and } (k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \ge 0\big):$$

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}\\
&\lambda_t + \mu_t - \big(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\big)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t(k_t - (1-d)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$
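The two branches implement the Kuhn–Tucker complementarity condition: at any point either the multiplier is positive and the investment constraint binds, or the multiplier is zero and the constraint is slack. A minimal sketch of that gate logic and residual stacking (the function name, variable packing, and parameter values are illustrative assumptions, not the paper's Mathematica prototype):

```python
import numpy as np

def obc_branch_residuals(c, I, k, lam, N, theta, mu,
                         k_lag, theta_lag, lam1, N1, mu1, eps,
                         alpha=0.36, delta=0.95, d=0.025, rho=0.95,
                         upsilon=0.975, I_ss=0.2):
    """Stack the OBC model residuals and report which complementarity
    branch applies: "binding" (mu > 0, slack == 0), "interior"
    (mu == 0, slack >= 0), or "infeasible" otherwise."""
    slack = k - (1 - d) * k_lag - upsilon * I_ss
    res = np.array([
        lam - 1.0 / c,                                     # marginal utility
        c + I - theta * k_lag**alpha,                      # resource constraint
        N - lam * theta,                                   # auxiliary variable
        theta - np.exp(rho * np.log(theta_lag) + eps),     # technology shock
        lam + mu - (alpha * k**(alpha - 1) * delta * N1    # Euler equation
                    + lam1 * delta * (1 - d)
                    + delta * (1 - d) * mu1),
        I - (k - (1 - d) * k_lag),                         # investment identity
        mu * slack,                                        # complementarity
    ])
    if mu > 0 and abs(slack) < 1e-10:
        return res, "binding"
    if mu == 0 and slack >= 0:
        return res, "interior"
    return res, "infeasible"
```

With a slack constraint the last residual is exactly zero because `mu == 0`; with a binding constraint it vanishes because the slack does.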
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Columns: $k_i$, $\theta_i$, $\varepsilon_i$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 17: Occasionally Binding Constraints Model, Evaluation Points, d=(1,1,1)
Columns: $c_i$, $I_i$, $k_i$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(1,1,1)
Columns: $\ln(|\hat{e}_{c,i}|)$, $\ln(|\hat{e}_{k,i}|)$, $\ln(|\hat{e}_{\theta,i}|)$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 19: Occasionally Binding Constraints Model, Error Approximations, d=(1,1,1)
Columns: $k_i$, $\theta_i$, $\varepsilon_i$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 20: Occasionally Binding Constraints Model, Evaluation Points, d=(2,2,2)
Columns: $c_i$, $I_i$, $k_i$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(2,2,2)
Columns: $\ln(|\hat{e}_{c,i}|)$, $\ln(|\hat{e}_{k,i}|)$, $\ln(|\hat{e}_{\theta,i}|)$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 22: Occasionally Binding Constraints Model, Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.
Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.
Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc, February 1997. Available at https://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38,
2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.
Christian Haefke. Projections: parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbc/d69.html.
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm And The Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution $x_t^{\ast} = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right]$$

then with

$$E_t x_{t+1}^{\ast} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1},\, x_t^{\ast},\, E_t x_{t+1}^{\ast},\, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding $z_t^{\ast}(x_{t-1}, \varepsilon)$:

$$\left\{x_t^{\ast}(x_{t-1},\varepsilon_t) \in \mathbb{R}^{L},\; \|x_t^{\ast}(x_{t-1},\varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z_t^{\ast}(x_{t-1},\varepsilon_t) \equiv H_{-1}\,x_{t-1}^{\ast}(x_{t-1},\varepsilon_t) + H_0\,x_t^{\ast}(x_{t-1},\varepsilon_t) + H_1\,x_{t+1}^{\ast}(x_{t-1},\varepsilon_t)$$
Consequently the exact solution has a representation given by

$$x_t^{\ast}(x_{t-1},\varepsilon) = Bx_{t-1} + \phi\psi_{\varepsilon}\varepsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^{\ast}(x_{t-1},\varepsilon)$$

and

$$E\left[x_{t+1}^{\ast}(x_{t-1},\varepsilon)\right] = Bx_t^{\ast} + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z_{t+1+\nu}^{\ast}(x_{t-1},\varepsilon)\right] + (I-F)^{-1}\phi\psi_c$$

with

$$M(x_{t-1},\, x_t^{\ast},\, E_t x_{t+1}^{\ast},\, \varepsilon_t) = 0 \quad \forall (x_{t-1},\varepsilon_t)$$
Now consider a proposed solution for the model, $x_t^p = g^p(x_{t-1},\varepsilon_t)$; define $G^p(x) \equiv E\left[g^p(x,\varepsilon)\right]$ so that

$$E_t x_{t+1}^p = G^p(g^p(x_{t-1},\varepsilon_t)), \qquad e_t^p(x_{t-1},\varepsilon) \equiv M(x_{t-1},\, x_t^p,\, E_t x_{t+1}^p,\, \varepsilon_t)$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and L, construct the family of trajectories and corresponding $z_t^p(x_{t-1},\varepsilon)$:¹²

$$\left\{x_t^p(x_{t-1},\varepsilon_t) \in \mathbb{R}^{L},\; \|x_t^p(x_{t-1},\varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z_t^p(x_{t-1},\varepsilon_t) \equiv H_{-1}\,x_{t-1}^p(x_{t-1},\varepsilon_t) + H_0\,x_t^p(x_{t-1},\varepsilon_t) + H_1\,x_{t+1}^p(x_{t-1},\varepsilon_t)$$
The proposed solution has a representation given by

$$x_t^p(x_{t-1},\varepsilon) = Bx_{t-1} + \phi\psi_{\varepsilon}\varepsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^p(x_{t-1},\varepsilon)$$

and

$$E\left[x_{t+1}^p(x_{t-1},\varepsilon)\right] = Bx_t^p + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z_{t+1+\nu}^p(x_{t-1},\varepsilon)\right] + (I-F)^{-1}\phi\psi_c$$

with

$$e_t^p(x_{t-1},\varepsilon) \equiv M(x_{t-1},\, x_t^p,\, E_t x_{t+1}^p,\, \varepsilon_t)$$

Subtracting the two representations,

$$x_t^{\ast}(x_{t-1},\varepsilon) - x_t^p(x_{t-1},\varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\big(z_{t+\nu}^{\ast}(x_{t-1},\varepsilon) - z_{t+\nu}^p(x_{t-1},\varepsilon)\big)$$

Defining

$$\Delta z_{t+\nu}^p(x_{t-1},\varepsilon_t) \equiv z_{t+\nu}^{\ast}(x_{t-1},\varepsilon) - z_{t+\nu}^p(x_{t-1},\varepsilon)$$

gives

$$x_t^{\ast}(x_{t-1},\varepsilon) - x_t^p(x_{t-1},\varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\,\Delta z_{t+\nu}^p(x_{t-1},\varepsilon_t)$$

$$\|x_t^{\ast}(x_{t-1},\varepsilon) - x_t^p(x_{t-1},\varepsilon)\|_{\infty} \le \sum_{\nu=0}^{\infty} \|F^{\nu}\phi\|\, \|\Delta z_{t+\nu}^p(x_{t-1},\varepsilon_t)\|_{\infty}$$
By bounding the largest deviation in the paths for the $\Delta z_t^p$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability weighted averages of the $\Delta z_t^p$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z_{t+k}^p$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_{-},\varepsilon}\; \big\|\phi\, M\big(x_{-},\, g^p(x_{-},\varepsilon),\, G^p(g^p(x_{-},\varepsilon)),\, \varepsilon\big)\big\|_2$$

$$\|x_t^{\ast}(x_{t-1},\varepsilon) - x_t^p(x_{t-1},\varepsilon)\|_{\infty} \le \max_{x_{-},\varepsilon}\; \left\|(I-F)^{-1}\phi\, M\big(x_{-},\, g^p(x_{-},\varepsilon),\, G^p(g^p(x_{-},\varepsilon)),\, \varepsilon\big)\right\|_2$$
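The second inequality collapses the infinite sum using the geometric-series identity $\sum_{\nu=0}^{\infty}F^{\nu}\phi = (I-F)^{-1}\phi$, valid when all eigenvalues of $F$ lie inside the unit circle. A small numeric check of that identity (the matrices here are illustrative, not taken from any model):

```python
import numpy as np

# Illustrative stable F (spectral radius < 1) and loading matrix phi
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
phi = np.array([[1.0, 0.2],
                [0.0, 1.0]])
dz_bound = np.array([1e-3, 2e-3])   # uniform bound on |Delta z| per component

# Closed form: (I - F)^{-1} phi dz_bound
closed = np.linalg.solve(np.eye(2) - F, phi @ dz_bound)

# Truncated sum sum_{nu=0}^{N} F^nu phi dz_bound converges to the closed form
acc = np.zeros(2)
Fp = np.eye(2)
for _ in range(200):
    acc += Fp @ (phi @ dz_bound)
    Fp = Fp @ F

assert np.allclose(acc, closed)
```

Using `np.linalg.solve` avoids forming the explicit inverse of $(I-F)$, which is both cheaper and numerically safer.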
$$z_t^{\ast} = H_{-}x_{t-1}^{\ast} + H_0 x_t^{\ast} + H_{+}x_{t+1}^{\ast}$$

$$z_t^{p} = H_{-}x_{t-1}^{p} + H_0 x_t^{p} + H_{+}x_{t+1}^{p}$$

$$0 = M(x_{t-1},\, x_t^{\ast},\, x_{t+1}^{\ast},\, \varepsilon_t)$$

$$e_t^p(x_{t-1},\varepsilon) = M(x_{t-1}^p,\, x_t^p,\, x_{t+1}^p,\, \varepsilon_t)$$
$$\max_{x_{t-1},\varepsilon_t}(\|\phi\Delta z_t\|_2)^2 = \max_{x_{t-1},\varepsilon_t}\big(\|\phi\big(H_0(x_t^{\ast}-x_t^{p}) + H_{+}(x_{t+1}^{\ast}-x_{t+1}^{p})\big)\|_2\big)^2$$

with

$$M(x_{t-1},\, x_t^{\ast},\, x_{t+1}^{\ast},\, \varepsilon_t) = 0$$

$$\Delta e_t^p(x_{t-1},\varepsilon) \approx \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}(x_t^{\ast}-x_t^{p}) + \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}}(x_{t+1}^{\ast}-x_{t+1}^{p})$$
when $\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}$ is non-singular we can write

$$(x_t^{\ast}-x_t^{p}) \approx \left(\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e_t^p(x_{t-1},\varepsilon) - \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}}(x_{t+1}^{\ast}-x_{t+1}^{p})\right)$$
collecting results, and writing $A \equiv \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}$ for compactness,

$$(\|\phi\Delta z_t\|_2)^2 \approx \left\|\phi\Big(H_0 A^{-1}\big(\Delta e_t^p(x_{t-1},\varepsilon) - \tfrac{\partial M}{\partial x_{t+1}}(x_{t+1}^{\ast}-x_{t+1}^{p})\big) + H_{+}(x_{t+1}^{\ast}-x_{t+1}^{p})\Big)\right\|_2^2$$

$$= \left\|\phi\Big(H_0 A^{-1}\Delta e_t^p(x_{t-1},\varepsilon) - \big(H_0 A^{-1}\tfrac{\partial M}{\partial x_{t+1}} - H_{+}\big)(x_{t+1}^{\ast}-x_{t+1}^{p})\Big)\right\|_2^2$$
approximating the derivatives using the H matrices,

$$\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}} \approx H_{+}$$

so that the coefficient on $(x_{t+1}^{\ast}-x_{t+1}^{p})$ vanishes, produces

$$\max_{x_{t-1},\varepsilon_t}\big(\|\phi\,\Delta e_t^p(x_{t-1},\varepsilon)\|_2\big)^2$$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $x_{t-1}$, $\varepsilon_t$, $x_t$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\big(x_{t+1}(x_t)\,\big|\, s_{t+1}=j\big)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\big(x_{t+2}(x_{t+1})\,\big|\, s_{t+2}=j\big)$$

If we know the current state, we can use the known conditional expectation function for the state:

$$E_t\big(x_{t+1}(x_t)\,\big|\, s_t = 1\big), \;\dots,\; E_t\big(x_{t+1}(x_t)\,\big|\, s_t = N\big)$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t\big(x_{t+2}(x_t)\,\big|\, s_t=1\big)\\ \vdots\\ E_t\big(x_{t+2}(x_t)\,\big|\, s_t=N\big) \end{bmatrix} = (P\otimes I) \begin{bmatrix} E_t\big(x_{t+2}(x_{t+1})\,\big|\, s_{t+1}=1\big)\\ \vdots\\ E_t\big(x_{t+2}(x_{t+1})\,\big|\, s_{t+1}=N\big) \end{bmatrix}$$
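With constant transition probabilities, the $(P \otimes I)$ step is just a row-mixing of the stacked regime-conditional expectations by the rows of $P$. A small sketch with hypothetical numbers for $N = 2$ regimes and a length-2 state vector:

```python
import numpy as np

# Transition matrix: p_ij = Prob(s_{t+1} = j | s_t = i)  (illustrative values)
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j): one row per regime j (hypothetical)
cond = np.array([[1.0, 2.0],    # regime 1
                 [3.0, 4.0]])   # regime 2

# Row-mixing: row i of `mixed` is E_t(x_{t+2}(x_t) | s_t = i)
mixed = P @ cond

# Equivalently, apply the explicit Kronecker product to the stacked vector
stacked = np.kron(P, np.eye(2)) @ cond.reshape(-1)
assert np.allclose(mixed.reshape(-1), stacked)
```

The `P @ cond` form avoids materializing the $(P \otimes I)$ matrix, which matters when the state vector is large.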
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates are different, $d_0 > d_1$:

$$\text{Prob}(s_t = j \,|\, s_{t-1} = i) = p_{ij}$$
$$\text{if } \big(s_t = 0 \text{ and } \mu > 0 \text{ and } (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) = 0\big):$$

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - (1-d_0)k_{t-1} - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}\\
&\lambda_t + \mu_t - \big(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \delta(1-d_0)\mu_{t+1}\big)\\
&I_t - (k_t - (1-d_0)k_{t-1})\\
&\mu_t(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

$$\text{if } \big(s_t = 0 \text{ and } \mu = 0 \text{ and } (k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \ge 0\big):$$

the same equations with the constraint slack.

$$\text{if } \big(s_t = 1 \text{ and } \mu > 0 \text{ and } (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) = 0\big):$$

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - (1-d_1)k_{t-1} - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}\\
&\lambda_t + \mu_t - \big(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \delta(1-d_1)\mu_{t+1}\big)\\
&I_t - (k_t - (1-d_1)k_{t-1})\\
&\mu_t(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

$$\text{if } \big(s_t = 1 \text{ and } \mu = 0 \text{ and } (k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \ge 0\big):$$

the same equations with the constraint slack.
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic-grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points; Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system ℝ^{3N_x+N_ε+N_x} → ℝ^{N_x}
input: (L, H_0, κ, {C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10^{−10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1: nestInterp
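Algorithm 1's NestWhile call is a fixed-point iteration on the decision rule: re-interpolate repeatedly and stop when the rule's values at the test points T settle. A sketch of that outer loop in Python (the paper's prototype uses Mathematica's NestWhile; `do_interp` here is a stand-in contraction and the tolerance mirrors the "normConvTol" default):

```python
import numpy as np

def nest_while(do_interp, h0, test_pts, tol=1e-10, max_iter=200):
    """Iterate h <- do_interp(h) until values at test points converge."""
    h = h0
    prev = np.array([h(x) for x in test_pts])
    for _ in range(max_iter):
        h = do_interp(h)
        cur = np.array([h(x) for x in test_pts])
        if np.max(np.abs(cur - prev)) < tol:
            return h
        prev = cur
    raise RuntimeError("no convergence within max_iter")

# Toy contraction whose fixed point is the constant rule h*(x) = 2
step = lambda h: (lambda x, h=h: 0.5 * h(x) + 1.0)
h = nest_while(step, lambda x: 0.0, test_pts=[0.0, 1.0])
```

Because the step is a contraction with modulus 0.5, the iteration converges geometrically regardless of the starting rule.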
input: (L, H_i, κ, {C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) and one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one of the model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x_t^p, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot({x_t^p, z_t}, {M(X), x_t − x_t^p})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x_t^p, L, G, κ, M)
1: Symbolically compute the κ−1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x_t^p
3: for i = 1; i < κ−1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5: end
6: {{X_{t+1}, Z_{t+1}}, …, {X_{t+κ−1}, Z_{t+κ−1}}} = genZsForFindRoot(L, x_t^p, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [Bx + (I − F)^{−1}φψ_c + ψ_ε ε_t; 0_{N_x}]
3: G(x) = [Bx + (I − F)^{−1}φψ_c; 0_{N_x}]
4: H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3: xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, x_{t−1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F·fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M.
2: x_{t+1} = Bx_t + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sums the F contribution for the series:
2: S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
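Algorithm 10 accumulates the finite-horizon path contribution $S = \sum_{\nu} F^{\nu-1}\phi Z_{t+\nu}$ by carrying a running power of $F$ rather than recomputing $F^{\nu-1}$ each term. A direct sketch (shapes and values are illustrative):

```python
import numpy as np

def f_sum_c(F, phi, Z_path):
    """S = sum_{nu=1}^{kappa-1} F^{nu-1} phi Z_{t+nu} for a finite Z list."""
    S = np.zeros(F.shape[0])
    Fpow = np.eye(F.shape[0])   # F^0 for the first term
    for Z in Z_path:            # nu = 1, 2, ..., kappa-1
        S += Fpow @ (phi @ Z)
        Fpow = Fpow @ F         # advance to the next power of F
    return S

# Two-term check with F = 0.5 I and phi = I: S = Z1 + 0.5 Z2
S = f_sum_c(0.5 * np.eye(2), np.eye(2),
            [np.array([1.0, 1.0]), np.array([2.0, 2.0])])
```

An empty `Z_path` returns the zero vector, matching the κ = 1 case in Algorithm 5.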
input: ({C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing the interpolation data used by makeInterpFuncs.
output: {γ, G}
Algorithm 12: genInterpData
input: (C(x_{t−1}, ε_t))
1: Evaluate the model equations obeying pre and post conditions.
output: x_t or "failed"
Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
Columns: $\ln(|e_{c,i}|)$, $\ln(|e_{k,i}|)$, $\ln(|e_{\theta,i}|)$ for $i = 1, \dots, 30$.

[Thirty rows of numeric values lost in extraction.]

Table 8: RBC Known Solution, Actual Errors, d=(2,2,2)
34
Table 9: RBC Known Solution, Error Approximations, d = (2,2,2). (Columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for evaluation points i = 1, …, 30.)
K | d = (1,1,1) | d = (2,2,2) | d = (3,3,3)
NonSeries | −6.51367 | −12.2771 | −16.8947
0 | −6.0793 | −6.26833 | −6.29843
1 | −6.00805 | −6.24202 | −6.49835
2 | −6.52219 | −7.43876 | −7.04785
3 | −6.57578 | −8.03864 | −8.32589
4 | −6.62831 | −9.1522 | −9.49314
5 | −6.77305 | −10.5516 | −10.4247
6 | −6.38499 | −11.6399 | −11.455
7 | −6.63849 | −11.3627 | −13.0213
8 | −6.38735 | −11.7288 | −13.9976
9 | −6.46759 | −11.9826 | −15.3372
10 | −6.45966 | −12.1345 | −15.4157
10 | −6.77776 | −11.5025 | −15.8211
15 | −6.67778 | −12.4593 | −15.8899
20 | −6.40802 | −11.7296 | −17.1652
25 | −6.53265 | −12.1441 | −17.1518
30 | −6.81949 | −11.6097 | −16.3748
35 | −6.33508 | −12.1446 | −16.6963
40 | −6.52442 | −11.4042 | −15.6302

Table 10: RBC Known Solution, Impact of Series Length on Approximation Error.
5.5 Approximating an Unknown Solution η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 11: RBC Approximated Solution, Model Evaluation Points, d = (1,1,1). (Columns k_i, θ_i, ε_i for evaluation points i = 1, …, 30.)
Table 12: RBC Approximated Solution, Values at Evaluation Points, d = (1,1,1). (Columns c_i, k_i, θ_i for evaluation points i = 1, …, 30.)
Table 13: RBC Approximated Solution, Error Approximations, d = (1,1,1). (Columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for evaluation points i = 1, …, 30.)
Table 14: RBC Approximated Solution, Model Evaluation Points, d = (2,2,2). (Columns k_i, θ_i, ε_i for evaluation points i = 1, …, 30.)
Table 15: RBC Approximated Solution, Values at Evaluation Points, d = (2,2,2). (Columns c_i, k_i, θ_i for evaluation points i = 1, …, 30.)
Table 16: RBC Approximated Solution, Error Approximations, d = (2,2,2). (Columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for evaluation points i = 1, …, 30.)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches:11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t \ge \upsilon I_{ss}

\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1-d) k_{t-1} + \theta_t f(k_{t-1})

\theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}

f(k_t) = k_t^{\alpha}

u(c) = \frac{c^{1-\eta} - 1}{1-\eta}

The Lagrangian is

\mathcal{L} = u(c_t) + E_t \left[ \sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^{t} \lambda_t \left( c_t + k_t - \left( (1-d) k_{t-1} + \theta_t f(k_{t-1}) \right) \right) + \delta^{t} \mu_t \left( (k_t - (1-d) k_{t-1}) - \upsilon I_{ss} \right) \right]

so that we can impose

I_t = (k_t - (1-d) k_{t-1}) \ge \upsilon I_{ss}.

The first order conditions become

\lambda_t - \frac{1}{c_t^{\eta}} = 0

\lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta (1-d) \lambda_{t+1} + \mu_t - \delta (1-d) \mu_{t+1} = 0
For example, the Euler equations for the neoclassical growth model can be written as:

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If \mu_t > 0 and (k_t - (1-d) k_{t-1} - \upsilon I_{ss}) = 0:

\lambda_t - \frac{1}{c_t}

c_t + I_t - \theta_t k_{t-1}^{\alpha}

N_t - \lambda_t \theta_t

\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t}

\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d) \lambda_{t+1} + \delta (1-d) \mu_{t+1} \right)

I_t - (k_t - (1-d) k_{t-1})

\mu_t (k_t - (1-d) k_{t-1} - \upsilon I_{ss})

If \mu_t = 0 and (k_t - (1-d) k_{t-1} - \upsilon I_{ss}) \ge 0:

\lambda_t - \frac{1}{c_t}

c_t + I_t - \theta_t k_{t-1}^{\alpha}

N_t - \lambda_t \theta_t

\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t}

\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d) \lambda_{t+1} + \delta (1-d) \mu_{t+1} \right)

I_t - (k_t - (1-d) k_{t-1})

\mu_t (k_t - (1-d) k_{t-1} - \upsilon I_{ss})
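The two guarded systems above implement a complementarity condition: either the multiplier is positive and the investment floor binds with equality, or the multiplier is zero and the floor is slack. The one-dimensional toy problem below (a hedged sketch; the variables and residual are hypothetical, not the model's system) illustrates the same branch logic:

```python
def solve_with_floor(a, floor):
    """Toy complementarity problem mirroring the two guarded branches:
    x - a - mu = 0,  x >= floor,  mu >= 0,  mu * (x - floor) = 0.
    """
    x = a              # slack branch: try mu = 0
    if x >= floor:     # floor constraint satisfied, slack branch valid
        return x, 0.0
    x = floor          # binding branch: constraint holds with equality
    mu = floor - a     # multiplier recovered from the first condition, mu > 0
    return x, mu
```

In the full model, each branch is a square system solved for (c_t, k_t, θ_t, λ_t, μ_t, I_t) rather than a single unknown, but the gate structure (try slack, fall back to binding) is the same.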
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Table 17: Occasionally Binding Constraints, Model Evaluation Points, d = (1,1,1). (Columns k_i, θ_i, ε_i for evaluation points i = 1, …, 30.)
Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d = (1,1,1). (Columns c_i, I_i, k_i for evaluation points i = 1, …, 30.)
Table 19: Occasionally Binding Constraints, Error Approximations, d = (1,1,1). (Columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for evaluation points i = 1, …, 30.)
Table 20: Occasionally Binding Constraints, Model Evaluation Points, d = (2,2,2). (Columns k_i, θ_i, ε_i for evaluation points i = 1, …, 30.)
Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d = (2,2,2). (Columns c_i, I_i, k_i for evaluation points i = 1, …, 30.)
Table 22: Occasionally Binding Constraints, Error Approximations, d = (2,2,2). (Columns ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|) for evaluation points i = 1, …, 30.)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc, February 1997. URL http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x^*_t = g(x_{t-1}, \varepsilon_t), define the conditional expectations function

G(x) \equiv E[g(x, \varepsilon)]

Then, with

E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t)),

we have

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z^*_t(x_{t-1}, \varepsilon):

\{ x^*_t(x_{t-1}, \varepsilon_t) \in R^L : \| x^*_t(x_{t-1}, \varepsilon_t) \|_2 \le \bar{X} \ \forall t > 0 \}

z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t)

Consequently, the exact solution has a representation given by

x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^*_{t+\nu}(x_{t-1}, \varepsilon)
and

E[x^*_{t+1}(x_{t-1}, \varepsilon)] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E[z^*_{t+1+\nu}(x_{t-1}, \varepsilon)] + (I - F)^{-1} \phi \psi_c

with

M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)
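The conditional expectations function G(x) ≡ E[g(x, ε)] that appears throughout this construction can be computed numerically by quadrature. The sketch below (Python, with a hypothetical scalar decision rule g standing in for a model solution; it is not the paper's code) builds G from Gauss–Hermite nodes for a Gaussian shock:

```python
import numpy as np

def make_conditional_expectation(g, sigma, n_nodes=7):
    """Build G(x) = E[g(x, eps)] for eps ~ N(0, sigma^2) via Gauss-Hermite quadrature."""
    # probabilists' Hermite nodes/weights; weights sum to sqrt(2*pi)
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_nodes)
    weights = weights / np.sqrt(2.0 * np.pi)  # normalize against the N(0,1) density
    def G(x):
        return sum(w * g(x, sigma * z) for z, w in zip(nodes, weights))
    return G

# Hypothetical log-linear rule: x' = x**0.9 * exp(eps)
g = lambda x, eps: x**0.9 * np.exp(eps)
G = make_conditional_expectation(g, sigma=0.01)
# E_t x_{t+1} = G(g(x, eps)) composes the rule with its own expectation
```

Here G(1.0) should equal E[exp(0.01 z)] = exp(0.00005) up to quadrature error, and iterating G on g reproduces the composition E_t x_{t+1} = G(g(x_{t-1}, ε_t)) used above.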
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, \varepsilon_t); define G^p(x) \equiv E[g^p(x, \varepsilon)], so that

E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t))

e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z^p_t(x_{t-1}, \varepsilon):12

\{ x^p_t(x_{t-1}, \varepsilon_t) \in R^L : \| x^p_t(x_{t-1}, \varepsilon_t) \|_2 \le \bar{X} \ \forall t > 0 \}

z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)

The proposed solution has a representation given by

x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon)

and

E[x^p_{t+1}(x_{t-1}, \varepsilon)] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c
with

e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)

Then

x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right)

Defining

\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)

gives

x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)

\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \|_\infty \le \sum_{\nu=0}^{\infty} \| F^{\nu} \phi \| \, \| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \|_\infty
By bounding the largest deviation in the paths for the \Delta z^p_t, we can bound the largest difference in x^*_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the \Delta z^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for \Delta z^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
\| \Delta z_t \| \le \max_{x_{-}, \varepsilon} \left\| \phi\, M\!\left( x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon \right) \right\|_2

\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi\, M\!\left( x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon \right) \right\|_2
z^*_t = H_{-1} x_{t-1} + H_0 x^*_t + H_1 x^*_{t+1}

z^p_t = H_{-1} x^p_{t-1} + H_0 x^p_t + H_1 x^p_{t+1}

0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t)

e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)

\max_{x_{t-1}, \varepsilon_t} \left( \| \phi \Delta z_t \|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^p_t - x^*_t) + H_1 (x^p_{t+1} - x^*_{t+1}) \right) \right\|_2 \right)^2

with

M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0

\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})

When \partial M / \partial x_t is non-singular, we can write

(x^*_t - x^p_t) \approx \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right)
Collecting results,

\left( \| \phi \Delta z_t \|_2 \right)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) + H_1 (x^p_{t+1} - x^*_{t+1}) \right) \right\|_2 \right)^2

= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_1 \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2

Approximating the derivatives using the H matrices,

\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_1,

the term multiplying (x^*_{t+1} - x^p_{t+1}) vanishes, producing

\max_{x_{t-1}, \varepsilon_t} \left( \| \phi \Delta z_t \|_2 \right)^2 \approx \max_{x_{t-1}, \varepsilon_t} \left( \| \phi\, \Delta e^p_t(x_{t-1}, \varepsilon) \|_2 \right)^2
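The conservative bound max ‖(I − F)^{-1} φ M(·)‖₂ derived above is straightforward to evaluate once the linear reference model matrices are in hand. The sketch below uses small illustrative matrices (F, φ, and the residual norms are made up for the example, not taken from the paper's RBC calibration):

```python
import numpy as np

def decision_rule_error_bound(F, phi, resid_norms):
    """Conservative decision-rule error bound:
    ||x*_t - x^p_t||_inf <= ||(I - F)^{-1} phi|| * max residual norm,
    where resid_norms holds |M(...)| evaluated at a set of test points.
    Requires the spectral radius of F to be below one for the series to converge.
    """
    assert np.max(np.abs(np.linalg.eigvals(F))) < 1.0, "need rho(F) < 1"
    amplification = np.linalg.inv(np.eye(F.shape[0]) - F) @ phi
    return np.linalg.norm(amplification, ord=np.inf) * np.max(resid_norms)

# Illustrative matrices and residuals (hypothetical, not the RBC model's)
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
phi = np.eye(2)
bound = decision_rule_error_bound(F, phi, resid_norms=[1e-6, 5e-7])
```

In practice the residual norms would come from evaluating M at the Niederreiter test points, so the bound tightens as the proposed decision rule improves.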
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t-1}, \varepsilon_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, \varepsilon_t, x_t):

E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \mid s_{t+1} = j)

E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for that state:

E_t(x_{t+1}(x_t) \mid s_t = 1), \; \ldots, \; E_t(x_{t+1}(x_t) \mid s_t = N)

For the next state, we can compute expectations if the errors are independent:

\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}
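The (P ⊗ I) stacking above can be carried out directly with a Kronecker product; a small two-regime sketch with illustrative numbers (the transition matrix and expectation values are made up, not from the model):

```python
import numpy as np

# p_ij = Prob(s_{t+1} = j | s_t = i), constant transition probabilities
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n = 3  # number of state variables

# Per-regime conditional expectations E_t(x_{t+2} | s_{t+1} = j), stacked by regime
Ex_next = np.array([np.full(n, 1.0),    # regime 1 block
                    np.full(n, 2.0)])   # regime 2 block
stacked = Ex_next.reshape(-1)

# Apply (P kron I) to mix regime blocks: row i gives
# E_t(x_{t+2} | s_t = i) = sum_j p_ij E_t(x_{t+2} | s_{t+1} = j)
Ex_two_ahead = (np.kron(P, np.eye(n)) @ stacked).reshape(2, n)
```

Because P ⊗ I acts blockwise, each component of the state vector is mixed across regimes with the same weights p_ij, exactly as in the summation formula above.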
Consider two states s_t \in \{0, 1\}, where the depreciation rates are different, d_0 > d_1:

Prob(s_t = j \mid s_{t-1} = i) = p_{ij}
If s_t = 0, \mu_t > 0, and (k_t - (1-d_0) k_{t-1} - \upsilon I_{ss}) = 0:

\lambda_t - \frac{1}{c_t}

c_t + I_t - \theta_t k_{t-1}^{\alpha}

N_t - \lambda_t \theta_t

\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t}

\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d_0) \lambda_{t+1} + \delta (1-d_0) \mu_{t+1} \right)

I_t - (k_t - (1-d_0) k_{t-1})

\mu_t (k_t - (1-d_0) k_{t-1} - \upsilon I_{ss})

If s_t = 0, \mu_t = 0, and (k_t - (1-d_0) k_{t-1} - \upsilon I_{ss}) \ge 0:

\lambda_t - \frac{1}{c_t}

c_t + I_t - \theta_t k_{t-1}^{\alpha}

N_t - \lambda_t \theta_t

\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t}

\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d_0) \lambda_{t+1} + \delta (1-d_0) \mu_{t+1} \right)

I_t - (k_t - (1-d_0) k_{t-1})

\mu_t (k_t - (1-d_0) k_{t-1} - \upsilon I_{ss})

If s_t = 1, \mu_t > 0, and (k_t - (1-d_1) k_{t-1} - \upsilon I_{ss}) = 0:

\lambda_t - \frac{1}{c_t}

c_t + I_t - \theta_t k_{t-1}^{\alpha}

N_t - \lambda_t \theta_t

\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t}

\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d_1) \lambda_{t+1} + \delta (1-d_1) \mu_{t+1} \right)

I_t - (k_t - (1-d_1) k_{t-1})

\mu_t (k_t - (1-d_1) k_{t-1} - \upsilon I_{ss})

If s_t = 1, \mu_t = 0, and (k_t - (1-d_1) k_{t-1} - \upsilon I_{ss}) \ge 0:

\lambda_t - \frac{1}{c_t}

c_t + I_t - \theta_t k_{t-1}^{\alpha}

N_t - \lambda_t \theta_t

\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon_t}

\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1-d_1) \lambda_{t+1} + \delta (1-d_1) \mu_{t+1} \right)

I_t - (k_t - (1-d_1) k_{t-1})

\mu_t (k_t - (1-d_1) k_{t-1} - \upsilon I_{ss})
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic-grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})}, the decision rule conditional expectation components
H: {γ, G}, the collected decision rule information
{C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system R^{3N_x + N_ε} → R^{N_x}

input: (L, H_0, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1. Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much; the norm of the difference is controlled by the option "normConvTol" (default 10^{-10}).
2. NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1: nestInterp
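Algorithm 1's outer loop is a fixed-point iteration on the rule H, stopped once values at the test points T move by less than the norm tolerance. A minimal sketch of that NestWhile-style loop, using a scalar callable as a stand-in for the paper's decision-rule objects (an assumption for illustration):

```python
# Sketch of nestInterp's outer loop: iterate the interpolation map on a
# proposed rule until its values at the test points stop changing.

def nest_while(do_interp, h0, test_points, norm_conv_tol=1e-10, max_iter=500):
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        # sup-norm of the change at the test points plays the role of
        # notConvergedAt in Algorithm 1
        diff = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if diff <= norm_conv_tol:
            return h
    raise RuntimeError("nestInterp failed to converge")
```

For instance, with the contraction do_interp: h ↦ (x ↦ 0.5·h(x) + 1), the iteration converges to the constant rule x ↦ 2.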
input: (L, H_i, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Symbolically generate a collection of function triples and an outer model-solution selection function. Each triple returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) from one of the model equation systems; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2. theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3. H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Apply genFindRootWorker to each model component. Symbolically generate a collection of function triples and an outer model-solution selection function. Each triple returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) from one of the set of model equation systems; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2. theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1. Symbolically construct functions that return x_t for a given (x_{t-1}, ε_t).
2. Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x_t^p, G)
3. modEqnArgsFunc = genLilXkZkFunc(L, Z)
4. X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5. funcOfXtZt = FindRoot({x_t^p, z_t}, {M(X), x_t − x_t^p})
6. funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x_t^p, L, G, κ, M)
1. Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2. X_t = x_t^p
3. for i = 1; i < κ − 1; i++ do
4.   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5. end
6. Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ-1}, Z_{t+κ-1})}
output: {Z_{t+1}, …, Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot
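Algorithm 5 amounts to iterating the proposed conditional-expectation map forward κ − 1 times, collecting the Z components along the way. A sketch, reading the update as applying G to the previous conditional mean; G here is any callable x → (X_next, Z_next), and the scalar example is an illustrative assumption:

```python
# Sketch of genZsForFindRoot: stack up the kappa-1 future (X, Z) pairs
# that the series formula needs, by iterating the expectation map G.

def gen_zs_for_find_root(x_t, G, kappa):
    zs = []                 # empty list for kappa = 1
    X = x_t
    for _ in range(kappa - 1):
        X, Z = G(X)         # expectations conditional on the previous mean
        zs.append(Z)
    return zs
```

For example, with G = (x → (0.9·x, 0.1·x)) and κ = 3, the routine returns the two-element path [0.1·x_t, 0.09·x_t].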
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1. Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2. γ(x, ε) = [ Bx + (I − F)^{-1} φψ_c + φψ_ε ε ; 0_{N_x} ]
3. G(x) = [ Bx + (I − F)^{-1} φψ_c ; 0_{N_x} ]
4. H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1. Symbolically create a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3. xtVals = genXtOfXtm1(L, {x_{t-1}, ε_t, z_t}, fCon)
4. xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5. fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, {x_{t-1}, ε_t, z_t}, fCon)
1. Symbolically create a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_t = B x_{t-1} + (I − F)^{-1} φψ_c + φψ_ε ε_t + φψ_z z_t + F · fCon
output: x_t
Algorithm 8: genXtOfXtm1

input: (L, x_t, fCon)
1. Symbolically create a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2. x_{t+1} = B x_t + (I − F)^{-1} φψ_c + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1. Numerically sum the F contribution for the series.
2. S = Σ_{ν=1}^{κ-1} F^{ν-1} φ ψ_z Z_{t+ν}
output: S
Algorithm 10: fSumC
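Algorithms 8 and 10 can be sketched together in the scalar case: fSumC accumulates the anticipated-shock contribution S = Σ_ν F^{ν-1} φ ψ_z Z_{t+ν}, and x_t then follows the linear-reference-model update. The scalar parameters below are illustrative assumptions (the paper works with matrices, where 1/(1 − F) becomes (I − F)^{-1}):

```python
# Scalar sketch of fSumC (Algorithm 10) and genXtOfXtm1 (Algorithm 8).

def f_sum_c(F, phi, psi_z, zs):
    # zs = [Z_{t+1}, ..., Z_{t+kappa-1}]; F_pow tracks F^(nu-1)
    S, F_pow = 0.0, 1.0
    for Z in zs:
        S += F_pow * phi * psi_z * Z
        F_pow *= F
    return S

def x_t_of_xtm1(B, F, phi, psi_c, psi_eps, psi_z, x_lag, eps, z_t, f_con):
    # x_t = B x_{t-1} + (I-F)^{-1} phi psi_c + phi psi_eps eps
    #       + phi psi_z z_t + F * fCon   (Algorithm 8, scalar form)
    return (B * x_lag + phi * psi_c / (1.0 - F)
            + phi * psi_eps * eps + phi * psi_z * z_t + F * f_con)
```

The truncation parameter κ enters only through the length of the Z list handed to f_sum_c.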
input: ({C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2. interpData = genInterpData({C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3. {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1. Solve the model equations at the collocation points, producing the interpolation data.
output: interpData
Algorithm 12: genInterpData

input: (C, (x_{t-1}, ε_t))
1. Evaluate the model equations, obeying pre- and post-conditions.
output: x_t, or "failed"
Algorithm 13: evaluateTriple
input: (V, S)
1. Compute a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1. Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
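The interface of smolyakInterpolationPrep can be illustrated in one dimension: return the collocation points, the basis-evaluation (Ψ) matrix, the basis polynomials, and their integrals. A genuine anisotropic Smolyak construction is beyond a short sketch, so Chebyshev pieces on [−1, 1] stand in here (an assumption for illustration):

```python
# 1-D stand-in for smolyakInterpolationPrep's outputs:
# collocation points, Psi matrix, polynomials, and polynomial integrals.
import math

def chebyshev_prep(n):
    # Chebyshev-Gauss-Lobatto collocation points, in increasing order
    x_pts = [math.cos(math.pi * i / (n - 1)) for i in range(n - 1, -1, -1)]

    def T(k, x):
        # Chebyshev polynomial T_k(x), clamped for float safety
        return math.cos(k * math.acos(max(-1.0, min(1.0, x))))

    # Psi matrix: row per collocation point, column per basis polynomial
    psi_mat = [[T(k, x) for k in range(n)] for x in x_pts]
    polys = [lambda x, k=k: T(k, x) for k in range(n)]
    # integral of T_k over [-1, 1]: 0 for odd k, 2/(1 - k^2) for even k
    int_polys = [0.0 if k % 2 else 2.0 / (1.0 - k * k) for k in range(n)]
    return x_pts, psi_mat, polys, int_polys
```

Solving Ψ c = f(x_pts) for coefficients c then yields the interpolating polynomial, and the integral vector prices its conditional expectation under a uniform weight.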
input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
ln(|ê_c,i|)  ln(|ê_k,i|)  ln(|ê_θ,i|),  i = 1, …, 30

-13.5994  -13.7316  -14.1969
-19.713   -13.7563  -14.7744
-17.8538  -15.9394  -17.1189
-18.4186  -14.9247  -18.4609
-14.9453  -15.0569  -18.06
-13.3661  -13.4484  -14.3339
-15.2186  -15.0483  -16.6528
-13.4591  -16.4547  -18.6856
-16.6757  -17.4658  -16.0224
-16.7507  -17.3581  -16.9262
-17.6809  -13.212   -15.0929
-14.8476  -15.0629  -17.0829
-14.1282  -15.8166  -15.3665
-16.8176  -14.9953  -16.8123
-14.7882  -13.8997  -16.1402
-14.5928  -15.0521  -15.5889
-15.8049  -16.3126  -16.5502
-15.479   -15.8001  -16.1557
-12.8385  -15.3986  -15.5094
-13.3789  -14.598   -15.9061
-15.6645  -15.1186  -15.5069
-14.4965  -14.459   -15.1171
-14.7832  -14.7719  -16.6327
-15.0335  -15.1856  -17.7803
-13.7298  -15.3206  -14.6813
-15.2349  -15.1718  -15.467
-17.3423  -17.473   -20.5415
-13.2297  -15.3227  -16.1059
-15.7978  -15.0212  -15.2354
-14.0272  -12.8995  -14.2733

Table 9: RBC Known Solution Error Approximations, d=(2,2,2)
K          d=(1,1,1)  d=(2,2,2)  d=(3,3,3)
NonSeries  -6.51367   -12.2771   -16.8947
0          -6.0793    -6.26833   -6.29843
1          -6.00805   -6.24202   -6.49835
2          -6.52219   -7.43876   -7.04785
3          -6.57578   -8.03864   -8.32589
4          -6.62831   -9.1522    -9.49314
5          -6.77305   -10.5516   -10.4247
6          -6.38499   -11.6399   -11.455
7          -6.63849   -11.3627   -13.0213
8          -6.38735   -11.7288   -13.9976
9          -6.46759   -11.9826   -15.3372
10         -6.45966   -12.1345   -15.4157
10         -6.77776   -11.5025   -15.8211
15         -6.67778   -12.4593   -15.8899
20         -6.40802   -11.7296   -17.1652
25         -6.53265   -12.1441   -17.1518
30         -6.81949   -11.6097   -16.3748
35         -6.33508   -12.1446   -16.6963
40         -6.52442   -11.4042   -15.6302

Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5 Approximating an Unknown Solution, η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 11: RBC Approximated Solution Model Evaluation Points (k_i, θ_i, ε_i), i = 1, …, 30, d=(1,1,1).]
[Table 12: RBC Approximated Solution Values (c_i, k_i, θ_i) at the 30 Evaluation Points, d=(1,1,1).]
[Table 13: RBC Approximated Solution Error Approximations ln(|ê_c|), ln(|ê_k|), ln(|ê_θ|) at the 30 Evaluation Points, d=(1,1,1).]
[Table 14: RBC Approximated Solution Model Evaluation Points (k_i, θ_i, ε_i), i = 1, …, 30, d=(2,2,2).]
[Table 15: RBC Approximated Solution Values at the 30 Evaluation Points, d=(2,2,2).]
[Table 16: RBC Approximated Solution Error Approximations ln(|ê_c|), ln(|ê_k|), ln(|ê_θ|) at the 30 Evaluation Points, d=(2,2,2).]
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_{ss}

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

c_t + k_t = (1 − d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ_{t-1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1-η} − 1)/(1 − η)

L = u(c_t) + E_t Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ [ δ^t λ_t (c_t + k_t − ((1 − d) k_{t-1} + θ_t f(k_{t-1}))) + δ^t μ_t ((k_t − (1 − d) k_{t-1}) − υ I_{ss}) ]

so that we can impose

I_t = (k_t − (1 − d) k_{t-1}) ≥ υ I_{ss}
The first order conditions become

1/c_t^η + λ_t

λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α-1} − δ(1 − d) λ_{t+1} + μ_t − δ(1 − d) μ_{t+1}
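The investment floor is expressed relative to steady-state investment I_ss. Under the functional forms above, with θ = 1 the deterministic steady state follows from the Euler condition 1 = δ(α k^{α-1} + 1 − d), and steady-state investment just replaces depreciation. A sketch, with illustrative parameter values (not calibrations taken from the paper):

```python
# Steady-state capital and investment for the floor I_t >= upsilon * I_ss,
# from the Euler condition 1 = delta * (alpha * k^(alpha-1) + 1 - d)
# with theta = 1. Parameter values are illustrative assumptions.

def steady_state(alpha=0.36, delta=0.95, d=0.025):
    k_ss = (alpha / (1.0 / delta - (1.0 - d))) ** (1.0 / (1.0 - alpha))
    i_ss = d * k_ss   # investment exactly offsets depreciation
    return k_ss, i_ss
```

The bound υ I_ss is then a fixed fraction υ of this steady-state investment level.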
For example, the Euler equations for the neoclassical growth model can be written as:

¹¹ The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1 − d) k_{t-1} − υ I_{ss}) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t-1})
μ_t (k_t − (1 − d) k_{t-1} − υ I_{ss})

if (μ_t = 0 and (k_t − (1 − d) k_{t-1} − υ I_{ss}) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t-1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε}
λ_t + μ_t − (α k_t^{α-1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t-1})
μ_t (k_t − (1 − d) k_{t-1} − υ I_{ss})
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point.
[Table 17: Occasionally Binding Constraints Model Evaluation Points (k_i, θ_i, ε_i), i = 1, …, 30, d=(1,1,1).]
[Table 18: Occasionally Binding Constraints Values (c_i, I_i, k_i) at the Evaluation Points, d=(1,1,1).]
[Table 19: Occasionally Binding Constraints Error Approximations ln(|ê_c|), ln(|ê_k|), ln(|ê_θ|), d=(1,1,1).]
[Table 20: Occasionally Binding Constraints Model Evaluation Points (k_i, θ_i, ε_i), i = 1, …, 30, d=(2,2,2).]
[Table 21: Occasionally Binding Constraints Values at the Evaluation Points, d=(2,2,2).]
[Table 22: Occasionally Binding Constraints Error Approximations ln(|ê_c|), ln(|ê_k|), ln(|ê_θ|), d=(2,2,2).]
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.
Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections: parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.
Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-6/2/f8b1713c81765453e1a5e9a52593b52e.
Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers, Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.
Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.
Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.
Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.
A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.
Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-3/1/3f27e3e4d4b890df59d4f83850c1c3d1.
Manuel S. Santos and Adrián Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.
Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A  Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x^*_t = g(x_{t-1}, ε_t), define the conditional expectations function

\[ G(x) \equiv E\left[ g(x, \varepsilon) \right] \]

Then, with

\[ E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t)) \]

we have

\[ \mathcal{M}(x_{t-1},\, x^*_t,\, E_t x^*_{t+1},\, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t) \]
In words, the exact solution exactly satisfies the model equations. Using G and the linear reference model L, construct the family of trajectories and the corresponding z^*_t(x_{t-1}, ε_t):

\[ \left\{ x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \; \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \right\} \]

\[ z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t) \]

Consequently, the exact solution has a representation given by

\[ x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^*_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[ x^*_{t+1}(x_{t-1}, \varepsilon) \right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[ z^*_{t+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c \]

with

\[ \mathcal{M}(x_{t-1},\, x^*_t,\, E_t x^*_{t+1},\, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t) \]
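The following sketch illustrates this representation numerically. The matrices B, F, φ, ψ_c, ψ_ε below are illustrative stand-ins chosen for stability, not a calibration from the paper: with z identically zero the series collapses to the linear reference model's rule, and for a bounded constant z path the truncated sum converges to the closed-form limit because F is stable.

```python
import numpy as np

# Illustrative linear reference model L = {H, psi_eps, psi_c, B, phi, F};
# these matrices are hypothetical stand-ins, not from any model in the paper.
B = np.array([[0.9, 0.1],
              [0.0, 0.5]])
F = np.array([[0.4, 0.0],
              [0.1, 0.3]])
phi = np.eye(2)
psi_c = np.array([0.1, 0.2])
psi_eps = np.array([1.0, 0.5])

def x_series(x_prev, eps, z_path):
    """Truncated series representation:
    x_t = B x_{t-1} + phi psi_eps eps + (I-F)^{-1} phi psi_c + sum_nu F^nu phi z_{t+nu}."""
    x = B @ x_prev + phi @ (psi_eps * eps) + np.linalg.solve(np.eye(2) - F, phi @ psi_c)
    for nu, z in enumerate(z_path):
        x = x + np.linalg.matrix_power(F, nu) @ phi @ z
    return x

x_prev, eps = np.array([1.0, 0.5]), 0.2
lin = x_series(x_prev, eps, [])                      # z == 0: the linear rule itself
approx = x_series(x_prev, eps, [np.ones(2)] * 200)   # constant bounded z path, truncated
exact = lin + np.linalg.solve(np.eye(2) - F, phi @ np.ones(2))  # closed-form limit
print(np.max(np.abs(approx - exact)))                # truncation error: negligible
```

Because the spectral radius of F is well below one, truncating the sum at 200 terms leaves an error on the order of machine precision.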
Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

\[ E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)) \]

\[ e^p_t(x_{t-1}, \varepsilon) \equiv \mathcal{M}(x_{t-1},\, x^p_t,\, E_t x^p_{t+1},\, \varepsilon_t) \]
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t-1}, ε_t):¹²

\[ \left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \; \|x^p_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \right\} \]

\[ z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t) \]

The proposed solution has a representation given by

\[ x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon) \]

and

\[ E\left[ x^p_{t+1}(x_{t-1}, \varepsilon) \right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[ z^p_{t+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c \]

with

\[ e^p_t(x_{t-1}, \varepsilon) \equiv \mathcal{M}(x_{t-1},\, x^p_t,\, E_t x^p_{t+1},\, \varepsilon_t) \]

Subtracting the proposed-solution representation from the exact-solution representation gives

\[ x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right) \]

Defining

\[ \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \]

we have

\[ x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \]

\[ \left\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \, \left\| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_\infty \]
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in the x_t.¹³ The exact solution satisfies the model equations exactly; the error associated with the proposed solution consequently leads to a conservative bound on the largest change in z needed to match the exact solution.
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³Since the future values are probability-weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for the Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
\[ \|\Delta z_t\| \le \max_{x_{-}, \varepsilon} \left\| \phi\, \mathcal{M}\!\left( x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon \right) \right\|_2 \]

\[ \left\| x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi\, \mathcal{M}\!\left( x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon \right) \right\|_2 \]

Writing

\[ z^*_t = H_{-} x_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}, \qquad z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1} \]

with

\[ 0 = \mathcal{M}(x_{t-1},\, x^*_t,\, x^*_{t+1},\, \varepsilon_t), \qquad e^p_t(x_{t-1}, \varepsilon) = \mathcal{M}(x^p_{t-1},\, x^p_t,\, x^p_{t+1},\, \varepsilon_t) \]

gives

\[ \max_{x_{t-1}, \varepsilon_t} \left( \|\phi \Delta z_t\|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^*_t - x^p_t) + H_{+} (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2 \]

Since \( \mathcal{M}(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0 \), a first-order expansion gives

\[ \Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial \mathcal{M}}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial \mathcal{M}}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \]
When \( \partial \mathcal{M} / \partial x_t \) is non-singular we can write

\[ (x^*_t - x^p_t) \approx \left( \frac{\partial \mathcal{M}}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial \mathcal{M}}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) \]
Collecting results,

\[
\begin{aligned}
\left( \|\phi \Delta z_t\|_2 \right)^2
&= \left\| \phi \left( H_0 \left( \frac{\partial \mathcal{M}}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial \mathcal{M}}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) + H_{+} (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2^2 \\
&= \left\| \phi \left( H_0 \left( \frac{\partial \mathcal{M}}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial \mathcal{M}}{\partial x_t} \right)^{-1} \frac{\partial \mathcal{M}}{\partial x_{t+1}} - H_{+} \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2^2
\end{aligned}
\]
Approximating the derivatives using the H matrices,

\[ \frac{\partial \mathcal{M}}{\partial x_t} \approx H_0, \qquad \frac{\partial \mathcal{M}}{\partial x_{t+1}} \approx H_{+} \]

so that the term involving \( x^*_{t+1} - x^p_{t+1} \) drops out, produces

\[ \max_{x_{t-1}, \varepsilon_t} \left\| \phi\, \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2 \]
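A minimal sketch of how this bound can be evaluated in practice, using a hypothetical scalar model M and a deliberately perturbed proposed rule g_p (the model, rules, and grid here are illustrative stand-ins, not the paper's code): the model residuals at the proposed solution, scaled by (I - F)^{-1} φ, bound the decision rule error.

```python
import numpy as np

# Hypothetical scalar model: M(x_prev, x, Ex1, eps) = x - 0.9 x_prev - eps,
# so the exact rule is g(x_prev, eps) = 0.9 x_prev + eps.
def M(x_prev, x, Ex1, eps):
    return x - 0.9 * x_prev - eps

g = lambda x_prev, eps: 0.9 * x_prev + eps
g_p = lambda x_prev, eps: 0.9 * x_prev + eps + 0.001 * x_prev**2  # perturbed rule
G_p = lambda x: g_p(x, 0.0)       # conditional expectation with E[eps] = 0

F, phi = 0.0, 1.0                 # purely backward model: (I-F)^{-1} phi = 1

# Evaluate the residual e_p over test points and form the error bound:
# ||x* - x_p||_inf <= max |(I-F)^{-1} phi M(x_prev, g_p, G_p(g_p), eps)|.
pts = [(x0, e) for x0 in np.linspace(-1, 1, 21) for e in np.linspace(-0.1, 0.1, 5)]
residuals = [abs(M(x0, g_p(x0, e), G_p(g_p(x0, e)), e) / (1 - F) * phi)
             for x0, e in pts]
bound = max(residuals)
actual = max(abs(g(x0, e) - g_p(x0, e)) for x0, e in pts)
print(bound, actual)   # the bound weakly exceeds the actual maximum error
```

In this deliberately simple backward-looking example the bound is tight; in forward-looking models the (I - F)^{-1} φ weighting makes it conservative, as the proof indicates.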
B  A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial point (x_{t-1}, ε_t). Consider the case where the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left( x_{t+1}(x_t) \mid s_{t+1} = j \right) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left( x_{t+2}(x_{t+1}) \mid s_{t+2} = j \right) \]

If we know the current state, we can use the known conditional expectation function for that state:

\[ E_t\!\left( x_{t+1}(x_t) \mid s_t = 1 \right), \;\ldots,\; E_t\!\left( x_{t+1}(x_t) \mid s_t = N \right) \]
For the next state, we can compute expectations if the errors are independent:

\[
\begin{bmatrix} E_t\!\left( x_{t+2}(x_t) \mid s_t = 1 \right) \\ \vdots \\ E_t\!\left( x_{t+2}(x_t) \mid s_t = N \right) \end{bmatrix}
= (P \otimes I)
\begin{bmatrix} E_t\!\left( x_{t+2}(x_{t+1}) \mid s_{t+1} = 1 \right) \\ \vdots \\ E_t\!\left( x_{t+2}(x_{t+1}) \mid s_{t+1} = N \right) \end{bmatrix}
\]
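A small numeric sketch of this stacking (two regimes, two state variables, illustrative numbers): the stacked vector of regime-conditional expectations is premultiplied by P ⊗ I to convert conditioning on s_{t+1} into conditioning on s_t.

```python
import numpy as np

# Two regimes, Nx = 2 state variables; all numbers are illustrative.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # p_ij = Prob(s_{t+1} = j | s_t = i)
I2 = np.eye(2)

# Stacked E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j), regime block by regime block.
Ex_next = np.concatenate([np.array([1.0, 2.0]),    # given s_{t+1} = 1
                          np.array([3.0, 4.0])])   # given s_{t+1} = 2

# Stacked E_t(x_{t+2}(x_t) | s_t = i) = (P kron I) times the stacked vector:
# block i is the p_ij-weighted mix of the regime-j expectations.
Ex_now = np.kron(P, I2) @ Ex_next
print(Ex_now)   # [1.2 2.2 2.6 3.6]
```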
Consider two states s_t ∈ {0, 1} in which the depreciation rates differ, d_0 > d_1, with

\[ \mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij} \]
If \( s_t = 0 \), \( \mu > 0 \), and \( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} = 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1 - d_0) \lambda_{t+1} + \delta (1 - d_0) \mu_{t+1} \right) \\
&I_t - \left( k_t - (1 - d_0) k_{t-1} \right) \\
&\mu_t \left( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]
If \( s_t = 0 \), \( \mu = 0 \), and \( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \ge 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1 - d_0) \lambda_{t+1} + \delta (1 - d_0) \mu_{t+1} \right) \\
&I_t - \left( k_t - (1 - d_0) k_{t-1} \right) \\
&\mu_t \left( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]
If \( s_t = 1 \), \( \mu > 0 \), and \( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} = 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1 - d_1) \lambda_{t+1} + \delta (1 - d_1) \mu_{t+1} \right) \\
&I_t - \left( k_t - (1 - d_1) k_{t-1} \right) \\
&\mu_t \left( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]
If \( s_t = 1 \), \( \mu = 0 \), and \( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} \ge 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1 - d_1) \lambda_{t+1} + \delta (1 - d_1) \mu_{t+1} \right) \\
&I_t - \left( k_t - (1 - d_1) k_{t-1} \right) \\
&\mu_t \left( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]
C  Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model.
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components.
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components.
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components.
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components.
κ: one less than the number of terms in the series representation (κ ≥ 0).
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x)).
T: model evaluation points; a Niederreiter sequence in the model variable ergodic region.
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components.
G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components.
H: {γ, G} collects the decision rule information.
{C_1, …, C_M} = {M^d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, R^{3N_x + N_ε} → R^{N_x}.
input: (L, H_0, κ, {C_1, …, C_M}, S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default ``normConvTol'' → 10^{-10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1: nestInterp
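The outer loop of Algorithm 1 can be sketched as follows. This is Python rather than the paper's Mathematica, and the doInterp stand-in is just a damped update toward a known fixed point so that the convergence test is visible; the paper's actual doInterp builds Smolyak interpolants.

```python
import numpy as np

# Sketch of the NestWhile fixed-point loop: iterate the interpolation pass until
# the rule stops changing at the test points (tolerance "normConvTol" = 1e-10).
norm_conv_tol = 1e-10
test_points = np.linspace(0.5, 1.5, 7)

def do_interp(rule):
    """Stand-in for one interpolation pass: damped update toward 0.9 x + 0.1."""
    return lambda x: 0.5 * rule(x) + 0.5 * (0.9 * x + 0.1)

rule = lambda x: x                  # H_0: initial rule (identity, for illustration)
prev_vals = rule(test_points)
for iteration in range(1000):
    rule = do_interp(rule)
    vals = rule(test_points)
    if np.max(np.abs(vals - prev_vals)) < norm_conv_tol:
        break
    prev_vals = vals

# The converged rule agrees with the fixed point 0.9 x + 0.1 at the test points.
print(iteration, np.max(np.abs(rule(test_points) - (0.9 * test_points + 0.1))))
```

Monitoring convergence at a fixed set of test points, rather than comparing interpolation coefficients, is what makes the stopping rule independent of the basis used for the rules.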
input: (L, H_i, κ, {C_1, …, C_M}, S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) from one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M})
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M}, S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) from one of the set of model equations, and these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t - x^p_t})
6: funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1 to κ - 1 do
4:   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5: end
output: {Z_{t+1}, …, Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I - F)^{-1} φψ_c + φψ_ε ε_t ; 0_{N_x} ]
3: G(x) = [ Bx + (I - F)^{-1} φψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3: xtVals = genXtOfXtm1(L, {x_{t-1}, ε_t, z_t}, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, {x_{t-1}, ε_t, z_t}, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t-1} + (I - F)^{-1} φψ_c + φψ_ε ε_t + φψ_z z_t + F · fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_{t+1} = Bx_t + (I - F)^{-1} φψ_c + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1: Numerically sums the F contribution for the series:
2: S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
input: ({C_1, …, C_M}, S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M}, S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M}, S)
1: Solves the model equations at the collocation points, producing the interpolation data.
output: interpData
Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1: Evaluate the model equations, obeying the pre- and post-conditions.
output: x_t, or ``failed''
Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
K    d = (1, 1, 1)    d = (2, 2, 2)    d = (3, 3, 3)
NonSeries minus651367 minus122771 minus1689470 minus60793 minus626833 minus6298431 minus600805 minus624202 minus6498352 minus652219 minus743876 minus7047853 minus657578 minus803864 minus8325894 minus662831 minus91522 minus9493145 minus677305 minus105516 minus1042476 minus638499 minus116399 minus114557 minus663849 minus113627 minus1302138 minus638735 minus117288 minus1399769 minus646759 minus119826 minus15337210 minus645966 minus121345 minus15415710 minus677776 minus115025 minus15821115 minus667778 minus124593 minus15889920 minus640802 minus117296 minus17165225 minus653265 minus121441 minus17151830 minus681949 minus116097 minus16374835 minus633508 minus121446 minus16696340 minus652442 minus114042 minus156302
Table 10: RBC Known Solution: Impact of Series Length on Approximation Error
5.5  Approximating an Unknown Solution: η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 12 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 15 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
Columns: k_i, θ_i, ε_i for evaluation points i = 1, …, 30.
01489 0887266 minus0044574
0233573 116936 005950960175324 0939453 minus0009879460202434 103737 00334887018999 102063 minus003590030215667 112355 006818320157417 0893645 minus0001205830241135 117907 005083590202828 107209 minus001855310177152 096917 001614140229252 112428 minus005324760146251 0836349 00118046021657 110837 minus005758440187072 101878 004649910239172 117389 minus002288990155454 0888463 006384640172322 0973989 minus0005542640201821 106358 002915190143572 0833667 minus004023710226812 112076 005517270159193 0911511 minus001421630240045 120694 002047820181795 0977034 minus004891080210338 106995 003782550227206 115548 minus003156350146355 086005 0072520198455 101516 0003130980170749 0919322 003999390157874 0901074 minus002939510238725 119651 00746884
Table 11: RBC Approximated Solution: Model Evaluation Points, d = (1, 1, 1)
Columns: c_i, k_i, θ_i for evaluation points i = 1, …, 30.
021611 0208557 08470270452893 0269857 1223930267239 0230108 0931775035028 0251768 1070360299961 0240849 09835690420887 0262256 1189840243253 0217177 08965210459129 0272277 1223740340827 0250513 1049980294783 0234714 09869090368953 0260446 1065470221402 020607 08547410349843 025471 1046210339335 0244203 106640416676 0267429 114247027496 0218765 09600640283425 0231206 0969230360306 0252522 1090830193846 0200678 07997350420092 0266042 1172980245256 0218836 09004890456834 0270845 121817026908 0233193 09291140373581 0256909 1106050393659 0262194 1116440264319 0211099 09421480321357 0247159 1017540281918 0228897 09641620232855 0216163 08753040479992 0272575 126627
Table 12: RBC Approximated Solution: Values at Evaluation Points, d = (1, 1, 1)
Columns: ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}| for evaluation points i = 1, …, 30.
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13: RBC Approximated Solution: Error Approximations, d = (1, 1, 1)
Columns: k_i, θ_i, ε_i for evaluation points i = 1, …, 30.
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14: RBC Approximated Solution: Model Evaluation Points, d = (2, 2, 2)
Columns: c_i, k_i, θ_i for evaluation points i = 1, …, 30.
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15: RBC Approximated Solution: Values at Evaluation Points, d = (2, 2, 2)
Columns: ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}| for evaluation points i = 1, …, 30.
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16: RBC Approximated Solution: Error Approximations, d = (2, 2, 2)
6  A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches¹¹ (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding the constraint

\[ I_t \ge \upsilon I_{ss} \]

to the simple RBC model:

\[ \max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1}) \]

subject to

\[ c_t + k_t = (1 - d) k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t} \]

with

\[ f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1 - \eta} \]
The Lagrangian is

\[ \mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \delta^t \lambda_t \left( c_t + k_t - \left( (1 - d) k_{t-1} + \theta_t f(k_{t-1}) \right) \right) + \delta^t \mu_t \left( (k_t - (1 - d) k_{t-1}) - \upsilon I_{ss} \right) \]

so that we can impose

\[ I_t = \left( k_t - (1 - d) k_{t-1} \right) \ge \upsilon I_{ss} \]
The first order conditions become

\[ \frac{1}{c_t^{\eta}} + \lambda_t \]

\[ \lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta (1 - d) \lambda_{t+1} + \mu_t - \delta (1 - d) \mu_{t+1} \]

For example, the Euler equations for the neoclassical growth model can be written as:

¹¹The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of ``anticipated shocks'' but do not use the comprehensive formula employed here.
If \( \mu > 0 \) and \( k_t - (1 - d) k_{t-1} - \upsilon I_{ss} = 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1 - d) \lambda_{t+1} + \delta (1 - d) \mu_{t+1} \right) \\
&I_t - \left( k_t - (1 - d) k_{t-1} \right) \\
&\mu_t \left( k_t - (1 - d) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]

If \( \mu = 0 \) and \( k_t - (1 - d) k_{t-1} - \upsilon I_{ss} \ge 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \delta (1 - d) \lambda_{t+1} + \delta (1 - d) \mu_{t+1} \right) \\
&I_t - \left( k_t - (1 - d) k_{t-1} \right) \\
&\mu_t \left( k_t - (1 - d) k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]
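The gated branch logic of these two systems can be sketched as follows, with a toy interior investment rule standing in for the model's root-finding step. The rule and the shadow-value expression below are hypothetical; only the gating pattern (try the slack branch, check the post-condition, otherwise impose the binding branch) mirrors the paper's approach.

```python
# Sketch of the gated branches for I_t >= upsilon * I_ss: solve the mu = 0 branch
# first, keep it if the constraint holds, otherwise impose the binding branch.
def solve_unconstrained(k_prev, theta, d=0.025, alpha=0.36):
    """mu = 0 branch: hypothetical interior investment rule (a stand-in for the
    model's FindRoot step, not the paper's first-order conditions)."""
    return 0.2 * theta * k_prev**alpha

def solve_investment(k_prev, theta, upsilon_Iss, d=0.025):
    # Branch 1: try mu = 0; the post-condition is I >= upsilon * I_ss.
    I = solve_unconstrained(k_prev, theta, d)
    if I >= upsilon_Iss:
        return I, 0.0                               # constraint slack, mu = 0
    # Branch 2: the constraint binds; I is pinned at the floor and the
    # multiplier mu > 0 absorbs the gap (toy shadow-value expression).
    I = upsilon_Iss
    mu = upsilon_Iss - solve_unconstrained(k_prev, theta, d)
    return I, mu

I_hi, mu_hi = solve_investment(k_prev=10.0, theta=1.1, upsilon_Iss=0.3)
I_lo, mu_lo = solve_investment(k_prev=10.0, theta=0.6, upsilon_Iss=0.3)
print(I_hi, mu_hi)   # high productivity: interior solution, mu = 0
print(I_lo, mu_lo)   # low productivity: I = 0.3 at the floor, mu > 0
```

Exactly one branch's gate conditions hold at any (x_{t-1}, ε_t), which is what lets the algorithm's evaluateTriple select a unique x_t.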
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, I_t, k_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
Columns: k_i, θ_i, ε_i for evaluation points i = 1, …, 30.
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17: Occasionally Binding Constraints Model: Evaluation Points, d = (1, 1, 1)
Columns: c_i, I_i, k_i for evaluation points i = 1, …, 30.
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18: Occasionally Binding Constraints Model: Values at Evaluation Points, d = (1, 1, 1)
Columns: ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}| for evaluation points i = 1, …, 30.
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19: Occasionally Binding Constraints Model: Error Approximations, d = (1, 1, 1)
Columns: k_i, θ_i, ε_i for evaluation points i = 1, …, 30.
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20: Occasionally Binding Constraints Model: Evaluation Points, d = (2, 2, 2)
Columns: c_i, I_i, k_i for evaluation points i = 1, …, 30.
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21: Occasionally Binding Constraints Model: Values at Evaluation Points, d = (2, 2, 2)
Columns: ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}| for evaluation points i = 1, …, 30.
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22: Occasionally Binding Constraints Model: Error Approximations, d = (2, 2, 2)
7  Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual ``Euler equation'' model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007; April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000. URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections: parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996. URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000. URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution $x_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$G(x) \equiv E\left[g(x, \varepsilon)\right]$

Then, with

$E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t))$,

we have

$M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z_t(x_{t-1}, \varepsilon)$:

$\left\{ x_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$

$z_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x_t(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}(x_{t-1}, \varepsilon_t)$

Consequently, the exact solution has a representation given by

$x_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z_{t+\nu}(x_{t-1}, \varepsilon)$

and

$E\left[x_{t+1}(x_{t-1}, \varepsilon)\right] = B x_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, E\left[z_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$

with

$M(x_{t-1}, x_t, E_t x_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$
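For intuition, a truncated version of this representation is straightforward to evaluate numerically. A minimal Python sketch, where all matrices, the shock, and the z path are caller-supplied illustrative stand-ins rather than values from the paper:

```python
import numpy as np

def series_x(B, phi, F, psi_eps, psi_c, x_prev, eps, z_path):
    """Truncated series representation:
    x_t = B x_{t-1} + phi psi_eps eps + (I - F)^{-1} phi psi_c
          + sum_{nu=0}^{len(z_path)-1} F^nu phi z_{t+nu}."""
    n = B.shape[0]
    x = B @ x_prev + phi @ psi_eps @ eps
    x = x + np.linalg.solve(np.eye(n) - F, phi @ psi_c)
    F_pow = np.eye(n)                 # F^0
    for z in z_path:                  # z_path = [z_t, z_{t+1}, ...]
        x = x + F_pow @ phi @ z
        F_pow = F_pow @ F
    return x
```

Retaining only finitely many terms of the infinite sum is exactly the truncation whose error the paper's formulas control.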
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t))$

$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$
By construction the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:^{12}

$\left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x^p_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$

$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$

The proposed solution has a representation given by

$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+\nu}(x_{t-1}, \varepsilon)$

and

$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c$

with

$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$

Subtracting the proposed-solution representation from the exact-solution representation gives

$x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right)$

Defining

$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)$,

we have

$x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$

$\left\|x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu} \phi\right\| \left\|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\right\|_\infty$

By bounding the largest deviation in the paths for the $\Delta z^p_t$ we can bound the largest difference in $x_t$.^{13} The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
^{12}The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
^{13}Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$\left\|\Delta z_t\right\| \le \max_{x_-, \varepsilon} \left\|\phi \, M(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon)\right\|_2$

$\left\|x_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_-, \varepsilon} \left\|(I - F)^{-1} \phi \, M(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon)\right\|_2$
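In practice the maximum on the right-hand side can be approximated by scanning a finite set of sample points. A hedged Python sketch, where `M`, `g_p`, `G_p`, and the sample `grid` are hypothetical placeholders for the model equations, the proposed decision rule, its conditional expectation, and the evaluation points:

```python
import numpy as np

def decision_rule_error_bound(M, g_p, G_p, phi, F, grid):
    """Scan sample points and return the conservative bound
    max ||(I - F)^{-1} phi M(x_-, g^p(x_-,eps), G^p(g^p(x_-,eps)), eps)||.
    M, g_p, G_p, and grid are hypothetical caller-supplied objects."""
    n = F.shape[0]
    IF_inv = np.linalg.inv(np.eye(n) - F)        # (I - F)^{-1}
    worst = 0.0
    for x_minus, eps in grid:
        x_t = g_p(x_minus, eps)                  # proposed decision rule
        resid = M(x_minus, x_t, G_p(x_t), eps)   # model-equation residual
        worst = max(worst, np.linalg.norm(IF_inv @ phi @ resid))
    return worst
```

An exact decision rule yields zero residuals at every sample point, so the bound collapses to zero.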
$z_t = H_- x_{t-1} + H_0 x_t + H_+ x_{t+1}$

$z^p_t = H_- x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1}$

$0 = M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$

$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$

$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi \Delta z_t\right\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi \left(H_0 (x^p_t - x_t) + H_+ (x^p_{t+1} - x_{t+1})\right)\right\|_2\right)^2$
with

$M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t) = 0$,

$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1} - x^p_{t+1})$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular we can write

$(x_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1} - x^p_{t+1})\right)$
Collecting results,

$\left(\left\|\phi \Delta z_t\right\|_2\right)^2 = \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x_{t+1} - x^p_{t+1})\right) + H_+ (x_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$

$= \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_+\right)(x_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$
Approximating the derivatives using the $H$ matrices,

$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_+$,

produces

$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi \, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2\right)^2$
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \,|\, s_{t+1} = j)$

$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+2} = j)$

If we know the current state we can use the known conditional expectation function for the state:

$E_t(x_{t+1}(x_t) \,|\, s_t = 1), \; \ldots, \; E_t(x_{t+1}(x_t) \,|\, s_t = N)$

For the next state we can compute expectations if the errors are independent:

$\begin{bmatrix} E_t(x_{t+2}(x_t) \,|\, s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \,|\, s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = N) \end{bmatrix}$
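With constant transition probabilities, the probability-weighted combination above is a single matrix-weighted sum over regimes. A small Python sketch, where the regime-specific conditional expectation functions are hypothetical placeholders:

```python
import numpy as np

def expected_next(P, cond_exp_funcs, x_t, state):
    """E_t[x_{t+1}] with constant transition probabilities p_{ij}:
    weight each regime's conditional expectation function by the
    transition row of the current state."""
    return sum(P[state, j] * cond_exp_funcs[j](x_t)
               for j in range(P.shape[0]))
```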
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, and

$\mathrm{Prob}(s_t = j \,|\, s_{t-1} = i) = p_{ij}$
If $(s_t = 0 \wedge \mu_t > 0 \wedge (k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}) = 0)$:

$\lambda_t - \frac{1}{c_t}$
$c_t + k_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right)$
$I_t - (k_t - (1 - d_0)k_{t-1})$
$\mu_t(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss})$

If $(s_t = 0 \wedge \mu_t = 0 \wedge (k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$\lambda_t - \frac{1}{c_t}$
$c_t + k_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right)$
$I_t - (k_t - (1 - d_0)k_{t-1})$
$\mu_t(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss})$

If $(s_t = 1 \wedge \mu_t > 0 \wedge (k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}) = 0)$:

$\lambda_t - \frac{1}{c_t}$
$c_t + k_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right)$
$I_t - (k_t - (1 - d_1)k_{t-1})$
$\mu_t(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss})$
If $(s_t = 1 \wedge \mu_t = 0 \wedge (k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$\lambda_t - \frac{1}{c_t}$
$c_t + k_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right)$
$I_t - (k_t - (1 - d_1)k_{t-1})$
$\mu_t(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss})$
C Algorithm Pseudo-code

L ≡ {H, ψε, ψc, B, φ, F}: The linear reference model
xp(xtminus1 εt) The proposed decision rule xt components
zp(xtminus1 εt) The proposed decision rule zt components
Xp(xtminus1) The proposed decision rule conditional expectations xt components
Zp(xtminus1) The proposed decision rule conditional expectations zt components
κ One less than the number of terms in the series representation (κ ≥ 0)
S Smolyak anisotropic grid polynomials (p1(x, ε), …, pN(x, ε)) and their conditional expectations (P1(x), …, PN(x))
T Model evaluation points: Niederreiter sequence in the model variable ergodic region
γ x(xtminus1 εt) z(xtminus1 εt) collects the decision rule components
G x(xtminus1 εt) z(xtminus1 εt) collects the decision rule conditional expectations components
H γG collects decision rule information
{C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c) The equation system $\mathbb{R}^{3N_x + N_\varepsilon} \rightarrow \mathbb{R}^{N_x}$
input (L, H0, κ, {C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much; the norm of the difference is controlled by the option ``normConvTol'' (default 10^{−10})
2 NestWhile(Function(v, doInterp(L, v, κ, {C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c), S)), H0, notConvergedAt(v, T))
output H0, H1, …, Hc
Algorithm 1 nestInterp
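The NestWhile iteration in Algorithm 1 is an ordinary fixed-point loop. A Python sketch of the same control flow, with `do_interp` standing in for doInterp and a hypothetical decision-rule interface mapping a test point to a value:

```python
import numpy as np

def nest_interp(do_interp, H0, test_points, tol=1e-10, max_iter=500):
    """Fixed-point iteration mirroring Algorithm 1's NestWhile: refine
    the rule until its values at the test points stop changing (within
    tol).  do_interp and the rule interface are hypothetical."""
    H = H0
    prev = np.array([H(p) for p in test_points])
    for _ in range(max_iter):
        H = do_interp(H)
        cur = np.array([H(p) for p in test_points])
        if np.linalg.norm(cur - prev) < tol:
            return H
        prev = cur
    raise RuntimeError("did not converge at the test points")
```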
input (L, Hi, κ, {C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for xt for any given value of (xt−1, εt) and one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c))
3 Hi+1 = makeInterpFuncs(theFindRootFuncs, S)
output Hi+1
Algorithm 2 doInterp
input (L, H, κ, {C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for xt for any given value of (xt−1, εt) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C1, …, CM})
output theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input (L, G, κ, M)
1 Symbolically constructs functions that return xt for a given (xt−1, εt)
2 Z = {Zt+1, …, Zt+k−1} = genZsForFindRoot(L, xpt, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {xt−1, xt, Et(xt+1), εt} = modEqnArgsFunc(xt−1, εt, zt)
5 funcOfXtZt = FindRoot({xpt, zt}, {M(X), xt − xpt})
6 funcOfXtm1Eps = Function((xt−1, εt), funcOfXtZt)
output funcOfXtm1Eps
Algorithm 4 genFindRootWorker
input (xpt, L, G, κ, M)
1 Symbolically compute the κ−1 future conditional Z vectors needed for the series formula (empty list for κ = 1)
2 Xt = xpt
  for i = 1; i < κ−1; i++ do
3   Xt+i, Zt+i = G(Zt+i−1)
4 end
5 {(Xt+1, Zt+1), …, (Xt+κ−1, Zt+κ−1)} = genZsForFindRoot(L, xpt, G)
output {Zt+1, …, Zt+k−1}
Algorithm 5 genZsForFindRoot
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x, ε) = [ Bx + (I − F)^{−1}φψc + ψε εt ; 0_{Nx} ]
3 G(x, ε) = [ Bx + (I − F)^{−1}φψc ; 0_{Nx} ]
4 H = {γ, G}
output H
Algorithm 6 genBothX0Z0Funcs
input (L, {Zt+1, …, Zt+k−1})
1 Symbolically creates a function that uses an initial Zt path to generate a function of (xt−1, εt, zt) providing (potentially symbolic) inputs (xt−1, xt, Et(xt+1), εt) for the model system equations M
2 fCon = fSumC(φ, F, ψz, {Zt+1, …, Zt+k−1})
  xtVals = genXtOfXtm1(L, xt−1, εt, zt, fCon)
  xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
  fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output fullVec
Algorithm 7 genLilXkZkFunc
input (L, xt−1, εt, zt, fCon)
1 Symbolically creates a function that uses an initial Zt path to generate a function of (xt−1, εt, zt) providing (potentially symbolically) the xt component of the inputs (xt−1, xt, Et(xt+1), εt) for the model system equations M
2 xt = Bxt−1 + (I − F)^{−1}φψc + φψε εt + φψz zt + F·fCon
output xt
Algorithm 8 genXtOfXtm1
input (L, xt−1, fCon)
1 Symbolically creates a function that uses an initial Zt path to generate a function of (xt−1, εt, zt) providing (potentially symbolically) the xt+1 component of the inputs (xt−1, xt, Et(xt+1), εt) for the model system equations M
2 xt = Bxt−1 + (I − F)^{−1}φψc + φψε εt + φψz zt + fCon
output xt
Algorithm 9 genXtp1OfXt
input (L, {Zt+1, …, Zt+k−1})
1 Numerically sums the F contribution for the series:
2 S = Σν F^{ν−1} φ Zt+ν
output S
Algorithm 10 fSumC
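Algorithm 10 is a finite geometric-style sum over the anticipated-shock path. A Python sketch, under the assumption that the Z path is supplied as a list of vectors:

```python
import numpy as np

def f_sum_c(F, phi, Z_path):
    """Numerically sum the F contribution for the series (Algorithm 10):
    S = sum_{nu=1}^{k-1} F^{nu-1} phi Z_{t+nu}."""
    S = np.zeros(F.shape[0])
    F_pow = np.eye(F.shape[0])        # F^0 for the nu = 1 term
    for Z in Z_path:                  # Z_path = [Z_{t+1}, ..., Z_{t+k-1}]
        S = S + F_pow @ phi @ Z
        F_pow = F_pow @ F
    return S
```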
input ({C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c), S)
1 Solves model equations at collocation points producing interpolation data and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions
2 interpData = genInterpData({C1, …, CM}, d(xt−1, ε, xt, E[xt+1])|(b, c), S)
  γ, G = interpDataToFunc(interpData, S)
output γ, G
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
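The collocation setup these routines describe can be illustrated in one dimension with Chebyshev points; the paper's Smolyak anisotropic grids generalize this to several dimensions. A hedged Python sketch, not the paper's smolyakInterpolationPrep:

```python
import numpy as np

def chebyshev_collocation_prep(degree, lo, hi):
    """One-dimensional stand-in for the collocation setup: return
    collocation points on [lo, hi] and the Psi matrix of Chebyshev
    polynomial values at those points (degree >= 1).  The paper's
    Smolyak anisotropic grids generalize this to many dimensions."""
    z = np.cos(np.pi * np.arange(degree + 1) / degree)  # Chebyshev extrema
    x = lo + (hi - lo) * (z + 1.0) / 2.0                # map to [lo, hi]
    Psi = np.polynomial.chebyshev.chebvander(z, degree) # basis at points
    return x, Psi
```

Because the Psi matrix is square and nonsingular at these points, interpolation coefficients solve a well-posed linear system.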
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
5.5 Approximating an Unknown Solution η = 3

Table 11 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 12 shows the decision rule prescription for (ct, kt, θt) at each evaluation point. Table 13 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 14 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 15 shows the decision rule prescription for (ct, kt, θt) at each evaluation point. Table 16 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 11 data (30 rows of evaluation points (k, θ, ε)) not reproduced: numeric entries garbled in extraction.]
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
[Table 12 data (30 rows of decision-rule values (c, k, θ)) not reproduced: numeric entries garbled in extraction.]
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
[Table 13 data (30 rows of ln(|ê_c|), ln(|ê_k|), ln(|ê_θ|)) not reproduced: numeric entries garbled in extraction.]
Table 13 RBC Approximated Solution Error Approximations d=(111)
[Table 14 data (30 rows of evaluation points (k, θ, ε)) not reproduced: numeric entries garbled in extraction.]
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
[Table 15 data (30 rows of decision-rule values) not reproduced: numeric entries garbled in extraction.]
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
[Table 16 data (30 rows of ln(|ê_c|), ln(|ê_k|), ln(|ê_θ|)) not reproduced: numeric entries garbled in extraction.]
Table 16 RBC Approximated Solution Error Approximationsd=(222)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches^{11} (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$I_t \ge \upsilon I_{ss}$

$\max \; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$

$c_t + k_t = (1 - d)k_{t-1} + \theta_t f(k_{t-1})$

$\theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$

$f(k_t) = k_t^{\alpha}$

$u(c) = \frac{c^{1-\eta} - 1}{1 - \eta}$
$\mathcal{L} = u(c_t) + E_t\left[\sum_{t=0}^{\infty} \delta^{t} u(c_{t+1})\right] + \sum_{t=0}^{\infty} \delta^{t}\lambda_t\left(c_t + k_t - \left((1 - d)k_{t-1} + \theta_t f(k_{t-1})\right)\right) + \delta^{t}\mu_t\left((k_t - (1 - d)k_{t-1}) - \upsilon I_{ss}\right)$

so that we can impose

$I_t = (k_t - (1 - d)k_{t-1}) \ge \upsilon I_{ss}$

The first order conditions become

$\frac{1}{c_t^{\eta}} + \lambda_t$

$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1 - d)\lambda_{t+1} + \mu_t - \delta(1 - d)\mu_{t+1}$
For example, the Euler equations for the neoclassical growth model can be written as:

^{11}The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of ``anticipated shocks'' but do not use the comprehensive formula employed here.
If $(\mu_t > 0 \wedge (k_t - (1 - d)k_{t-1} - \upsilon I_{ss}) = 0)$:

$\lambda_t - \frac{1}{c_t}$
$c_t + I_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d) + \mu_{t+1}\delta(1 - d)\right)$
$I_t - (k_t - (1 - d)k_{t-1})$
$\mu_t(k_t - (1 - d)k_{t-1} - \upsilon I_{ss})$

If $(\mu_t = 0 \wedge (k_t - (1 - d)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$\lambda_t - \frac{1}{c_t}$
$c_t + I_t - \theta_t k_{t-1}^{\alpha}$
$N_t - \lambda_t \theta_t$
$\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}$
$\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1 - d) + \mu_{t+1}\delta(1 - d)\right)$
$I_t - (k_t - (1 - d)k_{t-1})$
$\mu_t(k_t - (1 - d)k_{t-1} - \upsilon I_{ss})$
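Numerically, the two branches above are typically handled by guess-and-verify: solve the system under one complementarity guess and accept it only if that guess's inequality holds. A toy Python sketch with hypothetical branch solvers and guards:

```python
def solve_with_obc(solve_binding, solve_slack, guard_binding, guard_slack):
    """Guess-and-verify for one occasionally binding constraint: try the
    binding branch (mu > 0, investment at the floor) and the slack
    branch (mu = 0); return the candidate whose inequality check holds.
    The solver/guard callables are hypothetical placeholders."""
    candidate = solve_binding()
    if guard_binding(candidate):
        return candidate
    candidate = solve_slack()
    if guard_slack(candidate):
        return candidate
    raise ValueError("no branch satisfies its complementarity conditions")
```

The guards encode exactly the bracketed conditions in the branch systems: a positive multiplier for the binding case, and the investment inequality for the slack case.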
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (ct, kt, θt) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (ct, kt, θt) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
[Table 17 data (30 rows of evaluation points (k, θ, ε)) not reproduced: numeric entries garbled in extraction.]
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
Columns (c_i, I_i, k_i), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 18 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(111)
Columns (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
Columns (k_i, θ_i, ε_i), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
Columns as printed (k_i, θ_i, ε_i), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 21 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(222)
Columns (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc., February 1997. URL http://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then with

E_t x*_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t−1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t−1}, ε):

{x*_t(x_{t−1}, ε_t) ∈ R^L : ‖x*_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z*_t(x_{t−1}, ε_t) ≡ H_{−1} x*_{t−1}(x_{t−1}, ε_t) + H_0 x*_t(x_{t−1}, ε_t) + H_1 x*_{t+1}(x_{t−1}, ε_t)
Consequently, the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t−1}, ε)

and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φψ_c

with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀(x_{t−1}, ε_t)
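For a scalar instance of the linear reference model, this series representation can be checked numerically. The sketch below (all coefficient values are illustrative assumptions, not taken from the paper) builds x_t from a bounded forcing path z via x_t = B x_{t−1} + Σ_ν F^ν φ z_{t+ν} and then verifies that H_{−1} x_{t−1} + H_0 x_t + H_1 x_{t+1} = z_t holds along the path:

```python
import numpy as np

# Scalar sketch of the series representation (coefficients are illustrative assumptions).
Hm1, H0, H1 = 0.9, -2.1, 1.0          # model: Hm1*x[t-1] + H0*x[t] + H1*x[t+1] = z[t]
B = min(r for r in np.roots([H1, H0, Hm1]) if abs(r) < 1)  # stable root of H1*B^2 + H0*B + Hm1 = 0
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1                          # |F| < 1 guarantees the series converges

rng = np.random.default_rng(0)
T, tail = 60, 400
z = rng.uniform(-1.0, 1.0, T + tail)   # an arbitrary bounded forcing path
w = F ** np.arange(tail) * phi         # truncated series weights F^nu * phi
x = np.zeros(T + 1)
for t in range(1, T + 1):              # x[t] = B*x[t-1] + sum_nu F^nu * phi * z[t+nu]
    x[t] = B * x[t - 1] + w @ z[t - 1 : t - 1 + tail]

# The constructed path satisfies the map up to series truncation error
resid = Hm1 * x[0:T - 1] + H0 * x[1:T] + H1 * x[2:T + 1] - z[0:T - 1]
```

The residual is at truncation-error level, which is negligible here because |F| < 1 and the series is cut off after 400 terms.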
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t−1}, ε):^12

{x^p_t(x_{t−1}, ε_t) ∈ R^L : ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀t > 0}

z^p_t(x_{t−1}, ε_t) ≡ H_{−1} x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_1 x^p_{t+1}(x_{t−1}, ε_t)
The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φψ_c

with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the two representations gives

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε))

Defining

Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε)

we have

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.^13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.
12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
13 Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for the Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz*_t‖ ≤ max_{(x_−, ε)} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{(x_−, ε)} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
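As a concrete check of this bound, the sketch below uses the same scalar linear reference model notation as the earlier illustration (all numeric values are illustrative assumptions). It perturbs the exact linear rule g(x) = Bx to a proposed rule g^p(x) = (B+δ)x, evaluates the residual-based bound max |(I−F)^{−1} φ M(x_−, g^p(x_−), G^p(g^p(x_−)))| over a grid for the deterministic (ε = 0) case, and confirms that it dominates the realized sup-norm error of simulated paths:

```python
import numpy as np

# Scalar sketch: residual-based error bound for a perturbed linear decision rule.
Hm1, H0, H1 = 0.9, -2.1, 1.0
B = 0.6                                # stable root of H1*B^2 + H0*B + Hm1 = 0
phi = 1.0 / (H0 + H1 * B)
F = -phi * H1
delta = 0.01                           # perturbation: proposed rule gp(x) = (B + delta)*x

def M(xm, x, xp):                      # deterministic model equations (eps = 0)
    return Hm1 * xm + H0 * x + H1 * xp

Xbar = 1.0                             # bounded region |x| <= Xbar
grid = np.linspace(-Xbar, Xbar, 201)
gp = lambda x: (B + delta) * x
# bound: max over the region of |(I - F)^{-1} * phi * M(x, gp(x), gp(gp(x)))|
bound = np.max(np.abs(phi / (1.0 - F) * M(grid, gp(grid), gp(gp(grid)))))

# realized sup-norm error of the two paths from the worst-case initial condition
x_exact, x_prop, worst = Xbar, Xbar, 0.0
for _ in range(200):
    x_exact, x_prop = B * x_exact, (B + delta) * x_prop
    worst = max(worst, abs(x_exact - x_prop))
```

Here the realized worst-case path deviation is comfortably below the bound, illustrating its conservative nature.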
z*_t = H_− x*_{t−1} + H_0 x*_t + H_+ x*_{t+1}

z^p_t = H_− x^p_{t−1} + H_0 x^p_t + H_+ x^p_{t+1}

0 = M(x_{t−1}, x*_t, x*_{t+1}, ε_t)

e^p_t(x_{t−1}, ε) = M(x^p_{t−1}, x^p_t, x^p_{t+1}, ε_t)

max_{(x_{t−1}, ε_t)} (‖φ Δz*_t‖_2)² = max_{(x_{t−1}, ε_t)} (‖φ(H_0(x*_t − x^p_t) + H_+(x*_{t+1} − x^p_{t+1}))‖_2)²

with

M(x_{t−1}, x*_t, x*_{t+1}, ε_t) = 0
Δe^p_t(x_{t−1}, ε) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1})

When ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular, we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}))
Collecting results,

(‖φ Δz*_t‖_2)² = (‖φ(H_0 (∂M/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})) + H_+(x*_{t+1} − x^p_{t+1}))‖_2)²

= (‖φ(H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) − (H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_+)(x*_{t+1} − x^p_{t+1}))‖_2)²
Approximating the derivatives using the H matrices,

∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0
∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+

so that the second term vanishes, produces

max_{(x_{t−1}, ε_t)} (‖φ Δe^p_t(x_{t−1}, ε)‖_2)²
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial (x_{t−1}, ε_t). Consider the case where the transition probabilities p_ij are constant and do not depend on (x_{t−1}, ε_t, x_t):
E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for the state:

[E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N)]′

For the next state, we can compute expectations if the errors are independent:

[E_t(x_{t+2}(x_t) | s_t = 1), …, E_t(x_{t+2}(x_t) | s_t = N)]′ = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1), …, E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)]′
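Assuming deterministic linear per-regime rules x_{t+1} = B_j x_t (purely illustrative values, not the paper's model), the stacking identity above can be verified numerically: applying (P ⊗ I) to the vector of expectations conditional on s_{t+1} reproduces the direct double sum over regime transitions.

```python
import numpy as np

P = np.array([[0.9, 0.1],                   # p_ij = Prob(s_{t+1}=j | s_t=i), assumed values
              [0.2, 0.8]])
Bs = [np.array([[0.5, 0.1], [0.0, 0.6]]),   # illustrative per-regime decision rules B_j
      np.array([[0.7, 0.0], [0.1, 0.4]])]
x_t = np.array([1.0, -0.5])
N, n = P.shape[0], x_t.size

# E_t(x_{t+2} | s_{t+1} = j) for the deterministic linear rules: sum_k p_jk B_k (B_j x_t)
inner = np.concatenate([sum(P[j, k] * (Bs[k] @ (Bs[j] @ x_t)) for k in range(N))
                        for j in range(N)])
# stacked E_t(x_{t+2} | s_t = i) via the Kronecker product (P kron I)
stacked = np.kron(P, np.eye(n)) @ inner
# direct double sum over transitions, for comparison
direct = np.concatenate([sum(P[i, j] * P[j, k] * (Bs[k] @ (Bs[j] @ x_t))
                             for j in range(N) for k in range(N)) for i in range(N)])
```

The two stacked vectors agree exactly, confirming that (P ⊗ I) implements the probability-weighted averaging over next-period regimes.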
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1, and

Prob(s_t = j | s_{t−1} = i) = p_ij
if (s_t = 0 and μ_t > 0 and (k_t − (1−d_0) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^(ρ ln(θ_{t−1}) + ε)
λ_t + μ_t − (α k_t^(α−1) δ N_{t+1} + λ_{t+1} δ(1−d_0) + μ_{t+1} + δ(1−d_0) μ_{t+1})
I_t − (k_t − (1−d_0) k_{t−1})
μ_t (k_t − (1−d_0) k_{t−1} − υ I_ss)

if (s_t = 0 and μ_t = 0 and (k_t − (1−d_0) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^(ρ ln(θ_{t−1}) + ε)
λ_t + μ_t − (α k_t^(α−1) δ N_{t+1} + λ_{t+1} δ(1−d_0) + μ_{t+1} + δ(1−d_0) μ_{t+1})
I_t − (k_t − (1−d_0) k_{t−1})
μ_t (k_t − (1−d_0) k_{t−1} − υ I_ss)

if (s_t = 1 and μ_t > 0 and (k_t − (1−d_1) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^(ρ ln(θ_{t−1}) + ε)
λ_t + μ_t − (α k_t^(α−1) δ N_{t+1} + λ_{t+1} δ(1−d_1) + μ_{t+1} + δ(1−d_1) μ_{t+1})
I_t − (k_t − (1−d_1) k_{t−1})
μ_t (k_t − (1−d_1) k_{t−1} − υ I_ss)
if (s_t = 1 and μ_t = 0 and (k_t − (1−d_1) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + k_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^(ρ ln(θ_{t−1}) + ε)
λ_t + μ_t − (α k_t^(α−1) δ N_{t+1} + λ_{t+1} δ(1−d_1) + μ_{t+1} + δ(1−d_1) μ_{t+1})
I_t − (k_t − (1−d_1) k_{t−1})
μ_t (k_t − (1−d_1) k_{t−1} − υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F} The linear reference model
x^p(x_{t−1}, ε_t) The proposed decision rule x_t components
z^p(x_{t−1}, ε_t) The proposed decision rule z_t components
X^p(x_{t−1}) The proposed decision rule conditional expectations x_t components
Z^p(x_{t−1}) The proposed decision rule conditional expectations z_t components
κ One less than the number of terms in the series representation (κ ≥ 0)
S Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T Model evaluation points: a Niederreiter sequence in the model variable ergodic region
γ {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G {X(x_{t−1}), Z(x_{t−1})} collects the decision rule conditional expectations components
H {γ, G} collects decision rule information
{C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c) The equation system: R^(3N_x+N_ε) → R^(N_x)
input (L, H_0, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option ``normConvTol'' (default 10^{−10})
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output H_0, H_1, …, H_c
Algorithm 1 nestInterp
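The outer NestWhile loop is a fixed-point iteration with a sup-norm stopping rule at the test points. A minimal scalar analogue (illustrative only, not the paper's code): iterate b ← −H_{−1}/(H_0 + H_1 b), which converges to the stable root B of the linear reference model, stopping when successive iterates change by less than a normConvTol-style tolerance.

```python
# Minimal analogue of the NestWhile convergence loop (illustrative scalar example).
Hm1, H0, H1 = 0.9, -2.1, 1.0
tol = 1e-10                     # plays the role of ``normConvTol''

b_prev, b = 0.0, -Hm1 / H0
iters = 0
while abs(b - b_prev) >= tol:   # stop when the update no longer changes much
    b_prev, b = b, -Hm1 / (H0 + H1 * b)
    iters += 1
# b converges to the stable root B = 0.6 of H1*B^2 + H0*B + Hm1 = 0
```

The same pattern scales up directly: replace the scalar update with doInterp, and the scalar difference with a norm of the rule evaluated at the test points T.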
input (L, H_i, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules
2 theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output H_{i+1}
Algorithm 2 doInterp
input (L, H, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t)
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5 funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output funcOfXtm1Eps
Algorithm 4 genFindRootWorker
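The FindRoot step solves the model equations for x_t (and z_t) at a given (x_{t−1}, ε_t). A generic sketch of such an inner solve, using a plain Newton iteration with a finite-difference Jacobian as a stand-in for Mathematica's FindRoot; the two-equation system here is made up purely for illustration:

```python
import numpy as np

def newton_solve(f, x0, tol=1e-12, max_iter=50):
    """Solve f(x) = 0 by Newton's method with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.max(np.abs(fx)) < tol:
            break
        n = x.size
        J = np.empty((n, n))
        h = 1e-7
        for j in range(n):            # build the Jacobian column by column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (f(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)
    return x

# Hypothetical stand-in for one model-equation system M at fixed (x_{t-1}, eps_t):
xtm1, eps = 0.9, 0.02
f = lambda v: np.array([v[0] - 0.5 * xtm1 - eps - 0.1 * v[1],   # made-up equations
                        v[1] - v[0] ** 2])
xt = newton_solve(f, np.array([0.5, 0.5]))
```

In the actual algorithm the closure over (x_{t−1}, ε_t) corresponds to funcOfXtm1Eps, and the residual function is assembled by genLilXkZkFunc.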
input (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1)
2 X_t = x^p_t
3 for i = 1; i < κ − 1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(X_{t+i−1})
5 end
output {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5 genZsForFindRoot
input (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model as an initial proposed decision rule function and decision rule conditional expectation function
2 γ(x, ε) = [ B x + (I − F)^{−1} φψ_c + φψ_ε ε ; 0_{N_x} ]
3 G(x) = [ B x + (I − F)^{−1} φψ_c ; 0_{N_x} ]
4 H = {γ, G}
output H
Algorithm 6 genBothX0Z0Funcs
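A numpy sketch of this initialization, under assumed shapes for the linear reference model objects (and taking φψ_ε as the shock loading, consistent with the series formula; all matrix values below are illustrative):

```python
import numpy as np

def gen_both_x0_z0_funcs(B, F, phi, psi_c, psi_eps):
    # Initial proposal from the linear reference model: x-part affine in (x, eps), z-part zero.
    const = np.linalg.solve(np.eye(F.shape[0]) - F, phi @ psi_c)
    def gamma(x, eps):                 # decision rule proposal
        return B @ x + const + phi @ (psi_eps @ eps), np.zeros_like(x)
    def G(x):                          # its conditional expectation (E[eps] = 0)
        return B @ x + const, np.zeros_like(x)
    return gamma, G

# Tiny illustrative instance
B = np.array([[0.5, 0.0], [0.1, 0.4]])
F = np.array([[0.2, 0.0], [0.0, 0.3]])
phi = np.eye(2)
psi_c = np.array([0.1, 0.0])
psi_eps = np.array([[1.0], [0.0]])
gamma, G = gen_both_x0_z0_funcs(B, F, phi, psi_c, psi_eps)
x0 = np.array([1.0, 1.0])
gx, gz = gamma(x0, np.zeros(1))
Gx, Gz = G(x0)
```

With a zero shock draw the proposal and its conditional expectation coincide, and the z components start at zero, as Algorithm 6 prescribes.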
input (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output fullVec
Algorithm 7 genLilXkZkFunc
input (L, {x_{t−1}, ε_t, z_t}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2 x_t = B x_{t−1} + (I − F)^{−1} φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output x_t
Algorithm 8 genXtOfXtm1
input (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2 x_{t+1} = B x_t + (I − F)^{−1} φψ_c + fCon
output x_{t+1}
Algorithm 9 genXtp1OfXt
input (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series
2 S = Σ_ν F^(ν−1) φ Z_{t+ν}
output S
Algorithm 10 fSumC
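A direct numpy rendering of this summation (matrix shapes assumed; the ψ_z factor is included to match the call from genLilXkZkFunc, and the specific matrices below are illustrative):

```python
import numpy as np

def f_sum_c(F, phi, psi_z, Zs):
    """Sum the F contribution S = sum_{nu=1}^{k-1} F^(nu-1) phi psi_z Z_{t+nu}."""
    S = np.zeros(F.shape[0])
    F_pow = np.eye(F.shape[0])         # F^0
    for Z in Zs:                       # Zs = [Z_{t+1}, ..., Z_{t+k-1}]
        S = S + F_pow @ phi @ psi_z @ Z
        F_pow = F_pow @ F              # advance to the next power of F
    return S

F = np.array([[0.2, 0.1], [0.0, 0.3]])
phi = np.eye(2)
psi_z = np.eye(2)
Zs = [np.array([1.0, 0.0]), np.array([0.0, 2.0]), np.array([0.5, 0.5])]
S = f_sum_c(F, phi, psi_z, Zs)
# check against explicit matrix powers
S_direct = sum(np.linalg.matrix_power(F, nu - 1) @ Zs[nu - 1] for nu in range(1, 4))
```

Accumulating powers of F incrementally avoids recomputing matrix powers from scratch at each term.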
input ({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions
2 interpData = genInterpData({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S); {γ, G} = interpDataToFunc(interpData, S)
output {γ, G}
Algorithm 11 makeInterpFuncs
input ({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing the interpolation data
output interpData
Algorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
Columns (k_i, θ_i, ε_i), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 11 RBC Approximated Solution Model Evaluation Points d=(111)
Columns (c_i, k_i, θ_i), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 12 RBC Approximated Solution Values at Evaluation Points d=(111)
Columns (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 13 RBC Approximated Solution Error Approximations d=(111)
Columns (k_i, θ_i, ε_i), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
Columns as printed (k_i, θ_i, ε_i), rows i = 1, …, 30.
[numeric table entries garbled in text extraction]
Table 15: RBC Approximated Solution Values at Evaluation Points, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

\[ I_t \ge \upsilon I_{ss} \]

\[ \max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1}) \]

\[ c_t + k_t = (1-d) k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t} \]

\[ f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta} \]

\[ \mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t} u(c_{t+1}) + \sum_{t=0}^{\infty} \Big[ \delta^{t} \lambda_t \big( c_t + k_t - ((1-d) k_{t-1} + \theta_t f(k_{t-1})) \big) + \delta^{t} \mu_t \big( (k_t - (1-d) k_{t-1}) - \upsilon I_{ss} \big) \Big] \]

so that we can impose

\[ I_t = (k_t - (1-d) k_{t-1}) \ge \upsilon I_{ss}. \]

The first order conditions become

\[ \frac{1}{c_t^{\eta}} + \lambda_t \]

\[ \lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{\alpha-1} - \delta (1-d) \lambda_{t+1} + \mu_t - \delta (1-d) \mu_{t+1} \]
For example, the Euler equations for the neoclassical growth model can be written as:¹¹

¹¹The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of ``anticipated shocks'' but do not use the comprehensive formula employed here.
If \( \mu_t > 0 \) and \( k_t - (1-d) k_{t-1} - \upsilon I_{ss} = 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d) + \delta (1-d) \mu_{t+1} \big) \\
&I_t - (k_t - (1-d) k_{t-1}) \\
&\mu_t \big( k_t - (1-d) k_{t-1} - \upsilon I_{ss} \big)
\end{aligned}
\]

If \( \mu_t = 0 \) and \( k_t - (1-d) k_{t-1} - \upsilon I_{ss} \ge 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + I_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d) + \delta (1-d) \mu_{t+1} \big) \\
&I_t - (k_t - (1-d) k_{t-1}) \\
&\mu_t \big( k_t - (1-d) k_{t-1} - \upsilon I_{ss} \big)
\end{aligned}
\]
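The branch structure above lends itself to a guess-and-verify computation at each collocation point: try the slack branch (μ_t = 0) first, and fall back to the binding branch when the implied investment violates the constraint. A minimal one-node sketch under illustrative parameter values (not the paper's calibration), with η = 1 and next-period multipliers held at hypothetical steady-state values:

```python
# Illustrative parameters only (eta = 1 so u'(c) = 1/c); NOT the paper's calibration.
alpha, d, delta, upsilon = 0.36, 0.025, 0.95, 0.975

# Deterministic steady state implied by 1 = delta*(alpha*k^(alpha-1) + 1 - d).
k_ss = (alpha * delta / (1.0 - delta * (1.0 - d))) ** (1.0 / (1.0 - alpha))
I_ss = d * k_ss
lam_ss = 1.0 / (k_ss ** alpha - I_ss)      # lambda = 1/c in the steady state

def bisect(f, lo, hi, tol=1e-12):
    """Simple bisection root finder; assumes f(lo) and f(hi) bracket a root."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0.0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def solve_node(km1, theta, N_next=lam_ss, lam_next=lam_ss, mu_next=0.0):
    """One collocation node: try the slack branch, else the binding branch."""
    def euler_rhs(k):   # delta*(alpha*k^(alpha-1)*N' + (1-d)*(lam' + mu'))
        return delta * (alpha * k ** (alpha - 1.0) * N_next
                        + (1.0 - d) * (lam_next + mu_next))

    resources = theta * km1 ** alpha
    gap = lambda k: 1.0 / (resources - (k - (1.0 - d) * km1)) - euler_rhs(k)
    k = bisect(gap, (1.0 - d) * km1 + 1e-9, (1.0 - d) * km1 + resources - 1e-9)
    I = k - (1.0 - d) * km1
    if I >= upsilon * I_ss:                 # constraint slack: mu_t = 0
        c = resources - I
        return {"c": c, "k": k, "I": I, "lam": 1.0 / c, "mu": 0.0}
    I = upsilon * I_ss                      # binding branch: closed form
    k = (1.0 - d) * km1 + I
    c = resources - I
    return {"c": c, "k": k, "I": I, "lam": 1.0 / c, "mu": euler_rhs(k) - 1.0 / c}
```

At the deterministic steady state the slack branch reproduces k_ss with μ_t = 0; when desired investment falls below υI_ss, the binding branch returns the positive multiplier.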
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
k1 θ1 ε1
k30 θ30 ε30
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
c1 I1 k1
c30 I30 k30
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(1,1,1)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
k1 θ1 ε1
k30 θ30 ε30
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
c1 I1 k1
c30 I30 k30
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21: Occasionally Binding Constraints Model, Values at Evaluation Points, d=(2,2,2)
ln(| ˆec1|) ln(| ˆek1|) ln(| ˆeθ1|)
ln(| ˆec30|) ln(| ˆek30|) ln(| ˆeθ30|)
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual ``Euler equation'' model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at https://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, 5 2017a. ISSN 1468-0262. doi:10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A Peralta-Alva and Manuel Santos Schmedders K and Judd K volume 3 chapter 9 pages517ndash556 Elsevier Science 2014
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, 11 2005. ISSN 1468-0262. doi:10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution \( x_t^{*} = g(x_{t-1}, \varepsilon_t) \), define the conditional expectations function

\[ G(x) \equiv E\left[ g(x, \varepsilon) \right], \]

then with

\[ E_t x_{t+1}^{*} = G(g(x_{t-1}, \varepsilon_t)) \]

we have

\[ M(x_{t-1}, x_t^{*}, E_t x_{t+1}^{*}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding \( z_t^{*}(x_{t-1}, \varepsilon) \):

\[ \{ x_t^{*}(x_{t-1}, \varepsilon_t) \} \in \mathbb{R}^{L}, \qquad \| x_t^{*}(x_{t-1}, \varepsilon_t) \|_2 \le \bar{X} \quad \forall t > 0 \]

\[ z_t^{*}(x_{t-1}, \varepsilon_t) \equiv H_{-1} x_{t-1}^{*}(x_{t-1}, \varepsilon_t) + H_0 x_t^{*}(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}^{*}(x_{t-1}, \varepsilon_t). \]

Consequently, the exact solution has a representation given by

\[ x_t^{*}(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+\nu}^{*}(x_{t-1}, \varepsilon) \]

and

\[ E\left[ x_{t+1}^{*}(x_{t-1}, \varepsilon) \right] = B x_t^{*} + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[ z_{t+1+\nu}^{*}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c \]

with

\[ M(x_{t-1}, x_t^{*}, E_t x_{t+1}^{*}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t). \]
Now consider a proposed solution for the model xpt = gp(xtminus1 εt) define Gp(x) equivE [gp(x ε)] so that
Etxt+1 = Gp(gp(xtminus1 εt))ept (xtminus1 ε) equivM(xtminus1 x
pt Etx
pt+1 εt)
56
By construction the conditional expectations paths will all be bounded in the region ApUsing Gp and L construct the family of trajectories and corresponding zpt (xtminus1 ε)12
xpt (xtminus1 εt) isin RL xpt (xtminus1 εt)2 le X forallt gt 0
zpt (xtminus1 εt) equivHminus1xptminus1(xtminus1 εt)+
H0xpt (xtminus1 εt)+
H1xpt+1(xtminus1 εt)
The proposed solution has a representation given by
xpt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzpt+ν(xtminus1 ε)
and
E [xpt+1(xtminus1 ε)] = Bxpt+k +infinsumν=0
F νφzpt+1+ν(xtminus1 ε) + (I minus F )minus1φψc
with
ept (xtminus1 ε) equivM(xtminus1 xpt Etx
pt+1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ(zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
∆zpt+ν(xtminus1 εt) equiv (zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ∆zpt+ν(xtminus1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin leinfinsumν=0
F sφ ∆zpt+ν(xtminus1 εt)infin
By bounding the largest deviation in the paths for the ∆zpt we can bound the largestdifference in xt13 The exact solution satisfies the model equations exactly The error asso-ciated with the proposed solution leads to a conservative bound on the largest change in zneeded to match the exact solution
12The algorithm presented below for finding solutions provides a mechanism for generating such proposedsolutions
13Since the future values are probability weighted averages of the ∆zpt values for the given initial conditions
and the condition expectations paths remain in the region Ap the largest error for ∆zpt+k are pessimistic
bounds for the errors from the conditional expectations path
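Under these assumptions, the bound is straightforward to evaluate numerically once the linear reference model's F and φ are in hand: compute \((I-F)^{-1}\phi\) and take the largest weighted residual over a set of test points. A toy sketch with illustrative matrices and hypothetical residual values (not from the paper's RBC calibration):

```python
import numpy as np

# Toy linear-reference-model matrices (illustrative only).
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
phi = np.array([[1.0, 0.2],
                [0.0, 1.0]])
assert max(abs(np.linalg.eigvals(F))) < 1.0   # the infinite series converges

# Residuals e^p_t = M(x_{t-1}, x^p_t, E_t x^p_{t+1}, eps_t) at a few test points.
residuals = [np.array([1e-4, -2e-4]),
             np.array([-5e-5, 1e-4]),
             np.array([3e-4, 2e-5])]

IF_phi = np.linalg.solve(np.eye(2) - F, phi)  # (I - F)^{-1} phi
bound = max(np.linalg.norm(IF_phi @ e) for e in residuals)
```

The same pattern extends to any number of test points; only the maximum weighted residual norm matters for the conservative bound.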
\[ \| \Delta z_t \| \le \max_{x_{-}, \varepsilon} \left\| \phi M\big( x_{-}, g^{p}(x_{-}, \varepsilon), G^{p}(g^{p}(x_{-}, \varepsilon)), \varepsilon \big) \right\|_2 \]

\[ \| x_t^{*}(x_{t-1}, \varepsilon) - x_t^{p}(x_{t-1}, \varepsilon) \|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi M\big( x_{-}, g^{p}(x_{-}, \varepsilon), G^{p}(g^{p}(x_{-}, \varepsilon)), \varepsilon \big) \right\|_2 \]

Since

\[ z_t^{*} = H_{-} x_{t-1}^{*} + H_0 x_t^{*} + H_{+} x_{t+1}^{*}, \qquad z_t^{p} = H_{-} x_{t-1}^{p} + H_0 x_t^{p} + H_{+} x_{t+1}^{p}, \]

and

\[ 0 = M(x_{t-1}, x_t^{*}, x_{t+1}^{*}, \varepsilon_t), \qquad e_t^{p}(x_{t-1}, \varepsilon) = M(x_{t-1}^{p}, x_t^{p}, x_{t+1}^{p}, \varepsilon_t), \]

we have

\[ \max_{x_{t-1}, \varepsilon_t} ( \| \phi \Delta z_t \|_2 )^2 = \max_{x_{t-1}, \varepsilon_t} \big( \big\| \phi \big( H_0 (x_t^{p} - x_t^{*}) + H_{+} (x_{t+1}^{p} - x_{t+1}^{*}) \big) \big\|_2 \big)^2. \]

With \( M(x_{t-1}, x_t^{*}, x_{t+1}^{*}, \varepsilon_t) = 0 \), a first-order expansion gives

\[ \Delta e_t^{p}(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x_t^{*} - x_t^{p}) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x_{t+1}^{*} - x_{t+1}^{p}). \]

When \( \partial M / \partial x_t \) is non-singular we can write

\[ (x_t^{*} - x_t^{p}) \approx \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e_t^{p}(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x_{t+1}^{*} - x_{t+1}^{p}) \right). \]

Collecting results,

\[ ( \| \phi \Delta z_t \|_2 )^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e_t^{p}(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x_{t+1}^{*} - x_{t+1}^{p}) \right) + H_{+} (x_{t+1}^{p} - x_{t+1}^{*}) \right) \right\|_2 \right)^2. \]

Approximating the derivatives using the H matrices,

\[ \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}, \]

the terms involving \( (x_{t+1}^{*} - x_{t+1}^{p}) \) cancel, which produces

\[ \max_{x_{t-1}, \varepsilon_t} ( \| \phi \, \Delta e_t^{p}(x_{t-1}, \varepsilon) \|_2 )^2. \]
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial \( (x_{t-1}, \varepsilon_t) \). Consider the case when the transition probabilities \( p_{ij} \) are constant and do not depend on \( (x_{t-1}, \varepsilon_t, x_t) \):

\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \mid s_{t+1} = j) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j) \]

If we know the current state, we can use the known conditional expectation function for the state:

\[ E_t(x_{t+1}(x_t) \mid s_t = 1), \; \ldots, \; E_t(x_{t+1}(x_t) \mid s_t = N). \]

For the next state, we can compute expectations if the errors are independent:

\[
\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix}
= (P \otimes I)
\begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}
\]
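The stacked \( (P \otimes I) \) step can be sketched numerically; a minimal example with a hypothetical two-regime transition matrix and a three-component state (all values illustrative):

```python
import numpy as np

# Hypothetical regime transition probabilities p_ij (rows sum to one).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Stacked per-regime expectations E_t(x_{t+2} | s_{t+1} = j); x has 3 components.
Ex_cond = np.array([1.0, 2.0, 3.0,    # regime j = 1 block
                    4.0, 5.0, 6.0])   # regime j = 2 block

# E_t(x_{t+2} | s_t = i) = sum_j p_ij * E_t(x_{t+2} | s_{t+1} = j)
stacked = np.kron(P, np.eye(3)) @ Ex_cond
```

Each block of `stacked` is the probability-weighted mixture of the per-regime expectation blocks, exactly as in the display above.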
Consider two states \( s_t \in \{0, 1\} \) where the depreciation rates differ, \( d_0 > d_1 \):

\[ \operatorname{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij} \]
If \( s_t = 0 \), \( \mu > 0 \), and \( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} = 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \delta (1 - d_0) \mu_{t+1} \big) \\
&I_t - (k_t - (1 - d_0) k_{t-1}) \\
&\mu_t \big( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \big)
\end{aligned}
\]

If \( s_t = 0 \), \( \mu = 0 \), and \( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \ge 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \delta (1 - d_0) \mu_{t+1} \big) \\
&I_t - (k_t - (1 - d_0) k_{t-1}) \\
&\mu_t \big( k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \big)
\end{aligned}
\]

If \( s_t = 1 \), \( \mu > 0 \), and \( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} = 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \delta (1 - d_1) \mu_{t+1} \big) \\
&I_t - (k_t - (1 - d_1) k_{t-1}) \\
&\mu_t \big( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} \big)
\end{aligned}
\]

If \( s_t = 1 \), \( \mu = 0 \), and \( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} \ge 0 \):

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \big( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \delta (1 - d_1) \mu_{t+1} \big) \\
&I_t - (k_t - (1 - d_1) k_{t-1}) \\
&\mu_t \big( k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} \big)
\end{aligned}
\]
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points; a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system, ℝ^{3N_x + N_ε} → ℝ^{N_x}
input: (L, H_0, κ, {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (``normConvTol'', default 10^{-10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, ..., H_c
Algorithm 1: nestInterp
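The outer loop of nestInterp is an ordinary fixed-point iteration with a norm-based stopping rule at test points. A schematic Python stand-in, where a hypothetical one-dimensional contraction `update` plays the role of doInterp (the paper's operator is Mathematica's NestWhile over decision-rule pairs):

```python
import numpy as np

norm_conv_tol = 1e-10                    # analogue of ``normConvTol''
test_points = np.linspace(0.5, 1.5, 9)   # analogue of the test-point set T

def update(rule):
    # Stand-in for doInterp: maps one proposed rule to the next (a contraction
    # with fixed point rule(x) = 0.5*x; purely illustrative).
    return lambda x: 0.5 * rule(x) + 0.25 * x

rule = lambda x: np.zeros_like(x)        # analogue of the initial H_0 rule
prev = rule(test_points)
for _ in range(200):
    rule = update(rule)
    cur = rule(test_points)
    if np.linalg.norm(cur - prev) < norm_conv_tol:
        break                            # values at test points stopped changing
    prev = cur
```

The stopping rule compares successive rule evaluations at the fixed test points rather than the rules themselves, mirroring the notConvergedAt test above.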
input: (L, H_i, κ, {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)})
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2: Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x_t^p, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x_t^p, z_t), {M(X), x_t - x_t^p})
6: funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x_t^p, L, G, κ, M)
1: Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x_t^p
3: for i = 1; i < κ - 1; i++ do
4:     {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
5: end
6: {(X_{t+1}, Z_{t+1}), ..., (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x_t^p, G)
output: {Z_{t+1}, ..., Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ B x + (I - F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x, ε) = [ B x + (I - F)^{-1} φ ψ_c + ψ_ε ; 0_{N_x} ]
4: H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ-1}})
3: xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1: Numerically sums the F contribution for the series:
2: S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
input: ({C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
2: interpData = genInterpData({C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, ..., C_M, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
output: {γ, G}
Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1: Evaluate the model equations, obeying pre- and post-conditions.
output: x_t or failed
Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 16: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 19: genSolutionPath
Table 12: RBC Approximated Solution Values at Evaluation Points, d=(1,1,1). Columns (c_i, k_i, θ_i), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 13: RBC Approximated Solution Error Approximations, d=(1,1,1). Columns (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 14: RBC Approximated Solution Model Evaluation Points, d=(2,2,2). Columns (k_i, θ_i, ε_i), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 15: RBC Approximated Solution Values at Evaluation Points, d=(2,2,2). Columns (c_i, k_i, θ_i), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2). Columns (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), rows i = 1, ..., 30. (Numeric entries omitted.)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches^{11} (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:
\[
I_t \ge \upsilon I_{ss}
\]
The model is
\[
\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})
\]
\[
c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1}), \qquad \theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}
\]
\[
f(k_t) = k_t^{\alpha}, \qquad u(c) = \frac{c^{1-\eta} - 1}{1-\eta}
\]
The Lagrangian is
\[
\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t} u(c_{t+1})
+ \sum_{t=0}^{\infty} \delta^{t} \lambda_t \bigl( c_t + k_t - ((1-d)k_{t-1} + \theta_t f(k_{t-1})) \bigr)
+ \delta^{t} \mu_t \bigl( (k_t - (1-d)k_{t-1}) - \upsilon I_{ss} \bigr)
\]
so that we can impose
\[
I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}.
\]
The first order conditions become
\[
\frac{1}{c_t^{\eta}} + \lambda_t
\]
\[
\lambda_t - \delta \lambda_{t+1} \theta_{t+1} \alpha k_t^{(\alpha-1)} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1}
\]
^{11} The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.

For example, the Euler equations for the neoclassical growth model can be written as:
If \(\mu_t > 0\) and \((k_t - (1-d)k_{t-1} - \upsilon I_{ss}) = 0\):
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon_t)}\\
&\lambda_t + \mu_t - \bigl( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\,\delta(1-d) + \delta(1-d)\mu_{t+1} \bigr)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t \bigl( k_t - (1-d)k_{t-1} - \upsilon I_{ss} \bigr)
\end{aligned}
\]
If \(\mu_t = 0\) and \((k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \ge 0\):
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon_t)}\\
&\lambda_t + \mu_t - \bigl( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\,\delta(1-d) + \delta(1-d)\mu_{t+1} \bigr)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t \bigl( k_t - (1-d)k_{t-1} - \upsilon I_{ss} \bigr)
\end{aligned}
\]
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1). Columns (k_i, θ_i, ε_i), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 18: Occasionally Binding Constraints Values at Evaluation Points, d=(1,1,1). Columns (c_i, I_i, k_i), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1). Columns (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2). Columns (k_i, θ_i, ε_i), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 21: Occasionally Binding Constraints Values at Evaluation Points, d=(2,2,2). Columns (c_i, I_i, k_i), rows i = 1, ..., 30. (Numeric entries omitted.)
Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2). Columns (ln|ê_{c_i}|, ln|ê_{k_i}|, ln|ê_{θ_i}|), rows i = 1, ..., 30. (Numeric entries omitted.)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Paper No. 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-1311, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. doi: 10.1016/j.jmoneco.2014.08.005.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Paper Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. doi: 10.1016/j.jedc.2014.03.003.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. doi: 10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Paper Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008.

A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution \(x^{*}_t = g(x_{t-1}, \varepsilon_t)\), define the conditional expectations function
\[
G(x) \equiv E\left[ g(x, \varepsilon) \right];
\]
then with
\[
E_t x^{*}_{t+1} = G(g(x_{t-1}, \varepsilon_t))
\]
we have
\[
M(x_{t-1}, x^{*}_t, E_t x^{*}_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).
\]
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding \(z^{*}_t(x_{t-1}, \varepsilon)\):
\[
\{ x^{*}_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^{L} : \| x^{*}_t(x_{t-1}, \varepsilon_t) \|_2 \le X \;\; \forall t > 0 \}
\]
\[
z^{*}_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^{*}_{t-1}(x_{t-1}, \varepsilon_t) + H_{0} x^{*}_{t}(x_{t-1}, \varepsilon_t) + H_{1} x^{*}_{t+1}(x_{t-1}, \varepsilon_t)
\]
Consequently, the exact solution has a representation given by
\[
x^{*}_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_{\varepsilon} \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^{*}_{t+\nu}(x_{t-1}, \varepsilon)
\]
and
\[
E\left[ x^{*}_{t+1}(x_{t-1}, \varepsilon) \right] = B x^{*}_{t} + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, E\left[ z^{*}_{t+1+\nu}(x_{t-1}, \varepsilon) \right] + (I - F)^{-1} \phi \psi_c
\]
with
\[
M(x_{t-1}, x^{*}_t, E_t x^{*}_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).
\]
Now consider a proposed solution for the model, \(x^{p}_t = g^{p}(x_{t-1}, \varepsilon_t)\); define \(G^{p}(x) \equiv E\left[ g^{p}(x, \varepsilon) \right]\) so that
\[
E_t x_{t+1} = G^{p}(g^{p}(x_{t-1}, \varepsilon_t)), \qquad
e^{p}_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^{p}_t, E_t x^{p}_{t+1}, \varepsilon_t).
\]
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding \(z^{p}_t(x_{t-1}, \varepsilon)\):^{12}
\[
\{ x^{p}_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^{L} : \| x^{p}_t(x_{t-1}, \varepsilon_t) \|_2 \le X \;\; \forall t > 0 \}
\]
\[
z^{p}_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^{p}_{t-1}(x_{t-1}, \varepsilon_t) + H_{0} x^{p}_{t}(x_{t-1}, \varepsilon_t) + H_{1} x^{p}_{t+1}(x_{t-1}, \varepsilon_t)
\]
The proposed solution has a representation given by
\[
x^{p}_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_{\varepsilon} \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^{p}_{t+\nu}(x_{t-1}, \varepsilon)
\]
and
\[
E\left[ x^{p}_{t+1}(x_{t-1}, \varepsilon) \right] = B x^{p}_{t} + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^{p}_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c
\]
with
\[
e^{p}_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^{p}_t, E_t x^{p}_{t+1}, \varepsilon_t).
\]
Consequently,
\[
x^{*}_t(x_{t-1}, \varepsilon) - x^{p}_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^{*}_{t+\nu}(x_{t-1}, \varepsilon) - z^{p}_{t+\nu}(x_{t-1}, \varepsilon) \right).
\]
Defining
\[
\Delta z^{p}_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^{*}_{t+\nu}(x_{t-1}, \varepsilon) - z^{p}_{t+\nu}(x_{t-1}, \varepsilon),
\]
we have
\[
x^{*}_t(x_{t-1}, \varepsilon) - x^{p}_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^{p}_{t+\nu}(x_{t-1}, \varepsilon_t)
\]
\[
\left\| x^{*}_t(x_{t-1}, \varepsilon) - x^{p}_t(x_{t-1}, \varepsilon) \right\|_{\infty} \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \, \left\| \Delta z^{p}_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_{\infty}
\]
By bounding the largest deviation in the paths for the \(\Delta z^{p}_t\), we can bound the largest difference in \(x_t\).^{13} The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

^{12} The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

^{13} Since the future values are probability weighted averages of the \(\Delta z^{p}_t\) values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for \(\Delta z^{p}_{t+k}\) are pessimistic bounds for the errors from the conditional expectations path.
\[
\| \Delta z_t \| \le \max_{x_{-}, \varepsilon} \left\| \phi \, M\left( x_{-},\; g^{p}(x_{-}, \varepsilon),\; G^{p}(g^{p}(x_{-}, \varepsilon)),\; \varepsilon \right) \right\|_2
\]
\[
\left\| x^{*}_t(x_{t-1}, \varepsilon) - x^{p}_t(x_{t-1}, \varepsilon) \right\|_{\infty} \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi \, M\left( x_{-},\; g^{p}(x_{-}, \varepsilon),\; G^{p}(g^{p}(x_{-}, \varepsilon)),\; \varepsilon \right) \right\|_2
\]
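For a scalar linear reference model the bound collapses to a geometric series, which makes the role of \((I - F)^{-1}\) concrete. A small numerical sketch with illustrative parameter values (not taken from the paper's models):

```python
# Scalar illustration of the error bound: with
#   x_t = B x_{t-1} + sum_nu F^nu * phi * z_{t+nu},
# a uniform bound |Delta z| <= zbar on the z-path discrepancies implies
#   |x*_t - x^p_t| <= phi / (1 - F) * zbar
# by summing the geometric series. Parameters are illustrative only.

F, phi = 0.5, 1.0
zbar = 1e-3

# Worst-case discrepancy path: every Delta z term at the bound.
series_bound = sum(F ** nu * phi * zbar for nu in range(200))
closed_form = phi / (1.0 - F) * zbar

print(abs(series_bound - closed_form) < 1e-12)  # True
```

The truncated sum agrees with the closed form because the omitted tail is of order \(F^{200}\), which is negligible for \(|F| < 1\).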
Writing the z discrepancies in terms of the structural matrices,
\[
z^{*}_t = H_{-} x^{*}_{t-1} + H_{0} x^{*}_{t} + H_{+} x^{*}_{t+1}, \qquad
z^{p}_t = H_{-} x^{p}_{t-1} + H_{0} x^{p}_{t} + H_{+} x^{p}_{t+1},
\]
\[
0 = M(x_{t-1}, x^{*}_t, x^{*}_{t+1}, \varepsilon_t), \qquad
e^{p}_t(x_{t-1}, \varepsilon) = M(x^{p}_{t-1}, x^{p}_t, x^{p}_{t+1}, \varepsilon_t),
\]
we have
\[
\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta z_t \right\|_2 \right)^2 =
\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 \left( x^{*}_t - x^{p}_t \right) + H_{+} \left( x^{*}_{t+1} - x^{p}_{t+1} \right) \right) \right\|_2 \right)^2
\]
with
\[
M(x_{t-1}, x^{*}_t, x^{*}_{t+1}, \varepsilon_t) = 0.
\]
A first-order expansion of the model equations gives
\[
\Delta e^{p}_t(x_{t-1}, \varepsilon) \approx
\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \left( x^{*}_t - x^{p}_t \right)
+ \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \left( x^{*}_{t+1} - x^{p}_{t+1} \right).
\]
When \(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\) is non-singular we can write
\[
\left( x^{*}_t - x^{p}_t \right) \approx
\left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1}
\left( \Delta e^{p}_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \left( x^{*}_{t+1} - x^{p}_{t+1} \right) \right).
\]
Collecting results,
\[
\left( \left\| \phi \, \Delta z_t \right\|_2 \right)^2 =
\left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1}
\left( \Delta e^{p}_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} \left( x^{*}_{t+1} - x^{p}_{t+1} \right) \right)
+ H_{+} \left( x^{*}_{t+1} - x^{p}_{t+1} \right) \right) \right\|_2 \right)^2
\]
\[
= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^{p}_t(x_{t-1}, \varepsilon)
- \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_{+} \right) \left( x^{*}_{t+1} - x^{p}_{t+1} \right) \right) \right\|_2 \right)^2.
\]
Approximating the derivatives using the H matrices,
\[
\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad
\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+},
\]
produces
\[
\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta e^{p}_t(x_{t-1}, \varepsilon) \right\|_2 \right)^2.
\]
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial \((x_{t-1}, \varepsilon_t)\). Consider the case when the transition probabilities \(p_{ij}\) are constant and do not depend on \(x_{t-1}, \varepsilon_t, x_t\):
\[
E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} \, E_t\bigl( x_{t+1}(x_t) \mid s_{t+1} = j \bigr)
\]
\[
E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} \, E_t\bigl( x_{t+2}(x_{t+1}) \mid s_{t+2} = j \bigr)
\]
If we know the current state, we can use the known conditional expectation function for the state:
\[
\begin{bmatrix}
E_t\bigl( x_{t+1}(x_t) \mid s_t = 1 \bigr) \\
\vdots \\
E_t\bigl( x_{t+1}(x_t) \mid s_t = N \bigr)
\end{bmatrix}
\]
For the next state, we can compute expectations if the errors are independent:
\[
\begin{bmatrix}
E_t\bigl( x_{t+2}(x_t) \mid s_t = 1 \bigr) \\
\vdots \\
E_t\bigl( x_{t+2}(x_t) \mid s_t = N \bigr)
\end{bmatrix}
= (P \otimes I)
\begin{bmatrix}
E_t\bigl( x_{t+2}(x_{t+1}) \mid s_{t+1} = 1 \bigr) \\
\vdots \\
E_t\bigl( x_{t+2}(x_{t+1}) \mid s_{t+1} = N \bigr)
\end{bmatrix}
\]
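The stacked update above can be sketched numerically; the two-state transition matrix and conditional expectation values below are illustrative only.

```python
# Sketch of the stacked expectation update: given per-state conditional
# expectation vectors E[x_{t+2} | s_{t+1} = j], premultiplying the
# stacked vector by (P kron I) produces the vectors conditioned on the
# current state s_t = i, weighting by transition probabilities p_ij.

def kron(A, B):
    """Kronecker product of two matrices given as nested lists."""
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

P = [[0.9, 0.1],
     [0.2, 0.8]]                 # p_ij = Prob(s_{t+1} = j | s_t = i)
I2 = [[1.0, 0.0], [0.0, 1.0]]

# Stacked E[x_{t+2} | s_{t+1} = j] for j = 1, 2 (x has 2 components).
stacked_next = [1.0, 2.0,        # conditional on s_{t+1} = 1
                3.0, 4.0]        # conditional on s_{t+1} = 2

stacked_now = matvec(kron(P, I2), stacked_next)
print([round(v, 10) for v in stacked_now])  # [1.2, 2.2, 2.6, 3.6]
```

Each block of the result is the probability-weighted mixture of the state-conditional expectations, exactly as in the displayed equation.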
Consider two states \(s_t \in \{0, 1\}\), where the depreciation rates differ, \(d_0 > d_1\), and
\[
\operatorname{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}.
\]
If \(s_t = 0\), \(\mu_t > 0\), and \((k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) = 0\):
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon_t)}\\
&\lambda_t + \mu_t - \bigl( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_0) + \delta(1-d_0)\mu_{t+1} \bigr)\\
&I_t - (k_t - (1-d_0)k_{t-1})\\
&\mu_t \bigl( k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \bigr)
\end{aligned}
\]
If \(s_t = 0\), \(\mu_t = 0\), and \((k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \ge 0\):
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon_t)}\\
&\lambda_t + \mu_t - \bigl( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_0) + \delta(1-d_0)\mu_{t+1} \bigr)\\
&I_t - (k_t - (1-d_0)k_{t-1})\\
&\mu_t \bigl( k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \bigr)
\end{aligned}
\]
If \(s_t = 1\), \(\mu_t > 0\), and \((k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) = 0\):
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon_t)}\\
&\lambda_t + \mu_t - \bigl( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_1) + \delta(1-d_1)\mu_{t+1} \bigr)\\
&I_t - (k_t - (1-d_1)k_{t-1})\\
&\mu_t \bigl( k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \bigr)
\end{aligned}
\]
If \(s_t = 1\), \(\mu_t = 0\), and \((k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \ge 0\):
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{(\rho \ln(\theta_{t-1}) + \varepsilon_t)}\\
&\lambda_t + \mu_t - \bigl( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_1) + \delta(1-d_1)\mu_{t+1} \bigr)\\
&I_t - (k_t - (1-d_1)k_{t-1})\\
&\mu_t \bigl( k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \bigr)
\end{aligned}
\]
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points; a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}, the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})}, the decision rule conditional expectations components
H ≡ {γ, G}: collects decision rule information
{C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system ℝ^{3N_x + N_ε} → ℝ^{N_x}
input: (L, H_0, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option ("normConvTol" -> 10^{-10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, ..., H_c

Algorithm 1: nestInterp
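Algorithm 1 is essentially a NestWhile loop: apply an update map to the proposed rule until its values at the test points T stop changing by more than the tolerance. A generic Python sketch of that control flow follows; the update map here is a toy contraction chosen only to exercise the loop, not the paper's doInterp step.

```python
# Generic sketch of the nestInterp control flow: iterate an update map
# on a proposed rule until its values at fixed test points change by
# less than a tolerance (the paper's Mathematica prototype uses
# NestWhile with "normConvTol" -> 1e-10).

import math

def nest_while(update, h0, test_points, tol=1e-10, max_iter=1000):
    h = h0
    for _ in range(max_iter):
        h_next = update(h)
        # Convergence test at the evaluation points (the role of T).
        gap = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if gap < tol:
            return h
    raise RuntimeError("no convergence within max_iter")

# Toy update: damped iteration toward the fixed point g(x) = cos(g(x)).
update = lambda h: (lambda x, h=h: 0.5 * (h(x) + math.cos(h(x))))
g = nest_while(update, lambda x: x, test_points=[0.0, 0.5, 1.0])
print(all(abs(g(x) - math.cos(g(x))) < 1e-8 for x in [0.0, 0.5, 1.0]))  # True
```

In the paper's setting the "rule" is the pair of interpolating functions H = {γ, G} and the update is one doInterp pass.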
input: (L, H_i, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2: Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6: funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
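Each genFindRootWorker instance solves the model system for x_t given (x_{t-1}, ε_t); the paper's prototype delegates this step to Mathematica's FindRoot. A scalar sketch with a hypothetical residual and a bisection solver:

```python
# Sketch of a genFindRootWorker-style inner solve: given (x_{t-1},
# eps_t), find x_t solving a scalar model residual
# M(x_{t-1}, eps, x_t) = 0 by bisection. The residual below is a
# hypothetical stand-in chosen so the root is known in closed form.

def find_root(resid, lo, hi, tol=1e-12):
    """Bisection: assumes resid(lo) and resid(hi) bracket a root."""
    flo = resid(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = resid(mid)
        if abs(hi - lo) < tol:
            return mid
        if (flo < 0) == (fmid < 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def worker(x_prev, eps):
    # Hypothetical residual: x_t^3 + x_t - (x_prev + eps) = 0.
    resid = lambda x: x ** 3 + x - (x_prev + eps)
    return find_root(resid, -10.0, 10.0)

x_t = worker(x_prev=2.0, eps=0.0)
print(abs(x_t ** 3 + x_t - 2.0) < 1e-9)  # True
```

Because each (x_{t-1}, ε_t) solve is independent, these inner root finds are what the paper parallelizes across collocation points.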
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1; i < κ - 1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
5: end
output: {Z_{t+1}, ..., Z_{t+κ-1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [B x + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t ; 0_{N_x}]
3: G(x) = [B x + (I - F)^{-1} φ ψ_c ; 0_{N_x}]
4: H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
input (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output fullVec
Algorithm 7 genLilXkZkFunc
input (L, {x_{t−1}, ε_t, z_t}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2 x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output x_t
Algorithm 8 genXtOfXtm1
input (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2 x_{t+1} = B x_t + (I − F)^{−1} φ ψ_c + φ ψ_z z_{t+1} + fCon
output x_{t+1}
Algorithm 9 genXtp1OfXt
input (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series
2 S = Σ_{ν=1}^{κ−1} F^{ν−1} φ ψ_z Z_{t+ν}
output S
Algorithm 10 fSumC
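The truncated series sum in fSumC is easy to sketch numerically. Below is a minimal NumPy illustration (function and variable names are my own, not taken from the paper's Mathematica prototype): it accumulates the F-weighted contribution of the κ − 1 future conditional Z vectors.

```python
import numpy as np

def f_sum_c(F, phi, Zs):
    """Accumulate S = sum_{nu=1}^{kappa-1} F^(nu-1) @ phi @ Z_{t+nu}.

    Zs holds the kappa-1 future conditional Z vectors; an empty list
    (the kappa = 1 case) returns a zero contribution."""
    n = F.shape[0]
    S = np.zeros(n)
    Fpow = np.eye(n)        # F^0
    for Z in Zs:
        S += Fpow @ phi @ Z
        Fpow = Fpow @ F     # advance to the next power of F
    return S

# tiny 1x1 example: F = 0.5, phi = 1, Z path = [2, 4]
F = np.array([[0.5]])
phi = np.array([[1.0]])
Zs = [np.array([2.0]), np.array([4.0])]
S = f_sum_c(F, phi, Zs)     # 1*1*2 + 0.5*1*4 = 4
```

Because F has eigenvalues inside the unit circle for a saddle-point-stable linear reference model, the omitted tail of the series shrinks geometrically, which is what justifies truncating at κ − 1 terms.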
input ({C_1 … C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1 … C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
{γ, G} = interpDataToFunc(interpData, S)
output {γ, G}
Algorithm 11 makeInterpFuncs
input ({C_1 … C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}, S)
1 Solves the model equations at the collocation points, producing the interpolation data.
output interpData
Algorithm 12 genInterpData
input (C, (x_{t−1}, ε_t))
1 Evaluate the model equations obeying pre- and post-conditions
output x_t or failed
Algorithm 13 evaluateTriple
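A triple can be read as (pre-condition, solver, post-condition). The sketch below shows the gate logic in schematic Python (the names and the toy model are hypothetical; the paper's prototype is Mathematica): a candidate branch is evaluated only when its pre-condition holds, and its root is accepted only when the post-condition validates it, otherwise the evaluation reports failure and the outer selection function tries the next branch.

```python
def evaluate_triple(triple, x_prev, eps):
    """Return (True, x_t) when the triple's gates accept, else (False, None)."""
    pre, solve, post = triple
    if not pre(x_prev, eps):          # Boolean gate: branch inapplicable here
        return (False, None)
    x_t = solve(x_prev, eps)          # unique x_t for this equation branch
    if not post(x_prev, eps, x_t):    # verify e.g. a binding/slack condition
        return (False, None)
    return (True, x_t)

def select_solution(triples, x_prev, eps):
    """Outer model-solution selection: the first accepting triple wins."""
    for triple in triples:
        ok, x_t = evaluate_triple(triple, x_prev, eps)
        if ok:
            return x_t
    raise RuntimeError("failed: no branch accepted")

# toy model x_t = max(x_prev + eps, 0) expressed as two gated branches
interior = (lambda x, e: True,
            lambda x, e: x + e,
            lambda x, e, xt: xt >= 0)        # slack branch valid when xt >= 0
bound    = (lambda x, e: True,
            lambda x, e: 0.0,
            lambda x, e, xt: x + e <= 0)     # binding branch otherwise
xt = select_solution([interior, bound], 1.0, -3.0)   # binding branch: 0.0
```

This mirrors how occasionally binding constraints are handled: each regime of the model is one triple, and the gates encode the complementarity conditions.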
input (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output {γ, G}
Algorithm 14 interpDataToFunc
input (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output {xPts, PsiMat, polys, intPolys}
Algorithm 15 smolyakInterpolationPrep
input (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output {xPts, PsiMat, polys, intPolys}
Algorithm 16 smolyakInterpolationPrep
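The full anisotropic Smolyak construction is involved; as a one-dimensional analogue (my own illustration, not the paper's routine), interpolation prep amounts to three steps: choose collocation points, evaluate the polynomial basis at those points to get the Ψ matrix, and record each basis function's integral so expectations can be taken later.

```python
import numpy as np

def cheb_interp_prep(n, lo, hi):
    """1-D analogue of interpolation prep: collocation points on [lo, hi],
    the Psi matrix of Chebyshev basis values at those points, and each
    basis polynomial's integral over [-1, 1]."""
    # Chebyshev-Gauss nodes on [-1, 1], mapped to [lo, hi]
    k = np.arange(n)
    z = np.cos((2 * k + 1) * np.pi / (2 * n))
    x_pts = lo + (z + 1) * (hi - lo) / 2
    # Psi[i, j] = T_j evaluated at node i
    psi = np.polynomial.chebyshev.chebvander(z, n - 1)
    # integral of T_j over [-1, 1]: 0 for odd j, 2/(1 - j^2) for even j
    j = np.arange(n)
    int_polys = np.where(j % 2 == 0, 2.0 / (1.0 - j ** 2 + (j % 2)), 0.0)
    return x_pts, psi, int_polys

x_pts, psi, int_polys = cheb_interp_prep(4, 0.0, 1.0)
coeffs = np.linalg.solve(psi, x_pts ** 2)   # fit f(x) = x^2 on the nodes
```

Fitting is then a single linear solve against Ψ, exactly as interpDataToFunc consumes the matrix of function evaluations; the Smolyak version replaces the tensor basis with a sparse-grid subset to tame the curse of dimensionality.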
input (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output {xPts, PsiMat, polys, intPolys}
Algorithm 17 genSolutionErrXtEps

input (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output {xPts, PsiMat, polys, intPolys}
Algorithm 18 genSolutionErrXtEpsZero

input (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output {xPts, PsiMat, polys, intPolys}
Algorithm 19 genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks''
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution η = 1, U(c) = Log(c)
- Approximating an Unknown Solution η = 3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)
minus438747 minus5 minus501321minus568045 minus5805 minus489809minus510224 minus533003 minus66064minus634158 minus662031 minus787535minus560248 minus564831 minus96263minus57969 minus618996 minus511512minus492963 minus507008 minus683086minus588332 minus588269 minus501359minus663272 minus615415 minus668716minus569579 minus556877 minus776351minus6081 minus572439 minus516437minus482324 minus480882 minus705272minus686202 minus574095 minus525257minus647222 minus625808 minus900805minus596599 minus666276 minus544834minus866584 minus499997 minus490498minus54139 minus550185 minus733409minus642036 minus703866 minus708005minus413978 minus480089 minus478296minus606664 minus639354 minus5377minus486241 minus51415 minus606258minus733554 minus638356 minus611469minus502079 minus537422 minus604755minus643212 minus799418 minus657026minus617638 minus642884 minus531385minus675784 minus479589 minus456474minus593264 minus592418 minus103455minus592431 minus529301 minus571515minus462067 minus50846 minus546376minus522287 minus541884 minus446492
Table 13 RBC Approximated Solution Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
0154714 0909409 005835160238295 11837 minus004419350181916 0956081 002416990208439 105216 minus001855730195384 103868 004980620220186 114073 minus00527390163808 0913113 001562450246242 119138 minus003564810207785 108971 003271530182983 0987658 minus0001466390234987 113638 006689710153414 0855121 0002806320221646 11239 007116980192257 103778 minus003137540244262 11865 00369880161828 0908228 minus004846630177563 0994718 001989720206952 108084 minus001428450150574 0853223 005407890232434 113348 minus003992080165188 0931842 002844260244182 122206 minus0005739110187804 0994441 006262430216046 108454 minus0022830231781 117103 004553350152787 0880818 minus005701170204792 102954 001135180177553 0935951 minus002496630164075 0921007 004339710243068 121122 minus00591481
Table 14 RBC Approximated Solution Model Evaluation Points d=(222)
k1 θ1 ε1
k30 θ30 ε30
0274683 0220094 0968634040339 0266729 1123010296394 023513 09816720335137 0250659 1030190358182 0247172 1089650365685 0257823 1075020263846 022194 09317160417917 027018 11396403831 0253685 112112
0299405 023603 09868240451246 026553 120725023049 0209622 08642610437392 0260092 1199780312821 0241638 1003860465984 026895 1220730236754 021458 08694370309625 0235153 1014980350497 0251486 1061370246402 0212757 09078110376735 0263313 1082320277956 0225197 09621140454981 0269165 1202940337604 02424 10590352851 0255287 1055770454299 0264058 1215950219639 0206105 08373040337981 0249525 103978026425 0227363 09159027897 0224894 09658190410798 0268804 113077
Table 15 RBC Approximated Solution Values at Evaluation Points d=(222)
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)
minus534255 minus533875 minus118565minus615664 minus615255 minus123108minus541646 minus541585 minus138565minus564275 minus564369 minus181473minus59222 minus592211 minus160029minus587248 minus586293 minus120622minus520976 minus520965 minus138815minus626734 minus627376 minus133781minus611431 minus611424 minus135244minus543751 minus543768 minus143313minus684735 minus683506 minus134062minus496411 minus496311 minus157406minus674538 minus674996 minus130128minus551523 minus551584 minus146129minus701914 minus701377 minus130087minus498907 minus49888 minus128895minus554918 minus55502 minus142886minus578761 minus57866 minus136307minus510822 minus511197 minus164254minus591312 minus591964 minus15971minus532484 minus532492 minus129666minus681071 minus680294 minus126755minus575943 minus575928 minus138696minus576735 minus576907 minus143413minus693921 minus694492 minus122327minus487233 minus487293 minus130072minus568297 minus568316 minus203557minus516688 minus516342 minus170775minus533829 minus533817 minus126149minus621494 minus620121 minus121051
Table 16 RBC Approximated Solution Error Approximationsd=(222)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss
max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1−d) k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1) / (1−η)

L = u(c_t) + E_t [ Σ_{t=0}^∞ δ^t u(c_{t+1}) + Σ_{t=0}^∞ ( δ^t λ_t ( c_t + k_t − ((1−d) k_{t−1} + θ_t f(k_{t−1})) ) + δ^t μ_t ( (k_t − (1−d) k_{t−1}) − υ I_ss ) ) ]

so that we can impose

I_t = (k_t − (1−d) k_{t−1}) ≥ υ I_ss

The first order conditions become

1/c_t^η + λ_t
λ_t − δ λ_{t+1} θ_{t+1} α k_t^{α−1} − δ(1−d) λ_{t+1} + μ_t − δ(1−d) μ_{t+1}
For example, the Euler equations for the neoclassical growth model can be written as:

11 The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of ``anticipated shocks'' but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1−d) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d) + δ(1−d) μ_{t+1})
I_t − (k_t − (1−d) k_{t−1})
μ_t (k_t − (1−d) k_{t−1} − υ I_ss)

if (μ_t = 0 and (k_t − (1−d) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d) + δ(1−d) μ_{t+1})
I_t − (k_t − (1−d) k_{t−1})
μ_t (k_t − (1−d) k_{t−1} − υ I_ss)
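The two branches form a complementarity problem: either the multiplier is positive and investment sits at its floor, or the multiplier is zero and investment is interior. A standard way to organize the computation is guess-and-verify over the branches; the sketch below does this for a one-period toy problem of my own construction (it is not the paper's model or code, just the branch logic).

```python
def solve_obc(y, B, I_min):
    """Guess-and-verify over the two KKT branches of
    max log(y - I) + B*log(I)  subject to  I >= I_min."""
    # slack branch: mu = 0, interior FOC  B/I = 1/(y - I)  =>  I = B*y/(1+B)
    I = B * y / (1.0 + B)
    if I >= I_min:
        return I, 0.0                 # constraint slack, multiplier zero
    # binding branch: I = I_min, back out the multiplier from the FOC
    I = I_min
    mu = 1.0 / (y - I) - B / I        # mu > 0 confirms the floor binds
    return I, mu

I, mu = solve_obc(y=1.0, B=0.5, I_min=0.5)  # interior guess 1/3 violates floor
```

The gate conditions in the triples above play exactly this role: the post-condition of the slack branch checks I ≥ υ I_ss, and the post-condition of the binding branch checks μ ≥ 0.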
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.
k1 θ1 ε1
k30 θ30 ε30
326643 0944454 minus00261085412098 106801 00270199367835 0940854 minus000839901392114 0989356 00137378369543 100026 minus00216811388415 105817 00314473344151 0931012 minus000397164426001 106084 00225926378979 102922 minus00128264360107 0971308 00048831420171 102562 minus00305358341024 0891085 000266941396698 103809 minus00327495363406 100526 0020378942347 105957 minus00150401341619 0929745 0029233634676 0988852 minus000618532380052 102168 00115241335789 089452 minus00238948415837 102748 00248063341102 0947657 minus00106127412137 10963 000709678367874 096914 minus00283222397561 100824 00159515402702 106734 minus00194674331666 0918703 003366139173 0973011 minus000175796365197 0928429 00170584342219 0938626 minus00183606413254 108727 00347678
Table 17 Occasionally Binding Constraints Model Evaluation Points d=(111)
c1 I1 k1
c30 I30 k30
103662 0379903 334624136955 0431143 413607113338 0361691 369353125263 0381718 392139118795 0376988 371505132486 0433045 392962108991 0364941 348842137645 0426962 425615124202 039202 380933117598 0373255 363206126718 039439 41772103482 03675 346926125075 0392683 396566122878 0398246 368118133116 0409411 421636111619 0384274 348551115026 038833 352625126672 0394799 3822760997682 0362814 341768133254 040944 4153571095 0369696 346366
136855 0443297 414433114269 0364517 369245128393 0389658 397475130274 041229 403407108632 0391331 340604121329 0372403 391112113942 0372397 368273107662 0365006 347022138964 0453375 416566
Table 18 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(111)
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)
minus431395 minus455496 minus455496minus449983 minus413333 minus413333minus55352 minus524857 minus524857minus58198 minus584969 minus584969minus460287 minus460328 minus460328minus441983 minus410467 minus410467minus462802 minus455181 minus455181minus526804 minus474828 minus474828minus843803 minus815898 minus815898minus535434 minus528501 minus528501minus385154 minus366516 minus366516minus438957 minus432099 minus432099minus447738 minus423689 minus423689minus501928 minus504091 minus504091minus551293 minus494452 minus494452minus383164 minus364992 minus364992minus453368 minus451412 minus451412minus474133 minus466482 minus466482minus73604 minus500693 minus500693minus771361 minus596299 minus596299minus471182 minus460582 minus460582minus481987 minus452925 minus452925minus636037 minus573385 minus573385minus604304 minus580405 minus580405minus658347 minus558301 minus558301minus345988 minus327868 minus327868minus415596 minus415415 minus415415minus42487 minus433841 minus433841minus503715 minus472138 minus472138minus438481 minus389053 minus389053
Table 19 Occasionally Binding Constraints Error Approximations d=(111)
k1 θ1 ε1
k30 θ30 ε30
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
k1 θ1 ε1
k30 θ30 ε30
0996234 0372233 317711135273 0449889 408774108239 037197 359408122794 0381138 383657115723 0375819 360041129529 0457947 385887103658 0371604 335678137221 0431977 421213121472 0395466 370822114148 037158 350801124362 0394261 4124250972304 0376186 333969122692 0392432 388208119298 0407354 356868131345 0414672 416956106882 0383074 33429811142 0387566 338473123929 0401702 3727190926116 0382809 329255132171 0410856 409657104696 0373008 332323135156 046278 409399109896 0370843 358631126258 039151 38973128036 0420156 39632104154 0382172 324423118102 0373737 382935109669 0372006 357055102297 0373053 333681138105 0472609 411735
Table 21 Occasionally Binding Constraints Values at Evaluation Points for OccasionallyBinding Constraints d=(222)
ln(|ê_{c,1}|) ln(|ê_{k,1}|) ln(|ê_{θ,1}|)
⋮
ln(|ê_{c,30}|) ln(|ê_{k,30}|) ln(|ê_{θ,30}|)
minus43713 minus437518 minus437518minus480064 minus479951 minus479951minus451635 minus451745 minus451745minus497684 minus497675 minus497675minus476238 minus476248 minus476248minus520213 minus519772 minus519772minus442504 minus442526 minus442526minus474432 minus474585 minus474585minus581449 minus581175 minus581175minus544103 minus54409 minus54409minus341118 minus341245 minus341245minus235772 minus235772 minus235772minus410566 minus410512 minus410512minus755599 minus755774 minus755774minus476462 minus476603 minus476603minus388786 minus388797 minus388797minus430233 minus430326 minus430326minus500949 minus500948 minus500948minus204364 minus204336 minus204336minus559945 minus560358 minus560358minus512234 minus512392 minus512392minus573503 minus57328 minus57328minus480694 minus480739 minus480739minus706764 minus706949 minus706949minus520027 minus5196 minus5196minus34838 minus348367 minus348367minus361222 minus361225 minus361225minus437205 minus43713 minus43713minus480664 minus480713 minus480713minus463986 minus463587 minus463587
Table 22 Occasionally Binding Constraints Error Approximations d=(222)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual ``Euler equation'' model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report t0207, National Bureau of Economic Research, Inc, February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke. Projections parameterized expectations algorithms (matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A. Peralta-Alva and Manuel Santos. Analysis of numerical errors. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x

Manuel Santos. Accuracy of numerical solutions using the euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

then with

E_t x*_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀ (x_{t−1}, ε_t)
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t−1}, ε):

{ x*_t(x_{t−1}, ε_t) ∈ R^L : ‖x*_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀ t > 0 }

z*_t(x_{t−1}, ε_t) ≡ H_{−1} x*_{t−1}(x_{t−1}, ε_t) + H_0 x*_t(x_{t−1}, ε_t) + H_1 x*_{t+1}(x_{t−1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t−1}, ε)

and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φ ψ_c

with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0  ∀ (x_{t−1}, ε_t)
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)] so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))
e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)

By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t−1}, ε):12

{ x^p_t(x_{t−1}, ε_t) ∈ R^L : ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X̄  ∀ t > 0 }

z^p_t(x_{t−1}, ε_t) ≡ H_{−1} x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_1 x^p_{t+1}(x_{t−1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φ ψ_c

with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the two representations gives

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ ( z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε) )

Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ ( z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε) )

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞

By bounding the largest deviation in the paths for the Δz^p_t we can bound the largest difference in x*_t.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
13 Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz*_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
z*_t = H_− x*_{t−1} + H_0 x*_t + H_+ x*_{t+1}
z^p_t = H_− x^p_{t−1} + H_0 x^p_t + H_+ x^p_{t+1}

0 = M(x_{t−1}, x*_t, x*_{t+1}, ε_t)
e^p_t(x_{t−1}, ε) = M(x^p_{t−1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t−1}, ε_t} ‖φ Δz_t‖_2^2 = max_{x_{t−1}, ε_t} ‖φ ( H_0 (x*_t − x^p_t) + H_+ (x*_{t+1} − x^p_{t+1}) )‖_2^2

with

M(x_{t−1}, x*_t, x*_{t+1}, ε_t) = 0

Δe^p_t(x_{t−1}, ε) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t) (x*_t − x^p_t) + (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1}) (x*_{t+1} − x^p_{t+1})

When ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{−1} ( Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1}) (x*_{t+1} − x^p_{t+1}) )

Collecting results,

‖φ Δz_t‖_2^2 = ‖φ ( H_0 (∂M/∂x_t)^{−1} ( Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1}) (x*_{t+1} − x^p_{t+1}) ) + H_+ (x*_{t+1} − x^p_{t+1}) )‖_2^2
 = ‖φ ( H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) − ( H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_+ ) (x*_{t+1} − x^p_{t+1}) )‖_2^2

Approximating the derivatives using the H matrices,

∂M/∂x_t ≈ H_0,  ∂M/∂x_{t+1} ≈ H_+

so that the x_{t+1} terms cancel, produces

max_{x_{t−1}, ε_t} ‖φ Δe^p_t(x_{t−1}, ε)‖_2
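The error formula propagates the model-equation residual through the linear reference model: one period contributes φ e^p, and accumulating the geometric series gives the (I − F)^{−1} φ e^p bound. A schematic NumPy version of that mechanics (the matrices and the residual below are made-up stand-ins, purely illustrative):

```python
import numpy as np

def approx_rule_error(phi, F, resid):
    """Map a model residual at one (x_{t-1}, eps) point into an approximate
    decision-rule error: the one-period term phi @ resid, and the series
    accumulation (I - F)^{-1} phi @ resid."""
    n = F.shape[0]
    one_period = phi @ resid
    accumulated = np.linalg.solve(np.eye(n) - F, phi @ resid)
    return one_period, accumulated

# stand-in linear-reference matrices and a residual vector
phi = np.eye(2)
F = np.array([[0.5, 0.0],
              [0.0, 0.25]])
resid = np.array([0.01, -0.02])
e1, etot = approx_rule_error(phi, F, resid)
```

In practice one would take the maximum of such norms over the evaluation points, as the derivation's max over (x_{t−1}, ε_t) prescribes.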
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial (x_{t−1}, ε_t). Consider the case when the transition probabilities p_ij are constant and do not depend on (x_{t−1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^N p_ij E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^N p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for the state:

[ E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N) ]′

For the next state we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1), …, E_t(x_{t+2}(x_t) | s_t = N) ]′ = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1), …, E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]′

Consider two states s_t ∈ {0, 1} where the depreciation rates are different, d_0 > d_1:

Prob(s_t = j | s_{t−1} = i) = p_ij
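With constant transition probabilities, the mixing step is just a probability-weighted combination of the per-regime conditional expectation functions. A small sketch (the two-regime rules below are hypothetical stand-ins, not the model's actual expectation functions):

```python
import numpy as np

def mix_expectations(P, cond_exp_funcs, x_t):
    """E_t[x_{t+1} | s_t = i] = sum_j P[i, j] * E(x_{t+1}(x_t) | s_{t+1} = j)."""
    vals = np.array([f(x_t) for f in cond_exp_funcs])  # one entry per regime j
    return P @ vals

# two regimes (e.g. different depreciation rates) with stand-in rules
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
exp_regime0 = lambda x: 0.5 * x      # stand-in for E(x_{t+1}(x_t) | s = 0)
exp_regime1 = lambda x: 0.7 * x      # stand-in for E(x_{t+1}(x_t) | s = 1)
Ex = mix_expectations(P, [exp_regime0, exp_regime1], 2.0)
# from regime 0: 0.9*1.0 + 0.1*1.4 = 1.04; from regime 1: 0.2*1.0 + 0.8*1.4 = 1.32
```

Iterating this mixing forward, as in the (P ⊗ I) expression above, builds the multi-step conditional expectations paths the series formula needs.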
if (s_t = 0 and μ_t > 0 and (k_t − (1−d_0) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_0) + δ(1−d_0) μ_{t+1})
I_t − (k_t − (1−d_0) k_{t−1})
μ_t (k_t − (1−d_0) k_{t−1} − υ I_ss)

if (s_t = 0 and μ_t = 0 and (k_t − (1−d_0) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_0) + δ(1−d_0) μ_{t+1})
I_t − (k_t − (1−d_0) k_{t−1})
μ_t (k_t − (1−d_0) k_{t−1} − υ I_ss)

if (s_t = 1 and μ_t > 0 and (k_t − (1−d_1) k_{t−1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_1) + δ(1−d_1) μ_{t+1})
I_t − (k_t − (1−d_1) k_{t−1})
μ_t (k_t − (1−d_1) k_{t−1} − υ I_ss)

if (s_t = 1 and μ_t = 0 and (k_t − (1−d_1) k_{t−1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k_{t−1}^α
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1−d_1) + δ(1−d_1) μ_{t+1})
I_t − (k_t − (1−d_1) k_{t−1})
μ_t (k_t − (1−d_1) k_{t−1} − υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1 … C_M, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c)}: the equation system R^{3N_x + N_ε} → R^{N_x}
input: $(L, H_0, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S, T)$
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default ``normConvTol'' $\rightarrow 10^{-10}$).
2 NestWhile(Function($v$, doInterp($L, v, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S$)), $H_0$, notConvergedAt($v, T$))
output: $\{H_0, H_1, \ldots, H_c\}$
Algorithm 1: nestInterp
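The NestWhile iteration of Algorithm 1 can be sketched in Python. This is a toy stand-in, not the paper's Mathematica prototype: doInterp is replaced by a simple contraction on two hypothetical rule coefficients, and convergence is checked at hypothetical test points with a tolerance mirroring the default normConvTol.

```python
# Fixed-point iteration h -> step(h), stopping when the rules evaluated at the
# test points stop changing (sup-norm below TOL), as in Algorithm 1.
def nest_while(step, h0, not_converged, max_iter=500):
    h = h0
    for _ in range(max_iter):
        h_next = step(h)
        if not not_converged(h, h_next):
            return h_next
        h = h_next
    raise RuntimeError("no convergence within max_iter")

TEST_POINTS = [0.0, 0.5, 1.0]   # stand-in for the evaluation set T
TOL = 1e-10                     # mirrors the default normConvTol

def rule_val(coeffs, t):
    a, b = coeffs
    return a + b * t            # a hypothetical linear decision rule

def do_interp(coeffs):
    # toy contraction with fixed point (0.5, 0.2); the real doInterp
    # re-solves the model at collocation points and re-interpolates
    a, b = coeffs
    return (0.5 * a + 0.25, 0.5 * b + 0.1)

def not_converged_at(old, new):
    return max(abs(rule_val(old, t) - rule_val(new, t))
               for t in TEST_POINTS) > TOL

h_star = nest_while(do_interp, (0.0, 0.0), not_converged_at)
```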
input: $(L, H_i, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ from one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs($L, H_i, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}$)
3 $H_{i+1}$ = makeInterpFuncs(theFindRootFuncs, $S$)
output: $H_{i+1}$
Algorithm 2: doInterp

input: $(L, H, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ from one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function($v$, $\{v[1]$, genFindRootWorker($L, H, \kappa, v[2]$), $v[3]\}$), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs

input: $(L, G, \kappa, M)$
1 Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2 $Z = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($L, x_t^p, G$)
3 modEqnArgsFunc = genLilXkZkFunc($L, Z$)
4 $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}, \varepsilon_t, z_t$)
5 funcOfXtZt = FindRoot($\{x_t^p, z_t\}$, $\{M(\mathcal{X}), x_t - x_t^p\}$)
6 funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
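The root-finding step in Algorithm 4 can be illustrated with a scalar Newton iteration. This is only a sketch under simplifying assumptions: the equation solved here is a hypothetical toy (a first-order condition of the form $1/c - \lambda = 0$ with $\lambda = 2$), not the paper's model system, and the derivative is approximated numerically rather than symbolically.

```python
# Scalar Newton iteration with a numerical derivative, standing in for the
# FindRoot call that solves the model equations for x_t given (x_{t-1}, eps_t).
def find_root(f, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        h = 1e-7
        dfx = (f(x + h) - fx) / h    # forward-difference derivative
        x = x - fx / dfx             # Newton update
    return x

# hypothetical equation: marginal utility 1/c equals a fixed multiplier 2,
# so the root is c = 0.5
c_root = find_root(lambda c: 1.0 / c - 2.0, 0.4)
```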
input: $(x_t^p, L, G, \kappa, M)$
1 Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (empty list for $\kappa = 1$)
2 $X_t = x_t^p$
3 for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4   $\{X_{t+i}, Z_{t+i}\} = G(Z_{t+i-1})$
5 end
6 $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot($L, x_t^p, G$)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$
Algorithm 5: genZsForFindRoot

input: $(L = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 $\gamma(x, \varepsilon) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c + \psi_\varepsilon\varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3 $G(x) = \begin{bmatrix} Bx + (I - F)^{-1}\phi\psi_c \\ 0_{N_x} \end{bmatrix}$
4 $H = \{\gamma, G\}$
output: $H$
Algorithm 6: genBothX0Z0Funcs
input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1 Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 fCon = fSumC($\phi, F, \psi_z, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3 xtVals = genXtOfXtm1($L, x_{t-1}, \varepsilon_t, z_t$, fCon)
4 xtp1Vals = genXtp1OfXt($L$, xtVals, fCon)
5 fullVec = $\{$xtm1Vars, xtVals, xtp1Vals, epsVars$\}$
output: fullVec
Algorithm 7: genLilXkZkFunc

input: $(L, x_{t-1}, \varepsilon_t, z_t$, fCon$)$
1 Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 $x_t = Bx_{t-1} + (I - F)^{-1}\phi\psi_c + \phi\psi_\varepsilon\varepsilon_t + \phi\psi_z z_t + F \cdot \text{fCon}$
output: $x_t$
Algorithm 8: genXtOfXtm1

input: $(L, x_t$, fCon$)$
1 Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 $x_{t+1} = Bx_t + (I - F)^{-1}\phi\psi_c + \text{fCon}$
output: $x_{t+1}$
Algorithm 9: genXtp1OfXt

input: $(L, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1 Numerically sums the $F$ contribution for the series:
2 $S = \sum_{\nu=1}^{\kappa-1} F^{\nu-1}\phi Z_{t+\nu}$
output: $S$
Algorithm 10: fSumC
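A minimal Python sketch of the truncated sum in Algorithm 10, specialized to scalar $F$ and $\phi$ for readability (the matrix case replaces the products with matrix multiplications; the input values below are hypothetical):

```python
# Truncated series  S = sum_{nu=1}^{kappa-1} F^(nu-1) * phi * Z_{t+nu}
# for scalar F and phi, as in Algorithm 10 (fSumC).
def f_sum_c(F, phi, Zs):
    # Zs holds [Z_{t+1}, ..., Z_{t+kappa-1}]; an empty list gives S = 0
    S, F_pow = 0.0, 1.0
    for z in Zs:
        S += F_pow * phi * z
        F_pow *= F
    return S

S = f_sum_c(0.5, 2.0, [1.0, 1.0, 1.0])
# 2*(1 + 0.5 + 0.25) = 3.5
```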
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2 interpData = genInterpData($\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S$); $\{\gamma, G\}$ = interpDataToFunc(interpData, $S$)
output: $\{\gamma, G\}$
Algorithm 11: makeInterpFuncs

input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1 Solves the model equations at the collocation points, producing the interpolation data.
output: interpData
Algorithm 12: genInterpData

input: $(C, (x_{t-1}, \varepsilon_t))$
1 Evaluate the model equations, obeying pre- and post-conditions.
output: $x_t$ or failed
Algorithm 13: evaluateTriple

input: $(V, S)$
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep
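The collocation step behind smolyakInterpolationPrep and interpDataToFunc can be illustrated in miniature: build the Psi matrix of basis polynomials evaluated at the collocation points, solve for the interpolation coefficients, and confirm the interpolant reproduces the data. The degree-1 basis, the Chebyshev-style nodes, and the function values below are all hypothetical; the paper's Smolyak grids are higher-dimensional and anisotropic.

```python
# Tiny collocation example: a 2-point, degree-1 interpolation on [-1, 1].
def solve2(A, b):
    # Cramer's rule for a 2x2 linear system A x = b
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

s = 2.0 ** -0.5
pts = [-s, s]                            # collocation points (hypothetical)
basis = [lambda x: 1.0, lambda x: x]     # basis polynomials p_1, p_2
Psi = [[p(x) for p in basis] for x in pts]   # the Psi matrix
f_vals = [0.3, 0.9]                      # decision-rule values at the points
coeffs = solve2(Psi, f_vals)             # interpolation coefficients
# the interpolant reproduces the data at the collocation points
approx = [sum(c * p(x) for c, p in zip(coeffs, basis)) for x in pts]
```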
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
$k_1, \theta_1, \varepsilon_1$ through $k_{30}, \theta_{30}, \varepsilon_{30}$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 14: RBC Approximated Solution Model Evaluation Points, d = (2,2,2)
$k_1, \theta_1, \varepsilon_1$ through $k_{30}, \theta_{30}, \varepsilon_{30}$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2,2,2)
$\ln(|\hat{e}_{c_1}|), \ln(|\hat{e}_{k_1}|), \ln(|\hat{e}_{\theta_1}|)$ through $\ln(|\hat{e}_{c_{30}}|), \ln(|\hat{e}_{k_{30}}|), \ln(|\hat{e}_{\theta_{30}}|)$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 16: RBC Approximated Solution Error Approximations, d = (2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \ge \upsilon I_{ss}$$
$$\max\; u(c_t) + E_t \sum_{t=0}^{\infty} \delta^{t+1} u(c_{t+1})$$

$$c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1})$$
$$\theta_t = \theta_{t-1}^{\rho} e^{\varepsilon_t}$$
$$f(k_t) = k_t^{\alpha}$$
$$u(c) = \frac{c^{1-\eta} - 1}{1-\eta}$$
$$\mathcal{L} = u(c_t) + E_t \sum_{t=0}^{\infty} \delta^t u(c_{t+1}) + \sum_{t=0}^{\infty} \Big[ \delta^t \lambda_t \big(c_t + k_t - ((1-d)k_{t-1} + \theta_t f(k_{t-1}))\big) + \delta^t \mu_t \big((k_t - (1-d)k_{t-1}) - \upsilon I_{ss}\big) \Big]$$
so that we can impose

$$I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}$$
The first order conditions become

$$\frac{1}{c_t^{\eta}} + \lambda_t$$

$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k_t^{\alpha-1} - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1}$$
For example, the Euler equations for the neoclassical growth model can be written as11

11The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of ``anticipated shocks'' but do not use the comprehensive formula employed here.
If $(\mu_t > 0 \wedge k_t - (1-d)k_{t-1} - \upsilon I_{ss} = 0)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon_t}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1}\right)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t\left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $(\mu_t = 0 \wedge k_t - (1-d)k_{t-1} - \upsilon I_{ss} \ge 0)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t\theta_t\\
&\theta_t - e^{\rho\ln(\theta_{t-1}) + \varepsilon_t}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \delta(1-d)\lambda_{t+1} + \delta(1-d)\mu_{t+1}\right)\\
&I_t - (k_t - (1-d)k_{t-1})\\
&\mu_t\left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
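The two branches can be handled by a guess-and-verify step: solve assuming the constraint is slack ($\mu_t = 0$), and if the implied investment violates the floor, re-solve with the constraint binding ($I_t = \upsilon I_{ss}$, $\mu_t > 0$). The Python fragment below sketches this logic with hypothetical values; the multiplier returned in the binding branch is a stand-in, not the model's $\mu_t$.

```python
# Guess-and-verify for the complementarity conditions above.
def solve_investment(resources, i_unconstrained, i_floor):
    if i_unconstrained >= i_floor:
        # slack branch: mu_t = 0 and I_t >= upsilon * I_ss
        i, mu = i_unconstrained, 0.0
    else:
        # binding branch: I_t pinned at the floor, mu_t > 0
        i = i_floor
        mu = i_floor - i_unconstrained   # stand-in for the positive multiplier
    c = resources - i                    # consumption from the resource identity
    return c, i, mu

slack = solve_investment(1.0, 0.3, 0.2)     # floor not binding
binding = solve_investment(1.0, 0.1, 0.2)   # floor binds, I_t = 0.2
```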
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for $(c_t, k_t, \theta_t)$ at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
$k_1, \theta_1, \varepsilon_1$ through $k_{30}, \theta_{30}, \varepsilon_{30}$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1)
$c_1, I_1, k_1$ through $c_{30}, I_{30}, k_{30}$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d = (1,1,1)
$\ln(|\hat{e}_{c_1}|), \ln(|\hat{e}_{k_1}|), \ln(|\hat{e}_{\theta_1}|)$ through $\ln(|\hat{e}_{c_{30}}|), \ln(|\hat{e}_{k_{30}}|), \ln(|\hat{e}_{\theta_{30}}|)$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1)
$k_1, \theta_1, \varepsilon_1$ through $k_{30}, \theta_{30}, \varepsilon_{30}$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2,2,2)
$k_1, \theta_1, \varepsilon_1$ through $k_{30}, \theta_{30}, \varepsilon_{30}$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d = (2,2,2)
$\ln(|\hat{e}_{c_1}|), \ln(|\hat{e}_{k_1}|), \ln(|\hat{e}_{\theta_1}|)$ through $\ln(|\hat{e}_{c_{30}}|), \ln(|\hat{e}_{k_{30}}|), \ln(|\hat{e}_{\theta_{30}}|)$

[Table body: 30 rows of numeric values; the digits were fused in extraction and are not reliably recoverable.]

Table 22: Occasionally Binding Constraints Error Approximations, d = (2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual ``Euler equation'' model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time $t$ solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL https://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL https://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000. URL http://www.sciencedirect.com/science/article/B6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at https://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL https://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (MATLAB). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL https://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL https://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL https://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x_t^* = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E[g(x, \varepsilon)]$$

then with

$$E_t x_{t+1}^* = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x_t^*, E_t x_{t+1}^*, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $L$, construct the family of trajectories and corresponding $z_t^*(x_{t-1}, \varepsilon)$:

$$\{x_t^*(x_{t-1}, \varepsilon_t)\} \in \mathcal{R}^L, \quad \|x_t^*(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0$$

$$z_t^*(x_{t-1}, \varepsilon_t) \equiv H_{-1}x_{t-1}^*(x_{t-1}, \varepsilon_t) + H_0 x_t^*(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}^*(x_{t-1}, \varepsilon_t)$$
Consequently, the exact solution has a representation given by

$$x_t^*(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^*(x_{t-1}, \varepsilon)$$

and

$$E\left[x_{t+1}^*(x_{t-1}, \varepsilon)\right] = Bx_t^* + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, E\left[z_{t+1+\nu}^*(x_{t-1}, \varepsilon)\right] + (I - F)^{-1}\phi\psi_c$$

with

$$M(x_{t-1}, x_t^*, E_t x_{t+1}^*, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
Now consider a proposed solution for the model, $x_t^p = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E[g^p(x, \varepsilon)]$ so that

$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t))$$

$$e_t^p(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \varepsilon_t)$$
By construction, the conditional expectations paths will all be bounded in the region $A^p$. Using $G^p$ and $L$, construct the family of trajectories and corresponding $z_t^p(x_{t-1}, \varepsilon)$:12

$$\{x_t^p(x_{t-1}, \varepsilon_t)\} \in \mathcal{R}^L, \quad \|x_t^p(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0$$

$$z_t^p(x_{t-1}, \varepsilon_t) \equiv H_{-1}x_{t-1}^p(x_{t-1}, \varepsilon_t) + H_0 x_t^p(x_{t-1}, \varepsilon_t) + H_1 x_{t+1}^p(x_{t-1}, \varepsilon_t)$$

The proposed solution has a representation given by

$$x_t^p(x_{t-1}, \varepsilon) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I - F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+\nu}^p(x_{t-1}, \varepsilon)$$

and

$$E\left[x_{t+1}^p(x_{t-1}, \varepsilon)\right] = Bx_t^p + \sum_{\nu=0}^{\infty} F^{\nu}\phi\, z_{t+1+\nu}^p(x_{t-1}, \varepsilon) + (I - F)^{-1}\phi\psi_c$$

with

$$e_t^p(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \varepsilon_t)$$
$$x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\,(z_{t+\nu}^*(x_{t-1}, \varepsilon) - z_{t+\nu}^p(x_{t-1}, \varepsilon))$$

Defining

$$\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t) \equiv z_{t+\nu}^*(x_{t-1}, \varepsilon) - z_{t+\nu}^p(x_{t-1}, \varepsilon)$$

gives

$$x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu}\phi\,\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t)$$

$$\|x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon)\|_\infty \le \sum_{\nu=0}^{\infty} \|F^{\nu}\phi\|\, \|\Delta z_{t+\nu}^p(x_{t-1}, \varepsilon_t)\|_\infty$$
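A scalar numerical check of this inequality (all values hypothetical): when $|F| < 1$, the weighted tail sum is dominated by a geometric series in $|F|$ times the largest discrepancy.

```python
# Scalar check of  |x* - x^p| <= sum_nu |F|^nu |phi| max_nu |dz_nu|
#                            = |phi| * max|dz| / (1 - |F|).
F, phi = 0.6, 1.0                        # hypothetical scalar F and phi
dz = [0.01, -0.005, 0.002, 0.0]          # hypothetical Delta z^p_{t+nu}
x_gap = sum((F ** nu) * phi * dz[nu] for nu in range(len(dz)))
# geometric-series bound using the largest discrepancy
bound = phi * max(abs(d) for d in dz) / (1.0 - F)
assert abs(x_gap) <= bound
```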
By bounding the largest deviation in the paths for the $\Delta z_t^p$, we can bound the largest difference in $x_t$.13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.

12The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13Since the future values are probability weighted averages of the $\Delta z_t^p$ values for the given initial conditions and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z_{t+k}^p$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_-, \varepsilon} \|\phi M(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon)\|_2$$

$$\|x_t^*(x_{t-1}, \varepsilon) - x_t^p(x_{t-1}, \varepsilon)\|_\infty \le \max_{x_-, \varepsilon} \left\|(I - F)^{-1}\phi M(x_-, g^p(x_-, \varepsilon), G^p(g^p(x_-, \varepsilon)), \varepsilon)\right\|_2$$
$$z_t^* = H_- x_{t-1}^* + H_0 x_t^* + H_+ x_{t+1}^*$$

$$z_t^p = H_- x_{t-1}^p + H_0 x_t^p + H_+ x_{t+1}^p$$

$$0 = M(x_{t-1}, x_t^*, x_{t+1}^*, \varepsilon_t)$$

$$e_t^p(x_{t-1}, \varepsilon) = M(x_{t-1}^p, x_t^p, x_{t+1}^p, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} (\|\phi\Delta z_t\|_2)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\big(H_0(x_t^p - x_t^*) + H_+(x_{t+1}^p - x_{t+1}^*)\big)\right\|_2\right)^2$$
with

$$M(x_{t-1}, x_t^*, x_{t+1}^*, \varepsilon_t) = 0$$

$$\Delta e_t^p(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x_t^* - x_t^p) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p)$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$(x_t^* - x_t^p) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e_t^p(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p)\right)$$
Collecting results,

$$(\|\phi\Delta z_t\|_2)^2 = \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e_t^p(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p)\right) + H_+(x_{t+1}^p - x_{t+1}^*)\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e_t^p(x_{t-1}, \varepsilon) - H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}}(x_{t+1}^* - x_{t+1}^p) + H_+(x_{t+1}^p - x_{t+1}^*)\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e_t^p(x_{t-1}, \varepsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_+\right)(x_{t+1}^* - x_{t+1}^p)\right)\right\|_2\right)^2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_+$$

produces

$$\max_{x_{t-1}, \varepsilon_t} (\|\phi\,\Delta e_t^p(x_{t-1}, \varepsilon)\|_2)^2$$
B  A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for that state:

[ E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N) ]

For the next state, we can compute expectations if the errors are independent:

[ E_t(x_{t+2}(x_t) | s_t = 1), …, E_t(x_{t+2}(x_t) | s_t = N) ]ᵀ = (P ⊗ I) [ E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1), …, E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N) ]ᵀ
Consider two states s_t ∈ {0, 1} where the depreciation rates differ, d_0 > d_1, with

Prob(s_t = j | s_{t-1} = i) = p_{ij}
If (s_t = 0 and μ_t > 0 and k_t − (1 − d_0)k_{t−1} − υI_ss = 0):

λ_t − 1/c_t = 0
c_t + k_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d_0) + δ(1 − d_0) μ_{t+1}) = 0
I_t − (k_t − (1 − d_0)k_{t−1}) = 0
μ_t (k_t − (1 − d_0)k_{t−1} − υI_ss) = 0

If (s_t = 0 and μ_t = 0 and k_t − (1 − d_0)k_{t−1} − υI_ss ≥ 0), the same equation system applies with μ_t = 0 and the constraint slack.

If (s_t = 1 and μ_t > 0 and k_t − (1 − d_1)k_{t−1} − υI_ss = 0), and if (s_t = 1 and μ_t = 0 and k_t − (1 − d_1)k_{t−1} − υI_ss ≥ 0), the corresponding systems apply with d_1 replacing d_0 throughout.
C  Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic-grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}, the decision rule components
G: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}, the decision rule conditional expectations components
H: {γ, G}, the collected decision rule information
{C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system R^{3N_x + N_ε} → R^{N_x}

Algorithm 1: nestInterp
input: (L, H_0, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much; the norm of the difference is controlled by the option (default "normConvTol" → 10^{−10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
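Algorithm 1's NestWhile loop can be sketched outside Mathematica. This is a minimal Python analogue, assuming only that the doInterp step is some map from one candidate rule to the next (`step` below) and that convergence is judged at a fixed set of test points; the toy contraction in the usage lines is illustrative, not the paper's model.

```python
import numpy as np

def nest_while(step, h0, test_pts, tol=1e-10, max_iter=500):
    """Iterate h_{k+1} = step(h_k) until the rule stops changing at the
    test points (sup-norm), mirroring the NestWhile loop in Algorithm 1."""
    h = h0
    prev = np.array([h(x) for x in test_pts])
    for _ in range(max_iter):
        h = step(h)
        cur = np.array([h(x) for x in test_pts])
        if np.max(np.abs(cur - prev)) < tol:
            return h
        prev = cur
    raise RuntimeError("no convergence at the test points")

# toy contraction: the fixed point of h -> 0.5*h(x) + 0.25*x is h*(x) = 0.5*x
step = lambda h: (lambda x, h=h: 0.5 * h(x) + 0.25 * x)
h_star = nest_while(step, lambda x: 0.0, [0.0, 1.0, 2.0])
```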
Algorithm 2: doInterp
input: (L, H_i, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generate a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t, for any given value of (x_{t−1}, ε_t), from one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M})
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 3: genFindRootFuncs
input: (L, H, κ, {C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Apply genFindRootWorker to each model component. Symbolically generate a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t, for any given value of (x_{t−1}, ε_t), from one of the set of model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 4: genFindRootWorker
input: (L, G, κ, M)
1: Symbolically construct functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 5: genZsForFindRoot
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (the empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1; i < κ − 1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5: end
6: {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 6: genBothX0Z0Funcs
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model as an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I − F)^{−1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x, ε) = [ Bx + (I − F)^{−1}φψ_c ; 0_{N_x} ]
output: H = {γ, G}
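The initial guess in Algorithm 6 can be sketched directly from the series representation in Appendix A, x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε. This is a hedged sketch; the φ on the ψ_ε term follows the Appendix A representation, `gen_x0` is an illustrative name, and the 1x1 matrices in the usage lines are not from the paper.

```python
import numpy as np

def gen_x0(B, F, phi, psi_c, psi_eps):
    """Initial decision-rule guess taken from the linear reference model:
    x = B x_{t-1} + (I - F)^{-1} phi psi_c + phi psi_eps eps."""
    n = B.shape[0]
    # constant term: solve (I - F) c = phi psi_c instead of inverting
    const = np.linalg.solve(np.eye(n) - F, phi @ psi_c)
    def gamma(x, eps):
        return B @ x + const + phi @ (psi_eps @ eps)
    return gamma

# illustrative 1x1 system
gamma = gen_x0(np.array([[0.9]]), np.array([[0.5]]),
               np.array([[1.0]]), np.array([0.1]), np.array([[1.0]]))
```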
Algorithm 7: genLilXkZkFunc
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically create a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3: xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 8: genXtOfXtm1
input: (L, {x_{t−1}, ε_t, z_t}, fCon)
1: Symbolically create a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t
Algorithm 9: genXtp1OfXt
input: (L, x_{t−1}, fCon)
1: Symbolically create a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_t

Algorithm 10: fSumC
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sum the F contribution for the series:
2: S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M}, S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 12: genInterpData
input: ({C_1, …, C_M} = d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solve the model equations at the collocation points, producing the interpolation data used by makeInterpFuncs.
output: interpData

Algorithm 13: evaluateTriple
input: (C(x_{t−1}, ε_t))
1: Evaluate the model equations, obeying pre- and post-conditions.
output: x_t, or "failed"
Algorithm 14: interpDataToFunc
input: (V, S)
1: Compute a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 18: genSolutionErrXtEpsZero
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 19: genSolutionPath
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Table 15: RBC Approximated Solution Values at Evaluation Points, d = (2, 2, 2). Columns (k_i, θ_i, ε_i), evaluation points i = 1, …, 30.
Table 16: RBC Approximated Solution Error Approximations, d = (2, 2, 2). Columns (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), evaluation points i = 1, …, 30.
6  A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches11 (Holden 2015; Guerrieri and Iacoviello 2015b; Benigno et al. 2009; Hintermaier and Koeniger 2010b; Brumm and Grill 2010; Nakov 2008; Haefke 1998; Nakata 2012; Gordon 2011; Billi 2011; Hintermaier and Koeniger 2010a; Guerrieri and Iacoviello 2015a).
Consider adding a constraint to the simple RBC model:

I_t ≥ υ I_ss

max u(c_t) + E_t Σ_{t=0}^∞ δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d)k_{t−1} + θ_t f(k_{t−1})
θ_t = θ_{t−1}^ρ e^{ε_t}
f(k_t) = k_t^α
u(c) = (c^{1−η} − 1)/(1 − η)

The Lagrangian is

L = u(c_t) + E_t [ Σ_{t=0}^∞ δ^t u(c_{t+1}) ] + Σ_{t=0}^∞ δ^t λ_t (c_t + k_t − ((1 − d)k_{t−1} + θ_t f(k_{t−1}))) + δ^t μ_t ((k_t − (1 − d)k_{t−1}) − υ I_ss)

so that we can impose

I_t = (k_t − (1 − d)k_{t−1}) ≥ υ I_ss

The first order conditions become

1/c_t^η + λ_t

λ_t − δλ_{t+1}θ_{t+1}α k_t^{α−1} − δ(1 − d)λ_{t+1} + μ_t − δ(1 − d)μ_{t+1}

For example, the Euler equations for the neoclassical growth model can be written as11

11 The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
If (μ_t > 0 and k_t − (1 − d)k_{t−1} − υI_ss = 0):

λ_t − 1/c_t = 0
c_t + I_t − θ_t k_{t−1}^α = 0
N_t − λ_t θ_t = 0
θ_t − e^{ρ ln(θ_{t−1}) + ε} = 0
λ_t + μ_t − (α k_t^{α−1} δ N_{t+1} + λ_{t+1} δ(1 − d) + δ(1 − d) μ_{t+1}) = 0
I_t − (k_t − (1 − d)k_{t−1}) = 0
μ_t (k_t − (1 − d)k_{t−1} − υI_ss) = 0

If (μ_t = 0 and k_t − (1 − d)k_{t−1} − υI_ss ≥ 0), the same equation system applies with μ_t = 0 and the constraint slack.
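The branch logic that pairs each gate condition with its equation system can be illustrated on a one-variable analogue: guess that the constraint binds, and switch branches when the guess is inconsistent. This is a hedged sketch, not the RBC system above; `solve_with_constraint` and its shadow-value formula are hypothetical stand-ins for the root-finding step applied to the binding and slack systems.

```python
def solve_with_constraint(unconstrained_I, I_floor):
    """One-variable analogue of the OBC branches: investment cannot fall
    below I_floor. Returns (I, mu)."""
    # slack branch: mu = 0 and I at its unconstrained optimum is feasible
    if unconstrained_I >= I_floor:
        return unconstrained_I, 0.0
    # binding branch: I pinned at the floor; the multiplier picks up the gap
    # (the gap is an illustrative stand-in for the true shadow value)
    mu = I_floor - unconstrained_I
    return I_floor, mu
```

Full solvers try one branch, check the sign restrictions (μ > 0 or the inequality), and fall through to the other branch when they fail; that is the role of the boolean "gate" values in the algorithms of Appendix C.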
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1, 1, 1). Columns (k_i, θ_i, ε_i), i = 1, …, 30.
Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1, 1, 1). Columns (c_i, I_i, k_i), i = 1, …, 30.
Table 19: Occasionally Binding Constraints Error Approximations, d = (1, 1, 1). Columns (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30.
Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2, 2, 2). Columns (k_i, θ_i, ε_i), i = 1, …, 30.
Table 21: Occasionally Binding Constraints Values at Evaluation Points, d = (2, 2, 2). Columns (c_i, I_i, k_i), i = 1, …, 30.
Table 22: Occasionally Binding Constraints Error Approximations, d = (2, 2, 2). Columns (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30.
7  Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. doi: 10.1016/j.jmoneco.2014.08.005.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. doi: 10.1016/j.jedc.2014.03.003.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. doi: 10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000.
A  Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)]

Then, with

E_t x*_{t+1} = G(g(x_{t−1}, ε_t)),

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀ (x_{t−1}, ε_t)
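A conditional expectation function of this form can be approximated by quadrature when the shock is Gaussian. A minimal sketch, assuming a scalar N(0, σ²) shock and Gauss-Hermite nodes; the rule g in the usage lines is an arbitrary illustration, not the model's solution.

```python
import numpy as np

def make_G(g, sigma=1.0, order=11):
    """Approximate G(x) = E[g(x, eps)] for eps ~ N(0, sigma^2) using
    probabilists' Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)
    norm = np.sqrt(2.0 * np.pi)   # total weight of the N(0,1) density kernel
    def G(x):
        return sum(w * g(x, sigma * e) for e, w in zip(nodes, weights)) / norm
    return G

# illustrative rule: g(x, eps) = x + eps^2, so G(x) = x + sigma^2 exactly
G = make_G(lambda x, e: x + e**2)
```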
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t−1}, ε):

x*_t(x_{t−1}, ε_t) ∈ R^L,   ‖x*_t(x_{t−1}, ε_t)‖_2 ≤ X   ∀ t > 0

z*_t(x_{t−1}, ε_t) ≡ H_− x*_{t−1}(x_{t−1}, ε_t) + H_0 x*_t(x_{t−1}, ε_t) + H_+ x*_{t+1}(x_{t−1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ z*_{t+ν}(x_{t−1}, ε)
and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + Σ_{ν=0}^∞ F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φψ_c

with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀ (x_{t−1}, ε_t)
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x^p_{t+1} = G^p(g^p(x_{t−1}, ε_t))

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t−1}, ε):12

x^p_t(x_{t−1}, ε_t) ∈ R^L,   ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X   ∀ t > 0

z^p_t(x_{t−1}, ε_t) ≡ H_− x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_+ x^p_{t+1}(x_{t−1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φψ_ε ε + (I − F)^{−1} φψ_c + Σ_{ν=0}^∞ F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + Σ_{ν=0}^∞ F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φψ_c

with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the two representations,

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ (z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε))

Defining Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε),

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^∞ F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ Σ_{ν=0}^∞ ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the ∆zpt we can bound the largestdifference in xt13 The exact solution satisfies the model equations exactly The error asso-ciated with the proposed solution leads to a conservative bound on the largest change in zneeded to match the exact solution
12The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
13Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
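When $F$ is stable, the tail terms $F^{\nu}\phi$ decay geometrically, so both the representation and the inequality above can be checked by truncation. The following Python sketch (an illustrative stand-in for the paper's Mathematica prototype; all matrices and $z$-paths are small random test inputs) verifies the displayed norm bound numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
L, k = 3, 40                              # state dimension, truncation horizon
F = rng.random((L, L))
F *= 0.5 / np.linalg.norm(F, np.inf)      # rescale so ||F||_inf = 0.5 (stable)
phi = rng.random((L, L))

# two bounded z-paths: an "exact" path and a perturbed "proposed" path
z_star = [rng.random(L) for _ in range(k)]
z_prop = [z + 0.01 * rng.random(L) for z in z_star]

def series_tail(zpath):
    """sum_nu F^nu phi z_{t+nu}: the z-dependent part of the representation."""
    S, Fp = np.zeros(L), np.eye(L)
    for z in zpath:
        S += Fp @ phi @ z
        Fp = Fp @ F
    return S

diff = series_tail(z_star) - series_tail(z_prop)

# the bound: ||x* - x^p||_inf <= sum_nu ||F^nu phi||_inf ||dz_nu||_inf
bound, Fp = 0.0, np.eye(L)
for zs, zp in zip(z_star, z_prop):
    bound += np.linalg.norm(Fp @ phi, np.inf) * np.linalg.norm(zs - zp, np.inf)
    Fp = Fp @ F

assert np.linalg.norm(diff, np.inf) <= bound + 1e-12
```

The induced infinity norm is used for the matrix factors so that each term of the triangle inequality is consistent with the vector infinity norm on the $\Delta z$ path.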
$$\left\|\Delta z^*_t\right\| \le \max_{x_{-},\varepsilon}\left\|\phi M\big(x_{-},\, g^p(x_{-},\varepsilon),\, G^p(g^p(x_{-},\varepsilon)),\, \varepsilon\big)\right\|_2$$

$$\left\|x^*_t(x_{t-1},\varepsilon) - x^p_t(x_{t-1},\varepsilon)\right\|_\infty \le \max_{x_{-},\varepsilon}\left\|(I-F)^{-1}\phi M\big(x_{-},\, g^p(x_{-},\varepsilon),\, G^p(g^p(x_{-},\varepsilon)),\, \varepsilon\big)\right\|_2$$
$$z^*_t = H_{-}x^*_{t-1} + H_0 x^*_t + H_{+}x^*_{t+1}$$
$$z^p_t = H_{-}x^p_{t-1} + H_0 x^p_t + H_{+}x^p_{t+1}$$

$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t)$$
$$e^p_t(x_{t-1},\varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

$$\max_{x_{t-1},\varepsilon_t}\left(\left\|\phi\Delta z_t\right\|_2\right)^2 = \max_{x_{t-1},\varepsilon_t}\left(\left\|\phi\left(H_0(x^p_t - x^*_t) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$

with

$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0$$

$$\Delta e^p_t(x_{t-1},\varepsilon) \approx \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}(x^*_t - x^p_t) + \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})$$

When $\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}$ is non-singular we can write

$$(x^*_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1},\varepsilon) - \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right)$$

Collecting results, and using the fact that the squared norm is unaffected by a sign flip,

$$\left(\left\|\phi\Delta z_t\right\|_2\right)^2 = \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1},\varepsilon) - \frac{\partial M}{\partial x_{t+1}}(x^*_{t+1}-x^p_{t+1})\right) + H_{+}(x^*_{t+1}-x^p_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1},\varepsilon) - \left(H_0\left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_{+}\right)(x^*_{t+1}-x^p_{t+1})\right)\right\|_2\right)^2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}} \approx H_{+}$$

produces

$$\max_{x_{t-1},\varepsilon_t}\left(\left\|\phi\,\Delta e^p_t(x_{t-1},\varepsilon)\right\|_2\right)^2$$
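When the model equations happen to be linear in $(x_t, x_{t+1})$ and $x_{t-1}$ is held fixed, the $H$ matrices are the exact derivatives and the collapse above holds exactly rather than approximately. A small numpy check of that special case (all matrices and vectors are random test inputs, not objects from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
H0 = rng.random((n, n)) + n * np.eye(n)    # dM/dx_t, kept non-singular
Hp = rng.random((n, n))                     # dM/dx_{t+1}
phi = rng.random((n, n))

x_star, x_star1 = rng.random(n), rng.random(n)   # exact x_t, x_{t+1}
x_prop, x_prop1 = x_star + 0.01, x_star1 - 0.02  # a perturbed proposed solution

# With M linear and the exact solution making M zero, the proposal's residual is
e_p = H0 @ (x_prop - x_star) + Hp @ (x_prop1 - x_star1)

# Step 1: a non-singular dM/dx_t lets us recover the time-t error from e_p
recovered = -np.linalg.solve(H0, e_p - Hp @ (x_prop1 - x_star1))
assert np.allclose(recovered, x_star - x_prop)

# Step 2: here the H matrices ARE the derivatives, so phi*dz collapses to phi*e_p
dz = H0 @ (x_prop - x_star) + Hp @ (x_prop1 - x_star1)
assert np.allclose(np.linalg.norm(phi @ dz), np.linalg.norm(phi @ e_p))
```

For a nonlinear $M$ the same two steps hold only to first order, which is exactly the approximation the derivation invokes.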
B A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial $x(x_{t-1},\varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1},\varepsilon_t,x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N}p_{ij}\,E_t(x_{t+1}(x_t)\mid s_{t+1}=j)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N}p_{ij}\,E_t(x_{t+2}(x_{t+1})\mid s_{t+2}=j)$$

If we know the current state, we can use the known conditional expectation function for the state:

$$E_t(x_{t+1}(x_t)\mid s_t=1),\;\ldots,\;E_t(x_{t+1}(x_t)\mid s_t=N)$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix}E_t(x_{t+2}(x_t)\mid s_t=1)\\ \vdots\\ E_t(x_{t+2}(x_t)\mid s_t=N)\end{bmatrix} = (P\otimes I)\begin{bmatrix}E_t(x_{t+2}(x_{t+1})\mid s_{t+1}=1)\\ \vdots\\ E_t(x_{t+2}(x_{t+1})\mid s_{t+1}=N)\end{bmatrix}$$
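The stacked identity is bookkeeping for the regime-weighted averages, which a short numpy check makes concrete (the vectors `v[j]` are illustrative stand-ins for the conditional expectations $E_t(x_{t+2}(x_{t+1})\mid s_{t+1}=j)$):

```python
import numpy as np

n, N = 3, 2                                # state dimension, number of regimes
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                 # p_ij = Prob(s_{t+1} = j | s_t = i)
rng = np.random.default_rng(2)

# one n-vector per next-period regime
v = [rng.random(n) for _ in range(N)]

# stacked form: (P kron I) applied to the stacked next-state expectations
stacked = np.kron(P, np.eye(n)) @ np.concatenate(v)

# elementwise form: E_t(x_{t+2} | s_t = i) = sum_j p_ij v_j
manual = np.concatenate([sum(P[i, j] * v[j] for j in range(N))
                         for i in range(N)])

assert np.allclose(stacked, manual)
```

Each block of the stacked result is the probability-weighted average of the regime-conditional expectations, so iterating the multiplication extends the conditional expectations path one period at a time.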
Consider two states $s_t \in \{0,1\}$ where the depreciation rates are different, $d_0 > d_1$, and

$$\text{Prob}(s_t=j\mid s_{t-1}=i) = p_{ij}$$
If $(s_t = 0$ and $\mu > 0$ and $(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k^{\alpha}_{t-1}$$
$$N_t - \lambda_t\theta_t$$
$$\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k^{(\alpha-1)}_t\delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \delta(1-d_0)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_0)k_{t-1})$$
$$\mu_t(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})$$

If $(s_t = 0$ and $\mu = 0$ and $(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k^{\alpha}_{t-1}$$
$$N_t - \lambda_t\theta_t$$
$$\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k^{(\alpha-1)}_t\delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \delta(1-d_0)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_0)k_{t-1})$$
$$\mu_t(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss})$$

If $(s_t = 1$ and $\mu > 0$ and $(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k^{\alpha}_{t-1}$$
$$N_t - \lambda_t\theta_t$$
$$\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k^{(\alpha-1)}_t\delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \delta(1-d_1)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_1)k_{t-1})$$
$$\mu_t(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})$$

If $(s_t = 1$ and $\mu = 0$ and $(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k^{\alpha}_{t-1}$$
$$N_t - \lambda_t\theta_t$$
$$\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k^{(\alpha-1)}_t\delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \delta(1-d_1)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d_1)k_{t-1})$$
$$\mu_t(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss})$$
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points; Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system, R^{3N_x + N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default ``normConvTol'' → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, ..., H_c}
Algorithm 1: nestInterp
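Algorithm 1's outer loop is a fixed-point iteration: apply doInterp repeatedly until the rule functions stop changing at the test points. A Python sketch of the same control flow (the names `nest_while`, `do_interp`, and the toy update rule are illustrative stand-ins, not the paper's Mathematica API):

```python
import numpy as np

def nest_while(do_interp, H0, test_points, tol=1e-10, max_iter=200):
    """Iterate rule -> updated rule until values at test points converge."""
    H = H0
    prev = np.array([H(x) for x in test_points])
    for _ in range(max_iter):
        H = do_interp(H)
        cur = np.array([H(x) for x in test_points])
        if np.linalg.norm(cur - prev) <= tol:   # the ``normConvTol'' test
            return H
        prev = cur
    raise RuntimeError("no convergence")

# toy stand-in: the update maps rule h to the Newton-style update for sqrt(2),
# so every test point's value converges to sqrt(2)
do_interp = lambda h: (lambda x: 0.5 * (h(x) + 2.0 / h(x)))
rule = nest_while(do_interp, lambda x: x, [1.0, 2.0, 3.0])
assert abs(rule(2.0) - np.sqrt(2.0)) < 1e-8
```

The convergence test looks only at values on the fixed test-point set, mirroring notConvergedAt(v, T) in the pseudo-code.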
input: (L, H_i, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp

input: (L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2 X_t = x^p_t
  for i = 1; i < κ - 1; i++ do
3   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
4 end
5 {(X_{t+1}, Z_{t+1}), ..., (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, ..., Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [Bx + (I - F)^{-1}φψ_c + ψ_ε ε_t ; 0_{N_x}]
3 G(x) = [Bx + (I - F)^{-1}φψ_c ; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ-1}})
  xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
  xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
  fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = Bx_{t-1} + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = Bx_{t-1} + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_t
Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
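Algorithm 10's sum is finite because only the κ - 1 future Z vectors are carried. A Python sketch of the truncated sum (the matrix and vector sizes are illustrative assumptions):

```python
import numpy as np

def f_sum_c(F, phi, Zfuture):
    """S = sum_{nu=1}^{k-1} F^(nu-1) phi Z_{t+nu}, matching Algorithm 10."""
    S = np.zeros(F.shape[0])
    Fpow = np.eye(F.shape[0])
    for Z in Zfuture:            # Zfuture = [Z_{t+1}, ..., Z_{t+k-1}]
        S += Fpow @ phi @ Z
        Fpow = Fpow @ F
    return S

F = np.array([[0.5, 0.0],
              [0.1, 0.3]])
phi = np.eye(2)
Zf = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
S = f_sum_c(F, phi, Zf)
# first term: I @ Z1 = [1, 0]; second term: F @ Z2 = [0, 0.3]
assert np.allclose(S, [1.0, 0.3])
```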
input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
  {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs

input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
output: {γ, G}
Algorithm 12: genInterpData

input: (C(x_{t-1}, ε_t))
1 Evaluate the model equations obeying pre- and post-conditions.
output: x_t or failed
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for ``Anticipated Shocks''
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
- Algorithm Overview
  - Single Equation System Case
  - Multiple Equation System Case
    - Approximating a Known Solution: η = 1, U(c) = Log(c)
    - Approximating an Unknown Solution: η = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
(30 rows of error approximations ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|), i = 1, ..., 30)

Table 16: RBC Approximated Solution Error Approximations, d=(2,2,2)
6 A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since (Christiano and Fisher, 2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model:

$$I_t \ge \upsilon I_{ss}$$
$$\max\; u(c_t) + E_t\sum_{t=0}^{\infty}\delta^{t+1}u(c_{t+1})$$

subject to

$$c_t + k_t = (1-d)k_{t-1} + \theta_t f(k_{t-1})$$
$$\theta_t = \theta^{\rho}_{t-1}e^{\varepsilon_t}$$
$$f(k_t) = k^{\alpha}_t$$
$$u(c) = \frac{c^{1-\eta}-1}{1-\eta}$$

The Lagrangian is

$$\mathcal{L} = u(c_t) + E_t\left[\sum_{t=0}^{\infty}\delta^t u(c_{t+1})\right] + \sum_{t=0}^{\infty}\delta^t\lambda_t\Big(c_t + k_t - \big((1-d)k_{t-1} + \theta_t f(k_{t-1})\big)\Big) + \delta^t\mu_t\Big((k_t - (1-d)k_{t-1}) - \upsilon I_{ss}\Big)$$
so that we can impose

$$I_t = (k_t - (1-d)k_{t-1}) \ge \upsilon I_{ss}$$
The first order conditions become

$$\lambda_t - \frac{1}{c^{\eta}_t}$$

$$\lambda_t - \delta\lambda_{t+1}\theta_{t+1}\alpha k^{(\alpha-1)}_t - \delta(1-d)\lambda_{t+1} + \mu_t - \delta(1-d)\mu_{t+1}$$
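The first-order condition for $k_t$ can be spot-checked by numerically differentiating the $k_t$-dependent terms of the Lagrangian. A Python sketch (the parameter and multiplier values are arbitrary test inputs, not calibrations from the paper):

```python
# verify d/dk_t of the k_t-dependent Lagrangian terms matches the stated FOC
alpha, d, delta = 0.36, 0.1, 0.95
lam_t, lam_t1, mu_t, mu_t1, theta_t1 = 1.2, 1.1, 0.3, 0.2, 1.05
k = 2.0

def L_kt_terms(kt):
    """Lagrangian terms that involve k_t (the common delta^t factored out)."""
    return (lam_t * kt
            - delta * lam_t1 * ((1 - d) * kt + theta_t1 * kt**alpha)
            + mu_t * kt
            - delta * (1 - d) * mu_t1 * kt)

h = 1e-6
numeric = (L_kt_terms(k + h) - L_kt_terms(k - h)) / (2 * h)   # central difference
stated = (lam_t - delta * lam_t1 * theta_t1 * alpha * k**(alpha - 1)
          - delta * (1 - d) * lam_t1 + mu_t - delta * (1 - d) * mu_t1)
assert abs(numeric - stated) < 1e-6
```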
For example, the Euler equations for the neoclassical growth model can be written as:

11The algorithms described in (Holden, 2015) and (Guerrieri and Iacoviello, 2015b) also exploit the use of ``anticipated shocks'' but do not use the comprehensive formula employed here.
If $(\mu > 0$ and $(k_t - (1-d)k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k^{\alpha}_{t-1}$$
$$N_t - \lambda_t\theta_t$$
$$\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k^{(\alpha-1)}_t\delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d)k_{t-1})$$
$$\mu_t(k_t - (1-d)k_{t-1} - \upsilon I_{ss})$$

If $(\mu = 0$ and $(k_t - (1-d)k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\lambda_t - \frac{1}{c_t}$$
$$c_t + I_t - \theta_t k^{\alpha}_{t-1}$$
$$N_t - \lambda_t\theta_t$$
$$\theta_t - e^{(\rho\ln(\theta_{t-1})+\varepsilon)}$$
$$\lambda_t + \mu_t - \left(\alpha k^{(\alpha-1)}_t\delta N_{t+1} + \lambda_{t+1}\delta(1-d) + \delta(1-d)\mu_{t+1}\right)$$
$$I_t - (k_t - (1-d)k_{t-1})$$
$$\mu_t(k_t - (1-d)k_{t-1} - \upsilon I_{ss})$$
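Operationally the two branches act as a gate: solve the unconstrained ($\mu = 0$) system, accept it only if investment clears the floor, and otherwise impose the binding branch with $I_t = \upsilon I_{ss}$. A schematic Python sketch for a single capital stock (the `solve_unconstrained` and `solve_binding` closures are illustrative stand-ins for the model's root-finding step):

```python
def solve_with_gate(k_prev, I_ss, upsilon, d, solve_unconstrained, solve_binding):
    """Try mu = 0 first; fall back to the binding branch if I_t < upsilon*I_ss."""
    k_t = solve_unconstrained(k_prev)            # branch with mu = 0
    I_t = k_t - (1 - d) * k_prev
    if I_t >= upsilon * I_ss - 1e-12:
        return k_t, 0.0                          # constraint slack: accept
    # binding branch: I_t = upsilon * I_ss pins down k_t; mu_t > 0 adjusts
    k_t = (1 - d) * k_prev + upsilon * I_ss
    mu_t = solve_binding(k_prev, k_t)
    return k_t, mu_t

# toy stand-ins: the unconstrained rule invests 10% of k_prev; mu from a stub
unconstrained = lambda kp: (1 - 0.1) * kp + 0.1 * kp
binding_mu = lambda kp, kt: 0.5
k, mu = solve_with_gate(2.0, I_ss=1.0, upsilon=0.3, d=0.1,
                        solve_unconstrained=unconstrained,
                        solve_binding=binding_mu)
# here I_t = 0.2 < 0.3, so the binding branch fires: k = 1.8 + 0.3 = 2.1
```

In the paper's algorithm this gate is the boolean returned by each function triple, and the outer selection function picks the unique branch whose gate conditions hold.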
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error component by component.
(30 rows of evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30)

Table 17: Occasionally Binding Constraints Model Evaluation Points, d=(1,1,1)
(30 rows of decision rule values (c_i, I_i, k_i), i = 1, ..., 30)

Table 18: Occasionally Binding Constraints: Values at Evaluation Points, d=(1,1,1)
(30 rows of error approximations ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|), i = 1, ..., 30)

Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
(30 rows of evaluation points (k_i, θ_i, ε_i), i = 1, ..., 30)

Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
(30 rows of decision rule values (c_i, I_i, k_i), i = 1, ..., 30)

Table 21: Occasionally Binding Constraints: Values at Evaluation Points, d=(2,2,2)
(30 rows of error approximations ln(|ê_{c,i}|), ln(|ê_{k,i}|), ln(|ê_{θ,i}|), i = 1, ..., 30)

Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual ``Euler equation'' model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. doi: 10.1016/j.jmoneco.2014.08.005.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. doi: 10.1016/j.jedc.2014.03.003.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. doi: 10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: a grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. doi: 10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region $A$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1},\varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x,\varepsilon)\right]$$

then with

$$E_t x_{t+1} = G(g(x_{t-1},\varepsilon_t))$$

we have

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall(x_{t-1},\varepsilon_t)$$

In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^*_t(x_{t-1},\varepsilon)$:

$$\left\{x^*_t(x_{t-1},\varepsilon_t) \in \mathbb{R}^L : \left\|x^*_t(x_{t-1},\varepsilon_t)\right\|_2 \le \bar{X}\ \forall t>0\right\}$$

$$z^*_t(x_{t-1},\varepsilon_t) \equiv H_{-1}x^*_{t-1}(x_{t-1},\varepsilon_t) + H_0 x^*_t(x_{t-1},\varepsilon_t) + H_1 x^*_{t+1}(x_{t-1},\varepsilon_t)$$

Consequently, the exact solution has a representation given by

$$x^*_t(x_{t-1},\varepsilon) = Bx_{t-1} + \phi\psi_\varepsilon\varepsilon + (I-F)^{-1}\phi\psi_c + \sum_{\nu=0}^{\infty}F^{\nu}\phi\, z^*_{t+\nu}(x_{t-1},\varepsilon)$$

and

$$E\left[x^*_{t+1}(x_{t-1},\varepsilon)\right] = Bx^*_t + \sum_{\nu=0}^{\infty}F^{\nu}\phi\, E\left[z^*_{t+1+\nu}(x_{t-1},\varepsilon)\right] + (I-F)^{-1}\phi\psi_c$$

with

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall(x_{t-1},\varepsilon_t)$$
Now consider a proposed solution for the model xpt = gp(xtminus1 εt) define Gp(x) equivE [gp(x ε)] so that
Etxt+1 = Gp(gp(xtminus1 εt))ept (xtminus1 ε) equivM(xtminus1 x
pt Etx
pt+1 εt)
56
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t-1}, ε):12

{x^p_t(x_{t-1}, ε_t) ∈ R^L : ‖x^p_t(x_{t-1}, ε_t)‖_2 ≤ X̄   ∀t > 0}

z^p_t(x_{t-1}, ε_t) ≡ H_{-1} x^p_{t-1}(x_{t-1}, ε_t) + H_0 x^p_t(x_{t-1}, ε_t) + H_1 x^p_{t+1}(x_{t-1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t-1}, ε) = B x_{t-1} + φψ_ε ε + (I − F)^{-1} φψ_c + Σ_{ν=0}^{∞} F^ν φ z^p_{t+ν}(x_{t-1}, ε)

and

E[x^p_{t+1}(x_{t-1}, ε)] = B x^p_t + Σ_{ν=0}^{∞} F^ν φ z^p_{t+1+ν}(x_{t-1}, ε) + (I − F)^{-1} φψ_c

with

e^p_t(x_{t-1}, ε) ≡ M(x_{t-1}, x^p_t, E_t x^p_{t+1}, ε_t)

Subtracting the two representations,

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^{∞} F^ν φ (z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε))

Defining

Δz^p_{t+ν}(x_{t-1}, ε_t) ≡ (z*_{t+ν}(x_{t-1}, ε) − z^p_{t+ν}(x_{t-1}, ε)),

we have

x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε) = Σ_{ν=0}^{∞} F^ν φ Δz^p_{t+ν}(x_{t-1}, ε_t)

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ Σ_{ν=0}^{∞} ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t-1}, ε_t)‖_∞
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x_t.13 The exact solution satisfies the model equations exactly; the error associated with the proposed solution therefore leads to a conservative bound on the largest change in z needed to match the exact solution.

12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

13 Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x*_t(x_{t-1}, ε) − x^p_t(x_{t-1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{-1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
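These two bounds are directly computable once a proposed rule g^p is in hand. A minimal numeric sketch (Python with NumPy; the matrices, the residual function, and the proposed rule below are illustrative stand-ins, not the paper's model):

```python
import numpy as np

# Illustrative stand-ins for the linear reference model pieces (assumptions).
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])          # stable: all eigenvalues inside the unit circle
phi = np.eye(2)

def model_residual(x_lag, x, Ex_next, eps):
    # Stand-in for M(x_{t-1}, x_t, E_t x_{t+1}, eps_t); zero at an exact solution.
    return 0.01 * np.tanh(x) - 0.001 * Ex_next

def g_p(x_lag, eps):
    # Hypothetical proposed decision rule g^p.
    return 0.9 * x_lag + eps

# max over sampled (x_{t-1}, eps) of ||(I - F)^{-1} phi M(...)||_2,
# the conservative sup-norm bound on ||x* - x^p|| derived above.
rng = np.random.default_rng(0)
lever = np.linalg.inv(np.eye(2) - F) @ phi
bound = 0.0
for _ in range(200):
    x_lag = rng.uniform(-1.0, 1.0, size=2)
    eps = np.zeros(2)
    x = g_p(x_lag, eps)          # proposed x_t
    Ex = g_p(x, np.zeros(2))     # proposed E_t x_{t+1} = G^p(g^p(...))
    r = model_residual(x_lag, x, Ex, eps)
    bound = max(bound, np.linalg.norm(lever @ r, 2))
print(bound)   # strictly positive, small here since the residual is small
```

In practice the maximization runs over the model evaluation points T rather than a random sample.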
z*_t = H_− x*_{t-1} + H_0 x*_t + H_+ x*_{t+1}

z^p_t = H_− x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1}

0 = M(x_{t-1}, x*_t, x*_{t+1}, ε_t)

e^p_t(x_{t-1}, ε) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t-1}, ε_t} (‖φ Δz_t‖_2)^2 = max_{x_{t-1}, ε_t} (‖φ (H_0 (x*_t − x^p_t) + H_+ (x*_{t+1} − x^p_{t+1}))‖_2)^2
with

M(x_{t-1}, x*_t, x*_{t+1}, ε_t) = 0,

a first order expansion of the residual gives

Δe^p_t(x_{t-1}, ε) ≈ (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t)(x*_t − x^p_t) + (∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1})(x*_{t+1} − x^p_{t+1})
When ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular, we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}))

where ∂M/∂x abbreviates ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x.
Collecting results,

(‖φ Δz_t‖_2)^2 = (‖φ (H_0 (∂M/∂x_t)^{-1} (Δe^p_t(x_{t-1}, ε) − (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1})) + H_+ (x*_{t+1} − x^p_{t+1}))‖_2)^2

= (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1})(x*_{t+1} − x^p_{t+1}) + H_+ (x*_{t+1} − x^p_{t+1}))‖_2)^2

= (‖φ (H_0 (∂M/∂x_t)^{-1} Δe^p_t(x_{t-1}, ε) − (H_0 (∂M/∂x_t)^{-1} (∂M/∂x_{t+1}) − H_+)(x*_{t+1} − x^p_{t+1}))‖_2)^2
Approximating the derivatives using the H matrices,

∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,   ∂M(x_{t-1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+,

so that H_0 (∂M/∂x_t)^{-1} ≈ I and the coefficient on (x*_{t+1} − x^p_{t+1}) vanishes, produces

max_{x_{t-1}, ε_t} (‖φ Δe^p_t(x_{t-1}, ε)‖_2)^2
B  A Regime Switching Example

To apply the series formula one must construct conditional expectations paths from each initial x(x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for that state:

E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N)
For the next state we can compute expectations if the errors are independent, stacking regime by regime:

[E_t(x_{t+2}(x_t) | s_t = 1); …; E_t(x_{t+2}(x_t) | s_t = N)] = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1); …; E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)]
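With constant transition probabilities this stacking is just a Kronecker product acting on the regime-conditional expectations. A small numeric sketch (Python; the probabilities and expectation values are illustrative):

```python
import numpy as np

# Two regimes, scalar state: P[i, j] = Prob(s_{t+1} = j | s_t = i).
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])

# Hypothetical per-regime values E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j),
# stacked regime by regime (each block has length 1 here).
E_next = np.array([2.0, 5.0])

# E_t(x_{t+2}(x_t) | s_t = i) = sum_j p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j)
E_now = np.kron(P, np.eye(1)) @ E_next
print(E_now)   # [2.3  4.1]
```

For a vector-valued state of dimension n, the identity block in the Kronecker product becomes np.eye(n).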
Consider two states s_t ∈ {0, 1} whose depreciation rates differ, d_0 > d_1, with

Prob(s_t = j | s_{t-1} = i) = p_{ij}
if (s_t = 0 and μ_t > 0 and (k_t − (1 − d_0) k_{t-1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k^α_{t-1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + δ (1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0) k_{t-1})
μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)
if (s_t = 0 and μ_t = 0 and (k_t − (1 − d_0) k_{t-1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k^α_{t-1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + δ (1 − d_0) μ_{t+1})
I_t − (k_t − (1 − d_0) k_{t-1})
μ_t (k_t − (1 − d_0) k_{t-1} − υ I_ss)
if (s_t = 1 and μ_t > 0 and (k_t − (1 − d_1) k_{t-1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k^α_{t-1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + δ (1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1) k_{t-1})
μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)
if (s_t = 1 and μ_t = 0 and (k_t − (1 − d_1) k_{t-1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k^α_{t-1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + δ (1 − d_1) μ_{t+1})
I_t − (k_t − (1 − d_1) k_{t-1})
μ_t (k_t − (1 − d_1) k_{t-1} − υ I_ss)
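The four cases above act as a gate: the active branch is the (regime, constraint-status) pair whose preconditions hold, and only that branch's equation system is solved. A schematic sketch of the gating logic alone (Python; the parameter values and tolerance are illustrative assumptions):

```python
def select_branch(s_t, mu_t, k_t, k_lag, d0, d1, upsilon_Iss, tol=1e-12):
    """Return which of the four case systems applies, mirroring the
    if-blocks above: regime s_t in {0, 1} picks the depreciation rate,
    and the complementary-slackness pair (mu_t, investment constraint)
    picks binding vs. slack."""
    d = d0 if s_t == 0 else d1
    slack = k_t - (1 - d) * k_lag - upsilon_Iss
    if mu_t > 0 and abs(slack) <= tol:
        return (s_t, "binding")
    if abs(mu_t) <= tol and slack >= -tol:
        return (s_t, "slack")
    raise ValueError("complementary slackness violated")

# Regime 0, multiplier positive, constraint binds exactly:
print(select_branch(0, 0.5, k_t=1.0, k_lag=1.0, d0=0.1, d1=0.05, upsilon_Iss=0.1))
# Regime 1, multiplier zero, constraint slack:
print(select_branch(1, 0.0, k_t=1.2, k_lag=1.0, d0=0.1, d1=0.05, upsilon_Iss=0.1))
```

In the full algorithm each branch carries its own root-finding problem; the gate only decides which residual system is active.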
C  Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule, conditional expectations of the x_t components
Z^p(x_{t-1}): the proposed decision rule, conditional expectations of the z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points: a Niederreiter sequence in the model variable ergodic region
γ = {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}: collects the decision rule components
G = {X(x_{t-1}), Z(x_{t-1})}: collects the decision rule conditional expectations components
H = {γ, G}: collects the decision rule information
{C_1, …, C_M} = M^d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x + N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M} = M^d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1  Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option with default ("normConvTol" → 10^{-10}).
2  NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c

Algorithm 1: nestInterp
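Algorithm 1's NestWhile loop is an ordinary fixed-point iteration on the decision-rule approximation, stopped when successive rules agree at the test points. A sketch of the same control flow (Python; the contraction standing in for doInterp is a hypothetical illustration):

```python
def nest_while(step, h0, test_points, tol=1e-10, max_iter=500):
    """Iterate h_{i+1} = step(h_i) until successive rules agree at the
    test points in the sup norm (cf. the normConvTol option above)."""
    h = h0
    for _ in range(max_iter):
        h_next = step(h)
        gap = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if gap < tol:
            return h
    raise RuntimeError("no convergence")

# Illustrative 'doInterp' stand-in: a contraction whose fixed point is h(x) = x/2.
step = lambda h: (lambda x, h=h: 0.5 * (h(x) + 0.5 * x))
rule = nest_while(step, lambda x: x, test_points=[0.0, 1.0, 2.0])
print(abs(rule(2.0) - 1.0) < 1e-8)   # True: converged to the fixed point
```

The real step is the interpolation update of Algorithm 2; the convergence test compares the interpolated rules at the points T rather than a handful of scalars.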
input: (L, H_i, κ, {C_1, …, C_M}|(b, c), S)
1  Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2  theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}|(b, c))
3  H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M}|(b, c), S)
1  Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2  theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1  Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2  Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3  modEqnArgsFunc = genLilXkZkFunc(L, Z)
4  X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5  funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6  funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1  Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (the empty list for κ = 1).
2  X_t = x^p_t
3  for i = 1; i < κ − 1; i++ do
4    {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5  end
6  {{X_{t+1}, Z_{t+1}}, …, {X_{t+κ−1}, Z_{t+κ−1}}} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1  Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2  γ(x, ε) = [ B x + (I − F)^{-1} φ ψ_c + φ ψ_ε ε ; 0_{N_x} ]
3  G(x) = [ B x + (I − F)^{-1} φ ψ_c ; 0_{N_x} ]
4  H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
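Algorithm 6 seeds the iteration with the linear reference model's own rule, x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t, with a zero z block. A scalar numeric sketch (Python; the matrix values are illustrative assumptions):

```python
import numpy as np

# Hypothetical linear reference model pieces L = {H, psi_eps, psi_c, B, phi, F}.
B = np.array([[0.8]])
F = np.array([[0.5]])
phi = np.array([[1.0]])
psi_c = np.array([0.2])
psi_eps = np.array([[1.0]])

def gamma0(x, eps):
    # Initial proposed rule: x_t = B x_{t-1} + (I - F)^{-1} phi psi_c + phi psi_eps eps
    const = np.linalg.inv(np.eye(1) - F) @ phi @ psi_c
    return B @ x + const + phi @ psi_eps @ eps

print(gamma0(np.array([1.0]), np.array([0.0])))   # approximately [1.2]
```

The conditional expectation counterpart G drops the ε term, since the shock has mean zero.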
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1  Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2  fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3  xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4  xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5  fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1  Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2  x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1  Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the E_t(x_{t+1}) component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2  x_{t+1} = B x_t + (I − F)^{-1} φ ψ_c + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1  Numerically sums the F contribution for the series:
2  S = Σ_{ν=1}^{κ−1} F^{ν−1} φ Z_{t+ν}
output: S

Algorithm 10: fSumC
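Algorithm 10's sum is a finite, matrix-weighted accumulation over the supplied Z path, S = Σ_{ν=1}^{κ−1} F^{ν−1} φ Z_{t+ν}. A direct sketch (Python; the values are illustrative):

```python
import numpy as np

def f_sum_c(F, phi, Z_path):
    """S = sum_{nu=1}^{k-1} F^{nu-1} phi Z_{t+nu} over the given finite path."""
    S = np.zeros(phi.shape[0])
    Fpow = np.eye(F.shape[0])       # F^0
    for Z in Z_path:
        S += Fpow @ phi @ Z
        Fpow = Fpow @ F             # advance to the next power of F
    return S

F = np.array([[0.5]])
phi = np.array([[2.0]])
Z_path = [np.array([1.0]), np.array([1.0])]   # Z_{t+1}, Z_{t+2}
print(f_sum_c(F, phi, Z_path))   # [3.0]: 2*1 + 0.5*2*1
```

Because F is stable, truncating the path at κ − 1 terms controls the truncation error geometrically.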
input: ({C_1, …, C_M} = M^d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1  Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2  interpData = genInterpData({C_1, …, C_M}|(b, c), S)
3  {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M} = M^d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1  Solves the model equations at the collocation points, producing the interpolation data used to construct the decision rule and conditional expectation functions.
output: interpData

Algorithm 12: genInterpData

input: (C, (x_{t-1}, ε_t))
1  Evaluate the model equations, obeying the pre and post conditions.
output: x_t, or failed

Algorithm 13: evaluateTriple
input: (V, S)
1  Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
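Turning interpolation data into functions reduces to a linear solve against the basis matrix Ψ (function values at collocation points = Ψ · coefficients), after which the coefficients define the interpolating function. A one-dimensional sketch (Python; a plain monomial basis stands in for the Smolyak polynomials, an assumption for illustration):

```python
import numpy as np

# Collocation points and a hypothetical basis {1, x, x^2} standing in
# for the Smolyak polynomials.
x_pts = np.array([-1.0, 0.0, 1.0])
psi_mat = np.vander(x_pts, 3, increasing=True)   # Psi[i, j] = p_j(x_i)

# Function values at the collocation points (here f(x) = 3 + 2 x^2).
vals = 3.0 + 2.0 * x_pts**2

coeffs = np.linalg.solve(psi_mat, vals)          # interpolation data -> coefficients
interp = lambda x: np.polyval(coeffs[::-1], x)   # the interpolating function
print(interp(0.5))   # 3.5 = 3 + 2 * 0.25
```

The conditional expectation functions come from the same coefficients applied to the integrals of the basis polynomials (the intPolys output below).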
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1  Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1  Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1  Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1  Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1  Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}

Algorithm 19: genSolutionPath
6  A Model with Occasionally Binding Constraints

Stochastic dynamic nonlinear economic models often embody occasionally binding constraints (OBC). Since Christiano and Fisher (2000), a host of authors have described a variety of approaches11 (Holden, 2015; Guerrieri and Iacoviello, 2015b; Benigno et al., 2009; Hintermaier and Koeniger, 2010b; Brumm and Grill, 2010; Nakov, 2008; Haefke, 1998; Nakata, 2012; Gordon, 2011; Billi, 2011; Hintermaier and Koeniger, 2010a; Guerrieri and Iacoviello, 2015a).
Consider adding a constraint to the simple RBC model
I_t ≥ υ I_ss
max u(c_t) + E_t Σ_{t=0}^{∞} δ^{t+1} u(c_{t+1})

subject to

c_t + k_t = (1 − d) k_{t-1} + θ_t f(k_{t-1})
θ_t = θ^ρ_{t-1} e^{ε_t}
f(k_t) = k^α_t
u(c) = (c^{1−η} − 1)/(1 − η)
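Setting θ = 1 and μ = 0, the deterministic steady state of this model satisfies the Euler condition 1 = δ(α k^{α−1} + 1 − d). A quick numeric check (Python; the parameter values are illustrative assumptions, not the paper's calibration):

```python
# Illustrative parameters (assumed, not the paper's calibration).
alpha, delta, d = 0.36, 0.95, 0.1

# Steady-state Euler condition: 1 = delta * (alpha * k**(alpha - 1) + 1 - d),
# solved for k in closed form.
k_ss = ((1.0 / delta - (1.0 - d)) / alpha) ** (1.0 / (alpha - 1.0))

# Residual of the Euler condition at the computed steady state.
resid = 1.0 - delta * (alpha * k_ss ** (alpha - 1.0) + 1.0 - d)
print(abs(resid) < 1e-12)   # True: the steady-state Euler residual vanishes
```

Steady states like this anchor the υ I_ss threshold of the investment constraint and supply a natural center for the approximation grid.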
L = u(c_t) + E_t [ Σ_{t=0}^{∞} δ^t u(c_{t+1}) ] + Σ_{t=0}^{∞} [ δ^t λ_t (c_t + k_t − ((1 − d) k_{t-1} + θ_t f(k_{t-1}))) + δ^t μ_t ((k_t − (1 − d) k_{t-1}) − υ I_ss) ]

so that we can impose

I_t = (k_t − (1 − d) k_{t-1}) ≥ υ I_ss
The first order conditions become

1/c^η_t + λ_t

λ_t − δ λ_{t+1} θ_{t+1} α k^{α−1}_t − δ (1 − d) λ_{t+1} + μ_t − δ (1 − d) μ_{t+1}

For example, the Euler equations for the neoclassical growth model can be written as:

11 The algorithms described in Holden (2015) and Guerrieri and Iacoviello (2015b) also exploit the use of "anticipated shocks" but do not use the comprehensive formula employed here.
if (μ_t > 0 and (k_t − (1 − d) k_{t-1} − υ I_ss) = 0):

λ_t − 1/c_t
c_t + I_t − θ_t k^α_{t-1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d) + δ (1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t-1})
μ_t (k_t − (1 − d) k_{t-1} − υ I_ss)
if (μ_t = 0 and (k_t − (1 − d) k_{t-1} − υ I_ss) ≥ 0):

λ_t − 1/c_t
c_t + I_t − θ_t k^α_{t-1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t-1}) + ε_t}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d) + δ (1 − d) μ_{t+1})
I_t − (k_t − (1 − d) k_{t-1})
μ_t (k_t − (1 − d) k_{t-1} − υ I_ss)
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1, 1, 1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2, 2, 2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.
Columns: k_1, θ_1, ε_1 through k_30, θ_30, ε_30 (30 rows of numerical values)

Table 17: Occasionally Binding Constraints: Model Evaluation Points, d = (1, 1, 1)
Columns: c_1, I_1, k_1 through c_30, I_30, k_30 (30 rows of numerical values)

Table 18: Occasionally Binding Constraints: Values at Evaluation Points, d = (1, 1, 1)
Columns: ln|ê_{c,1}|, ln|ê_{k,1}|, ln|ê_{θ,1}| through ln|ê_{c,30}|, ln|ê_{k,30}|, ln|ê_{θ,30}| (30 rows of numerical values)

Table 19: Occasionally Binding Constraints: Error Approximations, d = (1, 1, 1)
Columns: k_1, θ_1, ε_1 through k_30, θ_30, ε_30 (30 rows of numerical values)

Table 20: Occasionally Binding Constraints: Model Evaluation Points, d = (2, 2, 2)
Columns: k_1, θ_1, ε_1 through k_30, θ_30, ε_30 (30 rows of numerical values)

Table 21: Occasionally Binding Constraints: Values at Evaluation Points, d = (2, 2, 2)
Columns: ln|ê_{c,1}|, ln|ê_{k,1}|, ln|ê_{θ,1}| through ln|ê_{c,30}|, ln|ê_{k,30}|, ln|ê_{θ,30}| (30 rows of numerical values)

Table 22: Occasionally Binding Constraints: Error Approximations, d = (2, 2, 2)
7  Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.
Gianluca Benigno Huigang Chen Christopher Otrok Alessandro Rebucci and Eric RYoung Optimal policy with occasionally binding credit constraints First Draft Nov 2007April 2009
Roberto M Billi Optimal inflation for the us economy American Economic JournalMacroeconomics 3(3)29ndash52 July 2011 URL httpideasrepecorgaaeaaejmacv3y2011i3p29-52html
Andrew Binning and Junior Maih Modelling Occasionally Binding Constraints Us-ing Regime-Switching Working Papers No 92017 Centre for Applied Macro- andPetroleum economics (CAMP) BI Norwegian Business School December 2017 URLhttpsideasrepecorgpbnywpaper0058html
Olivier Jean Blanchard and Charles M Kahn The solution of linear difference models underrational expectations Econometrica 48(5)1305ndash11 July 1980 URL httpideasrepecorgaecmemetrpv48y1980i5p1305-11html
Johannes Brumm and Michael Grill Computing equilibria in dynamic models with occa-sionally binding constraints Technical Report 95 University of Mannheim Center forDoctoral Studies in Economics July 2010
Lawrence J Christiano and Jonas D M Fisher Algorithms for solving dynamic mod-els with occasionally binding constraints Journal of Economic Dynamics and Control24(8)1179ndash1232 July 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-408BVCF-21217643ed2bbecee4fa5a1ec60ed9407e
Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.
Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc, February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.
Grey Gordon Computing dynamic heterogeneous-agent economies Tracking the distri-bution PIER Working Paper Archive 11-018 Penn Institute for Economic ResearchDepartment of Economics University of Pennsylvania June 2011 URL httpideasrepecorgppenpapers11-018html
Luca Guerrieri and Matteo Iacoviello OccBin A toolkit for solving dynamic modelswith occasionally binding constraints easily Journal of Monetary Economics 7022ndash38
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm and the Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
A Error Approximation Formula ProofWe consider time invariant maps such as often arise from solving an optimization problemcodified in a systems of equations In what follows we construct an error approximationfor proposed model solutions Consider a bounded region A where all iterated conditionalexpectations paths remain bounded Given an exact solution xt = g(xtminus1 εt) define theconditional expectations function
G(x) equiv E [g(x ε)]
then with
Etxt+1 = G(g(xtminus1 εt))
we have
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
In words the exact solution exactly satisfies the model equations Using G and L constructthe family of trajectories and corresponding zt (xtminus1 ε)
xt (xtminus1 εt) isin RL xt (xtminus1 εt)2 le X forallt gt 0
zt (xtminus1 εt) equivHminus1xtminus1(xtminus1 εt)+
H0xt (xtminus1 εt)+
H1xt+1(xtminus1 εt)
Consequently the exact solution has a representation given by
xt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzt+ν(xtminus1 ε)
and
E[xt+1(xtminus1 ε)
]= Bxt+k +
infinsumν=0
F νφE[zt+1+ν(xtminus1 ε)
]+ (I minus F )minus1φψc
with
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
Now consider a proposed solution for the model xpt = gp(xtminus1 εt) define Gp(x) equivE [gp(x ε)] so that
Etxt+1 = Gp(gp(xtminus1 εt))ept (xtminus1 ε) equivM(xtminus1 x
pt Etx
pt+1 εt)
56
By construction the conditional expectations paths will all be bounded in the region ApUsing Gp and L construct the family of trajectories and corresponding zpt (xtminus1 ε)12
xpt (xtminus1 εt) isin RL xpt (xtminus1 εt)2 le X forallt gt 0
zpt (xtminus1 εt) equivHminus1xptminus1(xtminus1 εt)+
H0xpt (xtminus1 εt)+
H1xpt+1(xtminus1 εt)
The proposed solution has a representation given by
xpt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzpt+ν(xtminus1 ε)
and
E [xpt+1(xtminus1 ε)] = Bxpt+k +infinsumν=0
F νφzpt+1+ν(xtminus1 ε) + (I minus F )minus1φψc
with
ept (xtminus1 ε) equivM(xtminus1 xpt Etx
pt+1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ(zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
∆zpt+ν(xtminus1 εt) equiv (zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ∆zpt+ν(xtminus1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin leinfinsumν=0
F sφ ∆zpt+ν(xtminus1 εt)infin
By bounding the largest deviation in the paths for the ∆zpt we can bound the largestdifference in xt13 The exact solution satisfies the model equations exactly The error asso-ciated with the proposed solution leads to a conservative bound on the largest change in zneeded to match the exact solution
^{12} The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
^{13} Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
\[
\left\|\Delta z_t\right\| \le \max_{x_-,\varepsilon} \left\|\phi\, M\!\left(x_-,\; g^p(x_-,\varepsilon),\; G^p(g^p(x_-,\varepsilon)),\; \varepsilon\right)\right\|_2
\]
\[
\left\|x^*_t(x_{t-1},\varepsilon) - x^p_t(x_{t-1},\varepsilon)\right\|_\infty \le \max_{x_-,\varepsilon} \left\|(I - F)^{-1}\phi\, M\!\left(x_-,\; g^p(x_-,\varepsilon),\; G^p(g^p(x_-,\varepsilon)),\; \varepsilon\right)\right\|_2
\]
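A hedged numerical sketch of this bound for a toy scalar model (Python rather than the paper's Mathematica prototype; the model, the proposed rule, and all parameter values are invented for illustration). The residual of the proposed rule in the model equations, scaled by the scalar analogue of $(I-F)^{-1}\phi$, is maximized over a grid of initial conditions:

```python
# Hedged sketch of the error-bound formula for a toy scalar model
# M(x_{t-1}, x_t, E_t x_{t+1}, eps) = x_t - b*x_{t-1} - eps, whose exact rule
# is g(x, eps) = b*x + eps.  The proposed rule g_p is deliberately off by a
# constant.  Names and parameter values are illustrative, not from the paper.
b, f, phi = 0.9, 0.5, 1.0
inv_ImF_phi = phi / (1.0 - f)                 # scalar (I - F)^{-1} * phi

def M(xm, xt, Ext1, eps):
    return xt - b * xm - eps

def g_p(x, eps):                              # proposed rule, off by 0.01
    return b * x + eps + 0.01

def G_p(x):                                   # E[g_p(x, eps)] with E[eps] = 0
    return b * x + 0.01

grid_x = [i / 10.0 for i in range(-10, 11)]
grid_e = [i / 100.0 for i in range(-5, 6)]
bound = max(abs(inv_ImF_phi * M(x, g_p(x, e), G_p(g_p(x, e)), e))
            for x in grid_x for e in grid_e)
print(bound)   # residual is 0.01 everywhere, so the bound is 2.0 * 0.01 = 0.02
```

Because the toy proposed rule misses the exact rule by a constant, the model residual is constant on the grid and the bound reduces to that residual scaled by $(I-F)^{-1}\phi$.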
\[
z^*_t = H_{-} x_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}, \qquad
z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}
\]
\[
0 = M(x_{t-1},\, x^*_t,\, x^*_{t+1},\, \varepsilon_t), \qquad
e^p_t(x_{t-1},\varepsilon) = M(x^p_{t-1},\, x^p_t,\, x^p_{t+1},\, \varepsilon_t)
\]
\[
\max_{x_{t-1},\varepsilon_t} \left\|\phi\, \Delta z_t\right\|_2 = \max_{x_{t-1},\varepsilon_t} \left\|\phi\left(H_0\left(x^*_t - x^p_t\right) + H_{+}\left(x^*_{t+1} - x^p_{t+1}\right)\right)\right\|_2
\]
with
\[
M(x_{t-1},\, x^*_t,\, x^*_{t+1},\, \varepsilon_t) = 0
\]
\[
\Delta e^p_t(x_{t-1},\varepsilon) \approx \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}\left(x^*_t - x^p_t\right) + \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}}\left(x^*_{t+1} - x^p_{t+1}\right)
\]
When $\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}$ is non-singular, we can write
\[
\left(x^*_t - x^p_t\right) \approx \left(\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1},\varepsilon) - \frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}}\left(x^*_{t+1} - x^p_{t+1}\right)\right)
\]
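In the scalar case the linearized relation can be solved directly for the decision rule error; a tiny numeric illustration (all values are made up for demonstration):

```python
# Sketch of the linearized error relation solved for (x*_t - x^p_t) in the
# scalar case: dx_t = (dM/dx_t)^{-1} * (de_p - dM/dx_{t+1} * dx_{t+1}).
# All numeric values are hypothetical stand-ins.
dM_dxt, dM_dxt1 = 2.0, 0.5      # stand-ins for the two model derivatives
de_p = 0.1                       # residual change, Delta e^p_t
dx_next = 0.02                   # assumed error at t+1
dx_t = (de_p - dM_dxt1 * dx_next) / dM_dxt
print(dx_t)                      # (0.1 - 0.5*0.02) / 2.0 = 0.045
```

In the multivariate case the division becomes a linear solve against the Jacobian $\partial M / \partial x_t$, which is why its non-singularity is required.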
Collecting results,
\[
\begin{aligned}
\max_{x_{t-1},\varepsilon_t}\left\|\phi\,\Delta z_t\right\|_2
&= \max_{x_{t-1},\varepsilon_t}\left\|\phi\left(H_0\left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1},\varepsilon) - \tfrac{\partial M}{\partial x_{t+1}}\left(x^*_{t+1}-x^p_{t+1}\right)\right) + H_{+}\left(x^*_{t+1}-x^p_{t+1}\right)\right)\right\|_2\\
&= \max_{x_{t-1},\varepsilon_t}\left\|\phi\left(H_0\left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1},\varepsilon) - H_0\left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\tfrac{\partial M}{\partial x_{t+1}}\left(x^*_{t+1}-x^p_{t+1}\right) + H_{+}\left(x^*_{t+1}-x^p_{t+1}\right)\right)\right\|_2\\
&= \max_{x_{t-1},\varepsilon_t}\left\|\phi\left(H_0\left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1},\varepsilon) - \left(H_0\left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\tfrac{\partial M}{\partial x_{t+1}} - H_{+}\right)\left(x^*_{t+1}-x^p_{t+1}\right)\right)\right\|_2
\end{aligned}
\]
where the partial derivatives of $M$ are evaluated at $(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$.
Approximating the derivatives using the $H$ matrices,
\[
\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_t} \approx H_0, \qquad
\frac{\partial M(x_{t-1},x_t,x_{t+1},\varepsilon_t)}{\partial x_{t+1}} \approx H_{+},
\]
produces
\[
\max_{x_{t-1},\varepsilon_t} \left\|\phi\, \Delta e^p_t(x_{t-1},\varepsilon)\right\|_2.
\]
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial x(x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on x_{t-1}, ε_t, or x_t:
\[
E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)
\]
\[
E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)
\]
If we know the current state, we can use the known conditional expectation function for that state:
\[
\begin{bmatrix}
E_t\!\left(x_{t+1}(x_t) \mid s_t = 1\right)\\
\vdots\\
E_t\!\left(x_{t+1}(x_t) \mid s_t = N\right)
\end{bmatrix}
\]
For the next state, we can compute expectations if the errors are independent:
\[
\begin{bmatrix}
E_t\!\left(x_{t+2}(x_t) \mid s_t = 1\right)\\
\vdots\\
E_t\!\left(x_{t+2}(x_t) \mid s_t = N\right)
\end{bmatrix}
= (P \otimes I)
\begin{bmatrix}
E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1\right)\\
\vdots\\
E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = N\right)
\end{bmatrix}
\]
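A small Python sketch of this composition of regime-conditional expectations (the transition matrix and the per-regime expectation operators below are hypothetical; real applications would use the model's conditional expectation functions):

```python
# Hedged sketch of composing regime-conditional expectations with constant
# transition probabilities p_ij.  Each regime j gets a made-up linear
# conditional-expectation operator G_j(x) = a_j * x; the two-period
# expectation averages next-regime one-step expectations, mirroring the
# stacked (P otimes I) composition above.
P = [[0.9, 0.1],
     [0.2, 0.8]]                 # Prob(s_{t+1} = j | s_t = i)
a = [0.95, 0.80]                 # hypothetical per-regime operators G_j(x)

def one_step(i, x):
    """E_t[x_{t+1}] given current regime i."""
    return sum(P[i][j] * a[j] * x for j in range(2))

def two_step(i, x):
    """E_t[x_{t+2}] given current regime i: weight each next regime j by
    p_ij and apply that regime's one-step expectation to its own x_{t+1}."""
    return sum(P[i][j] * one_step(j, a[j] * x) for j in range(2))

print(two_step(0, 1.0))
```

Enumerating regime paths this way is feasible because the probabilities do not depend on the state vector; state-dependent probabilities would require integrating over the joint distribution instead.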
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, and
\[
\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}.
\]
If $\left(s_t = 0 \wedge \mu_t > 0 \wedge \left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right) = 0\right)$:
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\,\delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_0) + \mu_{t+1}\,\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
If $\left(s_t = 0 \wedge \mu_t = 0 \wedge \left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right) \ge 0\right)$:
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\,\delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_0) + \mu_{t+1}\,\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
If $\left(s_t = 1 \wedge \mu_t > 0 \wedge \left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right) = 0\right)$:
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\,\delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_1) + \mu_{t+1}\,\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
If $\left(s_t = 1 \wedge \mu_t = 0 \wedge \left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right) \ge 0\right)$:
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\,\delta N_{t+1} + \lambda_{t+1}\,\delta(1-d_1) + \mu_{t+1}\,\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
61
C Algorithm Pseudo-codeL equiv Hψε ψcB φ F The linear reference model
xp(xtminus1 εt) The proposed decision rule xt components
zp(xtminus1 εt) The proposed decision rule zt components
Xp(xtminus1) The proposed decision rule conditional expectations xt components
Zp(xtminus1) The proposed decision rule their conditional expectations zt components
κ One less than the number of terms in series representation(κ gt= 0)
S Smolyak an-isotropic grid polynomials (p1(x ε) pN(x ε)) and their conditional expec-tations (P1(x) PN(x))
T Model evaluation points Neidereiter sequence in the model variable ergodic region
γ x(xtminus1 εt) z(xtminus1 εt) collects the decision rule components
G x(xtminus1 εt) z(xtminus1 εt) collects the decision rule conditional expectations components
H γG collects decision rule information
C1 CMd(xtminus1 ε xt E [xt+1])|(b c) The equation system R3Nx+Nε+Nx rarr RNx
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by default ("normConvTol" → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, …, H_c
Algorithm 1 nestInterp
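The outer loop of Algorithm 1 can be sketched in Python (the prototype itself uses the Mathematica NestWhile); the update function, the toy coefficient rule, and the test points here are stand-ins, not the paper's doInterp:

```python
# Structural sketch of the nestInterp outer loop: iterate rule -> doInterp(rule)
# until evaluations at the test points stop changing by more than normConvTol.
# The "rule" is just a coefficient pair for a toy contraction; do_interp is a
# placeholder for the real interpolation update.
normConvTol = 1e-10
test_points = [0.0, 0.5, 1.0]

def do_interp(rule):
    """Placeholder update: a contraction with fixed point (0.4, 0.2)."""
    a, b = rule
    return (0.5 * a + 0.2, 0.5 * b + 0.1)

def evaluate(rule, x):
    a, b = rule
    return a * x + b

rule = (0.0, 0.0)                     # H_0: the initial proposed rule
while True:
    new_rule = do_interp(rule)
    diff = max(abs(evaluate(new_rule, x) - evaluate(rule, x))
               for x in test_points)
    rule = new_rule
    if diff < normConvTol:            # notConvergedAt fails, so stop
        break
print(rule)                           # converges to the contraction fixed point
```

Measuring convergence by function values at test points, rather than by coefficient changes, matches the description in step 1.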
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2 doInterp

input: (L, H, κ, {C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t) for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs

input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t)
2 Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4 genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ-1 future conditional Z vectors needed for the series formula (the empty list for κ = 1)
2 X_t = x^p_t
3 for i = 1; i ≤ κ-1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5 end
output: {Z_{t+1}, …, Z_{t+κ-1}}
Algorithm 5 genZsForFindRoot
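The iteration in Algorithm 5 can be sketched as follows (Python stand-in; the rule G below is a made-up scalar map returning the next (X, Z) pair, not the model's conditional expectation function):

```python
# Sketch of genZsForFindRoot: iterate the conditional-expectation rule G to
# produce the kappa-1 future Z vectors the series formula needs.
def gen_zs(x_t, G, kappa):
    zs = []                      # empty list when kappa == 1
    X = x_t
    for _ in range(kappa - 1):
        X, Z = G(X)
        zs.append(Z)
    return zs

def G(x):                        # hypothetical rule: X decays, Z tracks X
    return 0.9 * x, 0.1 * x

print(gen_zs(1.0, G, 4))         # the list [Z_{t+1}, Z_{t+2}, Z_{t+3}]
```

Each pass feeds the previous conditional-expectation X back into G, so the Z path is built entirely from the time-t information set.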
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function
2 γ(x, ε) = [B x + (I - F)^{-1} φ ψ_c + φ ψ_ε ε ; 0_{N_x}]
3 G(x) = [B x + (I - F)^{-1} φ ψ_c ; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6 genBothX0Z0Funcs
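A scalar sketch of genBothX0Z0Funcs (Python; the parameter values are hypothetical): the linear reference model supplies the initial proposed rule, and its conditional expectation drops the mean-zero shock term:

```python
# Sketch of genBothX0Z0Funcs in the scalar case: the linear reference model
# supplies the initial rule gamma(x, eps) = B*x + (I-F)^{-1}*phi*psi_c
# + phi*psi_eps*eps, and its conditional expectation G(x) omits the
# mean-zero eps term.  Parameter values are illustrative only.
B, F, phi, psi_c, psi_eps = 0.9, 0.5, 1.0, 0.05, 1.0

def gamma(x, eps):
    return B * x + phi * psi_c / (1.0 - F) + phi * psi_eps * eps

def G(x):
    return B * x + phi * psi_c / (1.0 - F)

# averaging gamma over a symmetric two-point eps distribution recovers G
x = 1.3
print(abs(0.5 * (gamma(x, 0.02) + gamma(x, -0.02)) - G(x)) < 1e-12)
```

Starting the nonlinear solver from the linear reference model's rule is what makes the reference model "strategic": it is both the series weight generator and the initial guess.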
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3 xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7 genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8 genXtOfXtm1
input: (L, x_{t-1}, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for model system equations M
2 x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t
Algorithm 9 genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Numerically sums the F contribution for the series:
2 $S = \sum_{\nu=1}^{\kappa-1} F^{\nu-1} \phi \psi_z Z_{t+\nu}$
output: S
Algorithm 10 fSumC
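A scalar sketch of fSumC (Python; values are illustrative), checked against the geometric closed form that applies when the future Z's are constant:

```python
# Sketch of fSumC: numerically accumulate S = sum over nu of
# F^{nu-1} * phi * psi_z * Z_{t+nu} for a scalar linear reference model.
def f_sum_c(f, phi, psi_z, z_path):
    """z_path = [Z_{t+1}, ..., Z_{t+kappa-1}]."""
    return sum(f**nu * phi * psi_z * z for nu, z in enumerate(z_path))

f, phi, psi_z = 0.5, 1.0, 1.0
z_path = [0.3] * 20                                   # constant future Z's
partial = f_sum_c(f, phi, psi_z, z_path)
closed = phi * psi_z * 0.3 * (1 - f**20) / (1 - f)    # geometric closed form
print(abs(partial - closed) < 1e-12)
```

The enumerate index starts at zero, which implements the F^{ν-1} weighting for the first element Z_{t+1}.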
input: ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11 makeInterpFuncs

input: ({C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
output: {γ, G}
Algorithm 12 genInterpData

input: (C(x_{t-1}, ε_t))
1 Evaluate the model equations, obeying pre- and post-conditions
output: x_t or failed
Algorithm 13 evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14 interpDataToFunc

input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15 smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16 smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17 genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18 genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19 genSolutionPath
If $\left(\mu_t > 0 \wedge \left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right) = 0\right)$:
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\,\delta N_{t+1} + \lambda_{t+1}\,\delta(1-d) + \mu_{t+1}\,\delta(1-d)\right)\\
&I_t - \left(k_t - (1-d)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
If $\left(\mu_t = 0 \wedge \left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right) \ge 0\right)$:
\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + I_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\,\delta N_{t+1} + \lambda_{t+1}\,\delta(1-d) + \mu_{t+1}\,\delta(1-d)\right)\\
&I_t - \left(k_t - (1-d)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}
\]
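The two-branch gate above can be sketched as follows (Python; the "unconstrained" investment value and the shadow-price formula are toy stand-ins, not the paper's equation system): solve with μ_t = 0, and if the implied investment violates the floor, switch to the binding branch where investment sits at the floor and μ_t > 0:

```python
# Hedged sketch of the occasionally-binding gate: try the slack branch
# (mu = 0); if the floor is violated, impose the binding branch where
# investment equals the floor and the multiplier is positive.
def investment_branch(i_unconstrained, i_floor):
    if i_unconstrained >= i_floor:      # mu = 0 branch, constraint slack
        return i_unconstrained, 0.0
    mu = i_floor - i_unconstrained      # toy positive shadow price
    return i_floor, mu                  # binding branch, mu > 0

print(investment_branch(0.30, 0.25))    # slack branch
print(investment_branch(0.20, 0.25))    # binding branch
```

Exactly one branch's gate conditions hold for any candidate solution, which is what makes the per-branch FindRoot problems in Algorithm 4 well posed.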
Table 17 shows the evaluation points when the Smolyak polynomial degrees of approximation were (1,1,1). Table 18 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 19 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Note that the formula provides accurate estimates of the error, component by component.

Table 20 shows the evaluation points when the Smolyak polynomial degrees of approximation were (2,2,2). Table 21 shows the decision rule prescription for (c_t, k_t, θ_t) at each evaluation point. Table 22 shows the log of the absolute value of the estimated error in the decision rule at each evaluation point. Again, the formula provides accurate estimates of the error, component by component.
[Table 17: Occasionally Binding Constraints Model Evaluation Points, d = (1,1,1). Thirty rows of evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; the numeric entries were garbled in extraction and are omitted here.]
[Table 18: Occasionally Binding Constraints Values at Evaluation Points, d = (1,1,1). Thirty rows of values (c_i, I_i, k_i), i = 1, …, 30; the numeric entries were garbled in extraction and are omitted here.]
[Table 19: Occasionally Binding Constraints Error Approximations, d = (1,1,1). Thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30; the numeric entries were garbled in extraction and are omitted here.]
[Table 20: Occasionally Binding Constraints Model Evaluation Points, d = (2,2,2). Thirty rows of evaluation points (k_i, θ_i, ε_i), i = 1, …, 30; the numeric entries were garbled in extraction and are omitted here.]
[Table 21: Occasionally Binding Constraints Values at Evaluation Points, d = (2,2,2). Thirty rows of values (c_i, I_i, k_i), i = 1, …, 30; the numeric entries were garbled in extraction and are omitted here.]
[Table 22: Occasionally Binding Constraints Error Approximations, d = (2,2,2). Thirty rows of (ln|ê_{c,i}|, ln|ê_{k,i}|, ln|ê_{θ,i}|), i = 1, …, 30; the numeric entries were garbled in extraction and are omitted here.]
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
It minus (Kt minus (1minus d1)ktminus1)microt(kt minus (1minus d1)ktminus1 minus υIss)
61
C Algorithm Pseudo-codeL equiv Hψε ψcB φ F The linear reference model
xp(xtminus1 εt) The proposed decision rule xt components
zp(xtminus1 εt) The proposed decision rule zt components
Xp(xtminus1) The proposed decision rule conditional expectations xt components
Zp(xtminus1) The proposed decision rule their conditional expectations zt components
κ One less than the number of terms in series representation(κ gt= 0)
S Smolyak an-isotropic grid polynomials (p1(x ε) pN(x ε)) and their conditional expec-tations (P1(x) PN(x))
T Model evaluation points Neidereiter sequence in the model variable ergodic region
γ x(xtminus1 εt) z(xtminus1 εt) collects the decision rule components
G x(xtminus1 εt) z(xtminus1 εt) collects the decision rule conditional expectations components
H γG collects decision rule information
C1 CMd(xtminus1 ε xt E [xt+1])|(b c) The equation system R3Nx+Nε+Nx rarr RNx
input (LH0 κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c) ST)1 Compute a sequence of decision rule and conditional expectation rule
function terminating when the evaluation of the functions at thetests points no longer changes much Norm of the differencecontrolled by default (lsquolsquonormConvTolrsquorsquo-gt10minus10)
2 NestWhile(Function(v doInterp(L v κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S))H0 notConvergedAt(vT))output H0H1 Hc
Algorithm 1 nestInterp
62
input (LHi κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Symbolically generates a collection of function triples and an outer
model solution selection function Each of triples returns theboolean gate values and a unique value for xt for any given value of(xtminus1 εt) one of the model equations and subsequently uses thesefunctions to construct interpolating functions approximating thedecision rules
2 theF indRootFuncs =genFindRootFuncs(LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c))
3 Hi+1 = makeInterpFuncs(theF indRootFuncs S)output Hi+1
Algorithm 2 doInterp
input (LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c) S)1 Applies genFindRootWorker to each model component Symbolically
generates a collection of function triples and an outer modelsolution selection function Each of the triples returns theBoolean gate values and a unique value for xt for any given value of(xtminus1 εt) for one from the set of model equations It subsequentlyuses these functions to construct interpolating functionsapproximating the decision rules Each of the functionconstructions can be done in parallel
2 theFuncOptionsTriples =Map(Function(v v[1] genF indRootWorker(LH κ v[2]) v[3]) C1 CM)output theFuncOptionsTriplesd
Algorithm 3 genFindRootFuncs
input (LG κM)1 Symbolically constructs functions that return xt for a given (xtminus1 εt)
2 Z = Zt+1 Zt+kminus1 = genZsForF indRoot(L xpt G)3 modEqnArgsFunc = genLilXkZkFunc(LZ)4 X = xtminus1 xt Et(xt+1) εt = modEqnArgsFunc(xtminus1 εt zt)5 funcOfXtZt = FindRoot((xpt zt) M(X ) xt minus xpt)6 funcOfXtm1Eps = Function((xtminus1 εt) funcOfXtZt)
output funcOfXtm1EpsAlgorithm 4 genFindRootWorker
63
input (xpt LG κM)1 Symbolically compute the κminus 1 future conditional Zrsquos vectors needed
for the series formula(empty list for κ = 1 2 Xt = xpt for i = 1 i lt κminus 1 i+ + do3 Xt+1 Zt+i = G(Zt+iminus1)4 end5 = (Xt+i Zt+1) (Xt+κminus1Zt+κminus1) = genZsForF indRoot(L xpt G)
output Zt+1 Zt+kminus1Algorithm 5 genZsForF indRoot
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x ε) =[Bx+ (I minus F )minus1φψc + ψεεt
0Nx
]
3 G(x ε) =[Bx+ (I minus F )minus1φψc + ψε
0Nx
]4 H = γG
output HAlgorithm 6 genBothX0Z0Funcs
input (L Zt+1 Zt+kminus1)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolic )inputs (xtminus1 xt Et(xt+1) εt)for model system equations M
2 fCon = fSumC(φ F ψz Zt+1 Zt+kminus1)xtV als = genXtOfXtm1(L xtminus1 εt zt fCon)xtp1V als = genXtp1OfXt(L xtV als fCon)fullV ec = xtm1V ars xtV als xtp1V als epsV arsoutput fullV ec
Algorithm 7 genLilXkZkFunc
input (L xtminus1 εt zt fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically) the xt component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + FfConoutput xt
Algorithm 8 genXtOfXtm1
64
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + fConoutput xt
Algorithm 9 genXtp1OfXt
input (L Zt+1 Zt+kminus1)1 Numerically sums the F contribution for the series 2 S = sum
F νminus1φZt+νoutput S
Algorithm 10 fSumC
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
2 interpData = genInterpData(C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)γG = interpDataToFunc(interData S)output γG
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
Table 17: Occasionally Binding Constraints: Model Evaluation Points, d = (1, 1, 1). Columns $(k_i, \theta_i, \varepsilon_i)$, rows $i = 1, \ldots, 30$. [Numeric table body not reliably recoverable from the extraction.]
Table 18: Occasionally Binding Constraints: Values at Evaluation Points, d = (1, 1, 1). Columns $(c_i, I_i, k_i)$, rows $i = 1, \ldots, 30$. [Numeric table body not reliably recoverable from the extraction.]
Table 19: Occasionally Binding Constraints: Error Approximations, d = (1, 1, 1). Columns $\ln(|\hat{e}_{c_i}|), \ln(|\hat{e}_{k_i}|), \ln(|\hat{e}_{\theta_i}|)$, rows $i = 1, \ldots, 30$. [Numeric table body not reliably recoverable from the extraction.]
Table 20: Occasionally Binding Constraints: Model Evaluation Points, d = (2, 2, 2). Columns $(k_i, \theta_i, \varepsilon_i)$, rows $i = 1, \ldots, 30$. [Numeric table body not reliably recoverable from the extraction.]
Table 21: Occasionally Binding Constraints: Values at Evaluation Points, d = (2, 2, 2). Columns $(c_i, I_i, k_i)$, rows $i = 1, \ldots, 30$, paralleling Table 18. [Numeric table body not reliably recoverable from the extraction.]
Table 22: Occasionally Binding Constraints: Error Approximations, d = (2, 2, 2). Columns $\ln(|\hat{e}_{c_i}|), \ln(|\hat{e}_{k_i}|), \ln(|\hat{e}_{\theta_i}|)$, rows $i = 1, \ldots, 30$. [Numeric table body not reliably recoverable from the extraction.]
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time $t$ solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft November 2007; April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Paper 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-1311, July 1980.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, February 1997.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, University of Pennsylvania, June 2011.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. doi:10.1016/j.jmoneco.2014.08.005.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections: parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. doi:10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Paper Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. doi:10.1016/j.jedc.2014.03.003.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. doi:10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Paper Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. doi:10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. doi:10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $\mathcal{A}$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^\star_t = g^\star(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G^\star(x) \equiv E\left[g^\star(x, \varepsilon)\right].$$

Then, with

$$E_t x^\star_{t+1} = G^\star(g^\star(x_{t-1}, \varepsilon_t)),$$

we have

$$M(x_{t-1}, x^\star_t, E_t x^\star_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
In words, the exact solution exactly satisfies the model equations. Using $G^\star$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^\star_t(x_{t-1}, \varepsilon)$:

$$\left\{x^\star_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x^\star_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z^\star_t(x_{t-1}, \varepsilon_t) \equiv H_- x^\star_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^\star_t(x_{t-1}, \varepsilon_t) + H_+ x^\star_{t+1}(x_{t-1}, \varepsilon_t).$$

Consequently, the exact solution has a representation given by

$$x^\star_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^\nu \phi\, z^\star_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^\star_{t+1}(x_{t-1}, \varepsilon)\right] = B x^\star_t + \sum_{\nu=0}^{\infty} F^\nu \phi\, E\left[z^\star_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c,$$

with

$$M(x_{t-1}, x^\star_t, E_t x^\star_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t).$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$
By construction, the conditional expectations paths will all be bounded in the region $\mathcal{A}^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:[12]

$$\left\{x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \|x^p_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_- x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_+ x^p_{t+1}(x_{t-1}, \varepsilon_t).$$

The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^\nu \phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^\nu \phi\, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c,$$

with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t).$$

Differencing the two representations gives

$$x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^\nu \phi \left(z^\star_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right).$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^\star_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon),$$

we have

$$x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^\nu \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\left\|x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^\nu \phi\right\| \left\|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\right\|_\infty.$$
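The geometric-sum structure behind this bound can be checked numerically. The sketch below uses small hypothetical matrices (not from the paper's models; $F$ is chosen with spectral radius below one, as the series representation requires) and verifies that a constant worst-case path $\Delta z$ makes the series collapse to the closed form $(I - F)^{-1}\phi\,\Delta z$ used in the bound below:

```python
import numpy as np

# Hypothetical linear-reference-model pieces: F must have spectral
# radius below one for the series to converge (here it is 0.5).
F = np.array([[0.5, 0.2, 0.0],
              [0.0, 0.3, 0.1],
              [0.0, 0.0, -0.2]])
phi = np.array([[1.0, 0.5, 0.0],
                [0.0, 1.0, 0.5],
                [0.5, 0.0, 1.0]])
dz = np.full(3, 0.2)  # constant path with |dz| at its sup-norm bound

# Truncated series sum_nu F^nu phi dz versus the closed form (I-F)^{-1} phi dz
tail = sum(np.linalg.matrix_power(F, k) @ (phi @ dz) for k in range(200))
closed = np.linalg.inv(np.eye(3) - F) @ (phi @ dz)
print(np.allclose(tail, closed))  # True
```

The closed form is what turns a per-period bound on $\|\Delta z^p_{t+\nu}\|_\infty$ into a bound on the decision-rule error itself.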
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x^\star_t$.[13] The exact solution satisfies the model equations exactly; the error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
[12] The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

[13] Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $\mathcal{A}^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{\bar{x}, \bar{\varepsilon}} \left\|\phi\, M\!\left(\bar{x},\, g^p(\bar{x}, \bar{\varepsilon}),\, G^p(g^p(\bar{x}, \bar{\varepsilon})),\, \bar{\varepsilon}\right)\right\|_2$$

$$\left\|x^\star_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{\bar{x}, \bar{\varepsilon}} \left\|(I - F)^{-1} \phi\, M\!\left(\bar{x},\, g^p(\bar{x}, \bar{\varepsilon}),\, G^p(g^p(\bar{x}, \bar{\varepsilon})),\, \bar{\varepsilon}\right)\right\|_2$$

With

$$z^\star_t = H_- x^\star_{t-1} + H_0 x^\star_t + H_+ x^\star_{t+1}$$
$$z^p_t = H_- x^p_{t-1} + H_0 x^p_t + H_+ x^p_{t+1}$$
$$0 = M(x_{t-1}, x^\star_t, x^\star_{t+1}, \varepsilon_t)$$
$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t),$$

we obtain

$$\max_{x_{t-1}, \varepsilon_t} \left(\|\phi \Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\left(H_0 (x^\star_t - x^p_t) + H_+ (x^\star_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2.$$
With

$$M(x_{t-1}, x^\star_t, x^\star_{t+1}, \varepsilon_t) = 0,$$

a first-order expansion gives

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\left(x^\star_t - x^p_t\right) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^\star_{t+1} - x^p_{t+1}\right).$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$\left(x^\star_t - x^p_t\right) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}\left(x^\star_{t+1} - x^p_{t+1}\right)\right).$$
Collecting results,

$$\left(\|\phi \Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}}\left(x^\star_{t+1} - x^p_{t+1}\right)\right) + H_+\left(x^\star_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1}\frac{\partial M}{\partial x_{t+1}} - H_+\right)\left(x^\star_{t+1} - x^p_{t+1}\right)\right)\right\|_2\right)^2,$$

where the derivatives of $M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$ are evaluated along the solution.
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_+,$$

makes the coefficient on $(x^\star_{t+1} - x^p_{t+1})$ vanish (since $H_0 H_0^{-1} H_+ - H_+ = 0$) and produces

$$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2\right)^2.$$
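The cancellation in the final step can be verified mechanically. A small numerical sketch with arbitrary hypothetical matrices (stand-ins for the model derivatives, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
H0 = rng.normal(size=(n, n)) + 4.0 * np.eye(n)  # stand-in for dM/dx_t, kept non-singular
Hp = rng.normal(size=(n, n))                    # stand-in for dM/dx_{t+1}
phi = rng.normal(size=(n, n))
de = rng.normal(size=n)                         # Delta e^p_t residual
dx1 = rng.normal(size=n)                        # x*_{t+1} - x^p_{t+1}

A_inv = np.linalg.inv(H0)
# "Collecting results" expression, before approximating the derivatives:
full = phi @ (H0 @ A_inv @ de - (H0 @ A_inv @ Hp - Hp) @ dx1)
# With dM/dx_t ~ H0 and dM/dx_{t+1} ~ H_+, the bracket H0 H0^{-1} H_+ - H_+
# is exactly zero, so only the residual term phi @ de survives:
print(np.allclose(full, phi @ de))  # True
```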
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial state $(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)$$

If we know the current state, we can use the known conditional expectation function for the state:

$$\begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix}$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
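As a concrete sketch of this stacking (the linear per-regime maps $A_j$ and all numbers are hypothetical, not from the paper's model), the two-step expectation can be computed either by applying the $(P \otimes I)$ weighting to the stacked regime-conditional blocks or by direct enumeration over regime paths, and the two agree:

```python
import numpy as np

# Hypothetical per-regime one-step conditional expectation maps x -> A_j x
A = [np.array([[0.9]]), np.array([[0.8]])]
P = np.array([[0.95, 0.05],
              [0.10, 0.90]])  # P[i, j] = Prob(s_{t+1} = j | s_t = i)
I = np.eye(1)

def stacked_blocks(x):
    """j-th block: E_t[x_{t+2} | s_{t+1} = j], averaging over s_{t+2}."""
    out = []
    for j in range(2):
        x1 = A[j] @ x  # E[x_{t+1} | s_{t+1} = j]
        out.append(sum(P[j, k] * (A[k] @ x1) for k in range(2)))
    return np.concatenate(out)

x = np.array([1.0])
stacked = np.kron(P, I) @ stacked_blocks(x)  # i-th entry: E_t[x_{t+2} | s_t = i]
direct = sum(P[0, j] * P[j, k] * (A[k] @ (A[j] @ x))
             for j in range(2) for k in range(2))[0]
print(np.isclose(stacked[0], direct))  # True
```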
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, with

$$\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}.$$
If $(s_t = 0 \wedge \mu > 0 \wedge k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} = 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0)\right)\\
&I_t - \left(k_t - (1 - d_0) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $(s_t = 0 \wedge \mu = 0 \wedge k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss} \ge 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0)\right)\\
&I_t - \left(k_t - (1 - d_0) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $(s_t = 1 \wedge \mu > 0 \wedge k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} = 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1)\right)\\
&I_t - \left(k_t - (1 - d_1) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $(s_t = 1 \wedge \mu = 0 \wedge k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss} \ge 0)$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha - 1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1)\right)\\
&I_t - \left(k_t - (1 - d_1) k_{t-1}\right)\\
&\mu_t \left(k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
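The solution procedure handles these regime and constraint combinations by trying each branch and keeping the one whose side conditions hold. A toy scalar sketch of that gate logic (the capital law of motion and the multiplier proxy here are illustrative stand-ins, not the model's equations):

```python
def investment_branch(k_prev, d, bound, unconstrained_k):
    """Complementarity gate: accept the mu = 0 branch when the
    constraint k_t - (1-d) k_{t-1} >= bound is slack, otherwise
    switch to the binding branch with a positive multiplier."""
    k = unconstrained_k(k_prev)            # candidate from the mu = 0 system
    slack = k - (1.0 - d) * k_prev - bound
    if slack >= 0.0:
        return k, 0.0                      # constraint slack: mu = 0
    k = (1.0 - d) * k_prev + bound         # binding: investment at its floor
    return k, -slack                       # illustrative positive multiplier

# Hypothetical unconstrained rule: k_t = 0.9 k_{t-1}
k, mu = investment_branch(1.0, d=0.1, bound=0.05,
                          unconstrained_k=lambda kp: 0.9 * kp)
print(mu > 0)  # True: the binding branch is selected
```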
C Algorithm Pseudo-code

$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model
$x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components
$z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components
$X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components
$Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components
$\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
$S$: Smolyak anisotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$
$T$: model evaluation points, a Niederreiter sequence in the model variable ergodic region
$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, collects the decision rule components
$G$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, collects the decision rule conditional expectations components
$H$: $\{\gamma, G\}$, collects decision rule information
$\{C_1, \ldots, C_M\}$: $d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b, c)}$, the equation system $\mathbb{R}^{3N_x + N_\varepsilon} \rightarrow \mathbb{R}^{N_x}$
input: $(\mathcal{L}, H_0, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S, T)$
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by a default tolerance (``normConvTol'' $\rightarrow 10^{-10}$).
2 NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M\}$, $d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}$, S)), $H_0$, notConvergedAt(v, T))
output: $\{H_0, H_1, \ldots, H_c\}$
Algorithm 1: nestInterp
62
input (LHi κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Symbolically generates a collection of function triples and an outer
model solution selection function Each of triples returns theboolean gate values and a unique value for xt for any given value of(xtminus1 εt) one of the model equations and subsequently uses thesefunctions to construct interpolating functions approximating thedecision rules
2 theF indRootFuncs =genFindRootFuncs(LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c))
3 Hi+1 = makeInterpFuncs(theF indRootFuncs S)output Hi+1
Algorithm 2 doInterp
input (LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c) S)1 Applies genFindRootWorker to each model component Symbolically
generates a collection of function triples and an outer modelsolution selection function Each of the triples returns theBoolean gate values and a unique value for xt for any given value of(xtminus1 εt) for one from the set of model equations It subsequentlyuses these functions to construct interpolating functionsapproximating the decision rules Each of the functionconstructions can be done in parallel
2 theFuncOptionsTriples =Map(Function(v v[1] genF indRootWorker(LH κ v[2]) v[3]) C1 CM)output theFuncOptionsTriplesd
Algorithm 3 genFindRootFuncs
input (LG κM)1 Symbolically constructs functions that return xt for a given (xtminus1 εt)
2 Z = Zt+1 Zt+kminus1 = genZsForF indRoot(L xpt G)3 modEqnArgsFunc = genLilXkZkFunc(LZ)4 X = xtminus1 xt Et(xt+1) εt = modEqnArgsFunc(xtminus1 εt zt)5 funcOfXtZt = FindRoot((xpt zt) M(X ) xt minus xpt)6 funcOfXtm1Eps = Function((xtminus1 εt) funcOfXtZt)
output funcOfXtm1EpsAlgorithm 4 genFindRootWorker
63
input (xpt LG κM)1 Symbolically compute the κminus 1 future conditional Zrsquos vectors needed
for the series formula(empty list for κ = 1 2 Xt = xpt for i = 1 i lt κminus 1 i+ + do3 Xt+1 Zt+i = G(Zt+iminus1)4 end5 = (Xt+i Zt+1) (Xt+κminus1Zt+κminus1) = genZsForF indRoot(L xpt G)
output Zt+1 Zt+kminus1Algorithm 5 genZsForF indRoot
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x ε) =[Bx+ (I minus F )minus1φψc + ψεεt
0Nx
]
3 G(x ε) =[Bx+ (I minus F )minus1φψc + ψε
0Nx
]4 H = γG
output HAlgorithm 6 genBothX0Z0Funcs
input (L Zt+1 Zt+kminus1)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolic )inputs (xtminus1 xt Et(xt+1) εt)for model system equations M
2 fCon = fSumC(φ F ψz Zt+1 Zt+kminus1)xtV als = genXtOfXtm1(L xtminus1 εt zt fCon)xtp1V als = genXtp1OfXt(L xtV als fCon)fullV ec = xtm1V ars xtV als xtp1V als epsV arsoutput fullV ec
Algorithm 7 genLilXkZkFunc
input (L xtminus1 εt zt fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically) the xt component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + FfConoutput xt
Algorithm 8 genXtOfXtm1
64
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + fConoutput xt
Algorithm 9 genXtp1OfXt
input (L Zt+1 Zt+kminus1)1 Numerically sums the F contribution for the series 2 S = sum
F νminus1φZt+νoutput S
Algorithm 10 fSumC
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
2 interpData = genInterpData(C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)γG = interpDataToFunc(interData S)output γG
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
Table 18: Occasionally Binding Constraints, Values at Evaluation Points, d = (1, 1, 1). [Columns c_t, I_t, k_t at each of the 30 evaluation points; numeric entries omitted.]
Table 19: Occasionally Binding Constraints, Error Approximations, d = (1, 1, 1). [Columns ln|ê_c|, ln|ê_k|, ln|ê_θ| at each of the 30 evaluation points; numeric entries omitted.]
Table 20: Occasionally Binding Constraints, Model Evaluation Points, d = (2, 2, 2). [Columns k, θ, ε at each of the 30 evaluation points; numeric entries omitted.]
Table 21: Occasionally Binding Constraints, Values at Evaluation Points, d = (2, 2, 2). [Values at each of the 30 evaluation points; numeric entries omitted.]
Table 22: Occasionally Binding Constraints, Error Approximations, d = (2, 2, 2). [Columns ln|ê_c|, ln|ê_k|, ln|ê_θ| at each of the 30 evaluation points; numeric entries omitted.]
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectation paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x^*_t = g(x_{t-1}, ε_t), define the conditional expectations function

\[ G(x) \equiv E\left[ g(x, \epsilon) \right] \]

then with

\[ E_t x^*_{t+1} = G(g(x_{t-1}, \epsilon_t)) \]

we have

\[ M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t). \]

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z^*_t(x_{t-1}, ε):

\[ \left\{ x^*_t(x_{t-1}, \epsilon_t) \in \mathbb{R}^{L} : \left\| x^*_t(x_{t-1}, \epsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\} \]

\[ z^*_t(x_{t-1}, \epsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \epsilon_t) + H_0 x^*_t(x_{t-1}, \epsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \epsilon_t) \]

Consequently, the exact solution has a representation given by

\[ x^*_t(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_\epsilon \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^*_{t+\nu}(x_{t-1}, \epsilon) \]

and

\[ E\left[ x^*_{t+1}(x_{t-1}, \epsilon) \right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[ z^*_{t+1+\nu}(x_{t-1}, \epsilon) \right] + (I - F)^{-1} \phi \psi_c \]

with

\[ M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t). \]

Now consider a proposed solution for the model, x^p_t = g^p(x_{t-1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

\[ E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \epsilon_t)), \qquad e^p_t(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \epsilon_t). \]
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t-1}, ε):¹²

\[ \left\{ x^p_t(x_{t-1}, \epsilon_t) \in \mathbb{R}^{L} : \left\| x^p_t(x_{t-1}, \epsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\} \]

\[ z^p_t(x_{t-1}, \epsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \epsilon_t) + H_0 x^p_t(x_{t-1}, \epsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \epsilon_t) \]

The proposed solution has a representation given by

\[ x^p_t(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_\epsilon \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \epsilon) \]

and

\[ E\left[ x^p_{t+1}(x_{t-1}, \epsilon) \right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+1+\nu}(x_{t-1}, \epsilon) + (I - F)^{-1} \phi \psi_c \]

with

\[ e^p_t(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \epsilon_t). \]

Subtracting the two representations and defining \( \Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \epsilon) - z^p_{t+\nu}(x_{t-1}, \epsilon) \),

\[ x^*_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t) \]

\[ \left\| x^*_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) \right\|_\infty \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \left\| \Delta z^p_{t+\nu}(x_{t-1}, \epsilon_t) \right\|_\infty \]

By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in x^*_t.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
\[ \left\| \Delta z_t \right\| \le \max_{x_{-}, \epsilon} \left\| \phi\, M(x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon) \right\|_2 \]

\[ \left\| x^*_t(x_{t-1}, \epsilon) - x^p_t(x_{t-1}, \epsilon) \right\|_\infty \le \max_{x_{-}, \epsilon} \left\| (I - F)^{-1} \phi\, M(x_{-}, g^p(x_{-}, \epsilon), G^p(g^p(x_{-}, \epsilon)), \epsilon) \right\|_2 \]

Writing the z's in terms of the structural matrices,

\[ z^*_t = H_{-} x^*_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}, \qquad z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1} \]

\[ 0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \epsilon_t), \qquad e^p_t(x_{t-1}, \epsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \epsilon_t) \]

so that

\[ \max_{x_{t-1}, \epsilon_t} \left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \max_{x_{t-1}, \epsilon_t} \left( \left\| \phi \left( H_0 (x^p_t - x^*_t) + H_{+} (x^p_{t+1} - x^*_{t+1}) \right) \right\|_2 \right)^2 \]

With \( M(x_{t-1}, x^*_t, x^*_{t+1}, \epsilon_t) = 0 \), a first-order expansion gives

\[ \Delta e^p_t(x_{t-1}, \epsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \]

When \( \partial M / \partial x_t \) is non-singular, we can write

\[ (x^*_t - x^p_t) \approx \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \epsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) \]

Collecting results,

\[ \left( \left\| \phi \Delta z_t \right\|_2 \right)^2 = \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \epsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_{+} \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2^2 \]

Approximating the derivatives using the H matrices,

\[ \frac{\partial M}{\partial x_t} \approx H_0, \qquad \frac{\partial M}{\partial x_{t+1}} \approx H_{+}, \]

the \( (x^*_{t+1} - x^p_{t+1}) \) term drops out, producing

\[ \max_{x_{t-1}, \epsilon_t} \left\| \phi\, \Delta e^p_t(x_{t-1}, \epsilon) \right\|_2. \]
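In practice, this bound can be approximated by scanning the model-equation residuals of a proposed rule over a set of test points. A minimal sketch (in Python/numpy rather than the paper's Mathematica prototype; `M_residual` and `decision_rule_error_bound` are illustrative names, not the prototype's API) computes the maximum over test points of ||(I − F)⁻¹ φ e^p_t||₂:

```python
import numpy as np

def decision_rule_error_bound(M_residual, F, phi, test_points):
    """Approximate max over test points of ||(I - F)^{-1} phi e_t^p||_2, where
    M_residual(x_prev, eps) returns the model-equation residual e_t^p of the
    proposed rule.  Function and argument names are illustrative."""
    amp = np.linalg.inv(np.eye(F.shape[0]) - F) @ phi   # (I - F)^{-1} phi
    worst = 0.0
    for x_prev, eps in test_points:
        worst = max(worst, np.linalg.norm(amp @ M_residual(x_prev, eps), 2))
    return worst
```

With a constant residual of 0.01, F = 0.5 and φ = 1 in one dimension, the amplification factor is (1 − 0.5)⁻¹ = 2, so the bound is 0.02.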
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t( x_{t+1}(x_t) \mid s_{t+1} = j ) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t( x_{t+2}(x_{t+1}) \mid s_{t+2} = j ) \]

If we know the current state, we can use the known conditional expectation function for the state:

\[ E_t(x_{t+1}(x_t) \mid s_t = 1), \;\ldots,\; E_t(x_{t+1}(x_t) \mid s_t = N) \]

For the next state, we can compute expectations if the errors are independent:

\[ \begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix} \]

Consider two states s_t ∈ {0, 1} where the depreciation rates are different, d_0 > d_1, and

\[ \mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}. \]
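The probability-weighted expectation above reduces to a small sum over regimes. A minimal numeric sketch (Python rather than the Mathematica prototype; the per-regime rules `g[j]` stand in for the conditional expectation functions):

```python
import numpy as np

def regime_expectation(P, g, i, x_t):
    """E_t[x_{t+1}] given current regime i with constant transition matrix P
    (p_ij = Prob(s_{t+1} = j | s_t = i)): sum_j p_ij * g_j(x_t).  The
    per-regime rules g[j] stand in for E_t(x_{t+1}(x_t) | s_{t+1} = j)."""
    return sum(P[i, j] * g[j](x_t) for j in range(P.shape[1]))
```

For example, with P = [[0.9, 0.1], [0.2, 0.8]] and linear per-regime rules 2x and 3x, the expectation from regime 0 at x = 1 is 0.9·2 + 0.1·3 = 2.1.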
Each regime and Kuhn–Tucker branch solves the same system of equations, with depreciation rate d_s for s ∈ {0, 1}:

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \epsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1}\delta(1-d_s) + \mu_{t+1}\delta(1-d_s) \right) \\
&I_t - \left( k_t - (1-d_s)k_{t-1} \right) \\
&\mu_t \left( k_t - (1-d_s)k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]

The four gated branches are:

- if (s_t = 0 ∧ μ_t > 0 ∧ k_t − (1 − d_0)k_{t−1} − υI_{ss} = 0): the system above with d_0
- if (s_t = 0 ∧ μ_t = 0 ∧ k_t − (1 − d_0)k_{t−1} − υI_{ss} ≥ 0): the system above with d_0
- if (s_t = 1 ∧ μ_t > 0 ∧ k_t − (1 − d_1)k_{t−1} − υI_{ss} = 0): the system above with d_1
- if (s_t = 1 ∧ μ_t = 0 ∧ k_t − (1 − d_1)k_{t−1} − υI_{ss} ≥ 0): the system above with d_1
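The gated systems implement an ordinary Kuhn–Tucker branch search: solve each branch, then keep the candidate whose gate (the sign of the multiplier μ and of the constraint slack) actually holds. A toy sketch (Python, with an illustrative scalar "investment floor" in place of the full RBC system; names are not the paper's code):

```python
# Toy lower bound playing the role of upsilon * I_ss in the gates above.
I_MIN = 0.0

def slack_branch(x_prev, eps):
    return {"I": x_prev + eps, "mu": 0.0}              # constraint not binding

def slack_gate(c, tol):
    return c["I"] >= I_MIN - tol                       # slack must be nonnegative

def binding_branch(x_prev, eps):
    return {"I": I_MIN, "mu": I_MIN - (x_prev + eps)}  # constraint binds

def binding_gate(c, tol):
    return c["mu"] >= -tol                             # multiplier must be nonnegative

def solve_with_gates(branches, x_prev, eps, tol=1e-12):
    """Solve each Kuhn-Tucker branch; keep the first candidate whose gate holds."""
    for solve, gate in branches:
        candidate = solve(x_prev, eps)
        if candidate is not None and gate(candidate, tol):
            return candidate
    return None  # the 'failed' outcome of evaluateTriple in Appendix C

BRANCHES = [(slack_branch, slack_gate), (binding_branch, binding_gate)]
```

When the unconstrained choice x_prev + eps is positive the slack branch's gate holds; when it is negative, the binding branch is selected and μ turns positive.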
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F} The linear reference model
x^p(x_{t−1}, ε_t) The proposed decision rule, x_t components
z^p(x_{t−1}, ε_t) The proposed decision rule, z_t components
X^p(x_{t−1}) The proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}) The proposed decision rule conditional expectations, z_t components
κ One less than the number of terms in the series representation (κ ≥ 0)
S Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T Model evaluation points: a Niederreiter sequence in the model variable ergodic region
γ {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G {X(x_{t−1}), Z(x_{t−1})} collects the decision rule conditional expectation components
H {γ, G} collects the decision rule information
{C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c) The equation system ℝ^{3N_x+N_ε} → ℝ^{N_x}
input: (L, H_0, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10⁻¹⁰).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1: nestInterp
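The outer loop of Algorithm 1 is an ordinary fixed-point iteration on the rule H. A Python sketch of the NestWhile/notConvergedAt logic (illustrative names, not the Mathematica prototype's API):

```python
def nest_interp(do_interp, H0, test_points, norm_conv_tol=1e-10, max_iter=500):
    """Fixed-point iteration H_{i+1} = doInterp(H_i), stopping when the rule's
    values at the test points stop changing (the NestWhile / notConvergedAt
    logic of Algorithm 1).  Names are illustrative."""
    H = H0
    for _ in range(max_iter):
        H_next = do_interp(H)
        # Sup-norm of the change at the test points stands in for notConvergedAt.
        diff = max(abs(H_next(p) - H(p)) for p in test_points)
        H = H_next
        if diff <= norm_conv_tol:
            break
    return H
```

For a toy contraction doInterp: f ↦ 0.5·f + 1, the iteration converges to the constant rule H(x) = 2.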
input: (L, H_i, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t), for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t−1}, ε_t), for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (the empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i ≤ κ − 1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(X_{t+i−1})
5 end
6 Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})}
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot
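The path construction in Algorithm 5 is a simple iteration of the proposed conditional-expectation rule. A sketch (Python; the signature of `G`, which here maps a state to its (X_next, Z_next) pair, is an assumption of this sketch):

```python
def gen_zs_for_find_root(G, x_t, kappa):
    """Iterate the proposed conditional-expectation rule G to build the
    kappa - 1 future (X, Z) pairs the series formula needs (Algorithm 5);
    returns the empty list for kappa = 1."""
    path = []
    X = x_t
    for _ in range(kappa - 1):
        X, Z = G(X)          # (X_{t+i}, Z_{t+i}) from the previous X
        path.append((X, Z))
    return path
```

With a toy rule G(x) = (0.5x, 0.1x) and κ = 3, the path from x_t = 1 is [(0.5, 0.1), (0.25, 0.05)].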
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [ Bx + (I − F)⁻¹φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3 G(x) = [ Bx + (I − F)⁻¹φψ_c ; 0_{N_x} ]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, {x_{t−1}, ε_t, z_t}, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = Bx_{t−1} + (I − F)⁻¹φψ_c + φψ_ε ε_t + φψ_z z_t + F·fCon
output: x_t
Algorithm 8: genXtOfXtm1
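The formula in Algorithm 8 is a direct matrix computation. A numeric sketch (Python/numpy; the dict layout of `L` is an assumption of this sketch, not the prototype's data structure):

```python
import numpy as np

def x_t_of_xtm1(L, x_prev, eps, z_t, f_con):
    """x_t = B x_{t-1} + (I - F)^{-1} phi psi_c + phi psi_eps eps
           + phi psi_z z_t + F fCon      (the formula in Algorithm 8)."""
    B, F, phi = L["B"], L["F"], L["phi"]
    psi_c, psi_eps, psi_z = L["psi_c"], L["psi_eps"], L["psi_z"]
    n = F.shape[0]
    const = np.linalg.inv(np.eye(n) - F) @ phi @ psi_c   # (I-F)^{-1} phi psi_c
    return B @ x_prev + const + phi @ (psi_eps @ eps + psi_z @ z_t) + F @ f_con
```

In one dimension with B = 0.9, F = 0.5, φ = 1, ψ_c = 0.1 and zero shock, z and fCon, the constant term is 2·0.1 = 0.2, so x_t = 0.9·x_{t−1} + 0.2.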
input: (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_{t+1} = Bx_t + (I − F)⁻¹φψ_c + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series:
2 S = Σ_{ν=1}^{κ−1} F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
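The truncated sum in Algorithm 10 accumulates powers of F as it walks the Z path. A numeric sketch (Python/numpy; names are illustrative):

```python
import numpy as np

def f_sum_c(F, phi, Z_future):
    """Truncated series S = sum_{nu=1}^{kappa-1} F^{nu-1} phi Z_{t+nu}
    (Algorithm 10); Z_future = [Z_{t+1}, ..., Z_{t+kappa-1}]."""
    total = np.zeros(F.shape[0])
    F_pow = np.eye(F.shape[0])        # F^{nu-1}, starting from the identity
    for Z in Z_future:
        total = total + F_pow @ phi @ Z
        F_pow = F_pow @ F             # advance to the next power of F
    return total
```

With F = 0.5, φ = 2 and two unit Z terms, S = 2·1 + 0.5·2·1 = 3.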
input: ({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M} d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
ln(|ê_c,i|)   ln(|ê_k,i|)   ln(|ê_θ,i|)   (rows i = 1, ..., 30)

-4.31395   -4.55496   -4.55496
-4.49983   -4.13333   -4.13333
-5.5352    -5.24857   -5.24857
-5.8198    -5.84969   -5.84969
-4.60287   -4.60328   -4.60328
-4.41983   -4.10467   -4.10467
-4.62802   -4.55181   -4.55181
-5.26804   -4.74828   -4.74828
-8.43803   -8.15898   -8.15898
-5.35434   -5.28501   -5.28501
-3.85154   -3.66516   -3.66516
-4.38957   -4.32099   -4.32099
-4.47738   -4.23689   -4.23689
-5.01928   -5.04091   -5.04091
-5.51293   -4.94452   -4.94452
-3.83164   -3.64992   -3.64992
-4.53368   -4.51412   -4.51412
-4.74133   -4.66482   -4.66482
-7.3604    -5.00693   -5.00693
-7.71361   -5.96299   -5.96299
-4.71182   -4.60582   -4.60582
-4.81987   -4.52925   -4.52925
-6.36037   -5.73385   -5.73385
-6.04304   -5.80405   -5.80405
-6.58347   -5.58301   -5.58301
-3.45988   -3.27868   -3.27868
-4.15596   -4.15415   -4.15415
-4.2487    -4.33841   -4.33841
-5.03715   -4.72138   -4.72138
-4.38481   -3.89053   -3.89053

Table 19: Occasionally Binding Constraints Error Approximations, d=(1,1,1)
k_i        θ_i        ε_i        (rows i = 1, ..., 30)

3.11653   0.930006   -0.0265567
4.04206   1.06199    0.0292736
3.58012   0.922498   -0.00794662
3.83937   0.975085   0.015316
3.58288   0.98926    -0.0219042
3.77881   1.05289    0.0339261
3.31687   0.9134     -0.0032941
4.20017   1.05275    0.0246211
3.68084   1.02108    -0.0125991
3.48492   0.957443   0.00601095
4.14443   1.01357    -0.0312093
3.29279   0.868696   0.00368469
3.87738   1.02958    -0.0335355
3.51259   0.995409   0.0222948
4.17209   1.05153    -0.0149254
3.28879   0.912185   0.0315998
3.33019   0.978321   -0.00562036
3.69499   1.0125     0.0129897
3.23305   0.873004   -0.0242305
4.09524   1.01604    0.0269473
3.27803   0.932401   -0.0102729
4.03468   1.09384    0.00833722
3.57274   0.954351   -0.028883
3.89532   0.995891   0.0176423
3.93671   1.06203    -0.0195779
3.18007   0.900584   0.0362524
3.83958   0.95671    -0.000967836
3.55394   0.908726   0.0188054
3.29307   0.922137   -0.0184148
4.04972   1.08358    0.0374155

Table 20: Occasionally Binding Constraints Model Evaluation Points, d=(2,2,2)
k_i        θ_i        ε_i        (rows i = 1, ..., 30)

0.996234   0.372233   3.17711
1.35273    0.449889   4.08774
1.08239    0.37197    3.59408
1.22794    0.381138   3.83657
1.15723    0.375819   3.60041
1.29529    0.457947   3.85887
1.03658    0.371604   3.35678
1.37221    0.431977   4.21213
1.21472    0.395466   3.70822
1.14148    0.37158    3.50801
1.24362    0.394261   4.12425
0.972304   0.376186   3.33969
1.22692    0.392432   3.88208
1.19298    0.407354   3.56868
1.31345    0.414672   4.16956
1.06882    0.383074   3.34298
1.1142     0.387566   3.38473
1.23929    0.401702   3.72719
0.926116   0.382809   3.29255
1.32171    0.410856   4.09657
1.04696    0.373008   3.32323
1.35156    0.46278    4.09399
1.09896    0.370843   3.58631
1.26258    0.39151    3.8973
1.28036    0.420156   3.9632
1.04154    0.382172   3.24423
1.18102    0.373737   3.82935
1.09669    0.372006   3.57055
1.02297    0.373053   3.33681
1.38105    0.472609   4.11735

Table 21: Values at Evaluation Points for Occasionally Binding Constraints, d=(2,2,2)
ln(|ê_c,i|)   ln(|ê_k,i|)   ln(|ê_θ,i|)   (rows i = 1, ..., 30)

-4.3713    -4.37518   -4.37518
-4.80064   -4.79951   -4.79951
-4.51635   -4.51745   -4.51745
-4.97684   -4.97675   -4.97675
-4.76238   -4.76248   -4.76248
-5.20213   -5.19772   -5.19772
-4.42504   -4.42526   -4.42526
-4.74432   -4.74585   -4.74585
-5.81449   -5.81175   -5.81175
-5.44103   -5.4409    -5.4409
-3.41118   -3.41245   -3.41245
-2.35772   -2.35772   -2.35772
-4.10566   -4.10512   -4.10512
-7.55599   -7.55774   -7.55774
-4.76462   -4.76603   -4.76603
-3.88786   -3.88797   -3.88797
-4.30233   -4.30326   -4.30326
-5.00949   -5.00948   -5.00948
-2.04364   -2.04336   -2.04336
-5.59945   -5.60358   -5.60358
-5.12234   -5.12392   -5.12392
-5.73503   -5.7328    -5.7328
-4.80694   -4.80739   -4.80739
-7.06764   -7.06949   -7.06949
-5.20027   -5.196     -5.196
-3.4838    -3.48367   -3.48367
-3.61222   -3.61225   -3.61225
-4.37205   -4.3713    -4.3713
-4.80664   -4.80713   -4.80713
-4.63986   -4.63587   -4.63587

Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7  Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000. URL http://www.sciencedirect.com/science/article/B6V85-408BVCF-2/1/217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections: Parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-6/2/f8b1713c81765453e1a5e9a52593b52e.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-3/1/3f27e3e4d4b890df59d4f83850c1c3d1.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A  Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x_t = g(x_{t-1}, ε_t), define the conditional expectations function

\[ G(x) \equiv E\left[g(x, \epsilon)\right] \]

then with

\[ E_t x_{t+1} = G(g(x_{t-1}, \epsilon_t)) \]

we have

\[ M(x_{t-1}, x_t, E_t x_{t+1}, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t). \]

In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and corresponding z_t^*(x_{t-1}, ε):

\[ x_t^*(x_{t-1}, \epsilon_t) \in \mathbb{R}^L, \quad \|x_t^*(x_{t-1}, \epsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \]

\[ z_t^*(x_{t-1}, \epsilon_t) \equiv H_{-1} x_{t-1}^*(x_{t-1}, \epsilon_t) + H_0 x_t^*(x_{t-1}, \epsilon_t) + H_1 x_{t+1}^*(x_{t-1}, \epsilon_t) \]

Consequently, the exact solution has a representation given by

\[ x_t^*(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_\epsilon \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+\nu}^*(x_{t-1}, \epsilon) \]

and

\[ E\left[x_{t+1}^*(x_{t-1}, \epsilon)\right] = B x_t^* + \sum_{\nu=0}^{\infty} F^{\nu} \phi E\left[z_{t+1+\nu}^*(x_{t-1}, \epsilon)\right] + (I - F)^{-1} \phi \psi_c \]

with

\[ M(x_{t-1}, x_t^*, E_t x_{t+1}^*, \epsilon_t) = 0 \quad \forall (x_{t-1}, \epsilon_t). \]

Now consider a proposed solution for the model, x_t^p = g^p(x_{t-1}, ε_t); define G^p(x) ≡ E[g^p(x, ε)], so that

\[ E_t x_{t+1} = G^p(g^p(x_{t-1}, \epsilon_t)), \qquad e_t^p(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \epsilon_t). \]

By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and corresponding z_t^p(x_{t-1}, ε):^12

\[ x_t^p(x_{t-1}, \epsilon_t) \in \mathbb{R}^L, \quad \|x_t^p(x_{t-1}, \epsilon_t)\|_2 \le \bar{X} \;\; \forall t > 0 \]

\[ z_t^p(x_{t-1}, \epsilon_t) \equiv H_{-1} x_{t-1}^p(x_{t-1}, \epsilon_t) + H_0 x_t^p(x_{t-1}, \epsilon_t) + H_1 x_{t+1}^p(x_{t-1}, \epsilon_t) \]

The proposed solution has a representation given by

\[ x_t^p(x_{t-1}, \epsilon) = B x_{t-1} + \phi \psi_\epsilon \epsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+\nu}^p(x_{t-1}, \epsilon) \]

and

\[ E\left[x_{t+1}^p(x_{t-1}, \epsilon)\right] = B x_t^p + \sum_{\nu=0}^{\infty} F^{\nu} \phi z_{t+1+\nu}^p(x_{t-1}, \epsilon) + (I - F)^{-1} \phi \psi_c \]

with

\[ e_t^p(x_{t-1}, \epsilon) \equiv M(x_{t-1}, x_t^p, E_t x_{t+1}^p, \epsilon_t). \]

Subtracting the proposed-solution representation from the exact-solution representation gives

\[ x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z_{t+\nu}^*(x_{t-1}, \epsilon) - z_{t+\nu}^p(x_{t-1}, \epsilon)\right). \]

Defining

\[ \Delta z_{t+\nu}^p(x_{t-1}, \epsilon_t) \equiv z_{t+\nu}^*(x_{t-1}, \epsilon) - z_{t+\nu}^p(x_{t-1}, \epsilon), \]

we have

\[ x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z_{t+\nu}^p(x_{t-1}, \epsilon_t) \]

\[ \|x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon)\|_\infty \le \sum_{\nu=0}^{\infty} \|F^{\nu} \phi\| \, \|\Delta z_{t+\nu}^p(x_{t-1}, \epsilon_t)\|_\infty. \]

By bounding the largest deviation in the paths for the Δz_t^p, we can bound the largest difference in x_t.^13 The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution:

\[ \|\Delta z_t\| \le \max_{x_-, \epsilon} \left\| \phi M(x_-, g^p(x_-, \epsilon), G^p(g^p(x_-, \epsilon)), \epsilon) \right\|_2 \]

\[ \|x_t^*(x_{t-1}, \epsilon) - x_t^p(x_{t-1}, \epsilon)\|_\infty \le \max_{x_-, \epsilon} \left\| (I - F)^{-1} \phi M(x_-, g^p(x_-, \epsilon), G^p(g^p(x_-, \epsilon)), \epsilon) \right\|_2 \]

Since

\[ z_t^* = H_- x_{t-1}^* + H_0 x_t^* + H_+ x_{t+1}^*, \qquad z_t^p = H_- x_{t-1}^p + H_0 x_t^p + H_+ x_{t+1}^p \]

and

\[ 0 = M(x_{t-1}, x_t^*, x_{t+1}^*, \epsilon_t), \qquad e_t^p(x_{t-1}, \epsilon) = M(x_{t-1}^p, x_t^p, x_{t+1}^p, \epsilon_t), \]

we have

\[ \max_{x_{t-1}, \epsilon_t} \left(\|\phi \Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \epsilon_t} \left(\left\|\phi\left(H_0 (x_t^p - x_t^*) + H_+ (x_{t+1}^p - x_{t+1}^*)\right)\right\|_2\right)^2. \]

With M(x_{t-1}, x_t^*, x_{t+1}^*, ε_t) = 0, a first order expansion gives

\[ \Delta e_t^p(x_{t-1}, \epsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_t} (x_t^* - x_t^p) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \epsilon_t)}{\partial x_{t+1}} (x_{t+1}^* - x_{t+1}^p). \]

When ∂M/∂x_t is non-singular, we can write

\[ (x_t^* - x_t^p) \approx \left(\frac{\partial M}{\partial x_t}\right)^{-1} \left(\Delta e_t^p(x_{t-1}, \epsilon) - \frac{\partial M}{\partial x_{t+1}} (x_{t+1}^* - x_{t+1}^p)\right). \]

Collecting results,

\[ \left(\|\phi \Delta z_t\|_2\right)^2 = \left(\left\|\phi\left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \left(\Delta e_t^p(x_{t-1}, \epsilon) - \frac{\partial M}{\partial x_{t+1}} (x_{t+1}^* - x_{t+1}^p)\right) + H_+ (x_{t+1}^p - x_{t+1}^*)\right)\right\|_2\right)^2. \]

Approximating the derivatives using the H matrices,

\[ \frac{\partial M}{\partial x_t} \approx H_0, \qquad \frac{\partial M}{\partial x_{t+1}} \approx H_+, \]

the terms in (x_{t+1}^* - x_{t+1}^p) cancel, producing

\[ \max_{x_{t-1}, \epsilon_t} \left(\|\phi \, \Delta e_t^p(x_{t-1}, \epsilon)\|_2\right)^2. \]

^12 The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
^13 Since the future values are probability weighted averages of the Δz_t^p values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for Δz_{t+k}^p are pessimistic bounds for the errors from the conditional expectations path.
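The final bound can be sanity-checked numerically on a scalar linear model, where B, φ, and F have closed forms. The parameter values and the perturbed decision rule below are illustrative choices, not taken from the paper:

```python
import numpy as np

# Scalar linear model h_m x_{t-1} + h_0 x_t + h_p x_{t+1} = 0 (illustrative values)
h0, hp = 1.0, -0.25
B = 0.8                       # exact decision rule x_t = B x_{t-1} (stable root)
hm = -(h0 * B + hp * B**2)    # chosen so that h_m + h_0 B + h_p B^2 = 0
phi = 1.0 / (h0 + hp * B)     # series weight from the linear reference model
F = -phi * hp                 # here |F| < 1

Bp = B + 0.01                 # hypothetical proposed rule x_t^p = Bp x_{t-1}
xs = np.linspace(-1.0, 1.0, 201)                 # bounded region A
resid = (hm + h0 * Bp + hp * Bp**2) * xs         # e_t^p with E_t x_{t+1}^p = Bp^2 x
true_gap = np.max(np.abs((B - Bp) * xs))         # actual ||x* - x^p||_inf
bound = np.max(np.abs(phi * resid)) / (1.0 - F)  # (I-F)^{-1} phi residual bound
assert true_gap <= bound      # the bound is conservative, as the proof claims
```

Here the true decision-rule gap is 0.01 while the residual-based bound is roughly 0.011, so the inequality holds and is not far from tight for this small perturbation.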
B  A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial x(x_{t-1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t-1}, ε_t, x_t):

\[ E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) \mid s_{t+1} = j) \]

\[ E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j) \]

If we know the current state, we can use the known conditional expectation function for that state:

\[ E_t(x_{t+1}(x_t) \mid s_t = 1), \;\ldots,\; E_t(x_{t+1}(x_t) \mid s_t = N) \]

For the next state, we can compute expectations if the errors are independent:

\[ \begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix} \]

Consider two states s_t ∈ {0, 1} where the depreciation rates are different, d_0 > d_1, with

\[ \mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}. \]
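With constant transition probabilities and independent errors, the stacked expectation identity above is just multiplication by P ⊗ I. A minimal numerical sketch (the regime count, state dimension, and expectation values are illustrative, not from the paper's model):

```python
import numpy as np

# Two regimes with constant transition probabilities p_ij
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
nx = 3  # number of state variables (illustrative)

# Hypothetical E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j), stacked by regime
E_next = np.concatenate([np.full(nx, 1.0),   # conditional on regime 1
                         np.full(nx, 2.0)])  # conditional on regime 2

# Stacked E_t(x_{t+2}(x_t) | s_t = i) via the Kronecker product (P x I)
E_now = np.kron(P, np.eye(nx)) @ E_next
# regime-1 rows: 0.9*1.0 + 0.1*2.0 = 1.1; regime-2 rows: 0.2*1.0 + 0.8*2.0 = 1.8
```

Each block row of P ⊗ I mixes the regime-conditional expectations with the current regime's transition weights, which is exactly the summation formula written componentwise.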
If (s_t = 0, μ_t > 0, and k_t - (1 - d_0)k_{t-1} - υI_{ss} = 0), the system residuals are

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \mu_{t+1}\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]

If (s_t = 0, μ_t = 0, and k_t - (1 - d_0)k_{t-1} - υI_{ss} ≥ 0):

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_0) + \mu_{t+1}\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]

If (s_t = 1, μ_t > 0, and k_t - (1 - d_1)k_{t-1} - υI_{ss} = 0):

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \mu_{t+1}\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]

If (s_t = 1, μ_t = 0, and k_t - (1 - d_1)k_{t-1} - υI_{ss} ≥ 0):

\[ \begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \epsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + \lambda_{t+1}\delta(1-d_1) + \mu_{t+1}\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned} \]
C  Algorithm Pseudo-code

Notation:

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid, polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects the decision rule information
{C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system R^{3N_x+N_ε} → R^{N_x}

input: (L, H_0, κ, {C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10^{-10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, ..., H_c}
Algorithm 1: nestInterp
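Algorithm 1's outer loop is an ordinary fixed-point iteration on the decision rule, stopping when values at the test points T stop moving. A schematic Python sketch of that control flow (the prototype uses Mathematica's NestWhile; the toy doInterp below is illustrative):

```python
import numpy as np

def nest_interp(do_interp, h0, test_points, norm_conv_tol=1e-10, max_iter=200):
    """Iterate h -> do_interp(h) until values at test_points stop changing."""
    h = h0
    vals = np.array([h(p) for p in test_points])
    for _ in range(max_iter):
        h = do_interp(h)
        new_vals = np.array([h(p) for p in test_points])
        if np.max(np.abs(new_vals - vals)) < norm_conv_tol:
            return h
        vals = new_vals
    raise RuntimeError("no convergence at the test points")

# Toy 'doInterp': a contraction whose fixed point is the rule x -> 0.5*x
toy = lambda h: (lambda x: 0.5 * (h(x) + 0.5 * x))
rule = nest_interp(toy, lambda x: x, test_points=np.linspace(-1.0, 1.0, 5))
```

Because successive iterates contract at rate 0.5 here, the returned rule agrees with the fixed point x → 0.5x to roughly the convergence tolerance.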
input: (L, H_i, κ, {C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t), for one of the model equations. Subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp

input: (L, H, κ, {C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t for any given value of (x_{t-1}, ε_t), for one from the set of model equations. Subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2: Z = {Z_{t+1}, ..., Z_{t+k-1}} = genZsForFindRoot(L, x_t^p, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot({x_t^p, z_t}, {M(X), x_t - x_t^p})
6: funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
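At each point, genFindRootWorker solves jointly for x_t and the z_t that reconciles the model equations with the series representation. A scalar sketch of that step (the model residual, B, and φ are hypothetical, κ = 1 truncation, and SciPy's fsolve stands in for Mathematica's FindRoot):

```python
import numpy as np
from scipy.optimize import fsolve

B, phi = 0.8, 1.25  # linear reference model quantities (illustrative)

def model_eq(x_prev, x_t, Ex_next, eps):
    # hypothetical nonlinear model residual M(x_{t-1}, x_t, E_t x_{t+1}, eps)
    return x_t - 0.8 * x_prev - 0.1 * Ex_next**2 - eps

def solve_point(x_prev, eps, G=lambda x: B * x):   # G: expectation rule
    def system(v):
        x_t, z_t = v
        x_series = B * x_prev + phi * z_t          # genXtOfXtm1 analogue, fCon = 0
        return [model_eq(x_prev, x_t, G(x_t), eps),  # model equation holds
                x_t - x_series]                      # series consistency holds
    return fsolve(system, [x_prev, 0.0])

x_t, z_t = solve_point(0.5, 0.1)
```

The returned pair satisfies both the model residual and the series-representation constraint, which is exactly the extra consistency condition the paper adds to the usual projection step.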
input: (x_t^p, L, G, κ, M)
1: Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x_t^p
3: for i = 1, ..., κ - 1 do
4:   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5: end
output: {Z_{t+1}, ..., Z_{t+k-1}}
Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I - F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x, ε) = [ Bx + (I - F)^{-1} φ ψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
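Algorithm 6's initial guess is just the linear reference model's rule with the z components zeroed out. A small sketch, with names and shapes assumed for illustration:

```python
import numpy as np

def gen_x0_z0(B, F, phi, psi_c, psi_eps):
    """Initial rule from the linear reference model (genBothX0Z0Funcs analogue):
    x_t = B x_{t-1} + (I - F)^{-1} phi psi_c + psi_eps eps_t, z components zero.
    Shapes are assumed conformable; all names here are illustrative."""
    nx = B.shape[0]
    const = np.linalg.solve(np.eye(F.shape[0]) - F, phi @ psi_c)
    def gamma(x_prev, eps):            # initial decision rule guess
        return np.concatenate([B @ x_prev + const + psi_eps @ eps, np.zeros(nx)])
    def G(x_prev):                     # its conditional expectation (E[eps] = 0)
        return np.concatenate([B @ x_prev + const, np.zeros(nx)])
    return gamma, G

# Scalar usage: B = 0.9, F = 0.5, phi = 1, psi_c = 0.2, psi_eps = 1
gamma0, G0 = gen_x0_z0(np.array([[0.9]]), np.array([[0.5]]), np.array([[1.0]]),
                       np.array([0.2]), np.array([[1.0]]))
# gamma0([1.0], [0.0]) -> [0.9 + 0.2/0.5, 0.0] = [1.3, 0.0]
```

Taking expectations drops the ψ_ε ε_t term, which is why G differs from γ only by that shock loading.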
input: (L, {Z_{t+1}, ..., Z_{t+k-1}})
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+k-1}})
3: xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, x_{t-1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1

input: (L, x_{t-1}, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = B x_{t-1} + (I - F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t
Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, ..., Z_{t+k-1}})
1: Numerically sums the F contribution for the series.
2: S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
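Algorithm 10's sum is a direct accumulation of powers of F applied to the anticipated Z path. A sketch with illustrative matrices:

```python
import numpy as np

def f_sum_c(F, phi, Z_future):
    """fSumC analogue: S = sum_{nu>=1} F^(nu-1) phi Z_{t+nu} over a finite path."""
    S = np.zeros(phi.shape[0])
    F_pow = np.eye(F.shape[0])      # F^0
    for Z in Z_future:              # Z_{t+1}, ..., Z_{t+k-1}
        S += F_pow @ (phi @ Z)
        F_pow = F_pow @ F
    return S

S = f_sum_c(np.array([[0.5]]), np.array([[2.0]]),
            [np.array([1.0]), np.array([1.0])])
# S = phi Z_{t+1} + F phi Z_{t+2} = [2.0 + 1.0] = [3.0]
```

Accumulating F_pow incrementally avoids recomputing matrix powers, which matters when the anticipated path is long.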
input: ({C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs

input: ({C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves model equations at collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
output: {γ, G}
Algorithm 12: genInterpData

input: (C, (x_{t-1}, ε_t))
1: Evaluate the model equations obeying pre and post conditions.
output: x_t or failed
Algorithm 13: evaluateTriple

input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Psi matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
k1 θ1 ε1
k30 θ30 ε30
311653 0930006 minus00265567404206 106199 00292736358012 0922498 minus000794662383937 0975085 0015316358288 098926 minus00219042377881 105289 00339261331687 09134 minus00032941420017 105275 00246211368084 102108 minus00125991348492 0957443 000601095414443 101357 minus00312093329279 0868696 000368469387738 102958 minus00335355351259 0995409 00222948417209 105153 minus00149254328879 0912185 00315998333019 0978321 minus000562036369499 10125 00129897323305 0873004 minus00242305409524 101604 00269473327803 0932401 minus00102729403468 109384 000833722357274 0954351 minus0028883389532 0995891 00176423393671 106203 minus00195779318007 0900584 00362524383958 095671 minus0000967836355394 0908726 00188054329307 0922137 minus00184148404972 108358 00374155
Table 20 Occasionally Binding Constraints Model Evaluation Points d=(222)
49
k_i        θ_i        ε_i         (i = 1, …, 30)
0.996234   0.372233   3.17711
1.35273    0.449889   4.08774
1.08239    0.37197    3.59408
1.22794    0.381138   3.83657
1.15723    0.375819   3.60041
1.29529    0.457947   3.85887
1.03658    0.371604   3.35678
1.37221    0.431977   4.21213
1.21472    0.395466   3.70822
1.14148    0.37158    3.50801
1.24362    0.394261   4.12425
0.972304   0.376186   3.33969
1.22692    0.392432   3.88208
1.19298    0.407354   3.56868
1.31345    0.414672   4.16956
1.06882    0.383074   3.34298
1.1142     0.387566   3.38473
1.23929    0.401702   3.72719
0.926116   0.382809   3.29255
1.32171    0.410856   4.09657
1.04696    0.373008   3.32323
1.35156    0.46278    4.09399
1.09896    0.370843   3.58631
1.26258    0.39151    3.8973
1.28036    0.420156   3.9632
1.04154    0.382172   3.24423
1.18102    0.373737   3.82935
1.09669    0.372006   3.57055
1.02297    0.373053   3.33681
1.38105    0.472609   4.11735

Table 21: Occasionally Binding Constraints Model: Values at the Evaluation Points, d = (2, 2, 2)
ln(|ê_{c,i}|)   ln(|ê_{k,i}|)   ln(|ê_{θ,i}|)   (i = 1, …, 30)
−4.3713    −4.37518   −4.37518
−4.80064   −4.79951   −4.79951
−4.51635   −4.51745   −4.51745
−4.97684   −4.97675   −4.97675
−4.76238   −4.76248   −4.76248
−5.20213   −5.19772   −5.19772
−4.42504   −4.42526   −4.42526
−4.74432   −4.74585   −4.74585
−5.81449   −5.81175   −5.81175
−5.44103   −5.4409    −5.4409
−3.41118   −3.41245   −3.41245
−2.35772   −2.35772   −2.35772
−4.10566   −4.10512   −4.10512
−7.55599   −7.55774   −7.55774
−4.76462   −4.76603   −4.76603
−3.88786   −3.88797   −3.88797
−4.30233   −4.30326   −4.30326
−5.00949   −5.00948   −5.00948
−2.04364   −2.04336   −2.04336
−5.59945   −5.60358   −5.60358
−5.12234   −5.12392   −5.12392
−5.73503   −5.7328    −5.7328
−4.80694   −4.80739   −4.80739
−7.06764   −7.06949   −7.06949
−5.20027   −5.196     −5.196
−3.4838    −3.48367   −3.48367
−3.61222   −3.61225   −3.61225
−4.37205   −4.3713    −4.3713
−4.80664   −4.80713   −4.80713
−4.63986   −4.63587   −4.63587

Table 22: Occasionally Binding Constraints Model: Error Approximations, d = (2, 2, 2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Paper No. 9/2017, Centre for Applied Macro- and Petroleum Economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–1311, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution, and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections and parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region A where all iterated conditional expectations paths remain bounded. Given an exact solution x*_t = g(x_{t−1}, ε_t), define the conditional expectations function

G(x) ≡ E[g(x, ε)];

then with

E_t x_{t+1} = G(g(x_{t−1}, ε_t))

we have

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t).
In words, the exact solution exactly satisfies the model equations. Using G and L, construct the family of trajectories and the corresponding z*_t(x_{t−1}, ε):

x*_t(x_{t−1}, ε_t) ∈ R^L,   ‖x*_t(x_{t−1}, ε_t)‖_2 ≤ X̄   ∀t > 0

z*_t(x_{t−1}, ε_t) ≡ H_{−1} x*_{t−1}(x_{t−1}, ε_t) + H_0 x*_t(x_{t−1}, ε_t) + H_1 x*_{t+1}(x_{t−1}, ε_t)

Consequently, the exact solution has a representation given by

x*_t(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^{∞} F^ν φ z*_{t+ν}(x_{t−1}, ε)

and

E[x*_{t+1}(x_{t−1}, ε)] = B x*_t + Σ_{ν=0}^{∞} F^ν φ E[z*_{t+1+ν}(x_{t−1}, ε)] + (I − F)^{−1} φ ψ_c

with

M(x_{t−1}, x*_t, E_t x*_{t+1}, ε_t) = 0   ∀(x_{t−1}, ε_t).
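For intuition, a finite truncation of this representation is easy to evaluate numerically. The sketch below is a scalar Python illustration (not the paper's Mathematica prototype); the values for B, φ, F, ψ_ε, ψ_c and the z path are illustrative stand-ins for the linear-reference-model objects:

```python
# Scalar sketch of the series representation
#   x_t = B x_{t-1} + phi*psi_eps*eps + (I-F)^{-1} phi*psi_c
#         + sum_{nu>=0} F^nu phi z_{t+nu}
# truncated after len(z_path) terms.  All parameter values are illustrative.

def series_x(B, phi, F, psi_eps, psi_c, x_prev, eps, z_path):
    """Truncated series for x_t given a path of z deviations (scalar case)."""
    assert abs(F) < 1.0, "F must be a contraction for the series to converge"
    x = B * x_prev + phi * psi_eps * eps + phi * psi_c / (1.0 - F)
    for nu, z in enumerate(z_path):
        x += (F ** nu) * phi * z
    return x

# With z identically zero the map reduces to the linear reference model.
x_lin = series_x(B=0.9, phi=0.5, F=0.4, psi_eps=1.0, psi_c=0.0,
                 x_prev=1.0, eps=0.1, z_path=[0.0] * 10)
```

Because F is a contraction, the contribution of far-future z terms dies off geometrically, so lengthening the truncation changes the value only negligibly.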
Now consider a proposed solution for the model, x^p_t = g^p(x_{t−1}, ε_t), and define G^p(x) ≡ E[g^p(x, ε)], so that

E_t x_{t+1} = G^p(g^p(x_{t−1}, ε_t)),   e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t).
By construction, the conditional expectations paths will all be bounded in the region A^p. Using G^p and L, construct the family of trajectories and the corresponding z^p_t(x_{t−1}, ε):¹²

x^p_t(x_{t−1}, ε_t) ∈ R^L,   ‖x^p_t(x_{t−1}, ε_t)‖_2 ≤ X̄   ∀t > 0

z^p_t(x_{t−1}, ε_t) ≡ H_{−1} x^p_{t−1}(x_{t−1}, ε_t) + H_0 x^p_t(x_{t−1}, ε_t) + H_1 x^p_{t+1}(x_{t−1}, ε_t)

The proposed solution has a representation given by

x^p_t(x_{t−1}, ε) = B x_{t−1} + φ ψ_ε ε + (I − F)^{−1} φ ψ_c + Σ_{ν=0}^{∞} F^ν φ z^p_{t+ν}(x_{t−1}, ε)

and

E[x^p_{t+1}(x_{t−1}, ε)] = B x^p_t + Σ_{ν=0}^{∞} F^ν φ z^p_{t+1+ν}(x_{t−1}, ε) + (I − F)^{−1} φ ψ_c

with

e^p_t(x_{t−1}, ε) ≡ M(x_{t−1}, x^p_t, E_t x^p_{t+1}, ε_t).

Subtracting the two representations gives

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^{∞} F^ν φ (z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε)).

Defining

Δz^p_{t+ν}(x_{t−1}, ε_t) ≡ z*_{t+ν}(x_{t−1}, ε) − z^p_{t+ν}(x_{t−1}, ε),

we have

x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε) = Σ_{ν=0}^{∞} F^ν φ Δz^p_{t+ν}(x_{t−1}, ε_t)

so that

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ Σ_{ν=0}^{∞} ‖F^ν φ‖ ‖Δz^p_{t+ν}(x_{t−1}, ε_t)‖_∞.
By bounding the largest deviation in the paths for the Δz^p_t, we can bound the largest difference in the x_t.¹³ The exact solution satisfies the model equations exactly; the error associated with the proposed solution leads to a conservative bound on the largest change in z needed to match the exact solution.

¹² The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³ Since the future values are probability weighted averages of the Δz^p_t values for the given initial conditions, and the conditional expectations paths remain in the region A^p, the largest errors for the Δz^p_{t+k} are pessimistic bounds for the errors from the conditional expectations path.
‖Δz_t‖ ≤ max_{x_−, ε} ‖φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2

‖x*_t(x_{t−1}, ε) − x^p_t(x_{t−1}, ε)‖_∞ ≤ max_{x_−, ε} ‖(I − F)^{−1} φ M(x_−, g^p(x_−, ε), G^p(g^p(x_−, ε)), ε)‖_2
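One direct way to operationalize this bound is to evaluate the model-equation residual e^p at many (x_{t−1}, ε) points and take the maximum of ‖(I − F)^{−1} φ e^p‖. The scalar Python sketch below uses a hypothetical residual function and illustrative values for F and φ; it is a sketch of the idea, not the paper's implementation:

```python
def global_error_bound(residual, grid, F, phi):
    """Max over a (x_prev, eps) grid of |(1-F)^{-1} phi * e^p(x_prev, eps)|:
    a conservative bound on the sup-norm decision-rule error (scalar case)."""
    scale = abs(phi / (1.0 - F))
    return max(scale * abs(residual(x, e)) for x, e in grid)

# Hypothetical residual of a proposed rule: small and state-dependent.
def residual(x_prev, eps):
    return 1e-4 * (1.0 + x_prev ** 2) + 1e-5 * eps

# Evaluation grid over the bounded region of interest (illustrative ranges).
grid = [(x / 10.0, e / 10.0) for x in range(-10, 11) for e in range(-3, 4)]
bound = global_error_bound(residual, grid, F=0.4, phi=0.5)
```

A finer grid (or a low-discrepancy sequence, as the paper uses for its test points) tightens the estimate of the maximum.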
z*_t = H_− x_{t−1} + H_0 x*_t + H_+ x*_{t+1}

z^p_t = H_− x^p_{t−1} + H_0 x^p_t + H_+ x^p_{t+1}

0 = M(x_{t−1}, x*_t, x*_{t+1}, ε_t)

e^p_t(x_{t−1}, ε) = M(x^p_{t−1}, x^p_t, x^p_{t+1}, ε_t)

max_{x_{t−1}, ε_t} (‖φ Δz_t‖_2)² = max_{x_{t−1}, ε_t} (‖φ (H_0 (x^p_t − x*_t) + H_+ (x^p_{t+1} − x*_{t+1}))‖_2)²

with

M(x_{t−1}, x*_t, x*_{t+1}, ε_t) = 0.
Δe^p_t(x_{t−1}, ε) ≈ (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t) (x*_t − x^p_t) + (∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1}) (x*_{t+1} − x^p_{t+1})

When ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t is non-singular, we can write

(x*_t − x^p_t) ≈ (∂M/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1}) (x*_{t+1} − x^p_{t+1})).
Collecting results,

(‖φ Δz_t‖_2)² = (‖φ (H_0 (∂M/∂x_t)^{−1} (Δe^p_t(x_{t−1}, ε) − (∂M/∂x_{t+1}) (x*_{t+1} − x^p_{t+1})) + H_+ (x^p_{t+1} − x*_{t+1}))‖_2)²

= (‖φ (H_0 (∂M/∂x_t)^{−1} Δe^p_t(x_{t−1}, ε) − (H_0 (∂M/∂x_t)^{−1} (∂M/∂x_{t+1}) − H_+) (x*_{t+1} − x^p_{t+1}))‖_2)².

Approximating the derivatives using the H matrices,

∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_t ≈ H_0,   ∂M(x_{t−1}, x_t, x_{t+1}, ε_t)/∂x_{t+1} ≈ H_+,

produces

max_{x_{t−1}, ε_t} (‖φ Δe^p_t(x_{t−1}, ε)‖_2)².
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial (x_{t−1}, ε_t). Consider the case when the transition probabilities p_{ij} are constant and do not depend on (x_{t−1}, ε_t, x_t):

E_t[x_{t+1}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+1}(x_t) | s_{t+1} = j)

E_t[x_{t+2}] = Σ_{j=1}^{N} p_{ij} E_t(x_{t+2}(x_{t+1}) | s_{t+2} = j)

If we know the current state, we can use the known conditional expectation function for that state:

E_t(x_{t+1}(x_t) | s_t = 1), …, E_t(x_{t+1}(x_t) | s_t = N)

For the next state, we can compute expectations if the errors are independent:

[E_t(x_{t+2}(x_t) | s_t = 1), …, E_t(x_{t+2}(x_t) | s_t = N)]′ = (P ⊗ I) [E_t(x_{t+2}(x_{t+1}) | s_{t+1} = 1), …, E_t(x_{t+2}(x_{t+1}) | s_{t+1} = N)]′
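With constant transition probabilities, the state-conditional expectations stack into a linear map, as in the (P ⊗ I) relation above. The Python sketch below shows the bookkeeping for a hypothetical two-state chain with scalar per-state values (all numbers illustrative):

```python
def mix_expectations(P, cond_vals):
    """Given a transition matrix P (row i = current state) and values
    v_j = E[x | next state = j], return E_i[x] = sum_j p_ij * v_j."""
    return [sum(p_ij * v for p_ij, v in zip(row, cond_vals)) for row in P]

# Two-state example: state 0 = high depreciation, state 1 = low (illustrative).
P = [[0.9, 0.1],
     [0.2, 0.8]]
v_next = [1.0, 2.0]            # hypothetical E[x_{t+1} | s_{t+1} = j]
Ex = mix_expectations(P, v_next)  # E_t[x_{t+1}] from each current state
```

Iterating the same mixing step with the state-conditional expectation functions builds the multi-step conditional expectations path the series formula requires.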
Consider two states s_t ∈ {0, 1} in which the depreciation rates differ, with d_0 > d_1, and

Prob(s_t = j | s_{t−1} = i) = p_{ij}.
If (s_t = 0 and μ_t > 0 and k_t − (1 − d_0) k_{t−1} − υ I_ss = 0), the system is

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0))
I_t − (k_t − (1 − d_0) k_{t−1})
μ_t (k_t − (1 − d_0) k_{t−1} − υ I_ss)

If (s_t = 0 and μ_t = 0 and k_t − (1 − d_0) k_{t−1} − υ I_ss ≥ 0), the system is

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_0) + μ_{t+1} δ (1 − d_0))
I_t − (k_t − (1 − d_0) k_{t−1})
μ_t (k_t − (1 − d_0) k_{t−1} − υ I_ss)

If (s_t = 1 and μ_t > 0 and k_t − (1 − d_1) k_{t−1} − υ I_ss = 0), the system is

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1))
I_t − (k_t − (1 − d_1) k_{t−1})
μ_t (k_t − (1 − d_1) k_{t−1} − υ I_ss)

If (s_t = 1 and μ_t = 0 and k_t − (1 − d_1) k_{t−1} − υ I_ss ≥ 0), the system is

λ_t − 1/c_t
c_t + k_t − θ_t k^α_{t−1}
N_t − λ_t θ_t
θ_t − e^{ρ ln(θ_{t−1}) + ε}
λ_t + μ_t − (α k^{α−1}_t δ N_{t+1} + λ_{t+1} δ (1 − d_1) + μ_{t+1} δ (1 − d_1))
I_t − (k_t − (1 − d_1) k_{t−1})
μ_t (k_t − (1 − d_1) k_{t−1} − υ I_ss)
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}, the decision rule components
G: {X(x_{t−1}), Z(x_{t−1})}, the decision rule conditional expectations components
H: {γ, G}, the collected decision rule information
{C_1, …, C_M}: d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system, R^{3N_x+N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much; the norm of the difference is controlled by the option "normConvTol" (default 10^{−10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1: nestInterp
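The outer loop of Algorithm 1 is a fixed-point iteration on the decision rule, stopping when the rule's values at a set of test points stop changing. A minimal pure-Python analogue of the NestWhile pattern (the toy update map and test points here are stand-ins, not the paper's doInterp):

```python
def nest_while(update, rule, test_points, tol=1e-10, max_iter=500):
    """Iterate rule <- update(rule) until values at test_points converge."""
    vals = [rule(x) for x in test_points]
    for _ in range(max_iter):
        rule = update(rule)
        new_vals = [rule(x) for x in test_points]
        if max(abs(a - b) for a, b in zip(vals, new_vals)) < tol:
            return rule
        vals = new_vals
    raise RuntimeError("no convergence within max_iter iterations")

# Toy contraction standing in for doInterp; its fixed point is f(x) = 0.5*x.
update = lambda f: (lambda x, f=f: 0.5 * f(x) + 0.25 * x)
rule = nest_while(update, lambda x: 0.0, test_points=[0.0, 0.5, 1.0])
```

Because each update is a contraction, convergence to within the tolerance is geometric, mirroring the role of "normConvTol" above.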
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the boolean gate values and a unique value of x_t for any given value of (x_{t−1}, ε_t) and one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp

input: (L, H, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value of x_t for any given value of (x_{t−1}, ε_t) for one of the set of model equations; it subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5 funcOfXtZt = FindRoot({x^p_t, z_t}, {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1; i < κ − 1; i++ do
4   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5 end
6 Z = {(X_{t+1}, Z_{t+1}), …, (X_{t+κ−1}, Z_{t+κ−1})}
output: {Z_{t+1}, …, Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [B x + (I − F)^{−1} φ ψ_c + ψ_ε ε_t ; 0_{N_x}]
3 G(x, ε) = [B x + (I − F)^{−1} φ ψ_c + ψ_ε ; 0_{N_x}]
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3 xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, x_{t−1}, ε_t, z_t, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1

input: (L, x_{t−1}, fCon)
1 Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t−1} + (I − F)^{−1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + fCon
output: x_t
Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1 Numerically sums the F contribution for the series:
2 S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
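Algorithm 10's sum has a direct implementation; in the scalar case it is just a discounted sum over the given future Z values. A Python sketch with illustrative values for F, φ, and the Z path:

```python
def f_sum_c(F, phi, Z_future):
    """S = sum_{nu=1}^{kappa-1} F^(nu-1) * phi * Z_{t+nu}  (scalar sketch).
    Z_future lists Z_{t+1}, ..., Z_{t+kappa-1}; an empty list gives S = 0."""
    return sum((F ** (nu - 1)) * phi * z
               for nu, z in enumerate(Z_future, start=1))

S = f_sum_c(F=0.4, phi=0.5, Z_future=[1.0, 2.0, 3.0])
```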
input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs

input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
output: {γ, G}
Algorithm 12: genInterpData

input: (C(x_{t−1}, ε_t))
1 Evaluate the model equations obeying pre and post conditions.
output: x_t, or "failed"
Algorithm 13: evaluateTriple

input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
55
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $\mathcal{A}$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^\ast_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv \mathbb{E}\left[g(x, \varepsilon)\right]$$

then with

$$\mathbb{E}_t x^\ast_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^\ast_t, \mathbb{E}_t x^\ast_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^\ast_t(x_{t-1}, \varepsilon)$:

$$\left\{ x^\ast_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^{L} : \left\| x^\ast_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$

$$z^\ast_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^\ast_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^\ast_t(x_{t-1}, \varepsilon_t) + H_1 x^\ast_{t+1}(x_{t-1}, \varepsilon_t)$$
Consequently the exact solution has a representation given by

$$x^\ast_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^\ast_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$\mathbb{E}\left[x^\ast_{t+1}(x_{t-1}, \varepsilon)\right] = B x^\ast_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \mathbb{E}\left[z^\ast_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$

with

$$M(x_{t-1}, x^\ast_t, \mathbb{E}_t x^\ast_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$; define $G^p(x) \equiv \mathbb{E}\left[g^p(x, \varepsilon)\right]$, so that

$$\mathbb{E}_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, \mathbb{E}_t x^p_{t+1}, \varepsilon_t)$$
By construction the conditional expectations paths will all be bounded in the region $\mathcal{A}^p$. Using $G^p$ and $\mathcal{L}$, construct the family of trajectories and corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²

$$\left\{ x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^{L} : \left\| x^p_t(x_{t-1}, \varepsilon_t) \right\|_2 \le \bar{X} \;\; \forall t > 0 \right\}$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$$
The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$\mathbb{E}\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \mathbb{E}\left[z^p_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$

with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, \mathbb{E}_t x^p_{t+1}, \varepsilon_t)$$
Subtracting the two representations gives

$$x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left( z^\ast_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon) \right)$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^\ast_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

we have

$$x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\left\| x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \sum_{\nu=0}^{\infty} \left\| F^{\nu} \phi \right\| \left\| \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \right\|_\infty$$
By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly; the error associated with the proposed solution therefore leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
¹²The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.
¹³Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $\mathcal{A}^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\left\| \Delta z_t \right\| \le \max_{x_{-}, \varepsilon} \left\| \phi \, M\!\left(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon\right) \right\|_2$$

$$\left\| x^\ast_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) \right\|_\infty \le \max_{x_{-}, \varepsilon} \left\| (I - F)^{-1} \phi \, M\!\left(x_{-}, g^p(x_{-}, \varepsilon), G^p(g^p(x_{-}, \varepsilon)), \varepsilon\right) \right\|_2$$
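This bound is directly computable once the linear reference model components are in hand: the infinite sum $\sum_\nu F^\nu \phi$ collapses to the resolvent $(I-F)^{-1}\phi$, and the residual term is a maximum over test points. A minimal numerical sketch, in Python rather than the paper's Mathematica prototype; the matrices $F$, $\phi$ and the residual bound below are hypothetical placeholders, not taken from the paper's examples:

```python
import numpy as np

# Placeholder linear reference model pieces; all eigenvalues of F lie
# inside the unit circle, so the series sum_nu F^nu phi converges.
F = np.array([[0.5, 0.1],
              [0.0, 0.4]])
phi = np.array([[1.0, 0.0],
                [0.2, 1.0]])

# The infinite sum sum_nu F^nu phi equals the resolvent (I - F)^{-1} phi.
resolvent = np.linalg.inv(np.eye(2) - F) @ phi
partial = sum(np.linalg.matrix_power(F, nu) @ phi for nu in range(200))
assert np.allclose(partial, resolvent)

# Stand-in for max over test points of ||phi M(x, g^p, G^p(g^p), eps)||_2.
dz_bound = 1e-4

# Conservative bound on the worst-case decision rule error.
x_err_bound = np.linalg.norm(resolvent, 2) * dz_bound
print(x_err_bound)
```

The truncated partial sum is compared against the closed-form resolvent to confirm the geometric series has effectively converged before the bound is formed.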
$$z^\ast_t = H_{-} x^\ast_{t-1} + H_0 x^\ast_t + H_{+} x^\ast_{t+1}$$

$$z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}$$

$$0 = M(x_{t-1}, x^\ast_t, x^\ast_{t+1}, \varepsilon_t)$$

$$e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

$$\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta z_t \right\|_2 \right)^2 = \max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \left( H_0 (x^p_t - x^\ast_t) + H_{+} (x^p_{t+1} - x^\ast_{t+1}) \right) \right\|_2 \right)^2$$
with

$$M(x_{t-1}, x^\ast_t, x^\ast_{t+1}, \varepsilon_t) = 0$$

A first-order expansion of the residual gives

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^\ast_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^\ast_{t+1} - x^p_{t+1})$$

When $\frac{\partial M}{\partial x_t}$ is non-singular we can write

$$(x^\ast_t - x^p_t) \approx \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^\ast_{t+1} - x^p_{t+1}) \right)$$
Collecting results,

$$\left( \left\| \phi \, \Delta z_t \right\|_2 \right)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^\ast_{t+1} - x^p_{t+1}) \right) + H_{+} (x^p_{t+1} - x^\ast_{t+1}) \right) \right\|_2 \right)^2$$

$$= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} (x^\ast_{t+1} - x^p_{t+1}) + H_{+} (x^p_{t+1} - x^\ast_{t+1}) \right) \right\|_2 \right)^2$$

$$= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_{+} \right) (x^\ast_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}$$

produces

$$\max_{x_{t-1}, \varepsilon_t} \left( \left\| \phi \, \Delta e^p_t(x_{t-1}, \varepsilon) \right\|_2 \right)^2$$
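The practical content of this formula is that the decision rule error can be gauged from the model-equation residuals evaluated on a grid of test points: the residual vanishes for the exact rule and grows with the rule's inaccuracy. A toy Python sketch of that idea (not the paper's Mathematica code), using a scalar linear model $h_{-}x_{t-1} + h_0 x_t + h_{+} x_{t+1} = 0$ with illustrative placeholder coefficients:

```python
import numpy as np

# Scalar model h_m x_{t-1} + h_0 x_t + h_p x_{t+1} = 0 with exact rule
# x_t = b x_{t-1}; b solves h_p b^2 + h_0 b + h_m = 0 (roots 0.5 and 2.0).
h_m, h_0, h_p = 1.0, -2.5, 1.0
b_exact = 0.5                 # stable root
b_prop = 0.51                 # slightly wrong proposed rule

def residual(b, x_prev):
    # e^p = M(x_{t-1}, x_t, E_t x_{t+1}) with x_t = b x_{t-1},
    # E_t x_{t+1} = b^2 x_{t-1}
    return h_m * x_prev + h_0 * b * x_prev + h_p * b * b * x_prev

grid = np.linspace(-1.0, 1.0, 201)   # test points in the bounded region
max_resid = np.max(np.abs(residual(b_prop, grid)))
print(max_resid)  # |(0.51-0.5)(0.51-2.0)| ~= 0.0149
```

For the exact rule the maximum residual is numerically zero, while the proposed rule's residual scales with the rule error $|b^p - b|$, which is what makes the residual a usable error proxy.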
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$\mathbb{E}_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} \, \mathbb{E}_t\!\left( x_{t+1}(x_t) \,\middle|\, s_{t+1} = j \right)$$

$$\mathbb{E}_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} \, \mathbb{E}_t\!\left( x_{t+2}(x_{t+1}) \,\middle|\, s_{t+2} = j \right)$$
If we know the current state, we can use the known conditional expectation function for the state:

$$\begin{bmatrix} \mathbb{E}_t(x_{t+1}(x_t) \,|\, s_t = 1) \\ \vdots \\ \mathbb{E}_t(x_{t+1}(x_t) \,|\, s_t = N) \end{bmatrix}$$
For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} \mathbb{E}_t(x_{t+2}(x_t) \,|\, s_t = 1) \\ \vdots \\ \mathbb{E}_t(x_{t+2}(x_t) \,|\, s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} \mathbb{E}_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = 1) \\ \vdots \\ \mathbb{E}_t(x_{t+2}(x_{t+1}) \,|\, s_{t+1} = N) \end{bmatrix}$$
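The $(P \otimes I)$ stacking is a one-line computation in practice. A small Python sketch (the paper's prototype is in Mathematica; the transition matrix and per-regime expectations below are hypothetical numbers chosen only to make the arithmetic visible):

```python
import numpy as np

# Hypothetical 2-state transition matrix; rows sum to one.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
n = 3  # dimension of the model variable vector x

# E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j), stacked by regime j: shape (2*n,).
# Regime 0 expectations are all 1.0, regime 1 expectations are all 2.0.
cond_next = np.concatenate([np.full(n, 1.0), np.full(n, 2.0)])

# E_t(x_{t+2}(x_t) | s_t = i) = sum_j p_ij E_t(x_{t+2}(x_{t+1}) | s_{t+1} = j)
cond_now = np.kron(P, np.eye(n)) @ cond_next
print(cond_now)  # [1.1 1.1 1.1 1.8 1.8 1.8]
```

Row block $i$ of the result mixes the regime-$j$ expectation blocks with weights $p_{ij}$, exactly as in the displayed matrix equation.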
Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$:

$$\operatorname{Prob}(s_t = j \,|\, s_{t-1} = i) = p_{ij}$$
If $(s_t = 0 \;\wedge\; \mu_t > 0 \;\wedge\; (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0) \right) \\
&I_t - (k_t - (1 - d_0) k_{t-1}) \\
&\mu_t (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

If $(s_t = 0 \;\wedge\; \mu_t = 0 \;\wedge\; (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0) \right) \\
&I_t - (k_t - (1 - d_0) k_{t-1}) \\
&\mu_t (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss})
\end{aligned}$$

If $(s_t = 1 \;\wedge\; \mu_t > 0 \;\wedge\; (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) = 0)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1) \right) \\
&I_t - (k_t - (1 - d_1) k_{t-1}) \\
&\mu_t (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss})
\end{aligned}$$
If $(s_t = 1 \;\wedge\; \mu_t = 0 \;\wedge\; (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) \ge 0)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1) \right) \\
&I_t - (k_t - (1 - d_1) k_{t-1}) \\
&\mu_t (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss})
\end{aligned}$$
C Algorithm Pseudo-code

$\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: The linear reference model

$x^p(x_{t-1}, \varepsilon_t)$: The proposed decision rule $x_t$ components

$z^p(x_{t-1}, \varepsilon_t)$: The proposed decision rule $z_t$ components

$X^p(x_{t-1})$: The proposed decision rule conditional expectations $x_t$ components

$Z^p(x_{t-1})$: The proposed decision rule conditional expectations $z_t$ components

$\kappa$: One less than the number of terms in the series representation ($\kappa \ge 0$)

$S$: Smolyak anisotropic grid polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$ and their conditional expectations $(P_1(x), \ldots, P_N(x))$

$T$: Model evaluation points; a Niederreiter sequence in the model variable ergodic region

$\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$ collects the decision rule components

$G$: $\{X(x_{t-1}), Z(x_{t-1})\}$ collects the decision rule conditional expectations components

$H$: $\{\gamma, G\}$ collects the decision rule information

$\{C_1, \ldots, C_M\}$, $d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c)$: The equation system $\mathbb{R}^{3N_x + N_\varepsilon} \to \mathbb{R}^{N_x}$
input: $(\mathcal{L}, H_0, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c), S, T)$
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option ("normConvTol" -> $10^{-10}$ by default).
2 NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M\}$, $d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c)$, S)), $H_0$, notConvergedAt(v, T))
output: $H_0, H_1, \ldots, H_c$
Algorithm 1: nestInterp
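The outer loop of Algorithm 1 is an ordinary fixed-point iteration: re-fit the rules, compare their values at the test points, stop when the change falls below the tolerance. A minimal Python sketch of that control flow (the prototype uses Mathematica's NestWhile; `do_interp` here is a stand-in contraction, not the paper's doInterp):

```python
import numpy as np

NORM_CONV_TOL = 1e-10  # mirrors the default "normConvTol" -> 10^-10

def do_interp(rule_vals):
    # Placeholder update standing in for one re-fit of the decision
    # rules; a contraction with fixed point 1.0 at every test point.
    return 0.5 * rule_vals + 0.5

def nest_interp(initial_vals, max_iter=200):
    vals = initial_vals
    for _ in range(max_iter):
        new_vals = do_interp(vals)
        # Stop when the rule values at the test points settle down.
        if np.max(np.abs(new_vals - vals)) < NORM_CONV_TOL:
            return new_vals
        vals = new_vals
    raise RuntimeError("no convergence within max_iter iterations")

approx = nest_interp(np.zeros(5))
print(approx)  # ~1.0 at every test point
```

The same skeleton applies whatever the inner re-fitting step is, which is why the paper can swap in richer doInterp variants without touching the outer loop.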
input: $(\mathcal{L}, H_i, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c), S)$
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each triple returns the boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs($\mathcal{L}$, $H$, $\kappa$, $\{C_1, \ldots, C_M\}$, $d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c)$)
3 $H_{i+1}$ = makeInterpFuncs(theFindRootFuncs, S)
output: $H_{i+1}$
Algorithm 2: doInterp
input: $(\mathcal{L}, H, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c), S)$
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, $\{$v[1], genFindRootWorker($\mathcal{L}$, $H$, $\kappa$, v[2]), v[3]$\}$), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: $(\mathcal{L}, G, \kappa, M)$
1 Symbolically constructs functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2 $\mathcal{Z} = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, $G$)
3 modEqnArgsFunc = genLilXkZkFunc($\mathcal{L}$, $\mathcal{Z}$)
4 $\mathcal{X} = \{x_{t-1}, x_t, \mathbb{E}_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}, \varepsilon_t, z_t$)
5 funcOfXtZt = FindRoot($(x^p_t, z_t)$, $\{M(\mathcal{X}), x_t - x^p_t\}$)
6 funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: $(x^p_t, \mathcal{L}, G, \kappa, M)$
1 Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (empty list for $\kappa = 1$)
2 $X_t = x^p_t$
3 for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4 &nbsp;&nbsp; $\{X_{t+i}, Z_{t+i}\} = G(Z_{t+i-1})$
5 end
6 $\{(X_{t+1}, Z_{t+1}), \ldots, (X_{t+\kappa-1}, Z_{t+\kappa-1})\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, $G$)
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$
Algorithm 5: genZsForFindRoot
input: $(\mathcal{L} = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 $\gamma(x, \varepsilon) = \begin{bmatrix} B x + (I - F)^{-1} \phi \psi_c + \psi_\varepsilon \varepsilon_t \\ 0_{N_x} \end{bmatrix}$
3 $G(x, \varepsilon) = \begin{bmatrix} B x + (I - F)^{-1} \phi \psi_c \\ 0_{N_x} \end{bmatrix}$
4 $H = \{\gamma, G\}$
output: $H$
Algorithm 6: genBothX0Z0Funcs
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1 Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolic) inputs $(x_{t-1}, x_t, \mathbb{E}_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 fCon = fSumC($\phi$, $F$, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3 xtVals = genXtOfXtm1($\mathcal{L}$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
4 xtp1Vals = genXtp1OfXt($\mathcal{L}$, xtVals, fCon)
5 fullVec = $\{$xtm1Vars, xtVals, xtp1Vals, epsVars$\}$
output: fullVec
Algorithm 7: genLilXkZkFunc
input: $(\mathcal{L}, x_{t-1}, \varepsilon_t, z_t, fCon)$
1 Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, \mathbb{E}_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 $x_t = B x_{t-1} + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t + \phi \psi_z z_t + F \, fCon$
output: $x_t$
Algorithm 8: genXtOfXtm1
input: $(\mathcal{L}, x_{t-1}, fCon)$
1 Symbolically creates a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, \mathbb{E}_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2 $x_t = B x_{t-1} + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t + \phi \psi_z z_t + fCon$
output: $x_t$
Algorithm 9: genXtp1OfXt
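The core arithmetic in Algorithms 8-9 is a single affine update: the linear reference model step plus the discounted contribution of the anticipated $z$ path. A small Python sketch with hypothetical placeholder matrices (the paper's code is Mathematica, and the values below are illustrative only):

```python
import numpy as np

n = 2
# Placeholder linear reference model matrices.
B = np.array([[0.5, 0.0], [0.1, 0.3]])
F = np.array([[0.4, 0.0], [0.0, 0.2]])
phi = np.eye(n)
psi_c = np.zeros(n)
psi_eps = np.eye(n)
psi_z = np.eye(n)

def x_t_of_xtm1(x_prev, eps, z, f_con):
    # x_t = B x_{t-1} + (I-F)^{-1} phi psi_c + phi psi_eps eps
    #       + phi psi_z z + F fCon
    const = np.linalg.solve(np.eye(n) - F, phi @ psi_c)
    return B @ x_prev + const + phi @ psi_eps @ eps + phi @ psi_z @ z + F @ f_con

x_prev = np.array([1.0, -1.0])
eps = np.array([0.1, 0.0])
# With z = 0 and fCon = 0 this reduces to the pure linear model step.
x_now = x_t_of_xtm1(x_prev, eps, np.zeros(n), np.zeros(n))
print(x_now)  # [0.6 -0.2]
```

With a zero $z$ path the update collapses to the linear reference model's own decision rule, which is what makes that model a natural starting guess for the iteration.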
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1 Numerically sums the $F$ contribution for the series:
2 $S = \sum_{\nu} F^{\nu-1} \phi Z_{t+\nu}$
output: $S$
Algorithm 10: fSumC
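Algorithm 10's sum is a short loop that accumulates powers of $F$ against the anticipated $z$ path. A Python sketch of that accumulation (placeholder scalars, not the paper's Mathematica implementation):

```python
import numpy as np

def f_sum_c(F, phi, z_path):
    # Accumulate sum_{nu=1}^{len(z_path)} F^{nu-1} phi Z_{t+nu},
    # updating the power of F incrementally instead of recomputing it.
    n = F.shape[0]
    total = np.zeros(n)
    F_pow = np.eye(n)            # F^{nu-1}, starting at F^0
    for z in z_path:
        total += F_pow @ (phi @ z)
        F_pow = F_pow @ F
    return total

F = np.array([[0.5]])
phi = np.array([[2.0]])
z_path = [np.array([1.0]), np.array([1.0])]   # Z_{t+1}, Z_{t+2}
print(f_sum_c(F, phi, z_path))  # 2*1 + 0.5*2*1 = [3.]
```

Updating `F_pow` in place keeps the cost linear in the path length rather than paying a fresh matrix power at each term.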
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c), S)$
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user provided pre-computed functions.
2 interpData = genInterpData($\{C_1, \ldots, C_M\}$, $d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c)$, S)
3 $\{\gamma, G\}$ = interpDataToFunc(interpData, S)
output: $\{\gamma, G\}$
Algorithm 11: makeInterpFuncs
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, \mathbb{E}[x_{t+1}])|(b, c), S)$
1 Solves the model equations at the collocation points, producing the interpolation data used to build the decision rule and conditional expectation functions.
output: interpData
Algorithm 12: genInterpData
input: $(C(x_{t-1}, \varepsilon_t))$
1 Evaluate the model equations obeying pre and post conditions.
output: $x_t$ or failed
Algorithm 13: evaluateTriple
input: $(V, S)$
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDevs, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep
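The interpolation-prep step pairs a set of collocation points with a $\Psi$ matrix of basis-function evaluations, so that fitting coefficients reduces to a linear solve. A deliberately simplified Python sketch using a plain 1-D Chebyshev basis rather than the paper's anisotropic Smolyak construction (the function name and grid choice here are illustrative, not the paper's API):

```python
import numpy as np

def chebyshev_prep(order, lo, hi):
    # Chebyshev extrema grid on [-1, 1], mapped to [lo, hi].
    k = np.arange(order + 1)
    nodes = np.cos(np.pi * k / order)
    x_pts = lo + (nodes + 1) * (hi - lo) / 2
    # Psi matrix: basis polynomial T_j evaluated at collocation point i,
    # via T_j(cos t) = cos(j t).
    psi = np.cos(np.outer(np.arccos(nodes), np.arange(order + 1)))
    return x_pts, psi

x_pts, psi = chebyshev_prep(4, 0.0, 2.0)
# Psi is square and well conditioned on this grid, so interpolation
# coefficients come from one linear solve: psi @ coeffs = f(x_pts).
coeffs = np.linalg.solve(psi, x_pts**2)
print(np.round(coeffs, 6))  # ~ [1.5, 2, 0.5, 0, 0]
```

For $f(x) = x^2$ on $[0, 2]$ the recovered coefficients match the exact Chebyshev expansion $1.5\,T_0 + 2\,T_1 + 0.5\,T_2$, a quick sanity check that the $\Psi$ matrix was assembled correctly; the Smolyak version replaces the tensor grid with a sparse anisotropic one but keeps the same points-plus-$\Psi$ structure.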
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
[Table 22 layout lost in extraction: columns $\ln(|\hat{e}_{c,1}|)$, $\ln(|\hat{e}_{k,1}|)$, $\ln(|\hat{e}_{\theta,1}|)$, $\ln(|\hat{e}_{c,30}|)$, $\ln(|\hat{e}_{k,30}|)$, $\ln(|\hat{e}_{\theta,30}|)$; 30 rows of log absolute error approximations, with values ranging from about $-2.04$ to $-7.56$.]

Table 22: Occasionally Binding Constraints Error Approximations, d=(2,2,2)
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual "Euler equation" model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time $t$ solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472–489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft Nov. 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29–52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling Occasionally Binding Constraints Using Regime-Switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305–11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179–1232, July 2000. URL http://www.sciencedirect.com/science/article/B6V85-408BVCF-2/1/217643ed2bbecee4fa5a1ec60ed9407e.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University and Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc., February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. Occbin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22–38, 2015b.

Christian Haefke. Projections, parameterized expectations algorithms (matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998. URL http://ideas.repec.org/c/dge/qmrbcd/69.html.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002. URL http://dx.doi.org/10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074–2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410–452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak Method for Solving Dynamic Economic Models: Lagrange Interpolation, Anisotropic Grid and Adaptive Domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92–123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991–1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851–893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119–147, September 1996. URL http://www.sciencedirect.com/science/article/B6VC0-3XDS2R2-6/2/f8b1713c81765453e1a5e9a52593b52e.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550–575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized Expectations Algorithm And The Moving Bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163–172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143–171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73–127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

A. Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517–556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425–1446, September 2000. URL http://www.sciencedirect.com/science/article/B6V85-40T9HN0-3/1/3f27e3e4d4b890df59d4f83850c1c3d1.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939–1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the euler equation residuals. Econometrica, 68(6):1377–1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula ProofWe consider time invariant maps such as often arise from solving an optimization problemcodified in a systems of equations In what follows we construct an error approximationfor proposed model solutions Consider a bounded region A where all iterated conditionalexpectations paths remain bounded Given an exact solution xt = g(xtminus1 εt) define theconditional expectations function
G(x) equiv E [g(x ε)]
then with
Etxt+1 = G(g(xtminus1 εt))
we have
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
In words the exact solution exactly satisfies the model equations Using G and L constructthe family of trajectories and corresponding zt (xtminus1 ε)
xt (xtminus1 εt) isin RL xt (xtminus1 εt)2 le X forallt gt 0
zt (xtminus1 εt) equivHminus1xtminus1(xtminus1 εt)+
H0xt (xtminus1 εt)+
H1xt+1(xtminus1 εt)
Consequently the exact solution has a representation given by
xt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzt+ν(xtminus1 ε)
and
E[xt+1(xtminus1 ε)
]= Bxt+k +
infinsumν=0
F νφE[zt+1+ν(xtminus1 ε)
]+ (I minus F )minus1φψc
with
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
Now consider a proposed solution for the model xpt = gp(xtminus1 εt) define Gp(x) equivE [gp(x ε)] so that
Etxt+1 = Gp(gp(xtminus1 εt))ept (xtminus1 ε) equivM(xtminus1 x
pt Etx
pt+1 εt)
56
By construction the conditional expectations paths will all be bounded in the region ApUsing Gp and L construct the family of trajectories and corresponding zpt (xtminus1 ε)12
xpt (xtminus1 εt) isin RL xpt (xtminus1 εt)2 le X forallt gt 0
zpt (xtminus1 εt) equivHminus1xptminus1(xtminus1 εt)+
H0xpt (xtminus1 εt)+
H1xpt+1(xtminus1 εt)
The proposed solution has a representation given by
xpt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzpt+ν(xtminus1 ε)
and
E [xpt+1(xtminus1 ε)] = Bxpt+k +infinsumν=0
F νφzpt+1+ν(xtminus1 ε) + (I minus F )minus1φψc
with
ept (xtminus1 ε) equivM(xtminus1 xpt Etx
pt+1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ(zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
∆zpt+ν(xtminus1 εt) equiv (zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ∆zpt+ν(xtminus1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin leinfinsumν=0
F sφ ∆zpt+ν(xtminus1 εt)infin
By bounding the largest deviation in the paths for the ∆zpt we can bound the largestdifference in xt13 The exact solution satisfies the model equations exactly The error asso-ciated with the proposed solution leads to a conservative bound on the largest change in zneeded to match the exact solution
12The algorithm presented below for finding solutions provides a mechanism for generating such proposedsolutions
13Since the future values are probability weighted averages of the ∆zpt values for the given initial conditions
and the condition expectations paths remain in the region Ap the largest error for ∆zpt+k are pessimistic
bounds for the errors from the conditional expectations path
57
∆zt le maxxminusε
φM(xminus gp(xminus ε) Gp(gp(xminus ε)) ε)2
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin le maxxminusε
∥∥∥(I minus F )minus1φM(xminus gp(xminus ε) Gp(gp(xminus ε)) ε)∥∥∥
2
zt = Hminusxtminus1 +H0x
t +H+x
t+1
zpt = Hminusxptminus1 +H0x
pt +H+x
pt+1
0 = M(xtminus1 xt x
t+1 εt)
ept (xtminus1 ε) = M(xptminus1 xpt x
pt+1 εt)
maxxtminus1εt
(φ∆zt2)2 = maxxtminus1εt
(φ(H0(xpt minus xt ) +H+(xpt+1 minus xt+1)))2
with
M(xtminus1 xt x
t+1 εt) = 0
∆ept (xtminus1 ε) asymppartM(xtminus1 xt xt+1 εt)
partxt(xt minus x
pt ) + partM(xtminus1 xt xt+1 εt)
partxt+1(xt+1 minus x
pt+1)
when partM(xtminus1xtxt+1εt)partxt
is non-singular we can write
(xt minus xpt ) asymp
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1 (∆ept (xtminus1 ε)minus
partM(xtminus1 xt xt+1 εt)partxt+1
(xt+1 minus xpt+1)
)
collecting results
(φ∆zt2)2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε)minus
partM(xtminus1 xt xt+1 εt)partxt+1
(xt+1 minus xpt+1)
)+H+(xpt+1 minus xt+1)))2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε))minusH0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1partM(xtminus1 xt xt+1 εt)
partxt+1(xt+1 minus x
pt+1)
+H+(xpt+1 minus xt+1)))2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε))minusH0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1partM(xtminus1 xt xt+1 εt)
partxt+1minusH+
(xt+1 minus xpt+1)))2 =
58
approximating the derivatives using the H matrices
partM(xtminus1 xt xt+1 εt)partxt
asymp H0
partM(xtminus1 xt xt+1 εt)partxt+1
asymp H+
produces
maxxtminus1εt
(φ∆ept (xtminus1 ε))2
B A Regime Switching ExampleTo apply the series formula one must construct conditional expectations paths from eachinitial x(xtminus1 εt) Consider the case when the transition probabilities pij are constant anddo not depend on xtminus1 εt xt
Et[xt+1] =Nsumj=1
pijEt(xt+1(xt)|st+1 = j)
Et[xt+2] =Nsumj=1
pijEt(xt+2(xt+1)|st+2 = j)
If we know the current state we can use the known conditional expectation function forthe state
Et(xt+1(xt)|st = 1)
Et(xt+1(xt)|st = N)
For the next state we can compute expectations if the errors are independent
Et(xt+2(xt)|st = 1)
Et(xt+2(xt)|st = N)
= (P otimes I)
Et(xt+2(xt+1)|st+1 = 1)
Et(xt+2(xt+1)|st+1 = N)
Consider two states st isin 0 1 where the depreciation rates are different d0 gt d1
Prob(st = j|stminus1 = i) = pij
59
If \(s_t = 0\), \(\mu_t > 0\), and \(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} = 0\), the residual system is

\[
\begin{aligned}
&\lambda_t - \frac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left( \alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d_0) + \mu_{t+1} \delta (1-d_0) \right) \\
&I_t - \left( k_t - (1-d_0) k_{t-1} \right) \\
&\mu_t \left( k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \right)
\end{aligned}
\]

If \(s_t = 0\), \(\mu_t = 0\), and \(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \ge 0\), the same equations apply. The two branches for \(s_t = 1\) (either \(\mu_t > 0\) with the constraint binding, or \(\mu_t = 0\) with \(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \ge 0\)) are identical with \(d_1\) in place of \(d_0\).
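The four branches differ only in the depreciation rate and in which side of the complementarity condition is imposed, so one residual function can cover them all. A hedged Python sketch with hypothetical parameter values (the paper's actual calibration is not reproduced here):

```python
import math

alpha, delta, rho, upsilon_Iss = 0.36, 0.95, 0.95, 0.2   # hypothetical values
d = {0: 0.10, 1: 0.025}           # regime-dependent depreciation, d0 > d1

def residuals(s, lam, c, k, th, N, I, mu, k_prev, th_prev, eps,
              lam_next, mu_next, N_next):
    """Residual vector for one regime/constraint branch; zero at a solution."""
    dd = d[s]
    gap = k - (1.0 - dd) * k_prev - upsilon_Iss
    return [
        lam - 1.0 / c,                            # marginal utility of c
        c + k - th * k_prev ** alpha,             # resource constraint
        N - lam * th,                             # definition N_t = lam_t th_t
        th - math.exp(rho * math.log(th_prev) + eps),   # technology process
        lam + mu - (alpha * k ** (alpha - 1.0) * delta * N_next
                    + lam_next * delta * (1.0 - dd)
                    + mu_next * delta * (1.0 - dd)),    # Euler equation
        I - (k - (1.0 - dd) * k_prev),            # investment identity
        gap if mu > 0.0 else mu * gap,            # complementarity condition
    ]
```

The last entry imposes the binding constraint when \(\mu_t > 0\) and the slack condition \(\mu_t \cdot \text{gap} = 0\) otherwise, matching the branch structure above.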
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)} collects the decision rule components
G: {X(x_{t-1}), Z(x_{t-1})} collects the decision rule conditional expectation components
H: {γ, G} collects the decision rule information
{C_1, ..., C_M}: d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), the equation system R^{3N_x + N_ε} → R^{N_x}

input: (L, H_0, κ, {C_1, ..., C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default ``normConvTol'' → 10^{-10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, ..., H_c
Algorithm 1 nestInterp
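Algorithm 1's outer loop is an ordinary fixed-point iteration on the rule H, stopped when the rule's values at the test points T stabilize. A generic Python sketch of that control flow (the contraction used here is a hypothetical stand-in for doInterp):

```python
def nest_while(step, x0, not_converged, max_iter=500):
    """Apply `step` repeatedly until `not_converged(prev, curr)` is False."""
    prev = x0
    for _ in range(max_iter):
        curr = step(prev)
        if not not_converged(prev, curr):
            return curr
        prev = curr
    raise RuntimeError("no convergence within max_iter")

T = [0.0, 0.5, 1.0]                 # evaluation points standing in for T
NORM_CONV_TOL = 1e-10

def not_converged(f_prev, f_curr):
    """Compare successive rules at the test points, as notConvergedAt does."""
    return max(abs(f_prev(t) - f_curr(t)) for t in T) > NORM_CONV_TOL

def update(f):
    """Hypothetical rule update; a contraction with fixed point f(x) = 2."""
    return lambda x: 0.5 * f(x) + 1.0

rule = nest_while(update, lambda x: 0.0, not_converged)
```

Each pass plays the role of doInterp: it maps the current rule to an improved rule, and the iteration halts once successive rules agree at the test points to within normConvTol.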
input: (L, H_i, κ, {C_1, ..., C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), from one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2 doInterp
input: (L, H, κ, {C_1, ..., C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), from one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2: Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6: funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4 genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ - 1 future conditional Z vectors needed for the series formula (the empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1; i < κ - 1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5: end
6: {(X_{t+1}, Z_{t+1}), ..., (X_{t+κ-1}, Z_{t+κ-1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, ..., Z_{t+κ-1}}

Algorithm 5 genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model as an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I - F)^{-1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x) = [ Bx + (I - F)^{-1}φψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H

Algorithm 6 genBothX0Z0Funcs
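Algorithm 6 just evaluates the linear reference model. A numeric sketch with hypothetical matrices: the z components start at zero, and the ε term drops from the conditional expectation since E[ε] = 0:

```python
import numpy as np

# Hypothetical linear reference model matrices for n = 2 state variables.
B = np.array([[0.8, 0.1],
              [0.0, 0.7]])
F = np.array([[0.5, 0.0],
              [0.1, 0.4]])            # eigenvalues inside the unit circle
phi = np.eye(2)
psi_c = np.array([0.2, 0.1])
psi_eps = np.eye(2)

# (I - F)^{-1} phi psi_c, the constant term of the linear rule.
const = np.linalg.solve(np.eye(2) - F, phi @ psi_c)

def gamma(x, eps):
    """Initial proposed decision rule; the z components are zero."""
    return B @ x + const + psi_eps @ eps, np.zeros(2)

def G(x):
    """Its conditional expectation: the eps term drops since E[eps] = 0."""
    return B @ x + const, np.zeros(2)
```

Evaluating γ at ε = 0 and G at the same x returns identical x components, which is the consistency the initial proposal needs.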
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ-1}})
3: xtVals = genXtOfXtm1(L, x_{t-1}, ε_t, z_t, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7 genLilXkZkFunc
input: (L, x_{t-1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t-1} + (I - F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t
Algorithm 8 genXtOfXtm1
input: (L, xtVals, fCon)
1: Symbolically creates a function that uses an initial Z_t path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_{t+1} = Bx_t + (I - F)^{-1}φψ_c + φψ_z z_t + fCon
output: x_{t+1}
Algorithm 9 genXtp1OfXt
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1: Numerically sums the F contribution for the series.
2: S = Σ_ν F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10 fSumC
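Algorithm 10's truncated series is a short matrix-power sum. A NumPy sketch (F, φ, and the Z path are hypothetical values) that accumulates the powers of F instead of recomputing them:

```python
import numpy as np

F = np.array([[0.5, 0.1],
              [0.0, 0.4]])            # hypothetical stable F
phi = np.array([[1.0, 0.0],
                [0.2, 1.0]])
Z_path = [np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([1.0, 1.0])]       # Z_{t+1}, Z_{t+2}, Z_{t+3}

def f_sum_c(F, phi, Z_path):
    """S = sum_{nu=1}^{kappa} F^{nu-1} phi Z_{t+nu}, per Algorithm 10."""
    S = np.zeros(F.shape[0])
    F_pow = np.eye(F.shape[0])        # F^0
    for z in Z_path:
        S += F_pow @ (phi @ z)
        F_pow = F_pow @ F
    return S
```

Accumulating `F_pow` keeps the cost linear in κ, one matrix multiply per term.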
input: ({C_1, ..., C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, ..., C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11 makeInterpFuncs
input: ({C_1, ..., C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing the interpolation data consumed by interpDataToFunc.
output: interpData

Algorithm 12 genInterpData
input: (C, (x_{t-1}, ε_t))
1: Evaluate the model equations, obeying the pre- and post-conditions.
output: x_t, or failed

Algorithm 13 evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations V. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14 interpDataToFunc
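Algorithms 11-16 share one structure: evaluate the rule at collocation points, solve Ψc = values for basis coefficients, and return the resulting interpolant. A one-dimensional Chebyshev sketch of that structure (the paper uses Smolyak anisotropic grids, which this deliberately does not reproduce):

```python
import numpy as np
from numpy.polynomial import chebyshev

deg = 5
# Chebyshev collocation nodes on [-1, 1].
nodes = np.cos((2.0 * np.arange(deg + 1) + 1.0) * np.pi / (2.0 * (deg + 1)))
# Psi matrix: each basis polynomial evaluated at each collocation point.
Psi = chebyshev.chebvander(nodes, deg)

def interp_data_to_func(values):
    """Solve Psi c = values for coefficients; return the interpolant."""
    coeffs = np.linalg.solve(Psi, values)
    return lambda x: chebyshev.chebval(x, coeffs)

# A degree-5 interpolant through 6 nodes reproduces a cubic exactly.
f = interp_data_to_func(nodes ** 3 - nodes)
```

The Smolyak versions replace `nodes` and `Psi` with sparse-grid points and basis evaluations, but the solve-then-wrap pattern is the same.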
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 15 smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 16 smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 17 genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 18 genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 19 genSolutionPath
Contents

- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for ``Anticipated Shocks''
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
- Assessing x_t Errors
  - Truncation Error
  - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
  - Algorithm Overview
    - Single Equation System Case
    - Multiple Equation System Case
  - Approximating a Known Solution: κ = 1, U(c) = Log(c)
  - Approximating an Unknown Solution: κ = 3
  - A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
7 Conclusions

This paper introduces a new series representation for bounded time series and shows how to develop series for representing bounded time invariant maps. The series representation plays a strategic role in developing a formula for approximating the errors in dynamic model solution decision rules. The paper also shows how to augment the usual ``Euler equation'' model solution methods with constraints reflecting how the updated conditional expectations paths relate to the time t solution values, thereby enhancing solution algorithm reliability and accuracy. The prototype was written in Mathematica and is available on GitHub at https://github.com/es335mathwiz/AMASeriesRepresentation.
References

Gary Anderson. A reliable and computationally efficient algorithm for imposing the saddle point property in dynamic models. Journal of Economic Dynamics and Control, 34:472-489, June 2010. URL http://www.sciencedirect.com/science/article/pii/S0165188909001924.

Gianluca Benigno, Huigang Chen, Christopher Otrok, Alessandro Rebucci, and Eric R. Young. Optimal policy with occasionally binding credit constraints. First draft: November 2007, April 2009.

Roberto M. Billi. Optimal inflation for the U.S. economy. American Economic Journal: Macroeconomics, 3(3):29-52, July 2011. URL http://ideas.repec.org/a/aea/aejmac/v3y2011i3p29-52.html.

Andrew Binning and Junior Maih. Modelling occasionally binding constraints using regime-switching. Working Papers No 9/2017, Centre for Applied Macro- and Petroleum economics (CAMP), BI Norwegian Business School, December 2017. URL https://ideas.repec.org/p/bny/wpaper/0058.html.

Olivier Jean Blanchard and Charles M. Kahn. The solution of linear difference models under rational expectations. Econometrica, 48(5):1305-11, July 1980. URL http://ideas.repec.org/a/ecm/emetrp/v48y1980i5p1305-11.html.

Johannes Brumm and Michael Grill. Computing equilibria in dynamic models with occasionally binding constraints. Technical Report 95, University of Mannheim, Center for Doctoral Studies in Economics, July 2010.

Lawrence J. Christiano and Jonas D. M. Fisher. Algorithms for solving dynamic models with occasionally binding constraints. Journal of Economic Dynamics and Control, 24(8):1179-1232, July 2000.

Ulrich Doraszelski and Kenneth L. Judd. Avoiding the curse of dimensionality in dynamic stochastic games. Harvard University, Hoover Institution and NBER, November 2004.

Jess Gaspar and Kenneth L. Judd. Solving large scale rational expectations models. Technical Report 0207, National Bureau of Economic Research, Inc, February 1997. Available at http://ideas.repec.org/p/nbr/nberte/0207.html.

Grey Gordon. Computing dynamic heterogeneous-agent economies: Tracking the distribution. PIER Working Paper Archive 11-018, Penn Institute for Economic Research, Department of Economics, University of Pennsylvania, June 2011. URL http://ideas.repec.org/p/pen/papers/11-018.html.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015a. ISSN 0304-3932. doi: 10.1016/j.jmoneco.2014.08.005. URL http://linkinghub.elsevier.com/retrieve/pii/S0304393214001238.

Luca Guerrieri and Matteo Iacoviello. OccBin: A toolkit for solving dynamic models with occasionally binding constraints easily. Journal of Monetary Economics, 70:22-38, 2015b.

Christian Haefke. Projections parameterized expectations algorithms (Matlab). QM&RBC Codes, Quantitative Macroeconomics & Real Business Cycles, November 1998.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, 2010a. ISSN 0165-1889. doi: 10.1016/j.jedc.2010.05.002.

Thomas Hintermaier and Winfried Koeniger. The method of endogenous gridpoints with occasionally binding constraints among endogenous variables. Journal of Economic Dynamics and Control, 34(10):2074-2088, October 2010b. URL http://ideas.repec.org/a/eee/dyncon/v34y2010i10p2074-2088.html.

Thomas Holden. Existence, uniqueness and computation of solutions to DSGE models with occasionally binding constraints. University of Surrey, 2015.

Kenneth Judd. Projection methods for solving aggregate growth models. Journal of Economic Theory, 58:410-452, 1992.

Kenneth Judd, Lilia Maliar, and Serguei Maliar. Numerically stable and accurate stochastic simulation approaches for solving dynamic economic models. Working Papers Serie AD 2011-15, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), July 2011. URL https://ideas.repec.org/p/ivi/wpasad/2011-15.html.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Unpublished, 2013.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Rafael Valero. Smolyak method for solving dynamic economic models: Lagrange interpolation, anisotropic grid and adaptive domain. Journal of Economic Dynamics and Control, 44:92-123, July 2014. ISSN 0165-1889. doi: 10.1016/j.jedc.2014.03.003. URL http://linkinghub.elsevier.com/retrieve/pii/S0165188914000621.

Kenneth L. Judd, Lilia Maliar, and Serguei Maliar. Lower bounds on approximation errors to numerical solutions of dynamic economic models. Econometrica, 85(3):991-1012, May 2017a. ISSN 1468-0262. doi: 10.3982/ECTA12791. URL https://doi.org/10.3982/ECTA12791.

Kenneth L. Judd, Lilia Maliar, Serguei Maliar, and Inna Tsener. How to solve dynamic stochastic models computing expectations just once. Quantitative Economics, 8(3):851-893, November 2017b. URL https://ideas.repec.org/a/wly/quante/v8y2017i3p851-893.html.

Gary Koop, M. Hashem Pesaran, and Simon M. Potter. Impulse response analysis in nonlinear multivariate models. Journal of Econometrics, 74(1):119-147, September 1996.

Martin Lettau. Inspecting the mechanism: Closed-form solutions for asset prices in real business cycle models. Economic Journal, 113(489):550-575, July 2003.

Lilia Maliar and Serguei Maliar. Parametrized expectations algorithm and the moving bounds. Working Papers Serie AD 2001-23, Instituto Valenciano de Investigaciones Económicas, S.A. (Ivie), August 2001. URL https://ideas.repec.org/p/ivi/wpasad/2001-23.html.

Lilia Maliar and Serguei Maliar. Solving the neoclassical growth model with quasi-geometric discounting: A grid-based Euler-equation method. Computational Economics, 26:163-172, 2005. ISSN 0927-7099. doi: 10.1007/s10614-005-1732-y.

Albert Marcet and Guido Lorenzoni. Parameterized expectations approach: Some practical issues. In Ramon Marimon and Andrew Scott, editors, Computational Methods for the Study of Dynamic Economies, chapter 7, pages 143-171. Oxford University Press, New York, 1999.

Taisuke Nakata. Optimal fiscal and monetary policy with occasionally binding zero bound constraints. New York University, January 2012.

Anton Nakov. Optimal and simple monetary policy rules with zero floor on the nominal interest rate. International Journal of Central Banking, 4(2):73-127, June 2008. URL http://ideas.repec.org/a/ijc/ijcjou/y2008q2a3.html.

Adrian Peralta-Alva and Manuel Santos. In K. Schmedders and K. Judd, editors, Handbook of Computational Economics, volume 3, chapter 9, pages 517-556. Elsevier Science, 2014.

Simon M. Potter. Nonlinear impulse response functions. Journal of Economic Dynamics and Control, 24(10):1425-1446, September 2000.

Manuel S. Santos and Adrian Peralta-Alva. Accuracy of simulations for stochastic dynamic models. Econometrica, 73(6):1939-1976, November 2005. ISSN 1468-0262. doi: 10.1111/j.1468-0262.2005.00642.x. URL https://doi.org/10.1111/j.1468-0262.2005.00642.x.

Manuel Santos. Accuracy of numerical solutions using the Euler equation residuals. Econometrica, 68(6):1377-1402, 2000. URL https://EconPapers.repec.org/RePEc:ecm:emetrp:v:68:y:2000:i:6:p:1377-1402.
A Error Approximation Formula Proof

We consider time invariant maps such as often arise from solving an optimization problem codified in a system of equations. In what follows we construct an error approximation for proposed model solutions. Consider a bounded region $\mathcal{A}$ where all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function

$$G(x) \equiv E\left[g(x, \varepsilon)\right]$$

then with

$$E_t x^*_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$

we have

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$

In words, the exact solution exactly satisfies the model equations. Using $G$ and $L$, construct the family of trajectories and the corresponding $z^*_t(x_{t-1}, \varepsilon)$:

$$\left\{x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x^*_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t)$$

Consequently, the exact solution has a representation given by

$$x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^*_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^*_{t+1}(x_{t-1}, \varepsilon)\right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, E\left[z^*_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$

with

$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
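A minimal numerical sketch of this series representation may help fix ideas. The scalar parameter values below ($B$, $F$, $\phi$, $\psi_\varepsilon$, $\psi_c$) are hypothetical, and the infinite sum is truncated at the length of the supplied $z$ path:

```python
# Scalar sketch of x_t = B x_{t-1} + phi psi_eps eps + (I-F)^{-1} phi psi_c
#                      + sum_nu F^nu phi z_{t+nu}    (hypothetical parameters)
B, F, phi = 0.9, 0.5, 1.0
psi_eps, psi_c = 1.0, 0.2

def x_series(x_prev, eps, z_path):
    """Truncated series representation; z_path holds z_{t+nu}, nu = 0, 1, ..."""
    linear_part = B * x_prev + phi * psi_eps * eps + phi * psi_c / (1.0 - F)
    correction = sum((F ** nu) * phi * z for nu, z in enumerate(z_path))
    return linear_part + correction

# With every z term zero, the formula collapses to the linear reference solution.
print(x_series(1.0, 0.0, [0.0] * 5))  # prints 1.3  (= 0.9*1 + 0.2/0.5)
```

With nonzero $z$ terms the same function delivers the nonlinear correction to the linear reference path.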
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E\left[g^p(x, \varepsilon)\right]$, so that

$$E_t x^p_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$
By construction, the conditional expectations paths will all be bounded in the region $\mathcal{A}^p$. Using $G^p$ and $L$, construct the family of trajectories and the corresponding $z^p_t(x_{t-1}, \varepsilon)$:¹²

$$\left\{x^p_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L : \left\|x^p_t(x_{t-1}, \varepsilon_t)\right\|_2 \le \bar{X} \;\; \forall t > 0\right\}$$

$$z^p_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^p_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^p_t(x_{t-1}, \varepsilon_t) + H_1 x^p_{t+1}(x_{t-1}, \varepsilon_t)$$

The proposed solution has a representation given by

$$x^p_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

and

$$E\left[x^p_{t+1}(x_{t-1}, \varepsilon)\right] = B x^p_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi\, z^p_{t+1+\nu}(x_{t-1}, \varepsilon) + (I - F)^{-1} \phi \psi_c$$

with

$$e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$

Differencing the two representations,

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi \left(z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)\right)$$

Defining

$$\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t) \equiv z^*_{t+\nu}(x_{t-1}, \varepsilon) - z^p_{t+\nu}(x_{t-1}, \varepsilon)$$

gives

$$x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon) = \sum_{\nu=0}^{\infty} F^{\nu} \phi\, \Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)$$

$$\left\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \sum_{\nu=0}^{\infty} \left\|F^{\nu} \phi\right\| \left\|\Delta z^p_{t+\nu}(x_{t-1}, \varepsilon_t)\right\|_\infty$$

By bounding the largest deviation in the paths for the $\Delta z^p_t$, we can bound the largest difference in $x_t$.¹³ The exact solution satisfies the model equations exactly. The error associated with the proposed solution leads to a conservative bound on the largest change in $z$ needed to match the exact solution.
¹² The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

¹³ Since the future values are probability weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $\mathcal{A}^p$, the largest errors for $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\left\|\Delta z_t\right\| \le \max_{x_{-}, \varepsilon} \left\|\phi\, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$

$$\left\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1} \phi\, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$
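This bound can be evaluated by maximizing the weighted model residual over a grid of initial conditions. A scalar sketch, where the reference-model constants and the residual function are hypothetical stand-ins:

```python
import numpy as np

# Conservative bound sketch:
# ||x*_t - x^p_t||_inf <= max_{x-,eps} |(I-F)^{-1} phi M(x-, g^p, G^p(g^p), eps)|
F, phi = 0.5, 1.0          # hypothetical scalar reference model

def model_residual(x_prev, eps):
    # Stand-in for M evaluated at a proposed rule g^p; a real application
    # would plug g^p and G^p into the model equations here.
    return 0.01 * np.cos(x_prev) * (1.0 + eps ** 2)

xs = np.linspace(-1.0, 1.0, 201)
es = np.linspace(-0.5, 0.5, 101)
X, E = np.meshgrid(xs, es)
bound = np.max(np.abs(phi * model_residual(X, E) / (1.0 - F)))
print(bound)  # prints 0.025: worst residual 0.0125 at (x, eps) = (0, +/-0.5), scaled by 1/(1-F)
```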
Writing

$$z^*_t = H_{-} x^*_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}, \qquad z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}$$

$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t), \qquad e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

we have

$$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi \Delta z_t\right\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi \left(H_0 (x^p_t - x^*_t) + H_{+} (x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$

Since $M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0$, a first order expansion gives

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})$$

When $\frac{\partial M}{\partial x_t}$ is non-singular we can write

$$(x^*_t - x^p_t) \approx \left(\frac{\partial M}{\partial x_t}\right)^{-1} \left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})\right)$$

Collecting results (a sign flip leaves the norm unchanged),

$$\left(\left\|\phi \Delta z_t\right\|_2\right)^2 = \left(\left\|\phi \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})\right) + H_{+} (x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$$

$$= \left(\left\|\phi \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0 \left(\frac{\partial M}{\partial x_t}\right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_{+}\right) (x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2$$

Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M}{\partial x_t} \approx H_0, \qquad \frac{\partial M}{\partial x_{t+1}} \approx H_{+}$$

the coefficient on $(x^*_{t+1} - x^p_{t+1})$ vanishes, and the maximization produces

$$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2\right)^2$$
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)$$

If we know the current state, we can use the known conditional expectation function for the state:

$$E_t\!\left(x_{t+1}(x_t) \mid s_t = 1\right), \ldots, E_t\!\left(x_{t+1}(x_t) \mid s_t = N\right)$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t\!\left(x_{t+2}(x_t) \mid s_t = 1\right) \\ \vdots \\ E_t\!\left(x_{t+2}(x_t) \mid s_t = N\right) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1\right) \\ \vdots \\ E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+1} = N\right) \end{bmatrix}$$
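In code, the $(P \otimes I)$ stacking above amounts to a Kronecker product applied to the regime-stacked expectation vector. A sketch with a hypothetical two-state chain and two model variables:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])        # p_ij = Prob(s_{t+1} = j | s_t = i), hypothetical
I2 = np.eye(2)                    # identity block sized to the model variable count

# E(x_{t+2} | s_{t+1} = j), stacked by regime (hypothetical values).
cond_next = np.array([1.0, 2.0,   # regime 1 block
                      3.0, 4.0])  # regime 2 block

# E(x_{t+2} | s_t = i): the transition probabilities mix the regime blocks.
cond_now = np.kron(P, I2) @ cond_next
print(cond_now)  # [1.2 2.2 2.6 3.6]
```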
Consider two states $s_t \in \{0, 1\}$, where the depreciation rates differ, $d_0 > d_1$, and

$$\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
For each state $s_t = i \in \{0, 1\}$, with depreciation rate $d_i$, the model equations are

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1} \delta N_{t+1} + \lambda_{t+1} \delta (1-d_i) + \mu_{t+1} \delta (1-d_i)\right)\\
&I_t - \left(k_t - (1-d_i) k_{t-1}\right)\\
&\mu_t \left(k_t - (1-d_i) k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

together with the complementarity conditions on the investment constraint: either $\mu_t > 0$ and $k_t - (1-d_i) k_{t-1} - \upsilon I_{ss} = 0$ (binding), or $\mu_t = 0$ and $k_t - (1-d_i) k_{t-1} - \upsilon I_{ss} \ge 0$ (slack). The two states and the two constraint branches thus yield four regimes.
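Each depreciation state thus pairs with a binding ($\mu_t > 0$) or slack ($\mu_t = 0$) branch of the investment constraint. A small sketch, with hypothetical parameter values, of the consistency check a solver applies to a candidate point:

```python
# Hypothetical calibration, for the sketch only.
d0, d1, upsilon, I_ss = 0.1, 0.05, 0.975, 0.3

def constraint_gap(k_t, k_prev, s_t):
    """k_t - (1 - d_i) k_{t-1} - upsilon * I_ss for the current state's d_i."""
    d = d0 if s_t == 0 else d1
    return k_t - (1.0 - d) * k_prev - upsilon * I_ss

def branch_consistent(k_t, k_prev, s_t, mu_t, tol=1e-12):
    """Binding branch needs a zero gap; slack branch needs mu = 0 and gap >= 0."""
    gap = constraint_gap(k_t, k_prev, s_t)
    if mu_t > 0.0:
        return abs(gap) < tol
    return gap >= -tol

print(branch_consistent(3.0, 3.0, 0, 0.0))  # prints True: slack branch holds here
```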
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t−1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t−1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid, polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points; a Niederreiter sequence in the model variable ergodic region
γ: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule components
G: {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)} collects the decision rule conditional expectations components
H: {γ, G} collects decision rule information
{C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x + N_ε} → R^{N_x}
input: (L, H_0, κ, {C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by default (``normConvTol'' → 10^{−10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: H_0, H_1, ..., H_c
Algorithm 1: nestInterp
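The outer loop of nestInterp is a fixed-point iteration stopped by a change-at-test-points criterion. A Python sketch of that loop, with a stand-in contraction playing the role of doInterp and an identity stand-in for evaluation at the test points:

```python
import numpy as np

TARGET = np.array([0.5, -0.25])   # hypothetical fixed point of the improvement map

def do_interp_step(coeffs):
    # Stand-in for doInterp: one contraction step toward the fixed point.
    return coeffs + 0.5 * (TARGET - coeffs)

def nest_until_converged(coeffs, eval_at_test_points, tol=1e-10, max_iter=200):
    """NestWhile analogue: iterate until test-point values stop changing."""
    for _ in range(max_iter):
        new = do_interp_step(coeffs)
        if np.linalg.norm(eval_at_test_points(new) - eval_at_test_points(coeffs)) < tol:
            return new
        coeffs = new
    raise RuntimeError("no convergence within max_iter")

rule = nest_until_converged(np.zeros(2), lambda c: c)  # identity test-point stand-in
print(rule)  # close to [0.5, -0.25]
```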
input: (L, H_i, κ, {C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t−1}, ε_t), for one of the model equations, and subsequently uses these functions to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H, κ, {C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp

input: (L, H, κ, {C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t−1}, ε_t), for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples, d
Algorithm 3: genFindRootFuncs

input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, ..., Z_{t+κ−1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ−1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1; i < κ−1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(Z_{t+i−1})
5: end
6: {(X_{t+1}, Z_{t+1}), ..., (X_{t+κ−1}, Z_{t+κ−1})} = genZsForFindRoot(L, x^p_t, G)
output: {Z_{t+1}, ..., Z_{t+κ−1}}
Algorithm 5: genZsForFindRoot

input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I − F)^{−1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x, ε) = [ Bx + (I − F)^{−1}φψ_c ; 0_{N_x} ]  (the ε term drops out in expectation)
4: H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs

input: (L, {Z_{t+1}, ..., Z_{t+κ−1}})
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ−1}})
3: xtVals = genXtOfXtm1(L, x_{t−1}, ε_t, z_t, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc

input: (L, x_{t−1}, ε_t, z_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + F · fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_{t−1}, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^{−1}φψ_c + φψ_ε ε_t + φψ_z z_t + fCon
output: x_t
Algorithm 9: genXtp1OfXt

input: (L, {Z_{t+1}, ..., Z_{t+κ−1}})
1: Numerically sums the F contribution for the series:
2: S = Σ_ν F^{ν−1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC

input: ({C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs

input: ({C_1, ..., C_M, d}(x_{t−1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing the interpolation data for the decision rule and the decision rule conditional expectation; optionally replaces the approximations for ``backward looking'' equations with user-provided pre-computed functions.
output: {γ, G}
Algorithm 12: genInterpData

input: (C, (x_{t−1}, ε_t))
1: Evaluate the model equations, obeying pre- and post-conditions.
output: x_t, or "failed"
Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc

input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys
Algorithm 19: genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
2015a ISSN 03043932 doi 101016jjmoneco201408005 URL httplinkinghubelseviercomretrievepiiS0304393214001238
Luca Guerrieri and Matteo Iacoviello Occbin A toolkit for solving dynamic models withoccasionally binding constraints easily Journal of Monetary Economics 7022ndash38 2015b
Christian Haefke Projections parameterized expectations algorithms (matlab) QMampRBCCodes Quantitative Macroeconomics amp Real Business Cycles November 1998 URL httpideasrepecorgcdgeqmrbcd69html
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints with oc-casionally binding constraints among endogenous variables Journal of Economic Dynam-ics and Control 34(10)2074ndash2088 2010a ISSN 01651889 doi 101016jjedc201005002URL httpdxdoiorg101016jjedc201005002
Thomas Hintermaier and Winfried Koeniger The method of endogenous gridpoints withoccasionally binding constraints among endogenous variables Journal of Economic Dy-namics and Control 34(10)2074ndash2088 October 2010b URL httpideasrepecorgaeeedynconv34y2010i10p2074-2088html
Thomas Holden Existence uniqueness and computation of solutions to dsge models withoccasionally binding constraints University Surrey 2015
Kenneth Judd Projection methods for solving aggregate growth models Journal of Eco-nomic Theory 58410ndash452 1992
Kenneth Judd Lilia Maliar and Serguei Maliar Numerically stable and accurate stochasticsimulation approaches for solving dynamic economic models Working Papers Serie AD2011-15 Instituto Valenciano de Investigaciones EconAtildeşmicas SA (Ivie) July 2011 URLhttpsideasrepecorgpiviwpasad2011-15html
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak Method for Solv-ing Dynamic Economic Models Lagrange Interpolation Anisotropic Grid and AdaptiveDomain unpublished 2013
Kenneth L Judd Lilia Maliar Serguei Maliar and Rafael Valero Smolyak method forsolving dynamic economic models Lagrange interpolation anisotropic grid and adap-tive domain Journal of Economic Dynamics and Control 4492ndash123 July 2014 ISSN01651889 doi 101016jjedc201403003 URL httplinkinghubelseviercomretrievepiiS0165188914000621
Kenneth L Judd Lilia Maliar and Serguei Maliar Lower bounds on approximation errorsto numerical solutions of dynamic economic models Econometrica 85(3)991ndash1012 52017a ISSN 1468-0262 doi 103982ECTA12791 URL httphttpsdoiorg103982ECTA12791
54
Kenneth L Judd Lilia Maliar Serguei Maliar and Inna Tsener How to solvedynamic stochastic models computing expectations just once Quantitative Eco-nomics 8(3)851ndash893 November 2017b URL httpsideasrepecorgawlyquantev8y2017i3p851-893html
Gary Koop M Hashem Pesaran and Simon M Potter Impulse response analysis innonlinear multivariate models Journal of Econometrics 74(1)119ndash147 September1996 URL httpwwwsciencedirectcomsciencearticleB6VC0-3XDS2R2-62f8b1713c81765453e1a5e9a52593b52e
Martin Lettau Inspecting the mechanism Closed-form solutions for asset prices in realbusiness cycle models Economic Journal 113(489)550ndash575 July 2003
Lilia Maliar and Serguei Maliar Parametrized Expectations Algorithm And The Mov-ing Bounds Working Papers Serie AD 2001-23 Instituto Valenciano de InvestigacionesEconAtildeşmicas SA (Ivie) August 2001 URL httpsideasrepecorgpiviwpasad2001-23html
Lilia Maliar and Serguei Maliar Solving the neoclassical growth model with quasi-geometricdiscounting A grid-based euler-equation method Computational Economics 26163ndash1722005 ISSN 09277099 doi 101007s10614-005-1732-y
Albert Marcet and Guido Lorenzoni Parameterized expectations approach Some practicalissues In Ramon Marimon and Andrew Scott editors Computational Methods for theStudy of Dynamic Economies chapter 7 pages 143ndash171 Oxford University Press NewYork 1999
Taisuke Nakata Optimal fiscal and monetary policy with occasionally binding zero boundconstraints New York University January 2012
Anton Nakov Optimal and simple monetary policy rules with zero floor on the nominalinterest rate International Journal of Central Banking 4(2)73ndash127 June 2008 URLhttpideasrepecorgaijcijcjouy2008q2a3html
A Peralta-Alva and Manuel Santos Schmedders K and Judd K volume 3 chapter 9 pages517ndash556 Elsevier Science 2014
Simon M Potter Nonlinear impulse response functions Journal of Economic Dynamicsand Control 24(10)1425ndash1446 September 2000 URL httpwwwsciencedirectcomsciencearticleB6V85-40T9HN0-313f27e3e4d4b890df59d4f83850c1c3d1
By Manuel S Santos and Adrian Peralta-alva Accuracy of simulations for stochastic dy-namic models Econometrica 73(6)1939ndash1976 11 2005 ISSN 1468-0262 doi 101111j1468-0262200500642x URL httphttpsdoiorg101111j1468-0262200500642x
Manuel Santos Accuracy of numerical solutions using the euler equation residuals Econo-metrica 68(6)1377ndash1402 2000 URL httpsEconPapersrepecorgRePEcecmemetrpv68y2000i6p1377-1402
55
A Error Approximation Formula ProofWe consider time invariant maps such as often arise from solving an optimization problemcodified in a systems of equations In what follows we construct an error approximationfor proposed model solutions Consider a bounded region A where all iterated conditionalexpectations paths remain bounded Given an exact solution xt = g(xtminus1 εt) define theconditional expectations function
G(x) equiv E [g(x ε)]
then with
Etxt+1 = G(g(xtminus1 εt))
we have
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
In words the exact solution exactly satisfies the model equations Using G and L constructthe family of trajectories and corresponding zt (xtminus1 ε)
xt (xtminus1 εt) isin RL xt (xtminus1 εt)2 le X forallt gt 0
zt (xtminus1 εt) equivHminus1xtminus1(xtminus1 εt)+
H0xt (xtminus1 εt)+
H1xt+1(xtminus1 εt)
Consequently the exact solution has a representation given by
xt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzt+ν(xtminus1 ε)
and
E[xt+1(xtminus1 ε)
]= Bxt+k +
infinsumν=0
F νφE[zt+1+ν(xtminus1 ε)
]+ (I minus F )minus1φψc
with
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
Now consider a proposed solution for the model xpt = gp(xtminus1 εt) define Gp(x) equivE [gp(x ε)] so that
Etxt+1 = Gp(gp(xtminus1 εt))ept (xtminus1 ε) equivM(xtminus1 x
pt Etx
pt+1 εt)
56
By construction the conditional expectations paths will all be bounded in the region ApUsing Gp and L construct the family of trajectories and corresponding zpt (xtminus1 ε)12
xpt (xtminus1 εt) isin RL xpt (xtminus1 εt)2 le X forallt gt 0
zpt (xtminus1 εt) equivHminus1xptminus1(xtminus1 εt)+
H0xpt (xtminus1 εt)+
H1xpt+1(xtminus1 εt)
The proposed solution has a representation given by
xpt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzpt+ν(xtminus1 ε)
and
E [xpt+1(xtminus1 ε)] = Bxpt+k +infinsumν=0
F νφzpt+1+ν(xtminus1 ε) + (I minus F )minus1φψc
with
ept (xtminus1 ε) equivM(xtminus1 xpt Etx
pt+1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ(zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
∆zpt+ν(xtminus1 εt) equiv (zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ∆zpt+ν(xtminus1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin leinfinsumν=0
F sφ ∆zpt+ν(xtminus1 εt)infin
By bounding the largest deviation in the paths for the ∆zpt we can bound the largestdifference in xt13 The exact solution satisfies the model equations exactly The error asso-ciated with the proposed solution leads to a conservative bound on the largest change in zneeded to match the exact solution
12The algorithm presented below for finding solutions provides a mechanism for generating such proposedsolutions
13Since the future values are probability weighted averages of the ∆zpt values for the given initial conditions
and the condition expectations paths remain in the region Ap the largest error for ∆zpt+k are pessimistic
bounds for the errors from the conditional expectations path
57
∆zt le maxxminusε
φM(xminus gp(xminus ε) Gp(gp(xminus ε)) ε)2
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin le maxxminusε
∥∥∥(I minus F )minus1φM(xminus gp(xminus ε) Gp(gp(xminus ε)) ε)∥∥∥
2
zt = Hminusxtminus1 +H0x
t +H+x
t+1
zpt = Hminusxptminus1 +H0x
pt +H+x
pt+1
0 = M(xtminus1 xt x
t+1 εt)
ept (xtminus1 ε) = M(xptminus1 xpt x
pt+1 εt)
maxxtminus1εt
(φ∆zt2)2 = maxxtminus1εt
(φ(H0(xpt minus xt ) +H+(xpt+1 minus xt+1)))2
with
$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0$$
a first-order expansion gives
$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} (x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1})$$
When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write
$$(x^*_t - x^p_t) \approx \left( \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right)$$
Collecting results, and abbreviating the partial derivatives of $M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)$,
$$(\|\phi \Delta z_t\|_2)^2 = \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \left( \Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) \right) + H_+ (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$
$$= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} (x^*_{t+1} - x^p_{t+1}) + H_+ (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$
$$= \left( \left\| \phi \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \Delta e^p_t(x_{t-1}, \varepsilon) - \left( H_0 \left( \frac{\partial M}{\partial x_t} \right)^{-1} \frac{\partial M}{\partial x_{t+1}} - H_+ \right) (x^*_{t+1} - x^p_{t+1}) \right) \right\|_2 \right)^2$$
Approximating the derivatives using the $H$ matrices,
$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_+$$
makes the $(x^*_{t+1} - x^p_{t+1})$ term vanish and produces
$$\max_{x_{t-1}, \varepsilon_t} (\|\phi \, \Delta e^p_t(x_{t-1}, \varepsilon)\|_2)^2$$
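The cancellation that produces this last expression can be verified mechanically: substituting $H_0$ and $H_+$ for the two partial derivatives collapses the collected expression to $\Delta e^p_t$. A small numeric sketch, with arbitrary made-up matrices standing in for $H_0$, $H_+$, and $\phi$:

```python
import numpy as np

# With dM/dx_t = H0 and dM/dx_{t+1} = Hp, the collected expression
# H0 H0^{-1} (de - Hp dx') + Hp dx' collapses to de, so ||phi dz|| = ||phi de||.
rng = np.random.default_rng(1)
n = 4
H0 = rng.standard_normal((n, n)) + 5.0 * np.eye(n)   # keep H0 non-singular
Hp = rng.standard_normal((n, n))
phi = rng.standard_normal((n, n))

de = rng.standard_normal(n)       # Delta e^p_t: model-equation error (hypothetical)
dxp1 = rng.standard_normal(n)     # x*_{t+1} - x^p_{t+1} (hypothetical)

dz = H0 @ np.linalg.solve(H0, de - Hp @ dxp1) + Hp @ dxp1
assert np.allclose(dz, de)
assert np.isclose(np.linalg.norm(phi @ dz), np.linalg.norm(phi @ de))
```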
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case in which the transition probabilities $p_{ij}$ are constant and do not depend on $x_{t-1}$, $\varepsilon_t$, or $x_t$:
$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij} \, E_t(x_{t+1}(x_t) \mid s_{t+1} = j)$$
$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij} \, E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j)$$
If we know the current state, we can use the known conditional expectation function for that state:
$$\begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix}$$
For the next state, we can compute expectations if the errors are independent:
$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$
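A minimal numeric sketch of the stacking identity above, with a made-up two-regime chain ($P$ and the stacked expectation values are hypothetical):

```python
import numpy as np

# (P kron I) maps next-regime-conditional expectations into current-regime ones:
# block i of the result is sum_j p_ij * E(x_{t+2} | s_{t+1} = j).
N, n = 2, 3
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])                      # p_ij = Prob(s_{t+1}=j | s_t=i)
rng = np.random.default_rng(2)
v = rng.standard_normal(N * n)                  # stacked E(x_{t+2}|s_{t+1}=j), hypothetical

w = np.kron(P, np.eye(n)) @ v
for i in range(N):
    mix = sum(P[i, j] * v[j * n:(j + 1) * n] for j in range(N))
    assert np.allclose(w[i * n:(i + 1) * n], mix)
```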
Consider two states $s_t \in \{0, 1\}$ in which the depreciation rates differ, $d_0 > d_1$, with
$$\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
If $(s_t = 0 \wedge \mu_t > 0 \wedge (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) = 0)$:
$$\begin{aligned} &\lambda_t - \frac{1}{c_t} \\ &c_t + k_t - \theta_t k_{t-1}^{\alpha} \\ &N_t - \lambda_t \theta_t \\ &\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\ &\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0)) \\ &I_t - (k_t - (1 - d_0) k_{t-1}) \\ &\mu_t (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) \end{aligned}$$
If $(s_t = 0 \wedge \mu_t = 0 \wedge (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) \ge 0)$:
$$\begin{aligned} &\lambda_t - \frac{1}{c_t} \\ &c_t + k_t - \theta_t k_{t-1}^{\alpha} \\ &N_t - \lambda_t \theta_t \\ &\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\ &\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_0) + \mu_{t+1} \delta (1 - d_0)) \\ &I_t - (k_t - (1 - d_0) k_{t-1}) \\ &\mu_t (k_t - (1 - d_0) k_{t-1} - \upsilon I_{ss}) \end{aligned}$$
If $(s_t = 1 \wedge \mu_t > 0 \wedge (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) = 0)$:
$$\begin{aligned} &\lambda_t - \frac{1}{c_t} \\ &c_t + k_t - \theta_t k_{t-1}^{\alpha} \\ &N_t - \lambda_t \theta_t \\ &\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\ &\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1)) \\ &I_t - (k_t - (1 - d_1) k_{t-1}) \\ &\mu_t (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) \end{aligned}$$
If $(s_t = 1 \wedge \mu_t = 0 \wedge (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) \ge 0)$:
$$\begin{aligned} &\lambda_t - \frac{1}{c_t} \\ &c_t + k_t - \theta_t k_{t-1}^{\alpha} \\ &N_t - \lambda_t \theta_t \\ &\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\ &\lambda_t + \mu_t - (\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1} \delta (1 - d_1) + \mu_{t+1} \delta (1 - d_1)) \\ &I_t - (k_t - (1 - d_1) k_{t-1}) \\ &\mu_t (k_t - (1 - d_1) k_{t-1} - \upsilon I_{ss}) \end{aligned}$$
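The four gated systems above share one structure: in each regime one tries the branch in which the investment constraint binds ($\mu_t > 0$) and the branch in which it is slack ($\mu_t = 0$), keeping whichever branch's side conditions hold. A sketch of that gate logic, with hypothetical per-branch solvers standing in for root-finders on the regime-specific equation systems:

```python
# Gate logic for the occasionally binding investment constraint. The
# solve_binding / solve_slack callables are hypothetical stand-ins for
# solvers of the regime-specific equation systems above.

def choose_branch(s_t, k_prev, d, v_iss, solve_binding, solve_slack):
    """Pick the branch whose complementary-slackness conditions hold."""
    dep = d[s_t]                            # regime-specific depreciation rate
    sol = solve_binding(s_t, k_prev)        # imposes k_t-(1-dep)k_{t-1}-v*I_ss = 0
    if sol["mu"] > 0:                       # binding branch is valid only if mu_t > 0
        return sol
    sol = solve_slack(s_t, k_prev)          # imposes mu_t = 0
    assert sol["k"] - (1 - dep) * k_prev - v_iss >= 0   # constraint must be slack
    return sol

# Toy illustration: the binding branch yields mu_t < 0, so the slack branch wins.
d = {0: 0.10, 1: 0.05}
solve_binding = lambda s, k: {"k": (1 - d[s]) * k + 0.2, "mu": -0.1}
solve_slack = lambda s, k: {"k": (1 - d[s]) * k + 0.5, "mu": 0.0}
sol = choose_branch(0, 1.0, d, 0.2, solve_binding, solve_slack)
assert sol["mu"] == 0.0
```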
C Algorithm Pseudo-code

- $\mathcal{L} \equiv \{H, \psi_\varepsilon, \psi_c, B, \phi, F\}$: the linear reference model
- $x^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $x_t$ components
- $z^p(x_{t-1}, \varepsilon_t)$: the proposed decision rule, $z_t$ components
- $X^p(x_{t-1})$: the proposed decision rule conditional expectations, $x_t$ components
- $Z^p(x_{t-1})$: the proposed decision rule conditional expectations, $z_t$ components
- $\kappa$: one less than the number of terms in the series representation ($\kappa \ge 0$)
- $S$: Smolyak anisotropic grid, polynomials $(p_1(x, \varepsilon), \ldots, p_N(x, \varepsilon))$, and their conditional expectations $(P_1(x), \ldots, P_N(x))$
- $T$: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
- $\gamma$: $\{x(x_{t-1}, \varepsilon_t), z(x_{t-1}, \varepsilon_t)\}$, collects the decision rule components
- $G$: collects the decision rule conditional expectations components
- $H$: $\{\gamma, G\}$, collects decision rule information
- $\{C_1, \ldots, C_M\}$, $d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}$: the equation system, $\mathbb{R}^{3N_x + N_\varepsilon} \rightarrow \mathbb{R}^{N_x}$
input: $(\mathcal{L}, H_0, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S, T)$
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option ``normConvTol'' (default $10^{-10}$).
2: NestWhile(Function(v, doInterp($\mathcal{L}$, v, $\kappa$, $\{C_1, \ldots, C_M\}$, $d(\cdot)|_{(b,c)}$, S)), $H_0$, notConvergedAt(v, T))
output: $H_0, H_1, \ldots, H_c$
Algorithm 1 nestInterp
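The outer loop of Algorithm 1 is an ordinary fixed-point iteration on function space. A minimal sketch in Python (the scalar toy update below is made up; in the paper, doInterp re-solves and re-interpolates the decision rules at each pass):

```python
import numpy as np

# Fixed-point iteration in the style of NestWhile: keep applying do_interp
# until the candidate rule stops changing at the test points.
def nest_while(do_interp, H0, test_points, tol=1e-10, max_iter=500):
    H = H0
    for _ in range(max_iter):
        H_next = do_interp(H)
        gap = max(abs(H_next(x) - H(x)) for x in test_points)
        H = H_next
        if gap < tol:
            break
    return H

# Toy stand-in for doInterp: average the current rule toward cos(x),
# whose fixed point is cos(x) itself.
do_interp = lambda H: (lambda x, H=H: 0.5 * (H(x) + np.cos(x)))
rule = nest_while(do_interp, lambda x: 0.0, np.linspace(-1.0, 1.0, 7))
assert abs(rule(0.3) - np.cos(0.3)) < 1e-8
```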
input: $(\mathcal{L}, H_i, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1: Symbolically generate a collection of function triples and an outer model-solution selection function. Each triple returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ and one of the model equation systems; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs($\mathcal{L}$, $H_i$, $\kappa$, $\{C_1, \ldots, C_M\}$, $d(\cdot)|_{(b,c)}$)
3: $H_{i+1}$ = makeInterpFuncs(theFindRootFuncs, S)
output: $H_{i+1}$
Algorithm 2 doInterp
input: $(\mathcal{L}, H, \kappa, \{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1: Apply genFindRootWorker to each model component, symbolically generating a collection of function triples and an outer model-solution selection function. Each triple returns the Boolean gate values and a unique value for $x_t$ for any given value of $(x_{t-1}, \varepsilon_t)$ and one equation system from the set of model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker($\mathcal{L}$, H, $\kappa$, v[2]), v[3]}), $\{C_1, \ldots, C_M\}$)
output: theFuncOptionsTriples
Algorithm 3 genFindRootFuncs
input: $(\mathcal{L}, G, \kappa, M)$
1: Symbolically construct functions that return $x_t$ for a given $(x_{t-1}, \varepsilon_t)$.
2: $Z = \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$ = genZsForFindRoot($\mathcal{L}$, $x^p_t$, G)
3: modEqnArgsFunc = genLilXkZkFunc($\mathcal{L}$, Z)
4: $\mathcal{X} = \{x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t\}$ = modEqnArgsFunc($x_{t-1}$, $\varepsilon_t$, $z_t$)
5: funcOfXtZt = FindRoot($(x^p_t, z_t)$, $\{M(\mathcal{X}), x_t - x^p_t\}$)
6: funcOfXtm1Eps = Function($(x_{t-1}, \varepsilon_t)$, funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4 genFindRootWorker
input: $(x^p_t, \mathcal{L}, G, \kappa, M)$
1: Symbolically compute the $\kappa - 1$ future conditional $Z$ vectors needed for the series formula (an empty list for $\kappa = 1$).
2: $X_t = x^p_t$
3: for $i = 1$; $i < \kappa - 1$; $i{+}{+}$ do
4:   $\{X_{t+i}, Z_{t+i}\} = G(Z_{t+i-1})$
5: end
output: $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$

Algorithm 5 genZsForFindRoot
input: $(\mathcal{L} = \{H, \psi_\varepsilon, \psi_c, B, \phi, F\})$
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: $\gamma(x, \varepsilon) = \begin{bmatrix} B x + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon \\ 0_{N_x} \end{bmatrix}$
3: $G(x) = \begin{bmatrix} B x + (I - F)^{-1} \phi \psi_c \\ 0_{N_x} \end{bmatrix}$
4: $H = \{\gamma, G\}$
output: $H$

Algorithm 6 genBothX0Z0Funcs
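A sketch of Algorithm 6 in code: the initial proposal is just the linear reference model's own rule, with the $z$ components set to zero. The small matrices in the demo are arbitrary stand-ins, and the $\phi\psi_\varepsilon$ placement follows the series representation in Appendix A.

```python
import numpy as np

def gen_both_x0_z0(B, phi, F, psi_eps, psi_c):
    """Initial rules from the linear reference model L = {H, psi_eps, psi_c, B, phi, F}."""
    n = B.shape[0]
    const = np.linalg.solve(np.eye(n) - F, phi @ psi_c)        # (I-F)^{-1} phi psi_c
    gamma = lambda x, eps: np.concatenate(
        [B @ x + const + phi @ (psi_eps @ eps), np.zeros(n)])  # z components start at 0
    G = lambda x: np.concatenate([B @ x + const, np.zeros(n)])  # E[eps] = 0 drops the shock
    return gamma, G

# Hypothetical small reference model for illustration:
B = 0.8 * np.eye(2)
F = 0.5 * np.eye(2)
phi, psi_eps = np.eye(2), np.eye(2)
psi_c = np.array([0.1, 0.1])
gamma, G = gen_both_x0_z0(B, phi, F, psi_eps, psi_c)
x = np.array([1.0, -1.0])
assert np.allclose(gamma(x, np.zeros(2)), G(x))   # zero shock matches the expectation rule
```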
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Symbolically create a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing the (potentially symbolic) inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: fCon = fSumC($\phi$, F, $\psi_z$, $\{Z_{t+1}, \ldots, Z_{t+\kappa-1}\}$)
3: xtVals = genXtOfXtm1($\mathcal{L}$, $x_{t-1}$, $\varepsilon_t$, $z_t$, fCon)
4: xtp1Vals = genXtp1OfXt($\mathcal{L}$, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7 genLilXkZkFunc
input: $(\mathcal{L}, x_{t-1}, \varepsilon_t, z_t, \mathrm{fCon})$
1: Symbolically create a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_t$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_t = B x_{t-1} + (I - F)^{-1} \phi \psi_c + \phi \psi_\varepsilon \varepsilon_t + \phi \psi_z z_t + F \cdot \mathrm{fCon}$
output: $x_t$
Algorithm 8 genXtOfXtm1
input: $(\mathcal{L}, x_t, \mathrm{fCon})$
1: Symbolically create a function that uses an initial $Z_t$ path to generate a function of $(x_{t-1}, \varepsilon_t, z_t)$ providing (potentially symbolically) the $x_{t+1}$ component of the inputs $(x_{t-1}, x_t, E_t(x_{t+1}), \varepsilon_t)$ for the model system equations $M$.
2: $x_{t+1} = B x_t + (I - F)^{-1} \phi \psi_c + \mathrm{fCon}$
output: $x_{t+1}$
Algorithm 9 genXtp1OfXt
input: $(\mathcal{L}, \{Z_{t+1}, \ldots, Z_{t+\kappa-1}\})$
1: Numerically sum the $F$ contribution for the series:
2: $S = \sum_{\nu=1}^{\kappa-1} F^{\nu-1} \phi Z_{t+\nu}$
output: $S$
Algorithm 10 fSumC
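A direct sketch of fSumC in code (the $Z$ inputs below are hypothetical):

```python
import numpy as np

def f_sum_c(F, phi, z_future):
    """Sum S = sum_{nu=1}^{kappa-1} F^{nu-1} phi Z_{t+nu}; z_future[i] holds Z_{t+1+i}."""
    S = np.zeros(F.shape[0])
    Fk = np.eye(F.shape[0])          # F^{nu-1}, starting from F^0
    for Z in z_future:
        S += Fk @ phi @ Z
        Fk = Fk @ F
    return S

F = np.array([[0.5, 0.0],
              [0.1, 0.4]])
phi = np.eye(2)
Zs = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
S = f_sum_c(F, phi, Zs)
assert np.allclose(S, Zs[0] + F @ Zs[1])   # F^0 phi Z_{t+1} + F^1 phi Z_{t+2}
```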
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1: Solve the model equations at the collocation points, producing interpolation data, and subsequently return the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2: interpData = genInterpData($\{C_1, \ldots, C_M\}$, $d(\cdot)|_{(b,c)}$, S)
3: $\{\gamma, G\}$ = interpDataToFunc(interpData, S)
output: $\{\gamma, G\}$
Algorithm 11 makeInterpFuncs
input: $(\{C_1, \ldots, C_M\}, d(x_{t-1}, \varepsilon, x_t, E[x_{t+1}])|_{(b,c)}, S)$
1: Solve the model equations at the collocation points, producing the interpolation data used by makeInterpFuncs.
output: interpData
output γGAlgorithm 12 genInterpData
input: $(C(x_{t-1}, \varepsilon_t))$
1: Evaluate the model equations, obeying the pre- and post-conditions.
output: $x_t$, or failed

Algorithm 13 evaluateTriple
input: $(V, S)$
1: Compute a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations. Each row corresponds to a vector of values at a collocation point.
output: $\{\gamma, G\}$

Algorithm 14 interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 15 smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 16 smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 17 genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 18 genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the $\Psi$ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)

Algorithm 19 genSolutionPath
A Error Approximation Formula Proof

We consider time-invariant maps such as those that often arise from solving an optimization problem codified in a system of equations. In what follows, we construct an error approximation for proposed model solutions. Consider a bounded region $\mathcal{A}$ in which all iterated conditional expectations paths remain bounded. Given an exact solution $x^*_t = g(x_{t-1}, \varepsilon_t)$, define the conditional expectations function
$$G(x) \equiv E[g(x, \varepsilon)]$$
Then, with
$$E_t x_{t+1} = G(g(x_{t-1}, \varepsilon_t))$$
we have
$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
In words, the exact solution exactly satisfies the model equations. Using $G$ and $\mathcal{L}$, construct the family of trajectories and the corresponding $z^*_t(x_{t-1}, \varepsilon)$:
$$x^*_t(x_{t-1}, \varepsilon_t) \in \mathbb{R}^L, \qquad \|x^*_t(x_{t-1}, \varepsilon_t)\|_2 \le \bar{X} \quad \forall t > 0$$
$$z^*_t(x_{t-1}, \varepsilon_t) \equiv H_{-1} x^*_{t-1}(x_{t-1}, \varepsilon_t) + H_0 x^*_t(x_{t-1}, \varepsilon_t) + H_1 x^*_{t+1}(x_{t-1}, \varepsilon_t)$$
Consequently, the exact solution has a representation given by
$$x^*_t(x_{t-1}, \varepsilon) = B x_{t-1} + \phi \psi_\varepsilon \varepsilon + (I - F)^{-1} \phi \psi_c + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, z^*_{t+\nu}(x_{t-1}, \varepsilon)$$
and
$$E\left[x^*_{t+1}(x_{t-1}, \varepsilon)\right] = B x^*_t + \sum_{\nu=0}^{\infty} F^{\nu} \phi \, E\left[z^*_{t+1+\nu}(x_{t-1}, \varepsilon)\right] + (I - F)^{-1} \phi \psi_c$$
with
$$M(x_{t-1}, x^*_t, E_t x^*_{t+1}, \varepsilon_t) = 0 \quad \forall (x_{t-1}, \varepsilon_t)$$
Now consider a proposed solution for the model, $x^p_t = g^p(x_{t-1}, \varepsilon_t)$, and define $G^p(x) \equiv E[g^p(x, \varepsilon)]$, so that
$$E_t x_{t+1} = G^p(g^p(x_{t-1}, \varepsilon_t)), \qquad e^p_t(x_{t-1}, \varepsilon) \equiv M(x_{t-1}, x^p_t, E_t x^p_{t+1}, \varepsilon_t)$$
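In code, evaluating $e^p_t$ for a candidate rule is just a composition. The scalar model, rule, and coefficients below are made-up stand-ins chosen so the residual is visibly nonzero:

```python
# Evaluating the residual e^p_t(x_{t-1}, eps) = M(x_{t-1}, x^p_t, E_t x^p_{t+1}, eps)
# for a proposed rule g^p. Everything here is a hypothetical scalar stand-in.
g_p = lambda x, eps: 0.9 * x + eps          # proposed decision rule
G_p = lambda x: 0.9 * x                      # G^p(x) = E[g^p(x, eps)] when E[eps] = 0
M = lambda x_lag, x, Ex_next, eps: x - 0.95 * x_lag - eps + 0.0 * Ex_next

def residual(x_lag, eps):
    x = g_p(x_lag, eps)
    Ex_next = G_p(x)                         # E_t x_{t+1} = G^p(g^p(x_{t-1}, eps))
    return M(x_lag, x, Ex_next, eps)

e = residual(1.0, 0.0)
assert abs(e - (-0.05)) < 1e-12              # 0.9 - 0.95 = -0.05: g^p does not solve M
```

A rule that solved the model exactly would drive this residual to zero at every $(x_{t-1}, \varepsilon_t)$, which is what the error formulas above exploit.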
By construction the conditional expectations paths will all be bounded in the region ApUsing Gp and L construct the family of trajectories and corresponding zpt (xtminus1 ε)12
xpt (xtminus1 εt) isin RL xpt (xtminus1 εt)2 le X forallt gt 0
zpt (xtminus1 εt) equivHminus1xptminus1(xtminus1 εt)+
H0xpt (xtminus1 εt)+
H1xpt+1(xtminus1 εt)
The proposed solution has a representation given by
xpt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzpt+ν(xtminus1 ε)
and
E [xpt+1(xtminus1 ε)] = Bxpt+k +infinsumν=0
F νφzpt+1+ν(xtminus1 ε) + (I minus F )minus1φψc
with
ept (xtminus1 ε) equivM(xtminus1 xpt Etx
pt+1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ(zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
∆zpt+ν(xtminus1 εt) equiv (zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ∆zpt+ν(xtminus1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin leinfinsumν=0
F sφ ∆zpt+ν(xtminus1 εt)infin
By bounding the largest deviation in the paths for the ∆zpt we can bound the largestdifference in xt13 The exact solution satisfies the model equations exactly The error asso-ciated with the proposed solution leads to a conservative bound on the largest change in zneeded to match the exact solution
12The algorithm presented below for finding solutions provides a mechanism for generating such proposedsolutions
13Since the future values are probability weighted averages of the ∆zpt values for the given initial conditions
and the condition expectations paths remain in the region Ap the largest error for ∆zpt+k are pessimistic
bounds for the errors from the conditional expectations path
57
∆zt le maxxminusε
φM(xminus gp(xminus ε) Gp(gp(xminus ε)) ε)2
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin le maxxminusε
∥∥∥(I minus F )minus1φM(xminus gp(xminus ε) Gp(gp(xminus ε)) ε)∥∥∥
2
zt = Hminusxtminus1 +H0x
t +H+x
t+1
zpt = Hminusxptminus1 +H0x
pt +H+x
pt+1
0 = M(xtminus1 xt x
t+1 εt)
ept (xtminus1 ε) = M(xptminus1 xpt x
pt+1 εt)
maxxtminus1εt
(φ∆zt2)2 = maxxtminus1εt
(φ(H0(xpt minus xt ) +H+(xpt+1 minus xt+1)))2
with
M(xtminus1 xt x
t+1 εt) = 0
∆ept (xtminus1 ε) asymppartM(xtminus1 xt xt+1 εt)
partxt(xt minus x
pt ) + partM(xtminus1 xt xt+1 εt)
partxt+1(xt+1 minus x
pt+1)
when partM(xtminus1xtxt+1εt)partxt
is non-singular we can write
(xt minus xpt ) asymp
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1 (∆ept (xtminus1 ε)minus
partM(xtminus1 xt xt+1 εt)partxt+1
(xt+1 minus xpt+1)
)
collecting results
(φ∆zt2)2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε)minus
partM(xtminus1 xt xt+1 εt)partxt+1
(xt+1 minus xpt+1)
)+H+(xpt+1 minus xt+1)))2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε))minusH0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1partM(xtminus1 xt xt+1 εt)
partxt+1(xt+1 minus x
pt+1)
+H+(xpt+1 minus xt+1)))2 =
(φ(H0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1
(∆ept (xtminus1 ε))minusH0
(partM(xtminus1 xt xt+1 εt)
partxt
)minus1partM(xtminus1 xt xt+1 εt)
partxt+1minusH+
(xt+1 minus xpt+1)))2 =
58
approximating the derivatives using the H matrices
partM(xtminus1 xt xt+1 εt)partxt
asymp H0
partM(xtminus1 xt xt+1 εt)partxt+1
asymp H+
produces
maxxtminus1εt
(φ∆ept (xtminus1 ε))2
B A Regime Switching ExampleTo apply the series formula one must construct conditional expectations paths from eachinitial x(xtminus1 εt) Consider the case when the transition probabilities pij are constant anddo not depend on xtminus1 εt xt
Et[xt+1] =Nsumj=1
pijEt(xt+1(xt)|st+1 = j)
Et[xt+2] =Nsumj=1
pijEt(xt+2(xt+1)|st+2 = j)
If we know the current state we can use the known conditional expectation function forthe state
Et(xt+1(xt)|st = 1)
Et(xt+1(xt)|st = N)
For the next state we can compute expectations if the errors are independent
Et(xt+2(xt)|st = 1)
Et(xt+2(xt)|st = N)
= (P otimes I)
Et(xt+2(xt+1)|st+1 = 1)
Et(xt+2(xt+1)|st+1 = N)
Consider two states st isin 0 1 where the depreciation rates are different d0 gt d1
Prob(st = j|stminus1 = i) = pij
59
if(st = 0 and micro gt 0 and (kt minus (1minus d0)ktminus1 minus υIss) = 0)
λt minus1ct
ct + kt minus θtkαtminus1
Nt minus λtθtθt minus e(ρ ln(θtminus1)+ε)
λt + microt minus (αk(αminus1)t δNt+1 + λt+1δ(1minus d0) + microt+1 + δ(1minus d0)
It minus (Kt minus (1minus d0)ktminus1)microt(kt minus (1minus d0)ktminus1 minus υIss)
if(st = 0 and micro = 0 and (kt minus (1minus d0)ktminus1 minus υIss) ge 0)
λt minus1ct
ct + kt minus θtkαtminus1
Nt minus λtθtθt minus e(ρ ln(θtminus1)+ε)
λt + microt minus (αk(αminus1)t δNt+1 + λt+1δ(1minus d0) + microt+1 + δ(1minus d0)
It minus (Kt minus (1minus d0)ktminus1)microt(kt minus (1minus d0)ktminus1 minus υIss)
if(st = 1 and micro gt 0 and (kt minus (1minus d1)ktminus1 minus υIss) = 0)
λt minus1ct
ct + kt minus θtkαtminus1
Nt minus λtθtθt minus e(ρ ln(θtminus1)+ε)
λt + microt minus (αk(αminus1)t δNt+1 + λt+1δ(1minus d1) + microt+1 + δ(1minus d1)
It minus (Kt minus (1minus d1)ktminus1)microt(kt minus (1minus d1)ktminus1 minus υIss)
60
if(st = 1 and micro = 0 and (kt minus (1minus d1)ktminus1 minus υIss) ge 0)
λt minus1ct
ct + kt minus θtkαtminus1
Nt minus λtθtθt minus e(ρ ln(θtminus1)+ε)
λt + microt minus (αk(αminus1)t δNt+1 + λt+1δ(1minus d1) + microt+1 + δ(1minus d1)
It minus (Kt minus (1minus d1)ktminus1)microt(kt minus (1minus d1)ktminus1 minus υIss)
61
C Algorithm Pseudo-codeL equiv Hψε ψcB φ F The linear reference model
xp(xtminus1 εt) The proposed decision rule xt components
zp(xtminus1 εt) The proposed decision rule zt components
Xp(xtminus1) The proposed decision rule conditional expectations xt components
Zp(xtminus1) The proposed decision rule their conditional expectations zt components
κ One less than the number of terms in series representation(κ gt= 0)
S Smolyak an-isotropic grid polynomials (p1(x ε) pN(x ε)) and their conditional expec-tations (P1(x) PN(x))
T Model evaluation points Neidereiter sequence in the model variable ergodic region
γ x(xtminus1 εt) z(xtminus1 εt) collects the decision rule components
G x(xtminus1 εt) z(xtminus1 εt) collects the decision rule conditional expectations components
H γG collects decision rule information
C1 CMd(xtminus1 ε xt E [xt+1])|(b c) The equation system R3Nx+Nε+Nx rarr RNx
input (LH0 κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c) ST)1 Compute a sequence of decision rule and conditional expectation rule
function terminating when the evaluation of the functions at thetests points no longer changes much Norm of the differencecontrolled by default (lsquolsquonormConvTolrsquorsquo-gt10minus10)
2 NestWhile(Function(v doInterp(L v κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S))H0 notConvergedAt(vT))output H0H1 Hc
Algorithm 1 nestInterp
62
input (LHi κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Symbolically generates a collection of function triples and an outer
model solution selection function Each of triples returns theboolean gate values and a unique value for xt for any given value of(xtminus1 εt) one of the model equations and subsequently uses thesefunctions to construct interpolating functions approximating thedecision rules
2 theF indRootFuncs =genFindRootFuncs(LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c))
3 Hi+1 = makeInterpFuncs(theF indRootFuncs S)output Hi+1
Algorithm 2 doInterp
input (LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c) S)1 Applies genFindRootWorker to each model component Symbolically
generates a collection of function triples and an outer modelsolution selection function Each of the triples returns theBoolean gate values and a unique value for xt for any given value of(xtminus1 εt) for one from the set of model equations It subsequentlyuses these functions to construct interpolating functionsapproximating the decision rules Each of the functionconstructions can be done in parallel
2 theFuncOptionsTriples =Map(Function(v v[1] genF indRootWorker(LH κ v[2]) v[3]) C1 CM)output theFuncOptionsTriplesd
Algorithm 3 genFindRootFuncs
input (LG κM)1 Symbolically constructs functions that return xt for a given (xtminus1 εt)
2 Z = Zt+1 Zt+kminus1 = genZsForF indRoot(L xpt G)3 modEqnArgsFunc = genLilXkZkFunc(LZ)4 X = xtminus1 xt Et(xt+1) εt = modEqnArgsFunc(xtminus1 εt zt)5 funcOfXtZt = FindRoot((xpt zt) M(X ) xt minus xpt)6 funcOfXtm1Eps = Function((xtminus1 εt) funcOfXtZt)
output funcOfXtm1EpsAlgorithm 4 genFindRootWorker
63
input (xpt LG κM)1 Symbolically compute the κminus 1 future conditional Zrsquos vectors needed
for the series formula(empty list for κ = 1 2 Xt = xpt for i = 1 i lt κminus 1 i+ + do3 Xt+1 Zt+i = G(Zt+iminus1)4 end5 = (Xt+i Zt+1) (Xt+κminus1Zt+κminus1) = genZsForF indRoot(L xpt G)
output Zt+1 Zt+kminus1Algorithm 5 genZsForF indRoot
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x ε) =[Bx+ (I minus F )minus1φψc + ψεεt
0Nx
]
3 G(x ε) =[Bx+ (I minus F )minus1φψc + ψε
0Nx
]4 H = γG
output HAlgorithm 6 genBothX0Z0Funcs
input (L Zt+1 Zt+kminus1)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolic )inputs (xtminus1 xt Et(xt+1) εt)for model system equations M
2 fCon = fSumC(φ F ψz Zt+1 Zt+kminus1)xtV als = genXtOfXtm1(L xtminus1 εt zt fCon)xtp1V als = genXtp1OfXt(L xtV als fCon)fullV ec = xtm1V ars xtV als xtp1V als epsV arsoutput fullV ec
Algorithm 7 genLilXkZkFunc
input (L xtminus1 εt zt fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically) the xt component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + FfConoutput xt
Algorithm 8 genXtOfXtm1
64
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + fConoutput xt
Algorithm 9 genXtp1OfXt
input (L Zt+1 Zt+kminus1)1 Numerically sums the F contribution for the series 2 S = sum
F νminus1φZt+νoutput S
Algorithm 10 fSumC
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
2 interpData = genInterpData(C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)γG = interpDataToFunc(interData S)output γG
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
A Error Approximation Formula ProofWe consider time invariant maps such as often arise from solving an optimization problemcodified in a systems of equations In what follows we construct an error approximationfor proposed model solutions Consider a bounded region A where all iterated conditionalexpectations paths remain bounded Given an exact solution xt = g(xtminus1 εt) define theconditional expectations function
G(x) equiv E [g(x ε)]
then with
Etxt+1 = G(g(xtminus1 εt))
we have
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
In words the exact solution exactly satisfies the model equations Using G and L constructthe family of trajectories and corresponding zt (xtminus1 ε)
xt (xtminus1 εt) isin RL xt (xtminus1 εt)2 le X forallt gt 0
zt (xtminus1 εt) equivHminus1xtminus1(xtminus1 εt)+
H0xt (xtminus1 εt)+
H1xt+1(xtminus1 εt)
Consequently the exact solution has a representation given by
xt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzt+ν(xtminus1 ε)
and
E[xt+1(xtminus1 ε)
]= Bxt+k +
infinsumν=0
F νφE[zt+1+ν(xtminus1 ε)
]+ (I minus F )minus1φψc
with
M(xtminus1 xt Etx
t+1 εt) = 0 forall(xtminus1 εt)
Now consider a proposed solution for the model xpt = gp(xtminus1 εt) define Gp(x) equivE [gp(x ε)] so that
Etxt+1 = Gp(gp(xtminus1 εt))ept (xtminus1 ε) equivM(xtminus1 x
pt Etx
pt+1 εt)
56
By construction the conditional expectations paths will all be bounded in the region ApUsing Gp and L construct the family of trajectories and corresponding zpt (xtminus1 ε)12
xpt (xtminus1 εt) isin RL xpt (xtminus1 εt)2 le X forallt gt 0
zpt (xtminus1 εt) equivHminus1xptminus1(xtminus1 εt)+
H0xpt (xtminus1 εt)+
H1xpt+1(xtminus1 εt)
The proposed solution has a representation given by
xpt (xtminus1 ε) = Bxtminus1 + φψεε+ (I minus F )minus1φψc+infinsumν=0
F sφzpt+ν(xtminus1 ε)
and
E [xpt+1(xtminus1 ε)] = Bxpt+k +infinsumν=0
F νφzpt+1+ν(xtminus1 ε) + (I minus F )minus1φψc
with
ept (xtminus1 ε) equivM(xtminus1 xpt Etx
pt+1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ(zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
∆zpt+ν(xtminus1 εt) equiv (zt+ν(xtminus1 ε)minus zpt+ν(xtminus1 ε))
xt (xtminus1 ε)minus xpt (xtminus1 ε) =infinsumν=0
F sφ∆zpt+ν(xtminus1 εt)
xt (xtminus1 ε)minus xpt (xtminus1 ε)infin leinfinsumν=0
F sφ ∆zpt+ν(xtminus1 εt)infin
By bounding the largest deviation in the paths for the ∆zpt we can bound the largestdifference in xt13 The exact solution satisfies the model equations exactly The error asso-ciated with the proposed solution leads to a conservative bound on the largest change in zneeded to match the exact solution
^{12} The algorithm presented below for finding solutions provides a mechanism for generating such proposed solutions.

^{13} Since the future values are probability-weighted averages of the $\Delta z^p_t$ values for the given initial conditions, and the conditional expectations paths remain in the region $A^p$, the largest errors for the $\Delta z^p_{t+k}$ are pessimistic bounds for the errors from the conditional expectations path.
$$\|\Delta z_t\| \le \max_{x_{-}, \varepsilon} \left\|\phi\, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$

$$\left\|x^*_t(x_{t-1}, \varepsilon) - x^p_t(x_{t-1}, \varepsilon)\right\|_\infty \le \max_{x_{-}, \varepsilon} \left\|(I - F)^{-1} \phi\, M\!\left(x_{-},\, g^p(x_{-}, \varepsilon),\, G^p(g^p(x_{-}, \varepsilon)),\, \varepsilon\right)\right\|_2$$

To see this, note that

$$z^*_t = H_{-} x_{t-1} + H_0 x^*_t + H_{+} x^*_{t+1}, \qquad z^p_t = H_{-} x^p_{t-1} + H_0 x^p_t + H_{+} x^p_{t+1}$$

with

$$0 = M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t), \qquad e^p_t(x_{t-1}, \varepsilon) = M(x^p_{t-1}, x^p_t, x^p_{t+1}, \varepsilon_t)$$

so that

$$\max_{x_{t-1}, \varepsilon_t} \left(\|\phi \Delta z_t\|_2\right)^2 = \max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\left(H_0 (x^p_t - x^*_t) + H_{+}(x^p_{t+1} - x^*_{t+1})\right)\right\|_2\right)^2$$
With

$$M(x_{t-1}, x^*_t, x^*_{t+1}, \varepsilon_t) = 0$$

a first-order expansion gives

$$\Delta e^p_t(x_{t-1}, \varepsilon) \approx \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}(x^*_t - x^p_t) + \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})$$

When $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$ is non-singular, we can write

$$(x^*_t - x^p_t) \approx \left(\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}\right)^{-1} \left(\Delta e^p_t(x_{t-1}, \varepsilon) - \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right)$$
Collecting results, writing $\frac{\partial M}{\partial x_t}$ for $\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t}$, and noting that the overall sign inside the squared norm is immaterial:

$$\begin{aligned}
\left(\|\phi \Delta z_t\|_2\right)^2 &= \left(\left\|\phi\left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\left(\Delta e^p_t(x_{t-1}, \varepsilon) - \tfrac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1})\right) + H_{+}(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2 \\
&= \left(\left\|\phi\left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\tfrac{\partial M}{\partial x_{t+1}}(x^*_{t+1} - x^p_{t+1}) + H_{+}(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2 \\
&= \left(\left\|\phi\left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\Delta e^p_t(x_{t-1}, \varepsilon) - \left(H_0 \left(\tfrac{\partial M}{\partial x_t}\right)^{-1}\tfrac{\partial M}{\partial x_{t+1}} - H_{+}\right)(x^*_{t+1} - x^p_{t+1})\right)\right\|_2\right)^2
\end{aligned}$$
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+}$$

the term multiplying $(x^*_{t+1} - x^p_{t+1})$ vanishes, producing

$$\max_{x_{t-1}, \varepsilon_t} \left(\left\|\phi\, \Delta e^p_t(x_{t-1}, \varepsilon)\right\|_2\right)^2$$
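The error bound above can be evaluated by brute force over a grid of test points. The sketch below assumes a hypothetical scalar model $M$, proposed rule $g^p$, and illustrative $F$ and $\phi$; none of these come from the paper's examples:

```python
import numpy as np

# Sketch of the global error-bound computation: evaluate the residual
# e_p under a proposed rule at test points and take the max of
# ||(I - F)^{-1} phi e_p||. All functions/matrices are stand-ins.
F = np.array([[0.5]])
phi = np.array([[1.0]])

def g_p(x_prev, eps):            # hypothetical proposed decision rule
    return 0.9 * x_prev + eps

def G_p(x):                      # its conditional expectation (E[eps] = 0)
    return 0.9 * x

def M(x_prev, x, Ex_next, eps):  # hypothetical model residual function
    return x - 0.95 * x_prev - 0.5 * Ex_next - eps

scale = np.linalg.solve(np.eye(1) - F, phi)   # (I - F)^{-1} phi
bound = 0.0
for x_prev in np.linspace(-1.0, 1.0, 21):
    for eps in np.linspace(-0.1, 0.1, 5):
        x = g_p(x_prev, eps)
        e_p = M(x_prev, x, G_p(x), eps)
        bound = max(bound, abs((scale @ np.array([e_p]))[0]))
```

The grid-and-max step stands in for the maximization over $(x_{t-1}, \varepsilon_t)$; in practice the paper evaluates such residuals at Smolyak collocation and Niederreiter test points.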
B A Regime Switching Example

To apply the series formula, one must construct conditional expectations paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $x_{t-1}$, $\varepsilon_t$, or $x_t$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+1}(x_t) \mid s_{t+1} = j)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t(x_{t+2}(x_{t+1}) \mid s_{t+2} = j)$$

If we know the current state, we can use the known conditional expectation function for that state:

$$E_t(x_{t+1}(x_t) \mid s_t = 1), \;\ldots,\; E_t(x_{t+1}(x_t) \mid s_t = N)$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$

Consider two states, $s_t \in \{0, 1\}$, where the depreciation rates differ, $d_0 > d_1$, and

$$\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
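The stacked-expectations step above can be sketched with numpy's Kronecker product; the transition matrix and expectation values below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Hypothetical 2-state chain and 2-variable model; P[i, j] is the
# probability of moving from regime i to regime j.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
I2 = np.eye(2)

# Expectations of x_{t+2} conditional on next period's regime,
# stacked regime-by-regime into one long vector.
E_next = np.concatenate([np.array([1.0, 2.0]),    # s_{t+1} = 1
                         np.array([3.0, 4.0])])   # s_{t+1} = 2

# (P kron I) probability-weights the regime blocks.
E_current = np.kron(P, I2) @ E_next
```

Each block of `E_current` is the probability-weighted mix of the two regime-conditional expectation vectors, which is exactly what iterating the expectations path one period back requires.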
If $\left(s_t = 0 \,\wedge\, \mu_t > 0 \,\wedge\, (k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}) = 0\right)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right) \\
&I_t - (k_t - (1 - d_0)k_{t-1}) \\
&\mu_t \left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $\left(s_t = 0 \,\wedge\, \mu_t = 0 \,\wedge\, (k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}) \ge 0\right)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_0) + \mu_{t+1}\delta(1 - d_0)\right) \\
&I_t - (k_t - (1 - d_0)k_{t-1}) \\
&\mu_t \left(k_t - (1 - d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $\left(s_t = 1 \,\wedge\, \mu_t > 0 \,\wedge\, (k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}) = 0\right)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right) \\
&I_t - (k_t - (1 - d_1)k_{t-1}) \\
&\mu_t \left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $\left(s_t = 1 \,\wedge\, \mu_t = 0 \,\wedge\, (k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}) \ge 0\right)$:

$$\begin{aligned}
&\lambda_t - \tfrac{1}{c_t} \\
&c_t + k_t - \theta_t k_{t-1}^{\alpha} \\
&N_t - \lambda_t \theta_t \\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon} \\
&\lambda_t + \mu_t - \left(\alpha k_t^{(\alpha-1)} \delta N_{t+1} + \lambda_{t+1}\delta(1 - d_1) + \mu_{t+1}\delta(1 - d_1)\right) \\
&I_t - (k_t - (1 - d_1)k_{t-1}) \\
&\mu_t \left(k_t - (1 - d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
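The four regime blocks above share a gate pattern: try the branch where the constraint binds ($\mu_t > 0$, constraint with equality); if the implied multiplier fails its sign condition, fall back to the slack branch ($\mu_t = 0$) and verify the inequality. A minimal sketch, with toy stand-in functions for the branch solvers (the paper's algorithm solves the full equation systems):

```python
# Minimal sketch (not the paper's solver) of the occasionally-binding
# gate logic. Both branch "solvers" below are hypothetical stand-ins.

V_ISS = 0.1   # hypothetical lower bound (upsilon * I_ss)

def binding_branch(k_prev, d):
    # Constraint with equality pins down capital: k_t = (1 - d) k_prev + V_ISS.
    k_t = (1.0 - d) * k_prev + V_ISS
    mu_t = 0.5 - k_prev          # stand-in for the multiplier implied by the FOCs
    return k_t, mu_t

def slack_branch(k_prev, d):
    k_t = 0.9 * k_prev + 0.2     # stand-in for the unconstrained decision rule
    return k_t, 0.0

def solve_regime(k_prev, d):
    k_t, mu_t = binding_branch(k_prev, d)
    if mu_t > 0:
        return k_t, mu_t         # gate passes: constraint binds
    k_t, mu_t = slack_branch(k_prev, d)
    # Slack branch is admissible only if investment clears the bound.
    assert k_t - (1.0 - d) * k_prev - V_ISS >= 0
    return k_t, mu_t
```

Exactly one branch per regime satisfies both its sign condition and its inequality, which is what makes the "gate values plus unique $x_t$" triples in the pseudo-code below well defined.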
C Algorithm Pseudo-code

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid polynomials (p_1(x, ε), ..., p_N(x, ε)) and their conditional expectations (P_1(x), ..., P_N(x))
T: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
γ = {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}: collects the decision rule components
G = {X(x_{t-1}), Z(x_{t-1})}: collects the decision rule conditional expectations components
H = {γ, G}: collects the decision rule information
{C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, ℝ^{3N_x + N_ε} → ℝ^{N_x}
input: (L, H_0, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default ``normConvTol'' -> 10^{-10}).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, ..., H_c}
Algorithm 1: nestInterp
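A rough Python analogue of the nestInterp loop (the paper's implementation appears to use Mathematica's NestWhile); `step` below is a toy stand-in for doInterp:

```python
import math

# Iterate h -> do_interp(h) until the rule's values at a fixed set of
# test points stop changing (sketch of nestInterp's convergence test).
def nest_interp(do_interp, h0, test_points, tol=1e-10, max_iter=200):
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        diff = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if diff < tol:
            break
    return h

# Toy stand-in for doInterp: the pointwise contraction
# h -> 0.5 * (h + cos(h)), whose fixed point solves y = cos(y).
step = lambda h: (lambda x, h=h: 0.5 * (h(x) + math.cos(h(x))))
rule = nest_interp(step, lambda x: x, [0.1, 0.5, 1.0])
```

The convergence test compares successive rules only at the test points T, mirroring the notConvergedAt predicate rather than a function-space norm.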
input: (L, H_i, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Symbolically generates a collection of function triples and an outer model-solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), for one of the model equations; these functions are then used to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model-solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, ..., C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2: Z = {Z_{t+1}, ..., Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t - x^p_t})
6: funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (empty list for κ = 1).
2: X_t = x^p_t
3: for i = 1; i < κ − 1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(X_{t+i-1})
5: end
output: {Z_{t+1}, ..., Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I − F)^{-1}φψ_c + ψ_ε ε_t ; 0_{N_x} ]
3: G(x) = [ Bx + (I − F)^{-1}φψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
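genBothX0Z0Funcs can be sketched directly from the linear reference model; the matrices below are illustrative stand-ins, not values from the paper:

```python
import numpy as np

# Sketch of genBothX0Z0Funcs: the initial proposed decision rule just
# evaluates the linear reference model, with zero z components.
B = np.array([[0.8, 0.0], [0.1, 0.6]])
F = np.array([[0.3, 0.0], [0.0, 0.2]])
phi = np.eye(2)
psi_c = np.array([0.1, 0.2])
psi_eps = np.array([[1.0], [0.0]])
Nx = 2

const = np.linalg.solve(np.eye(Nx) - F, phi @ psi_c)   # (I - F)^{-1} phi psi_c

def gamma(x, eps):
    # Decision rule: linear-model x_t stacked over zero z components.
    return np.concatenate([B @ x + const + psi_eps @ eps, np.zeros(Nx)])

def G(x):
    # Conditional expectation: the shock term integrates out (E[eps] = 0).
    return np.concatenate([B @ x + const, np.zeros(Nx)])
```

Starting the iteration from the linear solution means the first pass of the solver only has to correct for the nonlinearities, not discover the dynamics from scratch.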
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, ..., Z_{t+κ-1}})
3: xtVals = genXtOfXtm1(L, {x_{t-1}, ε_t, z_t}, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, {x_{t-1}, ε_t, z_t}, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t-1} + (I − F)^{-1}φψ_c + φψ_ε ε_t + φψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, xtVals, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_{t+1} = Bx_t + (I − F)^{-1}φψ_c + fCon
output: x_{t+1}
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, ..., Z_{t+κ-1}})
1: Numerically sums the F contribution for the series:
2: S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
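fSumC is a finite matrix-geometric sum; a sketch with illustrative inputs (κ = 3 here, so the path holds two terms):

```python
import numpy as np

# Sketch of fSumC: the finite "anticipated shocks" contribution
# S = sum_{nu=1}^{kappa-1} F^(nu-1) phi Z_{t+nu}, with stand-in inputs.
F = np.array([[0.5, 0.0], [0.0, 0.25]])
phi = np.eye(2)
Z_path = [np.array([1.0, 0.0]),    # Z_{t+1}
          np.array([0.0, 2.0])]    # Z_{t+2}

S = sum(np.linalg.matrix_power(F, nu - 1) @ (phi @ Z)
        for nu, Z in enumerate(Z_path, start=1))
```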
input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, ..., C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1: Solves the model equations at the collocation points, producing the interpolation data.
output: interpData
Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1: Evaluate the model equations, obeying the pre- and post-conditions.
output: x_t, or $Failed
Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations; each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 16: smolyakInterpolationPrep
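Smolyak grids are assembled from nested one-dimensional levels of Chebyshev extrema. The sketch below shows that standard 1-D building block; it is an assumption about the construction, not the paper's code:

```python
import numpy as np

# Nested 1-D Chebyshev-extrema levels (1, 3, 5, 9, ... points): each
# level's points contain the previous level's, which is what lets
# anisotropic Smolyak grids mix approximation levels across dimensions.
def chebyshev_extrema(level):
    n = 1 if level == 0 else 2 ** level + 1
    if n == 1:
        return np.array([0.0])
    k = np.arange(n)
    return -np.cos(np.pi * k / (n - 1))
```

An anisotropic grid then takes tensor products of these levels dimension by dimension, keeping only the combinations allowed by the per-dimension approximation levels.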
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: {xPts, PsiMat, polys, intPolys}
Algorithm 19: genSolutionPath
Contents

- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
- Assessing x_t Errors
  - Truncation Error
  - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
- Algorithm Overview
  - Single Equation System Case
  - Multiple Equation System Case
  - Approximating a Known Solution, U(c) = log(c)
  - Approximating an Unknown Solution
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
2 fCon = fSumC(φ F ψz Zt+1 Zt+kminus1)xtV als = genXtOfXtm1(L xtminus1 εt zt fCon)xtp1V als = genXtp1OfXt(L xtV als fCon)fullV ec = xtm1V ars xtV als xtp1V als epsV arsoutput fullV ec
Algorithm 7 genLilXkZkFunc
input (L xtminus1 εt zt fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically) the xt component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + FfConoutput xt
Algorithm 8 genXtOfXtm1
64
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + fConoutput xt
Algorithm 9 genXtp1OfXt
input (L Zt+1 Zt+kminus1)1 Numerically sums the F contribution for the series 2 S = sum
F νminus1φZt+νoutput S
Algorithm 10 fSumC
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
2 interpData = genInterpData(C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)γG = interpDataToFunc(interData S)output γG
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
Approximating the derivatives using the $H$ matrices,

$$\frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_t} \approx H_0, \qquad \frac{\partial M(x_{t-1}, x_t, x_{t+1}, \varepsilon_t)}{\partial x_{t+1}} \approx H_{+},$$

produces

$$\max_{x_{t-1}, \varepsilon} \left( \phi \, \Delta e_{p_t}(x_{t-1}, \varepsilon) \right)^2$$
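The maximization above can be approximated numerically by evaluating the squared, $\phi$-weighted residual over a set of test points and taking the maximum. A minimal sketch, in which `phi` and the residual function `delta_e` are illustrative stand-ins rather than the paper's objects:

```python
import numpy as np

# phi and delta_e below are hypothetical placeholders chosen for illustration.
phi = np.array([[0.9, 0.1],
                [0.0, 0.8]])

def delta_e(x, eps):
    # toy decision-rule residual at (x_{t-1}, eps)
    return 0.01 * np.tanh(x) + 0.001 * eps

def max_squared_error(test_points):
    # max over test points of the elementwise square of phi @ delta_e
    return max(float(np.max((phi @ delta_e(x, e)) ** 2)) for x, e in test_points)

# small deterministic grid of (x_{t-1}, eps) test points
grid = [(np.array([x, -x]), np.array([e, e]))
        for x in (-1.0, 0.0, 1.0) for e in (-0.1, 0.0, 0.1)]
bound = max_squared_error(grid)
```

In practice the test points would come from the model's ergodic region, as in the algorithm descriptions below.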
B A Regime Switching Example

To apply the series formula, one must construct conditional expectation paths from each initial $x(x_{t-1}, \varepsilon_t)$. Consider the case when the transition probabilities $p_{ij}$ are constant and do not depend on $(x_{t-1}, \varepsilon_t, x_t)$:

$$E_t[x_{t+1}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+1}(x_t) \mid s_{t+1} = j\right)$$

$$E_t[x_{t+2}] = \sum_{j=1}^{N} p_{ij}\, E_t\!\left(x_{t+2}(x_{t+1}) \mid s_{t+2} = j\right)$$

If we know the current state, we can use the known conditional expectation function for that state:

$$\begin{bmatrix} E_t(x_{t+1}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+1}(x_t) \mid s_t = N) \end{bmatrix}$$

For the next state, we can compute expectations if the errors are independent:

$$\begin{bmatrix} E_t(x_{t+2}(x_t) \mid s_t = 1) \\ \vdots \\ E_t(x_{t+2}(x_t) \mid s_t = N) \end{bmatrix} = (P \otimes I) \begin{bmatrix} E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = 1) \\ \vdots \\ E_t(x_{t+2}(x_{t+1}) \mid s_{t+1} = N) \end{bmatrix}$$

Consider two states $s_t \in \{0, 1\}$ where the depreciation rates differ, $d_0 > d_1$, and

$$\mathrm{Prob}(s_t = j \mid s_{t-1} = i) = p_{ij}$$
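The two expectation formulas above can be sketched numerically. In this sketch the transition matrix `P` and the per-regime rules `x_rule` are toy assumptions, not the paper's calibration:

```python
import numpy as np

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])  # P[i, j] = Prob(s_{t+1} = j | s_t = i)

# hypothetical per-regime conditional expectation functions x_{t+1}(x_t)
x_rule = [lambda x: 0.95 * x,   # regime 0
          lambda x: 0.80 * x]   # regime 1

def cond_exp_next(x_t, i):
    # E_t[x_{t+1}] = sum_j p_ij * E_t(x_{t+1}(x_t) | s_{t+1} = j)
    return sum(P[i, j] * x_rule[j](x_t) for j in range(P.shape[1]))

def stack_two_step(inner):
    # stacked form: [E_t(x_{t+2}|s_t=i)]_i = (P kron I) [E_t(x_{t+2}|s_{t+1}=j)]_j
    # here x_t is a scalar, so I is the 1x1 identity
    return np.kron(P, np.eye(1)) @ inner
```

For example, starting in regime 0 with `x_t = 1.0`, `cond_exp_next` mixes the two regime rules with weights 0.9 and 0.1.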
If $s_t = 0$, $\mu_t > 0$, and $k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + (\lambda_{t+1} + \mu_{t+1})\,\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 0$, $\mu_t = 0$, and $k_t - (1-d_0)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + (\lambda_{t+1} + \mu_{t+1})\,\delta(1-d_0)\right)\\
&I_t - \left(k_t - (1-d_0)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_0)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$

If $s_t = 1$, $\mu_t > 0$, and $k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} = 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + (\lambda_{t+1} + \mu_{t+1})\,\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
If $s_t = 1$, $\mu_t = 0$, and $k_t - (1-d_1)k_{t-1} - \upsilon I_{ss} \ge 0$:

$$\begin{aligned}
&\lambda_t - \frac{1}{c_t}\\
&c_t + k_t - \theta_t k_{t-1}^{\alpha}\\
&N_t - \lambda_t \theta_t\\
&\theta_t - e^{\rho \ln(\theta_{t-1}) + \varepsilon}\\
&\lambda_t + \mu_t - \left(\alpha k_t^{\alpha-1}\delta N_{t+1} + (\lambda_{t+1} + \mu_{t+1})\,\delta(1-d_1)\right)\\
&I_t - \left(k_t - (1-d_1)k_{t-1}\right)\\
&\mu_t\left(k_t - (1-d_1)k_{t-1} - \upsilon I_{ss}\right)
\end{aligned}$$
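Each regime and complementary-slackness combination above defines a branch: either $\mu_t > 0$ with the constraint binding, or $\mu_t = 0$ with the constraint slack. A minimal Python sketch of the "gate" logic that selects an internally consistent branch, using toy tolerances and hypothetical candidate solutions:

```python
def check_branch(mu_t, slack, binding):
    """True when (mu_t, slack) is consistent with the branch's gate conditions."""
    if binding:
        # mu_t > 0 and k_t - (1-d)k_{t-1} - v*I_ss = 0
        return mu_t > 0 and abs(slack) < 1e-10
    # mu_t = 0 and k_t - (1-d)k_{t-1} - v*I_ss >= 0
    return abs(mu_t) < 1e-10 and slack >= -1e-10

def select_branch(candidates):
    # candidates: list of (mu_t, slack, binding, solution) tuples, one per
    # branch of the equation system; keep the first consistent one
    for mu_t, slack, binding, solution in candidates:
        if check_branch(mu_t, slack, binding):
            return solution
    raise ValueError("no branch satisfies its gate conditions")
```

A solver would populate `candidates` by solving each branch's equation system and then let the gates arbitrate.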
C Algorithm Pseudo-code

Notation:

- L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
- x_p(x_{t−1}, ε_t): the proposed decision rule, x_t components
- z_p(x_{t−1}, ε_t): the proposed decision rule, z_t components
- X_p(x_{t−1}): the proposed decision rule conditional expectations, x_t components
- Z_p(x_{t−1}): the proposed decision rule conditional expectations, z_t components
- κ: one less than the number of terms in the series representation (κ ≥ 0)
- S: Smolyak anisotropic-grid polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
- T: model evaluation points, a Niederreiter sequence in the model-variable ergodic region
- γ = {x(x_{t−1}, ε_t), z(x_{t−1}, ε_t)}: collects the decision rule components
- G = {X_p(x_{t−1}), Z_p(x_{t−1})}: collects the decision rule conditional expectation components
- H = {γ, G}: collects the decision rule information
- {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c): the equation system, ℝ^(3N_x + N_ε) → ℝ^(N_x)
input: (L, H_0, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c), S, T)
1: Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by the option ``normConvTol'' (default 10^(−10)).
2: NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}

Algorithm 1: nestInterp
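The NestWhile step above is a fixed-point iteration: apply the updating map until the rule's values at the test points stop moving. A minimal sketch of that control flow, where `do_interp`, the toy contraction, and the test points are illustrative assumptions:

```python
NORM_CONV_TOL = 1e-10  # mirrors the ``normConvTol'' default above

def nest_while(do_interp, h0, test_points, max_iter=500):
    """Iterate h -> do_interp(h) until values at test_points converge."""
    h = h0
    for _ in range(max_iter):
        h_next = do_interp(h)
        diff = max(abs(h_next(x) - h(x)) for x in test_points)
        h = h_next
        if diff < NORM_CONV_TOL:
            return h
    raise RuntimeError("no convergence within max_iter iterations")

# toy contraction: iterating h -> 0.5*(h + identity) converges to the identity
update = lambda h: (lambda x, h=h: 0.5 * (h(x) + x))
rule = nest_while(update, lambda x: 0.0, test_points=[0.3, 0.7, 1.1])
```

In the paper's setting `do_interp` would be Algorithm 2 and the test points the Niederreiter sequence T.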
input: (L, H_i, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c), S)
1: Symbolically generates a collection of function triples and an outer model-solution selection function. Each triple returns the Boolean gate values and a unique value of x_t for any given value of (x_{t−1}, ε_t) for one of the model equations; these functions are subsequently used to construct interpolating functions approximating the decision rules.
2: theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c))
3: H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}

Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c), S)
1: Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model-solution selection function. Each triple returns the Boolean gate values and a unique value of x_t for any given value of (x_{t−1}, ε_t) for one of the model equations; these functions are then used to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2: theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples

Algorithm 3: genFindRootFuncs
input: (L, G, κ, M)
1: Symbolically constructs functions that return x_t for a given (x_{t−1}, ε_t).
2: Z = {Z_{t+1}, …, Z_{t+κ−1}} = genZsForFindRoot(L, x_{p_t}, G)
3: modEqnArgsFunc = genLilXkZkFunc(L, Z)
4: X = {x_{t−1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t−1}, ε_t, z_t)
5: funcOfXtZt = FindRoot((x_{p_t}, z_t), {M(X), x_t − x_{p_t}})
6: funcOfXtm1Eps = Function((x_{t−1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps

Algorithm 4: genFindRootWorker
input: (x_{p_t}, L, G, κ, M)
1: Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2: X_t = x_{p_t}
3: for i = 1; i < κ − 1; i++ do
4:   {X_{t+i}, Z_{t+i}} = G(X_{t+i−1})
5: end
output: {Z_{t+1}, …, Z_{t+κ−1}}

Algorithm 5: genZsForFindRoot
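The forward iteration in Algorithm 5 can be sketched concretely. Here `G` is a toy stand-in for the conditional-expectation map, returning a `(next_state, Z)` pair:

```python
def gen_zs_for_find_root(x_t, G, kappa):
    """Iterate G forward kappa-1 times from x_t, collecting the Z path."""
    zs = []
    state = x_t
    for _ in range(kappa - 1):   # empty list when kappa == 1
        state, z = G(state)
        zs.append(z)
    return zs

# hypothetical map: next state halves, Z is a tenth of the current state
toy_G = lambda x: (0.5 * x, 0.1 * x)
zs = gen_zs_for_find_root(1.0, toy_G, kappa=3)
```

With `kappa=3` the loop runs twice, producing the two future conditional Z values the series formula needs.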
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1: Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2: γ(x, ε) = [ Bx + (I − F)^(−1)φψ_c + ψ_ε ε ; 0_{N_x} ]
3: G(x) = [ Bx + (I − F)^(−1)φψ_c ; 0_{N_x} ]
4: H = {γ, G}
output: H

Algorithm 6: genBothX0Z0Funcs
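A numeric sketch of the initial rules in Algorithm 6, with small illustrative matrices (the values of B, F, φ, ψ_c, ψ_ε here are assumptions, not from the paper):

```python
import numpy as np

B = np.array([[0.9]])
F = np.array([[0.5]])
phi = np.array([[1.0]])
psi_c = np.array([0.1])
psi_eps = np.array([[1.0]])

# constant term (I - F)^{-1} phi psi_c
const = np.linalg.solve(np.eye(1) - F, phi @ psi_c)

def gamma(x, eps):
    # initial proposed decision rule from the linear reference model
    return B @ x + const + psi_eps @ eps

def G(x):
    # its conditional expectation: the shock term drops out (E[eps] = 0)
    return B @ x + const
```

The z components of the proposed rule are initialized to the zero vector, as in step 2 of the algorithm.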
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ−1}})
3: xtVals = genXtOfXtm1(L, {x_{t−1}, ε_t, z_t}, fCon)
4: xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5: fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec

Algorithm 7: genLilXkZkFunc
input: (L, {x_{t−1}, ε_t, z_t}, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_t = Bx_{t−1} + (I − F)^(−1)φψ_c + φψ_ε ε_t + φψ_z z_t + F·fCon
output: x_t

Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1: Symbolically creates a function that uses an initial Z path to generate a function of (x_{t−1}, ε_t, z_t) providing (potentially symbolically) the x_{t+1} component of the inputs (x_{t−1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2: x_{t+1} = Bx_t + (I − F)^(−1)φψ_c + fCon
output: x_{t+1}

Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ−1}})
1: Numerically sums the F contribution for the series.
2: S = Σ_{ν=1}^{κ−1} F^(ν−1) φ Z_{t+ν}
output: S

Algorithm 10: fSumC
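The sum in Algorithm 10 can be accumulated without ever forming matrix powers explicitly, by carrying F^(ν−1) forward one multiplication at a time. A sketch with small illustrative matrices:

```python
import numpy as np

def f_sum_c(F, phi, Z_path):
    """Accumulate S = sum_{nu=1}^{k-1} F^(nu-1) phi Z_{t+nu}."""
    S = np.zeros(F.shape[0])
    F_pow = np.eye(F.shape[0])       # F^0
    for Z in Z_path:                 # Z_{t+1}, ..., Z_{t+k-1}
        S += F_pow @ (phi @ Z)
        F_pow = F_pow @ F            # advance to the next power of F
    return S

# illustrative 1x1 example: 2*1 + 0.5*2*1 = 3.0
F = np.array([[0.5]])
phi = np.array([[2.0]])
S = f_sum_c(F, phi, [np.array([1.0]), np.array([1.0])])
```

An empty `Z_path` (the κ = 1 case) returns the zero vector, matching the empty-list convention of Algorithm 5.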
input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c), S)
1: Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all ``backward looking'' equations with user-provided pre-computed functions.
2: interpData = genInterpData({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c), S)
3: {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}

Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M}, d(x_{t−1}, ε, x_t, E[x_{t+1}])|_(b,c), S)
1: Solves the model equations at the collocation points, producing the interpolation data.
output: interpData

Algorithm 12: genInterpData
input: (C, (x_{t−1}, ε_t))
1: Evaluate the model equations, obeying the pre- and post-conditions.
output: x_t, or failed

Algorithm 13: evaluateTriple
input: (V, S)
1: Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations V. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}

Algorithm 14: interpDataToFunc
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1: Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 15: smolyakInterpolationPrep

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 17: genSolutionErrXtEps

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 18: genSolutionErrXtEpsZero

input: (approxLevels, varRanges, distributions)
1: Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: xPts, PsiMat, polys, intPolys

Algorithm 19: genSolutionPath
- Introduction
- A New Series Representation For Bounded Time Series
  - Linear Reference Models and a Formula for "Anticipated Shocks"
  - The Linear Reference Model in Action: Arbitrary Bounded Time Series
  - Assessing x_t Errors
    - Truncation Error
    - Path Error
- Dynamic Stochastic Time Invariant Maps
  - An RBC Example
  - A Model Specification Constraint
- Approximating Model Solution Errors
  - Approximating a Global Decision Rule Error Bound
  - Approximating Local Decision Rule Errors
  - Assessing the Error Approximation
- Improving Proposed Solutions
  - Some Practical Considerations for Applying the Series Formula
  - How My Approach Differs From the Usual Approach
- Algorithm Overview
  - Single Equation System Case
  - Multiple Equation System Case
    - Approximating a Known Solution: σ = 1, U(c) = log(c)
    - Approximating an Unknown Solution: σ = 3
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
C Algorithm Pseudo-code

Notation:

L ≡ {H, ψ_ε, ψ_c, B, φ, F}: the linear reference model
x^p(x_{t-1}, ε_t): the proposed decision rule, x_t components
z^p(x_{t-1}, ε_t): the proposed decision rule, z_t components
X^p(x_{t-1}): the proposed decision rule conditional expectations, x_t components
Z^p(x_{t-1}): the proposed decision rule conditional expectations, z_t components
κ: one less than the number of terms in the series representation (κ ≥ 0)
S: Smolyak anisotropic grid, polynomials (p_1(x, ε), …, p_N(x, ε)) and their conditional expectations (P_1(x), …, P_N(x))
T: model evaluation points, a Niederreiter sequence in the model variable ergodic region
γ = {x(x_{t-1}, ε_t), z(x_{t-1}, ε_t)}: collects the decision rule components
G = {X(x_{t-1}), Z(x_{t-1})}: collects the decision rule conditional expectation components
H = {γ, G}: collects the decision rule information
{C_1, …, C_M}, d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c): the equation system, R^{3N_x+N_ε+N_x} → R^{N_x}
input: (L, H_0, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S, T)
1 Compute a sequence of decision rule and conditional expectation rule functions, terminating when the evaluation of the functions at the test points no longer changes much. The norm of the difference is controlled by an option (default "normConvTol" → 10^{-10}).
2 NestWhile(Function(v, doInterp(L, v, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)), H_0, notConvergedAt(v, T))
output: {H_0, H_1, …, H_c}
Algorithm 1: nestInterp
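The outer fixed-point loop of nestInterp can be sketched in a few lines of Python. This is a hedged, scalar toy (the names nest_while, do_interp, and the contraction map are illustrative stand-ins, not the paper's implementation): iterate a solution operator until successive rules agree at the test points to within the convergence tolerance.

```python
# Hypothetical sketch of the nestInterp outer loop: iterate a solution
# operator until successive decision rules agree at a set of test points.
# `do_interp` and `test_points` stand in for doInterp and T.

NORM_CONV_TOL = 1e-10  # mirrors the "normConvTol" -> 10^-10 default

def nest_while(step, h0, test_points, tol=NORM_CONV_TOL, max_iter=100):
    """Apply `step` repeatedly, stopping when the sup-norm of the change
    in the rule's values at `test_points` falls below `tol`."""
    h = h0
    for _ in range(max_iter):
        h_next = step(h)
        diff = max(abs(h_next(p) - h(p)) for p in test_points)
        h = h_next
        if diff < tol:
            break
    return h

# Toy stand-in for doInterp: a contraction toward the rule x -> 0.5 * x.
def do_interp(h):
    return lambda x: 0.5 * (h(x) + 0.5 * x)

rule = nest_while(do_interp, lambda x: x, test_points=[0.1, 0.5, 1.0])
```

Because the toy operator is a contraction, the loop halves the distance to its fixed point each pass and stops well inside the iteration cap.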
input: (L, H_i, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), for one of the model equations; it subsequently uses these functions to construct interpolating functions approximating the decision rules.
2 theFindRootFuncs = genFindRootFuncs(L, H_i, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c))
3 H_{i+1} = makeInterpFuncs(theFindRootFuncs, S)
output: H_{i+1}
Algorithm 2: doInterp
input: (L, H, κ, {C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Applies genFindRootWorker to each model component. Symbolically generates a collection of function triples and an outer model solution selection function. Each of the triples returns the Boolean gate values and a unique value for x_t, for any given value of (x_{t-1}, ε_t), for one from the set of model equations. It subsequently uses these functions to construct interpolating functions approximating the decision rules. Each of the function constructions can be done in parallel.
2 theFuncOptionsTriples = Map(Function(v, {v[1], genFindRootWorker(L, H, κ, v[2]), v[3]}), {C_1, …, C_M})
output: theFuncOptionsTriples
Algorithm 3: genFindRootFuncs
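The Map over model components in genFindRootFuncs has no cross-component dependencies, which is what makes it parallelizable. A minimal Python sketch of that structure, under toy assumptions (gen_find_root_worker and the integer "components" are illustrative stand-ins for the paper's symbolic construction):

```python
# Sketch of the parallel structure in genFindRootFuncs: each model
# component's root-finding function is built independently, so the Map
# over components parallelizes naturally across a thread pool.

from concurrent.futures import ThreadPoolExecutor

def gen_find_root_worker(component):
    # Stand-in for the symbolic construction; returns a trivial "solver"
    # tagged by its component.
    return lambda x: x + component

components = [1, 2, 3]
with ThreadPoolExecutor() as pool:
    triples = list(pool.map(gen_find_root_worker, components))
```

Each element of triples is built from exactly one component, mirroring how each model equation yields its own root-finding function.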
input: (L, G, κ, M)
1 Symbolically constructs functions that return x_t for a given (x_{t-1}, ε_t).
2 Z = {Z_{t+1}, …, Z_{t+κ-1}} = genZsForFindRoot(L, x^p_t, G)
3 modEqnArgsFunc = genLilXkZkFunc(L, Z)
4 X = {x_{t-1}, x_t, E_t(x_{t+1}), ε_t} = modEqnArgsFunc(x_{t-1}, ε_t, z_t)
5 funcOfXtZt = FindRoot((x^p_t, z_t), {M(X), x_t − x^p_t})
6 funcOfXtm1Eps = Function((x_{t-1}, ε_t), funcOfXtZt)
output: funcOfXtm1Eps
Algorithm 4: genFindRootWorker
input: (x^p_t, L, G, κ, M)
1 Symbolically compute the κ − 1 future conditional Z vectors needed for the series formula (an empty list for κ = 1).
2 X_t = x^p_t
3 for i = 1, …, κ − 1 do
4   {X_{t+i}, Z_{t+i}} = G(Z_{t+i-1})
5 end
output: {Z_{t+1}, …, Z_{t+κ-1}}
Algorithm 5: genZsForFindRoot
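The recursion in genZsForFindRoot just iterates the conditional-expectation rule forward from the proposed x_t, collecting one Z vector per step. A scalar Python sketch under toy assumptions (gen_zs and toy_G are illustrative names; in the paper both the X and Z components come from the current rule H):

```python
# Illustrative sketch of genZsForFindRoot: starting from a proposed x_t,
# iterate the conditional-expectation rule G to collect the kappa - 1
# future Z vectors the series formula needs.

def gen_zs(x_t, G, kappa):
    """Return [Z_{t+1}, ..., Z_{t+kappa-1}] (empty for kappa == 1)."""
    zs = []
    z_prev = x_t  # seed the recursion with the proposed x_t
    for _ in range(kappa - 1):
        x_next, z_next = G(z_prev)
        zs.append(z_next)
        z_prev = z_next
    return zs

# Toy G: next state halves, Z component is a tenth of the input.
toy_G = lambda z: (0.5 * z, 0.1 * z)
path = gen_zs(1.0, toy_G, kappa=3)  # two future Z values
```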
input: (L = {H, ψ_ε, ψ_c, B, φ, F})
1 Create functions that use the solution from the linear reference model for an initial proposed decision rule function and decision rule conditional expectation function.
2 γ(x, ε) = [ B x + (I − F)^{-1} φ ψ_c + ψ_ε ε_t ; 0_{N_x} ]
3 G(x) = [ B x + (I − F)^{-1} φ ψ_c ; 0_{N_x} ]  (the shock term drops from the conditional expectation since E[ε_{t+1}] = 0)
4 H = {γ, G}
output: H
Algorithm 6: genBothX0Z0Funcs
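The two linear-reference-model formulas above can be checked with a scalar sketch. All the parameter values below are toy assumptions chosen for illustration (B, F, φ, ψ_c, ψ_ε are stand-ins, not calibrated model values):

```python
# Scalar sketch of genBothX0Z0Funcs under assumed toy values: the linear
# reference model supplies the initial decision rule gamma and its
# conditional expectation G.

B, F, phi, psi_c, psi_eps = 0.9, 0.5, 1.0, 0.2, 1.0

def gamma(x, eps):
    # x_t = B x_{t-1} + (I - F)^{-1} phi psi_c + psi_eps eps_t
    return B * x + phi * psi_c / (1.0 - F) + psi_eps * eps

def G(x):
    # Conditional expectation: the mean-zero shock term drops out.
    return B * x + phi * psi_c / (1.0 - F)

H0 = (gamma, G)  # the initial proposed rule pair
```

At a zero shock the two functions coincide, which is exactly the relationship between a decision rule and its conditional expectation in the linear model.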
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing the (potentially symbolic) inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 fCon = fSumC(φ, F, ψ_z, {Z_{t+1}, …, Z_{t+κ-1}})
3 xtVals = genXtOfXtm1(L, {x_{t-1}, ε_t, z_t}, fCon)
4 xtp1Vals = genXtp1OfXt(L, xtVals, fCon)
5 fullVec = {xtm1Vars, xtVals, xtp1Vals, epsVars}
output: fullVec
Algorithm 7: genLilXkZkFunc
input: (L, {x_{t-1}, ε_t, z_t}, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the x_t component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 x_t = B x_{t-1} + (I − F)^{-1} φ ψ_c + φ ψ_ε ε_t + φ ψ_z z_t + F fCon
output: x_t
Algorithm 8: genXtOfXtm1
input: (L, x_t, fCon)
1 Symbolically creates a function that uses an initial Z path to generate a function of (x_{t-1}, ε_t, z_t) providing (potentially symbolically) the E_t(x_{t+1}) component of the inputs (x_{t-1}, x_t, E_t(x_{t+1}), ε_t) for the model system equations M.
2 E_t(x_{t+1}) = B x_t + (I − F)^{-1} φ ψ_c + fCon
output: E_t(x_{t+1})
Algorithm 9: genXtp1OfXt
input: (L, {Z_{t+1}, …, Z_{t+κ-1}})
1 Numerically sums the F contribution for the series.
2 S = Σ_{ν=1}^{κ-1} F^{ν-1} φ Z_{t+ν}
output: S
Algorithm 10: fSumC
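The truncated series in fSumC is a straightforward discounted sum. A scalar Python sketch with toy numbers (f_sum_c and the values below are illustrative, not the paper's matrix implementation):

```python
# Scalar sketch of fSumC: sum the future-shock contribution
# S = sum_{nu=1}^{kappa-1} F^(nu-1) * phi * Z_{t+nu}.

def f_sum_c(F, phi, z_path):
    """z_path holds [Z_{t+1}, ..., Z_{t+kappa-1}]."""
    return sum((F ** (nu - 1)) * phi * z
               for nu, z in enumerate(z_path, start=1))

S = f_sum_c(F=0.5, phi=2.0, z_path=[1.0, 1.0, 1.0])
# geometric sum: 2 * (1 + 0.5 + 0.25) = 3.5
```

An empty Z path (κ = 1) gives a zero contribution, matching the empty-list convention in Algorithm 5.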
input: ({C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing interpolation data, and subsequently returns the interpolating functions for the decision rule and the decision rule conditional expectation. This routine also optionally replaces the approximations generated for all "backward looking" equations with user-provided pre-computed functions.
2 interpData = genInterpData({C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
3 {γ, G} = interpDataToFunc(interpData, S)
output: {γ, G}
Algorithm 11: makeInterpFuncs
input: ({C_1, …, C_M} d(x_{t-1}, ε, x_t, E[x_{t+1}])|(b, c), S)
1 Solves the model equations at the collocation points, producing the interpolation data.
output: interpData
Algorithm 12: genInterpData
input: (C, (x_{t-1}, ε_t))
1 Evaluate the model equations, obeying pre- and post-conditions.
output: x_t or failed
Algorithm 13: evaluateTriple
input: (V, S)
1 Computes a vector of Smolyak interpolating polynomials corresponding to the matrix of function evaluations V. Each row corresponds to a vector of values at a collocation point.
output: {γ, G}
Algorithm 14: interpDataToFunc
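The essence of interpDataToFunc is solving for basis coefficients so the resulting polynomial matches the collocation data. A deliberately tiny sketch: a two-point linear basis {1, x} stands in for the paper's Smolyak basis, and interp_data_to_func is an illustrative name, not the actual routine.

```python
# Minimal stand-in for interpDataToFunc: given values of an unknown
# function at collocation points, solve Psi c = v for the coefficients
# of a polynomial basis and return the interpolant. A two-point linear
# basis {1, x} replaces the Smolyak basis purely for illustration.

def interp_data_to_func(points, values):
    x0, x1 = points
    v0, v1 = values
    # Psi = [[1, x0], [1, x1]]; solve for c = (c0, c1) by elimination.
    c1 = (v1 - v0) / (x1 - x0)
    c0 = v0 - c1 * x0
    return lambda x: c0 + c1 * x

f = interp_data_to_func([0.0, 2.0], [1.0, 5.0])  # fits (0,1) and (2,5)
```

The same solve-then-evaluate pattern scales to the full Smolyak case, where Ψ holds the basis polynomials evaluated at every collocation point and each column of V yields one decision-rule component.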
input: (approxLevels, means, stdDev, minZs, maxZs, vvMat, distributions)
1 Given approximation levels, PCA information, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 15: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 16: smolyakInterpolationPrep
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 17: genSolutionErrXtEps
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero
input: (approxLevels, varRanges, distributions)
1 Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
input (LHi κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Symbolically generates a collection of function triples and an outer
model solution selection function Each of triples returns theboolean gate values and a unique value for xt for any given value of(xtminus1 εt) one of the model equations and subsequently uses thesefunctions to construct interpolating functions approximating thedecision rules
2 theF indRootFuncs =genFindRootFuncs(LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c))
3 Hi+1 = makeInterpFuncs(theF indRootFuncs S)output Hi+1
Algorithm 2 doInterp
input (LH κ C1 CMd(xtminus1 ε xt E [xt+1])|(b c) S)1 Applies genFindRootWorker to each model component Symbolically
generates a collection of function triples and an outer modelsolution selection function Each of the triples returns theBoolean gate values and a unique value for xt for any given value of(xtminus1 εt) for one from the set of model equations It subsequentlyuses these functions to construct interpolating functionsapproximating the decision rules Each of the functionconstructions can be done in parallel
2 theFuncOptionsTriples =Map(Function(v v[1] genF indRootWorker(LH κ v[2]) v[3]) C1 CM)output theFuncOptionsTriplesd
Algorithm 3 genFindRootFuncs
input (LG κM)1 Symbolically constructs functions that return xt for a given (xtminus1 εt)
2 Z = Zt+1 Zt+kminus1 = genZsForF indRoot(L xpt G)3 modEqnArgsFunc = genLilXkZkFunc(LZ)4 X = xtminus1 xt Et(xt+1) εt = modEqnArgsFunc(xtminus1 εt zt)5 funcOfXtZt = FindRoot((xpt zt) M(X ) xt minus xpt)6 funcOfXtm1Eps = Function((xtminus1 εt) funcOfXtZt)
output funcOfXtm1EpsAlgorithm 4 genFindRootWorker
63
input (xpt LG κM)1 Symbolically compute the κminus 1 future conditional Zrsquos vectors needed
for the series formula(empty list for κ = 1 2 Xt = xpt for i = 1 i lt κminus 1 i+ + do3 Xt+1 Zt+i = G(Zt+iminus1)4 end5 = (Xt+i Zt+1) (Xt+κminus1Zt+κminus1) = genZsForF indRoot(L xpt G)
output Zt+1 Zt+kminus1Algorithm 5 genZsForF indRoot
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x ε) =[Bx+ (I minus F )minus1φψc + ψεεt
0Nx
]
3 G(x ε) =[Bx+ (I minus F )minus1φψc + ψε
0Nx
]4 H = γG
output HAlgorithm 6 genBothX0Z0Funcs
input (L Zt+1 Zt+kminus1)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolic )inputs (xtminus1 xt Et(xt+1) εt)for model system equations M
2 fCon = fSumC(φ F ψz Zt+1 Zt+kminus1)xtV als = genXtOfXtm1(L xtminus1 εt zt fCon)xtp1V als = genXtp1OfXt(L xtV als fCon)fullV ec = xtm1V ars xtV als xtp1V als epsV arsoutput fullV ec
Algorithm 7 genLilXkZkFunc
input (L xtminus1 εt zt fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically) the xt component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + FfConoutput xt
Algorithm 8 genXtOfXtm1
64
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + fConoutput xt
Algorithm 9 genXtp1OfXt
input (L Zt+1 Zt+kminus1)1 Numerically sums the F contribution for the series 2 S = sum
F νminus1φZt+νoutput S
Algorithm 10 fSumC
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
2 interpData = genInterpData(C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)γG = interpDataToFunc(interData S)output γG
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
input (xpt LG κM)1 Symbolically compute the κminus 1 future conditional Zrsquos vectors needed
for the series formula(empty list for κ = 1 2 Xt = xpt for i = 1 i lt κminus 1 i+ + do3 Xt+1 Zt+i = G(Zt+iminus1)4 end5 = (Xt+i Zt+1) (Xt+κminus1Zt+κminus1) = genZsForF indRoot(L xpt G)
output Zt+1 Zt+kminus1Algorithm 5 genZsForF indRoot
input (L = Hψε ψcB φ F)1 Create functions that use the solution from the linear reference
model for an initial proposed decision rule function and decisionrule conditional expectation function
2 γ(x ε) =[Bx+ (I minus F )minus1φψc + ψεεt
0Nx
]
3 G(x ε) =[Bx+ (I minus F )minus1φψc + ψε
0Nx
]4 H = γG
output HAlgorithm 6 genBothX0Z0Funcs
input (L Zt+1 Zt+kminus1)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolic )inputs (xtminus1 xt Et(xt+1) εt)for model system equations M
2 fCon = fSumC(φ F ψz Zt+1 Zt+kminus1)xtV als = genXtOfXtm1(L xtminus1 εt zt fCon)xtp1V als = genXtp1OfXt(L xtV als fCon)fullV ec = xtm1V ars xtV als xtp1V als epsV arsoutput fullV ec
Algorithm 7 genLilXkZkFunc
input (L xtminus1 εt zt fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically) the xt component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + FfConoutput xt
Algorithm 8 genXtOfXtm1
64
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + fConoutput xt
Algorithm 9 genXtp1OfXt
input (L Zt+1 Zt+kminus1)1 Numerically sums the F contribution for the series 2 S = sum
F νminus1φZt+νoutput S
Algorithm 10 fSumC
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
2 interpData = genInterpData(C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)γG = interpDataToFunc(interData S)output γG
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
input (L xtminus1 fCon)1 Symbolically creates a function that uses an initial Zt path to
generate a function of (xtminus1 εt zt) providing (potentially symbolically)the xt+1 component of inputs (xtminus1 xt Et(xt+1) εt)for model systemequations M
2 xt = Bxtminus1 + (I minus F )minus1φψc + φψεεt + φψzzt + fConoutput xt
Algorithm 9 genXtp1OfXt
input (L Zt+1 Zt+kminus1)1 Numerically sums the F contribution for the series 2 S = sum
F νminus1φZt+νoutput S
Algorithm 10 fSumC
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
2 interpData = genInterpData(C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)γG = interpDataToFunc(interData S)output γG
Algorithm 11 makeInterpFuncs
input (C1 CMd(xtminus1 ε xt E [xt+1])|(b c)S)1 Solves model equations at collocation points producing interpolation
data and subsequently returns the interpolating functions for thedecision rule and decision rule conditional expectation Thisroutine also optionally replaces the approximations generated forall lsquolsquobackward lookingrsquorsquo equations with user provided pre-computedfunctions
output γGAlgorithm 12 genInterpData
input (C(xtminus1 εt))1 Evaluate the model equations obeying pre and post conditions
output xt or failedAlgorithm 13 evaluateTriple
65
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 16 smolyakInterpolationPrep
66
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 17 genSolutionErrXtEps
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 18 genSolutionErrXtEpsZero
input (approxLevels varRanges distributions)1 Given approximation levels variable ranges and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 19 genSolutionPath
67
- Introduction
- A New Series Representation For Bounded Time Series
-
- Linear Reference Models and a Formula for ``Anticipated Shocks
- The Linear Reference Model in Action Arbitrary Bounded Time Series
- Assessing xt Errors
-
- Truncation Error
- Path Error
-
- Dynamic Stochastic Time Invariant Maps
-
- An RBC Example
- A Model Specification Constraint
-
- Approximating Model Solution Errors
-
- Approximating a Global Decision Rule Error Bound
- Approximating Local Decision Rule Errors
- Assessing the Error Approximation
-
- Improving Proposed Solutions
-
- Some Practical Considerations for Applying the Series Formula
- How My Approach Differs From the Usual Approach
- Algorithm Overview
-
- Single Equation System Case
- Multiple Equation System Case
-
- Approximating a Known Solution =1 U(c) = Log(c)
- Approximating an Unknown Solution =3
-
- A Model with Occasionally Binding Constraints
- Conclusions
- Error Approximation Formula Proof
- A Regime Switching Example
- Algorithm Pseudo-code
-
input (V S)1 Computes a vector of Smolyak interpolating polynomials corresponding
to the matrix of function evaluations Each row corresponds to avector of values at a collocation point
output γGAlgorithm 14 interpDataToFunc
input (approxLevelsmeans stdDevminZsmaxZs vvMat distributions)1 Given approximation levels PCA information and probability
distributions for errors compute the collocation points the PsiMatrix the polynomials and the integrals of the polynomials
output xPts PsiMat polys intPolysAlgorithm 15 smolyakInterpolationPrep
Algorithm 16: smolyakInterpolationPrep
  Input: (approxLevels, varRanges, distributions)
  1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
  Output: (xPts, PsiMat, polys, intPolys)
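To illustrate the kind of objects the prep step returns, here is a deliberately simplified one-dimensional stand-in for the Smolyak construction: Chebyshev collocation points on a variable range, the Ψ matrix of basis polynomials at those points, and Monte Carlo estimates of the integrals of the basis polynomials against a supplied error distribution. The function names and the Monte Carlo treatment of intPolys are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def chebyshev_basis(x, n_basis, lo, hi):
    """Evaluate Chebyshev polynomials T_0..T_{n_basis-1} at x after
    mapping [lo, hi] linearly onto [-1, 1]."""
    z = 2.0 * (np.asarray(x, float) - lo) / (hi - lo) - 1.0
    T = np.ones((z.size, n_basis))
    if n_basis > 1:
        T[:, 1] = z
    for k in range(2, n_basis):
        T[:, k] = 2.0 * z * T[:, k - 1] - T[:, k - 2]  # recurrence
    return T

def interpolation_prep(n_pts, lo, hi, draws):
    """Return collocation points, the Psi matrix, and Monte Carlo
    integrals E[T_k(.)] of each basis polynomial over `draws` from the
    error distribution (clipped into the variable range)."""
    # Chebyshev nodes on [-1, 1], mapped into [lo, hi]
    z = np.cos((2.0 * np.arange(1, n_pts + 1) - 1.0) * np.pi / (2.0 * n_pts))
    x_pts = lo + (z + 1.0) * (hi - lo) / 2.0
    psi_mat = chebyshev_basis(x_pts, n_pts, lo, hi)
    clipped = np.clip(draws, lo, hi)
    int_polys = chebyshev_basis(clipped, n_pts, lo, hi).mean(axis=0)
    return x_pts, psi_mat, int_polys
```

The genuine Smolyak construction replaces the dense one-dimensional node set with a sparse multidimensional grid chosen by the approximation levels, which is what lets the algorithm scale to many state variables.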
Algorithm 17: genSolutionErrXtEps
  Input: (approxLevels, varRanges, distributions)
  1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
  Output: (xPts, PsiMat, polys, intPolys)
Algorithm 18: genSolutionErrXtEpsZero
  Input: (approxLevels, varRanges, distributions)
  1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
  Output: (xPts, PsiMat, polys, intPolys)
Algorithm 19: genSolutionPath
  Input: (approxLevels, varRanges, distributions)
  1. Given approximation levels, variable ranges, and probability distributions for the errors, compute the collocation points, the Ψ matrix, the polynomials, and the integrals of the polynomials.
  Output: (xPts, PsiMat, polys, intPolys)
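In this extraction the captions for Algorithms 17–19 repeat the preparation step's input/output text, so the descriptions above should be read with that caveat. Going only by its name and the paper's dynamic stochastic time invariant maps, genSolutionPath presumably simulates a solution path by iterating a decision rule forward over shock draws. A purely hypothetical sketch of that idea, with `dr`, `x0`, and `shocks` as assumed names:

```python
import numpy as np

def gen_solution_path(dr, x0, shocks):
    """Hypothetical sketch: iterate a decision rule dr(x, eps) forward
    from initial state x0 over a sequence of shock draws, returning the
    simulated path including x0."""
    path = [np.asarray(x0, float)]
    for eps in shocks:
        path.append(np.asarray(dr(path[-1], eps), float))
    return np.stack(path)
```

For example, with the linear rule dr(x, ε) = 0.9x + ε and zero shocks the path decays geometrically from its initial state.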