
1

ECON 240C

Lecture 8

2

Outline: 2nd Order AR
- Roots of the quadratic
- Example: capumfg
- Polar form
- Inverse of B(z)
- Autocovariance function
- Yule-Walker Equations
- Partial autocorrelation function

3

Outline Cont.
- Parameter uncertainty
- Moving average processes
- Significance of Autocorrelations

4

Roots of the quadratic
x(t) = b1*x(t-1) + b2*x(t-2) + wn(t)
y^2 - b1*y - b2 = 0, from substituting y^(2-u) for x(t-u)
y = [b1 ± (b1^2 + 4*b2)^(1/2)]/2
Complex if (b1^2 + 4*b2) < 0

5

b1 = 1.489, b2 = -0.575
Roots: y = {1.489 ± [(1.489)^2 + 4*(-0.575)]^(1/2)}/2
y = 0.744 ± (2.217 - 2.300)^(1/2)/2
y = 0.74 ± (0.288/2)*i
y = 0.74 + 0.14 i, 0.74 - 0.14 i
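As a quick check of this arithmetic, the characteristic roots can also be computed numerically. A minimal sketch in Python (not part of the lecture, which uses EVIEWS), assuming numpy is available and using the b1, b2 values quoted above:

```python
import numpy as np

# Characteristic equation of the AR(2): y^2 - b1*y - b2 = 0
b1, b2 = 1.489, -0.575

# numpy.roots takes polynomial coefficients in descending powers of y
roots = np.roots([1.0, -b1, -b2])
print(roots)             # two complex conjugate roots, about 0.744 +/- 0.144i

# The roots are complex because the discriminant b1^2 + 4*b2 is negative
print(b1**2 + 4*b2 < 0)  # True for these values
```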

6

Roots in polar form
y = Re + Im i = a + b i
sin θ = b/(a^2 + b^2)^(1/2)
cos θ = a/(a^2 + b^2)^(1/2)
y = (a^2 + b^2)^(1/2) cos θ + i (a^2 + b^2)^(1/2) sin θ
[Diagram: the point (a, b) in the complex plane, with a on the Re axis and b on the Im axis]

7

Roots in Polar form
Re + i Im = a + i b = (a^2 + b^2)^(1/2) [cos θ + i sin θ]
Example: modulus = (a^2 + b^2)^(1/2) = [(0.74)^2 + (0.14)^2]^(1/2) = [0.548 + 0.0196]^(1/2) = 0.753
tan θ = sin θ/cos θ = b/a = 0.14/0.74 = 0.189
θ = tan^(-1)(0.189) ≈ 11 degrees = 0.0306 of a circle = 0.0306*2π radians = 0.192 radians
Period = 2π/θ = 32.7 quarters, or 8.2 years, the time it takes to go around the circle once
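The modulus, angle, and implied period can be reproduced with Python's standard cmath and math modules. A sketch using the rounded root 0.74 + 0.14i quoted above; its output differs slightly from the slide's 0.192 radians and 32.7 quarters because the slide rounds the angle to 11 degrees first:

```python
import cmath, math

root = complex(0.74, 0.14)           # rounded root from the slide

modulus, theta = cmath.polar(root)   # modulus and angle in radians
print(modulus)                       # about 0.753
print(math.degrees(theta))           # about 10.7 degrees
print(theta)                         # about 0.187 radians

period = 2 * math.pi / theta         # quarters for one full cycle
print(period, period / 4)            # roughly 33.6 quarters, about 8.4 years
```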

8

[Plot: peaks at 1972.4 and 1977.4]
Peak to peak: 5 years or 20 quarters

9

Half a cycle in 21-9 = 12 quarters, so period = 24 quarters or 6 years

10

Difference Equation Solutions
x(t) - b1*x(t-1) - b2*x(t-2) = 0
Suppose b2 = 0; then b1 is the root, with x(t) = b1*x(t-1). Suppose x(0) = 100 and b1 = 1.2; then x(1) = 1.2*100, x(2) = 1.2*x(1) = (1.2)^2*100, and the solution is x(t) = x(0)*b1^t.
In general, for roots r1 and r2, the solution is x(t) = A*r1^t + B*r2^t, where A and B are constants.
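A short sketch of the b2 = 0 case, iterating the difference equation and comparing it with the closed-form solution x(t) = x(0)*b1^t; the values x(0) = 100 and b1 = 1.2 are the ones used above:

```python
b1, x0 = 1.2, 100.0

# Iterate x(t) = b1*x(t-1) and compare with the closed form x(t) = x0*b1**t
x = x0
for t in range(1, 6):
    x = b1 * x
    print(t, x, x0 * b1**t)   # the two columns agree at every t
```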

11

III. Autoregressive Process of the Second Order

ARTWO(t) = b1*ARTWO(t-1) + b2*ARTWO(t-2) + WN(t)
ARTWO(t) - b1*ARTWO(t-1) - b2*ARTWO(t-2) = WN(t)
ARTWO(t) - b1*Z*ARTWO(t) - b2*Z^2*ARTWO(t) = WN(t)
[1 - b1*Z - b2*Z^2] ARTWO(t) = WN(t)

12

Inverse of [1 - b1*z - b2*z^2]
ARTWO(t) = wn(t)/B(z) = wn(t)/[1 - b1*z - b2*z^2]
ARTWO(t) = A(z)*wn(t) = {1/[1 - b1*z - b2*z^2]}*wn(t)
So A(z) = [1 + a1*z + a2*z^2 + …] = 1/[1 - b1*z - b2*z^2]
[1 - b1*z - b2*z^2]*[1 + a1*z + a2*z^2 + …] = 1
1 + a1*z + a2*z^2 + … - b1*z - a1*b1*z^2 - b2*z^2 - … = 1
1 + (a1 - b1)*z + (a2 - a1*b1 - b2)*z^2 + … = 1
So (a1 - b1) = 0, (a2 - a1*b1 - b2) = 0, …

13

Inverse of [1 - b1*z - b2*z^2]
A(z) = [1 + a1*z + a2*z^2 + …] = [1 + b1*z + (b1^2 + b2)*z^2 + …]
So ARTWO(t) = wn(t) + b1*wn(t-1) + (b1^2 + b2)*wn(t-2) + …
And ARTWO(t-1) = wn(t-1) + b1*wn(t-2) + (b1^2 + b2)*wn(t-3) + …
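Matching powers of z as above gives the general recursion a_j = b1*a_(j-1) + b2*a_(j-2) with a_0 = 1, so the MA(∞) weights can be generated in a few lines. A sketch, using the capumfg values b1 = 1.489, b2 = -0.575 purely as an example:

```python
b1, b2 = 1.489, -0.575      # example values from the capumfg fit

# MA(infinity) weights of the AR(2): a_j = b1*a_(j-1) + b2*a_(j-2), a_0 = 1
a = [1.0, b1]
for j in range(2, 10):
    a.append(b1 * a[j - 1] + b2 * a[j - 2])

print(a[1], b1)             # a_1 equals b1
print(a[2], b1**2 + b2)     # a_2 equals b1^2 + b2
print(a[:6])                # the weights oscillate and die out for these roots
```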

14

Autocovariance Function

ARTWO(t) = b1*ARTWO(t-1) + b2*ARTWO(t-2) + WN(t)
Using x(t) for ARTWO: x(t) = b1*x(t-1) + b2*x(t-2) + WN(t)
By lagging and substitution, one can show that x(t-1) depends on earlier shocks, so multiply by x(t-1) and take expectations.

15

Autocovariance Function
x(t) = b1*x(t-1) + b2*x(t-2) + WN(t)
x(t)*x(t-1) = b1*[x(t-1)]^2 + b2*x(t-1)*x(t-2) + x(t-1)*WN(t)
E x(t)*x(t-1) = b1*E[x(t-1)]^2 + b2*E x(t-1)*x(t-2) + E x(t-1)*WN(t)
γx,x(1) = b1*γx,x(0) + b2*γx,x(1), where E x(t)*x(t-1) = γx,x(1), E[x(t-1)]^2 = γx,x(0), and E x(t-1)*x(t-2) = γx,x(1) follow by definition, and E x(t-1)*WN(t) = 0 since x(t-1) depends on earlier shocks and is independent of WN(t)

16

Autocovariance Function
γx,x(1) = b1*γx,x(0) + b2*γx,x(1); dividing through by γx,x(0):
ρx,x(1) = b1*ρx,x(0) + b2*ρx,x(1) = b1 + b2*ρx,x(1), so
ρx,x(1)*[1 - b2] = b1, or
ρx,x(1) = b1/[1 - b2]
Note: if the parameters b1 and b2 are known, then one can calculate the value of ρx,x(1)

17

Autocovariance Function
x(t) = b1*x(t-1) + b2*x(t-2) + WN(t)
x(t)*x(t-2) = b1*[x(t-1)*x(t-2)] + b2*[x(t-2)]^2 + x(t-2)*WN(t)
E x(t)*x(t-2) = b1*E[x(t-1)*x(t-2)] + b2*E[x(t-2)]^2 + E x(t-2)*WN(t)
γx,x(2) = b1*γx,x(1) + b2*γx,x(0), where E x(t)*x(t-2) = γx,x(2), E[x(t-2)]^2 = γx,x(0), and E x(t-1)*x(t-2) = γx,x(1) follow by definition, and E x(t-2)*WN(t) = 0 since x(t-2) depends on earlier shocks and is independent of WN(t)

18

Autocovariance Function
γx,x(2) = b1*γx,x(1) + b2*γx,x(0); dividing through by γx,x(0):
ρx,x(2) = b1*ρx,x(1) + b2*ρx,x(0) = b1*ρx,x(1) + b2
Note: if the parameters b1 and b2 are known, then one can calculate the value of ρx,x(1) as we did above from ρx,x(1) = b1/[1 - b2], and then calculate ρx,x(2)

19

Autocorrelation Function
ρx,x(2) = b1*ρx,x(1) + b2*ρx,x(0)
Note also the recursive nature of this formula, so ρx,x(u) = b1*ρx,x(u-1) + b2*ρx,x(u-2) for u >= 2.
Thus we can map from the parameter space to the autocorrelation function.
How about the other way around?
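The mapping from (b1, b2) to the autocorrelation function is easy to code directly from ρx,x(1) = b1/[1 - b2] and the recursion above. A minimal sketch, again using the capumfg values only as an illustration:

```python
def ar2_acf(b1, b2, max_lag=12):
    """Theoretical autocorrelations of a stationary AR(2) with coefficients b1, b2."""
    rho = [1.0, b1 / (1.0 - b2)]            # rho(0) = 1, rho(1) = b1/(1 - b2)
    for u in range(2, max_lag + 1):
        rho.append(b1 * rho[u - 1] + b2 * rho[u - 2])
    return rho

print(ar2_acf(1.489, -0.575)[:6])   # a damped oscillation, since the roots are complex
```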

20

Yule-Walker Equations
From slide 16 above, ρx,x(1) = b1 + b2*ρx,x(1), and so
b1 = ρx,x(1) - b2*ρx,x(1)
From slide 19 above, ρx,x(2) = b1*ρx,x(1) + b2, or
b2 = ρx,x(2) - b1*ρx,x(1), and substituting for b1 from line 3 above:
b2 = ρx,x(2) - [ρx,x(1) - b2*ρx,x(1)]*ρx,x(1)

21

Yule-Walker Equations
b2 = ρx,x(2) - {[ρx,x(1)]^2 - b2*[ρx,x(1)]^2}
so b2 = ρx,x(2) - [ρx,x(1)]^2 + b2*[ρx,x(1)]^2
and b2 - b2*[ρx,x(1)]^2 = ρx,x(2) - [ρx,x(1)]^2
so b2*{1 - [ρx,x(1)]^2} = ρx,x(2) - [ρx,x(1)]^2
and b2 = {ρx,x(2) - [ρx,x(1)]^2}/{1 - [ρx,x(1)]^2}
This is the formula for the partial autocorrelation at lag two.
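Going the other way, the two Yule-Walker equations can be solved for (b1, b2) from the first two autocorrelations; the solution for b2 is exactly the lag-two partial autocorrelation above. A sketch (the helper name yule_walker_2 is just illustrative, not from the lecture):

```python
def yule_walker_2(rho1, rho2):
    """Solve the two Yule-Walker equations for (b1, b2) given rho(1) and rho(2)."""
    b2 = (rho2 - rho1**2) / (1.0 - rho1**2)   # partial autocorrelation at lag two
    b1 = rho1 * (1.0 - b2)                    # from rho(1) = b1 + b2*rho(1)
    return b1, b2

# Round trip: build rho(1), rho(2) from known parameters, then recover them
b1, b2 = 1.489, -0.575
rho1 = b1 / (1.0 - b2)
rho2 = b1 * rho1 + b2
print(yule_walker_2(rho1, rho2))   # approximately (1.489, -0.575)
```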

22

Partial Autocorrelation Function
b2 = {ρx,x(2) - [ρx,x(1)]^2}/{1 - [ρx,x(1)]^2}
Note: If the process is really autoregressive of the first order, then ρx,x(2) = b^2 and ρx,x(1) = b, so the numerator is zero, i.e. the partial autocorrelation function goes to zero one lag after the order of the autoregressive process.
Thus the partial autocorrelation function can be used to identify the order of the autoregressive process.

23

Partial Autocorrelation Function
If the process is first order autoregressive, then the formula for b1 = b is:
b1 = b = ACF(1), so this is used to calculate the PACF at lag one, i.e. PACF(1) = ACF(1) = b1 = b.
For a third order autoregressive process, x(t) = b1*x(t-1) + b2*x(t-2) + b3*x(t-3) + WN(t), we would have to derive three Yule-Walker equations by multiplying first by x(t-1), then by x(t-2), and lastly by x(t-3), and taking expectations.

24

Partial Autocorrelation Function

Then these three equations could be solved for b3 in terms of ρx,x(1), ρx,x(2), and ρx,x(3) to determine the expression for the partial autocorrelation function at lag three. EVIEWS does this and calculates the PACF at higher lags as well.

25

IV. Economic Forecast Project

Santa Barbara County Seminar
April 29, 2005
URL: http://www.ucsb-efp.com

26

V. Forecasting Trends

27

Lab Two: LNSP500

28

Note: Autocorrelated Residual

29

Autocorrelation Confirmed from the Correlogram of the Residual

30

Visual Representation of the Forecast

31

Numerical Representation of the Forecast

32

One Period Ahead Forecast
Note: the standard error of the regression is 0.2237
Note: the standard error of the forecast is 0.2248
Diebold refers to the forecast error:
- without parameter uncertainty, which will just be the standard error of the regression
- or with parameter uncertainty, which accounts for the fact that the estimated intercept and slope are uncertain as well

33

Parameter Uncertainty

Trend model: y(t) = a + b*t + e(t)
Fitted model: ŷ(t) = â + b̂*t

34

Parameter Uncertainty

Estimated error: ê(t) = y(t) - ŷ(t)

35

Forecast Formula

ŷ(t+1) = â + b̂*(t+1) + e(t+1)

36

Expected Value of the Forecast

E_t ŷ(t+1) = E_t [â + b̂*(t+1) + e(t+1)] = a + b*(t+1)

37

Forecast Minus its Expected Value

Forecast = a + b*(t+1) + 0
ŷ(t+1) - E_t ŷ(t+1) = (â - a) + (b̂ - b)*(t+1) + e(t+1)

38

Variance in the Forecast

VAR[ŷ(t+1) - E_t ŷ(t+1)] = VAR(â) + 2*COV(â, b̂)*(t+1) + VAR(b̂)*(t+1)^2 + VAR[Res(t+1)]

39

40

Variance of the Forecast Error

VAR[ŷ(t+1) - E_t ŷ(t+1)] = VAR(â) + 2*COV(â, b̂)*(t+1) + VAR(b̂)*(t+1)^2 + VAR[Res(t+1)]
= 0.000501 + 2*(-0.00000189)*398 + 9.52x10^-9*(398)^2 + (0.223686)^2
= 0.000501 - 0.00150 + 0.001508 + 0.0500354 = 0.0505444
SEF = (0.0505444)^(1/2) = 0.22482
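The arithmetic can be checked in a few lines; the variance and covariance estimates below are the ones quoted from the regression output above:

```python
var_a   = 0.000501        # VAR(a_hat)
cov_ab  = -0.00000189     # COV(a_hat, b_hat)
var_b   = 9.52e-9         # VAR(b_hat)
ser     = 0.223686        # standard error of the regression
t_plus1 = 398             # forecast period t+1

var_forecast_error = var_a + 2 * cov_ab * t_plus1 + var_b * t_plus1**2 + ser**2
print(var_forecast_error)          # about 0.0505
print(var_forecast_error ** 0.5)   # about 0.2248, the standard error of the forecast
```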

41

Numerical Representation of the Forecast

42

Evolutionary Vs. Stationary

Evolutionary: Trend model for lnSp500(t)
Stationary: Model for Dlnsp500(t)

43

Pre-whitened Time Series

44

Note: 0.008625 is the monthly growth rate; times 12 = 0.1035

45

Is the Mean Fractional Rate of Growth Different from Zero?

Econ 240A, Ch. 12.2
t = (x̄ - μ)/(s/n^(1/2)), where the null hypothesis is that μ = 0.
t = (0.008625 - 0)/(0.045661/397^(1/2)) = 0.008625/0.002292 = 3.76, so 0.008625 is significantly different from zero.
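A sketch reproducing the t-statistic; the sample mean, standard deviation, and sample size are the values quoted above:

```python
mean_dlnsp500 = 0.008625   # mean monthly fractional growth rate
s = 0.045661               # sample standard deviation
n = 397                    # number of observations

t_stat = (mean_dlnsp500 - 0.0) / (s / n**0.5)
print(t_stat)   # about 3.76, so the mean growth rate differs significantly from zero
```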

46

Model for lnsp500(t)

Lnsp500(t) = a + b*t + resid(t), where resid(t) is close to a random walk, so the model is:
lnsp500(t) = a + b*t + RW(t), and taking the exponential,
sp500(t) = e^(a + b*t + RW(t)) = e^(a + b*t) * e^(RW(t))

47

Note: The Fitted Trend Line Forecasts Above the Observations

48

49

VI. Autoregressive Representation of a Moving Average Process

MAONE(t) = WN(t) + a*WN(t-1)
MAONE(t) = WN(t) + a*Z*WN(t)
MAONE(t) = [1 + a*Z] WN(t)
MAONE(t)/[1 - (-a*Z)] = WN(t)
[1 + (-a*Z) + (-a*Z)^2 + …] MAONE(t) = WN(t)
MAONE(t) - a*MAONE(t-1) + a^2*MAONE(t-2) + … = WN(t)
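A simulation check of this inversion, not from the lecture: generate an MA(1) series, apply the truncated AR(∞) weights (-a)^j, and confirm that the current white-noise shock is approximately recovered when |a| < 1 so the weights die out. The value a = 0.5 is just an illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
a, T, J = 0.5, 500, 30                  # MA(1) coefficient, sample size, truncation lag

wn = rng.standard_normal(T)
ma1 = wn.copy()
ma1[1:] += a * wn[:-1]                  # MAONE(t) = WN(t) + a*WN(t-1)

# Truncated AR(infinity) representation: WN(t) ~ sum_j (-a)^j * MAONE(t-j)
t = 200                                 # any index well past the truncation length
recovered = sum((-a) ** j * ma1[t - j] for j in range(J))
print(recovered, wn[t])                 # the two values agree to high accuracy
```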

50

MAONE(t) = a*MAONE(t-1) - a^2*MAONE(t-2) + … + WN(t)

51

Lab 4: Alternating Pattern in PACF of MATHREE

52

Part IV. Significance of Autocorrelations

ρ̂x,x(u) ~ N(0, 1/T), where T is the number of observations

53

Correlogram of the Residual from the Trend Model for LNSP500(t)

54

Box-Pierce Statistic

[ρ̂x,x(u) - 0]/(1/T)^(1/2) = T^(1/2)*ρ̂x,x(u)
is normalized, i.e. is N(0,1).
The square of an N(0,1) variable is distributed Chi-square:
T*[ρ̂x,x(u)]^2

55

Box-Pierce Statistic
The sum of the squares of independent N(0,1) variables is Chi-square, and if the autocorrelations are close to zero they will be independent, so under the null hypothesis that the autocorrelations are zero, we have a Chi-square statistic:
T * Σ(u=1 to K) [ρ̂x,x(u)]^2
that has K - p - q degrees of freedom, where K is the number of lags in the sum and p + q is the number of parameters estimated.

56

Application to Lab Four: the Fractional Change in the Federal Funds Rate

Dlnffr = lnffr - lnffr(-1)
Does taking the logarithm and then differencing help model this rate?

57

58

59

Correlogram of dlnffr(t)

60

How would you model dlnffr(t)?

Notation: (p,d,q) for ARIMA models, where d stands for the number of times first differenced, p is the order of the autoregressive part, and q is the order of the moving average part.

61

Estimated MAThree Model for dlnffr

62

Correlogram of Residual from (0,0,3) Model for dlnffr

63

Calculating the Box-Pierce Stat

Lag    ACF      ACF squared   Sum        Sum*584
1      0.013    0.000169      0.000169   0.098696
2     -0.015    0.000225      0.000394   0.230096
3     -0.026    0.000676      0.001070   0.624880
4     -0.004    0.000016      0.001086   0.634224
5     -0.029    0.000841      0.001927   1.125368
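The table's arithmetic can be reproduced directly; the five residual autocorrelations and T = 584 are taken from the slide:

```python
T = 584
acf = [0.013, -0.015, -0.026, -0.004, -0.029]   # residual autocorrelations, lags 1-5

running_sum = 0.0
for lag, r in enumerate(acf, start=1):
    running_sum += r**2
    print(lag, r, round(r**2, 6), round(running_sum, 6), round(T * running_sum, 6))
# The last column at lag 5 is the Box-Pierce statistic, about 1.125
```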

64

EVIEWS Uses the Ljung-Box Statistic

65

Q-Stat at Lag 5

(T+2)/(T-5) * Box-Pierce = Ljung-Box
(586/581)*1.125368 = 1.135, compared to 1.132 (EVIEWS)
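And the scaling quoted above, as a one-line check (this is the slide's approximation; EVIEWS computes the exact Ljung-Box sum, which is why its 1.132 differs slightly):

```python
box_pierce = 1.125368
print((586 / 581) * box_pierce)   # about 1.135, versus 1.132 reported by EVIEWS
```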

66

GENR: chi=rchisq(3); dens=dchisq(chi, 3)

67

Correlogram of Residual from (0,0,3) Model for dlnffr

68
