Econometrics (Fumio Hayashi): Solutions to Analytical Exercises


Nov. 22, 2003, revised Dec. 27, 2003. Hayashi Econometrics

    Solution to Chapter 1 Analytical Exercises

    1. (Reproducing the answer on p. 84 of the book)

$(y - X\tilde\beta)'(y - X\tilde\beta)$

$= [(y - Xb) + X(b - \tilde\beta)]'[(y - Xb) + X(b - \tilde\beta)]$ (by the add-and-subtract strategy)

$= [(y - Xb)' + (b - \tilde\beta)'X'][(y - Xb) + X(b - \tilde\beta)]$

$= (y - Xb)'(y - Xb) + (b - \tilde\beta)'X'(y - Xb) + (y - Xb)'X(b - \tilde\beta) + (b - \tilde\beta)'X'X(b - \tilde\beta)$

$= (y - Xb)'(y - Xb) + 2(b - \tilde\beta)'X'(y - Xb) + (b - \tilde\beta)'X'X(b - \tilde\beta)$ (since $(b - \tilde\beta)'X'(y - Xb) = (y - Xb)'X(b - \tilde\beta)$)

$= (y - Xb)'(y - Xb) + (b - \tilde\beta)'X'X(b - \tilde\beta)$ (since $X'(y - Xb) = 0$ by the normal equations)

$\geq (y - Xb)'(y - Xb)$ (since $(b - \tilde\beta)'X'X(b - \tilde\beta) = z'z = \sum_{i=1}^n z_i^2 \geq 0$ where $z \equiv X(b - \tilde\beta)$).
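As an illustrative check (not part of the book's solution), the following numpy sketch verifies on synthetic data that the SSR evaluated at the OLS estimate $b$ never exceeds the SSR at an arbitrary $\tilde\beta$; the data-generating process and all variable names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 100, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

# OLS coefficients from the normal equations X'X b = X'y
b = np.linalg.solve(X.T @ X, X.T @ y)

def ssr(beta):
    r = y - X @ beta
    return r @ r

# SSR at b never exceeds SSR at any other candidate beta_tilde
for _ in range(1000):
    beta_tilde = b + rng.normal(scale=0.5, size=K)
    assert ssr(b) <= ssr(beta_tilde) + 1e-10
```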

2. (a), (b). If $X$ is an $n \times K$ matrix of full column rank, then $X'X$ is symmetric and invertible. It is very straightforward to show (and indeed you've been asked to show in the text) that $M_X \equiv I_n - X(X'X)^{-1}X'$ is symmetric and idempotent and that $M_X X = 0$. In this question, set $X = \mathbf 1$ (the vector of ones).

    (c)

$$M_{\mathbf 1}\,y = [I_n - \mathbf 1(\mathbf 1'\mathbf 1)^{-1}\mathbf 1']\,y = y - \tfrac{1}{n}\mathbf 1\,\mathbf 1'y \quad \text{(since $\mathbf 1'\mathbf 1 = n$)}$$

$$= y - \mathbf 1\,\tfrac{1}{n}\sum_{i=1}^n y_i = y - \mathbf 1\,\bar y.$$

    (d) Replace y by X in (c).
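A quick numerical illustration of (a)-(c), again not from the book: the annihilator associated with the constant regressor is symmetric, idempotent, kills the vector of ones, and demeans any vector it multiplies. The data below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
ones = np.ones((n, 1))
y = rng.normal(size=n)

# Annihilator associated with the constant regressor
M1 = np.eye(n) - ones @ ones.T / n      # I_n - 1(1'1)^{-1}1'

# Symmetric, idempotent, annihilates the column of ones
assert np.allclose(M1, M1.T)
assert np.allclose(M1 @ M1, M1)
assert np.allclose(M1 @ ones, 0)

# M1 y is the vector of deviations from the sample mean
assert np.allclose(M1 @ y, y - y.mean())
```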

    3. Special case of the solution to the next exercise.

4. From the normal equations (1.2.3) of the text, we obtain

    (a)

$$\begin{bmatrix} X_1' \\ X_2' \end{bmatrix}\begin{bmatrix} X_1 & \vdots & X_2 \end{bmatrix}\begin{bmatrix} b_1 \\ b_2 \end{bmatrix} = \begin{bmatrix} X_1' \\ X_2' \end{bmatrix} y.$$

Using the rules of multiplication of partitioned matrices, it is straightforward to derive ($*$) and ($**$) from the above.


(b) By premultiplying both sides of ($*$) in the question by $X_1(X_1'X_1)^{-1}$, we obtain

$$X_1(X_1'X_1)^{-1}X_1'X_1 b_1 = -X_1(X_1'X_1)^{-1}X_1'X_2 b_2 + X_1(X_1'X_1)^{-1}X_1'y \;\Leftrightarrow\; X_1 b_1 = -P_1 X_2 b_2 + P_1 y.$$

Substitution of this into ($**$) yields

$$X_2'(-P_1 X_2 b_2 + P_1 y) + X_2'X_2 b_2 = X_2'y$$

$$\Leftrightarrow\; X_2'(I - P_1)X_2 b_2 = X_2'(I - P_1)y \;\Leftrightarrow\; X_2'M_1 X_2 b_2 = X_2'M_1 y$$

$$\Leftrightarrow\; X_2'M_1'M_1 X_2 b_2 = X_2'M_1'M_1 y \quad \text{(since $M_1$ is symmetric \& idempotent)} \;\Leftrightarrow\; \tilde X_2'\tilde X_2\, b_2 = \tilde X_2'\tilde y.$$

Therefore,

$$b_2 = (\tilde X_2'\tilde X_2)^{-1}\tilde X_2'\tilde y.$$

(The matrix $\tilde X_2'\tilde X_2$ is invertible because $\tilde X_2$ is of full column rank. To see that $\tilde X_2$ is of full column rank, suppose not. Then there exists a nonzero vector $c$ such that $\tilde X_2 c = 0$. But

$$\tilde X_2 c = X_2 c - X_1 d \quad \text{where } d \equiv (X_1'X_1)^{-1}X_1'X_2 c.$$

That is, $X\pi = 0$ for $\pi \equiv \begin{bmatrix} -d \\ c \end{bmatrix}$. This is

a contradiction because $X = [X_1 \;\vdots\; X_2]$ is of full column rank and $\pi \neq 0$.)
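The partitioned-regression result in (b) can be checked numerically; the sketch below, with an invented data-generating process, confirms that the $b_2$ block of the full OLS estimate coincides with the coefficient from regressing $\tilde y$ on $\tilde X_2$.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k1, k2 = 200, 2, 3
X1 = np.column_stack([np.ones(n), rng.normal(size=(n, k1 - 1))])
X2 = rng.normal(size=(n, k2))
y = X1 @ rng.normal(size=k1) + X2 @ rng.normal(size=k2) + rng.normal(size=n)

X = np.hstack([X1, X2])
b = np.linalg.solve(X.T @ X, X.T @ y)           # full regression: b = (b1', b2')'
b2_full = b[k1:]

P1 = X1 @ np.linalg.solve(X1.T @ X1, X1.T)      # projection onto col(X1)
M1 = np.eye(n) - P1                             # annihilator
X2t, yt = M1 @ X2, M1 @ y                       # "tilde" variables

b2_fwl = np.linalg.solve(X2t.T @ X2t, X2t.T @ yt)
assert np.allclose(b2_full, b2_fwl)             # b2 from the partitioned formula
```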

(c) By premultiplying both sides of $y = X_1 b_1 + X_2 b_2 + e$ by $M_1$, we obtain

$$M_1 y = M_1 X_1 b_1 + M_1 X_2 b_2 + M_1 e.$$

Since $M_1 X_1 = 0$ and $\tilde y \equiv M_1 y$, the above equation can be rewritten as

$$\tilde y = M_1 X_2 b_2 + M_1 e = \tilde X_2 b_2 + M_1 e.$$

$M_1 e = e$ because

$$M_1 e = (I - P_1)e = e - P_1 e = e - X_1(X_1'X_1)^{-1}X_1'e = e \quad \text{(since $X_1'e = 0$ by the normal equations)}.$$

(d) From (b), we have

$$b_2 = (\tilde X_2'\tilde X_2)^{-1}\tilde X_2'\tilde y = (\tilde X_2'\tilde X_2)^{-1}X_2'M_1'M_1 y = (\tilde X_2'\tilde X_2)^{-1}\tilde X_2'y.$$

Therefore, $b_2$ is the OLS coefficient estimator for the regression of $y$ on $\tilde X_2$. The residual vector from the regression is

$$y - \tilde X_2 b_2 = (y - \tilde y) + (\tilde y - \tilde X_2 b_2) = (y - M_1 y) + (\tilde y - \tilde X_2 b_2) = (y - M_1 y) + e \;\text{(by (c))} = P_1 y + e.$$


This does not equal $e$ because $P_1 y$ is not necessarily zero. The SSR from the regression of $y$ on $\tilde X_2$ can be written as

$$(y - \tilde X_2 b_2)'(y - \tilde X_2 b_2) = (P_1 y + e)'(P_1 y + e) = (P_1 y)'(P_1 y) + e'e \quad \text{(since $P_1 e = X_1(X_1'X_1)^{-1}X_1'e = 0$)}.$$

This does not equal $e'e$ if $P_1 y$ is not zero.
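The two claims in (d), that the coefficient is unchanged while the SSR is not, can also be seen numerically; the following sketch uses synthetic data and is not part of the original solution.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
y = X1 @ np.array([1.0, 0.5]) + X2 @ np.array([2.0, -1.0]) + rng.normal(size=n)

M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
X2t, yt = M1 @ X2, M1 @ y

# Same coefficient whether the regressand is y or the demeaned y-tilde ...
b2_on_y  = np.linalg.solve(X2t.T @ X2t, X2t.T @ y)
b2_on_yt = np.linalg.solve(X2t.T @ X2t, X2t.T @ yt)
assert np.allclose(b2_on_y, b2_on_yt)

# ... but the SSR from regressing y on X2-tilde exceeds e'e by (P1 y)'(P1 y)
e = yt - X2t @ b2_on_yt                 # residuals from the full regression (by (c))
P1y = y - M1 @ y
ssr_y_on_X2t = np.sum((y - X2t @ b2_on_y) ** 2)
assert np.isclose(ssr_y_on_X2t, P1y @ P1y + e @ e)
```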

(e) From (c), $\tilde y = \tilde X_2 b_2 + e$. So

$$\tilde y'\tilde y = (\tilde X_2 b_2 + e)'(\tilde X_2 b_2 + e) = b_2'\tilde X_2'\tilde X_2 b_2 + e'e \quad \text{(since $\tilde X_2'e = 0$)}.$$

Since $b_2 = (\tilde X_2'\tilde X_2)^{-1}\tilde X_2'\tilde y$, we have

$$b_2'\tilde X_2'\tilde X_2 b_2 = \tilde y'\tilde X_2(X_2'M_1X_2)^{-1}\tilde X_2'\tilde y.$$

(f) (i) Let $\tilde b_1$ be the OLS coefficient estimator for the regression of $\tilde y$ on $X_1$. Then

$$\tilde b_1 = (X_1'X_1)^{-1}X_1'\tilde y = (X_1'X_1)^{-1}X_1'M_1 y = (X_1'X_1)^{-1}(M_1X_1)'y = 0 \quad \text{(since $M_1X_1 = 0$)}.$$

So $SSR_1 = (\tilde y - X_1\tilde b_1)'(\tilde y - X_1\tilde b_1) = \tilde y'\tilde y$.

(ii) Since the residual vector from the regression of $\tilde y$ on $\tilde X_2$ equals $e$ by (c), $SSR_2 = e'e$.

(iii) From the Frisch-Waugh Theorem, the residuals from the regression of $\tilde y$ on $X_1$ and $X_2$ equal those from the regression of $M_1\tilde y$ ($= \tilde y$) on $M_1X_2$ ($= \tilde X_2$). So $SSR_3 = e'e$.
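The three SSR claims in (f) are easy to confirm on simulated data; the helper function and variable names below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 150
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
y = X1 @ np.array([1.0, -0.3]) + X2 @ np.array([0.8, 1.5]) + rng.normal(size=n)

def ols_resid(y, X):
    """Residuals from an OLS regression of y on X."""
    return y - X @ np.linalg.solve(X.T @ X, X.T @ y)

M1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
X2t, yt = M1 @ X2, M1 @ y
e = ols_resid(y, np.hstack([X1, X2]))                    # full-regression residuals

ssr1 = np.sum(ols_resid(yt, X1) ** 2)                    # y-tilde on X1
ssr2 = np.sum(ols_resid(yt, X2t) ** 2)                   # y-tilde on X2-tilde
ssr3 = np.sum(ols_resid(yt, np.hstack([X1, X2])) ** 2)   # y-tilde on X1 and X2

assert np.isclose(ssr1, yt @ yt)
assert np.isclose(ssr2, e @ e) and np.isclose(ssr3, e @ e)
```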

    5. (a) The hint is as good as the answer.

(b) Let $\hat\eta \equiv y - X\hat\beta_R$, the residuals from the restricted regression. By using the add-and-subtract strategy, we obtain

$$\hat\eta = y - X\hat\beta_R = (y - Xb) + X(b - \hat\beta_R).$$

So

$$SSR_R = [(y - Xb) + X(b - \hat\beta_R)]'[(y - Xb) + X(b - \hat\beta_R)] = (y - Xb)'(y - Xb) + (b - \hat\beta_R)'X'X(b - \hat\beta_R) \quad \text{(since $X'(y - Xb) = 0$)}.$$

But $SSR_U = (y - Xb)'(y - Xb)$, so

$$SSR_R - SSR_U = (b - \hat\beta_R)'X'X(b - \hat\beta_R)$$

$$= (Rb - r)'[R(X'X)^{-1}R']^{-1}(Rb - r) \quad \text{(using the expression for $\hat\beta_R$ from (a))}$$

$$= \hat\lambda'R(X'X)^{-1}R'\hat\lambda \quad \text{(using the expression for $\hat\lambda$ from (a))}$$

$$= \hat\eta'X(X'X)^{-1}X'\hat\eta \quad \text{(by the first-order conditions that $X'(y - X\hat\beta_R) = R'\hat\lambda$)}$$

$$= \hat\eta'P\hat\eta.$$

(c) The $F$-ratio is defined as

$$F \equiv \frac{(Rb - r)'[R(X'X)^{-1}R']^{-1}(Rb - r)/r}{s^2} \quad \text{(where $r = \#\mathbf r$)} \qquad (1.4.9)$$


Since $(Rb - r)'[R(X'X)^{-1}R']^{-1}(Rb - r) = SSR_R - SSR_U$ as shown above, the $F$-ratio can be rewritten as

$$F = \frac{(SSR_R - SSR_U)/r}{s^2} = \frac{(SSR_R - SSR_U)/r}{e'e/(n-K)} = \frac{(SSR_R - SSR_U)/r}{SSR_U/(n-K)}.$$

Therefore, (1.4.9) = (1.4.11).
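As an optional numerical check, the sketch below verifies that the Wald form (1.4.9) and the SSR form (1.4.11) of the $F$-ratio coincide on simulated data; the hypothesis and the data-generating process are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(5)
n, K = 120, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([1.0, 0.0, 0.0, 2.0]) + rng.normal(size=n)

# Hypothesis R beta = r_vec: here beta_2 = beta_3 = 0 (two restrictions)
R = np.array([[0, 1, 0, 0], [0, 0, 1, 0]], dtype=float)
r_vec = np.zeros(2)
num_r = R.shape[0]

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ y
e = y - X @ b
s2 = e @ e / (n - K)

# Wald form (1.4.9)
w = R @ b - r_vec
F_wald = (w @ np.linalg.solve(R @ XtX_inv @ R.T, w) / num_r) / s2

# SSR form (1.4.11): restricted OLS via the formula from Exercise 5(a)
b_R = b - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, w)
SSR_R = np.sum((y - X @ b_R) ** 2)
SSR_U = e @ e
F_ssr = ((SSR_R - SSR_U) / num_r) / (SSR_U / (n - K))

assert np.isclose(F_wald, F_ssr)
```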

6. (a) Unrestricted model: $y = X\beta + \varepsilon$, where

$$y_{(n\times 1)} = \begin{bmatrix} y_1 \\ \vdots \\ y_n \end{bmatrix}, \quad X_{(n\times K)} = \begin{bmatrix} 1 & x_{12} & \dots & x_{1K} \\ \vdots & \vdots & \ddots & \vdots \\ 1 & x_{n2} & \dots & x_{nK} \end{bmatrix}, \quad \beta_{(K\times 1)} = \begin{bmatrix} \beta_1 \\ \vdots \\ \beta_K \end{bmatrix}.$$

Restricted model: $y = X\beta + \varepsilon$, $R\beta = r$, where

$$R_{((K-1)\times K)} = \begin{bmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & \dots & 0 & 1 \end{bmatrix}, \quad r_{((K-1)\times 1)} = \begin{bmatrix} 0 \\ \vdots \\ 0 \end{bmatrix}.$$

Obviously, the restricted OLS estimator of $\beta$ is

$$\hat\beta_{R\,(K\times 1)} = \begin{bmatrix} \bar y \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \quad \text{so} \quad X\hat\beta_R = \begin{bmatrix} \bar y \\ \vdots \\ \bar y \end{bmatrix} = \mathbf 1\,\bar y.$$

(You can use the formula for the restricted OLS estimator derived in the previous exercise, $\hat\beta_R = b - (X'X)^{-1}R'[R(X'X)^{-1}R']^{-1}(Rb - r)$, to verify this.) If $SSR_U$ and $SSR_R$ are the minimized sums of squared residuals from the unrestricted and restricted models, they are calculated as

$$SSR_R = (y - X\hat\beta_R)'(y - X\hat\beta_R) = \sum_{i=1}^n (y_i - \bar y)^2,$$

$$SSR_U = (y - Xb)'(y - Xb) = e'e = \sum_{i=1}^n e_i^2.$$

Therefore,

$$SSR_R - SSR_U = \sum_{i=1}^n (y_i - \bar y)^2 - \sum_{i=1}^n e_i^2. \qquad (\mathrm A)$$


On the other hand,

$$(b - \hat\beta_R)'(X'X)(b - \hat\beta_R) = (Xb - X\hat\beta_R)'(Xb - X\hat\beta_R) = \sum_{i=1}^n (\hat y_i - \bar y)^2.$$

Since $SSR_R - SSR_U = (b - \hat\beta_R)'(X'X)(b - \hat\beta_R)$ (as shown in Exercise 5(b)),

$$\sum_{i=1}^n (y_i - \bar y)^2 - \sum_{i=1}^n e_i^2 = \sum_{i=1}^n (\hat y_i - \bar y)^2. \qquad (\mathrm B)$$

    (b)

$$F = \frac{(SSR_R - SSR_U)/(K-1)}{\sum_{i=1}^n e_i^2/(n-K)} \quad \text{(by Exercise 5(c))}$$

$$= \frac{\left(\sum_{i=1}^n (y_i - \bar y)^2 - \sum_{i=1}^n e_i^2\right)/(K-1)}{\sum_{i=1}^n e_i^2/(n-K)} \quad \text{(by equation (A) above)}$$

$$= \frac{\sum_{i=1}^n (\hat y_i - \bar y)^2/(K-1)}{\sum_{i=1}^n e_i^2/(n-K)} \quad \text{(by equation (B) above)}$$

$$= \frac{\dfrac{\sum_{i=1}^n (\hat y_i - \bar y)^2/(K-1)}{\sum_{i=1}^n (y_i - \bar y)^2}}{\dfrac{\sum_{i=1}^n e_i^2/(n-K)}{\sum_{i=1}^n (y_i - \bar y)^2}} \quad \text{(by dividing both the numerator and the denominator by $\textstyle\sum_{i=1}^n (y_i - \bar y)^2$)}$$

$$= \frac{R^2/(K-1)}{(1 - R^2)/(n-K)} \quad \text{(by the definition of $R^2$)}.$$
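A small simulation (not in the book) confirming that the SSR-based $F$ equals the $R^2$-based expression when the restricted model contains only the intercept; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
n, K = 80, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, K - 1))])
y = X @ np.array([0.5, 1.0, -2.0]) + rng.normal(size=n)

b = np.linalg.solve(X.T @ X, X.T @ y)
e = y - X @ b
SSR_U = e @ e
SSR_R = np.sum((y - y.mean()) ** 2)   # restricted model: intercept only

F_ssr = ((SSR_R - SSR_U) / (K - 1)) / (SSR_U / (n - K))

R2 = 1 - SSR_U / SSR_R                # centered R-squared
F_r2 = (R2 / (K - 1)) / ((1 - R2) / (n - K))

assert np.isclose(F_ssr, F_r2)
```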

    7. (Reproducing the answer on pp. 84-85 of the book)

(a) $\hat\beta_{GLS} - \beta = A\varepsilon$ where $A \equiv (X'V^{-1}X)^{-1}X'V^{-1}$, and $b - \hat\beta_{GLS} = B\varepsilon$ where $B \equiv (X'X)^{-1}X' - (X'V^{-1}X)^{-1}X'V^{-1}$. So

$$\operatorname{Cov}(\hat\beta_{GLS} - \beta,\; b - \hat\beta_{GLS}) = \operatorname{Cov}(A\varepsilon, B\varepsilon) = A\operatorname{Var}(\varepsilon)B' = \sigma^2 AVB'.$$

It is straightforward to show that $AVB' = 0$.

(b) For the choice of $H$ indicated in the hint,

$$\operatorname{Var}(\hat\beta) - \operatorname{Var}(\hat\beta_{GLS}) = -CV_q^{-1}C'.$$

If $C \neq 0$, then there exists a nonzero vector $z$ such that $C'z \equiv v \neq 0$. For such $z$,

$$z'[\operatorname{Var}(\hat\beta) - \operatorname{Var}(\hat\beta_{GLS})]z = -v'V_q^{-1}v < 0 \quad \text{(since $V_q$ is positive definite)},$$

which is a contradiction because $\hat\beta_{GLS}$ is efficient.
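The claim in (a) that $AVB' = 0$ can be verified numerically; the sketch below draws an arbitrary positive definite $V$ and checks the product, purely as an illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
n, K = 30, 3
X = rng.normal(size=(n, K))

# Some positive definite V (the error covariance up to scale)
L = rng.normal(size=(n, n))
V = L @ L.T + n * np.eye(n)
V_inv = np.linalg.inv(V)

A = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv)   # A = (X'V^{-1}X)^{-1} X'V^{-1}
B = np.linalg.solve(X.T @ X, X.T) - A               # B = (X'X)^{-1}X' - A

# Cov(beta_GLS - beta, b - beta_GLS) is proportional to A V B', which is zero
assert np.allclose(A @ V @ B.T, 0, atol=1e-8)
```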


Nov. 25, 2003, revised February 23, 2010. Hayashi Econometrics

    Solution to Chapter 2 Analytical Exercises

1. For any $\varepsilon > 0$,

$$\operatorname{Prob}(|z_n| > \varepsilon) = \frac{1}{n} \to 0 \quad \text{as } n \to \infty.$$

So, $\operatorname{plim} z_n = 0$. On the other hand,

$$\operatorname E(z_n) = \frac{n-1}{n}\cdot 0 + \frac{1}{n}\cdot n^2 = n,$$

which means that $\lim_{n\to\infty} \operatorname E(z_n) = \infty$.
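A Monte Carlo illustration of this exercise (not part of the solution): $z_n$ takes the value $n^2$ with probability $1/n$ and $0$ otherwise, so it converges in probability to zero while its mean diverges.

```python
import numpy as np

rng = np.random.default_rng(8)

# z_n equals n^2 with probability 1/n and 0 otherwise,
# so plim z_n = 0 while E(z_n) = n diverges.
for n in (10, 100, 1000, 10000):
    draws = np.where(rng.random(100_000) < 1 / n, float(n) ** 2, 0.0)
    print(n, (np.abs(draws) > 0.1).mean(), draws.mean())   # P(|z_n|>0.1) ~ 1/n, mean ~ n
```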

2. As shown in the hint,

$$(z_n - \mu)^2 = (z_n - \operatorname E(z_n))^2 + 2(z_n - \operatorname E(z_n))(\operatorname E(z_n) - \mu) + (\operatorname E(z_n) - \mu)^2.$$

Take the expectation of both sides to obtain

$$\operatorname E[(z_n - \mu)^2] = \operatorname E[(z_n - \operatorname E(z_n))^2] + 2\operatorname E[z_n - \operatorname E(z_n)](\operatorname E(z_n) - \mu) + (\operatorname E(z_n) - \mu)^2 = \operatorname{Var}(z_n) + (\operatorname E(z_n) - \mu)^2$$

(because $\operatorname E[z_n - \operatorname E(z_n)] = \operatorname E(z_n) - \operatorname E(z_n) = 0$).

Take the limit as $n \to \infty$ of both sides to obtain

$$\lim_{n\to\infty}\operatorname E[(z_n - \mu)^2] = \lim_{n\to\infty}\operatorname{Var}(z_n) + \lim_{n\to\infty}(\operatorname E(z_n) - \mu)^2 = 0$$

(because $\lim_{n\to\infty}\operatorname E(z_n) = \mu$ and $\lim_{n\to\infty}\operatorname{Var}(z_n) = 0$).

Therefore, $z_n \to_{\text{m.s.}} \mu$. By Lemma 2.2(a), this implies $z_n \to_p \mu$.

3. (a) Since an i.i.d. process is ergodic stationary, Assumption 2.2 is implied by Assumption 2.2$'$. Assumptions 2.1 and 2.2$'$ imply that $g_i \equiv x_i\cdot\varepsilon_i$ is i.i.d. Since an i.i.d. process with mean zero is mds (a martingale difference sequence), Assumption 2.5 is implied by Assumptions 2.2$'$ and 2.5$'$.

(b) Rewrite the OLS estimator as

$$b - \beta = (X'X)^{-1}X'\varepsilon = S_{xx}^{-1}\,\bar g. \qquad (\mathrm A)$$

Since by Assumption 2.2$'$ $\{x_i\}$ is i.i.d., $\{x_i x_i'\}$ is i.i.d. So by Kolmogorov's Second Strong LLN, we obtain

$$S_{xx} \to_p \Sigma_{xx}.$$

The convergence is actually almost sure, but almost sure convergence implies convergence in probability. Since $\Sigma_{xx}$ is invertible by Assumption 2.4, by Lemma 2.3(a) we get

$$S_{xx}^{-1} \to_p \Sigma_{xx}^{-1}.$$


Similarly, under Assumptions 2.1 and 2.2$'$, $\{g_i\}$ is i.i.d. By Kolmogorov's Second Strong LLN, we obtain

$$\bar g \to_p \operatorname E(g_i),$$

which is zero by Assumption 2.3. So by Lemma 2.3(a),

$$S_{xx}^{-1}\,\bar g \to_p \Sigma_{xx}^{-1}\cdot 0 = 0.$$

Therefore, $\operatorname{plim}_{n\to\infty}(b - \beta) = 0$, which implies that the OLS estimator $b$ is consistent.

Next, we prove that the OLS estimator $b$ is asymptotically normal. Rewrite equation (A) above as

$$\sqrt n\,(b - \beta) = S_{xx}^{-1}\,\sqrt n\,\bar g.$$

As already observed, $\{g_i\}$ is i.i.d. with $\operatorname E(g_i) = 0$. The variance of $g_i$ equals $\operatorname E(g_i g_i') = S$ since $\operatorname E(g_i) = 0$ by Assumption 2.3. So by the Lindeberg-Levy CLT,

$$\sqrt n\,\bar g \to_d N(0, S).$$

Furthermore, as already noted, $S_{xx}^{-1} \to_p \Sigma_{xx}^{-1}$. Thus by Lemma 2.4(c),

$$\sqrt n\,(b - \beta) \to_d N(0,\ \Sigma_{xx}^{-1} S\,\Sigma_{xx}^{-1}).$$
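The consistency and asymptotic-normality conclusions can be illustrated by simulation; in the sketch below the error distribution and all parameter values are invented, and the sandwich matrix $\Sigma_{xx}^{-1} S\,\Sigma_{xx}^{-1}$ is estimated from a separate large sample rather than computed analytically.

```python
import numpy as np

rng = np.random.default_rng(9)
beta = np.array([1.0, 2.0])
n, reps = 500, 2000

def draw(n):
    x = np.column_stack([np.ones(n), rng.normal(size=n)])
    eps = rng.standard_t(df=6, size=n) * (1 + 0.5 * np.abs(x[:, 1]))  # heteroskedastic errors
    return x, x @ beta + eps

# Monte Carlo distribution of sqrt(n)(b - beta)
scaled = np.empty((reps, 2))
for r in range(reps):
    x, y = draw(n)
    b = np.linalg.solve(x.T @ x, x.T @ y)
    scaled[r] = np.sqrt(n) * (b - beta)

# Sandwich Sigma_xx^{-1} S Sigma_xx^{-1}, estimated from one large sample
x, y = draw(200_000)
e = y - x @ np.linalg.solve(x.T @ x, x.T @ y)
Sxx_inv = np.linalg.inv(x.T @ x / len(y))
S = (x * (e ** 2)[:, None]).T @ x / len(y)

print(np.cov(scaled.T))           # Monte Carlo covariance of sqrt(n)(b - beta)
print(Sxx_inv @ S @ Sxx_inv)      # should be close to the matrix above
```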

    4. The hint is as good as the answer.

5. As shown in the solution to Chapter 1 Analytical Exercise 5, $SSR_R - SSR_U$ can be written as

$$SSR_R - SSR_U = (Rb - r)'[R(X'X)^{-1}R']^{-1}(Rb - r).$$

Using the restrictions of the null hypothesis,

$$Rb - r = R(b - \beta) = R(X'X)^{-1}X'\varepsilon \quad \text{(since $b - \beta = (X'X)^{-1}X'\varepsilon$)}$$

$$= R\,S_{xx}^{-1}\,\bar g \quad \left(\text{where } \bar g \equiv \frac{1}{n}\sum_{i=1}^n x_i\cdot\varepsilon_i\right).$$

Also $[R(X'X)^{-1}R']^{-1} = n\,[R\,S_{xx}^{-1}R']^{-1}$. So

$$SSR_R - SSR_U = (\sqrt n\,\bar g)'\,S_{xx}^{-1}R'\,(R\,S_{xx}^{-1}R')^{-1}R\,S_{xx}^{-1}\,(\sqrt n\,\bar g).$$

Thus

$$\frac{SSR_R - SSR_U}{s^2} = (\sqrt n\,\bar g)'\,S_{xx}^{-1}R'\,(s^2 R\,S_{xx}^{-1}R')^{-1}R\,S_{xx}^{-1}\,(\sqrt n\,\bar g) = z_n'\,A_n^{-1}\,z_n,$$

where

$$z_n \equiv R\,S_{xx}^{-1}(\sqrt n\,\bar g), \qquad A_n \equiv s^2\,R\,S_{xx}^{-1}R'.$$

By Assumption 2.2, $\operatorname{plim} S_{xx} = \Sigma_{xx}$. By Assumption 2.5, $\sqrt n\,\bar g \to_d N(0, S)$. So by Lemma 2.4(c), we have

$$z_n \to_d N(0,\ R\,\Sigma_{xx}^{-1}S\,\Sigma_{xx}^{-1}R').$$


But, as shown in (2.6.4), $S = \sigma^2\,\Sigma_{xx}$ under conditional homoskedasticity (Assumption 2.7). So the expression for the variance of the limiting distribution above becomes

$$R\,\Sigma_{xx}^{-1}S\,\Sigma_{xx}^{-1}R' = \sigma^2\,R\,\Sigma_{xx}^{-1}R' \equiv A.$$

Thus we have shown:

$$z_n \to_d z$$