Copyright © 2006 Pearson Addison-Wesley. All rights reserved.
Lecture 12: Joint Hypothesis Tests
(Chapter 9.1–9.3, 9.5–9.6)
Today’s Agenda
• Review
• Joint Hypotheses (Chapter 9.1)
• F-tests (Chapter 9.2–9.3)
• Applications of F-tests (Chapter 9.5–9.6)
Review
• Perfect multicollinearity occurs when two or more of your explanators are perfectly linearly related.
• That is, you can write one of your explanators as an exact linear function of other explanators:
X1 = a·X2 + b·X3
Review (cont.)
• OLS breaks down with perfect multicollinearity (and standard errors blow up with near perfect multicollinearity).
• Multicollinearity most frequently occurs when you want to include:
– Time, age, and birth year effects
– A dummy variable for each category, plus a constant
Review (cont.)
• Dummy variables (also called binary variables) take on only the values 0 or 1.
• Dummy variables let you estimate separate intercepts and slopes for different groups.
• To avoid multicollinearity while including a constant, you need to omit the dummy variable for one group (e.g. males or non-Hispanic whites). You want to pick one of the larger groups to omit.
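The omitted-category rule can be sketched in code. This is a minimal sketch with a made-up three-category variable: only two dummy columns enter alongside the constant, because all three dummies together would sum to the constant column.

```python
import numpy as np

# Hypothetical three-category variable; "A" (the largest group) is omitted.
categories = np.array(["A", "B", "A", "C", "B", "A"])

# Design matrix: a constant plus one dummy per NON-omitted category.
const = np.ones(len(categories))
d_b = (categories == "B").astype(float)
d_c = (categories == "C").astype(float)
X = np.column_stack([const, d_b, d_c])

# Adding a dummy for "A" as well would make the dummies sum to the constant
# column -- perfect multicollinearity -- which is why it is left out.
assert np.allclose(d_b + d_c + (categories == "A"), const)
```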
Review (cont.)
Yi = β0 + β1·D_1i + β2·D_2i + β3·Xi + β4·(D_1i·Xi) + β5·(D_2i·Xi) + εi
• β0 is the intercept for the omitted category.
• β0 + β1 is the intercept for the category coded by D_1.
• β0 + β2 is the intercept for the category coded by D_2.
• You can test whether group D_1 has the same intercept as the omitted group by testing H0: β1 = 0.
• β3 is the slope for the omitted category.
• β3 + β4 is the slope for the category coded by D_1.
• β3 + β5 is the slope for the category coded by D_2.
• You can test whether group D_1 has the same slope as the omitted group by testing H0: β4 = 0.
Review (cont.)
• You can multiply 2 variables together to create interaction terms.
• Interaction terms let the slope of each variable depend on the value of the other variable.
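As a quick numerical illustration (the coefficient values here are made up): in Y = β0 + β1·X1 + β2·X2 + β3·X1·X2 + ε, the marginal effect of X1 is β1 + β3·X2, so it moves with the level of X2.

```python
# Hypothetical coefficients for Y = b0 + b1*X1 + b2*X2 + b3*X1*X2 + e.
b1, b3 = 2.0, 0.5

def marginal_effect_x1(x2):
    # dY/dX1 = b1 + b3*X2: the slope on X1 depends on the level of X2.
    return b1 + b3 * x2

print(marginal_effect_x1(0.0))  # 2.0
print(marginal_effect_x1(4.0))  # 4.0
```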
Review (cont.)
Yi = β0 + β1·X1i + β2·X2i + β3·(X1i·X2i) + εi
∂Yi/∂X1i = β1 + β3·X2i
Review (cont.)
• With many of the specifications covered last time, we encountered hypotheses that required us to test multiple conditions simultaneously. For example, to test:
– All categories have the same intercept (with 3 or more categories)
– All categories have the same slope (with 3 or more categories)
– One explanator has no effect on Y, when that explanator has been used in an interaction term
Review (cont.)
• In economics, many processes are non-linear. Economic theory relies heavily on diminishing marginal returns, decreasing returns to scale, etc.
• We want a specification that lets the 50th unit of X have a different marginal effect than the 1st unit of X.
Review (cont.)
• If we regress not
Yi = β0 + β1·Xi + εi
but rather
Yi = β0 + β1·Xi + β2·Xi² + εi
then the marginal benefit of a unit of X changes to
∂Yi/∂Xi = β1 + 2β2·Xi
Yi = β0 + β1·Xi + β2·Xi² + εi
∂Yi/∂Xi = β1 + 2β2·Xi
Review (cont.)
• If β2 > 0, then the marginal impact of X is increasing. If β2 = 0, then X has a constant marginal effect. If β2 < 0, then the marginal impact of X is decreasing.
Review (cont.)
• If β2 > 0 and β3 < 0, then this equation traces an inverted parabola:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
• Earnings increase quickly with experience at first, but then flatten out.
Joint Hypotheses (Chapter 9.1)
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
• To test the hypothesis that experience is not an explanator of log(earnings), you need to test H0: β2 = 0 AND β3 = 0
• WARNING: you CANNOT simply look individually at the t-test for β2 = 0 and the t-test for β3 = 0
Joint Hypotheses (cont.)
• You CANNOT test a JOINT hypothesis by combining multiple t-tests.
• Suppose you are testing H0: β1 = 0 AND β2 = 0
• A t-test rejects β1 = 0 if the data would be very surprising to see, given that β1 = 0. A t-test does NOT reject β1 = 0 if the data would only be pretty surprising.
Joint Hypotheses (cont.)
• Each t-test could fail to reject the null if the data would only be “pretty surprising” under each null, taken one at a time.
• However, it might be “very surprising” to see two “pretty surprising” events.
• We do not know the “size” of a joint test conducted by stacking together many t-tests.
Joint Hypotheses (cont.)
• Another problem with t-tests: suppose X1 and X2 are heavily correlated with each other (though not so much as to create perfect multicollinearity). Then each coefficient will have a large standard error.
Joint Hypotheses (cont.)
• Another problem with t-tests: suppose X1 and X2 are heavily correlated with each other.
• If you remove either variable—leaving in the other—then you lose very little explanatory power. The other variable simply picks up the slack (through the omitted variables bias formula).
Joint Hypotheses (cont.)
• However, to test the null hypotheses that neither variable has explanatory power, we want to consider removing both variables at the same time. The two of them together may share a lot of explanatory power, even if either one could do the job nearly as well as both together.
• We need a new type of test, that lets us consider multiple hypotheses at once.
Joint Hypotheses (cont.)
• Simply including more than one coefficient in the hypothesis does NOT make a joint hypothesis. For example, suppose you believed that X1 and X2 had identical effects. You could test this claim with:
H0: β1 = β2
• This test is a single hypothesis, and can be tested using a t-test. The calculation requires you to know the covariance of the two coefficients. See Chapter 7.5.
Joint Hypothesis (cont.)
• A joint hypothesis tests more than one condition simultaneously. The easiest way to see how many conditions are being tested is to count the number of equal signs.
• E.g. H0: β1 = 0 AND β2 = 0 has two equal signs, so there are two conditions being tested. This is a joint test.
• This hypothesis is often written β1 = β2 = 0
F-tests (Chapter 9.2–9.3)
• How can we test multiple conditions simultaneously?
• Intuition: run a regression normally, and then also run a regression where you assume the conditions are true. See if imposing the conditions makes a big difference.
F-tests (cont.)
• To test the hypothesis that experience is not an explanator of log(earnings), you need to test H0: β2 = 0 AND β3 = 0
• If these conditions are true, then there should be little difference between our “unrestricted” regression
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
and the “restricted” version:
log(earnings)i = β0 + β1·Edi + 0·Expi + 0·Expi² + εi
F-tests (cont.)
• If the conditions we are testing are true, then there should be little difference between our “unrestricted” regression and the “restricted” version.
• What do we mean by “little difference”?
• Does imposing the restrictions we wish to test greatly affect the model’s ability to fit the data?
• We can turn to our measure of fit, R2
F-tests (cont.)
• To measure the difference in the quality of fit before and after we impose the restrictions we are testing, we can turn to our measure of fit, R2
• Notice that the Total Sum of Squares is the same for both versions of the regression, so we can focus on the Sum of Squares of the Residuals.
R² = 1 − Σ(Yi − β̂0 − β̂1·Xi)² / Σ(Yi − Ȳ)² = 1 − SSR/TSS
F-tests (cont.)
• Does imposing the restrictions from our null hypothesis greatly increase the SSR ? (Remember, we want a low SSR.)
• Run both regressions and calculate the SSR.
• Call the SSR for the unrestricted version the SSRu
• Call the SSR for the restricted (constrained) version the SSRc
F-tests (cont.)
• Call the SSR for the unconstrained version the SSRu
• Call the SSR for the constrained version the SSRc
• If the null hypothesis (β2 = β3 = 0) is true, then imposing the restrictions will not change the SSR much. We will have a “small” SSRc − SSRu
SSRu = Σ(log(earnings)i − β̂0 − β̂1·Edi − β̂2·Expi − β̂3·Expi²)²
SSRc = Σ(log(earnings)i − β̂0 − β̂1·Edi)²
F-tests (cont.)
• If the null hypothesis is true, then imposing the restrictions will not change the SSR much. We will have a “small” SSRc − SSRu
• Remember, OLS finds the smallest possible SSR, so SSRc ≥ SSRu
• The more restrictions we impose, the larger SSRc will get, even if the restrictions are true.
• We need to adjust for the number of restrictions (r) we impose.
F-tests (cont.)
• To measure how large an effect our constraints have, look at:
• What constitutes a large difference? We want to compare the difference in SSR to the original SSRu. An increase of 100 units is more worrisome if we start from SSRu = 200 than if SSRu = 20,000.
(SSRc − SSRu) / r
F-tests (cont.)
• The more data we have, the more we trust our unconstrained regression.
• Also, the more data we have, the more seriously we want to take a deterioration in SSR.
• To capture the effect of more data, we weight by n-k-1.
F-tests (cont.)
F = [(SSRc − SSRu)/r] / [SSRu/(n − k − 1)]
F-tests (cont.)
• When the εi are distributed normally, the F-statistic will be distributed according to the F-distribution with r and n−k−1 degrees of freedom.
• We know how to compute an F-statistic from the data.
• We know the distribution of the F-statistic under the null hypothesis.
• The F-statistic meets all the needs of a test statistic.
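The F-statistic is simple enough to compute directly from the two SSRs. A minimal sketch (the function name is ours; the numbers are from the earnings example that appears later in this lecture):

```python
# F = [(SSRc - SSRu)/r] / [SSRu/(n - k - 1)], as on the slide.
def f_stat(ssr_c, ssr_u, r, n, k):
    return ((ssr_c - ssr_u) / r) / (ssr_u / (n - k - 1))

# Earnings example: SSRc = 3959, SSRu = 3844, r = 2, n = 6540, k = 3.
print(round(f_stat(3959, 3844, 2, 6540, 3), 2))  # 97.77
```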
F-tests (cont.)
• If our null hypothesis is true, then imposing the hypothesized values as constraints on the regression should not change SSR much. Under the null, we expect a low value of F.
• If we see a large value of F, then we can build a compelling case against the null hypothesis.
• The F-table tells you the critical values of F for different values of r and n-k-1.
F-tests (cont.)
• Let’s return to the earnings example, with the polynomial specification
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
• To test the hypothesis that experience is not an explanator of log(earnings), you need to test H0: β2 = 0 AND β3 = 0
F-tests (cont.)
H0: β2 = 0 AND β3 = 0
r = 2;  n − k − 1 = 6540 − 3 − 1 = 6536
Unconstrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·Expi² + εi
SSRu = 3844
Constrained Regression:
log(earnings)i = β0 + β1·Edi + vi
SSRc = 3959
F-tests (cont.)
H0: β2 = 0 AND β3 = 0
F = [(SSRc − SSRu)/r] / [SSRu/(n − k − 1)] = [(3959 − 3844)/2] / [3844/6536] ≈ 97.77
The 5% critical value for F(2, 6536) is 3.00. We can reject the null hypothesis that experience is not an explanator of log(earnings).
F-tests (cont.)
• Note: we must be able to impose the restrictions as part of an OLS estimation. We can impose only linear restrictions.
• For example, we CAN test:
β3 = 1/4
β1 = β2
β3 = 0
β1 = 4β2 and β5 = β3 − 3β4
F-tests (cont.)
• However, we CANNOT test:
β1·β2 = β3
β1 = β2/β3
F-tests (cont.)
• Example:
• There are two equal signs. r = 2.
• How do we impose the restrictions?
• How do we enter this regression into the computer?
Y = β0 + β1·X1 + β2·X2 + β3·X3 + β4·X4 + β5·X5 + ε
H0: β1 = 4β2 and β5 = β3 − 3β4
Imposing the restrictions:
Y = β0 + (4β2)·X1 + β2·X2 + β3·X3 + β4·X4 + (β3 − 3β4)·X5 + ε
F-tests (cont.)
• To enter a regression into the computer, we need to regroup so that all our explanators receive a single coefficient apiece.
• We need to transform this expression from one with separated explanators and linear combinations of coefficients to one with separated coefficients and linear combinations of explanators.
F-tests (cont.)
• To find the constrained sum of squares, we need to regress Y on a constant, (4X1+X2), (X3+X5), and (X4 -3X5). The SSR from this regression is our SSRc.
Y = β0 + (4β2)·X1 + β2·X2 + β3·X3 + β4·X4 + (β3 − 3β4)·X5 + ε
Y = β0 + β2·(4X1 + X2) + β3·(X3 + X5) + β4·(X4 − 3X5) + ε
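The regrouping trick can be checked on simulated data. A sketch (the coefficient values, sample size, and seed are arbitrary): generate data that satisfies the null, compute SSRc from the regrouped regressors, and compare it with SSRu from the full regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 5))  # columns are X1..X5

# Simulate data satisfying H0: beta1 = 4*beta2 and beta5 = beta3 - 3*beta4.
b0, b2, b3, b4 = 1.0, 0.5, 2.0, -1.0
y = (b0 + 4*b2*X[:, 0] + b2*X[:, 1] + b3*X[:, 2]
     + b4*X[:, 3] + (b3 - 3*b4)*X[:, 4] + rng.normal(size=n))

# Constrained regression: y on a constant, (4*X1 + X2), (X3 + X5),
# and (X4 - 3*X5), as derived above.
Z = np.column_stack([np.ones(n),
                     4*X[:, 0] + X[:, 1],
                     X[:, 2] + X[:, 4],
                     X[:, 3] - 3*X[:, 4]])
coef_c, _, _, _ = np.linalg.lstsq(Z, y, rcond=None)
ssr_c = np.sum((y - Z @ coef_c) ** 2)

# Unconstrained regression: a constant plus all five X's separately.
W = np.column_stack([np.ones(n), X])
coef_u, _, _, _ = np.linalg.lstsq(W, y, rcond=None)
ssr_u = np.sum((y - W @ coef_u) ** 2)

# OLS guarantees SSRc >= SSRu; under a true null the gap is small.
assert ssr_c >= ssr_u
```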
Y = β0 + β1·X1 + β2·X2 + ε
H0: β1 = −β2
Checking Understanding
• You regress
• You want to test
• What are the constrained and unconstrained regressions? What is r ?
Checking Understanding (cont.)
H0: β1 = −β2
Unconstrained regression:
Y = β0 + β1·X1 + β2·X2 + ε
Constrained regression:
Y = β0 + (−β2)·X1 + β2·X2 + ε = β0 + β2·(X2 − X1) + ε
r = 1
F-tests
• Note: when r = 1, you have a choice between using a t-test or an F-test.
• When r = 1, F = t², so F-tests and t-tests give the same results.
• When r > 1, you cannot use a t-test.
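The r = 1 equivalence can be verified numerically on simulated data (the data-generating values here are arbitrary): compute the t-statistic for β1 and the F-statistic for the single restriction β1 = 0, and check that F = t² exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.3 * x + rng.normal(size=n)

# Unconstrained: y on constant and x. Constrained (H0: beta1 = 0): constant only.
X = np.column_stack([np.ones(n), x])
coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
ssr_u = resid @ resid
ssr_c = np.sum((y - y.mean()) ** 2)

# t-statistic for beta1 from the usual OLS variance formula.
s2 = ssr_u / (n - 2)
var_beta1 = s2 * np.linalg.inv(X.T @ X)[1, 1]
t = coef[1] / np.sqrt(var_beta1)

# F-statistic with r = 1 restriction.
F = ((ssr_c - ssr_u) / 1) / (ssr_u / (n - 2))
assert np.isclose(t ** 2, F)
```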
F-tests (cont.)
• A frequently encountered test is the null hypothesis that all the coefficients (except the constant) are 0. This test asks whether the entire model is useless. Do our explanators do a better job at predicting Y than simply guessing the mean?
• Many econometrics programs automatically calculate this F-statistic when they perform a regression.
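For this particular test, the constrained model is a constant only, so SSRc equals TSS and the statistic can be written in terms of R² with r = k restrictions. A sketch (the R², n, and k values are made up for illustration):

```python
# Overall-significance F: F = (R^2/k) / ((1 - R^2)/(n - k - 1)).
# This follows from SSRc = TSS and R^2 = 1 - SSRu/TSS.
def overall_f(r_squared, n, k):
    return (r_squared / k) / ((1 - r_squared) / (n - k - 1))

# e.g. R^2 = 0.25 with n = 1000 observations and k = 4 explanators:
print(round(overall_f(0.25, 1000, 4), 1))  # 82.9
```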
An Application of F-tests (Chapter 9.5)
• Let’s use F-tests to re-examine the differences in earnings equations between black women and black men in the NLSY.
• Regress the following for black workers:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + εi
• where Edi = years of education, Expi = years of experience, and D_Fi = 1 if the worker is female
An Application of F-tests (cont.)
• To test whether black males and black females have the same intercept, we can use a simple t-test with H0: β3 = 0
• Our estimated coefficient is -0.201 with a standard error of 0.036, yielding a t-statistic of -5.566
• The magnitude of this t-statistic exceeds our critical value of 1.96
• We can reject the null hypothesis at the 5% level
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + εi
TABLE 9.1 Earnings Equation for Black Men and Women (NLSY Data)
An Application of F-tests (cont.)
• We have rejected the null hypothesis that black men and black women have the same intercept.
• Could they also have different slopes for education and experience?
• We can use dummy variable interaction terms.
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + β4·(Edi·D_Fi) + β5·(Expi·D_Fi) + εi
Case: worker is male (D_Fi = 0):
log(earnings)i = β0 + β1·Edi + β2·Expi + εi
Case: worker is female (D_Fi = 1):
log(earnings)i = (β0 + β3) + (β1 + β4)·Edi + (β2 + β5)·Expi + εi
An Application of F-tests (cont.)
• To test the null hypothesis that black men and black women have identical earnings equations, we need to test the joint hypothesis:
H0: β3 = β4 = β5 = 0
An Application of F-tests (cont.)
H0: β3 = β4 = β5 = 0
Unconstrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + β4·(Edi·D_Fi) + β5·(Expi·D_Fi) + εi
SSRu = 1002.75
Constrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + vi
SSRc = 1020.37
r = 3;  n − k − 1 = 1795
An Application of F-tests (cont.)
H0: β3 = β4 = β5 = 0
F = [(SSRc − SSRu)/r] / [SSRu/(n − k − 1)] = [(1020.37 − 1002.75)/3] / [1002.75/1795] ≈ 10.51
The critical value at the 5% significance level for F(3, 1795) is 2.60. We can reject the null hypothesis at the 5% level.
An Application of F-tests (cont.)
• We can reject the null hypothesis that black men and black women have identical earnings functions.
• Do we really need the interaction terms, or do we get the same explanatory power by simply giving black women a different intercept?
• Let’s test the null hypothesis that the interaction coefficients are both 0.
An Application of F-tests (cont.)
H0: β4 = β5 = 0
Unconstrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + β4·(Edi·D_Fi) + β5·(Expi·D_Fi) + εi
SSRu = 1002.75
Constrained Regression:
log(earnings)i = β0 + β1·Edi + β2·Expi + β3·D_Fi + vi
SSRc = 1003.08
r = 2;  n − k − 1 = 1795
An Application of F-tests (cont.)
H0: β4 = β5 = 0
F = [(SSRc − SSRu)/r] / [SSRu/(n − k − 1)] = [(1003.08 − 1002.75)/2] / [1002.75/1795] ≈ 0.30
The critical value at the 5% significance level for F(2, 1795) is 3.00. We fail to reject the null hypothesis at the 5% level.
F-tests and Regime Shifts (Chapter 9.6)
• What is the relationship between Federal budget deficits and long-term interest rates?
• We have time-series data from 1960–1994.
F-tests and Regime Shifts (cont.)
• Our dependent variable is long-term interest rates (LongTermt)
• Our explanators are expected inflation (Inflationt), short-term interest rates (ShortTermt), change in real per-capita income (DeltaInct), and the real per-capita budget deficit (Deficitt).
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct + β4·Deficitt + εt
F-tests and Regime Shifts (cont.)
• Note that we index observations by t, not i.
• β4 is the change in long-term interest rates from a $1 increase in the Federal deficit (measured in 1996 dollars).
• Financial market deregulation began in 1982.
• Was the relationship between long-term interest rates and Federal deficits altered by the deregulation?
F-tests and Regime Shifts (cont.)
• We can let the coefficient on Deficitt vary before and after 1982 by interacting it with a dummy variable.
• Create the variable D_1982t = 1 if the observation is for year 1983 or later
• To test whether the slope on Deficitt changes after 1982, conduct a t-test of the hypothesis H0: β6 = 0
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct + β4·Deficitt + β5·D_1982t + β6·(Deficitt·D_1982t) + εt
F-tests and Regime Shifts (cont.)
• Dependent Variable: LongTermt
• Independent Variables, with standard errors
• Inflationt: 0.765 (0.0372)
• ShortTermt: 0.822 (0.0586)
• DeltaInct: -6.4·10^-6 (1.6·10^-4)
• Deficitt: 0.002 (0.0004)
• D_1982t: -0.739 (0.6608)
• Deficitt·D_1982t: 0.0005 (0.0007)
• Constant: 1.277 (0.2135)
F-tests and Regime Shifts (cont.)
• For the period 1960–1982, the slope on Deficitt is 0.0022. A $1 increase in the Federal deficit per capita increases long-term interest rates by 0.0022 points.
• For the period 1983–1994, the slope on Deficitt is 0.0022 + 0.0005 = 0.0027.
• The t-statistic for Deficitt·D_1982t is 0.63. We fail to reject the null hypothesis that the slope on Deficitt is the same in both periods.
F-tests and Regime Shifts (cont.)
• For the period 1960–1982, the slope on Deficitt is 0.0022. A $1 increase in the Federal deficit per capita increases long-term interest rates by 0.0022 points.
• Is this change important in magnitude? One quick, crude way to assess magnitudes is to ask, “How many standard deviations does Y change when I change X by 1 standard deviation?”
F-tests and Regime Shifts (cont.)
• Standard Deviation of Deficitt = 463
• Standard Deviation of LongTermt = 2.65
• A 1-standard-deviation change in Deficitt is predicted to cause a 463·0.0022 = 1.02 percentage point change in LongTermt, or about a third of a standard deviation
• At first glance, the effect of Federal deficits on interest rates is non-negligible, but not massive, either.
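The back-of-the-envelope magnitude check above, in code (the standard deviations and slope are the slide's numbers):

```python
# Standard deviations and slope from the slides.
sd_deficit, sd_longterm, slope = 463, 2.65, 0.0022
effect = sd_deficit * slope            # predicted change in LongTerm
print(round(effect, 2))                # 1.02 percentage points
print(round(effect / sd_longterm, 2))  # 0.38 standard deviations
```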
F-tests and Regime Shifts (cont.)
• Let’s test a more complicated hypothesis.
• Does the entire financial regime shift after 1982?
• Let’s let every coefficient vary between the two time periods.
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct + β4·Deficitt + β5·D_1982t + β6·(Inflationt·D_1982t) + β7·(ShortTermt·D_1982t) + β8·(DeltaInct·D_1982t) + β9·(Deficitt·D_1982t) + εt
H0: β5 = β6 = β7 = β8 = β9 = 0
F-tests and Regime Shifts (cont.)
• Does the entire financial regime shift after 1982?
• Test the joint hypothesis that every interaction term is 0:
• We need an F-test
• There are 5 equal signs, so r = 5
F-tests and Regime Shifts (cont.)
H0: β5 = β6 = β7 = β8 = β9 = 0
Unconstrained regression:
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct + β4·Deficitt + β5·D_1982t + β6·(Inflationt·D_1982t) + β7·(ShortTermt·D_1982t) + β8·(DeltaInct·D_1982t) + β9·(Deficitt·D_1982t) + εt
Constrained regression:
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct + β4·Deficitt + vt
F-tests and Regime Shifts (cont.)
H0: β5 = β6 = β7 = β8 = β9 = 0
F = [(SSRc − SSRu)/r] / [SSRu/(n − k − 1)] = [(4.93 − 4.15)/5] / [4.15/25] ≈ 0.94
The critical value for an F-test with 5 and 25 degrees of freedom is 2.60. We cannot reject the null hypothesis that there is no regime shift.
F-tests and Regime Shifts (cont.)
• Instead of using dummy variables, we could conduct this same test by running the same regression on 3 separate datasets.
• For the constrained regression (there is no regime shift), we use all the data, 1960–1994.
• For the unconstrained regression (there is a regime shift), we run separate regressions for 1960–1982 and 1983–1994.
F-tests and Regime Shifts (cont.)
LongTermt = β0 + β1·Inflationt + β2·ShortTermt + β3·DeltaInct + β4·Deficitt + εt
Σ(t=1960..1982) et² + Σ(t=1983..1994) et² ≤ Σ(t=1960..1994) et²
F-tests and Regime Shifts (cont.)
• For each regression, we record the SSR
• SSRc is the SSR from the regression for 1960–1994, SSR1960–1994
• SSRu is the sum SSR1960–1982 + SSR1983–1994
• Using these SSR ’s, we can compute F
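This three-regression procedure is a Chow-style test for a regime shift. A minimal sketch (the function names are ours; `X` is assumed to include the constant column):

```python
import numpy as np

def ssr(X, y):
    """SSR from an OLS regression of y on X (X includes the constant)."""
    coef, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((y - X @ coef) ** 2))

def regime_shift_f(X, y, split, k):
    """F-statistic for a full regime shift at observation index `split`,
    with k explanators plus a constant in each regression."""
    n = len(y)
    ssr_c = ssr(X, y)                              # pooled (constrained)
    ssr_u = ssr(X[:split], y[:split]) + ssr(X[split:], y[split:])
    r = k + 1                                      # every coefficient may shift
    return ((ssr_c - ssr_u) / r) / (ssr_u / (n - 2 * (k + 1)))
```

With the lecture's numbers (SSRc = 4.93, SSRu = 2.17 + 1.98, r = 5, and n − 2(k+1) = 35 − 10 = 25 residual degrees of freedom), this formula reproduces F ≈ 0.94.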
F-tests and Regime Shifts (cont.)
SSRu = SSR(1960–1982) + SSR(1983–1994) = 2.17 + 1.98 = 4.15
SSRc = SSR(1960–1994) = 4.93
r = 5;  n − k − 1 = 25
F = [(SSRc − SSRu)/r] / [SSRu/(n − k − 1)] = [(4.93 − 4.15)/5] / [4.15/25] ≈ 0.94
F-tests and Regime Shifts (cont.)
• See Chapter 9.7 for additional tests for regime shifts.
Review
• How can we test multiple conditions simultaneously?
• Intuition: run a regression normally, and then also run a regression where you constrain the parameters to make the null hypothesis true. See if imposing the conditions makes a big difference.
Review (cont.)
• Does imposing the restrictions from our null hypothesis greatly increase the SSR ? (Remember, we want a low SSR.)
• Run both regressions and calculate the SSR
• Call the SSR for the unrestricted version the SSRu
• Call the SSR for the restricted (constrained) version the SSRc
Review (cont.)
• Run both regressions and calculate the SSR
• Call the SSR for the unrestricted version the SSRu
• Call the SSR for the restricted (constrained) version the SSRc
• If the null hypothesis is true, then imposing the restrictions will not change the SSR much. We will have a “small” SSRc − SSRu
Review (cont.)
F = [(SSRc − SSRu)/r] / [SSRu/(n − k − 1)]
Review (cont.)
• When the εi are distributed normally, the F-statistic will be distributed according to the F-distribution with r and n−k−1 degrees of freedom
• We know how to compute an F-statistic from the data
• We know the distribution of the F-statistic under the null hypothesis
• The F-statistic meets all the needs of a test statistic
Review (cont.)
• If our null hypothesis is true, then imposing the hypothesized values as constraints on the regression should not change SSR much. Under the null, we expect a low value of F.
• If we see a large value of F, then we can build a compelling case against the null hypothesis.
• The F table tells you the critical values of F for different values of r and n-k-1.