Week 13 November 24-28 Three Mini-Lectures QMM 510 Fall 2014


TRANSCRIPT

Page 1: Week  13  November  24-28

Week 13 November 24-28

Three Mini-Lectures QMM 510, Fall 2014

Page 2: Week  13  November  24-28

Multicollinearity ML 13.1

What Is Multicollinearity?

• Multicollinearity occurs when the independent variables X1, X2, …, Xm are intercorrelated instead of being independent.

• Collinearity occurs if only two predictors are correlated.

• The degree of multicollinearity is the real concern.

Page 3: Week  13  November  24-28

Variance Inflation

• Multicollinearity induces variance inflation in the estimation of the regression model coefficients.

• This results in wider confidence intervals for the true coefficients β1, β2, …, βk and makes the t statistic less reliable.

• The separate contribution of each predictor in “explaining” the response variable is difficult to identify.

Page 4: Week  13  November  24-28

Correlation Matrix

To check whether two predictors are correlated (collinearity), inspect the correlation matrix using Excel, MegaStat, or MINITAB. (The slide shows an example correlation matrix of the predictors.)
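The deck demonstrates this check with Excel, MegaStat, or MINITAB screenshots; for readers who prefer code, here is a minimal pandas sketch of the same idea (the file name and column names are hypothetical placeholders):

```python
# Minimal sketch: pairwise correlation matrix of the predictors.
import pandas as pd

df = pd.read_csv("exam_scores.csv")      # hypothetical data file
predictors = df[["X1", "X2", "X3"]]      # hypothetical predictor columns
print(predictors.corr().round(3))        # look for large pairwise correlations
```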

Page 5: Week  13  November  24-28

Variance Inflation Factor (VIF)

• The matrix scatter plots and correlation matrix only show correlations between any two predictors.

• The variance inflation factor (VIF) is a more comprehensive test for multicollinearity.

• For a given predictor j, the VIF is defined as

$$\mathrm{VIF}_j = \frac{1}{1 - R_j^2}$$

where $R_j^2$ is the coefficient of determination when predictor j is regressed against all other predictors.
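Statistical packages report VIFs directly; as a sketch, here is one way to compute them in Python with statsmodels (assuming X is a pandas DataFrame holding only the predictor columns):

```python
# Minimal sketch: VIF for each predictor, using statsmodels.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(X: pd.DataFrame) -> pd.Series:
    """VIF for each column of X; each VIF_j = 1 / (1 - R_j^2)."""
    Xc = sm.add_constant(X)   # statsmodels needs an explicit intercept
    return pd.Series({col: variance_inflation_factor(Xc.values, i)
                      for i, col in enumerate(Xc.columns) if col != "const"})
```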


Page 6: Week  13  November  24-28

Variance Inflation Factor (VIF)

Some possible situations are shown in a table on the slide.

Page 7: Week  13  November  24-28

Rules of Thumb

• There is no limit on the magnitude of the VIF.

• A VIF of 10 says that the other predictors “explain” 90% of the variation in predictor j (see the worked inversion after this list).

• A high VIF indicates that predictor j is strongly related to the other predictors.

• However, a high VIF is not necessarily indicative of instability in the least squares estimates.

• A large VIF is a warning to consider whether predictor j really belongs to the model.
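To see where the 90% figure comes from, invert the VIF definition:

$$\mathrm{VIF}_j = \frac{1}{1 - R_j^2} = 10 \quad\Longrightarrow\quad R_j^2 = 1 - \frac{1}{10} = 0.90$$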


Page 8: Week  13  November  24-28

Are Coefficients Stable?

• Evidence of instability is when X1 and X2 have a high pairwise correlation with Y, yet one or both predictors have insignificant t statistics in the fitted multiple regression, and/or when X1 and X2 are positively correlated with Y, yet one has a negative slope in the multiple regression.

• As a test, try dropping a collinear predictor from the regression and see what happens to the fitted coefficients in the re-estimated model (a sketch of this check follows this list).

• If they don’t change much, then multicollinearity is not a concern.

• If there are sharp changes in one or more of the remaining coefficients in the model, then multicollinearity may be causing instability.
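A minimal sketch of this check in Python with statsmodels (variable names and the DataFrame df are hypothetical):

```python
# Minimal sketch: refit without a collinear predictor and compare coefficients.
import statsmodels.formula.api as smf

full    = smf.ols("Y ~ X1 + X2 + X3", data=df).fit()   # hypothetical model
reduced = smf.ols("Y ~ X1 + X3", data=df).fit()        # collinear X2 dropped

print(full.params)     # if the shared coefficients barely move,
print(reduced.params)  # multicollinearity is not a concern
```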


Page 9: Week  13  November  24-28

Tests of Assumptions ML 13.2

Three Important Assumptions

1. The errors are normally distributed.

2. The errors have constant variance (i.e., they are homoscedastic).

3. The errors are independent (i.e., they are nonautocorrelated).

Note: Everything you learned about residual tests in Chapter 12 is still true, except that the residuals are now based on multiple predictors (k ≥ 2). This may affect the degrees of freedom in some tests, but the concepts are basically the same.

Page 10: Week  13  November  24-28

Residual Tests: Non-Normal Errors

Tests for Non-Normal Errors

• Quick test: Make a histogram of residuals. Is it bell-shaped? Outliers?

• For a more precise test, use the normal probability plot. If H0 is true, the residual plot should be linear.

H0: Errors are normally distributed
H1: Errors are not normally distributed

• Non-normality of errors is a mild violation, since the parameter estimates and their variances remain unbiased and consistent.

• Confidence intervals for the parameters may be untrustworthy, because the normality assumption is used to justify using Student’s t.
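A minimal Python sketch of both checks, assuming `model` is a fitted statsmodels regression (the Shapiro-Wilk test stands in here for the formal normality test a package like Minitab would report):

```python
# Minimal sketch: histogram, normal probability plot, and a formal test.
import matplotlib.pyplot as plt
import scipy.stats as st

resid = model.resid                   # residuals from the fitted model

plt.hist(resid, bins=15)              # quick test: bell-shaped? outliers?
plt.title("Histogram of residuals")
plt.show()

st.probplot(resid, plot=plt)          # normal probability (Q-Q) plot:
plt.show()                            # roughly linear if H0 is true

stat, p = st.shapiro(resid)           # formal test of H0: normal errors
print(f"Shapiro-Wilk p = {p:.3f}")    # small p rejects normality
```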

Page 11: Week  13  November  24-28


Residual Tests Example: Normality Test (n = 50, k = 7)

MegaStat’s normal probability plot is somewhat linear, but has some weird values at either end. Possible non-normal residuals?

But a histogram of residuals from Minitab's Stat > Graphical Summary is arguably normal in shape. From its p-value (.24) we would not reject normality. Note that the mean of the residuals is zero, as it must be.

Page 12: Week  13  November  24-28

Residual Tests: Heteroscedastic Errors (Nonconstant Variance)

• Ideally, the error magnitude is constant (i.e., homoscedastic errors). Heteroscedastic errors increase or decrease with X.

• In multiple regression, we have several X’s, so for simplicity we often just plot the n residuals against the fitted Y values.

• In the most common form of heteroscedasticity, the variances of the estimators may be understated, the t statistics overstated, and confidence intervals artificially narrow.

Tests for Heteroscedasticity

• Plot the residuals against the fitted Y values or against each of the k predictors. Ideally, there is no pattern in the residuals moving from left to right.
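A minimal sketch of the residuals-versus-fitted plot in Python (again assuming a fitted statsmodels `model`):

```python
# Minimal sketch: plot residuals against fitted Y values.
import matplotlib.pyplot as plt

plt.scatter(model.fittedvalues, model.resid)
plt.axhline(0, linewidth=1)           # zero centerline
plt.xlabel("Fitted Y")
plt.ylabel("Residual")
plt.title("Look for a fan or funnel pattern (heteroscedasticity)")
plt.show()
```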

Page 13: Week  13  November  24-28

Residual Tests Example: Regression with 7 Predictors

• Plots of residuals against the fitted Y values (shown on the slide in blue) or against each of the k predictors (shown in pink) show evidence of heteroscedasticity. Note that one predictor was binary (X = 0 or 1).

Page 14: Week  13  November  24-28

Residual Tests: Autocorrelated Errors

• Autocorrelation is a pattern of non-independent errors.

• It is of more concern in time-series data, because there the natural order of the data is meaningful (whereas in cross-sectional data the order of observations is often alphabetical or randomized).

• In first-order autocorrelation, $e_t$ is correlated with $e_{t-1}$.

• The estimated variances of the OLS estimators are biased, resulting in confidence intervals that are too narrow, overstating the model’s fit.

Page 15: Week  13  November  24-28

Runs Test for Autocorrelation

• Look at a plot of residuals over time (or by observation). Count the number of sign reversals (i.e., how often does the residual cross the zero centerline?).

• If the pattern is random, the number of sign changes should be near n/2.

• Fewer than n/2 would suggest positive autocorrelation.

• More than n/2 would suggest negative autocorrelation.
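Counting sign reversals is easy to automate; a minimal sketch, assuming `resid` holds the residuals in observation order:

```python
# Minimal sketch: count how often the residual crosses the zero centerline.
import numpy as np

e = np.asarray(resid)                 # residuals in time/observation order
changes = int(np.sum(np.sign(e[1:]) != np.sign(e[:-1])))
print(f"{changes} sign changes vs. about n/2 = {len(e) / 2:.0f} if random")
```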

Durbin-Watson (DW) Test

• Tests for autocorrelation under the hypotheses

H0: Errors are non-autocorrelated
H1: Errors are autocorrelated

• The DW statistic will range from 0 to 4:

DW < 2 suggests positive autocorrelation
DW = 2 suggests no autocorrelation (ideal)
DW > 2 suggests negative autocorrelation
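The statistic itself is $\mathrm{DW} = \sum_{t=2}^{n}(e_t - e_{t-1})^2 / \sum_{t=1}^{n} e_t^2$, and statsmodels computes it directly (a minimal sketch; `resid` as above):

```python
# Minimal sketch: Durbin-Watson statistic from statsmodels.
from statsmodels.stats.stattools import durbin_watson

dw = durbin_watson(resid)   # near 2 suggests no first-order autocorrelation
print(f"DW = {dw:.2f}")
```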

Page 16: Week  13  November  24-28

Residual Tests Example: Autocorrelation Test (n = 50, k = 7)

• The slide shows MegaStat’s plot of residuals by observation (n = 50). Count the number of sign reversals (i.e., how often does the residual cross the zero centerline). If the pattern is random, the number of sign changes should be near n/2 and the Durbin-Watson statistic should be near 2.

28 sign changes (close to n/2 = 50/2 = 25) so there is not much evidence of autocorrelation.

DW is near 2 so there is not much evidence of autocorrelation.

Page 17: Week  13  November  24-28

Residual Tests Example: Excel’s Tests of Assumptions

Warning: Excel offers normal probability plots for residuals, but they are done incorrectly.

Excel’s Data Analysis > Regression does residual plots (test for heteroscedasticity) and gives the DW test statistic. Excel’s standardized residuals are done in a strange way, but usually they are not misleading.

Page 18: Week  13  November  24-28

Residual Tests Example: MegaStat’s Tests of Assumptions

MegaStat will do all three tests (if you check the boxes). Its runs plot (residuals by observation) is a visual test for autocorrelation, which Excel does not offer.

Page 19: Week  13  November  24-28

Other Regression Topics ML 13.3

• Outliers? (omit only if clearly errors)

• Missing Predictors? (usually you can’t tell)

• Ill-Conditioned Data (adjust decimals or take logs)

• Significance in Large Samples? (if n is huge, any regression will be significant)

• Model Specification Errors? (may show up in residual patterns)

• Missing Data? (we may have to live without it)

• Binary Response? (if Y = 0,1 we use logistic regression)

• Stepwise and Best Subsets Regression (MegaStat does these)

Page 20: Week  13  November  24-28

Tests for Nonlinearity and Interaction

Tests for Nonlinearity

• Sometimes the effect of a predictor is nonlinear.

• A simple example would be estimating the volume of lumber to be obtained from a tree.

• To test for suspected nonlinearity of any predictor variable, we can include its square in the regression.

Tests for Interaction

• We can test for interaction between two predictors by including their product in the regression. The model with an x1x2 interaction term:

$$y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2 + \varepsilon$$
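A minimal sketch of both tests with the statsmodels formula API (variable names and df hypothetical): I(X1**2) adds the square of X1 to test for nonlinearity, and X1:X2 adds the product term to test for interaction.

```python
# Minimal sketch: squared term (nonlinearity) and product term (interaction).
import statsmodels.formula.api as smf

model = smf.ols("Y ~ X1 + X2 + I(X1**2) + X1:X2", data=df).fit()
print(model.summary())   # check the t statistics on the added terms
```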

Page 21: Week  13  November  24-28

Tests for Nonlinearity and Interaction (continued; the slide content is graphical)

Page 22: Week  13  November  24-28

Stepwise Regression Example (MegaStat)

Caution: This is basically a data-mining tool that looks only at fit (not at causal logic). Use it only as a check.
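MegaStat’s stepwise option is menu-driven; a loosely analogous sketch in Python uses scikit-learn’s sequential feature selection. Note this is a cross-validated cousin of classical stepwise regression, not the same p-value-based procedure, and X, y, and the feature count are hypothetical:

```python
# Minimal sketch: forward sequential selection of predictors.
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression

sfs = SequentialFeatureSelector(LinearRegression(),
                                direction="forward",
                                n_features_to_select=3)   # arbitrary choice
sfs.fit(X, y)                          # X: DataFrame of predictors, y: response
print(X.columns[sfs.get_support()])    # the predictors the search kept
```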

Page 23: Week  13  November  24-28

Chapter Summary

Page 24: Week  13  November  24-28

Chapter Summary (continued)