Chapter 6: Autocorrelation or Serial Correlation


  • Chapter 6

    Autocorrelation or Serial Correlation

  • Section 6.1

    Introduction

    2

  • Evaluating Econometric Work

    How does an analyst know when the econometric work is completed?

    3

  • Evaluating Econometric Work

    Econometric Results – indeed a term that for many economists conjures up horrifying visions of well-meaning but perhaps marginally skilled, likely nocturnal, individuals sorting through endless piles of computer print-outs. One "final" print-out is then chosen for seemingly mysterious reasons and the rest discarded to be recycled through a local paper processor and another computer printer for other "econometricians" to repeat the process ad infinitum. Besides supplying a lucrative business for the paper recyclers, what useful output, if any, results from such a process? This question lies at the heart of the so-called "science of econometrics as currently applied," a practice which has been called "data-mining", "number crunching", "model sifting", "data grubbing", "fishing", "data massaging", and even "alchemy", among other less palatable terms. All of these euphemisms describe basically the same process: choosing an econometric model based on repeated experimentation with available sample data.

    - Ziemer (1984)

    4

  • Evaluating Econometric Work

    "Econometrics may not have the everlasting charm of Holmesian characters and adventures, or even a famous resident of Baker Street, but there is much in his methodological approach to the solving of criminal cases that is of relevance to applied econometric modeling. Holmesian detection may be interpreted as accommodating the relationship between data, theory, modeling procedures, deductions and inferences, analysis of biases, testing of theories, re-evaluation and reformulation of theories, and finally reaching a solution to the problem at hand. With this in mind, can applied econometricians learn anything from the master of detection?"

    - McAleer (1994)

    5

  • Key Diagnostics

    Diagnostic                             Component of Econometric Model Under Consideration
    Serial Correlation (Autocorrelation)   Error (or Disturbance) Term
    Heteroscedasticity                     Error (or Disturbance) Term
    Collinearity Diagnostics               Explanatory Variables
    Influence Diagnostics                  Observations
    Structural Change                      Structural Parameters

    See Beggs (1988).

    6

  • Section 6.2

    Autocorrelation or

    Serial Correlation

  • Autocorrelation or Serial Correlation

    - Definition
    - Consequences
    - Formal Tests
      – Durbin-Watson Test
      – Nonparametric Runs Test
      – Durbin's h-Test
      – Durbin's m-Test
      – Lagrange Multiplier (LM) Test
      – Box-Pierce Test (Q Statistic)
      – Ljung-Box Test (Q* Statistic)
        (small-sample modification of the Box-Pierce Q Statistic)
    - Solution
      – Generalized Least Squares

    8

  • Formal Definition of Autocorrelation or Serial Correlation

    Autocorrelation or serial correlation refers to the lack of independence of error (or disturbance) terms; the two terms refer to the same phenomenon. Simply put, a systematic pattern exists in the residuals of the econometric model. Ideally, the residuals, which represent a composite of all factors not embedded in the model, should exhibit no pattern. That is to say, the residuals should follow a white-noise (or random) pattern.

    9

  • Prevalence of Serial Correlation

    With the use of time-series data in econometric applications, serial correlation is "public enemy number one." Systematic patterns in the error terms commonly arise due to the (inadvertent) omission of explanatory variables in econometric models. These variables might come from disciplines other than economics, finance, or business; for example, psychology and sociology. Alternatively, these variables might represent factors that simply are difficult to quantify, such as tastes and preferences of consumers or technological innovation on the part of producers.

    10

  • Consequences of Serial Correlation

    - Errors "contaminated" with autocorrelation or serial correlation (Bishop, 1981)

    - Potential of discovering "spurious" relationships due to problems with autocorrelated errors (Granger and Newbold, 1974)

    - Difficulties with structural analysis and forecasting

    - If the error structure is autoregressive, then OLS estimates of the regression parameters are (1) unbiased, (2) consistent, but (3) inefficient in small and in large samples.

    11 continued...

  • Consequences of Serial Correlation

    - The estimates of the standard errors of the coefficients in any econometric model are biased downward if the residuals are positively autocorrelated. They are biased upward if the residuals are negatively autocorrelated.

    - Therefore, the calculated t statistic is biased upward or downward in the opposite direction of the bias in the estimated standard error of that coefficient.

    - Granger and Newbold (1974) further suggest that the econometric results can be defined as "nonsense" if R2 > DW(d).

    12 continued...

  • Consequences of Serial Correlation

    Positive autocorrelation of the errors generally tends to make the estimate of the error variance too small, so confidence intervals are too narrow and null hypotheses are rejected with a higher probability than the stated significance level. Negative autocorrelation of the errors generally tends to make the estimate of the error variance too large, so confidence intervals are too wide; also, the power of significance tests is reduced. With either positive or negative autocorrelation, least-squares parameter estimates usually are not as efficient as generalized least-squares parameter estimates.

    13

  • Regression with Autocorrelated Errors

    Ordinary regression analysis is based on several statistical assumptions. One key assumption is that the errors are independent of each other. However, with time series data, the ordinary regression residuals usually are correlated over time.

    Violation of the independent errors assumption has three important consequences for ordinary regression.
    – First, statistical tests of the significance of the parameters and the confidence limits for the predicted values are not correct.
    – Second, the estimates of the regression coefficients are not as efficient as they would be if the autocorrelation were taken into account.
    – Third, because the ordinary regression residuals are not independent, they contain information that can be used to improve the prediction of future values.

    14

  • Solution to the Serial Correlation Problem: Generalized Least Squares (GLS)

    The AUTOREG procedure solves this problem by augmenting the regression model with an autoregressive model for the random error, thereby accounting for the systematic pattern of the errors. Instead of the usual regression model, the following autoregressive error model is used:

        yt = xt′β + εt
        εt = -φ1εt-1 - φ2εt-2 - ... - φmεt-m + vt
        vt ~ IN(0, σ2)

    The notation vt ~ IN(0, σ2) indicates that each vt is normally and independently distributed with mean 0 and variance σ2.

    15 continued...

  • Solution to the Serial Correlation Problem: Generalized Least Squares (GLS)

    By simultaneously estimating the regression coefficients β and the autoregressive error model parameters φi, the AUTOREG procedure corrects the regression estimates for autocorrelation. Thus, this kind of regression analysis is often called autoregressive error correction or serial correlation correction.

    This technique is also called the use of generalized least squares (GLS).

    16

  • Predicted Values and Residuals

    The AUTOREG procedure can produce two kinds of predicted values and corresponding residuals and confidence limits.

    - The first kind of predicted value is obtained from only the structural part of the model. This predicted value is an estimate of the unconditional mean of the dependent variable at time t.

    - The second kind of predicted value includes both the structural part of the model and the predicted value of the autoregressive error process.

    Both the structural part and the autoregressive error process of the model (termed the full model) are used to forecast future values.

    17 continued...

  • Predicted Values and Residuals

    Use the OUTPUT statement to store predicted values and residuals in a SAS data set and to output other values such as confidence limits and variance estimates.

    � The P= option specifies an output variable to contain the full model predicted values.

    � The PM= option names an output variable for the predicted (unconditional) mean.

    � The R= and RM= options specify output variables for the corresponding residuals, computed as the actual value minus the predicted value.

    18
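The distinction between the structural (PM=) and full-model (P=) predictions can be sketched in Python for a simple model with AR(1) errors; this is a plain illustration of the idea, not PROC AUTOREG itself, and all names and numbers below are hypothetical:

```python
# Sketch: the two kinds of predicted values for y_t = a + b*x_t + e_t with
# AR(1) errors. "structural" corresponds to PM= (unconditional mean); "full"
# corresponds to P= (structural part plus the predicted autoregressive error).
# Coefficients a, b, phi are assumed given, purely for illustration.

def predictions(x, y, a, b, phi):
    """Return (structural, full) predicted values.

    structural[t] = a + b*x[t]
    full[t]       = structural[t] + phi * resid[t-1]
    where resid are the structural (RM=-style) residuals, actual minus
    structural prediction.
    """
    structural = [a + b * xt for xt in x]
    resid = [yt - st for yt, st in zip(y, structural)]
    full = [structural[0]]                 # no lagged residual at t = 0
    for t in range(1, len(x)):
        full.append(structural[t] + phi * resid[t - 1])
    return structural, full

x = [1.0, 2.0, 3.0, 4.0]
y = [2.2, 3.9, 6.1, 7.8]
structural, full = predictions(x, y, a=0.0, b=2.0, phi=0.5)
```

The full-model prediction tracks the data more closely precisely because it uses the information left in the lagged residuals, which is the third consequence noted on the earlier slide.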

  • Serial Correlation

    - Disturbance terms are not independent.

        Yt = β0 + β1X1t + β2X2t + ... + βkXkt + εt,   t = 1, 2, ..., T
        E(εiεj) ≠ 0,   i ≠ j

    - The correlation between εt and εt-k is called an autocorrelation of order k.

    19 continued...

  • Serial Correlation

    Recommend a graphical analysis of plotting the residuals over time to determine the existence of a non-random or systematic pattern.

    [Figure: two plots of residuals against time, one illustrating positive correlation and one illustrating negative correlation]

    20 ...

  • Section 6.3

    Tests for Serial Correlation

  • The Durbin-Watson Test (or d statistic)

    AR(1) process:

        εt = ρεt-1 + vt

        H0: ρ = 0
        H1: ρ ≠ 0

        DW = d = [Σt=2..n (êt - êt-1)²] / [Σt=1..n êt²],   0 ≤ d ≤ 4

        ρ̂ = [Σt=2..n êt êt-1] / [Σt=1..n êt²]

        DW = d ≈ 2(1 - ρ̂), so that ρ̂ ≈ 1 - (d/2)

    If ρ = 0, then d = 2; if ρ = 1, then d = 0; if ρ = -1, then d = 4.

    continued...

  • The Durbin-Watson Test

    - dL, dU depend on α, k, n.
    - DW is invalid for models that contain no intercept and for models that contain lagged dependent variables.
    - The distribution of DW(d) is reported by Durbin and Watson (1950, 1951).

    23 continued...

  • The Durbin-Watson Test: Decision Regions

    0 ≤ d < dL        Reject H0: evidence of positive autocorrelation
    dL ≤ d ≤ dU       Zone of indecision
    dU < d < 4-dU     Accept H0
    4-dU ≤ d ≤ 4-dL   Zone of indecision
    4-dL < d ≤ 4      Reject H0: evidence of negative autocorrelation

    continued...

    24

  • The Durbin-Watson Test

    - The sampling distribution of d depends on the values of the exogenous variables; hence Durbin and Watson derived upper (dU) and lower (dL) limits for the significance levels of d.

    - Tables of the distribution are found in most econometric textbooks.

    - The Durbin-Watson test is perhaps the most widely used procedure in econometric applications.

    25
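The d statistic and the approximation d ≈ 2(1 - ρ̂) can be sketched in a few lines of Python; this is a plain illustration of the formula, not SAS's implementation:

```python
# Durbin-Watson d from a list of OLS residuals:
#   d = sum_{t=2..n} (e_t - e_{t-1})^2 / sum_{t=1..n} e_t^2,  0 <= d <= 4.

def durbin_watson(e):
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(et ** 2 for et in e)
    return num / den

def rho_hat_from_dw(d):
    """First-order autocorrelation implied by d via d ~ 2(1 - rho)."""
    return 1.0 - d / 2.0

# Smoothly trending residuals (positive autocorrelation) push d toward 0;
# sign-alternating residuals (negative autocorrelation) push d toward 4.
d = durbin_watson([1.0, 2.0, 3.0, 4.0])   # 3/30 = 0.1, far below 2
```

With d = 0.1 the implied ρ̂ is about 0.95, which is the textbook picture of strong positive autocorrelation.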

  • The Durbin-Watson Statistic

    26 continued...

  • Appendix G: Statistical Table

    27

  • Limitations of the Durbin-Watson Test

    Although the Durbin-Watson test is the most commonly used test for serial correlation, there are limitations:
    1. The test is for first-order serial correlation only.
    2. The test might be inconclusive.
    3. The test cannot be applied in models with lagged dependent variables.
    4. The test cannot be applied in models without intercepts.

    28

  • Additional Tables

    There are other tables for the DW test that have been prepared to take care of special situations. Some of these are:

    1. R.W. Farebrother (1980) provides tables for regression models with no intercept term.

    2. Savin and White (1977) present tables for the DW test for samples with 6 to 200 observations and for as many as 20 regressors.

    29 continued...

  • Additional Tables

    3. Wallis (1972) gives tables for regression models with quarterly data. Here you want to test for fourth-order autocorrelation rather than first-order autocorrelation. In this case, the DW statistic is

        d4 = [Σt=5..n (ût - ût-4)²] / [Σt=1..n ût²]

    Wallis provides 5% critical values dL and dU for two situations: one where the k regressors include an intercept (but not a full set of seasonal dummy variables) and another where the regressors include four quarterly seasonal dummy variables. In each case the critical values are for testing H0: ρ = 0 against H1: ρ > 0. For the hypothesis H1: ρ < 0, Wallis suggests that the appropriate critical values are (4-dU) and (4-dL). King and Giles (1978) give further significance points for Wallis tests.

    30 continued...

  • Additional Tables

    4. King (1981) gives the 5% points for dL and dU for quarterly time-series data with trend and/or seasonal dummy variables. These tables are for testing first-order autocorrelation.

    5. King (1983) gives tables for the DW test for monthly data. In the case of monthly data, you want to test for twelfth-order autocorrelation.

    31
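The fourth-order (Wallis, quarterly) and twelfth-order (King, monthly) statistics are the same computation as the ordinary d with a different lag. A hedged Python sketch of the general j-th order statistic:

```python
# j-th order Durbin-Watson analogue:
#   d_j = sum_{t=j+1..n} (u_t - u_{t-j})^2 / sum_{t=1..n} u_t^2.
# j=1 gives the usual d, j=4 gives Wallis's quarterly d4, j=12 the monthly case.

def dw_order(u, j):
    num = sum((u[t] - u[t - j]) ** 2 for t in range(j, len(u)))
    den = sum(ut ** 2 for ut in u)
    return num / den

# A residual pattern that repeats every four quarters makes u_t = u_{t-4},
# driving d4 toward 0 (strong fourth-order autocorrelation).
seasonal = [1.0, -1.0, 2.0, -2.0] * 3
d4 = dw_order(seasonal, 4)   # exactly 0 for this perfectly repeating pattern
```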

  • Nonparametric Runs Test (Gujarati, 1978)

    - More general than the DW test. Interest in H0: ρ = 0 (a test of an AR(1) process in the error terms).

      N+ = number of positive residuals
      N- = number of negative residuals
      N  = number of observations
      Nr = number of runs

    - Test statistic:

        E(Nr) = 2N+N-/N
        VAR(Nr) = [2N+N-(2N+N- - N)] / [N²(N-1)]
        Z = (Nr - E(Nr)) / √VAR(Nr) ≈ N(0,1)

    - Reject H0 (non-autocorrelation) if the test statistic is too large in absolute value.

    32

  • The REG Procedure

    Model: MODEL1

    Dependent Variable: lnpcg

    Number of Observations Read 36

    Number of Observations Used 36

    Analysis of Variance

    Sum of Mean

    Source DF Squares Square F Value Pr > F

    33

    Model 6 0.78035 0.13006 150.85

  • Parameter Estimates

    Parameter Standard

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -17.27084 1.71977 -10.04

  • The REG Procedure

    Model: MODEL1

    Dependent Variable: lnpcg

    Durbin-Watson D 0.786

    Pr < DW <.0001
    Pr > DW 1.0000

    Number of Observations 36

    1st Order Autocorrelation 0.601

    NOTE: Pr < DW is the p-value for testing positive autocorrelation, and Pr > DW is the p-value for testing negative autocorrelation.

    35

    In this example, sample evidence exists to suggest the presence of positive serial correlation, which is the more common form of pattern in the residuals with regard to the use of economic or financial data.

  • Obs  year   lnpcgres         Obs  year   lnpcgres

      1  1960   0.017118          19  1978   0.024483
      2  1961   0.010675          20  1979  -0.007387
      3  1962   0.019701          21  1980  -0.039505
      4  1963   0.025370          22  1981  -0.014226
      5  1964  -0.019348          23  1982   0.022968
      6  1965  -0.043199          24  1983   0.042105
      7  1966  -0.045077          25  1984  -0.011793
      8  1967  -0.056039          26  1985  -0.028349
      9  1968  -0.032779          27  1986  -0.007728
     10  1969   0.003223          28  1987   0.016606
     11  1970   0.008482          29  1988  -0.019070
     12  1971   0.021400          30  1989  -0.005426
     13  1972   0.016612          31  1990  -0.008096
     14  1973  -0.015336          32  1991  -0.025680
     15  1974   0.006500          33  1992  -0.015855
     16  1975   0.039297          34  1993   0.013422
     17  1976   0.048282          35  1994   0.007171
     18  1977   0.050095          36  1995   0.001383

    36

  • The Greene Problem

    In the Greene problem for gasoline, DW = 0.786 and ρ̂ = 0.601.

    Use of the Nonparametric Runs Test:

    N = 36    N+ = 19    N- = 17    Nr = 11

        E(Nr) = 2N+N-/N = 17.94
        VAR(Nr) = [2N+N-(2N+N- - N)] / [N²(N-1)] = 8.69
        Z = (Nr - E(Nr)) / √VAR(Nr) = (11 - 17.94)/2.95 = -6.94/2.95 = -2.35

    Zcrit = 1.96 at α = .05, so reject H0: ρ = 0 at α = .05.

    37

  • Analysis Limitations

    - Analysts must recognize that a "good" Durbin-Watson statistic is insufficient evidence upon which to conclude that the error structure is "contamination free" in terms of autocorrelation. The Durbin-Watson test is only applicable for the presence of first-order autocorrelation.

    - There is little reason to suppose that the correct model for the residuals is AR(1). A mixed autoregressive, moving-average (ARMA) structure is much more likely to be correct, especially with quarterly, monthly, and weekly frequencies of time-series data. Modeling of the residuals can be employed following the methodology of Box and Jenkins (1976).

    - Owing to higher frequencies of time-series data used in applied econometrics in recent years, the pattern of the error structure generally is more complex than the common AR(1) pattern.

    38

  • A General Test for Higher-Order Serial Correlation: The LM Test (Breusch and Pagan, 1980)

    LM - Lagrange multiplier

        yt = β0 + β1X1t + ... + βkXkt + ut,   t = 1, 2, ..., n
        ut = ρ1ut-1 + ρ2ut-2 + ... + ρput-p + et,   et ~ IN(0, σ2)

        H0: ρ1 = ρ2 = ... = ρp = 0

    The Xs might or might not include lagged dependent variables.

    1. Estimate the regression by OLS and obtain the least squares residuals ût.

    2. Estimate the auxiliary regression ût = γ0 + γ1X1t + ... + γkXkt + Σi=1..p ρi ût-i + vt.

    3. Test whether the coefficients of the ût-i are all zero. Use the conventional F statistic.

    39

  • Box-Pierce or Ljung-Box Tests

    - Check the serial correlation pattern of the residuals. You must be sure that there is no serial correlation (desire white noise).

    - H0: no pattern in the residuals (the residuals are white noise).

    - Box and Pierce (1970) suggest looking at not only the first-order autocorrelation but autocorrelations of all orders of the residuals.

    - Calculate Q = N Σk=1..m rk², where rk is the autocorrelation of lag k, and N is the number of observations in the series.

    - If the model fitted is appropriate, Q is distributed approximately as χ² with m degrees of freedom.

    - Ljung and Box (1978) suggest a modification of the Q statistic for moderate sample sizes:

        Q* = N(N+2) Σk=1..m rk²/(N-k)

    40

  • Box-Pierce or Ljung-Box Tests

    With the Box-Pierce or Ljung-Box tests, you examine the interface of structural models with time-series models.

    - Use the correlations and partial correlations of the residuals over time. The idea is to determine the appropriate pattern in the error structure from the autocorrelation and partial autocorrelation functions associated with the residuals.

    � Autocorrelation functions tell you about moving average (MA) patterns.

    � Partial autocorrelation functions tell you about autoregressive (AR) patterns.

    � Anticipate ARMA error structures, particularly higher-order AR patterns in residuals of econometric models.

    41

  • Section 6.4

    Sample Problem:

    The Demand for Shrimp

  • The REG Procedure

    Model: MODEL1

    Dependent Variable: QSHRIMP

    Number of Observations Read 97

    Number of Observations Used 97

    Analysis of Variance

    Sum of Mean

    43

    Source DF Squares Square F Value Pr > F

    Model 6 2064.71370 344.11895 19.59

  • Parameter Estimates

    Parameter Standard

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 29.69939 8.11798 3.66 0.0004

    PSHRIMP 1 -0.03223 0.00346 -9.33

  • The REG Procedure

    Model: MODEL1

    Dependent Variable: QSHRIMP

    Durbin-Watson D 2.092

    Pr < DW 0.6443

    Pr > DW 0.3557

    The REG Procedure Output


    Number of Observations 97

    1st Order Autocorrelation -0.049

    NOTE: Pr < DW is the p-value for testing positive autocorrelation, and Pr > DW is the p-value for testing negative autocorrelation.

    45

    Conclusion: No AR(1) pattern in the residuals

  • 8

    12

    16

    RESID

    -8

    -4

    0

    4

    10 20 30 40 50 60 70 80 9046

  • The ARIMA Procedure Output

    Name of Variable = resQSHRIMP
    Mean of Working Series    -12E-16
    Standard Deviation        4.037075
    Number of Observations    97

    The autocorrelation function (ACF): a plot of the correlation of the residuals at various lags, Corr(et, et-k), k = 0, 1, 2, ..., 24.   MA(3) pattern

    Autocorrelations

    Lag   Covariance   Correlation   Std Error
      0    16.297977       1.00000   0
      1    -0.803782       -.04932   0.101535
      2     1.365554       0.08379   0.101781
      3     5.065873       0.31083   0.102490
      4     1.279257       0.07849   0.111786
      5     0.847853       0.05202   0.112353
      6     1.553722       0.09533   0.112601
      7     2.748583       0.16865   0.113430
      8     0.060094       0.00369   0.115986
      9     1.246364       0.07647   0.115988
     10     2.816294       0.17280   0.116506

    47

  • Partial Autocorrelations

    The partial autocorrelation function (PACF): a plot of the correlation of the residuals at various lags after netting out intermittent lags.   AR(3) pattern

    Lag   Correlation
      1    -0.04932
      2     0.08155
      3     0.32156
      4     0.12215
      5     0.01467
      6    -0.01941
      7     0.12325
      8    -0.00334
      9     0.02246
     10     0.09677
     11     0.06154
     12    -0.09759
     13     0.04673
     14    -0.15242
     15    -0.01645
     16    -0.19457
     17     0.08137
     18    -0.07893
     19    -0.07409
     20    -0.01932
     21     0.06747
     22    -0.19432
     23    -0.06713
     24    -0.03305

    48

  • PROC ARIMA Output

    Autocorrelation Check for White Noise

    To    Chi-           Pr >
    Lag   Square   DF   ChiSq    ----------------Autocorrelations----------------
     6     12.70    6   0.0480   -0.049   0.084   0.311   0.078   0.052   0.095
    12     20.18   12   0.0637    0.169   0.004   0.076   0.173   0.056  -0.037
    18     26.80   18   0.0828    0.159  -0.090   0.001  -0.099   0.067  -0.093
    24     36.18   24   0.0527   -0.130   0.065  -0.044  -0.218  -0.051  -0.031

    49

    The Ljung-Box Q* statistic reveals that the residual series is not white noise.

  • Correlogram of RESID

    50

    Presence of MA(3), AR(3) pattern

  • Durbin-Watson Statistics

    Order DW Pr < DW Pr > DW

    1 2.0921 0.6443 0.3557

    2 1.8221 0.1974 0.8026

    3 1.3585 0.0007 0.9993

    NOTE: Pr < DW is the p-value for testing positive autocorrelation, and Pr > DW is the p-value for testing negative autocorrelation.

    Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

    AR(1) 0.2799 0.5967

    AR(2) 0.9989 0.6069

    AR(3) 11.6932 0.0085

    51

    Presence of AR(3) pattern

  • Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 29.6994 8.1180 3.66 0.0004

    PSHRIMP 1 -0.0322 0.003456 -9.33

  • Breusch-Godfrey Serial Correlation LM Test:

    F-statistic 3.975111 Prob. F(3,87) 0.0105 Obs*R-squared 11.69324 Prob. Chi-Square(3) 0.0085

    Test Equation:
    Dependent Variable: RESID
    Method: Least Squares
    Sample: 1 97
    Included observations: 97
    Presample missing value lagged residuals set to zero.

    Coefficient Std. Error t-Statistic Prob.

    C           -0.064917   7.783149   -0.008341   0.9934
    PSHRIMP      0.000324   0.003415    0.094983   0.9245
    PFIN         0.003214   0.013142    0.244517   0.8074
    PSHELL1     -0.002645   0.006911   -0.382712   0.7029
    ADSHRIMP     0.003714   0.007144    0.519862   0.6045
    ADSHELL1     0.002087   0.020725    0.100694   0.9200
    ADFIN       -0.000244   0.004692   -0.052095   0.9586
    RESID(-1)   -0.074724   0.109910   -0.679863   0.4984
    RESID(-2)    0.091318   0.107147    0.852272   0.3964
    RESID(-3)    0.351359   0.106391    3.302520   0.0014

    R-squared 0.120549 Mean dependent var -3.13E-15 Adjusted R-squared 0.029571 S.D. dependent var 4.058047 S.E. of regression 3.997597 Akaike info criterion 5.706646 Sum squared resid 1390.328 Schwarz criterion 5.972081 Log likelihood -266.7724 Hannan-Quinn criter. 5.813975 F-statistic 1.325037 Durbin-Watson stat 2.066485 Prob(F-statistic) 0.235832

    53

    Presence of AR(3) Pattern

  • Correcting for serial correlation through the use of PROC AUTOREG

    Estimates of Autoregressive Parameters

                        Standard
    Lag   Coefficient      Error   t Value
      1      0.071520   0.101517      0.70
      2     -0.096118   0.101283     -0.95
      3     -0.321561   0.101517     -3.17

    Use of Yule-Walker estimates of φ1, φ2, and φ3

    Yule-Walker Estimates

    SSE                1376.71794   DFE                     87
    MSE                  15.82434   Root MSE           3.97798
    SBC                578.680834   AIC             552.933724
    Regress R-Square       0.5618   Total R-Square      0.6224
    Log Likelihood      -551.8663   Observations            97

    54

  • Durbin-Watson Statistics

    Order      DW    Pr < DW    Pr > DW
      1    2.0340     0.5135     0.4865
      2    2.0047     0.5319     0.4681
      3    1.8530     0.2997     0.7003

    NOTE: Pr < DW is the p-value for testing positive autocorrelation, and Pr > DW is the p-value for testing negative autocorrelation.

    Now, no serial correlation exists in the residuals.

    55

  • The AUTOREG Procedure

    Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

    AR(1)    2.4131    0.1203
    AR(2)    2.5993    0.2726
    AR(3)    2.6009    0.4573

    GLS Estimates

    Standard Approx
    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 29.6938 7.2391 4.10

  • Partial Autocorrelations

    Lag   Correlation
      1    -0.049318
      2     0.081553
      3     0.321561

    Preliminary MSE 14.4802

    Starting values of estimates of φ1, φ2, and φ3 in the ML procedure.

    Estimates of Autoregressive Parameters

    Standard

    Lag Coefficient Error t Value

    1 0.071520 0.101517 0.70

    2 -0.096118 0.101283 -0.95

    3 -0.321561 0.101517 -3.17

    Algorithm converged.

    57

  • Maximum Likelihood Estimates

    SSE 1365.10117 DFE 87

    MSE 15.69082 Root MSE 3.96116

    SBC 578.057305 AIC 552.310196

    Regress R-Square 0.5711 Total R-Square 0.6256

    Log Likelihood -266.1551 Observations 97

    Use of the Maximum Likelihood procedure to produce estimates of φ1, φ2, and φ3.

    Durbin-Watson Statistics

    Order DW Pr < DW Pr > DW

    1 2.1042 0.6574 0.3426

    2 2.0571 0.6368 0.3632

    3 1.9762 0.5481 0.4519

    NOTE: Pr < DW is the p-value for testing positive autocorrelation, and Pr > DW is the p-value for testing negative autocorrelation.

    58

  • Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

    AR(1) 2.0021 0.1571

    AR(2) 2.2920 0.3179

    AR(3) 2.2951 0.5135

    Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    ML Estimates

    Intercept 1 30.4400 7.2224 4.21

  • Autoregressive parameters assumed given.

    Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 30.4400 7.1700 4.25

  • Depiction of the Estimated Model for the Qshrimp Problem

    - The estimated model is based on the Maximum Likelihood estimates.

    - Qshrimpt = 30.4400 - .0341*Pshrimpt - .0138*Pfint + .007794*Pshell1t + .001561*Adshrimpt - .0155*Adfint - .003312*Adshell1t + vt

    - vt = -.0152*vt-1 + .1256*vt-2 + .3915*vt-3 + εt

    - MSE = 15.69082 (estimate of the residual variance). This estimate is smaller than the OLS estimate of 17.5656.

    - The total R-square statistic computed from the residuals of the autoregressive model is 0.6256, reflecting the improved fit from the use of past residuals to help predict the next value of Qshrimpt.

    - The Reg Rsq value is 0.5711, which is the R-square statistic for a regression of transformed variables adjusted for the estimated autocorrelation.

    61

  • Comparison of Diagnostic Statistics and Parameter Estimates from the Qshrimp Problem

                          OLS                  Yule-Walker            Maximum Likelihood
    Explanatory    Parameter   Standard    Parameter   Standard    Parameter   Standard
    Variables       Estimate      Error     Estimate      Error     Estimate      Error

    Intercept        29.6994     8.1180      29.6938     7.2391      30.4400     7.2224
    Pshrimp          -0.0322   0.003456      -0.0324   0.003706      -0.0341   0.004134
    Pfin             -0.0273     0.0136      -0.0177     0.0130      -0.0138     0.0135
    Pshell1           0.0159   0.007185     0.009815   0.006848     0.007794   0.007022
    Adshrimp       -0.001014   0.007085     0.000814   0.006389     0.001561   0.006500
    Adfin            -0.0140   0.004805      -0.0151   0.004708      -0.0155   0.004768
    Adshell1        0.008479     0.0208     0.001010     0.0193    -0.003312     0.0196

    62 continued...

  • Comparison of Diagnostic Statistics and Parameter Estimates

                          OLS                  Yule-Walker            Maximum Likelihood
                   Parameter   Standard    Parameter   Standard    Parameter   Standard
                    Estimate      Error     Estimate      Error     Estimate      Error

    MSE              17.5656                15.82434                15.69082
    Root MSE         4.19113                 3.97798                 3.96116
    SBC           578.028031              578.680834              578.057305
    AIC           560.005054              552.933724              552.310196
    Regress R-Square  0.5664                  0.5618                  0.5711
    Total R-Square    0.5664                  0.6224                  0.6256
    Log Likelihood -273.00253               -551.8663                -266.151
    AR 1                  NA     0.07152   0.101517       0.0152     0.1060
    AR 2                  NA   -0.096118   0.101283      -0.1256     0.1037
    AR 3                  NA   -0.321561   0.101517      -0.3915     0.1042
    DW order 1        2.0921                  2.0340                  2.1042
    DW order 2        1.8221                  2.0047                  2.0571
    DW order 3        1.3585                  1.8530                  1.9762

    63

  • Section 6.5

    Sample Problem:

    The Demand for Gasoline

  • Model: MODEL1

    Dependent Variable: lnpcg

    Number of Observations Read 36

    Number of Observations Used 36

    Analysis of Variance

    Sum of Mean

    Source DF Squares Square F Value Pr > F

    Model 6 0.78035 0.13006 150.85

  • Parameter Estimates

    Parameter Standard

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -17.27084 1.71977 -10.04

  • Model: MODEL1

    Dependent Variable: lnpcg

    Durbin-Watson D 0.786

    Pr < DW <.0001
    Pr > DW 1.0000

    Number of Observations 36

    1st Order Autocorrelation 0.601

    The REG Procedure Output

    NOTE: Pr < DW is the p-value for testing positive autocorrelation, and Pr > DW is the p-value for testing negative autocorrelation.

    67

    What is the conclusion regarding serial correlationbased on the Durbin-Watson statistic?

  • .02

    .04

    .06

    RESID

    -.06

    -.04

    -.02

    .00

    1960 1965 1970 1975 1980 1985 1990 199568

• Name of Variable = lnpcgres

    Mean of Working Series 9.68E-16

    Standard Deviation 0.026354

    Number of Observations 36

    Autocorrelations

    Lag Covariance Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1 Std Error

    0 0.00069453 1.00000 | |********************| 0

    1 0.00041753 0.60117 | . |************ | 0.166667

    2 0.00004865 0.07004 | . |* . | 0.218760

    MA(1)

    3 -0.0001150 -.16552 | . ***| . | 0.219382

    4 -0.0001336 -.19242 | . ****| . | 0.222824

    5 -0.0000673 -.09685 | . **| . | 0.227393

    6 0.00004871 0.07014 | . |* . | 0.228536

    7 -8.1403E-7 -.00117 | . | . | 0.229133

    8 -0.0001372 -.19760 | . ****| . | 0.229133

    9 -0.0002564 -.36917 | .*******| . | 0.233818

    69

Statistically significant autocorrelation and partial autocorrelation coefficients
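The sample autocorrelations tabulated above follow the usual formula r_k = Σ(e_t − ē)(e_{t+k} − ē) / Σ(e_t − ē)². A minimal pure-Python sketch (illustrative series, not the SAS implementation):

```python
def sample_acf(x, nlags):
    """r_k = sum_t (x_t - mean)(x_{t+k} - mean) / sum_t (x_t - mean)^2."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    return [sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / c0
            for k in range(1, nlags + 1)]

# A strictly alternating series: r_1 near -1, r_2 near +1.
x = [(-1.0) ** t for t in range(12)]
print([round(r, 3) for r in sample_acf(x, 2)])  # [-0.917, 0.833]
```

The standard errors printed next to each lag come from Bartlett's approximation, which widens as more lags are included.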

  • Partial Autocorrelations

    Lag Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1

    1 0.60117 | . |************ |

    2 -0.45626 | *********| . |

    3 0.09383 | . |** . |

    4 -0.11796 | . **| . |

    AR(1), AR(2)


    5 0.07910 | . |** . |

    6 0.10048 | . |** . |

    7 -0.33312 | *******| . |

    8 -0.02557 | . *| . |

    9 -0.31678 | .******| . |

    70 continued...

  • Autocorrelation Check for White Noise

    To Chi- Pr >

    Lag Square DF ChiSq -----------------Autocorrelations------------

    6 17.68 6 0.0071 0.601 0.070 -0.166 -0.192 -0.097 0.070

Statistically significant Ljung-Box Q* statistic. Thus, the pattern in the residuals is not random, i.e., not white noise.

    71
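The Q* statistic on this slide follows directly from the tabulated autocorrelations. A minimal pure-Python sketch (inputs are the rounded values printed above, so the last digit may differ slightly):

```python
def ljung_box_q(r, n):
    """Ljung-Box Q = n(n+2) * sum_{k=1}^{m} r_k^2 / (n - k); under white
    noise Q is approximately chi-square with m degrees of freedom."""
    return n * (n + 2) * sum(rk ** 2 / (n - k) for k, rk in enumerate(r, 1))

# Residual autocorrelations through lag 6 from the output above, n = 36
r = [0.601, 0.070, -0.166, -0.192, -0.097, 0.070]
print(round(ljung_box_q(r, 36), 2))  # ~17.7, matching the 17.68 reported
```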

  • 72

  • The AUTOREG Procedure

    Dependent Variable lnpcg

    Ordinary Least Squares Estimates

    SSE 0.02500325 DFE 29

    MSE 0.0008622 Root MSE 0.02936

    SBC -134.55346 AIC -145.6381

    Regress R-Square 0.9690 Total R-Square 0.9690

    Normal Test 0.5997 Pr > ChiSq 0.7409

Log Likelihood 79.8190475 Observations 36

    Durbin-Watson Statistics

    Order DW Pr < DW Pr > DW

    1 0.7859

  • Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

    AR(1) 15.0740 0.0001

AR(2) 18.5985 <.0001

    Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -17.2708 1.7198 -10.04

  • The AUTOREG Procedure

    Estimates of Autocorrelations

    Lag Covariance Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1

    0 0.000695 1.000000 | |********************|

    1 0.000418 0.601169 | |************ |

    2 0.000049 0.070040 | |* |

    Partial

Autocorrelations

    1 0.601169

    2 -0.456258

    Preliminary MSE 0.000351

    Estimates of Autoregressive Parameters

    Standard

    Lag Coefficient Error t Value

    1 -0.875457 0.171251 -5.11

    2 0.456258 0.171251 2.66

    75

Estimates of φ1, φ2

    continued...
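The Yule-Walker starting values printed above can be reproduced from the lag-1 and lag-2 sample autocorrelations. A minimal sketch (plain Python, not the SAS code):

```python
# Yule-Walker solution for AR(2) from the sample autocorrelations r1, r2:
#   phi1 = r1*(1 - r2)/(1 - r1^2),  phi2 = (r2 - r1^2)/(1 - r1^2)
# SAS writes the error model as v_t + a1*v_{t-1} + a2*v_{t-2} = e_t, so it
# reports the coefficients with opposite signs (-0.875457 and +0.456258).
r1, r2 = 0.601169, 0.070040
phi1 = r1 * (1 - r2) / (1 - r1 ** 2)
phi2 = (r2 - r1 ** 2) / (1 - r1 ** 2)
print(round(phi1, 4), round(phi2, 4))  # ~0.8755 and ~-0.4563
```

Note that phi2 equals the lag-2 partial autocorrelation (-0.456258), as it must for an AR(2) fit.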

  • Yule-Walker Estimates

    SSE 0.01185535 DFE 27

    MSE 0.0004391 Root MSE 0.02095

    SBC -153.33526 AIC -167.58693

    Regress R-Square 0.9558 Total R-Square 0.9853

    Log Likelihood -27.396156 Observations 36

Durbin-Watson Statistics

    Order DW Pr < DW Pr > DW

    1 1.7378 0.0773 0.9227

    2 2.2225 0.5771 0.4229

    NOTE: Pr<DW is the p-value for testing positive autocorrelation, and Pr>DW is the p-value for testing negative autocorrelation.

    After you take into account this pattern in the residuals, the serial correlation problem no longer is evident.

    76

  • Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

    AR(1) 1.0852 0.2975

    AR(2) 2.3248 0.3127

The AUTOREG Procedure Output

    Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -15.7683 1.8017 -8.75

  • Estimates of Autocorrelations

    Lag Covariance Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1

    0 0.000695 1.000000 | |********************|

    1 0.000418 0.601169 | |************ |

    2 0.000049 0.070040 | |* |

    Partial

    Autocorrelations

1 0.601169

    2 -0.456258

    Preliminary MSE 0.000351

    Estimates of Autoregressive Parameters

    Standard

    Lag Coefficient Error t Value

    1 -0.875457 0.171251 -5.11

    2 0.456258 0.171251 2.66

    78

Starting values in the ML procedure to obtain estimates of φ1, φ2

    continued...

  • Maximum Likelihood Estimates

    SSE 0.01174976 DFE 27

    MSE 0.0004352 Root MSE 0.02086

    SBC -153.49238 AIC -167.74405

    Regress R-Square 0.9580 Total R-Square 0.9854

    Log Likelihood 92.8720272 Observations 36

Durbin-Watson Statistics

    Order DW Pr < DW Pr > DW

    1 1.8195 0.1255 0.8745

    2 2.2171 0.5664 0.4336

NOTE: Pr<DW is the p-value for testing positive autocorrelation, and Pr>DW is the p-value for testing negative autocorrelation.

    79

  • Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

    AR(1) 0.7457 0.3878

    AR(2) 2.0349 0.3615

    Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -15.8652 1.9232 -8.25

  • Autoregressive parameters assumed given.

    Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -15.8652 1.7796 -8.92

• Depiction of the Estimated Model for the Demand for Gasoline Problem (Greene)

    The estimated model is based on the Maximum Likelihood estimates:

    ln pcg_t = -15.8652 + 1.7770 ln y_t - 0.0816 ln pg_t - 0.1494 ln pnc_t + 0.2007 ln puc_t - 0.0808 ln ppt_t - 0.0109 t + v_t

    v_t = 0.9159 v_{t-1} - 0.5194 v_{t-2} + ε_t

    Regress R-Square = 0.9580

    Total R-Square = 0.9854

    MSE = 0.0004352 = σ̂²

    DW order 1 = 1.8195

    DW order 2 = 2.2171

    82
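The estimated AR(2) error process, v_t = 0.9159 v_{t-1} - 0.5194 v_{t-2} + ε_t, implies the quasi-differencing (GLS-style) transform that the Yule-Walker and ML procedures effectively apply to y and to every column of X before re-running OLS. A minimal sketch (plain Python, illustrative only):

```python
def ar2_quasi_diff(z, phi1=0.9159, phi2=-0.5194):
    """Apply z*_t = z_t - phi1*z_{t-1} - phi2*z_{t-2}, the AR(2)
    quasi-differencing transform, losing the first two observations."""
    return [z[t] - phi1 * z[t - 1] - phi2 * z[t - 2] for t in range(2, len(z))]

# Sanity check: a series that follows the AR(2) recursion exactly is
# reduced to (numerical) zero by the transform.
v = [1.0, 0.5]
for _ in range(20):
    v.append(0.9159 * v[-1] - 0.5194 * v[-2])
print(max(abs(x) for x in ar2_quasi_diff(v)))  # essentially 0
```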

• Section 6.6

    A Test for Serial Correlation

    in the Presence of a Lagged

    Dependent Variable

• Durbin’s h-Test

    A large-sample test for autocorrelation when lagged dependent variables are present.

    h = ρ̂ √[ n / (1 − n·V(β̂)) ] ~ N(0,1) asymptotically

    where ρ̂ ≈ 1 − (1/2)d, d is the DW statistic, and β̂ is the coefficient associated with Y_{t−1}.

    AR(1) process in error terms; H0: ρ = 0

    The test breaks down if n·V(β̂) ≥ 1.

    If Durbin's h-test breaks down, compute the OLS residuals û_t. Then regress û_t on û_{t−1}, y_{t−1}, and the set of exogenous variables. The test for ρ = 0 is carried out by testing the significance of the coefficient on û_{t−1}. (Durbin's m-test)

    84

  • Model: MODEL1

    Dependent Variable: lnpcg

    Number of Observations Read 36

    Number of Observations Used 35

    Number of Observations with Missing Values 1

    Analysis of Variance

The REG Procedure Output

    Sum of Mean

    Source DF Squares Square F Value Pr > F

    Model 7 0.68226 0.09747 210.45

  • Parameter Estimates

    Parameter Standard

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -6.81147 2.38667 -2.85 0.0082

    laglnpcg 1 0.63554 0.12456 5.10

  • Model: MODEL1

    Dependent Variable: lnpcg

    Durbin-Watson D 1.639

    Pr < DW 0.0079

    Pr > DW 0.9921

    Number of Observations 35

    The REG Procedure Output


    1st Order Autocorrelation 0.180

NOTE: Pr<DW is the p-value for testing positive autocorrelation, and Pr>DW is the p-value for testing negative autocorrelation.

    87

    The DW statistic reveals positive serial correlation, AR(1) pattern.

  • Name of Variable = lnpcgres

    Mean of Working Series -66E-17

    Standard Deviation 0.018902

    Number of Observations 35

    Autocorrelations

    Lag Covariance Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1 Std Error

0 0.00035727 1.00000 | |********************| 0

    1 0.00006445 0.18040 | . |**** . | 0.169031

    2 -0.0001012 -.28328 | .******| . | 0.174445

    3 -0.0000410 -.11466 | . **| . | 0.187127

    4 -0.0000612 -.17122 | . ***| . | 0.189124

    5 0.00002013 0.05634 | . |* . | 0.193502

    6 0.00007480 0.20936 | . |**** . | 0.193970

    7 0.00002450 0.06858 | . |* . | 0.200323

    8 -0.0000229 -.06416 | . *| . | 0.200992

    88

  • Partial Autocorrelations

    Lag Correlation -1 9 8 7 6 5 4 3 2 1 0 1 2 3 4 5 6 7 8 9 1

    1 0.18040 | . |**** . |

    2 -0.32645 | *******| . |

    3 0.01390 | . | . |

    4 -0.27677 | .******| . |

    5 0.15471 | . |*** . |

    6 0.02360 | . | . |

7 0.07670 | . |** . |

    8 -0.06233 | . *| . |

    From the ACF and PACF plots, no pattern in the residuals is revealed.

    The ARIMA Procedure

    Autocorrelation Check for White Noise

    To Chi- Pr >

    Lag Square DF ChiSq -----------------Autocorrelations-------------

    6 8.24 6 0.2211 0.180 -0.283 -0.115 -0.171 0.056 0.209

The Ljung-Box Q* statistic suggests a white noise residual series.

    89

  • The AUTOREG Procedure

    Dependent Variable lnpcg

    Ordinary Least Squares Estimates

    SSE 0.01250434 DFE 27

    MSE 0.0004631 Root MSE 0.02152

SBC -150.02748 AIC -162.47027

    Regress R-Square 0.9820 Total R-Square 0.9820

    Durbin h 1.5788 Pr > h 0.0572

    Durbin's h-test, at least at the 0.10 level of significance, suggests a non-random residual series.

    Normal Test 1.3963 Pr > ChiSq 0.4975

    Log Likelihood 89.2351336 Observations 35

    Durbin-Watson 1.6391

    Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

AR(1) 1.7213 0.1895

    90 continued...

  • Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -6.8115 2.3867 -2.85 0.0082

    laglnpcg 1 0.6355 0.1246 5.10

  • Partial

    Autocorrelations

    1 0.180402

    Preliminary MSE 0.000346

    The AUTOREG Procedure Output

Starting value of φ in the ML procedure

    Estimates of Autoregressive Parameters

    Standard

    Lag Coefficient Error t Value

    1 -0.180402 0.192898 -0.94

    Algorithm converged. 92

  • Maximum Likelihood Estimates

    SSE 0.00983118 DFE 26

    MSE 0.0003781 Root MSE 0.01945

    SBC -153.04594 AIC -167.04408

    Regress R-Square 0.8412 Total R-Square 0.9858

    Log Likelihood 92.5220382 Observations 35

    Durbin-Watson 1.7302

    Godfrey's Serial Correlation Test

    Alternative LM Pr > LM

    AR(1) 2.3826 0.1227

    93

    GLS (ML) Estimates

    continued...

  • Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -8.3477 2.5227 -3.31 0.0027

    laglnpcg 1 0.2081 0.1299 1.60 0.1213

    lny 1 0.9295 0.2890 3.22 0.0035

    lnpg 1 -0.2157 0.0407 -5.31

  • The AUTOREG Procedure Output

    Autoregressive parameters assumed given.

    Standard Approx

    Variable DF Estimate Error t Value Pr > |t|

    Intercept 1 -8.3477 2.2511 -3.71 0.0010

    laglnpcg 1 0.2081 0.1223 1.70 0.1008

    95

    laglnpcg 1 0.2081 0.1223 1.70 0.1008

    lny 1 0.9295 0.2567 3.62 0.0012

    lnpg 1 -0.2157 0.0373 -5.79

  • Calculation of Durbin’s h-Test for the Greene Problem

    In the Greene Problem for gasoline demand:

n = 35

    ρ̂ = 1 − (1.639/2) = 0.1805

    V(β̂) = (0.12456)² = 0.0155

    h = 0.1805 √[ 35 / (1 − 35(0.0155)) ] = 1.5788

    96
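The hand calculation on this slide can be checked with a short sketch (plain Python; n, d, and the standard error of the laglnpcg coefficient are taken from the SAS output above):

```python
import math

# Durbin's h for the Greene gasoline-demand problem (slide values)
n = 35
d = 1.639                      # Durbin-Watson statistic
rho_hat = 1 - d / 2            # ~0.1805
v_beta = 0.12456 ** 2          # squared std. error of the laglnpcg coefficient
h = rho_hat * math.sqrt(n / (1 - n * v_beta))
print(round(rho_hat, 4), round(h, 3))  # ~0.1805 and ~1.58 (SAS reports 1.5788)
```

The small discrepancy in the last digit comes from rounding d and the standard error before the calculation.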

  • Analysis of Variance

    Sum of Mean

    Source DF Squares Square F Value Pr > F

    Model 7 0.00061193 0.00008742 0.19 0.9848

    Error 26 0.01189 0.00045736

    Corrected Total 33 0.01250

    Root MSE 0.02139 R-Square 0.0489

    Dependent Mean -0.00002950 Adj R-Sq -0.2071

    Coeff Var -72488

    Parameter Estimates

    Parameter Standard

    Variable Label DF Estimate Error t Value Pr > |t|

    Intercept Intercept 1 -1.10845 2.12226 -0.52 0.6059

    lnpcgres1 1 0.26893 0.23259 1.16 0.2581

    laglnpcg 1 -0.08946 0.13979 -0.64 0.5278

    lny 1 0.12258 0.23732 0.52 0.6098

    lnpg 1 0.00625 0.02344 0.27 0.7918

    lnpnc 1 -0.02989 0.05719 -0.52 0.6056

    lnppt 1 -0.00223 0.06428 -0.03 0.9726

t 1 0.00038584 0.00450 0.09 0.9324

    97

• Section 6.7

    Summary Remarks about the

    Issue of Serial Correlation

• Final Considerations

    With time-series data, in most cases this problem will surface.

Analysts must examine the error structure carefully.

Minimally do the following:

    – Graph the residuals over time.

    – Consider the significance of the Durbin-Watson statistic.

– Consider higher-order autocorrelation structure via PROC ARIMA.

    – Consider the Godfrey LM test.

    – Consider the Box-Pierce or Ljung-Box tests (Q statistics).

Re-estimate econometric models with AR(p) error structures via PROC AUTOREG.

    – Use the Yule-Walker or Maximum Likelihood method to obtain estimates of the AR(p) error structure.

    – A preference exists for the Maximum Likelihood method.

    99