Page 1

11. Multiple Regression

• y – response variable

x1, x2, …, xk – a set of explanatory variables

In this chapter, all variables are assumed to be quantitative. Chapters 12-14 show how to also incorporate categorical variables in a regression model.

Multiple regression equation (population):

E(y) = α + β1x1 + β2x2 + … + βkxk

Page 2

Parameter Interpretation

α = E(y) when x1 = x2 = … = xk = 0.

β1, β2, …, βk are called partial regression coefficients.

Controlling for the other predictors in the model, there is a linear relationship between E(y) and x1 with slope β1;

i.e., if x1 goes up 1 unit with the other x's held constant, the change in E(y) is

[α + β1(x1 + 1) + β2x2 + … + βkxk] − [α + β1x1 + β2x2 + … + βkxk] = β1.

Page 3

Page 4

Prediction equation

• With sample data, we get “least squares” estimates of parameters by minimizing

SSE = sum of squared prediction errors (residuals)

= Σ(observed y − predicted y)²

to get a sample prediction equation

ŷ = a + b1x1 + b2x2 + … + bkxk
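A minimal sketch of this least-squares fit in Python (assuming numpy is available and that y, x1, x2, … are the data columns, already loaded as numpy arrays):

```python
import numpy as np

def fit_ols(y, *xs):
    """Least-squares estimates (a, b1, ..., bk), minimizing
    SSE = sum of squared prediction errors (residuals)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))  # intercept column + predictors
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)      # minimizes ||y - Xb||^2
    return coefs
```

For the mental impairment data introduced below, fit_ols(y, x1, x2) should return values close to the slides' estimates (28.23, 0.103, −0.097).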

Page 5

Example: Mental impairment study

• y = mental impairment (summarizes extent of psychiatric symptoms, including aspects of anxiety and depression, based on questions in “Health opinion survey” with possible responses hardly ever, sometimes, often)

Ranged from 17 to 41 in sample, mean = 27, s = 5.

• x1 = life events score (composite measure of the number and severity of life events in the previous 3 years). Ranges from 0 to 100, sample mean = 44, s = 23.

• x2 = socioeconomic status (composite index based on occupation, income, and education). Ranges from 0 to 100, sample mean = 57, s = 25.

Data (n = 40) at www.stat.ufl.edu/~aa/social/data.html and p. 327 of text

Page 6

Other explanatory variables in study (not used here) include age, marital status, gender, race

• Bivariate regression analyses give prediction equations:

ŷ = 23.3 + 0.090x1 and ŷ = 32.2 − 0.086x2

• Correlation matrix

Page 7

Prediction equation for multiple regression analysis is:

ŷ = 28.23 + 0.103x1 − 0.097x2

Page 8

Predicted mental impairment:

• increases by 0.103 for each 1-unit increase in life events, controlling for SES.

• decreases by 0.097 for each 1-unit increase in SES, controlling for life events.

(e.g., decreases by 9.7 when SES goes from minimum of 0 to maximum of 100, which is relatively large since sample standard deviation of y is 5.5)

Page 9

• Can we compare the estimated partial regression coefficients to determine which explanatory variable is “most important” in the predictions?

• These estimates are unstandardized and so depend on units.

• “Standardized coefficients” presented in multiple regression output refer to partial effect of a standard deviation increase in a predictor, keeping other predictors constant. (Sec. 11.8).

• In bivariate regression, standardized coefficient = correlation. In multiple regression, std. coeff. relates algebraically to “partial correlations” (Sec. 11.7).

• We skip or only briefly cover Sec. 11.7, 11.8 (lack of time), but I’ve included notes at end of this chapter on these topics.

Page 10

Predicted values and residuals

• One subject in the data file (p. 327) has

y = 33, x1 = 45 (near mean), x2 = 55 (near mean). This subject has predicted mental impairment

ŷ = 28.23 + 0.103(45) − 0.097(55) = 27.5 (near mean)

The prediction error (residual) is 33 − 27.5 = 5.5; i.e., this person has mental impairment 5.5 higher than predicted, given his/her values of life events and SES.

SSE = 768.2 is smaller than SSE for either bivariate model, or for any other linear equation with predictors x1, x2.
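As a quick arithmetic check in Python, using the coefficients from the prediction equation:

```python
a, b1, b2 = 28.23, 0.103, -0.097   # estimates from the prediction equation
y_hat = a + b1 * 45 + b2 * 55      # predicted impairment, about 27.5
residual = 33 - y_hat              # about 5.5
```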

Page 11

Comments

• Partial effects in multiple regression refer to controlling other variables in model, so differ from effects in bivariate models, which ignore all other variables.

• Partial effect of x1 (controlling for x2) is the same as the bivariate effect of x1 when the correlation between x1 and x2 is 0 (as is true in most designed experiments).

• Partial effect of a predictor in this multiple regression model is identical at all fixed values of the other predictors in the model.

Example: ŷ = 28.23 + 0.103x1 − 0.097x2

At x2 = 0: ŷ = 28.23 + 0.103x1 − 0.097(0) = 28.23 + 0.103x1

At x2 = 100: ŷ = 28.23 + 0.103x1 − 0.097(100) = 18.5 + 0.103x1

Page 12

• This parallelism means that this model assumes no interaction between predictors in their effects on y. (i.e., effect of x1 does not depend on value of x2)

• Model is inadequate if, in reality, the effect of x1 on E(y) changes as the value of x2 changes, e.g.:

(insert graph)

• The model E(y) = α + β1x1 + β2x2 + … + βkxk

is equivalently expressed as

y = α + β1x1 + β2x2 + … + βkxk + ε

where ε = y − E(y) is the "error," having E(ε) = 0; ε is the population analog of the residual e = y − ŷ.

Page 13

Graphics for multiple regression

• Scatterplot matrix: scatterplot for each pair of variables

Page 14

• Partial regression plots: One plot for each predictor, shows its partial effect controlling for other predictors

Example: With two predictors, show the partial effect of x1 on y (i.e., controlling for x2) by using residuals after

• regressing y on x2

• regressing x1 on x2

The partial regression plot is a scatterplot with the residuals from regressing y on x2 on the vertical axis and the residuals from regressing x1 on x2 on the horizontal axis.

The prediction equation for these points has the same slope as the effect of x1 in the prediction equation for the multiple regression model.
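A sketch of how the partial regression plot could be constructed (assuming y, x1, x2 are numpy arrays and matplotlib is installed):

```python
import numpy as np
import matplotlib.pyplot as plt

def resid(v, x):
    """Residuals from the bivariate regression of v on x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    b, *_ = np.linalg.lstsq(X, v, rcond=None)
    return v - X @ b

e_y = resid(y, x2)     # residuals from regressing y on x2
e_x1 = resid(x1, x2)   # residuals from regressing x1 on x2

plt.scatter(e_x1, e_y)                  # the least-squares slope through these
plt.xlabel("residuals of x1 given x2")  # points equals b1 from the multiple
plt.ylabel("residuals of y given x2")   # regression of y on x1 and x2
plt.show()
```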

Page 15

Page 16

Page 17

Multiple correlation and R2

• How well do the explanatory variables in the model predict y, using the prediction equation?

• The multiple correlation, denoted by R, is the correlation between the observed y-values and predicted values

from the prediction equation.

i.e., it is the ordinary correlation between y and an artificial variable whose values for the n subjects in the sample are the predicted values from the prediction equation.

ŷ = a + b1x1 + b2x2 + … + bkxk

Page 18

Example: Mental impairment predicted by life events and SES

The multiple correlation is the correlation between the

n = 40 pairs of observed y and predicted y values:

Subject y Predicted y

1 17 24.8 = 28.23 + 0.103(46) – 0.097(84)

2 19 22.8 = 28.23 + 0.103(39) – 0.097(97)

3 20 28.7 = 28.23 + 0.103(27) – 0.097(24)

……

Software reports R = 0.58

(bivariate correlations with y were 0.37 for x1, -0.40 for x2)

ŷ = 28.23 + 0.103x1 − 0.097x2

Page 19

• The coefficient of multiple determination R² is the proportional reduction in error obtained by using the prediction equation to predict y, instead of using ȳ to predict y.

Example:

Predictor(s)    TSS       SSE       R²
x1              1162.4    1001.4    0.14
x2              1162.4    977.7     0.16
x1, x2          1162.4    768.2     0.34

For the multiple regression model,

R² = (TSS − SSE)/TSS = [Σ(y − ȳ)² − Σ(y − ŷ)²] / Σ(y − ȳ)²

= (1162.4 − 768.2)/1162.4 = 0.339

Page 20

Software provides an ANOVA table with the sums of squares used in R-squared and a Model Summary table with values of R and R-squared.
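The same computation sketched in Python (assuming y holds the observed responses and y_hat the predicted values as numpy arrays):

```python
import numpy as np

def r_squared(y, y_hat):
    tss = np.sum((y - np.mean(y)) ** 2)  # total sum of squares
    sse = np.sum((y - y_hat) ** 2)       # sum of squared residuals
    return (tss - sse) / tss
# With the slide's sums of squares: (1162.4 - 768.2) / 1162.4 = 0.339.
```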

Page 21

• There is a 34% reduction in error when we use life events and SES together to predict mental impairment (via the prediction equation), compared to using ȳ to predict mental impairment.

• This is sometimes expressed as “34% of the variation in mental impairment is explained by life events and SES.”

• The multiple correlation is R = √R² = √0.339 = 0.582

= the correlation between the 40 values of y and the 40 corresponding predicted y-values from the prediction equation for the multiple regression model.

Page 22

Properties of R and R2

• 0 ≤ R² ≤ 1

• 0 ≤ R ≤ 1, since R = √R² (i.e., R can't be negative)

• The larger their values, the better the set of explanatory variables predicts y

• R² = 1 when observed y = predicted y for every subject, so SSE = 0

• R² = 0 when all predicted y = ȳ, so TSS = SSE. When this happens, b1 = b2 = … = bk = 0 and the correlation r = 0 between y and each predictor.

• R² cannot decrease when predictors are added to the model

• With a single predictor, R² = r², R = |r|

Page 23

• The numerator of R2, which is TSS – SSE, is called the regression sum of squares. This represents the variability in y “explained” by the model.

• R2 is additive (i.e., it equals the sum of r2 values from bivariate regressions of y with each x) when each pair of explanatory variables is uncorrelated; this is true in many designed experiments, but we don’t expect it in observational studies.

• Sample R2 tends to be a biased estimate (upwards) of population value of R2 , more so for small n (e.g., extreme case -- consider n = 2 in bivariate regression!) Software also reports adjusted R2, a less biased estimate (p. 366, Exer. 11.61)

• In observational studies with many predictors, explanatory var’s are often sufficiently correlated that we can predict one of them well using the others – called multicollinearity – consider later.

(picture)

Page 24

Inference for multiple regression

Based on the assumptions:

• Model E(y) = α + β1x1 + β2x2 + … + βkxk is (nearly) correct

• Population conditional distribution of y is normal at each combination of predictor values

• Standard deviation σ of the conditional distribution of responses on y is the same at each combination of predictor values (the estimate s of σ is the square root of MSE; it is what SPSS calls "Std. error of the estimate" in the Model Summary table!)

• Sample is randomly selected

Two-sided inference about parameters is robust to normality and common σ assumptions

Page 25

Collective influence of explanatory var’s

• To test whether explanatory variables collectively have effect on y, we test

H0: β1 = β2 = … = βk = 0

(i.e., y independent of all the explanatory variables)

Ha: at least one βi ≠ 0

(at least one explanatory variable has an effect on y, controlling

for the others in the model)

Equivalent to testing

H0: population multiple correlation = 0 (or popul. R2 = 0)

vs. Ha: population multiple correlation > 0

Page 26

• Test statistic (with k explanatory variables)

• When H0 true, F values follow the F distribution (R. A. Fisher)

• For given n, larger R gives larger F test statistic, more evidence against null hypothesis.

• Since larger F gives stronger evidence against null, P-value = right-tail probability above observed value

F = (R²/k) / [(1 − R²)/(n − (k + 1))]

Page 27

Properties of F distribution

• F can take only nonnegative values

• Distribution is skewed right

• Exact shape depends on two df values:

df1 = k (number of explanatory variables in model)

df2 = n – (k + 1) (sample size – no. model parameters)

• Mean is approximately 1 (closer to 1 for large n)

• F tables report F-scores for right-tail probabilities such as 0.05, 0.01, 0.001 (one table for each tail prob.)

Page 28

Example: Is mental impairment independent of life events and SES?

H0: β1 = β2 = 0

(i.e., y independent of x1 and x2)

Ha: β1 ≠ 0 or β2 ≠ 0, or both

Test statistic:

F = (R²/k) / [(1 − R²)/(n − (k + 1))] = (0.339/2) / [(1 − 0.339)/(40 − (2 + 1))] = 9.5

df1 = 2, df2 = 37, P-value = 0.000 (i.e., P < 0.001). (From the F table, F = 8.4 has P-value = 0.001.)
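The same calculation and its P-value, sketched in Python (assuming scipy is available):

```python
from scipy.stats import f

R2, k, n = 0.339, 2, 40
F = (R2 / k) / ((1 - R2) / (n - (k + 1)))  # about 9.5
p_value = f.sf(F, k, n - (k + 1))          # right-tail probability, below 0.001
```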

Page 29

• Software provides ANOVA table with result of F test about all regression parameters

Page 30

• There is very strong evidence that at least one of the explanatory variables is associated with mental impairment.

• Alternatively, can calculate F as ratio of mean squares from the ANOVA table.

Example: F = 197.12/20.76 = 9.5

• When there is only k = 1 predictor, F is the square of the t statistic t = b/se for testing H0: β = 0.

(If t is a statistic having a t distribution, then t² has the F distribution with df1 = 1, df2 = df for the t statistic.)

F = (r²/1) / [(1 − r²)/(n − 2)] = t²

Page 31

Inferences for individual regression coefficients (Need all predictors in model?)

• To test the partial effect of xi, controlling for the other explanatory variables in the model, test H0: βi = 0 using test statistic

t = (bi − 0)/se, df = n − (k + 1)

which is df2 from the F test (and in the df column of the ANOVA table, in the Residual row)

• CI for βi has the form bi ± t(se), with the t-score from the t table also having df = n − (k + 1), for the desired confidence level

• Software provides estimates, standard errors, t test statistics, P-values for tests (2-sided by default)

Page 32

• In SPSS, check “confidence intervals” under “Statistics” in Linear regression dialog box to get CI’s for regression parameters (95% by default)

Page 33

Example: Effect of SES on mental impairment, controlling for life events

• H0: β2 = 0, Ha: β2 ≠ 0

Test statistic t = b2/se = -0.097/0.029 = -3.35,

df = n - (k + 1) = 40 – 3 = 37.

Software reports P-value = 0.002

Conclude there is very strong evidence that SES has a negative effect on mental impairment, controlling for life events. (We would reject H0 at standard significance levels, such as 0.05.)

Likewise for the test of H0: β1 = 0 (P-value = 0.003), but life events has a positive effect on mental impairment, controlling for SES.

Page 34

A 95% CI for β2 is b2 ± t(se), which is −0.097 ± 2.03(0.029), or (−0.16, −0.04)

• This does not contain 0, in agreement with rejecting H0 for two-sided Ha at 0.05 significance level

• Perhaps simpler to interpret corresponding CI of (-16, -4) for the change in mean mental impairment

for an increase of 100 units in SES (from minimum of 0 to maximum of 100).

(relatively wide CI because of relatively small n = 40)

Why bother with F test? Why not go right to the t tests?
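A sketch of the t test and CI computations from the last two slides, in Python (assuming scipy; b2 and its se come from the regression output):

```python
from scipy.stats import t

b2, se, n, k = -0.097, 0.029, 40, 2
df = n - (k + 1)                           # 37
t_stat = b2 / se                           # about -3.35
p_value = 2 * t.sf(abs(t_stat), df)        # two-sided P-value, about 0.002
t_crit = t.ppf(0.975, df)                  # about 2.03 for a 95% CI
ci = (b2 - t_crit * se, b2 + t_crit * se)  # about (-0.16, -0.04)
```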

Page 35

A caution: “Overlapping variables” (multicollinearity)

• It is possible to get a small P-value in the F test of H0: β1 = β2 = … = βk = 0, yet not get a small P-value for any of the t tests of the individual H0: βi = 0.

• Likewise, it is possible to get a small P-value in a bivariate test for a predictor but not for its partial test controlling for other variables.

• This happens when the partial variability explained uniquely by a predictor is small. (i.e., each xi can be predicted well using the other predictors) (picture)

Example (purposely absurd): y = height

x1 = length of right leg, x2 = length of left leg

Page 36

• When multicollinearity occurs,

• se values for individual bi may be large (and individual t statistics not significant)

• R² may be nearly as large when some predictors are dropped from the model

It is advisable to simplify the model by dropping some "nearly redundant" explanatory variables.

Page 37

Modeling interaction between predictors

• Recall that the multiple regression model

E(y) = α + β1x1 + β2x2 + … + βkxk

assumes the partial slope relating y to each xi is the same at all values of other predictors (i.e., assumes “no interaction” between pairs of predictors)

(recall picture showing parallelism)

For a model allowing interaction between x1 and x2 the effect of x1 may change as x2 changes.

Page 38

Simplest interaction model: Introduce cross product terms for predictors

Ex: k = 2 explanatory variables: E(y) = α + β1x1 + β2x2 + β3(x1x2)

is a special case of the multiple regression model

E(y) = α + β1x1 + β2x2 + β3x3

with x3 = x1x2 (create x3 in the "transform" menu with the "compute variable" option in SPSS)

Example: For mental impairment data, we get

ŷ = 26.0 + 0.156x1 − 0.060x2 − 0.00087x1x2
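A sketch of how this interaction fit could be reproduced (assuming y, x1, x2 are numpy arrays holding the data columns):

```python
import numpy as np

X = np.column_stack([np.ones(len(y)), x1, x2, x1 * x2])  # last column is x3 = x1*x2
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
a, b1, b2, b3 = coefs  # slide reports 26.0, 0.156, -0.060, -0.00087
```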

Page 39

SPSS output for interaction model

(need more decimal places! Highlight table and repeatedly click on the value.)

Page 40

Fixed x2    Prediction equation for ŷ and x1
0           ŷ = 26.0 + 0.156x1 − 0.060(0) − 0.00087x1(0) = 26.0 + 0.16x1
50          ŷ = 26.0 + 0.156x1 − 0.060(50) − 0.00087x1(50) = 23.0 + 0.11x1
100         ŷ = 26.0 + 0.156x1 − 0.060(100) − 0.00087x1(100) = 20.0 + 0.07x1

The higher the value of SES, the weaker the relationship between y = mental impairment and x1 = life events (plausible for these variables)

(picture)
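The intercepts and slopes in the table can be generated directly from the fitted coefficients, e.g. in Python:

```python
a, b1, b2, b3 = 26.0, 0.156, -0.060, -0.00087
for x2 in (0, 50, 100):
    intercept = a + b2 * x2   # 26.0, 23.0, 20.0
    slope = b1 + b3 * x2      # about 0.16, 0.11, 0.07
    print(f"x2 = {x2}: y-hat = {intercept:.1f} + {slope:.2f} x1")
```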

Page 41

Comments about interaction model

• Note that E(y) = α + β1x1 + β2x2 + β3x1x2

= (α + β2x2) + (β1 + β3x2)x1

i.e., E(y) is a linear function of x1:

E(y) = (constant with respect to x1) + (coefficient of x1)x1

where the coefficient of x1 is (β1 + β3x2).

For each fixed x2, the slope of the relationship between E(y) and x1 depends on the value of x2.

• To model interaction with k > 2 explanatory variables, take cross product for each pair; e.g., k = 3:

E(y) = α + β1x1 + β2x2 + β3x3 + β4x1x2 + β5x1x3 + β6x2x3

Page 42

• To test H0: no interaction in the model E(y) = α + β1x1 + β2x2 + β3x1x2, test H0: β3 = 0 using test statistic

t = b3/se.

Example: t = −0.00087/0.0013 = −0.67, df = n − 4 = 36.

P-value = 0.51 for Ha: β3 ≠ 0

Insufficient evidence to conclude that interaction exists. (It is significant for the entire data set, with n > 1000.)

• With several predictors, often some interaction terms are needed but not others. E.g., could end up using a model such as E(y) = α + β1x1 + β2x2 + β3x3 + β4x1x2

• Be careful not to misinterpret "main effect" terms when there is interaction between them in the model.

Page 43

Comparing two regression models

• How to test whether a model gives a better fit than a simpler model containing only a subset of the predictors?

Example: Compare

E(y) = α + β1x1 + β2x2 + β3x3 + β4x1x2 + β5x1x3 + β6x2x3

to

E(y) = α + β1x1 + β2x2 + β3x3

to test H0: no interaction by testing H0: β4 = β5 = β6 = 0.

Page 44

• An F test compares the models by comparing their SSE values, or equivalently, their R2 values.

• The more complex (“complete”) model is better if its SSE is sufficiently smaller (or equivalently if its R2 value is sufficiently larger) than the SSE (or R2) value for the simpler (“reduced”) model.

• Denote the SSE values for the complete and reduced models by SSEc and SSEr. Denote the R² values by R²c and R²r.

• The test statistic for comparing the models is

F = [(SSEr − SSEc)/df1] / [SSEc/df2] = [(R²c − R²r)/df1] / [(1 − R²c)/df2]

df1 = number of extra parameters in the complete model,

df2 = n − (k + 1) = df2 for the F test that all terms in the complete model = 0

(e.g., df2 = n − 7 for the model above)

Page 45

Example: Mental impairment study (n = 40)

Reduced model: E(y) = α + β1x1 + β2x2

for x1 = life events score, x2 = SES

Complete model: E(y) = α + β1x1 + β2x2 + β3x3

with x3 = religious attendance (number of times subject attended religious service in past year)

R²r = 0.339, R²c = 0.358

Test comparing the models has H0: β3 = 0.

Page 46

Test statistic

F = [(R²c − R²r)/df1] / [(1 − R²c)/df2] = [(0.358 − 0.339)/1] / [(1 − 0.358)/(40 − 4)] = 1.07

with df1 = 1, df2 = 36. P-value = 0.31.

We cannot reject H0 at the usual significance levels (such as 0.05). The simpler model is adequate.

Note: Since there is only one parameter in the null hypothesis, the F test statistic is the square of t = b3/se for testing H0: β3 = 0. The t test also gives P-value = 0.31, for Ha: β3 ≠ 0.
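This comparison, sketched in Python (assuming scipy is available):

```python
from scipy.stats import f

R2_r, R2_c, n = 0.339, 0.358, 40
df1, df2 = 1, n - 4   # one extra parameter; complete model has k = 3 predictors
F = ((R2_c - R2_r) / df1) / ((1 - R2_c) / df2)  # about 1.07
p_value = f.sf(F, df1, df2)                     # about 0.31
```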

Page 47

Partial Correlation

• Used to describe association between y and an explanatory variable, while controlling for the other explanatory variables in the model

• Notation: r_yx1·x2 denotes the partial correlation between y and x1 while controlling for x2.

Formula in text (p. 347), which we'll skip, focusing on interpretation and letting software do the calculation. ("Correlate" on the "Analyze" menu has a "partial correlation" option.)

Page 48

Properties of partial correlation

• Falls between −1 and +1.

• The larger the absolute value, the stronger the association, controlling for the other variables.

• Does not depend on units of measurement.

• Has the same sign as the corresponding partial slope in the prediction equation.

• Can regard r_yx1·x2 as approximating the ordinary correlation between y and x1 at a fixed value of x2.

• Equals the ordinary correlation found for the data points in the corresponding partial regression plot.

• Squared partial correlation has a proportional reduction in error (PRE) interpretation for predicting y using that predictor, controlling for the other explanatory variables in the model.
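Per the partial-regression-plot property above, the partial correlation is just the ordinary correlation between the two residual sets; a sketch reusing the resid() helper from the earlier plot sketch (y, x1, x2 again assumed to be numpy arrays):

```python
import numpy as np

r_yx1_x2 = np.corrcoef(resid(y, x2), resid(x1, x2))[0, 1]
# For the mental impairment data, the slides report 0.463.
```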

Page 49

Example: Mental impairment as function of life events and SES

• The ordinary correlations are:

0.372 between y and life events

-0.399 between y and SES

• The partial correlations are:

0.463 between y and life events, controlling for SES

-0.483 between y and SES, controlling for life events

Page 50

Notes:

• Since partial correlation = 0.463 between y and life events, controlling for SES,

and since (0.463)2 = 0.21,

“Controlling for SES, 21% of the variation in mental impairment is explained by life events.”

or

“Of the variability in mental impairment unexplained by SES, 21% is explained by life events.”

Page 51

• Test of H0 : population partial correlation = 0

is equivalent to t test of

H0 : population partial slope = 0

for the corresponding regression parameter.

e.g., in the model E(y) = α + β1x1 + β2x2 + β3x3, the test of

H0: population partial correlation = 0 between y and x2,

controlling for x1 and x3,

is equivalent to the test of

H0: β2 = 0

Page 52

Standardized Regression Coefficients

• Recall for a bivariate model, the correlation is a “standardized slope,” reflecting what the slope would be if x and y had equal standard dev’s.

• In multiple regression, there are also standardized regression coefficients that describe what the partial regression coefficients would equal if all variables had the same standard deviation.

Def: The standardized regression coefficient for an explanatory variable represents the change in the mean of y (in y std. dev’s) for a 1-std.-dev. increase in that variable, controlling for the other explanatory variables in the model

Page 53

• r = b(sx/sy) (bivariate) generalizes to

b1* = b1 (sx1 / sy )

b2* = b2 (sx2 / sy ), etc.

Properties of standardized regression coefficients:

• Same sign as the unstandardized coefficients, but do not depend on units of measurement

• Represent what the partial slopes would be if the standard deviations were equal

• Equal 0 when the unstandardized coefficient = 0, so a new significance test is not needed

• SPSS reports them in the same table as the unstandardized coefficients

Page 54

Example: Mental impairment by life events and SES

• We found unstandardized equation

• Standard deviations sy = 5.5, sx1 = 22.6, sx2 = 25.3

The standardized coefficient for the effect of life events is

b1* = 0.103(22.6/5.5) = 0.43. Likewise b2* = -0.45

ŷ = 28.23 + 0.103x1 − 0.097x2
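The standardization, sketched in Python with the slide's rounded values (the rounded inputs give about 0.42 for b1*; the slide's 0.43 presumably comes from unrounded values):

```python
b1, b2 = 0.103, -0.097
s_y, s_x1, s_x2 = 5.5, 22.6, 25.3
b1_star = b1 * s_x1 / s_y  # about 0.42
b2_star = b2 * s_x2 / s_y  # about -0.45
```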

Page 55

• We estimate that E(y) increases by 0.43 standard deviations for a 1 std. dev. increase in life events, controlling for SES.

Note:

An alternative form for prediction equations uses standardized regression coefficients as coefficients of standardized variables

(text, p. 352)

Page 56

Some multiple regression review questions

Predicted cost = -19.22 + 0.29(food) + 1.58(décor) + 0.90(service)

a. The correct interpretation of the estimated regression coefficient for décor is: for every 1-point increase in the décor score, the cost of dining increases by $1.58

b. Décor is the most important predictor

c. Décor has a positive correlation with cost

d. Ignoring other predictors, it is impossible that décor could have a negative correlation with cost

e. The t statistic for testing the partial effect of décor against the alternative of a positive effect could have a P-value above 0.50

f. None of the above

Page 57

• (T/F) For the multiple regression model with two predictors, if the correlation between x1 and y is 0.50 and the correlation between x2 and y is 0.50, it is possible that the multiple correlation R = 0.35.

• For the previous exercise, in which case would you expect R to be larger: when the correlation between x1 and x2 is 0, or when it is 1.0?

• (T/F) For every F test, there is always an equivalent t test.

• Explain with an example what it means for there to be (a) no interaction, (b) interaction between two quantitative explanatory variables in their effects on a quantitative response variable.

• Give an example of interaction when the three variables are categorical (see back in Chap. 10).

Page 58

• Approximately what value do you expect to see for an F test statistic when a null hypothesis is true?

• (T/F) If you get a small P-value in the F test that all regression coefficients = 0, then the P-value will be small in at least one of the t tests for the individual regression coefficients.

• Why do we need the F (chi-squared) distribution? Why can’t we use the t (normal) distribution for all hypotheses involving quantitative (categorical) var’s?

• What is the connection between (a) t and F, (b) z and chi-squared?