Multiple Regression: Hypotheses Testing and CIs

Econ 358: Econometrics

Agenda pt. 1

  1. Testing Hypotheses and CIs

  2. Confidence Sets

  3. Model Specification

Testing Hypotheses

Same as before

\[ TestScore = \beta_0 + \beta_1 \times STR + \beta_2 \times english + u \]

  • Having multiple variables in a regression changes very little when testing a hypothesis about a single coefficient:

    • estimate the model
    • compute the t-statistic
    • find the p-value (a sketch in R follows)
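
A minimal sketch of these three steps in R, assuming CASchools already contains the score and stratio variables used throughout, and that the lmtest and sandwich packages are available for the heteroskedasticity-robust (HC1) standard errors:

library(lmtest)    # coeftest()
library(sandwich)  # vcovHC()

# 1. estimate the model
model <- lm(score ~ stratio + english, data = CASchools)

# 2. and 3. t-statistics and p-values using HC1 robust standard errors
coeftest(model, vcov. = vcovHC(model, type = "HC1"))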

Same as before

\[ TestScore = \beta_0 + \beta_1 \times STR + \beta_2 \times english + u \]

Code
model <- lm(score ~ stratio + english, data = CASchools)

export_summs(model, robust = "HC1",
             statistics = c(N = "nobs", R2 = "r.squared", AdjR2 = "adj.r.squared"))
Model 1
(Intercept) 686.03 ***
(8.73)   
stratio -1.10 *  
(0.43)   
english -0.65 ***
(0.03)   
N 420       
R2 0.43    
AdjR2 0.42    
Standard errors are heteroskedasticity robust. *** p < 0.001; ** p < 0.01; * p < 0.05.

Same as before (but with CIs)

\[ TestScore = \beta_0 + \beta_1 \times STR + \beta_2 \times english + u \]

Code
model <- lm(score ~ stratio + english, data = CASchools)

export_summs(model, robust = "HC1",
             error_format = "SE = {std.error}, CI = [{conf.low}, {conf.high}]",
             statistics = c(N = "nobs", R2 = "r.squared", AdjR2 = "adj.r.squared"))
Model 1
(Intercept) 686.03 ***
SE = 8.73, CI = [668.88, 703.19]   
stratio -1.10 *  
SE = 0.43, CI = [-1.95, -0.25]   
english -0.65 ***
SE = 0.03, CI = [-0.71, -0.59]   
N 420       
R2 0.43    
AdjR2 0.42    
Standard errors are heteroskedasticity robust. *** p < 0.001; ** p < 0.01; * p < 0.05.
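
The intervals above are just the estimate plus or minus the critical value times the robust SE; a minimal sketch that reproduces them with coefci() from the lmtest package (HC1 covariance assumed, as in the table):

# lmtest and sandwich loaded as before
# 95% confidence intervals based on HC1 robust standard errors
coefci(model, vcov. = vcovHC(model, type = "HC1"), level = 0.95)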

What about for multiple?

\[ TestScore = \beta_0 + \beta_1 \times STR + \beta_2 \times Expn + \beta_3 \times english + u \]

  • where Expn is expenditures per pupil

  • What if you wanted to test the null hypothesis that, “school resources don’t matter,” so that:

\[ H_0: \beta_1 = 0 \text{ and } \beta_2 = 0 \]

vs

\[ H_1: \text{ either } \beta_1 \neq 0 \text{ or } \beta_2 \neq 0 \text{ or both} \]

What about for multiple?

\[ TestScore = \beta_0 + \beta_1 \times STR + \beta_2 \times Expn + \beta_3 \times english + u \]

  • We cannot use the individual t-statistics one at a time here (i.e., rejecting the joint null hypothesis whenever either \(t_1\) or \(t_2\) exceeds 1.96 in absolute value is not a valid 5% test)

  • The book explains why, but we will focus on what you can do: use the F-statistic test

What about for multiple?

\[ TestScore = \beta_0 + \beta_1 \times STR + \beta_2 \times Expn + \beta_3 \times english + u \]

  • The F-test is a formal hypothesis test designed to deal with a null hypothesis that contains multiple hypotheses or a single hypothesis about a group of coefficients.

  • The way an F-test works is fairly ingenious:

    • Translate the null hypothesis into constraints that will be placed on the equation.
    • Estimate the constrained equation with OLS and compare its fit with the fit of the unconstrained equation (a one-call sketch with anova() follows).
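
In R, this constrained-versus-unconstrained comparison can be done in a single call to anova(), which takes the two fitted models and returns the (non-robust) F-statistic; a sketch of the idea, with the same steps walked through manually on the next slides:

# unrestricted model: school resources allowed to matter
unrestricted <- lm(score ~ stratio + expenditure + english, data = CASchools)

# restricted model: imposes the constraints beta_1 = 0 and beta_2 = 0
restricted <- lm(score ~ english, data = CASchools)

# compare the fit of the two equations
anova(restricted, unrestricted)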

Walking through a manual example

Step 1: Estimate the “unrestricted” model:

model1 <- lm(score ~ stratio + expenditure + english, data = CASchools)

summary(model1)

Call:
lm(formula = score ~ stratio + expenditure + english, data = CASchools)

Residuals:
    Min      1Q  Median      3Q     Max 
-51.340 -10.111   0.293  10.318  43.181 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)    
(Intercept) 649.577947  15.205717  42.719  < 2e-16 ***
stratio      -0.286399   0.480523  -0.596  0.55149    
expenditure   0.003868   0.001412   2.739  0.00643 ** 
english      -0.656023   0.039106 -16.776  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 14.35 on 416 degrees of freedom
Multiple R-squared:  0.4366,    Adjusted R-squared:  0.4325 
F-statistic: 107.5 on 3 and 416 DF,  p-value: < 2.2e-16

Walking through a manual example

Step 2: Estimate the “restricted” model:

\[ H_0: \beta_1 =0 \text{ and } \beta_2 = 0 \]

model2 <- lm(score ~ english, data = CASchools)

summary(model2)

Call:
lm(formula = score ~ english, data = CASchools)

Residuals:
    Min      1Q  Median      3Q     Max 
-50.861 -10.183  -0.807   9.004  45.183 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) 664.73944    0.94064  706.69   <2e-16 ***
english      -0.67116    0.03898  -17.22   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 14.59 on 418 degrees of freedom
Multiple R-squared:  0.4149,    Adjusted R-squared:  0.4135 
F-statistic: 296.4 on 1 and 418 DF,  p-value: < 2.2e-16

Walking through a manual example

Step 3: Compare the fits:

\[ F = \frac{(R^2_{unrestricted} - R^2_{restricted})/q}{(1-R^2_{unrestricted})/(n-k_{unrestricted} - 1)} \]

  • \(q\) : number of restrictions under the null
  • \(k_{unrestricted}\) : number of regressors in the unrestricted regression.

Walking through a manual example

Step 3: Compare the fits:

\[ F = \frac{(R^2_{unrestricted} - R^2_{restricted})/q}{(1-R^2_{unrestricted})/(n-k_{unrestricted} - 1)} \]

\[ \frac{(0.4366 - 0.4149)/2}{(1-0.4366)/(420-3-1)} = 8.01 \]

  • Then compare to the critical value on the F-table (or compute it directly in R, as sketched below)…
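
A sketch that recomputes the statistic from the two stored R² values (model1 and model2 as estimated above) and looks up the critical value and p-value of the F(2, 416) distribution:

r2_u <- summary(model1)$r.squared    # unrestricted R-squared
r2_r <- summary(model2)$r.squared    # restricted R-squared
q    <- 2                            # number of restrictions under the null
df2  <- 420 - 3 - 1                  # n - k_unrestricted - 1

F_stat <- ((r2_u - r2_r) / q) / ((1 - r2_u) / df2)
F_stat                               # roughly 8.01

qf(0.95, df1 = q, df2 = df2)         # 5% critical value
1 - pf(F_stat, df1 = q, df2 = df2)   # p-value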

Walking through an R package example

Step 1: Load the car package and estimate the multiple regression model:

library(car)

model1 <- lm(score ~ stratio + expenditure + english, data = CASchools)

Walking through an R package example

Step 2: Call linearHypothesis() on the model object, providing both linear restrictions to be tested as strings:

linearHypothesis(model1, c("stratio=0", "expenditure=0"))
Res.Df RSS Df Sum of Sq F Pr(>F)
418 8.9e+04                    
416 8.57e+04 2 3.3e+03 8.01 0.000386

Walking through an R package example

Res.Df RSS Df Sum of Sq F Pr(>F)
418 8.9e+04                    
416 8.57e+04 2 3.3e+03 8.01 0.000386
  • The output reveals that the F-statistic for this joint hypothesis test is about 8.01 and the corresponding p-value is 0.0004

  • Thus, we can reject the null hypothesis that both coefficients are zero at any level of significance commonly used in practice.

Robust example

A heteroskedasticity-robust version of this F-test (which leads to the same conclusion) can be conducted as follows:

linearHypothesis(model1, c("stratio=0", "expenditure=0"), white.adjust = "hc1")
Res.Df Df F Pr(>F)
418           
416 2 5.43 0.00468

Overall F-statistic

export_summs(model1, robust = "HC1",
             error_format = "SE = {std.error}, CI = [{conf.low}, {conf.high}]",
             statistics = c(N = "nobs", R2 = "r.squared", AdjR2 = "adj.r.squared",
                            F = "statistic"))
Model 1
(Intercept) 649.58 ***
SE = 15.46, CI = [619.19, 679.96]   
stratio -0.29    
SE = 0.48, CI = [-1.23, 0.66]   
expenditure 0.00 *  
SE = 0.00, CI = [0.00, 0.01]   
english -0.66 ***
SE = 0.03, CI = [-0.72, -0.59]   
N 420       
R2 0.44    
AdjR2 0.43    
F 107.45    
Standard errors are heteroskedasticity robust. *** p < 0.001; ** p < 0.01; * p < 0.05.

Overall F-statistic

  • The overall F-statistic tests the null hypothesis that all of the population coefficients in the model except the intercept are zero

  • Our statistic of 107.45 rejects the null hypothesis that the model has no power in explaining test scores.

  • It is important to know that the F-statistic reported by summary() is not robust to heteroskedasticity! A robust version is sketched below.
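
A heteroskedasticity-robust version of this overall test can be run by reusing linearHypothesis() with all three slope restrictions (a sketch, following the same pattern as the joint test above):

# robust test that every slope coefficient (all except the intercept) is zero
linearHypothesis(model1,
                 c("stratio=0", "expenditure=0", "english=0"),
                 white.adjust = "hc1")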

Confidence Sets

Confidence Sets

  • A 95% joint confidence set is:

    • A set-valued function of the data that contains the true coefficient(s) in 95% of hypothetical repeated samples.
    • Equivalently, the set of coefficient values that cannot be rejected at the 5% significance level.

Confidence Sets

In R

Code
# 95% joint confidence set (ellipse) for the coefficients on stratio and expenditure
confidenceEllipse(model1,
                  fill = TRUE,
                  lwd = 0,
                  which.coef = c("stratio", "expenditure"),
                  main = "95% Confidence Set",
                  ylab = "Coefficient on expenditure",
                  xlab = "Coefficient on stratio")

In R

We see that the ellipse is centered at (−0.29, 0.0039), the pair of coefficient estimates on stratio and expenditure. What is more, (0, 0) is not in the set, so we can reject the null hypothesis that both coefficients are zero.

Model Specification

How to decide what variables to include?

  1. Identify variable of interest

  2. Think of the omitted causal effects that could result in omitted variable bias

  3. Include those omitted causal effects if you can or, if you can’t, include variables correlated with them that serve as control variables

  4. Also specify a range of plausible alternative models, which include additional candidate variables.

  5. Estimate your base model and plausible alternative specifications (“sensitivity checks”); a sketch follows this list.
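
A minimal sketch of step 5, comparing the base specification with one alternative that adds a candidate control; the variable lunch (share of students qualifying for a reduced-price lunch, available in CASchools) is used purely as an illustration:

base <- lm(score ~ stratio + english, data = CASchools)
alt  <- lm(score ~ stratio + english + lunch, data = CASchools)

# side-by-side comparison with robust standard errors
export_summs(base, alt, robust = "HC1",
             statistics = c(N = "nobs", R2 = "r.squared", AdjR2 = "adj.r.squared"))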

Sensitivity checks

  • Does a candidate variable change the coefficient of interest?

  • Is a candidate variable statistically significant?

  • Use judgment, not a mechanical recipe…

  • Don’t just try to maximize \(R^2\)