Chapter 21 Basic Hypothesis Tests for Linear Models
21.1 Introduction
In this section we consider the application of hypothesis testing to linear models. Suppose that we are given the linear model \[Y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_{p-1} x_{(p-1)i} + \epsilon_i,\] where \(\epsilon_i \sim N(0,\sigma^2)\) are independent and identically distributed. We are interested in testing the hypothesis that a coefficient \(\beta_j\) is equal to some value \(b\). In particular, we are most interested in \(b=0\), since setting \(\beta_j = 0\) means that \(x_{ji}\) is not important in predicting \(Y_i\); see Section 21.2. We can also construct confidence intervals for \(\beta_j\) (Section 21.3), and in Section 21.4 we extend hypothesis testing to multiple (all) parameters to test whether or not a linear model is useful in a given modelling scenario.
21.2 Tests on a single parameter
Given the linear model \[Y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_{p-1} x_{(p-1)i} + \epsilon_i,\] where \(\epsilon_i \sim N(0,\sigma^2)\), we want to test \(H_0: \beta_j = b\) vs. \(H_1: \beta_j \neq b\) at significance level \(\alpha\), where \(b\) is some constant. Typically, we might choose \(\alpha = 0.05\) (common alternatives are \(\alpha = 0.01\) and \(\alpha = 0.1\)).
The decision rule is to reject \(H_0\) if \[|T| = \left| \frac{ \hat{\beta}_j - b}{\text{SE}(\hat{\beta}_j)} \right| > t_{n-p,\alpha/2},\] where \(\text{SE}(\hat{\beta}_j) = \sqrt{ \text{Var}(\hat{\beta}_j) }\) is the standard error of the parameter. Recall from Section 17 that \(\text{Var}(\hat{\beta}_j) = s^2 \left( (\mathbf{Z}^T\mathbf{Z})^{-1} \right)_{jj}\).
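For a fitted linear model in R these quantities can be extracted directly. Below is a minimal sketch, assuming a generic fitted lm object called fit (the object name is hypothetical):

```r
# Standard errors of the estimated coefficients for a fitted lm object `fit`.
Z  <- model.matrix(fit)                            # design matrix Z
s2 <- sum(residuals(fit)^2) / df.residual(fit)     # estimate s^2 of sigma^2
se <- sqrt(s2 * diag(solve(t(Z) %*% Z)))           # sqrt( s^2 ((Z^T Z)^{-1})_{jj} )

# These values agree with sqrt(diag(vcov(fit))) and with the
# "Std. Error" column of summary(fit).
se
```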
A special case of the above test occurs when we choose \(b=0\). The test \(H_0: \beta_j = 0\) vs. \(H_1: \beta_j \neq 0\) at level \(\alpha\) has the decision rule to reject \(H_0\) if \[|T| = \left| \frac{ \hat{\beta}_j}{\text{SE}(\hat{\beta}_j)} \right| > t_{n-p,\alpha/2}.\] Note that if we reject \(H_0: \beta_j = 0\) we are claiming that the explanatory variable \(X_j\) is useful in predicting the response variable \(Y\) when all the other variables are included in the model.
The test statistic \(|T| = \left| \frac{ \hat{\beta}_j}{\text{SE}(\hat{\beta}_j)} \right|\) is often reported in the output from statistical software such as R.
Fuel consumption
A dataset records fuel consumption for the 50 US states plus Washington DC, giving \(n=51\) observations. The response, fuel, is fuel consumption measured in gallons per person. The predictors considered are dlic, the percentage of licensed drivers; tax, the motor fuel tax in US cents per gallon; inc, income per person in $1,000s; and road, the log of the number of miles of federal highway. Fitting a linear model of the form
\[\text{fuel} = \beta_0 + \beta_1 \cdot \text{dlic} + \beta_2 \cdot \text{tax} + \beta_3 \cdot \text{inc} + \beta_4 \cdot \text{road}\]
using R, the output is
| Parameter | Estimate | Standard Error |
|---|---|---|
| \(\beta_0\) | 154.19 | 194.906 |
| \(\beta_1\) | 4.719 | 1.285 |
| \(\beta_2\) | -4.228 | 2.030 |
| \(\beta_3\) | -6.135 | 2.194 |
| \(\beta_4\) | 26.755 | 9.337 |
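Output of this kind can be reproduced with lm() and summary(). A minimal sketch, assuming the data are held in a data frame called fuel_data with columns fuel, dlic, tax, inc and road (the data frame name is hypothetical):

```r
# Fit the linear model for fuel consumption; fuel_data is a hypothetical
# data frame containing the n = 51 observations.
fit <- lm(fuel ~ dlic + tax + inc + road, data = fuel_data)

# Coefficient table: estimates, standard errors, t values and p-values.
summary(fit)$coefficients
```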
Test \(H_0: \beta_2 = 0\) vs. \(H_1: \beta_2 \neq 0\) at significance level \(\alpha = 0.05\).
Watch Video 31 for a walk-through in R of testing the null hypothesis.
Video 31: Fuel consumption example.
Hypothesis test for \(\beta_2\).
The decision rule is to reject \(H_0\) if \[|T| = \left| \frac{\hat{\beta}_2}{\text{SE}(\hat{\beta}_2)} \right| = \left| \frac{-4.228}{2.030} \right| = 2.083 > t_{46,0.025} = 2.013.\]
So we reject \(H_0\) and conclude that the tax variable is useful for prediction of fuel after having included the other variables.
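As a check, the test statistic, critical value and \(p\)-value can be computed directly in R from the reported estimate and standard error; a short sketch using only the numbers in the table above:

```r
# t test of H0: beta_2 = 0 using the reported estimate and standard error.
estimate  <- -4.228
std_error <- 2.030
t_stat <- estimate / std_error           # approximately -2.083

qt(0.975, df = 46)                       # critical value t_{46, 0.025}, approx 2.013
2 * (1 - pt(abs(t_stat), df = 46))       # two-sided p-value, approx 0.0428
```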
We note that the \(p\)-value is \(P(|t_{46}|>2.083) = 0.0428\) and therefore we would not reject the null hypothesis \(\beta_2 = 0\) at significance level \(\alpha = 0.01\).

21.3 Confidence intervals for parameters
Recall that a \(100(1-\alpha)\%\) confidence interval for \(\beta_j\) is given by \[\hat{\beta}_j \pm t_{n-p,\alpha/2} \, \text{SE}(\hat{\beta}_j).\]

Fuel consumption (continued)
Consider Example 21.2.1 (Fuel consumption) and construct a 95% confidence interval for \(\beta_2\).

The 95% confidence interval is \[\hat{\beta}_2 \pm t_{46,0.025} \, \text{SE}(\hat{\beta}_2) = -4.228 \pm 2.013 \times 2.030 = (-8.314, -0.142).\]
This confidence interval does not contain 0 (though only just), as we would expect from the calculation of the \(p\)-value in Example 21.2.1 (Fuel consumption) above.
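In R the interval can be obtained with confint() on the fitted model, or directly from the reported estimate and standard error. A short sketch, reusing the hypothetical fit object from above:

```r
# 95% confidence interval for beta_2 (the coefficient of tax).
confint(fit, "tax", level = 0.95)

# Equivalently, by hand from the reported estimate and standard error:
-4.228 + c(-1, 1) * qt(0.975, df = 46) * 2.030   # approximately (-8.31, -0.14)
```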
21.4 Tests for the existence of regression
We want to test \[H_0: \beta_1 = \beta_2 = \cdots = \beta_{p-1} = 0 \quad \text{versus} \quad H_1: \beta_j \neq 0 \ \text{for some } j,\] at significance level \(\alpha\).
Note that if we reject \(H_0\) we are saying that the model \[Y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \cdots + \beta_{p-1} x_{(p-1)i} + \epsilon_i\] has some ability to explain the variance that we are observing in \(Y\). That is, there exists a linear relationship between the explanatory variables and the response variable.
If \(D_0\) is the model deviance under the null hypothesis and \(D_1\) is the model deviance under the alternative hypothesis, then the decision rule is to reject \(H_0\) if \[F = \frac{(D_0 - D_1)/(p-1)}{D_1/(n-p)} > F_{p-1,\,n-p,\,\alpha}.\]

For the data in Example 21.2.1 (Fuel consumption), the two competing models are \[H_0: \ \text{fuel} = \beta_0 \qquad \text{and} \qquad H_1: \ \text{fuel} = \beta_0 + \beta_1 \cdot \text{dlic} + \beta_2 \cdot \text{tax} + \beta_3 \cdot \text{inc} + \beta_4 \cdot \text{road}.\]
The null and alternative models have residual sums of squares \(D_0 = 395694.1\) and \(D_1 = 193700\), respectively. We test \(H_0: \beta_1 = \cdots = \beta_4 = 0\) vs. \(H_1: \beta_j \neq 0\) for some \(j=1,\dots,4\) at level \(\alpha=0.05\). The test statistic is \[F = \frac{(395694.1 - 193700)/4}{193700/46} = 11.99 > F_{4,46,0.05} \approx 2.57.\]
Therefore, we reject \(H_0\) and can say that the linear model has some power in explaining the variability in fuel.
Note that the \(p\)-value for the \(F\) test is \(P(F_{4,46} > 11.99) = 9.331 \times 10^{-7}\). This is given in R by 1 - pf(11.99, 4, 46) and is reported in the last line of the summary() output for a linear model in R.
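The F test can be reproduced in R by comparing the nested models with anova(), or by computing the statistic directly from the residual sums of squares. A minimal sketch, again assuming the hypothetical fuel_data data frame from before:

```r
# Null (intercept-only) and full models for the existence-of-regression test.
null_model <- lm(fuel ~ 1, data = fuel_data)
full_model <- lm(fuel ~ dlic + tax + inc + road, data = fuel_data)

# F test comparing the nested models (matches the last line of summary(full_model)).
anova(null_model, full_model)

# Equivalently, from the residual sums of squares (deviances):
D0 <- sum(residuals(null_model)^2)
D1 <- sum(residuals(full_model)^2)
F_stat <- ((D0 - D1) / 4) / (D1 / 46)    # p - 1 = 4, n - p = 46
1 - pf(F_stat, 4, 46)                    # p-value
```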