Hypothesis Testing

Suppose we are interested in making inferences about \(\beta\) in the simple linear regression model \[y_i = \alpha+\beta x_i+\epsilon_i.\] Here \(\mathbf{b}^T\boldsymbol{\beta} = \beta\) with \(\mathbf{b}^T = (0 \quad 1)\), and this gives us:

\[\frac{\hat{\beta}-\beta}{\text{e.s.e}(\hat{\beta})} \sim t(n-p)\]

Under the null hypothesis

H\(_0\): \(\beta=0\) (where H\(_1\): \(\beta \neq 0\)),

\[\frac{\hat{\beta}-0}{\text{e.s.e}(\hat{\beta})}=\frac{\hat{\beta}}{\text{e.s.e}(\hat{\beta})} \sim t(n-p)\]

and \(\frac{\hat{\beta}}{\text{e.s.e}(\hat{\beta})}\) is called the test statistic. The null hypothesis is rejected for large absolute values of the test statistic, usually values \(> 2\), i.e. for small p-values in R (where a p-value is the probability of obtaining a test statistic value as extreme as, or more extreme than, the one observed if the null hypothesis is true). In general, we reject H\(_0\) for p-values \(< 0.05\), and this would indicate a significant relationship between the response and the explanatory variable in the model (i.e., \(\beta \neq 0\)).
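The calculation above can be sketched numerically. The following is a minimal Python sketch (the text refers to R, so this is only an illustration) that fits a simple linear regression by least squares on hypothetical data, then forms the test statistic \(\hat{\beta}/\text{e.s.e}(\hat{\beta})\) for H\(_0\): \(\beta=0\); the data values are invented for illustration, not taken from the text.

```python
import math

# Hypothetical data: n = 10 observations (illustrative values only)
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0]
y = [2.1, 4.3, 5.9, 8.2, 9.8, 12.1, 14.2, 15.9, 18.1, 20.2]

n, p = len(x), 2  # p = 2 estimated parameters (alpha and beta)

# Least-squares estimates for y_i = alpha + beta * x_i + eps_i
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
beta_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
alpha_hat = ybar - beta_hat * xbar

# Residual variance estimate on n - p degrees of freedom,
# and the estimated standard error of beta_hat
rss = sum((yi - alpha_hat - beta_hat * xi) ** 2 for xi, yi in zip(x, y))
s2 = rss / (n - p)
ese_beta = math.sqrt(s2 / sxx)

# Test statistic for H0: beta = 0, referred to a t(n - p) distribution
t_stat = beta_hat / ese_beta
print(beta_hat, ese_beta, t_stat)
```

With these data \(|\hat{\beta}/\text{e.s.e}(\hat{\beta})|\) is far above 2, so H\(_0\): \(\beta=0\) would be rejected.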

Notice that we can test any hypothesis with respect to \(\beta\), for instance

H\(_0\): \(\beta=5\) (where H\(_1\): \(\beta \neq 5\)),

\[\frac{\hat{\beta}-5}{\text{e.s.e}(\hat{\beta})} \sim t(n-p).\]
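For a general null value the test statistic simply subtracts the hypothesised value before dividing by the standard error. A minimal sketch, using hypothetical fitted values (\(\hat{\beta}=5.6\) and \(\text{e.s.e}(\hat{\beta})=0.25\) are invented for illustration):

```python
# Hypothetical fitted values (illustrative, not from the text)
beta_hat = 5.6
ese_beta = 0.25

# Test statistic for H0: beta = 5 versus H1: beta != 5,
# referred to a t(n - p) distribution
t_stat = (beta_hat - 5) / ese_beta
print(round(t_stat, 2))  # 2.4
```

Since \(|2.4| > 2\), H\(_0\): \(\beta=5\) would be rejected at roughly the 5% level, even though \(\beta=0\) is clearly not the hypothesis being tested.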