Summary points
In this lecture, we have discussed three possible models:
\[\begin{aligned} E(y_{ij}) & = \alpha_i+\beta_i(x_{ij}-\bar{x}_{i.})\\ E(y_{ij}) & = \alpha_i+\beta(x_{ij}-\bar{x}_{i.})\\ E(y_{ij}) & = \alpha+\beta(x_{ij}-\bar{x}_{..})\\\end{aligned}\]
And we derived a framework for choosing the model that best suits a particular data set.
Some important things to remember are:
How to formulate each model in vector-matrix notation as \[E(\mathbf{Y}) = \mathbf{X}\boldsymbol{\beta}\]
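As a concrete illustration of the vector-matrix formulation, the design matrices \(\mathbf{X}\) for the three models might be built as follows. This is a minimal numpy sketch using a made-up two-group data set; the variable names are hypothetical and not part of the lecture material.

```python
import numpy as np

# Hypothetical data: two groups (i = 0, 1) with three observations each
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
g = np.array([0, 0, 0, 1, 1, 1])  # group index i for each observation

# Centre x within each group: x_ij - xbar_i.
group_means = np.array([x[g == i].mean() for i in range(2)])
xc = x - group_means[g]

# Model 1: separate intercept alpha_i and slope beta_i per group
X1 = np.column_stack([(g == 0).astype(float), (g == 1).astype(float),
                      np.where(g == 0, xc, 0.0), np.where(g == 1, xc, 0.0)])

# Model 2: separate intercepts alpha_i, common slope beta
X2 = np.column_stack([(g == 0).astype(float), (g == 1).astype(float), xc])

# Model 3: single intercept and slope, x centred on the grand mean xbar_..
X3 = np.column_stack([np.ones_like(x), x - x.mean()])
```

Note how the common-slope model (Model 2) simply collapses the two slope columns of Model 1 into one, which is what makes the models nested and comparable.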
How to use the general linear model formulae to derive the parameter estimates and the residual sum of squares as \[\boldsymbol{\hat{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y} \quad \mathrm{and} \quad RSS = \mathbf{Y}^T\mathbf{Y}-\mathbf{Y}^T\mathbf{X}\boldsymbol{\hat{\beta}}\]
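The two formulae above can be checked numerically. Here is a small sketch on simulated data (the data and true parameter values are invented for illustration); note that solving the normal equations directly is preferred over forming the inverse explicitly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from a simple linear model
n = 10
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.1, size=n)

# beta_hat = (X'X)^{-1} X'Y, computed by solving the normal equations
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# RSS = Y'Y - Y'X beta_hat
RSS = Y @ Y - Y @ X @ beta_hat
```

The RSS formula is algebraically equal to the direct residual sum \(\sum (y - \hat{y})^2\), which provides a convenient sanity check on the computation.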
How to use the general formula for a confidence interval for a linear combination of parameters to compare models.
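The confidence-interval recipe for a linear combination \(\mathbf{c}^T\boldsymbol{\beta}\) can be sketched as follows, assuming the standard result \(\mathbf{c}^T\hat{\boldsymbol{\beta}} \pm t_{n-p,\,0.975}\, \sqrt{s^2\, \mathbf{c}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{c}}\) with \(s^2 = RSS/(n-p)\). The data here are simulated purely for illustration.

```python
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)

# Simulated data: n observations, p = 3 parameters
n, p = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
Y = X @ np.array([1.0, 0.5, -0.5]) + rng.normal(scale=0.2, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ Y
RSS = Y @ Y - Y @ X @ beta_hat
s2 = RSS / (n - p)                  # unbiased estimate of sigma^2

# 95% CI for the contrast c'beta = beta_1 - beta_2
c = np.array([0.0, 1.0, -1.0])
est = c @ beta_hat
se = np.sqrt(s2 * c @ XtX_inv @ c)
tcrit = t.ppf(0.975, df=n - p)
ci = (est - tcrit * se, est + tcrit * se)
```

In a model-comparison context, a contrast such as \(\beta_1 - \beta_2\) whose interval comfortably contains zero suggests the simpler common-slope model is adequate.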