Different lines

To compare our three models of interest, we first assume the more general model 1 and examine whether it is reasonable to adopt the parallel-lines model, model 2. We can do this by constructing a 95% confidence interval (C.I.) for \((\beta_1-\beta_2)\), the difference between the two slope parameters.

Recall that the only difference between model 1 and model 2 is that model 1 contains two slope parameters, \(\beta_1\) and \(\beta_2\), while model 2 has a single slope parameter \(\beta\). In other words, model 2 assumes \(\beta=\beta_1=\beta_2\).

For example, in the plot below we have two lines (blue and black). The black line has a positive slope and the blue line has a negative slope, so this is a clear example of two lines with different slope and intercept terms.

Model 1: Different slope and intercept terms

The most general way to fit these lines is to write the model in matrix form and use the results we have already derived.

\[\begin{eqnarray*} E(y_{11}) &=& \alpha_1+\beta_1(x_{11}-\bar{x}_{1.})\\ &\vdots &\\ E(y_{1n_1}) &=& \alpha_1+\beta_1(x_{1n_1}-\bar{x}_{1.})\\ E(y_{21}) &=& \alpha_2+\beta_2(x_{21}-\bar{x}_{2.})\\ &\vdots &\\ E(y_{2n_2}) &=& \alpha_2+\beta_2(x_{2n_2}-\bar{x}_{2.}) \end{eqnarray*}\]

i.e. \(E(\mathbf{Y}) = \mathbf{X}\boldsymbol{\beta}\) where

\[\begin{eqnarray*} \mathbf{Y} &=& \left( \begin{array}{c} y_{11} \\ . \\ . \\ y_{1n_1} \\ y_{21} \\ . \\ . \\ y_{2n_2} \\ \end{array} \right)\\ \mathbf{X}&=&\left( \begin{array}{cccc} 1 & (x_{11}-\bar{x}_{1.}) & 0 & 0 \\ . & . & . & . \\ . & . & . & . \\ 1 & (x_{1n_1}-\bar{x}_{1.}) & 0 & 0 \\ 0 & 0 & 1 & (x_{21}-\bar{x}_{2.}) \\ . & . & . & . \\ . & . & . & . \\ 0 & 0 & 1 & (x_{2n_2}-\bar{x}_{2.}) \\ \end{array} \right)\\ \boldsymbol{\beta} &=&\left( \begin{array}{c} \alpha_1 \\ \beta_1 \\ \alpha_2 \\ \beta_2 \\ \end{array} \right) \end{eqnarray*}\]
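As a quick numerical illustration of this structure, the sketch below builds \(\mathbf{Y}\) and the block-diagonal \(\mathbf{X}\) in Python with NumPy. The data arrays `x1`, `y1`, `x2`, `y2` are made up purely for illustration and are not taken from these notes.

```python
import numpy as np

# Hypothetical data for the two groups (illustrative only, not from the notes)
x1 = np.array([1.0, 2.0, 3.0, 4.0]);  y1 = np.array([2.1, 2.9, 4.2, 4.8])
x2 = np.array([1.0, 2.0, 3.0, 4.0]);  y2 = np.array([5.2, 4.1, 3.3, 1.9])
n1, n2 = len(x1), len(x2)

# Centre each group's x values about that group's own mean, as in the model
c1, c2 = x1 - x1.mean(), x2 - x2.mean()

# Stack the responses and build the block-diagonal design matrix
Y = np.concatenate([y1, y2])
X = np.zeros((n1 + n2, 4))
X[:n1, 0], X[:n1, 1] = 1.0, c1   # columns for alpha_1 and beta_1
X[n1:, 2], X[n1:, 3] = 1.0, c2   # columns for alpha_2 and beta_2
```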

The least-squares estimate for \(\boldsymbol{\beta}\) is therefore given by

\[ \boldsymbol{\hat{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}\]

\[\begin{eqnarray*} \mathbf{X}^T\mathbf{X}&=&\left( \begin{array}{cccc} n_1 & \sum_{j=1}^{n_1}(x_{1j}-\bar{x}_{1.}) & 0 & 0 \\ \sum_{j=1}^{n_1}(x_{1j}-\bar{x}_{1.}) & \sum_{j=1}^{n_1}(x_{1j}-\bar{x}_{1.})^2 & 0 & 0 \\ 0 & 0 & n_2 & \sum_{j=1}^{n_2}(x_{2j}-\bar{x}_{2.}) \\ 0 & 0 & \sum_{j=1}^{n_2}(x_{2j}-\bar{x}_{2.}) & \sum_{j=1}^{n_2}(x_{2j}-\bar{x}_{2.})^2 \\ \end{array} \right)\\ &=&\left( \begin{array}{cccc} n_1 & 0 & 0 & 0 \\ 0 & S_{x_1x_1} & 0 & 0 \\ 0 & 0 & n_2 & 0 \\ 0 & 0 & 0 & S_{x_2x_2} \\ \end{array} \right) \end{eqnarray*}\]

since \(\sum_{j=1}^{n_i}(x_{ij}-\bar{x}_{i.}) = 0\) and \(S_{x_1x_1} = \sum_{j=1}^{n_1}(x_{1j}-\bar{x}_{1.})^2\).

\[\begin{eqnarray*} (\mathbf{X}^T\mathbf{X})^{-1} &=& \left( \begin{array}{cccc} \frac{1}{n_1} & 0 & 0 & 0 \\ 0 & \frac{1}{S_{x_1x_1}} & 0 & 0 \\ 0 & 0 & \frac{1}{n_2} & 0 \\ 0 & 0 & 0 & \frac{1}{S_{x_2x_2}} \\ \end{array} \right)\\\\ \mathbf{X}^T\mathbf{Y} &=& \left( \begin{array}{c} \sum_{j=1}^{n_1}y_{1j} \\ \sum_{j=1}^{n_1}(x_{1j}-\bar{x}_{1.})y_{1j} \\ \sum_{j=1}^{n_2}y_{2j} \\ \sum_{j=1}^{n_2}(x_{2j}-\bar{x}_{2.})y_{2j} \\ \end{array} \right)\\\\ &=&\left( \begin{array}{c} n_1\bar{y}_{1.} \\ S_{x_1y_1} \\ n_2\bar{y}_{2.} \\ S_{x_2y_2} \\ \end{array} \right) \end{eqnarray*}\]

Therefore,

\[ \boldsymbol{\hat{\beta}} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y} = \left( \begin{array}{c} \bar{y}_{1.} \\ \frac{S_{x_1y_1}}{S_{x_1x_1}} \\ \bar{y}_{2.} \\ \frac{S_{x_2y_2}}{S_{x_2x_2}} \\ \end{array} \right)\]
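Under the same assumptions (NumPy and the made-up data from the previous sketch), the matrix estimate can be checked against these four closed-form entries:

```python
import numpy as np

# Same hypothetical data as in the earlier sketch
x1 = np.array([1.0, 2.0, 3.0, 4.0]);  y1 = np.array([2.1, 2.9, 4.2, 4.8])
x2 = np.array([1.0, 2.0, 3.0, 4.0]);  y2 = np.array([5.2, 4.1, 3.3, 1.9])
n1, n2 = len(x1), len(x2)
c1, c2 = x1 - x1.mean(), x2 - x2.mean()

Y = np.concatenate([y1, y2])
X = np.zeros((n1 + n2, 4))
X[:n1, 0], X[:n1, 1] = 1.0, c1
X[n1:, 2], X[n1:, 3] = 1.0, c2

# X^T X is diagonal (the cross terms vanish because the x's are centred),
# so the least-squares estimate reduces to the four closed-form entries above
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
closed_form = np.array([y1.mean(), c1 @ y1 / (c1 @ c1),
                        y2.mean(), c2 @ y2 / (c2 @ c2)])
print(np.allclose(beta_hat, closed_form))   # True
```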

Also

\[\begin{eqnarray*} RSS &=& \mathbf{Y}^T\mathbf{Y}-\mathbf{Y}^T\mathbf{X}\boldsymbol{\hat{\beta}}\\ &=&\sum_{i=1}^2\sum_{j=1}^{n_i}y_{ij}^2-n_1\bar{y}_{1.}^2-\frac{(S_{x_1y_1})^2}{S_{x_1x_1}}-n_2\bar{y}_{2.}^2-\frac{(S_{x_2y_2})^2}{S_{x_2x_2}}\\ &=&S_{y_1y_1}-\frac{(S_{x_1y_1})^2}{S_{x_1x_1}}+S_{y_2y_2}-\frac{(S_{x_2y_2})^2}{S_{x_2x_2}}\\ &=&RSS_1+RSS_2 \end{eqnarray*}\]

where \(RSS_i\) is the residual sum-of-squares from a simple linear regression fitted to group \(i\).

We have therefore verified that the parameter estimates are identical to those obtained by fitting a separate regression line to each group.
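The decomposition \(RSS = RSS_1+RSS_2\) can also be checked numerically. The sketch below (same hypothetical data as before) compares the matrix form \(\mathbf{Y}^T\mathbf{Y}-\mathbf{Y}^T\mathbf{X}\boldsymbol{\hat{\beta}}\) with the two group-wise residual sums of squares.

```python
import numpy as np

# Same hypothetical data as in the earlier sketches
x1 = np.array([1.0, 2.0, 3.0, 4.0]);  y1 = np.array([2.1, 2.9, 4.2, 4.8])
x2 = np.array([1.0, 2.0, 3.0, 4.0]);  y2 = np.array([5.2, 4.1, 3.3, 1.9])
n1, n2 = len(x1), len(x2)
c1, c2 = x1 - x1.mean(), x2 - x2.mean()

Y = np.concatenate([y1, y2])
X = np.zeros((n1 + n2, 4))
X[:n1, 0], X[:n1, 1] = 1.0, c1
X[n1:, 2], X[n1:, 3] = 1.0, c2
beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)

# Matrix form of the residual sum of squares
RSS = Y @ Y - Y @ X @ beta_hat

# Group-wise form: S_yiyi - S_xiyi^2 / S_xixi for each group
d1, d2 = y1 - y1.mean(), y2 - y2.mean()
RSS1 = d1 @ d1 - (c1 @ y1) ** 2 / (c1 @ c1)
RSS2 = d2 @ d2 - (c2 @ y2) ** 2 / (c2 @ c2)
print(np.allclose(RSS, RSS1 + RSS2))   # True
```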

95% Confidence interval for the difference between the slope parameters of two regression lines

Model: \(E(y_{ij}) = \alpha_i+\beta_i(x_{ij}-\bar{x}_{i.})\)

We calculate the 95% confidence interval (C.I.) for \((\beta_1-\beta_2)\).

The quantity of interest, \((\beta_1-\beta_2)\), can be written as \(\mathbf{b}^T\boldsymbol{\beta}\) where

\[\mathbf{b}^T = \left( \begin{array}{cccc} 0 & 1 & 0 & -1 \\ \end{array} \right).\]

The standard formula for a confidence interval for \(\mathbf{b}^T\boldsymbol{\beta}\) now applies,

\[\mathbf{b}^T\boldsymbol{\hat{\beta}} \pm t(n-p; 0.975)\sqrt{\frac{RSS}{n-p}\mathbf{b}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{b}}\]

and \[\begin{eqnarray*} \mathbf{b}^T(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{b} &=& \left(\begin{array}{cccc} 0 & 1 & 0 & -1 \\ \end{array} \right) \left( \begin{array}{cccc} \frac{1}{n_1} & 0 & 0 & 0 \\ 0 & \frac{1}{S_{x_1x_1}} & 0 & 0 \\ 0 & 0 & \frac{1}{n_2} & 0 \\ 0 & 0 & 0 & \frac{1}{S_{x_2x_2}} \\ \end{array} \right) \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ -1 \\ \end{array} \right)\\ &=&\left(\begin{array}{cccc} 0 & \frac{1}{S_{x_1x_1}} & 0 & -\frac{1}{S_{x_2x_2}} \\ \end{array} \right) \left(\begin{array}{c} 0 \\ 1 \\ 0 \\ -1 \\ \end{array} \right)\\ &=&\frac{1}{S_{x_1x_1}} + \frac{1}{S_{x_2x_2}} \end{eqnarray*}\]

i.e. a 95% C.I. for \(\beta_1-\beta_2\) is

\[\hat{\beta}_1-\hat{\beta}_2 \pm t(n_1+n_2-4; 0.975)\sqrt{\left(\frac{RSS_1+RSS_2}{n_1+n_2-4}\right)\left(\frac{1}{S_{x_1x_1}}+\frac{1}{S_{x_2x_2}}\right)}\]
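A minimal numerical sketch of this interval, again using the hypothetical data from the earlier sketches and SciPy's `t.ppf` for the Student's t quantile:

```python
import numpy as np
from scipy import stats

# Same hypothetical data as in the earlier sketches
x1 = np.array([1.0, 2.0, 3.0, 4.0]);  y1 = np.array([2.1, 2.9, 4.2, 4.8])
x2 = np.array([1.0, 2.0, 3.0, 4.0]);  y2 = np.array([5.2, 4.1, 3.3, 1.9])
n1, n2 = len(x1), len(x2)
c1, c2 = x1 - x1.mean(), x2 - x2.mean()
d1, d2 = y1 - y1.mean(), y2 - y2.mean()

# Group-wise slope estimates and residual sums of squares
Sx1x1, Sx2x2 = c1 @ c1, c2 @ c2
b1_hat, b2_hat = c1 @ y1 / Sx1x1, c2 @ y2 / Sx2x2
RSS1 = d1 @ d1 - (c1 @ y1) ** 2 / Sx1x1
RSS2 = d2 @ d2 - (c2 @ y2) ** 2 / Sx2x2

# 95% C.I. for beta_1 - beta_2 on n1 + n2 - 4 degrees of freedom
df = n1 + n2 - 4
sigma2_hat = (RSS1 + RSS2) / df
se = np.sqrt(sigma2_hat * (1 / Sx1x1 + 1 / Sx2x2))
t_crit = stats.t.ppf(0.975, df)
ci = (b1_hat - b2_hat - t_crit * se, b1_hat - b2_hat + t_crit * se)
print(ci)   # if 0 lies inside this interval, the parallel-lines model is not rejected
```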

Interpreting the confidence interval

If the C.I. for \(\beta_1-\beta_2\) contains 0, we cannot reject the null hypothesis that the two regression lines are parallel, and so we adopt the parallel-lines model. It is then natural to fit the parallel-lines model and examine whether an even simpler model can be used.