16.5 Statistical Properties of SI Model Estimates
To determine the statistical properties of the plug-in principle/least squares/ML estimators \(\hat{\alpha}_{i}\), \(\hat{\beta}_{i}\) and \(\hat{\sigma}_{\epsilon,i}^{2}\) in the SI model, we treat them as functions of the random variables \(\{(R_{i,t},R_{Mt})\}_{t=1}^{T}\), where \(R_{i,t}\) and \(R_{Mt}\) are assumed to be generated by the SI model (16.1) - (16.5).
16.5.1 Bias
In the SI model, the estimators \(\hat{\alpha}_{i}\), \(\hat{\beta}_{i}\) and \(\hat{\sigma}_{\epsilon,i}^{2}\) (with degrees-of-freedom adjustment) are unbiased: \[\begin{eqnarray*} E[\hat{\alpha}_{i}] & = & \alpha_{i},\\ E[\hat{\beta}_{i}] & = & \beta_{i},\\ E[\hat{\sigma}_{\epsilon,i}^{2}] & = & \sigma_{\epsilon,i}^{2}. \end{eqnarray*}\] To show that \(\hat{\alpha}_{i}\) and \(\hat{\beta}_{i}\) are unbiased, it is useful to write the SI model for asset \(i\) in matrix form for \(t=1,\ldots,T\): \[ \mathbf{R}_{i}=\alpha_{i}\mathbf{1}+\beta_{i}\mathbf{R}_{M}+\epsilon_{i}=\mathbf{X}\gamma_{i}+\epsilon_{i}, \] where \(\mathbf{R}_{i}=(R_{i1},\ldots,R_{iT})^{\prime},\) \(\mathbf{R}_{M}=(R_{M1},\ldots,R_{MT})^{\prime}\), \(\epsilon_{i}=(\epsilon_{i1},\ldots,\epsilon_{iT})^{\prime}\), \(\mathbf{1}=(1,\ldots,1)^{\prime}\), \(\mathbf{X}=(\mathbf{1}\,\,\mathbf{R}_{M})\), and \(\gamma_{i}=(\alpha_{i},\beta_{i})^{\prime}.\) The estimator for \(\gamma_{i}\) is \[\begin{eqnarray*} \hat{\gamma}_{i} & = & (\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\mathbf{R}_{i}. \end{eqnarray*}\] Plugging in \(\mathbf{R}_{i}=\mathbf{X}\gamma_{i}+\epsilon_{i}\) gives \[\begin{eqnarray*} \hat{\gamma}_{i} & = & (\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\left(\mathbf{X}\gamma_{i}+\epsilon_{i}\right)=(\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\mathbf{X}\gamma_{i}+(\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\epsilon_{i}\\ & = & \gamma_{i}+(\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\epsilon_{i}. \end{eqnarray*}\] Then \[\begin{eqnarray*} E[\hat{\gamma}_{i}] & = & \gamma_{i}+E\left[(\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\epsilon_{i}\right]\\ & = & \gamma_{i}+E\left[(\mathbf{X}^{\prime}\mathbf{X})^{-1}\mathbf{X}^{\prime}\right]E[\epsilon_{i}]\,(\mathrm{because}\,\epsilon_{it}\,\mathrm{is}\,\mathrm{independent}\,\mathrm{of}\,R_{Mt})\\ & = & \gamma_{i}\,(\mathrm{because}\,E[\epsilon_{i}]=\mathbf{0}). \end{eqnarray*}\] The derivation of \(E[\hat{\sigma}_{\epsilon,i}^{2}]=\sigma_{\epsilon,i}^{2}\) is beyond the scope of this book and can be found in graduate econometrics textbooks such as Hayashi (2000).
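To make the matrix calculation concrete, the following R sketch (ours, not from the text; the parameter values \(\alpha_{i}=0.005\), \(\beta_{i}=1.2\) and \(\sigma_{\epsilon,i}=0.05\) are illustrative) simulates one sample from the SI model and verifies that the matrix formula for \(\hat{\gamma}_{i}\) reproduces the coefficients computed by `lm()`:

```r
# Minimal simulation sketch: the SI model parameters below are
# illustrative choices, not values from the text.
set.seed(123)
T <- 100                                 # sample size (T as in the text)
alpha.i <- 0.005; beta.i <- 1.2; sigma.e <- 0.05
R.M <- rnorm(T, mean = 0.01, sd = 0.08)  # simulated market returns
eps <- rnorm(T, mean = 0, sd = sigma.e)  # SI model errors
R.i <- alpha.i + beta.i * R.M + eps      # simulated asset returns

# gamma.hat = (X'X)^{-1} X'R.i with the T x 2 regressor matrix X = (1  R.M)
X <- cbind(1, R.M)
gamma.hat <- solve(t(X) %*% X) %*% t(X) %*% R.i
gamma.hat
coef(lm(R.i ~ R.M))                      # same estimates via lm()
```

Repeating the simulation many times and averaging \(\hat{\gamma}_{i}\) across replications would illustrate the unbiasedness result numerically.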
16.5.2 Precision
Under the assumptions of the SI model, analytic formulas for estimates of the standard errors for \(\hat{\alpha}_{i}\) and \(\hat{\beta}_{i}\) are given by: \[\begin{align} \widehat{\mathrm{se}}(\hat{\alpha}_{i}) & \approx\frac{\hat{\sigma}_{\epsilon,i}}{\sqrt{T\cdot\hat{\sigma}_{M}^{2}}}\cdot\sqrt{\frac{1}{T}\sum_{t=1}^{T}R_{Mt}^{2}},\tag{16.36}\\ \widehat{\mathrm{se}}(\hat{\beta}_{i}) & \approx\frac{\hat{\sigma}_{\epsilon,i}}{\sqrt{T\cdot\hat{\sigma}_{M}^{2}}},\tag{16.37} \end{align}\] where “\(\approx\)” denotes an approximation based on the CLT that becomes more accurate as the sample size grows. Remarks:
- \(\widehat{\mathrm{se}}(\hat{\alpha}_{i})\) and \(\widehat{\mathrm{se}}(\hat{\beta}_{i})\) are smaller the smaller is \(\hat{\sigma}_{\epsilon,i}\). That is, the closer returns are to the fitted regression line, the smaller the estimation errors in \(\hat{\alpha}_{i}\) and \(\hat{\beta}_{i}\).
- \(\widehat{\mathrm{se}}(\hat{\beta}_{i})\) is smaller the larger is \(\hat{\sigma}_{M}^{2}\). That is, the greater the variability in the market return \(R_{Mt}\), the smaller the estimation error in the estimated slope coefficient \(\hat{\beta}_{i}\). This is illustrated in Figure xxx: the left panel shows a data sample from the SI model with a small value of \(\hat{\sigma}_{M}^{2}\), and the right panel shows a sample with a large value of \(\hat{\sigma}_{M}^{2}\). In the right panel, the high variability in \(R_{Mt}\) makes the slope of the line easier to identify.
- Both \(\widehat{\mathrm{se}}(\hat{\alpha}_{i})\) and \(\widehat{\mathrm{se}}(\hat{\beta}_{i})\) go to zero as the sample size, \(T\), gets large. Since \(\hat{\alpha}_{i}\) and \(\hat{\beta}_{i}\) are unbiased estimators, this implies that they are also consistent estimators. That is, they converge to the true values \(\alpha_{i}\) and \(\beta_{i}\), respectively, as \(T\rightarrow\infty\).
- In R, the standard error values (16.36) and (16.37) are computed by applying the `summary()` function to an `"lm"` object, as in the sketch following this list.
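The following sketch (continuing the simulated data from the bias example above; our illustration, not the book's code) compares the standard errors reported by `summary()` with the analytic formulas (16.36) and (16.37). We assume \(\hat{\sigma}_{M}^{2}\) is the plug-in estimate with divisor \(1/T\):

```r
# Compare summary() standard errors with formulas (16.36) and (16.37),
# using the simulated R.i and R.M from the earlier sketch.
fit <- lm(R.i ~ R.M)
summary(fit)$coefficients[, "Std. Error"]  # se(alpha.hat), se(beta.hat)

sigma.e.hat  <- summary(fit)$sigma         # df-adjusted residual std dev
sigma2.M.hat <- mean((R.M - mean(R.M))^2)  # plug-in (1/T) market variance
se.beta.hat  <- sigma.e.hat / sqrt(T * sigma2.M.hat)  # formula (16.37)
se.alpha.hat <- se.beta.hat * sqrt(mean(R.M^2))       # formula (16.36)
c(se.alpha.hat, se.beta.hat)               # should match summary() closely
```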
There are no easy analytic formulas for the estimated standard errors of \(\hat{\sigma}_{\epsilon,i}^{2}\), \(\hat{\sigma}_{\epsilon,i}\) and \(\hat{R}^{2}\). Estimated standard errors for these estimators, however, can be computed easily using the bootstrap, as in the sketch below.
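As an illustration (a minimal nonparametric bootstrap sketch using the simulated data above, not the book's code), we resample the pairs \((R_{i,t},R_{Mt})\) with replacement, re-estimate the SI model in each bootstrap sample, and use the standard deviation of the bootstrap estimates as the estimated standard error; \(B=999\) is a common choice for the number of bootstrap samples:

```r
# Nonparametric bootstrap standard errors for sigma.e.hat and R-squared.
B <- 999                                   # number of bootstrap samples
boot.stats <- matrix(NA, B, 2, dimnames = list(NULL, c("sigma.e", "R2")))
for (b in 1:B) {
  idx <- sample(T, replace = TRUE)         # resample time indices
  fit.b <- lm(R.i[idx] ~ R.M[idx])         # re-fit SI model regression
  boot.stats[b, ] <- c(summary(fit.b)$sigma, summary(fit.b)$r.squared)
}
apply(boot.stats, 2, sd)                   # bootstrap standard errors
```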
16.5.3 Sampling Distribution and Confidence Intervals
Using arguments based on the CLT, it can be shown that for large enough \(T\) the estimators \(\hat{\alpha}_{i}\) and \(\hat{\beta}_{i}\) are approximately normally distributed: \[\begin{eqnarray*} \hat{\alpha}_{i} & \sim & N(\alpha_{i},\widehat{\mathrm{se}}(\hat{\alpha}_{i})^{2}),\\ \hat{\beta}_{i} & \sim & N(\beta_{i},\widehat{\mathrm{se}}(\hat{\beta}_{i})^{2}), \end{eqnarray*}\] where \(\widehat{\mathrm{se}}(\hat{\alpha}_{i})\) and \(\widehat{\mathrm{se}}(\hat{\beta}_{i})\) are given by (16.36) and (16.37), respectively. These approximate distributions give approximate 95% confidence intervals of the form \(\hat{\alpha}_{i}\pm1.96\cdot\widehat{\mathrm{se}}(\hat{\alpha}_{i})\) and \(\hat{\beta}_{i}\pm1.96\cdot\widehat{\mathrm{se}}(\hat{\beta}_{i})\).
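A short sketch (continuing the fitted `"lm"` object from the precision example above; ours, not the book's) computes these CLT-based intervals and compares them with R's built-in `confint()`, which uses Student-t quantiles and gives similar answers for large \(T\):

```r
# Approximate 95% confidence intervals from the CLT normal approximation.
est <- coef(fit)
se  <- summary(fit)$coefficients[, "Std. Error"]
cbind(lower = est - 1.96 * se, upper = est + 1.96 * se)
confint(fit, level = 0.95)                 # t-based intervals for comparison
```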