Chapter 10: The Aitken Model

  • Orthogonal Matrices: A square matrix $P$ is said to be orthogonal if and only if $P'P = PP' = I$
  • Spectral Decomposition Theorem: An $n \times n$ symmetric matrix $H$ may be decomposed as $H = P\Lambda P' = \sum_{i=1}^{n} \lambda_i p_i p_i'$ (see the numerical sketch below), where
    • $P$ is an $n \times n$ orthogonal matrix whose columns $p_1, \dots, p_n$ are the orthonormal eigenvectors of $H$
    • $\Lambda = \mathrm{diag}(\lambda_1, \dots, \lambda_n)$ is a diagonal matrix whose diagonal entries are the corresponding eigenvalues of $H$
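A minimal numerical sketch of the theorem (assuming numpy; the matrix $H$ below is invented for illustration). `np.linalg.eigh` returns the eigenvalues and orthonormal eigenvectors of a symmetric matrix, from which $H = P\Lambda P' = \sum_{i=1}^{n} \lambda_i p_i p_i'$ can be checked directly:

```python
import numpy as np

# A small symmetric matrix, invented for illustration
H = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh handles symmetric matrices: eigenvalues in ascending order,
# columns of P are the orthonormal eigenvectors p_1, ..., p_n
eigvals, P = np.linalg.eigh(H)
Lam = np.diag(eigvals)

# P is orthogonal: P'P = I
assert np.allclose(P.T @ P, np.eye(3))

# H = P Lam P' = sum_i lambda_i p_i p_i'
assert np.allclose(P @ Lam @ P.T, H)
assert np.allclose(sum(lam * np.outer(p, p) for lam, p in zip(eigvals, P.T)), H)
```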
  • Symmetric Square Root Matrix: Let $H$ be an $n \times n$ symmetric, nonnegative definite (NND) matrix. Then there exists a symmetric, NND matrix $B$ such that $BB = H$, where $B = P\Lambda^{1/2}P'$ (sketched below).
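A sketch of this construction (the helper name `sym_sqrt` is introduced here, not from the notes; negative powers assume $H$ is positive definite, which is also how the $V^{-1/2}$ used for the Aitken model below can be formed):

```python
import numpy as np

def sym_sqrt(H, power=0.5):
    """H^power = P diag(lambda_i^power) P' via the spectral decomposition.
    Assumes H is symmetric NND; negative powers require positive eigenvalues."""
    eigvals, P = np.linalg.eigh(H)
    return P @ np.diag(eigvals ** power) @ P.T

H = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = sym_sqrt(H)                 # B = P Lam^{1/2} P'
assert np.allclose(B @ B, H)    # BB = H
assert np.allclose(B, B.T)      # B is symmetric

# For positive definite H, power = -0.5 gives the inverse square root
assert np.allclose(sym_sqrt(H, -0.5) @ H @ sym_sqrt(H, -0.5), np.eye(2))
```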
  • Aitken Model: $y = X\beta + \epsilon$, $E(\epsilon) = 0$, $\mathrm{Var}(\epsilon) = \sigma^2 V$, where $V$ is a known positive definite variance matrix.
    • Premultiplying by $V^{-1/2}$ gives $V^{-1/2}y = V^{-1/2}X\beta + V^{-1/2}\epsilon$.
    • With $z = V^{-1/2}y$, $W = V^{-1/2}X$, and $\delta = V^{-1/2}\epsilon$, we have $z = W\beta + \delta$, $E(\delta) = 0$, and $\mathrm{Var}(\delta) = V^{-1/2}(\sigma^2 V)V^{-1/2} = \sigma^2 I$, which is a Gauss-Markov model.
    • The fitted values in the transformed model are $\hat{z} = P_W z = V^{-1/2}X(X'V^{-1}X)^{-}X'V^{-1}y$, where $P_W = W(W'W)^{-}W'$, so that $\hat{y} = V^{1/2}\hat{z} = X(X'V^{-1}X)^{-}X'V^{-1}y$.
    • If $C\beta$ is estimable, then by the Gauss-Markov theorem applied to the transformed model, the BLUE of $C\beta$ is the OLS estimator in that model: $C(W'W)^{-}W'z = C(X'V^{-1}X)^{-}X'V^{-1}y \equiv C\hat{\beta}_V$, which is called a Generalized Least Squares (GLS) estimator.
    • $\hat{\beta}_V = (X'V^{-1}X)^{-}X'V^{-1}y$ is a solution to the Aitken equations $X'V^{-1}X\beta = X'V^{-1}y$. When $V$ is diagonal, $\hat{\beta}_V$ is called a weighted least squares (WLS) estimator.
    • An unbiased estimator of $\sigma^2$ is $\frac{z'(I - P_W)z}{n - \mathrm{rank}(W)} = \frac{(y - X\hat{\beta}_V)'V^{-1}(y - X\hat{\beta}_V)}{n - \mathrm{rank}(X)}$, where $\mathrm{rank}(W) = \mathrm{rank}(X)$ because $V^{-1/2}$ is nonsingular (see the end-to-end sketch below).
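A minimal end-to-end sketch of the chain above, with simulated data (the sample size, design, $\beta$, $\sigma$, and diagonal $V$ are all invented for illustration; since $V$ is diagonal, $\hat{\beta}_V$ here is a WLS estimator):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Aitken model: y = X beta + eps, Var(eps) = sigma^2 V
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta, sigma = np.array([1.0, 2.0]), 0.5
V = np.diag(rng.uniform(0.5, 4.0, size=n))          # known, positive definite
eps = rng.multivariate_normal(np.zeros(n), sigma**2 * V)
y = X @ beta + eps

# Transform to a Gauss-Markov model: z = W beta + delta
V_inv_sqrt = np.diag(np.diag(V) ** -0.5)            # V^{-1/2} for diagonal V
z, W = V_inv_sqrt @ y, V_inv_sqrt @ X

# GLS: solve the Aitken equations X'V^{-1}X beta = X'V^{-1}y
# (X has full column rank here, so the solution is unique)
V_inv = np.diag(1.0 / np.diag(V))
beta_V = np.linalg.solve(X.T @ V_inv @ X, X.T @ V_inv @ y)

# Same answer as OLS applied to the transformed model
assert np.allclose(beta_V, np.linalg.lstsq(W, z, rcond=None)[0])

# Unbiased estimator of sigma^2:
# z'(I - P_W)z / (n - rank(W)) = (y - X beta_V)'V^{-1}(y - X beta_V) / (n - rank(X))
resid = y - X @ beta_V
sigma2_hat = resid @ V_inv @ resid / (n - np.linalg.matrix_rank(X))
print(beta_V, sigma2_hat)
```

For a general (non-diagonal) positive definite $V$, the symmetric square root sketch above gives $V^{-1/2}$; in the diagonal case it reduces to the elementwise $v_{ii}^{-1/2}$ used here.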