The Aitken Model
- Orthogonal Matrices: A square matrix $P$ is said to be orthogonal if and only if $P'P = I$.
- Spectral Decomposition Theorem: An $n \times n$ symmetric matrix $H$ may be decomposed as $H = P \Lambda P' = \sum_{i=1}^{n} \lambda_i p_i p_i'$
  - $P$ is an $n \times n$ orthogonal matrix whose columns $p_1, \ldots, p_n$ are the orthonormal eigenvectors of $H$
  - $\Lambda = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$ is a diagonal matrix whose diagonal entries are the eigenvalues of $H$
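A minimal NumPy sketch of these facts (the matrix $H$ below is an arbitrary example, not taken from the notes): `np.linalg.eigh` returns the eigenvalues and orthonormal eigenvectors of a symmetric matrix, which lets us check both the orthogonality of $P$ and the decomposition $H = P \Lambda P' = \sum_i \lambda_i p_i p_i'$.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
H = (A + A.T) / 2                # arbitrary 4x4 symmetric matrix

lam, P = np.linalg.eigh(H)       # eigenvalues (ascending) and orthonormal eigenvectors
Lam = np.diag(lam)

assert np.allclose(P.T @ P, np.eye(4))   # P is orthogonal: P'P = I
assert np.allclose(H, P @ Lam @ P.T)     # H = P Lambda P'
assert np.allclose(H, sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(4)))  # sum of rank-one terms
```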
- Symmetric Square Root Matrix: Let $H$ be an $n \times n$ symmetric, nonnegative definite (NND) matrix. Then there exists a symmetric, NND matrix $B$ such that $BB = H$, where $B = P \Lambda^{1/2} P'$.
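The square root can be checked the same way; here $H = A'A$ is NND by construction (an assumption of this example, not part of the notes), and a small clip guards against tiny negative eigenvalues from floating-point roundoff:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 4))
H = A.T @ A                      # A'A is symmetric and NND by construction

lam, P = np.linalg.eigh(H)
lam = np.clip(lam, 0.0, None)    # guard against tiny negative eigenvalues from roundoff
B = P @ np.diag(np.sqrt(lam)) @ P.T   # B = P Lambda^{1/2} P'

assert np.allclose(B, B.T)       # B is symmetric
assert np.allclose(B @ B, H)     # BB = H
```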
- Aitken Model: $y = X\beta + \epsilon$, $\mathrm{E}(\epsilon) = 0$, $\mathrm{Var}(\epsilon) = \sigma^2 V$, where $V$ is a known positive definite variance matrix.
  - Premultiplying by the symmetric square root $V^{-1/2}$ gives $V^{-1/2}y = V^{-1/2}X\beta + V^{-1/2}\epsilon$.
  - With $z = V^{-1/2}y$, $W = V^{-1/2}X$, and $\delta = V^{-1/2}\epsilon$, we have $z = W\beta + \delta$, $\mathrm{E}(\delta) = 0$, and $\mathrm{Var}(\delta) = \sigma^2 I$, which is a Gauss-Markov model.
  - $\hat{z} = V^{-1/2}X(X'V^{-1}X)^{-}X'V^{-1}y$, so that $\hat{y} = X(X'V^{-1}X)^{-}X'V^{-1}y$.
  - If $C\beta$ is estimable, we know the BLUE is the OLS estimator in the transformed model. $C(W'W)^{-}W'z = C(X'V^{-1}X)^{-}X'V^{-1}y \equiv C\hat{\beta}_V$ is called a Generalized Least Squares (GLS) estimator.
  - $\hat{\beta}_V = (X'V^{-1}X)^{-}X'V^{-1}y$ is a solution to the Aitken equations $X'V^{-1}X\beta = X'V^{-1}y$. When $V$ is diagonal, $\hat{\beta}_V$ is called the weighted least squares estimator (a numerical sketch follows this list).
  - An unbiased estimator of $\sigma^2$ is
    $$\frac{z'(I - P_W)z}{n - \mathrm{rank}(W)} = \frac{\|z - W\hat{\beta}_V\|^2}{n - \mathrm{rank}(W)} = \frac{(y - X\hat{\beta}_V)'V^{-1}(y - X\hat{\beta}_V)}{n - \mathrm{rank}(X)},$$
    where $P_W = W(W'W)^{-}W'$ and $\mathrm{rank}(W) = \mathrm{rank}(X)$ since $V^{-1/2}$ is nonsingular.
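Pulling the pieces together, here is a minimal simulation sketch of the GLS computations above. The simulated data, the diagonal choice of $V$, and the assumption that $X$ has full column rank (so the generalized inverse $(X'V^{-1}X)^{-}$ is an ordinary inverse) are all illustrative assumptions, not part of the notes.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3

# design matrix with full column rank (illustrative assumption)
X = np.column_stack([np.ones(n), rng.standard_normal((n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])           # hypothetical true coefficients
V = np.diag(rng.uniform(0.5, 3.0, size=n))  # known positive definite (here diagonal) V
sigma2 = 1.5

# simulate y = X beta + eps with Var(eps) = sigma^2 V
eps = np.linalg.cholesky(sigma2 * V) @ rng.standard_normal(n)
y = X @ beta + eps

Vinv = np.linalg.inv(V)

# GLS / weighted least squares: solve the Aitken equations X'V^{-1}X b = X'V^{-1}y
beta_V = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ y)

# equivalent route: OLS in the transformed Gauss-Markov model z = W beta + delta
Q = np.linalg.inv(np.linalg.cholesky(V))    # Q'Q = V^{-1}; plays the role of V^{-1/2}
z, W = Q @ y, Q @ X
beta_ols = np.linalg.lstsq(W, z, rcond=None)[0]
assert np.allclose(beta_V, beta_ols)

# unbiased estimator of sigma^2: (y - X beta_V)' V^{-1} (y - X beta_V) / (n - rank(X))
r = y - X @ beta_V
sigma2_hat = (r @ Vinv @ r) / (n - np.linalg.matrix_rank(X))
print(beta_V, sigma2_hat)
```

Note that the sketch uses an inverse Cholesky factor of $V$ rather than the symmetric square root; any factor $Q$ with $Q'Q = V^{-1}$ gives $W'W = X'V^{-1}X$ and $W'z = X'V^{-1}y$, and hence the same estimator.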