8.2 Estimation of Functions of GWN Model Parameters
In Chapter 7, we used the plug-in principle to motivate estimators of the GWN model parameters μi, σ2i, σi, σij, and ρij. We can also use the plug-in principle to estimate functions of the GWN model parameters.
Definition 4.1 (Plug-in principle for functions of GWN model parameters) Let θ denote a k×1 vector of GWN model parameters, and let η=f(θ) where f:Rk→R is a continuous and differentiable function of θ. Let ˆθ denote the plug-in estimator of θ. Then the plug-in estimator of η is ˆη=f(ˆθ).
Let W0 = $100,000 and rf = 0.03/12 = 0.0025. Using the plug-in estimates of the GWN model parameters, the plug-in estimates of the functions (8.1) - (8.4) are:
W0 = 100000
r.f = 0.03/12
f1.hat = muhatS + sigmahatS*qnorm(0.05)                 # 5% quantile of simple return
f2.hat = -W0*f1.hat                                     # 5% VaR based on simple returns
f3.hat = -W0*(exp(muhatC + sigmahatC*qnorm(0.05)) - 1)  # 5% VaR based on cc returns
f4.hat = (muhatS - r.f)/sigmahatS                       # Sharpe ratio
fhat.vals = cbind(f1.hat, f2.hat, f3.hat, f4.hat)
colnames(fhat.vals) = c("f1", "f2", "f3", "f4")
rownames(fhat.vals) = "Estimate"
fhat.vals
##              f1    f2    f3     f4
## Estimate -0.158 15780 14846 0.0655
◼
Plug-in estimators of functions are random variables and are subject to estimation error. Just as we studied the statistical properties of the plug-in estimators of the GWN model parameters we can study the statistical properties of the plug-in estimators of functions of GWN model parameters.
8.2.1 Bias
As discussed in Chapter 7, a desirable finite sample property of an estimator ˆθ of θ is unbiasedness: on average, over many hypothetical samples, ˆθ is equal to θ. When estimating f(θ), unbiasedness of ˆθ may or may not carry over to f(ˆθ).
Suppose f(θ) is a linear function of the elements of θ so that
f(θ)=a+b1θ1+b2θ2+⋯+bkθk.
where a, b1, b2, …, bk are constants. For example, the functions (8.1) and (8.2) are linear functions of the elements of θ=(μ,σ)′. Let ˆθ denote an estimator of θ and let f(ˆθ) denote the plug-in estimator of f(θ). If E[ˆθi]=θi for all elements of θ (each ˆθi is unbiased for θi, i=1,…,k), then
E[f(ˆθ)]=a+b1E[ˆθ1]+b2E[ˆθ2]+⋯+bkE[ˆθk]=a+b1θ1+b2θ2+⋯+bkθk=f(θ),
and so f(ˆθ) is unbiased for f(θ).
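This result can be checked with a small Monte Carlo experiment. The sketch below (all object names and parameter values are illustrative, not from the text) simulates many samples from a normal distribution and verifies that the average of the plug-in estimates of the linear function f(μ) = a + b·μ is close to the true value:

```r
# Monte Carlo check: a linear function of an unbiased estimator is unbiased.
# The parameter values and object names below are hypothetical illustrations.
set.seed(123)
mu = 0.05; sigma = 0.10       # true GWN parameters
a = 1; b = -2                 # constants in f(mu) = a + b*mu
n.obs = 60; n.sim = 10000
f.hat = numeric(n.sim)
for (i in 1:n.sim) {
  r = rnorm(n.obs, mean = mu, sd = sigma)
  f.hat[i] = a + b*mean(r)    # plug-in estimate of f(mu)
}
mean(f.hat)                   # close to f(mu) = a + b*mu = 0.9
```

Because mean(r) is unbiased for μ, the simulated average of f.hat is (up to Monte Carlo error) equal to f(μ).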
Now suppose f(θ) is a nonlinear function of the elements of θ. For example, the functions (8.3) and (8.4) are nonlinear functions of the elements of θ=(μ,σ)′. Then, in general, f(ˆθ) is not unbiased for f(θ) even if ˆθ is unbiased for θ. The direction of the bias depends on the properties of f(θ). If f(⋅) is convex at θ or concave at θ then we can say something about the direction of the bias in f(ˆθ).
Definition 7.2 (Convex function) A scalar function f(θ) is convex if the straight line connecting any two points on the graph of f(θ) lies on or above the graph. More formally, f(θ) is convex if for all θ1 and θ2 and for all 0≤α≤1,
f(αθ1+(1−α)θ2) ≤ αf(θ1)+(1−α)f(θ2).
For example, the function f(θ)=θ2 is a convex function. Another common convex function is exp(θ).
Similarly, f(θ) is concave if −f(θ) is convex; equivalently, the line connecting any two points on the graph of f(θ) lies on or below the graph, so that f(αθ1+(1−α)θ2) ≥ αf(θ1)+(1−α)f(θ2). For example, the function f(θ)=−θ2 is concave. Other common concave functions are log(θ) and √θ.
Jensen’s inequality tells us that if f is convex (concave) and ˆθ is unbiased, then f(ˆθ) is positively (negatively) biased. For example, we know from Chapter 7 that ˆσ2 is unbiased for σ2. Now, ˆσ=f(ˆσ2)=√ˆσ2 is a concave function of ˆσ2. From Jensen’s inequality, we know that ˆσ is negatively biased (i.e., too small on average).
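The downward bias in ˆσ can be illustrated by simulation. The sketch below (parameter values are illustrative assumptions) draws many small samples, confirms that the sample variance is unbiased for σ2, and shows that its square root is on average below σ:

```r
# Monte Carlo illustration of Jensen's inequality: sigma.hat = sqrt(sigma2.hat)
# is negatively biased even though sigma2.hat is unbiased for sigma^2.
# Parameter values below are hypothetical illustrations.
set.seed(123)
sigma = 0.10; n.obs = 10; n.sim = 10000
sigma2.hat = numeric(n.sim)
for (i in 1:n.sim) {
  r = rnorm(n.obs, mean = 0.05, sd = sigma)
  sigma2.hat[i] = var(r)        # unbiased estimator of sigma^2
}
mean(sigma2.hat)                # close to sigma^2 = 0.01
mean(sqrt(sigma2.hat))          # noticeably below sigma = 0.10
```

The bias is largest in small samples; with n.obs = 10 the average of sqrt(sigma2.hat) falls a few percent short of σ.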
◼
8.2.2 Consistency
An important property of the plug-in estimators of the GWN model parameters is that they are consistent: as the sample size grows, the estimators get closer and closer to their true values, and in the limit of an infinitely large sample they equal the true values.
Consistency also holds for plug-in estimators of functions of GWN model parameters. The justification comes from Slutsky’s Theorem.
Proposition 8.2 (Slutsky’s Theorem) Let ˆθ be consistent for θ, and let f:Rk→R be continuous at θ. Then f(ˆθ) is consistent for f(θ).
The key condition on f(θ) is continuity at θ.39
All of the functions (8.1) - (8.4) are continuous at θ=(μ,σ)′. By Slutsky’s Theorem, all of the plug-in estimators of the functions are also consistent.
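Consistency of a plug-in estimator can be visualized by computing it over increasing sample sizes. The sketch below (true parameter values and object names are illustrative assumptions) does this for the Sharpe ratio function (8.4):

```r
# Illustration of consistency: the plug-in Sharpe ratio estimate settles down
# to the true value as the sample size grows.
# Parameter values below are hypothetical illustrations.
set.seed(123)
mu = 0.01; sigma = 0.05; r.f = 0.0025
SR.true = (mu - r.f)/sigma            # true Sharpe ratio = 0.15
n.vals = c(50, 500, 5000, 50000)
SR.hat = sapply(n.vals, function(n) {
  r = rnorm(n, mean = mu, sd = sigma)
  (mean(r) - r.f)/sd(r)               # plug-in estimate from a sample of size n
})
cbind(n = n.vals, SR.hat = SR.hat)
```

Because mean(r) and sd(r) are consistent and the Sharpe ratio function is continuous at (μ,σ)′ (with σ > 0), Slutsky's Theorem implies the plug-in estimates converge to SR.true as n grows.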
◼
Recall that the intuitive definition of continuity is that you can draw the function on paper without lifting up your pen.↩︎