## 8.2 Estimation of Functions of GWN Model Parameters

In Chapter 7, we used the plug-in principle to motivate estimators of the GWN model parameters $$\mu_i$$, $$\sigma_i^2$$, $$\sigma_i$$, $$\sigma_{ij}$$, and $$\rho_{ij}$$. We can also use the plug-in principle to estimate functions of the GWN model parameters.

Definition 8.1 (Plug-in principle for functions of GWN model parameters) Let $$\theta$$ denote a $$k \times 1$$ vector of GWN model parameters, and let $$\eta = f(\theta)$$, where $$f:\mathbb{R}^k \rightarrow \mathbb{R}$$ is a continuous and differentiable function of $$\theta$$. Let $$\hat{\theta}$$ denote the plug-in estimator of $$\theta$$. Then the plug-in estimator of $$\eta$$ is $$\hat{\eta} = f(\hat{\theta})$$.

Example 8.1 (Plug-in estimates of example functions for example data)

Let $$W_0 = \$100,000$$ and $$r_f = 0.03/12 = 0.0025$$. Using the plug-in estimates of the GWN model parameters, the plug-in estimates of the functions (8.1) - (8.4) are:

```r
W0 = 100000
r.f = 0.03/12
f1.hat = muhatS + sigmahatS*qnorm(0.05)
f2.hat = -W0*f1.hat
f3.hat = -W0*(exp(muhatC + sigmahatC*qnorm(0.05)) - 1)
f4.hat = (muhatS-r.f)/sigmahatS
fhat.vals = cbind(f1.hat, f2.hat, f3.hat, f4.hat)
colnames(fhat.vals) = c("f1", "f2", "f3", "f4")
rownames(fhat.vals) = "Estimate"
fhat.vals
##              f1    f2    f3     f4
## Estimate -0.158 15780 14846 0.0655
```

$$\blacksquare$$

Plug-in estimators of functions are random variables and are subject to estimation error. Just as we studied the statistical properties of the plug-in estimators of the GWN model parameters we can study the statistical properties of the plug-in estimators of functions of GWN model parameters.

### 8.2.1 Bias

As discussed in Chapter 7, a desirable finite sample property of an estimator $$\hat{\theta}$$ of $$\theta$$ is unbiasedness: on average over many hypothetical samples $$\hat{\theta}$$ is equal to $$\theta$$. When estimating $$f(\theta)$$, unbiasedness of $$\hat{\theta}$$ may or may not carry over to $$f(\hat{\theta})$$.

Suppose $$f(\theta)$$ is a linear function of the elements of $$\theta$$ so that

$f(\theta) = a + b_1 \theta_1 + b_2 \theta_2 + \cdots + b_k \theta_k,$

where $$a, b_1, b_2, \ldots, b_k$$ are constants. For example, the functions (8.1) and (8.2) are linear functions of the elements of $$\theta = (\mu, \sigma)^{\prime}.$$ Let $$\hat{\theta}$$ denote an estimator of $$\theta$$ and let $$f(\hat{\theta})$$ denote the plug-in estimator of $$f(\theta)$$. If $$E[\hat{\theta}]=\theta$$ for all elements of $$\theta$$ ($$\hat{\theta}_i$$ is unbiased for $$\theta_i$$, $$i=1,\ldots, k$$), then

\begin{align*} E[f(\hat{\theta})] & = a + b_1 E[\hat{\theta}_1] + b_2 E[\hat{\theta}_2] + \cdots + b_k E[\hat{\theta}_k] \\ & = a + b_1 \theta_1 + b_2 \theta_2 + \cdots + b_k \theta_k = f(\theta), \end{align*}

and so $$f(\hat{\theta})$$ is unbiased for $$f(\theta)$$.
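This unbiasedness result is easy to check by simulation. The sketch below uses hypothetical parameter values ($$\mu$$, $$\sigma$$, and the constants `a` and `b` are all made up for illustration) and verifies that the Monte Carlo average of a linear function of the sample mean is close to the function of the true mean:

```r
# Monte Carlo sketch (hypothetical values): a linear function of an unbiased
# estimator is itself unbiased.
set.seed(123)
mu <- 0.05; sigma <- 0.10        # assumed true GWN parameters
a <- 1; b <- -2                  # assumed constants in f(theta) = a + b*theta
n.obs <- 60                      # observations per simulated sample
n.sim <- 10000                   # number of Monte Carlo samples

f.hat <- replicate(n.sim, {
  r <- rnorm(n.obs, mean = mu, sd = sigma)
  a + b*mean(r)                  # plug-in estimate f(mu.hat)
})
mean(f.hat)                      # close to f(mu) = a + b*mu = 0.9
```

The simulation average of `f.hat` matches $$f(\mu)$$ up to Monte Carlo error, consistent with the algebra above.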

Now suppose $$f(\theta)$$ is a nonlinear function of the elements of $$\theta$$. For example, the functions (8.3) and (8.4) are nonlinear functions of the elements of $$\theta = (\mu, \sigma)^{\prime}.$$ Then, in general, $$f(\hat{\theta})$$ is not unbiased for $$f(\theta)$$ even if $$\hat{\theta}$$ is unbiased for $$\theta$$. The direction of the bias depends on the properties of $$f(\theta)$$. If $$f(\cdot)$$ is convex at $$\theta$$ or concave at $$\theta$$ then we can say something about the direction of the bias in $$f(\hat{\theta})$$.

Definition 8.2 (Convex function) A scalar function $$f(\theta)$$ is convex if every point on a straight line connecting any two points on the graph of $$f(\theta)$$ lies above or on the graph. More formally, we say $$f(\theta)$$ is convex if for all $$\theta_1$$ and $$\theta_2$$ and for all $$0 \le \alpha \le 1$$,

$\begin{equation*} f(\alpha \theta_1 + (1-\alpha) \theta_2) \le \alpha f(\theta_1) + (1 - \alpha)f(\theta_2). \end{equation*}$

For example, the function $$f(\theta)=\theta^2$$ is a convex function. Another common convex function is $$\exp(\theta)$$.
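The defining inequality can be checked numerically. The sketch below evaluates both sides for $$f(\theta)=\theta^2$$ over a grid of $$\alpha$$ values (the endpoints $$\theta_1 = -1$$ and $$\theta_2 = 3$$ are arbitrary):

```r
# Numerical check of the convexity inequality for f(theta) = theta^2:
# the chord between any two points lies on or above the graph.
f <- function(theta) theta^2
theta1 <- -1; theta2 <- 3        # arbitrary endpoints
alpha <- seq(0, 1, by = 0.01)
lhs <- f(alpha*theta1 + (1 - alpha)*theta2)      # function at the mixture
rhs <- alpha*f(theta1) + (1 - alpha)*f(theta2)   # mixture of function values
all(lhs <= rhs)                  # TRUE
```

Repeating the check with $$f(\theta) = -\theta^2$$ reverses the inequality, which is the concave case defined next.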

Definition 8.3 (Concave function) A function $$f(\theta)$$ is concave if $$-f(\theta)$$ is convex.

For example, the function $$f(\theta) = - \theta^2$$ is concave. Other common concave functions are $$\log(\theta)$$ and $$\sqrt{\theta}$$.

Proposition 8.1 (Jensen’s inequality) Let $$\theta$$ be a scalar parameter and let $$\hat{\theta}$$ be an unbiased estimator of $$\theta$$ so that $$E[\hat{\theta}]=\theta$$. Let $$f(\theta)$$ be a convex function of $$\theta$$. Then $\begin{equation} E[f(\hat{\theta})] \ge f(E[\hat{\theta}]) = f(\theta). \end{equation}$

Jensen’s inequality tells us that if $$f$$ is convex (concave) and $$\hat{\theta}$$ is unbiased, then $$f(\hat{\theta})$$ is positively (negatively) biased. For example, we know from Chapter 7 that $$\hat{\sigma}^2$$ is unbiased for $$\sigma^2$$. Now, $$\hat{\sigma} = f(\hat{\sigma}^2) = \sqrt{\hat{\sigma}^2}$$ is a concave function of $$\hat{\sigma}^2$$. From Jensen’s inequality, we know that $$\hat{\sigma}$$ is negatively biased (i.e., on average too small).

$$\blacksquare$$
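The negative bias in $$\hat{\sigma}$$ can be illustrated by simulation. The sketch below uses assumed values for $$\sigma$$ and the sample size; the bias is most visible in small samples:

```r
# Monte Carlo sketch (assumed parameters): the sample variance is unbiased
# for sigma^2, but its square root is negatively biased for sigma.
set.seed(123)
sigma <- 0.10; n.obs <- 10; n.sim <- 10000
sigma.hat <- replicate(n.sim, sd(rnorm(n.obs, mean = 0, sd = sigma)))
mean(sigma.hat^2)   # close to sigma^2 = 0.01 (unbiased)
mean(sigma.hat)     # below sigma = 0.10 (negative bias from concavity)
```

Increasing `n.obs` shrinks the bias toward zero, which anticipates the consistency results of the next section.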

### 8.2.2 Consistency

An important property of the plug-in estimators of the GWN model parameters is that they are consistent: as the sample size gets larger and larger the estimators get closer and closer to their true values, converging in probability to the true values as the sample size goes to infinity.

Consistency also holds for plug-in estimators of functions of GWN model parameters. The justification comes from Slutsky’s Theorem.

Proposition 8.2 (Slutsky’s Theorem) Let $$\hat{\theta}$$ be consistent for $$\theta$$, and let $$f:\mathbb{R}^k \rightarrow \mathbb{R}$$ be continuous at $$\theta$$. Then $$f(\hat{\theta})$$ is consistent for $$f(\theta)$$.

The key condition on $$f(\theta)$$ is continuity at $$\theta$$.[^1]

Example 8.2 (Consistency of functions of GWN model parameters)

All of the functions (8.1) - (8.4) are continuous at $$\theta = (\mu, \sigma)'$$. By Slutsky’s Theorem, all of the plug-in estimators of the functions are also consistent.

$$\blacksquare$$
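Consistency can also be seen by simulation. The sketch below estimates the Sharpe ratio (function (8.4)) from simulated GWN returns of increasing length, using assumed values for $$\mu$$, $$\sigma$$, and $$r_f$$; the estimation error shrinks as the sample size grows:

```r
# Simulation sketch (assumed parameters): the plug-in Sharpe ratio estimate
# converges to its true value as the sample size grows.
set.seed(123)
mu <- 0.05; sigma <- 0.10; r.f <- 0.0025
SR.true <- (mu - r.f)/sigma
n.vals <- c(100, 10000, 1000000)
SR.hat <- sapply(n.vals, function(n) {
  r <- rnorm(n, mean = mu, sd = sigma)
  (mean(r) - r.f)/sd(r)          # plug-in estimate f(theta.hat)
})
abs(SR.hat - SR.true)            # estimation error shrinks with n
```

Because $$f(\mu, \sigma) = (\mu - r_f)/\sigma$$ is continuous at any $$\theta$$ with $$\sigma > 0$$, Slutsky's Theorem guarantees this convergence.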

[^1]: Recall, the intuitive definition of continuity is that you can draw the function on paper without lifting up your pen.