4.2 Asymptotic properties

The asymptotic properties of the local polynomial estimator give us valuable insights into its performance. In particular, they allow us to answer, precisely, the following questions:

What affects the performance of the local polynomial estimator? Is local linear estimation better than local constant estimation? What is the effect of h on the estimates?

The asymptotic analysis of the local constant and local linear estimators¹ is achieved, as done in Sections 2.3 and 3.3, by examining the asymptotic bias and variance.

In order to establish a framework for the analysis, we consider the so-called location-scale model for Y and its predictor X:

\begin{align*} Y=m(X)+\sigma(X)\varepsilon, \end{align*}

where

\begin{align*} \sigma^2(x):=\mathbb{V}\mathrm{ar}[Y| X=x] \end{align*}

is the conditional variance of Y given X, and \varepsilon is such that \mathbb{E}[\varepsilon]=0 and \mathbb{V}\mathrm{ar}[\varepsilon]=1. Recall that, since the conditional variance is not forced to be constant, we are implicitly allowing for heteroscedasticity.²
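For concreteness, the following minimal Python sketch simulates from a location-scale model. The particular m, \sigma, and design distribution are illustrative assumptions, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (hypothetical) choices of the model ingredients:
m = lambda x: np.sin(2 * np.pi * x)       # regression function m
sigma = lambda x: 0.5 + 0.25 * x          # conditional standard deviation (heteroscedastic)

n = 500
X = rng.uniform(0, 1, n)                  # predictor's sample
eps = rng.standard_normal(n)              # E[eps] = 0, Var[eps] = 1
Y = m(X) + sigma(X) * eps                 # location-scale model Y = m(X) + sigma(X) * eps
```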

Note that for the derivation of the Nadaraya–Watson estimator and the local polynomial estimator we did not make any particular assumption, beyond the (implicit) differentiability of m up to order p for the local polynomial estimator. The following assumptions³ are the only requirements for performing the asymptotic analysis of the estimator:

  • A1.⁴ m is twice continuously differentiable.
  • A2.⁵ \sigma^2 is continuous and positive.
  • A3.⁶ f, the marginal pdf of X, is continuously differentiable and bounded away from zero.⁷
  • A4.⁸ The kernel K is a symmetric and bounded pdf with finite second moment and is square integrable.
  • A5.⁹ h=h_n is a deterministic sequence of bandwidths such that, when n\to\infty, h\to0 and nh\to\infty.

The bias and variance are studied in their conditional versions on the predictor’s sample X1,,Xn. The reason for analyzing the conditional instead of the unconditional versions is to avoid technical difficulties that integration with respect to the unknown predictor’s density may pose. This is in the spirit of what was done in parametric inference (observe Sections B.1.2 and B.2.2).

The main result follows. It provides useful insights into the effects of p, m, f (standing from now on for the marginal pdf of X), and \sigma^2 on the performance of \hat{m}(\cdot;p,h) for p=0,1.

Theorem 4.1 Under A1–A5, the conditional bias and variance of the local constant (p=0) and local linear (p=1) estimators are

\begin{align} \mathrm{Bias}[\hat{m}(x;p,h)| X_1,\ldots,X_n]&=B_p(x)h^2+o_\mathbb{P}(h^2),\tag{4.16}\\ \mathbb{V}\mathrm{ar}[\hat{m}(x;p,h)| X_1,\ldots,X_n]&=\frac{R(K)}{nhf(x)}\sigma^2(x)+o_\mathbb{P}\big((nh)^{-1}\big),\tag{4.17} \end{align}

where

\begin{align*} B_p(x):=\begin{cases} \frac{\mu_2(K)}{2}\left\{m''(x)+2\frac{m'(x)f'(x)}{f(x)}\right\}, & \text{if } p=0,\\ \frac{\mu_2(K)}{2}m''(x), & \text{if } p=1. \end{cases} \end{align*}

Remark. The little-o_\mathbb{P}s in (4.16) and (4.17) appear (instead of little-os as in Theorem 2.1) because \mathrm{Bias}[\hat{m}(x;p,h)| X_1,\ldots,X_n] and \mathbb{V}\mathrm{ar}[\hat{m}(x;p,h)| X_1,\ldots,X_n] are random variables. Then, the asymptotic expansions of these random variables have stochastic remainders that converge to zero in probability at specific rates.
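To make Theorem 4.1 tangible, the sketch below implements the local constant and local linear estimators by weighted least squares and approximates their conditional bias at a fixed point by Monte Carlo, comparing it against the leading term B_p(x)h^2. All concrete choices (Gaussian kernel, for which \mu_2(K)=1, the regression function, the Beta(2,2) design, x_0, h) are illustrative assumptions, and the agreement with the leading term is only approximate, since the o_\mathbb{P}(h^2) remainder is neglected.

```python
import numpy as np

rng = np.random.default_rng(1)

def local_poly(x, X, Y, h, p):
    """m_hat(x; p, h): intercept of the kernel-weighted degree-p polynomial fit at x
    (Gaussian kernel), computed by weighted least squares."""
    w = np.exp(-0.5 * ((X - x) / h) ** 2)             # kernel weights K_h(X_i - x), up to constants
    Z = np.vander(X - x, N=p + 1, increasing=True)    # design matrix [1, (X - x), ..., (X - x)^p]
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(Z * sw, Y * sw.ravel(), rcond=None)
    return beta[0]

# Hypothetical setting: m(x) = sin(2*pi*x), sigma(x) = 0.3, X ~ Beta(2, 2) (non-uniform design)
m = lambda x: np.sin(2 * np.pi * x)
n, h, x0, nrep = 1000, 0.05, 0.4, 500
X = rng.beta(2, 2, n)                                 # fixed predictor sample: we condition on it

est = {0: [], 1: []}
for _ in range(nrep):                                 # replicate only the responses Y
    Y = m(X) + 0.3 * rng.standard_normal(n)
    for p in (0, 1):
        est[p].append(local_poly(x0, X, Y, h, p))

# Leading bias terms of Theorem 4.1 (Gaussian kernel, so mu_2(K) = 1)
m1 = lambda x: 2 * np.pi * np.cos(2 * np.pi * x)           # m'
m2 = lambda x: -(2 * np.pi) ** 2 * np.sin(2 * np.pi * x)   # m''
dlogf = lambda x: (1 - 2 * x) / (x * (1 - x))              # f'/f for the Beta(2, 2) density
B0, B1 = 0.5 * (m2(x0) + 2 * m1(x0) * dlogf(x0)), 0.5 * m2(x0)

for p, B in ((0, B0), (1, B1)):
    print(f"p={p}: empirical bias {np.mean(est[p]) - m(x0):+.4f}  vs  B_p(x0) h^2 = {B * h**2:+.4f}")
```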

The bias and variance expressions (4.16) and (4.17) yield very interesting insights:

  1. Bias:

    • The bias decreases quadratically with h for both p=0,1. That means that small bandwidths h give estimators with low bias, whereas large bandwidths give heavily biased estimators.

    • For p=1, the bias at x is directly proportional to m''(x). Therefore:

      • The bias is negative in regions where m is concave, i.e., \{x\in\mathbb{R}:m''(x)<0\}. These regions correspond to peaks and local maxima of m.
      • Conversely, the bias is positive in regions where m is convex, i.e., \{x\in\mathbb{R}:m''(x)>0\}. These regions correspond to valleys and local minima of m.
      • All in all, the “wilder” the curvature of m, the larger the bias and the harder to estimate m.
    • For p=0, the bias at x is more convoluted and is affected by m''(x), m'(x), f'(x), and f(x):

      • The quantities m'(x), f'(x), and f(x) are not present in the bias when p=1. Precisely, for the local constant estimator, the lower the density f(x), the larger the bias (in absolute value). Also, the faster m and f change at x (larger derivatives), the larger the bias. Thus the bias of the local constant estimator is sensitive to m'(x), f'(x), and f(x), unlike the local linear estimator, which depends on m''(x) only. In particular, the dependence of the bias of the local constant estimator on f'(x) and f(x) is referred to as the design bias, since it stems merely from the predictor’s distribution (a worked example illustrating it is given after this list).
      • The quantity m''(x) contributes to the bias in the same way as it does for p = 1, this contribution being negative in regions corresponding to peaks and local maxima of m, and positive in the valleys and local minima of m. In general, the “wilder” the curvature of m, the larger its contribution to the bias and the harder to estimate m.
  2. Variance:

    • The main term of the variance is the same for p=0,1. In addition, it depends directly on \frac{\sigma^2(x)}{f(x)}. As a consequence, the lower the density, the more variable \hat{m}(x;p,h) is.¹⁰ Also, the larger the conditional variance at x, \sigma^2(x), the more variable \hat{m}(x;p,h) is.¹¹
    • The variance decreases at the rate (nh)^{-1}. This is related to the so-called effective sample size nh, which can be thought of as the amount of data in the neighborhood of x that is employed for performing the regression.¹²
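As a small worked example of the design bias (with illustrative choices not taken from the text), take m(x)=x^2 and a design density proportional to the standard normal on a compact interval, so that f'(x)/f(x)=-x. Then

\begin{align*} B_1(x)=\frac{\mu_2(K)}{2}\,m''(x)=\mu_2(K), \qquad B_0(x)=\frac{\mu_2(K)}{2}\left\{2+2(2x)(-x)\right\}=\mu_2(K)\left(1-2x^2\right). \end{align*}

The local linear leading bias is constant and positive, consistently with the convexity of m, whereas the local constant one changes sign at |x|=1/\sqrt{2}: the f'(x)/f(x) term alone can distort, and even flip, the sign of the leading bias.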

All in all, the main takeaway of the analysis of p=0 vs. p=1 is:

p=1 has, in general, smaller bias than that of p=0 (but of the same order) while keeping the same variance as p=0.

An extended version of Theorem 4.1, given in Theorem 3.1 of Fan and Gijbels (1996), shows that this phenomenon extends to higher orders: odd-order polynomial fits (p=2\nu+1, \nu\in\mathbb{N}) introduce an extra coefficient in the polynomial fit that allows them to reduce the bias, while maintaining the same variance as the preceding¹³ even order (p=2\nu). So, for example, local cubic fits are preferred to local quadratic fits. This motivates the following motto: local polynomial fitting is an odd world (Fan and Gijbels (1996)).

Finally, we have the asymptotic pointwise normality of the estimator, a result analogous to Theorem 2.2 that is helpful for obtaining pointwise confidence intervals for m afterwards.

Theorem 4.2 Assume that \mathbb{E}[(Y-m(x))^{2+\delta}\vert X=x]<\infty for some \delta>0. Then, under A1–A5,

\begin{align} &\sqrt{nh}(\hat m(x;p,h)-\mathbb{E}[\hat m(x;p,h)|X_1,\ldots,X_n])\stackrel{d}{\longrightarrow}\mathcal{N}\left(0,\frac{R(K)\sigma^2(x)}{f(x)}\right).\tag{4.18}\end{align}

Additionally, if nh^5=O(1), then

\begin{align} &\sqrt{nh}\left(\hat m(x;p,h)-m(x)-B_p(x)h^2\right)\stackrel{d}{\longrightarrow}\mathcal{N}\left(0,\frac{R(K)\sigma^2(x)}{f(x)}\right).\tag{4.19} \end{align}
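As a sketch of how these results are used later, (4.18)–(4.19) suggest the asymptotic 100(1-\alpha)\% pointwise confidence interval

\begin{align*} \hat m(x;p,h)-B_p(x)h^2\pm z_{1-\alpha/2}\sqrt{\frac{R(K)\hat\sigma^2(x)}{nh\hat f(x)}}, \end{align*}

where z_{1-\alpha/2} is the upper \alpha/2 quantile of a \mathcal{N}(0,1) and \hat\sigma^2(x) and \hat f(x) are plug-in estimates. This is only a sketch: in practice B_p(x) is unknown, so it is either estimated or made asymptotically negligible by undersmoothing (choosing h such that nh^5\to0).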

Exercise 4.10 Theorem 4.1 gives some additional insights with respect to B_p(x), the dominating term of the bias:

  1. If m is constant,¹⁴ then B_0(x)=0.
  2. If m is linear,¹⁵ then B_1(x)=0.

That is, for each of these two cases, \mathrm{Bias}[\hat{m}(x;p,h)| X_1,\ldots,X_n]=o_\mathbb{P}(h^2). The local constant and local linear estimators are actually exactly unbiased when estimating constant and linear regression functions, respectively. That is, \mathbb{E}_c[\hat{m}(x;0,h)| X_1,\ldots,X_n]=c and \mathbb{E}_{a,b}[\hat{m}(x;1,h)| X_1,\ldots,X_n]=ax+b, where \mathbb{E}_c[\cdot|X_1,\ldots,X_n] and \mathbb{E}_{a,b}[\cdot|X_1,\ldots,X_n] represent the conditional expectations under the constant and linear models, respectively. Prove these two results.
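The following minimal Python check (not a proof, and with arbitrary illustrative values of c, a, b, h, x_0 and a Gaussian kernel) verifies the exact-unbiasedness claims numerically by evaluating the estimators on noiseless responses, i.e., on the conditional means under each model.

```python
import numpy as np

rng = np.random.default_rng(7)

def local_poly(x, X, Y, h, p):
    """Intercept of the kernel-weighted degree-p polynomial fit at x (Gaussian kernel)."""
    w = np.exp(-0.5 * ((X - x) / h) ** 2)
    Z = np.vander(X - x, N=p + 1, increasing=True)
    sw = np.sqrt(w)[:, None]
    beta, *_ = np.linalg.lstsq(Z * sw, Y * sw.ravel(), rcond=None)
    return beta[0]

a, b, c, h, x0 = 1.5, -0.3, 2.0, 0.2, 0.37   # arbitrary illustrative values
X = rng.uniform(0, 1, 200)

# Local constant on a constant regression function, local linear on a linear one:
print(local_poly(x0, X, np.full_like(X, c), h, p=0) - c)        # ~ 0 (rounding error only)
print(local_poly(x0, X, a * X + b, h, p=1) - (a * x0 + b))      # ~ 0 (rounding error only)
```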

References

Fan, J., and I. Gijbels. 1996. Local Polynomial Modelling and Its Applications. Vol. 66. Monographs on Statistics and Applied Probability. London: Chapman & Hall. https://doi.org/10.1201/9780203748725.

Footnotes

  1. We do not address the analysis of the general case in which p can be greater than one. The reader is referred to, for example, Theorem 3.1 in Fan and Gijbels (1996) for the full analysis.

  2. In linear models, homoscedasticity is one of the key assumptions for performing inference (Section B.1.2).

  3. Recall that these are the only assumptions made in the model so far. Compared with the ones linear models or generalized linear models make, they are extremely mild. Recall that Y is not assumed to be continuous.

  4. This assumption requires certain smoothness of the regression function, thus allowing Taylor expansions to be performed. This assumption is important in practice: \hat{m}(\cdot;p,h) is infinitely differentiable if the considered kernels K are so too.

  5. It avoids the situation in which Y is a degenerate random variable.

  6. It avoids the degenerate situation in which m is estimated at regions without observations of the predictors (such as holes in the support of X).

  7. Meaning that there exists a positive lower bound for f.

  8. Mild assumption inherited from the kde.

  9. Key assumption for reducing the bias and variance of \hat{m}(\cdot;p,h) simultaneously.

  10. Recall that this makes perfect sense: low-density regions of X imply less information available about m.

  11. The same happened in the linear model with the error variance \sigma^2.

  12. The variance of an unweighted mean is reduced by a factor n^{-1} when n observations are employed. To compute \hat{m}(x;p,h), n observations are used, but in a weighted fashion that roughly amounts to considering nh unweighted observations.

  13. Since the variance increases as \nu does, not as p does.

  14. m(x)=c for all x\in\mathbb{R} and a given c\in\mathbb{R}.

  15. m(x)=ax+b for all x\in\mathbb{R} and given a,b\in\mathbb{R}.