6.2 Kernel regression estimation
6.2.1 Nadaraya–Watson estimator
Our objective is to estimate the regression function $m$ nonparametrically (recall that we are considering the simplest situation: one continuous predictor, so $p = 1$). Due to its definition, we can rewrite $m$ as
$$m(x) = \mathbb{E}[Y \mid X = x] = \int y f_{Y \mid X = x}(y)\,\mathrm{d}y = \frac{\int y f(x, y)\,\mathrm{d}y}{f_X(x)}.\tag{6.13}$$
This expression shows an interesting point: the regression function $m$ can be computed from the joint density $f$ and the marginal $f_X.$ Therefore, given a sample $(X_1, Y_1), \ldots, (X_n, Y_n),$ a nonparametric estimate of $m$ may follow by replacing the previous densities by their kernel density estimators! From the previous section, we know how to do this using the multivariate and univariate kde's given in (6.4) and (6.9), respectively. For the multivariate kde, we can consider the kde (6.12) based on product kernels for the two-dimensional case and bandwidths $\mathbf{h} = (h_1, h_2)',$ which yields the estimate
$$\hat{f}(x, y; \mathbf{h}) = \frac{1}{n}\sum_{i=1}^n K_{h_1}(x - X_i) K_{h_2}(y - Y_i)\tag{6.14}$$
of the joint pdf of $(X, Y).$ On the other hand, considering the same bandwidth $h_1$ for the kde of $f_X,$ we have
$$\hat{f}_X(x; h_1) = \frac{1}{n}\sum_{i=1}^n K_{h_1}(x - X_i).\tag{6.15}$$
We can therefore define the estimator of $m$ that results from replacing $f$ and $f_X$ in (6.13) by (6.14) and (6.15):
$$\hat{m}(x) := \frac{\int y \hat{f}(x, y; \mathbf{h})\,\mathrm{d}y}{\hat{f}_X(x; h_1)} = \frac{\frac{1}{n}\sum_{i=1}^n K_{h_1}(x - X_i)\int y K_{h_2}(y - Y_i)\,\mathrm{d}y}{\frac{1}{n}\sum_{i=1}^n K_{h_1}(x - X_i)} = \frac{\sum_{i=1}^n K_{h_1}(x - X_i) Y_i}{\sum_{i=1}^n K_{h_1}(x - X_i)},\tag{6.16}$$
where we have used that $\int y K_{h_2}(y - Y_i)\,\mathrm{d}y = Y_i$ for a zero-mean kernel.
The resulting estimator203 is the so-called Nadaraya–Watson204 estimator of the regression function:
$$\hat{m}(x; 0, h) := \sum_{i=1}^n \frac{K_h(x - X_i)}{\sum_{j=1}^n K_h(x - X_j)} Y_i = \sum_{i=1}^n W_i^0(x) Y_i,$$
where
$$W_i^0(x) := \frac{K_h(x - X_i)}{\sum_{j=1}^n K_h(x - X_j)}$$
and $h \equiv h_1.$
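As a quick numerical illustration (the sample and bandwidth below are chosen here just for concreteness), take $n = 3,$ $(X_1, X_2, X_3) = (0, 1, 2),$ $(Y_1, Y_2, Y_3) = (1, 3, 2),$ a normal kernel, and $h = 1.$ Then, at $x = 1,$
$$W_1^0(1) = W_3^0(1) = \frac{\phi(1)}{2\phi(1) + \phi(0)} \approx 0.274, \quad W_2^0(1) = \frac{\phi(0)}{2\phi(1) + \phi(0)} \approx 0.452,$$
so $\hat{m}(1; 0, 1) \approx 0.274 \cdot 1 + 0.452 \cdot 3 + 0.274 \cdot 2 \approx 2.18,$ a weighted mean of the responses that favors the closest observation.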
Let’s implement the Nadaraya–Watson estimator from scratch to get a feeling of how it works in practice.
# A naive implementation of the Nadaraya-Watson estimator
mNW <- function(x, X, Y, h, K = dnorm) {
# Arguments
# x: evaluation points
# X: vector (size n) with the predictors
# Y: vector (size n) with the response variable
# h: bandwidth
# K: kernel
# Matrix of size length(x) x n
Kx <- sapply(X, function(Xi) K((x - Xi) / h) / h)
# Weights
W <- Kx / rowSums(Kx) # Column recycling!
# Means at x ("drop" to drop the matrix attributes)
drop(W %*% Y)
}
# Generate some data to test the implementation
set.seed(12345)
n <- 100
eps <- rnorm(n, sd = 2)
m <- function(x) x^2 * cos(x)
# m <- function(x) x - x^2 # Other possible regression function, works
# equally well
X <- rnorm(n, sd = 2)
Y <- m(X) + eps
xGrid <- seq(-10, 10, l = 500)
# Bandwidth
h <- 0.5
# Plot data
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(xGrid, mNW(x = xGrid, X = X, Y = Y, h = h), col = 2)
legend("top", legend = c("True regression", "Nadaraya-Watson"),
lwd = 2, col = 1:2)

Figure 6.5: The Nadaraya–Watson estimator of an arbitrary regression function $m$.
Similarly to kernel density estimation, in the Nadaraya–Watson estimator the bandwidth $h$ has a prominent effect on the shape of the estimator, whereas the kernel is clearly less important. The code below illustrates the effect of varying $h$ using the manipulate::manipulate function.
# Simple plot of N-W for varying h's
manipulate::manipulate({
# Plot data
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(xGrid, mNW(x = xGrid, X = X, Y = Y, h = h), col = 2)
legend("topright", legend = c("True regression", "Nadaraya-Watson"),
lwd = 2, col = 1:2)
}, h = manipulate::slider(min = 0.01, max = 2, initial = 0.5, step = 0.01))
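If the manipulate package is not available, a static comparison over a handful of bandwidths conveys the same message; the particular bandwidths below are an arbitrary choice made for illustration.
# Static comparison of several bandwidths (non-interactive alternative)
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
hs <- c(0.1, 0.5, 2) # Small, moderate, and large bandwidths
for (i in seq_along(hs)) {
  lines(xGrid, mNW(x = xGrid, X = X, Y = Y, h = hs[i]), col = i + 1)
}
legend("top", legend = c("True regression", paste("h =", hs)),
       lwd = 2, col = 1:4)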
Implement your own version of the Nadaraya–Watson estimator in R and compare it with mNW. Focus only on the normal kernel and reduce the accuracy of the final computation up to 1e-7 to achieve better efficiency. Are you able to improve the speed of mNW? Use the microbenchmark::microbenchmark function to measure the running times for a sample with
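A possible benchmarking scaffold for this exercise is sketched below; mNW2 is a placeholder name for your own implementation (here it is simply a copy of mNW, so the timings should be comparable until you replace it).
# Benchmark scaffold for the exercise; replace mNW2 with your implementation
mNW2 <- mNW # Placeholder -- substitute your own (hopefully faster) version
microbenchmark::microbenchmark(
  mNW(x = xGrid, X = X, Y = Y, h = h),
  mNW2(x = xGrid, X = X, Y = Y, h = h),
  times = 10
)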
6.2.2 Local polynomial regression
The Nadaraya–Watson estimator can be seen as a particular case of a wider class of nonparametric estimators, the so-called local polynomial estimators. Specifically, Nadaraya–Watson corresponds to performing a local constant fit. Let’s see this wider class of nonparametric estimators and their advantages with respect to the Nadaraya–Watson estimator.
The motivation for the local polynomial fit comes from attempting to find an estimator $\hat{m}$ of $m$ that “minimizes”205 the RSS
$$\sum_{i=1}^n (Y_i - \hat{m}(X_i))^2\tag{6.17}$$
without assuming any particular form for the true $m.$ This is not achievable directly, since no knowledge of $m$ is available. Recall that what we did in parametric models was to assume a parametrization for $m.$ For example, in simple linear regression we assumed $m_{\boldsymbol\beta}(x) = \beta_0 + \beta_1 x,$ which allowed us to tackle the minimization of (6.17) by means of solving
$$\hat{\boldsymbol\beta} := \arg\min_{\boldsymbol\beta} \sum_{i=1}^n (Y_i - m_{\boldsymbol\beta}(X_i))^2.$$
The resulting $m_{\hat{\boldsymbol\beta}}$ is precisely the estimator that minimizes the RSS among all the linear estimators, that is, among the class of estimators that we have parametrized.
When $m$ has no available parametrization and can adopt any mathematical form, an alternative approach is required. The first step is to induce a local parametrization for $m.$ By a $p$-th206 order Taylor expansion it is possible to obtain that, for $z$ close to $x,$
$$m(z) \approx \sum_{j=0}^{p} \frac{m^{(j)}(x)}{j!}(z - x)^j.\tag{6.18}$$
Then, replacing (6.18) in the population version of (6.17) that replaces $\hat{m}$ with $m,$ we have that
$$\sum_{i=1}^n \left(Y_i - \sum_{j=0}^{p} \frac{m^{(j)}(x)}{j!}(X_i - x)^j\right)^2.\tag{6.19}$$
Expression (6.19) is still not workable: it depends on $m^{(j)}(x),$ $j = 0, \ldots, p,$ which of course are unknown, as $m$ is unknown. The great idea is to set $\beta_j := \frac{m^{(j)}(x)}{j!}$ and turn (6.19) into a linear regression problem where the unknown parameters are precisely $\boldsymbol\beta = (\beta_0, \beta_1, \ldots, \beta_p)'.$ Simply rewriting (6.19) using this idea gives
$$\min_{\boldsymbol\beta} \sum_{i=1}^n \left(Y_i - \sum_{j=0}^{p} \beta_j (X_i - x)^j\right)^2.\tag{6.20}$$
Now, estimates of $\boldsymbol\beta$ automatically produce estimates for $m^{(j)}(x),$ $j = 0, \ldots, p$! In addition, we know how to obtain an estimate $\hat{\boldsymbol\beta}$ that minimizes (6.20), since this is precisely the least squares problem studied in Section 2.2.3. The final touch is to weight the contributions of each datum $(X_i, Y_i)$ to the estimation of $m(x)$ according to the proximity of $X_i$ to $x.$207 We can achieve this precisely with kernels:
$$\hat{\boldsymbol\beta}_h := \arg\min_{\boldsymbol\beta} \sum_{i=1}^n \left(Y_i - \sum_{j=0}^{p} \beta_j (X_i - x)^j\right)^2 K_h(x - X_i).\tag{6.21}$$
Solving (6.21) is easy once the proper notation is introduced. To that end, denote
$$\mathbf{X} := \begin{pmatrix} 1 & X_1 - x & \cdots & (X_1 - x)^p \\ \vdots & \vdots & \ddots & \vdots \\ 1 & X_n - x & \cdots & (X_n - x)^p \end{pmatrix}_{n \times (p + 1)}, \quad \mathbf{Y} := \begin{pmatrix} Y_1 \\ \vdots \\ Y_n \end{pmatrix}_{n \times 1},$$
and
$$\mathbf{W} := \mathrm{diag}(K_h(x - X_1), \ldots, K_h(x - X_n)).$$
Then we can re-express (6.21) as a weighted least squares problem208 whose exact solution is
$$\hat{\boldsymbol\beta}_h = \arg\min_{\boldsymbol\beta} (\mathbf{Y} - \mathbf{X}\boldsymbol\beta)'\mathbf{W}(\mathbf{Y} - \mathbf{X}\boldsymbol\beta) = (\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\mathbf{X}'\mathbf{W}\mathbf{Y}.\tag{6.22}$$
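For completeness, the closed form in (6.22) follows from the usual normal-equations argument (a sketch, assuming $\mathbf{X}'\mathbf{W}\mathbf{X}$ is invertible):
$$\frac{\partial}{\partial\boldsymbol\beta}\left\{(\mathbf{Y} - \mathbf{X}\boldsymbol\beta)'\mathbf{W}(\mathbf{Y} - \mathbf{X}\boldsymbol\beta)\right\} = -2\mathbf{X}'\mathbf{W}(\mathbf{Y} - \mathbf{X}\boldsymbol\beta) = \mathbf{0} \iff \mathbf{X}'\mathbf{W}\mathbf{X}\boldsymbol\beta = \mathbf{X}'\mathbf{W}\mathbf{Y},$$
which, since $\mathbf{W}$ is positive definite whenever the kernel weights are positive, corresponds to the minimum.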
The estimate209 for $m(x)$ is therefore computed as
$$\hat{m}(x; p, h) := \hat{\beta}_{h, 0} = \mathbf{e}_1'(\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\mathbf{X}'\mathbf{W}\mathbf{Y} = \sum_{i=1}^n W_i^{p}(x) Y_i,\tag{6.23}$$
where
$$W_i^{p}(x) := \mathbf{e}_1'(\mathbf{X}'\mathbf{W}\mathbf{X})^{-1}\mathbf{X}'\mathbf{W}\mathbf{e}_i$$
and $\mathbf{e}_i$ is the $i$-th canonical vector. Just as the Nadaraya–Watson estimator was, the local polynomial estimator is a weighted linear combination of the responses.
Two cases of (6.23) deserve special attention:
- $p = 0$ is the local constant estimator or the Nadaraya–Watson estimator. In this situation, the estimator has explicit weights, as we saw before:
$$W_i^0(x) = \frac{K_h(x - X_i)}{\sum_{j=1}^n K_h(x - X_j)}.$$
- $p = 1$ is the local linear estimator, which has weights equal to
$$W_i^1(x) = \frac{1}{n}\frac{\hat{s}_2(x; h) - \hat{s}_1(x; h)(X_i - x)}{\hat{s}_2(x; h)\hat{s}_0(x; h) - \hat{s}_1(x; h)^2} K_h(x - X_i),$$
where $\hat{s}_r(x; h) := \frac{1}{n}\sum_{i=1}^n (X_i - x)^r K_h(x - X_i).$
Recall that the local polynomial fit is computationally more expensive than the local constant fit: $\hat{m}(x; p, h)$ is obtained as the solution of a weighted linear problem, whereas $\hat{m}(x; 0, h)$ can be directly computed as a weighted mean of the responses.
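To make the weight expressions concrete, the following sketch (the helper name localLinearWeights is mine, not part of any package) computes the local linear weights $W_i^1(x)$ directly from the $\hat{s}_r(x; h)$ moments; the estimate at $x$ is then just the weighted mean of the responses.
# Local linear weights W_i^1(x0) at a single evaluation point x0 (illustration)
localLinearWeights <- function(x0, X, h, K = dnorm) {
  Kx <- K((x0 - X) / h) / h               # K_h(x0 - X_i)
  s0 <- mean(Kx)                          # \hat{s}_0(x0; h)
  s1 <- mean((X - x0) * Kx)               # \hat{s}_1(x0; h)
  s2 <- mean((X - x0)^2 * Kx)             # \hat{s}_2(x0; h)
  (s2 - s1 * (X - x0)) * Kx / (length(X) * (s0 * s2 - s1^2))
}
# The weights add up to one and yield the local linear estimate at x0, e.g.:
# w <- localLinearWeights(x0 = 0, X = X, h = 0.5); sum(w); sum(w * Y)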
Figure 6.6 illustrates the construction of the local polynomial estimator (up to cubic degree) and shows how $\hat{\beta}_{h, 0},$ the intercept of the local fit, estimates $m$ at $x.$
Figure 6.6: Construction of the local polynomial estimator. The animation shows how local polynomial fits in a neighborhood of $x$ are combined to provide an estimate of the regression function, which depends on the polynomial degree, bandwidth, and kernel (gray density at the bottom). The data points are shaded according to their weights for the local fit at $x.$ Application available here.
The local polynomial estimator $\hat{m}(\cdot; p, h)$ of $m$ performs a series of weighted polynomial fits; as many as points $x$ at which $\hat{m}(\cdot; p, h)$ is to be evaluated.
An inefficient implementation of the local polynomial estimator can be done relatively straightforwardly from the previous insight and from expression (6.22); a minimal sketch is given below. In addition, several R packages provide implementations, such as KernSmooth::locpoly and R’s loess210 (although the latter has a different control of the bandwidth plus a set of other modifications). Some examples of their usage follow.
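The sketch below (the helper name locPolyNaive is mine) solves the weighted least squares problem (6.22) at each evaluation point and returns the intercepts; it is not meant to be efficient, just to mirror the definition.
# A naive implementation of the local polynomial estimator of degree p
locPolyNaive <- function(x, X, Y, h, p = 1, K = dnorm) {
  sapply(x, function(x0) {
    w <- K((x0 - X) / h) / h              # Kernel weights K_h(x0 - X_i)
    Z <- outer(X - x0, 0:p, "^")          # Design matrix with columns (X_i - x0)^j
    # Weighted least squares solution (6.22); the intercept estimates m(x0)
    beta <- solve(crossprod(Z, w * Z), crossprod(Z, w * Y))
    beta[1]
  })
}
# Example usage (after generating the data below); evaluation is restricted to
# a range well covered by the data to avoid numerically singular local fits
# lines(xGrid[abs(xGrid) <= 3], locPolyNaive(x = xGrid[abs(xGrid) <= 3], X = X,
#                                            Y = Y, h = 0.25, p = 1), col = 4)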
# Generate some data
set.seed(123456)
n <- 100
eps <- rnorm(n, sd = 2)
m <- function(x) x^3 * sin(x)
X <- rnorm(n, sd = 1.5)
Y <- m(X) + eps
xGrid <- seq(-10, 10, l = 500)
# KernSmooth::locpoly fits
h <- 0.25
lp0 <- KernSmooth::locpoly(x = X, y = Y, bandwidth = h, degree = 0,
range.x = c(-10, 10), gridsize = 500)
lp1 <- KernSmooth::locpoly(x = X, y = Y, bandwidth = h, degree = 1,
range.x = c(-10, 10), gridsize = 500)
# Provide the evaluation points by range.x and gridsize
# loess fits
span <- 0.25 # The default span is 0.75, which works very badly in this scenario
lo0 <- loess(Y ~ X, degree = 0, span = span)
lo1 <- loess(Y ~ X, degree = 1, span = span)
# loess employs a "span" argument that plays the role of a variable bandwidth:
# "span" gives the proportion of points of the sample that are taken into
# account for performing the local fit about x, and then a tricube kernel
# (not a normal kernel) is used for weighting their contributions. Therefore,
# the final estimate differs from the definition of the local polynomial
# estimator, although the principles on which it is based are the same
# Prediction at x = 2
x <- 2
lp1$y[which.min(abs(lp1$x - x))] # Prediction by KernSmooth::locpoly
## [1] 5.445975
predict(lo1, newdata = data.frame(X = x)) # Prediction by loess
## 1
## 5.379652
m(x) # Reality
## [1] 7.274379
# Plot data
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(lp0$x, lp0$y, col = 2)
lines(lp1$x, lp1$y, col = 3)
lines(xGrid, predict(lo0, newdata = data.frame(X = xGrid)), col = 2, lty = 2)
lines(xGrid, predict(lo1, newdata = data.frame(X = xGrid)), col = 3, lty = 2)
legend("bottom", legend = c("True regression", "Local constant (locpoly)",
"Local linear (locpoly)", "Local constant (loess)",
"Local linear (loess)"),
lwd = 2, col = c(1:3, 2:3), lty = c(rep(1, 3), rep(2, 2)))
As with the Nadaraya–Watson estimator, the local polynomial estimator heavily depends on $h.$
# Simple plot of local polynomials for varying h's
manipulate::manipulate({
# Plot data
lpp <- KernSmooth::locpoly(x = X, y = Y, bandwidth = h, degree = p,
range.x = c(-10, 10), gridsize = 500)
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(lpp$x, lpp$y, col = p + 2)
legend("bottom", legend = c("True regression", "Local polynomial fit"),
lwd = 2, col = c(1, p + 2))
}, p = manipulate::slider(min = 0, max = 4, initial = 0, step = 1),
h = manipulate::slider(min = 0.01, max = 2, initial = 0.5, step = 0.01))
A more sophisticated framework for performing nonparametric estimation of the regression function is the np package, which we detail in Section 6.2.4. This package will be the chosen approach for the more challenging situation in which several predictors are present, since the former implementations do not scale well to more than one predictor.
6.2.3 Asymptotic properties
What affects the performance of the local polynomial estimator? Is local linear estimation better than local constant estimation? What is the effect of $h$?
The purpose of this section is to provide some highlights on the questions above by examining the theoretical properties of the local polynomial estimator. This is achieved by examining the asymptotic bias and variance of the local linear and local constant estimators.211 For this goal, we consider the location-scale model for $Y$ and its predictor $X$:
$$Y = m(X) + \sigma(X)\varepsilon,$$
where $\sigma^2(x) := \mathbb{V}\mathrm{ar}[Y \mid X = x]$ is the conditional variance of $Y$ given $X = x$ and $\varepsilon$ is such that $\mathbb{E}[\varepsilon] = 0$ and $\mathbb{V}\mathrm{ar}[\varepsilon] = 1.$ Note that since the conditional variance is not forced to be constant, we are implicitly allowing for heteroscedasticity.
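As a quick illustration of what the location-scale model allows, the snippet below simulates from a heteroscedastic instance of it; the particular choices of the regression function and of $\sigma(\cdot)$ are mine, made only for visualization.
# A heteroscedastic instance of the location-scale model: sigma(x) grows with |x|
set.seed(1234)
mLS <- function(x) x^2 * cos(x)
sigmaLS <- function(x) 0.5 + abs(x) / 2
XLS <- rnorm(200, sd = 1.5)
YLS <- mLS(XLS) + sigmaLS(XLS) * rnorm(200)
plot(XLS, YLS) # The vertical spread of the data increases with |x|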
The following assumptions212 are the only requirements to perform the asymptotic analysis of the estimator:
- A1.213 $m$ is twice continuously differentiable.
- A2.214 $\sigma^2$ is continuous and positive.
- A3.215 $f,$ the marginal pdf of $X,$ is continuously differentiable and bounded away from zero.216
- A4.217 The kernel $K$ is a symmetric and bounded pdf with finite second moment and is square integrable.
- A5.218 $h = h_n$ is a deterministic sequence of bandwidths such that, when $n \to \infty,$ $h \to 0$ and $nh \to \infty.$
The bias and variance are studied in their conditional versions on the predictor’s sample $X_1, \ldots, X_n.$ The reason for analyzing the conditional instead of the unconditional versions is to avoid technical difficulties that integration with respect to the predictor’s density may pose. This is in the spirit of what was done in the parametric inference of Sections 2.4 and 5.3. The main result is the following, which provides useful insights on the effect of $p,$ $m,$ $f$ (standing from now on for the marginal pdf of $X$), $\sigma^2,$ and $h$ on the performance of $\hat{m}(\cdot; p, h).$
Theorem 6.1 Under A1–A5, the conditional bias and variance of the local constant ($p = 0$) and local linear ($p = 1$) estimators are219
$$\mathrm{Bias}[\hat{m}(x; p, h) \mid X_1, \ldots, X_n] = B_p(x) h^2 + o_{\mathbb{P}}(h^2),\tag{6.24}$$
$$\mathbb{V}\mathrm{ar}[\hat{m}(x; p, h) \mid X_1, \ldots, X_n] = \frac{R(K)\sigma^2(x)}{n h f(x)} + o_{\mathbb{P}}((nh)^{-1}),\tag{6.25}$$
where
$$B_p(x) := \begin{cases} \frac{\mu_2(K)}{2}\left\{m''(x) + 2\frac{m'(x) f'(x)}{f(x)}\right\}, & p = 0,\\ \frac{\mu_2(K)}{2} m''(x), & p = 1,\end{cases}$$
$\mu_2(K) := \int z^2 K(z)\,\mathrm{d}z,$ and $R(K) := \int K(z)^2\,\mathrm{d}z.$
The bias and variance expressions (6.24) and (6.25) yield very interesting insights:
Bias.
The bias decreases quadratically with $h$ for both $p = 0$ and $p = 1.$ That means that small bandwidths give estimators with low bias, whereas large bandwidths provide largely biased estimators.
The bias at $x$ is directly proportional to $m''(x)$ if $p = 1$ or affected by $m''(x)$ if $p = 0.$ Therefore:
- The bias is negative in regions where $m$ is concave, i.e., where $m''(x) < 0.$ These regions correspond to peaks and modes of $m$.
- Conversely, the bias is positive in regions where $m$ is convex, i.e., where $m''(x) > 0.$ These regions correspond to valleys of $m$.
- All in all, the “wilder” the curvature $m'',$ the larger the bias and the harder it is to estimate $m$.
The bias for $p = 0$ at $x$ is affected by $m'(x),$ $f'(x),$ and $f(x).$ All of them are quantities that are not present in the bias when $p = 1.$ Precisely, for the local constant estimator, the lower the density $f(x),$ the larger the bias. Also, the faster $m$ and $f$ change at $x$ (the derivatives), the larger the bias. Thus the bias of the local constant estimator is much more sensitive to $m(x)$ and $f(x)$ than that of the local linear estimator (which is only sensitive to $m''(x)$). Particularly, the fact that the bias depends on $f'(x)$ and $f(x)$ is referred to as the design bias, since it depends merely on the predictor’s distribution.
Variance.
- The main term of the variance is the same for $p = 0$ and $p = 1.$ In addition, it depends directly on $\frac{\sigma^2(x)}{f(x)}.$ As a consequence, the lower the density $f(x),$ the more variable $\hat{m}(x; p, h)$ is.220 Also, the larger the conditional variance $\sigma^2(x)$ at $x,$ the more variable $\hat{m}(x; p, h)$ is.221
- The variance decreases at a factor of $(nh)^{-1}.$ This is related to the so-called effective sample size $nh,$ which can be thought of as the amount of data in the neighborhood of $x$ that is employed for performing the regression.222
The main takeaway of the analysis of $p = 0$ vs. $p = 1$ is that $p = 1$ has smaller bias than $p = 0$ (but of the same order) while keeping the same variance as $p = 0.$
An extended version of Theorem 6.1, given in Theorem 3.1 of Fan and Gijbels (1996), shows that this phenomenon extends to higher orders: odd order ($p = 2\nu + 1$) polynomial fits introduce an extra coefficient for the polynomial fit that allows them to reduce the bias, while maintaining the same variance as the preceding even order ($p = 2\nu$). So, for example, local cubic fits are preferred to local quadratic fits. This motivates the claim that local polynomial fitting is an “odd world” (Fan and Gijbels (1996)).
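The sign predictions of Theorem 6.1 can be checked empirically. The small Monte Carlo sketch below (the regression function, evaluation point, and sample sizes are arbitrary choices of mine) takes $m(x) = -x^2$ and $X \sim \mathcal{N}(0, 1)$ and evaluates at $x = 1,$ where $m''(1) = -2,$ $m'(1) = -2,$ and $f'(1)/f(1) = -1$: asymptotically, the local constant bias is positive (the design bias dominates) while the local linear bias is negative.
# Monte Carlo approximation of the biases of the local constant and local
# linear estimators at x0 = 1 for m(x) = -x^2 and X ~ N(0, 1)
set.seed(42)
x0 <- 1; h <- 0.5; M <- 500
mPar <- function(x) -x^2
xSeq <- seq(-3, 3, l = 601) # Grid that contains x0 exactly
est <- replicate(M, {
  X <- rnorm(100); Y <- mPar(X) + rnorm(100, sd = 0.5)
  Kx <- dnorm((x0 - X) / h) # Nadaraya-Watson directly from its definition
  ll <- KernSmooth::locpoly(x = X, y = Y, bandwidth = h, degree = 1,
                            range.x = range(xSeq), gridsize = length(xSeq))
  c(lc = sum(Kx * Y) / sum(Kx), ll = ll$y[which.min(abs(ll$x - x0))])
})
rowMeans(est) - mPar(x0) # Approximate biases: positive (lc), negative (ll)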
6.2.4 Bandwidth selection
Bandwidth selection, as for density estimation, has a crucial practical importance for kernel regression estimation. Several bandwidth selectors have been proposed by following cross-validatory and plug-in ideas similar to the ones seen in Section 6.1.3. For simplicity, we briefly mention223 the DPI analogue for local linear regression for a single continuous predictor and focus mainly on least squares cross-validation, as it is a bandwidth selector that readily generalizes to the more complex settings of Section 6.3.
Following the derivation of the DPI for the kde, the first step is to define a suitable error criterion for the estimator $\hat{m}(\cdot; p, h).$ The conditional (on the sample of the predictor) MISE of $\hat{m}(\cdot; p, h)$ is often considered:
$$\mathrm{MISE}[\hat{m}(\cdot; p, h) \mid X_1, \ldots, X_n] := \mathbb{E}\left[\int (\hat{m}(x; p, h) - m(x))^2 f(x)\,\mathrm{d}x \,\Big|\, X_1, \ldots, X_n\right].$$
Observe that this definition is very similar to the kde’s MISE, except for the fact that $f$ appears weighting the quadratic difference: what matters is to minimize the estimation error of $m$ in the regions where the density of $X$ is higher. Recall also that the MISE follows by integrating the conditional MSE, which amounts to the squared bias (6.24) plus the variance (6.25) given in Theorem 6.1. These operations produce the conditional AMISE, which for the local linear estimator ($p = 1$) is
$$\mathrm{AMISE}[\hat{m}(\cdot; 1, h) \mid X_1, \ldots, X_n] = \frac{\mu_2^2(K)}{4}\theta_{22}\, h^4 + \frac{R(K)\int \sigma^2(x)\,\mathrm{d}x}{n h}$$
and, if $\theta_{22} > 0,$ the resulting optimal AMISE bandwidth is
$$h_{\mathrm{AMISE}} = \left[\frac{R(K)\int \sigma^2(x)\,\mathrm{d}x}{\mu_2^2(K)\,\theta_{22}\, n}\right]^{1/5},$$
where $\theta_{22} := \int (m''(x))^2 f(x)\,\mathrm{d}x.$
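A quick check of where the $n^{-1/5}$ rate comes from: whatever the constants, an AMISE of the form $C_1 h^4 + C_2/(nh)$ with $C_1, C_2 > 0$ is minimized by equating its derivative to zero,
$$\frac{\mathrm{d}}{\mathrm{d}h}\left\{C_1 h^4 + \frac{C_2}{n h}\right\} = 4 C_1 h^3 - \frac{C_2}{n h^2} = 0 \iff h = \left(\frac{C_2}{4 C_1 n}\right)^{1/5} = O(n^{-1/5}),$$
which gives the expression above for $C_1 = \frac{\mu_2^2(K)}{4}\theta_{22}$ and $C_2 = R(K)\int\sigma^2(x)\,\mathrm{d}x.$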
As happened in the density setting, the AMISE-optimal bandwidth cannot be readily employed, as knowledge about the “curvature” of $m,$ $\theta_{22},$ and about $\int\sigma^2(x)\,\mathrm{d}x$ is required. As with the DPI selector, a series of nonparametric estimations of $\theta_{22}$ and higher-order curvature terms follows, concluding with a necessary estimation of a higher-order curvature based on a “block polynomial fit”.224 The estimation of $\int\sigma^2(x)\,\mathrm{d}x$ is carried out by assuming homoscedasticity and a compactly supported density $f.$ The resulting bandwidth selector, $\hat{h}_{\mathrm{DPI}},$ has a much faster convergence rate to $h_{\mathrm{AMISE}}$ than cross-validatory selectors. However, it is notably more convoluted and, as a consequence, less straightforward to extend to more complex settings.
The DPI selector for the local linear estimator is implemented in KernSmooth::dpill
.
# Generate some data
set.seed(123456)
n <- 100
eps <- rnorm(n, sd = 2)
m <- function(x) x^3 * sin(x)
X <- rnorm(n, sd = 1.5)
Y <- m(X) + eps
xGrid <- seq(-10, 10, l = 500)
# DPI selector
hDPI <- KernSmooth::dpill(x = X, y = Y)
# Fits
lp1 <- KernSmooth::locpoly(x = X, y = Y, bandwidth = 0.25, degree = 1,
range.x = c(-10, 10), gridsize = 500)
lp1DPI <- KernSmooth::locpoly(x = X, y = Y, bandwidth = hDPI, degree = 1,
range.x = c(-10, 10), gridsize = 500)
# Compare fits
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(lp1$x, lp1$y, col = 2)
lines(lp1DPI$x, lp1DPI$y, col = 3)
legend("bottom", legend = c("True regression", "Local linear",
"Local linear (DPI)"),
lwd = 2, col = 1:3)
We now turn our attention to cross-validation. Following an analogy with the fit of the linear model, we could look for the bandwidth $h$ that minimizes an RSS of the form
$$\frac{1}{n}\sum_{i=1}^n (Y_i - \hat{m}(X_i; p, h))^2.\tag{6.26}$$
However appealing this looks, it is a bad idea. Attempting to minimize (6.26) always leads to $h \approx 0,$ which results in a useless interpolation of the data, as illustrated below.
# Grid for representing (6.26)
hGrid <- seq(0.1, 1, l = 200)^2
error <- sapply(hGrid, function(h) {
mean((Y - mNW(x = X, X = X, Y = Y, h = h))^2)
})
# Error curve
plot(hGrid, error, type = "l")
rug(hGrid)
abline(v = hGrid[which.min(error)], col = 2)
As we know, the root of the problem is the comparison of $Y_i$ with $\hat{m}(X_i; p, h),$ since there is nothing forbidding $h \to 0$ and, as a consequence, $\hat{m}(X_i; p, h) \to Y_i.$ As discussed in (3.17),225 a solution is to compare $Y_i$ with $\hat{m}_{-i}(X_i; p, h),$ the leave-one-out estimate of $m$ computed without the $i$-th datum $(X_i, Y_i),$ yielding the least squares cross-validation error
$$\mathrm{CV}(h) := \frac{1}{n}\sum_{i=1}^n (Y_i - \hat{m}_{-i}(X_i; p, h))^2\tag{6.27}$$
and then choose
$$\hat{h}_{\mathrm{CV}} := \arg\min_{h > 0} \mathrm{CV}(h).$$
The optimization of (6.27) might seem very computationally demanding, since it requires computing $n$ regressions for just a single evaluation of the cross-validation function. There is, however, a simple and neat theoretical result that vastly reduces the computational complexity, at the price of increasing the memory demand. This trick allows the cross-validation function to be computed with a single fit.
Proposition 6.1 For any $p \geq 0,$ the weights of the leave-one-out estimator $\hat{m}_{-i}(x; p, h) = \sum_{\substack{j = 1 \\ j \neq i}}^n W_{-i, j}^p(x) Y_j$ can be obtained from the weights of $\hat{m}(x; p, h) = \sum_{i = 1}^n W_i^p(x) Y_i$:
$$W_{-i, j}^p(x) = \frac{W_j^p(x)}{\sum_{\substack{k = 1 \\ k \neq i}}^n W_k^p(x)} = \frac{W_j^p(x)}{1 - W_i^p(x)}.$$
This implies that
$$\mathrm{CV}(h) = \frac{1}{n}\sum_{i=1}^n \left(\frac{Y_i - \hat{m}(X_i; p, h)}{1 - W_i^p(X_i)}\right)^2.$$
The result can be proved using that the weights $\{W_i^p(x)\}_{i=1}^n$ add to one, for any $x,$ and that $\hat{m}(x; p, h)$ is a linear combination226 of the responses.
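A sketch of how the second claim follows from the first (using the stated facts about the weights):
$$Y_i - \hat{m}_{-i}(X_i; p, h) = Y_i - \sum_{j \neq i} \frac{W_j^p(X_i)}{1 - W_i^p(X_i)} Y_j = \frac{Y_i(1 - W_i^p(X_i)) - \sum_{j \neq i} W_j^p(X_i) Y_j}{1 - W_i^p(X_i)} = \frac{Y_i - \hat{m}(X_i; p, h)}{1 - W_i^p(X_i)},$$
and squaring and averaging over $i = 1, \ldots, n$ gives the expression for $\mathrm{CV}(h).$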
Let’s implement $\hat{h}_{\mathrm{CV}}$ for the Nadaraya–Watson estimator.
# Generate some data to test the implementation
set.seed(12345)
n <- 100
eps <- rnorm(n, sd = 2)
m <- function(x) x^2 + sin(x)
X <- rnorm(n, sd = 1.5)
Y <- m(X) + eps
xGrid <- seq(-10, 10, l = 500)
# Objective function
cvNW <- function(X, Y, h, K = dnorm) {
sum(((Y - mNW(x = X, X = X, Y = Y, h = h, K = K)) /
(1 - K(0) / colSums(K(outer(X, X, "-") / h))))^2)
# Beware: outer() is not very memory-friendly!
}
# Find optimum CV bandwidth, with sensible grid
bw.cv.grid <- function(X, Y,
h.grid = diff(range(X)) * (seq(0.1, 1, l = 200))^2,
K = dnorm, plot.cv = FALSE) {
obj <- sapply(h.grid, function(h) cvNW(X = X, Y = Y, h = h, K = K))
h <- h.grid[which.min(obj)]
if (plot.cv) {
plot(h.grid, obj, type = "o")
rug(h.grid)
abline(v = h, col = 2, lwd = 2)
}
h
}
# Bandwidth
hCV <- bw.cv.grid(X = X, Y = Y, plot.cv = TRUE)
hCV
## [1] 0.3117806
# Plot result
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(xGrid, mNW(x = xGrid, X = X, Y = Y, h = hCV), col = 2)
legend("top", legend = c("True regression", "Nadaraya-Watson"),
lwd = 2, col = 1:2)
A more sophisticated cross-validation bandwidth selection can be achieved by np::npregbw and np::npreg, as shown in the code below.
# Turn off the "multistart" messages in the np package
options(np.messages = FALSE)
# np::npregbw computes by default the least squares CV bandwidth associated to
# a local constant fit
bw0 <- np::npregbw(formula = Y ~ X)
# Multiple initial points can be employed for minimizing the CV function (for
# one predictor, defaults to 1)
bw0 <- np::npregbw(formula = Y ~ X, nmulti = 2)
# The "rbandwidth" object contains many useful information, see ?np::npregbw for
# all the returned objects
bw0
##
## Regression Data (100 observations, 1 variable(s)):
##
## X
## Bandwidth(s): 0.3112962
##
## Regression Type: Local-Constant
## Bandwidth Selection Method: Least Squares Cross-Validation
## Formula: Y ~ X
## Bandwidth Type: Fixed
## Objective Function Value: 5.368999 (achieved on multistart 1)
##
## Continuous Kernel Type: Second-Order Gaussian
## No. Continuous Explanatory Vars.: 1
# Recall that the fit is very similar to hCV
# Once the bandwidth is estimated, np::npreg can be directly called with the
# "rbandwidth" object (it encodes the regression to be made, the data, the kind
# of estimator considered, etc). The hard work goes on np::npregbw, not on
# np::npreg
kre0 <- np::npreg(bw0)
kre0
##
## Regression Data: 100 training points, in 1 variable(s)
## X
## Bandwidth(s): 0.3112962
##
## Kernel Regression Estimator: Local-Constant
## Bandwidth Type: Fixed
##
## Continuous Kernel Type: Second-Order Gaussian
## No. Continuous Explanatory Vars.: 1
# The evaluation points of the estimator are by default the predictor's sample
# (which is not sorted!)
# The evaluation of the estimator is given in "mean"
plot(kre0$eval$X, kre0$mean)
# The evaluation points can be changed using "exdat"
kre0 <- np::npreg(bw0, exdat = xGrid)
# Plot directly the fit via plot() -- it employs different evaluation points
# than exdat
plot(kre0, col = 2, type = "o")
points(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(kre0$eval$xGrid, kre0$mean, col = 3, type = "o", pch = 16, cex = 0.5)
# Using the evaluation points
# Local linear fit -- find first the CV bandwidth
bw1 <- np::npregbw(formula = Y ~ X, regtype = "ll")
# regtype = "ll" stands for "local linear", "lc" for "local constant"
# Local linear fit
kre1 <- np::npreg(bw1, exdat = xGrid)
# Comparison
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(kre0$eval$xGrid, kre0$mean, col = 2)
lines(kre1$eval$xGrid, kre1$mean, col = 3)
legend("top", legend = c("True regression", "Nadaraya-Watson", "Local linear"),
lwd = 2, col = 1:3)
There are more sophisticated options for bandwidth selection in np::npregbw. For example, the argument bwtype allows estimating data-driven variable bandwidths $\hat{h}(x)$ that depend on the evaluation point $x,$ rather than the fixed bandwidths we have considered so far. Roughly speaking, these variable bandwidths are related to the variable bandwidth $\hat{h}_k(x)$ that is necessary to contain the $k$ nearest neighbors of $x$ in the neighborhood $(x - \hat{h}_k(x), x + \hat{h}_k(x)).$ There is a potential gain in employing variable bandwidths, as the estimator can adapt the amount of smoothing according to the density of the predictor. We do not investigate this approach in detail, but just point to its implementation.
# Generate some data with bimodal density
set.seed(12345)
n <- 100
eps <- rnorm(2 * n, sd = 2)
m <- function(x) x^2 * sin(x)
X <- c(rnorm(n, mean = -2, sd = 0.5), rnorm(n, mean = 2, sd = 0.5))
Y <- m(X) + eps
xGrid <- seq(-10, 10, l = 500)
# Constant bandwidth
bwc <- np::npregbw(formula = Y ~ X, bwtype = "fixed", regtype = "ll")
krec <- np::npreg(bwc, exdat = xGrid)
# Variable bandwidths
bwg <- np::npregbw(formula = Y ~ X, bwtype = "generalized_nn", regtype = "ll")
kreg <- np::npreg(bwg, exdat = xGrid)
bwa <- np::npregbw(formula = Y ~ X, bwtype = "adaptive_nn", regtype = "ll")
krea <- np::npreg(bwa, exdat = xGrid)
# Comparison
plot(X, Y)
rug(X, side = 1); rug(Y, side = 2)
lines(xGrid, m(xGrid), col = 1)
lines(krec$eval$xGrid, krec$mean, col = 2)
lines(kreg$eval$xGrid, kreg$mean, col = 3)
lines(krea$eval$xGrid, krea$mean, col = 4)
legend("top", legend = c("True regression", "Fixed", "Generalized NN",
"Adaptive NN"),
lwd = 2, col = 1:4)
# Observe how the fixed bandwidth may yield a fit that produces serious
# artifacts in the low-density region. In that region, the NN-based bandwidths
# enlarge to borrow strength from the points in the high-density regions,
# whereas in the high-density regions they shrink to adapt faster to the
# changes of the regression function
Notice that the estimator does not depend on $h_2,$ only on the bandwidth $h_1$ employed for smoothing $X.$↩︎
Termed due to the coetaneous proposals by Nadaraya (1964) and Watson (1964).↩︎
Obviously, avoiding the spurious perfect fit attained with $\hat{m}(X_i) := Y_i,$ $i = 1, \ldots, n.$↩︎
Here we employ $p$ for denoting the order of the Taylor expansion and, correspondingly, the order of the associated polynomial fit. Do not confuse it with the number of original predictors for explaining $Y$ – there is only one predictor in this section, $X.$ However, with a local polynomial fit we expand this single predictor to $p$ predictors based on $(X - x)^j,$ $j = 1, \ldots, p.$↩︎
The rationale is simple: $(X_i, Y_i)$ should be more informative about $m(x)$ than $(X_j, Y_j)$ if $X_i$ and $x$ are closer than $X_j$ and $x.$ Observe that $Y_i$ and $Y_j$ are ignored in measuring this proximity.↩︎
Recall that weighted least squares already appeared in the IRLS of Section 5.2.2.↩︎
Recall that the entries of $\hat{\boldsymbol\beta}_h$ are estimating $\beta_j = \frac{m^{(j)}(x)}{j!},$ $j = 0, \ldots, p,$ so we are indeed estimating $m(x)$ (first entry) and, in addition, its derivatives up to order $p$!↩︎
The lowess estimator, related with loess, is the one employed in R’s panel.smooth, which is the function in charge of displaying the smooth fits in lm and glm regression diagnostics. For those diagnostics, it employs a prefixed, and not data-driven, smoothing span of 2/3 – which makes it inevitably a bad choice for certain data patterns. An example of a data pattern for which the span is not appropriate is the one in the upper right panel of Figure 5.15.↩︎
We do not address the analysis of the general case in which $p > 1.$ The reader is referred to, e.g., Theorem 3.1 of Fan and Gijbels (1996) for the full analysis.↩︎
Recall that these are the only assumptions made so far in the model! Compared with the ones made for linear models or generalized linear models, they are extremely mild.↩︎
This assumption requires certain smoothness of the regression function, thus allowing for Taylor expansions to be performed. This assumption is important in practice: $\hat{m}(\cdot; p, h)$ is infinitely differentiable if the considered kernels $K$ are.↩︎
Avoids the situation in which $Y$ is a degenerate random variable.↩︎
Avoids the degenerate situation in which $m$ is estimated at regions without observations of the predictors (such as holes in the support of $X$).↩︎
Meaning that there exists a positive lower bound for $f.$↩︎
Mild assumption inherited from the kde.↩︎
Key assumption for reducing the bias and variance of $\hat{m}(\cdot; p, h)$ simultaneously.↩︎
The notation $o_{\mathbb{P}}(a_n)$ stands for a random variable that converges in probability to zero at a rate faster than $a_n.$ It is mostly employed for denoting negligible terms in asymptotic expansions, like the ones in (6.24)–(6.25).↩︎
Recall that this makes perfect sense: low-density regions of $X$ imply that less information about $m$ is available.↩︎
The same happened in the linear model with the error variance $\sigma^2.$↩︎
The variance of an unweighted mean is reduced by a factor $n^{-1}$ when $n$ observations are employed. For computing $\hat{m}(x; p, h),$ $n$ observations are used, but in a weighted fashion that roughly amounts to considering $nh$ unweighted observations.↩︎
Further details are available in Section 5.8 of Wand and Jones (1995) and references therein.↩︎
A fit based on ordinary polynomial fits, but done in different blocks of the data.↩︎
Recall that $h$ is a tuning parameter!↩︎
Indeed, for any other linear smoother of the response, the result also holds.↩︎