6.1 Nonparametric density estimation

In order to introduce a nonparametric estimator for the regression function \(m\), we first need to introduce a nonparametric estimator for the density of the predictor \(X\). The goal is to estimate \(f\), the density of \(X\), from a sample \(X_1,\ldots,X_n\) without assuming any specific form for \(f\), that is, without assuming, e.g., that the data are normally distributed.

6.1.1 Histogram and moving histogram

The simplest method to estimate a density \(f\) from an iid sample \(X_1,\ldots,X_n\) is the histogram. From an analytical point of view, the idea is to aggregate the data in intervals of the form \([x_0, x_0+h)\) and then use their relative frequency to approximate the density at \(x\in[x_0,x_0+h)\), \(f(x)\), by an estimate of

\[
f(x_0) = F'(x_0) = \lim_{h\to 0^+}\frac{F(x_0+h) - F(x_0)}{h} = \lim_{h\to 0^+}\frac{\mathbb{P}[x_0 < X < x_0+h]}{h}.
\]

More precisely, given an origin \(t_0\) and a bandwidth \(h>0\), the histogram builds a piecewise constant function in the intervals \(\{B_k := [t_k, t_{k+1}) : t_k = t_0 + hk,\, k\in\mathbb{Z}\}\) by counting the number of sample points inside each of them. These constant-length intervals are also called bins. The fact that they have constant length \(h\) is important, since it allows standardizing by \(h\) in order to have relative frequencies per length188 in the bins. The histogram at a point \(x\) is defined as

\[
\hat f_H(x; t_0, h) := \frac{1}{nh}\sum_{i=1}^n 1_{\{X_i \in B_k : x \in B_k\}}. \tag{6.1}
\]

Equivalently, if we denote the number of points in \(B_k\) by \(v_k\), then the histogram is \(\hat f_H(x;t_0,h) = \frac{v_k}{nh}\) if \(x\in B_k\), \(k\in\mathbb{Z}\).
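As a quick check of (6.1), the next chunk computes \(\hat f_H(\cdot;t_0,h)\) directly from its definition and compares it with the standardized relative frequencies returned by hist(); the sample, origin, and bandwidth are arbitrary illustrative choices.

# Manual histogram estimator following (6.1); sample, origin, and bandwidth
# are illustrative choices
set.seed(42)
samp <- rnorm(n = 200)
h <- 0.5
t0 <- floor(min(samp)) # Origin chosen so that the bins cover the sample
f_hist <- function(x, data, t0, h) {
  k <- floor((x - t0) / h) # Index of the bin containing each x
  sapply(k, function(kk)
    sum(data >= t0 + kk * h & data < t0 + (kk + 1) * h)) / (length(data) * h)
}

# hist() with the same origin and bandwidth; right = FALSE matches the
# [t_k, t_{k+1}) convention used above. histo$density contains the
# standardized relative frequencies v_k / (n * h)
breaks <- seq(t0, ceiling(max(samp)) + h, by = h)
histo <- hist(samp, breaks = breaks, right = FALSE, plot = FALSE)
max(abs(histo$density - f_hist(histo$mids, data = samp, t0 = t0, h = h)))
# Should be ~0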

The computation of histograms is straightforward in R. As an example, we consider the faithful dataset, which contains the duration of the eruption and the waiting time between eruptions for the Old Faithful geyser in Yellowstone National Park (USA).

# Duration of eruption
faithE <- faithful$eruptions

# Default histogram: automatically chooses bins and uses absolute frequencies
histo <- hist(faithE)


# Bins and bin counts
histo$breaks # Bk's
## [1] 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 5.5
histo$counts # vk's
## [1] 55 37  5  9 34 75 54  3

# With relative frequencies
hist(faithE, probability = TRUE)


# Choosing the breaks
t0 <- min(faithE)
h <- 0.25
Bk <- seq(t0, max(faithE), by = h)
hist(faithE, probability = TRUE, breaks = Bk)
rug(faithE) # The sample

Recall that the shape of the histogram depends on:

  • \(t_0\), since the bin boundaries are located at \(t_0 + kh\), \(k\in\mathbb{Z}\);
  • \(h\), which controls the bin size and hence the effective number of bins used to aggregate the sample.

We focus first on exploring the dependence on \(t_0\) with the next example, since it motivates the next density estimator.

# Uniform sample
set.seed(1234567)
u <- runif(n = 100)

# t0 = 0, h = 0.2
Bk1 <- seq(0, 1, by = 0.2)

# t0 = -0.1, h = 0.2
Bk2 <- seq(-0.1, 1.1, by = 0.2)

# Comparison
par(mfrow = 1:2)
hist(u, probability = TRUE, breaks = Bk1, ylim = c(0, 1.5),
     main = "t0 = 0, h = 0.2")
rug(u)
abline(h = 1, col = 2)
hist(u, probability = TRUE, breaks = Bk2, ylim = c(0, 1.5),
     main = "t0 = -0.1, h = 0.2")
rug(u)
abline(h = 1, col = 2)

Figure 6.1: The dependence of the histogram on the origin \(t_0\).

Clearly, this dependence is undesirable, as it can change notably the estimate of \(f\) obtained from the same data. An alternative that avoids the dependence on \(t_0\) is the moving histogram, or naive density estimator. The idea is to aggregate the sample \(X_1,\ldots,X_n\) in intervals of the form \((x-h, x+h)\) and then use its relative frequency in \((x-h, x+h)\) to approximate the density at \(x\), which can be written as

\[
f(x) = F'(x) = \lim_{h\to 0^+}\frac{F(x+h) - F(x-h)}{2h} = \lim_{h\to 0^+}\frac{\mathbb{P}[x-h < X < x+h]}{2h}.
\]

Note the differences with the histogram: the intervals now depend on the evaluation point \(x\) and are centered about it. This allows estimating \(f(x)\) directly (without the proxy \(f(x_0)\)) by an estimate of the symmetric derivative.

Given a bandwidth \(h>0\), the naive density estimator builds a piecewise constant function by considering the relative frequency of \(X_1,\ldots,X_n\) inside \((x-h, x+h)\)189:

\[
\hat f_N(x;h) := \frac{1}{2nh}\sum_{i=1}^n 1_{\{x-h < X_i < x+h\}}. \tag{6.2}
\]
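Before analyzing (6.2), it may help to see it in action. The following minimal sketch implements \(\hat f_N(\cdot;h)\) directly from its definition, with an illustrative sample and bandwidth, and overlays the true density.

# Naive density estimator following (6.2); sample and bandwidth are
# illustrative choices
set.seed(42)
samp <- rnorm(n = 200)
h <- 0.5
f_naive <- function(x, data, h) {
  sapply(x, function(xx) mean(xx - h < data & data < xx + h)) / (2 * h)
}

# Compare with the true density
x_grid <- seq(-4, 4, length.out = 200)
plot(x_grid, f_naive(x_grid, data = samp, h = h), type = "l",
     xlab = "x", ylab = "Density")
curve(dnorm(x), col = 2, add = TRUE)
rug(samp)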

The analysis of \(\hat f_N(x;h)\) as a random variable follows from realizing that

\[
\sum_{i=1}^n 1_{\{x-h < X_i < x+h\}} \sim \mathrm{B}(n, p_{x,h}),
\]

where

\[
p_{x,h} := \mathbb{P}[x-h < X < x+h] = F(x+h) - F(x-h).
\]

Therefore, employing the expectation and variance of a binomial190, it follows that

\[
\mathbb{E}[\hat f_N(x;h)] = \frac{F(x+h) - F(x-h)}{2h}, \qquad
\mathrm{Var}[\hat f_N(x;h)] = \frac{F(x+h) - F(x-h)}{4nh^2} - \frac{(F(x+h) - F(x-h))^2}{4nh^2}.
\]

These two results provide very interesting insights into the effect of \(h\) on the moving histogram:

  1. If \(h\to 0\), then \(\mathbb{E}[\hat f_N(x;h)]\to f(x)\) and (6.2) is an asymptotically unbiased estimator of \(f(x)\). However, if \(h\to 0\), the variance explodes: \(\mathrm{Var}[\hat f_N(x;h)] \approx \frac{f(x)}{2nh} - \frac{f(x)^2}{n} \to \infty\).
  2. If \(h\to\infty\), then both \(\mathbb{E}[\hat f_N(x;h)]\to 0\) and \(\mathrm{Var}[\hat f_N(x;h)]\to 0\). Therefore, the variance shrinks to zero but the bias grows.
  3. If \(nh\to\infty\)191, then the variance shrinks to zero. If, in addition, \(h\to 0\), the bias also shrinks to zero. So both the bias and the variance are reduced if \(n\to\infty\), \(h\to 0\), and \(nh\to\infty\), simultaneously.
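These exact expressions can be checked by simulation, as shown in the sketch below for the standard normal density and illustrative choices of \(x\), \(h\), \(n\), and the number of Monte Carlo replicates.

# Monte Carlo check of the exact expectation and variance of the naive
# estimator at x = 0 when f = dnorm (n, h, and M are illustrative)
set.seed(42)
n <- 100
h <- 0.3
x <- 0
M <- 5000
f_N <- replicate(M, {
  samp <- rnorm(n)
  sum(x - h < samp & samp < x + h) / (2 * n * h)
})

# Theoretical values from the binomial argument
p <- pnorm(x + h) - pnorm(x - h)
c(mc = mean(f_N), exact = p / (2 * h))
c(mc = var(f_N), exact = p / (4 * n * h^2) - p^2 / (4 * n * h^2))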

The animation in Figure 6.2 illustrates the previous points and gives insight into how the performance of (6.2) varies with \(h\).

Figure 6.2: Bias and variance for the moving histogram. The animation shows how for small bandwidths the bias of \(\hat f_N(x;h)\) in estimating \(f(x)\) is small, but the variance is high, and how for large bandwidths the bias is large and the variance is small. The variance is represented by the asymptotic 95% confidence intervals for \(\hat f_N(x;h)\). Recall how the variance of \(\hat f_N(x;h)\) is (almost) proportional to \(f(x)\). Application also available here.

The estimator (6.2) poses an interesting question:

Why give the same weight to all \(X_1,\ldots,X_n\) in \((x-h, x+h)\) when estimating \(f(x)\)?

We are estimating \(f(x) = F'(x)\) by estimating \(\frac{F(x+h)-F(x-h)}{2h}\) through the relative frequency of \(X_1,\ldots,X_n\) in the interval \((x-h, x+h)\). Should the data points closer to \(x\) not be more important than the ones further away? The answer to this question shows that (6.2) is indeed a particular case of a wider class of density estimators.

6.1.2 Kernel density estimation

The moving histogram (6.2) can be equivalently written as

\[
\hat f_N(x;h) = \frac{1}{nh}\sum_{i=1}^n \frac{1}{2} 1_{\left\{-1 < \frac{x - X_i}{h} < 1\right\}} = \frac{1}{nh}\sum_{i=1}^n K\left(\frac{x - X_i}{h}\right), \tag{6.3}
\]

with \(K(z) = \frac{1}{2}1_{\{-1 < z < 1\}}\). Interestingly, \(K\) is the uniform density in \((-1, 1)\). This means that, when approximating

\[
\mathbb{P}[x-h < X < x+h] = \mathbb{P}\left[-1 < \frac{x - X}{h} < 1\right]
\]

by (6.3), we give equal weight to all the points \(X_1,\ldots,X_n\). The generalization of (6.3) is now obvious: replace \(K\) with an arbitrary density. Then \(K\) is known as a kernel: a density with certain regularity that is (typically) symmetric and unimodal at \(0\). This generalization provides the definition of the kernel density estimator192 (kde):

\[
\hat f(x;h) := \frac{1}{nh}\sum_{i=1}^n K\left(\frac{x - X_i}{h}\right). \tag{6.4}
\]

A common notation is \(K_h(z) := \frac{1}{h}K\left(\frac{z}{h}\right)\), so the kde can be compactly written as \(\hat f(x;h) = \frac{1}{n}\sum_{i=1}^n K_h(x - X_i)\).

It is useful to consider (6.4) with the normal kernel. In that case, \(K_h(x - X_i) = \phi(x; X_i, h^2) = \phi(x - X_i; 0, h^2)\) (denoted simply as \(\phi(x - X_i; h^2)\)), i.e., the kernel is the density of a \(\mathcal{N}(X_i, h^2)\). Thus the bandwidth \(h\) can be thought of as the standard deviation of a normal density with mean \(X_i\), and the kde (6.4) as a data-driven mixture of those densities. Figure 6.3 illustrates the construction of the kde, and the bandwidth and kernel effects.

Figure 6.3: Construction of the kernel density estimator. The animation shows how the bandwidth and kernel affect the density estimate, and how the kernels are rescaled densities with modes at the data points. Application available here.
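Following the mixture interpretation above, the kde with a normal kernel can be coded in a couple of lines and checked against density(); the sample and bandwidth below are illustrative.

# Kde with normal kernel computed as a mixture of N(X_i, h^2) densities
# (the sample and bandwidth are illustrative)
set.seed(42)
samp <- rnorm(n = 100)
h <- 0.3
kde_gauss <- function(x, data, h) {
  sapply(x, function(xx) mean(dnorm(xx, mean = data, sd = h)))
}

# Agreement with density() on its own grid, up to the small binning
# approximation that density() employs internally
kde <- density(x = samp, bw = h)
max(abs(kde$y - kde_gauss(kde$x, data = samp, h = h)))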

Several types of kernels are possible. The most popular is the normal kernel \(K(z) = \phi(z)\), although the Epanechnikov kernel, \(K(z) = \frac{3}{4}(1 - z^2)1_{\{|z| < 1\}}\), is the most efficient193. The rectangular kernel \(K(z) = \frac{1}{2}1_{\{|z| < 1\}}\) yields the moving histogram as a particular case. The kde inherits the smoothness properties of the kernel. This means, e.g., that (6.4) with a normal kernel is infinitely differentiable; with an Epanechnikov kernel, (6.4) is not differentiable; and with a rectangular kernel, it is not even continuous. However, if a certain smoothness is guaranteed (continuity at least), the choice of the kernel has little importance in practice (at least compared with the choice of the bandwidth \(h\)).

The computation of the kde in R is done through the density function. The function automatically chooses the bandwidth h using a data-driven criterion194.

# Sample 100 points from a N(0, 1)
set.seed(1234567)
samp <- rnorm(n = 100, mean = 0, sd = 1)

# Quickly compute a kernel density estimator and plot the density object
# Automatically chooses bandwidth and uses normal kernel
plot(density(x = samp))

# Select a particular bandwidth (0.5) and kernel (Epanechnikov)
lines(density(x = samp, bw = 0.5, kernel = "epanechnikov"), col = 2)

# density() automatically chooses the interval for plotting the kernel density
# estimator (observe that the black curve ranges roughly between -3 and 3)

# This can be tuned using "from" and "to"
plot(density(x = samp, from = -4, to = 4), xlim = c(-5, 5))


# The density object is a list
kde <- density(x = samp, from = -5, to = 5, n = 1024)
str(kde)
## List of 7
##  $ x        : num [1:1024] -5 -4.99 -4.98 -4.97 -4.96 ...
##  $ y        : num [1:1024] 4.91e-17 2.97e-17 1.86e-17 2.53e-17 3.43e-17 ...
##  $ bw       : num 0.315
##  $ n        : int 100
##  $ call     : language density.default(x = samp, n = 1024, from = -5, to = 5)
##  $ data.name: chr "samp"
##  $ has.na   : logi FALSE
##  - attr(*, "class")= chr "density"
# Note that the evaluation grid "x" is not directly controlled, only through
# "from", "to", and "n" (better use powers of 2). This is because, internally,
# density() employs an efficient Fast Fourier Transform on grids of size 2^m

# Plotting the returned values of the kde
plot(kde$x, kde$y, type = "l")
curve(dnorm(x), col = 2, add = TRUE) # True density
rug(samp)

Load the dataset faithful. Then:

  1. Estimate and plot the density of faithful$eruptions.
  2. Create a new plot and superimpose different density estimations with bandwidths equal to 0.1, 0.5, and 1.
  3. Get the density estimate at exactly the point x=3.1 using h=0.15 and the Epanechnikov kernel.

6.1.3 Bandwidth selection

The kde critically depends on the employed bandwidth; hence objective and automatic bandwidth selectors that attempt to minimize the estimation error of the target density f are required to properly apply a kde in practice.

A global, rather than local, error criterion for the kde is the Integrated Squared Error (ISE):

\[
\mathrm{ISE}[\hat f(\cdot;h)] := \int \left(\hat f(x;h) - f(x)\right)^2\,\mathrm{d}x.
\]

The ISE is a random quantity, since it depends directly on the sample \(X_1,\ldots,X_n\). As a consequence, looking for an optimal-ISE bandwidth is a hard task, since the optimality depends on the sample itself (and not only on \(f\) and \(n\)). To avoid this problem, it is usual to consider the Mean Integrated Squared Error (MISE):

\[
\mathrm{MISE}[\hat f(\cdot;h)] := \mathbb{E}\left[\mathrm{ISE}[\hat f(\cdot;h)]\right] = \int \mathbb{E}\left[\left(\hat f(x;h) - f(x)\right)^2\right]\mathrm{d}x = \int \mathrm{MSE}[\hat f(x;h)]\,\mathrm{d}x.
\]

Once the MISE is set as the error criterion to be minimized, our aim is to find

\[
h_{\mathrm{MISE}} := \arg\min_{h > 0} \mathrm{MISE}[\hat f(\cdot;h)].
\]
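When \(f\) is known, as in simulation studies, \(\mathrm{MISE}[\hat f(\cdot;h)]\) can be approximated by Monte Carlo, averaging numerically integrated ISEs across simulated samples. A minimal sketch for \(f=\phi\) follows (all settings are illustrative).

# Monte Carlo approximation of MISE(h) when f = dnorm, averaging numerically
# integrated ISEs over M simulated samples (all settings are illustrative)
set.seed(42)
mise <- function(h, n = 100, M = 200) {
  mean(replicate(M, {
    samp <- rnorm(n)
    kde <- density(x = samp, bw = h, from = -5, to = 5, n = 512)
    sum((kde$y - dnorm(kde$x))^2) * diff(kde$x[1:2]) # Riemann sum of the ISE
  }))
}

# Approximate the MISE curve on a bandwidth grid; its minimizer approximates
# the MISE-optimal bandwidth for this f and n
hs <- seq(0.1, 1, by = 0.1)
mises <- sapply(hs, mise)
plot(hs, mises, type = "o")
hs[which.min(mises)]

Of course, this brute-force approach requires knowing \(f\) and is computationally heavy, so it is only useful as a benchmark in simulations.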

For that purpose, we need an explicit expression of the MISE that we can attempt to minimize. An asymptotic expansion can be derived when \(h\to 0\) and \(nh\to\infty\), resulting in

\[
\mathrm{MISE}[\hat f(\cdot;h)] \approx \mathrm{AMISE}[\hat f(\cdot;h)] := \frac{1}{4}\mu_2^2(K) R(f'') h^4 + \frac{R(K)}{nh}, \tag{6.5}
\]

where \(\mu_2(K) := \int z^2 K(z)\,\mathrm{d}z\) and \(R(g) := \int g(x)^2\,\mathrm{d}x\). AMISE stands for Asymptotic MISE and, due to its closed expression, it allows obtaining a bandwidth that minimizes it:195

\[
h_{\mathrm{AMISE}} = \left[\frac{R(K)}{\mu_2^2(K) R(f'') n}\right]^{1/5}. \tag{6.6}
\]

Unfortunately, the AMISE bandwidth depends on \(R(f'') = \int (f''(x))^2\,\mathrm{d}x\), which measures the curvature of the unknown density \(f\). As a consequence, it cannot be readily applied in practice!
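To get a feel for (6.6), the following sketch evaluates \(h_{\mathrm{AMISE}}\) in the one situation in which \(R(f'')\) is readily available: a completely known density, here \(f=\phi\), combined with the normal kernel, for which \(\mu_2(K)=1\) and \(R(K)=\frac{1}{2\pi^{1/2}}\) (constants used again below in the rule-of-thumb). The sample size is illustrative.

# h_AMISE in (6.6) for the normal kernel and a known density f = dnorm, for
# which R(f'') = 3 / (8 * sqrt(pi)) (sigma = 1); n is illustrative
n <- 100
R_K <- 1 / (2 * sqrt(pi))  # R(K) for the normal kernel
mu2_K <- 1                 # mu_2(K) for the normal kernel
R_f2 <- 3 / (8 * sqrt(pi)) # Curvature R(f'') of the standard normal
(R_K / (mu2_K^2 * R_f2 * n))^(1/5)

# Coincides with (4/3)^(1/5) * n^(-1/5), anticipating the rule-of-thumb below
(4/3)^(1/5) * n^(-1/5)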

Plug-in selectors

A simple solution to turn (6.6) into something computable is to estimate \(R(f'')\) by assuming that \(f\) is the density of a \(\mathcal{N}(\mu, \sigma^2)\), and then plug in the curvature of such a density:

\[
R(\phi''(\cdot; \mu, \sigma^2)) = \frac{3}{8\pi^{1/2}\sigma^5}.
\]

In doing so, we approximate the curvature of an arbitrary density by the curvature of a normal, and we have that

\[
h_{\mathrm{AMISE}} = \left[\frac{8\pi^{1/2} R(K)}{3\mu_2^2(K) n}\right]^{1/5}\sigma.
\]

Interestingly, the bandwidth is directly proportional to the standard deviation of the target density. Replacing \(\sigma\) by an estimate yields the normal scale bandwidth selector, which we denote by \(\hat h_{\mathrm{NS}}\) to emphasize its randomness:

\[
\hat h_{\mathrm{NS}} = \left[\frac{8\pi^{1/2} R(K)}{3\mu_2^2(K) n}\right]^{1/5}\hat\sigma.
\]

The estimate \(\hat\sigma\) can be chosen as the sample standard deviation \(s\) or, in order to avoid the effects of potential outliers, as the standardized interquartile range

\[
\hat\sigma_{\mathrm{IQR}} := \frac{X_{([0.75n])} - X_{([0.25n])}}{\Phi^{-1}(0.75) - \Phi^{-1}(0.25)},
\]

or as

\[
\hat\sigma = \min(s, \hat\sigma_{\mathrm{IQR}}). \tag{6.7}
\]

When combined with a normal kernel, for which \(\mu_2(K) = 1\) and \(R(K) = \frac{1}{2\pi^{1/2}}\), this particularization of \(\hat h_{\mathrm{NS}}\) gives the famous rule-of-thumb for bandwidth selection:

\[
\hat h_{\mathrm{RT}} = \left(\frac{4}{3}\right)^{1/5} n^{-1/5}\hat\sigma \approx 1.06\, n^{-1/5}\hat\sigma.
\]

\(\hat h_{\mathrm{RT}}\) is implemented in R through the function bw.nrd196.

# Data
set.seed(667478)
n <- 100
x <- rnorm(n)

# Rule-of-thumb
bw.nrd(x = x)
## [1] 0.4040319
# bw.nrd employs 1.34 as an approximation for diff(qnorm(c(0.25, 0.75)))

# Same as
iqr <- diff(quantile(x, c(0.25, 0.75))) / diff(qnorm(c(0.25, 0.75)))
1.06 * n^(-1/5) * min(sd(x), iqr)
## [1] 0.4040319

The rule-of-thumb is an example of a zero-stage plug-in selector, a terminology that stems from the fact that \(R(f'')\) was estimated by plugging in a parametric estimate at “the very first moment a quantity that depends on \(f\) appears.” We could have opted to estimate \(R(f'')\) nonparametrically, in a certain optimal way, and then plug the estimate into \(h_{\mathrm{AMISE}}\). The important catch lies in the optimal estimation of \(R(f'')\): it requires the knowledge of \(R(f^{(4)})\)! What \(\ell\)-stage plug-in selectors do is to iterate these steps \(\ell\) times and finally plug in a normal estimate of the last, unknown curvature term197.

Typically, two stages are considered a good trade-off between the bias (mitigated as \(\ell\) increases) and the variance (which augments with \(\ell\)) of the plug-in selector. This is the method proposed by Sheather and Jones (1991), yielding what we call the Direct Plug-In (DPI) selector. The DPI selector is implemented in R through the function bw.SJ (use method = "dpi"). An alternative and faster implementation is ks::hpi, which also allows for more flexibility and has somewhat more detailed documentation.

# Data
set.seed(672641)
x <- rnorm(100)

# DPI selector
bw.SJ(x = x, method = "dpi")
## [1] 0.5006905

# Similar to
ks::hpi(x) # Default is two-stages
## [1] 0.4999456

Cross-validation

We now turn our attention to a different philosophy of bandwidth estimation. Instead of trying to minimize the AMISE by plugging in estimates for the unknown curvature term, we directly attempt to minimize the MISE by using the sample twice: once for computing the kde and once for evaluating its performance in estimating \(f\). To avoid the clear dependence on the sample, we do the evaluation in a cross-validatory way: the data used for computing the kde is not used for its evaluation.

We begin by expanding the square in the MISE expression:

\[
\mathrm{MISE}[\hat f(\cdot;h)] = \mathbb{E}\left[\int \left(\hat f(x;h) - f(x)\right)^2\mathrm{d}x\right] = \mathbb{E}\left[\int \hat f(x;h)^2\,\mathrm{d}x\right] - 2\,\mathbb{E}\left[\int \hat f(x;h) f(x)\,\mathrm{d}x\right] + \int f(x)^2\,\mathrm{d}x.
\]

Since the last term does not depend on \(h\), minimizing \(\mathrm{MISE}[\hat f(\cdot;h)]\) is equivalent to minimizing

\[
\mathbb{E}\left[\int \hat f(x;h)^2\,\mathrm{d}x\right] - 2\,\mathbb{E}\left[\int \hat f(x;h) f(x)\,\mathrm{d}x\right].
\]

This quantity is unknown, but it can be estimated unbiasedly by

\[
\mathrm{LSCV}(h) := \int \hat f(x;h)^2\,\mathrm{d}x - 2n^{-1}\sum_{i=1}^n \hat f_{-i}(X_i;h), \tag{6.8}
\]

where \(\hat f_{-i}(\cdot;h)\) is the leave-one-out kde, based on the sample with \(X_i\) removed:

\[
\hat f_{-i}(x;h) = \frac{1}{n-1}\sum_{\substack{j=1 \\ j\neq i}}^n K_h(x - X_j).
\]

The Least Squares Cross-Validation (LSCV) selector, also denoted Unbiased Cross-Validation (UCV) selector, is defined as

\[
\hat h_{\mathrm{LSCV}} := \arg\min_{h>0} \mathrm{LSCV}(h).
\]

Numerical optimization is required for obtaining \(\hat h_{\mathrm{LSCV}}\), contrary to the previous plug-in selectors, and there is little control on the shape of the objective function.
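To make the criterion concrete, the following sketch implements \(\mathrm{LSCV}(h)\) directly for the normal kernel, exploiting that \(\int \hat f(x;h)^2\,\mathrm{d}x\) has a closed form in that case, and compares its minimizer with the built-in selector bw.ucv described next. It is an \(O(n^2)\) illustration, not a replacement for the binned algorithm that bw.ucv uses.

# Naive implementation of LSCV(h) in (6.8) for the normal kernel, for which
# the integral of the squared kde has a closed form (an O(n^2) sketch)
lscv <- function(h, data) {
  n <- length(data)
  D <- outer(data, data, "-")
  int_kde2 <- sum(dnorm(D, sd = sqrt(2) * h)) / n^2 # First term of (6.8)
  loo <- (sum(dnorm(D, sd = h)) - n * dnorm(0, sd = h)) /
    (n * (n - 1)) # Average of the leave-one-out kdes at the X_i's
  int_kde2 - 2 * loo
}

# Compare its minimizer with bw.ucv on the same search interval
set.seed(42)
samp <- rnorm(100)
optimize(lscv, interval = c(0.01, 1), data = samp)$minimum
bw.ucv(x = samp, lower = 0.01, upper = 1)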

Numerical optimization of (6.8) can be challenging. In practice, several local minima are possible, and the roughness of the objective function can vary notably depending on \(n\) and \(f\). As a consequence, optimization routines may get trapped in spurious solutions. To be on the safe side, it is advisable to check the solution by plotting \(\mathrm{LSCV}(h)\) for a range of \(h\), or to perform an exhaustive search over a bandwidth grid: \(\hat h_{\mathrm{LSCV}} \approx \arg\min_{h\in\{h_1,\ldots,h_G\}} \mathrm{LSCV}(h)\).

\(\hat h_{\mathrm{LSCV}}\) is implemented in R through the function bw.ucv. bw.ucv uses optimize, which is quite sensitive to the selection of the search interval198. Therefore, some care is needed, and that is why the bw.ucv.mod function is presented below.

# Data
set.seed(123456)
x <- rnorm(100)

# UCV gives a warning
bw.ucv(x = x)
## [1] 0.4499177

# Extend search interval
bw.ucv(x = x, lower = 0.01, upper = 1)
## [1] 0.5482419

# bw.ucv.mod replaces the optimization routine of bw.ucv with an exhaustive
# search on "h.grid" (chosen adaptively from the sample) and optionally
# plots the LSCV curve with "plot.cv"
bw.ucv.mod <- function(x, nb = 1000L,
                       h.grid = diff(range(x)) * (seq(0.1, 1, l = 200))^2,
                       plot.cv = FALSE) {
  if ((n <- length(x)) < 2L)
    stop("need at least 2 data points")
  n <- as.integer(n)
  if (is.na(n))
    stop("invalid length(x)")
  if (!is.numeric(x))
    stop("invalid 'x'")
  nb <- as.integer(nb)
  if (is.na(nb) || nb <= 0L)
    stop("invalid 'nb'")
  storage.mode(x) <- "double"
  hmax <- 1.144 * sqrt(var(x)) * n^(-1/5)
  Z <- .Call(stats:::C_bw_den, nb, x)
  d <- Z[[1L]]
  cnt <- Z[[2L]]
  fucv <- function(h) .Call(stats:::C_bw_ucv, n, d, cnt, h)
  ## Original
  # h <- optimize(fucv, c(lower, upper), tol = tol)$minimum
  # if (h < lower + tol | h > upper - tol)
  #   warning("minimum occurred at one end of the range")
  ## Modification
  obj <- sapply(h.grid, function(h) fucv(h))
  h <- h.grid[which.min(obj)]
  if (plot.cv) {
    plot(h.grid, obj, type = "o")
    rug(h.grid)
    abline(v = h, col = 2, lwd = 2)
  }
  h
}

# Compute the bandwidth and plot the LSCV curve
bw.ucv.mod(x = x, plot.cv = TRUE)
## [1] 0.5431732

# We can compare with the default bw.ucv output
abline(v = bw.ucv(x = x), col = 3)

The next cross-validation selector is based on Biased Cross-Validation (BCV). The BCV selector presents a hybrid strategy that combines plug-in and cross-validation ideas. It starts from the AMISE expression in (6.5) and plugs in an estimate of \(R(f'')\) based on a modification of \(R(\hat f''(\cdot;h))\). The appealing property of \(\hat h_{\mathrm{BCV}}\) is that it has considerably smaller variance than \(\hat h_{\mathrm{LSCV}}\). This reduction in variance comes at the price of an increased bias, which tends to make \(\hat h_{\mathrm{BCV}}\) larger than \(h_{\mathrm{MISE}}\).

\(\hat h_{\mathrm{BCV}}\) is implemented in R through the function bw.bcv. Again, bw.bcv uses optimize, so the bw.bcv.mod function is presented to provide better guarantees of finding the adequate minimum.199

# Data
set.seed(123456)
x <- rnorm(100)

# BCV gives a warning
bw.bcv(x = x)
## [1] 0.4500924

# Extend search interval
args(bw.bcv)
## function (x, nb = 1000L, lower = 0.1 * hmax, upper = hmax, tol = 0.1 * 
##     lower) 
## NULL
bw.bcv(x = x, lower = 0.01, upper = 1)
## [1] 0.5070129

# bw.bcv.mod replaces the optimization routine of bw.bcv with an exhaustive
# search on "h.grid" (chosen adaptively from the sample) and optionally
# plots the BCV curve with "plot.cv"
bw.bcv.mod <- function(x, nb = 1000L,
                       h.grid = diff(range(x)) * (seq(0.1, 1, l = 200))^2,
                       plot.cv = FALSE) {
  if ((n <- length(x)) < 2L)
    stop("need at least 2 data points")
  n <- as.integer(n)
  if (is.na(n))
    stop("invalid length(x)")
  if (!is.numeric(x))
    stop("invalid 'x'")
  nb <- as.integer(nb)
  if (is.na(nb) || nb <= 0L)
    stop("invalid 'nb'")
  storage.mode(x) <- "double"
  hmax <- 1.144 * sqrt(var(x)) * n^(-1/5)
  Z <- .Call(stats:::C_bw_den, nb, x)
  d <- Z[[1L]]
  cnt <- Z[[2L]]
  fbcv <- function(h) .Call(stats:::C_bw_bcv, n, d, cnt, h)
  ## Original code
  # h <- optimize(fbcv, c(lower, upper), tol = tol)$minimum
  # if (h < lower + tol | h > upper - tol)
  #   warning("minimum occurred at one end of the range")
  ## Modification
  obj <- sapply(h.grid, function(h) fbcv(h))
  h <- h.grid[which.min(obj)]
  if (plot.cv) {
    plot(h.grid, obj, type = "o")
    rug(h.grid)
    abline(v = h, col = 2, lwd = 2)
  }
  h
}

# Compute the bandwidth and plot the BCV curve
bw.bcv.mod(x = x, plot.cv = TRUE)
## [1] 0.5130493

# We can compare with the default bw.bcv output
abline(v = bw.bcv(x = x), col = 3)

Comparison of bandwidth selectors

Although it is possible to compare the performance of bandwidth selectors theoretically, by investigating the convergence of \(n^{\nu}(\hat h / h_{\mathrm{MISE}} - 1)\), comparisons are usually done by simulation and inspection of the averaged ISE. A popular collection of simulation scenarios was given by Marron and Wand (1992) and is conveniently available through the package nor1mix. They form a collection of normal \(r\)-mixtures of the form

\[
f(x;\boldsymbol{\mu},\boldsymbol{\sigma},\mathbf{w}) := \sum_{j=1}^r w_j \phi(x; \mu_j, \sigma_j^2),
\]

where \(w_j \geq 0\), \(j = 1,\ldots,r\), and \(\sum_{j=1}^r w_j = 1\). Densities of this form are especially attractive since they allow for arbitrary flexibility and, if the normal kernel is employed, they admit explicit and exact MISE expressions, directly computable as

\[
\mathrm{MISE}_r[\hat f(\cdot;h)] = (2\pi^{1/2} n h)^{-1} + \mathbf{w}'\left\{\left(1 - n^{-1}\right)\boldsymbol{\Omega}_2 - 2\boldsymbol{\Omega}_1 + \boldsymbol{\Omega}_0\right\}\mathbf{w}, \qquad (\boldsymbol{\Omega}_a)_{ij} := \phi(\mu_i - \mu_j; a h^2 + \sigma_i^2 + \sigma_j^2), \quad i,j = 1,\ldots,r.
\]

This expression is especially useful for benchmarking bandwidth selectors, as the MISE-optimal bandwidth can be computed as \(h_{\mathrm{MISE}} = \arg\min_{h>0}\mathrm{MISE}_r[\hat f(\cdot;h)]\).
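The following sketch transcribes the exact MISE formula above for an illustrative two-component mixture and finds \(h_{\mathrm{MISE}}\) by numerical minimization.

# Exact MISE of the kde with normal kernel for a normal mixture, transcribing
# the formula above (the two-component mixture and n are illustrative)
w <- c(0.3, 0.7)
mu <- c(-1, 2)
sigma <- c(0.5, 1)
n <- 500
mise_mixt <- function(h) {
  omega <- function(a) outer(seq_along(w), seq_along(w), function(i, j)
    dnorm(mu[i] - mu[j], sd = sqrt(a * h^2 + sigma[i]^2 + sigma[j]^2)))
  drop(1 / (2 * sqrt(pi) * n * h) +
         t(w) %*% ((1 - 1 / n) * omega(2) - 2 * omega(1) + omega(0)) %*% w)
}

# MISE-optimal bandwidth for this density and sample size
optimize(mise_mixt, interval = c(0.01, 2))$minimum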

# Available models
?nor1mix::MarronWand

# Simulating -- specify density with MW object
samp <- nor1mix::rnorMix(n = 500, obj = nor1mix::MW.nm9)
hist(samp, freq = FALSE)

# Density evaluation
x <- seq(-4, 4, length.out = 400)
lines(x, nor1mix::dnorMix(x = x, obj = nor1mix::MW.nm9), col = 2)


# Plot a MW object directly
# A normal with the same mean and variance is plotted in dashed lines
par(mfrow = c(2, 2))
plot(nor1mix::MW.nm5)
plot(nor1mix::MW.nm7)
plot(nor1mix::MW.nm10)
plot(nor1mix::MW.nm12)
lines(nor1mix::MW.nm1, col = 2:3) # Also possible

Figure 6.4 presents a visualization of the performance of the kde with different bandwidth selectors, carried out in the family of mixtures of Marron and Wand (1992).

Figure 6.4: Performance comparison of bandwidth selectors. The RT, DPI, LSCV, and BCV selectors are computed for each sample from a normal mixture density. For each sample, the ISEs of the selectors are computed and sorted from best to worst. Changing the scenarios gives insight into the adequacy of each selector to hard- and simple-to-estimate densities. Application also available here.

Which bandwidth selector is the most adequate for a given dataset?

There is no simple and universal answer to this question. There are, however, a series of useful facts and suggestions:

  • Trying several selectors and inspecting the results may help determine which one is estimating the density better.
  • The DPI selector has a convergence rate much faster than the cross-validation selectors. Therefore, in theory, it is expected to perform better than LSCV and BCV. For this reason, it tends to be amongst the preferred bandwidth selectors in the literature.
  • Cross-validatory selectors may be better suited for highly non-normal and rough densities, in which plug-in selectors may end up oversmoothing.
  • LSCV tends to be considerably more variable than BCV.
  • The RT is a quick, simple, and inexpensive selector. However, it tends to give bandwidths that are too large for non-normal data.

6.1.4 Multivariate extension

Kernel density estimation can be extended to estimate multivariate densities \(f\) in \(\mathbb{R}^p\). For a sample \(\mathbf{X}_1,\ldots,\mathbf{X}_n\) in \(\mathbb{R}^p\), the kde of \(f\) evaluated at \(\mathbf{x}\in\mathbb{R}^p\) is

\[
\hat f(\mathbf{x};\mathbf{H}) := \frac{1}{n|\mathbf{H}|^{1/2}}\sum_{i=1}^n K\left(\mathbf{H}^{-1/2}(\mathbf{x} - \mathbf{X}_i)\right), \tag{6.9}
\]

where \(K\) is a multivariate kernel, a \(p\)-variate density that is (typically) symmetric and unimodal at \(\mathbf{0}\), and that depends on the bandwidth matrix200 \(\mathbf{H}\), a \(p\times p\) symmetric and positive definite matrix. A common notation is \(K_{\mathbf{H}}(\mathbf{z}) := |\mathbf{H}|^{-1/2}K(\mathbf{H}^{-1/2}\mathbf{z})\), so the kde can be compactly written as \(\hat f(\mathbf{x};\mathbf{H}) = \frac{1}{n}\sum_{i=1}^n K_{\mathbf{H}}(\mathbf{x} - \mathbf{X}_i)\). The most employed multivariate kernel is the normal kernel \(K(\mathbf{z}) = \phi(\mathbf{z};\mathbf{0},\mathbf{I}_p)\).

The interpretation of (6.9) is analogous to the one of (6.4): build a mixture of densities with each density centered at each data point. As a consequence, and roughly speaking, most of the concepts and ideas seen in univariate kernel density estimation extend to the multivariate situation, although some of them with considerable technical complications. For example, bandwidth selection inherits the same cross-validatory ideas (LSCV and BCV selectors) and plug-in methods (NS and DPI) seen before, but with increased complexity for the BCV and DPI selectors. The interested reader is referred to Chacón and Duong (2018) for a rigorous and comprehensive treatment.

We briefly discuss next the NS and LSCV selectors, denoted by \(\hat{\mathbf{H}}_{\mathrm{NS}}\) and \(\hat{\mathbf{H}}_{\mathrm{LSCV}}\), respectively. The normal scale bandwidth selector follows, as in the univariate case, by minimizing the asymptotic MISE of the kde, which now takes the form

\[
\mathrm{MISE}[\hat f(\cdot;\mathbf{H})] = \mathbb{E}\left[\int \left(\hat f(\mathbf{x};\mathbf{H}) - f(\mathbf{x})\right)^2\mathrm{d}\mathbf{x}\right],
\]

and then assuming that \(f\) is the pdf of a \(\mathcal{N}_p(\boldsymbol{\mu},\boldsymbol{\Sigma})\). With the normal kernel, this results in

\[
\mathbf{H}_{\mathrm{NS}} = \left(\frac{4}{p+2}\right)^{2/(p+4)} n^{-2/(p+4)}\,\boldsymbol{\Sigma}. \tag{6.10}
\]

Replacing \(\boldsymbol{\Sigma}\) by the sample covariance matrix \(\mathbf{S}\) in (6.10) gives \(\hat{\mathbf{H}}_{\mathrm{NS}}\).
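As a sanity check, (6.10) can be computed directly and compared with ks::Hns; the comparison assumes that ks::Hns implements exactly this normal scale rule with the sample covariance matrix.

# Normal scale bandwidth matrix (6.10) computed from its expression and
# compared with ks::Hns (assuming ks::Hns implements exactly this rule)
p <- ncol(faithful)
n <- nrow(faithful)
(4 / (p + 2))^(2 / (p + 4)) * n^(-2 / (p + 4)) * cov(faithful)
ks::Hns(x = faithful)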

The unbiased cross-validation selector neatly extends from the univariate case and attempts to minimize \(\mathrm{MISE}[\hat f(\cdot;\mathbf{H})]\) by estimating it unbiasedly with

\[
\mathrm{LSCV}(\mathbf{H}) := \int \hat f(\mathbf{x};\mathbf{H})^2\,\mathrm{d}\mathbf{x} - 2n^{-1}\sum_{i=1}^n \hat f_{-i}(\mathbf{X}_i;\mathbf{H})
\]

and then minimizing it:

\[
\hat{\mathbf{H}}_{\mathrm{LSCV}} := \arg\min_{\mathbf{H}\in\mathrm{SPD}_p}\mathrm{LSCV}(\mathbf{H}), \tag{6.11}
\]

where \(\mathrm{SPD}_p\) is the set of symmetric positive definite matrices201 of size \(p\).

Considering a full bandwidth matrix \(\mathbf{H}\) gives more flexibility to the kde, but it also increases notably the number of bandwidth parameters that need to be chosen – precisely \(\frac{p(p+1)}{2}\) – which significantly complicates bandwidth selection as the dimension \(p\) grows. A common simplification is to consider a diagonal bandwidth matrix \(\mathbf{H} = \mathrm{diag}(h_1^2,\ldots,h_p^2)\), which yields the kde employing product kernels:

\[
\hat f(\mathbf{x};\mathbf{h}) = \frac{1}{n}\sum_{i=1}^n K_{h_1}(x_1 - X_{i,1})\times\cdots\times K_{h_p}(x_p - X_{i,p}), \tag{6.12}
\]

where \(\mathbf{h} = (h_1,\ldots,h_p)\) is the vector of bandwidths. If the variables \(X_1,\ldots,X_p\) have been standardized (so that they have the same scale), then a simple choice is to consider \(h = h_1 = \cdots = h_p\).

Multivariate kernel density estimation and bandwidth selection are not supported in base R, but the ks package implements the kde through ks::kde for \(p\leq 6\). Bandwidth selectors, allowing for full or diagonal bandwidth matrices, are implemented by ks::Hns (NS), ks::Hpi and ks::Hpi.diag (DPI), ks::Hlscv and ks::Hlscv.diag (LSCV), and ks::Hbcv and ks::Hbcv.diag (BCV). The next chunk of code illustrates their usage with the faithful dataset.
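Before that, a small sanity check of (6.12): the product-kernel kde with normal kernels can be evaluated by hand at a point and compared with ks::kde using a diagonal bandwidth matrix. The evaluation point and bandwidths below are illustrative, and the comparison relies on the eval.points argument of ks::kde.

# Sanity check of (6.12): product-kernel kde with normal kernels evaluated at
# a single point and compared with ks::kde using H = diag(h1^2, h2^2)
# (the evaluation point and bandwidths are illustrative)
x0 <- c(3, 70)  # (eruptions, waiting)
h <- c(0.25, 7) # Bandwidths for each variable
X <- as.matrix(faithful)

# Manual product kernel
mean(dnorm(x0[1], mean = X[, 1], sd = h[1]) *
       dnorm(x0[2], mean = X[, 2], sd = h[2]))

# ks::kde evaluated at the same point via "eval.points"
ks::kde(x = X, H = diag(h^2), eval.points = rbind(x0))$estimate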

# DPI selectors
Hpi1 <- ks::Hpi(x = faithful)
Hpi1
##            [,1]       [,2]
## [1,] 0.06326802  0.6041862
## [2,] 0.60418624 11.1917775

# Compute kde (if H is missing, ks::Hpi is called)
kdeHpi1 <- ks::kde(x = faithful, H = Hpi1)

# Different representations
plot(kdeHpi1, display = "slice", cont = c(25, 50, 75))

# "cont" specifies the density contours, which are upper percentages of highest
# density regions. The default contours are at 25%, 50%, and 75%
plot(kdeHpi1, display = "filled.contour2", cont = c(25, 50, 75))

plot(kdeHpi1, display = "persp")


# Manual plotting using the kde object structure
image(kdeHpi1$eval.points[[1]], kdeHpi1$eval.points[[2]],
      kdeHpi1$estimate, col = viridis::viridis(20))
points(kdeHpi1$x)


# Diagonal vs. full
Hpi2 <- ks::Hpi.diag(x = faithful)
kdeHpi2 <- ks::kde(x = faithful, H = Hpi2)
plot(kdeHpi1, display = "filled.contour2", cont = c(25, 50, 75),
     main = "full")

plot(kdeHpi2, display = "filled.contour2", cont = c(25, 50, 75),
     main = "diagonal")

# Comparison of selectors along predefined contours
x <- faithful
Hlscv0 <- ks::Hlscv(x = x)
Hbcv0 <- ks::Hbcv(x = x)
Hpi0 <- ks::Hpi(x = x)
Hns0 <- ks::Hns(x = x)
par(mfrow = c(2, 2))
p <- lapply(list(Hlscv0, Hbcv0, Hpi0, Hns0), function(H) {
  # col.fun for custom colors
  plot(ks::kde(x = x, H = H), display = "filled.contour2",
       cont = seq(10, 90, by = 10), col.fun = viridis::viridis)
  points(x, cex = 0.5, pch = 16)
})

Kernel density estimation can be used to visualize density level sets in 3D too, as illustrated next with the iris dataset.

# Normal scale bandwidth
Hns1 <- ks::Hns(iris[, 1:3])

# Show nested contours of high density regions
plot(ks::kde(x = iris[, 1:3], H = Hns1))
rgl::points3d(x = iris[, 1:3])
rgl::rglwidget()

Consider the normal mixture

\[
w_1\mathcal{N}_2(\mu_{11},\mu_{12},\sigma_{11}^2,\sigma_{12}^2,\rho_1) + w_2\mathcal{N}_2(\mu_{21},\mu_{22},\sigma_{21}^2,\sigma_{22}^2,\rho_2),
\]

where \(w_1 = 0.3\), \(w_2 = 0.7\), \((\mu_{11},\mu_{12}) = (1,1)\), \((\mu_{21},\mu_{22}) = (1,1)\), \(\sigma_{11}^2 = \sigma_{21}^2 = 1\), \(\sigma_{12}^2 = \sigma_{22}^2 = 2\), \(\rho_1 = 0.5\), and \(\rho_2 = 0.5\).

Perform the following simulation exercise:

  1. Plot the density of the mixture using ks::dnorm.mixt and overlay points simulated employing ks::rnorm.mixt. You may want to use ks::contourLevels to have density plots comparable to the kde plots performed in the next step.
  2. Compute the kde employing \(\hat{\mathbf{H}}_{\mathrm{DPI}}\), both for full and diagonal bandwidth matrices. Are there any gains in considering full bandwidths? What if \(\rho_2 = 0.7\)?
  3. Repeat the previous point with \(\hat{\mathbf{H}}_{\mathrm{LSCV}}\) instead of \(\hat{\mathbf{H}}_{\mathrm{DPI}}\). Are the conclusions the same?

References

Chacón, J. E., and T. Duong. 2018. Multivariate Kernel Smoothing and Its Applications. Vol. 160. Monographs on Statistics and Applied Probability. Boca Raton: CRC Press. https://doi.org/10.1201/9780429485572.
Marron, J. S., and M. P. Wand. 1992. “Exact Mean Integrated Squared Error.” The Annals of Statistics 20 (2): 712–36. https://doi.org/10.1214/aos/1176348653.
Parzen, E. 1962. “On Estimation of a Probability Density Function and Mode.” Annals of Mathematical Statistics 33 (3): 1065–76. https://doi.org/10.1214/aoms/1177704472.
Rosenblatt, M. 1956. “Remarks on Some Nonparametric Estimates of a Density Function.” Annals of Mathematical Statistics 27 (3): 832–37. https://doi.org/10.1214/aoms/1177728190.
Sheather, S. J., and M. C. Jones. 1991. “A Reliable Data-Based Bandwidth Selection Method for Kernel Density Estimation.” Journal of the Royal Statistical Society, Series B (Methodological) 53 (3): 683–90. https://doi.org/10.1111/j.2517-6161.1991.tb01857.x.

  1. Recall that with this standardization we approach the probability density concept.↩︎

  2. Note that the function has 2n discontinuities that are located at Xi±h.↩︎

  3. \(\mathbb{E}[\mathrm{B}(N,p)] = Np\) and \(\mathrm{Var}[\mathrm{B}(N,p)] = Np(1-p)\).↩︎

  4. Or, in other words, if \(h^{-1}\) grows more slowly than \(n\).↩︎

  5. Also known as the Parzen–Rosenblatt estimator to honor the proposals by Parzen (1962) and Rosenblatt (1956).↩︎

  6. Although the efficiency of the normal kernel, with respect to the Epanechnikov kernel, is roughly 0.95.↩︎

  7. Precisely, the rule-of-thumb given by bw.nrd.↩︎

  8. Solving \(\frac{\mathrm{d}}{\mathrm{d}h}\mathrm{AMISE}[\hat f(\cdot;h)] = 0\), i.e., \(\mu_2^2(K)R(f'')h^3 - R(K)n^{-1}h^{-2} = 0\), yields \(h_{\mathrm{AMISE}}\).↩︎

  9. Not to confuse with bw.nrd0!↩︎

  10. The motivation for doing so is to try to add the parametric assumption at a later, less important, step.↩︎

  11. Long intervals containing the solution may lead to unsatisfactory termination of the search; short intervals might not contain the minimum.↩︎

  12. But be careful not expanding the upper limit of the search interval too much!↩︎

  13. Observe that, if \(p=1\), then \(\mathbf{H}\) equals the square of the bandwidth \(h\), that is, \(\mathbf{H} = h^2\).↩︎

  14. Observe that implementing the optimization of (6.11) is not trivial, since it is required to enforce the constraint \(\mathbf{H}\in\mathrm{SPD}_p\). A neat way of parametrizing \(\mathbf{H}\) that induces the positive definiteness constraint is through the (unique) Cholesky decomposition of \(\mathbf{H}\in\mathrm{SPD}_p\): \(\mathbf{H} = \mathbf{R}'\mathbf{R}\), where \(\mathbf{R}\) is a triangular matrix with positive entries on the diagonal (but the remaining entries unconstrained). Therefore, optimization of (6.11) can be done through the \(\frac{p(p+1)}{2}\) entries of \(\mathbf{R}\).↩︎