Chapter 6 Nonparametric tests
This chapter gives an overview of some well-known nonparametric hypothesis tests.[^189] The reviewed tests serve different purposes, mostly related to: (i) the evaluation of the goodness-of-fit of a distribution model to a dataset; and (ii) the assessment of the relation between two random variables.
A nonparametric test evaluates a null hypothesis \(H_0\) against an alternative \(H_1\) without assuming a parametric model for either \(H_0\) or \(H_1.\) Consequently, a nonparametric test is free from the overhead of checking the parametric assumptions that must be verified before applying a parametric test.[^190] More importantly, it may well happen that the inspection of these parametric assumptions has a negative outcome that precludes the subsequent application of a parametric test. The direct applicability and generality of nonparametric tests explain their usefulness in real-data applications.
The price of this generality is efficiency: nonparametric tests are less efficient than optimal parametric tests designed for specific parametric problems.[^191] Statistical inference is full of instances of such parametric tests, especially within the context of normal populations.[^192] For example, given two iid samples \(X_{11},\ldots,X_{1n_1}\) and \(X_{21},\ldots,X_{2n_2}\) from two normal populations \(X_1\sim\mathcal{N}(\mu_1,\sigma^2)\) and \(X_2\sim\mathcal{N}(\mu_2,\sigma^2),\) the test for the equality of the means,
\[\begin{align*} H_0:\mu_1=\mu_2\quad\text{vs.}\quad H_1:\mu_1\neq\mu_2, \end{align*}\]
is optimally carried out using the test statistic \(T_n:=\frac{\bar{X}_1-\bar{X}_2}{S\sqrt{1/n_1+1/n_2}},\) where \(S^2:=\frac{1}{n_1+n_2-2}\left(\sum_{i=1}^{n_1}(X_{1i}-\bar{X}_1)^2+\sum_{i=1}^{n_2}(X_{2i}-\bar{X}_2)^2\right)\) is the pooled sample variance. The distribution of \(T_n\) under \(H_0\) is \(t_{n_1+n_2-2},\) which is compactly denoted by \(T_n\stackrel{H_0}{\sim}t_{n_1+n_2-2}.\) For this result to hold, it is key that the two populations are indeed normally distributed, an assumption that may be unrealistic in practice. Recall that, under \(H_0,\) this test states the equality of the distributions of \(X_1\) and \(X_2,\) since both populations are normal with common variance \(\sigma^2.\) Therefore, a nonparametric alternative is the two-sample Kolmogorov–Smirnov test, to be seen in Section 6.2. It evaluates whether the distributions of \(X_1\sim F_1\) and \(X_2\sim F_2\) are equal:
\[\begin{align*} H_0:F_1=F_2\quad\text{vs.}\quad H_1:F_1\neq F_2. \end{align*}\]
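To make the comparison concrete, the following minimal sketch computes the pooled \(t\) statistic \(T_n\) by hand, checks it against an off-the-shelf implementation, and then runs the two-sample Kolmogorov–Smirnov test on the same data. It assumes Python with numpy and scipy; the sample sizes, means, and seed are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two iid samples from normal populations with common variance
n1, n2 = 50, 60
x1 = rng.normal(loc=0.0, scale=1.0, size=n1)
x2 = rng.normal(loc=0.5, scale=1.0, size=n2)

# Pooled sample variance S^2 and the statistic T_n from the text
s2 = ((n1 - 1) * x1.var(ddof=1) + (n2 - 1) * x2.var(ddof=1)) / (n1 + n2 - 2)
t_n = (x1.mean() - x2.mean()) / np.sqrt(s2 * (1 / n1 + 1 / n2))

# p-value from the t_{n1+n2-2} distribution of T_n under H0
p_t = 2 * stats.t.sf(abs(t_n), df=n1 + n2 - 2)

# Agrees with scipy's equal-variance two-sample t-test
res_t = stats.ttest_ind(x1, x2, equal_var=True)
print(t_n, p_t)                       # manual computation
print(res_t.statistic, res_t.pvalue)  # same values

# Nonparametric alternative: two-sample Kolmogorov-Smirnov test of F1 = F2
res_ks = stats.ks_2samp(x1, x2)
print(res_ks.statistic, res_ks.pvalue)
```

The manual statistic and p-value coincide with those of `stats.ttest_ind`, while `stats.ks_2samp` drops the normality assumption entirely, at the cost of some power when normality actually holds.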
Finally, the term goodness-of-fit refers to statistical tests that check the adequacy of a model for explaining a sample. For example, a goodness-of-fit test allows one to answer whether a normal model is “acceptable” to describe a given sample \(X_1,\ldots,X_n.\) Initially, the concept of a goodness-of-fit test was proposed for distribution models, but it was later extended to regression[^193] and other statistical models,[^194] although such extensions are not addressed in these notes.
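As a sketch of such a distributional goodness-of-fit check (again assuming Python with scipy; the simulated sample is an arbitrary stand-in for real data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=200)  # sample to be checked

# Kolmogorov-Smirnov test of the fully specified null H0: F = N(2, 1.5^2)
res = stats.kstest(x, "norm", args=(2.0, 1.5))
print(res.statistic, res.pvalue)

# Caution: plugging parameters estimated from x into kstest invalidates its
# null distribution (a Lilliefors-type correction is needed). For the
# composite null "F is some normal", a dedicated normality test can be used:
print(stats.shapiro(x))
```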
Footnotes
[^189]: If necessary, see Section C for an informal review of the main concepts involved in hypothesis testing.
[^190]: This prior assessment is of key importance to ensure coherence between the real and the assumed data distributions, as the parametric test bases its decision on the latter. An example to dramatize this point follows. Let \(X_1\sim\mathcal{N}(\mu,\sigma^2)\) and \(X_2\sim\Gamma(\mu/\sigma^2,\mu^2/\sigma^2)\) (rate \(\mu/\sigma^2\) and shape \(\mu^2/\sigma^2\)), for \(\mu,\sigma^2>0.\) The cdfs of \(X_1\) and \(X_2,\) \(F_1\) and \(F_2,\) are different for all \(\mu,\sigma^2>0.\) Yet \(\mathbb{E}[X_1]=\mathbb{E}[X_2]\) and \(\mathbb{V}\mathrm{ar}[X_1]=\mathbb{V}\mathrm{ar}[X_2].\) When testing \(H_0:F_1=F_2,\) if one assumes that \(X_1\) and \(X_2\) are normally distributed (which is only partially true), then one can use a \(t\)-test with unknown variances. The \(t\)-test will behave as if \(H_0\) were true, since \(\mathbb{E}[X_1]=\mathbb{E}[X_2]\) and \(\mathbb{V}\mathrm{ar}[X_1]=\mathbb{V}\mathrm{ar}[X_2],\) thus having a rejection rate (approximately) equal to the significance level \(\alpha.\) However, by construction, \(H_0\) is false. The \(t\)-test fails to reject \(H_0\) because its parametric assumption does not match reality.
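    A minimal simulation sketch of this dramatization, assuming Python with numpy and scipy (the choices \(\mu=\sigma^2=1,\) the sample size \(n,\) and the number of replicates \(M\) are arbitrary):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    mu, sigma2 = 1.0, 1.0
    n, M = 500, 500  # sample size and Monte Carlo replicates

    rej_t = rej_ks = 0
    for _ in range(M):
        x1 = rng.normal(mu, np.sqrt(sigma2), size=n)
        # Gamma(mu / sigma2, mu^2 / sigma2) in the rate-shape notation above;
        # numpy parametrizes the gamma by shape and scale = 1 / rate
        x2 = rng.gamma(shape=mu**2 / sigma2, scale=sigma2 / mu, size=n)
        rej_t += stats.ttest_ind(x1, x2, equal_var=False).pvalue < 0.05
        rej_ks += stats.ks_2samp(x1, x2).pvalue < 0.05

    print(rej_t / M)   # close to 0.05: the t-test does not see that H0 is false
    print(rej_ks / M)  # well above 0.05: the KS test detects F1 != F2
    ```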
[^191]: These optimal parametric tests are often obtained by maximum likelihood theory.
[^192]: See, e.g., Section 6.2 in Molina-Peralta and García-Portugués (2022).
[^193]: See González-Manteiga and Crujeiras (2013) for an exhaustive review of the topic.
[^194]: For example, there are goodness-of-fit tests for time series models, such as \(\mathrm{ARMA}(p,q)\) models (see, e.g., Velilla (1994) and references therein).