5.1 Independent Samples t-Test

If a population measure X is normally distributed with mean \(\mu_X\) and variance \(\sigma_X^2\), and a population measure Y is normally distributed with mean \(\mu_Y\) and variance \(\sigma_Y^2\), then their difference is normally distributed with mean \(d = \mu_X - \mu_Y\) and variance \(\sigma_{XY}^2 = \sigma_X^2 + \sigma_Y^2\). By the CLT, as the sample sizes grow, the sampling distributions of the means of non-normally distributed X and Y approach normality, and so does the sampling distribution of their difference.
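A quick simulation makes this concrete. The sketch below (a minimal illustration assuming NumPy is available, with made-up parameter values) draws from two normal populations and checks that the differences have mean \(\mu_X - \mu_Y\) and variance \(\sigma_X^2 + \sigma_Y^2\).

```python
import numpy as np

rng = np.random.default_rng(42)
mu_x, sigma_x = 10.0, 2.0   # illustrative population X parameters
mu_y, sigma_y = 7.0, 1.5    # illustrative population Y parameters

# Draw many observations from each population and difference them
x = rng.normal(mu_x, sigma_x, size=1_000_000)
y = rng.normal(mu_y, sigma_y, size=1_000_000)
diff = x - y

print(diff.mean())  # ~ 3.00 (mu_X - mu_Y)
print(diff.var())   # ~ 6.25 (sigma_X^2 + sigma_Y^2 = 4 + 2.25)
```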

The independent samples t-test evaluates a hypothesized difference, \(d_0\) (H0: \(d = d_0\)), from the difference in sample means, \(\hat{d} = \bar{x} - \bar{y}\), or constructs a 100(1 - \(\alpha\))% confidence interval around \(\hat{d}\) to estimate \(d\) within a margin of error, \(\epsilon\).

In principle, you can evaluate \(\hat{d}\) with either a z-test or a t-test. Both require independent samples and approximately normal sampling distributions. Sampling distributions are normal if the underlying populations are normally distributed, or if the sample sizes are large (\(n_X\) and \(n_Y\) \(\ge\) 30). However, the z-test additionally requires known population variances, \(\sigma^2_X\) and \(\sigma^2_Y\). These variances are almost never known in practice, so always use the t-test.

The z-test assumes \(\hat{d}\) is normally distributed around \(d\) with standard error \(SE = \sqrt{\frac{\sigma_X^2}{n_X} + \frac{\sigma_Y^2}{n_Y}}.\) The test statistic for H0: \(d = d_0\) is \(Z = \frac{\hat{d} - d_0}{SE}\). The 100(1 - \(\alpha\))% CI for \(d\) is \(\hat{d} \pm z_{(1 - \alpha {/} 2)} SE\).
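As a sketch (not a recipe to use in practice, since it presumes the population variances are known), the z-test calculation looks like this. NumPy and SciPy are assumed, and all summary statistics are made up.

```python
import numpy as np
from scipy import stats

# Illustrative summary statistics; sigma2_x and sigma2_y are treated as
# known population variances, the z-test's defining (and unrealistic) premise
xbar, sigma2_x, n_x = 10.2, 4.00, 40
ybar, sigma2_y, n_y = 9.1, 2.25, 35
d0, alpha = 0.0, 0.05                    # H0: d = d0; 95% CI

d_hat = xbar - ybar
se = np.sqrt(sigma2_x / n_x + sigma2_y / n_y)
z = (d_hat - d0) / se
p = 2 * stats.norm.sf(abs(z))            # two-sided p-value
z_crit = stats.norm.ppf(1 - alpha / 2)   # z_(1 - alpha/2)
ci = (d_hat - z_crit * se, d_hat + z_crit * se)
print(z, p, ci)
```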

The t-test assumes \(\hat{d}\) has a t-distribution centered at \(d\) with estimated standard error \(SE = \sqrt{\frac{s_X^2}{n_X} + \frac{s_Y^2}{n_Y}}.\) The test statistic for H0: \(d = d_0\) is \(T = \frac{\hat{d} - d_0}{SE}\). The 100(1 - \(\alpha\))% CI for \(d\) is \(\hat{d} \pm t_{(1 - \alpha / 2), (n_X + n_Y - 2)} SE\).
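Here is the same calculation with sample variances and the t-distribution, a minimal sketch with made-up data (NumPy and SciPy assumed). It uses df = \(n_X + n_Y - 2\) to match the CI formula above; the complication discussed next refines that choice.

```python
import numpy as np
from scipy import stats

x = np.array([10.1, 9.8, 11.2, 10.5, 9.9, 10.7])  # made-up sample from X
y = np.array([9.2, 8.8, 9.5, 9.9, 9.1])           # made-up sample from Y
d0, alpha = 0.0, 0.05

d_hat = x.mean() - y.mean()
se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
df = len(x) + len(y) - 2
t_stat = (d_hat - d0) / se
p = 2 * stats.t.sf(abs(t_stat), df)       # two-sided p-value
t_crit = stats.t.ppf(1 - alpha / 2, df)   # t_(1 - alpha/2), df
ci = (d_hat - t_crit * se, d_hat + t_crit * se)
print(t_stat, p, ci)
```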

There is a complication with the t-test SE and degrees of freedom. If the sample sizes are small and the standard deviations from each population are similar (the ratio of the larger to the smaller of \(s_X\) and \(s_Y\) is less than 2), pool the variances, \(s_p^2 = \frac{(n_X - 1) s_X^2 + (n_Y-1) s_Y^2}{n_X + n_Y-2}\), so that \(SE = s_p \sqrt{\frac{1}{n_X} + \frac{1}{n_Y}}\) and the degrees of freedom (df) = \(n_X + n_Y - 2\) (the pooled variances t-test). Otherwise, \(SE = \sqrt{\frac{s_X^2}{n_X} + \frac{s_Y^2}{n_Y}}\), but you reduce df using the Welch-Satterthwaite correction, \(df = \frac{\left(\frac{s_X^2}{n_X} + \frac{s_Y^2}{n_Y}\right)^2}{\frac{s_X^4}{n_X^2\left(n_X-1\right)} + \frac{s_Y^4}{n_Y^2\left(n_Y-1\right)}}\) (the separate variance t-test, or Welch’s t-test).
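The sketch below contrasts the two variants, reusing the made-up samples from the previous sketch. The manual pooled SE and Welch df follow the formulas above, and `scipy.stats.ttest_ind` toggles between the two tests with its `equal_var` argument.

```python
import numpy as np
from scipy import stats

x = np.array([10.1, 9.8, 11.2, 10.5, 9.9, 10.7])
y = np.array([9.2, 8.8, 9.5, 9.9, 9.1])
n_x, n_y = len(x), len(y)
s2_x, s2_y = x.var(ddof=1), y.var(ddof=1)

# Pooled variances: similar spreads, small samples; df = n_X + n_Y - 2
s2_p = ((n_x - 1) * s2_x + (n_y - 1) * s2_y) / (n_x + n_y - 2)
se_pooled = np.sqrt(s2_p) * np.sqrt(1 / n_x + 1 / n_y)

# Welch: separate-variance SE with the Welch-Satterthwaite df
se_welch = np.sqrt(s2_x / n_x + s2_y / n_y)
df_welch = se_welch**4 / (s2_x**2 / (n_x**2 * (n_x - 1)) +
                          s2_y**2 / (n_y**2 * (n_y - 1)))

print(se_pooled, se_welch, df_welch)
print(stats.ttest_ind(x, y, equal_var=True))   # pooled variances t-test
print(stats.ttest_ind(x, y, equal_var=False))  # Welch's t-test
```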