Chapter 3 Effect sizes

As well as determining whether a difference in means is statistically significant, it can also be useful to quantify the size of that difference or, as it is called in statistics, the effect size. For any \(t\)-test, this can be done by calculating Cohen's \(d\) (J. Cohen 1988).

The effect size is the standardised difference in means. For example, for the one-sample \(t\)-test, Cohen's \(d\) can be calculated as

\[ d = \displaystyle \frac{\bar{x} - \mu_0}{s},\]

where
  • \(\bar{x}\) denotes the sample mean
  • \(\mu_0\) denotes the mean under the null hypothesis
  • \(s\) denotes the sample standard deviation.
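The formula above can be computed directly from the data; a minimal Python sketch (the function name and data are illustrative, not from the text):

```python
import statistics

def cohens_d_one_sample(sample, mu0):
    """Cohen's d for a one-sample t-test: d = (x̄ − μ₀) / s,
    where s is the sample standard deviation."""
    return (statistics.mean(sample) - mu0) / statistics.stdev(sample)

# Hypothetical data, tested against a null-hypothesis mean of mu0 = 5
d = cohens_d_one_sample([4.8, 5.1, 5.6, 4.9, 5.3, 5.5], 5.0)
print(round(d, 2))  # 0.62
```

Note that `statistics.stdev` computes the *sample* standard deviation (dividing by \(n - 1\)), which matches the \(s\) in the formula.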

J. Cohen (1992) provided a guide for quantifying the magnitude of an effect size, which can be summarised as follows:

Guidelines for quantifying the magnitude of Cohen's \(d\) effect sizes for \(t\)-tests:

  • \(|d| < 0.2\): "negligible"
  • \(0.2 \leq |d| < 0.5\): "small"
  • \(0.5 \leq |d| < 0.8\): "medium"
  • \(|d| \geq 0.8\): "large"
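These thresholds translate directly into code; a small Python sketch of the classification (the function name is illustrative):

```python
def effect_size_label(d):
    """Classify |d| using J. Cohen's (1992) guidelines for t-tests."""
    magnitude = abs(d)  # guidelines apply to the absolute value of d
    if magnitude < 0.2:
        return "negligible"
    elif magnitude < 0.5:
        return "small"
    elif magnitude < 0.8:
        return "medium"
    else:
        return "large"

print(effect_size_label(-0.62))  # medium
```

The sign of \(d\) only indicates the direction of the difference, so the magnitude labels are applied to \(|d|\).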

Thankfully, most computer packages will do the hard work for us.

We will now see examples of the calculated effect size for each of the one-sample, independent-samples, and paired \(t\)-tests in turn.
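As a brief preview of the independent-samples case, the usual textbook definition of Cohen's \(d\) standardises the difference in sample means by the pooled standard deviation. A minimal Python sketch, with illustrative data:

```python
import statistics

def cohens_d_independent(x, y):
    """Cohen's d for two independent samples, standardised by the
    pooled standard deviation."""
    nx, ny = len(x), len(y)
    sx2, sy2 = statistics.variance(x), statistics.variance(y)  # sample variances
    pooled_sd = (((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / pooled_sd

# Two hypothetical groups
d = cohens_d_independent([5.1, 5.3, 4.9, 5.5], [4.6, 4.8, 5.0, 4.4])
print(round(d, 2))  # 1.94
```

The pooled standard deviation weights each group's sample variance by its degrees of freedom, \(n - 1\).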


Cohen, J. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd edition. New York: Academic Press.
Cohen, J. 1992. “A Power Primer.” Psychological Bulletin 112 (1): 155.