# Chapter 3 Effect sizes

As well as determining whether a difference in mean is **statistically significant**, it can also be useful to determine the *relative size* of the difference or, as it is called in statistics, the *effect size*. For any \(t\)-test, this can be done by calculating *Cohen's \(d\)* (J. Cohen 1988).

The *Cohen's \(d\)* effect size is the *standardised difference in mean*. For example, for the one-sample \(t\)-test, Cohen's \(d\) can be calculated as

\[ d = \frac{\bar{x} - \mu_0}{s},\]

where

- \(\bar{x}\) denotes the sample mean
- \(\mu_0\) denotes the mean under the null hypothesis
- \(s\) denotes the sample standard deviation.
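As a minimal sketch of this calculation (the data and hypothesised mean below are made up for illustration; `cohens_d_one_sample` is not a function from any particular statistics package):

```python
import statistics

def cohens_d_one_sample(sample, mu0):
    """Cohen's d for a one-sample t-test: (sample mean - mu0) / sample SD."""
    x_bar = statistics.mean(sample)
    s = statistics.stdev(sample)  # sample standard deviation (n - 1 denominator)
    return (x_bar - mu0) / s

# Hypothetical sample with mean 5.0, tested against mu0 = 4.0
data = [4.2, 5.1, 6.3, 4.8, 5.6, 4.0]
print(round(cohens_d_one_sample(data, 4.0), 3))
```

Note the use of the sample standard deviation (with an \(n - 1\) denominator), matching the \(s\) in the formula above.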

J. Cohen (1992) provided a guide to quantify the magnitude of an effect size which can be summarised as follows:

**Guidelines for quantifying the magnitude of Cohen's \(d\) effect sizes for \(t\)-tests:**

- \(|d| < 0.2\): "negligible"
- \(0.2 \leq |d| < 0.5\): "small"
- \(0.5 \leq |d| < 0.8\): "medium"
- \(|d| \geq 0.8\): "large"
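These thresholds translate directly into a small lookup function; this is a sketch of the guidelines above, not part of any library:

```python
def magnitude(d):
    """Label the magnitude of |d| using Cohen's (1992) guidelines."""
    d = abs(d)
    if d < 0.2:
        return "negligible"
    if d < 0.5:
        return "small"
    if d < 0.8:
        return "medium"
    return "large"

print(magnitude(0.65))  # → medium
```

Because the guidelines apply to \(|d|\), a negative \(d\) of the same size receives the same label.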

Thankfully, most computer packages will do the hard work for us.

We will now see examples of the calculated effect size for each of the one-sample, independent samples, and paired \(t\)-tests respectively.

### References

Cohen, J. 1988. *Statistical Power Analysis for the Behavioral Sciences*. 2nd edition. New York: Academic Press.

Cohen, J. 1992. "A Power Primer." *Psychological Bulletin* 112 (1): 155.