3.3 Type I and Type II Errors

Whenever we carry out a hypothesis test, there is a chance we will arrive at an incorrect conclusion: we may reject \(H_0\) when \(H_0\) is actually true, or we may fail to reject \(H_0\) when \(H_0\) is actually false. We can summarise these two types of error as follows:

  1. Type I error: Reject \(H_0\) when \(H_0\) is true.
  2. Type II error: Fail to reject \(H_0\) when \(H_0\) is false.

Consider the level of significance, \(\alpha\). If \(\alpha = 0.05\), then we reject \(H_0\) whenever, assuming \(H_0\) is true, the probability of obtaining a sample mean at least as extreme as the one we observed is less than 5%. Consequently, even when \(H_0\) is true, there is a 5% chance that we will reject it. This leads us to the following fact:

Probability of Type I error:

The probability of making a Type I error is equal to the significance level, \(\alpha\).
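We can see this fact in practice with a minimal simulation sketch. The population mean, standard deviation, sample size, and choice of a one-sample t-test below are all hypothetical, standing in for whatever test is being carried out: when \(H_0\) is true, the test rejects it in roughly a proportion \(\alpha\) of repeated samples.

```python
# Sketch (hypothetical values): draw many samples from a population where H0
# is true, test each one, and count how often H0 is wrongly rejected.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
mu_0 = 10.0          # hypothesised mean; here H0: mu = mu_0 is actually true
n_samples = 20       # observations per sample
n_trials = 100_000   # number of simulated hypothesis tests

rejections = 0
for _ in range(n_trials):
    sample = rng.normal(loc=mu_0, scale=2.0, size=n_samples)  # H0 is true
    _, p_value = stats.ttest_1samp(sample, popmean=mu_0)
    if p_value < alpha:
        rejections += 1  # Type I error: rejected H0 even though it is true

print(f"Estimated P(Type I error) = {rejections / n_trials:.3f}")  # close to 0.05
```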

The researcher carrying out the test controls the level of significance, so they can choose a smaller \(\alpha\) to reduce the risk of making a Type I error. However, doing so makes it harder to reject \(H_0\) when \(H_0\) is actually false, so reducing the chance of a Type I error increases the chance of a Type II error. There is therefore a trade-off between the two. As mentioned earlier, \(\alpha = 0.05\) is the most common choice because many consider it a small enough risk of a Type I error without making the chance of a Type II error too great.
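This trade-off can also be illustrated by simulation. In the sketch below (again using hypothetical values), \(H_0\) is false, so every failure to reject is a Type II error; lowering \(\alpha\) from 0.05 to 0.01 makes that error noticeably more frequent.

```python
# Sketch (hypothetical values): the true mean differs from the hypothesised
# one, so H0 is false and every failure to reject it is a Type II error.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
mu_0, true_mu = 10.0, 11.0    # H0: mu = 10 is false; the true mean is 11
n_samples, n_trials = 20, 100_000

p_values = np.empty(n_trials)
for i in range(n_trials):
    sample = rng.normal(loc=true_mu, scale=2.0, size=n_samples)
    _, p_values[i] = stats.ttest_1samp(sample, popmean=mu_0)

for alpha in (0.05, 0.01):
    type_ii_rate = np.mean(p_values >= alpha)  # failed to reject a false H0
    print(f"alpha = {alpha}: estimated P(Type II error) = {type_ii_rate:.3f}")
```

The smaller significance level gives a higher estimated Type II error rate, which is exactly the trade-off described above.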