18 Sampling variation
So far, you have learnt to ask a RQ, identify different ways of obtaining data, design the study, collect the data, describe and summarise the data graphically and numerically, and understand the decision-making process.
In this chapter, you will learn to describe sampling variation. You will learn to:
 explain what a sampling distribution describes.
 explain the difference between variation between individuals and variation in sample statistics.
 determine when a standard error is appropriate to use.
 explain the difference between standard errors and standard deviations.
18.1 Introduction
The last three chapters introduced tools to apply the decision-making process (Sect. 15.4) used in research:
 Make an assumption about the population parameter.
 Based on this assumption, describe what values of the sample statistic might reasonably be expected from all possible samples.
 Observe the sample data, and see if it seems consistent with the expectation, or if it contradicts the expectation.
One key observation is that, under certain conditions, the variation of many sample statistics (such as the sample mean) from sample to sample can be described approximately by a normal distribution. As a result, the expected behaviour of these statistics can be described, so we know what to expect from the sample statistic.
This has been alluded to before. In Sect. 15.4, the sample proportion of red cards in a sample of 15 varied from hand to hand, and approximately followed a normal distribution. This is no accident: many sample statistics vary from sample to sample with an approximate normal distribution if certain conditions are met. This is called the Central Limit Theorem.
A sampling distribution describes the distribution of the sample statistic: how the value of the sample statistic varies from sample to sample across many samples. Here, the sampling distribution is an approximate normal distribution.
Definition 18.1 (Sampling distribution) A sampling distribution is the distribution of some sample statistic, showing how its value varies from sample to sample.
18.2 Sample proportions have a distribution
As with any sample statistic, sample proportions vary from sample to sample (Sect. 15.4); that is, sampling variation exists, so the sample proportions have a sampling distribution.
Consider a European roulette wheel shown below in the animation: a ball is spun and can land on any number on the wheel from 0 to 36 (inclusive).
Using the classical approach to probability, the probability of the ball landing on an odd number (an 'odd spin') is \(p = 18/37 = 0.486\).
However, if the wheel is spun (say) 15 times, the sample proportion of odd spins in those 15 spins, denoted \(\hat{p}\), can vary. Of course, the sample proportion \(\hat{p}\) of odd spins can vary after spinning the wheel 30, 50 or 100 times also. How does \(\hat{p}\) vary from one set of 15 spins to another set of 15 spins?
Computer simulation can be used to demonstrate what happens if the wheel were spun \(n=15\) times, over and over and over again, and the proportion of odd spins was recorded for each repetition. Clearly, the proportion of odd spins \(\hat{p}\) can vary from sample to sample (sampling variation) for \(n=15\) spins, as shown by the histogram (Fig. 18.1, top left panel).
If the wheel was spun (say) \(n=40\) times, something similar occurs (Fig. 18.1, top right panel): the values of \(\hat{p}\) vary from sample to sample.
The same process can be repeated for (say) \(n=70\) and \(n=100\) spins (Fig. 18.1, bottom panels). Notice that as the sample size \(n\) gets larger, the distribution of the values of \(\hat{p}\) looks more like an approximate normal distribution, and the variation gets smaller.
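The repeated-sampling process described above can be sketched in Python. This is a hypothetical illustration, not part of the text: the repetition count of 10 000 and the random seed are arbitrary choices, and the binomial distribution is used as a shortcut for counting odd spins in \(n\) independent spins.

```python
import numpy as np

rng = np.random.default_rng(2024)
reps = 10_000                 # number of repeated sets of spins (arbitrary)
p = 18 / 37                   # probability of an odd spin (classical approach)

sd_of_phat = {}
for n in (15, 40, 70, 100):   # the sample sizes shown in Fig. 18.1
    # Each repetition spins the wheel n times and records the sample
    # proportion of odd spins: a binomial count divided by n gives p-hat.
    p_hats = rng.binomial(n, p, size=reps) / n
    sd_of_phat[n] = p_hats.std(ddof=1)
    print(f"n = {n:3d}: sd of the p-hat values = {sd_of_phat[n]:.3f}")
```

The standard deviation of the \(\hat{p}\) values shrinks as \(n\) grows, matching the narrowing histograms in Fig. 18.1.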
The values of the sample proportion vary from sample to sample. The distribution of the possible values of the sample statistic (in this case the sample proportion) from sample to sample is called a sampling distribution.
Under certain conditions, the sampling distribution of a sample proportion is described by an approximate normal distribution. In general, the approximation gets better as the sample size gets larger.
18.3 Sample means have a distribution
As with any sample statistic, the sample mean varies from sample to sample (Sect. 15.4) just like sample proportions; that is, sampling variation exists, so the sample means have a sampling distribution.
Consider a European roulette wheel again (Sect. 18.2).
Rather than recording the sample proportion of odd spins, the sample mean of the numbers spun can be recorded. So, for example, if the wheel is spun (say) 15 times, the sample mean of the spins \(\bar{x}\) will vary.
Of course, spinning the wheel 30, 50 or 100 times also shows that the sample mean \(\bar{x}\) can vary too. How much can it vary?
Again, computer simulation can be used to demonstrate what could happen if the wheel was spun 15 times, over and over and over again, and the mean of the spun numbers was recorded for each repetition.
Clearly, the sample mean spin \(\bar{x}\) can vary from sample to sample (sampling variation) for \(n=15\) spins, as shown by a histogram (Fig. 18.2, top left panel).
When \(n=15\), the sample mean \(\bar{x}\) indeed varies from sample to sample, and the distribution of the values of \(\bar{x}\) has an approximate normal distribution. If the wheel was spun more than 15 times (say, \(n=50\) times) something similar occurs (Fig. 18.2, top right panel): the values of \(\bar{x}\) vary from sample to sample, and have an approximate normal distribution. In fact, the values of \(\bar{x}\) have an approximate normal distribution for other numbers of spins also (Fig. 18.2, bottom panels).
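The same kind of simulation can be sketched for sample means. Again, this is a hypothetical illustration (seed and repetition count are arbitrary): each repetition spins the wheel \(n\) times and records the mean of the numbers spun.

```python
import numpy as np

rng = np.random.default_rng(2024)
reps = 10_000                 # number of repeated sets of spins (arbitrary)

sd_of_xbar = {}
for n in (15, 50):            # as in the top panels of Fig. 18.2
    # Each repetition spins the wheel n times (the numbers 0 to 36,
    # equally likely) and records the sample mean of the numbers spun.
    spins = rng.integers(0, 37, size=(reps, n))
    sd_of_xbar[n] = spins.mean(axis=1).std(ddof=1)
    print(f"n = {n:2d}: sd of the x-bar values = {sd_of_xbar[n]:.2f}")
```

As with the sample proportions, the \(\bar{x}\) values cluster around the population mean of 18, and vary less for the larger sample size.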
The values of the sample mean vary from sample to sample. The distribution of the possible values of a sample statistic, in this case the sample mean, is called a sampling distribution.
Under certain conditions, the sampling distribution of a sample mean is a normal distribution.
18.4 Standard errors
As we have seen, each sample is likely to be different, so any statistic estimated from the sample is likely to be different for each sample. This is called sampling variation.
Definition 18.2 (Sampling variation) Sampling variation refers to how much a sample estimate (a statistic) is likely to vary from sample to sample, because each sample is different.
The value of the sample statistic can vary for every possible sample that we could select, so the actual value of the sample statistic that we observe depends on which sample we have.
That is, all the possible values of sample statistics that we could observe have a distribution (a sampling distribution). Perhaps surprisingly, under certain conditions, the sampling distribution is a normal distribution.
If the sampling distribution is a normal distribution, it is reasonable to ask what the value of the standard deviation of that normal distribution is.
Figs. 18.1 and 18.2 show that the standard deviation appears to get smaller as the sample sizes get larger: the sample statistics show less variation for larger \(n\). This makes sense: larger samples generally produce more precise estimates. (After all, that's the advantage of using larger samples: all else being equal, larger samples are preferred as they produce more precise estimates.)
In other words, the sample statistic varies less in larger samples: the value of the standard deviation of the sampling distribution is smaller for larger samples. The standard error is a measure of how precisely the sample statistic estimates the population parameter.
Example 18.1 (Standard errors) Suppose the sample proportion of odd spins on the roulette wheel (Sect. 18.2) is estimated as \(\hat{p} = 0.51\). If the standard error was 0.01, this estimate is relatively precise: the standard error is very small, which means the value of \(\hat{p}\) is not likely to vary greatly from one sample to the next. Any single estimate of \(p\) is likely to be close to \(p\).
However, if the standard error was 0.2, the estimate of the population proportion is less precise: the standard error is larger, so the value of \(\hat{p}\) is likely to vary a lot from one sample to the next. Any single estimate of \(p\) may not be close to \(p\).
Definition 18.3 (Standard error) A standard error is the standard deviation of the sampling distribution of a statistic.
Any quantity estimated from a sample has a standard error.
To expand: if every possible sample (obtained a certain way, and of a given size) was taken, and the statistic computed from each sample, the standard deviation of these estimates would be the standard error.
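This 'every possible sample' idea can be approximated by simulation: take very many samples, compute the statistic for each, and find the standard deviation of those estimates. The sketch below (a hypothetical Python illustration; seed and repetition count are arbitrary) does this for the sample mean of roulette spins, and compares the result with the standard theoretical value \(\sigma/\sqrt{n}\), where \(\sigma\) is the population standard deviation.

```python
import numpy as np

rng = np.random.default_rng(2024)

population = np.arange(37)    # the 37 pocket numbers on a European wheel
n = 15                        # spins per sample

# Compute the sample mean for many repeated samples, then take the
# standard deviation of those sample means: this approximates the
# standard error of the sample mean.
x_bars = rng.choice(population, size=(10_000, n)).mean(axis=1)
empirical_se = x_bars.std(ddof=1)

# For a sample mean, a standard result gives s.e. = sigma / sqrt(n).
theoretical_se = population.std() / np.sqrt(n)
print(f"empirical s.e. = {empirical_se:.3f}; theoretical = {theoretical_se:.3f}")
```

The two values agree closely, as expected: the standard error is just the standard deviation of the sampling distribution.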
Recall from Sect. 18.1 that, for many sample statistics, the variation from sample to sample can be approximately described by a normal distribution (the sampling distribution) if certain conditions are met. Furthermore, the standard deviation of this normal distribution is the standard error.
Notice that the standard error is a special type of standard deviation: the variation in a sample estimate from sample to sample.
The standard error is an unfortunate term: it is not an error or mistake, nor even standard. (For example, there is no such thing as a 'non-standard error'.)
18.5 Standard deviation vs. standard error
Even experienced researchers confuse the meaning and the usage of the terms standard deviation and standard error,^{380} so understanding the difference is important.
The standard deviation, in general, quantifies the amount of variation in any variable. Without further qualification, the standard deviation quantifies how much individual observations vary from individual to individual (for quantitative data).
The standard error is a standard deviation that quantifies how much a sample statistic varies from sample to sample.
Crucially, the standard error is a standard deviation, but has a special name to indicate that it is the standard deviation of something very specific.
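The distinction can be seen directly in a short simulation (a hypothetical Python sketch; the sample size of 50 and the seed are arbitrary): the standard deviation describes how individual spins vary within a sample, while the standard error describes how the sample mean varies across samples.

```python
import numpy as np

rng = np.random.default_rng(2024)

# Standard deviation: variation between individual spins in ONE sample.
one_sample = rng.integers(0, 37, size=50)
sd_individuals = one_sample.std(ddof=1)

# Standard error: variation in the sample MEAN across MANY samples.
many_means = rng.integers(0, 37, size=(10_000, 50)).mean(axis=1)
se_mean = many_means.std(ddof=1)

print(f"standard deviation of individual spins: {sd_individuals:.2f}")
print(f"standard error of the sample mean:      {se_mean:.2f}")
```

The standard error is much smaller than the standard deviation: individual spins vary widely, but sample means cluster tightly around the population mean.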
Any numerical quantity estimated from a sample (a statistic) can vary from sample to sample, and so has sampling variation, a sampling distribution, and hence a standard error:
 the sample mean \(\bar{x}\);
 the sample proportion \(\hat{p}\);
 the sample odds ratio;
 the sample median;
 the sample standard deviation \(s\);
 etc.
The standard error is often abbreviated to 'SE' or 's.e.'.
For example, the 'standard error of the sample mean' is written as \(\text{s.e.}(\bar{x})\), and the 'standard error of the sample proportion' is written as \(\text{s.e.}(\hat{p})\).
18.6 Summary
A sampling distribution describes how a sample statistic is likely to vary from sample to sample. Under certain circumstances, the sampling distribution often can be described by a normal distribution. The standard deviation of this normal distribution is called a standard error. The standard error is a standard deviation that measures something specific: the variation in the sample statistic from sample to sample.
18.7 Quick review questions

Why is the phrase 'the standard error of the population proportion' inappropriate?

Which one of the following does not have a standard error?

Which one of the following is true?
True or false: The standard deviation is a standard error of something quite specific.
True or false: Sampling distributions are always normal distributions.
18.8 Exercises
Selected answers are available in Sect. D.18.
Exercise 18.1 In the following scenarios, would a standard deviation or a standard error be the appropriate way to measure the amount of variation? Explain.
 Researchers are studying the spending habits of customers. They would like to measure the variation in the amount spent by shoppers per transaction at a supermarket.
 Researchers are studying the time it takes for inner-city office workers to travel to work each morning. They would like to determine the precision with which their estimate (a mean of 47 minutes) has been measured.
 A study examined the effect of taking a pain-relieving drug on children. The researchers wish to describe the sample they used in the study, including a description of how the ages of the children vary.
 A study examined the effect of taking a pain-relieving drug in teenagers. The researchers wished to report the percentage of teenagers in the sample that experienced side-effects, with some indication of the precision of that estimate.
Exercise 18.2 Which of the following have a standard error?
 The population proportion.
 The sample median.
 The sample IQR.
 The sample standard deviation.
 The population odds.
Exercise 18.3 A research article made this statement:
Although [...] samples should always be summarized by the mean and SD [standard deviation], authors often use the standard error of the mean (SEM) to describe the variability of their sample [...] Although the SD and the SEM are related [...], they give two very different types of information.
If the standard error of the mean is not used to 'describe the variability of the sample', then what is it used for? How would you explain the difference between the standard error and the standard deviation to researchers who misuse the terms?