20 Making decisions

So far, you have learnt to ask an RQ, design a study, and describe and summarise the data. In this chapter, you will learn how decisions are made in science, so you can answer RQs. You will learn to:

  • state the two broad reasons that could explain a difference between sample statistics and population parameters.
  • explain how decisions are made in research.

20.1 Introduction

Section 18.4 described a study of three rural communities in Cameroon (López-Serrano et al. 2022) and their access to water. One purpose of the study was to determine contributors to the incidence of diarrhoea in young children.

In the observed sample, the odds of an incidence of diarrhoea in children in households without livestock was \(0.176\) (Table 16.2). However, the odds in households with livestock was \(0.548\). In other words, the odds of an incidence of diarrhoea in young children was over three times greater in households with livestock than without (i.e., \(0.548/0.176 = 3.11\)). That is, a disparity in the odds is seen in the sample; but RQs ask about the population.

However, the sample studied is just one of countless possible samples that could have been chosen. If a different sample had been chosen, different values for the odds of an incidence of diarrhoea in the two groups (with livestock; without livestock) would have been produced. This leads to one of the most important observations about sampling.

Studying a sample leads to the following observations:

  • Every sample is likely to be different.
  • We observe just one of the many possible samples.
  • Every sample is likely to yield a different value for the sample statistic.
  • We observe just one of the many possible values for the statistic.

Since many values for the sample statistic are possible, the possible values of the sample statistic vary (called sampling variation) and have a distribution (called a sampling distribution).

Since we observe just one of the many possible samples, what does the disparity in the sample odds imply about the population odds? Two reasons could explain why the sample odds are different:

  1. The two population odds are the same; the disparity between the sample odds is due to sampling variation. That is, we just happen to have---by chance---one of those samples where the disparity between the two odds is noticeable. The sample odds are not the same only because we have data from one of the many possible samples, and every sample is likely to be different.
  2. Alternatively, a disparity between the two population odds exists, and the sample odds reflect this.

How do we decide which of these explanations is supported by the data?

The two possible explanations ('statistical hypotheses') have special names:

  1. No difference exists between the population parameters: the difference between the statistics is simply due to sampling variation. This is the null hypothesis, or \(H_0\).
  2. A difference exists between the population parameters. This is the alternative hypothesis, or \(H_1\).

So, how do we decide which of these hypotheses the data support? What is the decision-making process?

The usual approach to decision making in science begins by assuming the null hypothesis is true. Then the data are examined to see if sufficient information exists to support the alternative hypothesis. Conclusions drawn about the population from the sample can never be certain, since the sample studied is just one of many possible samples that could have been taken.

20.2 The need for making decisions

In research, decisions need to be made about population parameters based on sample statistics. The challenge is that the decision must be made using only one of the many possible samples; every sample is likely to be different (comprising different individuals from the population), and so each sample will produce different summary statistics. This is called sampling variation.

Definition 20.1 (Sampling variation) Sampling variation refers to how the sample estimates (statistics) vary from sample to sample, because each sample is different.

However, sensible decisions can be made (and are made) about population parameters based on sample statistics. To do this though, the process of how decisions are made needs to be articulated, which is the purpose of this chapter.

To begin, suppose I produce a pack of cards, and shuffle them well. The pack of cards can be considered a population. Suppose I draw a sample of \(15\) cards from the pack. Define \(\hat{p}\) as the proportion of red cards in the sample.

In this sample, \(\hat{p} = 1\); that is, all 15 cards are red cards. What should you conclude? How likely is it that this would happen by chance from a fair pack (see the animation below)? Is this evidence that the pack of cards is somehow unfair, or rigged?

Getting \(15\) red cards out of \(15\) (i.e., \(\hat{p} = 1\)) from a well-shuffled pack seems very unlikely; you probably conclude that the pack is somehow unfair. Importantly, how did you reach that decision? Your unconscious decision-making process may have been this:

  1. Assumption: You assumed, quite reasonably, that I used a standard, well-shuffled pack of cards, where half the cards are red and half the cards are black. That is, you assumed the population proportion of red cards was \(p = 0.5\).
  2. Expectation: Based on that assumption, you expected about half the cards in the sample of \(15\) to be red (i.e., expect \(\hat{p}\) to be about \(0.5\)). You wouldn't necessarily expect exactly half red and half black, but you'd probably expect something close to that. That is, you would expect that \(\hat{p}\) would be close to \(0.5\).
  3. Observation: You observed that all \(15\) cards were red. That is, \(\hat{p} = 1\).
  4. Decision: You were expecting that \(\hat{p} = 0.5\) approximately, but the observed value was \(\hat{p} = 1\). Since what you observed ('all red cards') was not like what you were expecting ('about half red cards'), the sample contradicts what you were expecting, based on your assumption of a fair pack... so your assumption of a fair pack is probably wrong.

Of course, getting \(15\) red cards in a row is possible... but very unlikely. For this reason, you would probably conclude that the evidence strongly suggests the pack is not a fair pack.
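
As a rough check on this intuition, the chance of this happening with a fair pack can be computed directly. The sketch below (not part of the original text) counts the ways of choosing \(15\) red cards from the \(26\) red cards in a standard pack, relative to all ways of choosing \(15\) cards from the \(52\):

```python
from math import comb

# Probability that all 15 cards dealt from a well-shuffled standard
# 52-card pack (26 red, 26 black) are red.
p_all_red = comb(26, 15) / comb(52, 15)

print(f"P(15 red cards out of 15) = {p_all_red:.2e}")  # roughly 2 in a million
```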

You probably didn't consciously go through this process, but it seems reasonable. This process of decision making is similar to the process used in research.

20.3 How decisions are made

Based on the ideas in the last section, a formal process of decision making in research can be described. To expand:

  1. Assumption: Make an assumption about the population parameter. Initially, assume that the sampling variation explains any discrepancy between the value of the observed statistic and assumed value of the population parameter.
  2. Expectation: Based on the assumption about the parameter, describe what values of the statistic might reasonably be observed from all the possible samples from the population (due to sampling variation).
  3. Observation: Compute the observed sample statistic from the data obtained from one of the many possible samples.
  4. Decision: If the observed sample statistic is:
    • unlikely to have been observed by chance, the statistic (i.e., the evidence) contradicts the assumption about the population parameter: the assumption is probably wrong (but it is not certainly wrong).
    • likely to have been observed by chance, the statistic (i.e., the evidence) is consistent with the assumption about the population parameter, and the assumption may be correct (though not certainly correct).
FIGURE 20.1: A way to make decisions

FIGURE 20.2: A way to make decisions

This is one way to describe the process of decision making in science (Fig. 20.2). This approach is similar to how we unconsciously make decisions every day. For example, suppose I ask my son to brush his teeth (Budgett et al. 2013). Later, I wish to decide if he really did.

  1. Assumption: I assume my son brushed his teeth (because I told him to).
  2. Expectation: Based on that assumption, I would expect to find his toothbrush is damp.
  3. Observation: When I check later, I observe a dry toothbrush.
  4. Decision: The evidence contradicts what I expected to find based on my assumption, so my assumption is probably false. He probably didn't brush his teeth.

I may have made the wrong decision: He may have brushed his teeth, but dried his brush with a hair dryer. However, based on the evidence, quite probably he has not brushed his teeth.

The situation may have ended differently: When I check later, suppose I observe a damp toothbrush. Then, the evidence seems consistent with what I expected to find based on my assumption, so my assumption is probably true. He probably did brush his teeth.

Again, I may be wrong: He may have run his toothbrush under a tap... but I don't have any evidence that he didn't brush his teeth.

Similar logic underlies most decision making in science.

Example 20.1 (The decision-making process) Consider the cards example from Sect. 20.2 again. The formal process might look like this:

  1. Assumption: Assume the pack is a fair, well-shuffled pack of cards: the population proportion of red cards is \(p = 0.5\) (the value of the parameter).
  2. Expectation: Based on this assumption, roughly (but not necessarily exactly) equal proportions of red and black cards would be expected in a sample of \(15\) cards. The sample proportion of red cards \(\hat{p}\) (the value of the statistic) is expected to be close to, but maybe not exactly, \(0.5\).
  3. Observation: Suppose I then deal \(15\) cards, and all \(15\) are red cards: \(\hat{p} = 1\).
  4. Decision: \(15\) red cards from \(15\) cards seems unlikely to occur if the pack is fair and well-shuffled. The data seem inconsistent with what I was expecting based on the assumption (Fig. 20.3), suggesting the assumption is probably false.

Of course, getting \(15\) red cards out of \(15\) is not impossible, so my conclusion may be wrong. However, the evidence certainly suggests the pack is not fair.

FIGURE 20.3: A way to make decisions for the cards example

20.4 Making decisions in research

Let's think about each step in the decision-making process (Fig. 20.2) individually.

  • The assumption about the parameter (Sect. 20.4.1);
  • The expectation of the statistic (Sect. 20.4.2);
  • The observation from the sample (Sect. 20.4.3); and
  • The decision (Sect. 20.4.4).

20.4.1 Assumption about the population parameter

The initial assumption is that the difference between the parameter and the statistic is due to sampling variation. In other words, there has been 'no change, no difference, no relationship' in the population, depending on the context; any change, difference or relationship seen in the sample is due only to sampling variation.

Using this idea, a reasonable assumption can be made about the population parameter. For example, when comparing the means of two groups, we would initially assume no difference between the population means: any difference observed in the sample means would be attributed to chance.

These assumptions about the population parameter are called null hypotheses.

Example 20.2 (Assumptions about the population) Many dental associations recommend brushing teeth for two minutes. I. D. M. Macgregor and Rugg-Gunn (1979) recorded the tooth-brushing time for \(85\) uninstructed schoolchildren from England.

We initially assume the population mean tooth-brushing time is two minutes (\(H_0\): \(\mu = 2\)). Under this assumption, any difference between the sample mean and two minutes would be attributed to sampling variation. A sample can then be obtained to determine if the sample mean is consistent with, or contradicts, this assumption.

20.4.2 Expectations of sample statistics

Based on the assumed value for the parameter, we then determine what values to expect from the sample statistic from all the possible samples we could select. Since many samples are possible, and every sample is likely to be different (sampling variation), the observed value of the sample statistic depends on which one of the possible samples we obtain.

Think about the cards in Sect. 20.2. Assuming a fair pack, then half the cards are red in the population (the pack of cards), so the population proportion is assumed to be \(p = 0.5\). In a sample of \(15\) cards, what values could be reasonably expected for the sample proportion \(\hat{p}\) of red cards (the statistic)? If samples of size \(15\) were repeatedly taken, the sample proportion of red cards would vary from hand to hand, of course. But how would \(\hat{p}\) vary from sample to sample? Perhaps \(15\) red cards out of \(15\) cards happens reasonably frequently... or perhaps not. How could we find out? We could:

  • use mathematical theory.
  • shuffle a pack of cards and deal \(15\) cards many hundreds of times, then count how often we see \(15\) red cards out of \(15\) cards.
  • simulate (using a computer) dealing \(15\) cards many hundreds of times from a fair pack (i.e., \(p = 0.5\)), and count how often we get \(15\) red cards out of \(15\) cards (i.e., \(\hat{p} = 1\)).

The third option is the most practical. To begin, suppose we simulated only ten hands of \(15\) cards each; the animation below shows the sample proportion of red cards from each of the ten repetitions. None of those hands produced \(15\) red cards in \(15\) cards.

Suppose we repeated this for hundreds of hands of \(15\) cards (rather than the ten above), and for each hand we recorded \(\hat{p}\), the sample proportion of cards that were red. The value of \(\hat{p}\) would vary from sample to sample (sampling variation), and we could record the value of \(\hat{p}\) from each of those hundreds of hands. A histogram of these hundreds of sample proportions could be constructed; the animation below shows a histogram of the sample proportions from \(1000\) repetitions of a hand of \(15\) cards. This histogram shows how we might expect the sample proportions \(\hat{p}\) to vary from sample to sample, when the population proportion of red cards is \(p = 0.5\).
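
The kind of simulation shown in the animation can be sketched in a few lines of code. The sketch below is only illustrative (the number of repetitions and the random seed are arbitrary choices): it deals \(1000\) hands of \(15\) cards from a fair pack and tallies the resulting sample proportions.

```python
import random
from collections import Counter

random.seed(1)                           # arbitrary seed, for reproducibility
pack = ["red"] * 26 + ["black"] * 26     # a standard, fair pack: p = 0.5

# Deal 1000 hands of 15 cards (without replacement), recording the
# sample proportion of red cards (p-hat) from each hand.
phats = []
for _ in range(1000):
    hand = random.sample(pack, 15)
    phats.append(hand.count("red") / 15)

# Tally how often each value of p-hat occurred; plotting these counts
# gives a histogram like the one described above.
for value, count in sorted(Counter(phats).items()):
    print(f"p-hat = {value:.3f}: {count:4d} hands")
```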

Interestingly, the distribution in the animation above is roughly bell-shaped. This is no coincidence. These bell-shaped distributions appear in many places, and are studied further in Chap. 22.

20.4.3 Observations about our sample

While many samples are possible, we only observe one of those countless possible samples. From a sample of \(15\) cards (one of the many samples that are possible), suppose we observe all red cards, so that the sample statistic is \(\hat{p} = 1\) (i.e., \(15\) red cards out of \(15\) cards). Assuming \(p = 0.5\), is this value of \(\hat{p}\) unusual... or not unusual?

20.4.4 Making a decision

Making a decision means answering this question: is it reasonable to observe \(\hat{p} = 1\) when \(p = 0.5\)? Observing \(15\) red cards out of \(15\) cards is rare: it did not happen even once in the \(1000\) simulations above (see the animation above), so a sample producing \(\hat{p} = 1\) hardly ever occurs. Based on simulating \(1000\) hands, you could conclude that finding \(\hat{p} = 1\) almost never occurs... if the assumption of a fair pack was true.

But we did find \(\hat{p} = 1\) (i.e., \(15\) red cards in \(15\) cards)... so the assumption of a fair pack (i.e., \(p = 0.5\)) is probably wrong.

What if we had observed \(11\) red cards in a hand of \(15\) cards, a sample proportion of \(\hat{p} = 11/15 = 0.733\) (rather than \(15\) red cards out of \(15\))? The conclusion is not quite so obvious then: the animation above shows that \(\hat{p} = 0.733\) is unlikely... but certainly possible.

What if \(9\) red cards were found in the \(15\) (i.e., \(\hat{p} = 0.6\))? The animation above shows that \(\hat{p} = 0.6\) could reasonably be observed, since there are many possible samples that lead to \(\hat{p} = 0.6\), or even higher.
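
These 'how often' statements can also be quantified directly, rather than read from the animation. The sketch below (an illustration only; the specific probabilities are not quoted in the text above) computes, for a well-shuffled pack, the chance of dealing at least \(15\), \(11\) or \(9\) red cards in a hand of \(15\):

```python
from math import comb

def prob_at_least(k, n=15, reds=26, total=52):
    """Chance of dealing at least k red cards in a hand of n cards,
    from a well-shuffled pack with `reds` red cards out of `total`."""
    return sum(
        comb(reds, r) * comb(total - reds, n - r) / comb(total, n)
        for r in range(k, n + 1)
    )

# k = 15, 11 and 9 red cards correspond to p-hat = 1, 0.733 and 0.6.
for k in (15, 11, 9):
    print(f"P(at least {k} red cards in 15) = {prob_at_least(k):.4g}")
```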

More generally, a better approach is required for making a decision. Special tools are needed to describe what to expect from the sample statistic after making assumptions about the population parameter. These special tools are discussed in the next chapters.

Example 20.3 (Sampling variation) Many dental associations recommend brushing teeth for two minutes. One study (I. D. M. Macgregor and Rugg-Gunn 1979) recorded the tooth-brushing time for \(85\) uninstructed schoolchildren from England.

Of course, every possible sample of \(85\) children will include different children, and so produce a different sample mean \(\bar{x}\). Even if the population mean tooth-brushing time really is two minutes (\(\mu = 2\)), the sample mean probably won't be exactly two minutes, because of sampling variation.

Assume the population mean tooth-brushing time is two minutes (\(\mu = 2\)). If this is true, we then could describe what values of the sample statistic \(\bar{x}\) to expect from all possible samples. Then, after obtaining a sample and computing the sample mean, we could determine if the sample mean seems consistent with the assumption of two minutes, or whether it seems to contradict this assumption.
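
This, too, can be illustrated with a small simulation. The sketch below is purely illustrative: it assumes tooth-brushing times are roughly normally distributed with the assumed population mean of two minutes and a made-up standard deviation of \(0.6\) minutes (a value not taken from the study), and shows how the sample mean varies across possible samples of \(85\) children.

```python
import random
import statistics

random.seed(1)  # arbitrary seed, for reproducibility

# Illustrative assumptions only: mean of 2 minutes (the assumed value of mu)
# and a made-up SD of 0.6 minutes (NOT from the Macgregor and Rugg-Gunn study).
MU, SIGMA, N = 2.0, 0.6, 85

# Draw ten possible samples of 85 children, and show how the sample mean
# varies from sample to sample (sampling variation).
for i in range(10):
    sample = [random.gauss(MU, SIGMA) for _ in range(N)]
    print(f"Sample {i + 1:2d}: mean = {statistics.mean(sample):.3f} minutes")
```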

20.5 Tools for describing sampling variation

Making decisions about population parameters based on a sample statistic is difficult: only one of the many possible samples is observed. Since every sample is likely to be different, different values of the sample statistic are possible. In this chapter, though, a process for making decisions has been studied (Fig. 20.2). To apply this process to research, describing how sample statistics vary from sample to sample (sampling variation) is necessary. Those tools are discussed in the following chapters:

  • tools to describe how sample statistics vary from sample to sample (sampling variation), and hence what to expect from the sample statistic: Chap. 21.
  • tools to describe the distribution of the population and the sample: Chap. 22.

20.6 Chapter summary

Decisions are often made by making an assumption about the population parameter, which leads to an expectation of what might occur in the sample statistics. We can then make observations about our sample, and then make a decision about whether the sample data support or contradict the initial assumption.

20.7 Quick review questions

Are the following statements true or false?

  1. Parameters describe populations.
  2. Both \(\bar{x}\) and \(\mu\) are statistics.
  3. The value of a statistic is likely to be different in every sample.
  4. Sampling variation describes how the value of a statistic varies from sample to sample.
  5. The initial assumption is made about the sample statistic.
  6. The variation in statistics from sample to sample is called sampling variation.
  7. If the sample results seem inconsistent with what was expected, then the assumption about the population is probably true.
  8. In the sample, we know exactly what to expect.
  9. Hypotheses are made about the population.

20.8 Exercises

Answers to odd-numbered exercises are available in App. E.

Exercise 20.1 Suppose you are playing a die-based game, and your opponent rolls the same number ten times in a row.

  1. Do you think there is a problem with the die?
  2. Explain how you came to this decision.

Exercise 20.2 In a 2012 advertisement, an Australian pizza company claimed that their \(12\)-inch pizzas were 'real \(12\)-inch pizzas' (P. K. Dunn 2012).

  1. What is a reasonable assumption to make to test this claim?
  2. The claim is based on a sample of \(125\) pizzas, for which the sample mean pizza diameter was \(\bar{x} = 11.48\) inches. What are the two reasons why the sample mean is not \(12\) inches?
  3. Does the claim appear to be supported by, or contradicted by, the data? Why?
  4. Would your conclusion change if the sample mean was \(\bar{x} = 11.25\) inches? Does the claim appear to be supported by, or contradicted by, the data? Why?
  5. Does your answer depend on the sample size? For example, is observing a sample mean of \(11.25\) inches from a sample of size \(10\) equivalent to observing a sample mean of \(11.25\) inches from a sample of size \(125\)? Explain.

Exercise 20.3 Since my wife and I have been married, I have been called to jury service four times. The latest notice reads: 'Your name has been selected at random from the electoral roll...'.

In the same length of time, my wife has never been called to jury service. Do you think the selection process really is 'at random'? Explain.