# 30 More about hypothesis testing

So far, you have learnt to ask an RQ, design a study, describe and summarise the data, understand the decision-making process, and work with probabilities. You have been introduced to the construction of confidence intervals, and have begun to study hypothesis testing. In this chapter, you will learn more about hypothesis tests. You will learn to:

• communicate the results of hypothesis tests.
• interpret $$P$$-values.

## 30.1 Introduction

In Chaps. 28 and 29, hypothesis tests for one mean and one proportion were studied. In later chapters, hypothesis tests are discussed in other contexts, too.

However, the general approach to hypothesis testing is the same for any hypothesis test, and so some general ideas are discussed in this chapter. The sections that follow discuss:

• The assumptions and forming hypotheses (Sect. 30.2).
• The sampling distribution, and the expectations (Sect. 30.3).
• The observations and the test statistic (Sect. 30.4).
• Weighing the evidence for consistency: computing $$P$$-values (Sect. 30.5).
• Interpreting $$P$$-values (Sect. 30.6).
• Wording conclusions (Sect. 30.7).
• Practical importance and statistical significance (Sect. 30.8).
• Statistical validity in hypothesis testing (Sect. 30.9).

When data are provided, begin by producing graphical and numerical summaries of the data. The statistical validity conditions, which vary for different hypothesis tests, should always be checked to see if the test is statistically valid (Sect. 30.9).

Hypothesis testing starts by assuming that the null hypothesis is true. The onus is on the data to provide evidence to refute this default position.

## 30.2 About hypotheses and assumptions

Two statistical hypotheses are made about the population parameter: the null hypothesis $$H_0$$, and the alternative hypothesis $$H_1$$.

### 30.2.1 Null hypotheses

Statistical hypotheses are always about a population parameter. Hypothesising, for example, that the sample mean body temperature is equal to $$37.0^\circ\text{C}$$ is pointless, because it clearly isn't: the sample mean is $$36.8051^\circ\text{C}$$. Besides, the RQ is about the unknown population: the P in POCI stands for Population.

The null hypothesis $$H_0$$ offers one possible reason why the value of the sample statistic (such as the sample mean) is not the same as the value of the proposed population parameter (such as the population mean): sampling variation. Every sample is different, and we only have data from one of the many possible samples. The sample statistic will vary from sample to sample; it may not be equal to the population parameter, just because of the sample obtained.

Null hypotheses always have an 'equals' in them (for example, the population mean equals 100, is less than or equal to 100, or is greater than or equal to 100), because (as part of the decision-making process) a specific value must be assumed for the population parameter, so we know what we might expect from the sample. The parameter can take many different forms, depending on the context.

Defining the parameter carefully is important!

For example, if a parameter is about the difference between two means (say, in Group A and Group B), then the parameter description must clarify if the parameter is the 'Group A mean minus the Group B mean', or the 'Group B mean minus the Group A mean'. Either is fine (though one may be easier to understand), but the direction used must be clearly stated.

The null hypothesis about the parameter is the default value of that parameter; for example:

• there is no difference between the parameter value in two (or more) groups;
• there is no change in the parameter value; or
• there is no relationship as measured by a parameter value.

The null hypothesis always has the form 'no difference, no change, no relationship' regarding the population parameter.

Definition 30.1 (Null hypothesis) The null hypothesis proposes that sampling variation explains the difference between the proposed value of the parameter, and the observed value of the statistic.

### 30.2.2 Alternative hypotheses

The other statistical hypothesis is called the alternative hypothesis $$H_1$$. The alternative hypothesis offers another possible reason why the value of the sample statistic (such as the sample proportion) is not the same as the value of the proposed population parameter (such as the population proportion). The alternative hypothesis proposes that the value of the population parameter really is not the value claimed in the null hypothesis.

Definition 30.2 (Alternative hypothesis) The alternative hypothesis proposes that the difference between the proposed value of the parameter and the observed value of the statistic cannot be explained by sampling variation: it proposes that the value of the parameter is not the value claimed in the null hypothesis.

Alternative hypotheses can be one-tailed or two-tailed. A two-tailed alternative hypothesis means, for example, that the population mean could be either smaller or larger than what is claimed. A one-tailed alternative hypothesis admits only one of those two possibilities. Most (but not all) hypothesis tests are two-tailed.

The decision about whether the alternative hypothesis is one- or two-tailed is made by reading the RQ (not by looking at the data). The RQ and hypotheses should (in principle) be formed before the data are obtained, or at least before looking at the data if the data are already collected.

The ideas are the same whether the alternative hypothesis is one- or two-tailed: based on the data and the sample statistic, a decision is to be made about whether the alternative hypothesis is supported by the data.

Example 30.1 (Alternative hypotheses) For the body-temperature study, the alternative hypothesis is two-tailed: The RQ asks if the population mean is $$37.0^\circ\text{C}$$ or not. That is, two possibilities are considered: that $$\mu$$ could be either larger or smaller than $$37.0^\circ\text{C}$$.

A one-tailed alternative hypothesis would be appropriate if the RQ was: 'Is the population mean internal body temperature greater than $$37.0^\circ\text{C}$$?', or 'Is the population mean internal body temperature smaller than $$37.0^\circ\text{C}$$?'.

• Hypotheses always concern a population parameter.
• Null hypotheses always have the form 'no difference, no change, no relationship'.
• Alternative hypotheses are one-tailed or two-tailed, depending on the RQ.
• Null hypotheses always contain an 'equals'.
• Hypotheses emerge from the RQ (not the data): the RQ and the hypotheses could be written down before collecting the data.

## 30.3 About sampling distributions and expectations

The sampling distribution describes, approximately, how all possible values of the sample statistic (such as $$\hat{p}$$ or $$\bar{x}$$) vary across all possible samples, when $$H_0$$ is true: it describes the sampling variation. Under certain circumstances, many sampling distributions have an approximate normal distribution, with a standard deviation described by the standard error. This normal distribution is the basis for computing $$P$$-values (or approximating $$P$$-values using the 68--95--99.7 rule).

When the sampling distribution is described by a normal distribution, the mean of the normal distribution is the parameter value given in the assumption ($$H_0$$), and the standard deviation of the normal distribution is called the standard error. In some cases, the sample statistic may not have a normal distribution, but a quantity easily derived from the sample statistic does have a normal distribution (for example, in the case of odds ratios).
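These ideas can be illustrated with a small simulation. The sketch below repeatedly draws samples assuming $$H_0$$ is true and records each sample mean; the population mean, standard deviation, and sample size used here are hypothetical values chosen for illustration, not data from the book's studies:

```python
import math
import random

random.seed(1)  # for reproducibility

# Hypothetical settings: the parameter value assumed under H0, an assumed
# population standard deviation, and an assumed sample size.
mu_0 = 37.0
sigma = 0.4
n = 130

# Draw many possible samples, assuming H0 is true, and record each sample mean.
num_samples = 5000
sample_means = [
    sum(random.gauss(mu_0, sigma) for _ in range(n)) / n
    for _ in range(num_samples)
]

# The sample means vary around mu_0; their standard deviation approximates
# the standard error, sigma / sqrt(n) = 0.4 / sqrt(130), or about 0.035.
mean_of_means = sum(sample_means) / num_samples
sd_of_means = math.sqrt(
    sum((m - mean_of_means) ** 2 for m in sample_means) / num_samples
)
print(round(mean_of_means, 2))  # close to 37.0
print(round(sd_of_means, 3))    # close to 0.035
```

A histogram of `sample_means` would look approximately normal, centred on the assumed parameter value, with a standard deviation close to the standard error: exactly the sampling distribution described above.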

## 30.4 About observations and the test statistic

The sampling distribution describes what values the sample statistic can reasonably be expected to have, if we repeated the study with all possible samples. Since the sampling distribution of the statistic often has an approximate normal distribution under certain conditions, the observed value of the sample statistic can be expressed as something like a $$z$$-score or a $$t$$-score. These have the form:

$\text{test statistic} = \frac{\text{sample statistic} - \text{centre of the sampling distribution}} {\text{measure of variation of the sampling distribution}}.$

The $$z$$-scores and $$t$$-scores are called test statistics, since their values are based on sample data ('a statistic') and used in a hypothesis test. Other test statistics are used too (we see one in Chap. 33).

A $$t$$-score is similar to a $$z$$-score; both measure the number of standard deviations from the mean. For any quantity that varies (and hence has a distribution), $$z$$- and $$t$$-scores have the same form:

$\frac{\text{a specific value of a quantity that varies} - \text{the corresponding mean}} {\text{the corresponding standard deviation}}.$

Then:

• If the 'quantity that varies' refers to an individual observation $$x$$, the measure of variation is the standard deviation, because the standard deviation measures the variation in the individual observations.
• If the 'quantity that varies' refers to a sample statistic, the measure of variation is a standard error, because the standard error measures the variation in the sample statistic.

In both cases, if the measure of variation uses known values, the test statistic is a $$z$$-score; if the measure of variation uses sample estimates, the test statistic is a $$t$$-score.
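As a minimal sketch of this calculation, the test statistic for the body-temperature study can be computed as follows. The sample mean is from the study, but the sample standard deviation and sample size used here are assumed values for illustration:

```python
import math

x_bar = 36.8051   # observed sample mean (from the study)
mu_0 = 37.0       # population mean assumed under H0
s = 0.4           # sample standard deviation (an assumed value)
n = 130           # sample size (an assumed value)

# The standard error of the sample mean, estimated from the sample:
se = s / math.sqrt(n)

# Since the measure of variation uses a sample estimate (s), the test
# statistic is a t-score, not a z-score:
t_score = (x_bar - mu_0) / se
print(round(t_score, 2))   # about -5.56 with these assumed values
```

With these assumed values, the sample mean lies more than five standard errors below the hypothesised mean: far outside what the sampling distribution leads us to expect if $$H_0$$ is true.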

## 30.5 About finding $$P$$-values

As demonstrated in Sect. 28.5, often $$P$$-values can be approximated by using the 68--95--99.7 rule and using a diagram of a normal distribution. The $$P$$-value is the area more extreme than the calculated $$z$$- or $$t$$-score; the 68--95--99.7 rule can be used to approximate this tail area.

For two-tailed tests, the $$P$$-value is the combined area in the left and right tails. For one-tailed tests, the $$P$$-value is the area in just the left or right tail (as appropriate, according to the alternative hypothesis).

When software reports two-tailed $$P$$-values, a one-tailed $$P$$-value is found by halving the two-tailed $$P$$-value (provided the sample statistic lies in the direction specified by the alternative hypothesis).

More accurate estimates of the $$P$$-value can be found using tables; for more precise $$P$$-values, we generally use software output.

When using software to obtain $$P$$-values, be sure to check if the software reports one- or two-tailed $$P$$-values. For example, some software (such as SPSS) always reports two-tailed $$P$$-values.
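The tail-area calculation can be sketched using only Python's standard library; the value of the test statistic below is an arbitrary example:

```python
import math

t_score = -2.9   # an example test statistic

# Two-tailed P-value under a normal approximation: the combined area in
# both tails beyond |t|. For a standard normal, 2 * P(Z > |t|) equals
# erfc(|t| / sqrt(2)).
p_two_tailed = math.erfc(abs(t_score) / math.sqrt(2))
print(round(p_two_tailed, 4))   # 0.0037

# The 68--95--99.7 rule gives a rougher answer: |t| = 2.9 is close to 3,
# so the two-tailed P-value should be a little larger than 0.003.

# One-tailed P-value (when the statistic lies in the direction given by H1):
p_one_tailed = p_two_tailed / 2
```

The exact area (about 0.0037) agrees with the rough 68--95--99.7 approximation ('a bit more than 0.003'), which is all the rule is meant to provide.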

## 30.6 About interpreting $$P$$-values

A $$P$$-value is the likelihood of observing the value of the sample statistic (or something even more extreme) over repeated sampling, under the assumption that the null hypothesis about the population parameter is true. For most hypothesis tests in this book, $$P$$-values can be computed because the sampling distribution has an approximate normal distribution.

Since the null hypothesis is initially assumed true, the onus is on the data to present evidence to the contrary.

Conclusions are always about the population parameters. $$P$$-values are needed to determine what we learn about the unknown population parameters, based on what we observed from one of the many possible values of the sample statistic.

Commonly, a $$P$$-value smaller than 5% is considered 'small', but this is arbitrary and sometimes the threshold is discipline-dependent. More reasonably, $$P$$-values should be interpreted as giving varying degrees of evidence in support of the alternative hypothesis (Table 30.1), but these are only guidelines.

TABLE 30.1: A guideline for interpreting $$P$$-values. $$P$$-values should be interpreted in context.

| If the $$P$$-value is... | Write the conclusion as...                 |
|--------------------------|--------------------------------------------|
| Larger than 0.10         | Insufficient evidence to support $$H_1$$   |
| Between 0.05 and 0.10    | Slight evidence to support $$H_1$$         |
| Between 0.01 and 0.05    | Moderate evidence to support $$H_1$$       |
| Between 0.001 and 0.01   | Strong evidence to support $$H_1$$         |
| Smaller than 0.001       | Very strong evidence to support $$H_1$$    |

Definition 30.3 (P-value) A $$P$$-value is the likelihood of observing the sample statistic (or something more extreme) over repeated sampling, under the assumption that the null hypothesis about the population parameter is true.

Conclusions should be written in the context of the problem. Sometimes, authors will write that the results are 'statistically significant' when $$P < 0.05$$.

$$P$$-values are never exactly zero. When SPSS reports that '$$P= 0.000$$', it means the $$P$$-value is less than 0.001, which we write as '$$P < 0.001$$'. jamovi usually reports very small $$P$$-values correctly as '$$P < 0.001$$'.
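The wording guideline in Table 30.1 can be sketched as a small function. The thresholds and phrases follow the table; remember that they are guidelines to be interpreted in context, not rigid rules:

```python
def evidence_description(p_value: float) -> str:
    """Map a P-value to the wording guideline of Table 30.1."""
    if p_value > 0.10:
        return "insufficient evidence to support H1"
    if p_value > 0.05:
        return "slight evidence to support H1"
    if p_value > 0.01:
        return "moderate evidence to support H1"
    if p_value > 0.001:
        return "strong evidence to support H1"
    return "very strong evidence to support H1"

print(evidence_description(0.018))   # moderate evidence to support H1
print(evidence_description(0.24))    # insufficient evidence to support H1
```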

$$P$$-values are commonly used in research, but must be used and interpreted correctly. Specifically:

• A $$P$$-value is not the probability that the null hypothesis is true.
• A $$P$$-value does not prove anything (we have only studied one of many possible samples).
• A big $$P$$-value does not mean that the null hypothesis $$H_0$$ is true, or that $$H_1$$ is false.
• A small $$P$$-value does not mean that the null hypothesis $$H_0$$ is false, or that $$H_1$$ is true.
• A small $$P$$-value does not indicate that the results are practically important (Sect. 30.8).
• A small $$P$$-value does not mean a large difference between the statistic and parameter; it means that the difference (whether large or small) could not reasonably be attributed to sampling variation (chance).

When the results of a study are reported as being statistically significant, this usually means that the $$P$$-value is less than 0.05... though a different $$P$$-value is sometimes used as the 'threshold', so check! To avoid confusion, the word "significant" should be avoided when writing about research unless "statistical significance" is actually what is meant. In other situations, consider using words like "substantial".

## 30.7 About wording conclusions

When reporting a conclusion, three things should be included:

1. The answer to the RQ;
2. The evidence used to reach that conclusion (such as the $$t$$-score and $$P$$-value, clarifying if the $$P$$-value is one-tailed or two-tailed); and
3. Some sample summary statistics (such as sample means and sample sizes), including a CI (indicating the precision with which the statistic has been estimated).

Since the null hypothesis is initially assumed to be true, the onus is on the data to provide evidence in support of the alternative hypothesis. Hence, conclusions are always worded in terms of how strongly the evidence supports the alternative hypothesis.

What is wrong with the following conclusion?

The evidence proves that the mean internal body temperature has changed.

We can never prove anything about a population just from using one of the many possible samples. Instead, we say whether the sample evidence seems to support or not support the alternative hypothesis.

## 30.8 About practical importance and statistical significance

Hypothesis tests assess statistical significance, which answers the question: 'Is there evidence of a difference between the value of the statistic and the value of the assumed parameter?' Even very small differences between the sample statistic and the population parameter can be statistically significant if the sample size is sufficiently large.

In contrast, practical importance answers the question: 'Is the conclusion of any importance in practice?' Whether a result is of practical importance depends upon the context: what the data are being used for, by whom, and for what purpose. 'Practical importance' and 'statistical significance' are two separate issues.
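The effect of sample size can be sketched numerically. Below, an assumed difference of just 0.05 between the sample mean and the hypothesised mean (with an assumed standard deviation of 0.4; all values hypothetical) becomes 'statistically significant' once the sample is large enough:

```python
import math

mu_0 = 37.0    # hypothesised mean (hypothetical values throughout)
x_bar = 37.05  # sample mean: a small difference of 0.05
s = 0.4        # assumed standard deviation

for n in (10, 100, 10_000):
    t = (x_bar - mu_0) / (s / math.sqrt(n))
    p = math.erfc(abs(t) / math.sqrt(2))   # two-tailed normal approximation
    print(f"n = {n:>6}: t = {t:5.2f}, two-tailed P = {p:.4f}")
```

The same tiny difference gives $$P \approx 0.69$$ when $$n = 10$$, but $$P < 0.001$$ when $$n = 10\,000$$: statistical significance, with no guarantee of practical importance.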

Example 30.2 (Practical importance) In the body-temperature study (Sect. 29.1), very strong evidence exists that the mean body temperature had changed ('statistical significance'). But the change was so small that, for most purposes, it has no practical importance. (There may be other situations, such as medical contexts, where it does have practical importance, however.)

Example 30.3 (Practical importance) A study of some herbal medicines for weight loss found:

Phaseolus vulgaris resulted in a statistically significant weight loss compared to placebo, although this was not considered clinically significant.

This means that the difference in weight loss between the placebo and Phaseolus vulgaris was unlikely to be explained by chance ($$P < 0.001$$; 'statistically significant'), but the difference was so small (a mean weight loss of just 1.61 kg) that it was unlikely to be of any use in practice (not of 'practical importance'). In this context, a weight loss of at least 2.5 kg was considered to be of practical importance.

## 30.9 About statistical validity in hypothesis testing

When performing hypothesis tests, certain statistical validity conditions must be true. These conditions ensure that the sampling distribution is sufficiently close to a normal distribution for the 68--95--99.7 rule to apply, and hence for $$P$$-values to be computed. If these conditions are not met, the sampling distribution may not be normally distributed, so the $$P$$-values (and hence the conclusions) may be inappropriate.

In addition to the statistical validity conditions, the internal validity and external validity of the study should also be discussed (Fig. 21.1). These are usually (but not always) the same as for CIs (Sect. 21.3).

Regarding external validity, all the computations in this book assume a simple random sample. If the sample comes from a random sampling method, but not a simple random sample, methods exist for conducting hypothesis tests that are externally valid; these are more complicated than those described in this book.

If the sample is a non-random sample, the hypothesis test may be reasonable for the specific population represented by the sample; however, the sample probably does not represent the more general population that is likely intended.

External validity requires that a study is also internally valid. Internal validity can only be discussed if details are known about the study design.

Figure 21.1, regarding validity, remains relevant in the context of hypothesis testing. In addition, hypothesis tests also require that the sample size is less than 10% of the population size; however, this is almost always the case.

## 30.10 Summary

Hypothesis testing formalises the steps of the decision-making process. Starting with an assumption about a population parameter of interest, a description of what values the sample statistic might take (based on this assumption) is produced: this describes what values the statistic is expected to take over all possible samples. This sampling distribution is often a normal distribution, or related to a normal distribution.

The sample statistic (the estimate) is then observed, and a test statistic, which often is a $$z$$- or $$t$$-score, is computed to describe this sample statistic. Using a $$P$$-value, a decision is made about whether the sample evidence supports or contradicts the initial assumption, and hence a conclusion is made. Since $$t$$-scores are like $$z$$-scores, $$P$$-values can often be approximated using the 68--95--99.7 rule.

## 30.11 Quick review questions

1. True or false? When a $$P$$-value is very small, a very large difference exists between the statistic and parameter.
2. True or false? The alternative hypothesis is one-tailed if the sample statistic is larger than the hypothesised population mean.
3. What is wrong (if anything) with this null hypothesis: $$H_0=37$$?
4. True or false: When the sampling distribution is a normal distribution, the standard deviation of this normal distribution is called the standard error.
5. True or false? Both $$z$$-scores and $$t$$-scores can be test statistics.
6. True or false? $$P$$-values can never be exactly zero.
7. True or false? A $$P$$-value is the probability that the null hypothesis is true.

## 30.12 Exercises

Selected answers are available in Sect. D.28.

Exercise 30.1 Use the 68--95--99.7 rule to approximate the two-tailed $$P$$-value if:

1. the $$t$$-score is $$3.4$$.
2. the $$t$$-score is $$-2.9$$.
3. the $$t$$-score is $$1.2$$.
4. the $$t$$-score is $$-0.95$$.
5. the $$t$$-score is $$-0.2$$.
6. the $$t$$-score is $$6.7$$.

Exercise 30.2 Consider the $$t$$-scores in Exercise 30.1. Use the 68--95--99.7 rule to approximate the one-tailed $$P$$-values in each case.

Exercise 30.3 Suppose a hypothesis test results in a $$P$$-value of 0.0501. What would we conclude? What about if the $$P$$-value was 0.0499?

Exercise 30.4 Consider again the study to determine the mean body temperature, where $$\bar{x} = 36.8051^{\circ}\text{C}$$. What, if anything, is wrong with these hypotheses? Explain.

1. $$H_0$$: $$\bar{x} = 37$$; $$H_1$$: $$\bar{x} \ne 37$$.
2. $$H_0$$: $$\mu = 37$$; $$H_1$$: $$\mu > 37$$.
3. $$H_0$$: $$\mu > 37$$; $$H_1$$: $$\bar{x} > 37$$.
4. $$H_0$$: $$\bar{x} = 36.8051$$; $$H_1$$: $$\bar{x} > 36.8051$$.
5. $$H_0$$: $$\mu = 36.8051$$; $$H_1$$: $$\mu \ne 36.8051$$.
6. $$H_0$$: $$\mu = 37$$; $$H_1$$: $$\mu = 36.8051$$.

Exercise 30.5 The recommended daily energy intake for women is 7725 kJ (for a particular cohort, in a particular country; Altman (1991)). The daily energy intake of 11 women was measured to see if this is being adhered to. The RQ was 'Is the population mean daily energy intake 7725 kJ?'

The test produced $$P = 0.018$$. What, if anything, is wrong with these conclusions after completing the hypothesis test?

1. There is moderate evidence ($$P = 0.018$$) that the energy intake is not meeting the recommended daily energy intake.
2. There is moderate evidence ($$P = 0.018$$) that the sample mean energy intake is not meeting the recommended daily energy intake.
3. There is moderate evidence ($$P = 0.018$$) that the population energy intake is not meeting the recommended daily energy intake.

Exercise 30.6 A study compared ALDI batteries to another brand of battery. In one test comparing the length of time it takes for 1.5 volt AA batteries to reach 1.1 volts, the ALDI brand battery took 5.73 hours, and the other brand (Energizer) took 5.44 hours.

1. The $$P$$-value for comparing these two means is about $$P = 0.70$$. What does this mean?
2. Is this difference likely to be of any practical importance? Explain.
3. What would be a useful, but correct, conclusion for ALDI to report from the study? Explain.
4. What else would be useful to know in comparing the two brands of batteries?