24 CIs for two independent means
So far, you have learnt to ask a RQ, design a study, and describe and summarise the data. You have also been introduced to confidence intervals for proportions, means and mean differences. In this chapter, you will learn to construct confidence intervals for the differences between two means. You will learn to:
- produce confidence intervals for the difference between two independent means.
- determine whether the conditions for using the confidence intervals apply in a given situation.
24.1 Introduction
A study examined the reaction times of students (from the University of Utah) while driving (Strayer and Johnston 2001; Agresti and Franklin 2007). In one study, students were randomly allocated to one of two groups: one to use a mobile phone while driving, and one to not use a mobile phone while driving. This is a between-individuals comparison, since different students are in each group. The reaction time for each student was measured in a driving simulator.
The data are not paired; instead, the means of two separate (or independent) samples are being compared. (The data would be paired if each student's reaction time was measured twice: once using a phone, and once without using a phone.)
Consider the RQ:
For students, what is the difference between the mean reaction time while driving when using a mobile phone and the mean reaction time while driving when not using a mobile phone?
The data are shown below.
What are P, O, C and I in this study?
P: Students (this is defined more specifically in the original study).
O: Mean reaction time.
C: Between two groups: those using and those not using a mobile phone while driving.
I: Yes; the use of a phone (or not) was decided by the researchers.
24.2 Defining notation
Since two groups are being compared, distinguishing between the statistics for the two groups is important. One way is to use subscripts (Table 24.1). For the reaction-time data, we use the subscript \(P\) for the phone-users group, and \(C\) for the control (non-phone users) group.
Using this notation, the difference between population means (the parameter) is \(\mu_P - \mu_C\). Since the population values are unknown, this parameter is estimated using the statistic \(\bar{x}_P - \bar{x}_C\).
You must be clear about how you define the differences! The differences could be computed as:
- the reaction time for phone users, minus the reaction time for non-phone users: \(\mu_P - \mu_C\); this measures how much faster the reaction time is for non-phone users, on average; or
- the reaction time for non-phone users, minus the reaction time for phone users: \(\mu_C - \mu_P\); this measures how much faster the reaction time is for phone users, on average.
Either is fine, provided you are consistent, and clear about how the differences are computed. The meaning of any conclusions will be the same.
 | Phone users: Group \(P\) | Non-phone users: Group \(C\) | Difference (\(P - C\))
---|---|---|---
Population means | \(\mu_P\) | \(\mu_C\) | \(\mu_P - \mu_C\)
Sample means | \(\bar{x}_P\) | \(\bar{x}_C\) | \(\bar{x}_P - \bar{x}_C\)
Standard deviations | \(s_P\) | \(s_C\) |
Sample sizes | \(n_P\) | \(n_C\) |
Standard errors | \(\displaystyle\text{s.e.}(\bar{x}_P) = \frac{s_P}{\sqrt{n_P}}\) | \(\displaystyle\text{s.e.}(\bar{x}_C) = \frac{s_C}{\sqrt{n_C}}\) | \(\displaystyle\text{s.e.}(\bar{x}_P - \bar{x}_C)\)
Table 24.1 does not include a standard deviation or a sample size for the difference between means; these make no sense in this context. For example, Group \(P\) has 32 individuals, and Group \(C\) has 32 individuals, and we wish to study the difference \(\mu_P - \mu_C\). The sample size is not \(32 - 32 = 0\). There are just two samples of given sizes. However, the standard error of the difference between the means does make sense: it measures how much the value of \(\bar{x}_P - \bar{x}_C\) varies for all possible samples.
24.3 Summarising data
A suitable graphical summary of the data is a boxplot (Fig. 24.1), which shows that the sample medians are slightly different, but the IQRs are about the same; one large outlier is present for the phone-using group.
The numerical summary of the data should summarise both groups, and the differences between the means (since the RQ is about this difference). All this information can be found using jamovi (Fig. 24.2) or SPSS (Fig. 24.3), then compiled into a table (Table 24.2).
 | Mean | Sample size | Std dev | Standard error
---|---|---|---|---
Not using phone | 533.59 | 32 | 65.36 | 11.554
Using phone | 585.19 | 32 | 89.65 | 15.847
Difference | 51.59 | | | 19.612
For those using a phone, what is the difference between the standard deviation and the standard error in the context of the reaction-time study?
The standard deviation quantifies how much the individual reaction times vary from person to person.
The standard error quantifies how much the sample mean reaction time varies from sample to sample.
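The per-group standard errors in Table 24.2 come directly from the formula in Table 24.1, \(\text{s.e.}(\bar{x}) = s/\sqrt{n}\). A quick check (a Python sketch; the book itself uses jamovi or SPSS output, and the last digit may differ slightly from the table through rounding):

```python
import math

# Sample summaries from Table 24.2 (reaction-time study)
s_C, n_C = 65.36, 32   # not using phone
s_P, n_P = 89.65, 32   # using phone

# Standard error of each sample mean: s / sqrt(n)
se_C = s_C / math.sqrt(n_C)
se_P = s_P / math.sqrt(n_P)

print(round(se_C, 3), round(se_P, 3))  # prints: 11.554 15.848
```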
24.4 Describing the sampling distribution
Since the difference between the population means is unknown, the difference is estimated using the sample means. The difference between the two sample means (the statistic) is \(\bar{x}_P - \bar{x}_C\). The parameter is \(\mu_P - \mu_C\), the difference between the two population means (using a phone, minus not using a phone).
The differences could be computed in the opposite direction (\(\bar{x}_C - \bar{x}_P\)). However, for the reaction-time data, computing differences as the reaction time for phone users, minus the reaction time for non-phone users (controls) probably makes more sense: the differences then refer to how much greater (on average) the reaction times are when students are using phones.
Each sample of students will comprise different students, and will give different reaction times while driving. The means for each group will differ from sample to sample, and the difference between the means will be different for each sample. The difference between the sample means varies from sample to sample, and so has a sampling distribution and standard error.
Definition 24.1 (Sampling distribution for the difference between two sample means) The sampling distribution of the difference between two sample means is described by:
- an approximate normal distribution;
- centred around a sampling mean whose value is \(\mu_A - \mu_B\), the difference between the population means;
- with a standard deviation of \(\displaystyle\text{s.e.}(\bar{x}_A - \bar{x}_B)\),
when the appropriate conditions (Sect. 24.7) are met.
A formula exists for finding the standard error \(\text{s.e.}(\bar{x}_A - \bar{x}_B)\), but it is complicated and we don't provide it; the value of this standard error will need to be given (e.g., on computer output). Using software output is sufficient.
For the reaction-time data, the differences between the sample means will have:
- an approximate normal distribution (Fig. 24.4);
- centred around the sampling mean whose value is \(\mu_P - \mu_C\);
- with a standard deviation, called the standard error of the difference, of \(\text{s.e.}(\bar{x}_P - \bar{x}_C) = 19.61\).
See Table 24.3.
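The text does not give the formula for \(\text{s.e.}(\bar{x}_P - \bar{x}_C)\), and you are expected to read its value from software output. For the curious, software combines the two group standard errors as \(\sqrt{s_P^2/n_P + s_C^2/n_C}\) (the Welch formula); a Python sketch using the Table 24.2 summaries reproduces the value quoted above:

```python
import math

# Group summaries from Table 24.2
s_P, n_P = 89.65, 32   # using phone
s_C, n_C = 65.36, 32   # not using phone

# Standard error of the difference between two independent means:
# sqrt(s_P^2/n_P + s_C^2/n_C)
se_diff = math.sqrt(s_P**2 / n_P + s_C**2 / n_C)
print(round(se_diff, 2))  # prints: 19.61
```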
What does a negative difference mean?
Earlier, we defined the differences as \(\mu_P - \mu_C\), the difference between the two population means (using a phone, minus not using a phone).
So a negative value simply means that the mean is greater when not using a phone.
Quantity | Description
---|---
Individual values in the population | Group A: vary with mean \(\mu_A\) and standard deviation \(\sigma_A\). Group B: vary with mean \(\mu_B\) and standard deviation \(\sigma_B\)
Individual values in a sample | Group A: vary with mean \(\bar{x}_A\) and standard deviation \(s_A\). Group B: vary with mean \(\bar{x}_B\) and standard deviation \(s_B\)
Difference between sample means (\(\bar{x}_A - \bar{x}_B\)) across all possible samples | Vary with approx. normal distribution (under certain conditions): sampling mean \(\mu_A - \mu_B\); standard deviation \(\text{s.e.}(\bar{x}_A - \bar{x}_B)\)
24.5 Computing confidence intervals
Being able to describe the sampling distribution implies that we have some idea of how the values of \(\bar{x}_P - \bar{x}_C\) are likely to vary from sample to sample. Then, finding an approximate 95% CI for the difference between the mean reaction times is similar to the process used in Chap. 22. Almost all approximate 95% CIs have the same form:
\[ \text{statistic} \pm (2\times\text{s.e.}(\text{statistic})). \] When the statistic is \(\bar{x}_P - \bar{x}_C\), the approximate 95% CI is
\[ (\bar{x}_P - \bar{x}_C) \pm (2 \times \text{s.e.}(\bar{x}_P - \bar{x}_C)). \]
In this case (using more decimal places than in the summary table in Table 24.2), the CI is
\[\begin{eqnarray*} 51.59375 \pm (2 \times 19.61213), \end{eqnarray*}\] or \(51.59\pm 19.61\) after rounding appropriately. We write:
Based on the sample, an approximate 95% CI for the difference in reaction time is from \(12.37\) to \(90.82\) milliseconds, slower for those using a phone compared to those not using a phone.
The plausible values for the difference between the two population means are between \(12.37\) to \(90.82\) milliseconds. Stating the CI alone is insufficient; the direction in which the differences were calculated must be given, so readers know which group had the higher mean.
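The arithmetic of the approximate 95% CI is simple enough to check by hand or with a few lines of code; a Python sketch, using the unrounded values from the software output:

```python
diff = 51.59375   # xbar_P - xbar_C, from the software output
se   = 19.61213   # s.e.(xbar_P - xbar_C), from the software output

# Approximate 95% CI: statistic +/- (2 x s.e.(statistic))
lo = diff - 2 * se
hi = diff + 2 * se
print(round(lo, 2), round(hi, 2))  # prints: 12.37 90.82
```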
24.6 Using software
The jamovi output (Fig. 24.2) and the SPSS output (Fig. 24.3) both show two CIs. In this book, we use the second row of information in each (the 'Welch's \(t\)' row in jamovi; the 'Equal variances not assumed' row in SPSS), because it is more general and makes fewer assumptions. (The two rows are often similar anyway.)
From the output, the standard error is \(\text{s.e.}(\bar{x}_P - \bar{x}_C) = 19.612\), and the exact 95% CI is from \(12.3\) to \(90.9\). The approximate CI and the exact (from software) CIs are only slightly different, as software uses an exact multiplier (the \(t\)-multiplier of 2 is an approximation, based on the 68--95--99.7 rule), and the sample sizes aren't too small.
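To see where the exact CI comes from, the approximate multiplier of 2 can be replaced by a \(t\)-multiplier based on the Welch degrees of freedom. This is a sketch only, assuming the scipy library is available; the book relies on software output, and you are not expected to use these formulas:

```python
import math
from scipy import stats

# Group summaries from Table 24.2
s_P, n_P = 89.65, 32
s_C, n_C = 65.36, 32
diff = 51.59375

# Welch standard error and degrees of freedom
vP, vC = s_P**2 / n_P, s_C**2 / n_C
se = math.sqrt(vP + vC)
df = (vP + vC)**2 / (vP**2 / (n_P - 1) + vC**2 / (n_C - 1))

# Exact multiplier replaces the approximate 2 (from the 68--95--99.7 rule)
t_mult = stats.t.ppf(0.975, df)
lo, hi = diff - t_mult * se, diff + t_mult * se
print(round(lo, 1), round(hi, 1))  # prints: 12.3 90.9, matching the software
```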
24.7 Statistical validity conditions
As usual, these results apply under certain conditions (Example 20.4). The CI computed above is statistically valid if one of these conditions is true:
- Both sample sizes are at least 25; or
- Either sample size is smaller than 25, and the populations corresponding to both comparison groups have an approximate normal distribution.
The sample size of 25 is a rough figure here, and some books give other (similar) values (such as 30). The histograms of the samples could be used to determine if normality of the populations seems reasonable.
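These conditions are easy to encode; a hypothetical helper (the function name and structure are my own, not from the text):

```python
def ci_statistically_valid(n1, n2, both_approx_normal=False):
    """Rough check of the statistical validity conditions for the CI
    for the difference between two independent means (Sect. 24.7)."""
    # Condition 1: both sample sizes at least 25 (a rough figure)
    if n1 >= 25 and n2 >= 25:
        return True
    # Condition 2: smaller samples are acceptable only if both
    # populations have an approximately normal distribution
    return both_approx_normal

print(ci_statistically_valid(32, 32))  # prints: True (reaction-time data)
```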
Example 24.1 (Statistical validity) For the reaction-time data, both samples are larger than \(25\), so the CI will be statistically valid.
24.8 Error bar charts
A useful way to display the CIs from two (or more) groups is with an error bar chart, which displays the CIs for each group being compared. (A boxplot displays the data.)
Error bars charts display the expected variation in the sample means from sample to sample, while boxplots display the variation in the individual observations and show the median. For the reaction time data, the error bar chart (Fig. 24.5) shows the 95% CI for each group (the mean has been added as a dot).
What is different about the information displayed in the error bar chart (Fig. 24.5) and the boxplot (Fig. 24.1)?
The error bar chart helps us understand how precisely the sample mean estimates the population mean.
The boxplot shows the variation in the individual data values.
Example 24.2 (Error bar charts) A study (Aloy, Vallejo Jr., and Juinio-Meñez 2011) examined the impact of plastic litter on the shoreline at Talim Bay (Batangas, Philippines) during various seasons, and the impact on the gastropod Nassarius pullus. The error bar chart (Fig. 24.6) shows that summer seems different---in terms of average value (mean) and the amount of variation---from the other seasons.
Example 24.3 (Error bar charts) A study (Schepaschenko et al. 2017) examined the foliage biomass of small-leaved lime trees from three sources: coppices; natural; planted.
Three graphical summaries are shown in Fig. 24.7: a boxplot (showing the variation in individual trees; left), an error bar chart (showing the variation in the sample means; centre) on the same vertical scale as the boxplot, and the same error bar chart using a better scale for the error-bar plot (right).
24.9 Example: speed signage
In an attempt to reduce vehicle speeds on freeway exit ramps, a Chinese study tried using additional signage (Ma et al. 2019). At one site studied (Ningxuan Freeway), speeds were recorded for 38 vehicles before the extra signage was added, and then for 41 different vehicles after the extra signage was added.
The researchers are hoping that the addition of extra signage will reduce the mean speed of the vehicles. The RQ is:
At this freeway exit, how much is the mean vehicle speed reduced after extra signage is added?
The data are not paired: different vehicles are measured before and after the extra signage is added. The data are summarised in Table 24.4. The parameter is \(\mu_{\text{Before}} - \mu_{\text{After}}\), the reduction in the mean speed.
 | Mean | Std deviation | Std error | Sample size
---|---|---|---|---
Before | 98.02 | 13.19 | 2.140 | 38
After | 92.34 | 13.13 | 2.051 | 41
Speed reduction | 5.68 | | 2.964 |
The standard error must be given; you cannot easily calculate it from the other information, and you are not expected to do so.
A useful graphical summary of the data is a boxplot (Fig. 24.8, left panel); likewise, an error bar chart can be produced by computing the CI for each group (Fig. 24.8, right panel).
Based on the sample, an approximate 95% CI for the difference in mean speeds is \(5.68 \pm (2 \times 2.964)\), or from \(-0.24\) to \(11.6\) km/h, higher before the addition of extra signage. (The negative value refers to a negative reduction; that is, an increase in speed of 0.24 km/h.)
This means that, if many samples of size 38 and 41 were found, and the difference between the mean speeds were found, about 95% of the CIs would contain the population difference (\(\mu_{\text{Before}} - \mu_{\text{After}}\)). Loosely speaking, there is a 95% chance that our CI straddles the difference in the population means (\(\mu_{\text{Before}} - \mu_{\text{After}}\)).
We could write:
Based on the sample, an approximate 95% CI for the reduction in mean speeds after adding extra signage is between -0.24 km/h (i.e., an increase of 0.24 km/h) and 11.6 km/h (two independent samples).
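The same two-line calculation as before reproduces these endpoints from the Table 24.4 summaries; a Python sketch (because the inputs here are the rounded table values, the endpoints differ in the last digit from those quoted above, which use more decimal places):

```python
diff = 5.68    # mean speed reduction, before minus after (km/h)
se   = 2.964   # s.e. of the difference (Table 24.4)

# Approximate 95% CI: statistic +/- (2 x s.e.)
lo, hi = diff - 2 * se, diff + 2 * se
print(round(lo, 2), round(hi, 2))  # prints: -0.25 11.61
```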
Using the validity conditions, the CI is statistically valid.
Remember: clearly state which mean is larger.
24.10 Example: health promotion services
A study (Becker, Stuifbergen, and Sands 1991) compared the access to health promotion (HP) services for people with and without a disability in the southwestern USA. 'Access' was measured using the quantitative Barriers to Health Promoting Activities for Disabled Persons (BHADP) scale. Higher scores mean greater barriers to health promotion services. The RQ is:
What is the difference between the mean BHADP scores, for people with and without a disability, in southwestern USA?
The parameter is \(\mu_D - \mu_{ND}\), the difference between the two population means (disability, minus non-disability). The statistic is \(\bar{x}_D - \bar{x}_{ND}\).
In this case, only summary data are available (Table 24.5): the data are not available. Nonetheless, a useful graphical summary (an error bar chart) can be produced by computing the CI for each group manually (Fig. 24.9).
The best estimate of the difference between the population means is the difference between sample means: \((\bar{x}_D - \bar{x}_{ND}) = 6.76\). The standard error for estimating this difference is \(\text{s.e.}(\bar{x}_D - \bar{x}_{ND}) = 0.80285\), as given in the table.
 | Sample mean | Std deviation | Sample size | Std error
---|---|---|---|---
Disability | 31.83 | 7.73 | 132 | 0.6728
No disability | 25.07 | 4.80 | 137 | 0.4101
Difference | 6.76 | | | 0.80285
The standard error is given; you cannot easily calculate this from the other information. You are not expected to do so.
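The per-group standard errors (used for the error bar chart) and the approximate 95% CI for the difference can both be checked from Table 24.5; a Python sketch:

```python
import math

# Summaries from Table 24.5
mean_D,  s_D,  n_D  = 31.83, 7.73, 132   # disability
mean_ND, s_ND, n_ND = 25.07, 4.80, 137   # no disability

# Per-group standard errors: s / sqrt(n)
se_D  = s_D  / math.sqrt(n_D)
se_ND = s_ND / math.sqrt(n_ND)
print(round(se_D, 4), round(se_ND, 4))   # prints: 0.6728 0.4101

# Approximate 95% CI for the difference, using the tabled s.e.
diff, se_diff = 6.76, 0.80285
lo, hi = diff - 2 * se_diff, diff + 2 * se_diff
print(round(lo, 2), round(hi, 2))        # prints: 5.15 8.37
```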
Based on the sample, an approximate 95% CI for the difference in population mean BHADP scores between people with and without a disability is \(6.76 \pm (2 \times 0.80285)\), or from \(5.15\) to \(8.37\) (higher for those with a disability).
This means that, if many samples of size 132 and 137 were found, and the difference between the mean BHADP scores were found, about 95% of the CIs would contain the population difference (\(\mu_D - \mu_{ND}\)). Loosely speaking, there is a 95% chance that our CI straddles the difference in the population means (\(\mu_D - \mu_{ND}\)).
We could write:
Based on the sample, an approximate 95% CI for the difference in BHADP scores is between \(5.15\) and \(8.37\), higher for those with a disability.
Remember: clearly state which mean is larger.
Using the validity conditions, the CI is statistically valid.
24.11 Quick review questions
- The appropriate graph for displaying quantitative data for two separate groups is a:
- True or false: The difference in population means could be denoted by \(\mu_A - \mu_B\).
- True or false: The standard error of the difference between the sample means is denoted by \(\text{s.e.}(\bar{x}_A) - \text{s.e.}(\bar{x}_B)\).
24.12 Exercises
Selected answers are available in Sect. D.23.
Exercise 24.1 A study of gray whales (Eschrichtius robustus) measured (among other things) the length of whales at birth (Agbayani, Fortune, and Trites 2020). How much longer are female gray whales than males, on average, in the population? Some summary data are given in Table 24.6; in addition, \(\text{s.e.}(\bar{x}_F - \bar{x}_M) = 0.0929\).
 | Mean | Std deviation | Sample size
---|---|---|---
Female | 4.66 | 0.379 | 26
Male | 4.60 | 0.305 | 30
- Define the difference.
- Write down the parameter, and its estimate.
- Sketch an error-bar chart.
- Compute the approximate 95% CI, and write a conclusion.
- Is the CI likely to be statistically valid?
Exercise 24.2 Earlier, we used the NHANES study data (Sect. 12.10), and considered this RQ:
Among Americans, is the mean direct HDL cholesterol different for current smokers and non-smokers?
Use the SPSS output (Fig. 24.10) to answer these questions.
- Construct an appropriate table showing the numerical summary.
- Determine, and suitably communicate, the 95% CI for the difference between the direct HDL cholesterol values between current smokers and non-smokers.
Exercise 24.3 A study (Barrett et al. 2010) of the effectiveness of echinacea to treat the common cold compared, among other things, the duration of the cold for participants treated with echinacea or a placebo. Participants were blinded to the treatment, and allocated to the groups randomly. A summary of the data is given in Table 24.7.
- Compute the standard error for the mean duration of symptoms for each group.
- Sketch an error-bar chart.
- Compute an approximate 95% CI for the difference between the mean durations for the two groups.
- In which direction is the difference computed? What does it mean when the difference is calculated in this way?
- Compute an approximate 95% CI for the population mean duration of symptoms for those treated with echinacea.
- Are the CIs likely to be statistically valid?
 | Mean | Std deviation | Std error | Sample size
---|---|---|---|---
Placebo | 6.87 | 3.62 | | 176
Echinacea | 6.34 | 3.31 | | 183
Difference | 0.53 | | 0.367 |
Exercise 24.4 Carpal tunnel syndrome (CTS) is pain experienced in the wrists. One study (Schmid et al. 2012) compared two different treatments: night splinting, or gliding exercises.
Participants were randomly allocated to one of the two groups. Pain intensity (measured using a quantitative visual analog scale; larger values mean greater pain) was recorded after one week of treatment. The data are summarised in Table 24.8.
- Compute the standard error for the mean pain intensity for each group.
- In which direction is the difference computed? What does it mean when the difference is calculated in this way?
- Compute an approximate 95% CI for the difference in the mean pain intensity for the treatments.
- Compute an approximate 95% CI for the population mean pain intensity for those treated with splinting.
- Are the CIs likely to be statistically valid?
 | Mean | Std deviation | Std error | Sample size
---|---|---|---|---
Exercise | 0.8 | 1.4 | | 10
Splinting | 1.1 | 1.1 | | 10
Difference | 0.3 | | 0.563 |
Exercise 24.5 A study (Woodward and Walker 1994) examined the sugar consumption in industrialised (mean: 41.8 kg/person/year) and non-industrialised (mean: 24.6 kg/person/year) countries. Using the jamovi output (Fig. 24.11), write down and interpret the CI.
Exercise 24.6 In an attempt to reduce vehicle speeds on freeway exit ramps, a Chinese study tried using additional signage (Ma et al. 2019). At one site studied (Ningxuan Freeway), speeds were recorded at various points on the freeway exit for 38 vehicles before the extra signage was added, and then for 41 vehicles after the extra signage was added.
From this data, the deceleration of each vehicle was determined (data below) as the vehicle left the 120 km/h speed zone and approached the 80 km/hr speed zone. Use the data, and the summary in Table 24.9, to address this RQ:
At this freeway exit, what is the difference between the mean vehicle deceleration, comparing the times before the extra signage is added and after extra signage is added?
In this context, the researchers are hoping that the extra signage might cause cars to slow down faster (i.e., they will decelerate more, on average, after adding the extra signage).
- Identify clearly the parameter of interest to understand how much the deceleration increased after adding the extra signage.
- Compute and interpret the CI for this parameter.
 | Mean | Std deviation | Std error | Sample size
---|---|---|---|---
Before | 0.0745 | 0.0494 | 0.00802 | 38
After | 0.0765 | 0.0521 | 0.00814 | 41
Change | -0.0020 | | 0.00181 |
Exercise 24.7 A study (Wojcik et al. 1999) compared the lean-forward angle in younger and older women. An elaborate set-up was constructed to measure this angle, using a harness. Consider the RQ:
Among healthy women, what is the difference between the mean lean-forward angle for younger women compared to older women?
The data are shown in Table 24.10.
- What is the parameter? Describe what this means.
- What is an appropriate graph to display the data?
- Construct an appropriate numerical summary from the software output (Fig. 24.12).
- Construct approximate and exact 95% CIs. Explain any differences.
- Is the CI expected to be statistically valid?
- Write a conclusion.
29 | 34 | 33 | 27 | 28 | 18 | 15 | 23 | 13 | 12 |
32 | 31 | 34 | 32 | 27 |