## 2.7 Interactions

### 2.7.1 What’s an interaction?

Here’s the official line: an *interaction effect* between two (or more!) predictors occurs when the relationship of one predictor to the response depends on the value of the other predictor (or predictors!). We’ll focus on the two-way version here, but bear in mind that *higher-order* interaction effects between three or more predictors can occur, though they’re perhaps less common.

In various application areas, you may hear people talk about *synergistic effects*, or *constructive/destructive interference*, or one factor acting as a *moderator* for the effect of another.

Let’s take a look at an interaction that occurs in this guinea pig experiment. The response is the tooth growth of the guinea pigs, and it’s related to the dosage of vitamin C that they receive and also the way they ingest it – there are two supplement types, orange juice and an ascorbic acid supplement, which is abbreviated as “VC.”
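The text doesn't show how `pig_dat` is created, but it looks like R's built-in `ToothGrowth` dataset. Here's a sketch of the likely setup (an assumption on my part), with `dose` converted to a factor so its three levels get treated categorically rather than as numbers:

```r
# Assumed setup (not shown in the text): pig_dat appears to be the
# built-in ToothGrowth data, with dose as a factor so that 0.5, 1,
# and 2 are treated as three categorical levels.
library(dplyr)
library(ggplot2)

pig_dat = ToothGrowth %>%
  mutate(dose = factor(dose))
```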

Here’s a side-by-side boxplot broken down by dosage level and supplement type:

```
pig_dat %>%
  ggplot() +
  geom_boxplot(aes(x = dose, y = len, color = supp))
```

If I ask you, “what’s the effect of supplement type on tooth growth?”, you *can’t actually answer my question*. It seems like OJ works better than the ascorbic acid…except when it doesn’t. With a sufficiently high dose of vitamin C, it doesn’t seem to matter how it’s administered.

This is what we mean by the effect of one factor *depending on* the level of another factor. It’s not necessarily a *causal* relationship. But you have to know the dosage level before you can tell me about the relationship between supplement type and tooth growth.

### 2.7.2 Spotting interaction effects

So how can you recognize when an interaction effect might be happening? Well, again, it comes back to noticing if the relationship between one factor and the response is *different* for different values, or levels, of another factor.

With two categorical factors, which is often the scenario we’re interested in, the side-by-side boxplot approach can work pretty well:

```
pig_dat %>%
  ggplot() +
  geom_boxplot(aes(x = dose, y = len, color = supp))
```

You can also reorganize the boxplots with the factors in the other roles, so that the observations are split up by supplement type first, and then subdivided by dose:

```
pig_dat %>%
  ggplot() +
  geom_boxplot(aes(x = supp, y = len, color = dose))
```

In this case, I think the first version is easier to read – it’s easier to see that, hey, supplement type matters for dose level 0.5 but not for 2. But it’s worth looking at both versions with your data.

As you get more levels on the factors, the boxplots can get a little cluttered. A simplification is what’s called (shockingly) an *interaction plot*. It looks like this:

```
interaction.plot(x.factor = pig_dat$dose,
                 trace.factor = pig_dat$supp,
                 response = pig_dat$len)
```

The levels of one factor go on the x-axis, and the levels of the other factor are used to define different lines, or *traces*. The y-axis is the average response value for all observations with that combination of factor levels: for example, the 10 guinea pigs on dose level 0.5 with OJ had an average tooth length of about 13.

The advantage to interaction plots is that you can easily look for an interaction by asking whether the lines are *parallel*. If there’s no interaction between the two factors, and their effects are additive, then the “boost” from getting, say, OJ would be the same at all dose levels – so the OJ line would be the same distance above the VC line all the way across. This is not the case here, so there does appear to be an interaction. In extreme cases, you’ll even see different lines cross each other.
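You can check the parallel-lines idea numerically as well. Here's a sketch in base R (assuming, as the plots suggest, that `pig_dat` is the built-in `ToothGrowth` data; the variable names here are mine): the OJ-minus-VC gap at each dose is the vertical distance between the traces, and under additivity those gaps would all be equal.

```r
# Mean tooth length for each (dose, supp) combination -- these are
# exactly the points plotted in the interaction plot.
cell_means = aggregate(len ~ dose + supp, data = ToothGrowth, FUN = mean)

# The "boost" from OJ over VC at each dose level. Parallel traces
# (additivity) would mean these gaps are all the same.
oj = cell_means$len[cell_means$supp == "OJ"]
vc = cell_means$len[cell_means$supp == "VC"]
round(oj - vc, 2)
# roughly 5.25, 5.93, -0.08: a clear OJ advantage at doses 0.5 and 1,
# essentially none at dose 2
```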

You can flip the roles of the two factors here as well. Again, this is technically giving you the same information, but one version or the other may be easier to read.

```
interaction.plot(x.factor = pig_dat$supp,
trace.factor = pig_dat$dose,
response = pig_dat$len)
```

### 2.7.3 Two-way ANOVA with an interaction term

So you’ve uncovered an interaction effect in your data. What should you do about it?

Well, you should probably check if it’s significant. You don’t want to just ignore it. In this example, if you ignored the interaction between supplement type and dosage level, you’d actually come out with misleading results. You’d say that OJ was better than ascorbic acid, that it increased tooth growth by some amount – but that amount wouldn’t really be right for *any* dosage level.

Instead, we can add a term for this effect in the model.

Our theoretical model for two-way ANOVA without an interaction looked like this:

\[y_{ijk} = \mu + \alpha_i + \beta_j + \varepsilon_{ijk}\]

Hokay, let’s add a term:

\[y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}\]

And yes, if you’re wondering, \(\sum_i (\alpha\beta)_{ij} = 0\) for each \(j\), and \(\sum_j (\alpha\beta)_{ij} = 0\) for each \(i\). That keeps everything nice and identifiable.

That new \((\alpha\beta)_{ij}\) represents the interaction “adjustment” for the combination of levels \((i,j)\). It’s not necessarily equal to \(\alpha_i\) times \(\beta_j\)!

How do we estimate these? Well, we look at the guinea pigs with this combination of levels, and compare their mean to what we’d expect based on the *additive* effects of dosage and supplement type. With our “dot to take the mean” notation, we have:

Parameter | Estimate from sample |
---|---|
\(\mu\) | \(\widehat{\mu} = \overline{y}_{\cdot\cdot\cdot}\) |
\(\alpha_i\) | \(\widehat{\alpha}_i = \overline{y}_{i\cdot\cdot} - \overline{y}_{\cdot\cdot\cdot}\) |
\(\beta_j\) | \(\widehat{\beta}_j = \overline{y}_{\cdot j \cdot} - \overline{y}_{\cdot\cdot\cdot}\) |
\((\alpha \beta)_{ij}\) | \(\widehat{(\alpha \beta)}_{ij} = \overline{y}_{ij\cdot} - \widehat{\mu} - \widehat{\alpha}_i - \widehat{\beta}_j\) |

Notice how our estimates for the \(\alpha_i\)’s basically ignore supplement type, while our estimates for the \(\beta_j\)’s ignore dosage. Then we use the interaction coefficients to adjust for that.
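These estimates are easy to compute by hand. Here's a sketch in base R (again assuming the `ToothGrowth` data; all variable names are mine):

```r
# Work with dose as a factor, as in the models above.
dat = transform(ToothGrowth, dose = factor(dose))

mu_hat    = mean(dat$len)                             # grand mean
alpha_hat = tapply(dat$len, dat$dose, mean) - mu_hat  # dose effects
beta_hat  = tapply(dat$len, dat$supp, mean) - mu_hat  # supplement effects

# Cell means for every (dose, supp) combination:
cell_means = tapply(dat$len, list(dat$dose, dat$supp), mean)

# Interaction estimates: cell mean minus the additive prediction.
ab_hat = cell_means - (mu_hat + outer(alpha_hat, beta_hat, "+"))

# Sanity check: the adjustments sum to zero across each row and column,
# as the identifiability constraints require.
round(rowSums(ab_hat), 10)
round(colSums(ab_hat), 10)
```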

So we can break down a single observation \(y_{ijk}\) using a bunch of differences between means, similar to what we’ve seen before: \[y_{ijk} = \overline{y}_{\cdot\cdot\cdot} + (\overline{y}_{i\cdot\cdot} - \overline{y}_{\cdot\cdot\cdot}) + (\overline{y}_{\cdot j \cdot} - \overline{y}_{\cdot\cdot\cdot}) + (\overline{y}_{ij\cdot} - \widehat{\mu} - \widehat{\alpha}_i - \widehat{\beta}_j) + (y_{ijk} - \overline{y}_{ij\cdot})\]

Out of this equation we can obtain – surprise! – sums of squares. Now the breakdown is: \[SSTot = SSA + SSB + SSAB + SSE\]

with a new sum of squares term in there for the interaction.

Here are the expressions for the various sums of squares. You can go through and work out the simplifications like pulling out \(n\) from the sums if you like, or you can take my word for it.

Source | Sum of squares | Our example |
---|---|---|
A | \(SSA = \sum_{i=1}^a b n (\widehat{\alpha}_i)^2\) | between dose levels |
B | \(SSB = \sum_{j=1}^b a n (\widehat{\beta}_j)^2\) | between supp. types |
AB | \(SSAB = \sum_{i=1}^a \sum_{j=1}^b n (\widehat{\alpha\beta})_{ij}^2\) | how is this combo different? |
Error | \(SSE = \sum_{i=1}^a \sum_{j=1}^b \sum_{k=1}^n (y_{ijk} - \overline{y}_{ij\cdot})^2\) | variation within combo |
Total | \(SSTot = \sum_{i=1}^a \sum_{j=1}^b \sum_{k=1}^n (y_{ijk} - \overline{y}_{\cdot\cdot\cdot})^2\) | total |
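If you'd rather not take my word for it, you can check this decomposition numerically. Here's a sketch in base R (assuming the `ToothGrowth` data, where \(a = 3\), \(b = 2\), and \(n = 10\); the variable names are mine):

```r
# Parameter estimates computed by hand, with dose as a factor.
dat = transform(ToothGrowth, dose = factor(dose))
a = 3; b = 2; n = 10

mu_hat    = mean(dat$len)
alpha_hat = tapply(dat$len, dat$dose, mean) - mu_hat
beta_hat  = tapply(dat$len, dat$supp, mean) - mu_hat
cell_means = tapply(dat$len, list(dat$dose, dat$supp), mean)
ab_hat = cell_means - (mu_hat + outer(alpha_hat, beta_hat, "+"))

# Sums of squares, following the table above:
SSA   = b * n * sum(alpha_hat^2)
SSB   = a * n * sum(beta_hat^2)
SSAB  = n * sum(ab_hat^2)
SSE   = sum((dat$len - cell_means[cbind(dat$dose, dat$supp)])^2)
SSTot = sum((dat$len - mu_hat)^2)

round(c(SSA = SSA, SSB = SSB, SSAB = SSAB, SSE = SSE), 1)
# and SSA + SSB + SSAB + SSE adds up to SSTot
```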

As before, we turn sums of squares into mean squares by dividing by the degrees of freedom. And in order to test whether something is significant, we look at the ratio of mean squares. Here’s the table:

Source | df | Sum of squares | Mean squares | \(F\) stat |
---|---|---|---|---|
A | \(a-1\) | \(SSA\) | \(MSA = SSA/(a-1)\) | \(MSA/MSE\) |
B | \(b-1\) | \(SSB\) | \(MSB = SSB/(b-1)\) | \(MSB/MSE\) |
AB | \((a-1)(b-1)\) | \(SSAB\) | \(MSAB = SSAB/dfAB\) | \(MSAB/MSE\) |
Error | \(ab(n-1)\) | \(SSE\) | \(MSE = SSE/dfE\) |

(Note that the error degrees of freedom, \(ab(n-1) = abn - (a-1) - (b-1) - (a-1)(b-1) - 1\), are smaller than in the additive model: we’ve spent \((a-1)(b-1)\) of them on the interaction terms.)

Notice that when we create the \(F\) statistic \(\frac{MSAB}{MSE}\), we’re using it to test *all* the interaction effects at once. Just as the test for factor A asks “does factor A matter at all?” – in other words, “are *any* of the levels of factor A different?” – the test for the interaction term asks “is there *any* interaction effect at all?” Mathematically, the null hypothesis is:

\[H_0: (\alpha\beta)_{ij} = 0 \mbox{ for ALL }i,j\]

Or in other words, do we *ever* have to adjust for dosage level when considering the effect of supplement type, or vice versa? Rejecting this null hypothesis doesn’t require that all the \((\alpha\beta)_{ij}\)’s be nonzero. In our example, we really only saw one case where the interaction mattered: the effect of supplement type changed if we were looking at dosage level 2. But that’s enough to say that, overall, the interaction exists.
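As a sanity check, you can compute the interaction’s \(F\) statistic and p-value by hand from the sums of squares. A sketch, using this experiment’s numbers (\(a = 3\), \(b = 2\), \(n = 10\); the sums of squares match the `aov` output in the next subsection):

```r
# Interaction F test by hand for the guinea pig experiment.
SSAB = 108.3; SSE = 712.1             # sums of squares for this data
df_AB = (3 - 1) * (2 - 1)             # (a-1)(b-1) = 2
df_E  = 3 * 2 * (10 - 1)              # ab(n-1) = 54

F_AB = (SSAB / df_AB) / (SSE / df_E)  # ratio of mean squares, about 4.1
pf(F_AB, df1 = df_AB, df2 = df_E, lower.tail = FALSE)
# about 0.022 -- small enough to reject "no interaction at all" at the
# 0.05 level
```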

### 2.7.4 Data!

Let’s see what happens when we throw this into R. Notice the new notation in the formula provided to `aov` – the colon (`:`) indicates that I want an interaction between those two factors. I can also use the star `*` to tell R to include both the *main effects* and the interaction – so `A*B` in the formula would be equivalent to `A + B + A:B`.

```
pig_aov_interact = aov(len ~ dose + supp + dose:supp, data = pig_dat)
# or: pig_aov_interact = aov(len ~ dose*supp, data = pig_dat)
pig_aov_interact %>% summary()
```

```
## Df Sum Sq Mean Sq F value Pr(>F)
## dose 2 2426.4 1213.2 92.000 < 2e-16 ***
## supp 1 205.4 205.4 15.572 0.000231 ***
## dose:supp 2 108.3 54.2 4.107 0.021860 *
## Residuals 54 712.1 13.2
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

Well! It looks like, with a reasonably lenient \(\alpha\) of 0.05, the dose-supplement type interaction is significant. Note that this makes interpreting the main effects of `dose` and `supp` a little tricky. Supplement type has a low p-value here, but that doesn’t mean it always has a significant effect on the response! In fact we know it doesn’t do much of anything if the dosage is 2. When you have a significant interaction effect in the model, you can’t really interpret either of the main effects “in isolation” without considering the level of the other one.

It is worth comparing this to the version we got when we were assuming additivity, with no interaction:

```
pig_aov %>% summary()
```

```
## Df Sum Sq Mean Sq F value Pr(>F)
## dose 2 2426.4 1213.2 82.81 < 2e-16 ***
## supp 1 205.4 205.4 14.02 0.000429 ***
## Residuals 56 820.4 14.7
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```

It’s not a big change, but note that supplement type has a lower p-value in our new model, with the interaction term. In the old model, we were a little bit less confident that supplement type mattered – because, hey, it *didn’t* matter for some of the observations. But in the new model we’re able to be more confident that supplement type has *some* effect; it’s just that it only has an effect at certain dosage levels.

As a final note: including an interaction term in the model has freed us from the assumption of additivity (whee!). But the other assumptions and conditions still apply: independent errors with roughly equal variance. And the math here continues to rely on the data being *balanced*, with the same number of replicates getting each combination of treatment levels. Unbalanced data are an exciting adventure for another time.
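Balance is easy to verify with a quick cross-tabulation (again assuming the `ToothGrowth` data):

```r
# Count replicates in each (supp, dose) cell: a balanced design has the
# same count everywhere -- here, 10 guinea pigs per combination.
table(ToothGrowth$supp, ToothGrowth$dose)
```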

**Response moment:** Think of an example – from your own life or a topic that interests you – involving (at least) two factors and a response, where there could be an interaction effect. For example, when I go for a long run, I go faster when I drink water, and I go faster when it’s colder outside – but the water is a lot *more* important to my speed when it’s hot out than when it’s cold out. So the relationship between the “water” factor and the “speed” response depends on the level of the “temperature” factor.