10 Multicategorical Focal Antecedents and Moderators
In this chapter, [Hayes] extend[ed] the principles of moderation analysis described in Chapters 7 and 8 to testing interaction involving a multicategorical focal antecedent variable or moderator. As you will see, the principles discussed in those chapters generalize quite readily, although the model necessarily requires more than one product to capture an interaction between two variables. This makes the formulas a bit more complex, and the visualizing and probing process a bit more involved. But with comfort with the fundamentals described so far, you should not find it difficult to master this extension of multiple regression analysis. (p. 350)
10.1 Moderation of the effect of a multicategorical antecedent variable
Take the case of a continuous or dichotomous moderator \(W\) and a multicategorical \(X\) “with \(g\) groups, include \(g − 1\) variables coding membership in the groups, the moderator variable \(W\), and \(g − 1\) products between the \(g − 1\) group codes and moderator \(W\) in a regression model” (p. 351) following the form
\[ Y = i_Y + \sum_{i = 1}^{g - 1} b_i D_i + b_g W + \sum_{j = g + 1}^{2g - 1} b_j D_{j - g} W + e_Y, \]
where \(D_i\) denotes the \(i\)th dummy variable. Given the case where \(g = 4\), that formula can be re-expressed as
\[\begin{align*} Y & = i_Y + b_1 D_1 + b_2 D_2 + b_3 D_3 + b_4 W + b_5 D_1 W + b_6 D_2 W + b_7 D_3 W + e_Y, \;\;\;\text{or} \\ & = i_Y + (b_1 + b_5 W) D_1 + (b_2 + b_6 W) D_2 + (b_3 + b_7 W) D_3 + b_4 W + e_Y. \end{align*}\]
10.2 An example from the sex discrimination in the workplace study
Here we load a couple of necessary packages, load the data, and take a glimpse().
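If you’re following along at home, a minimal sketch of that step might look like this. The file path is just a placeholder; point read_csv() at wherever you keep Hayes’s protest data.
library(tidyverse)
# hypothetical path; adjust to your local copy of Hayes's protest data
protest <- read_csv("data/protest.csv")
glimpse(protest)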
## Observations: 129
## Variables: 6
## $ subnum <dbl> 209, 44, 124, 232, 30, 140, 27, 64, 67, 182, 85, 109, 122, 69, 45, 28, 170, 66, …
## $ protest <dbl> 2, 0, 2, 2, 2, 1, 2, 0, 0, 0, 2, 2, 0, 1, 1, 0, 1, 2, 2, 1, 2, 1, 1, 2, 2, 0, 1,…
## $ sexism <dbl> 4.87, 4.25, 5.00, 5.50, 5.62, 5.75, 5.12, 6.62, 5.75, 4.62, 4.75, 6.12, 4.87, 5.…
## $ angry <dbl> 2, 1, 3, 1, 1, 1, 2, 1, 6, 1, 2, 5, 2, 1, 1, 1, 2, 1, 3, 4, 1, 1, 1, 5, 1, 5, 1,…
## $ liking <dbl> 4.83, 4.50, 5.50, 5.66, 6.16, 6.00, 4.66, 6.50, 1.00, 6.83, 5.00, 5.66, 5.83, 6.…
## $ respappr <dbl> 4.25, 5.75, 4.75, 7.00, 6.75, 5.50, 5.00, 6.25, 3.00, 5.75, 5.25, 7.00, 4.50, 6.…
With a little if_else(), computing the dummies d1 and d2 is easy enough.
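Here is a minimal sketch of that step, based on the coding scheme used throughout the chapter (d1 marks the individual-protest condition, d2 the collective-protest condition).
protest <-
  protest %>% 
  # protest is coded 0 = no protest, 1 = individual protest, 2 = collective protest
  mutate(d1 = if_else(protest == 1, 1, 0),
         d2 = if_else(protest == 2, 1, 0))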
Load brms.
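That is,
library(brms)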
With model10.1 and model10.2 we fit the multicategorical multivariable model and the multicategorical moderation model, respectively.
model10.1 <-
brm(data = protest,
family = gaussian,
liking ~ 1 + d1 + d2 + sexism,
cores = 4)
model10.2 <-
update(model10.1,
newdata = protest,
liking ~ 1 + d1 + d2 + sexism + d1:sexism + d2:sexism,
cores = 4)
Behold the \(R^2\)s.
r2 <-
bayes_R2(model10.1, summary = F) %>%
data.frame() %>%
bind_cols(
bayes_R2(model10.2, summary = F) %>%
data.frame()
) %>%
set_names(str_c("Model 10.", 1:2)) %>%
mutate(`The R2 difference` = `Model 10.2` - `Model 10.1`)
r2 %>%
pivot_longer(everything()) %>%
# this line isn't necessary, but it sets the order the summaries appear in
mutate(name = factor(name, levels = c("Model 10.1", "Model 10.2", "The R2 difference"))) %>%
group_by(name) %>%
summarize(mean = mean(value),
median = median(value),
ll = quantile(value, probs = .025),
ul = quantile(value, probs = .975)) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 3 x 5
## name mean median ll ul
## <fct> <dbl> <dbl> <dbl> <dbl>
## 1 Model 10.1 0.07 0.065 0.012 0.156
## 2 Model 10.2 0.159 0.157 0.067 0.265
## 3 The R2 difference 0.088 0.09 -0.034 0.21
Interestingly, even though our posterior means and medians for the model-specific \(R^2\) values differed somewhat from the OLS estimates in the text, their difference corresponded quite nicely to the one Hayes reported. Let’s take a look at their distributions.
r2 %>%
pivot_longer(everything()) %>%
ggplot(aes(x = value)) +
geom_density(size = 0, fill = "grey33") +
scale_y_continuous(NULL, breaks = NULL) +
facet_wrap(~name, scales = "free_y") +
theme_minimal()
The model coefficient summaries cohere well with those in Table 10.1.
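The summaries below likely came from print() calls along these lines, with digits = 3 to match the rounding in the output.
print(model10.1, digits = 3)
print(model10.2, digits = 3)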
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: liking ~ 1 + d1 + d2 + sexism
## Data: protest (Number of observations: 129)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 4.768 0.627 3.552 6.004 1.000 5422 2679
## d1 0.498 0.224 0.072 0.942 1.001 4566 3204
## d2 0.447 0.220 0.016 0.868 1.001 3801 2993
## sexism 0.107 0.119 -0.127 0.343 1.000 5456 2883
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.043 0.066 0.925 1.181 1.001 5120 3260
##
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
## is a crude measure of effective sample size, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: liking ~ d1 + d2 + sexism + d1:sexism + d2:sexism
## Data: protest (Number of observations: 129)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 7.748 1.071 5.689 9.875 1.002 1354 2194
## d1 -4.195 1.518 -7.194 -1.275 1.002 1360 1729
## d2 -3.561 1.407 -6.330 -0.892 1.004 1378 1857
## sexism -0.481 0.209 -0.896 -0.080 1.003 1348 2329
## d1:sexism 0.915 0.291 0.350 1.501 1.002 1337 1789
## d2:sexism 0.792 0.277 0.255 1.331 1.004 1276 1874
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.007 0.065 0.888 1.146 1.002 3190 2503
##
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
## is a crude measure of effective sample size, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
10.3 Visualizing the model
To get our version of the values in Table 10.2, we’ll first recreate columns for \(d_1\) through \(W\) (SEXISM) and save them as a tibble, nd.
(
nd <-
tibble(d1 = c(0, 1, 0),
d2 = c(0, 0, 1)) %>%
expand(nesting(d1, d2),
sexism = quantile(protest$sexism, probs = c(.16, .5, .84)))
)
## # A tibble: 9 x 3
## d1 d2 sexism
## <dbl> <dbl> <dbl>
## 1 0 0 4.31
## 2 0 0 5.12
## 3 0 0 5.87
## 4 0 1 4.31
## 5 0 1 5.12
## 6 0 1 5.87
## 7 1 0 4.31
## 8 1 0 5.12
## 9 1 0 5.87
With nd in hand, we’ll feed the predictor values into fitted() for the typical posterior summaries.
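A minimal fitted() call matching the summary below might look like this.
fitted(model10.2, newdata = nd) %>% 
  round(digits = 3)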
## Estimate Est.Error Q2.5 Q97.5
## [1,] 5.676 0.227 5.236 6.119
## [2,] 5.285 0.162 4.974 5.588
## [3,] 4.925 0.232 4.470 5.376
## [4,] 5.527 0.202 5.141 5.927
## [5,] 5.780 0.156 5.470 6.084
## [6,] 6.013 0.220 5.575 6.445
## [7,] 5.421 0.247 4.935 5.900
## [8,] 5.773 0.158 5.463 6.079
## [9,] 6.098 0.201 5.708 6.504
The values in our Estimate column correspond to those in the \(\hat Y\) column in the table. We, of course, add summaries of uncertainty to the point estimates.
If we want to make a decent line plot for our version of Figure 10.3, we’ll need many more values for sexism, which will appear on the x-axis.
nd <-
tibble(d1 = c(0, 1, 0),
d2 = c(0, 0, 1)) %>%
expand(nesting(d1, d2),
sexism = seq(from = 3.5, to = 6.5, length.out = 30))
This time we’ll save the results from fitted() as a tibble and wrangle a bit to get ready for the plot.
f <-
fitted(model10.2,
newdata = nd,
probs = c(.025, .25, .75, .975)) %>%
data.frame() %>%
bind_cols(nd) %>%
mutate(condition = if_else(d1 == 1, "Individual Protest",
if_else(d2 == 1, "Collective Protest", "No Protest"))) %>%
# this line is not necessary, but it will help order the facets of the plot
mutate(condition = factor(condition, levels = c("No Protest", "Individual Protest", "Collective Protest")))
glimpse(f)
## Observations: 90
## Variables: 10
## $ Estimate <dbl> 6.064727, 6.014962, 5.965197, 5.915432, 5.865667, 5.815901, 5.766136, 5.716371,…
## $ Est.Error <dbl> 0.3660584, 0.3468298, 0.3278937, 0.3093040, 0.2911269, 0.2734448, 0.2563601, 0.…
## $ Q2.5 <dbl> 5.364726, 5.345646, 5.331805, 5.316183, 5.300102, 5.282467, 5.266224, 5.251607,…
## $ Q25 <dbl> 5.812528, 5.776542, 5.739331, 5.704200, 5.668177, 5.630328, 5.590915, 5.552826,…
## $ Q75 <dbl> 6.315876, 6.254369, 6.192635, 6.128761, 6.065838, 6.004212, 5.945169, 5.880642,…
## $ Q97.5 <dbl> 6.784732, 6.699703, 6.614906, 6.528555, 6.442336, 6.356916, 6.271567, 6.186775,…
## $ d1 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ d2 <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ sexism <dbl> 3.500000, 3.603448, 3.706897, 3.810345, 3.913793, 4.017241, 4.120690, 4.224138,…
## $ condition <fct> No Protest, No Protest, No Protest, No Protest, No Protest, No Protest, No Prot…
For Figure 10.3 and many to follow for this chapter, we’ll superimpose 50% intervals on top of 95% intervals.
# this will help us add the original data points to the plot
protest <-
protest %>%
mutate(condition = ifelse(protest == 0, "No Protest",
ifelse(protest == 1, "Individual Protest",
"Collective Protest"))) %>%
mutate(condition = factor(condition, levels = c("No Protest", "Individual Protest", "Collective Protest")))
# this will help us with the x-axis
breaks <-
tibble(values = quantile(protest$sexism, probs = c(.16, .5, .84))) %>%
mutate(labels = values %>% round(2) %>% as.character())
# Here we plot
f %>%
ggplot(aes(x = sexism)) +
geom_ribbon(aes(ymin = Q2.5, ymax = Q97.5),
alpha = 1/3) +
geom_ribbon(aes(ymin = Q25, ymax = Q75),
alpha = 1/3) +
geom_line(aes(y = Estimate)) +
geom_point(data = protest,
aes(y = liking),
size = 2/3) +
scale_x_continuous(breaks = breaks$values,
labels = breaks$labels) +
coord_cartesian(xlim = 4:6,
ylim = c(2.5, 7.2)) +
labs(x = expression(paste("Perceived Pervasiveness of Sex Discrimination in Society (", italic(W), ")")),
y = "Evaluation of the Attorney") +
facet_wrap(~condition) +
theme_minimal()
Adding the data to the plots makes them more informative and lets them double as a posterior predictive check.
10.4 Probing the interaction
These will involve both omnibus tests and pairwise comparisons.
10.4.1 The pick-a-point approach.
“The pick-a-point approach requires you to choose values of the moderator \(W\) and then estimate the conditional effect of \(X\) on \(Y\) at those values and conduct an inferential test” [evaluate the posterior distribution] (p. 368).
10.4.1.1 Omnibus inference.
Hayes used the omnibus testing framework to assess how important coefficients \(b_1\) and \(b_2\) were to our interaction model, model10.2. Before fitting the models, he discussed why he preferred to fit them after centering sexism (i.e., \(W\)) at 4.25. Here we’ll call our centered variable sexism_p, where _p stands in for “prime”.
From here on, model10.3 is the moderation model without the lower-order d1 and d2 terms and model10.4 is the full moderation model. But we’re going to fit both of these models three different ways, based on how we center sexism. For this first set, where we center sexism on 4.25, we’ll give the fits the suffix a.
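The centering step itself is a one-liner; a minimal sketch:
protest <-
  protest %>% 
  mutate(sexism_p = sexism - 4.25)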
# the model without d1 + d2
model10.3a <-
update(model10.2,
newdata = protest,
liking ~ 1 + sexism_p + d1:sexism_p + d2:sexism_p,
cores = 4)
# the full model with d1 + d2
model10.4a <-
update(model10.2,
newdata = protest,
liking ~ 1 + d1 + d2 + sexism_p + d1:sexism_p + d2:sexism_p,
cores = 4)
The coefficient summaries for model10.4a correspond to the top section of Table 10.3 (p. 373).
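They likely came from a fixef() call along the following lines; the analogous calls on model10.4b and model10.4c give the later sections of the table.
fixef(model10.4a) %>% 
  round(digits = 3)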
## Estimate Est.Error Q2.5 Q97.5
## Intercept 5.703 0.231 5.259 6.167
## d1 -0.306 0.348 -0.979 0.369
## d2 -0.194 0.307 -0.798 0.380
## sexism_p -0.472 0.206 -0.879 -0.069
## d1:sexism_p 0.903 0.294 0.348 1.479
## d2:sexism_p 0.779 0.275 0.253 1.318
We can compare their Bayesian \(R^2\) distributions like we usually do.
library(tidybayes)
r2 <-
bayes_R2(model10.3a, summary = F) %>%
data.frame() %>%
bind_cols(
bayes_R2(model10.4a, summary = F) %>%
data.frame()
) %>%
set_names("Model without d1 + d2", "Model with d1 + d2") %>%
mutate(`The R2 difference` = `Model with d1 + d2` - `Model without d1 + d2`)
r2 %>%
gather() %>%
mutate(key = factor(key, levels = c("Model without d1 + d2", "Model with d1 + d2", "The R2 difference"))) %>%
group_by(key) %>%
median_qi(value) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 3 x 7
## key value .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Model without d1 + d2 0.139 0.053 0.241 0.95 median qi
## 2 Model with d1 + d2 0.155 0.065 0.251 0.95 median qi
## 3 The R2 difference 0.015 -0.122 0.145 0.95 median qi
Our results differ a bit from those in the text (p. 370), but the substantive interpretation is the same. The d1 and d2 parameters added little predictive power to the model in terms of \(R^2\). We can also use information criteria to compare the models. Here are the results from using the LOO-CV.
model10.3a <- add_criterion(model10.3a, "loo")
model10.4a <- add_criterion(model10.4a, "loo")
loo_compare(model10.3a, model10.4a) %>%
print(simplify = F)
## elpd_diff se_diff elpd_loo se_elpd_loo p_loo se_p_loo looic se_looic
## model10.3a 0.0 0.0 -186.0 11.0 6.5 2.0 372.0 22.0
## model10.4a -1.4 0.8 -187.4 10.9 7.9 2.0 374.8 21.8
The LOO-CV difference between the two models was pretty small and its standard error was about the same magnitude as the difference itself. Thus, the LOO-CV gives the same general message as the \(R^2\): the d1 and d2 parameters were small and uncertain enough that constraining them to zero did little to reduce the explanatory power of the statistical model.
Here’s the same thing all over again, but this time after centering sexism on 5.120.
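Again, the centering is a one-liner.
protest <-
  protest %>% 
  mutate(sexism_p = sexism - 5.120)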
Now fit the models.
# the model without d1 + d2
model10.3b <-
update(model10.2,
newdata = protest,
liking ~ 1 + sexism_p + d1:sexism_p + d2:sexism_p,
cores = 4)
# the full model with d1 + d2
model10.4b <-
update(model10.2,
newdata = protest,
liking ~ 1 + d1 + d2 + sexism_p + d1:sexism_p + d2:sexism_p,
cores = 4)
These coefficient summaries correspond to the middle section of Table 10.3 (p. 373).
## Estimate Est.Error Q2.5 Q97.5
## Intercept 5.288 0.159 4.982 5.603
## d1 0.485 0.223 0.053 0.935
## d2 0.491 0.220 0.054 0.911
## sexism_p -0.481 0.208 -0.890 -0.070
## d1:sexism_p 0.907 0.290 0.329 1.478
## d2:sexism_p 0.786 0.283 0.221 1.323
Here are the Bayesian \(R^2\) summaries and the summary for their difference.
r2 <-
bayes_R2(model10.3b, summary = F) %>%
data.frame() %>%
bind_cols(
bayes_R2(model10.4b, summary = F) %>%
data.frame()
) %>%
set_names("Model without d1 + d2", "Model with d1 + d2") %>%
mutate(`The R2 difference` = `Model with d1 + d2` - `Model without d1 + d2`)
r2 %>%
pivot_longer(everything()) %>%
mutate(name = factor(name, levels = c("Model without d1 + d2", "Model with d1 + d2", "The R2 difference"))) %>%
group_by(name) %>%
median_qi(value) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 3 x 7
## name value .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Model without d1 + d2 0.099 0.028 0.196 0.95 median qi
## 2 Model with d1 + d2 0.155 0.066 0.256 0.95 median qi
## 3 The R2 difference 0.057 -0.078 0.181 0.95 median qi
This time, our \(\Delta R^2\) distribution was more similar to the results Hayes reported in the text (p. 370, toward the bottom).
Here’s the updated LOO-CV.
model10.3b <- add_criterion(model10.3b, "loo")
model10.4b <- add_criterion(model10.4b, "loo")
loo_compare(model10.3b, model10.4b) %>%
print(simplify = F)
## elpd_diff se_diff elpd_loo se_elpd_loo p_loo se_p_loo looic se_looic
## model10.4b 0.0 0.0 -187.4 10.9 7.9 1.9 374.8 21.7
## model10.3b -1.2 3.0 -188.6 11.8 6.0 1.9 377.3 23.7
Here again our Bayesian \(R^2\) and loo() results cohere, both suggesting the d1 and d2 parameters were of little predictive utility. Note how this differs a little from the second \(F\)-test on page 370.
Here’s what happens when we center sexism on 5.896. First, center.
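As before, a minimal sketch of the centering step:
protest <-
  protest %>% 
  mutate(sexism_p = sexism - 5.896)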
Fit the models.
# the model without d1 + d2
model10.3c <-
update(model10.2,
newdata = protest,
liking ~ 1 + sexism_p + d1:sexism_p + d2:sexism_p,
cores = 4)
# the full model with d1 + d2
model10.4c <-
update(model10.2,
newdata = protest,
liking ~ 1 + d1 + d2 + sexism_p + d1:sexism_p + d2:sexism_p,
cores = 4)
These coefficient summaries correspond to the lower section of Table 10.3 (p. 373).
## Estimate Est.Error Q2.5 Q97.5
## Intercept 5.290 0.159 4.977 5.602
## d1 0.483 0.219 0.068 0.915
## d2 0.490 0.217 0.073 0.917
## sexism_p -0.471 0.214 -0.893 -0.057
## d1:sexism_p 0.898 0.295 0.323 1.473
## d2:sexism_p 0.775 0.278 0.228 1.309
Again, compute the \(R^2\) distributions and their difference-score distribution.
r2 <-
bayes_R2(model10.3c, summary = F) %>%
data.frame() %>%
bind_cols(
bayes_R2(model10.4c, summary = F) %>%
data.frame()
) %>%
set_names("Model without d1 + d2", "Model with d1 + d2") %>%
mutate(`The R2 difference` = `Model with d1 + d2` - `Model without d1 + d2`)
r2 %>%
pivot_longer(everything()) %>%
mutate(name = factor(name, levels = c("Model without d1 + d2", "Model with d1 + d2", "The R2 difference"))) %>%
group_by(name) %>%
median_qi(value) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 3 x 7
## name value .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Model without d1 + d2 0.101 0.027 0.196 0.95 median qi
## 2 Model with d1 + d2 0.154 0.062 0.256 0.95 median qi
## 3 The R2 difference 0.054 -0.079 0.175 0.95 median qi
That \(\Delta R^2\) distribution matches up nicely with the one Hayes reported at the bottom of page 370. Now compare the models with the LOO.
model10.3c <- add_criterion(model10.3c, "loo")
model10.4c <- add_criterion(model10.4c, "loo")
loo_compare(model10.3c, model10.4c) %>%
print(simplify = F)
## elpd_diff se_diff elpd_loo se_elpd_loo p_loo se_p_loo looic se_looic
## model10.4c 0.0 0.0 -187.6 11.0 8.2 2.1 375.1 22.0
## model10.3c -1.1 2.9 -188.7 11.8 6.0 1.9 377.3 23.7
Although our Bayesian \(R^2\) difference is now predominantly positive, the LOO-CV difference for the two models remains uncertain. Here’s a look at the two parameters in question using a handmade coefficient plot.
posterior_samples(model10.4c) %>%
pivot_longer(b_d1:b_d2) %>%
mutate(name = str_remove(name, "b_")) %>%
ggplot(aes(name, y = value)) +
stat_summary(fun.y = median,
fun.ymin = function(i) {quantile(i, probs = .025)},
fun.ymax = function(i) {quantile(i, probs = .975)},
color = "grey33") +
stat_summary(geom = "linerange",
fun.ymin = function(i) {quantile(i, probs = .25)},
fun.ymax = function(i) {quantile(i, probs = .75)},
color = "grey33",
size = 1.25) +
xlab(NULL) +
coord_flip(ylim = 0:2) +
theme_minimal()
For Figure 10.4, we’ll drop our faceting approach and just make one big plot. Heads up: I’m going to drop the 50% intervals from this plot. They’d just make it too busy.
f %>%
ggplot(aes(x = sexism, alpha = condition)) +
geom_ribbon(aes(ymin = Q2.5, ymax = Q97.5),
size = 0) +
geom_line(aes(y = Estimate)) +
scale_alpha_manual(values = c(.2, .5, .8)) +
scale_x_continuous(breaks = breaks$values,
labels = breaks$labels) +
coord_cartesian(xlim = 4:6,
ylim = c(4.5, 6.7)) +
labs(x = expression(paste("Perceived Pervasiveness of Sex Discrimination in Society (", italic(W), ")")),
y = "Evaluation of the Attorney") +
theme_minimal() +
theme(legend.title = element_blank(),
legend.position = "top",
legend.direction = "vertical")
10.4.1.2 Pairwise inference.
Hayes continues to reference Table 10.3. In the last subsection, we reproduced those results one model at a time. Why not practice doing it all at once? There are a lot of ways you could do this. A good first try is to extend the fixef() approach from before with a little help from bind_rows().
# start with `model10.4a`
fixef(model10.4a) %>%
data.frame() %>%
rownames_to_column("parameter") %>%
bind_rows(
# add `model10.4b`
fixef(model10.4b) %>%
data.frame() %>%
rownames_to_column("parameter"),
# add `model10.4c`
fixef(model10.4c) %>%
data.frame() %>%
rownames_to_column("parameter")
) %>%
# wrangle a bit
mutate(`w'` = str_c("w - ", c(4.25, 5.12, 5.896)) %>% rep(., each = 6)) %>%
select(`w'`, everything()) %>%
mutate_if(is.double, round, digits = 3)
## w' parameter Estimate Est.Error Q2.5 Q97.5
## 1 w - 4.25 Intercept 5.703 0.231 5.259 6.167
## 2 w - 4.25 d1 -0.306 0.348 -0.979 0.369
## 3 w - 4.25 d2 -0.194 0.307 -0.798 0.380
## 4 w - 4.25 sexism_p -0.472 0.206 -0.879 -0.069
## 5 w - 4.25 d1:sexism_p 0.903 0.294 0.348 1.479
## 6 w - 4.25 d2:sexism_p 0.779 0.275 0.253 1.318
## 7 w - 5.12 Intercept 5.288 0.159 4.982 5.603
## 8 w - 5.12 d1 0.485 0.223 0.053 0.935
## 9 w - 5.12 d2 0.491 0.220 0.054 0.911
## 10 w - 5.12 sexism_p -0.481 0.208 -0.890 -0.070
## 11 w - 5.12 d1:sexism_p 0.907 0.290 0.329 1.478
## 12 w - 5.12 d2:sexism_p 0.786 0.283 0.221 1.323
## 13 w - 5.896 Intercept 5.290 0.159 4.977 5.602
## 14 w - 5.896 d1 0.483 0.219 0.068 0.915
## 15 w - 5.896 d2 0.490 0.217 0.073 0.917
## 16 w - 5.896 sexism_p -0.471 0.214 -0.893 -0.057
## 17 w - 5.896 d1:sexism_p 0.898 0.295 0.323 1.473
## 18 w - 5.896 d2:sexism_p 0.775 0.278 0.228 1.309
This code works okay, but it’s redundant. Here’s a streamlined approach where we use a combination of nested tibbles and the purrr::map() function to work with our three model fits (model10.4a, model10.4b, and model10.4c) in bulk.
t <-
tibble(`w'` = str_c("w - ", c(4.25, 5.12, 5.896)),
name = str_c("model10.4", letters[1:3])) %>%
mutate(fit = map(name, get)) %>%
mutate(s = map(fit, ~fixef(.) %>%
data.frame() %>%
rownames_to_column("parameter"))) %>%
unnest(s) %>%
select(`w'`, parameter:Q97.5)
t %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 18 x 6
## `w'` parameter Estimate Est.Error Q2.5 Q97.5
## <chr> <chr> <dbl> <dbl> <dbl> <dbl>
## 1 w - 4.25 Intercept 5.70 0.231 5.26 6.17
## 2 w - 4.25 d1 -0.306 0.348 -0.979 0.369
## 3 w - 4.25 d2 -0.194 0.307 -0.798 0.38
## 4 w - 4.25 sexism_p -0.472 0.206 -0.879 -0.069
## 5 w - 4.25 d1:sexism_p 0.903 0.294 0.348 1.48
## 6 w - 4.25 d2:sexism_p 0.779 0.275 0.253 1.32
## 7 w - 5.12 Intercept 5.29 0.159 4.98 5.60
## 8 w - 5.12 d1 0.485 0.223 0.053 0.935
## 9 w - 5.12 d2 0.491 0.22 0.054 0.911
## 10 w - 5.12 sexism_p -0.481 0.208 -0.89 -0.07
## 11 w - 5.12 d1:sexism_p 0.907 0.290 0.329 1.48
## 12 w - 5.12 d2:sexism_p 0.786 0.283 0.221 1.32
## 13 w - 5.896 Intercept 5.29 0.159 4.98 5.60
## 14 w - 5.896 d1 0.483 0.219 0.068 0.915
## 15 w - 5.896 d2 0.49 0.217 0.073 0.917
## 16 w - 5.896 sexism_p -0.471 0.214 -0.893 -0.057
## 17 w - 5.896 d1:sexism_p 0.898 0.295 0.323 1.47
## 18 w - 5.896 d2:sexism_p 0.775 0.278 0.228 1.31
Summary tables like this are precise and very common in the literature. But you can get lost in all those numbers. A coefficient plot can be better. This first version is pretty close to the Table 10.3 format.
t %>%
# this will help us order our y-axis
mutate(parameter = factor(parameter,
levels = c("d2:sexism_p", "d1:sexism_p", "sexism_p", "d2", "d1", "Intercept")),
# this is just for aesthetics
`w'` = str_c("w' = ", `w'`)) %>%
# plot!
ggplot(aes(x = parameter, y = Estimate, ymin = Q2.5, ymax = Q97.5)) +
geom_pointrange() +
labs(x = NULL,
y = NULL) +
coord_flip() +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 0)) +
facet_wrap(~`w'`, nrow = 1)
Notice how this arrangement makes it easiest to compare coefficients within models. If we wanted to make it easier to compare coefficients across models, we might arrange the plot like so.
t %>%
# this will help us order our y-axis
mutate(parameter = factor(parameter,
levels = c("Intercept", "d1", "d2", "sexism_p", "d1:sexism_p", "d2:sexism_p"))) %>%
# plot!
ggplot(aes(x = `w'`, y = Estimate, ymin = Q2.5, ymax = Q97.5)) +
geom_pointrange() +
labs(x = NULL,
y = NULL) +
coord_flip() +
theme_minimal() +
theme(axis.text.y = element_text(hjust = 0)) +
facet_wrap(~parameter, ncol = 1)
Oh man, with sweet plots like these, who needs tables! This makes it much easier to see what happened as we changed the values we centered sexism on. In the middle paragraph on page 374, Hayes pointed out “that \(b_1\) and \(b_2\) differ in these analyses, but \(b_3\), \(b_4\), and \(b_5\) are unaffected by the centering”. Our coefficient plot clarified that in a way I don’t think a table ever could. But before we move on, let’s back up a little in the text.
“To make this more concrete, consider the effect of Catherine’s behavior on how she is perceived among people who are relatively high in their perceptions of the pervasiveness of sex discrimination in society” (p. 372). For this, Hayes defined “relatively high” as \(W = 5.896\). To get those estimates for each condition, we’ll use fitted(). Since the number of unique predictor values is small for this example, we’ll just plug them directly into the newdata argument rather than first saving them as an nd object.
fitted(model10.2,
newdata = tibble(d1 = c(0, 1, 0),
d2 = c(0, 0, 1),
sexism = 5.896)) %>%
round(digits = 3)
## Estimate Est.Error Q2.5 Q97.5
## [1,] 4.912 0.236 4.450 5.373
## [2,] 6.110 0.205 5.712 6.523
## [3,] 6.021 0.224 5.574 6.457
Those posterior summaries match up nicely with the point estimates Hayes presented at the bottom of page 372. Hayes further expounded:
So by using the regression centering strategy described earlier in the context of an omnibus test of equality of values of \(\hat Y\), the regression coefficients \(b_1\) and \(b_2\) provide pairwise inferences consistent with the coding system used to represent the three groups, conditioned on the value that \(W\) is centered around.
In the next few sentences, he focused on what happened when \(W = 4.250\) (i.e., in model10.4a). Recall that the two coefficients in question, \(b_1\) and \(b_2\), are named d1 and d2 when we pull their summaries with fixef().
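One way to pull just those two rows is to index the fixef() output directly, something like this.
fixef(model10.4a)[c("d1", "d2"), ] %>% 
  round(digits = 3)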
## Estimate Est.Error Q2.5 Q97.5
## d1 -0.306 0.348 -0.979 0.369
## d2 -0.194 0.307 -0.798 0.380
Hayes then clarified that in this model
\[\begin{align*} b_1 & = \theta_{D_1 \rightarrow Y} | (W = 4.250) = 5.400 - 5.698 = -0.299 \;\;\; \text{ and} \\ b_2 & = \theta_{D_2 \rightarrow Y} | (W = 4.250) = 5.513 - 5.698 = -0.185. \end{align*}\]
That is, each coefficient is the difference score for one of the experimental conditions minus the “No protest” condition. To further show the difference-score quality of these coefficients, we can continue using fitted() in conjunction with the original model10.2 to get the group comparisons for when \(W = 4.250\). Since these involve computing difference scores, we’ll have to use summary = F and do some wrangling.
fitted(model10.2,
newdata = tibble(d1 = c(0, 1, 0),
d2 = c(0, 0, 1),
sexism = 4.25),
summary = F) %>%
data.frame() %>%
set_names("No Protest", "Individual Protest", "Collective Protest") %>%
mutate(difference_a = `Individual Protest` - `No Protest`,
difference_b = `Collective Protest` - `No Protest`) %>%
pivot_longer(everything()) %>%
mutate(name = factor(name, levels = c("No Protest", "Individual Protest", "Collective Protest",
"difference_a", "difference_b"))) %>%
group_by(name) %>%
mean_qi(value) %>%
select(name:.upper) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 5 x 4
## name value .lower .upper
## <fct> <dbl> <dbl> <dbl>
## 1 No Protest 5.70 5.25 6.16
## 2 Individual Protest 5.40 4.89 5.90
## 3 Collective Protest 5.51 5.11 5.92
## 4 difference_a -0.308 -0.988 0.361
## 5 difference_b -0.195 -0.806 0.419
Within simulation variance, difference_a is the same as \(b_{1 | \text{model10.4a}}\) and difference_b is the same as \(b_{2 | \text{model10.4a}}\). Here’s the same thing for when \(W = 5.120\).
fitted(model10.2,
newdata = tibble(d1 = c(0, 1, 0),
d2 = c(0, 0, 1),
sexism = 5.120),
summary = F) %>%
data.frame() %>%
set_names("No Protest", "Individual Protest", "Collective Protest") %>%
mutate(difference_a = `Individual Protest` - `No Protest`,
difference_b = `Collective Protest` - `No Protest`) %>%
pivot_longer(everything()) %>%
mutate(name = factor(name, levels = c("No Protest", "Individual Protest", "Collective Protest",
"difference_a", "difference_b"))) %>%
group_by(name) %>%
mean_qi(value) %>%
select(name:.upper) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 5 x 4
## name value .lower .upper
## <fct> <dbl> <dbl> <dbl>
## 1 No Protest 5.28 4.97 5.59
## 2 Individual Protest 5.77 5.46 6.08
## 3 Collective Protest 5.78 5.47 6.08
## 4 difference_a 0.488 0.052 0.947
## 5 difference_b 0.494 0.056 0.94
Finally, here it is for when \(W = 5.986\).
fitted(model10.2,
newdata = tibble(d1 = c(0, 1, 0),
d2 = c(0, 0, 1),
sexism = 5.986),
summary = F) %>%
data.frame() %>%
set_names("No Protest", "Individual Protest", "Collective Protest") %>%
mutate(difference_a = `Individual Protest` - `No Protest`,
difference_b = `Collective Protest` - `No Protest`) %>%
pivot_longer(everything()) %>%
mutate(name = factor(name, levels = c("No Protest", "Individual Protest", "Collective Protest",
"difference_a", "difference_b"))) %>%
group_by(name) %>%
mean_qi(value) %>%
select(name:.upper) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 5 x 4
## name value .lower .upper
## <fct> <dbl> <dbl> <dbl>
## 1 No Protest 4.87 4.38 5.35
## 2 Individual Protest 6.15 5.73 6.59
## 3 Collective Protest 6.05 5.58 6.51
## 4 difference_a 1.28 0.654 1.96
## 5 difference_b 1.18 0.478 1.86
10.4.2 The Johnson-Neyman technique.
As discussed in section 7.4, a problem with the pick-a-point approach to probing an interaction is having to choose values of the moderator. When the moderator is a continuum, you may not have any basis for choosing some values rather than others, and the choice you make will certainly influence the results of the probing exercise to some extent… Actively choosing a different system or convention, such as using the sample mean of \(W\), a standard deviation below the mean, and a standard deviation above the mean also does not eliminate the problem. But the Johnson–Neyman (JN) technique avoids this problem entirely. (p. 376)
10.4.2.1 Omnibus inference.
Consider the first sentence of the section:
Applied to probing an interaction between a multicategorical \(X\) and a continuous \(W\), an omnibus version of the JN technique involves finding the value or values of \(W\) where the \(F\)-ratio comparing the \(g\) estimated values of \(Y\) is just statistically significant. (p. 376)
Since we’re not using \(F\)-tests with our approach to Bayesian modeling, the closest we might have is a series of \(R^2\) difference tests, which would require refitting the model multiple times over many ways of centering the \(W\) variable, sexism. I suppose you could do this if you wanted, but it just seems silly to me. I’ll leave this one up to the interested reader.
10.4.2.2 Pairwise inference.
Hayes didn’t make plots for this section, but if you’re careful constructing your nd and with the subsequent wrangling, you can make the usual plots. Since we have two conditions we’d like to compare with No Protest, we’ll make two plots. Here’s the comparison using Individual Protest, first.
# the transition value Hayes identified in the text
Hayes_value <- 5.065
# we need some new data
nd <-
tibble(d1 = 0:1,
d2 = 0) %>%
expand(nesting(d1, d2),
sexism = seq(from = 3.5, to = 6.5, length.out = 30))
# plug those data into `fitted()`
fitted(model10.2,
newdata = nd,
summary = F) %>%
# wrangle
data.frame() %>%
gather(key, estimate) %>%
bind_cols(
nd %>%
expand(nesting(d1, d2, sexism),
iter = 1:4000)
) %>%
mutate(condition = if_else(d1 == 0, "No Protest", "Individual Protest")) %>%
select(-c(key, d1:d2)) %>%
spread(key = condition, value = estimate) %>%
mutate(difference = `Individual Protest` - `No Protest`) %>%
# plot!
ggplot(aes(x = sexism, y = difference)) +
stat_summary(geom = "ribbon",
fun.ymin = function(i) {quantile(i, probs = .025)},
fun.ymax = function(i) {quantile(i, probs = .975)},
alpha = 1/3) +
stat_summary(geom = "ribbon",
fun.ymin = function(i) {quantile(i, probs = .25)},
fun.ymax = function(i) {quantile(i, probs = .75)},
alpha = 1/3) +
stat_summary(geom = "line",
fun.y = median) +
scale_x_continuous(breaks = c(4, Hayes_value, 6),
labels = c("4", Hayes_value, "6")) +
coord_cartesian(xlim = 4:6) +
labs(subtitle = expression(paste("Our JN-technique plot for ", italic("Individual Protest"), " compared with ", italic("No Protest")))) +
theme_minimal()
Now we’re ready to compare No Protest to Collective Protest. The main data difference is in which values we assigned to the d1 and d2 columns in nd. For kicks, we should practice another way to get the median line and interval ribbons. The stat_summary() approach from above works great, but it’s verbose. The tidybayes::stat_lineribbon() function will give us the same results with fewer lines of code.
# the transition value Hayes identified in the text
Hayes_value <- 5.036
# new data
nd <-
tibble(d1 = 0,
d2 = 0:1) %>%
expand(nesting(d1, d2),
sexism = seq(from = 3.5, to = 6.5, length.out = 30))
# this part is the same as before
fitted(model10.2,
newdata = nd,
summary = F) %>%
data.frame() %>%
gather(key, estimate) %>%
bind_cols(
nd %>%
expand(nesting(d1, d2, sexism),
iter = 1:4000)
) %>%
# there are some mild differences, here
mutate(condition = if_else(d2 == 0, "No Protest", "Collective Protest")) %>%
select(-c(key, d1:d2)) %>%
pivot_wider(names_from = condition, values_from = estimate) %>%
mutate(difference = `Collective Protest` - `No Protest`) %>%
# plot!
ggplot(aes(x = sexism, y = difference)) +
# look how compact this is!
stat_lineribbon(.width = c(0.5, 0.95),
alpha = 1/3, fill = "black") +
scale_x_continuous(breaks = c(4, Hayes_value, 6),
labels = c("4", Hayes_value, "6")) +
coord_cartesian(xlim = 4:6) +
labs(subtitle = expression(paste("Our JN-technique plot for ", italic("Collective Protest"), " compared with ", italic("No Protest")))) +
theme_minimal()
And here we do it one last time between the two active protest conditions. For good measure, we will continue experimenting with different ways of plotting the results. This time we’ll first summarize the posterior median and intervals with tidybayes::median_qi() before plotting. We’ll then feed those results into our plot with the aid of tidybayes::geom_lineribbon() and a follow-up scale_fill_manual() line.
nd <-
tibble(d1 = 1:0,
d2 = 0:1) %>%
expand(nesting(d1, d2),
sexism = seq(from = 3.5, to = 6.5, length.out = 30))
fitted(model10.2,
newdata = nd,
summary = F) %>%
data.frame() %>%
gather() %>%
bind_cols(
nd %>%
expand(nesting(d1, d2, sexism),
iter = 1:4000)
) %>%
mutate(condition = if_else(d1 == 0, "Individual Protest", "Collective Protest")) %>%
select(-c(key, d1:d2)) %>%
pivot_wider(names_from = condition, values_from = value) %>%
mutate(difference = `Collective Protest` - `Individual Protest`) %>%
# group and summarise, here
group_by(sexism) %>%
median_qi(difference, .width = c(.5, .95)) %>%
# plot!
ggplot(aes(x = sexism, y = difference)) +
# look how simple these two lines are
geom_lineribbon(show.legend = F) +
scale_fill_manual(values = c("grey75", "grey50")) +
coord_cartesian(xlim = 4:6) +
labs(subtitle = expression(paste("Our JN-technique plot for ", italic("Collective Protest"), " compared with ", italic("Individual Protest")))) +
theme_minimal()
Little difference between those conditions.
10.5 When the moderator is multicategorical
From a substantive standpoint, the combination of
- a multicategorical variable \(X\) and a dichotomous or continuous moderator \(W\) versus
- a dichotomous or continuous variable \(X\) and a multicategorical moderator \(W\)
might seem different. From a modeling perspective, the difference is trivial. As Hayes pointed out, “when we claim from a statistical test of moderation that \(X\)’s effect is moderated by \(W\), then it is also true that \(W\)’s effect is moderated by \(X\). This is the symmetry property of interactions” (p. 381). This symmetry holds when we’re not using the hypothesis-testing framework, too.
10.5.1 An example.
Just as a refresher, here’s the print() output for model10.2.
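That is, presumably
print(model10.2, digits = 3)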
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: liking ~ d1 + d2 + sexism + d1:sexism + d2:sexism
## Data: protest (Number of observations: 129)
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
## total post-warmup samples = 4000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 7.748 1.071 5.689 9.875 1.002 1354 2194
## d1 -4.195 1.518 -7.194 -1.275 1.002 1360 1729
## d2 -3.561 1.407 -6.330 -0.892 1.004 1378 1857
## sexism -0.481 0.209 -0.896 -0.080 1.003 1348 2329
## d1:sexism 0.915 0.291 0.350 1.501 1.002 1337 1789
## d2:sexism 0.792 0.277 0.255 1.331 1.004 1276 1874
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.007 0.065 0.888 1.146 1.002 3190 2503
##
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample
## is a crude measure of effective sample size, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
The Bayesian \(R^2\):
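Presumably via something like
bayes_R2(model10.2) %>% 
  round(digits = 3)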
## Estimate Est.Error Q2.5 Q97.5
## R2 0.159 0.051 0.067 0.265
And the \(R^2\) difference between this and the model excluding the interaction terms:
bayes_R2(model10.1, summary = F) %>%
data.frame() %>%
bind_cols(
bayes_R2(model10.2, summary = F) %>%
data.frame()
) %>%
set_names(str_c("Model 10.", 1:2)) %>%
transmute(difference = `Model 10.2` - `Model 10.1`) %>%
mean_qi(difference) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 1 x 6
## difference .lower .upper .width .point .interval
## <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 0.088 -0.034 0.21 0.95 mean qi
Much like in the text, our Figure 10.7 is just a little different from what we did with Figure 10.3.
# this will help us with the `geom_text()` annotation
slopes <-
tibble(slope = c(fixef(model10.2)["sexism", "Estimate"] + fixef(model10.2)["d1:sexism", "Estimate"],
fixef(model10.2)["sexism", "Estimate"] + fixef(model10.2)["d2:sexism", "Estimate"],
fixef(model10.2)["sexism", "Estimate"]),
x = c(4.8, 4.6, 5),
y = c(6.37, 6.25, 4.5),
condition = c("Individual Protest", "Collective Protest", "No Protest")) %>%
mutate(label = str_c("This slope is about ", slope %>% round(digits = 3)),
condition = factor(condition, levels = c("No Protest", "Individual Protest", "Collective Protest")))
# Here we plot
f %>%
ggplot(aes(x = sexism)) +
geom_ribbon(aes(ymin = Q2.5, ymax = Q97.5),
alpha = 1/3) +
geom_ribbon(aes(ymin = Q25, ymax = Q75),
alpha = 1/3) +
geom_line(aes(y = Estimate)) +
geom_text(data = slopes,
aes(x = x, y = y, label = label)) +
coord_cartesian(xlim = 4:6) +
labs(x = expression(paste("Perceived Pervasiveness of Sex Discrimination in Society (", italic(X), ")")),
y = "Evaluation of the Attorney") +
facet_wrap(~condition) +
theme_minimal()
10.5.2 Probing the interaction and interpreting the regression coefficients.
We computed the posterior means for the slopes when prepping for the figure, above. Here’s how we might get more complete posterior summaries.
post <-
posterior_samples(model10.2) %>%
transmute(`No Protest` = b_sexism + `b_d1:sexism` * 0 + `b_d2:sexism` * 0,
`Individual Protest` = b_sexism + `b_d1:sexism` * 1 + `b_d2:sexism` * 0,
`Collective Protest` = b_sexism + `b_d1:sexism` * 0 + `b_d2:sexism` * 1)
post %>%
pivot_longer(everything()) %>%
mutate(name = factor(name, levels = c("No Protest", "Individual Protest", "Collective Protest"))) %>%
group_by(name) %>%
mean_qi(value) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 3 x 7
## name value .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 No Protest -0.481 -0.896 -0.08 0.95 mean qi
## 2 Individual Protest 0.434 0.041 0.841 0.95 mean qi
## 3 Collective Protest 0.311 -0.048 0.672 0.95 mean qi
Here are the differences among the three protest groups.
post %>%
transmute(`Individual Protest - No Protest` = `Individual Protest` - `No Protest`,
`Collective Protest - No Protest` = `Collective Protest` - `No Protest`,
`Individual Protest - Collective Protest` = `Individual Protest` - `Collective Protest`) %>%
pivot_longer(everything()) %>%
# again, not necessary, but useful for reordering the summaries
mutate(name = factor(name, levels = c("Individual Protest - No Protest", "Collective Protest - No Protest", "Individual Protest - Collective Protest"))) %>%
group_by(name) %>%
mean_qi(value) %>%
mutate_if(is.double, round, digits = 3)
## # A tibble: 3 x 7
## name value .lower .upper .width .point .interval
## <fct> <dbl> <dbl> <dbl> <dbl> <chr> <chr>
## 1 Individual Protest - No Protest 0.915 0.35 1.50 0.95 mean qi
## 2 Collective Protest - No Protest 0.792 0.255 1.33 0.95 mean qi
## 3 Individual Protest - Collective Protest 0.123 -0.423 0.679 0.95 mean qi
Session info
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS High Sierra 10.13.6
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] tidybayes_1.1.0 brms_2.10.3 Rcpp_1.0.2 forcats_0.4.0 stringr_1.4.0 dplyr_0.8.3
## [7] purrr_0.3.3 readr_1.3.1 tidyr_1.0.0 tibble_2.1.3 ggplot2_3.2.1 tidyverse_1.2.1
##
## loaded via a namespace (and not attached):
## [1] colorspace_1.4-1 ellipsis_0.3.0 ggridges_0.5.1
## [4] rsconnect_0.8.15 ggstance_0.3.2 markdown_1.1
## [7] base64enc_0.1-3 rstudioapi_0.10 rstan_2.19.2
## [10] svUnit_0.7-12 DT_0.9 fansi_0.4.0
## [13] lubridate_1.7.4 xml2_1.2.0 bridgesampling_0.7-2
## [16] knitr_1.23 shinythemes_1.1.2 zeallot_0.1.0
## [19] bayesplot_1.7.0 jsonlite_1.6 broom_0.5.2
## [22] shiny_1.3.2 compiler_3.6.0 httr_1.4.0
## [25] backports_1.1.5 assertthat_0.2.1 Matrix_1.2-17
## [28] lazyeval_0.2.2 cli_1.1.0 later_1.0.0
## [31] htmltools_0.4.0 prettyunits_1.0.2 tools_3.6.0
## [34] igraph_1.2.4.1 coda_0.19-3 gtable_0.3.0
## [37] glue_1.3.1.9000 reshape2_1.4.3 cellranger_1.1.0
## [40] vctrs_0.2.0 nlme_3.1-139 crosstalk_1.0.0
## [43] xfun_0.10 ps_1.3.0 rvest_0.3.4
## [46] mime_0.7 miniUI_0.1.1.1 lifecycle_0.1.0
## [49] gtools_3.8.1 zoo_1.8-6 scales_1.0.0
## [52] colourpicker_1.0 hms_0.4.2 promises_1.1.0
## [55] Brobdingnag_1.2-6 parallel_3.6.0 inline_0.3.15
## [58] shinystan_2.5.0 gridExtra_2.3 loo_2.1.0
## [61] StanHeaders_2.19.0 stringi_1.4.3 dygraphs_1.1.1.6
## [64] pkgbuild_1.0.5 rlang_0.4.1 pkgconfig_2.0.3
## [67] matrixStats_0.55.0 evaluate_0.14 lattice_0.20-38
## [70] rstantools_2.0.0 htmlwidgets_1.5 labeling_0.3
## [73] tidyselect_0.2.5 processx_3.4.1 plyr_1.8.4
## [76] magrittr_1.5 R6_2.4.0 generics_0.0.2
## [79] pillar_1.4.2 haven_2.1.0 withr_2.1.2
## [82] xts_0.11-2 abind_1.4-5 modelr_0.1.4
## [85] crayon_1.3.4 arrayhelpers_1.0-20160527 utf8_1.1.4
## [88] rmarkdown_1.13 grid_3.6.0 readxl_1.3.1
## [91] callr_3.3.2 threejs_0.3.1 digest_0.6.21
## [94] xtable_1.8-4 httpuv_1.5.2 stats4_3.6.0
## [97] munsell_0.5.0 viridisLite_0.3.0 shinyjs_1.0