6.3 Using a different group coding system
Here we’ll make our alternative dummies, what we’ll call `D_1` and `D_2`, with orthogonal contrast coding.
protest <-
  protest %>% 
  mutate(D_1 = ifelse(protest == 0, -2/3, 1/3),
         D_2 = ifelse(protest == 0, 0,
                      ifelse(protest == 1, -1/2, 1/2)))
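If you’d like to double check the coding, you can pull out the three unique rows with `distinct()`. The little sum at the end is our own sanity check, not something from the text: two contrast vectors are orthogonal when their dot product is zero.

protest %>% 
  distinct(protest, D_1, D_2) %>% 
  arrange(protest)

# our check: orthogonal contrasts have a zero dot product
sum(c(-2/3, 1/3, 1/3) * c(0, -1/2, 1/2))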
Here are the sub-models.
m_model <- bf(respappr ~ 1 + D_1 + D_2)
y_model <- bf(liking ~ 1 + D_1 + D_2 + respappr)
Now we fit.
model4 <-
  brm(data = protest, family = gaussian,
      m_model + y_model + set_rescor(FALSE),
      chains = 4, cores = 4)
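Before interpreting anything, it couldn’t hurt to glance at the chains. This quick look isn’t part of Hayes’s workflow; it’s just good practice with a freshly-fit brms model.

plot(model4)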
Here are our intercepts and regression coefficient summaries.
fixef(model4)
##                       Estimate  Est.Error       Q2.5     Q97.5
## respappr_Intercept   4.8399962 0.10578752  4.6301115 5.0460324
## liking_Intercept     3.6303107 0.35919791  2.9137984 4.3166260
## respappr_D_1         1.4358633 0.21557632  1.0162848 1.8717145
## respappr_D_2         0.3497242 0.25111687 -0.1406033 0.8324690
## liking_D_1          -0.1146844 0.20498317 -0.5065575 0.2993028
## liking_D_2          -0.2168985 0.20270585 -0.6162726 0.1886207
## liking_respappr      0.4129279 0.07201573  0.2767746 0.5557400
It’s important to note that these will not correspond to the “TOTAL EFFECT MODEL” section of the PROCESS output in Figure 6.3. Hayes’s PROCESS has the `mcx=3` command, which tells the program to reparametrize the orthogonal contrasts. brms doesn’t have such a command.
For now, we’ll have to jump to equation 6.8 towards the bottom of page 207. Those parameters are evident in our output.
fixef(model4)[c(1, 3:4), ] %>% round(digits = 3)
##                    Estimate Est.Error   Q2.5 Q97.5
## respappr_Intercept    4.840     0.106  4.630 5.046
## respappr_D_1          1.436     0.216  1.016 1.872
## respappr_D_2          0.350     0.251 -0.141 0.832
Thus it’s easy to get the \(\overline{M}_{ij}\) means with a little posterior manipulation.
post <- posterior_samples(model4)

post <-
  post %>% 
  mutate(M_np = b_respappr_Intercept + b_respappr_D_1 * (-2/3) + b_respappr_D_2 * 0,
         M_ip = b_respappr_Intercept + b_respappr_D_1 * (1/3)  + b_respappr_D_2 * (-1/2),
         M_cp = b_respappr_Intercept + b_respappr_D_1 * (1/3)  + b_respappr_D_2 * (1/2))

post %>% 
  select(starts_with("M_")) %>% 
  gather() %>% 
  mutate(key = factor(key, levels = c("M_np", "M_ip", "M_cp"))) %>% 
  group_by(key) %>% 
  summarize(mean = mean(value),
            sd   = sd(value))
## # A tibble: 3 x 3
##   key    mean    sd
##   <fct> <dbl> <dbl>
## 1 M_np   3.88 0.182
## 2 M_ip   5.14 0.181
## 3 M_cp   5.49 0.173
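If hand-coding the linear model feels error prone, you could get the same three \(\overline{M}_{ij}\) means by feeding the contrast values into `fitted()`. This is just an alternative sketch, and the little `nd` data frame is our own creation, not Hayes’s.

# one row per experimental condition, expressed in the contrast codes
nd <- tibble(D_1 = c(-2/3, 1/3, 1/3),
             D_2 = c(0, -1/2, 1/2))

fitted(model4, newdata = nd, resp = "respappr") %>% 
  round(digits = 3)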
With these in hand, we can compute \(a_{1}\) and \(a_{2}\).
post <-
  post %>% 
  mutate(a1 = (M_ip + M_cp)/2 - M_np,
         a2 = M_cp - M_ip)

post %>% 
  select(a1:a2) %>% 
  gather() %>% 
  group_by(key) %>% 
  summarize(mean = mean(value),
            sd   = sd(value))
## # A tibble: 2 x 3
##   key    mean    sd
##   <chr> <dbl> <dbl>
## 1 a1    1.44  0.216
## 2 a2    0.350 0.251
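With this coding scheme, a little algebra shows \(a_1\) and \(a_2\) reduce to the `D_1` and `D_2` coefficients from the respappr model: \((\overline{M}_{IP} + \overline{M}_{CP})/2 - \overline{M}_{NP} = (1/3 + 2/3) b_{D_1} = b_{D_1}\), and \(\overline{M}_{CP} - \overline{M}_{IP} = b_{D_2}\). So the posterior manipulation above is largely a consistency check. If you’d like to see that for yourself, this quick comparison, our own addition, should return values of zero up to floating point error.

post %>% 
  summarize(a1_check = max(abs(a1 - b_respappr_D_1)),
            a2_check = max(abs(a2 - b_respappr_D_2)))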
Happily, our model output will allow us to work with Hayes’s \(\overline{Y}^*_{ij}\) equations in the middle of page 210.
post <-
  post %>% 
  mutate(Y_np = b_liking_Intercept + b_liking_D_1 * (-2/3) + b_liking_D_2 * 0      + b_liking_respappr * mean(protest$respappr),
         Y_ip = b_liking_Intercept + b_liking_D_1 * (1/3)  + b_liking_D_2 * (-1/2) + b_liking_respappr * mean(protest$respappr),
         Y_cp = b_liking_Intercept + b_liking_D_1 * (1/3)  + b_liking_D_2 * (1/2)  + b_liking_respappr * mean(protest$respappr))

post %>% 
  select(starts_with("Y_")) %>% 
  gather() %>% 
  mutate(key = factor(key, levels = c("Y_np", "Y_ip", "Y_cp"))) %>% 
  group_by(key) %>% 
  summarize(mean = mean(value),
            sd   = sd(value))
## # A tibble: 3 x 3
##   key    mean    sd
##   <fct> <dbl> <dbl>
## 1 Y_np   5.72 0.160
## 2 Y_ip   5.71 0.144
## 3 Y_cp   5.49 0.149
And with these in hand, we can compute \(c'_{1}\) and \(c'_{2}\).
post <-
  post %>% 
  mutate(c1_prime = (Y_ip + Y_cp)/2 - Y_np,
         c2_prime = Y_cp - Y_ip)

post %>% 
  select(c1_prime:c2_prime) %>% 
  gather() %>% 
  group_by(key) %>% 
  summarize(mean = mean(value),
            sd   = sd(value))
## # A tibble: 2 x 3
##   key        mean    sd
##   <chr>     <dbl> <dbl>
## 1 c1_prime -0.115 0.205
## 2 c2_prime -0.217 0.203
It appears Hayes has a typo in the formula for \(c'_{2}\) on page 211. The value he has down for \(\overline{Y}^*_{IP}\), 5.145, is incorrect. It’s not the one he displayed at the bottom of the previous page and it also contradicts the analyses herein. So it goes… These things happen.
We haven’t spelled it out yet, but the \(b\) parameter is currently labeled `b_liking_respappr` in our `post` object. Here we’ll make a `b` column to make things easier. While we’re at it, we’ll compute the indirect effects, too.
post <-
  post %>% 
  mutate(b = b_liking_respappr) %>% 
  mutate(a1b = a1 * b,
         a2b = a2 * b)

post %>% 
  select(a1b:a2b) %>% 
  gather() %>% 
  group_by(key) %>% 
  summarize(mean = mean(value),
            sd   = sd(value),
            ll   = quantile(value, probs = .025),
            ul   = quantile(value, probs = .975)) %>% 
  mutate_if(is.double, round, digits = 3)
## # A tibble: 2 x 5
##   key    mean    sd     ll    ul
##   <chr> <dbl> <dbl>  <dbl> <dbl>
## 1 a1b   0.593 0.137  0.345 0.882
## 2 a2b   0.144 0.108 -0.061 0.366
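Because we have the full posterior for each indirect effect, we might as well look at the whole distributions rather than just the numeric summaries. Here’s one way you might plot them; the aesthetic choices below are our own, so adjust to taste.

post %>% 
  select(a1b:a2b) %>% 
  gather() %>% 
  ggplot(aes(x = value, fill = key)) +
  geom_density(alpha = 1/2) +
  geom_vline(xintercept = 0, linetype = 2) +
  labs(x = "indirect effect", y = NULL)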
Now we can compute and `summarize()` our \(c_{1}\) and \(c_{2}\).
post <-
  post %>% 
  mutate(c1 = c1_prime + a1b,
         c2 = c2_prime + a2b)

post %>% 
  select(c1:c2) %>% 
  gather() %>% 
  group_by(key) %>% 
  summarize(mean = mean(value),
            sd   = sd(value))
## # A tibble: 2 x 3
##   key      mean    sd
##   <chr>   <dbl> <dbl>
## 1 c1     0.478  0.199
## 2 c2    -0.0726 0.225
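If you’d like to verify these total effects more directly, one option is to regress liking on just the two contrasts and compare that model’s `D_1` and `D_2` coefficients with the `c1` and `c2` summaries above. The model below and its name, `model4_total`, are our own addition rather than part of Hayes’s workflow for this section; within simulation error, the estimates should line up.

model4_total <-
  brm(data = protest, family = gaussian,
      liking ~ 1 + D_1 + D_2,
      chains = 4, cores = 4)

fixef(model4_total) %>% round(digits = 3)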