14 Missing Data and Other Opportunities
“A big advantage of Bayesian inference is that it obviates the need to be clever” (McElreath, 2015, p. 423).
For the opening example, we’re playing with the conditional probability
\[ \text{Pr(burnt down | burnt up)} = \frac{\text{Pr(burnt up, burnt down)}}{\text{Pr(burnt up)}}. \]
It works out that
\[ \text{Pr(burnt down | burnt up)} = \frac{1/3}{1/2} = \frac{2}{3}. \]
We might express the math in the middle of page 423 in tibble form like this.
library(tidyverse)
p_pancake <- 1/3

(
  d <-
    tibble(pancake = c("BB", "BU", "UU"),
           p_burnt = c(1, .5, 0)) %>%
    mutate(p_burnt_up = p_burnt * p_pancake)
)
## # A tibble: 3 × 3
## pancake p_burnt p_burnt_up
## <chr> <dbl> <dbl>
## 1 BB 1 0.333
## 2 BU 0.5 0.167
## 3 UU 0 0
d %>%
  summarise(`p (burnt_down | burnt_up)` = p_pancake / sum(p_burnt_up))
## # A tibble: 1 × 1
## `p (burnt_down | burnt_up)`
## <dbl>
## 1 0.667
I understood McElreath’s simulation better after breaking it apart. The first part of sim_pancake() takes one random draw from the integers 1, 2, and 3. It just so happens that if we set set.seed(1), the code returns a 1.
set.seed(1)
sample(x = 1:3, size = 1)
## [1] 1
So here’s what it looks like if we use seeds 2:11.
take_sample <- function(seed) {
  set.seed(seed)
  sample(x = 1:3, size = 1)
}
tibble(seed = 2:11) %>%
mutate(value_returned = map_dbl(seed, take_sample))
## # A tibble: 10 × 2
## seed value_returned
## <int> <dbl>
## 1 2 1
## 2 3 1
## 3 4 3
## 4 5 2
## 5 6 1
## 6 7 2
## 7 8 3
## 8 9 3
## 9 10 3
## 10 11 2
Each of those value_returned values stands for one of the three pancakes: 1 = BB, 2 = BU, and 3 = UU. In the next line, McElreath made slick use of a matrix to specify that. Here’s what the matrix looks like.
matrix(c(1, 1, 1, 0, 0, 0), nrow = 2, ncol = 3)
## [,1] [,2] [,3]
## [1,] 1 1 0
## [2,] 1 0 0
See how the three columns are identified as [,1], [,2], and [,3]? If, say, we wanted to subset the values in the second column, we’d execute
matrix(c(1, 1, 1, 0, 0, 0), nrow = 2, ncol = 3)[, 2]
## [1] 1 0
which returns a numeric vector.
matrix(c(1, 1, 1, 0, 0, 0), nrow = 2, ncol = 3)[, 2] %>% str()
## num [1:2] 1 0
And that 1 0 corresponds to the pancake with one burnt (i.e., 1) and one unburnt (i.e., 0) side. So when McElreath then executed sample(sides), he randomly sampled from one of those two values. In the case of pancake == 2, he sampled from the pancake with one burnt and one unburnt side. Had he sampled from pancake == 1, he would have sampled from the pancake with both sides burnt.
Going forward, let’s amend McElreath’s sim_pancake() function so it will take a seed argument, which will allow us to make the output reproducible.
# simulate a `pancake` and return randomly ordered `sides`
sim_pancake <- function(seed) {
  set.seed(seed)
  pancake <- sample(x = 1:3, size = 1)
  sides   <- matrix(c(1, 1, 1, 0, 0, 0), nrow = 2, ncol = 3)[, pancake]
  sample(sides)
}
Let’s take this baby for a whirl.
n_sim <- 1e4

d <-
  tibble(seed = 1:n_sim) %>%
  mutate(burnt = map(seed, sim_pancake)) %>%
  unnest(burnt) %>%
  mutate(side = rep(c("up", "down"), times = n() / 2))
Take a look at what we’ve done.
head(d, n = 10)
## # A tibble: 10 × 3
## seed burnt side
## <int> <dbl> <chr>
## 1 1 1 up
## 2 1 1 down
## 3 2 1 up
## 4 2 1 down
## 5 3 1 up
## 6 3 1 down
## 7 4 0 up
## 8 4 0 down
## 9 5 1 up
## 10 5 0 down
And now we’ll spread() and summarise() to get the value we’ve been working for.

d %>%
  spread(key = side, value = burnt) %>%
  summarise(`p (burnt_down | burnt_up)` = sum(up == 1 & down == 1) / sum(up))
## # A tibble: 1 × 1
## `p (burnt_down | burnt_up)`
## <dbl>
## 1 0.658
The results are within rounding error of the ideal 2/3.
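If you’d like to see that convergence in action, here’s a quick sketch that reruns the simulation at a few different values of n_sim, reusing the sim_pancake() function from above. The sim_p() wrapper is a convenience function of my own making, not something from the text.

# rerun the simulation at increasing values of `n_sim` (a sketch)
sim_p <- function(n_sim) {
  tibble(seed = 1:n_sim) %>%
    mutate(burnt = map(seed, sim_pancake)) %>%
    unnest(burnt) %>%
    mutate(side = rep(c("up", "down"), times = n() / 2)) %>%
    spread(key = side, value = burnt) %>%
    summarise(p = sum(up == 1 & down == 1) / sum(up)) %>%
    pull(p)
}

tibble(n_sim = c(1e2, 1e3, 1e4)) %>%
  mutate(`p (burnt_down | burnt_up)` = map_dbl(n_sim, sim_p))

The estimates should wobble around 2/3 and tighten as n_sim grows.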
Probability theory is not difficult mathematically. It’s just counting. But it is hard to interpret and apply. Doing so often seems to require some cleverness, and authors have an incentive to solve problems in clever ways, just to show off. But we don’t need that cleverness, if we ruthlessly apply conditional probability….
In this chapter, [we’ll] meet two commonplace applications of this assume-and-deduce strategy. The first is the incorporation of measurement error into our models. The second is the estimation of missing data through Bayesian imputation…
In neither application do [we] have to intuit the consequences of measurement errors nor the implications of missing values in order to design the models. All [we] have to do is state [the] information about the error or about the variables with missing values. Logic does the rest. (p. 424)
14.1 Measurement error
Let’s grab those WaffleDivorce data from back in Chapter 5.
library(rethinking)
data(WaffleDivorce)

d <- WaffleDivorce
rm(WaffleDivorce)
Switch out rethinking for brms.
detach(package:rethinking, unload = T)
library(brms)
For the plots in this chapter, we’ll use the dark themes from the ggdark package (Grantham, 2019).
library(ggdark)
Our primary theme will be ggdark::dark_theme_gray(). One way to use the dark_theme_gray() function is to make it part of the code for an individual plot, such as ggplot() + geom_point() + dark_theme_gray(). Another way is to make dark_theme_gray() the default setting with ggplot2::theme_set(). That’s the method we’ll use.
theme_set(
dark_theme_gray() +
theme(legend.position = "none",
panel.grid = element_blank())
)
# to reset the default ggplot2 theme to its default parameters,
# execute `ggplot2::theme_set(theme_gray())` and `ggdark::invert_geom_defaults()`
For the rest of our color palette, we’ll use colors from the viridis package (Garnier, 2021), which provides a variety of colorblind-safe color palettes (see Rudis et al., 2018).
# install.packages("viridis")
library(viridis)
The viridis_pal() function gives a list of colors within a given palette. The colors in each palette fall on a spectrum. Within viridis_pal(), the option argument allows one to select a given spectrum, "C", in our case. The final parentheses, (), allow one to determine how many discrete colors to break the spectrum into. We’ll choose 7.
viridis_pal(option = "C")(7)
## [1] "#0D0887FF" "#5D01A6FF" "#9C179EFF" "#CC4678FF" "#ED7953FF" "#FDB32FFF" "#F0F921FF"
With a little data wrangling, we can put the colors of our palette in a tibble and display them in a plot.
tibble(number = 1:7,
color_number = str_c(1:7, ". ", viridis_pal(option = "C")(7))) %>%
ggplot(aes(x = factor(0), y = reorder(color_number, number))) +
geom_tile(aes(fill = factor(number))) +
geom_text(aes(color = factor(number), label = color_number)) +
scale_color_manual(values = c(rep("black", times = 4),
rep("white", times = 3))) +
scale_fill_viridis(option = "C", discrete = T, direction = -1) +
scale_x_discrete(NULL, breaks = NULL) +
scale_y_discrete(NULL, breaks = NULL) +
ggtitle("Behold: viridis C!")
Now, let’s make use of our custom theme and reproduce/reimagine Figure 14.1.a.

color <- viridis_pal(option = "C")(7)[7]

p1 <-
  d %>%
  ggplot(aes(x = MedianAgeMarriage, y = Divorce,
             ymin = Divorce - Divorce.SE,
             ymax = Divorce + Divorce.SE)) +
  geom_pointrange(shape = 20, alpha = 2/3, color = color) +
  labs(x = "Median age marriage",
       y = "Divorce rate")
Notice how viridis_pal(option = "C")(7)[7] called the seventh color in the color scheme, "#F0F921FF". For Figure 14.1.b, we’ll select the sixth color in the palette by coding viridis_pal(option = "C")(7)[6]. We’ll then combine the two subplots with patchwork.
color <- viridis_pal(option = "C")(7)[6]

p2 <-
  d %>%
  ggplot(aes(x = log(Population), y = Divorce,
             ymin = Divorce - Divorce.SE,
             ymax = Divorce + Divorce.SE)) +
  geom_pointrange(shape = 20, alpha = 2/3, color = color) +
  scale_y_continuous(NULL, breaks = NULL) +
  xlab("log population")

library(patchwork)

p1 | p2
Just like in the text, our plot shows states with larger populations tend to have smaller measurement error. The relation between measurement error and MedianAgeMarriage is less apparent.
14.1.1 Error on the outcome.
To get a better sense of what we’re about to do, imagine for a moment that each state’s divorce rate is normally distributed with a mean of Divorce and standard deviation Divorce.SE. Those distributions would be:

d %>%
  mutate(Divorce_distribution = str_c("Divorce ~ Normal(", Divorce, ", ", Divorce.SE, ")")) %>%
  select(Loc, Divorce_distribution) %>%
  head()
## Loc Divorce_distribution
## 1 AL Divorce ~ Normal(12.7, 0.79)
## 2 AK Divorce ~ Normal(12.5, 2.05)
## 3 AZ Divorce ~ Normal(10.8, 0.74)
## 4 AR Divorce ~ Normal(13.5, 1.22)
## 5 CA Divorce ~ Normal(8, 0.24)
## 6 CO Divorce ~ Normal(11.6, 0.94)
As in the text,
in [the following] example we’ll use a Gaussian distribution with mean equal to the observed value and standard deviation equal to the measurement’s standard error. This is the logical choice, because if all we know about the error is its standard deviation, then the maximum entropy distribution for it will be Gaussian…
Here’s how to define the distribution for each divorce rate. For each observed value \(D_{\text{OBS},i}\), there will be one parameter, \(D_{\text{EST},i}\), defined by:
\[D_{\text{OBS},i} \sim \operatorname{Normal} (D_{\text{EST},i}, D_{\text{SE},i})\]
All this does is define the measurement \(D_{\text{OBS},i}\) as having the specified Gaussian distribution centered on the unknown parameter \(D_{\text{EST},i}\). So the above defines a probability for each State \(i\)’s observed divorce rate, given a known measurement error. (pp. 426–427)
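To get a feel for that formula, here’s a little generative sketch of my own: draw hypothetical “true” divorce rates, then add state-specific measurement noise with known standard errors. All of the numbers are made up for illustration.

# a generative sketch of the measurement model (made-up values)
set.seed(14)
tibble(state = 1:5,
       d_est = rnorm(5, mean = 10, sd = 2),      # hypothetical true rates
       d_se  = runif(5, min = 0.5, max = 2)) %>% # known standard errors
  mutate(d_obs = rnorm(5, mean = d_est, sd = d_se)) # observed rates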
Now we’re ready to fit some models. In brms, there are at least two ways to accommodate measurement error in the criterion. The first way uses the se() syntax, following the form <response> | se(<se_response>, sigma = TRUE). With this syntax, se stands for standard error, the loose frequentist analogue to the Bayesian posterior \(\textit{SD}\). Unless you’re fitting a meta-analysis on summary information, which we’ll be doing at the end of this chapter, make sure to specify sigma = TRUE. Without that you’ll have no posterior for \(\sigma\)! For more information on the se() method, go to the brms reference manual (Bürkner, 2022h) and find the Additional response information subsection of the brmsformula section.
The second way uses the mi() syntax, following the form <response> | mi(<se_response>). This follows a missing data logic, resulting in Bayesian missing data imputation for the criterion values. The mi() syntax is based on the newer missing data capabilities for brms. We will cover that in more detail in the second half of this chapter.
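To preview, here are minimal formula sketches of the two approaches, using the variable names we’ll define momentarily. Neither is fit here.

# the `se()` approach
bf(div_obs | se(div_sd, sigma = TRUE) ~ 0 + Intercept + R + A)

# the `mi()` approach
bf(div_obs | mi(div_sd) ~ 0 + Intercept + R + A)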
We’ll start off using both methods. Our first model, b14.1_se, will follow the se() syntax; the second model, b14.1_mi, will follow the mi() syntax.
# put the data into a `list()`
dlist <- list(
  div_obs = d$Divorce,
  div_sd  = d$Divorce.SE,
  R       = d$Marriage,
  A       = d$MedianAgeMarriage)

# here we specify the initial (i.e., starting) values
inits      <- list(Yl = dlist$div_obs)
inits_list <- list(inits, inits)
# fit the models
b14.1_se <-
  brm(data = dlist,
      family = gaussian,
      div_obs | se(div_sd, sigma = TRUE) ~ 0 + Intercept + R + A,
      prior = c(prior(normal(0, 10), class = b),
                prior(cauchy(0, 2.5), class = sigma)),
      iter = 5000, warmup = 1000, cores = 2, chains = 2,
      seed = 14,
      init = inits_list,
      file = "fits/b14.01_se")

b14.1_mi <-
  brm(data = dlist,
      family = gaussian,
      div_obs | mi(div_sd) ~ 0 + Intercept + R + A,
      prior = c(prior(normal(0, 10), class = b),
                prior(cauchy(0, 2.5), class = sigma)),
      iter = 5000, warmup = 1000, cores = 2, chains = 2,
      seed = 14,
      control = list(adapt_delta = .9),
      # note this line for the `mi()` model
      save_pars = save_pars(latent = TRUE),
      init = inits_list,
      file = "fits/b14.01_mi")
Before we dive into the model summaries, notice how the starting values (i.e., inits) differ by model. Even though we coded init = inits_list for both models, they differ by fit@inits.
b14.1_se$fit@inits
## [[1]]
## [[1]]$b
## [1] 0.6133048 -1.9171497 1.7551789
##
## [[1]]$sigma
## [1] 0.4668127
##
## [[1]]$lprior
## [1] -11.10238
##
##
## [[2]]
## [[2]]$b
## [1] 0.9114156 1.2512265 -0.4276127
##
## [[2]]$sigma
## [1] 1.906943
##
## [[2]]$lprior
## [1] -11.50392
b14.1_mi$fit@inits
## [[1]]
## [[1]]$Yl
## [1] 12.7 12.5 10.8 13.5 8.0 11.6 6.7 8.9 6.3 8.5 11.5 8.3 7.7 8.0 11.0 10.2 10.6 12.6 11.0
## [20] 13.0 8.8 7.8 9.2 7.4 11.1 9.5 9.1 8.8 10.1 6.1 10.2 6.6 9.9 8.0 9.5 12.8 10.4 7.7
## [39] 9.4 8.1 10.9 11.4 10.0 10.2 9.6 8.9 10.0 10.9 8.3 10.3
##
## [[1]]$b
## [1] -0.5034648 1.1693530 -1.0539336
##
## [[1]]$sigma
## [1] 1.281562
##
## [[1]]$lprior
## [1] -11.27942
##
##
## [[2]]
## [[2]]$Yl
## [1] 12.7 12.5 10.8 13.5 8.0 11.6 6.7 8.9 6.3 8.5 11.5 8.3 7.7 8.0 11.0 10.2 10.6 12.6 11.0
## [20] 13.0 8.8 7.8 9.2 7.4 11.1 9.5 9.1 8.8 10.1 6.1 10.2 6.6 9.9 8.0 9.5 12.8 10.4 7.7
## [39] 9.4 8.1 10.9 11.4 10.0 10.2 9.6 8.9 10.0 10.9 8.3 10.3
##
## [[2]]$b
## [1] -0.1543955 1.1642108 -0.4231833
##
## [[2]]$sigma
## [1] 4.802142
##
## [[2]]$lprior
## [1] -12.5856
As we explore further, it should become apparent why. Here are the primary model summaries.
print(b14.1_se)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: div_obs | se(div_sd, sigma = TRUE) ~ 0 + Intercept + R + A
## Data: dlist (Number of observations: 50)
## Draws: 2 chains, each with iter = 5000; warmup = 1000; thin = 1;
## total post-warmup draws = 8000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 21.34 6.65 8.41 34.24 1.00 1818 2586
## R 0.13 0.08 -0.02 0.28 1.00 2330 3151
## A -0.55 0.21 -0.97 -0.12 1.00 1899 2432
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.13 0.21 0.76 1.56 1.00 2747 3285
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
print(b14.1_mi)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: div_obs | mi(div_sd) ~ 0 + Intercept + R + A
## Data: dlist (Number of observations: 50)
## Draws: 2 chains, each with iter = 5000; warmup = 1000; thin = 1;
## total post-warmup draws = 8000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 21.58 6.61 8.32 34.28 1.00 2934 4448
## R 0.13 0.08 -0.02 0.28 1.00 3430 4870
## A -0.56 0.21 -0.97 -0.13 1.00 3042 4638
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.11 0.20 0.75 1.55 1.00 2193 3578
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Based on the print()/summary() information, the main parameters for the models are about the same. However, the plot deepens when we summarize the models with the posterior::summarise_draws() method.
library(posterior)
# you can get similar output with `b14.1_mi$fit`
summarise_draws(b14.1_se, mean, sd, ~quantile(.x, probs = c(.025, .975))) %>%
mutate_if(is.numeric, round, digits = 2)
## # A tibble: 6 × 5
## variable mean sd `2.5%` `97.5%`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 b_Intercept 21.3 6.65 8.41 34.2
## 2 b_R 0.13 0.08 -0.02 0.28
## 3 b_A -0.55 0.21 -0.97 -0.12
## 4 sigma 1.13 0.21 0.76 1.56
## 5 lprior -13.7 1.43 -17.1 -11.6
## 6 lp__ -105. 1.46 -109. -104.
summarise_draws(b14.1_mi, mean, sd, ~quantile(.x, probs = c(.025, .975))) %>%
mutate_if(is.numeric, round, digits = 2)
## # A tibble: 56 × 5
## variable mean sd `2.5%` `97.5%`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 b_Intercept 21.6 6.61 8.32 34.3
## 2 b_R 0.13 0.08 -0.02 0.28
## 3 b_A -0.56 0.21 -0.97 -0.13
## 4 sigma 1.11 0.2 0.75 1.55
## 5 Yl[1] 11.8 0.68 10.5 13.1
## 6 Yl[2] 11.2 1.04 9.2 13.3
## 7 Yl[3] 10.5 0.63 9.23 11.7
## 8 Yl[4] 12.3 0.87 10.6 14.1
## 9 Yl[5] 8.05 0.24 7.58 8.52
## 10 Yl[6] 11.0 0.73 9.6 12.5
## # … with 46 more rows
Again, from b_Intercept to sigma, the output is about the same. But model b14.1_mi, based on the mi() syntax, contained posterior summaries for all 50 of the criterion values. The se() method gave us similar model results, but no posterior summaries for the 50 criterion values. The rethinking package indexed those additional 50 as div_est[i]; with the mi() method, brms indexed them as Yl[i]. No big deal. So while both brms methods accommodated measurement error, the mi() method appears to be the brms analogue to what McElreath did with his model m14.1 in the text. Thus, it’s our b14.1_mi model that follows the form
\[\begin{align*} \text{Divorce}_{\text{estimated}, i} & \sim \operatorname{Normal}(\mu_i, \sigma) \\ \mu & = \alpha + \beta_1 \text A_i + \beta_2 \text R_i \\ \text{Divorce}_{\text{observed}, i} & \sim \operatorname{Normal}(\text{Divorce}_{\text{estimated}, i}, \text{Divorce}_{\text{standard error}, i}) \\ \alpha & \sim \operatorname{Normal}(0, 10) \\ \beta_1 & \sim \operatorname{Normal}(0, 10) \\ \beta_2 & \sim \operatorname{Normal}(0, 10) \\ \sigma & \sim \operatorname{HalfCauchy}(0, 2.5). \end{align*}\]
Note. The normal(0, 10) prior McElreath used was quite informative and can lead to discrepancies between the rethinking and brms results if you’re not careful. A large issue is the default way brms handles intercept priors. From the hyperlink, Bürkner wrote:
The formula for the original intercept is b_intercept = temp_intercept - dot_product(means_X, b), where means_X is the vector of means of the predictor variables and b is the vector of regression coefficients (fixed effects). That is, when transforming a prior on the intercept to an “equivalent” prior on the temporary intercept, you have to take the means of the predictors as well as the priors on the other coefficients into account.
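To make that transformation concrete, here’s a little arithmetic sketch. All of the numbers are made up for illustration; only the formula comes from Bürkner.

# hypothetical values for a model with two predictors
means_X        <- c(20.1, 26.1)  # made-up predictor means
b              <- c(0.13, -0.55) # made-up regression coefficients
temp_intercept <- 9.7            # made-up intercept in the mean-centered parameterization

# the intercept in the metric of the untransformed data
(b_intercept <- temp_intercept - sum(means_X * b))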
If this seems confusing, you have an alternative. The 0 + Intercept part of the brm formula kept the intercept in the metric of the untransformed data, leading to results similar to those from rethinking. When your priors are vague, this might not be much of an issue. And since many of the models in Statistical rethinking use only weakly-regularizing priors, this hasn’t been much of an issue up to this point. But this model is quite sensitive to the intercept syntax. My general recommendation for applied data analysis is this: If your predictors aren’t mean centered, default to the 0 + Intercept syntax for the formula argument when using brms::brm(). Otherwise, your priors might not be doing what you think they’re doing.
Anyway, since our mi()-syntax b14.1_mi model appears to be the analogue to McElreath’s m14.1, we’ll use that one for our plots. Here’s the code for our Figure 14.2.a.
# add the posterior mean and sd for each State's D_EST to the d data
d <-
  d %>%
  bind_cols(
    summarise_draws(b14.1_mi, d_est = mean, d_est_sd = sd) %>%
      filter(str_detect(variable, "Yl"))
  )

# redefine the color
color <- viridis_pal(option = "C")(7)[5]

# plot
p1 <-
  d %>%
  ggplot(aes(x = Divorce.SE, y = d_est - Divorce)) +
  geom_hline(yintercept = 0, linetype = 2, color = "white") +
  geom_point(alpha = 2/3, size = 2, color = color) +
  labs(x = "Observed standard error for divorce",
       y = "Divorce (estimate - observed)")
Before we make Figure 14.2.b, we need to fit a model that ignores measurement error.
b14.1b <-
  brm(data = dlist,
      family = gaussian,
      div_obs ~ 0 + Intercept + R + A,
      prior = c(prior(normal(0, 50), class = b, coef = Intercept),
                prior(normal(0, 10), class = b),
                prior(cauchy(0, 2.5), class = sigma)),
      chains = 2, iter = 5000, warmup = 1000, cores = 2,
      seed = 14,
      file = "fits/b14.01b")
print(b14.1b)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: div_obs ~ 0 + Intercept + R + A
## Data: dlist (Number of observations: 50)
## Draws: 2 chains, each with iter = 5000; warmup = 1000; thin = 1;
## total post-warmup draws = 8000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 36.37 7.92 20.54 51.76 1.00 1816 2566
## R -0.05 0.08 -0.21 0.11 1.00 2043 2912
## A -0.98 0.25 -1.48 -0.48 1.00 1913 2708
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.51 0.16 1.24 1.89 1.00 3345 3336
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
With the ignore-measurement-error fit in hand, we’re ready for Figure 14.2.b.
nd <-
  tibble(R      = mean(d$Marriage),
         A      = seq(from = 22, to = 30.2, length.out = 30),
         div_sd = mean(d$Divorce.SE))

# red line
f_error <-
  fitted(b14.1_mi, newdata = nd) %>%
  as_tibble() %>%
  bind_cols(nd)

# yellow line
f_no_error <-
  fitted(b14.1b, newdata = nd) %>%
  as_tibble() %>%
  bind_cols(nd)

color_y <- viridis_pal(option = "C")(7)[7]
color_r <- viridis_pal(option = "C")(7)[4]

# plot
p2 <-
  f_no_error %>%
  ggplot() +
  # `f_no_error`
  geom_smooth(aes(x = A, y = Estimate,
                  ymin = Q2.5, ymax = Q97.5),
              stat = "identity",
              fill = color_y, color = color_y,
              alpha = 1/4, linewidth = 1/2, linetype = 2) +
  # `f_error`
  geom_smooth(data = f_error,
              aes(x = A, y = Estimate,
                  ymin = Q2.5, ymax = Q97.5),
              stat = "identity",
              fill = color_r, color = color_r,
              alpha = 1/3, linewidth = 1/2, linetype = 1) +
  # white dots
  geom_pointrange(data = d,
                  aes(x = MedianAgeMarriage, y = d_est,
                      ymin = d_est - d_est_sd,
                      ymax = d_est + d_est_sd),
                  color = "white", shape = 20, alpha = 1/2) +
  scale_y_continuous("Divorce rate (posterior)", breaks = seq(from = 4, to = 14, by = 2)) +
  xlab("Median age marriage") +
  coord_cartesian(xlim = range(d$MedianAgeMarriage),
                  ylim = c(4, 15))

p1 | p2
In our plot on the right, it’s the reddish regression line that accounts for measurement error.
14.1.2 Error on both outcome and predictor.
What happens when there is measurement error on predictor variables as well? The approach is the same. Again, consider the problem generatively: Each observed predictor value is a draw from a distribution with an unknown mean, the true value, but known standard deviation. So we define a vector of parameters, one for each unknown true value, and then make those parameters the means of a family of Gaussian distributions with known standard deviations. (p. 429)
We might express this model for our current example as
\[\begin{align*} \text{Divorce}_{\text{estimated}, i} & \sim \operatorname{Normal}(\mu_i, \sigma) \\ \mu & = \alpha + \beta_1 \text A_i + \beta_2 \color{blue}{\text{Marriage_rate}_{\text{estimated}, i}} \\ \text{Divorce}_{\text{observed}, i} & \sim \operatorname{Normal}(\text{Divorce}_{\text{estimated}, i}, \text{Divorce}_{\text{standard error}, i}) \\ \color{blue}{\text{Marriage_rate}_{\text{observed}, i}} & \color{blue}\sim \color{blue}{\operatorname{Normal}(\text{Marriage_rate}_{\text{estimated}, i}, \text{Marriage_rate}_{\text{standard error}, i})} \\ \alpha & \sim \operatorname{Normal}(0, 10) \\ \beta_1 & \sim \operatorname{Normal}(0, 10) \\ \beta_2 & \sim \operatorname{Normal}(0, 10) \\ \sigma & \sim \operatorname{HalfCauchy}(0, 2.5). \end{align*}\]
In brms, you can specify error on predictors with an me() statement in the form of me(predictor, sd_predictor), where sd_predictor is a vector in the data denoting the size of the measurement error, presumed to be in a standard-deviation metric.
# the data
dlist <- list(
  div_obs = d$Divorce,
  div_sd  = d$Divorce.SE,
  mar_obs = d$Marriage,
  mar_sd  = d$Marriage.SE,
  A       = d$MedianAgeMarriage)

# the `inits`
inits      <- list(Yl = dlist$div_obs)
inits_list <- list(inits, inits)
# the models
b14.2_se <-
  brm(data = dlist,
      family = gaussian,
      div_obs | se(div_sd, sigma = TRUE) ~ 0 + Intercept + me(mar_obs, mar_sd) + A,
      prior = c(prior(normal(0, 10), class = b),
                prior(cauchy(0, 2.5), class = sigma)),
      iter = 5000, warmup = 1000, chains = 3, cores = 3,
      seed = 14,
      save_pars = save_pars(latent = TRUE),
      # note the lack of `init`
      file = "fits/b14.02_se")

b14.2_mi <-
  brm(data = dlist,
      family = gaussian,
      div_obs | mi(div_sd) ~ 0 + Intercept + me(mar_obs, mar_sd) + A,
      prior = c(prior(normal(0, 10), class = b),
                prior(cauchy(0, 2.5), class = sigma)),
      iter = 5000, warmup = 1000, cores = 2, chains = 2,
      seed = 14,
      control = list(adapt_delta = 0.95),
      save_pars = save_pars(latent = TRUE),
      init = inits_list,
      file = "fits/b14.02_mi")
We already know including inits values for our Yl[i] estimates is a waste of time for our se() model. But note how we still defined our inits values as inits <- list(Yl = dlist$div_obs) for the mi() model. Although it’s easy in brms to set the starting values for our Yl[i] estimates, much the way McElreath did, that is not the case when you have measurement error on the predictors. The brms package uses a non-centered parameterization for these, which requires users to have a deeper understanding of the underlying Stan code. This is where I get off the train, but if you want to go further, execute stancode(b14.2_mi).
Here are the two versions of the model.
print(b14.2_se)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: div_obs | se(div_sd, sigma = TRUE) ~ 0 + Intercept + me(mar_obs, mar_sd) + A
## Data: dlist (Number of observations: 50)
## Draws: 3 chains, each with iter = 5000; warmup = 1000; thin = 1;
## total post-warmup draws = 12000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 15.51 6.78 2.16 28.94 1.00 4985 6255
## A -0.44 0.20 -0.83 -0.04 1.00 5606 6811
## memar_obsmar_sd 0.28 0.11 0.07 0.49 1.00 5055 6842
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.00 0.21 0.62 1.45 1.00 8508 6769
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
print(b14.2_mi)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: div_obs | mi(div_sd) ~ 0 + Intercept + me(mar_obs, mar_sd) + A
## Data: dlist (Number of observations: 50)
## Draws: 2 chains, each with iter = 5000; warmup = 1000; thin = 1;
## total post-warmup draws = 8000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 15.91 6.75 2.43 28.87 1.00 2187 3146
## A -0.45 0.20 -0.83 -0.04 1.00 2483 3882
## memar_obsmar_sd 0.27 0.11 0.07 0.48 1.00 1962 3343
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 1.00 0.21 0.62 1.43 1.00 1605 2192
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
We’ll use posterior::summarise_draws(), again, to get a sense of depth=2 summaries.
summarise_draws(b14.2_se, mean, sd, ~quantile(.x, probs = c(.025, .975))) %>%
mutate_if(is.numeric, round, digits = 2)
summarise_draws(b14.2_mi, mean, sd, ~quantile(.x, probs = c(.025, .975))) %>%
mutate_if(is.numeric, round, digits = 2)
Due to space concerns, I’m not going to show the results, here. You can do that on your own. Both methods yielded the posteriors for Xme_memar_obs[i], but only the b14.2_mi model based on the mi() syntax yielded posteriors for the criterion, the Yl[i] summaries.
Note that you’ll need to specify save_pars = save_pars(latent = TRUE) in the brm() function in order to save the posterior samples of error-adjusted variables obtained by using the me() argument. Without doing so, functions like predict() may give you trouble.
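As a quick sanity check, something like this should run without complaint for a model fit with save_pars(latent = TRUE). This is just a sketch of my own, not output from the text.

# confirm prediction works for the latent-variable model
predict(b14.2_mi, ndraws = 500) %>%
  str()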
Here is the code for Figure 14.3.a.
# add the posterior mean for each State's D_EST and M_EST to the d data
d <-
  d %>%
  select(!starts_with("d_est")) %>%
  bind_cols(
    # Divorce rate estimates
    summarise_draws(b14.2_mi, d_est = mean) %>%
      filter(str_detect(variable, "Yl")) %>%
      select(d_est),
    # Marriage rate estimates
    summarise_draws(b14.2_mi, m_est = mean) %>%
      filter(str_detect(variable, "Xme_")) %>%
      select(m_est)
  )

color <- viridis_pal(option = "C")(7)[3]

p1 <-
  d %>%
  ggplot(aes(x = Marriage.SE, y = m_est - Marriage)) +
  geom_hline(yintercept = 0, linetype = 2, color = "white") +
  geom_point(alpha = 2/3, size = 2, color = color) +
  labs(x = "Observed standard error for marriage rate",
       y = "Marriage rate (estimate - observed)")
It takes just a little bit of rearranging some of the data in d to make our version of Figure 14.3.b.
color_y <- viridis_pal(option = "C")(7)[7]
color_p <- viridis_pal(option = "C")(7)[2]

# wrangle
p2 <-
  full_join(
    d %>%
      select(Loc, Marriage, m_est) %>%
      gather(key, m, -Loc) %>%
      mutate(key = if_else(key == "Marriage", "observed", "posterior")),
    d %>%
      select(Loc, Divorce, d_est) %>%
      gather(key, d, -Loc) %>%
      mutate(key = if_else(key == "Divorce", "observed", "posterior")),
    by = c("Loc", "key")
  ) %>%
  # plot!
  ggplot(aes(x = m, y = d)) +
  geom_line(aes(group = Loc),
            color = "white", linewidth = 1/4) +
  geom_point(aes(color = key)) +
  scale_color_manual(values = c(color_p, color_y)) +
  scale_y_continuous(breaks = seq(from = 4, to = 14, by = 2)) +
  labs(x = "Marriage rate (posterior)",
       y = "Divorce rate (posterior)") +
  coord_cartesian(ylim = c(4, 14.5))

p1 | p2
In the right panel, the yellow points are model-implied; the purple ones are from the original data. When you look at both plots, it turns out our brms model regularized more aggressively than McElreath’s rethinking model. I’m unsure why. If you understand the difference, please share with the rest of the class.
Anyway,
the big take home point for this section is that when you have a distribution of values, don’t reduce it down to a single value to use in a regression. Instead, use the entire distribution. Anytime we use an average value, discarding the uncertainty around that average, we risk overconfidence and spurious inference. This doesn’t only apply to measurement error, but also to cases in which data are averaged before analysis.
Do not average. Instead, model. (p. 431)
14.2 Missing data
Starting with version 2.2.0, brms supports Bayesian missing data imputation using adaptations of the multivariate syntax (Bürkner, 2022c). Bürkner’s (2022f) vignette, Handle missing values with brms, is quite helpful for learning the basics.
14.2.1 Imputing neocortex
Once again, here are the milk data.

library(rethinking)
data(milk)

d <- milk

d <-
  d %>%
  mutate(neocortex.prop = neocortex.perc / 100,
         logmass        = log(mass))
Now we’ll switch out rethinking for brms and do a little data wrangling.
detach(package:rethinking, unload = T)
library(brms)
rm(milk)
# prep data
data_list <-
  list(kcal      = d$kcal.per.g,
       neocortex = d$neocortex.prop,
       logmass   = d$logmass)
Here’s the structure of our data list.
data_list
## $kcal
## [1] 0.49 0.51 0.46 0.48 0.60 0.47 0.56 0.89 0.91 0.92 0.80 0.46 0.71 0.71 0.73 0.68 0.72 0.97 0.79
## [20] 0.84 0.48 0.62 0.51 0.54 0.49 0.53 0.48 0.55 0.71
##
## $neocortex
## [1] 0.5516 NA NA NA NA 0.6454 0.6454 0.6764 NA 0.6885 0.5885 0.6169 0.6032
## [14] NA NA 0.6997 NA 0.7041 NA 0.7340 NA 0.6753 NA 0.7126 0.7260 NA
## [27] 0.7024 0.7630 0.7549
##
## $logmass
## [1] 0.6678294 0.7371641 0.9202828 0.4824261 0.7839015 1.6582281 1.6808279 0.9202828
## [9] -0.3424903 -0.3856625 -2.1202635 -0.7550226 -1.1394343 -0.5108256 1.2441546 0.4382549
## [17] 1.9572739 1.1755733 2.0719133 2.5095993 2.0268316 1.6808279 2.3721112 3.5689692
## [25] 4.3748761 4.5821062 3.7072104 3.4998354 4.0064237
Our statistical model follows the form
\[\begin{align*} \text{kcal}_i & \sim \operatorname{Normal}(\mu_i, \sigma) \\ \mu_i & = \alpha + \beta_1 \color{blue}{\text{neocortex}_i} + \beta_2 \text{logmass}_i \\ \color{blue}{\text{neocortex}_i} & \color{blue}\sim \color{blue}{\operatorname{Normal}(\nu, \sigma_\text{neocortex})} \\ \alpha & \sim \operatorname{Normal}(0, 100) \\ \beta_1 & \sim \operatorname{Normal}(0, 10) \\ \beta_2 & \sim \operatorname{Normal}(0, 10) \\ \sigma & \sim \operatorname{HalfCauchy}(0, 1) \\ \nu & \sim \operatorname{Normal}(0.5, 1) \\ \sigma_\text{neocortex} & \sim \operatorname{HalfCauchy}(0, 1). \end{align*}\]
If you look closely, you’ll discover the prior McElreath reported in the model equation for the intercept, \(\alpha \sim \operatorname{Normal}(0, 10)\), does not match up with the prior he used in R code 14.7, a ~ dnorm(0, 100). Here we use the latter.
When writing a multivariate model in brms, I find it easier to save the model code by itself and then insert it into the brm() function. Otherwise, things get cluttered in a hurry.
b_model <-
  # here's the primary `kcal` model
  bf(kcal ~ 1 + mi(neocortex) + logmass) +
  # here's the model for the missing `neocortex` data
  bf(neocortex | mi() ~ 1) +
  # here we set the residual correlations for the two models to zero
  set_rescor(FALSE)
Note the mi(neocortex) syntax in the kcal model. This indicates that the predictor, neocortex, has missing values that are themselves being modeled. To get a sense of how to specify the priors for such a model, use the get_prior() function.
get_prior(data = data_list,
family = gaussian,
b_model)
## prior class coef group resp dpar nlpar lb ub source
## (flat) b default
## (flat) Intercept default
## (flat) b kcal default
## (flat) b logmass kcal (vectorized)
## (flat) b mineocortex kcal (vectorized)
## student_t(3, 0.6, 2.5) Intercept kcal default
## student_t(3, 0, 2.5) sigma kcal 0 default
## student_t(3, 0.7, 2.5) Intercept neocortex default
## student_t(3, 0, 2.5) sigma neocortex 0 default
With the one-step Bayesian imputation procedure in brms, you might need to use the resp argument when specifying non-default priors. Anyway, here we fit the model.
b14.3 <-
  brm(data = data_list,
      family = gaussian,
      # here we insert the model
      b_model,
      prior = c(prior(normal(0, 100), class = Intercept, resp = kcal),
                prior(normal(0.5, 1), class = Intercept, resp = neocortex),
                prior(normal(0, 10), class = b, resp = kcal),
                prior(cauchy(0, 1), class = sigma, resp = kcal),
                prior(cauchy(0, 1), class = sigma, resp = neocortex)),
      iter = 1e4, chains = 2, cores = 2,
      seed = 14,
      file = "fits/b14.03")
The imputed neocortex values are indexed by occasion number from the original data.
summarise_draws(b14.3, mean, sd, ~quantile(.x, probs = c(.025, .975))) %>%
mutate_if(is.numeric, round, digits = 2)
## # A tibble: 20 × 5
## variable mean sd `2.5%` `97.5%`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 b_kcal_Intercept -0.53 0.48 -1.41 0.45
## 2 b_neocortex_Intercept 0.67 0.01 0.64 0.7
## 3 b_kcal_logmass -0.07 0.02 -0.11 -0.02
## 4 bsp_kcal_mineocortex 1.9 0.74 0.38 3.28
## 5 sigma_kcal 0.13 0.02 0.09 0.18
## 6 sigma_neocortex 0.06 0.01 0.04 0.09
## 7 Ymi_neocortex[2] 0.63 0.05 0.54 0.73
## 8 Ymi_neocortex[3] 0.62 0.05 0.53 0.73
## 9 Ymi_neocortex[4] 0.62 0.05 0.53 0.73
## 10 Ymi_neocortex[5] 0.65 0.05 0.56 0.75
## 11 Ymi_neocortex[9] 0.7 0.05 0.6 0.8
## 12 Ymi_neocortex[14] 0.66 0.05 0.56 0.76
## 13 Ymi_neocortex[15] 0.69 0.05 0.6 0.78
## 14 Ymi_neocortex[17] 0.7 0.05 0.6 0.79
## 15 Ymi_neocortex[19] 0.71 0.05 0.61 0.81
## 16 Ymi_neocortex[21] 0.65 0.05 0.55 0.75
## 17 Ymi_neocortex[23] 0.66 0.05 0.56 0.75
## 18 Ymi_neocortex[26] 0.7 0.05 0.59 0.79
## 19 lprior -13.8 0.01 -13.9 -13.8
## 20 lp__ 40.6 4.36 30.8 47.7
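Notice there are 12 Ymi_neocortex[i] rows, one for each missing value. Here’s a quick way to confirm that count against the data.

# the number of imputed values should match the number of `NA`s
sum(is.na(data_list$neocortex))
## [1] 12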
Here’s the model that drops the cases with NAs on neocortex.
b14.3cc <-
  brm(data = data_list,
      family = gaussian,
      kcal ~ 1 + neocortex + logmass,
      prior = c(prior(normal(0, 100), class = Intercept),
                prior(normal(0, 10), class = b),
                prior(cauchy(0, 1), class = sigma)),
      iter = 1e4, chains = 2, cores = 2,
      seed = 14,
      file = "fits/b14.03cc")
Check the parameter summaries.
summarise_draws(b14.3cc, mean, sd, ~quantile(.x, probs = c(.025, .975))) %>%
mutate_if(is.numeric, round, digits = 2)
## # A tibble: 6 × 5
## variable mean sd `2.5%` `97.5%`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 b_Intercept -1.08 0.58 -2.22 0.08
## 2 b_neocortex 2.78 0.91 0.99 4.56
## 3 b_logmass -0.1 0.03 -0.15 -0.04
## 4 sigma 0.14 0.03 0.09 0.21
## 5 lprior -12.5 0.03 -12.6 -12.4
## 6 lp__ -4.24 1.63 -8.47 -2.22
In order to make our versions of Figure 14.4, we’ll need to do a little data wrangling with fitted().
nd <-
  tibble(neocortex = seq(from = .5, to = .85, length.out = 30),
         logmass   = median(data_list$logmass))

f_b14.3 <-
  fitted(b14.3, newdata = nd) %>%
  as_tibble() %>%
  bind_cols(nd)

f_b14.3 %>%
  glimpse()
## Rows: 30
## Columns: 10
## $ Estimate.kcal <dbl> 0.3305725, 0.3534716, 0.3763707, 0.3992698, 0.4221689, 0.4450680, 0.46…
## $ Est.Error.kcal <dbl> 0.12622945, 0.11751312, 0.10883616, 0.10020879, 0.09164502, 0.08316450…
## $ Q2.5.kcal <dbl> 0.09313117, 0.13193564, 0.17089619, 0.20916860, 0.24740915, 0.28596563…
## $ Q97.5.kcal <dbl> 0.5922145, 0.5961559, 0.6018996, 0.6061100, 0.6114530, 0.6157898, 0.62…
## $ Estimate.neocortex <dbl> 0.6716265, 0.6716265, 0.6716265, 0.6716265, 0.6716265, 0.6716265, 0.67…
## $ Est.Error.neocortex <dbl> 0.01363675, 0.01363675, 0.01363675, 0.01363675, 0.01363675, 0.01363675…
## $ Q2.5.neocortex <dbl> 0.6442283, 0.6442283, 0.6442283, 0.6442283, 0.6442283, 0.6442283, 0.64…
## $ Q97.5.neocortex <dbl> 0.6988593, 0.6988593, 0.6988593, 0.6988593, 0.6988593, 0.6988593, 0.69…
## $ neocortex <dbl> 0.5000000, 0.5120690, 0.5241379, 0.5362069, 0.5482759, 0.5603448, 0.57…
## $ logmass <dbl> 1.244155, 1.244155, 1.244155, 1.244155, 1.244155, 1.244155, 1.244155, …
To include the imputed neocortex values in the plot, we’ll extract the information from summarise_draws().
f_b14.3_mi <-
  summarise_draws(b14.3, mean, ~quantile2(.x, probs = c(.025, .975))) %>%
  filter(str_detect(variable, "Ymi")) %>%
  bind_cols(data_list %>%
              as_tibble() %>%
              filter(is.na(neocortex)))

f_b14.3_mi %>% head()
## # A tibble: 6 × 7
## variable mean q2.5 q97.5 kcal neocortex logmass
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Ymi_neocortex[2] 0.632 0.536 0.733 0.51 NA 0.737
## 2 Ymi_neocortex[3] 0.624 0.528 0.730 0.46 NA 0.920
## 3 Ymi_neocortex[4] 0.623 0.527 0.730 0.48 NA 0.482
## 4 Ymi_neocortex[5] 0.653 0.556 0.752 0.6 NA 0.784
## 5 Ymi_neocortex[9] 0.702 0.602 0.803 0.91 NA -0.342
## 6 Ymi_neocortex[14] 0.657 0.562 0.761 0.71 NA -0.511
Data wrangling done–here’s our code for Figure 14.4.a.

color <- viridis_pal(option = "D")(7)[4]

p1 <-
  f_b14.3 %>%
  ggplot(aes(x = neocortex)) +
  geom_smooth(aes(y = Estimate.kcal, ymin = Q2.5.kcal, ymax = Q97.5.kcal),
              stat = "identity",
              fill = color, color = color, alpha = 1/3, linewidth = 1/2) +
  geom_point(data = data_list %>% as_tibble(),
             aes(y = kcal),
             color = "white") +
  geom_pointrange(data = f_b14.3_mi,
                  aes(x = mean, xmin = q2.5, xmax = q97.5, y = kcal),
                  color = color, shape = 1, linewidth = 1/4, fatten = 4, stroke = 1/2) +
  labs(subtitle = "Note: For the regression line in this plot,\nlog(mass) has been set to its median, 1.244.",
       x = "neocortex proportion",
       y = "kcal per gram") +
  coord_cartesian(xlim = c(.55, .8),
                  ylim = range(data_list$kcal, na.rm = T))
Here we make Figure 14.4.b, combine it with Figure 14.4.a, and plot.

color <- viridis_pal(option = "D")(7)[4]

p2 <-
  data_list %>%
  as_tibble() %>%
  ggplot(aes(x = logmass, y = neocortex)) +
  geom_point(color = "white") +
  geom_pointrange(data = f_b14.3_mi,
                  aes(y = mean, ymin = q2.5, ymax = q97.5),
                  color = color, shape = 1, linewidth = 1/4, fatten = 4, stroke = 1/2) +
  scale_x_continuous("log(mass)", breaks = -2:4) +
  ylab("neocortex proportion") +
  coord_cartesian(xlim = range(data_list$logmass, na.rm = T),
                  ylim = c(.55, .8))

p1 | p2
14.2.2 Improving the imputation model
Like McElreath, we’ll update the imputation line of our statistical model to:
\[\begin{align*} \text{neocortex}_i & \sim \operatorname{Normal}(\nu_i, \sigma_\text{neocortex}) \\ \nu_i & = \alpha_\text{neocortex} + \gamma_1 \text{logmass}_i, \end{align*}\]
which includes the updated priors
\[\begin{align*} \alpha_\text{neocortex} & \sim \operatorname{Normal}(0.5, 1) \\ \gamma_1 & \sim \operatorname{Normal}(0, 10). \end{align*}\]
As far as the brms code goes, adding logmass as a predictor to the neocortex submodel is pretty simple.
# define the model
b_model <-
  bf(kcal ~ 1 + mi(neocortex) + logmass) +
  bf(neocortex | mi() ~ 1 + logmass) + # here's the big difference
  set_rescor(FALSE)

# fit the model
b14.4 <-
  brm(data = data_list,
      family = gaussian,
      b_model,
      prior = c(prior(normal(0, 100), class = Intercept, resp = kcal),
                prior(normal(0.5, 1), class = Intercept, resp = neocortex),
                prior(normal(0, 10), class = b, resp = kcal),
                prior(normal(0, 10), class = b, resp = neocortex),
                prior(cauchy(0, 1), class = sigma, resp = kcal),
                prior(cauchy(0, 1), class = sigma, resp = neocortex)),
      iter = 1e4, chains = 2, cores = 2,
      seed = 14,
      file = "fits/b14.04")
Behold the parameter summaries.
summarise_draws(b14.4, mean, sd, ~quantile(.x, probs = c(.025, .975))) %>%
mutate_if(is.numeric, round, digits = 2)
## # A tibble: 21 × 5
## variable mean sd `2.5%` `97.5%`
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 b_kcal_Intercept -0.86 0.48 -1.76 0.13
## 2 b_neocortex_Intercept 0.64 0.01 0.61 0.66
## 3 b_kcal_logmass -0.09 0.02 -0.13 -0.04
## 4 b_neocortex_logmass 0.02 0.01 0.01 0.03
## 5 bsp_kcal_mineocortex 2.43 0.74 0.88 3.83
## 6 sigma_kcal 0.13 0.02 0.09 0.18
## 7 sigma_neocortex 0.04 0.01 0.03 0.06
## 8 Ymi_neocortex[2] 0.63 0.03 0.56 0.7
## 9 Ymi_neocortex[3] 0.63 0.04 0.56 0.7
## 10 Ymi_neocortex[4] 0.62 0.04 0.55 0.69
## # … with 11 more rows
Here’s our pre-Figure 14.5 data wrangling.
f_b14.4 <-
  fitted(b14.4, newdata = nd) %>%
  as_tibble() %>%
  bind_cols(nd)

f_b14.4_mi <-
  summarise_draws(b14.4, mean, ~quantile2(.x, probs = c(.025, .975))) %>%
  filter(str_detect(variable, "Ymi")) %>%
  bind_cols(data_list %>%
              as_tibble() %>%
              filter(is.na(neocortex)))

f_b14.4 %>%
  glimpse()
## Rows: 30
## Columns: 10
## $ Estimate.kcal <dbl> 0.2429865, 0.2722728, 0.3015592, 0.3308456, 0.3601320, 0.3894184, 0.41…
## $ Est.Error.kcal <dbl> 0.12692274, 0.11818596, 0.10948568, 0.10083132, 0.09223583, 0.08371734…
## $ Q2.5.kcal <dbl> 0.005117345, 0.050765650, 0.095918955, 0.141026058, 0.186430429, 0.230…
## $ Q97.5.kcal <dbl> 0.5033977, 0.5145662, 0.5256842, 0.5374260, 0.5489286, 0.5602954, 0.57…
## $ Estimate.neocortex <dbl> 0.6671208, 0.6671208, 0.6671208, 0.6671208, 0.6671208, 0.6671208, 0.66…
## $ Est.Error.neocortex <dbl> 0.009472688, 0.009472688, 0.009472688, 0.009472688, 0.009472688, 0.009…
## $ Q2.5.neocortex <dbl> 0.6483687, 0.6483687, 0.6483687, 0.6483687, 0.6483687, 0.6483687, 0.64…
## $ Q97.5.neocortex <dbl> 0.6859631, 0.6859631, 0.6859631, 0.6859631, 0.6859631, 0.6859631, 0.68…
## $ neocortex <dbl> 0.5000000, 0.5120690, 0.5241379, 0.5362069, 0.5482759, 0.5603448, 0.57…
## $ logmass <dbl> 1.244155, 1.244155, 1.244155, 1.244155, 1.244155, 1.244155, 1.244155, …
f_b14.4_mi %>%
  glimpse()
## Rows: 12
## Columns: 7
## $ variable <chr> "Ymi_neocortex[2]", "Ymi_neocortex[3]", "Ymi_neocortex[4]", "Ymi_neocortex[5]", …
## $ mean <dbl> 0.6315751, 0.6291646, 0.6197582, 0.6463785, 0.6629332, 0.6271872, 0.6794268, 0.6…
## $ q2.5 <dbl> 0.5639619, 0.5573172, 0.5480440, 0.5790899, 0.5932412, 0.5597960, 0.6135046, 0.6…
## $ q97.5 <dbl> 0.7004343, 0.6998674, 0.6908186, 0.7146579, 0.7314117, 0.6962155, 0.7464838, 0.7…
## $ kcal <dbl> 0.51, 0.46, 0.48, 0.60, 0.91, 0.71, 0.73, 0.72, 0.79, 0.48, 0.51, 0.53
## $ neocortex <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA
## $ logmass <dbl> 0.7371641, 0.9202828, 0.4824261, 0.7839015, -0.3424903, -0.5108256, 1.2441546, 1…
For our final plots, let’s play around with colors from viridis_pal(option = "D"). Here’s the code for Figure 14.5.a.
color <- viridis_pal(option = "D")(7)[3]

p1 <-
  f_b14.4 %>%
  ggplot(aes(x = neocortex)) +
  geom_smooth(aes(y = Estimate.kcal, ymin = Q2.5.kcal, ymax = Q97.5.kcal),
              stat = "identity",
              fill = color, color = color, alpha = 1/2, linewidth = 1/2) +
  geom_point(data = data_list %>% as_tibble(),
             aes(y = kcal),
             color = "white") +
  geom_pointrange(data = f_b14.4_mi,
                  aes(x = mean, xmin = q2.5, xmax = q97.5, y = kcal),
                  color = color, shape = 1, linewidth = 1/4, fatten = 4, stroke = 1/2) +
  labs(subtitle = "Note: For the regression line in this plot,\nlog(mass) has been set to its median, 1.244.",
       x = "neocortex proportion",
       y = "kcal per gram") +
  coord_cartesian(xlim = c(.55, .8),
                  ylim = range(data_list$kcal, na.rm = T))
Make the code for Figure 14.5.b, combine it with Figure 14.5.a, and plot.

color <- viridis_pal(option = "D")(7)[3]

p2 <-
  data_list %>%
  as_tibble() %>%
  ggplot(aes(x = logmass, y = neocortex)) +
  geom_point(color = "white") +
  geom_pointrange(data = f_b14.4_mi,
                  aes(y = mean, ymin = q2.5, ymax = q97.5),
                  color = color, shape = 1, linewidth = 1/4, fatten = 4, stroke = 1/2) +
  scale_x_continuous("log(mass)", breaks = -2:4) +
  ylab("neocortex proportion") +
  coord_cartesian(xlim = range(data_list$logmass, na.rm = T),
                  ylim = c(.55, .8))

p1 | p2
14.2.3 Bonus: mi() can replace me()
In one of his responses to a question thread on the Stan forums, Bürkner remarked he wasn’t a fan of the brms me() syntax and that he planned to deprecate it in the future. As a consequence, our code for models b14.2_se and b14.2_mi from above might soon break. Happily, we can use the mi() workflow to replace me(). Here’s what that might look like for an alternate version of b14.2_mi, which we’ll call b14.2_mi_mi.
b_model <-
  # here's the primary `div_obs` model
  bf(div_obs | mi(div_sd) ~ 0 + Intercept + mi(mar_obs) + A) +
  # here's the model for the measurement error in the `mar_obs` data
  bf(mar_obs | mi(mar_sd) ~ 0 + Intercept) +
  # here we set the residual correlations for the two models to zero
  set_rescor(FALSE)
b14.2_mi_mi <-
  brm(data = dlist,
      family = gaussian,
      # here we insert the model
      b_model,
      prior = c(prior(normal(0, 10), class = b, resp = divobs),
                prior(normal(0, 10), class = b, resp = marobs),
                prior(cauchy(0, 2.5), class = sigma, resp = divobs),
                prior(cauchy(0, 2.5), class = sigma, resp = marobs)),
      iter = 5000, warmup = 1000, cores = 2, chains = 2,
      seed = 14,
      control = list(adapt_delta = 0.99,
                     max_treedepth = 11),
      save_pars = save_pars(latent = TRUE),
      init = inits_list,
      file = "fits/b14.02_mi_mi")
## Warning: Specifying global priors for regression coefficients in multivariate
## models is deprecated. Please specify priors separately for each response variable.
One advantage of the mi() approach is that the brm() syntax requires an explicit formula for the mar_obs predictor. Another is that mar_obs gets an explicit summary in the print() output.
print(b14.2_mi_mi)
## Family: MV(gaussian, gaussian)
## Links: mu = identity; sigma = identity
## mu = identity; sigma = identity
## Formula: div_obs | mi(div_sd) ~ 0 + Intercept + mi(mar_obs) + A
## mar_obs | mi(mar_sd) ~ 0 + Intercept
## Data: dlist (Number of observations: 50)
## Draws: 2 chains, each with iter = 5000; warmup = 1000; thin = 1;
## total post-warmup draws = 8000
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## divobs_Intercept 15.46 6.92 1.65 28.74 1.00 2096 3341
## divobs_A -0.43 0.21 -0.84 -0.02 1.00 2364 3496
## marobs_Intercept 19.67 0.47 18.77 20.60 1.00 5196 5386
## divobs_mimar_obs 0.28 0.11 0.06 0.50 1.00 1767 3316
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma_divobs 1.00 0.21 0.62 1.45 1.00 1438 2434
## sigma_marobs 2.91 0.38 2.24 3.72 1.00 3600 5232
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
If you compare the output from summarise_draws(b14.2_mi_mi) to the output from summarise_draws(b14.2_mi), above, you’ll see the values are largely the same, but the ordering and some of the variable names have changed a bit. I’ll leave that comparison to the interested reader.
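If you’d like a head start, here’s one way you might line up the latent divorce-rate estimates from the two models. This is a sketch based on the Yl_ naming convention used in the plotting code below, so inspect the variable names in your own output first.

# posterior means for the latent divorce rates, side by side
bind_cols(
  summarise_draws(b14.2_mi, mi = mean) %>%
    filter(str_detect(variable, "Yl")) %>%
    select(mi),
  summarise_draws(b14.2_mi_mi, mi_mi = mean) %>%
    filter(str_detect(variable, "Yl_divobs")) %>%
    select(mi_mi)
)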
To further cement the similarities of the two models, here’s a reworking of Figure 14.3.b, this time based on our b14.2_mi_mi model.
full_join(
  # observed values
  tibble(row      = 1:50,
         Divorce  = dlist$div_obs,
         Marriage = dlist$mar_obs) %>%
    gather(variable, observed, -row),
  # posterior means
  summarise_draws(b14.2_mi_mi, posterior = mean) %>%
    filter(str_detect(variable, "Yl_")) %>%
    mutate(row      = str_extract(variable, "\\d+") %>% as.double(),
           variable = if_else(str_detect(variable, "divobs"), "Divorce", "Marriage")),
  by = c("row", "variable")
) %>%
  pivot_longer(observed:posterior) %>%
  pivot_wider(names_from = variable, values_from = value) %>%
  # plot!
  ggplot(aes(x = Marriage, y = Divorce)) +
  geom_line(aes(group = row),
            color = "white", linewidth = 1/4) +
  geom_point(aes(color = name)) +
  scale_color_manual(values = c(color_p, color_y)) +
  scale_y_continuous(breaks = seq(from = 4, to = 14, by = 2)) +
  labs(x = "Marriage rate (posterior)",
       y = "Divorce rate (posterior)") +
  coord_cartesian(ylim = c(4, 14.5))
The results are basically the same.
To wrap up, if modern missing data methods are new to you, you might also check out van Buuren’s great (2018) text, Flexible imputation of missing data: Second edition. I’m also a fan of Enders’s (2022) Applied missing data analysis, for which you can download a free sample chapter by clicking here. I’ll also quickly mention that brms accommodates multiple imputation, too.
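For a taste of what that might look like, here’s a hedged sketch using brm_multiple() with imputed data sets from the mice package, assuming mice is installed. The formula, settings, and the b14.3_mult name are illustrative, not from the text.

# multiple imputation: impute first, then fit the model to each imputed data set
library(mice)

imp <- mice(d %>% select(kcal.per.g, neocortex.prop, logmass),
            m = 5, print = FALSE) # five imputed data sets

b14.3_mult <-
  brm_multiple(kcal.per.g ~ 1 + neocortex.prop + logmass,
               data = imp,
               family = gaussian,
               chains = 2, cores = 2,
               seed = 14)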
14.3 Summary Bonus: Meta-analysis
If your mind isn’t fully blown by those measurement-error and missing-data models, let’s keep building. As it turns out, meta-analyses are often just special kinds of multilevel measurement-error models. Thus, you can use brms::brm() to fit Bayesian meta-analyses, too.
Before we proceed, I should acknowledge that this section is heavily influenced by Matti Vuorre’s great blog post, Bayesian meta-analysis with R, Stan & brms. Since McElreath’s text doesn’t directly address meta-analyses, we’ll also have to borrow a bit from Gelman, Carlin, Stern, Dunson, Vehtari, and Rubin’s (2013) Bayesian data analysis, Third edition. We’ll let Gelman and colleagues introduce the topic:
Discussions of meta-analysis are sometimes imprecise about the estimands of interest in the analysis, especially when the primary focus is on testing the null hypothesis of no effect in any of the studies to be combined. Our focus is on estimating meaningful parameters, and for this objective there appear to be three possibilities, accepting the overarching assumption that the studies are comparable in some broad sense. The first possibility is that we view the studies as identical replications of each other, in the sense we regard the individuals in all the studies as independent samples from a common population, with the same outcome measures and so on. A second possibility is that the studies are so different that the results of any one study provide no information about the results of any of the others. A third, more general, possibility is that we regard the studies as exchangeable but not necessarily either identical or completely unrelated; in other words we allow differences from study to study, but such that the differences are not expected a priori to have predictable effects favoring one study over another…. this third possibility represents a continuum between the two extremes, and it is this exchangeable model (with unknown hyperparameters characterizing the population distribution) that forms the basis of our Bayesian analysis…
The first potential estimand of a meta-analysis, or a hierarchically structured problem in general, is the mean of the distribution of effect sizes, since this represents the overall ‘average’ effect across all studies that could be regarded as exchangeable with the observed studies. Other possible estimands are the effect size in any of the observed studies and the effect size in another, comparable (exchangeable) unobserved study. (pp. 125—126, emphasis in the original)
The basic version of a Bayesian meta-analysis follows the form
\[y_i \sim \operatorname{Normal}(\theta_i, \sigma_i),\]
where \(y_i\) = the point estimate for the effect size of a single study, \(i\), which is presumed to have been a draw from a Normal distribution centered on \(\theta_i\). The data in meta-analyses are typically statistical summaries from individual studies. The one clear lesson from this chapter is that those estimates themselves come with error and those errors should be fully expressed in the meta-analytic model. Which we do. The standard error from study \(i\) is specified \(\sigma_i\), which is also a stand-in for the standard deviation of the Normal distribution from which the point estimate was drawn. Do note, we’re not estimating \(\sigma_i\), here. Those values we take directly from the original studies.
Building on the model, we further presume that study \(i\) is itself just one draw from a population of related studies, each of which has its own effect size. As such, we presume \(\theta_i\) itself has a distribution following the form
\[\theta_i \sim \operatorname{Normal} (\mu, \tau),\]
where \(\mu\) is the meta-analytic effect (i.e., the population mean) and \(\tau\) is the variation around that mean, what you might also think of as \(\sigma_\tau\).
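Before fitting anything, it can help to see the model run forward. Here’s a generative sketch with made-up values for \(\mu\) and \(\tau\): draw study-level effects, then draw observed effect sizes with study-specific standard errors.

# simulate the two-level meta-analytic model (all values made up)
set.seed(14)
tibble(study = 1:10,
       theta = rnorm(10, mean = 0.35, sd = 0.25),   # study-level true effects
       se    = runif(10, min = 0.1, max = 0.5)) %>% # known standard errors
  mutate(y = rnorm(10, mean = theta, sd = se))      # observed effect sizes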
Since there’s no example of a meta-analysis in the text, we’ll have to get our data elsewhere. We’ll focus on Gershoff and Grogan-Kaylor’s (2016) paper, Spanking and child outcomes: Old controversies and new meta-analyses. From their introduction, we read:
Around the world, most children (80%) are spanked or otherwise physically punished by their parents (UNICEF, 2014). The question of whether parents should spank their children to correct misbehaviors sits at a nexus of arguments from ethical, religious, and human rights perspectives both in the U.S. and around the world (Gershoff, 2013). Several hundred studies have been conducted on the associations between parents’ use of spanking or physical punishment and children’s behavioral, emotional, cognitive, and physical outcomes, making spanking one of the most studied aspects of parenting. What has been learned from these hundreds of studies? (p. 453)
Our goal will be to learn Bayesian meta-analysis by answering part of that question. I’ve transcribed the values directly from Gershoff and Grogan-Kaylor’s paper and saved them as a file called spank.xlsx. You can find the data in this project’s GitHub repository. Let’s load them and glimpse().

spank <- readxl::read_excel("spank.xlsx")

glimpse(spank)
## Rows: 111
## Columns: 8
## $ study <chr> "Bean and Roberts (1981)", "Day and Roberts (1983)", "Minton, Kagan, and Levine (1…
## $ year <dbl> 1981, 1983, 1971, 1988, 1990, 1961, 1962, 1990, 2002, 2005, 1986, 2012, 1979, 2009…
## $ outcome <chr> "Immediate defiance", "Immediate defiance", "Immediate defiance", "Immediate defia…
## $ between <dbl> 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1…
## $ within <dbl> 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0…
## $ d <dbl> -0.74, 0.36, 0.34, -0.08, 0.10, 0.63, 0.19, 0.47, 0.14, -0.18, 1.18, 0.70, 0.63, 0…
## $ ll <dbl> -1.76, -1.04, -0.09, -1.01, -0.82, 0.16, -0.14, 0.20, -0.42, -0.49, 0.15, 0.35, -0…
## $ ul <dbl> 0.28, 1.77, 0.76, 0.84, 1.03, 1.10, 0.53, 0.74, 0.70, 0.13, 2.22, 1.05, 1.71, 0.22…
In this paper, the effect size of interest is a Cohen’s \(d\), derived from the formula
\[d = \frac{\mu_\text{treatment} - \mu_\text{comparison}}{\sigma_\text{pooled}},\]
where
\[\sigma_\text{pooled} = \sqrt{\frac{[(n_1 - 1) \sigma_1^2] + [(n_2 - 1) \sigma_2^2]}{n_1 + n_2 -2}}.\]
To help make the equation for \(d\) clearer for our example, we might re-express it as
\[d = \frac{\mu_\text{spanked} - \mu_\text{not spanked}}{\sigma_\text{pooled}}.\]
McElreath didn’t really focus on effect sizes in his text. If you need a refresher, you might check out Kelley and Preacher’s (2012) On effect size. But in words, Cohen’s \(d\) is a standardized mean difference between two groups.
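If it helps, here’s the pooled-SD formula in action with made-up summary statistics; none of these numbers come from the spanking literature.

# a toy Cohen's d from hypothetical group summaries
n1 <- 50;  n2 <- 50   # group sizes
m1 <- 0.8; m2 <- 0.5  # group means
s1 <- 1.1; s2 <- 0.9  # group standard deviations

s_pooled <- sqrt(((n1 - 1) * s1^2 + (n2 - 1) * s2^2) / (n1 + n2 - 2))
(cohens_d <- (m1 - m2) / s_pooled)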
So if you look back up at the results of glimpse(spank), you’ll notice the column d, which is indeed a vector of Cohen’s \(d\) effect sizes. The last two columns, ll and ul, are the lower and upper limits of the associated 95% frequentist confidence intervals. But we don’t want confidence intervals for our d-values; we want their standard errors. Fortunately, we can compute those with the following formula,
\[SE = \frac{\text{upper limit} - \text{lower limit}}{3.92},\]
where 3.92 is the full width of a 95% interval in standard-error units (i.e., \(2 \times 1.96\)).
Here it is in code.
spank <-
  spank %>%
  mutate(se = (ul - ll) / 3.92)

glimpse(spank)
## Rows: 111
## Columns: 9
## $ study <chr> "Bean and Roberts (1981)", "Day and Roberts (1983)", "Minton, Kagan, and Levine (1…
## $ year <dbl> 1981, 1983, 1971, 1988, 1990, 1961, 1962, 1990, 2002, 2005, 1986, 2012, 1979, 2009…
## $ outcome <chr> "Immediate defiance", "Immediate defiance", "Immediate defiance", "Immediate defia…
## $ between <dbl> 1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 1…
## $ within <dbl> 0, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1, 0…
## $ d <dbl> -0.74, 0.36, 0.34, -0.08, 0.10, 0.63, 0.19, 0.47, 0.14, -0.18, 1.18, 0.70, 0.63, 0…
## $ ll <dbl> -1.76, -1.04, -0.09, -1.01, -0.82, 0.16, -0.14, 0.20, -0.42, -0.49, 0.15, 0.35, -0…
## $ ul <dbl> 0.28, 1.77, 0.76, 0.84, 1.03, 1.10, 0.53, 0.74, 0.70, 0.13, 2.22, 1.05, 1.71, 0.22…
## $ se <dbl> 0.52040816, 0.71683673, 0.21683673, 0.47193878, 0.47193878, 0.23979592, 0.17091837…
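As a quick sanity check of my own, we can verify the first row by hand. Bean and Roberts (1981) has `ll = -1.76` and `ul = 0.28`, so its standard error should be

```r
(0.28 - (-1.76)) / 3.92
```

which does indeed match the first value in the new `se` column.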
Now that our data are ready, we can express our first Bayesian meta-analysis with the formula
\[\begin{align*} \text{d}_i & \sim \operatorname{Normal}(\theta_i, \sigma_i = \text{se}_i) \\ \theta_i & \sim \operatorname{Normal}(\mu, \tau) \\ \mu & \sim \operatorname{Normal}(0, 1) \\ \tau & \sim \operatorname{HalfCauchy}(0, 1). \end{align*}\]
The last two lines, of course, spell out our priors. In psychology, it’s pretty rare to see Cohen’s \(d\)-values with an absolute value greater than 1. So in the absence of more specific domain knowledge (which I don’t have), \(\operatorname{Normal}(0, 1)\) seems like a reasonable place to start. And just like McElreath used \(\operatorname{HalfCauchy}(0, 1)\) as the default prior for the group-level standard deviations, it makes sense to use it here for our meta-analytic \(\tau\) parameter (Donald R. Williams et al., 2018).
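If you’d like a sense of what those priors imply on the effect-size scale before fitting, here’s a quick prior-predictive simulation. This is an aside of my own, not something from the text.

```r
set.seed(14)

tibble(mu  = rnorm(1e5, mean = 0, sd = 1),
       tau = abs(rcauchy(1e5, location = 0, scale = 1))) %>% 
  # draw study-level effects from their hierarchical prior
  mutate(theta = rnorm(n(), mean = mu, sd = tau)) %>% 
  summarise(`Pr(|theta| < 1)` = mean(abs(theta) < 1),
            `Pr(|theta| < 2)` = mean(abs(theta) < 2))
```

The simulation shows how much prior mass the model places on study-level effects within the conventional range for \(d\), while the heavy Cauchy tail on \(\tau\) still leaves room for surprises.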
Here’s the code for the first model.
b14.5 <-
  brm(data = spank, 
      family = gaussian,
      d | se(se) ~ 1 + (1 | study),
      prior = c(prior(normal(0, 1), class = Intercept),
                prior(cauchy(0, 1), class = sd)),
      iter = 4000, warmup = 1000, cores = 4, chains = 4,
      seed = 14,
      file = "fits/b14.05")
One thing you might notice is that our `se(se)` statement excluded the `sigma` argument. If you recall from Section 14.1, we specified `sigma = T` in our measurement-error models. The brms default within `se()` is `sigma = FALSE`. As such, we have no estimate for `sigma` the way we would if we were doing this analysis with the raw data from the studies. Hopefully this makes sense: the uncertainty around the `d`-value for each study \(i\) has already been encoded in the data as `se`.
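For comparison, here’s what the syntax would look like if we did want an estimate of \(\sigma\) over and above the known standard errors. This is just a sketch to show the argument (the fit name `b14.5_sigma` is my own); it isn’t a model we need for these data.

```r
b14.5_sigma <-
  brm(data = spank, 
      family = gaussian,
      # sigma = TRUE turns the residual standard deviation back on
      d | se(se, sigma = TRUE) ~ 1 + (1 | study),
      prior = c(prior(normal(0, 1), class = Intercept),
                prior(cauchy(0, 1), class = sd)),
      iter = 4000, warmup = 1000, cores = 4, chains = 4,
      seed = 14)
```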
This brings us to another point. We typically perform meta-analyses on data summaries. In my field and perhaps in yours, this is due to the historical accident that it has not been the norm among researchers to make their data publicly available, so effect-size summaries were typically the best we had. However, times are changing (e.g., Winerman, 2017). If the raw data from all the studies in your meta-analysis are available, you can just fit a multilevel model in which the data are nested in the studies. Heck, you could even allow the studies to vary by \(\sigma\) by taking the distributional modeling approach (Bürkner, 2022a) and specifying something like `sigma ~ 0 + study` or even `sigma ~ 1 + (1 | study)`.
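Though we don’t have raw data to demonstrate with, such a distributional model might look something like the following sketch, where the data frame `raw_data` and its columns `y` and `study` are hypothetical placeholders.

```r
# not run: a sketch of the distributional approach with raw data
fit <-
  brm(data = raw_data,
      family = gaussian,
      bf(y ~ 1 + (1 | study),
         sigma ~ 1 + (1 | study)),
      prior = c(prior(normal(0, 1), class = Intercept),
                prior(cauchy(0, 1), class = sd)))
```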
But enough technical talk. Let’s look at the model results.
print(b14.5)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: d | se(se) ~ 1 + (1 | study)
## Data: spank (Number of observations: 111)
## Draws: 4 chains, each with iter = 4000; warmup = 1000; thin = 1;
## total post-warmup draws = 12000
##
## Group-Level Effects:
## ~study (Number of levels: 76)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.26 0.03 0.21 0.33 1.00 2541 4657
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 0.37 0.04 0.30 0.44 1.00 1534 2927
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.00 0.00 0.00 0.00 NA NA NA
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Thus, in our simple Bayesian meta-analysis, we have a population Cohen’s \(d\) of about 0.37. Our estimate for \(\tau\), 0.26, suggests we have quite a bit of between-study variability. One question you might ask is: What exactly are these Cohen’s \(d\)’s measuring, anyways? We’ve encoded that in the `outcome` vector of the `spank` data.
spank %>% 
  distinct(outcome) %>% 
  knitr::kable()
|outcome                               |
|:-------------------------------------|
|Immediate defiance                    |
|Low moral internalization             |
|Child aggression                      |
|Child antisocial behavior             |
|Child externalizing behavior problems |
|Child internalizing behavior problems |
|Child mental health problems          |
|Child alcohol or substance abuse      |
|Negative parent–child relationship    |
|Impaired cognitive ability            |
|Low self-esteem                       |
|Low self-regulation                   |
|Victim of physical abuse              |
|Adult antisocial behavior             |
|Adult mental health problems          |
|Adult alcohol or substance abuse      |
|Adult support for physical punishment |
There are a few things to note. First, with the possible exception of `Adult support for physical punishment`, all of the outcomes are negative. We prefer conditions associated with lower values for things like `Child aggression` and `Adult mental health problems`. Second, the way the data are coded, larger effect sizes are interpreted as more negative outcomes associated with children having been spanked. That is, our analysis suggests spanking children is associated with worse outcomes. What might not be immediately apparent is that even though there are 111 cases in the data, there are only 76 distinct studies.
spank %>% 
  distinct(study) %>% 
  count()
## # A tibble: 1 × 1
## n
## <int>
## 1 76
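As a quick aside of my own, we can also look at how the 111 rows are spread across those studies.

```r
spank %>% 
  count(study, sort = TRUE) %>% 
  head()
```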
In other words, some studies have multiple outcomes. In order to better accommodate the `study`- and `outcome`-level variances, let’s fit a cross-classified Bayesian meta-analysis reminiscent of the cross-classified chimp model from Chapter 13.
b14.6 <-
  brm(data = spank, 
      family = gaussian,
      d | se(se) ~ 1 + (1 | study) + (1 | outcome),
      prior = c(prior(normal(0, 1), class = Intercept),
                prior(cauchy(0, 1), class = sd)),
      iter = 4000, warmup = 1000, cores = 4, chains = 4,
      seed = 14,
      file = "fits/b14.06")
print(b14.6)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: d | se(se) ~ 1 + (1 | study) + (1 | outcome)
## Data: spank (Number of observations: 111)
## Draws: 4 chains, each with iter = 4000; warmup = 1000; thin = 1;
## total post-warmup draws = 12000
##
## Group-Level Effects:
## ~outcome (Number of levels: 17)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.08 0.03 0.04 0.14 1.00 2640 5078
##
## ~study (Number of levels: 76)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.25 0.03 0.20 0.32 1.00 2592 5195
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 0.35 0.04 0.27 0.43 1.00 2294 4056
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.00 0.00 0.00 0.00 NA NA NA
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Now we have two \(\tau\) parameters. We might plot them to get a sense of where the variance is at.
# we'll want this to label the plot
label <-
  tibble(tau   = c(.12, .3),
         y     = c(15, 10),
         label = c("sigma['outcome']", "sigma['study']"))

# wrangle
as_draws_df(b14.6) %>% 
  select(starts_with("sd")) %>% 
  gather(key, tau) %>% 
  mutate(key = str_remove(key, "sd_") %>% str_remove(., "__Intercept")) %>% 
  
  # plot
  ggplot(aes(x = tau)) +
  geom_density(aes(fill = key),
               color = "transparent") +
  geom_text(data = label,
            aes(y = y, label = label, color = label),
            size = 5, parse = T) +
  scale_fill_viridis_d(NULL, option = "B", begin = .5) +
  scale_color_viridis_d(NULL, option = "B", begin = .5) +
  scale_x_continuous(expression(tau), limits = c(0, NA)) +
  scale_y_continuous(NULL, breaks = NULL)
So at this point, the big story is there’s more variability between the studies than between the outcomes. But I still want to get a sense of the individual outcomes. Here we’ll use `tidybayes::stat_halfeye()` to help us make our version of a forest plot and `tidybayes::spread_draws()` to help with the initial wrangling.
library(tidybayes)

b14.6 %>% 
  spread_draws(b_Intercept, r_outcome[outcome,]) %>% 
  # add the grand mean to the group-specific deviations
  mutate(mu = b_Intercept + r_outcome) %>% 
  ungroup() %>%
  mutate(outcome = str_replace_all(outcome, "[.]", " ")) %>% 
  
  # plot
  ggplot(aes(x = mu, y = reorder(outcome, mu), fill = reorder(outcome, mu))) +
  geom_vline(xintercept = fixef(b14.6)[1, 1], color = "grey33", linewidth = 1) +
  geom_vline(xintercept = fixef(b14.6)[1, 3:4], color = "grey33", linetype = 2) +
  stat_halfeye(.width = .95, size = 2/3, color = "white") +
  scale_fill_viridis_d(option = "B", begin = .2) +
  labs(x = expression("Cohen's "*italic(d)),
       y = NULL) +
  theme(axis.text.y = element_text(hjust = 0),
        axis.ticks.y = element_blank())
The solid and dashed vertical lines in the background mark off the grand mean (i.e., the meta-analytic effect) and its 95% intervals. But anyway, there’s not a lot of variability across the outcomes. Let’s go one step further with the model. Doubling back to Gelman and colleagues, we read:
When assuming exchangeability we assume there are no important covariates that might form the basis of a more complex model, and this assumption (perhaps misguidedly) is widely adopted in meta-analysis. What if other information (in addition to the data \((n, y)\)) is available to distinguish among the \(J\) studies in a meta-analysis, so that an exchangeable model is inappropriate? In this situation, we can expand the framework of the model to be exchangeable in the observed data and covariates, for example using a hierarchical regression model. (p. 126)
One important covariate Gershoff and Grogan-Kaylor addressed in their meta-analysis was the type of study. The 76 papers they based their meta-analysis on contained both between- and within-participants designs. In the `spank` data, we’ve dummy coded that information with the `between` and `within` vectors, where \(\text{within} = 1 - \text{between}\). Here are the counts.
spank %>% 
  count(between)
## # A tibble: 2 × 2
## between n
## <dbl> <int>
## 1 0 71
## 2 1 40
When I use dummies in my models, I prefer to have the majority group stand as the reference category. As such, I typically name those variables after the minority group. In this case, most occasions are based on within-participant designs. Thus, we’ll go ahead and add the `between` variable to the model. While we’re at it, we’ll practice using the `0 + Intercept` syntax, which treats the intercept as an ordinary regression coefficient so that our `normal(0, 1)` prior applies to it directly, rather than to the internally centered intercept brms uses by default.
b14.7 <-
  brm(data = spank, 
      family = gaussian,
      d | se(se) ~ 0 + Intercept + between + (1 | study) + (1 | outcome),
      prior = c(prior(normal(0, 1), class = b),
                prior(cauchy(0, 1), class = sd)),
      iter = 4000, warmup = 1000, cores = 4, chains = 4,
      seed = 14,
      file = "fits/b14.07")
Behold the summary.
print(b14.7)
## Family: gaussian
## Links: mu = identity; sigma = identity
## Formula: d | se(se) ~ 0 + Intercept + between + (1 | study) + (1 | outcome)
## Data: spank (Number of observations: 111)
## Draws: 4 chains, each with iter = 4000; warmup = 1000; thin = 1;
## total post-warmup draws = 12000
##
## Group-Level Effects:
## ~outcome (Number of levels: 17)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.08 0.03 0.04 0.14 1.00 4889 7141
##
## ~study (Number of levels: 76)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.25 0.03 0.20 0.31 1.00 3113 5843
##
## Population-Level Effects:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept 0.38 0.05 0.29 0.48 1.00 3398 5719
## between -0.08 0.07 -0.21 0.06 1.00 3508 6016
##
## Family Specific Parameters:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma 0.00 0.00 0.00 0.00 NA NA NA
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Let’s take a closer look at the `b_between` parameter.
color <- viridis_pal(option = "B")(7)[5]

as_draws_df(b14.7) %>% 
  ggplot(aes(x = b_between, y = 0)) +
  stat_halfeye(.width = c(.5, .95), color = "white", fill = color) +
  scale_y_continuous(NULL, breaks = NULL) +
  xlab("Overall difference for between- vs within-participant designs")
That difference isn’t as large as I’d have expected. But then again, I’m no spanking researcher. So what do I know?
There are other things you might do with these data. For example, you might check for trends by year (see the sketch below) or, as the authors did in their manuscript, distinguish among different severities of corporal punishment. But I think we’ve gone far enough to get you started.
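For instance, a year-trend model might look something like this sketch. The standardized `year_s` predictor and the fit name `b14.7_year` are my own additions, and I haven’t explored what the model returns.

```r
# standardize year so the normal(0, 1) prior stays sensible for its slope
spank <-
  spank %>% 
  mutate(year_s = (year - mean(year)) / sd(year))

b14.7_year <-
  brm(data = spank, 
      family = gaussian,
      d | se(se) ~ 0 + Intercept + year_s + (1 | study) + (1 | outcome),
      prior = c(prior(normal(0, 1), class = b),
                prior(cauchy(0, 1), class = sd)),
      iter = 4000, warmup = 1000, cores = 4, chains = 4,
      seed = 14)
```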
If you’d like to learn more about these methods, do check out Vuorre’s Bayesian meta-analysis with R, Stan & brms. From his blog, you’ll learn additional tricks, like making a more traditional-looking forest plot, and see how our Bayesian brms method compares with frequentist meta-analyses via the metafor package (Viechtbauer, 2010, 2022). You might also check out the aforementioned paper by Williams and colleagues, Bayesian meta-analysis with weakly informative prior distributions, for an empirical justification for using a half-Cauchy prior on your meta-analysis \(\tau\) parameters.
Session info
sessionInfo()
## R version 4.2.2 (2022-10-31)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Big Sur ... 10.16
##
## Matrix products: default
## BLAS: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] parallel stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] tidybayes_3.0.2 posterior_1.3.1 patchwork_1.1.2 viridis_0.6.2
## [5] viridisLite_0.4.1 ggdark_0.2.1 brms_2.18.0 Rcpp_1.0.9
## [9] cmdstanr_0.5.3 rstan_2.21.8 StanHeaders_2.21.0-7 forcats_0.5.1
## [13] stringr_1.4.1 dplyr_1.0.10 purrr_1.0.1 readr_2.1.2
## [17] tidyr_1.2.1 tibble_3.1.8 ggplot2_3.4.0 tidyverse_1.3.2
##
## loaded via a namespace (and not attached):
## [1] readxl_1.4.1 backports_1.4.1 plyr_1.8.7 igraph_1.3.4
## [5] svUnit_1.0.6 sp_1.5-0 splines_4.2.2 crosstalk_1.2.0
## [9] TH.data_1.1-1 rstantools_2.2.0 inline_0.3.19 digest_0.6.31
## [13] htmltools_0.5.3 fansi_1.0.3 magrittr_2.0.3 checkmate_2.1.0
## [17] googlesheets4_1.0.1 tzdb_0.3.0 modelr_0.1.8 RcppParallel_5.1.5
## [21] matrixStats_0.63.0 xts_0.12.1 sandwich_3.0-2 prettyunits_1.1.1
## [25] colorspace_2.0-3 rvest_1.0.2 ggdist_3.2.1 rgdal_1.5-30
## [29] haven_2.5.1 xfun_0.35 callr_3.7.3 crayon_1.5.2
## [33] jsonlite_1.8.4 lme4_1.1-31 survival_3.4-0 zoo_1.8-10
## [37] glue_1.6.2 gtable_0.3.1 gargle_1.2.0 emmeans_1.8.0
## [41] distributional_0.3.1 pkgbuild_1.3.1 shape_1.4.6 abind_1.4-5
## [45] scales_1.2.1 mvtnorm_1.1-3 DBI_1.1.3 miniUI_0.1.1.1
## [49] xtable_1.8-4 stats4_4.2.2 DT_0.24 htmlwidgets_1.5.4
## [53] httr_1.4.4 threejs_0.3.3 arrayhelpers_1.1-0 ellipsis_0.3.2
## [57] pkgconfig_2.0.3 loo_2.5.1 farver_2.1.1 sass_0.4.2
## [61] dbplyr_2.2.1 utf8_1.2.2 labeling_0.4.2 tidyselect_1.2.0
## [65] rlang_1.0.6 reshape2_1.4.4 later_1.3.0 munsell_0.5.0
## [69] cellranger_1.1.0 tools_4.2.2 cachem_1.0.6 cli_3.6.0
## [73] generics_0.1.3 broom_1.0.2 evaluate_0.18 fastmap_1.1.0
## [77] processx_3.8.0 knitr_1.40 fs_1.5.2 nlme_3.1-160
## [81] mime_0.12 projpred_2.2.1 xml2_1.3.3 compiler_4.2.2
## [85] bayesplot_1.10.0 shinythemes_1.2.0 rstudioapi_0.13 gamm4_0.2-6
## [89] reprex_2.0.2 bslib_0.4.0 stringi_1.7.8 highr_0.9
## [93] ps_1.7.2 Brobdingnag_1.2-8 lattice_0.20-45 Matrix_1.5-1
## [97] nloptr_2.0.3 markdown_1.1 shinyjs_2.1.0 tensorA_0.36.2
## [101] vctrs_0.5.1 pillar_1.8.1 lifecycle_1.0.3 jquerylib_0.1.4
## [105] bridgesampling_1.1-2 estimability_1.4.1 httpuv_1.6.5 R6_2.5.1
## [109] bookdown_0.28 promises_1.2.0.1 gridExtra_2.3 codetools_0.2-18
## [113] boot_1.3-28 colourpicker_1.1.1 MASS_7.3-58.1 gtools_3.9.4
## [117] assertthat_0.2.1 withr_2.5.0 shinystan_2.6.0 multcomp_1.4-20
## [121] mgcv_1.8-41 hms_1.1.1 grid_4.2.2 coda_0.19-4
## [125] minqa_1.2.5 rmarkdown_2.16 googledrive_2.0.0 shiny_1.7.2
## [129] lubridate_1.8.0 base64enc_0.1-3 dygraphs_1.1.1.6