10 Model Comparison and Hierarchical Modeling

There are situations in which different models compete to describe the same set of data…

…Bayesian inference is reallocation of credibility over possibilities. In model comparison, the focal possibilities are the models, and Bayesian model comparison reallocates credibility across the models, given the data. In this chapter, we explore examples and methods of Bayesian inference about the relative credibilities of models. (pp. 265–266)

In the text, the emphasis is on the Bayes Factor paradigm. While we will discuss that, we will also present the alternatives available with information criteria, model averaging, and model stacking.

10.1 General formula and the Bayes factor

So far we have spoken of

  • the data, denoted by \(D\) or \(y\);
  • the model parameters, generically denoted by \(\theta\);
  • the likelihood function, denoted by \(p(D | \theta)\); and
  • the prior distribution, denoted by \(p(\theta)\).

Now we add to that \(m\), which is a model index with \(m = 1\) standing for the first model, \(m = 2\) standing for the second model, and so on. So when we have more than one model in play, we might refer to the likelihood as \(p_m(y | \theta_m, m)\) and the prior as \(p_m(\theta_m | m)\). It’s also the case, then, that each model can be given a prior probability \(p(m)\).

“The Bayes factor (BF) is the ratio of the probabilities of the data in models 1 and 2” (p. 268).

This can be expressed simply as

\[\text{BF} = \frac{p(D | m = 1)}{p(D | m = 2)}.\]

Kruschke further explained that

one convention for converting the magnitude of the BF to a discrete decision about the models is that there is “substantial” evidence for model \(m = 1\) when the BF exceeds 3.0 and, equivalently, “substantial” evidence for model \(m = 2\) when the BF is less than 1/3 (Jeffreys, 1961; Kass & Raftery, 1995; Wetzels et al., 2011).

However, as with \(p\)-values, effect sizes, and so on, BF values exist along continua and should probably be evaluated in terms of degree rather than as discrete, ordered kinds.

10.2 Example: Two factories of coins

Kruschke considered the coin bias of two factories, each described by the beta distribution. We can organize how to derive the \(\alpha\) and \(\beta\) parameters from \(\omega\) and \(\kappa\) with a tibble.
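Code along these lines might do the conversion, using \(\alpha = \omega(\kappa - 2) + 1\) and \(\beta = (1 - \omega)(\kappa - 2) + 1\); the object name d is our choice, matching the tibble referenced below.

```r
library(tidyverse)

d <-
  tibble(factory = 1:2,
         omega   = c(.25, .75),
         kappa   = 12) %>% 
  mutate(alpha =      omega  * (kappa - 2) + 1,
         beta  = (1 - omega) * (kappa - 2) + 1)

d
```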

| factory | omega | kappa | alpha | beta |
|---------|-------|-------|-------|------|
| 1       | 0.25  | 12    | 3.5   | 8.5  |
| 2       | 0.75  | 12    | 8.5   | 3.5  |

Thus given \(\omega_1 = .25\), \(\omega_2 = .75\), and \(\kappa = 12\), we can describe the bias of the two coin factories as \(\text B_1 (3.5, 8.5)\) and \(\text B_2 (8.5, 3.5)\). With a little wrangling, we can use our d tibble to make the densities of Figure 10.2.

We might recreate the top panel with geom_col().

Consider the Bernoulli bar plots in the bottom panels of Figure 10.2. The heights of the bars are arbitrary and just intended to give a sense of the Bernoulli distribution. If we wanted the heights to correspond to the Beta distributions above them, we might do so like this.

But now

suppose we flip the coin nine times and get six heads. Given those data, what are the posterior probabilities of the coin coming from the head-biased or tail-biased factories? We will pursue the answer three ways: via formal analysis, grid approximation, and MCMC. (p. 270)

10.2.1 Solution by formal analysis.

Here we rehearse the fact that if we have a \(\operatorname{beta} (\theta | a, b)\) prior for \(\theta\) and a Bernoulli likelihood function, then the analytic solution for the posterior is \(\operatorname{beta} (\theta | z + a, N - z + b)\). Within this paradigm, if you would like to compute \(p(D | m)\), don’t use the following function. It suffers from underflow with large values.

This version is more robust.
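Here is a sketch of both versions. The function names and arguments (z, n, a, b) are our own choices; what they implement is the analytic marginal likelihood \(p(D | m) = B(z + a, N - z + b) / B(a, b)\).

```r
# the naive version: fine for small shape parameters, but beta() underflows
# once the shape parameters get large (e.g., beta(500, 500) is about e^-695)
p_d_naive <- function(z, n, a, b) {
  beta(z + a, n - z + b) / beta(a, b)
}

# the more robust version: work on the log scale with lbeta(), then exponentiate
p_d <- function(z, n, a, b) {
  exp(lbeta(z + a, n - z + b) - lbeta(a, b))
}
```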

You’d use it like this to compute \(p(D|m_1)\).
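Recall that \(m = 1\) is the tail-biased factory, \(\operatorname{beta} (\theta | 3.5, 8.5)\), and that we observed \(z = 6\) heads in \(N = 9\) flips.

```r
p_d(z = 6, n = 9, a = 3.5, b = 8.5)
```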

## [1] 0.0004993439

So to compute our BF, \(\frac{p(D|m_1)}{p(D|m_2)}\), you might use the p_d() function like this.
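Something like this, with the head-biased factory’s \(\operatorname{beta} (\theta | 8.5, 3.5)\) in the denominator.

```r
p_d(z = 6, n = 9, a = 3.5, b = 8.5) / p_d(z = 6, n = 9, a = 8.5, b = 3.5)
```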

## [1] 0.2135266

And if we computed the BF the other way, it’d look like this.

## [1] 4.683258

Since the BF itself is only \(\text{BF} = \frac{p(D | m = 1)}{p(D | m = 2)}\), we’d need to bring in the priors for the models themselves to get the posterior probabilities, which follows the form

\[\frac{p(m = 1 | D)}{p(m = 2 | D)} = \Bigg (\frac{p(D | m = 1)}{p(D | m = 2)} \Bigg ) \Bigg ( \frac{p(m = 1)}{p(m = 2)} \Bigg).\]

If for both our models \(p(m) = .5\), the prior odds are 1 and the posterior odds are just the BF.

## [1] 0.2135266

As Kruschke pointed out, because we’re working in the probability metric, the sum of \(p(m = 1 | D )\) and \(p(m = 2 | D )\) must be 1. By simple algebra then,

\[p(m = 2 | D ) = 1 - p(m = 1 | D ).\]

Therefore, it’s also the case that

\[\frac{p(m = 1 | D)}{1 - p(m = 1 | D)} = 0.2135266.\]

Thus, 0.2135266 is in an odds metric. If you want to convert odds to a probability, you follow the formula

\[\text{odds} = \frac{\text{probability}}{1 - \text{probability}}.\]

And with a little more algebraic manipulation, you can solve for the probability.

\[\begin{align*} \text{odds} & = \frac{\text{probability}}{1 - \text{probability}} \\ \text{odds} - \text{odds} \cdot \text{probability} & = \text{probability} \\ \text{odds} & = \text{probability} + \text{odds} \cdot \text{probability} \\ \text{odds} & = \text{probability} (1 + \text{odds}) \\ \frac{\text{odds}}{1 + \text{odds}} & = \text{probability} \end{align*}\]

Thus, the posterior probability for \(m = 1\) is

\[p(m = 1 | D) = \frac{0.2135266}{1 + 0.2135266}.\]

We can express that in code like so.
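A minimal sketch; the name odds is ours.

```r
odds <- p_d(z = 6, n = 9, a = 3.5, b = 8.5) / p_d(z = 6, n = 9, a = 8.5, b = 3.5)

odds / (1 + odds)
```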

## [1] 0.1759554

Relative to \(m = 2\), our posterior probability for \(m = 1\) is about .18. Therefore the posterior probability of \(m = 2\) is 1 minus that.

## [1] 0.8240446

Given the data, the two models, and the prior assumption that the models were equally credible, we conclude \(m = 2\) has a posterior probability of about .82.

10.2.2 Solution by grid approximation.

We won’t be able to make the wireframe plots on the left of Figure 10.3, but we can do some of the others. Here’s the upper right panel.

Building on that, here’s the upper middle panel of the “two [prior] dorsal fins” (p. 271).

This time we’ll separate \(p_{m = 1}(\theta)\) and \(p_{m = 2}(\theta)\) into the two short plots on the right of the next row down.

We can continue to build on those sensibilities for the middle panel of the same row. Here we’re literally adding \(p_{m = 1}(\theta)\) to \(p_{m = 2}(\theta)\) and taking their average.

We need the Bernoulli likelihood function for the next step.
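Here’s a sketch of that function, assuming data is a vector of 0s and 1s.

```r
bernoulli_likelihood <- function(theta, data) {
  # the likelihood of z heads in n trials, evaluated over a vector of theta values
  z <- sum(data)
  n <- length(data)
  
  theta^z * (1 - theta)^(n - z)
}
```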

Time to feed our data and the parameter space into bernoulli_likelihood(), which will allow us to make the 2-dimensional density plot at the heart of Figure 10.3.

Now we just need the marginal likelihood, \(p(D)\), to compute the posterior. Our first depiction will be the middle panel of the second row from the bottom, the panel with the uneven dolphin fins.

Here, then, is a way to get the panel on the right of the second row from the bottom.

To make the middle bottom panel of Figure 10.3, we have to average the posterior values of \(\theta\) over the grid of \(\omega\) values. That is, we have to marginalize.

For the lower right panel of Figure 10.3, we’ll filter() to our two focal values of \(\omega\) and then facet by them.

Do note the different scales on the \(y\). Here’s what they’d look like on the same scale.

Hopefully that helps build the intuition of what Kruschke meant when he wrote “visual inspection suggests that the ratio of the heights is about 5 to 1, which matches the Bayes factor of 4.68 that we computed exactly in the previous section” (p. 273, emphasis in the original).

Using the grid, you might compute that BF like this.
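Here’s a self-contained sketch that builds its own \(\theta\) grid and averages the likelihood over each of the two beta priors; the object and column names are our own.

```r
tibble(theta = seq(from = 0, to = 1, length.out = 1001)) %>% 
  mutate(prior_1    = dbeta(theta, 3.5, 8.5),
         prior_2    = dbeta(theta, 8.5, 3.5),
         likelihood = bernoulli_likelihood(theta = theta, 
                                           data  = rep(1:0, times = c(6, 3)))) %>% 
  # the grid spacing cancels out of the ratio
  summarise(BF = sum(likelihood * prior_2) / sum(likelihood * prior_1))
```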

## # A tibble: 1 x 1
##      BF
##   <dbl>
## 1  4.68

10.3 Solution by MCMC

Kruschke started with: “For large, complex models, we cannot derive \(p(D | m)\) analytically or with grid approximation, and therefore we will approximate the posterior probabilities using MCMC methods” (p. 274). He’s not kidding. Welcome to modern Bayes.

10.3.1 Nonhierarchical MCMC computation of each model’s marginal likelihood.

Before you get excited, Kruschke warned: “For complex models, this method might not be tractable. [But] for the simple application here, however, the method works well, as demonstrated in the next section” (p. 277).

10.3.1.1 Implementation with JAGS brms.

Load brms.
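No surprises here.

```r
library(brms)
```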

Let’s save the trial_data as a tibble.
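One way to encode \(z = 6\) heads in \(N = 9\) flips.

```r
trial_data <- tibble(y = rep(1:0, times = c(6, 3)))
```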

Time to learn a new brms skill. When you want to enter variables into the parameters defining priors in brms::brm(), you need to specify them using the stanvar() function. Since we want to do this for two variables, we’ll use stanvar() twice and save the results as an object, conveniently named stanvars.
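A sketch of that workflow, assuming the two variables in question are omega and kappa.

```r
omega <- .75
kappa <- 12

stanvars <- 
  stanvar(omega, name = "omega") + 
  stanvar(kappa, name = "kappa")
```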

Now we have our stanvars object, we are ready to fit the first model (i.e., the model for which \(\omega = .75\)).
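Here’s a sketch of the fit. The prior line converts omega and kappa to the beta shape parameters inside Stan; the seed and the iteration settings are assumptions chosen to line up with the 40,000 post-warmup samples reported below.

```r
fit1 <-
  brm(data   = trial_data, 
      family = bernoulli(link = identity),
      y ~ 1,
      prior  = prior(beta(omega * (kappa - 2) + 1, (1 - omega) * (kappa - 2) + 1),
                     class = Intercept),
      iter = 11000, warmup = 1000, chains = 4, cores = 4,
      seed = 10,
      stanvars = stanvars)
```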

Note how we fed our stanvars object into the stanvars argument within brm().

Anyway, let’s inspect the chains.

They look great. Now we glance at the model summary.

##  Family: bernoulli 
##   Links: mu = identity 
## Formula: y ~ 1 
##    Data: trial_data (Number of observations: 9) 
## Samples: 4 chains, each with iter = 11000; warmup = 1000; thin = 1;
##          total post-warmup samples = 40000
## 
## Population-Level Effects: 
##           Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## Intercept     0.69      0.10     0.48     0.86 1.00     8007     9306
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).

Next we’ll follow Kruschke and extract the posterior samples, saving them as theta.
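Something like this.

```r
theta <- posterior_samples(fit1)

head(theta)
```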

##   b_Intercept      lp__
## 1   0.7263424 -4.691665
## 2   0.7626307 -4.815941
## 3   0.7222605 -4.686314
## 4   0.8012281 -5.125448
## 5   0.7373272 -4.714354
## 6   0.6857306 -4.707360

The fixef() function will return the posterior summaries for the model intercept (i.e., \(\theta\)). We can then index and save the desired summaries.
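The names mean_theta and sd_theta are our own.

```r
fixef(fit1)

(mean_theta <- fixef(fit1)[1])
(sd_theta   <- fixef(fit1)[2])
```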

##           Estimate Est.Error      Q2.5    Q97.5
## Intercept 0.691113  0.098457 0.4820692 0.863507
## [1] 0.691113
## [1] 0.098457

Now we’ll convert them to the \(\alpha\) and \(\beta\) parameters, a_post and b_post, respectively.
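A sketch using the familiar mean-and-standard-deviation-to-beta conversion formulas.

```r
a_post <-      mean_theta  * (mean_theta * (1 - mean_theta) / sd_theta^2 - 1)
b_post <- (1 - mean_theta) * (mean_theta * (1 - mean_theta) / sd_theta^2 - 1)
```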

Recall we’ve already defined several values.

The reason we’re saving all these values is we’re aiming to compute \(p(D)\), the probability of the data (i.e., the marginal likelihood), given the model. But our intermediary step will be computing its reciprocal, \(\frac{1}{p(D)}\). Here we’ll express Kruschke’s oneOverPD as a function, one_over_pd().
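Here’s our hedged rendering of that function. For \(\theta\) drawn from the posterior, \(1 / p(D) = \operatorname{mean} \big[ h(\theta) / \big( p(D | \theta) \, p(\theta) \big) \big]\), where \(h(\theta)\) is a convenient probability density (here, the beta approximation to the posterior we just computed).

```r
# values we have already defined along the way
z <- 6
n <- 9

one_over_pd <- function(theta) {
  mean(dbeta(theta, a_post, b_post) /
         (theta^z * (1 - theta)^(n - z) *
            dbeta(theta, omega * (kappa - 2) + 1, (1 - omega) * (kappa - 2) + 1)))
}
```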

We’re ready to use one_over_pd() to help compute \(p(D)\).
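Like so.

```r
theta %>% 
  summarise(pd = 1 / one_over_pd(theta = b_Intercept))
```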

##            pd
## 1 0.002338466

That matches up nicely with Kruschke’s value! Let’s rinse, wash, and repeat for \(\omega = .25\). First, we’ll need to redefine omega and our stanvars.
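For example:

```r
omega <- .25

stanvars <- 
  stanvar(omega, name = "omega") + 
  stanvar(kappa, name = "kappa")
```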

Fit the model.
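A sketch mirroring fit1, but with the updated stanvars.

```r
fit2 <-
  brm(data   = trial_data, 
      family = bernoulli(link = identity),
      y ~ 1,
      prior  = prior(beta(omega * (kappa - 2) + 1, (1 - omega) * (kappa - 2) + 1),
                     class = Intercept),
      iter = 11000, warmup = 1000, chains = 4, cores = 4,
      seed = 10,
      stanvars = stanvars)
```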

We’ll do the rest in bulk.
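That is, re-extract the draws, recompute the beta approximation, and take the reciprocal of one_over_pd().

```r
theta <- posterior_samples(fit2)

mean_theta <- fixef(fit2)[1]
sd_theta   <- fixef(fit2)[2]

a_post <-      mean_theta  * (mean_theta * (1 - mean_theta) / sd_theta^2 - 1)
b_post <- (1 - mean_theta) * (mean_theta * (1 - mean_theta) / sd_theta^2 - 1)

theta %>% 
  summarise(pd = 1 / one_over_pd(theta = b_Intercept))
```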

##             pd
## 1 0.0004992476

Boom!

10.3.2 Hierarchical MCMC computation of relative model probability is not available in brms: We’ll cover information criteria instead.

I’m not aware of a way to specify a model “in which the top-level parameter is the index across models” in brms (p. 278). If you know of a way, share your code.

However, we do have options. We can compare and weight models using information criteria, about which you can learn more here. In brms, the LOO and WAIC are two primary information criteria available. You can compute them for a given model with the loo() and waic() functions, respectively. Here’s a quick example of how to use the waic() function.
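Like so.

```r
waic(fit1)
```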

## 
## Computed from 40000 by 9 log-likelihood matrix
## 
##           Estimate  SE
## elpd_waic     -6.2 1.3
## p_waic         0.5 0.1
## waic          12.5 2.7

We’ll explain that output in a bit. Before we do, you should know the current recommended workflow for information criteria with brms models is to use the add_criterion() function, which will allow us to compute information-criterion-related output and save it to our brms fit objects. Here’s how to do that with both our fits.
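Something like this.

```r
fit1 <- add_criterion(fit1, criterion = c("loo", "waic"))
fit2 <- add_criterion(fit2, criterion = c("loo", "waic"))
```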

You can extract the same WAIC output for fit1 we saw above by executing fit1$waic. Here we look at the LOO summary for fit1, instead.

## 
## Computed from 40000 by 9 log-likelihood matrix
## 
##          Estimate  SE
## elpd_loo     -6.2 1.3
## p_loo         0.5 0.1
## looic        12.5 2.7
## ------
## Monte Carlo SE of elpd_loo is 0.0.
## 
## All Pareto k estimates are good (k < 0.5).
## See help('pareto-k-diagnostic') for details.

You get a wealth of output, more of which can be seen by executing str(fit1$loo). First, notice the message “All Pareto k estimates are good (k < 0.5).” Pareto \(k\) values can be used for diagnostics. Each case in the data gets its own \(k\) value and we like it when those \(k\)s are low. The makers of the loo package get worried when \(k\) values exceed 0.7 and, as a result, we will get warning messages when they do. Happily, we have no such warning messages in this example.

In the main section, we get estimates for the expected log predictive density (elpd_loo), the estimated effective number of parameters (p_loo), and the Pareto smoothed importance-sampling leave-one-out cross-validation (PSIS-LOO; looic). Each estimate comes with a standard error (i.e., SE). Like other information criteria, the LOO values aren’t of interest in and of themselves. However, the estimate of one model’s LOO relative to that of another is of great interest. We generally prefer models with lower information criteria. With the loo_compare() function, we can compute a formal difference score between two models.
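Like so.

```r
loo_compare(fit1, fit2, criterion = "loo")
```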

##      elpd_diff se_diff
## fit1  0.0       0.0   
## fit2 -0.8       1.7

The loo_compare() output rank orders the models such that the best fitting model appears on top. All models receive a difference score relative to the best model. Here the best fitting model is fit1 and since the LOO for fit1 minus itself is zero, the values in the top row are all zero.

Each difference score also comes with a standard error. In this case, even though fit1 had the better estimate, the standard error of the difference was about twice the magnitude of the difference score itself. So the LOO difference score puts the two models on similar footing. You can do a similar analysis with the WAIC estimates.

In addition to difference-score comparisons, you can also use the LOO or WAIC for AIC-type model weighting. In brms, you do this with the model_weights() function.
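Like this.

```r
model_weights(fit1, fit2)
```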

##     fit1     fit2 
## 0.830191 0.169809

I don’t know that I’d call these weights probabilities, but they do sum to one. In this case, the analysis suggests we put about five times more weight on fit1 than on fit2.
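Here’s that ratio; the name w is ours.

```r
w <- model_weights(fit1, fit2)

w[1] / w[2]
```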

##     fit1 
## 4.888969

With brms::model_weights(), we have a variety of weighting schemes available to us. Since we didn’t specify any in the weights argument, we used the default "loo2", which is, perhaps confusingly given the name, the stacking method from the paper by Yao, Vehtari, Simpson, and Gelman. Vehtari has written about the paper on Gelman’s blog, too. But anyway, the point is that different weighting schemes might not produce the same results. For example, here’s the result from weighting using the WAIC.
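Something like this.

```r
model_weights(fit1, fit2, weights = "waic")
```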

##      fit1      fit2 
## 0.6967995 0.3032005

The results are similar, for sure. But they’re not the same. The stacking method via the brms default weights = "loo2" is the current preferred method by the folks on the Stan team (e.g., the authors of the above linked paper).

For more on stacking and other weighting schemes, see Vehtari and Gabry’s vignette Bayesian Stacking and Pseudo-BMA weights using the loo package or Vehtari’s modelselection_tutorial GitHub repository. But don’t worry. We will have more opportunities to practice with information criteria, model weights, and such later in this project.

10.3.2.1 Using [No need to use] pseudo-priors to reduce autocorrelation.

Since we didn’t use Kruschke’s method from the last subsection, we don’t have the same worry about autocorrelation. For example, here are the autocorrelation plots for fit1.
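A sketch with bayesplot::mcmc_acf(); the object name post is ours.

```r
library(bayesplot)

post <- posterior_samples(fit1, add_chain = TRUE)

mcmc_acf(post, pars = "b_Intercept", lags = 35)
```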

Our autocorrelations were a little high for HMC, but nowhere near pathological. The results for fit2 were similar. As you might imagine from the moderate autocorrelations, the \(N_{eff}/N\) ratio for b_Intercept wasn’t great.
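You might check it like this.

```r
neff_ratio(fit1)["b_Intercept"]
```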

But we specified a lot of post-warmup iterations, so we’re still in good shape. Plus, the \(\hat{R}\) was fine.
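E.g.,

```r
rhat(fit1)["b_Intercept"]
```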

## b_Intercept 
##    1.000613

10.3.3 Models with different “noise” distributions in JAGS brms.

Probability distribution[s are] sometimes [called “noise”] distribution[s] because [they describe] the random variability of the data values around the underlying trend. In more general applications, different models can have different noise distributions. For example, one model might describe the data as log-normal distributed, while another model might describe the data as gamma distributed. (p. 288)

If there is more than one plausible noise distribution for our data, we might want to compare the models. In the text, Kruschke demonstrated a general trick for this with JAGS code.

I’m not aware that we can do this within the Stan/brms framework. If I’m in error and you know how, please share your code. However, we do have options. In anticipation of Chapter 16, let’s consider Gaussian-like data with thick tails. We might generate some like this:
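A sketch of one way to simulate such data, drawing from a Student-\(t\) distribution with \(\nu = 7\), \(\mu = 0\), and \(\sigma = 1\); the seed is an arbitrary choice, so your draws will differ from the ones shown below.

```r
set.seed(10)

d <- tibble(y = rt(n = 1000, df = 7))

d
```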

## # A tibble: 1,000 x 1
##          y
##      <dbl>
##  1  0.0214
##  2 -0.987 
##  3  0.646 
##  4 -0.237 
##  5  0.977 
##  6 -0.200 
##  7  0.781 
##  8 -1.09  
##  9  1.83  
## 10 -0.682 
## # … with 990 more rows

The resulting data look like this.

As you’d expect with a small-\(\nu\) Student’s \(t\), some of our values are quite distinct from the central clump. If you don’t recall, Student’s \(t\)-distribution has three parameters: \(\nu\), \(\mu\), and \(\sigma\). The Gaussian is a special case of Student’s \(t\) for which \(\nu = \infty\). When \(\nu\) gets small, the consequence is the distribution allocates more mass in the tails. From a Gaussian perspective, the small-\(\nu\) Student’s \(t\) expects more outliers, though it’s a little odd calling them outliers from a small-\(\nu\) Student’s \(t\) perspective.

Let’s see how well the Gaussian versus the Student’s \(t\) likelihoods handle the data. Here we’ll use fairly liberal priors.
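Here’s a sketch of the two fits. The normal(0, 10) priors stand in for whatever “fairly liberal” priors you like, and the nu parameter in fit4 keeps the brms default; the seed and chain settings are assumptions.

```r
fit3 <-
  brm(data = d, 
      family = gaussian,
      y ~ 1,
      prior = c(prior(normal(0, 10), class = Intercept),
                prior(normal(0, 10), class = sigma)),
      chains = 4, cores = 4,
      seed = 10)

fit4 <-
  brm(data = d, 
      family = student,
      y ~ 1,
      prior = c(prior(normal(0, 10), class = Intercept),
                prior(normal(0, 10), class = sigma)),
      chains = 4, cores = 4,
      seed = 10)
```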

In case you were curious, here’s what that default gamma(2, 0.1) prior on nu looks like.
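A quick way to plot it.

```r
tibble(nu = seq(from = 0, to = 120, by = .5)) %>% 
  mutate(density = dgamma(nu, shape = 2, rate = 0.1)) %>% 
  ggplot(aes(x = nu, y = density)) +
  geom_area(fill = "grey50") +
  xlab(expression(nu))
```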

That prior puts most of the probability mass below 50, but the right tail gently fades off into the triple digits, allowing for the possibility of larger estimates.

We can use the posterior_summary() function to get a compact look at the model summaries.
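Like so.

```r
posterior_summary(fit3) %>% round(digits = 2)
posterior_summary(fit4) %>% round(digits = 2)
```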

##             Estimate Est.Error     Q2.5    Q97.5
## b_Intercept    -0.03      0.04    -0.11     0.05
## sigma           1.25      0.03     1.20     1.31
## lp__        -1646.97      0.98 -1649.49 -1646.02
##             Estimate Est.Error     Q2.5    Q97.5
## b_Intercept    -0.01      0.04    -0.08     0.06
## sigma           0.98      0.04     0.90     1.05
## nu              5.76      1.02     4.12     8.06
## lp__        -1590.50      1.26 -1593.76 -1589.07

Now we can compare the two approaches using information criteria. For kicks, we’ll use the WAIC.
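Something like this.

```r
fit3 <- add_criterion(fit3, criterion = "waic")
fit4 <- add_criterion(fit4, criterion = "waic")

loo_compare(fit3, fit4, criterion = "waic")
```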

##      elpd_diff se_diff
## fit4   0.0       0.0  
## fit3 -60.3      40.1

Based on the WAIC difference, we have some support for preferring the Student’s \(t\), but do notice how wide that SE was. We can also compare the models using model weights. Here we’ll use the default weighting scheme.
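Like this.

```r
model_weights(fit3, fit4)
```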

##       fit3       fit4 
## 0.03221235 0.96778765

Virtually all of the stacking weight was placed on the Student’s-\(t\) model, fit4.

Remember what that \(p(\nu)\) looked like? Here’s our posterior distribution for \(\nu\).
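A sketch of such a plot.

```r
posterior_samples(fit4) %>% 
  ggplot(aes(x = nu)) +
  geom_density(fill = "grey50", color = "transparent") +
  xlab(expression(nu))
```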

Even though our prior for \(\nu\) was relatively weak, the posterior ended up concentrated on values in the middle-single-digit range. Recall the data-generating value was 7.

We can also compare the models using posterior-predictive checks. There are a variety of ways we might do this, but the most convenient way is with brms::pp_check(), which is itself a wrapper for the family of ppc functions from the bayesplot package.
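E.g.,

```r
pp_check(fit3)
pp_check(fit4)
```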

The default pp_check() setting allows us to compare the density of the data \(y\) (i.e., the dark blue) with 10 densities simulated from the posterior, \(y_\text{rep}\) (i.e., the light blue). We prefer models that produce \(y_\text{rep}\) distributions resembling \(y\). Though the results from both models were similar, the simulated distributions from fit4 mimicked the original data a little more convincingly. To learn more about this approach, check out Gabry’s vignette Graphical posterior predictive checks using the bayesplot package.

10.4 Prediction: Model averaging

In many applications of model comparison, the analyst wants to identify the best model and then base predictions of future data on that single best model, denoted with index \(b\). In this case, predictions of future \(\hat{y}\) are based exclusively on the likelihood function \(p_b(\hat{y} | \theta_b, m = b)\) and the posterior distribution \(p_b(\theta_b | D, m = b)\) of the winning model:

\[p_b(\hat y | D, m = b) = \int \text d \theta_b p_b (\hat{y} | \theta_b, m = b) p_b(\theta_b | D, m = b)\]

But the full model of the data is actually the complete hierarchical structure that spans all the models being compared, as indicated in Figure 10.1 (p. 267). Therefore, if the hierarchical structure really expresses our prior beliefs, then the most complete prediction of future data takes into account all the models, weighted by their posterior credibilities. In other words, we take a weighted average across the models, with the weights being the posterior probabilities of the models. Instead of conditionalizing on the winning model, we have

\[\begin{align*} p (\hat y | D) & = \sum_m p (\hat y | D, m) p (m | D) \\ & = \sum_m \int \text d \theta_m p_m (\hat{y} | \theta_m, m) p_m(\theta_m | D, m) p (m | D) \end{align*}\]

This is called model averaging. (p. 289)

Okay, while the concept of model averaging is of great interest, we aren’t going to be able to follow this particular approach within the Stan/brms paradigm. This, recall, is because our paradigm doesn’t allow for a hierarchical organization of models in the same way JAGS does. However, we can still play the model averaging game with extensions of our model weighting paradigm, above. Before we get into the details,

recall that there were two models of mints that created the coin, with one mint being tail-biased with mode \(\omega = 0.25\) and one mint being head-biased with mode \(\omega = 0.75\). The two subpanels in the lower-right illustrate the posterior distributions on \(\theta\) within each model, \(p(\theta | D, \omega = 0.25)\) and \(p(\theta | D, \omega = 0.75)\). The winning model was \(\omega = 0.75\), and therefore the predicted value of future data, based on the winning model alone, would use \(p(\theta | D, \omega = 0.75)\). (p. 289)

That is, the posterior for fit1.

But the overall model also included \(\omega = 0.25\), and if we use the overall model, then the predicted value of future data should be based on the complete posterior summed across values of \(\omega\). The complete posterior distribution [is] \(p(\theta | D)\) (p. 289).

The cool thing about the model weighting stuff we learned about earlier is that you can use those model weights to average across models. Again, we’re not weighting the models by posterior probabilities the way Kruschke discussed in the text. However, the spirit is similar. We can use the brms::pp_average() function to make posterior predictions from a mixture of the models, weighted by our chosen weighting scheme. Here, we’ll go with the default stacking weights.
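Here’s a sketch. The one-row nd data frame and the name pp_a are our own conveniences, and method = "fitted" is our assumption for returning draws of \(\theta\) rather than simulated 0/1 outcomes.

```r
nd <- tibble(y = 1)

pp_a <-
  pp_average(fit1, fit2,
             newdata = nd,
             method  = "fitted",
             summary = FALSE) %>% 
  as_tibble() %>% 
  set_names("theta")

head(pp_a)
```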

## # A tibble: 6 x 1
##   theta
##   <dbl>
## 1 0.717
## 2 0.755
## 3 0.908
## 4 0.689
## 5 0.666
## 6 0.765

We can plot our model-averaged \(\theta\) with a little help from good old tidybayes::stat_pointintervalh().
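A sketch, reusing the pp_a draws from above.

```r
library(tidybayes)

pp_a %>% 
  ggplot(aes(x = theta)) +
  geom_density(fill = "grey67", color = "transparent") +
  stat_pointintervalh(aes(y = 0), point_interval = mode_hdi, .width = .95) +
  scale_y_continuous(NULL, breaks = NULL) +
  coord_cartesian(xlim = 0:1) +
  xlab(expression(theta))
```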

As Kruschke concluded, “you can see the contribution of \(p(\theta | D, \omega = 0.25)\) as the extended leftward tail” (p. 289). Interestingly enough, that looks a lot like the density we made with grid approximation in Figure 10.3, doesn’t it?

10.5 Model complexity naturally accounted for

A complex model (usually) has an inherent advantage over a simpler model because the complex model can find some combination of its parameter values that match the data better than the simpler model. There are so many more parameter options in the complex model that one of those options is likely to fit the data better than any of the fewer options in the simpler model. The problem is that data are contaminated by random noise, and we do not want to always choose the more complex model merely because it can better fit noise. Without some way of accounting for model complexity, the presence of noise in data will tend to favor the complex model.

Bayesian model comparison compensates for model complexity by the fact that each model must have a prior distribution over its parameters, and more complex models must dilute their prior distributions over larger parameter spaces than simpler models. Thus, even if a complex model has some particular combination of parameter values that fit the data well, the prior probability of that particular combination must be small because the prior is spread thinly over the broad parameter space. (pp. 289–290)

Now our two models are:

  • \(p(\theta | D, \kappa = 1000)\) (i.e., the “must-be-fair” model) and
  • \(p(\theta | D, \kappa = 2)\) (i.e., the “anything’s-possible” model).

They look like this.

Here’s how you might compute the \(\alpha\) and \(\beta\) values for the corresponding Beta distributions.
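Using the same conversion formulas from before.

```r
tibble(omega = .5,
       kappa = c(1000, 2),
       model = c("The must-be-fair model", "The anything's-possible model")) %>% 
  mutate(alpha =      omega  * (kappa - 2) + 1,
         beta  = (1 - omega) * (kappa - 2) + 1)
```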

## # A tibble: 2 x 5
##   omega kappa model                         alpha  beta
##   <dbl> <dbl> <chr>                         <dbl> <dbl>
## 1   0.5  1000 The must-be-fair model          500   500
## 2   0.5     2 The anything's-possible model     1     1

With those in hand, we can use our p_d() function to compute the Bayes factor based on flipping a coin \(N = 20\) times and observing \(z = 15\) heads.
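With the must-be-fair model in the numerator, that might look like this.

```r
p_d(z = 15, n = 20, a = 500, b = 500) / p_d(z = 15, n = 20, a = 1, b = 1)
```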

## [1] 0.3229023

Let’s try again, this time supposing we observe \(z = 11\) heads out of \(N = 20\) coin flips.
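Same form, new data.

```r
p_d(z = 11, n = 20, a = 500, b = 500) / p_d(z = 11, n = 20, a = 1, b = 1)
```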

## [1] 3.337148

The anything’s-possible model loses because it pays the price of having a small prior probability on the values of \(\theta\) near the data proportion, while the must-be-fair model has large prior probability on \(\theta\) values sufficiently near the data proportion to be credible. Thus, in Bayesian model comparison, a simpler model can win if the data are consistent with it, even if the complex model fits just as well. The complex model pays the price of having small prior probability on parameter values that describe simple data. (p. 291)

10.5.1 Caveats regarding nested model comparison.

A frequently encountered special case of comparing models of different complexity occurs when one model is “nested” within the other. Consider a model that implements all the meaningful parameters we can contemplate for the particular application. We call that the full model. We might consider various restrictions of those parameters, such as setting some of them to zero, or forcing some to be equal to each other. A model with such a restriction is said to be nested within the full model. (p. 291)

Kruschke didn’t walk out the examples in this section. But for the sake of practice, let’s work through the first one. “Recall the hierarchical model of baseball batting abilities” from Chapter 9 (p. 291). Let’s reload those data.
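A sketch, assuming Kruschke’s BattingAverage.csv file is saved in a data.R subfolder; adjust the path to wherever you keep the file.

```r
my_data <- read_csv("data.R/BattingAverage.csv")

glimpse(my_data)
```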

## Observations: 948
## Variables: 6
## $ Player       <chr> "Fernando Abad", "Bobby Abreu", "Tony Abreu", "Dust…
## $ PriPos       <chr> "Pitcher", "Left Field", "2nd Base", "2nd Base", "1…
## $ Hits         <dbl> 1, 53, 18, 137, 21, 0, 0, 2, 150, 167, 0, 128, 66, …
## $ AtBats       <dbl> 7, 219, 70, 607, 86, 1, 1, 20, 549, 576, 1, 525, 27…
## $ PlayerNumber <dbl> 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, …
## $ PriPosNumber <dbl> 1, 7, 4, 4, 3, 1, 1, 3, 3, 4, 1, 5, 4, 2, 7, 4, 6, …

“The full model has a distinct modal batting ability, \(\omega_c\) , for each of the nine fielding positions. The full model also has distinct concentration parameters for each of the nine positions” (p. 291). Let’s fit that model again.
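Here’s a sketch along the lines of the hierarchical aggregated-binomial model from Chapter 9. The specific priors and sampler settings are assumptions, not a record of the original fit.

```r
fit5 <-
  brm(data = my_data,
      family = binomial(link = logit),
      Hits | trials(AtBats) ~ 1 + (1 | PriPos) + (1 | PriPos:Player),
      prior = c(prior(normal(0, 1.5), class = Intercept),
                prior(normal(0, 1), class = sd)),
      iter = 3500, warmup = 500, chains = 3, cores = 3,
      control = list(adapt_delta = .99),
      seed = 10)
```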

Next we’ll consider a restricted version of fit5 “in which all infielders (first base, second base, etc.) are grouped together versus all outfielders (right field, center field, and left field). In this restricted model, we are forcing the modal batting abilities of all the outfielders to be the same, that is, \(\omega_\text{left field} = \omega_\text{center field} = \omega_\text{right field}\)” (p. 291). To fit that model, we’ll need to make a new variable PriPos_small which is identical to its parent variable PriPos except that it collapses those three positions into our new category Outfield.
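One way to make that variable.

```r
my_data <-
  my_data %>% 
  mutate(PriPos_small = if_else(PriPos %in% c("Left Field", "Center Field", "Right Field"),
                                "Outfield", PriPos))
```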

Now use update() to fit the restricted model.
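A sketch; note the formula. argument and the newdata statement, which brings in the new PriPos_small column.

```r
fit6 <-
  update(fit5,
         formula. = Hits | trials(AtBats) ~ 1 + (1 | PriPos_small) + (1 | PriPos_small:Player),
         newdata = my_data,
         seed = 10)
```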

Unlike with what Kruschke alluded to in the prose, here we’ll compare the two models with the LOO information criteria.
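Something like this.

```r
fit5 <- add_criterion(fit5, criterion = "loo")
fit6 <- add_criterion(fit6, criterion = "loo")

loo_compare(fit5, fit6)
```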

##      elpd_diff se_diff
## fit5  0.0       0.0   
## fit6 -2.2       2.1

Based on the LOO difference score, they’re near equivalent. Now let’s see how their model weights shake out. Here we’ll continue to use the default stacking method.
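Like so.

```r
model_weights(fit5, fit6)
```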

##        fit5        fit6 
## 0.997251197 0.002748803

Though nearly all of the stacking weight went to the full model, fit5, the LOO comparison suggested the two models were near equivalent. “Does that mean we should believe that [these positions] have literally identical batting abilities? Probably not” (p. 292). It’s good to be cautious of unnecessary model expansion, but we should use good substantive reasoning, too. Just because you can restrict a model, that doesn’t necessarily mean doing so leads to better science.

10.6 Extreme sensitivity to the prior distribution

In many realistic applications of Bayesian model comparison, the theoretical emphasis is on the difference between the models’ likelihood functions. For example, one theory predicts planetary motions based on elliptical orbits around the sun, and another theory predicts planetary motions based on circular cycles and epicycles around the earth. The two models involve very different parameters. In these sorts of models, the form of the prior distribution on the parameters is not a focus, and is often an afterthought. But, when doing Bayesian model comparison, the form of the prior is crucial because the Bayes factor integrates the likelihood function weighted by the prior distribution. (p. 292)

However, “the sensitivity of Bayes factors to prior distributions is well known in the literature (e.g., Kass & Raftery, 1995; Liu & Aitkin, 2008; Vanpaemel, 2010),” and furthermore, when comparing Bayesian models using the methods Kruschke outlined in this chapter of the text, “different forms of vague priors can yield very different Bayes factors” (p. 293).

In the two BFs to follow, we compare the must-be-fair model and the anything’s-possible model from Section 10.5 using new data: \(z = 65, N = 100\).
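Here’s the first, with the must-be-fair model in the numerator and the beta(1, 1) anything’s-possible model in the denominator.

```r
p_d(z = 65, n = 100, a = 500, b = 500) / p_d(z = 65, n = 100, a = 1, b = 1)
```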

## [1] 0.125287

The resulting 0.13 favored the anything’s-possible model.

Another way to express the anything’s-possible model is with the Haldane prior, which sets the two parameters within the beta distribution to be a) equivalent and b) quite small (i.e., 0.01 in this case).
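Here’s the corresponding BF.

```r
p_d(z = 65, n = 100, a = 500, b = 500) / p_d(z = 65, n = 100, a = 0.01, b = 0.01)
```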

## [1] 5.728066

Now we’ve flipped to favoring the must-be-fair model. You might be asking, Wait, what kind of distribution did that Haldane prior produce? Here we compare it to the Beta(1, 1).
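A sketch of such a comparison; we chop off the extreme ends of the \(\theta\) grid because the Haldane density is unbounded at 0 and 1.

```r
tibble(theta = seq(from = .00001, to = .99999, length.out = 1000)) %>%
  mutate(`Haldane, beta(0.01, 0.01)` = dbeta(theta, 0.01, 0.01),
         `beta(1, 1)`                = dbeta(theta, 1, 1)) %>% 
  pivot_longer(-theta, names_to = "prior", values_to = "density") %>% 
  ggplot(aes(x = theta, y = density)) +
  geom_area(fill = "grey50") +
  facet_wrap(~ prior, scales = "free_y")
```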

Before we can complete the analyses of this subsection, we’ll need to define our version of Kruschke’s HDIofICDF() function, hdi_of_icdf(). Like we’ve done in previous chapters, here we mildly reformat the function.
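Our hedged rendering.

```r
hdi_of_icdf <- function(name, width = .95, tol = 1e-8, ...) {
  # `name` is an inverse cumulative distribution function (e.g., qbeta) and `...`
  # holds its parameters; we search for the lower-tail probability that minimizes
  # the width of the interval containing `width` probability mass
  incredible_mass <- 1.0 - width
  interval_width  <- function(low_tail_prob, name, width, ...) {
    name(width + low_tail_prob, ...) - name(low_tail_prob, ...)
  }
  opt_info <- optimize(interval_width, c(0, incredible_mass), 
                       name = name, width = width, tol = tol, ...)
  hdi_lower_tail_prob <- opt_info$minimum
  
  c(name(hdi_lower_tail_prob, ...),
    name(width + hdi_lower_tail_prob, ...))
}
```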

And here we’ll make a custom variant to be more useful within the context of map2().
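Here’s one way to wrap it so it returns a little data frame with ll and ul columns, which plays nicely with map2() and unnest().

```r
hdi_of_qbeta <- function(shape1, shape2) {
  h <- hdi_of_icdf(name = qbeta, shape1 = shape1, shape2 = shape2)
  
  tibble(ll = h[1], ul = h[2])
}
```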

Recall that when we combine a \(\text{Beta} (\theta | \alpha, \beta)\) prior with the results of a Bernoulli likelihood, we get a posterior defined by \(\text{Beta} (\theta | z + \alpha, N - z + \beta)\).
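Here are the updated parameters for the two priors; the object name d is ours.

```r
d <-
  tibble(model   = c("Uniform", "Haldane"),
         prior_a = c(1, 0.01),
         prior_b = c(1, 0.01)) %>% 
  mutate(posterior_a = prior_a + 65,
         posterior_b = prior_b + 100 - 65)

d
```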

## # A tibble: 2 x 5
##   model   prior_a prior_b posterior_a posterior_b
##   <chr>     <dbl>   <dbl>       <dbl>       <dbl>
## 1 Uniform    1       1           66          36  
## 2 Haldane    0.01    0.01        65.0        35.0

Now we’ll use our custom hdi_of_qbeta() to compute the HDIs.
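Like so.

```r
d <-
  d %>% 
  mutate(hdi = map2(posterior_a, posterior_b, hdi_of_qbeta)) %>% 
  unnest(hdi)

d
```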

## # A tibble: 2 x 7
##   model   prior_a prior_b posterior_a posterior_b    ll    ul
##   <chr>     <dbl>   <dbl>       <dbl>       <dbl> <dbl> <dbl>
## 1 Uniform    1       1           66          36   0.554 0.738
## 2 Haldane    0.01    0.01        65.0        35.0 0.556 0.742

Let’s compare those HDIs in a plot.

“The HDIs are virtually identical. In particular, for either prior, the posterior distribution rules out \(\theta = 0.5\), which is to say that the must-be-fair hypothesis is not among the credible values” (p. 294).

10.6.1 Priors of different models should be equally informed.

“We have established that seemingly innocuous changes in the vagueness of a vague prior can dramatically change a model’s marginal likelihood, and hence its Bayes factor in comparison with other models. What can be done to ameliorate the problem” (p. 294)? Kruschke suggested one remedy might be to take a small, representative portion of the data in hand and use it to make an empirically informed prior for the remaining data. From our previous example, “suppose that the 10% subset has 6 heads in 10 flips, so the remaining 90% of the data has \(z = 65 − 6\) and \(N = 100 − 10\)” (p. 294).

Here are the new Bayes factors based on that method.
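Here’s a sketch of that computation: each prior gets updated by the 10% subset (\(z = 6\), \(N = 10\)), and the BFs are then computed on the remaining 90% of the data.

```r
# must-be-fair versus the updated beta(1, 1) prior
p_d(z = 65 - 6, n = 100 - 10, a = 500 + 6, b = 500 + 4) / 
  p_d(z = 65 - 6, n = 100 - 10, a = 1 + 6, b = 1 + 4)

# must-be-fair versus the updated Haldane prior
p_d(z = 65 - 6, n = 100 - 10, a = 500 + 6, b = 500 + 4) / 
  p_d(z = 65 - 6, n = 100 - 10, a = 0.01 + 6, b = 0.01 + 4)
```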

## [1] 0.05570509
## [1] 0.05748123

Now the two Bayes Factors are nearly the same.

It’s not in the text, but let’s compare these three models using brms, information criteria, model weights, model averaging, and posterior predictive checks. First, we’ll save the \(z\) and \(N\) information as a tibble with a series of 0s and 1s.
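Like this.

```r
trial_data <- tibble(y = rep(0:1, times = c(100 - 65, 65)))

glimpse(trial_data)
```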

## Observations: 100
## Variables: 1
## $ y <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…

Next, fit the three models with brms::brm().
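Here’s a sketch of the three fits, with the must-be-fair beta(500, 500) prior for fit7 and the two anything’s-possible priors for fit8 and fit9; the sampler settings and seed are assumptions.

```r
fit7 <-
  brm(data = trial_data, 
      family = bernoulli(link = identity),
      y ~ 1,
      prior = prior(beta(500, 500), class = Intercept),
      iter = 11000, warmup = 1000, chains = 4, cores = 4,
      seed = 10)

fit8 <-
  update(fit7,
         prior = prior(beta(1, 1), class = Intercept),
         seed = 10)

fit9 <-
  update(fit7,
         prior = prior(beta(0.01, 0.01), class = Intercept),
         seed = 10)
```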

Compare the models by the LOO.
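E.g.,

```r
fit7 <- add_criterion(fit7, criterion = "loo")
fit8 <- add_criterion(fit8, criterion = "loo")
fit9 <- add_criterion(fit9, criterion = "loo")

loo_compare(fit7, fit8, fit9)
```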

##      elpd_diff se_diff
## fit8  0.0       0.0   
## fit9  0.0       0.1   
## fit7 -2.9       2.7

Based on the LOO comparisons, none of the three models was a clear favorite. Although both versions of the anything’s-possible model (i.e., fit8 and fit9) had lower numeric estimates than the must-be-fair model (i.e., fit7), the standard errors on the difference scores were the same magnitude as the difference estimates themselves. As for comparing the two variants of the anything’s-possible model directly, their LOO estimates were almost indistinguishable.

Now let’s see what happens when we compute their model weights.
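Saving the results as mw will come in handy in just a bit.

```r
(mw <- model_weights(fit7, fit8, fit9))
```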

##      fit7      fit8      fit9 
## 0.1237299 0.3254725 0.5507977

If that’s hard to read, just round().
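Like so.

```r
round(mw, digits = 2)
```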

## fit7 fit8 fit9 
## 0.12 0.33 0.55

Here most of the stacking weight went to fit8, the model with the Beta(1, 1) prior.

Like we did earlier with fit1 and fit2, we can use the pp_average() function to compute the stacking weighted posterior for \(\theta\).
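A sketch, again leaning on a hypothetical one-row nd data frame and method = "fitted".

```r
nd <- tibble(y = 1)

pp_average(fit7, fit8, fit9,
           newdata = nd,
           weights = mw,
           method  = "fitted",
           summary = FALSE) %>% 
  as_tibble() %>% 
  set_names("theta") %>% 
  ggplot(aes(x = theta)) +
  geom_density(fill = "grey50", color = "transparent") +
  coord_cartesian(xlim = 0:1) +
  xlab(expression(theta))
```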

Did you notice the weights = mw argument, there? From the pp_average section of the brms reference manual (version 2.10.0), we read: “Alternatively, weights can be a numeric vector of pre-specified weights.” Since we saved the results of model_weights() as an object mw, we were able to capitalize on that feature. If you leave out that argument, you’ll have to wait a bit for brms to compute those weights again from scratch.

And just for the sake of practice, we can also compare the models with separate posterior predictive checks using pp_check().
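A sketch; type = "bars" is our guess at a check that summarizes binary data with dots and error bars, and nsamples = 1000 bumps up the number of simulations.

```r
pp_check(fit7, type = "bars", nsamples = 1000) + ylim(0, 80)
pp_check(fit8, type = "bars", nsamples = 1000) + ylim(0, 80)
pp_check(fit9, type = "bars", nsamples = 1000) + ylim(0, 80)
```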

Instead of the default 10, this time we used 1,000 posterior simulations from each fit, which we summarized with dots and error bars. This method did a great job showing how little fit7 learned from the data. Another nice thing about this method is it reveals how similar the results are between fit8 and fit9, the two alternate versions of the anything’s-possible model. Also, did you notice how we tacked ylim(0, 80) onto the end of each plot’s code? Holding the scale of the axes constant makes it easier to compare results across plots.

10.7 Bonus: There’s danger ahead

If you’re new to model comparison with Bayes factors, information criteria, model stacking and so on, you should know these methods are still subject to spirited debate amongst scholars. For a recent example, see Gronau and Wagenmakers’ (2019) Limitations of Bayesian leave-one-out cross-validation for model selection, which criticized the LOO. Their paper was commented on by Navarro (2019); Chandramouli and Shiffrin (2019); and Vehtari, Simpson, Yao, and Gelman (2019). You can find Gronau and Wagenmakers’ (2019) rejoinder here.

And if you love those hot scholarly twitter discussions, these topics seem to spawn one every few months or so (e.g., here).

Session info

## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS High Sierra 10.13.6
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] tidybayes_1.1.0 bayesplot_1.7.0 brms_2.10.3     Rcpp_1.0.2     
##  [5] patchwork_1.0.0 forcats_0.4.0   stringr_1.4.0   dplyr_0.8.3    
##  [9] purrr_0.3.3     readr_1.3.1     tidyr_1.0.0     tibble_2.1.3   
## [13] ggplot2_3.2.1   tidyverse_1.2.1
## 
## loaded via a namespace (and not attached):
##   [1] colorspace_1.4-1          ggridges_0.5.1           
##   [3] rsconnect_0.8.15          ggstance_0.3.2           
##   [5] markdown_1.1              base64enc_0.1-3          
##   [7] rstudioapi_0.10           rstan_2.19.2             
##   [9] svUnit_0.7-12             DT_0.9                   
##  [11] fansi_0.4.0               lubridate_1.7.4          
##  [13] xml2_1.2.0                bridgesampling_0.7-2     
##  [15] knitr_1.23                shinythemes_1.1.2        
##  [17] zeallot_0.1.0             jsonlite_1.6             
##  [19] broom_0.5.2               shiny_1.3.2              
##  [21] compiler_3.6.0            httr_1.4.0               
##  [23] backports_1.1.5           assertthat_0.2.1         
##  [25] Matrix_1.2-17             lazyeval_0.2.2           
##  [27] cli_1.1.0                 later_1.0.0              
##  [29] htmltools_0.4.0           prettyunits_1.0.2        
##  [31] tools_3.6.0               igraph_1.2.4.1           
##  [33] coda_0.19-3               gtable_0.3.0             
##  [35] glue_1.3.1.9000           reshape2_1.4.3           
##  [37] cellranger_1.1.0          vctrs_0.2.0              
##  [39] nlme_3.1-139              crosstalk_1.0.0          
##  [41] xfun_0.10                 ps_1.3.0                 
##  [43] rvest_0.3.4               mime_0.7                 
##  [45] miniUI_0.1.1.1            lifecycle_0.1.0          
##  [47] gtools_3.8.1              zoo_1.8-6                
##  [49] scales_1.0.0              colourpicker_1.0         
##  [51] hms_0.4.2                 promises_1.1.0           
##  [53] Brobdingnag_1.2-6         parallel_3.6.0           
##  [55] inline_0.3.15             shinystan_2.5.0          
##  [57] yaml_2.2.0                gridExtra_2.3            
##  [59] loo_2.1.0                 StanHeaders_2.19.0       
##  [61] stringi_1.4.3             highr_0.8                
##  [63] dygraphs_1.1.1.6          pkgbuild_1.0.5           
##  [65] rlang_0.4.1               pkgconfig_2.0.3          
##  [67] matrixStats_0.55.0        HDInterval_0.2.0         
##  [69] evaluate_0.14             lattice_0.20-38          
##  [71] rstantools_2.0.0          htmlwidgets_1.5          
##  [73] labeling_0.3              tidyselect_0.2.5         
##  [75] processx_3.4.1            plyr_1.8.4               
##  [77] magrittr_1.5              R6_2.4.0                 
##  [79] generics_0.0.2            pillar_1.4.2             
##  [81] haven_2.1.0               withr_2.1.2              
##  [83] xts_0.11-2                abind_1.4-5              
##  [85] modelr_0.1.4              crayon_1.3.4             
##  [87] arrayhelpers_1.0-20160527 utf8_1.1.4               
##  [89] rmarkdown_1.13            grid_3.6.0               
##  [91] readxl_1.3.1              callr_3.3.2              
##  [93] threejs_0.3.1             digest_0.6.21            
##  [95] xtable_1.8-4              httpuv_1.5.2             
##  [97] stats4_3.6.0              munsell_0.5.0            
##  [99] viridisLite_0.3.0         shinyjs_1.0