3 The Simple Mediation Model

Hayes closed the opening with:

Whereas answering questions about when or for whom is the domain of moderation analysis, questions that ask about how pertain to mediation, the focus of this and the next three chapters. In this chapter, [we’ll explore] the simple mediation model and illustrate using OLS regression-based path analysis how the effect of an antecedent variable \(X\) on some final consequent \(Y\) can be partitioned into two paths of influence, direct and indirect. (p. 78, emphasis in the original)

3.1 The simple mediation model

Mediation analysis is a statistical method used to evaluate evidence from studies designed to test hypotheses about how some causal antecedent variable \(X\) transmits its effect on a consequent variable \(Y\).

When thinking about whether a phenomenon or theory you are studying could be conceptualized as a mediation process, it is important to keep in mind that mediation is ultimately a causal explanation. It is assumed that the relationships in the system are causal, and, importantly, that \(M\) is causally located between \(X\) and \(Y\). It must be assumed, if not also empirically substantiated, that \(X\) causes \(M\), which in turn causes \(Y\). \(M\) cannot possibly carry \(X\)’s effect on \(Y\) if \(M\) is not located causally between \(X\) and \(Y\). (pp. 78–81, emphasis in the original)

3.2 Estimation of the direct, indirect, and total effects of \(X\)

Given the simple three-term mediation model, the statistical model is expressed in the two equations

\[\begin{align*} M & = i_M + a X + e_M \\ Y & = i_Y + c' X + b M + e_Y. \end{align*}\]

When using OLS software, as Hayes promotes throughout the text, these equations are estimated sequentially. However, the brms package has multivariate capabilities. As such, our results will come from a Bayesian multivariate model that estimates both equations simultaneously; they are both part of a joint model. And when we consider more advanced models later in the text, our multivariate models will fit even more than two equations at once. None of this is a problem for brms.

3.3 Example with dichotomous \(X\): The influence of presumed media influence

Here we load a couple necessary packages, load the data, and take a peek.
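
If you haven’t already, something like this will do it. The file path is an assumption about where you saved your copy of Hayes’s pmi data; adjust as needed.

```r
library(tidyverse)

# the path is an assumption; point read_csv() at your copy of the data
pmi <- read_csv("data/pmi/pmi.csv")

glimpse(pmi)
```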

## Observations: 123
## Variables: 6
## $ cond     <dbl> 1, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0, …
## $ pmi      <dbl> 7.0, 6.0, 5.5, 6.5, 6.0, 5.5, 3.5, 6.0, 4.5, 7.0, 1.0, 6.0, 5.0, 7.0, 7.0, 7.0, 4.5, 3.5, …
## $ import   <dbl> 6, 1, 6, 6, 5, 1, 1, 6, 6, 6, 3, 3, 4, 7, 1, 6, 3, 3, 2, 4, 4, 6, 7, 4, 5, 4, 6, 5, 5, 7, …
## $ reaction <dbl> 5.25, 1.25, 5.00, 2.75, 2.50, 1.25, 1.50, 4.75, 4.25, 6.25, 1.25, 2.75, 3.75, 5.00, 4.00, …
## $ gender   <dbl> 1, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, …
## $ age      <dbl> 51.0, 40.0, 26.0, 21.0, 27.0, 25.0, 23.0, 25.0, 22.0, 24.0, 22.0, 21.0, 23.0, 21.0, 22.0, …

You can get the male/female split like so.
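
Assuming the data are in a tibble named pmi, count() does the work.

```r
pmi %>% 
  count(gender)
```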

## # A tibble: 2 x 2
##   gender     n
##    <dbl> <int>
## 1      0    80
## 2      1    43

Here is the split by condition.
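
Same idea, swapping in cond.

```r
pmi %>% 
  count(cond)
```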

## # A tibble: 2 x 2
##    cond     n
##   <dbl> <int>
## 1     0    65
## 2     1    58

Here is how to get the ungrouped mean and \(SD\) values for reaction and pmi, as presented in Table 3.1.
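
One way is to pivot the two variables into the long format and then summarize. A sketch:

```r
pmi %>% 
  pivot_longer(cols = c(reaction, pmi)) %>% 
  group_by(name) %>% 
  summarise(mean = mean(value),
            sd   = sd(value))
```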

## # A tibble: 2 x 3
##   name      mean    sd
##   <chr>    <dbl> <dbl>
## 1 pmi       5.60  1.32
## 2 reaction  3.48  1.55

You might get the mean and \(SD\) values for reaction and pmi grouped by cond like this.
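
Just add cond to the group_by() call.

```r
pmi %>% 
  pivot_longer(cols = c(reaction, pmi)) %>% 
  group_by(cond, name) %>% 
  summarise(mean = mean(value),
            sd   = sd(value))
```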

## # A tibble: 4 x 4
## # Groups:   cond [2]
##    cond name      mean    sd
##   <dbl> <chr>    <dbl> <dbl>
## 1     0 pmi       5.38  1.34
## 2     0 reaction  3.25  1.61
## 3     1 pmi       5.85  1.27
## 4     1 reaction  3.75  1.45

Let’s load our primary statistical package, brms.
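
```r
# load brms
library(brms)
```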

Before we begin, I should acknowledge that I benefited greatly from this blog post on path analysis in brms by Jarrett Byrnes. With brms, we handle mediation models using the multivariate syntax. There are a few ways to do this. Let’s start simple.

If you look at the path model in Figure 3.3, you’ll note that reaction is predicted by pmi and cond. pmi, in turn, is predicted solely by cond. So we have two regression models, which is just the kind of thing the brms multivariate syntax is for. So first let’s specify both models, which we’ll nest in bf() functions and save as objects.
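
Something like this; the object names y_model and m_model are the ones we’ll lean on again later in the chapter.

```r
y_model <- bf(reaction ~ 1 + pmi + cond)
m_model <- bf(pmi ~ 1 + cond)
```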

Now that we have our bf() objects in hand, we’ll combine them with the + operator within the brm() function. We’ll also specify set_rescor(FALSE), since we’re not interested in adding a residual correlation between reaction and pmi.
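
Here’s a sketch of the fit. We’re leaning on the brms default priors; the cores and seed arguments are optional conveniences.

```r
model3.1 <- brm(
  data = pmi, 
  family = gaussian,
  y_model + m_model + set_rescor(FALSE),
  cores = 4, seed = 3)
```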

Here are our results.
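
```r
# print the model summary
print(model3.1)
```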

##  Family: MV(gaussian, gaussian) 
##   Links: mu = identity; sigma = identity
##          mu = identity; sigma = identity 
## Formula: reaction ~ 1 + pmi + cond 
##          pmi ~ 1 + cond 
##    Data: pmi (Number of observations: 123) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                    Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## reaction_Intercept     0.52      0.56    -0.57     1.65 1.00     6426     3230
## pmi_Intercept          5.38      0.16     5.06     5.70 1.00     6210     2969
## reaction_pmi           0.51      0.10     0.31     0.70 1.00     6261     2875
## reaction_cond          0.26      0.26    -0.26     0.77 1.00     6390     3224
## pmi_cond               0.47      0.24    -0.01     0.95 1.00     6810     3307
## 
## Family Specific Parameters: 
##                Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sigma_reaction     1.41      0.09     1.25     1.59 1.00     6351     2500
## sigma_pmi          1.32      0.08     1.17     1.50 1.00     5592     2827
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).

If you compare our model summary with the coefficients in the path model in Figure 3.3, you’ll see our coefficients are about the same. The brms summary also includes intercepts and residual variances, which are typically omitted from path diagrams even though they’re still part of the model.

If you’re getting lost in all the model output, try taking out the constant and error terms.
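
One way is to index the relevant rows out of fixef(). The 3:5 below presumes the parameter ordering shown in the summary above.

```r
fixef(model3.1)[3:5, ] %>% 
  round(digits = 3)
```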

##               Estimate Est.Error   Q2.5 Q97.5
## reaction_pmi     0.507     0.099  0.310 0.702
## reaction_cond    0.259     0.261 -0.259 0.769
## pmi_cond         0.472     0.240 -0.014 0.954

In his Table 3.2, Hayes included the \(R^2\) values. Here are ours.
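
These came from bayes_R2(), rounded for convenience.

```r
bayes_R2(model3.1) %>% 
  round(digits = 3)
```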

##            Estimate Est.Error  Q2.5 Q97.5
## R2reaction    0.210     0.057 0.099 0.321
## R2pmi         0.039     0.031 0.001 0.116

It’s worth it to actually plot the \(R^2\) distributions.
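
Here’s one way you might do so; the aesthetic choices are arbitrary.

```r
bayes_R2(model3.1, summary = FALSE) %>% 
  as_tibble() %>% 
  pivot_longer(everything()) %>% 
  ggplot(aes(x = value, fill = name)) +
  geom_density(alpha = .85, color = "transparent") +
  scale_x_continuous(expression(italic(R)^2), limits = c(0, 1)) +
  ylab(NULL)
```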

We went through the trouble of plotting the \(R^2\) distributions because it’s useful to understand that they often won’t be symmetric when they’re near their logical boundaries (i.e., 0 and 1). This is where asymmetric Bayesian credible intervals can really shine.

Let’s get down to business and examine the indirect effect, the \(ab\) pathway. In our model,

  • \(a\) = pmi_cond and
  • \(b\) = reaction_pmi.

You can isolate them with fixef()[i].
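
For example, indexing by parameter name:

```r
fixef(model3.1)["pmi_cond", ]
fixef(model3.1)["reaction_pmi", ]
```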

##    Estimate   Est.Error        Q2.5       Q97.5 
##  0.47213618  0.24020629 -0.01439811  0.95380657
##   Estimate  Est.Error       Q2.5      Q97.5 
## 0.50699147 0.09885669 0.30960267 0.70221028

So the naive approach would be to just multiply them.
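
```r
# elementwise product of the two fixef() rows, rounded for convenience
(fixef(model3.1)["pmi_cond", ] * fixef(model3.1)["reaction_pmi", ]) %>% 
  round(digits = 3)
```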

##  Estimate Est.Error      Q2.5     Q97.5 
##     0.239     0.024    -0.004     0.670

Now, this does get us the correct ‘Estimate’ (i.e., posterior mean). However, the posterior \(SD\) and 95% intervals are off. If you want to do this properly, you need to work with the posterior draws themselves. We do that with the posterior_samples() function.
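
Saving the draws as post sets us up for the rest of the chapter.

```r
post <- posterior_samples(model3.1)

glimpse(post)
```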

## Observations: 4,000
## Variables: 8
## $ b_reaction_Intercept <dbl> 0.51540720, 0.40541249, 0.63404832, 0.28124422, 0.70300758, 0.72773623, 0.7608…
## $ b_pmi_Intercept      <dbl> 5.522656, 5.526570, 5.233996, 5.526494, 5.251560, 5.437609, 5.083563, 5.621216…
## $ b_reaction_pmi       <dbl> 0.4454816, 0.5781861, 0.4289168, 0.5859403, 0.4387670, 0.4857291, 0.4548907, 0…
## $ b_reaction_cond      <dbl> 1.003075337, -0.051680431, 0.619523421, -0.068325690, 0.567368727, 0.402503624…
## $ b_pmi_cond           <dbl> 0.4317949, 0.3282448, 0.6163566, 0.3163480, 0.5319986, 0.2861751, 0.8062659, 0…
## $ sigma_reaction       <dbl> 1.470505, 1.307822, 1.488764, 1.324966, 1.528775, 1.278531, 1.310666, 1.465768…
## $ sigma_pmi            <dbl> 1.280238, 1.358432, 1.264979, 1.254725, 1.286195, 1.251778, 1.303006, 1.229578…
## $ lp__                 <dbl> -436.2352, -433.9475, -434.1470, -433.2939, -433.6824, -433.7403, -433.5189, -…

Here we compute the indirect effect, ab.
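
```r
# the indirect effect is the product of the a and b coefficients
post <- post %>% 
  mutate(ab = b_pmi_cond * b_reaction_pmi)
```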

Now that we have ab as a properly computed vector, we can summarize it with the quantile() function.
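
The probs below return the median and the bounds of the percentile-based 95% interval.

```r
quantile(post$ab, probs = c(.5, .025, .975)) %>% 
  round(digits = 3)
```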

##    50%   2.5%  97.5% 
##  0.232 -0.006  0.527

And we can even visualize it as a density.
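
A minimal ggplot2 sketch; tweak the aesthetics to taste.

```r
post %>% 
  ggplot(aes(x = ab)) +
  geom_density(fill = "grey50", color = "transparent") +
  scale_y_continuous(NULL, breaks = NULL) +
  xlab(expression(italic(ab)))
```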

It’s also worth pointing out that because the indirect effect isn’t perfectly symmetric, its mean and median aren’t quite the same.
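
```r
# compare the two measures of central tendency
post %>% 
  summarize(mean   = mean(ab),
            median = median(ab)) %>% 
  round(digits = 3)
```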

##    mean median
## 1 0.239  0.232

Their magnitudes are similar, but this asymmetry will be a source of contrast between our estimates and the OLS estimates Hayes reported in the text. It is also something to consider when reporting on central tendency. When the indirect effect, or any other parameter for that matter, is quite asymmetric, you might prefer reporting the median rather than the mean.

On page 90, Hayes computed the adjusted means for \(Y\). For both cond == 1 and cond == 0, he computed the expected values for reaction when pmi was at its mean. A natural way to do that in brms is with fitted(). First we’ll put our input values for cond and pmi in a tibble, which we’ll call nd. Then we’ll feed nd into the newdata argument within the fitted() function.
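
In the sketch below, the first row of nd is for cond == 1 and the second for cond == 0.

```r
nd <- tibble(cond = 1:0,
             pmi  = mean(pmi$pmi))

fitted(model3.1, newdata = nd)
```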

## , , reaction
## 
##      Estimate Est.Error     Q2.5    Q97.5
## [1,] 3.621530 0.1857745 3.245964 3.989349
## [2,] 3.362035 0.1815533 3.010791 3.727515
## 
## , , pmi
## 
##      Estimate Est.Error     Q2.5    Q97.5
## [1,] 5.850008 0.1738619 5.504383 6.192191
## [2,] 5.377872 0.1632537 5.058797 5.696318

Because model3.1 is a multivariate model, fitted() returned the model-implied summaries for both reaction and pmi. If you just want the adjusted means for reaction, you can use the resp argument within fitted().
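
```r
# restrict the output to the reaction submodel
fitted(model3.1, newdata = nd, resp = "reaction") %>% 
  round(digits = 3)
```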

##      Estimate Est.Error  Q2.5 Q97.5
## [1,]    3.622     0.186 3.246 3.989
## [2,]    3.362     0.182 3.011 3.728

Note how this is where the two values in the \(Y\) adjusted column in Table 3.1 came from.

However, if we want to reproduce how Hayes computed the total effect (i.e., \(c' + ab\)), we’ll need to work with the posterior draws themselves, post. Recall, we’ve already saved the indirect effect as a vector, ab. The direct effect, \(c'\), is labeled b_reaction_cond within post. To get the total effect, \(c\), all we need to do is add those vectors together.
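
The column name total_effect is my own; call it whatever you like.

```r
post <- post %>% 
  mutate(total_effect = ab + b_reaction_cond)
```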

Here’s the posterior mean with its quantile-based 95% intervals.
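
```r
# summarize the total effect with a mean and percentile-based 95% interval
post %>% 
  summarize(mean = mean(total_effect),
            ll   = quantile(total_effect, probs = .025),
            ul   = quantile(total_effect, probs = .975))
```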

##        mean          ll       ul
## 1 0.4985056 -0.06307429 1.035278

3.3.1 Estimation of the model in PROCESS for SPSS and SAS.

Nothing new for us, here.

3.4 Statistical inference

Our approach will not match up neatly with Hayes’s on this topic.

3.4.1 Inference about the total effect of \(X\) on \(Y\).

As we mentioned in Chapter 2, we can indeed focus on rejecting \(H_0\) when using Bayesian statistics. I, however, am not a fan of that approach and I will not be focusing on Bayesian \(p\)-values. But throughout this project, we will make great efforts to express the (un)certainty in our models with various plots of posterior distributions and summary statistics, such as measures of central tendency (e.g., means) and spread (e.g., percentile-based 95% intervals).

So instead of \(t\)- and \(p\)-values for the total effect, \(c\), we are going to focus on its distribution. We already gave its mean and 95% intervals, above. Here’s a look at the density.

3.4.2 Inference about the direct effect of \(X\) on \(Y\).

Like in the last section, we will just look at the posterior distribution for the direct effect (i.e., \(c'\), b_reaction_cond).

If you wanted to quantify what proportion of the density was less than 0, you could execute something like this.
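
One of many ways: take the mean of a logical test across the draws.

```r
post %>% 
  summarize(proportion_below_zero = mean(b_reaction_cond < 0))
```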

##   proportion_below_zero
## 1                 0.158

This is something like a Bayesian \(p\)-value. But of course, you could always just look at the posterior intervals.
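
```r
# percentile-based 95% interval for the direct effect
quantile(post$b_reaction_cond, probs = c(.025, .975))
```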

##       2.5%      97.5% 
## -0.2591588  0.7686859

3.4.3 Inference about the indirect effect of \(X\) on \(Y\) through \(M\).

The indirect effect quantifies how much two cases that differ by a unit on \(X\) are estimated to differ on \(Y\) as a result of \(X\)’s influence on \(M\), which in turn influences \(Y\). The indirect effect is relevant as to [what extent] \(X\)’s effect on \(Y\) can be said to be transmitted through the mechanism represented by the \(X \rightarrow M \rightarrow Y\) causal chain of events. (p. 95)

3.4.3.1 The normal theory approach.

This is not our approach.

3.4.3.2 Bootstrap confidence interval.

This is not our approach.

However, Markov chain Monte Carlo (MCMC) methods are iterative and share some characteristics with bootstrapping. On page 98, Hayes outlined 6 steps for constructing the \(ab\) bootstrap confidence interval. Here are our responses to those steps w/r/t Bayes with MCMC, or in our case HMC (i.e., Hamiltonian Monte Carlo).

If HMC or MCMC, in general, are new to you, you might check out this lecture or the Stan Reference Manual if you’re more technically oriented.

Anyway, Hayes’s 6 steps:

3.4.3.2.1 Step 1.

With HMC we do not take random samples of the data themselves. Rather, we take random draws from the posterior distribution, which is the joint probability distribution of the model’s parameters, given the data.

3.4.3.2.2 Step 2.

After we fit our model with the brm() function and save our posterior draws in a data frame (i.e., post <- posterior_samples(my_model_fit)), we then make a new column (a.k.a. vector, variable) that is the product of our coefficients for the \(a\) and \(b\) pathways. In the example above, this looked like post %>% mutate(ab = b_pmi_cond * b_reaction_pmi). Let’s take a look at those columns.
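
Here are their first ten rows.

```r
post %>% 
  select(b_pmi_cond, b_reaction_pmi, ab) %>% 
  slice(1:10)
```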

##    b_pmi_cond b_reaction_pmi        ab
## 1   0.4317949      0.4454816 0.1923567
## 2   0.3282448      0.5781861 0.1897866
## 3   0.6163566      0.4289168 0.2643657
## 4   0.3163480      0.5859403 0.1853611
## 5   0.5319986      0.4387670 0.2334234
## 6   0.2861751      0.4857291 0.1390035
## 7   0.8062659      0.4548907 0.3667628
## 8   0.2674144      0.5624687 0.1504122
## 9   0.6673819      0.5266402 0.3514701
## 10  0.2771463      0.5150936 0.1427563

Our data frame, post, has 4000 rows. Why 4000? By default, brms runs 4 HMC chains. Each chain has 2000 iterations, 1000 of which are warmups, which we always discard. As such, there are 1000 good iterations left in each chain and \(1000 \times 4 = 4000\). We can change these defaults as needed.

Each row in post contains the parameter values based on one of those draws. And again, these are draws from the posterior distribution. They are not draws from the data.

3.4.3.2.3 Step 3.

We don’t refit the model \(k\) times based on resamples of the data. Rather, we take a number of draws from the posterior distribution. Hayes likes to take 5,000 samples when he bootstraps. Happily, that number is quite similar to our default 4,000 HMC draws. Whether 5,000, 4,000, or 10,000, these are all large enough numbers that the resulting distributions become fairly stable. With HMC, however, you might want to increase the number of iterations if the effective sample size (‘Bulk_ESS’ and ‘Tail_ESS’ in the print() output) is substantially smaller than the number of post-warmup draws.

3.4.3.2.4 Step 4.

When we use the quantile() function to compute our Bayesian credible intervals, the sorting happens under the hood. Conceptually, we’ve done this.
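
A sketch of that sorting, showing the ten smallest ab draws.

```r
post %>% 
  select(ab) %>% 
  arrange(ab) %>% 
  slice(1:10)
```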

##            ab
## 1  -0.2056336
## 2  -0.1745748
## 3  -0.1627155
## 4  -0.1527381
## 5  -0.1507441
## 6  -0.1337197
## 7  -0.1271329
## 8  -0.1261394
## 9  -0.1197648
## 10 -0.1194200

3.4.3.2.5 Step 5.

Yes, this is what we do, too.
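
For a 95% interval, the lower percentile is

```r
(100 - 95) / 2
```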

## [1] 2.5

3.4.3.2.6 Step 6.

This is also what we do.
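
And the upper percentile is

```r
100 - (100 - 95) / 2
```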

## [1] 97.5

Also, notice the headers in the rightmost two columns in our posterior_summary() output:
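
```r
# the full parameter summary, including lp__
posterior_summary(model3.1)
```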

##                          Estimate  Est.Error          Q2.5        Q97.5
## b_reaction_Intercept    0.5220588 0.56038627   -0.56894728    1.6542507
## b_pmi_Intercept         5.3778721 0.16325372    5.05879679    5.6963180
## b_reaction_pmi          0.5069915 0.09885669    0.30960267    0.7022103
## b_reaction_cond         0.2594951 0.26076802   -0.25915879    0.7686859
## b_pmi_cond              0.4721362 0.24020629   -0.01439811    0.9538066
## sigma_reaction          1.4079448 0.09152163    1.24656446    1.5949728
## sigma_pmi               1.3177959 0.08377812    1.16592170    1.4979835
## lp__                 -434.8522548 1.94452334 -439.65536983 -432.2107403

Those .025 and .975 quantiles from above are just what brms is giving us in our 95% Bayesian credible intervals.

Here’s our version of Figure 3.5.
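
Here’s a rough sketch using a histogram with dotted lines at the percentile-based 95% limits; the binwidth and other aesthetics are arbitrary.

```r
post %>% 
  ggplot(aes(x = ab)) +
  geom_histogram(binwidth = .025, boundary = 0,
                 fill = "grey67", color = "grey92") +
  geom_vline(xintercept = quantile(post$ab, probs = c(.025, .975)),
             linetype = 3) +
  labs(x = expression(paste("Indirect effect (", italic(ab), ")")),
       y = "Frequency")
```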

Hayes discussed how to specify different types of intervals in PROCESS on page 102. In a similar vein, you can ask for different kinds of intervals in your print() or summary() output with the probs argument, just as you can with quantile() when working directly with the posterior draws.

Hayes discussed setting the seed in PROCESS on page 104. You can do this with the seed argument in the brm() function, too.

3.4.3.3 Alternative “asymmetric” confidence interval approaches.

This section does not quite apply to us. I’m a little surprised Hayes didn’t dedicate at least a paragraph or two to Bayesian estimation. Sure, he mentioned Monte Carlo, but not within the context of Bayes. So it goes. But if you’re interested, you can read about Bayesian intervals for mediation models in Yuan and MacKinnon’s Bayesian mediation analysis. And yes, Hayes is aware of it; he has cited it in his own work.

3.5 An example with continuous \(X\): Economic stress among small-business owners

Here’s the estress data.
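
As with the pmi data, the path below is an assumption about where your copy of the file lives.

```r
estress <- read_csv("data/estress/estress.csv")

glimpse(estress)
```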

## Observations: 262
## Variables: 7
## $ tenure   <dbl> 1.67, 0.58, 0.58, 2.00, 5.00, 9.00, 0.00, 2.50, 0.50, 0.58, 9.00, 1.92, 2.00, 1.42, 0.92, …
## $ estress  <dbl> 6.0, 5.0, 5.5, 3.0, 4.5, 6.0, 5.5, 3.0, 5.5, 6.0, 5.5, 4.0, 3.0, 2.5, 3.5, 6.0, 4.0, 6.0, …
## $ affect   <dbl> 2.60, 1.00, 2.40, 1.16, 1.00, 1.50, 1.00, 1.16, 1.33, 3.00, 3.00, 2.00, 1.83, 1.16, 1.16, …
## $ withdraw <dbl> 3.00, 1.00, 3.66, 4.66, 4.33, 3.00, 1.00, 1.00, 2.00, 4.00, 4.33, 1.00, 5.00, 1.66, 4.00, …
## $ sex      <dbl> 1, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 0, 0, 0, 1, …
## $ age      <dbl> 51, 45, 42, 50, 48, 48, 51, 47, 40, 43, 57, 36, 33, 29, 33, 48, 40, 45, 37, 42, 54, 57, 37…
## $ ese      <dbl> 5.33, 6.05, 5.26, 4.35, 4.86, 5.05, 3.66, 6.13, 5.26, 4.00, 2.53, 6.60, 5.20, 5.66, 5.66, …

The model setup is just like before. There are no complications in switching from a binary \(X\) variable to a continuous one.
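
A sketch of the two submodels, reusing the y_model and m_model names from before.

```r
y_model <- bf(withdraw ~ 1 + estress + affect)
m_model <- bf(affect ~ 1 + estress)
```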

With our y_model and m_model defined, we’re ready to fit.
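
Again we lean on the brms default priors; the object name model3.2 is just an assumption on my part.

```r
model3.2 <- brm(
  data = estress, 
  family = gaussian,
  y_model + m_model + set_rescor(FALSE),
  cores = 4, seed = 3)
```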

Let’s take a look at the results.
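
```r
# print the summary with three decimal places
print(model3.2, digits = 3)
```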

##  Family: MV(gaussian, gaussian) 
##   Links: mu = identity; sigma = identity
##          mu = identity; sigma = identity 
## Formula: withdraw ~ 1 + estress + affect 
##          affect ~ 1 + estress 
##    Data: estress (Number of observations: 262) 
## Samples: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
##          total post-warmup samples = 4000
## 
## Population-Level Effects: 
##                    Estimate Est.Error l-95% CI u-95% CI  Rhat Bulk_ESS Tail_ESS
## withdraw_Intercept    1.448     0.256    0.945    1.963 1.002     5808     3165
## affect_Intercept      0.800     0.143    0.522    1.081 1.001     6502     3321
## withdraw_estress     -0.078     0.053   -0.182    0.026 1.000     5461     3448
## withdraw_affect       0.771     0.106    0.564    0.972 1.001     5187     3009
## affect_estress        0.173     0.029    0.116    0.229 1.001     6388     3134
## 
## Family Specific Parameters: 
##                Estimate Est.Error l-95% CI u-95% CI  Rhat Bulk_ESS Tail_ESS
## sigma_withdraw    1.139     0.051    1.045    1.245 1.001     6170     3185
## sigma_affect      0.686     0.031    0.630    0.750 1.003     6978     2902
## 
## Samples were drawn using sampling(NUTS). For each parameter, Eff.Sample 
## is a crude measure of effective sample size, and Rhat is the potential 
## scale reduction factor on split chains (at convergence, Rhat = 1).

The ‘Bulk_ESS’, ‘Tail_ESS’, and ‘Rhat’ values look great. Happily, the values in our summary cohere well with those Hayes reported in Table 3.5. Here are our \(R^2\) values.
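
```r
bayes_R2(model3.2)
```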

##             Estimate  Est.Error       Q2.5     Q97.5
## R2withdraw 0.1833827 0.03932012 0.10720426 0.2577434
## R2affect   0.1167220 0.03407392 0.05450603 0.1848251

These are also quite similar to those in the text. Here’s our indirect effect.
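
As before, we extract the posterior draws, define ab as the product of the \(a\) and \(b\) coefficients, and summarize with quantile(). The parameter names follow brms’s response_predictor convention.

```r
post <- posterior_samples(model3.2) %>% 
  mutate(ab = b_affect_estress * b_withdraw_affect)

quantile(post$ab, probs = c(.5, .025, .975)) %>% 
  round(digits = 3)
```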

##   50%  2.5% 97.5% 
## 0.131 0.081 0.193

We can visualize its shape, median, and 95% intervals in a density plot.

Here’s \(c'\), the direct effect of estress predicting withdraw.
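
```r
# pull the row for the direct effect
fixef(model3.2)["withdraw_estress", ]
```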

##    Estimate   Est.Error        Q2.5       Q97.5 
## -0.07781526  0.05300991 -0.18218003  0.02647619

The intervals are wide and do straddle zero. A little addition will give us the total effect, \(c\).
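
```r
# total effect = direct effect + indirect effect, summarized with quantiles
quantile(post$b_withdraw_estress + post$ab, probs = c(.5, .025, .975)) %>% 
  round(digits = 3)
```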

##    50%   2.5%  97.5% 
##  0.055 -0.052  0.163

Session info

## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin15.6.0 (64-bit)
## Running under: macOS High Sierra 10.13.6
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/3.6/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
##  [1] ggthemes_4.2.0  brms_2.10.3     Rcpp_1.0.2      forcats_0.4.0   stringr_1.4.0   dplyr_0.8.3    
##  [7] purrr_0.3.3     readr_1.3.1     tidyr_1.0.0     tibble_2.1.3    ggplot2_3.2.1   tidyverse_1.2.1
## 
## loaded via a namespace (and not attached):
##  [1] nlme_3.1-139         matrixStats_0.55.0   xts_0.11-2           lubridate_1.7.4      threejs_0.3.1       
##  [6] httr_1.4.0           rstan_2.19.2         tools_3.6.0          backports_1.1.5      utf8_1.1.4          
## [11] R6_2.4.0             DT_0.9               lazyeval_0.2.2       colorspace_1.4-1     withr_2.1.2         
## [16] prettyunits_1.0.2    processx_3.4.1       tidyselect_0.2.5     gridExtra_2.3        Brobdingnag_1.2-6   
## [21] compiler_3.6.0       cli_1.1.0            rvest_0.3.4          xml2_1.2.0           shinyjs_1.0         
## [26] labeling_0.3         colourpicker_1.0     scales_1.0.0         dygraphs_1.1.1.6     callr_3.3.2         
## [31] ggridges_0.5.1       StanHeaders_2.19.0   digest_0.6.21        rmarkdown_1.13       base64enc_0.1-3     
## [36] pkgconfig_2.0.3      htmltools_0.4.0      htmlwidgets_1.5      rlang_0.4.1          readxl_1.3.1        
## [41] rstudioapi_0.10      shiny_1.3.2          generics_0.0.2       zoo_1.8-6            jsonlite_1.6        
## [46] crosstalk_1.0.0      gtools_3.8.1         inline_0.3.15        magrittr_1.5         loo_2.1.0           
## [51] bayesplot_1.7.0      Matrix_1.2-17        munsell_0.5.0        fansi_0.4.0          abind_1.4-5         
## [56] lifecycle_0.1.0      stringi_1.4.3        pkgbuild_1.0.5       plyr_1.8.4           grid_3.6.0          
## [61] parallel_3.6.0       promises_1.1.0       crayon_1.3.4         miniUI_0.1.1.1       lattice_0.20-38     
## [66] haven_2.1.0          hms_0.4.2            ps_1.3.0             zeallot_0.1.0        knitr_1.23          
## [71] pillar_1.4.2         igraph_1.2.4.1       markdown_1.1         shinystan_2.5.0      stats4_3.6.0        
## [76] reshape2_1.4.3       rstantools_2.0.0     glue_1.3.1.9000      evaluate_0.14        modelr_0.1.4        
## [81] vctrs_0.2.0          httpuv_1.5.2         cellranger_1.1.0     gtable_0.3.0         assertthat_0.2.1    
## [86] xfun_0.10            mime_0.7             xtable_1.8-4         broom_0.5.2          coda_0.19-3         
## [91] later_1.0.0          rsconnect_0.8.15     shinythemes_1.1.2    ellipsis_0.3.0       bridgesampling_0.7-2