14 Miscellaneous Topics and Some Frequently Asked Questions

The vast majority of scientists would probably argue that when push comes to shove, the theoretical horse should pull the statistical cart. Statistical methods are mathematical tools, some of them quite amazing in what they do, which can help us to discern order amid the apparent chaos in a batch of data. But ultimately, the stories that statistical methods help us tell are told by our brains, not by the mathematics, and our brains are good at making sense of things–of coming up with stories to explain what we perceive. The problem is that the same pattern of results can be interpreted in many different ways, especially if the pattern is found after an extensive round of exploratory data analysis. Without a theoretical orientation to guide our attempts at making sense of our data or, better still, to guide our research design and data collection efforts, our awesome storytelling ability can lead us astray by invoking explanations for findings that may sound good but that are mere conjecture even if we can find a theoretical hook on which to hang them post hoc.

I won’t argue against this perspective, as I believe it is for the most part right on the money. But I also believe that statistical methods can play an important role in theory development as well–that the statistical cart need not always, should not always, and often does not follow the theoretical horse. When we learn something new analytically, this can change the way we think of things theoretically and how we then go about testing the ideas that our newfound awareness of an analytical method inspired (cf. Hayes et al., 2008, p. 2). (Hayes, 2018, pp. 507–508)

14.1 A strategy for approaching a conditional process analysis

Things don’t always turn out as we expected and articulated in hypotheses 1, 2, and 3. And sometimes after looking at the data, our thinking about the process at work changes and new hypotheses come to mind that are worth testing. Scientists routinely switch back and forth between the context of justification and the context of discovery, testing hypotheses conceived before the data were analyzed while also exploring one’s data to see what else can be learned from patterns observed but not anticipated.

In the next paragraph of the text, we see Hayes anticipating criticism of this stance and of some of the subsections to follow. My two cents are that problems arise when we approach statistics from a \(p\)-value-based hypothesis-testing perspective, when we pose exploratory research as confirmatory, when we engage in closed science practices, and when we only present the analyses that “worked.” For some engaging thoughts on iterative Bayesian workflows, you might check out Navarro’s (2020) paper, Paths in strange spaces.

14.1.1 Step 1: Construct your conceptual diagram of the process.

Build up your model slowly and with the aid of visuals.

14.1.2 Step 2: Translate the conceptual model into a statistical model.

We don’t estimate the conceptual model. [Presuming mediation,] a conceptual model must be translated into a statistical model in the form of at least two equations, depending on the number of proposed mediators in the model. With an understanding of the principles of moderation and mediation analysis described in this book, you should be able to do this without too much difficulty. (p. 510)

14.1.3 Step 3: Estimate the statistical model.

Hayes used OLS-based procedures throughout the text. We have been using Bayesian software, instead. Though I prefer brms, you might also check out blavaan or rstanarm. As a group, these will allow you to fit many more kinds of regression models than are available within the OLS paradigm.
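To make this step concrete, here is a minimal sketch of fitting the two-equation mediation model from Step 2 with brms. The data frame `d` and the variable names `x`, `m`, and `y` are placeholders; swap in your own.

```r
library(brms)

# the two sub-models: m as a function of x, and y as a function of x and m
m_model <- bf(m ~ 1 + x)
y_model <- bf(y ~ 1 + x + m)

# combine them into a single multivariate model; set_rescor(FALSE) omits the
# residual correlation, following the convention used in earlier chapters
fit1 <- brm(
  data = d,
  family = gaussian,
  m_model + y_model + set_rescor(FALSE),
  cores = 4, seed = 14
)
```

From there, the posterior for the indirect effect is just the product of the relevant \(a\) and \(b\) coefficients, computed draw by draw.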

14.1.4 Step 4: Determine whether expected moderation exists.

I’m not a fan of this “whether [x] exists” talk. First, it seems way too black and white. For more thoughts along those lines, check out Gelman’s (2015) paper, The connection between varying treatment effects and the crisis of unreplicable research: A Bayesian perspective. Second, it places too much faith in the analysis of a single data set. For more along those lines, consider what Hayes wrote in the middle of this subsection:

Just because an interaction is not significant, that doesn’t mean your proposed moderator does not moderate the path you proposed it moderates. Parsimony might dictate that your model should be cleansed of this interaction, but null hypotheses tests are fallible, and sometimes real effects are so weak that we don’t have the power to detect them given the limitations of our resources or other things out of our control. (p. 511)

This is all posed in the language of NHST, but the basic points still hold for other paradigms. If you have a theory-informed model, I recommend showing the results for that model regardless of the sizes of the parameters.

I have concerns about the next subsection.

14.1.4.1 Step 4A.

If you have decided to prune your model of nonsignificant interactions, then go back to step 1 and start fresh by redrawing your conceptual diagram in light of the evidence you now have and proceed through these steps again. A certain moral or ethical logic might dictate that you not pretend when describing your analysis that this is where you started in the first place. Yet Bem (1987) makes the argument that spending lots of time talking about ideas that turned out to be “wrongheaded” isn’t going to produce a particularly interesting paper. You’ll have to sort out for yourself where you stand on this continuum of scientific ethics. (p. 512)

Hayes is quite right: “You’ll have to sort out for yourself where you stand on this continuum of scientific ethics.” At this point in the social-psychology replication crisis, a crisis in which Bem’s shoddy work has played no small part (Earp & Trafimow, 2015; Galak et al., 2012; see Nelson et al., 2018; Pashler & Wagenmakers, 2012; Yong, 2012), it’s shocking to read Hayes endorsing this advice.

At a bare minimum, I recommend presenting your failed theory-based models in supplemental materials. You can upload them to the Open Science Framework at https://osf.io/ for free.

14.1.5 Step 5: Probe and interpret interactions involving components of the indirect effect.

At this stage, probe any interactions involving components of the indirect effect of \(X\) so that you will have some understanding of the contingencies of the various effects that are the components of the larger conditional process model you are estimating. This exercise will help inform and clarify your interpretation of the conditional indirect effect(s) of \(X\) later on. (p. 512)

14.1.6 Step 6: Quantify and test conditional indirect effects (if relevant).

Within the paradigm I have introduced throughout this text, “testing” effects is never relevant. However, one can and should think in terms of the magnitudes of the parameters in the model. Think in terms of effect sizes. For a frequentist introduction to effect-size thinking, Geoff Cumming’s work is a fine place to start, such as his (2014) article, The new statistics: Why and how. For a Bayesian alternative, check out Kruschke and Liddell’s (2018) article, The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective.
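If it helps, here is a sketch of how one might summarize the posterior for a conditional indirect effect with brms and tidybayes. I am presuming a mediation fit in which the \(a\) path is moderated by \(w\); the coefficient names below (e.g., `b_m_x`, `b_m_x:w`, `b_y_m`) are hypothetical, so check `as_draws_df()` for the names in your own model.

```r
library(tidyverse)
library(tidybayes)
library(brms)

# extract the posterior draws
draws <- as_draws_df(fit)

draws %>%
  mutate(
    # conditional indirect effects of x on y at w = 0 and w = 1
    `indirect effect at w = 0` = b_m_x * b_y_m,
    `indirect effect at w = 1` = (b_m_x + `b_m_x:w`) * b_y_m
  ) %>%
  pivot_longer(starts_with("indirect")) %>%
  group_by(name) %>%
  mean_qi(value)
```

Posterior means (or medians) and percentile-based intervals for quantities like these communicate magnitude and uncertainty without requiring a formal test.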

14.1.7 Step 7: Quantify and test conditional direct effects (if relevant).

“If your model includes moderation of the direct effect of \(X\), you will want to probe this interaction by estimating the conditional direct effects” (p. 513).
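The mechanics for conditional direct effects are the same, just with coefficients from the \(y\) equation. Continuing the hypothetical naming scheme from the sketch above, where the direct effect of \(x\) is moderated by \(w\):

```r
library(tidyverse)
library(tidybayes)
library(brms)

as_draws_df(fit) %>%
  mutate(
    # conditional direct effects of x on y at w = 0 and w = 1
    `direct effect at w = 0` = b_y_x,
    `direct effect at w = 1` = b_y_x + `b_y_x:w`
  ) %>%
  pivot_longer(starts_with("direct")) %>%
  group_by(name) %>%
  mean_qi(value)
```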

14.1.8 Step 8: Tell your story.

Real science does not proceed in the manner described in research methods and statistics textbooks. Rather, we routinely straddle the fence between the hypothetico-deductive approach and a more discovery-oriented or inquisitive mindset that is open to any story the data may inform. Sometimes the story we originally conceived prior to data analysis is simply wrong and we know it after analyzing the data. No one wants to read (as Daryl Bem once put it rather bluntly) “a personal history about your stillborn thoughts” (Bem, 1987, p. 173). Sometimes our data speak to us in ways that change the story we thought we were going to tell into something much more interesting, and hopefully more accurate. (p. 514)

I largely agree, though I still recommend against following Bem’s recommendations. Yes, we need to present our work compellingly. But do resist the urge to un-transparently reinvent your research hypotheses for the sake of flashy rhetoric. If you have evidence against one of the theoretical models in your field, let your readers know! For an example of scientists embracing the failure of theory, check out Klein and colleagues’ (2019) Many Labs 4: Failure to replicate mortality salience effect with and without original author involvement.

Of course, there is always the danger of capitalizing on chance when you let your explorations of the data influence the story you tell. We are great at explaining patterns we see. Our brains are wired to do it. So replication is important, and it may be the only way of establishing the generality of our findings and claims in the end. (p. 514)

Yes indeed. So please stop following the poor examples of Bem and others. Present your research honestly and transparently.

14.2 How do I write about this?

I have become convinced that the ability to communicate in writing is at least as if not more important than the ability to manipulate numbers and think in abstractions. You don’t have to be a good data analyst to be a good scientist, but your future on the front lines of science is limited if you can’t write effectively. (p. 515)

Agreed.

On page 518, Hayes gave an example of how to write up the results of a mediation analysis in a narrative style. Here’s a more Bayesian version:

From a simple [Bayesian] mediation analysis conducted using [Hamiltonian Monte Carlo], article location indirectly influenced intentions to buy sugar through its effect on beliefs about how others would be influenced. As can be seen in Figure 3.3 and Table 3.2, participants told that the article would be published on the front page believed others would be more influenced to buy sugar than those told that the article would appear in an economic supplement (\(a\) = 0.477[, 95% CI [0.277, 0.677]]), and participants who believed others would be more influenced by the story expressed a stronger intention to go buy sugar themselves (\(b\) = 0.506[, 95% CI [0.306, 0.706]]). [The credible interval] for the indirect effect (\(ab\) = 0.241) based on 5,000 bootstrap samples was entirely above zero (0.007 to 0.526). [The evidence suggested] that article location [had little influence on] intention to buy sugar independent of its effect on presumed media influence (\(c'\) = 0.254[, 95% CI [-0.154, 0.554]]).

In addition to throwing in talk about Bayes, I added 95% credible interval information for all point estimates (i.e., posterior means or medians) and shifted the language away from a dichotomous NHST style to emphasize the magnitude and uncertainty of the direct effect (\(c'\)). A little further down we read:

Third, throughout this book, I emphasize estimation and interpretation of effects in their unstandardized metric, and I report unstandardized effects when I report the results in my own research. There is a widespread belief that standardized effects are best reported, because the measurement scales used in most sciences are arbitrary and not inherently meaningful, or that standardized effects are more comparable across studies or investigators using different methods. But standardization simply changes one arbitrary measurement scale into another arbitrary scale, and because standardized effects are scaled in terms of variability in the sample, they are not comparable across studies conducted by different investigators regardless of whether the same measurement scales are used. (p. 519)

To my mind, this leads directly to thoughts about effect sizes. I recommend you ponder long and hard on how to interpret your results in terms of effect sizes. If your research is anything like mine, it’s not always clear what the best approach might be. To get your juices flowing, you might check out Kelley and Preacher’s (2012) On effect size.

14.2.1 Reporting a mediation analysis.

Mediation is a causal phenomenon. You will find some people skeptical of the use of mediation analysis in studies based on data that are purely correlational in nature and involve no experimental manipulation or measurement over time. Some people are quite fanatical about this, and if you are unlucky enough to get such an extreme reviewer when trying to publish your work, there may not be much you can do to convince him or her otherwise. (p. 520)

In recent years, I’ve shifted more and more in the direction of the ‘fanatics.’ That might seem odd given I’ve poured all this time and effort into translating a book highlighting and endorsing cross-sectional mediation. I think cross-sectional mediation is a fine teaching tool; it’s a good place for budding data analysts to start. But please do not stop there. Do not stop with this book. Mediation is a causal process and causal processes are necessarily longitudinal. A fine place to dip your toes into those waters is the work of Maxwell, Cole, and Mitchell (Maxwell et al., 2011; e.g., Maxwell & Cole, 2007; Mitchell & Maxwell, 2013). If you care about the causal process under study, please collect longitudinal data. If you learn best by snarky Twitter discussions, go here.

14.2.2 Reporting a moderation analysis.

Some of the more interesting studies you will find show that what is commonly assumed turns out to be true only sometimes, or that a well-known manipulation only works for some types of people. When writing about moderation, you have the opportunity to tell the scientific world that things aren’t as simple as perhaps they have seemed or been assumed to be, and that there are conditions that must be placed on our understanding of the world. (p. 522)

I don’t care for the NHST framing in many of the paragraphs to follow. If you find it a struggle to move beyond this way of thinking, I recommend soaking in Gelman’s (2015) editorial commentary, The connection between varying treatment effects and the crisis of unreplicable research: A Bayesian perspective. Later we read:

The interpretation of the regression coefficients for \(X\) and \(W\) in a model that includes \(XW\) are highly dependent on the scaling of \(X\) and \(W\). If you have centered a variable, say so. If one or both of these variables is dichotomous, tell the reader what numerical codes were used to represent the two groups. Preferably, choose codes for the two groups that differ by one unit. The more information you give to the reader about how your variables are scaled or coded, the more the reader will be able to look at your results in the text, tables, or figures, discern their meaning, and interpret them correctly. (p. 524)

Agreed. And if you’re working under tight word-limit constraints, just give all these details–and others such as centering, standardizing, priors, formal model formulas, HMC chain diagnostic checks, posterior-predictive checks–in supplemental material posted to a stable online repository, such as the Open Science Framework.
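Happily, brms makes most of those details easy to pull out of a fitted model. Here are a few of the helpers I lean on, where `fit` stands in for any model fit with `brm()`:

```r
library(brms)

prior_summary(fit)  # the priors actually used in the model
fit$formula         # the formal model formula(s)
summary(fit)        # coefficient summaries plus Rhat and effective sample sizes
pp_check(fit)       # a quick graphical posterior-predictive check
```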

Other than that, please consider plotting your interactions. Hayes gave some examples of that in the text, but I think his approach was inadequate. Include measures of uncertainty (e.g., 95% credible intervals) in your visualizations. For examples of this beyond those I reported in this text, check out this nice (2018) tutorial by McCabe, Kim, and King. For visualization ideas specific to the brms-tidybayes framework, check out Kay’s (2021) Extracting and visualizing tidy draws from brms models.
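Here is a minimal sketch of one way to make such a plot with brms and tidybayes, presuming a univariate moderation fit with a focal predictor `x` and a moderator `w` (hypothetical names):

```r
library(tidyverse)
library(tidybayes)
library(brms)

# define the predictor grid: a range of x crossed with three values of the moderator w
nd <- crossing(x = seq(from = -2, to = 2, length.out = 50),
               w = c(-1, 0, 1))

# plot the conditional expectations with 95% interval ribbons
nd %>%
  add_epred_draws(fit) %>%
  ggplot(aes(x = x, y = .epred)) +
  stat_lineribbon(.width = .95, alpha = 1/2) +
  facet_wrap(~ w, labeller = label_both) +
  labs(y = "expected value of y")
```

For a quick first look, `conditional_effects(fit, effects = "x:w")` will also give you interval ribbons with minimal code.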

14.2.3 Reporting a conditional process analysis.

One of the challenges you are likely to face when writing about a conditional process analysis is staying within the page allowance that most journal editors provide. A well-written analysis will usually contain multiple tables, perhaps a figure or two to depict the conceptual model and perhaps an interaction or two, and enough text to describe what was found in substantive terms while also providing sufficient detail about the analysis for the reader to understand what was done. Many reviewers, editors, and readers will not be familiar with this approach and may need to be educated within the text, further lengthening the manuscript. Yet I am also amazed how much I am able to cut from my own writing with sufficient editing, so don’t be wedded to every word you write, and don’t be afraid to delete that sentence or two you spent much time pondering, crafting, and fine-tuning but that upon third or fourth reading really isn’t necessary to convey what needs to be conveyed.

That said, I would err on the side of presenting more information rather than less whenever space allows it. (p. 526)

As recommended just above, you can also cover all this with online supplemental material posted to a stable online repository, such as the Open Science Framework. I can’t recommend this enough.

14.3 Should I use structural equation modeling instead of regression analysis?

First to clarify, one can do structural equation modeling (SEM) as a Bayesian. At the time of this writing, brms is only capable of limited forms of SEM. In fact, all the multivariate models we have fit thus far can be thought of as special cases of Bayesian SEM (see here). However, it appears brms will offer expanded SEM capabilities sometime in the future. To keep up with Bürkner’s progress, you might intermittently check in on issue #303 on the brms GitHub page. And of course, you can always check out the blavaan package.
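If you would like to try the SEM route right now, blavaan uses lavaan-style model syntax with Stan (or JAGS) doing the estimation under the hood. Here is a rough sketch of a single-mediator model, where `d`, `x`, `m`, and `y` are placeholder names:

```r
library(blavaan)

sem_model <- '
  m ~ a * x
  y ~ b * m + cp * x

  # define the indirect and total effects from the labeled paths
  ab    := a * b
  total := cp + a * b
'

fit_sem <- bsem(sem_model, data = d)
summary(fit_sem)
```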

14.4 The pitfalls of subgroups analysis

If at all possible, just don’t do this. Hayes covered the following reasons why:

  • “First, the subgroups approach may not accurately reflect the process purportedly at work” (p. 531).
  • “Second, a direct or indirect effect may be descriptively different in the two groups but may not actually be different when subjected to a formal statistical test of differences” (pp. 531–532). For more on this point, check out Gelman and Stern’s (2006) The difference between “significant” and “not significant” is not itself statistically significant.
  • “Third, the subgroups approach conflates statistical significance with sample size” (p. 532).
  • “Finally, this approach requires that the proposed moderator be categorical” (p. 532).

14.5 Can a variable simultaneously mediate and moderate another variable’s effect?

“Just because something is mathematically possible doesn’t mean that it is sensible theoretically or substantively interpretable when it happens” (p. 540).

I suspect part of this issue has to do with confusion around the longitudinal nature of mediational processes and a confounding of state versus trait (i.e., within- and between-case variation). See the Cole, Maxwell, and Mitchell papers referenced above to start thinking about mediation within the context of longitudinal data. The issue of within- versus between-case variation is huge and IMO under-appreciated in the mediation literature. If you’re ready to blow your mind a little, I can think of no better place to start learning about this than Ellen Hamaker’s engaging (2012) chapter, Why researchers should think “within-person”: A paradigmatic rationale.
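One small, concrete step in that direction is separating the within- and between-person variation in a repeated-measures predictor before modeling it, such as by person-mean centering. A sketch, presuming long-format data `d` with an `id` variable and a time-varying predictor `x` (hypothetical names):

```r
library(tidyverse)

d <- d %>%
  group_by(id) %>%
  mutate(x_between = mean(x),            # each person's own mean (trait-like)
         x_within  = x - x_between) %>%  # deviations from that mean (state-like)
  ungroup()
```

Entering `x_within` and `x_between` as separate predictors keeps the two sources of variation from getting conflated in a single coefficient.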

14.6 Interaction between \(X\) and \(M\) in mediation analysis?

The mathematics of this approach relies on the counterfactual or potential outcomes approach to causal analysis that is popular in statistics, less widely known in the social science research community, and, to many, a little harder to digest. For discussions of this approach, see Imai, Keele, and Tingley (2010), Muthén and Asparouhov (2015), Valeri and VanderWeele (2013), and VanderWeele (2015).

14.7 Repeated measures designs

In all examples throughout this book, cases were measured on mediator(s) \(M\) and outcome \(Y\) only once. In nonexperimental studies, they were measured once on the causal antecedent \(X\), whereas in experimental studies they were assigned to one of two or three experimental conditions used as \(X\) in the model. Although such research designs are common, also common are “repeated measures” designs that involve measuring cases more than once on the variables in a mediation model. There are several forms such designs take, and there are approaches to mediation and conditional process analysis that can be used depending on the form of the design. (p. 541)

On the next page, Hayes introduced the multilevel approach to mediation. For more on this topic, check out the great (2018) paper from Vuorre and Bolger, Within-subject mediation analysis for experimental data in cognitive psychology and neuroscience, which introduced a package for Bayesian multilevel mediation models called bmlm (Vuorre, 2017, 2021). Yes, you can do this in brms.
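If you would rather stay within brms, here is a rough sketch of a within-person (1-1-1) mediation model in the spirit of the bmlm approach. The data frame `d`, the variables `x`, `m`, and `y`, and the grouping variable `id` are placeholders, and you would generally want to isolate the within-person variation in `x` and `m` first (e.g., by person-mean centering, as sketched above):

```r
library(brms)

# the shared | i | tag allows the person-level effects to correlate across equations
model_m <- bf(m ~ 1 + x + (1 + x | i | id))
model_y <- bf(y ~ 1 + x + m + (1 + x + m | i | id))

fit_mlm <- brm(
  data = d,
  family = gaussian,
  model_m + model_y + set_rescor(FALSE),
  cores = 4, seed = 14
)
```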

Hayes also discussed the cross-lagged panel model for longitudinal mediation (e.g., Valente & MacKinnon, 2017). I have not tried it, but I believe you could do this with current versions of brms. Here’s a thread on the Stan forums discussing how one might do so from a multilevel perspective.
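To give a sense of what that might look like, here is a sketch of a simple two-wave version in the spirit of the ANCOVA-style models Valente and MacKinnon (2017) discussed, with placeholder names (`x` for the manipulation, `m1`/`m2` and `y1`/`y2` for the two waves of the mediator and outcome):

```r
library(brms)

# each wave-2 variable is regressed on its wave-1 counterpart (the
# autoregressive path) plus its upstream causes
model_m2 <- bf(m2 ~ 1 + m1 + x)
model_y2 <- bf(y2 ~ 1 + y1 + m2 + x)

fit_panel <- brm(
  data = d,
  family = gaussian,
  model_m2 + model_y2 + set_rescor(FALSE),
  cores = 4, seed = 14
)
```

Treat this as a starting point rather than a full cross-lagged panel model; with more waves or latent variables, the SEM route discussed above may serve you better.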

For a more detailed discussion of mediation in panel designs like this, see Cole and Maxwell (2003), Little, Preacher, Selig, and Card (2007), and Selig and Preacher (2009).

Another analytical option is available when \(M\) and \(Y\) are measured at least three times, regardless of the number of measurements of \(X\). Parallel process latent growth modeling allows for \(X\) (either in a single measurement or manipulation, or its change over time) to influence the trajectory in change in the mediator, which in turn can influence the trajectory in the change in \(Y\) over time. [Yes, brms can do this]. See Cheong et al. (2003) and Selig and Preacher (2009) for discussions of the mathematics of mediation analysis in a latent growth context. (p. 545, emphasis in the original)

14.8 Dichotomous, ordinal, count, and survival outcomes

[Our version of] this book is focused squarely and exclusively on linear regression analysis using the [single-level Gaussian likelihood] as the computational backbone of mediation, moderation, and conditional process analysis. In all examples that included a mediation component, all mediator(s) \(M\) and final consequent variable \(Y\) were always treated as continuous dimensions with at least interval level measurement properties. But no doubt you will find yourself in a situation where \(M\) and/or \(Y\) is dichotomous, or an ordinal scale with only a few scale points, or perhaps a count variable. Although such variables can be modeled with [the Gaussian likelihood], doing so is controversial because there are better methods that respect the special statistical considerations that come up when such variables are on the left sides of equations. (p. 545)

Going beyond the Gaussian likelihood is often framed in terms of the generalized linear model (GLM). To wade into the GLM from within a Bayesian framework, I recommend Kruschke’s (2015) text, Doing Bayesian data analysis, Second Edition: A tutorial with R, JAGS, and Stan, or either edition of McElreath’s text, Statistical rethinking: A Bayesian course with examples in R and Stan (2020, 2015). I have free ebooks of all three wherein I have translated their code into a tidyverse and brms framework (Kurz, 2023a, 2023b, 2023c). Liberate yourself from the tyranny of the Gauss. Embrace the GLM.
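Mechanically, swapping in a different likelihood with brms is often just a matter of the `family` argument, even in multivariate models where the mediator and the outcome call for different families. As a brief sketch, imagine a mediation model with a dichotomous mediator and a count outcome, where `m01`, `y_count`, `x`, and `d` are placeholder names:

```r
library(brms)

# a bernoulli (logistic) sub-model for the dichotomous mediator and a
# poisson sub-model for the count outcome
model_m <- bf(m01 ~ 1 + x, family = bernoulli)
model_y <- bf(y_count ~ 1 + x + m01, family = poisson)

fit_glm <- brm(
  data = d,
  model_m + model_y,
  cores = 4, seed = 14
)
```

Do keep in mind that with non-Gaussian likelihoods, the coefficients live on the link scale (log-odds, log counts), which complicates how indirect effects are defined and interpreted; this is part of why the counterfactual framework mentioned in Section 14.6 is worth learning.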

Session info

sessionInfo()
## R version 4.2.2 (2022-10-31)
## Platform: x86_64-apple-darwin17.0 (64-bit)
## Running under: macOS Big Sur ... 10.16
## 
## Matrix products: default
## BLAS:   /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
## LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib
## 
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## loaded via a namespace (and not attached):
##  [1] bookdown_0.28   digest_0.6.31   R6_2.5.1        jsonlite_1.8.4  magrittr_2.0.3  evaluate_0.18  
##  [7] stringi_1.7.8   cachem_1.0.6    rlang_1.0.6     cli_3.6.0       rstudioapi_0.13 jquerylib_0.1.4
## [13] bslib_0.4.0     rmarkdown_2.16  tools_4.2.2     stringr_1.4.1   xfun_0.35       fastmap_1.1.0  
## [19] compiler_4.2.2  htmltools_0.5.3 knitr_1.40      sass_0.4.2

References

Bem, D. J. (1987). Writing the empirical journal article. In M. P. Zanna & J. M. Darley (Eds.), The complete academic: A practical guide for the beginning social scientist (pp. 171–201). Lawrence Erlbaum Associates.
Cheong, J., MacKinnon, D. P., & Khoo, S. T. (2003). Investigation of mediational processes using parallel process latent growth curve modeling. Structural Equation Modeling: A Multidisciplinary Journal, 10(2), 238. https://doi.org/10.1207/S15328007SEM1002_5
Cole, D. A., & Maxwell, S. E. (2003). Testing mediational models with longitudinal data: Questions and tips in the use of structural equation modeling. Journal of Abnormal Psychology, 112(4), 558–577. https://doi.org/10.1037/0021-843X.112.4.558
Cumming, G. (2014). The new statistics: Why and how. Psychological Science, 25(1), 7–29. https://doi.org/10.1177/0956797613504966
Earp, B. D., & Trafimow, D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Frontiers in Psychology, 6. https://doi.org/10.3389/fpsyg.2015.00621
Galak, J., LeBoeuf, R. A., Nelson, L. D., & Simmons, J. P. (2012). Correcting the past: Failures to replicate psi. Journal of Personality and Social Psychology, 103(6), 933–948. https://doi.org/10.1037/a0029709
Gelman, A. (2015). The connection between varying treatment effects and the crisis of unreplicable research: A Bayesian perspective. Journal of Management, 41(2), 632–643. https://doi.org/10.1177/0149206314525208
Gelman, A., & Stern, H. (2006). The difference between “significant” and “not significant” is not itself statistically significant. The American Statistician, 60(4), 328–331. https://doi.org/10.1198/000313006X152649
Hamaker, E. L. (2012). Why researchers should think "within-person": A paradigmatic rationale. In Handbook of research methods for studying daily life (pp. 43–61). The Guilford Press. https://www.guilford.com/books/Handbook-of-Research-Methods-for-Studying-Daily-Life/Mehl-Conner/9781462513055
Hayes, A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (Second edition). The Guilford Press. https://www.guilford.com/books/Introduction-to-Mediation-Moderation-and-Conditional-Process-Analysis/Andrew-Hayes/9781462534654
Hayes, A. F., Slater, M. D., & Snyder, L. B. (2008). The SAGE sourcebook of advanced data analysis methods for communication research. https://us.sagepub.com/en-us/nam/the-sage-sourcebook-of-advanced-data-analysis-methods-for-communication-research/book228339
Imai, K., Keele, L., & Tingley, D. (2010). A general approach to causal mediation analysis. Psychological Methods, 15(4), 309–334. https://doi.org/10.1037/a0020761
Kay, M. (2021). Extracting and visualizing tidy draws from brms models. https://mjskay.github.io/tidybayes/articles/tidy-brms.html
Kelley, K., & Preacher, K. J. (2012). On effect size. Psychological Methods, 17(2), 137. https://doi.org/10.1037/a0028086
Klein, R. A., Cook, C. L., Ebersole, C. R., Vitiello, C., Nosek, B. A., Chartier, C. R., Christopherson, C. D., Clay, S., Collisson, B., Crawford, J., Cromar, R., Vidamuerte, D., Gardiner, G., Gosnell, C., Grahe, J., Hall, C., Joy-Gaba, J., Legg, A. M., Levitan, C., … Ratliff, K. (2019). Many Labs 4: Failure to replicate mortality salience effect with and without original author involvement. PsyArXiv. https://doi.org/10.31234/osf.io/vef2c
Kruschke, J. K. (2015). Doing Bayesian data analysis: A tutorial with R, JAGS, and Stan. Academic Press. https://sites.google.com/site/doingbayesiandataanalysis/
Kruschke, J. K., & Liddell, T. M. (2018). The Bayesian New Statistics: Hypothesis testing, estimation, meta-analysis, and power analysis from a Bayesian perspective. Psychonomic Bulletin & Review, 25(1), 178–206. https://doi.org/10.3758/s13423-016-1221-4
Kurz, A. S. (2023a). Doing Bayesian data analysis in brms and the tidyverse (Version 1.1.0). https://bookdown.org/content/3686/
Kurz, A. S. (2023b). Statistical Rethinking with brms, ggplot2, and the tidyverse (version 1.3.0). https://bookdown.org/content/3890/
Kurz, A. S. (2023c). Statistical Rethinking with brms, ggplot2, and the tidyverse: Second Edition (version 0.4.0). https://bookdown.org/content/4857/
Little, T. D., Preacher, K. J., Selig, J. P., & Card, N. A. (2007). New developments in latent variable panel analyses of longitudinal data. International Journal of Behavioral Development, 31(4), 357–365. https://doi.org/10.1177/0165025407077757
Maxwell, S. E., & Cole, D. A. (2007). Bias in cross-sectional analyses of longitudinal mediation. Psychological Methods, 12(1), 23–44. https://doi.org/10.1037/1082-989X.12.1.23
Maxwell, S. E., Cole, D. A., & Mitchell, M. A. (2011). Bias in cross-sectional analyses of longitudinal mediation: Partial and complete mediation under an autoregressive model. Multivariate Behavioral Research, 46(5), 816–841. https://doi.org/10.1080/00273171.2011.606716
McCabe, C. J., Kim, D. S., & King, K. M. (2018). Improving present practices in the visual display of interactions. Advances in Methods and Practices in Psychological Science, 1(2), 147–165. https://doi.org/10.1177/2515245917746792
McElreath, R. (2020). Statistical rethinking: A Bayesian course with examples in R and Stan (Second Edition). CRC Press. https://xcelab.net/rm/statistical-rethinking/
McElreath, R. (2015). Statistical rethinking: A Bayesian course with examples in R and Stan. CRC press. https://xcelab.net/rm/statistical-rethinking/
Mitchell, M. A., & Maxwell, S. E. (2013). A comparison of the cross-sectional and sequential designs when assessing longitudinal mediation. Multivariate Behavioral Research, 48(3), 301–339. https://doi.org/10.1080/00273171.2013.784696
Muthén, B., & Asparouhov, T. (2015). Causal effects in mediation modeling: An introduction with applications to latent variables. Structural Equation Modeling: A Multidisciplinary Journal, 22(1), 12–23. https://doi.org/10.1080/10705511.2014.935843
Navarro, D. (2020). Paths in strange spaces: A comment on preregistration. https://doi.org/10.31234/osf.io/wxn58
Nelson, L. D., Simmons, J., & Simonsohn, U. (2018). Psychology’s renaissance. Annual Review of Psychology, 69(1), 511–534. https://doi.org/10.1146/annurev-psych-122216-011836
Pashler, H., & Wagenmakers, E. (2012). Editors’ introduction to the special section on replicability in psychological science: A crisis of confidence? Perspectives on Psychological Science, 7(6), 528–530. https://doi.org/10.1177/1745691612465253
Selig, J. P., & Preacher, K. J. (2009). Mediation models for longitudinal data in developmental research. Research in Human Development, 6(2-3), 144–164. https://doi.org/10.1080/15427600902911247
Valente, M. J., & MacKinnon, D. P. (2017). Comparing models of change to estimate the mediated effect in the pretest-posttest control group design. Structural Equation Modeling: A Multidisciplinary Journal, 24(3), 428–450. https://doi.org/10.1080/10705511.2016.1274657
Valeri, L., & VanderWeele, T. J. (2013). Mediation analysis allowing for exposure-mediator interactions and causal interpretation: Theoretical assumptions and implementation with SAS and SPSS macros. Psychological Methods, 18(2), 137. https://doi.org/10.1037/a0031034
VanderWeele, T. (2015). Explanation in causal inference: Methods for mediation and interaction (1st edition). Oxford University Press.
Vuorre, M. (2017). bmlm: Bayesian multilevel mediation [Manual]. https://cran.r-project.org/package=bmlm
Vuorre, M. (2021). bmlm: Bayesian multilevel mediation [Manual]. https://github.com/mvuorre/bmlm/
Vuorre, M., & Bolger, N. (2018). Within-subject mediation analysis for experimental data in cognitive psychology and neuroscience. Behavior Research Methods, 50(5), 2125–2143. https://doi.org/10.3758/s13428-017-0980-9
Yong, E. (2012). Replication studies: Bad copy. Nature News, 485(7398), 298. https://doi.org/10.1038/485298a