4.2 Random-Effects-Model

Previously, we showed how to perform a fixed-effect-model meta-analysis using the metagen and metacont functions.

However, we can only use the fixed-effect-model when we can assume that all included studies come from the same population. In practice, this is hardly ever the case: interventions may vary in certain characteristics, and the samples and methods used in each study may differ slightly. In this case, we cannot assume that all studies stem from the same hypothesized “population” of studies.

The same applies once we detect statistical heterogeneity in our fixed-effect-model meta-analysis, as indicated by \(I^{2}>0\%\).

So, it is very likely that you will actually use a random-effects-model for your meta-analysis. Thankfully, conducting a random-effects-model meta-analysis in R requires little more effort than a fixed-effect-model one.

The Idea behind the Random-Effects-Model

In the Random-Effects-Model, we want to account for our assumption that the study effect estimates show more variance than when drawn from a single population (Schwarzer, Carpenter, and Rücker 2015). The random-effects-model works under the so-called assumption of exchangeability. This means that in random-effects model meta-analyses, we not only assume that effects of individual studies deviate from the true intervention effect of all studies due to sampling error, but that there is another source of variance introduced by the fact that the studies do not stem from one single population, but are drawn from a “universe” of populations.

We therefore assume that there is not only one true effect size, but a distribution of true effect sizes, and we want to estimate the mean of this distribution. Recall that the fixed-effect-model assumes that when the observed effect size \(\hat\theta_k\) of an individual study \(k\) deviates from the true effect size \(\theta_F\), the only reason for this is that the estimate is burdened by (sampling) error \(\epsilon_k\):

\[\hat\theta_k = \theta_F + \epsilon_k\]

The random-effects-model assumes that, in addition, there is a second source of error \(\zeta_k\). This second source of error arises because even the true effect size \(\theta_k\) of our study \(k\) is itself only one draw from an over-arching distribution of true effect sizes with mean \(\mu\) (Borenstein et al. 2011).

Figure: An illustration of the parameters of the random-effects-model.

The formula for the random-effects-model therefore looks like this:

\[\hat\theta_k = \mu + \epsilon_k + \zeta_k\]

When calculating a random-effects-model meta-analysis, we therefore also have to take the error \(\zeta_k\) into account. To do this, we have to estimate the variance of the distribution of true effect sizes, which is denoted by \(\tau^{2}\). There are several estimators for \(\tau^{2}\), many of which are implemented in meta. We will give you more details about them in the next section.
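To make \(\zeta_k\) and \(\tau^{2}\) more tangible, below is a minimal simulation sketch in R (all numbers are made up for illustration). We first draw each study's true effect from a distribution with mean \(\mu\) and variance \(\tau^{2}\), add sampling error, and then recover \(\tau^{2}\) from the "observed" effects using the DerSimonian-Laird formula, one of the estimators we will encounter in the next section.

set.seed(123)
k    <- 18    # number of studies (hypothetical)
mu   <- 0.6   # mean of the distribution of true effect sizes
tau2 <- 0.1   # between-study variance (tau^2)
se   <- runif(k, 0.15, 0.35)                        # hypothetical standard errors

theta.k     <- rnorm(k, mean = mu, sd = sqrt(tau2)) # true effects (zeta_k deviations)
theta.k.hat <- rnorm(k, mean = theta.k, sd = se)    # observed effects (epsilon_k added)

# DerSimonian-Laird estimate of tau^2 from the "observed" data
w <- 1 / se^2                                       # inverse-variance weights
Q <- sum(w * (theta.k.hat - sum(w * theta.k.hat) / sum(w))^2)
max(0, (Q - (k - 1)) / (sum(w) - sum(w^2) / sum(w)))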

Even though it is conventional to use random-effects-model meta-analyses in psychological outcome research, applying this model is not undisputed. The random-effects-model pays more attention to small studies when pooling the overall effect in a meta-analysis (Schwarzer, Carpenter, and Rücker 2015). Yet, small studies in particular are often fraught with bias (see Chapter 8.1). This is why some have argued that the fixed-effect-model should nearly always be preferred (Poole and Greenland 1999; Furukawa, McGuire, and Barbui 2003).

4.2.1 Estimators for \(\tau^2\) in the random-effects-model

Operationally, conducting a random-effects-model meta-analysis in R is not so different from conducting a fixed-effect-model meta-analysis. Yet, we do have to choose an estimator for \(\tau^{2}\). Here are the estimators implemented in meta, which we can choose using the method.tau argument in our meta-analysis code.

Code   Estimator
DL     DerSimonian-Laird
PM     Paule-Mandel
REML   Restricted Maximum-Likelihood
ML     Maximum-Likelihood
HS     Hunter-Schmidt
SJ     Sidik-Jonkman
HE     Hedges
EB     Empirical Bayes

Which estimator should I use?

All of these estimators derive \(\tau^{2}\) using a slightly different approach, leading to somewhat different pooled effect size estimates and confidence intervals. Whether one of these approaches is more or less biased often depends on the context and on parameters such as the number of studies \(k\), the number of participants \(n\) in each study, how much \(n\) varies from study to study, and how big \(\tau^{2}\) is.

An overview paper by Veroniki and colleagues (Veroniki et al. 2016) provides an excellent summary of the current evidence on which estimator might be more or less biased in which situation. The article is openly accessible.

Especially in medical and psychological research, by far the most frequently used estimator is the DerSimonian-Laird estimator (DerSimonian and Laird 1986). Part of this widespread use might be attributable to the fact that programs such as RevMan or Comprehensive Meta-Analysis (older versions) only offer this estimator. It is also the default option in the meta package in R. Simulation studies, however, have shown that the Maximum-Likelihood, Sidik-Jonkman, and Empirical Bayes estimators have better properties in estimating the between-study variance (Sidik and Jonkman 2007; Viechtbauer 2005).
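If you want to get a feel for how much the choice of estimator matters for your own data, a small sketch like the one below can help. It loops over the method.tau options from the table above and prints the resulting \(\tau^{2}\) estimate for the madata dataset used later in this chapter (the tau component of a meta object stores the square root of \(\tau^{2}\)).

library(meta)
for (est in c("DL", "PM", "REML", "ML", "HS", "SJ", "HE", "EB")) {
  m <- metagen(TE, seTE, data = madata, sm = "SMD", method.tau = est)
  cat(est, ": tau^2 =", round(m$tau^2, 4), "\n")  # m$tau^2 squares m$tau
}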

The Hartung-Knapp-Sidik-Jonkman method

Another criticism of the DerSimonian-Laird method is that, when estimating the variance of our pooled effect \(var(\hat\mu)\), this method is prone to producing false positives (IntHout, Ioannidis, and Borm 2014). This is especially the case when the number of studies is small and when there is substantial heterogeneity (Hartung 1999; Hartung and Knapp 2001a, 2001b; Follmann and Proschan 1999; Makambi 2004). Unfortunately, this is very often the case in meta-analyses in the medical field or in psychology. This is quite a problem, as we do not want to find pooled effects to be statistically significant when in fact they are not!

The Hartung-Knapp-Sidik-Jonkman (HKSJ) method was thus proposed as a way to produce more robust estimates of \(var(\hat\mu)\). It has been shown that this method substantially outperforms the DerSimonian-Laird method in many cases (IntHout, Ioannidis, and Borm 2014). The HKSJ method can also be applied very easily in R, while many other programs do not offer this option yet; this is another big plus of doing a meta-analysis in R. The HKSJ method usually leads to more conservative results, indicated by wider confidence intervals.
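For transparency, here is a hand-computed sketch of what the HKSJ adjustment does (assuming \(\tau^{2}\) has already been estimated, e.g. with the Sidik-Jonkman estimator): the variance of the pooled effect is replaced by a weighted residual variance, and the confidence interval is based on a t-distribution with \(k-1\) degrees of freedom instead of the normal distribution.

# y: observed effect sizes, se: their standard errors, tau2: estimated tau^2
hksj.ci <- function(y, se, tau2, level = 0.95) {
  k <- length(y)
  w <- 1 / (se^2 + tau2)            # random-effects weights
  mu.hat <- sum(w * y) / sum(w)     # pooled effect
  # HKSJ variance estimator: weighted residual variance scaled by sum(w)
  var.hksj <- sum(w * (y - mu.hat)^2) / ((k - 1) * sum(w))
  # t-based confidence interval with k-1 degrees of freedom
  mu.hat + c(-1, 1) * qt(1 - (1 - level) / 2, df = k - 1) * sqrt(var.hksj)
}
# Illustrative call (tau2 value hypothetical): hksj.ci(madata$TE, madata$seTE, tau2 = 0.13)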

Residual concerns with the Hartung-Knapp-Sidik-Jonkman method

It should be noted, however, that the HKSJ method is not uncontroversial. Some authors argue that other (standard) pooling models should also be used in addition to the HKSJ as a sensitivity analysis (Wiksten, Rücker, and Schwarzer 2016). Jackson and colleagues (Jackson et al. 2017) present four residual concerns with this method, which you may want to take into account before selecting your meta-analytic method.

4.2.2 Pre-calculated effect size data

After all this input, you will see that even random-effects-model meta-analyses are very easy to code in R. Compared to the fixed-effect-model meta-analysis in Chapter 4.1, there are just three extra parameters we have to define. In particular, as we have described before, we have to tell R which estimator for the between-study variance \(\tau^{2}\) we want to use, and whether we want to use the Knapp-Hartung(-Sidik-Jonkman) adjustment.

Here’s a table of all parameters we have to define in our code to perform a random-effects-model meta-analysis with pre-calculated effect sizes:

Parameter         Function
TE                This tells R to use the TE column to retrieve the effect sizes for each study
seTE              This tells R to use the seTE column to retrieve the standard error for each study
data=             After =, paste the name of your dataset here
studlab=paste()   This tells the function where the labels for each study are stored. If you named the spreadsheet columns as advised, this should be studlab=paste(Author)
comb.fixed=       Whether to use a fixed-effect model
comb.random=      Whether to use a random-effects model. This has to be set to TRUE
method.tau=       Which estimator to use for the between-study variance
hakn=             Whether to use the Knapp-Hartung method
prediction=       Whether to print a prediction interval for the effect of future studies based on present evidence
sm=               The summary measure we want to calculate. We can either calculate the mean difference (MD) or Hedges’ g (SMD)

I will use my madata dataset again to do the meta-analysis. For illustrative purposes, let us use the Sidik-Jonkman estimator ("SJ") and the HKSJ method. To do this analysis, make sure that meta as well as metafor are loaded in R.

library(meta)
library(metafor)

Now, let us code our random-effects-model meta-analysis. Remember, as our effect size data are pre-calculated, I will use the meta::metagen() function.

m.hksj <- metagen(TE,
                  seTE,
                  data = madata,
                  studlab = paste(Author),
                  comb.fixed = FALSE,
                  comb.random = TRUE,
                  method.tau = "SJ",
                  hakn = TRUE,
                  prediction = TRUE,
                  sm = "SMD")
m.hksj
##                           SMD            95%-CI %W(random)
## Call et al.            0.7091 [ 0.1979; 1.2203]        5.2
## Cavanagh et al.        0.3549 [-0.0300; 0.7397]        6.1
## DanitzOrsillo          1.7912 [ 1.1139; 2.4685]        4.2
## de Vibe et al.         0.1825 [-0.0484; 0.4133]        7.1
## Frazier et al.         0.4219 [ 0.1380; 0.7057]        6.8
## Frogeli et al.         0.6300 [ 0.2458; 1.0142]        6.1
## Gallego et al.         0.7249 [ 0.2846; 1.1652]        5.7
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412]        5.9
## Hintz et al.           0.2840 [-0.0453; 0.6133]        6.5
## Kang et al.            1.2751 [ 0.6142; 1.9360]        4.3
## Kuhlmann et al.        0.1036 [-0.2781; 0.4853]        6.1
## Lever Taylor et al.    0.3884 [-0.0639; 0.8407]        5.6
## Phang et al.           0.5407 [ 0.0619; 1.0196]        5.4
## Rasanen et al.         0.4262 [-0.0794; 0.9317]        5.3
## Ratanasiripong         0.5154 [-0.1731; 1.2039]        4.1
## Shapiro et al.         1.4797 [ 0.8618; 2.0977]        4.5
## SongLindquist          0.6126 [ 0.1683; 1.0569]        5.7
## Warnecke et al.        0.6000 [ 0.1120; 1.0880]        5.4
## 
## Number of studies combined: k = 18
## 
##                         SMD            95%-CI    t  p-value
## Random effects model 0.5935 [ 0.3891; 0.7979] 6.13 < 0.0001
## Prediction interval         [-0.2084; 1.3954]              
## 
## Quantifying heterogeneity:
## tau^2 = 0.1337; H = 1.64 [1.27; 2.11]; I^2 = 62.6% [37.9%; 77.5%]
## 
## Test of heterogeneity:
##      Q d.f. p-value
##  45.50   17  0.0002
## 
## Details on meta-analytical method:
## - Inverse variance method
## - Sidik-Jonkman estimator for tau^2
## - Hartung-Knapp adjustment for random effects model

The output shows that our estimated effect is \(g=0.59\), and the 95% confidence interval stretches from \(g=0.39\) to \(0.80\) (rounded). It also becomes clear that this effect is different (and larger) than the one we found in the fixed-effect-model meta-analysis in Chapter 4.1 (\(g=0.48\)).

Let us compare this to the output using the DerSimonian-Laird estimator, and when setting hakn = FALSE. As this estimator is the default, I do not have to define method.tau this time.

m.dl <- metagen(TE,
                seTE,
                data = madata,
                studlab = paste(Author),
                comb.fixed = FALSE,
                comb.random = TRUE,
                hakn = FALSE,
                prediction = TRUE,
                sm = "SMD")
m.dl
##                           SMD            95%-CI %W(random)
## Call et al.            0.7091 [ 0.1979; 1.2203]        5.0
## Cavanagh et al.        0.3549 [-0.0300; 0.7397]        6.3
## DanitzOrsillo          1.7912 [ 1.1139; 2.4685]        3.7
## de Vibe et al.         0.1825 [-0.0484; 0.4133]        8.0
## Frazier et al.         0.4219 [ 0.1380; 0.7057]        7.4
## Frogeli et al.         0.6300 [ 0.2458; 1.0142]        6.3
## Gallego et al.         0.7249 [ 0.2846; 1.1652]        5.7
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412]        6.0
## Hintz et al.           0.2840 [-0.0453; 0.6133]        6.9
## Kang et al.            1.2751 [ 0.6142; 1.9360]        3.8
## Kuhlmann et al.        0.1036 [-0.2781; 0.4853]        6.3
## Lever Taylor et al.    0.3884 [-0.0639; 0.8407]        5.6
## Phang et al.           0.5407 [ 0.0619; 1.0196]        5.3
## Rasanen et al.         0.4262 [-0.0794; 0.9317]        5.1
## Ratanasiripong         0.5154 [-0.1731; 1.2039]        3.6
## Shapiro et al.         1.4797 [ 0.8618; 2.0977]        4.1
## SongLindquist          0.6126 [ 0.1683; 1.0569]        5.7
## Warnecke et al.        0.6000 [ 0.1120; 1.0880]        5.2
## 
## Number of studies combined: k = 18
## 
##                         SMD            95%-CI    z  p-value
## Random effects model 0.5741 [ 0.4082; 0.7399] 6.78 < 0.0001
## Prediction interval         [-0.0344; 1.1826]              
## 
## Quantifying heterogeneity:
## tau^2 = 0.0752; H = 1.64 [1.27; 2.11]; I^2 = 62.6% [37.9%; 77.5%]
## 
## Test of heterogeneity:
##      Q d.f. p-value
##  45.50   17  0.0002
## 
## Details on meta-analytical method:
## - Inverse variance method
## - DerSimonian-Laird estimator for tau^2

We see that the overall effect size estimate using this estimator is similar to the previous one (\(g=0.57\)), but the confidence interval is narrower because we did not adjust it using the HKSJ method.
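The pooled results of both models can also be extracted directly from the meta objects and compared side by side; the relevant components are TE.random, lower.random and upper.random.

data.frame(method = c("SJ + HKSJ", "DL"),
           SMD    = c(m.hksj$TE.random,    m.dl$TE.random),
           lower  = c(m.hksj$lower.random, m.dl$lower.random),
           upper  = c(m.hksj$upper.random, m.dl$upper.random))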

4.2.3 Raw effect size data

If we use raw effect size data, such as the data stored in my metacont dataset, we can use the meta::metacont() function again. The parameters stay the same as before. I will name the meta-analysis results m.hksj.raw this time.

m.hksj.raw <- metacont(Ne,
                       Me,
                       Se,
                       Nc,
                       Mc,
                       Sc,
                       data = metacont,
                       studlab = paste(Author),
                       comb.fixed = FALSE,
                       comb.random = TRUE,
                       method.tau = "SJ",
                       hakn = TRUE,
                       prediction = TRUE,
                       sm = "SMD")
m.hksj.raw
##              SMD             95%-CI %W(random)
## Cavanagh -0.4118 [-0.8081; -0.0155]       15.7
## Day      -0.2687 [-0.6154;  0.0781]       17.7
## Frazier  -0.7734 [-1.0725; -0.4743]       19.7
## Gaffney  -0.7303 [-1.2542; -0.2065]       11.7
## Greer    -0.7624 [-1.0992; -0.4256]       18.1
## Harrer   -0.1669 [-0.5254;  0.1916]       17.2
## 
## Number of studies combined: k = 6
## 
##                          SMD             95%-CI     t p-value
## Random effects model -0.5161 [-0.8043; -0.2280] -4.60  0.0058
## Prediction interval          [-1.1947;  0.1625]              
## 
## Quantifying heterogeneity:
## tau^2 = 0.0472; H = 1.51 [1.00; 2.38]; I^2 = 56.1% [0.0%; 82.3%]
## 
## Test of heterogeneity:
##      Q d.f. p-value
##  11.39    5  0.0441
## 
## Details on meta-analytical method:
## - Inverse variance method
## - Sidik-Jonkman estimator for tau^2
## - Hartung-Knapp adjustment for random effects model
## - Hedges' g (bias corrected standardised mean difference)
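As a sanity check, the prediction interval printed above can be reproduced by hand from the components of the meta object. The sketch below assumes that meta follows the common Higgins-style formula with a t-distribution on \(k-2\) degrees of freedom; for our data, it reproduces the interval \([-1.1947; 0.1625]\) shown in the output.

k <- m.hksj.raw$k
m.hksj.raw$TE.random +
  c(-1, 1) * qt(0.975, df = k - 2) *
  sqrt(m.hksj.raw$tau^2 + m.hksj.raw$seTE.random^2)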




References

Schwarzer, Guido, James R Carpenter, and Gerta Rücker. 2015. Meta-Analysis with R. Springer.

Borenstein, Michael, Larry V Hedges, Julian PT Higgins, and Hannah R Rothstein. 2011. Introduction to Meta-Analysis. John Wiley & Sons.

Poole, Charles, and Sander Greenland. 1999. “Random-Effects Meta-Analyses Are Not Always Conservative.” American Journal of Epidemiology 150 (5). Oxford University Press: 469–75.

Furukawa, Toshi A, Hugh McGuire, and Corrado Barbui. 2003. “Low Dosage Tricyclic Antidepressants for Depression.” Cochrane Database of Systematic Reviews, no. 3. John Wiley & Sons, Ltd.

Veroniki, Areti Angeliki, Dan Jackson, Wolfgang Viechtbauer, Ralf Bender, Jack Bowden, Guido Knapp, Oliver Kuss, Julian PT Higgins, Dean Langan, and Georgia Salanti. 2016. “Methods to Estimate the Between-Study Variance and Its Uncertainty in Meta-Analysis.” Research Synthesis Methods 7 (1). Wiley Online Library: 55–79.

DerSimonian, Rebecca, and Nan Laird. 1986. “Meta-Analysis in Clinical Trials.” Controlled Clinical Trials 7 (3). Elsevier: 177–88.

Sidik, Kurex, and Jeffrey N Jonkman. 2007. “A Comparison of Heterogeneity Variance Estimators in Combining Results of Studies.” Statistics in Medicine 26 (9). Wiley Online Library: 1964–81.

Viechtbauer, Wolfgang. 2005. “Bias and Efficiency of Meta-Analytic Variance Estimators in the Random-Effects Model.” Journal of Educational and Behavioral Statistics 30 (3). Sage Publications Sage CA: Los Angeles, CA: 261–93.

IntHout, Joanna, John PA Ioannidis, and George F Borm. 2014. “The Hartung-Knapp-Sidik-Jonkman Method for Random Effects Meta-Analysis Is Straightforward and Considerably Outperforms the Standard Dersimonian-Laird Method.” BMC Medical Research Methodology 14 (1). BioMed Central: 25.

Hartung, Joachim. 1999. “An Alternative Method for Meta-Analysis.” Biometrical Journal: Journal of Mathematical Methods in Biosciences 41 (8). Wiley Online Library: 901–16.

Hartung, Joachim, and Guido Knapp. 2001a. “A Refined Method for the Meta-Analysis of Controlled Clinical Trials with Binary Outcome.” Statistics in Medicine 20 (24). Wiley Online Library: 3875–89.

Hartung, Joachim, and Guido Knapp. 2001b. “On Tests of the Overall Treatment Effect in Meta-Analysis with Normally Distributed Responses.” Statistics in Medicine 20 (12). Wiley Online Library: 1771–82.

Follmann, Dean A, and Michael A Proschan. 1999. “Valid Inference in Random Effects Meta-Analysis.” Biometrics 55 (3). Wiley Online Library: 732–37.

Makambi, Kepher H. 2004. “The Effect of the Heterogeneity Variance Estimator on Some Tests of Treatment Efficacy.” Journal of Biopharmaceutical Statistics 14 (2). Taylor & Francis: 439–49.

Wiksten, Anna, Gerta Rücker, and Guido Schwarzer. 2016. “Hartung–Knapp Method Is Not Always Conservative Compared with Fixed-Effect Meta-Analysis.” Statistics in Medicine 35 (15). Wiley Online Library: 2503–15.

Jackson, Dan, Martin Law, Gerta Rücker, and Guido Schwarzer. 2017. “The Hartung-Knapp Modification for Random-Effects Meta-Analysis: A Useful Refinement but Are There Any Residual Concerns?” Statistics in Medicine 36 (25). Wiley Online Library: 3923–34.
