9.1 Small-study effect methods

The small-study effect methods we present here have been the conventional approach to publication bias for many years. Various methods to assess and control for publication bias have been developed, but we will only focus on the most important ones here.

The model behind small-study effect methods

According to Borenstein and colleagues (Borenstein et al. 2011), the model behind the most common small-study effect methods rests on these core assumptions:

  1. Because they involve a large commitment of resources and time, large studies are likely to get published, whether the results are significant or not
  2. Moderately sized studies are at greater risk of going missing, but with a moderate sample size even moderately sized effects are likely to become significant, which means that only some of these studies will be missing
  3. Small studies are at the greatest risk of producing non-significant results, and thus of going missing. Only small studies with a very large effect size become significant, and will be found in the published literature.

In accordance with these assumptions, the methods we present here focus particularly on small studies with small effect sizes, and on whether they are missing.
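To make these assumptions concrete, here is a minimal, purely illustrative simulation in base R. All of its ingredients are assumptions made for this sketch rather than values from our meta-analysis: the per-group sample sizes, the true effect of 0.3, and the crude approximation of the standard error of a standardized mean difference as the square root of 2/n. Large studies are always "published", while smaller studies only survive if they reach significance, which inflates the average published effect.

# Illustrative sketch only: simulate selective publication of small studies
set.seed(123)
n  <- c(rep(200, 10), rep(50, 10), rep(15, 10))   # per-group sample sizes: large, moderate, small
se <- sqrt(2 / n)                                 # rough SE of a standardized mean difference
g  <- rnorm(length(n), mean = 0.3, sd = se)       # observed effects around a true effect of 0.3
p  <- 2 * pnorm(abs(g / se), lower.tail = FALSE)  # two-sided p-value of each study
published <- n >= 200 | p < 0.05                  # smaller studies only get published if significant
mean(g[published])                                # published average overestimates the true 0.3

The average effect among the "published" studies comes out larger than the true effect, which is exactly the pattern the methods below try to detect.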

9.1.1 Funnel plots

The best way to visualize whether small studies with small effect sizes are missing is through funnel plots.

We can generate a funnel plot for our m.hksj meta-analysis output using the funnel() function in meta.

funnel(m.hksj,xlab = "Hedges' g")

The funnel plot basically consists of a funnel and two axes: the y-axis showing the Standard Error \(SE\) of each study, with larger studies (which thus have a smaller \(SE\)) plotted toward the top of the plot; and the x-axis showing the effect size of each study.

Given our assumptions, and in the case when there is no publication bias, all studies would lie symmetrically around our pooled effect size (the dashed line) within the shape of the funnel. When publication bias is present, we would expect the funnel to look asymmetrical, because only the small studies with a large effect size get published, while small studies without a significant, large effect are missing.

We see from the plot that in the case of our meta-analysis m.hksj, the latter is probably true. The plot is highly asymmetrical, with precisely the small studies with small effect sizes missing in the bottom-left corner of the plot.

We can also display the name of each study using the studlab parameter.

funnel(m.hksj,xlab = "g",studlab = TRUE)

Here, we see that the asymmetry is primarily driven by three studies with large effects but small sample sizes in the bottom-right corner of the plot. Interestingly, two of these studies are the ones we also detected in our outlier and influence analyses.

An even better way to inspect the funnel plot is through contour-enhanced funnel plots, which help to distinguish publication bias from other forms of asymmetry (Peters et al. 2008). Contour-enhanced funnel plots include colors signifying the significance level into which the effect size of each study falls. We can plot such funnels using this code:

funnel(m.hksj, xlab = "Hedges' g",
       contour = c(.95, .975, .99),
       col.contour = c("darkblue", "blue", "lightblue"))
legend(1.4, 0, c("p < 0.05", "p < 0.025", "p < 0.01"), bty = "n",
       fill = c("darkblue", "blue", "lightblue"))

We can see in the plot that while some studies have statistically significant effect sizes (blue background), others do not (white background). We also see that the moderately sized studies are a mix of significant and non-significant results, with slightly more significant ones, whereas the asymmetry is much larger for the small studies. This gives us a hint that publication bias might indeed be present in our analysis.

9.1.2 Testing for funnel plot asymmetry using Egger’s test

Egger’s test of the intercept (Egger et al. 1997) quantifies the funnel plot asymmetry and performs a statistical test.
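Conceptually, Egger's test regresses each study's standardized effect (the effect size divided by its standard error) on its precision (one divided by the standard error); an intercept that deviates markedly from zero signals funnel plot asymmetry. The following is a minimal sketch of that idea, assuming (as holds for meta objects) that the study effect sizes and standard errors are stored in m.hksj$TE and m.hksj$seTE; the variable names are ours, not part of meta.

z <- m.hksj$TE / m.hksj$seTE          # standardized effect of each study
precision <- 1 / m.hksj$seTE          # precision of each study
egger.lm <- lm(z ~ precision)         # Egger's regression: the intercept captures asymmetry
summary(egger.lm)$coefficients[1, ]   # intercept estimate, SE, t value, p-value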

We have prepared a function called eggers.test for you, which can be found below. The function is a wrapper for the metabias function in meta.

Again, R doesn’t know this function yet, so we have to let R learn it by copying and pasting the code underneath in its entirety into the console on the bottom left pane of RStudio, and then hit Enter ⏎.

eggers.test <- function(data) {
  # Run Egger's regression test via the metabias function in meta
  eggers <- metabias(data)

  # Extract and round the intercept and its standard error
  intercept    <- round(as.numeric(eggers$estimate[1]), digits = 3)
  se.intercept <- eggers$estimate[2]

  # Build an approximate 95% confidence interval for the intercept
  lower.intercept  <- round(intercept - 1.96 * se.intercept, digits = 2)
  higher.intercept <- round(intercept + 1.96 * se.intercept, digits = 2)
  ci.intercept     <- gsub(" ", "", paste(lower.intercept, "-", higher.intercept), fixed = TRUE)

  # Round the p-value of the test
  intercept.pval <- round(as.numeric(eggers$p.value), digits = 5)

  # Assemble and print the results
  eggers.output <- data.frame(intercept, ci.intercept, intercept.pval)
  names(eggers.output) <- c("intercept", "95%CI", "p-value")
  print("Results of Egger's test of the intercept")
  print(eggers.output)
}

Now we can use the eggers.test function. We only have to specify our meta-analysis output m.hksj as the data the function should use.

eggers.test(data=m.hksj)
## [1] "Results of Egger's test of the intercept"
##   intercept     95%CI p-value
## 1     4.111 2.39-5.83 0.00025

The function returns the intercept along with its confidence interval. We can see that the p-value of Egger’s test is significant (\(p<0.05\)), which means that there is substantial asymmetry in the funnel plot. This asymmetry could have been caused by publication bias.
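Because eggers.test is only a thin wrapper, you can also call the metabias function from meta directly on the meta-analysis object; the output is formatted differently, but it reports the same regression-based test the wrapper uses.

metabias(m.hksj)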

9.1.3 Duval & Tweedie’s trim-and-fill procedure

Duval & Tweedie’s trim-and-fill procedure (Duval and Tweedie 2000) is also based the funnel plot and its symmetry/asymmetry. When Egger’s test is significant, we can use this method to estimate what the actaul effect size would be had the “missing” small studies been published. The procedure imputes missing studies into the funnel plot until symmetry is reached again.

The trim-and-fill procedure includes the following five steps (Schwarzer, Carpenter, and Rücker 2015):

  1. Estimating the number of studies in the outlying (right) part of the funnel plot
  2. Removing (trimming) these effect sizes and pooling the results with the remaining effect sizes
  3. This pooled effect is then taken as the center of all effect sizes
  4. For each trimmed/removed study, an additional study is imputed, mirroring the effect of the study on the left side of the funnel plot
  5. Pooling the results with the imputed studies and the trimmed studies included

The trim-and-fill procedure can be performed using the trimfill function in meta, specifying our meta-analysis output as the input.

trimfill(m.hksj)
##                             SMD             95%-CI %W(random)
## Call et al.              0.7091 [ 0.1979;  1.2203]        3.8
## Cavanagh et al.          0.3549 [-0.0300;  0.7397]        4.1
## DanitzOrsillo            1.7912 [ 1.1139;  2.4685]        3.3
## de Vibe et al.           0.1825 [-0.0484;  0.4133]        4.4
## Frazier et al.           0.4219 [ 0.1380;  0.7057]        4.3
## Frogeli et al.           0.6300 [ 0.2458;  1.0142]        4.1
## Gallego et al.           0.7249 [ 0.2846;  1.1652]        4.0
## Hazlett-Stevens & Oren   0.5287 [ 0.1162;  0.9412]        4.0
## Hintz et al.             0.2840 [-0.0453;  0.6133]        4.2
## Kang et al.              1.2751 [ 0.6142;  1.9360]        3.4
## Kuhlmann et al.          0.1036 [-0.2781;  0.4853]        4.1
## Lever Taylor et al.      0.3884 [-0.0639;  0.8407]        3.9
## Phang et al.             0.5407 [ 0.0619;  1.0196]        3.9
## Rasanen et al.           0.4262 [-0.0794;  0.9317]        3.8
## Ratanasiripong           0.5154 [-0.1731;  1.2039]        3.3
## Shapiro et al.           1.4797 [ 0.8618;  2.0977]        3.5
## SongLindquist            0.6126 [ 0.1683;  1.0569]        4.0
## Warnecke et al.          0.6000 [ 0.1120;  1.0880]        3.9
## Filled: Warnecke et al.  0.0520 [-0.4360;  0.5401]        3.9
## Filled: SongLindquist    0.0395 [-0.4048;  0.4837]        4.0
## Filled: Frogeli et al.   0.0220 [-0.3621;  0.4062]        4.1
## Filled: Call et al.     -0.0571 [-0.5683;  0.4541]        3.8
## Filled: Gallego et al.  -0.0729 [-0.5132;  0.3675]        4.0
## Filled: Kang et al.     -0.6230 [-1.2839;  0.0379]        3.4
## Filled: Shapiro et al.  -0.8277 [-1.4456; -0.2098]        3.5
## Filled: DanitzOrsillo   -1.1391 [-1.8164; -0.4618]        3.3
## 
## Number of studies combined: k = 26 (with 8 added studies)
## 
##                         SMD            95%-CI    t p-value
## Random effects model 0.3431 [ 0.0994; 0.5868] 2.90  0.0077
## Prediction interval         [-0.8463; 1.5326]             
## 
## Quantifying heterogeneity:
## tau^2 = 0.3181; H = 2.05 [1.70; 2.47]; I^2 = 76.2% [65.4%; 83.7%]
## 
## Test of heterogeneity:
##       Q d.f.  p-value
##  105.15   25 < 0.0001
## 
## Details on meta-analytical method:
## - Inverse variance method
## - Sidik-Jonkman estimator for tau^2
## - Hartung-Knapp adjustment for random effects model
## - Trim-and-fill method to adjust for funnel plot asymmetry

We see that the procedure identified and trimmed eight studies ("with 8 added studies"). The overall effect estimated by the procedure is \(g = 0.34\).

Let’s compare this to our initial results.

m.hksj$TE.random
## [1] 0.593535

The initial pooled effect size was \(g = 0.59\), which is substantially larger than the bias-corrected estimate. If we assume that publication bias affected our analysis, the trim-and-fill procedure suggests that our initial results were overestimated, and that the “true” effect when controlling for selective publication might be \(g = 0.34\) rather than \(g = 0.59\).
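If you want to compare the two estimates programmatically, the object returned by trimfill behaves like a regular meta-analysis object, so its bias-corrected pooled effect can be read from the TE.random element. A small sketch (the object name m.hksj.corrected is ours):

m.hksj.corrected <- trimfill(m.hksj)$TE.random   # bias-corrected pooled effect (0.34 here)
m.hksj$TE.random - m.hksj.corrected              # difference to the uncorrected estimate (about 0.25)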

If we store the results of the trimfill function in an object, we can also create funnel plots including the imputed studies.

m.hksj.trimfill<-trimfill(m.hksj)
funnel(m.hksj.trimfill,xlab = "Hedges' g")
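To see whether the imputed studies land in regions of non-significance, we can also reuse the contour-enhanced funnel plot from above on the trim-and-fill object, with the same contours and legend placement as before:

funnel(m.hksj.trimfill, xlab = "Hedges' g",
       contour = c(.95, .975, .99),
       col.contour = c("darkblue", "blue", "lightblue"))
legend(1.4, 0, c("p < 0.05", "p < 0.025", "p < 0.01"), bty = "n",
       fill = c("darkblue", "blue", "lightblue"))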

References

Borenstein, Michael, Larry V Hedges, Julian PT Higgins, and Hannah R Rothstein. 2011. Introduction to Meta-Analysis. John Wiley & Sons.

Peters, Jaime L, Alex J Sutton, David R Jones, Keith R Abrams, and Lesley Rushton. 2008. “Contour-Enhanced Meta-Analysis Funnel Plots Help Distinguish Publication Bias from Other Causes of Asymmetry.” Journal of Clinical Epidemiology 61 (10). Elsevier: 991–96.

Egger, Matthias, George Davey Smith, Martin Schneider, and Christoph Minder. 1997. “Bias in Meta-Analysis Detected by a Simple, Graphical Test.” BMJ 315 (7109). British Medical Journal Publishing Group: 629–34.

Duval, Sue, and Richard Tweedie. 2000. “Trim and Fill: A Simple Funnel-Plot–based Method of Testing and Adjusting for Publication Bias in Meta-Analysis.” Biometrics 56 (2). Wiley Online Library: 455–63.

Schwarzer, Guido, James R Carpenter, and Gerta Rücker. 2015. Meta-Analysis with R. Springer.
