6.3 Influence Analyses

We have now shown you how to detect and remove extreme effect sizes (outliers) in your meta-analysis. As we have mentioned before, however, statistical outliers are not the only studies which may cause concerns regarding the robustness of our pooled effect. It is also possible that some studies in a meta-analysis exert a very high influence on our overall results. For example, we might find that an overall effect is not significant, when in fact a highly significant effect emerges as soon as we remove one particular study from our analysis. Such information is highly important when we want to communicate the results of our meta-analysis to the public.

Here, we present techniques which dig a little deeper than simple outlier removal. These techniques are based on the Leave-One-Out method, in which we recalculate the results of our meta-analysis \(K\) times, each time leaving out one study (so each recalculation is based on the remaining \(K-1\) studies). This way, we can more easily detect the studies which influence the overall estimate of our meta-analysis the most, and better assess whether this influence distorts our pooled effect (Viechtbauer and Cheung 2010). Such analyses are therefore called Influence Analyses. We have created the function InfluenceAnalysis for you, which conducts these analyses and visualizes the results all in one. This function is part of the dmetar package. If you have the package installed already, you have to load it into your library first.

library(dmetar)

If you do not want to use the dmetar package, you can find the source code for this function here. In this case, R does not know the function yet, so we have to let R learn it by copying and pasting the code in its entirety into the console in the bottom-left pane of RStudio, and then hitting Enter ⏎. The function requires the ggplot2, ggrepel, forcats, dplyr, grid, gridExtra, metafor, and meta packages to work.
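
Before we turn to the parameters of InfluenceAnalysis, here is a minimal sketch of the leave-one-out idea itself, written "by hand" with the meta package. It assumes that m.hksj is a metagen object (as in the previous chapters) storing the effect sizes in TE and the standard errors in seTE, and that the Sidik-Jonkman estimator with the Knapp-Hartung adjustment was used; depending on your version of meta, some argument names may differ (for example, hakn has been renamed method.random.ci in newer versions).

library(meta)

# Number of studies in the meta-analysis
k <- length(m.hksj$TE)

# Refit the random-effects model k times, each time leaving out one study,
# and store the recalculated pooled effect and I-squared value
loo <- sapply(seq_len(k), function(i) {
  m <- metagen(TE = m.hksj$TE[-i],
               seTE = m.hksj$seTE[-i],
               method.tau = "SJ",   # Sidik-Jonkman estimator
               hakn = TRUE)         # Knapp-Hartung adjustment
  c(effect = m$TE.random, I2 = m$I2)
})

# One row per omitted study
round(t(loo), 3)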

The InfluenceAnalysis function has several parameters which we have to define.

  • x: An object of class meta, generated by the metabin, metagen, metacont, metacor, metainc, or metaprop function.
  • random: Logical. Should the random-effects model be used to generate the influence diagnostics? Uses the method.tau specified in the meta object if it is one of 'DL', 'HE', 'SJ', 'ML', 'REML', 'EB', 'HS' or 'GENQ' (to ensure compatibility with the metafor package); otherwise, the DerSimonian-Laird ('DL'; DerSimonian and Laird 1986) estimator is used. FALSE by default.
  • subplot.heights: Numeric vector of length two. Specifies the heights of the first (first number) and second (second number) row of the overall results plot generated by the function. Default is c(30,18).
  • subplot.widths: Numeric vector of length two. Specifies the widths of the first (first number) and second (second number) column of the overall results plot generated by the function. Default is c(30,30).
  • forest.lims: Numeric vector of length two. Specifies the x-axis limits of the forest plots generated by the function. Use 'default' if standard settings should be used (this is the default).
  • return.separate.plots: Logical. Should the influence plots be returned as separate plots instead of one overall plot? Additionally returns a data frame containing the data used for plotting. If set to TRUE, the output of the function must be saved in a variable; specific plots can then be accessed by selecting the plot element and using plot().
  • text.scale: Positive numeric. Scaling factor for the text geoms used in the plot. Values <1 shrink the text, while values >1 increase the text size. Default is 1.

This is how the function call looks for my m.hksj data. I will save the output of the function as inf.analysis.

inf.analysis <- InfluenceAnalysis(x = m.hksj,
                                  random = TRUE)
## [===========================================================================] DONE
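
If you would rather export or post-process the influence plots individually, the function can also be called with return.separate.plots = TRUE and, if needed, a different text.scale. A possible call based on the parameters described above might look like this; as noted in the parameter list, the output must then be saved to an object, from which specific plots can be drawn with plot().

inf.sep <- InfluenceAnalysis(x = m.hksj,
                             random = TRUE,
                             return.separate.plots = TRUE,
                             text.scale = 1.2)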

Now, let us have a look at the output, using the summary function.

summary(inf.analysis)
## Leave-One-Out Analysis (Sorted by I2) 
##  ----------------------------------- 
##                                 Effect  LLCI  ULCI    I2
## Omitting DanitzOrsillo           0.527 0.362 0.691 0.481
## Omitting Shapiro et al.          0.544 0.358 0.731 0.546
## Omitting de Vibe et al.          0.624 0.414 0.835 0.576
## Omitting Kang et al.             0.560 0.360 0.761 0.598
## Omitting Kuhlmann et al.         0.624 0.417 0.831 0.614
## Omitting Hintz et al.            0.616 0.401 0.830 0.636
## Omitting Gallego et al.          0.588 0.370 0.806 0.638
## Omitting Call et al.             0.590 0.372 0.807 0.642
## Omitting Frogeli et al.          0.594 0.374 0.813 0.644
## Omitting Cavanagh et al.         0.611 0.394 0.827 0.645
## Omitting SongLindquist           0.595 0.376 0.814 0.646
## Omitting Frazier et al.          0.608 0.390 0.826 0.647
## Omitting Lever Taylor et al.     0.608 0.391 0.824 0.647
## Omitting Warnecke et al.         0.596 0.377 0.814 0.647
## Omitting Hazlett-Stevens & Oren  0.600 0.381 0.819 0.648
## Omitting Phang et al.            0.599 0.381 0.818 0.648
## Omitting Rasanen et al.          0.605 0.388 0.822 0.648
## Omitting Ratanasiripong          0.599 0.382 0.816 0.648
## 
## 
## Influence Diagnostics 
##  ------------------- 
##                                 rstudent dffits cook.d cov.r QE.del   hat
## Omitting Call et al.               0.253  0.038  0.001 1.114 44.706 0.052
## Omitting Cavanagh et al.          -0.586 -0.163  0.028 1.101 45.066 0.061
## Omitting DanitzOrsillo             2.828  0.746  0.423 0.694 30.819 0.042
## Omitting de Vibe et al.           -1.121 -0.304  0.091 1.058 37.742 0.071
## Omitting Frazier et al.           -0.445 -0.137  0.020 1.120 45.317 0.068
## Omitting Frogeli et al.            0.082 -0.002  0.000 1.127 44.882 0.061
## Omitting Gallego et al.            0.302  0.053  0.003 1.118 44.260 0.057
## Omitting Hazlett-Stevens & Oren   -0.160 -0.062  0.004 1.122 45.447 0.059
## Omitting Hintz et al.             -0.790 -0.215  0.047 1.087 44.006 0.065
## Omitting Kang et al.               1.445  0.332  0.104 0.970 39.829 0.043
## Omitting Kuhlmann et al.          -1.242 -0.301  0.087 1.025 41.500 0.061
## Omitting Lever Taylor et al.      -0.483 -0.134  0.019 1.103 45.336 0.056
## Omitting Phang et al.             -0.126 -0.053  0.003 1.118 45.439 0.054
## Omitting Rasanen et al.           -0.381 -0.109  0.012 1.105 45.456 0.053
## Omitting Ratanasiripong           -0.159 -0.055  0.003 1.101 45.493 0.041
## Omitting Shapiro et al.            2.037  0.516  0.230 0.867 35.207 0.045
## Omitting SongLindquist             0.039 -0.014  0.000 1.122 45.146 0.057
## Omitting Warnecke et al.           0.009 -0.021  0.000 1.119 45.263 0.054
##                                 weight infl
## Omitting Call et al.             5.215     
## Omitting Cavanagh et al.         6.107     
## Omitting DanitzOrsillo           4.156    *
## Omitting de Vibe et al.          7.128     
## Omitting Frazier et al.          6.801     
## Omitting Frogeli et al.          6.112     
## Omitting Gallego et al.          5.712     
## Omitting Hazlett-Stevens & Oren  5.910     
## Omitting Hintz et al.            6.496     
## Omitting Kang et al.             4.252     
## Omitting Kuhlmann et al.         6.129     
## Omitting Lever Taylor et al.     5.627     
## Omitting Phang et al.            5.439     
## Omitting Rasanen et al.          5.254     
## Omitting Ratanasiripong          4.092     
## Omitting Shapiro et al.          4.513     
## Omitting SongLindquist           5.683     
## Omitting Warnecke et al.         5.375     
## 
## 
## Baujat Diagnostics (sorted by Heterogeneity Contribution) 
##  ------------------------------------------------------- 
##                                 HetContrib InfluenceEffectSize
## Omitting DanitzOrsillo              14.385               0.298
## Omitting Shapiro et al.             10.044               0.251
## Omitting de Vibe et al.              6.403               1.357
## Omitting Kang et al.                 5.552               0.121
## Omitting Kuhlmann et al.             3.746               0.256
## Omitting Hintz et al.                1.368               0.129
## Omitting Gallego et al.              1.183               0.060
## Omitting Call et al.                 0.768               0.028
## Omitting Frogeli et al.              0.582               0.039
## Omitting Cavanagh et al.             0.409               0.027
## Omitting SongLindquist               0.339               0.017
## Omitting Warnecke et al.             0.230               0.009
## Omitting Frazier et al.              0.164               0.021
## Omitting Lever Taylor et al.         0.159               0.008
## Omitting Phang et al.                0.061               0.003
## Omitting Hazlett-Stevens & Oren      0.052               0.003
## Omitting Rasanen et al.              0.044               0.002
## Omitting Ratanasiripong              0.010               0.000

We can also conveniently generate the different influence plots by using plot, specifying the plot we want to generate in the second argument. Let us interpret them one by one.



Influence Analyses

plot(inf.analysis, "influence")

Figure 6.1: Influence Analyses

In the first analysis, you can see several influence measures, each plotted for every individual study in our meta-analysis. This type of influence analysis has been proposed by Viechtbauer and Cheung (2010). Let us discuss the most important subplots here (a short code sketch after this list shows how these diagnostics can also be computed directly with the metafor package):

  • dffits: The DFFITS value of a study indicates, in standard deviations, how much the predicted pooled effect changes after excluding this study.
  • cook.d: The Cook’s distance resembles the Mahalanobis distance you may know from outlier detection in conventional multivariate statistics. It measures the distance between the overall effect estimate with the study included and the estimate with the study excluded.
  • cov.r: The covariance ratio is the determinant of the variance-covariance matrix of the parameter estimates when the study is removed, divided by the determinant of the variance-covariance matrix of the parameter estimates when the full dataset is considered. Importantly, values of cov.r < 1 indicate that removing the study will lead to a more precise effect size estimation (i.e., less heterogeneity).
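
These diagnostics are the ones implemented in the metafor package. As a minimal sketch of how they could be computed directly, assuming (as above) that the effect sizes and standard errors are stored in m.hksj as TE and seTE, and using the DerSimonian-Laird estimator simply as a default:

library(metafor)

# Random-effects model in metafor, based on the data stored in the meta object
m.rma <- rma(yi = m.hksj$TE, sei = m.hksj$seTE, method = "DL")

# Leave-one-out influence diagnostics (rstudent, dffits, cook.d, cov.r,
# QE.del, hat, weight) for each study
inf <- influence(m.rma)
inf

# Diagnostic plots; studies flagged as influential are highlighted
plot(inf)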

Usually, however, you do not have to dig this deep into the calculations of the individual measures. As a rule of thumb, influential cases are studies with very extreme values in the graphs. Viechtbauer and Cheung have also proposed cut-offs for defining a study as an influential case, for example (with \(p\) being the number of model coefficients and \(k\) the number of studies):

\[ DFFITS > 3\times\sqrt{\frac{p}{k-p}}\] \[ hat > 3\times\frac{p}{k}\]
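
In our example, the random-effects model contains only an intercept (the pooled effect), so \(p = 1\), and we have \(k = 18\) studies. The cut-offs can therefore be calculated directly:

p <- 1   # number of model coefficients (intercept-only random-effects model)
k <- 18  # number of studies in our meta-analysis

3 * sqrt(p / (k - p))  # DFFITS cut-off, approximately 0.73
3 * (p / k)            # hat cut-off, approximately 0.17

Comparing these values with the influence diagnostics output above, only the DFFITS value of "DanitzOrsillo" (0.746) exceeds the first cut-off, and no study exceeds the hat cut-off.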

If a study is determined to be an influential case using these cut-offs, its values will be displayed in red (in our example, this is the case for the study “Dan”, i.e., Danitz-Orsillo).

Please note, as Viechtbauer and Cheung emphasize, that these cut-offs are somewhat arbitrary. Therefore, you should never rely solely on whether a study is colored red, but also look at the general structure of the graph and interpret the results in context.

In our example, we see that while only the study by Danitz-Orsillo is flagged as an influential case, there are actually two spikes in most plots, whereas the remaining studies have rather similar values. Given this structure, we could decide to also treat the study “Sha” (Shapiro et al.) as an influential case, because its values are quite extreme as well.

In these analyses, we found that the studies “Danitz-Orsillo” and “Shapiro et al.” might be influential. This is an interesting finding, as we detected the same studies when only looking at statistical outliers. This further corroborates that these two studies may have distorted our pooled effect estimate and might be responsible for part of the between-study heterogeneity we found in our meta-analysis.



Baujat Plot

plot(inf.analysis, "baujat")

Figure 6.2: Baujat Plot

The Baujat Plot (Baujat et al. 2002) is a diagnostic plot to detect studies overly contributing to the heterogeneity of a meta-analysis. The plot shows the contribution of each study to the overall heterogeneity, as measured by Cochran’s \(Q\), on the horizontal axis, and its influence on the pooled effect size on the vertical axis. As we want to assess heterogeneity and the studies contributing to it, the studies on the right side of the plot are the ones to look at, because they cause much of the heterogeneity we observe. This is even more relevant when a study contributes much to the overall heterogeneity while at the same time not being very influential for the overall pooled effect (e.g., because the study had a very small sample size). Therefore, all studies on the right side of the Baujat plot, and especially those in its lower part, are important for us.

As you might have already recognized, the only two studies we find in this region of the plot are the two studies we already detected before (Danitz & Orsillo, Shapiro et al.). These studies do not have a large impact on the overall results (presumably because they are very small), but they do add substantially to the heterogeneity we found in the meta-analysis.
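
As a side note, a standard Baujat plot can also be drawn directly from the meta object, using the baujat function included in the meta package:

# Baujat plot directly from the meta object:
# x-axis: contribution of each study to Cochran's Q (heterogeneity)
# y-axis: influence of each study on the pooled result
baujat(m.hksj)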



Leave-One-Out Analyses

plot(inf.analysis, "es")

plot(inf.analysis, "i2")

In these two forest plots, we see the pooled effect recalculated, with one study omitted each time. The two plots show the same data, but are ordered by different values.

The first plot is ordered by heterogeneity (low to high), as measured by \(I^2\). We see in the plot that the lowest \(I^2\) heterogeneity is reached (as we have seen before) by omitting the studies Danitz & Orsillo and Shapiro et al. This again corroborates our finding that these two studies were the main “culprits” for the between-study heterogeneity we found in the meta-analysis.

The second plot is ordered by effect size (low to high). Here, we see how the overall effect estimate changes with one study removed. Again, as the two outlying studies have very high effect sizes, we find that the overall effect is smallest when they are removed.
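
A similar leave-one-out forest plot can also be produced with the meta package itself, via the metainf function (here using random-effects pooling, with the studies shown in their original order):

# Leave-one-out forest plot using meta's built-in metainf function
forest(metainf(m.hksj, pooled = "random"))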

All in all, the results of our outlier and influence analysis in this example point in the same direction. The two studies are probably outliers which may distort the effect size estimate, as well as its precision. We should therefore also conduct and report a sensitivity analysis in which these studies are excluded.
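
One way to carry out such a sensitivity analysis is to refit the meta-analysis without the two studies. A possible sketch, assuming the study labels stored in m.hksj match the ones shown in the output above (adjust them if yours differ):

# Keep all studies except the two influential ones
keep <- !(m.hksj$studlab %in% c("DanitzOrsillo", "Shapiro et al."))

# Refit the random-effects model on the reduced set of studies
m.hksj.sens <- update(m.hksj, subset = keep)
m.hksj.sens  # pooled effect and heterogeneity without the two studies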



References

Viechtbauer, Wolfgang, and Mike W-L Cheung. 2010. “Outlier and Influence Diagnostics for Meta-Analysis.” Research Synthesis Methods 1 (2). Wiley Online Library: 112–25.

Baujat, Bertrand, Cédric Mahé, Jean-Pierre Pignon, and Catherine Hill. 2002. “A Graphical Method for Exploring Heterogeneity in Meta-Analyses: Application to a Meta-Analysis of 65 Trials.” Statistics in Medicine 21 (18). Wiley Online Library: 2641–52.
