Chapter 5: Interpretation and assessment
- Interpret meta-analyses.
- Leverage meta-analyses to inform decision making.
- Appreciate the extent to which synthesis tools can direct inference.
Interpreting meta-analyses and systematic reviews effectively and fairly is critical for the advancement of scientific theory. Conceptual and methodological developments in many fields currently rely on syntheses to evaluate the relative merit of contrasting options (Halpern et al. 2020). Importantly, however, meta-analyses can fail (Kotiaho and Tomkins 2002). The representativeness of the evidence is crucial because the intent and purpose of the primary studies should, in principle, align with the purpose of the synthesis (as discussed previously). Both the evidence-selection process and the statistics can introduce biases and lead to spurious interpretations of, for instance, the relevance of a hypothesis. Failure to find support for a hypothesis does not necessarily mean that the hypothesis is conceptually invalid or that it does not explain the functioning of a specific system. Sparse evidence or publication biases can skew meta-analyses toward negative results and lead to erroneous interpretations of the evidence. What we know about the world (or the functioning of the world) may not match what the science about the world reports. However, support for a hypothesis found in spite of biases and evidence limitations is likely to be representative of the underlying processes and patterns in a system. Consequently, depending on the volume of evidence available, meta-analyses can at times be better evaluated as one-sided rather than two-sided tests of concepts or methods. Heterogeneity and moderator analyses will also temper the capacity of a meta-analysis to function as a knowledge-engineering tool.
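Heterogeneity statistics make this tempering concrete. The course scripts are written in R, but the core arithmetic of a random-effects summary is small enough to sketch without any packages; the Python example below uses invented effect sizes and variances purely for illustration, and implements the standard DerSimonian–Laird estimator.

```python
import math

# Hypothetical per-study effect sizes (yi) and sampling variances (vi);
# these values are invented for illustration only.
yi = [0.6, 0.1, 0.8, -0.2, 0.4]
vi = [0.04, 0.05, 0.02, 0.06, 0.03]

# Fixed-effect (inverse-variance) weights and pooled estimate.
w = [1 / v for v in vi]
pooled_fe = sum(wi * y for wi, y in zip(w, yi)) / sum(w)

# Cochran's Q: weighted squared deviations from the pooled estimate.
Q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, yi))
df = len(yi) - 1

# I^2: percentage of total variation attributable to between-study
# heterogeneity rather than sampling error.
I2 = max(0.0, (Q - df) / Q) * 100

# DerSimonian-Laird estimate of the between-study variance (tau^2).
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)

# The random-effects summary re-weights each study by 1 / (vi + tau^2),
# so noisy but heterogeneous evidence widens the interval.
w_re = [1 / (v + tau2) for v in vi]
pooled_re = sum(wi * y for wi, y in zip(w_re, yi)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))

print(f"Q = {Q:.2f} on {df} df, I^2 = {I2:.1f}%, tau^2 = {tau2:.3f}")
print(f"Random-effects estimate: {pooled_re:.3f} +/- {1.96 * se_re:.3f}")
```

A large I^2 signals that a single pooled number hides real differences among studies, which is exactly when moderator analyses become necessary before drawing inference.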
Evaluation and use of meta-analyses are relevant to society at large. Controversy can be resolved or obfuscated by these syntheses (Vrieze 2018). Two key factors shape public reuse: the interpretation process, and the relative strength of the derived evidence between syntheses that diverge in their conclusions. The collision of ideas can be exacerbated when synthesists do not report findings transparently and clearly, and it is further magnified when the media and others directly reuse a published meta-analysis. The second factor, strength of evidence, must be handled transparently and accurately within each meta-analysis through better reporting. A lack of supporting derived data and inadequate reporting of the study populations (of papers) that comprise contrasting meta-analyses impede quantitative contrasts between syntheses. Describing the differences between the studies within a synthesis is foundational to better mapping science onto 'scientific truth' (Ioannidis 2005). We must set a higher standard for synthesis reporting (Haddaway et al. 2018) and adopt more open-science components in these projects. Finally, each synthesis should educate its readers by discussing how well its specific process functioned in summarizing an evidence hierarchy.
- Select a meta-analysis or systematic review with extensive reporting. Assess whether the interpretations are well supported by the evidence that was incorporated in the synthesis.
- Explore one of the case studies provided in this short course and examine the sensitivity of the conclusions to reductions in the volume of evidence.
- Statistically test full versus reduced models in one of the examples to explore moderator influences on net outcomes.
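The sensitivity exercise above can be sketched as a leave-one-out analysis. The course materials use R for this; the self-contained Python version below uses invented effect sizes and variances as stand-ins for a case study's data, and pools with simple inverse-variance weights.

```python
# Invented effect sizes (yi) and sampling variances (vi); a case study's
# extracted data would replace these.
yi = [0.6, 0.1, 0.8, -0.2, 0.4]
vi = [0.04, 0.05, 0.02, 0.06, 0.03]

def pooled(ys, vs):
    """Inverse-variance (fixed-effect) pooled estimate."""
    w = [1 / v for v in vs]
    return sum(wi * y for wi, y in zip(w, ys)) / sum(w)

full = pooled(yi, vi)
print(f"Full evidence base: {full:.3f}")

# Drop each study in turn and re-estimate: large swings flag conclusions
# that hinge on a single study rather than on the evidence base as a whole.
loo = []
for i in range(len(yi)):
    reduced = pooled(yi[:i] + yi[i + 1:], vi[:i] + vi[i + 1:])
    loo.append(reduced)
    print(f"Without study {i + 1}: {reduced:.3f} (shift {reduced - full:+.3f})")
```

Extending the loop to drop larger random subsets of studies probes how quickly the conclusion degrades as the volume of evidence shrinks.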
- A set of evidence from a meta-analysis with a sense of study quality.
- A simple R script to explore the statistical interpretation of meta-analyses.
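The full-versus-reduced moderator comparison from the exercises can be expressed as a between-group heterogeneity test. The course script itself is in R (e.g., metafor's `rma()` fits moderated models directly); the Python sketch below shows the underlying Q-partition with invented effect sizes and a hypothetical binary moderator.

```python
import math

# Invented effect sizes (yi), sampling variances (vi), and a hypothetical
# binary moderator (field vs. lab studies) for illustration only.
yi = [0.6, 0.8, 0.4, 0.1, -0.2]
vi = [0.04, 0.02, 0.03, 0.05, 0.06]
mod = ["field", "field", "field", "lab", "lab"]

def q_stat(ys, vs):
    """Pooled fixed-effect estimate and Cochran's Q for a set of studies."""
    w = [1 / v for v in vs]
    mu = sum(wi * y for wi, y in zip(w, ys)) / sum(w)
    return mu, sum(wi * (y - mu) ** 2 for wi, y in zip(w, ys))

# Reduced model: one overall estimate. Full model: one estimate per
# moderator level; Q_M is the heterogeneity the moderator explains.
mu_all, q_total = q_stat(yi, vi)
q_within = 0.0
for level in sorted(set(mod)):
    ys = [y for y, m in zip(yi, mod) if m == level]
    vs = [v for v, m in zip(vi, mod) if m == level]
    mu, q = q_stat(ys, vs)
    q_within += q
    print(f"{level}: pooled estimate {mu:.3f}")

# Q_M follows a chi-square distribution with (levels - 1) df under the
# null; for 1 df, the p-value is erfc(sqrt(Q_M / 2)).
q_m = q_total - q_within
p = math.erfc(math.sqrt(q_m / 2))
print(f"Q_M = {q_m:.2f}, p = {p:.4f}")
```

A significant Q_M indicates the full (moderated) model explains the evidence better than the reduced overall mean, which is one concrete way to interpret net outcomes.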
- Do primary studies need to be scored for quality?
- Do single large primary studies inform meta-analyses more than many smaller trials or experiments?
- Is there a qualitative or quantitative mechanism to demonstrate representativeness and to address matching truths to a system?