16.1 Fixed-Effect Model

To determine the power of a meta-analysis under the fixed-effect model, we have to specify the distribution of the test statistic when the alternative hypothesis is correct (i.e., when there is an effect). In a conventional power analysis for a primary study, this statistic follows a \(Z\) distribution. Following Borenstein et al. (Borenstein et al. 2011), we will call the true value \(\lambda\) here to make clear that we are dealing with a meta-analysis, and not a primary study. \(\lambda\) is defined as:

\[\lambda=\frac{\delta}{\sqrt{V_{\delta}}}\]

Where \(\delta\) is the true effect size and \(V_{\delta}\) its variance.

\(V_{\delta}\) can be calculated for meta-analysis using the fixed-effect model with this formula:

\[V_{\delta}=\frac{\frac{n_1+n_2}{n_1 \times n_2}+\frac{d^2}{2(n_1+n_2)}}{k}\]

Where \(k\) is the number of included studies, and \(n_1\) and \(n_2\) are the average sample sizes we assume for the two trial arms across our studies.
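As a quick sketch, \(V_{\delta}\) can be computed directly in R. The values below (10 studies, 25 participants per arm, \(d=0.3\)) are purely illustrative assumptions:

```r
# Illustrative assumptions (not from a real meta-analysis)
d  <- 0.3   # assumed true effect size (SMD)
n1 <- 25    # average sample size in the treatment arm
n2 <- 25    # average sample size in the control arm
k  <- 10    # expected number of studies

# Variance of the pooled effect under the fixed-effect model
v_delta <- ((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))) / k
v_delta
#> [1] 0.00809
```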

Assuming a normal distribution and using \(\lambda\), we can calculate the power:

\[Power = 1- \beta\] \[Power = 1- \Phi(c_{\alpha}-\lambda)+\Phi(-c_{\alpha}-\lambda) \]

Where \(c_{\alpha}\) is the critical value of the \(Z\)-distribution for the chosen \(\alpha\) level, and \(\Phi\) is the cumulative distribution function of the standard normal distribution, which we need to calculate the power using this equation: \[\Phi(Z)=\int_{-\infty}^{Z}\frac{1}{\sqrt {2\pi}}e^{-\frac{x^2}{2}}dx\]
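In R, \(\Phi\) corresponds to the pnorm() function, and the critical value \(c_{\alpha}\) can be obtained with qnorm(). A minimal sketch of the power formula, using an illustrative value for \(\lambda\):

```r
alpha   <- 0.05
c_alpha <- qnorm(1 - alpha / 2)  # two-sided critical value (about 1.96)
lambda  <- 2.8                   # illustrative value

# Power formula from above, with pnorm() as Phi
power <- 1 - pnorm(c_alpha - lambda) + pnorm(-c_alpha - lambda)
power
```

The second pnorm() term is usually negligible; it covers the (unlikely) case that the pooled effect falls on the opposite side of the rejection region.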

Luckily, you don’t have to think about these statistical details too much, as we have prepared a function for you with which you can easily conduct a power analysis using the fixed-effect model yourself. The function is called power.analysis and is part of the dmetar package. If you have the package installed already, you have to load it from your library first.

library(dmetar)

If you don’t want to use the dmetar package, you can find the source code for this function here. In this case, R doesn’t know this function yet, so we have to let R learn it by copying and pasting the code in its entirety into the console in the bottom left pane of RStudio and then hitting Enter ⏎. The function also requires the ggplot2 package to work.


For this function, we have to specify the following parameters:

Parameter Description
d The hypothesized, or plausible overall effect size of a treatment/intervention under study compared to control, expressed as the standardized mean difference (SMD). Effect sizes must be positive numerics (i.e., expressed as positive effect sizes).
OR The hypothesized, or plausible overall effect size of a treatment/intervention under study compared to control, expressed as the Odds Ratio (OR). If both d and OR are specified, results will only be computed for the value of d.
k The expected number of studies to be included in the meta-analysis.
n1 The expected, or plausible mean sample size of the treatment group in the studies to be included in the meta-analysis.
n2 The expected, or plausible mean sample size of the control group in the studies to be included in the meta-analysis.
p The alpha level to be used for the power computation. Default is 0.05.
heterogeneity Which level of between-study heterogeneity to assume for the meta-analysis. Can be either ‘fixed’ for no heterogeneity/a fixed-effect model, ‘low’ for low heterogeneity, ‘moderate’ for moderate-sized heterogeneity or ‘high’ for high levels of heterogeneity. Default is ‘fixed’.

Now, let’s give an example. I assume that an effect of \(d=0.30\) is likely and meaningful for the field of my meta-analysis. I also assume that on average, the studies in my analysis will be rather small, with 25 participants in each trial arm, and that there will be 10 studies in my analysis. I will set the \(\alpha\)-level to 0.05, as is convention.

power.analysis(d = 0.30,
               k = 10,
               n1 = 25,
               n2 = 25,
               p = 0.05)

The output of the function is:

## Fixed-effect model used.

## Power:
## [1] 0.9155008

Meaning that my power is about 92%. This is more than the desired 80%, so, provided that my assumptions are roughly accurate, my meta-analysis will have sufficient power using the fixed-effect model to detect a clinically relevant effect if it exists.
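If you want to verify this, the output can be reproduced by hand from the formulas above (a sketch using the same assumed inputs):

```r
# Same assumptions as in the power.analysis() call above
d <- 0.30; n1 <- 25; n2 <- 25; k <- 10; alpha <- 0.05

v_delta <- ((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))) / k
lambda  <- d / sqrt(v_delta)      # about 3.34
c_alpha <- qnorm(1 - alpha / 2)

power <- 1 - pnorm(c_alpha - lambda) + pnorm(-c_alpha - lambda)
power   # about 0.916, matching the function output
```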

So, if I assume an effect of \(d = 0.30\) in this example, I am lucky. If we play around with the effect size a little while holding the other parameters constant, however, the results can look very different.

As you can see from this plot, sufficient power (see the dashed line) is reached quickly for \(d=0.30\), even if only a few studies are included. If I assume a smaller effect size of \(d=0.10\), however, even 50 studies will not be sufficient to detect a true effect.
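Using the formulas from this chapter, such a power curve can be sketched in base R. The helper function fe_power below is our own illustration, not part of dmetar, and again assumes 25 participants per trial arm:

```r
# Fixed-effect power as a function of the number of studies k
fe_power <- function(d, k, n1 = 25, n2 = 25, alpha = 0.05) {
  v_delta <- ((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2))) / k
  lambda  <- d / sqrt(v_delta)
  c_alpha <- qnorm(1 - alpha / 2)
  1 - pnorm(c_alpha - lambda) + pnorm(-c_alpha - lambda)
}

k <- 1:50
plot(k, fe_power(d = 0.30, k = k), type = "l", ylim = c(0, 1),
     xlab = "Number of studies", ylab = "Power")
lines(k, fe_power(d = 0.10, k = k), lty = 2)
abline(h = 0.80, lty = 3)  # conventional 80% power threshold
```

For \(d = 0.10\), the dashed curve indeed stays below the 80% threshold across the entire range of 1 to 50 studies.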


References

Borenstein, Michael, Larry V Hedges, Julian PT Higgins, and Hannah R Rothstein. 2011. Introduction to Meta-Analysis. John Wiley & Sons.
