## 4.1 Fixed-Effect Model

### 4.1.1 Pre-calculated effect size data

**The idea behind the fixed-effect model**

The fixed-effect model assumes that all studies, along with their effect sizes, stem from a single homogeneous population (Borenstein et al. 2011). To calculate the overall effect, we therefore average all effect sizes, but give studies with greater precision a higher weight. In this context, greater precision means that a study has a larger **N**, which leads to a smaller **standard error** of its effect size estimate.

For this weighting, we use the **inverse of the variance** \(1/\hat\sigma^2_k\) of each study \(k\). We then calculate a weighted average of all studies, our fixed-effect estimator \(\hat\theta_F\):

\[\hat\theta_F = \frac{\sum_{k=1}^{K} \hat\theta_k/\hat\sigma^2_k}{\sum_{k=1}^{K} 1/\hat\sigma^2_k}\]
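This weighted average can be verified by hand in base R. The sketch below uses three made-up effect sizes and standard errors (hypothetical values, not taken from `madata`):

```
# Toy data: three hypothetical studies
TE   <- c(0.70, 0.35, 0.18)   # effect size of each study k
seTE <- c(0.26, 0.20, 0.12)   # standard error of each effect size

w       <- 1 / seTE^2             # inverse-variance weights
theta_F <- sum(w * TE) / sum(w)   # fixed-effect pooled estimate
se_F    <- sqrt(1 / sum(w))       # standard error of the pooled estimate
round(theta_F, 4)
```

Note how the pooled estimate is pulled towards the study with the smallest standard error, because that study receives the largest weight.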

In Chapter 3.1, we described two ways your EXCEL spreadsheet for your meta-analysis data can be structured:

- It can either be stored as **raw data** (including the Mean, N, and SD of every study arm)
- Or it can contain only the **calculated effect sizes and their standard errors (SE)**

The functions to pool the results with a fixed-effect model **differ depending on which data format you used**, but not by much. First, let’s assume you already have a dataset with the **calculated effect sizes and SE** for each study. In my case, this is my `madata` dataset.

`str(madata)`

```
## Classes 'tbl_df', 'tbl' and 'data.frame': 18 obs. of 17 variables:
## $ Author : chr "Call et al." "Cavanagh et al." "DanitzOrsillo" "de Vibe et al." ...
## $ TE : num 0.709 0.355 1.791 0.182 0.422 ...
## $ seTE : num 0.261 0.196 0.346 0.118 0.145 ...
## $ RoB : chr "low" "low" "high" "low" ...
## $ Control : chr "WLC" "WLC" "WLC" "no intervention" ...
## $ intervention duration: chr "short" "short" "short" "short" ...
## $ intervention type : chr "mindfulness" "mindfulness" "ACT" "mindfulness" ...
## $ population : chr "undergraduate students" "students" "undergraduate students" "undergraduate students" ...
## $ type of students : chr "psychology" "general" "general" "general" ...
## $ prevention type : chr "selective" "universal" "universal" "universal" ...
## $ gender : chr "female" "mixed" "mixed" "mixed" ...
## $ mode of delivery : chr "group" "online" "group" "group" ...
## $ ROB streng : chr "high" "low" "high" "low" ...
## $ ROB superstreng : chr "high" "high" "high" "low" ...
## $ compensation : chr "none" "none" "voucher/money" "voucher/money" ...
## $ instruments : chr "DASS" "PSS" "DASS" "other" ...
## $ guidance : chr "f2f" "self-guided" "f2f" "f2f" ...
```

This dataset has **continuous outcome data**. As our effect sizes are already calculated, we can use the `meta::metagen` function. For this function, we can specify a host of parameters, all of which you can access by typing `?metagen` in your console once the `meta` package is loaded.

**Here is a table with the most important parameters for our code:**

| Parameter | Function |
|---|---|
| `TE` | This tells R to use the `TE` column to retrieve the effect sizes for each study |
| `seTE` | This tells R to use the `seTE` column to retrieve the standard error for each study |
| `data=` | After `=`, paste the name of your dataset here |
| `studlab=paste()` | This tells the function where the labels for each study are stored. If you named the spreadsheet columns as advised, this should be `studlab=paste(Author)` |
| `comb.fixed=` | Whether to use a fixed-effect model |
| `comb.random=` | Whether to use a random-effects model |
| `prediction=` | Whether to print a prediction interval for the effect of future studies based on present evidence |
| `sm=` | The summary measure we want to calculate. We can either calculate the mean difference (MD) or Hedges’ g/Cohen’s d (SMD) |

Let’s code our first fixed-effect model meta-analysis. We will give the results of this analysis the simple name `m`.

```
library(meta)

m <- metagen(TE,
             seTE,
             data = madata,
             studlab = paste(Author),
             comb.fixed = TRUE,
             comb.random = FALSE,
             prediction = TRUE,
             sm = "SMD")
m
```

```
## SMD 95%-CI %W(fixed)
## Call et al. 0.7091 [ 0.1979; 1.2203] 3.6
## Cavanagh et al. 0.3549 [-0.0300; 0.7397] 6.3
## DanitzOrsillo 1.7912 [ 1.1139; 2.4685] 2.0
## de Vibe et al. 0.1825 [-0.0484; 0.4133] 17.5
## Frazier et al. 0.4219 [ 0.1380; 0.7057] 11.6
## Frogeli et al. 0.6300 [ 0.2458; 1.0142] 6.3
## Gallego et al. 0.7249 [ 0.2846; 1.1652] 4.8
## Hazlett-Stevens & Oren 0.5287 [ 0.1162; 0.9412] 5.5
## Hintz et al. 0.2840 [-0.0453; 0.6133] 8.6
## Kang et al. 1.2751 [ 0.6142; 1.9360] 2.1
## Kuhlmann et al. 0.1036 [-0.2781; 0.4853] 6.4
## Lever Taylor et al. 0.3884 [-0.0639; 0.8407] 4.6
## Phang et al. 0.5407 [ 0.0619; 1.0196] 4.1
## Rasanen et al. 0.4262 [-0.0794; 0.9317] 3.6
## Ratanasiripong 0.5154 [-0.1731; 1.2039] 2.0
## Shapiro et al. 1.4797 [ 0.8618; 2.0977] 2.4
## SongLindquist 0.6126 [ 0.1683; 1.0569] 4.7
## Warnecke et al. 0.6000 [ 0.1120; 1.0880] 3.9
##
## Number of studies combined: k = 18
##
## SMD 95%-CI z p-value
## Fixed effect model 0.4805 [ 0.3840; 0.5771] 9.75 < 0.0001
## Prediction interval [-0.0344; 1.1826]
##
## Quantifying heterogeneity:
## tau^2 = 0.0752; H = 1.64 [1.27; 2.11]; I^2 = 62.6% [37.9%; 77.5%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 45.50 17 0.0002
##
## Details on meta-analytical method:
## - Inverse variance method
```

We now see the results of our meta-analysis, including:

- The **individual effect sizes** for each study, and their weights
- The total **number of included studies** (k)
- The **overall effect** (in our case, *g* = 0.4805), with its confidence interval and p-value
- Measures of **between-study heterogeneity**, such as \(\tau^2\) or \(I^2\), and a \(Q\)-test of heterogeneity

Using the `$` command, we can also have a look at various outputs directly. For example,

`m$lower.I2`

gives us the lower bound of the 95% confidence interval for \(I^2\):

`## [1] 0.3787897`

We can **save the results of the meta-analysis** to our working directory as a .txt file using this command:

```
sink("results.txt")
print(m)
sink()
```

### 4.1.2 Raw effect size data

To conduct a fixed-effects-model Meta-Analysis from **raw data** (i.e, if your data has been prepared the way we describe in Chapter 3.1.1), we have to use the `meta::metacont()`

function instead. The structure of the code however, looks quite similar.

| Parameter | Function |
|---|---|
| `Ne` | The number of participants (N) in the intervention group |
| `Me` | The Mean (M) of the intervention group |
| `Se` | The Standard Deviation (SD) of the intervention group |
| `Nc` | The number of participants (N) in the control group |
| `Mc` | The Mean (M) of the control group |
| `Sc` | The Standard Deviation (SD) of the control group |
| `data=` | After `=`, paste the name of your dataset here |
| `studlab=paste()` | This tells the function where the labels for each study are stored. If you named the spreadsheet columns as advised, this should be `studlab=paste(Author)` |
| `comb.fixed=` | Whether to use a fixed-effect model |
| `comb.random=` | Whether to use a random-effects model |
| `prediction=` | Whether to print a prediction interval for the effect of future studies based on present evidence |
| `sm=` | The summary measure we want to calculate. We can either calculate the mean difference (MD) or Hedges’ g (SMD) |

For this purpose, I will use my dataset `metacont`, which contains the raw data of all the studies I want to synthesize.

`str(metacont)`

```
## Classes 'tbl_df', 'tbl' and 'data.frame': 6 obs. of 7 variables:
## $ Author: chr "Cavanagh" "Day" "Frazier" "Gaffney" ...
## $ Ne : num 50 64 90 30 77 60
## $ Me : num 4.5 18.3 12.5 2.34 15.21 ...
## $ Se : num 2.7 6.4 3.2 0.87 5.35 ...
## $ Nc : num 50 65 95 30 69 60
## $ Mc : num 5.6 20.2 15.5 3.13 20.13 ...
## $ Sc : num 2.6 7.6 4.4 1.23 7.43 ...
```

Now, let’s code the meta-analysis function, this time using `meta::metacont` and my `metacont` dataset. I want to name the output `m.raw` this time.

```
m.raw <- metacont(Ne,
                  Me,
                  Se,
                  Nc,
                  Mc,
                  Sc,
                  data = metacont,
                  studlab = paste(Author),
                  comb.fixed = TRUE,
                  comb.random = FALSE,
                  prediction = TRUE,
                  sm = "SMD")
m.raw
```

```
## SMD 95%-CI %W(fixed)
## Cavanagh -0.4118 [-0.8081; -0.0155] 13.8
## Day -0.2687 [-0.6154; 0.0781] 18.0
## Frazier -0.7734 [-1.0725; -0.4743] 24.2
## Gaffney -0.7303 [-1.2542; -0.2065] 7.9
## Greer -0.7624 [-1.0992; -0.4256] 19.1
## Harrer -0.1669 [-0.5254; 0.1916] 16.9
##
## Number of studies combined: k = 6
##
## SMD 95%-CI z p-value
## Fixed effect model -0.5245 [-0.6718; -0.3773] -6.98 < 0.0001
## Prediction interval [-1.1817; 0.1494]
##
## Quantifying heterogeneity:
## tau^2 = 0.0441; H = 1.51 [1.00; 2.38]; I^2 = 56.1% [0.0%; 82.3%]
##
## Test of heterogeneity:
## Q d.f. p-value
## 11.39 5 0.0441
##
## Details on meta-analytical method:
## - Inverse variance method
## - Hedges' g (bias corrected standardised mean difference)
```

As you can see, all the calculated effect sizes are **negative** now, including the pooled effect. Yet all of these studies actually report a positive outcome, meaning that symptoms in the intervention group (e.g., of depression) were reduced. The negative sign arises because in **most clinical trials, lower scores indicate better outcomes** (e.g., less depression). It is no problem to report values like this; in fact, it is conventional.

Some readers who are unfamiliar with meta-analysis, however, **might be confused** by this, so you may consider changing the orientation of your values before you report them in your paper.
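One simple way to flip the orientation is to invert the sign of both group means before pooling; the sign of each (standardized) mean difference then reverses. A minimal sketch, using the first two pairs of means shown in the `metacont` output above:

```
# Flipping the sign of both group means reverses the orientation
# of every mean difference (and thus every SMD).
d <- data.frame(Me = c(4.5, 18.3),   # intervention group means
                Mc = c(5.6, 20.2))   # control group means
d$Me <- -d$Me
d$Mc <- -d$Mc
d$Me - d$Mc   # now positive: higher values favour the intervention
```

Applying the same transformation to the full `metacont` data frame before calling `metacont()` would yield positive pooled effects.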

We can **save the results of the meta-analysis** to our working directory as a .txt file using this command:

```
sink("results.txt")
print(m.raw)
sink()
```

### References

Borenstein, Michael, Larry V Hedges, Julian PT Higgins, and Hannah R Rothstein. 2011. *Introduction to Meta-Analysis*. John Wiley & Sons.