Methods: Bias (9)
The STROBE checklist states that you should report:
- Describe any efforts to address potential sources of bias
Some key items to consider adding:
- Describe the nature and magnitude of any potential biases and explain what approach was used to deal with these (e.g., discovery, ascertainment, selection, information, etc.)
- For quantitative outcome variables, specify if any investigation of potential bias resulting from pharmacotherapy was undertaken
- Report how bias in dietary or nutritional assessment was addressed, e.g., misreporting, changes in habits as a result of being measured, or data imputation from other sources
Explanation
Biased studies produce results that differ systematically from the truth (see also Box 3). It is important for a reader to know what measures were taken during the conduct of a study to reduce the potential for bias. Ideally, investigators carefully consider potential sources of bias when they plan their study. At the stage of reporting, we recommend that authors always assess the likelihood of relevant biases. Specifically, the direction and magnitude of bias should be discussed and, if possible, estimated. For instance, in case-control studies information bias can occur, but may be reduced by selecting an appropriate control group, as in the first example.(Phillips et al., 2002) Differences in the medical surveillance of participants were a problem in the second example.(Pasquale et al., 2006)
Consequently, the authors provide more detail about the additional data they collected to tackle this problem. When investigators have set up quality control programs for data collection to counter a possible “drift” in measurements of variables in longitudinal studies, or to keep variability at a minimum when multiple observers are used, these should be described.
Unfortunately, authors often do not address important biases when reporting their results. Among 43 case-control and cohort studies published from 1990 to 1994 that investigated the risk of second cancers in patients with a history of cancer, medical surveillance bias was mentioned in only 5 articles.(Craig & Feinstein, 1999) A survey of reports of mental health research published during 1998 in 3 psychiatric journals found that only 13% of 392 articles mentioned response bias.(Rogler et al., 2001) A survey of cohort studies in stroke research found that 14 of 49 (28%) articles published from 1999 to 2003 addressed potential selection bias in the recruitment of study participants and 35 (71%) mentioned the possibility that any type of bias may have affected results.(Vandenbroucke et al., 2007)
Examples
- Example 1.
“In most case-control studies of suicide, the control group comprises living individuals but we decided to have a control group of people who had died of other causes (…). With a control group of deceased individuals, the sources of information used to assess risk factors are informants who have recently experienced the death of a family member or close associate - and are therefore more comparable to the sources of information in the suicide group than if living controls were used” (Phillips et al., 2002).
- Example 2.
“Detection bias could influence the association between Type 2 diabetes mellitus (T2DM) and primary open-angle glaucoma (POAG) if women with T2DM were under closer ophthalmic surveillance than women without this condition. We compared the mean number of eye examinations reported by women with and without diabetes. We also recalculated the relative risk for POAG with additional control for covariates associated with more careful ocular surveillance (a self-report of cataract, macular degeneration, number of eye examinations, and number of physical examinations).” (Pasquale et al., 2006; Vandenbroucke et al., 2007)
Box 3. Bias
Bias is a systematic deviation of a study’s result from a true value. Typically, it is introduced during the design or implementation of a study and cannot be remedied later. Bias and confounding are not synonymous. Bias arises from flawed information or subject selection so that a wrong association is found. Confounding produces relations that are factually right, but that cannot be interpreted causally because some underlying, unaccounted-for factor is associated with both exposure and outcome (see Box 5). Also, bias needs to be distinguished from random error, a deviation from a true value caused by statistical fluctuations (in either direction) in the measured data. Many possible sources of bias have been described and a variety of terms are used (Murphy, 1976; Sackett, 1979). We find two simple categories helpful: information bias and selection bias.
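One compact way to express the distinction between bias and random error, in standard estimation notation (this formalism is not part of the STROBE text itself): if θ is the true value and θ̂ the study’s estimate, the estimate can be decomposed as

```latex
\hat{\theta}
  = \theta
  + \underbrace{\operatorname{E}[\hat{\theta}] - \theta}_{\text{bias: systematic, unaffected by sample size}}
  + \underbrace{\hat{\theta} - \operatorname{E}[\hat{\theta}]}_{\text{random error: zero on average, shrinks as } n \text{ grows}}
```

A larger sample reduces the random-error term but leaves the bias term untouched, which is why bias introduced at the design stage cannot be remedied later by analysis alone.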
Information bias occurs when systematic differences in the completeness or the accuracy of data lead to differential misclassification of individuals regarding exposures or outcomes. For instance, if diabetic women receive more regular and thorough eye examinations, the ascertainment of glaucoma will be more complete than in women without diabetes (see item 9) (Pasquale et al., 2006). Patients receiving a drug that causes non-specific stomach discomfort may undergo gastroscopy more often and have more ulcers detected than patients not receiving the drug – even if the drug does not cause more ulcers. This type of information bias is also called ‘detection bias’ or ‘medical surveillance bias’. One way to assess its influence is to measure the intensity of medical surveillance in the different study groups, and to adjust for it in statistical analyses. In case-control studies information bias occurs if cases recall past exposures more or less accurately than controls without that disease, or if they are more or less willing to report them (also called ‘recall bias’). ‘Interviewer bias’ can occur if interviewers are aware of the study hypothesis and subconsciously or consciously gather data selectively (Johannes et al., 1997). Some form of blinding of study participants and researchers is therefore often valuable.
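As a purely illustrative sketch of the adjustment idea just described (synthetic data and hypothetical variable names; this is not the analysis of Pasquale et al., 2006), one might first compare surveillance intensity between exposure groups and then include it as a covariate:

```python
# Illustrative only: synthetic data, hypothetical variable names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
t2dm = rng.binomial(1, 0.10, n)                 # exposure (type 2 diabetes)
exams = rng.poisson(1.0 + 1.5 * t2dm)           # surveillance: more intense if exposed
true_poag = rng.binomial(1, 0.02, n)            # true disease, here unrelated to exposure
# Disease is only *detected* if surveillance picks it up -> detection bias
p_detect = np.clip(0.3 + 0.2 * exams, 0, 0.95)
poag = true_poag * rng.binomial(1, p_detect)

df = pd.DataFrame({"poag": poag, "t2dm": t2dm, "exams": exams})

# 1) Measure surveillance intensity in the two exposure groups
print(df.groupby("t2dm")["exams"].mean())

# 2) Crude vs. surveillance-adjusted association (odds ratios)
crude = smf.logit("poag ~ t2dm", data=df).fit(disp=False)
adjusted = smf.logit("poag ~ t2dm + exams", data=df).fit(disp=False)
print("crude OR:   ", float(np.exp(crude.params["t2dm"])))
print("adjusted OR:", float(np.exp(adjusted.params["t2dm"])))
```

If the adjusted estimate moves materially toward the null while the crude one does not, differential surveillance is a plausible part of the explanation.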
Selection bias may be introduced in case-control studies if the probability of including cases or controls is associated with exposure. For instance, a doctor recruiting participants for a study on deep-vein thrombosis might diagnose this disease in a woman who has leg complaints and takes oral contraceptives. But she might not diagnose deep-vein thrombosis in a woman with similar complaints who is not taking such medication. Such bias may be countered by using cases and controls that were referred in the same way to the diagnostic service (Bloemenkamp et al., 1999). Similarly, the use of disease registers may introduce selection bias: if a possible relationship between an exposure and a disease is known, cases may be more likely to be submitted to a register if they have been exposed to the suspected causative agent (Feinstein, 1985). ‘Response bias’ is another type of selection bias that occurs if differences in characteristics between those who respond and those who decline participation in a study affect estimates of prevalence, incidence and, in some circumstances, associations. In general, selection bias affects the internal validity of a study. This is different from problems that may arise with the selection of participants for a study in general, which affects the external rather than the internal validity of a study (also see item 21).(Vandenbroucke et al., 2007)
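A toy simulation (hypothetical numbers, not taken from Bloemenkamp et al., 1999) can make this mechanism concrete: exposure and disease are generated independently, so the true odds ratio is 1, but exposed cases are more likely to be diagnosed and therefore included, which inflates the observed odds ratio:

```python
# Illustrative only: hypothetical inclusion probabilities, synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
exposed = rng.binomial(1, 0.30, n)      # e.g. oral contraceptive use
disease = rng.binomial(1, 0.01, n)      # deep-vein thrombosis, independent of exposure (true OR = 1)

# Exposed cases are more readily diagnosed/referred and hence included;
# controls are sampled at a constant fraction regardless of exposure.
p_include_case = np.where(exposed == 1, 0.9, 0.5)
is_case = (disease == 1) & (rng.random(n) < p_include_case)
is_control = (disease == 0) & (rng.random(n) < 0.01)

def odds_ratio(cases, controls, exposed):
    a = np.sum(cases & (exposed == 1)); b = np.sum(cases & (exposed == 0))
    c = np.sum(controls & (exposed == 1)); d = np.sum(controls & (exposed == 0))
    return (a * d) / (b * c)

# The observed OR is pulled away from 1 purely by differential inclusion of cases.
print("observed OR:", round(float(odds_ratio(is_case, is_control, exposed)), 2))
```

In this toy model, making the case-inclusion probability the same for exposed and unexposed brings the observed odds ratio back to about 1, which is the logic behind recruiting cases and controls through the same referral route.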
Field-specific guidance
Seroepidemiologic studies for influenza (Horby et al., 2017)
- If relevant, describe efforts to control for the potential effect of immunization on estimates of outcomes
Resources
- Boffetta, P. (1995). Sources of bias, effect of confounding in the application of biomarkers to epidemiological studies. Toxicology Letters, 77(1), 235–238. https://doi.org/10.1016/0378-4274(95)03301-7
- Centre for Evidence-Based Medicine. (2017). Catalogue of Bias. https://catalogofbias.org/
References
Bloemenkamp, K. W. M., Rosendaal, F. R., Büller, H. R., Helmerhorst, F. M., Colly, L. P., & Vandenbroucke, J. P. (1999). Risk of Venous Thrombosis With Use of Current Low-Dose Oral Contraceptives Is Not Explained by Diagnostic Suspicion and Referral Bias. Archives of Internal Medicine, 159(1), 65–70. https://doi.org/10.1001/archinte.159.1.65
Boffetta, P. (1995). Sources of bias, effect of confounding in the application of biomarkers to epidemiological studies. Toxicology Letters, 77(1), 235–238. https://doi.org/10.1016/0378-4274(95)03301-7
Centre for Evidence-Based Medicine. (2017). Catalogue of Bias. https://catalogofbias.org/
Craig, S. L., & Feinstein, A. R. (1999). Antecedent Therapy Versus Detection Bias as Causes of Neoplastic Multimorbidity. American Journal of Clinical Oncology, 22(1), 51. https://journals.lww.com/amjclinicaloncology/Abstract/1999/02000/Antecedent_Therapy_Versus_Detection_Bias_as_Causes.13.aspx
Feinstein, A. R. (1985). Clinical epidemiology: The architecture of clinical research. W.B. Saunders.
Horby, P. W., Laurie, K. L., Cowling, B. J., Engelhardt, O. G., Sturm-Ramirez, K., Sanchez, J. L., Katz, J. M., Uyeki, T. M., Wood, J., Van Kerkhove, M. D., & the CONSISE Steering Committee. (2017). CONSISE statement on the reporting of Seroepidemiologic Studies for influenza (ROSES-I statement): An extension of the STROBE statement. Influenza and Other Respiratory Viruses, 11(1), 2–14. https://doi.org/10.1111/irv.12411
Johannes, C. B., Crawford, S. L., & McKinlay, J. B. (1997). Interviewer Effects in a Cohort Study: Results from the Massachusetts Women’s Health Study. American Journal of Epidemiology, 146(5), 429–438. https://doi.org/10.1093/oxfordjournals.aje.a009296
Murphy, E. (1976). The logic of medicine. Johns Hopkins University Press.
Pasquale, L. R., Kang, J. H., Manson, J. E., Willett, W. C., Rosner, B. A., & Hankinson, S. E. (2006). Prospective Study of Type 2 Diabetes Mellitus and Risk of Primary Open-Angle Glaucoma in Women. Ophthalmology, 113(7), 1081–1086. https://doi.org/10.1016/j.ophtha.2006.01.066
Phillips, M. R., Yang, G., Zhang, Y., Wang, L., Ji, H., & Zhou, M. (2002). Risk factors for suicide in China: A national case-control psychological autopsy study. The Lancet, 360(9347), 1728–1736. https://doi.org/10.1016/S0140-6736(02)11681-3
Rogler, L. H., Mroczek, D. K., Fellows, M., & Loftus, S. T. (2001). The Neglect of Response Bias in Mental Health Research. The Journal of Nervous and Mental Disease, 189(3), 182. https://journals.lww.com/jonmd/Abstract/2001/03000/The_Neglect_of_Response_Bias_in_Mental_Health.7.aspx
Sackett, D. L. (1979). Bias in analytic research. In M. A. Ibrahim (Ed.), The Case-Control Study Consensus and Controversy (pp. 51–63). Pergamon. https://doi.org/10.1016/B978-0-08-024907-0.50013-4
Vandenbroucke, J. P., von Elm, E., Altman, D. G., Gøtzsche, P. C., Mulrow, C. D., Pocock, S. J., Poole, C., Schlesselman, J. J., & Egger, M. (2007). Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): Explanation and Elaboration. Epidemiology, 18(6), 805–835. https://doi.org/10.1097/EDE.0b013e3181577511