Chapter 1 Introduction

In the 1980s, large outbreaks of transfusion-transmitted human immunodeficiency virus (HIV) and hepatitis C virus (HCV) created public health crises in health systems around the world. In the United States, an estimated 12,000 Americans contracted HIV through blood transfusion between 1978 and 1984, the year transfusion-associated AIDS was first linked to HIV [1]. As the scale of this public health disaster became apparent, the practice and regulation of blood collection came under public scrutiny, and blood safety became a public health priority [2]. The 1980s and 1990s saw the introduction of new donor deferral policies and of antibody tests for HIV (1985) and HCV (1990) [2]. Several countries also implemented early pathogen inactivation technologies for plasma-derived blood products [3].

As the pace of development of blood safety interventions increased, experts began to question whether implementing every available intervention was sustainable. “Zero risk” and “safety at any cost” as policy goals began to be supplanted by calls for efficiency and “proportionate responses” to blood safety threats [3,4]. Decision-analytic models were developed to assess the relative costs and consequences of implementing new disease marker tests [5–11] and pathogen reduction technologies [12]. At a 2010 consensus conference convened by the Alliance of Blood Operators, regulators, blood collectors, and researchers from several countries developed the risk-based decision-making framework for blood safety, which facilitated an evidence-based response to blood safety threats [4,13]. Decision-analytic modeling, and particularly cost-effectiveness analysis, was central to the risk-based decision-making framework [14]. In the 2010s, researchers continued to develop decision-analytic models of blood safety policies [15–20].

While the risk of transfusion-transmitted disease remains a major focus of blood safety, researchers and policymakers are increasingly concerned with non-infectious transfusion-related adverse events and with health risks experienced by donors. Observational studies show that frequent blood donation can cause or exacerbate iron deficiency, particularly in teen donors and premenopausal women [21–26], prompting regulators, researchers, and blood collectors in many countries to question the adequacy of existing policies designed to protect donors’ iron stores [27–33].

Today, thanks to more than 11 million volunteer blood donors, the U.S. healthcare system annually transfuses 12.6 million red blood cell units, 2.4 million platelet units, and 3.7 million plasma units [34], and transfusion-transmitted infections are exceedingly rare [35]. Safely collecting and transfusing blood involves complex and consequential policy decisions that affect both blood donors and transfusion recipients. To protect recipients from transfusion-transmitted infection, the U.S. Food and Drug Administration (FDA) requires that blood centers employ a combination of donor deferral, disease marker testing, and risk-reducing modifications (e.g., pathogen inactivation). To protect blood donors from iron deficiency, the FDA requires a minimum acceptable pre-donation hemoglobin/hematocrit and a minimum interval of 56 days between whole blood donations (112 days after a double red cell donation). Because blood donation is an altruistic act and transfusion recipients are often critically ill, it is essential that policies minimize risks to both donors and recipients. At the same time, policies to increase blood safety involve complex considerations around cost, efficiency, ease of implementation, fairness, and public opinion. While considerable progress has been made in developing decision-analytic models to estimate the risks and trade-offs of policy alternatives, decision-analytic modeling has not been fully integrated into the policymaking process. Some recent decisions (for example, the requirement to screen all donations for Zika virus in the United States) are widely viewed as at odds with the risk-based decision-making framework and the principle of proportionate response [36].

This dissertation presents four decision-analytic modeling projects that aim to inform critical policy questions in blood safety:

  • In the second chapter, I present a cost-effectiveness analysis comparing policies for screening the blood supply for Zika virus in the United States. This analysis was developed after a 2016 decision to require universal screening for Zika in U.S. states and territories, a requirement that persists today. As part of this analysis, I developed a new microsimulation model of transfusion recipients that captures the relationship between each recipient’s component exposure and their risk of transfusion-transmitted disease (a minimal sketch of this risk calculation appears after this list). This method allows better estimation of the consequences of transfusion-transmitted disease and can be adapted to other infections.

  • The third chapter estimates the health-economic impact of implementing whole blood pathogen inactivation nationwide in Ghana. I estimate the reduction in the number of transfusion-related adverse events and the net impact on healthcare spending, factoring in the costs of pathogen inactivation and the cost savings from averted adverse events. In collaboration with clinician researchers in Ghana, I developed a detailed micro-costing model of each adverse event (the second sketch after this list illustrates the underlying discounting logic). This is the first blood safety analysis for sub-Saharan Africa to account for the likelihood and timing of chronic disease detection and treatment, allowing more accurate estimation of the healthcare costs attributable to transfusion-related adverse events.

  • While cost-effectiveness analyses can assess the addition or modification of a single blood safety intervention, they cannot efficiently evaluate every feasible portfolio (i.e., combination) of numerous interventions. In the fourth chapter, I develop an optimization-based framework for identifying the optimal portfolio of blood safety interventions across three modalities: deferring high-risk donors, testing for disease markers, and applying risk-reducing modifications (the third sketch after this list shows the portfolio search in miniature). By applying this framework retrospectively to U.S. policies for Zika and West Nile virus, I show that the optimal policy can vary by geography, season, and year.

  • The fifth chapter explores a novel approach to protecting blood donors from iron-related adverse outcomes. I develop a machine learning-based decision model that tailors the inter-donation interval (how long donors must wait before donating again) to each donor’s risk of iron-related adverse outcomes (the final sketch after this list shows the shape of such a decision rule). As an alternative to a one-size-fits-all inter-donation interval, this approach could help blood collectors balance risks to donors against risks to the sufficiency of the blood supply.
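
The sketches below are illustrative only. First, a minimal sketch of the kind of per-recipient risk calculation at the heart of the second chapter’s microsimulation, written in Python with entirely hypothetical parameter values (the per-unit infectiousness and the exposure distribution are placeholders, not the dissertation’s estimates): each recipient’s infection risk grows with the number of components they receive.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical per-unit probability that a transfused component transmits
# infection (prevalence x failure-to-detect x transmissibility); this value
# is exaggerated here purely so the example produces nonzero counts.
P_INFECTIOUS_UNIT = 1e-4

def simulate_recipients(units_transfused, p_unit=P_INFECTIOUS_UNIT):
    """Draw, for each recipient, whether any transfused unit transmits infection.

    A recipient exposed to n independent units escapes infection with
    probability (1 - p_unit) ** n, so risk rises with component exposure.
    """
    n = np.asarray(units_transfused)
    p_infected = 1.0 - (1.0 - p_unit) ** n
    return rng.random(n.shape[0]) < p_infected

# Example: 100,000 simulated recipients with skewed component exposure
# (most receive a unit or two; a few, e.g. trauma patients, receive many).
exposure = rng.geometric(p=0.4, size=100_000)
infections = simulate_recipients(exposure)
print(f"Simulated transfusion-transmitted infections: {infections.sum()}")
```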
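
Second, a sketch of the discounting logic behind the third chapter’s micro-costing of chronic transfusion-transmitted infections: expected treatment cost depends on whether and when the infection is detected. All numbers below are hypothetical.

```python
DISCOUNT_RATE = 0.03  # annual discount rate (hypothetical)

def expected_discounted_cost(detection_prob_by_year, annual_treatment_cost,
                             horizon=20):
    """Expected present value of treating a chronic transfusion-transmitted
    infection, given a distribution over the year of first detection.

    detection_prob_by_year[t]: probability the infection is first detected
    in year t; treatment costs then accrue from year t through the horizon.
    """
    total = 0.0
    for t_detect, p in enumerate(detection_prob_by_year):
        cost_if_detected = sum(
            annual_treatment_cost / (1.0 + DISCOUNT_RATE) ** t
            for t in range(t_detect, horizon)
        )
        total += p * cost_if_detected
    return total

# Example: a 10% chance of first detection in each of the first five years
# (undetected cases incur no treatment costs in this simplified sketch).
print(expected_discounted_cost([0.10] * 5, annual_treatment_cost=800.0))
```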
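
Third, a toy version of the fourth chapter’s portfolio question: with n candidate interventions there are 2^n possible portfolios, so an exhaustive or optimization-based search is needed to find the best combination. The interventions, costs, and risk reductions below are invented for illustration.

```python
from itertools import combinations

# Invented interventions: (name, annual cost, fraction of risk removed).
INTERVENTIONS = [
    ("risk-based donor deferral", 2e6, 0.30),
    ("nucleic acid testing", 9e6, 0.95),
    ("pathogen inactivation", 15e6, 0.99),
]

BASELINE_RISK = 1e-5   # per-unit transmission risk with no intervention
UNITS_PER_YEAR = 1e7
COST_PER_CASE = 5e5    # monetized harm of one transmission (hypothetical)

def best_portfolio():
    """Enumerate all 2**n portfolios; minimize spending plus residual harm."""
    best_total, best_names = float("inf"), []
    for k in range(len(INTERVENTIONS) + 1):
        for portfolio in combinations(INTERVENTIONS, k):
            residual, spend = BASELINE_RISK, 0.0
            for _, cost, reduction in portfolio:
                residual *= 1.0 - reduction  # assumes independent effects
                spend += cost
            total = spend + residual * UNITS_PER_YEAR * COST_PER_CASE
            if total < best_total:
                best_total, best_names = total, [n for n, _, _ in portfolio]
    return best_total, best_names

print(best_portfolio())
```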
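
Finally, a sketch of the shape of the fifth chapter’s personalized inter-donation interval policy: a risk model scores each donor, and the score maps to a waiting period no shorter than the regulatory minimum. The features, synthetic labels, and cut-points are placeholders, and the scikit-learn gradient boosting model stands in for the dissertation’s actual method.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(seed=0)

# Synthetic donors: columns are age, ferritin (ng/mL), donations in past 2 years.
X = rng.uniform([18.0, 5.0, 0.0], [80.0, 150.0, 8.0], size=(5000, 3))
# Synthetic labels: low ferritin and frequent donation raise the probability
# of an iron-related adverse outcome (a deliberately simple stand-in).
p_adverse = 1.0 / (1.0 + np.exp(0.05 * X[:, 1] - 0.5 * X[:, 2] - 1.0))
y = rng.random(5000) < p_adverse

model = GradientBoostingClassifier().fit(X, y)

def recommended_interval_days(donor_features, floor=56):
    """Map a donor's predicted adverse-outcome risk to a waiting period.

    The cut-points are arbitrary placeholders; a real policy would tune them
    against donor health outcomes and blood-supply sufficiency.
    """
    risk = model.predict_proba([donor_features])[0, 1]
    if risk < 0.2:
        return floor  # the regulatory minimum of 56 days
    if risk < 0.5:
        return 112
    return 168

# Example: a 24-year-old with ferritin 20 ng/mL and 5 recent donations.
print(recommended_interval_days([24.0, 20.0, 5.0]))
```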

Together, these projects inform critical policy decisions and introduce novel methods for blood safety policy assessment, guiding the efficient and effective deployment of limited resources to keep blood donors and transfusion recipients safe.