Chapter 4 Data and Measurement

4.1 Policy

In Shor (2018), I identified several dozen roll calls on three facets of state implementations of the ACA (state health insurance exchanges, Medicaid expansion, and individual mandate nullification). In this paper, I massively expand this data by using two separate sources of data on health policy bills at the state level.

4.1.1 NCSL Data

The first of these is the National Conference of State Legislatures’ Health Care Reform Database.2 This database lists bill titles, which I manually matched to electronic roll call records. The resulting data are summarized in Table ??. One principal advantage of this data source is that it also assigns topic tags to each bill, allowing me to disaggregate bills by topic and assess effect heterogeneity by issue.

NCSL Data Summary

Subset          Bills   Cmte Roll Calls   Floor Roll Calls   Cmte Votes   Floor Votes
All             4,465   5,822             18,722             69,459       1,441,820
Non-unanimous   2,652   1,752             10,485             21,751       856,648

4.1.2 Search Data

The NCSL Database is a black box. If it has systematic errors or biases in how it assesses which bills are related to health policy, I might draw erroneous conclusions, or ones with more limited external validity. As a companion data set, I therefore search bill titles and descriptions using a set of health policy search terms (listed in the appendix). As shown in Table ??, this results in an order-of-magnitude increase in data compared with the NCSL Database, with over 7 million recorded votes from roughly 15,000 legislators across more than 54,000 bills.
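As a minimal sketch of this companion search strategy, the matching logic amounts to a case-insensitive keyword search over bill titles and descriptions. The term list and bill records below are illustrative placeholders; the actual search terms are listed in the appendix.

```python
# Illustrative subset of health policy search terms (the full list is in
# the appendix); matching is case-insensitive substring search.
HEALTH_TERMS = ["medicaid", "health insurance", "exchange",
                "individual mandate", "hospital", "premium"]

def is_health_bill(title, description=""):
    """Flag a bill as health-related if any term appears in its title
    or description."""
    text = f"{title} {description}".lower()
    return any(term in text for term in HEALTH_TERMS)

# Hypothetical bill records for illustration.
bills = [
    {"id": "HB 101", "title": "An act relating to Medicaid expansion"},
    {"id": "SB 202", "title": "An act concerning highway funding"},
]
matched = [b["id"] for b in bills if is_health_bill(b["title"])]
# matched contains only the health-related bill, "HB 101"
```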

Search Data Summary

Subset          Bills    Cmte Roll Calls   Floor Roll Calls   Cmte Votes   Floor Votes
All             54,272   61,939            103,196            737,853      7,385,325
Non-unanimous   25,965   16,982            41,242             208,656      3,254,954

Throughout the paper I show results from both data sets. They are surprisingly similar, which gives me greater confidence in my estimates.

4.1.3 Topics

The NCSL health reform database labels bills with over 50 subcategories, such as payment reform, Medicaid expansion, or exchange administration. I focus on the 16 subcategories with more than 10 roll call votes.

  • Over 50 separate topics coded by NCSL; examples include:
    • COVID-19
    • Medicaid expansion and waivers
    • Insurance coverage mandates
    • Network regulation
    • Payment reforms
    • Health insurance marketplace structure
    • Drug abuse treatment
  • Going beyond NCSL:
    • Unsupervised machine topic coding in the search data [TBD]

4.2 Legislative Data

4.2.1 Party and Ideology

Legislator ideal points are derived from the original data set I have been building over the past decade with Nolan McCarty, covering over 25,000 unique state legislators and more than 2,200 chamber-years of data.3 Prior to Shor and McCarty (2011), legislator-level ideal points were unavailable for two reasons: the lack of data on voting records and the lack of a metric for comparing across states. To address the first problem, legislative journals of all 50 states (generally from the mid-1990s onward) were either downloaded from the web or purchased in hard copy. The hard copy journals were disassembled, photocopied, and scanned, and the scans were converted to text using optical character recognition software. To convert the raw legislative text to roll call voting data, we developed dozens of data-mining scripts. Because the format of each journal is unique, a new script had to be developed for each state, and again each time a state changed its publication format. State legislative journals and votes have gradually become more accessible online, although more commonly for recent years than for older ones. New resources like OpenStates and LegiScan aggregate these electronic archives and make accessing roll call votes easier and less noisy than ever before. In part due to these new data sources, we have continued to update the data: our measures now extend to 2021, incorporating the legislators elected between 1994 and 2020.

The second issue is that we can only compare the positions of two legislators if they have cast votes on the same issues. If we assume that legislators have fairly consistent positions over time, we can compare two legislators so long as they have both voted on the same issues as a third legislator. But this poses special problems for the study of state legislators, because two legislators from different states rarely cast votes on exactly the same issue. So to make comparisons across states, we use a survey of federal and state legislative candidates that asks similar questions across states and across time. The National Political Awareness Test (NPAT) is administered by Project Vote Smart, a nonpartisan organization that disseminates these surveys as voter guides to the public at large. Additional work is needed to process the raw NPAT data by merging identical questions and respondents across states and time. Then, by combining the roll call data with the processed NPAT survey data from 1996 to 2018, we generate universal coverage of state legislators who have served in the states for which we have roll call data. The technical details of how we combine these two data sources can be found in Shor and McCarty (2011).

4.3 Opinion Data

4.4 Constituency Opinion

I begin my assessment of representation at the state legislative district level by developing a survey instrument that gives me valid estimates of constituency-level public opinion. The key questions concern policy issues related to Affordable Care Act implementation, reform, and repeal in the states. To compare individual legislator behavior to the opinion and needs of her constituents, I need a way to disaggregate my survey data to the state and district levels. This is a problem, given typically-sized survey samples (which would leave very few respondents in any given district).

4.5 Multilevel regression with poststratification (MRP)

4.5.1 Introduction

I model opinion via multilevel regression with poststratification (MRP). This technique was first developed by Gelman and Little (1997) and introduced in a political science context by Park, Gelman, and Bafumi (2004). It has experienced a flowering of use in the study of state politics (J. R. Lax and Phillips (2009); J. Lax and Phillips (2009); Jonathan P. Kastellec, Lax, and Phillips (2010); J. Pacheco (2011)). The key advantage of this technique is the ability to obtain trustworthy estimates of opinion at subconstituency (typically state) levels using relatively few respondents (e.g., the number of respondents in typically-sized national surveys). It combines individual-level survey data with high-quality (typically Census) poststratification data to adjust for sparse survey coverage across geographic units. Dynamic MRP estimates are possible when surveys are conducted at multiple points in time [J. Pacheco (2011); Pacheco:2012; Lewis:2017].

MRP proceeds in two stages. First, individual survey responses and regression analysis are used to estimate the opinions of different types of people. A respondent’s opinions are treated as being, in part, a function of his or her demographic and geographic characteristics. Research has consistently demonstrated that demographic variables are crucial determinants of individuals’ political opinions, particularly their ideological orientation. In addition to demography, survey responses are treated as a function of a respondent’s geographic characteristics. Why are geographic predictors included? Existing research has shown that the place in which people live is an important predictor of their core political attitudes (Erikson, Wright, and McIver 1993). J. Lax and Phillips (2009) demonstrate that the inclusion of geographic predictors greatly enhances the accuracy of MRP opinion estimates when compared to models that rely exclusively on demographic predictors.

The second stage of MRP is referred to as poststratification. Based on data from the U.S. Census, we know what proportion of a given district’s population each demographic-geographic type from stage one comprises. Within each district, we simply take the estimated ideology of every demographic-geographic type and weight it by its frequency in the population. Finally, these weighted estimates are summed within each district to produce a measure of overall district-level ideology (i.e., the ideology of the “median voter”). Standard errors can be bootstrapped in a manner similar to Jonathan P. Kastellec et al. (2015).
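The poststratification arithmetic can be sketched in a few lines. The cell estimates and population counts below are invented for illustration, not taken from the actual survey or Census data:

```python
# Stage two of MRP: weight each demographic-geographic cell's estimated
# opinion (from the stage-one multilevel model) by its Census population
# share within the district, then sum.

# cell -> (stage-one estimated support, Census population count)
cells = {
    ("white", "college",    "district_7"): (0.62, 12_000),
    ("white", "no_college", "district_7"): (0.48, 30_000),
    ("black", "college",    "district_7"): (0.71,  4_000),
    ("black", "no_college", "district_7"): (0.66,  9_000),
}

total_pop = sum(n for _, n in cells.values())
# Population-weighted average opinion across cells = district estimate.
district_estimate = sum(est * n for est, n in cells.values()) / total_pop
```

The same loop runs over every district, so a district dominated by low-support cells ends up with a low estimate even if the raw survey happened to oversample high-support respondents there.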

The applied literature utilizing multilevel regression with poststratification (MRP) is exploding, and for good reason. We want to be able to say something about public opinion or representation in geographical units smaller than whole countries, but we typically lack the respondents to do so. So when the method was introduced in Gelman and Little (1997), many researchers were excited by the possibility of applying it to the American states, which are foundational in our federalist system. Using standard nationally representative samples of 800–1,000 respondents, analysts could generate statistically reliable estimates of opinion in all fifty states. This was used to estimate state-level presidential vote from a single nationally representative sample (Park, Gelman, and Bafumi 2004). This is possible despite the fact that, while the average state might have 20 respondents, population disparities mean that states like California or Texas might have a few dozen respondents each while the smallest states like Delaware or South Dakota might have fewer than a handful. Even so, MRP has been validated over and over again with a variety of strategies (see, for example, J. R. Lax and Phillips (2009)). This combination of data efficiency and statistical reliability is driving the vast increase in MRP’s use.

What accounts for this seemingly magical ability? First, the technique builds on the powerful predictive ability of demographic and residential variables for individual-level public opinion, made possible by a long history of political science research on the foundations of opinion and vote choice. What was new was the addition of aggregate-level information: we do not have to pretend we know nothing about the small areas where survey respondents live. Also new is how the individual and aggregate information is combined via a multilevel regression setup, which efficiently combines data at multiple levels of analysis, borrowing strength from units with more data to assist in estimates for units with less data (Raudenbush and Bryk 2002; Gelman:2006). The final, missing piece of the puzzle was the addition of high-quality population data from sources like the U.S. Census. In a sense, the final estimates are an amalgam of individual-level survey responses, a well-specified multilevel model, aggregate-level information, and the “borrowing” of strength from a gold-standard information source like the Census.
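The "borrowing strength" idea can be illustrated with a toy partial-pooling calculation. This is a simplified empirical-Bayes sketch with invented variances and state means, not the actual multilevel model:

```python
# Toy partial pooling: each state's raw mean is shrunk toward the grand
# mean, with the shrinkage weight depending on that state's sample size.
# All numbers below are invented for illustration.
sigma2 = 0.25  # assumed within-state (respondent-level) variance
tau2 = 0.01    # assumed between-state variance

# state -> (raw mean support in the survey, number of respondents)
states = {"CA": (0.58, 90), "DE": (0.70, 3)}
grand_mean = 0.55  # overall mean across all respondents (assumed)

pooled = {}
for state, (ybar, n) in states.items():
    # Weight on the state's own data grows with its sample size n.
    w = tau2 / (tau2 + sigma2 / n)
    pooled[state] = w * ybar + (1 - w) * grand_mean
```

With 90 respondents, California's estimate stays close to its raw mean; Delaware's 3 respondents carry little weight, so its estimate is pulled strongly toward the grand mean. That is exactly the sense in which small units borrow strength from the rest of the data.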

Part of the confidence in MRP estimation at the state level stems from extensive validation exercises against administrative data and massive gold-standard surveys (disaggregated to the state level). Examples of these validations include Park, Gelman, and Bafumi (2004), Lax:2009, and Gelman:2016. Simply put, MRP estimates from typically-sized national opinion surveys (in the 800–1,000 respondent range) line up very closely with administrative data measured with very little error, and with surveys of hundreds of thousands or even millions of respondents.

Nearly all MRP applications, at least in this country, have been set at the state level, and it is easy to understand why. In the American federal system, states are important for a number of reasons. First, states suffuse the formation of the national government, for example via the equal representation of states in the Senate and the composition of the Electoral College in selecting the president. Second, states are responsible for an enormous amount of policy generation and implementation in a federal system like ours, which accords a large degree of independence to the regional units. States elect their own governments and deliver their own policies. This is why we would like to know public opinion at these levels. Before MRP, there was no practical way to do so, especially on particular issues like health care, same-sex marriage, or criminal justice reform. MRP has revolutionized the study of state-level opinion and is now routinely used in the academic and data journalism communities.4 It is also no surprise that it is spreading overseas, for example in explaining regional opinion on the Brexit vote.5

The next step in the use of MRP for estimating opinion is in political constituencies smaller than states. This, too, makes sense given the single-member district setup of the US House and American state legislatures. Both US Representatives and state legislators from upper (State Senate) and lower (State House) chambers are elected district by district. The electoral connection (Mayhew 1974) should bind district constituents to their representatives. Whether that is true in reality is, of course, an empirical question. It may be that district legislators are faithful delegates of their constituencies, or it may be that other forces block the proper functioning of the representational relationship.

MRP’s attraction for those wishing to study substate opinion is obvious. Researchers like myself face the same quandary over opinion data as researchers who study states as a whole (among whom I also count myself): we would like to estimate opinion at these levels because of the obvious analytical need to do so. Can we use MRP here? Nothing in the method restricts it to states. While all the previous ingredients seem easy to assemble (individual-level data, group-level predictors, a multilevel model setup, gold-standard poststratification data), the number of respondents needed is a key question, and a methodological one that requires careful comparison and validation. One way to think of MRP is as an amalgam of already-existing data and newly collected survey data for the purposes of small area estimation. The other data have already been collected or could in principle be collected easily. But how much new survey data is needed?

We have exactly that in Warshaw and Rodden (2012). They pool a variety of issue questions from a number of very large surveys (the 2004 ANES and 2006–2008 CCES), obtaining as many as 110,000 responses. They then run a large number of MRP simulations in which they randomly sample from as few as 2,500 respondents to as many as 30,000, and compare the MRP small area estimates to the true measures of opinion from the full data set. Their findings are unequivocal: congressional district MRP estimates with 2,500 respondents, and upper chamber (State Senate) legislative district MRP estimates with 5,000 respondents, are extremely close to the true measures. They also find that statistical properties improve with more respondents, but the marginal improvement drops as the number of respondents increases (diminishing marginal utility). At 30,000 or above, the MRP estimates are essentially identical to the true estimates, and MRP is no longer automatically preferred to mere disaggregation (though it retains other attractive properties).

Since the Warshaw and Rodden (2012) paper, a number of papers have implemented MRP at the substate level for a variety of empirical applications. Broockman and Skovron (2017) rely on CCES team content to assess legislators’ knowledge of district opinion. The problem here is that, while the number of respondents is very high, the content of the questions is not determined by the researchers. This is a problem if we want to understand health care at the state level beyond a single question devoted to the topic. McCarty et al. (2018) (of which I am a coauthor) asks whether legislative extremism is related to district opinion polarization. It also has a very large number of respondents, but the responses are combined to form a generalized ideology measure rather than a specific issue measure. This is again of no use when studying specific policy questions.

So what we need are studies that estimate substate opinion with MRP on specific issue questions of particular interest to the researcher. I have found three, and I am the author of one of them, the only one on state legislative districts. Howe et al. (2015) and Zhang et al. (2018) study substate variation in county opinion on climate change and mitigation in two papers published in Nature Climate Change. These two papers made inferences at the congressional district (435 total) and county (3,143 total) levels, with 12,061 and 6,301 respondents, respectively. I wrote Shor (2018) to investigate the ACA implementation votes of individual state legislators, and to see whether state senate constituencies (1,972 total) were able to budge legislators. I collected 5,000 respondents with a survey funded by the Robert Wood Johnson Foundation.

4.5.2 District level estimates

Julianna Pacheco has used this technique to great effect to study state health politics. While state estimates are interesting in their own right, I seek to go much further by addressing the microfoundations of such macro phenomena. In particular, I aim to generate MRP estimates for state legislative districts, which would allow direct comparisons between individual state legislators and the districts they represent. Warshaw and Rodden (2012) validate MRP estimates at these levels with approximately the number of respondents I plan to survey (7,500).

What is astonishing is that there is almost no published applied work using MRP at these constituency levels. A forthcoming paper on which I am a coauthor (McCarty et al. 2018) uses MRP to obtain generalized ideology measures, but not specific issue opinions. Howe et al. (2015) disaggregate environmental opinion to the county level, which does not address the representational relationship between district constituents and state legislators. Broockman and Skovron (2017) have an unpublished paper that does examine specific issue opinion at the district level, but it is limited to the set of questions asked on the common content of the CCES survey; the only health-related question is about universal, publicly provided healthcare.

I plan to collect data primarily with a nationally representative online sample of adults with geographic identifiers (zip codes) that I can use to place respondents fairly precisely into districts.

MRP is an incredibly powerful technique that promises vast economies while generating valid and reliable small area estimates. But getting estimates at these very small and numerous legislative district levels is much more resource-intensive than typical state-level applications, which can use typically-sized nationally representative samples (800 and up). Getting estimates for state House/Assembly districts requires, at a minimum, 15,000 respondents.6 Even state Senate districts need 5,000 or more. Thus, truly dynamic estimates of public opinion at many points in time (Julianna Pacheco and Maltby 2017a, 2017b) are not possible, nor are panel studies, which would need even larger samples to deal with attrition.

Instead, I will survey two cross-sections of the country in 2019 and 2020. I will use each survey on its own to generate state, congressional district, and state Senate estimates with MRP. Combined together for the full set of 15,000, I will be able to estimate state House district opinion as well.

4.5.3 Interest Groups

Major strides have been made in data availability on interest group activity at the federal level. We know more about campaign finance, lobbying, mobilization, and other aspects of interest group influence than ever before. At the state level, too, our knowledge has made major strides. Thanks to the hard work of researchers like Theda Skocpol, Alexander Hertel-Fernandez, and others, we know much more about the extent of the influence of the troika of ALEC, the State Policy Network, and Americans for Prosperity (AfP) (Hertel-Fernandez 2014, 2016, 2018; Hertel-Fernandez, Skocpol, and Lynch 2016). That data is typically collected at the state level.

Hertel-Fernandez is writing a book manuscript that estimates, at the bill level, the degree of “policy plagiarism” (copying of ALEC model bills), work that complements other scholars active in the area of subnational interest groups (Kroeger 2016; Jansa, Hansen, and Gray 2015). Other bill-level measures of interest group lobbying activity can be accessed from individual state web sites, for the subset of states with transparency requirements mandating reports on which groups were involved in crafting a bill.

What we have little information about is the district level. We have a little of it here and there; for example, an internal leak of AfP documents revealed the fine-grained locations of their rallies, which we can resolve into districts (Hertel-Fernandez 2018). But in general, this is the exception rather than the rule. How many local AfP activists are there, and where are they? How many people have been lobbied by AfP and its staff and volunteers?

On the left, we know even less. Part of that is due to the weak analogues to the state conservative organizations (Hertel-Fernandez 2016), but part is that we simply do not have good measures of what the labor movement is doing at the district level. Given the state of our knowledge, what can we build on? Existing census data on employment is helpful. Hertel-Fernandez, for example, uses government employment as a proxy for district union membership because of the lack of data on union membership at local levels (Hertel-Fernandez 2018). This is a good first step (and a necessary one, given the lack of alternatives), but an imperfect one. Union membership among public employees is quite heterogeneous across states and localities; union participation in politics may also vary a great deal; and this definition misses membership in private sector unions, which may be very important.

Another example is Hertel-Fernandez’s use of small business employment as a proxy for conservative economic interests (Hertel-Fernandez 2018). Again, this is understandable given the lack of direct data. But these interests may not be collinear with business size, and small businesses in different locales may be radically different ideologically.

Take the specific example of physicians. We know they are crucially important to moving the public and elected officials on a number of critical policy areas, both historically and in current politics (Starr 2008, 2013; Patashnik, Gerber, and Dowling 2017). But they are not interchangeable; physicians are deeply divided by geography, specialization, industrial organization, partisanship, and ideology (Bonica, Rosenthal, and Rothman 2014, 2015; Bonica et al. 2017). Measuring their influence using Census employment figures would be deeply misleading, lumping together self-employed ultraconservative orthopedic surgeons living in the Atlanta suburbs with liberal urban staff physicians working for Kaiser Permanente in Oakland. While both archetypes might be equally politically active, it is quite likely that they will push for diametrically opposed visions of health care reform. Then again, that may be true only for some issues, while on most issues occupational self-interest pushes both toward the same policies. This is something empirical investigation should shed light on.

Newly available data on physician location and ideology (Bonica et al. 2017; Bonica 2017) is a major advance on this measurement problem. Physicians are very politically active, almost as much as lawyers, the profession we usually think of as highly politically connected. Combining physician location and ideology with public opinion, legislator ideology, and outcome data should give us a great deal of insight into how representation operates. Between physician subconstituency pressure and public opinion, which constrains the legislative behavior of representatives who are themselves deeply ideologically bound? In personal communication, Adam Bonica has promised to supply me with measures of physician specialties, preferences, and locations resolved to the state legislative district level. These measures will proxy for activity by this highly influential interest, with nuance regarding specialization and ideology.

Another important health-specific interest is medical facilities, and hospitals in particular. Hospitals are located throughout states, but in a lumpy manner reflecting population, labor markets, industrial organization, and the like. Hospitals have been very influential in lobbying legislators and governors on a variety of issues, including and especially Medicaid expansion (L. Jacobs and Skocpol 2015). The threat and actuality of hospital closures have been repeatedly cited in the popular press as pressuring state officials to increase state and federal financing for the sector. I intend to collect data on hospital locations and closures that I can classify at the district level, along with data on hospital sector lobbying in the state capitol that complements district-level activity.

But physicians and hospitals are not enough. What more can we measure at the district level to address possible sources of influence? The proposed surveys in the grants, with their massive sample size and modern opinion disaggregation techniques, can help us here. We can ask respondents about their economic interests, their union membership (or familial connection to unions), and general questions that reveal a deeply nuanced view of their ideological preferences (Jessee 2012; Shor and Rogowski 2018). Then we can use MRP to estimate district-level measures from these individual responses.

References

Bonica, Adam. 2017. “Database on Ideology, Money in Politics, and Elections (DIME).”
Bonica, Adam, Howard Rosenthal, and David J Rothman. 2014. “The Political Polarization of Physicians in the United States: An Analysis of Campaign Contributions to Federal Elections, 1991 Through 2012.” JAMA Internal Medicine 174 (8): 1308–17.
———. 2015. “The Political Alignment of US Physicians: An Update Including Campaign Contributions to the Congressional Midterm Elections in 2014.” JAMA Internal Medicine 175 (7): 1236–37.
Bonica, Adam, Howard Rosenthal, David J. Rothman, and Kristy Blackwood. 2017. “Political Ideology and Sorting: The Mobility of Physicians.”
Broockman, David E, and Christopher Skovron. 2017. “Conservative Bias in Perceptions of Public Opinion Among American Political Elites.”
Erikson, Robert S., Gerald C. Wright, and John P. McIver. 1993. Statehouse Democracy: Public Opinion and Policy in the American States. New York: Cambridge.
Gelman, Andrew, and Thomas C. Little. 1997. “Poststratification into Many Categories Using Hierarchical Logistic Regression.” Survey Methodology 23 (2): 127–35.
Hertel-Fernandez, Alexander. 2014. “Who Passes Business’s ‘Model Bills?’ Policy Capacity and Corporate Influence in US State Politics.” Perspectives on Politics 12 (3): 582–602.
———. 2016. “Explaining Liberal Policy Woes in the States: The Role of Donors.” PS: Political Science & Politics 49 (3): 461–65.
———. 2018. “Memo on AFP-Wisconsin Chapter Development.”
Hertel-Fernandez, Alexander, Theda Skocpol, and Daniel Lynch. 2016. “Business Associations, Conservative Networks, and the Ongoing Republican War over Medicaid Expansion.” Journal of Health Politics, Policy and Law 41 (2): 239–86.
Howe, Peter D, Matto Mildenberger, Jennifer R Marlon, and Anthony Leiserowitz. 2015. “Geographic Variation in Opinions on Climate Change at State and Local Scales in the USA.” Nature Climate Change 5 (6): 596–603.
Jacobs, Lawrence, and Theda Skocpol. 2015. Health Care Reform and American Politics: What Everyone Needs to Know. Oxford University Press.
Jansa, Joshua M, Eric R Hansen, and Virginia H Gray. 2015. “Copy and Paste Lawmaking: The Diffusion of Policy Language Across American State Legislatures.” Department of Political Science University of North Carolina at Chapel Hill.
Jessee, Stephen A. 2012. Ideology and Spatial Voting in American Elections. Cambridge University Press.
Kastellec, Jonathan P., Jeffrey R. Lax, and Justin H. Phillips. 2010. “Public Opinion and Senate Confirmation of Supreme Court Nominees.” Journal of Politics 72 (3): 767–84.
Kastellec, Jonathan P, Jeffrey R Lax, Michael Malecki, and Justin H Phillips. 2015. “Polarizing the Electoral Connection: Partisan Representation in Supreme Court Confirmation Politics.” The Journal of Politics 77 (3): 787–804.
Kroeger, Mary A. 2016. “Plagiarizing Policy: Model Legislation in State Legislatures.” Princeton Typescript.
Lax, Jeffrey R., and Justin H. Phillips. 2009. “How Should We Estimate Public Opinion in the States?” American Journal of Political Science 53 (1): 107–21.
Lax, Jeffrey, and Justin Phillips. 2009. “Gay Rights in the States: Public Opinion and Policy Responsiveness.” American Political Science Review 103 (03): 367–86. https://doi.org/10.1017/S0003055409990050.
Mayhew, David R. 1974. Congress: The Electoral Connection. New Haven: Yale University Press.
McCarty, Nolan, Jonathan Rodden, Boris Shor, Christopher Tausanovitch, and Christopher Warshaw. 2018. “Geography, Uncertainty, and Polarization.” Political Science Research and Methods, May.
Pacheco, J. 2011. “Using National Surveys to Measure Dynamic US State Public Opinion.” State Politics & Policy Quarterly 11 (4): 415–39.
Pacheco, Julianna, and Elizabeth Maltby. 2017a. “The Role of Public Opinion—Does It Influence the Diffusion of ACA Decisions?” Journal of Health Politics, Policy and Law 42 (2): 309–40.
———. 2017b. “Trends in State Level Opinions Toward the Affordable Care Act.”
Park, David K, Andrew Gelman, and Joseph Bafumi. 2004. “Bayesian Multilevel Estimation with Poststratification: State-Level Estimates from National Polls.” Political Analysis 12: 375–85.
Patashnik, Eric M, Alan S Gerber, and Conor M Dowling. 2017. Unhealthy Politics: The Battle over Evidence-Based Medicine. Princeton University Press.
Raudenbush, Stephen W., and Anthony S. Bryk. 2002. Hierarchical Linear Models: Applications and Data Analysis Methods. Second Edition. Newbury Park, CA: Sage Publications.
Shor, Boris. 2018. “Ideology, Party and Opinion: Explaining Individual Legislator ACA Implementation Votes in the States.” State Politics and Policy Quarterly. https://doi.org/10.1177/1532440018786734.
Shor, Boris, and Nolan McCarty. 2011. “The Ideological Mapping of American Legislatures.” American Political Science Review 105 (3): 530–51.
Shor, Boris, and Jon Rogowski. 2018. “Ideology and the US Congressional Vote.” Political Science Research and Methods 6 (2). https://doi.org/10.1017/psrm.2016.23.
Starr, Paul. 2008. The Social Transformation of American Medicine: The Rise of a Sovereign Profession and the Making of a Vast Industry. Basic books.
———. 2013. Remedy and Reaction: The Peculiar American Struggle over Health Care Reform. Yale University Press.
Warshaw, Christopher, and Jonathan Rodden. 2012. “How Should We Measure District-Level Public Opinion on Individual Issues?” Journal of Politics 74 (1): 203–19.
Zhang, Baobao, Sander van der Linden, Matto Mildenberger, Jennifer R Marlon, Peter D Howe, and Anthony Leiserowitz. 2018. “Experimental Effects of Climate Messages Vary Geographically.” Nature Climate Change 8 (5): 370.