Papers

A Sequential Modeling Approach for Predicting Mortality of High Risk Critically Ill Patients

Rodrigo Octavio Deliberato, Stephanie Ko, Tejas Sundaresan, Aaron Russell Kaufman, and Leo Anthony Celi

Abstract: Severity of illness scores are used for risk adjustment when comparing cohorts of critically ill patients in intensive care units (ICUs). Although these models have good discrimination, they are typically poorly calibrated, over-predicting mortality for low-risk patients and under-predicting mortality for high-risk patients. Therefore, clinicians are skeptical of their accuracy for real-time patient prognostication. We propose a sequential modeling approach to improve these prediction models. We hypothesized that by first stratifying patients into high-risk (mortality prediction ≥ 10%) and low-risk cohorts, then applying four standard machine learning tools to a much larger set of candidate variables on the high-risk cohort only, we could improve discrimination and calibration of mortality risk prediction in critically ill patients.
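A minimal sketch of the two-stage idea, assuming a hypothetical data extract ("icu_admissions.csv"), a baseline severity prediction column, and a gradient boosting learner; the 10% threshold is from the abstract, but the file, column names, and model choice are illustrative assumptions, not the paper's exact specification.

```python
# Illustrative sketch: stratify on a baseline severity prediction, then refit
# a richer model on the high-risk cohort only. Column names and the choice of
# gradient boosting are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, brier_score_loss

df = pd.read_csv("icu_admissions.csv")                   # hypothetical extract
high_risk = df[df["baseline_mortality_pred"] >= 0.10]    # stage 1: stratify

features = [c for c in high_risk.columns
            if c not in ("hospital_mortality", "baseline_mortality_pred")]
X_train, X_test, y_train, y_test = train_test_split(
    high_risk[features], high_risk["hospital_mortality"],
    test_size=0.3, random_state=0, stratify=high_risk["hospital_mortality"])

# Stage 2: a flexible learner on the larger candidate-variable set.
model = GradientBoostingClassifier().fit(X_train, y_train)
pred = model.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, pred))        # discrimination
print("Brier:", brier_score_loss(y_test, pred))   # calibration-related error
```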

Matching for Inference on Text Data with an Application to Measuring Media Bias

Reagan Rose, Luke Miratrix, Aaron Kaufman, and Jason Anastasopoulos

Abstract: There is little existing methodology for drawing potentially causal conclusions when pre-treatment confounders are represented by text data. Even more unclear is how to approach inference in the setting where both the pre-treatment covariates and the outcome of interest are defined by different summary measures of the same observed text. We summarize the challenges and limitations for principled analysis in this domain and propose a framework for estimating effects in studies where both the covariates and outcomes are summary measures built from text. First, we extend recent work on matching documents on features generated using text analysis methods. After matching, we estimate differential word use and sentiment using other text analysis tools. We demonstrate our procedure by comparing partisan bias across US news sources, as measured by their rates of coverage of issues and, given the same coverage, their different representation of topics. Here both the covariates (i.e., topics covered) and the outcome (i.e., language used and sentiment of covered content) are measured from the text. Our approach allows for investigation of two questions: are news sources systematically selecting different content to cover, and furthermore, when covering the same topics, are news sources presenting content using different language or sentiment?
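A simplified sketch of the matching step: represent each article by topic proportions and pair articles across two sources by nearest-neighbor distance on those proportions. The LDA representation, toy corpora, and variable names are assumptions for illustration; the framework admits other text summaries.

```python
# Match documents on text-derived covariates (topic proportions), then compare
# outcomes (word use, sentiment) within matched pairs downstream.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import NearestNeighbors

docs_a = ["immigration bill debated in the senate today",        # placeholder corpora
          "new tariffs announced on imported steel"]
docs_b = ["senate debates sweeping immigration legislation",
          "stock markets rally after earnings reports"]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs_a + docs_b)
topics = LatentDirichletAllocation(n_components=5, random_state=0)
theta = topics.fit_transform(counts)                  # topic proportions per document
theta_a, theta_b = theta[:len(docs_a)], theta[len(docs_a):]

# Pair each source-A article with its closest source-B article on topics.
nn = NearestNeighbors(n_neighbors=1).fit(theta_b)
dist, idx = nn.kneighbors(theta_a)
pairs = list(zip(range(len(docs_a)), idx[:, 0]))
print(pairs)
```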

Interbranch Conflict, Unilateral Action, and the Presidency

Aaron Kaufman and Jon Rogowski

Abstract: Unilateral action is a defining characteristic of the modern presidency and has gained increased political salience in recent years. While theoretical scholarship identifies conditions under which presidents will exercise unilateral powers, empirical tests of these theories suffer from two sets of limitations. First, though scholars recognize the range of unilateral tools presidents may deploy, including executive orders, memoranda, proclamations, and the like, empirical studies generally consider these tools in isolation and most often focus on executive orders to the exclusion of the others. Second, existing approaches provide no measures that allow for the direct comparison of substantive significance across unilateral tools and over time. In this paper, we use text analysis to address both limitations and develop a continuous measure of significance for executive orders, memoranda, and proclamations issued between 1845 and 2016. We then use these new measures to re-examine prominent theories of unilateral action, investigate potential complementarities between various tools, and produce a summary measure of presidents’ use of unilateral powers. Our approach offers a scalable method for studying presidential behavior and has important implications for theories of unilateral action.

Testing Theories of Judicial Behavior Using Networks of Text

Aaron Kaufman and Maya Sen

Abstract: How open are Supreme Court Justices to being persuaded by attorneys at oral argument? We use novel network analyses to explore a new data set of nearly six thousand Supreme Court oral argument transcripts to understand how and why the Justices speak. In so doing, we find little to no evidence that Justices use oral arguments primarily as an information-finding mechanism, and limited evidence that they use them to lobby or to try to influence vulnerable colleagues. Instead, consistent with theories that Justices behave more like political actors with well-formed policy preferences, we find that Justices take opportunities at oral argument to stake out positions. That is, Justices use oral arguments to delineate their potential vote and to indicate to their colleagues the strength of their beliefs. Ultimately, our contributions here challenge the idea that Justices approach oral arguments as information-gathering missions, in turn providing evidence for a decision-making model rooted in more firmly fixed attitudes.

Imputation of Missing Baseline Creatinine Values for Patients in the ICU

Matteo Bonvini, Daniele Ramazzotti, Robert Stretch, Leo Celi, and Aaron Kaufman

Abstract: Lack of access to pre-admission data (including laboratory values and vital signs) contributes to difficulty estimating illness severity in newly admitted ICU patients. Baseline serum creatinine is an important example: the value is commonly unknown, yet it informs many decisions made by intensivists. This study evaluated methods of imputing baseline creatinine values using demographic variables and laboratory data on hospital and ICU admission. Data on patients admitted to ICUs at Beth Israel Deaconess Medical Center (BIDMC) from 2002-2012 were extracted from MIMIC III. Baseline creatinine values (obtained in the outpatient setting 2 to 365 days prior to admission) were available for patients receiving care at BIDMC clinics. Three methods of imputing missing baseline creatinine values were evaluated: predictive mean matching (PMM), classification and regression trees (CART), and Bayesian normal linear models (NLM). Inputs were age, gender, race, Elixhauser comorbidities, OASIS, Angus criteria, and creatinine on hospital and ICU admission. Patterns of missingness for baseline creatinine values were simulated as missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). Imputation methods were evaluated using two metrics: standardized root mean squared error (RMSE) and detection of increases in serum creatinine of 0.3 mg/dL from baseline. Among 37,292 cases, 7,085 (19%) had known baseline creatinine values. PMM, CART, and NLM performed similarly in terms of RMSE when data were MCAR (0.22 for all) or MAR (0.13, 0.10, 0.13, respectively), whereas NLM was superior with MNAR data (0.40, 0.40, 0.31, respectively). The distribution of covariates differed between cases with and without known baseline values, so MAR was deemed most appropriate. NLM exhibited the best sensitivity and specificity in detecting increases of 0.3 mg/dL from baseline creatinine under the MAR assumption (Sn 0.72, Sp 0.79).
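A rough sketch of the evaluation design: mask known baseline creatinine values under a simulated missingness pattern, impute, and score RMSE on the masked entries. The sklearn imputers below are only loose analogues of the paper's PMM/CART/NLM implementations (e.g., R's mice), and the file and column names are assumptions.

```python
# Mask known values, impute with a Bayesian linear model and a regression tree,
# and compare RMSE on the held-out entries. Categorical predictors would need
# encoding first; here we keep only numeric columns for simplicity.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv("mimic_baseline_creatinine.csv").select_dtypes("number")  # hypothetical extract
known = df["baseline_creatinine"].notna()

rng = np.random.default_rng(0)
mask = known & (rng.random(len(df)) < 0.3)           # simulate an MCAR holdout
held_out = df.loc[mask, "baseline_creatinine"].copy()
df.loc[mask, "baseline_creatinine"] = np.nan

for name, est in [("NLM-like", BayesianRidge()),
                  ("CART-like", DecisionTreeRegressor(random_state=0))]:
    imp = IterativeImputer(estimator=est, random_state=0)
    filled = pd.DataFrame(imp.fit_transform(df), columns=df.columns)
    rmse = np.sqrt(((filled.loc[mask, "baseline_creatinine"] - held_out) ** 2).mean())
    print(name, "RMSE:", round(rmse, 3))
```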

Sequential Blocked Randomization for Internet-Based Survey Experiments

Aaron Kaufman and Matthew Kim

Abstract: It has long been established that completely randomized experiments are generally less efficient than block-randomized experiments, in that the latter can produce lower standard errors and narrower confidence intervals. However, in many practical settings, block randomization has been infeasible. In medical trials, for example, patients often need to be assigned to a treatment condition the moment they enter the hospital. We introduce a platform for designing and deploying block-randomized online survey experiments in which respondents arrive sequentially. Written in R Shiny, the platform offers a powerful and flexible framework for blocking on an arbitrary set of covariates and for implementing sophisticated modes of control flow, substantially improving researchers' abilities to answer complex behavioral questions while minimizing respondent burden.
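A minimal sketch of one sequential blocked assignment rule: each arriving respondent falls into a block defined by their covariates and is assigned to the arm currently least represented within that block, with ties broken at random. The platform itself is written in R Shiny; this Python sketch, with assumed blocking covariates, only illustrates the assignment logic.

```python
# Sequential blocked assignment: balance arms within each covariate block as
# respondents arrive one at a time.
import random
from collections import defaultdict

ARMS = ["control", "treatment"]
block_counts = defaultdict(lambda: {arm: 0 for arm in ARMS})

def assign(respondent):
    """Assign an arriving respondent, blocking on party ID and gender (assumed covariates)."""
    block = (respondent["party_id"], respondent["gender"])
    counts = block_counts[block]
    fewest = min(counts.values())
    arm = random.choice([a for a in ARMS if counts[a] == fewest])
    counts[arm] += 1
    return arm

print(assign({"party_id": "D", "gender": "F"}))
print(assign({"party_id": "D", "gender": "F"}))   # second arrival in the same block gets the other arm
```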

Improving Supreme Court Forecasting Using Boosted Decision Trees

Aaron Kaufman, Peter Kraft, and Maya Sen

Abstract: Though used frequently in machine learning, AdaBoosted decision trees (ADTs) are rarely used in political science, despite having many properties that are useful for social science inquiries. In this paper, we explain how to use ADTs for social science predictions. We illustrate their use by examining a well-known political prediction problem, predicting U.S. Supreme Court rulings. We find that our AdaBoosted approach outperforms existing predictive models. We also provide two additional examples of the approach, one predicting the onset of civil wars and the other predicting county-level vote shares in U.S. Presidential elections.
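A hedged illustration of the modeling approach, AdaBoosted decision trees, as implemented in scikit-learn. The data file, outcome column, and number of estimators are placeholders, not the paper's actual predictors or settings, and features are assumed to be numeric.

```python
# AdaBoost over shallow decision trees (scikit-learn's default base learner is
# a decision stump), evaluated with cross-validated accuracy.
import pandas as pd
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

cases = pd.read_csv("scdb_case_features.csv")     # hypothetical case-level feature file
X = cases.drop(columns=["petitioner_wins"])       # assumed outcome column name
y = cases["petitioner_wins"]

adt = AdaBoostClassifier(n_estimators=500)        # illustrative choice of rounds
print(cross_val_score(adt, X, y, cv=5, scoring="accuracy").mean())
```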

Draft Here

Can Violent Protest Change Local Policy Support? Evidence from the Aftermath of the 1992 Los Angeles Riot

Ryan Enos, Aaron Kaufman, and Melissa Sands

Abstract: Violent protests are dramatic political events often credited with causing significant changes in public policy. Scholarly research usually treats violent protests as deliberate acts, undertaken in pursuit of specific policy goals. However, due to a lack of appropriate data and difficulty in causal identification, there is little evidence of whether riots accomplish these goals. We collect unique electoral measures of policy support before and after the 1992 Los Angeles Riot, one of the most high-profile events of political violence in recent American history, which occurred just prior to an election. Contrary to some expectations from the academic literature and the popular press, we find that the riot caused a liberal shift in policy support at the polls. Investigating the sources of this shift, we find that it was likely the result of increased mobilization of both African American and white voters. Remarkably, this mobilization endures over a decade later.

Draft Here

And Yet They Move: Candidates’ Ideological Repositioning During Primary and General Election Campaigns

Pablo Barbera and Aaron Kaufman

Abstract: A rich literature in formal modeling makes predictions regarding candidate behavior, and how incumbents may be ideologically constrained by a primary or general election challenger. However, no empirical evidence has been brought to bear on the question of whether candidates conform to these theories. In this paper, we apply a novel method of ideal point estimation to empirically test predictions of these models. Using follower networks on Twitter, we estimate ideal points for every candidate running for United States federal office in 2016, for every day from April 2015 until election day. In doing so, we create the first temporally fine-grained data set of candidate ideology over time for more than 1,000 candidates for the Presidency, US House, and US Senate. We show that candidates do ideologically reposition over the course of the campaign, but with heterogeneity conditional on electoral contexts and challenger-based incentives. In particular, we examine whether candidates adopt more extreme ideological positions during primary season when they are challenged, only to converge toward the ideological center in contested general elections, and how these changes vary across parties. Ultimately, we shed new light on well-developed but largely untested theories of electoral politics.
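A heavily simplified sketch of how a single ideological dimension can be recovered from a follower network: decompose the users-by-candidates follow matrix and treat the leading singular vector as a one-dimensional scaling (a correspondence-analysis-style approximation). The paper's estimator may differ from this; the matrix below is synthetic, and repeating the step on daily network snapshots would yield the over-time series described above.

```python
# Toy scaling of a follower network: double-center the follow matrix and take
# its leading singular vector as a one-dimensional candidate score.
import numpy as np

rng = np.random.default_rng(0)
follows = (rng.random((2000, 100)) < 0.05).astype(float)   # fake users x candidates matrix

centered = follows - follows.mean(axis=0, keepdims=True)   # center columns
centered -= centered.mean(axis=1, keepdims=True)           # then rows
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
candidate_ideal_points = Vt[0]                              # one score per candidate for this snapshot
print(candidate_ideal_points[:5])
```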

How to Measure Legislative District Compactness If You Only Know It When You See It

Aaron Kaufman, Gary King, and Mayya Komisarchik

Abstract: The US Supreme Court, many state constitutions, and numerous judicial opinions require that legislative districts be “compact,” a concept assumed so simple that the only definition given in the law is “you know it when you see it.” Academics, in contrast, have concluded that the concept is so complex that it has multiple theoretical dimensions requiring large numbers of conflicting empirical measures. We hypothesize that both are correct: the concept is complex and multidimensional, but one particular unidimensional ordering represents a common understanding of compactness in the law and across people. We develop a survey method designed to elicit this understanding with high levels of intracoder and intercoder reliability (even though the standard paired comparison approach fails). We then create a statistical model that predicts, with high accuracy and solely from the geometric features of the district, compactness evaluations by judges and other public officials from many jurisdictions, as well as redistricting consultants and expert witnesses, law professors, law students, graduate students, undergraduates, ordinary citizens, and Mechanical Turk workers. As a companion to this paper, we offer data on compactness from our validated measure for 18,215 US state legislative and congressional districts, as well as software to compute this measure from any district shape. We also discuss what may be the wider applicability of our general methodological approach to measuring important concepts that you only know when you see them.
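A small sketch of the feature-extraction step, using shapely: compute standard geometric summaries of a district shape (here Polsby-Popper and a convex-hull ratio) of the kind a predictive model could take as inputs. The paper's actual feature set and statistical model are richer than this; the square polygon is a toy example.

```python
# Two common geometric compactness features for a district polygon.
import math
from shapely.geometry import Polygon

def geometric_features(district: Polygon) -> dict:
    area, perimeter = district.area, district.length
    return {
        "polsby_popper": 4 * math.pi * area / perimeter ** 2,   # 1.0 for a circle
        "convex_hull_ratio": area / district.convex_hull.area,  # 1.0 for convex shapes
    }

# Toy example: a unit square district.
square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
print(geometric_features(square))
# Features like these would then be fed to a supervised model trained on the
# survey-based compactness evaluations described above.
```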

Draft Here