24.9 Steps for RDiT (Regression Discontinuity in Time)

Notes:

  • Additional assumption: Time-varying confounders change smoothly across the cutoff date
  • Typically used when a policy is implemented on the same date for all subjects, but it can also be used when implementation dates differ across subjects. In the latter case, researchers typically estimate a separate RDiT specification for each time series.
  • Sometimes the date of implementation is not randomly assigned but chosen strategically. Hence, RDiT should be thought of under the “discontinuity at a threshold” interpretation of RD (not as “local randomization”) (C. Hausman and Rapson 2018, 8)
  • Normal RD uses variation in the \(N\) dimension, while RDiT uses variation in the \(T\) dimension
  • Choose the polynomial order based on BIC, typically. You can use either a global polynomial or separate pre-period and post-period polynomials for each time series (but usually the global one performs better); a code sketch follows these notes.
  • Could use the augmented local linear approach outlined by (C. Hausman and Rapson 2018, 12): estimate a model with all the controls first, then use the residuals in a second-stage model with the RDiT treatment (remember to bootstrap so that the second stage accounts for first-stage variance); see the sketch under recommendation 7 below.
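
Below is a minimal sketch of the global-polynomial RDiT specification with the order chosen by BIC, written in Python with pandas and statsmodels. The data frame `df`, its columns (`date`, `y`, `x1`, `x2`), the cutoff date, and the HAC lag length are illustrative assumptions, not taken from the original text.

```python
# Minimal RDiT sketch: regress y on a post-cutoff dummy, a global polynomial
# in time, and smooth controls; pick the polynomial order by BIC.
# All column names, the cutoff, and the lag length are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_rdit(df, cutoff, order, controls=("x1", "x2")):
    """OLS of y on a post-cutoff dummy, a global polynomial in time, and controls."""
    t = (df["date"] - cutoff).dt.days.to_numpy(dtype=float) / 365.25  # running variable in years
    X = {"post": (t >= 0).astype(float)}                              # treatment dummy at the threshold
    for p in range(1, order + 1):
        X[f"t{p}"] = t ** p                                           # global polynomial terms
    X.update({c: df[c].to_numpy() for c in controls})                 # smooth controls
    X = sm.add_constant(pd.DataFrame(X))
    # HAC (Newey-West) standard errors as a guard against serial correlation
    return sm.OLS(df["y"].to_numpy(), X).fit(cov_type="HAC", cov_kwds={"maxlags": 7})

def best_order_by_bic(df, cutoff, max_order=6):
    """Fit polynomial orders 1..max_order and return the order with the lowest BIC."""
    fits = {p: fit_rdit(df, cutoff, p) for p in range(1, max_order + 1)}
    return min(fits, key=lambda p: fits[p].bic), fits

# Example usage (df is a daily series with columns: date, y, x1, x2):
# order, fits = best_order_by_bic(df, pd.Timestamp("2015-01-01"))
# print(order, fits[order].params["post"])  # the RDiT estimate is the coefficient on 'post'
```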

Pros:

  • Can overcome cases where there is no cross-sectional variation in treatment implementation (so DID is not feasible)

  • Better than pre/post comparison because it can include flexible controls

  • Better than event studies because it can use long time horizons (this may be less relevant now given the development of long-horizon event studies) and higher-order polynomial time controls.

Cons:

  • Taking observations far from the threshold (in time) can bias your estimates because of unobservables and the time-series properties of the data-generating process.

  • The (McCrary 2008) test is not possible (see Sorting/Bunching/Manipulation) because the density of the running variable (time) is uniform.

  • Time-varying unobservables may impact the dependent variable discontinuously

  • Error terms are likely to include persistence (serially correlated errors)

  • Researchers cannot model time-varying treatment effects under RDiT

    • In a small enough window, the local linear specification is fine, but estimates from global polynomials can be too big or too small (C. Hausman and Rapson 2018)

Biases:

  • Time-varying treatment effects

    • You can increase the sample size either by:

      • using more granular data (higher frequency), which will not increase power because of serial correlation, or

      • widening the time window, which increases bias from other confounders

    • Two additional assumptions:

      • The model is correctly specified (all confounders are included, or the global polynomial approximates them well)

      • The treatment effect is correctly specified (whether it is smooth and constant, or time-varying)

      • These two assumptions must not interact (i.e., we do not want the polynomial to be correlated with unobserved variation in the treatment effect)

    • There is usually a difference between short-run and long-run treatment effects, but it is also possible that the bias stems from over-fitting of the polynomial specification (C. Hausman and Rapson 2018, 544).

  • Autoregression (serial dependence)

    • Need to use clustered standard errors to account for serial dependence in the residuals

    • In the case of serial dependence in \(\epsilon_{it}\), there is no clean solution; including a lagged dependent variable would misspecify the model (you may need another research project)

    • In the case of serial dependence in \(y_{it}\), with a long window it becomes unclear exactly what you are trying to recover. You can include the lagged dependent variable, but bias can still come from time-varying treatment effects or over-fitting of the global polynomial (a diagnostic sketch follows below)
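
A hedged sketch of the pre-period serial-dependence check discussed above, in Python with statsmodels. The column names, controls, and lag length are illustrative assumptions.

```python
# Check for serial correlation using only pre-treatment data: regress y on a
# time trend and controls, then run a Breusch-Godfrey test on the residuals.
# Column names and the number of lags are illustrative.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.diagnostic import acorr_breusch_godfrey

def pre_period_ar_test(df, cutoff, controls=("x1", "x2"), lags=5):
    pre = df[df["date"] < cutoff].copy()
    t = (pre["date"] - cutoff).dt.days.to_numpy(dtype=float)
    X = sm.add_constant(pd.DataFrame({"t": t, **{c: pre[c].to_numpy() for c in controls}}))
    fit = sm.OLS(pre["y"].to_numpy(), X).fit()
    lm_stat, lm_pval, f_stat, f_pval = acorr_breusch_godfrey(fit, nlags=lags)
    return lm_pval  # a small p-value is evidence of serially correlated errors

# If autoregression is detected, one response (with the caveats noted above)
# is to add a lagged dependent variable and/or report HAC/clustered errors:
# df["y_lag1"] = df["y"].shift(1)
```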

  • Sorting and Anticipation Effects

    • Cannot run the (McCrary 2008) test because the density of the time running variable is uniform

    • Can still test for discontinuities in other covariates (you want none) and for discontinuities in the outcome variable at placebo thresholds (you also want none)

    • Hence, it is hard to argue for a causal effect here, because the estimate could be the total effect of the treatment plus unobserved sorting/anticipation/adaptation/avoidance responses. At best, you can argue that there is no evidence of such behavior (a placebo-date sketch follows below)
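
A sketch of the placebo-threshold check mentioned above (and in recommendation 3 below), reusing the illustrative fit_rdit() helper from the earlier sketch. The placebo dates and column names are assumptions for illustration only.

```python
# Re-estimate the RDiT at fake cutoffs inside the untreated (pre-treatment)
# period; large or significant 'effects' there are a warning sign.
import pandas as pd

def placebo_estimates(df, true_cutoff, placebo_dates, order=2):
    """Return the 'post' coefficient and p-value at each placebo cutoff."""
    pre = df[df["date"] < true_cutoff]              # keep only untreated periods
    rows = []
    for d in placebo_dates:
        fit = fit_rdit(pre, d, order)               # helper from the earlier sketch
        rows.append({"placebo_date": d,
                     "estimate": fit.params["post"],
                     "pvalue": fit.pvalues["post"]})
    return pd.DataFrame(rows)

# Example usage:
# placebos = pd.date_range("2014-01-01", "2014-10-01", freq="MS")
# print(placebo_estimates(df, pd.Timestamp("2015-01-01"), placebos))
```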

Recommendations for robustness checks, following (C. Hausman and Rapson 2018, 549):

  1. Plot the raw data and the residuals (after removing confounders or the trend). Inconsistent results across different polynomial orders and local linear bandwidths can be a sign of time-varying treatment effects.
  2. Because a global polynomial can over-fit, show results for polynomials of different orders and for alternative local linear bandwidths. If the results are consistent, you’re okay.
  3. Placebo tests: estimate another RD (1) on another location or subject that did not receive the treatment, or (2) at another (placebo) date.
  4. Plot the RD discontinuity for continuous control variables (there should be no discontinuity).
  5. Estimate a donut RD to see whether dropping observations close to the cutoff (where selection or anticipation is most likely) changes the results (Barreca et al. 2011).
  6. Test for autoregression (using only pre-treatment data). If there is evidence of autoregression, include a lagged dependent variable.
  7. Augmented local linear regression (avoids the global polynomial and its over-fitting risk); see the sketch after this list:
    1. Use the full sample to partial out the effect of important predictors

    2. Estimate the conditioned second stage within a smaller bandwidth
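
The following is a sketch of the augmented local linear procedure in recommendation 7, again in Python with statsmodels. The bandwidth, controls, and the simple i.i.d. bootstrap are illustrative choices; with serially correlated data a block bootstrap would be more appropriate.

```python
# Augmented local linear RD: (1) partial out important predictors on the full
# sample; (2) run a local linear RD on the residuals within a narrow bandwidth.
# A naive bootstrap propagates first-stage uncertainty into the second stage.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def augmented_local_linear(df, cutoff, bandwidth_days=60, controls=("x1", "x2")):
    # Stage 1: remove the effect of controls using the full sample
    resid = sm.OLS(df["y"], sm.add_constant(df[list(controls)])).fit().resid

    # Stage 2: local linear RD on the residuals within the bandwidth
    t = (df["date"] - cutoff).dt.days.to_numpy(dtype=float)
    keep = np.abs(t) <= bandwidth_days
    X = sm.add_constant(pd.DataFrame({
        "post": (t[keep] >= 0).astype(float),
        "t": t[keep],
        "t_post": t[keep] * (t[keep] >= 0),   # allow the slope to change at the cutoff
    }))
    return sm.OLS(resid[keep].to_numpy(), X).fit().params["post"]

def bootstrap_se(df, cutoff, n_boot=500, seed=0, **kwargs):
    # Naive i.i.d. row bootstrap; a block bootstrap is preferable under serial dependence.
    rng = np.random.default_rng(seed)
    est = [augmented_local_linear(
               df.sample(len(df), replace=True,
                         random_state=int(rng.integers(2**31 - 1))),
               cutoff, **kwargs)
           for _ in range(n_boot)]
    return float(np.std(est, ddof=1))
```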

Examples from (C. Hausman and Rapson 2018, 534) span economics and marketing; see the references below.

References

Anderson, Michael L. 2014. “Subways, Strikes, and Slowdowns: The Impacts of Public Transit on Traffic Congestion.” American Economic Review 104 (9): 2763–96.
Auffhammer, Maximilian, and Ryan Kellogg. 2011. “Clearing the Air? The Effects of Gasoline Content Regulation on Air Quality.” American Economic Review 101 (6): 2687–2722.
Barreca, Alan I, Melanie Guldi, Jason M Lindo, and Glen R Waddell. 2011. “Saving Babies? Revisiting the Effect of Very Low Birth Weight Classification.” The Quarterly Journal of Economics 126 (4): 2117–23.
Bento, Antonio, Daniel Kaffine, Kevin Roth, and Matthew Zaragoza-Watkins. 2014. “The Effects of Regulation in the Presence of Multiple Unpriced Externalities: Evidence from the Transportation Sector.” American Economic Journal: Economic Policy 6 (3): 1–29.
Brodeur, Abel, Andrew E Clark, Sarah Fleche, and Nattavudh Powdthavee. 2021. “COVID-19, Lockdowns and Well-Being: Evidence from Google Trends.” Journal of Public Economics 193: 104346.
Burger, Nicholas E, Daniel T Kaffine, and Bob Yu. 2014. “Did California’s Hand-Held Cell Phone Ban Reduce Accidents?” Transportation Research Part A: Policy and Practice 66: 162–72.
Busse, Meghan R, Nicola Lacetera, Devin G Pope, Jorge Silva-Risso, and Justin R Sydnor. 2013. “Estimating the Effect of Salience in Wholesale and Retail Car Markets.” American Economic Review 103 (3): 575–79.
Busse, Meghan R, Duncan I Simester, and Florian Zettelmeyer. 2010. “‘The Best Price You’ll Ever Get’: The 2005 Employee Discount Pricing Promotions in the US Automobile Industry.” Marketing Science 29 (2): 268–90.
Chen, Hong, Qiongsi Li, Jay S Kaufman, Jun Wang, Ray Copes, Yushan Su, and Tarik Benmarhnia. 2018. “Effect of Air Quality Alerts on Human Health: A Regression Discontinuity Analysis in Toronto, Canada.” The Lancet Planetary Health 2 (1): e19–26.
Chen, Xinlei, George John, Julie M Hays, Arthur V Hill, and Susan E Geurs. 2009. “Learning from a Service Guarantee Quasi Experiment.” Journal of Marketing Research 46 (5): 584–96.
Davis, Lucas W. 2008. “The Effect of Driving Restrictions on Air Quality in Mexico City.” Journal of Political Economy 116 (1): 38–81.
Davis, Lucas W, and Matthew E Kahn. 2010. “International Trade in Used Vehicles: The Environmental Consequences of NAFTA.” American Economic Journal: Economic Policy 2 (4): 58–82.
De Paola, Maria, Vincenzo Scoppa, and Mariatiziana Falcone. 2013. “The Deterrent Effects of the Penalty Points System for Driving Offences: A Regression Discontinuity Approach.” Empirical Economics 45: 965–85.
Gallego, Francisco, Juan-Pablo Montero, and Christian Salas. 2013. “The Effect of Transport Policies on Car Use: Evidence from Latin American Cities.” Journal of Public Economics 107: 47–62.
Hausman, Catherine, and David S Rapson. 2018. “Regression Discontinuity in Time: Considerations for Empirical Applications.” Annual Review of Resource Economics 10: 533–52.
McCrary, Justin. 2008. “Manipulation of the Running Variable in the Regression Discontinuity Design: A Density Test.” Journal of Econometrics 142 (2): 698–714.