Chapter 32 Matching Methods
Matching is a process that aims to close back doors - potential sources of bias - by constructing comparison groups that are similar according to a set of matching variables. This helps to ensure that any observed differences in outcomes between the treatment and comparison groups can be more confidently attributed to the treatment itself, rather than other factors that may differ between the groups.
Matching and DiD can use pre-treatment outcomes to correct for selection bias. Using real-world data and simulations, Chabé-Ferret (2015) found that matching generally underestimates the average causal effect and gets closer to the true effect as the number of pre-treatment outcomes increases. When selection bias is symmetric around the treatment date, DiD is still consistent when implemented symmetrically (i.e., with the same number of periods before and after treatment). In cases where selection bias is asymmetric, the Monte Carlo simulations show that symmetric DiD still performs better than matching.
Matching is useful, but not a general solution to causal problems (J. A. Smith and Todd 2005)
Assumption: Observables can identify the selection into the treatment and control groups
Identification: The exclusion restriction can be met conditional on the observables
Motivation
Effect of college quality on earnings
- They ultimately estimate the treatment effect on the treated of attending a top (high ACT) versus bottom (low ACT) quartile college
Example
Aaronson, Barrow, and Sander (2007)
Do teachers’ qualifications (causally) affect student test scores?
Step 1:
\[ Y_{ijt} = \delta_0 + Y_{ij(t-1)} \delta_1 + X_{it} \delta_2 + Z_{jt} \delta_3 + \epsilon_{ijt} \]
There can always be another variable
Any observable sorting is imperfect
Step 2:
\[ Y_{ijst} = \alpha_0 + Y_{ij(t-1)}\alpha_1 + X_{it} \alpha_2 + Z_{jt} \alpha_3 + \gamma_s + u_{isjt} \]
\(\delta_3 >0\)
\(\delta_3 > \alpha_3\)
\(\gamma_s\) = school fixed effect
Sorting is less of a concern within schools; hence, we can introduce school fixed effects.
Step 3:
Find schools that appear to assign students to classes randomly (or as good as randomly), then rerun step 2.
\[ \begin{aligned} Y_{isjt} = Y_{isj(t-1)} \lambda &+ X_{it} \alpha_1 +Z_{jt} \alpha_{21} \\ &+ (Z_{jt} \times D_{it})\alpha_{22}+ \gamma_s + u_{isjt} \end{aligned} \]
\(D_{it}\) is an element of \(X_{it}\)
\(Z_{jt}\) = teacher experience
\[ D_{it}= \begin{cases} 1 & \text{ if high poverty} \\ 0 & \text{otherwise} \end{cases} \]
\(H_0: \alpha_{22} = 0\) tests for effect heterogeneity, i.e., whether the effect of teacher experience (\(Z_{jt}\)) differs by poverty status
For low-poverty students, the effect is \(\alpha_{21}\)
For high-poverty students, the effect is \(\alpha_{21} + \alpha_{22}\)
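A minimal sketch of the Step-3 specification, assuming a hypothetical student-level data frame `df` with outcome `y`, lagged outcome `y_lag`, student covariates `x`, teacher experience `z`, a high-poverty indicator `high_poverty`, and a school identifier `school` (all names are assumptions, not from the original study):

```r
# Step 3: teacher experience, its interaction with the high-poverty indicator,
# and school fixed effects (via factor(school)); column names are hypothetical.
fit <- lm(y ~ y_lag + x + high_poverty + z + z:high_poverty + factor(school),
          data = df)
summary(fit)

# coefficient on z              ~ alpha_21: effect of teacher experience for
#                                 low-poverty students
# coefficient on z:high_poverty ~ alpha_22: additional effect for high-poverty
#                                 students
```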
Matching is selection on observables and only works if you have good observables.
Sufficient identification assumptions under selection on observables / the back-door criterion (based on Bernard Koch’s presentation):
Strong conditional ignorability
\(Y(0),Y(1) \perp T|X\)
No hidden confounders
Overlap
\(\forall x \in X, t \in \{0, 1\}: p(T = t | X = x) > 0\)
All treatments have non-zero probability of being observed
SUTVA/ Consistency
- Treatments and outcomes of different subjects are independent of one another (no interference)
Relative to OLS
- Matching makes the common support explicit (and changes default from “ignore” to “enforce”)
- Relaxes the linear functional form assumption; thus, it is less parametric.
It also helps if you have a high ratio of controls to treated units.
For a detailed summary, see (Stuart 2010).
Matching is defined as “any method that aims to equate (or ‘balance’) the distribution of covariates in the treated and control groups.” (Stuart 2010, 1)
Equivalently, matching is a selection-on-observables identification strategy.
If you think your OLS estimate is biased, a matching estimate (almost surely) is too.
Unconditionally, consider
\[ \begin{aligned} E(Y_i^T | T) - E(Y_i^C |C) &= E(Y_i^T | T) - E(Y_i^C | T) + E(Y_i^C | T) - E(Y_i^C |C) \\ &= E(Y_i^T - Y_i^C | T) + [E(Y_i^C | T) - E(Y_i^C |C)] \\ &= E(Y_i^T - Y_i^C | T) + \text{selection bias} \end{aligned} \]
where \(E(Y_i^T - Y_i^C | T)\) is the causal effect we want to estimate.
Randomization eliminates the selection bias.
If we don’t have randomization, then \(E(Y_i^C | T) \neq E(Y_i^C |C)\)
Matching tries to do selection on observables \(E(Y_i^C | X, T) = E(Y_i^C|X, C)\)
Propensity Scores basically do \(E(Y_i^C| P(X) , T) = E(Y_i^C | P(X), C)\)
Matching standard errors will exceed OLS standard errors
The treatment should have larger predictive power than the control because you use treatment to pick control (not control to pick treatment).
The average treatment effect (ATE) is
\[ \frac{1}{N_T} \sum_{i=1}^{N_T} \Big( Y_i^T - \frac{1}{N_{C_i}} \sum_{j \in C_i} Y_j^C \Big) \]
where \(C_i\) is the set of controls matched to treated unit \(i\) and \(N_{C_i}\) is its size.
Since there is no closed-form solution for the standard error of the average treatment effect, we have to use bootstrapping to obtain standard errors.
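A minimal sketch of this estimator and a bootstrap standard error, assuming a hypothetical data frame `d` with outcome `y`, a 0/1 treatment indicator `treat`, and a single matching variable `x` (e.g., an estimated propensity score):

```r
# 1:1 nearest-neighbor matching (with replacement) on x, averaging the
# treated-minus-matched-control differences; names are hypothetical.
match_est <- function(d) {
  treated  <- d[d$treat == 1, ]
  controls <- d[d$treat == 0, ]
  # for each treated unit, find the control with the closest x
  idx <- sapply(treated$x, function(xi) which.min(abs(controls$x - xi)))
  mean(treated$y - controls$y[idx])
}

est <- match_est(d)

# bootstrap the entire matching + estimation procedure
set.seed(1)
boot_est <- replicate(999, match_est(d[sample(nrow(d), replace = TRUE), ]))
c(estimate = est, boot_se = sd(boot_est))
```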
Professor Gary King advocates using the word “pruning” (i.e., deleting observations) instead of “matching”. It is a preprocessing step that prunes non-matches to make control variables less important in your analysis.
Without Matching
- Imbalanced data leads to model dependence, which leads to a lot of researcher discretion, which leads to bias
With Matching
- We have balanced data, which essentially erases human discretion
Balance Covariates | Complete Randomization | Fully Exact |
---|---|---|
Observed | On average | Exact |
Unobserved | On average | On average |
Fully blocked designs are superior on
imbalance
model dependence
power
efficiency
bias
research costs
robustness
Matching is used when
Outcomes are not yet available, and matching is used to select subjects for follow-up
Outcomes are available, and matching is used to reduce bias in the estimate of the treatment effect
Since we can only observe one potential outcome for each unit (either treated or control), we can think of this as a missing-data problem as well. Thus, this section is closely related to Imputation (Missing Data)
In observational studies, we cannot randomize the treatment. Subjects select their own treatments, which could introduce selection bias (i.e., systematic pre-treatment differences between groups that confound the effect of treatment on the outcome).
Matching is used to
reduce model dependence
diagnose balance in the dataset
Assumptions of matching:
treatment assignment is independent of potential outcomes given the covariates
\(T \perp (Y(0),Y(1))|X\)
known as ignorability, no hidden bias, or unconfoundedness.
You typically satisfy this assumption when unobserved covariates are correlated with observed covariates.
- But when unobserved covariates are unrelated to the observed covariates, you can use sensitivity analysis to check your result, or use “design sensitivity” (Heller, Rosenbaum, and Small 2009)
positive probability of receiving treatment for all X
- \(0 < P(T=1|X) < 1 \; \forall X\)
Stable Unit Treatment Value Assumption (SUTVA)
Outcomes of A are not affected by treatment of B.
- Very hard in cases where there are “spillover” effects (interactions between control and treatment units). To combat this, we need to reduce interactions.
Generalization
\(P_t\): treated population -> \(N_t\): random sample from treated
\(P_c\): control population -> \(N_c\): random sample from control
\(\mu_i\) = means ; \(\Sigma_i\) = variance covariance matrix of the \(p\) covariates in group i (\(i = t,c\))
\(X_j\) = \(p\) covariates of individual \(j\)
\(T_j\) = treatment assignment
\(Y_j\) = observed outcome
Assume: \(N_t < N_c\)
Treatment effect is \(\tau(x) = R_1(x) - R_0(x)\) where
\(R_1(x) = E(Y(1)|X)\)
\(R_0(x) = E(Y(0)|X)\)
Assume a constant treatment effect, hence \(\tau(x) = \tau \; \forall x\)
- If a constant effect is not assumed, an average effect can still be estimated.
Common estimands:
Average effect of the treatment on the treated (ATT): effects on treatment group
Average treatment effect (ATE): effect on both treatment and control
Steps:
Define “closeness”: decide distance measure to be used
Which variables to include:
Ignorability (no unobserved differences between treatment and control)
Since the cost of including unrelated variables is small, you should include as many as possible (unless sample size/power does not allow you to, because of increased variance)
Do not include variables that were affected by the treatment.
Note: if a matching variable (e.g., heavy drug use) is highly correlated with the outcome variable (e.g., heavy drinking), it may be better to exclude it from the matching set.
Which distance measures: more below
Matching methods
Nearest neighbor matching
Simple (greedy) matching: performs poorly when there is competition for controls.
Optimal matching: considers global distance measure
Ratio matching: with k:1 matching, choosing k involves a trade-off between increased bias and reduced variance; the approximations by Rubin and Thomas (1996) can guide the choice.
With or without replacement: matching with replacement is typically better, but one needs to account for dependence in the matched sample when doing later analysis (frequency weights can be used to do this).
Subclassification, Full Matching and Weighting
Nearest neighbor matching assigns each unit a weight of either 0 (control) or 1 (treated/matched), while these methods use weights between 0 and 1.
Subclassification: divides the sample into multiple subclasses (e.g., 5-10)
Full matching: optimally minimizes the average of the distances between each treated unit and each control unit within each matched set.
Weighting adjustments: the weighting technique uses propensity scores to estimate the ATE. If the weights are extreme, the variance can be large, not due to the underlying probabilities but due to the estimation procedure. To combat this, use (1) weight trimming, or (2) doubly-robust methods when propensity scores are used for weighting or matching.
Inverse probability of treatment weighting (IPTW) \(w_i = \frac{T_i}{\hat{e}_i} + \frac{1 - T_i}{1 - \hat{e}_i}\)
Odds \(w_i = T_i + (1-T_i) \frac{\hat{e}_i}{1-\hat{e}_i}\)
Kernel weighting (e.g., in economics) averages over multiple units in the control group.
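A minimal sketch of the IPTW and odds weights above, assuming a hypothetical data frame `d` with treatment `treat` (0/1) and covariates `x1`, `x2`:

```r
# estimate propensity scores with a logistic regression (hypothetical columns)
ps_model <- glm(treat ~ x1 + x2, data = d, family = binomial())
d$e_hat  <- fitted(ps_model)                # e_hat_i = P(T = 1 | X_i)

# IPTW weights (target the ATE): T/e + (1 - T)/(1 - e)
d$w_iptw <- d$treat / d$e_hat + (1 - d$treat) / (1 - d$e_hat)

# odds weights (target the ATT): treated get weight 1, controls get e/(1 - e)
d$w_odds <- d$treat + (1 - d$treat) * d$e_hat / (1 - d$e_hat)

# the weights can then be passed to an outcome model, e.g.
# lm(y ~ treat, data = d, weights = w_iptw)
```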
Assessing Common Support
- Common support means overlap of the propensity score distributions in the treatment and control groups. The propensity score can be used to discard units that fall outside the common support. Alternatively, one can use the convex hull of the covariates in the multi-dimensional space.
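A minimal sketch of a common-support check, reusing the hypothetical `d` and estimated propensity scores `e_hat` from the weighting sketch above:

```r
# range-based common support: keep units whose propensity score lies within
# the overlapping region of the two groups' score distributions
lo <- max(min(d$e_hat[d$treat == 1]), min(d$e_hat[d$treat == 0]))
hi <- min(max(d$e_hat[d$treat == 1]), max(d$e_hat[d$treat == 0]))
d_cs <- d[d$e_hat >= lo & d$e_hat <= hi, ]

# visual check: overlay the two propensity score distributions
hist(d$e_hat[d$treat == 1], breaks = 30, col = rgb(1, 0, 0, 0.4),
     main = "Propensity score overlap", xlab = "Propensity score")
hist(d$e_hat[d$treat == 0], breaks = 30, col = rgb(0, 0, 1, 0.4), add = TRUE)
```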
Assessing the quality of matched samples (Diagnose)
Balance = similarity of the empirical distribution of the full set of covariates in the matched treated and control groups. Equivalently, treatment is unrelated to the covariates
- \(\tilde{p}(X|T=1) = \tilde{p}(X|T=0)\) where \(\tilde{p}\) is the empirical distribution.
Numerical Diagnostics
standardized difference in means of each covariate (most common), also known as the “standardized bias”
standardized difference of means of the propensity score (should be < 0.25) (Rubin 2001)
ratio of the variances of the propensity score in the treated and control groups (should be between 0.5 and 2). (Rubin 2001)
For each covariate, the ratio of the variance of the residuals orthogonal to the propensity score in the treated and control groups.
Note: we cannot use hypothesis tests or p-values because (1) balance is an in-sample property (not a population property), and (2) they conflate changes in balance with changes in statistical power.
Graphical Diagnostics
QQ plots
Empirical Distribution Plot
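A minimal sketch of the numerical diagnostics above (standardized difference in means and variance ratio), computed by hand for a hypothetical matched data frame `md` with treatment `treat` and covariate `x1`; the graphical diagnostics are typically produced by the matching package itself:

```r
balance_stats <- function(x, treat) {
  xt <- x[treat == 1]; xc <- x[treat == 0]
  # standardized difference in means ("standardized bias"), pooled-SD version
  smd <- (mean(xt) - mean(xc)) / sqrt((var(xt) + var(xc)) / 2)
  # ratio of variances (ideally between 0.5 and 2)
  vr  <- var(xt) / var(xc)
  c(std_mean_diff = smd, var_ratio = vr)
}

balance_stats(md$x1, md$treat)

# graphical diagnostics, e.g. with a MatchIt object m.out (assumed to exist):
# plot(m.out, type = "qq")    # covariate QQ plots
# plot(summary(m.out))        # love plot of standardized differences
```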
Estimate the treatment effect
After k:1
- Need to account for weights when using matching with replacement.
After Subclassification and Full Matching
Weighting the subclass estimates by the number of treated units in each subclass for ATT
Weighting by the overall number of individuals in each subclass for ATE.
Variance estimation: should incorporate uncertainties in both the matching procedure (step 3) and the estimation procedure (step 4)
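A minimal sketch of estimating the treatment effect after matching, using the MatchIt package (version 4+ interface) on a hypothetical data frame `d` with outcome `y`, treatment `treat`, and covariates `x1`, `x2`:

```r
library(MatchIt)

# 1:1 nearest-neighbor matching on a logistic-regression propensity score
# (formula and data are hypothetical)
m.out <- matchit(treat ~ x1 + x2, data = d,
                 method = "nearest", distance = "glm", ratio = 1)

md <- match.data(m.out)   # matched data with 'weights' and 'subclass' columns

# outcome model on the matched sample, using the matching weights
fit <- lm(y ~ treat, data = md, weights = weights)
summary(fit)

# standard errors should reflect the matched structure; one common choice is a
# cluster-robust variance clustered on the matched pairs, e.g.
# sandwich::vcovCL(fit, cluster = md$subclass)
```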
Notes:
With missing data, use generalized boosted models, or multiple imputation (Qu and Lipkovich 2009)
Violation of ignorable treatment assignment (i.e., unobservables affect both treatment and outcome) can be addressed by:
measuring a pre-treatment value of the outcome variable
finding the difference in outcomes between multiple control groups; if there is a significant difference, there is evidence of a violation
finding the range of correlations between unobservables and both treatment assignment and outcome that would be needed to nullify the significant effect
Choosing between methods
smallest standardized difference of means across the largest number of covariates
minimize the standardized difference of means of a few particularly prognostic covariates
fewest number of large standardized differences of means (> 0.25)
Genetic matching (Diamond and Sekhon 2013) automates this process
In practice
If ATE, ask if there is enough overlap of the treated and control groups’ propensity scores to estimate the ATE; if not, use the ATT instead
If ATT, ask if there are controls across the full range of the treated group
Choose matching method
If ATE, use IPTW or full matching
If ATT, and more controls than treated (at least 3 times), k:1 nearest neighbor without replacement
If ATT, and few controls, use subclassification, full matching, and weighting by the odds
Diagnostic
If balance, use regression on matched samples
If imbalance remains on a few covariates, match on them using Mahalanobis distance
If imbalance on many covariates, try k:1 matching with replacement
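As a minimal sketch of how these choices map onto MatchIt (version 4+) calls, with the hypothetical formula and data from earlier; the argument values reflect the heuristics above, not hard rules:

```r
library(MatchIt)

# ATE with enough overlap: full matching (or IPTW, as sketched earlier)
m_ate  <- matchit(treat ~ x1 + x2, data = d, method = "full", estimand = "ATE")

# ATT with many more controls than treated: k:1 nearest neighbor w/o replacement
m_att1 <- matchit(treat ~ x1 + x2, data = d, method = "nearest",
                  ratio = 3, replace = FALSE)

# ATT with few controls: subclassification on the propensity score
m_att2 <- matchit(treat ~ x1 + x2, data = d, method = "subclass", subclass = 5)
```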
Ways to define the distance \(D_{ij}\)
- Exact
\[ D_{ij} = \begin{cases} 0, \text{ if } X_i = X_j, \\ \infty, \text{ if } X_i \neq X_j \end{cases} \]
A more advanced version is Coarsened Exact Matching
- Mahalanobis
\[ D_{ij} = (X_i - X_j)'\Sigma^{-1} (X_i - X_j) \]
where
\(\Sigma\) = variance-covariance matrix of X in the
control group if the ATT is of interest
pooled treatment and control groups if the ATE is of interest
- Propensity score:
\[ D_{ij} = |e_i - e_j| \]
where \(e_k\) = the propensity score for individual k
A more advanced version is the prognosis score (B. B. Hansen 2008), but you have to know (i.e., specify) the relationship between the covariates and the outcome.
- Linear propensity score
\[ D_{ij} = |logit(e_i) - logit(e_j)| \]
Exact and Mahalanobis matching do not perform well when the X’s are high dimensional or not normally distributed.
We can combine Mahalanobis matching with propensity score calipers (Rubin and Thomas 2000)
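A minimal sketch of Mahalanobis matching within propensity score calipers in MatchIt (version 4+), with the hypothetical data from earlier; `mahvars` requests Mahalanobis distance on the listed covariates, while `caliper` restricts matches to units whose propensity scores differ by less than 0.25 standard deviations:

```r
library(MatchIt)

m_cal <- matchit(treat ~ x1 + x2, data = d,
                 method   = "nearest",
                 distance = "glm",      # propensity score, used for the caliper
                 caliper  = 0.25,       # in SDs of the propensity score (default scaling)
                 mahvars  = ~ x1 + x2)  # Mahalanobis distance within the caliper
summary(m_cal)
```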
Other advanced methods for longitudinal settings
marginal structural models (Robins, Hernan, and Brumback 2000)
balanced risk set matching (Y. P. Li, Propert, and Rosenbaum 2001)
Most matching methods are based on (ex-post)
propensity score
distance metric
covariates
Packages
- cem: coarsened exact matching
- Matching: multivariate and propensity score matching with balance optimization
- MatchIt: nonparametric preprocessing for parametric causal inference; has nearest neighbor, Mahalanobis, caliper, exact, full, optimal, and subclassification matching
- MatchingFrontier: optimizes balance and sample size (G. King, Lucas, and Nielsen 2017)
- optmatch: optimal matching with variable ratio, optimal and full matching
- PSAgraphics: propensity score graphics
- rbounds: sensitivity analysis with matched data, to examine the ignorable treatment assignment assumption
- twang: weighting and analysis of non-equivalent groups
- CBPS: covariate balancing propensity score; can also be used in the longitudinal setting with marginal structural models
- PanelMatch: based on Imai, Kim, and Wang (2018)
Matching | Regression |
---|---|
Not as sensitive to the functional form of the covariates | Can estimate the effect of a continuous treatment |
Easier to assess whether it’s working; easier to explain; allows a nice visualization of an evaluation | Can estimate the effect of all the variables (not just the treatment) |
If your treatment is fairly rare, you may have a lot of control observations that are obviously not comparable | Can estimate interactions of treatment with covariates |
Less parametric | More parametric |
Enforces common support (i.e., the region where treatment and control have the same characteristics) | |
However, the problem of omitted variables (i.e., unobserved confounders that affect both the outcome and whether an observation was treated) is still present in matching methods.
Difference between matching and regression following Pischke’s lecture
Suppose we want to estimate the effect of treatment on the treated
\[ \begin{aligned} \delta_{TOT} &= E[ Y_{1i} - Y_{0i} | D_i = 1 ] \\ &= E\{E[Y_{1i} | X_i, D_i = 1] \\ & \quad - E[Y_{0i}|X_i, D_i = 1]|D_i = 1\} && \text{law of iterated expectations} \end{aligned} \]
Under conditional independence
\[ E[Y_{0i} |X_i , D_i = 0 ] = E[Y_{0i} | X_i, D_i = 1] \]
then
\[ \begin{aligned} \delta_{TOT} &= E \{ E[ Y_{1i} | X_i, D_i = 1] - E[ Y_{0i}|X_i, D_i = 0 ]|D_i = 1\} \\ &= E\{E[Y_i | X_i, D_i = 1] - E[Y_i |X_i, D_i = 0 ] | D_i = 1\} \\ &= E[\delta_X |D_i = 1] \end{aligned} \]
where \(\delta_X\) is an X-specific difference in means at covariate value \(X_i\)
When \(X_i\) is discrete, the matching estimand is
\[ \delta_M = \sum_x \delta_x P(X_i = x |D_i = 1) \]
where \(P(X_i = x |D_i = 1)\) is the probability mass function for \(X_i\) given \(D_i = 1\)
By Bayes’ rule,
\[ P(X_i = x | D_i = 1) = \frac{P(D_i = 1 | X_i = x) \times P(X_i = x)}{P(D_i = 1)} \]
hence,
\[ \begin{aligned} \delta_M &= \frac{\sum_x \delta_x P (D_i = 1 | X_i = x) P (X_i = x)}{\sum_x P(D_i = 1 |X_i = x)P(X_i = x)} \\ &= \sum_x \delta_x \frac{ P (D_i = 1 | X_i = x) P (X_i = x)}{\sum_x P(D_i = 1 |X_i = x)P(X_i = x)} \end{aligned} \]
On the other hand, suppose we have regression
\[ y_i = \sum_x d_{ix} \beta_x + \delta_R D_i + \epsilon_i \]
where
\(d_{ix}\) = dummy that indicates \(X_i = x\)
\(\beta_x\) = regression-effect for \(X_i = x\)
\(\delta_R\) = regression estimand where
\[ \begin{aligned} \delta_R &= \frac{\sum_x \delta_x [P(D_i = 1 | X_i = x) (1 - P(D_i = 1 | X_i = x))]P(X_i = x)}{\sum_x [P(D_i = 1| X_i = x)(1 - P(D_i = 1 | X_i = x))]P(X_i = x)} \\ &= \sum_x \delta_x \frac{[P(D_i = 1 | X_i = x) (1 - P(D_i = 1 | X_i = x))]P(X_i = x)}{\sum_x [P(D_i = 1| X_i = x)(1 - P(D_i = 1 | X_i = x))]P(X_i = x)} \end{aligned} \]
The difference between the regression and matching estimands lies in the weights they use to combine the covariate-specific treatment effects \(\delta_x\).
Type | Uses weights which depend on | Interpretation | Makes sense because |
---|---|---|---|
Matching | \(P(D_i = 1|X_i = x)\), the fraction of treated observations in a covariate cell (i.e., the mean of \(D_i\)) | This weight is larger in cells with many treated observations. | We want the effect of treatment on the treated |
Regression | \(P(D_i = 1 |X_i = x)(1 - P(D_i = 1| X_i = x))\), the variance of \(D_i\) in the covariate cell | This weight is largest in cells with half treated and half untreated observations (this is why we want to balance our sample before running a regular regression model, as mentioned above). | These cells produce the lowest-variance estimates of \(\delta_x\); if all the \(\delta_x\) are the same, the most efficient estimand weights the lowest-variance cells most heavily |
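As a hedged numerical illustration (hypothetical numbers, not from the source): suppose there are two covariate cells with \(P(X = A) = P(X = B) = 0.5\), \(P(D = 1 | A) = 0.9\), \(P(D = 1 | B) = 0.1\), and cell-specific effects \(\delta_A = 2\), \(\delta_B = 0\). Then

\[ \begin{aligned} \delta_M &= \frac{2 (0.9)(0.5) + 0 (0.1)(0.5)}{(0.9)(0.5) + (0.1)(0.5)} = \frac{0.9}{0.5} = 1.8, \\ \delta_R &= \frac{2 (0.9)(0.1)(0.5) + 0 (0.1)(0.9)(0.5)}{(0.9)(0.1)(0.5) + (0.1)(0.9)(0.5)} = \frac{0.09}{0.09} = 1, \end{aligned} \]

so matching weights the treatment-heavy cell more heavily, while regression weights the two cells equally because \(D_i\) has the same variance (\(0.9 \times 0.1 = 0.09\)) in both.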
The goal of matching is to produce covariate balance, i.e., the distributions of covariates in the treatment and control groups are approximately as similar as they would be in a successful randomized experiment.