13.1 Potential Outcomes, Causal Effects and Idealized Experiments
We now briefly recap the idea of the average causal effect and how it can be estimated using the differences estimator. We advise you to work through Chapter 13.1 of the book for a better understanding.
Potential Outcomes and the Average Causal Effect
A potential outcome is the outcome for an individual under a potential treatment. For this individual, the causal effect of the treatment is the difference between the potential outcome if the individual receives the treatment and the potential outcome if she does not. Since this causal effect may differ across individuals, and since only one of the two potential outcomes is ever observed for any given individual (so an individual causal effect cannot be measured), one instead studies the average causal effect of the treatment, which is therefore also called the average treatment effect.
In an ideal randomized controlled experiment the following conditions are fulfilled:
- The subjects are selected at random from the population.
- The subjects are randomly assigned to treatment and control group.
Condition 1 guarantees that the subjects’ potential outcomes are drawn randomly from the same distribution, so that the expected value of the causal effect in the sample is equal to the average causal effect in the population. Condition 2 ensures that the receipt of treatment is independent of the subjects’ potential outcomes. If both conditions are fulfilled, the average causal effect is the expected outcome in the treatment group minus the expected outcome in the control group. Using conditional expectations we have \[\text{Average causal effect} = E(Y_i\vert X_i=1) - E(Y_i\vert X_i=0),\] where \(X_i\) is a binary treatment indicator.
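To see this at work, consider the following minimal R simulation of an idealized experiment (all variable names and parameter values are made up for illustration and are not taken from the book): potential outcomes are generated for every subject, treatment is assigned at random, and the difference in the observed group means recovers the average causal effect.

```r
# Minimal simulation of an idealized randomized experiment (illustrative values).
set.seed(1)

n  <- 10000
y0 <- rnorm(n, mean = 5)               # potential outcome without treatment
y1 <- y0 + rnorm(n, mean = 2)          # potential outcome with treatment (heterogeneous effects)
x  <- rbinom(n, size = 1, prob = 0.5)  # random assignment to the treatment group

y  <- ifelse(x == 1, y1, y0)           # only one potential outcome is observed per subject

mean(y1 - y0)                          # average causal effect (known only inside the simulation)
mean(y[x == 1]) - mean(y[x == 0])      # E(Y | X = 1) - E(Y | X = 0), close to the value above
```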
The average causal effect can be estimated using the differences estimator, which is simply the OLS estimator of \(\beta_1\) in the simple regression model \[\begin{align} Y_i = \beta_0 + \beta_1 X_i + u_i \ \ , \ \ i=1,\dots,n, \tag{13.1} \end{align}\]where random assignment ensures that \(E(u_i\vert X_i) = 0\).
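Continuing with the simulated data from the sketch above, the differences estimator can be computed with lm(); the slope coefficient on the treatment dummy equals the difference in group means.

```r
# Differences estimator: OLS in the simple regression of y on the treatment dummy.
diff_mod <- lm(y ~ x)
coef(diff_mod)["x"]   # equals mean(y[x == 1]) - mean(y[x == 0])
```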
The OLS estimator of \(\beta_1\) in the regression model \[\begin{align} Y_i = \beta_0 + \beta_1 X_i + \beta_2 W_{1i} + \dots + \beta_{1+r} W_{ri} + u_i \ \ , \ \ i=1,\dots,n \tag{13.2} \end{align}\]with additional regressors \(W_1,\dots,W_r\) is called the differences estimator with additional regressors. It is assumed that treatment \(X_i\) is randomly assigned so that it is independent of the pretreatment characteristic \(W_i\). This assumption is called conditional mean independence and implies \[E(u_i\vert X_i , W_i) = E(u_i\vert W_i) = 0,\] that is, the conditional expectation of the error \(u_i\) given the treatment indicator \(X_i\) and the pretreatment characteristic \(W_i\) does not depend on \(X_i\). Conditional mean independence replaces the first least squares assumption in Key Concept 6.4 and thus ensures that the differences estimator of \(\beta_1\) is unbiased. The differences estimator with additional regressors is more efficient than the plain differences estimator if the additional regressors explain some of the variation in the \(Y_i\).
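As a sketch of this efficiency gain (again with simulated data; the pretreatment characteristic w and all parameter values are hypothetical), compare the differences estimator with and without the additional regressor:

```r
# Simulated experiment with a pretreatment characteristic w that explains
# part of the variation in y. Treatment x is assigned independently of w.
set.seed(2)

n <- 10000
w <- rnorm(n)                          # pretreatment characteristic
x <- rbinom(n, size = 1, prob = 0.5)   # randomly assigned treatment
y <- 1 + 2 * x + 1.5 * w + rnorm(n)    # outcome with true treatment effect 2

summary(lm(y ~ x))$coefficients["x", ]       # differences estimator
summary(lm(y ~ x + w))$coefficients["x", ]   # differences estimator with additional regressor
```

Both regressions yield unbiased estimates of the treatment effect, but the second one has a noticeably smaller standard error because w absorbs part of the error variance.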