Regression discontinuity designs exploit substantive knowledge that treatment is assigned in a particular way: everyone above a threshold is assigned to treatment and everyone below it is not. Even though researchers do not control the assignment, substantive knowledge about the threshold serves as a basis for a strong identification claim.

Thistlethwaite and Campbell introduced the regression discontinuity design in the 1960s to study the impact of scholarships on academic success. Their insight was that students with a test score just above a scholarship cutoff were plausibly comparable to students whose scores were just below the cutoff, so any differences in future academic success could be attributed to the scholarship itself.

Regression discontinuity designs identify a *local* average treatment effect: the average effect of treatment *exactly at the cutoff*. The main trouble with the design is that there is vanishingly little data exactly at the cutoff, so any answer strategy needs to use data that is some distance away from the cutoff. The further away from the cutoff we move, the larger the threat of bias.

We’ll consider an application of the regression discontinuity design that examines party incumbency advantage – the effect of a party winning an election on its vote margin in the next election.

## Design Declaration

**M**odel: Regression discontinuity designs have four components: a running variable, a cutoff, a treatment variable, and an outcome. Units whose value of the running variable exceeds the cutoff are treated; the rest are not. In our example, the running variable \(X\) is the Democratic party’s margin of victory at time \(t-1\), and the treatment, \(Z\), indicates whether the Democratic party won the election at time \(t-1\). The outcome, \(Y\), is the Democratic vote margin at time \(t\). We’ll consider a population of 1,000 of these pairs of elections.

A major assumption required for regression discontinuity is that the conditional expectation functions for both treatment and control potential outcomes are continuous at the cutoff.

To satisfy this assumption, we specify two smooth conditional expectation functions, one for each potential outcome. The figure plots \(Y\) (the Democratic vote margin at time \(t\)) against \(X\) (the margin at time \(t-1\)), along with the true conditional expectation functions for the treated and control potential outcomes. The solid lines correspond to the observed data and the dashed lines correspond to the unobserved data.

**I**nquiry: Our estimand is the effect of a Democratic win in an election on the Democratic vote margin of the next election, when the Democratic vote margin of the first election is zero. Formally, it is the difference in the conditional expectation functions of the treated and control potential outcomes when the running variable is exactly zero. The black vertical line in the plot shows this difference.

**D**ata strategy: We collect data on the Democratic vote share at time \(t-1\) and time \(t\) for all 1,000 pairs of elections. There is no sampling or random assignment.

**A**nswer strategy: We approximate the treated and untreated conditional expectation functions on either side of the cutoff using a flexible regression specification estimated via OLS. In particular, we fit each regression using a fourth-order polynomial. Much of the literature on regression discontinuity designs focuses on the tradeoffs among answer strategies, with many analysts recommending against higher-order polynomial regression specifications. We use one here to highlight how well such an answer strategy does when it matches the functional form in the model. We discuss alternative estimators in the exercises.

```
# Design parameters
N <- 1000
tau <- 0.15
outcome_sd <- 0.1
cutoff <- 0.5
bandwidth <- 0.5
control_coefs <- c(0.5, 0.5)
treatment_coefs <- c(-5, 1)
poly_reg_order <- 4

# True conditional expectation functions for the two potential outcomes
control <- function(X) {
  as.vector(poly(X, length(control_coefs), raw = TRUE) %*% control_coefs)
}
treatment <- function(X) {
  as.vector(poly(X, length(treatment_coefs), raw = TRUE) %*% treatment_coefs) + tau
}

# Model: running variable X (centered at the cutoff), noise, and treatment Z
population <- declare_population(
  N = N,
  X = runif(N, 0, 1) - cutoff,
  noise = rnorm(N, 0, outcome_sd),
  Z = 1 * (X > 0)
)
potential_outcomes <- declare_potential_outcomes(
  Y_Z_0 = control(X) + noise,
  Y_Z_1 = treatment(X) + noise
)
reveal_Y <- declare_reveal(Y)

# Inquiry: the difference in conditional expectations exactly at the cutoff
estimand <- declare_estimand(LATE = treatment(0) - control(0))

# Data strategy: keep only observations within the bandwidth
sampling <- declare_sampling(handler = function(data) {
  subset(data, X > -abs(bandwidth) & X < abs(bandwidth))
})

# Answer strategy: polynomial regression interacted with treatment
estimator <- declare_estimator(
  formula = Y ~ poly(X, poly_reg_order) * Z,
  model = lm_robust,
  term = "Z",
  estimand = estimand
)

regression_discontinuity_design <-
  population + potential_outcomes + estimand + reveal_Y + sampling + estimator
```
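As a quick sanity check on the inquiry, note that at the cutoff the control conditional expectation function equals zero and the treated one equals `tau`, so the estimand is exactly 0.15. A minimal base-R sketch (restating the two functions from the declaration so it runs on its own):

```r
# Restated from the design declaration above
tau <- 0.15
control_coefs <- c(0.5, 0.5)
treatment_coefs <- c(-5, 1)

control <- function(X) {
  as.vector(poly(X, length(control_coefs), raw = TRUE) %*% control_coefs)
}
treatment <- function(X) {
  as.vector(poly(X, length(treatment_coefs), raw = TRUE) %*% treatment_coefs) + tau
}

# At X = 0 all raw polynomial terms vanish, leaving only tau
late <- treatment(0) - control(0)
late  # 0.15
```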

## Takeaways

We now diagnose the design:

`diagnosis <- diagnose_design(regression_discontinuity_design)`

| Estimator Label | Term | N Sims | Bias | RMSE | Power | Coverage | Mean Estimate | SD Estimate | Mean Se | Type S Rate | Mean Estimand |
|---|---|---|---|---|---|---|---|---|---|---|---|
| estimator | Z | 500 | 0.01 | 0.85 | 0.04 | 0.97 | 0.16 | 0.85 | 0.88 | 0.32 | 0.15 |
| | | | (0.04) | (0.03) | (0.01) | (0.01) | (0.04) | (0.03) | (0.00) | (0.13) | (0.00) |

We highlight three takeaways. First, the power of this design is very low: with 1,000 units we do not achieve even 10% statistical power. However, our estimates of uncertainty are accurate: the coverage probability indicates that our confidence intervals indeed contain the estimand 95% of the time, as they should. Our answer strategy is highly uncertain because the fourth-order polynomial specification in the regression model gives weights to the data that greatly increase the variance of the estimator (Gelman and Imbens, 2017). In the exercises we explore alternative answer strategies that perform better.
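The variance inflation from higher-order polynomials can be seen directly in a small base-R simulation (hypothetical data, not part of the declared design): fitting the same simulated data with a linear and a quartic specification, the standard error on the jump at the cutoff is much larger under the quartic fit.

```r
# Hypothetical simulation: linear true CEF with a jump of 0.15 at X = 0
set.seed(42)
n <- 1000
X <- runif(n, -0.5, 0.5)
Z <- as.numeric(X > 0)
Y <- 0.5 * X + 0.15 * Z + rnorm(n, sd = 0.1)

linear_fit  <- lm(Y ~ X * Z)
quartic_fit <- lm(Y ~ poly(X, 4, raw = TRUE) * Z)

se_linear  <- summary(linear_fit)$coefficients["Z", "Std. Error"]
se_quartic <- summary(quartic_fit)$coefficients["Z", "Std. Error"]
c(linear = se_linear, quartic = se_quartic)  # quartic SE is much larger
```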

Second, the design is biased because polynomial approximations of the average effect exactly at the threshold will be inaccurate in small samples (Sekhon and Titiunik, 2017), especially as units farther away from the cutoff are incorporated into the answer strategy. We can confirm that the estimated bias is not due to simulation error by examining the bootstrapped standard error of the bias estimate.

Finally, from the figure, we can see how poorly the average effect at the threshold approximates the average effect for all units. The average treatment effect among the treated (to the right of the threshold in the figure) is negative, whereas at the threshold it is positive. This clarifies that the estimand of the regression discontinuity design, the difference at the cutoff, is only relevant for a small – and possibly empty – set of units very close to the cutoff.

## Using the Regression Discontinuity Designer

In R, you can generate a regression discontinuity design using the template function `regression_discontinuity_designer()` in the `DesignLibrary` package. First, load the package:

`library(DesignLibrary)`

We can then create specific designs by defining values for each argument. For example, we create a design called `my_regression_discontinuity_design` with our chosen values for `N`, `tau`, `cutoff`, `bandwidth`, and `poly_order` by running the lines below.

```
my_regression_discontinuity_design <- regression_discontinuity_designer(
  N = 300,
  tau = 0.2,
  cutoff = 0.4,
  bandwidth = 0.01,
  poly_order = 4
)
```

You can see more details on the `regression_discontinuity_designer()` function and its arguments by running the following line of code:

`??regression_discontinuity_designer`

## Further reading

Since its rediscovery by social scientists in the late 1990s, the regression discontinuity design has been widely used to study diverse causal effects such as: prison on recidivism (Mitchell et al., 2017); China’s one child policy on human capital (Qin, Zhuang and Yang, 2017); eligibility for World Bank loans on political liberalization (Carnegie and Samii, 2017); and anti-discrimination laws on minority employment (Hahn, Todd and Van der Klaauw, 1999).

We’ve discussed a “sharp” regression discontinuity design in which all units above the threshold were treated and all units below were untreated. In fuzzy regression discontinuity designs, some units above the cutoff remain untreated or some units below take treatment. This setting is analogous to experiments that experience noncompliance and may require instrumental variables approaches to the answer strategy (see Compliance is a Potential Outcome).
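One common answer strategy in the fuzzy setting (a sketch on hypothetical simulated data, not from the original text) is a "Wald at the cutoff" estimator: divide the jump in the outcome at the cutoff by the jump in treatment take-up, each estimated with local linear fits within a bandwidth.

```r
# Hypothetical fuzzy RD: crossing the cutoff raises take-up by 0.6,
# and the true effect of treatment D on Y is 0.15
set.seed(1)
n <- 5000
X <- runif(n, -1, 1)
above <- as.numeric(X > 0)
D <- rbinom(n, 1, 0.2 + 0.6 * above)        # imperfect compliance
Y <- 0.5 * X + 0.15 * D + rnorm(n, sd = 0.1)

bw <- 0.5
s <- abs(X) < bw                            # keep units within the bandwidth
jump_Y <- coef(lm(Y ~ X * above, subset = s))["above"]  # jump in outcome
jump_D <- coef(lm(D ~ X * above, subset = s))["above"]  # jump in take-up
wald_estimate <- unname(jump_Y / jump_D)    # should be near 0.15
```

This is the same logic as instrumental variables with the cutoff indicator as the instrument.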

Geographic regression discontinuity designs use distance to a border as the running variable: units on one side of the border are treated and units on the other are untreated. Keele and Titiunik (2016) use such a design to study whether voters are more likely to turn out when they have the opportunity to vote directly on legislation on so-called ballot initiatives. A complication of this design is how to measure distance to the border in two dimensions.
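For a straight-line border, one simple approach (a hypothetical base-R sketch with made-up coordinates, not from Keele and Titiunik) is to use the signed perpendicular distance to the border as the running variable, with the sign indicating which side a unit falls on:

```r
# Hypothetical border along the line y = x; signed perpendicular distance
# serves as the running variable, and its sign determines treatment
signed_distance <- function(px, py) (py - px) / sqrt(2)

pts <- data.frame(x = c(0, 1, 2), y = c(1, 1, 1))
d <- signed_distance(pts$x, pts$y)
treated <- d > 0  # units on one side of the border are treated
```

Real borders are rarely straight, which is why measuring distance (and choosing where along the border to estimate the effect) is a genuine design decision.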

An alternative motivation for some designs, which does not rely on continuity at the cutoff, is “local randomization”.