The DeclareDesign Blog
An instrument does not have to be exogenous to be consistent 2019/02/19

We often think of an instrumental variable (\(Z\)) as a random shock that generates exogenous variation in a treatment of interest \(X\). The randomness of \(Z\) lets us identify the effect of \(X\) on \(Y\), at least for units whose value of \(X\) is perturbed by \(Z\), in a way that is not possible by just looking at the relationship between \(X\) and \(Y\). But surprisingly, we think, if effects are constant the instrumental variables estimator can be consistent for the effect of \(X\) on \(Y\) even when the relationship between the instrument (\(Z\)) and the endogenous variable (\(X\)) is confounded (see, for example, Hernán and Robins (2006)). That's the good news. The less good news is that when there is effect heterogeneity you can get good estimates for some units, but it can be hard to know which units those are (Swanson and Hernán 2017). We use a declaration and diagnosis to illustrate these insights.
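The core intuition can be seen in a few lines of simulation. This is our own minimal base-R sketch with made-up coefficients (not the declaration from the post): a confounder `U1` drives both \(Z\) and \(X\) but never enters \(Y\), while a second confounder `U2` is what makes \(X\) endogenous. With a constant effect of 0.5, the Wald ratio still recovers it:

```r
set.seed(42)
n <- 100000

# U1 confounds the instrument Z and the treatment X (but does not affect Y);
# U2 confounds X and Y, which is what makes X endogenous.
U1 <- rnorm(n)
U2 <- rnorm(n)
Z  <- U1 + rnorm(n)              # Z is *not* exogenous: it depends on U1
X  <- Z + U1 + U2 + rnorm(n)     # X depends on Z, U1, and U2
Y  <- 0.5 * X + U2 + rnorm(n)    # constant effect of X on Y is 0.5

# Naive OLS is biased upward because of U2:
coef(lm(Y ~ X))["X"]

# The IV (Wald) estimator still recovers roughly 0.5, because U1 never
# enters Y, so Z is unconfounded with respect to Y even though the
# Z-X relationship is confounded:
cov(Z, Y) / cov(Z, X)
```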

Read More…


Estimating Average Treatment Effects with Ordered Probit: Is it worth it? 2019/02/06

We sometimes worry about whether we need to model data generating processes correctly. Suppose, for example, that you have an ordinal outcome variable measured on a five-point Likert scale. How should you model the data generating process? Do you need to model it at all? Go-to approaches include ordered probit and ordered logit models, which are designed for this kind of outcome variable. But maybe you don't need them. After all, the argument that the difference-in-means procedure estimates the treatment effect doesn't depend on any assumptions about the type of data (as long as expectations are defined), whether ordered, count, or censored. We diagnose a design that hedges by using both difference-in-means and an ordered probit model. We do so assuming that the ordered probit model correctly describes data generation. Which does better?
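To fix ideas, here is a rough base-R version of the comparison (our own sketch with invented cutpoints and effect size, not the post's declaration): the data really are generated by an ordered probit, and we compare a difference in means on the observed 1–5 scale to an ordered-probit-based estimate of the same quantity via `MASS::polr`:

```r
library(MASS)  # for polr()

set.seed(42)
n <- 2000
Z <- rbinom(n, 1, 0.5)

# Latent-variable data generation consistent with an ordered probit:
# a constant latent treatment effect of 0.5, cut into five categories.
Ystar <- 0.5 * Z + rnorm(n)
cuts  <- c(-1.5, -0.5, 0.5, 1.5)
Y     <- cut(Ystar, c(-Inf, cuts, Inf), labels = FALSE)  # categories 1..5

# Difference in means on the observed 1-5 scale:
dim_est <- mean(Y[Z == 1]) - mean(Y[Z == 0])

# Ordered probit: fit, then convert to an effect on the 1-5 scale by
# comparing expected categories under Z = 1 versus Z = 0.
fit <- polr(factor(Y) ~ Z, method = "probit")
p0  <- predict(fit, newdata = data.frame(Z = 0), type = "probs")
p1  <- predict(fit, newdata = data.frame(Z = 1), type = "probs")
op_est <- sum(1:5 * p1) - sum(1:5 * p0)

c(difference_in_means = dim_est, ordered_probit = op_est)
```

Running this once shows the two estimates landing close together; the diagnosis in the post asks whether the extra modeling buys anything over many simulations.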

Read More…

What can you learn from simulating qualitative inference strategies? 2019/01/30

Qualitative process-tracing sometimes seeks to answer “causes of effects” questions using within-case data: how probable is the hypothesis that \(X\) did in fact cause \(Y\)? Fairfield and Charman (2017), for example, ask whether the right changed position on tax reform during the 2005 Chilean presidential election (\(Y\)) because of anti-inequality campaigns (\(X\)) by examining whether the case study narrative bears evidence that you would only expect to see if this were true. When inferential logics are so clearly articulated, it becomes possible to do design declaration and diagnosis. Here we declare a Bayesian process-tracing design and use it to think through choices about what kinds of within-case information have the greatest probative value.
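The updating step at the heart of such designs is just Bayes' rule. A small worked example (our own, with invented illustrative probabilities) shows why probative value depends on how differently a clue behaves under the rival hypotheses:

```r
# Prior probability that X caused Y in this case:
prior <- 0.5

# Probative value of a clue is governed by how likely you are to
# observe it if the hypothesis is true versus false:
p_clue_if_true  <- 0.8
p_clue_if_false <- 0.2

# Posterior after observing the clue, by Bayes' rule:
prior * p_clue_if_true /
  (prior * p_clue_if_true + (1 - prior) * p_clue_if_false)
# 0.8: a strong update

# A clue nearly as likely either way has little probative value:
prior * 0.8 / (prior * 0.8 + (1 - prior) * 0.7)
# about 0.53: barely moves the prior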

Read More…

Should a pilot study change your study design decisions? 2019/01/23

Data collection is expensive, and we often get only one bite at the apple. In response, we often conduct an inexpensive (and small) pilot study to help better design the main study. Pilot studies have many virtues, including practicing the logistics of data collection and improving measurement tools. But using pilots to get noisy estimates of effect sizes in order to determine sample sizes for scale-up comes with risks.
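One way to see the risk is to simulate the pilot-then-power-calculation pipeline. This base-R sketch (our own illustration with assumed numbers, not the post's declaration) runs many pilots of 50 units per arm on a true standardized effect of 0.2, plugs each noisy estimate into a standard power calculation, and then checks the power the scaled-up study actually achieves:

```r
set.seed(42)
true_effect <- 0.2   # standardized effect, assumed for illustration
n_pilot     <- 50    # pilot units per arm

# Each pilot yields a noisy effect estimate, which gets plugged into
# a power calculation to choose the main-study sample size per arm:
chosen_n <- replicate(1000, {
  y0 <- rnorm(n_pilot, 0, 1)
  y1 <- rnorm(n_pilot, true_effect, 1)
  delta_hat <- max(mean(y1) - mean(y0), 0.01)  # guard against <= 0
  power.t.test(delta = delta_hat, sd = 1, power = 0.8)$n
})

# Chosen sample sizes vary wildly across pilots:
summary(chosen_n)

# Power actually achieved at the *true* effect size; pilots that
# overestimate the effect lead to underpowered main studies:
realized_power <- sapply(chosen_n, function(n)
  power.t.test(n = n, delta = true_effect, sd = 1)$power)
mean(realized_power < 0.8)
```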

Read More…