Causal Inference Without Experimentation: Integrating Language and Methods in Epidemiology, Econometrics, and Program Evaluation

Ellicott Matthay, Evidence for Action
Erin Hagan, University of California, San Francisco
Laura Gottlieb, University of California, San Francisco
David Vlahov, Yale University
Nancy Adler, University of California, San Francisco
Maria Glymour, University of California, San Francisco

In population health research, making causal inferences about the health effects of social programs is challenging, particularly when researchers cannot randomize treatment. Several non-randomized analytic strategies exist, broadly categorized as "observational" or "quasi-experimental," and all require untestable assumptions. We review how these strategies work, the assumptions each makes, and their strengths and weaknesses. Choosing among them entails tradeoffs between statistical power, internal validity, measurement quality, and generalizability; no single approach is preferable in all situations, and the best choice depends on the context of the study at hand and on the assumptions the investigator finds most plausible in that context. Because some assumptions are unverifiable, conducting studies with a variety of analytic strategies that rely on different assumptions can build a stronger body of evidence than relying on any one strategy alone.


Presented in Session 20: Methods for Evaluating Population Programs