Systematic reviews of health systems research commonly restrict evidence synthesis to randomized controlled trials. However, well-conducted quasi-experimental studies can provide strong evidence for causal inference. With this article, we aim to stimulate and inform discussions on including quasi-experiments in systematic reviews of health systems research. We define quasi-experimental studies as those that estimate causal effect sizes using exogenous variation in the exposure of interest that is not directly controlled by the researcher. We incorporate this definition into a non-hierarchical three-class taxonomy of study designs: experiments, quasi-experiments, and non-experiments. Based on a review of practice in three disciplines related to health systems research (epidemiology, economics, and political science), we discuss five commonly used study designs that fit our definition of quasi-experiments: natural experiments, instrumental variable analyses, regression discontinuity analyses, interrupted time series studies, and difference studies, including controlled before-and-after designs, difference-in-differences designs, and fixed effects analyses of panel data. We further review current practices regarding quasi-experimental studies in three non-health fields that utilize systematic reviews (education, development, and environmental studies) to inform the design of approaches for synthesizing quasi-experimental evidence in health systems research. Ultimately, the aim of any review is practical: to provide useful information for policymakers, practitioners, and researchers. Future work should focus on building a consensus among users and producers of systematic reviews regarding the inclusion of quasi-experiments.
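To illustrate the logic of the difference studies named above, a minimal difference-in-differences calculation is sketched below. All group names and outcome means are hypothetical, invented purely for illustration; the estimator simply subtracts the control group's before-after change from the treated group's change.

```python
# Difference-in-differences sketch with hypothetical outcome means.
# Keys: (group, period). The treated group is exposed to a policy change
# between the "before" and "after" periods; the control group is not.
# All numbers are invented for illustration only.
means = {
    ("treated", "before"): 40.0,
    ("treated", "after"): 55.0,
    ("control", "before"): 38.0,
    ("control", "after"): 45.0,
}

def did_estimate(means):
    """Change in the treated group minus change in the control group."""
    treated_change = means[("treated", "after")] - means[("treated", "before")]
    control_change = means[("control", "after")] - means[("control", "before")]
    return treated_change - control_change

print(did_estimate(means))  # (55 - 40) - (45 - 38) = 8.0
```

The control group's change (7.0) serves as the counterfactual trend for the treated group, so the remaining 8.0 is attributed to the exposure, under the usual parallel-trends assumption.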
Keywords: Difference-in-difference studies; Instrumental variable analysis; Interrupted time series; Quasi-experimental studies; Regression discontinuity designs; Systematic reviews.
Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.