University of Pennsylvania Press

Extracting information and drawing inferences about the causal effects of actions, interventions, treatments, and policies is central to decision making in many disciplines and is broadly viewed as causal inference. It was a pleasure to read the lengthy interviews with four leaders in causality and causal inference whose work has had such a huge impact on empirical research in many fields. I am honored to follow up on these interviews and share my journey and my thoughts on the field and its future.

I graduated in Economics in 1990 with a thesis on multicriteria decision making, at the intersection of economics, statistics, and operational research, before starting a PhD program in Statistics. I earned my PhD in 1994 under the supervision of Bruno Chiandotto and Stephen Pudney, with a thesis on multi-spell multi-state transition models. My first exposure to causal concepts was thus through the lenses of economists and econometricians, at a time when the credibility revolution in applied economics (Angrist and Pischke, 2010) had not yet started. I absolutely do not regret this background, which was fundamentally important: it confronted me with substantive policy questions that simple econometric and statistical (regression) models were not able to address, it emphasized the role of unobservables, and it showed how decisions at different levels may take place in various settings and how they can be described with structural models. In retrospect, however, I think I grasped the concepts of causality and causal inference in full only when I was more deeply exposed to the potential outcomes framework in its entirety: I taught Causal Inference (Stat 214) at Harvard in the Fall of 2001 jointly with Don Rubin, and that experience had a tremendous influence on my views on causality and on the way I conduct research in the area.

The potential outcome framework is known as the Rubin Causal Model (RCM), a term coined by Holland (1986), and I am happy Don provided his view on how the framework developed and evolved. The essential elements and primitives of the framework are simple, yet very powerful: 1) all causal questions are tied to specific interventions or treatments; 2) causal effects are defined as comparisons of potential outcomes under different treatment conditions for the same subjects, with no a priori restrictions on the form these comparisons may or should take; 3) each of these potential outcomes could have been observed had the treatment taken on the corresponding level; after the treatment has taken on a specific level, only the potential outcome corresponding to that level is realized and can actually be observed, while the other potential outcomes are missing; 4) causal inference is therefore fundamentally a missing data problem. As in all missing data problems, a key role is played by the mechanism that determines which data values are observed and which are missing. In causal inference, this mechanism is referred to as the assignment mechanism: the process that determines which units receive which treatments, hence which potential outcomes are realized and thus can be observed, and which are missing. The characteristics of the assignment mechanism, and the assumptions we are willing to make about it, are key to linking the observed data to the causal quantities we would like to estimate. The framework develops a formal statistical language in which causal effects can be unambiguously defined, the assumptions needed for their identification are clearly stated, and statistical methods for studying causal questions can be developed. The last component of the RCM is the (Bayesian) model for the potential outcomes and the covariates.
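The missing-data structure described above can be made concrete with a small simulation. This is only an illustrative sketch with entirely hypothetical data: it assumes complete randomization as the assignment mechanism and, purely for simplicity, a constant unit-level effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical potential outcomes: Y(0) under control, Y(1) under treatment.
# In reality both are never observed for the same unit; simulating them
# lets us see the missing-data structure directly.
y0 = rng.normal(0.0, 1.0, n)
y1 = y0 + 2.0  # constant unit-level causal effect of 2 (a toy assumption)

# Assignment mechanism: complete randomization, so the treatment indicator W
# is independent of the pair (Y(0), Y(1)).
w = rng.random(n) < 0.5

# Only the potential outcome matching the assigned treatment is realized;
# the other one is missing.
y_obs = np.where(w, y1, y0)

# Under randomization, the difference in observed group means is an unbiased
# estimator of the average causal effect E[Y(1) - Y(0)].
ate_hat = y_obs[w].mean() - y_obs[~w].mean()
true_ate = (y1 - y0).mean()
```

Changing the assignment mechanism in this sketch (for instance, letting `w` depend on `y0`) immediately breaks the simple comparison of means, which is exactly the point: the assignment mechanism governs what the observed data can tell us about the causal estimand.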
The Bayesian paradigm is natural for addressing causal inference problems under the RCM, where parameters and potential outcomes are also seen as random variables. The Bayesian approach directs us to condition on all observed quantities and predict, in a stochastic way, the missing potential outcomes of all units, past and future, and thereby make informed decisions, based on explicitly stated assumptions, about which interventions look most promising for future application.
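The Bayesian imputation idea can also be sketched in code: every unit's missing potential outcome is drawn from a posterior predictive distribution, which yields a posterior for the finite-sample average causal effect. The independent normal models and simulated data below are my own illustrative assumptions, not part of the essay, and the posterior for the mean is simplified by treating the outcome standard deviation as known.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical observed data from a completely randomized study (toy numbers).
n = 500
w = rng.random(n) < 0.5
y_obs = np.where(w, rng.normal(2.0, 1.0, n), rng.normal(0.0, 1.0, n))

def posterior_predictive(y, size, rng):
    """Draw `size` missing outcomes under a normal model with a flat prior
    on the mean (sigma plugged in for simplicity)."""
    m, s, k = y.mean(), y.std(ddof=1), len(y)
    mu = rng.normal(m, s / np.sqrt(k))  # draw the mean parameter
    return rng.normal(mu, s, size)      # draw the missing potential outcomes

draws = []
for _ in range(2000):
    # Impute Y(1) for control units and Y(0) for treated units.
    y1 = y_obs.copy()
    y1[~w] = posterior_predictive(y_obs[w], int((~w).sum()), rng)
    y0 = y_obs.copy()
    y0[w] = posterior_predictive(y_obs[~w], int(w.sum()), rng)
    # Each completed dataset gives one draw of the finite-sample average effect.
    draws.append((y1 - y0).mean())

posterior = np.array(draws)
```

The spread of `posterior` quantifies uncertainty about the average causal effect for these specific units, which is the finite-population flavor of inference that the framework makes explicit.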

As a statistician, I found of paramount importance the approach's ability to clarify the different inferential perspectives, frequentist and Bayesian, and to elucidate the finite-population and super-population perspectives, thus making it possible to be clear about the various sources of uncertainty that arise in the estimation of causal effects and about how to quantify them. Also crucial, from a data science perspective, is the separation of the design phase from the analysis phase of any study, including complex observational studies, so that experimental principles can be translated into statistical practice.

The approach set the stage for the development and application of methods in other disciplines, including work recognized by two recent Nobel Memorial Prizes in Economic Sciences (2019, 2021). It also provided insights into understanding causal mechanisms through principal stratification, an approach to handling intermediate variables within the RCM. The recent release of an Addendum to the E9 guideline on ‘Statistical principles in clinical trials’ by the ICH (2019) describes strategies to address intercurrent events in clinical trials and includes the principal stratum strategy; the description of alternative estimands would have been very difficult without potential outcomes.

Despite its simplicity, the potential outcome approach is able to characterize complex settings, support the development of methods for them, and clarify and tackle complicated issues such as interference (particularly arising when units are connected through a network), sequential treatments, dynamic regimes, high-dimensional settings, and surrogate outcomes. The fact that some problems, such as those described with causal diagrams, are difficult to represent using potential outcomes only highlights, in my view, the complexity of the problems rather than a limitation of the approach.

I truly respect other approaches, and I am confident that cross-fertilization of different frameworks and fields will provide solutions to the many open questions in causal inference, as is happening with the use of machine learning algorithms to enhance estimators of causal effects. Yet I believe that the potential outcome framework has allowed us to make enormous progress in formalizing and solving problems in experimental and observational studies, and it will continue to do so in the future.

Fabrizia Mealli
Department of Statistics, Computer Science, Applications
Florence Center for Data Science
University of Florence
Viale Morgagni 59, 50139 Florence, Italy


Joshua D. Angrist and Joern-Steffen Pischke. The credibility revolution in empirical economics: How better research design is taking the con out of econometrics. Journal of Economic Perspectives, 24:3–30, 2010.
Paul Holland. Statistics and causal inference. Journal of the American Statistical Association, 81:945–960, 1986.