
Observational Studies 1 (2015) 212-216. Submitted 3/15; Published 8/15.

Comment on Cochran's "Observational Studies"

Donald B. Rubin (rubin@stat.harvard.edu)
Department of Statistics, Harvard University, Cambridge, MA 02138, USA

First, I have to thank Dylan Small for inviting me to contribute comments on the reprinted Cochran (1972) target article. This is, in fact, the third time that I have read this nice, almost colloquial, chapter Bill Cochran wrote in honor of his coauthor, George Snedecor. The first time was about 1970, when I was finishing my PhD under Bill's direction and he circulated it as a preprint. The second time was while I was writing my own chapter (Rubin 1984) summarizing Bill's contributions to observational studies, which appeared in the volume edited by Rao and Sedransk; because my discussion of it is less than three pages and the original is in a relatively massive book, I include it here in the appendix.

But what would I say that's different today than I said back in 1984? Obviously, I still am awed by Bill's straightforward and no-nonsense style of communication — that hasn't changed. But I think that today I would include some points not emphasized in 1984. One aspect that I think I should have emphasized is the usefulness of the formal idea of an "assignment mechanism" to distinguish randomized experiments from observational studies, and of the formal concept of "potential outcomes" to define causal effects precisely. That is, under the "stable unit treatment value assumption" (SUTVA; Rubin 1980, 1986), Y_i(1) denotes the i-th unit's values of outcomes under the active treatment and Y_i(0) denotes the i-th unit's values of outcomes under the control treatment, where the causal effect of the active versus the control treatment for unit i is the comparison of these two potential outcomes.
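For readers less familiar with the notation, the potential outcomes just described can be sketched as follows (a minimal rendering of the standard formalism; taking the difference is one common choice for the "comparison" of the two potential outcomes):

```latex
% Potential outcomes for unit i under SUTVA (Rubin 1980, 1986):
%   Y_i(1) = outcome for unit i under the active treatment
%   Y_i(0) = outcome for unit i under the control treatment
% The unit-level causal effect compares the two, e.g. as a difference:
\[
  \tau_i \;=\; Y_i(1) - Y_i(0),
\]
% while only the potential outcome under the assigned treatment
% W_i (1 = active, 0 = control) is ever observed:
\[
  Y_i^{\mathrm{obs}} \;=\; W_i\, Y_i(1) + (1 - W_i)\, Y_i(0).
\]
```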
Also, the assignment mechanism is the probability distribution of the vector of treatment indicators, W, given the arrays of potential outcomes and covariates; this perspective was termed "Rubin's Causal Model" by Holland (1986), but the potential outcomes notation had its roots in the work of Neyman (1923) in the context of randomized experiments, and the term "assignment mechanism" and the use of potential outcomes to define causal effects in general originate with work in the 1970s (Rubin 1974, 1976, 1977, 1978). In my 1984 discussion of Cochran (1972), I did not emphasize the clarity that this conceptualization brings to causal inference in observational studies. In hindsight, I think that omission was because that conceptualization seemed so obvious to me. It is only in recent years that I have been befuddled by all the confusion created by some writers who eschew this formulation with its attendant clarity. The recent text by Imbens and Rubin (2015) hopefully contributes to rectifying this situation, at least from my perspective.

Another noteworthy omission in my 1984 discussion is my recent focus on the importance of outcome-free design for observational studies (Rubin 2006, 2008). I am not alone in having this current emphasis; see, for example, Yue (2006) and D'Agostino and D'Agostino (2007). In hindsight, I wish that I had emphasized this aspect, although with generous interpretation one could read that theme into parts of Cochran (1972); even so, I do not see much distinction made there between things like propensity score design, which is blind to outcome data, and model-based adjustment methods, which require outcome data and so are subject to inapposite manipulation. I think that this desire to correct this omission arises from being repeatedly exposed to problematic examples in recent years.
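The assignment mechanism described above can be written compactly. As an illustrative special case (my own addition, not drawn from the excerpt), a completely randomized experiment assigning exactly m of N units to the active treatment takes the form:

```latex
% Assignment mechanism: the distribution of the treatment-indicator
% vector W given covariates X and both arrays of potential outcomes:
\[
  \Pr\bigl( W \mid X,\, Y(0),\, Y(1) \bigr).
\]
% Illustrative special case: a completely randomized experiment
% assigning exactly m of N units to the active treatment,
\[
  \Pr\bigl( W \mid X,\, Y(0),\, Y(1) \bigr) \;=\; \binom{N}{m}^{-1}
  \quad \text{for each } W \text{ with } \textstyle\sum_{i=1}^{N} W_i = m,
\]
% and zero otherwise; note the mechanism does not depend on the
% potential outcomes, which is what randomization guarantees.
```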
A final comment on Cochran (1972) concerns a statement from his concluding section “Judgement About Causality” where he fairly blatantly reveals his disappointment in the answers provided by scientific philosophers. Often decisions about interventions must be made, even if based on limited empirical evidence, and we should help decision makers make sensible decisions under clearly stated assumptions so that “consumers” of the conclusion about the effects of some intervention can honestly weigh the support for that conclusion. In...
