Chapter 7: Explanation

Good explanations are like bathing suits, darling; they are meant to reveal everything by covering only what is necessary.
    — E. L. Konigsburg

There has to be a mathematical explanation for how bad that tie is.
    — Russell Crowe

Perhaps the main reason that people are interested in causality is that they want explanations. Consider the three questions I asked in the first paragraph of this book: Why is my friend depressed? Why won’t that file display properly on my computer? Why are the bees suddenly dying? These questions ask for causes; the answers would provide an explanation. Perhaps not surprisingly, just as with causality, getting a good definition of explanation is notoriously difficult. And, just as with causality, issues concerning explanation have been the focus of philosophical investigation for millennia.

In this chapter, I show how the ideas behind the HP definition of causality can be used to give a definition of (causal) explanation that deals well with many of the problematic examples discussed in the literature. The basic idea is that an explanation is a fact that, if found to be true, would constitute an actual cause of the explanandum (the fact to be explained), regardless of the agent’s initial uncertainty. As this gloss suggests, the definition of explanation involves both causality and knowledge. What counts as an explanation for one agent may not count as an explanation for another, since the two agents may have different epistemic states. For example, an agent seeking an explanation of why Mr. Johansson has been taken ill with lung cancer will not consider the fact that he worked for years in asbestos manufacturing part of an explanation if he already knew this fact. For such an agent, an explanation of Mr. Johansson’s illness may include a causal model describing the connection between asbestos fibers and lung cancer. However, for someone who already knows the causal model but does not know that Mr.
Johansson worked in asbestos manufacturing, the explanation would involve Mr. Johansson’s employment but would not mention the causal model. This example illustrates another important point: an explanation may include (fragments of) a causal model.

Salmon distinguishes between epistemic and ontic explanations. Roughly speaking, an epistemic explanation is one that depends on an agent’s epistemic state, telling him something that he doesn’t already know, whereas an ontic explanation is agent-independent. An ontic explanation would involve the causal model and all the relevant facts. When an agent asks for an explanation, he is typically looking for an epistemic explanation relative to his epistemic state; that is, those aspects of the ontic explanation that he does not already know. Both notions of explanation seem to me to be interesting. Moreover, having a good definition of one should take us a long way toward getting a good definition of the other. The definitions I give here are more in the spirit of the epistemic notion.

7.1 Explanation: The Basic Definition

The “classical” approaches to defining explanation in the philosophy literature, such as Hempel’s deductive-nomological model and Salmon’s statistical relevance model, fail to exhibit the directionality inherent in common explanations. Despite all the examples in the philosophy literature on the need for taking causality and counterfactuals into account, and the extensive work on causality defined in terms of counterfactuals, philosophers have been reluctant to build a theory of explanation on top of a theory of causality built on counterfactuals. (See the notes at the end of the chapter.) As I suggested above, the definition of explanation is relative to an epistemic state, just like that of blame. An epistemic state K, as defined in Section 6.2, is a set of causal settings, with a probability distribution over them.
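The idea of an epistemic state as a set of contexts, with causes evaluated in each context the agent considers possible, can be sketched in a toy model. The following is an illustrative sketch only, not the formal HP definition: the forest-fire variables, the representation of contexts as dictionaries, and the naive but-for test are all assumptions made for the example.

```python
# Illustrative sketch only (not the formal HP definition): an epistemic
# state K is a set of contexts over a toy structural model in which a
# forest fire FF occurs if there is lightning L or arson A.

def evaluate(context, intervention=None):
    """Compute the endogenous variable FF from a context, optionally
    after an intervention that overrides some variable settings."""
    values = dict(context)
    if intervention:
        values.update(intervention)
    values["FF"] = int(values["L"] or values["A"])
    return values

def but_for_cause(var, context):
    """Naive but-for test: the fire occurred, and flipping `var`
    would have prevented it. (The full HP definition is subtler.)"""
    if not evaluate(context)["FF"]:
        return False
    return evaluate(context, {var: 1 - context[var]})["FF"] == 0

# Epistemic state: the contexts the agent considers possible.
K = [{"L": 1, "A": 0}, {"L": 0, "A": 1}, {"L": 1, "A": 1}]

# In which of these contexts would L=1 be a (but-for) cause of the fire?
causal_contexts = [u for u in K if u["L"] == 1 and but_for_cause("L", u)]
print(causal_contexts)  # only the context where lightning alone caused it
```

In the remaining context with L=1 (where arson also occurred), lightning fails the naive but-for test; handling such overdetermined cases correctly is precisely what the full HP definition is designed for.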
I assume for simplicity in the basic definition that the causal model is known, so that we can view an epistemic state as a set of contexts. The probability distribution plays no role in the basic definition, although it will play a role in the next section, when I talk about the “quality” or “goodness” of an explanation. Thus, for the purposes of the following definition, I take an epistemic state to simply be a set K of contexts. I think of K as the set of contexts that the agent considers possible before observing...