From: Actual Causality
Chapter 5
Complexity and Axiomatization

    The complexities of cause and effect defy analysis.
        -- Douglas Adams, Dirk Gently's Holistic Detective Agency

Is it plausible that people actually work with structural equations and (extended) causal models to evaluate actual causation? People are cognitively limited. If we represent the structural equations and the normality ordering in what is perhaps the most obvious way, the models quickly become large and complicated, even with a small number of variables.

To understand the problem, consider a doctor trying to deal with a patient who has just come in reporting bad headaches. Let's keep things simple. Suppose that the doctor considers only a small number of variables that might be relevant: stress, constriction of blood vessels in the brain, aspirin consumption, and trauma to the head. Again, keeping things simple, assume that each of these variables (including headaches) is binary; that is, it has only two possible values. So, for example, the patient either has a headache or not. Each variable may depend on the values of the other four. To represent the structural equation for the variable "headaches", a causal model will need to assign a value to "headaches" for each of the sixteen possible settings of the other four variables. That means that there are 2^16 (over 60,000!) possible equations for "headaches". Considering all five variables, there are 2^80 (over 10^24) possible sets of equations.

Now consider the normality ordering. With five binary variables, there are 2^5 = 32 possible assignments of values to these variables. Think of each of these assignments as a "possible world". There are 32! (roughly 2.6 × 10^35) strict orders of these 32 worlds, and many more if we allow for ties or incomparable worlds. Altogether, the doctor would need to store close to two hundred bits of information just to represent this simple extended causal model. Now suppose we consider a more realistic model with 50 random variables.
Then the same arguments show that there are as many as 2^(50×2^49) possible sets of equations, 2^50 possible worlds, and roughly 2^(50×2^50) normality orderings (in general, with n binary variables, there are 2^(n×2^(n-1)) sets of equations, 2^n possible worlds, and (2^n)! ~ 2^(n×2^n) strict orders). Thus, with 50 variables, roughly 50×2^50 bits would be needed to represent a causal model. This is clearly cognitively unrealistic.

If the account of actual causation given in the preceding chapters is to be at all realistic as a model of human causal judgment, some form of compact representation will be needed. Fortunately, as I show in this chapter, in practice, things are likely to be much better; it is quite reasonable to expect that most "natural" models can be represented compactly. At a minimum, we cannot use these concerns about compact representations to argue that people cannot be using (something like) causal models to reason about causality.

But even given a compact representation, how hard is it to actually compute whether X = x is a cause of Y = y? It is not hard to exhaustively check all the possibilities if there are relatively few variables involved or in structures with a great deal of symmetry. This has been the case for all the examples I have considered thus far. But, as we shall see in Chapter 8, actual causality can also be of great use in, for example, reasoning about the correctness of programs and in answering database queries. Large programs and databases may well have many variables, so the complexity of determining actual causality becomes a significant issue. Computer science provides tools to formally characterize the complexity of determining actual causation; I discuss these in Section 5.3.

In many cases, we do not have a completely specified causal model.
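The counting argument above is easy to check directly. The following sketch (the function name `model_size_bits` is my own, purely illustrative) computes, for n binary variables, the number of bits needed to pick one set of structural equations and one strict normality ordering; for n = 5 it recovers the "close to two hundred bits" figure:

```python
import math

def model_size_bits(n):
    """Representation cost of an extended causal model with n binary
    variables, following the chapter's counting argument (illustrative)."""
    # Each variable's equation fixes a value for every one of the 2^(n-1)
    # settings of the other n-1 variables, so there are 2^(2^(n-1))
    # candidate equations per variable and 2^(n * 2^(n-1)) sets overall;
    # selecting one set therefore takes n * 2^(n-1) bits.
    equation_bits = n * 2 ** (n - 1)
    # Each assignment of values to the n variables is a "possible world".
    worlds = 2 ** n
    # A strict normality ordering is one of (2^n)! permutations of the
    # worlds, so storing it takes log2((2^n)!) ~ n * 2^n bits (Stirling);
    # lgamma(m + 1) = ln(m!) avoids computing the huge factorial itself.
    order_bits = math.lgamma(worlds + 1) / math.log(2)
    return equation_bits, worlds, order_bits

eq, w, ob = model_size_bits(5)
print(eq, w, round(ob))        # 80 32 118 -> ~198 bits in total
print(model_size_bits(50)[0])  # 50 * 2^49 bits just to fix the equations
```

For n = 50, even the equations alone already require 50 × 2^49 (about 2.8 × 10^16) bits, which is the sense in which the naive representation is cognitively unrealistic.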
Rather, what we know are certain facts about the model: for example, we might know that, although X is actually 1, if X were 0, then Y would be 0. The question is what conclusions about causality we can draw from such information. In Section 5.4, I discuss axioms for causal reasoning.

Note to the reader: the four sections in this chapter can be read independently. The material in this chapter is more technical than that of the other chapters in this book; I have deferred the most technical material to Section 5.5. The rest of the material should be accessible even to readers with relatively little mathematical background.

5.1 Compact Representations of Structural...
