The Seriousness of Mistakes and the Benefits of Getting it Right: Symmetries and Asymmetries in the Ethics of Epistemic Risk Management
Scientists have to make trade-offs between different types of error risks when making methodological decisions. It is now widely recognized (and not disputed in this article) that in doing so they must consider how serious the consequences of each error would be. The fact that they must also consider the potential benefits of getting it right is not equally recognized (and explicitly rejected by Heather Douglas). In this article, I argue that scientists need to do both when managing epistemic risks. At the same time, I acknowledge that in some cases it intuitively seems as if considering the consequences of possible errors carries greater moral weight. I explain this intuition by arguing that in these cases the contrast between the seriousness of mistakes and the benefits of getting it right can be linked to the moral asymmetry between action and omission. I examine various reasons that might justify a stronger weighting of the consideration of the consequences of errors in light of the action-omission asymmetry. I conclude that for all but some exceptional cases, such asymmetrical consideration is not called for.
1. INTRODUCTION
For any open investigation that aims to answer a reasonably well-defined question, there must be more than one way in which it can end: by delivering one result or the other, by refuting the hypothesis or by not refuting it.1 This applies to a wide range of very differently designed episodes of inquiry, although perhaps not to some forms of purely exploratory research (to which my considerations will accordingly not be applicable).2 There are types of inquiry with more than two possible results, but no endeavor to answer a well-defined question that is at all responsive to evidence can have fewer than two epistemically possible ways of coming to a conclusion at the time of its inception. As a consequence of this, every such episode of inquiry also opens up at least two different possible ways of going wrong and producing error (as each of the possible end states could be reached incorrectly). The different kinds of epistemic risk associated with each of these possibilities of going wrong are traded off against each other in every methodological decision, and there seems to be no value-free way of telling how they ought to be balanced. Due in large part to the work of Heather Douglas, it is now widely accepted in philosophy of science that balancing epistemic risks is an inevitable part of scientists’ work, and that, as a consequence,
cognitive, ethical, and social values all have legitimate, indirect roles to play in the doing of science, and in the decisions about which empirical claims to make that arise when doing science.
(Douglas 2009, 108)
I will use the expression “epistemic risk management” to refer to the totality of methodological choices that affect the distribution of probabilities among the different kinds of potential errors of a study, and I will accept without further discussion the value-laden nature of scientific research that manifests itself in the way that Douglas suggests and the resulting inevitability of epistemic risk management as a component of all research.3 This paper is intended to contribute to the study of how best to approach the ethics of epistemic risk management—that is, the question of how these choices ought to be made.
Ever since the ethics of epistemic risk management was first brought up within philosophy of science in the late 1940s, it has been customary to articulate it by pointing out the obligation to evaluate just how negative the negative consequences of a potential error would be. Thus C. West Churchman (1948, 256) writes that the evaluation of a test procedure “depends upon a certain function of both the chance of error and the loss.” Richard Rudner (1953, 2) puts it a little more memorably: “How sure we need to be before we accept a hypothesis will depend on how serious a mistake would be.” Isaac Levi (1962) titled one of his articles on the subject “On the Seriousness of Mistakes.”
Douglas, who has done more than anyone to shape the twenty-first-century discourse on epistemic risk management, falls in step with this approach:
The indirect role for values in science concerns the sufficiency of evidence, the weighing of uncertainty, and the consequences of error, rather than the evaluation of intended consequences or the choices themselves.
(Douglas 2009, 103; emphasis added)
Indirect roles are, according to Douglas, the only roles that are acceptable in methodological decisions in the research process and in decisions about whether to accept or reject a hypothesis. As the quotation indicates, Douglas explicitly requires that while scientists must allow the consequences of potential errors to influence the management of epistemic risk, they must not allow their positive valuations of the intended consequences of the knowledge they seek to have any influence (see also Douglas 2000, 564).4 Kevin Elliott (2013) has termed this the consequential interpretation of her direct-indirect role distinction and has criticized her normative claim. In this paper, I add to this existing criticism by viewing the debate from a slightly different vantage point.
My aim is to show that considering the costs of mistakes and considering the benefits of getting it right are essentially one and the same consideration under different descriptions. I will examine why it sometimes seems much more obvious to us to think of this consideration as a taking into account of the costs of possible errors, and why it sometimes seems more natural to think of it as a taking into account of the benefits of being right. I will also concede that there is a strong intuitive tendency to take more seriously the moral duty to consider the seriousness of errors rather than to consider the achievable benefits. I will trace some possible sources of these intuitive asymmetries and, in doing so, will have occasion to discuss the merits of some putative asymmetries in the ethics of epistemic risk management.
2. DOES CONSIDERING THE VALUE OF TRUE RESULTS LEAD TO INADMISSIBLE WISHFUL THINKING?
One reason to object to a consideration of the value of true results is that it may be thought to inevitably lead to inadmissible wishful thinking. Consider the example of a study designed to test whether a certain drug, drug X, is at least as effective as current standard therapy for condition Y. Assume that it is already established that X would have fewer gastrointestinal side effects than conventional therapies. While it is nowadays widely accepted that considering the potential consequences of error, such as patients receiving an ineffective new drug in case of a false positive error, is essential to the responsible design and execution of a study, it might seem that considering the consequences of true results, such as how great it would be to have a therapy for Y without gastrointestinal side effects and how large the profits for pharmaceutical company Z would be in this case, would amount to wishful thinking and detract from the truth-directedness of the study. Inquiry must be motivated not by the desire to arrive at a specific result among the possible conclusions of a study, so one could argue, but only by the desire to arrive at the true conclusion—whatever it might be.
But under these conditions, what could it still mean to evaluate the consequences of error? Consider the consequences of a false negative error in our example: Patients suffering from condition Y would continue to be treated with standard therapy, and company Z would forgo the profits that an innovative treatment with fewer side effects would have brought. How bad a result is this? A judgment in response to this question is only possible if one knows (among other things) the severity of the side effects that drug X would help to avoid. In other words, we need to know, in a world in which drug X actually is as effective as standard therapy, how much worse not knowing this is compared to knowing it. An evaluative statement on the consequences of error only makes sense relative to an evaluation of the consequences of a true result. The same is true if we consider the consequences of a false positive error. How bad it is that patients will receive treatment X that is incorrectly deemed effective depends, among other things, on how good or bad standard therapy is. In each case, the harm that is relevant for the evaluation of the consequences of an error is the harm that we could have avoided by avoiding the error—that is, by arriving at the true result. This evaluation requires an assessment of the difference in the relevant outcomes between the consequences of error and the consequences of a true result. Responsible epistemic risk management requires us to evaluate this difference once for the case where the hypothesis is true (the left column of table 1) and once for the case where it is false (the right column) and then to compare these two differences to each other. In these evaluations, the consequences of getting it right and the consequences of getting it wrong play exactly symmetrical roles.
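The structure of this comparison can be put schematically; the notation below is merely an illustrative sketch and does not appear elsewhere in this article. Let H be the hypothesis that drug X is at least as effective as standard therapy, let “accept” and “reject” stand for the two possible conclusions of the study, and let V be a placeholder for whatever overall evaluation of a state of the world one favors. Then the two relevant differences are

\[
\Delta_{H} = V(\text{accept} \mid H) - V(\text{reject} \mid H), \qquad
\Delta_{\neg H} = V(\text{reject} \mid \neg H) - V(\text{accept} \mid \neg H).
\]

Here \(\Delta_{H}\) is the harm that a false negative adds in a world in which H is true, and \(\Delta_{\neg H}\) is the harm that a false positive adds in a world in which H is false. On the account defended here, epistemic risk management turns on weighing these two differences against each other, and each of them involves one consequence of error and one consequence of getting it right.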
In some concrete cases, Douglas herself implicitly refers to the consequences of getting it right. Thus, when commenting on the controversy around the hormonal drug DES and the required evaluative judgments in epistemic risk management concerning this case, Douglas (2009, 111) summarizes them as follows: “Which is worse, to cause birth defects or to fail to use a method to prevent miscarriages?” In characterizing consequences of one of the errors involved as a failure to prevent a certain kind of harm, she implicitly refers to the difference between the consequences of committing an error by falsely rejecting a drug in a world in which it is in fact safe and effective and the positive consequences that would ensue if its effectiveness and safety were recognized in that same world.
Now what about wishful thinking? To return once more to the hypothetical drug X, suppose that I evaluate the consequences of a true positive result as particularly great because the benefits of making a drug with no gastrointestinal side effects available would constitute a massive increase in quality of life for a great many patients, and that I argue on these grounds that we should be especially careful not to place an impossibly strict demand on the evidence for X’s efficacy. Obviously the consequences of a false positive error in our example are nonetheless particularly severe, and obviously we need to take great care in avoiding this error; but epistemic risk management requires us to calibrate just how far we should go in this direction, and we are not going to be able to come to a reasoned answer to this question without considering the gains that could be made with the aid of the new drug. Note that this consideration is essentially the same as arguing that the error of dismissing drug X while it is in fact effective is particularly grave because the patients suffering from Y could have been freed from the pernicious side effects of conventional therapy. Considering the consequences of true results stands in no closer relation to wishful thinking than considering the consequences of error does. The only evaluation of the consequences of error that does not automatically also involve an implicit evaluation of the consequences of true results would be an immediate weighing of the two types of error against each other. But note that this would not allow one, for example, to take the severity of the side effects of standard therapy into account in any way. It would mean that epistemic risk management ought to be no different whether we are investigating a drug that stands a chance of alleviating severe suffering from side effects or one that just promises to taste better.
As another example to underline the importance of considering differences instead of comparing consequences of error immediately, consider how the medical community should manage epistemic risks in their evaluation of the claim “It is feasible to cure Parkinson’s disease by means of stem cell therapy.” A false positive error would lead to false hopes, wasted research resources, and possibly inadequate and harmful attempts at therapy. A false negative would lead to missing the chance of curing a widespread and dreadful condition. In trying to avoid these errors, which of the two aims should take priority? If this question were all about evaluating the two states of error and comparing them to each other, it seems very clear that the false positive marks the bleaker overall situation of the world—simply because it is a world in which it is strictly impossible to cure Parkinson’s disease by means of stem cell therapy. It is a cruel disease, and in that world our prospects for successful treatment are limited. But clearly, this should not be the kind of evaluation that figures into assessing the harm done by being in error. In contrast, the assessment should only be about the harm that the epistemic failure adds to the situation—in other words, about the difference in value between the two states of the world presented in the right column of table 2. And in this respect, I submit that it is the false negative that does greater harm. While there is no doubt that the false positive adds suffering to the world, the suffering that the false negative prevents from being alleviated is much greater in comparison. Whether you agree with me in this assessment or not, note that it is only the method of comparing the differences in value between the consequences of true results and the consequences of error that even makes possible an evaluation that judges the false negative to be the more severe kind of error, in this case and many similar ones.
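A purely hypothetical numerical sketch may help to fix ideas; the figures are arbitrary placeholders chosen only to exhibit the structure of the comparison, not estimates of any real quantities. Suppose we assign an overall value of 100 to the world in which the cure is feasible and recognized as such, 40 to the world in which it is feasible but rejected (the false negative), 30 to the world in which it is infeasible and correctly rejected, and 20 to the world in which it is infeasible but accepted (the false positive). Then

\[
\underbrace{100 - 40}_{\text{false negative}} = 60 \;>\; 10 = \underbrace{30 - 20}_{\text{false positive}},
\qquad \text{even though} \qquad 20 < 40.
\]

Comparing the world states directly would single out the false-positive world (valued at 20) as the bleaker one, but comparing the differences identifies the false negative as the error that adds the greater harm, which is exactly the kind of verdict that only the difference-based assessment makes available.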
Another example that Douglas uses can illustrate an additional dimension of this topic. In a paper on expertise, she discusses an example of scientists acting as public experts on climate change. She argues that experts should take the foreseeable consequences of errors into account (in an indirect role, to weigh uncertainties). But she continues:
On the other hand, experts should not concern themselves with potentially untoward consequences of making a true claim (as opposed to making a false one). For experts to turn away from a claim because, if true, it would be unpleasant, would be to confuse motives with evidence.
(Douglas 2008, 14)
Let me propose a ludicrously simplified scenario in which the single claim at issue is the prediction that global warming is going to escalate to catastrophic consequences unless we take immediate and drastic measures. In the scenario, the claim is either true or global warming is not going to escalate at all. The consequences of errors and truths as expressed in the public expertise of scientists could be outlined as in table 3.
Allow me to note two things: First, every reasonable person would prefer the world to be in either of the states described in the right column of table 3 to either of the states on the left. Second, it would indeed amount to confusing motives with evidence to take this preference for the right column as a reason not to endorse the claim, or even to raise the standards of evidence because of it. This is of course not the form that a consideration of the consequences of true results should take. Instead, what is relevant for epistemic risk management, in this case and in others, is only the damage that not endorsing the claim does in a world in which it is true compared to the damage that endorsing it will do in a world in which it is false. But that damage, in both cases, is constituted by the difference that results from getting it wrong versus getting it right. Each can only be assessed by means of an evaluative comparison of the states of the world that are represented as vertical neighbors in table 3. And as such, these evaluations also require the scientist to reckon with the untoward consequences of making a true claim—if only to assess precisely how many more untoward consequences not endorsing it would bring about in a world in which the claim is in fact true. The assessment of the potential harms of getting it wrong cannot be separated from a consideration of the benefits of getting it right.
3. EPISTEMIC RISK MANAGEMENT AND THE DIFFERENCE BETWEEN ACTION AND INACTION
After all this has been said, claiming absolute symmetry between the roles of the consequences of error and the consequences of true results seems somewhat counterintuitive in many typical cases. This sense is especially strong when we think about the moral culpability of epistemic actors who fail to properly consider different kinds of potential consequences. Take a hypothetical scientist in the drug-efficacy example who in her epistemic risk management fails to give any consideration to the ineffective treatment that patients would receive as a consequence of a false positive, and compare this to another hypothetical researcher who fails to consider the potential benefits of a drug with fewer side effects. Intuitively, the two failures do not quite seem to be of the same quality. The first case carries a stronger air of moral shortcoming. The second case, in which the researcher did not properly consider the beneficial consequences of a potential true result, seems morally less odious. Moreover, it also seems much more natural to call the first case—where a researcher is oblivious to the severity of the harm to patients that a drug mistakenly thought to be effective might do—a disregard for the seriousness of mistakes, while in the second case, it seems much more obvious to speak of a failure to consider the benefits of getting it right. This is despite the fact that we have just shown that any consideration of one is also a consideration of the other.
Can we somehow square the pull of these intuitions with the insight that epistemic risk management always requires consideration of both the consequences of error and the consequences of a true result? Or should we dismiss the intuitive pull of the example? Can it at least be explained?
To work toward such an explanation, I begin by providing a tentative account of the intuitions about suitable characterizations—that is, of why, in one case, the description as disregarding the seriousness of mistakes seems more salient and in the other case, it seems more appropriate to speak of a failure to take into account the benefits of getting it right. If we look at the possible failures of the investigation in both possible worlds—the world in which drug X is actually effective and the world in which it is not—we find the following. In the world in which drug X is actually effective but is not used because the medical community does not recognize this effectiveness, the scientists’ contribution to this overall state of the world is negatively relevant behavior in the following sense: the most informationally parsimonious description of the scientists’ behavior that completes a causal account of the result is the mention of a negative fact—namely, that they did not recognize the effectiveness of drug X. There may be other true narratives that make for an equally complete causal explanation—accounts that detail what the scientists did instead that kept them from recognizing X’s effectiveness—but they are richer in information in comparison (and thus richer in information than strictly necessary). In contrast, in the world where X is ineffective but is mistakenly believed by the scientists to be effective, any informationally parsimonious causal account of the harm that ensues will have to make mention of the positive fact that the scientists accepted claims about X’s effectiveness; no informationally weaker description in terms of a negative fact is available. In this sense, the scientists’ behavior is positively relevant in this case. I propose that this is why we understand the first case as an omission and a failure to bring about a better state of the world, and why a failure to consider the benefits of getting it right strikes us as the most salient description of what has gone wrong here in terms of epistemic risk management. In the other case, where scientists’ behavior is positively relevant to bringing about the unfortunate consequences, we classify it as an action rather than an omission and see the problem, as far as epistemic risk management is concerned, in an insufficient regard for the seriousness of mistakes.
In this analysis, I have used Jonathan Bennett’s (1993) distinction of behaviors that are positively relevant to an outcome from those that are negatively relevant. Bennett’s distinction rests on whether the informationally weakest fact about the agent’s behavior that completes a causal explanation of the outcome is a positive or a negative fact. If one assumes a Gricean preference for informationally parsimonious descriptions—or “relevance,” in Grice’s (1975) own terms—it follows quite straightforwardly from this conception that, in the case of a negatively relevant behavior, an explanation of the consequences that describes the behavior as a negative fact can be expected to be (intuitively) preferred (and vice versa for positively relevant behaviors).
Bennett introduced this distinction in his discussion of the moral differences between action and inaction, and between doing harm and merely allowing it to occur. Accordingly, the explanation I offered of the intuitions about suitable characterizations can also be extended to an explanation of the intuitions about differing moral severity (that insufficient attention to the damage caused by an error constitutes the more serious wrongdoing) by relating them to the (alleged) moral asymmetry between action and omission. An inquirer who has not adequately taken into account the possible harms of getting it wrong, and who then actually gets it wrong and thereby causes harm, can be understood to have failed in her moral responsibilities by actively and negligently causing harmful consequences. On the other hand, an inquirer who has not properly taken the benefits of getting it right into account, and as a consequence falls short of establishing the result that would have brought about good effects, seems “only” to be culpable for an omission. She may have failed to prevent harm, but she has not actively contributed to causing it.
The intuitive plausibility of a morally relevant difference between doing harm and failing to prevent it is quite strong, as nonconsequentialist ethicists in particular have often pointed out. This plausibility explains much of the reluctance to put the consequences of error on par with the consequences of true results when it comes to epistemic risk management. The scientist’s primary obligation, one might think, is to consider the potential harm that her research results might actively bring about if she gets it wrong. This priority fits well with the widespread acceptance of the principle “primum non nocere”—in other words, the admission that nonmaleficence is, in the words of Ross (2002, 21), “a duty of a more stringent character” than beneficence.
So much for explaining the intuitive asymmetry. Can we make space for it in a principled account of responsible epistemic risk management? In the first part of this paper, I have shown that nonarbitrary decisions regarding epistemic risk management need to be based on evaluations of both the consequences of error and the consequences of true results. This was not so much a point about what scientists’ moral obligations are, but rather an argument to the effect that methodological choices are simply underdetermined where such value judgments are not available. Nothing that I have discussed so far places any limitations on the kinds of evaluative judgments that might go into assessing the difference in value between getting it right and getting it wrong. Perhaps scientists should give privileged consideration to harm caused by error over harm not prevented by a true result. Note that this would not amount to treating the consequences of error and the consequences of true results asymmetrically after all. It remains true that the harm done by an error can only be assessed by giving full and equal evaluative consideration to two states of the world, one in which the error has been made and one in which it hasn’t. But it would perhaps justify a different kind of asymmetry in epistemic risk management, one that can in certain cases be mistaken for an asymmetry between the consequences of error and the consequences of true results.
Whether or not such an asymmetry can be justified depends on whether there are relevant moral differences between the consequences of action and the consequences of inaction, or between doing harm and merely allowing it to occur. Instead of going into a full discussion of matters of moral principle in this paper, I shall be content with testing whether morally relevant differences between action and omission are likely to play a role in the specific cases at issue—that is, in the context of epistemic risk management.
It is difficult for consequentialists to admit a morally relevant difference between consequences of action and consequences brought about by failing to act, and arguments based on cases in which such differences seem intuitively compelling have therefore long played a role in debates about consequentialism. But while many argue (and some disagree; see, e.g., Haydar [2002]) that consequentialists cannot admit such morally relevant differences on pain of inconsistency, nonconsequentialists do not have to admit them—let alone admit them in all cases. The matter at issue therefore cannot be settled by choosing between broad normative frameworks. It will rather depend on whatever it is that grounds the purported moral differences between action and omission. In the remainder of this paper, I discuss three influential proposals and apply them to the present case.
4. POSITIVELY VERSUS NEGATIVELY RELEVANT BEHAVIORS SOMETIMES DIFFER IN MORAL RESPECTS
It is characteristic of positively relevant behavior in Bennett’s sense that most of the behavioral alternatives at the agent’s disposal would not have brought about the effect. In some cases in which positively and negatively relevant behaviors differ in moral respects, the difference can be explained with reference to this point. Take duties, for example. A positive duty—go to Exeter!—requires the agent to pick from a small range of possible actions (all of which may come at a cost) and thus typically places a weightier burden on the individual than a negative duty—stay away from Exeter!—that can be fulfilled in a wide range of different ways (see Bennett 1993, 79).5
However, the positive/negative distinction typically does not transfer well to the kinds of activities involved in epistemic risk management. Even in cases in which one could describe the results of epistemic risk management as a failure to prevent damage, such as when a scientist underestimates the benefits of getting it right and, as a consequence, falls short of establishing the knowledge that could have helped to prevent harm, the scientist’s own contributions to how epistemic risks in research are managed are typically positively relevant in Bennett’s sense. This is because one can hardly engage in inquiry by doing nothing, but once one does enter the pursuit of knowledge, one is automatically actualizing one balance of epistemic risks or another. Considered as concrete actions, such as typing certain parameters into the computer that runs the statistics software, the kinds of methodological steps that in effect raise the risk of direct negative consequences from error are not different in kind from those that in effect lower the chance of preventing harm by means of a true result.6 A positive/negative distinction along Bennett’s lines does not seem to identify a morally significant asymmetry in the context of epistemic risk management.
That said, there may be some very specific instances of scientific research involving asymmetries that are related to the distinction between positively and negatively relevant behavior described by Bennett. The asymmetries only become visible if one chooses a much more abstract level of description of what scientists do instead of detailing their immediate behavior in the laboratory. They contribute to the solution of this or that problem, or they do not; they prevent the spread of an error, or they refrain from doing so. Let’s take the example of biosafety research on a genetically modified plant C, and assume that the plant was designed to be particularly drought resistant and thus to contribute to combating food shortages in certain low-income countries. The specific subject of research is the hypothesis that widespread cultivation of C will lead to widespread environmental degradation, due to negative effects on insects and on the ecosystem as a whole. Stephen John (2010), who has examined very similar cases in connection with his work on the application of the precautionary principle in environmental policy, has pointed out that there is a morally relevant asymmetry here between the risks associated with a false negative and those associated with a false positive. In particular, the false negative result here is associated with the risk of irreversible environmental damage, while the false positive result is associated with science missing an opportunity to contribute to solving the world’s food problems. The fact that it seems natural to describe the false positive research result in terms of its overall consequences as a missed opportunity, and thus as negatively relevant behavior, is related to the fact that a whole range of scientific courses of action seem viable both for contributing and not contributing to solving the global food problem. In contrast, the risks associated with the false negative outcome in this case are generated in a very specific and unique way. In this particular case, this is also associated with a morally relevant asymmetry: for while the irreversible environmental damage cannot be compensated, it is conceivable that the risks related to world nutrition could be mitigated by accompanying measures. Whatever the merits of crop C, it is certainly not the only potential way to address food shortages in low-income countries. John (2010) has argued that in such situations the application of precautionary reasoning can be morally right even if it runs counter to what cost-benefit analysis would recommend. The cases considered here are very specific; nevertheless, it is quite possible that, especially in the kinds of research around which political and social controversies revolve, a missed benefit can more than just occasionally be offset by other measures (often by other scientific-technical developments), while a mistake, once it has been made, leads to irreversible and uncompensable damage.
5. RIGHTS
A second influential approach to explaining morally relevant differences between causing harm and merely allowing it to unfold is to relate them to the rights of individuals affected by the consequences. In this context, Philippa Foot (1967; 2002) points out the central importance of a difference between two kinds of right: rights to noninterference, such as the right to bodily integrity, on the one hand, and rights to goods and services, such as the right to receive aid or support in times of distress, on the other. She states that the latter are generally weaker and, in cases of conflict, are overridden by rights of the former kind. The basic idea is that actively doing harm regularly entails the violation of rights to noninterference, while typical cases of allowing bad consequences to happen “only” involve an infringement of rights to goods and services (if they involve contravening any rights at all), such that the difference in moral strength between these kinds of rights would explain the moral asymmetry at issue (Foot 2002, 83–85). In the context of epistemic risk management, this approach seems, prima facie, to be an interesting way to explain potential moral differences between action and omission. Nevertheless, I think that it does not apply in this context, for the following two reasons.
The first reason is that it is difficult to argue that someone’s rights to noninterference have been violated by someone producing an erroneous research outcome. If it is possible at all, it will be limited to a very few extreme cases. Producing information is by its nature an act that is not likely to constitute interference in and of itself. Even in cases of fraud and deceit, the giving of false information usually requires additional action to set in motion the series of events that leads to harmful consequences. And note that by talking about fraud, we have already left our proper field of discussion. Fraud and deceit are not limiting cases of epistemic risk management.
The second reason is that the language of rights suggests absolute priorities and incommensurability with weaker concerns. But there can be no such thing as the absolute prioritization of the aim of avoiding one type of error over all others while still maintaining open inquiry. In open inquiry in the inductive sciences, no error probability can be one, and none can be zero. Therefore, epistemic risk management requires weighing and, thus, commensurability of all concerns about the potential results of all possible outcome states rather than prioritization.
There may be cases where the production of a certain kind of error itself constitutes the infringement of someone’s right to noninterference. But even in these cases, the proper reaction would be to abstain from conducting the inquiry altogether. It cannot be right to knowingly risk infringing someone’s negative rights, and the legitimate tools of epistemic risk management within open inquiry do not allow one to set this risk to zero. So even such extreme cases would call not for a different treatment of the consequences of error and consequences of true results within epistemic risk management, but for a decision that would obviate the need for epistemic risk management.
6. ANTHROPOGENIC VERSUS NONANTHROPOGENIC RISKS
Finally, I would like to discuss a third candidate for a morally relevant asymmetry related to the distinction between doing and allowing. In an environmental ethics context, Marion Hourdequin (2007) has pointed out that when discussing the question of whether risks are equitably distributed, it matters greatly whether we are talking about anthropogenic or nonanthropogenic risks. For nonanthropogenic risks, the question of equitable distribution is generally not applicable. It may be tragic for Peter that he, of all people, has a genetically increased risk of developing disease D, but it is not literally unfair.7 This distinction can sometimes be related to the distinction between doing and allowing—namely, when the risks that remain untouched and unchanged in their distribution by not acting are to be recognized as nonanthropogenic risks, whereas acting would causally produce certain risks that would thus be clearly anthropogenic. Thus, in the moral evaluation of options, doing would be saddled with the additional burden of considering questions of justice, whereas allowing would not.
Can these distinctions also play a role in the management of epistemic risks? I think so, in principle. Consider the scientific assessment of this hypothesis: “A rapid school closure for a few weeks will significantly limit the spread of infectious disease S.” Let’s assume that this hypothesis needs to be assessed shortly after the outbreak of S, which is as yet poorly understood. If the medical community rejects the hypothesis while it is in fact true, schools are not likely to be closed, even though closures could have limited the spread of the disease. The associated risks are unevenly distributed, but in a way that is not (at first glance) anthropogenic. Rather, the natural lotteries of differing physical constitutions and the chaotic path of the virus through the population are at work here. In contrast, a false positive acceptance of the hypothesis by the medical community would almost certainly lead to school closures. This would impose additional risks on certain people: risks of missed education, increased risks of poverty, and other risks that in this case are clearly anthropogenic and thus need to be examined for questions of equity. This could be interpreted to mean that, in calibrating epistemic risks in research on the effectiveness of school closures for infectious disease mitigation, the seriousness of mistakes must be the primary moral consideration—as it is entangled in serious questions of justice—while the benefits of getting it right are secondary.
But even with this carefully selected example, this conclusion can easily be doubted. It is only an oversimplified description of the problem that makes the risks associated with a false negative result appear to be nonanthropogenic. In recent years, we have had ample opportunity to observe how the different vulnerabilities of people in the face of a pandemic are not determined by the natural lottery alone but are also largely shaped by socioeconomic factors. A case in which the differently distributed risks associated with a research-relevant factual question actually have no anthropogenic component, and can therefore be considered independently of aspects of justice, would arguably be a striking exception in today’s world.
It is plausible that the difference between anthropogenic and nonanthropogenic risks is indeed sometimes behind the intuition that doing and allowing must be treated differently in moral considerations. But I do not believe that an intuition based on this alone will stand up to closer scrutiny in the present case or in other reasonably realistic applications.
7. CONCLUSIONS
There is no justification for denying the evaluation of the benefits of getting it right any influence on epistemic risk management, or for generally treating it as a less weighty consideration than the seriousness of mistakes. Considering both is not only justifiable but indispensable for any inquirer who wants to set the balance of epistemic risks in a responsible way, because every relevant value assessment consists of evaluating a difference between the consequences of a possible type of error and the corresponding benefits of getting it right.
But we tend to describe the assessment of this difference sometimes as an estimate of the seriousness of mistakes and sometimes as a measurement of the benefits of getting it right, depending on whether we judge the contribution of science to the overall outcome to be positively relevant or negatively relevant. This creates a link between these different descriptions and the much-discussed moral asymmetry between doing and allowing. Our discussion, however, has shown that while this connection may explain in some cases the source of the intuition that judging the seriousness of a mistake is the morally weightier aspect, no principled and convincing justification can be derived from it for the claim that those aspects we describe as judging the seriousness of mistakes actually have moral priority in epistemic risk management. There may be individual, special cases in which the distinction aligns with the difference between compensable and noncompensable consequences. And there may be individual, special cases in which the distinction aligns with the difference between states of the world characterized by nonanthropogenic risks and those that are characterized by anthropogenic risks. This may then justify a differing and asymmetric weighting of the moral considerations in each case. But the cases in question are each very particular and arguably extremely rare in the practice of epistemic risk management in scientific research. As a general rule, the seriousness of mistakes and the benefits of getting it right remain two sides of the same coin.
Torsten Wilholt is Professor of Philosophy and History of the Natural Sciences at Leibniz Universität Hannover. His research interests include the social epistemology of science, the philosophy of applied science and the political philosophy of science. He is co-director of the Center for Advanced Studies “SOCRATES” that is devoted to studying the “Social Credibility and Trustworthiness of Expert Knowledge and Science-based Information” and is funded by the German Research Foundation (DFG).
NOTES
1. The research underlying this paper was funded by the Deutsche Forschungsgemeinschaft (DFG) through the SOCRATES Center for Advanced Studies at Leibniz Universität Hannover (grant numbers WI 2128/8-1, FR 3862/6-1 and FR 3862/7-1, project number 470816212). This paper is based on an idea that emerged in conversation with Ralf Stoecker a long time ago. Stephen John provided detailed and very helpful feedback on an earlier version. I am also grateful for the advice I received from Dietmar Hübner, Jacob Stegenga, T. Y. Branch, two anonymous reviewers for this journal, and the participants of the SOCRATES research seminar.
2. I am grateful to a reviewer for this journal for drawing my attention to this restriction.
3. I believe that the considerations of this article can be applied very broadly to a wide variety of decision-making moments in the research process that bear some relation to error avoidance, and not only to those related to propositional inference based on data. I therefore prefer the broader concept of epistemic risk, as introduced by Justin Biddle and Quill Kukla (writing as Rebecca Kukla), to the narrower concept of inductive risk (Biddle 2016; Biddle and Kukla 2017).
4. This statement also echoes a much-noted stance in the ethics of science and technology that goes back to the work of Hans Jonas, who called it the “heuristics of fear”: “[M]oral philosophy must consult our fears prior to our wishes to learn what we really cherish” (1984, 27). I am grateful to Dietmar Hübner for pointing out the parallel to Jonas.
5. This is only to say that these potentially morally relevant features may be expected to be associated with positively and negatively relevant behaviors, not that any act possesses certain moral qualities simply in virtue of falling on either side of the positive/negative distinction. Bennett (1993, 86, 95–96) himself insists that the distinction has no basic moral significance.
6. Only in certain very special cases may a methodologically relevant choice be implemented by negatively relevant behavior—for example, by not continuing to look for evidence any further. But there is no reason to expect that these cases will be associated with lowering the chance of a true result and its benefits more often than with raising the risk of harm through error.
7. To be sure, we sometimes talk like this when we speak imprecisely. Yet it is revealing that this occurs especially among theists when they quarrel with God. From the theistic point of view, while the unequally distributed risk is not anthropogenic, there is someone else to blame for it and to blame for an insufficient attention to concerns of justice.




