
6 Research Misconduct Policy, Its Evolution and Culture of Morality

Preceding chapters support the notion that our current treatment of research misconduct represents an early version of future regulatory policy, one that needs to evolve further. Although countries around the world currently model their research misconduct policies on ours, there is little firm evidence of our policy's effectiveness in preventing or containing such misconduct, in protecting scientific capital, in supporting fair competition in science, or in limiting harm to end users. A recap of ideas flowing from critical observations in previous chapters sets the stage for considering alternative directions for misconduct policy.

• Our current strategy for governing science entails highly diffuse sources of supervision, a strategy that is often unsuccessful and thus increasingly inappropriate given our heavy dependence on research findings. With rare exceptions, journals are unwilling or unable to play a major role in detecting research misconduct and containing its harms, and the traditional safeguards of peer review and replication are also unequal to the task. The diffusion of governance and the failure of safeguards lead us to ask how frequently research misconduct occurs (with the implication that a certain level is normal and therefore tolerable) when we should be asking whether science can produce knowledge that is sufficiently valid and reliable for society's purposes. Although it isn't something scientists can put in place by themselves, science should have a full armamentarium of policies and structures to achieve that validity and reliability.
• Disputes about the appropriate role of the public in science and our exposure to a continuous stream of egregious cases of research misconduct feed our commonly held assumption that science is a distinct and autonomous enterprise (science exceptionalism) developed by a community of scientists working in isolation. Heather Douglas (2009), however, argues that such an assumption has intolerable consequences for society. A fully autonomous and authoritative science is one whose claims we simply have to accept with no recourse, one with no responsibility on the part of the scientific community for harms that may result when those claims prove invalid or fraudulent.

• Rethinking the autonomy of science requires reexamining its social compact, to allow it sufficient self-rule to protect its authority, but not total self-rule. Reflections on research misconduct are central to this rethinking in two ways: (1) research misconduct or errors and conflicts of interest contribute to a perception of science as unreliable and cast doubt on its authority, thus rendering it less useful in making policy decisions; and (2) scientists should be held responsible for the consequences of their fraudulent or seriously flawed findings, a precept routinely ignored in the handling of research misconduct cases, during both investigations and corrections of the scientific record. Simply because scientists provide important knowledge doesn't exempt them from basic moral responsibilities (Douglas 2009).

• Current regulations don't reflect the complexity of appropriate research behavior, as outlined in chapters 2 and 3, but instead are based on several mistaken assumptions. The first such assumption is that the definition of integrity in scientific research is clearly established and widely accepted, "yet many common research practices are directly at odds with ideal behavior" (Steneck 2011, 745).
The second is that scientists commit research misconduct with the clear intent to deceive, yet there is abundant evidence they do so for many other reasons: out of ignorance, frustration, or the conviction that the good ends of their research justify nearly any means; because of differing interpretations of what constitutes falsification or fabrication; in response to toxic research environments or to social or institutional pressures; or in an effort to beat a system of resource distribution they see as unfair or impossible to win. A third mistaken assumption is that the findings of institutional authorities investigating allegations of research misconduct are largely accurate, yet the views of those found to have committed misconduct rarely appear in the scientific literature, and many have neither the resources nor the time to appeal these findings and thus are denied the opportunity to challenge their accuracy.

Let us return to several key questions posed in the introduction. What has happened to the framework for scientific ethics? Never intended as a normative tool, the Mertonian framework was conceived at too high a level of generality to direct scientific practice in today's environments. In fact, most current ethical codes are aimed at...
