The Presumptions of Expertise:
The Role of Ethos in Risk Analysis
The civilian nuclear power enterprise in the United States has had a short and not very happy life. There was an initial period of slow development from the late 1940s through the late 1960s, a very brief period of rapid growth that lasted less than ten years, and then an unforeseen rapid decline beginning in the mid-1970s that was only hastened by the Three Mile Island accident in 1979; this decline has been called "one of the most stunning reversals of fortune in the history of American capitalism."1 A Forbes article declared in 1985 that "for the U.S., nuclear power is dead," and the scientist who chaired the National Research Council's 1992 report on nuclear power declared a year later that the future of nuclear power in the U.S. "looks grim."2 Although some 20 percent of the nation's electric power in 2003 was supplied by 104 nuclear plants, nuclear power has not been prominent on the public agenda: in the 1990s, a slowed growth in the demand for energy, reduced funds for R&D, the deregulation of the power industry in 1992, and reduced prices for fossil fuels made the nuclear option less important.3 There has been some talk of a "second nuclear era," with new plant designs that are "inherently safe," new approaches to regulation, and changed energy economics.4 However, no plants have been ordered since 1978, and all forty-one orders placed since 1973 were canceled or rejected by state governments; of 259 orders ever placed, 124 were canceled, the last two in 1995. The last operating license was issued in 1996, for a plant whose construction permit was originally issued in 1973.5
The short, controversial life of the nuclear industry leaves at least three legacies: the problem of decommissioning worn-out plants, the necessity of long-term waste storage, and the practice of risk analysis—this last legacy no less important than the other two for being less material. Risk analysis originated in the efforts of the federal government to sell the nuclear option to both the electric power companies and the public in the 1950s and 1960s, and the nuclear industry contributed much to its advancing methods.6 The field developed and expanded rapidly with the environmental and consumer legislation of the 1960s and 1970s—more than thirty major federal laws concerning health, safety, and the environment were passed between 1965 and 1985, many of them requiring the regulation of hazards and thus inviting, and often mandating, risk analysis.7 Although risk analysis began in the nuclear power enterprise and drew from safety and reliability engineering, it has become interdisciplinary, drawing from such intellectual traditions as operations research and systems analysis, public policy, actuarial statistics, toxicology, and epidemiology.8
Risk analysis acquired much of its disciplinary form and by-now-pervasive influence through governmental support and implementation. The National Science Foundation began a major funding program for risk analysis in 1979, in response to a request by the U.S. House Committee on Science and Technology; this program had significant impact on the development of the field.9 In 1983, in response to a request by the Food and Drug Administration, the National Research Council published a report on the issues and problems involved in using risk assessment in the regulatory process.10 This report, informally referred to as the "Red Book," helped create what has become the "standard account" of risk analysis, which maintains that the scientific process of risk assessment should be separate from the subsequent political process of risk management.11 The Red Book conceived of risk assessment as having four stages: hazard identification, dose-response assessment, exposure assessment, and risk characterization; and it described risk management as a process that builds on the results of risk assessment but also involves "social, economic, and political concerns" in order to...