Reviewed by:
  • Arguments that Count: Physics, Computing, and Missile Defense, 1949–2012 by Rebecca Slayton
  • Martin Collins
Arguments that Count: Physics, Computing, and Missile Defense, 1949–2012. By Rebecca Slayton. Cambridge, MA: MIT Press, 2013. Pp. xi+324. $35.

In Arguments that Count, Rebecca Slayton tells a story of the U.S. cold war state in its relations with science and technology that is both familiar and distinct. It is familiar in that for her topical frame—missile defense—there exists a modest historiography on the series of programs and policy debates from the 1950s through the 1980s over whether and how the continental United States might be protected from a catastrophic nuclear attack, initially by aircraft and later by missiles. It is familiar, too, in that her analysis is grounded in the thick vein of studies on the ways in which cold war state interests and patronage reoriented existing disciplines and stimulated the creation of new communities of knowledge and practice.

Her account, though, brings these literatures and themes into productive juxtaposition through the methodology of science and technology studies (STS). Her focus is on how “software engineering,” a field nonexistent in the 1950s, developed by the 1970s as a community of expertise. Her primary question is how in the tangle of the cold war this expertise became constituted and gained legitimacy and authority—how its arguments came to “count.”

Initially, in the context of defense against nuclear attack, policy makers, national security officials, and scientists regarded physics—the postwar period’s top-of-the-hierarchy discipline—as the relevant expertise, whether in support or critique. Slayton organizes the book contrapuntally, alternating chapters on the communities of physics and software/computing in their intersection with the technical and political history of a succession of defense programs. The effect is to historicize carefully the rise in standing of software engineering in national policy debates.

The story of cold war state funding and disciplinary formation runs deeper. Slayton argues that defense against nuclear attack, in its sheer scale and complexity as a problem and in its requirement to process information and direct action on very short timescales, provided a crucial forcing circumstance for the creation of software engineering. From the first postwar initiatives, Whirlwind and SAGE, to the 1980s Strategic Defense Initiative (SDI, aka Star Wars), air defense was fundamentally reliant on computing technology and practice. Over this multidecade trajectory, engineers and scientists at select universities and corporations came, gradually, to see software and programming as the central problem in designing a defense. Then, through the 1960s and early 1970s, they created a distinct domain of intellectualization and practice. Defense against nuclear attack was from its inception a fraught national policy issue that provoked intense debate about its possible role in the cold war's "delicate balance of terror" (strategist Albert Wohlstetter's enduring coinage)—a consequence of the near-unstoppable offensive weaponry of the antagonists. Thus, the formation of this community of expertise in this historical context reflected not just the coalescence of a specialty but its inseparable relation to national politics and decision making, and to the attendant ethical implications.

Software engineering as a community emerged roughly coincident with the Reagan administration's efforts to promote SDI. Its dominant contribution to the debate was fundamentally critical: SDI (and its historical progenitors) posed a problem of "arbitrary complexity," of seeking to know and coordinate the numerous and heterogeneous technical and social elements inherent in the task of thwarting a nuclear attack. This challenge was compounded by a distinctive feature of the defense problem: one could never test such a system under the conditions in which it would have to perform, an actual nuclear war. Absent real-world testing and the iterative uncovering of faults, there was no way to assure the reliability of the software, the vital factor in the performance of a system intended to defend against catastrophe. This judgment, both practical and ontological, had, in Slayton's accounting, an effect both on software engineering's sense of its epistemological foundations and on subsequent policy debates over nuclear defense (which are covered in outline for the 1990s and 2000s in the conclusion).

Arguments that Count works best...