In lieu of an abstract, here is a brief excerpt of the content:

  • Ethics beyond Computation: Why We Can't (and Shouldn't) Replace Human Moral Judgment with Algorithms
  • William Hasselberger

In 1976, the computer scientist and artificial intelligence pioneer Joseph Weizenbaum, inventor of the famous ELIZA program that simulated conversation with a psychotherapist, published an impassioned book titled Computer Power and Human Reason. He observed: "Western man's entire milieu is now pervaded by complex technological extensions of his every functional capacity" (1976, 9). Whether or not that statement was true in 1976, when a song called "Disco Lady" was at the top of the charts, it is certainly true now, in 2019, when computer algorithms are becoming interwoven with every manner of human activity and practice. Algorithms—logically structured formal instructions for mechanically translating specific "inputs" into desired "outputs"—are now used to assist or replace human judgment and expertise in countless areas, including transportation (in planes, trains, and automobiles), medical diagnosis and treatment decisions, human resource management, military strategy and intelligence assessment, stock trading, insurance risk assessment, aesthetic evaluation, matchmaking, assessing job applications and loan requests, and criminal sentencing and parole eligibility determinations. The trend of altering or replacing individual human judgment and expertise with increasingly sophisticated and powerful computer algorithms seems nearly irresistible.

This trend also raises serious ethical questions: about the privacy of the individuals involved, about automated patterns of bias towards particular groups of people, and about moral responsibility for harms resulting from "judgments" made by computers in, say, autonomous transportation or military hardware. But there is a deeper philosophical issue at play here that Weizenbaum put his finger on four decades ago when he noted that "the computer … has brought the view of man as a machine to a new level of plausibility," that people are increasingly prone to "anthropomorphize" computers, and that computers, while promising us ever increasing mastery and freedom, could actually play a decisive role in the "general technological usurpation of man's capacity to act as an autonomous agent in giving meaning to his world" (1976, 8, 10). Against the backdrop of an increasingly computerized and automated social world and an increasingly mechanistic view of ourselves ("as nothing but a clockwork"), Weizenbaum thought we faced two paramount questions. First, "no matter how it may be disguised by technological jargon, the question is whether or not every aspect of human thought is reducible to a logical formalism, or, to put it into the modern idiom, whether or not human thought is entirely computable" (12, emphasis added). And, second, "whether there are limits to what computers ought to be put to do"—i.e., "however intelligent machines may be made to be," are there "some acts of thought that ought to be attempted only by humans" (11, 13, original emphasis)? The first question is both a philosophical and an empirical question, and the second question is a related normative and ethical one.

Jump ahead 40 years. The idea of self-driving cars has been introduced to the public alongside the appealing prospect of a drastic reduction in overall traffic deaths. But such cars must be programmed with an algorithm—what some call a "morality algorithm" and others call the "death algorithm"—to determine how the car should respond in the event of an impending accident in which human beings are likely to be killed. In a crash, should the car prioritize the lives of its occupants over the lives of a larger number of pedestrians? What if among those pedestrians are mothers with infants in baby strollers? What is the morally correct program for such cases?
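To make vivid what programming such a choice actually involves, here is a deliberately crude, purely hypothetical sketch of a "death algorithm" decision rule in Python; every name, weight, and scenario in it is invented for illustration and describes no actual vehicle software.

    # Hypothetical sketch only: all names, weights, and scenario fields are
    # invented for illustration and describe no real autonomous-vehicle system.
    from dataclasses import dataclass

    @dataclass
    class CrashOption:
        description: str
        occupant_deaths: int    # expected deaths among the car's occupants
        pedestrian_deaths: int  # expected deaths among pedestrians

    def expected_harm(option: CrashOption,
                      occupant_weight: float = 1.0,
                      pedestrian_weight: float = 1.0) -> float:
        # A crude utilitarian score: a weighted sum of expected deaths.
        # The weights are precisely the contested "moral parameters" that
        # someone must fix in advance.
        return (occupant_weight * option.occupant_deaths
                + pedestrian_weight * option.pedestrian_deaths)

    def choose(options: list[CrashOption]) -> CrashOption:
        # Select the option with the lowest weighted expected harm.
        return min(options, key=expected_harm)

    # Example: swerve into a barrier (killing one occupant) or stay on
    # course (killing three pedestrians).
    options = [
        CrashOption("swerve into barrier", occupant_deaths=1, pedestrian_deaths=0),
        CrashOption("stay on course", occupant_deaths=0, pedestrian_deaths=3),
    ]
    print(choose(options).description)  # prints "swerve into barrier"

The sketch's very crudeness is the point: once the decision is coded, the contested moral questions do not disappear; they reappear as numerical weights that a programmer must choose before any accident ever occurs.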

Anyone who has taken a university course in moral philosophy in recent decades will recognize that the quandary of the "death algorithm" of a self-driving car is a variant of the infamous trolley problem that some philosophers use to argue for and against different moral theories. However, with the "death algorithm," we are not just having dubious fun testing our moral intuitions with macabre hypotheticals; programmers are thinking about the design architecture of the increasingly automated, computerized environment that human beings will actually inhabit, in which we will live and die (Simanowski 2018).

The "death algorithm" can be seen as part...
