1 Moral Agency

1.1 Introduction

The question concerning machine moral agency is one of the staples of science fiction, and the proverbial example is the HAL 9000 computer from Stanley Kubrick’s 2001: A Space Odyssey (1968). HAL, arguably the film’s principal antagonist, is an advanced AI that oversees and manages every operational aspect of the Discovery spacecraft. As Discovery makes its way to Jupiter, HAL begins to manifest what appear to be mistakes or errors, despite the fact that, as HAL is quick to point out, no 9000 computer has ever made a mistake. In particular, “he” (as the character of the computer is already gendered male in both name and vocal characteristics) misdiagnoses the failure of a component in the spacecraft’s main communications antenna. Whether this misdiagnosis is an actual “error” or a cleverly fabricated deception remains an open and unanswered question. Concerned about the possible adverse effects of this machine decision, two members of the human crew, astronauts Dave Bowman (Keir Dullea) and Frank Poole (Gary Lockwood), decide to shut HAL down, or, more precisely, to disable the AI’s higher cognitive functions while keeping the lower-level automatic systems operational. HAL, who becomes aware of this plan, “cannot,” as he states it, “allow that to happen.” In an effort to protect himself, HAL apparently kills Frank Poole during a spacewalk, terminates life support systems for the Discovery’s three hibernating crew members, and attempts but fails to dispense with Dave Bowman, who eventually succeeds in disconnecting HAL’s “mind” in what turns out to be the film’s most emotional scene.

Although the character of HAL and the scenario depicted in the film raise a number of important questions regarding the assumptions and consequences of machine intelligence, the principal moral issue concerns the location and assignment of responsibility. Or as Daniel Dennett (1997, 351) puts it in the essay he contributed to the book celebrating HAL’s thirtieth birthday, “when HAL kills, who’s to blame?” The question, then, is whether and to what extent HAL may be legitimately held accountable for the death of Frank Poole and the three hibernating astronauts. Despite its obvious dramatic utility, does it make any real sense to identify HAL as the agent responsible for these actions? Does HAL murder the Discovery astronauts? Is he morally and legally culpable for these actions? Or are these unfortunate events simply accidents involving a highly sophisticated mechanism? Furthermore, and depending on how one answers these questions, one might also ask whether it would be possible to explain or even justify HAL’s actions (assuming, of course, that they are “actions” that are able to be ascribed to this particular agent) on the grounds of something like self-defense. “In the book,” Dennett (1997, 364) points out, “Clarke looks into HAL’s mind and says, ‘He had been threatened with disconnection; he would be deprived of his inputs, and thrown into an unimaginable state of unconsciousness.’ That might be grounds enough to justify HAL’s course of self-defense.” Finally, one could also question whether the resolution of the dramatic conflict, namely Bowman’s disconnection of HAL’s higher cognitive functions, was ethical, justifiable, and an appropriate response to the offense. Or as David G.
Stork (1997, 10), editor of HAL’s Legacy, puts it, “Is it immoral to disconnect HAL (without a trial!)?” All these questions circle around and are fueled by one unresolved issue: Can HAL be a moral agent?

Although this line of inquiry might appear to be limited to the imaginative work of science fiction, it is already, for better or worse, science fact. Wendell Wallach and Colin Allen, for example, cite a number of recent situations where machine action has had an adverse effect on others. The events they describe extend from the rather mundane experiences of material inconvenience caused by problems with automated credit verification systems (Wallach and Allen 2009, 17) to a deadly incident involving a semiautonomous robotic cannon that was instrumental in the death of nine soldiers in South Africa (ibid., 4). Similar “real world” accounts are provided throughout the literature. Gabriel Hallevy, for instance, begins her essay “The Criminal Liability of Artificial Intelligence Entities” by recounting a story that sounds remarkably similar to what was portrayed in the Kubrick film. “In 1981,” she writes, “a 37-year-old Japanese employee ...
