Chapter 4 / Tort Liability for Artificial Agents

4.1. Navigating Analogies

Tort liability, which includes a multiplicity of liability schemes and recovery theories such as negligence, product liability, malpractice liability, liability in trespass, and liability for negligent misstatement, arises from the harm caused by a person's breach of a duty to avoid harm to others, and seeks to put the person harmed in the position he would have been in had the breach of duty not taken place. It can arise in favor of third parties harmed by their interactions with an artificial agent. Such potential harm is not trivial; the liabilities may be huge. As particularly salient examples, missile battery control systems, autopilots, train control systems, and nuclear medicine control software can cause grievous injury and property damage if they operate incorrectly, and examples of such failures have occurred.1 In the economic sphere, breakdowns of trading systems and incorrect advice from expert systems can cause significant losses to their users.2

Given that artificial agents can be embodied in a wide variety of software and hardware, exhibit diverse functionality, and operate with varying levels of autonomy, the sources of applicable liability theories are potentially diverse. Indeed, perhaps the hardest problem in devising a theory of liability for artificial agents is deciding which body of analogous case law should be brought to bear. Such divisions in tort liability law reflect the history and policy of separate areas, areas that could merge again. Our analytical strategy will be to reason directly from the treatment laid out in previous chapters and, keeping the already-identified characteristics of artificial agents in mind, apply the relevant portions of liability doctrine where appropriate.

Tort liability rules have historically attempted to protect humans against the harms other humans might expose them to via a variety of entities of varying capacities. Besides many conventional theories of supplier and operator/user liability, possible sources for a theory of tort liability flexible enough to handle artificial agents include existing doctrines relating to liability for wild and domestic animals, children, unpredictable actors under supervision such as prisoners and even slaves, and ultrahazardous activities. Such analogies are not new. Indeed, products themselves became the source of strict liability via analogies made to dangerous animals (Bernstein 1995). These analogies, in the case of artificial agents, draw their plausibility from the fact that, like animals, artificial agents are "a category of entities which are neither human, nor totally without responsibility." The analogies reach their limits when we consider punishment, for "the problem with machines and their programs, even if we were to squeeze them into the same category as dogs, would be how to blame and punish them" (Wilks 1985, 1279). Questions of punishment invariably bring about consideration of moral responsibility, a topic we return to in section 4.5 and chapter 5. Flexibility in both conceptual and legal approaches is necessary in part because autonomous artificial agents challenge the law's extant, typically binary, conceptual schemes.
In particular, artificial agents that include, or are instantiated by, software present classification problems: if software is characterized as a "product," strict products liability regimes, at least in theory, are engaged and could facilitate recovery for damage caused by software defects; if as a "service," liability is typically limited to contractual remedies against the supplier, and implied warranties will often be easily avoided by disclaimer so as to preclude recovery. The difficulties inherent in classifying agents as products or services are relevant when the question of supplier liability to those harmed by defects in the agent (such as users and third parties) is at stake. In a broader range of circumstances, operators or users may be liable for the acts of artificial agents they direct and control. As well as analogies with dangerous activities and with animals or other actors under supervision, a doctrinally and economically plausible strategy would regard artificial agents as legal agents, engaging the strict vicarious liability regime that forms part of agency law. In the context of liability, artificial agents, besides being thought of as agents in the legal sense, are also usefully thought of as actors in the more general sense of originators of actions with consequences, with varying capacities and abilities, deployed in situations...
