Schizophrenia and Narrative in Artificial Agents
Phoebe Sengers
Abstract

Artificial-agent technology has become commonplace in technical research, from computer graphics to interface design, and in popular culture through the Web and computer games. On the one hand, populating the Web and our PCs with characters who reflect us can be seen as a humanization of a previously purely mechanical interface. On the other hand, the mechanization of subjectivity carries the danger of simply reducing the human to the machine. The author argues that predominant artificial intelligence (AI) approaches to modeling agents are based on an erasure of subjectivity analogous to that which appears when people are subjected to institutionalization. The result is agent behavior that is fragmented, depersonalized, lifeless and incomprehensible. Approaching the problem using a hybrid of critical theory and AI agent technology, the author argues that agent behavior should be narratively understandable; she presents a new agent architecture that structures behavior to be comprehensible as narrative.

The premise of this work is that there is something deeply missing from artificial intelligence (AI) or, more specifically, from the currently dominant ways of building artificial agents. This uncomfortable intuition has been with me for a long time, although for most of that time I was not able to articulate it clearly. Artificial agents seem to be lacking a primeval awareness, a coherence of action over time, something one might, for lack of a better metaphor, term "soul."

Roboticist Rodney Brooks expressed this worry eloquently:

Perhaps it is the case that all the approaches to building intelligent systems are just completely off-base, and are doomed to fail. . . . [C]ertainly it is the case that all biological systems . . . [b]ehave in a way which just simply seems life-like in a way that our robots never do.

Perhaps we have all missed some organizing principle of biological systems, or some general truth about them. Perhaps there is a way of looking at biological systems which will illuminate an inherent necessity in some aspect of the interactions of their parts that is completely missing from our artificial systems. . . . [P]erhaps we are currently missing the juice of life [1].

Here, I argue that the "juice" that we are missing is narrative. The divide-and-conquer methodologies currently used to design artificial agents result in fragmented, depersonalized behavior, which mimics the fragmentation and depersonalization of schizophrenia seen in institutional psychiatry. Anti-psychiatry and narrative psychology suggest that the fundamental problem for both schizophrenic patients and agents is that observers have difficulty understanding them narratively. This motivates my work on a narrative agent architecture, the Expressivator, which structures agent behavior to support narrative, thereby enabling the creation of agents that are intentionally comprehensible.

The Problem

Building complex, integrated artificial agents is one of the dreams of AI. Classically, complex agents are constructed by identifying functional components—natural-language processing, vision, planning, etc.—designing and building each separately and then integrating them into an agent. More recently, some practitioners have argued that the various components of an agent strongly constrain one another and that the complex functionalities of classical AI cannot be easily coordinated into a whole system. Instead, behavior-based AI proposes that the agent be split up, not into disparate cognitive functionalities, but into "behaviors," such as foraging, sleeping and hunting. Each of these behaviors would integrate all of the agent's functions for that behavior.
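
To make the contrast concrete, here is a minimal sketch of a behavior-based decomposition (hypothetical code with invented names such as Forage and Sleep; the article does not specify any implementation). Each behavior bundles its own sensing and acting for one activity, rather than routing everything through shared vision or planning modules:

```python
# A minimal sketch of behavior-based decomposition (illustrative only).
# Each behavior integrates the agent's functions for that one activity.

class Behavior:
    def relevance(self, percepts):
        """How strongly this behavior wants control right now."""
        raise NotImplementedError

    def act(self, percepts):
        """The action taken when this behavior is selected."""
        raise NotImplementedError

class Forage(Behavior):
    def relevance(self, percepts):
        return 1.0 if percepts.get("food_visible") else 0.1

    def act(self, percepts):
        return "move_toward_food"

class Sleep(Behavior):
    def relevance(self, percepts):
        return percepts.get("fatigue", 0.0)

    def act(self, percepts):
        return "lie_down"

class Agent:
    def __init__(self, behaviors):
        self.behaviors = behaviors

    def step(self, percepts):
        # Simple winner-take-all arbitration: the most relevant behavior acts.
        winner = max(self.behaviors, key=lambda b: b.relevance(percepts))
        return winner.act(percepts)

agent = Agent([Forage(), Sleep()])
print(agent.step({"food_visible": True, "fatigue": 0.3}))  # -> "move_toward_food"
```

The winner-take-all arbitration here is just one simple selection scheme; real behavior-based systems use various mechanisms (suppression, inhibition, activation networks) to decide which behavior controls the agent at any moment.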

Even such approaches, however, have not been entirely successful in building agents that integrate a wide range of behaviors. Rod Brooks, for example, has stated that one of the challenges of the field is to find a way to build an agent that can integrate many behaviors (he defines "many" as more than a dozen) [2]. Programmers can create robust, subtle, effective and expressive behaviors, but the agent's overall behavior tends to fall apart gradually as more behaviors are combined. For small numbers of behaviors, this disintegration can be managed by the programmer, but as more behaviors are combined their interactions become so complex that managing them is at best time-consuming and at worst impossible.
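
The scaling problem can be made concrete with a little arithmetic (an illustrative back-of-the-envelope calculation, not from the article): if any two behaviors can potentially conflict, say over the same actuators or goals, the number of pairwise interactions a programmer must anticipate grows quadratically with the number of behaviors:

```python
# Illustrative arithmetic: with n behaviors, up to n*(n-1)/2 pairwise
# interactions may need to be considered by hand.

def pairwise_interactions(n_behaviors: int) -> int:
    return n_behaviors * (n_behaviors - 1) // 2

for n in (3, 6, 12, 24):
    print(n, "behaviors ->", pairwise_interactions(n), "potential pairwise conflicts")
# 3 -> 3, 6 -> 15, 12 -> 66, 24 -> 276
```

On this rough picture, Brooks's dozen-behavior threshold already implies dozens of potential interactions, before counting higher-order combinations.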

In both cases, divide-and-conquer methodologies lead to integration problems. With classical agents, which...
