
Configurations 10.3 (2002) 473-516




A Future For Autonomous Agents:
Machinic Merkwelten and Artificial Evolution

John Johnston
Emory University


Humans can't build a robot as smart as themselves. But, logically speaking, it is possible for such robots to exist. How? Cobb had asked himself throughout the 1970s, How can we bring into existence the robots which we can't design? In 1980 he had the bare bones of an answer. One of his colleagues had written the paper up for Speculations in Science and Technology. "Towards robot consciousness," he'd called it. The idea had all been there. Let the robots evolve.
—Rudy Rucker, Software (1982)

Throughout the 1980s and early 1990s, at places like the Los Alamos National Laboratory and the Santa Fe Institute, Artificial Life experiments by Christopher Langton, Thomas Ray, Christoph Adami, and many others dovetailed with the development of nonlinear dynamical systems theory, constituting one of the most innovative theory-practice relay systems in contemporary science and technology. More specifically, this relay system arose from a novel conjunction of computer simulation with the formation of a theoretical framework within which nonstandard theories of computation (or information processing), dynamical systems theory, and adaptation in evolutionary theory could be articulated into a new unity. As one consequence, practitioners and theorists alike could entertain the hope that complex systems theory—or Complexity Theory, as this framework has come to be called—might be able to resolve contradictions between theories of structure and theories of change.

In the 1990s "the new AI" made great strides in robotics by further applying the lessons of Artificial Life research, beginning with Rodney Brooks's "bottom-up" approach to the construction of autonomous mobile robots. Colleagues and followers like Luc Steels, Pattie Maes, Maja Mataric, and Randall Beer developed and applied notions of emergent functionality, autonomous-agent theory, and collective intelligence in multiagent systems, along with a dynamical systems approach. Meanwhile, the new robotics continued to reject symbolic computation as part of the baggage of the old AI and allied itself with Francisco Varela's theory of "enaction," a new theory of cognitive science based on embodiment and concrete situatedness. It soon became evident, however, that further progress in robotics depended upon the application of evolutionary programming techniques to evolve not only neural net controllers but new morphologies as well. To be maximally effective, evolutionary programming is usually combined with computer simulations, with which the new robotics has always been uneasy. Thus a double exigency now demands that computation be brought back into the mix—a new kind of "emergent" computation, to be sure, but computation nonetheless.
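The evolutionary loop this paragraph invokes can be made concrete with a minimal sketch: a genetic algorithm evolves the weights of a tiny neural controller inside a simulation, here a toy phototaxis task of the kind common in evolutionary robotics. The example below is hypothetical and illustrative only; none of its names, parameters, or design choices are drawn from the systems discussed in this essay.

```python
import math
import random

# A minimal, hypothetical sketch of the evolutionary-robotics loop:
# a genetic algorithm evolves the weights of a tiny neural controller
# inside a simulated phototaxis task. Names and parameters are
# illustrative, not taken from any system cited in the essay.

GENOME_LEN = 6       # weights and biases for a 2-sensor, 2-motor controller
POP_SIZE = 30
GENERATIONS = 50
MUTATION_STD = 0.2

def controller(w, left_sensor, right_sensor):
    """Map two light-sensor readings to two motor speeds in [-1, 1]."""
    left_motor = math.tanh(w[0] * left_sensor + w[1] * right_sensor + w[2])
    right_motor = math.tanh(w[3] * left_sensor + w[4] * right_sensor + w[5])
    return left_motor, right_motor

def evaluate(w, steps=100):
    """Fitness: negative final distance to a light source at the origin."""
    x, y, heading = 5.0, 5.0, 0.0
    for _ in range(steps):
        def sensor(angle_offset):
            # Reading falls off with squared distance from the sensor,
            # which sits one unit ahead of the body at the given angle.
            sx = x + math.cos(heading + angle_offset)
            sy = y + math.sin(heading + angle_offset)
            return 1.0 / (1.0 + sx * sx + sy * sy)
        lm, rm = controller(w, sensor(0.5), sensor(-0.5))
        heading += 0.3 * (rm - lm)            # differential steering
        speed = 0.1 * (lm + rm)
        x += speed * math.cos(heading)
        y += speed * math.sin(heading)
    return -math.hypot(x, y)

def evolve():
    pop = [[random.gauss(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        ranked = sorted(pop, key=evaluate, reverse=True)
        elite = ranked[:POP_SIZE // 5]        # truncation selection
        pop = elite + [
            [g + random.gauss(0, MUTATION_STD) for g in random.choice(elite)]
            for _ in range(POP_SIZE - len(elite))
        ]
    best = max(pop, key=evaluate)
    print(f"best fitness (negative distance to light): {evaluate(best):.3f}")

if __name__ == "__main__":
    evolve()
```

In the research discussed above, the same outer loop is extended so that the genome encodes morphological parameters (body plan, sensor placement) alongside the controller weights, which is what allows evolution to produce new morphologies as well as new controllers.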

Viewed historically, this trajectory promises to bring to fruition the original ambition of cybernetics to fashion a complete theory of the machine.1 It began with Arturo Rosenblueth, Norbert Wiener, and Julian Bigelow's seminal essay in which they proposed that any behavior controlled by negative feedback—whether that of an animal, human, or machine—was purposive and teleological.2 By the early 1950s, little more than a decade later, Ross Ashby had demonstrated how a machine possessed of a "requisite variety" of internal states would inevitably optimize itself through self-organization, and John von Neumann had shown theoretically how it would be possible to build self-reproducing automata of increasing complexity. Now, some fifty years later, research in evolutionary robotics seems to be readying itself to leap over the "complexity barrier," as von Neumann called it. When it comes, this leap will not only initiate a new phase in the evolution of technology but will mark the advent of a new form of machinic life.

In our contemporary setting, the completion of such a trajectory would allow us not only to evolve robots that could walk out of the laboratory to pursue their own agendas, but also to understand how cognition itself is an evolutionary machinic process, distributed throughout multiple feedback loops with the environment. In this dynamic exfoliation of what Gilles Deleuze and Félix Guattari call the machinic phylum, (post)humanity will begin to...
