  • A Kantian Prescription for Artificial Conscious Experience
  • Susan A.J. Stuart and Chris Dobbyn
Abstract

Research in artificial intelligence, artificial life and cognitive science has not yet provided answers to any of the most perplexing questions about the mind, such as the nature of consciousness or of the self; in this article the authors suggest a new approach. They begin by setting their project in the broader cognitive science context and argue that little recent research adequately addresses the question of what the necessary conditions are for conscious experience to be possible. Kant addresses this question in his transcendental psychology, and although Kant's work is now over 200 years old, the authors believe his approach is worthy of re-examination in the current debate about the mind.

There is a general assumption that if we could produce a conscious system its consciousness would be qualitatively different from our own. But this assumption, we believe, is symptomatic of the dominant approach in artificial intelligence (AI) and artificial life (A-Life), in which systems are situated in a world, virtual or actual, whose properties are a given (which the system is required to discover) and whose structure determines the internal representations formed by the system. We believe this approach is flawed; instead we propose that, in order to achieve any form of consciousness, a system would have to be an active, rather than a passive, participant in how and what it experiences. We argue that active participation of this sort is possible only in a system that instantiates a perceptual and interpretive framework similar to the one that, Kant argues, dictates how we intuit, order and unify our experience [1]. We begin by examining the current state of play in AI and A-Life; we then state the Kantian paradigm and propose an architecture that would have to be implemented in a conscious artificial system; finally we consider the case of MAGNUS, an artificial system proposed by Aleksander [2] that satisfies some of the criteria for self-consciousness.

Artificial Intelligence and Artificial Life

It is commonplace among philosophers of mind and of cognitive science to lump together AI and A-Life, as if they were closely related fields. However, we start by highlighting the deep differences between them, differences that must cast doubt on hopes for a unified approach to the problems of mind and consciousness [3].

AI has its intellectual roots in classical cognitive psychology and borrows theoretical concepts heavily from computer science: its model of mind is fundamentally computational, in that it pictures the mind as a kind of Turing machine [4], a finite-state control coupled to an unbounded memory tape. A Turing machine performs a computation by moving through a series of discrete states, each new state being determined by the machine's present state, the symbol it currently reads from its tape and a fixed table of transition rules. Turing was able to prove that a single universal machine of this kind could carry out any effectively computable procedure.

By contrast, A-Life's background is in evolutionary biology, and its theoretical underpinnings come from information theory and from complex systems science, the endeavor to find the laws that govern complex phenomena such as ecologies, economies or weather patterns. The tools of AI are computer simulation and a variety of knowledge representation formalisms, although a powerful strand within AI has sought to establish the discipline formally on the basis of mathematical logic. A-Life also uses computer simulation as its main practical tool; to the extent that its practitioners rely on formal analysis at all, however, it originates in the mathematics of cellular automata, genetics, natural selection and dynamical systems theory.

Hendriks-Jansen [5] has also drawn attention to the disparity between the "natural kinds" (the sorts of entities that it is natural to invoke when describing or explaining something) on which AI and A-Life discourses are based. AI generally relies on semantic tokens representing beliefs, plans, goals and functional states; A-Life, he argues, has a much less well-developed set of natural kinds, centered on such ideas as patterns of behavior, genes and environmental variables. Typical AI explanations are classically top-down, reductionist, modular and homuncular; A-Life explanations are bottom-up and rely crucially on...
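The Turing-machine picture of computation invoked above can be made concrete with a short sketch. This is a minimal illustration of the standard textbook formulation, not anything from the article itself; the example machine, its state names and the helper function are our own.

```python
# Sketch of a Turing machine: a finite control stepping over an unbounded
# tape, with each move determined by (current state, symbol under the head)
# via a fixed transition table. Blank cells are represented by '_'.

def run_turing_machine(transitions, tape, state="start", head=0, max_steps=1000):
    """Run until the machine enters the 'halt' state (or max_steps expires)."""
    tape = dict(enumerate(tape))  # sparse tape; unwritten cells read as '_'
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: flip every bit, halting at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt",  "_", "R"),
}

print(run_turing_machine(flip, "1011"))  # -> 0100
```

The point of the sketch is the discreteness the authors emphasize: every step is fully determined by the present state and the symbol read, with nothing outside the transition table contributing to the computation.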
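The cellular automata mentioned among A-Life's formal tools can likewise be sketched briefly. This is a generic elementary (one-dimensional, two-state) cellular automaton under Wolfram's rule-numbering convention, offered only as an illustration of the bottom-up style of explanation the passage attributes to A-Life; the function names are our own.

```python
# Elementary cellular automaton: each cell's next value depends only on
# itself and its two immediate neighbours. The 8 possible neighbourhoods
# index into the bits of the rule number (Wolfram convention).

def step(cells, rule=110):
    """One synchronous update of a row of 0/1 cells (boundaries held at 0)."""
    padded = [0] + cells + [0]
    return [
        (rule >> (padded[i - 1] * 4 + padded[i] * 2 + padded[i + 1])) & 1
        for i in range(1, len(padded) - 1)
    ]

# A single live cell: purely local rules generate globally complex patterns.
row = [0] * 8 + [1] + [0] * 8
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

The contrast with the Turing-machine picture is the one the passage draws: there is no central program, only many identical local rules whose interaction produces the behavior of interest.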
