
4 THE TURING TEXT

Even scholars working far outside the disciplines that make up the field of artificial intelligence are familiar with the basic elements of the Turing test, in which the machine’s ability to mimic human language is presented as the touchstone of intelligent behavior. It is usually presented in the following way: A human being and an entity that is either a human being or a machine are separated from each other by a wall. The first human is allowed to pose questions to the unseen entity by means of a teleprinter. If that human being is unable to determine from the responses given whether the entity on the other side is a machine or a human being, that entity is for all intents and purposes intelligent.1 Turing’s thought experiment has led to innumerable meditations on the possibilities for intelligent machines. But at the heart of the Turing test lies a brilliant, if unsuccessful, attempt to move attention away from the “how” of imperative process toward the results of rhetorical persuasion. Rather than having humans compete against machines in their ability to generate correct utterances, the test sets the machine’s ability to generate intelligibility against the human’s capacity to generate theories by which the origins and form of that intelligibility may be explained. Even if it fails to persuade (and many have suggested that the test is too easy), it fails in an entirely spectacular way. Our attention becomes focused not merely on whether its utterances are true, but on whether its utterances are possible. The “text” of the test is one in which we move from the already potentialized valences of dialogue to ruminations about the origins of that dialogue.

That one can move fruitfully between these two registers is amply, if ironically, illustrated in the case of Joseph Weizenbaum’s 1966 program ELIZA, which imitates (Weizenbaum would say “parodies”) the behavior of a Rogerian psychiatrist:2

ELIZA: Hello, I am a computer program.
you: Are you?
ELIZA: You’re not really talking about me—are you?
you: Can you help me?
ELIZA: Does someone else believe I help you?
you: I’m thinking you might be able to help me!
ELIZA: Do you wish to believe I might be able to help you?
you: Yes.
ELIZA: Please go on.
you: I’m concerned about my computer.
ELIZA: What do you think about machines?
you: I think machines can be very useful for scholars.
ELIZA: Why do you mention computers?
you: I don’t know.
ELIZA: Don’t you really know? (Chatbot::ELIZA)

Weizenbaum made no pretensions toward setting aside the “how” of his program. Behind ELIZA’s dialogue lies a fairly rudimentary keyword substitution algorithm that mostly parrots back the user’s responses (while trying to avoid repetition). The effect, though, is legendary. There are several anecdotes (some undoubtedly apocryphal, all plausible) in which ELIZA tricks someone’s employer or colleague into thinking that ELIZA is a real person, and Weizenbaum himself observed (to his horror and astonishment) that people regularly developed strong emotional bonds with the program. In one instance he found that his secretary would insist that he leave the room while she used the program; others were offended when asked to view transcripts of their interactions with the program, claiming it was an invasion of their privacy. According to Weizenbaum, “A number of practicing psychiatrists seriously believed the DOCTOR computer program could grow into a nearly completely automatic form of psychotherapy” (5).
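The keyword-substitution loop described above can be suggested in a short sketch. The Python fragment below is not ELIZA itself (the original was written in MAD-SLIP and relied on ranked keywords with far richer decomposition and reassembly rules); the rule table, pronoun list, and function names here are invented purely to illustrate how pattern matching, pronoun reflection, and a simple guard against repetition can yield dialogue of the kind quoted above.

import random
import re

# Pronoun swaps so the user's words can be echoed back from the program's side.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "are": "am",
}

# Keyword rules: (pattern, response templates). The captured fragment is
# reflected and spliced into the chosen template. The final rule is a
# catch-all used when no keyword matches.
RULES = [
    (re.compile(r"i'?m (.*)", re.I),
     ["Do you wish to believe you are {0}?", "How long have you been {0}?"]),
    (re.compile(r"can you (.*)", re.I),
     ["Does someone else believe I {0}?", "Why do you ask whether I can {0}?"]),
    (re.compile(r"i think (.*)", re.I),
     ["Why do you think {0}?"]),
    (re.compile(r"(.*)", re.I),
     ["Please go on.", "Tell me more.", "Why do you say that?"]),
]

def reflect(fragment):
    """Swap first- and second-person forms in the captured fragment."""
    return " ".join(REFLECTIONS.get(word, word)
                    for word in fragment.lower().rstrip(" ?.!").split())

def respond(text, last_template):
    """Answer with the first matching rule, avoiding the template used last time."""
    for pattern, templates in RULES:
        match = pattern.match(text.strip())
        if match:
            choices = [t for t in templates if t != last_template[0]] or templates
            template = random.choice(choices)
            last_template[0] = template  # remember, so the program avoids repeating itself
            return template.format(reflect(match.group(1)))

if __name__ == "__main__":
    last = [""]  # one-element list used as mutable state across turns
    print("ELIZA: Hello, I am a computer program.")
    while True:
        print("ELIZA:", respond(input("you: "), last))

Even in this toy form, the catch-all rule at the end of the table is what lets the program fall back on noncommittal prompts ("Please go on") whenever no keyword fires, which is much of what allows its parroting to pass for attention.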
ELIZA’s obvious association with the Turing test tends to transform discussion of it into debates over Turing’s definition of intelligence. And since ELIZA operates within a realm normally considered part of medical therapy for mental disorders experienced by real human beings, the program adds to this a set of obvious ethical questions. Weizenbaum was undoubtedly correct in stating his objection to the idea that therapists might one day be replaced with machines: “I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other’s experience of those problems and, in large part by way of his own empathetic recognition of them, himself come to understand them” (7). The fact that such an argument seemed necessary indicates that at least some users believed ELIZA...
