Mechanical Brains: Autism and Artificial Intelligence
This essay shows how popular and technical discourses of autism treat the condition as a form of artificial intelligence. Offering a genealogy of so-called mechanical brains, it argues that attempts to rehabilitate liberal subjectivity in an age of information overload created the conceptual and cultural conditions necessary for computational theories of mind, which inform the most prominent studies of autism. As distinctions between biological and machinic intelligence gave way to distinctions between authentic and artificial intelligence, autism emerged as an intermediary for consolidating divergent understandings of cognitive difference. Since the mid-twentieth century, autism has come to signal the superhuman potential of artificial intelligence and, at the same time, mark the threshold computers must surpass to achieve authentic intelligence. Autism contains anxieties about automation while also sustaining fantasies of maximized brainpower. It indicates both deficit and surfeit, strength and weakness. This essay leverages those oppositions to suggest how theories of disability and posthumanism might productively trouble one another: first to track constructions of intelligent personhood alongside information technologies, then to read an alternative construction of autistic intelligence that points toward new cognitive subjectivities.
Is the autistic brain different? That question underlies a 2011 study featuring the brain of autistic author and animal scientist Temple Grandin. In its own language, the study aims “to elucidate the neuroanatomical and functional basis of Dr. Grandin’s cognitive strengths and weaknesses.”1 The authors of the study, a group of neuroscientists and psychiatrists, conclude that Grandin’s brain structure and function evidence unique traits when compared to a control population. The problem, the authors explain, is their limited control group size of only three people. Such a small group provides little basis for comparison. Regardless, anyone familiar with Grandin’s work will likely accept that she has a unique mind. As an internationally renowned professor of animal sciences, she has had a profound impact on industrial agriculture, making cattle corrals simultaneously more humane and more efficient. Even more than her work to design better livestock handling facilities, her fame derives from her work to raise awareness about autism, a project now in its fourth decade that has led her to author nearly a dozen books about her experience as an autistic person with exceptional visual-spatial skills.
The study of her brain cites one of Grandin’s more successful titles, Thinking in Pictures, as evidence of her unique mental life. Cognition for Grandin takes place in a non-verbal visual field, what she describes as “a VCR tape in my head.”2 She first described this unique form of cognition in Oliver Sacks’s 1993 feature for The New Yorker: “My mind is like a CD-ROM in a computer—like a quick-access videotape.”3 In a later article titled “My Mind is a Web Browser,” she explains, “I look at the visual images on the ‘computer monitor’ in my imagination, then the language part of me describes those images.”4 New media formats have given Grandin new ways to figure her cognition, but the mechanicity of mind runs throughout her writings on autism. What that cognitive difference has to do with her brain remains an open question. The cognitive scientists who studied scans of her brain noted morphological differences and, possibly, differences in activity patterns compared to their three other test subjects. The small sample group aside, do differences in brain size, shape, or activity correlate to meaningful differences in cognitive function? Even the authors of the study have their doubts. After all, we know well that differences in head size have nothing to do with differences in intelligence. Looking for answers to autism’s mysteries in the morphology of brains may amount to high-tech phrenology. Like those nineteenth-century scientists feeling around for cranial differences, cognitive scientists find an intractable problem in trying to explain what normal cognition looks like.5
Despite its provisional findings, this study and similar ones following it generated widespread interest, garnering coverage from 60 Minutes, Discover Magazine, Smithsonian Magazine, and Scientific American, as well as more specialized news outlets.6 The publicity surrounding Grandin’s brain provides a useful point of departure for interrogating constructions of autistic intelligence because it highlights a number of tendencies in accounts of autism, starting with a common conflation between mind and brain. The spotlight on Grandin illustrates how that conflation inspires popular fascination with autistic people of extraordinary intelligence while casting a shadow on other autistic experiences. With the brain front and center, emphasis falls squarely on autistic cognition as a form of computation. Grandin’s use of technological analogies to explain her cognitive experience belongs to a broader trend of describing autistic intelligence as robotic. As Majia Holmer Nadesan writes, “autistic intelligence has become a site of condensation for the cultural fascination with, and fear of, self-regulating, cybernetic machines devoid of human emotion and sociality.”7 This essay follows Nadesan’s method of historicizing processes of subjectivization that construct intelligent personhood in relationship to information technologies. I turn to those processes of subject formation to ask how autism, as a medical and cultural phenomenon, emerged in the twentieth century as a new terrain for defining boundaries between liberal cognition and artificial intelligence.
Examining early formulations of artificial intelligence shows how information and computer sciences laid a conceptual foundation for the construction of autism as a condition of mechanical being. Constructed that way, autism acts as a crucible for subjectivity in the information age, forging into a single entity widespread anxieties over automated society and enthusiasm for the biotechnical potential of maximizing brainpower. As Nadesan suggests, cybernetics—a field that abstracted differences between biological and mechanical systems—has played a key role in changing how people understand intelligence beyond the human. Her phrase “self-regulating, cybernetic machines” may conjure an image of sentient robots, but Jean-Pierre Dupuy reminds us that the field’s intellectual legacy “represented not the anthropomorphization of the machine but rather the mechanization of the human.”8 Or, put another way, “to think is to compute as a certain class of machines do.”9 Dupuy shows how cognitive science adopted a computational theory of mind from cybernetics, decoupling intelligence from consciousness by treating it as the capacity to react to stimuli. Consciousness and cognition thus become features of an intelligence sufficiently complex to produce mental experience, which lays the foundation for subjectivity. Situating cybernetics as the origin of cognitive science, Dupuy explains how reconfiguring the human as a system of mechanisms implied a new way of thinking about cognition as information processing.
If a vision of the human mind as no more than synaptic circuitry appears less flattering than the rational subject of free will supposed by liberal humanism, Dupuy reminds readers that scientific discovery, “from Copernicus to molecular biology,” often displaces “our proud view of ourselves as occupying a special place in the universe.”10 My contention here is that constructions of autism have borne the weight of this latest blow to man’s naïve self-love by limiting its implications to subjects of cognitive disability.11
Computational theories of mind have gravitated to autism as a foil for better understanding the functions of neurotypical cognition. But because autism is defined by behavior, not by neuroanatomical difference, that project requires naturalizing social norms as products of biological evolution.12 In that way, behaviors like making eye contact begin to look like cornerstones of human sociality, fundamental aspects of human being. Medical models of autism define the condition by the absence of such capacities, explaining it as a deficit of intelligence. At the same time, as the fascination with Grandin’s brain suggests, autistic cognition also marks a potential for heightened forms of cognitive ability. When rendered in terms of a computational theory of mind, those heightened abilities suggest the possibility of optimizing intelligence in much the same way recent developments in artificial intelligence have elicited speculations about superintelligence.13 Autism not only condenses cultural fascination and fear of cybernetic machines, as Nadesan argues, but also serves as a site of negotiation for redrawing the boundaries of human intelligence in an age of algorithmic ascendancy. Even before digital computers, information science theorized artificial intelligence as intelligence uncoupled from biological cognition, setting the stage for what became, by the end of the twentieth century, a stock representation of the autistic brain as mechanical.14
Liberal Subjectivity Impaired
Both disability studies and posthumanism offer critical methods for assessing representations of autistic intelligence as computational. From a disability studies perspective, analogies comparing autistic cognition to information technology have pervasive social effects that range from framing everyday interactions to shaping policy decisions. For instance, when Nadesan critiques the association of autists with cybernetic machines, the force of her critique falls on dehumanization: “The common idea that autistic intelligence is typified by a computer’s computational processes strips autistic people of their consciousness, their emotions, their humanity.”15 Attention to the stakes of negating human qualities similarly informs Stuart Murray’s position in “Autism and the Posthuman.” While acknowledging the potential for a conceptual move beyond the human, he raises “real concerns about the place of those with disabilities in a world centered on information exchange and technology.”16 By juxtaposing the theoretical with the actual, he reminds readers that the everyday experience of people living with autism depends on their recognition in human society. The computational model of autistic intelligence is problematic, disability studies suggests, because it makes cognitive difference a barrier to the category of human.
Posthumanism, by contrast, rejects the category of human as an ideal, instead treating cognitive difference as an opportunity to complicate exclusionary borders drawn around human intelligence. In his account of posthumanism, for example, Cary Wolfe discusses Grandin’s description of thinking like a web browser because it defies humanist dogma “founded in no small part on the too-rapid assimilation of the questions of subjectivity, consciousness, and cognition to the question of language ability.”17 By aligning her mode of thought with technologies of visual representation, Wolfe argues, Grandin constructs a more inclusive model of subjectivity. In that sense, Wolfe extends the critique of liberal humanist subjectivity that N. Katherine Hayles develops in How We Became Posthuman. Despite substantive differences in their theories of posthumanism, both Wolfe and Hayles turn to cybernetics to find an alternative to liberal subjectivity. That is, an alternative to subjectivity grounded in autonomy, free will, and rationality—those trappings of sovereign selfhood that Hayles associates “with projects of domination and oppression.”18
If cybernetics frames a disagreement over the status of humanism, liberal selfhood gives disability studies and posthumanism a shared object of critique. Both areas of inquiry foreground the limitations of mastery that liberal humanism has traditionally assumed as the basis for an individuated self. That sense of mastery begins with the idea of selfhood as located in the mind because it implies an individual in possessive control of their body. This tradition tends to obscure the importance of embodied experience—including bodily differences of all sorts—treating it “as an accident of history rather than an inevitability of life,” Hayles explains.19 Disability theorist Tobin Siebers adds that the inevitability of embodiment should help us recognize “human society not as a collection of autonomous beings, some of whom will lose their independence, but as a community of dependent frail bodies that rely on others for survival.”20 The incorporation of embodiment in formulations of personhood makes clear how narrowly liberal subjectivity hews to those readily discernible aspects of perception that make up consciousness. Research on unconscious bias, hormonal balance, and microbiomes gives us some empirical basis for what theories of the unconscious and subject position have long held—many factors delimit human rationality.
Taken together, these critiques of liberal subjectivity might contribute to neurodiversity by expanding the territory of legitimate personhood. However, that project leaves the construction of autistic subjectivity on precarious ground because the very characteristics it questions are those typically associated with autism. Siebers describes elements of liberal subjectivity as “outdated notions that define the human according to eighteenth-century ideals of rational cognition, physical health, and technological ability.”21 Compare that list of traits to Hans Asperger’s description of autistic children as “intelligent automata” in his seminal 1944 study:
Social adaptation has to proceed via the intellect. In fact, they have to learn everything via the intellect. One has to explain and enumerate everything, where, with normal children, this would be an error of educational judgement. Autistic children have to learn the simple daily chores just like proper homework, systematically.22
Rational and technical abilities dominate this formative construction of autistic intelligence (physical health is a nonissue), but with a difference from the norm. Autism is diagnosed as more rational, more systematic than usual, and that surfeit implies a deficit. Autism ends up looking like liberal subjectivity turned askew, impaired as if by caricature. Cultural constructions of autistic intelligence treat the condition as both excessive and deficient in its divergence from cognitive norms. Sacks makes this explicit in his account of Grandin when he writes about her photographic memory: “This quality of memory seemed to me both prodigious and pathological—prodigious in its detail and pathological in its fixity, more akin to a computer record than to anything else.”23 The dichotomy renders autism as at once extraordinarily powerful and hopelessly inhuman.
Roland Barthes famously commented that “Einstein’s brain is a mythical object,” signifying the entirety of his person and his work. “Paradoxically,” wrote Barthes, “the greatest intelligence of all provides an image of the most up-to-date machine, the man who is too powerful is removed from psychology, and introduced into a world of robots.”24 Sixty years later, headlines asking “Did Einstein Show Asperger’s Traits?” serve as a good reminder that Barthes touched on something more general than the myth of just one brain.25 Extraordinary cognition tends to merge with the technological as a method for recognizing its difference. Formulated that way, autism makes a convenient container for anxieties about the role of machine learning in the computer age. The emergence of cybernetics in the mid-twentieth century, as Hayles has shown, caused considerable worry over the fate of liberal subjectivity. She describes liberal subjectivity as “imperiled” by cybernetic theories that reconfigured humans as complex information systems.26 Autism, emerging contemporaneously with cybernetics, came to signal all the negative attributes of the human reduced to machine while neurotypical subjectivity continued to occupy what Rosemarie Garland-Thomson has termed “the normate position,” or “the social figure through which people can represent themselves as definitive human beings.”27 Because autism is pathologized by its analogy to cutting-edge technologies, however, it also signals the potential for optimizing human cognition toward superhuman aims. The tension between prodigious and pathological intelligence finds stable ground in autism when the condition appears bound up with questions of machine learning.
To interrogate those binds requires a critical approach to artificial intelligence as well as a critical approach to autism. Critical approaches to representations of autism have shown clearly how popular movies such as Rain Man (1988) “propose a computational, nonhuman model of the autistic brain.”28 The question remains how theories and technologies operating under the sign of AI have proposed a model of cognition that attaches itself only to certain subjectivities while appearing to reposition all of human intelligence. Alan Turing’s famous test makes a good example. It describes authentic computer intelligence as a text generator with natural language understanding capable of passing for human in what he called “the imitation game.”29 The model of intelligence AI aims to imitate derives from the same form of subjectivity that Turing’s work is supposed to have imperiled: linguistic, disembodied, socially normative. Instead of putting liberal subjectivity at risk of obsolescence, cybernetics seems to have recast it in a weaker form divested of political rights and free will, yet still centered on cognitive style. That form of subjectivity continues to shape critical and imaginative views of personhood reduced to the brain.30 As Meredith Broussard explains in Artificial Unintelligence, for computer science the term machine learning “doesn’t mean that the machine has a brain made out of metal. It means that the machine has become more accurate at performing a single, specific task according to a specific metric that a person has defined.”31 Broussard has to clarify because, over the course of the twentieth century, mechanical brains grew into the image of artificial intelligence manifested. Although less evocative today, their legacy continues to shape normative distinctions between real and artificial intelligence and, consequently, between human and nonhuman entities.
To displace those distinctions, along with their power to mark autistic intelligence as artificial, requires closer attention to how early information scientists imagined mechanical brains as tools for rehabilitating liberal cognition in an age of information overload.
Making Intelligence Artificial
The Enlightenment individualism inaugurated by Cartesian rationality played a leading role in the development of liberalism. Formulations of personhood grounded in skeptical self-consciousness—such as cogito, ergo sum—coalesced in the mid-nineteenth century with what Elaine Hadley describes as an individuated subject of “liberal cognition.”32 Such a subject possessed not just the capacity for rational knowing but also for disinterestedness, objectivity, reticence, conviction, impersonality, sincerity, reflection, abstraction, and internal deliberation. Liberal cognition enabled a fair and resilient social order at the dawn of industrialization, Hadley shows, just when an explosion of information technologies overwhelmed self-contained individual knowledge. “In this regard,” she explains, “a liberal mind was not only distinct from the reflexive habits of the unthinking masses but also different in some respects from a reflective subject who engaged in what might be called ‘habits of thought.’”33 The habitual, repetitive, and rote would come to distinguish computation from human cognition before those categories grew less distinct in the late twentieth century. The invention of mechanical brains created a conceptual terrain for thinking the relationship between human and machine intelligences, eventually blurring the distinction and undermining the subject of liberal cognition. The history of mechanical brains makes visible the undoing of liberal subjectivity as an epistemic model for understanding human intelligence, and in turn how that undoing makes it possible to imagine different types of intelligent subjects from artificial to authentic.
The concept of mechanical brains dates to the late nineteenth century, after the emergence of liberal cognition but before the invention of digital computers. If liberal cognition stood strong against the mechanization of Western society, the floods of information pouring out of industrial sciences and new academic disciplines threatened to overburden that model of knowing and undermine individual mastery over even one area of knowledge. Concern over information glut appears in writings by the early architects of our modern information infrastructure, pioneers of information science avant la lettre. For instance, Paul Otlet found in bibliography a technique for rescuing a world drowning in a flood of unsystematic publishing. In an essay from 1891 titled “Something About Bibliography,” Otlet laments “the debasement of all kinds of publication” in the nineteenth century due to the proliferation of social and political sciences.34 The solution, he argued, would require “the creation of a kind of artificial brain by means of cards containing actual information or simply notes of references.”35 Otlet imagined his massive card catalog as a brain because the epistemic basis of knowledge rested on individuated cognition. As a solution to overburdened individuals, however, his artificial brain approached a horizon of collective knowledge by organizing scattered threads of research. In other words, the solution to individual cognitive overload takes the form of collective cognition designated as artificial because it is social, mechanical, and distributed rather than biological.
A sense of collective cognition persisted in early information science through the interwar years, making Otlet an important predecessor to online culture.36 In retrospect, we can see how his vision of an artificial brain prefigures similar concerns in computer science about memory, associative structures, information retrieval, and crowdsourcing. The artificial brain metaphor appeared apt as he dreamed of building complex research tools to transform sound into writing, retrieve and reproduce documents, manipulate cataloged items, and recombine records to trace new relationships among existing research. The technologies capable of such work, he writes, “would indeed be a mechanical, collective brain.”37 The collective aspect of his vision grew in significance so that his most aspirational plans for organizing information systems became his most networked. In 1934, at the end of his career, Otlet’s ambitious work Traité de documentation: Le livre sur le livre (Treatise on documentation: the book about the book) voiced his dream of creating an international system of mechanical brains. As he explains it, the assemblage of research tools “would become very approximately an annex to the brain, a substratum even of memory, an external mechanism and instrument of the mind but so close to it, so apt to its use that it would truly be a sort of appended organ, an exodermic appendage.”38 Here the mechanical brain extends the individual mind into the knowledge collective. Rather than organizing and operating the collective intelligence itself, research tools act as an interface for linking the individual mind to the world of knowledge. In one sense, the mechanical brain metaphor imagines collective intelligence as an amplification of individual intelligence. The technical apparatus empowers individual researchers to command knowledge production as irreducible units in the project to rehabilitate a subject of liberal cognition.
In another sense, however, the mechanical brain metaphor situates researchers in a constellation of other intelligent subjects, raising the possibility of individuals dissolved into a social system of artificial intelligence.
Tension between individual and systematic intelligence grew more pronounced as the figure of the mechanical brain captured popular imagination. Famed science-fiction author H.G. Wells echoed information professionals in a series of lectures and essays published in 1938 under the title World Brain. In his more utopian vision, the “work of documentation and bibliography, is in fact nothing less than . . . a sort of cerebrum for humanity” that could beef up “feeble and convulsive” citizens of a new information age.39 The brain offered a vivid descriptive metaphor for collective intelligence because it suggested an intensification of the cognitive capacities understood as characteristic of liberal subjectivity. Yet the collective it envisioned would require the increased rationalization, impersonalization, and abstraction of knowledge, tipping information techniques toward systematization rather than individuation. The qualifier artificial recognized that the mechanical brains which would enable collective intelligence were no brains at all, but tools to manage complex, socially distributed cognitive processes.
Recognition of the brain as a metaphor for artificial intelligence registered less clearly in popular reports on the machines known as mechanical brains. Magazines and newspapers gave mechanical brains the appearance of taking on lives of their own as entities separate from human intelligence. Artificial intelligence as the property of computers pointed toward imitation: fake intelligence that failed spectacularly to approach its authentic referent of human intelligence. Despite their limited capacities, the novelty of mechanical brains met with praise that elevated them to the world of conscious being. In the United States, throughout the 1930s and 40s, popular magazines such as Modern Mechanics, Popular Mechanics, and Popular Science expressed energetic excitement over mechanical brains. In these venues, excitement attached to engineering and its applications rather than the information architecture that inspired earlier international cooperation. For example, Popular Science published a brief notice in its December 1930 issue describing how Westinghouse Electric and Manufacturing Company built a “mechanical Einstein” to solve complex problems.40 The June 1932 issue of Modern Mechanics similarly described Vannevar Bush’s differential analyzer as a mechanical brain for “making computations which are mechanical and repetitive.” The article went on to promise, “this rapid mechanical device will save the user several minutes.”41 The novelty of technological innovation shines so brightly in these pages that it casts a shadow over the obvious point: these machines bear no resemblance to brains.
When reported in the language of cognition, innovations in mechanical computing conjoin fantasies of artificial intelligence with hopes that automation will foster military and economic dominance. According to the August 1935 issue of Science and Mechanics, researchers at the University of Pennsylvania built on Bush’s work to create a much larger differential analyzer: “The U.S. Army wants another like it” because the three-ton brain “can run rings around Einstein in solving mathematical kinks of the way that the universe operates.”42 These articles promote mechanical brains less as tools for running calculations derived from Einstein’s work and more as Einstein’s successors, possessing intelligence in their own right. Yet it turns out the U.S. Army did not want to learn how the universe operates. The military took interest because, by the end of the 1940s, these so-called brains had aided in the automation of airplanes, telephone systems, and the calculation of atomic weapons impact.43 Situating the differential analyzer as competition for Einstein lets a fantasy of omniscient technology obscure more plausible concerns about technological unemployment during the Great Depression. As cultural historian Dustin Abnet explains, “discussions of the first ‘mechanical brains,’ analog computers, raised the possibility that perhaps no job would be safe from mechanization.”44 Such articles mark just how far mechanical brains had wandered, seemingly on their own, from the goal of international cooperation toward the business of international competition. And this apparent capacity for self-control revealed the will to imagine artificial intelligence as a possible foundation for subjectivity.
Popular reports on humanoid robots went the furthest toward imagining the subjectivity of mechanical brains. The publicity surrounding so-called mechanical men— personified robots housing mechanical brains—indexes how electromechanical computation inspired fascination with artificial intelligence as an alternative to human intelligence. And no other robot was more famous than Elektro, the Westinghouse sensation introduced to America at the 1939 World’s Fair in New York City. Westinghouse produced a promotional film titled The Middleton Family at the New York World’s Fair that showed Elektro walking, talking, and smoking a cigarette. One character announces, “If he wasn’t so big, I’d take him for an engineer,” giving us an early example of engineer stereotypes that later dovetailed with stereotypes of autism. As if to drive home the point, the same character insists, “all he lacks is a heart.”45 After World War II, Westinghouse sent Elektro on a national tour. The robot acted as a publicity engine, as Popular Mechanics made clear as early as June 1931 in a report on a prototype of Elektro. The article explains how its components find commercial application: “Some of these devices protect banks against robbery, turn on electric light, and sort yeast cakes and other products.”46 Popular Mechanics also reassures readers that they have nothing to fear from the physically imposing robot by describing it as receptive to commands. Supposedly intelligent but fully obedient, the robot responds to “the spoken request of his human master.”47 The need to reassure readers of the robot’s harmlessness evidences a fear generated by the opposition of artificial and human intelligence, as if one did not depend on the other.48
Humanoid robots sparked popular fascination with the potential for artificial intelligence and simultaneously laid the groundwork for postwar fears that computers could come to dominate humans. This anxiety was visualized by the cover illustration of the March 28, 1955 issue of Time. The cover features a drawing of IBM’s then-President Thomas J. Watson Jr. looking slightly askance, sober, with a computer standing behind him. The computer is composed of machine parts arranged to look like a human face, torso, and arms. One hand is raised to its mouth with its index finger extended, as if to say: “Shh, don’t tell.” The other hand pushes a button on a panel located just above Watson’s head. A series of numbers and mathematical symbols fill the background like wallpaper. The illustration evokes an ominous mood, hinting at a story of robots making decisions behind the backs and over the heads of their creators, based on math problems too complex for human comprehension. The article inside the magazine—a glowing profile of Watson—says nothing of robots or the possibility of sentient computers. It adopts a far more modest tone, explaining with reference to H.G. Wells that when “the newest Wellsian brain in the earthly world was enthroned” on the fifth floor of a St. Louis office building, it “looked like nothing more than a collection of filing cabinets.”49 That description could apply just as easily to Otlet’s card catalogs, yet the cover unmistakably evokes what Hayles calls “cybernetic anxiety”—the counterpart to technophilic fantasy—in postwar America.50
In the intervening years, the idea of mechanical brains had undergone dramatic changes. By the mid-twentieth century, mechanical brains appeared not only to rival human intelligence but also to menace human existence. The culture industry offered narratives of artificial intelligence surpassing rote calculation to achieve the same characteristics that had defined liberal cognition a century earlier—objectivity, impersonality, abstraction. Indeed, their increased technical capacities and lack of character appeared to give mechanical brains an advantage in achieving the deliberate impartiality that had been the ideal of liberal subjectivity. Yet, as Elektro’s missing heart demonstrated, they lacked the empathy and sociability that would have made them authentically intelligent. Mechanical brains exhibited precisely those same traits that clinical researchers in the postwar period identified as the defining characteristics of autism.
Autism in a Mechanical Age
Renderings of autistic intelligence as mechanical date back to the 1940s case studies that introduced autism to the medical world. Those foundational works, although controversial today, established a through line of prominent research that continues to pathologize autistic intelligence as nonhuman. Autism emerged as a medical diagnosis against the backdrop of a transatlantic cultural fascination with mechanical brains when Leo Kanner and Hans Asperger both used the term "autistic" to describe children who demonstrated unusual behavior and intelligence in articles published roughly at the same time. Kanner opens his first case study by telling readers that the patient possesses "an unusual memory" that, at a very young age, allows him to memorize musical tunes, the names of places, short poems, information about American presidents and encyclopedia illustrations, the twenty-five questions and answers of the Presbyterian Catechism, the alphabet backwards and forwards, and numbers up to 100.51 In his conclusion, Kanner notes that most parents regarded such abilities "with much pride" and generalizes "excellent rote memory" as a condition of autism.52 Paired with exceptional recall, however, was "the children's inability to relate themselves in the ordinary way to people."53 From the outset we hear the discourse of mechanical brains echoed in descriptions of autistic children who appear to lack sociability while excelling at rote forms of cognition—hence Asperger's designation of "intelligent automata." The shared vocabulary shows how influential the phenomenon of mechanical brains was in setting the terms for new types of cognition. By distinguishing different forms of intelligence, autism research carved out a key area for investigating human being in a mechanical age.
More than Kanner or even Asperger, Bruno Bettelheim established the link between atypical cognition and mechanization. Bettelheim published a case study in the popular science magazine Scientific American in 1959 titled “Joey: A ‘Mechanical Boy.’” It begins with this claim: “Joey, when we began our work with him, was a mechanical boy.”54 Formulated as a statement of fact, the sentence disavows its own figurative significance and the technological imaginaries that make Bettelheim’s interpretation of Joey possible. For Bettelheim, Joey not only fantasized about being a machine but really “had been robbed of his humanity.”55 The machine appears opposed to the human throughout the short article in much the same way the Time magazine cover about IBM suggests that increasingly autonomous technology will threaten humanity. As Sungook Hong puts it, “Bettelheim worked with an understanding that the technicity that dominated young Joey suffocated his humanity, and thus caused his autism.”56 In that way, Bettelheim works against the grain of technophilia that animated so much of the excitement for mechanical brains. Offered up as a cautionary tale, Joey depicts the logical extreme of a machine-besotted society. Joey believes machines are superior to people, so that, as Bettelheim writes, “if he lost or forgot something, it merely proved that his brain ought to be thrown away and replaced by machinery.”57 That logic makes perfect sense if one takes seriously the idea that mechanical brains can “run rings around Einstein.” But because Bettelheim’s project is to rehabilitate human subjectivity, Joey’s dreams of becoming a cyborg appear pathological. The case study presents Joey as an affectless child who cannot love or be loved any more than a household appliance.
Despite Joey's intelligence, his inability to relate to others in conventional ways led Bettelheim to describe his subjectivity as completely flat, monotonous, and withdrawn from the world. In that, Bettelheim was not unique. Both Kanner and Asperger used the term "autistic" for its Greek root autos, meaning self, to suggest a state of being withdrawn within oneself. Figurative language suggesting barriers between autistic individuals and the world runs throughout both the professional and popular literature. In 1967 Bettelheim published his full-length autism case studies in a book titled The Empty Fortress. That same year, Clara Claiborne Park published the first memoir about raising a child with autism. The Siege describes her daughter as "oddly content within the invisible walls that surrounded her."58 The Empty Fortress and The Siege give us two sides of a metaphor about isolation; the clinician pictures a consciousness to be infiltrated while the parent imagines her child stuck inside her own mind, cut off from family. Bettelheim elaborates this metaphor of isolation as a relationship between machine and human. He reads Joey's fantasy of rebirth from an "artificial, mechanical womb" as a manifestation of withdrawal, but takes his fascination with cars as a healthy exploration "of living with a good family in a safe, protecting car."59 The difference, Bettelheim explains, comes down to mobility: the car moves while the mechanical womb does not. However, it seems at least as important to Bettelheim that Joey demonstrates an appropriate subject-object relation with technology. The mechanical womb gestates Joey, while Joey pilots some of the vehicles in his imagination. Much as Elektro's obedience reassured readers of their own safety, Joey's imagined ability to control technology reassures Bettelheim of his progress toward human subjectivity.
This line of thought takes Joey’s ability to control objects as the condition of possibility for recognizing his own humanity. Personhood for the autistic child requires an assertion—or at least an acceptance—of agency over self and surroundings. Bettelheim expresses this requirement when he explains, “in a car one was not only driven but also could drive.”60 At eleven years old, Joey could not drive himself, but the fantasy makes the person. With that, Bettelheim writes, “Joey at last broke through his prison” and became “a human child.”61 In addition to marking an early example of a dominant trope in autism literature, Bettelheim’s prison metaphor complicates the figure of the machine. Mechanicalness not only defines Joey’s state of being but also becomes a container that keeps him isolated. We can begin to see in this case how subjectivity does more than suggest a way of experiencing the world; it also encloses the parameters of personhood. Bettelheim draws those parameters first around an ability to intervene in one’s own situation, but then proposes a more fundamental distinction between human and machine. “Feelings, Joey had learned, are what make for humanity,” he writes. “Their absence, for a mechanical existence. With this knowledge Joey entered the human condition.”62 Emotions do the heavy lifting of separating humanity from mechanical contrivances in this formulation. At mid-century, when machines appeared brainier by the year, intelligence alone no longer sufficed for theorizing an individual who could participate in the reproduction of society. Social relations among individuals required an element of emotional intelligence to avoid being described as “artificial, mechanical.”63
The stakes of an emotional turn in defining human subjectivity run high for people diagnosed with autism. Although theories of intelligence have evolved since the mid-twentieth century, and Bettelheim has been discredited widely, his basic idea of autism as emotional deficiency survives everywhere, from psychological theories to television shows.64 The recent Netflix series Atypical (2017–21), for instance, frames an autistic coming-of-age story with the main character's revelation that "humans can't be perfect because we're not machines," thus juxtaposing human fallibility with the ideal of flawless mechanics.65 That line echoes the narrative arc of Joey's transformation in "Mechanical Boy," restaging the drama of self-liberation from autism as a rejection of the perfect machine in favor of the messy human. While Atypical offers a subtle rebuke to stereotypes of autistic intelligence as mechanical, it leaves in place the stereotype's proxy by marking autism as emotional lack. More damagingly, the same idea factors into the appraisals of leading experts on autism. The director of Cambridge University's Autism Research Centre, Simon Baron-Cohen, cemented his reputation with a book titled Mindblindness, which argues that autism involves an inability to imagine perspectives other than one's own. Unconventional social behaviors and the tendency to withdraw, in Baron-Cohen's account, result from a cognitive failure to empathize. In a more recent book, Baron-Cohen argues that a lack of empathy can explain "the origins of human cruelty."66 As Marion Quirici recently put it, "there could be no greater stigma."67 Given the medical culture of our moment, the move to incorporate emotional life within understandings of what makes a person fit to participate in society runs the risk of excluding people diagnosed with autism, or even relegating them to the status of nonhuman.
The affirmative humanism staged by Atypical will do little to correct the pathologizing perspective put forward by Baron-Cohen. Having adopted a mechanistic theory of mind, he understands all cognition as the function of cerebral wiring. This mechanistic cognitive model appears clearly in the way Baron-Cohen constructs his theory of mindblindness. Defined as a failure to empathize with others, mindblindness figures the negative image of what Baron-Cohen describes as the positive cognitive ability to read minds. Mindreading here denotes a socially learned ability to imagine other people's moods, motivations, and intentions. This ability, he argues, depends on "the maturing of four mechanisms that the infant has pre-wired into its brain."68 Baron-Cohen draws on the language of computer hardware to theorize mental faculties, describing a Shared Attention Mechanism, a Theory of Mind Module, an Eye-Direction Detector, and an Intentionality Detector that work together to let people guess at what someone might be looking at or how they are feeling. Baron-Cohen has led the way in hypothesizing neurotypical cognition as a set of mechanisms that, for autistic children, either do not exist or do not fully mature. The distinction between neurotypical and neurodivergent cognition in his account still treats autism as a kind of emotional failure, but it does not draw a clear line between human and machine intelligence. The meaningful difference for understanding mindblindness distinguishes functionality from non-functionality. For that reason, Baron-Cohen begins to make clear how the history of mechanical brains comes to bear on understandings of autistic cognition.
Like many other cognitive scientists, Baron-Cohen imagines the mind—in some cases also the brain—as modular, "pre-wired" with "mechanisms" that operate specific functions. Much as computers have hard drives for memory, graphics processing units for rendering visuals, and analog-to-digital converters for recognizing sound, he understands minds to have mechanisms, modules, and detectors for recognizing emotional communication. One could forgive Baron-Cohen for developing an analogy to help those outside his profession understand technical concepts, but it seems that cognitive psychology does not recognize its concept of modularity as figurative. In more specialized language, for instance, John Tooby and Leda Cosmides explain Baron-Cohen's work by foregrounding the mechanized mind: "We inhabit mental worlds populated by the computational outputs of battalions of evolved, specialized neural automata."69 In other words, many tiny machines ("specialized neural automata") make up our cognitive processes ("computational outputs"). Their formulation leads to an assessment of autism as a condition of malfunction. As Tooby and Cosmides put it, "even well-designed machinery can break down."70 It takes only a moment's reflection to realize that in no literal sense do minds have modular mechanisms and so in no literal sense can they break. When taken as a literal description of mental faculties, however, the conceptual heuristic leads to a view of atypical cognition not simply as deficient but more particularly as defective.
The view of atypical cognition as defective or inauthentic appears in early works on artificial intelligence that theorize the mind as modular, and those writings gravitate to autism as a convenient way to reflect on neurotypical intelligence. Such use of autism appears in Marvin Minsky’s Society of Mind, which arguably did more than any other single publication to convince specialists and laypeople alike that minds work like machines. The book synthesizes ideas Minsky developed throughout the 1970s and 80s as co-founder of MIT’s AI Lab. It suggests that natural intelligence results from many simple component parts working together to form a complex aggregate mind. Much like Baron-Cohen’s idea of a shared attention mechanism working in tandem with an intentionality detector, Minsky suggests that even a simple task like picking up a cup of tea requires the cooperation of several components of mind, what he calls agents. Grasping agents, balancing agents, thirst agents, and moving agents each play their part. “If each does its own little job,” he writes, “the really big job will get done by all of them together: drinking tea.”71 In this way we can imagine cognition involving various processes working simultaneously, not unlike the way a vocal mechanism worked with a hand mechanism to make Elektro seem like he was counting on his fingers. The benefit of such a theory lies in its ability to help rethink cognition in terms less reliant on the coherent, fully rational subject assumed by liberalism.
The problem with Minsky's formulation of a modular mind lies in how it exaggerates the Cartesian mind-body duality to imagine the mind operating the body as it carries out physical tasks. That view of the human subject—a view Minsky often expressed by calling humans "meat machines"—contributes to his tendency to characterize the mind as a central processor, where all sensory experience becomes information made meaningful by the brain.72 Although the book theorizes the mind, not the brain, the two begin to blur in formulations that try to connect the psychological to the physiological. That slippage appears plainly when he addresses autism. In a chapter titled "Autistic Children," he uses mind and brain interchangeably to explain how infants, unlike adults, can accomplish social goals more easily than physical goals. He writes, "the presence of helpful people makes the infant's mind more powerful—by making the agencies inside those of other people's brains available for exploitation by the agencies in the infant's brain."73 The first part of that statement makes a claim about an infant's mind in social context, while the second part draws a conclusion about brains more generally, implying mental agencies reside in brains. Although theoretically distinct, the effective separation between mind and brain becomes unclear if all the agencies that make a mind intelligent manifest physically as specific parts of the brain.
The conflation of mind and brain produces a quasi-materialist perspective that takes the mind as an effect of physical, biochemical processes in the brain, but does not take the brain as a body part so much as an operating system. Minsky’s obfuscating gesture runs two ways at once: he abstracts the brain, positing it as indistinct from psychological effects of the mind, while at the same time he reifies the mind, positing it as physically locatable in the brain. This confusion runs deeper as Minsky continues to theorize agents of mind. In the chapter on autistic children, he writes, “I’ll suggest that our infant brains are genetically equipped with machinery for making it easy to learn social signals.”74 Minsky’s idea that brains have machinery for recognizing and understanding social signals is consistent with his description of agents of mind as “tiny machines.”75 If we have agents for simple physical tasks such as grasping a cup of tea, it follows that we must have simple agents informing our ability to communicate with others, in part by making sense of something like a smile, a look of confusion, or a finger pointing to something. But the move to map a society of tiny mind machines onto the brain itself receives no explanation, theoretical or empirical. Starting with a conceptual and psychological model of cognition, Minsky simply extrapolates to posit the physiology of the brain as mechanical.
The amalgamation of mind and brain clarifies how Minsky influences autism researchers like Baron-Cohen. He lays a foundation for understanding neurotypical cognition as the work of a high-functioning computer. If infant brains have machinery to recognize social signals, by extension, the inability to recognize such social signals implies the malfunction of those tiny machines genetically determined to think about other people. Here the theory's relevance to understanding autism becomes suggestive. Rather than understanding children diagnosed with autism as having an underlying condition of mental disability, we could understand them to have fully functional minds that operate with specific atypical cognitive processes. In other words, the computational metaphor could figure atypical cognition as a matter of having a different set of modules. Minsky, however, ignores the possibility of benign cognitive difference to instead bend the metaphor toward defectiveness. He makes the point explicit when he writes, "that hapless mind is doomed to fail."76 From a disability studies standpoint, the implications look dire: Minsky introduces the model of defectiveness that Baron-Cohen has since popularized. If medical discourses understand disability as an impairment for individuals to manage or overcome, Minsky's understanding lands closer to eugenicist models of the early twentieth century that described whole classes of people as defectives.77 What has changed since the early twentieth century is that, in order to think of autistic brains as broken, he first has to think of all brains as mechanical, taking a dramatic step away from the form of subjectivity that structured liberal individualism.
Tellingly, that step away from liberal subjectivity has the severest consequences for nonnormative personhood. The understanding of brains as computers contributes to an epistemology that renders autism as a form of artificial intelligence. Because cognitive science defines “intelligent behavior as computation,” we lose earlier distinctions between the biological and the mechanical that made it possible to think artificial intelligence as a form of collective knowledge organized through mechanized apparatuses.78 The operative distinction becomes authentic versus artificial intelligence, with authenticity marking normative abilities. Contrasted with authentic rather than biological intelligence, artificial intelligence not only structures a hierarchy of intelligences but also repositions what authentic mechanical intelligence would entail. Autism plays an important evaluative role for AI researchers by marking a threshold for what counts as authentic.
Influential cognitive scientist Brian Scassellati makes use of autism in that way when he argues that humanoid robots will need a theory of mind—that is, an interpretation of others’ psychological states. Citing Baron-Cohen, Scassellati rehearses the common understanding of autism as defined by a lack of social skills that require empathy, then proposes to develop a robot with those skills. “A robot that can recognize the goals and desires of others,” he writes, “will allow for systems that can more accurately react to the emotional, attentional, and cognitive states of the observer, can learn to anticipate the reactions of the observer, and can modify its own behavior accordingly.”79 To make Scassellati’s assumption explicit: observers are neurotypical individuals. The basis of his study suggests that machines will need to surpass autistic cognition to achieve something on par with human intelligence, implying that autism exists at the border between human and nonhuman.
The Posthuman and Autism
The strongest critique of computational theories of mind comes from Hayles's seminal account of cybernetics in How We Became Posthuman. She turns to cybernetics to find an alternative to liberal subjectivity, even though she shows how the mathematicians, philosophers, and engineers who reimagined the relationship between people and their technologies also reproduced some of the assumptions of liberal humanism. In particular, she takes aim at abstractions of cognition that see intelligence as disembodied. Those formulations of intelligence, she shows, reify the concept of information as an object of study, yet treat it "as a kind of immaterial fluid that circulates effortlessly around the globe."80 Intelligence in cybernetic systems becomes nothing more than a capacity for processing information. That idea of intelligence derives from a history of theorizing information systems as intelligent, with the added assumption underlying cybernetics that humans themselves consist of information. "When information loses its body," Hayles argues, "equating humans and computers is especially easy, for the materiality in which the thinking mind is instantiated appears incidental to its essential nature."81 Such disembodied theories of information lead simultaneously to utopian fantasies of uploading human consciousness and to dystopian fantasies of the so-called singularity when technology eclipses human control.82 The historical narrative Hayles traces to explain such contrasting visions of the future begins in the aftermath of WWII and looks forward to the unraveling of the individualized liberal subject into a series of codes.
The emergence of autism as a diagnosis during the same period suggests how some people bear the weight of computational fantasies more than others. Representations of autism as artificial intelligence also clarify how the “we” of Hayles’s title points to a shortcoming in the best critique of computationalist thinking available. Taking the narrowly defined liberal subject as a starting point produces a narrow vision of its posthuman variations and a misrecognition of those who deviate from humanist expectations.83 This problem occurs clearly when Hayles addresses autism to register a flaw in cybernetic models of intersubjective communication. Arguing that second-wave cybernetics conceived of subjectivity as radically closed off from the world, she points to autism to speculate on what the embodiment of their model would look like. Their model of the human subject, she writes, “formulates a description that ironically describes autistic individuals more accurately than it does normally responsive people. For the autistic person, the environment is indeed merely a trigger for processes that close on themselves and leave the world outside.”84 The sharp distinction between neurotypical cognition and autistic cognition stems from misunderstanding autism as a condition of self-isolation, foreclosing an opportunity to think posthuman subjectivity within a complex terrain of cognitive styles. Surprisingly, her homogenized understanding of autism cites Baron-Cohen’s concept of mindblindness. Baron-Cohen’s concept derives from the modular theory of mind developed by the very same body of thinking that Hayles draws into question. No wonder she sees similarities between his cognitive model and the one developed by cybernetics; he inherited the idea of minds as computers from those forefathers of cognitive science. Their influence made it possible for Baron-Cohen to develop a theory of mind that posits autistic intelligence as defective, inhuman, artificial.
Displacing fantasies of mechanical brains will require fuller engagement with forms of intelligence that fit the description of neither the authentic nor the artificial. How might intelligences excluded from the domain of liberal selfhood generate new modes of being human? As Erin Manning and Brian Massumi explain, rethinking a spectrum of intelligence that runs from low-to-high-to-super "is a question of the diversity of modes of existence, and of the modes of thought they enact."85 Autistic artists and advocates have begun to address that question, in part, by combining critiques of normative subjectivity with creative explorations of what Manning and Massumi call an "environmental mode of awareness."86 One well-known example is the YouTube video "In My Language" that Amanda Baggs posted in 2007, attracting enough attention that a New York Times blog post declared her "an Internet sensation."87 The video articulates and rejects a number of assumptions about autistic intelligence that derive from computationalist thinking, namely cognition defined by rationality, autistic subjectivity characterized by self-enclosure, and savantism as the dominant model of autistic intelligence. In that sense, Baggs's video provides a counterpoint to Grandin's Thinking in Pictures. Grandin values "measurable results more than emotion";88 Baggs values creativity, openness to sensory experience, and benign cognitive difference. The video "In My Language" integrates intellectual and emotional experience by depicting what Melanie Yergeau describes as "nonhuman communion, an exchange absent of symbolism and replete with feeling."89 Baggs films herself "reacting physically to all parts of my surroundings," not to master them but to be "in a constant conversation," experimenting with cognitive selfhood that engages but does not dominate her environment.90
The video begins with an image of Baggs, her back to the camera, rocking forward and backward, gesticulating with her hands. The image depicts self-stimulatory behavior, commonly called stimming, to frame the aesthetic quality of her movement as she faces a sliding glass door that turns her figure into a near-silhouette. Over the image plays the sound of melodic vocal droning, which continues as the shot cuts to different sequences, at first featuring her hands interacting with objects in a way that lends percussive elements to the soundtrack. Approaching the three-minute mark, the shot cuts to Baggs's face as she nuzzles a book and flips through its pages. The video may appear illegible on first view due to the absence of explanatory language or a contextual framework, but viewers realize that its resistance to legibility is self-conscious when a title slide appears with the words "A Translation." The video shifts modes at that point to feature a monologue with explanatory visual aids. The narration, a computer-generated voiceover of Baggs's typing, tells us that the first part of the video "was in my native language."91 It becomes clear that what appears random in the opening shots is in fact thoughtful, fully conscious. "Far from being purposeless," she explains, "the way that I move is an ongoing response to what is around me."92 Baggs asks viewers to think "about what gets considered thought, intelligence, personhood, language, and communication, and what does not."93 Because non-phonetic sound and gesticulation occur outside of language, dominant accounts of cognition do not recognize them as evidence of consciousness. Baggs's translation, however, makes clear that although not logocentric, her native language mediates between her exterior environment and her interior experience.
The video marks a useful departure from computational thinking because Baggs explicitly questions normative standards of intelligence. Rather than figuring an alternative form of genius, she disputes the idea that autistic value relies on intelligence useful to state or market ends. Her mode of cognition lends itself to artistic expression but less clearly to industrial efficiency or military advantage. At the height of interest in the so-called autism epidemic, Baggs expanded understandings of autistic intelligence beyond the well-known images of engineer, computer geek, and database. She risks appearing illiterate by interacting with writing technologies in nonnormative ways, such as when she noses a book or tastes a pen, although the narration confirms for her audience that Baggs does have language ability. Those capacities to communicate using conventional rhetorical strategies—her ability to construct a compelling argument, for example—demonstrate normative intelligence while refusing to privilege a logic of utility determined by clear subject-object hierarchies. Instead, she privileges cognition as embodied experience and open-ended responsiveness, proposing a form of intelligence uncoupled from computationalist demands for rationalization and calculation.
Yet, her intelligence can align with technological theories of the posthuman because computers have an important place in the video and in Baggs's mode of communication. She does not speak, so she relies on computer vocalization in the video and on computer-mediated communication more generally. Baggs uses computers to produce her work and the internet to transmit it in the form of blogs, online videos, and articles. Without thematizing technology per se, her work foregrounds the intimacy of human-computer relationships in ways that easily go unnoticed when we think of computers only as tools for our control. Her computer keyboard first appears as any other object in Baggs's environment, alongside a doorknob, a necklace, a slinky. She runs her hands over it as rhythmic accompaniment to her singing. More than just an instrument for writing, the keyboard becomes an instrument for non-verbal communication and noisemaking. The fullness of feeling on display in Baggs's work necessitates reconsidering the view of autistic intelligence as purely intellectual, mechanical, divorced from emotion, disembodied, or isolated from the world. Recognizing computers as part of a broader material ecology—rather than a tool for managing, modeling, or reproducing that ecology—gives us a fuller sense of what the posthuman subject might look like. When Baggs uses her computer to vocalize writing, she makes visible how communicative work is also tactile, "replete with feeling," and part of a network including but not exclusive to computers. In other words, autistic subjectivity begins to look less like a cybernetic model and more like one model for engaging the nonhuman environments everyone learns to navigate as best they can.
From the perspective of embodied relation, Baggs's performance has implications beyond understanding autistic intelligence. She rejects the idea of the human as coterminous with cognitive subjectivity, and with it the image of autonomous rationality that once defined the individual subject of civil society. The rise of mass society required a form of individual intelligence that could abstract itself as part of a larger whole through the use of technical mechanisms such as research tools. Over the course of the twentieth century, abstracted human intelligence came to coincide with the new ideal of artificial intelligence as a form of computation, which, as the history of mechanical brains makes clear, has everything to do with imagining forms of cognition susceptible to and useful for effecting control. Joseph Valente has argued convincingly that, in the wake of sociopolitical changes undermining autonomous selfhood, contemporary culture treats autism "as the repository of a liberal myth of individualism" and "the after image" of a humanist ego ideal.94 Simultaneously, computational constructions of autism treat the condition as a container for social anxieties concerning the subordination of human autonomy to intelligent machines. The dissonant, often dichotomous, process of autistic subject formation coalesces into a figure consistent with postindustrial regimes of power: a single portion of our population alleviates fears of technological domination and concentrates dreams of organizing human biology toward governmental, militaristic, and market goals. If only we can understand how autism relates to enhanced cognitive powers, the thinking goes, we can learn to optimize all brains, even mechanical ones.
The folly of this proposition lies not only in how it imagines brains as computers but also in how it imagines computers merely as instruments of control. As Baggs helps to show, the more computers become knitted into the fabric of our social lives, the more they shape our tactile and somatic experiences, exceeding the calculus of rational utility. Because autism has framed a contested relationship between emergent subjectivities and emergent technologies, it has the potential to help us imagine selves who aspire to be neither the subjects nor the objects of technological control, neither artificially nor authentically intelligent, but rather embedded in a variety of cognitive relationships. Some of those relationships will feature nonhuman environments and computational technologies, as posthumanist excavations of cybernetics have suggested; just as important are relationships between a variety of mentalities, especially among unrecognized forms of intelligence that might point toward new modes of knowing. Medical and scientific researchers have begun to wonder if, rather than an underlying genetic or neural phenomenon that produces a spectrum of symptoms, they might better understand autism as a complex of different conditions. Some have gone so far as to propose eliminating the term as a diagnostic category.95 Meanwhile, the cultural life of autism has grown more prominent, making it relevant for imagining cognition with—not as—computers. In that way, autism may prove more enduring a phenomenon for social theory than for the cognitive sciences. As those sciences grow more nuanced, we need a richer and clearer sense of what constitutes cognitive subjectivity after liberal humanism. [End Page 72]
David Squires is an assistant professor at the University of Louisiana at Lafayette. He teaches American literature and writes about the cultural legacy shared by information science and modern media. He recently coedited Porn Archives (Duke University Press, 2014) and has published work on poetry, pulp fiction, library science, and racial violence.
My gratitude goes to Joseph Valente for his encouragement and guidance at the outset of this project. Many thanks to Ian Beamish, Maria Seger, Liz Skilton and the folks at Diacritics, including two anonymous readers, for indispensable feedback during the revision process.
5. For more on the persistence of the mind-body problem in the age of neuroscience, see Leefmann and Hildt, The Human Sciences after the Decade of the Brain.
6. CBS News, “Temple Grandin’s Unique Brain,” 60 Minutes, October 23, 2011, https://www.cbsnews.com/video/temple-grandins-unique-brain/; Kat McGowan, “Exploring Temple Grandin’s Brain,” Discover, March 12, 2013, https://www.discovermagazine.com/mind/exploring-temple-grandins-brain; Rachel Nuwer, “What Makes Temple Grandin’s Brain Special?” Smithsonian Magazine, October 17, 2012, https://www.smithsonianmag.com/smart-news/what-makes-temple-grandins-brain-special-76672628/; Gary Stix, “A Little Hard Science from the Big Easy: Temple Grandin’s Brain and Transgenic Sniffer Mice,” Talking Back (blog), Scientific American, October 19, 2012, https://blogs.scientificamerican.com/talking-back/a-little-hard-science-from-the-big-easy-temple-grandins-brain-and-transgenic-sniffer-mice/; Virginia Hughes, “Researchers Reveal First Brain Study of Temple Grandin,” Spectrum, October 14, 2012, https://www.spectrumnews.org/news/researchers-reveal-first-brain-study-of-temple-grandin/.
11. Dupuy’s comment about Copernicus recalls an oft-quoted passage from Freud that positions psychoanalysis as a third blow to the megalomania of men. See Freud, “Lecture XVIII: Fixation to Traumas—The Unconscious.”
12. See the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) for the official diagnostic criteria, which include “deficits in social-emotional reciprocity,” “deficits in nonverbal communicative behaviors,” and “deficits in developing, maintaining, and understand [sic] relationships.”
14. For an account of such representations, see Murray, “Hollywood and the Fascination of Autism”; Draaisma, “Stereotypes of Autism”; Hacking, “Humans, Aliens, and Autism.”
25. Robert A. Lavine, “Did Einstein Show Asperger’s Traits?” Psychology Today, March 19, 2016, https://www.psychologytoday.com/us/blog/neuro-behavioral-betterment/201603/did-einstein-show-aspergers-traits.
30. Recent research calls this tendency to reduce subjectivity to brain function neuroessentialism. See Racine et al., “Contemporary Neuroscience in the Media.” For a discussion of neuroessentialism in relation to autism, see Ortega, “Cerebralizing Autism within the Neurodiversity Movement.”
36. Wright dubs Otlet “the Internet’s forgotten forefather” (Wright, Glut, 185). For more on the connection between Otlet’s ideas and early versions of the internet, see Wright, Cataloging the World. In theoretical terms, Otlet suggests something like an inchoate version of the extended mind thesis. See Clark and Chalmers, “The Extended Mind.”
40. “Machine Solves Hard Problems,” Popular Science Monthly, December 1930.
41. “‘Mechanical Brain’ Works Out Mathematical Engineering Problems,” Modern Mechanics, June 1932.
42. “Electric ‘Brain’ Weighs Three Tons,” Science and Mechanics, August 1935.
43. For examples, see “10,000,000 ‘Hellos’ A Day,” Popular Mechanics, April 1942; Aubrey O. Cookman Jr., “Push-Button Pilot,” Popular Mechanics, December 1947; Stephen L. Freeland, “Inside the Biggest Man-made Brain,” Popular Science, May 1947.
45. Snody, The Middleton Family. For the Elektro demonstration, see 33:48–37:33. For more on the relationship between engineering and autism stereotypes, see Jack, Autism and Gender, especially Chapter 3, “Presenting Gender: Computer Geeks.”
46. “Robot Obeys Spoken Orders and Has Mechanical Mind,” Popular Mechanics, June 1931.
47. “Robot Obeys Spoken Orders and Has Mechanical Mind.”
49. “The Brain Builders,” Time, March 28, 1955.
50. Hayles, How We Became Posthuman. See Chapter 4, “Liberal Subjectivity Imperiled: Norbert Wiener and Cybernetic Anxiety.” Bettelheim offers a social diagnosis of machine-age anxiety in his case study “Joey,” which I discuss below; see The Empty Fortress, 233–38. See Turkle’s The Second Self for a discussion of how machine-age anxieties persist into the late-twentieth century, especially Chapter 6, “Hackers: Loving the Machine for Itself,” which ends with a brief discussion of Bettelheim.
54. Bettelheim, “Joey,” 117. This article makes an apt example of what Duffy and Dorner call “a rhetoric of scientific sadness in which autistic people are mourned even as they are ostensibly explained” (“The Pathos of ‘Mindblindness,’” 201).
64. For a critique of Bettelheim’s career, see Pollak, The Creation of Doctor B. For a critique of his legacy and rhetoric, especially concerning so-called refrigerator mothers, see Jack, Autism and Gender, Chapter 1, “Interpreting Gender: Refrigerator Mothers.” For a personal account of Bettelheim’s treatment of children, see Redford, Crazy.
72. Steven Levy, “Marvin Minsky’s Marvelous Meat Machine,” Wired, January 26, 2016, https://www.wired.com/2016/01/marvin-minskys-marvelous-meat-machine/.
79. Scassellati, “Theory of Mind for a Humanoid Robot,” 16. For a fuller critical assessment of Scassellati’s project, see Richardson, Challenging Sociality.
87. Tara Parker-Pope, “The Language of Autism,” The New York Times, February 28, 2008, https://well.blogs.nytimes.com/2008/02/28/the-language-of-autism/.
93. Baggs, video description.