
By way of an afterword, I want here to offer a brief history—and in a way, a political economy—of Unicode, the character set encoding standard that today mediates much, or perhaps all, of a contemporary scholar’s writing, depending on how extensively she or he uses a personal computer. It is a fascinating history, replete with correspondences to the tropological realm in which many of us in the literary culture feel at home. Whether we find it liberating or constraining (or both), most younger scholars today, I would venture, have long since accustomed themselves to the idea that the accumulating life’s work on our computer hard drives only exists, in any material sense, as patterns of magnetized spaces, “on” and “off” states set and unset by electrical pulses. It is not an image of the letter “A,” for example, that is written to the surface of the hard disk when I press the key marked with that letter, writing these words. What is “written,” rather, is one such pattern of magnetized spaces, representing a binary number assigned to the letter “A” in an internal code table. It is this binary encoding that allows us to “process” words with the word processing software we use to write: to rapidly search, sort, and modify text, as well as to transmit it. Much, or perhaps all, of our writing as contemporary scholars—depending, again, on how extensively or exclusively one uses a personal computer—is mediated by character set encoding. It is, in other words, digital—and if one composes first or exclusively on the computer, “born” that way. Our work is always already, if one wants to put it this way, “in code,” or encoded.

I do not mean this to be taken as a presentist or futurist demand to “get with the program”—a fatuous gesture all too common in new media studies, at least as a subfield of a perhaps generally anachronistic literary studies—but rather as an observation about the conditions of scholarly knowledge production today, as conditioned by a system that, not incidentally, has a cultural politics already hidden in it. For character encoding has a technical history as marked by cultural language politics as the history of machine translation. Here is why.

The local storage and retrieval of data that I perform by pressing the key marked with the letter “A” is, straightforwardly, an encoding and decoding operation. So, equally straightforwardly, is the remote transmission over a telecommunications link, which I perform by sending electronic mail, uploading or downloading my document files, and so on. For patterns of bits or “binary digits”—those series of magnetized spaces on the hard drive, or of pulses sent through a telecommunication channel—to be represented as alphabetic letters, both transmitting and receiving hardware must share a standard code table. The historical development of these code tables, I want to suggest, is effectively a technical allegory of postwar development—in which one might also, if inclined, find something of the language politics of U.S. American studies, comparative literature, and multicultural, multiethnic, and global or world literature studies arrayed in line. There is a postwar “American” moment, in which U.S. industries set the standard for the rest of the redeveloping and developing world.
That moment is followed by an initial, Euro-Atlantic period of internationalization, during the 1960s and ’70s, comprising individual nationalization projects and the creation of a European community, as well as the growth of Japan. What followed subsequently—in the politically, if not necessarily technically significant year 1991—might be taken as a moment in what we now call globalization, in which U.S. industries incorporated European and East Asian redevelopment into a “global” framework again legislated by U.S. needs.

The first code standard for data processing in the English language was created in 1963 by the nonprofit American Standards Association (now the American National Standards Institute) to encourage the voluntary compliance of manufacturers of computing equipment in building interoperative systems. ASCII, or American Standard Code for Information Interchange, which encoded ninety-five characters used in U.S. English typography, quickly became the international standard, since U.S. computer manufacturers dominated the industry. ASCII was “internationalized” in 1967, when the International Organization for Standardization (ISO) in Geneva published an expanded code standard adding to ASCII those Roman characters used in writing Western...
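By way of illustration, a minimal sketch, assuming a Python 3 interpreter, of the encoding and decoding operation described above: a character is stored and transmitted as a number drawn from a code table, and both ends of the exchange must share that table for the bytes to resolve back into the intended letter.

    letter = "A"
    code_point = ord(letter)                      # 65: the number assigned to "A" in ASCII (and in Unicode, which subsumes ASCII)
    print(code_point, format(code_point, "08b"))  # 65 01000001 -- the bit pattern actually written to disk

    # ASCII covers only characters used in U.S. English typography; anything
    # outside that repertoire cannot be represented in it without loss.
    print("é".encode("ascii", errors="replace"))  # b'?'        -- the accented letter is lost
    print("é".encode("utf-8"))                    # b'\xc3\xa9' -- a Unicode encoding preserves it

    # Decoding reverses the operation, but only if both ends share the same table:
    print(b"\xc3\xa9".decode("utf-8"))            # 'é'
    print(b"\xc3\xa9".decode("latin-1"))          # 'Ã©' -- the familiar garbling when the table is not shared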
