
Chapter 9: Logic, DNA, and Poetry

In January 1956, Herbert Simon, who would later win the Nobel Prize in economics, walked into his classroom at Carnegie Institute of Technology and announced, "Over Christmas Allen Newell and I invented a thinking machine." His invention was the "Logic Theorist," a computer program designed to work through and prove logical theorems. Simon's casual announcement—which, had it been true, would surely have rivaled in importance the Promethean discovery of fire—galvanized researchers in the discipline that would soon become known as artificial intelligence (AI). The following year Simon spoke of the discipline's promise this way: "It is not my aim to surprise or shock you. . . . But the simplest way I can summarize is to say that there are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until—in a visible future—the range of problems they can handle will be coextensive with the range to which the human mind has been applied" (Simon and Newell 1958).

There was good reason for the mention of surprise. Simon and his colleagues were, in dramatic fashion, surfing the shock waves produced by the realization that computers can be made to do much more than merely crunch numbers; they can also manipulate symbols—for example, words—according to rules of logic. The swiftness with which such programmed logical activity was equated, in the minds of researchers, to a humanlike capacity for speech and thought was stunning. And, during an extended period of apparently rapid progress, their faith in this equation seemed justified. In 1965 Simon predicted that "machines will be capable, within twenty years, of doing any work that a man can do" (Simon 1965, 96). MIT computer scientist Marvin Minsky assured a Life magazine reporter in 1970 that "in from three to eight years we'll have a machine with the general intelligence of an average human being . . . a machine that will be able to read Shakespeare and grease a car."

The story is well told by now how the cocksure dreams of AI researchers crashed during the subsequent years—crashed above all against the solid rock of common sense. Computers could outstrip any philosopher or mathematician in marching mechanically through a programmed set of logical maneuvers, but this was only because philosophers and mathematicians—and the smallest child—were too smart for their intelligence to be invested in such maneuvers. The same goes for a dog. "It is much easier," observed AI pioneer Terry Winograd, "to write a program to carry out abstruse formal operations than to capture the common sense of a dog" (Winograd and Flores 1986, 98). A dog knows, through whatever passes for its own sort of common sense, that it cannot leap over a house in order to reach its master. It presumably knows this as the directly given meaning of houses and leaps—a meaning it experiences all the way down into its muscles and bones.

As for you and me, we know, perhaps without ever having thought about it, that a person cannot be in two places at once. We know (to extract a few examples from the literature of cognitive science) that there is no football stadium on the train to Seattle, that giraffes do not wear hats and underwear, and that a book can aid us in propping up a slide projector when the image is too low, whereas a sirloin steak probably isn't appropriate. We could, of course, record any of these facts in a computer.
The impossibility arises when we consider how to record and make accessible the entire, unsurveyable, and ill-defined body of common sense. We know all these things, not because our "random access memory" contains separate, atomic propositions bearing witness to every commonsensical fact (their number would be infinite), and not because we have ever stopped to deduce the truth from a few, more general propositions (an adequate collection of such propositions isn't possible even in principle). Our knowledge does not present itself in discrete, logically well-behaved chunks, nor is it contained within a neat deductive system. It is no surprise, then, that the contextual coherence of things—how things hold together in fluid, immediately accessible, interpenetrating patterns of significance rather than in precisely framed logical relationships—remains to this day the defining problem for AI. It is the problem of meaning.
