Reviews

Electric Words: Dictionaries, Computers, and Meanings. Yorick A. Wilks, Brian M. Slator, and Louise M. Guthrie. Cambridge, MA: MIT Press, 1996. Pp. xii + 298. $32.00.

As both the need and funding for natural language processing (NLP) systems grow, computational linguists increasingly turn toward electronic dictionaries to provide the lexical information they need and rely less on the hand-made, hand-tagged lexicons of yesteryear. Previously, dictionaries were not often used for this purpose, deemed unfit "object[s] for research or computation" (vii). Computational linguists claim to have avoided dictionaries because they are prone to human error and their definitions are not "the ones that humans use" (vii). I found myself both amused and slightly insulted by these excuses, but the authors of Electric Words explain and discount that opinion throughout their wonderfully useful and informative book.

Electric Words: Dictionaries, Computers, and Meanings (EW) investigates a system within which natural language may be processed with the aid of information that can now be extracted electronically from traditional paper-bound dictionaries. To interpret human language mechanically, though, a computer must also have real-world knowledge or "artificial intelligence" (AI). As the authors note, computers acquire this knowledge more easily today than in previous decades because "the distinction between dictionaries and encyclopedias may be harder to draw today" (1), since some dictionaries now provide not only strictly lexical but also cultural, biographical, and other information. Traditional lexical definition and theories of meaning are nevertheless central to EW. Across two chapters, the authors valiantly describe the multitude of theories on meaning and defining. Like much philosophy, these theories can be collapsed into a few competing groups or expanded into myriad, only slightly differentiated opinions.
The authors admirably arrange philosophies of definition into 11 categories. Not for the faint of heart, this summary of opinions about defining and how it should be carried out is useful and interesting, whether the reader studies modern computational linguistics or is engaged solely in traditional lexicography. The authors refer to their own theory as "Meaning as Equivalent to Symbolic Structures" (34) or, put more simply, "meaning is other words or symbols" (15).

Having laid this groundwork, the authors proceed to a lower-level description of the "atoms" or "linguistic primitives" that definitions comprise (45). Not to be confused with phonemes or morphemes, primitives are what some practitioners refer to as the words in a "defining vocabulary" (45). For an artificial intelligence system to understand natural language, it must contain a base, or set of primitives, which "name basic concepts underlying human thought" (45), in order to break down and interpret larger, more complex human thoughts. A fascinating part of this description of primitives is the authors' defense of using words as primitives. Anyone who has used a dictionary knows that "infinite regress" (33) rears its ugly head here: if words and concepts are always defined in terms of other words, then attempts to define words will proceed infinitely and circularly. Some linguists thus believe that primitives must be things other than words, and in the interest of fairness the authors describe some of those other theories. But the authors contend that "there is no more trouble about [words as primitives] than there is in the financial situation where we happily accept currency for currency at the bank, and just as in dictionaries we accept definitions of words by more words and never hope for more" (48). Once we have accepted the authors' use of words as primitives, the question naturally arises, "What happens when a primitive has more than one sense?"
This question leads the authors to a very interesting discussion of sense in dictionaries, of the differing theories on whether one can actually separate out the senses of a word, or even whether words can be "defined" at all by other words, and of how consistency in the recognition and grouping of senses becomes very important computationally. The authors provide an example from work by Martin Kay, a professor of linguistics at Stanford University, that sets the stage for how primitives can be disambiguated: "The sense (give1) (sense 1 of give) is defined to be (cause1 to have1), where (cause1) is sense 1 of...