Automaton theories of human sentence comprehension. By John T. Hale. (CSLI studies in computational linguistics.) Stanford, CA: CSLI Publications, 2014. Pp. 204. ISBN 9781575867472. $27.50.

Reviewed by Sashank Varma

Linguistics and psychology have been intertwined since at least the beginning of the cognitive revolution. In the early 1950s, the psychologist George Miller was busy applying mathematical formalisms—probabilistic (Markov) models and information theory—to understand the sequential structure of sentences (Miller 1951). He abandoned these formalisms under the force of Chomsky’s (1957) arguments that sentences are hierarchically structured and joined with him to launch psycholinguistics: the experimental study of how people understand sentences. By the early 1970s, however, linguists and cognitive psychologists had largely gone their separate ways, with the former focusing on competence and the latter pursuing performance.

Beginning in the early 1990s, linguists and cognitive psychologists began to talk once again. A driving force was the maturation of computational linguistics, which gave rise to a number of computational models of human sentence comprehension (Elman 1990, Jurafsky 1996, Just & Carpenter 1992, Lewis 1993, Lewis & Vasishth 2005). These models bridged the divide between linguistic competence (because the models took grammars as ‘inputs’) and psycholinguistic performance (because the models were evaluated against behavioral data).

In Automaton theories of human sentence comprehension, John Hale offers a next step in the intertwined evolution of linguistic and psychological theory. To preview my evaluation, his book is perhaps the most efficient (measured in ideas per page) argument for the development of computational models of sentence comprehension and the evaluation of these models against experimental data. I highly recommend it to linguists and cognitive scientists alike.

Ch. 1 articulates the overall framework of H’s approach. It begins with Marr’s (1982) explication of the highest level of cognitive science theorizing, the ‘computational’ level. This is the abstract characterization of a cognitive ability in terms of the function it computes, that is, from inputs to outputs. Marr observed that this level is absent in most cognitive science theories, which focus on the lower levels where the mechanisms that implement this function reside. He identified Chomsky’s competence theory of language as one of the few exceptions. The computational level has evolved over the years to include the additional claim that cognitive abilities are optimal or rational (Anderson 1990). H presents these arguments—and then argues for a return to lower levels. For Marr, one problem with theorizing exclusively at lower levels is that mechanisms are forever underconstrained by data, and this impedes scientific progress. To avoid this problem, H proposes applying optimality to lower levels, taking as his goal the identification of a small number of mechanisms that are individually optimal and collectively minimal and sufficient. Of course, scientists have long used parsimony to guard against unwieldy theories. What differentiates H’s approach is his wide-ranging search across disciplines for a set of tightly interlocking mechanisms.

Ch. 2 introduces context-free grammars (CFGs) and argues that they are sufficient for expressing a large subset of sentence structures.1 H argues that movement and other phenomena that some have analyzed using more powerful formalisms can be handled by CFGs by multiplying the number of internal symbols, as in, for example, generalized phrase structure grammar’s ‘slash categories’. But he holds the line there, declining to lexicalize the grammar or adopt complex feature structures. This is consistent with his inclination toward parsimony: by keeping the grammatical component simple, he avoids propagating complexity to other parts of his model.
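To make the slash-category idea concrete, here is a minimal sketch (my illustration, not drawn from the book): a toy CFG in which GPSG-style slash categories such as S/NP and VP/NP are simply additional nonterminal symbols, so an ordinary context-free recognizer handles a ‘moved’ wh-phrase with no extra machinery. The grammar and recognizer are hypothetical simplifications.

    # A toy CFG (hypothetical, for illustration) in which slash categories
    # are ordinary nonterminals. Wh-movement ("who Mary likes") is handled
    # by multiplying symbols, not by exceeding context-free power.
    GRAMMAR = {
        "S":     [["NP", "VP"],
                  ["WhNP", "S/NP"]],  # filler + clause missing an NP
        "S/NP":  [["NP", "VP/NP"]],   # the missing NP is inside the VP
        "VP":    [["V", "NP"]],
        "VP/NP": [["V"]],             # transitive verb; its object is the gap
        "NP":    [["Mary"]],
        "WhNP":  [["who"]],
        "V":     [["likes"]],
    }

    def recognize(symbols, words):
        """Top-down recognizer: can the symbol list derive exactly these words?"""
        if not symbols:
            return not words
        first, rest = symbols[0], symbols[1:]
        if first in GRAMMAR:  # nonterminal: try each expansion
            return any(recognize(rhs + rest, words) for rhs in GRAMMAR[first])
        return bool(words) and words[0] == first and recognize(rest, words[1:])

    print(recognize(["S"], ["Mary", "likes", "Mary"]))  # True
    print(recognize(["S"], ["who", "Mary", "likes"]))   # True, via S/NP

Because the slash categories are just more nonterminals, the recognizer needs no special mechanism for the filler-gap dependency; the cost is a larger symbol inventory, which is exactly the trade-off H describes.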

Ch. 3 takes up the parsing component, which is understood as an automaton. H motivates moving beyond simple top-down and bottom-up parsing strategies by considering the implications of three constraints on human sentence comprehension. These are: (i) parsing is incremental—people build up an understanding of a sentence as they read, rather than waiting until after the last word to begin; (ii) parsing is nondeterministic—there are points during incremental parsing where multiple actions can be taken next (i.e. the parse can be extended in multiple directions). One must be chosen, and if the wrong choice is initially made...
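Constraints (i) and (ii) can be made concrete with a small sketch (again my own, over the same toy grammar as above, and not H’s automaton): an incremental shift-reduce recognizer that consumes one word at a time and keeps every live stack configuration, making the nondeterminism explicit.

    # An incremental, nondeterministic shift-reduce recognizer (hypothetical
    # sketch). After each word it retains the set of ALL reachable stacks,
    # so points where multiple actions are possible remain visible.
    RULES = [
        ("S", ("NP", "VP")), ("S", ("WhNP", "S/NP")),
        ("S/NP", ("NP", "VP/NP")),
        ("VP", ("V", "NP")), ("VP/NP", ("V",)),
        ("NP", ("Mary",)), ("WhNP", ("who",)), ("V", ("likes",)),
    ]

    def reduce_closure(stack):
        """All stacks reachable from `stack` by zero or more reduce steps."""
        seen, frontier = {stack}, [stack]
        while frontier:
            s = frontier.pop()
            for lhs, rhs in RULES:
                if s[-len(rhs):] == rhs:          # rule's RHS sits on top
                    t = s[:-len(rhs)] + (lhs,)
                    if t not in seen:
                        seen.add(t)
                        frontier.append(t)
        return seen

    def parse_incrementally(words):
        configs = {()}                            # set of candidate stacks
        for word in words:
            shifted = {s + (word,) for s in configs}   # shift the next word
            configs = set().union(*(reduce_closure(s) for s in shifted))
            print(word, "->", sorted(configs))    # several analyses may survive
        return ("S",) in configs                  # some path derived a sentence

    print(parse_incrementally(["who", "Mary", "likes"]))  # True

After each word the set of configurations grows or shrinks, and a wrong initial choice simply corresponds to a configuration that no later word can extend—the situation the chapter’s third constraint (truncated in this excerpt) addresses.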
