The algebraic mind: Integrating connectionism and cognitive science. By Gary F. Marcus. Cambridge, MA: MIT Press, 2001. Pp. 208. $27.95.

Reviewed by Iris Berent

How do speakers produce new linguistic forms? Generative accounts attribute the productivity of language to the grammar. Although these theories differ greatly in their specific accounts of the grammar, they all share the assumption that the grammar, the home of linguistic productivity, is a distinct aspect of linguistic knowledge that is separate from and irreducible to the lexicon. With the rise of connectionism, however, this basic assumption has been the subject of fierce debate in the psychological literature. Connectionism is a computational framework that captures knowledge in terms of the activation patterns in a network of interconnected nodes. There are now hundreds of connectionist models that exhibit some degree of linguistic productivity despite having no separate grammatical component. The challenge that these networks present to linguistic research cannot be overstated: If linguistic knowledge could be captured in terms of the statistical properties of lexical entries, then the notion of grammar would become all but obsolete.
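
To make the contrast concrete, here is a minimal sketch of the kind of model at issue: a tiny two-layer network, trained with the delta rule, whose 'knowledge' resides entirely in its connection weights. The encoding, dimensions, and training regime are illustrative assumptions, not details from the book.

```python
import numpy as np

# A minimal connectionist sketch (illustrative assumptions throughout):
# knowledge is carried entirely by the weights W connecting input and
# output nodes; there is no separate grammatical component.

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 8))      # output x input weight matrix

def forward(x):
    # Activation spreads through weighted connections (sigmoid units).
    return 1.0 / (1.0 + np.exp(-W @ x))

def train(pairs, lr=0.5, epochs=1000):
    # Delta-rule learning over input/output activation-pattern pairs.
    global W
    for _ in range(epochs):
        for x, t in pairs:
            y = forward(x)
            W += lr * np.outer((t - y) * y * (1 - y), x)

# After training on, say, stem/past-tense activation patterns, the
# model's response to a novel input is whatever the learned weights
# produce; any 'productivity' is a statistical byproduct of training,
# not the application of a stored rule.
```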

Does a theory of language need a grammar? Marcus’s book is a unique, remarkable achievement that is bound to reshape the discussion of this question. M’s argument begins not with grammar but with symbol manipulation. As M explains, symbolic accounts of cognition share three assumptions. First, the mind operates on variables—abstract placeholders such as nouns and verbs, akin to algebraic variables (e.g. X). Second, the mind represents the constituent structure of variables (e.g. recursion, X → XY). Third, the mind distinguishes between types (e.g. ‘mouse’) and individuals (e.g. ‘Mickey’). Linguistic rules, principles, and constraints are special cases of symbolic processes—algebraic operations whose semantic output is determined by the constituent structure of mental variables. For grammars to exist, the mind must have the capacity to perform symbolic operations.
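
The three tenets can be rendered schematically in code. The data structures below are my own illustration, not Marcus’s notation; they merely make each assumption concrete.

```python
from dataclasses import dataclass

# An illustrative rendering of M's three symbolic tenets; the names
# and structures are my own, not Marcus's notation.

# 1. Variables: rules are stated over abstract placeholders, never
#    over particular words. X -> XY mentions only the variables X, Y.
RULE = ("X", ("X", "Y"))

# 2. Constituent structure: variables combine recursively, so an X
#    may itself contain an X, as in the expansion of X -> XY.
@dataclass
class Node:
    label: str            # e.g. 'X'
    children: tuple = ()  # each child is itself a Node

tree = Node("X", (Node("X"), Node("Y")))   # one expansion of X -> XY

# 3. Types vs. individuals: 'mouse' names a kind; 'Mickey' names one
#    particular member of that kind.
@dataclass
class Individual:
    name: str   # 'Mickey'
    kind: str   # 'mouse'

mickey = Individual(name="Mickey", kind="mouse")
```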

In his book, M identifies numerous areas of cognition, including language, that implicate a symbolic architecture. He demonstrates that the success of connectionist models in these domains depends critically on their adherence to the symbolic tenets. The most powerful of these analyses is the discussion of the relations between variables in Ch. 3. It is this discussion that offers a principled computational explanation of why grammatical generalizations are irreducible to the properties of lexical instances.

Key to the ability of symbolic architectures to generalize is their operation over variables—abstract placeholders such as noun and verb. The incorporation of variables crucially determines the scope of generalizations. Because symbolic operations appeal to variables (e.g. noun), not instances (e.g. house, dog, etc.), they can be extended to any new instance, regardless of its properties, its familiarity, or its similarity to known instances. For instance, because the English past tense rule concatenates the variables verb stem and suffix (Pinker 1999), it can apply to both familiar (e.g. flip) and novel (e.g. plip) stems. The representation of variables and operations over variables, however, are not necessary for generalization. Numerous connectionist models can generalize without representing variables or incorporating operations over them. Proponents of such networks claim that generalizations ‘emerge’ in connectionist models that are not equipped with mechanisms for operating over variables prior to training.
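
The point about scope can be made concrete with a toy contrast (my own framing, which ignores orthographic details such as consonant doubling): an operation over a variable applies to any stem whatsoever, whereas storage of instances is mute on novel items.

```python
# A toy contrast between operation over a variable and instance-based
# storage (my framing; spelling rules such as consonant doubling are
# ignored).

def rule_past(stem: str) -> str:
    # Concatenates the variable STEM with the suffix -ed: because the
    # operation mentions only the variable, it applies to any string.
    return stem + "ed"

memorized = {"walk": "walked", "jump": "jumped"}

def lookup_past(stem: str):
    # Instance-based: defined only for items already stored.
    return memorized.get(stem)

print(rule_past("plip"))     # 'pliped' -- extends to the novel stem
print(lookup_past("plip"))   # None -- no output without a stored instance
```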

The implications of such findings crucially depend on what, precisely, such ‘emergent’ generalizations really are—a pivotal question that has not been fully articulated in the connectionist literature. On one view, such systems learn the ability to operate over variables, an ability with which symbolic architectures are innately (i.e. in advance of any training) equipped. Such emergence would not challenge the observational or descriptive adequacy of symbolic operations in general, or of grammars in particular, since the scope of generalizations in the two architectures would be indistinguishable. Such emergent generalizations, if they existed, would, however, call into question the explanatory adequacy of symbolic systems, namely, the hypothesis that the ability to operate over variables must be innately specified. The innateness of operations on variables should not be equated with the innateness of specific grammatical principles or constraints: The innateness of the symbolic machinery is not a sufficient argument for the innateness of any...
