
4  PDP and Symbol Manipulation: What's Been Learned Since 1986?

Gary Marcus

Nobody could doubt that the brain is made up of neurons and connections between them. But how are they organized? In cognitive science, much of the excitement of mid-1980s connectionism came from a specific hypothesis: that the mind did its work without relying on the traditional machinery of symbol manipulation. Rumelhart and McClelland (1986, 119), for instance, clearly distanced themselves from those who would explore connectionist implementations of symbol manipulation when they wrote:

"We have not dwelt on PDP implementations of Turing machines and recursive processing engines [canonical machines for symbol manipulation] because we do not agree with those who would argue that such capabilities are of the essence of human computation."

Up until that point, most (though certainly not all) cognitive scientists took it for granted that symbols were the primary currency of mental computation. Newell and Simon (1975), for example, wrote about the human mind as a "physical symbol system," in which much of cognition was built on the storage, comparison, and manipulation of symbols. Rumelhart and McClelland (1986) challenged this widespread presumption by showing that a system that ostensibly lacked rules could apparently capture a phenomenon—children's overregularization errors—that heretofore had been the signal example of rule learning in language development. On traditional accounts, overregularizations (e.g., singed rather than sang) were seen as the product of a mentally represented rule (e.g., past tense = stem + -ed). In Rumelhart and McClelland's model, overregularizations emerged not through the application of an explicit rule, but through the collaborative efforts of hundreds of individual units that represented individual sequences of phonetic features distributed across a large network, with a structure akin to that in figure 4.1.

Figure 4.1

A flurry of critiques soon followed (Fodor and Pylyshyn 1988; Lachter and Bever 1988; Pinker and Prince 1988), and the subsequent years were characterized by literally dozens of papers on the development of the English past tense, both empirical (e.g., Kim, Marcus, Pinker, Hollander, and Coppola 1994; Kim, Pinker, Prince, and Prasada 1991; Marcus, Brinkmann, Clahsen, Wiese, and Pinker 1995; Marcus et al. 1992; Pinker 1991; Prasada and Pinker 1993) and computational (e.g., Ling and Marinov 1993; Plunkett and Marchman 1991, 1993; Taatgen and Anderson 2002).

In the late 1990s, I began to take a step back from the empirical details of particular models—which were highly malleable—to try to understand something general about how the models worked, and what their strengths and limitations were (Marcus 1998a,b, 2001). In rough outline, the argument was that the connectionist models that were then popular were inadequate, and that without significant modification they would never be able to capture a broad range of empirical phenomena.

In 2001, in a full-length monograph on the topic (Marcus 2001), I defended the view that the mind did indeed have very much the same symbolic capacities as the pioneering computer programming language Lisp, articulated in terms of the following seven claims.

1. The mind has a neurally realized way of representing symbols.
2. The mind has a neurally realized way of representing variables.
3. The mind has a neurally realized way of representing operations over variables (e.g., to form the progressive form of a verb, take the stem and add -ing); the sketch following this list makes this concrete.
4. The mind has a neurally realized way of distinguishing types from tokens, such as one particular coffee mug as opposed to mugs in general.
5. The mind has a neurally realized way of representing ordered pairs (AB ≠ BA); man bites dog is not equivalent to dog bites man.
6. The mind has a neurally realized way of representing structured units (element C is composed of elements A and B, and distinct from A and B on their own).
7. The mind has a neurally realized way of representing arbitrary trees, such as the syntactic trees commonly found in linguistics.

A decade later, I see no reason to doubt any of the first six claims; PDP efforts at modeling higher-level cognition have become far less common than they once were, no major new architecture for modeling cognition has been proposed (though see below for discussion of Hinton's approach to deep learning and its application to AI), no serious critique of The Algebraic Mind (Marcus 2001) was ever published, and to my knowledge there...
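As a minimal sketch of what claim 3 means by an operation over a variable (in Python rather than Lisp; the function names and the nonce stem "wug" are illustrative, not from the chapter): the rule is stated over the variable stem itself rather than over stored examples, so it applies to any stem whatsoever, including novel ones, and applying it blindly to an irregular stem like sing yields precisely the overregularization (singed) discussed above.

def progressive(stem: str) -> str:
    """Claim 3's example: to form the progressive, take the stem and add -ing."""
    return stem + "ing"

def past_tense(stem: str) -> str:
    """The regular past-tense rule: past tense = stem + -ed."""
    return stem + "ed"

# Because the rule binds the variable `stem`, it generalizes freely:
# it handles the novel stem "wug" and overregularizes "sing" to "singed".
for verb in ["walk", "wug", "sing"]:
    print(verb, "->", past_tense(verb), "/", progressive(verb))

The point of the contrast is that nothing in these definitions depends on which particular stems have been encountered before, whereas the behavior of a network like Rumelhart and McClelland's depends on the examples it has been trained on.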
