
9 A Category Theory Explanation for Systematicity: Universal Constructions

Steven Phillips and William H. Wilson

1 Introduction

When, in 1909, physicists Hans Geiger and Ernest Marsden fired charged particles into gold foil, they observed that the distribution of deflections followed an unexpected pattern. This pattern afforded an important insight into the nature of atomic structure. Analogously, when cognitive scientists probe mental ability, they note that the distribution of cognitive capacities is not arbitrary. Rather, the capacity for certain cognitive abilities correlates with the capacity for certain other abilities. This property of human cognition is called systematicity, and systematicity provides an important clue regarding the nature of cognitive architecture: the basic mental processes and modes of composition that underlie cognition—the structure of mind.

Systematicity is a property of cognition whereby the capacity for some cognitive abilities implies the capacity for certain others (Fodor and Pylyshyn 1988). In schematic terms, systematicity is something’s having cognitive capacity c1 if and only if it has cognitive capacity c2 (McLaughlin 2009). An often-used example is one’s having the capacity to infer that John is the lover from John loves Mary if and only if one has the capacity to infer that Mary is the lover from Mary loves John.

What makes systematicity interesting is that not all models of cognition possess it, and so not all theories (particularly, those from which such models derive) explain it. An elementary theory of mind, atomism, is a case in point: on this theory, the possession of each cognitive capacity (e.g., inferring John as the lover from John loves Mary) is independent of the possession of every other cognitive capacity (e.g., inferring Mary as the lover from Mary loves John), which admits instances of having one capacity without the other.

Contrary to the atomistic theory, you don’t find (English-speaking) people who can infer John as the lover (regarding the above example) without being able to infer Mary as the lover (Fodor and Pylyshyn 1988). Thus, an atomistic theory does not explain systematicity. An atomistic theory can be augmented with additional assumptions so that the possession of one capacity is linked to the possession of another. However, the problem with invoking such assumptions is that any pair of capacities can be associated in this way, including clearly unrelated capacities such as being able to infer John as the lover and being able to compute 27 as the cube of 3. Contrary to the augmented atomistic theory, there are language-capable people who do not understand such aspects of number. In the absence of principles that determine which atomic capacities are connected, such assumptions are ad hoc—“free parameters,” whose sole justification is to take up the explanatory slack (Aizawa 2003). Compare this theory of cognitive capacity with a theory of molecules consisting of atoms (core assumptions) and free parameters (auxiliary assumptions) for arbitrarily combining atoms into molecules. Such auxiliary assumptions are ad hoc, because they are sufficiently flexible to account for any possible combination of atoms (as a data-fitting exercise) without explaining why some combinations of atoms are never observed (see Aizawa 2003 for a detailed analysis). To explain systematicity, a theory of cognitive architecture requires a (small) coherent collection of assumptions and principles that determine only those capacities that are systematically related and no others.
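The atomistic argument can be made concrete with a minimal sketch (our illustration, not the chapter's; representing each capacity as an independent set entry is the assumption). On atomism, any subset of capacities is a possible mind, so nothing rules out an agent with the John-inference but not its Mary twin:

```python
# Illustrative sketch of atomism: each cognitive capacity is an
# independent item, so every subset of capacities is a possible agent.

JOHN = "infer John is the lover from 'John loves Mary'"
MARY = "infer Mary is the lover from 'Mary loves John'"
CUBE = "compute 27 as the cube of 3"  # an unrelated capacity

CAPACITIES = [JOHN, MARY, CUBE]

def atomistic_agents(capacities):
    """Enumerate every subset of capacities: on atomism, each subset
    is a possible mind, with no constraints linking the members."""
    agents = []
    for mask in range(2 ** len(capacities)):
        agents.append({c for i, c in enumerate(capacities) if mask >> i & 1})
    return agents

agents = atomistic_agents(CAPACITIES)

# Atomism admits agents with one inference but not the other --
# a combination never observed in English speakers.
unsystematic = [a for a in agents if JOHN in a and MARY not in a]

print(len(agents))        # 8 possible atomistic minds
print(len(unsystematic))  # 2 of them have the John-inference without Mary's
```

The point of the enumeration is that atomism overgenerates: of the eight possible minds, two exhibit exactly the dissociation that systematicity rules out, and nothing in the theory excludes them.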
The absence of such a collection, as an alternative to the classical theory (described below), has been the primary reason for rejecting connectionism as a theory of cognitive architecture (Fodor and Pylyshyn 1988; Fodor and McLaughlin 1990).

The classical explanation for systematicity posits a cognitive architecture founded upon a combinatorial syntax and semantics. Informally, the common structure underlying a collection of systematically related cognitive capacities is mirrored by the common syntactic structure underlying the corresponding collection of cognitive processes. The common semantic structure between the John and Mary examples (above) is the loves relation. Correspondingly, the common syntactic structure involves a process for tokening symbols for the constituents whenever the complex host is tokened. For example, in the John loves Mary collection of systematically related capacities, a common syntactic process may be P → Agent loves Patient, where Agent and Patient subsequently expand to John and Mary. Here, tokening refers to instantiating both terminal (no further processing) and nonterminal (further processing) symbols. The tokening principle seems to support a much-needed account of systematicity, because all …
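The classical picture can likewise be sketched in a few lines (our construction, not the chapter's; the function names are illustrative). A single combinatorial process realizes the whole family of x loves y capacities, so the John and Mary inferences stand or fall together:

```python
# Illustrative sketch of a combinatorial syntax: one process tokens
# the rule P -> Agent loves Patient, and one process reads the Agent
# role back off any sentence of that form.

def generate(agent, patient):
    """Token the rule P -> Agent loves Patient with the given fillers.
    The fillers are terminal symbols; P, Agent, Patient are the
    nonterminals being expanded."""
    return [agent, "loves", patient]

def infer_lover(sentence):
    """Infer the lover: the constituent filling the Agent role."""
    agent, verb, patient = sentence
    assert verb == "loves"
    return agent

# The same pair of processes covers both systematically related cases,
# so possessing one capacity guarantees possessing the other.
print(infer_lover(generate("John", "Mary")))  # John
print(infer_lover(generate("Mary", "John")))  # Mary
```

Because infer_lover is defined over the role structure rather than over particular fillers, there is no way to build an agent that handles John loves Mary but not Mary loves John: the systematic relation falls out of the shared process rather than being stipulated capacity by capacity.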
