SubStance 30.1&2 (2001) 199-202

Dialogue:


I. John Tooby and Leda Cosmides Respond to Ellen Spolsky

Spolsky raises a number of interesting and important issues too wide-ranging to address fully here. However, the core of the disagreement between us boils down to her belief in the explanatory adequacy of a number of widely credited ideas that unrepentant, hard-core believers in materialism and computationalism (like us) cannot accept. For example, for those attempting to construct computational models of the mind, Spolsky's counterposing of "human, rather than machine processes"--and, at other points, of consciousness to algorithms, flexible processes to algorithms, context-sensitive and holistic processes to algorithms, and so on--exhibits dichotomies that we think are unsupportable. According to the computational view of the mind (which many find difficult to accept), there is nothing but "algorithms"--that is, some kind of cause-and-effect programming structure that implements all changes in representations. The whole cognitive science game is to take the high-level capacities that we intuitively grant to minds--such as consciousness, agency, flexibility, context-sensitive interpretation, and so on--and to see what programming steps they are built out of. Although many resist this as an unwelcome nineteenth-century reduction of humans to deterministic, clanking machines, we see it as approaching humans at a very different level of description from the ordinary--one that raises many of the same conflicts that counterposing the chemistry of love to the experience of love does. It is unconvincing to say that one's chemistry must be wrong because the experience of love seems inexpressible in the language of the chemist (or vice versa). The core of the disagreement lies in whether one is willing to accept that while consciousness and agency are not themselves "automatic" or mechanistic in the ordinary sense, their constituents must be, because within a scientific descriptive framework everything (including anything, like us, made out of molecules) operates on a micro level in terms of cause and effect. Many of Spolsky's objections are not specific to our proposals about what we call scope syntax (Cosmides and Tooby, 2000a), but concern the "mechanistic" nature of the computational approach to the mind--a debate encompassing a voluminous literature to which we have nothing substantial to add.

More specifically, she argues that the problems we think are solved by an evolved scope syntax take care of themselves spontaneously through human judgment, consciousness, context, and so on. We would respond that, however our minds draw distinctions between true and false, the conditional and the unconditional, the actual and the potential, and so on, there must be a mental language in which such distinctions are marked, expressed, and respected. We call this language "scope syntax" and do not see how a human representational system can get along without it. It is vital to keep a record of whether something is true or false, past or future, experience or imagination. Even in our well-designed minds, people often lose the "label" telling them whether they actually said something or only meant to say it, for example. When this happens on a larger scale, and the mind loses many of its source tags--supposedly unnecessary "labels" for "bits"--schizophrenia is the diagnosis (as, for example, when endogenously generated voices are marked as originating in the external world).

Finally, Spolsky argues that "we don't need to label bits or limit their scope because the larger theory of fitness will do that: what isn't useful will, by definition, not be used, and if a fiction or a lie is useful, it will be used." If only this were so, mistakes would never be made, and everything would work itself out for the best. Unfortunately, what is not useful, in the sense that it produces a maladaptive output, is "used" as input all the time, so long as the item meets the input conditions that a given psychological process looks for. The trick is designing psychological processes whose input criteria discriminate, at the time of input, whether the output that would result is likely to be...
