Can another person know my thoughts with better authority than I know them myself? With his affirmative answer to this question, Freud invented the twentieth-century human, a being whose mind is accessible to scrutiny from outside, and whose attempts at conscious self-explanation are at best partial and in many cases wrong. Even as Freud’s scientific influence wanes, the shift of authority he inaugurated is still gathering strength. Now the mind is open to many outsiders. Not only psychoanalysts, but behaviorists, cognitive scientists, and neuroscientists are all striving to read my mind, and yours. Against this historical sea change stands the solitary John Searle, whose work is consistently animated by fierce adherence to first person authority and surly attacks on any theory that could be exploited to undermine that authority. To most readers Searle’s positions seem stubbornly anti-scientific and covertly dualistic, despite his insistence to the contrary. And since science is good and dualism is bad, Searle’s arguments have become standing obstacles for theorists to discuss, debate, or dismiss.
Buttressing his insistence on first person authority, Searle defends a Cartesian view of mental representation or intentionality, holding that an intentional entity is a conscious entity, either in actuality or potentiality. Deeply unconscious representations are therefore impossible. Eric Gillett has joined the debate just here with a pointed critique of Searle’s argument (also discussed in Lloyd 1990). Gillett performs a double service by challenging Searle’s premises and offering some positive grounds for the deep unconscious. I agree with Gillett that establishing the existence of deeply unconscious representations requires a theory of intentionality that is independent of conscious awareness. Ruth Millikan’s evolutionary teleology is an excellent candidate, in its pursuit of real function and real content (vs. “as-if” interpretation) for mental items.
Some questions remain. First, if deeply unconscious entities are clinically important, then in addition to a defense-in-principle of their existence we will need some method of identifying specific deep representations in particular cases. Millikan’s functionalism links content to broad biological and environmental contexts, and thereby seems to be an unwieldy tool for content ascription. If deeply unconscious representations get their content in the way Millikan describes, how can the clinician make use of them?
Second, and more important, Millikan’s theory shares with many others the ambition to explain all intentional entities, conscious and unconscious alike. This leads to a new dualism of mental contents, a split between intentional content and conscious content, since the comprehensive theory of intentionality will assign mental content both shallow and deep, irrespective of consciousness. This is a perplexity that transcends particular theories of intentionality, since what my mental states represent seems at least partly dependent on my environment or context, but my current state of consciousness is exclusively dependent on my brain. How, in theory, can these two concepts of content be reconciled?
In sum, deeply unconscious representations do not come cheap. One way to escape the new dualism of content is to deny that the deep entities are representations. Instead, they might be dispositions or the “mechanisms, instinctual drives, structures, and energies which can never become conscious under any conditions,” as Gillett suggests. (In Lloyd 1994 I explore the possibility of a dispositional unconscious understood via connectionism.) On this view, mental content could remain within our first person authority, while the causal processes that manage that content remain out of sight. I wouldn’t have to guess what I’m thinking, but I would depend on the sciences of the mind to tell me how I come to think it.
Dan Lloyd, Department of Philosophy, Trinity College, Hartford, CT 06106, U.S.A.