Johns Hopkins University Press
Description Is Not Enough: The Real Challenge of Enactivism for Psychiatry

In his article "Delusion, Reality and Intersubjectivity," Thomas Fuchs gives an "enactivist" account of how primary delusions in early schizophrenia evolve. First, subjects experience the "loss of familiar, commonsensical meanings," known as delusional mood. Subsequently, they experience new "revelatory significances," in perception as well as in social interaction, with all experience becoming radically "subjectivized." Out of these "uncanny, spurious and made" experiences, delusions develop: suddenly the formerly uncanny experiences make sense. This new subjective reality, however, is "rigid." Subjects are no longer able to take on different perspectives; the shared reality we usually live by is lost. Delusions can no longer be challenged by argument, and communication must proceed by suspending traditional common sense, as Fuchs correctly notes and as every experienced psychiatrist will teach the novice.

The picture Fuchs eloquently paints is hardly novel. Rather, it is now the received view in biological psychiatry of how persecutory delusions evolve (Kapur, 2003). So what is new in Fuchs's account? If anything, it is the enactivist approach. Enactivism is part of what is today called situated cognition: the cognitive abilities of a system are embodied, situationally embedded, extended, and enacted (the four Es) (Walter, 2010, 2013). As Fuchs notes, according to the four Es cognition is not a passive process. Rather, it is an active adaptive achievement of systems that have to survive in an ever-changing environment, to which they are coupled from the sensorimotor up to the communicative level. Mental states are not isolated representations in the head but are constituted relationally. However, in contrast to what Fuchs seems to think, situated cognition is not specific to humans but characteristic of all biological organisms. Moreover, it is also thought to be one of the most promising theories of cognition for autonomous (nonliving) systems, for example in robotics.

Fuchs does a good job of describing psychopathology from an enactivist point of view. Unfortunately, though, he does not go any further. He uses explanatory language where mostly redescription in some fancy new terminology is found. Moreover, he seems to imply that neurocognitive explanations of delusions must necessarily fail because of enactivism. This, however, is not true; on the contrary. In the last decade, highly innovative neurocognitive theories of delusions have emerged that are based on, well, an "enactive" account of information processing in organisms with brains (Bortolotti & Miyazono, 2015; Corlett, 2015; Corlett, Taylor, Wang, Fletcher, & Krystal, 2010; Feeney, Groman, Taylor, & Corlett, 2017; Friston, Stephan, Montague, & Dolan, 2014; Kapur, 2003; Mishara & Fusar-Poli, 2013). The real challenge, therefore, is not whether but why and how primary delusions emerge. To address a notorious misunderstanding explicitly: to give a neurobiological explanation of delusions is not to claim that delusions are caused solely from inside the brain. For example, it is well known that important risk factors for schizophrenia, like urbanization, living alone, or migrant status, have the factor of social isolation in common. Nor do neurobiological explanations imply that the best therapy for a condition is necessarily biological. What a neurobiological stance does imply, however, is that there must be some mediating (possibly even causal) mechanisms that explain why delusions emerge in the way described by enactivist psychopathology.

The framework I am referring to is known as "predictive coding" or "active (Bayesian) inference" (Clark, 2013; Friston, 2010; Hohwy, 2013). Obviously, I cannot go into much detail here. In a nutshell, this approach conceives of the brain in living organisms as a prediction machine that has evolved to predict the sensory consequences of actions. Living organisms never encounter their environment as a blank slate; they always already have internal world models in place, at first crude and imprecise, later elaborated, that predict what will happen next. This is an energy-saving strategy, as many contexts are usually stable and need not be processed in much detail on most encounters. This approach inverts the traditional picture of information processing: processing does not happen passively, bottom-up; rather, a behaving ("enacting") organism generates top-down predictions from a world model. Only prediction errors, that is, mismatches between expectation and sensory signal, are conveyed bottom-up in a hierarchical fashion. This is a clearly enactive picture of cognition.

Delusions, so the explanation within this framework goes, might emerge via a failure to attenuate sensory precision and compensatory increases in precision at higher levels of the hierarchy (Adams, Stephan, Brown, Frith, & Friston, 2013). Importantly, this theory is not a just-so story: all concepts in these explanations have a mathematically defined meaning. In general, the framework has three important characteristics. First, it is based on neurophysiological data and theories (Clark, 2013). Second, it is quantitative, that is, it provides a mathematical framework that allows models to be constructed (Friston, 2010). Third, it makes concrete suggestions as to how prediction and prediction-error messaging might be implemented within the brain, bridging the anthropomorphizing language of expectations and predictions to concrete brain mechanisms down to the synaptic and molecular level, serving situated cognition (Bastos et al., 2012). Ironically, and just as a side note to the well-known neurocritical position Fuchs persistently defends, this framework has been developed in large part by cognitive neuroscientists and neuroimagers. To be sure, this picture is not as intuitively appealing and easy to understand as the phenomenological description. But it is much more precise, testable in principle, and links levels of reality from molecules to subjective experience.
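The rigidity of delusional belief described phenomenologically then falls out of the arithmetic. The following toy simulation (my own illustration, not the Adams et al. model, with arbitrary precision values) shows how an aberrantly precise higher-level prior down-weights disconfirming evidence, so that the belief barely moves however often it is contradicted:

```python
# Toy illustration of belief rigidity under aberrant precision weighting:
# when the prior is assigned far too much precision relative to the senses,
# repeated disconfirming evidence leaves the belief almost untouched.

def posterior_mean(prior_mean, prior_prec, obs, obs_prec):
    # Precision-weighted average of the prior prediction and the observation.
    return (prior_prec * prior_mean + obs_prec * obs) / (prior_prec + obs_prec)

evidence = [0.0] * 20                      # twenty observations contradicting the belief
belief_normal, belief_rigid = 1.0, 1.0     # both start fully committed to the belief
for obs in evidence:
    belief_normal = posterior_mean(belief_normal, 1.0, obs, 1.0)   # balanced precisions
    belief_rigid = posterior_mean(belief_rigid, 50.0, obs, 1.0)    # overweighted prior

# belief_normal collapses toward 0; belief_rigid declines only slightly.
```

This is, of course, a caricature of a single level of the hierarchy, but it shows in what sense "cannot be challenged by argument" becomes a quantitative claim within the framework.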

To sum up: I agree that the theory of situated cognition has some plausibility and holds promise for the explanation of delusions. I disagree, however, with constructing an opposition between this enactivist approach and current neurobiological approaches to delusions. Instead of remaining in the descriptive realm, I recommend wholeheartedly embracing these neurobiological approaches, as they are the only way to find out why and how, and not only whether, delusions evolve as described.

Henrik Walter

Henrik Walter, MD, PhD, Professor for Psychiatry, Psychiatric Neuroscience and Neurophilosophy, Director of the Research Division of Mind and Brain, Deputy Director of the Department for Psychiatry and Psychotherapy, CCM (Research) Charité Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Berlin, Germany.


Adams, R. A., Stephan, K. E., Brown, H. R., Frith, C. D., & Friston, K. J. (2013). The computational anatomy of psychosis. Frontiers in Psychiatry, 4, 47.
Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., & Friston, K. J. (2012). Canonical microcircuits for predictive coding. Neuron, 76 (4), 695–711.
Bortolotti, L., & Miyazono, K. (2015). Recent work on the nature and development of delusion. Philosophy Compass, 10 (9), 636–645.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36 (3), 181–204.
Corlett, P. R. (2015). Answering some phenomenal challenges to the prediction error model of delusions. World Psychiatry, 14 (2), 181–183.
Corlett, P. R., Taylor, J. R., Wang, X.-J., Fletcher, P. C., & Krystal, J. H. (2010). Toward a neurobiology of delusions. Progress in Neurobiology, 92 (3), 345–369.
Feeney, E. J., Groman, S. M., Taylor, J. R., & Corlett, P. R. (2017). Explaining delusions: Reducing uncertainty through basic and computational neuroscience. Schizophrenia Bulletin, 43 (2), 262–272.
Friston, K. (2010). The free-energy principle: a unified brain theory? Nature Reviews. Neuroscience, 11 (2), 127–138.
Friston, K. J., Stephan, K. E., Montague, R., & Dolan, R. J. (2014). Computational psychiatry: The brain as a phantastic organ. Lancet Psychiatry, 1 (2), 148–158.
Fuchs, T. (2020). Delusion, reality and intersubjectivity. Philosophy, Psychiatry & Psychology, 27 (1), 61–79.
Hohwy, J. (2013). The predictive mind. Oxford: Oxford University Press.
Kapur, S. (2003). Psychosis as a state of aberrant salience: a framework linking biology, phenomenology, and pharmacology in schizophrenia. American Journal of Psychiatry, 160 (1), 13–23.
Mishara, A. L., & Fusar-Poli, P. (2013). The phenomenology and neurobiology of delusion formation during psychosis onset: Jaspers, Truman symptoms, and aberrant salience. Schizophrenia Bulletin, 39 (2), 278–286.
Walter, H. (2013). The third wave of biological psychiatry. Frontiers in Psychology, 4, 582.
Walter, S. (2010). Locked-in syndrome, BCI, and a confusion about embodied, embedded, extended, and enacted cognition. Neuroethics, 3, 61–70.
