In the last chapter I discussed the general turn to quantified cognitive-behaviourism, particularly its combination of abstracted empiricism and psychologism, as it seeks to forge a ‘Grand Theory’ of the social sciences from the mind and its genetic basis and then apply this model in matters of governance. Accordingly, I want to consider the ideological assumptions that underwrite a series of turns occurring in a broad range of fields, including behavioural economics, cognitive science, evolutionary psychology and information systems, as well as elements within artificial intelligence research, as they seek to examine how the universally shared features of human cognition code and recode historically specific forms of cultural practice. To their credit, proponents and practitioners in these disciplines rightly think that cognitive findings—when put in their social, historical, and comparative context—are truly illuminating.38 Nonetheless, I argue that they fall foul of cognitive behaviourism because they have a weak and unsustainable theory of the mind. My inquiry considers the ramifications of these paradigms as they have sought to integrate with one another on the basis of a common commitment: that there is a kind of computational economy in the brain which, once understood, can be replicated in information systems and can also explain historical and social development by reference to the brain. Indeed, despite these diverse elements, I argue that they are manifestations of a definable core insofar as these disciplines share strong claims that are over-interpretations of slender evidence and that reflect the prevailing ideological conditions more than genuine insight. In the course of this chapter, I explain why this intellectual artifice is not only an alienated understanding of human beings, but one that, when backed by institutional sanction via ‘nudge’-like programs, will create new techniques of oppression, ensure social stratification, and further legitimate exploitation.
Altogether, the goal in this chapter is to address several interrelated questions, which include the evolution and social nature of the human brain, the possibility of embodied intersubjectivity through mirror neurons and the extent to which social life has neural supports, the neurosociology of emotion and its relation to cognition and decision making, and the degree to which consciousness is computational. What is more, this development trades on the promise that it can reveal the cognitive continuities that underlie particular collective responses to cultural forms. In doing so, the project seeks to explain how particular mental rules cross boundaries of time and place and underlie perceptual and cognitive abilities. This implies that groups of persons are not only historically socially situated, but historically cognitively situated. Underwriting all of this is the presumption that the convergent cognitive-computational revolution will be the most far-reaching intellectual development of the early twenty-first century. Still, I think there are several conceptual errors that require attention before one wholeheartedly boards this train.
Animating my critique is a principled rather than practical end. To elaborate, my critique does not take aim at a premature project that has not yet been able to deliver on its promises. Rather, this project is built upon several suppressed contradictions that beget errors in axiomatic reasoning which then accumulate. This is not a pedantic exercise: what is at stake is the interlinking nature of knowledge, cognition, and reality as they inform prospects for human flourishing. By this, I mean how events are described and explained, how factual reports are constructed and how cognitive states are attributed. Too often mental states are said to orientate themselves to discursive constructions, themselves predicated upon an interplay between a person’s situated cognition and material context; thus these are expressions of the context of their occurrence. On this view, a speaker’s own mind is unknown to them, yet available to the expert whose hermeneutic hammer beats down on the person’s lived experience and intentions. This gives the analyst final say over the descriptions and meanings of social actions. To me, too much sway is given to the discursive power of cognitive behaviourism. Its dominant stage presence in current intellectual inquiry is a peculiar naturalization that sidelines more plausible explanations.
I begin by tracing some key developments in the attempts to date, on the part of researchers and theorists, to constitute an interdisciplinary venture on cognitive-behaviourist axioms, and to develop links between different projects. I address both pioneering and transitional attempts to describe cognition in terms of the computational processing of coded symbols and its relation to embodied experience. Thereafter I show how the accumulation of errors in axiomatic reasoning, combined with unwarranted enthusiasm for cognitive behaviourism and cherry-picked evidence, harms social science. These emblematic cases seem to me to go most directly to the heart of what is at stake in the general turn to cognitive behaviourism. In this respect, I am interested in the social production of the substantive claims.
Much of this debate would be merely conceptual, except that cognitive behaviourism is already being used to guide policy makers. Although carried out in technical terms, and so rendered neutral and natural, the implementation of cognitive behaviourism functions to limit political struggle and judgements about politics. As such, there is much at stake, particularly when one recalls how many fashionable twentieth-century social policies appealing to rationality qua neutrality were extremely harmful. Similarly, the practice of presuming that persons can be better understood and governed by social policy informed by cognitive behaviourism is the application of misguided assumptions which, once coded into bureaucratic decision systems, would condition much of our life and leave little room for contestation. To better understand this development, one needs some familiarity with post-war twentieth-century American social thought.
During the Second World War, political science in the United States came into its own as the study of order. This meant that the study of politics was less textual and canonical, leaving behind its philosophical and legal-historical orientation, and was instead put in service of the state to understand political behaviour and social cohesion (Skocpol 1985, 4). This project even drew in many of the European émigrés such as Theodor Adorno, Max Horkheimer, Herbert Marcuse and other political refugees who proverbially ‘had to pay the rent’ and devoted their wartime activities to modelling personality, propaganda, and the influence of information exchange (Wilson, 2004, Chapter 2). This project borrowed significantly from psychology and organizational economics, but it was also influenced by nascent behaviourism.
Behaviourism, as practised by B. F. Skinner, reduced behaviour to a simple set of associations between an action and its subsequent reward or punishment. This approach applied empirical statistical analysis to predict the future as a function of the past. Here ‘a vague sense of order emerges from any sustained observation of human behaviour’. Furthermore, ‘direct observation of the mind comparable with the observation of the nervous system has not proved feasible’. This brackets aside intentions, along with other ‘conceptual inner causes’, from a valid science of behaviour (Skinner, 1953, 16, 29, 31). With its success there were spillover effects for other disciplines, and it became the foundation of what Robert Dahl (1961) called the ‘behavioural revolution’ in the social sciences.
This approach was meant to reconcile the differences between expectations and practice among persons in organizational settings: the extent to which people did not follow rules and procedures, how they were influenced to take particular actions, and their attitudes to events. Part of this research agenda was enabled by the technologies of mass public opinion surveying, which informed researchers of the discrepancy between normative and institutional rationality and people’s everyday decision-making practices.
The critical error that behaviourists of all kinds made was to rule out the importance of subjective, mental phenomena simply because they were difficult to observe or measure. Deficiency of method is insufficient grounds for a conceptual exclusion; this points to the social setting of the idea. Accordingly, Noam Chomsky’s (1959) critical review of Skinner’s Verbal Behavior torpedoed Skinner’s attempt to explain linguistic ability by behavioural principles. Instead, Chomsky argued that the human mind had a linguistic capacity founded on a universal grammar which was itself innate. Languages could only develop if they conformed to the deep structure of the brain.
Along with advances in computer science, the computational turn reduced behaviourism’s standing in the American social sciences. Following the Second World War, computer scientists actively sought to build machines that could compute rationally so as to mimic human cognitive processes. The computational turn pulled from Alan Turing (1950) and Claude Shannon’s (1948) information science—which itself relied upon the formal mathematical logic developed by Gottlob Frege and Bertrand Russell—whose work argued that computers resembled the human brain, and that these machines would eventually manifest an artificial intelligence indistinguishable from human intelligence. In the 1950s, the computer scientist John McCarthy (1979) called the study of intelligence, and the replication of its essential features in a computational system, artificial intelligence. The goal of this project was to create intelligent devices and robots that could undertake labour while also demonstrating how biological intelligence functioned. Herein it is understood that anything that can be represented as information can be computed.
This project intersected with Chomsky insofar as his work uses natural attributes to explain ordinary language practice and linguistic ability. Chomsky’s critics argued that, in making the deep structure of the brain responsible for syntax and effectively giving it priority over semantics, he could not account for meaning (cf. Searle 1972). This focus on the biological rather than the social pushed both social scientists and cognitive scientists to shift attention to tracing the distinct patterns produced by the brain so that computers could replicate these patterns, in the hope that this would replicate the form of consciousness.
These successes, however, were arguably limited, because there were severe errors in the foundational assumptions about rationality. While computer programs could be written to manipulate symbols within finite logical systems, this was not successful outside those systems. The scope of a computer’s comprehension of natural language was limited, and computer scientists encountered the same barriers as philosophers of language in the ideal and ordinary language debates (see Rorty 1967).39
A good demonstration of these conceptual inadequacies is John Searle’s (1980) famous ‘Chinese Room’ argument, wherein he presents a strong case for the distinction between meaning making and information processing. Symbol manipulation via a set of predetermined logical rules cannot match how humans relate symbols to meaningful events. This fits with Chomsky’s conception of language, wherein the complexity of internal representations is the result of a genetic endowment maturing in an environment. This opens up the possibility of rich, creative, meaningful activity, which simply cannot be reduced to computational associations as practised by behaviourists. Chomsky’s approach to the understanding of the mind is anathema to behaviourism: its emphasis on the internal structures and characteristics that enable the mind to perform a task differs from the external associations formed by relying on patterns of past behaviour and the environment.
As meaning making is still out of reach of computation, Hubert Dreyfus has good grounds to state that ‘the research program based on the assumption that human beings produce intelligence using facts and rules has reached a dead end, and there is no reason to think it could ever succeed’ (1992, ix). In short, the first artificial intelligence (AI) revolution was limited by an overly rational model of mind wherein consciousness was understood as the computation of information processing, as opposed to the making of meaningful interactive relationships and associations with the world.
Following Chomsky’s critique of artificial intelligence and the jettisoning of positivist logical methodology, there was an emergence of technological power in the computational arena. The aforementioned critiques of ‘good old-fashioned AI’ heralded a turn to probabilistic and statistical models and analysis. This was in part because, in engineering and robotics, achieving goals and being successful was more professionally rewarding than addressing fundamental scientific questions. This led AI researchers to use computers to model the architecture of the brain. Advances in computing power allowed models of networks rather than of logical serial processing.
There was also a further development in which there was some limited modification of the presumptions about the mind. Cognitive scientists came to understand the mind as biological, embodied, and affective, linked to thought processes that are apparently ‘illogical’ relative to previous models of the mind, which stressed the separation of emotions from cognition that was endemic to early cognitive theory. However, the empirical methodological re-orientation of the second AI revolution still saw mental processes classified as information processing, and moreover held that the best model for a cognitively active human being is a computer running a program. This change is a selective inversion of the view that the mind does not pre-exist discourse or culture, but rather is continually accomplished in and through their production and interpretation. This newer approach sought out alternatives to a strictly logical view of cognition and incorporated findings and axioms from psychology, anthropology, and linguistics, in short stressing the network nature of cognition as a computational problem.
These intellectual sentiments, supported by cognitive linguistics, demonstrate the complex and reciprocal relationship between culture and the embodied mind in forming the human subject; here the brain is the material site where language, culture, and the body meet and form each other. These sentiments are present in Foucauldian philosophical anthropology regarding the contextual shaping of cultural artefacts, which redirects questions about the author’s cognitive process to questions of authorship in material culture more broadly. An axiom in this analysis is that there needs to be a thorough understanding of the existence, circulation, and disciplining of a discourse within the author’s material body. By inference, reading texts can reveal not only ideological formations but also cognitive processes. There seems, however, to be a contradiction between the role given to the shaping power of culture on the brain and the attempt to preserve and stress universal, innately constrained cognitive actions that hold across cultural and historical eras. A similar impulse is present in the Derridean critique of rationalism, wherein rational thought is not a reflection of the natural functioning of human cognition. As Jacques Derrida argues, ‘there is nothing outside the text.’ The key difference between post-structuralist deconstruction and Chomskian cognitive science is that whereas Derrida kept good portions of Saussurean arbitrariness, albeit making it less phonocentric, Chomsky argued that meaning was not arbitrary, but rather motivated by innate characteristics bounded by physical attributes and refined by environmental factors.
From the Chomskian vantage, cognitive science embraces a framework wherein culture intersects with human cognition and material forces as they influence and shape each other. There is an emphasis on how human cognition is deeply tied to materiality and embodiment, even to the extent that persons are themselves unaware of the process by which the brain is the site where culture and biology meet and shape each other.
Nonetheless, current renditions of AI are little more than behavioural principles cloaked in sophisticated computational techniques. This can be seen in the reliance on statistical learning techniques to better mine massive datasets. Implicit in this endeavour is the assumption that with sufficient statistical tools and enough data, interesting signals can be isolated from the noise of hitherto poorly understood systems. While the urge to gather more data is strong, it is not always clear whether this is a path to meaningful explication. What I mean relates to the conception of the purpose of scientific practice, and is emblematic of the struggle between the efficiency of using computing power to distinguish between signal and noise and the more meaningful task of finding the essential basic principles that underlie and provide explanatory insight into the system. This is reminiscent of Sydney Brenner’s dismissal of the sequencing revolution in the biological sciences as ‘low input, high throughput, no output science’ (Friedberg, 2008). What the second AI revolution attempts is to reverse engineer systems and networks whose nature is a mystery, although it is not always clear what theoretical framework the data fits. To paraphrase David Berry (2011), the ‘destabilizing amount of knowledge’ produced by this computational turn lacks ‘the regulating force of philosophy’. Appreciating physical differences can help limit claims of the comparative similarity between brains and computers.
Aside from the major difference that brains have bodies, there are several others. While computers are digital, brains are analogue: neurons can fire in relative synchrony or relative disarray, and fluctuating membrane potentials are a factor. Additionally, computers use byte-addressable memory, but memory in the brain is associational. Another appreciable difference is that computers are modular and serial, whereas the brain has distributed and domain-general neural circuits. For example, the hippocampus is important for memory, but also for imagination, navigation, and other functions. This means that, unlike in computers, processing and memory are performed by the same components in the brain. Moreover, the brain has no system clock akin to the speed of a microprocessor, and synapses are far more complex than electrical logic gates. Lastly, the brain can repair itself after injury. In this respect, the brain is a self-organizing system that adapts to experience in ways that simply do not happen with microprocessors.
In short, artificial neural networks developed to replicate the brain come nowhere near the intricate and massive connectivity of actual neurons. This means that they are of limited use in testing theories about basic cognitive functions. Moreover, there is a kind of paradox: the attempt to prove that the mind is logically computational, and thus digitally replicable, trades upon associational strategies, biases, and biological particulars.
Reminiscent of ethology, evolutionary psychology roughly states that the mind is the way it is because of adaptations to the environment, and that the insights of evolutionary biology can be used to shed new light on the human brain, and on human behaviour more generally. These neo-Darwinists have sought to apply natural selection to social organization, much like Herbert Spencer’s meek justification that the social stratification and colonial domination of expansionist industrial capitalism reflected natural selection. Evolutionary psychology takes mundane observations—such as cells being spherical—to claim that physical principles provide channels of development that extend up to individual action and social organization. It does so by explaining human behaviour by reference to a competitive environment understood through the costs and benefits anchored in economic modelling. Here the presumption is that everyday human behaviour can be well explained by this framework, wherein genetic replication is the purpose of human beings, and this action is guided by calculations undertaken by the brain, directly and indirectly and largely at the unconscious level, regarding an economy of energy consumption and expenditure.
This has the hallmarks of Gary Becker’s project, which he described as ‘the economic approach to analyse social issues that range beyond those usually considered by economists’ (Becker, 1992), or, as he said elsewhere, an ‘approach to human behavior’ (Becker, 1976). However, the crucial oversight of this project is that market rationality is substituted for pure rationality, meaning that social interactions are assessed in relation to the market. This treats all social interactions as transactions. Having set all relations to the market metronome, the market is presented as an omnipresent system for distributing goods, rewards, and privileges. However, this presumption is unwarranted; rather, a case must be made for relating things to the market, not the other way round. The mistake is presuming what ought to be proven.
Construing ecology as economy does not explain much, because there are always retrospective appeals to the two principles of mutation and selection to explain humanity’s remarkable attributes. The paradigm is so flexible that it is immune to experimental and observational tests. Therefore, when evolutionary psychologists are found wanting they can simply weasel out by saying that they now have more information at their disposal. In doing so they seek to escape assessing whether their research agenda is, in principle, theoretically well grounded. This does not explain much and so is not satisfactory. Like Spencer, social Darwinism finds it philosophically convenient to skip over the political installation of institutions such as the market. Instead, social adaptation has less to do with natural and biological processes than with political and social processes.
To the extent that one can see the economic principles of Gary Becker’s project in evolutionary psychology, one can similarly see, in affect theory, the economics of Daniel Kahneman, who sought to model the ‘intuitive mode in which judgments and decisions are made automatically’ (Kahneman 2002, 470). In Kahneman’s theory, ‘an automatic affective valuation – the emotional core of an attitude – is the main determinant of many judgments and behaviors’ (Kahneman 2002, 470). As this applies to the affective turn, there is the latent promise that it can account in some cases for how the material environment triggers specific kinds of intensities of awareness which elude description, representation, or intentional formation, but which, Kahneman speculates, developed in ‘evolutionary history’ (Kahneman 2002, 470).
Kahneman’s research demonstrated that persons are susceptible to anchoring, availability, and representativeness biases. When combined with a lack of knowledge in strategic settings, inertia, and sunk costs, the general claim is that people are fallible and do not act in accordance with strict rationality. Further, one tenet of affective evolutionary psychology is that brains evolve to develop and function in social networks and to appreciate the costs of reproduction, and that these impulses shape our actions, often in ways not well understood by persons themselves: there are pre-conscious motives that drive action.
This is hardly controversial—indeed, it can be understood as a necessary humanization of economic research to acknowledge that preferences are not consistent. However, a quick follow-up proposition is that persons are ‘predictably irrational’, to use Dan Ariely’s (2009) turn of phrase, and so can be systematically and strategically manipulated by savvy architects of choice. These architects are often employers and governments—those with power—and depending on their intentions, the architecture can be for social amelioration or exploitation.
The depoliticized language of ‘nudges’ cloaks this manipulation, as if to connote mild direction rather than paternalistic intervention by the ruling class. Proponents of nudges like Richard Thaler and Cass Sunstein point out that policy is the architecture of choice and so can be used to correct for a person’s predictably irrational preferences (Thaler & Sunstein 2008). But people have contradictory and inconsistent views and practices, so it falls to the nudgers and their preferences to design the architecture that sets the context of others’ choices: for if consistent preferences do not exist, then there is no way to nudge people towards what they want or need; there is only what paternalists think they want, which is but what paternalists want. In this respect, nudging is more than creating incentives over which a person can then exercise options; rather, it is an intervention that tries to rig the system, triggering an affect in a person so that they undertake the ruling class’s subjectively preferred behaviour. Additionally, as nudging is built into a social system it is a cost-effective means of shaping subjects (see McMahon, 2015; Cromby & Willis Martin, 2014; Leggett, 2014).
The ethics of this kind of governance warrant scrutiny. While Thaler (2015) claims that there is little chance of manipulative nudges causing harm because most nudges are visible, even if one suspends disbelief on his say-so, his defence neglects the need for transparency over, and indeed justification for, active intervention to shape subjects. In doing so Thaler overlooks that the political purpose of nudging is for policy to make itself inconspicuous and so circumvent the public gaze.
There are other problems here too. While certain decisions once thought to be self-consciously produced are automatic, this does not mean that all or most of our actions are automatic, pre-conscious, or without intent. So when affective evolutionary psychologists use neuroscience to underwrite behavioural economics, they have little to say about the individual person or even about consciousness. Accordingly, when using the same axiomatic paradigm they are unlikely to have anything valid to say about social organization and politics.
The social and political consequences of the computational turn have a near unprecedented impact on governmental practice. Setting aside for present purposes the role of government in collecting information and creating profiles of people—itself extremely problematic in nature—there are other kinds of insidious epistemic problems that skew the conception of persons and their actions. In this section, I use the example of algorithmic regulation to demonstrate that there are principled arguments for keeping neuroscience out of social policy.
Algorithmic regulation is an approach to governance that seeks to apply AI learning principles to process the data produced by sensors so as to adapt to changing circumstances, induce stability, and shape social actions. The main promise of algorithmic regulation is that it makes governance more effective, and thus harnesses the state for democratic purposes by improving service delivery. This is justified by appeal to ‘a deep understanding of the desired outcome’, as one proponent puts it (O’Reilly 2013, 289). But make no mistake: algorithmic regulation is a political programme that seeks to quell politics.
Algorithmic regulation marries anticipatory adaptation of the environment with various technologies for surveilling those same environments, such as dynamic biometrics and smart environments, in order to guide public interventions according to what particular people are susceptible to do. It does so to ‘nudge’ or influence a person’s decisions towards preferred social actions. In doing so, it seeks to minimize the contingency of human actions so that governors can better stabilize their regimes. In other words, its target is to delimit what a person could do and prevent those actions from being actualized.
It has a pre-emptive character: it seeks to create affects, or nudge persons, based upon an anticipatory evaluation of what a person could do and what the political regime wants them to do. It then masks this gentle pressure as the person’s own agency to decide to act. This means that algorithmic regulation undermines the person as a moral agent, because it seeks, even if only slightly, to displace their preferences and intentions in favour of the regime’s own.
The error here is disconnecting the means of doing politics from its ends, which reveals several simplistic and naïve assumptions about politics and power. The fault is presuming that behaviours discovered via data mining are independent of power; this overlooks how the process of politics shapes the contents of politics. This political agnosticism can be seen in cases where the imperative to evaluate and demonstrate efficiency, results, and the like presupposes that the goal of policy is optimising the already agreed upon, or already instituted. So positioned, algorithmic regulation is posed as politically neutral and thus able to generate objective and inoffensive universal remedies to social ills.
It neglects that most political discussions and struggles are about beliefs, and so are not amenable to quantification. The appeal to efficiency and rationality in the form of ‘crunching numbers’ does not improve on the weaknesses of human judgement; rather, the profiling and digital transcription enabled by predictive data mining bypasses human interpretation altogether. This in turn changes a person’s relationship to knowledge as it is applied in a social setting.
To be clear, the issue is not quantification or statistics per se. Quantification requires epistemic communities that evaluate interpretations, uses, and findings. By contrast, automated algorithmic regulation is not accountable to an epistemic community at large. Rather, it is the province of proprietary bureaucracies whose programmers and administrators are not directly accountable to the people they seek to nudge. The lack of transparency emerges again when dealing with methodological concerns: the absence of a broader epistemic community that is not employed by these bureaucracies means that there is no independent process for testing and evaluating the code.
The computational turn has a pretence to analysis anchored in empirical experiment and deductive-causal logic. Rather, it interpellates people, through induction, as shadows of themselves. This indifference to causes in heterogeneous contexts has a direct impact on the presumed existence of causal interactions, particularly with regard to how a person’s actions and intentions are understood. Algorithmic systems are appealing because they relieve governors of the burden of being accountable and transparent in their assessments of people. This deflects attention away from causality and intentional agency, and from the individual and collective ability to give an account of one’s actions and the meanings thereof.
In this light, algorithmic regulation is seen as the ability to disrupt existing deliberative governance and instead ‘hack’ people: for example, getting people to adopt a welfare program, or forestalling civil disobedience. The commonality between these two kinds of action is that they are treated as equations needing solutions suitable to an epistemic problem of government, the problem being to lessen the radical indeterminacy and the incommensurability of contexts and behaviours. In doing so, this upends existing conventions regarding the production and enforcement of norms.
One can contrast this to circumstances where persons encounter institutional due process. This kind of interaction requires persons to use intentional language to provide explanations and motivations for their actions. In this respect, it is a moment for people to give an account of themselves, and in so doing the institution provides a moment for a person to challenge the norms that organize that very process, as those norms are relatively more visible, intelligible, and contestable. It is not automated and scripted by code.
As the purpose of algorithmic regulation is to assist bureaucracies in anticipating what bodies could do, and then to nudge these bodies to undertake subjectively desirable actions, it undermines the agency that is foundational to the person-as-citizen. Instead, people are objectified, while the process itself is concurrently mystified and reified, such that accounts of the data and its uses go unexamined. This kind of regulation limits a person’s capacity to develop as an autonomous agent who engages in collective action.
This has the hallmarks of Marcuse’s one-dimensional society. Recall that he argued that the process of near-total social integration, driven by consumer and administrative logics, flattens the scope for discourse, imagination, and understanding. What is substituted instead is the perspective of the dominant order, which uses various mechanisms to create social closure. While the mechanical forms he identified—punditry and media systems—are different, the function remains the same: the appearance of contentment in service of capital, but not genuine contentment independent thereof.
Rendering politics devoid of class concerns, the struggle for power to distribute goods, or disagreements about belief relegates contention and confrontation to merely accepting different unintentionally driven tastes and preferences. I cannot see how this could be considered emancipatory, for it renders democratic governance aesthetic rather than normative.
In at least one strand of contemporary philosophy of mind, talk of perception has fallen out of favour. Indeed, most writers deny perception altogether, or claim it does not matter. Instead, they reduce perception to reality, or speak of the ‘really real’. Perceptions are said to be ‘nothing but’ particles or waves or structured brain events. Paul Churchland (1996) replaces the perceiver with functioning biological bodies. The perceiver is reduced to an organized body, the mind becomes the brain, body motions become actions, the man becomes the person. Churchland redefines phenomenal qualities as being nothing but properties of the brain. Cognitive events such as understanding, recognising, feeling, and perceiving are replaced with neural analogues. Here psychological events are treated solely as neural events, always already nothing but the really real of matter and motion. And this is the prevailing view in cognitive science.
These contemporary materialists make two claims. The first is that all perceptions can be explained in terms of, or by reference to, neural events and the like. The second is that there are only neural events (and other physical events in the environment). At the heart of the dismissal of perception is the combination of two beliefs: first, that science, especially neurological science, has access to reality; and second, a distrust of perceiver-dependent events.
The critique is that neuroscience practices an epistemology that seeks to associate properties. Take connectomics, for example. Here the goal is to map the neurons in the human cerebral cortex as a preliminary step to digitally reproducing that circuitry (see Alivisatos et al. 2012). Google Brain is a very good example of this kind of project. Yet this approach is an error: it merely seeks the surface manifestation rather than the logic and operations that perform the task, what the brain is actually doing. It is difficult to discover this by seeing where synaptic connections are being strengthened or where there is neural activity. Information of this sort may well be useful, but it does not address the fundamental question about mechanisms.
In part, this is because there are real problems with the working definition. Describing consciousness as ‘the feeling of processing information’ cannot be correct because it implies consciousness is perceived by something else. This is a problem when trying to construct computationally based artificial intelligence: for unless a mathematical pattern can perceive its own existence, consciousness is at best well described by that mathematical pattern, not constituted by it.
Besides, considering humans have approximately 100 trillion possible arrangements of synapses, even if it were possible to map out the exact pattern of brain waves that gives rise to a person’s momentary complex of awareness, that mapping would only explain the physical correlate of these experiences; it would not be them. Experiences are irreducibly real, but different from brain waves. Still, it is an error to mistake consciousness being well described by mathematics or computation for its being mathematical or computational. To presume otherwise is to reify a description. Overall, these are incomplete representations and partial understandings of physical, biological, psychological, and social reality.
There are other methodological errors. Researchers rely upon fMRI to compare brain activity with visual stimuli by examining increases in blood flow in order to infer associations. But this neuroimaging is little more than neo-phrenology, and misleading. Four points are relevant here. First, as Lilienfeld et al. (2015) point out, ‘the bright red and orange colors seen on functional brain imaging scans are superimposed by researchers to reflect regions of higher brain activation.’ Moreover, this increased illumination is not a direct measure of neural activity; rather, ‘they reflect oxygen uptake by neurons and are at best indirect proxies of brain activity.’ Changes in blood flow are thus not a clear indicator, for the precise relationship between cerebral blood flow and neural activity remains unsettled (Sirotin & Das, 2009). Besides, it is not as if other parts of the brain are dimmed when there is a stimulus. As Lilienfeld et al. write, ‘the activations observed on brain scans are the products of subtraction of one experimental condition from another. Hence, they typically do not reflect the raw levels of neural activation in response to an experimental manipulation.’ So increased blood flow indicates little about what is occurring in other parts of the brain that are active. Controversies regarding the assumptions read ‘into’ and ‘out of’ brain scans are therefore left unattended (see Dumit 2004), or dealt with at the level of technical improvements to data-capturing technology at the larger expense of understanding what is being captured; in other words, a poor grasp of the problem, with the hope that technical refinements will provide insights. This largely leaves the question of how thoughts and consciousness relate to each other under-attended.
One final point Lilienfeld et al. raise is that ‘depending on the neurotransmitters released and the brain areas in which they are released, the regions that are “activated” in a brain scan may actually be being inhibited rather than excited.’ Functionally, it could be that these areas are being ‘lit down’ rather than ‘lit up’. In other words, it is premature to try to isolate individual components from a working whole. Even the most basic tasks require the integrated unit.
Granted, researchers tend to study what they know how to study, but this points to the deficiency of developing experimental techniques or methods in search of the right target. While more data and better statistical analysis can provide a better approximation to mapping mechanical relations, they reveal little about the principles behind those mechanical relations. Statistical analysis is all well and fine, but one should not confuse understanding what is happening with understanding why it is happening. Moreover, one can easily be misled by data mining that seems to work because one does not know enough about what to look for.
A more appropriate approach is to understand the fundamental principles. Take the example of medical practice. Without a strong grounding in biological principles, a doctor will not be able to examine and explain variations in bodies; all they will have is knowledge of techniques that come in and out of fashion, and while these techniques may be successfully applied, their use does not demonstrate fundamental knowledge of the body’s biological processes. Their medical knowledge is predicated upon techniques rather than principles, which is a problem should techniques change or a new kind of variation be uncovered. Without fundamental knowledge of the causal processes, one is only in the catalogue business, and this does not tell us how structures were acquired or developed. In short, one is dealing with a different conceptual problem.
Probability theory and statistics are misapplied to biology and cognitive science when they are used to find correlations within noisy data rather than to examine how biological and cognitive systems select and filter out the noise; these systems are not trying to duplicate the noise, but to filter it out. Human infants, for example, although initially confronted by noise, reflexively acquire language because they have the genetic endowment to do so.
In addition to the theoretical problems inherent in statistical induction, practical problems emerge when researchers use or amalgamate large data sets, because of unknowns in the selection criteria and quality of the data. These limitations produce errors that compound upon each other and are rarely acknowledged. There are intrinsic limitations to data, as well as interpretive elements involved, so it would be an egregious mistake to believe that quantification necessarily brings social science closer to objective truths.
A popular stance is the definition that assumes that all mental representations derive from brain activity, and so every mental state has an associated neural state. The difficulty with this definitional understanding of consciousness is that it does little to address the more relevant question, at least for the present discussion, of how consciousness arises. In avoiding the metaphysical grounding of consciousness, it becomes trapped in a circle where the contents of consciousness consist of whatever we happen to be aware of. In this respect, the present generation of AI researchers are describing perception, not consciousness.
Cognitive functionalists posit that ‘the mind is what the brain does’, as two commentators put it (Kosslyn and Koenig 1992, 4), while Paul Churchland explains, ‘whether or not mental states turn out to be physical states of the brain is a matter of whether or not cognitive neuroscience eventually succeeds in discovering systematic neural analogs for all the intrinsic properties of mental states’ (1996, 206). But this way of understanding the mind is premature, for the model—the computational theory of mind—is hardly a settled question in the philosophy of mind. It is rather, as Thomas Nagel has written, a kind of ‘physical-chemical reductionism’ (Nagel 2012, 5). To give some background, John Searle regards the mind–body problem as resting upon the faulty presumption that these terms reflect ‘mutually exclusive categories of reality.’ It persists because there is a reluctance to see that ‘our conscious states qua subjective, private, qualitative etc. cannot be ordinary physical, biological features of our brain’ (Searle 2007, 39). Searle writes that if we drop the mutually exclusive criteria, then a solution is possible. To his mind,
All of our mental states are caused by neurobiological processes in the brain, and they are themselves realized in the brain as its higher level or system features. So, for example, if you have a pain, your pain is caused by sequences of neuron firings, and the actual realization of the pain experience is in the brain. (Searle, 2007, 39–40).
But this introduces a problem insofar as it requires one to specify how conscious states come into being. This attends not only to questions of making sense of perceptions and experience, but also to the extent to which a person’s consciousness is evoked by material circumstances independent of the body. Such an account requires us to defend the person as a perceiving agent; the nature of the object of perception; the role of mental contents; and the causal and significatory relation between perceptions and objects.
Keeping these points in mind, it is worth returning to the AI research. These efforts, as notable as they have been in their disciplines, are mimicking intelligence. In addition, AI is deterministic: presented with the same inputs, it will produce the same outputs if the program were run again.40 Beyond practical problems, in principle there remains little understanding of the brain. Presently, cognitive behaviourists describe intelligence in a way that boils down to ‘being able to do everything a person can do’. This circular reasoning demonstrates that scientists do not know what these computations are trying to mimic. So even once we set aside the hubris of executives who seek to sell software systems, there is considerable distance from producing things that even remotely resemble the breadth of actual human intelligence, let alone consciousness. To reiterate the points made above, without a principle-based understanding of consciousness there is little way to know whether what you have designed to act in a particular way is actually sentient. The working definitions of intelligence in artificial intelligence are descriptive low bars, nowhere near self-awareness or consciousness.
In the attempt to remedy this faulty theory of mind, social scientists would do well to reassess their relationship with metaphysics and discard the coextensive hyper-materialism that characterizes the AI research paradigm. Treating perceptions, ideas, and emotions as little more than electrical impulses misses the emergent properties whereby the whole is greater than the sum of its parts. This perspective, it seems to me, stems from a cramped, hyper-reductive view of causality: one that is materialist, but not historical, rather than one that views the mind and its mental activity as distinct from mere material impulses. So one needs to take seriously that minds are a distinct realm of existence that cannot be fully explained by physicalism. In other words, there is a remainder after taking account of cognitive evolution, computational models, or the economy of mind. Chemical reactions predicated upon a base economy of natural selection cannot by themselves account for the creation of the mind. While physical evolution is causally necessary, by itself it is an insufficient condition for consciousness. These bold claims about neurological support are but an epistemology of excessive reductionism.
There is one last bit of hubris, where researchers believe that mathematically based speculations resolve or dissolve long-standing political conundrums without seriously engaging with those philosophical debates. It is related to the belief that all reality can be fully comprehended in terms of physics. But this is a fantasy. While it is easy to understand, for instance, the interaction of light and nerve impulses, this cannot account for the gaze outwards. Cognitive behaviourism falls at this first hurdle, while still being far from able to account for other aspects of human consciousness, like the formation of intentions and the undertaking of voluntary action. So if cognitive functionalism has little to say about the person and consciousness, it is certainly not well positioned to claim to make meaningful contributions to social policy.
Having discussed portions of the intellectual inheritance of cognitive behaviourism from twentieth-century social thought, I now want to turn my attention to a critical branch of sociological thought from the same period to assist in analysing this set of ideas. C. Wright Mills worked in the immediate post-war period as a research assistant to Elihu Katz and Paul Lazarsfeld’s research on the media effects of mass communication. The majority of their work sought to understand the persuasive influence of mediated messages in print and broadcast communication technologies in shaping and controlling the ideas, attitudes, and behaviours of members of a society. Mills thought that most of the findings suggested that the media effects of mass communication sat in concert with, if not were over-determined by, other factors like the differentiated cultural practices of composite audiences and their agency. For this reason, he never shook his distaste for behaviourism and its presuppositions.
Shaped by this post-war infatuation with coding mass behaviour and his critique thereof, Mills identified in The Sociological Imagination the emergence of Grand Theory (the term Mills used to mock Talcott Parsons’s work) and Abstracted Empiricism (a comment on Daniel Bell’s work). Stemming from his close experience with large public opinion survey research, and alongside questions about the legitimacy of power, Mills was epistemologically dissatisfied with the attempt to induce correlative relations at the expense of understanding social forces. With their excessive focus on individuals, these studies did not consider social relations or real-world politics, nor were they well grounded in the sociological theoretical tradition. Altogether, this reflected what Mills described as a pervasive ‘psychologism’, by which he meant ‘the attempt to explain social phenomena in terms of facts and theories about the make-up of individuals’ (Mills 2000, 67 fn 12). He writes,
Historically, as a doctrine, it rests upon an explicit metaphysical denial of the reality of social structure. At other times, its adherents may set forth a conception of structure which reduces it, so far as explanations are concerned, to a set of milieux. In a still more general way...psychologism rests upon the idea that if we study a series of individuals and their milieux, the results of our studies in some way can be added up to knowledge of social structure. (Mills 2000, 67 fn 12)
Abstracted empiricists had, according to Mills, adopted a research approach that sought to replicate the demonstrated success of the physical sciences, but in doing so had prioritized method over substance. In this respect, it was ‘systematically a-historical and non-comparative’ (Mills 2000, 68). Quantitative survey methods were presumed to be more rigorous than other kinds of social inquiry. But this kind of research was costly: it required significant staff to distribute, collect, and tally the findings in preparation for basic computational analysis. These activities required large budgets and resources, and so led to a bureaucratization of social research that resembled industrial-scale production. In this industrial-scale research, the sunk costs of investment trump self-critique and modification. This mind-set makes it difficult to understand change and contradiction in social, economic, and political institutions, let alone wider social and political development. As Mills observed:
one reason for the thin formality or even emptiness of these fact-cluttered studies is that they contain very little or no direct observation by those who are in charge of them. The ‘empirical facts’ are facts collected by a bureaucratically guided set of usually semi-skilled individuals. It has been forgotten that social observation requires high skill and acute sensibility; that discovery often occurs precisely when an imaginative mind sets itself down in the middle of social realities. (Mills 2000, 70 fn 13)
Together, these enable the pre-conditions for the domestication of critique. So being enamoured with cognitive behaviourism often leads to but one kind of approach to the study of human action. But this has direct and distinct disadvantages because the information produced tends to be a-historical and de-contextualized. This kind of theoretical mindset makes it difficult to deal with change in social, economic, and political institutions.
Abstraction without context, Mills believed, led to disengaged scholarship, alienated from the true dimensions of the problems under investigation. Excessively functional, behavioural, and naïvely empirical approaches fail to perceive the wider social and political settings that organize particular arrangements. These approaches count the countable because it is easily countable. This is not to elevate context above all else, but to suggest that historical circumstance, contingency if one will, cannot be discounted in any analysis. The spirit of Mills’ critique is perhaps as important now, given the prevailing belief that technological management is necessarily required by the rise of ever more complex societies, and that discussion over the selection of basic values is closed.
As I have demonstrated throughout this chapter, the psychologism produced by Grand Theory and Abstracted Empiricism has come about through the dismissal of reasoning and intention. This is the product of two beliefs: first, that science, especially neurological science, has access to reality; and second, a distrust of perceiver-dependent events. But this is little more than bringing the hermeneutic hammer down on lived experience, holding that people are not best positioned to relate to an observer their reasons for action. Instead, one has the judgement of the theorist or of unaccountable bureaucratic code. Moreover, it concedes that the investigation of social problems is best approached via methodologies defined by computation, not humanism; this is disciplinary supplication, not supplementation.
All of this is to say that there is a tremendous intellectual stake in cognitive behaviourism being correct. So much so that criticisms are brushed away and critics dismissed as professional contrarians rabble-rousing for attention. This neglects, though, that in the attempt to impose a synthesis on approximately twenty-five years of research, and through the hubris and generalisations that have emerged therefrom, a significant intellectual citadel has been built upon a shaky foundation of over-generalized eclecticism. It has created a messy soup in which the ‘cognitive’ elements are insufficiently grasped by the technicians, and the technical elements are insufficiently grasped by the social sciences, all of which yields a few leads and insights but much commotion and confusion. While this piecemeal approach presumes as-yet-unconnected little particulars to be the whole, it is nothing but a kind of naïve empiricism.
To conclude, capitalism’s rule requires more than military force, more than favourable laws, more than coercion and legitimation. A dangerous, impoverished, exploited and oppressed urban class requires the development of a system of beliefs with several mechanisms to get the subjects themselves to justify the prevailing social inequality and social order. Calculated nudges are helpful in that regard. So, notwithstanding C. Wright Mills’ well-known critique of methodological, conceptual, and organizational flaws, American social science research has continued to accommodate the wishes of the US ruling class, and has heeded calls to serve the state. Ultimately, what is at stake with this epistemology is that the anticipatory uses of big data will destroy the concept and practice of habeas corpus.
How to cite this book chapter:
Timcke, S. 2017 Capital, State, Empire: The New American Way of Digital Warfare. Pp. 125–143. London: University of Westminster Press. DOI: https://doi.org/10.16997/book6.g. License: CC-BY-NC-ND 4.0