CHAPTER 3

Calculation, Computation, and Conflict

During the First World War, massive artillery barrages caused the majority of casualties on the Western Front; indeed, many more soldiers were killed by falling debris than by small arms fire (cf. Middlebrook 1971).17 This was partly a consequence of the static fronts, which in turn compelled a change in military doctrine: whereas artillery had once primarily supported infantry manoeuvres, in stalemate warfare it became the paramount element in controlling battles. One American observer wrote that,

the artillery has now reached such a position of importance that successful attack or defense is impossible without it...Infantry officers do not hesitate to say that infantry should not leave its trenches until the artillery preparation has really smashed all targets...also, the infantry can advance only so far as their artillery can escort them with fire. (cited by Grotelueschen 2001, 5)

As the Allied adage went, ‘artillery conquers, infantry occupies’.

This new doctrine quickly depleted Britain’s stock of shells and caused political scandal in 1915. In response, David Lloyd George was appointed Minister of Munitions, but the Asquith administration nevertheless fell in 1916, replaced by one headed by Lloyd George himself (see Adams 1978). One task of his government was to better plan the strategic production and distribution of these shells. Britain would go on to produce nearly 260 million shells through the course of the war, underscoring the importance of artillery dominance.18 Once the shells arrived at the Western Front via rail, they were distributed to gun crews and a different kind of calculability took over. Tactically, the first gun in the battery would fire; forward observers would report the landing and gun crews would recalibrate; the process was repeated until the guns zeroed in on the target. To increase effectiveness, engineers and signal corps installed telegraph or telephone lines for forward observers, but these lines were often cut by enemy artillery fire and had to be repaired or replaced – a task made hazardous by enemy snipers and rifle fire.

When the communication infrastructure was intact, the staff at the artillery headquarters had to calculate targeting using variables like distance, elevation, charge, weather, height differences, and the distances between enemy and friendly troops. Ordinarily it took anywhere from 15 minutes to 1 hour to co-ordinate artillery strikes: after the initial call for artillery from the front line commander, the signal went to the staff headquarters, which calculated the trajectory and then relayed the information to artillery commanders. If any branch of the established command was out of contact, it was difficult to get artillery fire approved. Delayed and poor communication or incomplete geographic and weather information risked friendly fire incidents. Similarly, because of the slow turnaround times, front line commanders could not seize opportunities, as they might occupy ground set to be bombarded by their own side. Poor communication of calculations thus reduced operational effectiveness.
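To give a sense of the computation involved, consider the idealized flat-ground range equation from which a firing solution begins. This is a simplified textbook model offered only for illustration; wartime staff worked from pre-computed firing tables that layered empirical corrections for wind, air density, charge, and barrel wear on top of it:

```latex
R = \frac{v_0^2 \sin(2\theta)}{g}
\qquad\Longrightarrow\qquad
\theta = \frac{1}{2}\arcsin\!\left(\frac{gR}{v_0^2}\right)
```

Here R is the range to the target, v_0 the muzzle velocity, θ the barrel elevation, and g gravitational acceleration. Solving for θ from a reported range was the staff’s basic task, and each correction term added to this model lengthened the calculation, part of why co-ordination took from 15 minutes to an hour.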

After the war, there were several country-specific approaches to this problem. Common to most was the allocation of mortars to infantry units for line-of-sight operations, where teams could make accuracy corrections themselves, lessening their reliance on divisional command. The German and Soviet armies also developed short-range line-of-sight guns to support infantry units. Germany combined precision airpower with armour, and trained its officers to operate without divisional oversight so as to radically exploit battlefield opportunities. France expanded its staff and added long-range cannons to divisional artillery units, but this proved unable to respond to the rapidly moving fronts of 1940. The British, for their part, standardized their artillery and added mechanical calculation machines to allow artillery staff to calculate ballistics faster. They also sought to decentralize artillery command by providing radios to forward observers, thereby shortening the command structure.

While the US was a late entrant into the war, the American Expeditionary Force’s (AEF) frustration with trench warfare left a strong impression within the US military. Although the AEF observed, adopted, and incorporated elements of European artillery doctrine, its officers, especially General John Pershing, believed that this mode of warfare was ‘based upon the cautious advance of infantry with prescribed objectives, where obstacles had been destroyed and resistance largely broken by artillery.’ This over-reliance on artillery produced a conservative infantry, subject to ‘psychological effects’, that lacked the ability to mount a decisive offensive. Drawing upon established American military thought and practice in Mexico and the Indian Wars under Manifest Destiny, as well as the conditions on the Western Front, Pershing advocated aggressive infantry manoeuvres assisted by artillery to rout, pursue, and destroy enemies. He called this ‘open warfare’, the purpose of which was ‘to bring about a decision the [enemy] army must be driven from the trenches and the fighting carried out into the open’ and ‘an aggressive offensive based on self-reliant infantry’ (Pershing as cited by Grotelueschen 2001, chapter 1).

Upon review of its combat performance, the US undertook several research and development programs to make military power more effective. One initiative was to continue to develop armour, but as these weapons had to manoeuvre under battlefield conditions, there were limits to the calibre of guns that could be mounted on chassis. Another initiative installed automated analogue computation equipment on battleships so that rates of accurate fire could be increased (see Mindell 2002, chapter 2); however, the operational conditions of land warfare differed from those at sea, so this solution was not easily transferable. One notable difference from other military reconfigurations was that the US miniaturized radios to the point that they could be carried and operated by a single person. Radios were deployed at the company level, so field officers and NCOs could order artillery support, making them more self-reliant. To make this system more effective, the US pre-calculated ballistics data for any given scenario. This effort involved a small army of mathematicians and technical staff, and it motivated the construction of the ENIAC computer, which was commissioned to calculate firing tables but not completed until after the war. Additionally, throughout the 1930s the US War Department undertook a programme to survey parts of Europe in order to make extremely detailed and accurate maps. Altogether, this meant that US artillery was able to respond more quickly to calls for support than its military peers.

This brief overview of the development of early twentieth-century artillery warfare indicates several key initial developments in computational warfare in late capitalism, the general characteristic being an increase in the scope and scale of coordinated calculability between the strategic and tactical levels. Building upon these insights, in this chapter I advance a working conjecture: the development and expansion of detection and tracking media facilitates a social reorganization of coercive power that can in turn increase social stratification.

3.1 Cold War Social Science

The marriage of radio, surveying, and calculability was very successful for the US military in the Second World War. In the post-war period, the US sought to replicate this success in the mapping and calculability of populations during the ramp-up to the Cold War. To counter a rising USSR, the US required constant technological and intellectual innovation to compete and foster economic growth. One method to achieve this objective was to use universities as the foundational research and development arm for blue-sky military and corporate imperatives. Government and subsidized industry investments in academia sought to create a stock of exploitable ideas as components of strategic competitive advantage. Perceiving the character of this problem requires setting aside the objectified private agendas of various stakeholders, and instead seeing higher education as part of public and economic policy about knowledge production in support of imperial rule.

Roger Meiners (1995) points out that ‘By 1950 over $150 million a year was being spent by at least fourteen federal agencies, [while] over two–thirds of all budgeted university research came from federal money.’ Meiners continues,

Much of the initial funding for social sciences research came through the Department of Defense. The Office of Naval Research sponsored research in the fields of human relations, manpower, psychophysiology, and personnel and training. The Air Force, through the RAND Corporation and the Human Resources Research Institute, sponsored studies on topics such as group motivation and morale, role conflict, leadership, and social structure in the military community.

The initial post-war boom in veteran students aided this research agenda, as over one million extra students enrolled in 1947. Meiners (1995) notes that in 1946 total university enrolment stood at 2.6 million students, double the figure for 1938, while from 1956 to 1966 federal spending increased by nearly $3 billion (Brock 2010). Using the Servicemen’s Readjustment Act of 1944, the US subsidized about 2.2 million military personnel through higher education, many of whom would not have been able to attend otherwise (Olson 1973). Costing around $5.5 billion, this seemingly egalitarian public policy was designed to create a staff for the Cold War industrial enterprise.

Within this broader transformation of American social science, one notable initiative was the US funding of the Bureau of Applied Social Research (BASR) at Columbia University. For example, Elihu Katz and Paul Lazarsfeld attempted to understand the behavioural influence of mediated messages in mass print and broadcast communication in order to better influence target populations, irrespective of whether those populations were domestic or abroad (see Pooley 2008). Presuming quantitative survey methods to be more rigorous than other kinds of social inquiry, Katz and Lazarsfeld began refining public opinion research programs, importing techniques from actuarial statistics to test different messages and detect whether the composition of content registered different effects.

In a similar vein, Daniel Lerner’s The Passing of Traditional Society underscored the belief held by many US media researchers in the early Cold War that mass media could induce social transformations. Informed by his wartime occupation as a propaganda analyst in the Psychological Warfare Division (Shah 2011), Lerner’s book was the product of another notable BASR project, funded by the US State Department to assess the effectiveness of Voice of America in influencing public opinion in the Middle East. Lerner advances a psychosocial theory of modernization wherein groups moved ‘from farms to flats, from fields to factories’ (1958, 47). Urbanization would be a catalyst for the development of modern institutions, foremost of which was a market system. When combined with high rates of literacy and contemporary media consumption, Lerner proposed, this would create ‘empathy’, an effect whereby behaviour becomes associated with Western beliefs and values. This geopolitical theory, while less rooted in European colonial assumptions about racial attributes than pre-war social theory, nevertheless retains residual traces of racial superiority of the American variety, albeit coded in the language of cultural adaptability.

3.2 The Strategic Return to Centres of Calculation

While telling about the goals of the US, Katz and Lazarsfeld’s as well as Lerner’s programs do not match the scale of the US Army’s Human Terrain System (HTS). Initiated in 2006 and costing $725 million until discontinued in 2014, HTS was the most expensive social science programme ever undertaken. (Most of these funds went to two defence contractors, BAE Systems and CGI Federal.) Conceived at the US Army’s Training and Doctrine Command (TRADOC), then headed by General David Petraeus, the programme embedded social scientists in combat brigades, both to ensure better sociocultural understanding of the populations under occupation and to address institutional racism in the US Army.19 Together this social scientific ‘soft power’ was meant to aid counter-insurgency operations. Numbering more than 500 personnel at one stage, five-person HTS teams were embedded to collect data, gather information, and undertake psychological operations (see Nigh 2012; Human Terrain Team Handbook 2008). To outsiders these teams presented less lethal options for managing occupation, and they fitted into the population-centric counter-insurgency doctrine TRADOC was developing.20 Eventually 30 Human Terrain Teams were deployed in Iraq and Afghanistan; however, many personnel had inadequate language skills or lacked local cultural knowledge. Moreover, the programme was beset by accusations of institutional racism, of ignoring sexual harassment, and of participating in interrogations (Vanden Brook 2013). These problems might have been tolerated had the programme been an operational success, but brigade commanders found the HTS teams ineffective (Clinton et al. 2010).

Post-surge, as the US Army reduced troops in Iraq and Afghanistan, the HTS programme sought to retain relevance by repurposing itself to gather information about local populations in areas where the US Army anticipated conducting operations. But this redirection was not met with much enthusiasm as the US Army faced budget cuts, and so the programme ceased being funded. Roberto Gonzalez (2015) argues that another contributing factor was HTS’s close connection with Petraeus, who lost power after he was dismissed as Director of the CIA following the Petraeus-Broadwell scandal. Still, the decline of counter-insurgency operations and Petraeus’s scandal are secondary reasons for the cancellation. Rather, the cancellation represents, as Gonzalez writes, ‘the broad shift in Pentagon priorities, away from cultural intelligence and towards geospatial intelligence’ (2015). Notwithstanding the long association between anthropology and the intelligence community documented by David Price (2008, 2016), as one critical geographer writes, ‘It’s algorithms, not anthropology, that are the real social science scandal in late-modern war’ (Belcher 2013, 63). These priorities return intelligence collection to the strategic centre of calculation and to the various agencies of the state. The following sections are case studies of drone warfare and mass surveillance, used to analyse the ramifications of covert computation.

3.3 Automated Lethal Robotics

All branches of the US military are researching or seeking to develop robotic instruments of war. The US Navy is attempting to build armed submarines and helicopters such as the Fire Scout. At the time of writing, the US Marines are testing Gladiators, small tracked vehicles armed with machine guns that are intended to operate in front of advancing troops, while the Army uses Packbots to assist in bomb detection and detonation. Using funds provided by the Defense Advanced Research Projects Agency (DARPA), several companies are iteratively developing humanoid robots like Boston Dynamics’ Atlas. Bio-mimicry extends to pack animals such as the BigDog and drones that look like birds (McDuffee 2013). Suffice it to say that even if the military budget were to shrink, these kinds of robotic systems are deemed crucial pieces of future military capacity, force, and planning. To examine this trend, I use the case of drones. Here I follow Derek Gregory (2011) in understanding these technologies as part of a ‘scopic regime’, by which he means to draw attention to the specific techno-cultural manner of employing sensors and optics to display and coordinate warfare.

First used for tactical reconnaissance, drones have become a near indispensable battlefield technology with offensive capabilities. The offensive use of drones began in 2002, when a couple of strikes targeted Salim Sinan al-Harethi and Nek Mohammad—with an estimated High Value Target to Total Deaths (HVT:TD) ratio of 1:5—and escalated from 2005 onwards. Eventually, between 2009 and 2010, there were 161 strikes, killing 1,029 persons with an HVT:TD ratio of 1:147, suggesting indiscriminate targeting (Hudson, Owens, and Flannes 2011). And still, former director of the NSA and CIA General Michael Hayden has said that ‘Our tolerance for collateral damage is far too low.’
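As a rough arithmetic check on the cited figures, the 2009–2010 ratio implies that only a handful of those killed were high-value targets:

```latex
\frac{1{,}029 \text{ total deaths}}{147 \text{ deaths per HVT}} \approx 7 \text{ HVTs across } 161 \text{ strikes}
```

By these estimates, in other words, roughly 99 per cent of those killed in this period were not the high-value targets the strikes nominally pursued.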

The most recent phase of the drone program is characterized by an increase in attack frequency, the sanctioning of targets of opportunity, and likely larger payloads exacerbating civilian deaths. In 2011, the Obama administration announced plans to begin an aggressive new drone-warfare campaign against al-Qaeda in the Arabian Peninsula in Yemen and in Somalia (Mazzetti 2011), as well as providing drone support to foreign nations such as Uganda and Burundi, in addition to anti-piracy operations in the Indian Ocean (Turse 2011). Given the multiple areas of operation, state secrecy, and absent reports, it is difficult to estimate the number of casualties drones have caused.

Drone warfare has been marked by so-called ‘signature strikes’. Daniel Klaidman (2012) describes signature strikes as the ‘targeting of groups of men who bear certain signatures, or defining characteristics associated with terrorist activity, but whose identities aren’t known’, while Greg Miller describes them as ‘surgical, often lethal, and narrowly tailored to fit clearly defined U.S. interests.’ This is particularly distressing given that the US military is testing software that will program drones to automatically hunt, identify, and engage targets without a human pulling the trigger (Finn 2011). Combined with revelations about NSA mass surveillance, there is little to inspire confidence that future signature strikes will not automatically draw on big data gathered through data mining.

The Obama administration ruled out the use of signature strikes, preferring instead Terrorist Attack Disruption Strikes (TADS). However, as Miller reports, TADS are aimed at ‘wiping out a layer of lower-ranking operatives through strikes that can be justified because of threats they pose to the mix of U.S. Embassy workers, military trainers, intelligence operatives and contractors scattered across Yemen.’ But by that definition, TADS and signature strikes are practically one and the same (Miller 2012). If anything, one can infer from Miller’s report that the US has inserted trainers, operatives, and contractors into Yemen in an effort to erode the threat presented by al-Qaeda in the Arabian Peninsula (AQAP), likely inducing blowback in the process.

The American public is told that extrajudicial casualties from drones are primarily militants, but these claims remain unsubstantiated and under-investigated even as strikes have become routine (Sokol 2010). The Brookings Institution estimates that 10 civilians are killed for every militant. The official line seems similar to that offered in the Vietnam War: ‘anybody dead was considered a VC.’ This method is applied in areas as widespread as northern Mali, against al-Qaeda in the Islamic Maghreb, and the Philippines, against Abu Sayyaf and Jemaah Islamiyah (Oumar 2012; Ahmed 2012). The lack of judicial oversight, the superseding of legal constraints, extrajudicial killings, massive collateral damage, secret ‘kill lists’, and the uncertainty caused by the lack of transparency and accountability leave little information for a proper public debate. The Obama administration claims it follows strict internal reviews to prevent abuses, but there is no way to verify these claims. What has happened is the installation of an undemocratic and illiberal self-regulating centralized authority wielding lethal force. The intellectually responsible position, therefore, is to be suspicious of this politically centralized bombing.

The US state claims that greater transparency, while desirable, must be weighed against revealing the sources and methods of the intelligence community, and against the ‘requirement of non-acknowledgement’. The former reason seeks to preserve a tactical edge over enemies, but it does not explain why representatives or the judiciary cannot provide oversight. The latter reason indicates cooperation with other countries, whereupon operational involvement goes unacknowledged and official credit for tactical successes is taken by the host country. Here the Yemeni, Philippine, and Malian governments insist they carry out strikes to preserve their sovereignty, even while lacking the capability to do so (Booth and Black 2010). By not acknowledging external involvement, however, these governments withhold crucial information from their citizens.

Given the states in which they are used, drones destabilize already frail political systems, inflaming social volatility and isolating populations from political elites and governance structures that are seen as powerless to stop this terror (Crilly 2011). In Pakistan, for instance, the CIA wants the drone campaign to continue unabated, whereas the State Department argues that the drones risk destabilizing a nuclear power (Entous, Gorman, and Rosenberg 2011). As it has been conducted, drone warfare seems strategically misguided, lacks decisiveness, and incurs significant political and diplomatic costs. Target populations live in constant terror of being attacked. Non-combatant deaths and feelings of asymmetrical vulnerability create incentives for the target population, even those not ideologically sympathetic to local combatants, to retaliate against convenient targets. Altogether, drone warfare, rather than bringing stability, has simply compounded violence and instability. But it appears this cost is acceptable because it gives an under-informed public the impression that potential conflicts and attacks are being averted.

In 2011, the United States operated approximately 60 drone bases worldwide (Turse 2011; Whitlock and Miller 2011), and the Obama administration planned for more bases in Japan, South Korea, and Niger. Similarly, in the first half of 2013, the US Navy on separate occasions successfully launched and landed an automated X-47B drone from an aircraft carrier, and its software is being tested for in-flight refuelling. These developments can increase surveillance and reconnaissance capabilities, but to see them as isolated or minor events is to miss the point that they are a key part of a constantly expanding project of global surveillance, one that involves a complex labour process. To elaborate upon the last point, Gregory cites figures indicating that 185 persons are required to support one Predator drone flight (2011, 194). Military labour power, in other words, is still required to man ‘unmanned’ weapons systems.

Despite drones requiring good operating conditions (Turse 2012) and being easy to target, which necessitates deployment to safer operating areas, proponents promote drone warfare as more precise and discriminating, hence more militarily effective and even ethically obligatory (cf. Strawser 2012). They cite additional benefits such as payload variability for weapons and surveillance, as well as long range and extended flight times, all at a relatively low production and operating cost compared to manned aircraft (basic models cost $4.5 million). Proponents further suggest that the moral questioning of this mode of warfare is factually incorrect, confused, or misguided. For instance, Peter Beaumont (2012) does not distinguish between which weapons cause injury and death. He argues that the central question is whether a weapon system is used in line with prevailing international conventions and norms:

In conflict, within the existing framework of international humanitarian law, whether an attack is justifiable and legal is defined both by the nature of the target and proper consideration of whether there will be civilian casualties and whether they are avoidable. (2012)

Therefore, Beaumont concludes, ‘the notion of drone warfare [is] not more horrible than a Tomahawk cruise missile fired from a distant ship or a bomb dropped indiscriminately on a village by a high-flying F-22 or MiG.’ By inference, what matters is the existence of a targeted killing programme, not the instrument. Moreover, an excessive focus on the instruments blurs the key issue, which is the willingness to use deadly force to further imperial aspirations. The right question to ask of drone warfare, Beaumont thinks, is whether

as a military tool, drone warfare is actually effective; whether its use is justified when set against the political fallout that the drone campaign has produced and whether drones have actually reduced the threat posed by militants.

This subjective utilitarian view of military tools is not an engagement with morality and ethics, but simply a political calculation regarding technology use, where drones are just another tool for applying lethal force. In this respect, Joseph Singh, a researcher at the Center for a New American Security, sees no qualitative difference between drones and piloted aircraft in terms of the application of lethal force. He writes, ‘any state otherwise deterred from using force abroad will not significantly increase its power projection on account of acquiring drones’ (Singh 2012). Other commentators present the false choice between national insecurity and assassinations, as if there were no better ways to achieve security and peace. Another line of discussion presumes that drones are a moral imperative. ‘You can far more easily limit collateral damage with a drone’, former Secretary of Defense Robert Gates declared in 2013, ‘than you can with a bomb, even a precision-guided munition, off an airplane’ (Gates cited by Wolf and Zenko 2016). But this is a falsehood. Using publicly available data, Amelia Mae Wolf and Micah Zenko (2016) compared airstrikes and drone strikes, finding that ‘drone strikes in non-battlefield settings — Pakistan, Yemen, and Somalia — result in 35 times more civilian fatalities than airstrikes by manned weapons systems in conventional battlefields, such as Iraq, Syria, and Afghanistan.’ The ground truth reveals the equivocation of these bulk moral arguments.

Opponents of drone warfare, like Michael Ignatieff, suggest that drone proliferation has changed the nature of warfare (2012). In a passage worth citing at length, he writes

In his essay ‘Reflections on War and Death’, Sigmund Freud asks the reader what he would do if without leaving Paris he could kill, with great profit to himself, an old mandarin in Peking by a mere act of his will. Freud implies that the reader would not give much for the life of the dignitary. Imagine if great numbers could so exercise their will. What violence would be unleashed, how many prostrate bodies around the globe who never knew what hit them? (Ignatieff 2012)

The passage suggests that the ease of killing without consequence lowers the threshold for public acquiescence to conflict. Reduced-risk operations lessen the political aversion to commissioning attacks in official and unofficial conflict areas. This enables conditions where strikes become more frequent and militaries less prudent in their use of force relative to the industrial mode of war. This, in turn, contributes to and exacerbates existing conditions (such as political repression and famine in the case of Yemen, sectarian turf wars in the case of Pakistan, or a failed state in the case of Somalia), thereby producing more enemies. The deception is that ‘these new technologies promise harm without consequence’, but, Ignatieff says, ‘there is no such thing.’ Gregory provides a harrowing aphorism: ‘The death of distance enables death from a distance’ (2012, 192). Proponents of drone warfare miss the point that distance—physical and psychological—is an ethical matter.

In the final analysis, it appears as if foreign drone strikes serve two functions. The first is to engender domestic political satisfaction amongst an otherwise blasé public; the second is to use the greater part of the Middle East as a laboratory for operational testing in advance of future conflicts. Not to put too fine a point on it, the military adventurism in the Middle East is, in part, a technological proving ground for the other aspects of the New American Way of War. The apparent ease of operational deployment means that missions can be run with minimal accountability; hence, military force is more aggressive and less discriminating. This is important to consider given that military technological pathways are prone to becoming locked in by the market in one way or another. There is little to suggest that the effects of efforts to robotize the battlefield will be any different.

The current research agenda for drones includes automated lethality and the capacity to operate from aircraft carriers, as the development of the X-47B Unmanned Combat Air System demonstrates. To date, there has not been sufficient attention to the kinds of battlespaces involved, the kinds of weapon systems that could (and will) be deployed, or the vanishing boundary between domestic surveillance and battlefield deployments of combat systems like the X-47B. Absent too is a discussion of the extent to which the domestic deployment of drones as surveillance systems, in combination with handheld computing devices acting as de facto tracking devices, might erode liberties of all kinds. Another key area in which this line between domestic and foreign is being erased is the aforementioned cyber warfare.

To end this section, it is worth bearing in mind that while my discussion of automated robotics warfare has focused on drones, their use on the ground is just as significant. The US Army (2017) has a Robotic and Autonomous Systems Strategy that describes how ‘Unmanned Ground Systems’ can complement existing military labour by improving soldiers’ situational awareness and firepower. The mid-term goals are to ‘Increase situational awareness with advanced, smaller RAS and swarming; Lighten the load with exoskeleton capabilities; Improve sustainment with fully automated convoy operations; Improve maneuver with unmanned combat vehicles and advanced payloads’ (2017, 7). That most of these robotics are conceptually attuned to operating amid ‘increased congestion in dense urban environments’ (2017, 1) is telling of the US military’s thinking about the nature of future combat operations and kill chains.

3.4 Extrajudicial Drone Strikes

Vincent Mosco describes drone warfare as ‘a global system combining electronic surveillance and algorithmic decision making’ (2017, 2). As he correctly notes, the development of automated lethality and the deployment of drones cannot be disentangled from extrajudicial signature strikes in non-declared war zones that often result in significant civilian casualties. To begin, while periodically frowned upon, presidentially sanctioned assassination was a common tactic throughout the twentieth century—Eisenhower on Lumumba, Kennedy on Castro, and Johnson in Vietnam—but it has now come out of the shadows and is used to gain political capital and electoral clout. To justify this development, the Obama administration has written legal opinions, which it claims must necessarily remain secret. What details have been made available are limited; President Obama has attempted to reassure citizens that drone targets must pose ‘a continuing and imminent threat to the American people’. The White House maintains that ‘lethal force must only be used to prevent or stop attacks against U.S. persons, and even then, only when capture is not feasible and no other reasonable alternatives exist to address the threat effectively’ (White House 2013). This carefully worded criterion does not differentiate between an American citizen and an enemy combatant. When questioned by Senator Rand Paul as to whether the president could authorize a targeted attack against a US citizen in the United States, Attorney General Holder replied that there could be:

an extraordinary circumstance in which it would be necessary and appropriate under the Constitution and applicable laws of the United States for the president to authorize the military to use lethal force within the territory of the United States.

This reasoning implies that domestic drone strikes on American citizens are permissible in certain conditions. Moreover, it is indicative of a state mentality that has sought to expand mass surveillance through legal contortions that bear little resemblance to international norms of governance, transparency, and accountability, and that would likely make John Yoo proud. Peter Van Buren puts it brilliantly:

Prior to [al-Awlaki’s] killing, attorneys for his father tried to persuade a U.S. District Court to issue an injunction preventing the government from killing him in Yemen. A judge dismissed the case, ruling that the father did not have “standing” to sue and that government officials themselves were immune from lawsuits for actions carried out as part of their official duties.

This was the first time a father had sought to sue the U.S. government to prevent it from killing a son without trial. The judge did call the suit “unique and extraordinary,” but ultimately passed on getting involved. He wrote instead that it was up to the elected branches of government, not the courts, to determine if the United States has the authority to extrajudicially murder its own citizens.

The extrajudicial killing of an American citizen seemed to [the judge] to be nothing but a political question to be argued out in Congress and the White House, not something intimately woven into the founding documents of our nation. (Van Buren 2014)

Equally worrying is then-Attorney General Eric Holder’s 2012 interpretation of the Fifth Amendment, in which he said,

that a careful and thorough executive branch review of the facts in a case amounts to ‘due process’ and that the Constitution’s Fifth Amendment protection against depriving a citizen of his or her life without due process of law does not mandate a ‘judicial process.’

Effectively, the standards for due process—which supposedly curb the abuses and excesses of the state—are determined by the state itself, without judicial oversight. As we shall see in the following section, these actions cannot be disconnected from Holder’s extensive use of the Espionage Act to prosecute whistleblowers (see Carr 2012), nor his prosecutors from seizing records from journalists (see Bronner, Savage, and Shane 2013).

3.5 The Order of the Internet of Things

The US’s attempt to weaponize communication has long been a part of post-war politics and has shaped how the state imposes order. During the Cold War, for example, J. Edgar Hoover’s Federal Bureau of Investigation (FBI) deployed counter-intelligence programs to disrupt civil rights activists and the peace movement. Agents collected information on targeted individuals (up to half a million citizens) in order to discredit them. Pressured by the outrage following revelations about the scope and centralization of this intelligence gathering, the House and Senate Intelligence Committees became permanent features of Congress, and in 1976, Attorney General Edward Levi established guidelines to limit federal investigative powers. But this oversight and curtailing of power was motivated less by the revelations themselves than by the fact that Hoover’s FBI had turned its powers inward on the ruling class, transgressing the order of things.

The limitation on investigative power was temporary: beginning in Reagan’s first term, many suspended techniques were reauthorized in one form or another. This continued irrespective of which party controlled the various branches of power. For example, during the Clinton administration, the Communications Assistance for Law Enforcement Act (1994) required telecommunications companies to make their network designs accessible to law enforcement surveillance via backdoors. Following the Oklahoma City bombing, the Antiterrorism and Effective Death Penalty Act (1996) expanded this program, authorising targeted surveillance based not upon suspects’ acts but upon their associations. The response to 9/11 removed nearly all barriers to full-scale, state-organized data collection and created a funding boom as the newly established Department of Homeland Security sought to coordinate and install a digital surveillance apparatus. The Patriot Act (2001) allowed state agencies to visit public events and collect information on persons and organizations, even those that showed no criminal intent.

The basic contours of the mature state security institution began to be revealed within a few years of 9/11. In October 2004, New York Times investigative reporters James Risen and Eric Lichtblau discovered the NSA’s domestic warrantless surveillance programme. When asked for comment, the Bush administration pressured the New York Times to hold the story, claiming national security. Bill Keller, then executive editor, decided against publishing. It was only after learning that Risen planned to publish the material in a book—State of War—that the newspaper ran the story in December 2005. According to Risen, the decision involved deliberations that included Arthur Sulzberger Jr talking with President Bush in the Oval Office. Risen and Lichtblau’s reporting was subsequently awarded a Pulitzer Prize, but the expansion of the security state continued unabated. I now turn to provide a brief overview of that development.

While the NSA has a long history of information gathering, after 9/11 the agency greatly expanded, building facilities in Georgia, Texas, Alaska, Washington, and Utah, in addition to directing more resources to overseas stations. The agency’s goal is to pre-emptively monitor and identify any individual’s ‘communications fingerprints’. With a current budget of $10.8 billion per year and 35,000 workers, the NSA is a security leviathan. It undertakes mass surveillance for the White House, Pentagon, FBI and CIA, but also for the Departments of State, Energy, Homeland Security, and Commerce, and the United States Trade Representative. Yet despite this extensive service, these activities remain nearly invisible to the public, an opacity that is the inverse of the NSA’s own extensive efforts and ambitions ‘to answer questions about threatening activities that others mean to keep hidden’ (NSA 2007).

The agency’s intelligence programs include Social Network Analysis Collaboration Knowledge Services, which attempts to register social organization hierarchies; Dishfire, which collects and stores text messages; Tracfin, which records credit card transactions; and Orlandocard, which installs spyware on personal devices. These programs illustrate how surveillance has moved beyond the mandate of military advantage to encompass a survey of the general population, at home or abroad. Public records and third-party records from banks, social media sites, and GPS location data can augment these profiles (Risen and Poitras 2013a). Concern about these activities is downplayed on the grounds that what is collected is ‘just metadata’; yet even metadata is very revealing, as basic data analysis can be used to infer a person’s associates, build behavioural patterns, or predict actions. Indeed, General Michael Hayden, former director of the NSA and CIA, has said, ‘We kill people based on metadata’ (cited by Cole, 2014, 1). This does not bode well given the automated lethality discussed above. As such, US mass surveillance has established new norms that other states do and will follow, in effect making all traffic on the internet, private and public, fair game. Intrusive surveillance of this sort directly creates conditions in which citizens can easily be subjugated; the Snowden files show this is not an abstract threat.
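To make concrete why metadata is so revealing, consider a minimal sketch in Python. All the names and records here are invented, and this illustrates only the general technique, not any agency’s actual system: even bare call logs, with no content at all, expose a person’s associates and routines.

```python
from collections import Counter, defaultdict

# Invented call metadata: (caller, callee, hour of day). No content at all.
call_records = [
    ("alice", "bob", 9), ("alice", "bob", 21), ("alice", "carol", 9),
    ("bob", "dave", 13), ("alice", "bob", 21), ("alice", "erin", 21),
]

# Infer each person's associates from call frequency alone.
contacts = defaultdict(Counter)
for caller, callee, hour in call_records:
    contacts[caller][callee] += 1
    contacts[callee][caller] += 1

# Alice's closest associate, and her pattern of activity by hour.
top_associate = contacts["alice"].most_common(1)[0]
active_hours = Counter(h for a, b, h in call_records if "alice" in (a, b))

print(top_associate)     # ('bob', 3)
print(active_hours[21])  # 3 -- most of alice's calls happen in the evening
```

Six records suffice to name Alice’s closest contact and her evening routine; scaled to billions of records per day, the same arithmetic yields behavioural profiles of entire populations.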

An internal NSA strategy document (2012) reveals that the agency views its mission as ‘dramatically increas[ing] mastery of the global network’ and acquiring communication data the agency deems of strategic value from ‘anyone, anytime, anywhere’. Former NSA Director General Keith Alexander’s motto ‘collect it all’ best captures this directive (Greenwald 2014, 95). To do so, the agency has petitioned for legal and policy accommodations and adaptations, undertaken liberal interpretations of existing laws, or disregarded them altogether in pursuit of its objective. It has even spied on the standardization bodies that set encryption specifications. This aggressive surveillance has been rebuked by judges of the Foreign Intelligence Surveillance Court (FISC), even while that court has authorized the programs. This is a secret legal process, so citizens are unaware of the extent to which they have been subject to surveillance and their rights compromised. Nevertheless, all these actions are justified by appealing to the demands of the information age:

The interpretation and guidelines for applying our authorities, and in some cases the authorities themselves, have not kept pace with the complexity of the technology and target environments, or the operational expectations levied on the N.S.A.’s mission. (NSA 2012)

Still, the NSA has bragged about operating in ‘the golden age of Sigint’ (NSA 2012, 2). Similar to the Pentagon’s Human Terrain System—a militarized anthropology whose ostensible purpose is a computerized system of statistical demographic information on occupied populations, with the aim of providing actionable military intelligence—NSA projects seek to profile populations. One project, Mainway, was by August 2011 collecting data from nearly 2 billion phone records per day. From what little is publicly known, this project used Section 702 of the 2008 FISA Amendments Act to force American service providers to hand over data on Americans’ calls to foreign nations. The 2013 NSA budget requested funds to increase data collection capacities to record 20 billion events per day, as well as a system that can integrate different data streams within the hour to create bulk data, then share that data for more effective analysis (Risen and Poitras 2013a). Little else is known because FISC proceedings and rulings are classified.

To build upon this point, under current law aspects of the NSA’s data-mining practice are legally sanctioned (cf. Smith v. Maryland 1979 and the Patriot Act 2001) and are understood by the NSA to apply ‘without regard to the nationality or location of the communicants’ (as reported by Risen and Poitras 2013a). But prima facie this scope presents a serious attack on free speech and liberty. As Josh Levy crisply observes, ‘The chilling of free speech isn’t just a consequence of surveillance. It’s also a motive’ (Levy 2013). The constant threat of direct monitoring, with privacy rendered de facto non-existent, and the affective anxiety it causes, is anathema to liberty. Authoritarians claim these measures are for public safety, but in practice surveillance is internally directed to preserve the regime, not to ward off external threats. Such social conditions fracture civic life, as it becomes impossible to trust others. In addition, there is the prospect of evidence acquired without due process, or trumped-up evidence, being used in an attempt to forestall protest. The point is not whether this or that administration will or will not act in this way, but rather that the infrastructure is in place with the implicit, latent rationale that it ought to be used; the state establishes an infrastructure that it ‘won’t control’, rather than ‘can’t control’. These conditions are primed for institutional abuse.

When these items are discussed in public, the US state opportunistically mobilizes a rhetoric of national security interests, cyber warfare, and preventative security, exploiting public fears of terrorism to install ever more monitoring devices and to justify mission creep and security drift. Human rights language is also co-opted to justify security. But this is an inversion of what has actually happened: since launching the Global War on Terror, the US has pushed aside legal safeguards that protected civil liberties, subordinating them to the interests of the state. This has happened without public disclosure, and without robust and informed debate about the desirability and consequences of these goals and methods. When one examines the proportionality of policy actions, it is near obvious that what is taking place is systematic, pervasive surveillance. Arguably, contemporary surveillance is more pervasive than under most authoritarian and totalitarian regimes of the recent past. As Heidi Boghosian notes, ‘corporations and our government now conduct surveillance and militaristic counterintelligence operations not just on foreign countries but also on law-abiding U.S. citizens working to improve society’, and whole ‘lives are subjected to monitoring, infiltration, and disruption once they are seen as a threat to corporate profits and government policies’ (2013, 21). Indeed, the NSA has been collecting information in anticipation of discrediting dissidents, collection that fails to meet the standard of probable cause.

It would be unwise to underplay the danger and significance of this emerging capability to expand the range and kind of harm, and its implications for national and international security (for an extended treatment of this issue see Kello 2013): too much remains unknown about technological volatility and defence complications that could lead to strategic instability. Emblematically, there is tremendous confusion over Stuxnet, the first publicly disclosed cyber weapon. Due to a lack of information about the weapon itself, there are many unanswered questions about who deployed it and the extent of the sabotage done to the Natanz uranium-enrichment plant and the IR-1 centrifuge control system (Langer 2013). Mass surveillance and Stuxnet have initiated a cyber arms race to build capacities, gather resources, and train staff. But this is a race with no direction and without an understanding of pace.

Despite the NSA’s efforts to reassure American citizens that its actions are not as nefarious as press reports indicate, and that all data queries relate to foreign intelligence efforts such as counterterrorism, counterproliferation, and cybersecurity, time and time again claims about the NSA’s lawfulness and conscientious protection of civil liberties are demonstrated to be false. Similarly, its claims of thwarting attacks are drastically overstated. In a recent ruling, Foreign Intelligence Surveillance Court Judge John Bates painstakingly catalogued ‘pervasive violations’ of previous court orders, rampant ‘unauthorized electronic surveillance’ of US citizens, and a ‘history of material misstatements’ about how NSA programs worked (Bates as cited by Gosztola 2013). In short, the agency has a credibility gap.

The danger of massive data-gathering exercises takes on another dimension as domestic government agencies begin acquiring drone programs to assist with law enforcement. For instance, the FBI, Homeland Security, and Coast Guard deploy these resources for border patrol and drug interdiction. It would be an error to downplay these concerns: in previous instances the NSA has shared criminal evidence with law enforcement agencies, who then misattribute the source of their information, retroactively manufacturing legal chains of evidence to justify arresting a suspect (Menn 2013). This makes a mockery of due process principles.

The NSA also partners with universities. Consider, likewise, the Minerva Research Initiative, a DoD research programme that funds university research into the population and media dynamics of civil unrest. With an allocated budget of $75 million over 5 years, Minerva’s aim is ‘to improve DoD’s basic understanding of the social, cultural, behavioral, and political forces that shape regions of the world of strategic importance to the US.’ A typical example is a Cornell-based project that uses ‘digital traces’ to model ‘the dynamics of social movement mobilisation and contagions’ in order to determine ‘the critical mass (tipping point)’. Case studies include ‘the 2011 Egyptian revolution, the 2011 Russian Duma elections, the 2012 Nigerian fuel subsidy crisis and the 2013 Gezi park protests in Turkey.’ Another project, based at the University of Maryland, aims to understand how climate change influences civil unrest. Such projects conduct the ‘study of emotions in stoking or quelling ideologically driven movements’ in order to counteract grassroots movements. Most notably, in 2012 university-based researchers used Facebook’s privacy policy to skirt informed consent and conducted an experiment in which users’ timelines were modified to measure how ‘emotional contagion’ spreads (Kramer et al. 2014). One of the lead authors, Jeffrey Hancock, had previously worked on other Minerva-funded projects such as Modeling Discourse and Social Dynamics in Authoritarian Regimes.21
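The ‘critical mass (tipping point)’ language suggests threshold models of collective behaviour in the Granovetter tradition. The following is a hedged sketch of that general idea only; the thresholds and numbers are invented, and this is not the Cornell project’s actual model.

```python
def cascade_size(thresholds, seeds):
    """Each person joins a movement once the number of current
    participants meets their personal threshold; iterate to a fixed point."""
    adopters = set(seeds)
    changed = True
    while changed:
        changed = False
        for person, t in enumerate(thresholds):
            if person not in adopters and len(adopters) >= t:
                adopters.add(person)
                changed = True
    return len(adopters)

# Ten people where person i joins after i others have: one seed tips everyone.
full = cascade_size(list(range(10)), seeds=[0])       # 10
# Uniform threshold of 3 with one seed: below critical mass, the cascade stalls.
stalled = cascade_size([3] * 10, seeds=[0])           # 1
# Three seeds reach the critical mass, and the whole population follows.
tipped = cascade_size([3] * 10, seeds=[0, 1, 2])      # 10
```

The model’s point, which is what makes it attractive for counter-mobilization research, is that tiny differences in initial participation separate a stalled protest from a mass movement.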

Social media research is not limited to universities acting on behalf of the security forces; sometimes the security forces conduct this research directly. For example, the Intelligence Advanced Research Projects Activity programme examines Twitter to predict civil disorder. General Michael Flynn, then director of the Defense Intelligence Agency, is on record as indicating that social media has opened up new areas of inquiry. ‘The information that we’re able to extract from social media’, he said, ‘it’s giving us insights that frankly we never had before’ (see Tucker, 2014, 1). Each individual project looks inconspicuous, but much like the development of military hardware at US universities during the Cold War, when seen in totality it is anything but. To one analyst’s eyes, ‘Minerva is farming out the piece-work of empire in ways that can allow individuals to disassociate their individual contributions from the larger project.’

These cases show how the US continues to weaponize social science, using it as an instrument of imperial rule. The practice has drawn criticism from the American Anthropological Association (AAA), which argued in the case of the HTS that the Pentagon lacks ‘the kind of infrastructure for evaluating anthropological research’ and called for such research to be overseen by the National Science Foundation (NSF). Accordingly, the DoD and the NSF signed a memorandum of understanding to cooperate on Minerva. But, as the AAA writes, this arrangement ‘undermines the role of the university as a place for independent discussion and critique of the military’. It seems the horse has already bolted: the American Psychological Association has sought to protect James Mitchell and Bruce Jessen, psychologists who assisted the CIA in its torture programme (in addition, members of the American Medical Association were present at torture sessions).22 Republican Senator Tom Coburn, for his part, made various proposals to restrict political science research to areas that provide benefits for national security.23 These developments wither social science.

The security forces have partnered with ICT companies to use their resources, sometimes via political pressure to provide keys to their encryption, other times via court orders to install backdoors into software. Still, cooperation is not only compelled; reports indicate that the CIA pays AT&T about $10 million for metadata searches (Savage 2013). As AT&T provides infrastructure for other telecommunications companies, it is able to provide information on those who use that infrastructure, not just its own customers. As the CIA is prohibited from domestic surveillance, the contract has ‘safeguards’ to ensure privacy protection for international calls with one end in the US: AT&T is said to ‘mask’ several digits of the phone numbers. But given database triangulation, this is hardly a barrier. Besides, as Savage reports, there remains the possibility of inter-agency cooperation whereby the CIA can refer these numbers to the FBI, which can then subpoena AT&T for the uncensored data (Savage 2013). AT&T has a history of extensive cooperation with the state: it facilitated the Bush administration’s warrantless wiretapping program and embedded employees with the FBI and DEA.
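A small sketch shows why masking ‘several digits’ is such a weak safeguard. Everything here is invented for illustration (the masking scheme, the numbers, and the subscriber database are assumptions, not AT&T’s actual practice): joined against any sizeable database, a masked number leaves only a handful of candidates, and a single auxiliary attribute pins the match.

```python
def mask(number, keep=6):
    """Reveal the first `keep` digits and mask the rest, e.g. '202555XXXX'."""
    return number[:keep] + "X" * (len(number) - keep)

# A hypothetical subscriber database mapping numbers to one known attribute.
subscribers = {
    "2025550143": "Georgetown",
    "2025550199": "Anacostia",
    "2125550101": "Manhattan",
}

masked = mask("2025550143")  # '202555XXXX'

# Triangulation step 1: only numbers consistent with the mask survive.
candidates = [n for n in subscribers if mask(n) == masked]

# Step 2: one auxiliary fact -- where the caller is active -- completes it.
match = [n for n in candidates if subscribers[n] == "Georgetown"]
```

With three subscribers, two candidates survive the mask and one extra fact re-identifies the caller; with millions of subscribers the arithmetic changes, but the principle of narrowing by joined attributes does not.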

Data companies collect and amalgamate online and offline information to understand behaviour. ChoicePoint, owned by Elsevier, maintains 17 billion records on businesses and individuals. Or consider that one of the leading companies in this area, Acxiom, processes about 50 trillion data transactions per year and averages 1,500 pieces of data per consumer. These pieces come from tracked online information combined with public records such as credit reports, criminal records, and Social Security numbers to build a profile of a person, making genuine anonymity almost impossible. Reminiscent of Alexander’s remarks above, Scott Howe, Acxiom’s CEO, has said, ‘Our digital reach will soon approach nearly every Internet user in the US.’
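The claim that amalgamation makes genuine anonymity almost impossible can be made concrete with a hedged sketch of quasi-identifier linkage. The records below are invented; Latanya Sweeney’s well-known research found that ZIP code, birth date, and sex alone single out most Americans.

```python
# Public records a broker might hold, keyed by name.
public_records = [
    {"name": "J. Smith", "zip": "02139", "birth_year": 1964, "sex": "F"},
    {"name": "R. Jones", "zip": "02139", "birth_year": 1971, "sex": "M"},
    {"name": "A. Brown", "zip": "60601", "birth_year": 1964, "sex": "F"},
]

# An 'anonymous' behavioural profile: no name, only quasi-identifiers.
profile = {"zip": "02139", "birth_year": 1964, "sex": "F",
           "interests": ["oncology clinics", "loan refinancing"]}

# Linking on the quasi-identifiers re-attaches a name to the profile.
keys = ("zip", "birth_year", "sex")
matches = [r for r in public_records
           if all(r[k] == profile[k] for k in keys)]
```

No single dataset here is identifying on its own; it is the join across datasets, which is precisely the brokers’ business model, that dissolves anonymity.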

The scope of this information brokerage rivals that of the NSA, yet the business remains almost entirely unregulated and little understood by the public. Data collection happens through consent to the direct exchange of data in return for services, but also through passive collection by private entities via unknown, little-known, or involuntary means. It may seem as if the data brokers do not endanger human rights, at least relative to governmental surveillance, but this neglects the fact that US federal and local agencies purchase data from these sources. In this respect, government access to customer data blurs the lines between agencies tasked with serving the public and corporate profit-seeking. The consequence of aligning consumer marketing and state security is possible inference-based discrimination or police targeting.

There are uncomfortable relationships between corporate surveillance of consumers and workers and the US government’s domestic surveillance of citizens. Telecommunications companies and retailers routinely capture everyday consumer data and hand it over to the state. In the wake of the Snowden revelations, ICT companies proclaimed, as a public relations exercise, that they comply with the law; but as the AT&T-CIA case shows, voluntary cooperation continues. Moreover, statements of legal compliance can be misleading insofar as broadly applicable laws have not been revised in light of technical developments. Where there are revisions, the new statutes often cater to the ruling class’s interests. Consider that companies like Booz Allen Hamilton sponsored legislation such as the Digital Accountability and Transparency Act of 2014 and the Cybersecurity Information Sharing Act of 2015. While there has been targeted protest against bills like CISPA and SOPA, this activism has not generally sought to situate such legislation as but the latest iteration of a drive to formally entrench the security state. This is worrying: given that US legislators are woefully under-informed about mass surveillance, and so susceptible to manipulation campaigns, it is difficult to justify this public policy as having democratic legitimacy.

While the NSA has claimed great success, there is little evidence to support these claims. As Democratic Senators Ron Wyden, Mark Udall, and Martin Heinrich (2013) made clear in a New York Times op-ed,

The usefulness of the bulk collection program has been greatly exaggerated. We have yet to see any proof that it provides real, unique value in protecting national security. In spite of our repeated requests, the N.S.A. has not provided evidence of any instance when the agency used this program to review phone records that could not have been obtained using a regular court order or emergency authorization.

Despite the massive investment of funds and resources, investigations have yet to show that the NSA’s programs have yielded results that stopped terror activities. Even with intensive and extensive surveillance in Afghanistan, there was little tactical success, nor enough insight to produce strategic success (Savage and Weisman 2015).

However, even if mass surveillance did meet its ostensible goal, and even if there were public oversight, it would still be right on principle to oppose it, because it creates docile, non-threatening, and productive subjects. Glenn Greenwald puts it well:

The danger posed by the state operating a massive secret surveillance system is far more ominous now than at any point in history. While the government, via surveillance, knows more and more about what its citizens are doing, its citizens know less and less about what their government is doing, shielded as it is by a wall of secrecy. (2014, 208–209)

Debates about efficacy miss the point that mass surveillance unduly infringes upon a person’s dignity. Constant surveillance and monitoring induce people to perform a particular kind of subjectivity, limiting the scope for dissent or plain difference of opinion. The shrugged response that those with ‘nothing to hide’ have ‘nothing to be afraid of’ underestimates the extent to which people monitor their own actions because they do not want to attract the attention of the state. In other words, they go out of their way to do nothing contentious. But when people face the prospect of authorities holding them accountable for specifically framed records, it is nothing less than a direct attack on their freedom of speech, belief, and conscience: in short, their very personhood.

As vast portions of people’s lives are digitally mediated, it is near impossible to live without constantly sharing data, or without recording and conducting activities on digital devices. The current Supreme Court ruling on data indicates that authorities must obtain a warrant to search a cellphone, but Fourth Amendment ‘expectations of privacy’ are forfeited when that information is transmitted. With real-time transmission, this practically means there is no expectation of privacy: every person’s cellphone is turned into a tracking device, making them susceptible to dragnet data collection. This underscores the point that mass surveillance and the interception of communications is not selective. It operates on a presumption of guilt, grants no respect to privacy rights nor any need to justify interference, and thus nullifies civil liberties. As networked computing is central to economic activity and social life, there is no practical distinction between the offline and online worlds. In this respect, digital liberties are civil liberties, and their widespread compromise is unacceptable.

Others acknowledge state monitoring but downplay the need for absolute privacy; privacy, on this view, is a negotiation of selective disclosure in exchange for access to digital services. But this requires that a person be digitally literate about the implications of what they are granting access to, and that opt-outs be available. Still, the worry is about the breach of rights through undue, pre-emptive data recording that tracks every aspect of a person’s life. Given that the value of data lies in its secondary application, it is impossible to specify any one particular risk at this point. That said, one can generally anticipate some: the consequences of near-continuous, ever-present surveillance through ubiquitous handheld cellular devices, internet browsing, and sensors, which has normalized a culture of obedience, are anathema to a democratic society and will ultimately prove corrosive to meaningful social relations.

While their disclosures provide a partial overview of the NSA’s operations, whistleblowers like Edward Snowden and journalists like Risen reveal how the security state, with the assistance of corporations such as AT&T, Facebook, Google, and Verizon, has built an extensive intelligence-gathering infrastructure using programs like PRISM, XKeyscore, and other strategic information operations to build dossiers. When whistleblowers speak up or journalists investigate these actions, agents of the state use intimidation tactics or character assassination. As Chelsea Manning, Julian Assange, Edward Snowden, and Barrett Brown can attest, incarceration or exile are also viable options. Consider the treatment of Risen. In State of War he details how Operation Merlin, a covert CIA operation undertaken during the Clinton administration to delay the Iranian nuclear programme, had the opposite effect.24 Following publication, both the Bush and Obama administrations undertook a protracted effort to pressure Risen to reveal his sources. Using the Espionage Act of 1917, the Justice Department indicted Jeffrey Sterling, a former CIA officer. While there was evidence of correspondence between Risen and Sterling, pre-trial filings indicated that the Justice Department believed that to convict Sterling it required Risen to testify: because Sterling had revealed classified material in interviews, Risen was an eyewitness to the felony. Although after several years the Justice Department conceded that Risen could avoid testifying about his source, this framing of the case, and the protracted pressure, implies that the US deems reporting on classified material an act of co-conspiracy in espionage.

Without a federal shield law, reporters claim First Amendment protection, but Branzburg v. Hayes (1972) is often interpreted to mean that journalists have no special privilege exempting them from testifying in a criminal case.25 Justice Powell did indicate that any such privilege,

should be judged on its facts by the striking of a proper balance between freedom of the press and the obligation of all citizens to give relevant testimony with respect to criminal conduct. The balance of these vital constitutional and societal interests on a case-by-case basis accords with the tried and traditional way of adjudicating such questions.

This is hardly reliable protection, particularly when Eric Holder’s Department of Justice prosecuted and imprisoned a number of people for disclosing classified information to the press, even when it concerned matters of public interest like warrantless surveillance or CIA torture programs. Indeed, in the course of these efforts the Justice Department undertook extensive wiretapping and recording of Associated Press reporters to investigate leaks. When this became public knowledge, the Justice Department issued new guidelines in July 2013 for dealing with the press in investigations, but these extend protection only to reporters engaged in ‘ordinary’ newsgathering, a category itself undefined and so open to draconian law enforcement. This raises the question of whether the DoD or the Justice Department considers protest movements and social activism—which are normatively vital for a democratic polity—a threat to national security.

To conclude, part of the success of the US security state has been its ability to mobilize privately organized industrial strength and to direct the dividends to new technological developments, cementing state-capital relations in the military-industrial complex while periodically intervening in popular culture to create soft power. The state’s control of data presents opportunities to limit dissent and marginalize internal rivals, while the commodification of data points to the urgent need for sustained digital liberties activism. While one should not discount the role of lobbying done by the emerging cyber-industrial complex, or the politics involved, it is clear that the US has ‘weaponized the internet’ (Weaver 2013), making it an instrument of control and oppression. It is therefore important to underscore Zeynep Tufekci’s (2014b) point that ‘How the internet is run, governed and filtered is a human rights issue.’ At stake is the very moral agency of people’s lives, and the infringement of a person’s rights by a state set on reducing citizens to nothing but subjects.

How to cite this book chapter:

Timcke, S. 2017 Capital, State, Empire: The New American Way of Digital Warfare. Pp. 55–76. London: University of Westminster Press. DOI: https://doi.org/10.16997/book6.d. License: CC-BY-NC-ND 4.0

Additional Information

ISBN: 9781911534372 (related ISBN: 9781911534365)
OCLC: 1065770188
Pages: 55–76
Launched on MUSE: 2018-11-17
Language: English
Open Access: Yes