
No Man’s War: Will Robots Fight the Battles of the Future?
Jennifer Conrad
Review of Paul Scharre, Army of None: Autonomous Weapons and the Future of War (New York: W.W. Norton & Company, 2018).

In 1139, Pope Innocent II called for a moratorium on the use of crossbows—at least against fellow Christians. It did not work. In fact, throughout history, military and political leaders have tried with varying degrees of success to regulate the newest deadly technologies, from a pre-modern ban on poisoned arrows to ongoing concerns over nuclear proliferation. Today, within the US Department of Defense and around the world, conversations about regulating new forms of warfighting technology concern limits on autonomous or semi-autonomous weapons, especially when it comes to life-or-death decisions.

These new tools of war offer both immense promise and grave consequences, as recounted in Paul Scharre’s Army of None: Autonomous Weapons and the Future of War. Scharre’s book, which recently won the William E. Colby Award, explains what these weapons are, how they may be used, and the questions surrounding their regulation. At the heart of these debates is the simple and scary question: should we let robots decide whom to kill?

Are we looking at a future in which, with a few taps on a computer, someone sitting in the Pentagon can dispatch an army of killer robots to fight our wars for us? While a scenario out of The Terminator—a film Scharre references several times—is far-fetched, advances in lethal autonomous weapon systems (LAWS) do bring new moral quandaries to the forefront. In June 2018, after an outcry from employees, Google announced via a blog post by CEO Sundar Pichai that it would not work on “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”1

Before delving into the ethical and policy implications, it is important to understand how such weapons work and to develop a vocabulary for describing their capabilities. Scharre, a senior fellow at the Center for a New American Security and former Army Ranger, dedicates the first part of his book to unpacking the current state of the technologies and taking readers inside the labs at the forefront of their development.

Visiting the Naval Postgraduate School in California, Scharre watches a dogfight between sets of Styrofoam drones preprogrammed to collaborate as a swarm, with no further instruction from controllers on the ground. As scientists develop machines that work together, the collaboration can take a number of forms, from strict hierarchies to a more decentralized approach called “emergent coordination,” in which the machines reach a kind of consensus without a set leadership structure, much as a colony of ants organizes itself.2 At the Defense Advanced Research Projects Agency (DARPA), where past researchers laid the groundwork for the internet and GPS, teams today are studying questions such as how deep neural networks can improve the ability to recognize targets in distracting or decoy-filled environments.

The term “killer robots” has become a catchall phrase, especially among opponents of such weapons, but there is no single definition of autonomy. While it’s fairly uncontroversial to use unmanned drones to silently track the movements of terrorists from above or to patrol the oceans, beyond that the lines become less clear. Upon completing a task, a machine may require human permission for further action (semi-autonomous, or “human in the loop”), or it may act on its own under human oversight, for example homing in on a predetermined target with the option for an operator to pull the plug (supervised autonomous, or “human on the loop”). Machines may also carry on without outside input, as in so-called loitering munitions that scan a designated area for targets (fully autonomous, or “human out of the loop”). This last category, which includes weapons like Israel’s Harpy drone that can search for and destroy targets on their own, causes the most alarm.

Within the US military, there is resistance to taking humans out of the decision-making process. Current defense policy (Department of Defense Directive 3000.09), which...
