
The Promise of Artificial Intelligence: Reckoning and Judgment by Brian Cantwell Smith, and Cloud Ethics: Algorithms and the Attributes of Ourselves and Others by Louise Amoore
Reviewed by Elliott Hauser
The Promise of Artificial Intelligence: Reckoning and Judgment
by Brian Cantwell Smith
MIT PRESS, 2019, 184 PP.
HARDCOVER, $24.95
ISBN 978-0-262-04304-5
Cloud Ethics: Algorithms and the Attributes of Ourselves and Others
by Louise Amoore
DUKE UNIVERSITY PRESS, 2020, 232 PP.
PAPER, $25.95
ISBN 978-1-4780-0831-6

Brian Cantwell Smith’s latest work is a brief but serious engagement with the history and philosophy of artificial intelligence (AI). Its central thesis is that AI systems as we currently know them are excellent at a kind of informed calculation, which Smith terms reckoning, but that they’re still far from being able to form the situated understanding of consequence typical of human decision making, which he terms judgment. With these two concepts as a scaffold, Smith embarks on an ambitious and brisk trip through AI’s history, its present, and its future, providing evidence for his core thesis and ultimately offering some initial prescriptions for how best to utilize AI for the benefit of society.

Smith builds his simple framing of reckoning and judgment into a tight yet powerful conceptual scheme for analyzing AI in relation to the human. The human capacity for judgment arises, Smith argues, from a normative deference toward the world. It is precisely this constituent of genuine intelligence that Smith claims AI systems do not yet possess and predicts they will not possess for the foreseeable future. Smith makes extensive use of concepts developed in his prior work, most notably On the Origin of Objects (1996), such as registration, objects, and ontological schemes. This preexisting conceptual machinery evolves here into tools for distinguishing human-style judgment from machine-style reckoning and for explaining both commensurably. Humans who hold their concepts accountable to the world have a sense of the stakes of their actions that is conspicuously absent from computers. In the words of John Haugeland, a major influence on Smith, computers “don’t give a damn” (108). Smith elaborates on what “giving a damn” means in this context and shows how to determine when a system, human or otherwise, can be said to be capable of judgment.

Smith covers the failure of Good Old Fashioned AI (GOFAI), the rigid Knowledge Representation–focused AI of the 1970s and 1980s, in a treatment informed by that history and illuminated by his framework (chapters 2–4). GOFAI systems were incapable of dealing with anything that was not hard-coded into them ahead of time. In Smith’s terms, GOFAI systems were merely registering human registrations. Most damningly, these systems’ designers assumed that the world itself was neatly divisible into distinct objects with unambiguous properties, an assumption Smith traces to Descartes and certain brands of philosophical realism. The kinds of systems that might suggest placing a kidney in boiling water to treat an infection inevitably made egregious errors, because their connection to the world we actually live in, where boiling water both cures the infection and kills the patient, was unavoidably shallow and rigid. Such systems, starved of any ability to register the world directly, had to be hand-fed increasingly verbose yet shallow encodings of human registrations to correct these rigidities with ever more finely wrought rigidities. GOFAI systems ultimately found their uses and live on in technologies such as the Semantic Web, but they fell far short of what most would consider to be the promise of AI.

Leaping ahead a decade or three, Smith acknowledges the staggering successes of “second wave AI,” such as deep learning, but uses his framework to qualify them as successes of reckoning and higher-fidelity, conceptually open registration of the world (chapter 5). Algorithms that work on large amounts of low-level data don’t require the kind of ontological scaffolding that GOFAI approaches did and don’t assume the world to be made of well-defined objects. Instead, they are capable of subconceptual nuance and use this nuance to produce semantically meaningful computations that align remarkably well with specific facets...
