Artificial Whiteness: Politics and Ideology in Artificial Intelligence
by Yarden Katz
ISBN: 978-0-231-19491-4
Reviewed by Gregory Laynor

The recent proliferation of books, white papers, and think tank discussions on algorithmic bias, AI ethics, and AI for good might already be more than anyone could keep up with in one human lifetime. Yarden Katz's Artificial Whiteness: Politics and Ideology in Artificial Intelligence asks why there is suddenly so much talk about making AI ethical, fair, and good. The book begins by tracing the history of the label "artificial intelligence" and its usage, as it has fallen in and out of fashion over the decades since its initial appearance in the mid-1950s. Katz, who works in systems biology at Harvard Medical School and has done work under the label of AI, now questions the very use of that label.

The book examines the shifting definitions of AI and the varying uses of the label to brand different technologies. Katz finds that "attempts to ground AI in technical terms, along a set of epistemic considerations or even scientific goals, could never keep this endeavor going" (164). What, then, explains the appeal of the AI label? What keeps the endeavor going? Katz's way of answering these questions might seem outlandish. The author turns not to thinkers typically associated with AI but to Toni Morrison, Herman Melville, W. E. B. Du Bois, and Cedric Robinson. Reading these writers on whiteness, Katz noticed a resemblance to artificial intelligence. Like whiteness, artificial intelligence "gets its significance, and its changing shape, only from the need to maintain relations of power" (164).

Artificial Whiteness is not another book of AI ethics or AI for good. It is a critical genealogy of AI that is informed by scholarship on race. By looking at multiple projects over time that have used the AI label, Katz finds that their commonalities are less technical than ideological and financial. The book documents what different approaches to AI have had in common: funding by the US military-industrial complex and marketing that uses the tropes of white settler–colonial manifest destiny. AI has an ideological life of its own beyond any particular computing process.

The AI label came back into fashion in the mid-2010s. Projects previously described as Big Data began to rebrand as AI. Reports of platform companies using user data for behavioral manipulation and Edward Snowden's revelations about NSA surveillance generated public scrutiny. Rebranding Big Data as AI, in Katz's view, helped deflect public scrutiny and change the conversation to be about "futuristic machines" (69). We are now in a moment when, as Katz notes, "'AI' is applied to projects that use well-worn computer technologies that do not depend on either recent developments in parallel computing or particularly large data sets or neural networks" (68).

Artificial Whiteness provides a way of seeing AI as an ideology: a political and economic project that presents itself as technology. Katz outlines the ideology of AI in terms of three "epistemic forgeries" (94). The first forgery is the idea that AI is universal, as if it possesses intelligence beyond any social context. The second forgery is the idea that AI surpasses human thought, as if human thought were only a calculation in a controlled setting, like a game. The third forgery is the idea that AI arrives at knowledge on its own, as if AI's developers were not responsible for setting the conditions under which AI arrives at its knowledge.

There is a growing awareness of algorithmic bias, along with efforts to correct bias. Artificial Whiteness discusses, for example, projects that aim to improve machine learning for facial recognition so as to better recognize race and gender. Katz, however, warns that improving facial recognition's sensitivity to a greater diversity of faces "ultimately enhances the carceral eye" (178). It is troubling to read that correcting algorithmic bias in facial recognition systems might exacerbate incarceration and policing, but it is a crucial reminder that oppression is...