
Think Piece: On the Cruelty of Really Writing a History of Machine Learning

Aaron Plasek

We’ve grown accustomed to speculative narratives about the birth of artificial intelligence (both on the screen and the page) in which computers programmed by us to teach themselves quickly exceed our own mental faculties and physical resources. The idea that an “ultraintelligent machine” will “gradually improve itself out of all recognition” is nearly as old as the term AI itself.1 It has gained prominence in the popular imagination through a cottage industry of books, articles, and think pieces advocating for the study of safe AI. The idea is to defer or defuse entirely an inevitable apocalypse, and it’s prompted at least one multimillion-dollar donation “aimed at keeping AI beneficial to humanity.”2

The reality is that no one is any closer to implementing a contextually nimble machine learning system that could, say, engage us in an exegesis of a poem than when Alan Turing first proposed this thought experiment for machine intelligence in 1950.3 One of the most exciting implementations of a “general purpose” algorithm was one that learned to play 29 Atari video games at “human-level or above” proficiency.4 That is, what is “general” is its ability to learn different games while having access only to the pixels on the screen and the controller inputs, and being programmed only to maximize score. This was such an impressive advance over earlier work that it was featured on the cover of Nature in 2015.5
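To make the narrow shape of that “generality” concrete, the sketch below shows the agent’s entire view of the world under this setup: screen pixels in, controller inputs out, score as the only feedback. It assumes the gymnasium and ale-py Python packages, and it omits the learning algorithm itself (the deep Q-network of the work cited in notes 4 and 5); it illustrates the interface, not the cited system.

```python
# A minimal sketch of the interface described above, assuming the gymnasium
# and ale-py packages. The agent observes only raw screen pixels, emits
# controller inputs, and receives the change in game score as its reward.
import ale_py
import gymnasium as gym

gym.register_envs(ale_py)  # make the Atari (ALE) environments available

env = gym.make("ALE/Breakout-v5")  # observations are raw frames
observation, info = env.reset()

total_score = 0.0
for _ in range(1000):
    action = env.action_space.sample()  # placeholder for a learned policy
    observation, reward, terminated, truncated, info = env.step(action)
    total_score += reward               # the only training signal available
    if terminated or truncated:
        observation, info = env.reset()

env.close()
print(total_score)
```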

The failure to appreciate this point has contributed to myopia in the popular histories of AI that rely on AI researchers as informants while downplaying the enormous body of technical work these informants produced, often relegating the field of machine learning to a mere subfield of AI. The actual historical situation, in terms of the sheer volume and ambit of technical publications produced, suggests the opposite to be true: machine learning has always been center stage, while AI within the larger field of computer science has often had the status of a disciplinary backwater.

Devil’s in the Data

We need better ways to discuss how machine learning systems are being integrated into our economic models, political rhetoric, and legal frameworks now, rather than in some speculative future. Learning systems already deployed are disparately perpetuating and even reifying systemic prejudices and historical biases, often with those most adversely affected being the people least well-positioned to protest their treatment.

A laundry list of examples is available, but for the sake of this discussion, let’s be brief.6 Google’s AdSense has been shown, as recently as 2013, to serve racially discriminatory personalized ads when the names searched tended to be associated with specific racial groups.7 Facebook’s now infamous “emotional contagion” experiment, in which users were shown more “positive” or more “negative” posts to see whether they would then write more negative or positive posts themselves, has been the subject of intense media scrutiny.8 And just this summer, an approach for statistically inferring semantic relationships between words was found to reflect pronounced gender bias—notably, in one instance, producing the following analogy: man is to computer programmer as woman is to homemaker.9 Machine learning systems are already being used to predict recidivism, even though critics have argued that they underestimate or overestimate the risk of recidivism for defendants on the basis of race.10 Similarly, such systems have been constructed to identify and classify refugees as terrorists using a hodgepodge of data collected from many different sources.11
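The analogy in the word-embedding example arises from simple vector arithmetic over learned word vectors. A minimal sketch, assuming the gensim library and its downloadable word2vec-google-news-300 vectors (embeddings of the kind examined in the study cited in note 9), might look like the following; the exact completions returned will depend on the vectors used.

```python
# A minimal sketch of the analogy arithmetic described above, assuming the
# gensim library and its downloadable "word2vec-google-news-300" vectors.
import gensim.downloader as api

# Load pretrained word embeddings (a large download on first use).
vectors = api.load("word2vec-google-news-300")

# Solve "man is to computer_programmer as woman is to ?" by vector
# arithmetic: computer_programmer - man + woman, then nearest neighbors.
results = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=5,
)
for word, similarity in results:
    print(f"{word}\t{similarity:.3f}")
```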

The various errors that a particular machine learning system produces depend on the statistical model used, the learning algorithm by which the system updates prior estimates, and the specific data used to “train” (that is, constrain) the model. In practice, however, some of the most egregious machine errors tend to reflect poor choices of training data, data that is itself the product of various forms of historical systemic bias, often curated for polyvalent purposes and aggregated under radically different assumptions.
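A minimal, hypothetical sketch of that last point: if historical outcomes encoded a group-based disadvantage, a model trained to reproduce those outcomes will reproduce the disadvantage, however simple the statistical model and learning algorithm. The data, feature names, and bias rule below are invented for illustration, not drawn from any real records.

```python
# A hypothetical sketch of how biased training data constrains a model,
# using synthetic data and scikit-learn's logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One legitimate feature and one protected attribute (group 0 or 1).
skill = rng.normal(size=n)
group = rng.integers(0, 2, size=n)

# Historical labels: outcomes depended partly on group membership,
# encoding past systemic bias into the "ground truth".
labels = (skill + 0.8 * (group == 0) + rng.normal(scale=0.5, size=n)) > 0.5

# Training on those labels reproduces the bias, even with a simple model.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, labels)

# Identical skill, different group: the predicted probabilities diverge.
same_skill = np.array([[0.0, 0], [0.0, 1]])
print(model.predict_proba(same_skill)[:, 1])
```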

Long Histories of Datasets

What we need is a better appreciation for the deeply contingent and mutually...
