
  • Racist in the Machine: The Disturbing Implications of Algorithmic Bias
  • Megan Garcia

[Image credit: NYUHUHUU]

Tay’s first words in March 2016 were “hellooooooo world!!!” (the “o” in “world” was a planet earth emoji for added whimsy). It was a friendly start for the Twitter bot designed by Microsoft to engage with people aged 18 to 24. But, in a mere 12 hours, Tay went from upbeat conversationalist to foul-mouthed, racist Holocaust denier who said feminists “should all die and burn in hell” and that the actor “ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism.”

This is not what Microsoft had in mind. Tay’s descent into bigotry wasn’t pre-programmed, but, given the unpredictability of algorithms when confronted with real people, it was hardly surprising. Miguel Paz, distinguished lecturer specializing in data journalism and multimedia storytelling at the CUNY Graduate School of Journalism, wrote in an email that Tay revealed the problem of “testing AI in an isolated controlled environment or network for research purposes, versus that AI sent out of the lab to face a real and highly complex and diverse network of people who may have other views and interests.”

Tay, which Microsoft hastily shut down after a scant 24 hours, was programmed to learn from the behaviors of other Twitter users, and in that regard, Tay was a success. The bot’s embrace of humanity’s worst attributes is an example of algorithmic bias—when seemingly innocuous programming takes on the prejudices either of its creators or the data it is fed. In the case of Microsoft’s social media experiment, no one was hurt, but the side effects of unintentionally discriminatory algorithms can be dramatic and harmful.
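The mechanism is easy to see in miniature. The toy bot below (a hypothetical sketch in Python, not Microsoft’s actual architecture) “learns” by replying with whatever words its users feed it most often — it has no values of its own, so its output is only as good as its inputs:

```python
from collections import Counter

class EchoBot:
    """Toy bot that 'learns' by echoing the words it has seen most often."""

    def __init__(self):
        self.seen = Counter()

    def learn(self, message: str) -> None:
        # Training is nothing more than counting what users say.
        self.seen.update(message.lower().split())

    def reply(self, n: int = 3) -> str:
        # The reply simply mirrors the training data, good or bad.
        return " ".join(word for word, _ in self.seen.most_common(n))

bot = EchoBot()
for msg in ["you are great", "great ideas are great"]:
    bot.learn(msg)
print(bot.reply())  # prints "great are you"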

Companies and government institutions that use data need to pay attention to the unconscious and institutional biases that seep into their results. It doesn’t take active prejudice to produce skewed results in web searches, data-driven home loan decisions, or photo-recognition software. It just takes distorted data that no one notices and corrects for. Thus, as we begin to create artificial intelligence, we risk inserting racism and other prejudices into the code that will make decisions for years to come. As Laura Weidman Powers, founder of Code2040, which brings more African Americans and Latinos into tech, told me, “We are running the risk of seeding self-teaching AI with the discriminatory undertones of our society in ways that will be hard to rein in because of the often self-reinforcing nature of machine learning.”

Algorithmic bias isn’t new. In the 1970s and 1980s, St. George’s Hospital Medical School in the United Kingdom used a computer program to do initial screening of applicants. The program, which mimicked the choices admission staff had made in the past, denied interviews to as many as 60 applicants because they were women or had non-European-sounding names. The code wasn’t the work of some nefarious programmer; instead, the bias was already embedded in the admissions process. The computer program exacerbated the problem and gave it a sheen of objectivity. The U.K.’s Commission for Racial Equality found St. George’s Medical School guilty of practicing racial and sexual discrimination in its admissions process in 1988.
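How a program can absorb bias it was never explicitly given can be sketched in a few lines. The records and the screening rule below are hypothetical illustrations, not the St. George’s system: a rule “learned” by copying past decisions inherits whatever pattern those decisions contained, even though the code never mentions ethnicity at all.

```python
# Hypothetical illustration: a screening rule learned by copying past decisions.
# Invented records in which staff never interviewed applicants whose names
# were flagged (name_flag=1), regardless of score.
history = [
    {"name_flag": 0, "score": 70, "interviewed": True},
    {"name_flag": 0, "score": 55, "interviewed": True},
    {"name_flag": 1, "score": 72, "interviewed": False},
    {"name_flag": 1, "score": 68, "interviewed": False},
]

def learn_threshold(records, flag):
    """Mimic the past: per group, the lowest score ever granted an interview."""
    scores = [r["score"] for r in records
              if r["name_flag"] == flag and r["interviewed"]]
    # If no one in a group was ever interviewed, the learned bar is infinite.
    return min(scores) if scores else float("inf")

print(learn_threshold(history, 0))  # prints 55: a modest bar for this group
print(learn_threshold(history, 1))  # prints inf: no score is ever enough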

That was several lifetimes ago in the information age, but naiveté about the harms of discriminatory algorithms is even more dangerous now. An algorithm is a set of instructions for getting your computer from Problem A to Solution B, and algorithms are fundamental to nearly everything we do with technology. They tell your computer how to compress files, how to encrypt data, how to select a person to tag in a photograph, or what Siri says when you ask her a question. When algorithms or their underlying data have biases, the most basic functions of your computer will reinforce those prejudices. The results can range from such inconsequential mistakes as seeing the wrong weather in an app to the serious error of identifying African Americans as more likely to commit a crime.
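For a concrete, entirely neutral instance of “Problem A to Solution B,” here is a minimal compression routine of the kind the paragraph alludes to — run-length encoding, which collapses repeated characters into a character-and-count pair:

```python
def run_length_encode(text: str) -> str:
    """A tiny compression algorithm: collapse runs of repeated characters."""
    if not text:
        return ""
    out, run_char, run_len = [], text[0], 1
    for ch in text[1:]:
        if ch == run_char:
            run_len += 1
        else:
            out.append(f"{run_char}{run_len}")
            run_char, run_len = ch, 1
    out.append(f"{run_char}{run_len}")  # flush the final run
    return "".join(out)

print(run_length_encode("aaabbc"))  # prints "a3b2c1"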

Computer-generated bias is almost everywhere we look. In 2015, researchers at Carnegie...


Additional Information

ISSN: 1936-0924
Print ISSN: 0740-2775
Pages: pp. 111-117
Launched on MUSE: 2017-01-07
