
Safety First: Entering the Age of Artificial Intelligence
Sam Winter-Levy and Jacob Trefethen

In September 1933, the British Association for the Advancement of Science met in London to discuss the prospects for atomic energy. After two weeks of debate, the assembly closed with a speech by the Nobel Prize-winning scientist Ernest Rutherford, who categorically dismissed the prospect of harnessing atomic energy. “Anyone who expects a source of power from the transformation of these atoms,” he declared, “is talking moonshine.”


[Image: United States Department of Energy]


The next day, a young Hungarian scientist named Leo Szilard, an unemployed Jewish refugee, read a summary of Rutherford’s speech in The Times of London over his morning coffee. Later that day, waiting for a traffic light to change, Szilard had a realization: “It … suddenly occurred to me that if we could find an element which is split by neutrons and which would emit two neutrons when it absorbs one neutron, such an element … could sustain a nuclear chain reaction [and] could liberate energy on an industrial scale.” Within a year, Szilard had patented the neutron-induced nuclear chain reaction. Twelve years later, a B-29 bomber named Enola Gay would drop the first atomic bomb on Hiroshima.

The science fiction writer Arthur C. Clarke once said, “When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong.” Technological breakthroughs, before and since the development of the atomic bomb in the 1940s, have confirmed the truth of this dictum. So when Stuart Russell, professor of computer science at the University of California, Berkeley, and co-author of the standard textbook on artificial intelligence (AI), warns that an AI system vastly more powerful than the human intellect is possible, and that such a breakthrough could bring with it grave risks, he should be taken seriously.

BREAKTHROUGHS

In the past few years, innovations in AI have come thick and fast. In 2011, IBM’s supercomputer Watson won the quiz show Jeopardy!, defeating the former champions Ken Jennings and Brad Rutter. In 2013, the software company Vicarious announced that its software could defeat CAPTCHA tests, which are designed to distinguish humans from computers. Self-driving cars have appeared on the roads of Silicon Valley. Most dramatically, in October 2015, an AI system developed by researchers at Google DeepMind, the world’s leading AI company, defeated the European champion at Go, a game that had thwarted computers for decades. All the while, companies such as Google, IBM, and Facebook have been investing hundreds of millions of dollars in their machine learning divisions.

Breakthroughs in AI may bring unprecedented opportunities for people and businesses around the world. Yet such advances will also come with risks, which so far have been largely ignored. Those who have engaged with them have focused overwhelmingly on AI’s implications for economic inequality, neglecting a more fundamental concern: the challenge of retaining control over systems that may one day be vastly more intelligent than humans.

Current technology is nowhere near having capabilities of this sort, and some believe it never will. Luminaries from across the technological and scientific world, however, have voiced concerns about the possible downsides of rushing toward AI. Tesla and SpaceX CEO Elon Musk has referred to building AI as “summoning the demon.” Bill Gates, Steve Wozniak, Peter Thiel, Stephen Hawking, and Frank Wilczek have all warned about the potential dangers. Among AI researchers, Russell has been the most vocal, but he is not alone: Numerous experts, including the computer scientists Steve Omohundro, Murray Shanahan, and David McAllester, have warned that AI could pose an existential risk. In 2008, a survey of experts at a conference on global catastrophic risks at Oxford University ranked superintelligent AI as the greatest existential threat to the human race, above nuclear war, engineered pandemics, and climate change, with a 5 percent chance of causing human extinction by 2100.

These experts are not worried by the prospect of world takeover by a malicious computer that deliberately seeks to harm humans...
