  • On the Ethics of Algorithmic Intelligence
  • Roberto Simanowski
    Translated by Michel Brodmer and Jefferson Chase

"perhaps the new intelligence will worship its creators, perhaps it will keep us as pets, or perhaps it will erase us. We simply don't know. What is certain is that if my arguments are correct we will lose control over evolution" (Gumbrecht 2018, 227). Those are the words of 21-yearold Sam Ginn, a student of computer science and comparative literature at Stanford University, who has set himself the task of inventing an artificial consciousness that understands its own existence in the world. His rather pessimistic assertion about what might result if he succeeds is contained in Weltgeist im Silicon Valley, a book by Hans Ulrich Gumbrecht, a professor of comparative literature at Stanford.

The Weltgeist, or world-spirit, that Georg Wilhelm Friedrich Hegel saw embodied in 1806 by Napoleon riding through the university town of Jena, after defeating German troops, lives on 200 years later in California. The geopolitical transition from Central Europe to the American West Coast is simultaneously a change from the political to the scientific. The future no longer rests with politics or even the philosophy of Plato's Republic. It rests with science, as in Francis Bacon's Nova Atlantis, or more precisely, with computer science. We no longer have people like Napoleon revolutionizing their times with military campaigns or a Code civil. Instead it's people like Sundar Pichai and Mark Zuckerberg who, every day, with every new piece of data, increasingly determine the future we're rushing toward.

Politicians are completely satisfied with this transition of power. If in times of post-political consensus all that matters is the administration of the established order, what they expect from progress is above all more effective forms of such administration. It is thus hardly surprising that those in power embrace the digital revolution, or at least refrain from expressing any doubts or passing any laws when business demands that everything be bet "on a single card, the digital one"—as the president of Bitkom, Germany's Association for IT, Telecommunications, and New Media, said in the summer of 2017. The business-friendly Free Democratic Party (FDP) even campaigned in the 2017 German election with the slogan "Digital first, doubts second" (Digital first, Bedenken second). This sort of optimism vis-à-vis the future has complete faith in science. As Theresa May told the World Economic Forum in Davos in 2018: "Imagine a world in which self-driving cars radically reduce the number of deaths on our roads. Imagine a world where remote monitoring and inspection of critical infrastructure make dangerous jobs safer. Imagine a world where we can predict and prevent the spread of diseases around the globe" (May 2018). True believers justify their confidence with numbers. May said: "We have seen a new AI start-up created in the UK every week for the last three years. And we are investing in the skills these start-ups need, spending £45 million to support additional PhDs in AI and related disciplines." This is completely in keeping with the myopia of the post-political dogma that the future will be determined not by societal discussions but by quantifiable investments and innovations.

But to what extent does the future truly reside in the hands of the science and start-ups May praises so fulsomely? Do scientists and entrepreneurs consciously decide, after possibly discussing the consequences, which inventions they will produce and support? Do they have the power to tell society how scientific achievements like splitting the atom or decoding DNA will be used? Do they abide by social imperatives not to engage in research whose consequences cannot be anticipated or controlled? Or would they seek out a society with other moral standards and legal regulations—perhaps on an artificial extraterritorial island financed by venture capitalist Peter Thiel's Seasteading Institute? And what should we think of the fact that in the case of artificial intelligence, the threat of unforeseen consequences is no longer restricted to irresponsible usage of the invention, but includes the prospect that the invention might keep its inventors as pets instead of...
