  • It's All in the Business Model: The Internet's Economic Logic and the Instigation of Disinformation, Hate, and Discrimination
  • Dipayan Ghosh

Internet platforms, specifically social media platforms, have been at the epicenter of a number of public harms in recent years. The onset of the foreign disinformation problem, the spread of hateful conduct online, the terrorist recruitment problem, the growth of algorithmic discrimination: these and other harmful impacts have been facilitated by the internet industry. Policymakers have, in response, worked with the corporate sector to create novel content-focused interventions, including election "war rooms," more human content moderators, and better artificial intelligence to detect offending content. However, much of this focus may be ill-conceived; while these negative impacts are troubling, the policy response targets the harms generated by an economic logic internal to the internet industry without addressing the nature of the industry itself. The purpose of this paper is to examine the business model underlying consumer-facing social media platforms and argue that their economic logic is connected to the myriad public harms we have observed.

The Thesis of an Underlying Economic Logic

Social media has in recent years become a substantial new vector for the spread of hateful and violent conduct, including content that has implicated national security concerns in the United States and unsettled peaceful circumstances in locales throughout the world. Many have contended that companies like Facebook, YouTube, and Twitter should take down content more effectively by developing artificial intelligence systems that, with the assistance of human content moderators, more efficiently identify and remove offending content such as terrorist recruitment and incitement to violence.1 Government and political bodies including the United States Congress have, in response, initiated numerous inquiries into the steps internet and technology firms should take to counteract the spread of these harmful effects over their digital communications platforms. These steps include innovations such as Facebook's Oversight Board2 and penalties for certain propagators of content such as hate speech on YouTube and Twitter.3

These initial steps are, however, superficial; they miss a critical linkage between the economic logic underlying such firms and their platforms and the negative externalities (unintended impacts that diminish social welfare, facilitated or generated by internet platforms as a byproduct of their regular commerce) that we have witnessed emerge over them. The spread of digital disinformation over platforms including Facebook, Twitter, and Google can be seen as a negative externality generated by the business model implicit in these three firms and others across the sector. As such, addressing the negative externality by attempting to contain its overt impact (through the use of artificial intelligence, human content moderators, or advisory panels, for instance) cannot ultimately be fruitful; we must reform the object that generated these externalities in the first place. I will contend in this paper that rather than address surface-level implications, such as the spread of hate speech and disinformation, policymakers must address the overreaches of a business model that has stepped on the public interest itself: a business model premised on the unchecked collection of personal data to compose behavioral profiles and on highly sophisticated artificial intelligence that curates social feeds and targets ads.

The Economic Logic Underlying the Digital Platform Monopolies

We start from a well-known conclusion and work backward. In recent years, we have witnessed a slate of harmful effects across the media ecosystem that have variously damaged the full functioning of American democracy. These effects include the Russian disinformation operation, in which agents of the Kremlin, including the St. Petersburg-based Internet Research Agency, worked to inject conspiracy theories and political falsehoods into local political discourse during the 2016 American presidential election season;4 the spread of hate speech in the United States, where various personalities have, for instance, suggested that the United States should be a country for people of only one ethnic background;5 and the perpetration of unfair discriminatory decisions in which certain protected classes have been denied economic opportunities, among many other discriminatory effects across the user populations of leading internet platforms.6 These incidents have emerged almost exclusively from social media platforms. As such...
