Johns Hopkins University Press

As perhaps the most consequential technology of our time, Generative Foundation Models (GFMs) present unprecedented challenges for democratic institutions. By allowing deception and de-contextualized information sharing at a previously unimaginable scale and pace, GFMs could undermine the foundations of democracy. At the same time, the investment scale required to develop the models and the race dynamics around that development threaten to enable concentrations of democratically unaccountable power (both public and private). This essay examines the twin threats of collapse and singularity occasioned by the rise of GFMs.

In March 2023, OpenAI released GPT-4, one of the most sophisticated computer programs for producing persuasive facsimiles of natural expression (including language, images, and even video). Such programs are known as "generative foundation models" (GFMs). GPT-4 may or may not constitute a breakthrough toward "artificial general intelligence," but two things are clear: GFMs will become increasingly capable of enabling large-scale deception and surveillance. And they can facilitate novel forms of interaction and co-creation—new ways for people to interact with one another, organizations, and states; new methods of coordination, planning, and decisionmaking for groups, associations, firms, and governments; new forms of knowledge, communication, and cultural content. They will become increasingly important for many social, communication, and political functions, and evolve into fundamental social infrastructure.

As many technical and policy leaders have highlighted, possible negative applications could introduce two threats to "plural societies"—that is, free and democratic societies operating under conditions of social diversity. Democratic systems work on core assumptions, including that the state can distinguish citizens from noncitizens, and that citizens can form coherent views based on a "marketplace of ideas." The first threat is that GFM-driven deception may cause these assumptions, and the institutions built on them, to collapse. In democracies, power and ultimate accountability for the fundamental infrastructure of "the state"—including systems of identification, authentication, defense, and basic physical infrastructure—should be in the hands of the people. With fundamental building blocks of that control collapsing, the result would be chaos.

The second threat is that GFM development may instead concentrate power in the singular hands of financial interests, technical experts, authoritarian regimes, or even an internal, self-replicating machine logic divorced from human oversight. As we have argued elsewhere, a paradigm of technology development organized around the concept of "singularity"—a concept of a singular human intelligence that could be matched and even outmatched by artificial intelligence—inevitably leads to "singularity" understood in the second sense of concentrated economic and political power.1

The twin threats of "collapse" and "singularity" cause respectively "chaos" and "concentrated power." They are contemporary counterparts to age-old challenges from rapid technological change—the need to navigate a "narrow corridor" between anarchy and authoritarianism, and to secure the conditions in which free, pluralistic societies can thrive.2 The simplest ways to avoid collapse (a process problem), chaos (a social-state problem), and anarchy (a political-form problem) might at first seem to be either to concentrate authority over new technologies or to slow development. But the latter would also concentrate authority, albeit by surrendering technical advantage to authoritarian regimes. Meanwhile, the simplest antidote to the problem of singularity and concentrated power would be strategies such as allowing free and rapid deployment of open-source models (trained partly by harnessing access to the most cutting-edge, resource-intensive models), but these strategies would accelerate the causes of collapse and chaos. Plural societies are particularly at risk because they need to avoid collapse and singularity, chaos and concentrated power, anarchy and authoritarianism, simultaneously. Authoritarian societies need only avoid chaos. They are threatened by the destabilization of identification and surveillance, but lose nothing of their basic character if they respond by concentrating power.

To navigate the straits between collapse and singularity, anarchy and authoritarianism, we need a different paradigm for technology development called "plurality." This paradigm is grounded in three premises about, respectively, cognition, society, and technology. Those premises are that: 1) both human and machine cognition are plural; they come in multiple forms that cannot be reduced to singularity; the most beneficial activation of intelligence fully activates the multiplicity of forms of intelligence; 2) societies thrive best when they can capture the benefits of pluralism to strengthen a society epistemically, culturally, and economically;3 and therefore 3) "plural technologies" developed via an orientation toward the multiplicity of intelligence and with the goal of bringing out the benefits from human pluralism offer the best prospect for fostering human flourishing.4 Before we turn to explaining how a plurality paradigm for technology development can help us steer through the twin dangers of chaos and singularity, we need to characterize the dangers.

The Hazard of Chaos from Collapse

For a vote to be free and fair, a census must count the population and a registration system must ensure that all (and only) eligible votes are counted. Citizens should understand political options, be able to form reasonably informed and relevant opinions, and organize and express themselves accordingly. Collective action also entails being able to exchange ideas with like-minded compatriots, with minimal external surveillance. Freedoms of speech and association and the right to vote are fundamental features of democracy.

These rights do not on their own achieve these objectives. If, hypothetically, an incessant, deafening noise drowned out all voices, no amount of free speech would enable a "marketplace of ideas."5 Similarly, if it becomes possible to create infinite replicas of human beings and their records, undetectably and for a fee, how can elections be secure? Democracy depends not just on information, communication, and rights protecting their use, but on technologies that operationalize these rights. Such technologies are constitutive elements of democracy's foundation; if they are eroded or destroyed, democracy will collapse, yielding chaos.

Innovation in technologies therefore has the potential to upend basic democratic functions. Among the diverse risks to democracy, the most serious can be understood in terms of "contextual confidence," a risk framework proposed by Shrey Jain, Zoë Hitzig, and Pamela Mishkin.6 They argue that in a plural society, the problems of authentication, mis- and disinformation, and privacy are different violations of what Helen Nissenbaum calls "contextual integrity," that is, the appropriate sharing of information in its intended context of interpretation.7 The ability of GFMs to recombine and re-present information quickly and convincingly enables the sharing of information outside its intended context at new scales. This has significant downstream consequences.

First, consider the problem of authentication. As a system providing equal rights and entitlements to all citizens, democracy depends on authentication of unique citizenship. In our modern contexts, if citizen-authentication systems break down, democracy becomes unsustainable.8 Authentication is typically based on a combination of three factors: things you are (biometrics), things you know (secrets), and things you have (contents and keys). Biometrics, if measured in person, and non-duplicable physical keys are unlikely to be significantly affected by the advancement of GFMs. But any form of verification based on content that can be reproduced or transmitted electronically (such as driver's licenses, currency, or passports) is likely to degrade significantly.

Highly persuasive live replicas of appearance, voice, and even style are becoming less expensive. They will empower a variety of increasingly effective scams over time. The threat to authentication may lead us to become more reliant on secrets for proving our identity. But those secrets will need to be far more tightly guarded than today. GFMs are steadily getting better at inferring secrets based on public information. Most of us would expect that if, say, Sherlock Holmes were trying to guess our elementary school, he would have little trouble. Once Holmesian sleuths are available at little cost to a range of adversaries, we may have to redefine what we consider secret.

Second, consider privacy. Precisely because GFMs will make it harder to keep secrets, they threaten privacy. The cognitive limitations of those who want to invade privacy are largely what protects it today. A brilliant body-language expert or top intelligence analyst will be able to learn most things we think of as private. But such skills have always been rare, and thus expensive and inaccessible. GFMs, however, could make the discovery of secrets as inexpensive and easy as cars made long-distance travel.

Such threats to authentication and privacy are the most attention-grabbing examples of how GFMs will make it harder to evaluate what is authentic. Yet the collapse of context may be most insidious in social settings. Democratic governance has always rested on the interwoven fabric of individuals and the civic associations they are members of. For such groups to coordinate effectively to gain redress of grievances, they must establish shared views, goals, and plans, and they must protect these from external surveillance. The need to protect the privacy of dissident groups, in addition to individual citizens, led the Biden administration to prioritize "privacy-enhancing technologies" in its "Technology for Democracy" strategy.9 GFMs can undermine this protection by reducing the ratio of signal to noise in communication and by facilitating privacy violations, making groups vulnerable to state surveillance or adversaries.

Finally, beyond the challenges to authentication and privacy (both personal and social) lies an even more pernicious hazard. Democracies depend on citizens having at least partially shared horizons of understanding—of facts, values, and analytical frameworks—to allow for common action. Yet GFMs can undermine the establishment of shared context. In Neal Stephenson's 1995 novel, The Diamond Age, lower classes receive personalized digital content that entertains them but makes common action impossible, while the ruling elite receive identical printed newspapers to ensure they can coordinate social control.10 Similarly, the increasing personalization of content enabled by GFMs will fragment the information landscape, allowing individuals to live in their own information universes, rendering elusive the shared understanding needed for stable democratic practice. Inauthentic content, rampant distrust, and a breakdown of shared understanding could do just as much as censorship to prevent the circulation of ideas critical to democracy.

The Harm of Authoritarianism from Singularity

For more than two decades, computer scientist and futurist Ray Kurzweil has been predicting a "singularity"—where technological progress accelerates so dramatically that human life as we know it is irreversibly and uncontrollably left behind.11 The metaphor has shaped conversation about artificial intelligence, conceptualized as a singular intelligence somehow aggregating, and then transcending, human intelligence. The term derives from physics, where a singularity is a region of space-time in which the known laws break down. Many also take the idea of "singularity" as a claim about the concentration of social power in the hands of a few who are committed to a singular vision. After all, little meaningfully differentiates a future dominated by a small and homogeneous elite from one controlled by self-propagating machines.

These two outcomes are indeed different flavors of the same singularity, where a single, coherent will comes to organize and dominate global social life—whether that be the will of a technical or engineering caste, a corporate oligarchy, an authoritarian nation-state, or some self-aware machine. Any of these futures would be antithetical to pluralism and self-government. Yet several features of such a future seem to be emerging.

The development of today's leading systems, such as GPT-4, depends on extreme concentration of power. OpenAI, GPT-4's developer, has only a few hundred employees; its core development team is probably a fraction of that. Yet the company has received tens of billions of dollars from Microsoft and other investors; just one training run of GPT-4 is reported to have cost at least US$100 million. The Manhattan Project had (in present dollars) a budget similar to OpenAI's, but roughly a thousand times as many total employees and at least a hundred times as many core scientific and engineering staff—despite being a state secret.

The developers of the Manhattan Project had relatively limited power and discretion. OpenAI's mission is far broader: "to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity."12 The Manhattan Project's sole and time-limited goal was developing nuclear weapons, after which it was disbanded; OpenAI is developing technology with anticipated deployment in most economic and social sectors, from entertainment to logistics, and is actively planning for a future where its income exceeds that of most nations.13

Moreover, design decisions can shape system behavior. The developers of the three leading GFMs (OpenAI's GPT, Hugging Face's Bloom, and Anthropic's Claude) have, for example, taken vastly different approaches to ensuring that the models behave as they should ("alignment") and to complying with recently proposed EU legislation.14 But in the end, a tiny and homogeneous group of narrowly trained technicians—predominantly young, white, secular, technically trained, male, and mostly American—are making some of the most consequential decisions in the world today. The formal scope of their authority is more limited than that of some heads of state but far less mediated; even the most powerful authoritarian leaders must manage layers of bureaucracy.

This concentration of power is not specific to the "commanding heights" of the emerging GFM-based economy but is instead a generalized feature of our technologically driven economy. In most developed countries, labor's share of national income has fallen by roughly a tenth since a 1970s peak, shifting economic power from dispersed earners of labor income to concentrated holders of capital. These changes are commonly attributed to the proliferation of technologies that automate rather than complement labor and a rise in market power.15 Technologies aimed at "outperform[ing] humans at most economically valuable work" seem almost perfectly designed to concentrate power with a technical elite and reduce labor's share of national income.16 While opinions vary about the danger of such concentrated technical and economic power, supporters of democracy should fear the "choke points" that the current AI-development structure offers as opportunities for authoritarian control. The rapid scaling of GFMs as consumer products presents an unparalleled opportunity for controlling information.

Mobilizing Technology for Democracy

How can we defend against collapse and chaos on the one hand, or singularity and concentrated power on the other? A range of strategies can mitigate these risks. They broadly fall into three categories: retreat, containment, and mobilization.

Retreat strategies for avoiding collapse involve reducing the role of digital interactions and relying more heavily on in-person interactions to establish truth. Containment strategies focus on detecting, labeling, and limiting GFM behavior (typically by watermarking GFM-generated content); these require fairly universal enforcement to be effective against malicious actors. Mobilization strategies involve harnessing GFMs and advanced cryptography (digital encoding and decoding) to enrich the foundations of democratic institutions and make them more robust in the face of GFMs.17 Yet retreat from digital life leaves the space open to development by authoritarian regimes, and containment, which is to say, primarily defensive and regulatory approaches, tends to further concentrate development in a small group of democratically unaccountable companies.

So how can we defend against the threats from singularity, concentrated power, and encroaching authoritarianism? The simplest containment strategy to address the challenges of singularity focuses on facilitating competition—through regulations, incentives, and legal rules that foster development of open-source models to loosen the grip of concentrated actors. Yet this strategy leaves little water between Scylla and Charybdis, either undermining the scale needed for developers in democratic countries to compete with the models being developed by authoritarian regimes and inadvertently reinforcing concentration of power, or accelerating collapse by making unregulated models more widely available, or both.

Neither retreat nor containment will serve to protect plural societies. Rather than watching our societies shatter on the rocks of either collapse or singularity, or veer dangerously back and forth between the two, we propose to navigate the narrow passage through mobilizing policy and technology development toward the goal of plurality.

Democratic systems have long struggled to handle fast-moving and large-scale phenomena, from global wars to pandemics. They have often addressed them by either temporarily and extraordinarily concentrating authority or by muddling through. Only in very limited cases have they harnessed the unique genius of democratic collective intelligence to meet such moments. The current technological era has combined massive scale and speed with no clear end date; this makes extraordinary measures in response inappropriate for democratic societies. OpenAI's CEO, Sam Altman, tried and failed to raise public support for OpenAI, before seeking funding from private sources. Government entanglement with this work was understandably hard to motivate in a democratic context, given the characteristics of the technology. Yet only mobilization strategies that harness GFMs to build new forms of democratic engagement and control seem to offer the power and steering capacity needed for navigation.

Reminding us that hard tradeoffs in the name of progress are nothing new, John Dewey wrote in The Public and Its Problems (1927) that

"industry and inventions in technology" can bring about changes, including new publics or associational groupings, that are extrinsic to political forms which, once established, persist of their own momentum. The new public which is generated remains long inchoate, unorganized, because it cannot use inherited political agencies. . . . The public which generated political forms is passing away, but the power and lust of possession [End Page 153] remains in the hands of the officers and agencies which the dying public instituted. This is why the change of the form of states is so often effected only by revolution.18

Ultimately, complacency and inertia most threaten democracies in times of great technological change. Recognizing the impacts of new technologies, we need to steer toward new institutional forms harnessing GFMs and other advanced technologies in defense of plural societies.

Distinct mobilization strategies are available for each threat. The strategy to prevent collapse harnesses GFMs and advanced cryptography to enrich the foundations of democratic institutions (specifically, authentication, privacy, and common knowledge). The strategy to combat singularity uses those same tools to build new forms of democratic engagement and control that can match the scale and pace of GFMs, thereby allowing democratic governance to directly shape and contribute to these developments.19

Why do we believe that GFMs can support mobilization against both collapse and singularity? GFMs can lift some of the deepest and oldest barriers to meaningful democratic participation among large masses of people spread across vast areas. In principle, GFMs could empower democratic pluralism of a richness and scale previously unimaginable. J.C.R. Licklider, a founder of what would eventually become the internet, cautioned almost half a century ago that design and investment would determine whether the latest digital revolution narrowed or broadened the corridor for pluralism.20 How can a plurality strategy do this?

Every nation has symbols, treasured shared memories that represent the familiar and respected national character. Increasingly the word "democracy" is used in many Western countries in a similar vein—not as an ideal, aspiration, or dream, but as a cherished existing possession to be defended. Yet if democracy is to endure and meet the challenges of technology, we must remember that existing democratic institutions represent difficult pragmatic compromises on democratic aspirations.

Today's democracies derive from eighteenth-century commitments to limit power and ground legitimacy in the expressed consent of the governed. Achieving that through voting required organization and deliberation at the level of town, borough, and county. People who knew each other debated a direction for their society. They also made side deals about jobs and roles and built networks out of relationships and favors that extended a horizon of common knowledge. Those deliberations and side deals turned into votes. Those votes, and the networks and relationships they came from, supported an experience of "being all in it together."

The eighteenth-century expectation was that a stable social structure required a combination of mediated communication and networks of people bound to each other through commitments of reciprocal benefit. Even in ancient Athens, where as many as six thousand people might gather to deliberate, this structured view of stability pertained. There was little space for deliberation in the formal public assembly. There, speeches were given and votes were taken. But outside the assembly, in those networks of reciprocity, debate and deliberation flowed freely. These formal and informal spaces together gave citizens experiences of both collective control and interpersonal connection. Despite expansion of the franchise, few people today have access to genuinely deliberative experiences.

The institutional form of modern constitutional democracies results from the evident impossibility of directly talking, making sense of things, and trading favors with one another in nations of millions spread across hundreds or thousands of miles. The industrial era saw an explosion in transportation and production, but much slower advances in information and communication technologies, leading to systems that economized on information and communication while aiming to allow large-scale economic interactions. This created immense social complexity and degraded popular control and connection. Because GFMs could, in principle, now permit a scaling up of deliberative talking, shared sense-making, and favor-trading, they could be mobilized to stabilize legitimation processes for plural society. They could enable a reinvention of representation, embedding deliberative and direct-democracy elements to yield more dynamic, adaptive forms of representation than presently exist. Pulling off a renovation of representation requires focusing on three things: identity, media, and voice.

Identity and Credentials

As we have said, identity and credentials are perhaps the most foundational information components of a democratic society. The equal protection of citizens is impossible without them. Providing a range of identity markers and credentials, from birth certificates to educational achievement, is among the central functions of most states. This poses a dilemma for democracies, which promise citizens autonomy from state surveillance—a tension highlighted by twentieth-century atrocities facilitated by national registries. Many democratic polities have thus resisted establishing national-identification systems, insisting instead on service-specific identity systems that minimize information retention.

Yet the consequences of such well-intentioned approaches have been quite different from the intentions themselves. The U.S. Social Security number (SSN) is perhaps the most dramatic example. Intended as a compromise with Republican Party opposition to establishing a national identity number and theoretically restricted for use by the Social Security Administration alone, the SSN has become broadly used as the default for personal identification in American life, despite a range of weaknesses.21 It is grounded in only the thinnest validation (requiring little more than a birth certificate), and because it is used so frequently, it is almost impossible to maintain as a secret.

The SSN system was never designed to resist attack and, unsurprisingly, is threatened by GFMs. The ease of tracing informational breadcrumbs across contexts means that everyone's Social Security number will soon be available, at very low cost, using GFMs. Governments in many democratic countries are facing critical challenges arising from their reliance on simplistic and outmoded identity and classification systems. The legitimacy of election administration (and thus elections themselves), for example, is notably eroding as the challenges of authenticating citizen voters grow more severe.

How might GFMs and other advanced technologies improve such systems and make them more robust? One need only look at technology firms or government security agencies to see the possibilities. Such entities rely very little on traditional "legal" identities to authenticate access or target services. Instead, they harness a diverse range of signals, such as social relationships, economic transactions, and location data. They do so with sophisticated technology or at great analytical expense and, most important, with much less formal respect for citizen privacy than would typically be expected of democratic states.

Yet these are precisely the types of challenges that GFMs and modern cryptography could help to overcome. Today, it might take a genius intelligence analyst to parse a clear identity signal from an array of social and financial data, but in time such genius will be at the fingertips of every government bureaucrat. So what now seems an unacceptable infringement on citizen privacy—analyzing personal data to identify someone or to assess professional qualifications, for example—will soon be achievable without any such breach. Modern cryptographic techniques, such as zero-knowledge proofs (which allow specific claims to be verified without revealing any of the data on which the verification rests) and secure multiparty computation (which allows the creation of shared intelligence without sharing the underlying data), can glean rich but highly targeted information while keeping all other data private. Such techniques could be used, for instance, to verify someone's age without revealing other personal information. They could therefore be harnessed to improve our voter and social-service identification systems, strained as they are by the mismatch of old technologies and current realities.
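The intuition behind such credentials—revealing one fact while keeping everything else hidden—can be conveyed with a toy sketch. What follows is a simplified hash-commitment scheme for illustration only, not real zero-knowledge cryptography (a true zero-knowledge proof would reveal even less, such as "over 18" without the attribute value itself); all names and attributes here are invented.

```python
import hashlib
import os

def commit(value: str, salt: bytes) -> str:
    # Hash commitment: hides the value but binds the issuer to it.
    return hashlib.sha256(salt + value.encode()).hexdigest()

def issue_credential(attributes: dict) -> dict:
    # The issuer commits to each attribute separately, so the holder
    # can later open one attribute without exposing the others.
    salts = {k: os.urandom(16) for k in attributes}
    return {
        "attributes": attributes,
        "salts": salts,
        "commitments": {k: commit(v, salts[k]) for k, v in attributes.items()},
    }

def disclose(credential: dict, key: str):
    # The holder reveals exactly one attribute and its salt.
    return key, credential["attributes"][key], credential["salts"][key]

def verify(commitments: dict, key: str, value: str, salt: bytes) -> bool:
    # The verifier checks the opening against the published commitment;
    # every other attribute stays hidden behind its own commitment.
    return commit(value, salt) == commitments[key]

cred = issue_credential({"over_18": "true", "name": "Ada", "address": "10 Main St"})
key, value, salt = disclose(cred, "over_18")
print(verify(cred["commitments"], key, value, salt))  # prints True
```

The verifier learns only that the issuer vouched for the disclosed attribute; the name and address remain concealed behind their commitments, which is the property that real zero-knowledge systems generalize.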

Not only representation and voting can be improved; so can government delivery of services. Estonia's e-Estonia infrastructure has effectively made all public services available securely online and fully interoperable with private-sector services, in much the same way that one can "sign in with Google" across products. Service providers can harness government data with appropriate permissions to improve services to citizens. To make this work and protect privacy, Estonia relies on legal sanctions for information misuse rather than cryptography; doing so, the country achieves powerful security against focused adversaries (such as Russia) and efficient service provision simultaneously.

India's "India Stack" digital public-infrastructure system, increasingly internationalized as the Modular Open Source Identity Protocol (MOSIP), has followed a similar approach better adapted to the technological conditions of developing countries. Experiments with systems with this level of performance but more absolute privacy protections via cryptography are urgently needed if we are to maintain privacy and democratic functioning in the age of GFMs.

Creating Plural Media

Many of the "multimedia" and "interactive" formats of contemporary culture (from interactive public art to social media) arose from conscious efforts after World War II to imbue democratic ideals in the communication technologies then proliferating.22 But such aspirations can easily drift astray. The last decade has shown how, in the service of advertising profits, social media and internet culture more broadly have become engines of polarization, democratic decay, and authoritarian surveillance. GFMs have the potential to accelerate these trends dramatically.

Yet there is something important to be salvaged from midcentury aspirations, in the words of Licklider, to make "decisions about the development and exploitation of computer technology . . . not only 'in the public interest' but in the interest of giving the public itself the means to enter into the decision-making processes that will shape their future."23

How might GFMs be harnessed for a plural media environment? A crucial element of democratic media is recognizing and fostering a diversity of perspectives. This should be supplemented by fostering what is variously called "a connected society" or "cooperation across difference."24 For diversity to survive and thrive, it needs to be bridgeable both to avoid descending into entrenched conflict and to enable recombination to create new social groups.

There is concrete potential for computational tools to be helpful. Over the last decade, the Taiwanese government and civil society have been using a platform called vTaiwan to bridge differences of opinions—among citizens, lawmakers, business, and civil society—on public-policy issues, and to identify areas of consensus ready for social action. This system takes in short statements on an issue and allows other users to evaluate them. Its algorithms cluster the data to highlight different opinion groups as well as where usually diverging groups find surprising agreement.

Dozens of Western jurisdictions are now also using pol.is, a system similar to vTaiwan, and the Community Notes feature of the social-media platform X (formerly Twitter) is gathering comparable analytics to highlight contributor notes on posts that a diversity of typically conflicting users and experts found helpful. This process not only checks facts but also helps to surface shared understandings that can provide a basis for democratic common understanding.

Yet such systems are still in their infancy, both technically and socially. They harness only the most basic forms of machine learning, grouping together comments of those with similar voting patterns on other comments. But ongoing research is using GFMs to understand the content of comments and produce interactive syntheses of opinion clusters.25 The systems also currently assume that all participants act in good faith, which is unlikely to remain the case as such models disseminate. Improving security and incentives will be critical to allowing broader use. Generally, these systems are still an afterthought, supported by X to reduce public outcry or by the Taiwanese government because of the popularity of the country's digital affairs minister, Audrey Tang. Only once such publicly supported digital spaces become as common as parks and libraries are they likely to meet the challenge of polarization driven by deceptive GFM-generated content.
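The basic mechanics of these systems—clustering participants by their voting patterns and then surfacing "bridging" statements that all opinion groups endorse—can be sketched in a few lines of Python. This is a toy illustration of the underlying idea only; production systems such as pol.is additionally apply dimensionality reduction and more robust statistics, and the vote data below is invented.

```python
# Toy vote matrix: rows are participants, columns are comments
# (+1 agree, -1 disagree, 0 pass). All data here is invented.
votes = [
    [ 1,  1, -1,  1],   # participants 0-2 lean one way
    [ 1,  1, -1,  1],
    [ 1,  1, -1, -1],
    [-1, -1,  1,  1],   # participants 3-5 lean the other way
    [-1, -1,  1,  1],
    [-1, -1,  1, -1],
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def cluster(points, iters=10):
    # Minimal 2-means clustering, seeded with the first and last points
    # for determinism; pol.is-style systems use richer methods.
    centroids = [list(points[0]), list(points[-1])]
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            i = 0 if distance(p, centroids[0]) <= distance(p, centroids[1]) else 1
            groups[i].append(p)
        centroids = [
            [sum(col) / len(g) for col in zip(*g)] if g else centroids[i]
            for i, g in enumerate(groups)
        ]
    return groups

groups = cluster(votes)
# A "bridging" comment is one that every opinion group, on average, agrees with.
bridging = [
    c for c in range(len(votes[0]))
    if all(sum(p[c] for p in g) / len(g) > 0 for g in groups)
]
print(bridging)  # prints [3]: the one comment both groups lean toward
```

Even in this caricature, the two opinion groups disagree on most comments yet converge on comment 3—the kind of surprising agreement these platforms are designed to surface.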

The Voice of the People

Authentication and information are bedrocks of democracy. Yet it is public control of state functions that defines democracy. Transforming democratic systems using advanced technology will require transforming how we formally represent and hear citizens' voices. Entrenched holders of power in existing institutions will seek to maintain the status quo, so this is likely to be the most challenging and contentious area of influence for new technologies.

We suspect that mobilization and transformation along this dimension will be outside the formal mechanisms of governance, or start from their periphery—in municipalities first, and at national levels only later. In her classic On Revolution (1963), Hannah Arendt argues that a key difference between the American and French revolutions was that in the former case, local democracy grew as a legitimate and widely accepted practice, outside the British reach, before it was tried on a larger scale: Legitimacy preceded authority.26 Similarly, we suggest that organic evolution outside formal political institutions will help the "new public" inherit formal authority as dominant structures erode.

As Dewey suggests, one natural place for these new practices to emerge is precisely where democratic institutions may be most threatened: In the development and deployment of GFMs themselves. The speed and scale of these models' development means that meaningful democratic engagement in designing them requires a dramatic increase in the capacity of democratic systems. Luckily, despite the simplistic media portrayal, many organizations developing leading GFMs are not simply profit-driven companies: OpenAI, Anthropic, and many maintainers of open-source models are much more complicated hybrid entities with a significant interest in legitimacy and public participation.27

As a result, these entities are supporting experiments with new forms of democracy—outside of formal political institutions and empowered [End Page 158] by GFMs and advanced cryptography. These experiments often begin with tools such as pol.is as means of encouraging deliberation among participants, but then turn more directly to choices and inputs that shape action. Anthropic experimented with harnessing pol.is to reshape the constitution underlying its flagship Claude system, dramatically reducing the bias it exhibited against people with disabilities.28 Other ongoing experiments conducted by OpenAI through its "Democratic Inputs" program involve advanced voting techniques that allow participants to express not just preferences but priorities (such as quadratic voting), or harness GFMs to help synthesize expressed values.29
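Quadratic voting, mentioned above, prices intensity of preference: casting n votes on a single issue costs n² "voice credits," so concentrating one's budget on a top priority grows progressively more expensive. A minimal sketch of the cost rule follows; the credit budget, issue names, and ballot values are hypothetical, invented only to illustrate the arithmetic.

```python
# Quadratic voting: casting n votes on one issue costs n^2 voice credits,
# so a ballot reveals not just direction but intensity of preference.

def qv_cost(votes_per_issue):
    """Total credit cost of a ballot: the sum of squared vote counts."""
    return sum(v * v for v in votes_per_issue.values())

def is_valid_ballot(votes_per_issue, budget):
    """A ballot is valid if its quadratic cost fits within the credit budget."""
    return qv_cost(votes_per_issue) <= budget

# Hypothetical ballot: a voter with 100 credits who cares most about transit.
ballot = {"transit": 8, "parks": 5, "zoning": -3}  # negative = votes against
print(qv_cost(ballot))               # 64 + 25 + 9 = 98
print(is_valid_ballot(ballot, 100))  # True
```

Because each additional vote on the same issue costs more than the last, voters are nudged toward spreading credits across issues unless they genuinely care intensely about one.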

Open-source models are approaching similar challenges from a different starting point. Rather than opening previously secret processes to public input, they seek public legitimacy amid threats of being banned for lack of licensing. In response, open-source models and their maintainers (such as Hugging Face) are increasingly looking to build authoritative versions with strong, but participatory and democratic, controls that can pass regulatory hurdles. These models are harnessing formal voting protocols and participatory design processes.30

Such approaches offer the prospect of digital democratic governance that keeps pace with technological change. Growing evidence suggests that these models can carry out detailed, flexible, and interactive conversations with large numbers of participants and then portray in easily digestible forms the current state of collective concerns.31 The models also learn from the information gathered from those conversations, allowing them to provide decisionmakers with qualitative and quantitative insights, as illustrated in a recent Microsoft Research project that focused on discerning opinion about a community's disputed land-use rules. Eventually these models should be able to serve as full interactive advisors on public concerns.32

This possibility begins to challenge our very understanding of "representation." What are the bounds of trust that should be put in such advisors? Should they ever directly prescribe actions? If such models can identify groups of like-minded citizens that are distinct from the groupings identified by standard geographical districts, how do we reconcile the two possible structures for representation? And how much should regulation be determined by traditional governmental processes versus new socially participatory, democratic ones? If interaction with and feedback to such models can become more continuous than we have come to expect from infrequent elections, should we use them to constantly reshape policy and plans? And if so, what thresholds for participation and authentication should be required? Could it ever make sense, as the philosopher Bruno Latour has suggested, to provide formal representation to natural phenomena such as ecosystems via GFMs in a "parliament of things"?33 Although the notion of representing "things" might seem alien to current practice, we already do so by, for instance, carrying out environmental-impact studies before permitting various infrastructure projects. [End Page 159]

In all likelihood, GFMs could vastly improve such efforts and expand what it means for "all affected" to have a voice in decisionmaking, as experiments by Dark Matter Labs are beginning to illustrate.34 And these experiments are already migrating into political institutions themselves. Civil society organizations such as Partners In Democracy and New_ Public are harvesting lessons from these experiments and identifying points of application inside existing political systems, where new technologies might be used to improve representation.35

Through and Out

The world's modern experiment in democracy is still young, not quite 250 years old. Political and social convulsions occur periodically, and some have seemed to threaten the durability of democracy; in the United States, both the Civil War and World War II did so. Each democracy around the globe has experienced existential perils, and great transformations have invariably emerged in their wake. The United States after the Civil War briefly created one of the world's first multiracial democracies; and World War II led to the largest wave of democratization and decolonization in history and the founding of the United Nations.

When Odysseus sailed the straits between Scylla and Charybdis, his crew's ears stopped with wax against the sirens' fatal song, he and his crew knew what home they sailed toward: Ithaka. We too must chart a narrow course between two monsters, singularity and chaos, but few of us in this increasingly complex world can identify "home" so clearly. Our suggestion is this: We must aspire to harness digital tools to manage our world's complexity in support of plural homes in which, together, we can thrive.

Or, as Ralph Ellison put it, "The way home we seek is that condition of man's being at home in the world, which is called love, and which we term democracy."36 [End Page 160]

Danielle Allen

Danielle Allen is James Bryant Conant University Professor at Harvard University and director of the Allen Lab for Democracy Renovation at the Harvard Kennedy School's Ash Center for Democratic Governance and Innovation.

E. Glen Weyl

E. Glen Weyl is research lead at Plural Technology Collaboratory and Microsoft Research Special Projects and chair of the Plurality Institute.

NOTES

1. Divya Siddarth et al., "How AI Fails Us," Justice, Health, and Democracy Impact Initiative and Carr Center for Human Rights Policy Technology and Democracy Discussion Paper, December 2021.

2. Daron Acemoglu and James A. Robinson, The Narrow Corridor: States, Societies, and the Fate of Liberty, 1st ed. (New York: Penguin, 2019).

3. See Danielle Allen, Henry Farrell, and Cosma Rohilla Shalizi, "Evolutionary Theory and Endogenous Institutional Change" (unpubl. paper, 2019), https://projects.iq.harvard.edu/files/pegroup/files/allen_farrell_shalizi.pdf; also Danielle Allen, Justice by Means of Democracy (Chicago: University of Chicago Press, 2023), ch. 4.

4. On intelligence, see Howard Gardner, Multiple Intelligences: New Horizons in Theory and Practice, rev. and updated ed. (New York: Basic, 2006); Hugo Mercier and Dan Sperber, The Enigma of Reason (Cambridge: Harvard University Press, 2017). On epistemic pluralism, see: Henry Farrell and Cosma Rohilla Shalizi, "Pursuing Cognitive Democracy," in Danielle Allen and Jennifer S. Light, eds., From Voice to Influence: Understanding Citizenship in a Digital Age (Chicago: University of Chicago Press, 2015), 209–31.

5. Authoritarian regimes often strategically "flood" information ecosystems instead of just censoring them. Gary King, Jennifer Pan, and Margaret E. Roberts, "How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, Not Engaged Argument," American Political Science Review 111 (August 2017): 484–501. See also Justin Pottle, "'The Greatest Flood of Mass Suggestion': John Dewey, Propaganda, and Epistemic Costs of Social Organization," Journal of Politics 84 (July 2022): 1515–27.

6. Shrey Jain, Zoë Hitzig, and Pamela Mishkin, "Contextual Confidence and Generative AI," arXiv (preprint), 2 November 2023, https://doi.org/10.48550/arXiv.2311.01193.

7. Helen Nissenbaum, "Privacy as Contextual Integrity," symposium, Washington Law Review 79 (March 2004): 119.

8. Divya Siddarth et al., "Who Watches the Watchmen? A Review of Subjective Approaches for Sybil-Resistance in Proof of Personhood Protocols," arXiv, 13 October 2020, https://doi.org/10.48550/arXiv.2008.05300.

9. White House Office of Science and Technology Policy, "US and UK to Partner on Prize Challenges to Advance Privacy-Enhancing Technologies," press release, 8 December 2021, www.whitehouse.gov/ostp/news-updates/2021/12/08/us-and-uk-to-partner-on-a-prize-challenges-to-advance-privacy-enhancing-technologies/.

10. Neal Stephenson, The Diamond Age: Or, a Young Lady's Illustrated Primer (New York: Bantam Books, 1996).

11. Ray Kurzweil, The Singularity Is Near: When Humans Transcend Biology (New York: Penguin, 2006).

12. "OpenAI Charter," OpenAI, https://openai.com/charter.

13. Devin Coldewey, "OpenAI Shifts from Nonprofit to 'Capped-Profit' to Attract Capital," TechCrunch, 11 March 2019.

14. Rishi Bommasani et al., "Do Foundation Model Providers Comply with the Draft EU AI Act?" Center for Research on Foundation Models, Stanford University, 2023, https://crfm.stanford.edu/2023/06/15/eu-ai-act.html.

15. Daron Acemoglu and Simon Johnson, Power and Progress: Our Thousand-Year Struggle over Technology and Prosperity (New York: Public Affairs, 2023); Jan De Loecker, Jan Eeckhout, and Gabriel Unger, "The Rise of Market Power and the Macroeconomic Implications," Quarterly Journal of Economics 135 (May 2020): 561–644.

16. OpenAI, "How Should AI Systems Behave, and Who Should Decide?" OpenAI blog, 16 February 2023, https://openai.com/blog/how-should-ai-systems-behave.

17. Jain et al., "Contextual Confidence and Generative AI."

18. John Dewey, The Public and Its Problems (New York: Henry Holt and Company, 1927), 31.

19. Such strategies have already been pioneered by Taiwan's Digital Minister Audrey Tang, who calls her strategy "Plurality." In Taiwanese Mandarin, the same word means both "digital" and "plural." The digital itself can be rethought as a foundation for the plural, instead of as a foundation for either singularity or chaos. Tang has done that, and we can follow her lead, in defense of plural societies. "Audrey Tang on the Technology of Democracy," interview, Conversations with Tyler, 7 October 2020, https://conversationswithtyler.com/episodes/audrey-tang/.

20. J.C.R. Licklider, "Computers and Government," in Michael L. Dertouzos and Joel Moses, eds., The Computer Age: A Twenty-Year View (Cambridge: MIT Press, 1979).

21. Arthur M. Schlesinger, The Coming of the New Deal: The Age of Roosevelt (Boston: Houghton Mifflin, 1958), 311.

22. Fred Turner, The Democratic Surround: Multimedia and American Liberalism from World War II to the Psychedelic Sixties (Chicago: The University of Chicago Press, 2013).

23. J.C.R. Licklider, "Computers and Government."

24. Danielle Allen, "Toward a Connected Society," in Earl Lewis and Nancy Cantor, eds., Our Compelling Interests (Princeton: Princeton University Press 2016), 71–105; Audrey Tang and E. Glen Weyl, Plurality: Technology for Collaborative Diversity (forthcoming, 2024).

25. Christopher T. Small et al., "Opportunities and Risks of LLMs for Scalable Deliberation with Polis," arXiv, 20 June 2023, https://doi.org/10.48550/arXiv.2306.11932; Alejandro Cuevas Villalba et al., "Automated Interviewer or Augmented Survey? Collecting Social Data with Large Language Models," Deepai.org, 18 September 2023, https://deepai.org/publication/automated-interviewer-or-augmented-survey-collecting-social-data-with-large-language-models.

26. Hannah Arendt, On Revolution (New York: Viking Press, 1963).

27. "Our structure," OpenAI, https://openai.com/our-structure; Dylan Matthews, "The $1 Billion Gamble to Ensure AI Doesn't Destroy Humanity," Vox, 17 July 2023, www.vox.com/future-perfect/23794855/anthropic-ai-openai-claude-2.

28. "Collective Constitutional AI: Aligning a Language Model with Public Input," Anthropic, 17 October 2023, www.anthropic.com/index/collective-constitutional-ai-aligning-a-language-model-with-public-input.

29. Wojciech Zaremba et al., "Democratic Inputs to AI," OpenAI blog, 25 May 2023, https://openai.com/blog/democratic-inputs-to-ai.

30. See, for example, the recent start-ups and communities Gov-4-Git, Bloom, and Together.ai.

31. Villalba et al., "Automated Interviewer or Augmented Survey?"; Nils Gilman and Ben Cerveny, "Tomorrow's Democracy Is Open Source," Noema, 12 September 2023, www.noemamag.com/tomorrows-democracy-is-open-source/.

32. Researchers at the Berggruen Institute, a Los Angeles–based think tank that focuses on an array of issues including democracy and governance, are working on this now.

33. Bruno Latour, We Have Never Been Modern, trans. Catherine Porter (Cambridge: Harvard University Press, 1993), 165.

34. Yu-Tang Hsiao et al., "vTaiwan: An Empirical Study of Open Consultation Process in Taiwan," SocArXiv (preprint), 4 July 2018.

35. See, for example, New_ Public's "Digital Spaces Directory" online tool at https://newpublic.org/directory.

36. Ralph Ellison, "Brave Words for a Startling Occasion," 1953, available at Wichita State University Libraries, Special Collections and University Archives, http://specialcollections.wichita.edu.
