

Artificial Intelligence (AI)—algorithmic systems—comprises technologies the world is still trying to understand in their essence, in order to assess their impact and risks.

Nation-states and international organizations are reviewing existing laws in the public and private sectors, human rights, and ethical norms with regard to legal gaps in need of regulation. Every week a new example showcases how the current order is unable to harness the effects of this technology on human beings.1 And every second week a new "ethical" code seeks to present principles to fill a gap that public opinion fears has been opened by this technology. To a certain extent, the sense of a normative gap is justified, since the current approaches fail to understand what artificial intelligence is.

The manifestation of this technology in services and products is leading to scrutiny and evaluation of AI from a very individualistic human or consumer rights perspective. However, the products and services derived from AI are not equal to its nature. AI and algorithmic systems do not understand individuals. Conceptually, they represent ideas of the social. The way they compute and classify patterns is relational. Algorithms categorize people into fine-grained groups. The identity of individuals is no longer relevant. Personalization may be perceived by the user as the technical procedure for individualization, but technically, personalization is relational: it is the classification of this individual into a very specific collective of similar people.

It does not necessarily become clear to the human concerned that he or she is being classified into a collective that may not be part of the conventionally known social categories in a society. Personalized advertising and "microtargeting" may give the impression that marketing is addressing potential consumers individually, based on information about the preferences of the individual. But technically, the individual is being assigned to various categories shared with many other individuals. The connection of all these categories results in an intersectional profile encompassing more categories than the usual ones such as age, gender, and social status; and that profile is equally shared by many other individuals. This level of granularity and intersectionality is easy to confuse with individuality. The technical and conceptual mechanisms are counterintuitive and not tangible.
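The arithmetic behind this confusion is simple. A toy calculation (the population size and number of categories below are invented for illustration, not drawn from any real platform) shows why an intersectional profile feels individual even though every single category is shared by millions:

```python
# Each category is broad and shared by many people; only their
# intersection is narrow. Illustrative, invented numbers.
population = 80_000_000          # a hypothetical national user base
binary_categories = 30           # e.g., "likes hiking", "urban resident", ...
profiles = 2 ** binary_categories

print(profiles)                  # over one billion possible intersectional profiles
print(population / profiles)    # on average, fewer than one person per profile
```

Every one of the thirty categories is shared with tens of millions of people, yet their combination is so specific that, on average, no two people in the population share the full profile.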

As a result, assessments of AI tend to focus on detecting individual damage and human rights abuses, although problematic algorithmic systems primarily discriminate against collectives, without detectable individual damage.2 This is a classic effect when it comes to assessing the impact of infrastructure. The effects of the shape, standards, and rules of infrastructure mediating the flow of resources, mobility, or telecommunication can only be detected with a structural overview of the system. In this view, infrastructure is the physical Foucauldian dispositif to distribute power, create the conditions for societal inclusion or exclusion, and shape the space of a society. Michel Foucault's concept of the dispositif has also been translated, among other renderings, as "apparatus." In his interview "Confessions of the Flesh," he defined the term as

a thoroughly heterogeneous ensemble consisting of discourses, institutions, architectural forms, regulatory decisions, laws, administrative measures, scientific statements, philosophical, moral and philanthropic propositions—in short, the said as much as the unsaid. Such are the elements of the apparatus. The apparatus itself is the system of relations that can be established between these elements.

The mobility infrastructure in a city determines the way its citizens access its geography; the infrastructure fosters or hinders inclusion. The streets of a suburb in the United States with roads but no sidewalks shape the mobility of its residents differently than the streets of Amsterdam, with sidewalks, bicycle paths, and roads. Pedestrian traffic lights with very short green intervals may generate more fluid car traffic, but they certainly present a challenge for older pedestrians.

Artificial intelligence is a new form of infrastructure. It is not a product; it is immaterial infrastructure. AI is technology that standardizes and automates processes. Everything that is a process implies a certain system and a set of standards, which can then be formalized in mathematical language and become partially or fully automatable. Such standardization goes beyond cables and hardware. Automating a process with AI implies setting a fine invisible layer of software to permanently mediate interactions with and among all parties involved in the process. In this way, immaterial infrastructure is being built into sectors where an infrastructural dimension was unthinkable before. Take, for example, Spotify, a service model that has become a new form of infrastructure for music, podcasts, and audiobooks, and is changing not only the format in which music, (amateur) radio, or books are consumed but also the method of access. Recorded oral forms of art, oral politics, and oral literature for leisure or science are now being bundled into one structure through the format, while at the same time preserving their unilateral communicative character. Writing as the central instrument of scientific propagation—the sola scriptura pose of science and the educational walls around it—is now being opened to oral culture. The public opinion generated in podcasts is now part of the architecture of what constitutes a new infrastructure of the oral. Function here follows form—while the content, the type of information entailed in the format, has become a second-order category.

The current understanding of infrastructure therefore needs revision. Presently, infrastructure denotes either the institutions preserving the economic, cultural, educational, and health functions of a country—soft infrastructure—or "all stable things that are necessary for mobility and an exchange between people, goods and ideas" (van Laak 2018, my translation)—hard infrastructure. An essential characteristic defining both is stability, whether procedural, in the case of soft infrastructure, or physical, in the case of hard infrastructure. The two types of infrastructure are a form of fundamental planning to systematically design access, distribution, and interaction with goods and services that are of interest to a collective. For Foucault, this is a fundamental aspect of political power. His concept of the dispositif regulating the politics of health, sexuality, or architecture was novel for broadening the definition of power beyond mere rules to a collection of "relations of power, practices, and actions" (Elden 2016) embodied in normative and material infrastructure and mechanisms. Infrastructure is thus the planning of power and its distribution through a set of standards embodying societal ideas of efficiency and fairness of procedures and distribution.

Another relevant characteristic of infrastructure is that modularity is inherent in it. Hannah Arendt's criticism of the bureaucratization of murder during the Third Reich in Germany is a fundamental criticism of soft infrastructure (Arendt 2006). Administration as soft infrastructure—once seen by Max Weber as the mechanism by which democracies ensure equality before the law and its procedures, in opposition to the arbitrariness of charismatic autocracies—entails risks. Dividing the extermination process into standardized administrative steps or modules led individuals to decontextualize each module from the broader process and fostered moral distance from its ultimate consequence. Administration banalized evil into a bureaucratic procedure that obfuscated responsibility through modularization, making the bureaucrats in the system accountable only for a single step of the process.

Furthermore, infrastructure usually has an interdependent character: information and telecommunication infrastructures are fundamentally dependent on electricity infrastructure.

One last characteristic to mention is that infrastructure and infrastructure goods are unavailable to single households and companies, for reasons of both production and cost. "Although bread is able to satisfy our hunger, it is not an infrastructure good, since the ingredients for bread production are easy to obtain; today everyone can bake bread for himself" (Buhr 2009).

Because fixed costs are very different depending on the capital goods, the supply of infrastructure happens under different market forms: mainly (natural) monopolies (e.g., electricity supply), but also competition (e.g., housing construction). While a single household may afford a generator or solar panels, a constant secure supply of electricity still needs connection to the grid, and the grid itself is the infrastructure that a single household cannot afford.

Artificial intelligence entails many of these aspects. It has physical prerequisites such as cables and hardware. It is not stable in its ontology; its formulas and code are constantly changing. But it does create a stable layer of mathematically formalized structure coexisting with or complementing the rules and constraints given by soft and hard infrastructure. Further, it automates processes through technical and mathematical modularization—each process is split into several steps according to technical requirements, but not necessarily to the administrative and social context. AI systems are not instruments or derivations of the rules and mechanisms of soft infrastructure. They have a different rationale running parallel to that of soft infrastructure, and, consequently, they require their own categorization.

Social media, for example, could be seen as a form of immaterial infrastructure in the communicative sector. A software layer orchestrates the interface, the timeframe, the space, and the format (videos, text, pictures) in which people communicate and interact with each other. This infrastructure thus standardizes and moderates communication alongside the rules and standards of soft infrastructure (in this case speech rights, personal rights, etc.), and its present market consists of monopolies defined by the format.3

Predictive policing systems would be another example. They are used to identify behavioral patterns for concrete crime categories, patterns that are then applied strategically to prevent similar crimes. For example, organized criminal groups have a modus operandi for robberies, and within a given time frame and geographical perimeter, this information is used systematically to prevent similar robberies in the area. A predictive policing system for this use would standardize the geography of a city and identify the smallest geographical unit. It would be built upon a set of definitions and human-made decisions: which data categories will be used and which will be excluded; how old the data may be and whether there is an expiration date for the data; which crime categories will be included and correlated with each other; which methodology will be used to process the data and compute probabilities; and so forth. With this, information about crime, the geography of the jurisdiction, historical police records, and so on is structured and "datafied"—gathered, parsed, and saved as a usable data set of standards and rules that function together with those specified by soft infrastructure. Although informing, assisting, and thus systematizing police work becomes very much dependent on the ideas and concepts of optimization, fairness, and efficiency4 of the diverse actors designing and implementing the technology, these social ideas of efficiency and fairness are at the same time constrained in their translation into algorithms by the rules of mathematics and the limits of "datafication."5 Thus, soft infrastructure and immaterial infrastructure are two separate systems dialectically influencing and constraining each other.
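The human-made definitions enumerated above can be made concrete as configuration data. Everything in the following sketch—the field names, the values, the `usable` rule—is a hypothetical illustration of the kind of decisions involved, not a description of any deployed system:

```python
# Hypothetical configuration for a burglary-prediction system; every
# field below is an illustrative assumption, not a real deployment.
config = {
    "grid_cell_meters": 250,            # smallest geographical unit
    "included_data": ["crime_type", "modus_operandi", "location", "time"],
    "excluded_data": ["ethnicity", "personal_identifiers"],
    "max_data_age_days": 365,           # expiration date for records
    "crime_categories": ["residential_burglary"],
    "method": "near-repeat pattern analysis",  # one possible methodology
}

def usable(record, cfg):
    """A record enters the model only if it fits the human-made rules."""
    return (record["crime_type"] in cfg["crime_categories"]
            and record["age_days"] <= cfg["max_data_age_days"])

# A recent burglary record is admitted; an old or off-category one is not.
print(usable({"crime_type": "residential_burglary", "age_days": 90}, config))
```

Each line of such a configuration is a normative choice dressed as a technical parameter: excluding a data category, shortening the expiration window, or shrinking the grid cell changes who and what the system can see.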

Another central aspect of AI as an architect and moderator of social relations, practices, and actions is the value at the center of its optimization rules. AI systems optimize for a specific goal or value, to the detriment of other values. Does a system optimize for efficiency in the sense of pragmatism, or for fairness? As the work of Nina Grgic-Hlaca and colleagues (2018) showed—in the course of a debate around bias in COMPAS, a software program assisting judges in some US states in analyzing prisoner reentry and parole risk—no single algorithm can account for many different fairness metrics and concepts at the same time. What, then, is the system optimizing for? Is it minimizing false positives, to ensure that as few defendants requesting parole as possible are wrongly categorized as high risk? Or is it minimizing false negatives, to ensure that as few high-risk defendants as possible are wrongly classified as low risk?
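The trade-off can be seen in a toy sketch (the risk scores and outcomes below are invented for illustration and have nothing to do with COMPAS data): moving the threshold that separates "high risk" from "low risk" reduces one error type only by inflating the other.

```python
# Toy illustration: a risk-score threshold trades false positives
# against false negatives; it cannot minimize both at once.
# (score, actually_reoffended) pairs -- fabricated illustrative data.
cases = [(0.2, False), (0.3, False), (0.4, True), (0.5, False),
         (0.6, True), (0.7, False), (0.8, True), (0.9, True)]

def error_rates(threshold):
    # false positive: flagged high risk but did not reoffend
    fp = sum(1 for s, y in cases if s >= threshold and not y)
    # false negative: classified low risk but did reoffend
    fn = sum(1 for s, y in cases if s < threshold and y)
    return fp, fn

for t in (0.35, 0.65):
    fp, fn = error_rates(t)
    print(f"threshold={t}: false positives={fp}, false negatives={fn}")
```

With the low threshold, no reoffender is missed but two harmless defendants are flagged; with the high threshold, the flags drop but two reoffenders slip through. Which configuration is "fair" is a value judgment the mathematics cannot settle.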

All the above examples illustrate traits of the technology that are also applicable in other AI-mediated sectors and services: health, agriculture, mobility, communication, social welfare, e-commerce, banking, employment, and so on. Using artificial intelligence in those sectors implies standardizing and creating a second layer of norms in mathematical modules that will automatically structure the human relations, practices, and interactions within that context.

Understanding that infrastructure, as a term, needs rethinking is key to understanding which normative instruments are needed to harness AI. Because infrastructure is the architectonic expression of the politics of a society, AI is a technology that impacts societies architectonically—and thus on a collective rather than an individual level.


Algorithms and artificial intelligence do not understand individuals; and democracies, from their legal-dogmatic perspective, do not understand collectives. The Western ethical approach and its legal cultures are individualistic in methodology and anthropocentric in ontology. In the theory of the legitimation of political power, the good society—a legitimate political order—could only be achieved by morally good citizens. So, in the first place, the individual had duties and obligations towards society. Rights emanated from those obligations but were not central to the existence of society. Constitutionalism changed that narrative: the concentration of power into a political order was legitimate if it succeeded in protecting the fundamental rights of individuals. First, society owed rights to its citizens, and from those rights emanated obligations. Under the new narrative, rights came first, duties second.

This was the narrative birth of the legitimation of occidental democratic power, based on the idea that political power is there to protect individuals. But the Leviathan-state, the monster with the monopoly on violence, was created to protect individuals from a very specific form of violence, a war of all against all.

That a man be willing, when others are so too, as farre-forth, as for Peace, and defence of himselfe he shall think it necessary, to lay down this right to all things; and be contented with so much liberty against other men, as he would allow other men against himselfe. For as long as every man holdeth this Right, of doing anything he liketh; so long are all men in the condition of Warre.

The purpose of the Leviathan-state was not to grant liberties and rights to individuals but to overcome (civil) war and make a life in society possible by granting restricted liberties and rights to individuals. Rights were instrumental to making society and cooperation among individuals possible.

Most constitutionalist theories start from a self-centered, rationalist anthropological view of man to justify the need for a social contract that does not presuppose good, moral individuals for a political order to be established and remain stable. The social contract would appeal instead to the rationality of the individual. The construction of the social contract had to incentivize man to accept handing over power and to trust the resulting political architecture, in order to be able to cooperate and benefit individually within the given rule structure. Increasingly, especially from the Enlightenment on, the focus on appealing to the rationality of individuals to accept and follow the social contract distracted from the purpose of rationality and contractual incentives.6

Individual autonomy can only be achieved through a societal lens, and is essentially dependent on the structural framework within which it operates.

While vervet monkeys have a feeding rank where dominant females eat first and longer, domestic dogs eat alone and do not share their food. An individualistic lens focused only on individual benefits and rights would consider an animal able to eat first and as much as it wants as clearly the one enjoying the greatest freedom. Without looking at the broader structural context, this could apply to an animal living in a zoo or a kennel. Most animals living in packs have dominance hierarchies reflected in food, reproduction, and other vital aspects. Domestic animals are generally denied the pack and enter a higher dependency and hierarchy with a human master. Mastery over oneself is not defined by individual benefits or rights, such as eating alone, but by relational factors dependent on the structure within which those rights are embedded.

So, from a legal-dogmatic perspective, democracies only know individuals. As a consequence, most democratic cultures have legal instruments to assess impact, and to provide protection and redress, merely on an individual level. There are already politically relevant examples of harm caused by the lack of an assessment regime based on structural risks and harms. The fitness-tracking app Strava released a visualization map showing all the activity tracks of the app's users all over the world. The data was anonymized. However,

in locations like Afghanistan, Djibouti and Syria, the users of Strava seem to be almost exclusively foreign military personnel, meaning that bases stand out brightly. In Helmand province, Afghanistan, for instance, the locations of forward operating bases can be clearly seen, glowing white against the black map.

Similarly, a few software programs for predictive policing used within the European Union fall outside the scope of regulation and thorough impact assessment: in several parts of Germany, state police are using software to predict burglaries, given a specific modus operandi, within a specific time frame and geographical perimeter, by using anonymous data about the crime type and procedure, as well as geographical data. The software makes sense in regions with fewer police officers because it may assist police in creating more efficient patrol shifts for burglary prevention. The corresponding state data-protection agencies permitted use of the software, since it did not process personally identifiable data and thus did not fall under their jurisdiction. Data protection was the only risk factor considered regarding individual rights, and from a data-protection and privacy standpoint, the software appeared harmless.

But these systems leave many societal questions open. If the software is fed historical data, it raises concerns about structural ethnic biases associated with ZIP codes: To what extent does the database reflect data asymmetries? How can representation be controlled so that some regions are not overrepresented while others are underrepresented? To what extent may the system amplify this kind of ethnic bias and, as a consequence, affect the social cohesion of a city? If a greater police presence is observed in structurally poorer regions, will residents feel more secure, or will this lead to a massive exodus of residents able to afford housing in a different part of the city? What feedback methods are in place to measure and ensure that these programs are not causing an effect similar to the one described by broken windows theory, in which visible signs of civil disorder—as a stronger police presence might suggest—further crime? How and to what extent is this tool meaningfully embedded in a broader prevention strategy? This is yet another example of the need for a more collective approach to evaluating algorithmic systems.

However, there are only a few areas of law—labor law, for example—where most legal cultures possess instruments to address the collective dimension of discrimination. In the future, discrimination will be a phenomenon observed in all sectors where AI is used, be it the distribution of energy or critical resources, consumer services, the health sector, or social welfare. The fact that discrimination does not only exist in the labor sector, and that the use of these technologies will take place in all sectors, points to one of the legal gaps that new technologies make more tangible.

For these reasons, focusing on individual rights is ironically detrimental to individual autonomy and rights. The autonomy of the individual depends very much on the social framework and infrastructure within which this autonomy is exercised. And those frameworks and infrastructures require a nonindividualistic reflection and assessment. Being able to eat alone is not a sign of self-mastery. The absence of individual harm with regard to human or fundamental rights does not mean that human beings are not harmed in general. There is such a thing as societal harm, and assessing societal impact requires different questions and criteria than those applied to human and fundamental rights.


It has been argued that AI and algorithmic systems are "immaterial infrastructure," a new form of infrastructure that disrupts the concept in its essence. Furthermore, infrastructure is a concept that cannot be harnessed with methodological individualism; it requires a collectivist approach. It has also been argued that although the collective was always part of the concept and purpose of democracies, on a normative level democracies are essentially individualistic, unlike infrastructure. As a result, artificial intelligence is escaping regulatory control due to a characteristic of democracies that predates the technology. The technology is merely making this weakness more distinct.

In order to understand how to assess and address algorithmic systems normatively in a democracy, it is important to first understand how democracies have been dealing with infrastructure as a collective affair, and then to identify elements, questions, and approaches that can be transferred to the assessment of artificial intelligence.

Infrastructure is an expression of planning, making provisions for the future in a systematic way. Planning implies intelligibility, calculability, and systematization.

The future as a concept has been, in occidental cultures, closely tied to monotheism and the development of a linear narrative about societies, with a predicted end of the world where individuals end up either in paradise or in hell. This was a radical change from the narratives of classical cultures, which had no notion of the past or prehistory, but rather a narrative of a cultural, god-given origin similar to the present, and which did not anticipate change in the manner of future narratives. Future narratives see the time to come as a time when evolution happens, when neither clothes nor context nor social habits remain the same. With the development of Protestantism and capitalism, the future became more than a point in time when the story would end. It became an unwritten point of opportunity to be shaped by human beings.

At the beginning of the twentieth century, the idea of the future was closely tied to technology as an instrument for changing historical contexts and shaping societies. Elites initiated a technical discourse focused on scientific pragmatism and technocracy, with social engineering centered on the creation and "neutral" planning of big societal projects. In 1933, sociologist Hans Freyer stated, "If the immanent utopia of technology is the transformability of all materials and forces into each other, then the immanent utopia of planning is the transformability of all historical situations into each other" (Freyer 1987, 22, my translation). Karl Mannheim even declared the year 1935 the end of "unplanned things" (Mannheim [1935] 1958). Technical experts postulated that by organizing societies' infrastructure based on efficiency and rationality, for the sake of the common good, it would even be possible to overcome subjectivity in politics. "We are so used to fighting that we cannot see there is a better way—the way of planning" (Kizer 1939) was the argument advanced at a conference in 1939, at the end of the American New Deal era; the "drawing-board" planning approach was also known on the European continent, and the intellectual elites behind it conducted an extensive occidental international exchange.

The constraints of planning through infrastructure crystallized in public discussions during the 1970s. New issues such as climate change and sustainability tested the limits of what could be planned for. More and more, the value of public opinion and public participation in infrastructure projects initiated scrutiny of the political dimension of infrastructure and its alleged neutrality. In that same decade, Foucault presented his new theory of power within infrastructure and the social and ethical assumptions implied therein.

Planning and infrastructure in democracies have become more complex, diverse, and decentralized. Infrastructure goods and services are not produced only by states, but also by private parties: private hospitals, private schools, airports, museums, railways. There, the role of the state is focused not on the process of producing infrastructure but on intervening in and regulating the socioeconomic behavior of its producers; on defining public infrastructure and the conditions of operation of said infrastructure, including the planning of geographical space for infrastructure.

Within the European Union, the societal principles behind the regulation of infrastructure are being unified. There is, to quote constitutional law scholar Jens Kersten, a "paradigm shift in the spatial guiding principles of public services. They are transitioning from the welfare state's guiding principles of equality of living conditions towards the EU values of economic, social and territorial cohesion" (2008, 12).

Apart from the principle of cohesion, the definition of infrastructure—as well as the criteria for safeguarding it and the rules of its usage to distribute services and goods—carries slightly different implications and consequences.

Social cohesion is a contested concept used by multiple countries as a factor both in the debate about social values and in policy strategy. Since 2000, the government of the United Kingdom has been commissioning a report known as the State of the English Cities, which is focused on the social and economic performance of its biggest cities. This ongoing study categorizes social cohesion in five dimensions: material conditions (health, housing); passive relationships (tolerance, security); active relationships and solidarity; inclusion (people's sense of belonging); and (social) equality. These five dimensions are categories through which public policy is meant to steer its citizens according to a collective perspective. Thinking along these dimensions helps us to understand how this guiding principle may shape the policies around infrastructure differently: How is social cohesion steered by infrastructure? Does cohesion transcend diversity, or does it acknowledge diversity in the access to and distribution of infrastructure services and goods? What becomes critical infrastructure—only material infrastructure, or is soft cultural infrastructure with strong emotional symbolism also part of a country's designated critical infrastructure? What does infrastructure require in terms of geographical and spatial needs? Are these needs based on distinctions of rurality and urbanity, or are they delineated across culturally defined areas encompassing both? What are the societal and public-interest values that foster cohesion and should be protected through the regulation of infrastructure and the space it occupies?

In a comparative study of the United States, the United Kingdom, Sweden, and Denmark, Christian Albrekt Larsen showed that trust among citizens is influenced by social inequality and by the representation of poor and middle classes in mainstream media. The infrastructure of a society is the political response to social inequalities. It decides what is considered necessary for a dignified life in a society and what should be made available to all citizens. Social cohesion is very much dependent on how infrastructure addresses social inequality. Which conditions of operation of infrastructure, and what types of markets, are allowed in areas of significant social inequality? Is equal access and distribution fair enough? Or should some regions and groups have special access and distribution rights to compensate for specific burdens or inequalities? If so, what asymmetries need to be leveled? What are the minimum standards needed to secure a fair supply? What are the minimum criteria to ensure societal access that acknowledges the differences and asymmetries of a society? Which parts of the infrastructure should be open to competition? Which parts are noncommercial and subject to societal solidarity through taxation?

There are societies where nature has a highly symbolic value. In many of them, the woods cannot be privatized. In some of them, woods can be sold under certain conditions but cannot be fenced and must still be accessible to any citizen. In other societies, both privatization and fencing of woods are allowable. These policies are the expression of a society's political values and expectations of space, and the structures considered fundamental by the citizenry in order to lead an adequate life and develop a sense of belonging.

The concept of social cohesion as the guiding principle for shaping societies is an issue that has been explicitly discussed regarding soft infrastructure, for example in the concept of citizenship and the politics of migration. There, the frictions between pluralism and cohesion have generated prolonged debate on how pluralist societies can create a unified sense of belonging. But for material infrastructure and its spatial politics, the concept of social cohesion still has not been substantiated. If the ceiling of an underpass or tunnel is too low for buses, then only (mainly private) car drivers would be able to use it. It would exclude people who can only afford public transport. What alternatives are being provided to ensure access between the areas at both ends of the tunnel? And what social consequences result from this design?

Europeans have long been aware of the potential conflicts among different kinds of material infrastructure due to the scarcity of space. In 1970, the Council of Europe initiated the CEMAT high-level ministerial conferences for spatial planning, leading to the approval in 2000 of the Guiding Principles for Sustainable Spatial Development on the European Continent. In 1999, the European Union developed the European Spatial Development Perspective, a policy framework providing the conditions and criteria that are instrumental in building trans-European networks (transport, energy, telecommunications). Both the Council of Europe and the European Union enshrine social cohesion as one of the guiding principles in their policies and frameworks of spatial planning when building and coordinating (material) infrastructure. And yet the standardizing, homogenizing character of cables, bridges, and roads, and its impact on pluralist societies, has not yet been the object of a thorough discussion comparable to the debate on citizenship.

The example given above of the low tunnel ceiling resembles many examples of bias seen in algorithmic systems—for example, the automated soap dispenser unable to detect users with darker skin. The problem behind the soap dispenser was bias at the standardization level: near-infrared technology is poor at detecting darker skin, and since the developers and testers of these devices were lighter-skinned individuals, the problem went unnoticed for a long time. When asked about the cause of and possible solution to the problem, Richard Whitney, the vice president of product at Particle, a company producing Internet of Things devices, said, "In order to compensate for variations in skin color, the gain, [or] sensor equivalent to ISO [International Standards Organization] and exposure in cameras, would have to be increased" (Plenke 2015).

Both examples are about the creation of standards by making implicit assumptions about humans and their contexts that do not reflect otherness—the variety in human nature and of social contexts. Many of the biases identified in algorithmic systems are problems caused by the nature of standardization, which is also common to material and soft infrastructures. [End Page 492]

Even though the concrete implementation of material infrastructure is reduced to allegedly neutral numbers and mathematical standards, on a more abstract level its collective social impact is accounted for by the norms and rules in force on the European continent. The principles that guide the creation and coordination of infrastructure are enshrined in the corresponding spatial planning policies at the local, regional, national, and continental levels. Those principles constitute a set of criteria for guiding, assessing, and evaluating the impact of infrastructure that is very much applicable to algorithmic systems:

  • balancing social, economic, ecological, and cultural conditions (also considering geographical asymmetries);

  • safeguarding diversity;

  • providing for stable and continuous access to public services to ensure fairness of opportunity;

  • providing for a competitive, balanced economic structure that fosters a wide range of jobs and apprenticeships;

  • preserving and developing cultures;

  • ensuring sustainability and respect for nature;

  • ensuring that the needs of defense and civil protection are taken into account;

  • ensuring that the conditions needed to foster social cohesion are provided for.

And while all these principles do have an impact on individuals, they differ fundamentally from the catalogue of questions used in the individual-rights approach. The principles listed here are better suited to generating [End Page 493] a deeper analysis of the impact of algorithmic systems because they provide answers on a structural level. Such a more collectivistic approach at the regulatory and normative level is thus not unknown to many democracies. However, as discussed above, several aspects and implications of infrastructure still need more scrutiny and further legal and methodological thinking.


Because immaterial infrastructure is interdependent with both material and soft infrastructure, a few of its characteristics pose challenges and need to be considered.

The life cycle of immaterial infrastructure is shorter than that of the material infrastructure on which it may depend: AI technologies are outlived by the cables and physical structures that carry them. For soft infrastructure (regulatory institutions), this means that normative instruments must focus on the human values and conflicts at stake and must be technology-neutral, so that they are not rapidly outdated, resulting in legal uncertainty. For the building and reform of material infrastructure, it requires sufficient physical ambivalence in what is built so that it can accommodate future technology—immaterial infrastructure—that has not yet been created and whose properties are not yet known.

An additional challenge arises from the inherent restrictions of algorithms described above. Algorithmic systems can help us identify the evolutions and patterns of diversity in pluralistic societies. However, they can only optimize for one value, to the detriment of others. Considering the catalogue of principles listed at the end of the previous section, this means that a specific service of immaterial infrastructure will only be able to optimize for a single principle. In many cases immaterial infrastructure will be able to detect—but not operate to fulfill—diverse levels of justice. In algorithmic systems, justice becomes a zero-sum game, in which one notion of [End Page 494] justice becomes the rule and leaves no "algorithmic space" for other dimensions and measurements of fairness.
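The zero-sum character of optimizing for a single notion of fairness can be illustrated with a deliberately simple sketch. All scores, outcomes, and thresholds below are invented for illustration: a decision rule is tuned so that two groups are selected at equal rates (one notion of fairness, often called demographic parity), yet the same rule treats the groups' truly qualified members very differently (another notion, equal opportunity).

```python
# Toy illustration: a threshold choice that equalizes selection rates
# across two groups can still yield unequal true-positive rates.
# All data are hypothetical.

def rates(scores, labels, threshold):
    """Return (selection rate, true-positive rate) at a given threshold."""
    selected = [label for score, label in zip(scores, labels) if score >= threshold]
    selection_rate = len(selected) / len(scores)
    positives = sum(labels)
    true_positive_rate = sum(selected) / positives if positives else 0.0
    return selection_rate, true_positive_rate

# Hypothetical risk scores and ground-truth outcomes (1 = qualified).
group_a = ([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0])
group_b = ([0.9, 0.6, 0.5, 0.2], [1, 0, 1, 0])

# Thresholds tuned so both groups are selected at the same rate.
sel_a, tpr_a = rates(*group_a, threshold=0.5)
sel_b, tpr_b = rates(*group_b, threshold=0.55)

print(sel_a, sel_b)  # equal selection rates: 0.5 and 0.5
print(tpr_a, tpr_b)  # unequal true-positive rates: 1.0 and 0.5
```

Satisfying one fairness criterion here actively rules out the other: every qualified member of group A is selected, but only half of the qualified members of group B are. The system "detects" both notions of justice but can operate by only one of them at a time.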

Hence, the planning of these systems will need to provide for additional compensation and balancing mechanisms. Implementing immaterial infrastructure will frequently require joint measures in both soft and material infrastructure, as well as a prior reflection on the fabric of society, formulating what social cohesion means in a pluralist society of the future.

In conclusion, algorithmic systems and artificial intelligence, as collectivistic technologies, amplify a weakness of democracies: the normative approach of democratic powers has been characterized by methodological individualism. Yet democracies also have collective, societal purposes, even though their more collectivistic dimensions, and the corresponding regulatory instruments, are less developed. Still, democracies do have regulatory experience with a more collectivist approach in the realm of infrastructure and spatial planning, where the nature of AI is better harnessed. The implementation of AI requires societal thinking. In this new era of automation, the most imperative task for democracies lies in the further development of the idea of the public interest, the common good, and the shape of society.

Lorena Jaume-Palasí

Lorena Jaume-Palasí is the founder of the Ethical Tech Society and AlgorithmWatch. An appointed member of the Spanish government's Council on Artificial Intelligence and Big Data, she has coauthored several books on internet governance, and she lectures and writes regularly on data protection, privacy, public goods, and discrimination.


1. See, for example, the cases described by Vera Eidelman in "The First Amendment Case for Public Access to Secret Algorithms Used in Criminal Trials" (2018), where dubious software used for DNA tests in criminal cases could not be scrutinized due to the legal protection of trade secrets.

3. Existing social media companies hold monopoly positions within their own specific formats: the format offered by Twitter differs from that offered by Instagram, Snapchat, YouTube, or Facebook.

4. These ideas of efficiency and fairness do not necessarily need to be specified in the set of rules constituting a soft infrastructure. They can be the expression of common social expectations and prejudices in a society running contrary to legal and administrative rules.

5. Not all social contexts and circumstances can be comprehensively turned into data.

6. Along with the Enlightenment's command of daring to know, making use of rationality, the increase of literacy rates, and the permeation of the contractual narrative into the cultural expectations of societies, the concept of privacy emerged, marking the threshold where state power ended. Both rationality and privacy reinforced the methodologically individualistic approach in the social contract. This would lead Jürgen Habermas, with The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society, and Richard Sennett, with The Fall of Public Man, to a fundamental criticism of how this development is eroding the public dimension in the lives of private individuals and, with this, essential aspects of the public sphere and societal cohesion.


Arendt, Hannah. 2006. Eichmann in Jerusalem: A Report on the Banality of Evil. New York: Penguin Books.
Buhr, Walter. 2009. "Infrastructure of the Market Economy, Volkswirtschaftliche Diskussionsbeiträge." Fachbereich Wirtschaftswissenschaften, Wirtschaftsinformatik und Wirtschaftsrecht: Discussion Paper No. 132–09. U. Siegen.
Eidelman, Vera. 2018. "The First Amendment Case for Public Access to Secret Algorithms Used in Criminal Trials." Georgia State University Law Review 34: 915.
Elden, Stuart. 2016. Foucault's Last Decade. Cambridge, UK: Polity Press.
Foucault, Michel. 1980. "The Confession of the Flesh" [Interview, 1977]. In Power/Knowledge: Selected Interviews and Other Writings, edited by Colin Gordon, 194–228. New York: Pantheon Books.
Freyer, Hans. 1987. "Herrschaft und Planung: Zwei Grundbegriffe der politischen Ethik." In Herrschaft, Planung und Technik: Aufsätze zur politischen Soziologie, edited by Hans Freyer, 17–43. Weinheim: VCH Verlagsgesellschaft.
Grgic-Hlaca, Nina, Elissa M. Redmiles, Krishna P. Gummadi, and Adrian Weller. 2018. "Human Perceptions of Fairness in Algorithmic Decision Making: A Case Study of Criminal Risk Prediction." In Proceedings of the 2018 World Wide Web Conference on World Wide Web, WWW 2018, Lyon, France, April 23–27, 903–12.
Hern, Alex. 2018. "Fitness Tracking App Strava Gives Away Location of Secret US Army Bases." The Guardian, Jan. 28. Accessed Jan. 4, 2019.
Hobbes, Thomas. 1968. Leviathan. Edited by Crawford Brough Macpherson. London: Penguin Books.
Kersten, Jens. 2008. "Mindestgewährleistungen im Infrastrukturrecht." Informationen zur Raumentwicklung 24, H 1/2: 1–15.
Kizer, Ben H. 1939. "The Need for Planning." In National Conference on Planning. Proceedings of the Conference held at Minneapolis, Minnesota, June 20–22, 1938. Chicago: 1–9.
Laak, Dirk van. 2018. Alles im Fluss: Die Lebensadern unserer Gesellschaft. Geschichte und Zukunft der Infrastruktur. Frankfurt am Main: S. Fischer Verlag.
Larsen, Christian Albrekt. 2013. The Rise and Fall of Social Cohesion: The Construction and De-construction of Social Trust in the US, UK, Sweden, and Denmark. Oxford: Oxford U. Press.
Mannheim, Karl. [1935] 1958. Mensch und Gesellschaft im Zeitalter des Umbaus. Darmstadt: Wissenschaftliche Buchgesellschaft.
Plenke, Max. 2015. "The Reason This 'Racist Soap Dispenser' Doesn't Work on Black Skin." Mic. Sept. 9.
van der Sloot, Bert. 2016. "The Individual in the Big Data Era: Moving towards an Agent-Based Privacy Paradigm." In Exploring the Boundaries of Big Data, edited by Bert van der Sloot, Dennis Broeders, and Erik Schrijvers, 178–79. Amsterdam: Amsterdam U. Press.
