Can Machines Be Ethical? On the Necessity of Relational Ethics and Empathic Attunement for Data-Centric Technologies
Ethics is a growing concern in the realm of data science, artificial intelligence (AI), and data-centric technologies in general. There are good reasons for this. We are all familiar with concerns over such issues as data privacy, data-driven surveillance, and the increased intertwining of the data industry with the finance industry and the so-called defense industry. We are all familiar with the fact that data-extracting and data-driven algorithms increasingly regulate the temporal, affective, and intersubjective modalities of everyday life. And we are all familiar with how this regulation, along with the sometimes over-the-top, but sometimes legitimate, concerns about how AI may change the very definition of the human, as well as life itself, is increasingly of not only ethical but also existential concern. Indeed, there are good reasons we are experiencing what I would call an ethical demand made by the data-centric situation in which we now find ourselves.
In this essay, I will discuss the dominant ways in which ethics is talked about vis-à-vis data-centric practices and technologies today. I will then turn to a more adequate ethical response to the demand of our current data-centric situation: this is what I call relational ethics.
ETHICS AS PRINCIPLES AND RULES
Most data-centric practitioners and ethicists today respond to the ethical demands of data-centric technologies in terms of principles, guidelines, rules, and ultimately law and policy. This response has taken various forms. Some, for example, have called for increased ethical policing of data science research, which might take the form of oversight by an institutional review board (IRB). Others consider it vital that data scientists themselves receive some form of ethical training to help combat, for example, blindness to both their own and structural biases that might enframe their work, or to help combat what is now seen as the widespread transgressions of data-centric industries. Finally, some have argued that data-centric technologies themselves, such as AI, must be built with ethical capacities that help guide their inevitable shaping of human worlds. These are all reasonable, and perhaps even necessary, responses. Unfortunately, however, and true to the instrumentalist reasoning that guides so many data-centric practitioners, ethics in each of these various forms tends to be conceived in terms of what I call either the "checklist" or "rulebook" approach to ethics (see Metcalf, Moss, and boyd 2019).
The "checklist" approach considers a practice ethical if and only if certain predefined principles can be checked off a pre-established list of such principles. Does a certain algorithmic practice respect the principle of privacy? Yes—good—check—ethical. That, in essence, is the principle-based or "checklist" approach to ethics in action. Similarly, the "rulebook" approach considers a practice ethical if and only if pre-articulated rules are followed. Have you guarded against the reidentification of your data? Yes—good—rule followed—ethical. That, in essence, is the rule-based or "rulebook" approach to ethics in action. Although these two approaches may be useful in certain institutional and bureaucratic contexts, they have nothing to do with ethics as a human practice, or the cultivated sensibilities necessary to be and become an ethical person. Therefore, while the checklist and rulebook approaches may be useful, for example, in the very narrow concern of IRB research approval or the prevention of [End Page 1002] litigation, they are the wrong roads to take if one were interested in developing ethical persons who are also data scientists, or, as I will focus on in the rest of this essay, developing ethical data-centric technologies such as artificial intelligence.
To clarify this, it is worth a brief detour through an early and important critique of artificial intelligence. In 1965, the phenomenologist Hubert Dreyfus published a critical essay entitled "Alchemy and Artificial Intelligence," which was later expanded into the book What Computers Can't Do (1972). In short, Dreyfus was critical of the then-dominant approach in AI research that assumed that human intelligence is a matter of the mental...