
Trust in Invisible Agents
Drew Hemment

A defining challenge for our time is the increasing pervasiveness of computational processes that are not readily transparent or legible. This contributes to multiple crises in computing and society that play out in rolling news headlines on data harvesting, electoral manipulation, alternative facts, weaponization of data, the business models of Silicon Valley and the complicity of social media users.

Increasingly, lives are mediated, and decisions and opportunities facilitated, by opaque computational processes, from the choice of music and TV to access to insurance or a mortgage. In smart home consumer products, from energy meters to voice-operated personal assistants, the device collects data in the intimacy of the user’s home. In hospitals, doctors need to understand the decisions being made. Regulation lags behind. When the software in a car is updated, no authority need review it, even though it is updated in real time on the road.

Thirty years ago, Mark Weiser set out a vision for the disappearance of computing into the fabric of everyday life [1]. This vision of “ubiquitous computing” is of a future both seamless and benign. Today, his highest principle of invisibility appears as one dimension of the present crises.

The purposeful drive to make computing invisible is compounded by the sheer complexity of today’s landscape of interconnected systems, people and things. Internet of Things (IoT) and artificial intelligence (AI) systems can be difficult to understand, even for experts in adjacent domains: an expert in, say, security may not understand the latest advances in privacy. When accessing an IoT service, it can be difficult to determine where data and algorithms originate and who is accountable when things go wrong.

Current research in computing investigates how artificial intelligence can be made explainable. Explainable AI, or XAI, aims to enable a smart object to explain its reasoning and how it has reached its conclusions. Such work tends to approach the problem of interpretability as a technical challenge.
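As a loose illustration of that technical framing (a minimal sketch, not part of the editorial’s argument and not any particular XAI system), one simple form of interpretability renders a learned model’s decision logic as human-readable rules. The example below assumes Python with scikit-learn and uses a public dataset purely for demonstration.

# A minimal interpretability sketch, assuming Python and scikit-learn.
# It trains a small decision tree on a public dataset and prints the
# learned rules, one simple way a model can "explain" its conclusions.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(data.data, data.target)

# export_text renders the tree as nested if/else thresholds on the
# input features, a human-readable trace of the model's reasoning.
print(export_text(model, feature_names=list(data.feature_names)))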

We see rich currents of work between art, science and technology addressing this challenge. Creative disciplines can contribute a holistic approach to collaboration and orchestration between human agency and machine learning. Artists can create imaginative interfaces and open infrastructures to investigate the legibility and ethics of data systems and build visibility and literacy around capabilities and consequences.

Here the question arises of the role art and creativity can play in technology innovation, touching on longstanding and profound debates about the critical distance of art, its disinterest, and its use and function. Where this entails collaboration between art and industry, critical distance can be rethought across multiple fault lines, boundary crossings and liminal spaces.

One illustration is the way critical and commercial considerations can converge around the ethics and governance of data systems. On the one hand, ethical consideration of technology is a concern for many in these pages. On the other hand, barriers to user acceptance are of increasingly central concern to industry. The opportunity and challenge are to leverage this flashpoint to bring critical debate and intervention into the mainstream of technology innovation [2].

A theme of increasing importance for Leonardo is the “application and influence of the arts and humanities on science and technology” [3]. The history of work between art and technology innovation is well represented in the journal, dating to work at Xerox PARC and CalArts from the 1960s. Elsewhere, this theme has recently gained further prominence through the Science, Technology and the Arts (STARTS) program of the European Commission. The Leonardo STEAM Initiative currently invites contributions on integrating arts into science, technology, engineering and mathematics (STEM) education.

In “The Computer for the 21st Century,” Weiser grounded ubiquitous computing in philosophy, psychology and economics, as well as technology [4]. We now ask how the disciplines, practices and communities that find their home in this journal present other visions of computing in the 21st century at a time of multiple crises.

Drew Hemment
Leonardo Editorial Advisor
Email: <d.hemment@dundee.ac.uk>

References and Notes

1. Mark Weiser, “The Computer for the 21st Century,” Scientific American 265, No. 3, 94–104 (1991).

2. Such was the focus of “Future Sessions,” FutureEverything: Manchester, 22 March...
