
  • Automation as Echo
  • Erin Gee and Sofian Audry

The varied voices of automation range from the swift scream of a drone's bomb dropping from the sky, to the dumb voice of an automatic teller asking me if I am still there. The urban landscape is littered with helpful voices, talking GPS devices, and presence-sensing advertisements that address you as you walk into a public washroom. Cheerful, often female-gendered voices of personal assistants proliferate as human-machine interaction moves toward natural user interfaces that are increasingly embodied, personal, and materially embedded in everyday life.

Vocalization and conversation are popular modes for naturalized interaction, as we might long to communicate with our machines as we would with a trusted familiar, friend, or family member. When humans ascribe (or, in the case of computers, prescribe) voices to nonhumans, this often serves an anthropomorphic function: think of loudmouthed insects and animals, babbling brooks, and the extraterrestrial singing of faraway magnetospheres. Using voice as metaphor is a way of thinking through communication with nonhuman others on human terms, even if this communication is impossible or one-sided to begin with.

In an effort to baffle or contradict this narcissist tendency, I created several projects in 2018 that amplify and highlight what I call the echoist potential of human-machine interaction, with the goal of using sound to foreground the subtle noise of algorithmic process itself as an algorithmic "voice."1 I approach this subject as an artist. Corporate powers are rapidly territorializing the creative potential of automation processes, and it is important to grasp which aspects of machine learning might embody the critical, the generative, and the unique in the face of normalizing corporate stratification. I believe that the challenges and promises of vocalization and echo might provide a useful strategy for thinking through machine learning not as a set of aesthetic tools that produce increasingly efficient end results, but rather as lively processes and performances that might articulate lags, vibrational loops, and material difference between human and nonhuman bodies.

Theorists such as Rosalind Krauss and Lev Manovich have described digital media as essentially narcissist in character.2 Whether in the image-based work of early video artists or in data-driven interactive works, the narcissism of the viewer, as Krauss originally argued, blinds her to the technological-material reality of the digital work. In the past, I have articulated ways that this anthropocentric privileging of the human gaze/self in new media might be destabilized through uncanny references to the technological body and its specific temporal modes, through aesthetic moves that I call echoist.3 The echo is a metaphor that goes beyond sound, speaking to the physical and temporal gaps in human-computer interaction that open up a space of aesthetic consumption problematized by the impossibility of comprehending machine perspectives on human terms. The echo unfolds in time, but most importantly it unfolds in space: sound travels as a physical interaction between a subject and an object that seemingly "speaks back."

The mythological nymph Echo "speaks" or "performs" her subjectivity through reflection or imitation of the voice of human Narcissus. Her (incomplete, sometimes humorous, sometimes uncannily resemblant) nonhuman voice is dependent on the human subject, who is also the progenitor of her speech. The relationship between these two mythological entities creates an apt metaphor for machine learning: its processes are not of the human, yet its "neural" functions are crafted in imitation of and in response to human thought. As machine subjectivity is crafted from human subjectivity, we cannot grasp its machined voice, nor perceive its subjective position, through analysis of its various textual, sonic, visual, and robotic outputs alone. Rather, the "voice" of machine learning is fleeting, heard through the spaces, the gaps, the movements between the machine and the human, the vibrational color of nonhuman noise.

To articulate these gaps and returns, I produced several audio works in collaboration with media artist Sofian Audry. The algorithmic process used in these works is a machine-learning system known as "long short-term memory" (LSTM)—a type of deep recurrent neural network that is able to learn patterns in sequences of data. Because data come...
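For readers unfamiliar with the mechanism named above, the following is a minimal, purely illustrative sketch of a character-level LSTM that learns to predict the next symbol in a text sequence. It assumes a Python/PyTorch environment; the model sizes, the toy training text, and all names are hypothetical and are not drawn from the authors' actual works.

```python
# Illustrative sketch only (not the authors' system): a character-level LSTM
# in PyTorch that learns patterns in a sequence of text, as described above.
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, hidden_size=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        self.lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, vocab_size)

    def forward(self, x, state=None):
        # x: (batch, seq_len) integer-encoded characters
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state  # logits over the next character

# Toy training loop: predict each next character from the ones before it.
text = "the voice of the machine echoes the voice of the human "  # hypothetical
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in text]).unsqueeze(0)  # shape (1, len)

model = CharLSTM(vocab_size=len(chars))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    logits, _ = model(data[:, :-1])  # inputs: all but the last character
    loss = loss_fn(logits.reshape(-1, len(chars)), data[:, 1:].reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the network "echoes": seeded with one character, it returns
# a sequence patterned on, but not identical to, what it heard.
with torch.no_grad():
    idx = torch.tensor([[stoi["t"]]])
    state, out = None, []
    for _ in range(40):
        logits, state = model(idx, state)
        idx = torch.distributions.Categorical(logits=logits[:, -1]).sample().unsqueeze(0)
        out.append(chars[idx.item()])
    print("".join(out))
```

The point of the sketch is simply the shape of the process: the network never stores the text itself, only weighted traces of it, and what it returns is a patterned, imperfect repetition rather than a copy.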
