
Emerging Technologies for Real-Time Diffusion Performance
Bridget Johnson

Abstract

With the ascendance of the field of new interfaces for musical expression, a new phase of sound diffusion has emerged. Rapid development is taking place across the field, with a focus on gestural interaction and the development of custom performance interfaces. This article discusses how composers and performers embracing technology have broadened the boundaries of spatial performance. A particular focus is placed on performance interfaces built by the author that afford the artist more control over performative gestures. These new works serve as examples of the burgeoning field of diffusion performance interface design.

Supplemental materials such as audio files related to this article are available at <https://vimeo.com/98397876>.

For over half a century, the performance paradigm of sound diffusion has centered on the performer using a mixing desk as a controller. While much development has taken place regarding studio spatialization techniques and rendering algorithms, until recently the performance interface for diffusion has seen little change. A recent trend in diffusion performance is the application of new musical interfaces.

History

In 1951, Pierre Schaeffer and Pierre Henry presented the potentiomètre d’espace, a diffusion system with which they performed precomposed electroacoustic music by dynamically spatializing sounds through a tetrahedral speaker array. The two artists built an interface of potentiometers to control the gain of each speaker and, thus, the spatial field [1]. The diffusion concert is a tradition that remains active throughout the world. Over the last 70 years, diffusion has mostly been performed on a desk of faders in a manner similar to that of the potentiomètre d’espace. After the initial quadraphonic systems, many institutes began to develop larger speaker orchestras, notable examples of which include the GRM (Groupe de Recherches Musicales) Acousmonium, BEAST (Birmingham Electro-Acoustic Sound Theatre), the Gmebaphone and, later, the ZKM Klangdom. The large scale of such systems meant that new ways of controlling and calculating spatialization had to be devised.

Through the 1980s and 1990s, these traveling speaker orchestras continued to diversify. They began to include more speakers, requiring sophisticated routing systems. However, the user interface that drove these systems remained largely unchanged, with systems continuing to use a mixing desk as the main form of user interaction. This lack of change is understandable given the diffusion performance practices in vogue at the time [2]. The diffuser’s actions focused on the overall perception of the piece in the environment rather than the placement of a sound object in a discrete location. The audience’s perception of the spatial field was a function of its position in the hall. These concerts tended to take place in a traditional configuration: the diffuser was positioned in the sweet spot, with the audience seated behind or in front of the desk and with little to no view of the diffuser.

As spatialization algorithms became more sophisticated, composers were able to think about where they wanted to place their sounds within the space, rather than merely about how the sounds were dispersed. This shift began with research into the psychoacoustics of human hearing, which led to more accurate pan-pot laws for stereo panning [3], and was furthered in the 1990s by developments in vector base amplitude panning [4], wave field synthesis and higher-order ambisonics. The new technologies encouraged new spatial aesthetics, allowing composers to conceive spatialization through a focus on the creation of holophonic sound fields and phantom sources.
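The pan-pot laws mentioned above can be illustrated with the familiar constant-power (equal-power) stereo law, in which the two channel gains follow a quarter-circle so that their squared sum, and hence perceived loudness, stays constant as a source moves across the image. The following sketch is not from the article; the function name and the [-1, 1] position convention are illustrative assumptions.

```python
import math

def equal_power_pan(position):
    """Constant-power stereo pan law.

    position: -1.0 (hard left) .. +1.0 (hard right).
    Returns (left_gain, right_gain) with L**2 + R**2 == 1, so the
    total acoustic power is independent of the pan position.
    """
    # Map position from [-1, 1] onto the quarter circle [0, pi/2].
    theta = (position + 1.0) * math.pi / 4.0
    return math.cos(theta), math.sin(theta)

# At centre, both channels sit at about 0.707 (-3 dB) rather than
# 0.5 (-6 dB), which is what keeps a centred source from dipping
# in loudness -- the key refinement over naive linear crossfading.
left, right = equal_power_pan(0.0)
```

Linear (constant-amplitude) panning, by contrast, produces a noticeable loudness dip at the centre, which is exactly the kind of artifact the psychoacoustic research cited above sought to eliminate.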


Fig. 1.

The tactile.space diffusion interface running on the multitouch table Bricktable. (© Bridget Johnson. Photo © Jason Wright.)

Tools for the control of spatialization algorithms found their way into digital audio workstations, allowing the composer to drag a virtual representation of a sound object and place it within a speaker array. Because the amplitude mapping of faders was counterintuitive for pantophonic motion (i.e. circular spatial trajectories), most spatial user interfaces became graphical. In spite of this, the mixing desk continued to be the user interface for diffusion performance. With composers thinking and acting one way in the studio and another in the concert hall, the paradigm was ripe for disruption. In the studio...
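The mismatch between faders and circular trajectories can be made concrete with a small sketch (not from the article; the function name and pairwise constant-power scheme are illustrative assumptions). Moving a source around a ring of speakers means that, at every instant, two neighbouring speaker gains must rise and fall in a coordinated crossfade, which is trivial to express as an angle in a graphical interface but demands continuous, simultaneous moves on many physical faders.

```python
import math

def pantophonic_gains(azimuth, n_speakers):
    """Gains for a source at `azimuth` (radians) on a ring of
    n_speakers equally spaced loudspeakers, using constant-power
    panning between the two nearest speakers.

    Returns a list of n_speakers gains; at most two are nonzero.
    """
    gains = [0.0] * n_speakers
    sector = 2.0 * math.pi / n_speakers
    az = azimuth % (2.0 * math.pi)
    i = int(az // sector)               # nearest speaker below azimuth
    frac = (az - i * sector) / sector   # position within the pair, 0..1
    theta = frac * math.pi / 2.0        # quarter-circle crossfade
    gains[i] = math.cos(theta)
    gains[(i + 1) % n_speakers] = math.sin(theta)
    return gains
```

Sweeping `azimuth` from 0 to 2π makes each adjacent speaker pair crossfade in turn: a single angular gesture on screen corresponds to a chain of overlapping fader moves across the entire desk, which is why graphical interfaces displaced faders for this style of spatialization.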
