Performing Musical Interaction: Lessons from the Study of Extended Theatrical Performances

Steve Benford

The field of Human-Computer Interaction (HCI) has long been interested in how people interact with digital technologies, including—through the closely related field of Computer-Supported Cooperative Work (CSCW)—how they collaborate through and around these technologies. Although initially focused on office applications and work, the spread of digital technologies into nearly every aspect of our everyday lives has led these fields to increasingly focus on emerging leisure, entertainment, and cultural applications of digital technologies in areas such as games, museum installations, interactive artwork, and, of course, playing and listening to music. In its turn, the focus of studying and designing interfaces has also shifted from issues of usability and productivity to encompass new goals such as pleasure, creativity, expression, and aesthetics.

For more than a decade, research at Nottingham's Mixed Reality Laboratory has explored the use of digital technologies in live performance. This has involved working with artists to create, tour, and study a series of theatrical experiences that mix fictional stories with real settings, virtual environments with physical sets and props, and computer-mediated interaction with live encounters with actors and other participants. Various examples have shown how digital technologies can be embedded into extended theatrical performances, including Can You See Me Now?, a game of chase in which on-line players logged in over the Internet were chased through a 3-D virtual model of a city by actors who, equipped with handheld computers with GPS receivers, had to run through the actual city streets to catch them; Uncle Roy All Around You, in which on-line and "street" players collaborated to navigate a mixed real and virtual cityscape, encountering various actors, props, and settings along the way (Benford et al. 2004); and Fairground: Thrill Laboratory, which used bio-sensing technologies and wireless communications to transform the act of riding a rollercoaster into a public performance. An overview of several of these performances can be found in Benford et al. (2009). These experiences were also the subject of ethnographic studies in which observation of participants, including the public, actors, and technical crew, revealed the fine details of how the interactions were delivered and experienced.

Reflecting on these experiences and studies led to the development of various theories to account for the design and experience of performance interfaces. This article takes these theories, alongside others from HCI and CSCW, and considers how they might be relevant to the design of musical interfaces, identifying key issues and approaches that might inform an agenda for future work in this area. The argument unfolds by following a trail of ever-widening participation in a musical performance, from an initial focus on the issues that arise when just one musician interacts with their digital instrument, through consideration of ensemble playing, to different ways in which interfaces might address an audience, to the embedding of musical interfaces within an extended performance structure.

Interacting: The Musician and Their Instrument

The first thing to note is that there are many traditional forms of interaction with instruments (plucking, bowing, and strumming strings; pressing keys; striking drums; and so forth) that are not the primary focus of this article. Also out of scope are mainstream interfaces in which desktop displays, mice, keyboards, and similar devices are used to interact with musical software tools. Rather, the focus of attention is on emerging forms of interface that might enable particularly interesting and alternative forms of musical performance.

From enhancing traditional instruments (Bevilacqua et al. 2006; Poepel and Overholt 2006), to creating modified digital instruments (Jordà et al. 2007; Warming Pedersen and Hornbæk 2009), to attaching sensors to their own bodies (e.g., Pamela Z; see Lewis 2007), performers have employed sensing-based interfaces to lend greater expression to their playing, allowing interaction with digital music through gestures and other bodily or facial movements. Such interaction via sensing-based interfaces is often indirect in the sense that the musician is not immediately physically connected to their instrument, and the sensors may even be invisible, potentially allowing the kind of untethered and unencumbered interaction that could, for example, support a more seamless integration...