
Problems and Prospects for Intimate Musical Control of Computers

David Wessel and Matthew Wright

When asked what musical instrument they play, few computer musicians respond spontaneously with "I play the computer." Why not? In this report, we examine the problems associated with the notion of the computer as musical instrument and the prospects for their solution.

We begin with a discussion of our goals and requirements for computer-based musical instruments, raising the main issues that inform our work. Then we discuss a variety of metaphors for musical control that we have found to be powerful. Finally, we discuss some of the technology we have developed for implementing these instruments.

Goals and Requirements

Musical instruments and their gestural interfaces succeed for a variety of reasons, most of which are social in character. These sociological aspects, such as the development of a repertoire for the instrument, are beyond the scope of this article. Here we will concentrate on factors such as ease of use, potential for development of skill, reactive behavior, and coherence of the cognitive model for control.

Relationships of Gestures to Acoustic Results

The image immediately called to most people's minds by the statement "I play the computer" is of a performer physically interacting with the QWERTY keyboard and pointing device of a typical computer workstation live on stage. Many musicians do indeed perform live music with this interface, but we prefer to interact with more specialized gestural interfaces, because they offer lower latency, higher precision, higher data rates, and a broader range of physical gestures than the keyboard/mouse-type interface. There is also the issue of the visual appearance of a performance and the association of standard computer interfaces with office work (Zicarelli 1991).

At the outset, it is useful to consider some of the special features that computer technology brings to musical instrumentation. Most traditional acoustic instruments, such as strings, woodwinds, brass, and percussion, place the performer in direct contact with the physical sound-production mechanism. Strings are plucked or bowed, tubes are blown, and surfaces are struck. The performer's gesture plays a direct role in exciting the acoustic mechanism. With the piano and organ, the connection between gesture and sound is mediated by a mechanical linkage (and in some modern organs by an electrical connection). But the relationship between gesture and acoustic event remains in what one might call a "one-gesture-to-one-acoustic-event" paradigm.

One obvious feature of computer technology is immense timbral freedom. Although skilled players of acoustic instruments can produce a wide variety of timbres, they are nevertheless constrained by the sound production mechanism. In contrast, computers can produce arbitrary sampled or synthesized sounds. A sample-playback synthesizer can reproduce any recorded sound in response to any gesture. The extreme case can be found in tape music: with the single gesture of pressing the "play" button, the acoustic result is an entire composition. A more interactive middle ground is the kind of piece where a performer steps through a sequence of prerecorded computer sound cues—for example, with a foot pedal. An even more interactive example would be the typical sampling keyboard, where each key depression triggers a potentially different sound. In each of these examples, the acoustic result of a gesture may be arbitrarily complex, perhaps consisting of multiple perceived sonic events, but the control model is still a one-to-one mapping from gestures to acoustic results.
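To make the one-to-one control model concrete, here is a minimal Python sketch of a sampling-keyboard setup like the one just described. The key numbers, sample paths, and the play function are hypothetical stand-ins for illustration, not any particular instrument's API.

    # One-to-one control model: each discrete gesture (a key number)
    # maps to exactly one acoustic result, however complex that result is.
    # All names and sample paths below are hypothetical.

    SAMPLE_TABLE = {
        60: "samples/bell.wav",   # middle C triggers a bell sound
        61: "samples/gong.wav",   # C-sharp triggers a gong
        62: "samples/ocean.wav",  # D triggers an entire recorded texture
    }

    def play(path: str) -> None:
        """Stand-in for a real audio engine."""
        print(f"playing {path}")

    def on_key_down(key_number: int) -> None:
        """One gesture in, one (possibly complex) acoustic result out."""
        sample = SAMPLE_TABLE.get(key_number)
        if sample is not None:
            play(sample)

    if __name__ == "__main__":
        on_key_down(60)  # a single key press yields a single sonic event

Note that nothing in this mapping depends on how the gesture was made; the table lookup is the entire control model, which is precisely its limitation.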

When sensors are used to capture gestures and a computing element is used to generate the sound, an enormous range of possibilities becomes available. Sadly but understandably, the electronic music instrument industry, with its insistence on standard keyboard controllers, maintains the traditional paradigm.


Figure 1. A conceptual framework for our controller research and development.

Figure 1 provides a conceptual framework for our controller research and development. The human performer intends to produce a certain musical result. These intentions are communicated to the body's sensorimotor system ("motor program"). Parameters are sensed from the body at the gestural interface. These parameters are then passed to controller software that conditions, tracks, and maps them to the algorithms that generate the...
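As a rough illustration of this pipeline, the following Python sketch conditions, tracks, and maps a single raw sensor value onto synthesis parameters. The class name, method names, sensor range, and parameter mappings are all assumptions made for the example, not the authors' actual system.

    # A minimal sketch of the Figure 1 pipeline, assuming a gestural
    # interface that reports raw sensor readings in the range [0, 1023].
    # All names and numeric choices here are hypothetical.

    class ControllerSoftware:
        """Conditions, tracks, and maps raw gesture parameters."""

        def __init__(self, smoothing: float = 0.2):
            self.smoothing = smoothing
            self.state = 0.0  # tracked (smoothed) gesture value

        def condition(self, raw: int) -> float:
            """Scale a raw sensor reading into the unit interval."""
            return max(0.0, min(1.0, raw / 1023.0))

        def track(self, value: float) -> float:
            """Exponential smoothing: follow the gesture without jitter."""
            self.state += self.smoothing * (value - self.state)
            return self.state

        def map_to_synthesis(self, value: float) -> dict:
            """Map the tracked value onto synthesis parameters."""
            return {
                "frequency_hz": 220.0 + 660.0 * value,  # pitch rises with gesture
                "amplitude": value,                      # louder as gesture grows
            }

        def process(self, raw: int) -> dict:
            return self.map_to_synthesis(self.track(self.condition(raw)))

    if __name__ == "__main__":
        controller = ControllerSoftware()
        for reading in (0, 512, 1023):  # simulated sensor stream
            print(controller.process(reading))

The point of separating conditioning, tracking, and mapping into distinct stages is that each can be replaced independently, which is what allows a single gestural interface to drive many different control metaphors.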
