About This Issue

The first two articles of this issue explore different aspects of the topic "communication in performance," which can involve performers conveying information to each other as well as to the audience. Focusing on improvisation, Jason Freeman and Akito Van Troyer present a text-based environment for ensembles of laptop musicians. Their system was inspired by live-coding programming languages but is not intended to be a full-featured language. Instead, it is designed for ease of use, even by nontechnical musicians; the authors report that laptop orchestra members were comfortably working with it after two hours of introduction. The system offers manipulation of sound files, but not sound synthesis, and it is specifically oriented toward rhythmic patterns that can be easily created on the fly and then reused and transformed by other performers. A screen projection, visible to the audience, textually displays the musicians' patterns and scheduling commands, and it also permits conversational coordination, as in a chat interface. The article analyzes the degree of pattern-sharing by the performers and concludes with plans for future features, including real-time music notation.
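
As a minimal sketch of the general idea (the pattern notation and function names below are invented for illustration and are not the authors' actual syntax), a textual rhythmic pattern might be parsed into onset times and then reused, in shifted form, by another performer:

    # Hypothetical illustration of a text-based rhythmic pattern notation;
    # the syntax and functions here are invented for this sketch, not the
    # authors' actual system.

    def parse_pattern(pattern: str, step_dur: float = 0.25) -> list[float]:
        """Turn a string like 'x..x..x.' into onset times in seconds.
        'x' marks a hit, '.' a rest; each character is one subdivision."""
        return [i * step_dur for i, step in enumerate(pattern) if step == "x"]

    def shift_pattern(onsets: list[float], offset: float) -> list[float]:
        """A performer reusing another's pattern might offset it in time."""
        return [t + offset for t in onsets]

    onsets = parse_pattern("x..x..x.")     # [0.0, 0.75, 1.5]
    echoed = shift_pattern(onsets, 0.125)  # a delayed variant by another player
    print(onsets, echoed)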

The second article, by researchers at Ghent University's Institute for Psychoacoustics and Electronic Music, examines how to design a gestural interface in a way that intuitively communicates the musical effect to the audience. As Steve Benford explained in the Winter 2010 CMJ (34:4, pp. 55–57), interface designs lie at various points on a plane that maps a manipulation to its effect, with one axis measuring how hidden, revealed, or amplified the manipulation is, and the other axis measuring the same for the effect. An "expressive" strategy (as opposed to a "secretive," "magical," or "suspenseful" strategy) seeks to amplify, or at least reveal, both the manipulation and its effect. The authors of the present article sought an "expressive" strategy for a singer's gestural interface to a harmonizer, i.e., a real-time digital signal-processing unit that creates pitch-shifted copies of the singer's voice. Whereas previous designers of singing interfaces have tended to approach the gesture-to-sound mapping in an ad hoc manner, the authors propose a systematic, empirical approach based on the embodied music cognition (EMC) paradigm.

To solve the specific problem of how best to control a harmonizer through bodily motions, they drew on the results of a previous experiment in which subjects were asked to move their bodies in response to the sounds of a harmonizer adding or dropping voices. An outward and upward movement of the upper arms was found to correspond to the addition of harmonized voices, and the reverse movement to the removal of voices. The authors present the technical details of how they incorporated this finding into a performance interface, in which a movement-detection system controls a harmonizer. They then describe their implementation of three interaction designs: the first involving a solo singer, the second an ensemble accompanying a singer, and the third a dancer who controls the harmonization of two singers' voices. They argue that the third design extends Agostino Di Scipio's notion of "composing musical interactions," because it places interactions not at the level of the sound signal, as in Mr. Di Scipio's work, but at the motor level in the singers' and dancer's bodies, facilitating multimodal information exchange. The authors also describe this design as taking EMC theory, which concerns the communicative qualities of the individual human body, and extending it to cover the interactions of a group, or "social body."
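
A minimal sketch of such a motor-level mapping might look like the following; the spread thresholds and harmony intervals are assumptions chosen for illustration, not the authors' published parameters:

    # Hypothetical gesture-to-harmonizer mapping; the spread quantization and
    # harmony intervals are illustrative assumptions, not the authors' values.

    INTERVALS = [4, 7, 12]  # semitones above the sung pitch: third, fifth, octave

    def voices_for_spread(spread: float) -> int:
        """Quantize normalized arm spread (0 = arms lowered, 1 = fully
        outward/upward) into a number of harmonized voices, following the
        finding that outward/upward movement corresponds to adding voices."""
        return min(len(INTERVALS), int(spread * (len(INTERVALS) + 1)))

    def pitch_ratios(n_voices: int) -> list[float]:
        """Pitch-shift ratio for each active voice: 2 ** (semitones / 12)."""
        return [2 ** (INTERVALS[i] / 12) for i in range(n_voices)]

    for spread in (0.1, 0.5, 0.9):
        n = voices_for_spread(spread)
        print(f"spread={spread:.1f} -> {n} voice(s), ratios={pitch_ratios(n)}")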

Antti Jylhä and his colleagues at Aalto University in Finland have been investigating human-computer rhythmic interaction. As a test case, these authors created an automated tutor that teaches novices how to clap various flamenco rhythms, which it performs and which the user must imitate in a call-and-response fashion. The tutor can give solely audio cues or can supplement the audio with visual cues. It presents three types of visual cue: (1) a depiction of the complete rhythmic pattern, with a cursor denoting the current position in time; (2) a pair of "dancing" circles, one representing the tutor's clapping and the other the user...
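
As a rough sketch of the call-and-response evaluation such a tutor implies (the timing tolerance and scoring scheme here are assumptions, not the authors' method), the user's clap onsets could be matched against the tutor's pattern:

    # Illustrative sketch of call-and-response matching: compare a user's clap
    # onsets (in seconds) to the tutor's target pattern. The 0.1-s tolerance
    # and the scoring scheme are assumptions, not the authors' method.

    TOLERANCE = 0.1  # seconds within which a clap counts as on-time

    def score_response(target: list[float], claps: list[float]) -> float:
        """Fraction of target onsets matched by a clap within TOLERANCE.
        Each clap can satisfy at most one target onset."""
        remaining = sorted(claps)
        hits = 0
        for t in sorted(target):
            match = next((c for c in remaining if abs(c - t) <= TOLERANCE), None)
            if match is not None:
                remaining.remove(match)
                hits += 1
        return hits / len(target) if target else 1.0

    target = [0.0, 0.5, 1.0, 1.75]    # tutor's clap pattern
    claps = [0.02, 0.55, 1.4, 1.72]   # user's imitation
    print(f"accuracy: {score_response(target, claps):.0%}")  # 75%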
