  • About This Issue

As sometimes happens, the compound theme announced on the cover of this issue of Computer Music Journal actually comprises two unrelated topics. In the present case, one involves technical research, and the other, music-making. "Pattern discovery" relates in a loose sense to all this issue's technical articles, and in a strict sense to the one titled "Feature Set Patterns in Music." The other topic, "the laptop orchestra," is the subject of the opening pair of articles, from Princeton University.

The Princeton Laptop Orchestra (PLOrk) represents a fairly new type of performing ensemble. Its 15 members, sometimes directed by a conductor, operate networked laptop computers, either with the usual keyboard-and-mouse interface or via other controllers and sensors. Each of these computers is connected to its own audio equipment rack and its own onstage loudspeaker enclosure, housing six individually addressable speakers that emit sound in six different directions. The intent is to imbue electroacoustic music with a spatial and sonic presence analogous to that of a conventional orchestra, while exploring the musical opportunities afforded by a relatively large number of networked performers. The first article documents 18 pieces that composers have conceived for this ensemble. The compositions embody various approaches to spatialization, sound design, control, networking, conducting, and game play. The second article explains how the authors teach students concepts and techniques for performing in PLOrk and composing for it. There are no technical prerequisites, but students learn a textual music-programming language (ChucK) as well as a graphical one (Max/MSP). The authors describe their pedagogical approach as informal, improvisational, and interdisciplinary; students acquire technical and artistic knowledge "along the way" toward the goal of making compelling music together.

The next three articles in this issue consist of revised and extended versions of papers first presented at the International Workshop on Artificial Intelligence and Music (Music-AI 2007), held in Hyderabad, India, in conjunction with the 20th International Joint Conference on Artificial Intelligence. We are indebted to the organizers, especially Rafael Ramirez, for providing us with the workshop referees' evaluations of many of the original paper submissions and for discussing with us the most highly rated papers, from which Computer Music Journal selected a subset. Our thanks also go to the referees themselves, who kindly agreed to re-evaluate the manuscripts after they had been revised for the Journal. (More generally, anonymous peer review serves as a sine qua non for the Journal. The referees, who are specially selected for each article on the basis of their particular expertise, receive no compensation or other recognition. We honor them all for their dedication to the advancement of the field.)

The first of the Music-AI 2007 articles, "A Genetic Rule-Based Model of Expressive Performance for Jazz Saxophone," employs techniques from evolutionary computation, a subfield of AI. Previous studies of expressive performance have usually been empirical, based on human-created models of expression. By contrast, the present authors' software automatically constructs such models—in this case, sets of "rules" describing how jazz musicians inflect timing and dynamics in melodies. It does so by applying an evolutionary algorithm to a symbolic representation of a set of performances. (In this study, a professional saxophonist played four jazz standards, each at eleven different tempi.) The authors describe how they first extract musical information from the audio recording, using spectral analysis, fundamental frequency estimation, segmentation into notes, envelope approximation, brightness measurement, and so on. The note-level measurements are supplemented by a higher-level musical analysis that is largely based on Eugene Narmour's implication/realization model of melodic expectation. Finally, a genetic sequential covering algorithm operates on the training data and learns new rules, each predicting how a human saxophonist might expressively deviate from the values specified by the musical score. Such rules could, of course, be applied when synthesizing music.
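To give a flavor of the rule-learning step described above, here is a minimal, self-contained sketch of a genetic sequential covering algorithm: rules are learned one at a time by a simple genetic algorithm, and the examples each new rule covers are removed before the next rule is sought. The toy feature set, feature values, and deviation classes below are illustrative assumptions for this sketch, not the representation used in the article itself.

```python
import random

# Toy training set: each note has discrete features and an observed
# expressive deviation class. Features and classes are invented for
# illustration; the article's actual attributes differ.
DATA = [
    ({"interval": "up",   "duration": "long",  "beat": "strong"}, "behind"),
    ({"interval": "up",   "duration": "short", "beat": "weak"},   "ahead"),
    ({"interval": "down", "duration": "long",  "beat": "strong"}, "behind"),
    ({"interval": "down", "duration": "short", "beat": "weak"},   "ahead"),
    ({"interval": "up",   "duration": "long",  "beat": "weak"},   "behind"),
    ({"interval": "same", "duration": "short", "beat": "strong"}, "ahead"),
]
FEATURES = {"interval": ["up", "down", "same"],
            "duration": ["long", "short"],
            "beat": ["strong", "weak"]}

def matches(rule, note):
    # A rule maps each feature to a required value, or None (wildcard).
    return all(v is None or note.get(f) == v for f, v in rule.items())

def fitness(rule, target, data):
    # Reward rules that cover many target-class notes and few others
    # (precision times coverage of the target class).
    covered = [cls for feats, cls in data if matches(rule, feats)]
    if not covered:
        return 0.0
    correct = sum(1 for c in covered if c == target)
    return correct * correct / len(covered)

def random_rule():
    return {f: random.choice(vals + [None]) for f, vals in FEATURES.items()}

def mutate(rule):
    # Flip one feature condition to a random value or a wildcard.
    f = random.choice(list(FEATURES))
    child = dict(rule)
    child[f] = random.choice(FEATURES[f] + [None])
    return child

def evolve_rule(target, data, pop_size=30, generations=40):
    # Elitist GA over rule antecedents: keep the best half, mutate it.
    pop = [random_rule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: fitness(r, target, data), reverse=True)
        elite = pop[: pop_size // 2]
        pop = elite + [mutate(random.choice(elite)) for _ in elite]
    return max(pop, key=lambda r: fitness(r, target, data))

def sequential_cover(target, data):
    # Learn rules one at a time, removing the examples each rule covers.
    rules, remaining = [], list(data)
    while any(cls == target for _, cls in remaining):
        rule = evolve_rule(target, remaining)
        if fitness(rule, target, remaining) == 0:
            break
        rules.append(rule)
        remaining = [ex for ex in remaining if not matches(rule, ex[0])]
    return rules

random.seed(1)
rules = sequential_cover("ahead", DATA)
```

In this toy data, every short-duration note is played "ahead," so the search typically converges on a single rule conditioning on duration alone; on real performance data, many partially overlapping rules emerge, which is what makes the sequential-covering loop necessary.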

The next article, by Christopher Raphael, tackles the problem of how to separate a monaural audio recording of a concerto into two tracks: one capturing the soloist, and the other, the orchestra. Practical applications include creating an accompaniment track that a soloist can use for practice, à la Music Minus One. This research assumes access...
