About This Issue
As is often the case, this issue's first article consists of an interview with a well-known composer of electro-acoustic music. Roger Reynolds, a professor at the University of California, San Diego, for almost four decades, responds to questions focusing on his conceptual approaches to music and technology. Among other topics, the interview elicits the composer's thoughts on the aesthetics of multimedia, the importance of form and planning (informed by his engineering background), and the subordination of technology to aesthetic need. Mr. Reynolds describes his objectives in using the computer for real-time sound processing, especially spatialization (e.g., in Process and Passion and in Archipelago), and for interactive multimedia, including motion capture and recognition (in The Image Machine, a recent piece that he discusses in some detail). The interview also considers the influence of electronic techniques on compositional thinking in general, with the composer's "editorial algorithms" as a case in point.
The article featured on this issue's cover presents new techniques for manipulating the algorithms known as cellular automata (CA). To refer to these techniques, Christopher Ariza has coined the term "automata bending," by analogy with circuit bending. Like circuit bending, automata bending involves applying transformations to otherwise predictable systems, motivated by an aesthetic goal that welcomes indeterminacy. Mr. Ariza's article reviews the fundamentals of CA and introduces a platform-independent notation for specifying CA. The author briefly surveys previous work in musical applications of CA, and then explains his two main methods of automata bending: mutation and dynamic rules. Numerous specific cases of automata bending are given, with illustrations depicting the CA's parameters and output. The article also provides examples of how data streams produced by these complex CA can be mapped to musical parameters.
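To make the idea of "bending" a CA by mutation concrete, here is a minimal sketch, not Mr. Ariza's notation or implementation: a standard one-dimensional elementary CA (Wolfram rule numbering) whose cells are randomly flipped each generation with a small probability. The function names, the mutation scheme, and all parameter values are illustrative assumptions, not details from the article.

```python
import random

def step(cells, rule):
    """Advance a 1-D elementary CA one generation (edges wrap around)."""
    n = len(cells)
    out = []
    for i in range(n):
        # The (left, self, right) neighborhood forms a 3-bit index into the rule table.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def bent_run(rule, width, steps, mutation_rate, seed=0):
    """Run the CA, flipping each cell with probability mutation_rate per
    generation -- an illustrative stand-in for 'bending' by mutation."""
    rng = random.Random(seed)
    cells = [0] * width
    cells[width // 2] = 1  # single live cell in the center
    history = [cells]
    for _ in range(steps):
        cells = step(cells, rule)
        cells = [c ^ (rng.random() < mutation_rate) for c in cells]
        history.append(cells)
    return history

# Rule 90 with a 2% per-cell mutation probability; with the rate set to
# zero this reduces to the ordinary, fully predictable automaton.
for row in bent_run(rule=90, width=31, steps=12, mutation_rate=0.02):
    print("".join("#" if c else "." for c in row))
```

A "dynamic rules" variant of the same sketch would swap the `rule` integer for a different one partway through the run instead of (or in addition to) flipping cells.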
A number of researchers have developed systems for interactive dance, in which the dancers control synthesized music. (See, for example, CMJ 22:4 and 26:3.) Often these systems were not designed with large numbers of dancers in mind. By contrast, the article by Mark Feldmeier and Joseph Paradiso in this issue describes a new motion sensor they have developed that is cheap enough to be given away to crowds at interactive entertainment events. Radio-frequency pulses from the sensors are converted to MIDI and analyzed by a Max patch to detect rhythmic features and overall activity level, which are then mapped to musical attributes such as tempo, timbre, and style. Much of the article describes the specific musical mappings that the authors developed for large-scale interactive dance applications. Also presented are a number of tests in which groups of 15 to 100 participants collectively controlled the music to which they danced. The sensors worked well, and the participants enjoyed the interaction. Although many participants felt the music was responsive to their movements, the majority desired more control. The article concludes by considering future work and broader application areas.
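The last stage of such a pipeline, mapping a crowd's overall activity level onto a musical attribute, can be sketched as follows. This is a hypothetical illustration of one such mapping (activity to tempo), not the authors' actual Max patch; the function name, the pulse-rate scale, and the BPM range are all invented for the example.

```python
def activity_to_tempo(pulses_per_second, lo_bpm=80.0, hi_bpm=160.0, max_rate=20.0):
    """Map an aggregate sensor pulse rate linearly onto a tempo range.

    The rate is normalized against an assumed ceiling (max_rate) and
    clamped to [0, 1], so bursts of activity cannot drive the tempo
    outside the chosen musical range.
    """
    level = min(max(pulses_per_second / max_rate, 0.0), 1.0)
    return lo_bpm + level * (hi_bpm - lo_bpm)

# A quiet crowd sits at the low end; a very active one at the high end.
print(activity_to_tempo(2.0))   # sparse motion -> slow tempo
print(activity_to_tempo(18.0))  # vigorous dancing -> fast tempo
```

In practice the raw pulse rate would be smoothed (e.g., with a moving average) before mapping, so that the music responds to sustained trends rather than momentary spikes.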
A recurring theme in music information retrieval concerns the widely recognized need for objective tools that can assess competing systems. (See, for example, CMJ 28:2 and 28:3.) Along these lines, the article in this issue by Pierfrancesco Bellini, Ivan Bruno, and Paolo Nesi presents a quantitative technique for evaluating systems for optical music recognition (OMR)—i.e., systems that can recognize the symbols in scanned sheet music. The authors propose two models that respectively examine the recognition rates of more-or-less atomic symbols (flat sign, black note-head, sixteenth-note beam, etc.) and composite symbols (key signature, group of beamed notes, etc.). The proposed metrics, which assume a linear combination of weighted values, were applied to three OMR software packages: SmartScore, O3MR, and SharpEye2. To validate the models, the opinions of human experts were sought in two phases. First, the experts were given a questionnaire in which they rated the importance of the different types of symbols. Then, in a "blind" test, the experts judged the accuracy of the three OMR programs, by comparing the programs' outputs when given identical inputs. The authors conclude that their metrics can approximate the evaluations of human experts. Concerning the specific OMR systems tested, the article states...