About This Issue

As is typical for Computer Music Journal, this issue begins with announcements and news items, followed by an interview with a composer of electroacoustic music. Agostino Di Scipio's compositions and writings will be familiar to regular readers of the Journal. His recent music focuses on interactivity, but not of the common variety where human performers and computers exchange sequences of notes via MIDI. Rather, Mr. Di Scipio's pieces concern sonic interactions between humans, computers, and rooms, conceived of together as an ecosystem. Pitch structures are eschewed in favor of noisy and granular textures. Among other topics, the interview examines the development of the composer's thinking since his student days.

From the earliest years of computer music, software developers designed programs around the metaphor of "scores" (textual note lists) that are played by "instruments" (sound-synthesis algorithms). A complete text-based software environment typically integrated both of these aspects—score and synthesis. The subsequent rise of graphical user interfaces made possible not only music notation applications, in which a composer can enter a score the traditional way, but also visual environments for real-time custom sound synthesis, such as Max/MSP, which employ a connection paradigm reminiscent of the patch cords of modular analog synthesizers. Yet graphical music software has not generally supported both a traditionally notated score and custom sound synthesis within the same program. The article by Mikael Laurson and his colleagues presents their visual environment PWGLSynth, which attempts to bridge this gap. As illustrated on the front cover of this issue, their software allows the user to depict the score in traditional music notation (along with supplementary specifications of time-varying parameters for expressive rendering of the music), from which the system generates control data for the synthesis patch. The software also supports real-time performance input. The article presents an overview of the software architecture and explains the authors' strategy for mapping control information to synthesis parameters.

Steffen Brandorff and his co-authors have been working on a project involving the computerized typesetting of notated music based on textual input. In tackling the numerous problems that music notation presents, they found the discipline known as design patterns to be particularly helpful. The authors' article in this issue offers the novice a tutorial on design patterns. The authors trace the origins of design patterns in the field of architecture, and then illustrate the utility of some patterns well known in computer science (Observer, Abstract Factory, and Strategy), showing how they can solve problems in a music notation application.
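To make the Observer pattern concrete in a notation setting, here is a minimal sketch (not the authors' code; the `Note` and `LayoutView` names are hypothetical): a score element notifies registered views when it changes, so the layout can re-render only what is affected.

```python
class Note:
    """Hypothetical score element that notifies observers when it changes."""
    def __init__(self, pitch):
        self._pitch = pitch
        self._observers = []

    def attach(self, observer):
        self._observers.append(observer)

    @property
    def pitch(self):
        return self._pitch

    @pitch.setter
    def pitch(self, value):
        self._pitch = value
        for obs in self._observers:   # Observer pattern: push the change out
            obs.update(self)


class LayoutView:
    """Observer: re-renders the affected measure when a note changes."""
    def __init__(self):
        self.redraws = 0

    def update(self, note):
        self.redraws += 1   # a real view would re-typeset the measure here


note = Note(60)          # MIDI note number for middle C
view = LayoutView()
note.attach(view)
note.pitch = 62          # editing the note triggers the view's update
```

The design choice the pattern buys is decoupling: the note model knows nothing about typesetting, so new views (playback, part extraction) can be attached without touching it.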

Jonatas Manzolli and Paul Verschure have contributed an article about a paradigm for generating MIDI sequences based on the encounters of a mobile robot with its environment. The authors describe the relationship between the robot's behavior, the dynamics of the control states that give rise to this behavior, and the resulting MIDI output. In contrast with some sonification techniques that map physical quantities directly to acoustical or musical quantities, the authors' approach involves a more complex conversion of real-world input data into sonic events.
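For contrast, the "direct" style of sonification mentioned above can be sketched in a few lines (hypothetical function names; the authors' robot-mediated system is deliberately less direct than this): each sensor reading is mapped linearly onto a MIDI pitch range.

```python
def sonify_direct(sensor_values, low=48, high=84):
    """Direct parameter mapping: scale each reading linearly to a MIDI pitch.

    `low` and `high` bound the output pitch range (C3 to C6 here).
    """
    lo, hi = min(sensor_values), max(sensor_values)
    span = (hi - lo) or 1.0                      # avoid division by zero
    return [int(low + (v - lo) / span * (high - low)) for v in sensor_values]


print(sonify_direct([0.0, 0.5, 1.0]))  # → [48, 66, 84]
```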

The previously mentioned articles present the work of an Italian composer and researchers from Finland, Denmark, Brazil, and Switzerland. Our final article highlights research from Singapore, where Arun Shenoy and Ye Wang have developed software for automatically extracting the key and chords from an audio recording. Beat-detection techniques are employed to find temporal demarcations. (A time signature of 4/4 is assumed, as the system is tailored for popular music.) The audio spectrum is reduced to pitch classes as a prelude to finding candidate triadic harmonies and determining the key. The algorithm attempts to winnow out erroneous guesses about chords, using heuristics based on music theory and typical characteristics of popular music, such as how often chords are likely to change within a measure, and on which beats. This cross-referencing of harmonic and metric information foreshadows a next generation of music information retrieval tools, in which audio signal-processing techniques are combined in a mutually reinforcing manner and refined with music-theoretical knowledge.
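The reduction of the spectrum to pitch classes, followed by triad-candidate scoring, can be illustrated with a simplified sketch (function names and the scoring rule are assumptions for illustration, not the authors' implementation): spectral peaks are folded onto the 12 pitch classes, and each major and minor triad is scored by the energy at its chord tones.

```python
import math

def chroma_from_peaks(freqs_hz, mags):
    """Fold spectral peaks onto the 12 pitch classes (C=0 ... B=11)."""
    chroma = [0.0] * 12
    for f, m in zip(freqs_hz, mags):
        midi = 69 + 12 * math.log2(f / 440.0)   # frequency -> MIDI number
        chroma[round(midi) % 12] += m           # octave-fold to a pitch class
    return chroma

def best_triad(chroma):
    """Score every major/minor triad by summed chroma energy at its tones."""
    names = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']
    best, best_score = None, -1.0
    for root in range(12):
        for quality, third in (('', 4), ('m', 3)):   # major third = 4, minor = 3
            score = (chroma[root]
                     + chroma[(root + third) % 12]
                     + chroma[(root + 7) % 12])      # perfect fifth
            if score > best_score:
                best, best_score = names[root] + quality, score
    return best

# Peaks at C4, E4, and G4 should yield a C major triad.
chroma = chroma_from_peaks([261.63, 329.63, 392.0], [1.0, 0.8, 0.9])
print(best_triad(chroma))  # → C
```

A real system would then apply the metric heuristics described above, for example suppressing chord changes at implausible beat positions, to prune the frame-by-frame guesses this kind of scoring produces.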

The Reviews section takes a look at two books on specific technologies: the first on the Lisp-based Common...
