About This Issue

In its early years, Computer Music Journal periodically included sound examples and compositions on a tear-out vinyl soundsheet bound into the issue. Later, after the arrival of compact disc technology, the Journal produced an annual audio CD, first as a separate product and then as part of the Winter issue. This year, for the first time, we are including visual content and multichannel audio compositions on a digital video disc (DVD), attached to the inside back cover of this issue. (However, this step does not necessarily mark the disappearance of the CD medium, which will likely reappear in future volumes of the Journal.) Associate editor Colby Leider served as curator and producer of the DVD, which contains two hours of audio and video material. Program notes can be found toward the end of this issue.

Each article in this issue inhabits a markedly different terrain within the broad domain of computer music. An interview with composer Hans Tutschku sheds light on his career and activities, including his collaborations with dancers, video artists, and other musicians. Mr. Tutschku recalls his early musical experiences in East Germany. He describes Karlheinz Stockhausen's influence on the Ensemble für Intuitive Musik, the small performing group that Mr. Tutschku has been a member of since shortly after its inception over two decades ago. The discussion delves into Hans Tutschku's compositional approaches, his software development, and the sound-processing techniques he has used, including granular synthesis, cross-synthesis, and physical modeling. His music often involves a sort of counterpoint between simultaneously but differently evolving parameters of sound. The interview also describes his ideas on spatialization. He finds advantages in sometimes combining multiple identical loudspeakers (as typically used for multichannel music) with sets of intentionally dissimilar loudspeakers (as found in the "loudspeaker orchestra" paradigm).

Dave Phillips has written a survey of the Linux operating system as a platform for computer music. His article functions as a tutorial, covering the history and benefits of Linux in general and its audio capabilities in particular. Mr. Phillips describes three levels of the Linux sound system: (1) the kernel level, which includes basic system hooks and drivers, and which now features the Advanced Linux Sound Architecture (ALSA); (2) a middle layer, consisting of ALSA tools as well as sound servers, libraries, plug-ins, and software such as the JACK Audio Connection Kit; and (3) user space, which is already populated by numerous open-source music programs, and in which commercial application developers are showing increasing interest.

Donald Byrd and Eric Isaacson lay out in great detail the features they believe a symbolic music representation must support if it is to be useful in academia. The authors wish such a representation to support serviceable rendering of common music notation across a variety of application areas. Accordingly, they rate the importance of a large number of notational features as encountered in traditional Western classical and popular music, classifying each feature into the logical, performance, graphic, and analytic domains that were established by the Standard Music Description Language (SMDL). SMDL itself is not a requirement, however. The article is aimed at developers of music notation software, but it also offers insights to anyone interested in music representation.

The music industry has been increasingly concerned with digital rights management in recent years. Audio identification is expected to be one solution, for which there are two alternative approaches that employ digital signal processing: watermarking, which embeds a practically inaudible identifier within the audio signal, and fingerprinting, which stores a data-reduced audio analysis separately from the signal for later comparison to potential occurrences of that signal. In this issue, Wei Li and Xiangyang Xue present a watermarking technique, based on the discrete wavelet transform, that they designed to survive audio processing such as MP3 encoding or random deletion of samples. (These kinds of processing have defeated some earlier watermarking techniques.)
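The general idea of wavelet-domain watermarking can be illustrated with a toy sketch. To be clear, this is not Li and Xue's technique: it assumes, purely for illustration, a single-level Haar transform and quantization index modulation of the approximation coefficients, which hides bits with only a small change to the signal.

```python
import math

def haar_dwt(x):
    """Single-level Haar transform: approximation (cA) and detail (cD) coefficients."""
    s = math.sqrt(2)
    cA = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    cD = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return cA, cD

def haar_idwt(cA, cD):
    """Inverse single-level Haar transform (exact reconstruction)."""
    s = math.sqrt(2)
    x = []
    for a, d in zip(cA, cD):
        x.append((a + d) / s)
        x.append((a - d) / s)
    return x

def embed_bits(signal, bits, step=0.5):
    """Hide bits by quantizing approximation coefficients to one of two offsets."""
    cA, cD = haar_dwt(signal)
    for i, bit in enumerate(bits):
        q = math.floor(cA[i] / step) * step
        # 0.75*step encodes a 1, 0.25*step encodes a 0
        cA[i] = q + (0.75 if bit else 0.25) * step
    return haar_idwt(cA, cD)

def extract_bits(signal, n_bits, step=0.5):
    """Recover bits from the quantization offsets of the coefficients."""
    cA, _ = haar_dwt(signal)
    return [1 if (c % step) > 0.5 * step else 0 for c in cA[:n_bits]]
```

Because each coefficient moves by at most one quantization step, the per-sample distortion stays below step/√2; a robust scheme like Li and Xue's must additionally survive lossy coding and sample deletion, which this sketch does not attempt.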

The final article, by Jörg Langner and Werner Goebl, introduces a method of visualizing timing and dynamics as measured in expressive musical performances. Data acquired from MIDI pianos or audio recordings is smoothed and then plotted on a graph of loudness versus tempo. Temporal...
