About This Issue

We begin this issue by sadly observing, in our News section, the death of David Wessel in October 2014. David organized the first annual computer music conference in 1974; promoted new, interactive technologies such as the Macintosh, MIDI, and Max at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM); directed the Center for New Music and Audio Technologies (CNMAT) at the University of California, Berkeley; championed new musical interfaces and paradigms for control of sound synthesis; published research in music perception; performed as an improvising electronic musician; and influenced a couple of generations of computer musicians. Adrian Freed of CNMAT has kindly contributed an obituary.

The issue continues with five articles, each of which explores a different region of the diverse field of computer music: algorithmic composition, aesthetics, sound synthesis, robotics, and information retrieval. Not all the articles will engage all readers, but we expect that each article will be intriguing to quite a few, and we hope that all readers will find something of interest.

In the area of algorithmic composition, Andrew Brown, Toby Gifford, and Robert Davidson present their techniques for generating melodies, illustrated by a series of progressively more sophisticated code examples in the language Scheme with Impromptu extensions. A primary focus of the authors is live coding, i.e., generating music in a performance setting through real-time input of program code. Their techniques are inspired by principles from the music cognition literature, and so the article may interest not only practitioners of algorithmic composition but also music psychologists and theorists. In an experiment, 32 musicians evaluated the system’s output, and their judgments tended to reflect the expected improvement as refinements were added to the code.
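To give a flavor of this approach, the following sketch, written in Impromptu-style Scheme, shows how a simple melody loop might be live-coded through temporal recursion. It relies on Impromptu's now, play-note, and callback primitives; the instrument handle piano, the pitch pool, and the timing values are illustrative placeholders rather than the authors' actual code.

    ;; Minimal sketch of an Impromptu-style melody loop (temporal recursion).
    ;; Assumes Impromptu's (now), (play-note time inst pitch velocity duration),
    ;; and (callback time sym args ...) primitives, and that `piano' is an
    ;; instrument handle already defined in the environment.
    (define scale '(60 62 64 67 69 72))   ; pitch pool in MIDI note numbers

    (define melody-loop
      (lambda (time)
        (let ((dur (* 11025 (+ 1 (random 3)))))        ; 0.25-0.75 s at 44.1 kHz
          (play-note time piano (random scale) 80 dur) ; random pitch from the pool
          (callback (+ time dur) 'melody-loop          ; schedule the next note
                    (+ time dur)))))                   ; when this one ends

    (melody-loop (now))   ; start the loop at the current audio time

Each pass through the loop plays one note and schedules itself to run again when that note ends; refinements of the kind the article describes would then replace the uniform random choices with cognitively informed ones.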

Christopher Haworth’s article examines a trend in recent “underground” computer music: composers’ purposely calling attention to the computer music techniques or software they use in their compositions, as well as to the inventors of that technology. Haworth contrasts this trend with a dominant viewpoint in academic electroacoustic music, namely, that listeners should be primarily concerned with a piece’s structure and spectromorphological content (to use Denis Smalley’s vocabulary) rather than with attempts to figure out the specific tools and techniques that the composer used in constructing the sounds. As case studies, the article considers underground computer musicians’ use of two computer programs: PulsarGenerator by Curtis Roads and Alberto de Campo, and GENDYN by Iannis Xenakis (or, more properly speaking, other people’s re-implementations of GENDYN’s dynamic stochastic synthesis). From science and technology studies, Haworth (following Georgina Born) borrows the term “ontological politics,” alluding to a relational understanding of reality as changeable and determined through practice. In this sense, the author views the aforementioned trend as politicizing the status of technology in music.

At the 2013 New Interfaces for Musical Expression (NIME) conference in Korea, two papers received the Best Paper award ex aequo. Computer Music Journal is publishing revised and extended versions of these papers. In the current issue, we present the work by Charlie Roberts and colleagues, and the next issue will contain an article by the other NIME 2013 prizewinner, Andrew McPherson. Research by Roberts et al. was mentioned in Lonce Wyse and Srikumar Subramanian’s survey of Web audio technologies in our Winter 2013 issue. Here, Roberts et al. describe their system for creating musical “instruments” that can run in desktop and mobile Web browsers. The system contains two JavaScript libraries: Gibberish.js is a sound-synthesis library, and Interface.js is a user-interface library that displays items on the screen and processes input from a mouse or touchscreen, as well as from a mobile device’s motion sensors. The latter library also allows a Web page to control remote sound-synthesis applications via MIDI and OSC. These two libraries are put to work in the authors’ programming environment called Gibber. Gibber lets one create a complete virtual instrument—both the synthesis and the graphical interface—in very few lines of code. Users can then distribute such instruments publicly via a central database, so that other users can play these instruments from their Web browsers.

The article by Murphy et al...
