
About This Issue

The six articles in this issue revolve around the concept of transformation, each approaching it from a different angle. The opening interview features a composer who finds inspiration in the sonification of visual forms. The next four articles respectively discuss modifying notation dynamically, mapping gesture into sound, mapping from a virtual space to a real space, and converting from sound back to notation. The sixth article provides a tutorial on techniques for sonic transformation.

French composer Philippe Leroux is known for his electroacoustic music as well as his vocal, orchestral, and chamber compositions. Born in 1959, Mr. Leroux studied with a number of famous composers, among them Olivier Messiaen, Iannis Xenakis, and Pierre Schaeffer. In the interview in this issue, he also mentions being influenced by the spectral school (e.g., Gérard Grisey and Tristan Murail), François Bayle, and György Ligeti. Mr. Leroux discusses the impact of electronic studio techniques on his approach to writing for acoustic instruments. For example, he often conceives of composing with sounds or groups of sounds, rather than with notes, rhythms, and chords. Elements such as dynamics and texture are important, as is a notion of perpetual motion and musical transformation. He is interested in translating movement and gesture (including handwriting) into music, and gives several detailed examples of doing so. In addition to employing computer-aided composition tools such as OpenMusic, he has used electronics to create new timbres and to extend traditional instrumental and vocal timbres, working with both real-time electronics (such as Max patches) and fixed media.
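
To make the gesture-to-music idea concrete, here is a minimal sketch of one way a sampled two-dimensional trajectory (such as a handwriting stroke) could be mapped to a melodic line, reading horizontal position as onset time and vertical position as pitch. The mapping and all names are invented for illustration; they do not represent Mr. Leroux's actual OpenMusic processes.

```python
# Minimal sketch: map a sampled 2-D gesture (e.g., a handwriting
# trajectory) to a melodic line. Horizontal position becomes onset
# time, vertical position becomes pitch. Invented mapping, for
# illustration only.
trajectory = [(0.00, 0.20), (0.12, 0.55), (0.30, 0.40),
              (0.47, 0.90), (0.66, 0.65), (0.85, 0.30)]  # (x, y) in [0, 1]

LOW_MIDI, HIGH_MIDI = 48, 84    # C3..C6 pitch range
DURATION = 8.0                  # total length in seconds

notes = [
    (x * DURATION,                                  # onset (s)
     round(LOW_MIDI + y * (HIGH_MIDI - LOW_MIDI)))  # MIDI pitch
    for x, y in trajectory
]
for onset, pitch in notes:
    print(f"t={onset:4.2f}s  MIDI note {pitch}")
```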

Jason Freeman’s article surveys compositional practice in the use of dynamically updated notation: notation that changes during the performance, based on human or computer input. This new paradigm requires musicians to adapt to real-time score modifications in the middle of a concert. The dynamic score might contain either traditional Western music notation or some sort of graphical representation. The author compares real-time notation to precedents in computer-assisted composition and in open-form composition. An example of the latter is Earle Brown’s music that was influenced by the mobiles of the sculptor Alexander Calder. With computer technology mediating the display of notation, input to the algorithm can come from various sources, including the audience. Mr. Freeman describes a number of pieces, some of them his own, that bring the audience into the performance in this way. He then describes the challenges of designing notation that can be sight-read during performance, as well as the challenges of informing the audience about just what is happening onstage.
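
As a toy illustration of the paradigm (not of any particular piece discussed in the article), the following sketch regenerates each bar of a score from simulated audience votes just before it would be displayed; the vote mechanism, pitch sets, and text rendering are all invented stand-ins for networked input and a proper notation display.

```python
import random

# Toy model of a dynamically updated score: before each bar is shown
# to the players, (simulated) audience votes choose the pitch set
# from which that bar is generated.
PITCH_SETS = {
    "bright": ["C5", "D5", "E5", "G5"],
    "dark":   ["C4", "Eb4", "F4", "Ab4"],
}

def audience_vote():
    """Stand-in for real audience input (e.g., phones or a web page)."""
    return random.choice(list(PITCH_SETS))

def next_bar(pitch_set, length=4):
    """Generate one bar of 'notation' from the winning pitch set."""
    return [random.choice(PITCH_SETS[pitch_set]) for _ in range(length)]

for bar in range(1, 5):
    choice = audience_vote()          # arrives during the performance
    print(f"bar {bar} ({choice}):", " ".join(next_bar(choice)))
```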

Introduced almost 80 years ago, the Theremin is renowned as the first musical instrument played without tactile interaction. More recently, researchers at the Helsinki University of Technology implemented a distantly related digital instrument, the “virtual air guitar,” in which the player controls synthesized electric-guitar sounds by pantomiming guitar-playing gestures on a nonexistent instrument. The article by Jyri Pakarinen, Tapio Puputti, and Vesa Välimäki describes how they have extended the virtual air guitar to imitate a slide guitar. In playing a real slide guitar, also known as bottleneck guitar, the musician wears a tube on one finger and slides it lightly along the strings for a glissando effect. The authors use a camera to track the air guitarist’s hand movements, which are converted into data such as string length. The sonic emulation of the slide guitar is accomplished by extending the authors’ previous waveguide physical model of the electric guitar. The new slide guitar emulation incorporates a model of the contact sound made by the tube against the strings.
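
A rough sense of how a gliding “string length” drives a glissando can be had from a Karplus-Strong-style string with a time-varying, interpolated delay line. This is only a sketch of the underlying principle: the authors’ system uses a considerably more detailed waveguide model of the electric guitar plus a model of the tube–string contact sound, and the frequency track below is synthetic rather than camera-derived.

```python
import numpy as np

SR = 44100  # sample rate

def slide_string(freq_track, damping=0.996):
    """Karplus-Strong-style plucked string whose delay-line length
    (the 'string length' in samples) varies over time, so a gliding
    frequency track yields a slide-like glissando. Illustrative
    sketch only, not the authors' waveguide model."""
    n = len(freq_track)
    max_len = int(SR / freq_track.min()) + 2
    buf = np.random.uniform(-1.0, 1.0, max_len)   # noise burst = pluck
    out = np.empty(n)
    prev = 0.0
    w = 0                                          # circular write index
    for i in range(n):
        delay = SR / freq_track[i]                 # string length in samples
        r = (w - delay) % max_len                  # fractional read point
        j = int(r)
        frac = r - j
        # linear interpolation for the continuously varying delay
        y = (1 - frac) * buf[j] + frac * buf[(j + 1) % max_len]
        buf[w] = damping * 0.5 * (y + prev)        # lowpass loop filter
        prev = y
        out[i] = y
        w = (w + 1) % max_len
    return out

# Hand tracked by the camera -> string length -> pitch: here a
# synthetic track sliding from G3 up one octave over 1.5 seconds.
t = np.linspace(0.0, 1.5, int(1.5 * SR))
audio = slide_string(196.0 * 2.0 ** (t / 1.5))
```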

The article by Jonas Braasch, Nils Peters, and Daniel Valente presents a spatialization technique based on the idea of virtual microphones. The classic approach to spatialization in computer music involves positioning a source sound by placing it in multiple loudspeakers, using simple amplitude-panning laws to determine the balance between speakers. The authors’ approach introduces simulated microphones that have adjustable directivity patterns, capturing virtual sounds in a virtual space. Playback involves reproducing the sounds that were encountered at the virtual microphones’ positions. If loudspeakers are placed at real-world locations corresponding to the virtual microphones’ locations, accurate reproduction...
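
As a hedged illustration of the virtual-microphone idea (not the authors’ implementation; the function and its details here are assumptions), the following sketch computes the gain that a first-order virtual microphone with adjustable directivity applies to a point source. In a playback setup, each such gain would set the level of the real loudspeaker placed at the corresponding virtual microphone’s location.

```python
import numpy as np

def virtual_mic_gain(src, mic, axis, alpha=0.5):
    """Gain a first-order virtual microphone applies to a point source.
    alpha blends omnidirectional (1.0) and figure-of-eight (0.0)
    patterns; alpha = 0.5 gives a cardioid. Distance rolls off as 1/r.
    Illustrative sketch; names and details are assumptions."""
    v = np.asarray(src, float) - np.asarray(mic, float)
    r = np.linalg.norm(v)
    cos_theta = v @ np.asarray(axis, float) / (r * np.linalg.norm(axis))
    pattern = alpha + (1.0 - alpha) * cos_theta   # first-order directivity
    return pattern / max(r, 1e-6)                 # negative = rear lobe

# Two virtual cardioids aimed at the stage from front-left and
# front-right; each gain becomes the matching loudspeaker's level.
source = (1.0, 2.0)
mics = [((-1.0, 0.0), (0.5, 1.0)),    # (position, aim axis)
        (( 1.0, 0.0), (-0.5, 1.0))]
for pos, axis in mics:
    print(f"mic at {pos}: gain {virtual_mic_gain(source, pos, axis):+.3f}")
```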
