Computer Music Journal 25.2 (2001)



About This Issue


This issue of Computer Music Journal examines three radically different types of interfaces for performing electroacoustic music. One type consists of a traditional musical instrument outfitted with sensors that control a synthesizer, which extends the acoustic instrument's sonic repertoire. This approach offers the obvious advantage that trained musicians can immediately put to use techniques honed over years of practice, instead of having to learn a completely new instrument, and it also lets the performer integrate the natural instrumental sound with synthetic sounds. The article by Sølvi Ystad and Thierry Voinier presents an example of extending the flute in this way. Their interface consists of magnetic sensors on the keys as well as an internal microphone. In conjunction with the controller, the authors developed a hybrid synthesis technique that combines physical and spectral models to produce flutelike sounds. The flute's acoustical output can be muted so that the synthetic and natural sounds can be produced in isolation or together. The controller also provides MIDI output.

Another approach to performance interfaces involves tracking the motions of dancers, rather than musicians, and using those motions to control the music. This approach has been discussed recently in Computer Music Journal 22:4 (Winter 1998) and 24:1 (Spring 2000). The article in the present issue, by Roberto Morales-Manzanares and colleagues, focuses less on the sensors attached to the dancers than on the software for mapping the sensor data into musical parameters. This software includes two music composition environments, Escamol (by Mr. Morales-Manzanares) and Aura (by Roger Dannenberg). In addition, the article describes how the system was used in composing three pieces, by Jonathan Berger, Mr. Dannenberg, and Mr. Morales-Manzanares, respectively.

The third article on performance interfaces, by Gil Weinberg and Seum-Lim Gan, investigates how multiple players, who may be musically untrained, can control a single system in an interdependent manner. The controller in this case consists of a set of gel balls that the performers squeeze and pull, and the control data is converted to music by way of a Max patch that sends MIDI data to external synthesizers. Because the mapping is implemented as a Max patch, composers should be able to use the authors' controller while easily substituting their own mappings.
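The kind of mapping such a patch performs, converting continuous sensor readings into MIDI control data, can be sketched in a few lines. The following Python fragment is a hypothetical illustration, not the authors' implementation: it assumes squeeze readings normalized to the range 0.0 to 1.0 and scales them to 7-bit MIDI control-change values, with a second function suggesting how one player's output might be made to depend on the group's.

```python
# Hypothetical sketch (not the authors' Max patch): map a normalized
# squeeze-sensor reading to a 7-bit MIDI control-change value.

def squeeze_to_cc(squeeze: float) -> int:
    """Clamp a sensor reading to [0.0, 1.0] and scale it to 0-127."""
    clamped = max(0.0, min(1.0, squeeze))
    return round(clamped * 127)

def interdependent_cc(own: float, others: list[float]) -> int:
    """Weight one player's reading by the group average, so that no
    player's control value is independent of the others (an assumed
    interpretation of "interdependent" control, for illustration)."""
    group = sum(others) / len(others) if others else 1.0
    return squeeze_to_cc(own * group)
```

For example, a full squeeze (`1.0`) maps to the maximum controller value of 127, while the same squeeze combined with a half-engaged group is scaled down accordingly.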

On a different topic, Axel Röbel proposes a sound-synthesis technique based on the theory of system attractors and dynamic modeling. Although these concepts are best known from the study of chaotic systems, Mr. Röbel explains their applicability to simulating the sounds of musical instruments. He trained his model, a neural network, on recorded saxophone, flute, and piano notes and then resynthesized those sounds from the model. To test whether the model successfully characterized the sounds, its inputs were subsequently altered to create time-stretched versions of the original tones. In the case of the saxophone and flute, these stretched variants sounded natural. The technique is not yet straightforward or practical for general use, but the author believes that interpolation techniques might eventually make it possible to capture all the sonic behavior of a musical instrument from a set of recorded samples.

There are three event reviews in this issue, including a summary of the concerts at last year's International Computer Music Conference in Berlin. Also reviewed are David Cope's third book on his algorithmic composition systems and a CD-ROM containing a host of Csound orchestra and score files. The compact discs reviewed here feature music from Canada, the UK, and the USA, as well as an international assortment of composers on a CD from the Canadian Electroacoustic Community.
