About This Issue

An interview with composer Clarence Barlow begins this issue's articles. Raised and educated in Calcutta, India, Mr. Barlow then spent several decades in Europe, where he studied with Zimmermann and Stockhausen (among others), lectured in Darmstadt and Cologne, organized an International Computer Music Conference, and served as professor of composition and sonology in The Hague. Currently he teaches at the University of California, Santa Barbara. The interview traces Mr. Barlow's development as a composer, one whose individuality and creativity stand out even in a field where these attributes are taken for granted.

Following the interview, this issue presents two technical articles on sound synthesis for emulating traditional musical instruments. The first comes from the Music Technology Group at Pompeu Fabra University in Barcelona, which has a long history of work on such techniques, primarily using the spectral model of Xavier Serra's deterministic-plus-stochastic additive synthesis. More recent work has investigated the use of sampled sound and how to construct natural-sounding musical phrases from a database of recorded notes. Along these lines, the article by Esteban Maestre and his Barcelona coauthors presents a system whose goal is to create expressive synthetic performances by retrieving the optimal sampled sounds to match the notes of a given musical score, transforming them in the frequency domain as needed to achieve realistic transitions from one note to the next. The samples in the database are annotated with descriptors derived from spectral analysis. From this database, a machine-learning subsystem constructs a model of expressive performance that calculates expected transformations of timing, amplitude, and timbre. The synthesis component uses this model to find the most suitable samples and to determine how to process and concatenate them. This study used a pre-existing database of performances by one jazz saxophonist.
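
To give a concrete flavor of the retrieval stage such concatenative systems depend on, the following Python sketch shows one way a sample could be chosen from an annotated database to match the descriptors predicted for each score note. The descriptor names, weights, and database layout are illustrative assumptions rather than details of the Maestre et al. system.

```python
# Hypothetical sketch of the sample-retrieval step in a concatenative
# synthesizer.  The descriptor names, weights, and database layout are
# invented for illustration, not taken from the Maestre et al. system.

# Each database sample carries descriptors derived from spectral analysis.
database = [
    {"pitch": 440.0, "duration": 0.50, "attack_sharpness": 0.7, "brightness": 0.6},
    {"pitch": 442.0, "duration": 0.45, "attack_sharpness": 0.3, "brightness": 0.4},
    # ... many more annotated samples
]

# Relative importance of each descriptor when matching a sample to a note.
WEIGHTS = {"pitch": 1.0, "duration": 0.5, "attack_sharpness": 0.3, "brightness": 0.3}

def retrieval_cost(target, sample):
    """Weighted distance between the descriptors predicted by the expressive
    model for a score note and a candidate sample's annotations."""
    return sum(w * abs(target[k] - sample[k]) for k, w in WEIGHTS.items())

def select_samples(score_targets):
    """For each note in the score, pick the database sample that best matches
    the predicted descriptors.  A full system would also penalize poor
    note-to-note transitions before transforming and concatenating the
    chosen samples in the frequency domain."""
    return [min(database, key=lambda s: retrieval_cost(t, s)) for t in score_targets]
```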

The other article on sound synthesis is based on work that won the Best Paper award ex aequo at the 2008 Digital Audio Effects (DAFx) conference, an award granted in consultation with Computer Music Journal. In contrast to the sample-based technique of Mr. Maestre et al., Stefan Bilbao's article approaches emulation through physical modeling. Many earlier efforts in physical modeling, such as digital waveguide synthesis, have revolved around efficient simplifications of the instrument's acoustics for purposes of real-time synthesis. However, today's microprocessor speeds allow direct simulation of the acoustics through standard numerical techniques such as finite-difference schemes. Mr. Bilbao's article presents a standard mathematical model of a reed instrument, followed by its embodiment in a finite-difference time-domain algorithm. Although this model, fully discrete in both time and space, remains one-dimensional for efficiency, it avoids problems that arise in more simplified designs, such as the need to "lump" impedances or to deal with fractional-delay interpolation. This model also facilitates experimentation with a wide variety of bore profiles, and it is amenable to extensions such as time-varying and nonlinear effects.
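
For readers unfamiliar with the technique, the sketch below suggests what a finite-difference time-domain scheme for a one-dimensional bore can look like, using Webster's horn equation with an example bore profile. The parameters, bore shape, and boundary conditions are assumptions for illustration; the reed excitation, losses, and the specific scheme developed in the article are not reproduced here.

```python
# Hypothetical sketch: a finite-difference time-domain update for a
# one-dimensional acoustic bore, discretizing S * p_tt = c^2 * (S * p_x)_x
# (Webster's horn equation).  Bore profile, grid sizes, and boundary
# conditions are assumptions; the reed excitation and radiation model that
# a complete reed-instrument simulation needs are omitted.
import numpy as np

c = 343.0                 # speed of sound (m/s)
L = 0.6                   # bore length (m)
N = 120                   # number of spatial grid points
dx = L / (N - 1)
dt = 0.9 * dx / c         # Courant number c*dt/dx = 0.9, inside the stability limit
lam2 = (c * dt / dx) ** 2

x = np.linspace(0.0, L, N)
S = np.pi * (0.008 + 0.01 * x) ** 2   # example (conical) cross-sectional area S(x)
S_half = 0.5 * (S[:-1] + S[1:])       # area interpolated to the half-grid points

p_prev = np.zeros(N)                  # pressure-like variable at time step n-1
p_curr = np.zeros(N)                  # ... at time step n
p_curr[1] = 1e-3                      # crude initial impulse near the mouthpiece end

def step(p_prev, p_curr):
    """Advance the scheme by one time step and return (p_curr, p_next)."""
    p_next = np.empty_like(p_curr)
    p_next[1:-1] = (
        2.0 * p_curr[1:-1] - p_prev[1:-1]
        + lam2 / S[1:-1] * (
            S_half[1:] * (p_curr[2:] - p_curr[1:-1])
            - S_half[:-1] * (p_curr[1:-1] - p_curr[:-2])
        )
    )
    p_next[0] = 0.0    # placeholder boundary where the reed model would couple in
    p_next[-1] = 0.0   # placeholder open end (a radiation condition in practice)
    return p_curr, p_next

for n in range(2000):
    p_prev, p_curr = step(p_prev, p_curr)
```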

François Pachet's article proposes a new paradigm for computer-aided composition. Most composition systems ask users to build musical objects by explicitly assembling components with various construction tools; the author argues that such systems require the user to have some technical understanding of the objects being constructed. In contrast, the author's approach, called description-based design, aims to remove the need for any programming know-how on the part of the user. Instead, the user tags automatically generated objects with arbitrary adjectival descriptors, training the system via a classifier (specifically, a support vector machine) that learns what objective features correspond to the descriptors. The user can then ask the system to generate another object that is like a chosen one but has more or less of the quality specified by a descriptor. As a case study, Mr. Pachet describes experiments with unaccompanied melodies in which the system attempts to classify and generate melodies using the descriptors "serial," "tonal," "brown" (i.e., Brownian), "long," and "short."
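
The tag-then-train loop at the heart of description-based design can be suggested with a small sketch: simulated user tags train a support vector machine on objective features of generated melodies, and the classifier's confidence is then used to rank new candidates for "more" of a descriptor. The generator, features, and descriptor threshold below are invented for illustration and are not those of Mr. Pachet's system.

```python
# Hypothetical sketch of the tag-then-train loop behind description-based
# design.  The melody generator, features, and simulated tags below are
# illustrative assumptions, not the components of Mr. Pachet's system.
import numpy as np
from sklearn.svm import SVC

def features(melody):
    """Toy objective features for a melody given as a list of MIDI pitches."""
    intervals = np.diff(melody)
    return [
        len(melody),                        # length in notes
        float(np.mean(np.abs(intervals))),  # average interval size
        float(np.std(melody)),              # pitch spread
    ]

def random_melody(rng, length):
    """Stand-in generator: a random walk over MIDI pitches."""
    steps = rng.integers(-4, 5, size=length - 1)
    return list(60 + np.concatenate(([0], np.cumsum(steps))))

rng = np.random.default_rng(0)
melodies = [random_melody(rng, int(rng.integers(8, 25))) for _ in range(200)]

# In the real workflow the user tags objects interactively with an arbitrary
# adjective; here the tag ("brown" vs. not) is simulated from pitch spread.
labels = [1 if np.std(m) > 6.0 else 0 for m in melodies]

clf = SVC(kernel="rbf", probability=True)
clf.fit([features(m) for m in melodies], labels)

# "Generate something like this, but more 'brown'": rank fresh candidates by
# the classifier's confidence in the descriptor and keep the strongest one.
candidates = [random_melody(rng, int(rng.integers(8, 25))) for _ in range(50)]
scores = clf.predict_proba([features(m) for m in candidates])[:, 1]
best = candidates[int(np.argmax(scores))]
```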

This issue's final technical article is like the first two in that all three explore general problems using woodwind instruments as specific examples. But where the first two technical articles investigate synthesis of the sounds of reed instruments, the final...
