
About This Issue

Front cover. A collection of figures from the issue’s four articles on sound synthesis.


Back cover. A figure from the article by Conan et al., showing their synthesizer’s graphical user interface and the use of a tablet for gestural control.

This issue’s front cover promises “Different Angles on Sound Synthesis,” a theme pertaining to the first four articles. Each of these four does in fact present a different approach to digital sound synthesis, yet this statement should not be taken to mean that these approaches all represent fundamentally new synthesis techniques. On the contrary, three of them use a classic technique, additive synthesis, but in uncommon ways that harness psychoacoustic phenomena or unusual mappings from inputs to synthesis parameters.
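As a point of reference, additive synthesis in its classic form simply sums sinusoidal partials. The following minimal Python/NumPy sketch illustrates that baseline; the partial frequencies and amplitudes are arbitrary choices for the example and are not drawn from any of the articles.

```python
import numpy as np

def additive_synth(freqs, amps, duration, sr=44100):
    """Classic additive synthesis: sum sinusoidal partials."""
    t = np.arange(int(duration * sr)) / sr
    signal = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        signal += a * np.sin(2 * np.pi * f * t)
    # Normalize to avoid clipping when many partials are summed.
    return signal / max(np.max(np.abs(signal)), 1e-12)

# Example: a 220-Hz tone with four harmonics of decreasing amplitude.
freqs = [220.0 * (k + 1) for k in range(4)]
amps = [1.0 / (k + 1) for k in range(4)]
tone = additive_synth(freqs, amps, duration=1.0)
```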

The first article, by Gary Kendall and colleagues, offers a fresh perspective on combination tones. These are typically described as psychoacoustic products of nonlinear distortion in the ear resulting from the interaction between loud acoustical stimuli. A number of composers have explored the special perceptual effects of these distortion products, notably Maryanne Amacher (1938–2009). In contrast to the high volumes and limited timbral control that characterized Amacher’s installations, the article shows how to use digitally induced combination tones to construct moderate-volume sounds that follow the pitch and amplitude of a recorded or real-time input signal, or that match up to four harmonics of a time-varying target spectrum, while engendering distinctive spatial imagery.
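The psychoacoustic principle at work can be suggested with a toy example: two loud primary tones at frequencies f1 and f2 evoke a faint quadratic difference tone at f2 - f1 in the listener's ear. The sketch below chooses primaries so that this difference tone lands on a desired pitch; it illustrates the general phenomenon only and is not Kendall and colleagues' algorithm (the function names and the fixed 2-kHz lower primary are invented for the example).

```python
import numpy as np

def primaries_for_difference_tone(target_f, f1=2000.0):
    """Choose two primaries whose quadratic difference tone
    (f2 - f1) falls on a desired target pitch.  Illustrative
    only; not the method described by Kendall et al."""
    return f1, f1 + target_f

def two_tone_stimulus(target_f, duration, sr=44100):
    t = np.arange(int(duration * sr)) / sr
    fa, fb = primaries_for_difference_tone(target_f)
    # The ear's nonlinearity generates a faint tone at fb - fa.
    return 0.5 * (np.sin(2 * np.pi * fa * t) + np.sin(2 * np.pi * fb * t))

stimulus = two_tone_stimulus(target_f=440.0, duration=2.0)
```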

The second article, by Simon Conan et al., extends the work that the authors presented in their prizewinning paper at last year’s Digital Audio Effects conference, DAFx 2013. Their Best Paper award entailed publication in Computer Music Journal. The authors’ synthesis technique emulates the continuous interaction between an object and a surface, specifically focusing on friction phenomena—rubbing and scratching—and on the rolling of an object across the surface. The synthesis is controlled by a strategy that lets the user morph between these different types of simulated physical interactions.
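One way to picture the morphing strategy, purely as a caricature and not as the authors' physically informed model, is to interpolate between two toy excitation signals: smoothed noise standing in for rubbing and an irregular impulse train standing in for rolling.

```python
import numpy as np

def rubbing(n, rng=None):
    """Rubbing caricature: crudely low-passed broadband noise."""
    rng = rng or np.random.default_rng(0)
    noise = rng.standard_normal(n)
    kernel = np.ones(32) / 32
    return np.convolve(noise, kernel, mode="same")

def rolling(n, sr=44100, rate=30.0, rng=None):
    """Rolling caricature: irregularly spaced unit impacts."""
    rng = rng or np.random.default_rng(1)
    out = np.zeros(n)
    idx = np.cumsum(rng.exponential(sr / rate, size=64)).astype(int)
    out[idx[idx < n]] = 1.0
    return out

def morph(alpha, n=44100):
    """Interpolate the excitation between rubbing (0) and rolling (1)."""
    return (1 - alpha) * rubbing(n) + alpha * rolling(n)

excitation = morph(0.5)  # halfway between the two interactions
```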

The next two articles examine mappings from algorithms or equations into control parameters for additive synthesis. In the article by Jaime Serquera and Eduardo Reck Miranda, the input is a cellular automaton. The authors’ technique uses histograms that measure the occurrence frequencies of different “colors” (cell values) in the cellular automaton. The mapping takes advantage of these histograms’ similarity to sound spectra, aurally representing the automaton’s behavior in a way that avoids some of the unpredictability of other techniques for sonifying cellular automata. In the article by Rodrigo F. Cádiz and Javier Ramos, on the other hand, the input is a quantum physics equation. The equation in question describes a Gaussian-shaped bouncing wave packet, which the authors chose for its interesting dynamic behavior. Sound or video examples are provided for all four of these articles on sound synthesis.
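The histogram-to-spectrum idea can be sketched in a few lines: count how often each cell value occurs in a row of the automaton, then use the normalized counts as partial amplitudes for additive synthesis. The mapping below is a generic illustration of that idea, not Serquera and Miranda's published technique; the base frequency and the one-partial-per-color assignment are assumptions made for the example.

```python
import numpy as np

def ca_histogram_spectrum(ca_row, n_states, base_freq=110.0):
    """Map one cellular-automaton row to additive-synthesis
    parameters: each cell value ("color") gets a partial whose
    amplitude is that value's occurrence frequency.  A sketch of
    the general idea, not the authors' published mapping."""
    counts = np.bincount(ca_row, minlength=n_states).astype(float)
    amps = counts / max(counts.sum(), 1.0)           # normalized histogram
    freqs = base_freq * np.arange(1, n_states + 1)   # one partial per color
    return freqs, amps

# Example: a random 64-cell row of an 8-state automaton.
rng = np.random.default_rng(0)
row = rng.integers(0, 8, size=64)
freqs, amps = ca_histogram_spectrum(row, n_states=8)
```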

The issue’s fifth article, by Qi Yang and Georg Essl, lies more in the domain of controllers than of synthesis per se. Specifically, the authors are interested in using a performer’s hand moving freely in the air as an alternative to the pitch and modulation wheels found on many keyboard-based synthesizers. Their user study evaluated camera-based tracking of hand gestures in comparison to the use of the traditional wheels. The choice of mapping was important; for example, changing the hand’s detected width (by opening or closing the hand or by turning the wrist) was found to offer good control of tremolo. In some cases, the gesture tracking outperformed the wheels when two synthesis parameters were being controlled simultaneously.
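As a toy illustration of such a mapping, and not the authors' implementation, a normalized hand-width reading could drive tremolo depth on a carrier tone:

```python
import numpy as np

def apply_tremolo(carrier, hand_width, rate_hz=6.0, sr=44100):
    """Map a normalized hand-width reading in [0, 1] to tremolo
    depth.  Illustrative only; the camera-based tracking in Yang
    and Essl's system is not specified at this level here."""
    t = np.arange(len(carrier)) / sr
    depth = np.clip(hand_width, 0.0, 1.0)
    lfo = 1.0 - depth * 0.5 * (1 + np.sin(2 * np.pi * rate_hz * t))
    return carrier * lfo

sr = 44100
t = np.arange(sr) / sr
carrier = np.sin(2 * np.pi * 440.0 * t)
open_hand = apply_tremolo(carrier, hand_width=0.9)    # deep tremolo
closed_hand = apply_tremolo(carrier, hand_width=0.1)  # shallow tremolo
```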

In the final article, Amy Hoover et al. discuss their work in computer-aided composition by amateur musicians. The idea is for a user to present the computer with a melody, or, more generally, any set of one or more simultaneous voices in a polyphonic texture, and have the computer generate a new voice to add to the texture, using the provided music as a model. The software incorporates the paradigm of interactive evolutionary computation, in which the user chooses one candidate from a set proposed by the computer...
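The interactive-evolutionary loop itself can be sketched schematically: the program proposes several mutated variants, the user auditions them and picks one, and the pick seeds the next round. The toy melody representation and mutation operator below are invented for the example and do not reflect Hoover et al.'s actual system.

```python
import random

def mutate(melody, p=0.2):
    """Toy mutation: nudge some pitches by one or two semitones."""
    return [n + random.choice([-2, -1, 1, 2]) if random.random() < p else n
            for n in melody]

def interactive_evolution(seed, n_candidates=4, n_rounds=3):
    """Schematic interactive-evolutionary-computation loop: propose
    candidates, let the user pick one, and mutate the pick to form
    the next generation.  Not the actual system of Hoover et al."""
    current = seed
    for round_no in range(n_rounds):
        candidates = [mutate(current) for _ in range(n_candidates)]
        for i, c in enumerate(candidates):
            print(f"round {round_no}, candidate {i}: {c}")
        current = candidates[int(input("pick one: "))]
    return current

# Example: evolve variations of a short melody (MIDI note numbers).
result = interactive_evolution([60, 62, 64, 65, 67])
```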
