
About This Issue

The first two articles in this issue represent revised and expanded versions of papers submitted to the 2012 Sound and Music Computing Conference, which was held in July at Aalborg University in Copenhagen. The two papers shared the Best Paper award, granted by the conference chair and technical paper committee in consultation with Computer Music Journal.

The first article offers an overview of the Spatial Sound Description Interchange Format (SpatDIF), a specification for storing and transmitting spatial audio scene descriptions. The specification, which is not bound to any particular implementation, programming language, or file format, offers syntax and semantics for representing spatialized sound, whether for authoring, streaming, or rendering. Content creators such as composers can define a sound's three-dimensional placement and motion, as well as other spatial sound parameters, in a manner independent of software, hardware, and performance venue. SpatDIF utilizes a stratified model consisting of layers for authoring, scene description, encoding, decoding, hardware abstraction, and physical devices. The authors show examples of SpatDIF's use in representing the spatial aspects of a fixed-media composition re-implemented in Max/MSP (John Chowning's Turenas) as well as those of an interactive audiovisual installation (Flowspace II by Jan Schacher, Daniel Bisig, and Martin Neukom).

The second of the prizewinning articles presents a simple hardware device designed to introduce musicians to haptics without imposing high barriers to entry, whether financial or cognitive. The authors decided to use motorized faders, which are familiar from their presence in many audio mixing consoles. The force feedback offered by this simple user interface is visually conveyed to the performer and the audience via a light whose brightness is proportional to the force applied to the fader. Importantly, the software and Arduino-based firmware are open-source and designed for easy reconfiguration, so that the device can be incorporated in diverse do-it-yourself hardware applications. Existing examples include a virtual plucked-string instrument and a device for "flinging" sound around a room.

The issue continues with two articles on digital sound-synthesis techniques. Victor Lazzarini and Joseph Timoney's article studies a group of techniques that use nonlinear distortion to emulate resonant sounds. These methods provide an alternative to the familiar use of oscillators and filters for subtractive synthesis. They also improve on traditional frequency modulation (FM) synthesis, which can emulate resonant frequency regions but can suffer from unrealistic spectral evolution as the amount of nonlinear distortion changes. In the described synthesis techniques, the traditional source-modifier arrangement is recast as a heterodyne structure made of a sinusoidal carrier and a complex modulator created by nonlinear distortion. These methods are computationally efficient and allow signals to be added without the problems of frequency-dependent phase interference.
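To make the heterodyne arrangement concrete, here is a minimal sketch in Python with NumPy (the article itself is not tied to any particular language). It multiplies a sinusoidal carrier by a modulator produced by nonlinear distortion of a sinusoid at the fundamental; an exponential waveshaper, of the kind used in ModFM-style synthesis, stands in here for whichever distortion the article employs, and the function name and parameter values are illustrative.

```python
import numpy as np

def heterodyne_formant(f0, fc, k, dur=1.0, sr=44100):
    """One resonant 'formant' voice: a sinusoidal carrier at fc is multiplied
    by a complex modulator obtained by nonlinearly distorting a sinusoid at
    the fundamental f0. The index k controls the bandwidth of the spectral
    region around fc; choosing fc as a multiple of f0 keeps the result harmonic."""
    t = np.arange(int(dur * sr)) / sr
    # Exponential waveshaping of a cosine, normalized so its peak value is 1.
    modulator = np.exp(k * (np.cos(2 * np.pi * f0 * t) - 1.0))
    carrier = np.cos(2 * np.pi * fc * t)
    return carrier * modulator

# Because the partials of each voice share the carrier's cosine phase,
# two such voices can be summed without frequency-dependent phase cancellation.
tone = heterodyne_formant(220, 880, k=10) + 0.5 * heterodyne_formant(220, 1760, k=20)
```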

David Bessell's article tackles the synthesis of percussion instruments. His technique aims to offer some of the flexibility and expressiveness of physical modeling without its drawback of requiring musicians to acquire technical expertise. The technique also has some similarities with sampling synthesis but reduces the stored data's size by an order of magnitude. In the author's approach, which he calls dynamic convolution modeling, an attack part and a decay part are obtained from recorded sounds. The amplitude envelope of the attack part is applied to a noise source. The enveloped noise can then be low-pass filtered, with the filter's cutoff frequency controlled in real time by MIDI velocity. This filtered noise is then convolved with an impulse response derived from a recording of a resonant body, such as a drum. The author presents hybrid sound examples, such as a pizzicato gong.
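The processing chain just described can be sketched roughly as follows, here in Python with SciPy rather than the author's own implementation; the Hilbert-transform envelope extraction, the second-order Butterworth filter, and the velocity-to-cutoff mapping are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, lfilter, fftconvolve, hilbert

def percussive_hit(attack_recording, body_ir, velocity, sr=44100):
    """Illustrative sketch of the attack-stage chain described above:
    envelope of a recorded attack -> imposed on noise -> low-pass filtered
    (cutoff scaled by MIDI velocity) -> convolved with a resonant-body IR."""
    envelope = np.abs(hilbert(attack_recording))        # amplitude envelope of the attack
    excitation = envelope * np.random.uniform(-1.0, 1.0, len(envelope))
    cutoff = 200.0 + (velocity / 127.0) * 8000.0        # hypothetical velocity mapping
    b, a = butter(2, cutoff / (sr / 2), btype="low")
    excitation = lfilter(b, a, excitation)
    return fftconvolve(excitation, body_ir)             # e.g., body_ir recorded from a drum
```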

The issue's final article, in the arena of music information retrieval, concerns the automatic segmentation of musical audio into sections composed in different keys. The audio is analyzed for pitch classes using the chroma energy distribution normalized statistics (CENS). The resulting twelve-dimensional space is data-reduced using principal component analysis and non-negative matrix factorization, and the k-means clustering algorithm is applied to separate the frames into groups that are in different keys. The technique is evaluated using both real audio recordings and artificial data sets...
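A compact sketch of that pipeline, written in Python with librosa and scikit-learn (tools the article does not necessarily use), might look like the following; applying principal component analysis alone rather than combining it with non-negative matrix factorization, and fixing the number of clusters in advance, are simplifications made here for brevity.

```python
import librosa
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def key_labels(audio_path, n_keys=2, n_components=4):
    """Illustrative pipeline: CENS chroma features -> dimensionality
    reduction -> k-means clustering of frames into putative key regions."""
    y, sr = librosa.load(audio_path)
    chroma = librosa.feature.chroma_cens(y=y, sr=sr)   # shape (12, n_frames)
    reduced = PCA(n_components=n_components).fit_transform(chroma.T)
    labels = KMeans(n_clusters=n_keys, n_init=10).fit_predict(reduced)
    return labels  # one label per frame; contiguous runs suggest sections in one key
```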
