  • About This Issue

The first two articles of this issue present research on topics unrelated to the theme of "emotion and expression" announced on the cover. In the first article, Grégoire Carpentier and Jean Bresson of Paris's Institut de Recherche et Coordination Acoustique/Musique (IRCAM) introduce Orchidée, their software for computer-assisted orchestration. Like the work by François Rose and James Hetrick published in the Spring 2009 issue of Computer Music Journal, the IRCAM software retrieves, from a database, a set of instrument tones that in combination best approximate the timbre of a specified target sound. A composer or orchestrator can then choose to adopt the recommended instrumental combination for a certain spot in the piece. Orchidée offers a number of enhancements. For example, the sound-similarity model is based on a number of perceptual features rather than just the spectrum. A generic client–server framework is offered, with an emphasis on connecting sonic data, symbolic (musical) data, and perceptual feature spaces. This architecture facilitates the design of flexible graphical user interfaces, and the authors provide an example implementation based on OpenMusic. When no target recording is available, the user can approximate the desired sound using sound-synthesis controls within the OpenMusic tool. The program lends itself to a progressive, interactive fine-tuning of the instrumentation constraints and the target sound during the search task. This software was used for orchestral emulation of vocal sounds in the composition of Jonathan Harvey's Speakings (2008).
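
The retrieval step can be pictured, in much simplified form, as a combinatorial search over per-tone perceptual features. The Python sketch below is only an illustration of that idea, using invented feature values, a crude additive mixture model, and a plain greedy strategy rather than Orchidée's actual feature set or multi-objective search; it picks the combination of database tones whose summed features come closest to a target feature vector.

    import numpy as np

    # Hypothetical perceptual-feature vectors (say, loudness, spectral centroid,
    # attack sharpness) for single instrument tones in a database. The feature
    # set, the mixture model, and the search strategy are all stand-ins here.
    DATABASE = {
        "flute_C5_p":    np.array([0.20, 0.55, 0.10]),
        "oboe_C5_mf":    np.array([0.35, 0.70, 0.15]),
        "cello_C3_f":    np.array([0.60, 0.25, 0.30]),
        "trumpet_G4_f":  np.array([0.70, 0.65, 0.05]),
        "clarinet_E4_p": np.array([0.25, 0.40, 0.20]),
    }

    def combination_features(names):
        """Crude mixture model: a combination's features are the sum of its
        components' features (a real system would model masking and fusion)."""
        return sum((DATABASE[n] for n in names), np.zeros(3))

    def greedy_orchestration(target, max_instruments=3):
        """Greedily add the database tone that most reduces the distance to the target."""
        chosen = []
        current_dist = np.linalg.norm(target)   # distance of the empty (silent) combination
        for _ in range(max_instruments):
            best_name, best_dist = None, current_dist
            for name in DATABASE:
                if name in chosen:
                    continue
                dist = np.linalg.norm(combination_features(chosen + [name]) - target)
                if dist < best_dist:
                    best_name, best_dist = name, dist
            if best_name is None:                # no remaining tone improves the match
                break
            chosen.append(best_name)
            current_dist = best_dist
        return chosen

    # Feature vector of the target sound (in practice analyzed from a recording,
    # or specified via synthesis controls when no recording exists).
    target = np.array([0.80, 0.95, 0.25])
    print(greedy_orchestration(target))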

The second article, by Victor Lazzarini and Joseph Timoney in Ireland, reviews techniques for digitally emulating the oscillators of classic analog synthesis. Ironically, although analog synthesizers use mathematically simple waveforms (sawtooth, square, triangle, etc.), it is not so simple to construct these same signals digitally in a computationally efficient manner without incurring the penalty of aliasing. One of the article's main contributions is to show that the existing digital techniques are all related to nonlinear distortion synthesis. Another is to propose novel nonlinear distortion-synthesis algorithms for approximating those classic analog waveforms. An advantage of distortion techniques is that they allow timbral control without the computational expense of separate digital filters.
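
The aliasing problem is easy to demonstrate. The following Python sketch (an illustration, not one of the authors' distortion-synthesis algorithms) contrasts a naive sawtooth, whose partials above the Nyquist frequency fold back into the audible band, with a band-limited additive sawtooth that sums only the harmonics below Nyquist; the additive version is alias-free but its per-sample cost grows with the number of harmonics, which is precisely the expense that more efficient oscillator algorithms try to avoid.

    import numpy as np

    SR = 44100            # sample rate (Hz)
    F0 = 1000.0           # oscillator frequency (Hz)
    N = 4096              # number of samples to generate
    t = np.arange(N) / SR

    # Naive sawtooth: trivially cheap, but every partial above the Nyquist
    # frequency (SR/2) aliases back into the audible band.
    naive_saw = 2.0 * (t * F0 - np.floor(0.5 + t * F0))

    # Band-limited sawtooth via its Fourier series, truncated at Nyquist.
    n_harmonics = int((SR / 2) // F0)
    bl_saw = np.zeros(N)
    for k in range(1, n_harmonics + 1):
        bl_saw += ((-1) ** (k + 1)) * np.sin(2 * np.pi * k * F0 * t) / k
    bl_saw *= 2.0 / np.pi

    print(n_harmonics)    # 22 harmonics fit below Nyquist at 1 kHz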

The next three articles relate to the theme of "emotion and expression," presenting research that quantitatively approaches these aspects of music that are often thought of as subjective. The first of these articles, by Steven Livingstone et al., describes their research using software for altering a MIDI "score" (not necessarily represented as music notation) in a way that changes the emotion that the music is perceived to express. The system changes not only the dynamics and timing (parameters that are traditionally modifiable in expressive performance), but also sometimes the pitches (a parameter that the Western classical music tradition usually considers to be the composer's territory, not the performer's). Music in a major key can thus be switched to minor and vice versa. The authors' tool is intended to help researchers analyze how music conveys emotions. This article makes use of a two-dimensional representation of emotion in which the axes are valence (positive or negative) and arousal (active or passive). For example, happiness is considered to have active arousal and positive valence, and anger is considered to have active arousal but negative valence. The authors conducted two psychological experiments using their software. The first of these showed that subjects usually were able to identify the intended emotion. The second showed that the authors' software accomplished significant shifts in both valence and arousal, whereas software that did not modify pitches shifted only the arousal.
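
Both ideas in this summary, the valence/arousal plane and the major-to-minor pitch modification, can be sketched in a few lines of Python. The emotion coordinates and the scale-degree rule below are illustrative assumptions, not the mappings or transformation rules used in the article.

    # Illustrative valence/arousal coordinates in [-1, 1]; x = valence, y = arousal.
    EMOTION_SPACE = {
        "happy": ( 0.8,  0.7),   # positive valence, active arousal
        "angry": (-0.7,  0.8),   # negative valence, active arousal
        "sad":   (-0.6, -0.6),   # negative valence, passive arousal
        "calm":  ( 0.6, -0.5),   # positive valence, passive arousal
    }

    # Scale degrees (in semitones above the tonic) that differ between major and
    # natural minor; lowering them is a crude way to darken a melody's valence
    # while leaving arousal-related parameters (tempo, dynamics) untouched.
    MAJOR_THIRD, MAJOR_SIXTH = 4, 9

    def major_to_minor(midi_pitches, tonic=60):
        """Lower the 3rd and 6th scale degrees of a major-key melody by a semitone."""
        shifted = []
        for pitch in midi_pitches:
            degree = (pitch - tonic) % 12
            if degree in (MAJOR_THIRD, MAJOR_SIXTH):
                pitch -= 1
            shifted.append(pitch)
        return shifted

    # A C-major fragment C4 E4 G4 A4 C5 becomes C4 Eb4 G4 Ab4 C5.
    print(major_to_minor([60, 64, 67, 69, 72]))   # [60, 63, 67, 68, 72]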

The valence/arousal model is also central to the article by Luca Mion et al. In addition to this affective model, these authors use a sensorial model in which the two axes are kinematics and energy. Whereas the affective domain contains emotions represented by adjectives such as happy, angry, sad, and calm, the sensorial domain's descriptors include hard, soft, heavy, and light. The authors' focus is not music with its attendant structure so much as isolated sounds (including single tones and short musical gestures) such as might be used in human–computer interfaces. Using machine-learning techniques, they extracted audio...
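
As a toy illustration of this kind of sound labeling (not the authors' features, descriptor placement, or learning method), one might assign a short sound to the nearest descriptor centroid in a small feature space:

    import numpy as np

    # Hypothetical centroids for the four sensorial descriptors in a toy
    # two-dimensional audio-feature space (say, mean energy and attack speed).
    SENSORIAL_CENTROIDS = {
        "hard":  np.array([0.9, 0.8]),
        "soft":  np.array([0.2, 0.2]),
        "heavy": np.array([0.8, 0.3]),
        "light": np.array([0.3, 0.9]),
    }

    def classify(features, centroids=SENSORIAL_CENTROIDS):
        """Label a short sound by its nearest descriptor centroid."""
        return min(centroids, key=lambda name: np.linalg.norm(features - centroids[name]))

    print(classify(np.array([0.85, 0.35])))   # -> heavy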
