  • Virtual Gesture Control and Synthesis of Music Performances: Qualitative Evaluation of Synthesized Timpani Exercises
  • Alexandre Bouënard, Marcelo M. Wanderley, Sylvie Gibet, and Fabrice Marandola

The increasing availability of software for creating real-time simulations of musical instrument sounds allows for the design of new visual and sound media. Research in recent decades has focused especially on the control of real and virtual instruments by natural gestures. In this article, we present and extensively evaluate a framework (see Figure 1) for controlling virtual percussion instruments by modeling and simulating a percussionist’s gestures. By positioning the virtual performer at the center of the gesture-to-sound synthesis system, we aim to provide original tools for analyzing and synthesizing instrumental gesture performances. Our physics-based approach to gesture simulation offers insight into the influence of biomechanical parameters on instrumental performance gestures. Simulating both gesture and sound with physical models also leads to a coherent, human-centered interaction, and provides new ways of exploring the mapping between gesture and sound. The use of motion-capture data enables the realistic synthesis of both prerecorded and novel percussion sequences from the specifications in “gesture scores.” Such scores involve motion-editing techniques applied to simple beat attacks. We therefore propose an original gesture language based on instrumental playing techniques. This language is characterized by its expressivity, its interactivity with the user, and its ability to account for co-articulation between gesture units. Finally, providing 3-D visual rendering synchronized with sound rendering allows us to observe virtual performances in the light of real ones, and to qualitatively evaluate both the pedagogical and compositional capabilities of such a system.

Background

Digital musical instruments have been widely studied during the past decades, focusing mostly on the elaboration of new interfaces for controlling sound processes (Miranda and Wanderley 2006). The design of these new musical instruments relies fundamentally on the input gestures they can track. Both sensor- and camera-based motion-capture systems have become widespread solutions for tracking instrumental gestures (Kapur et al. 2003). One can find comprehensive comparisons of tracking solutions more specific to percussion (Tindale et al. 2005; Benning et al. 2007). Such systems may also be used for the analysis of performer gestures (Dahl 2004; Bouënard, Wanderley, and Gibet 2010), where a good understanding of movements leads to the identification of gesture parameters that may be used for interacting with sound-synthesis processes.


Figure 1.

Global framework, from the off-line editing of a gesture score with its corresponding input signal, to the real-time simulation with visual and sound feedback.
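
To make the idea of an off-line gesture score more concrete, the following minimal sketch (in Python) shows one way such a score could be represented: a sequence of beat-attack units, each carrying the playing parameters a gesture simulation would need. The class and field names (GestureScore, BeatAttack, playing_mode, and so on) are illustrative assumptions for this excerpt, not the authors' actual data structures or gesture language.

    # Hypothetical representation of a gesture score as a list of
    # beat-attack units; all names and fields are illustrative only.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class BeatAttack:
        onset_time: float     # seconds from the start of the exercise
        hand: str             # "left" or "right"
        playing_mode: str     # e.g., "legato" or "accent"
        impact_position: str  # e.g., "center" or "edge" of the drumhead
        dynamics: float       # normalized stroke intensity in [0, 1]

    @dataclass
    class GestureScore:
        tempo_bpm: float
        attacks: List[BeatAttack] = field(default_factory=list)

    # Off-line editing: assemble a simple alternating-hands exercise.
    score = GestureScore(tempo_bpm=120.0)
    for i in range(8):
        score.attacks.append(BeatAttack(
            onset_time=i * 0.5,
            hand="right" if i % 2 == 0 else "left",
            playing_mode="legato",
            impact_position="center",
            dynamics=0.7))

A real-time simulation would then consume such a score, retrieving for each attack the motion-capture segment or edited gesture unit that drives the physics-based performer model.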

Nevertheless, motion-capture data intrinsically presents several drawbacks. The recorded motion is dependent on the uniqueness of the interaction situation under study, in the sense that it is difficult to extrapolate it to new instrumental situations. With such data-based approaches, it is indeed far from straightforward to go beyond the recorded data and reuse it to synthesize adaptive and realistic new performances. Moreover, although these systems retrieve kinematic motion data, they fall short of capturing the physics of the recorded situation. The interaction of such kinematic data with sound then relies on non-intuitive multi-dimensional correspondences (Dobrian and Koppelman 2006). Therefore, a promising research direction consists of providing motion models that can interact with sound-synthesis processes.

Developments in sound synthesis have given rise to various methods of generating percussive sounds. Specifically, physics-based synthesis of percussive sounds has involved the modeling of hammer, collision, and sliding excitations (Avanzini and Rocchesso 2004; Avanzini 2007), as well as drum skins (Chuchacz, O’Modhrain, and Woods 2007). However, a main limitation of these methods seems to be the way they are controlled. Despite a few early attempts to approach this issue, it is still not clear how to formally relate these models to the excitation produced by a real or virtual performer.
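
To give a concrete, if highly simplified, picture of this family of models, the following sketch (in Python) simulates a lumped hammer striking a single damped mode of a drumhead through a nonlinear contact spring. All parameter values, the single-mode resonator, and the integration scheme are illustrative assumptions, not the cited models.

    # Toy hammer-membrane collision: a point-mass mallet excites one
    # damped drumhead mode via a nonlinear contact force. Illustrative only.
    import math

    fs = 44100.0                 # audio sample rate (Hz)
    dt = 1.0 / fs

    # Hammer (mallet head) modeled as a point mass.
    m_h = 0.03                   # mass (kg)
    y_h, v_h = -1.0e-3, 2.0      # 1 mm from the head, approaching at 2 m/s

    # One drumhead mode modeled as a damped harmonic oscillator.
    m_m = 0.01                   # modal mass (kg)
    f0, q = 150.0, 30.0          # modal frequency (Hz) and quality factor
    w0 = 2.0 * math.pi * f0
    y_m, v_m = 0.0, 0.0

    # Nonlinear contact spring: force = k_c * penetration**alpha in contact.
    k_c, alpha = 1.0e8, 1.5

    out = []
    for n in range(int(0.05 * fs)):           # simulate 50 ms
        penetration = y_h - y_m
        f_c = k_c * penetration ** alpha if penetration > 0.0 else 0.0

        # Semi-implicit Euler: update velocities, then positions.
        a_h = -f_c / m_h                      # contact pushes the hammer back
        a_m = f_c / m_m - (w0 / q) * v_m - w0 * w0 * y_m
        v_h += a_h * dt
        v_m += a_m * dt
        y_h += v_h * dt
        y_m += v_m * dt

        out.append(y_m)                       # modal displacement as raw "sound"

Even this toy model makes the control problem noted above tangible: the output depends on the hammer's mass and incoming velocity, quantities that must ultimately be supplied by a real or simulated performer's gesture.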

Most research has involved the interaction of a virtual percussion instrument with a real performer (Mäki-Patola 2005), and contributions that model the equivalent gestures by defining a synthetic performer are fairly new. These are either based on real-world...
