
Estimation of Parameters in Rule Systems for Expressive Rendering of Musical Performance
Patrick Zanon and Giovanni De Poli

Analyses of musical performances show that performers never strictly respect tempo, timing, or loudness score notations in a mechanical way. Even if they try to follow these indications literally, some changes are always introduced (Palmer 1997). These differences can be quantified by measuring the deviations of performance-related attributes from their notated values. Performances of the same piece vary with the kind of music, the instrument, and the musician (Repp 1992a). There are also implicit rules, related to different musical styles and epochs, that are handed down verbally and used in musical practice. Furthermore, musicians have their own performance styles and interpretations of musical structure, resulting in high degrees of deviation from the notation of the score.
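The basic measurement the paragraph describes can be sketched in a few lines: compare the performed onset times of notes against the nominal onsets implied by the score. The onset values below are made up for illustration only.

```python
# Quantifying expressive timing deviations: performed onsets (seconds)
# compared against nominal onsets derived from a mechanical reading of
# the score. All data here are hypothetical.

nominal_onsets = [0.0, 0.5, 1.0, 1.5, 2.0]        # mechanical rendition
performed_onsets = [0.0, 0.52, 1.05, 1.49, 2.08]  # hypothetical measurement

# Positive deviation = note played late relative to the score.
deviations = [p - n for p, n in zip(performed_onsets, nominal_onsets)]
print([round(d, 2) for d in deviations])  # → [0.0, 0.02, 0.05, -0.01, 0.08]
```

Profiles of such deviations, aggregated over many performances, are the raw material for the models discussed below.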

Repp analyzed many professional pianists' performances, measuring deviations in timing and articulation. His results (Repp 1990) show the presence of patterns of deviations related to musical structure. Moreover, performers introduce deviations to communicate their expressive intentions and emotions (Gabrielsson and Juslin 1996). Most studies of music performance use the word "expressiveness" to indicate the systematic presence of deviations from the musical notation as a communication means between musician and listener (Gabrielsson 1997).

The analysis of these systematic deviations has led to the formulation of several models that attempt to describe their structure, with the goal of explaining where, how, and why a performer modifies—sometimes unconsciously—what is indicated by the notation of the score. Notice that, although deviations are only the external surface of something deeper and often not directly accessible, they are quite easily measurable and thus widely used to develop computational models in scientific research and generative models for musical applications. However, the use of a score as reference has some drawbacks for interpreting how listeners judge expressiveness. Alternative approaches are intrinsic definitions of expression (expressive deviations defined in terms of the performance itself; see Gabrielsson 1974; Desain and Honing 1991) or non-structural approaches relating expression to motion, emotion, etc. (see Clarke 1995 for a general discussion).

Some models based on an analysis-by-measurement method have been proposed by Todd (1985, 1992, 1995), De Poli et al. (1998), and Clynes (1990). This method is based on the analysis of deviations measured in recorded human performances. The analysis attempts to recognize regularities in the deviation patterns and to describe them by means of numerical formulas.

An alternative to this method is that of performing controlled experiments (Palmer 1997). By manipulating one parameter in a performance (e.g., the instruction to play at a different tempo), the measurements reveal something of the underlying mechanisms (see www.nici.kun.nl/mmm/papers/NICI-position.pdf).

Another approach derives models described as a collection of rules using an analysis-by-synthesis method. The most important is the KTH rule system (Friberg et al. 1991; Sundberg 1993; Friberg 1995a; Friberg et al. 2000). Other rules were developed by De Poli, Irone, and Vidolin (1990). The rules describe quantitatively the deviations to be applied to a musical score to produce a more attractive and human-like performance than the mechanical one that results from a literal playing of the score. Every rule tries to predict (and to explain with musical or psychoacoustic principles) some deviations that a human performer is likely to insert. At first, rules are obtained on the basis of the indications of professional musicians; then the performances produced by applying the rules are evaluated by listeners, allowing further tuning and development of the rules. The rules can be grouped according to the purposes that they apparently serve in music communication. Differentiation rules appear to facilitate categorization of pitch and duration, whereas grouping rules appear to facilitate grouping of notes, at both micro- and macro-levels. As an example of such rules, consider the Duration Contrast rule: it shortens, and decreases the amplitude of, notes with durations between 30 and 600 msec, by an amount that depends on the duration according to a suitable function. The value computed by the rule is then weighted by a quantity parameter k.
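A minimal sketch of a Duration Contrast-style rule might look as follows. The triangular weighting function and the numeric ceilings (20 msec of shortening, 10% attenuation) are assumptions chosen for illustration; they are not the published KTH formula, which the text describes only as "a suitable function" scaled by the quantity parameter k.

```python
def duration_contrast(duration_ms, amplitude, k=1.0):
    """Apply a Duration Contrast-style rule to one note.

    Notes with nominal duration between 30 and 600 msec are shortened
    and attenuated; the overall effect is scaled by the quantity
    parameter k. Returns (new_duration_ms, new_amplitude).
    """
    if not (30 <= duration_ms <= 600):
        return duration_ms, amplitude  # rule does not apply outside the range
    # Illustrative triangular weighting: the effect peaks at 200 msec
    # and vanishes at the 30 and 600 msec boundaries.
    if duration_ms < 200:
        w = (duration_ms - 30) / (200 - 30)
    else:
        w = (600 - duration_ms) / (600 - 200)
    delta_dur = 20.0 * w * k   # msec of shortening (assumed ceiling)
    delta_amp = 0.10 * w * k   # fractional attenuation (assumed ceiling)
    return duration_ms - delta_dur, amplitude * (1.0 - delta_amp)

# A 200-msec note at full effect; a 1000-msec note is left untouched.
print(duration_contrast(200, 1.0))   # → (180.0, 0.9)
print(duration_contrast(1000, 1.0))  # → (1000, 1.0)
```

Setting k = 0 disables the rule entirely, while larger k exaggerates the contrast, which matches the role the text assigns to the quantity parameter.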

The machine learning of performance rules is another...
