Visualizing Expressive Performance in Tempo–Loudness Space
Jörg Langner and Werner Goebl

The past decades of performance research have yielded a large number of highly detailed studies analyzing various parameters of expressive music performance (see Palmer 1997 and Gabrielsson 1999 for overviews). Special attention has been given to expressive piano performance, because its expressive parameters are relatively few (timing, dynamics, and articulation, including pedaling) and comparatively easy to obtain. The majority of performance studies have concentrated exclusively on one of these parameters, most often expressive timing.

In everyday experience, we never listen to any of these parameters in isolation, as they are analyzed in performance research. Certainly, the listener's attention can sometimes be drawn more to one particular parameter (e.g., the forced stable tempo in a Prokofieff Toccata or the staccato–legato alternation in a Mozart Allegro), but generally the aesthetic impression of a performance results from an integrated perception of all performance parameters, and it is also influenced by other factors such as a performer's body movements and socio-cultural background. It can be presumed that the different performance parameters influence and depend on each other in various and intricate ways (for modeling-based approaches, see, e.g., Todd 1992 and Juslin, Friberg, and Bresin 2002). Novel research methods could help us analyze expressive music performances more holistically and thus tackle these questions.

Another problem of performance analysis is the enormous amount of information the researcher must deal with, even when investigating, for example, only the timing of a few bars of a single piece. In general, it remains unclear whether the measured expressive deviations are due to deliberate expressive strategies, musical structure, motor noise or imprecision of the performer, or even measurement error.

In the present article, we develop an integrated analysis technique in which tempo and loudness are processed and displayed simultaneously. Both the tempo and the loudness curve are smoothed with a window size corresponding ideally to the length of one bar. These two performance parameters are then displayed in a two-dimensional performance space on a computer screen: a dot moves in synchrony with the sound of the performance, and the trajectory of its tail traces geometric shapes that are intrinsically different for different performances. Such an animated display seems to be a useful visualization tool for performance research. The simultaneous display of tempo and loudness allows us to study interactions between these two parameters, both on their own and with respect to properties of the musical score.
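The smoothing and pairing of the two curves can be sketched as follows. This is a minimal illustration, not the authors' implementation: the moving-average kernel, the bar length of four beats, and all data values are assumptions made for the example.

```python
import numpy as np

def smooth(series, window):
    """Moving-average smoothing over `window` samples (a simple stand-in
    for the bar-length smoothing described in the text)."""
    kernel = np.ones(window) / window
    return np.convolve(series, kernel, mode="same")

# Hypothetical per-beat measurements: local tempo (BPM) and loudness.
tempo = np.array([118.0, 122.0, 120.0, 116.0, 112.0, 118.0, 124.0, 120.0])
loudness = np.array([10.0, 12.0, 13.0, 11.0, 9.0, 10.0, 12.0, 11.0])

# Smooth both curves over one (assumed 4-beat) bar, then pair them up:
# each (tempo, loudness) point is one step of the trajectory that the
# animated dot traces in tempo-loudness space.
trajectory = list(zip(smooth(tempo, 4), smooth(loudness, 4)))
```

Plotting (or animating) `trajectory` as a connected path then yields the kind of geometric shape the display is built around; the edge points are distorted by the convolution's boundary handling, which a real implementation would treat more carefully.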

The behavior of the algorithm and the insights provided by this type of display are illustrated with performances of two musical excerpts by Chopin and Schubert. In the first case study, two expert performances and a professional recording by Maurizio Pollini are compared; in the second, an algorithmic performance generated by a basic performance model is contrasted with Alfred Brendel's performance of the same excerpt. These two excerpts were chosen because articulation remains constant (legato) throughout, so the analysis can concentrate on tempo and dynamics.

Method

Our visualization requires two main processing steps. The first is data acquisition, either from performances made on special recording instruments such as MIDI grand pianos or directly from conventional audio recordings (i.e., commercial compact discs). Second, the gathered data must be reduced (smoothed) over a certain time window corresponding to the desired granularity of display.

Timing Data

Timing information from expressive performances in MIDI format has the advantage that each onset is clearly defined, although the precision of some computer-monitored pianos is not much higher than that of timing data obtained from audio recordings (for a Yamaha Disklavier, see Goebl and Bresin 2001). Still, each performed onset must be matched to a symbolic score of the piece so that the onsets at the track level can be determined automatically (i.e., score-performance matching; see Heijink et al. 2000 and Widmer 2001). The track level is a unit of score time (e.g., quarter note, eighth note) that defines the resolution at which tempo changes are measured. The track level is usually faster than...
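Once onsets are matched to the score at the track level, a local tempo value can be derived from each inter-onset interval. The sketch below is an illustrative reconstruction of that step, not the authors' code; the function name, the uniform onset times, and the choice of eighth-note track level are assumptions.

```python
def local_tempo(onsets_s, beats_per_event):
    """Local tempo (quarter-note BPM) from successive matched onset times.

    onsets_s: performed onset times in seconds, one per score event at
        the track level (e.g., every eighth note).
    beats_per_event: score duration of one track-level event in
        quarter-note beats (0.5 for eighth notes).
    """
    tempos = []
    for t0, t1 in zip(onsets_s, onsets_s[1:]):
        ioi = t1 - t0  # inter-onset interval in seconds
        tempos.append(60.0 * beats_per_event / ioi)
    return tempos

# Eighth notes performed every 0.25 s correspond to 120 quarter-note BPM.
print(local_tempo([0.0, 0.25, 0.5, 0.75], 0.5))  # → [120.0, 120.0, 120.0]
```

In practice the onset times would come from the score-performance matching step, and the resulting tempo series would then be smoothed as described above before display.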
