
  • Changing Musical Emotion: A Computational Rule System for Modifying Score and Performance
  • Steven R. Livingstone, Ralf Muhlberger, Andrew R. Brown, and William F. Thompson

Composers and performers communicate emotional intentions through the control of basic musical features such as pitch, loudness, and articulation. The extent to which emotion can be controlled by software through the systematic manipulation of these features has not been fully examined. To address this, we present CMERS, a Computational Music Emotion Rule System for the real-time control of musical emotion, which modifies features at both the score level and the performance level. In Experiment 1, 20 participants continuously rated the perceived emotion of works, each modified to express happy, sad, angry, tender, and normal. The intended emotion was correctly identified at a rate of 78%, with valence and arousal shifted significantly regardless of the works' original emotions. Existing systems developed for expressive performance, such as Director Musices (DM), focus on modifying features of performance. To study emotion more broadly, CMERS modifies features of both score and performance.
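To make concrete what a rule system operating on both levels might look like, here is a minimal Python sketch: a target emotion selects a set of score-level (mode) and performance-level (tempo, loudness, articulation) adjustments that are applied to note events. The event representation, the rule values, and the names NoteEvent, to_minor, and apply_rules are illustrative assumptions for this sketch, not CMERS's published rules.

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class NoteEvent:
        pitch: int           # MIDI note number
        velocity: int        # MIDI velocity (loudness), 0-127
        duration: float      # nominal duration in seconds at the base tempo
        articulation: float  # fraction of the duration actually sounded (1.0 = legato)

    # Hypothetical rule table: per-emotion adjustments to a few score and
    # performance features (the numbers are invented for illustration).
    RULES = {
        "happy":  {"mode": "major", "tempo_scale": 1.15, "velocity_shift": +10, "articulation": 0.80},
        "sad":    {"mode": "minor", "tempo_scale": 0.80, "velocity_shift": -15, "articulation": 1.00},
        "angry":  {"mode": "minor", "tempo_scale": 1.20, "velocity_shift": +20, "articulation": 0.60},
        "tender": {"mode": "major", "tempo_scale": 0.85, "velocity_shift": -10, "articulation": 0.95},
    }

    def to_minor(notes, tonic=60):
        """Crude score-level mode rule: flatten the major 3rd, 6th, and 7th scale degrees."""
        flatten = {4, 9, 11}  # semitone offsets above the tonic
        return [replace(n, pitch=n.pitch - 1) if (n.pitch - tonic) % 12 in flatten else n
                for n in notes]

    def apply_rules(notes, emotion, tonic=60):
        """Apply the score- and performance-level rules for one target emotion."""
        rule = RULES[emotion]
        if rule["mode"] == "minor":
            notes = to_minor(notes, tonic)  # score-level change
        return [replace(n,
                        velocity=max(1, min(127, n.velocity + rule["velocity_shift"])),
                        duration=n.duration / rule["tempo_scale"],  # faster tempo -> shorter notes
                        articulation=rule["articulation"])          # performance-level changes
                for n in notes]

    if __name__ == "__main__":
        melody = [NoteEvent(60, 64, 0.5, 0.9), NoteEvent(64, 64, 0.5, 0.9), NoteEvent(67, 64, 1.0, 0.9)]
        for event in apply_rules(melody, "sad"):
            print(event)

In this sketch the mode rule rewrites the notated pitches (a score-level feature), while tempo, loudness, and articulation are altered only in how the notes are rendered (performance-level features); the separation mirrors the score/performance distinction described above.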

In Experiment 2, 18 participants rated musical works modified by CMERS and by DM to express the same five emotions. CMERS's intended emotion was correctly identified at a rate of 71%, DM's at 49%. CMERS achieved significant shifts in both valence and arousal; DM did so in arousal only. These results suggest that features of the score are important for controlling valence. The effects of musical training on the accuracy of emotion identification are also discussed.

Background

[E]verything in the nature of musical emotion that the musician conveys to the listener can be recorded, measured, repeated, and controlled for experimental purposes; and . . . thus we have at hand an approach which is extraordinarily promising for the scientific study of the expression of musical emotion

(Seashore 1923, p. 325).

Empirical studies of emotion in music constitute one of the most practical resources for the development of a rule-based system for controlling musical emotions. For over a century, music researchers have examined the correlations between specific musical features and emotions (Gabrielsson 2003). One well-known example in the Western tradition is the modes' strong association with valence: the major mode is associated with happiness, and the minor mode with sadness (Hevner 1935; Kastner and Crowder 1990). Although many exceptions to this rule exist in the Western music literature, the connection may have a cross-cultural basis. Recently, Fritz et al. (2009) showed that members of a remote African ethnic group who had never been exposed to Western music exhibited this association.

In the 1990s, the manipulation of musical features to express different basic emotions received considerable interest. In a study of composition, Thompson and Robitaille (1992) asked musicians to compose short melodies that conveyed six emotions: joy, sorrow, excitement, dullness, anger, and peace. The music was performed in a relatively deadpan fashion by a computer sequencer. Results showed that all emotions except anger were accurately conveyed to listeners. In a similar study of performance, Gabrielsson (1994, 1995) asked performers to play several well-known tunes, each with six different emotional intentions. Performers were found to vary the works' overall tempo, dynamics, articulation, and vibrato according to the emotion being expressed. Subsequent studies of performance found that both musicians and non-musicians could correctly identify the basic emotions being expressed (Juslin 1997a, 1997b).

Music may use an emotional "code" for communication. In this model, emotions are first encoded by composers in the notated score through the variation of musical features. These notations are then interpreted and re-encoded by performers in the acoustic signal using similar variations. Listeners then decode these intentions as a weighted sum of the two (Kendall and Carterette 1990; Juslin and Laukka 2004; Livingstone and Thompson 2009). The code is common to performers and listeners, with similar acoustic features used when encoding and decoding emotional intentions (Juslin 1997c). It appears to function in a manner similar to that observed in speech and facial expression (Ekman 1973; Scherer 1986). Most recently, evidence has suggested that facial expressions in emotional singing may also use this code: Livingstone, Thompson, and Russo (2009) reported that singers' emotional intentions could be identified from specific facial features. Speech and music share many of the same features...
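As a rough illustration of the weighted-sum idea in the model above (and only that; the weights and cue values here are invented for the example), the listener's decoding step can be sketched as follows.

    def decode_emotion(score_cues, performance_cues, w_score=0.5, w_perf=0.5):
        """Combine (valence, arousal) cues from score and performance into one percept.

        The weights and cue values are illustrative; the model only states that
        listeners decode a weighted sum of composer- and performer-encoded cues.
        """
        valence = w_score * score_cues[0] + w_perf * performance_cues[0]
        arousal = w_score * score_cues[1] + w_perf * performance_cues[1]
        return valence, arousal

    # e.g., a major-mode score (positive valence) played with low energy:
    print(decode_emotion(score_cues=(0.6, 0.1), performance_cues=(0.2, -0.4)))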
