  • Generative Musical Tension Modeling and Its Application to Dynamic Sonification
  • Ryan Nikolaidis, Bruce Walker, and Gil Weinberg

This article presents a novel implementation of a real-time, generative model of musical tension. We contextualize this design in an application called the Accessible Aquarium Project, which aims to sonify visually dynamic experiences through generative music. Accordingly, our algorithm manipulates musical elements in real time to represent visual information continuously and dynamically. To generate music effectively, the model combines low-level elements (such as pitch height, note density, and panning) with high-level features (such as melodic attraction) and aspects of musical tension (such as harmonic expectancy).

We begin with the goals and challenges addressed throughout the project, and continue by describing the project’s contribution in, and comparison to, related work. The article then discusses how the project’s generative features direct the manipulation of musical tension. We then describe our technical choices, such as the use of Fred Lerdahl’s formulas for the analysis of tension in music (Lerdahl 2001) as a model for generative tension control, and our implementation of these ideas. The article demonstrates the correlation between our generative engine and cognitive theory, and details the incorporation of input variables as facilitators of low- and high-level mappings of visual information. We conclude with a description of a user study, as well as a self-evaluation of our work, and discuss prospective future work, including improvements to our current modeling method and the development of additional high-level percepts.

Previous Work

Since its origins in the early 1950s, computer-based generative music has branched in several directions. The probabilistic generative approach we take in this project can be related to the pioneering work of Lejaren Hiller and Leonard Isaacson, who premiered their algorithmic composition Illiac Suite, for string quartet, in 1957 (Belzer, Holzman, and Kent 1981). One of the techniques that Hiller and Isaacson used was the Monte Carlo method: after randomly generating a note, the algorithm tested it against a set of compositional rules. If the note passed the test, the algorithm accepted it and began generating the next note; if it failed, the algorithm discarded it and generated a new note, which was again tested against the rules. Although this approach produced melodic and even contrapuntal examples that followed certain voice-leading principles, the algorithm had no higher-level model for the structure of the piece.
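The generate-and-test loop described above can be sketched in a few lines. The rule set here (diatonic notes, no leap larger than a major sixth) is an illustrative assumption, not a reconstruction of the Illiac Suite's actual rules:

```python
import random

C_MAJOR = {0, 2, 4, 5, 7, 9, 11}  # pitch classes of the C major scale

def passes_rules(melody, candidate):
    """Accept a note only if it is diatonic and within a major sixth of its predecessor."""
    if candidate % 12 not in C_MAJOR:
        return False
    if melody and abs(candidate - melody[-1]) > 9:  # reject leaps beyond a major sixth
        return False
    return True

def generate_melody(length, low=60, high=84, seed=None):
    """Monte Carlo generate-and-test: propose random MIDI notes, keep only rule-passing ones."""
    rng = random.Random(seed)
    melody = []
    while len(melody) < length:
        candidate = rng.randint(low, high)   # randomly generate a note
        if passes_rules(melody, candidate):  # test it against the compositional rules
            melody.append(candidate)         # accept; otherwise discard and retry
    return melody

melody = generate_melody(8, seed=1)
```

As in Hiller and Isaacson's method, the rules constrain only local, note-to-note relationships; nothing in the loop models the large-scale structure of the piece.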

Our approach is also informed by David Cope’s Experiments in Musical Intelligence, which sought to capture both high- and low-level features of compositions in order to generate stylistically authentic reinventions of music. His early work in this field, in the 1980s, revolved around defining a set of heuristics for particular genres of music and developing algorithms to produce music that recreates these styles. By Cope’s own account, these early experiments resulted in “vanilla” music that technically followed predetermined rules yet lacked “musical energy” (Cope 1991). His subsequent work built on this research with two new premises: every composition had a unique set of rules, and an algorithm determined this set of rules autonomously. This contrasts with his previous implementation, in which a human specified the rule set. This work ultimately relies on pattern recognition for analysis and recombinancy for synthesis, in an effort to create new musical material from pre-existing compositions. Although this implementation produces effective reconstructions true to the form of the original composition, it does not have the ability to generate music in real time (Cope 1991).
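As a toy illustration of recombinancy, the sketch below indexes which notes follow which in a set of source melodies and recombines those transitions into new material. This first-order Markov recombination is a deliberate simplification; Cope's system uses far richer pattern matching over multi-note signatures:

```python
import random
from collections import defaultdict

def build_transitions(melodies):
    """Analysis step: record which notes followed each note in the source melodies."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def recombine(melodies, length, seed=None):
    """Synthesis step: chain learned transitions into a new melody."""
    rng = random.Random(seed)
    table = build_transitions(melodies)
    note = rng.choice([m[0] for m in melodies])  # start from a source opening note
    out = [note]
    while len(out) < length:
        followers = table.get(note)
        if not followers:  # dead end: restart from a source opening note
            note = rng.choice([m[0] for m in melodies])
        else:
            note = rng.choice(followers)
        out.append(note)
    return out

new_melody = recombine([[60, 62, 64, 62, 60], [64, 65, 67, 65, 64]], 8, seed=3)
```

Every note in the output is drawn from the source material, so the result stays within the "style" of the corpus, mirroring (in miniature) why recombinant synthesis remains true to its originals.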

Belinda Thom and François Pachet each developed software that addressed the challenges of real-time generative algorithms with authentic musicality. In 2001, Thom completed the first generation of Band-Out-of-the-Box (BoB; Thom 2001). Her work relies on two models for improvisational learning. First, with previous knowledge of the work’s harmonic structure, an offline algorithm listens to solo improvisations and archives probabilistic information into histograms. Then, in real time, BoB analyzes a human player’s solo improvisation for modal content. Based on this content and the information learned offline, BoB then generates its own solo improvisation. From here, in the...
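BoB's two-stage design can be caricatured as an offline pass that archives probabilistic information into histograms and an online pass that generates a response from what was learned. The pitch-class histogram and the sampling scheme below are illustrative assumptions, not Thom's actual algorithm:

```python
import random
from collections import Counter

def learn_offline(solos):
    """Offline: build a normalized pitch-class histogram from a corpus of solos (MIDI note lists)."""
    counts = Counter(note % 12 for solo in solos for note in solo)
    total = sum(counts.values())
    return {pc: n / total for pc, n in counts.items()}

def respond_online(histogram, length, octave=5, seed=None):
    """Online: generate a reply whose pitch-class content mirrors the learned histogram."""
    rng = random.Random(seed)
    pcs = list(histogram)
    weights = [histogram[pc] for pc in pcs]
    return [12 * octave + rng.choices(pcs, weights=weights)[0] for _ in range(length)]

hist = learn_offline([[60, 62, 64, 67, 69], [62, 64, 65, 67, 72]])
reply = respond_online(hist, 8, seed=2)
```

The split matters for real-time use: the expensive statistical learning happens offline, so the online generation step reduces to cheap weighted sampling.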
