
Harnessing the Enactive Knowledge of Musicians to Allow the Real-Time Performance of Correlated Music and Computer Graphics
Ilias Bergstrom and R. Beau Lotto
Keywords

Real-time performance, enactive interfaces, visual music, new media, digital musical instruments, mapping, colour organs, programming as art

Artists and scientists have a perpetual interest in the relationship between music and art. As technology has progressed, so too have the tools that allow the practical exploration of this relationship. Today, artists in many disparate fields occupy themselves with producing animated visual art that is correlated with music (called 'visual music'). Despite this interest and advancing technology, there is still no tool that allows visual music to be performed in real time with a significant level of control. Here we propose a system that would enable a group or individual to perform live 'visual music' using the musical instrument(s) as the primary source of control information for the graphics. The hypothesis driving this choice of interface is that, by connecting musical control data (i.e. scales, notes, chords, tempo, force, sound timbre and volume) to graphical control information (a process called mapping [6]), a performer will be able to more readily transfer his/her enactive knowledge [1] of the instrument to creating visual music. The term enactive knowledge refers to knowledge that can only be acquired and manifested through action. Examples of human activities that rely heavily on enactive knowledge include dance, painting, sports and performing music. If our hypothesis is correct, this will enable a mode of musical/visual performance different from current practice, one that is likely to enhance the experience of both the performer(s) and the audience.
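To make the mapping idea concrete, the short Processing [4] sketch below is a minimal, hypothetical illustration rather than the authors' prototype: one piece of musical control data (pitch) and one expressive parameter (striking force) are mapped onto two graphical parameters (hue and size). Computer keyboard presses stand in for note-on events from a real instrument, so the specific ranges and mappings are assumptions made purely for illustration.

void setup() {
  size(640, 480);
  colorMode(HSB, 127);        // reuse the 0-127 MIDI value range for colour
  background(0);
  noStroke();
}

void draw() {
  fill(0, 0, 0, 8);           // fade earlier frames so shapes decay like notes
  rect(0, 0, width, height);
}

void keyPressed() {
  // Treat each key press as a note-on event: the key's character code stands
  // in for pitch, and a random value stands in for how hard the note was struck.
  int pitch    = constrain((int) key, 36, 96);
  int velocity = int(random(40, 127));

  float noteHue  = map(pitch, 36, 96, 0, 127);       // pitch -> hue
  float noteSize = map(velocity, 40, 127, 20, 200);  // force -> size

  fill(noteHue, 100, 120);
  ellipse(random(width), random(height), noteSize, noteSize);
}

In a full system, the same pattern would simply replace keyPressed() with a handler for incoming instrument data (e.g. MIDI note-on messages), leaving the mapping layer untouched.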

The outcome of this process will not simply be music visualization, with the graphics being subordinate. Rather, we believe that images controlled directly by the same physical actions that generate the music will feed back to shape what the performer actually plays. Furthermore, many people will be able to collaboratively perform a complex musical and visual experience live – in a manner that live musicians are already used to, because each performer will be influenced by the performance of the others in the group.


Fig. 1.

Sequence of images produced using the prototype. (© Ilias Bergstrom)

Background

The immediacy with which music can communicate emotion has been envied by many visual artists, most notably Wassily Kandinsky [2], who set out to recreate it in painting. The first known machine for exploring the relationship between music and visual art was Louis Bertrand Castel's "Clavecin oculaire" (1734); Castel implemented a modified version of the note-to-colour correspondence proposed by Isaac Newton [2]. Many such systems have since followed, made either to accompany music with colour or to provide a form of visual music, named "Lumia". The term Lumia was coined by Thomas Wilfred, developer of the "Clavilux" colour organ (1922) [2], who, rejecting the notion of an absolute correspondence between sound and image, concentrated on generating visual compositions that were meant to be viewed alone, i.e. without musical accompaniment.

Though lacking an entirely rigid definition, what qualifies as Visual Music is sufficiently well described by Brian Evans [3] as: 'time-based visual imagery that establishes a temporal architecture in a way similar to absolute music. It is typically non-narrative and non-representational (although it need not be either). Visual Music can be accompanied by sound but can also be silent'.

In modern times, analogue video synthesizers, laser shows and, more recently, computer graphics have all been employed to accompany music. For instance, at live music concerts and at clubs where a DJ plays music, live graphics are often performed by a VJ (Visual Jockey), who mixes prerecorded video clips together, altering the playback parameters of the individual clips and processing them with real-time video effects. A further advance has been the recent development of computer programming environments, such as Processing [4] and Cycling74's Max/MSP/Jitter combination, that allow non-expert programmers to describe and perform real-time procedural graphics; as such, their use has begun making its way out of the avant-garde and...
