In lieu of an abstract, here is a brief excerpt of the content:

  • Creating Visual Music in Jitter: Approaches and Techniques
  • Randy Jones and Ben Nevile

"Visual music" is a term used to refer to a broad range of artistic practices, far-flung temporally and geographically yet united by a common idea: that visual art can aspire to the dynamic and nonobjective qualities of music (Mattis 2005). From paintings to films—and now to computer programs—the manifestations of visual music have evolved along with the technology available to artists. Today's interactive, computer-based tools offer a variety of possibilities for relating the worlds of sound and image; as such, they demand new conceptual approaches as well as a new level of technical competence on the part of the artist.

Jitter, a software package first made available in 2002 by Cycling '74, enables the manipulation of multidimensional data in the context of the Max programming environment. An image can be conveniently represented by a multidimensional data matrix, and indeed Jitter has seen widespread adoption as a format for manipulating video, in both non-real-time production and improvisational contexts. However, the general nature of the Jitter architecture also makes it well suited to specifying interrelationships among different types of media data, including audio, particle systems, and the geometrical representations of three-dimensional scenes.
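As a concrete (though non-Jitter) illustration of this representation, the short NumPy sketch below builds a video-style frame as a two-dimensional matrix of four-plane, 8-bit cells, roughly the data layout of a [jit.matrix 4 char 320 240] object in a Max patch. The dimensions and gradient fills are illustrative assumptions, not taken from the article.

import numpy as np

# A video frame as a multidimensional array of 8-bit cells:
# height x width x 4 planes (alpha, red, green, blue).
width, height = 320, 240
frame = np.zeros((height, width, 4), dtype=np.uint8)

frame[..., 0] = 255                                                   # alpha: fully opaque
frame[..., 1] = np.linspace(0, 255, width, dtype=np.uint8)            # red ramp, left to right
frame[..., 3] = np.linspace(0, 255, height, dtype=np.uint8)[:, None]  # blue ramp, top to bottom

# Because the image is just a matrix, an image operation is just matrix
# arithmetic: here, inverting the green plane.
frame[..., 2] = 255 - frame[..., 2]

print(frame.shape)  # (240, 320, 4)

In Jitter itself this bookkeeping is handled by matrix objects and their attributes rather than by explicit array code, but the underlying picture of planes of numeric cells addressed by dimension is the same.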

This article is intended to serve as a starting point and tutorial for the computer musician interested in exploring the world of visual music with Jitter. To understand what follows, no prior experience with Jitter is necessary, but we do assume a familiarity with the Max/MSP environment. We begin by briefly discussing strategies for the mapping of sound to image; influences here include culturally learned and physiologically inherent cross-modal associations, different domains of association, and musical style. We then introduce Jitter, the format of image matrices, and the software's capabilities for drawing hardware-accelerated graphics using the OpenGL standard. This is followed by a survey of techniques for acquiring event and signal data from musical processes. Finally, a thorough treatment of Jitter's variable frame-rate architecture and the Max/MSP/Jitter threading implementation is presented, because a good understanding of these mechanisms is critical when designing a visualization and/or sonification network.

Visualizing Through Mappings

Let us consider the compositional problem of creating a dynamic visual counterpart to a given musical work. Jitter's dataflow paradigm, inherited from the Max environment, lends itself to use in composition as a means of defining relationships between changing quantities. Mappings are transformations used to convert input parameters to outputs in a different domain. Bevilacqua et al. (2005) present an overview of recent work on mappings in the context of gestural control. Classes of mappings based on the number of inputs and outputs include many-to-one, one-to-many, and one-to-one. In discussing visual music, our main concern will be the choice of the parameters themselves. Given the large number of parameters that can be generated from musical data, and by which images can be specified, even the number of possible one-to-one mappings is too large to be explored fruitfully without some guiding principles. Significant works of visual music from artists including John Whitney, Norman McLaren, and Oskar Fischinger have shown the possibility of composing with sound and image to create a whole that is larger than the sum of the parts (iotaCenter 2000). By analyzing the mappings that underlie such works and considering some results from the study of human audiovisual perception, we can point to several avenues for exploration.
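To make these mapping classes concrete, here is a small, hypothetical Python sketch, standing in for what would normally be a Max/Jitter patch, that maps note-event parameters to visual parameters. The particular pairings (pitch to hue, velocity to brightness and size) are illustrative assumptions rather than mappings drawn from the works cited.

# One-to-one: a single musical parameter drives a single visual parameter.
def pitch_to_hue(pitch):
    """Map a MIDI pitch (0-127) onto a position on the color wheel (0.0-1.0) by pitch class."""
    return (pitch % 12) / 12.0

# One-to-many: a single musical parameter fans out to several visual parameters.
def velocity_to_brightness_and_size(velocity):
    v = velocity / 127.0
    return v, 10 + 90 * v          # brightness (0.0-1.0), radius in pixels

# Many-to-one: several musical parameters collapse into one visual parameter.
def energy(pitch, velocity, duration_s):
    return (pitch / 127.0) * (velocity / 127.0) * min(duration_s, 1.0)

# A single note event can feed all three kinds of mapping at once.
hue = pitch_to_hue(64)
brightness, size = velocity_to_brightness_and_size(100)
glow = energy(64, 100, 0.5)
print(hue, brightness, size, glow)

Even at this toy scale the compositional question remains visible: the mappings are trivial to write, but deciding which parameters to connect, and why, is the real problem.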

Synaesthetic Mappings

Synaesthesia is a psychological term for a mixing of the senses that occurs in certain individuals: a person perceives something via one sense modality and simultaneously experiences it through an additional sense. Though synaesthesia does occur between sounds and colors, its most common form combines graphemes (written characters) with colors. Grapheme/color synaesthetes perceive each letter with an accompanying color, linked by a consistent internal logic; the letter "T," for example, might always appear green to such a person, while "O" is always blue. Based on psychophysical experiments, Ramachandran and Hubbard (2001) have demonstrated that this form of synaesthesia is a true sensory phenomenon...

Additional Information

ISSN: 1531-5169
Print ISSN: 0148-9267
Pages: pp. 55-70