
Collaborative Creation, Live Performance and Flock
Jason Freeman

Photos, sound and video examples related to this article are available at <www.jasonfreeman.net/flock>.

In Flock (2007), my recent full-evening work for saxophone quartet, dancers, electronic sound, video and audience participation, I attempt to reconcile the growing cultural shift toward collaborative models of content creation with the one-to-many model of music creation and dissemination that has traditionally dominated live performance. We attend live concerts, in part, because we want to participate in a unique, spontaneous musical experience and to share that experience with others. Many such concerts, however, seem more concerned with delivering a consistent product than with creating music in the moment. For some artists, the biggest risk is that their lip-synching will be discovered.

Flock uses novel computer vision and real-time notation systems to delay content creation until the moment of each performance, so that the music can reflect the creative activities of each show's performers and audience members. Music notation, electronic sound and video animation are all generated in real time based on the location of musicians, dancers and audience members as they stand up, move around and interact with each other in accordance with simple textual and visual instructions.

Computer vision software, developed by my collaborator Mark Godfrey, analyzes images from an overhead video camera to calculate the location data. After pre-processing and lens distortion correction, the software calculates an (x, y) point for each participant, using blob detection for the audience members and dancers and a more sophisticated particle filter [1] to uniquely identify each saxophonist. Each participant wears a lighted hat to facilitate efficient and reliable tracking.
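To make the tracking step concrete, a stripped-down version of such a pipeline might look like the following Python/OpenCV sketch. It is an illustration only, not the Flock software itself: the function name, threshold and blob-size values and calibration inputs are assumptions, and the particle filter that distinguishes the saxophonists is omitted.

# Minimal sketch of overhead blob tracking (illustrative values throughout).
import cv2

def track_participants(frame, camera_matrix, dist_coeffs):
    """Return (x, y) image coordinates for each brightly lit hat in a frame."""
    # Correct lens distortion from the wide-angle overhead camera;
    # camera_matrix and dist_coeffs come from a prior calibration step.
    undistorted = cv2.undistort(frame, camera_matrix, dist_coeffs)

    # The lighted hats appear as bright spots against a darker floor,
    # so a simple intensity threshold isolates candidate blobs.
    gray = cv2.cvtColor(undistorted, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)

    # Blob detection yields one keypoint per participant.
    params = cv2.SimpleBlobDetector_Params()
    params.filterByColor = True
    params.blobColor = 255          # look for bright blobs
    params.filterByArea = True
    params.minArea = 20             # ignore specular noise
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(mask)

    return [(kp.pt[0], kp.pt[1]) for kp in keypoints]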

My own custom software then generates music notation for each saxophonist based on the location data; that notation is sent wirelessly to a PDA mounted on each player's instrument. The notation (Fig. 2) sometimes displays music on conventional staves but often utilizes graphical contours, along with pitch labels, dynamics and articulations, to guide the musicians' improvisation.
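The transport between the notation software and the PDAs is not specified here; a minimal sketch, assuming a simple UDP/JSON scheme with hypothetical addresses and event fields, could look like this:

# Minimal sketch of sending per-player notation data over the wireless
# network; the actual Flock protocol and PDA client are not reproduced.
import json
import socket
import time

PDA_ADDRESSES = {                    # hypothetical addresses for the four PDAs
    "soprano":  ("192.168.1.11", 9000),
    "alto":     ("192.168.1.12", 9000),
    "tenor":    ("192.168.1.13", 9000),
    "baritone": ("192.168.1.14", 9000),
}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def send_notation(player, events):
    """Send a list of notation events (pitch, onset, dynamic, ...) to one PDA."""
    packet = {"player": player, "time": time.time(), "events": events}
    sock.sendto(json.dumps(packet).encode("utf-8"), PDA_ADDRESSES[player])

# Example: one notation event for the alto player.
send_notation("alto", [{"pitch": "Bb4", "onset": 0.0, "dynamic": "mp"}])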


Fig. 2. Jason Freeman, four styles of real-time music notation generated by Flock's software. (Drawing © J. Freeman) The musician plays the darker notes; the lighter items show music played by the other saxophonists. The vertical bar shows measure position and maintains time synchronization among the players.

I employ a variety of algorithms to generate the notation. Sometimes, the coordinates of each point simply map to measure position (x) and pitch (y). At other times, each saxophonist serves as the center of a polar coordinate system, and each point within range is mapped to a pitch (radius) and measure position (angle). Often, participants create motion trails on the notation as they move over time. Dozens of other algorithmic parameters control everything from dynamics and articulations to pitch-set quantizations and point clustering. During the performance, a graphical interface enables me to step through preset structural changes and to tweak additional parameters in response to the unique dynamics of each show. The music played by the saxophonists ranges from pointillistic bursts and slowly changing drones to rhythmically dense textures full of sudden register shifts, undulating arpeggios and multiphonics.
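As an illustration of the two mappings just described, the following Python sketch computes measure position and pitch from normalized coordinates. The pitch range, measure length and out-of-range behavior are assumptions; Flock's quantization, clustering and other parameters are omitted.

# Minimal sketch of the Cartesian and polar coordinate-to-notation mappings,
# assuming normalized (x, y) positions in [0, 1] and a chromatic pitch range.
import math

PITCH_RANGE = (46, 82)   # illustrative MIDI range for the saxophone parts

def cartesian_mapping(x, y, beats_per_measure=4):
    """Map x to measure position and y to pitch."""
    beat = x * beats_per_measure
    pitch = round(PITCH_RANGE[0] + y * (PITCH_RANGE[1] - PITCH_RANGE[0]))
    return beat, pitch

def polar_mapping(px, py, sax_x, sax_y, max_radius=0.5, beats_per_measure=4):
    """Treat the saxophonist as the origin: radius -> pitch, angle -> measure position."""
    dx, dy = px - sax_x, py - sax_y
    radius = math.hypot(dx, dy)
    if radius > max_radius:
        return None              # point is out of range for this player
    angle = math.atan2(dy, dx) % (2 * math.pi)
    beat = (angle / (2 * math.pi)) * beats_per_measure
    pitch = round(PITCH_RANGE[0] + (radius / max_radius) * (PITCH_RANGE[1] - PITCH_RANGE[0]))
    return beat, pitch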

The position data also drives the generation of a real-time video animation projected onto the four walls of the performance space. The video, developed by my collaborator Liubo Borissov, shows a virtual representation of the position data in a three-dimensional space [2]. The video also highlights the music as performers play, and it visually connects each musician to the participants who generate their notation.

In several sections of the performance, position data also generates electronic sound. Each audience member or cluster activates a single musical event, exciting a physical model of a plucked string or struck percussion instrument, or resonating a partial of a spectral sound model. Small position variations, as well as parameters such as cluster size and velocity, subtly change the timbre of each sound; larger position changes affect the sound's distribution to the eight-channel speaker system.
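A minimal sketch of one such event might use Karplus-Strong synthesis as a stand-in for the plucked-string physical model and a simple stereo pan in place of the eight-channel distribution; the pitch and damping mappings below are purely illustrative and not those of Flock.

# Minimal sketch: a position-driven plucked-string event (Karplus-Strong).
import numpy as np

SR = 44100

def plucked_string(freq, duration=1.0, damping=0.996):
    """Karplus-Strong synthesis: a noise burst fed through a filtered delay line."""
    n_samples = int(SR * duration)
    delay = int(SR / freq)
    buf = np.random.uniform(-1, 1, delay)      # initial excitation
    out = np.zeros(n_samples)
    for i in range(n_samples):
        out[i] = buf[i % delay]
        # Averaging adjacent samples lowpasses the string on each pass, damping it.
        buf[i % delay] = damping * 0.5 * (buf[i % delay] + buf[(i + 1) % delay])
    return out

def audience_event(x, y, cluster_size=1):
    """Map a participant's normalized position to pitch and a stereo pan
    (an eight-channel version would spread the gains across more speakers)."""
    freq = 110 * 2 ** (y * 3)                              # y sets pitch over three octaves
    damping = 0.990 + 0.009 * min(cluster_size, 10) / 10   # larger clusters ring longer
    mono = plucked_string(freq, damping=damping)
    left, right = mono * (1 - x), mono * x                 # x pans the sound
    return np.stack([left, right], axis=1)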

Flock premiered in December 2007 at Carnival Center for the Performing Arts in Miami. Audiences, musicians and dancers all devised...
