Collaborative Textual Improvisation in a Laptop Ensemble

Jason Freeman and Akito Van Troyer

For us, text-based performance interfaces, such as those used in live coding systems (Collins et al. 2003), are a fascinating fusion of composition and improvisation. Textual performance interfaces can offer a precise and concise means to define, manipulate, and transform motives, gestures, and processes in real time and at multiple hierarchical layers. They can also render musical thinking visible to the audience by projecting the text as it is written.

We are particularly interested in textual performance interfaces in laptop ensemble contexts. We believe that the dynamics of ensemble performance can lead laptop musicians in new creative directions, pushing them towards more real-time creativity and combining the diverse skills, ideas, and schemas of the ensemble's members to create unexpected, novel music in performance. But when laptop performance interfaces move beyond simple one-to-one mappings, they present unique ensemble challenges, particularly in terms of the synchronization and sharing of musical material. Specialized improvisation environments for specific performances (e.g., Trueman 2008) or ensembles (e.g., Rebelo and Renaud 2006) can help groups to negotiate these challenges and structure their collaboration: The tools can create powerful new channels of networked communication among ensemble members to supplement aural and visual interaction. Textual performance environments are uniquely positioned in this regard: They potentially offer greater efficiency and flexibility in performance, and because text is already a dominant networked communication medium, they can draw from an abundance of interaction models in areas such as instant messaging, collaborative document editing, and the real-time Web.

Although many text-based performance environments do support collaboration, most current systems are challenging to use with ensembles of more than a few musicians. As a result, we created a new textual performance environment for laptop ensemble named LOLC. (LOLC was initially an acronym for Laptop Orchestra Live Coding, though we now consider the applicability of the term "live coding" to be debatable.) From its inception, we designed LOLC to focus on ensemble-based collaboration. Though inspired by live coding systems, LOLC is not itself a programming language: It is neither Turing-complete nor does it allow its users to define new computational processes. Instead, its design, in which musicians create rhythmic patterns based on sound files and share and transform those patterns over a local network, facilitates an interaction paradigm inspired by other forms of ensemble improvisation, particularly in jazz and avant-garde music.
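The interaction paradigm described above — a musician builds a rhythmic pattern from sound files, publishes it to the ensemble over the network, and others borrow and transform it — can be illustrated with a minimal sketch. The names, data structures, and transformation below are purely illustrative assumptions for this article; they are not LOLC's actual command set, which is not shown in this excerpt.

```python
# Hypothetical sketch of an LOLC-style workflow (illustrative names only,
# not LOLC's real syntax): a pattern is a named sequence of
# (sound_file, duration_in_beats) steps that any ensemble member can
# retrieve from a shared store and transform.
from dataclasses import dataclass


@dataclass
class Pattern:
    name: str
    steps: list  # list of (sound_file, duration_in_beats) pairs


class SharedPatternStore:
    """Stands in for a pattern store shared over the local network."""

    def __init__(self):
        self._patterns = {}

    def publish(self, pattern):
        self._patterns[pattern.name] = pattern

    def fetch(self, name):
        return self._patterns[name]


def reverse(pattern, new_name):
    """One possible transformation: play a borrowed pattern backwards."""
    return Pattern(new_name, list(reversed(pattern.steps)))


store = SharedPatternStore()
# Musician A defines and shares a pattern built from sound files.
store.publish(Pattern("groove1",
                      [("kick.wav", 1.0), ("snare.wav", 0.5), ("hat.wav", 0.5)]))
# Musician B borrows it and shares a transformed variant under a new name.
store.publish(reverse(store.fetch("groove1"), "groove1rev"))
print([sound for sound, _ in store.fetch("groove1rev").steps])
# → ['hat.wav', 'snare.wav', 'kick.wav']
```

The point of the sketch is the social mechanic rather than the sound engine: transformed patterns re-enter the shared store under new names, so borrowed material stays available to the whole ensemble.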

In this article, we discuss the related work upon which LOLC builds; we outline the main goals of the LOLC environment; we describe the environment's design, its technical implementation, and the motivations behind some of our key decisions; and we evaluate its success through discussion of a recent performance.

Related Work

The design and implementation of LOLC was influenced by existing models for collaborative text-based laptop performance, by approaches to collaborative improvisation in other types of ensembles, and by collaborative composition systems.

Existing Models for Collaborative Text-Based Performance

In principle, laptop-based musical ensembles can use any software environment to perform together; they need not be connected over a data network, and they can coordinate their musical activities solely via aural and visual cues. But although such a strategy works well with an instrumental ensemble, it can be more problematic in a laptop ensemble. Without a shared clock or live beat tracking, time synchronization is difficult. Further, the borrowing and transformation of musical motives across members of the group—an interaction paradigm that is common to many forms of improvisation—can be difficult in a laptop ensemble, where each member would need to manually recreate the musical content or underlying algorithm in his or her own environment.

To more effectively share information about timing and musical content, several live-coding environments implement collaboration features over a local-area network. Rohrhuber's JITLib (Collins et al. 2003) and Sorensen's Impromptu (Brown and Sorensen 2009) both take a similar approach: They enable clients to share and manipulate dynamic objects or variables over a network. Recent versions of Impromptu utilize a tuple space to implement this type of synchronized sharing more robustly (Sorensen 2010), whereas JITLib...
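The tuple-space idea mentioned above can be sketched in miniature. In a tuple space, clients write tuples into a shared associative store and read them back by pattern matching, without knowing who wrote them or when. The sketch below is an assumption-laden toy, not Impromptu's actual API: it is a single-process stand-in using `None` as a wildcard.

```python
# Minimal in-process tuple-space sketch (illustrative; not Impromptu's real
# implementation): clients "out" tuples into a shared space and "rd" them
# back by pattern, with None acting as a wildcard field.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        """Write a tuple into the space."""
        self._tuples.append(tup)

    def rd(self, pattern):
        """Return the first tuple matching the pattern, or None."""
        for tup in self._tuples:
            if len(tup) == len(pattern) and all(
                p is None or p == t for p, t in zip(pattern, tup)
            ):
                return tup
        return None


space = TupleSpace()
# One performer publishes a named tempo value...
space.out(("tempo", 120))
# ...and another reads it by pattern, decoupled from the writer.
print(space.rd(("tempo", None)))
# → ('tempo', 120)
```

This decoupling of writers from readers is what makes the model attractive for ensemble synchronization: performers coordinate through shared state rather than through direct point-to-point messages.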
