  • Roboser: A Real-World Composition System
  • Jônatas Manzolli and Paul F. M. J. Verschure

We present a novel paradigm for the interactive composition and performance of music called Roboser, consisting of a real-world device (i.e., a robot), its control software, and a composition engine that produces streams of MIDI data in real time. To analyze the properties of this framework, we present the application of Roboser to a learning mobile robot, called EmotoBot, that is controlled by the Distributed Adaptive Control (DAC) architecture. The EmotoBot composition is based on the generation of real-time sound events that express the sensory, behavioral, and internal states of the robot's control model. We show that EmotoBot produces a complex set of sonic layers and quantify its ability to generate complex emergent sonic structures. We subsequently describe further applications of the Roboser framework to other interactive systems, including a large-scale interactive exhibition called Ada. Our results show the potential of the Roboser paradigm to define the central-processing stage of interactive composition systems. Moreover, Roboser provides a general framework for transforming information from real-world systems into complex sonic structures and as such constitutes a real-world composition system.

Background

The generation of music by machines, and the interaction between musicians and machines, has a broad history and is an active area of current artistic and research efforts. A musical instrument can be seen as an extension of the musician's body, and, as such, the relationship between humans and artifacts for making music has a long history. If we focus only on automated instruments, it is possible to trace a continuous development from music boxes and automated pipe organs to the Pianola, driven by perforated piano rolls, and the modern Yamaha Disklavier. Currently, musical robots are being developed that are self-playing instruments with the goal of producing music autonomously. Devices such as the Guitar Bot, an electrified slide guitar constructed by Eric Singer, Kevin Larke, and David Bianciardi (see www.lemurbots.org), aim to augment, not simply duplicate, the capabilities of a human guitarist. The concept behind the use of such robots is the idea of constructing autonomous musical instruments that ultimately play themselves without human intervention.

In contrast to this approach, the paradigm described here, called Roboser, attempts to autonomously generate complex sonic structures by exploiting the dynamics of the real-world interaction between artifacts and their environments, including humans. Here, we explore and analyze the properties of the Roboser paradigm using a mobile robot that interacts with and learns from an environment that contains rewards and punishments. The compositional process transforms the experience of this evolving artifact into a multifaceted sonic expression. Hence, in contrast to other approaches in musical robotics, the application of Roboser to EmotoBot is based on a robot that does not have strings, keyboards, or resonance tubes but contains proximal and distal sensors and wheels. We are not developing a self-playing device, but we are constructing a system that produces a sequence of organized sounds that reflects the dynamics of its experience and learning history in the real world.

In the case of Roboser, we study the potential of an autonomous interactive composition system in which a real-world artifact (i.e., a robot) and an experimental setup generate sonic structures in real time without direct human intervention. A system with the capacity to autonomously produce real-time structures out of the interaction between these real-world artifacts and their human and non-human environments is defined here as a real-world music system.

In general, the real-time operations performed by real-world music systems can be described in terms of a sensor interface, a central-processing stage, and a transformation into sound (Wanderley, Schnell, and Rovan 1998), also referred to as sensing, processing, and response (Rowe 1993). In the field of computer music, most emphasis has been placed on the interfaces between music systems and their users. For instance, this approach has led to the creation of novel musical instruments applied to installations such as the Mind Forest (see Paradiso 1999 for an overview), which use technology to create more accessible interfaces for physical or virtual musical instruments...
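The three-stage description above (sensing, processing, response) can be illustrated with a minimal sketch. This is a hypothetical example, not the actual Roboser or DAC implementation: the function names, the smoothing factor, and the mapping of a proximity reading onto a MIDI note number are all illustrative assumptions.

```python
# Hypothetical sketch of the sensing / processing / response pipeline
# (Rowe 1993). All names and parameters are illustrative; they are not
# taken from the Roboser system itself.

def sense(raw_distance_cm):
    """Sensor interface: normalize a proximity reading to [0, 1]."""
    return max(0.0, min(1.0, raw_distance_cm / 100.0))

def process(state, reading):
    """Central processing: smooth readings into a slowly evolving
    internal state (an assumed first-order filter)."""
    alpha = 0.3  # smoothing factor (assumed)
    return (1 - alpha) * state + alpha * reading

def respond(state):
    """Response: map the internal state onto a MIDI note number
    in the range C3 (48) to C6 (84)."""
    low, high = 48, 84
    return low + round(state * (high - low))

state = 0.0
for distance in [80, 60, 40, 20, 10]:  # robot approaching an obstacle
    state = process(state, sense(distance))
    note = respond(state)
```

In a full system the `respond` stage would emit real MIDI events; here it only computes the note number, which is enough to show how a stream of sensor readings becomes a stream of sonic parameters.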
