Max at Seventeen

Miller Puckette

I have worked for many years on a computer environment for realizing live electronic music that I named in honor of Max Mathews. Three currently supported computer programs—Max/MSP, jmax, and Pd—can be considered extended implementations of a common paradigm I refer to here as "Max." The Max paradigm now appears stable enough that a printed description of it no longer risks quickly becoming outdated. We can now usefully assess what Max (the paradigm) does well, what it does less well, and what we can all learn from the experience. The Dartmouth Symposium on the Future of Music Software, organized by Eric Lyon, offers the perfect occasion to start this project. I apologize in advance if, for obvious reasons, parts of this article might not seem perfectly objective or impartial.

The Max paradigm can be described as a way of combining pre-designed building blocks into configurations useful for real-time computer music performance. This includes a protocol for scheduling control- and audio-rate computations, an approach to modularization and component intercommunication, and a graphical representation and editor for patches. These components are realized differently in different implementations, and each implementation offers a variety of extensions to the common paradigm. On the surface, Max appears to be mostly concerned with presenting a suitable graphical user interface for describing real-time MIDI and audio computations. However, the graphical look and editing functions in Max are not highly original, and most of what is essentially Max lies beneath the surface.
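To make the scheduling protocol concrete, here is a minimal sketch in C. The names, the event structure, and the single-oscillator "patch" are illustrative assumptions, not code from any Max implementation; the fixed 64-sample block size follows Pd's default. The point it shows is that control computations are serviced between audio blocks, so a control event's effect is quantized to a block boundary:

    /* Illustrative sketch of block-oriented scheduling in the Max/Pd
       style; not actual Max source code. */
    #include <math.h>
    #include <stdio.h>

    #define BLOCK_SIZE 64             /* samples per audio block (Pd's default) */
    #define SAMPLE_RATE 44100.0
    #define TWO_PI 6.283185307179586

    /* A sporadic control event: at logical time 'time' (in samples),
       set the oscillator frequency to 'new_freq'. */
    typedef struct { double time; double new_freq; } event_t;

    static double phase = 0.0, freq = 440.0;

    /* Compute one audio block from a single sine oscillator. */
    static void compute_block(float *out)
    {
        for (int i = 0; i < BLOCK_SIZE; i++) {
            out[i] = (float)sin(phase);
            phase += TWO_PI * freq / SAMPLE_RATE;
        }
    }

    int main(void)
    {
        event_t queue[] = { { 128.0, 880.0 } };   /* retune at sample 128 */
        int nevents = 1, next = 0;
        double now = 0.0;
        float out[BLOCK_SIZE];

        for (int block = 0; block < 4; block++) {
            /* Service every control event whose logical time falls inside
               this block; control runs between audio blocks, so the change
               takes effect at the block boundary. */
            while (next < nevents && queue[next].time < now + BLOCK_SIZE) {
                freq = queue[next].new_freq;
                next++;
            }
            compute_block(out);
            now += BLOCK_SIZE;
            printf("block %d ends at sample %g, freq = %g\n", block, now, freq);
        }
        return 0;
    }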

In my experience, computer music software most often arises from interactions between artists and software writers (occasionally embodied in the same person, though not in my own case). This interaction is at best one of mutual enabling and mutual respect. The design of the software cannot help but affect what computer music will sound like, but we software writers must try not to project our own musical ideas through the software. In the best of circumstances, the artists remind us of their needs—which often turn out to be quite different from what either of us first imagined. To succeed as computer music software writers, then, we need close exposure to high-caliber artists representing a wide variety of concerns. Only then can we identify features that solve a variety of different problems in the hands of very different artists.

Many of the underlying ideas behind Max arose in the rich atmosphere of the MIT Experimental Music Studio in the early 1980s (which became part of the MIT Media Lab at its inception). Max took its modern shape during a heated, excited period of interaction and ferment among a small group of researchers, composers, and performers at IRCAM during the period 1985–1990; writing Max would probably not have been possible in less stimulating and demanding surroundings. In the same way, Pd's development a decade later would not have been possible without the participation of the artists and other researchers of the Global Visual Music Project, of which I was a part. My work on Max and its implementations has been in essence an attempt to capture these encounters in software, and a study of Max will succeed best if it considers the design issues and the artistic issues together.

Background and Influences

The Max paradigm and its first full implementation were mostly developed over the period 1980–1990, drawing on a wide variety of influences. The most important was probably Max Mathews's RTSKED program (Mathews and Pasquale 1981), some of whose key ideas also appeared in Curtis Abbott's earlier 4CED program (Abbott 1980). RTSKED attacked the problem of real-time scheduling of control operations for a polyphonic synthesizer.

Whereas Mathews's earlier GROOVE program (Mathews and Moore 1970) emphasized the handling of periodically sampled control voltage signals, the notion of control in RTSKED was one of sporadically occurring events that caused state changes in the synthesizer. For instance, starting a note in RTSKED might involve setting the frequency of an oscillator and triggering an envelope generator. This is similar in style to the "note card" approach of the Music N...
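This event-based notion of control can be sketched as follows. The voice structure and function names here are hypothetical, chosen only to illustrate the idea that a note onset is a one-shot state change rather than a stream of periodically sampled control values:

    /* Sketch of event-style control as described above; the voice
       structure and function names are hypothetical, not RTSKED's. */
    #include <stdio.h>

    typedef struct {
        double osc_freq;   /* oscillator frequency, in Hz */
        int    env_stage;  /* envelope state: 0 = idle, 1 = attack, ... */
    } voice_t;

    /* Starting a note means changing synthesizer state: set the
       oscillator's frequency, then trigger the envelope generator. */
    static void note_on(voice_t *v, double freq)
    {
        v->osc_freq = freq;
        v->env_stage = 1;
    }

    int main(void)
    {
        voice_t v = { 0.0, 0 };
        note_on(&v, 440.0);   /* the event fires once; no per-sample control stream */
        printf("freq = %g, env_stage = %d\n", v.osc_freq, v.env_stage);
        return 0;
    }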
