About This Issue

Composer James Dashow has been active in computer music for some three decades. In an interview in this issue, he briefly reminisces about matters such as his development of the MUSIC30 synthesis language, but he concentrates chiefly on his ongoing compositional interests. Mr. Dashow continues to base compositions on his Dyad System, a theoretical framework for deriving pitches and timbres alike from selected frequency ratios. In the interview, he shares a few signal-processing tricks and explains why his compositions have used only fixed media, not live electronics. He also sheds light on his compositional process and his ideas, influenced by Jean Piaget, about the poetics of structure. Finally, he describes his in-progress work Archimedes, a multimedia opera for performance in a planetarium.
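
Dashow's actual Dyad System is considerably more elaborate than any short example could capture, but a minimal sketch, assuming a single base frequency and ratio as hypothetical inputs, can suggest how one frequency ratio might yield both a pitch pair and a shared spectrum:

```python
# Minimal sketch, NOT Dashow's actual Dyad System: derive a two-note
# "dyad" and a composite harmonic spectrum from one frequency ratio.

def dyad(base_hz, ratio, n_partials=4):
    """Return a pitch pair and the merged partials built on both members.

    base_hz, ratio, and n_partials are illustrative parameters; the real
    system's derivation rules are far richer.
    """
    low, high = base_hz, base_hz * ratio
    partials = sorted({round(f * k, 2) for f in (low, high)
                       for k in range(1, n_partials + 1)})
    return (low, high), partials

# A perfect fifth (ratio 3:2) on 220 Hz: the two pitches share the
# partial at 660 Hz, hinting at the pitch/timbre duality.
pitches, spectrum = dyad(220.0, 3 / 2)
```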

The next article discusses live electronics in music by Luciano Berio, whose stature as one of the foremost composers of the last half-century renders his use of such techniques particularly noteworthy. The authors, who work at Mr. Berio's Centro Tempo Reale in Florence, describe three recent compositions: Ofanìm, Outis, and Altra voce. The electronic processing in these pieces attests to the persistence of Mr. Berio's longstanding interest in expanding the timbral and spatial resources of traditional instruments and voices. Hardware and software (such as Max/MSP) are employed for spatialization, reverberation, sampling, and delay, as well as for pitch-shifting, which is applied to solo parts as a means for generating heterophony and to ensemble passages as a means for creating dense textures. The effects range from the strikingly dramatic, as in Ofanìm's portrayal of the prophet Ezekiel's vision, to the subtle: in Outis, the composer conceals the loudspeakers in an effort to downplay the very existence of the electronic processing.
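
The heterophony technique described above can be sketched in a few lines. This is an illustration of the general idea of layering micro-transposed copies of a line, as one might patch it in Max/MSP; the cent offsets are invented for the example, not taken from Berio's scores:

```python
# Hedged sketch of pitch-shift heterophony: layer a solo line with
# copies transposed by a few cents. The offsets below are illustrative.

def heterophony(melody_hz, cents=(0, -15, 25)):
    """Return one copy of the melody per entry in `cents`, each
    transposed by that many cents (100 cents = one semitone)."""
    return [[round(f * 2 ** (c / 1200), 2) for f in melody_hz]
            for c in cents]

layers = heterophony([440.0, 495.0, 523.25])
# layers[0] is the original line; the others are slightly detuned copies
# which, mixed together, thicken the line into a heterophonic texture.
```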

Rajmil Fischman's article investigates some compositional applications of Erwin Schrödinger's famous wave equation, which is related to the structure of chemistry's periodic table of the elements. Mr. Fischman developed algorithms for applying the equation to parameters of asynchronous granular synthesis as well as to overall musical structure. He then created a general-purpose compositional software program known as AL, as well as a specific AL plug-in that implements his approach to Schrödinger's equation. Finally, he used this software to realize his composition Erwin's Playground.
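
The flavor of such a mapping can be suggested with a toy example; this is not Fischman's AL algorithm, only a sketch using the textbook particle-in-a-box solution of the wave equation, whose probability density |ψₙ(x)|² = 2 sin²(nπx) weights where grains fall inside a frequency band:

```python
import math
import random

# Illustrative sketch, NOT Fischman's AL plug-in: sample granular-
# synthesis grain frequencies from the probability density of a
# particle-in-a-box eigenstate via rejection sampling.

def grain_frequencies(n, lo_hz, hi_hz, n_grains, seed=0):
    rng = random.Random(seed)
    grains = []
    while len(grains) < n_grains:
        x = rng.random()                       # position in the unit box
        density = 2 * math.sin(n * math.pi * x) ** 2
        if rng.random() * 2 < density:         # max density is 2
            grains.append(lo_hz + x * (hi_hz - lo_hz))
    return grains

# Quantum number n controls how many density "lobes" the grain cloud has.
freqs = grain_frequencies(n=2, lo_hz=200.0, hi_hz=800.0, n_grains=100)
```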

The final two articles present research in the use of artificial neural networks for interpreting humans' musical intentions. In one case, the intentions are mediated by the human hand, and in the other case, they are essentially unmediated. Susan George's article examines machine recognition of handwritten music. Although many composers today use "point-and-click" notation programs such as Finale or Sibelius to produce their scores, some long for a more natural means of entering the data. MIDI input is one possibility, already available in such programs, but handwritten entry of music remains a research area. The author surveys previous work in this field and then describes her own experiments with a multi-layer perceptron neural network trained to recognize handwritten music symbols. Her system, which operates on musical symbols that are dynamically constructed on-line, rather than written off-line and then scanned, achieves recognition rates of about 80% on a set of 20 different symbols.
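
The on-line aspect is the key difference from scanned-score recognition: a symbol arrives as a time-ordered pen stroke rather than a static bitmap. A minimal sketch of one plausible preprocessing step (an assumption for illustration, not George's implementation) resamples the stroke to a fixed-length vector suitable as input to a multi-layer perceptron:

```python
# Hedged sketch of on-line stroke preprocessing: resample a pen stroke
# to n evenly spaced points and flatten it into a 2n-element feature
# vector for a neural network. Details are illustrative assumptions.

def stroke_to_features(points, n=8):
    """Linearly resample a stroke (list of (x, y) pairs) to n points."""
    feats = []
    for i in range(n):
        t = i * (len(points) - 1) / (n - 1)
        j, frac = int(t), t - int(t)
        x0, y0 = points[j]
        x1, y1 = points[min(j + 1, len(points) - 1)]
        feats += [x0 + frac * (x1 - x0), y0 + frac * (y1 - y0)]
    return feats

# A four-point diagonal stroke becomes a 16-element input vector.
vec = stroke_to_features([(0, 0), (1, 2), (2, 4), (3, 6)])
```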

The article by Eduardo Reck Miranda and his colleagues, from four different institutions in all, offers a look at groundbreaking research in the musical application of brain-computer interfaces. The idea of thought-controlled music has long fascinated composers such as Alvin Lucier and David Rosenboom, but recent technological advances have increased its feasibility. The authors perform spectrum analysis of electroencephalograms (EEGs) obtained while the subjects are performing certain mental tasks related to music listening. A neural network then attempts to map variations in the EEGs' spectral density to the corresponding mental states. Three experiments were conducted, showing that the system could distinguish between such tasks as active versus passive listening (where "active listening" involves imagining that a musical passage is being played and "passive listening" means it really is being played) or listening to the left versus the right side of the stereo field. The researchers are using these results for real-time control of...
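
The analysis stage described above rests on estimating the power of standard EEG frequency bands in short signal windows. The following sketch uses conventional band limits (alpha roughly 8-12 Hz, beta roughly 13-30 Hz) and a plain discrete Fourier sum; it illustrates the general technique, not the authors' exact pipeline:

```python
import math

# Hedged sketch of EEG band-power estimation: sum the squared DFT
# magnitudes of the bins falling inside one frequency band.
# Band limits follow common EEG conventions, not the article's specifics.

def band_power(samples, sr_hz, lo_hz, hi_hz):
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        f = k * sr_hz / n
        if lo_hz <= f <= hi_hz:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(-s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

# A synthetic one-second window containing a pure 10 Hz component
# should show far more power in the alpha band than in the beta band.
sr = 128
window = [math.sin(2 * math.pi * 10 * i / sr) for i in range(sr)]
alpha = band_power(window, sr, 8, 12)
beta = band_power(window, sr, 13, 30)
```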
