
Biomusic and the Brain

AN INTERVIEW WITH DAVID ROSENBOOM

"Biomusic" refers to music that is created by, or results from, natural processes, including bio-environmental sounds (sounds generated by the biological functions of animals or humans). "Brainwave music" describes the technique of using electroencephalography ("electric brain writing") in the production of music. Brainwave music uses electrodes to tap electrical activity from the surface of the scalp. The derived current produces coherent waves in four bandwidths (Delta: 0.5-3.5 Hz; Theta: 4-7 Hz; Alpha: 8-13 Hz; Beta: 14-30 Hz) and transient waves (ERPs, called event-related potentials or evoked responses), which can be used to control synthesizer components, resonate percussion instruments, and activate solenoids. Alpha is the strongest signal and is used more commonly than the others. A few of the major composers involved with EEG music include David Rosenboom, Alvin Lucier, Richard Teitelbaum, and Alex Hay. The following interview with David Rosenboom, one of the major pioneers of this interfacing of brain science and more aesthetic concerns, was conducted by David Paul.

Tell us about some of your early experiments and explain how music is produced through biofeedback.

The earliest stuff, which began with the straight biofeedback experimental paradigm, simply took advantage of the fact that one could electronically monitor a bioelectric signal, in this case the electroencephalogram, and then cause the presence or absence of those signals to turn a sound on and off, or to modulate it in some way. At first it's that simple. In the biofeedback experiments the object is, by some means of inner conscious control, to learn with that feedback to voluntarily manipulate the biological response. The extension of that to music, initially, was very simple. It took not much more than thinking about that experience in musical terms.
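The gating idea Rosenboom describes, monitoring a bioelectric signal and letting its presence or absence switch a sound on, can be sketched in Python. Everything below is illustrative rather than a description of his actual apparatus: the sampling rate, the DFT-based band-power estimate, and the `alpha_dominant` gate are assumptions.

```python
import math

FS = 256  # assumed EEG sampling rate in Hz

# The four coherent bandwidths mentioned above, in Hz.
BANDS = {
    "delta": (0.5, 3.5),
    "theta": (4.0, 7.0),
    "alpha": (8.0, 13.0),
    "beta": (14.0, 30.0),
}

def band_power(samples, low, high, fs=FS):
    """Crude band power: sum the magnitudes of DFT bins whose
    frequency falls inside [low, high]."""
    n = len(samples)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if low <= freq <= high:
            re = sum(s * math.cos(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            im = sum(s * math.sin(2 * math.pi * k * i / n)
                     for i, s in enumerate(samples))
            power += (re * re + im * im) / n
    return power

def alpha_dominant(window):
    """The biofeedback gate: True when Alpha carries more power than
    any other band -- the condition that would switch the sound on."""
    powers = {name: band_power(window, lo, hi)
              for name, (lo, hi) in BANDS.items()}
    return max(powers, key=powers.get) == "alpha"
```

A one-second window dominated by a 10 Hz oscillation (mid-Alpha) would open the gate; a window dominated by 20 Hz activity (Beta) would not.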
The same way a musician would learn to control some very specific kind of physiological or mental process to produce an end result, one was doing basically the same thing in the biofeedback paradigm. Many of the states of consciousness associated with biofeedback have a lot to do with musical experiences, partly because musical experiences are often involved with various states of consciousness, and partly because the degree of finesse and fineness of control of psychophysical functions that a fine performer or a good musician achieves is very similar to the fine degree of control required for fine-tuning a biofeedback experience. So it is not very hard to make the leap into music, and technologically, the initial experiments are really very simple. The next step was to get involved compositionally, exploring the relationship of these states of consciousness and the learning experience to more and more subtle, and yet more and more complex, aspects of musical language.

How is the music actually produced? What are the sound sources?

The sources in most of my work are electronic; that is, a synthesizer, or a computer, or a computer-controlled synthesizer of some sort is made part of the loop, so that a repertoire of controllable sounds is programmed into the system. Because of the nature of general-purpose computers, one can program the relationship of any stimulus to any response. So the computer is capable of analyzing the EEG signal, extracting features of it, and then deciding what to do if those features are present, and what to do if their statistical characteristics change: if they get more coherent or less coherent, more present or less present. Then that information is sent to a sound-generating system, or it might be inside the same computer, and then the sound is controlled.
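One way to read "what to do if their statistical characteristics change" is a mapper that tracks a running baseline of a feature (say, Alpha power) and drives synthesis parameters from deviations around it. The sketch below assumes exactly that; the class, the smoothing scheme, and the `amplitude` parameter are invented for illustration, not taken from Rosenboom's systems.

```python
class EEGToSynthMapper:
    """Hypothetical stimulus-to-response table: a feature stream in,
    synthesis control values out. Because only *changes* in the
    feature's statistics matter, we track an exponential running
    baseline and respond to deviations from it."""

    def __init__(self, smoothing=0.5):
        self.smoothing = smoothing  # weight kept by the old baseline
        self.baseline = None

    def update(self, alpha_power):
        # Fold the new measurement into the running baseline.
        if self.baseline is None:
            self.baseline = alpha_power
        else:
            self.baseline = (self.smoothing * self.baseline
                             + (1.0 - self.smoothing) * alpha_power)
        deviation = alpha_power - self.baseline
        # Invented mapping: a rise above baseline opens the amplitude,
        # clamped to [0, 1]; a steady or fading signal stays silent.
        amplitude = min(max(deviation, 0.0), 1.0)
        return {"amplitude": amplitude}
```

Feeding a steady feature value produces silence; a sudden increase above the learned baseline opens the sound, which then decays as the baseline catches up.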
Alvin Lucier's work involves amplifying the raw Alpha waveform to a very large degree and then having the pulses that come from that amplification, roughly ten cycles per second, activate acoustic instruments. In my case, I'm doing various kinds of complex analysis of the waveforms, taking the results of that analysis and feeding them to a sound synthesis system that generates sound from scratch, electronically.

You're more involved with artificial intelligence, in a sense.

That's correct...
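The amplified-Alpha approach attributed to Lucier here amounts to turning each Alpha cycle into a physical trigger. A minimal sketch of that pulse-extraction step, under the assumptions of a 256 Hz sampling rate and a simple zero-crossing detector:

```python
import math

def pulse_triggers(samples):
    """Indices of positive-going zero crossings in the amplified
    waveform; in the setup described, each such pulse would excite a
    percussion instrument or fire a solenoid."""
    return [i for i in range(1, len(samples))
            if samples[i - 1] <= 0 < samples[i]]
```

One second of a 10 Hz Alpha-like oscillation yields ten triggers, matching the "roughly ten cycles per second" pulses described above.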


