Whose Opera Is This, Anyway?
Composer and MIT Media Lab Professor Tod Machover believes anyone can make music. At least that’s what it says on the cover of the glossy brochure for his Brain Opera, which premiered last July 23rd at Lincoln Center in New York. Part science exhibit, part music recital, the Brain Opera promises its audience a chance to play electronic instruments in an Interactive Lobby and then hear a 45-minute performance based on their impromptu riffs and recitatives. The fact is, however, that the Brain Opera doesn’t exactly deliver on this promise—which makes Machover’s professed faith in his audience’s ability to make music a bit less convincing than his brochure would have us believe.
I had been told in advance that the Brain Opera’s Interactive Lobby contained a battery of 40 or so computer workstations that produce sounds and video based on visitors’ inputs. Since for me “a battery of computer workstations” conjures up phosphorescent screens set into sleek metal consoles with shiny right angles, I was a bit surprised upon entering the lobby to find myself in a dark jungle of amorphous pods, plastic toadstools, and oversized potatoes hanging from the ceiling, all interlaced with vines of computer cable. It seems that Machover and his collaborators hoped that a touchy-feely room of bouncing legumes would allay the public fear of technology. Judging from the crowd’s reaction, there was some justification for this hope: visitors streaming in the door eagerly hopped from pod to pod, thumping rubber protrusions to play crude rhythms or leaning their heads inside plastic cowls to chat insouciantly with videos of Artificial Intelligence pioneer Marvin Minsky.
It was not just the somewhat chintzy-looking plastic tubers that were designed to appeal to the computer illiterate, but also the way that these “hyperinstruments” responded to the audience’s presence. A particularly successful example was called the Singing Tree, though it looked more like a giant plastic mushroom: by singing a pure tone into a microphone found under the mushroom’s canopy, I was rewarded with a slightly delayed wash of sound harmonized to my voice, accompanied by the image on a video screen of a hand opening. The experience was unexpectedly gratifying—certainly the most intimate encounter I’ve ever had with a computer. The gratification was even more immediate with the Brain Opera’s other hyperinstruments. Waving my hand in front of the Gesture Wall, for example, triggered a big splashy sound that I was told was the consequence of my hand interrupting an electric field generated by the Gesture Wall’s bud-like protuberances. The trouble was, when I placed my hand at different points in the field to map out the way the sound was affected by my hand position, I found that no matter where I placed my hand or how fast I moved it, the music sounded pretty much the same. Ironically, the same mechanism that enabled me to make a sumptuous sound easily—a computer algorithm generating musical phrases based somewhat loosely on my hand position—prevented me from understanding, and hence controlling, the sound I was triggering. The result felt a little like having a conversation with a schizophrenic: it’s hard to tell whether he’s listening or not.
I felt this frustration at many of the hyperinstruments in the first room of the opera, whether I was drumming plastic protuberances, waving my hands in front of video screens, or listening to Marvin Minsky respond to my questions with non sequiturs. With any interactive work, the important question is not how to make it interactive—which is relatively easy with today’s technology—but how to make the interaction rich and meaningful. This “quality of interactivity” problem is especially acute when the object of interaction is simultaneously billed as an artwork and an instrument (or “hyperinstrument”). One approach is to make an instrument that is highly underdetermined, something like leaving a guitar in the gallery for visitors to play. By plucking a few strings or strumming...