
Towards a Universal and Intelligent MIDI-Based Stage System: A Composer/Performer's Testimony

Philippe Ménard

There is a straight line linking the 'cybernetic' paradigm of the mid-1950s to the various 'robotic' applications of the 1980s. During the last 3 decades, in the most vital research in sound synthesis, sound processing and sound recording, there have been continual attempts to bring 'control' and 'autocontrol' into the field of music. I remember, in the early 1970s, when I was still a student, being thoroughly impressed by Peter Beyls's and Joel Chadabe's experiments; they were real 'control-voltage sorcerers' to me. During that glorious period in analog electronics, the role of electronics was huge compared to that of digital technology. This ratio has been completely reversed since then. In the past decade there has been an explosion of control experiments, as never before: an eagerness to apply to the arts, and especially to music, what had been or was being developed in other, usually less peaceful fields, such as the military/industrial field. I am thinking in particular of pattern and speech recognition, artificial vision and audition, and the like. I suspect that one of the main reasons for this explosion is the shift from heavy electronics to digital computing and microcomputing. A great deal of electronic operations have shifted to programming, making this world accessible to many more people. I would say, ironically, that the control field has become affordable to 'ordinary' researchers, in the sense of 'computer-literate individuals not necessarily supported by large research and teaching institutions'. The MIDI standard probably opened the last door for access to this robotic world, one more step in the direction of easier and better material and human communication.
I am not saying that things have become so easy that research on control has been trivialized. On the contrary, I would say that complexity and heterogeneity remain the 'trademarks' of this research, but that, without the pretext of all kinds of technical difficulties, researchers no longer have an excuse not to be really inventive. I make the hypothesis that it is probably easier today for an artist to realize his or her robotic ideas than it was just a decade ago. As far as my own work is concerned, the microcomputer, assembler language, MIDI standard and some electronic expertise have proved affordable enough to make my dream of SYNCHOROS come true: a way to give the human body control of the music, and ultimately of the whole stage environment.

Philippe Ménard, 5982 rue Durocher, Outremont, Quebec, Canada H2V 3Y4. Received 5 May 1988.

ROBOTICS IN SYNCHOROS

Control is certainly the key word of cybernetics and its robotic applications. Actually, a system designed and built on detection or external data retrieval, on decision-making or 'artificial intelligence' and, finally, on task execution, corresponds to what I consider a 'robot system'. SYNCHOROS is basically a system belonging to the family of the new MIDI systems. In the hands (literally) of a performer, it 'shrinks' to a 'simple instrument', but I shall leave that matter for later and focus instead on SYNCHOROS as a collection of units in a communication network, which has had from the beginning the complexity inherent in any cybernetic system. SYNCHOROS is not just one more MIDI product on the market, but rather a new, organic way to have MIDI instruments interrelate to each other. It is a new organization of separate, common MIDI instruments.
SYNCHOROS is a robotic system and, at the performing stage, a robotic musical instrument, in the following sense: its inputs are a combination of artificial perceivers and sensors, sending real-time information. These sensors, artificial equivalents of the human eye, ear, members and whole body, are actually sensors of light, color, weight and movement. The outputs may be combinations of various sound synthesis and processing units or, more generally, a combination of any sound-and-image synthesis or processing unit. In the middle, to interrelate one to the other, to bring the output into straight dependence on or full interdependence with the input, is located a 'decision-maker', the seat of artificial intelligence, processing input information and controlling output units according to a set of conditions listed in an aesthetic protocol designed by the composer. This is not the place to describe the fabulous history of art and electronic technology. But 'educated' musicians know that even as pioneers like Shannon, Weaver (1949), Von Neumann (1951), Wiener (1954), Ashby (1956) and Foerster (1960) were establishing the new science of automata, artists like Xenakis, Barbaud, Hiller, Mathews, Nicolas Schöffer and...
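The sensor / decision-maker / output pipeline described above can be sketched in modern terms. The following is purely an illustrative sketch, not Ménard's implementation: the sensor names, thresholds, and MIDI mappings are hypothetical, assuming sensor readings normalized to 0.0-1.0 and outputs expressed as standard MIDI channel messages.

```python
# Illustrative sketch (not the original SYNCHOROS code): a minimal
# "decision-maker" that maps hypothetical stage sensors (light, weight,
# movement) onto MIDI messages according to composer-defined rules,
# i.e. an "aesthetic protocol" as a list of (condition, action) pairs.

def decide(sensors, rules):
    """Apply each rule's condition to a sensor snapshot; collect the
    MIDI messages produced by the rules that fire."""
    messages = []
    for condition, action in rules:
        if condition(sensors):
            messages.append(action(sensors))
    return messages

# Hypothetical protocol; thresholds and mappings are invented for the sketch.
rules = [
    # Bright light -> note-on (status 0x90, channel 1), pitch tracks light level
    (lambda s: s["light"] > 0.5,
     lambda s: (0x90, 60 + int(s["light"] * 12), 100)),
    # Weight on a floor pad -> controller 7 (volume) tracks the weight
    (lambda s: s["weight"] > 0.1,
     lambda s: (0xB0, 7, int(s["weight"] * 127))),
]

snapshot = {"light": 0.8, "weight": 0.5, "movement": 0.0}
print(decide(snapshot, rules))  # both rules fire for this snapshot
```

The design point, as in the article's description, is that the composer's role moves from writing notes to writing the conditions that relate stage input to musical output.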
