About This Issue
The field of unconventional computing comprises paradigms other than the standard von Neumann architecture that dominates computer science. Among the more provocative of these paradigms is "wetware" computing, which harnesses living neurons. In this issue's first article, Eduardo Miranda and his co-authors break new ground by investigating whether live neurons, coupled to conventional computer systems, can have musical utility. The authors use in vitro cultures of chicken brain cells, whose neural firings are both detected and stimulated by embedded electrodes. On the detection side, the neural firing patterns can be recorded and "sonified" as any other signal can be. The authors describe mappings that they have found musically useful in converting the biological data to sound-synthesis parameters. They also report on their efforts to steer the neuronal network's behavior through computer-controlled electrical stimulation. Such controllability will be required if live neurons are to perform sound-synthesis tasks in a predictable and repeatable manner—that is, if the synthesis is to be at all like a musical instrument instead of a passive sonification. In about a third of their experiments, the authors were able to influence the neurons' behavior. The authors call this new field "music neurotechnology."
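The idea of converting detected neural firings into sound-synthesis parameters can be illustrated with a small sketch. This is not the authors' mapping — the function name and the specific interval-to-pitch rule here are invented for illustration — but it shows the general shape of such a sonification: each inter-spike interval is mapped to a frequency and an amplitude for a synthesis event.

```python
# Illustrative sketch only (not the mapping from the article):
# turn a list of spike times (seconds) into (onset, frequency, amplitude)
# triples, where shorter inter-spike intervals yield higher, louder events.
def spikes_to_synth_params(spike_times, base_freq=220.0, max_amp=1.0):
    params = []
    for t0, t1 in zip(spike_times, spike_times[1:]):
        isi = t1 - t0                    # inter-spike interval
        freq = base_freq / isi           # shorter interval -> higher pitch
        amp = min(max_amp, 0.1 / isi)    # shorter interval -> louder, capped
        params.append((t1, freq, amp))
    return params

# Two intervals of 0.5 s and 0.25 s produce two synthesis events.
print(spikes_to_synth_params([0.0, 0.5, 0.75]))
```

Any such mapping is, of course, a design decision; the article's contribution is identifying which mappings the authors found musically useful.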
Moving to a more conventional topic in computer music, this issue presents two articles on computer-aided composition. Regular readers of Computer Music Journal may already be acquainted with the visual programming environment called PWGL, for PatchWork Graphical Language. (The previous issue described a piano-synthesis algorithm implemented using PWGL's sound-synthesis component, PWGLSynth, which in turn was covered thoroughly in CMJ 29/3. PWGL's notation component was described in CMJ 30/4.) In the present issue, Mikael Laurson, Mika Kuuskankare, and Vesa Norilo present a broader picture of PWGL's design goals and features. These include an elegant graphical user interface, direct manipulation of high-level musical data, a cross-platform code base, and tight integration of music notation, sound synthesis, scripting, and constraint-based programming. The authors explain how PWGL relates to, and differs from, other major Lisp-based composition environments, including OpenMusic, Common Music, and PatchWork. Like other well-known music software, PWGL uses graphical patching, in which boxes are interconnected to depict the flow of data. However, PWGL also provides a direct interface to the Lisp code that underlies each box, allowing the user to decide whether visual or textual programming is more appropriate to a given task. To promote direct manipulation, PWGL requires a three-button mouse with a scroll wheel. Beyond visual patch-level programming, PWGL allows the user to extend the kernel with Lisp code, to use C++ for signal processing, to load user-created libraries, and so on.
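The patching model described above — boxes wired together, each backed by code — can be sketched generically. The following is not PWGL's API (PWGL is Lisp-based; the `Box` class and patch here are invented for illustration), but it shows the underlying idea: a patch is a graph of boxes, and evaluating an output box recursively pulls values from its inputs.

```python
# Generic sketch of graphical patching semantics (not PWGL's actual API):
# a Box wraps a function; its inputs are constants or other Boxes, and
# evaluation pulls values recursively through the patch graph.
class Box:
    def __init__(self, fn, *inputs):
        self.fn = fn
        self.inputs = inputs

    def value(self):
        args = [i.value() if isinstance(i, Box) else i for i in self.inputs]
        return self.fn(*args)

# A tiny "patch": transpose a pitch list up an octave, then reverse it.
transpose = Box(lambda pitches, n: [p + n for p in pitches], [60, 64, 67], 12)
retrograde = Box(lambda pitches: pitches[::-1], transpose)
print(retrograde.value())
```

PWGL's distinctive feature, as the article explains, is that the code behind each box is directly accessible, so the user can move freely between this visual level and textual Lisp programming.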
The article by François Rose and James Hetrick addresses a different topic in computer-assisted music creation, namely, orchestration. The goal here is to write for traditional musical instruments, using acoustical analysis to find instrumental combinations that emulate target sounds. In one example from the article, a certain chord for piano and violin imitates a specific clarinet multiphonic. The authors use a database of Fourier transforms of orchestral instruments. Given a desired ensemble of instruments (the "palette"), the software uses linear algebra to determine sound mixtures that approximate the target sound. The authors describe three of their algorithms, then go on to show an excerpt from a composition by the first author that employs their tool's orchestration proposals.
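The linear-algebra idea can be sketched in a few lines. This is a hedged simplification, not the authors' algorithms: it stacks the palette instruments' magnitude spectra as columns of a matrix and solves a least-squares problem for the mixture gains, with a crude clamp to keep gains non-negative. The function name and the toy four-bin spectra are invented for illustration.

```python
import numpy as np

# Simplified sketch (not the article's algorithms): find gains g such that
# the mixture P @ g of palette spectra approximates the target spectrum.
def orchestrate(palette_spectra, target_spectrum):
    P = np.column_stack(palette_spectra)   # one column per instrument
    gains, *_ = np.linalg.lstsq(P, target_spectrum, rcond=None)
    return np.clip(gains, 0.0, None)       # crude non-negativity constraint

# Toy 4-bin spectra for two hypothetical instruments and a target that is
# an exact mixture of them, so the recovered gains should be 0.6 and 0.3.
inst_a = np.array([1.0, 0.5, 0.0, 0.0])
inst_b = np.array([0.0, 0.5, 1.0, 0.5])
target = 0.6 * inst_a + 0.3 * inst_b
gains = orchestrate([inst_a, inst_b], target)
print(np.round(gains, 3))
```

In practice the problem is harder than this sketch suggests — real targets are not exact mixtures, and perceptual rather than purely spectral similarity matters — which is why the authors present several algorithms.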
An important area in the field of music information retrieval concerns finding the musical key in an audio recording. A number of researchers have proposed and implemented key-estimation algorithms. The article by Katy Noland and Mark Sandler measures the contributions of different factors in a key-estimation algorithm, using their own algorithm as a representative case. Some factors are related to low-level audio analysis: downsampling factor, hop size, maximum and minimum analysis frequencies, and a threshold on transform kernels. Another factor indicates the type of tone profile used. (A tone profile assigns a numerical weight to each chromatic scale degree, indicating the musical importance of that degree when the first degree is the tonic, i.e., the key note.) The final...
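The way a tone profile is used can be shown with a minimal sketch in the classic Krumhansl-Schmuckler style — a generic illustration, not Noland and Sandler's specific algorithm: a 12-bin chroma vector extracted from the audio is correlated with the major-key tone profile rotated to each of the 12 possible tonics, and the best-correlating rotation is taken as the key.

```python
import numpy as np

# Generic tone-profile key matching (Krumhansl-Schmuckler style), shown
# for major keys only; this is not the authors' algorithm. The profile
# weights are the standard Krumhansl-Kessler major-key ratings.
MAJOR_PROFILE = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                          2.52, 5.19, 2.39, 3.66, 2.29, 2.88])

def estimate_key(chroma):
    scores = [np.corrcoef(chroma, np.roll(MAJOR_PROFILE, k))[0, 1]
              for k in range(12)]
    return int(np.argmax(scores))  # 0 = C, 1 = C-sharp, ..., 11 = B

# A chroma vector dominated by the C-major scale degrees should yield
# tonic 0 (C major).
chroma = np.zeros(12)
chroma[[0, 2, 4, 5, 7, 9, 11]] = 1.0
print(estimate_key(chroma))
```

The low-level analysis factors the article examines (downsampling, hop size, frequency range) all affect how faithfully the chroma vector fed into such a matching stage represents the audio.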