A Framework for the Evaluation of Digital Musical Instruments

Sile O’Modhrain

At the outset of a discussion of evaluating digital musical instruments (DMIs)—that is to say, instruments whose sound generators are digital and separable (though not necessarily separate) from their control interfaces (Malloch et al. 2006)—it is reasonable to ask what the term “evaluation” really means in this context. After all, there may be many perspectives from which to view the effectiveness of the instruments we build. For most performers, performance on an instrument becomes a means of evaluating how well it functions in the context of live music making, and their measure of success is the audience’s response to their performance. Audiences evaluate performances on the basis of how engaged they are by what they have seen and heard. When questioned, they are likely to describe good performances as “exciting,” “skillful,” or “musical.” Bad performances are “boring,” and those that are marred by technical malfunction are often dismissed out of hand.

If performance is considered to be a valid means of evaluating a musical instrument, then it follows that, for the field of DMI design, a much broader definition of the term “evaluation” than that typically used in human–computer interaction (HCI) is required to reflect the fact that there are a number of stakeholders involved in the design and evaluation of DMIs. In addition to players and audiences, there are also composers, instrument builders, component manufacturers, and perhaps even customers. And each of these stakeholders may have a different concept of what is meant by “evaluation.” Composers, for example, may evaluate an instrument in terms of how reliable it is. If a composer writes a piece of instrumental music to be performed on a DMI, then they ought to be able to assume that (1) the instrumentalist is skilled on their instrument, and (2) the instrument has a known space of sound attributes that the composer can draw upon for musical effect.

The designer of a DMI, who may also be a composer and/or performer, is primarily interested in ensuring that the instrument does what it was intended to do—in other words, that if the instrument is designed to respond to certain gestures of the player, it does so reliably. However, a designer may also wish to leave room in their design for a skilled player to explore the “corners” of an instrument’s sound space, much as a skilled violinist can exploit extended playing techniques that expand the range of bowing and fingering gestures.

For manufacturers of DMIs or of their components, evaluation means testing the reliability of the systems they build at a much lower level. Their motivation is primarily financial: they must determine whether the system, or any of its components, is likely to fail and cost money to repair or replace. Customers, too, engage in a form of evaluation by voting with their wallets. If a product has flaws in its hardware design, its interaction design, or the quality of the sound it produces, then it simply will not sell.

These examples suggest that DMI designs can be evaluated from multiple perspectives, each of which may require different techniques and approaches. Furthermore, boundaries between roles, although usually distinct in acoustic instrument development, are blurred in the world of DMI design. Performers are often the composers of the music they play and may also be the designers of their instruments. This poses additional evaluation challenges because it requires the digital instrument builder to identify which role they must take on when objectively critiquing their work. Given that there is no one-size-fits-all solution to evaluating DMIs, a next step in determining what approaches are appropriate for a given context is to ask what such evaluations seek to discover and why. In reviewing existing examples of evaluations of DMIs, it quickly becomes apparent that answering this question has given rise to a variety of methodological approaches to evaluation. It is therefore important in each case to bear in mind that the results obtained will reflect not just the question posed, but also the methodological approach used and the interests of...
