
Convergent Technologies, Custom Aesthetics
Phillip Hermans, composer

Abstract

The author anticipates how technological advances in the 21st century will give musicians the tools to further control their craft, while giving listeners the ability to personalize and share their aesthetic experience.

We may have conquered the “infinite variety of noise-sounds” [1], yet “the machines we use for making music can only give back what we put into them” [2]. In theory we can generate any sound we imagine; in practice, implementation is often laborious. To close that gap, many musicians and sound artists exploit new technologies as soon as they become available. The result is frequently a proliferation of aesthetic practices, with new forms subsuming older paradigms rather than replacing them. New technology has strengthened the variety of musical communities and allowed more people to participate in music and multimedia practices. Looking at the current musical and technological landscape, I suggest that emerging technologies will allow humans to personalize their perception of their environments through embedded technology and human-computer interfaces.

In 2002 the US National Science Foundation and the Department of Commerce called for transdisciplinary convergence of nanotechnology, biotechnology, information technology and cognitive science (NBIC) in order to create technologies for improving human health, cognition and physical abilities [3]. Subjects of a 2013 report [4] include ubiquitous, wireless, intelligent sensors; complexity science; human-computer interfaces; and technologies for telepresence and teleoperation. While the report aspires to a new renaissance for humanity and cites institutions that include the arts (Media Lab, Bell Labs), it makes little mention of art and music in connection with converging technologies.

Technology has always had an impact on music and aesthetics. Early musical instruments allowed for exploration of the natural resonance of reeds, bones and land formations. The monochord, an ancient string instrument, was used by Pythagoras to demonstrate simple number ratios and to develop his theories of music and the universe [5]. Twentieth-century technology affected the aesthetics of the Italian Futurists [6], musique concrète, Elektronische Musik, Plunderphonics [7] and Glitch music’s “aesthetic of failure” [8], to name a few.

Electroencephalography (EEG) was used in Lucier’s Music for Solo Performer (1965) and, more recently, in the Brain Dreams Music Project, initiated in 2011 [9]. David Rosenboom pioneered the use of computers and biofeedback in musical contexts [10]. The Xth Sense is a biophysical technology for digital interactivity that captures mechanomyogram (MMG) signals for playback or for use as control data [11]. The Hub experimented with networked music early on, and their current work extends the network from performers to audience with pieces such as Glimmer [12] and platforms such as MassMobile [13]. The BioSync interface merges the network and biosensing paradigms by capturing audience members’ biometric responses via mobile phones [14]. Animated scores by composers such as Jesper Pedersen allow real-time algorithmic composition to be realized by instrumentalists following animations [15]. Some video games, such as Otocky [16], are designed for simultaneous game/music play, while SoundCraft sonifies the gameplay of StarCraft 2 [17]. My own work includes rule-based systems for human and artificial agents [18].

Convergence of the above practices will accelerate as technology becomes smaller, less expensive and more abundant. In a world of ubiquitous technology, music and sound art will be constantly accessible and interactive. Wearable and embedded devices can serve as inputs to a networked, multimedia performance system involving other humans and artificial intelligences. The design of soundscapes in both virtual and physical environments will matter for public infrastructure as well as private homes and businesses. Sonification of sensor data will be important for monitoring health, interfacing with computers and understanding big data.

Just as users now customize technology, whether by changing a desktop image or modifying the source code of an operating system, humans will come to craft their own subjective experience of the world. Individuals may alter an advanced cochlear implant or choose among sonification mappings in a human-computer interaction. Emerging technologies will impact music and multimedia art by making personal experiences more customizable. The aesthetics of music in the future will be at once intensely personalized and shareable within a cybernetic network.
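The idea of choosing among sonification mappings can be made concrete with a minimal sketch. The mapping names, value ranges and functions below are hypothetical illustrations, not any existing system: the same stream of normalized sensor readings is rendered either as pitch or as pulse rate, depending on the listener's choice.

```python
# Hypothetical sketch of user-selectable sonification mappings.
# Sensor values are assumed normalized to the range [0.0, 1.0];
# mapping names and output ranges are illustrative only.

def pitch_mapping(value):
    """Map a normalized sensor value to a frequency in Hz (220-880)."""
    return 220.0 * (2.0 ** (2.0 * value))  # spans two octaves above A3

def pulse_mapping(value):
    """Map the same value to a pulse rate in beats per minute (40-200)."""
    return 40.0 + value * 160.0

MAPPINGS = {"pitch": pitch_mapping, "pulse": pulse_mapping}

def sonify(readings, mapping="pitch"):
    """Render sensor readings as control values for a synthesizer."""
    fn = MAPPINGS[mapping]
    return [fn(v) for v in readings]

# The same data yields a different aesthetic rendering per listener choice:
data = [0.0, 0.5, 1.0]
print(sonify(data, "pitch"))  # frequencies: [220.0, 440.0, 880.0]
print(sonify(data, "pulse"))  # pulse rates: [40.0, 120.0, 200.0]
```

The design point is that the data stream and the listener's mapping are decoupled, so an individual can swap one mapping for another without altering the shared underlying signal.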

