Affective Audio

Jonathan Weinel, Stuart Cunningham, Darryl Griffiths, Shaun Roberts and Richard Picking

Abstract

The authors discuss their interdisciplinary research, which investigates the use of affective computing technologies in the context of music, audiovisual artworks and video games. One current project involves the expansion of mobile sound walk apps through incorporating environmental and emotional factors, forming new sonic landscapes. What type of music could reflect driving through a hot desert landscape at midday or walking through a snowy cityscape at dawn? Through a discussion of their collective work in this area, the authors aim to elicit a vision of the computer-based musical experiences of the future.

In this article we discuss our work in Creative and Applied Research for the Digital Society (CARDS) at Glyndŵr University (North Wales) that explores the application of approaches from affective computing within the context of audio and audiovisual projects. “Affective computing” [1] is an area of research that focuses on the design of computer systems that respond to and exhibit human affective states such as mood or emotion. Such systems can draw on a variety of sensor technologies and biofeedback equipment, which are becoming increasingly affordable. They can also be used in mobile scenarios, as many of these sensors are already available on modern smartphones. When combined with appropriate interpretive algorithms, they form a “context-aware” system [2] that can tell us something about an individual. “Context” here encompasses a variety of factors pertinent to the user, including environmental, physical, emotional and social factors. The challenge for research in this area is to equip systems with the necessary sensors and inputs, and to interpret their data so as to yield meaningful contextual information that can then be put to the intended purpose.

Such systems can be used for a variety of purposes, ranging from the automation of information retrieval to advertising. Among these uses, affective computer systems can be employed to control the delivery of sound, music and visualizations. These can be designed for informative purposes or be artistically driven to provide new sonic experiences. For example, we may conceive of new types of creative experience that enable the composition of visual music using gestural control alongside automatic reflections of emotion. Imagine the new types of creative social activity that could be facilitated on a global scale through computer networks. Alternatively, mobile “sound walk” apps can be expanded to incorporate environmental factors and user emotion, forming new sonic landscapes. What type of music would reflect driving through a hot desert landscape at midday, or walking through a snowy cityscape at dawn? “Affective audio” is the term we have adopted to describe our collective work in this area, which seeks to address the technical, aesthetic, social and philosophical questions this research area presents.
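As a toy illustration of how a sound walk app might translate environmental context into music, consider the following sketch. Every function name, threshold and parameter mapping here is our own invented example for the purpose of discussion, not part of any existing app or of the systems described in this article:

```python
# Hypothetical sketch: map environmental context to musical parameters.
# All names, thresholds and mappings are illustrative assumptions.

def music_parameters(temperature_c, hour, speed_kmh):
    """Derive simple musical parameters from environmental context."""
    # Warmer settings suggest brighter timbres; map -10..40 C onto 0..1.
    brightness = min(max((temperature_c + 10) / 50.0, 0.0), 1.0)
    # Faster travel (e.g. driving) raises tempo; walking stays slower.
    tempo_bpm = 60 + min(speed_kmh, 120)
    # Daytime hours favour a major mode; night and dawn, a minor mode.
    mode = "major" if 8 <= hour <= 18 else "minor"
    return {"brightness": brightness, "tempo_bpm": tempo_bpm, "mode": mode}

# A hot desert drive at midday versus a snowy city walk at dawn:
desert = music_parameters(temperature_c=38, hour=12, speed_kmh=90)
snowy = music_parameters(temperature_c=-2, hour=6, speed_kmh=4)
```

Even this crude rule set yields contrasting musical settings for the two scenes posed above, which hints at why richer interpretive algorithms are worth pursuing.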

Affective Technologies

The principal requirements of affective computing are sensor technologies, or other data input sources such as logs of online activity, that provide the raw material for determining a user’s context. These can broadly be divided into categories covering different aspects of context, such as environmental, biophysical or social factors. For example, a GPS sensor may tell us about the environmental context of an individual, while a heart-rate sensor indicates one particular aspect of biophysical context. However, co-dependencies emerge: biophysical data alone may not give a clear indication of what a user is doing or feeling until it is seen in relation to other factors, such as environmental data and online activity, that help to build a clearer picture.
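The point that no single sensor category is conclusive can be sketched as a simple fusion rule. This is a minimal illustration of the principle, assuming invented readings and labels rather than any published algorithm:

```python
# Hypothetical sketch: combine biophysical and environmental readings
# into a coarse context estimate. The rules are illustrative only.

def estimate_activity(heart_rate_bpm, speed_kmh, indoors):
    """Infer a coarse activity label by relating sensor categories,
    since a single reading on its own is ambiguous."""
    if heart_rate_bpm > 120 and speed_kmh > 6 and not indoors:
        return "running"  # elevated pulse plus outdoor movement
    if heart_rate_bpm > 120 and speed_kmh < 1:
        return "stressed or excited"  # elevated pulse while stationary
    if speed_kmh > 20:
        return "travelling in a vehicle"
    return "at rest or walking"
```

Note how the same heart-rate reading of 140 bpm is interpreted differently depending on the accompanying environmental data, which is exactly the co-dependency described above.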

While many sensors, such as GPS, accelerometers and altitude meters, are available on modern smartphones, other relatively inexpensive sensors have yet to be incorporated into them. Co-author Darryl Griffiths has constructed an Arduino-based “Sensor Belt” that houses a comprehensive range of input sources, including temperature, humidity, light, speed, altitude, acceleration, latitude, longitude, date and time [3] (Fig. 1). Consisting of several small boxes mounted on a belt and powered by long-life batteries, it can be worn while performing most typical daily activities. Sensors for temperature, humidity and...
