Enacting Sonic-Cyborg Performance through the Hybrid Body in Teka-Mori and Why Should Our Bodies End at the Skin?
Aurie Hsu and Steven Kemper
abstract

In "A Cyborg Manifesto," Donna Haraway explores implications of the increasing hybridization of humans and machines. While society has long been concerned with the encroachment of technology onto human activity, Haraway challenges this concern, suggesting instead a kinship between organism and machine, a hybrid body. A sonic-cyborg performance realizes this understanding of the human-machine hybrid through movement and sound, incorporating a "kinesonic" approach to composition and an exploration of "mechatronic" expression. In this article, the authors describe their approach to enacting sonic-cyborg performance by outlining the creative framework and associated technologies involved in two collaborative pieces that explore questions of fluidity between organism and machine: Teka-Mori and Why Should Our Bodies End at the Skin?

In her essay "A Cyborg Manifesto" [1], Donna Haraway explores implications of the increasing hybridization of humans and machines. Haraway summarizes: "Late twentieth-century machines have made thoroughly ambiguous the difference between natural and artificial, mind and body, self-developing and externally designed, and many other distinctions that used to apply to organisms and machines" [2]. In dispelling binaries and advocating for holistic perspectives, Haraway's writings have far-reaching philosophical and sociopolitical implications, providing critique that redefines notions of gender, identity and feminist approaches. While society has long been concerned with the ever-growing encroachment of technology onto human activity, most recently the development of artificial intelligence and robotics [3], Haraway proposes instead a kinship between organism and machine. In a broad sense, this kinship suggests hybrid bodies that transcend mechanical and organic boundaries, raising questions about embodiment in our contemporary technoculture where the lines between organism and machine are fluid. In a book about the "Chthulucene," Haraway invokes the term sympoiesis, meaning "making-with" [4], a term originally coined by M. Beth Dempster to describe "collectively-producing systems that do not have self-defined spatial or temporal boundaries. Information and control are distributed among components. The systems are evolutionary and have the potential for surprising change" [5]. In contrast to homeostatic, autonomous and autopoietic systems, "the more ubiquitous symbiogenesis seems to be in living beings' dynamic organizing processes, the more looped, braided, outreaching, involuted, and sympoietic is terran worlding" [6]. Thus, the hybridization of the body and machine reflects a symbiotic merging of systems.

A sonic-cyborg performance realizes this understanding of the human-machine hybrid through movement and sound. In this article, we describe our creative framework and the associated technologies involved in enacting sonic-cyborg performance through two recent collaborative pieces that explore these questions of fluidity between organism and machine: Teka-Mori (2013), for dancer, RAKS system and computer-generated sound; and Why Should Our Bodies End at the Skin? (2018), for dancer, RAKS system, robotic percussion, sound exciters and live sound processing. These pieces feature the Remote electroAcoustic Kinesthetic Sensing (RAKS) system, a wearable wireless sensor interface that translates a dancer's movement into sonic output, engaging an embodied creative practice that foregrounds tactile, kinetic and kinesthetic sensory experience.

background

Alongside research in the areas of gesture, human-computer/robotic interaction and new interfaces for musical expression, artists have developed interactive performance systems that link movement and sound production. Sensor technologies and robotics enable humans to interact with computer-controlled mechanical systems. In electronic music performance, sensors and robotics output movement data that can be mapped to musical parameters, enabling a direct link between human action and sound production. While these types of interactive systems have been created for a variety of purposes, several projects actively embrace the blurring of human and machine, including Tomie Hahn and Curtis Bahn's Pikapika [7], Laetitia Sonami's lady's glove [8], Onyx Ashanti's sonocyb1 exomesh bodyware system [9], Scott Barton's Cyther [10], LP Demers and Bill Vorn's Inferno [11], Åsa Unander-Scharin and Carl Unander-Scharin's Robocygne [12] and Marco Donnarumma's Xth Sense system [13]. Our pieces are positioned among these works with a focus on exploring the hybrid body through kinesonic composition and the interaction between mechatronic movement and sound.

hybrid body

The Remote electroAcoustic Kinesthetic Sensing (RAKS) system is an Arduino-based wearable wireless sensor interface that we designed (Fig. 1). The RAKS system enables a direct link between a dancer's movement and computer-generated sound production and processing. By linking these elements, we explore both the human and mechanical aspects of the hybrid body in performance. The RAKS system consists of a corset and a lightweight belt with sensors, a LilyPad Arduino and an XBee radio sewn into the belt. Sensors capture the dancer's movement and transmit the data wirelessly to a computer running Max software over a point-to-point network. The system is modular, and different sensors can be incorporated for different performances. The sensors used in Teka-Mori and Why Should Our Bodies End at the Skin? include a flex sensor in the corset and an accelerometer on the belt. The system also includes two programmable LED rings sewn into the corset that take data from the accelerometer in order to visualize the dancer's movements.
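To make the data path concrete, the following Python sketch illustrates one way a receiving computer might read sensor frames arriving from the XBee link and forward them to a Max patch as OSC messages. This is an illustrative reconstruction, not the authors' implementation: the serial port name, baud rate, comma-separated packet format and OSC addresses are all assumptions.

```python
# Minimal sketch of the RAKS data path (packet format and addresses are assumed).
# Requires: pyserial, python-osc.
import serial
from pythonosc.udp_client import SimpleUDPClient

SERIAL_PORT = "/dev/ttyUSB0"                   # hypothetical XBee receiver port
BAUD = 57600                                   # assumed radio baud rate
MAX_OSC = SimpleUDPClient("127.0.0.1", 7400)   # Max patch listening via [udpreceive 7400]

def run():
    """Read comma-separated sensor frames ("flex,ax,ay,az") and forward them as OSC."""
    with serial.Serial(SERIAL_PORT, BAUD, timeout=1) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            try:
                flex, ax, ay, az = (float(v) for v in line.split(","))
            except ValueError:
                continue  # skip malformed frames
            MAX_OSC.send_message("/raks/flex", flex)
            MAX_OSC.send_message("/raks/accel", [ax, ay, az])

if __name__ == "__main__":
    run()
```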

The development of the RAKS system is informed by gesture studies [14] and specifically our concepts of "choreographed sound" and "kinesonic composition" [15]. A kinesonic approach foregrounds embodied activity by integrating movement, kinetic and kinesthetic experience and sonic elements in the compositional process. In pieces using sensor-based interfaces, the designer directly defines and maps the relationship between movement and sound, employing physical gesture as a composable parameter. By focusing on the mechanics of movement and its relationship to sound, choreographing sound engages an embodied perspective that involves tactile, kinetic and kinesthetic sensory experience.

Fig. 1. Remote electroAcoustic Kinesthetic Sensing (RAKS) system, 2010. (© Steven Kemper)

Table 1. Movement vocabulary and RAKS system design.

Movement vocabulary from contemporary belly dance influences the design and functionality of the RAKS system, including the hardware, types of sensors and costume architecture. The types of movements employed in belly dance can be metaphorically linked to the fundamental concepts of electronic music. For example, continuous motion, such as torso body waves and chest circles, relates to analog oscillators, specifically the sine wave. Isolated movements, such as hip and shoulder locks, are binary, evoking the idea of mechanical switches. To capture these types of movements, we placed the flex sensor near the chest and torso and the accelerometer on the hips. Table 1 summarizes the elements of the movement and corresponding design features of the RAKS system. This approach to correlating movement and sound engages an embodied, kinesonic perspective [16] as movement serves to trigger, generate and alter sonic material. Integrating movement with sound production, the "body becomes the instrument" [17] and the dancer inhabits a "sonic body" [18].
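The two halves of this metaphor, continuous movements as oscillator-like control signals and isolated locks as switches, can be sketched as complementary mapping strategies. The Python fragment below is a hypothetical illustration rather than the mapping used in the RAKS patch; the smoothing constant and lock threshold are assumed values.

```python
import math

class KinesonicMapper:
    """Illustrative mapping: continuous motion -> smooth control, isolated locks -> triggers."""

    def __init__(self, smoothing=0.9, lock_threshold=2.5):
        self.smoothing = smoothing            # exponential smoothing factor (assumed)
        self.lock_threshold = lock_threshold  # jerk threshold read as a "lock" (assumed)
        self._smooth = 0.0
        self._prev_mag = 0.0

    def update(self, ax, ay, az):
        """Return (continuous_control, lock_triggered) from one accelerometer frame."""
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        # Continuous channel: smoothed magnitude, analogous to a slowly evolving oscillator control.
        self._smooth = self.smoothing * self._smooth + (1.0 - self.smoothing) * mag
        # Binary channel: a sudden jump in magnitude reads as an isolated hip or shoulder lock.
        jerk = abs(mag - self._prev_mag)
        self._prev_mag = mag
        return self._smooth, jerk > self.lock_threshold
```

In use, the continuous output would drive parameters such as amplitude or filter cutoff, while each lock event could trigger a discrete sound.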

mechatronic expression

Our performance strategies are influenced by the concept of "mechatronic expression," defined by Steven Kemper and Scott Barton in relation to music performed by robotic instruments [19]. This idea challenges the assumption that musical expressivity results from the capacity for sonic nuance, arguing that robotic instruments are capable of their own vocabularies of expressive gestures, distinct from those of human performers. These include the capability for hypervirtuosic speed, complex rhythms, humanly impossible articulations and algorithmic control, as well as instrument-specific idiosyncrasies and limitations, such as microvariations in timing caused by physical forces like friction and gravity [20]. Mimicking these types of gestures through movement and sound, as well as performing with robotic instruments, represents a central element of sonic-cyborg performance.
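As a rough illustration of these two qualities, the sketch below (our own hypothetical example, not code from the cited work) generates a pulse stream far beyond sustained human striking speed and perturbs each onset with a small random offset, standing in for the timing microvariations that physical forces introduce. The interval and jitter values are illustrative assumptions.

```python
import random

def mechatronic_onsets(num_hits=64, interval_s=0.04, jitter_s=0.002, seed=1):
    """Onset times (in seconds) for a 25 Hz pulse stream with mechanical micro-jitter.

    interval_s=0.04 is faster than sustained human drumming; jitter_s models small,
    physically induced timing deviations. Both values are illustrative assumptions.
    """
    rng = random.Random(seed)
    onsets = []
    t = 0.0
    for _ in range(num_hits):
        onsets.append(t + rng.uniform(-jitter_s, jitter_s))
        t += interval_s
    return onsets
```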

teka-mori

Teka-Mori, for dancer, RAKS system and computer-generated sound, features an interactive, bidirectional relationship between movement and sound. Teka refers to the vocalization of two different drum strokes ("tek" and "ka") on a doumbek. Mori, adapted from the Latin phrase memento mori, evokes the idea of lifelessness and decay. Teka-Mori conveys a dystopian, "broken-machine" aesthetic through noisy, distorted sonic materials.

Teka-Mori enacts a hybrid body through the direct connection between movement and timbral control, where the performer inhabits and controls this sonic machine. This relationship is intensified through the use of distorted, electronic sounds, including a bowed bar model [21], pulse-width modulation, wave-shaping, filtering and an oscillator bank. Additionally, sonically degraded acoustic recordings of the different drum strokes played back on a grid produce a mechanical reinterpretation of a deconstructed drum pattern. The piece begins with the dancer controlling a physical model of a bowed bar. The dancer's torso movements, including waves and a side-to-side figure-eight motion, increase and decrease the bow pressure. Hip circles and torso waves control the amplitude of the first 12 partials of a Chebyshev wave-shaping module. Lateral hip circles modulate midrange frequencies, and reverse torso waves modulate high frequencies. To shift between pitch and noise, lateral figure-eight movements in the hips, punctuated by pauses and slow side bends in the torso, change the center frequency of a low pass filter. The combination of these movements simultaneously affects pitch, rhythm and timbre. In the second part of the piece, the choreography consists of a combination of slow and rapid turn sequences punctuated by pauses in the movement. When the dancer is turning, an oscillator bank produces a cascade of sine waves up to a high partial determined by the level of movement. When the dancer pauses, these upper partials disappear, leaving only the lowest frequency audible. The output signal from this module is delayed, producing a sonic echo of the dancer's movement.
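The mappings described above can be summarized schematically. The Python sketch below is a hypothetical reconstruction of the kind of scaling involved; the ranges, scaling curves and OSC addresses are our assumptions, and in the piece these mappings live inside the Max patch rather than in external code. Torso flex drives the bow pressure of the bowed bar model, accumulated hip energy scales the Chebyshev partial amplitudes, and lateral hip motion sweeps the low-pass filter cutoff between pitch and noise.

```python
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 7400)  # hypothetical port of the Max synthesis patch

def scale(value, in_lo, in_hi, out_lo, out_hi):
    """Linearly map value from [in_lo, in_hi] to [out_lo, out_hi], clamped."""
    t = max(0.0, min(1.0, (value - in_lo) / (in_hi - in_lo)))
    return out_lo + t * (out_hi - out_lo)

def map_teka_mori(flex, hip_energy, lateral_tilt):
    # Torso waves and side-to-side figure eights -> bow pressure of the bowed bar model.
    client.send_message("/bowedbar/pressure", scale(flex, 0.0, 1.0, 0.1, 0.9))
    # Hip circles and torso waves -> amplitudes of the first 12 Chebyshev partials
    # (here given a simple 1/n rolloff scaled by overall hip energy).
    partials = [scale(hip_energy, 0.0, 1.0, 0.0, 1.0) / (i + 1) for i in range(12)]
    client.send_message("/chebyshev/partials", partials)
    # Lateral figure eights -> low-pass filter cutoff, shifting between pitch and noise.
    client.send_message("/lowpass/cutoff", scale(lateral_tilt, -1.0, 1.0, 200.0, 8000.0))
```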

Fig. 2. Configurable Automatic Drumming Instrument (CADI). (© Steven Kemper)

why should our bodies end at the skin?

Why Should Our Bodies End at the Skin?, for dancer, RAKS system, the Configurable Automatic Drumming Instrument (CADI), sound exciters and live sound processing, explores hybridization of organism and machine through the idea of embodied mechatronic expression. The title references Haraway's suggestion of a kinship between organism and machine [22]. This piece realizes the capability of the hybrid body in performance, sonically connecting mechanized human movement and humanized robotic action. Robots are often designed to complete human tasks; their motions reflect a mechanical abstraction of human movement [23]. The RAKS system combined with CADI connects the dancer's actions with machine tasks [24].

CADI is a solenoid-driven robotic percussion battery (Fig. 2). We designed the 3D-printed striking arms to hold a variety of beaters; mounted on microphone stands, they can easily be positioned to strike different instruments. CADI is controlled via MIDI, and velocity messages control the solenoids' on-times, affecting striking force and allowing for dynamic control. Steven Kemper, Troy Rogers and Scott Barton of Expressive Machines Musical Instruments (EMMI) originally designed CADI; the iteration used in Why Should Our Bodies End at the Skin? was designed by Kemper at Rutgers University.
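The velocity-to-force relationship can be sketched as follows. This Python example is illustrative only: the MIDI port name, note assignment and on-time range are hypothetical, and the actual velocity-to-pulse-width mapping is handled in CADI's firmware. A dynamic level between 0 and 1 is converted to a MIDI velocity, which the instrument would translate into a solenoid pulse width and hence a striking force.

```python
import mido

CADI_PORT_NAME = "CADI"   # hypothetical MIDI output port name
FLOOR_TOM_NOTE = 36       # hypothetical note assignment for one striking arm

def dynamic_to_velocity(dynamic):
    """Map a 0.0-1.0 dynamic level to a MIDI velocity (1-127)."""
    return max(1, min(127, int(round(dynamic * 127))))

def velocity_to_on_time_ms(velocity, min_ms=3.0, max_ms=15.0):
    """Illustrative firmware-side mapping: velocity -> solenoid pulse width in milliseconds."""
    return min_ms + (velocity / 127.0) * (max_ms - min_ms)

def strike(port, dynamic):
    """Send one note-on whose velocity encodes the desired striking force."""
    port.send(mido.Message("note_on", note=FLOOR_TOM_NOTE,
                           velocity=dynamic_to_velocity(dynamic)))

# Example usage:
# with mido.open_output(CADI_PORT_NAME) as port:
#     strike(port, 0.8)   # a forte hit
```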

The performance of Why Should Our Bodies End at the Skin? features a dancer equipped with the RAKS system, CADI, sound exciters actuating both drums and shaken percussion, and electroacoustic textures created from processed mechanical sounds (Fig. 3). To blur the boundaries between organism and machine, we have the percussion instruments encircle the dancer, serving as a visual and sonic extension of the body. The choreography reflects the mechanical nature of robotic movement, using isolations and body locks, while CADI produces a visual and sonic echo of this movement through rhythmic and sustained textures. The RAKS system translates the dancer's movement into sonic control in a variety of ways, including triggering CADI's attacks and varying the dynamics, panning sound events within the semicircular field of the setup and processing the electroacoustic texture. A feedback loop emerges between the dancer's movements and CADI's mechanical actions, calling into question whether the human is controlling the machine or the machine is controlling the human. These interactions are sympoietic, creating a symbiotic merging of the different systems.
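One of these mappings, panning sound events around the semicircle of instruments, can be sketched as pairwise amplitude panning across the exciter channels. The function below is a hypothetical illustration; the channel count and the equal-power law are assumptions. A position derived from the dancer's orientation selects an adjacent pair of channels and crossfades between them.

```python
import math

def semicircle_pan(position, num_channels=4):
    """Return per-channel gains for a position in [0.0, 1.0] across the semicircular array."""
    gains = [0.0] * num_channels
    x = max(0.0, min(1.0, position)) * (num_channels - 1)
    left = int(math.floor(x))
    right = min(left + 1, num_channels - 1)
    frac = x - left
    if left == right:
        gains[left] = 1.0
    else:
        gains[left] = math.cos(frac * math.pi / 2)   # equal-power crossfade
        gains[right] = math.sin(frac * math.pi / 2)
    return gains
```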

The piece consists of three overarching motivic ideas. The beginning develops through an accumulation of textural layers. Contrasting rhythmic sections reference the "drum solo," a Middle Eastern musical form for solo drummer or drummer with dancer consisting of a fast-tempo virtuosic improvisation where the lines between leader and follower are indistinguishable. In the final section, the dancer's actions combine with CADI, which itself begins to break down.

The piece begins with a solo dance encompassing short phrases of hip isolations and locks. After a few phrases, CADI "awakens" with rhythmic patterns that result from the preceding accelerometer movement data. The rhythm mimics the movement but also adds extra reverberations to which the dancer reacts through movement. The next section weaves in and out of several interactive modes between the movement and the sonic texture. First, the dancer moves to automated timbral changes in the texture; then the dancer's movement itself controls timbral shifts. The flex sensor data controls the amplitude of the 12 upper partials of Chebyshev wave-shaping played through the sound exciters. The result is an ebb and flow in the density and amplitude of the texture, referencing the continuous movements found in the torso, hips and chest.

The middle section of the piece features two interactive drum solos, one where the dancer leads the robotic percussion and one where the robotic percussion leads the dancer. Within the context of a 13-beat rhythm, the dancer slowly rotates and faces one of the toms, making "eye contact" with the instrument. At this point, control over the rate at which the arm strikes the drum is transferred to the dancer, who can vary the speed of rolls through a stomach flutter movement. After this solo, the performer returns to make eye contact with the floor tom and responds to an algorithmically generated solo on that instrument.
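The flutter-to-roll mapping can be sketched simply. In the illustrative Python fragment below (the sensor scaling and timing ranges are assumed, not taken from the performance patch), the energy of the stomach flutter, measured as the variance of recent accelerometer magnitudes, is mapped inversely to the interval between strikes, so a more intense flutter yields a faster roll.

```python
import statistics

def flutter_to_roll_interval(recent_accel, slow_s=0.25, fast_s=0.04):
    """Map flutter energy to the interval between solenoid strikes.

    recent_accel: a list of recent accelerometer magnitudes.
    Returns seconds between strikes; the ranges are illustrative assumptions.
    """
    if len(recent_accel) < 2:
        return slow_s
    energy = statistics.pvariance(recent_accel)
    t = max(0.0, min(1.0, energy / 0.5))   # normalize against an assumed maximum energy
    return slow_s + t * (fast_s - slow_s)  # more flutter -> shorter interval -> faster roll
```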

In the final section, the dancer's actions intertwine with the machine, which begins to "malfunction." The dancer plays the drums along with CADI, which performs a series of increasingly irregular and accelerating loops. The piece ends with the dancer resting drumsticks on the floor toms. The sound exciters mounted to these drums resonate, causing the drumsticks to vibrate on the drumheads. This creates a physical and sonic connection between the dancer's body and the electromechanically actuated acoustic instruments.

Fig. 3. Why Should Our Bodies End at the Skin? in performance, 17 February 2018. (Photo courtesy of the Ammerman Center for Arts and Technology.)

conclusion

The development of sensor-mediated performance has enabled us to investigate Haraway's conception of the hybrid body and develop a practice of sonic-cyborg performance. By exploring the performative possibilities of this type of extended system, we have attempted to recontextualize negative implications of a hybrid body. In this sense, we actualize Haraway's notion of a kinship between organism and machine, exploring a world where the cyborg represents not an inevitable march toward a technological singularity but rather an evolving, creative body.

Supplementary Material

• Performance video of Why Should Our Bodies End at the Skin? for dancer, RAKS system, robotic percussion, sound exciters, and live sound processing.

• Performance video of Teka-Mori for dancer, RAKS system, and computer-generated sound.

Aurie Hsu, artist, educator, researcher
Oberlin Conservatory of Music, Technology in Music and Related Arts (TIMARA) Department, 77 West College Street, Oberlin, OH 44074, U.S.A. Email: ahsu@oberlin.edu. Web: www.auriehsu.com. ORCID: 0000-0003-3852-446X.
Steven Kemper, artist, educator, researcher
Mason Gross School of the Arts, Music Technology Department, Rutgers, State University of New Jersey, 81 George Street, New Brunswick, NJ 08901, U.S.A. Email: skemper@mgsa.rutgers.edu. Web: www.stevenkemper.com. ORCID: 0000-0003-4146-3792.
Aurie Hsu

Aurie Hsu is a composer and performer. She is currently assistant professor of computer music and digital arts in the Technology in Music and Related Arts (TIMARA) department at the Oberlin Conservatory.

Steven Kemper

Steven Kemper is a composer and music technologist. He is currently associate professor of music technology and composition at the Mason Gross School of the Arts at Rutgers University.

References and Notes

1. Donna Haraway, "A Cyborg Manifesto: Science, Technology, and Socialist-Feminism in the Late Twentieth Century," in Simians, Cyborgs, and Women: The Reinvention of Nature (New York: Routledge, 1991).

2. Haraway [1] pp. 293–294.

3. Aaron Smith and Janna Anderson, "AI, Robotics, and the Future of Jobs" (6 August 2014): www.pewinternet.org/2014/08/06/future-of-jobs (accessed 6 November 2018).

4. Donna Haraway, Staying with the Trouble: Making Kin in the Chthulucene (Durham, NC: Duke Univ. Press, 2016) p. 58.

5. M. Beth Dempster quoted in Haraway [4] p. 61.

6. Haraway [4] p. 61.

7. Tomie Hahn and Curtis Bahn, "Pikapika—The Collaborative Composition of an Interactive Sonic Character," Organised Sound 7, No. 3, 229–238 (2002).

8. Laetitia Sonami, lady's glove: www.sonami.net/ladys-glove (accessed 12 March 2019).

9. Onyx Ashanti, Wreck now—sonocybin: www.youtube.com/watch?v=gaFi_jUHS4s (accessed 12 March 2019).

10. Scott Barton, Ethan Prihar and Paulo Carvalho, "Cyther: A Human-Playable, Self-Tuning Robotic Zither," Proceedings of the 2017 New Interfaces for Musical Expression Conference (Copenhagen, 2017) pp. 319–324.

11. LP Demers and Bill Vorn, Inferno: www.billvorn.concordia.ca/menuall.html (accessed 12 March 2019).

12. Åsa Unander-Scharin and Carl Unander-Scharin, "Robocygne: Dancing Life into an Animal-Human-Machine," Leonardo 49, No. 3, 212–219 (2016).

13. Marco Donnarumma, "Music for Flesh II: Informing Interactive Music Performance with the Viscerality of the Body System," Proceedings of the 2012 Conference on New Interfaces for Musical Expression (Ann Arbor, MI: University of Michigan, 2012).

14. Claude Cadoz and Marcelo M. Wanderley, Gesture—Music (Paris: IRCAM, 2000).

15. Aurie Hsu and Steven Kemper, "Kinesonic Composition as Choreographed Sound: Composing Gesture in Sensor-Based Music," Proceedings of the 2015 International Computer Music Conference (Denton, TX: University of North Texas, 2015) pp. 412–415.

16. Aurie Hsu and Steven Kemper, "Kinesonic Approaches to Mapping Movement and Music with the Remote electroAcoustic Kinesthetic Sensing (RAKS) System," Proceedings of the 2nd International Workshop on Movement and Computing (MOCO '15) (New York: ACM, 2015) pp. 45–47. DOI: www.dx.doi.org/10.1145/2790994.2791020.

17. Godfried-Willem Raes, " 'Namuda Studies': Doppler Radar-Based Gesture Recognition for the Control of a Music Robot Orchestra" (2012): www.logosfoundation.org/ii/Namuda_JIM_paper.doc (accessed 5 November 2018).

18. Hahn and Bahn [7] pp. 229–238.

19. Steven Kemper and Scott Barton, "Mechatronic Expression: Reconsidering Expressivity in Music for Robotic Instruments," Proceedings of the 18th International Conference on New Interfaces for Musical Expression (NIME) (Blacksburg, VA: Virginia Tech University, 2018) pp. 84–87.

20. Ajay Kapur et al., "Collaborative Composition for Musical Robots," Journal of Science and Technology of the Arts 1, No. 1 (2009) p. 49.

21. Dan Trueman and R. Luke DuBois, PeRColate: www.music.columbia.edu/percolate (accessed 1 January 2019).

22. Haraway [1] p. 295.

23. Cynthia Breazeal and Brian Scassellati, "Robots That Imitate Humans," Trends in Cognitive Sciences 6, No. 11, 481–487 (2002): www.doi.org/10.1016/S1364-6613(02)02016-8.

24. Oscar D. Lara and Miguel A. Labrador, "A Survey on Human Activity Recognition Using Wearable Sensors," IEEE Communications Surveys Tutorials 15, No. 3, 1192–1209 (2013): www.doi.org/10.1109/SURV.2012.110112.00192.
