MH/CH Gestural Interface Project
MH/CH are a pair of bespoke glove controllers, built with Teensy microcontrollers and a variety of sensors, designed specifically for gestural live electronics performance in higher-order ambisonics. This project includes documentation of their use in a real-time performance of "Convergence" with double-bassist Aleksander Gabryś at Stanford University's Center for Computer Research in Music and Acoustics.
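For reference, below is a minimal firmware sketch in the spirit of MH's sensor layer, written for a Teensy in the Arduino dialect of C++: it reads one 2-axis control stick, the ribbon sensor, and a 3-axis analog accelerometer, then streams the values over USB serial for a host such as Max/MSP to parse. The pin assignments, update rate, and serial format are illustrative assumptions, not the actual MH/CH firmware.

```cpp
// Minimal Teensy sketch (assumption: pin choices and serial format are
// illustrative, not the actual MH firmware). Reads one 2-axis control
// stick, a ribbon sensor, and a 3-axis analog accelerometer, then
// streams the values over USB serial at roughly 100 Hz.

const int STICK_X_PIN = A0;  // hypothetical wiring
const int STICK_Y_PIN = A1;
const int RIBBON_PIN  = A2;
const int ACCEL_X_PIN = A3;
const int ACCEL_Y_PIN = A4;
const int ACCEL_Z_PIN = A5;

void setup() {
  Serial.begin(115200);      // Teensy USB serial ignores the baud value,
                             // but setting one keeps the sketch portable
  analogReadResolution(12);  // Teensy ADCs support 12-bit reads
}

void loop() {
  // Read each sensor and normalize to 0.0-1.0 for easier mapping downstream.
  float sx = analogRead(STICK_X_PIN) / 4095.0;
  float sy = analogRead(STICK_Y_PIN) / 4095.0;
  float rb = analogRead(RIBBON_PIN)  / 4095.0;
  float ax = analogRead(ACCEL_X_PIN) / 4095.0;
  float ay = analogRead(ACCEL_Y_PIN) / 4095.0;
  float az = analogRead(ACCEL_Z_PIN) / 4095.0;

  // One space-separated frame per line, e.g. "0.512 0.498 0.000 0.481 0.522 0.613"
  Serial.print(sx, 3); Serial.print(' ');
  Serial.print(sy, 3); Serial.print(' ');
  Serial.print(rb, 3); Serial.print(' ');
  Serial.print(ax, 3); Serial.print(' ');
  Serial.print(ay, 3); Serial.print(' ');
  Serial.println(az, 3);

  delay(10);  // ~100 Hz update rate
}
```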
Program Note:
Convergence is a work composed for augmented double-bass and electronics performer in third-order ambisonics, which explores numerous concepts including the interactivity and agency between acoustic / electronic elements, and the interplay of gesture and musical materials in three-dimensional space. Convergence is the second piece in a small collection of works developed for five-string double-bass and ambisonic electronics, in collaboration with bassist Aleksander Gabryś.
In order to perform Convergence, the bass is outfitted with eight microphones placed at various points across the body of the instrument. This causes the physical actions of the bassist to correspond not only to specific sounds / timbres, but also to discrete points in space. However, this perceptual mapping is then manipulated and paired against new electronically generated materials in real time by an electronics performer using a pair of bespoke electronic performance interfaces, MH & CH. MH is constructed out of a metal frame which mounts to the palm; this structure places five 2-axis control sticks at the performer’s fingertips, along with a ribbon sensor on the back of the hand and a 3-axis accelerometer. Meanwhile, CH is constructed with flex sensors, which provide data about the bending of individual fingers, and another 3-axis accelerometer. Both of these controllers feed into machine-learning processes (made with the ml.star package for Max/MSP and Wekinator) which allow for the classification of a number of gestures, as well as for shaping the sometimes chaotic control data.
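As a rough illustration of the Wekinator side of that pipeline (the actual routing in Convergence happens inside Max/MSP), the sketch below forwards glove sensor frames to Wekinator over OSC using its documented defaults: float messages to /wek/inputs on port 6448. The use of liblo, the stdin line format, and the localhost address are assumptions made for the example only.

```cpp
// Forward glove sensor frames to Wekinator as OSC (a sketch, not the actual
// project code: the real routing happens inside Max/MSP). Assumes Wekinator's
// defaults: listening on localhost:6448 for "/wek/inputs".
// Uses liblo; build with: g++ forward.cpp -llo -o forward
#include <lo/lo.h>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

int main() {
  // Wekinator's default input port is 6448.
  lo_address wek = lo_address_new("127.0.0.1", "6448");

  std::string line;
  // Each stdin line is a frame of whitespace-separated floats,
  // e.g. piped from a dump of the glove controller's serial stream.
  while (std::getline(std::cin, line)) {
    std::istringstream iss(line);
    std::vector<float> frame;
    float v;
    while (iss >> v) frame.push_back(v);
    if (frame.empty()) continue;

    // Build the OSC message by hand so any number of inputs works.
    lo_message msg = lo_message_new();
    for (float f : frame) lo_message_add_float(msg, f);
    lo_send_message(wek, "/wek/inputs", msg);
    lo_message_free(msg);
  }

  lo_address_free(wek);
  return 0;
}
```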
All of these ideas collide in a densely chaotic and gestural work which pushes both performers to their respective limits, musically and technically, while interacting with a performance system that encourages intricate and nimble musical interactions. In working with Aleksander, we developed a set of expectations and rules which governed the performance, and which allowed for occasionally subtle, sometimes pronounced shifts in our musical roles. Convergence also makes use of an audio score paired with visual cues, visible to the performers on a small screen nearby. Ultimately, the chaotic nature of the work gives both performers agency to explore the sonic and performative extremes of this complex system, as well as the liminal spaces which exist in-between.
//
Technical Notes:
Both Convergence and its companion piece Conduit make use of some fixed-media sound, as well as a microphone array, which was developed and recorded with the assistance of Mr. Gabryś in January 2020 at the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. In order to create this system, eight microphones were placed at various points across the body of a five-string double-bass. Each of these microphone inputs was then spatialized into a three-dimensional “exploded map” of the instrument using the ICST Ambisonics Package in Max/MSP. In addition to placing the listener “inside” the fragmented instrument, this technique allows physical gestures, and the sounds produced all over the instrument, to correspond to literal points in space. In both Convergence and Conduit, this technique is further employed in real time, in third-order and fifth-order ambisonics respectively.
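To make the idea of the “exploded map” concrete, here is a conceptual encoder sketch: each microphone channel is assigned a fixed azimuth / elevation and summed into a shared ambisonic mix. It is written at first order (classic B-format) for brevity, whereas the pieces use the ICST Ambisonics Package in Max/MSP at third and fifth order; the struct and function names are hypothetical, and the placements are placeholders rather than the mapping actually used in Convergence.

```cpp
// Conceptual sketch of the "exploded map": each microphone channel gets a
// fixed direction and is encoded into a shared ambisonic mix. Shown at first
// order (classic B-format, with W scaled by 1/sqrt(2)) for brevity; the
// pieces themselves use the ICST Ambisonics Package in Max/MSP at higher order.
#include <array>
#include <cmath>
#include <vector>

struct MicPlacement {
  double azimuth;    // radians, counter-clockwise from front
  double elevation;  // radians, up is positive
};

// One block of first-order output: W, X, Y, Z.
using BFormat = std::array<std::vector<float>, 4>;

BFormat encodeExplodedMap(const std::vector<std::vector<float>>& micBlocks,
                          const std::vector<MicPlacement>& placements) {
  const size_t frames = micBlocks.empty() ? 0 : micBlocks[0].size();
  BFormat out;
  for (auto& ch : out) ch.assign(frames, 0.0f);

  // Sum every microphone into the mix at its assigned direction.
  for (size_t m = 0; m < micBlocks.size(); ++m) {
    const double az = placements[m].azimuth;
    const double el = placements[m].elevation;
    // First-order encoding gains for a plane wave arriving from (az, el).
    const float gW = 1.0f / std::sqrt(2.0f);
    const float gX = static_cast<float>(std::cos(az) * std::cos(el));
    const float gY = static_cast<float>(std::sin(az) * std::cos(el));
    const float gZ = static_cast<float>(std::sin(el));

    for (size_t n = 0; n < frames; ++n) {
      const float s = micBlocks[m][n];
      out[0][n] += gW * s;
      out[1][n] += gX * s;
      out[2][n] += gY * s;
      out[3][n] += gZ * s;
    }
  }
  return out;
}
```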
The microphone array was achieved with the following microphone models and placements:
2x DPA 4099 Microphones: One placed at the primary point of bow contact (near the bridge) on the strings, and the other placed at the top of the fingerboard.
4x AKG C411 Condenser Vibration Microphones: Placed in a “four-corners” pattern on the front of the instrument, with two on opposite sides of the upper bout, and the other two on opposite sides of the lower bout.
2x Schertler Bass Pickup Microphones: One attached directly to the bridge, and the other attached directly to the tailpiece.
Douglas McCausland: Composer / Performer
Douglas McCausland is a composer / performer who is fascinated with new aesthetic and technological domains, and whose chaotic and dense works explore the extremes of sound and the digital medium. Through his work, he investigates the various intersections of real-time electronic music performance with handmade interfaces / instruments, spatial audio (higher-order ambisonics and binaural), dynamic / interactive systems, the musical applications of machine-learning, experimental sound design, and DIY electronics / hardware-hacking.
His works have been performed internationally at festivals and symposiums including: Sonorities (SARC), SEAMUS, the San Francisco Tape Music Festival, Splice, MISE-EN Music Festival, Klingt Gut!, Sounds Like THIS!, Electronic Music Midwest, NYCEMF, Sonicscape, CEMEC, Eureka!, and CEMICircles. Recent honors include a finalist nomination for the 2021 ASCAP/SEAMUS commission competition, winning the gold prize for “contemporary computer music” in the Verband Deutscher Tonmeister Student 3D Audio Production Competition, and being named runner-up for the International Confederation of Electroacoustic Music's 2019 CIME Prix.
Douglas is currently a DMA candidate at Stanford University, working towards his doctorate in Composition while studying with Chris Chafe, Patricia Alessandrini, Jaroslaw Kapuscinski, Fernando Lopez-Lezcano, and Mark Applebaum.
Connect with Douglas McCausland
How I can help you:
I offer services in composition, live performances, sound design (including spatial audio and binaural), Max/MSP and Pd development, and education / instruction. Feel free to reach out to me directly to inquire about services.
How you can help me:
Follow me on YouTube, and check out my work via my website!