My newest piece, Ñamandú for saxophone and interactive electronics, is going to be premiered on July 12 at the World Saxophone Congress in Zagreb, Croatia. Katherine Weintraub commissioned the piece and will perform it along with In Rarefied Air, the solo piece I wrote for her last year. I'm honored to collaborate with such a high-calibre musician as Katherine - she's doing an amazing job with it, and her performance is going to be fantastic! Here's the recital information. This will be the first performance of my music at the World Saxophone Congress, and in Croatia - I'm very excited to have my music heard by new audiences!
I'd like to explain some of my process for creating this piece. I returned to the visual programming environment Max/MSP to create the electronic sounds, as I did for Draconids several years ago. It had been a while since I had used Max, so the Electronic Music and Sound Design series by Cipriani and Giri helped me get back into it! I enjoy the flexibility of the programming environment and the ability to apply real-time effects to a performer's sound; however, since Katherine and I are on opposite sides of the country and I will not be present for the premiere, I decided not to rely heavily on live-processed sounds, focusing instead on pre-recorded sounds triggered by the performer.
First, Katherine sent me some audio samples, and I made a Max patch to create sounds using techniques like granular synthesis, a sort of side-chain compression, and some extreme time stretching based on the spectral processing techniques that Jean-François Charles developed for Max. I did a lot of experimentation and recorded some raw materials. About 95% of the sounds I created in this piece were processed from either saxophone recordings or bird sounds (or both).
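For readers curious what granular time stretching looks like outside of Max, here's a minimal Python/NumPy sketch of the basic idea: short windowed grains are scattered onto an output buffer several times longer than the source, with each grain read from the corresponding un-stretched point. It's a rough illustration of the general technique, not the patch I actually used, and all the parameter values are arbitrary:

```python
import numpy as np

def granulate(source, sr=44100, grain_ms=60, density=200, stretch=4.0, seed=0):
    """Scatter short, windowed grains from `source` onto an output buffer
    `stretch` times longer, producing a time-stretched granular texture."""
    rng = np.random.default_rng(seed)
    grain_len = int(sr * grain_ms / 1000)
    window = np.hanning(grain_len)                 # smooth grain envelope
    out = np.zeros(int(len(source) * stretch) + grain_len)
    n_grains = int(density * len(out) / sr)        # `density` grains per second
    for _ in range(n_grains):
        out_pos = rng.integers(0, len(out) - grain_len)
        # read from the corresponding (un-stretched) point in the source
        src_pos = min(int(out_pos / stretch), len(source) - grain_len)
        out[out_pos:out_pos + grain_len] += source[src_pos:src_pos + grain_len] * window
    return out / max(1.0, np.max(np.abs(out)))     # normalize to avoid clipping
```

Randomizing the grain positions (rather than stepping through the source in order) is what gives the stretched texture its characteristic cloud-like quality.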
Next, I put the piece together in Reaper, where I could have tight control over dynamics, EQ, and timing. After I had a complete mix, I extracted the tracks, one section at a time, until I had a bunch of "slices" of the piece in the form of separate audio files. Then I went back to Max and built the performance patch, which triggers playback of these slices via pitch tracking of the saxophone (certain notes cue sounds) and a foot pedal. This is where the piece became flexible and responsive to the performer. As you can imagine, the process took a while, but it allowed me to create a well-mixed, stable result that still interacts dynamically with the performer. Here's the patch interface:
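The cue logic itself is simple enough to sketch in a few lines. This is a hypothetical Python simplification - the slice file names and trigger notes are invented for illustration, not taken from the actual patch, which does this with Max objects:

```python
# Hypothetical cue list and trigger notes (not from the real patch).
SLICES = ["slice01.wav", "slice02.wav", "slice03.wav"]
TRIGGER_NOTES = {61, 66, 70}  # MIDI notes that act as cues

class CuePlayer:
    """Advances through pre-rendered slices on cue notes or pedal presses."""

    def __init__(self, slices, trigger_notes):
        self.slices = slices
        self.triggers = trigger_notes
        self.index = 0

    def on_note(self, midi_note):
        """Called by the pitch tracker; fires the next slice on a cue note."""
        if midi_note in self.triggers:
            return self._fire()
        return None  # non-cue notes are ignored

    def on_pedal(self):
        """The foot pedal always advances to the next slice."""
        return self._fire()

    def _fire(self):
        if self.index >= len(self.slices):
            return None  # no slices left
        cue = self.slices[self.index]
        self.index += 1
        return cue
```

The key design point is that the performer, not a fixed timeline, decides when each slice begins - which is what keeps a fully pre-mixed electronics part feeling responsive.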
Don't worry though, there is still some live processing going on! One of my goals for this piece was real-time polyphony, where the saxophonist's sound is transposed to different notes as they play. I've heard plenty of pieces where the soloist's sound is transposed by a fixed octave or fifth, but I wanted the transposition to be dynamic, so that each note the soloist plays can be sent to different notes - actual harmony rather than static transposition. It was not as hard to do as I expected. I've only heard this done once before, by Russell Pinkston, with whom I studied several years ago (though not this technique specifically). I don't know if my polyphony works as well as his, but I was pleased to be able to use the technique effectively in my piece. Here's a demo:
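Conceptually, this kind of dynamic polyphony boils down to giving each incoming note its own set of target notes instead of one fixed interval. Here's a tiny Python sketch of that idea; the harmony table is invented for illustration, and in a real patch the resulting ratios would drive a pitch shifter on the live signal:

```python
# Hypothetical harmony map: each detected note gets its own chord tones,
# rather than a single fixed transposition interval.
HARMONY = {
    62: [65, 69],  # D4 -> F4 + A4
    64: [67, 71],  # E4 -> G4 + B4
    65: [69, 72],  # F4 -> A4 + C5
}

def shift_ratios(detected_note, harmony=HARMONY):
    """Return the playback-rate ratios a pitch shifter would need to turn
    the detected MIDI note into each of its mapped harmony notes."""
    targets = harmony.get(detected_note, [])
    # equal-tempered ratio: 2^(semitones/12)
    return [2 ** ((t - detected_note) / 12) for t in targets]
```

Because the map is keyed on the detected note, a D and an E played in sequence each get their own harmony, which is what distinguishes this from the usual "transpose everything up a fifth" approach.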
Finally, I used ambisonic encoding to move the sounds in three-dimensional space - that's what the five circles at the bottom of my patch represent. Ambisonics gives me great control over the movement and placement of sound around the listener, and it also lets me easily translate the piece to different speaker arrays, which will come in handy when performing in different venues.
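For the curious, first-order ambisonic encoding of a mono source is just a handful of trig formulas. Here's a Python sketch using the classic FuMa-style B-format channels (W, X, Y, Z); my patch does this with Max objects rather than code like this:

```python
import math

def encode_bformat(sample, azimuth, elevation):
    """First-order (FuMa B-format) ambisonic encoding of one mono sample
    at a given direction. Angles are in radians; azimuth 0 is front."""
    w = sample * (1 / math.sqrt(2))                    # omnidirectional
    x = sample * math.cos(azimuth) * math.cos(elevation)  # front-back
    y = sample * math.sin(azimuth) * math.cos(elevation)  # left-right
    z = sample * math.sin(elevation)                      # up-down
    return w, x, y, z
```

The payoff is exactly the flexibility mentioned above: the encoded W/X/Y/Z signals are speaker-independent, so the same material can be decoded to whatever array a venue happens to have.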
Ñamandú is the name of the creator deity in the mythology of the Mbyá people, a Guaraní tribe native to central South America. As you may tell from my patch image, the hummingbird is important in this legend and in my piece. In another post I will write about the theme and inspiration for Ñamandú.