Jonestown sonic experimentation under way
Every sound can be reinterpreted.
CLICK PHOTOS TO ENLARGE, SOUND CLIPS TO HEAR
The path from the sound source to the ear isn’t always clear or short.
The sound might be muffled by distance. There might be a wall between the hearer and the sound source. Earwax might rob the sound source of its crisp high end.
Then again, there might be an analog or digital electronic sound path that distorts and transforms the original sound signal into something new.
Is that an interesting thing?
Sometimes.
Sometimes it’s just research.
But you have to go through a lot of research to find what might be some new avenue of sound.
The path to my Jonestown opera is paved with experimentation.
Most of it produces unattractive results in the short term. But in the long term, a level of expertise and control evolves that helps to develop something new.
When I first started working with the Jonestown tapes in the early 1980s (taken from an NPR broadcast of a piece called “Our Father Who Art In Hell: The Last of Jonestown”), the only option for sound experimentation was some form of tape manipulation. You could speed the tapes up or slow them down. You could layer individual tracks and control their relative sonic balance. You could cut tapes up and reassemble them.

I did all of those things, and then I hit upon tape loops. You record a phrase of some length to a piece of magnetic recording tape, then cut that section from the reel and splice it back to itself, end to end, so that the phrase plays in perpetuity. Layer phrases of different lengths and sound sources and you start to get something complex and interesting.
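The layered-loop effect can be sketched numerically. Loops of different lengths drift in and out of alignment, and the combined texture only repeats once every loop has come back around, i.e. after the least common multiple of the individual loop lengths. (The loop lengths below are made-up examples, not from the actual tapes.)

```python
from math import gcd
from functools import reduce

def combined_period(loop_lengths):
    """Seconds before a stack of tape loops realigns: the least
    common multiple of the individual loop lengths."""
    return reduce(lambda a, b: a * b // gcd(a, b), loop_lengths)

# Three hypothetical loops of 3, 4, and 5 seconds: the layered
# pattern of phrases only repeats every 60 seconds.
print(combined_period([3, 4, 5]))  # 60
```

So even a few short loops can generate a texture that takes minutes to repeat, which is what makes the technique so fertile.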
In the 1980s, digital sampling came into being, at first through toy keyboards and later through an evolving series of professional samplers. Early samplers had limited memory, so the phrases recorded into them had to be either short or captured at a lower sample rate (with a corresponding degradation of sound). Those early samplers also produced relative pitch simply by varying playback speed, so pitch and duration were locked together. An octave higher is a doubling of frequency, with half the wavelength. So if you had a phrase and played it against itself an octave higher on the keyboard, the higher-pitched version would play out in half the time. Similarly, lower pitches took longer to play out. And the mathematical ratios of their playing lengths all followed the relative pitches of the equal-tempered western keyboard.
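The pitch/length coupling on those early samplers follows directly from twelve-tone equal temperament, where each semitone multiplies the playback speed by the twelfth root of two. A minimal sketch (the phrase length is an arbitrary example):

```python
# Early varispeed-style sampling: transposing a sample changes its
# playback speed, so pitch and duration move together.

def speed_ratio(semitones):
    """Playback-speed multiplier for a transposition of `semitones`
    in twelve-tone equal temperament (12 semitones = one octave)."""
    return 2 ** (semitones / 12)

def played_length(original_seconds, semitones):
    """How long the transposed sample takes to play out."""
    return original_seconds / speed_ratio(semitones)

# A 4-second phrase played an octave up (+12 semitones) lasts 2 seconds;
# played an octave down (-12), it stretches to 8 seconds.
print(played_length(4.0, 12))   # 2.0
print(played_length(4.0, -12))  # 8.0
```

Every interval on the keyboard produces its own length ratio, which is why layering a phrase against transposed copies of itself immediately generates shifting, out-of-phase counterpoint.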
JONES 1A SNIPS 1-3 ARE UNALTERED SAMPLER MANIPULATIONS OF PHRASES SPOKEN BY JIM JONES. Click title to hear sound.
Sampling software improvements made it possible to vary the pitch of a sound source according to western tuning without varying the length of the original sampled sound.
Later samplers had more memory and could accommodate longer samples and higher bit rates, resulting in better sound. And the advent of computer-based samplers meant huge libraries of individual sounds could be developed.
Increasing processor speed and cheap, plentiful memory opened up more possibilities for software-controlled effects of all sorts, generally broken down into filtering effects and time effects (delay, reverb). In recent years, software such as iZotope’s Stutter Edit and Native Instruments’ The Finger and The Mouth has allowed keyboard players to manipulate a track on the fly, carving the sound up, making it repeat faster or slower, or filtering the content of the sampled phrase. Each key of the keyboard triggers a different process applied to the sound.
In the past year, iZotope has come up with another highly interesting tool for sound designers called Iris. Iris allows the composer to shape the spectral components of a sound, either in real time or in pre-created ways.
As a means of letting people in on my creative process, I have been generating some examples of what I’m currently researching. So that people will have something to hang onto, I have used a section of the Reverend Jim Jones trying to talk his flock into taking the Kool-Aid.
He says, “Please, for God’s sake. Let’s be done with it. We’ve lived as no other people have lived, and loved. Let’s just be done with the agony of it.”
This phrase has been broken into sub-phrases and tightly mapped to the music keyboard, so that as one plays the keyboard from left to right one adds material from further along in the phrase. Some key phrases are repeated in looped and unlooped forms. All have been mapped in such a way that they play out in different locations of the stereo field.
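A toy sketch of that left-to-right mapping follows. The key numbers, sub-phrase boundaries, and pan positions here are illustrative assumptions, not the actual mapping used in the piece:

```python
# Sub-phrases of the Jones quote, in spoken order.
sub_phrases = [
    "Please,",
    "for God's sake.",
    "Let's be done with it.",
    "We've lived as no other people have lived, and loved.",
    "Let's just be done with the agony of it.",
]

# Map successive MIDI keys (left to right, starting at middle C = 60)
# to successively later material, spreading the voices across the
# stereo field from hard left (-1.0) to hard right (+1.0).
keymap = {
    60 + i: {"phrase": p,
             "pan": -1.0 + 2.0 * i / (len(sub_phrases) - 1)}
    for i, p in enumerate(sub_phrases)
}

print(keymap[60]["phrase"], keymap[60]["pan"])  # first sub-phrase, hard left
print(keymap[64]["phrase"], keymap[64]["pan"])  # last sub-phrase, hard right
```

Playing up the keyboard then literally walks through the sentence, while the pan spread keeps each fragment in its own spot in the stereo image.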
I have created a couple of basic rhythmic patterns as one might assemble a song. These are accompanied by a separate track of gongs and low bowed basses.
THE ABOVE CLIPS ARE VARIATIONS OF THE FIRST THREE CLIPS, PROCESSED WITH IZOTOPE STUTTER EDIT ONLY. Click clip to hear.
In later variations I have processed the spoken voice track using u-he’s Filterscape (which creates complex filtering patterns) and iZotope’s Stutter Edit. There are several variations of each to briefly illustrate a bit of the range each can create.
THE ABOVE SOUND CLIPS ARE VARIATIONS ON THE ORIGINAL SOUND SAMPLES USING BOTH FILTERSCAPE AND STUTTER EDIT
The last piece was a track created using iZotope’s Iris to sculpt the original phrase, then trigger new rhythmic possibilities.
THE ABOVE SOUND CLIPS WERE CREATED USING IZOTOPE IRIS
As you can hear, there’s an infinite range of possibilities from a single sound source.
Now multiply this by thousands of hours of tapes and you begin to see the need for a lot of pure experimentation in order to arrive at a new kind of storytelling, and a new sound.
– Daniel Buckley, October 2012