Steve Reich

It's Gonna Rain

1965

Performing and listening to a gradual musical process resembles: pulling back a swing, releasing it, and observing it gradually come to rest; turning over an hour glass and watching the sand slowly run through to the bottom; placing your feet in the sand by the ocean's edge and watching, feeling, and listening to the waves gradually bury them. Steve Reich, Music as a Gradual Process, 1968

The first piece we're going to explore takes us back to the mid-1960s. This was a time right after the Cuban Missile Crisis, when the end of the world by nuclear annihilation did not seem all that implausible.

In this tense atmosphere, young American composer Steve Reich was experimenting with using reel-to-reel magnetic tape recorders for the purpose of making music.

A reel-to-reel tape recorder. Photo: Alejandro Linares Garcia.

One of the exciting new possibilities of this technology was that you could record your own sounds and then play them back. Reich was using this possibility to make field recordings. In 1965 he happened upon San Francisco's Union Square, where a Pentecostal preacher called Brother Walter was bellowing about the end of the world.

From this tape Reich would construct what is considered to be his first major work and one of the definitive pieces of American minimal music: "It's Gonna Rain".

As you listen to it, I'd like you to pay particularly close attention to the movement that begins around the two-minute mark and lasts for about six minutes. This is the part produced by the process that we're about to examine.

Now, this is not exactly easy listening. It gets pretty intense. Notwithstanding the pigeon drummer, it challenges most people's conception of what "music" even means. This is basically just looped human speech, isn't it?

To me, what this piece seems to be saying is "we're going to take this subsecond apocalyptic sound clip and we're going to listen the shit out of it". And as we listen, the clip starts shifting and layering on top of itself. The words disappear and it all just becomes pure sound. Even though no new sonic information is being introduced, you keep noticing new things all the time — things that were there all along but you just hadn't paid attention to. Until, at the end, the words re-emerge and we're back where we started, not quite sure what just happened.

To say I actually enjoy listening to this piece would probably be stretching it. It wouldn't be among the records I'd take with me on a desert island. But it is certainly fascinating and kind of hypnotic too. If you allow it to, it does evoke a certain kind of mental atmosphere.

All music to some degree invites people to bring their own emotional life to it. My early pieces do that in an extreme form, but, paradoxically, they do so through a very organized process, and it’s precisely the impersonality of that process that invites this very engaged psychological reaction. Steve Reich, 1996

Because of the simplicity of this piece, it's a great way for us to begin exploring Web Audio. So let's see how we might be able to reconstruct it in a web browser.

Setting up itsgonnarain.js

The source code for this project is available on GitHub.

To host our JavaScript version of It's Gonna Rain, we're going to need a bit of HTML and a bit of JavaScript. Create a new directory for this project and add two files into it:

itsgonnarain/
  index.html
  itsgonnarain.js



Then set the contents of index.html as follows:

<html>
  <body>
    <script src="https://cdn.rawgit.com/mohayonao/web-audio-api-shim/master/build/web-audio-api-shim.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/fetch/1.0.0/fetch.min.js"></script>
    <script src="itsgonnarain.js"></script>
  </body>
</html>

This is a very simple HTML document that just loads a couple of JavaScript files to the page:

The web-audio-api-shim library, which irons out some browser inconsistencies with the Web Audio API so that we don't have to deal with them.

A polyfill for the Fetch API, which also isn't fully supported across all browsers yet.

The file that will host our own code, itsgonnarain.js.

We're not going to use an ES2015 transpiler in these code examples, and will just rely on native browser support of the latest JavaScript features. If you want to share your work with the wider world, you'll definitely also want to transpile your code with a tool like Babel.

Also add some content to the itsgonnarain.js file:

console.log("It's gonna rain");

This will print a message to the browser's development console. It'll give us a chance to check that everything is working once we load up the page in a web browser. But before we can do that, we'll need to serve it up with a web server. Open up a command line and install the lite-server Node.js package:

npm install -g lite-server

The npm command comes with Node.js, so if it isn't working for you make sure you have Node installed.

We now have a package we can use to spin up a web server, and we can do so by running the following command from inside the itsgonnarain project directory:

lite-server

We'll keep the server running throughout the project. You should now be able to open http://localhost:3000 in a web browser to load the project (or it may already have opened automatically for you). It'll just be a blank page, but if you go and open the browser's developer tools you should see the It's gonna rain message in the console. We're ready to start playing some sounds!

I recommend always keeping the developer console open while working on the project. If something goes wrong with the code, this is where the browser will tell you about it.

Loading A Sound File

What we're going to do first is simply play a sound sample on our webpage. Before we can play it though, we'll need to load it.

What I did for my version was to take an MP3 file of the Reich piece and extract a short clip from the first 15 seconds or so. Unfortunately I can't give you that clip to download, but you can easily obtain it (it's on e.g. Amazon or iTunes). Or you can use some other piece of audio that you think would be interesting to use as raw material - it's entirely up to you!

Whatever audio you use, make sure it's in a format that web browsers can understand (.wav, .mp3, .ogg, etc.) and then place the file inside the project directory.

Once we have an audio file, we can add some JavaScript code to load it in:

itsgonnarain.js

fetch('itsgonnarain.mp3')
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => console.log('Received', arrayBuffer))
  .catch(e => console.error(e));

Here we are using the Fetch API to send a request for the audio file over the network (which in this case will just go to our local web server). The file will eventually be received back from the server, and what fetch() returns to us is a Promise object that will give us a chance to react when that happens.

What we do with the Promise is call its .then() method and supply it with a callback function we want to run when the response comes in. In the callback we invoke the arrayBuffer() method of the response, letting it know that we want to have the incoming data as a binary ArrayBuffer object.

Then we have to wait a bit more to actually receive all that data, so we get back another Promise object. This one will get fulfilled when the browser has finished receiving every last bit of the (binary) data. Into this second Promise we attach a callback function that simply logs the resulting ArrayBuffer into the developer console. We also include a catch rejection handler, which will log any errors that might occur during this whole process, so that if something goes wrong it'll also be shown in the developer console.

You should now see the Received message in the developer console, along with the ArrayBuffer object. We now have the raw MP3 data loaded into our page.

Playing The Sound

We still have to do one more thing before we can play the MP3 data we now have. We have to decode it into a playable form. This is where we bring in the Web Audio API, which has the facilities for doing that.

With Web Audio, everything begins with something called an AudioContext. This is an object that handles all the audio processing we're going to do and internally manages any communication with the audio hardware and the resources associated with it. Before we do anything else, we need to create one of those AudioContexts. We'll be using it for many things, but right now we're going to call its decodeAudioData method to turn our MP3 ArrayBuffer into a decoded AudioBuffer:

itsgonnarain.js

let audioContext = new AudioContext();

fetch('itsgonnarain.mp3')
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
  .then(audioBuffer => console.log('Decoded', audioBuffer))
  .catch(e => console.error(e));

As we see here, decodeAudioData also returns a Promise instead of giving us the audio data right away. That's because the data may take a while to decode and the browser does it in the background, letting us get on with other business in the meantime. So what we now have is a chain of three Promises. After the last one we do our console logging. The message in the developer console should show that we have about 14 seconds of stereo audio data – or whatever the duration of your sound file happens to be.

This is something we can now play, so let's go right ahead and do that. We're going to use the AudioContext again, this time to create something called a buffer source. This is an object that knows how to play back an AudioBuffer. We give it the buffer we have, connect it, and start it. The result is that we can hear the sound play from the browser window – our equivalent of hitting the play button on Reich's tape recorder.

itsgonnarain.js

let audioContext = new AudioContext();

fetch('itsgonnarain.mp3')
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
  .then(audioBuffer => {
    let sourceNode = audioContext.createBufferSource();
    sourceNode.buffer = audioBuffer;
    sourceNode.connect(audioContext.destination);
    sourceNode.start();
  })
  .catch(e => console.error(e));

What we've done here is set up our very first audio-processing graph, which consists of just two nodes, one connected to the next:

An AudioBufferSourceNode that reads in the AudioBuffer data and streams it to other nodes.

The AudioContext's built-in AudioDestinationNode, which makes the sound audible on the machine's speakers.

Basic Web Audio graph with a single source

In general, all audio processing with the Web Audio API happens with such graphs, through which audio signals flow, starting from source nodes and ending up in destination nodes, possibly going through a number of intermediate nodes. These graphs may contain different kinds of nodes that generate, transform, measure, or play audio. We'll soon be constructing more elaborate graphs, but this is all you need if you simply want to play back a sound.
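To make the idea concrete, here's a minimal sketch (my own illustration, not part of our piece) of a graph with one intermediate node: a GainNode placed between the source and the destination to scale the volume. We'll be doing exactly this kind of routing with panner and convolver nodes later on.

// Sketch: source -> gain -> destination. Assumes sourceNode is a
// freshly created AudioBufferSourceNode like the one above.
let gainNode = audioContext.createGain();
gainNode.gain.value = 0.5;                  // play at half volume
sourceNode.connect(gainNode);
gainNode.connect(audioContext.destination);
sourceNode.start();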

Looping The Sound

What we have so far plays the audio file once and then stops. But to begin recreating the magic of "It's Gonna Rain", what we really want is to make it loop over and over again.

Steve Reich was using magnetic tape, which in its usual configuration spools from one reel to the other. But there's a brilliant hack, pioneered by Pierre Schaeffer, that allows you to circumvent this process: You can splice the tape into a loop which gets you an infinitely repeating sample with a duration of a few seconds or less.

For Reich, as for anyone working with tape, this meant a laborious process of finding the right bit of tape, cutting it up and gluing it back together again. For us, it's much, much easier. This is because the AudioBufferSourceNode interface we're using happens to have a loop flag we can just set to true:

itsgonnarain.js

let sourceNode = audioContext.createBufferSource();
sourceNode.buffer = audioBuffer;
sourceNode.loop = true;
sourceNode.connect(audioContext.destination);
sourceNode.start();

The effect of this is that the whole sample loops round and round forever.

We should still do the equivalent of cutting up the little piece that says "It's Gonna Rain" and loop just that bit. We could go into an audio editor and create an .mp3 file that only contains that part, but we don't actually need to. AudioBufferSourceNode has attributes called loopStart and loopEnd for controlling this. They allow setting the offsets, in seconds, of where the loop should start and end. With some trial and error you can find good values for them:

itsgonnarain.js

let sourceNode = audioContext.createBufferSource();
sourceNode.buffer = audioBuffer;
sourceNode.loop = true;
sourceNode.loopStart = 2.98;
sourceNode.loopEnd = 3.80;
sourceNode.connect(audioContext.destination);
sourceNode.start();

With these settings, the sample still starts playing from the very beginning and then gets "stuck" looping between 2.98 and 3.80, resulting in a loop with a duration of 0.82 seconds.

If we only want to play the looping part, we should also begin playing the sample from the loop start offset, which we can do by giving that same value to the start() method of the buffer source:

itsgonnarain.js

let sourceNode = audioContext.createBufferSource();
sourceNode.buffer = audioBuffer;
sourceNode.loop = true;
sourceNode.loopStart = 2.98;
sourceNode.loopEnd = 3.80;
sourceNode.connect(audioContext.destination);
sourceNode.start(0, 2.98);

The start() method optionally takes up to three arguments, and we're giving two of them here:

When to start playing. We want to start playing immediately, so we set it to zero. This is also the default but since we want to provide that second argument, we also need to come up with something for the first.

The offset at which to start playing the buffer. We set it to 2.98 - the beginning of our loop.
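There's also an optional third argument that we're not using: a duration, in seconds, after which playback stops automatically. Just as a sketch of what that would look like (we don't need it here, since we want the loop to keep going indefinitely):

// Hypothetical: start immediately, at the 2.98 second offset, and
// stop automatically after ten seconds of playback.
sourceNode.start(0, 2.98, 10);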

How Phase Music Works

And now we come to the real technical trick that underlies "It's Gonna Rain": What we are hearing are two identical tape loops playing at the same time. There's one in the left ear and one in the right. But one of those loops is running ever so slightly faster than the other.

The difference is so small you don't even notice it at first, but after a while it amounts to a phase shift between the two loops that keeps growing bigger and bigger as time goes by. First it sounds like there's an echo. Then the echo transforms into a repeating effect, and then it builds into a series of strange combinations of sounds as different parts of the sample overlap with others. At the halfway point the loops start to approach each other again, and we go back through repetition and echo to unison.

Brian Eno has likened this process to a moiré pattern, where two simple identical geometrical patterns are superimposed to give rise to something much more complex than the original.

It's a remarkably simple trick compared to the apparent complexity of what you actually hear. And from a 0.82 second loop we get six minutes of music. That's pretty good return on investment!

This is called phase music, which is a kind of systems music. One way to think about it is that Reich, as the author of the piece, did not write a score for it. He just set the inputs and parameters for a phase-shifting process which he would then execute, generating the music.

Though I may have the pleasure of discovering musical processes and composing the musical material to run through them, once the process is set up and loaded it runs by itself. Steve Reich, Music as a Gradual Process

Setting up The Second Loop

To reproduce the phase-shifting process, we're going to need another instance of the "It's gonna rain" sound loop. Let's extract the initialization of our source node into a new function called startLoop , which we can then call twice. Two source nodes that both play the same AudioBuffer are created and started:

itsgonnarain.js

let audioContext = new AudioContext(); function startLoop ( audioBuffer ) { let sourceNode = audioContext.createBufferSource(); sourceNode.buffer = audioBuffer; sourceNode.loop = true ; sourceNode.loopStart = 2.98 ; sourceNode.loopEnd = 3.80 ; sourceNode.connect(audioContext.destination); sourceNode.start( 0 , 2.98 ); } fetch( 'itsgonnarain.mp3' ) .then(response => response.arrayBuffer()) .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer)) .then(audioBuffer => { startLoop(audioBuffer); startLoop(audioBuffer); }) .catch(e => console .error(e));

The only effect of doing this so far is that the sound we hear becomes louder. There are now two sources playing the loop in unison, which amps up the volume.

Web Audio graph with two looped sources.

Adding Stereo Panning

In Reich's piece the two loops are playing in separate stereo channels, one in your left ear and one in the right. We should do the same.

The AudioBufferSourceNode interface has no property for controlling this. Instead there's a more general-purpose solution for it in the Web Audio API, which is to add specialized StereoPannerNodes to the audio graph. They can be used to pan their incoming audio signal to any point in the stereo field.

StereoPannerNode has a pan attribute that can be set to a number between -1 (all the way to the left) and 1 (all the way to the right), with the default being 0 (in the center). We set up our loops so that one is fully on the left and the other on the right:

itsgonnarain.js

let audioContext = new AudioContext();

function startLoop(audioBuffer, pan = 0) {
  let sourceNode = audioContext.createBufferSource();
  let pannerNode = audioContext.createStereoPanner();
  sourceNode.buffer = audioBuffer;
  sourceNode.loop = true;
  sourceNode.loopStart = 2.98;
  sourceNode.loopEnd = 3.80;
  pannerNode.pan.value = pan;
  sourceNode.connect(pannerNode);
  pannerNode.connect(audioContext.destination);
  sourceNode.start(0, 2.98);
}

fetch('itsgonnarain.mp3')
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
  .then(audioBuffer => {
    startLoop(audioBuffer, -1);
    startLoop(audioBuffer, 1);
  })
  .catch(e => console.error(e));

We are no longer connecting the source nodes directly to the AudioContext destination. We connect them to the panner nodes, and then the panner nodes to the destination. The audio signals flow through the panners:

Web audio graph with two loops panned to different ends of the stereo field.

Putting It Together: Adding Phase Shifting

Finally, we're ready to introduce the phase shifting. The idea is to make one of the loops play slightly faster than the other. The difference should be so small that you don't perceive it at first, but large enough that a noticeable phase shift starts accumulating over time.

The playback speed of an AudioBufferSourceNode can be set using the playbackRate attribute. The default value is 1 and setting it to something else makes it go slower or faster. For example, setting it to 2 doubles the playback speed, also halving the duration in the process.

For our purposes, I've found setting a value around 1.002 for one loop and keeping the other as 1 to work well. When the original loop is 0.82 seconds long, the second one thus becomes 0.82 / 1.002 ≈ 0.818 seconds long.
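Before wiring this in, a quick back-of-the-envelope check (my own arithmetic): the faster loop gains 0.002 buffer-seconds for every second of playback, so the two loops come back into unison after loopDuration / (rate - 1) seconds. That lines up nicely with the roughly six-minute movement we listened to earlier:

// How long is one full phase cycle?
let loopDuration = 3.80 - 2.98;                // 0.82 seconds
let cycleSeconds = loopDuration / (1.002 - 1); // 410 seconds
console.log(cycleSeconds / 60);                // ≈ 6.8 minutes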

itsgonnarain.js

let audioContext = new AudioContext();

function startLoop(audioBuffer, pan = 0, rate = 1) {
  let sourceNode = audioContext.createBufferSource();
  let pannerNode = audioContext.createStereoPanner();
  sourceNode.buffer = audioBuffer;
  sourceNode.loop = true;
  sourceNode.loopStart = 2.98;
  sourceNode.loopEnd = 3.80;
  sourceNode.playbackRate.value = rate;
  pannerNode.pan.value = pan;
  sourceNode.connect(pannerNode);
  pannerNode.connect(audioContext.destination);
  sourceNode.start(0, 2.98);
}

fetch('itsgonnarain.mp3')
  .then(response => response.arrayBuffer())
  .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer))
  .then(audioBuffer => {
    startLoop(audioBuffer, -1);
    startLoop(audioBuffer, 1, 1.002);
  })
  .catch(e => console.error(e));

And with that, we have "It's Gonna Rain"!

"It's Gonna Rain"

Web audio graph with two loops, panned and with phasing.

Exploring Variations on It's Gonna Rain

There's a lot of exploration we can do around the process we've set up. Here are a couple of ideas you can try to see what happens:

Try different playback rates. How much faster can you make one of the loops compared to the other before it turns from "phasing" to something else?

Set different values for panning. Does the piece sound different if you only partially pan the loops, or don't pan them at all? How about dynamic panning that changes over time? (There are several methods in the Web Audio API for gradually ramping parameters like pan or playbackRate up or down; one is sketched below.)
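As a sketch of what such a ramp might look like (assuming you've kept a reference to the StereoPannerNode created inside startLoop; the AudioParam automation methods used here are part of the standard Web Audio API):

// Sweep a loop from the far left to the far right over 30 seconds.
pannerNode.pan.setValueAtTime(-1, audioContext.currentTime);
pannerNode.pan.linearRampToValueAtTime(1, audioContext.currentTime + 30);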

Another route of exploration is using different sound sources for the piece. You can try looping different parts of itsgonnarain.mp3, some of them shorter, some of them longer. Why not also try sourcing samples from audio you've found on the Internet or recorded yourself? You could try speech samples or even pieces of songs. The same process can produce wildly different results given different inputs. Here are some loops I've made:

"Ways to Survive" (Kendrick Lamar) "Rhythm Was Generating" (Patti Smith) "Does It Make Common Sense?" (Laurie Anderson)

Steve Reich himself would later develop the tape phasing technique further in the 1966 piece Come Out, whose civil rights theme is still unsettlingly pertinent 50 years later.

Reich also took the tape phasing idea and applied it to live performance in Piano Phase in 1967 and Drumming in 1970. Alexander Chen has made a great Web Audio version of Piano Phase.

Brian Eno

Ambient 1: Music for Airports, 2/1

1978

Once I started working with generative music in the 1970s, I was flirting with ideas of making a kind of endless music — not like a record that you'd put on and which would play for a while and finish. I like the idea of a kind of eternal music, but I didn't want it to be eternally repetitive, either. I wanted it to be eternally changing. Brian Eno, 2007

Prior to the mid-1970s, Brian Eno was mainly known for his art rock bona fides, first as a member of Roxy Music and then as a solo artist. But this would soon change. Eno was interested in the experimental techniques of John Cage, Terry Riley, and Steve Reich. He was – and still is – also something of a tech geek. He was an early user of synthesizers and would later become a big proponent of the idea of the whole music studio as a compositional tool.

In 1975 Eno got into an accident. He was run over by a speeding London taxi and left hospitalized. During his time in the hospital something happened that led him to the conceptualization of "ambient music" – a term everyone is by now familiar with.

Now, Ambient 1: Music for Airports was not the first record Eno made as a result of this epiphany. That would have been "Discreet Music" a couple of years earlier, something that we're going to look into in the next chapter. But "Music for Airports" has a track on it that is very directly influenced by both the concept and the execution of Steve Reich's tape pieces. It's the second track on side A.

As a musical experience, this is clearly nothing like "It's Gonna Rain" though. Instead of a frantic loop of apocalyptic sermonizing, we're treated to gentle waves of vocal harmonies. It actually kind of sounds like music in the traditional sense!

But even though the results are very different, there are very close similarities in the underlying systems that produce the music.

The Notes and Intervals in Music for Airports

Before we discuss the system though, let's discuss the inputs.

The raw materials Eno used on this track are seven different "aahhh" sounds ("sung by three women and myself"). Each one of those sounds is on a different pitch.

In terms of musical notes, what we are hearing are F, A-flat, and from the next octave C, D-flat, E-flat, F, and A-flat. These, I'm told, together form "a D-flat major seventh chord with an added ninth". It's really quite beautiful:

F, A♭, C', D♭', E♭', F', A♭'

This chord evokes a kind of reflective mood, which is very common throughout Eno's ambient work. You can get an idea of what he was going for in this interview clip (I like the sound of "well, if you die, it doesn't really matter"):

The note selection is also important because of the way the notes may combine. We'll be feeding these individual notes into a kind of generative system, and we will have little control over which combinations of two or more notes may occur. Any succession of notes might be played back to back forming a melody, or on top of each other forming a harmony.

Here are all the possible two-note intervals in this chord:

An interactive chart pairing each of the seven notes with every other note, playable as two-note intervals.

Among all these intervals, there's just one that's sharply dissonant (a minor second) and sounds a bit off. Can you hear it? However, since there's just one and it'll thus not occur very frequently, it'll only serve to add a bit of spice to the music. Looks like whatever the system may throw at us will generally sound pretty good.

Setting up musicforairports.js

The source code for this project is available on GitHub.

With this background knowledge, we're ready to start writing our own little version of this piece. Let's begin by setting up the project. The basic structure will be very similar to what we did with "It's Gonna Rain".

Create a new directory and add two files to it:

musicforairports/
  index.html
  musicforairports.js



Then set the contents of index.html as follows:

<html>
  <body>
    <script src="https://cdn.rawgit.com/mohayonao/web-audio-api-shim/master/build/web-audio-api-shim.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/fetch/1.0.0/fetch.min.js"></script>
    <script src="musicforairports.js"></script>
  </body>
</html>

We can leave the JavaScript file empty for now. If we just launch the web server from this directory, we're all set to start hacking:

lite-server

Obtaining Samples to Play

We're going to build a system that plays sounds based on the musical chord that we just heard. But we haven't really discussed what sounds those are actually going to be.

Eno used a set of samples based on voices he'd recorded himself. We have little hope of reproducing those voices exactly, so we're not going to even try. We can use readymade samples obtained from the Internet instead. Specifically, we're going to use samples from the freely available Sonatina Symphonic Orchestra sound bank. This is a very nice sampling of the sounds of individual instruments in a symphonic orchestra - violins, brass instruments, percussion, etc.

Go ahead and download and then unpack the Sonatina ZIP archive. It's pretty large so it might take a while to transfer and you'll need space for it. Then move or copy the Samples directory to the musicforairports project.

musicforairports/
  index.html
  musicforairports.js
  Samples/
    1st Violins/
      1st-violins-piz-rr1-a#3.wav
      etc.
    etc.



Basically, we want to set things up so that we'll be able to load any of the samples into our JavaScript code, which means they need to be inside a directory we're serving with lite-server.

If you're feeling adventurous, there's a lot more experimentation you can do around sample selection, beyond the sounds found in the Sonatina library. You could:

Find other samples online, in places like freesound.org.

Generate samples from GarageBand or some other piece of software or equipment.

Record your own voice or a musical instrument.

Synthesize the sound, as we do in the next chapter.

Building a Simple Sampler

We now have a sample library and we should write some code that will allow us to play different musical notes using the various musical instruments in that library. In other words, we're going to write a primitive JavaScript sampler.

If you look into the various subdirectories under Samples you'll notice that there are many .wav files for each musical instrument, for various notes. But there's a problem: there aren't samples for every musical note. For example, looking at the alto flute, we've got samples for A#, C#, E, and G on three different octaves, but that's only four notes per octave. There are twelve semitones in an octave, so eight out of twelve are missing!

This is not really a problem though, and it's a pretty common occurrence in sample libraries. An obvious reason for this is preserving storage and bandwidth: You may have noticed that the Sonatina library is already half a gigabyte in size. If it included all the notes of all the instruments, that number could easily triple.

For each note that's missing a sample, we'll find the nearest note that does have a sample and pitch-shift that sample to the new note. This technique for "filling in the blanks" is commonly used by samplers.

Let's look at the samples for the grand piano - an instrument I find fits well with this piece. It has samples on about seven octaves, a few in each octave. If we visualize the whole chromatic scale for octaves 4, 5, and 6, we see what we have and what's missing:

The chromatic scale for octaves 4, 5, and 6, with the notes that have samples (A, C, D#, and F# in each octave) marked as present and the rest missing.

What our sampler should do is find for each of these notes the nearest note that has a sample. When we do that we have the keymap for this instrument:

The same scale with the keymap filled in: each missing note is mapped to the nearest note that has a sample.

The main function of our sampler will be one that takes the name of an instrument and a note to play. We'll expect it to return a sample that we'll be able to play (or, rather, a Promise that resolves to a sample):

musicforairports.js

function getSample(instrument, noteAndOctave) {

}

For each instrument we'll store a little "sample library" data structure, which records what note samples we have for that instrument, and where those sample files can be located. Let's put the grand piano samples for octaves 4-6 in it:

musicforairports.js

const SAMPLE_LIBRARY = {
  'Grand Piano': [
    { note: 'A',  octave: 4, file: 'Samples/Grand Piano/piano-f-a4.wav' },
    { note: 'A',  octave: 5, file: 'Samples/Grand Piano/piano-f-a5.wav' },
    { note: 'A',  octave: 6, file: 'Samples/Grand Piano/piano-f-a6.wav' },
    { note: 'C',  octave: 4, file: 'Samples/Grand Piano/piano-f-c4.wav' },
    { note: 'C',  octave: 5, file: 'Samples/Grand Piano/piano-f-c5.wav' },
    { note: 'C',  octave: 6, file: 'Samples/Grand Piano/piano-f-c6.wav' },
    { note: 'D#', octave: 4, file: 'Samples/Grand Piano/piano-f-d#4.wav' },
    { note: 'D#', octave: 5, file: 'Samples/Grand Piano/piano-f-d#5.wav' },
    { note: 'D#', octave: 6, file: 'Samples/Grand Piano/piano-f-d#6.wav' },
    { note: 'F#', octave: 4, file: 'Samples/Grand Piano/piano-f-f#4.wav' },
    { note: 'F#', octave: 5, file: 'Samples/Grand Piano/piano-f-f#5.wav' },
    { note: 'F#', octave: 6, file: 'Samples/Grand Piano/piano-f-f#6.wav' }
  ]
};

So now we're going to need to find, for any note given to getSample , the nearest note we have in the sample library. Before we can do that, we need to normalize the input notes a little bit.

Firstly, we should split the incoming note string ("B#4") into the note characters ("B#") and the octave digit ("4"). We can do so with a regular expression that pulls the two parts into separate capture groups.

musicforairports.js

function getSample(instrument, noteAndOctave) {
  let [, requestedNote, requestedOctave] = /^(\w[b#]?)(\d)$/.exec(noteAndOctave);
  requestedOctave = parseInt(requestedOctave, 10);
}

The group (\w[b#]?) captures a single word character optionally followed by a 'b' or a '#', and (\d) captures a single digit. We then also parse the octave from a string into an integer.

The Sonatina library contains samples for natural notes and sharp (#) notes only, but we would like to also support flat (♭) notes. We'll exploit the fact that for every flat note there is an "enharmonically equivalent" sharp note. This basically just means the same note may be written in two different ways, sharp or flat.

We can make a little function that eliminates flats completely by turning them into their equivalent sharps. For example, for the flat Bb we get the equivalent sharp A#:

musicforairports.js

function flatToSharp(note) {
  switch (note) {
    case 'Bb': return 'A#';
    case 'Db': return 'C#';
    case 'Eb': return 'D#';
    case 'Gb': return 'F#';
    case 'Ab': return 'G#';
    default:   return note;
  }
}

function getSample(instrument, noteAndOctave) {
  let [, requestedNote, requestedOctave] = /^(\w[b#]?)(\d)$/.exec(noteAndOctave);
  requestedOctave = parseInt(requestedOctave, 10);
  requestedNote = flatToSharp(requestedNote);
}

In order to find the "nearest" note for a given input note, we need a way to measure the distance between two notes. This calls for a purely numeric note representation.

First of all, there are twelve notes ("semitones") in each octave, and each octave begins from the C note:

musicforairports.js

const OCTAVE = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

Using this information, given a note and an octave, we can come up with an integer number that uniquely identifies that note:

Identify the octave's base value by multiplying the octave number by 12: for the note D3, the octave is 3, so we get 3 * 12 = 36.

Find the index of the note within the octave: D3's note is D, the third note in the octave, so its index is 2.

Sum those together to get the note's numeric value: for D3, 36 + 2 = 38.

musicforairports.js

const OCTAVE = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

function noteValue(note, octave) {
  return octave * 12 + OCTAVE.indexOf(note);
}

Based on this, we can form a function that calculates the distance between any two notes. It takes two note names and octaves and simply subtracts one's value from the other:

musicforairports.js

function getNoteDistance(note1, octave1, note2, octave2) {
  return noteValue(note1, octave1) - noteValue(note2, octave2);
}

Now we have the building blocks of a function that finds the nearest sample given an instrument's sample bank, note, and octave. It sorts the sample bank array by the absolute distance to the requested note. It then returns the sample with the shortest distance, which will be the first sample in the sorted array.

musicforairports.js

function getNearestSample(sampleBank, note, octave) {
  let sortedBank = sampleBank.slice().sort((sampleA, sampleB) => {
    let distanceToA =
      Math.abs(getNoteDistance(note, octave, sampleA.note, sampleA.octave));
    let distanceToB =
      Math.abs(getNoteDistance(note, octave, sampleB.note, sampleB.octave));
    return distanceToA - distanceToB;
  });
  return sortedBank[0];
}

The final helper function we'll need for our sampler is one that actually loads a sample file from the server. The code for that is very similar to the one we had for "It's Gonna Rain". It fetches and decodes the sample file. For this we also need to instantiate an AudioContext :

musicforairports.js

let audioContext = new AudioContext();

function fetchSample(path) {
  return fetch(encodeURIComponent(path))
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer));
}

Note that we use the encodeURIComponent function to sanitize the file path before giving it to fetch . It's needed because our filenames may have '#' characters, which need to be escaped when used inside URLs.

Now we can put all of this together, so that getSample always returns an AudioBuffer as well as a number that denotes the distance between the sample that was requested and the one that was found.

musicforairports.js

function getSample(instrument, noteAndOctave) {
  let [, requestedNote, requestedOctave] = /^(\w[b\#]?)(\d)$/.exec(noteAndOctave);
  requestedOctave = parseInt(requestedOctave, 10);
  requestedNote = flatToSharp(requestedNote);
  let sampleBank = SAMPLE_LIBRARY[instrument];
  let sample = getNearestSample(sampleBank, requestedNote, requestedOctave);
  let distance =
    getNoteDistance(requestedNote, requestedOctave, sample.note, sample.octave);
  return fetchSample(sample.file).then(audioBuffer => ({
    audioBuffer: audioBuffer,
    distance: distance
  }));
}

In the case of the grand piano sample bank, the distances between requested and found samples range from -1 to 1 because we always have a sample at most one semitone away. But other instruments may have longer distances. This distance determines the amount of pitch-shifting we'll have to do.

Pitch-shifting is done by adjusting the frequency of the sound wave formed by a sample, and there's a simple formula for determining the size of the adjustment.

The key idea is that the relationship between musical notes and the underlying sound frequencies is exponential. Going up an octave always doubles the frequency of the sound wave, so if we want to pitch-shift a whole octave we need to double the frequency:

Going an octave up from C to C' means doubling the frequency.

Since an octave is divided into 12 semitones, going up a single semitone means multiplying the frequency by a fractional power of two: 2^(1/12). And going up n semitones means multiplying the frequency by 2^(n/12). The same applies when going down, but then we use negative exponents: 2^(-n/12):

Going up or down n semitones means multiplying the frequency by a fractional power of 2.

When we are playing sample files, we can't really just "set the frequency" of a sample directly. But what we can do is set the playback rate of the sample, because this will compress or expand its sound wave in time by the same ratio. To achieve a lower note, we simply play the sample more slowly, and to reach a higher note we play it more quickly.

Armed with this information we can come up with an expression that, given the distance to a note, gives us the playback rate we need to use. A zero distance gives the original playback rate, 1, since 2^(0/12) = 2^0 = 1, whereas a distance of one semitone should return a playback rate of 2^(1/12) ≈ 1.059. This will make the sound play slightly faster, causing our ears to perceive it as a higher note.

let playbackRate = Math.pow(2, distance / 12);

Now we can form a function that actually plays any musical note for any instrument. It uses getSample to fetch the nearest sample, waits for the result, and then plays back the sample with the appropriate playback rate.

musicforairports.js

function playSample(instrument, note) {
  getSample(instrument, note).then(({audioBuffer, distance}) => {
    let playbackRate = Math.pow(2, distance / 12);
    let bufferSource = audioContext.createBufferSource();
    bufferSource.buffer = audioBuffer;
    bufferSource.playbackRate.value = playbackRate;
    bufferSource.connect(audioContext.destination);
    bufferSource.start();
  });
}

A general-purpose sampler should also cache the sample buffers somewhere in memory instead of fetching them again each time they're needed. But this is good enough for us right now.
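If you did want that caching, a minimal sketch could memoize the Promises returned by fetchSample (a hypothetical helper, not used in the code below):

// Cache decoded samples by file path so each file is fetched only once.
let sampleCache = {};

function fetchSampleCached(path) {
  if (!sampleCache[path]) {
    sampleCache[path] = fetchSample(path);
  }
  return sampleCache[path]; // a Promise resolving to an AudioBuffer
}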

Here's the complete code we have written so far, combined with some test code that plays the seven notes from "Airports" at one second intervals:

musicforairports.js

const SAMPLE_LIBRARY = {
  'Grand Piano': [
    { note: 'A',  octave: 4, file: 'Samples/Grand Piano/piano-f-a4.wav' },
    { note: 'A',  octave: 5, file: 'Samples/Grand Piano/piano-f-a5.wav' },
    { note: 'A',  octave: 6, file: 'Samples/Grand Piano/piano-f-a6.wav' },
    { note: 'C',  octave: 4, file: 'Samples/Grand Piano/piano-f-c4.wav' },
    { note: 'C',  octave: 5, file: 'Samples/Grand Piano/piano-f-c5.wav' },
    { note: 'C',  octave: 6, file: 'Samples/Grand Piano/piano-f-c6.wav' },
    { note: 'D#', octave: 4, file: 'Samples/Grand Piano/piano-f-d#4.wav' },
    { note: 'D#', octave: 5, file: 'Samples/Grand Piano/piano-f-d#5.wav' },
    { note: 'D#', octave: 6, file: 'Samples/Grand Piano/piano-f-d#6.wav' },
    { note: 'F#', octave: 4, file: 'Samples/Grand Piano/piano-f-f#4.wav' },
    { note: 'F#', octave: 5, file: 'Samples/Grand Piano/piano-f-f#5.wav' },
    { note: 'F#', octave: 6, file: 'Samples/Grand Piano/piano-f-f#6.wav' }
  ]
};
const OCTAVE = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B'];

let audioContext = new AudioContext();

function fetchSample(path) {
  return fetch(encodeURIComponent(path))
    .then(response => response.arrayBuffer())
    .then(arrayBuffer => audioContext.decodeAudioData(arrayBuffer));
}

function noteValue(note, octave) {
  return octave * 12 + OCTAVE.indexOf(note);
}

function getNoteDistance(note1, octave1, note2, octave2) {
  return noteValue(note1, octave1) - noteValue(note2, octave2);
}

function getNearestSample(sampleBank, note, octave) {
  let sortedBank = sampleBank.slice().sort((sampleA, sampleB) => {
    let distanceToA =
      Math.abs(getNoteDistance(note, octave, sampleA.note, sampleA.octave));
    let distanceToB =
      Math.abs(getNoteDistance(note, octave, sampleB.note, sampleB.octave));
    return distanceToA - distanceToB;
  });
  return sortedBank[0];
}

function flatToSharp(note) {
  switch (note) {
    case 'Bb': return 'A#';
    case 'Db': return 'C#';
    case 'Eb': return 'D#';
    case 'Gb': return 'F#';
    case 'Ab': return 'G#';
    default:   return note;
  }
}

function getSample(instrument, noteAndOctave) {
  let [, requestedNote, requestedOctave] = /^(\w[b\#]?)(\d)$/.exec(noteAndOctave);
  requestedOctave = parseInt(requestedOctave, 10);
  requestedNote = flatToSharp(requestedNote);
  let sampleBank = SAMPLE_LIBRARY[instrument];
  let sample = getNearestSample(sampleBank, requestedNote, requestedOctave);
  let distance =
    getNoteDistance(requestedNote, requestedOctave, sample.note, sample.octave);
  return fetchSample(sample.file).then(audioBuffer => ({
    audioBuffer: audioBuffer,
    distance: distance
  }));
}

function playSample(instrument, note) {
  getSample(instrument, note).then(({audioBuffer, distance}) => {
    let playbackRate = Math.pow(2, distance / 12);
    let bufferSource = audioContext.createBufferSource();
    bufferSource.buffer = audioBuffer;
    bufferSource.playbackRate.value = playbackRate;
    bufferSource.connect(audioContext.destination);
    bufferSource.start();
  });
}

setTimeout(() => playSample('Grand Piano', 'F4'),  1000);
setTimeout(() => playSample('Grand Piano', 'Ab4'), 2000);
setTimeout(() => playSample('Grand Piano', 'C5'),  3000);
setTimeout(() => playSample('Grand Piano', 'Db5'), 4000);
setTimeout(() => playSample('Grand Piano', 'Eb5'), 5000);
setTimeout(() => playSample('Grand Piano', 'F5'),  6000);
setTimeout(() => playSample('Grand Piano', 'Ab5'), 7000);

A System of Loops

Now that we have an instrument to play, we can shift our focus to the system that will play it.

Eno's piece was generated by seven tape loops. Each one contained a recording of a single vocal "aaaahh" sound on one of the seven pitches. The loops all had different durations, between about 20 and 30 seconds. When you play such a combination of loops together, you get a similar kind of relative shifting of sounds as we have in "It's Gonna Rain". As the loops turn over at different times, the sounds and silences on the tapes constantly reorganize and "generate" new combinations.

Once again, it's really a very simple system compared to what the results sound like! This conceptual simplicity does not mean the piece was easy to execute though. The 20-30 second durations of these tape loops were way more than the 1-2 seconds that the tape recorders of the time were equipped to handle. Eno had to resort to some epic hacks to make it all work.

One of the notes repeats every 23 1/2 seconds. It is in fact a long loop running around a series of tubular aluminum chairs in Conny Plank's studio. The next lowest loop repeats every 25 7/8 seconds or something like that. The third one every 29 15/16 seconds or something. What I mean is they all repeat in cycles that are called incommensurable — they are not likely to come back into sync again. Brian Eno, 1996

And how much unique music can we generate with this technique? This depends on the relative durations of the loops (or, "lengths of tape") that we choose. If we have two loops, both with a duration of 2 seconds, the duration of our whole piece is 2 seconds, since the whole thing just keeps repeating.

But if we make the duration of the second loop 3 seconds instead of 2, the duration of our whole piece doubles to 6 seconds. This is because the next time the two loops start at the same time is after six seconds. That's when the system will have exhausted all of its musical combinations.

By adjusting the two numbers we can make the piece much longer. If we make one loop only slightly longer than the other, it'll take a while for them to meet again. This is how Reich was able to squeeze out 6 minutes worth of music from two sub-second loops. He varied the playback rate rather than the tape length, but the result was effectively the same.

But it is when we bring in a third loop that the system really takes off. We won't have heard all the combinations the system is able to produce until we reach a moment when all three loops meet at the same time. That's going to take some time, as sketched below.
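To put numbers on this: with fixed loop durations, the piece repeats after the least common multiple of those durations. Here's a little sketch of the arithmetic (my own illustration, using whole-second durations):

function gcd(a, b) { return b === 0 ? a : gcd(b, a % b); }
function lcm(a, b) { return a * b / gcd(a, b); }

// Two loops of 2 and 3 seconds repeat together every 6 seconds...
console.log([2, 3].reduce(lcm));    // 6
// ...but adding a third, 7-second loop grows the full cycle to 42 seconds.
console.log([2, 3, 7].reduce(lcm)); // 42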

You can see how bringing in four more loops with similarly differing durations results in a piece of music that's very long indeed. That's pretty striking, given the extremely limited amount of sonic material and the simplicity of the system.

Now of course, you don't have to listen for very long before you've figured out what type of music is being made. You can listen to it for a thousand years and it'll never play "Hotline Bling". It simply varies the relative timings of its seven notes. But in any case, anyone who buys a copy of the "Music for Airports" album only gets to hear a tiny nine-minute clip of the vast amount of music this system is capable of making.

Playing Extended Loops

Let's construct these loops in our project. Recall that with "It's Gonna Rain" we looped sounds by simply setting the loop property of the buffer source nodes to true , which made the sounds repeat. This time the loops are longer - much longer than the individual sample files in fact, so we can't apply the same trick here.

Eno had to set up physical contraptions to make his tape loops go around the studio space. Our equivalent of this is to use JavaScript's built-in setInterval function, which executes a given callback function at a regular time interval. If we make a callback that plays one of the sounds every 20 seconds, we have an "extended" loop. Replace the setTimeout test code in musicforairports.js with this:

musicforairports.js

setInterval(() => playSample('Grand Piano', 'C4'), 20000);

With this version we need to wait about 20 seconds before even the first note plays, which we may not want. Instead we can write a function that plays the sound once immediately and only after that sets up the interval:

musicforairports.js

function startLoop(instrument, note, loopLengthSeconds) {
  playSample(instrument, note);
  setInterval(() => playSample(instrument, note), loopLengthSeconds * 1000);
}

startLoop('Grand Piano', 'C4', 20);

Since we're going to set up several loops, we should also be able to give each of them an initial delay, so that all the sounds won't play at exactly the same time in the beginning. So we need another number that delays the playing of the sound within the loop. It controls at which section of our virtual "tape" the sound is recorded.

We can achieve this by adding a delay argument to the playSample function. It passes a new argument to the bufferSource.start() method - the time at which the sound should begin playing, relative to the AudioContext's current time:

musicforairports.js

function playSample(instrument, note, delaySeconds = 0) {
  getSample(instrument, note).then(({audioBuffer, distance}) => {
    let playbackRate = Math.pow(2, distance / 12);
    let bufferSource = audioContext.createBufferSource();
    bufferSource.buffer = audioBuffer;
    bufferSource.playbackRate.value = playbackRate;
    bufferSource.connect(audioContext.destination);
    bufferSource.start(audioContext.currentTime + delaySeconds);
  });
}

We already saw this argument with "It's Gonna Rain", but we were just using 0 as the value then to get immediate playback.

To test this version, we can set up a five second delay to our loop:

musicforairports.js

function startLoop(instrument, note, loopLengthSeconds, delaySeconds) {
  playSample(instrument, note, delaySeconds);
  setInterval(() => playSample(instrument, note, delaySeconds),
              loopLengthSeconds * 1000);
}

startLoop('Grand Piano', 'C4', 20, 5);

Our Web Audio node graph at this point is very simple. There's at most just one AudioBufferSourceNode playing a sound, since we've got a single loop running:

Web Audio graph with one source added by the extended loop.

What's different this time though is that the node graph is not static. Every 20 seconds we attach a new source node. After it's finished playing it'll get automatically removed, at which point the graph contains nothing but the destination node.

The source is removed and destroyed once it finishes playing.

Having dynamic graphs such as this is very common with Web Audio. In fact, you can't even start an AudioBufferSourceNode more than once. Every time you want to play a sound, you make a new node, even when it's for a sound you've played before.

While the simple setInterval() based solution will work just fine for us, I should mention that it is not precise enough for most Web Audio applications. This is because JavaScript does not give exact guarantees about the timing of setInterval(). If something else is making the computer busy at the moment when the timer should fire, it may get delayed. It's not at all uncommon for this to happen. For our purposes though, I actually like that it's a bit inexact. We're dealing with the kind of application where scheduling is very porous anyway. Furthermore, this was a characteristic of the equipment Eno was using as well. The motors in his tape recorders were not exact to millisecond precision, especially when he stretched them way beyond what they were originally designed for. For the kinds of applications where you do need precision, you'll need to calculate the exact offsets manually and give them to bufferSource.start(). There are good articles and libraries that help with this. We will also see how to achieve exact timing with the Tone.js library in the next chapter.
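For the curious, here's a rough sketch of what that manual scheduling might look like, using the AudioContext clock instead of trusting setInterval's timing (an illustration only; we'll stick with setInterval in this project):

// Schedule each repetition at an exact time on the audio clock, waking
// up slightly early with setTimeout to queue the next one.
function startPreciseLoop(audioBuffer, loopLengthSeconds) {
  let nextTime = audioContext.currentTime;
  function scheduleNext() {
    let bufferSource = audioContext.createBufferSource();
    bufferSource.buffer = audioBuffer;
    bufferSource.connect(audioContext.destination);
    bufferSource.start(nextTime);
    nextTime += loopLengthSeconds;
    setTimeout(scheduleNext, (nextTime - audioContext.currentTime - 0.1) * 1000);
  }
  scheduleNext();
}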

Adding Reverb

If you compare our grand piano loop to how Eno's original piece sounds, you'll hear a big difference.

Of course, this is mostly due to the fact that we're using completely different instruments, resulting in an entirely different timbre. But that's not the only difference. You'll notice that Eno's sounds seem to emerge from the ether very softly and then also fade back down gently, whereas our samples have a much rougher entrance and exit.

There are a number of ways we could adjust this. One very effective way to change the perception of the sound is to add some reverberation to it, making it seem like the sound is reflecting in a physical space. This will especially affect the part where the sound stops playing: Instead of all sound disappearing instantly, the reflections in the virtual space will linger.

The Web Audio API comes with an interface called ConvolverNode that can achieve this kind of an effect. It applies something called a convolution to a sound signal, by combining it with another sound signal called an acoustic impulse response. The exact math behind this is beyond the scope of this guide and to be honest I don't fully understand it. But it sounds pretty damn cool, which is the most important thing.

To use ConvolverNode we need to obtain an impulse response sample that will be used to convolve our audio source. As it happens, someone by the name of AirWindows has made a free one called "airport terminal", which is fantastically appropriate for our purposes and our theme. It's not actually recorded in a real airport terminal, but it simulates the massive amount of reverb you get in one.

I've made a suitable stereo .wav file of this impulse response, which you can get here. Place it within the musicforairports directory. Then modify the code that starts our audio loop so that it first fetches this file as an audio buffer and then creates a convolver node based on it. The node is given to startLoop as a new argument:

musicforairports.js

fetchSample('AirportTerminal.wav').then(convolverBuffer => {
  let convolver = audioContext.createConvolver();
  convolver.buffer = convolverBuffer;
  convolver.connect(audioContext.destination);

  startLoop('Grand Piano', 'C4', convolver, 20, 5);
});

Then modify startLoop so that it passes the new argument (called destination ) onward to playSample :

musicforairports.js

function startLoop(instrument, note, destination, loopLengthSeconds, delaySeconds) {
  playSample(instrument, note, destination, delaySeconds);
  setInterval(() => playSample(instrument, note, destination, delaySeconds),
              loopLengthSeconds * 1000);
}

And finally, modify playSample so that it sends its audio signal to the given destination instead of the audio context's final destination node. This causes all of the samples to flow through the convolver node:

musicforairports.js

function playSample(instrument, note, destination, delaySeconds = 0) {
  getSample(instrument, note).then(({audioBuffer, distance}) => {
    let playbackRate = Math.pow(2, distance / 12);
    let bufferSource = audioContext.createBufferSource();
    bufferSource.buffer = audioBuffer;
    bufferSource.playbackRate.value = playbackRate;
    bufferSource.connect(destination);
    bufferSource.start(audioContext.currentTime + delaySeconds);
  });
}

This makes a big difference in the sound produced! As far as reverberation effects go, it's certainly among the most dramatic, and it completely changes the nature of the music. If it proves too massive for your liking, try one of the others from AirWindows or use one from this database compiled by The University of York.

Here's what the Web Audio node graphs produced by this code look like:

Web Audio graph with a single source and a convolver.

Putting It Together: Launching the Loops

With our sampler and reverbed loops, we can now set up the whole system that produces our approximation of "Music for Airports 2/1". This is simply done by calling startLoop several times, once for each of the musical notes that we want to play.

Here we need to come up with a bunch of numeric values: We need to decide the duration of each loop, and the delay by which each sound is positioned inside its loop. Of these two, the loop durations are much more important since, as we saw earlier, making them incommensurable will cause the system to produce a very long piece of unique music. The delays on the other hand have less of an impact. They basically control how the music begins but very soon their role will diminish as the differences in the loop durations take over.

Here's an example set of numbers. Their exact values are not hugely important. The story goes that Eno merely cut the tapes to what he thought were "reasonable lengths" without giving it much thought. We may just as well do the same.

musicforairports.js

fetchSample('AirportTerminal.wav').then(convolverBuffer => {
  let convolver = audioContext.createConvolver();
  convolver.buffer = convolverBuffer;
  convolver.connect(audioContext.destination);

  startLoop('Grand Piano', 'F4',  convolver, 19.7, 4.0);
  startLoop('Grand Piano', 'Ab4', convolver, 17.8, 8.1);
  startLoop('Grand Piano', 'C5',  convolver, 21.3, 5.6);
  startLoop('Grand Piano', 'Db5', convolver, 22.1, 12.6);
  startLoop('Grand Piano', 'Eb5', convolver, 18.4, 9.2);
  startLoop('Grand Piano', 'F5',  convolver, 20.0, 14.1);
  startLoop('Grand Piano', 'Ab5', convolver, 17.7, 3.1);
});

A snapshot of the Web Audio graph formed by the Music for Airports system.

And here is this particular system in action:

"Music for Airports"

It's a fascinating feeling to set one of these systems off. You created it but you don't know what it's going to do. To me this captures some of the magic I felt when I was just getting started with programming, which I don't feel that often these days as a working software engineer. Even though there's absolutely nothing random about the system – its behavior is fully deterministic – you really have no idea what it's going to sound like. Those fourteen numbers specify the system's behavior completely but the system seems much more complicated than that. The complexity is emergent.

Eno has compared making musical systems like these to "gardening", as opposed to "architecting". We plant the seeds for the system, but then let it grow by itself and surprise us by the music that it makes.

What this means, really, is a rethinking of one's own position as a creator. You stop thinking of yourself as me, the controller, you the audience, and you start thinking of all of us as the audience, all of us as people enjoying the garden together. Gardener included. Brian Eno, Composers as Gardeners

Exploring Variations on Music for Airports

This is a much more complex system than "It's Gonna Rain", and as such provides a lot more room for all kinds of experimentation. See if any of the following tickles your interest:

Make the durations considerably shorter or longer. You may notice that the silences are just as important as the sounds in a piece like this. Having things playing constantly can easily feel crowded.

Play with different instruments from the Sonatina library or from elsewhere.

How about mixing two or more instruments in the same system? Maybe add more loops to accommodate them.

Change the musical chords played. For example, playing a straightforward C major scale will result in a very different mood than what we have currently.

Try different impulse responses to change the physical "space" in which the sound plays.

Try randomizing the loop durations and/or delays, so that a different system is generated every time the page loads.
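That last idea might look something like this (a sketch; it assumes the code runs inside the fetchSample callback where convolver is in scope):

// Randomize each loop's duration and initial delay on every page load.
['F4', 'Ab4', 'C5', 'Db5', 'Eb5', 'F5', 'Ab5'].forEach(note => {
  let loopLength = 15 + Math.random() * 10; // 15 to 25 seconds
  let delay = Math.random() * 15;           // 0 to 15 seconds
  startLoop('Grand Piano', note, convolver, loopLength, delay);
});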

Here's an example of a variation I came up with. I used the grand piano but accidentally divided the playback rates of the samples by three, causing them to sound completely different from what I was expecting.

"Music for Airports" - Oblique Piano

The result not only sounds nothing like a grand piano, but also changes the chord. As we've seen, frequencies and musical notes have an exponential relationship, so when I simply divide the frequency (= playback rate) by a constant number, a new set of musical intervals forms. I don't even know what they are in this case, but I like them.
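If you want to reproduce the accident yourself, it comes down to a one-line change in playSample (the "bug" in question):

// Divide the computed playback rate by three.
bufferSource.playbackRate.value = playbackRate / 3;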

What essentially happened here is that I caused a bug and it made the thing better. When's the last time you remember that happening? Yet this is the kind of thing that often happens when you're playing around with these kinds of musical systems. It's a good idea to listen carefully when it does, because what you did by mistake may end up sounding better than the thing you were originally trying to do. As one of Eno's Oblique Strategies cards says, "Honor thy error as a hidden intention".

In another variation I switched the instrument to the English horn, which is also included in the Sonatina sample library: