Stefan Goldmann:

Jeff Mills, Robert Hood and FM Synthesis as a Metaphor 1

I. Discoveries

Recently, while preparing a lecture on the influence of gear on music, I puzzled over the formal differences between Chicago’s house and Detroit’s techno. Both owed a lot to the restrictions inherent in Roland’s rigid TR-808 and TR-909 drum machines and to the absence of a budget for much more of a set-up. There are so many commonalities that I wondered what the formal differences really were. And I don’t seem to be the only one who was confused. Reportedly, Derrick May thought they were doing house music – until Juan Atkins insisted on the techno tag, which he in turn had borrowed from Alvin Toffler.2 Researching gear lists, I eventually stumbled upon a device named DX100. It was used by virtually every Detroit producer (including Derrick May: Nude Photo employs the ‘Wood Piano’ preset), and there were periods where it was the only other sound source in the set-ups of Jeff Mills3 and fellow minimalist Robert Hood4, aside from a TR-909 drum machine. Core lessons learned while adapting to FM were later applied to other synthesizer (and synthesis) models, shifting the focus of programming from the keyboard-derived approaches of 1970s art rock and fusion to the synthesizer’s modulation matrix.

II. Carriers and Modulators

The DX100 was one of Yamaha’s cheaper implementations of FM synthesis, a process accidentally discovered by John Chowning and later licensed exclusively to the Japanese manufacturer. As a postgraduate music student at Stanford in the mid-1960s, Chowning found that modulating the frequency of one oscillator (the carrier) with another (the modulator) could produce rich and dynamic timbres. This is known today as FM synthesis, mostly through Yamaha’s success with the best-selling hardware synthesizer of all time, the DX7. In Chowning’s words, “with two simple sinusoids I could generate a whole range of complex sounds which done by other means demanded much more powerful and extensive tools.”5 Changing the frequency or amplitude of the modulator causes complex spectral variation, but such changes affect the result in an extremely counterintuitive way, i.e. the relationship between parameter changes and resulting sounds isn’t easily predictable for users – especially when multiple modulators or complex waveforms are involved. Yamaha then decided to spare the user the perils of programming altogether by hiding it underneath the surface. The DX7 had just one data entry slider for setting any parameter, making real-time control pretty much impossible and thus starting an era of excessive preset use.
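The carrier/modulator principle described above can be sketched in a few lines of Python. This is an illustrative toy rendering of Chowning’s two-operator formula, not the DX100’s actual engine; the function names and parameter defaults are my own:

```python
import math

def fm_sample(t, f_c, f_m, index, amp=1.0):
    """One sample of two-operator FM: the modulator's output is added
    to the carrier's instantaneous phase (Chowning's classic formula)."""
    return amp * math.sin(2 * math.pi * f_c * t
                          + index * math.sin(2 * math.pi * f_m * t))

def render(f_c=220.0, f_m=220.0, index=2.0, sr=44100, n=1024):
    """Render n samples at sample rate sr."""
    return [fm_sample(i / sr, f_c, f_m, index) for i in range(n)]

# With the modulation index at 0 the result is a pure carrier sine;
# raising the index spreads energy into sidebands at f_c ± k*f_m,
# turning two "simple sinusoids" into a complex timbre.
plain = render(index=0.0)
bright = render(index=5.0)
```

The single `index` parameter already hints at the counterintuitive behaviour the text describes: it doesn’t “add brightness” linearly but redistributes energy across sidebands in a non-obvious way.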

Instrument layouts often determine central features of music. The rigidly quantized pitches of a piano come to mind. Similarly, it was the awkward interface for entering patterns into Roland’s TB-303 bassline synth that led to DJ Pierre’s chance discovery of acid house: abandoning the frustrating note level, setting chance figures on repeat and shifting all attention to the two knobs that control the filter’s cut-off and resonance. Looking at the layout of the DX100, there is no intuitive way to explore the infinite possibilities it holds for shaping dynamic sounds, since that requires altering the relationships between its four internal modulator/carrier units. In complex FM synthesis everything is interrelated: almost any change to any parameter changes, well, everything. This also means that in FM a parameter value means nothing unless all other values are known too.
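That interdependence can be made concrete with a small sketch. It is deliberately simplified: it lists only the positions of a single carrier/modulator pair’s sidebands (f_c ± k·f_m, reflected at 0 Hz) and ignores their Bessel-function amplitudes; the function name is mine:

```python
def sideband_freqs(f_c, f_m, k_max=3):
    """Sideband positions produced by FM: f_c ± k*f_m for k up to k_max,
    with negative frequencies reflected back to positive (amplitudes ignored)."""
    return sorted({abs(f_c + k * f_m) for k in range(-k_max, k_max + 1)})

# The very same modulator setting (f_m = 100 Hz) yields a harmonic series
# against one carrier and an unrelated inharmonic cluster against another:
print(sideband_freqs(100, 100))  # [0, 100, 200, 300, 400] – all multiples of 100
print(sideband_freqs(130, 100))  # [30, 70, 130, 170, 230, 330, 430] – inharmonic
```

The same modulator value thus ‘means’ entirely different timbres depending on every other setting – the sense in which an FM parameter value means nothing in isolation.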

The minimalism developed by Robert Hood and Jeff Mills once they started working with the DX100 springs directly from the machine’s inherent resistance to quick programming progress. The most efficient way to tap its potential was to set a simple one- or two-note trigger pattern on repeat while jumping back and forth between parameters until something interesting emerged. Trial and error.6 It’s simply the only way to evaluate the effect of changed settings quickly. Once this mode of research was established, something remarkable happened.

III. From Nuances to Categories

There is a fundamental difference between electronically programmed and manually performed music. I find it most useful to understand this through the difference between categorical and nuanced features of music. For example, since there are limits to manual precision, the tone a violinist plays will necessarily be a bit sharp or flat at first, and the violinist will compensate quickly by moving their finger towards the intended pitch, continuing to modulate it slightly thereafter. These nuances of intonation and expression are continuous (moving from one state to another fluidly), while categories are discrete (jumping from one fixed value to another): we’ll hear a ‘C’ note despite the actual variation in frequency around the ‘C’ category’s center frequency.7 Within the European concert music tradition such categories have been codified into a notation system that fixes what is expressed categorically (pitch and metric units, for instance). Nuances then are mostly at the discretion of the performer, since they often can’t be communicated in a way that enables reliable reproduction. An interesting feature of such nuances is that they can’t be memorised easily. This has been used as an explanation of why we don’t get tired of listening to the same recordings over and over: much detail is experienced anew with every listen.8
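The split between a continuous frequency and its discrete pitch category can be sketched numerically. This is a toy model assuming twelve-tone equal temperament with A4 = 440 Hz; the function name and note spelling are my own:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def nearest_category(freq, a4=440.0):
    """Map a continuous frequency to its nearest equal-tempered pitch
    category; return the category name and the residual nuance in cents."""
    semitones = 12 * math.log2(freq / a4)  # continuous distance from A4
    nearest = round(semitones)             # the discrete category
    cents = 100 * (semitones - nearest)    # what's left over: the nuance
    name = NOTE_NAMES[(nearest + 9) % 12]  # A sits at index 9
    return name, cents

# A slightly sharp and a slightly flat tone both land in the 'C' category:
print(nearest_category(262.5))  # a bit above C4 (~261.63 Hz)
print(nearest_category(260.0))  # a bit below C4
```

The returned cents value is exactly the part that notation discards and a performer shapes at their own discretion.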

Nuance thus refers to the fleeting qualities that connect the dots: transitional zones, continuous shifts, bent curves. If any of these could be replicated in a way that could be easily remembered, they would become categorical and cease to be nuance. And that is pretty much what happens when FM, an inexhaustible generator of odd shapes, is stabilised by strictly repeating the same otherwise contingent segment over and over again: just the way Hood and Mills did after being forced to come to terms with the DX100. We learn categories by repeated exposure (‘statistical learning’),9 so there necessarily are a lot of individual and cultural differences to ‘understanding’ music. Closely sequencing identical stretches of sound accelerates the process, so that a category that is individual to one track is learned by listening to the track itself.

This process of close repetition eventually transforms every aspect of music: from micro-rhythmic deviations (hence the science of ‘shuffles’) to pitch bending movements, filter cut-off envelopes, compression curves, reverb tails and noise contours – all set to amalgamate into one entity where everything moves in relation to everything else. This all is shaped in an experimental setting, in a real time feedback loop between gradual changes to individual parameters and the perceptual evaluation of the resultant gestalt. An electronic production environment empowers you to decide which features will become categorical and which will be just nuanced. Repetition as a magnifying glass. I tend to think that in techno, wider form is pretty much a matter of nuance (does it really matter if there are 16 more bars of a loop or not?) while even the slightest bend of an envelope is often meticulously designed. DJs as interpreters of techno: nuancing form by mixing records.

IV. The Intangible Scores of Techno

Intriguingly, this represents the almost exact inversion of what applies to composition in the medium of notation. I thus suspect that applying notation-based concepts or methods of analysis derived from notated music to repetitive electronic music is bound to produce systematically skewed conclusions.

Musicological analysis mirrors the categorical character of scores: counting discrete elements in a medium that has been designed for partial representation, while anything that can’t be counted has remained in the domain of the pre-linguistic art of interpretation. Papers and books mistaking the score for the music are still being published today, treating recordings as if they were scores. If the recording rather than the score is the medium in which the work manifests itself, traditional analysis is neither possible nor necessary.10

FM taken as a metaphor hints at the problem that in a perception-based production scenario (such as techno’s paradigmatic mode of operation) all parameters are effectively bundled and thus exist in a state of permanent dynamic cross-modulation. Each layer is modulated by all others, mirroring the others, imprinting contours onto each other. Everything melts into a convoluted perceptual entity where the products of cross-modulation form their own emergent aesthetic layer. No doubt the stark patterns have obvious rhythmic values, but they also carry, at all times, other information that wouldn’t exist without a carrier for it to be projected on. Although individual parameters (such as rhythm, pitch or compression ratio) can be described in isolation, such descriptions fail to identify the corresponding functions of their values and movements in context, i.e. their cross-modulated products are missing from the description. The exact same movement of one parameter may ‘mean’ totally different things in different contexts. One might also recall that outside of scores and measurements, i.e. while listening, we never actually encounter parameters in isolation (no pitch without timbre, no rhythm without duration …): there is no such thing as an independent variable.

Still, many authors have been fooled into passing judgement on ‘complexity’ on the basis of plain transcriptions of techno tracks into standard notation. What inevitably follows are conclusions about the transcription, not about the more relevant recording.11 In fact, it is probable that the neatly parametric graphical interfaces of sequencers (together with the lingo of early academic efforts in synthesis) have deluded many into thinking of electronic music as emphatically parametric, i.e. as addressing the movements of different elements independently. Yet sequencers in particular allow for perfect demonstrations that the same numbers can trigger vast varieties of totally unrelated output. MIDI data obviously contains even less information about sound and music than notation does.

V. Spread

Through exposure to the same records, highly individual categories of sound have gained tremendous cultural momentum. Techno has become an almost universal method, rather than a genre, with people relating to it in ways where no common history seems to exist any longer.12 Largely self-explanatory and contagious, all you need to do in order to participate is listen. People who have never been to a club make legitimate tracks. You can spot traces of techno in the oddest places. Basically, in an edit-heavy digital recording environment any sound is potentially treated as if it came from an oscillator and is then placed in the same kind of sequential grid. Thus it has become increasingly difficult to separate electronic music from ‘manually’ performed styles in the recording medium. Performers in turn more often than not learn their craft by practicing along with highly processed and edited recordings… At the core of it all, there was a kid in a basement, turning knobs to move sounds around in a loop. The empowerment that comes with the possibility to shape categories, combined with the perceptual mode of conveying them to practically anybody who’ll listen, makes for a discovery so radical (and still unfolding) that it is probably rivalled only by historical developments on the scale of the Renaissance’s two-point perspective, Baroque music’s equal temperament or Willie Kizart (of Ike Turner’s Kings of Rhythm band) dropping an amp only to discover what distortion does to a guitar.

2 Thomas, Andy: Electronic Enigma: The Myths and Messages of Detroit Techno, in: Wax Poetics 45 (2011), 74

3 Mills, Jeff: liner notes, SRD Distribution info sheet of Late Night (2002): “In those days of my transition from Detroit to New York […] from each weekly trip back to Detroit to fulfill my duties at Underground Resistance, I began to take one piece of equipment at a time. My ability to create the more complex tracks was gradual and slow. Nevertheless, within the weekdays I would have the enormous urge to compose music, so I would try to record with what I had at the time. Which was only a Roland 16 track mixer, Yamaha DX100 keyboard, Yamaha sequencer, Roland TR-909 drum machine, no monitor return speakers (I mixed through headphones) and one of those new gadgets, a portable DAT recorder […]. The day I purchased the DAT recorder was a productive one, I remember recording about 15 tracks within a few hours, ‘Late Night’ among other Waveform Transmission Vol. 1 tracks were created on this evening.”

5 Roads, Curtis: The Computer Music Tutorial, Cambridge (MA), MIT Press, 1996, 226

6 Compare Cant, Tim: Kevin Saunderson on the Reese Bass, Synths, Software and a Life in Techno, 2013 (retrieved June 16, 2015): “Sometimes I might make a rhythm or sequence, and determine that I wanted to work on the sound, just getting into the parameters of it. It just inspired me to try different shit. I’d be getting into the oscillators, but it was trial and error. I didn’t know a lot about that kind of stuff, but I knew it affected the way the original patch sounded. You just messed around with every button! Then, as you got more experienced, you started knowing how the parameters really affected it. Back in the beginning you didn’t really know, you just kept messing with it until something great came out of it. So there was no real theory behind it, besides experimentation to make something happen differently to what was already there.”

8 Snyder, Bob: Music and Memory, Cambridge (MA), MIT Press, 2000, 167

9 Huron, David: Sweet Anticipation. Music and the Psychology of Expectation, Cambridge (MA), MIT Press, 2006, 190

10 A recording features no obvious cut-off point to separate “structurally relevant” from contingent details. Within a typical digital recording, information is represented at 44,100 discrete data points per second. Beyond these there is nothing. Within these, where does structure end? What shall we look at? Parameters which we can extract easily through listening? Technically through measurements? Or should we seek to identify those that went into producing the sounds we hear in the recording?

Further reading: Stefan Goldmann: Presets – Digital Shortcuts to Sound (Bookworm)