



In The Score, American composers on creating “classical” music in the 21st century.


If the difference between 1911 and 2011 is electricity and computation, then Max Mathews is one of the five most important musicians of the 20th Century. – Miller Puckette

In 1957 a 30-year-old engineer named Max Mathews got an I.B.M. 704 mainframe computer at the Bell Telephone Laboratories in Murray Hill, N. J., to generate 17 seconds of music, then recorded the result for posterity. While not the first person to make sound with a computer, Max was the first one to do so with a replicable combination of hardware and software that allowed the user to specify what tones he wanted to hear. This piece of music, called “The Silver Scale” and composed by a colleague at Bell Labs named Newman Guttman, was never intended to be a masterpiece. It was a proof-of-concept, and it laid the groundwork for a revolutionary advancement in music, the reverberations of which are felt everywhere today.

When Max died in April at the age of 84 he left a world where the idea that computers make sound is noncontroversial, even banal. In 2011, musicians make their recordings using digital audio workstations, and perform with synthesizers, drum machines and laptop computers. As listeners, we tune in to digital broadcasts from satellite radio or the Internet, and as consumers, we download small digital files of music and experience them on portable music players that are, in essence, small computers. Sound recording, developed as a practical invention by Edison in the 1870s, was a technological revolution that forever transformed our relationship to music.

A pioneer who believed that computers were meant to empower humans to make music, not the other way around.

In comparison, the contributions of Max Mathews may seem inevitable. Just as so much of our life has become “digitized,” so it seems that sooner or later, sound would become the domain of computers. But the way in which Max opened up this world of possibilities makes him a singular genius, without whom I, and many people over the last six decades, would have led very different lives.

As an engineer, Max had extremely diverse interests, all of which he pursued with a great deal of energy. He provided the initial research for virtually every aspect of computer music, from his early work with programming languages for synthesis and composition (the MUSIC-N family of software) to foundational research in real-time performance (the GROOVE system and RTSKED, the first real-time event scheduler). Max also helped start the conversation about how humans were meant to interact with computers by developing everything from modified violins to idiosyncratic control systems such as the Radio Baton. Marvin Minsky, a pioneer in the field of artificial intelligence and one of Max’s peers, said that Max “wrote the first beautiful examples of how to do things and then he moved on to something else,” leaving it to colleagues, students and other creative minds to pick up where he left off. Along the way, his fluency in human cognition, acoustics, computer science and electrical engineering allowed him to always keep in mind the big picture: that computers were meant to empower humans to make music, not the other way around.

Back in 1957, none of these ideas were self-evident. Rebecca Fiebrink, an assistant professor of computer science at Princeton University, says: “Max had this vision of the computer as being something that is creatively empowering to people, even in the 1950s, when the words ‘empower’ and ‘creativity’ were not part of the vocabulary.”

Max’s early experiments with sound and the digital computer were made possible by a fortunate combination of factors, including a community of supportive colleagues led by his supervisor, John R. Pierce. Bell Labs, for whom he worked as an engineer, had a vested interest in Max’s research: as the practical demands of telecommunications in the United States broadened after the Second World War, a melding of analog telephony and digital computing was inevitable. Max’s initial mandate was to research the problem of getting computers to listen and speak. The fact that he interpreted his research agenda in the broadest possible terms, giving us not Moviefone, but music, is amusingly subversive; the fact that he got away with it, was encouraged to keep going, and created an entire world of possibility along the way, is astounding.

Max’s research was first published for a wider audience in an article titled “The Digital Computer as a Musical Instrument” in the November 1963 issue of Science. He explained the language he created to work with sound digitally, wherein the user creates two sets of instructions. The first, an “instrument,” defines what the sound should be, in terms of waveforms, amplitude curves, filters and how these components should be connected to one another. The second set of instructions, the “score,” contains the musical notes, rhythms and durations with which the instrument will sound. This simple conceptual distinction between the instrument and the actions it performs to make music is still the norm today.
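The instrument/score split can be sketched in a few lines of modern code. The following Python fragment is purely illustrative (the function and variable names are hypothetical, and this is not Max Mathews’s actual MUSIC-N syntax): an “instrument” is a recipe for producing samples, and a “score” is a list of notes handed to it.

```python
import math

SAMPLE_RATE = 8000  # samples per second

def instrument(freq_hz, amp, dur_sec):
    """A sine-wave 'instrument': a waveform shaped by a linear decay."""
    n = int(dur_sec * SAMPLE_RATE)
    return [
        amp * (1 - i / n) * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
        for i in range(n)
    ]

# The "score": each note is (frequency in Hz, amplitude, duration in seconds).
score = [(440.0, 0.5, 0.25), (494.0, 0.5, 0.25), (523.0, 0.5, 0.5)]

# "Performing" the score: render each note with the instrument in sequence.
samples = []
for freq, amp, dur in score:
    samples.extend(instrument(freq, amp, dur))
```

Changing the score changes what is played; changing the instrument changes how it sounds; neither requires touching the other. That separation is the conceptual core that survives in today’s synthesis software.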

This article prompted two other computer music pioneers, John Chowning and Jean-Claude Risset, among others, to come to Bell Labs to work with Max. (Chowning appears as an influential figure in Martin Bresnick’s previous post in The Score, “Prague 1970: Music in Spring.”) They found themselves in a community made up of a seemingly peculiar pairing of Bell Labs scientists and avant-garde musicians. Risset, finishing his doctorate in Paris, came to Bell Labs and began working with Max on new possibilities for synthesizing the timbre of existing instruments.

“Max was very generous about sharing,” Risset recalled. “At that time, Bell Labs was almost a public service. They had the feeling that there was a commitment and duty to make the research available to the general public, including artists, in terms of new possibilities. In fact, they felt the artists were also doing research, so that science and technology could both benefit.”

Chowning, in his second year as a graduate student in composition at Stanford, was experimenting with electronic sound and multiple loudspeakers. He recalled Max’s Science article well: “I had never seen a computer, so when I read this article and realized what this meant, it defined the possibilities of music in a wholly new way. So I decided to investigate. The first thing I did was to take a programming course, and convince myself that as a musician that I could learn to program. I then contacted Max.”

Speech (and speech synthesis) was of particular interest to what was rapidly becoming Max’s lab, and he and his colleagues John Kelly, Jr., who would go on to propose the Kelly criterion in economic investment theory, and Carol Lochbaum used the I.B.M. to generate perhaps the ultimate cover song. If “The Silver Scale” was a proof-of-concept, the 1961 speech synthesis rendering of “Bicycle Built for Two” is a tour-de-force of the new digital musicality possible with computer programming. In the man-versus-machine standoff in Stanley Kubrick’s 1968 film “2001: A Space Odyssey,” Douglas Rain’s HAL 9000 begins to sing the tune wistfully as astronaut David Bowman disengages its memory, regressing the homicidal machine back to its infancy as it fondly remembers a Mr. Langley, who taught it to sing a song.

By the early 1970s, Max’s lab at Bell, the Acoustic and Behavioral Research Center, was doing research in virtually every aspect of sound in which a computer could provide assistance, all under the auspices of a company ostensibly committed to the comparatively modest goal of providing Americans with better telephone service. He was also becoming increasingly engaged in getting computers into the act of performance, something that was only just becoming possible. His first foray into the problem was a project he called GROOVE, a hybrid system wherein a computer controlled a large analog modular synthesizer.

Laurie Spiegel, a composer who at the time had been working with analog synthesizers, met Max through Rhys Chatham, who programmed a performance by Max and Emmanuel Ghent for a music series at the Mercer Arts Center, a venue that would evolve into the Chelsea arts space many of my friends and I perform in today.

Spiegel, excited by the possibilities of the GROOVE system, asked Max if she could join him in his endeavors at Bell Labs: “Being a woman with no technological credentials at the time, I doubt I would have been granted access to then-scarce powerful computer systems in any other lab. But Max didn’t go by credentials or background or identity. He took every instance in as its unique self, responding to each thing on its own terms.”

A life-long violinist, Max also began experimenting with electric instruments, creating a series of twelve electric violins containing custom circuitry. Laurie Anderson recalls receiving one to work with, beginning a 30-year friendship with Max: “He gave me a violin that I used for a while. The violin itself was really beautiful. The way he talked about strings was amazing. Like everyone, I lost touch with him and got back in touch with him all the time. But no matter what, as soon as I would see him again we were right in the middle of the conversation. Max was one of those friends.”


“The Sequential Drum,” an article Max published with Curtis Abbott in 1980, saw Max’s research taking a new, significant turn, as he began to outline the idea of an intelligent musical instrument, leveraging the power of the computer to generate sound and assist in musical performance. This work involved not only a computer program but also a physical device that enabled a performer to control the timing of a musical sequence stored on a computer by beating a “drum” (in actuality an electrical trigger). Three years later, Max published an article in the Journal of the Acoustical Society of America titled “RTSKED: a real-time scheduled language for controlling a music synthesizer.” In the article, Max explains a basic system for getting a computer to schedule musical events in real time, either on its own or in response to commands from a live performer. This system, which outlined a simple, efficient mechanism for human-computer interactivity, started an avalanche of innovation in computers that could finally perform alongside us.
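The scheduling idea at the heart of that system can be illustrated with a short sketch. The Python below is in the spirit of RTSKED but is not Max’s design; the class and event names are hypothetical. Events are kept in a priority queue ordered by time, and the scheduler always fires whichever event is due next, whether it was queued by a program or, in a real system, by a performer’s gesture.

```python
import heapq

class Scheduler:
    """A toy real-time event scheduler: fire events in time order."""

    def __init__(self):
        self.queue = []   # min-heap of (time, event name) pairs
        self.log = []     # record of fired events, for inspection

    def schedule(self, when, name):
        heapq.heappush(self.queue, (when, name))

    def run(self):
        while self.queue:
            when, name = heapq.heappop(self.queue)
            # A real-time system would wait until `when` (or for a
            # performer's trigger); here we fire immediately, in order.
            self.log.append((when, name))

sched = Scheduler()
sched.schedule(0.5, "note_off C4")
sched.schedule(0.0, "note_on C4")
sched.schedule(0.25, "note_on E4")
sched.run()
# Events fire in time order regardless of the order they were queued.
```

The essential move, then as now, is that the computer no longer renders a fixed tape of sound but maintains a live queue of pending musical actions, which is what makes responding to a human performer possible at all.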

I was 8 years old when Max described RTSKED. Nearly every day for the last 15 years, I’ve opened up my computer and double-clicked an icon to launch a program called Max. Max the program, named after Max the man, was developed in the late 1980s by Miller Puckette, an American computer scientist working at IRCAM, the Institute for the Research and Coordination of Acoustics and Music in Paris. Influenced by Max’s research thus far, the program allows for the creation of a visual graph, or “patcher,” representing a process that can generate and respond to sounds, images or any other input and output one can imagine connecting to the computer. Puckette first heard Max speak at an International Computer Music Conference in the early 1980s presenting RTSKED: “I didn’t actually talk to him, but the thing I noticed was, unlike all the other speakers who got up, when Max showed up at the lectern the entire audience gave him a standing ovation before they allowed him to say a word. I was 22 at the time, so I paid some attention to what he said after that, and it was a good thing I did, because I trace a large part of what I did in Max to RTSKED.”

Nearly a quarter-century old, Max, the software, is currently developed by a software company in San Francisco called Cycling ’74, founded by David Zicarelli. Zicarelli met Max, the person, as a graduate student at Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA), where Max began teaching upon retiring from Bell Labs in 1987: “He had this way of characterizing it, which is that pitch is not expressive, in comparison to rhythm and other aspects of performance. If you store the pitches and let people focus their performance expressivity on rhythm and legato and that kind of thing, you don’t have to worry about staying in tune, or playing the wrong note at the wrong time, and you can actually be really musical. He saw this drum both as an interesting sensor technology, but also as an egalitarian musical vision. That this is a way to open up music performance to a wide audience.” If you’ve ever played Guitar Hero, or Rock Band, you’ve experienced making music through the legacy of Max’s ideas about democratizing musical performance.

Throughout the 1980s and 1990s Max continued his research in expressivity in computer music performance, embarking on research that would culminate in the Radio Baton, a musical controller that allowed for the three-dimensional control of sonic parameters, and Scanned Synthesis, a paradigm for computer sound generation. The Radio Baton, which provided the missing link between the Theremin and the Nintendo Wii game controller, allows for smooth, expressive control of multiple musical parameters without being tied to such things as musical keyboards, faders and buttons.

In the 2000s, Max began having breakfast every Thursday with a group of electronic and computer music pioneers from both academia and the commercial music industry, a ritual he attended religiously.

Up until the end of his life, Max continued to work on innumerable projects with computation and music. Richard Boulanger, who worked with Max extensively on the Radio Baton, tells me: “Even to the last days of his incredibly full life, he was learning, teaching, writing, coding, performing, and even now ‘remixing’ his classics.”

I met Max a handful of times, and sat across from him once in graduate school at one of those interminable dinners academics like to have at conferences. He was warm, funny, and had the grace not to let on to the kid at the table that he was the smartest man in the room. Bell Labs was several decades and many miles away, described in hindsight, and with a certain nostalgia, as a magical place where artists and engineers were one and the same. We don’t really have those places anymore, which is a shame. Our new century, this century of data, is built on work done by unassuming geniuses like Max who worked in the liminal spaces between science and art. On the telephone from Paris on Easter Sunday, Risset told me, “In America, the phone company is very important.” It made me laugh, but he was right.

In February of 1972 Max wrote a short story and sent it to a few friends and colleagues, including Vladimir Ussachevsky, then director of the Columbia-Princeton Electronic Music Center. I found it in Max’s file in the Columbia archives, with a cover letter that reads “Attached is the result of a momentary madness which you might enjoy.” The story, set in 2165, concerns an astronaut wielding a 1704 Stradivarius violin, who returns after nearly two hundred years in hibernation to a planet Earth in which music is very different from what it was when he left. On the one hand, there is a monastic society of musicians clad in the formal tails of concert soloists, virtuosi who perform canonized music for “perfect” digital recordings controlled by special stewards within their order. On the other hand, there are participatory mass-improvisations mediated by computers, called “Audances,” where players interface with digital machines creating work using all manner of joysticks, knobs and TV screens, all happening inside a specially built room, with no audience and no possibility for error. The story’s protagonist reminisces, in his last will and testament, on the world he left behind, where music was a physical act as well as a social one, involving physical instruments performed from the stage.

I didn’t know Max well enough to be certain, but I suspect that these two scenarios were his bêtes noires, worlds where music ceased to be live, immediate and accessible. As the potential for making music by computer grew, Max saw the oncoming ubiquity of the digital world, and he embraced it, fostering a spirit of inquiry, openness and experimentation among his colleagues and students. At the same time, it was vitally important to him that we, the musicians of the computer age, understood the computer for what it was: an instrument for enabling our creative acts, not replacing them.

The history of music is the history of technology. Unless you are improvising a cappella, outdoors, with your own singing voice, you are making music with technology, be it the technology of writing, architecture, instrument design, electric amplification, electronic reproduction, or digital synthesis. Musicians intuit this, and can easily weather massive shifts in how we relate to new technologies in the human experience because we integrate our future seamlessly with our past. We understand that every human culture will use the maximum level of technology available to it to make art. It’s natural, and everything Max gave us flows from that, because he understood. He was a musician, too.

For Max V. Mathews, the computer was the Stradivarius of the 20th Century.

He was our first virtuoso.



Sound samples from “The Historical CD of Digital Sound Synthesis” appear courtesy of the WERGO record label.

R. Luke DuBois is a composer and artist living in New York City. He teaches at the Brooklyn Experimental Media Center at New York University’s Polytechnic Institute. His work can be found at his Web site, lukedubois.com.