The contemporary American composer John Adams once had a dream in which an oil tanker slowly ascended out of San Francisco Bay and began to fly; this dream was the inspiration for Harmonielehre, one of the first works to bring him acclaim and a major contribution to the orchestral repertoire of the past 50 years. Adams is certainly not the only artist to draw inspiration from the content of his dreams; Salvador Dalí's paintings were reportedly based on dreams (and certainly have dreamlike qualities), and according to an oft-repeated, possibly apocryphal story, Hector Berlioz's masterpiece Symphonie Fantastique was the result of an opium-induced fever dream.

If a group of Finnish computer scientists have their way, the Berliozes and Adamses of the world won't have a monopoly on creating music in their sleep – they've recently developed software that composes music from the rhythms of sleep. Using data captured by a sensor placed under the mattress, "the software composes a unique piece based on the stages of sleep, movement, heart rate and breathing. It compresses a night's sleep into a couple of minutes," says Aurora Tulilaulu, who developed the composition program. You can listen to a sample of the sleep-derived music here.
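The article doesn't describe how Tulilaulu's program actually works, but the basic idea of sonifying sleep data can be sketched in a few lines. Everything below is an invented illustration, not the Finnish team's algorithm: the stage-to-pitch table, the heart-rate rule, and the epoch format are all assumptions made for the sake of the example.

```python
# Illustrative sketch only (not the actual Finnish software): map a
# night's sleep stages and heart rates to MIDI-style pitches and note
# durations, compressing ~8 hours of sleep into ~2 minutes of music.

# Hypothetical mapping: "deeper" sleep stages get lower pitches.
STAGE_PITCH = {"awake": 72, "rem": 67, "light": 64, "deep": 60}

def sonify(epochs, night_seconds=8 * 3600, piece_seconds=120):
    """Each epoch is (stage, heart_rate_bpm); returns (pitch, duration) notes."""
    scale = piece_seconds / night_seconds       # time-compression factor
    epoch_len = night_seconds / len(epochs)     # seconds of sleep per epoch
    notes = []
    for stage, bpm in epochs:
        # Invented rule: a faster heartbeat nudges the pitch upward.
        pitch = STAGE_PITCH[stage] + (bpm - 60) // 10
        notes.append((pitch, epoch_len * scale))  # duration in the piece
    return notes

# A toy "night" of four epochs; a real recording would have hundreds.
night = [("light", 70), ("deep", 55), ("rem", 75), ("awake", 80)]
print(sonify(night))  # four notes whose durations sum to 120 seconds
```

The one design point worth noting is the compression step: scaling every epoch's duration by `piece_seconds / night_seconds` is what turns a full night into "a couple of minutes," as the article describes.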

Clearly, there is a vast difference between the dream-inspired music of major composers and music rendered from sleep rhythms by computer scientists; there is little doubt which is more interesting musically. Nonetheless, the work of the Finnish team raises some intriguing possibilities. As the program becomes more widely available, it will be interesting to see (or hear) how the pieces differ from sleeper to sleeper and from night to night. Sleep patterns can and do vary widely, but it remains to be seen just how well the software can capture those differences. Another open question: who owns the music the software produces? While much of the musical content is determined by the design of the program, the music could not exist without the input of the sleeper's rhythms.

The Finnish team’s effort is only the latest in a growing list of projects in which musically inclined researchers (and scientifically curious musicians) adapt new technologies and translate biological data streams to stretch our definition of the word “music.” DNA sequences and microbial movement are being turned into music; an instrument called the electroencephalophone, which converts brain waves into sound, has been around since the 1960s. And the use of recorded and manipulated sound in music, which today we take for granted, has vastly changed the face of the music world since the 1940s.

Typically, the technology is developed for other purposes and then applied to musical pursuits, although musical necessity can and does drive technological and scientific innovation. Searching for ways to expand the timbres available to him (and to approximate the sounds of Javanese and Balinese gamelan instruments), John Cage was an early pioneer of the prepared piano, a standard piano with carefully placed objects inside that alter the timbre and pitch of the affected strings. Another 20th-century American composer, Harry Partch, invented a range of instruments capable of producing microtones, or divisions of the octave smaller than the 12 semitones common in Western music. More recently, spectralist composers such as the Frenchmen Gérard Grisey and Tristan Murail have drawn on advances in acoustic analysis, and on the computer tools used to dissect complex sounds, to inform their compositions. You can hear some of Murail’s compositions here.
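The idea of dividing the octave more finely than 12 semitones comes down to a single formula: in a tuning with n equal steps per octave, step k above a reference pitch has frequency ref × 2^(k/n). The sketch below uses equal divisions of the octave for simplicity; note that Partch's own 43-tone scale was based on just intonation, not equal temperament, so this is a related illustration rather than his system.

```python
# Frequency of step k in an n-tone equal-tempered octave, relative to a
# reference pitch (A4 = 440 Hz here). With n = 12 this is the familiar
# Western semitone; larger n yields microtonal steps.

def edo_freq(k, n, ref=440.0):
    """Return the frequency k steps above `ref` in n equal divisions of the octave."""
    return ref * 2 ** (k / n)

# In 24-tone equal temperament, one step is a quarter tone, so two
# steps equal one ordinary semitone, and 24 steps equal one octave.
print(round(edo_freq(2, 24), 2))   # same frequency as one 12-tone semitone
print(round(edo_freq(24, 24), 2))  # 880.0, one octave above A4
```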

Clearly, music and technology have long been intertwined. But what about bio-music like the sleep-derived pieces or the electroencephalophone? One could argue that these merely continue the tradition, dating back at least to the 17th century, of composers using instruments to imitate animal sounds. The recent developments are clearly different in kind from imitating birdsong, though, since the models they draw on lie outside the audible domain. But even that effort can be traced all the way back to the Renaissance. In a recent conversation, music theorist Lyle Davidson noted that 16th-century sacred vocal music was often performed at or near the tempo of a resting heartbeat. So while the Finnish researchers have accomplished something that’s never been done before, they stand squarely within a long and vibrant tradition of crossover between music and science. John Cage, whose centennial is being celebrated this year, would probably appreciate their work. Who knows, perhaps the great masters of 16th-century vocal polyphony would too.