Computers and music have been linked since the earliest days of the mainframe, when giant machines controlled primitive synthesizers. Recently, however, computer music has taken a significant step forward with software that can not only transcribe polyphonic music in real time but also play back complex harmonies alongside human performers. At the annual Music Information Retrieval Evaluation eXchange (MIREX) competition, for instance, Christopher Raphael of Indiana University demonstrated a system that understands live music well enough to accompany a musician.

Raphael started playing an oboe quartet written by Mozart, and his electronic accompaniment chimed in on the other three instruments. When Raphael slowed his performance, the computer "musicians" followed right along without missing a beat, even when he added a trill for emphasis.

"Technology is changing our sense of what music can be," Raphael says. "The effect is profound." The new software was not easy to develop, however. Raphael compares it to the slow progress in effective speech recognition. "There's been a veritable army of people who've worked on speech recognition for several decades, and [the problem] still remains open," he says. "Any time you deal with real data, there is a huge amount of variation that you have to understand."

Raphael's program works by analyzing the waveforms emitted by musical instruments. Accurately identifying a single note has been relatively straightforward for computers, but when harmonies enter the picture, the problem becomes far more difficult. A program written by Daniel Ellis of Columbia University uses machine learning to "teach" computers how to understand music: the software is fed 92 recordings along with their musical scores. Gradually, it learns the rules of music (such as how an E is often played with an A, but rarely with an A sharp) and over time becomes more adept at picking out individual notes. Raphael's program doesn't have to go through this complex process because it only has to follow a single performer, but future iterations may be able to follow a whole group.
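The "rules of music" such a system learns can be thought of as statistics over which notes tend to sound together. The Python sketch below is not Ellis's actual method, just a minimal illustration of the idea: it counts pitch-class co-occurrences across a pair of invented toy "scores," where each chord is a set of note names. All data and names here are hypothetical.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(scores):
    """Count how often each pair of pitch classes sounds together.

    Each score is a list of chords; each chord is a set of note names.
    Pairs are stored in sorted order so (A, E) and (E, A) count as one key.
    """
    counts = Counter()
    for score in scores:
        for chord in score:
            for a, b in combinations(sorted(chord), 2):
                counts[(a, b)] += 1
    return counts

# Two tiny invented "scores" standing in for real training data.
scores = [
    [{"A", "E"}, {"A", "C#", "E"}, {"D", "F#", "A"}],
    [{"A", "E"}, {"E", "G#", "B"}, {"A", "E"}],
]

counts = cooccurrence_counts(scores)
# In this toy data, A and E sound together often; A# and E never do.
print(counts[("A", "E")], counts[("A#", "E")])  # prints: 4 0
```

A real system would learn such statistics from audio aligned with scores, and would track far more than pairwise counts, but the principle is the same: frequent combinations become expected, rare ones become evidence against a candidate transcription.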

Similar technology has already shown up in the commercial music market, such as the infamous Auto-Tune software that gives Maroon 5 perfect pitch on CD even though the lead singer couldn't hit a note correctly if his life depended on it; artists like Shania Twain are even rumored to use a real-time version of Auto-Tune in concert.
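Pitch correction of the Auto-Tune variety rests on a simple idea: detect the sung frequency, then nudge it to the nearest note of the equal-tempered scale. Here is a minimal sketch of just that snapping step, assuming a 440 Hz reference pitch and leaving out the detection and resynthesis stages, which are the genuinely hard parts:

```python
import math

A4 = 440.0  # reference pitch in Hz (an assumption; real tools make this configurable)

def snap_to_semitone(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    # Distance from A4 in semitones: 12 * log2(f / A4).
    semitones = 12 * math.log2(freq_hz / A4)
    # Round to the nearest whole semitone and convert back to Hz.
    return A4 * 2 ** (round(semitones) / 12)

# A slightly sharp vocal note at 452 Hz snaps back to A4 (440 Hz).
print(round(snap_to_semitone(452.0), 1))  # prints: 440.0
```

Real pitch correctors also constrain the snap to the song's key and smooth the transition over time so the correction stays inaudible, unless the robotic glide between notes is itself the desired effect.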

However, heavily filtered pop music isn't the only possible outcome of this research. Computer performers could handle new types of music with far more simultaneous notes than human players can manage. Disc jockeys could have many more options for creating unique performances. And software could make learning an instrument faster and more enjoyable: students could practice with a "real" orchestra again and again without it ever getting tired, or needing to clear its spit valves.

But mixed in with these great ideas are worries about how copyright holders would react to the explosion of mash-ups and other creative works made possible by a computer with both perfect pitch and the ability to learn and play back music in real time. Indeed, what new DRM and watermarking techniques will be invented once a computer can not only "record" music, but break it down into its constituent parts and reassemble it, perhaps in simulated HD audio quality? That day could be closer than you think.