Musicians playing a duet don’t just make beautiful music—they make beautiful math. A new study finds that as two players mesh, tiny hiccups in their rhythms follow repeating patterns. The study has implications for “humanizing” computer-generated music and helps reveal the complex mathematics underlying the common ways in which we interact.

Study author Holger Hennig, a physicist at Harvard University, became interested in the mathematics of human rhythms while listening to the electronic drums in the song “Sexy Love” by Ne-Yo. He guessed that no human could produce a beat as precise as a computer could, and in a 2012 study he showed that even professional musicians keep imperfect time—an early beat here, a late beat there, all on the order of milliseconds. What’s more, Hennig found that these tiny deviations from a steady beat aren’t random; they follow repeated, statistical patterns.

“It actually shows part of the beauty and richness that is in humans, which is based on their imperfections,” Hennig says. Although listeners may not be aware of the deviations, they still tend to hear the difference between human-produced and computer-generated music.

In the new study, Hennig took advantage of these human imperfections to explore not just rhythm, but rhythmic interaction. He brought pairs of players—some professional, some with no musical training—into a recording studio and watched as they played simple rhythms together on the same keyboard. He tracked the players’ rhythmic deviations as they synchronized over 6 to 8 minutes. In one way, the players interacted as Hennig expected, with a continual give-and-take. For example, when one player sped up even a tiny bit, the other would also speed up, and together they’d reestablish rhythmic equilibrium.

But the data also showed something surprising: a sort of long-term memory of rhythmic imperfections over the course of a song. A hiccup near the beginning would influence not only the next few beats, but also a pattern of beats much later in the song. As they try to stay in sync, the players repeat the same patterns of catch-up and slow-down over and over again, Hennig reports online today in the Proceedings of the National Academy of Sciences. “What I’ve found here is just this universal underlying structure of musical communication that already shows that there is a musical binding between two musicians over long time scales,” he says.

Hennig says these findings could help computer-generated music sound more human. Music software today tends either to produce perfect rhythm or to introduce random deviations into the beat, a first step toward sounding human. But his research shows that, when it comes to rhythm, humans aren’t random. To try to trick listeners into believing music was played by humans, he has created a model based on his data that introduces patterned deviations into a computer’s beat. “The model that I built of course couldn’t replace a musician; it can only serve as a starting point and maybe lead to new tools.”
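The article doesn’t spell out Hennig’s model, but the contrast it draws, random jitter versus patterned deviations, can be sketched. The snippet below assumes the deviations are long-range correlated (1/f-type) noise, the kind of statistical pattern reported for human drumming in Hennig’s earlier work; the function name, parameters, and millisecond scale are illustrative, not taken from his model.

```python
import numpy as np

def correlated_deviations(n_beats, alpha=1.0, scale_ms=10.0, seed=0):
    """Timing deviations with long-range correlations, built by spectral
    synthesis: shape white noise so its power spectrum falls off as
    1/f^alpha. With alpha=0 this reduces to plain random (white) jitter."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n_beats, d=1.0)
    amplitudes = np.zeros_like(freqs)
    amplitudes[1:] = freqs[1:] ** (-alpha / 2.0)  # skip the DC component
    phases = rng.uniform(0.0, 2.0 * np.pi, len(freqs))
    spectrum = amplitudes * np.exp(1j * phases)
    series = np.fft.irfft(spectrum, n=n_beats)
    series = (series - series.mean()) / series.std()  # zero mean, unit std
    return scale_ms * series

# Nudge each beat of a steady 120-bpm pulse by a correlated deviation (ms):
period_ms = 500.0
deviations = correlated_deviations(256)
beat_times = np.arange(256) * period_ms + deviations
```

In this kind of series an early beat tends to be followed, much later, by related drifts, unlike independent random jitter, whose deviations carry no memory of earlier ones.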

The research also lends insight into how the components of other complex systems interact with one another. Similar patterns have been observed in the group behavior of fish and birds, in variations in heartbeats and EEGs, and even in fluctuations on the New York Stock Exchange. “Many models of collective synchronization assume that there is this kind of correlation, but it is very nice to have an experimental proof of it,” says Mehdi Moussaïd, a research scientist at the Max Planck Institute for Human Development in Berlin, who was not involved in the work. He hopes this kind of study will be used to explore the mathematics behind other basic human interactions that involve synchronized behavior, like clapping.