We all know that music is a powerful influence. A movie without a soundtrack doesn’t take you on the same emotional journey. A workout without a pump-up anthem can feel like a drag. But is there a way to quantify these reactions? And if so, could they be reverse-engineered and put to use?

In a new paper, researchers at the University of Southern California mapped out how musical features like pitch, rhythm, and harmony induce different types of brain activity, physiological reactions (body heat, sweating, and changes in the skin’s electrical response), and emotions (like happiness or sadness), and how machine learning could use those relationships to predict how people might respond to a new piece of music. The results, presented last week at a conference on the intersections of computer science and art, show how we may one day be able to engineer targeted musical experiences for purposes ranging from therapy to film soundtracks.
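The paper doesn’t publish code, but the pipeline it describes, extracting musical features from audio, pairing them with measured listener responses, and training a model to predict responses to new music, can be sketched in a few lines. The example below is a minimal illustration, not the USC team’s method: it assumes the librosa library for rhythm, pitch, and harmony features and scikit-learn for the predictive model, and the listener-response labels are invented placeholders.

```python
import numpy as np
import librosa
from sklearn.ensemble import RandomForestRegressor

def extract_features(y, sr):
    """Summarize a clip as a small vector of rhythm, pitch, and harmony cues."""
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)   # rhythm: tempo estimate (BPM)
    tempo = float(np.atleast_1d(tempo)[0])
    centroid = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())  # pitch/brightness proxy
    chroma = librosa.feature.chroma_stft(y=y, sr=sr).mean(axis=1)  # 12-dim harmony profile
    return np.concatenate([[tempo, centroid], chroma])

# Toy "clips": pure tones at different pitches, so the sketch runs without
# audio files. Real work would use recorded music and measured responses.
sr = 22050
t = np.linspace(0, 3.0, 3 * sr, endpoint=False)
clips = [np.sin(2 * np.pi * f * t) for f in (220.0, 330.0, 440.0, 550.0)]

X = np.stack([extract_features(y, sr) for y in clips])

# Placeholder listener responses (e.g., average skin conductance per clip);
# these numbers are invented purely to make the example executable.
responses = np.array([0.2, 0.4, 0.7, 0.9])

# Fit a model mapping musical features to the measured response, then use
# it to predict how a listener might react to another piece of music.
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, responses)
print(model.predict(X[:1]))
```

In a real study the feature set would be far richer and the labels would come from actual sensor readings and self-reported emotions, but the structure, audio features in, predicted human response out, is the same.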

The research is part of the lab’s broader goal of understanding how different forms of media, from films and TV ads to music, affect people’s bodies and brains. “Once we understand how media can affect your various emotions, then we can try to productively use it for actually supporting or enhancing human experiences,” says Shrikanth Narayanan, a professor at USC and the lab’s principal investigator.