This year, the entire smart speaker market is predicted to reach a value of $7bn (£5.37bn), according to Deloitte Global, making it the fastest-growing connected-device category in the world. In 2017, only 7% of Americans owned smart speakers, a figure that Nielsen says has risen to 24%. By 2022, Juniper Research expects that 55% of US households will own a smart speaker, which equates to around 175 million devices in 70 million homes.

Despite more transformative skills and features that can control your thermostat, order a ride, or deliver a pizza, the Deloitte research showed that playing music is still the most-used application in most markets, followed by weather forecast searches. Rumours are growing that Amazon may capitalise on this behaviour by launching a free, ad-supported streaming music service in the United States, which it would market through its voice-activated Echo speakers.

The speed with which these smart devices have jumped straight to the mainstream, becoming a hit in a few short years, has highlighted an important challenge for the music industry. And it’s one that wades into the murky, waist-deep waters of algorithms. For record labels and artists, success on smart speakers comes down to how well they can optimise their songs’ metadata.

Happy talk

Before the days of Alexa, Siri and the other voice-activated software, record labels and streaming services were working with a much simpler algorithmic toolbox. Artist names, track titles, genre tags, even beats-per-minute and release dates are among the most basic pieces of music metadata. “These tags have been applied over time in order to make searching easier,” says Lydia Gregory, cofounder of independent machine-learning company FeedForward. “Typically, they’re put in a taxonomy, a hierarchy or a structure.”
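To make the idea concrete, here is a minimal sketch of what such a tagged record might look like, with genre tags arranged as a path through a hierarchy, broad to specific. All names and values are illustrative, not taken from any real catalogue or company’s schema.

```python
# A hypothetical track record carrying the basic metadata fields
# mentioned above: artist, title, BPM, release date and genre tags.
track = {
    "artist": "Example Artist",
    "title": "Example Song",
    "bpm": 120,
    "release_date": "2018-06-01",
    # Genre stored as a path through a taxonomy, broad to specific
    "genre_path": ["Electronic", "House", "Deep House"],
}

def matches_genre(record, genre):
    """A track matches a query at any level of its genre hierarchy."""
    return genre in record["genre_path"]
```

The hierarchy is what makes searching easier: a listener asking for “house” and one asking for “deep house” both reach the same track.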

Now, in a streaming and voice-activated world, the descriptors need to factor in how music is requested and the platforms on which it is distributed. The more accurate and specific the tagging, the more likely the song will be circulated in the appropriate playlists and served up to the right listeners. “There are 30,000 songs uploaded to the internet daily,” says Hazel Savage, cofounder of AI-based start-up Musiio. “It is humanly impossible to listen to everything.” Metadata saves us from slogging through hours of music before finding something we enjoy.
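The point about specificity can be sketched in a few lines: if a playlist query matches only tracks whose tags satisfy every requested descriptor, a vaguely tagged song never surfaces for niche requests. The catalogue and tags below are made up for illustration.

```python
# Illustrative only: a tiny catalogue where each track carries a set of
# descriptive tags, and a playlist query selects tracks containing
# every requested tag.
catalogue = [
    {"title": "Track A", "tags": {"chill", "acoustic", "evening"}},
    {"title": "Track B", "tags": {"pop"}},  # too vague for niche queries
    {"title": "Track C", "tags": {"chill", "electronic"}},
]

def playlist(query_tags, tracks):
    """Return titles of tracks whose tags include all query tags."""
    return [t["title"] for t in tracks if query_tags <= t["tags"]]
```

A query for `{"chill"}` surfaces Tracks A and C, while `{"chill", "acoustic"}` surfaces only Track A; the sparsely tagged Track B matches neither, however it actually sounds.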