If you ever use Google Assistant or the Google app to recognize a song you're listening to, that process should get faster and more accurate in the future, Google says – thanks to advancements in its cloud-based artificial intelligence systems.

Sound Search (the feature behind "hey Google, what's this song?") has been upgraded to use a neural network four times the size of its predecessor. It also samples a song twice as frequently as before, to build a better picture of what you're listening to and increase the chances of a positive match.

Google says Sound Search can now automatically adjust its techniques based on whether it thinks it's listening to a popular song or a more obscure song – the AI can use more detailed checking methods to help identify tracks that are less well known.
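Google hasn't published the mechanism behind this adaptive behavior, but one way to picture it is a two-pass search: a cheap scoring pass over a small index of popular songs first, falling back to a slower, more detailed check over the full catalogue only when the first pass isn't confident. Everything below – the function names, the scoring methods, and the confidence threshold – is an illustrative assumption, not Google's actual implementation.

```python
# Hypothetical sketch of popularity-aware matching. Songs are stood in for
# by short lists of integers; real systems would compare audio features.

def cheap_score(query, song):
    """Fast, coarse similarity: fraction of query items the song shares."""
    q, s = set(query), set(song)
    return len(q & s) / len(q)

def detailed_score(query, song):
    """Slower, order-aware similarity: best count of aligned items."""
    best = 0
    for offset in range(len(song) - len(query) + 1):
        run = sum(1 for a, b in zip(query, song[offset:]) if a == b)
        best = max(best, run)
    return best / len(query)

def identify(query, popular, full_catalogue, confident=0.9):
    # Pass 1: check only the popular songs with the cheap score.
    name, score = max(((n, cheap_score(query, s)) for n, s in popular.items()),
                      key=lambda t: t[1])
    if score >= confident:
        return name
    # Pass 2: the song is probably obscure, so run the detailed check
    # over the whole catalogue.
    name, _ = max(((n, detailed_score(query, s)) for n, s in full_catalogue.items()),
                  key=lambda t: t[1])
    return name

popular = {"hit": [1, 2, 3, 4, 5]}
full_catalogue = {"hit": [1, 2, 3, 4, 5], "obscure": [9, 8, 7, 6, 5]}

print(identify([1, 2, 3, 4], popular, full_catalogue))  # → hit
print(identify([9, 8, 7, 6], popular, full_catalogue))  # → obscure
```

The design choice the sketch illustrates is the one the article hints at: most queries are for popular songs, so a cheap first pass resolves them quickly, while the expensive detailed pass is reserved for the long tail.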

Name that tune

Sound Search is able to operate so quickly by using machine-learning algorithms to extract a series of audio fingerprints from the music it's hearing, then rapidly whittling down the number of potential matches in its database.
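The fingerprint-and-whittle idea can be sketched in miniature. The article doesn't describe Google's fingerprint format, so the sketch below uses a made-up one: each "song" is reduced to a sequence of dominant-frequency bins, consecutive bins are hashed into fingerprint tuples, and a query is matched by counting how many fingerprints it shares with each catalogue entry. All names and data are illustrative assumptions.

```python
# Hypothetical audio-fingerprint matching. Real systems derive the
# frequency bins from spectrogram peaks; here they are just small ints.

def fingerprints(freq_bins, fan_out=3):
    """Hash each frequency bin against the next few bins after it."""
    prints = set()
    for i, anchor in enumerate(freq_bins):
        for j in range(1, fan_out + 1):
            if i + j < len(freq_bins):
                prints.add((anchor, freq_bins[i + j], j))
    return prints

def best_match(query_bins, catalogue):
    """Return the song whose fingerprints overlap the query's the most."""
    query = fingerprints(query_bins)
    scores = {name: len(query & prints) for name, prints in catalogue.items()}
    return max(scores, key=scores.get)

# Tiny made-up catalogue: frequency-bin sequences stand in for real audio.
catalogue = {
    "song_a": fingerprints([5, 9, 2, 9, 5, 1, 7, 3]),
    "song_b": fingerprints([8, 8, 4, 6, 2, 2, 6, 4]),
}

# A short excerpt of song_a still matches it, because most of the
# excerpt's fingerprints survive intact.
print(best_match([9, 2, 9, 5, 1, 7], catalogue))  # → song_a
```

Because fingerprints are set lookups rather than raw-audio comparisons, a database can be whittled down to a handful of candidates very quickly, which is the property the paragraph above describes.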

The same technology that the Pixel phones use in their Now Playing feature – where songs are identified automatically on the lock screen, no manual effort required – has now been adapted for Sound Search, Google says. The difference is that the Sound Search database is much larger, so more false positives need to be dismissed.

Google says it's working on making Sound Search even faster in the future, and admits that it's still not perfect: apparently the chances of getting a good match are reduced when you're in a very loud or very quiet environment.