Given the huge success of automated music platforms like Spotify, you might think that transition has already happened. But consider the compact disc. Released in 1982, it was only the first consumer manifestation of a shift to digital that was already well underway in the studio, and digital music only truly came of age with the arrival of Napster (1999) and the iPod (2001). When it comes to algorithmic music, we’re having our compact-disc moment, not our iPod moment. Over the next two decades, we can look forward to smart machines transforming how we make, discover and commune with music.

Playing by ear

Rebecca Fiebrink’s musical controller doesn’t look like much: it’s just a micro:bit, a barebones educational computer that fits into your hand. It’s the software that makes it come to life: a system called Wekinator, which aims to simplify next-generation music-making using machine learning. Wekinator learns from human actions and associates them with computer responses, thus eliminating the need to code.

Fiebrink holds the controller in the air and associates its position with a sound, then repeats the process at another point in space with another sound. Moving between the two points produces a smooth transition from one sound to the other – a pleasing effect. But it’s when she moves the controller to a third point, off the line between them, that the system really comes into its own: Wekinator creates a new sound. If you like it, you can keep it and add more; if not, you can try again.
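The workflow above is supervised learning by demonstration: a handful of (controller position, sound parameters) pairs are recorded, a model is fitted, and new positions are mapped to blended or novel parameter settings. Here is a minimal sketch of that idea using inverse-distance-weighted regression – Wekinator itself offers neural networks and other model types, and all names and parameter choices here are illustrative, not its actual API.

```python
import math

def train(examples):
    """Record demonstrations: a list of (position, sound_params) pairs.

    For this simple instance-based model, 'training' is just storing
    the examples; all the work happens at prediction time.
    """
    return list(examples)

def predict(model, position, eps=1e-9):
    """Map a controller position to sound parameters by
    inverse-distance-weighted averaging of the demonstrations."""
    weighted, total = [], 0.0
    for pos, params in model:
        d = math.dist(position, pos)
        if d < eps:
            return list(params)  # exactly at a demonstrated point
        w = 1.0 / d ** 2
        weighted.append((w, params))
        total += w
    n_params = len(model[0][1])
    return [sum(w * p[i] for w, p in weighted) / total
            for i in range(n_params)]

# Two demonstrations: position -> (pitch_hz, filter_cutoff), values illustrative.
model = train([
    ((0.0, 0.0), (100.0, 0.2)),
    ((1.0, 0.0), (400.0, 0.8)),
])

# Midway between the two demonstrations: an even blend of both sounds.
print(predict(model, (0.5, 0.0)))

# A third point off the line between them still yields a valid,
# possibly novel, combination of parameters.
print(predict(model, (0.5, 0.7)))
```

Moving the controller smoothly through space calls `predict` continuously, which is what produces the gradual morph between sounds; points off the demonstrated line interpolate in the same way, yielding sounds the user never explicitly programmed.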