The search giant's new project, named Magenta, is due for public launch at the start of June, but was unveiled at the Moogfest art and technology festival by Douglas Eck, a researcher from the company's Google Brain AI division.

As Quartz reports, Magenta was inspired by DeepDream, a product Google released last year which could turn the most ordinary of images into a trippy, surreal hellscape.

DeepDream is image-recognition software turned up to 11. Google trains its computers to describe images, but DeepDream deliberately over-interprets them, picking out normally meaningless elements and exaggerating them. After being run through the program, a picture of a wandering cloud may turn into a bizarre fish, or a many-headed dog.

Magenta works in a similar way, processing simple musical inputs and turning them into something more recognisable as music. Eck demonstrated this in a video captured by Quartz, which shows the software taking a normal five-note sequence and turning it into a rudimentary melody, finding patterns and building a tune based on its 'knowledge' of music.
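To get a feel for the idea, here is a minimal, hypothetical sketch of "extend a five-note seed by finding patterns" using a toy first-order model: it counts which note follows which in the seed and samples from those counts. This is an illustration only, not Magenta's actual code, and the function name and MIDI-pitch seed are invented for the example.

```python
import random

def extend_melody(seed, length=16, rng=None):
    """Extend a seed note sequence by sampling from note-to-note
    transition counts learned from the seed itself (a toy stand-in
    for a trained musical model)."""
    rng = rng or random.Random(0)
    # Count which notes tend to follow which in the seed.
    transitions = {}
    for a, b in zip(seed, seed[1:]):
        transitions.setdefault(a, []).append(b)
    melody = list(seed)
    while len(melody) < length:
        # Pick a successor seen after the current last note;
        # if none was observed, fall back to any seed note.
        candidates = transitions.get(melody[-1]) or seed
        melody.append(rng.choice(candidates))
    return melody

# Five-note seed (MIDI pitches for C-D-E-D-C)
print(extend_melody([60, 62, 64, 62, 60]))
```

A learned model like Magenta's replaces the raw transition counts with patterns picked up from large collections of uploaded music, but the shape of the task is the same: given a short input, keep predicting plausible next notes.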

Magenta's first release will be a tool to let musicians and researchers upload music files to the software, so the program can start to learn its craft. More software, which may allow users to actually create AI-generated music, will be released on the Magenta GitHub page as time goes on.

Computer-generated music might not hit the top of the charts any time soon, and Eck admitted that his machines aren't going to take the place of human musicians.