Google wants to put the art back in artificial intelligence.

During the last session at Moogfest, a four-day music and technology festival in Durham, North Carolina, Douglas Eck, a researcher at Google Brain, the company’s artificial-intelligence research project, outlined a new group that will focus on figuring out whether computers can truly create.

The group, called Magenta, will launch more publicly at the start of June, but attendees at Moogfest were given a taste of what it will be working on. Magenta will use TensorFlow, the machine-learning engine that Google built and open-sourced at the end of 2015, to determine whether AI systems can be trained to create original pieces of music, art, or video.

This is no simple task: even the most advanced artificially intelligent systems have enough trouble copying the styles of existing artists and musicians, let alone coming up with entirely new ideas of their own. Eck admitted as much during a panel discussion, saying that AI systems are “very far from long narrative arcs.”

But Magenta will aim to build tools that help other researchers, as well as its own team, explore the creative potential of computers. Much as Google opened up TensorFlow, Eck said Magenta will make its tools available to the public. The first release will be a simple program that helps researchers import music data from MIDI files into TensorFlow, allowing their systems to be trained on musical knowledge.

Adam Roberts, a member of Eck’s team, told Quartz that on June 1 the Magenta group will start posting more information about the resources it will be producing, adding new software to its GitHub page, and publishing regular updates to a blog. Roberts also showed off a simple digital synthesizer program he’d been working on, in which an AI could listen to notes he played and play back a more complete melody based on them.

The goal of the project, Eck suggested, could well be a system that regularly gives a listener “musical chills” with entirely new pieces of computer-generated music, enjoyed from the comfort of the couch at home. But he admitted that artistic creation, at least for the foreseeable future, will likely still involve humans to some degree, as it’s hard to program machines to be entirely creatively independent.

Eck said the inspiration for Magenta came from other Google Brain projects, like DeepDream, in which AI systems trained on image databases were made to “fill in the gaps” in pictures, finding structures that weren’t necessarily present in the images themselves. The result was psychedelic pictures in which ordinary images were infused with skyscrapers, eyeballs, or household items. Instead of using a system to create surreal images like those, Eck and his team wanted to see whether, given enough training data, a machine could create music that a person would find engaging and exciting to listen to. After tackling music, Eck said, the group will move on to images and video.

While Google’s AI systems are unlikely to replace chart-topping artists in the near future, Eck said he sees a path to computer-created music appearing in certain situations relatively soon. If, for example, a person’s wearable heart-rate tracker signals to her smartphone that she is stressed, an AI system could generate music designed to soothe her.