Taryn Southern’s pop music career nearly ended in 2004, during a very brief stint on American Idol at the age of 17. After making it to the Hollywood week round and cracking the Top 50, Southern was booted after forgetting the lyrics to her song in an inexplicable bout of stage fright. Simon Cowell would tell her, “Taryn Southern, it’s such a shame, because you have the name of a star.”

“Imagine the worst nightmare you could have over and over as a kid,” Southern tells Inverse. “Then it happens.” Through laughs, she describes how mortifying the experience was. Luckily for her, “this was before YouTube, so you can’t find the original footage online.”

Thirteen years on, Southern is about as far from American Idol as you can get, and that may only have been possible because she flubbed her performance so badly. She’s still making pop music, but she is pioneering the most improbable and innovative way to make it: her upcoming album will be a record whose music is entirely written and composed by artificial intelligence, with only the vocals her own.

The video for its first single, “Break Free”, uses visual art rendered by Google DeepDream.

In a lot of ways, losing her shot at success through American Idol laid out a path by which Southern may end up clinching her claim to fame through something much more consequential. She is effectively one of a few trailblazers looking to make A.I.-generated music a mainstream tool for commercial artists, more than just a gimmick to differentiate her in a crowded field.

Growing up, Southern says she “never really had any serious formal music training” other than a few piano lessons, and she had trouble turning the instrumentation percolating in her head into music. That limitation is precisely what attracted her to using A.I. to create the backing music for her vocal melodies.

Musicians are no strangers to A.I. — Google debuted its music-synthesizing A.I. program at Moogfest in May, and there are other A.I. melody makers, even if the process by which the melodies are made is completely artificial. Still, A.I. tools for musicians remain largely limited to the realm of electronic and experimental genres.

Southern aims to change all of that soon enough.

After the American Idol episode, Southern swore off ambitions of becoming a pop sensation. As the years passed, she found a creative outlet in acting and comedy, and began to inject music into the mix to create musical comedy videos for the internet.

“The first YouTube video I ever made was a comedy music video,” she says. “It wasn’t really for making music, it was just a way to express the comedy.” She made around two-dozen musical comedy videos over the next five years, selling one pilot to MTV.

The process of making these comedy videos reminded her of her own limitations: “I was always frustrated by my inability to actually play instruments on my own and produce myself,” says Southern. “I always had to work with someone else to bring my vision to life. Sometimes that was an amazing process — working with a collaborative human who makes your vision better — and other times it was painful, frustrating, and expensive.”

The language by which Southern describes her work is methodical and taut, verging on the kind of technical lexicon you might hear from an engineer or scientist. “The creative process is always burdened by logistical challenges,” Southern says. “When you’re in flow, and you’re writing or painting or whatever your creative craft is, you have something that abruptly stops that flow because you don’t have the correct output mechanism. It’s incredibly frustrating.”

An anthropologist by trade, Southern is obsessed with social science and neuroscience, and loves to dig into the ways these worlds collide with the emergent ubiquity of technology in modern life. In fact, she calls her new album, I AM AI, a platform through which she can explore these larger issues and questions — albeit through a pop-friendly filter for listeners.

“The whole album is an exploration of what it will mean to be human later,” she says. “These are things I think about all the time. I think about how a person in the future might write a song.”

Meet the A.I.

After reading about the A.I. software FlowMachines and the Beatles-inspired song the system developed, Southern began tinkering with NSynth and soon ran into more professional systems like Amper A.I., Jukedeck, and others.

“It started out as this creative challenge — to see what I could do with these new tools to create pop songs. But I couldn’t quite figure out how to make complete songs out of them.”

Southern decided she wanted to make an entire album using these A.I. programs. She used Amper A.I. for the majority of the tracks she produced, though she says tracks made with other tools will also appear on the album’s final tracklist.

“The future of music will be created through a collaboration between humans and A.I.”

“We had a very strong belief that the future of music would be created through a collaboration between humans and A.I.,” Drew Silverstein, a music composer and co-founder and CEO of Amper, tells Inverse. He says the software initially launched to help composers like him create customizable tracks for movies, television shows, commercials, and other projects — but there was always a larger goal of bringing the technology to music creation of any sort.

“Our initial use case is largely for functional music,” says Silverstein. “At the same time, we believed then, and still believe now, Amper’s A.I. can be an incredible collaborative tool for artists, like Taryn. When you think about human collaboration and creativity, you can basically put that on steroids when you’re using A.I.”

Southern is one of the first artists to take A.I. music software and use it exclusively for an artistic purpose.

“Amper has the simplest interface combined with the most amount of customizability on the user’s side,” says Southern. This is pretty evident right off the bat — a user just has to input the genre, desired beats-per-minute rate, mood, key, and instrumentation, and hit the “render” button to create a track.
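Amper’s actual interface isn’t documented in this story, so purely as an illustration — with hypothetical field names, not Amper’s real API — the inputs a user fills in before hitting “render” amount to a small settings object:

```python
from dataclasses import dataclass

# Hypothetical sketch of the inputs described above.
# Field names and values are illustrative, not Amper's actual API.
@dataclass
class RenderSettings:
    genre: str
    bpm: int              # tempo in beats per minute
    mood: str
    key: str
    instrumentation: list

settings = RenderSettings(
    genre="pop",
    bpm=92,
    mood="tender",
    key="C major",
    instrumentation=["synth pad", "electric piano", "light percussion"],
)
```

Each “render” then turns one such bundle of settings into a fresh audio track, which is why re-rendering with tweaked values is so cheap.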

Within five minutes, I used the simple version to create this 60-second “tender ‘90s pop” track, heavy on moody synths and light on percussion.

Southern says she likes to begin with a skeleton beat and chord structure that sounds good (“usually takes between six to 10 renders,” she says), then move on to playing with the instrumentation and other presets to fill in a particular sound that emotes something for her.

Someone like Southern, who’s a little more well-versed in song creation, will want to take snippets of these tunes and stitch them together into a song with a complete beginning, middle, and end and a natural pop progression. While most programs require you to do this manually, Amper’s system already lets users easily cut and paste snippets together without the need for an external application like GarageBand or Pro Tools.
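That cut-and-paste arranging can be pictured as simple concatenation — the section names and durations below are invented for illustration, not taken from Southern’s sessions:

```python
# Toy model of arranging rendered snippets into a full pop structure.
# Each snippet is (section_name, duration_in_seconds) — values are invented.
snippets = {
    "intro": ("intro", 8),
    "verse": ("verse", 24),
    "chorus": ("chorus", 16),
    "outro": ("outro", 8),
}

# A conventional pop progression: intro, verse, chorus, verse, chorus, outro.
arrangement = ["intro", "verse", "chorus", "verse", "chorus", "outro"]
song = [snippets[name] for name in arrangement]
total_seconds = sum(duration for _, duration in song)
print(total_seconds)  # 96
```

The point is that once the A.I. hands back usable snippets, the “songwriting” left to the human is ordering and repetition — exactly the step Amper folds into its own interface.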

Of course, these programs are fairly new, and have their limitations. When using Amper to create songs, for example, Southern corresponded with the company almost nonstop in order to figure out all of the program’s capabilities and push its technology to its limits. “I was breaking their software,” she says, “because I was rendering out so much. One song had like 150 renders, and it got to the point where it couldn’t render itself out anymore.”

Amper was very enthusiastic to work with Southern, and Silverstein says her work with the software is precisely what the company sees as the future of music. He concedes the company’s interface is not yet advanced enough to really fulfill what a lot of professional musicians want, but he and his team are making headway.

“We believe that in a matter of years, every piece of music around the world will be created with Amper,” he says. “Amper is part of the greatest creative revolution in history, when you think about how much this type of technology democratizes the expression of one’s self through music.”

From the video for “Break Free.” Taryn Southern / YouTube

Southern seems to share the sentiment: “Writing music this way has definitely changed the course of the overall album and sound,” she says. Her original goal for I AM AI was much more audacious — she wanted to create 12 tracks that each explored a different genre and sound, in order to demonstrate the capabilities of A.I. music software. Her friends talked her out of it, suggesting something like that might create sonic whiplash for listeners forced through extreme swings track by track.

So Southern narrowed her focus to a cinematic sound she doesn’t think she could have executed as easily without A.I. “I love soundtracks for movies,” she says. “Sometimes you’re watching scenes that go on for four or five minutes on end, and those composers make sure you stay riveted to that scene.

“I don’t have the music background to verbalize what I want,” she says, but through A.I., she can create that sonic quality of tension without much trouble. “Each song has an element of tension, and that’s what I search for when I create the first iteration of music.”

The A.I. is far from perfect: there are always parts of a rendering where Southern wishes the track would move in a different direction than it does. This isn’t such a hassle when working with a program that exports a MIDI file, which one can open in Pro Tools to move around specific notes. But software like Amper doesn’t work that way, and that’s where compromises need to be made — just as they might in conventional studio sessions.
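The MIDI advantage is easy to sketch: a MIDI track is essentially a list of note events, so moving notes around is plain data manipulation. The events below are invented for illustration, not from any real render:

```python
# A MIDI track is, at heart, a list of note events. Here each event is
# (pitch, start_beat, duration_in_beats) — values invented for illustration.
notes = [(60, 0.0, 1.0),   # C4
         (64, 1.0, 1.0),   # E4
         (67, 2.0, 2.0)]   # G4

def transpose(events, semitones):
    """Shift every note's pitch — the kind of edit a MIDI export allows."""
    return [(pitch + semitones, start, dur) for pitch, start, dur in events]

# Move the whole phrase up a whole step (two semitones).
print(transpose(notes, 2))  # [(62, 0.0, 1.0), (66, 1.0, 1.0), (69, 2.0, 2.0)]
```

A program that only exports rendered audio, by contrast, bakes those decisions in, which is exactly the compromise Southern describes.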

Is it really music?

The rise in A.I. music won’t be without its detractors. Whenever a new technology rears its head, the old guard is always close by to bemoan the breakdown of authenticity and artistic purity. Is it really music if a machine makes it? Can art really be art if it’s made by an artificial mind?

“We do this with every new piece of technology, where we have this fear of it taking over some perception of human specialness,” she says. “But for whatever reason, we come to embrace it. Just look at photography. I don’t know how many professional photographers today know how to go into a darkroom and develop film. Does that make them less creative or less of an artist? Or does that just mean the tools have changed?

“One could argue this — the A.I. — is just a new form of instrument. As a result, people have a new frame for writing and composing songs. I only think it augments our abilities to be more creative, and opens the door to more people who don’t have access to formal education or instrumentation, who want to sit down and write a song. That’s exciting to me.”

These and larger issues are already surfacing in responses to Southern’s music. “Break Free” is about a human who wants to move beyond her biological limitations and experience more of the world, but when Southern played the song for friends, the response she got was that they believed it was about an A.I. wanting to “break free” and become human.

“I thought, ‘that’s so funny, of course humans would think we are the most desirable version of existence.’” In a catchy, radio-friendly 4-minute track, Southern is already putting listeners head-to-head with their own interpretations about what existence really is, and whether it’s truly defined by blood and bone, or if there is room for metal and wire.

I AM AI will be self-released by Southern sometime in December, along with four different VR music videos for listeners to experience.