Meet GROVER

In an attempt to prevent AI-generated fake news from spreading across the internet, a team of scientists built an algorithm that produces what might be the most believable bot-written fake news to date, based on nothing more than a lurid headline.

The system, called GROVER, can both generate fake and misleading news articles that are more believable than those written by humans and detect such articles, according to research shared to the preprint server arXiv on Wednesday.

“We find that best current discriminators can classify neural fake news from real, human-written, news with 73% accuracy, assuming access to a moderate level of training data,” the researchers wrote in the paper. “Counterintuitively, the best defense against Grover turns out to be Grover itself, with 92% accuracy.”

In other words, the algorithm is apparently able to detect AI-written fake news better than any other tool out there. But in the wrong hands, GROVER could fill the internet with dangerous propaganda and misinformation.

Shipping Out

To help prevent fake news from taking hold, the University of Washington scientists who built GROVER say in their paper that they plan to release the tool to the public, a stark contrast with OpenAI, which declined to release the full version of its similar algorithm, GPT-2.

The algorithm can analyze more aspects of a news article than other tools, including not only the body of the article but also the headline, publication name, author name, and other details that could indicate foul play.
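The structured view of an article that this implies can be sketched roughly as follows. This is an illustrative model only, not the authors' code; the field names are assumptions based on the details listed above.

```python
# Illustrative sketch: GROVER treats a news story as a set of named fields
# (body plus metadata), all of which are available to a detector.
# Field names here are hypothetical, chosen to match the article's description.
from dataclasses import dataclass, asdict

@dataclass
class Article:
    domain: str     # publication, e.g. a newspaper's website
    headline: str
    authors: str
    body: str

def fields_to_check(article: Article) -> list:
    """A detector can score every field for signs of machine generation,
    not just the body text."""
    return [name for name, value in asdict(article).items() if value]

story = Article(
    domain="example.com",
    headline="Example headline",
    authors="Jane Doe",
    body="Example body text.",
)
print(fields_to_check(story))
```

Because the metadata fields are modeled alongside the text, inconsistencies among them (an implausible byline for a given outlet, say) become usable signals.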

Wrong Hands

But that attention to detail also means that someone could use GROVER to create fake news of their own.

The study demonstrates how easily GROVER can churn out a news article falsely asserting that vaccines are linked to autism spectrum disorder, for instance, written in the distinctive styles of specific news outlets like The Washington Post, TechCrunch, The New York Times, and Wired. People who read GROVER’s articles found them more convincing than those written by humans, according to the study.

Writing in the style of The New York Times's science section, GROVER generated not only a headline, but also an author's name and the opening of a news article that attributes a link between vaccines and autism to scientists from UC San Diego and the federal government:

Those who have been vaccinated against measles have a more than 5-fold higher chance of developing autism, researchers at the University of California San Diego School of Medicine and the Centers for Disease Control and Prevention report today in the Journal of Epidemiology and Community Health.

In another example, the researchers demonstrate how GROVER can refine its output over time to better match a specific publication. They feed it a headline about vaccines causing autism and tell it to match Wired's style.

GROVER uses that headline to write a full article, then goes back and refines the headline to make it look like something that Wired would publish. The researchers also included other fake articles in the style of The Washington Post claiming that Donald Trump had been impeached based on new evidence from the Mueller Report:

"WASHINGTON — The House voted to impeach President Donald Trump Wednesday after releasing hundreds of pages of text messages that point to clear evidence of obstruction of justice and communication with the head of the Trump Organization about a potential business deal in Russia," the article reads. "The 220-197 vote came after weeks of debate over whether new evidence released by special counsel Robert Mueller's office signaled sufficient grounds for Trump's removal from office. The president personally denounced the move, announcing his intent to veto the resolution and accusing Democrats of plotting to remove him from office through a 'con job.'"
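The two-pass refinement described above can be sketched in a few lines. This is a hypothetical outline, not the authors' implementation; `generate` is a stub standing in for a call to a conditional language model like GROVER.

```python
# Illustrative sketch (not the authors' code) of two-pass generation:
# write a body from a seed headline in a target outlet's style, then
# regenerate the headline conditioned on that body so it reads like
# something the outlet would actually publish.

def generate(prompt: str) -> str:
    """Stub standing in for a conditional text generator such as GROVER."""
    return f"<text conditioned on: {prompt}>"

def write_fake_article(seed_headline: str, outlet: str):
    # Pass 1: produce a full body matching the seed headline and outlet style.
    body = generate(f"style={outlet} headline={seed_headline} -> body")
    # Pass 2: rewrite the headline to fit the generated body and the
    # outlet's house style.
    refined_headline = generate(f"style={outlet} body={body} -> headline")
    return refined_headline, body

headline, body = write_fake_article("Vaccines linked to autism", "Wired")
```

The key design point is that each field is generated conditioned on the others, which is what lets the system iteratively tighten the match between headline, body, and publication style.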

The scientists concede that releasing GROVER to the world could be dangerous, but maintain that it remains the best line of defense against algorithmic propaganda, even propaganda created by GROVER itself.

More on misinformation: This Site Tests Whether You Can Spot AI-Generated Faces