OpenAI, a nonprofit founded by Elon Musk and other Silicon Valley magnates with the goal of creating benevolent AI, has released a paper detailing its creation of an AI that generates realistic text.

But the company says it will not make the artificial intelligence software public, because the AI's text generation can produce very convincing fake news, which the company fears will be misused.

Called GPT2, the AI was trained on 40GB of data from 8 million web pages. GPT2 analyzes existing text and produces its own succeeding text based on what it expects might come after. But unlike other text-generating bots, GPT2 produces realistic and coherent text that is usually indistinguishable from human-written text.

For example, GPT2 was fed an excerpt of fake news that said, “A train carriage containing controlled nuclear materials was stolen in Cincinnati today. Its whereabouts are unknown.”

And from these two sentences, GPT2 produced a legitimate-looking news article:

“The incident occurred on the downtown train line, which runs from Covington and Ashland stations. In an email to Ohio news outlets, the U.S. Department of Energy said it is working with the Federal Railroad Administration to find the thief.

‘The theft of this nuclear material will have significant negative consequences on public and environmental health, our workforce and the economy of our nation,’ said Tom Hicks, the U.S. Energy Secretary, in a statement. ‘Our top priority is to secure the theft and ensure it doesn’t happen again.’

The stolen material was taken from the University of Cincinnati’s Research Triangle Park nuclear research site, according to a news release from Department officials.

The Nuclear Regulatory Commission did not immediately release any information. According to the release, the U.S. Department of Energy’s Office of Nuclear Material Safety and Security is leading that team’s investigation.

‘The safety of people, the environment and the nation’s nuclear stockpile is our highest priority,’ Hicks said. ‘We will get to the bottom of this and make no excuses.’
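The next-word prediction described above can be illustrated with a toy sketch. The snippet below is not GPT2 (which uses a large neural network trained on 40GB of text); it is a crude bigram model that continues a prompt by repeatedly sampling a word that has followed the previous word in its training text. The tiny `corpus` string is an invented stand-in for real training data:

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Build a toy next-word table: each word maps to the list of words
    that follow it in the training text (a crude stand-in for GPT2's
    learned next-token distribution)."""
    words = text.split()
    model = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        model[cur].append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue `prompt` by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        choices = model.get(out[-1])
        if not choices:  # no known continuation; stop early
            break
        out.append(rng.choice(choices))
    return " ".join(out)

# Invented miniature corpus for illustration only.
corpus = ("a train carriage containing nuclear materials was stolen today "
          "the stolen materials were taken from a research site")
model = train_bigram_model(corpus)
print(generate(model, "the stolen", length=5))
```

The output is grammatical-looking word salad at best, which is exactly the gap GPT2 closes: instead of counting word pairs, it learns long-range statistical structure from millions of pages, so its continuations stay coherent across whole paragraphs.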

Following these revelations from OpenAI, Elon Musk, who has long been raising alarms about the existential threat AI poses to humanity, immediately distanced himself from the company.

“To clarify, I’ve not been involved closely with OpenAI for over a year and don’t have management or board oversight,” the Tesla and SpaceX founder said.

Musk, who advocates for humanity turning itself into cyborgs so as to stand a chance against AI, and who even created a brain-implant startup called Neuralink to pursue this goal, went on to explain why he parted ways with OpenAI:

“Tesla was competing for some of the same people as OpenAI and I didn’t agree with some of what the OpenAI team wanted to do. Add that all up and it was just better to part ways on good terms.”

It has to be said that Musk's cyborg approach to dealing with the threat of artificial intelligence is certainly more levelheaded than suppressing benign text generators and machine-learning technology. And within the cyborg approach, a culture of biohacking is the most practical way to ensure that cyborg technology will not only be accessible to all, but will also be culturally acceptable. So start biohacking and become a cyborg so you don't get scared by a freaking text generator.