Last week, OpenAI released GPT-2, a large language model that quickly became controversial. Without task-specific training data, GPT-2 achieves state-of-the-art performance on seven of eight language modeling benchmarks and can handle tasks like reading comprehension and question answering.

A paper and some code were released when the unsupervised model, trained on 40GB of internet text, went public, but the full model was withheld over its creators’ concerns about “malicious applications of the technology,” such as the automated generation of fake news. As a result, the wider community cannot fully verify or replicate the results.

Some, including Keras deep learning library founder François Chollet, called the OpenAI GPT-2 release (or lack thereof) an irresponsible, fear-mongering PR tactic and publicity stunt. Others argued that it’s deeply ironic for a nonprofit named OpenAI to begin closing off access to its work.

State-of-the-art advances in language models are noteworthy, but there’s nothing new about the conversations GPT-2 has sparked. It broaches two fundamental questions that likely cross the minds of top AI and ML talent around the world: Should AI research that could be used for harm be locked away rather than shared with the wider scientific community? And how much responsibility does a creator bear for their creation?

Future of Life Institute cofounder Max Tegmark neatly summed up the conflict at play as AI evolves in an interview with VentureBeat last year, in which he described weighing the risks of AI models not as fear mongering but as safety engineering.

People often ask me if I’m for or against AI, and I ask them if they think fire is a threat and if they’re for fire or against fire. Then they see how silly it is; of course you’re for fire — in favor of fire to keep your home warm — and against arson, right? The difference between fire and AI is that — they’re both technologies — it’s just that AI, and especially superintelligence, is way more powerful technology. Technology isn’t bad and technology isn’t good; technology is an amplifier of our ability to do stuff. And the more powerful it is, the more good we can do and the more bad we can do. I’m optimistic that we can create this truly inspiring, high-tech future as long as we win the race between the growing power of the technology and the growing wisdom with which we manage it.

It’s these concerns, and a shift away from viewing open source as an unquestionable good, that recently led researchers from Microsoft, Google, and IBM to create Responsible AI Licenses (RAIL), an attempt to restrict the use of AI models through legal means.

“We recognized the risks our work can sometimes bring to the world; that led us to think about potential ways of doing this,” RAIL cofounder Danish Contractor told VentureBeat in an exclusive interview.

The need to think about the implications of one’s work has been integral to conversations about bias and ethics in AI over the past year or so, as has OpenAI’s declaration earlier this week that AI needs social science as well as computer science.

A live conversation about these converging conflicts for researchers took place on This Week in Machine Learning &amp; AI, which included OpenAI research scientists and industry experts.

OpenAI research scientists Amanda Askell and Miles Brundage said the nonprofit was being cautious because it wasn’t highly confident the model would be used more for positive than for negative purposes. They also said OpenAI has considered some sort of partnership program through which vetted researchers or industry partners could gain access to the model.

Nvidia director of ML research Anima Anandkumar called OpenAI’s approach counterproductive, arguing that it hurts students and academic researchers in marginalized communities, who have the least access to resources, while doing little to prevent replication by malicious actors.

“I’m worried if the community is moving away from openness to a closed setting just because we suddenly feel there is a threat, and even if there is, it’s not going to help because there’s already so much available in the open and it’s so easy to go look at these ideas, including the blog post and paper from OpenAI, to reproduce this,” she said.

Similar arguments were made recently amid talk of the Commerce Department limiting exports of AI to other countries. Perhaps the APIs of large tech companies like Microsoft could be restricted, but open portals for papers like arXiv and code-sharing platforms like GitHub would still support dissemination of the vital elements.

Deepfake technology made to distort images and video and the evolution of large-scale AI models aren’t going away.

Ultimately, wherever you land on how OpenAI handled the release of GPT-2, the spread of the idea that creators bear some responsibility for their creations is an encouraging trend.

It’s hard to say whether restrictions will keep determined malicious actors with resources and know-how from replicating models, but if limiting access becomes a trend as more powerful systems emerge, it may be to the detriment of the science of creating AI systems.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers — and be sure to bookmark our AI Channel.

Thanks for reading,

Khari Johnson

AI Staff Writer

From VentureBeat

Facebook’s chief AI scientist: Deep learning may need a new programming language

Deep learning may need a new programming language that’s more flexible and easier to work with than Python, Facebook AI Research director Yann LeCun said.

Uber open-sources Autonomous Visualization System, a web-based platform for vehicle data

Uber’s Autonomous Visualization System (AVS) is a tool that enables developers to see through the eyes — or rather sensors — of driverless cars.

OpenAI: Social science, not just computer science, is critical for AI

In a newly published paper, OpenAI suggests that social science holds the key to ensuring AI systems perform as intended.

Q&A with leaders of Intel’s MESO chip: ‘This will happen faster than you think’

VentureBeat interviewed Intel’s Amir Khosrowshahi, CTO of AI, and Ian Young, Senior Fellow and leader of the MESO processor project.

Google Cloud Text-to-Speech adds 31 WaveNet voices, 7 languages and dialects

Google’s Cloud Text-to-Speech API has gained 31 new WaveNet voices, 7 new languages and dialects, and more. Cloud Speech-to-Text, meanwhile, is now cheaper.

Ctrl-labs raises $28 million from GV and Alexa Fund for neural interfaces

Ctrl-labs, a New York startup developing neural interface technology, today announced that it has raised $28 million in a financing round led by GV.

Strategy Analytics: Amazon beat Google in Q4 2018 smart speaker shipments

Strategy Analytics reports that smart speaker shipments hit a whopping 86.2 million units in Q4 2018, driven in part by smart displays.

Video of the Week

Please enjoy this video of the aforementioned conversation about GPT-2 on This Week in Machine Learning &amp; AI.

Beyond VB

Apple acquires talking Barbie voicetech startup PullString

Apple has just bought up the talent it needs to make talking toys a part of Siri, HomePod, and its voice strategy. (via TechCrunch)

As concerns over facial recognition grow, members of Congress are considering their next move

“This is a perfect issue for our committee to look into,” California Rep. Jimmy Gomez told BuzzFeed News. (via BuzzFeed)

Pope Francis and Microsoft team up to promote prize for ethical artificial intelligence

Pope Francis and Microsoft are teaming up to sponsor an award for best dissertation on the ethics of “artificial intelligence at the service of human life.” (via uCatholic)

The Pentagon needs to woo AI experts away from big tech

Opinion: Without more DOD investment, there just aren’t enough incentives to lure talent away from high-paying jobs with great benefits into a life of public service. (via Wired)