Not every new technology product hits the shelves.

Tech companies kill products and ideas all the time — sometimes it's because they don't work, sometimes there's no market.

Or maybe it's simply too dangerous.

Recently, the research firm OpenAI announced that it would not be releasing a version of a text generator it developed, because of fears that it could be misused to create fake news. The text generator was designed to improve dialogue and speech recognition in artificial intelligence technologies.

The organization's GPT-2 text generator can produce paragraphs of coherent, continuing text based on a prompt from a human. For example, given the claim, "John F. Kennedy was just elected President of the United States after rising from the grave decades after his assassination," the generator spat out the transcript of "his acceptance speech," which read in part:

It is time once again. I believe this nation can do great things if the people make their voices heard. The men and women of America must once more summon our best elements, all our ingenuity, and find a way to turn such overwhelming tragedy into the opportunity for a greater good and the fulfillment of all our dreams.
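GPT-2 itself is a large neural network, but the basic mechanic described above — continuing a prompt by repeatedly predicting a plausible next word — can be sketched with a toy word-level Markov chain. Everything below is illustrative (the corpus, function names, and bigram approach are stand-ins), not OpenAI's actual model or code:

```python
import random

def build_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue the prompt word by word, sampling from observed successors."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break  # no observed continuation for the last word
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the people make their voices heard and the people find a way forward"
model = build_bigrams(corpus)
print(generate(model, "the people"))
```

A real language model replaces the bigram lookup table with learned probabilities over a vast vocabulary and context window, which is what lets it produce paragraphs rather than short word chains.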

Considering the serious issues around fake news and online propaganda that came to light during the 2016 elections, it's easy to see how this tool could be used for harm.

In fact, the 2016 election helped raise awareness of an issue that Flickr co-founder Caterina Fake has been talking about in Silicon Valley for years — the ethics of technology.

That conversation was furthered by OpenAI's decision to publicize the non-release of its new technology last month, Fake told NPR's Lulu Garcia-Navarro.

"Tech companies don't launch products all the time, but it's rare that they announce that they're not launching a product, which is what has happened here," Fake said. "The announcement of not launching this product is basically to involve people in the conversation around what is and what is not dangerous tech."

When evaluating potential new technology, Fake asks a fundamental question: should this exist?

It's a question she explores as host of the podcast Should This Exist?

In a recent episode, Fake investigates a product called Woebot, which is an artificial intelligence-driven robot therapist. NPR spoke with Fake about the ethics of this new technology — and technology as a whole.

Interview Highlights

On evaluating Woebot

As we know, depression has increased, which has followed very closely the introduction of technology into our lives.

My initial impulse was, "Gosh, should we use technology to cure the problems of technology? That seems misguided." But, by the end of thinking through some of the possibilities of this technology, I became convinced that in fact, this was probably a good solution for it.

On the changing approach of technology developers

When we had first started Flickr, we kind of understood that what we were building was online community. Online community is something where you show up — you are yourself, you have to participate and you have to negotiate the culture of the community in which you are participating.

In a social media platform, you are so-called "eyeballs." You are a product that is being sold to advertisers. It's a completely different dynamic. When things switched from being, very early on, thought of as "online community" to being thought of as "social media," the dynamics of the entire software changed.

On technology's potential for good or evil

I feel as if technology can always be used for good, right? It has neutral valence. It is the way that humans use it, how we approach it and how we think about it — that is the most important part of technology and technology in our lives.

On how to handle technology's potential to be misused

The important part of this is to acculturate people to asking these questions. As we all know, Millennials and Gen Z and the younger folk are much more thoughtful about: what are the values behind this product or this program? And what does it do to us?

NPR's Amanda Morris produced this story for digital. NPR's Mayowa Aina produced this story for broadcast.

Copyright NPR 2020.