5. Keeping AI from the bad guys

Artificial intelligence researchers, worried about potential harm from their own inventions, want to keep some findings under wraps to prevent their misuse, Axios emerging technology reporter Kaveh Waddell writes:

AI researchers are working to limit dangerous byproducts of their work, like race- or gender-biased systems and supercharged fake news.

What's new: OpenAI, a prominent lab, unveiled a computer program last week that can generate prose that sounds human-written.

OpenAI allowed reporters to test-drive the program. (We did: See the result.)

But OpenAI said it would withhold the computer code, fearing that somebody could use it to mass-produce fake news.

This was the first time a major research outfit is known to have used the rationale of safety to keep AI work secret.

The move got massive blowback: AI researchers accused the group of pulling a media stunt, stirring up fear and hype, and unnecessarily holding back an important advance.

Several experts praised OpenAI for kicking off a necessary debate.

Why it matters ... Against the backdrop of the techlash, we're seeing a messy conversation around an urgent question: What to do with increasingly powerful "dual-use" technologies — AI that can be used for good or for ill.

Be smart: Computer science is lurching toward the same tough decisions that biologists and nuclear scientists had before them — when to circumscribe openness in the name of safety and ethics.