For the first six decades of AI's development, the biggest question facing researchers was whether their inventions would work at all. Now, the field has entered a new stage of introspection as its effects on society — both positive and damaging — reverberate outside the lab.

In this uneasy coming of age, AI researchers are determined to avert the catastrophic mistakes of their forefathers who brought the internet to adulthood.

What's happening: As the tech world reels from a hailstorm of crises around privacy, disinformation and monopoly — many stemming from decisions made 30 years ago — there's a sense among AI experts that this is a rare chance to get a weighty new development right, this time from the start.

In the internet's early days, technologists were "naive" about its potential downsides, John Etchemendy, co-director of Stanford's new Institute for Human-Centered AI (HAI), told reporters Monday at the institute's kickoff.

"We all imagined that it would allow everybody to have a voice, it would bring everybody together — you know, kumbaya. What has in fact happened is just frightening. I think it should be a lesson to us. Now we're entering the age of AI. … We need to be 100 times more vigilant in trying to make the right decisions early on in the technology."

— John Etchemendy, Stanford

In Microsoft's early days, nobody knew the company's work would lead to today's information free-for-all on social media, Bill Gates said at the HAI event. "There wasn’t a recognition way in advance that that kind of freedom would have these dramatic effects that we're just beginning to debate today," he said.

Driving the news: Stanford trotted out some of the biggest guns in AI to celebrate the birth of its new research center on Monday. The programming emphasized the university's outsized role in the technology’s past — but the day was shot through with anxiety at a potential future shaped by AI run amok.

The question at the center of the symposium, and increasingly of the field: "Can we have the good without the bad?" It was asked from the stage Monday by Fei-Fei Li, a co-director at HAI and leading AI researcher.

"For the first time, the ethics of AI isn't an abstraction or philosophical exercise," said Li. "This tech affects real people, living real lives."

Similar themes swirled around MIT's high-profile launch of its own new AI center earlier this month.

At this early stage, the angst and determination have yielded only baby steps toward answers.

Companies are hiring ethics experts, like Google's Timnit Gebru and Accenture's Rumman Chowdhury, to help keep them out of hot water.

Nonprofits like the Partnership for AI are convening academics and tech firms to research ethical issues and come up with ground rules for addressing them.

Computer scientists are debating how to engage with policymakers and the public about their work, and whether it's appropriate to publish potentially dangerous developments.

Among the concerns motivating the explosion of conferences, institutes and experts centered on ethics in AI: algorithms that perpetuate biases, widespread job losses due to automation, and an erosion of our own ability to think critically.

"Something big is happening in the plumbing of the world," said California Gov. Gavin Newsom at the Stanford event. "We're going from something old to something new, and we are not prepared as a society to deal with it."

Go deeper: Tech's scramble to limit offline harm from online ads