Fei-Fei Li heard the crackle of a cat’s brain cells a couple of decades ago and has never forgotten it. Researchers had inserted electrodes into the animal’s brain and connected them to a loudspeaker, filling a lab at Princeton with the eerie sound of firing neurons. “They played the symphony of a mammalian visual system,” she told an audience Monday at Stanford, where she is now a professor.

The music of the brain helped convince Li to dedicate herself to studying intelligence—a path that led the physics undergraduate to specialize in artificial intelligence and help catalyze the recent flourishing of AI technology and use cases like self-driving cars. These days, though, Li is concerned that the technology she helped bring to prominence may not always make the world better.

Her Stanford speech marked the opening of the Institute for Human-Centered Artificial Intelligence, or HAI, which will work on topics such as how to ensure algorithms make fair decisions in government or finance, and what new regulations may be required for AI applications. Luminaries from Silicon Valley and beyond—including Henry Kissinger and ex-Yahoo CEO Marissa Mayer—came to hear a day of discussions featuring a roster of academic and industry figures, including Bill Gates, on how AI will shape society. Later, Li, a founder and co-director of HAI, told WIRED why AI research needs steering onto a new path.

WIRED: Stanford has one of the world’s longest-running AI labs, and around the world there is more AI R&D than ever before. Why create a new research institute?

Fei-Fei Li: AI started as a computer science discipline, but now we are in a new chapter. This technology has the potential to do so many good things, but there are also risks and pitfalls. We have to act and make sure it is benevolent to humans.

At HAI we are making AI an interdisciplinary field of study and education by working with many different thinkers and practitioners: social scientists, political scientists, economists, doctors, and neuroscientists. My aspiration is to come up with thoughtful frontier research as well as potential policy recommendations.

If people working on AI technology have to start engaging with such broader questions, will technical progress slow down?

I never thought this has anything to do with slowing down. We are asking people to be more imaginative, collaborative, thoughtful, and human-centered. I don't know if these adjectives imply slowing down. We want to broaden the horizon and deliver the positive potential in a more concrete way.