Stephen Hawking, who’s well-known for making (and losing) bets with other physicists, isn’t ready to wager on the future of Artificial Intelligence.

“In short, the rise of powerful AI will be either the best or the worst thing ever to happen to humanity,” Hawking said this week at the opening of the Leverhulme Centre for the Future of Intelligence in Cambridge, England. “We do not yet know which.”

The LCFI, which opened Monday, is part of the University of Cambridge’s Centre for the Study of Existential Risk. Its goal is to answer some of the biggest questions facing the rapidly advancing field — including what it all means, and how to keep AI from killing us.

In the past, Hawking has warned that AI could end mankind.

“It would take off on its own, and redesign itself at an ever-increasing rate,” he told the BBC in 2014. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Elon Musk and Bill Gates have also expressed anxiety over the development of AI. During a 2015 Reddit AMA, Gates said he was concerned that more people weren’t worried about the future effects of super-intelligent machines, and in a 2014 talk with MIT students, Musk called AI our greatest existential threat. “With Artificial Intelligence, we are summoning the demon,” he said.

Robots are already being developed and tested as pilots, journalists, manufacturing workers and health care providers. Self-driving cars are on the road, the first AI-generated pop song was released in September, and Honda’s humanoid robot, ASIMO, can even dance.

But Hawking’s remarks also highlighted AI’s potential benefits, such as helping to eradicate disease and poverty and to curb global warming.

“Every aspect of our lives will be transformed,” he said. “In short, success in creating AI could be the biggest event in the history of our civilization.”