Max Tegmark left his native Sweden in 1990 after receiving his B.Sc. in Physics from the Royal Institute of Technology (he’d earned a B.A. in Economics the previous year at the Stockholm School of Economics). His first academic venture beyond Scandinavia brought him to California, where he studied physics at the University of California, Berkeley, earning his M.A. in 1992 and Ph.D. in 1994.

After four years of west coast living, Tegmark returned to Europe and accepted an appointment as a research associate with the Max-Planck-Institut für Physik in Munich. In 1996 he headed back to the U.S. as a Hubble Fellow and member of the Institute for Advanced Study, Princeton. Tegmark remained in New Jersey for a few years until an opportunity arrived to experience the urban northeast with an Assistant Professorship at the University of Pennsylvania, where he received tenure in 2003.

He extended the east coast experiment and moved north of Philly to the shores of the Charles River (Cambridge side), arriving at MIT in September 2004. He is married to Meia Chita-Tegmark and has two sons, Philip and Alexander.

Tegmark is an author on more than two hundred technical papers and has featured in dozens of science documentaries. He has received numerous awards for his research, including a Packard Fellowship (2001–06), a Cottrell Scholar Award (2002–07), and an NSF CAREER grant (2002–07), and is a Fellow of the American Physical Society. His work with the SDSS collaboration on galaxy clustering shared first prize in Science magazine’s "Breakthrough of the Year: 2003."

Max Tegmark: I’m optimistic that we can create an awesome future with technology as long as we win the race between the growing power of the tech and the growing wisdom with which we manage the tech.

This is actually getting harder because of nerdy technical developments in the AI field.

It used to be, when we wrote state-of-the-art AI (like IBM’s Deep Blue computer, which defeated Garry Kasparov in chess a couple of decades ago), that all the intelligence was basically programmed in by humans who knew how to play chess, and the computer won just because it could think faster and remember more. But we understood the software well.

Understanding what your AI system does is one of those pieces of wisdom you have to have to be able to really trust it.

The reason we have so many problems today with systems getting hacked or crashing because of bugs is exactly that we didn’t understand the systems as well as we should have.

Now what’s happening is fascinating: today’s biggest AI breakthroughs are of a completely different kind, where rather than the intelligence being largely programmed in easy-to-understand code, you put in almost nothing except a little learning rule by which a simulated network of neurons can take a lot of data and figure out how to get stuff done.

These deep learning systems suddenly become able to do things, often even better than the programmers themselves were ever able to do.
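The "little learning rule" idea can be sketched with a toy example (everything here is illustrative and assumed, not anything from the interview): a single simulated neuron that is never told the rule it should compute, only shown labeled examples, and nudged by a perceptron-style update until it figures the rule out on its own.

```python
import random

random.seed(0)

# Hidden rule the neuron must discover from data alone (assumed for
# illustration): output 1 when the two inputs sum to more than 1.
def make_example():
    x = [random.random(), random.random()]
    label = 1 if x[0] + x[1] > 1.0 else 0
    return x, label

# The neuron starts knowing nothing: random weights and bias.
weights = [random.uniform(-1, 1), random.uniform(-1, 1)]
bias = random.uniform(-1, 1)
lr = 0.1  # learning rate

def predict(x):
    s = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if s > 0 else 0

# The "little learning rule": after each example, nudge the weights
# in the direction that reduces the error on that example.
for _ in range(5000):
    x, label = make_example()
    error = label - predict(x)
    weights[0] += lr * error * x[0]
    weights[1] += lr * error * x[1]
    bias += lr * error

# Measure accuracy on fresh examples the neuron has never seen.
correct = sum(predict(x) == y for x, y in (make_example() for _ in range(1000)))
accuracy = correct / 1000
print(accuracy)
```

The point of the sketch is that nothing task-specific is programmed in; the same dozen lines of update rule would learn a different rule if `make_example` changed.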

You can train a machine to play computer games with almost no hard-coded stuff at all. You don’t tell it what a game is, what the things are on the screen, or even that there is such a thing as a screen—you just feed in a bunch of data about the colors of the pixels and tell it, “Hey go ahead and maximize that number in the upper left corner,” and gradually you come back and it’s playing some game much better than I could.
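The "just maximize that number" setup can be sketched as a tiny trial-and-error agent (a toy multi-armed bandit; the hidden payout table is an assumption standing in for an unknown game). The agent is told nothing about the game: it only picks actions and watches the score.

```python
import random

random.seed(1)

# The "game": each action pays off with some probability the agent
# never sees directly (hypothetical values, for illustration only).
HIDDEN_PAYOUTS = [0.1, 0.8, 0.3]

def play(action):
    return 1 if random.random() < HIDDEN_PAYOUTS[action] else 0

value = [0.0, 0.0, 0.0]   # agent's learned estimate of each action's payoff
counts = [0, 0, 0]

for step in range(2000):
    if random.random() < 0.1:                       # explore occasionally
        action = random.randrange(3)
    else:                                           # otherwise exploit best guess
        action = max(range(3), key=lambda a: value[a])
    reward = play(action)
    counts[action] += 1
    # Running-mean update: shift the estimate toward the observed reward.
    value[action] += (reward - value[action]) / counts[action]

best = max(range(3), key=lambda a: value[a])
```

Nothing about which action is good was programmed in; the agent discovered it purely by watching the score, which is also why inspecting `value` afterward tells you *what* it learned but not *why* those actions pay off.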

The challenge is that even though this is very powerful, it’s very much a "black box" now where, yeah, it does all that great stuff, and we don’t understand how.

So suppose I get sentenced to ten years in prison by a Robojudge in the future and I ask, “Why?”

And I’m told, "I WAS TRAINED ON SEVEN TERABYTES OF DATA, AND THIS WAS THE DECISION." It’s not that satisfying for me.

Or suppose the machine that’s in charge of our electric power grid suddenly malfunctions and someone says, "Well, we have no idea why; we trained it on a lot of data and it worked." That doesn’t instill the kind of trust that we want to put in these systems.

When you get the blue screen of death because your Windows machine crashes, or the spinning wheel of doom because your Mac crashes, "annoying" is probably the main emotion we have. But "annoying" isn’t the emotion we’d have if it were the software flying my airplane that crashed, or the software controlling the nuclear arsenal of the U.S., or something like that.

And as AI gets more and more out into the world, we absolutely need to transform today’s hackable and buggy AI systems into AI systems that we can really trust.