It’s challenging enough to sustain any scientific study for a decade. Now Eric Horvitz, managing director of the Microsoft Research lab in Redmond, Washington, is launching a project he wants to last a century. The One Hundred Year Study on Artificial Intelligence (AI100), based at Stanford University in Palo Alto, California, and funded by Horvitz and his wife, aims to track the impact of artificial intelligence (AI) on all aspects of life, from national security to public psychology and privacy. Horvitz recently helped create a standing committee of interdisciplinary researchers who serve rotating terms, to convene study panels that will produce a major report every 5 years. The first report is expected at the end of this year.

ScienceInsider recently caught up with Horvitz to discuss the ambitious project. (Horvitz has chaired the section on information, computing, and communication for AAAS, the publisher of ScienceInsider). This interview has been edited for clarity and brevity.

Q: AI100 is a continuation of a 2008–2009 study on the short- and long-term implications of AI that you commissioned when you were president of the Association for the Advancement of Artificial Intelligence. Why extend the study to 100 years?

A: Machine intelligence will have deep effects on people and society, and those influences will change over time. It would be really nice to have a platform where there’s a long vision to the future as well as a really sharp connected memory through sets of studies. For example, when it comes to understanding the relationship of machine intelligence and privacy, we are already seeing science that uses innocuous data like search logs and tweets to make predictions about the likelihood of somebody being at higher risk for certain illnesses. These kinds of things need to be studied.

Q: This is an ambitious project both in its length and its breadth—you outlined 18 areas of focus, ranging from the political and economic implications of AI to legal and ethical concerns. How do you plan to structure and sustain it?

A: The goal is to set up a system with an initial standing committee that will do a great job at self-sustaining and continuing a chain of these standing committees and study panels over 100 years. You can imagine how certain topics might become strong forerunners in different decades. For example, ethical issues with the automation of key high-stakes decisions might come to the fore sometime in the next decade, when we might have more autonomous cars on the roads and automated systems being used in warfare. I think it will be interesting for these panels to look back at what [earlier panels] addressed and had forecast, what guidance [they] provided, and how it all went. One hundred years can seem like a long time, but a lot can happen in 100 years of technology. Imagine [we were planning on] tracking how the rise of electricity might change the world from 1900 on.

Q: Could technological development outpace these 5-year studies?

A: I think the studies will be ahead of the wave. It’s not just studying and writing about phenomena, but also about playing the role of soothsayer and providing guidance to government agencies, funding agencies, and researchers—on both the costs and opportunities of AI. For example, in health care, we’ve built systems that can be very valuable for enhancing the delivery of health care while reducing costs. Yet there’s been such sluggish translation of these advances into the real world. You can imagine a focused study on these challenges of translation that could provide recommendations and guidance to the National Institutes of Health and the National Science Foundation. You can also imagine that government committees might call and say: “Hey, we need urgent research on X because of what we see with A, B, and C.” It’s a chance, on a regular clock, to reflect on a set of important issues when it comes to machine intelligence.

Q: Public attitudes about AI seem to be an important focus of your study. How do you plan to communicate the findings? Do you hope to change public opinion?

A: It’s not clear what public attitudes are on machine intelligence. I think many people really enjoy the fruits of systems like search engines without even thinking that they are AI. We’ve also seen, in the last year, luminaries like Stephen Hawking and Elon Musk talking in the press about how AI will threaten humanity someday. To many researchers, these kinds of scares are unfounded. Others are uncertain, and some share concerns. Either way, we need to address them and make sure that people understand that there are practical things that can be done to make sure that things go well as we build and deploy these intelligences.

As scientists, we do need to work on addressing concerns about the safety and autonomy of AI systems, and to understand how to avoid or disprove some of these dystopian visions of the future by asking the questions scientifically: Are the outcomes that some people fear possible? And, if possible, how can we make sure they don’t happen by being proactive?

Q: Personally, what’s your vision for the future of AI?

A: My own view is that AI will be incredibly empowering to humanity. It will help solve problems, it will help us do better science, it promises to really help with challenges in education, health care, and hunger. I think there are lots of opportunities there on the upside. In many ways, some of the concerns that I’ve had over the years have been more about what I call the rough edges that can be addressed. I’m very optimistic about machine intelligence, and I see a need for studying and guiding its influences on people and society, and for continuing dialogue with the public.

*Update, 9 January, 11:33 a.m.: This article was revised to clarify and expand several of Horvitz's comments.