Guest post by Ariel Procaccia:

The recent AI Magazine special issue on AGT is a good excuse to discuss an interesting question: Can AGT/E enable AI in some fundamental way? Or, are we (AI researchers working in AGT/E) betraying the legacy of our founding fathers (Alan Turing, John McCarthy, Marvin Minsky, and, well, Isaac Asimov) by not focusing on our true purpose: building intelligent robots, bringing about the singularity, or at least making better vacuum cleaners? These questions are all the more challenging because, for now, I want to avoid a related thorny issue: defining AI. I argue below that the answer to the first question is yes.

Here is the argument in a nutshell (it seems that a similar argument was independently proposed by the wise Aviv Zohar). One of the classic goals of AI is to create a software agent that seems intelligent to a human observing it, e.g., one that can pass the Turing test. However, in the last two decades a significant portion of AI research has shifted its focus from single agents to multiagent systems. Now, game theory attempts to distill the principles of rational interaction. But rationality is just another word for artificial intelligence: it describes not how humans actually behave, but how we perceive intelligent behavior. Therefore, a multiagent system in which interactions are governed by game theory (or in which decision making is informed by social choice theory, for that matter) would be perceived as intelligent by a human observing it. In other words, AGT/E enables artificial intelligence at the system-wide level rather than at the individual level.

Now that we are convinced that AGT/E plays a fundamental role in AI (but Turing must be turning in his grave, or is Turning turing?), it remains to determine how AGT/E research in AI is distinct from AGT/E research in, say, the theory of CS. A look at the special issue's table of contents shows that some major theory-oriented AGT/E topics are conspicuously missing, e.g., the price of anarchy (which is admittedly on the decline even in theory) and algorithmic mechanism design in the Nisan-Ronen sense, i.e., truthful approximations for computationally intractable problems. This question was eloquently addressed by Elkind and Leyton-Brown in their editorial for the special issue. They pointed out two (related) distinctions. First, AI researchers are interested in reasoning about practical multiagent systems, and thus tend to consider more realistic models, employ an empirical approach where analysis fails, and test their methods through competitions. Second, many AI researchers do not view computational hardness as an insurmountable obstacle, and thus employ heuristics where appropriate.

I would like to raise a third point. Modern AI encompasses a world of ingenious ideas that, we are discovering, have a considerable conceptual interface with AGT/E. Therefore, some AGT/E work on the AI side emphasizes the connections with machine learning (the fascinating sociology of machine learning and AI is beyond the scope of this post), knowledge representation and reasoning, decision making under uncertainty, planning, and other well-studied areas of AI.

Wait, but what is AI? Unfortunately, no one can be told what AI is. You have to see it for yourself.