Talk Abstract

Artificial intelligence and machine learning have experienced a renaissance in the past decade, thanks largely to the success of deep learning methods. However, while deep learning has proven itself to be extremely powerful, today's most successful deep learning systems suffer from a number of important limitations, ranging from the requirement for enormous training data sets, to lack of interpretability, to vulnerability to "hacking" via adversarial examples. In my talk, I will survey some of these limitations and propose that one path forward involves building hybrid systems that combine neural networks with techniques and ideas from symbolic AI, a parallel tradition whose origins date back to the beginning of the field. I will show example neurosymbolic hybrid systems in which neural networks and symbolic systems complement each other's strengths and weaknesses, enabling systems that are accurate, sample efficient, and interpretable. Finally, I will describe other directions we are pursuing in the space of neurosymbolic hybrid systems, and argue that these methods at the intersection provide a powerful path forward for the broad adoption of AI.

Speaker Bio

David Cox is the IBM Director of the MIT-IBM Watson AI Lab, a first-of-its-kind industry-academic collaboration between IBM and MIT focused on fundamental research in artificial intelligence. David's ongoing research is primarily focused on bringing insights from neuroscience into machine learning and computer vision research. His work has spanned a variety of disciplines, from imaging and electrophysiology experiments in living brains, to the development of machine learning and computer vision methods, to applied machine learning and high-performance computing.