Bayesian models are rooted in Bayesian statistics and readily benefit from the vast literature in the field. In contrast, deep learning lacks a solid mathematical grounding. Instead, empirical developments in deep learning are often justified by metaphors, evading the unexplained principles at play. The two fields are perceived as fairly antipodal in their respective communities. It is perhaps astonishing, then, that most modern deep learning models can be cast as performing approximate inference in a Bayesian setting. The implications of this statement are profound: we can use the rich Bayesian statistics literature with deep learning models, explain many of the curiosities of these models, carry results from deep learning into Bayesian modelling, and much more.
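One concrete instance of this link is Monte Carlo dropout: keeping dropout active at test time and averaging several stochastic forward passes approximates sampling from a posterior predictive distribution. The sketch below is a hypothetical toy illustration (a one-hidden-layer network with fixed random weights, not a model from the talk), assuming inverted dropout and a dropout rate of 0.5:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-hidden-layer network with fixed random weights
# (purely illustrative; not a trained model).
W1 = rng.standard_normal((1, 50))
W2 = rng.standard_normal((50, 1))

def forward(x, p_drop=0.5):
    """One stochastic forward pass. Each pass samples a fresh dropout
    mask, which plays the role of one draw from an approximate
    posterior over the network's weights."""
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) >= p_drop  # random dropout mask
    h = h * mask / (1.0 - p_drop)         # inverted-dropout scaling
    return h @ W2

x = np.array([[0.3]])

# Dropout stays ON at test time: repeat the stochastic pass and
# summarise the samples to get a prediction with uncertainty.
samples = np.stack([forward(x) for _ in range(200)])
mean = samples.mean(axis=0)   # predictive mean
std = samples.std(axis=0)     # predictive uncertainty
```

The spread of the samples (`std`) gives an uncertainty estimate that a single deterministic forward pass cannot provide.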

In this talk I will explore the new theory linking Bayesian modelling and deep learning. The practical impact of the framework will be demonstrated with a range of real-world applications: from uncertainty modelling in deep learning, through training on small datasets, to new state-of-the-art results in image processing. I will finish by surveying open research problems, problems which stand at the forefront of a new and exciting field combining modern deep learning and Bayesian techniques.