Keynote Talks

Josh Tenenbaum, Ruslan Salakhutdinov

Research Talks

Durk Kingma, Alp Kucukelbir, Dustin Tran, Jan-Willem van de Meent

Research Panel

Frank Wood, Ruslan Salakhutdinov, Dave Blei, Zoubin Ghahramani

Language Talks and Panel

Koray Kavukcuoglu, Michael Betancourt, Vikash Mansinghka, Avi Pfeffer, Chung-chieh Shan, Yi Wu, Daniel Tarlow

Overview

Probabilistic models have traditionally co-evolved with tailored algorithms for efficient learning and inference. One of the exciting developments of recent years has been the resurgence of black box methods, which make relatively few assumptions about the model structure, allowing application to broader model families.

In probabilistic programming systems, black box methods have greatly improved the capabilities of inference back ends. Similarly, the design of connectionist models has been simplified by the development of black box frameworks for training arbitrary architectures. These innovations open up opportunities to design new classes of models that smoothly negotiate the transition from low-level features of the data to high-level structured representations that are interpretable and generalize well across examples.
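To make the notion of "few assumptions about model structure" concrete, a minimal sketch follows: the score-function (REINFORCE) gradient estimator, a representative black box technique, optimizes an expectation using only samples and the gradient of the sampling distribution's log-density, with no model-specific derivations. The toy objective and all names here are illustrative, not drawn from any particular system discussed at the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: minimize E_{z ~ N(mu, 1)}[f(z)] over mu,
# treating f as a black box we can only evaluate pointwise.
def f(z):
    return z ** 2

def grad_estimate(mu, n=2000):
    """Score-function estimate of d/dmu E[f(z)].

    Uses the identity  d/dmu E[f(z)] = E[f(z) * d/dmu log N(z; mu, 1)],
    where d/dmu log N(z; mu, 1) = (z - mu).
    """
    z = rng.normal(mu, 1.0, size=n)
    return np.mean(f(z) * (z - mu))

mu = 3.0
for _ in range(200):
    mu -= 0.05 * grad_estimate(mu)  # stochastic gradient descent on mu

# mu drifts toward 0, the minimizer of E[z^2] = mu^2 + 1
```

Because the estimator never differentiates through `f` or the model's internals, the same loop applies unchanged to any model whose log-density gradient is available, which is the property that lets such methods serve broad model families.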

This workshop brings together developers of black box inference technologies, probabilistic programming systems, and connectionist computing frameworks. The goal is to formulate a shared understanding of how black box methods can enable advances in the design of intelligent learning systems. Topics of discussion will include: