Learning to Fly
Deep Model-Based Reinforcement Learning in the real world
[Figure: A self-built drone controlled onboard by a learnt policy optimised in a learnt simulation.]
In this work we show how to learn a neural-network policy for thrust-attitude control of a self-built drone via model-based reinforcement learning and variational inference methods. Continue Reading
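To make the recipe concrete, here is a minimal sketch of the generic model-based RL loop the teaser alludes to: collect transitions on the real system, fit a dynamics model (the learnt simulation), then improve the policy by backpropagating through imagined rollouts. The toy dynamics, network sizes, horizon, and quadratic cost are all illustrative assumptions, not the post's actual setup.

```python
import torch
import torch.nn as nn

# Toy stand-in for the real system (the post uses a physical drone).
def true_dynamics(state, action):
    return state + 0.1 * torch.tanh(action)

state_dim, action_dim, horizon = 4, 2, 20

model = nn.Sequential(nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
                      nn.Linear(64, state_dim))            # learnt simulator
policy = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                       nn.Linear(64, action_dim), nn.Tanh())

model_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
policy_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
goal = torch.zeros(state_dim)

for iteration in range(50):
    # 1) Collect transitions on the "real" system with the current policy.
    s = torch.randn(256, state_dim)
    with torch.no_grad():
        a = policy(s)
        s_next = true_dynamics(s, a)

    # 2) Fit the dynamics model (the learnt simulation).
    for _ in range(20):
        model_opt.zero_grad()
        loss = ((model(torch.cat([s, a], -1)) - s_next) ** 2).mean()
        loss.backward()
        model_opt.step()

    # 3) Improve the policy by backpropagating through imagined rollouts.
    policy_opt.zero_grad()
    sim_s = torch.randn(256, state_dim)
    cost = 0.0
    for _ in range(horizon):
        sim_a = policy(sim_s)
        sim_s = model(torch.cat([sim_s, sim_a], -1))
        cost = cost + ((sim_s - goal) ** 2).mean()
    cost.backward()
    policy_opt.step()
```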

Learning Flat Latent Manifolds with VAEs
How to create latent spaces without nasty curvatures
[Figure: Flat Manifold VAE.]
The powerful neural-network encoder and decoder in a VAE typically create a crooked latent space. This method, published at ICML 2020, regularises the latent space towards flatness. Continue Reading
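The published method regularises the decoder's Riemannian metric towards a scaled identity; as a simplified stand-in for that idea, the sketch below penalises the decoder whenever distances between pairs of latent codes disagree with distances between their decodings, so that Euclidean latent distances become meaningful. The decoder architecture and the target scale `c` are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, obs_dim = 2, 10
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                        nn.Linear(64, obs_dim))
c = 1.0  # target distance scale; the paper treats this more carefully

def flatness_penalty(z):
    """Encourage the decoder to be (approximately) scaled-distance-preserving
    between random pairs of latent codes."""
    z1, z2 = z, z[torch.randperm(len(z))]
    d_latent = (z1 - z2).norm(dim=-1)
    d_obs = (decoder(z1) - decoder(z2)).norm(dim=-1)
    return ((d_obs - c * d_latent) ** 2).mean()

z = torch.randn(128, latent_dim)
loss = flatness_penalty(z)   # added on top of the usual VAE ELBO loss
loss.backward()
```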

Approximate Bayesian inference in spatial environments
How to reason about space and moving agents
[Figure: A mobile agent can localise itself in an unknown maze using DVBF-LM.]
In this post, we will get familiar with a flexible probabilistic platform for spatial reasoning. Continue Reading

Learning Hierarchical Priors in VAEs
A constrained optimisation approach
[Figure: Graph-based interpolation of human motion.]
We address the issue of learning informative latent representations of data. In a standard VAE, the prior over the latent space is a standard normal distribution. This over-regularises the posterior, resulting in latent representations that do not capture the structure of the data well. This post, describing our NeurIPS 2019 publication, proposes and demonstrates a solution: a hierarchical latent-space prior. Continue Reading
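To see what "hierarchical prior" means mechanically, here is a minimal sketch of a two-level prior: a standard normal at the top level and a learned conditional Gaussian below it, giving a flexible marginal over the latent code. The network `prior_net` and the dimensions are illustrative; the paper additionally trains this with a constrained-optimisation scheme not shown here.

```python
import torch
import torch.nn as nn

z_dim, zeta_dim = 8, 4

# Learned conditional p(z | zeta); p(zeta) stays a standard normal.
prior_net = nn.Sequential(nn.Linear(zeta_dim, 64), nn.ReLU(),
                          nn.Linear(64, 2 * z_dim))

def sample_hierarchical_prior(n):
    zeta = torch.randn(n, zeta_dim)               # top level: N(0, I)
    mu, log_var = prior_net(zeta).chunk(2, -1)    # bottom level: N(mu, sigma^2)
    return mu + torch.randn_like(mu) * (0.5 * log_var).exp()

z = sample_hierarchical_prior(16)  # marginal p(z) is no longer forced to N(0, I)
```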

How to Learn Functions on Sets with Neural Networks
And how to choose your aggregation
[Figure: The basic Deep Sets architecture for set functions: embed, aggregate, process.]
If you feed a vector of data into a neural network, the order of the elements matters. But sometimes the order doesn't carry any useful information: sometimes we are interested in working on sets of data. In this post, we will look into functions on sets, and how to learn them with the help of neural networks. Continue Reading
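The embed-aggregate-process recipe from the figure caption fits in a few lines. Below is a minimal sketch of the Deep Sets architecture: a per-element network phi, a symmetric pooling (the aggregation choice the subtitle refers to), and a post-processing network rho. Layer sizes are arbitrary; the assertion checks permutation invariance.

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    """Permutation-invariant set function: embed each element with phi,
    aggregate with a symmetric pooling, process the pooled result with rho."""
    def __init__(self, in_dim, hidden, out_dim, pool="sum"):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
        self.pool = pool

    def forward(self, x):                  # x: (batch, set_size, in_dim)
        h = self.phi(x)
        if self.pool == "sum":
            h = h.sum(dim=1)
        elif self.pool == "mean":          # the choice of aggregation matters
            h = h.mean(dim=1)
        else:
            h, _ = h.max(dim=1)
        return self.rho(h)

net = DeepSet(in_dim=3, hidden=64, out_dim=1)
x = torch.randn(2, 10, 3)
# Shuffling the set elements must not change the output.
assert torch.allclose(net(x), net(x[:, torch.randperm(10)]), atol=1e-5)
```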

Approximate Geodesics for Deep Generative Models
How to efficiently find the shortest path in latent space
[Figure: The graph of the Fashion-MNIST dataset in a 2D latent space, along with the magnification factor.]
Neural samplers such as variational autoencoders (VAEs) or generative adversarial networks (GANs) approximate distributions by transforming samples from a simple random source, the latent space, into a more complex distribution, corresponding to the distribution from which the data is sampled. Typically, the data set is sparse, while the latent space is compact. Consequently, points that are separated by low-density regions in observation space will be pushed together in latent space, and the spaces get distorted. In effect, straight-line distances in the latent space are poor proxies for similarity in the observation space. How can this be solved? Continue Reading
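One way to approximate geodesics, in the spirit of the graph in the figure, is to build a k-nearest-neighbour graph over latent codes, weight each edge by the observation-space length of the decoded segment, and run a shortest-path search. The sketch below does exactly that under stated simplifications: the decoder is a toy function, and each edge length is approximated by the distance between decoded endpoints rather than by integrating the decoder-induced metric along the segment.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def decode(z):                      # toy stand-in for a trained decoder
    return np.concatenate([np.sin(3 * z), np.cos(3 * z)], axis=-1)

rng = np.random.default_rng(0)
z = rng.normal(size=(200, 2))       # latent codes of the data set
k = 8

# Edge weight = observation-space distance between decoded endpoints,
# a coarse approximation of the decoded curve length.
x = decode(z)
d_latent = np.linalg.norm(z[:, None] - z[None], axis=-1)
graph = lil_matrix((len(z), len(z)))
for i in range(len(z)):
    for j in np.argsort(d_latent[i])[1:k + 1]:   # skip the point itself
        graph[i, j] = np.linalg.norm(x[i] - x[j])

# Approximate geodesic distance = shortest path through the graph.
dist, pred = dijkstra(graph.tocsr(), directed=False, indices=0,
                      return_predecessors=True)
print(dist[42])   # approximate geodesic distance from node 0 to node 42
```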

Network Architecture Optimisation
The Bayesian way
Intuition and experience. That's probably the answer you would get if you asked deep learning engineers how they chose the hyperparameters of a neural network. Depending on their familiarity with the problem, they might have needed a good three to five full-dataset runs until a satisfactory result popped up. Now, you might say, surely we could automate this, right? After all, we do it implicitly in our heads. Well, yes, we definitely could, but should we? Continue Reading
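The standard automated alternative the subtitle hints at is Bayesian optimisation: fit a cheap probabilistic surrogate to the few expensive runs you have, and let an acquisition function pick the next hyperparameter to try. Below is a minimal sketch with a Gaussian-process surrogate and the expected-improvement acquisition; the black-box `validation_error`, the single log-learning-rate hyperparameter, and all budgets are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def validation_error(log_lr):       # expensive black box in reality
    return (log_lr + 3.0) ** 2 + 0.1 * np.random.randn()

bounds = (-6.0, 0.0)
X = np.random.uniform(*bounds, size=(3, 1))      # a few initial runs
y = np.array([validation_error(x[0]) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(10):
    gp.fit(X, y)                                  # surrogate of the loss surface
    cand = np.random.uniform(*bounds, size=(500, 1))
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.min()
    z = (best - mu) / np.maximum(sigma, 1e-9)
    ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
    x_next = cand[np.argmax(ei)]                  # most promising next run
    X = np.vstack([X, x_next])
    y = np.append(y, validation_error(x_next[0]))

print("best log learning rate:", X[np.argmin(y)][0])
```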

Deep Variational Bayes Filters
DVBF: filter to learn what to filter
[Figure: Learnt latent representation of a swinging pendulum.]
Machine-learning algorithms thrive in environments where data is abundant. In the land of scarce data, blessed are those who have simulators. The recent successes in Go or Atari games would be much harder to achieve without the ability to parallelise millions of perfect game simulations. Continue Reading