
On September 12 and 13, the laboratory invites four international researchers to give talks on modern applications of stochastic processes and probabilistic modeling in machine learning. The speakers will give an overview of the theory of Dirichlet, Pitman-Yor, gamma, and Gaussian processes, and show how these can be applied to large-scale problems, interpretability, and other tasks. Several speakers from Russian scientific groups will also present their research.



Language: English



Location: Faculty of Computer Science, Moscow, Kochnovsky Proezd, 3, room 205



Dates: September 12 and 13.



Materials: slides, lecture (Maurizio Filippone), lecture (Wray Buntine), lecture (Ilya Tolstikhin), lecture (Novi Quadrianto), talks part 1, talks part 2.







September 12, room 205



14:00-15:30 Ethical Machine Learning

Novi Quadrianto Assistant Professor, University of Sussex, Great Britain

Scientific Advisor, HSE Laboratory of Deep Learning and Bayesian Methods

Machine learning technologies have permeated everyday life, and it is now common for an automated system to make decisions for or about us, for example, deciding who will get a VTB 24 bank loan. Addressing the ethical and legal questions posed by these technologies is a pressing problem. The long-term goal of our research group is to develop a machine learning framework with plug-and-play ethical and legal constraints, able to handle fairness, confidentiality, and transparency constraints, their combinations, and new constraints that might be stipulated in the future. In this talk, I will mostly discuss background research on fairness and transparency before presenting our own work on unifying several notions of fairness.
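One of the fairness notions commonly studied in this line of research, demographic parity, can be checked in a few lines. The sketch below is illustrative (the function name and interface are not from the talk): it measures the gap in positive-prediction rates between two groups, which demographic parity requires to be zero.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) of a classifier.
    group:  binary protected-group membership (0/1) per example.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)
```

A gap of 0 means both groups receive positive decisions at the same rate; a fairness-constrained learner would penalize or bound this quantity during training.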

15:45-17:15 Implicit generative models: dual and primal approaches

Ilya Tolstikhin Postdoc, Max Planck Institute for Intelligent Systems, Tübingen, Germany

The fields of unsupervised generative modelling and representation learning are growing rapidly. The empirical success of recently introduced methods, including Variational Auto-Encoders (VAEs) and Generative Adversarial Nets (GANs), has attracted the attention of many researchers working in various areas of machine learning. The last few years have produced an unprecedented number of papers trying to improve the performance of VAEs and GANs, introducing new versions of these algorithms, and proposing entirely new ideas. In this talk I will present a unifying view of many of the existing methods, showing that VAEs and GANs approach very similar objectives (f-divergences, integral probability metrics, optimal transport distances) from their primal and dual formulations respectively. I will discuss certain consequences of this duality and mention recent work on optimal transport that establishes interesting links between VAEs and GANs.
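As one concrete instance of the dual view mentioned in the abstract, an f-divergence between the data distribution $P$ and the model distribution $Q$ admits a variational (dual) characterization, which adversarial methods exploit by restricting the witness function $T$ to a class of discriminators:

```latex
D_f(P \,\|\, Q) \;\ge\; \sup_{T \in \mathcal{T}} \;
  \mathbb{E}_{x \sim P}\big[T(x)\big]
  \;-\; \mathbb{E}_{x \sim Q}\big[f^{*}\big(T(x)\big)\big],
```

where $f^{*}$ is the convex conjugate of $f$ and the bound is tight when $\mathcal{T}$ contains all measurable functions in the domain of $f^{*}$. Choosing $f(t) = t \log t$ recovers a KL-style objective, while VAEs attack closely related objectives from the primal side.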

17:30-19:00 Introduction to Dirichlet Processes and their use

Wray Buntine Professor, Monash University, Melbourne, Australia

Assuming the attendee knows the Poisson, gamma, multinomial, and Dirichlet distributions, this talk will present the basic ideas and theory needed to understand and use the Dirichlet process and its close relatives, the Pitman-Yor process and the gamma process. We will first look at some motivating examples. Then we will look at the non-hierarchical versions of the processes, which are basically infinite parameter vectors; these have a number of handy properties and admit simple, elegant marginal and posterior inference. Finally, we will look at the hierarchical versions of these processes, which are fundamentally different. To understand them, we will briefly review some aspects of stochastic process theory and additive distributions. In the hierarchical versions, the processes reduce to Dirichlet and gamma distributions (the process part disappears), but the techniques developed for the non-hierarchical process models can be borrowed to develop good algorithms, since the Dirichlet and gamma are challenging when placed hierarchically.
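The "infinite parameter vector" view of the Dirichlet process can be made concrete via its stick-breaking construction: the weights follow a GEM(α) distribution, obtained by repeatedly breaking off Beta-distributed fractions of a unit-length stick. This sketch (not from the talk materials) draws a truncated sample of those weights:

```python
import numpy as np

def stick_breaking(alpha, K, seed=None):
    """Draw the first K stick-breaking weights of a DP(alpha), i.e. a
    truncated GEM(alpha) sample.

    beta_k ~ Beta(1, alpha);  pi_k = beta_k * prod_{j<k} (1 - beta_j).
    (For a Pitman-Yor process with discount d, one would instead draw
    beta_k ~ Beta(1 - d, alpha + k * d).)
    """
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=K)
    # Length of stick remaining before each break: 1, (1-b1), (1-b1)(1-b2), ...
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
    return betas * remaining

weights = stick_breaking(alpha=2.0, K=100, seed=0)
```

Smaller α concentrates mass on the first few weights (fewer clusters in a DP mixture); larger α spreads it over many small weights.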

September 13, room 205



14:00-15:30 Gaussian Processes