We've had some fantastic people join over the past few months (and we're still hiring). Welcome, everyone!

Full-timers

Yura Burda. Yura finished a math PhD at the age of 24, and switched into machine learning a year and a half ago. He's focusing on generative models. He discovered a simple but fundamental improvement to the variational lower bound that had evaded notice since its original discovery decades ago.

Ian Goodfellow. Ian is well known for his many contributions to machine learning, including maxout networks and generative adversarial networks (GANs), the latter of which is a major driver of excitement in generative modeling research. In addition, he's the lead author of the Deep Learning textbook.

Alec Radford. Alec created DCGAN, a neural model that generates large images with an unprecedented level of global coherence and detail. In addition, his model learned to do image analogies in an entirely unsupervised way.

Tim Salimans. Tim is an expert on variational methods. He wrote the first paper on stochastic gradient variational inference (for which he won the Lindley Prize), and was at one point ranked number 2 overall on Kaggle.

Interns

Also joining us for the summer (and in some cases, continuing the collaboration once they return to their home institutions):

As a closing note, we get a lot of questions about what we're working on, how we work, and what we're trying to achieve. We're not being intentionally mysterious; we've just been busy launching the organization (and finding awesome people to help us do so!).

We're currently focused on unsupervised learning and reinforcement learning, and we should have interesting results to share over the next month or two. A bunch of us will be at ICLR, where we'll likely hold an event of some form. I'll also host a Quora Session in May or June to answer questions from anyone we don't meet in Puerto Rico.