Reasoning, Attention, Memory (RAM) NIPS Workshop 2015

Facebook event page with real-time updates: http://fb.me/ram

Motivation and Objective of the Workshop

In spite of the recent resurgence of interest in models that combine reasoning, attention, and memory, research into learning algorithms that integrate these components, and the analysis of such algorithms, is still in its infancy. The purpose of this workshop is to bring together researchers from diverse backgrounds to exchange ideas that could address the drawbacks of current models and yield more capable ones in the quest for true AI. We thus plan to focus on the following issues (a small illustrative code sketch of the attention-over-memory mechanism underlying many of these questions appears after the list):

How to decide what to write and what not to write in the memory.

How to represent knowledge to be stored in memories.

Types of memory (arrays, stacks, or storage within the weights of a model), when each should be used, and how they can be learnt.

How to retrieve relevant knowledge quickly from very large memories.

How to build hierarchical memories, e.g. employing multiscale notions of attention.

How to build hierarchical reasoning, e.g. via composition of functions.

How to incorporate forgetting and compression of unimportant information.

How to properly evaluate reasoning models. Which tasks offer adequate coverage while allowing unambiguous interpretation of a system's capabilities? Are artificial tasks a suitable means of evaluation?

Can we draw inspiration from how animal or human memories are stored and used?
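
As a concrete reference point for the questions above, here is a minimal sketch of the soft attention read used by memory networks and related architectures: a query vector scores every memory slot, and the read is the attention-weighted sum of the slots. The sketch is ours, not taken from any workshop submission; the dot-product scoring, slot count, and dimensions are illustrative assumptions, and learned scoring functions are equally common.

    import numpy as np

    def softmax(x):
        # Numerically stable softmax over a 1-D score vector.
        e = np.exp(x - x.max())
        return e / e.sum()

    def attend_read(query, memory):
        # memory: (num_slots, dim); query: (dim,)
        scores = memory @ query        # one similarity score per slot
        weights = softmax(scores)      # attention distribution over slots
        read = weights @ memory        # weighted sum of slots: (dim,)
        return read, weights

    # Toy example: 4 memory slots of dimension 3.
    rng = np.random.default_rng(0)
    memory = rng.normal(size=(4, 3))
    query = rng.normal(size=3)
    read, weights = attend_read(query, memory)
    print("attention weights:", np.round(weights, 3))
    print("read vector:", np.round(read, 3))

Because the read is differentiable in both the query and the memory, such models can be trained end to end; what to write, how to scale retrieval, and how to stack such reads hierarchically are exactly the open questions listed above.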

Key Dates

Submission Deadline: Oct 9, 2015

Notification of Acceptance: Oct 23, 2015

Workshop: Dec 12, 2015

Paper Submission Instructions

Papers should be typeset according to the NIPS format.

Papers should not exceed 4 pages (including references).

Submit to: ram.nips2015@gmail.com

The authors of all accepted papers will be expected to give a 20-minute talk (15 min for the talk + 5 min for questions).

Accepted papers will be displayed on the website.

There will be no posters.

Workshop Schedule

Juergen Schmidhuber, IDSIA.

Yoshua Bengio, University of Montreal.

Elman Mansimov, Emilio Parisotto, Jimmy Lei Ba, Ruslan Salakhutdinov (University of Toronto).

Alex Graves, Google Deepmind.

Mike Mozer, University of Colorado.

Wei Zhang, Yang Yu, Bowen Zhou (IBM Watson).

Baolin Peng, The Chinese University of Hong Kong; Zhengdong Lu, Noah's Ark Lab, Huawei Technologies; Hang Li, Noah's Ark Lab, Huawei Technologies; Kam-Fai Wong, The Chinese University of Hong Kong.

Tom Bosc, Inria.

Rasmus Boll Greve, Emil Juul Jacobsen, Sebastian Risi (IT University of Copenhagen).

Kyunghyun Cho, New York University.

Adrien Peyrache, New York University.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Richard Socher (MetaMind).

Sainbayar Sukhbaatar, Arthur Szlam, Rob Fergus (Facebook AI Research).

Volkan Cirik, Louis-Philippe Morency, Eduard Hovy (CMU).

Gabriel Recchia, University of Cambridge.

Tomas Mikolov, Facebook AI Research.

Marvin Zhang, Zoe McCarthy, Chelsea Finn, Sergey Levine, Pieter Abbeel (UC Berkeley).

Ilya Sutskever, Google Brain.

Invited Speakers

Yoshua Bengio, University of Montreal

Yoshua Bengio received a PhD in Computer Science from McGill University, Canada, in 1991. After two post-doctoral years, one at M.I.T. with Michael Jordan and one at AT&T Bell Laboratories with Yann LeCun and Vladimir Vapnik, he became a professor in the Department of Computer Science and Operations Research at Université de Montréal. He is the author of two books and more than 200 publications, the most cited being in the areas of deep learning, recurrent neural networks, probabilistic learning algorithms, natural language processing, and manifold learning. He is among the most cited Canadian computer scientists and is or has been associate editor of the top journals in machine learning and neural networks. Since 2000 he has held a Canada Research Chair in Statistical Learning Algorithms, and since 2006 an NSERC Industrial Chair; since 2005 he has been a Senior Fellow of the Canadian Institute for Advanced Research, and since 2014 he co-directs its program focused on deep learning. He is on the board of the NIPS foundation and has been program chair and general chair for NIPS. He has co-organized the Learning Workshop for 14 years and co-created the International Conference on Learning Representations. His current interests are centered on a quest for AI through machine learning, and include fundamental questions on deep learning and representation learning, the geometry of generalization in high-dimensional spaces, manifold learning, biologically inspired learning algorithms, and challenging applications of statistical machine learning.



Ilya Sutskever, Google Brain

Ilya Sutskever received his PhD in 2012 from the University of Toronto, working with Geoffrey Hinton. After completing his PhD, he co-founded DNNResearch with Geoffrey Hinton and Alex Krizhevsky, which was acquired by Google. He is interested in all aspects of neural networks and their applications.



Kyunghyun Cho, New York University

Kyunghyun Cho is an assistant professor in the Department of Computer Science, Courant Institute of Mathematical Sciences, and the Center for Data Science at New York University (NYU). Before joining NYU in September 2015, he was a postdoctoral researcher at the University of Montreal under the supervision of Prof. Yoshua Bengio, after obtaining his doctorate at Aalto University (Finland) in early 2014. His main research interests include neural networks, generative models, and their applications, especially to natural language understanding.



Mike Mozer, University of Colorado

Michael Mozer received a Ph.D. in Cognitive Science from the University of California, San Diego in 1987. Following a postdoctoral fellowship with Geoffrey Hinton at the University of Toronto, he joined the faculty at the University of Colorado at Boulder, where he is presently a Professor in the Department of Computer Science and the Institute of Cognitive Science. He is secretary of the Neural Information Processing Systems Foundation and has served as chair of the Cognitive Science Society. His research involves developing computational models to help understand the mechanisms of cognition. He uses these models to build software that assists individuals in learning, remembering, and decision making.

Adrien Peyrache, New York University

After graduating in physics from ESPCI-ParisTech, Adrien Peyrache studied cognitive science in a joint MSc program at Pierre and Marie Curie University and the École Normale Supérieure (Paris, France). In 2009, he completed his PhD in neuroscience at the Collège de France. His thesis focused on the neuronal substrate of sleep-dependent learning and memory. After a year of postdoctoral training at the CNRS (Gif-sur-Yvette, France), where he studied the coordination of neuronal activity during sleep, he moved four years ago to the laboratory of György Buzsáki at the New York University Neuroscience Institute. Since then, he has devoted his work to leveraging the unique technical expertise in high-density neuronal population recordings to characterize the self-organized mechanisms of neuronal activity in the navigation system.



Jürgen Schmidhuber, Swiss AI Lab IDSIA

Since age 15 or so, the main goal of Professor Jürgen Schmidhuber (pronounce: You_again Shmidhoobuh) has been to build a self-improving Artificial Intelligence (AI) smarter than himself, then retire. He has pioneered self-improving general problem solvers since 1987, and Deep Learning neural networks (NNs) since 1991. The recurrent NNs (RNNs) developed by his research groups at the Swiss AI Lab IDSIA & USI & SUPSI & TU Munich were the first RNNs to win official international contests. They have revolutionized connected handwriting recognition, speech recognition, machine translation, optical character recognition, and image caption generation, and are now in use at Google, Microsoft, IBM, Baidu, and many other companies. Founders & staff of DeepMind (sold to Google for over 600M) include four former PhD students from his lab. His team's Deep Learners were the first to win object detection and image segmentation contests, and achieved the world's first superhuman visual classification results, winning nine international competitions in machine learning & pattern recognition (more than any other team). They were also the first to learn control policies directly from high-dimensional sensory input using reinforcement learning. His research group also established the field of mathematically rigorous universal AI and optimal universal problem solvers. His formal theory of creativity & curiosity & fun explains art, science, music, and humor. He also generalized algorithmic information theory and the many-worlds theory of physics, and introduced the concept of Low-Complexity Art, the information age's extreme form of minimal art. Since 2009 he has been a member of the European Academy of Sciences and Arts. He has published 333 peer-reviewed papers and earned seven best paper/best video awards, the 2013 Helmholtz Award of the International Neural Networks Society, and the 2016 IEEE Neural Networks Pioneer Award. He is also president of NNAISENSE, which aims at building the first practical general-purpose AI.

Tomas Mikolov, Facebook AI Research

Tomas Mikolov is a research scientist on the Facebook AI Research team. Previously, he worked on the Google Brain team, where he led the development of the word2vec algorithm. He finished his PhD at the Brno University of Technology (Czech Republic), where he worked on recurrent neural network based language models (RNNLMs). His long-term research goal is to develop intelligent machines capable of learning and of natural communication with people.

Workshop Organizers

Antoine Bordes, Facebook AI Research (https://www.hds.utc.fr/~bordesan/)

Antoine Bordes is a staff research scientist at Facebook Artificial Intelligence Research. Prior to joining Facebook in 2014, he was a CNRS staff researcher in the Heudiasyc laboratory of the University of Technology of Compiègne in France. In 2010, he was a postdoctoral fellow in Yoshua Bengio's lab at the University of Montreal. He received his PhD in machine learning from Pierre & Marie Curie University in Paris in early 2010. From 2004 to 2009, he collaborated regularly with Léon Bottou at NEC Labs America in Princeton. He received two best-PhD awards, from the French Association for Artificial Intelligence and from the French Armament Agency, as well as a Scientific Excellence Scholarship awarded by CNRS in 2013. Antoine's current interests cover knowledge base/graph modeling, natural language processing, deep learning, and large-scale learning.

Sumit Chopra, Facebook AI Research

Sumit Chopra is a research scientist at the Facebook Artificial Intelligence Research lab. He graduated with a Ph.D. in computer science from New York University in 2008. His thesis proposed a first-of-its-kind neural network model for relational regression, which formed the conceptual foundation of a startup company modeling residential real estate prices. Following his Ph.D., Sumit joined AT&T Labs - Research as a research scientist in the Statistics and Machine Learning Department, where he focused on building novel deep learning models for speech recognition, natural language processing, and computer vision. While at AT&T he also worked on other areas of machine learning, such as recommender systems, computational advertising, and ranking. He has been a research scientist at Facebook AI Research since April 2014, where he has focused primarily on natural language understanding.
