The fields of neuroscience and artificial intelligence (AI) have a long and intertwined history. In more recent times, however, communication and collaboration between the two fields have become less commonplace. In this article, we argue that better understanding biological brains could play a vital role in building intelligent machines. We survey historical interactions between the AI and neuroscience fields and emphasize current advances in AI that have been inspired by the study of neural computation in humans and other animals. We conclude by highlighting shared themes that may be key for advancing future research in both fields.

Main Text

We begin with the premise that building human-level general AI (or "Turing-powerful" intelligent systems; Turing, 1936) is a daunting task, because the search space of possible solutions is vast and likely only very sparsely populated. We argue that this therefore underscores the utility of scrutinizing the inner workings of the human brain, the only existing proof that such an intelligence is even possible. Studying animal cognition and its neural implementation also has a vital role to play, as it can provide a window into various important aspects of higher-level general intelligence.

The benefits to AI of closely examining biological intelligence are twofold. First, neuroscience provides a rich source of inspiration for new types of algorithms and architectures, independent of and complementary to the mathematical and logic-based methods and ideas that have largely dominated traditional approaches to AI. For example, were a new facet of biological computation found to be critical to supporting a cognitive function, we would consider it an excellent candidate for incorporation into artificial systems. Second, neuroscience can provide validation of AI techniques that already exist. If a known algorithm is subsequently found to be implemented in the brain, that is strong support for its plausibility as an integral component of an overall general intelligence system. Such clues can be critical to a long-term research program when determining where to allocate resources most productively. For example, if an algorithm is not quite attaining the level of performance required or expected, but we observe that it is core to the functioning of the brain, then we can surmise that redoubled engineering efforts geared to making it work in artificial systems are likely to pay off.

Of course, from a practical standpoint of building an AI system, we need not slavishly enforce adherence to biological plausibility. From an engineering perspective, what works is ultimately all that matters. For our purposes, then, biological plausibility is a guide, not a strict requirement. What we are interested in is a systems neuroscience-level understanding of the brain, namely the algorithms, architectures, functions, and representations it utilizes. This roughly corresponds to the top two levels of the three levels of analysis that Marr famously stated are required to understand any complex biological system (Marr and Poggio, 1976): the goals of the system (the computational level) and the process and computations that realize this goal (the algorithmic level). The precise mechanisms by which this is physically realized in a biological substrate are less relevant here (the implementation level). Note that this is where our approach to neuroscience-inspired AI differs from other initiatives, such as the Blue Brain Project (Markram, 2006) or the field of neuromorphic computing systems (Esser et al., 2016), which attempt to closely mimic or directly reverse engineer the specifics of neural circuits (albeit with different goals in mind). By focusing on the computational and algorithmic levels, we gain transferrable insights into general mechanisms of brain function, while leaving room to accommodate the distinctive opportunities and challenges that arise when building intelligent machines in silico.

The following sections unpack these points by considering the past, present, and future of the AI-neuroscience interface. Before beginning, we offer a clarification. Throughout this article, we employ the terms "neuroscience" and "AI" in the widest possible sense. When we say neuroscience, we mean to include all fields that are involved with the study of the brain, the behaviors that it generates, and the mechanisms by which it does so, including cognitive neuroscience, systems neuroscience, and psychology. When we say AI, we mean work in machine learning, statistics, and AI research that aims to build intelligent machines (Legg and Hutter, 2007).

We begin by considering the origins of two fields that are pivotal for current AI research, deep learning and reinforcement learning, both of which took root in ideas from neuroscience. We then turn to the current state of play in AI research, noting many cases where inspiration has been drawn (sometimes without explicit acknowledgment) from concepts and findings in neuroscience. In this section, we particularly emphasize instances where we have combined deep learning with other approaches from across machine learning, such as reinforcement learning (Mnih et al., 2015), Monte Carlo tree search (Silver et al., 2016), or techniques involving an external content-addressable memory (Graves et al., 2016). Next, we consider the potential for neuroscience to support future AI research, looking at both the most likely research challenges and some emerging neuroscience-inspired AI techniques. While our main focus will be on the potential for neuroscience to benefit AI, our final section will briefly consider ways in which AI may be helpful to neuroscience and the broader potential for synergistic interactions between these two fields.