About

The Future of Humanity Institute is a unique world-leading research centre that works on big picture questions for human civilisation and explores what can be done now to ensure a flourishing long-term future. Its multidisciplinary research team includes some of the world's leading researchers in this area. Its work spans the disciplines of mathematics, philosophy, computer science, engineering, ethics, economics, and political science.

FHI has originated or played a pioneering role in developing many of the key concepts that shape current thinking about humanity's future. These include the simulation argument, existential risk, nanotechnology, information hazards, strategy and analysis related to machine superintelligence, astronomical waste, the ethics of digital minds, crucial considerations, observation selection effects in cosmology and other contexts of self-locating belief, prediction markets, infinitarian paralysis, brain emulation scenarios, human enhancement, the unilateralist's curse, the parliamentary model of decision making under normative uncertainty, the vulnerable world hypothesis, and many others.

Current research groups include:

MACROSTRATEGY: Explores how long-term outcomes for humanity may be connected to present-day actions, and searches for crucial considerations.

CENTRE FOR THE GOVERNANCE OF AI: Studies how geopolitics, governance structures, and strategic trends will affect the development of advanced artificial intelligence.

TECHNICAL AI SAFETY: Develops techniques for building artificially intelligent systems that are safe and aligned with human values, in close collaboration with leading AI labs such as DeepMind, OpenAI, and CHAI.

BIOSECURITY: Focuses on developments in biotechnology that could alter fundamental parameters of the human condition (including techniques that raise biosecurity concerns and methods for human enhancement and modification).

RESEARCH SCHOLARS PROGRAMME: A highly selective two-year research training programme: each year around 8-10 salaried positions and 4-8 DPhil scholarships are offered to early-career researchers of outstanding promise who aim to work in FHI-relevant areas.

FHI is currently looking to build more capacity in the following areas:

AI ETHICS AND PHILOSOPHY OF MIND

TRANSPARENCY AND SURVEILLANCE

PHILOSOPHICAL FOUNDATIONS

GRAND FUTURES

COOPERATIVE PRINCIPLES AND INSTITUTIONS

AI EPISTEMOLOGY, ECONOMICS, AND SOCIETY

AI CAPABILITIES

NANOTECHNOLOGY

The Institute is led by its founding director, Professor Nick Bostrom. He is the author of over 200 publications, including Anthropic Bias (2002), Global Catastrophic Risks (2008), Human Enhancement (2009), and Superintelligence: Paths, Dangers, Strategies (2014), a New York Times bestseller which helped spark a global conversation about artificial intelligence. His pioneering work has been highly influential in several of the areas in which the Institute is now active.

Contact

For media requests, job and scholarship enquiries, and more, please see the Contact page.

