The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in technical AI safety. Examples of this type of work include Cooperative Inverse Reinforcement Learning, Learning the Preferences of Ignorant, Inconsistent Agents, Safe Reinforcement Learning via Human Intervention, and Deep RL from Human Preferences. The internship will give you the opportunity to work on a specific project. Past interns at FHI have worked on a library for Inverse Reinforcement Learning, on RL with a human teacher, and on active learning for RL. You will also get the opportunity to live in Oxford – one of the most beautiful and historic cities in the UK.

Applicants should have a background in machine learning, computer science, mathematics, or other related fields. Previous research experience in computer science (particularly in machine learning) is desirable but not required.

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

Internships are 2.5 months or longer. Applications for Summer internships are no longer open. We are accepting applications for internships starting in or after September 2018 on a rolling basis.

Selection Criteria

You should be fluent in English.

You must be available to come to Oxford for approximately 12 weeks (please indicate the period when you would be available when you apply).

To apply, please submit a CV and a short statement of interest (including relevant experience in machine learning and any other programming experience) via this form. You will also be asked to indicate when you would be available to start your internship and for permission to share your application materials with partner organizations. Please direct questions about the application process to fhijobs@philosophy.ox.ac.uk.

Given the limited number of internship spots we have available, we encourage you to also consider positions at our partner organizations.