Current Vacancies

Internships in AI Safety and Reinforcement Learning

Applications are accepted on an ongoing basis, but early application is preferred

The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in technical AI safety. Examples of this type of work include Cooperative Inverse Reinforcement Learning; Learning the Preferences of Ignorant, Inconsistent Agents; Learning the Preferences of Bounded Agents; and Safely Interruptible Agents. The internship will give you the opportunity to work on a specific project. Interns at FHI have worked on software for Inverse Reinforcement Learning, on a framework for RL with a human teacher, and on RL agents that do active learning. You will also get the opportunity to live in Oxford – one of the most beautiful and historic cities in the UK.

Jobs

Subscribe to our vacancies list to be notified when new positions open.

If you are a researcher interested in working at FHI, please send your academic CV to fhijobs@philosophy.ox.ac.uk.

If you are interested in volunteering or interning with FHI, please send your CV and a short statement of interest to fhijobs@philosophy.ox.ac.uk.

Self-funded academic visitors interested in staying at FHI while they pursue independent research related to FHI’s current priorities should send a CV to fhijobs@philosophy.ox.ac.uk.

Please note that sending an email to these addresses does not automatically sign you up to receive the vacancies newsletter.