The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in technical AI safety. Examples of this type of work include "Learning the Preferences of Ignorant, Inconsistent Agents," "Safe Reinforcement Learning via Human Intervention," "Deep RL from Human Preferences," and "The Building Blocks of Interpretability." Past interns have collaborated with FHI researchers on a range of publications.
Applicants should have a background in machine learning, computer science, or a related field (statistics, mathematics, physics, cognitive science). Previous research experience in machine learning or computer science is desirable but not required.
This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.
Internships last 2.5 months or longer. We are accepting applications on a rolling basis for internships starting in or after September 2018. Interns are usually based in Oxford, but remote internships are sometimes possible. (Per University guidelines, candidates must be fluent in English.)
To apply, please submit a CV and a short statement of interest (including relevant experience in machine learning, computer science, and programming) via this form. You will also be asked to indicate when you would be available to start your internship and for permission to share your application materials with partner organizations. Please direct questions about the application process to email@example.com.