As of 2020, the AI Safety and Machine Learning Internship has been replaced by the AI Alignment Visiting Fellowship.

The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work on technical AI safety. Examples of this work include the papers Learning the Preferences of Ignorant, Inconsistent Agents; Safe Reinforcement Learning via Human Intervention; Deep RL from Human Preferences; and The Building Blocks of Interpretability. Past interns have collaborated with FHI researchers on a range of publications.

Applicants should have research experience in machine learning, computer science, or a related field (statistics, mathematics, physics, cognitive science). A successful applicant would typically be enrolled in a computer science graduate programme, hold a technical PhD, or have published work related to AI safety.

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

Internships last 2.5 months or longer, and interns will be based in Oxford. We are accepting applications on a rolling basis for internships starting in or after January 2020. (Per University guidelines, candidates must be fluent in English.)

To apply, please submit a CV and a short statement of interest (including relevant experience in machine learning, computer science, and programming) via this form. You will also be asked to indicate when you would be available to start your internship, and to give permission to share your application materials with partner organisations. Owing to limited researcher time, applications are reviewed roughly once per month; we apologise for any resulting delay. Please direct questions about the application process to Ryan Carey.
