The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in technical AI safety. Examples of this type of work include Cooperative Inverse Reinforcement Learning, Learning the Preferences of Ignorant, Inconsistent Agents, Learning the Preferences of Bounded Agents, and Safely Interruptible Agents. The internship offers the opportunity to work on a specific project. Interns at FHI have worked on software for Inverse Reinforcement Learning, on a framework for RL with a human teacher, and on RL agents that do active learning. You will also get the opportunity to live in Oxford – one of the most beautiful and historic cities in the UK.
The ideal candidate will have a background in machine learning, computer science, statistics, mathematics, or another related field. Our internships are open to outstanding students who are about to enter the final year of an undergraduate degree, who are in the final year of a Master’s degree, or who have recently completed a PhD.
This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.
There are no deadlines for applications, but it is best to contact us at least 3–4 months before the intended start of the internship, especially if you would require a visa to work in the UK. Internships for this summer holiday have been fully allocated, but it is still worth getting in touch if you are interested so that we can add you to our records.
- You should be able to undertake research at postgraduate level in the area of AI safety.
- You should be fluent in English.
- You must be available to come to Oxford for approximately 12 weeks (please indicate your period of availability when you apply).
To apply, please send a CV and a short statement of interest (including relevant experience in machine learning and any other programming experience) to firstname.lastname@example.org.