Note: The AI Alignment Fellowship will not open for applications in 2021. If you are interested in collaborating with the AI Safety team, feel free to reach out to Anne, who can put you in touch or direct you to similar opportunities.

To be notified when the AI Alignment Fellowship opens and about future AI Safety research opportunities, sign up to our mailing list below. We will only send one or two emails each year.

The Future of Humanity Institute (FHI) hopes to accept applications for our AI Alignment Fellowship starting in spring 2022. Fellows visit for a period of three or more months to pursue research related to the theory or design of human-aligned AI.

The fellowship is an opportunity for people with relevant backgrounds to make progress on AI Safety problems and test their fit for research. Fellows will be supervised by Michael Cohen, Stuart Armstrong, and Ryan Carey; some of their research interests are represented here, here, and here.

FHI’s existing research on AI Safety is broad. Our interests include idealized reasoning, the incentives of AI systems, and the limitations of value learning. You can read more about the AI Safety team’s work here. A full list of our publications can be found here.

If you have questions about the fellowship, please see the FAQ tab. If you have further questions, reach out to ann.iwashita-leroux@philosophy.ox.ac.uk.

We welcome both experienced researchers hoping to work autonomously on research of their own design and aspiring researchers who would benefit from more support and guidance from a supervisor. We will consider applicants ranging from undergraduate to postdoc level, and all applicants should demonstrate an interest in FHI’s AI Safety research. Women and researchers from under-represented groups are especially encouraged to apply. A successful applicant will usually meet two or more of the following criteria:

  1. Published (or acclaimed) papers in machine learning or other theoretical fields
  2. Clear ideas for AI alignment research
  3. Relevant graduate education
  4. Strong academic referees
  5. Good Olympiad/competition results

Criteria 1 and 2 are essential for applicants who would like to conduct unsupervised research. For more junior applicants, we also recommend familiarity with the ideas in at least one of the following papers:

The fellowship runs year-round, and fellows typically visit for a period of three or more months. The exact length of your visit can be discussed with your supervisor. Since fellowships are mostly supervised, we prefer that they are conducted full-time; however, in exceptional cases, we may offer a part-time opportunity. At present, all fellowships will be conducted remotely.

During the fellowship, you will have regular meetings with your supervisor (if applicable). As a fellow, you will also have the opportunity to participate in internal FHI seminars and events.

Previous fellows have received a stipend from the Berkeley Existential Risk Initiative for the duration of the fellowship. 

When does the fellowship run? The fellowship runs year-round, and fellows typically visit for a period of three or more months.

Can I participate in the fellowship part-time? Since fellowships are mostly supervised, we prefer that they are conducted full-time; however, in exceptional cases, we may offer a part-time opportunity.

I would like to research AI, but not AI safety specifically. Should I still apply? The AI Alignment Fellowship is intended for applicants who have an interest in AI safety. If you are interested in AI topics more broadly, you could consider applying to the Governance of AI Fellowship or the Summer Research Fellowship instead.

Note: applications to the Alignment Fellowship are not currently open for the 2021 cycle.  

We will review application forms on a fortnightly basis and will reply to all applicants within a month. Candidates who are successful at the first stage will be invited to interview with a member of the AI Safety team, where they will have the opportunity to discuss their interests and relevant experience in more detail. Due to time constraints, we are not able to provide feedback to candidates who are unsuccessful at this stage.

View our privacy policy here.