We always want to hear from world-class researchers at any level of seniority (PhDs, post-docs, and professors) and from high-calibre operations, project management, and communications staff who might be interested in getting involved in our work. If this might be you, please get in touch, even if no current opening seems to fit. Due to the volume of interest we receive, we strongly prefer statements of interest of 300 words or fewer, with specificity about the capacities in which you would like to engage with FHI.

Aside from the areas in which we are currently most visibly active (AI safety and capability, Biosecurity, Governance of AI, and Macrostrategy), we are also interested in building more capacity in the following areas. There might even be opportunities for a suitably talented and effective individual to build up their own research group within FHI to focus on one of these:

  • AI ethics and philosophy of mind: especially questions concerning which computations are conscious and which digital minds have what kinds of moral status.
  • Transparency and surveillance: especially questions concerning the role of surveillance in preventing existential risks and how to architect global information systems.
  • Philosophical foundations: for example, questions related to anthropics, infinite ethics, decision theory, computationalism, cluelessness, and value theory pertaining to radically technologically empowered futures.
  • Grand futures: for example, questions related to the Fermi paradox, cosmological modeling of the opportunities available to technologically mature civilizations, implications of multiverse theories, and the ultimate limits of technological advancement.
  • Cooperative principles and institutions: theoretical investigations into structures that facilitate cooperation at different scales; and search for levers where a relatively small effort could increase the chances of cooperative equilibria.
  • AI ethics: analysing ethical issues that arise with current uses of machine intelligence or in the context of possible future developments.
  • AI capabilities: seeking a better understanding of the capabilities and limitations of current AI systems, and of the likely paths along which these capabilities may grow as the field of machine intelligence advances; developing intuitions and theoretical understanding of how key machine learning algorithms scale, their limitations, and their epistemological, computational, and decision-theoretic properties.
  • Nanotechnology: analysing roadmaps to atomically precise manufacturing and related technologies, including possible intersections with advances in artificial intelligence, and potential impacts and strategic implications of progress in these areas.

Open positions

Administrative Assistant

[Applications have now closed for this post.] The Future of Humanity Institute (FHI) at the University of Oxford is excited to invite applications for an Administrative Assistant. The post is fixed-term for 14 months from the date of appointment and full-time, with part-time arrangements considered.

Research Assistant to Director, Future of Humanity Institute

[Applications have now closed for this post.] Applications are invited for a Research Assistant to Professor Nick Bostrom, Director of FHI. We are looking for a general-purpose research assistant willing to conduct research on diverse topics relevant to the Director's work. We are able to sponsor a visa for applicants who do not […]

Website and Media Outreach Manager

[Applications have now closed for this post.] Applications are invited for a Website and Media Outreach Manager. The successful candidate will be responsible for maintaining FHI’s website, managing social media outreach, and providing design support. The duties of this post are expected to evolve and change in response to the rapid advance of software technology and […]

AI Safety and Machine Learning Internship Program

The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include “Learning the Preferences of Ignorant, Inconsistent Agents”, “Safe Reinforcement Learning via Human Intervention”, “Deep RL from Human Preferences”, and “The Building Blocks of Interpretability”. […]

Visiting FHI

FHI regularly welcomes senior visitors from a variety of different research disciplines. Visits can include seminars, meetings with researchers, and/or research collaborations. Please contact us if you are interested in visiting.

Equality and Diversity

The University of Oxford is committed to fostering an inclusive culture which promotes equality, values diversity and maintains a working, learning and social environment in which the rights and dignity of all its staff and students are respected. Find out more about diversity and equality at Oxford University here.

DPhil Opportunities

FHI offers a DPhil scholarship scheme, granting full scholarships to successful DPhil applicants at the University of Oxford whose research aims to answer crucial questions for improving the long-term prospects of humanity. See here for more information.

Many of the researchers at FHI also welcome the opportunity to collaborate with other Oxford University departments in co-supervising DPhil students. Please send us your CV and a short outline of your research proposal if you are interested in pursuing a DPhil with FHI co-supervision.

Contact

Please get in touch at fhijobs@philosophy.ox.ac.uk with any enquiries.