FHI has individual researchers working across many topics related to humanity’s future.  We also currently have the following research groups:

  • Macrostrategy: How long-term outcomes for humanity are connected to present-day actions; global priorities; crucial considerations that may reorient our civilizational scheme of values or objectives.
  • Governance of Artificial Intelligence: How humanity can best govern the transition to advanced AI systems; how geopolitics, governance structures, and strategic trends shape the development and deployment of machine intelligence.
  • AI Safety: Techniques for building artificially intelligent systems that are scalably safe or aligned with human values (in close collaboration with labs such as DeepMind, OpenAI, and CHAI).
  • Biosecurity: How to make the world more secure against (both natural and human-made) catastrophic biological risks; how to ensure that capabilities created by advances in synthetic biology are handled well.
  • Digital Minds: Philosophy of mind and AI ethics, focusing on questions concerning which computations are conscious and which digital minds have what kinds of moral status, and what political systems would enable a harmonious coexistence of biological and nonbiological minds.

Sign up to our vacancies newsletter (only occasional emails).

Other areas in which we are active and are interested in expanding include (but are not limited to) the following:

  • Philosophical Foundations: When might deep uncertainties related to anthropics, infinite ethics, decision theory, computationalism, cluelessness, and value theory affect decisions we might make today? Can we resolve any of these uncertainties?
  • Existential Risk: Identification and characterisation of risks to humanity; improving conceptual tools for understanding and analysing these risks.
  • Grand Futures: Questions related to the Fermi paradox, cosmological modeling of the opportunities available to technologically mature civilizations, implications of multiverse theories, the ultimate limits to technological advancement, counterfactual histories or evolutionary trajectories, new physics.
  • Cooperative Principles and Institutions: Theoretical investigations into structures that facilitate future cooperation at different scales, and a search for levers to increase the chances of cooperative equilibria, e.g. with respect to rival AI developers, humans and digital minds, or among technologically mature civilizations.
  • Technology and Wisdom: What constitutes wisdom in choosing which new technological paths to pursue? Are there structures which enable society to act with greater wisdom both in making choices about what to develop and when or how to deploy new capabilities?
  • Sociotechnical Information Systems: Questions concerning the role of surveillance in preventing existential risks and how to design global information systems (e.g. recommender systems, social networks, peer review, discussion norms, prediction markets, futarchy) to mitigate epistemic dysfunction.
  • Reducing Risk from Malevolent Humans: Defining and operationalizing personality traits of potential concern (e.g. sadism, psychopathy) or promise (e.g. compassion, wisdom, integrity), especially ones relevant to existential risk; evaluating possible intervention strategies (e.g. cultural and biological mechanisms for minimising malevolence, personnel screening tools, shaping incentives in key situations).
  • Concepts, Capabilities and Trends in AI: Understanding the scaling properties and limitations of current AI systems; clarifying concepts used to analyze machine learning models and RL agents; assessing the latest breakthroughs and their potential contribution towards AGI; projecting trends in hardware cost and performance.
  • Space Law: What would be an ideal legal system for long-term space development, and what opportunities exist for adjusting existing treaties and norms?
  • Nanotechnology: Analyzing roadmaps to atomically precise manufacturing and related technologies, and potential impacts and strategic implications of progress in these areas.

For those interested in joining our research, there are a number of ways to get involved:

  • Join as a Full-time Researcher: If you are already pursuing research that would be a good fit for FHI, we would welcome applications to join as a Senior Research Fellow, Research Fellow, or Researcher. Sign up to our newsletter (above right) to be notified when positions open.
  • DPhil Scholarships: We have a small number of scholarships available for students beginning DPhils at Oxford with an interest in FHI-relevant areas. You should first apply to a DPhil programme, and then apply to the Scholarship when it opens (typically January-February).
  • Research Scholars Programme: A highly selective 2-year research training programme, intended for pre-docs or for postdocs interested in switching fields; this gives people space to find their feet in new research areas. Sign up to our newsletter (above right) to be notified when positions open.
  • AI Alignment Fellowship: This fellowship allows individuals to visit us for a period of three or more months to pursue research related to the theory or design of human-aligned AI. Apply here.
  • GovAI Fellowship: This fellowship allows individuals to visit us for a period of three months to pursue research related to the governance of advanced AI. Sign up here to be notified when positions open.
  • Summer Research Fellowship: This fellowship is for people to join for 6-8 weeks in the summer as part of a cohort, to work on anything related to FHI’s interests. Sign up here to be notified when positions open.
  • Give a Talk: If your research is relevant to FHI interests and you would be interested in speaking to us, please contact us via this form explaining in no more than 300 words who you are and what you would like to speak on. We receive many requests, so we are unfortunately unable to grant talk slots to all relevant parties.
  • Academic Visitor: We occasionally accept visitors to come and work at the Institute for periods ranging from days up to a year on a research project of core relevance to FHI. The first step would be to propose the idea to an FHI researcher who would be excited to serve as the host. We are quite selective about whom we accept. We may be able to arrange funding when there is a good fit.

Note: No fellowships will run in 2021. Please see the AI Alignment Fellowship, GovAI Fellowship, and Summer Research Fellowship pages for more information on opportunities to collaborate with our researchers.

Open positions

Visiting FHI

FHI regularly welcomes senior visitors from a variety of different research disciplines. Visits can include seminars, meetings with researchers, and/or research collaborations. Please contact us if you are interested in visiting.

Equality and Diversity

The University of Oxford is committed to fostering an inclusive culture which promotes equality, values diversity and maintains a working, learning and social environment in which the rights and dignity of all its staff and students are respected. Find out more about diversity and equality at Oxford University here.


Please email fhiadminassistant@philosophy.ox.ac.uk with any enquiries about open positions. Please get in contact via this form if you are submitting an open application. 


DPhil Opportunities

FHI offers a DPhil scholarship scheme, granting full scholarships to successful DPhil applicants at the University of Oxford whose research aims to answer crucial questions for improving the long-term prospects of humanity. See here for more information.

Many of the researchers at FHI also welcome the opportunity to collaborate with other Oxford University departments in co-supervising DPhil students. Please send us your CV and a short outline of your research proposal if you are interested in pursuing a DPhil with FHI co-supervision.