THESE POSITIONS ARE NOW CLOSED

The Future of Humanity Institute is opening several research positions for researchers who specialise either in one of our most visible current areas of research (Macrostrategy, Technical AI Safety, Governance of AI, and Biosecurity) or in areas where we are looking to build capacity (listed below). As FHI grows in these areas, there may be opportunities for a suitably talented and energetic recruit to support and contribute to the growth of these research groups. 

  • AI ethics and philosophy of mind: especially questions concerning which computations are conscious and which digital minds have what kinds of moral status.
  • Transparency and surveillance: especially questions concerning the role of surveillance in preventing existential risks and how to architect global information systems.
  • Philosophical foundations: for example, questions related to anthropics, infinite ethics, decision theory, computationalism, cluelessness, and value theory pertaining to radically technologically empowered futures.
  • Grand futures: for example, questions related to the Fermi paradox, cosmological modeling of the opportunities available to technologically mature civilizations, implications of multiverse theories, the ultimate limits to technological advancement.
  • Cooperative principles and institutions: theoretical investigations into structures that facilitate cooperation at different scales; and search for levers where a relatively small effort could increase the chances of cooperative equilibria.
  • AI ethics: analysing ethical issues that arise with current uses of machine intelligence or in the context of possible future developments.
  • AI capabilities: seeking a better understanding of the capabilities and limitations of current AI systems, and the likely paths along which these capabilities may grow as the field of machine intelligence advances; developing intuitions and theoretical understanding of how key machine learning algorithms scale, their limitations, and their epistemological, computational, and decision-theoretic properties.
  • Nanotechnology: analysing roadmaps to atomically precise manufacturing and related technologies, including possible intersections with advances in artificial intelligence, and potential impacts and strategic implications of progress in these areas.

We are looking to hire across grades and experience levels. You can be guided by the criteria mentioned below (more details are available in the job descriptions which are accessible via the application links below): 

Researchers: You will hold a first degree, with evidence of research potential in a relevant field. You would be expected to pursue independent research projects, with guidance from more experienced team members. For the Researcher positions, apply here

Research Fellows: You will have a Bachelor's and/or Master's degree with at least two years of research experience and evidence of research potential in a field relevant to your specialism. You would possess sufficient specialist knowledge in the discipline to pursue independent lines of research, with an ability to manage your own research and related activities with little guidance. For the Research Fellow positions, apply here

Senior Research Fellows: You would hold a relevant Ph.D. with post-qualification research experience, or equivalent experience in a non-academic setting. You would have a strong publication record or its non-academic equivalent. Excellent candidates can set up their own research teams under the guidance of the Director. For the Senior Research Fellow positions, apply here

If you aren’t sure which of two grades applies to you, please apply to both, and we will consider you for the appropriate one. 

Overview of the selection process 

Stage 1: Submit your CV and cover letter on the university website (grade-specific links for submitting material below)

  1. CV
  2. Cover letter

Your cover letter should be no more than 400 words and should:

  1. Explain how you meet the selection criteria for the role, and
  2. Outline one research idea that you would like to pursue, or would like FHI to pursue, and why.

Stage 2: A subset of applicants will be invited to the second stage of the process, which involves:

  1. Submitting a research proposal (no more than 1000 words). You can choose to explore the 400-word research idea further, or submit another proposal.
  2. Submitting a paper of your choice (single authored, if possible)
  3. A timed test
  4. Two academic references
  5. A 30-minute interview

Note: The interviews will take place in early September.

Stage 3: A final shortlist of candidates will be invited for the third stage of the process which will involve:

  1. A 30-minute interview

Note: We hope to wrap up the process and make final offers by end of September 2019.

More about: 

Technical AI safety

The long-term goal of this research is to contribute to safe, robust AGI systems that are aligned with human values.

We are hiring in *all* areas of AI Safety research. FHI’s existing research on AI Safety is broad. For example, on the theoretical end, our interests include models of causal influence, and the limitations of value learning.  On the experimental side, we have been interested in training deep learning models to decompose complex tasks, and to be more robust to large errors. Some examples of FHI’s research are: [1], [2], [3]. Groups at DeepMind, CHAI, MIRI and OpenAI are also conducting highly relevant research.

Researchers with a primary focus on AI Safety may also have the opportunity to do research in other areas. This might include:

  • Research in AI or machine learning outside of AI Safety.
  • Research in any of FHI’s other research areas.

There are no general requirements for previous experience or qualifications beyond those specified in the general job description. However, we will look for experience relevant to the kind of research the candidate proposes.

For Stage 2, candidates are asked to submit a research paper “single authored, if possible”. We understand that papers in computer science and machine learning usually have multiple authors. There is no need to submit a single-authored paper. We suggest that candidates select a paper to which they made a substantial contribution.

Center for the Governance of AI

The Governance of AI team is seeking exceptional researchers to carry out collaborative research with our interdisciplinary team. Researchers in the team will have the opportunity to conduct cutting-edge research in a fast-growing field. We are looking to hire across grades and experience levels: Researchers, Research Fellows, and Senior Research Fellows/Associate Professors. 

Our work focuses on identifying and steering high-stakes decisions regarding artificial intelligence. To do so, we conduct research into:

  • The technical landscape – What will AI systems be capable of, and when?
  • The strategic landscape – What are the most important geopolitical effects and dynamics, and how can they be steered for the common good? 
  • Ideal governance – What norms and institutions would we ideally create to govern the transition to advanced artificial intelligence?
  • AI policy – What policies can governments, researchers, and firms pursue today to improve AI’s impact on society?

Our work is structured around our research agenda. Building on this, we have published work on policy considerations for transformative AI, surveys of machine learning experts and the public on AI governance issues, China’s AI strategy, malicious use of AI, strategic considerations of openness in AI, the offense-defense balance, and industry standards on AI.

The research has been featured in The Economist, Foreign Affairs, The Financial Times, The MIT Technology Review, Wired, and Lawfare; our researchers are frequently invited to present their work to important audiences (e.g. the EU Parliament and the US Congress). 

Given the multidisciplinarity of our work, we are interested in candidates from a broad set of disciplines including International Relations, Public Policy, Political Science, History, Economics, Sociology, Law, Philosophy, Mathematics, and Computer Science. Technical expertise in or familiarity with machine learning is useful but not required. We are also interested in candidates with policy-making experience. To find out more about our work, visit our publication page, watch this video, or listen to this podcast.

If you are interested in learning more about our work on AI governance, please contact markus.anderljung@philosophy.ox.ac.uk.
