Overview

Through research and policy engagement, the Governance of AI Program strives to steer the development of artificial intelligence for the common good. The program is based at the University of Oxford’s Future of Humanity Institute and works in close collaboration with Yale University. We track contemporary applications of AI in justice, the economy, cybersecurity, and the military, and take seriously the pressing issues they raise for transparency, fairness, accountability, and security. Our particular focus, however, is on the challenges arising from transformative AI: advanced AI systems whose impact may be as profound as that of the industrial revolution.

Our work looks at:

  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • global political dimensions of AI-induced unemployment and inequality;
  • risks and dynamics of international AI races;
  • possibilities for global cooperation;
  • associated emerging technologies such as those involving crypto-economic systems, weapons systems, nanotechnology, biotechnology, and surveillance;
  • global public opinion, values, ethics;
  • long-run possibilities for beneficial global governance of advanced AI.

Featured Recent Work:

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI


This report by Jeffrey Ding examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). Read More

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. The report explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks. Read More

Select Publications

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI (2018)

Jeffrey Ding

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Read More

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018)

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. The report distills findings from a 2017 workshop as well as additional research done by the authors. It explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks.

Read More

When Will AI Exceed Human Performance? Evidence from AI Experts (2017)

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI…

Read More

Policy Desiderata in the Development of Machine Superintelligence (2016)

Nick Bostrom, Allan Dafoe, Carrick Flynn

Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities. This paper seeks to initiate discussion of these by identifying a set of distinctive features of the transition to a machine intelligence era. From these distinctive features, we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy…

Read More

Strategic Implications of Openness in AI Development

Nick Bostrom

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short­term impacts of increased openness appear mostly socially beneficial in expectation…

Read More

The Team

Nick Bostrom

Co-Director, Governance of AI Program

Allan Dafoe

Co-Director, Governance of AI Program

Miles Brundage

Research Fellow

AI progress forecasting; science and technology policy

Baobao Zhang

Nuffield Collaborator

Public opinion research, American politics, and public policy

Carrick Flynn

Assistant Director

Law; governance; policy

Jade Leung

DPhil Researcher

International cooperation and institutions; risk and uncertainty governance

Jeffrey Ding

DPhil Researcher

China’s AI strategy; China’s approach to strategic technologies

Helen Toner

Research Associate

AI policy and strategy; progress in machine learning & AI; effective philanthropy

Ben Garfinkel

Visiting Researcher

Cryptography; physics; philosophy

Tanya Singh

Temporary Administrator

Paul de Font-Reaulx

Research Intern

Authoritarian regimes; international cooperation and institutions; philosophy

International Collaborators

Sophie-Charlotte Fischer

ETH Zurich-based Researcher

International security and arms control; information technology and politics; foreign policy

Matthijs M. Maas

University of Copenhagen-based Researcher

AI governance; technology management regimes; nuclear deterrence stability; securitization theory

Remote Interns

Jordan Alexander, Ehrik Aldana, Cullen O’Keefe

Volunteers

Claudia Shi, Lee Sharkey, Morgan Macinnes, Jorgen Bosga, Chelsea Guo

Former Interns

Tamay Besiroglu, William Rathje, Clare Lyle

Opportunities

Job requirements

In all candidates, we seek high general aptitude, self-direction, openness to feedback, and a firm belief in our mission. All candidates will ideally also have experience with the effective altruism movement and familiarity with related ideas.

Across each of the roles below, we are especially interested in people with varying degrees of skill or expertise in the following areas:

  1. International relations, especially in the areas of international cooperation, international law, international political economy, global public goods, constitutional and institutional design, diplomatic coordination and cooperation, arms race dynamics, the history and politics of transformative technologies, governance, and grand strategy.
  2. Mandarin language skills, Chinese politics, and/or the Chinese machine learning community.
  3. Game theory and mathematical modelling.
  4. Survey design and statistical analysis.
  5. Large intergovernmental scientific research organizations and projects (such as CERN, ISS, and ITER).
  6. Technology and other types of forecasting.
  7. Law and/or policy.

While we prefer that non-administrator candidates have a doctoral degree and/or a strong demonstrated track record in one or more relevant areas, we encourage candidates to apply if they can demonstrate aptitude in other ways, such as through independent research or operations work.

General job description

Our team members set their own hours, determine their research directions, and otherwise have ownership over how they pursue their work. We collaborate with one another online, in the office, and in weekly meetings, and with adjacent organizations through a range of media.

We would prefer that staff work with us in person from our Oxford, UK office. However, we will also consider remote working arrangements for highly qualified applicants.

Roles

Researchers

Researchers are the foundation of our program. They generally work independently, taking the lead on a fundamental research topic of their choosing. Researchers share feedback with one another, publish in journals, and present their work at university seminars and international conferences.

They address topics including:

  • long-run possibilities for beneficial global governance of advanced AI;
  • institutions and possibilities for global cooperation;
  • risks and dynamics of international AI races;
  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • associated technologies like crypto-economic systems, nanotechnology, biotechnology, lie detection, and surveillance;
  • global public opinion, values, ethics, and broadly appealing visions;
  • global political dimensions of AI-induced unemployment and inequality.

Researcher candidates would ideally have:

  1. Advanced expertise in at least one of the areas outlined above.
  2. The ability to undertake, with minimal supervision, high-quality independent research that results in academic publications in one of the areas outlined above.
  3. A strong academic background in a relevant field. Preferably a doctorate.

Research Assistants

Research assistants work directly with our senior staff to contribute to, edit, and produce their work. Unlike many positions with this title, our RAs have significant influence over the work they do, making high-level contributions to the research output.

Research Assistant candidates would ideally have:

  1. Moderate expertise in at least one of the areas outlined above.
  2. The ability to undertake some amount of independent research in these areas with minimal supervision.
  3. A strong academic background in a relevant field.

Policy Experts

Policy experts keep our research relevant. They reconcile ideal policy ideas with feasible policy proposals, which we turn into commissioned policy briefs for companies, government agencies, and NGOs. Policy experts serve as an interface between the Governance of AI Program and relevant actors by testifying at their committee meetings and serving on their advisory boards. They should be willing to travel as necessary to fulfil project requirements.

Policy Expert candidates would ideally have:

  1. At least moderate expertise in one of the areas outlined above.
  2. Previous experience working in a policy role or think tank with a close relationship to one of these fields, AI policy, or another related area.
  3. An existing policy network with access to relevant policymakers.
  4. Career plans that would benefit from this role.

Project Managers

Project managers expand our core functions. They take the lead in formulating and executing projects from their inception, including forming a team to implement each project by recruiting new collaborators or existing staff. Project managers develop, orient, and facilitate their teams’ activities, and work with personnel across FHI, the university, and partner institutions to do so.

Potential activities include running an internship or fellowship program, producing a research seminar series, and managing a cross-organization research project. Given the collaborative nature of most such activities, it is usually necessary for project managers to work from our Oxford, UK office.

Project manager candidates would ideally have:

  1. At least moderate knowledge in one of the areas listed above.
  2. A history of successfully managing research, policy, or other types of projects with a close relationship to one of these fields.
  3. Strong interpersonal, organizational, and management skills.
  4. A strong interest in AI governance and related topics.

Administrators

Administrators keep our wheels turning. On a day-to-day basis, they perform a range of tasks, including scheduling meetings and events, booking travel, compiling expense reports, interfacing with the university, and maintaining accurate and up-to-date website content. They work well with both program staff and our wide range of external stakeholders as they participate in personnel processes, meetings, mentorship, media engagement, and networking.

Administrator candidates would ideally have:

  1. (For senior administrative roles) Several years of experience in administration and/or project management.
  2. (For junior administrative roles) A track record of organizing student groups or holding positions of responsibility in job, internship, or voluntary activities.
  3. Strong interpersonal, organizational, and management skills.
  4. Solid generalist competencies and quick, self-directed learning of new tasks.
  5. Moderate familiarity with our research topics or related fields.
  6. A strong interest in AI governance and related topics.

AI Policy and Governance Internship

Previous interns at FHI have worked on issues of public opinion, technology race modelling, the bridge between short-term and long-term AI policy, the development of AI and AI policy in China, comparative case studies of related technologies, and many other topics. You will also get the opportunity to live in Oxford, one of the most beautiful and historic cities in the UK.

Preferred traits of a candidate include:

  1. a strong background in a relevant field such as political science, public policy, economics, law, or computer science;
  2. previous work on AI or AI policy;
  3. career plans that would benefit from this internship;
  4. Chinese language skills and expertise in Chinese politics;
  5. experience with the effective altruism movement and familiarity with related ideas.

Highly talented undergraduates, graduate students, and those who have taken time off before graduate school are all welcome to apply.

Selection Criteria

  • Closeness of match to the preferred traits discussed above.
  • The ability to undertake independent research at a postgraduate level with minimal supervision.
  • Fluency in English.
  • Availability to come to Oxford for approximately 12 weeks (please indicate the period when you would be available when you apply).

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

For more information, please see this post.

Application

To apply for one or more of the positions above, please submit the following information to fhijobs@philosophy.ox.ac.uk with “Application: [Job Title(s)]” as the subject line:

  • Resume or CV
  • Statement of interest
    Only submit one statement, addressing all roles for which you are applying. If you would like to be considered for opportunities with our partner organizations should they arise, please indicate in the body of your email that we may pass along your application materials.
  • Two references
    Names and email addresses; no letters required.
  • (For Researcher or Research Assistant roles) An approximately 2500-word research sample
    See below.

If you think that you might be a good fit for the Governance of AI Program but do not fit any of our listed positions, please submit the same details as above, as well as a project or function proposal.

Research Sample (for Researcher and Research Assistant candidates only):

Email fhijobs@philosophy.ox.ac.uk and request “AI Strategy: The Research Landscape.” Once you have received and read the document, choose a priority area or question from the text or footnotes of the paper. Then write an approximately 2500-word research report attempting to provide an original contribution to the topic. In the paper, please include context for your chosen topic, lines of enquiry for future research, and formal citations for any sources you use besides the provided landscape document.

If you have already written a paper on a relevant topic, feel free to submit that instead, along with a brief explanation of its relevance in your Statement of Interest.

If you are hired, the topic of your submitted paper will not necessarily determine your area of focus while at FHI. However, in order to best assess your fit in the program, we encourage you to demonstrate aptitude with a topic and/or type of research for which you are comparatively well-equipped and which you would be interested in investigating further.

Unfortunately, we do not have the capacity to help prospective hires choose research topics, nor to review reports before submission. If you receive outside input while preparing your report for this application, please include the names and email addresses of those contributors in a footnote on the first page of the report.

All qualified applicants will be considered for employment without regard to race, color, religion, sex, or national origin.