Overview

The Governance of AI Program strives to steer the development of artificial intelligence toward the common good through research and policy engagement. The Program is based at the University of Oxford’s Future of Humanity Institute and works in close collaboration with Yale University. Our particular focus is on the challenges arising from transformative AI: advanced AI systems whose long-term impacts may be as profound as those of the industrial revolution.

The Program’s researchers examine the political, economic, military, governance, and ethical dimensions of how humanity can best navigate the transition to such advanced AI systems. We also track contemporary applications of AI in justice, the economy, cybersecurity, and the military, and consider the pressing challenges they pose to transparency, fairness, accountability, and security.

Listen to Prof. Allan Dafoe’s interview on the governance of artificial intelligence.

See also the interview notes.

Our work looks at:

  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • global political dimensions of AI-induced unemployment and inequality;
  • risks and dynamics of international AI races;
  • possibilities for global cooperation;
  • associated emerging technologies such as those involving crypto-economic systems, weapons systems, nanotechnology, biotechnology, and surveillance;
  • global public opinion, values, ethics;
  • long-run possibilities for beneficial global governance of advanced AI.

Some members of the Governance of AI Program

Featured Recent Work:

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI


This report by Jeffrey Ding examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, […] Read More

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and nine other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. The report explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks. Read More

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers, including the New York Times, the BBC, Reuters, and the Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

Select Publications

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI (2018)

Jeffrey Ding

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Read More


The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018)

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and nine other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. It distills findings from a 2017 workshop as well as additional research done by the authors, exploring possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and laying out a research agenda for further work in addressing such risks.

Read More


When Will AI Exceed Human Performance? Evidence from AI Experts (2017)

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI…

Read More

‘When will AI exceed human performance?’ was ranked #16 in Altmetric’s most discussed articles of 2017. The survey was covered by the BBC, Newsweek, the New Scientist, the MIT Technology Review, Business Insider, The Economist, and many other international news providers.

Policy Desiderata in the Development of Machine Superintelligence (2016)

Nick Bostrom, Allan Dafoe, Carrick Flynn

Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities. This paper seeks to initiate discussion of these by identifying a set of distinctive features of the transition to a machine intelligence era. From these distinctive features, we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy…

Read More

Strategic Implications of Openness in AI Development

Nick Bostrom

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation…

Read More

The Team

Nick Bostrom

Co-Director, Governance of AI Program

Allan Dafoe

Co-Director, Governance of AI Program

Miles Brundage

Research Fellow

AI progress forecasting; science and technology policy

Baobao Zhang

Nuffield Collaborator

Public opinion research, American politics, and public policy

Carrick Flynn

Assistant Director

Law; governance; policy

Jade Leung

DPhil Researcher

International cooperation and institutions; risk and uncertainty governance

Jeffrey Ding

DPhil Researcher

China’s AI strategy, China’s approach to strategic technologies

Helen Toner

Research Associate

AI policy and strategy; progress in machine learning & AI; effective philanthropy

Ben Garfinkel

Visiting Researcher

Cryptography; physics; philosophy

Tanya Singh

Temporary Administrator

Research Affiliates

Sophie-Charlotte Fischer

ETH Zurich-based Research Affiliate

International Security and Arms Control; Information Technology and Politics; Foreign Policy

Matthijs M. Maas

University of Copenhagen-based Research Affiliate

AI governance; technology management regimes; nuclear deterrence stability; securitization theory

Remco Zwetsloot

Yale University-based Research Affiliate

International security, arms racing and arms control, bargaining theory

Cullen O’Keefe

Research Affiliate

Law, Governance

Brian Tse

Research Affiliate

China-U.S. relations, global governance of existential risk, China’s AI safety development

Nathan Calvin

Research Affiliate

Private-public contracting relationships, intellectual property law, international cooperation

Katelynn Kyker

Research Affiliate

Industry self-regulation, AI ethics, global cooperation

Former Research Affiliates

Tamay Besiroglu, William Rathje, Clare Lyle, Paul de Font-Reaulx

Public Engagement

Researchers at the Governance of AI Program are actively involved in the public dialogue on the impact of advanced, transformative AI. We seek to offer authoritative, actionable and accessible insight to a range of audiences in policy, academia, and the public. The following is a selection from recent speaking engagements undertaken by our researchers at conferences and public venues.

Recent Speaking Engagements

Jade Leung: 'Prospects for firm-government cooperation in transformative AI futures' | (June 10th, 2018, EA Global San Francisco)

Today, private firms are the only prominent actors that have expressed ambitions to develop AGI, and they lead at the cutting edge of advanced AI research. It is therefore critical to consider how these private firms should be involved in the future of AI governance. This talk explores the challenges and opportunities associated with firm-government cooperation, and asks which strategic parameters encourage productive cooperation and avoid costly conflict between firms and states in steering towards safe AI governance.

Benjamin Garfinkel: 'The Future of Surveillance Doesn't Need to be Dystopian' | (June 9th, 2018, EA Global San Francisco)

This talk considers two worrisome narratives about technological progress and the future of surveillance. In the first narrative, progress threatens privacy by enabling ever-more-pervasive surveillance. For instance, it is becoming possible to automatically track and analyze the movements of individuals through facial recognition cameras. In the second narrative, progress threatens security by creating new risks that cannot be managed with present levels of surveillance. For instance, small groups developing cyber weapons or pathogens may be unusually difficult to detect. It is suggested that another, more optimistic narrative is also plausible. Technological progress, particularly in the domains of artificial intelligence and cryptography, may help to erase the trade-off between privacy and security.

Benjamin Garfinkel: 'Recent Developments in Cryptography and Why They Matter' | (May 1st 2018, Oxford Internet Institute)

This talk surveys a range of emerging technologies in the field of cryptography, including blockchain-based technologies and secure multiparty computation, and then analyzes their potential political significance over the long term. Its predictions include the views that a growing number of information channels used to conduct surveillance may “go dark,” that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions ranging from banks to voting authorities may shrink, and that new transnational institutions known as “decentralized autonomous organizations” may emerge.

Miles Brundage: 'Offensive applications of AI' | (April 11th, 2018, CyberUK)

Presented the Malicious Use of AI report on a CyberUK 2018 panel.

Sophie-Charlotte Fischer: 'Artificial Intelligence: What implications for Foreign Policy?' | (April 11th, 2018, German Federal Foreign Office)

This panel discussion, co-organized by the German Federal Foreign Office, the Stiftung Neue Verantwortung, and the Mercator Foundation, addressed the findings of a January SNV report, “Artificial Intelligence and Foreign Policy”. The report seeks to provide a foundation for planning a foreign policy strategy that responds effectively to the emerging power of AI in international affairs.

Allan Dafoe: Chair of panel ‘Artificial Intelligence and Global Security: Risks, Governance, and Alternative Futures’ | (April 6th 2018, Annual Conference of the Johnson Center for the Study of American Diplomacy, Yale University)

The panel addressed cybersecurity leadership and strategy from the perspective of the Department of Defense. The panelists were Dario Amodei, Research Scientist and Team Lead for Safety at OpenAI; Jason Matheny, Director of the Intelligence Advanced Research Projects Agency; and the Honorable Robert Work, former Acting and Deputy Secretary of Defense and now Senior Counselor for Defense at the Center for a New American Security. The keynote address at the conference was given by Eric Schmidt, and Henry Kissinger also gave a talk.

Matthijs Maas: 'Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of AI deployment' | (February 2nd, 2018, AAAI/ACM Conference on AI, Ethics and Society)

A paper presentation arguing that AI applications often involve networked (tightly coupled, opaque) systems operating in complex or competitive environments, which makes such systems prone to ‘normal accident’-type failures. While this suggests that large-scale, cascading errors in AI systems are very hard to prevent or stop, examining the operational features that lead technologies to exhibit such failures enables us to derive both tentative principles for precautionary policymaking and practical recommendations for the safer deployment of AI systems. Conference paper available here.

Allan Dafoe: 'Governing the AI Revolution: The Research Landscape' | (January 25th, 2018, CISAC, Stanford University)

Artificial intelligence (AI) is rapidly improving. The opportunities are tremendous, but so are the risks. Existing and soon-to-exist capabilities pose several plausible extreme governance challenges, including massive labor displacement, extreme inequality, an oligopolistic global market structure, reinforced authoritarianism, shifts and volatility in national power, and strategic instability. Further, there is no apparent ceiling to AI capabilities: experts envision that superhuman capabilities in strategic domains will be achieved in the coming four decades, and radical surprise breakthroughs are possible. Such achievements would likely transform wealth, power, and world order, though global politics will in turn crucially shape how AI is developed and deployed. The consequences are plausibly of a magnitude and on a timescale to dwarf other global concerns; leaders of governments and firms are asking for policy guidance; and yet scholarly attention to the AI revolution remains negligible. Research is thus urgently needed on the AI governance problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.
Event information available here.

Allan Dafoe: ‘Strategic and Societal Implications of ML’ | (December 8th 2017, Neural Information Processing Systems Conference)

This paper was presented at a workshop entitled ‘Machine Learning and Computer Security’.

Opportunities

Job requirements

In all candidates, we seek high general aptitude, self-direction, openness to feedback, and a firm belief in our mission. Candidates will ideally also have experience with the effective altruism movement and familiarity with related ideas.

Across each of the roles below, we are especially interested in people with varying degrees of skill or expertise in the following areas:

  1. International relations, especially international cooperation, international law, international political economy, global public goods, constitutional and institutional design, diplomatic coordination and cooperation, the history of arms race dynamics, and the politics of transformative technologies, governance, and grand strategy.
  2. Mandarin Chinese, Chinese politics, and/or the Chinese machine learning community.
  3. Game theory and mathematical modelling.
  4. Survey design and statistical analysis.
  5. Large intergovernmental scientific research organizations and projects (such as CERN, the ISS, and ITER).
  6. Technology and other types of forecasting.
  7. Law and/or policy.

While we prefer that non-administrator candidates have a doctoral degree and/or a strong demonstrated track record in one or more relevant areas, we encourage candidates to apply if they can demonstrate aptitude in other ways, such as through independent research or operations work.

General job description

Our team members set their own hours, determine their research directions, and otherwise have ownership over how they pursue their work. We collaborate with one another online, in the office, and in weekly meetings, and with adjacent organizations through a range of channels.

We would prefer that staff work with us in person from our Oxford, UK office. However, we will also consider remote working arrangements for highly qualified applicants.

Opportunities

Researchers

Researchers are the foundation of our program. They generally work independently, taking the lead on a fundamental research topic of their choosing. Researchers share feedback with one another, publish in journals, and present their work at university seminars and international conferences.

They address topics including:

  • long-run possibilities for beneficial global governance of advanced AI;
  • institutions and possibilities for global cooperation;
  • risks and dynamics of international AI races;
  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • associated technologies such as crypto-economic systems, nanotechnology, biotechnology, lie detection, and surveillance;
  • global public opinion, values, ethics, and broadly appealing visions;
  • global political dimensions of AI-induced unemployment and inequality.

Researcher candidates would ideally have:

  1. Advanced expertise in at least one of the areas outlined above.
  2. The ability to undertake, with minimal supervision, high-quality independent research that results in academic publications in one of the areas outlined above.
  3. A strong academic background in a relevant field, preferably with a doctorate.

Research Assistants

Research assistants work directly with our senior staff to contribute to, edit, and produce their work. Unlike many positions with this title, our RAs have substantial influence over the work they do, making high-level contributions to the research output.

Research Assistant candidates would ideally have:

  1. Moderate expertise in at least one of the areas outlined above.
  2. The ability to undertake some amount of independent research in these areas with minimal supervision.
  3. A strong academic background in a relevant field.

Policy Experts

Policy experts keep our research relevant. They reconcile ideal policy ideas with feasible policy proposals, which we turn into commissioned policy briefs for companies, government agencies, and NGOs. Policy experts serve as an interface between the Governance of AI Program and relevant actors by testifying at their committee meetings and serving on their advisory boards. They should be willing to travel as necessary to fulfil project requirements.

Policy Expert candidates would ideally have:

  1. At least moderate expertise in one of the areas outlined above.
  2. Previous experience working in a policy role or think tank with a close relationship to one of these fields, AI policy, or another related area.
  3. An existing policy network with access to relevant policymakers.
  4. Career plans that would benefit from this role.

Project Managers

Project managers expand our core functions. They take the lead in formulating and executing projects from their inception, including forming a team to implement each project by recruiting new or existing staff collaborators. Project managers develop, orient, and facilitate their teams’ activities, and work with personnel across FHI, the university, and partner institutions to do so.

Potential activities include running an internship or fellowship program, producing a research seminar series, and managing a cross-organization research project. Given the collaborative nature of most such activities, it is usually necessary for project managers to work from our Oxford, UK office.

Project manager candidates would ideally have:

  1. At least moderate knowledge in one of the areas listed above.
  2. A history of successfully managing research, policy, or other types of projects with a close relationship to one of these fields.
  3. Strong interpersonal, organizational, and management skills.
  4. A strong interest in AI governance and related topics.

Administrators

Administrators keep our wheels turning. On a day-to-day basis, they perform a range of tasks, including scheduling meetings and events, booking travel, compiling expense reports, interfacing with the university, and maintaining accurate and up-to-date website content. They work well with both program staff and our wide range of external stakeholders as they participate in personnel processes, meetings, mentorship, media engagement, and networking.

Administrator candidates would ideally have:

  1. (For senior administrative roles) Several years of experience in administration and/or project management.
  2. (For junior administrative roles) A track record of organizing student groups or holding positions of responsibility in job, internship, or voluntary activities.
  3. Strong interpersonal, organizational, and management skills.
  4. Solid generalist competencies and quick, self-directed learning of new tasks.
  5. Moderate familiarity with the research topics or other related fields.
  6. A strong interest in AI governance and related topics.

AI Policy and Governance Internship

Previous interns at FHI have worked on issues of public opinion, technology race modelling, the bridge between short-term and long-term AI policy, the development of AI and AI policy in China, case studies of comparable technologies, and many other topics. You will also get the opportunity to live in Oxford, one of the most beautiful and historic cities in the UK.

Preferred traits of a candidate include:

  1. a strong background in a relevant field such as political science, public policy, economics, law, or computer science;
  2. previous work on AI or AI policy;
  3. career plans that would benefit from this internship;
  4. Chinese language skills and expertise in Chinese politics;
  5. experience with the effective altruism movement and familiarity with related ideas.

Highly talented undergraduates, graduate students, and those who have taken time off before graduate school are all welcome to apply.

Selection Criteria

  • Closeness of match to the preferred traits discussed above.
  • The ability to undertake independent research at a postgraduate level with minimal supervision.
  • Fluency in English.
  • Availability to come to Oxford for approximately 12 weeks (please indicate the period when you would be available when you apply).

This is a paid internship. Candidates from underrepresented demographic groups are especially encouraged to apply.

For more information, please see this post.

Application

To apply for the internship, please use this form.

To apply for one or more of the other positions above, please submit the following information to recruitment@governance.ai with “Application: [Job Title(s)]” as the subject line:

  • Resume or CV
  • Statement of interest
    Submit only one statement, addressing all roles for which you are applying. If you would like to be considered for opportunities with our partner organizations should they arise, please indicate in the body of the email that we may pass along your application materials.
  • Two references
    Names and email addresses; no letters required.
  • (For Researcher, Research Assistant, and Intern roles) An approximately 2500-word research sample
    See below.

If you think that you might be a good fit for the Governance of AI Program but do not fit any of our listed positions, please submit the same details as above, as well as a project or function proposal.

 

Research Sample (for Researcher, Research Assistant, and Intern candidates only):

Email recruitment@governance.ai and request “AI Governance Research Landscape.” Once you have received and read the document, choose a priority area or question from the text or footnotes of the paper. Then write an approximately 2500-word research report attempting to provide an original contribution to the topic. In the paper, please include context for your chosen topic, lines of enquiry for future research, and formal citations for any sources you use besides the provided landscape document.

If you have already written a paper on a relevant topic, feel free to submit that instead, along with a brief explanation of its relevance in your Statement of Interest.

If you are hired, the topic of your submitted paper will not necessarily determine your area of focus while at FHI. However, in order to best assess your fit in the program, we encourage you to demonstrate aptitude with a topic and/or type of research for which you are comparatively well-equipped and which you would be interested in investigating further.

Unfortunately, we do not have the capacity to help prospective hires choose research topics, nor review reports before submission. If you receive outside input while preparing your report for this application, please include the names and email addresses of those contributors in a footnote on the first page of the report.

All qualified applicants will be considered for employment without regard to race, color, religion, sex, or national origin.