Overview

The Governance of AI Program strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. Our focus is on the political challenges arising from transformative AI: advanced AI systems whose long-term impacts may be as profound as those of the industrial revolution. The Program seeks to guide the development of AI for the common good by conducting research on important and neglected issues of AI governance and by advising decision-makers on this research through policy engagement.

The Program produces research that is foundational to the field of AI governance, for example mapping crucial considerations to direct the research agenda, or identifying distinctive features of the transition to transformative AI and the corresponding policy considerations. Our research also addresses more immediate policy issues, such as malicious use and China’s AI strategy. Current focus areas include international security, the history of technology development, and public opinion.

In addition to research, the Governance of AI Program is active in international policy circles, advising governments and industry leaders on AI strategy. Program researchers have spoken at the NIPS and AAAI/ACM conferences, and at events with participation from the German Federal Foreign Office, senior officials in the Canadian government, and the UK All Party Parliamentary Group on AI.

Listen to Prof. Allan Dafoe’s interview on the governance of artificial intelligence 

See also the interview notes.

The Future of Life Institute has also interviewed Allan, along with Jessica Cussins, on their podcast AI: Global Governance, National Policy and Public Trust.

Our work looks at:

  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • global political dimensions of AI-induced unemployment and inequality;
  • risks and dynamics of international AI races;
  • possibilities for global cooperation;
  • associated emerging technologies, such as crypto-economic systems, weapons systems, nanotechnology, biotechnology, and surveillance;
  • global public opinion, values, ethics;
  • long-run possibilities for beneficial global governance of advanced AI.

Featured Recent Work:

AI Governance: A Research Agenda

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions. Read More.

Syllabus: Artificial Intelligence and International Security

This syllabus by Research Affiliate Remco Zwetsloot covers material at the intersection of artificial intelligence and international security. It is designed as a resource both for those with a background in AI or international relations who are seeking to explore this intersection, and for those who are new to both fields. Find the syllabus here.

Deciphering China’s AI Dream

This report by Jeffrey Ding examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, […] Read More

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. The report explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks. Read More

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers, including the New York Times, the BBC, Reuters, and The Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

Select Publications

AI Governance: A Research Agenda (2018)

Allan Dafoe

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions.

Read More

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI (2018)

Jeffrey Ding

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Read More

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018)

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. This report distills findings from a 2017 workshop as well as additional research done by the authors. It explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks.

Read More

When Will AI Exceed Human Performance? Evidence from AI Experts (2017)

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI…

Read More

‘When will AI exceed human performance?’ was ranked #16 in Altmetric’s most discussed articles of 2017. The survey was covered by the BBC, Newsweek, the New Scientist, the MIT Technology Review, Business Insider, The Economist, and many other international news providers.

Policy Desiderata in the Development of Machine Superintelligence (2016)

Nick Bostrom, Allan Dafoe, Carrick Flynn

Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities. This paper seeks to initiate discussion of these by identifying a set of distinctive features of the transition to a machine intelligence era. From these distinctive features, we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy…

Read More

Strategic Implications of Openness in AI Development (2016)

Nick Bostrom

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation…

Read More

Other Publications

Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence (2018)

Published in Should we fear artificial intelligence?, a report by the Science and Technology Options Assessment division of the European Parliament.

Miles Brundage 

Expert opinions on the timing of future developments in artificial intelligence (AI) vary widely, with some expecting human-level AI in the next few decades and others thinking that it is much further off (Grace et al., 2017). Similarly, experts disagree on whether developments in AI are likely to be beneficial or harmful for human civilization, with the range of opinions including those who feel certain that it will be extremely beneficial, those who consider it likely to be extremely harmful (even risking human extinction), and many in between (AI Impacts, 2017). While the risks of AI development have recently received substantial attention (Bostrom, 2014; Amodei and Olah et al., 2016), there has been little systematic discussion of the precise ways in which AI might be beneficial in the long term.

Read More

The Team

Nick Bostrom

Director, Governance of AI Program

Macrostrategy; strategic implications of AI

Allan Dafoe

Director, Governance of AI Program

Miles Brundage

Research Associate

AI progress forecasting; science and technology policy

Baobao Zhang

Research Associate

Public opinion research; American politics; public policy

Carrick Flynn

Research Associate

Legal issues with AI; AI governance challenges; policy

Jade Leung

Head of Partnerships, Researcher

International cooperation and institutions; risk and uncertainty governance

Jeffrey Ding

Researcher

China’s AI strategy; China’s approach to strategic technologies

Helen Toner

Research Associate

AI policy and strategy; progress in machine learning & AI; effective philanthropy

Ben Garfinkel

Researcher

International security; surveillance; AI and cryptography

Tanya Singh

Temporary Administrator

Research Affiliates

Sophie-Charlotte Fischer

ETH Zurich-based Research Affiliate

International security and arms control; IT and politics; foreign policy

Matthijs M. Maas

University of Copenhagen-based Research Affiliate

AI governance; technology management regimes; nuclear deterrence stability; securitization theory

Remco Zwetsloot

Yale University-based Research Affiliate

International security; arms racing and arms control; bargaining theory

Cullen O’Keefe

Research Affiliate

Implications of corporate, US, and international law for AI governance; benevolent AI governance structures

Brian Tse

Policy Affiliate

China-U.S. relations; global governance of existential risk; China’s AI safety development

Peter Cihon

Visiting Researcher

Regulation; the role of firms in global governance; multistakeholder governance models and design

Former Research Affiliates

Tamay Besiroglu, Nathan Calvin, Paul de Font-Reaulx, Genevieve Fried, Katelynn Kyker, Clare Lyle, William Rathje

Public Engagement

Researchers at the Governance of AI Program are actively involved in the public dialogue on the impact of advanced, transformative AI. We seek to offer authoritative, actionable, and accessible insight to a range of audiences in policy, academia, and the public. The following is a selection of recent speaking engagements undertaken by our researchers at conferences and public venues.

Recent Speaking Engagements

Carrick Flynn: 'AI Governance Landscape' | (June 10th, 2018. EA Global San Francisco)

The development of artificial intelligence is well-poised to massively change the world. It’s possible that AI could make life better for all of us, but many experts think there’s a non-negligible chance that the overall impact of AI could be extremely bad. In this talk from Effective Altruism Global 2018: San Francisco, Carrick Flynn lays out what we know from history about controlling powerful technologies, and what the tiny field of AI governance is doing to help AI go well. A recording of Carrick’s talk is available here; a transcript is available here.

Jade Leung: 'Analyzing AI Actors' | (June 10th, 2018. EA Global San Francisco)

Who would you rather have access to human-level artificial intelligence: the US government, Google, the Chinese government or Baidu? The biggest governments and tech firms are the most likely to develop advanced AI, so understanding their goals, abilities and constraints is a vital part of predicting AI’s trajectory. In this talk from EA Global 2018: San Francisco, Jade Leung explores how we can think about major players in AI, including an informative case study. A recording of Jade’s talk is available here. A transcript is available here.

Benjamin Garfinkel: 'The Future of Surveillance Doesn't Need to be Dystopian' | (June 9th, 2018, EA Global San Francisco)

This talk considers two worrisome narratives about technological progress and the future of surveillance. In the first narrative, progress threatens privacy by enabling ever-more-pervasive surveillance; for instance, it is becoming possible to automatically track and analyze the movements of individuals through facial recognition cameras. In the second narrative, progress threatens security by creating new risks that cannot be managed with present levels of surveillance; for instance, small groups developing cyber weapons or pathogens may be unusually difficult to detect. The talk suggests that a third, more optimistic narrative is also plausible: technological progress, particularly in the domains of artificial intelligence and cryptography, may help to erase the trade-off between privacy and security.

Jeffrey Ding: Participant in BBC Machine Learning Fireside Chat | (June 6th 2018, London)

BBC Machine Learning Fireside Chats hosted a discussion between Jeffrey Ding and Charlotte Stix of the Leverhulme Centre for the Future of Intelligence, University of Cambridge. The conversation covered China’s national AI development plan, the state of US ML research, and the position of Europe and Britain in the global AI space.

Allan Dafoe: ‘Keynote on AI Governance’ | (June 1st 2018, Public Policy Forum)

This keynote address on the governance of AI was given at a Public Policy Forum seminar series attended by deputy ministers and senior officials of the Canadian government.

Benjamin Garfinkel: 'Recent Developments in Cryptography and Why They Matter' | (May 1st 2018, Oxford Internet Institute)

This talk surveys a range of emerging technologies in the field of cryptography, including blockchain-based technologies and secure multiparty computation, and then analyzes their potential long-term political significance. These predictions include the views that a growing number of information channels used to conduct surveillance may “go dark,” that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions, ranging from banks to voting authorities, may shrink, and that new transnational institutions known as “decentralized autonomous organizations” may emerge.

Miles Brundage: 'Offensive applications of AI' | (April 11th, 2018, CyberUK)

Miles Brundage presented the Malicious Use of AI report on a CyberUK 2018 panel.

Sophie-Charlotte Fischer: 'Artificial Intelligence: What implications for Foreign Policy?' | (April 11th, 2018, German Federal Foreign Office)

This panel, co-organized by the German Federal Foreign Office, the Stiftung Neue Verantwortung, and the Mercator Foundation, discussed the findings of a January report by SNV, “Artificial Intelligence and Foreign Policy”. The report seeks to provide a foundation for planning a foreign policy strategy that responds effectively to the emerging power of AI in international affairs.

Allan Dafoe: Chair of panel ‘Artificial Intelligence and Global Security: Risks, Governance, and Alternative Futures’ | (April 6th 2018, Annual Conference of the Johnson Center for the Study of American Diplomacy, Yale University)

The panel addressed cybersecurity leadership and strategy from the perspective of the Department of Defense. The panelists were Dario Amodei, Research Scientist and Team Lead for Safety at OpenAI; Jason Matheny, Director of the Intelligence Advanced Research Projects Agency; and the Honorable Robert Work, former Acting and Deputy Secretary of Defense and now Senior Counselor for Defense at the Center for a New American Security. The keynote address at the conference was given by Eric Schmidt, and Henry Kissinger also gave a talk.

Matthijs Maas: 'Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of AI deployment' | (February 2nd, 2018, AAAI/ACM Conference on AI, Ethics and Society)

A paper presentation arguing that many AI applications involve networked (tightly coupled, opaque) systems operating in complex or competitive environments, which makes such systems prone to ‘normal accident’-type failures. While this suggests that large-scale, cascading errors in AI systems are very hard to prevent or stop, an examination of the operational features that lead technologies to exhibit such failures enables us to derive both tentative principles for precautionary policymaking and practical recommendations for the safer deployment of AI systems. Conference paper available here.

Allan Dafoe: 'Governing the AI Revolution: The Research Landscape' | (January 25th, 2018, CISAC, Stanford University)

Artificial intelligence (AI) is rapidly improving. The opportunities are tremendous, but so are the risks. Existing and soon-to-exist capabilities pose several plausible extreme governance challenges, including massive labor displacement, extreme inequality, an oligopolistic global market structure, reinforced authoritarianism, shifts and volatility in national power, and strategic instability. Further, there is no apparent ceiling to AI capabilities: experts envision that superhuman capabilities in strategic domains will be achieved in the coming four decades, and radical surprise breakthroughs are possible. Such achievements would likely transform wealth, power, and world order, though global politics will in turn crucially shape how AI is developed and deployed. The consequences are plausibly of a magnitude and on a timescale that dwarf other global concerns; leaders of governments and firms are asking for policy guidance; and yet scholarly attention to the AI revolution remains negligible. Research is thus urgently needed on the AI governance problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.
Event information available here.

Allan Dafoe: ‘Strategic and Societal Implications of ML’ | (December 8th 2017, Neural Information Processing Systems Conference)

This paper was given at a workshop entitled ‘Machine Learning and Computer Security’.

Allan Dafoe: Evidence Panelist for All Party Parliamentary Group on Artificial Intelligence Evidence Meeting | (October 30th 2017, Evidence Meeting 7: International Perspective and Exemplars)

This panel discussion focused on which countries and communities are more prepared for AI, and how they could be used as case studies. Topics included best practice, national versus multilateral or international approaches, and probable timelines.

Opportunities

Current Opportunity: DPhil Scholarship

The Future of Humanity Institute is launching a new scholarship programme for DPhil students starting at the University of Oxford in the 2019-2020 academic year. We will be awarding up to 8 scholarships for scholars whose research aims to answer crucial questions for improving the long-term prospects of humanity. Candidates will be considered from a range of disciplines, including international relations, philosophy, public policy, computer science, mathematics, and economics. Successful applicants will be awarded full funding for their DPhil, office space in FHI, and the chance to participate in the Research Scholars Programme as well as other FHI activities.

At the initial stage there is no separate application for the scholarship. Applicants need to apply to an Oxford department by the relevant January deadline to be considered for the scholarship. Once departments have made their offers, FHI will invite recommended candidates to complete a separate application. You can register your interest in the scholarship here.

General Applications (Researchers and Policy Experts)

The Governance of Artificial Intelligence Program is always looking for exceptional candidates for researcher and policy expert roles.

In all candidates, we seek high general aptitude, self-direction, openness to feedback, and a firm belief in our mission.

Across each of these roles, we are especially interested in people with varying degrees of skill or expertise in the following areas:

  1. International relations, especially international cooperation, international law, international political economy, global public goods, constitutional and institutional design, diplomatic coordination and cooperation, arms race dynamics, the history and politics of transformative technologies, governance, and grand strategy.
  2. Chinese politics and machine learning in China.
  3. Game theory and mathematical modelling.
  4. Survey design and statistical analysis.
  5. Large intergovernmental scientific research organizations and projects (such as CERN, ISS, and ITER).
  6. Technology and other types of forecasting.
  7. Law and/or policy.

Our goal is to identify exceptional talent. We are interested in hiring for full-time work at Oxford. We are also interested in getting to know exceptional people who might only be available part-time or for remote work.

As we work closely with leading AI labs and the effective altruism community, familiarity and involvement with them is a plus.

If you think that you might be a good fit for the Governance of AI Program but do not fit any of our listed positions, please email recruitment@governance.ai with your CV and a brief statement of interest outlining (i) why you want to work with us, (ii) what you can contribute to our team and (iii) how this role fits into your plans.

All qualified applicants will be considered for employment without regard to race, color, religion, sex, or national origin.