Overview

The Governance of AI Program strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. Our focus is on the political challenges arising from transformative AI: advanced AI systems whose long-term impacts may be as profound as those of the industrial revolution. The Program seeks to guide the development of AI for the common good by conducting research on important and neglected issues of AI governance, and by advising decision makers through policy engagement.

The Program produces research that is foundational to the field of AI governance, for example mapping crucial considerations to direct the research agenda, or identifying distinctive features of the transition to transformative AI and the corresponding policy considerations. Our research also addresses more immediate policy issues, such as malicious use and China’s AI strategy. Current focus areas include international security, the history of technology development, and public opinion.

In addition to research, the Governance of AI Program is engaged in international policy circles and advises governments and industry leaders on AI strategy. Governance of AI Program researchers have spoken at the NIPS and AAAI/ACM conferences, and at events with participation from the German Federal Foreign Office, senior officials in the Canadian government, and the UK All Party Parliamentary Group on AI.

Listen to Prof. Allan Dafoe’s interview on the governance of artificial intelligence.

See also the interview notes.

The Future of Life Institute has also interviewed Allan along with Jessica Cussins in their podcast AI: Global Governance, National Policy and Public Trust.

Our work looks at:

  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • global political dimensions of AI-induced unemployment and inequality;
  • risks and dynamics of international AI races;
  • possibilities for global cooperation;
  • associated emerging technologies such as those involving crypto-economic systems, weapons systems, nanotechnology, biotechnology, and surveillance;
  • global public opinion, values, and ethics;
  • long-run possibilities for beneficial global governance of advanced AI.

Featured Recent Work:

AI Governance: A Research Agenda

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions. Read More.

Syllabus: Artificial Intelligence and International Security

This syllabus by Research Affiliate Remco Zwetsloot covers material located at the intersection between artificial intelligence and international security. It is designed as a resource for those with a background in AI or international relations who are seeking to explore this intersection, and for those who are new to both fields. Find the syllabus here.

Deciphering China’s AI Dream

This report by Jeffrey Ding examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, […] Read More

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. The report explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks. Read More

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers, including the New York Times, the BBC, Reuters, and the Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

Select Publications

AI Governance: A Research Agenda (2018)

Allan Dafoe

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions.

Read More

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI (2018)

Jeffrey Ding

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Read More

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018)

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. This report distills findings from a 2017 workshop as well as additional research done by the authors. It explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks.

Read More

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers, including the New York Times, the BBC, Reuters, and the Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

When Will AI Exceed Human Performance? Evidence from AI Experts (2017)

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI…

Read More

‘When Will AI Exceed Human Performance?’ was ranked #16 in Altmetric’s most discussed articles of 2017. The survey was covered by the BBC, Newsweek, the New Scientist, the MIT Technology Review, Business Insider, The Economist, and many other international news providers.

Policy Desiderata in the Development of Machine Superintelligence (2016)

Nick Bostrom, Allan Dafoe, Carrick Flynn

Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities. This paper seeks to initiate discussion of these by identifying a set of distinctive features of the transition to a machine intelligence era. From these distinctive features, we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy…

Read More

Strategic Implications of Openness in AI Development (2016)

Nick Bostrom

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation…

Read More

Other Publications

Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence (2018)

Published in Should we fear artificial intelligence?, a report by the Science and Technology Options Assessment division of the European Parliament.

Miles Brundage 

Expert opinions on the timing of future developments in artificial intelligence (AI) vary widely, with some expecting human-level AI in the next few decades and others thinking that it is much further off (Grace et al., 2017). Similarly, experts disagree on whether developments in AI are likely to be beneficial or harmful for human civilization, with the range of opinions including those who feel certain that it will be extremely beneficial, those who consider it likely to be extremely harmful (even risking human extinction), and many in between (AI Impacts, 2017). While the risks of AI development have recently received substantial attention (Bostrom, 2014; Amodei and Olah et al., 2016), there has been little systematic discussion of the precise ways in which AI might be beneficial in the long term.

Read More

The Team

Nick Bostrom

Director, Governance of AI Program

Macrostrategy; strategic implications of AI

Allan Dafoe

Director, Governance of AI Program

Miles Brundage

Research Associate

AI progress forecasting; science and technology policy

Baobao Zhang

Research Associate

Public opinion research; American politics; public policy

Carrick Flynn

Research Associate

Legal issues with AI; AI governance challenges; policy

Jade Leung

Head of Partnerships, Researcher

International cooperation and institutions; risk and uncertainty governance

Jeffrey Ding

Researcher

China’s AI strategy; China’s approach to strategic technologies

Helen Toner

Research Associate

AI policy and strategy; progress in machine learning & AI; effective philanthropy

Ben Garfinkel

Researcher

International security; surveillance; AI and cryptography

Tanya Singh

Temporary Administrator

Research Affiliates

Sophie-Charlotte Fischer

ETH Zurich-based Research Affiliate

International security and arms control; IT and politics; foreign policy

Matthijs M. Maas

University of Copenhagen-based Research Affiliate

AI governance; technology management regimes; nuclear deterrence stability; securitization theory

Remco Zwetsloot

Yale University-based Research Affiliate

International security; arms racing and arms control; bargaining theory

Cullen O’Keefe

Research Affiliate

Implications of corporate, US, and international law for AI governance; benevolent AI governance structures

Brian Tse

Research Affiliate

China-U.S. relations; global governance of existential risk; China’s AI safety development

Nathan Calvin

Research Affiliate

Private-public contracting relationships; intellectual property law; international cooperation

Former Research Affiliates

Tamay Besiroglu, Paul de Font-Reaulx, Genevieve Fried, Katelynn Kyker, Clare Lyle, William Rathje

Public Engagement

Researchers at the Governance of AI Program are actively involved in the public dialogue on the impact of advanced, transformative AI. We seek to offer authoritative, actionable, and accessible insight to a range of audiences in policy, academia, and the public. The following is a selection of recent speaking engagements by our researchers at conferences and public venues.

Recent Speaking Engagements

Jade Leung: 'Prospects for firm-government cooperation in transformative AI futures' | (June 10th, 2018, EA Global San Francisco)

Today, private firms are the only prominent actors that have expressed ambitions to develop AGI, and they lead at the cutting edge of advanced AI research. It is therefore critical to consider how these private firms should be involved in the future of AI governance. This talk explores the challenges and opportunities associated with firm-government cooperation, and asks which strategic parameters encourage productive cooperation, and avoid costly conflict, between firms and states in steering towards safe AI governance.

Benjamin Garfinkel: 'The Future of Surveillance Doesn't Need to be Dystopian' | (June 9th, 2018, EA Global San Francisco)

This talk considers two worrisome narratives about technological progress and the future of surveillance. In the first narrative, progress threatens privacy by enabling ever-more-pervasive surveillance. For instance, it is becoming possible to automatically track and analyze the movements of individuals through facial recognition cameras. In the second narrative, progress threatens security by creating new risks that cannot be managed with present levels of surveillance. For instance, small groups developing cyber weapons or pathogens may be unusually difficult to detect. It is suggested that another, more optimistic narrative is also plausible. Technological progress, particularly in the domains of artificial intelligence and cryptography, may help to erase the trade-off between privacy and security.

Jeffrey Ding: Participant in BBC Machine Learning Fireside Chat | (June 6th, 2018, London)

BBC Machine Learning Fireside Chats hosted a discussion between Jeffrey Ding and Charlotte Stix of the Leverhulme Centre for the Future of Intelligence, University of Cambridge. The conversation covered China’s national AI development plan, the state of US ML research, and the position of Europe and Britain in the global AI space.

Allan Dafoe: ‘Keynote on AI Governance’ | (June 1st, 2018, Public Policy Forum)

This keynote address on the governance of AI was given at a Public Policy Forum seminar series attended by deputy ministers and senior officials of the Canadian government.

Benjamin Garfinkel: 'Recent Developments in Cryptography and Why They Matter' | (May 1st, 2018, Oxford Internet Institute)

This talk surveys a range of emerging technologies in the field of cryptography, including blockchain-based technologies and secure multiparty computation, and then analyzes their potential political significance over the long term. These predictions include the views that a growing number of information channels used to conduct surveillance may “go dark,” that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions ranging from banks to voting authorities may shrink, and that new transnational institutions known as “decentralized autonomous organizations” may emerge.

Miles Brundage: 'Offensive applications of AI' | (April 11th, 2018, CyberUK)

Miles Brundage presented the Malicious Use of AI report on a CyberUK 2018 panel.

Sophie-Charlotte Fischer: 'Artificial Intelligence: What implications for Foreign Policy?' | (April 11th, 2018, German Federal Foreign Office)

This panel discussion, co-organized by the German Federal Foreign Office, the Stiftung Neue Verantwortung, and the Mercator Foundation, discussed the findings of a January report by SNV, “Artificial Intelligence and Foreign Policy”. The report seeks to provide a foundation for planning a foreign policy strategy that responds effectively to the emerging power of AI in international affairs.

Allan Dafoe: Chair of panel ‘Artificial Intelligence and Global Security: Risks, Governance, and Alternative Futures’ | (April 6th, 2018, Annual Conference of the Johnson Center for the Study of American Diplomacy, Yale University)

The panel addressed cybersecurity leadership and strategy from the perspective of the Department of Defense. The panelists were Dario Amodei, Research Scientist and Team Lead for Safety at OpenAI; Jason Matheny, Director of the Intelligence Advanced Research Projects Agency; and the Honorable Robert Work, former Acting and Deputy Secretary of Defense and now Senior Counselor for Defense at the Center for a New American Security. The keynote address at the conference was given by Eric Schmidt, and Henry Kissinger also gave a talk.

Matthijs Maas: 'Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of AI deployment' | (February 2nd, 2018, AAAI/ACM Conference on AI, Ethics and Society)

A paper presentation arguing that many AI applications involve networked (tightly coupled, opaque) systems operating in complex or competitive environments, which makes such systems prone to ‘normal accident’-type failures. While this suggests that large-scale, cascading errors in AI systems are very hard to prevent or stop, an examination of the operational features that lead technologies to exhibit such failures enables us to derive both tentative principles for precautionary policymaking and practical recommendations for the safer deployment of AI systems. Conference paper available here.

Allan Dafoe: 'Governing the AI Revolution: The Research Landscape' | (January 25th, 2018, CISAC, Stanford University)

Artificial intelligence (AI) is rapidly improving. The opportunities are tremendous, but so are the risks. Existing and soon-to-exist capabilities pose several plausible extreme governance challenges, including massive labor displacement, extreme inequality, an oligopolistic global market structure, reinforced authoritarianism, shifts and volatility in national power, and strategic instability. Further, there is no apparent ceiling to AI capabilities: experts envision that superhuman capabilities in strategic domains will be achieved in the coming four decades, and radical surprise breakthroughs are possible. Such achievements would likely transform wealth, power, and world order, though global politics will in turn crucially shape how AI is developed and deployed. The consequences are plausibly of a magnitude, and on a timescale, that dwarf other global concerns; leaders of governments and firms are asking for policy guidance, yet scholarly attention to the AI revolution remains negligible. Research is thus urgently needed on the AI governance problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.
Event information available here.

Allan Dafoe: ‘Strategic and Societal Implications of ML’ | (December 8th, 2017, Neural Information Processing Systems Conference)

This paper was presented at a workshop entitled ‘Machine Learning and Computer Security’.

Allan Dafoe: Evidence Panelist for All Party Parliamentary Group on Artificial Intelligence Evidence Meeting | (October 30th, 2017, Evidence Meeting 7: International Perspective and Exemplars)

This panel discussion focused on which countries and communities are more prepared for AI, and how they could be used as case studies. Topics included best practice, national versus multilateral or international approaches, and probable timelines.

Opportunities

General Applications

The Governance of AI Program is always looking for exceptional candidates for researcher and policy expert roles.

In all candidates, we seek high general aptitude, self-direction, openness to feedback, and a firm belief in our mission. All candidates will ideally also have experience with the effective altruism movement and familiarity with related ideas.

Across these roles, we are especially interested in people with varying degrees of skill or expertise in the following areas:

  1. International relations, especially international cooperation, international law, international political economy, global public goods, constitutional and institutional design, diplomatic coordination and cooperation, arms race dynamics, the history and politics of transformative technologies, governance, and grand strategy.
  2. Mandarin and/or Chinese politics and/or the Chinese machine learning community.
  3. Game theory and mathematical modelling.
  4. Survey design and statistical analysis.
  5. Large intergovernmental scientific research organizations and projects (such as CERN, the ISS, and ITER).
  6. Technology and other types of forecasting.
  7. Law and/or policy.

General job description

Our team members set their own hours, determine their research directions, and otherwise have ownership over how they pursue their work. We collaborate with one another online, in the office, and in weekly meetings, and with adjacent organizations through a range of channels.

We would prefer that staff work with us in person from our Oxford, UK office. However, we will also consider remote working arrangements for highly qualified applicants.

Researchers

Researchers are the foundation of our program. They generally work independently, taking the lead on a fundamental research topic of their choosing. Researchers share feedback with one another, publish in journals, and present their work at university seminars and international conferences.

They address topics including:

  • long-run possibilities for beneficial global governance of advanced AI
  • institutions and possibilities for global cooperation;
  • risks and dynamics of international AI races;
  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • associated technologies like crypto-economic systems, nanotechnology, biotechnology, lie detection, and surveillance;
  • global public opinion, values, ethics, and broadly appealing visions;
  • global political dimensions of AI-induced unemployment and inequality.

Researcher candidates would ideally have:

  1. Advanced expertise in at least one of the areas outlined above.
  2. The ability to undertake, with minimal supervision, high-quality independent research that results in academic publications in one of the areas outlined above.
  3. A strong academic background in a relevant field, preferably a doctorate.

Policy Experts

Policy experts keep our research relevant. They reconcile ideal policy ideas with feasible policy proposals, which we turn into commissioned policy briefs for companies, government agencies, and NGOs. Policy experts serve as an interface between the Governance of AI Program and relevant actors by testifying at their committee meetings and serving on their advisory boards. They should be willing to travel as necessary to fulfil project requirements.

Policy Expert candidates would ideally have:

  1. At least moderate expertise in one of the areas outlined above.
  2. Previous experience working in a policy role or think tank with a close relationship to one of these fields, AI policy, or another related area.
  3. An existing policy network with access to relevant policymakers.
  4. Career plans that would benefit from this role.

Application

To apply for one or more of the general positions above, please submit the following information to recruitment@governance.ai with “Application: [Job Title(s)]” as the subject line:

  • Resume or CV
  • Statement of interest
    Submit only one, addressing all roles for which you are applying. If you would like to be considered for opportunities with our partner organizations should they arise, please indicate permission to pass along your application materials in the body of the email.
  • Two references
    Names and email addresses; no letters required.
  • (For Researcher role) An approximately 2500-word research sample
    See below.

If you think that you might be a good fit for the Governance of AI Program but do not fit any of our listed positions, please submit the same details as above, as well as a project or function proposal.

Research Sample (for Researcher candidates only):

Read “AI Governance: A Research Agenda”. Choose a priority area or question from the text or footnotes of the paper. Then write an approximately 2500-word research report attempting to provide an original contribution to the topic. In the paper, please include context for your chosen topic, lines of enquiry for future research, and formal citations for any sources you use besides the provided landscape document.

If you are hired, the topic of your submitted paper will not necessarily determine your area of focus while at FHI. However, in order to best assess your fit in the program, we encourage you to demonstrate aptitude with a topic and/or type of research for which you are comparatively well-equipped and which you would be interested in investigating further.

Unfortunately, we do not have the capacity to help prospective hires choose research topics, nor review reports before submission. If you receive outside input while preparing your report for this application, please include the names and email addresses of those contributors in a footnote on the first page of the report.

All qualified applicants will be considered for employment without regard to race, color, religion, sex, or national origin.