We are currently hiring for a Survey Research Contractor. Details and application form are here. The deadline is 23:59 BST on August 10th 2020.

We are growing the field of AI governance, at GovAI and in the world. If you are interested in working in this field, please reach out. You could consider applying for our Governance of AI Fellowship (applications likely to open in the Fall of 2020) or send us a general application. We are interested in researchers and policy experts at all levels of experience, including pre-docs, postdocs, professors, and senior collaborators.

The Centre for the Governance of AI (GovAI), part of the Future of Humanity Institute at the University of Oxford, strives to help humanity capture the benefits and manage the risks of artificial intelligence. We conduct research into important and neglected issues within AI governance, drawing on Political Science, International Relations, Computer Science, Economics, Law, and Philosophy. Our research is used to advise decision-makers in private industry, civil society, and policy. 


Our work is guided by our Research Agenda and includes examination of how technological trends, geopolitics, and governance structures will affect the development of advanced artificial intelligence.


Our full list of publications can be found here.

Policy Engagement and Events

We are active in international policy circles, regularly hosting discussions with leading academics in the field and advising governments and industry leaders.

Recent policy engagement and events include writing in The Washington Post about Covid-19 contact tracing apps, presenting evidence to the US Congress on China’s AI strategy, and a live webinar with Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz on the economics of AI and COVID-19.

The Team 

GovAI is directed by Professor Allan Dafoe. Our core staff comprises an interdisciplinary team of policy experts and researchers. Our research affiliates work on a wide variety of domains, including China-US relations, cybersecurity, EU policy, and AI progress forecasting.

Learn more about our work

2019 Beneficial AGI Conference, Puerto Rico

You can also learn more about the Centre for the Governance of AI on various podcasts.

Our work looks at:

  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • global political dimensions of AI-induced unemployment and inequality;
  • risks and dynamics of international AI races;
  • possibilities for global cooperation;
  • associated emerging technologies such as those involving crypto-economic systems, weapons systems, nanotechnology, biotechnology, and surveillance;
  • global public opinion, values, ethics;
  • long-run possibilities for beneficial global governance of advanced AI.

Sign up below to receive periodic updates from us about, for example, events and open positions.

You can sign up to the Future of Humanity Institute mailing list here.

Featured Recent Work:

For all our publications, see Publications.

AI Governance: A Research Agenda

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions. Read More.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation


This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. The report explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks. Read More

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers including the New York Times, the BBC, Reuters, and the Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

Artificial Intelligence: American Attitudes and Trends

This report by Baobao Zhang and Allan Dafoe presents the results from an extensive look at the American public’s attitudes toward AI and AI governance, with questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of U.S. public opinion regarding AI. Read More; and see HTML version.

Featured in Bloomberg, Vox, Axios, the MIT Technology Review, and the Future of Life Institute podcast.

Deciphering China’s AI Dream

This report by Jeffrey Ding examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, […] Read More

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

Featured Policy Writing:

For all our policy writing, see Policy & Public Engagement.

A Guide to Writing the NeurIPS Impact Statement

Carolyn Ashurst, Markus Anderljung, Carina Prunkl, Jan Leike, Yarin Gal, Toby Shevlane, Allan Dafoe

13 May 2020

Contact tracing apps can help stop coronavirus. But they can hurt privacy.

Toby Shevlane, Ben Garfinkel, and Allan Dafoe

28 April 2020

Export Controls in the Age of AI

Jade Leung, Sophie-Charlotte Fischer, and Allan Dafoe

28 August 2019

Thinking About Risks From AI: Accidents, Misuse and Structure

Remco Zwetsloot, Allan Dafoe

11 February 2019

Beyond the AI Arms Race

Remco Zwetsloot, Helen Toner, and Jeffrey Ding

16 November 2018

JAIC: Pentagon debuts artificial intelligence hub

Jade Leung, Sophie-Charlotte Fischer

8 August 2018


You can also find our publications on our Google Scholar page

Below, you’ll find our select publications, technical reports, and policy writing.

Select Publications

The Windfall Clause: Distributing the Benefits of AI for the Common Good (2020)

Cullen O’Keefe, Peter Cihon, Ben Garfinkel, Carrick Flynn, Jade Leung, and Allan Dafoe

The Windfall Clause is a policy proposal for an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits garnered from the development of transformative AI. This report reviews the motivations for such a policy, enumerates central considerations regarding the design of the Clause, weighs up its limitations against alternative solutions, and situates the Windfall Clause in the broader conversation on the distribution of gains from AI. We hope to spark productive debate on such crucial issues, and contribute to the growing, global discussion centered around channeling technology-driven economic growth towards robustly equitable, broadly beneficial outcomes. 

Want to learn more? Read a summary of the report, the full report, a paper published at the AAAI / ACM AI Ethics & Society conference, listen to Cullen O’Keefe’s talk about the idea, or watch this short video.

Artificial Intelligence: American Attitudes and Trends (2019)

Baobao Zhang and Allan Dafoe

This report by Baobao Zhang and Allan Dafoe presents the results from an extensive look at the American public’s attitudes toward AI and AI governance, with questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of U.S. public opinion regarding AI.

See HTML version.

Featured in Bloomberg, Vox, Axios, and the MIT Technology Review.

AI Governance: A Research Agenda (2018)

Allan Dafoe

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions.

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI (2018)

Jeffrey Ding

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018)

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. This report distills findings from a 2017 workshop as well as additional research done by the authors. It explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks.

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers including the New York Times, the BBC, Reuters, and the Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

The Vulnerable World Hypothesis (2018)

Nick Bostrom

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

When Will AI Exceed Human Performance? Evidence from AI Experts (2017)

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI…

‘When will AI exceed human performance?’ was ranked #16 in Altmetric’s most discussed articles of 2017. The survey was covered by the BBC, Newsweek, the New Scientist, the MIT Technology Review, Business Insider, The Economist, and many other international news providers.

Strategic Implications of Openness in AI Development (2016)

Nick Bostrom

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation…

Technical Reports

Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society (2020)

Carina Prunkl and Jess Whittlestone

One way of carving up the broad ‘AI ethics and society’ research space that has emerged in recent years is to distinguish between ‘near-term’ and ‘long-term’ research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed.

We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (2020)

Markus Anderljung, Carrick Flynn, Brian Tse, Carina Prunkl, Peter Eckersley, Cullen O’Keefe, Jade Leung, Helen Toner, Miles Brundage, and Gillian Hadfield

This report suggests various steps that different stakeholders in AI development can take to make it easier to verify claims about AI development, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. Implementation of such mechanisms can help make progress on the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion. The mechanisms outlined in this report deal with questions that various parties involved in AI development might face.

The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? (2020)

Toby Shevlane and Allan Dafoe

There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility for adequate defensive measures, or the independent discovery of the knowledge outside of the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.

Should Artificial Intelligence Governance be Centralised? Six Design Lessons from History (2019)

Peter Cihon, Matthijs M. Maas, and Luke Kemp

Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, for now fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.

Social and Governance Implications of Improved Data Efficiency (2020)

Aaron D. Tucker, Markus Anderljung, and Allan Dafoe

Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socioeconomic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency – as more actors gain access to any level of capability – the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the “AI production function”, will be key to understanding the development of the AI industry and its societal impacts.

Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter (2020)

Nathan Calvin and Jade Leung

This working paper is a preliminary analysis of the legal rules, norms, and strategies governing artificial intelligence (AI)-related intellectual property (IP). We analyze the existing AI-related IP practices of select companies and governments, and provide some tentative predictions for how these strategies and dynamics may continue to evolve in the future.

Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies (2019)

Jade Leung, DPhil thesis from the University of Oxford, International Relations

Artificial intelligence (AI) is a strategic general purpose technology (GPT) with the potential to deliver vast economic value and substantially affect national security. The central claim motivating this work is that the development of a strategic GPT follows a distinct pattern of politics. By modelling this pattern, we can make predictions about how the politics of AI will unfold.

The proposed model follows a life cycle of a strategic GPT. It focuses on three actors – the state, firms, and researchers. Each actor is defined by their goals, resources and constraints. The model analyses the relationships between these actors – specifically, the synergies and conflicts that emerge between them as their goals, resources, and constraints interact.

Case studies of strategic GPTs developed in the U.S. – specifically aerospace technology, biotechnology, and cryptography – show that the model captures much of history accurately. When applied to AI, the model also does well to capture political dynamics to date and motivates predictions about how we could expect the politics of AI to unfold. For example, I predict that AI firms will be increasingly constrained by the legislative environment, and more pressured to serve national defense and security interests. Some will be caught in the crosshairs of public critique and researcher pushback; some, however, will willingly sell AI technologies to the state with little friction. Further, I predict that the political influence of researchers will shrink, going against what some may view as a rise in researcher influence given recent events of employee backlash in AI firms. In turn, the inclination and capacity for the state to exert control over AI’s development and proliferation will likely grow, exercised via tools such as export controls.

Artificial intelligence is going to matter greatly, and indeed, already does. It matters, then, that we understand the politics that surrounds it, and that we ultimately lay the groundwork for the governance of a technology that is poised to be transformative.

How Does the Offense-Defense Balance Scale? (2019)

Ben Garfinkel and Allan Dafoe in Journal of Strategic Studies

We ask how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. To do so we offer a general formalization of the offense-defense balance in terms of contest success functions. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities.

Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development (2019)

Peter Cihon

Today, AI policy analysis tends to focus on national strategies, nascent international initiatives, and the policies of individual corporations. Yet, international standards produced by nongovernmental organizations are also an important site of forthcoming AI governance. International standards can impact national policies, international institutions, and individual corporations alike. International standards offer an impactful policy tool in the global coordination of beneficial AI development.

The case for further engagement in the development of international standards for AI R&D is detailed in this report. It explains the global policy benefits of AI standards, outlines the current landscape for AI standards around the world, and offers a series of recommendations to researchers, AI developers, and other AI organizations.

Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission (2019)

Cullen O’Keefe

This century, advanced artificial intelligence (“Advanced AI”) technologies could radically change economic or political power. Such changes produce a tension that is the focus of this Report. On the one hand, the prospect of radical change provides the motivation to craft, ex ante, agreements that positively shape those changes. On the other hand, a radical transition increases the difficulty of forming such agreements since we are in a poor position to know what the transition period will entail or produce. The difficulty and importance of crafting such agreements is positively correlated with the magnitude of the changes from Advanced AI. The difficulty of crafting long-term agreements in the face of radical changes from Advanced AI is the “turbulence” with which this Report is concerned. This Report attempts to give readers a toolkit for making stable agreements—ones that preserve the intent of their drafters—in light of this turbulence.

Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence (2018)

Published in Should we fear artificial intelligence?, a report by the Science and Technology Options Assessment division of the European Parliament.

Miles Brundage 

Expert opinions on the timing of future developments in artificial intelligence (AI) vary widely, with some expecting human-level AI in the next few decades and others thinking that it is much further off (Grace et al., 2017). Similarly, experts disagree on whether developments in AI are likely to be beneficial or harmful for human civilization, with the range of opinions including those who feel certain that it will be extremely beneficial, those who consider it likely to be extremely harmful (even risking human extinction), and many in between (AI Impacts, 2017). While the risks of AI development have recently received substantial attention (Bostrom, 2014; Amodei and Olah et al., 2016), there has been little systematic discussion of the precise ways in which AI might be beneficial in the long term.

Recent Developments in Cryptography and Possible Long-Run Consequences (2018)

Unpublished manuscript

Ben Garfinkel

Historically, progress in the field of cryptography has been enormously consequential. Over the past century, for instance, cryptographic discoveries have played a key role in a world war and made it possible to use the internet for business and private communication. In the interest of exploring the impact the field may have in the future, I consider a suite of more recent developments. My primary focus is on blockchain-based technologies (such as cryptocurrencies and smart contracts) and on techniques for computing on confidential data (such as homomorphic encryption and secure multiparty computation). I provide an introduction to these technologies that assumes no previous knowledge of cryptography. Then, I consider eight speculative predictions about the long-term consequences these emerging technologies could have. These predictions include the views that a growing number of information channels used to conduct surveillance may go dark, that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions ranging from banks to voting authorities may shrink, and that new transnational institutions known as decentralized autonomous organizations may emerge. Finally, I close by discussing some challenges that could limit the significance of emerging cryptographic technologies. On the basis of these challenges, it is premature to predict that any of them will approach the transformativeness of previous technologies. However, this remains a rapidly developing area well worth following.

To request the full version of the report, contact the author on benmgarfinkel [at] gmail.com.

Accounting for the Neglected Dimensions of AI Progress (2018)

Fernando Martínez-Plumed, Shahar Avin, Miles Brundage*, Allan Dafoe*, Seán Ó hÉigeartaigh, José Hernández-Orallo

This paper analyzes and reframes AI progress. In addition to the prevailing metrics of performance, it highlights the usually neglected costs paid in the development and deployment of a system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, development time, etc. These costs are paid throughout the life cycle of an AI system, fall differentially on different individuals, and vary in magnitude depending on the replicability and generality of the AI solution. The multidimensional performance and cost space can be collapsed to a single utility metric for a user with transitive and complete preferences. Even absent a single utility function, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. We explore a subset of these neglected dimensions using the two case studies of Alpha* and ALE. This broadened conception of progress in AI should lead to novel ways of measuring success in AI, and can help set milestones for future progress…

* – Centre for the Governance of AI

Policy Desiderata in the Development of Machine Superintelligence (2016)

Nick Bostrom, Allan Dafoe, Carrick Flynn

Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities. This paper seeks to initiate discussion of these by identifying a set of distinctive features of the transition to a machine intelligence era. From these distinctive features, we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy…

Policy writing



A Guide to Writing the NeurIPS Impact Statement (May 2020)

Carolyn Ashurst, Markus Anderljung, Carina Prunkl, Jan Leike, Yarin Gal, Toby Shevlane, Allan Dafoe

Over time, the exercise of assessing impact could enhance the ML community’s expertise in technology governance, and otherwise help build bridges to other researchers and policymakers. To help maximize the chances of success, the authors have compiled some suggestions and an (unofficial) guide to writing the NeurIPS Impact Statement.

Syllabus: Artificial Intelligence and China (January 2020)

Jeffrey Ding, Sophie-Charlotte Fischer, Brian Tse, Chris Byrd

In recent years, China’s ambitious development of artificial intelligence (AI) has attracted much attention in policymaking and academic circles. This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development. The materials presented range from blogs to books, with an emphasis on English translations of Mandarin source materials. The reading list is not exhaustive, and it will benefit from feedback and revisions.

Syllabus: Artificial Intelligence and International Security (July 2018)

Remco Zwetsloot

This syllabus covers material located at the intersection between artificial intelligence (AI) and international security. The syllabus can be used in structured self-study (or group-study) for those new to this space, or as a resource for instructors designing class-specific syllabi (which would probably have to be significantly shorter). It is designed to be useful to (a) people new to both AI and international relations (IR); (b) people coming from AI who are interested in an IR angle on the problems; (c) people coming from IR who are interested in working on AI. Depending on which of groups (a)-(c) you fall in, it may be feasible to skip or skim certain sections. For sections that you are particularly interested in, do consider diving into the sources cited in the readings—for most topics what I have assigned just skims the surface and is intended only as a starting point.

The Team

Allan Dafoe

Director, Centre for the Governance of AI

Nick Bostrom

Director, Future of Humanity Institute

Macrostrategy; strategic implications of AI

Jade Leung

Project Manager: Research & Partnerships; Researcher

Emerging & dual-use technology governance, role of private companies, firm-government relations, international cooperation

Markus Anderljung

Project Manager: Operations & Policy Engagement

Team growth; Policy engagement; Research support

Ben Garfinkel


International security; privacy; long-term forecasting

Toby Shevlane


Scientific regulation of dangerous technologies; publication norms; technological power

Jeffrey Ding


China’s AI strategy; China’s approach to strategic technologies

Research Affiliates and Associates

Sophie-Charlotte Fischer

ETH Zurich-based Research Affiliate

International security and arms control; IT and politics; foreign policy

Matthijs M. Maas

University of Copenhagen-based Research Affiliate

AI governance; technology management regimes; nuclear deterrence stability; securitization theory

Remco Zwetsloot

Yale University-based Research Affiliate

International security; arms racing and arms control; bargaining theory

Miles Brundage

Research Affiliate

AI progress forecasting; science and technology policy

Carrick Flynn

Research Affiliate

Legal issues with AI; AI governance challenges; policy

Helen Toner

Research Associate

AI policy and strategy; progress in machine learning & AI; effective philanthropy

Baobao Zhang

Research Affiliate

Public opinion research; American politics; public policy

Cullen O’Keefe

Research Affiliate

Implications of corporate, US, and international law for AI governance; benevolent AI governance structures

Brian Tse

Policy Affiliate

China-U.S. relations; global governance of existential risk; China’s AI safety development

Waqar Zaidi

Research Affiliate

History of Science and Technology; international control of powerful technologies

Aaron Tucker

Research Affiliate

Technical AI safety, the AI production function, AI forecasting

Emefa Agawu

Research Communication Consultant

Public affairs, cybersecurity, US policy, political perceptions of global catastrophic risks

Carina Prunkl

Research Affiliate

Senior Research Scholar, Research Scholars Programme

Ethics of AI; Philosophy; Quantum Technologies

Max Daniel

Research Affiliate

Senior Research Scholar, Research Scholars Programme

Macrostrategy; firm incentives; race dynamics

Hiski Haukkala

Policy Expert

Theory, policy and practice of international politics; Mechanisms to increase stability in the world through the use of AI; AI and the EU

Andrew Trask

Research Affiliate

Structured Transparency, Technological Solutions to Governance Problems, Machine Learning

Anton Korinek

Research Affiliate

International Finance, Macroeconomics, Artificial Intelligence


Ulrike Franke

Policy Affiliate

Future of War, EU Policy, National Security

External DPhil Supervisors

Duncan Snidal
Professor of International Relations, University of Oxford

Karolina Milewicz
Associate Professor of International Relations, University of Oxford

Former Research Affiliates

Max Negele, Sören Mindermann, Aaron Tucker, Tamay Besiroglu, Nathan Calvin, Paul de Font-Reaulx, Genevieve Fried, Roxanne Heston, Katelynn Kyker, Clare Lyle, William Rathje, Tom Sittler.

GovAI alumni now work at organisations including Microsoft, OpenAI, the Center for Security and Emerging Technology, and AI Now.

Policy & Public Engagement

Researchers at the Centre for the Governance of AI are actively involved in the policy and public dialogues on the impact of advanced, transformative AI. We seek to offer authoritative, actionable and accessible insight to a range of audiences in policy, academia, and the public. The following is a selection of speaking engagements, podcasts, media appearances and policy writings from our team.

Recent Policy & Public Engagement

Consultation on the European Commission’s White Paper on Artificial Intelligence: a European approach to excellence and trust | June 2020

Stefan Torges, a 2020 GovAI Fellow, wrote a response to the European Commission’s White Paper on Artificial Intelligence, focusing his analysis and recommendations on the proposed “ecosystem of trust” and associated international efforts.

Read the full consultation here.

A Guide to Writing the NeurIPS Impact Statement | 13th May 2020

Over time, the exercise of assessing impact could enhance the ML community’s expertise in technology governance, and otherwise help build bridges to other researchers and policymakers. To help maximize the chances of success, Carolyn Ashurst, Markus Anderljung, Carina Prunkl, Jan Leike, Yarin Gal, Toby Shevlane, and Allan Dafoe have compiled some suggestions and an (unofficial) guide for how to write the NeurIPS Impact Statement.

Read the full post here.

Contact tracing apps for Covid-19 | (April 28, 2020. The Washington Post)

Allan Dafoe, Benjamin Garfinkel, and Toby Shevlane discuss contact tracing apps for Covid-19 in an article for The Washington Post. In the article, they look at how these new technologies can help preserve privacy.

Read the full article here.

“AI Governance in 2019: A Year in Review” | (April 20, 2020)

Allan Dafoe and Markus Anderljung’s observations, along with those of other experts in the field, were collated to form the “AI Governance in 2019: A Year in Review” report. The report was presented at a public event attended by numerous mainstream media outlets in China.

Read the full report here.

Brian Tse: 'Towards A Global Community Of Shared Future in AGI' | (January 5, 2020. Beneficial AGI conference: Puerto Rico)

Allan Dafoe, Jade Leung, and Brian Tse from our team participated in the Beneficial AGI conference in Puerto Rico. Allan and Brian presented.

Watch Brian’s presentation here.

Allan Dafoe: 'Frontier Exploration and Innovation Practices of Artificial Intelligence Security' | (August 30, 2019. World AI Conference: Shanghai)

Allan Dafoe spoke at the World AI Conference in Shanghai on AI governance and security. He delivered a keynote speech on the theme of the frontiers of artificial intelligence security.

Read more about the conference here.

Jeffrey Ding: 'What People Get Wrong About China and Artificial Intelligence' | (July 9, 2019. Fortune)

In an interview with Fortune, Ding explained that much of what is written about China’s multi-billion-dollar push into A.I. seems like it’s written in a “vacuum,” with little context or comparison between China’s A.I. abilities and those of other countries.

Read the article here.

Jade Leung: 'What happens when AI fails – Concrete solutions for a better AI' | (Hello Tomorrow Global Summit: Paris)

Jade Leung participated in the panel “What happens when AI fails – Concrete solutions for a better AI” at the Hello Tomorrow Global Summit in Paris.

Read more about the summit here.

Carina Prunkl: 'Ethics of AI course' | (Michaelmas term. The University of Oxford)

Carina Prunkl organised an Ethics of AI course during Michaelmas term in Oxford, UK. The course was delivered for the Chevening Gurukul Fellowship for Leadership and Excellence at the Saïd Business School, University of Oxford.

Read more about the course here.


Sophie-Charlotte Fischer: 'The Emergence of Artificial Intelligence' | (WEF, ETH and Microsoft: Davos)

Sophie-Charlotte Fischer spoke at a World Economic Forum side event in Davos hosted by ETH and Microsoft on “The Emergence of Artificial Intelligence.”

Sophie-Charlotte Fischer: 'Robotics, Artificial Intelligence & Humanity' | (May 16, 2019. Robotics, AI & Humanity Conference: Vatican)

Sophie-Charlotte Fischer spoke at a two-day conference on the impact of robotics and artificial intelligence on humanity, held in the Vatican and organized by the Pontifical Academy of Social Sciences and the Pontifical Academy of Sciences.

Jade Leung: 'How Can We See the Impact of AI Strategy Research?' | (June 23, 2019. EA Global: San Francisco)

Watch the video here.

Brian Tse: 'Improving Coordination with China to Reduce AI Risk' | (June 23, 2019. EA Global: San Francisco)

Watch the video here.

Markus Anderljung: 'Governing Transformative Artificial Intelligence' | (EAGx Nordics)

Watch the video here.

An Interview with Ben Garfinkel, Governance of AI Program Researcher

Ben Garfinkel was interviewed in The Politic, the Yale College Journal of Politics, a monthly Yale University student publication.

Read the interview here.

Markus Anderljung: 'AI Safety - Human values aligned with AI' | (September 21, 2019. AICast podcast)

Markus talks about human values and how we should start planning and implementing AI safety measures as humanity begins building artificial general intelligence (AGI). The goal of long-term artificial intelligence safety is to ensure that advanced AI systems are aligned with human values: that they reliably do things that people want them to do.

Listen to the podcast here.

Jeffrey Ding: 'Artificial Intelligence in China' | (May 21, 2019. Ark Investment Podcast)

In this podcast, Jeffrey talks about his work in the AI field and his focus on translating developments in China for a more western audience. He explains why he started his newsletter: the AI community in China is mostly abreast of advancements from the US and UK, while the same cannot be said in the opposite direction. This language asymmetry, as Jeffrey calls it, means there is a gap in the knowledge base in the Americas and Europe around the burgeoning Chinese AI scene.

Listen to the podcast here.

Cullen O'Keefe: 'The Windfall Clause: Sharing the benefits of advanced AI' | (June 23, 2019. EA Global: San Francisco)

The potential upsides of advanced AI are enormous, but there’s no guarantee they’ll be distributed optimally. In this talk, Cullen O’Keefe discusses one way we could work toward equitable distribution of AI’s benefits, the Windfall Clause, a commitment by AI firms to share a significant portion of their future profits, as well as the legal validity of such a policy and some of the challenges to implementing it.

Watch the video here.

Jade Leung, Sophie-Charlotte Fischer, & Allan Dafoe: "Export Controls in the Age of AI" | (28 August 2019: War on the Rocks)

What does technological leadership look like in an era of artificial intelligence? The United States, like other countries, is in the midst of grappling with this question against a backdrop of the rise of China and the growing realization that “business as usual” will no longer suffice for America to maintain its technological advantage. Washington has begun to take some important steps to translate this realization into action. In February, President Donald Trump launched the American AI Initiative in recognition that “American leadership in AI is of paramount importance to maintaining the economic and national security of the United States.” In a less constructive fashion, two months later Sen. Josh Hawley (R-Mo.) introduced the China Technology Transfer Control Act of 2019 that would “make it harder for American companies to export major emerging technologies to China.” Clearly, AI is on the agenda.

Unfortunately, Washington appears to be defaulting to traditional, 20th-century policy tools to address a 21st-century problem…

Read the full text here.

Jeffrey Ding: "AI Alignment Podcast: China’s AI Superpower Dream" | (16 August 2019: Future of Life Institute AI Alignment Podcast)

In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. It officially marked the development of the AI sector as a national priority, included in President Xi Jinping’s grand vision for China (see FLI’s AI Policy – China page). In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI, where he researches China’s AI development and strategy, as well as China’s approach to strategic technologies more generally.
Find the podcast here.

Jade Leung: "AI Alignment Podcast: On the Governance of AI" | (22 July 2019: Future of Life Institute AI Alignment Podcast)

In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.
Find the podcast here.

Jeffrey Ding & Helen Toner: "US Senate Hearing on Technology, Trade, and Military-Civil Fusion: China's Pursuit of Artificial Intelligence, New Materials, and New Energy" | (7 June 2019: U.S.-China Economic and Security Review Commission)

The U.S.-China Economic and Security Review Commission held a hearing on Technology, Trade, and Military-Civil Fusion: China’s Pursuit of Artificial Intelligence, New Materials, and New Energy. Jeffrey Ding, researcher at GovAI, and Helen Toner, Research Associate with GovAI, were among those providing evidence.
Listen to the hearing here. Read Jeffrey Ding’s testimony here. Read Helen Toner’s testimony here.

Peter Cihon with others: "Comment on National Institute of Standards and Technology – RFI: Developing a Federal AI Standards Engagement Plan" | (6 June 2019)

GovAI submitted written comments (work led by Peter Cihon) as well as a second round of targeted edits to the National Institute of Standards and Technology (NIST) to support its ongoing work to develop a federal plan for technical artificial intelligence (AI) standards. We collaborated with several organizations on this effort, including the Center for Long-Term Cybersecurity, the Future of Life Institute, and certain researchers at the Leverhulme Centre for the Future of Intelligence. On August 9, NIST published their final report, “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools”, taking numerous suggestions from CLTC into consideration. The plan recommends the federal government “commit to deeper, consistent, long-term engagement in AI standards development activities to help the United States to speed the pace of reliable, robust, and trustworthy AI technology development.”

Remco Zwetsloot & Allan Dafoe: "Thinking About Risks From AI: Accidents, Misuse and Structure" | (11 February 2019: Lawfare)

Read more here.

Baobao Zhang: "It’s 2043. We Need a New American Dream for the A.I. Revolution." | (12 August 2019: The New York Times)

A fictional op-ed written as if in 2043. Read more here.

Allan Dafoe: 'Private Sector Leadership in AI Governance' | (December 11th, 2018: The Digital Society Conference)

December 10-11, 2018 the Digital Society Conference 2018 – Empowering Ecosystems took place at ESMT Berlin. The two-day conference included panels, presentations, and workshops from many different perspectives such as science, industry, and politics. This year’s conference covered new developments in security and privacy, digital politics, and industrial strategies. The reality of the rise of artificial intelligence (AI) was a particular focus, including its societal implications and how to understand and harness the battle for AI dominance. More about the conference here.

Allan Dafoe & Jade Leung: 'What does AI mean for the future of humanity?' | (December 10th, 2018: Futuremakers podcast)

Philosopher Peter Millican discusses the future of society and AI, and some of the difficult ethical choices that lie ahead. As we hand some of these choices over to machines, are we confident they will reach conclusions that we can accept? Can, or should, a human always be in control of an artificial intelligence? Can we train automated systems to avoid catastrophic failures that humans might avoid instinctively? To explore these questions, Millican interviews Allan and Jade along with Mike Osborne, co-director of the Oxford Martin programme on Technology and Employment.
The podcast is available here.

Allan Dafoe: 'The AI Revolution and International Politics' | (November 14th, 2018. Oxford Artificial Intelligence Society)

Through research and policy engagement, the Center for the Governance of AI strives to steer the development of artificial intelligence for the common good. At this Oxford AI Society event, Allan discussed the Center’s lines of work, which examines the political, economic, military, governance, and ethical dimensions of transformative AI.

Jade Leung: 'Why Companies Should be Leading on AI Governance' | (October 27th, 2018. EA Global London)

Governance is usually a job that we associate with governments, states, and international organisations. This talk makes the case for why, in the case of AI, private companies are not only necessary for governance, but are best placed to lead in laying the foundations for a credible, scalable AI governance regime.

Benjamin Garfinkel: 'How Sure Are We About This AI Stuff?' | (October 27, 2018, EA Global London)

In this talk, Benjamin Garfinkel reviews what he understands to be the case for prioritizing AI issues, as well as identifying areas where further published analysis would be valuable to underwrite the prominence of this topic within Effective Altruism.

Allan Dafoe: 'Regulating Artificial Intelligence in the area of defence' | (October 10th, 2018. SEDE Public Hearing on Artificial intelligence and its future impact on security)

By invitation, Allan Dafoe spoke at a public hearing on ‘Artificial intelligence and its future impact on security’, organized by the Subcommittee on Security and Defence of the European Parliament. A recording of the talk is available here.

Carrick Flynn: 'AI Governance Landscape' | (June 10th, 2018. EA Global San Francisco)

The development of artificial intelligence is well-poised to massively change the world. It’s possible that AI could make life better for all of us, but many experts think there’s a non-negligible chance that the overall impact of AI could be extremely bad. In this talk from Effective Altruism Global 2018: San Francisco, Carrick Flynn lays out what we know from history about controlling powerful technologies, and what the tiny field of AI governance is doing to help AI go well. A recording of Carrick’s talk is available here; a transcript is available here.

Jade Leung: 'Analyzing AI Actors' | (June 10th, 2018. EA Global San Francisco)

Who would you rather have access to human-level artificial intelligence: the US government, Google, the Chinese government or Baidu? The biggest governments and tech firms are the most likely to develop advanced AI, so understanding their goals, abilities and constraints is a vital part of predicting AI’s trajectory. In this talk from EA Global 2018: San Francisco, Jade Leung explores how we can think about major players in AI, including an informative case study. A recording of Jade’s talk is available here. A transcript is available here.

Benjamin Garfinkel: 'The Future of Surveillance Doesn't Need to be Dystopian' | (June 9th, 2018, EA Global San Francisco)

This talk considers two worrisome narratives about technological progress and the future of surveillance. In the first narrative, progress threatens privacy by enabling ever-more-pervasive surveillance. For instance, it is becoming possible to automatically track and analyze the movements of individuals through facial recognition cameras. In the second narrative, progress threatens security by creating new risks that cannot be managed with present levels of surveillance. For instance, small groups developing cyber weapons or pathogens may be unusually difficult to detect. It is suggested that another, more optimistic narrative is also plausible. Technological progress, particularly in the domains of artificial intelligence and cryptography, may help to erase the trade-off between privacy and security.

Jeffrey Ding: Participant in BBC Machine Learning Fireside Chat | (June 6th 2018, London)

BBC Machine Learning Fireside Chats hosted a discussion between Jeffrey Ding and Charlotte Stix of the Leverhulme Centre for the Future of Intelligence, University of Cambridge. The conversation covered China’s national AI development plan, the state of US ML research, and the position of Europe and Britain in the global AI space.

Allan Dafoe: ‘Keynote on AI Governance’ | (June 1st 2018, Public Policy Forum)

This keynote address on the governance of AI was given at a Public Policy Forum seminar series attended by deputy ministers and senior officials of the Canadian government.

Benjamin Garfinkel: 'Recent Developments in Cryptography and Why They Matter' | (May 1st 2018, Oxford Internet Institute)

This talk surveys a range of emerging technologies in the field of cryptography, including blockchain-based technologies and secure multiparty computation, and then analyzes their potential long-term political significance. These predictions include the views that a growing number of information channels used to conduct surveillance may “go dark,” that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions ranging from banks to voting authorities may shrink, and that new transnational institutions known as “decentralized autonomous organizations” may emerge.

Miles Brundage: 'Offensive applications of AI' | (April 11th, 2018, CyberUK)

Miles Brundage presented the Malicious Use of AI report at a CyberUK 2018 panel.

Sophie-Charlotte Fischer: 'Artificial Intelligence: What implications for Foreign Policy?' | (April 11th, 2018, German Federal Foreign Office)

This panel, co-organized by the German Federal Foreign Office, the Stiftung Neue Verantwortung and the Mercator Foundation, discussed the findings of a January report by SNV, “Artificial Intelligence and Foreign Policy”. The report seeks to provide a foundation for planning a foreign policy strategy that responds effectively to the emerging power of AI in international affairs.

Allan Dafoe: Chair of panel ‘Artificial Intelligence and Global Security: Risks, Governance, and Alternative Futures’ | (April 6th 2018, Annual Conference of the Johnson Center for the Study of American Diplomacy, Yale University)

The panel addressed cybersecurity leadership and strategy from the perspective of the Department of Defense. The panelists were Dario Amodei, Research Scientist and Team Lead for Safety at OpenAI; Jason Matheny, Director of the Intelligence Advanced Research Projects Agency; and the Honorable Robert Work, former Acting and Deputy Secretary of Defense and now Senior Counselor for Defense at the Center for a New American Security. The keynote address at the conference was given by Eric Schmidt, and Henry Kissinger also gave a talk.

Matthijs Maas: 'Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of AI deployment' | (February 2nd, 2018, AAAI/ACM Conference on AI, Ethics and Society)

A paper presentation arguing that many AI applications involve networked (tightly coupled, opaque) systems operating in complex or competitive environments, which makes such systems prone to ‘normal accident’-type failures. While this suggests that large-scale, cascading errors in AI systems are very hard to prevent or stop, an examination of the operational features that lead technologies to exhibit such failures enables us to derive both tentative principles for precautionary policymaking and practical recommendations for the safer deployment of AI systems. Conference paper available here.

Allan Dafoe: 'Governing the AI Revolution: The Research Landscape' | (January 25th, 2018, CISAC, Stanford University)

Artificial intelligence (AI) is rapidly improving. The opportunities are tremendous, but so are the risks. Existing and soon-to-exist capabilities pose several plausible extreme governance challenges. These include massive labor displacement, extreme inequality, an oligopolistic global market structure, reinforced authoritarianism, shifts and volatility in national power, and strategic instability. Further, there is no apparent ceiling to AI capabilities, experts envision that superhuman capabilities in strategic domains will be achieved in the coming four decades, and radical surprise breakthroughs are possible. Such achievements would likely transform wealth, power, and world order, though global politics will in turn crucially shape how AI is developed and deployed. The consequences are plausibly of a magnitude and on a timescale to dwarf other global concerns, leaders of governments and firms are asking for policy guidance, and yet scholarly attention to the AI revolution remains negligible. Research is thus urgently needed on the AI governance problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.
Event information available here.

Allan Dafoe: ‘Strategic and Societal Implications of ML’ | (December 8th 2017, Neural Information Processing Systems Conference)

This paper was given at a workshop entitled ‘Machine Learning and Computer Security’.

Allan Dafoe: Evidence Panelist for All Party Parliamentary Group on Artificial Intelligence Evidence Meeting | (October 30th 2017, Evidence Meeting 7: International Perspective and Exemplars)

This panel discussion focused on which countries and communities are more prepared for AI, and how they could be used as case studies. Topics included best practice, national versus multilateral or international approaches, and probable timelines.


Survey Research Contractor

Deadline: 23:59 BST on August 10th 2020. We will consider applications on a rolling basis.

We are seeking a Survey Research Contractor interested in the use of survey methodology to answer crucial questions in AI policy and governance. The Survey Research Contractor, who is likely to have a PhD in a related area or otherwise be working towards one, will work with other researchers at the Centre for the Governance of AI, Future of Humanity Institute, University of Oxford, and the Department of Government, Cornell University, to conduct survey research. The contract is anticipated to last 3 months in the first instance, at a rate of £28/hour, but the period may be extended in due course. Full details and application form here.

Governance of AI Fellowship

Applications are closed. However, we are likely to open up applications in the Fall for Summer or Spring 2021.

The Centre for the Governance of AI at the University of Oxford is seeking 2-5 exceptional researchers to join our interdisciplinary team during the Governance of AI Fellowship for a limited period of three months. Participants in the Fellowship will receive a generous stipend and have the opportunity to participate in cutting-edge research in a fast-growing field, while gaining expertise in parts of our Research Agenda.

General Applications

The Centre for the Governance of AI, at the Future of Humanity Institute, is always looking for exceptional researchers to join our team, be it as collaborators, research assistants, or full-time researchers.

In all candidates, we seek high general aptitude, self-direction, openness to feedback, and a firm belief in our mission.

Across each of these roles, we are especially interested in people with varying degrees of skill or expertise in the following areas:

  1. International relations, especially in the areas of international cooperation, international law, international political economy, global public goods, constitutional and institutional design, diplomatic coordination and cooperation, arms race dynamics, and the history, politics, governance, and grand strategy of transformative technologies.
  2. Chinese politics and machine learning in China.
  3. Game theory and mathematical modelling.
  4. Survey design and statistical analysis.
  5. Large intergovernmental scientific research organizations and projects (such as CERN, ISS, and ITER).
  6. Technology and other types of forecasting.
  7. Law and/or policy.

Our goal is to identify exceptional talent. We are interested in hiring for full-time work at Oxford. We are also interested in getting to know talented individuals who might only be available part-time or for remote work.

As we work closely with leading AI labs and the effective altruism community, familiarity and involvement with them is a plus.

If you are interested, send an email to markus.anderljung@philosophy.ox.ac.uk, putting “general application” in the subject line, with your CV and a brief statement of interest outlining (i) why you want to work with us, (ii) what you would like to contribute and (iii) how this role fits into your career plans. We consider these applications in batches. You can expect a response within about a month. 

All qualified applicants will be considered for employment without regard to race, color, religion, sex, age or national origin.

Governance of AI Seminar Series

Past Events

GovAI Webinar #2: Carles Boix and Sir Tim Besley on Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics

Monday June 8th, 1700-1815 BST (0900-1015 PT, 1200-1315 ET)

Register here

The twentieth century witnessed the triumph of democratic capitalism in the industrialized West, with widespread popular support for both free markets and representative elections. Today, that political consensus appears to be breaking down, disrupted by polarization and income inequality, widespread dissatisfaction with democratic institutions, and insurgent populism. Tracing the history of democratic capitalism over the past two centuries, Carles Boix’s new book explains how we got here, and where we could be headed. Sir Tim Besley will share his thoughts on the topic and discuss the content of Carles’ book, Democratic Capitalism at the Crossroads.

Boix looks at three defining stages of capitalism, each originating in a distinct time and place with its unique political challenges, structure of production and employment, and relationship with democracy. He begins in nineteenth-century Manchester, where factory owners employed unskilled laborers at low wages, generating rampant inequality and a restrictive electoral franchise. He then moves to Detroit in the early 1900s, where the invention of the modern assembly line shifted labor demand to skilled blue-collar workers. Boix shows how growing wages, declining inequality, and an expanding middle class enabled democratic capitalism to flourish. Today, however, the information revolution that began in Silicon Valley in the 1970s is benefitting the highly educated at the expense of the traditional working class, jobs are going offshore, and inequality has risen sharply, making many wonder whether democracy and capitalism are still compatible.

Carles Boix is the Robert Garrett Professor of Politics and Public Affairs in the Department of Politics and the Woodrow Wilson School of Public and International Affairs at Princeton University. In 2015, he published Political Order and Inequality, followed by Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics, the subject of our webinar, in 2019.

Sir Tim Besley is School Professor of Economics and Political Science and W. Arthur Lewis Professor of Development Economics in the Department of Economics at LSE. He is also a member of the National Infrastructure Commission and was President of the Econometric Society in 2018. He is a Fellow of the Econometric Society and the British Academy, and a Foreign Honorary Member of the American Economic Association and the American Academy of Arts and Sciences. In 2016 he published Contemporary Issues in Development Economics.

GovAI Webinar #1: COVID-19 and the Economics of AI

This event has already taken place, but you can watch the recording and read the transcript here.

Wednesday May 20th, 1700-1815 BST (0900-1015 PT, 1200-1315 ET)

Join the inaugural Governance and Economics of AI webinar on Wednesday, May 20th, 1700-1815 BST (0900-1015 PT, 1200-1315 ET), featuring Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz in a discussion about COVID-19 and the economics of AI. 

The panel will focus on questions such as: Will COVID-19 cause automation to increase? A decline in labour share of income? A rise of superstar companies? What does COVID-19 teach us about policy responses discussed in the economics of AI, such as universal basic income? Will we see a stable increase in AI-enabled surveillance technologies?

The event will start with a brief introduction to the seminar series by Anton Korinek and Allan Dafoe, followed by a panel discussion and a Q&A. To join the event, please register using the link above.

The event is hosted by the Centre for the Governance of AI, at the Future of Humanity Institute, based at the University of Oxford. Our focus is on the political challenges arising from transformative AI. We seek to guide the development of AI for the common good by conducting research on important and neglected issues of AI governance, and advising decision makers on this research through policy engagement.

Daron Acemoğlu is an economist and the Elizabeth and James Killian Professor of Economics and Institute Professor at the Massachusetts Institute of Technology (MIT), where he has taught since 1993. He was awarded the John Bates Clark Medal in 2005 and co-authored Why Nations Fail: The Origins of Power, Prosperity, and Poverty with James A. Robinson in 2012.

Diane Coyle, CBE, OBE, FAcSS is an economist, former advisor to the UK Treasury, and the Bennett Professor of Public Policy at the University of Cambridge, where she has co-directed the Bennett Institute since 2018. She was vice-chairman of the BBC Trust, the governing body of the British Broadcasting Corporation, and was a member of the UK Competition Commission from 2001 until 2019. In 2020, she published Markets, State, and People: Economics for Public Policy.

Joseph Stiglitz is an economist, public policy analyst, and a University Professor at Columbia University. He is a recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979). He is a former senior vice president and chief economist of the World Bank and is a former member and chairman of the US President’s Council of Economic Advisers. His most recent book, Measuring What Counts: The Global Movement for Well-Being, came out in 2019.

A perspective on fairness in machine learning from DeepMind
Silvia Chiappa, Research Scientist and William Isaac, Research Scientist, DeepMind

Thu, 17 October 2019, 16:00 – 17:30 BST

As the world moves towards applying machine learning techniques in high-stakes societal contexts – from the criminal justice system to education to healthcare – ensuring the fairness of these systems becomes an ever more important and urgent issue. In this talk, DeepMind Research Scientists Silvia and William will explain how Causal Bayesian Networks (CBNs) can be used as a tool for reasoning about and addressing fairness issues.

In the first part of the talk we will show that CBNs can provide us with a simple and intuitive visual tool for describing different possible unfairness scenarios underlying a dataset. We will use this viewpoint to revisit the recent debate surrounding the COMPAS pretrial risk assessment tool and, more generally, to point out that evaluating the fairness of a model requires careful consideration of the patterns of unfairness underlying the training data.

In the second part of the talk we will explain how CBNs can provide us with a powerful quantitative tool to measure unfairness in a dataset, and to help researchers in the development of techniques to address complex fairness issues.
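The core idea – that a protected attribute can influence a decision along several causal paths, only some of which may be considered unfair – can be illustrated with a minimal sketch. The network below is hypothetical (it is not from the talk or the papers): a protected attribute A influences a decision D both directly and via a mediator M, and we compare the total disparity with the disparity that remains once the direct A → D path is cut.

```python
import random

random.seed(0)

# Toy causal Bayesian network (hypothetical, for illustration only):
#   A (protected attribute) -> M (mediator, e.g. chosen department)
#   A -> D and M -> D (decision)
# The A -> D edge is a direct, typically "unfair" path; whether the
# indirect path A -> M -> D is fair depends on context, which is the
# kind of distinction the CBN viewpoint makes explicit.

def sample_m(a):
    # Mediator depends on the protected attribute.
    return 1 if random.random() < (0.7 if a == 1 else 0.3) else 0

def sample_d(a, m):
    # Decision depends on both A (direct path) and M (indirect path).
    return 1 if random.random() < 0.2 + 0.3 * m + 0.2 * a else 0

def acceptance_rate(a, n=100_000, block_direct=False):
    """Estimate P(D=1 | A=a) by forward sampling; with block_direct=True,
    the direct A -> D influence is cut by feeding a baseline A=0 into D."""
    hits = 0
    for _ in range(n):
        m = sample_m(a)
        hits += sample_d(0 if block_direct else a, m)
    return hits / n

total_gap = acceptance_rate(1) - acceptance_rate(0)                       # both paths
indirect_gap = acceptance_rate(1, block_direct=True) - acceptance_rate(0)  # via M only
print(f"total disparity:      {total_gap:.3f}")
print(f"disparity via M only: {indirect_gap:.3f}")
```

In this toy model the total disparity (≈0.32) exceeds the path-specific disparity through the mediator alone (≈0.12), showing how a CBN decomposes an observed gap into contributions from individual causal paths.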

This talk is based on two recent papers: A Causal Bayesian Networks Viewpoint on Fairness and Path-Specific Counterfactual Fairness

This event is co-hosted by the Centre for the Governance of Artificial Intelligence (GovAI), Future of Humanity Institute and the Rhodes Artificial Intelligence Lab.

About the speakers

Silvia Chiappa is a Research Scientist in Machine Learning at DeepMind. She received a Diploma di Laurea in Mathematics from University of Bologna and a PhD in Machine Learning from École Polytechnique Fédérale de Lausanne. Before joining DeepMind, Silvia worked in the Empirical Inference Department at the Max-Planck Institute for Intelligent Systems (Prof. Dr. Bernhard Schölkopf), in the Machine Intelligence and Perception Group at Microsoft Research Cambridge (Prof. Christopher Bishop), and at the Statistical Laboratory, University of Cambridge (Prof. Philip Dawid). Her research interests are based around Bayesian & causal reasoning, graphical models, variational inference, time-series models, and ML fairness and bias.

William Isaac is a Research Scientist with DeepMind’s Ethics and Society Team. Prior to DeepMind, William served as an Open Society Foundations Fellow and Research Advisor for the Human Rights Data Analysis Group focusing on algorithmic bias and fairness. William’s prior research centering on deployments of automated decision systems in the US criminal justice system has been featured in publications such as Science, the New York Times, and the Wall Street Journal. William received his Doctorate in Political Science from Michigan State University and a Master’s in Public Policy from George Mason University.

Tony Hoare Room, Department of Computer Science, Robert Hooke Building, Parks Road, Oxford

Sarah Kreps – All the News that’s Fit to Fabricate: A Study of AI-Generated Text

Thursday, September 26, 2019, 15:00 – 16:00

The use of misinformation online has become a constant; only the way actors create and distribute that information is changing. Until now, their ability to interfere has been limited by resources and bandwidth. New technologies such as natural language processing models can help overcome those limitations by synthetically generating text in ways that mimic the style and substance of news stories. In this talk, we present the findings from three experiments intended to study a particular NLP model called GPT-2, developed by the research group OpenAI. We first analyze three differently sized models to examine whether respondents can distinguish between the synthetic and original text. We then automate the production of outputs in the second experiment to plot the full credibility distribution of each model size. Lastly, we conducted an experiment to understand the prospects for disinformation in an age of political polarization, studying whether credulity intersects with individuals’ partisan priors and the partisan angle of the synthetically generated news source. The findings have important implications for understanding the role of artificial intelligence in the generation of misinformation and the potential for foreign election interference.

Sarah Kreps is a Professor of Government and Adjunct Professor of Law at Cornell University. From 2017 to 2018, she was an Adjunct Scholar at the Modern War Institute (West Point). She is also a Faculty Fellow in the Milstein Program in Technology and Humanity at the Cornell Tech Campus in New York City.

When Speed Kills: Autonomous Weapon Systems, Deterrence, and Stability
Michael C. Horowitz, Professor of political science at the University of Pennsylvania

Wed, 5 June 2019, 17:30 – 19:00 BST

Autonomy on the battlefield represents one possible usage of narrow AI by militaries around the world. Research and development on autonomous weapon systems (AWS) by major powers, middle powers, and non-state actors makes exploring the consequences for the security environment a crucial task.

Michael will draw on classic research in security studies and examples from military history to assess how AWS could influence two outcome areas: the development and deployment of systems, including arms races, and the stability of deterrence, including strategic stability, the risk of crisis instability, and wartime escalation. He focuses on these questions through the lens of two characteristics of AWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices.

Seminar Room A, Manor Road Building

Securing a World of Physically Capable Computers
Bruce Schneier, Computer security and cryptography expert

Mon, 17 June 2019, 17:30 – 19:00 BST

Computer security is no longer about data; it’s about life and property. This change makes an enormous difference and will inevitably disrupt technology industries. First, data authentication and integrity will become more important than confidentiality. Second, our largely regulation-free Internet will become a thing of the past. Soon we will no longer have a choice between government regulation and no government regulation; our choice will be between smart government regulation and stupid government regulation.

Given this future, Bruce Schneier makes a case for why it is vital that we look back at what we’ve learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future. Bruce will also discuss how AI could be used to benefit cybersecurity, and how government regulation in the cybersecurity realm could suggest ways forward for government regulation for AI.

About the speakers

Bruce Schneier is an American cryptographer, computer security professional, privacy specialist and writer. Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School, and a program fellow at the New America Foundation’s Open Technology Institute.

Gillian Hadfield is the inaugural Schwartz Reisman Chair in Technology and Society at the University of Toronto as well as Professor of Law and Professor of Strategic Management. She is also Director of the Schwartz Reisman Institute for Technology and Society. Gillian will be speaking about Reflections on Governance Solutions.

Lecture Theatre B, Wolfson Building, Department of Computer Science

The Character & Consequences of Today’s Technology Tsunami
Richard Danzig, former Secretary of the US Navy, Director at the Center for a New American Security

Tue, 14 May 2019, 17:30 – 19:00 BST

It is often observed that we live amidst a flood of scientific discoveries and technological inventions. The timing, and in important respects, even the direction, of future developments cannot confidently be predicted. But this lecture draws on examples from many disparate technologies to identify important characteristics of technological change in our era; it outlines their implications for international security and our domestic well-being; and it describes ways in which recent failings should prompt new policies as increasingly powerful technologies unfold.

About the speakers

Richard Danzig is an American politician and lawyer who served as the 71st Secretary of the Navy under President Bill Clinton. He served as an advisor of President Barack Obama during his presidential campaign and was later the Chairman of the national security think-tank, the Center for a New American Security.

Todd H. Hall is Associate Professor in the Department of Politics and International Relations and Tutorial Fellow in Politics, Saint Anne’s College, at the University of Oxford.

Janina Dill is the John G. Winant Associate Professor of U.S. Foreign Policy at the Department of Politics and International Relations (DPIR) of the University of Oxford and a Professorial Fellow at Nuffield College and Co-Director of the Oxford Institute for Ethics, Law, and Armed Conflict (ELAC).

Rhodes House, South Parks Road

Alenka Turnsek, Rob McArgow, Neville Howlett, Rayna Taback, and Dave Murray on the Political Economy of AI

Friday, February 9, 2018, 12:00 – 3:00pm

An event with tax experts from PwC discussing the following:

  • Value chains: Key value drivers/generators and their relative contribution to the traditional economy compared with the digital and AI economies are likely to be different.  What are the distinguishing elements between the illustrative value chains of these three different types of economy? What are the relative contributions of the constituent elements of the respective value chains?
  • Operating models: Defining different digital operating models – and how best to categorise them e.g. identify the hallmarks of the digital operating models. How, if at all, does AI disrupt the digital operating models?
  • Value of data: For tax purposes, raw data is thought to have nominal value, if any, but from the business perspective data is fuel for AI. How can this gap in the perceived value of data be explained? We would like to discuss whether the capital invested in processing technology (algorithms) and skilled labour is the bridge between the two positions, or whether there are other elements that need to be taken into account.
  • Role of people in profit allocation of MNCs: During the 2015 overhaul of the international taxation framework, allocation of profits to intangibles (including technology), risks, and other assets was skewed towards the time and skill of the personnel developing, protecting, and exploiting them rather than merely the capital being put at risk for developing and deploying them. Although the AI economy also consists of people, intangibles (incl. data and technology), and capital, it seems to be more biased towards the latter two than towards people, and may replace people at some stage. Is that correct, or will the role of people in the AI economy remain as prominent as it is today?
  • Impact on international trade: Could the proposed new short term taxation measures (taxation of revenue, withholding taxes, equalisation levy etc) help or hinder the digital economy from the source (market) or residence country position? Could these measures raise international trade barriers and what would be the consequences?

James D. Morrow: Mutual Restraint in War (Looking Towards AI)

Wednesday, February 28, 2018, 1:30 – 3:00pm

Professor Morrow’s research addresses theories of international politics, both the logical development and empirical testing of such theories. He is best known for pioneering the application of noncooperative game theory, drawn from economics, to international politics. His published work covers crisis bargaining, the causes of war, military alliances, arms races, power transition theory, links between international trade and conflict, the role of international institutions, and domestic politics and foreign policy.

Professor Morrow’s current research addresses the effects of norms on international politics. The latter project examines the laws of war in detail as an example of such norms.

Erik Gartzke: War and Peace in the Virtual World: How Drones and Cyberspace will (re)shape the nature of conflict

Tuesday, February 27, 2018, 4:30 – 6:00pm

New ways of warfare challenge what we think we know about the nature of conflict. At the same time, these new modes of interaction are subject to core logics of contestation that are as stable as the human penchant to compete. When and how virtual warfare can supplant terrestrial conflict depends on its ability to achieve the objectives of traditional uses of force. Similarly, automating battle will shape the politics of combat to the degree that it both fulfills the political purpose of force and supplants or mitigates its shortcomings. The presentation builds on several published studies by the author, laying out a basic logic of virtual warfare and explaining where and how it does, and does not, meet the objectives of military violence. The presentation will then move on to automation of military force, again applying a logic of warfare to the process of supplanting humans with machines on the battlefield.

Professor Gartzke studies the impact of information on war, peace, and international institutions. Students of international politics are increasingly aware that what leaders and others know or believe is key to understanding fundamental international processes. Professor Gartzke’s research has appeared in the American Journal of Political Science, International Organization, International Studies Quarterly, the Journal of Conflict Resolution, the Journal of Politics, and elsewhere. He is currently working on two books, one on globalization and the other on the democratic peace, as well as dozens of articles.


The Windfall Clause: Distributing the Benefits of AI for the Common Good 

The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. Read more in the full report.
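The basic mechanics of such a commitment can be sketched as a progressive "windfall function": donation obligations are computed from marginal rates applied to profit brackets expressed as a share of gross world product, so that ordinary profits trigger no obligation while truly windfall-scale profits do. All brackets and rates below are hypothetical placeholders for illustration, not the figures proposed in the report.

```python
# Hypothetical progressive windfall schedule (illustrative only).
GWP = 100e12  # rough gross world product in USD, used purely for scale

# (bracket upper bound as a fraction of GWP, marginal donation rate)
BRACKETS = [
    (0.001, 0.00),          # below 0.1% of GWP: no obligation
    (0.01, 0.01),           # 0.1% to 1% of GWP: 1% of this slice
    (0.1, 0.20),            # 1% to 10% of GWP: 20% of this slice
    (float("inf"), 0.50),   # beyond 10% of GWP: 50% of this slice
]

def windfall_donation(profit):
    """Donation owed under the hypothetical marginal-rate schedule."""
    owed = 0.0
    lower = 0.0
    for upper_frac, rate in BRACKETS:
        upper = upper_frac * GWP
        if profit > lower:
            owed += (min(profit, upper) - lower) * rate
        lower = upper
    return owed

# A firm earning 0.05% of GWP owes nothing; one earning 1% of GWP owes
# 1% of the slice between 0.1% and 1% of GWP (roughly $9 billion here).
print(windfall_donation(0.0005 * GWP))
print(windfall_donation(0.01 * GWP))
```

The marginal structure mirrors an income tax schedule: obligations kick in only above a threshold and rise smoothly with profits, which is what makes the commitment cheap ex ante for firms that never earn windfall-scale profits.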

GovAI Annual Report 2019

2019 has been an eventful year for AI governance and the Centre. Here follows a brief summary of our activities during the year.

New Technical Report: Standards for AI Governance

Peter Cihon, Research Affiliate at the Centre for the Governance of AI, explains the relevance of international standards to the global governance of AI in a new technical report. A summary of the report is available here, and the full report is here.

Course on the Ethics of AI with OxAI

Carina Prunkl, a GovAI collaborator, will be running a course on the ethics of AI with the Oxford University student group OxAI, investigating the moral and social implications of Artificial Intelligence.

GovAI Annual Report 2018

The governance of AI is in my view the most important global issue of the coming decades, and it remains highly neglected. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth. This report provides a short summary of our work in 2018, with brief notes on our plans for 2019.
Allan Dafoe – Director, Centre for the Governance of AI