The AI Governance Research Group strives to help humanity capture the benefits and manage the risks of artificial intelligence. We conduct research into important and neglected issues within AI governance, drawing on Political Science, International Relations, Computer Science, Economics, Law, and Philosophy. Our research is used to advise decision-makers in private industry, civil society, and policy.
Introductions to AI Governance
Research
Our work is guided by our Research Agenda and includes examination of how technological trends, geopolitics, and governance structures will affect the development of advanced artificial intelligence. We summarize here how we view the Opportunity and Theory of Impact.
Policy Engagement & Events
We are active in international policy circles, regularly hosting discussions with leading academics in the field, and advising governments and industry leaders.
Recent policy engagement and events include writing in The Washington Post about COVID-19 contact tracing apps, presenting evidence to the US Congress on China’s AI strategy, and hosting a live webinar with Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz on the economics of AI and COVID-19. For all our policy writing, see Policy & Public Engagement.
The Team
Our core staff comprises an interdisciplinary team of policy experts and researchers. Our research affiliates work on a wide variety of domains, including China-US relations, cybersecurity, EU policy, and AI progress forecasting.
We are growing the field of AI governance. If you are interested in working in this field, please reach out. You could consider applying for our Governance of AI Fellowship or sending us a general application. We are interested in researchers and policy experts at all levels of experience, including pre-docs, postdocs, professors, and senior collaborators.
You can also find our publications on our Google Scholar page.
Below, you’ll find our select publications, journal and conference publications, technical reports, policy writing, resources, team, policy and public engagement, and opportunities.
Select Publications
Institutionalizing ethics in AI through broader impact requirements (2020)
Carina Prunkl, Carolyn Ashurst, Markus Anderljung, Helena Webb, Jan Leike & Allan Dafoe
Turning principles into practice is one of the most pressing challenges of artificial intelligence (AI) governance. In this Perspective, we reflect on a governance initiative by one of the world’s largest AI conferences. In 2020, the Conference on Neural Information Processing Systems (NeurIPS) introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research. Drawing insights from similar governance initiatives, including institutional review boards (IRBs) and impact requirements for funding applications, we investigate the risks, challenges and potential benefits of such an initiative. Among the challenges, we list a lack of recognized best practice and procedural transparency, researcher opportunity costs, institutional and social pressures, cognitive biases and the inherently difficult nature of the task. The potential benefits, on the other hand, include improved anticipation and identification of impacts, better communication with policy and governance experts, and a general strengthening of the norms around responsible research. To maximize the chance of success, we recommend measures to increase transparency, improve guidance, create incentives to engage earnestly with the process, and facilitate public deliberation on the requirement’s merits and future. Perhaps the most important contributions from this analysis are the insights we can gain regarding effective community-based governance and the role and responsibility of the AI research community more broadly.
Open Problems in Cooperative AI (2020)
Allan Dafoe, Edward Hughes, Yoram Bachrach, Tantum Collins, Kevin R. McKee, Joel Z. Leibo, Kate Larson, Thore Graepel
Problems of cooperation–in which agents seek ways to jointly improve their welfare–are ubiquitous and important. They can be found at scales ranging from our daily routines–such as driving on highways, scheduling meetings, and working collaboratively–to our global challenges–such as peace, commerce, and pandemic preparedness. Arguably, the success of the human species is rooted in our ability to cooperate. Since machines powered by artificial intelligence are playing an ever greater role in our lives, it will be important to equip them with the capabilities necessary to cooperate and to foster cooperation.
We see an opportunity for the field of artificial intelligence to explicitly focus effort on this class of problems, which we term Cooperative AI. The objective of this research would be to study the many aspects of the problems of cooperation and to innovate in AI to contribute to solving these problems. Central goals include building machine agents with the capabilities needed for cooperation, building tools to foster cooperation in populations of (machine and/or human) agents, and otherwise conducting AI research for insight relevant to problems of cooperation. This research integrates ongoing work on multi-agent systems, game theory and social choice, human-machine interaction and alignment, natural-language processing, and the construction of social tools and platforms. However, Cooperative AI is not the union of these existing areas, but rather an independent bet about the productivity of specific kinds of conversations that involve these and other areas. We see opportunity to more explicitly focus on the problem of cooperation, to construct unified theory and vocabulary, and to build bridges with adjacent communities working on cooperation, including in the natural, social, and behavioural sciences.
The Windfall Clause: Distributing the Benefits of AI for the Common Good (2020)
Cullen O’Keefe, Peter Cihon, Ben Garfinkel, Carrick Flynn, Jade Leung, and Allan Dafoe
The Windfall Clause is a policy proposal for an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits garnered from the development of transformative AI. This report reviews the motivations for such a policy, enumerates central considerations regarding the design of the Clause, weighs up its limitations against alternative solutions, and situates the Windfall Clause in the broader conversation on the distribution of gains from AI. We hope to spark productive debate on such crucial issues, and contribute to the growing, global discussion centered around channeling technology-driven economic growth towards robustly equitable, broadly beneficial outcomes.
Want to learn more? Read a summary of the report, the full report, a paper published at the AAAI / ACM AI Ethics & Society conference, listen to Cullen O’Keefe’s talk about the idea, or watch this short video.
Artificial Intelligence: American Attitudes and Trends (2019)
Baobao Zhang and Allan Dafoe
This report presents the results from an extensive look at the American public’s attitudes toward AI and AI governance, with questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of U.S. public opinion regarding AI.
See HTML version.
Featured in Bloomberg, Vox, Axios and the MIT Technology Review.
AI Governance: A Research Agenda (2018)
Allan Dafoe
This research agenda proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions.
Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI (2018)
Jeffrey Ding
This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.
‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.
The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018)
Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, and Dario Amodei
This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and nine other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. It distills findings from a 2017 workshop as well as additional research done by the authors. It explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks.
‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers including the New York Times, the BBC, Reuters, and The Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.
The Vulnerable World Hypothesis (2019)
Nick Bostrom
Global Policy, Volume 10, Issue 4, November 2019
Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.
When Will AI Exceed Human Performance? Evidence from AI Experts (2018)
Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, and Owain Evans
Journal of Artificial Intelligence Research 62 (2018) 729-754
Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI…
‘When will AI exceed human performance?’ was ranked #16 in Altmetric’s most discussed articles of 2017. The survey was covered by the BBC, Newsweek, the New Scientist, the MIT Technology Review, Business Insider, The Economist, and many other international news providers.
Strategic Implications of Openness in AI Development (2017)
Nick Bostrom
Global Policy, Volume 8, Issue 2, May 2017
This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation…
Journal and Conference Publications
The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? (2020)
Toby Shevlane and Allan Dafoe
Accepted to AAAI AIES conference 2020
There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility for adequate defensive measures, or the independent discovery of the knowledge outside of the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.
Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society (2020)
Carina Prunkl and Jess Whittlestone
Accepted to AAAI AIES conference 2020
One way of carving up the broad ‘AI ethics and society’ research space that has emerged in recent years is to distinguish between ‘near-term’ and ‘long-term’ research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed.
We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.
Should Artificial Intelligence Governance be Centralised? Six Design Lessons from History (2020)
Peter Cihon, Matthijs Maas, and Luke Kemp
Accepted to AAAI AIES conference 2020
Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, for now, fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.
Social and Governance Implications of Improved Data Efficiency (2020)
Aaron D. Tucker, Markus Anderljung, and Allan Dafoe
Accepted to AAAI AIES conference 2020
Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the social-economic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency (as more actors gain access to any level of capability), the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the “AI production function”, will be key to understanding the development of the AI industry and its societal impacts.
Public Policy and Superintelligent AI: A Vector Field Approach (2020)
Nick Bostrom, Allan Dafoe, and Carrick Flynn
Ethics of Artificial Intelligence, Oxford University Press, ed. S. Matthew Liao
We consider the speculative prospect of superintelligent AI and its normative implications for governance and global policy. Machine superintelligence would be a transformative development that would present a host of political challenges and opportunities. This paper identifies a set of distinctive features of this hypothetical policy context, from which we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy compared to in other policy contexts. Our contribution describes a desiderata “vector field” showing the directional change from a variety of possible normative baselines or policy positions. The focus on directional normative change should make our findings relevant to a wide range of actors, although the development of concrete policy options that meet these abstractly formulated desiderata will require further work.
How Does the Offense-Defense Balance Scale? (2019)
Ben Garfinkel and Allan Dafoe
Journal of Strategic Studies, 42:6, 736-763
We ask how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. To do so we offer a general formalization of the offense-defense balance in terms of contest success functions. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities.
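For readers unfamiliar with contest success functions, a minimal illustrative form is the standard ratio (Tullock) contest sketched below; the paper’s own formalization may differ in its details, and the symbols here are introduced purely for illustration.

```latex
% Illustrative ratio-form (Tullock) contest success function.
% I_O, I_D: offensive and defensive investments;
% m: how decisively relative investment determines the outcome;
% \lambda: any structural advantage enjoyed by the offense.
P(\text{offense succeeds}) \;=\; \frac{\lambda\, I_O^{\,m}}{\lambda\, I_O^{\,m} + I_D^{\,m}}
```

In this framing, asking how the offense-defense balance “scales” amounts to asking how the success probability behaves when both investments are multiplied by a common factor. A fixed ratio form like the one above is scale-invariant, so the paper’s models of ground invasions and cyberattacks add further structure to allow the balance to shift with scale; offensive-then-defensive scaling is the case where proportional growth in investments favours offense at low investment levels and defense at high ones.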
Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence (2018)
Miles Brundage
Published in Should we fear artificial intelligence?, a report by the Science and Technology Options Assessment division of the European Parliament
Expert opinions on the timing of future developments in artificial intelligence (AI) vary widely, with some expecting human-level AI in the next few decades and others thinking that it is much further off (Grace et al., 2017). Similarly, experts disagree on whether developments in AI are likely to be beneficial or harmful for human civilization, with the range of opinions including those who feel certain that it will be extremely beneficial, those who consider it likely to be extremely harmful (even risking human extinction), and many in between (AI Impacts, 2017). While the risks of AI development have recently received substantial attention (Bostrom, 2014; Amodei and Olah et al., 2016), there has been little systematic discussion of the precise ways in which AI might be beneficial in the long term.
Accounting for the Neglected Dimensions of AI Progress (2018)
Fernando Martínez-Plumed, Shahar Avin, Miles Brundage, Allan Dafoe, Sean Ó hÉigeartaigh, José Hernández-Orallo
This paper analyzes and reframes AI progress. In addition to the prevailing metrics of performance, it highlights the usually neglected costs paid in the development and deployment of a system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, development time, etc. These costs are paid throughout the life cycle of an AI system, fall differentially on different individuals, and vary in magnitude depending on the replicability and generality of the AI solution. The multidimensional performance and cost space can be collapsed to a single utility metric for a user with transitive and complete preferences. Even absent a single utility function, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. We explore a subset of these neglected dimensions using the two case studies of Alpha* and ALE. This broadened conception of progress in AI should lead to novel ways of measuring success in AI, and can help set milestones for future progress…
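As a toy illustration of the Pareto-surface framing (not code from the paper; the systems and metric values below are invented for illustration), the following sketch checks whether a new system expands the Pareto frontier once performance and costs are both recorded as dimensions:

```python
# Toy illustration of the Pareto-surface framing of AI progress.
# Each system is scored on dimensions where higher is better
# (task performance, and negated costs such as -compute or -data).
# The values below are made up for illustration.

def dominates(a, b):
    """True if system `a` is at least as good as `b` on every dimension
    and strictly better on at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def expands_pareto_surface(new_system, existing_systems):
    """A new system expands the Pareto surface iff no existing system dominates it."""
    return not any(dominates(old, new_system) for old in existing_systems)

# Dimensions: (performance, -compute cost, -data cost); higher is better.
existing = [
    (0.90, -100.0, -50.0),
    (0.85, -20.0, -10.0),
]
candidate = (0.88, -30.0, -5.0)  # slightly worse performance, much cheaper data

print(expands_pareto_surface(candidate, existing))  # True: it is undominated
```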
Technical Reports
AI Policy Levers: A Review of the U.S. Government’s Tools to Shape AI Research, Development, and Deployment (2021)
Sophie-Charlotte Fischer, Jade Leung, Markus Anderljung, Cullen O’Keefe, Stefan Torges, Saif M. Khan, Ben Garfinkel, and Allan Dafoe
The U.S. government (USG) has taken increasing interest in the national security implications of artificial intelligence (AI). In this report, we ask: Given its national security concerns, how might the USG attempt to influence AI research, development, and deployment—both within the U.S. and abroad? We provide an accessible overview of some of the USG’s policy levers within the current legal framework. For each lever, we describe its origin and legislative basis as well as its past and current uses; we then assess the plausibility of its future application to AI technologies. In descending order of likelihood of use for explicit national security purposes, we cover the following policy levers: federal R&D spending, foreign investment restrictions, export controls, visa vetting, extended visa pathways, secrecy orders, prepublication screening procedures, the Defense Production Act, antitrust enforcement, and the “born secret doctrine.”
International Control of Powerful Technology: Lessons from the Baruch Plan for Nuclear Weapons (2021)
Waqar Zaidi and Allan Dafoe
The invention of atomic energy posed a novel global challenge: could the technology be controlled to avoid destructive uses and an existentially dangerous arms race while permitting the broad sharing of its benefits? From 1944 onwards, scientists, policymakers, and other technical specialists began to confront this challenge and explored policy options for dealing with the impact of nuclear technology. We focus on the years 1944 to 1951 and review this period for lessons for the governance of powerful technologies, and find the following: Radical schemes for international control can get broad support when confronted by existentially dangerous technologies, but this support can be tenuous and cynical. Secrecy is likely to play an important, and perhaps harmful, role. The public sphere may be an important source of influence, both in general and in particular in favor of cooperation, but also one that is manipulable and poorly informed. Technical experts may play a critical role, but need to be politically savvy. Overall, policymaking may look more like “muddling through” than clear-eyed grand strategy. Cooperation may be risky, and there may be many obstacles to success.
The Financial Times discussed this paper in an op-ed on international control.
Economic growth under transformative AI: A guide to the vast range of possibilities for output growth, wages, and the labor share (2021)
Philip Trammell and Anton Korinek
At least since Herbert Simon’s (1960) prediction that artificial intelligence would soon replace all human labor, many economists have understood that there is a possibility that sooner or later artificial intelligence (AI) will dramatically transform the global economy. AI could have a transformative impact on a wide variety of domains; indeed, it could transform market structure, the value of education, the geopolitical balance of power, and practically anything else. The authors focus on three of the clearest and best-studied classes of potential transformations in economics: the potential impacts on output growth, on wage growth, and on the labor share, i.e. the share of output paid as wages. On all counts they focus on long-run impacts rather than transition dynamics. Instead of attempting to predict the future, they survey the vast range of possibilities identified in the economics literature, comparing the predictions of the various models and pinpointing the reasons why they differ so starkly.
The Immigration Preferences of Top AI Researchers: New Survey Evidence (2021)
Remco Zwetsloot, Baobao Zhang, Markus Anderljung, Michael C. Horowitz, Allan Dafoe
Artificial intelligence (AI) talent is global. AI researchers and engineers come from, and are in high demand, all over the world. Countries and companies trying to recruit and retain AI talent thus face immense competition. In order to understand current and prospective flows of talent, we investigate the drivers of AI researchers’ immigration decisions and preferences. Immigration questions are particularly salient for the United States today, as half of its current AI workforce and two-thirds of graduate students in AI-related graduate programs were born elsewhere. Some experts believe that the current U.S. immigration system will prevent or dissuade many of these international graduates from staying in the country, potentially undermining the vitality of the U.S. technology sector. Many other countries have also seen recent immigration policy debates centered on attracting AI talent. To better understand the immigration decisions and preferences of this global AI workforce, we conducted a survey of more than 500 active researchers who publish in the leading machine learning conferences.
Beyond Privacy Trade-offs with Structured Transparency (2020)
Andrew Trask, Emma Bluemke, Ben Garfinkel, Claudia Ghezzou Cuervas-Mons, Allan Dafoe
Many socially valuable activities depend on sensitive information, such as medical research, public health policies, political coordination, and personalized digital services. This is often posed as an inherent privacy trade-off: we can benefit from data analysis or retain data privacy, but not both. Across several disciplines, a vast amount of effort has been directed toward overcoming this trade-off to enable productive uses of information without also enabling undesired misuse, a goal we term ‘structured transparency’. In this paper, we provide an overview of the frontier of research seeking to develop structured transparency. We offer a general theoretical framework and vocabulary, including characterizing the fundamental components – input privacy, output privacy, input verification, output verification, and flow governance – and fundamental problems of copying, bundling, and recursive oversight. We argue that these barriers are less fundamental than they often appear. Recent progress in developing ‘privacy-enhancing technologies’ (PETs), such as secure computation and federated learning, may substantially reduce lingering use-misuse trade-offs in a number of domains. We conclude with several illustrations of structured transparency – in open research, energy management, and credit scoring systems – and a discussion of the risks of misuse of these tools.
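To make one of the underlying primitives concrete, here is a minimal sketch of additive secret sharing, a standard building block of secure computation and one route to the “input privacy” component named above. It is illustrative only and not drawn from the paper; the hospital scenario and figures are hypothetical.

```python
# Toy additive secret sharing: a building block of secure multi-party
# computation, one of the privacy-enhancing technologies (PETs) that
# structured transparency draws on. Illustrative only.
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split `secret` into n additive shares; any n-1 shares reveal nothing."""
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Two hospitals contribute patient counts without revealing them individually.
a_shares = share(1200, n_parties=3)
b_shares = share(3400, n_parties=3)

# Each party adds the shares it holds; only the aggregate is reconstructed.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # 4600: the total, with neither input revealed
```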
The biosecurity benefits of genetic engineering attribution (2020)
Gregory Lewis, …, Jade Leung, Allan Dafoe, Cassidy Nelson, …
Biology can be misused, and the risk of this causing widespread harm increases in step with the rapid march of technological progress. A key security challenge involves attribution: determining, in the wake of a human-caused biological event, who was responsible. Recent scientific developments have demonstrated a capability for detecting whether an organism involved in such an event has been genetically modified and, if modified, to infer from its genetic sequence its likely lab of origin. We believe this technique could be developed into powerful forensic tools to aid the attribution of outbreaks caused by genetically engineered pathogens, and thus protect against the potential misuse of synthetic biology.
How Will National Security Considerations Affect Antitrust Decisions in AI? An Examination of Historical Precedents (2020)
Cullen O’Keefe
Artificial Intelligence (AI)—like past general purpose technologies such as railways, the internet, and electricity—is likely to have significant effects on both national security and market structure. These market structure effects, as well as AI firms’ efforts to cooperate on AI safety and trustworthiness, may implicate antitrust in the coming decades. Meanwhile, as AI becomes increasingly seen as important to national security, such considerations may come to affect antitrust enforcement. By examining historical precedents, this paper sheds light on the possible interactions between traditional—that is, economic—antitrust considerations and national security in the United States.
Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (2020)
Miles Brundage, Shahar Avin, … , Helen Toner, Andrew Trask, Carina Prunkl, Jade Leung, Allan Dafoe, Cullen O’Keefe, Brian Tse, Carrick Flynn, Markus Anderljung, …
NB: The report was a collaboration between a large number of organisations, all of which are listed in the report. FHI staff listed above contributed equally and are corresponding authors.
This report suggests various steps that different stakeholders in AI development can take to make it easier to verify claims about AI development, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. Implementation of such mechanisms can help make progress on the multifaceted problem of ensuring that AI development is conducted in a trustworthy fashion. The mechanisms outlined in this report deal with questions that various parties involved in AI development might face.
Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter (2020)
Nathan Calvin and Jade Leung
This working paper is a preliminary analysis of the legal rules, norms, and strategies governing artificial intelligence (AI)-related intellectual property (IP). We analyze the existing AI-related IP practices of select companies and governments, and provide some tentative predictions for how these strategies and dynamics may continue to evolve in the future.
Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies (2019)
Jade Leung
DPhil thesis from the University of Oxford, International Relations
Artificial intelligence (AI) is a strategic general purpose technology (GPT) with the potential to deliver vast economic value and substantially affect national security. The central claim motivating this work is that the development of a strategic GPT follows a distinct pattern of politics. By modelling this pattern, we can make predictions about how the politics of AI will unfold.
The proposed model follows a life cycle of a strategic GPT. It focuses on three actors – the state, firms, and researchers. Each actor is defined by their goals, resources and constraints. The model analyses the relationships between these actors – specifically, the synergies and conflicts that emerge between them as their goals, resources, and constraints interact.
Case studies of strategic GPTs developed in the U.S. – specifically aerospace technology, biotechnology, and cryptography – show that the model captures much of history accurately. When applied to AI, the model also captures political dynamics to date well and motivates predictions about how we could expect the politics of AI to unfold. For example, I predict that AI firms will be increasingly constrained by the legislative environment, and more pressured to serve national defense and security interests. Some will be caught in the cross-hairs of public critique and researcher pushback; some, however, will willingly sell AI technologies to the state with little friction. Further, I predict that the political influence of researchers will shrink, going against what some may view as a rise in researcher influence given recent events of employee backlash in AI firms. In turn, the inclination and capacity for the state to exert control over AI’s development and proliferation will likely grow, exercised via tools such as export controls.
Artificial intelligence is going to matter greatly, and indeed, already does. It matters, then, that we understand the politics that surrounds it, and that we ultimately lay the groundwork for the governance of a technology that is poised to be transformative.
Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development (2019)
Peter Cihon
Today, AI policy analysis tends to focus on national strategies, nascent international initiatives, and the policies of individual corporations. Yet international standards produced by nongovernmental organizations are also an important site of forthcoming AI governance. International standards can impact national policies, international institutions, and individual corporations alike. International standards offer an impactful policy tool in the global coordination of beneficial AI development.
The case for further engagement in the development of international standards for AI R&D is detailed in this report. It explains the global policy benefits of AI standards, outlines the current landscape for AI standards around the world, and offers a series of recommendations to researchers, AI developers, and other AI organizations.
Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission (2019)
Cullen O’Keefe
This century, advanced artificial intelligence (“Advanced AI”) technologies could radically change economic or political power. Such changes produce a tension that is the focus of this Report. On the one hand, the prospect of radical change provides the motivation to craft, ex ante, agreements that positively shape those changes. On the other hand, a radical transition increases the difficulty of forming such agreements since we are in a poor position to know what the transition period will entail or produce. The difficulty and importance of crafting such agreements is positively correlated with the magnitude of the changes from Advanced AI. The difficulty of crafting long-term agreements in the face of radical changes from Advanced AI is the “turbulence” with which this Report is concerned. This Report attempts to give readers a toolkit for making stable agreements—ones that preserve the intent of their drafters—in light of this turbulence.
Recent Developments in Cryptography and Possible Long-Run Consequences (2018)
Ben Garfinkel
Unpublished manuscript
Historically, progress in the field of cryptography has been enormously consequential. Over the past century, for instance, cryptographic discoveries have played a key role in a world war and made it possible to use the internet for business and private communication. In the interest of exploring the impact the field may have in the future, I consider a suite of more recent developments. My primary focus is on blockchain-based technologies (such as cryptocurrencies and smart contracts) and on techniques for computing on confidential data (such as homomorphic encryption and secure multiparty computation). I provide an introduction to these technologies that assumes no previous knowledge of cryptography. Then, I consider eight speculative predictions about the long-term consequences these emerging technologies could have. These predictions include the views that a growing number of information channels used to conduct surveillance may go dark, that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions ranging from banks to voting authorities may shrink, and that new transnational institutions known as decentralized autonomous organizations may emerge. Finally, I close by discussing some challenges that could limit the significance of emerging cryptographic technologies. On the basis of these challenges, it is premature to predict that any of them will approach the transformativeness of previous technologies. However, this remains a rapidly-developing area well worth following.
To request the full version of the report, contact the author on benmgarfinkel [at] gmail.com.
Policy Desiderata in the Development of Machine Superintelligence (2016)
Nick Bostrom, Allan Dafoe, and Carrick Flynn
Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities. This paper seeks to initiate discussion of these by identifying a set of distinctive features of the transition to a machine intelligence era. From these distinctive features, we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy…
Policy writing
Thinking About Risks From AI: Accidents, Misuse and Structure
Remco Zwetsloot, Allan Dafoe
11 February 2019
JAIC: Pentagon debuts artificial intelligence hub
Jade Leung, Sophie-Charlotte Fischer
8 August 2018
Resources
AI Governance: Opportunity and Theory of Impact (2020)
Allan Dafoe
Advances in AI are likely to be among the most impactful global developments in the coming decades, and AI governance is likely to become one of the most important global issue areas. AI governance is a new field and is relatively neglected. Allan explains here how he thinks about this as a cause area and his perspective on how best to pursue positive impact in this space. The value of investing in this field can be appreciated whether one is primarily concerned with contemporary policy challenges or long-term risks and opportunities (“longtermism”); this piece is written primarily from a longtermist perspective. Differing from some other longtermist work on AI, he emphasizes the importance of also preparing for more conventional scenarios of AI development.
A Guide to Writing the NeurIPS Impact Statement (2020)
Carolyn Ashurst, Markus Anderljung, Carina Prunkl, Jan Leike, Yarin Gal, Toby Shevlane, Allan Dafoe
Over time, the exercise of assessing impact could enhance the ML community’s expertise in technology governance, and otherwise help build bridges to other researchers and policymakers. To help maximize the chances of success, Carolyn Ashurst, Markus Anderljung, Carina Prunkl, Jan Leike, Yarin Gal, Toby Shevlane, and Allan Dafoe have compiled some suggestions and an (unofficial) guide for how to write the NeurIPS Impact Statement.
Syllabus: Artificial Intelligence and China (2020)
Jeffrey Ding, Sophie-Charlotte Fischer, Brian Tse, and Chris Byrd
In recent years, China’s ambitious development of artificial intelligence (AI) has attracted much attention in policymaking and academic circles. This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development. The materials presented range from blogs to books, with an emphasis on English translations of Mandarin source materials. The reading list is not exhaustive, and it will benefit from feedback and revisions.
Syllabus: Artificial Intelligence and International Security (2018)
Remco Zwetsloot
This syllabus covers material located at the intersection between artificial intelligence (AI) and international security. The syllabus can be used in structured self-study (or group-study) for those new to this space, or as a resource for instructors designing class-specific syllabi (which would probably have to be significantly shorter). It is designed to be useful to (a) people new to both AI and international relations (IR); (b) people coming from AI who are interested in an IR angle on the problems; (c) people coming from IR who are interested in working on AI. Depending on which of groups (a)-(c) you fall in, it may be feasible to skip or skim certain sections. For sections that you are particularly interested in, do consider diving into the sources cited in the readings—for most topics what I have assigned just skims the surface and is intended only as a starting point.
The Team
External DPhil Supervisors
Duncan Snidal
Professor of International Relations, University of Oxford
Karolina Milewicz
Associate Professor of International Relations, University of Oxford
Policy & Public Engagement
Our researchers are actively involved in policy and public dialogue on the impact of advanced, transformative AI. We seek to offer authoritative, actionable, and accessible insight to a range of audiences in policy, academia, and the public. The following is a selection of speaking engagements, podcasts, media appearances, and policy writings from our team.
Recent Policy & Public Engagement
Carolyn Ashurst: Ethics in AI Seminar - Responsible Research and Publication in AI
- What role should the technical AI community play in questions of AI ethics and those concerning the broader impacts of AI? Are technical researchers well placed to reason about the potential societal impacts of their work?
- What does it mean to conduct and publish AI research responsibly?
- What challenges does the AI community face in reaching consensus about responsibilities, and adopting appropriate norms and governance mechanisms?
- How can we maximise the benefits while minimizing the risks of increasingly advanced AI research?
Carolyn Ashurst’s talk in this seminar was centred on the NeurIPS requirement as a case study in self-governance. You can watch the seminar here.
Baobao Zhang: How social science research can inform AI governance (January 2021)
Baobao Zhang explains how social science research can help people in government, tech companies, and advocacy organizations make decisions regarding artificial intelligence (AI) governance. After explaining her work on public attitudes toward AI and automation, she explores other important topics of research. She also reflects on how researchers could make broad impacts outside of academia.
Allan Dafoe, AI Governance: Opportunity and Theory of Impact (September 2020)
Allan Dafoe wrote a post for the Effective Altruism Forum explaining how he thinks about AI governance as a cause area and his perspective on how best to pursue positive impact in the space.
Read the full piece here.
Allan Dafoe, AI Social Responsibility | AI Summit London (September 2020)
Allan Dafoe presented on AI Social Responsibility at The Virtual AI Summit London.
Watch the full talk here.
Ben Garfinkel on scrutinising classic AI risk arguments | 80,000 Hours Podcast (July 2020)
Ben Garfinkel was interviewed on the 80,000 Hours podcast where he discussed whether classic AI risk arguments have been subject to sufficient scrutiny given the current level of investment in the area.
Listen to the full podcast here.
Consultation on the European Commission’s White Paper on Artificial Intelligence: a European approach to excellence and trust | June 2020
Stefan Torges, a 2020 GovAI Fellow, wrote a response to the European Commission’s White Paper on Artificial Intelligence. In this response, he focused his analysis and recommendations on the proposed “ecosystem of trust” and associated international efforts.
Read the full consultation here.
A Guide to Writing the NeurIPS Impact Statement | 13th May 2020
Over time, the exercise of assessing impact could enhance the ML community’s expertise in technology governance, and otherwise help build bridges to other researchers and policymakers. To help maximize the chances of success, Carolyn Ashurst, Markus Anderljung, Carina Prunkl, Jan Leike, Yarin Gal, Toby Shevlane, and Allan Dafoe have compiled some suggestions and an (unofficial) guide for how to write the NeurIPS Impact Statement.
Read the full post here.
Contact tracing apps for COVID-19 | (April 28, 2020. The Washington Post)
Allan Dafoe, Benjamin Garfinkel, and Toby Shevlane discuss contact tracing apps for COVID-19 in an article for The Washington Post, looking at how these new technologies can help preserve privacy.
Read the full article here.
“AI Governance in 2019: A Year in Review” | (April 20, 2020)
Allan Dafoe and Markus Anderljung’s observations, along with those of other experts in the field, were collated to form the “AI Governance in 2019: A Year in Review” report. The report was presented at a public event attended by numerous mainstream media outlets in China.
Read the full report here.
Brian Tse: 'Towards A Global Community Of Shared Future in AGI' | (January 5, 2020. Beneficial AGI conference: Puerto Rico)
Allan Dafoe, Jade Leung, and Brian Tse from our team participated in the Beneficial AGI conference in Puerto Rico. Allan and Brian presented.
Watch Brian’s presentation here.
Allan Dafoe: 'Frontier Exploration and Innovation Practices of Artificial Intelligence Security' | (August 30, 2019. World AI Conference: Shanghai)
Allan Dafoe spoke at the World AI Conference in Shanghai on AI governance and security, delivering a keynote speech on the frontiers of artificial intelligence security.
Read more about the conference here.
Jeffrey Ding: 'What People Get Wrong About China and Artificial Intelligence' | (July 9, 2019. Fortune)
In an interview with Fortune, Ding explained that much of what is written about China’s multi-billion dollar push into A.I. often seems like it’s written in a “vacuum.” There’s little context or comparison between China’s A.I. abilities and those of other countries.
Read the article here.
Jade Leung: 'What happens when AI fails – Concrete solutions for a better AI' | (Hello Tomorrow Global Summit: Paris)
Jade Leung participated in the panel “What happens when AI fails – Concrete solutions for a better AI” at the Hello Tomorrow Global Summit in Paris.
Read more about the summit here.
Carina Prunkl: 'Ethics of AI course' | (Michaelmas term. The University of Oxford)
Carina Prunkl organised an Ethics of AI course in Michaelmas term in Oxford, UK. The course was delivered for the Chevening Gurukul Fellowship for Leadership and Excellence at the Saïd Business School, University of Oxford.
Read more about the course here.
Sophie-Charlotte Fischer: 'The Emergence of Artificial Intelligence' | (WEF, ETH and Microsoft: Davos)
Sophie-Charlotte Fischer spoke at a World Economic Forum side event in Davos hosted by ETH and Microsoft on “The Emergence of Artificial Intelligence.”
Sophie-Charlotte Fischer: 'Robotics, Artificial Intelligence & Humanity' | (May 16, 2019. Robotics, AI & Humanity Conference: Vatican)
Sophie-Charlotte Fischer spoke at a two-day conference on the impact of robotics and artificial intelligence on humanity, held at the Vatican and organized by the Pontifical Academy of Social Sciences and the Pontifical Academy of Sciences.
Jade Leung: 'How Can We See the Impact of AI Strategy Research?' | (June 23, 2019. EA Global: San Francisco)
Watch the video here.
Brian Tse: 'Improving Coordination with China to Reduce AI Risk' | (June 23, 2019. EA Global: San Francisco)
Watch the video here.
Markus Anderljung: 'Governing Transformative Artificial Intelligence' | (EAGx Nordics)
Watch the video here.
An Interview with Ben Garfinkel, Governance of AI Program Researcher
Ben Garfinkel was interviewed in The Politic, the Yale College Journal of Politics, a monthly Yale University student publication.
Read the interview here.
Markus Anderljung: 'AI Safety - Human values aligned with AI' | (September 21, 2019. AICast podcast)
Markus talks about human values and how we should plan ahead as we start building artificial general intelligence (AGI). The goal of long-term artificial intelligence safety is to ensure that advanced AI systems are aligned with human values, so that they reliably do things that people want them to do.
Listen to the podcast here.
Jeffrey Ding: 'Artificial Intelligence in China' | (May 21, 2019. Ark Investment Podcast)
In this podcast, Jeffrey talks about his work in the AI field and his focus on translating developments from China for a more western audience. He explains why he started his newsletter: the AI community in China largely keeps abreast of advancements from the US and UK, while the same cannot be said in the opposite direction. This language asymmetry, as Jeffrey calls it, means there is a gap in the knowledge base in the Americas and Europe around the burgeoning Chinese AI scene.
Listen to the podcast here.
Cullen O'Keefe: 'The Windfall Clause: Sharing the benefits of advanced AI' | (June 23, 2019. EA Global: San Francisco)
The potential upsides of advanced AI are enormous, but there’s no guarantee they’ll be distributed optimally. In this talk, Cullen O’Keefe discusses one way we could work toward equitable distribution of AI’s benefits — the Windfall Clause, a commitment by AI firms to share a significant portion of their future profits — as well as the legal validity of such a policy and some of the challenges to implementing it.
Watch the video here.
Jade Leung, Sophie-Charlotte Fischer, & Allan Dafoe: "Export Controls in the Age of AI" | (28 August 2019: War on the Rocks)
What does technological leadership look like in an era of artificial intelligence? The United States, like other countries, is in the midst of grappling with this question against a backdrop of the rise of China and the growing realization that “business as usual” will no longer suffice for America to maintain its technological advantage. Washington has begun to take some important steps to translate this realization into action. In February, President Donald Trump launched the American AI Initiative in recognition that “American leadership in AI is of paramount importance to maintaining the economic and national security of the United States.” In a less constructive fashion, two months later Sen. Josh Hawley (R-Mo.) introduced the China Technology Transfer Control Act of 2019 that would “make it harder for American companies to export major emerging technologies to China.” Clearly, AI is on the agenda.
Unfortunately, Washington appears to be defaulting to traditional, 20th-century policy tools to address a 21st-century problem…
Jeffrey Ding: "AI Alignment Podcast: China’s AI Superpower Dream" | (16 August 2019: Future of Life Institute AI Alignment Podcast)
Jade Leung: "AI Alignment Podcast: On the Governance of AI" | (22 July 2019: Future of Life Institute AI Alignment Podcast)
Jeffrey Ding & Helen Toner: "US Senate Hearing on Technology, Trade, and Military-Civil Fusion: China's Pursuit of Artificial Intelligence, New Materials, and New Energy" | (7 June 2019: U.S.-China Economic and Security Review Commission)
Peter Cihon with others: "Comment on National Institute of Standards and Technology – RFI: Developing a Federal AI Standards Engagement Plan" | (6 June 2019)
GovAI submitted written comments (work led by Peter Cihon) as well as a second round of targeted edits to the National Institute of Standards and Technology (NIST) to support its ongoing work to develop a federal plan for technical artificial intelligence (AI) standards. We collaborated with several organizations on this effort, including the Center for Long-Term Cybersecurity, the Future of Life Institute, and certain researchers at the Leverhulme Centre for the Future of Intelligence. On August 9, NIST published their final report, “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools”, taking numerous suggestions from CLTC into consideration. The plan recommends the federal government “commit to deeper, consistent, long-term engagement in AI standards development activities to help the United States to speed the pace of reliable, robust, and trustworthy AI technology development.”
Remco Zwetsloot & Allan Dafoe: "Thinking About Risks From AI: Accidents, Misuse and Structure" | (11 February 2019: Lawfare)
Baobao Zhang: "It’s 2043. We Need a New American Dream for the A.I. Revolution." | (12 August 2019: The New York Times)
Allan Dafoe: 'Private Sector Leadership in AI Governance' | (December 11th, 2018: The Digital Society Conference)
On December 10-11, 2018, the Digital Society Conference 2018 – Empowering Ecosystems took place at ESMT Berlin. The two-day conference included panels, presentations, and workshops from many different perspectives, including science, industry, and politics. The conference covered new developments in security and privacy, digital politics, and industrial strategies. The rise of artificial intelligence (AI) was a particular focus, including its societal implications and how to understand and harness the battle for AI dominance. More about the conference here.
Allan Dafoe & Jade Leung: 'What does AI mean for the future of humanity?' | (December 10th, 2018: Futuremakers podcast)
Allan Dafoe: 'The AI Revolution and International Politics' | (November 14th, 2018. Oxford Artificial Intelligence Society)
Jade Leung: 'Why Companies Should be Leading on AI Governance' | (October 27th, 2018. EA Global London)
Benjamin Garfinkel: 'How Sure Are We About This AI Stuff?' | (October 27, 2018, EA Global London)
Allan Dafoe: 'Regulating Artificial Intelligence in the area of defence' | (October 10th, 2018. SEDE Public Hearing on Artificial intelligence and its future impact on security)
Carrick Flynn: 'AI Governance Landscape' | (June 10th, 2018. EA Global San Francisco)
Jade Leung: 'Analyzing AI Actors' | (June 10th, 2018. EA Global San Francisco)
Benjamin Garfinkel: 'The Future of Surveillance Doesn't Need to be Dystopian' | (June 9th, 2018, EA Global San Francisco)
Jeffrey Ding: Participant in BBC Machine Learning Fireside Chat | (June 6th 2018, London)
Allan Dafoe: ‘Keynote on AI Governance’ | (June 1st 2018, Public Policy Forum)
Benjamin Garfinkel: 'Recent Developments in Cryptography and Why They Matter' | (May 1st 2018, Oxford Internet Institute)
Miles Brundage: 'Offensive applications of AI' | (April 11th, 2018, CyberUK)
Sophie-Charlotte Fischer: 'Artificial Intelligence: What implications for Foreign Policy?' | (April 11th, 2018, German Federal Foreign Office)
Allan Dafoe: Chair of panel ‘Artificial Intelligence and Global Security: Risks, Governance, and Alternative Futures’ | (April 6th 2018, Annual Conference of the Johnson Center for the Study of American Diplomacy, Yale University)
The panel addressed cybersecurity leadership and strategy from the perspective of the Department of Defense. The panelists were Dario Amodei, Research Scientist and Team Lead for Safety at OpenAI; Jason Matheny, Director of the Intelligence Advanced Research Projects Agency; and the Honorable Robert Work, former Acting and Deputy Secretary of Defense and now Senior Counselor for Defense at the Center for a New American Security. The keynote address at the conference was given by Eric Schmidt, and Henry Kissinger also gave a talk.
Matthijs Maas: 'Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of AI deployment' | (February 2nd, 2018, AAAI/ACM Conference on AI, Ethics and Society)
Paper presentation arguing that many AI applications involve networked (tightly coupled, opaque) systems operating in complex or competitive environments, which makes such systems prone to ‘normal accident’-type failures. While this suggests that large-scale, cascading errors in AI systems are very hard to prevent or stop, examining the operational features that lead technologies to exhibit such failures enables us to derive both tentative principles for precautionary policymaking and practical recommendations for the safer deployment of AI systems. Conference paper available here.
Allan Dafoe: 'Governing the AI Revolution: The Research Landscape' | (January 25th, 2018, CISAC, Stanford University)
Allan Dafoe: ‘Strategic and Societal Implications of ML’ | (December 8th 2017, Neural Information Processing Systems Conference)
Allan Dafoe: Evidence Panelist for All Party Parliamentary Group on Artificial Intelligence Evidence Meeting | (October 30th 2017, Evidence Meeting 7: International Perspective and Exemplars)
Opportunities
Governance of AI Fellowship
The Governance of AI Fellowship will not open for applications in 2021. If you are interested in collaborating with the AI Governance Research Group, feel free to reach out to Anne, who can put you in touch or direct you to similar opportunities.
When the Fellowship runs, we seek 2-5 exceptional researchers to join our interdisciplinary team for a period of three months. Participants in the Fellowship receive a generous stipend and have the opportunity to take part in cutting-edge research in a fast-growing field, while gaining expertise in parts of our Research Agenda.
General Applications
We are always looking for exceptional researchers to join our team, be it as collaborators, research assistants, or full-time researchers.
In all candidates, we seek high general aptitude, self-direction, openness to feedback, and a firm belief in our mission.
Across each of these roles, we are especially interested in people with varying degrees of skill or expertise in the following areas:
- International relations, especially in the areas of international cooperation, international law, international political economy, global public goods, constitutional and institutional design, diplomatic coordination and cooperation, the history of arms race dynamics, and the politics of transformative technologies, governance, and grand strategy.
- Chinese politics and machine learning in China.
- Game theory and mathematical modelling.
- Survey design and statistical analysis.
- Large intergovernmental scientific research organizations and projects (such as CERN, ISS, and ITER).
- Technology and other types of forecasting.
- Law and/or policy.
Our goal is to identify exceptional talent. We are interested in hiring for full-time work at Oxford. We are also interested in getting to know talented individuals who might only be available part-time or for remote work.
As we work closely with leading AI labs and the effective altruism community, familiarity with and involvement in them are a plus.
If you are interested, send an email to Anne, putting “General Application” in the subject line, with your CV and a brief statement of interest outlining (i) why you want to work with us, (ii) what you would like to contribute and (iii) how this role fits into your career plans. We consider these applications in batches. You can expect a response within about a month.
All qualified applicants will be considered for employment without regard to race, color, religion, sex, age or national origin.
Governance of AI Seminar Series
Upcoming Events
Past Events
Redesigning AI for Shared Prosperity: An Agenda by Stephanie Bell and Katya Klinova
Monday, 21st June, 1700–1830 BST (0900–1030 PT, 1200–1330 EDT)
This event has already taken place, but you can watch the recording and read the transcript here.
AI poses a risk of automating and degrading jobs around the world, harming vulnerable workers’ livelihoods and well-being. How can we deliberately account for the impacts on workers when designing and commercializing AI products, in order to benefit workers’ prospects while simultaneously boosting companies’ bottom lines and increasing overall productivity? The Partnership on AI’s recently released report Redesigning AI for Shared Prosperity: An Agenda puts forward a proposal for such accounting. The Agenda outlines a blueprint for how industry and government can contribute to AI that advances shared prosperity.
Stephanie Bell is a Research Fellow at the Partnership on AI affiliated with the AI and Shared Prosperity Initiative. Her work focuses on how workers and companies can collaboratively design and develop AI products that create equitable growth and high quality jobs. She holds a DPhil in Politics and an MPhil in Development Studies from the University of Oxford, where her ethnographic research examined how people can combine expertise developed in their everyday lives with specialized knowledge to better advocate for their needs and well-being.
Katya Klinova is the Head of AI, Labor, and the Economy Programs at the Partnership on AI. In this role, she oversees the AI and Shared Prosperity Initiative and other workstreams which focus on the mechanisms for steering AI progress towards greater equality of opportunity and improving the working conditions along the AI supply chain. She holds an M.Sc. in Data Science from the University of Reading and an MPA in International Development from Harvard University, where her work examined the potential impact of AI advancement on the economic growth prospects of low- and middle-income countries.
Robert Seamans is an Associate Professor at New York University’s Stern School of Business. His research focuses on how firms use technology in their strategic interactions with each other, as well as on the economic consequences of AI, robotics, and other advanced technologies. His research has been published in leading academic journals and cited in numerous outlets including The Atlantic, Forbes, Harvard Business Review, The New York Times, The Wall Street Journal, and others. During 2015-2016, Professor Seamans was a Senior Economist for technology and innovation on President Obama’s Council of Economic Advisers.
AI and Inequality: Joseph Stiglitz in discussion with Anton Korinek
Monday, 3rd May, 1700–1830 BST (0900–1030 PT, 1200–1330 EDT)
This event has already taken place, but you can watch the recording and read the transcript here.
Over the next decades, AI will dramatically change the economic landscape. It may also magnify inequality, both within and across countries. Joseph E. Stiglitz, Nobel Laureate in Economics, will join us for a conversation with Anton Korinek on the economic consequences of increased AI capabilities. They will discuss the relationship between technology and inequality, the potential impact of AI on the global economy, and the economic policy and governance challenges that may arise in an age of transformative AI. Korinek and Stiglitz have co-authored several papers on the economic effects of AI.
Joseph Stiglitz is University Professor at Columbia University. He is also the co-chair of the High-Level Expert Group on the Measurement of Economic Performance and Social Progress at the OECD, and the Chief Economist of the Roosevelt Institute. A recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979), he is a former senior vice president and chief economist of the World Bank and a former member and chairman of the US President’s Council of Economic Advisers. Known for his pioneering work on asymmetric information, Stiglitz’s research focuses on income distribution, risk, corporate governance, public policy, macroeconomics and globalization.
Anton Korinek is an Associate Professor at the University of Virginia, Department of Economics and Darden School of Business as well as a Research Associate at the NBER, a Research Fellow at the CEPR and a Research Affiliate at the AI Governance Research Group. His areas of expertise include macroeconomics, international finance, and inequality. His most recent research investigates the effects of progress in automation and artificial intelligence for macroeconomic dynamics and inequality.
The Frontier of Democracy: Audrey Tang on Taiwan’s digital democracy, collaborative civic technologies, and beneficial information flows
Tuesday, 9th March, 1000-1130 GMT, 1800-1930 GMT+8 (0200–0330 PT, 0500–0630 EST)
Hélène Landemore (Yale) and Ben Garfinkel (GovAI) were discussants for this event; Allan Dafoe (GovAI) moderated.
This event has already taken place, but you can watch the recording and read the transcript here.
Following the 2014 Sunflower Movement protests, Audrey Tang—a prominent member of the civic social movement g0v—was headhunted by Taiwanese President Tsai Ing-wen’s administration to become the country’s first Digital Minister. In this webinar, Audrey will discuss collaborative civic technologies in Taiwan, and their potential to improve governance and beneficial information flows.
vTaiwan is designed to facilitate constructive conversation and consensus-building between diverse opinion groups. It combines the crowdsourcing of facts and evidence with mass deliberation using the machine learning-enabled software pol.is. To date, the process has been used by government ministries and representatives, scholars, business leaders, civil society organizations, and citizens. Over 30 cases have been discussed, leading to decisive government action on topics including financial technology and Uber.
Such tools are examples of information systems that create socially beneficial information flows. Enabling productive uses of information, without also enabling undesired misuse, is a goal GovAI calls “structured transparency.” Taiwan’s experience constitutes an exciting example of structured transparency’s potential.
Audrey Tang is Taiwan’s Digital Minister in charge of social innovation, open governance, and youth engagement. They are Taiwan’s first transgender cabinet member and became the youngest minister in the country’s history at the age of 35. Tang is known for civic hacking and strengthening democracy using technology. They served on the Taiwanese National Development Council’s Open Data Committee and are an active contributor to g0v, a community focused on creating tools for civil society. Audrey plays a key role in combating foreign disinformation campaigns and in formulating Taiwan’s COVID-19 response.
Hélène Landemore is an Associate Professor of Political Science at Yale University. Her research and teaching interests include democratic theory, political epistemology, theories of justice, the philosophy of social sciences (particularly economics), constitutional processes and theories, and workplace democracy.
Ben Garfinkel is a Research Fellow at the Future of Humanity Institute and a DPhil student at Oxford’s Department of Politics and International Relations. His research interests include the security and privacy implications of artificial intelligence, the causes of interstate war, and the methodological challenge of forecasting and reducing technological risks.
The Work of the Future: Building Better Jobs in an Age of Intelligent Machines – David Autor, Katya Klinova & Ioana Marinescu
Wednesday, 20th January, 1700-1815 GMT (0900–1015 PT, 1200–1315 EST)
This event was moderated by Anton Korinek (University of Virginia).
This event has already taken place, but you can watch the recording and read the transcript here.
In the spring of 2018, MIT President L. Rafael Reif commissioned the MIT Task Force on the Work of the Future. He tasked them with understanding the relationships between emerging technologies and work, to help shape public discourse around realistic expectations of technology, and to explore strategies to enable a future of shared prosperity.
In this webinar, David Autor, co-chair of the Task Force, discussed their latest report: The Work of the Future: Building Better Jobs in an Age of Intelligent Machines. The report documents that the labour market impacts of technologies like AI and robotics are taking years to unfold; however, we have no time to spare in preparing for them.
If those technologies deploy into the labour institutions of today, which were designed for the last century, we will see effects similar to those of recent decades: downward pressure on wages, skills, and benefits, and an increasingly bifurcated labour market. This report, and the MIT Work of the Future Task Force, suggest a better alternative: building a future for work that harvests the dividends of rapidly advancing automation and ever-more-powerful computers to deliver opportunity and economic security for workers. To channel the rising productivity stemming from technological innovations into broadly shared gains, we must foster institutional innovations that complement technological change.
David Autor is Ford Professor of Economics and associate department head of the Massachusetts Institute of Technology Department of Economics. He is also Faculty Research Associate of the National Bureau of Economic Research, Research Affiliate of the Abdul Latif Jameel Poverty Action Lab, Co-director of the MIT School Effectiveness and Inequality Initiative, Director of the NBER Disability Research Center, and former editor-in-chief of the Journal of Economic Perspectives. He is an elected officer of the American Economic Association and the Society of Labor Economists and a fellow of the Econometric Society.
Katya Klinova directs the strategy and execution of the AI, Labor, and the Economy Research Programs at the Partnership on AI, focusing on studying the mechanisms for steering AI progress towards greater equality of opportunity and improving the working conditions along the AI supply chain. In this role, she oversees multiple programs including the AI and Shared Prosperity Initiative.
Ioana Marinescu is an assistant professor at the University of Pennsylvania School of Social Policy & Practice and a Faculty Research Fellow at the National Bureau of Economic Research. She studies the labor market to craft policies that can enhance employment, productivity, and economic security. Her research expertise includes wage determination and monopsony power, antitrust law for the labor market, universal basic income, unemployment insurance, the minimum wage, and employment contracts.
GovAI Webinar – Margaret Roberts & Jeffrey Ding: Censorship’s Implications for Artificial Intelligence
Wednesday, October 28th, 1700-1815 GMT (1000-1115 PT, 1300-1415 EST)
This event has already taken place, but you can watch the recording and read the transcript here.
In this webinar, Jeffrey and Molly will discuss an upcoming paper co-authored by Molly and Eddie Yang.
Abstract: While artificial intelligence provides the backbone for many tools people use around the world, recent work has brought attention to the potential biases that may be baked into these algorithms. While most work in this area has focused on the ways in which these tools can exacerbate existing inequalities and discrimination, we bring to light another way in which algorithmic decision making may be affected by institutional and societal forces. We study how censorship has affected the development of Wikipedia corpuses, which are in turn regularly used as training data that provide inputs to NLP algorithms. We show that word embeddings trained on the regularly censored Baidu Baike have very different associations between adjectives and a range of concepts about democracy, freedom, collective action, equality, and people and historical events in China than embeddings trained on its uncensored counterpart, Chinese-language Wikipedia. We examine the origins of these discrepancies using surveys from mainland China, and we explore their implications by studying their use in downstream AI applications.
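To make the kind of comparison described above concrete, here is a minimal, hypothetical sketch (not the authors’ code or data): it measures how strongly an adjective and a concept are associated in two separately trained embedding spaces and contrasts the scores. The toy vectors below are placeholders standing in for embeddings trained on each corpus.

```python
# Illustrative sketch: comparing word associations across two embedding spaces,
# in the spirit of the Baidu Baike vs. Chinese-Wikipedia comparison above.
# The vectors are invented; real use would load embeddings trained on each corpus.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical embeddings keyed by (corpus, word).
toy_embeddings = {
    ("wikipedia", "democracy"): np.array([0.9, 0.1, 0.3]),
    ("wikipedia", "positive"):  np.array([0.8, 0.2, 0.4]),
    ("baike",     "democracy"): np.array([0.1, 0.9, 0.2]),
    ("baike",     "positive"):  np.array([0.7, 0.3, 0.5]),
}

for corpus in ("wikipedia", "baike"):
    sim = cosine(toy_embeddings[(corpus, "democracy")],
                 toy_embeddings[(corpus, "positive")])
    print(f"{corpus}: association(democracy, positive) = {sim:.3f}")
```

The quantity of interest is the gap between the two association scores: systematic differences of this kind are what the paper traces back to censorship of the underlying corpus.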
Molly Roberts is an Associate Professor in the Department of Political Science and the Halıcıoğlu Data Science Institute at the University of California, San Diego. She co-directs the China Data Lab at the 21st Century China Center. She is also part of the Omni-Methods Group. Her research interests lie in the intersection of political methodology and the politics of information, with a specific focus on methods of automated content analysis and the politics of censorship and propaganda in China.
Jeffrey Ding is the China lead for the AI Governance Research Group. Jeff researches China’s development of AI at the Future of Humanity Institute, University of Oxford. His work has been cited in the Washington Post, South China Morning Post, MIT Technology Review, Bloomberg News, Quartz, and other outlets. A fluent Mandarin speaker, he has worked at the U.S. Department of State and the Hong Kong Legislative Council. He is also reading for a D.Phil. in International Relations as a Rhodes Scholar at the University of Oxford.
GovAI Webinar – Economic Growth in the Long Run: Artificial Intelligence Explosion or an Empty Planet? by Ben Jones & Chad Jones
Wednesday October 7th, 1700-1815 BST (0900-1015 PT, 1200-1315 EST)
This event has already taken place, but you can watch the recording and read the transcript here.
How will economic growth evolve in the long run? This session explored the wide range of plausible scenarios. Aghion, Jones & Jones (2017) analyze how artificial intelligence may super-charge the growth trajectory, causing a potential speed-up in economic growth as either production or the process of innovation itself – often considered the main driver of economic growth – becomes more and more automated. In the limit, these processes may lead to growth singularities. By contrast, Jones (2020) uses a similar framework to show that a markedly different outcome is possible – continuously declining living standards – if population growth becomes negative and slows down the process of ideas production.
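The contrast between the two papers can be seen in a stylized semi-endogenous idea-production function of the kind this literature builds on (a sketch with illustrative notation, not the speakers’ exact formulation):

```latex
\dot{A}_t = \alpha A_t^{\phi} L_t^{\lambda}, \qquad L_t = L_0 e^{nt}
\quad\Longrightarrow\quad
g_A = \frac{\lambda n}{1-\phi} \quad \text{for } \phi < 1,\ n > 0,
```

where $A_t$ is the stock of ideas and $L_t$ the research population. With positive population growth, ideas and living standards grow steadily. If population growth turns negative, the research input shrinks and growth fades toward zero (the “Empty Planet” case), whereas automation that makes idea production strongly self-reinforcing (pushing the effective $\phi$ toward or above one) can generate explosive, singularity-like growth.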
The session was moderated by Anton Korinek (UVA) and featured Rachael Ngai (LSE) and Phil Trammell (Oxford) as discussants.
Benjamin Jones is the Gordon and Llura Gund Family Professor of Entrepreneurship at the Kellogg School of Management at Northwestern. He studies the sources of economic growth in advanced economies, with an emphasis on innovation, entrepreneurship, and scientific progress. He also studies global economic development, including the roles of education, climate, and national leadership in explaining the wealth and poverty of nations. His research has appeared in journals such as Science, the Quarterly Journal of Economics and the American Economic Review, and has been profiled in media outlets such as the Wall Street Journal, the Economist, and The New Yorker.
Chad Jones is the STANCO 25 Professor of Economics at the Stanford Graduate School of Business. He is noted for his research on long-run economic growth. In particular, he has examined theoretically and empirically the fundamental sources of growth in incomes over time and the reasons underlying the enormous differences in standards of living across countries. In recent years, he has used his expertise in macroeconomic methods to study the economic causes behind the rise in health spending and top income inequality. He is the author of one of the most popular macroeconomics textbooks, and his research has been published in the top journals of economics.
The Transparency-Security Tradeoff in Arms Control: Implications of Emerging Technology by Jane Vaynman
Friday October 2nd, 1430-1530 BST (0630-0730 PT, 0830-0930 EST)
This event has already taken place.
States can, in theory, avoid costly arms races through mutual agreements to restrain arming. Yet in reality, arms control is rare and difficult to negotiate. In considering cooperation, states face a tradeoff between beneficial and adverse aspects of information: states need transparency to observe behavior and assure compliance, but the same information used to monitor an agreement can also be used to gain a military advantage by revealing other capabilities or targets. The logic of the transparency-security tradeoff (developed in related work) has implications for several questions. To what extent do emerging technologies – including artificial intelligence, nanosatellites, autonomous vehicles, and additive manufacturing – make cooperation more likely? New technologies can affect the tradeoff by altering the amount of information collected or ease of information processing. While intuition suggests that technologies which improve monitoring should make arms control easier to achieve, this paper argues otherwise. Some technology may undermine prospects for cooperation by providing too much information. Monitoring allows for more effective espionage and so transparency may come at the expense of even higher threats to security.
Jane Vaynman is an Assistant Professor in Political Science at Temple University. Her research focuses on security cooperation between adversarial states, the design of arms control agreements, and the nuclear nonproliferation regime.
“China Won’t Win the Race for AI Dominance”: Carl Frey & Michael Osborne in discussion with Helen Toner
Thursday, September 24th, 1330-1430 BST
A discussion centred on Carl Frey and Michael Osborne’s piece “China Won’t Win the Race for AI Dominance” in Foreign Affairs, featuring Helen Toner, Director of Strategy at CSET and FHI Research Associate.
Helen Toner is a Research Associate with the AI Governance Research Group, and Director of Strategy at the Center for Security and Emerging Technology. She previously worked as a Senior Research Analyst at the Open Philanthropy Project, where she focused on policy and strategy issues related to progress in machine learning and artificial intelligence, including consulting with governments and policymakers as well as advising on grants to scholars working on AI policy issues. She also hired and managed a team to handle the operational side of Open Philanthropy’s scale-up from making $10 million in grants per year to over $200 million.
Carl Frey is Oxford Martin Citi Fellow at the University of Oxford, where he directs the Future of Work programme at the Oxford Martin School. After studying economics, history and management at Lund University, Frey completed his PhD at the Max Planck Institute for Innovation and Competition in 2011.
Michael Osborne is an Associate Professor in Machine Learning, an Official Fellow of Exeter College, and a Faculty Member of the Oxford-Man Institute for Quantitative Finance, all at the University of Oxford. He joined the Oxford Martin School as Lead Researcher on the Oxford Martin Programme on Technology and Employment in January 2015.
GovAI Webinar – Noah Feldman, Sophie-Charlotte Fischer, and Gillian Hadfield on the Design of Facebook’s Oversight Board
Wednesday September 23rd, 1700-1815 BST (0900-1015 PT, 1200-1315 EST)
This event has already taken place, but you can watch the recording and read the transcript here.
In September 2019, Facebook announced that it was establishing an independent Facebook Oversight Board, with the power to make binding decisions on content moderation. Noah Feldman, one of the Board’s architects, discusses the background of Facebook’s decision and whether other platforms will follow suit. Noah also presents his approach to designing institutions to improve and legitimize complex controversial decisions by Big Tech companies and sketches out what a meaningful review board for the AI industry would look like.
Noah Feldman is an American author, columnist, public intellectual, and host of the podcast Deep Background. He is the Felix Frankfurter Professor of Law at Harvard Law School and Chairman of the Society of Fellows at Harvard University. His work is devoted to constitutional law, with an emphasis on free speech, law & religion, and the history of constitutional ideas.
Sophie-Charlotte Fischer is a PhD candidate at the Center for Security Studies (CSS), ETH Zurich and a Research Affiliate at the AI Governance Research Group. She holds a Master’s degree in International Security Studies from Sciences Po Paris and a Bachelor’s degree in Liberal Arts and Sciences from University College Maastricht. Sophie is an alumna of the German National Academic Foundation.
Gillian Hadfield is the director of the Schwartz Reisman Institute for Technology and Society. She is the Schwartz Reisman Chair in Technology and Society, professor of law and of strategic management at the University of Toronto, a faculty affiliate at the Vector Institute for Artificial Intelligence, and a senior policy advisor at OpenAI. Her current research is focused on innovative design for legal and regulatory systems for AI and other complex global technologies; computational models of human normative systems; and working with machine learning researchers to build ML systems that understand and respond to human norms.
GovAI Webinar – Carles Boix and Sir Tim Besley on Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics
Monday June 8th, 1700-1815 BST (0900-1015 PT, 1200-1315 ET)
This event has already taken place, but you can watch the recording and read the transcript here.
The twentieth century witnessed the triumph of democratic capitalism in the industrialized West, with widespread popular support for both free markets and representative elections. Today, that political consensus appears to be breaking down, disrupted by polarization and income inequality, widespread dissatisfaction with democratic institutions, and insurgent populism. Tracing the history of democratic capitalism over the past two centuries, Carles Boix’s new book explains how we got here – and where we could be headed. Sir Tim Besley will be sharing his thoughts on the topic, and discussing the content of Carles’ book – Democratic Capitalism at the Crossroads.
Boix looks at three defining stages of capitalism, each originating in a distinct time and place with its unique political challenges, structure of production and employment, and relationship with democracy. He begins in nineteenth-century Manchester, where factory owners employed unskilled laborers at low wages, generating rampant inequality and a restrictive electoral franchise. He then moves to Detroit in the early 1900s, where the invention of the modern assembly line shifted labor demand to skilled blue-collar workers. Boix shows how growing wages, declining inequality, and an expanding middle class enabled democratic capitalism to flourish. Today, however, the information revolution that began in Silicon Valley in the 1970s is benefitting the highly educated at the expense of the traditional working class, jobs are going offshore, and inequality has risen sharply, making many wonder whether democracy and capitalism are still compatible.
Carles Boix is the Robert Garrett Professor of Politics and Public Affairs in the Department of Politics and the Woodrow Wilson School of Public and International Affairs at Princeton University. In 2015, he published Political Order and Inequality, followed by Democratic Capitalism at the Crossroads: Technological Change and the Future of Politics, the subject of our webinar, in 2019.
Sir Tim Besley is School Professor of Economics and Political Science and W. Arthur Lewis Professor of Development Economics in the Department of Economics at LSE. He is also a member of the National Infrastructure Commission and was President of the Econometric Society in 2018. He is a Fellow of the Econometric Society and British Academy. He is also a Foreign Honorary Member of the American Economic Association and the American Academy of Arts and Sciences. In 2016 he published Contemporary Issues in Development Economics.
GovAI Webinar – COVID-19 and the Economics of AI
This event has already taken place, but you can watch the recording and read the transcript here.
Wednesday May 20th, 1700-1815 BST (0900-1015 PT, 1200-1315 ET)
Join the inaugural Governance and Economics of AI webinar on Wednesday, May 20th, 1700-1815 BST (0900-1015 PT, 1200-1315 ET), featuring Daron Acemoğlu, Diane Coyle, and Joseph Stiglitz in a discussion about COVID-19 and the economics of AI.
The panel will focus on questions such as: Will COVID-19 cause automation to increase? A decline in labour share of income? A rise of superstar companies? What does COVID-19 teach us about policy responses discussed in the economics of AI, such as universal basic income? Will we see a stable increase in AI-enabled surveillance technologies?
The event will start with a brief introduction to the seminar series by Anton Korinek and Allan Dafoe, followed by a panel discussion and a Q&A. To join the event, please register using the link above.
The event is hosted by the AI Governance Research Group, at the Future of Humanity Institute, based at the University of Oxford. Our focus is on the political challenges arising from transformative AI. We seek to guide the development of AI for the common good by conducting research on important and neglected issues of AI governance, and advising decision makers on this research through policy engagement.
Daron Acemoğlu is an economist and the Elizabeth and James Killian Professor of Economics and Institute Professor at the Massachusetts Institute of Technology (MIT), where he has taught since 1993. He was awarded the John Bates Clark Medal in 2005 and co-authored Why Nations Fail: The Origins of Power, Prosperity, and Poverty with James A. Robinson in 2012.
Diane Coyle, CBE, OBE, FAcSS is an economist, former advisor to the UK Treasury, and the Bennett Professor of Public Policy at the University of Cambridge, where she has co-directed the Bennett Institute since 2018. She was vice-chairman of the BBC Trust, the governing body of the British Broadcasting Corporation, and was a member of the UK Competition Commission from 2001 until 2019. In 2020, she published Markets, State, and People: Economics for Public Policy.
Joseph Stiglitz is an economist, public policy analyst, and a University Professor at Columbia University. He is a recipient of the Nobel Memorial Prize in Economic Sciences (2001) and the John Bates Clark Medal (1979). He is a former senior vice president and chief economist of the World Bank and is a former member and chairman of the US President’s Council of Economic Advisers. His most recent book, Measuring What Counts: The Global Movement for Well-Being, came out in 2019.
Heather Williams: Four Models of Arms Control for AI
Thu, 20 February 2020, 15:30 – 17:00 GMT
Artificial intelligence and emerging technologies have the potential to challenge conventional wisdom around various international security issues. Of particular concern with AI, for example, is its impact on crisis escalation and stability, and its potential integration with nuclear systems. Arms control has been a useful tool for managing these risks in the past, but it has historically struggled to incorporate new technologies (with a few important exceptions). This presentation will offer four models of why states have traditionally engaged in arms control and consider their applicability to AI. The presentation draws on ongoing research at King’s College London into the impact of emerging technologies on deterrence and strategic stability, and will conclude with recommendations for future scholarship on AI arms control and the role of AI in nuclear weapons policies.
About the speakers
Dr Heather Williams is a Lecturer in the Defence Studies Department and Centre for Science and Security Studies (CSSS). She is an Associate Fellow at the Royal United Services Institute (RUSI), a Senior Associate Fellow at the European Leadership Network and serves on the Wilton Park Advisory Council. She serves on the boards of the Nonproliferation Review and the UK Project on Nuclear Issues (PONI). From 2018-2019, Dr Williams served as Specialist Advisor to the House of Lords International Relations Committee inquiry into the Nuclear Non-Proliferation Treaty and Disarmament.
A perspective on fairness in machine learning from DeepMind
Silvia Chiappa, Research Scientist and William Isaac, Research Scientist, DeepMind
Thu, 17 October 2019, 16:00 – 17:30 BST
As the world moves towards applying machine learning techniques in high-stakes societal contexts – from the criminal justice system to education to healthcare – ensuring the fairness of these systems becomes an ever more important and urgent issue. In this talk, DeepMind research scientists Silvia and William will explain how Causal Bayesian Networks (CBNs) can be used as a tool for reasoning about and addressing fairness issues.
In the first part of the talk we will show that CBNs can provide us with a simple and intuitive visual tool for describing different possible unfairness scenarios underlying a dataset. We will use this viewpoint to revisit the recent debate surrounding the COMPAS pretrial risk assessment tool and, more generally, to point out that evaluating the fairness of a model requires careful consideration of the patterns of unfairness underlying the training data.
In the second part of the talk we will explain how CBNs can provide us with a powerful quantitative tool to measure unfairness in a dataset, and to help researchers in the development of techniques to address complex fairness issues.
This talk is based on two recent papers: A Causal Bayesian Networks Viewpoint on Fairness and Path-Specific Counterfactual Fairness.
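As a rough illustration of the quantitative use of a causal model described above, the following sketch (a toy example of my own, not the speakers’ code) simulates a small causal Bayesian network A → Q → D with a direct edge A → D, and compares the total effect of a sensitive attribute A on a decision D with the path-specific effect that bypasses the qualification Q. All structural equations and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational data from the toy CBN: A -> Q, A -> D, Q -> D.
A = rng.integers(0, 2, size=n)               # sensitive attribute
Q = 0.5 * A + rng.normal(size=n)             # qualification, partly shaped by A
D = 1.0 * Q + 0.3 * A + rng.normal(size=n)   # decision score

observed_gap = D[A == 1].mean() - D[A == 0].mean()

def counterfactual_decision(a_direct, a_for_q):
    """Decision when A is set to a_direct on the direct path A -> D,
    while Q is generated as if A were a_for_q (the path through Q)."""
    q = 0.5 * a_for_q + rng.normal(size=n)
    return 1.0 * q + 0.3 * a_direct + rng.normal(size=n)

total_effect = counterfactual_decision(1, 1).mean() - counterfactual_decision(0, 0).mean()
direct_effect = counterfactual_decision(1, 0).mean() - counterfactual_decision(0, 0).mean()

print(f"Observed gap in decisions between groups: {observed_gap:.2f}")
print(f"Total causal effect of A on D:            {total_effect:.2f}")
print(f"Direct (path-specific) effect of A on D:  {direct_effect:.2f}")
```

Which of these quantities should count as unfair depends on the assumed causal graph and on which pathways are deemed legitimate – exactly the kind of judgement the CBN viewpoint makes explicit.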
This event is co-hosted by the AI Governance Research Group, Future of Humanity Institute and the Rhodes Artificial Intelligence Lab.
About the speakers
Silvia Chiappa is a Research Scientist in Machine Learning at DeepMind. She received a Diploma di Laurea in Mathematics from University of Bologna and a PhD in Machine Learning from École Polytechnique Fédérale de Lausanne. Before joining DeepMind, Silvia worked in the Empirical Inference Department at the Max-Planck Institute for Intelligent Systems (Prof. Dr. Bernhard Schölkopf), in the Machine Intelligence and Perception Group at Microsoft Research Cambridge (Prof. Christopher Bishop), and at the Statistical Laboratory, University of Cambridge (Prof. Philip Dawid). Her research interests are based around Bayesian & causal reasoning, graphical models, variational inference, time-series models, and ML fairness and bias.
William Isaac is a Research Scientist with DeepMind’s Ethics and Society Team. Prior to DeepMind, William served as an Open Society Foundations Fellow and Research Advisor for the Human Rights Data Analysis Group focusing on algorithmic bias and fairness. William’s prior research centering on deployments of automated decision systems in the US criminal justice system has been featured in publications such as Science, the New York Times, and the Wall Street Journal. William received his Doctorate in Political Science from Michigan State University and a Masters in Public Policy from George Mason University.
Tony Hoare Room, Department of Computer Science, Robert Hooke Building, Parks Road, Oxford
Sarah Kreps – All the News that’s Fit to Fabricate: A Study of AI-Generated Text
Thursday, September 26, 2019, 15:00 – 16:00
The use of misinformation online has become a constant; only the way actors create and distribute that information is changing. Until now, their ability to interfere has been limited by resources and bandwidth. New technologies such as natural language processing models can help overcome those limitations by synthetically generating text in ways that mimic the style and substance of news stories. In this article, we present the findings from three experiments intended to study a particular NLP model called GPT-2, developed by the research group OpenAI. We first analyze three different-sized models to examine whether respondents can distinguish the difference between the synthetic and original text. We then automate the production of outputs in the second experiment to plot the full credibility distribution of each model size. Lastly, we conducted an experiment to understand the prospects for disinformation in an age of political polarization, studying whether credulity intersects with individuals’ partisan priors and the partisan angle of the synthetically-generated news source. The findings have important implications for understanding the role of artificial intelligence in the generation of misinformation and the potential for foreign election interference.
Sarah Kreps is a Professor of Government and Adjunct Professor of Law at Cornell University. In 2017-2018, she was an Adjunct Scholar at the Modern War Institute (West Point). She is also a Faculty Fellow in the Milstein Program in Technology and Humanity at the Cornell Tech Campus in New York City.
When Speed Kills: Autonomous Weapon Systems, Deterrence, and Stability
Michael C. Horowitz, Professor of political science at the University of Pennsylvania
Wed, 5 June 2019, 17:30 – 19:00 BST
This event has already taken place, but you can watch the recording here.
Autonomy on the battlefield represents one possible usage of narrow AI by militaries around the world. Research and development on autonomous weapon systems (AWS) by major powers, middle powers, and non-state actors makes exploring the consequences for the security environment a crucial task.
Michael will draw on classic research in security studies and examples from military history to assess how AWS could influence two outcome areas: the development and deployment of systems, including arms races, and the stability of deterrence, including strategic stability, the risk of crisis instability, and wartime escalation. He focuses on these questions through the lens of two characteristics of AWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices.
Seminar Room A, Manor Road Building
Securing a World of Physically Capable Computers
Bruce Schneier, Computer security and cryptography expert
Mon, 17 June 2019, 17:30 – 19:00 BST
This event has already taken place, but you can watch the recording here.
Computer security is no longer about data; it’s about life and property. This change makes an enormous difference and will inevitably disrupt technology industries. Firstly, data authentication and integrity will become more important than confidentiality. Secondly, our largely regulation-free Internet will become a thing of the past. Soon we will no longer have a choice between government regulation and no government regulation. Our choice is between smart government regulation and stupid government regulation.
Given this future, Bruce Schneier makes a case for why it is vital that we look back at what we’ve learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future. Bruce will also discuss how AI could be used to benefit cybersecurity, and how government regulation in the cybersecurity realm could suggest ways forward for government regulation for AI.
About the speakers
Bruce Schneier is an American cryptographer, computer security professional, privacy specialist and writer. Schneier is a fellow at the Berkman Center for Internet & Society at Harvard Law School, and a program fellow at the New America Foundation’s Open Technology Institute.
Gillian Hadfield is the inaugural Schwartz Reisman Chair in Technology and Society at the University of Toronto as well as Professor of Law and Professor of Strategic Management. She is also Director of the Schwartz Reisman Institute for Technology and Society. Gillian will be speaking about Reflections on Governance Solutions.
Lecture Theatre B, Wolfson Building, Department of Computer Science
The Character & Consequences of Today’s Technology Tsunami
Richard Danzig, former Secretary of the US Navy, Director at the Center for a New American Security
Tue, 14 May 2019, 17:30 – 19:00 BST
It is often observed that we live amidst a flood of scientific discoveries and technological inventions. The timing, and in important respects even the direction, of future developments cannot confidently be predicted. But this lecture draws on examples from many disparate technologies to identify important characteristics of technological change in our era; it outlines their implications for international security and our domestic well-being; and it describes ways in which recent failings should prompt new policies as increasingly powerful technologies unfold.
About the speakers
Richard Danzig is an American lawyer and government official who served as the 71st Secretary of the Navy under President Bill Clinton. He served as an advisor to Barack Obama during his presidential campaign and was later the Chairman of the national security think-tank the Center for a New American Security.
Todd H. Hall is Associate Professor in the Department of Politics and International Relations and Tutorial Fellow in Politics at St Anne’s College, University of Oxford.
Janina Dill is the John G. Winant Associate Professor of U.S. Foreign Policy at the Department of Politics and International Relations (DPIR) of the University of Oxford and a Professorial Fellow at Nuffield College and Co-Director of the Oxford Institute for Ethics, Law, and Armed Conflict (ELAC).
Rhodes House, South Parks Road
Alenka Turnsek, Rob McArgow, Neville Howlett, Rayna Taback, and Dave Murray on the Political Economy of AI
Friday, February 9, 2018, 12:00 – 3:00pm
An event with tax experts from PwC discussing the following:
- Value chains: The key value drivers/generators, and their relative contributions, are likely to differ between the traditional economy and the digital and AI economies. What are the distinguishing elements between the illustrative value chains of these three types of economy? What are the relative contributions of the constituent elements of the respective value chains?
- Operating models: Defining different digital operating models and how best to categorise them, e.g. by identifying their hallmarks. How, if at all, does AI disrupt the digital operating models?
- Data / value of data: For tax purposes, raw data is thought to have nominal value, if any, but from the business perspective data is fuel for AI. How can this gap in the perceived value of data be explained? We would like to discuss whether the capital/investment in the processing technology (algorithms) and skilled labour is the bridge between the two positions, or whether there are other elements that need to be taken into account.
- Role of people in profit allocation of MNCs: During the 2015 overhaul of the international taxation framework, allocation of profits to intangibles (including technology), risks and other assets was skewed towards the time and skill of the personnel developing, protecting and exploiting them, rather than merely the capital being put at risk for developing and deploying them. Although the AI economy also consists of people, intangibles (incl. data and technology) and capital, it seems to be more biased towards the latter two than towards people, and may replace people at some stage. Is that correct, or will the role of people in the AI economy remain as prominent as it is today?
- Impact on international trade: Could the proposed new short-term taxation measures (taxation of revenue, withholding taxes, equalisation levy, etc.) help or hinder the digital economy from the source (market) or residence country position? Could these measures raise international trade barriers, and what would be the consequences?
James D Morrow: Mutual restraint in war (looking towards AI)
Wednesday, February 28, 2018, 1:30 – 3:00pm
Professor Morrow’s research addresses theories of international politics, both the logical development and empirical testing of such theories. He is best known for pioneering the application of noncooperative game theory, drawn from economics, to international politics. His published work covers crisis bargaining, the causes of war, military alliances, arms races, power transition theory, links between international trade and conflict, the role of international institutions, and domestic politics and foreign policy.
Professor Morrow’s current research addresses the effects of norms on international politics. This project examines the laws of war in detail as an example of such norms.
Erik Gartzke: War and Peace in the Virtual World: How Drones and Cyberspace will (re)shape the nature of conflict
Tuesday, February 27, 2018, 4:30 – 6:00pm
New ways of warfare challenge what we think we know about the nature of conflict. At the same time, these new modes of interaction are subject to core logics of contestation that are as stable as the human penchant to compete. When and how virtual warfare can supplant terrestrial conflict depends on its ability to achieve the objectives of traditional uses of force. Similarly, automating battle will shape the politics of combat to the degree that it both fulfills the political purpose of force and supplants or mitigates its shortcomings. The presentation builds on several published studies by the author, laying out a basic logic of virtual warfare and explaining where and how it does, and does not, meet the objectives of military violence. The presentation will then move on to automation of military force, again applying a logic of warfare to the process of supplanting humans with machines on the battlefield.
Professor Gartzke studies the impact of information on war, peace and international institutions. Students of international politics are increasingly aware that what leaders and others know or believe is key to understanding fundamental international processes. Professor Gartzke’s research has appeared in the American Journal of Political Science, International Organization, International Studies Quarterly, the Journal of Conflict Resolution, the Journal of Politics and elsewhere. He is currently working on two books, one on globalization and the other on the democratic peace, as well as dozens of articles.
Updates
Annual Report 2020
2020 has been an eventful year for AI governance. Here follows a brief summary of our activities during the year.
The Windfall Clause: Distributing the Benefits of AI for the Common Good
The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. Read more in the full report.
Annual Report 2019
2019 has been an eventful year for AI governance. Here follows a brief summary of our activities during the year.
New Technical Report: Standards for AI Governance
Peter Cihon, Research Affiliate, explains the relevance of international standards to the global governance of AI in a new technical report. A summary of the report is available here, and the full report here.
Course on the Ethics of AI with OxAI
Carina Prunkl, a GovAI collaborator, will be running a course on the ethics of AI with the Oxford University student group OxAI, investigating the moral and social implications of artificial intelligence.
Annual Report 2018
The governance of AI is in my view the most important global issue of the coming decades, and it remains highly neglected. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth. This report provides a short summary of our work in 2018, with brief notes on our plans for 2019.
Allan Dafoe