We’re growing. Applications to join our 3-month Governance of AI Fellowship are currently closed, but you can still send us a general application.


The Centre for the Governance of AI (GovAI), housed at the Future of Humanity Institute, University of Oxford, strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. Our focus is on the political challenges arising from transformative AI: advanced AI systems whose long-term impacts may be as profound as the industrial revolution. We seek to guide the development of AI for the common good by conducting research on important and neglected issues of AI governance, and advising decision makers on this research through policy engagement.

GovAI produces research which is foundational to the field of AI governance, for example mapping crucial considerations to direct the research agenda, or identifying distinctive features of the transition to transformative AI and corresponding policy considerations. Our research also addresses more immediate policy issues, such as malicious use and China’s AI strategy. Current focuses include international security, the history of technology development, and public opinion.

In addition to research, the centre is active in international policy circles, advising governments and industry leaders on AI strategy. Our researchers have spoken at the NIPS and AAAI/ACM conferences, and at events involving the German Federal Foreign Office, the European Commission, the European Parliament, the UK House of Lords, the US Congress, and others.

GovAI’s papers and reports are available at https://www.fhi.ox.ac.uk/GovAI/#publications.

Learn more about our work

2019 Beneficial AGI Conference, Puerto Rico

You can also learn more about the Centre for the Governance of AI on various podcasts; several are listed under Policy & Public Engagement below.

Our work looks at:

  • trends, causes, and forecasts of AI progress;
  • transformations of the sources of wealth and power;
  • global political dimensions of AI-induced unemployment and inequality;
  • risks and dynamics of international AI races;
  • possibilities for global cooperation;
  • associated emerging technologies such as those involving crypto-economic systems, weapons systems, nanotechnology, biotechnology, and surveillance;
  • global public opinion, values, ethics;
  • long-run possibilities for beneficial global governance of advanced AI.

Sign up below to receive periodic updates from us about e.g. events and open positions.

You can sign up to the Future of Humanity Institute mailing list here.

Featured Recent Work:

For all our publications, see Publications.

AI Governance: A Research Agenda

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions. Read More.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation


This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. The report explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks. Read More

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers including the New York Times, the BBC, Reuters, and the Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

Artificial Intelligence: American Attitudes and Trends

This report by Baobao Zhang and Allan Dafoe presents the results from an extensive look at the American public’s attitudes toward AI and AI governance, with questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of U.S. public opinion regarding AI. Read More; and see HTML version.

Featured in Bloomberg, Vox, Axios, the MIT Technology Review, and the Future of Life Institute podcast.

Deciphering China’s AI Dream

This report by Jeffrey Ding examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, […] Read More

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

Featured Policy Writing:

For all our policy writing, see Policy & Public Engagement.

Export Controls in the Age of AI

Jade Leung, Sophie-Charlotte Fischer, and Allan Dafoe

28 August 2019

Thinking About Risks From AI: Accidents, Misuse and Structure

Remco Zwetsloot, Allan Dafoe

11 February 2019

Beyond the AI Arms Race

Remco Zwetsloot, Helen Toner, and Jeffrey Ding

16 November 2018

JAIC: Pentagon debuts artificial intelligence hub

Jade Leung, Sophie-Charlotte Fischer

8 August 2018


You can also find our publications on our Google Scholar page.

Below, you’ll find our select publications, technical reports, policy writing, resources, team, policy and public engagement, opportunities, and past events.

Select Publications

The Windfall Clause: Distributing the Benefits of AI for the Common Good (2020)

Cullen O’Keefe, Peter Cihon, Ben Garfinkel, Carrick Flynn, Jade Leung, and Allan Dafoe

The Windfall Clause is a policy proposal for an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits garnered from the development of transformative AI. This report reviews the motivations for such a policy, enumerates central considerations regarding the design of the Clause, weighs up its limitations against alternative solutions, and situates the Windfall Clause in the broader conversation on the distribution of gains from AI. We hope to spark productive debate on such crucial issues, and contribute to the growing, global discussion centered around channeling technology-driven economic growth towards robustly equitable, broadly beneficial outcomes. 

Want to learn more? Read a summary of the report, the full report, or the paper published at the AAAI/ACM Conference on AI, Ethics, and Society, or listen to Cullen O’Keefe’s talk about the idea.
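One way to make the core mechanism concrete: windfall-style commitments are naturally expressed as a schedule of marginal rates applied to profits above thresholds defined as fractions of gross world product. The short Python sketch below illustrates that general shape only; the bracket thresholds, rates, and figures are hypothetical and not taken from the report.

```python
def windfall_obligation(profits: float, gross_world_product: float) -> float:
    """Donation owed under an illustrative windfall function.

    Marginal rates apply to the slice of profits falling inside each
    bracket, where brackets are defined as fractions of gross world
    product (GWP). All thresholds and rates here are hypothetical.
    """
    # (bracket lower bound as a fraction of GWP, marginal rate above it)
    brackets = [
        (0.001, 0.01),  # profits above 0.1% of GWP: 1% marginal rate
        (0.01, 0.20),   # profits above 1% of GWP: 20% marginal rate
        (0.10, 0.50),   # profits above 10% of GWP: 50% marginal rate
    ]
    owed = 0.0
    for i, (lower, rate) in enumerate(brackets):
        upper = brackets[i + 1][0] if i + 1 < len(brackets) else float("inf")
        slice_of_profits = min(profits, upper * gross_world_product) - lower * gross_world_product
        if slice_of_profits > 0:
            owed += rate * slice_of_profits
    return owed


# Example: a firm earning profits equal to 2% of an assumed $100 trillion GWP
print(windfall_obligation(profits=2e12, gross_world_product=1e14))  # ~2.09e11, i.e. roughly $209 billion
```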

Artificial Intelligence: American Attitudes and Trends (2019)

Baobao Zhang and Allan Dafoe

This report by Baobao Zhang and Allan Dafoe presents the results from an extensive look at the American public’s attitudes toward AI and AI governance, with questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of U.S. public opinion regarding AI.

Read More; and see HTML version.

Featured in Bloomberg, Vox, Axios, and the MIT Technology Review.

AI Governance: A Research Agenda (2018)

Allan Dafoe

This research agenda by Allan Dafoe proposes a framework for research on AI governance. It provides a foundation to introduce and orient researchers to the space of important problems in AI governance. It offers a framing of the overall problem, an enumeration of the questions that could be pivotal, and references to published articles relevant to these questions.

Read More

Deciphering China’s AI Dream: The context, components, capabilities, and consequences of China’s strategy to lead the world in AI (2018)

Jeffrey Ding

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

Read More

‘Deciphering China’s AI Dream’ has received press attention in the MIT Technology Review, Bloomberg, and the South China Morning Post, among other media outlets.

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018)

Miles Brundage, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, Rebecca Crootof, Owain Evans, Michael Page, Joanna Bryson, Roman Yampolskiy, Dario Amodei

This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and 9 other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy. This report distills findings from a 2017 workshop as well as additional research done by the authors. It explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks.

Read More

‘The Malicious Use of Artificial Intelligence’ received coverage from hundreds of news providers including the New York Times, the BBC, Reuters, and the Verge. The report was praised by Rory Stewart, UK Minister of Justice; Major General Mick Ryan, Commander at the Australian Defence College; and Tom Dietterich, former President of the Association for the Advancement of Artificial Intelligence.

The Vulnerable World Hypothesis (2018)

Nick Bostrom

Scientific and technological progress might change people’s capabilities or incentives in ways that would destabilize civilization. For example, advances in DIY biohacking tools might make it easy for anybody with basic training in biology to kill millions; novel military technologies could trigger arms races in which whoever strikes first has a decisive advantage; or some economically advantageous process may be invented that produces disastrous negative global externalities that are hard to regulate. This paper introduces the concept of a vulnerable world: roughly, one in which there is some level of technological development at which civilization almost certainly gets devastated by default, i.e. unless it has exited the “semi-anarchic default condition”. Several counterfactual historical and speculative future vulnerabilities are analyzed and arranged into a typology. A general ability to stabilize a vulnerable world would require greatly amplified capacities for preventive policing and global governance. The vulnerable world hypothesis thus offers a new perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order.

Read More

When Will AI Exceed Human Performance? Evidence from AI Experts (2017)

Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, Owain Evans

Advances in artificial intelligence (AI) will transform modern life by reshaping transportation, health, science, finance, and the military. To adapt public policy, we need to better anticipate these advances. Here we report the results from a large survey of machine learning researchers on their beliefs about progress in AI…

Read More

‘When will AI exceed human performance?’ was ranked #16 in Altmetric’s most discussed articles of 2017. The survey was covered by the BBC, Newsweek, the New Scientist, the MIT Technology Review, Business Insider, The Economist, and many other international news providers.

Strategic Implications of Openness in AI Development (2016)

Nick Bostrom

This paper attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals). Short-term impacts of increased openness appear mostly socially beneficial in expectation…

Read More

Technical Reports

Beyond Near- and Long-Term: Towards a Clearer Account of Research Priorities in AI Ethics and Society (2020)

Carina Prunkl and Jess Whittlestone

One way of carving up the broad ‘AI ethics and society’ research space that has emerged in recent years is to distinguish between ‘near-term’ and ‘long-term’ research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed.

We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.

Should Artificial Intelligence Governance be Centralised? Six Design Lessons from History (2019)

Peter Cihon, Matthijs M. Maas, and Luke Kemp

Can effective international governance for artificial intelligence remain fragmented, or is there a need for a centralised international organisation for AI? We draw on the history of other international regimes to identify advantages and disadvantages in centralising AI governance. Some considerations, such as efficiency and political power, speak in favour of centralisation. Conversely, the risk of creating a slow and brittle institution speaks against it, as does the difficulty in securing participation while creating stringent rules. Other considerations depend on the specific design of a centralised institution. A well-designed body may be able to deter forum shopping and ensure policy coordination. However, forum shopping can be beneficial and a fragmented landscape of institutions can be self-organising. Centralisation entails trade-offs and the details matter. We conclude with two core recommendations. First, the outcome will depend on the exact design of a central institution. A well-designed centralised regime covering a set of coherent issues could be beneficial. But locking in an inadequate structure may pose a fate worse than fragmentation. Second, for now, fragmentation will likely persist. This should be closely monitored to see if it is self-organising or simply inadequate.

Social and Governance Implications of Improved Data Efficiency (2020)

Aaron D. Tucker, Markus Anderljung, and Allan Dafoe

Many researchers work on improving the data efficiency of machine learning. What would happen if they succeed? This paper explores the socioeconomic impact of increased data efficiency. Specifically, we examine the intuition that data efficiency will erode the barriers to entry protecting incumbent data-rich AI firms, exposing them to more competition from data-poor firms. We find that this intuition is only partially correct: data efficiency makes it easier to create ML applications, but large AI firms may have more to gain from higher-performing AI systems. Further, we find that the effects on privacy, data markets, robustness, and misuse are complex. For example, while it seems intuitive that misuse risk would increase along with data efficiency – as more actors gain access to any level of capability – the net effect crucially depends on how much defensive measures are improved. More investigation into data efficiency, as well as research into the “AI production function”, will be key to understanding the development of the AI industry and its societal impacts.

Who owns artificial intelligence? A preliminary analysis of corporate intellectual property strategies and why they matter (2020)

Nathan Calvin and Jade Leung

This working paper is a preliminary analysis of the legal rules, norms, and strategies governing artificial intelligence (AI)-related intellectual property (IP). We analyze the existing AI-related IP practices of select companies and governments, and provide some tentative predictions for how these strategies and dynamics may continue to evolve in the future.

Read more here

Who will govern artificial intelligence? Learning from the history of strategic politics in emerging technologies (2019)

Jade Leung, DPhil thesis from the University of Oxford, International Relations

Artificial intelligence (AI) is a strategic general purpose technology (GPT) with the potential to deliver vast economic value and substantially affect national security. The central claim motivating this work is that the development of a strategic GPT follows a distinct pattern of politics. By modelling this pattern, we can make predictions about how the politics of AI will unfold.

The proposed model follows a life cycle of a strategic GPT. It focuses on three actors – the state, firms, and researchers. Each actor is defined by their goals, resources and constraints. The model analyses the relationships between these actors – specifically, the synergies and conflicts that emerge between them as their goals, resources, and constraints interact.

Case studies of strategic GPTs developed in the U.S. – specifically aerospace technology, biotechnology, and cryptography – show that the model captures much of history accurately. When applied to AI, the model also captures political dynamics to date well, and it motivates predictions about how we could expect the politics of AI to unfold. For example, I predict that AI firms will be increasingly constrained by the legislative environment, and more pressured to serve national defense and security interests. Some will be caught in the cross-hairs of public critique and researcher pushback; some, however, will willingly sell AI technologies to the state with little friction. Further, I predict that the political influence of researchers will shrink, going against what some may view as a rise in researcher influence given recent events of employee backlash in AI firms. In turn, the inclination and capacity for the state to exert control over AI’s development and proliferation will likely grow, exercised via tools such as export controls.

Artificial intelligence is going to matter greatly, and indeed, already does. It matters, then, that we understand the politics that surrounds it, and that we ultimately lay the groundwork for the governance of a technology that is poised to be transformative.

Read More

How Does the Offense-Defense Balance Scale? (2019)

Ben Garfinkel and Allan Dafoe in Journal of Strategic Studies

We ask how the offense-defense balance scales, meaning how it changes as investments into a conflict increase. To do so we offer a general formalization of the offense-defense balance in terms of contest success functions. Simple models of ground invasions and cyberattacks that exploit software vulnerabilities suggest that, in both cases, growth in investments will favor offense when investment levels are sufficiently low and favor defense when they are sufficiently high. We refer to this phenomenon as offensive-then-defensive scaling or OD-scaling. Such scaling effects may help us understand the security implications of applications of artificial intelligence that in essence scale up existing capabilities.
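For readers new to the formalism, a contest success function maps both sides’ investments into a probability of offensive success. A generic ratio form (a standard illustration of the idea, not the paper’s specific models of invasions or cyberattacks) is

\[
P(\text{offense succeeds}) \;=\; \frac{f(O)}{f(O) + g(D)},
\]

where \(O\) and \(D\) are offensive and defensive investments and \(f\) and \(g\) are increasing functions. OD-scaling then describes the case in which, as both investments grow together, this probability first rises (offense favoured at low investment levels) and later falls (defense favoured at high levels).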

Read More

Standards for AI Governance: International Standards to Enable Global Coordination in AI Research & Development (2019)

Peter Cihon

Today, AI policy analysis tends to focus on national strategies, nascent international initiatives, and the policies of individual corporations. Yet international standards produced by nongovernmental organizations are also an important site of forthcoming AI governance. International standards can impact national policies, international institutions, and individual corporations alike. International standards offer an impactful policy tool in the global coordination of beneficial AI development.

The case for further engagement in the development of international standards for AI R&D is detailed in this report. It explains the global policy benefits of AI standards, outlines the current landscape for AI standards around the world, and offers a series of recommendations to researchers, AI developers, and other AI organizations.

Read More

Stable Agreements in Turbulent Times: A Legal Toolkit for Constrained Temporal Decision Transmission (2019)

Cullen O’Keefe

This century, advanced artificial intelligence (“Advanced AI”) technologies could radically change economic or political power. Such changes produce a tension that is the focus of this Report. On the one hand, the prospect of radical change provides the motivation to craft, ex ante, agreements that positively shape those changes. On the other hand, a radical transition increases the difficulty of forming such agreements since we are in a poor position to know what the transition period will entail or produce. The difficulty and importance of crafting such agreements is positively correlated with the magnitude of the changes from Advanced AI. The difficulty of crafting long-term agreements in the face of radical changes from Advanced AI is the “turbulence” with which this Report is concerned. This Report attempts to give readers a toolkit for making stable agreements—ones that preserve the intent of their drafters—in light of this turbulence.

Read More

Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence (2018)

Published in Should we fear artificial intelligence?, a report by the Science and Technology Options Assessment division of the European Parliament.

Miles Brundage 

Expert opinions on the timing of future developments in artificial intelligence (AI) vary widely, with some expecting human-level AI in the next few decades and others thinking that it is much further off (Grace et al., 2017). Similarly, experts disagree on whether developments in AI are likely to be beneficial or harmful for human civilization, with the range of opinions including those who feel certain that it will be extremely beneficial, those who consider it likely to be extremely harmful (even risking human extinction), and many in between (AI Impacts, 2017). While the risks of AI development have recently received substantial attention (Bostrom, 2014; Amodei and Olah et al., 2016), there has been little systematic discussion of the precise ways in which AI might be beneficial in the long term.

Read More

Recent Developments in Cryptography and Possible Long-Run Consequences (2018)

Unpublished manuscript

Ben Garfinkel

Historically, progress in the field of cryptography has been enormously consequential. Over the past century, for instance, cryptographic discoveries have played a key role in a world war and made it possible to use the internet for business and private communication. In the interest of exploring the impact the field may have in the future, I consider a suite of more recent developments. My primary focus is on blockchain-based technologies (such as cryptocurrencies and smart contracts) and on techniques for computing on confidential data (such as homomorphic encryption and secure multiparty computation). I provide an introduction to these technologies that assumes no previous knowledge of cryptography. Then, I consider eight speculative predictions about the long-term consequences these emerging technologies could have. These predictions include the views that a growing number of information channels used to conduct surveillance may go dark, that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions ranging from banks to voting authorities may shrink, and that new transnational institutions known as decentralized autonomous organizations may emerge. Finally, I close by discussing some challenges that could limit the significance of emerging cryptographic technologies. On the basis of these challenges, it is premature to predict that any of them will approach the transformativeness of previous technologies. However, this remains a rapidly developing area well worth following.

To request the full version of the report, contact the author at benmgarfinkel [at] gmail.com.

Accounting for the Neglected Dimensions of AI Progress (2018)

Fernando Martínez-Plumed, Shahar Avin, Miles Brundage*, Allan Dafoe*, Sean Ó hÉigeartaigh, José Hernández-Orallo

This paper analyzes and reframes AI progress. In addition to the prevailing metrics of performance, it highlights the usually neglected costs paid in the development and deployment of a system, including: data, expert knowledge, human oversight, software resources, computing cycles, hardware and network facilities, development time, etc. These costs are paid throughout the life cycle of an AI system, fall differentially on different individuals, and vary in magnitude depending on the replicability and generality of the AI solution. The multidimensional performance and cost space can be collapsed to a single utility metric for a user with transitive and complete preferences. Even absent a single utility function, AI advances can be generically assessed by whether they expand the Pareto (optimal) surface. We explore a subset of these neglected dimensions using the two case studies of Alpha* and ALE. This broadened conception of progress in AI should lead to novel ways of measuring success in AI, and can help set milestones for future progress…

* – Centre for the Governance of AI

Read More
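To make the Pareto-surface criterion described above concrete, here is a minimal Python sketch of the underlying comparison; the dimension names and numbers are hypothetical and not taken from the paper. An advance counts as generic progress if no existing system matches or beats it on every tracked performance and cost dimension.

```python
from typing import Dict, List

Metrics = Dict[str, float]  # e.g. {"performance": ..., "compute": ..., "data": ...}


def dominates(a: Metrics, b: Metrics, higher_is_better: Dict[str, bool]) -> bool:
    """True if `a` is at least as good as `b` on every dimension and
    strictly better on at least one (i.e. `a` Pareto-dominates `b`)."""
    at_least_as_good = all(
        (a[k] >= b[k]) if better else (a[k] <= b[k])
        for k, better in higher_is_better.items()
    )
    strictly_better = any(
        (a[k] > b[k]) if better else (a[k] < b[k])
        for k, better in higher_is_better.items()
    )
    return at_least_as_good and strictly_better


def expands_pareto_surface(new: Metrics, existing: List[Metrics],
                           higher_is_better: Dict[str, bool]) -> bool:
    """A new system adds a point to the Pareto surface if no existing
    system Pareto-dominates it."""
    return not any(dominates(old, new, higher_is_better) for old in existing)


# Hypothetical systems measured on task performance and two cost dimensions
dims = {"performance": True, "compute": False, "data": False}
existing = [
    {"performance": 0.90, "compute": 100.0, "data": 1e6},
    {"performance": 0.85, "compute": 20.0, "data": 5e5},
]
new_system = {"performance": 0.88, "compute": 15.0, "data": 4e5}
print(expands_pareto_surface(new_system, existing, dims))  # True
```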

Policy Desiderata in the Development of Machine Superintelligence (2016)

Nick Bostrom, Allan Dafoe, Carrick Flynn

Machine superintelligence could plausibly be developed in the coming decades or century. The prospect of this transformative development presents a host of political challenges and opportunities. This paper seeks to initiate discussion of these by identifying a set of distinctive features of the transition to a machine intelligence era. From these distinctive features, we derive a correlative set of policy desiderata—considerations that should be given extra weight in long-term AI policy…

Read More

Policy writing

Thinking About Risks From AI: Accidents, Misuse and Structure

Remco Zwetsloot, Allan Dafoe

11 February 2019

Beyond the AI Arms Race

Remco Zwetsloot, Helen Toner, and Jeffrey Ding

16 November 2018

JAIC: Pentagon debuts artificial intelligence hub

Jade Leung, Sophie-Charlotte Fischer

8 August 2018

Resources

Syllabus: Artificial Intelligence and China (January 2020)

Jeffrey Ding, Sophie-Charlotte Fischer, Brian Tse, Chris Byrd

In recent years, China’s ambitious development of artificial intelligence (AI) has attracted much attention in policymaking and academic circles. This syllabus aims to broadly cover the research landscape surrounding China’s AI ecosystem, including the context, components, capabilities, and consequences of China’s AI development. The materials presented range from blogs to books, with an emphasis on English translations of Mandarin source materials. The reading list is not exhaustive, and it will benefit from feedback and revisions.

Syllabus: Artificial Intelligence and International Security (July 2018)

Remco Zwetsloot

This syllabus covers material located at the intersection between artificial intelligence (AI) and international security. The syllabus can be used in structured self-study (or group-study) for those new to this space, or as a resource for instructors designing class-specific syllabi (which would probably have to be significantly shorter). It is designed to be useful to (a) people new to both AI and international relations (IR); (b) people coming from AI who are interested in an IR angle on the problems; (c) people coming from IR who are interested in working on AI. Depending on which of groups (a)-(c) you fall in, it may be feasible to skip or skim certain sections. For sections that you are particularly interested in, do consider diving into the sources cited in the readings—for most topics what I have assigned just skims the surface and is intended only as a starting point.

The Team

Allan Dafoe

Director, Centre for the Governance of AI

Nick Bostrom

Director, Future of Humanity Institute

Macrostrategy; strategic implications of AI

Jade Leung

Project Manager: Research & Partnerships; Researcher

Emerging & dual-use technology governance, role of private companies, firm-government relations, international cooperation

Markus Anderljung

Project Manager: Operations & Policy Engagement

Team growth; Policy engagement; Research support

Ben Garfinkel

Researcher

International security; privacy; long-term forecasting

Toby Shevlane

Researcher

Private governance; cooperation; innovation sharing

Jeffrey Ding

Researcher

China’s AI strategy; China’s approach to strategic technologies

Research Affiliates and Associates

Sophie-Charlotte Fischer

ETH Zurich-based Research Affiliate

International security and arms control; IT and politics; foreign policy

Matthijs M. Maas

University of Copenhagen-based Research Affiliate

AI governance; technology management regimes; nuclear deterrence stability; securitization theory

Remco Zwetsloot

Yale University-based Research Affiliate

International security; arms racing and arms control; bargaining theory

Miles Brundage

Research Affiliate

AI progress forecasting; science and technology policy

Carrick Flynn

Research Affiliate

Legal issues with AI; AI governance challenges; policy

Helen Toner

Research Associate

AI policy and strategy; progress in machine learning & AI; effective philanthropy

Baobao Zhang

Research Affiliate

Public opinion research; American politics; public policy

Cullen O’Keefe

Research Affiliate

Implications of corporate, US, and international law for AI governance; benevolent AI governance structures

Brian Tse

Policy Affiliate

China-U.S. relations; global governance of existential risk; China’s AI safety development

Waqar Zaidi

Research Affiliate

History of Science and Technology; international control of powerful technologies

Aaron Tucker

Research Affiliate

Technical AI safety, the AI production function, AI forecasting

Emefa Agawu

Research Communication Consultant

Public affairs, cybersecurity, US policy, political perceptions of global catastrophic risks

Carina Prunkl

Research Affiliate

Senior Research Scholar, Research Scholars Programme

Ethics of AI; Philosophy; Quantum Technologies

Max Daniel

Research Affiliate

Senior Research Scholar, Research Scholars Programme

Macrostrategy; firm incentives; race dynamics

Hiski Haukkala

Policy Expert

Theory, policy and practice of international politics; Mechanisms to increase stability in the world through the use of AI; AI and the EU

Andrew Trask

Research Affiliate

Structured Transparency, Technological Solutions to Governance Problems, Machine Learning

Ulrike Franke

Policy Affiliate

Future of War, EU Policy, National Security

External DPhil Supervisors

Duncan Snidal
Professor of International Relations, University of Oxford

Karolina Milewicz
Associate Professor of International Relations, University of Oxford

Former Research Affiliates

Max Negele, Sören Mindermann, Aaron Tucker, Tamay Besiroglu, Nathan Calvin, Paul de Font-Reaulx, Genevieve Fried, Roxanne Heston, Katelynn Kyker, Clare Lyle, William Rathje, Tom Sittler.

GovAI alumni now work at organisations including Microsoft, OpenAI, the Center for Security and Emerging Technology, and AI Now.

Policy & Public Engagement

Researchers at the Centre for the Governance of AI are actively involved in the policy and public dialogues on the impact of advanced, transformative AI. We seek to offer authoritative, actionable and accessible insight to a range of audiences in policy, academia, and the public. The following is a selection of speaking engagements, podcasts, media appearances and policy writings from our team.

Recent Policy & Public Engagement

Brian Tse: 'Towards A Global Community Of Shared Future in AGI' | (January 5, 2020. Beneficial AGI conference: Puerto Rico)

Allan Dafoe, Jade Leung, and Brian Tse from our team participated in the Beneficial AGI conference in Puerto Rico. Allan and Brian presented.

Watch Brian’s presentation here.

Allan Dafoe: 'Frontier Exploration and Innovation Practices of Artificial Intelligence Security' | (August 30, 2019. World AI Conference: Shanghai)

Allan Dafoe spoke at the World AI Conference in Shanghai on AI governance and security, delivering a keynote speech on the frontiers of artificial intelligence security.

Read more about the conference here.

Jeffrey Ding: 'What People Get Wrong About China and Artificial Intelligence' | (July 9, 2019. Fortune)

In an interview with Fortune, Ding explained that much of what is written about China’s multi-billion-dollar push into A.I. seems to be written in a “vacuum”, with little context or comparison between China’s A.I. abilities and those of other countries.

Read the article here.

Jade Leung: 'What happens when AI fails – Concrete solutions for a better AI' | (Hello Tomorrow Global Summit: Paris)

Jade Leung participated in the panel “What happens when AI fails – Concrete solutions for a better AI” at the Hello Tomorrow Global Summit in Paris.

Read more about the summit here.

Carina Prunkl: 'Ethics of AI course' | (Michaelmas term. The University of Oxford)

Carina Prunkl organised an Ethics of AI course in Michaelmas term for the Chevening Gurukul Fellowship for Leadership and Excellence at the Saïd Business School, University of Oxford.

Read more about the course here.


Sophie-Charlotte Fischer: 'The Emergence of Artificial Intelligence' | (WEF, ETH and Microsoft: Davos)

Sophie-Charlotte Fischer spoke at a World Economic Forum side event in Davos hosted by ETH and Microsoft on “The Emergence of Artificial Intelligence.”

Sophie-Charlotte Fischer: 'Robotics, Artificial Intelligence & Humanity' | (May 16, 2019. Robotics, AI & Humanity Conference: Vatican)

Sophie-Charlotte Fischer spoke at a two-day conference on the impact of robotics and artificial intelligence on humanity, held in the Vatican and organized by the Pontifical Academy of Social Sciences and the Pontifical Academy of Sciences.

Jade Leung: 'How Can We See the Impact of AI Strategy Research?' | (June 23, 2019. EA Global: San Francisco)

Watch the video here.

Brian Tse: 'Improving Coordination with China to Reduce AI Risk' | (June 23, 2019. EA Global: San Francisco)

Watch the video here.

Markus Anderljung: 'Governing Transformative Artificial Intelligence' | (EAGx Nordics)

Watch the video here.

An Interview with Ben Garfinkel, Governance of AI Program Researcher

Ben Garfinkel was interviewed in The Politic, the Yale College Journal of Politics, a monthly Yale University student publication.

Read the interview here

Markus Anderljung: 'AI Safety - Human values aligned with AI' | (September 21, 2019. AICast podcast)

Markus talks about human values and how we should plan and prepare as humanity moves towards building artificial general intelligence (AGI). The goal of long-term artificial intelligence safety is to ensure that advanced AI systems are aligned with human values — that they reliably do things that people want them to do.

Listen to the podcast here.

Jeffrey Ding: 'Artificial Intelligence in China' | (May 21, 2019. Ark Investment Podcast)

In this podcast, Jeffrey talks about his work in the AI field and his focus on translating developments from China for a more Western audience. He talks about the reasons why he started his newsletter, namely that the AI community in China is mostly abreast of advancements from the US and UK, while the same cannot be said in the opposite direction. This language asymmetry, as Jeffrey calls it, means there is a gap in the knowledge base in the Americas and Europe around the burgeoning Chinese AI scene.

Listen to the podcast here.

Cullen O'Keefe: 'The Windfall Clause: Sharing the benefits of advanced AI' | (June 23, 2019. EA Global: San Francisco)

The potential upsides of advanced AI are enormous, but there’s no guarantee they’ll be distributed optimally. In this talk, Cullen O’Keefe discusses one way we could work toward equitable distribution of AI’s benefits — the Windfall Clause, a commitment by AI firms to share a significant portion of their future profits — as well as the legal validity of such a policy and some of the challenges to implementing it.

Watch the video here.

Jade Leung, Sophie-Charlotte Fischer, & Allan Dafoe: "Export Controls in the Age of AI" | (28 August 2019: War on the Rocks)

What does technological leadership look like in an era of artificial intelligence? The United States, like other countries, is in the midst of grappling with this question against a backdrop of the rise of China and the growing realization that “business as usual” will no longer suffice for America to maintain its technological advantage. Washington has begun to take some important steps to translate this realization into action. In February, President Donald Trump launched the American AI Initiative in recognition that “American leadership in AI is of paramount importance to maintaining the economic and national security of the United States.” In a less constructive fashion, two months later Sen. Josh Hawley (R-Mo.) introduced the China Technology Transfer Control Act of 2019 that would “make it harder for American companies to export major emerging technologies to China.” Clearly, AI is on the agenda.

Unfortunately, Washington appears to be defaulting to traditional, 20th-century policy tools to address a 21st-century problem…

Read the full text here.

Jeffrey Ding: "AI Alignment Podcast: China’s AI Superpower Dream" | (16 August 2019: Future of Life Institute AI Alignment Podcast)

“In July 2017, The State Council of China released the New Generation Artificial Intelligence Development Plan. This policy outlines China’s strategy to build a domestic AI industry worth nearly US$150 billion in the next few years and to become the leading AI power by 2030. This officially marked the development of the AI sector as a national priority and it was included in President Xi Jinping’s grand vision for China.” (FLI’s AI Policy – China page) In the context of these developments and an increase in conversations regarding AI and China, Lucas spoke with Jeffrey Ding from the Center for the Governance of AI (GovAI). Jeffrey is the China lead for GovAI, where he researches China’s AI development and strategy, as well as China’s approach to strategic technologies more generally.
Find the podcast here.

Jade Leung: "AI Alignment Podcast: On the Governance of AI" | (22 July 2019: Future of Life Institute AI Alignment Podcast)

In this podcast, Lucas spoke with Jade Leung from the Center for the Governance of AI (GovAI). GovAI strives to help humanity capture the benefits and mitigate the risks of artificial intelligence. The center focuses on the political challenges arising from transformative AI, and they seek to guide the development of such technology for the common good by researching issues in AI governance and advising decision makers. Jade is Head of Research and Partnerships at GovAI, where her research focuses on modeling the politics of strategic general purpose technologies, with the intention of understanding which dynamics seed cooperation and conflict.
Find the podcast here.

Jeffrey Ding & Helen Toner: "US Senate Hearing on Technology, Trade, and Military-Civil Fusion: China's Pursuit of Artificial Intelligence, New Materials, and New Energy" | (7 June 2019: U.S.-China Economic and Security Review Commission)

The U.S.-China Economic and Security Review Commission held a hearing on Technology, Trade, and Military-Civil Fusion: China’s Pursuit of Artificial Intelligence, New Materials, and New Energy. Jeffrey Ding, researcher at GovAI, and Helen Toner, Research Associate with GovAI, were among those providing evidence.
Listen to the hearing here. Read Jeffrey Ding’s testimony here. Read Helen Toner’s testimony here.

Peter Cihon with others: "Comment on National Institute of Standards and Technology – RFI: Developing a Federal AI Standards Engagement Plan" | (6 June 2019)

GovAI submitted written comments (work led by Peter Cihon) as well as a second round of targeted edits to the National Institute of Standards and Technology (NIST) to support its ongoing work to develop a federal plan for technical artificial intelligence (AI) standards. We collaborated with several organizations on this effort, including the Center for Long-Term Cybersecurity, the Future of Life Institute, and certain researchers at the Leverhulme Centre for the Future of Intelligence. On August 9, NIST published their final report, “U.S. Leadership in AI: A Plan for Federal Engagement in Developing Technical Standards and Related Tools”, taking numerous suggestions from CLTC into consideration. The plan recommends the federal government “commit to deeper, consistent, long-term engagement in AI standards development activities to help the United States to speed the pace of reliable, robust, and trustworthy AI technology development.”

Remco Zwetsloot & Allan Dafoe: "Thinking About Risks From AI: Accidents, Misuse and Structure" | (11 February 2019: Lawfare)

Read more here

Baobao Zhang: "It’s 2043. We Need a New American Dream for the A.I. Revolution." | (12 August 2019: The New York Times)

A fictional op-ed written as if in 2043. Read more here.

Allan Dafoe: 'Private Sector Leadership in AI Governance' | (December 11th, 2018: The Digital Society Conference)

On December 10-11, 2018, the Digital Society Conference 2018 – Empowering Ecosystems took place at ESMT Berlin. The two-day conference included panels, presentations, and workshops from many different perspectives such as science, industry, and politics. The conference covered new developments in security and privacy, digital politics, and industrial strategies. The reality of the rise of artificial intelligence (AI) was a particular focus, including its societal implications and how to understand and harness the battle for AI dominance. More about the conference here.

Allan Dafoe & Jade Leung: 'What does AI mean for the future of humanity?' | (December 10th, 2018: Futuremakers podcast)

Philosopher Peter Millican discusses the future of society and AI, and some of the difficult ethical choices that lie ahead. As we hand some of these choices over to machines, are we confident they will reach conclusions that we can accept? Can, or should, a human always be in control of an artificial intelligence? Can we train automated systems to avoid catastrophic failures that humans might avoid instinctively? To explore these questions, Millican interviews Allan and Jade along with Mike Osborne, co-director of the Oxford Martin programme on Technology and Employment.
The podcast is available here.

Allan Dafoe: 'The AI Revolution and International Politics' | (November 14th, 2018. Oxford Artificial Intelligence Society)

Through research and policy engagement, the Centre for the Governance of AI strives to steer the development of artificial intelligence for the common good. At this Oxford AI Society event, Allan discussed the Centre’s lines of work, which examine the political, economic, military, governance, and ethical dimensions of transformative AI.

Jade Leung: 'Why Companies Should be Leading on AI Governance' | (October 27th, 2018. EA Global London)

Governance is usually a job that we associate with governments, states, and international organisations. This talk argues that, in the case of AI, private companies are not only necessary for governance but are also best placed to lead in laying the foundations for a credible, scalable AI governance regime.

Benjamin Garfinkel: 'How Sure Are We About This AI Stuff?' | (October 27, 2018, EA Global London)

In this talk, Benjamin Garfinkel reviews what he understands to be the case for prioritizing AI issues, and identifies areas where further published analysis would be valuable to underwrite the prominence of this topic within Effective Altruism.

Allan Dafoe: 'Regulating Artificial Intelligence in the area of defence' | (October 10th, 2018. SEDE Public Hearing on Artificial intelligence and its future impact on security)

By invitation, Allan Dafoe spoke at a public hearing on ‘Artificial intelligence and its future impact on security’, organized by the Subcommittee on Security and Defence of the European Parliament. A recording of the talk is available here.

Carrick Flynn: 'AI Governance Landscape' | (June 10th, 2018. EA Global San Francisco)

The development of artificial intelligence is well-poised to massively change the world. It’s possible that AI could make life better for all of us, but many experts think there’s a non-negligible chance that the overall impact of AI could be extremely bad. In this talk from Effective Altruism Global 2018: San Francisco, Carrick Flynn lays out what we know from history about controlling powerful technologies, and what the tiny field of AI governance is doing to help AI go well. A recording of Carrick’s talk is available here; a transcript is available here.

Jade Leung: 'Analyzing AI Actors' | (June 10th, 2018. EA Global San Francisco)

Who would you rather have access to human-level artificial intelligence: the US government, Google, the Chinese government or Baidu? The biggest governments and tech firms are the most likely to develop advanced AI, so understanding their goals, abilities and constraints is a vital part of predicting AI’s trajectory. In this talk from EA Global 2018: San Francisco, Jade Leung explores how we can think about major players in AI, including an informative case study. A recording of Jade’s talk is available here. A transcript is available here.

Benjamin Garfinkel: 'The Future of Surveillance Doesn't Need to be Dystopian' | (June 9th, 2018, EA Global San Francisco)

This talk considers two worrisome narratives about technological progress and the future of surveillance. In the first narrative, progress threatens privacy by enabling ever-more-pervasive surveillance. For instance, it is becoming possible to automatically track and analyze the movements of individuals through facial recognition cameras. In the second narrative, progress threatens security by creating new risks that cannot be managed with present levels of surveillance. For instance, small groups developing cyber weapons or pathogens may be unusually difficult to detect. It is suggested that another, more optimistic narrative is also plausible. Technological progress, particularly in the domains of artificial intelligence and cryptography, may help to erase the trade-off between privacy and security.

Jeffrey Ding: Participant in BBC Machine Learning Fireside Chat | (June 6th 2018, London)

BBC Machine Learning Fireside Chats hosted a discussion between Jeffrey Ding and Charlotte Stix of the Leverhulme Centre for the Future of Intelligence, University of Cambridge. The conversation covered China’s national AI development plan, the state of US ML research, and the position of Europe and Britain in the global AI space.

Allan Dafoe: ‘Keynote on AI Governance’ | (June 1st 2018, Public Policy Forum)

This keynote address on the governance of AI was given at a Public Policy Forum seminar series attended by deputy ministers and senior officials of the Canadian government.

Benjamin Garfinkel: 'Recent Developments in Cryptography and Why They Matter' | (May 1st 2018, Oxford Internet Institute)

This talk surveys a range of emerging technologies in the field of cryptography, including blockchain-based technologies and secure multiparty computation, and then analyzes their potential long-term political significance. These predictions include the views that a growing number of information channels used to conduct surveillance may “go dark,” that it may become easier to verify compliance with agreements without intrusive monitoring, that the roles of a number of centralized institutions ranging from banks to voting authorities may shrink, and that new transnational institutions known as “decentralized autonomous organizations” may emerge.

Miles Brundage: 'Offensive applications of AI' | (April 11th, 2018, CyberUK)

Miles Brundage presented the Malicious Use of AI report at a CyberUK 2018 panel.

Sophie-Charlotte Fischer: 'Artificial Intelligence: What implications for Foreign Policy?' | (April 11th, 2018, German Federal Foreign Office)

This panel discussion, co-organized by the German Federal Foreign Office, the Stiftung Neue Verantwortung and the Mercator Foundation, discussed the findings of a January report by SNV, “Artificial Intelligence and Foreign Policy”. The report seeks to provide a foundation for planning a foreign policy strategy that responds effectively to the emerging power of AI in international affairs.

Allan Dafoe: Chair of panel ‘Artificial Intelligence and Global Security: Risks, Governance, and Alternative Futures’ | (April 6th 2018, Annual Conference of the Johnson Center for the Study of American Diplomacy, Yale University)

The panel addressed cybersecurity leadership and strategy from the perspective of the Department of Defense. The panelists were Dario Amodei, Research Scientist and Team Lead for Safety at OpenAI; Jason Matheny, Director of the Intelligence Advanced Research Projects Agency; and the Honorable Robert Work, former Acting and Deputy Secretary of Defense and now Senior Counselor for Defense at the Center for a New American Security. The keynote address at the conference was given by Eric Schmidt, and Henry Kissinger also gave a talk.

Matthijs Maas: 'Regulating for ‘normal AI accidents’: operational lessons for the responsible governance of AI deployment' | (February 2nd, 2018, AAAI/ACM Conference on AI, Ethics and Society)

Paper presentation arguing that many AI applications involve networked (tightly coupled, opaque) systems operating in complex or competitive environments, which makes such systems prone to ‘normal accident’-type failures. While this suggests that large-scale, cascading errors in AI systems are very hard to prevent or stop, an examination of the operational features that lead technologies to exhibit such failures enables us to derive both tentative principles for precautionary policymaking, and practical recommendations for the safer deployment of AI systems. Conference paper available here.

Allan Dafoe: 'Governing the AI Revolution: The Research Landscape' | (January 25th, 2018, CISAC, Stanford University)

Artificial intelligence (AI) is rapidly improving. The opportunities are tremendous, but so are the risks. Existing and soon-to-exist capabilities pose several plausible extreme governance challenges. These include massive labor displacement, extreme inequality, an oligopolistic global market structure, reinforced authoritarianism, shifts and volatility in national power, and strategic instability. Further, there is no apparent ceiling to AI capabilities, experts envision that superhuman capabilities in strategic domains will be achieved in the coming four decades, and radical surprise breakthroughs are possible. Such achievements would likely transform wealth, power, and world order, though global politics will in turn crucially shape how AI is developed and deployed. The consequences are plausibly of a magnitude and on a timescale to dwarf other global concerns, leaders of governments and firms are asking for policy guidance, and yet scholarly attention to the AI revolution remains negligible. Research is thus urgently needed on the AI governance problem: the problem of devising global norms, policies, and institutions to best ensure the beneficial development and use of advanced AI.
Event information available here.

Allan Dafoe: ‘Strategic and Societal Implications of ML’ | (December 8th 2017, Neural Information Processing Systems Conference)

This paper was given at a workshop entitled ‘Machine Learning and Computer Security’.

Allan Dafoe: Evidence Panelist for All Party Parliamentary Group on Artificial Intelligence Evidence Meeting | (October 30th 2017, Evidence Meeting 7: International Perspective and Exemplars)

This panel discussion focused on which countries and communities are more prepared for AI, and how they could be used as case studies. Topics included best practice, national versus multilateral or international approaches, and probable timelines.

Opportunities

Governance of AI Fellowship

Applications are currently closed. However, we are likely to reopen applications in the Fall for Spring or Summer 2021.

The Centre for the Governance of AI at the University of Oxford is seeking 2-5 exceptional researchers to join our interdisciplinary team for the three-month Governance of AI Fellowship. Fellows will receive a generous stipend and have the opportunity to contribute to cutting-edge research in a fast-growing field, while gaining expertise in parts of our Research Agenda.

General Applications (Researchers and Policy Experts)

The Centre for the Governance of AI, at the Future of Humanity Institute, is always looking for exceptional candidates. We are looking for Policy Experts who can translate our research into policy impact, as well as researchers to collaborate with. If the Governance of AI Fellowship is not a good fit for you, e.g. due to your seniority, feel free to contact markus.anderljung@philosophy.ox.ac.uk.

In all candidates, we seek high general aptitude, self-direction, openness to feedback, and a firm belief in our mission.

Across each of these roles, we are especially interested in people with varying degrees of skill or expertise in the following areas:

  1. International relations, especially in the areas of international cooperation, international law, international political economy, global public goods, constitutional and institutional design, diplomatic coordination and cooperation, arms race dynamics, history and politics of transformative technologies, governance, and grand strategy.
  2. Chinese politics and machine learning in China.
  3. Game theory and mathematical modelling.
  4. Survey design and statistical analysis.
  5. Large intergovernmental scientific research organizations and projects (such as CERN, ISS, and ITER).
  6. Technology and other types of forecasting.
  7. Law and/or policy.

Our goal is to identify exceptional talent. We are interested in hiring for full-time work at Oxford. We are also interested in getting to know exceptional people who might only be available part-time or for remote work.

As we work closely with leading AI labs and the effective altruism community, familiarity and involvement with them is a plus.

For the Policy Expert role, click here for more information. If you are interested in research collaborations, contact markus.anderljung@philosophy.ox.ac.uk.

If you think that you might be a good fit for the centre but do not fit any of our listed positions, please email markus.anderljung@philosophy.ox.ac.uk with your CV and a brief statement of interest outlining (i) why you want to work with us, (ii) what you can contribute to our team and (iii) how this role fits into your plans.

All qualified applicants will be considered for employment without regard to race, color, religion, sex, age or national origin.

Past events

A perspective on fairness in machine learning from DeepMind
Silvia Chiappa, Research Scientist and William Isaac, Research Scientist, DeepMind

More info and registration here

As the world moves towards applying machine learning techniques in high-stakes societal contexts (from the criminal justice system to education to healthcare), ensuring the fairness of these systems becomes an ever more important and urgent issue. In this talk, DeepMind Research Scientists Silvia and William will explain how Causal Bayesian Networks (CBNs) can be used as a tool for reasoning about and addressing fairness issues.

In the first part of the talk we will show that CBNs provide a simple and intuitive visual tool for describing the different possible unfairness scenarios underlying a dataset. We will use this viewpoint to revisit the recent debate surrounding the COMPAS pretrial risk assessment tool and, more generally, to point out that evaluating the fairness of a model requires careful consideration of the patterns of unfairness underlying the training data.

In the second part of the talk we will explain how CBNs provide a powerful quantitative tool for measuring unfairness in a dataset, and how they can help researchers develop techniques to address complex fairness issues.

This talk is based on two recent papers: A Causal Bayesian Networks Viewpoint on Fairness and Path-Specific Counterfactual Fairness.
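The style of causal reasoning described in this talk can be illustrated with a small, self-contained example. The sketch below is not the speakers' model or the method from either paper: the graph, variable names, and all probabilities are invented purely for illustration. It builds a three-node causal Bayesian network (sensitive attribute A, qualification Q, decision D) and compares interventional decision rates as a coarse unfairness diagnostic.

```python
# Toy causal Bayesian network (illustrative only, not the speakers' model).
# Graph: A -> Q, A -> D, Q -> D, where A is a sensitive attribute,
# Q a qualification score, and D a hiring decision. All numbers are invented.

P_A = {0: 0.5, 1: 0.5}            # P(A)

P_Q_given_A = {                   # P(Q | A): qualification depends on A,
    0: {0: 0.4, 1: 0.6},          # e.g. through unequal access to training
    1: {0: 0.6, 1: 0.4},
}

P_D1_given_AQ = {                 # P(D = 1 | A, Q): decision depends on both
    (0, 0): 0.2, (0, 1): 0.8,
    (1, 0): 0.1, (1, 1): 0.7,
}

def p_decision_given_do_a(a):
    """P(D = 1 | do(A = a)): set A by intervention, then marginalise over Q."""
    return sum(P_Q_given_A[a][q] * P_D1_given_AQ[(a, q)] for q in (0, 1))

# Population decision rate, marginalising over both A and Q.
overall = sum(P_A[a] * P_Q_given_A[a][q] * P_D1_given_AQ[(a, q)]
              for a in (0, 1) for q in (0, 1))

# A coarse unfairness diagnostic: the gap between interventional decision rates.
# A path-specific analysis would further split this into the direct A -> D
# effect and the indirect effect mediated by Q.
for a in (0, 1):
    print(f"P(D=1 | do(A={a})) = {p_decision_given_do_a(a):.2f}")
print(f"Overall P(D=1) = {overall:.2f}")
print(f"Interventional gap = {p_decision_given_do_a(0) - p_decision_given_do_a(1):.2f}")
```

A path-specific analysis of the kind discussed in the papers above would go further, separating how much of the gap flows through the direct A to D edge versus the indirect path through Q.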

This event is co-hosted by the Centre for the Governance of Artificial Intelligence (GovAI), Future of Humanity Institute and the Rhodes Artificial Intelligence Lab.

About the speakers

Silvia Chiappa is a Research Scientist in Machine Learning at DeepMind. She received a Diploma di Laurea in Mathematics from the University of Bologna and a PhD in Machine Learning from École Polytechnique Fédérale de Lausanne. Before joining DeepMind, Silvia worked in the Empirical Inference Department at the Max-Planck Institute for Intelligent Systems (Prof. Dr. Bernhard Schölkopf), in the Machine Intelligence and Perception Group at Microsoft Research Cambridge (Prof. Christopher Bishop), and at the Statistical Laboratory, University of Cambridge (Prof. Philip Dawid). Her research interests include Bayesian and causal reasoning, graphical models, variational inference, time-series models, and ML fairness and bias.

William Isaac is a Research Scientist with DeepMind’s Ethics and Society Team. Prior to DeepMind, William served as an Open Society Foundations Fellow and Research Advisor for the Human Rights Data Analysis Group, focusing on algorithmic bias and fairness. William’s prior research, centred on deployments of automated decision systems in the US criminal justice system, has been featured in publications such as Science, the New York Times, and the Wall Street Journal. William received his Doctorate in Political Science from Michigan State University and a Masters in Public Policy from George Mason University.

Thu, 17 October 2019, 16:00 – 17:30 BST

Tony Hoare Room, Department of Computer Science, Robert Hooke Building, Parks Road, Oxford

When Speed Kills: Autonomous Weapon Systems, Deterrence, and Stability
Michael C. Horowitz, Professor of political science at the University of Pennsylvania

More info and registration here

Autonomy on the battlefield represents one possible usage of narrow AI by militaries around the world. Research and development on autonomous weapon systems (AWS) by major powers, middle powers, and non-state actors makes exploring the consequences for the security environment a crucial task.

Michael will draw on classic research in security studies and examples from military history to assess how AWS could influence two outcome areas: the development and deployment of systems, including arms races, and the stability of deterrence, including strategic stability, the risk of crisis instability, and wartime escalation. He will examine these questions through the lens of two characteristics of AWS: the potential for increased operational speed and the potential for decreased human control over battlefield choices.

Wed, 5 June 2019, 17:30 – 19:00 BST

Seminar Room A, Manor Road Building

Securing a World of Physically Capable Computers
Bruce Schneier, Renowned computer security and cryptography expert

More info and registration here

Computer security is no longer about data; it’s about life and property. This change makes an enormous difference and will inevitably disrupt technology industries. First, data authentication and integrity will become more important than confidentiality. Second, our largely regulation-free Internet will become a thing of the past. Soon we will no longer have a choice between government regulation and no government regulation. Our choice is between smart government regulation and stupid government regulation.

Given this future, Bruce Schneier makes a case for why it is vital that we look back at what we’ve learned from past attempts to secure these systems, and forward at what technologies, laws, regulations, economic incentives, and social norms we need to secure them in the future. Bruce will also discuss how AI could be used to benefit cybersecurity, and how government regulation in the cybersecurity realm could suggest ways forward for government regulation for AI.

Mon, 17 June 2019, 17:30 – 19:00 BST

Lecture Theatre B, Wolfson Building, Department of Computer Science

The Character & Consequences of Today’s Technology Tsunami
Richard Danzig, former Secretary of the US Navy, Director at the Center for a New American Security

Register here

It is often observed that we live amidst a flood of scientific discoveries and technological inventions. The timing, and in important respects even the direction, of future developments cannot confidently be predicted. But this lecture draws on examples from many disparate technologies to identify important characteristics of technological change in our era; it outlines their implications for international security and our domestic well-being; and it describes ways in which recent failings should prompt new policies as increasingly powerful technologies unfold.

Tue, 14 May 2019, 17:30 – 19:00 BST

Rhodes House, South Parks Road

Updates

The Windfall Clause: Distributing the Benefits of AI for the Common Good 

The Windfall Clause is an ex ante commitment by AI firms to donate a significant amount of any eventual extremely large profits. Read more in the full report.
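As a purely illustrative sketch of how such a commitment could be operationalised, the snippet below computes a donation obligation from a hypothetical progressive "windfall function". The brackets, rates, and gross world product figure are invented for the example and are not the report's proposed schedule.

```python
# Hypothetical, illustrative windfall function: the donation owed is a
# progressive function of a firm's annual profits measured as a share of
# gross world product (GWP). All brackets and rates below are invented.

# (lower bound of bracket as a share of GWP, marginal rate applied above it)
HYPOTHETICAL_BRACKETS = [
    (0.001, 0.01),   # above 0.1% of GWP: 1% marginal rate
    (0.01, 0.20),    # above 1% of GWP: 20% marginal rate
    (0.10, 0.50),    # above 10% of GWP: 50% marginal rate
]

def windfall_obligation(profits, gross_world_product):
    """Donation owed under the hypothetical schedule above (same units as profits)."""
    share = profits / gross_world_product
    owed_share = 0.0
    for i, (lower, rate) in enumerate(HYPOTHETICAL_BRACKETS):
        upper = (HYPOTHETICAL_BRACKETS[i + 1][0]
                 if i + 1 < len(HYPOTHETICAL_BRACKETS) else float("inf"))
        if share > lower:
            owed_share += (min(share, upper) - lower) * rate
    return owed_share * gross_world_product

# Example: profits equal to 2% of an (illustrative) $100 trillion GWP.
gwp = 100e12
print(f"${windfall_obligation(0.02 * gwp, gwp) / 1e9:.0f}bn owed")
```

Using marginal brackets, as in progressive taxation, avoids cliff effects where a firm's obligation would jump discontinuously as its profits cross a threshold.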

GovAI Annual Report 2019

2019 has been an eventful year for AI governance and the Centre. Here follows a brief summary of our activities during the year.

New Technical Report: Standards for AI Governance

Peter Cihon, Research Affiliate at the Centre for the Governance of AI, explains the relevance of international standards to the global governance of AI in a new technical report. A summary of the report is available here, and the full report here.

Course on the Ethics of AI with OxAI

Carina Prunkl, a GovAI collaborator, will be running a course on the ethics of AI with the Oxford University student group OxAI, investigating the moral and social implications of Artificial Intelligence.

GovAI Annual Report 2018

The governance of AI is in my view the most important global issue of the coming decades, and it remains highly neglected. It is heartening to see how rapidly this field is growing, and exciting to be part of that growth. This report provides a short summary of our work in 2018, with brief notes on our plans for 2019.
Allan Dafoe – Director, Centre for the Governance of AI