Best Paper Award at the 1st International Workshop on Evaluating Progress in Artificial Intelligence – EPAI 2020

Update: This paper has now been published. Carla Zoe Cremer (FHI Research Scholar) and Jess Whittlestone (Leverhulme CFI Senior Research Fellow) won the Best Paper Award at the 1st International Workshop on Evaluating Progress in Artificial Intelligence – EPAI 2020. The workshop was part of the 24th European Conference on Artificial Intelligence – ECAI 2020 Santiago [...]

Artificial intelligence: American attitudes and trends (Zhang, B. & Dafoe, A.). Available at SSRN 3312874.

This report presents a broad look at the American public’s attitudes toward artificial intelligence (AI) and AI governance, based on findings from a nationally representative survey of 2,000 American adults. As the study of public opinion toward AI is relatively new, we aimed for breadth over depth, with our questions touching on: workplace automation; attitudes regarding international cooperation; the public’s trust in various actors to develop and regulate AI; views about the importance and likely impact of different AI governance challenges; and historical and cross-national trends in public opinion regarding AI. Our results provide preliminary insights into the character of US public opinion regarding AI.

US Public Opinion on the Governance of Artificial Intelligence (Zhang, B. & Dafoe, A.). In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 187–193).

Artificial intelligence (AI) has widespread societal implications, yet social scientists are only beginning to study public attitudes toward the technology. Existing studies find that the public’s trust in institutions can play a major role in shaping the regulation of emerging technologies. Using a large-scale survey (N=2000), we examined Americans’ perceptions of 13 AI governance challenges as well as their trust in governmental, corporate, and multistakeholder institutions to responsibly develop and manage AI. While Americans perceive all of the AI governance issues to be important for tech companies and governments to manage, they have only low to moderate trust in these institutions to manage AI applications.

The Logic of Strategic Assets: From Oil to Artificial Intelligence (Ding, J. & Dafoe, A.). arXiv preprint arXiv:2001.03246.

What resources and technologies are strategic? This question is often the focus of policy and theoretical debates, where the label “strategic” designates those assets that warrant the attention of the highest levels of the state. But these conversations are plagued by analytical confusion, flawed heuristics, and the rhetorical use of “strategic” to advance particular agendas. We aim to improve these conversations through conceptual clarification, introducing a theory based on important rivalrous externalities for which socially optimal behavior will not be produced by markets or individual national security entities alone. We distill and theorize the three most important forms of these externalities, which involve cumulative-, infrastructure-, and dependency-strategic logics. We then employ these logics to clarify three important cases: the Avon 2 engine in the 1950s, the U.S.-Japan technology rivalry in the late 1980s, and contemporary conversations about artificial intelligence.

The American Public’s Attitudes Concerning Artificial Intelligence

A report published by the Center for the Governance of AI (GovAI), housed in the Future of Humanity Institute, surveys Americans’ attitudes toward artificial intelligence. The impact of artificial intelligence technology on society is likely to be large. While the technology industry and governments currently dominate policy conversations on AI, the authors expect the public […]

The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Brundage, M., Avin, S., Clark, J., et al., 2018)

This report distils findings from a workshop held in February 2017, as well as additional research by the authors. It explores possible risks to security posed by malicious applications of AI in the digital, physical, and political domains, and lays out a research agenda for further work in addressing such risks.

This report was written by researchers at the Future of Humanity Institute, the Centre for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a New American Security, and nine other institutions, drawing on expertise from a wide range of areas, including AI, cybersecurity, and public policy.

FHI researchers advise UK government on Artificial Intelligence

Nick Bostrom, Miles Brundage and Allan Dafoe are advising the UK government on issues concerning developments in artificial intelligence. Miles Brundage presented evidence on 11 September on the topic ‘Governance, social and organisational perspective for AI’ (evidence meeting 5), looking at AI and cultural systems and new forms of organisational structure. On 10 October, […]

Bad Actors and Artificial Intelligence Workshop

On 19 and 20 February, FHI hosted a workshop on the potential risks posed by the malicious use of emerging machine learning and artificial intelligence technologies. The workshop, co-chaired by Miles Brundage at FHI and Shahar Avin of the Centre for the Study of Existential Risk, invited experts in cybersecurity, AI governance, […]