In the third quarter of 2017, FHI staff have continued their work in the institute’s four focus areas: AI Safety, AI Strategy, Biorisk, and Macrostrategy. Below, we outline some of our key outputs over the last quarter, current vacancies, and details on what our researchers have recently been working on.

This quarter, we are saying goodbye to two staff members who have been core to FHI’s operational success over several years. Kyle Scott will be moving back to the US, where he will continue his work on existential risk at the Berkeley Existential Risk Initiative (BERI). Niel Bowerman is moving on to a pilot project aimed at improving the US government’s decision-making capacity on AI policy. We thank both Kyle and Niel for their excellent work and wish them all the best with their future projects.

Key outputs from the last quarter

  • Finland’s President Sauli Niinistö addressed the UN General Assembly, highlighting the importance of international efforts to reduce biorisks and drawing on FHI’s advice in this area (full speech).
  • As part of our expanding research on biorisk, FHI researchers published three biosafety papers in Health Security this quarter.
  • Our AI Safety team published a paper on the safe exploration of reinforcement learning agents via human intervention: ‘Trial without Error’. An exploration of the research, including video content, can be found here.
  • Toby Ord talked about the importance of far-future concerns on an episode of the 80,000 Hours podcast in conversation with Rob Wiblin.

Individual researcher outputs

Nick Bostrom
Nick Bostrom is continuing his work on AI policy and the foundations of macrostrategy. He is advising key actors in the AI space and gave several lectures over the last quarter.

Toby Ord
Toby Ord has finished the manuscript for his book on Moral Uncertainty (with William MacAskill and Krister Bykvist) and is devoting most of his time to writing his book on existential risk. He has also recorded an in-depth podcast on the importance of the long-term future with Rob Wiblin and had his paper with Hilary Greaves, ‘Moral Uncertainty about Population Ethics’, accepted for publication.

Miles Brundage
Miles Brundage spoke to the All-Party Parliamentary Group on AI in the UK Parliament on 11 September. He co-authored a review paper on reinforcement learning (pre-print available here; the full paper is forthcoming in IEEE Signal Processing Magazine). Brundage also spoke on the “Careers in AI” panel at Effective Altruism Global San Francisco, building on his career guide published by 80,000 Hours. In collaboration with colleagues and external co-authors, he is completing a report on preventing and mitigating the misuse of AI, building on a workshop held earlier this year.

Owen Cotton-Barratt
Owen Cotton-Barratt has been working on plans for researcher training at FHI. He spoke at EA Global on how uncertainty about AI timelines should influence our actions, and joined a panel on advanced AI. His paper with Sebastian Farquhar and Andrew Snyder-Beattie on pricing the externalities of risky research was published in Health Security.

Allan Dafoe
Allan Dafoe is an Assistant Professor at Yale, lead of the Global Politics of AI Research Group, a Research Associate at FHI, and a collaborator with many at FHI. He has taken a year of sabbatical leave from Yale to work with the group at FHI. His activities during the past three months include: organising FHI’s strategy team (with Carrick Flynn), producing FHI’s written submission to the Lords Select Committee on AI (with Miles Brundage), writing an AI Strategy research agenda (not yet public), developing the group’s relationship with the University of Oxford, analysing strategic considerations related to openness in AI, analysing questions of global cooperation on AI, and consulting.

Anders Sandberg
Anders Sandberg chaired the Gothenburg Center for Advanced Studies (Gothenburg University and Chalmers University of Technology) programme on existential risk during September and October. He has submitted two book chapters on existential risk, and his chapter on catastrophic cybersecurity risks has been accepted. Several papers have been prepared as part of the GoCAS project (on information hazards, differential technology development, minimal human population, Tasmanian technology traps, nuclear near-misses, vacuum decay game theory, and species trajectories), alongside ongoing work on his and Toby Ord’s book “Grand Futures”. He gave talks on a variety of subjects and participated in several conferences, including a workshop on autonomous systems at DCDC (UK Ministry of Defence). His interviews have appeared in several media outlets, including Scientific American, Gizmodo, and Forskning.se.

Stuart Armstrong
Stuart Armstrong has continued his work on Oracle designs, low-impact AI and human value incoherence, as well as research investigating approaches for artificial agents to obey the intent of human instructions. He has also published a technical report on anthropic decision theory for self-locating beliefs.

Eric Drexler
Eric Drexler presented an overview of progress and prospects in atomically precise manufacturing at an international conference in Bristol. He expanded a set of documents that will support a broad reframing of questions of AI safety and strategy. Additionally, he is participating in a series of seminars with DeepMind, where he will be presenting the above work next quarter.

Carl Shulman 
Carl Shulman has been assisting Nick Bostrom in his work on AI policy. He has also been working on the implications of human enhancement technologies for global catastrophic risk.

Carrick Flynn
Carrick Flynn has been helping to build and organise FHI’s AI strategy team and has been researching issues of surveillance, multipolarity and unipolarity, international research coordination, and field building.

Ben Garfinkel
Ben Garfinkel has been researching case studies and issues of international governance relevant to the development of advanced artificial intelligence. He has also explored the political significance of blockchain technology.

Owain Evans
Owain Evans published the paper “Trial without Error: Towards Safe RL with Human Intervention” on arXiv (New Scientist coverage of the research is available here; a blog post and videos are available here). Evans also gave a well-attended talk at the Gatsby Institute (UCL). He is working with Clare Lyle and Sebastian Schulze on active reinforcement learning, and has started a new project with intern Neal Jean (Stanford), Girish Sastry, Andreas Stuhlmueller (Ought.org), and Ryan Carey.
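The core idea of the paper is that an overseer vetoes catastrophic actions before they reach the environment, so the agent can learn without ever experiencing a catastrophe. The sketch below is a toy construction for illustration only; the environment, class names, and penalty scheme are our own assumptions, not the paper’s implementation.

```python
# Toy sketch of safe RL via human intervention. Illustrative only:
# the environment, names, and penalty scheme are assumptions,
# not the paper's actual code.

class ToyEnv:
    """A one-dimensional walk where position 10 is a catastrophe."""
    def __init__(self):
        self.pos = 5

    def step(self, action):                  # action is -1 or +1
        self.pos += action
        catastrophe = self.pos >= 10
        reward = -100.0 if catastrophe else float(action == 1)
        return self.pos, reward, catastrophe

class InterventionWrapper:
    """An overseer checks each proposed action; anything judged
    catastrophic is replaced by a safe fallback and penalised, so
    the agent gets a learning signal without the disaster."""
    def __init__(self, env, is_catastrophic, fallback=-1, penalty=-10.0):
        self.env = env
        self.is_catastrophic = is_catastrophic
        self.fallback = fallback
        self.penalty = penalty

    def step(self, action):
        if self.is_catastrophic(self.env.pos, action):
            pos, _, done = self.env.step(self.fallback)
            return pos, self.penalty, done   # blocked and penalised
        return self.env.step(action)

# The overseer vetoes any step that would walk off the edge.
env = InterventionWrapper(ToyEnv(), lambda pos, a: pos + a >= 10)
print(env.step(+1))                          # safe step, reward 1.0
```

In the paper itself, the human overseer is eventually replaced by a trained “blocker” that imitates these interventions; the wrapper above only illustrates the interception step.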

Andrew Snyder-Beattie
Andrew Snyder-Beattie published three papers on biosecurity in the journal Health Security, and is now working on a model of the Fermi paradox and building a larger team at FHI to work on longer-term biosecurity issues.

Piers Millett
Piers Millett has been developing concepts around distributed diagnostics, taking the first steps in a new collaboration on managing biotechnology information hazards, and reviewing the UK’s plans to strengthen reporting of biorisks and threats. He has continued to raise awareness of global catastrophic risks, existential risk, and biosecurity issues amongst the biotech community, including with the Asia-Pacific Biosafety Association and the European synthetic biology community. He also published two papers on biosecurity in the latest issue of Health Security in collaboration with other FHI researchers.

Tamay Besiroglu
Tamay Besiroglu has joined FHI as an intern this quarter, working with the AI Strategy team on game-theoretic modelling of AI races and analysing interventions and strategies that could reduce risks in race scenarios.
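To give a flavour of the kind of question involved, the toy model below (our own construction for illustration, not the modelling work described above) has two teams each choose a safety level: lowering safety speeds development and improves the odds of winning the race, but raises the chance that the winning system causes a disaster.

```python
# Toy two-team AI race model (our own illustrative construction,
# not the actual modelling work described above).

def expected_payoff(s_a, s_b, prize=1.0, disaster=-10.0):
    """Expected payoff to team A when teams choose safety levels
    s_a, s_b in [0, 1]; lower safety means faster development,
    hence better odds of winning the race."""
    speed_a, speed_b = 1.0 - s_a, 1.0 - s_b
    total = speed_a + speed_b
    p_win = 0.5 if total == 0 else speed_a / total
    # The winner's system is deployed safely with probability s_a.
    return p_win * (s_a * prize + (1.0 - s_a) * disaster)

# A coarse scan over safety levels for both teams.
for s_a in (0.0, 0.5, 1.0):
    for s_b in (0.0, 0.5, 1.0):
        print(f"s_a={s_a:.1f}, s_b={s_b:.1f} -> "
              f"{expected_payoff(s_a, s_b):+.2f}")
```

Varying the disaster term shows the trade-off such models probe: with a large downside, caution maximises expected payoff despite worse win odds, while a small downside rewards racing.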

Baobao Zhang
Baobao Zhang is a PhD candidate in Political Science at Yale University and a junior visiting scholar at Nuffield College, Oxford. She has fielded surveys to study the public’s attitudes towards the regulation of artificial intelligence and concerns about the potential risks of AI.

Jade Leung
Jade Leung has just joined the AI Strategy team; she is a DPhil candidate in International Relations at the University of Oxford. She will focus on the design of global governance regimes for long-term artificial intelligence, and has begun by surveying the relevant literature on emerging technology governance, international cooperation theory, and related fields of global governance and international relations.

 
