This Summer Update highlights our key achievements and provides links to particularly interesting outputs.


In April 2016, Nick Bostrom gave a talk on macrostrategy as part of the Bank of England’s One Bank Flagship Seminar series. He outlined crucial considerations that arise when one’s objective has a time-neutral and altruistic component.

In May, we held two workshops. The first, in collaboration with the Machine Intelligence Research Institute (MIRI), covered the goals and principles of AI policy and strategy, value alignment for advanced machine learning, the relative importance of AI versus other existential risks, geopolitical strategy, the prospects for coordinated AGI development along the lines of the International Space Station, and a wide array of technical AI control topics. The second, with the Centre for the Study of Existential Risk (CSER), considered how to classify risks, the role of international governance, rapid problem attacks, surveillance, and opportunities for funders to reduce existential risk.

In May, the Global Priorities Project, in association with FHI, released the Global Catastrophic Risks 2016 report. The report surveys global catastrophic risks, events that could kill at least a tenth of the world’s population, and argues that these risks may even be growing. It also explores options for the international community to engage in risk reduction.

Nick Bostrom addressed the US National Academies of Sciences, Engineering, and Medicine as part of an initiative to inform decision-making on recent advances in human gene-editing research. He offered a broader perspective on historical attitudes towards medical interventions and introduced useful epistemic tools, such as the reversal test.

The policy team has been busy: Owen Cotton-Barratt gave oral and written evidence to the UK House of Commons Science and Technology Select Committee on the need for robust and transparent AI systems, and Niel Bowerman gave evidence on the ethics of artificial intelligence to the European Parliament’s Legal Affairs Committee.

In June, Stuart Armstrong and Laurent Orseau presented Safely Interruptible Agents, a collaboration between Google DeepMind and FHI, at the Conference on Uncertainty in Artificial Intelligence (UAI). Their research explores a method for ensuring that certain reinforcement learning agents can be repeatedly and safely interrupted by human or automatic overseers. The paper was covered in over 100 media articles and was the subject of pinned tweets by Demis Hassabis and Shane Legg, co-founders of Google DeepMind.

Nick Bostrom’s book Superintelligence is now out in paperback, and Research Associate Robin Hanson has published The Age of Em: Work, Love, and Life when Robots Rule the Earth. Research Associate Paul Christiano joined OpenAI as an intern and has written new posts including Efficient and Safely Scalable, Learning with Catastrophes, Red Teams, and The Reward Engineering Problem. He also co-authored Concrete Problems in AI Safety, which examines potential unintended and harmful behaviour arising from poorly designed AI systems.

We have expanded our research capacity, hiring Jan Leike as a Machine Learning Research Fellow and Miles Brundage as a Policy Research Fellow, both within the Strategic AI Research Centre. FHI has also recruited Prof. William MacAskill and Prof. Hilary Greaves to launch the new joint FHI and Centre for Effective Altruism ‘Programme on the Philosophical Foundations of Effective Altruism’.
