FHI receives £1.7m grant from Open Philanthropy Project

The Open Philanthropy Project recently announced a grant of £1,620,452 to the Future of Humanity Institute (FHI) to provide general support, as well as a grant of £88,922 to allow us to hire Piers Millett to lead our work on biosecurity. Most of the larger grant adds unrestricted funding to FHI’s reserves, which will […]

Asilomar AI principles announced

AI researchers gathered at Asilomar from 3–8 January 2017 for a conference on Beneficial Artificial Intelligence organised by the Future of Life Institute. Nick Bostrom spoke about his recent research on the interaction between AI control problems and governance strategy within AI risk, and on the role of openness (slides/video). Bostrom and co-authors have […]

FHI Reporting and Review

FHI Annual Review 2016

Future of Humanity Institute Annual Review 2016 [pdf] In 2016, we continued our mission of helping the world think more systematically about how to craft a better future. We advanced our core research areas of macrostrategy, technical artificial intelligence (AI) safety, AI strategy, and biotechnology safety. The Future of Humanity Institute (FHI) has grown by one third, […]

Quarterly Update Autumn 2016

This post outlines activities at the Future of Humanity Institute during July, August and September 2016. We published three new papers, attended several conferences, hired Prof. William MacAskill, hosted four interns and one summer research fellow, and made progress in a number of research areas.

FHI researchers attend IEEE

Miles Brundage, Anders Sandberg and Andrew Snyder-Beattie attended the Symposium on Ethics of Autonomous Systems (SEAS), an event organised by the Institute of Electrical and Electronics Engineers (IEEE) and attended by globally recognised experts from a diversity of fields.

Colloquium Series on Robust and Beneficial AI

We recently teamed up with the Machine Intelligence Research Institute (MIRI) to co-host a 22-day Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office. The colloquium was aimed at bringing together safety-conscious AI scientists from academia and industry to share their recent work. The event served that purpose well, initiating some new collaborations and a number of new conversations between researchers who hadn’t interacted before or had only talked remotely.

Quarterly Update Summer 2016

This Q2 update highlights some of our key achievements and provides links to particularly interesting outputs. We conducted two workshops, released a report with the Global Priorities Project and the Global Challenges Foundation, published a paper with DeepMind, and hired Jan Leike and Miles Brundage.

Safely Interruptible Agents

Laurent Orseau of Google DeepMind and Stuart Armstrong of the Future of Humanity Institute present research on a method for ensuring that reinforcement learning agents can be safely interrupted, repeatedly if necessary, by human or automatic overseers. The aim is to prevent agents from learning about these interruptions, or from taking steps to avoid or manipulate them.
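The intuition behind one of the paper's results can be illustrated with a toy sketch (not the paper's formal construction): because off-policy Q-learning bootstraps on the best available action rather than the action actually taken, an overseer can repeatedly override the agent's actions without biasing the values it learns. The environment, constants, and `train` function below are illustrative assumptions, not code from the paper.

```python
import random

# Toy 3-state chain: action 1 moves right, action 0 stays.
# Reward 1 for reaching the last state, which ends the episode.
N_STATES, ACTIONS = 3, [0, 1]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

def step(s, a):
    """Illustrative environment dynamics."""
    s2 = min(s + a, N_STATES - 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def train(interrupt_prob, episodes=2000, seed=0):
    """Q-learning where the overseer may force the safe action 0."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy behaviour policy...
            if rng.random() < EPS:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: Q[s][x])
            # ...possibly overridden by an interruption forcing "stay"
            if rng.random() < interrupt_prob:
                a = 0
            s2, r, done = step(s, a)
            # off-policy update: bootstraps on max_a Q(s2, a), not on
            # the (possibly interrupted) action actually executed next,
            # so interruptions do not distort the learned values
            Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
            s = s2
    return Q
```

Running `train(0.0)` and `train(0.5)` yields essentially the same value estimates for moving right, even though in the second run half of all intended actions are overridden: the interruptions change which states are visited, but not the update targets.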