The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford.  It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.  The Institute belongs to the Faculty of Philosophy and is affiliated with the Oxford Martin School.

Superintelligence: Paths, Dangers, Strategies
Oxford Martin Programme on the Impacts of Future Technology
Amlin Research Collaboration on Systemic Risk of Modelling
Global Priorities Project



FHI contributes chapter on existential risk to UK Chief Scientific Advisor’s report — November 2014

The 2014 UK Chief Scientific Advisor’s report includes a chapter on existential risk, written by FHI researchers Toby Ord and Nick Beckstead. The chapter describes the risks posed by AI, biotechnology, and geoengineering, as well as the ethical framework under which we ought to evaluate existential risk.

FHI Research Featured in New York Times — November 2014

On November 5th, FHI’s recent work on the future dangers of artificial intelligence was featured in the New York Times. 

Oxford Martin Lecture on Superintelligence — October 2014

On October 13th, Professor Nick Bostrom will present his recent book Superintelligence: Paths, Dangers, Strategies at the Oxford Martin School. The lecture will be followed by a book signing and drinks reception, open to the public.

Now Hiring: Interdisciplinary Researchers Needed for Risk Analysis — October 2014

The Future of Humanity Institute is hiring for two positions. We are looking for ambitious interdisciplinary researchers interested in issues of systemic risk.

Seminar: Deterrence Theory and Global Catastrophic Risk Reduction — October 2014

On October 13th, Dr. Seth Baum, the executive director of the Global Catastrophic Risk Institute, will lead a seminar on deterrence theory and global catastrophic risk reduction at FHI. 

More news…