The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford.  It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.  The Institute belongs to the Faculty of Philosophy and is affiliated with the Oxford Martin School.

Order Superintelligence: Paths, Dangers, Strategies
Oxford Martin Programme on the Impacts of Future Technology
Amlin Research Collaboration on Systemic Risk of Modelling
Global Priorities Project



Risks and impacts of AI: conference, open letter, and new funding program — January 2015

Over the weekend of January 2, much of our research staff from the Oxford Martin Programme on the Impacts of Future Technology attended The Future of AI: Opportunities and Challenges, a conference held by the Future of Life Institute to bring together AI researchers from academia and industry, AI safety researchers, lawyers, economists, and many others to discuss short- and long-term issues in AI’s impact on society.

FHI in 2014 — January 2015

In 2014 FHI produced over 20 publications and policy reports, and our research was the topic of over 1000 media pieces.  The highlight of the year was the publication of Superintelligence: Paths, Dangers, Strategies, which has opened a broader discussion on how to ensure our future AI systems remain safe.

New ideas on value porosity and utility diversification — January 2015

Nick Bostrom has completed a draft paper on value porosity and utility diversification.  This theory could be used as part of a ‘Hail Mary’ approach to the AI safety problem.

FHI research in Scientific American — December 2014

On December 16th, FHI researcher Carl Frey published a piece in Scientific American describing the challenges of a digital economy.

More news…