The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford.  It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.  The Institute belongs to the Faculty of Philosophy and is affiliated with the Oxford Martin School.

Superintelligence: Paths, Dangers, Strategies
Oxford Martin Programme on the Impacts of Future Technology
Amlin Research Collaboration on Systemic Risk of Modelling
Global Priorities Project

Allocating Existential Risk Mitigation Across Time — March 2015

In a recent technical report, Dr. Owen Cotton-Barratt discusses how we ought to allocate existential risk mitigation effort across time. The primary finding is that, all else being equal, we should prefer to do mitigation work earlier, and should prefer to work on risks that might come early.

Existential Risk and Existential Hope: Definitions — February 2015

In a recent report, FHI researchers examine the strengths and weaknesses of two existing definitions of existential risk and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening.

Risks and impacts of AI: conference, open letter, and new funding program — January 2015

Over the weekend of January 2, much of our research staff from the Oxford Martin Programme on the Impacts of Future Technology attended The Future of AI: Opportunities and Challenges, a conference held by the Future of Life Institute. The conference brought together AI researchers from academia and industry, AI safety researchers, lawyers, economists, and many others to discuss short- and long-term issues in AI’s impact on society.
