The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford.  It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects.  The Institute belongs to the Faculty of Philosophy and is affiliated with the Oxford Martin School.

Now Hiring Postdoctoral Researchers
Order Superintelligence: Paths, Dangers, Strategies
Oxford Martin Programme on the Impacts of Future Technology
Amlin Research Collaboration on Systemic Risk of Modelling
Global Priorities Project

Toby Ord on the likelihood of natural and anthropogenic existential risks — June 2015

In a lecture at the Cambridge Centre for the Study of Existential Risk, Dr. Toby Ord discussed the relative likelihood of natural versus anthropogenic existential risks. His analysis indicates a much higher probability of anthropogenic existential risk.

Public lecture on June 2nd from Professor Marc Lipsitch on the ethics of potential pandemic pathogen creation — May 2015

On June 2nd, Professor Marc Lipsitch will give a public lecture at FHI on the ethics of creating potential pandemic pathogens. Professor Lipsitch is Director of the Center for Communicable Disease Dynamics and Professor of Epidemiology at Harvard.

Toby Ord disputes the ethics of potential pandemic pathogen experiments — May 2015

In a recent open letter, Toby Ord sets out FHI’s position on experiments that create potential pandemic pathogens, noting that “the experiments involve risks of killing hundreds of thousands (or even millions) of individuals in the process.”

Nick Bostrom discusses machine superintelligence at TED — May 2015

At the latest TED conference in Vancouver, Professor Nick Bostrom discussed concerns about machine superintelligence and FHI’s research on AI safety.

More news…