The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects. The Institute belongs to the Faculty of Philosophy and is affiliated with the Oxford Martin School.
We are now welcoming open expressions of interest from researchers and administrators who would like to join our multidisciplinary team focused on improving the long-run future of humanity. At this time we are particularly interested in computer scientists with a background in machine learning, and policy analysts with a background in the governance of emerging technologies.
In a lecture at the Cambridge Centre for the Study of Existential Risk, Dr. Toby Ord compared the likelihood of natural existential risks with that of anthropogenic risks. His analysis indicates that anthropogenic existential risks are considerably more probable.
On June 2nd Professor Marc Lipsitch will give a public lecture at FHI on the ethics of creating potential pandemic pathogens. Professor Lipsitch is Director of the Center for Communicable Disease Dynamics and Professor of Epidemiology at Harvard.
In a recent open letter, Toby Ord describes FHI’s position on experiments that create potential pandemic pathogens, noting that “the experiments involve risks of killing hundreds of thousands (or even millions) of individuals in the process.”