The Future of Humanity Institute is now hiring a Research Director and interdisciplinary Research Fellows. The application deadline is September 25th, 2015.
Nick Bostrom was recently awarded a €2 million ERC Advanced Grant, widely considered the most prestigious grant the European Research Council offers. It will allow Professor Bostrom and a team of FHI researchers to continue their work on existential risk and crucial considerations. Continue reading
We are now welcoming open expressions of interest from researchers and administrators who would like to join our multi-disciplinary team focused on improving the long-run future of humanity. At this time we are particularly interested in computer scientists with a background in machine learning, and policy analysts with a background in the governance of emerging technologies. Continue reading
At a lecture at the Cambridge Centre for the Study of Existential Risk, Dr. Toby Ord discussed the relative likelihood of natural existential risks as opposed to anthropogenic ones. His analysis indicates a much higher probability of anthropogenic existential risk. Continue reading
On June 2nd Professor Marc Lipsitch will be giving a public lecture at FHI on the ethics of creating potential pandemic pathogens. Professor Lipsitch is director of the Center for Communicable Disease Dynamics and Professor of Epidemiology at Harvard. Continue reading
In a recent open letter, Toby Ord describes FHI’s position on experiments that create potential pandemic pathogens, noting that “the experiments involve risks of killing hundreds of thousands (or even millions) of individuals in the process.” Continue reading
At the latest TED conference in Vancouver, Professor Nick Bostrom discussed concerns about machine superintelligence and FHI’s research on AI safety. Continue reading
In a recent conversation with Baidu CEO Robin Li, Bill Gates discussed FHI’s research, stating that he would “highly recommend” Superintelligence. Continue reading
In a newly published FHI Technical Report, “MDL Intelligence Distillation: Exploring strategies for safe access to superintelligent problem-solving capabilities”, Eric Drexler explores a general approach that separates learning capacity from domain knowledge, then uses controlled input and retention of specialised domain knowledge to focus and implicitly constrain the capabilities of domain-specific superintelligent problem solvers.
Applications are invited for a full-time Postdoctoral Research Fellow in Artificial Intelligence (AI) safety within the Future of Humanity Institute (FHI) at Oxford University. This post is fixed-term for 2 years from the date of appointment. Continue reading