The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford. It enables a select set of leading intellects to bring the tools of mathematics, philosophy, and science to bear on big-picture questions about humanity and its prospects. The Institute belongs to the Faculty of Philosophy and is affiliated with the Oxford Martin School.
In a recent discussion with Baidu CEO Robin Li, Bill Gates praised FHI’s research, stating that he would “highly recommend” Superintelligence.
In a newly published FHI Technical Report, “MDL Intelligence Distillation: Exploring strategies for safe access to superintelligent problem-solving capabilities”, Eric Drexler explores a general approach to separating learning capacity from domain knowledge. Controlled input and retention of specialised domain knowledge can then focus, and implicitly constrain, the capabilities of domain-specific superintelligent problem solvers.
Applications are invited for a full-time Postdoctoral Research Fellow in Artificial Intelligence (AI) safety within the Future of Humanity Institute (FHI) at Oxford University. This post is fixed-term for 2 years from the date of appointment.
FHI researcher Toby Ord has recently published research on moral trade in the journal Ethics. Differing ethical viewpoints can allow for moral trade: arrangements that improve the state of affairs from all involved viewpoints.