In a recent discussion with Baidu CEO Robin Li, Bill Gates discussed FHI’s research, stating that he would “highly recommend” Superintelligence. Continue reading
In a newly published FHI Technical Report, “MDL Intelligence Distillation: Exploring strategies for safe access to superintelligent problem-solving capabilities”, Eric Drexler explores a general approach to separating learning capacity from domain knowledge, and then using controlled input and retention of specialised domain knowledge to focus and implicitly constrain the capabilities of domain-specific superintelligent problem solvers.
Applications are invited for a full-time Postdoctoral Research Fellow in Artificial Intelligence (AI) safety within the Future of Humanity Institute (FHI) at Oxford University. This post is fixed-term for 2 years from the date of appointment. Continue reading
FHI researcher Toby Ord has published recent research on moral trade in Ethics. Differing ethical viewpoints can allow for moral trade, arrangements that improve the state of affairs from all involved viewpoints. Continue reading
In a recent technical report, Dr Owen Cotton-Barratt discusses how we ought to allocate existential risk mitigation effort across time. The primary finding is that, all else being equal, we should prefer to do work earlier and to prioritise risks that might arise early. Continue reading
The Future of Humanity Institute is pleased to announce the results for the 2014 Thesis Prize Competition: Crucial Considerations for the Future of Humanity.
Entrants submitted a two-page ‘thesis proposal’ consisting of a 300 word abstract and an outline plan of a thesis on crucial considerations for humanity’s future. Professor Nick Bostrom, Dr Toby Ord and Dr Cecilia Tilli have reviewed all submitted entries.
We received many strong proposals for the competition; given the similar quality of several entries, we have decided to leave the top prize vacant and distribute the prize money among runners-up and honourable mentions.
Thank you to all contestants for participating!
“Background Conditions for Human-Existential Risk Control”
Louis Fletcher, University of Cambridge
“Public Reason and Existential Catastrophe: Crucial Considerations for Justice as Fairness”
Patrick Kaczmarek, University of Glasgow
“Survival, Extinction and the Future of Humanity”
Henry Shevlin, CUNY Graduate Center
“Increasing Compassion to Reduce Existential Risk: A Global Initiative”
Luke Greeley, Rutgers University
“Empathy and its Limits”
Adam Lerner, Princeton University
“Why Moral Optimists Should Be Pessimistic About Artificial General Intelligence”
Amanda MacAskill, New York University
“Producing Dumb Animals: Crucial Considerations for Animal Diminishment”
Marcus Schultz-Bergin, Bowling Green State University
In a recent report, FHI researchers examine the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening. Continue reading
Over the weekend of January 2, much of our research staff from the Oxford Martin Programme on the Impacts of Future Technology attended The Future of AI: Opportunities and Challenges, a conference held by the Future of Life Institute to bring together AI researchers from academia and industry, AI safety researchers, lawyers, economists, and many others to discuss short and long-term issues in AI’s impact on society.
In 2014 FHI produced over 20 publications and policy reports, and our research was the topic of over 1000 media pieces. The highlight of the year was the publication of Superintelligence: Paths, Dangers, Strategies, which has opened a broader discussion on how to ensure our future AI systems remain safe. Continue reading
Nick Bostrom has completed a draft paper on value porosity and utility diversification. This theory could be used as part of a ‘Hail Mary’ approach to the AI safety problem. Continue reading