On June 2nd Professor Marc Lipsitch will be giving a public lecture at FHI on the ethics of creating potential pandemic pathogens. Professor Lipsitch is director of the Center for Communicable Disease Dynamics and Professor of Epidemiology at Harvard. Continue reading
In a recent open letter, Toby Ord describes FHI’s position on experiments that create potential pandemic pathogens, noting that “the experiments involve risks of killing hundreds of thousands (or even millions) of individuals in the process.” Continue reading
At the latest TED conference in Vancouver, Professor Nick Bostrom discussed concerns about machine superintelligence and FHI’s research on AI safety. Continue reading
In a recent discussion with Baidu CEO Robin Li, Bill Gates discussed FHI’s research, stating that he would “highly recommend” Superintelligence. Continue reading
In a newly published FHI Technical Report, “MDL Intelligence Distillation: Exploring strategies for safe access to superintelligent problem-solving capabilities”, Eric Drexler explores a general approach to separating learning capacity from domain knowledge, then using controlled input and retention of specialised domain knowledge to focus and implicitly constrain the capabilities of domain-specific superintelligent problem solvers.
Applications are invited for a full-time Postdoctoral Research Fellow in Artificial Intelligence (AI) safety within the Future of Humanity Institute (FHI) at Oxford University. This post is fixed-term for 2 years from the date of appointment. Continue reading
FHI researcher Toby Ord has published new research on moral trade in the journal Ethics. Differing ethical viewpoints can allow for moral trade: arrangements that improve the state of affairs from all involved viewpoints. Continue reading
In a recent technical report, Dr. Owen Cotton-Barratt discusses how we ought to allocate existential risk mitigation effort across time. The primary finding is that, all else being equal, we should prefer to work earlier and to work on risks that might come early. Continue reading
The Future of Humanity Institute is pleased to announce the results for the 2014 Thesis Prize Competition: Crucial Considerations for the Future of Humanity.
Entrants submitted a two-page ‘thesis proposal’ consisting of a 300 word abstract and an outline plan of a thesis on crucial considerations for humanity’s future. Professor Nick Bostrom, Dr Toby Ord and Dr Cecilia Tilli have reviewed all submitted entries.
We received many strong proposals for the competition; given the similar quality of several entries, we have decided to leave the top prize vacant and distribute the prize money among the runners-up and honourable mentions.
Thank you to all contestants for participating!
“Background Conditions for Human-Existential Risk Control”
Louis Fletcher, University of Cambridge
“Public Reason and Existential Catastrophe: Crucial Considerations for Justice as Fairness”
Patrick Kaczmarek, University of Glasgow
“Survival, Extinction and the Future of Humanity”
Henry Shevlin, CUNY Graduate Center
“Increasing Compassion to Reduce Existential Risk: A Global Initiative”
Luke Greeley, Rutgers University
“Empathy and its Limits”
Adam Lerner, Princeton University
“Why Moral Optimists Should Be Pessimistic About Artificial General Intelligence”
Amanda MacAskill, New York University
“Producing Dumb Animals: Crucial Considerations for Animal Diminishment”
Marcus Schultz-Bergin, Bowling Green State University
In a recent report, FHI researchers examine the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening. Continue reading