The 2014 UK Chief Scientific Advisor’s report includes a chapter on existential risk, written by FHI researchers Toby Ord and Nick Beckstead. The report describes the risks posed by AI, biotechnology, and geoengineering, as well as the ethical framework under which we ought to evaluate existential risk.
On November 5th, FHI’s recent work on the future dangers of artificial intelligence was featured in the New York Times.
On October 13th, Professor Nick Bostrom will present his recent book Superintelligence: Paths, Dangers, Strategies at the Oxford Martin School. The lecture will be followed by a book signing and a drinks reception, open to the public.
The Future of Humanity Institute is hiring for two positions. We are looking for ambitious interdisciplinary researchers interested in issues of systemic risk.
On October 13th, Dr. Seth Baum, the executive director of the Global Catastrophic Risk Institute, will lead a seminar at FHI on deterrence theory and global catastrophic risk reduction.
Today Carl Frey’s economics research was presented in an article in the Financial Times.
Thanks to Investling Group for their recent financial contribution.
Professor Marc Lipsitch will give a talk on September 25th about recent experiments with potential pandemic pathogens and their ethical alternatives. Professor Lipsitch is a professor of epidemiology and the director of the Center for Communicable Disease Dynamics at Harvard University.
The Chronicle of Higher Education highlighted work done at FHI in an article about the risks of artificial intelligence and other advanced technologies.
Superintelligence: Paths, Dangers, Strategies has been featured on the New York Times science best-seller list, sharing the list with Malcolm Gladwell’s David and Goliath and Daniel Kahneman’s Thinking, Fast and Slow.