Expert meeting on powerful actor, high-impact biothreats

From 7-9 November 2018, 42 senior policy leaders and scientific and technical experts in science, engineering, biodefence and biosecurity, science policy, public health, infectious diseases, and catastrophic risks gathered at Wilton Park to consider powerful actor, high-impact biothreats. The initial report of the meeting is available here. This meeting was organised in partnership between Wilton Park, […]

Website and Communications Officer

THIS POSITION IS NOW CLOSED FHI is excited to invite applications for a full-time Website and Communications Officer. The post is fixed-term for 24 months from the date of appointment. The role holder will be responsible for developing and implementing a communications strategy for all activities of the institute. S/he will develop and maintain FHI’s […]

Predicting Slow Judgments

A recent FHI project investigated whether AI systems can predict human deliberative judgments. Today’s AI systems are good at imitating quick, “intuitive” human judgments in areas including vision, speech recognition, and sentiment analysis. Yet some important decisions can’t be made quickly. They require careful thinking, research, and analysis. For example, a judge should not decide […]

FHI Publication at NeurIPS 2018

A recent paper by FHI researcher Stuart Armstrong and former intern Soren Mindermann (now at the Vector Institute) has been accepted at NeurIPS 2018. The paper, Impossibility of deducing preferences and rationality from human policy, considers the scenario in which an AI system learns the values and biases of a human agent concurrently. This extends an existing […]

Head of Operations

The applications for this position are now closed. FHI is extremely excited to announce applications are now open for the position of full-time Head of Operations. The Head of Operations will play a leadership and co-ordinating role for FHI’s operations. Reporting to the Director of the Future of Humanity Institute, the person will […]

Future of Humanity Institute Scholarships

FHI launches a new scholarship programme for DPhil students starting at the University of Oxford. We will award up to 8 scholarships to scholars whose research aims to answer crucial questions for improving the long-term prospects of humanity. Candidates will be considered from a range of disciplines, including computer science, […]

£13.3m boost for Future of Humanity Institute

Oxford University’s Future of Humanity Institute (FHI) is pleased to announce a contribution of up to £13.3 million from the Open Philanthropy Project. The donation, which includes a £6 million up-front commitment with the rest contingent on hiring, is the largest in the Faculty of Philosophy’s history. It will support FHI in its mission of […]

Project Managers

THESE POSITIONS ARE NOW CLOSED We are seeking applications for two full-time Project Manager roles. These roles will perform a critical function in advancing FHI’s mission to ensure a long, flourishing future. Both posts are fixed-term for 3 years from the date of appointment. Project Manager for the Research Scholars Programme (RSP). Applications are invited [...]

Senior Administrator

THIS POSITION IS NOW CLOSED FHI is excited to invite applications for a full-time Senior Administrator to work with the Faculty of Philosophy, University of Oxford, with responsibility for overseeing the effective and efficient day-to-day non-academic management and administration of two of the Faculty’s research centres, the Future of Humanity Institute (FHI) and […]

AI Safety Research Fellow

THIS POSITION IS NOW CLOSED   FHI is excited to invite applications for a full-time Research Fellow within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 24 months from the date of appointment. You will be responsible for conducting technical research in AI Safety. You can find […]

Research Fellow in Macrostrategy

THIS POSITION IS NOW CLOSED Applications are invited for a full-time Research Fellow within the Future of Humanity Institute (FHI) at the University of Oxford. This is a fixed-term post for 24 months from the date of appointment, located at the FHI offices in the beautiful city of Oxford. Reporting to the Director of Research […]

Deciphering China’s AI Dream

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China's AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China's AI approach to the drivers [...]

Collecting data for an AI safety project

Ought and FHI’s AI Safety Group are collecting data on how people come to judgments over time. Take part and play games about: (A) Fermi arithmetic problems (B) Fact-checking political statements (decide if a statement is “fake news”) (C) Deciding how much you like a Machine Learning paper You’ll get feedback on your progress over […]

New report on the malicious use of AI

FHI is pleased to announce the publication of a report involving several of our researchers, entitled “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation.” This report was written by researchers at the Future of Humanity Institute, the Center for the Study of Existential Risk, OpenAI, the Electronic Frontier Foundation, the Center for a [...]

Quarterly Update Winter 2017

From the entire FHI team, we wish you all a great start to 2018! In this post, we would like to provide you with a summary of the FHI highlights over the last quarter of 2017. Highlights FHI launches the Governance of AI Program FHI is delighted to announce the formation of the Governance of AI […]

Executive Assistant to Nick Bostrom

APPLICATIONS ARE NOW CLOSED FHI is excited to announce applications are now open for the position of full-time Executive Assistant to Professor Nick Bostrom, Director of the Future of Humanity Institute at the University of Oxford. This post is a core part of FHI’s operations team. By freeing Prof. Bostrom’s schedule, prioritising items for his […]

Administrative Assistant

APPLICATIONS ARE NOW CLOSED FHI is excited to invite applications for a full-time Administrative Assistant within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 12 months from the date of appointment. You will be responsible for providing broad secretarial and general office support to administrative and research […]

AI Safety Postdoctoral Research Scientist

APPLICATIONS ARE NOW CLOSED FHI is excited to invite applications for a full-time Post-Doctoral Research Scientist within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 24 months from the date of appointment. You will advance the field of AI safety by conducting technical research. You can find […]

FHI researchers advise UK government on Artificial Intelligence

Nick Bostrom, Miles Brundage and Allan Dafoe are advising the UK government on issues concerning developments in artificial intelligence. Miles Brundage presented evidence on 11 September on the topic ‘Governance, social and organisational perspective for AI’ (evidence meeting 5), looking at AI and cultural systems and new forms of organisational structure. On 10 October, […]

Quarterly Update Autumn 2017

In the third quarter of 2017, FHI staff have continued their work in the institute’s four focus areas: AI Safety, AI Strategy, Biorisk, and Macrostrategy. Below, we outline some of our key outputs over the last quarter, current vacancies, and details on what our researchers have recently been working on. This quarter, we are saying […]

FHI publishes three new biosecurity papers in ‘Health Security’

Three papers by FHI researchers in the area of biosecurity are forthcoming in the latest issue of Health Security.  

Trial without Error: Towards Safe RL with Human Intervention

How can AI systems learn safely in the real world? Self-driving cars have safety drivers, people who sit in the driver’s seat and constantly monitor the road, ready to take control if an accident looks imminent. Could reinforcement learning systems also learn safely by having a human overseer?
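The overseer idea in this post can be caricatured in a few lines of code. This is a toy sketch only, not the paper’s actual setup: the hooks `choose_action`, `is_catastrophic`, and `safe_action` are hypothetical stand-ins for the learner, the human overseer’s judgment, and the overseer’s override.

```python
import random

def run_with_overseer(env_step, choose_action, is_catastrophic, safe_action, episodes):
    """Execute actions, letting a (simulated) overseer veto unsafe ones."""
    blocked = 0
    for _ in range(episodes):
        action = choose_action()
        if is_catastrophic(action):  # the overseer steps in
            action = safe_action
            blocked += 1
        env_step(action)
    return blocked

# Toy run: the "cliff" action counts as catastrophic and is always vetoed.
rng = random.Random(0)
trace = []
vetoes = run_with_overseer(
    env_step=trace.append,
    choose_action=lambda: rng.choice(["left", "right", "cliff"]),
    is_catastrophic=lambda a: a == "cliff",
    safe_action="stay",
    episodes=100,
)
```

The interesting research questions begin where this sketch ends: a human cannot watch forever, so the agent must eventually learn to avoid catastrophes without the override.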

FHI seeks Senior Research Fellow in Macrostrategy

The Future of Humanity Institute is seeking a Senior Research Fellow on AI Macrostrategy, to identify crucial considerations for improving humanity’s long-run potential. We are looking for a polymath with an academic background related to economics, mathematics, physical sciences, computer science, philosophy, political science, or international governance, who has both outstanding analytical ability and a […]

Nick Bostrom gives talk to G30

Nick Bostrom delivered a talk to the Group of Thirty (G30) in London on the 10th of June. Professor Bostrom spoke alongside DeepMind Co-Founder and CEO Demis Hassabis about Machine Learning and the AI Horizon.

Nick Bostrom speaks at World Intelligence Congress in China

At the World Intelligence Congress on the 1st of July in the city of Tianjin in China, Professor Bostrom spoke about the future of machine intelligence over the coming decades. Other speakers included leaders in Chinese tech and academia such as Jack Ma, Robin Li, and Bo Zhang. This follows a recent talk he gave to leading […]

Quarterly Update Summer 2017

Key outputs and activities from the second quarter of 2017 at FHI.

FHI seeks a research assistant for a book on Existential Risk

APPLICATIONS FOR THIS POSITION ARE NOW CLOSED. Please see our Jobs Page for information on current vacancies at FHI. — Applications are invited for a full time Research Assistant within the Future of Humanity Institute (FHI) at the University of Oxford. The post is fixed-term for 6 months from the date of appointment. Reporting to […]

FHI joins the Partnership on AI

The Future of Humanity Institute (FHI) will be joining the Partnership on AI, a non-profit organisation founded by Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft, with the goal of formulating best practices for socially beneficial AI development.  We will be joining the Partnership alongside technology firms like Sony as well as third sector groups like […]

New Interactive Tutorial: Modeling Agents with Probabilistic Programs

FHI’s Owain Evans, in collaboration with Andreas Stuhlmüller, John Salvatier, and Daniel Filan, has just released an online book describing and implementing models of rational agents for (PO)MDPs and Reinforcement Learning. The book aims to educate its readers on the creation of richer models of human planning, capturing human biases and bounded rationality. The book uses […]
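The book itself is written as interactive probabilistic programs. Purely as an illustration of one idea it covers, softmax (“Boltzmann-rational”) choice as a model of bounded rationality, here is a minimal Python sketch; the function name and setup are mine, not the book’s:

```python
import math
import random

def softmax_choice(utilities, alpha, rng=None):
    """Boltzmann-rational choice: pick action i with probability
    proportional to exp(alpha * utilities[i]). Large alpha approaches
    perfectly rational argmax; alpha = 0 is uniform random."""
    rng = rng or random.Random(0)
    weights = [math.exp(alpha * u) for u in utilities]
    r = rng.random() * sum(weights)
    cum = 0.0
    for action, w in enumerate(weights):
        cum += w
        if r < cum:
            return action
    return len(utilities) - 1
```

The single parameter `alpha` interpolates between a perfectly noisy and a perfectly rational agent, which is what makes this family useful for modelling real human planners.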

Quarterly Update Spring 2017

Key outputs and activities from the first quarter of 2017 at FHI.

AI Safety and Reinforcement Learning Internship Programme 2018

The Future of Humanity Institute at the University of Oxford seeks interns to contribute to our work in the area of technical AI safety. Examples of this type of work include Cooperative Reinforcement Learning, Learning the Preferences of Ignorant, Inconsistent Agents, Learning the Preferences of Bounded Agents, and Safely Interruptible Agents.

FHI receives £1.7m grant from Open Philanthropy Project

The Open Philanthropy Project recently announced a grant of £1,620,452 to the Future of Humanity Institute (FHI) to provide general support as well as a grant of £88,922 to allow us to hire Piers Millett to lead our work on biosecurity. Most of the larger grant adds unrestricted funding to FHI’s reserves, which will […]

Bad Actors and Artificial Intelligence Workshop

On the 19th and 20th of February, FHI hosted a workshop on the potential risks posed by the malicious misuse of emerging technologies in machine learning and artificial intelligence. The workshop, co-chaired by Miles Brundage at FHI and Shahar Avin of the Centre for the Study of Existential Risk, invited experts in cybersecurity, AI governance, […]

Workshop on Normative Uncertainty

On the 10th February, the Future of Humanity Institute (FHI) hosted the Normative Uncertainty Workshop.

Asilomar AI principles announced

AI researchers gathered at Asilomar from the 3rd-8th of January 2017 for a conference on Beneficial Artificial Intelligence organised by the Future of Life Institute. Nick Bostrom spoke about his recent research on the interaction between AI control problems and governance strategy within AI risk, and the role of openness (slides/video). Bostrom and co-authors have […]

FHI Annual Review 2016

Future of Humanity Institute Annual Review 2016 [pdf] In 2016, we continued our mission of helping the world think more systematically about how to craft a better future. We advanced our core research areas of macrostrategy, technical artificial intelligence (AI) safety, AI strategy, and biotechnology safety. The Future of Humanity Institute (FHI) has grown by one third, […]

FHI holds workshop on AI safety and blockchain

The workshop explored the potential technical overlap between AI Safety and blockchain technologies and the possibilities for using blockchain, crypto-economics, and cryptocurrencies to facilitate greater global coordination. Key topics of discussion were the coordination of political actors, AI strategy and policy, blockchain frontiers and trends, prediction markets, coordination failures, and the potential impact of blockchain on governance. Attendees included: Vitalik Buterin, the inventor of Ethereum; Jaan Tallinn, a founding engineer of Skype and Kazaa; and Wei Dai, the creator of b-money and Crypto++.

Allan Dafoe and Stuart Russell publish response to Etzioni in MIT Technology Review

FHI Research Associate Allan Dafoe and Stuart Russell have published “Yes, We Are Worried About the Existential Risk of Artificial Intelligence” in the MIT Technology Review as a response to an article by Oren Etzioni.

Biotech horizon scanning workshop

At the start of November, FHI researchers Piers Millett and Eric Drexler participated in a one-day biological engineering horizon-scanning workshop hosted by the Centre for the Study of Existential Risk (CSER). The workshop was the culmination of a process that ran for several months, in which experts in the biosciences, biotechnology, biosecurity, and bioethics, as well as in existential and global catastrophic risks, identified recent developments likely to have the greatest impact on our societies in the short to medium term.

FHI hires first biotech policy specialist

The Future of Humanity Institute is delighted to announce the hiring of our first policy specialist on biotechnology, Piers Millett. Dr. Millett is the former Acting Head of the Implementation Support Unit of the UN Biological Weapons Convention.

Robin Hanson and FHI hold seminar and public talk on “The age of em”

On the 19th of October, the Future of Humanity Institute (FHI) organised a workshop and public talk on ‘The Age of Em: Work, Love, and Life when Robots Rule the Earth’ with Research Associate Professor Robin Hanson.

FHI researchers cited in UK Parliamentary “Robotics and artificial intelligence” report

The UK House of Commons Science and Technology Committee have released a report concluding their recent inquiry on robotics and artificial intelligence. The report cites oral evidence given by FHI researcher Dr. Owen Cotton-Barratt, and discusses the work of FHI researcher Dr. Stuart Armstrong.

President Obama discusses Nick Bostrom’s work in Wired interview

Wired magazine has published a long interview between MIT’s Joi Ito, Wired’s Scott Dadich, and US President Barack Obama. The interview, titled “Barack Obama, neural nets, self-driving cars, and the future of the world”, discusses a range of topics including Prof. Nick Bostrom’s work on superintelligence.

Exploration potential

We introduce exploration potential, a quantity that measures how much a reinforcement learning agent has explored its environment class. In contrast to information gain, exploration potential takes the problem’s reward structure into account. This leads to an exploration criterion that is both necessary and sufficient for asymptotic optimality (learning to act optimally across the entire environment class). Our experiments in multi-armed bandits use exploration potential to illustrate how different algorithms make the tradeoff between exploration and exploitation.
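The paper’s exploration-potential quantity is not reproduced here, but the exploration/exploitation tradeoff it analyses is easy to see in a multi-armed bandit. A minimal ε-greedy sketch, my own illustration rather than anything from the paper:

```python
import random

def epsilon_greedy(arms, pulls, epsilon, seed=0):
    """Play a Bernoulli bandit: with probability epsilon explore a random
    arm, otherwise exploit the arm with the best estimated mean reward."""
    rng = random.Random(seed)
    counts = [0] * len(arms)        # pulls per arm
    estimates = [0.0] * len(arms)   # running mean reward per arm
    total = 0
    for _ in range(pulls):
        if rng.random() < epsilon:
            i = rng.randrange(len(arms))                           # explore
        else:
            i = max(range(len(arms)), key=lambda k: estimates[k])  # exploit
        reward = 1 if rng.random() < arms[i] else 0  # Bernoulli payoff
        counts[i] += 1
        estimates[i] += (reward - estimates[i]) / counts[i]
        total += reward
    return total
```

Too little exploration risks locking onto an inferior arm; too much wastes pulls on arms already known to be bad. Exploration potential, as the abstract describes it, is a reward-sensitive way of quantifying where an agent sits on that spectrum.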

US IARPA Director Jason Matheny visits

The Director of the United States Intelligence Advanced Research Projects Activity (IARPA) visited the Future of Humanity Institute (FHI) today. Dr. Matheny joined researchers for discussions of biosecurity, artificial intelligence safety and existential risk reduction policy, among other topics.

Quarterly Update Autumn 2016

This post outlines activities at the Future of Humanity Institute during July, August and September 2016. We published three new papers, attended several conferences, hired Prof. William MacAskill, hosted four interns and one summer research fellow, and made progress in a number of research areas.

DeepMind collaboration

The Future of Humanity Institute and DeepMind are co-hosting monthly seminars aimed at deepening the ongoing fruitful collaboration between AI safety researchers in these organisations. The Future of Humanity Institute played host to the seminar series for the first time last week.

Collaboration with the Finnish Ministry of Foreign Affairs

Last month, FHI researchers in collaboration with the Centre for Effective Altruism met in Helsinki to discuss existential risk policy with a number of Finnish government agencies. A full-day workshop was followed by meetings held at the Office of the President, as well as with groups in policy planning and arms control.

Miles Brundage visits European Commission

FHI Research Fellow Miles Brundage recently met with policy-makers and analysts at the European Commission in Brussels. He participated in a roundtable discussion at the European Political Strategy Center (EPSC) featuring officials working on various aspects of trade, innovation, and research policy.

CSRBAI talks on preference specification

MIRI have uploaded a third set of videos from the Colloquium Series on Robust and Beneficial AI, the workshop they co-hosted with the Future of Humanity Institute.

FHI researchers attend IEEE

Miles Brundage, Anders Sandberg and Andrew Snyder-Beattie attended the Symposium on Ethics of Autonomous Systems (SEAS), an event organised by the Institute of Electrical and Electronics Engineers (IEEE) and attended by globally recognised experts from a diversity of fields.

CSRBAI talks on robustness and error-tolerance

MIRI have uploaded a set of videos on robustness and error-tolerance from the Colloquium Series on Robust and Beneficial AI, the workshop they co-hosted with the Future of Humanity Institute.

Colloquium Series on Robust and Beneficial AI

We recently teamed up with the Machine Intelligence Research Institute (MIRI) to co-host a 22-day Colloquium Series on Robust and Beneficial AI (CSRBAI) at the MIRI office. The colloquium was aimed at bringing together safety-conscious AI scientists from academia and industry to share their recent work. The event served that purpose well, initiating some new collaborations and a number of new conversations between researchers who hadn’t interacted before or had only talked remotely.

FHI New Hires

The Future of Humanity Institute is delighted to announce the hiring of Jan Leike and Miles Brundage for the Strategic Artificial Intelligence Research Centre (SAIRC).

Jan Leike wins Best Student Paper Award at UAI 2016

Jan Leike’s research, co-authored with Tor Lattimore, Laurent Orseau and Marcus Hutter, discusses a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. These environments can be non-Markov, nonergodic, and partially observable. It shows that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges to the optimal value in mean, and (2) given a recoverability assumption, its regret is sublinear.
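The paper’s setting (countable classes of general stochastic environments) is far beyond a snippet, but the core Thompson-sampling idea can be sketched in the much simpler Bernoulli-bandit case. This is a hedged illustration with names and setup of my own, not the paper’s construction:

```python
import random

def thompson_sampling(arms, pulls, seed=0):
    """Thompson sampling for a Bernoulli bandit: keep a Beta posterior per
    arm, sample a plausible mean from each, and play the best sample."""
    rng = random.Random(seed)
    wins = [1] * len(arms)    # Beta(1, 1) uniform priors
    losses = [1] * len(arms)
    total = 0
    for _ in range(pulls):
        samples = [rng.betavariate(wins[i], losses[i]) for i in range(len(arms))]
        i = max(range(len(arms)), key=lambda k: samples[k])
        reward = 1 if rng.random() < arms[i] else 0
        wins[i] += reward
        losses[i] += 1 - reward
        total += reward
    return total
```

Sampling from the posterior, rather than always playing the current best estimate, is what makes the algorithm explore in proportion to its remaining uncertainty; the paper generalises this mechanism from arms to whole environments.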

New paper: “A formal solution to the grain of truth problem”

Future of Humanity Institute Research Fellow Jan Leike and Machine Intelligence Research Institute Research Fellows Jessica Taylor and Benya Fallenstein presented new results at UAI 2016 that resolve a longstanding open problem in game theory.

The paper describes the first general reduction of game-theoretic reasoning to expected utility maximization.

Quarterly Update Summer 2016

This Q2 update highlights some of our key achievements and provides links to particularly interesting outputs. We conducted two workshops, released a report with the Global Priorities Project and the Global Challenges Foundation, published a paper with DeepMind, and hired Jan Leike and Miles Brundage.

Research Associate Robin Hanson publishes Age of Em

Congratulations to Research Associate Robin Hanson for the publication of his book, Age of Em. Summary from the book’s website: Robots may one day rule the world, but what is a robot-ruled Earth like? Many think the first truly smart robots will be brain emulations or ems. Scan a human brain, then run a model with […]

Owen Cotton-Barratt participated in UK Parliament’s call for evidence on AI and Robotics

FHI Researcher Owen Cotton-Barratt recently gave evidence to the UK Parliament’s Science and Technology Commons Select Committee.

Nick Bostrom speaks to US National Academies

In keeping with past leadership efforts, the US National Academies of Sciences, Engineering, and Medicine have launched a new initiative to inform decision making related to recent advances in human gene-editing research. As part of the comprehensive study, the committee convened a group of experts in Paris to review the principles underlying human gene editing governance […]

New working paper: “Strategic Implications of Openness in AI Development”

This working paper, by Prof. Nick Bostrom, attempts a preliminary analysis of the global desirability of different forms of openness in AI development (including openness about source code, science, data, safety techniques, capabilities, and goals).

Niel Bowerman gives evidence to the European Parliament

FHI’s Assistant Director Niel Bowerman gave oral evidence at the European Parliament.

Nick Bostrom speaks at Bank of England

Nick Bostrom delivered a Flagship Seminar on Macrostrategy at the Bank of England. Go to the Bank of England event page for the video recording. Nick discussed some of the challenges that appear if one is seeking to maximize the expected value of the long-term consequences of present actions, particularly if one’s objective function has a time-neutral altruistic […]

Beyond risk-benefit analysis: pricing externalities for gain-of-function research of concern

This piece has been cross-posted from the Global Priorities Project. Please click here to see the original post. The recent US moratorium on certain types of Gain-of-Function (GoF) research made it clear that a new approach is needed to balance the costs and benefits of potentially risky research. Current risk management tools work well in […]

Policy workshop hosted on existential risk

On February 8th and 9th, twenty leading academics and policy-makers from the UK, USA, Germany, Finland, and Sweden gathered at the University of Oxford to discuss the governance of existential risks. This brought together a mixture of specialists in relevant subject domains, diplomats, policy experts, and researchers with broad methodological expertise in existential risk. The event […]

£10 million grant for new Centre for the Future of Intelligence

A new research centre to explore the opportunities and challenges to humanity from the development of artificial intelligence has been launched this week after a £10 million grant from the Leverhulme Trust.

The New Yorker’s article about Nick Bostrom

The New Yorker has published ‘The Doomsday Invention’, an article about Nick Bostrom and his best-selling book ‘Superintelligence: Paths, Dangers, Strategies’. The writer of the article, Raffi Khatchadourian, who was nominated for a National Magazine Award in profile writing, sheds light on Nick Bostrom’s profile and on the Future of Humanity Institute’s research.

Nick Bostrom at UN 70th General Assembly

On October 7th Nick Bostrom will be speaking alongside Max Tegmark from the Future of Life Institute at the United Nations Headquarters in New York.

The event is titled CBRN National Action Plans: Rising to the Challenges of International Security and the Emergence of Artificial Intelligence.

New Future of Humanity Institute seminar series, ‘Big Picture Thinking’

On long timescales, where is humanity headed? What are the big uncertainties? What does that mean for decisions today? In this series of lectures, we will tackle these issues, and explore the questions that feed into them. Many are multidisciplinary, and progress often draws on knowledge and tools from economics and other sciences, philosophy, and mathematics.

Prof. Nick Bostrom’s HARDtalk interview on BBC

The guests on HARDtalk are people who do much to shape our world. More often than not they’re testament to the talent and potential of the human species.

But what if we’re living on the cusp of a new era? Shaped not by mankind, but by machines using artificial intelligence to build a post-human world. Science fiction?

Not according to HARDtalk’s guest scientist and philosopher Nick Bostrom who runs the Future of Humanity Institute. Stephen Sackur asks, when truly intelligent machines arrive, what happens to us?

FHI awarded prestigious €2m ERC Grant

Nick Bostrom was recently awarded a €2 million ERC Advanced Grant, widely considered to be the most prestigious grant available from the European Research Council. The grant will allow Nick Bostrom and a team of FHI researchers to continue their work on existential risk and crucial considerations.

The title of the grant is “UnPrEDICT: Uncertainty and Precaution—Ethical Decisions Involving Catastrophic Threats.”

Elon Musk funds Oxford and Cambridge University research on safe and beneficial artificial intelligence

The Future of Humanity Institute at Oxford University and the Centre for the Study of Existential Risk at Cambridge University are to receive a £1m grant for policy and technical research into the development of machine intelligence.

The grant is from the Future of Life Institute in Boston, USA, and has been funded by the Open Philanthropy Project and Elon Musk, CEO of Tesla Motors and Space X.

Toby Ord on the likelihood of natural and anthropogenic existential risks

At a lecture at the Cambridge Centre for the Study of Existential Risk, Dr. Toby Ord discussed the relative likelihood of natural existential risks as opposed to anthropogenic risks. His analysis of the issue indicates a much higher probability of anthropogenic existential risk.

Public lecture on June 2nd from Professor Marc Lipsitch on the ethics of potential pandemic pathogen creation

On June 2nd Professor Marc Lipsitch will be giving a public lecture at FHI on the ethics of creating potential pandemic pathogens. Professor Lipsitch is director of the Center for Communicable Disease Dynamics and Professor of Epidemiology at Harvard.

Toby Ord disputes the ethics of potential pandemic pathogen experiments

In a recent open letter, Toby Ord describes FHI’s position on experiments that create potential pandemic pathogens, noting that “the experiments involve risks of killing hundreds of thousands (or even millions) of individuals in the process.”

Nick Bostrom discusses machine superintelligence at TED

At the latest TED conference in Vancouver, Professor Nick Bostrom discussed concerns about machine superintelligence and FHI’s research on AI safety.

Bill Gates endorses FHI’s Superintelligence

In a recent discussion with Baidu CEO Robin Li, Bill Gates discussed FHI’s research, stating that he would “highly recommend” Superintelligence.

FHI Technical Report: MDL Intelligence Distillation by Eric Drexler

In a newly published FHI Technical Report, “MDL Intelligence Distillation: Exploring strategies for safe access to superintelligent problem-solving capabilities”, Eric Drexler explores a general approach to separating learning capacity from domain knowledge, and then using controlled input and retention of specialised domain knowledge to focus and implicitly constrain the capabilities of domain-specific superintelligent problem solvers.

Research on Moral Trade

FHI researcher Toby Ord has published recent research on moral trade in Ethics. Differing ethical viewpoints can allow for moral trade, arrangements that improve the state of affairs from all involved viewpoints.

Allocating Existential Risk Mitigation Across Time

In a recent technical report, Dr. Owen Cotton-Barratt discusses how we ought to allocate existential risk mitigation effort across time. The primary finding is that, all else being equal, we should prefer to work earlier and prefer to work on risks that might come early.

2014 Thesis Prize Competition – Results

The Future of Humanity Institute is pleased to announce the results for the 2014 Thesis Prize Competition: Crucial Considerations for the Future of Humanity. Entrants submitted a two-page ‘thesis proposal’ consisting of a 300 word abstract and an outline plan of a thesis on crucial considerations for humanity’s future. Professor Nick Bostrom, Dr Toby Ord […]

Existential Risk and Existential Hope: Definitions

In a recent report, FHI researchers examine the strengths and weaknesses of two existing definitions of existential risk, and suggest a new definition based on expected value. This leads to a parallel concept: ‘existential hope’, the chance of something extremely good happening. 

Risks and impacts of AI: conference, open letter, and new funding program

Over the weekend of January 2, much of our research staff from the Oxford Martin Programme on the Impacts of Future Technology attended The Future of AI: Opportunities and Challenges, a conference held by the Future of Life Institute to bring together AI researchers from academia and industry, AI safety researchers, lawyers, economists, and many […]

FHI in 2014

In 2014 FHI produced over 20 publications and policy reports, and our research was the topic of over 1000 media pieces. The highlight of the year was the publication of Superintelligence: Paths, Dangers, Strategies, which has opened a broader discussion on how to ensure our future AI systems remain safe.

New ideas on value porosity and utility diversification

Nick Bostrom has completed a draft paper on value porosity and utility diversification. This theory could be used as part of a ‘Hail Mary’ approach to the AI safety problem.

FHI research in Scientific American

On December 16th, FHI researcher Carl Frey published a piece in Scientific American describing the challenges of a digital economy. 

Professor Stuart Russell summarises AI risk

In a recent contribution to The Edge, Professor Stuart Russell describes FHI’s position on the opportunities and risks of future AI systems. 

FHI contributes chapter on existential risk to UK Chief Scientific Advisor’s report

The 2014 UK Chief Scientific Advisor’s report has included a chapter on existential risk, written by FHI researchers Toby Ord and Nick Beckstead. The report describes the risks posed by AI, biotechnology, and geoengineering, as well as the ethical framework under which we ought to evaluate existential risk.

FHI Research Featured in New York Times

On November 5th, FHI’s recent work on the future dangers of artificial intelligence was featured in the New York Times. 

Oxford Martin Lecture on Superintelligence

On October 13th Professor Nick Bostrom will present his recent book Superintelligence: Paths, Dangers, Strategies at the Oxford Martin School. The lecture will be followed by a book signing and drink reception, open to the public.

Seminar: Deterrence Theory and Global Catastrophic Risk Reduction

On October 13th, Dr. Seth Baum, the executive director of the Global Catastrophic Risk Institute, will lead a seminar on deterrence theory and global catastrophic risk reduction at FHI. 

Carl Frey discusses his research in the Financial Times

Today Carl Frey presented his economics research in an article in the Financial Times. 

Thanks

Thanks to Investling Group for their recent financial contribution.

Open talk: ethical alternatives to experiments to create potential pandemic pathogens

Professor Marc Lipsitch will be giving a talk on recent experiments with potential pandemic pathogens and their ethical alternatives on September 25th. Professor Lipsitch is a professor of epidemiology and the director of the Centre for Communicable Disease Dynamics at Harvard University.

FHI featured in Chronicle of Higher Education

The Chronicle of Higher Education highlighted work done at FHI in an article about the risks of artificial intelligence and other advanced technologies. 

Superintelligence featured on NYT Science Bestsellers list

Superintelligence: Paths, Dangers, Strategies has been featured on the NYT Science Bestsellers list, sharing the list with Malcolm Gladwell’s David and Goliath and Daniel Kahneman’s Thinking, Fast and Slow.

Carl Frey in Scientific American on automation and employment

In an article featured in Scientific American, Oxford Martin and FHI research fellow Carl Frey discusses how cities can manage technological change, noting that the process of creative destruction works best when new occupations are fostered. 

Superintelligence released in United States

Superintelligence: Paths, Dangers, Strategies is now available in the United States.  To mark the event, Nick Bostrom is starting his book tour in Washington DC at Noblis. 

Nick Bostrom at US Presidential Bioethics Commission

Nick Bostrom recently advised Obama’s Presidential Commission for the Study of Bioethical Issues on issues regarding ethical considerations in cognitive enhancement. Discussion included how concerns about distributive justice and fairness might be addressed in light of potential individual or societal benefits of cognitive enhancement.

Superintelligence on NYT bestseller list

Superintelligence: Paths, Dangers, Strategies has been featured on the New York Times bestseller list. Ranked in the top 25 for nonfiction e-books, this week Superintelligence has topped books such as Daniel Kahneman’s Thinking, Fast and Slow.

Nick Bostrom book tour September 3-12th

Nick Bostrom will be touring the United States to discuss Superintelligence: Paths, Dangers, Strategies from September 3-12th. 

Former Assistant Secretary General to the UN visits FHI

On July 21st Professor Steve Stedman visited the Future of Humanity Institute to discuss global catastrophic risks and emerging technology.  Professor Stedman is the former Assistant Secretary General to the United Nations, where he proposed and implemented the United Nations Task Force on Counter-terrorism, among other accomplishments. 

Financial Times Reviews Superintelligence

A recent Financial Times review of Superintelligence states “there is no doubting the force of [Bostrom’s] arguments … the problem is a research challenge worthy of the next generation’s best mathematical talent. Human civilisation is at stake.” 

MIRIx at FHI

In collaboration with the Machine Intelligence Research Institute, FHI hosted a MIRIx Workshop to develop the technical agenda for AI safety. Attendees generated new strategic considerations for technical agenda setting, technical research ideas, and comments on existing topics in the technical agenda.

Anders Sandberg presents at US Army Research Laboratory

Anders Sandberg gave an invited talk about enhancement ethics and emerging technologies at the US Army Research Laboratory in Adelphi. His main theme was how automation will shift occupational demand – both in society at large and in a military setting – towards skills and abilities where human enhancement is relevant.

Superintelligence: Paths, Dangers, Strategies now released in UK

“Nick Bostrom makes a persuasive case that the future impact of AI is perhaps the most important issue the human race has ever faced. Instead of passively drifting, we need to steer a course. Superintelligence charts the submerged rocks of the future with unprecedented detail. It marks the beginning of a new era.” – Stuart Russell

Superintelligence recommended by Financial Times

Superintelligence: Paths, Dangers, Strategies has been placed at the top of the Financial Times scientific summer reading list, which notes that “Bostrom … casts a philosopher’s eye at the past, present and future of artificial intelligence.”

Presentation at the RSA

Nick Bostrom will be presenting his new book Superintelligence: Paths, Dangers, Strategies at the Royal Society of Arts on July 3rd.  Join the waiting list here.

Laplace’s law of succession

How should one construct a prior for unprecedented events? Last week, Toby Ord described Laplace’s law of succession in an FHI seminar.
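The rule is simple to state: under a uniform prior on an unknown success rate, an event observed s times in n independent trials has posterior predictive probability (s + 1)/(n + 2) of occurring on the next trial, so even an event never yet observed gets a small positive probability rather than zero. A minimal sketch (the helper name is illustrative, not from the seminar):

```python
# Laplace's rule of succession: after s successes in n trials, estimate the
# probability of success on the next trial as (s + 1) / (n + 2).
from fractions import Fraction

def rule_of_succession(successes: int, trials: int) -> Fraction:
    """Posterior predictive probability under a uniform prior on the rate."""
    return Fraction(successes + 1, trials + 2)

# Laplace's classic example: the sun has risen on every one of n observed days.
print(rule_of_succession(100, 100))  # 101/102
# An unprecedented event: never observed in 98 trials, yet not assigned zero.
print(rule_of_succession(0, 98))     # 1/100
```

Note that with no data at all (s = 0, n = 0) the rule returns 1/2, the mean of the uniform prior.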

Biosecurity Seminar at FHI

Last month Edward Perello, co-founder of Desktop Genetics Ltd, gave a lecture on emerging technologies and biosecurity at the Future of Humanity Institute. Topics included DNA synthesis, the biohacking movement, and government regulation.

Good Done Right: a Conference on Effective Altruism

Four representatives from the Future of Humanity Institute will be speaking at Good Done Right, a conference on effective altruism taking place on July 7th-9th at All Souls College in Oxford. The conference will seek to use insights from ethical theory, economics, and related disciplines to identify the best means to secure and promote the […]

Article by Anders Sandberg on Existential Risk

Last week Anders Sandberg wrote an article in The Conversation entitled “The five biggest threats to human existence.” The article discusses the risks of nuclear weapons, biotechnology, and superintelligence.

FHI Announces Alumni

Over the years a number of researchers have participated in our work and have since moved on to other positions.

Future of Life Institute hosts opening event at MIT, “The Future of Technology: Benefits and Risks”

On May 24, the Future of Life Institute will host its opening event at MIT with board members Jaan Tallinn, George Church, Alan Alda, and Frank Wilczek. The Future of Life Institute is dedicated to responsible innovation and the reduction of existential risk.

Future of Humanity Institute answers questions from the public

On May 12th, researchers from FHI participated in a public “ask me anything” session on Reddit, hosted by The Conversation. Topics covered climate change, pandemics, bioethics, artificial intelligence, and existential risk, and the session reached Reddit’s front page.

Oxford Martin School Seminar: “Containing the intelligence explosion” by Dr. Joanna Bryson

Is artificial intelligence an existential threat to humanity? On May 13th, Dr. Joanna Bryson will be delivering a lecture at the Oxford Martin School discussing the notion of an intelligence explosion.

IEET: Nick Bostrom ranked among top 15 world thinkers

Nick Bostrom has been included in Prospect Magazine’s top 15 world thinkers, ranked as the highest analytic philosopher and 3rd among philosophers overall.

Stephen Hawking calls for more research on existential AI risks

Citing work done by the Future of Humanity Institute, Stephen Hawking warned that dismissing the dangers of advanced artificial intelligence could be the “worst mistake in history.” 

Thesis Prize Competition: Crucial Considerations for the Future of Humanity

Can philosophical research contribute to securing a long and prosperous future for humanity and its descendants? What would you think about if you really wanted to make a difference?

Daniel Dewey interviewed on existential risk

Following Stephen Hawking et al.’s article on superintelligence, Daniel Dewey discussed artificial intelligence and existential risk in an interview with Motherboard.

Nick Bostrom ranked as 1st analytic philosopher in Prospect Magazine’s top world thinkers

Nick Bostrom has been included in Prospect Magazine’s top 15 world thinkers, an honour shared with entrepreneur Elon Musk, Pope Francis, and Nobel Prize winners Peter Higgs and Daniel Kahneman. Out of all philosophers, Bostrom was ranked 3rd, and out of analytic philosophers Bostrom was ranked 1st.

Superintelligence: Paths, Dangers, Strategies now available for preorder from Oxford University Press

Nick Bostrom’s latest book is now available for preorder.  Superintelligence asks the questions: What happens when machines surpass humans in general intelligence? Will artificial agents save or destroy us?

Stephen Hawking, Max Tegmark, Frank Wilczek, and Stuart Russell discuss importance of FHI’s research on AI risk

In an article published last week, notable scientists Stephen Hawking, Max Tegmark, Frank Wilczek, and Stuart Russell discuss the risks associated with advances in artificial intelligence, citing work done by the Future of Humanity Institute.

Toby Ord gives TEDx talk at Cambridge University

FHI’s Toby Ord recently presented his talk, “How to Save Hundreds of Lives,” at a TEDx event at Cambridge University. In it, Toby demonstrates that even a modest amount of giving can save thousands of quality-adjusted life years.

Carl Frey’s work on employment featured in The Guardian

Could a “zero marginal cost society” maintain an equitable distribution of wealth?  An article in The Guardian takes on this issue, citing Carl Frey’s work on automation and employment. 

Daniel Dewey interview on Artificial Intelligence

What are some plausible future risks from advanced AI?  Last week Daniel Dewey was interviewed on NonTheology, a science and philosophy podcast.

Seán Ó hÉigeartaigh gives TEDx talk on existential risk

Seán Ó hÉigeartaigh gave the opening talk at Belgium’s TEDx UHasselt Conference last Saturday, titled “Technological risk: how unexpected connections will help us tackle humanity’s greatest challenge.” 

Nick Bostrom nominated as 2014 World Thinker

FHI director Nick Bostrom has made Prospect Magazine’s top 50 World Thinkers, along with Pope Francis, Peter Higgs, and Elon Musk. Online voting for the top World Thinker continues here until April 11th.

Toby Ord’s book chapter featured in Oxford Literary Festival

On March 28th, Toby Ord spoke at the Oxford Literary Festival about the forthcoming book he contributed to: Is The Planet Full? Toby’s chapter, ‘Overpopulation or Underpopulation?’ raises a number of philosophical questions concerning how we should think about population, involving ethics and technology.

Nick Bostrom to deliver keynote address in Brussels

Should the rate of innovation be accelerated? Nick Bostrom will discuss the risks and benefits in a keynote lecture at the Age of Wonder in Brussels this Friday. 

Ilya Shpitser discusses causal decision theory at FHI

On March 14th Dr. Ilya Shpitser discussed the merits of causal decision theory (CDT). His talk demonstrated the potential of CDT to resolve decision-theory problems (such as Newcomb’s problem), provided they are expressed in the correct causal graph.

Oxford Martin Seminar on Automation and Employment

On March 13th, Carl Frey and Mike Osborne presented their results on automation and employment at the Oxford Martin School. Topics included creative destruction, applications of machine learning, and the future role of human workers.

Stuart Armstrong interviewed on AI risks

Last week Stuart Armstrong was interviewed on the risks of artificial intelligence. While predictions and consequences remain uncertain, he highlighted the need for more research on these issues.

FHI delivers report to House of Commons

Carl Frey discussed the future of automation and employment at the House of Commons on February 25th.  Our current research within the Oxford Martin School’s Programme on the Impacts of Future Technology suggests almost half of the modern workforce is vulnerable to increased automation.

Lecture from Lord Martin Rees, Jaan Tallinn, and Professor Huw Price

The Cambridge Centre for the Study of Existential Risk will be hosting a public lecture on February 26th entitled “Existential Risk: Surviving the 21st Century.”

Simulation argument featured in New York Times

As physics sheds more light on the nature of our universe, Nick Bostrom’s simulation argument has received more attention. Last week, the New York Times cited Bostrom’s work in explaining why physicists are conducting certain experiments.

Nick Bostrom discusses dangers of superintelligent AI on BBC Radio

Would Asimov’s laws or a simple kill switch be sufficient to avoid the harms of a superintelligent AI?  Nick Bostrom and James Barrat explain the difficulties of ensuring positive outcomes of an intelligence explosion on BBC Radio.

FHI-Amlin Conference on Systemic Risk

The Future of Humanity Institute is pleased to announce the Amlin-Oxford Martin School Conference on Systemic Risk, taking place on February 11th-12th.  Speakers will include Lord Robert May, Professor Didier Sornette, Professor Ian Goldin, and Professor Doyne Farmer.

FHI hosts Agent Based Modelling workshop

On February 4th, a diverse group met at the Future of Humanity Institute to discuss projects in agent-based modelling (ABM). Experts in cultural anthropology, neuroscience, complex systems, and ecology shared insights into how ABM is used in their respective fields.

Stuart Armstrong discusses Google’s acquisition of DeepMind on BBC Today Programme

Today Dr. Stuart Armstrong lauded Google’s decision to establish an artificial intelligence (AI) ethics board following its acquisition of DeepMind, calling it a positive step towards addressing the risks associated with continuing improvements in AI.

David Christian presents “Big History and the Place of Human Beings in the Cosmos”

David Christian, hosted in part by our Programme on the Impacts of Future Technology, will be presenting his work on “Big History” at the Oxford Martin School on January 31st at 16:00.

Carl Frey’s Automation Paper Featured in The Economist

In the past, technological innovation has increased long run employment.  Will this pattern hold in the future?  The Economist has featured Carl Frey and Mike Osborne’s paper on automation and unemployment in the January 18th print edition.

Nick Bostrom, MIRI, and the intelligence explosion hypothesis discussed in Io9

Luke Muehlhauser, executive director of the Machine Intelligence Research Institute, was interviewed by io9 regarding a paper he co-wrote with Nick Bostrom about the dangers of artificial intelligence.  

Eric Drexler to discuss atomically precise manufacturing at the Oxford Martin School

Eric Drexler, an academic visitor at the Future of Humanity Institute, will give a lecture on atomically precise manufacturing at the Oxford Martin School on January 22nd. The talk will be based on his new book, Radical Abundance.

FHI Hosts Machine Intelligence Research Institute Maths Workshop

The Future of Humanity Institute is hosting a maths workshop led by the Machine Intelligence Research Institute (MIRI). The week-long workshop covers topics such as mathematical logic and probability theory, and how these tools relate to artificial intelligence.

Nick Bostrom advises 10 Downing Street on Existential Risk, Alternative Institutions, and Development

Nick Bostrom visited 10 Downing Street on Tuesday 12 November 2013 to advise on topics ranging from existential risk to more effective institutions.  

Stuart Armstrong and Anders Sandberg featured in io9 article on self-replicating space probes

On Wednesday 09 November, ideas from Stuart Armstrong and Anders Sandberg were featured in George Dvorsky’s article on self-replicating space probes.

Daniel Dewey speaking at TEDx Vienna, 02 November 2013

On Saturday 02 November, Daniel Dewey will join speakers such as Aubrey de Grey and Mark Post at TEDx Vienna.

Bostrom, Sandberg, Drexler at Futurefest 2013

Professor Nick Bostrom, Dr. Anders Sandberg, and Dr. Eric Drexler will speak on September 28 at Futurefest 2013, an event designed to “enlarge our sense of what’s possible, so that we can all play our part in shaping things to come.”

Enhancing Humanity’s Collective Wisdom: Competition Results

The Future of Humanity Institute is pleased to announce the winners of our 2013 competition.

PTAI-2013: 21-22 September in Oxford, featuring Dennett, Russell, Bringsjord

The Oxford Martin Programme on the Impacts of Future Technology will be co-hosting the 2013 Philosophy and Theory of AI conference in Oxford this weekend, September 21-22.

Nick Bostrom and Stuart Armstrong discuss the future and AI at the Re.Work Technology Summit London, 19 September

Professor Nick Bostrom will be giving a closing keynote at the Re.Work Technology Summit this evening at LSO St Luke’s, London.

Dr. Stuart Armstrong’s “A little talk about the Future”

Dr. Stuart Armstrong gave a talk at the IARU Summer School on the Ethics of Technology. The talk addressed many of the research areas of our institute.

Anders Sandberg comments on PNAS study of rat neurophysiology following heart failure

Dr. Anders Sandberg comments, via the Science Media Centre (SMC), on a recent PNAS study of rat neurophysiology following heart failure.

Anders Sandberg to discuss the future of AI at IJCAI-13 Conference in Beijing

On Friday 09 August, Dr. Anders Sandberg will be one of the invited panelists discussing “The Future of AI: What if We Succeed?” at the 2013 International Joint Conference on Artificial Intelligence (IJCAI-13) in Beijing, China.

“Why is the FHI interested in looking for aliens?”

“Why is the FHI interested in looking for aliens? Isn’t it the Future of Humanity Institute?”

Nick Bostrom talks surveillance and transparency for Guardian’s Activate London

Professor Bostrom has given a closing keynote presentation at this year’s Guardian Activate London Summit which took place on Tuesday 09 July 2013.

Dr. Toby Ord at 10 Downing Street

Last week Dr. Toby Ord went to 10 Downing Street, where he met with a special advisor to the Prime Minister.

Nick Bostrom on UN risk panel

Professor Nick Bostrom is participating in the Annual Humanitarian Affairs Segment of the United Nations Economic and Social Council (ECOSOC), held in Geneva, 15-17 July 2013.

“Secret Snakes”: Secrecy and Surveillance

Dr. Anders Sandberg, on the Oxford Martin School blog: …unaccountable surveillance is much easier turned into a tool for evil than accountable surveillance: the key question is not who got what information about whom, or even security versus freedom, but whether there is appropriate oversight and safeguards for civil liberties.

Dr James Martin, 1933-2013

The Future of Humanity Institute mourns the passing of James Martin, the visionary founder of the Oxford Martin School. The FHI owes its establishment and much of its success over the years to Dr. Martin’s support and guidance.

Nick Bostrom quoted in The Observer

Professor Nick Bostrom is quoted in an editorial and a featured article in The Observer, discussing the topics of prosthetics and robotics, human enhancement, and transhumanism.

Dr Anders Sandberg talk at Global Future 2045 International Congress

On 15 June 2013, Dr Anders Sandberg gave a talk entitled “Making Minds Morally: the Research Ethics of Brain Emulation” at the GF2045 International Congress.

Video from Prof. Max Tegmark’s talk on ‘The Future Of Life: A Cosmic Perspective’

Max Tegmark, from the Massachusetts Institute of Technology and the Foundational Questions Institute (FQXi), presents a cosmic perspective on the future of life, covering our increasing scientific knowledge, the cosmic background radiation, the ultimate fate of the universe, and what we need to do to ensure the human race’s survival and flourishing in the short […]

Nick Bostrom at Founders Forum 2013

The Founders Forum is a global network of digital leaders which connects the brightest and most dynamic digital start-ups to key investors, select CEOs and policy makers. The members meet at four key events around the globe – in NYC, Mumbai, Rio and at its flagship event in London. Founders Forum events are all about […]

Müller and Sandberg to present “Brain Surveillance” at Ethics of Surveillance Conference, Leeds

Vincent C. Müller and Anders Sandberg will present their paper on “Brain Surveillance” at the 2nd Ethics of Surveillance Conference, Leeds, June 24-25, 2013.

Professor Max Tegmark (MIT) talk “The Future of Life: a Cosmic Perspective”

The Future of Humanity Institute and the Department of Physics at the University of Oxford are pleased to invite you to a talk by one of the world’s foremost researchers in the field of cosmology.

FHI teams with Amlin Insurance to study Systemic Risk

The Future of Humanity Institute is pleased to announce the establishment of the FHI-Amlin Research Collaboration on Systemic Risk of Modelling.

Ó hÉigeartaigh speaks about the Future of Human Evolution for the “Pint of Science Festival”

The “Pint of Science Festival” in Oxford included a set of excellent talks on the brain, the body, and biotech in three of Oxford’s best pubs.

Nick Bostrom on Radio: BBC Radio 4 and BBC Radio 5

Professor Nick Bostrom, Director of the Future of Humanity Institute, will be speaking on both BBC Radio 4’s The World Tonight at 10PM and BBC Radio 5 at 10:30 tonight, elaborating on some of the topics discussed in today’s BBC article on existential risk.

BBC News: “How are humans going to become extinct?”

BBC News covers the FHI: An international team of scientists, mathematicians and philosophers at Oxford University’s Future of Humanity Institute is investigating the biggest dangers.

Machines and Employment: Which Tasks can be Automated?

Carl Frey joins Michael Osborne (Oxford Department of Engineering Science) in hosting an interdisciplinary workshop on the future effects of automation on the employment market.

Nick Bostrom at Economist’s “Technology Frontiers”

On March 5, 2013, Nick Bostrom gave a talk at The Economist’s “Technology Frontiers” conference, on the topic of human nature and the future of humanity.

Aeon Magazine Feature: “Omens”

Ross Andersen has published an essay about his visit to the Future of Humanity Institute at Aeon Magazine: When we peer into the fog of the deep future what do we see – human extinction or a future among the stars?

Crucial Considerations for the Future of Humanity: Competition Results

The Future of Humanity Institute and the Programme on the Impacts of Future Technology are pleased to announce the winners of the Crucial Considerations for the Future of Humanity Thesis Abstract Competition.

Winter Intelligence Conference 2011

Date: 14-17 January 2011. Venue: St Catherine’s College and Jesus College, Oxford. This unusual conference, bridging philosophy, cognitive science, and machine intelligence, brought together experts and students from a wide range of backgrounds for a long weekend of intense deliberation about the big questions: What holds together our experiences? What forms can intelligence take? How can […]
