Our staff members are often available for media interviews. We work with journalists to provide an expert opinion and perspective, commentary on breaking news, and participation in discussion and debate – for radio, television, and print media.

We understand that journalists often work to tight deadlines, and try to respond promptly to all media inquiries.

Areas of expertise include the following:

Contact us:

Email: fhipa@philosophy.ox.ac.uk

Phone: +44 (0) 1865 286800

Artificial Intelligence

Surveys of AI researchers suggest it is possible that the coming decades will see substantial progress in artificial intelligence, potentially even to the point where machines come to outperform humans in many or nearly all intellectual domains, though it is difficult or impossible to make confident forecasts in this area. These advances could lead to extremely positive developments, but could also potentially pose risks from misuse, accidents, or harmful societal effects, which could plausibly reach the level of existential risks. FHI carries out technical research that could reduce the risk of accidents, as well as strategic and policy work that has the potential to improve society’s preparedness for major advances in AI.

Media contacts

Nick Bostrom, author of Superintelligence

Miles Brundage, expert on AI policy

Anders Sandberg, expert on far future issues

Brain emulation

In Whole Brain Emulation (WBE), intelligent software would be produced by scanning and closely modeling the computational structure of a biological brain. The idea is that this structure could then be run as a simulation model that responds in essentially the same way as the original brain in all relevant respects. WBE is a potential path towards achieving Artificial General Intelligence (AGI) or superintelligence, although it is unclear whether WBE could be achieved sooner than more traditional AI approaches.

Further resources:

Whole Brain Emulation – A Roadmap (PDF)

Superintelligence, p. 35, Nick Bostrom (Amazon)

Biosafety and biosecurity

The field of biosafety aims to prevent large-scale harm to human health and ecosystems, in part by improving laboratory guidelines in the field of synthetic biology. We examine technical and ethical questions around policy and technology that strongly influence the likelihood that harmful pathogens are released and cause pandemics, as well as risks arising from developments in gene modification.

Human extinction and existential risk

An existential risk is one that endangers the survival of Earth-originating intelligent life, or that threatens to drastically and permanently curtail its future potential. FHI works to identify these existential risks, such as pandemics or negative outcomes of artificial intelligence, and researches ways to avoid them. Interventions that reduce these risks benefit people alive today as well as future generations.

Further resources:

Global Priorities project (PDF)

Global Catastrophic Risks (Amazon), Nick Bostrom, Milan M. Cirkovic

Pandemics

Whilst natural pandemics are relatively unlikely to cause human extinction, recent advances in biotechnology could pose concrete dangers to humanity. One of the best-known examples is CRISPR gene-editing technology, which scientists use to advance human health, but which could also be applied to make pathogens more dangerous. If the wrong individuals or groups were to gain access to these technologies, they could deliberately or accidentally cause an existential catastrophe.

Further resources:

Global Priorities project (PDF)

Effective Altruism

FHI is part of the effective altruism movement, which aims to find the most effective ways to do good in the world. Rather than relying on what feels right, effective altruism uses evidence and careful analysis to prioritise the causes that offer the best possible outcomes. Because existential risk is a major threat to a prosperous future for generations to come, it is one of the priority causes in the effective altruism community.

Further resources:

Doing Good Better, Will MacAskill (Amazon)

Center for Effective Altruism

80,000 Hours Career Advice

Effective Altruism Funds

GiveWell charity recommendations

Transhumanism and human enhancement

Humans tend to think of themselves as the endpoint of evolution, but what if we are not? Transhumanism holds that continuing advances in technology offer a real opportunity to enhance human intellectual, physical and emotional capacities. Current technologies (e.g. genetic engineering) and possible future developments (e.g. molecular nanotechnology, artificial intelligence) have the potential to extend the human lifespan, eradicate disease and unnecessary suffering, and even enable space colonisation.

Further resources:

Nick Bostrom on transhumanist values

Additional subject areas that FHI covers include:

  • Ethics of drone warfare
  • Nanotechnology
  • Biotechnology
  • Systemic risk
  • Surveillance society
  • Population ethics
  • Forecasting and prediction
  • Robotics and unemployment
  • Technological innovation
  • Virtual and augmented reality
  • Cognitive enhancement
  • Brain-computer interfaces
  • Genomic medicine
  • Fermi paradox and space colonization
  • Long term economic growth

FHI in the media – a selection of news coverage

The Doomsday Invention

Will artificial intelligence bring us utopia or destruction? This feature article following FHI director Nick Bostrom gives some insight into FHI's work on existential risk.

BBC Hardtalk

The BBC interviews the director of the Future of Humanity Institute, Nick Bostrom.

Financial Times Feature

This Financial Times article asks whether we will be able to control artificial intelligence.

FiveThirtyEight Article

What is existential risk, and how important is it to be thinking about it?