Our staff members are often available for media interviews. We work with journalists to provide expert opinion and perspective, commentary on breaking news, and participation in discussion and debate – for radio, television, and print media.

We understand that journalists often work to tight deadlines, and we try to respond promptly to all media inquiries.

Contact us:

Email: fhipa@philosophy.ox.ac.uk

Phone: +44 (0) 1865 286800

Subjects we cover

Artificial Intelligence and Emerging Technologies

Surveys of AI researchers suggest that the coming decades may see substantial progress in artificial intelligence, potentially to the point where machines outperform humans in many or nearly all intellectual domains, though confident forecasts in this area are difficult or impossible. These advances could lead to extremely positive developments, but could also pose risks from misuse, accidents, or harmful societal effects, which could plausibly reach the level of existential risks. FHI carries out technical research that could reduce the risk of accidents, as well as strategic and policy work that has the potential to improve society’s preparedness for major advances in AI.

Biotechnology

The field of biosafety aims to prevent large-scale adverse effects on human health and ecology, in part by improving laboratory guidelines in the field of synthetic biology. We look at technical and ethical questions around policy and technology that can significantly influence the likelihood that harmful pathogens are released and cause pandemics, as well as risks arising from developments in gene modification.

Existential risks and opportunities

An existential risk is one that endangers the survival of Earth-originating intelligent life, or that threatens to drastically curtail our future potential. FHI works on identifying these existential risks, such as pandemics or negative outcomes of artificial intelligence, and researches ways to avoid them. Interventions to reduce these risks will help people today, as well as future generations.

The Doomsday Invention

Will artificial intelligence bring us utopia or destruction? This feature article following FHI director Nick Bostrom gives some insight into the work FHI does in the space of existential risk.

BBC Hardtalk

The BBC interviews the director of the Future of Humanity Institute, Nick Bostrom.

Financial Times Feature

This Financial Times article asks whether we will be able to control artificial intelligence.

FiveThirtyEight Article

What is existential risk, and how important is it to be thinking about it?