FHI explores what we can do now to ensure a long flourishing future.

FHI is a multidisciplinary research institute at the University of Oxford. Academics at FHI bring the tools of mathematics, philosophy, and the social and natural sciences to bear on big-picture questions about humanity and its prospects. The Institute is led by founding Director Professor Nick Bostrom.

Studying the connections between present actions and long-term outcomes for humanity. Finding crucial considerations that might radically change our scheme of priorities.

See our macrostrategy research area page for more details.

Understanding how geopolitics, governance structures, and strategic trends will affect the development of advanced artificial intelligence.

See our Center for the Governance of AI page for more details.

Researching computer science techniques for building safer artificially intelligent systems.

See our AI safety research area page for more details.

Working with institutions around the world to reduce risk from especially dangerous pathogens.

This is a growing area of work for us, and we expect to publish significantly more soon.

The Doomsday Invention

Nick Bostrom asks: will we engineer our own extinction?
The New Yorker. Read the full New Yorker piece.


Researchers at the Future of Humanity Institute have originated or played a pioneering role in developing many of the concepts that shape current thinking about humanity’s deep future. These include: existential risk, astronomical waste, the simulation argument, nanotechnology, the great filter, infinitarian paralysis, prediction markets, and analysis of superintelligence, brain emulation scenarios, human enhancement, transhumanism, and anthropics.


We work closely with the Centre for Effective Altruism, DeepMind, OpenAI, the Machine Intelligence Research Institute, the Leverhulme Centre for the Future of Intelligence, and the Cambridge Centre for the Study of Existential Risk. Our researchers regularly advise philanthropic foundations, industry leaders, and governments.

FHI at Oxford

the big creaky wheel
a thousand years to turn

thousand meetings, thousand emails, thousand rules
to keep things from changing
and heaven forbid
the setting of a precedent

yet in this magisterial inefficiency
there are spaces and hiding places
for fragile weeds to bloom
and maybe bear some singular fruit

like the FHI, a misfit prodigy
daytime a tweedy don
at dark a superhero
flying off into the night
cape a-fluttering
to intercept villains and stop catastrophes
somebody has to do it

and why not base it here?
our spandex costumes
blend in with the scholarly gowns
our unusual proclivities
are shielded from ridicule
where mortar boards are still in vogue

Bostrom (2018)