Research Areas

The Future of Humanity Institute’s mission is to bring excellent scholarship to bear on big-picture questions for humanity. We seek to focus our work where we can make the greatest positive difference. This means we pursue questions that are (a) critically important for humanity’s future, (b) unduly neglected, and (c) tractable, in the sense that we have some idea of how to obtain an answer, or at least some useful new insight. Through this work, we foster more reflective and responsible ways of dealing with humanity’s biggest challenges.

Our work spans four programmatic areas:


Macrostrategy

It is easy to lose track of the big picture.  Yet if we want to intervene in the world and care about the long-term consequences of our actions, we cannot help but place bets on how our local actions will affect the complicated dynamics that shape the future.  We therefore think it is valuable to develop analytic tools and insights that clarify our understanding of the macrostrategic context for humanity.

A significant interest of ours is existential risk: risk in which an adverse outcome would either end Earth-originating intelligent life or drastically and permanently curtail its potential for realizing a valuable future.  Interventions that promise to reduce the integral of existential risk over time, even slightly, may be good candidates for actions with very high expected value.
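To make the expected-value reasoning concrete, here is a minimal sketch in standard survival-analysis notation; the symbols h, P, T, and V are our illustrative assumptions, not a fixed FHI formalism:

```latex
% Hedged sketch (our notation): survival under an existential hazard rate.
% h(t): instantaneous hazard rate of existential catastrophe at time t
% P(T): probability that Earth-originating intelligent life survives to T
% V:    value realized conditional on survival
P(T) = \exp\!\left( -\int_0^T h(t)\,\mathrm{d}t \right),
\qquad
\mathbb{E}[\mathrm{value}] = P(T)\,V .
% Reducing the cumulative hazard \int_0^T h(t)\,dt by a small \delta
% multiplies P(T), and hence the expected value, by e^{\delta}; when V
% is very large, even slight reductions carry very high expected value.
```

On this reading, “reducing the integral of existential risk” means lowering the cumulative hazard, whose effect on expected value compounds over long horizons.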

Our work on macrostrategy involves forays into deep issues in several fields, including detailed analysis of future technology capabilities and impacts, existential risk assessment, anthropics, population ethics, human enhancement ethics, game theory, and consideration of the Fermi paradox and other indirect arguments.  Many core concepts and techniques in macrostrategy originated with FHI scholars, and they are already having a practical impact, for example in the effective altruism movement.

AI Safety

Surveys of leading AI researchers suggest a significant probability that human-level machine intelligence will be achieved this century.  Machines already outperform humans on several narrowly defined tasks, but the prospect of general machine intelligence would introduce novel challenges: such a system’s goals would need to be carefully designed to ensure that its actions are safe and beneficial.

Present-day machine learning algorithms (if scaled up to very high levels of intelligence) would not reliably preserve a valued human condition.  We therefore face a ‘control problem’: how to create advanced AI systems that we could deploy without risk of unacceptable side-effects.
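One facet of this problem can be illustrated with a toy, hedged sketch (our construction, showing Goodhart-style proxy misspecification, not any specific FHI model): a strong optimizer pointed at a slightly wrong objective can score far below what was achievable on the objective we actually intended.

```python
import numpy as np

# Toy sketch of proxy misspecification (our construction, not an FHI
# model): selecting the candidate that maximizes a proxy objective can
# leave the intended objective badly served.
rng = np.random.default_rng(0)

# Candidate "policies", represented as points in a 2-D feature space.
candidates = rng.normal(size=(10_000, 2))

def intended(x):
    return x[0]                # what we actually value

def proxy(x):
    return x[0] + 3.0 * x[1]   # what the system was told to optimize

best_by_proxy = max(candidates, key=proxy)
print("intended score of the proxy-optimal policy:", intended(best_by_proxy))
print("best achievable intended score:            ",
      max(intended(x) for x in candidates))
```

The gap between the two printed scores is the cost of optimizing the wrong target; the control problem asks how to specify, verify, and constrain objectives so that such gaps do not translate into unacceptable side-effects at scale.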

Our research in this area focuses on the technical aspects of the control problem.  We also work on the broader strategic, ethical, and policy issues that arise in the context of efforts to reduce the risks of long-term developments in machine intelligence.  For an in-depth treatment of this topic, please see Superintelligence: Paths, Dangers, Strategies (OUP, 2014).


Technology Forecasting & Risk Assessment

A handful of emerging technologies could fundamentally transform the human condition.  Advances in biotechnology and nanotechnology may enable dramatic human enhancement but also create unprecedented risks to civilization and the biosphere alike.  Near-term, narrow machine intelligence will bring myriad economic benefits but could also contribute to technological unemployment, ubiquitous surveillance, or institutional lock-in.

Our research in these areas seeks to prioritize among emerging risks and opportunities, determine the interaction effects between emerging technologies, and identify actionable interventions that could improve humanity’s long-run potential.


Policy & Industry

We collaborate with a variety of governmental and industrial groups from around the world.  FHI has worked with or consulted for the US President’s Council on Bioethics, the UK Prime Minister’s Office, the United Nations, the World Bank, the Global Risk Register, and a handful of foreign ministries.  We have an ongoing sponsorship from Amlin plc, a major reinsurance company, as well as research arrangements with leading groups in artificial intelligence.
We welcome expressions of interest from government and industry.  Please contact Andrew Snyder-Beattie for further details.