Research Areas

The Future of Humanity Institute’s mission is to bring excellent scholarship to bear on big-picture questions for humanity. We seek to focus our work where we can make the greatest positive difference. This means we pursue questions that are (a) critically important for humanity’s future, (b) unduly neglected, and (c) such that we have some idea of how to obtain an answer, or at least some useful new insight. Through this work, we foster more reflective and responsible ways of dealing with humanity’s biggest challenges.

We use a diverse set of methodological tools, drawing both on scientific theory and data and on the techniques of analytic philosophy. The specifics depend on which particular research question is being addressed. In addition to relying on our in-house multidisciplinary expertise, we frequently collaborate with external experts.

Our work spans four programmatic areas. Within these areas, we pursue a number of specific research projects.

 

Global Catastrophic Risk

 

Global catastrophic risks are those that pose serious threats to human well-being on a global scale. An immensely diverse collection of events could constitute global catastrophes: they range from volcanic eruptions to pandemic infections, nuclear accidents to worldwide tyrannies, out-of-control scientific experiments to climatic changes, and cosmic hazards to economic collapse.

Global catastrophes have occurred many times in history, even if we count only disasters causing more than 10 million deaths. A very partial list of examples includes the An Shi Rebellion (756-763), the Taiping Rebellion (1851-1864), the famine of the Great Leap Forward in China, the Black Death in Europe, the Spanish flu pandemic, the two World Wars, the Nazi genocides, the famines in British India, Stalinist totalitarianism, and the decimation of the Native American population through smallpox and other diseases following the arrival of European colonizers. Many others could be added to this list.

A special focus for the FHI is the study of existential risks. These form a sub-category of global catastrophic risks, in which an adverse outcome would either cause the extinction of Earth-originating intelligent life or permanently and drastically destroy its future potential. It would spell an end to the human story. Because of this extreme severity, existential risks deserve careful attention even when their probability can confidently be assessed to be very small. Reduction of existential risk is of singularly high expected utility. A necessary first step toward mitigation is improved understanding. The study of existential risk, however, faces a number of distinctive methodological challenges.
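To see why even a small reduction in existential risk can carry enormous expected value, consider the following toy calculation. It is a minimal sketch: the function, the probabilities, and the 10^16 potential-lives figure are illustrative assumptions chosen for exposition, not FHI estimates.

```python
# Toy expected-value comparison. All numbers are hypothetical
# placeholders chosen only to illustrate the shape of the argument.

def expected_lives_saved(p_catastrophe: float,
                         relative_risk_reduction: float,
                         lives_at_stake: float) -> float:
    """Expected lives saved by an intervention that reduces the
    probability of a catastrophe by a given relative amount."""
    return p_catastrophe * relative_risk_reduction * lives_at_stake

# An ordinary intervention that saves a million lives with certainty.
ordinary = expected_lives_saved(1.0, 1.0, 1e6)

# A 0.01% relative reduction in a 1% existential risk, where the stakes
# include all potential future people (1e16 is a deliberately modest
# stand-in for such astronomical figures).
existential = expected_lives_saved(0.01, 1e-4, 1e16)

print(f"ordinary:    {ordinary:.1e} expected lives")
print(f"existential: {existential:.1e} expected lives")
# Output: 1.0e+06 vs 1.0e+10 -- even a minuscule reduction in
# existential risk dominates in expectation.
```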

The problems we are working on in relation to global catastrophic and existential risk include (but are not limited to) the following:

  • What strategies should be pursued to reduce the risk of bioterrorism and other threats arising from the misuse of biotechnology?
  • How should we assess risks for which model uncertainty is the dominating factor? (See the sketch following this list.)
  • What are the most probable existential risks?
  • How can we develop an appropriate framework for analyzing and assessing existential risk?
  • How can we best reduce risks arising from expected future technological developments, such as human enhancement, nanotechnology, and artificial intelligence?
  • How can we ensure that our methods of estimating existential risks are free from bias and systematic error?
  • When mitigating risks within complex systems, how can we ensure that the strategies we employ cost-effectively reduce the overall level of risk without introducing major new risks?
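The second question above, on model uncertainty, can be illustrated with a toy calculation in the spirit of FHI work on probing low-probability risk estimates (e.g. Ord, Hillerbrand, and Sandberg, “Probing the Improbable”). The sketch below is illustrative only; the function and all of its numbers are hypothetical assumptions.

```python
# Minimal sketch of why model uncertainty can dominate a risk estimate.
# All numbers are hypothetical placeholders.

def total_risk(p_within_model: float,
               p_model_flawed: float,
               p_given_flaw: float) -> float:
    """Overall probability of disaster, accounting for the chance that
    the model producing the estimate is itself flawed."""
    return ((1 - p_model_flawed) * p_within_model
            + p_model_flawed * p_given_flaw)

# A physics model puts the risk at one in a billion...
p_within_model = 1e-9
# ...but suppose there is a modest chance the model's assumptions fail,
# and that conditional on such a flaw we can only say the risk is small.
p_model_flawed = 1e-3
p_given_flaw = 1e-4

print(f"{total_risk(p_within_model, p_model_flawed, p_given_flaw):.2e}")
# ~1.01e-07: the answer is dominated by model uncertainty, two orders
# of magnitude above the within-model figure, so refining the model's
# internal calculation further would barely change the estimate.
```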

For more information, please visit the Global Catastrophic Risks website.

 

Applied Epistemology

 

In a century of increasing risks and opportunities, it is important to be able to form accurate beliefs about the future based on our evidence, and to make smart decisions on the basis of those beliefs. Improving our practical abilities in both of these areas is critical, since it can help us deal with all of the big issues that will arise, even those we have not yet anticipated. This could involve developing techniques to improve the rationality of individuals (be they researchers, policy makers, or members of the general public), as well as improving the rationality of groups by changing their internal dynamics.

We are also interested in improving wisdom, which we define as the ability to get the big picture at least roughly right. It is common in decision making to lose track of the big picture, which can have catastrophic consequences or can mean that we forfeit almost all of the value at stake. Humanity needs to be able to set global priorities, and to develop the tools and techniques for keeping everything in proportion in those deliberations.

Our research in this area includes the following topics:

  • How can we identify, understand, and reduce cognitive biases?
  • How can institutional innovations such as prediction markets improve information aggregation and probabilistic forecasting? (See the sketch following this list.)
  • How should an ethically motivated agent act under conditions of profound moral uncertainty?
  • How can we correct for observation selection effects in anthropic reasoning?
  • How can we integrate our thinking on big picture questions in such a way that all crucial considerations are properly taken into account?
  • What can developments in neuroscience and cognitive psychology teach us about moral cognition and about phenomena such as rationalization and social signaling?
  • Under what conditions, if any, can rational Bayesian agents knowingly agree to disagree?
  • How can we better evaluate claims to expertise in areas of relevance to public policy?
  • How can we foster better collaborative cognition and more honest truth-seeking in social processes such as the mass media and political decision making?
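As a concrete illustration of the information-aggregation question above, the sketch below pools several probability forecasts by averaging them in log-odds space and then extremizing the result, an adjustment studied in the forecasting literature. It is a minimal sketch under stated assumptions, not a description of any FHI system; the forecasts and the extremization factor are invented for illustration.

```python
# Minimal sketch of probabilistic forecast pooling. The forecasts and
# the extremization factor below are illustrative assumptions.

import math

def logit(p: float) -> float:
    """Map a probability to log-odds."""
    return math.log(p / (1.0 - p))

def inv_logit(x: float) -> float:
    """Map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def pool(forecasts: list[float], extremize: float = 2.5) -> float:
    """Average forecasts in log-odds space, then push the result away
    from 0.5 to compensate for forecasters sharing only part of the
    available evidence (a standard 'extremizing' adjustment)."""
    mean_log_odds = sum(logit(p) for p in forecasts) / len(forecasts)
    return inv_logit(extremize * mean_log_odds)

forecasts = [0.70, 0.80, 0.75]
print(f"simple mean:     {sum(forecasts) / len(forecasts):.3f}")
print(f"extremized pool: {pool(forecasts):.3f}")
# The pooled estimate (~0.94) is more confident than any individual
# forecast, reflecting the partly independent evidence behind each one.
```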

 

Human Enhancement

 

“Human enhancement” refers to the use of medicine, technology, and techniques to improve the capacities of people beyond what we would consider normal or healthy. Such enhancement could include pills to make us happier, better at remembering things, or more alert; drugs and training techniques to improve physical capacities like strength and stamina; genetic intervention techniques to make our children smarter and healthier; and drugs and medical procedures to extend our lifespans.

Whilst attempts at enhancement are already familiar (we drink coffee to increase alertness and sports drinks to improve stamina), the new opportunities for enhancement offered by recent medical and technological advances raise some important technical, ethical, social, and policy questions, which FHI is addressing. These questions include:

  • How can enhancement help improve our lives?
  • What advances in enhancement techniques can we realistically expect?
  • What socioeconomic consequences would, for example, weak cognitive enhancement have?
  • What ethical issues arise from the use of enhancers?
  • Are our attitudes towards enhancement rational?
  • What risks and consequences might result from more powerful human enhancement methods that might be developed in the future?
  • What regulatory approach is best suited for enhancement medicine?

 

Future Technologies

 

The Future of Humanity Institute is closely allied with the recently established Oxford Martin Programme on the Impacts of Future Technology, also directed by Professor Nick Bostrom. The Programme also works closely with the Institute for the Future of Computing, comprising the Oxford University Computing Laboratory (Professor Bill Roscoe) and the Oxford e-Research Centre (Professor Anne Trefethen); the Institute for Science and Ethics (Professor Julian Savulescu); and other Oxford Martin School institutes. Professor David Deutsch (Department of Atomic and Laser Physics, Centre for Quantum Computation, Clarendon Laboratory) serves as a senior consultant.

The Programme, established in September 2011, focuses on the analysis of possibilities related to long-range technological change and the potential social impacts of future transformative technologies. Research foci include issues related to the future of computing, existential risks, and methodology, including the following areas:

  • Changing rates of change
  • Automation and complexity barriers
  • Machine intelligence capabilities and safety
  • Novel applications and unexpected societal impacts
  • Predictability horizons
  • Existential risks and future technologies

For more information, please visit the Programme’s website at futuretech.ox.ac.uk.