Strategic Artificial Intelligence Research Centre


FHI houses the Strategic AI Research Centre, a joint Oxford-Cambridge initiative developing strategies and tools to ensure artificial intelligence (AI) remains safe and beneficial. The Centre has two primary lines of research: ‘technical’ and ‘strategic’.

The first line of research aims to solve the technical challenges of building AI systems that remain safe even when highly capable. Examples of this kind of research can be found in FHI’s previous work, such as the reinforcement learning approaches used in our recent paper with DeepMind and those used in our collaboration with Stanford. Other ideas for AI safety research are set out in research agendas such as Concrete Problems in AI Safety by Google Brain and Aligning Superintelligence with Human Interests: A Technical Research Agenda by the Machine Intelligence Research Institute.

The second line of research aims to understand and shape the strategic landscape of long-term AI development. Examples of this kind of research include determining optimal levels of research openness, designing commitment structures that could prevent arms races between groups, analysing the possible dynamics of an intelligence explosion, and estimating the extent to which inputs such as hardware and software will drive long-run AI progress. Research in this area has utilised a diverse set of methods drawn from game theory, microeconomics, forecasting, agent-based modelling, and Bayesian inference, typically informed by results from machine learning, neuroscience, biological evolution, and the social sciences. Recent papers in this area include Strategic Implications of Openness in AI Development by Nick Bostrom and Algorithmic Progress in Six Domains by Katja Grace.