Applications are invited for a full-time Research Fellow within the Future of Humanity Institute (FHI) at the University of Oxford. This is a fixed-term post for 24 months from the date of appointment, located at the FHI offices in the beautiful city of Oxford.
Reporting to the Director of Research at the Future of Humanity Institute, the successful candidate will be responsible for identifying crucial considerations for improving humanity's long-run potential. The Research Fellow will evaluate strategies that could reduce existential risk, particularly with respect to the long-term outcomes of technologies such as artificial intelligence.
The postholder's main responsibilities will include: contributing to the development of the research agenda and conducting individual research; collaborating with partner institutions and research groups; assisting in the completion of research for peer-reviewed publications; disseminating research findings by participating in seminars, lectures, and other public meetings; small-scale project management and providing guidance to junior colleagues; and developing ideas for research income and new research methodologies.
Applicants should be familiar with the existing literature on existential risk and related fields, or should have other equivalent evidence of outstanding research capability. Interdisciplinary research experience may be an advantage.
FHI's work in the area of macrostrategy includes Nick Bostrom's Superintelligence, the book Global Catastrophic Risks, and this paper on the strategic implications of openness in AI development.
How to apply
Candidates should apply via this link and must submit a CV and a supporting statement as part of their application. Applications received through any other channel will not be considered.
The closing date for applications is 12.00 midday on Friday 29 September 2017. Please contact email@example.com with questions about the role, and firstname.lastname@example.org with questions about the application process.