APPLICATIONS ARE NOW CLOSED
The Future of Humanity Institute (FHI) at the University of Oxford is excited to invite applications for a full-time Post-Doctoral Research Scientist. The post is fixed-term for 24 months from the date of appointment.
You will advance the field of AI safety by conducting technical research. You can find examples of related work from FHI on our website. Your research is likely to involve collaboration with researchers at FHI and with outside researchers in AI or computer science. You will publish technical work at major conferences, raise research funds, manage your research budget, and potentially hire and supervise additional researchers.
The ideal candidate will have a PhD and a strong publication record in AI/machine learning or a closely related field. Exceptional candidates with a research background in nearby areas (statistics, computer science, mathematics, neuroscience, cognitive science, physics) and a demonstrated interest in AI safety will also be considered. Experience conducting interdisciplinary research is a plus.
About the team
FHI is a leader in research on the future of artificial intelligence. FHI co-hosts a seminar on technical AI safety with Google DeepMind, and we have research collaborations with DeepMind, Stanford University, and Oxford’s Machine Learning Group. We also supervise internships for top PhD students in machine learning. In addition to Prof. Nick Bostrom (author of “Superintelligence”), three researchers at FHI are working on technical AI safety, and we are recruiting a further research scientist alongside this post.
Apply now
Candidates should apply via this link and must submit a CV, a supporting statement, and a short research proposal as part of their application. Applications received through any other channel will not be considered.
The closing date for applications is 12.00 midday on Thursday 10 January 2018. Please contact fhiadmin@philosophy.ox.ac.uk with questions about the role or application process.