APPLICATIONS ARE NOW CLOSED
The Future of Humanity Institute (FHI) at the University of Oxford is excited to invite applications for a full-time Research Scientist. The post is fixed-term for 24 months from the date of appointment.
You will be responsible for conducting technical research in AI Safety. You can find examples of related work from FHI on our website. Your research is likely to involve collaboration with researchers at FHI and with outside researchers in AI or computer science. You will co-publish technical work at major conferences, manage a research budget for your project, and contribute to the recruitment and supervision of additional researchers.
You will have experience contributing to publications in AI/machine learning or closely related fields. Along with holders of a PhD/DPhil, candidates who hold a Bachelor's or Master's degree with research experience in nearby fields (statistics, computer science, mathematics, neuroscience, cognitive science, physics) and a demonstrated interest in AI Safety will also be considered. Experience conducting interdisciplinary research is a plus.
About the team
FHI is a leader in research on the future of Artificial Intelligence. FHI co-hosts a seminar on technical AI Safety with Google DeepMind, and we have research collaborations with DeepMind, Stanford University, and Oxford’s Machine Learning Group. We also supervise internships for top PhD students in machine learning. In addition to Prof. Nick Bostrom (author of “Superintelligence”), three researchers are currently working on technical AI Safety, and we are recruiting a post-doctoral research scientist alongside this post.
Candidates should apply via this link and must submit a CV, a supporting statement, and a short research proposal as part of their application. Applications received through any other channel will not be considered.
The closing date for applications is 12.00 midday on Thursday 10 January 2018. Please contact email@example.com with questions about the role or application process.