The Future of Humanity Institute (FHI) at the University of Oxford is excited to invite applications for a full-time Research Fellow. The post is fixed-term for 24 months from the date of appointment.
You will be responsible for conducting technical research in AI Safety. You can find examples of related work from FHI on our website. Your research is likely to involve collaboration with researchers at FHI and with outside researchers in AI or computer science. You will co-publish technical work at major conferences, carry out collaborative research projects, and maintain relationships with relevant research labs and key individuals.
The ideal candidate will have a Bachelor's or Master's degree along with experience of contributing to publications in AI/machine learning or in closely related fields. Exceptional candidates with a research background in adjacent fields (statistics, computer science, mathematics, neuroscience, cognitive science, physics) and a demonstrated interest in AI Safety will also be considered. For those without an AI Safety background, FHI will provide mentoring and support.
About the team
FHI is a leader in research on the future of Artificial Intelligence. FHI co-hosts a seminar on technical AI Safety with Google DeepMind, and we have research collaborations with DeepMind, Stanford University, and Oxford’s Machine Learning Group. We also supervise internships for top PhD students in machine learning. In addition to Prof. Nick Bostrom (author of “Superintelligence”), the team includes three researchers working on technical AI Safety.
Candidates should apply via this link and must submit a CV, supporting statement, references, and a short research proposal as part of their application. Applications received through any other channel will not be considered.
The closing date for applications is 12.00 midday on 30th April 2018. Please contact email@example.com with questions about the role or application process.