From the entire FHI team, we wish you all a great start into 2018! In this post, we would like to provide you with a summary of the FHI highlights over the last quarter of 2017.
FHI launches the Governance of AI Program
FHI is delighted to announce the formation of the Governance of AI Program. Co-directed by Nick Bostrom and Allan Dafoe, the program aims to engage with research and policy to steer the potentially transformative development of artificial intelligence toward the common good. The Governance of AI Program will track and engage with contemporary applications of AI in politics, justice, the economy, and cybersecurity, but will primarily focus on future advanced AI systems. In the coming quarter, the program looks forward to furthering its key relationships and disseminating its research more broadly. We are always interested in hiring top-flight researchers and administrators; details here.
Biological Weapons Convention 2017
In December, four FHI researchers attended the annual UN Biological Weapons Convention meeting in Geneva. To bring biosafety into the focus of international cooperation, we hosted a side event on global catastrophic biological risks.
FHI researchers visit NIPS 2017
FHI hosted a well-attended AI Safety lunch mixer at NIPS (Conference on Neural Information Processing Systems) in Long Beach, California in December, organised with support from the Berkeley Existential Risk Initiative (BERI). In addition, Research Fellow Owain Evans and Research Associate Jan Leike both presented work on AI safety at the workshop on aligned artificial intelligence, and Governance of AI Program Co-Director Allan Dafoe presented on AI governance at the workshop on machine learning and computer security.
Altmetric published its “Top 100 most discussed academic papers” list (across all fields) for 2017; the FHI paper “When Will AI Exceed Human Performance? Evidence from AI Experts” ranked 16th. READ MORE
Other FHI Research
- A Brief Survey of Deep Reinforcement Learning (Arulkumaran, K., Deisenroth, M., Brundage, M., Bharath, A. 2017. arXiv:1708.05866)
- Anthropic Decision Theory for Self-Locating Beliefs (Armstrong, S. 2017. Technical Report #2017-1. Future of Humanity Institute, University of Oxford)
You can still apply for two internships at FHI.
For additional opportunities at the Governance of AI Program, please visit this page. FHI is always looking for high-quality researchers and operations staff; please get in touch even if none of the current opportunities suits your profile. We expect to announce additional vacancies shortly, including posts in technical AI safety. Please keep an eye on our jobs page for further announcements.