Q2 2017 FHI Quarterly Update
In the second quarter of 2017, FHI continued its work exploring crucial considerations for the long-run flourishing of humanity in our four research focus areas:
- Macrostrategy – understanding which crucial considerations shape what is at stake for the future of humanity.
- AI safety – researching computer science techniques for building safer artificially intelligent systems.
- AI strategy – understanding how geopolitics, governance structures, and strategic trends will affect the development of advanced artificial intelligence.
- Biorisk – working with institutions around the world to reduce risk from especially dangerous pathogens.
We have been adapting FHI to our growing size. We’ve secured 50% more office space, which will be shared with the proposed Institute for Effective Altruism. We are developing plans to restructure to make our research management more modular and to streamline our operations team.
We have gained two staff in the last quarter. Tanya Singh is joining us as a temporary administrator, coming from a background in tech start-ups. Laura Pomarius has joined us as a Web Officer with a background in design and project management. Two of our staff will be leaving this quarter: Kathryn Mecrow is continuing her excellent work at the Centre for Effective Altruism, where she will be their Office Manager, and Sebastian Farquhar will be leaving to do a DPhil at Oxford but expects to continue close collaboration. We thank them both for their contributions and wish them the best!
Key outputs you can read
A number of co-authors, including FHI researchers Katja Grace and Owain Evans, surveyed hundreds of AI researchers to understand their expectations about AI performance trajectories. They found significant uncertainty, but the aggregate subjective probability estimate suggested a 50% chance of high-level AI within 45 years. Of course, the estimates are subjective, and expert surveys like this are not necessarily accurate forecasts, though they do reflect the current state of opinion. The survey was widely covered in the press.
An earlier overview of funding in the AI safety field by Sebastian Farquhar highlighted slow growth in AI strategy work. Miles Brundage’s latest piece, released via 80,000 Hours, aims to expand the pipeline of workers for AI strategy by suggesting practical paths for people interested in the area.
Anders Sandberg, Stuart Armstrong, and their co-author Milan Cirkovic published a paper outlining a potential strategy for advanced civilizations to postpone computation until the universe is much colder, thereby producing up to a 10^30 multiplier on achievable computation. This might explain the Fermi paradox, although a forthcoming FHI paper suggests there may be no paradox to explain.
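To give a rough sense of where such a large multiplier could come from, here is a back-of-the-envelope sketch based on Landauer's principle, not the paper's own derivation: erasing one bit of information at temperature T costs at least E = k_B T ln 2 of energy, so a fixed energy budget Q buys at most N(T) = Q / (k_B T ln 2) irreversible bit operations. Waiting until the background temperature falls from its present value T_now to a far-future value T_future therefore multiplies achievable computation by roughly T_now / T_future; the paper's 10^30 figure rests on a more detailed analysis, but this scaling conveys the basic intuition.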
Individual research updates
Macrostrategy and AI Strategy
Nick Bostrom has continued work on AI strategy and the foundations of macrostrategy, and is investing time in advising some key actors in AI policy. He gave a speech at the G30 in London and presented to CEOs of leading Chinese technology firms, in addition to giving a number of other lectures.
Miles Brundage wrote a career guide for AI policy and strategy, published by 80,000 Hours. He ran a scenario planning workshop on uncertainty in AI futures, and began a paper on verifiable and enforceable agreements in AI safety; a review paper on deep reinforcement learning that he co-authored was accepted for publication. He spoke at Newspeak House and participated in a RAND workshop on AI and nuclear security.
Owen Cotton-Barratt organised and led a workshop to explore potential quick-to-implement responses to a hypothetical scenario where AI capabilities grow much faster than the median expected case.
Sebastian Farquhar continued work with the Finnish government on pandemic preparedness, existential risk awareness, and geoengineering. They are currently drafting a white paper in three working groups on those subjects. He is contributing to a technical report on AI and security.
Carrick Flynn began working on structuredly transparent crime detection using AI and encryption and attended EAG Boston.
Clare Lyle has joined as a research intern and has been working with Miles Brundage on AI strategy issues including a workshop report on AI and security.
Toby Ord has continued work on a book on existential risk, worked to recruit two research assistants, ran a forecasting exercise on AI timelines, and continued his collaboration with DeepMind on AI safety.
Anders Sandberg is beginning preparation for a book on ‘grand futures’. A paper by him and co-authors on the aestivation hypothesis was published in the Journal of the British Interplanetary Society. He contributed a report on the statistical distribution of great power war to a Yale workshop and spoke at a workshop on AI at the Johns Hopkins Applied Physics Lab and at the AI For Good summit in Geneva, among many other workshop and conference contributions. Among his many media appearances, he can be found in episodes 2-6 of National Geographic’s series Year Million.
AI Safety
Stuart Armstrong has made progress on a paper on oracle designs and low impact AI, a paper on value learning in collaboration with Jan Leike, and several other collaborations including those with DeepMind researchers. A paper on the aestivation hypothesis co-authored with Anders Sandberg was published.
Eric Drexler has been engaged in a technical collaboration addressing the adversarial example problem in machine learning and has been making progress toward a publication that reframes the AI safety landscape in terms of AI services, structured systems, and path-dependencies in AI research and development.
Owain Evans and his co-authors released their survey of AI researchers on their expectations of future trends in AI. It was covered in the New Scientist, MIT Technology Review, and leading newspapers and is under review for publication. Owain’s team completed a paper on using human intervention to help RL systems avoid catastrophe. Owain and his colleagues further promoted their online textbook on modelling agents.
Jan Leike and his co-authors released a paper on universal reinforcement learning, an approach that makes fewer assumptions about its environment than most reinforcement learning methods. Jan is a research associate at FHI while working at DeepMind.
Girish Sastry, William Saunders, and Neal Jean have joined as interns and have been helping Owain Evans with research and engineering on the prevention of catastrophes during training of reinforcement learning agents.
Biosecurity
Piers Millett has been collaborating with Andrew Snyder-Beattie on a paper on the cost-effectiveness of interventions in biorisk and on the links between catastrophic biorisks and traditional biosecurity. Piers also worked with biorisk organisations and initiatives including the US National Academies of Science and the global technical synthetic biology meeting (SB7), and contributed to training for those overseeing Ebola samples, among other engagements.
Funding
FHI is currently in a healthy financial position, although we continue to accept donations. We expect to spend approximately £1.3m over the course of 2017. Assuming three new hires but no further growth, our current funds plus pledged income should last us until early 2020. Additional funding would likely be used to expand our research capacity in machine learning, technical AI safety, and AI strategy. If you are interested in discussing ways to further support FHI, please contact Niel Bowerman.
Recruitment
Over the coming months we expect to recruit for a number of positions. At the moment, we are interested in applications for internships from talented individuals with a machine learning background to work in AI safety. We especially encourage applications from demographic groups currently under-represented at FHI.