The Future of Humanity Institute (FHI) will be joining the Partnership on AI, a non-profit organisation founded by Amazon, Apple, Google/DeepMind, Facebook, IBM, and Microsoft to formulate best practices for socially beneficial AI development.  We join the Partnership alongside technology firms such as Sony as well as third sector groups such as Human Rights Watch, UNICEF, and our partners in Cambridge, the Leverhulme Centre for the Future of Intelligence.

The Partnership on AI is organised around a set of thematic pillars, including safety-critical AI; fair, transparent, and accountable AI; and AI and social good.  FHI will focus its work on the first of these pillars: safety-critical AI.

In the Partnership’s view:

Where AI tools are used to supplement or replace human decision-making, we must be sure that they are safe, trustworthy, and aligned with the ethics and preferences of people who are influenced by their actions.

Professor Nick Bostrom, director of FHI, said in response to the news, “We’re delighted to be joining the Partnership on AI, and to be expanding our industry and nonprofit collaborations on AI safety.”  FHI has previously worked with groups such as DeepMind on AI safety, co-authoring papers on how to ensure that reinforcement learning systems do not learn to avoid or interfere with human interruption.

The full list of new partners includes the AI Forum of New Zealand (AIFNZ), Allen Institute for Artificial Intelligence (AI2), Center for Democracy & Technology (CDT), Centre for Internet and Society, India (CIS), Cogitai, Data & Society Research Institute (D&S), Digital Asia Hub, eBay, Electronic Frontier Foundation (EFF), Future of Humanity Institute (FHI), Future of Privacy Forum (FPF), Human Rights Watch (HRW), Intel, Leverhulme Centre for the Future of Intelligence (CFI), McKinsey & Company, SAP, Salesforce.com, Sony, UNICEF, Upturn, XPRIZE Foundation, and Zalando.
