A report published by the Center for the Governance of AI (GovAI), housed at the Future of Humanity Institute, surveys Americans’ attitudes toward artificial intelligence. The impact of AI technology on society is likely to be large. While the technology industry and governments currently dominate policy conversations on AI, the authors expect the public to become more influential over time. Understanding the public’s views on artificial intelligence will therefore be vital to future AI governance. The survey, conducted by Baobao Zhang and Allan Dafoe, is one of the most comprehensive surveys of the American public’s opinions on artificial intelligence to date, drawing on 2,000 respondents recruited through the survey firm YouGov.
Key findings from the report include:
- Americans express mixed support for the development of AI. After reading a short explanation, a substantial minority (41%) somewhat or strongly support the development of AI, while a smaller minority (22%) somewhat or strongly oppose it.
- Among 13 AI governance challenges, Americans prioritize preventing AI-assisted surveillance from violating privacy and civil liberties, preventing AI from being used to spread fake and harmful content online, preventing AI cyber attacks, and protecting data privacy. Respondents rated all 13 challenges as “important” and judged each as more than 50% likely to affect a large number of people in the U.S. within the next 10 years.
- Americans have discernibly different levels of trust in different organizations to develop AI in the best interests of the public. The most trusted are university researchers and the U.S. military; the least trusted is Facebook. No organization earned an average rating of “a fair amount of confidence” from respondents.
- The median respondent predicts a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as machines being able to perform almost all tasks that are economically relevant today better than the median human (today) at each task.
Allan Dafoe, commenting on the report, said: “Our results show that the public regards as important the whole space of AI governance issues, including privacy, fairness, autonomous weapons, unemployment, and other extreme risks that may arise from advanced AI. Further, the public’s support for the development of AI cannot be taken for granted. There is no organisation that is highly trusted to develop AI in the public interest, though some are trusted much more than others. In order to ensure that the substantial benefits from AI are realised and broadly distributed, it is important that we work to understand and address these concerns.”
Thanks to funding from the Ethics and Governance of Artificial Intelligence Fund, we plan to regularly release similar reports based on our survey research in the U.S., China, and the European Union (EU). We welcome collaborators on future surveys.
The paper is available as a PDF and as a webpage.
The design and analysis of the study are pre-registered on the Open Science Framework. The data and code for analysis will be made public in six months on Dataverse.
Survey inquiries, including interview requests, can be emailed to firstname.lastname@example.org. More about the Center for the Governance of AI at governance.ai.