Winter Intelligence Conference 2011

Date: 14-17 January 2011
Venue: St Catherine’s College; Jesus College, Oxford

This unusual conference, bridging philosophy, cognitive science, and machine intelligence, brought together experts and students from a wide range of backgrounds for a long weekend of intense deliberation about the big questions: What holds together our experiences? What forms can intelligence take? How can we create effective collective or artificial intelligence?

Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims (arXiv:2004.07213)

With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.

The Offense-Defense Balance of Scientific Knowledge: Does Publishing AI Research Reduce Misuse? (Shevlane, T. & Dafoe, A.) In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 173-179)

There is growing concern over the potential misuse of artificial intelligence (AI) research. Publishing scientific research can facilitate misuse of the technology, but the research can also contribute to protections against misuse. This paper addresses the balance between these two effects. Our theoretical framework elucidates the factors governing whether the published research will be more useful for attackers or defenders, such as the possibility of adequate defensive measures, or the independent discovery of the knowledge outside the scientific community. The balance will vary across scientific fields. However, we show that the existing conversation within AI has imported concepts and conclusions from prior debates within computer security over the disclosure of software vulnerabilities. While disclosure of software vulnerabilities often favours defence, this cannot be assumed for AI research. The AI research community should consider concepts and policies from a broad set of adjacent fields, and ultimately needs to craft policy well-suited to its particular challenges.

Predicting Human Deliberative Judgments with Machine Learning (Evans, O., Stuhlmüller, A., Cundy, C., Carey, R., Kenton, Z., McGrath, T. & Schreiber, A., 2018)

Deciphering China’s AI Dream (Jeffrey Ding, 2018, Future of Humanity Institute, University of Oxford)

This report examines the intersection of two subjects, China and artificial intelligence, both of which are already difficult enough to comprehend on their own. It provides context for China’s AI strategy with respect to past science and technology plans, and it also connects the consistent and new features of China’s AI approach to the drivers of AI development (e.g. hardware, data, and talented scientists). In addition, it benchmarks China’s current AI capabilities by developing a novel index to measure any country’s AI potential and highlights the potential implications of China’s AI dream for issues of AI safety, national security, economic development, and social governance.

The author, Jeffrey Ding, writes, “The hope is that this report can serve as a foundational document for further policy discussion and research on the topic of China’s approach to AI.” The report draws from the author’s translations of Chinese texts on AI policy, a compilation of metrics on China’s AI capabilities compared to other countries, and conversations with those who have consulted with Chinese companies and institutions involved in shaping the AI scene.
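The report's novel index aggregates country-level drivers of AI development (such as hardware, data, and research talent) into a single comparative score. As a rough illustration of how a composite index of this kind is typically computed, here is a minimal, self-contained Python sketch; the indicator names, numbers, and equal weights are hypothetical placeholders, not the report's actual methodology or data.

```python
# Minimal sketch of a composite, country-level "AI potential" index.
# All indicator names, values, and the equal weights are hypothetical
# illustrations; they are NOT the report's actual data or methodology.

from typing import Dict

# Hypothetical raw indicator scores per country (arbitrary units).
RAW: Dict[str, Dict[str, float]] = {
    "hardware":   {"US": 90.0, "China": 70.0, "EU": 65.0},
    "data":       {"US": 60.0, "China": 85.0, "EU": 55.0},
    "research":   {"US": 95.0, "China": 75.0, "EU": 80.0},
    "commercial": {"US": 88.0, "China": 80.0, "EU": 70.0},
}

# Equal weights for simplicity; a real index would justify its weighting.
WEIGHTS = {indicator: 1.0 / len(RAW) for indicator in RAW}


def min_max_normalize(values: Dict[str, float]) -> Dict[str, float]:
    """Rescale one indicator to [0, 1] so different units are comparable."""
    lo, hi = min(values.values()), max(values.values())
    span = (hi - lo) or 1.0  # avoid division by zero if all values are equal
    return {country: (v - lo) / span for country, v in values.items()}


def composite_index(
    raw: Dict[str, Dict[str, float]], weights: Dict[str, float]
) -> Dict[str, float]:
    """Weighted sum of normalized indicators: one aggregate score per country."""
    normalized = {ind: min_max_normalize(vals) for ind, vals in raw.items()}
    countries = next(iter(raw.values())).keys()
    return {
        c: sum(weights[ind] * normalized[ind][c] for ind in raw)
        for c in countries
    }


if __name__ == "__main__":
    for country, score in sorted(composite_index(RAW, WEIGHTS).items()):
        print(f"{country}: {score:.2f}")
```

Normalizing each indicator before weighting prevents any single driver with large raw units from dominating the aggregate; the choice of weights is where most of the substantive judgment in such an index lives.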