Risks and mitigation strategies for oracle AI. (Armstrong, S. 2013. In V. Müller (Ed.), Philosophy and theory of artificial intelligence. (pp. 335-347))

Oracle AIs (OAIs), confined AIs that can only answer questions, are one particular approach to the problem of AI safety. However, even Oracles are not particularly safe: humans are still vulnerable to traps, social engineering, or simply becoming dependent on the OAI. But OAIs are still strictly safer than general AIs, and there are many extra layers of precautions we can add on top of them. This paper looks at some of these precautions and analyses their strengths and weaknesses.

How hard is artificial intelligence? Evolutionary arguments and selection effects. (Shulman, C., & Bostrom, N. 2012. Journal of Consciousness Studies, 19(7-8), 103-130)

The evolutionary argument holds that, because blind evolutionary processes produced human intelligence on Earth, human engineers should be able to produce it too; the objection is that this outcome may have required extraordinary luck, so that observation selection effects, rather than the tractability of the task, explain what we observe. We explore how the evolutionary argument might be salvaged from this objection, using a variety of considerations from observation selection theory and analysis of specific timing features and instances of convergent evolution in the terrestrial evolutionary record. We find that, depending on the resolution of disputed questions in observation selection theory, the objection can either be wholly or moderately defused, although other challenges for the evolutionary argument remain.