Thompson sampling is asymptotically optimal in general environments. (Leike, J., Lattimore, T., Orseau, L. & Hutter, M. (2016). Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence)

We discuss a variant of Thompson sampling for nonparametric reinforcement learning in countable classes of general stochastic environments. These environments can be non-Markov, non-ergodic, and partially observable. We show that Thompson sampling learns the environment class in the sense that (1) asymptotically its value converges in mean to the optimal value and (2) given a recoverability assumption, its regret is sublinear.
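A minimal sketch of the algorithmic idea may help: the agent keeps a posterior over its hypothesis class, samples one environment from the posterior, follows the optimal policy for the sampled environment over an effective horizon, and then resamples. The two-armed Bernoulli bandit class below (`ENVIRONMENTS`, `likelihood_update`, the fixed horizon of 10) is a hypothetical stand-in for the paper's countable class of general environments, not its actual construction; in particular, the paper resamples on a growing effective-horizon schedule rather than a fixed one.

```python
import random

# Toy hypothesis class: each "environment" is a two-armed Bernoulli bandit,
# standing in for the countable class of general stochastic environments.
# (Illustrative assumption; the paper's setting is far more general.)
ENVIRONMENTS = [
    {"name": "env0", "arm_probs": (0.2, 0.8)},
    {"name": "env1", "arm_probs": (0.7, 0.3)},
]
TRUE_ENV = ENVIRONMENTS[0]

def likelihood_update(posterior, arm, reward):
    """Bayes update: reweight each hypothesis by the probability it
    assigns to the observed (arm, reward) pair, then renormalize."""
    new = []
    for w, env in zip(posterior, ENVIRONMENTS):
        p = env["arm_probs"][arm]
        new.append(w * (p if reward == 1 else 1 - p))
    z = sum(new)
    return [w / z for w in new]

def optimal_arm(env):
    """The optimal policy for a sampled hypothesis (here: the better arm)."""
    return max(range(2), key=lambda a: env["arm_probs"][a])

def thompson_sampling(steps=1000, horizon=10):
    posterior = [1 / len(ENVIRONMENTS)] * len(ENVIRONMENTS)
    total_reward = 0
    t = 0
    while t < steps:
        # Sample a hypothesis from the current posterior ...
        env = random.choices(ENVIRONMENTS, weights=posterior)[0]
        # ... and commit to its optimal policy for an effective horizon
        # before resampling.
        for _ in range(min(horizon, steps - t)):
            arm = optimal_arm(env)
            reward = 1 if random.random() < TRUE_ENV["arm_probs"][arm] else 0
            posterior = likelihood_update(posterior, arm, reward)
            total_reward += reward
            t += 1
    return total_reward, posterior

if __name__ == "__main__":
    reward, posterior = thompson_sampling()
    print("total reward:", reward, "posterior:", posterior)
```

In this toy run the posterior concentrates on the true bandit and the agent ends up pulling its better arm, which is the finite-class shadow of the paper's convergence-in-mean result.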

A formal solution to the grain of truth problem. (Leike, J., Taylor, J. & Fallenstein, B. (2016). Proceedings of the Thirty-Second Conference on Uncertainty in Artificial Intelligence)

When the environment is unknown, Bayes-optimal agents may fail to act optimally even asymptotically. In this paper we present a formal and general solution to the full grain of truth problem: we construct a class of policies that contains all computable policies as well as Bayes-optimal policies for every lower semicomputable prior over the class…
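The paper's construction rests on reflective oracles and is not directly executable, but the grain-of-truth idea itself, that a Bayesian learner whose prior assigns positive weight to the truth learns to predict it, can be illustrated with a toy sketch. The Bernoulli hypothesis class, the 2^-(k+1) prior weights, and the truncation at K = 20 below are illustrative assumptions, not the paper's class of all computable policies.

```python
import random

# Countable class of hypotheses: Bernoulli sources with bias (k+1)/(k+2)
# and prior weight proportional to 2^-(k+1), a toy stand-in for a lower
# semicomputable prior over a countable class.
def prior_weight(k):
    return 2.0 ** -(k + 1)

def bias(k):
    return (k + 1) / (k + 2)

K = 20        # truncate the countable class for the demo
TRUE_K = 3    # the "grain of truth": the true source is in the class

posterior = [prior_weight(k) for k in range(K)]

for step in range(500):
    bit = 1 if random.random() < bias(TRUE_K) else 0
    # Bayes update: each hypothesis is reweighted by the probability
    # it assigned to the observed bit, then the weights are renormalized.
    posterior = [w * (bias(k) if bit else 1 - bias(k))
                 for k, w in enumerate(posterior)]
    z = sum(posterior)
    posterior = [w / z for w in posterior]

# Because the prior contains a grain of truth, the posterior predictive
# converges toward the true source's distribution.
predictive = sum(w * bias(k) for k, w in enumerate(posterior))
print("true bias:", bias(TRUE_K), "learned predictive:", predictive)
```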