A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms
Proceedings of the Twenty-Third International Conference on Artificial Intelligence and Statistics, PMLR 108:4357-4366, 2020.
Abstract
We present a distributional approach to theoretical analyses of reinforcement learning algorithms for constant step-sizes. We demonstrate its effectiveness by presenting simple and unified proofs of convergence for a variety of commonly-used methods. We show that value-based methods such as TD(λ) and Q-Learning have update rules which are contractive in the space of distributions of functions, thus establishing their exponentially fast convergence to a stationary distribution. We demonstrate that the stationary distribution obtained by any algorithm whose target is an expected Bellman update has a mean which is equal to the true value function. Furthermore, we establish that the distributions concentrate around their mean as the step-size shrinks. We further analyse the optimistic policy iteration algorithm, for which the contraction property does not hold, and formulate a probabilistic policy improvement property which entails the convergence of the algorithm.
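The abstract's claim that the stationary distribution of a constant step-size algorithm with an expected Bellman-update target has mean equal to the true value function can be illustrated numerically. The sketch below is not the paper's experimental setup; it runs synchronous tabular TD(0) with a constant step-size on a small, hypothetical Markov reward process (the transition matrix, rewards, and constants are illustrative assumptions). The iterates never converge pointwise, but their long-run average sits close to the true value function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state Markov reward process (illustrative only).
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])
r = np.array([1.0, 0.0, -1.0])
gamma = 0.9

# True value function: V = (I - gamma * P)^{-1} r.
V_true = np.linalg.solve(np.eye(3) - gamma * P, r)

# Synchronous TD(0) with a constant step-size alpha: every state is
# updated each iteration using a freshly sampled next state, so the
# update target is an unbiased sample of the expected Bellman update.
alpha = 0.05
V = np.zeros(3)
burn_in, n_steps = 5_000, 50_000
mean_V = np.zeros(3)
for t in range(n_steps):
    s_next = np.array([rng.choice(3, p=P[s]) for s in range(3)])
    V = V + alpha * (r + gamma * V[s_next] - V)
    if t >= burn_in:          # average iterates after burn-in
        mean_V += V
mean_V /= (n_steps - burn_in)

# The iterates fluctuate at scale O(sqrt(alpha)) around V_true,
# but their long-run average is close to the true value function.
print(np.round(V_true, 3), np.round(mean_V, 3))
```

Shrinking `alpha` tightens the fluctuations around the mean, matching the concentration statement in the abstract; the averaging over iterates is only a way to estimate the stationary mean empirically.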