Horizon-Free Reinforcement Learning in Polynomial Time: the Power of Stationary Policies
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:3858-3904, 2022.
Abstract
This paper gives the first polynomial-time algorithm for tabular Markov Decision Processes (MDPs) that enjoys a regret bound \emph{independent of the planning horizon}. Specifically, we consider a tabular MDP with S states, A actions, a planning horizon H, total reward bounded by 1, and an agent that plays for K episodes. We design an algorithm that achieves an O(poly(S, A, log K)√K) regret, in contrast to existing bounds that either have an additional polylog(H) dependency \citep{zhang2020reinforcement} or an exponential dependency on S \citep{li2021settling}. Our result relies on a sequence of new structural lemmas establishing the approximation power, stability, and concentration properties of stationary policies, which may have applications in other problems related to Markov chains.
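For concreteness, a minimal sketch of the quantity the bound refers to, assuming the standard episodic-regret convention; the value functions $V^{*}$, $V^{\pi_k}$ and the initial state $s_1$ are notational assumptions not spelled out in the abstract itself:
\[
  \mathrm{Regret}(K) \;=\; \sum_{k=1}^{K} \Bigl( V^{*}(s_1) - V^{\pi_k}(s_1) \Bigr)
  \;=\; O\!\bigl(\mathrm{poly}(S, A, \log K)\,\sqrt{K}\bigr),
\]
where $\pi_k$ is the policy executed in episode $k$ and the total reward of every episode is bounded by $1$, so the bound carries no polynomial or polylogarithmic dependence on the horizon $H$.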