Variational Regret Bounds for Reinforcement Learning
Proceedings of The 35th Uncertainty in Artificial Intelligence Conference, PMLR 115:81-90, 2020.
Abstract
We consider undiscounted reinforcement learning in Markov decision processes (MDPs) where both the reward functions and the state-transition probabilities may vary (gradually or abruptly) over time. For this problem setting, we propose an algorithm and provide performance guarantees for the regret evaluated against the optimal non-stationary policy. The upper bound on the regret is given in terms of the total variation in the MDP. This is the first variational regret bound for the general reinforcement learning setting.
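For concreteness, one common way to formalize the two quantities the abstract refers to is sketched below; the notation here is illustrative and the paper's own definitions may differ in detail. The dynamic regret after T steps compares the learner's accumulated reward to the optimal average reward of the MDP at each step, and the total variation sums the per-step changes in rewards and transition probabilities:

\[
  R(T) \;=\; \sum_{t=1}^{T} \rho^{*}_{t} \;-\; \sum_{t=1}^{T} r_t(s_t, a_t),
\]

\[
  \Delta \;=\; \sum_{t=1}^{T-1} \Big( \max_{s,a} \big| r_{t+1}(s,a) - r_t(s,a) \big|
  \;+\; \max_{s,a} \big\| p_{t+1}(\cdot \mid s,a) - p_t(\cdot \mid s,a) \big\|_{1} \Big),
\]

where \( \rho^{*}_{t} \) denotes the optimal average reward of the MDP active at time \( t \), \( r_t \) and \( p_t \) are the reward function and transition kernel at time \( t \), and \( (s_t, a_t) \) is the state-action pair visited by the learner. A variational regret bound then upper-bounds \( R(T) \) in terms of \( \Delta \) (together with MDP-dependent quantities), so that the guarantee degrades gracefully with the amount of non-stationarity rather than requiring a known number of change points.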