A Finite Sample Complexity Bound for Distributionally Robust Q-learning
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:3370-3398, 2023.
Abstract
We consider a reinforcement learning setting in which the deployment environment is different from the training environment. Applying a robust Markov decision process formulation, we extend the distributionally robust Q-learning framework studied in [Liu et al., 2022]. Further, we improve the design and analysis of their multi-level Monte Carlo estimator. Assuming access to a simulator, we prove that the worst-case expected sample complexity of our algorithm to learn the optimal robust Q-function within an ϵ error in the sup norm is upper bounded by Õ(|S||A|(1−γ)^{-5} ϵ^{-2} p_∧^{-6} δ^{-4}), where γ is the discount rate, p_∧ is the minimal non-zero support probability of the transition kernels, and δ is the size of the uncertainty set. This is the first sample complexity result for the model-free robust RL problem. Simulation studies further validate our theoretical results.
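The multi-level Monte Carlo (MLMC) construction referenced above is a way to build a nearly unbiased estimator of a nonlinear function of an expectation (in the paper, the dual form of the distributionally robust Bellman backup) from a random number of samples. The sketch below illustrates the generic randomized-level MLMC trick in the style of Blanchet and Glynn on a toy target f(E[X]); it is an assumption-laden illustration of the idea, not the estimator analyzed in the paper, and all names (mlmc_estimate, sample, f, r, n_max) are hypothetical.

```python
import numpy as np

def mlmc_estimate(sample, f, r=0.5, n_max=20, rng=None):
    """Randomized multi-level Monte Carlo estimate of f(E[X]) for a
    nonlinear f (generic Blanchet-Glynn-style sketch, not the paper's
    exact estimator).

    sample(n) -- returns n i.i.d. draws of X as a NumPy array
    f         -- maps a scalar mean to a scalar
    r         -- success probability of the geometric level distribution
    n_max     -- truncation level; keeps memory bounded at the price of
                 a negligible O(2^-n_max) bias
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw a random level N on {0, 1, 2, ...} with P(N = n) = r (1-r)^n;
    # all mass beyond n_max is collapsed onto the top level.
    n = rng.geometric(r) - 1
    if n >= n_max:
        n, p_n = n_max, (1 - r) ** n_max
    else:
        p_n = r * (1 - r) ** n
    xs = sample(2 ** (n + 1))
    even, odd = xs[::2], xs[1::2]
    # Antithetic difference between the fine level (all 2^(n+1) draws)
    # and the coarse level (average of the two halves of size 2^n).
    delta = f(xs.mean()) - 0.5 * (f(even.mean()) + f(odd.mean()))
    # Base term plus the importance-weighted bias correction: taking
    # expectations telescopes the level differences, so the estimator
    # targets f(E[X]) rather than E[f(X)].
    return f(sample(1).mean()) + delta / p_n
```

For a smooth f, averaging many such estimates converges to f(E[X]) even though each individual level uses only finitely many samples; the random level makes the expected sample cost per estimate controllable through r. The paper's contribution includes a sharper design and analysis of this kind of estimator for the robust Q-learning update.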