Near-Minimax-Optimal Risk-Sensitive Reinforcement Learning with CVaR
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:35864-35907, 2023.
Abstract
In this paper, we study risk-sensitive Reinforcement Learning (RL), focusing on the objective of Conditional Value at Risk (CVaR) with risk tolerance $\tau$. Starting with multi-armed bandits (MABs), we show that the minimax CVaR regret rate is $\Omega(\sqrt{\tau^{-1}AK})$, where $A$ is the number of actions and $K$ is the number of episodes, and that it is achieved by an Upper Confidence Bound algorithm with a novel Bernstein bonus. For online RL in tabular Markov Decision Processes (MDPs), we show a minimax regret lower bound of $\Omega(\sqrt{\tau^{-1}SAK})$ (with normalized cumulative rewards), where $S$ is the number of states, and we propose a novel bonus-driven Value Iteration procedure. We show that our algorithm achieves the optimal regret of $\widetilde{O}(\sqrt{\tau^{-1}SAK})$ under a continuity assumption, and in general attains a near-optimal regret of $\widetilde{O}(\tau^{-1}\sqrt{SAK})$, which is minimax-optimal for constant $\tau$. This improves on the best available bounds. Our algorithms are made computationally efficient by discretizing rewards appropriately.
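To make the CVaR objective and the optimism principle concrete, the following is a minimal Python sketch of a CVaR bandit with an upper-confidence rule. It is illustrative only, not the paper's algorithm: the names empirical_cvar, cvar_ucb_bandit, and pull are hypothetical, rewards are assumed to lie in [0, 1], and a simple Hoeffding-style bonus stands in for the paper's Bernstein bonus; the 1/(tau * n) factor inside the bonus mirrors the tau^{-1} dependence in the bounds above.

import numpy as np

def empirical_cvar(samples, tau):
    # Empirical CVaR at level tau: mean of the worst ceil(tau * n) outcomes (lower tail).
    x = np.sort(np.asarray(samples, dtype=float))
    k = max(1, int(np.ceil(tau * len(x))))
    return x[:k].mean()

def cvar_ucb_bandit(pull, num_actions, num_episodes, tau):
    # Optimistic CVaR bandit: play each arm once, then repeatedly pick the arm
    # maximizing empirical CVaR plus an exploration bonus.
    # (A simplified Hoeffding-style stand-in for the paper's Bernstein bonus.)
    history = [[] for _ in range(num_actions)]
    for a in range(num_actions):
        history[a].append(pull(a))
    for t in range(num_actions, num_episodes):
        scores = []
        for a in range(num_actions):
            n = len(history[a])
            bonus = np.sqrt(2.0 * np.log(max(t, 2)) / (tau * n))  # note the 1/tau scaling
            scores.append(empirical_cvar(history[a], tau) + bonus)
        a = int(np.argmax(scores))
        history[a].append(pull(a))
    return history

# Toy usage: two arms with Beta-distributed rewards in [0, 1]; arm 1 has the better lower tail.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pull = lambda a: rng.beta(2 + a, 2)
    counts = [len(h) for h in cvar_ucb_bandit(pull, num_actions=2, num_episodes=500, tau=0.1)]
    print("pull counts per arm:", counts)

The bonus shrinks at rate sqrt(1/(tau * n)) rather than sqrt(1/n) because only roughly a tau-fraction of the observed samples inform the lower-tail estimate, which is consistent with the tau^{-1} factor appearing inside the regret bounds stated in the abstract.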