Regret Bounds for Markov Decision Processes with Recursive Optimized Certainty Equivalents

Wenhao Xu, Xuefeng Gao, Xuedong He
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:38400-38427, 2023.

Abstract

The optimized certainty equivalent (OCE) is a family of risk measures that cover important examples such as entropic risk, conditional value-at-risk and mean-variance models. In this paper, we propose a new episodic risk-sensitive reinforcement learning formulation based on tabular Markov decision processes with recursive OCEs. We design an efficient learning algorithm for this problem based on value iteration and upper confidence bound. We derive an upper bound on the regret of the proposed algorithm, and also establish a minimax lower bound. Our bounds show that the regret rate achieved by our proposed algorithm has optimal dependence on the number of episodes and the number of actions.
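The page itself does not define the OCE, so a brief refresher may help. For a reward X and a concave utility function u with u(0) = 0, the Ben-Tal and Teboulle OCE is OCE_u(X) = sup_{lambda in R} { lambda + E[u(X - lambda)] }; entropic risk, conditional value-at-risk (CVaR), and mean-variance all arise from particular choices of u. The following minimal numerical sketch (illustrative only, not the authors' code; the helper name oce is hypothetical) evaluates the empirical OCE by one-dimensional maximization and checks it against the known closed forms for the three examples named in the abstract.

import numpy as np
from scipy.optimize import minimize_scalar

def oce(x, u):
    """Empirical OCE of reward samples x under concave utility u:
    OCE_u(X) = sup_lambda { lambda + E[u(X - lambda)] }."""
    objective = lambda lam: -(lam + np.mean(u(x - lam)))
    res = minimize_scalar(objective, bounds=(x.min() - 1.0, x.max() + 1.0),
                          method="bounded")
    return -res.fun

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=0.5, size=100_000)

# Entropic risk: u(t) = (1 - exp(-gamma*t)) / gamma recovers
# -(1/gamma) * log E[exp(-gamma X)]; the two printed values should match.
gamma = 2.0
print(oce(x, lambda t: (1 - np.exp(-gamma * t)) / gamma),
      -np.log(np.mean(np.exp(-gamma * x))) / gamma)

# CVaR at level alpha: u(t) = min(t, 0) / alpha recovers the average of
# the worst alpha-fraction of outcomes.
alpha = 0.1
print(oce(x, lambda t: np.minimum(t, 0.0) / alpha),
      np.mean(np.sort(x)[: int(alpha * x.size)]))

# Mean-variance: u(t) = t - c*t**2 recovers E[X] - c * Var(X).
c = 0.25
print(oce(x, lambda t: t - c * t ** 2), x.mean() - c * x.var())

As the title indicates, the paper applies OCEs of this form recursively: the risk measure is taken of the next-step value at each stage of the Bellman recursion, rather than once over the cumulative return, and the proposed algorithm augments these value-iteration backups with upper-confidence bonuses.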

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-xu23d,
  title     = {Regret Bounds for {M}arkov Decision Processes with Recursive Optimized Certainty Equivalents},
  author    = {Xu, Wenhao and Gao, Xuefeng and He, Xuedong},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {38400--38427},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/xu23d/xu23d.pdf},
  url       = {https://proceedings.mlr.press/v202/xu23d.html},
  abstract  = {The optimized certainty equivalent (OCE) is a family of risk measures that cover important examples such as entropic risk, conditional value-at-risk and mean-variance models. In this paper, we propose a new episodic risk-sensitive reinforcement learning formulation based on tabular Markov decision processes with recursive OCEs. We design an efficient learning algorithm for this problem based on value iteration and upper confidence bound. We derive an upper bound on the regret of the proposed algorithm, and also establish a minimax lower bound. Our bounds show that the regret rate achieved by our proposed algorithm has optimal dependence on the number of episodes and the number of actions.}
}
Endnote
%0 Conference Paper
%T Regret Bounds for Markov Decision Processes with Recursive Optimized Certainty Equivalents
%A Wenhao Xu
%A Xuefeng Gao
%A Xuedong He
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-xu23d
%I PMLR
%P 38400--38427
%U https://proceedings.mlr.press/v202/xu23d.html
%V 202
%X The optimized certainty equivalent (OCE) is a family of risk measures that cover important examples such as entropic risk, conditional value-at-risk and mean-variance models. In this paper, we propose a new episodic risk-sensitive reinforcement learning formulation based on tabular Markov decision processes with recursive OCEs. We design an efficient learning algorithm for this problem based on value iteration and upper confidence bound. We derive an upper bound on the regret of the proposed algorithm, and also establish a minimax lower bound. Our bounds show that the regret rate achieved by our proposed algorithm has optimal dependence on the number of episodes and the number of actions.
APA
Xu, W., Gao, X. & He, X. (2023). Regret Bounds for Markov Decision Processes with Recursive Optimized Certainty Equivalents. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:38400-38427. Available from https://proceedings.mlr.press/v202/xu23d.html.

Related Material

Download PDF: https://proceedings.mlr.press/v202/xu23d/xu23d.pdf