Regret Bounds for Risk-sensitive Reinforcement Learning with Lipschitz Dynamic Risk Measures

Hao Liang, Zhiquan Luo
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1774-1782, 2024.

Abstract

We study finite episodic Markov decision processes incorporating dynamic risk measures to capture risk sensitivity. To this end, we present two model-based algorithms for \emph{Lipschitz} dynamic risk measures, a broad class of risk measures that subsumes spectral risk measures, optimized certainty equivalents, and distortion risk measures, among others. We establish both regret upper and lower bounds. Notably, our upper bounds demonstrate optimal dependencies on the number of actions and episodes while reflecting the inherent trade-off between risk sensitivity and sample complexity. Our approach offers a unified framework that not only encompasses multiple existing formulations in the literature but also broadens the application spectrum.
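For concreteness, a minimal illustration (our own, not taken from the paper) of one risk measure in this class is the optimized certainty equivalent (OCE). Assuming a sup-norm notion of Lipschitz continuity, which is one standard way such a condition is stated, the definitions read:

\[
\mathrm{OCE}_u(X) \;=\; \sup_{\lambda \in \mathbb{R}} \bigl\{ \lambda + \mathbb{E}\bigl[u(X - \lambda)\bigr] \bigr\},
\qquad u \text{ concave and nondecreasing with } u(0) = 0,
\]
\[
\bigl|\rho(X) - \rho(Y)\bigr| \;\le\; L \,\lVert X - Y \rVert_\infty
\qquad \text{($L$-Lipschitz continuity of a risk measure $\rho$).}
\]

Taking $u(t) = -\tfrac{1}{\alpha}\max(-t, 0)$ recovers the conditional value at risk $\mathrm{CVaR}_\alpha$; note that the paper's precise Lipschitz condition for dynamic risk measures may be stated with respect to a different metric.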

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-liang24a,
  title     = {Regret Bounds for Risk-sensitive Reinforcement Learning with {L}ipschitz Dynamic Risk Measures},
  author    = {Liang, Hao and Luo, Zhiquan},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1774--1782},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/liang24a/liang24a.pdf},
  url       = {https://proceedings.mlr.press/v238/liang24a.html},
  abstract  = {We study finite episodic Markov decision processes incorporating dynamic risk measures to capture risk sensitivity. To this end, we present two model-based algorithms applied to \emph{Lipschitz} dynamic risk measures, a wide range of risk measures that subsumes spectral risk measure, optimized certainty equivalent, and distortion risk measures, among others. We establish both regret upper bounds and lower bounds. Notably, our upper bounds demonstrate optimal dependencies on the number of actions and episodes while reflecting the inherent trade-off between risk sensitivity and sample complexity. Our approach offers a unified framework that not only encompasses multiple existing formulations in the literature but also broadens the application spectrum.}
}
Endnote
%0 Conference Paper
%T Regret Bounds for Risk-sensitive Reinforcement Learning with Lipschitz Dynamic Risk Measures
%A Hao Liang
%A Zhiquan Luo
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-liang24a
%I PMLR
%P 1774--1782
%U https://proceedings.mlr.press/v238/liang24a.html
%V 238
%X We study finite episodic Markov decision processes incorporating dynamic risk measures to capture risk sensitivity. To this end, we present two model-based algorithms applied to \emph{Lipschitz} dynamic risk measures, a wide range of risk measures that subsumes spectral risk measure, optimized certainty equivalent, and distortion risk measures, among others. We establish both regret upper bounds and lower bounds. Notably, our upper bounds demonstrate optimal dependencies on the number of actions and episodes while reflecting the inherent trade-off between risk sensitivity and sample complexity. Our approach offers a unified framework that not only encompasses multiple existing formulations in the literature but also broadens the application spectrum.
APA
Liang, H. & Luo, Z. (2024). Regret Bounds for Risk-sensitive Reinforcement Learning with Lipschitz Dynamic Risk Measures. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1774-1782. Available from https://proceedings.mlr.press/v238/liang24a.html.