An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints

Jiahui Zhu, Kihyun Yu, Dabeen Lee, Xin Liu, Honghao Wei
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:80347-80372, 2025.

Abstract

Online safe reinforcement learning (RL) plays a key role in dynamic environments, with applications in autonomous driving, robotics, and cybersecurity. The objective is to learn optimal policies that maximize rewards while satisfying safety constraints modeled by constrained Markov decision processes (CMDPs). Existing methods achieve sublinear regret under stochastic constraints but often fail in adversarial settings, where constraints are unknown, time-varying, and potentially adversarially designed. In this paper, we propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address online CMDPs with anytime adversarial constraints. OMDPD achieves optimal regret $\tilde{\mathcal{O}}(\sqrt{K})$ and strong constraint violation $\tilde{\mathcal{O}}(\sqrt{K})$ without relying on Slater’s condition or the existence of a strictly known safe policy. We further show that access to accurate estimates of rewards and transitions can further improve these bounds. Our results offer practical guarantees for safe decision-making in adversarial environments.
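
For readers unfamiliar with the primal-dual approach the abstract refers to, the sketch below illustrates a generic optimistic mirror-descent primal-dual update for a tabular CMDP: an exponentiated-gradient (KL mirror descent) step on the policy against a Lagrangian-style Q-estimate, followed by projected gradient ascent on the multiplier. This is only an illustrative sketch under assumed names and forms (mirror_descent_step, dual_step, and random stand-ins for the optimistic Q-estimates); it is not the paper's OMDPD algorithm, whose analysis notably avoids Slater-type assumptions.

# Illustrative sketch (not the paper's OMDPD): a generic primal-dual
# mirror-descent update for a tabular CMDP. All names, the update forms,
# and the stand-in Q-estimates are assumptions made for exposition only.
import numpy as np

def mirror_descent_step(policy, q_reward, q_cost, dual_var, lr):
    """Exponentiated-gradient (KL mirror descent) policy update.

    policy:   (S, A) array, rows sum to 1
    q_reward: (S, A) optimistic reward Q-estimates (e.g. empirical + bonus)
    q_cost:   (S, A) estimated constraint-cost Q-values
    dual_var: nonnegative scalar Lagrange multiplier
    lr:       primal step size
    """
    # Lagrangian-style advantage: reward minus penalized constraint cost.
    lagrangian_q = q_reward - dual_var * q_cost
    logits = np.log(policy + 1e-12) + lr * lagrangian_q
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    new_policy = np.exp(logits)
    return new_policy / new_policy.sum(axis=1, keepdims=True)

def dual_step(dual_var, avg_cost, budget, lr_dual):
    """Projected gradient ascent on the multiplier for a single constraint."""
    return max(0.0, dual_var + lr_dual * (avg_cost - budget))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    S, A = 4, 3
    policy = np.full((S, A), 1.0 / A)   # start from the uniform policy
    dual = 0.0
    for k in range(100):
        q_r = rng.random((S, A))        # stand-in for optimistic reward Q-estimates
        q_c = rng.random((S, A))        # stand-in for constraint-cost Q-estimates
        policy = mirror_descent_step(policy, q_r, q_c, dual, lr=0.1)
        avg_cost = float((policy * q_c).sum(axis=1).mean())
        dual = dual_step(dual, avg_cost, budget=0.5, lr_dual=0.05)
    print(policy.round(3), round(dual, 3))

In this generic template, optimism enters through the reward Q-estimates (empirical estimate plus an exploration bonus), while the dual variable prices the time-varying constraint; the paper's contribution is to make such a scheme work under anytime adversarial constraints with matching $\tilde{\mathcal{O}}(\sqrt{K})$ regret and strong violation bounds.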

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-zhu25ab,
  title     = {An Optimistic Algorithm for Online {CMDPs} with Anytime Adversarial Constraints},
  author    = {Zhu, Jiahui and Yu, Kihyun and Lee, Dabeen and Liu, Xin and Wei, Honghao},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {80347--80372},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/zhu25ab/zhu25ab.pdf},
  url       = {https://proceedings.mlr.press/v267/zhu25ab.html},
  abstract  = {Online safe reinforcement learning (RL) plays a key role in dynamic environments, with applications in autonomous driving, robotics, and cybersecurity. The objective is to learn optimal policies that maximize rewards while satisfying safety constraints modeled by constrained Markov decision processes (CMDPs). Existing methods achieve sublinear regret under stochastic constraints but often fail in adversarial settings, where constraints are unknown, time-varying, and potentially adversarially designed. In this paper, we propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address online CMDPs with anytime adversarial constraints. OMDPD achieves optimal regret $\tilde{\mathcal{O}}(\sqrt{K})$ and strong constraint violation $\tilde{\mathcal{O}}(\sqrt{K})$ without relying on Slater’s condition or the existence of a strictly known safe policy. We further show that access to accurate estimates of rewards and transitions can further improve these bounds. Our results offer practical guarantees for safe decision-making in adversarial environments.}
}
Endnote
%0 Conference Paper
%T An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints
%A Jiahui Zhu
%A Kihyun Yu
%A Dabeen Lee
%A Xin Liu
%A Honghao Wei
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-zhu25ab
%I PMLR
%P 80347--80372
%U https://proceedings.mlr.press/v267/zhu25ab.html
%V 267
%X Online safe reinforcement learning (RL) plays a key role in dynamic environments, with applications in autonomous driving, robotics, and cybersecurity. The objective is to learn optimal policies that maximize rewards while satisfying safety constraints modeled by constrained Markov decision processes (CMDPs). Existing methods achieve sublinear regret under stochastic constraints but often fail in adversarial settings, where constraints are unknown, time-varying, and potentially adversarially designed. In this paper, we propose the Optimistic Mirror Descent Primal-Dual (OMDPD) algorithm, the first to address online CMDPs with anytime adversarial constraints. OMDPD achieves optimal regret $\tilde{\mathcal{O}}(\sqrt{K})$ and strong constraint violation $\tilde{\mathcal{O}}(\sqrt{K})$ without relying on Slater’s condition or the existence of a strictly known safe policy. We further show that access to accurate estimates of rewards and transitions can further improve these bounds. Our results offer practical guarantees for safe decision-making in adversarial environments.
APA
Zhu, J., Yu, K., Lee, D., Liu, X. & Wei, H. (2025). An Optimistic Algorithm for Online CMDPs with Anytime Adversarial Constraints. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:80347-80372. Available from https://proceedings.mlr.press/v267/zhu25ab.html.
