A Safe Exploration Approach to Constrained Markov Decision Processes

Tingting Ni, Maryam Kamgarpour
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:3592-3600, 2025.

Abstract

We consider discounted infinite-horizon constrained Markov decision processes (CMDPs), where the goal is to find an optimal policy that maximizes the expected cumulative reward while satisfying expected cumulative constraints. Motivated by the application of CMDPs in online learning for safety-critical systems, we focus on developing a model-free and $\textit{simulator-free}$ algorithm that ensures $\textit{constraint satisfaction during learning}$. To this end, we employ the LB-SGD algorithm proposed by Usmanova et al. (2024), which utilizes an interior-point approach based on the log-barrier function of the CMDP. Under the commonly assumed conditions of relaxed Fisher non-degeneracy and bounded transfer error in policy parameterization, we establish the theoretical properties of the LB-SGD algorithm. In particular, unlike existing CMDP approaches that ensure policy feasibility only upon convergence, the LB-SGD algorithm guarantees feasibility throughout the learning process and converges to an $\varepsilon$-optimal policy with a sample complexity of $\tilde{\mathcal{O}}(\varepsilon^{-6})$. Compared to the state-of-the-art policy gradient-based algorithm, C-NPG-PDA, the LB-SGD algorithm requires a factor of $\mathcal{O}(\varepsilon^{-2})$ more samples to ensure policy feasibility during learning, under the same Fisher non-degenerate parameterization.
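As a reading aid, the underlying problem and the log-barrier surrogate used by an interior-point method of this kind can be sketched in standard CMDP notation; the symbols below ($r$, $c_i$, $b_i$, $\gamma$, $\eta$, and the parameterized policy $\pi_\theta$) are generic conventions, not necessarily the paper's exact formulation:

$$\max_{\theta}\; V_r(\theta) := \mathbb{E}_{\pi_\theta}\Big[\sum_{t=0}^{\infty} \gamma^t\, r(s_t, a_t)\Big] \quad \text{s.t.} \quad V_{c_i}(\theta) := \mathbb{E}_{\pi_\theta}\Big[\sum_{t=0}^{\infty} \gamma^t\, c_i(s_t, a_t)\Big] \le b_i, \quad i = 1, \dots, m,$$

$$B_\eta(\theta) \;=\; V_r(\theta) \;+\; \eta \sum_{i=1}^{m} \log\big(b_i - V_{c_i}(\theta)\big), \qquad \eta > 0.$$

Starting from a strictly feasible $\theta_0$ and ascending $B_\eta$ with stochastic gradient estimates keeps every iterate in the interior of the feasible set, since the barrier term diverges as any $V_{c_i}(\theta)$ approaches its threshold $b_i$; gradually decreasing $\eta$ then trades the barrier bias for near-optimality. This is the standard log-barrier intuition behind maintaining feasibility during learning.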

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-ni25a,
  title     = {A Safe Exploration Approach to Constrained Markov Decision Processes},
  author    = {Ni, Tingting and Kamgarpour, Maryam},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {3592--3600},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/ni25a/ni25a.pdf},
  url       = {https://proceedings.mlr.press/v258/ni25a.html},
  abstract  = {We consider discounted infinite-horizon constrained Markov decision processes (CMDPs), where the goal is to find an optimal policy that maximizes the expected cumulative reward while satisfying expected cumulative constraints. Motivated by the application of CMDPs in online learning for safety-critical systems, we focus on developing a model-free and $\textit{simulator-free}$ algorithm that ensures $\textit{constraint satisfaction during learning}$. To this end, we employ the LB-SGD algorithm proposed in (Usmanova et al., 2024), which utilizes an interior-point approach based on the log-barrier function of the CMDP. Under the commonly assumed conditions of relaxed Fisher non-degeneracy and bounded transfer error in policy parameterization, we establish the theoretical properties of the LB-SGD algorithm. In particular, unlike existing CMDP approaches that ensure policy feasibility only upon convergence, the LB-SGD algorithm guarantees feasibility throughout the learning process and converges to the $\varepsilon$-optimal policy with a sample complexity of $\tilde{\mathcal{O}}(\varepsilon^{-6})$. Compared to the state-of-the-art policy gradient-based algorithm, C-NPG-PDA, the LB-SGD algorithm requires an additional $\mathcal{O}(\varepsilon^{-2})$ samples to ensure policy feasibility during learning with the same Fisher non-degenerate parameterization.}
}
Endnote
%0 Conference Paper
%T A Safe Exploration Approach to Constrained Markov Decision Processes
%A Tingting Ni
%A Maryam Kamgarpour
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-ni25a
%I PMLR
%P 3592--3600
%U https://proceedings.mlr.press/v258/ni25a.html
%V 258
%X We consider discounted infinite-horizon constrained Markov decision processes (CMDPs), where the goal is to find an optimal policy that maximizes the expected cumulative reward while satisfying expected cumulative constraints. Motivated by the application of CMDPs in online learning for safety-critical systems, we focus on developing a model-free and $\textit{simulator-free}$ algorithm that ensures $\textit{constraint satisfaction during learning}$. To this end, we employ the LB-SGD algorithm proposed in (Usmanova et al., 2024), which utilizes an interior-point approach based on the log-barrier function of the CMDP. Under the commonly assumed conditions of relaxed Fisher non-degeneracy and bounded transfer error in policy parameterization, we establish the theoretical properties of the LB-SGD algorithm. In particular, unlike existing CMDP approaches that ensure policy feasibility only upon convergence, the LB-SGD algorithm guarantees feasibility throughout the learning process and converges to the $\varepsilon$-optimal policy with a sample complexity of $\tilde{\mathcal{O}}(\varepsilon^{-6})$. Compared to the state-of-the-art policy gradient-based algorithm, C-NPG-PDA, the LB-SGD algorithm requires an additional $\mathcal{O}(\varepsilon^{-2})$ samples to ensure policy feasibility during learning with the same Fisher non-degenerate parameterization.
APA
Ni, T. & Kamgarpour, M. (2025). A Safe Exploration Approach to Constrained Markov Decision Processes. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:3592-3600. Available from https://proceedings.mlr.press/v258/ni25a.html.