Reinforcement Learning with Almost Sure Constraints

Agustin Castellano, Hancheng Min, Enrique Mallada, Juan Andrés Bazerque
Proceedings of The 4th Annual Learning for Dynamics and Control Conference, PMLR 168:559-570, 2022.

Abstract

In this work we address the problem of finding feasible policies for Constrained Markov Decision Processes under probability-one constraints. We argue that stationary policies are not sufficient for solving this problem, and that a rich class of policies can be found by endowing the controller with a scalar quantity, a so-called budget, that tracks how close the agent is to violating the constraint. We show that the minimal budget required to act safely can be obtained as the smallest fixed point of a Bellman-like operator, whose convergence properties we analyze. We also show how to learn this quantity when the true kernel of the Markov decision process is not known, and we provide sample-complexity bounds. The utility of knowing this minimal budget lies in that it can aid the search for optimal or near-optimal policies by shrinking the region of the state space the agent must navigate. Simulations illustrate the different nature of probability-one constraints compared with the typically used constraints in expectation.
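The fixed-point characterization can be illustrated on a toy example. The sketch below is not the paper's implementation; it assumes per-step damages d(s,a), a hypothetical tabular MDP, and a Bellman-like operator of the form (TB)(s) = min over a of [d(s,a) + max over states s' reachable with positive probability of B(s')], iterated from B = 0 so the iterates grow toward the smallest fixed point:

```python
import numpy as np

# Hypothetical toy MDP: 4 states, 2 actions.
# reachable[s][a] = states reachable with positive probability (the worst
# case matters for almost-sure safety, not the transition probabilities).
# damage[s][a] = instantaneous constraint cost incurred by playing a in s.
# State 3 is absorbing and damage-free.
reachable = {
    0: {0: [1], 1: [2]},
    1: {0: [3], 1: [0, 2]},
    2: {0: [3], 1: [3]},
    3: {0: [3], 1: [3]},
}
damage = {
    0: {0: 0.0, 1: 1.0},
    1: {0: 0.0, 1: 0.0},
    2: {0: 2.0, 1: 1.0},
    3: {0: 0.0, 1: 0.0},
}

def minimal_budget(reachable, damage, iters=50):
    """Iterate B <- T B with
       (T B)(s) = min_a [ damage(s,a) + max_{s' reachable from (s,a)} B(s') ],
    starting from B = 0, so the iterates increase monotonically toward the
    smallest fixed point of T (the minimal budget needed to act safely)."""
    B = {s: 0.0 for s in reachable}
    for _ in range(iters):
        B = {s: min(damage[s][a] + max(B[sp] for sp in reachable[s][a])
                    for a in reachable[s])
             for s in reachable}
    return B

B = minimal_budget(reachable, damage)
```

Here every action out of state 2 incurs damage, so its minimal budget is positive, while states 0, 1, and 3 admit damage-free paths to the absorbing state and need no budget at all.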

Cite this Paper


BibTeX
@InProceedings{pmlr-v168-castellano22a,
  title     = {Reinforcement Learning with Almost Sure Constraints},
  author    = {Castellano, Agustin and Min, Hancheng and Mallada, Enrique and Bazerque, Juan Andr\'es},
  booktitle = {Proceedings of The 4th Annual Learning for Dynamics and Control Conference},
  pages     = {559--570},
  year      = {2022},
  editor    = {Firoozi, Roya and Mehr, Negar and Yel, Esen and Antonova, Rika and Bohg, Jeannette and Schwager, Mac and Kochenderfer, Mykel},
  volume    = {168},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--24 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v168/castellano22a/castellano22a.pdf},
  url       = {https://proceedings.mlr.press/v168/castellano22a.html},
  abstract  = {In this work we address the problem of finding feasible policies for Constrained Markov Decision Processes under probability one constraints. We argue that stationary policies are not sufficient for solving this problem, and that a rich class of policies can be found by endowing the controller with a scalar quantity, so called budget, that tracks how close the agent is to violating the constraint. We show that the minimal budget required to act safely can be obtained as the smallest fixed point of a Bellman-like operator, for which we analyze its convergence properties. We also show how to learn this quantity when the true kernel of the Markov decision process is not known, while providing sample-complexity bounds. The utility of knowing this minimal budget relies in that it can aid in the search of optimal or near-optimal policies by shrinking down the region of the state space the agent must navigate. Simulations illustrate the different nature of probability one constraints against the typically used constraints in expectation.}
}
Endnote
%0 Conference Paper
%T Reinforcement Learning with Almost Sure Constraints
%A Agustin Castellano
%A Hancheng Min
%A Enrique Mallada
%A Juan Andrés Bazerque
%B Proceedings of The 4th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Roya Firoozi
%E Negar Mehr
%E Esen Yel
%E Rika Antonova
%E Jeannette Bohg
%E Mac Schwager
%E Mykel Kochenderfer
%F pmlr-v168-castellano22a
%I PMLR
%P 559--570
%U https://proceedings.mlr.press/v168/castellano22a.html
%V 168
%X In this work we address the problem of finding feasible policies for Constrained Markov Decision Processes under probability one constraints. We argue that stationary policies are not sufficient for solving this problem, and that a rich class of policies can be found by endowing the controller with a scalar quantity, so called budget, that tracks how close the agent is to violating the constraint. We show that the minimal budget required to act safely can be obtained as the smallest fixed point of a Bellman-like operator, for which we analyze its convergence properties. We also show how to learn this quantity when the true kernel of the Markov decision process is not known, while providing sample-complexity bounds. The utility of knowing this minimal budget relies in that it can aid in the search of optimal or near-optimal policies by shrinking down the region of the state space the agent must navigate. Simulations illustrate the different nature of probability one constraints against the typically used constraints in expectation.
APA
Castellano, A., Min, H., Mallada, E. & Bazerque, J. A. (2022). Reinforcement Learning with Almost Sure Constraints. Proceedings of The 4th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 168:559-570. Available from https://proceedings.mlr.press/v168/castellano22a.html.