Safe Reinforcement Learning in Constrained Markov Decision Processes

Akifumi Wachi, Yanan Sui
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:9797-9806, 2020.

Abstract

Safe reinforcement learning has been a promising approach for optimizing the policy of an agent that operates in safety-critical applications. In this paper, we propose an algorithm, SNO-MDP, that explores and optimizes Markov decision processes under unknown safety constraints. Specifically, we take a step-wise approach for optimizing safety and cumulative reward. In our method, the agent first learns safety constraints by expanding the safe region, and then optimizes the cumulative reward in the certified safe region. We provide theoretical guarantees on both the satisfaction of the safety constraint and the near-optimality of the cumulative reward under proper regularity assumptions. In our experiments, we demonstrate the effectiveness of SNO-MDP through two experiments: one uses synthetic data in a new, openly available environment named GP-Safety-Gym, and the other simulates Mars surface exploration by using real observation data.
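The two-phase structure described above can be caricatured on a toy chain MDP. This sketch is ours, not the authors' code: SNO-MDP certifies safety with Gaussian-process confidence bounds, which we replace here with direct observation of a known safety function; all states, thresholds, and numbers are hypothetical.

```python
# Illustrative sketch only (not the paper's implementation). A state is
# "safe" iff its safety value clears the threshold h; the agent expands
# a certified safe set from a known-safe seed, then optimizes reward
# while restricted to that set.

n_states = 6
h = 0.0                                        # safety threshold
true_safety = [1.0, 0.8, 0.5, 0.2, -0.3, -0.6]
reward      = [0.0, 0.1, 0.2, 0.3, 1.0, 2.0]

# Phase 1: expand the certified safe set outward from the seed state 0.
# A neighbouring state joins the set once its observed safety value
# clears the threshold h.
safe, frontier = {0}, {0}
while frontier:
    nxt = set()
    for s in frontier:
        for s2 in (s - 1, s + 1):              # reachable neighbours
            if 0 <= s2 < n_states and s2 not in safe and true_safety[s2] >= h:
                safe.add(s2)
                nxt.add(s2)
    frontier = nxt

# Phase 2: value iteration restricted to the certified safe region,
# so the optimized policy can never step outside it.
gamma = 0.9
V = [0.0] * n_states
for _ in range(200):
    for s in sorted(safe):
        moves = [s2 for s2 in (s - 1, s, s + 1) if s2 in safe]
        V[s] = max(reward[s2] + gamma * V[s2] for s2 in moves)
```

In this toy instance the high-reward states 4 and 5 lie beyond the unsafe boundary, so the restricted optimum settles on state 3: the sketch's analogue of trading raw reward for certified safety.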

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-wachi20a,
  title     = {Safe Reinforcement Learning in Constrained {M}arkov Decision Processes},
  author    = {Wachi, Akifumi and Sui, Yanan},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {9797--9806},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/wachi20a/wachi20a.pdf},
  url       = {https://proceedings.mlr.press/v119/wachi20a.html},
  abstract  = {Safe reinforcement learning has been a promising approach for optimizing the policy of an agent that operates in safety-critical applications. In this paper, we propose an algorithm, SNO-MDP, that explores and optimizes Markov decision processes under unknown safety constraints. Specifically, we take a step-wise approach for optimizing safety and cumulative reward. In our method, the agent first learns safety constraints by expanding the safe region, and then optimizes the cumulative reward in the certified safe region. We provide theoretical guarantees on both the satisfaction of the safety constraint and the near-optimality of the cumulative reward under proper regularity assumptions. In our experiments, we demonstrate the effectiveness of SNO-MDP through two experiments: one uses synthetic data in a new, openly available environment named GP-Safety-Gym, and the other simulates Mars surface exploration by using real observation data.}
}
Endnote
%0 Conference Paper
%T Safe Reinforcement Learning in Constrained Markov Decision Processes
%A Akifumi Wachi
%A Yanan Sui
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-wachi20a
%I PMLR
%P 9797--9806
%U https://proceedings.mlr.press/v119/wachi20a.html
%V 119
%X Safe reinforcement learning has been a promising approach for optimizing the policy of an agent that operates in safety-critical applications. In this paper, we propose an algorithm, SNO-MDP, that explores and optimizes Markov decision processes under unknown safety constraints. Specifically, we take a step-wise approach for optimizing safety and cumulative reward. In our method, the agent first learns safety constraints by expanding the safe region, and then optimizes the cumulative reward in the certified safe region. We provide theoretical guarantees on both the satisfaction of the safety constraint and the near-optimality of the cumulative reward under proper regularity assumptions. In our experiments, we demonstrate the effectiveness of SNO-MDP through two experiments: one uses synthetic data in a new, openly available environment named GP-Safety-Gym, and the other simulates Mars surface exploration by using real observation data.
APA
Wachi, A. & Sui, Y. (2020). Safe Reinforcement Learning in Constrained Markov Decision Processes. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:9797-9806. Available from https://proceedings.mlr.press/v119/wachi20a.html.