Realizable Continuous-Space Shields for Safe Reinforcement Learning

Kyungmin Kim, Davide Corsi, Andoni Rodríguez, Jb Lanier, Benjami Parellada, Pierre Baldi, César Sánchez, Roy Fox
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:932-945, 2025.

Abstract

While Deep Reinforcement Learning (DRL) has achieved remarkable success across various domains, it remains vulnerable to occasional catastrophic failures without additional safeguards. An effective solution to prevent these failures is to use a shield that validates and adjusts the agent’s actions to ensure compliance with a provided set of safety specifications. For real-world robotic domains, it is essential to define safety specifications over continuous state and action spaces to accurately account for system dynamics and compute new actions that minimally deviate from the agent’s original decision. In this paper, we present the first shielding approach specifically designed to ensure the satisfaction of safety requirements in continuous state and action spaces, making it suitable for practical robotic applications. Our method builds upon realizability, an essential property that confirms the shield will always be able to generate a safe action for any state in the environment. We formally prove that realizability can be verified for stateful shields, enabling the incorporation of non-Markovian safety requirements, such as loop avoidance. Finally, we demonstrate the effectiveness of our approach in ensuring safety without compromising the policy’s success rate by applying it to a navigation problem and a multi-agent particle environment.
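As a rough illustration of the shielding idea described in the abstract (not the authors' construction, which additionally handles stateful, non-Markovian specifications and verified realizability), the sketch below passes a continuous action through unchanged when it satisfies a safety predicate and otherwise projects it onto the safe set with minimal Euclidean deviation. The single-integrator dynamics, the circular obstacle, and the names next_position, safety_margin, and shield are all illustrative assumptions.

    import numpy as np
    from scipy.optimize import minimize

    def next_position(state, action, dt=0.5):
        # Assumed single-integrator dynamics: the action is a velocity command.
        return state + dt * np.asarray(action)

    def safety_margin(state, action, obstacle, radius=0.3):
        # Non-negative iff the successor state keeps a safe distance
        # from the obstacle (an assumed circular unsafe region).
        return np.linalg.norm(next_position(state, action) - obstacle) - radius

    def shield(state, action, obstacle):
        # Pass the agent's action through unchanged whenever it is already safe.
        if safety_margin(state, action, obstacle) >= 0.0:
            return action
        # Otherwise return the minimally deviating safe action: the Euclidean
        # projection of the proposed action onto the safe set.
        result = minimize(
            lambda a: np.sum((a - action) ** 2),
            x0=action,
            constraints=[{"type": "ineq",
                          "fun": lambda a: safety_margin(state, a, obstacle)}],
        )
        return result.x

    state = np.array([0.0, 0.0])
    obstacle = np.array([1.0, 0.0])
    proposed = np.array([1.8, 0.1])          # would step inside the unsafe region
    print(shield(state, proposed, obstacle))  # deflected just enough to stay safe

Unlike this memoryless sketch, the shields in the paper carry internal state, which is what allows non-Markovian requirements such as loop avoidance to be enforced, with realizability verified so that a safe action always exists.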

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-kim25c,
  title     = {Realizable Continuous-Space Shields for Safe Reinforcement Learning},
  author    = {Kim, Kyungmin and Corsi, Davide and Rodr\'{\i}guez, Andoni and Lanier, Jb and Parellada, Benjami and Baldi, Pierre and S\'{a}nchez, C\'{e}sar and Fox, Roy},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {932--945},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/kim25c/kim25c.pdf},
  url       = {https://proceedings.mlr.press/v283/kim25c.html}
}
Endnote
%0 Conference Paper
%T Realizable Continuous-Space Shields for Safe Reinforcement Learning
%A Kyungmin Kim
%A Davide Corsi
%A Andoni Rodríguez
%A Jb Lanier
%A Benjami Parellada
%A Pierre Baldi
%A César Sánchez
%A Roy Fox
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-kim25c
%I PMLR
%P 932--945
%U https://proceedings.mlr.press/v283/kim25c.html
%V 283
APA
Kim, K., Corsi, D., Rodríguez, A., Lanier, J., Parellada, B., Baldi, P., Sánchez, C. & Fox, R. (2025). Realizable Continuous-Space Shields for Safe Reinforcement Learning. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:932-945. Available from https://proceedings.mlr.press/v283/kim25c.html.