Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters

Azra Begzadic, Nikhil Shinde, Sander Tonkens, Dylan Hirsch, Kaleb Ugalde, Michael Yip, Jorge Cortes, Sylvia Herbert
Proceedings of the 7th Annual Learning for Dynamics & Control Conference, PMLR 283:1154-1166, 2025.

Abstract

Designing controllers to accomplish a task while guaranteeing safety constraints remains a significant challenge. We often want an agent to perform well in a nominal task, such as environment exploration, while ensuring it can avoid unsafe states and return to a desired target by a specific time. In particular, we are motivated by the setting of safe, efficient, hands-off training for reinforcement learning in the real world. By enabling a robot to safely and autonomously reset to a desired region (e.g., a charging station) without human intervention, we can enhance efficiency and facilitate training. Safety filters, such as those based on control barrier functions, enable decoupling safety from nominal control objectives and rigorously guaranteeing safety. Despite their success, constructing these functions for general nonlinear systems with control constraints and system uncertainties remains an open problem. This paper introduces a safety filter obtained from the value function associated with the reach-avoid problem. The proposed safety filter minimally modifies the nominal controller while avoiding unsafe regions and guiding the system back to the desired target set. By preserving policy performance while allowing safe resetting, we enable efficient hands-off reinforcement learning and advance the feasibility of safe training for real-world robots. We demonstrate our approach using a modified version of soft actor-critic to safely train a swing-up policy on a modified cartpole stabilization problem.
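
The filtering scheme described in the abstract can be viewed as a least-restrictive switch driven by a reach-avoid value function: the nominal (e.g., RL exploration) action is applied whenever the predicted next state retains a sufficient safety margin, and otherwise the system falls back to the reach-avoid action that steers it back to the target (reset) set. The Python sketch below is illustrative only and is not the authors' implementation; the names reach_avoid_filter, reach_avoid_value, safe_policy, dynamics, and the margin eps are hypothetical stand-ins chosen for this example.

import numpy as np

# Minimal illustrative sketch (not the paper's code): a least-restrictive
# safety filter driven by a reach-avoid value function. All components named
# here (reach_avoid_value, safe_policy, dynamics, eps) are assumed stand-ins.

def reach_avoid_filter(x, u_nominal, reach_avoid_value, safe_policy, dynamics,
                       dt=0.05, eps=0.0):
    """Return the nominal action when the predicted next state keeps the
    reach-avoid value above the margin; otherwise return the safe action
    that drives the system back toward the target (reset) set."""
    x_next = x + dt * np.asarray(dynamics(x, u_nominal))  # one-step Euler prediction
    if reach_avoid_value(x_next) > eps:  # positive value: target reachable while avoiding failure
        return u_nominal                 # safe enough: do not intervene
    return safe_policy(x)                # intervene with the reach-avoid (safe) action


if __name__ == "__main__":
    # Toy double-integrator stand-ins, purely for illustration.
    dyn = lambda x, u: np.array([x[1], u])        # assumed dynamics: [velocity, input]
    V = lambda x: 1.0 - np.linalg.norm(x)         # stand-in reach-avoid value function
    pi_safe = lambda x: -float(x[0] + x[1])       # stand-in safe fallback policy
    x0, u_rl = np.array([0.2, 0.0]), 0.5          # current state and RL-proposed action
    print(reach_avoid_filter(x0, u_rl, V, pi_safe, dyn))  # keeps u_rl, since V stays positive here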

Cite this Paper


BibTeX
@InProceedings{pmlr-v283-begzadic25a,
  title     = {Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters},
  author    = {Begzadic, Azra and Shinde, Nikhil and Tonkens, Sander and Hirsch, Dylan and Ugalde, Kaleb and Yip, Michael and Cortes, Jorge and Herbert, Sylvia},
  booktitle = {Proceedings of the 7th Annual Learning for Dynamics \& Control Conference},
  pages     = {1154--1166},
  year      = {2025},
  editor    = {Ozay, Necmiye and Balzano, Laura and Panagou, Dimitra and Abate, Alessandro},
  volume    = {283},
  series    = {Proceedings of Machine Learning Research},
  month     = {04--06 Jun},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v283/main/assets/begzadic25a/begzadic25a.pdf},
  url       = {https://proceedings.mlr.press/v283/begzadic25a.html},
  abstract  = {Designing controllers to accomplish a task while guaranteeing constraints on safety remains a significant challenge. We often want an agent to perform well in a nominal task, such as environment exploration, while ensuring it can avoid unsafe states and return to a desired target by a specific time. In particular we are motivated by the setting of safe, efficient, hands-off training for reinforcement learning in the real world. By enabling a robot to safely and autonomously reset to a desired region (e.g., charging stations) without human intervention, we can enhance efficiency and facilitate training. Safety filters, such as those based on control barrier functions, enable decoupling safety from nominal control objectives and rigorously guaranteeing safety. Despite their success, constructing these functions for general nonlinear systems with control constraints and system uncertainties remains an open problem. This paper introduces a safety filter obtained from the value function associated with the reach-avoid problem. The proposed safety filter minimally modifies the nominal controller while avoiding unsafe regions and guiding the system back to the desired target set. By preserving policy performance while allowing safe resetting, we enable efficient hands-off reinforcement learning and advance the feasibility of safe training for real world robots. We demonstrate our approach using a modified version of soft actor-critic to safely train a swing-up task on a modified cartpole stabilization problem.}
}
Endnote
%0 Conference Paper
%T Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters
%A Azra Begzadic
%A Nikhil Shinde
%A Sander Tonkens
%A Dylan Hirsch
%A Kaleb Ugalde
%A Michael Yip
%A Jorge Cortes
%A Sylvia Herbert
%B Proceedings of the 7th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2025
%E Necmiye Ozay
%E Laura Balzano
%E Dimitra Panagou
%E Alessandro Abate
%F pmlr-v283-begzadic25a
%I PMLR
%P 1154--1166
%U https://proceedings.mlr.press/v283/begzadic25a.html
%V 283
%X Designing controllers to accomplish a task while guaranteeing constraints on safety remains a significant challenge. We often want an agent to perform well in a nominal task, such as environment exploration, while ensuring it can avoid unsafe states and return to a desired target by a specific time. In particular we are motivated by the setting of safe, efficient, hands-off training for reinforcement learning in the real world. By enabling a robot to safely and autonomously reset to a desired region (e.g., charging stations) without human intervention, we can enhance efficiency and facilitate training. Safety filters, such as those based on control barrier functions, enable decoupling safety from nominal control objectives and rigorously guaranteeing safety. Despite their success, constructing these functions for general nonlinear systems with control constraints and system uncertainties remains an open problem. This paper introduces a safety filter obtained from the value function associated with the reach-avoid problem. The proposed safety filter minimally modifies the nominal controller while avoiding unsafe regions and guiding the system back to the desired target set. By preserving policy performance while allowing safe resetting, we enable efficient hands-off reinforcement learning and advance the feasibility of safe training for real world robots. We demonstrate our approach using a modified version of soft actor-critic to safely train a swing-up task on a modified cartpole stabilization problem.
APA
Begzadic, A., Shinde, N., Tonkens, S., Hirsch, D., Ugalde, K., Yip, M., Cortes, J. & Herbert, S. (2025). Back to Base: Towards Hands-Off Learning via Safe Resets with Reach-Avoid Safety Filters. Proceedings of the 7th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 283:1154-1166. Available from https://proceedings.mlr.press/v283/begzadic25a.html.