Gradient shaping for multi-constraint safe reinforcement learning

Yihang Yao, Zuxin Liu, Zhepeng Cen, Peide Huang, Tingnan Zhang, Wenhao Yu, Ding Zhao
Proceedings of the 6th Annual Learning for Dynamics & Control Conference, PMLR 242:25-39, 2024.

Abstract

Online safe reinforcement learning (RL) involves training a policy that maximizes task efficiency while satisfying constraints by interacting with the environment. In this paper, we address the challenges of solving multi-constraint (MC) safe RL problems. We approach safe RL from the perspective of multi-objective optimization (MOO) and propose a unified framework for MC safe RL algorithms; this framework highlights how the gradients derived from the constraints are manipulated. Building on insights from this framework, and recognizing the importance of handling redundant and conflicting constraints, we introduce the Gradient Shaping (GradS) method for general Lagrangian-based safe RL algorithms to improve training efficiency in terms of both reward and constraint satisfaction. Extensive experiments demonstrate that our method encourages exploration and learns policies with improved safety and reward performance across various challenging MC safe RL tasks, and that it scales well with the number of constraints.
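The abstract describes GradS only at a high level. As a point of reference, below is a minimal sketch of a generic multi-constraint Lagrangian update in which each constraint gradient is reshaped (here, simply normalized) before being combined with the reward gradient. All names, shapes, the dual update, and the particular shaping rule are illustrative assumptions, not the paper's GradS algorithm.

# Illustrative sketch (not the paper's GradS implementation): one policy/dual
# step for a Lagrangian-based safe RL problem with k cost constraints, where
# the constraint gradients are reshaped before being combined with the reward
# gradient. Shapes and the normalization rule below are assumptions.
import numpy as np

def shaped_update(grad_reward, grad_costs, costs, thresholds, lambdas,
                  lr_policy=1e-3, lr_dual=1e-2):
    """grad_reward: (d,) gradient of expected return w.r.t. policy parameters
    grad_costs : (k, d) gradients of each expected cost
    costs      : (k,) current expected costs
    thresholds : (k,) cost limits
    lambdas    : (k,) non-negative Lagrange multipliers
    """
    # Shape each constraint gradient, e.g. normalize it so that no single
    # (possibly redundant or conflicting) constraint dominates the update.
    shaped = np.array([g / (np.linalg.norm(g) + 1e-8) for g in grad_costs])

    # Ascend the Lagrangian in the policy parameters: follow the reward
    # gradient while descending the multiplier-weighted, shaped cost gradients.
    policy_step = lr_policy * (grad_reward - shaped.T @ lambdas)

    # Dual ascent on the multipliers, projected back to the non-negative orthant.
    new_lambdas = np.maximum(0.0, lambdas + lr_dual * (costs - thresholds))
    return policy_step, new_lambdas

# Toy call with random gradients for a 4-dimensional policy and 3 constraints.
rng = np.random.default_rng(0)
step, lam = shaped_update(rng.normal(size=4), rng.normal(size=(3, 4)),
                          costs=np.array([1.2, 0.4, 0.9]),
                          thresholds=np.array([1.0, 1.0, 1.0]),
                          lambdas=np.zeros(3))

The toy call only checks shapes; in an actual safe RL pipeline the gradients would come from policy-gradient estimators and this update would run inside the training loop.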

Cite this Paper


BibTeX
@InProceedings{pmlr-v242-yao24a,
  title     = {Gradient shaping for multi-constraint safe reinforcement learning},
  author    = {Yao, Yihang and Liu, Zuxin and Cen, Zhepeng and Huang, Peide and Zhang, Tingnan and Yu, Wenhao and Zhao, Ding},
  booktitle = {Proceedings of the 6th Annual Learning for Dynamics \& Control Conference},
  pages     = {25--39},
  year      = {2024},
  editor    = {Abate, Alessandro and Cannon, Mark and Margellos, Kostas and Papachristodoulou, Antonis},
  volume    = {242},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--17 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v242/yao24a/yao24a.pdf},
  url       = {https://proceedings.mlr.press/v242/yao24a.html}
}
Endnote
%0 Conference Paper
%T Gradient shaping for multi-constraint safe reinforcement learning
%A Yihang Yao
%A Zuxin Liu
%A Zhepeng Cen
%A Peide Huang
%A Tingnan Zhang
%A Wenhao Yu
%A Ding Zhao
%B Proceedings of the 6th Annual Learning for Dynamics & Control Conference
%C Proceedings of Machine Learning Research
%D 2024
%E Alessandro Abate
%E Mark Cannon
%E Kostas Margellos
%E Antonis Papachristodoulou
%F pmlr-v242-yao24a
%I PMLR
%P 25--39
%U https://proceedings.mlr.press/v242/yao24a.html
%V 242
APA
Yao, Y., Liu, Z., Cen, Z., Huang, P., Zhang, T., Yu, W. & Zhao, D. (2024). Gradient shaping for multi-constraint safe reinforcement learning. Proceedings of the 6th Annual Learning for Dynamics & Control Conference, in Proceedings of Machine Learning Research 242:25-39. Available from https://proceedings.mlr.press/v242/yao24a.html.
