ISAACS: Iterative Soft Adversarial Actor-Critic for Safety

Kai-Chieh Hsu, Duy Phuong Nguyen, Jaime Fernández Fisac
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:90-103, 2023.

Abstract

The deployment of robots in uncontrolled environments requires them to operate robustly under previously unseen scenarios, like irregular terrain and wind conditions. Unfortunately, while rigorous safety frameworks from robust optimal control theory scale poorly to high-dimensional nonlinear dynamics, control policies computed by more tractable “deep” methods lack guarantees and tend to exhibit little robustness to uncertain operating conditions. This work introduces a novel approach enabling scalable synthesis of robust safety-preserving controllers for robotic systems with general nonlinear dynamics subject to bounded modeling error, by combining game-theoretic safety analysis with adversarial reinforcement learning in simulation. Following a soft actor-critic scheme, a safety-seeking fallback policy is co-trained with an adversarial “disturbance” agent that aims to invoke the worst-case realization of model error and training-to-deployment discrepancy allowed by the designer’s uncertainty. While the learned control policy does not intrinsically guarantee safety, it is used to construct a real-time safety filter with robust safety guarantees based on forward reachability rollouts. This safety filter can be used in conjunction with a safety-agnostic control policy, precluding any task-driven actions that could result in loss of safety. We evaluate our learning-based safety approach in a 5D race car simulator, compare the learned safety policy to the numerically obtained optimal solution, and empirically validate the robust safety guarantee of our proposed safety filter against worst-case model discrepancy.
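For intuition, the rollout-based filtering idea described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' implementation: `dynamics`, `fallback_policy`, `disturbance_policy`, `is_safe`, and the horizon length are hypothetical placeholders supplied by the designer.

    # Minimal sketch of a rollout-based safety filter, assuming:
    #   dynamics(x, u, d)       -> next state under control u, disturbance d
    #   fallback_policy(x)      -> learned safety-seeking control action
    #   disturbance_policy(x)   -> learned worst-case disturbance action
    #   is_safe(x)              -> True iff x lies outside the failure set
    # All four callables are hypothetical names for illustration only.

    def safety_filter(state, task_action, dynamics, fallback_policy,
                      disturbance_policy, is_safe, horizon=50):
        # Imagine taking the task-driven action under the adversarial
        # disturbance, then handing control to the fallback policy.
        x = dynamics(state, task_action, disturbance_policy(state))
        for _ in range(horizon):
            if not is_safe(x):
                # The imagined rollout reaches failure: override the
                # task action with the safety-seeking fallback.
                return fallback_policy(state)
            # Continue rolling out the fallback/disturbance policy pair.
            x = dynamics(x, fallback_policy(x), disturbance_policy(x))
        # The rollout stayed safe for the full horizon: let the
        # task-driven action through unmodified.
        return task_action

The key design choice is that safety is checked by forward-simulating the learned fallback against the learned worst-case disturbance, so the filter's guarantee rests on the rollout rather than on the neural policy itself.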

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-hsu23a,
  title     = {ISAACS: Iterative Soft Adversarial Actor-Critic for Safety},
  author    = {Hsu, Kai-Chieh and Nguyen, Duy Phuong and Fisac, Jaime Fern\'andez},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {90--103},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/hsu23a/hsu23a.pdf},
  url       = {https://proceedings.mlr.press/v211/hsu23a.html},
  abstract  = {The deployment of robots in uncontrolled environments requires them to operate robustly under previously unseen scenarios, like irregular terrain and wind conditions. Unfortunately, while rigorous safety frameworks from robust optimal control theory scale poorly to high-dimensional nonlinear dynamics, control policies computed by more tractable “deep” methods lack guarantees and tend to exhibit little robustness to uncertain operating conditions. This work introduces a novel approach enabling scalable synthesis of robust safety-preserving controllers for robotic systems with general nonlinear dynamics subject to bounded modeling error, by combining game-theoretic safety analysis with adversarial reinforcement learning in simulation. Following a soft actor-critic scheme, a safety-seeking fallback policy is co-trained with an adversarial “disturbance” agent that aims to invoke the worst-case realization of model error and training-to-deployment discrepancy allowed by the designer’s uncertainty. While the learned control policy does not intrinsically guarantee safety, it is used to construct a real-time safety filter with robust safety guarantees based on forward reachability rollouts. This safety filter can be used in conjunction with a safety-agnostic control policy, precluding any task-driven actions that could result in loss of safety. We evaluate our learning-based safety approach in a 5D race car simulator, compare the learned safety policy to the numerically obtained optimal solution, and empirically validate the robust safety guarantee of our proposed safety filter against worst-case model discrepancy.}
}
Endnote
%0 Conference Paper
%T ISAACS: Iterative Soft Adversarial Actor-Critic for Safety
%A Kai-Chieh Hsu
%A Duy Phuong Nguyen
%A Jaime Fernández Fisac
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-hsu23a
%I PMLR
%P 90--103
%U https://proceedings.mlr.press/v211/hsu23a.html
%V 211
%X The deployment of robots in uncontrolled environments requires them to operate robustly under previously unseen scenarios, like irregular terrain and wind conditions. Unfortunately, while rigorous safety frameworks from robust optimal control theory scale poorly to high-dimensional nonlinear dynamics, control policies computed by more tractable “deep” methods lack guarantees and tend to exhibit little robustness to uncertain operating conditions. This work introduces a novel approach enabling scalable synthesis of robust safety-preserving controllers for robotic systems with general nonlinear dynamics subject to bounded modeling error, by combining game-theoretic safety analysis with adversarial reinforcement learning in simulation. Following a soft actor-critic scheme, a safety-seeking fallback policy is co-trained with an adversarial “disturbance” agent that aims to invoke the worst-case realization of model error and training-to-deployment discrepancy allowed by the designer’s uncertainty. While the learned control policy does not intrinsically guarantee safety, it is used to construct a real-time safety filter with robust safety guarantees based on forward reachability rollouts. This safety filter can be used in conjunction with a safety-agnostic control policy, precluding any task-driven actions that could result in loss of safety. We evaluate our learning-based safety approach in a 5D race car simulator, compare the learned safety policy to the numerically obtained optimal solution, and empirically validate the robust safety guarantee of our proposed safety filter against worst-case model discrepancy.
APA
Hsu, K., Nguyen, D.P. & Fisac, J.F. (2023). ISAACS: Iterative Soft Adversarial Actor-Critic for Safety. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:90-103. Available from https://proceedings.mlr.press/v211/hsu23a.html.
