Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning

Nick-Marios Kokolakis, Kyriakos G Vamvoudakis, Wassim Haddad
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:1257-1270, 2023.

Abstract

In this paper, we address a safety-critical control problem using reachability analysis and design a reinforcement learning-based mechanism for learning, online and in fixed time, the solution to the safety-critical control problem. Safety is assured by determining a set of states for which there does not exist an admissible control law generating a system trajectory reaching a set of forbidden states at a user-prescribed time instant. Specifically, we cast our safety-critical problem as a Mayer optimal feedback control problem whose solution satisfies the Hamilton-Jacobi-Bellman (HJB) equation and characterizes the set of safe states. Since the HJB equation is generally difficult to solve, we develop an online critic-only reinforcement learning-based algorithm for simultaneously learning the solution to the HJB equation and the safe set in fixed time. In particular, we introduce a non-Lipschitz experience replay-based learning law utilizing recorded and current data for updating the critic weights to learn the value function and the safe set. The non-Lipschitz property of the dynamics gives rise to fixed-time convergence, whereas the experience replay-based approach eliminates the need to satisfy the persistence of excitation condition, provided that the recorded data is sufficiently rich. Simulation results illustrate the efficacy of the proposed approach.
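The abstract describes a critic-only update law that drives a Bellman-type residual to zero using both current and recorded data, with a non-Lipschitz (fractional-power) term that yields fixed-time convergence. The paper's exact law is not reproduced here; the following is a minimal illustrative sketch under stated assumptions: a linear-in-parameters critic $W^\top \phi(x)$, a normalized regressor, and the fractional-power plus higher-power error terms commonly used in fixed-time parameter estimation. All names, exponents, and gains (`phi`, `alpha`, `mu1`, `mu2`) are assumptions for illustration, not the authors' design.

```python
import numpy as np

def critic_update(W, buffer, alpha=1.0, mu1=0.5, mu2=1.5):
    """One gradient-like step of a hypothetical experience-replay critic law.

    buffer holds (phi, target) pairs: the current sample plus recorded ones,
    so richness of the stored data can stand in for persistence of excitation.
    mu1 < 1 makes the law non-Lipschitz at zero error (finite-time behavior);
    the mu2 > 1 term is the higher-power term associated with fixed-time
    convergence in the fixed-time stability literature.
    """
    dW = np.zeros_like(W)
    for phi, target in buffer:
        e = phi @ W - target              # scalar residual for this sample
        g = phi / (1.0 + phi @ phi)       # normalized regressor
        # non-Lipschitz fractional-power term + higher-power term
        dW -= alpha * g * (np.sign(e) * abs(e) ** mu1
                           + np.sign(e) * abs(e) ** mu2)
    return dW
```

A usage sketch: starting from zero weights with one recorded sample, each step shrinks the residual on the stored data, mimicking how the replay-based law fits the value function offline-collected samples as well as the current one.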

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-kokolakis23a,
  title     = {Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning},
  author    = {Kokolakis, Nick-Marios and Vamvoudakis, Kyriakos G and Haddad, Wassim},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {1257--1270},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/kokolakis23a/kokolakis23a.pdf},
  url       = {https://proceedings.mlr.press/v211/kokolakis23a.html},
  abstract  = {In this paper, we address a safety-critical control problem using reachability analysis and design a reinforcement learning-based mechanism for learning online and in fixed-time the solution to the safety-critical control problem. Safety is assured by determining a set of states for which there does not exist an admissible control law generating a system trajectory reaching a set of forbidden states at a user-prescribed time instant. Specifically, we cast our safety-critical problem as a Mayer optimal feedback control problem whose solution satisfies the Hamilton-Jacobi-Bellman (HJB) equation and characterizes the set of safe states. Since the HJB equation is generally difficult to solve, we develop an online critic-only reinforcement learning-based algorithm for simultaneously learning the solution to the HJB equation and the safe set in fixed time. In particular, we introduce a non-Lipschitz experience replay-based learning law utilizing recorded and current data for updating the critic weights to learn the value function and the safe set. The non-Lipschitz property of the dynamics gives rise to fixed-time convergence, whereas the experience replay-based approach eliminates the need of satisfying the persistence of excitation condition provided that the recorded data is sufficiently rich. Simulation results illustrate the efficacy of the proposed approach.}
}
Endnote
%0 Conference Paper
%T Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning
%A Nick-Marios Kokolakis
%A Kyriakos G Vamvoudakis
%A Wassim Haddad
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-kokolakis23a
%I PMLR
%P 1257--1270
%U https://proceedings.mlr.press/v211/kokolakis23a.html
%V 211
%X In this paper, we address a safety-critical control problem using reachability analysis and design a reinforcement learning-based mechanism for learning online and in fixed-time the solution to the safety-critical control problem. Safety is assured by determining a set of states for which there does not exist an admissible control law generating a system trajectory reaching a set of forbidden states at a user-prescribed time instant. Specifically, we cast our safety-critical problem as a Mayer optimal feedback control problem whose solution satisfies the Hamilton-Jacobi-Bellman (HJB) equation and characterizes the set of safe states. Since the HJB equation is generally difficult to solve, we develop an online critic-only reinforcement learning-based algorithm for simultaneously learning the solution to the HJB equation and the safe set in fixed time. In particular, we introduce a non-Lipschitz experience replay-based learning law utilizing recorded and current data for updating the critic weights to learn the value function and the safe set. The non-Lipschitz property of the dynamics gives rise to fixed-time convergence, whereas the experience replay-based approach eliminates the need of satisfying the persistence of excitation condition provided that the recorded data is sufficiently rich. Simulation results illustrate the efficacy of the proposed approach.
APA
Kokolakis, N., Vamvoudakis, K. G., & Haddad, W. (2023). Reachability Analysis-based Safety-Critical Control using Online Fixed-Time Reinforcement Learning. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:1257-1270. Available from https://proceedings.mlr.press/v211/kokolakis23a.html.