Operator Learning for Nonlinear Adaptive Control

Luke Bhan, Yuanyuan Shi, Miroslav Krstic
Proceedings of The 5th Annual Learning for Dynamics and Control Conference, PMLR 211:346-357, 2023.

Abstract

In this work, we propose an operator learning framework for accelerating nonlinear adaptive control. We define three operator mappings in adaptive control: the parameter identifier operator, the controller gain operator, and the control operator. We introduce neural operators for learning both the parameter identification mapping and the gain function mapping to produce the control action at each step. Through the formalization of neural operators, we are able to learn these mappings for a wide set of different system parameter values without retraining. Empirically, we test our controller on two experiments ranging from an aircraft system (a nonlinear ODE) to a first-order hyperbolic PDE system. We demonstrate that the accuracy of both the gain function and parameter approximation can reach the magnitude of $10^{-4}$ with speedups around 98% compared to numerical solvers. Furthermore, we empirically demonstrate that despite error propagation, closed-loop stability guarantees are maintained when substituting neural operator approximations.
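
To make the framework concrete, the sketch below shows one way learned surrogates for two of the three mappings could be dropped into an adaptive control loop: an identifier surrogate maps the observed state/input trajectory to a parameter estimate, a gain surrogate maps that estimate to a feedback gain, and a certainty-equivalence control law combines the two. The toy scalar plant, the function names (plant_step, identifier_operator, gain_operator, control_operator), and the least-squares/analytic placeholders standing in for trained neural operators are illustrative assumptions only; they do not reproduce the paper's aircraft or hyperbolic PDE experiments.

    # Minimal sketch (assumptions noted above), not the paper's implementation.
    import numpy as np

    DT = 0.01  # integration step

    def plant_step(x, u, theta_true):
        """Toy scalar nonlinear plant x' = theta * x^2 + u (illustrative only)."""
        return x + DT * (theta_true * x**2 + u)

    def identifier_operator(x_history, u_history):
        """Stand-in for the learned parameter-identifier operator: maps the
        observed trajectory to a parameter estimate. A least-squares fit is a
        placeholder for a trained neural operator."""
        x = np.asarray(x_history[:-1])
        u = np.asarray(u_history)
        dxdt = np.diff(x_history) / DT
        phi = x**2
        return float(np.dot(phi, dxdt - u) / (np.dot(phi, phi) + 1e-8))

    def gain_operator(theta_hat):
        """Stand-in for the learned gain operator: maps the current parameter
        estimate to a feedback gain (placeholder gain schedule)."""
        return 2.0 + abs(theta_hat)

    def control_operator(x, theta_hat, k):
        """Certainty-equivalence control law built from the two learned mappings."""
        return -theta_hat * x**2 - k * x

    theta_true, x = 1.5, 1.0
    xs, us = [x], []
    for _ in range(500):
        theta_hat = identifier_operator(xs, us) if len(us) > 5 else 0.0
        u = control_operator(x, theta_hat, gain_operator(theta_hat))
        x = plant_step(x, u, theta_true)
        xs.append(x)
        us.append(u)

    print(f"final state {x:.4f}, estimated theta {theta_hat:.3f} (true {theta_true})")

The point of the sketch is the loop structure: at each step the (here, placeholder) operators are evaluated instead of running a numerical identifier or gain solver, which is where the reported speedups would come from.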

Cite this Paper


BibTeX
@InProceedings{pmlr-v211-bhan23a,
  title     = {Operator Learning for Nonlinear Adaptive Control},
  author    = {Bhan, Luke and Shi, Yuanyuan and Krstic, Miroslav},
  booktitle = {Proceedings of The 5th Annual Learning for Dynamics and Control Conference},
  pages     = {346--357},
  year      = {2023},
  editor    = {Matni, Nikolai and Morari, Manfred and Pappas, George J.},
  volume    = {211},
  series    = {Proceedings of Machine Learning Research},
  month     = {15--16 Jun},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v211/bhan23a/bhan23a.pdf},
  url       = {https://proceedings.mlr.press/v211/bhan23a.html},
  abstract  = {In this work, we propose an operator learning framework for accelerating nonlinear adaptive control. We define three operator mappings in adaptive control: the parameter identifier operator, the controller gain operator, and the control operator. We introduce neural operators for learning both the parameter identification mapping and the gain function mapping to produce the control action at each step. Through the formalization of neural operators, we are able to learn these mappings for a wide set of different system parameter values without retraining. Empirically, we test our controller on two experiments ranging from an aircraft system (a nonlinear ODE) to a first-order hyperbolic PDE system. We demonstrate that the accuracy of both the gain function and parameter approximation can reach the magnitude of $10^{-4}$ with speedups around 98% compared to numerical solvers. Furthermore, we empirically demonstrate that despite error propagation, closed-loop stability guarantees are maintained when substituting neural operator approximations.}
}
Endnote
%0 Conference Paper
%T Operator Learning for Nonlinear Adaptive Control
%A Luke Bhan
%A Yuanyuan Shi
%A Miroslav Krstic
%B Proceedings of The 5th Annual Learning for Dynamics and Control Conference
%C Proceedings of Machine Learning Research
%D 2023
%E Nikolai Matni
%E Manfred Morari
%E George J. Pappas
%F pmlr-v211-bhan23a
%I PMLR
%P 346--357
%U https://proceedings.mlr.press/v211/bhan23a.html
%V 211
%X In this work, we propose an operator learning framework for accelerating nonlinear adaptive control. We define three operator mappings in adaptive control: the parameter identifier operator, the controller gain operator, and the control operator. We introduce neural operators for learning both the parameter identification mapping and the gain function mapping to produce the control action at each step. Through the formalization of neural operators, we are able to learn these mappings for a wide set of different system parameter values without retraining. Empirically, we test our controller on two experiments ranging from an aircraft system (a nonlinear ODE) to a first-order hyperbolic PDE system. We demonstrate that the accuracy of both the gain function and parameter approximation can reach the magnitude of $10^{-4}$ with speedups around 98% compared to numerical solvers. Furthermore, we empirically demonstrate that despite error propagation, closed-loop stability guarantees are maintained when substituting neural operator approximations.
APA
Bhan, L., Shi, Y. & Krstic, M. (2023). Operator Learning for Nonlinear Adaptive Control. Proceedings of The 5th Annual Learning for Dynamics and Control Conference, in Proceedings of Machine Learning Research 211:346-357. Available from https://proceedings.mlr.press/v211/bhan23a.html.
