Online Policy Optimization in Unknown Nonlinear Systems

Yiheng Lin, James A. Preiss, Fengze Xie, Emile Anand, Soon-Jo Chung, Yisong Yue, Adam Wierman
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3475-3522, 2024.

Abstract

We study online policy optimization in nonlinear time-varying systems where the true dynamical models are unknown to the controller. This problem is challenging because, unlike in linear systems, the controller cannot obtain globally accurate estimations of the ground-truth dynamics using local exploration. We propose a meta-framework that combines a general online policy optimization algorithm (\texttt{ALG}) with a general online estimator of the dynamical system’s model parameters (\texttt{EST}). We show that if the hypothetical joint dynamics induced by \texttt{ALG} with \emph{known} parameters satisfies several desired properties, the joint dynamics under \emph{inexact} parameters from \texttt{EST} will be robust to errors. Importantly, the final regret only depends on \texttt{EST}’s predictions on the visited trajectory, which relaxes a bottleneck on identifying the true parameters globally. To demonstrate our framework, we develop a computationally efficient variant of Gradient-based Adaptive Policy Selection, called Memoryless GAPS (M-GAPS), and use it to instantiate \texttt{ALG}. Combining \mbox{M-GAPS} with online gradient descent to instantiate \texttt{EST} yields (to our knowledge) the first local regret bound for online policy optimization in nonlinear time-varying systems with unknown dynamics.
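The paper's actual algorithms and guarantees are in the full text; purely as a loose illustration of the ALG/EST interleaving the abstract describes, the toy sketch below pairs a memoryless policy-gradient step (standing in for \texttt{ALG}) with online gradient descent on the one-step prediction error (standing in for \texttt{EST}) on a scalar linear system. All names, constants, and the system itself are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy scalar system (illustrative, not from the paper):
#   x_{t+1} = theta* . x_t + u_t, with quadratic stage cost x_{t+1}^2 + u_t^2.
# EST: online gradient descent on the squared one-step prediction error.
# ALG: a memoryless gradient step on the policy gain k (policy u_t = -k x_t),
#      computed through the *estimated* dynamics theta_hat.

theta_true = 0.9          # unknown dynamics parameter (hidden from the controller)
theta_hat = 0.0           # EST's running estimate
k = 0.0                   # policy parameter maintained by ALG
eta_est, eta_alg = 0.5, 0.05
x = 1.0

for t in range(200):
    u = -k * x                       # act with the current policy
    x_next = theta_true * x + u      # true (unknown) dynamics generate the next state
    # EST step: gradient of (theta_hat*x + u - x_next)^2 w.r.t. theta_hat is
    # 2*pred_err*x; the constant 2 is absorbed into the step size.
    pred_err = theta_hat * x + u - x_next
    theta_hat -= eta_est * pred_err * x
    # ALG step: gradient of the estimated stage cost
    #   ((theta_hat - k) x)^2 + (k x)^2   w.r.t. k
    grad_k = 2 * (theta_hat - k) * x * (-x) + 2 * u * (-x)
    k -= eta_alg * grad_k
    x = x_next

# Note: as the closed loop stabilizes and x shrinks, the identification signal
# fades, so theta_hat need not reach theta_true globally -- echoing the point
# that what matters is EST's accuracy on the visited trajectory.
```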

Cite this Paper


BibTeX
@InProceedings{pmlr-v247-lin24a,
  title     = {Online Policy Optimization in Unknown Nonlinear Systems},
  author    = {Lin, Yiheng and Preiss, James A. and Xie, Fengze and Anand, Emile and Chung, Soon-Jo and Yue, Yisong and Wierman, Adam},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {3475--3522},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/lin24a/lin24a.pdf},
  url       = {https://proceedings.mlr.press/v247/lin24a.html},
  abstract  = {We study online policy optimization in nonlinear time-varying systems where the true dynamical models are unknown to the controller. This problem is challenging because, unlike in linear systems, the controller cannot obtain globally accurate estimations of the ground-truth dynamics using local exploration. We propose a meta-framework that combines a general online policy optimization algorithm (\texttt{ALG}) with a general online estimator of the dynamical system's model parameters (\texttt{EST}). We show that if the hypothetical joint dynamics induced by \texttt{ALG} with \emph{known} parameters satisfies several desired properties, the joint dynamics under \emph{inexact} parameters from \texttt{EST} will be robust to errors. Importantly, the final regret only depends on \texttt{EST}'s predictions on the visited trajectory, which relaxes a bottleneck on identifying the true parameters globally. To demonstrate our framework, we develop a computationally efficient variant of Gradient-based Adaptive Policy Selection, called Memoryless GAPS (M-GAPS), and use it to instantiate \texttt{ALG}. Combining \mbox{M-GAPS} with online gradient descent to instantiate \texttt{EST} yields (to our knowledge) the first local regret bound for online policy optimization in nonlinear time-varying systems with unknown dynamics.}
}
Endnote
%0 Conference Paper
%T Online Policy Optimization in Unknown Nonlinear Systems
%A Yiheng Lin
%A James A. Preiss
%A Fengze Xie
%A Emile Anand
%A Soon-Jo Chung
%A Yisong Yue
%A Adam Wierman
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-lin24a
%I PMLR
%P 3475--3522
%U https://proceedings.mlr.press/v247/lin24a.html
%V 247
%X We study online policy optimization in nonlinear time-varying systems where the true dynamical models are unknown to the controller. This problem is challenging because, unlike in linear systems, the controller cannot obtain globally accurate estimations of the ground-truth dynamics using local exploration. We propose a meta-framework that combines a general online policy optimization algorithm (\texttt{ALG}) with a general online estimator of the dynamical system's model parameters (\texttt{EST}). We show that if the hypothetical joint dynamics induced by \texttt{ALG} with \emph{known} parameters satisfies several desired properties, the joint dynamics under \emph{inexact} parameters from \texttt{EST} will be robust to errors. Importantly, the final regret only depends on \texttt{EST}'s predictions on the visited trajectory, which relaxes a bottleneck on identifying the true parameters globally. To demonstrate our framework, we develop a computationally efficient variant of Gradient-based Adaptive Policy Selection, called Memoryless GAPS (M-GAPS), and use it to instantiate \texttt{ALG}. Combining \mbox{M-GAPS} with online gradient descent to instantiate \texttt{EST} yields (to our knowledge) the first local regret bound for online policy optimization in nonlinear time-varying systems with unknown dynamics.
APA
Lin, Y., Preiss, J.A., Xie, F., Anand, E., Chung, S.-J., Yue, Y. & Wierman, A. (2024). Online Policy Optimization in Unknown Nonlinear Systems. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:3475-3522. Available from https://proceedings.mlr.press/v247/lin24a.html.
