Online Policy Optimization in Unknown Nonlinear Systems
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:3475-3522, 2024.
Abstract
We study online policy optimization in nonlinear time-varying systems where the true dynamical models are unknown to the controller. This problem is challenging because, unlike in linear systems, the controller cannot obtain globally accurate estimates of the ground-truth dynamics using local exploration. We propose a meta-framework that combines a general online policy optimization algorithm (ALG) with a general online estimator of the dynamical system's model parameters (EST). We show that if the hypothetical joint dynamics induced by ALG with known parameters satisfies several desired properties, the joint dynamics under inexact parameters from EST will be robust to errors. Importantly, the final regret depends only on EST's predictions along the visited trajectory, which relaxes the bottleneck of identifying the true parameters globally. To demonstrate our framework, we develop a computationally efficient variant of Gradient-based Adaptive Policy Selection, called Memoryless GAPS (M-GAPS), and use it to instantiate ALG. Combining M-GAPS with online gradient descent to instantiate EST yields (to our knowledge) the first local regret bound for online policy optimization in nonlinear time-varying systems with unknown dynamics.
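As a loose illustration of the ALG/EST interaction described above (not code from the paper), the sketch below runs a toy scalar nonlinear system with one unknown parameter. EST performs online gradient descent on the one-step prediction error along the visited trajectory, and a memoryless gradient step on the policy parameter stands in for M-GAPS; the system, cost, and step sizes are all hypothetical choices for illustration.

```python
import math

# Hypothetical toy system: x_{t+1} = theta_true * tanh(x_t) + u_t,
# with theta_true unknown to the controller.
theta_true = 0.8
theta_hat = 0.0      # EST's running estimate of the unknown parameter
k = 0.0              # ALG's policy parameter: linear policy u_t = -k * x_t
eta_est, eta_alg = 0.5, 0.05
x = 1.0

for t in range(200):
    u = -k * x
    x_next = theta_true * math.tanh(x) + u   # observed true transition

    # EST: online gradient descent on the squared one-step prediction
    # error, using only data from the visited trajectory (no global
    # system identification).
    pred = theta_hat * math.tanh(x) + u
    theta_hat -= eta_est * 2.0 * (pred - x_next) * math.tanh(x)

    # ALG (stand-in for M-GAPS): one memoryless gradient step on the
    # per-step cost c = x_next^2 + u^2, differentiated through the
    # *estimated* model. With u = -k*x: d(x_hat_next)/dk = -x, du/dk = -x.
    x_hat_next = theta_hat * math.tanh(x) + u
    dc_dk = 2.0 * x_hat_next * (-x) + 2.0 * u * (-x)
    k -= eta_alg * dc_dk

    x = x_next
```

In this toy run the state contracts toward zero while the estimate improves only along the visited trajectory, mirroring the point in the abstract that regret depends on EST's on-trajectory predictions rather than on global identification of the true parameters.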