Sparse Learning of Dynamical Systems in RKHS: An Operator-Theoretic Approach

Boya Hou, Sina Sanjari, Nathan Dahlin, Subhonmesh Bose, Umesh Vaidya
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:13325-13352, 2023.

Abstract

Transfer operators provide a rich framework for representing the dynamics of very general, nonlinear dynamical systems. When interacting with reproducing kernel Hilbert spaces (RKHS), descriptions of dynamics often incur prohibitive data storage requirements, motivating dataset sparsification as a precursory step to computation. Further, in practice, data is available in the form of trajectories, introducing correlation between samples. In this work, we present a method for sparse learning of transfer operators from $\beta$-mixing stochastic processes, in both discrete and continuous time, and provide sample complexity analysis extending existing theoretical guarantees for learning from non-sparse, i.i.d. data. In addressing continuous-time settings, we develop precise descriptions using covariance-type operators for the infinitesimal generator that aids in the sample complexity analysis. We empirically illustrate the efficacy of our sparse embedding approach through deterministic and stochastic nonlinear system examples.
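The abstract's pipeline (sparsify a correlated trajectory in RKHS, then estimate a transfer operator on the reduced dictionary) can be illustrated with a minimal sketch. This is not the authors' algorithm; it is a generic kernel approach under assumed choices: a Gaussian kernel, a greedy approximate-linear-dependence test for dictionary sparsification, and a regularized least-squares (Koopman/EDMD-style) operator estimate. All function names and parameters are hypothetical.

```python
import numpy as np

def gauss_kernel(A, B, sigma=0.5):
    # Gaussian (RBF) kernel matrix between row-wise point sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def sparsify(X, nu=1e-3, sigma=0.5):
    # Greedy dictionary selection via an approximate-linear-dependence test:
    # admit a sample only if its kernel feature is not nu-close (in RKHS norm)
    # to the span of the current dictionary's features.
    idx = [0]
    for i in range(1, len(X)):
        D = X[idx]
        KDD = gauss_kernel(D, D, sigma)
        kDx = gauss_kernel(D, X[i:i + 1], sigma)
        resid = 1.0 - kDx.T @ np.linalg.solve(
            KDD + 1e-10 * np.eye(len(idx)), kDx)
        if resid.item() > nu:
            idx.append(i)
    return X[idx]

def transfer_matrix(X, Y, D, sigma=0.5, reg=1e-8):
    # Regularized least-squares estimate of the transfer operator restricted
    # to the span of the sparsified dictionary D: Phi_Y ~ Phi_X @ K.
    PhiX = gauss_kernel(X, D, sigma)
    PhiY = gauss_kernel(Y, D, sigma)
    return np.linalg.solve(PhiX.T @ PhiX + reg * np.eye(len(D)), PhiX.T @ PhiY)

# Single trajectory of a contracting stochastic nonlinear map: samples are
# correlated (non-i.i.d.), the setting the abstract highlights.
rng = np.random.default_rng(0)
traj = [np.array([0.9, -0.4])]
for _ in range(300):
    traj.append(0.9 * np.tanh(traj[-1]) + 0.01 * rng.standard_normal(2))
traj = np.array(traj)
X, Y = traj[:-1], traj[1:]

D = sparsify(X)            # sparsified dictionary, far smaller than X
K = transfer_matrix(X, Y, D)
print(D.shape[0], K.shape)
```

Because the trajectory contracts toward a fixed point, most samples are nearly linearly dependent in feature space and the dictionary stays small; the operator estimate then costs only `O(|D|^2)` storage rather than `O(n^2)`.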

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-hou23c,
  title     = {Sparse Learning of Dynamical Systems in {RKHS}: An Operator-Theoretic Approach},
  author    = {Hou, Boya and Sanjari, Sina and Dahlin, Nathan and Bose, Subhonmesh and Vaidya, Umesh},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {13325--13352},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/hou23c/hou23c.pdf},
  url       = {https://proceedings.mlr.press/v202/hou23c.html},
  abstract  = {Transfer operators provide a rich framework for representing the dynamics of very general, nonlinear dynamical systems. When interacting with reproducing kernel Hilbert spaces (RKHS), descriptions of dynamics often incur prohibitive data storage requirements, motivating dataset sparsification as a precursory step to computation. Further, in practice, data is available in the form of trajectories, introducing correlation between samples. In this work, we present a method for sparse learning of transfer operators from $\beta$-mixing stochastic processes, in both discrete and continuous time, and provide sample complexity analysis extending existing theoretical guarantees for learning from non-sparse, i.i.d. data. In addressing continuous-time settings, we develop precise descriptions using covariance-type operators for the infinitesimal generator that aids in the sample complexity analysis. We empirically illustrate the efficacy of our sparse embedding approach through deterministic and stochastic nonlinear system examples.}
}
Endnote
%0 Conference Paper
%T Sparse Learning of Dynamical Systems in RKHS: An Operator-Theoretic Approach
%A Boya Hou
%A Sina Sanjari
%A Nathan Dahlin
%A Subhonmesh Bose
%A Umesh Vaidya
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-hou23c
%I PMLR
%P 13325--13352
%U https://proceedings.mlr.press/v202/hou23c.html
%V 202
%X Transfer operators provide a rich framework for representing the dynamics of very general, nonlinear dynamical systems. When interacting with reproducing kernel Hilbert spaces (RKHS), descriptions of dynamics often incur prohibitive data storage requirements, motivating dataset sparsification as a precursory step to computation. Further, in practice, data is available in the form of trajectories, introducing correlation between samples. In this work, we present a method for sparse learning of transfer operators from $\beta$-mixing stochastic processes, in both discrete and continuous time, and provide sample complexity analysis extending existing theoretical guarantees for learning from non-sparse, i.i.d. data. In addressing continuous-time settings, we develop precise descriptions using covariance-type operators for the infinitesimal generator that aids in the sample complexity analysis. We empirically illustrate the efficacy of our sparse embedding approach through deterministic and stochastic nonlinear system examples.
APA
Hou, B., Sanjari, S., Dahlin, N., Bose, S., & Vaidya, U. (2023). Sparse Learning of Dynamical Systems in RKHS: An Operator-Theoretic Approach. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:13325-13352. Available from https://proceedings.mlr.press/v202/hou23c.html.