Learning Distributed Geometric Koopman Operator for Sparse Networked Dynamical Systems

Sayak Mukherjee, Sai Pushpak Nandanoori, Sheng Guan, Khushbu Agarwal, Subhrajit Sinha, Soumya Kundu, Seemita Pal, Yinghui Wu, Draguna L Vrabie, Sutanay Choudhury
Proceedings of the First Learning on Graphs Conference, PMLR 198:45:1-45:17, 2022.

Abstract

Koopman operator theory provides an alternative approach to studying nonlinear networked dynamical systems (NDS): the state space is mapped to an abstract higher-dimensional space in which the system evolution is linear. Recent works apply graph neural networks (GNNs) to learn state-to-object-centric embeddings and achieve a centralized, block-wise computation of the Koopman operator (KO) under additional assumptions on the underlying node properties and constraints on the KO structure. However, the computational complexity of learning the Koopman operator grows for large NDS, increasing combinatorially with the number of nodes. The learning challenge is further amplified for sparse networks by two factors: 1) sample sparsity for learning the Koopman operator in the nonlinear space, and 2) the dissimilarity in dynamics between individual nodes or from one subgraph to another. Our work addresses these challenges by formulating the representation learning of NDS in a multi-agent paradigm and learning the Koopman operator in a distributed manner. Our theoretical results show that the proposed distributed computation of the geometric Koopman operator is beneficial for sparse NDS, whereas for fully connected systems it coincides with the centralized one. Empirical studies on a rope system, a network of oscillators, and a power grid show performance comparable or superior to state-of-the-art methods, along with computational benefits.
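As a concrete (and heavily simplified) illustration of the block-wise idea behind such operators, the sketch below fits, for each node of a toy coupled map on a ring, a small least-squares Koopman block from the lifted states of its graph neighborhood to its own lifted next state, so the per-node problem size is governed by the neighborhood rather than by the whole network. This is only a plain-NumPy sketch under assumed dynamics and an assumed observable dictionary (`lift`); it is not the distributed geometric Koopman algorithm of the paper.

```python
# Minimal EDMD-style sketch of per-node (block-wise) Koopman learning on a
# sparse network. Illustrative only: the dictionary `lift`, the coupled ring
# dynamics, and all sizes are assumptions, not the paper's construction.
import numpy as np

def lift(x):
    # hypothetical dictionary of observables for a scalar node state
    return np.array([x, np.sin(x), x**2])

def simulate_ring(n_nodes=8, steps=150, eps=0.3, seed=0):
    # toy coupled map on a ring: each node mixes a nonlinear map of itself
    # with that of its two neighbors
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_nodes)
    traj = [x.copy()]
    for _ in range(steps):
        fx = 0.9 * np.sin(x)
        x = (1 - eps) * fx + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
        traj.append(x.copy())
    return np.array(traj)                              # shape (steps + 1, n_nodes)

def fit_node_operator(traj, i, neighbors):
    # regress node i's lifted next state on the lifted states of its closed
    # neighborhood; the block size depends on the neighborhood, not on n_nodes
    nbhd = [i] + list(neighbors)
    Psi_now  = np.hstack([np.stack([lift(s) for s in traj[:-1, j]]) for j in nbhd]).T
    Psi_next = np.stack([lift(s) for s in traj[1:, i]]).T
    return Psi_next @ np.linalg.pinv(Psi_now)          # shape (d, d * |nbhd|)

traj = simulate_ring()
n = traj.shape[1]
K_blocks = [fit_node_operator(traj, i, [(i - 1) % n, (i + 1) % n]) for i in range(n)]

# one-step prediction of node 0 from its own neighborhood only
t, i = 50, 0
psi = np.concatenate([lift(traj[t, j]) for j in (i, (i - 1) % n, (i + 1) % n)])
print(abs((K_blocks[i] @ psi)[0] - traj[t + 1, i]))    # small residual expected
```

Because the toy dynamics couple each node only to its ring neighbors, each local regression here is exactly expressible in the neighborhood observables; for a fully connected network the neighborhood would be the whole graph and the block-wise and centralized fits would coincide, echoing the statement in the abstract.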

Cite this Paper


BibTeX
@InProceedings{pmlr-v198-mukherjee22a,
  title     = {Learning Distributed Geometric Koopman Operator for Sparse Networked Dynamical Systems},
  author    = {Mukherjee, Sayak and Nandanoori, Sai Pushpak and Guan, Sheng and Agarwal, Khushbu and Sinha, Subhrajit and Kundu, Soumya and Pal, Seemita and Wu, Yinghui and Vrabie, Draguna L and Choudhury, Sutanay},
  booktitle = {Proceedings of the First Learning on Graphs Conference},
  pages     = {45:1--45:17},
  year      = {2022},
  editor    = {Rieck, Bastian and Pascanu, Razvan},
  volume    = {198},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--12 Dec},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v198/mukherjee22a/mukherjee22a.pdf},
  url       = {https://proceedings.mlr.press/v198/mukherjee22a.html},
  abstract  = {The Koopman operator theory provides an alternative to studying nonlinear networked dynamical systems (NDS) by mapping the state space to an abstract higher dimensional space where the system evolution is linear. The recent works show the application of graph neural networks (GNNs) to learn state to object-centric embedding and achieve centralized block-wise computation of the Koopman operator (KO) under additional assumptions on the underlying node properties and constraints on the KO structure. However, the computational complexity of learning the Koopman operator increases for large NDS. Moreover, the computational complexity increases in a combinatorial fashion with the increase in the number of nodes. The learning challenge is further amplified for sparse networks by two factors: 1) sample sparsity for learning the Koopman operator in the non-linear space, and 2) the dissimilarity in the dynamics of individual nodes or from one subgraph to another. Our work aims to address these challenges by formulating the representation learning of NDS into a multi-agent paradigm and learning the Koopman operator in a distributive manner. Our theoretical results show that the proposed distributed computation of the geometric Koopman operator is beneficial for sparse NDS, whereas for the fully connected systems this approach coincides with the centralized one. The empirical study on a rope system, a network of oscillators, and a power grid show comparable and superior performance along with computational benefits with the state-of-the-art methods.}
}
Endnote
%0 Conference Paper
%T Learning Distributed Geometric Koopman Operator for Sparse Networked Dynamical Systems
%A Sayak Mukherjee
%A Sai Pushpak Nandanoori
%A Sheng Guan
%A Khushbu Agarwal
%A Subhrajit Sinha
%A Soumya Kundu
%A Seemita Pal
%A Yinghui Wu
%A Draguna L Vrabie
%A Sutanay Choudhury
%B Proceedings of the First Learning on Graphs Conference
%C Proceedings of Machine Learning Research
%D 2022
%E Bastian Rieck
%E Razvan Pascanu
%F pmlr-v198-mukherjee22a
%I PMLR
%P 45:1--45:17
%U https://proceedings.mlr.press/v198/mukherjee22a.html
%V 198
%X The Koopman operator theory provides an alternative to studying nonlinear networked dynamical systems (NDS) by mapping the state space to an abstract higher dimensional space where the system evolution is linear. The recent works show the application of graph neural networks (GNNs) to learn state to object-centric embedding and achieve centralized block-wise computation of the Koopman operator (KO) under additional assumptions on the underlying node properties and constraints on the KO structure. However, the computational complexity of learning the Koopman operator increases for large NDS. Moreover, the computational complexity increases in a combinatorial fashion with the increase in the number of nodes. The learning challenge is further amplified for sparse networks by two factors: 1) sample sparsity for learning the Koopman operator in the non-linear space, and 2) the dissimilarity in the dynamics of individual nodes or from one subgraph to another. Our work aims to address these challenges by formulating the representation learning of NDS into a multi-agent paradigm and learning the Koopman operator in a distributive manner. Our theoretical results show that the proposed distributed computation of the geometric Koopman operator is beneficial for sparse NDS, whereas for the fully connected systems this approach coincides with the centralized one. The empirical study on a rope system, a network of oscillators, and a power grid show comparable and superior performance along with computational benefits with the state-of-the-art methods.
APA
Mukherjee, S., Nandanoori, S.P., Guan, S., Agarwal, K., Sinha, S., Kundu, S., Pal, S., Wu, Y., Vrabie, D.L. & Choudhury, S. (2022). Learning Distributed Geometric Koopman Operator for Sparse Networked Dynamical Systems. Proceedings of the First Learning on Graphs Conference, in Proceedings of Machine Learning Research 198:45:1-45:17. Available from https://proceedings.mlr.press/v198/mukherjee22a.html.
