Continuous Spatiotemporal Transformer

Antonio Henrique De Oliveira Fonseca, Emanuele Zappala, Josue Ortega Caro, David Van Dijk
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:7343-7365, 2023.

Abstract

Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning. Transformer models have been very successful in NLP and computer vision where they provide interpretable representations of data. However, a limitation of transformers in modeling continuous dynamical systems is that they are fundamentally discrete time and space models and thus have no guarantees regarding continuous sampling. To address this challenge, we present the Continuous Spatiotemporal Transformer (CST), a new transformer architecture that is designed for modeling of continuous systems. This new framework guarantees a continuous and smooth output via optimization in Sobolev space. We benchmark CST against traditional transformers as well as other spatiotemporal dynamics modeling methods and achieve superior performance in a number of tasks on synthetic and real systems, including learning brain dynamics from calcium imaging data.
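The abstract's claim of "a continuous and smooth output via optimization in Sobolev space" can be illustrated with a Sobolev-style training loss that penalizes mismatch not only in values but also in (finite-difference) time derivatives. The sketch below is a hypothetical, minimal illustration of this general idea, not the authors' implementation; the function name, shapes, and weighting parameter `lam` are all assumptions.

```python
import numpy as np

def sobolev_loss(pred, target, dt=1.0, lam=0.1):
    """Sobolev-style (H^1) loss sketch: data-fit MSE plus a weighted
    penalty on the mismatch of finite-difference time derivatives.

    pred, target: arrays of shape (time, channels).
    Hypothetical illustration only, not the paper's code.
    """
    # Standard data-fit term.
    mse = np.mean((pred - target) ** 2)
    # First-derivative term: matching derivatives encourages a smooth fit.
    d_pred = np.diff(pred, axis=0) / dt
    d_target = np.diff(target, axis=0) / dt
    d_mse = np.mean((d_pred - d_target) ** 2)
    return mse + lam * d_mse
```

Minimizing the derivative term alongside the value term is what distinguishes an H^1 (Sobolev) objective from a plain L^2 (MSE) one.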

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-de-oliveira-fonseca23a,
  title     = {Continuous Spatiotemporal Transformer},
  author    = {De Oliveira Fonseca, Antonio Henrique and Zappala, Emanuele and Ortega Caro, Josue and Dijk, David Van},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {7343--7365},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/de-oliveira-fonseca23a/de-oliveira-fonseca23a.pdf},
  url       = {https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html},
  abstract  = {Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning. Transformer models have been very successful in NLP and computer vision where they provide interpretable representations of data. However, a limitation of transformers in modeling continuous dynamical systems is that they are fundamentally discrete time and space models and thus have no guarantees regarding continuous sampling. To address this challenge, we present the Continuous Spatiotemporal Transformer (CST), a new transformer architecture that is designed for modeling of continuous systems. This new framework guarantees a continuous and smooth output via optimization in Sobolev space. We benchmark CST against traditional transformers as well as other spatiotemporal dynamics modeling methods and achieve superior performance in a number of tasks on synthetic and real systems, including learning brain dynamics from calcium imaging data.}
}
Endnote
%0 Conference Paper
%T Continuous Spatiotemporal Transformer
%A Antonio Henrique De Oliveira Fonseca
%A Emanuele Zappala
%A Josue Ortega Caro
%A David Van Dijk
%B Proceedings of the 40th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2023
%E Andreas Krause
%E Emma Brunskill
%E Kyunghyun Cho
%E Barbara Engelhardt
%E Sivan Sabato
%E Jonathan Scarlett
%F pmlr-v202-de-oliveira-fonseca23a
%I PMLR
%P 7343--7365
%U https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html
%V 202
%X Modeling spatiotemporal dynamical systems is a fundamental challenge in machine learning. Transformer models have been very successful in NLP and computer vision where they provide interpretable representations of data. However, a limitation of transformers in modeling continuous dynamical systems is that they are fundamentally discrete time and space models and thus have no guarantees regarding continuous sampling. To address this challenge, we present the Continuous Spatiotemporal Transformer (CST), a new transformer architecture that is designed for modeling of continuous systems. This new framework guarantees a continuous and smooth output via optimization in Sobolev space. We benchmark CST against traditional transformers as well as other spatiotemporal dynamics modeling methods and achieve superior performance in a number of tasks on synthetic and real systems, including learning brain dynamics from calcium imaging data.
APA
De Oliveira Fonseca, A.H., Zappala, E., Ortega Caro, J. & Dijk, D.V. (2023). Continuous Spatiotemporal Transformer. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:7343-7365. Available from https://proceedings.mlr.press/v202/de-oliveira-fonseca23a.html.