A Unified Framework for Neural Computation and Learning Over Time

Stefano Melacci, Alessandro Betti, Michele Casoni, Tommaso Guidi, Matteo Tiezzi, Marco Gori
Proceedings of the 2nd ECAI Workshop on "Machine Learning Meets Differential Equations: From Theory to Applications", PMLR 277:71-95, 2025.

Abstract

This paper proposes Hamiltonian Learning, a novel unified framework for learning with neural networks "over time", i.e., from a possibly infinite stream of data, in an online manner, without access to future information. The problem of learning over time is rethought from scratch, leveraging tools from optimal control theory, which yield a unifying view of the temporal dynamics of neural computations and learning. Hamiltonian Learning is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives. The proposed framework is showcased by experimentally demonstrating how it can recover gradient-based learning, comparing it to out-of-the-box optimizers, and describing how it is flexible enough to switch from fully-local to partially/non-local computational schemes, possibly distributed over multiple devices, and to perform BackPropagation without storing activations. Hamiltonian Learning is easy to implement and can help researchers approach the problem of learning over time in a principled and innovative manner.
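
As a concrete reading of claims (i) and (ii) above, consider the simplest instance one can write down: a parameter gradient flow dw/dt = -grad L(w) driven by a stream of samples and integrated with an explicit forward-Euler step, which recovers plain online gradient descent without invoking any external ODE solver. The sketch below is only an illustration under that assumption; it is not the paper's implementation (the paper's actual equations come from an optimal-control formulation and also involve costate variables), and the model, loss, and stream are hypothetical stand-ins.

    # Minimal sketch, NOT the paper's implementation: learning over time as an
    # ODE integrated by hand. Assumes the simplest gradient-flow instance,
    #     dw/dt = -grad L(w; sample arriving at time t),
    # discretized with forward Euler, which reduces to online gradient descent.
    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(2)     # hypothetical parameters of a linear model
    dt = 0.05           # Euler step size; plays the role of the learning rate

    def grad_loss(w, x, y):
        # Gradient of the squared loss (w.x - y)^2 for a linear model.
        return 2.0 * (w @ x - y) * x

    # Possibly infinite stream: one sample per step, no access to future data.
    for t in range(10_000):
        x = rng.normal(size=2)
        y = x @ np.array([1.5, -0.5])    # hidden target weights (synthetic stream)
        w = w - dt * grad_loss(w, x, y)  # one forward-Euler step; no ODE solver

    print(w)  # converges toward [1.5, -0.5]

Here dt doubles as the learning rate, which is exactly the sense in which the Euler-discretized flow reduces to standard gradient-based updates.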

Cite this Paper

BibTeX
@InProceedings{pmlr-v277-melacci25a,
  title     = {A Unified Framework for Neural Computation and Learning Over Time},
  author    = {Melacci, Stefano and Betti, Alessandro and Casoni, Michele and Guidi, Tommaso and Tiezzi, Matteo and Gori, Marco},
  booktitle = {Proceedings of the 2nd ECAI Workshop on "Machine Learning Meets Differential Equations: From Theory to Applications"},
  pages     = {71--95},
  year      = {2025},
  editor    = {Coelho, Cecília and Zimmering, Bernd and Costa, M. Fernanda P. and Ferrás, Luís L. and Niggemann, Oliver},
  volume    = {277},
  series    = {Proceedings of Machine Learning Research},
  month     = {26 Oct},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v277/main/assets/melacci25a/melacci25a.pdf},
  url       = {https://proceedings.mlr.press/v277/melacci25a.html},
  abstract  = {This paper proposes Hamiltonian Learning, a novel unified framework for learning with neural networks "over time", i.e., from a possibly infinite stream of data, in an online manner, without access to future information. The problem of learning over time is rethought from scratch, leveraging tools from optimal control theory, which yield a unifying view of the temporal dynamics of neural computations and learning. Hamiltonian Learning is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives. The proposed framework is showcased by experimentally demonstrating how it can recover gradient-based learning, comparing it to out-of-the-box optimizers, and describing how it is flexible enough to switch from fully-local to partially/non-local computational schemes, possibly distributed over multiple devices, and to perform BackPropagation without storing activations. Hamiltonian Learning is easy to implement and can help researchers approach the problem of learning over time in a principled and innovative manner.}
}
EndNote
%0 Conference Paper
%T A Unified Framework for Neural Computation and Learning Over Time
%A Stefano Melacci
%A Alessandro Betti
%A Michele Casoni
%A Tommaso Guidi
%A Matteo Tiezzi
%A Marco Gori
%B Proceedings of the 2nd ECAI Workshop on "Machine Learning Meets Differential Equations: From Theory to Applications"
%C Proceedings of Machine Learning Research
%D 2025
%E Cecília Coelho
%E Bernd Zimmering
%E M. Fernanda P. Costa
%E Luís L. Ferrás
%E Oliver Niggemann
%F pmlr-v277-melacci25a
%I PMLR
%P 71--95
%U https://proceedings.mlr.press/v277/melacci25a.html
%V 277
%X This paper proposes Hamiltonian Learning, a novel unified framework for learning with neural networks "over time", i.e., from a possibly infinite stream of data, in an online manner, without access to future information. The problem of learning over time is rethought from scratch, leveraging tools from optimal control theory, which yield a unifying view of the temporal dynamics of neural computations and learning. Hamiltonian Learning is based on differential equations that: (i) can be integrated without the need for external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open up novel perspectives. The proposed framework is showcased by experimentally demonstrating how it can recover gradient-based learning, comparing it to out-of-the-box optimizers, and describing how it is flexible enough to switch from fully-local to partially/non-local computational schemes, possibly distributed over multiple devices, and to perform BackPropagation without storing activations. Hamiltonian Learning is easy to implement and can help researchers approach the problem of learning over time in a principled and innovative manner.
APA
Melacci, S., Betti, A., Casoni, M., Guidi, T., Tiezzi, M., & Gori, M. (2025). A Unified Framework for Neural Computation and Learning Over Time. Proceedings of the 2nd ECAI Workshop on "Machine Learning Meets Differential Equations: From Theory to Applications", in Proceedings of Machine Learning Research 277:71-95. Available from https://proceedings.mlr.press/v277/melacci25a.html.
