A unified framework for Hamiltonian deep neural networks

Clara Lucía Galimberti, Liang Xu, Giancarlo Ferrari Trecate
Proceedings of the 3rd Conference on Learning for Dynamics and Control, PMLR 144:275-286, 2021.

Abstract

Training deep neural networks (DNNs) can be difficult due to the occurrence of vanishing/exploding gradients during weight optimization. To avoid this problem, we propose a class of DNNs stemming from the time discretization of Hamiltonian systems. The time-invariant version of Hamiltonian models enjoys marginal stability, a property that, as shown in previous studies, can eliminate convergence to zero or divergence of gradients. In the present paper, we formally show this feature by deriving and analysing the backward gradient dynamics in continuous time. The proposed Hamiltonian framework, besides encompassing existing networks inspired by marginally stable ODEs, allows one to derive new and more expressive architectures. The good performance of the novel DNNs is demonstrated on benchmark classification problems, including digit recognition using the MNIST dataset.
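The core idea in the abstract can be sketched concretely: a layer is one forward-Euler step of a Hamiltonian ODE ẏ = J ∇H(y), whose flow is marginally stable. The following minimal NumPy sketch is illustrative only (not the authors' code); the choice H(y) = 1ᵀ log cosh(Ky + b), giving ∇H(y) = Kᵀ tanh(Ky + b), and the canonical symplectic matrix J are assumptions for the example.

```python
import numpy as np

def hamiltonian_layer(y, K, b, h):
    # One forward-Euler step of y' = J * grad H(y), with
    # H(y) = sum(log cosh(K y + b)), so grad H(y) = K^T tanh(K y + b).
    # J is the canonical symplectic matrix [[0, I], [-I, 0]].
    n = y.shape[0] // 2
    J = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-np.eye(n), np.zeros((n, n))]])
    return y + h * J @ K.T @ np.tanh(K @ y + b)

rng = np.random.default_rng(0)
n = 2
y = rng.standard_normal(2 * n)
for _ in range(8):  # 8 "layers" = 8 Euler steps with per-layer weights
    K = rng.standard_normal((2 * n, 2 * n))
    b = rng.standard_normal(2 * n)
    y = hamiltonian_layer(y, K, b, 0.1)
print(y.shape)  # (4,)
```

Because J is skew-symmetric, the continuous-time flow neither contracts nor expands the state exponentially, which is the marginal-stability property the paper exploits to keep backward gradients from vanishing or exploding.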

Cite this Paper


BibTeX
@InProceedings{pmlr-v144-galimberti21a,
  title = {A unified framework for Hamiltonian deep neural networks},
  author = {Galimberti, Clara Luc\'{i}a and Xu, Liang and Ferrari Trecate, Giancarlo},
  booktitle = {Proceedings of the 3rd Conference on Learning for Dynamics and Control},
  pages = {275--286},
  year = {2021},
  editor = {Jadbabaie, Ali and Lygeros, John and Pappas, George J. and Parrilo, Pablo A. and Recht, Benjamin and Tomlin, Claire J. and Zeilinger, Melanie N.},
  volume = {144},
  series = {Proceedings of Machine Learning Research},
  month = {07--08 June},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v144/galimberti21a/galimberti21a.pdf},
  url = {https://proceedings.mlr.press/v144/galimberti21a.html},
  abstract = {Training deep neural networks (DNNs) can be difficult due to the occurrence of vanishing/exploding gradients during weight optimization. To avoid this problem, we propose a class of DNNs stemming from the time discretization of Hamiltonian systems. The time-invariant version of Hamiltonian models enjoys marginal stability, a property that, as shown in previous studies, can eliminate convergence to zero or divergence of gradients. In the present paper, we formally show this feature by deriving and analysing the backward gradient dynamics in continuous time. The proposed Hamiltonian framework, besides encompassing existing networks inspired by marginally stable ODEs, allows one to derive new and more expressive architectures. The good performance of the novel DNNs is demonstrated on benchmark classification problems, including digit recognition using the MNIST dataset.}
}
Endnote
%0 Conference Paper
%T A unified framework for Hamiltonian deep neural networks
%A Clara Lucía Galimberti
%A Liang Xu
%A Giancarlo Ferrari Trecate
%B Proceedings of the 3rd Conference on Learning for Dynamics and Control
%C Proceedings of Machine Learning Research
%D 2021
%E Ali Jadbabaie
%E John Lygeros
%E George J. Pappas
%E Pablo A. Parrilo
%E Benjamin Recht
%E Claire J. Tomlin
%E Melanie N. Zeilinger
%F pmlr-v144-galimberti21a
%I PMLR
%P 275--286
%U https://proceedings.mlr.press/v144/galimberti21a.html
%V 144
%X Training deep neural networks (DNNs) can be difficult due to the occurrence of vanishing/exploding gradients during weight optimization. To avoid this problem, we propose a class of DNNs stemming from the time discretization of Hamiltonian systems. The time-invariant version of Hamiltonian models enjoys marginal stability, a property that, as shown in previous studies, can eliminate convergence to zero or divergence of gradients. In the present paper, we formally show this feature by deriving and analysing the backward gradient dynamics in continuous time. The proposed Hamiltonian framework, besides encompassing existing networks inspired by marginally stable ODEs, allows one to derive new and more expressive architectures. The good performance of the novel DNNs is demonstrated on benchmark classification problems, including digit recognition using the MNIST dataset.
APA
Galimberti, C.L., Xu, L. & Ferrari Trecate, G. (2021). A unified framework for Hamiltonian deep neural networks. Proceedings of the 3rd Conference on Learning for Dynamics and Control, in Proceedings of Machine Learning Research 144:275-286. Available from https://proceedings.mlr.press/v144/galimberti21a.html.