How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization

Chris Finlay, Joern-Henrik Jacobsen, Levon Nurbekyan, Adam Oberman
Proceedings of the 37th International Conference on Machine Learning, PMLR 119:3154-3164, 2020.

Abstract

Training neural ODEs on large datasets has not been tractable due to the necessity of allowing the adaptive numerical ODE solver to refine its step size to very small values. In practice this leads to dynamics equivalent to many hundreds or even thousands of layers. In this paper, we overcome this apparent difficulty by introducing a theoretically-grounded combination of both optimal transport and stability regularizations which encourage neural ODEs to prefer simpler dynamics out of all the dynamics that solve a problem well. Simpler dynamics lead to faster convergence and to fewer discretizations of the solver, considerably decreasing wall-clock time without loss in performance. Our approach allows us to train neural ODE-based generative models to the same performance as the unregularized dynamics, with significant reductions in training time. This brings neural ODEs closer to practical relevance in large-scale applications.
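The abstract's two penalties have simple closed forms: the optimal-transport term is the kinetic energy ||f(z,t)||^2 of the learned dynamics f, and the stability term is the squared Frobenius norm ||∇_z f(z,t)||_F^2 of its Jacobian, which can be estimated with a single vector-Jacobian product via the Hutchinson trace estimator, E_eps[||eps^T ∇_z f||^2] = ||∇_z f||_F^2 for noise eps with identity covariance. Below is a minimal PyTorch sketch of one dynamics evaluation that accumulates both penalties, assuming a batch-first state tensor; the function name and signature are illustrative, not the authors' released code.

import torch

def dynamics_with_regularizers(f, z, t, eps):
    # f: ODE right-hand side, dz/dt = f(z, t)
    # z: (batch, dim) state
    # eps: fixed noise with identity covariance, e.g. torch.randn_like(z),
    #      drawn once per trajectory (hypothetical helper, for illustration)
    with torch.enable_grad():
        # detach so the sketch runs standalone; inside an adjoint-based
        # solver the state is reconstructed and this is the usual pattern
        z = z.detach().requires_grad_(True)
        dz = f(z, t)
        # eps^T (df/dz) in one backward pass (vector-Jacobian product)
        vjp = torch.autograd.grad(dz, z, grad_outputs=eps, create_graph=True)[0]
        kinetic = (dz ** 2).flatten(1).sum(dim=1)       # ||f(z, t)||^2 per sample
        frobenius = (vjp ** 2).flatten(1).sum(dim=1)    # estimates ||df/dz||_F^2
    return dz, kinetic, frobenius

# Toy usage: integrate both penalties alongside the state, then add their
# time-integrals, each with its own weight, to the training loss.
z = torch.randn(8, 2)
eps = torch.randn_like(z)
dz, k, fro = dynamics_with_regularizers(lambda z, t: torch.tanh(z), z, 0.0, eps)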

Cite this Paper


BibTeX
@InProceedings{pmlr-v119-finlay20a,
  title     = {How to Train Your Neural {ODE}: the World of {J}acobian and Kinetic Regularization},
  author    = {Finlay, Chris and Jacobsen, Joern-Henrik and Nurbekyan, Levon and Oberman, Adam},
  booktitle = {Proceedings of the 37th International Conference on Machine Learning},
  pages     = {3154--3164},
  year      = {2020},
  editor    = {III, Hal Daumé and Singh, Aarti},
  volume    = {119},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--18 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v119/finlay20a/finlay20a.pdf},
  url       = {https://proceedings.mlr.press/v119/finlay20a.html}
}
Endnote
%0 Conference Paper
%T How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization
%A Chris Finlay
%A Joern-Henrik Jacobsen
%A Levon Nurbekyan
%A Adam Oberman
%B Proceedings of the 37th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2020
%E Hal Daumé III
%E Aarti Singh
%F pmlr-v119-finlay20a
%I PMLR
%P 3154--3164
%U https://proceedings.mlr.press/v119/finlay20a.html
%V 119
APA
Finlay, C., Jacobsen, J., Nurbekyan, L. & Oberman, A. (2020). How to Train Your Neural ODE: the World of Jacobian and Kinetic Regularization. Proceedings of the 37th International Conference on Machine Learning, in Proceedings of Machine Learning Research 119:3154-3164. Available from https://proceedings.mlr.press/v119/finlay20a.html.