"Hey, that’s not an ODE": Faster ODE Adjoints via Seminorms

Patrick Kidger, Ricky T. Q. Chen, Terry J Lyons
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:5443-5452, 2021.

Abstract

Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step is accepted if its error, \emph{relative to some norm}, is sufficiently small; else it is rejected, the step is shrunk, and the process is repeated. Here, we demonstrate that the particular structure of the adjoint equations makes the usual choices of norm (such as $L^2$) unnecessarily stringent. By replacing it with a more appropriate (semi)norm, fewer steps are unnecessarily rejected and the backpropagation is made faster. This requires only minor code modifications. Experiments on a wide range of tasks—including time series, generative modeling, and physical control—demonstrate a median improvement of 40% fewer function evaluations. On some problems we see as much as 62% fewer function evaluations, so that the overall training time is roughly halved.
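The step-acceptance rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the split of the augmented adjoint state into leading "state and adjoint" channels followed by parameter-gradient channels, and the `num_state` cutoff, are assumptions made for the sake of the example.

```python
import math

def rms_norm(x):
    # Standard scaled RMS norm used by adaptive step-size controllers.
    return math.sqrt(sum(v * v for v in x) / len(x))

def make_seminorm(num_state):
    # Seminorm for the adjoint system: measure local error only on the
    # state/adjoint channels, ignoring the parameter-gradient channels.
    # (Illustrative split: errors in the parameter-gradient channels do
    # not feed back into the dynamics, so they need not drive step
    # rejection.)
    def seminorm(x):
        head = x[:num_state]
        return math.sqrt(sum(v * v for v in head) / len(head))
    return seminorm

def accept_step(error, y, norm, rtol=1e-6, atol=1e-9):
    # Accept the proposed step iff the scaled error norm is at most 1;
    # otherwise the solver shrinks the step and retries.
    scaled = [e / (atol + rtol * abs(yi)) for e, yi in zip(error, y)]
    return norm(scaled) <= 1.0
```

With error concentrated in the parameter-gradient channels, the seminorm accepts a step that the full RMS norm would reject, which is the source of the reported savings in function evaluations. In the authors' torchdiffeq library this choice is reportedly exposed via `odeint_adjoint(..., adjoint_options=dict(norm="seminorm"))`, though that exact API should be checked against the library's documentation.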

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-kidger21a,
  title     = {"Hey, that’s not an ODE": Faster ODE Adjoints via Seminorms},
  author    = {Kidger, Patrick and Chen, Ricky T. Q. and Lyons, Terry J},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {5443--5452},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/kidger21a/kidger21a.pdf},
  url       = {https://proceedings.mlr.press/v139/kidger21a.html},
  abstract  = {Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step is accepted if its error, \emph{relative to some norm}, is sufficiently small; else it is rejected, the step is shrunk, and the process is repeated. Here, we demonstrate that the particular structure of the adjoint equations makes the usual choices of norm (such as $L^2$) unnecessarily stringent. By replacing it with a more appropriate (semi)norm, fewer steps are unnecessarily rejected and the backpropagation is made faster. This requires only minor code modifications. Experiments on a wide range of tasks—including time series, generative modeling, and physical control—demonstrate a median improvement of 40% fewer function evaluations. On some problems we see as much as 62% fewer function evaluations, so that the overall training time is roughly halved.}
}
Endnote
%0 Conference Paper
%T "Hey, that’s not an ODE": Faster ODE Adjoints via Seminorms
%A Patrick Kidger
%A Ricky T. Q. Chen
%A Terry J Lyons
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-kidger21a
%I PMLR
%P 5443--5452
%U https://proceedings.mlr.press/v139/kidger21a.html
%V 139
%X Neural differential equations may be trained by backpropagating gradients via the adjoint method, which is another differential equation typically solved using an adaptive-step-size numerical differential equation solver. A proposed step is accepted if its error, \emph{relative to some norm}, is sufficiently small; else it is rejected, the step is shrunk, and the process is repeated. Here, we demonstrate that the particular structure of the adjoint equations makes the usual choices of norm (such as $L^2$) unnecessarily stringent. By replacing it with a more appropriate (semi)norm, fewer steps are unnecessarily rejected and the backpropagation is made faster. This requires only minor code modifications. Experiments on a wide range of tasks—including time series, generative modeling, and physical control—demonstrate a median improvement of 40% fewer function evaluations. On some problems we see as much as 62% fewer function evaluations, so that the overall training time is roughly halved.
APA
Kidger, P., Chen, R.T.Q. & Lyons, T.J. (2021). "Hey, that’s not an ODE": Faster ODE Adjoints via Seminorms. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:5443-5452. Available from https://proceedings.mlr.press/v139/kidger21a.html.