Stochastic continuous normalizing flows: training SDEs as ODEs

Liam Hodgkinson, Chris van der Heide, Fred Roosta, Michael W. Mahoney
Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, PMLR 161:1130-1140, 2021.

Abstract

We provide a general theoretical framework for stochastic continuous normalizing flows, an extension of continuous normalizing flows for density estimation of stochastic differential equations (SDEs). Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated. Doing so enables the treatment of SDEs as random ordinary differential equations, which can be trained using existing techniques. For scalar loss functions, this approach naturally recovers the stochastic adjoint method of Li et al. [2020] for training neural SDEs, while supporting a more flexible class of approximations.
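The core idea in the abstract — sampling the Brownian motion as a latent variable, approximating it, and then integrating the SDE as a random ODE with an ordinary solver — can be sketched with a piecewise-linear (Wong–Zakai) path approximation. This is a minimal illustrative example, not the paper's implementation: the drift `f`, diffusion `g`, grid sizes, and initial condition are all assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar SDE dX = f(X) dt + g(X) dW (hypothetical choices, not from the paper).
f = lambda x: -x      # drift
g = lambda x: 0.5     # constant diffusion

T, n_coarse, n_fine = 1.0, 32, 32 * 64
t_coarse = np.linspace(0.0, T, n_coarse + 1)

# Sample the Brownian motion on a coarse grid: this is the latent variable.
dW = rng.normal(0.0, np.sqrt(T / n_coarse), size=n_coarse)
W_coarse = np.concatenate([[0.0], np.cumsum(dW)])

# Piecewise-linear (Wong--Zakai) approximation of the Brownian path.
t_fine = np.linspace(0.0, T, n_fine + 1)
W_hat = np.interp(t_fine, t_coarse, W_coarse)

# With W replaced by the smooth W_hat, the SDE becomes a random ODE
#   dx/dt = f(x) + g(x) * dW_hat/dt,
# which any ODE method can integrate; explicit Euler is used here.
x = 1.0
h = T / n_fine
for k in range(n_fine):
    dWdt = (W_hat[k + 1] - W_hat[k]) / h
    x = x + h * (f(x) + g(x) * dWdt)

print(x)
```

Because the approximated path is an ordinary (random) function of time, the resulting ODE can be differentiated and trained with standard adjoint or backpropagation machinery, which is the mechanism the abstract describes.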

Cite this Paper

BibTeX
@InProceedings{pmlr-v161-hodgkinson21a,
  title = {Stochastic continuous normalizing flows: training {SDEs} as {ODEs}},
  author = {Hodgkinson, Liam and van der Heide, Chris and Roosta, Fred and Mahoney, Michael W.},
  booktitle = {Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence},
  pages = {1130--1140},
  year = {2021},
  editor = {de Campos, Cassio and Maathuis, Marloes H.},
  volume = {161},
  series = {Proceedings of Machine Learning Research},
  month = {27--30 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v161/hodgkinson21a/hodgkinson21a.pdf},
  url = {https://proceedings.mlr.press/v161/hodgkinson21a.html},
  abstract = {We provide a general theoretical framework for stochastic continuous normalizing flows, an extension of continuous normalizing flows for density estimation of stochastic differential equations (SDEs). Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated. Doing so enables the treatment of SDEs as random ordinary differential equations, which can be trained using existing techniques. For scalar loss functions, this approach naturally recovers the stochastic adjoint method of Li et al. [2020] for training neural SDEs, while supporting a more flexible class of approximations.}
}
Endnote
%0 Conference Paper
%T Stochastic continuous normalizing flows: training SDEs as ODEs
%A Liam Hodgkinson
%A Chris van der Heide
%A Fred Roosta
%A Michael W. Mahoney
%B Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence
%C Proceedings of Machine Learning Research
%D 2021
%E Cassio de Campos
%E Marloes H. Maathuis
%F pmlr-v161-hodgkinson21a
%I PMLR
%P 1130--1140
%U https://proceedings.mlr.press/v161/hodgkinson21a.html
%V 161
%X We provide a general theoretical framework for stochastic continuous normalizing flows, an extension of continuous normalizing flows for density estimation of stochastic differential equations (SDEs). Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated. Doing so enables the treatment of SDEs as random ordinary differential equations, which can be trained using existing techniques. For scalar loss functions, this approach naturally recovers the stochastic adjoint method of Li et al. [2020] for training neural SDEs, while supporting a more flexible class of approximations.
APA
Hodgkinson, L., van der Heide, C., Roosta, F. & Mahoney, M.W. (2021). Stochastic continuous normalizing flows: training SDEs as ODEs. Proceedings of the Thirty-Seventh Conference on Uncertainty in Artificial Intelligence, in Proceedings of Machine Learning Research 161:1130-1140. Available from https://proceedings.mlr.press/v161/hodgkinson21a.html.