Matching Normalizing Flows and Probability Paths on Manifolds

Heli Ben-Hamu, Samuel Cohen, Joey Bose, Brandon Amos, Maximillian Nickel, Aditya Grover, Ricky T. Q. Chen, Yaron Lipman
Proceedings of the 39th International Conference on Machine Learning, PMLR 162:1749-1763, 2022.

Abstract

Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs on manifolds by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path. PPD is formulated using a logarithmic mass conservation formula, a linear first-order partial differential equation relating the log target probabilities and the CNF’s defining vector field. PPD has several key benefits over existing methods: it sidesteps the need to solve an ODE per iteration, readily applies to manifold data, scales to high dimensions, and is compatible with a large family of target paths interpolating pure noise and data in finite time. Theoretically, PPD is shown to bound classical probability divergences. Empirically, we show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks, and provide the first example of a generative model that scales to moderately high-dimensional manifolds.
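For orientation, here is a minimal sketch of the logarithmic mass conservation formula referenced in the abstract, assuming only the standard continuity equation for a density path p_t transported by a vector field u_t (the precise manifold formulation and the definition of PPD are given in the paper). Dividing the continuity equation $\partial_t p_t + \operatorname{div}(p_t u_t) = 0$ by $p_t$ yields a PDE that is linear and first order in $\log p_t$:

\[
\partial_t \log p_t(x) + \operatorname{div}\, u_t(x) + \big\langle \nabla \log p_t(x),\, u_t(x) \big\rangle = 0,
\]

where, on a Riemannian manifold, $\operatorname{div}$ and $\nabla$ denote the Riemannian divergence and gradient. Loosely speaking, PPD penalizes the extent to which the CNF’s vector field fails to satisfy this relation along the target probability path, which is consistent with training without solving an ODE at every iteration.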

Cite this Paper


BibTeX
@InProceedings{pmlr-v162-ben-hamu22a,
  title     = {Matching Normalizing Flows and Probability Paths on Manifolds},
  author    = {Ben-Hamu, Heli and Cohen, Samuel and Bose, Joey and Amos, Brandon and Nickel, Maximillian and Grover, Aditya and Chen, Ricky T. Q. and Lipman, Yaron},
  booktitle = {Proceedings of the 39th International Conference on Machine Learning},
  pages     = {1749--1763},
  year      = {2022},
  editor    = {Chaudhuri, Kamalika and Jegelka, Stefanie and Song, Le and Szepesvari, Csaba and Niu, Gang and Sabato, Sivan},
  volume    = {162},
  series    = {Proceedings of Machine Learning Research},
  month     = {17--23 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v162/ben-hamu22a/ben-hamu22a.pdf},
  url       = {https://proceedings.mlr.press/v162/ben-hamu22a.html},
  abstract  = {Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs on manifolds by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path. PPD is formulated using a logarithmic mass conservation formula which is a linear first order partial differential equation relating the log target probabilities and the CNF’s defining vector field. PPD has several key benefits over existing methods: it sidesteps the need to solve an ODE per iteration, readily applies to manifold data, scales to high dimensions, and is compatible with a large family of target paths interpolating pure noise and data in finite time. Theoretically, PPD is shown to bound classical probability divergences. Empirically, we show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks, and is the first example of a generative model to scale to moderately high dimensional manifolds.}
}
Endnote
%0 Conference Paper
%T Matching Normalizing Flows and Probability Paths on Manifolds
%A Heli Ben-Hamu
%A Samuel Cohen
%A Joey Bose
%A Brandon Amos
%A Maximillian Nickel
%A Aditya Grover
%A Ricky T. Q. Chen
%A Yaron Lipman
%B Proceedings of the 39th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2022
%E Kamalika Chaudhuri
%E Stefanie Jegelka
%E Le Song
%E Csaba Szepesvari
%E Gang Niu
%E Sivan Sabato
%F pmlr-v162-ben-hamu22a
%I PMLR
%P 1749--1763
%U https://proceedings.mlr.press/v162/ben-hamu22a.html
%V 162
%X Continuous Normalizing Flows (CNFs) are a class of generative models that transform a prior distribution to a model distribution by solving an ordinary differential equation (ODE). We propose to train CNFs on manifolds by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path. PPD is formulated using a logarithmic mass conservation formula which is a linear first order partial differential equation relating the log target probabilities and the CNF’s defining vector field. PPD has several key benefits over existing methods: it sidesteps the need to solve an ODE per iteration, readily applies to manifold data, scales to high dimensions, and is compatible with a large family of target paths interpolating pure noise and data in finite time. Theoretically, PPD is shown to bound classical probability divergences. Empirically, we show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks, and is the first example of a generative model to scale to moderately high dimensional manifolds.
APA
Ben-Hamu, H., Cohen, S., Bose, J., Amos, B., Nickel, M., Grover, A., Chen, R.T.Q. & Lipman, Y. (2022). Matching Normalizing Flows and Probability Paths on Manifolds. Proceedings of the 39th International Conference on Machine Learning, in Proceedings of Machine Learning Research 162:1749-1763. Available from https://proceedings.mlr.press/v162/ben-hamu22a.html.