Langevin Monte Carlo and JKO splitting

Espen Bernton
Proceedings of the 31st Conference On Learning Theory, PMLR 75:1777-1798, 2018.

Abstract

Algorithms based on discretizing Langevin diffusion are popular tools for sampling from high-dimensional distributions. We develop novel connections between such Monte Carlo algorithms, the theory of Wasserstein gradient flow, and the operator splitting approach to solving PDEs. In particular, we show that a proximal version of the Unadjusted Langevin Algorithm corresponds to a scheme that alternates between solving the gradient flows of two specific functionals on the space of probability measures. Using this perspective, we derive some new non-asymptotic results on the convergence properties of this algorithm.
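The alternating scheme described in the abstract can be sketched on a toy one-dimensional example. Assuming a standard Gaussian target π(x) ∝ exp(−U(x)) with U(x) = x²/2, the proximal map prox_{hU}(y) = y/(1+h) is available in closed form; the function names, step size, and iteration counts below are illustrative choices, not taken from the paper.

```python
import math
import random

# Toy sketch of the splitting view of a proximal Langevin scheme:
# alternate (i) an exact heat-flow step (Gaussian noise), i.e. the
# gradient flow of the entropy, and (ii) a proximal (implicit) step
# for the potential energy U.  Target: pi(x) ∝ exp(-U(x)) with
# U(x) = x^2/2, so prox_{hU}(y) = y/(1+h) in closed form.

def prox_quadratic(y, h):
    """Proximal map of U(x) = x^2/2: argmin_x U(x) + (x - y)^2 / (2h)."""
    return y / (1.0 + h)

def proximal_langevin(n_steps, h, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        # (i) heat-flow step: x <- x + sqrt(2h) * standard normal
        x = x + math.sqrt(2.0 * h) * rng.gauss(0.0, 1.0)
        # (ii) proximal step on the potential energy
        x = prox_quadratic(x, h)
        samples.append(x)
    return samples

samples = proximal_langevin(n_steps=200_000, h=0.1)
burned = samples[1000:]  # discard burn-in
mean = sum(burned) / len(burned)
var = sum((s - mean) ** 2 for s in burned) / len(burned)
# mean should be near 0 and variance near 1, up to O(h) discretization bias
```

For this quadratic potential the chain's invariant variance can be computed exactly as 2/(2 + h), so the empirical variance sits slightly below 1 for any positive step size h, consistent with the non-asymptotic bias the paper analyzes.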

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-bernton18a,
  title     = {{L}angevin {M}onte {C}arlo and {JKO} splitting},
  author    = {Bernton, Espen},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {1777--1798},
  year      = {2018},
  editor    = {Bubeck, S{\'e}bastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/bernton18a/bernton18a.pdf},
  url       = {https://proceedings.mlr.press/v75/bernton18a.html},
  abstract  = {Algorithms based on discretizing Langevin diffusion are popular tools for sampling from high-dimensional distributions. We develop novel connections between such Monte Carlo algorithms, the theory of Wasserstein gradient flow, and the operator splitting approach to solving PDEs. In particular, we show that a proximal version of the Unadjusted Langevin Algorithm corresponds to a scheme that alternates between solving the gradient flows of two specific functionals on the space of probability measures. Using this perspective, we derive some new non-asymptotic results on the convergence properties of this algorithm.}
}
Endnote
%0 Conference Paper
%T Langevin Monte Carlo and JKO splitting
%A Espen Bernton
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-bernton18a
%I PMLR
%P 1777--1798
%U https://proceedings.mlr.press/v75/bernton18a.html
%V 75
%X Algorithms based on discretizing Langevin diffusion are popular tools for sampling from high-dimensional distributions. We develop novel connections between such Monte Carlo algorithms, the theory of Wasserstein gradient flow, and the operator splitting approach to solving PDEs. In particular, we show that a proximal version of the Unadjusted Langevin Algorithm corresponds to a scheme that alternates between solving the gradient flows of two specific functionals on the space of probability measures. Using this perspective, we derive some new non-asymptotic results on the convergence properties of this algorithm.
APA
Bernton, E. (2018). Langevin Monte Carlo and JKO splitting. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:1777-1798. Available from https://proceedings.mlr.press/v75/bernton18a.html.