Underdamped Langevin MCMC: A non-asymptotic analysis

Xiang Cheng, Niladri S. Chatterji, Peter L. Bartlett, Michael I. Jordan
Proceedings of the 31st Conference On Learning Theory, PMLR 75:300-323, 2018.

Abstract

We study the underdamped Langevin diffusion when the log of the target distribution is smooth and strongly concave. We present an MCMC algorithm based on its discretization and show that it achieves $\varepsilon$ error (in 2-Wasserstein distance) in $\mathcal{O}(\sqrt{d}/\varepsilon)$ steps. This is a significant improvement over the best known rate for overdamped Langevin MCMC, which is $\mathcal{O}(d/\varepsilon^2)$ steps under the same smoothness/concavity assumptions. The underdamped Langevin MCMC scheme can be viewed as a version of Hamiltonian Monte Carlo (HMC), which has been observed to outperform overdamped Langevin MCMC methods in a number of application areas. We provide quantitative rates that support this empirical wisdom.
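The abstract describes sampling by discretizing the underdamped Langevin SDE, $dv_t = -(\gamma v_t + u \nabla f(x_t))\,dt + \sqrt{2\gamma u}\,dB_t$, $dx_t = v_t\,dt$, whose stationary $x$-marginal is proportional to $e^{-f(x)}$ when $u = 1$. As a rough illustration of this idea (not the paper's integrator, which handles the Ornstein-Uhlenbeck part of the dynamics exactly), here is a crude Euler-Maruyama sketch; the function name, parameter defaults, and step rule are illustrative assumptions, not from the paper:

```python
import numpy as np

def underdamped_langevin(grad_f, x0, gamma=2.0, u=1.0,
                         step=0.01, n_steps=50000, rng=None):
    """Illustrative Euler-Maruyama discretization of underdamped
    Langevin dynamics targeting a density proportional to exp(-f(x)).

    NOTE: this is a sketch only; the paper analyzes a sharper
    discretization that integrates the linear (friction/noise)
    part of the SDE exactly.
    """
    rng = np.random.default_rng(rng)
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    samples = []
    for _ in range(n_steps):
        # Velocity update: friction, gradient force, Brownian kick.
        v = (v - step * (gamma * v + u * grad_f(x))
             + np.sqrt(2.0 * gamma * u * step)
             * rng.standard_normal(x.shape))
        # Position update driven by the current velocity.
        x = x + step * v
        samples.append(x.copy())
    return np.array(samples)
```

For example, with `grad_f = lambda x: x` (so $f(x) = \|x\|^2/2$, a standard Gaussian target) the empirical mean and variance of the chain, after burn-in, should be close to 0 and 1 respectively.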

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-cheng18a,
  title =     {Underdamped Langevin MCMC: A non-asymptotic analysis},
  author =    {Cheng, Xiang and Chatterji, Niladri S. and Bartlett, Peter L. and Jordan, Michael I.},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages =     {300--323},
  year =      {2018},
  editor =    {Bubeck, Sébastien and Perchet, Vianney and Rigollet, Philippe},
  volume =    {75},
  series =    {Proceedings of Machine Learning Research},
  month =     {06--09 Jul},
  publisher = {PMLR},
  pdf =       {http://proceedings.mlr.press/v75/cheng18a/cheng18a.pdf},
  url =       {https://proceedings.mlr.press/v75/cheng18a.html},
  abstract =  {We study the underdamped Langevin diffusion when the log of the target distribution is smooth and strongly concave. We present a MCMC algorithm based on its discretization and show that it achieves $\varepsilon$ error (in 2-Wasserstein distance) in $\mathcal{O}(\sqrt{d}/\varepsilon)$ steps. This is a significant improvement over the best known rate for overdamped Langevin MCMC, which is $\mathcal{O}(d/\varepsilon^2)$ steps under the same smoothness/concavity assumptions. The underdamped Langevin MCMC scheme can be viewed as a version of Hamiltonian Monte Carlo (HMC) which has been observed to outperform overdamped Langevin MCMC methods in a number of application areas. We provide quantitative rates that support this empirical wisdom.}
}
Endnote
%0 Conference Paper
%T Underdamped Langevin MCMC: A non-asymptotic analysis
%A Xiang Cheng
%A Niladri S. Chatterji
%A Peter L. Bartlett
%A Michael I. Jordan
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-cheng18a
%I PMLR
%P 300--323
%U https://proceedings.mlr.press/v75/cheng18a.html
%V 75
%X We study the underdamped Langevin diffusion when the log of the target distribution is smooth and strongly concave. We present a MCMC algorithm based on its discretization and show that it achieves $\varepsilon$ error (in 2-Wasserstein distance) in $\mathcal{O}(\sqrt{d}/\varepsilon)$ steps. This is a significant improvement over the best known rate for overdamped Langevin MCMC, which is $\mathcal{O}(d/\varepsilon^2)$ steps under the same smoothness/concavity assumptions. The underdamped Langevin MCMC scheme can be viewed as a version of Hamiltonian Monte Carlo (HMC) which has been observed to outperform overdamped Langevin MCMC methods in a number of application areas. We provide quantitative rates that support this empirical wisdom.
APA
Cheng, X., Chatterji, N.S., Bartlett, P.L. &amp; Jordan, M.I. (2018). Underdamped Langevin MCMC: A non-asymptotic analysis. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:300-323. Available from https://proceedings.mlr.press/v75/cheng18a.html.