Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent

Arnak Dalalyan
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:678-689, 2017.

Abstract

In this paper, we revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm for sampling from a smooth and (strongly) log-concave density. We improve the existing results when the convergence is measured in the Wasserstein distance and provide further insights into the very tight relations between, on the one hand, the Langevin Monte Carlo for sampling and, on the other hand, the gradient descent for optimization. Finally, we also establish guarantees for the convergence of a version of the Langevin Monte Carlo algorithm that is based on noisy evaluations of the gradient.
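
As an illustration of the sampling-versus-optimization analogy discussed above, the short Python sketch below contrasts the two update rules. It is not taken from the paper: the quadratic potential f(theta) = ||theta||^2 / 2 (so that the target density pi is proportional to exp(-f), i.e. a standard Gaussian), the step size h, the dimension p and the number of iterations are placeholder choices made for this example. Gradient descent moves the iterate deterministically towards the minimizer of f, while the Langevin Monte Carlo step adds Gaussian noise of variance 2h so that the iterates approximately sample from pi rather than collapse to a single point.

# Illustrative sketch only (not code from the paper): one step of gradient
# descent versus one step of the Langevin Monte Carlo (LMC) algorithm for a
# target density pi(theta) proportional to exp(-f(theta)).
import numpy as np

rng = np.random.default_rng(0)
p = 2      # dimension (assumed for this example)
h = 0.1    # step size (assumed for this example)

def grad_f(theta):
    # Gradient of f(theta) = 0.5 * ||theta||^2, a simple smooth and
    # strongly log-concave potential; the target pi is the standard Gaussian.
    return theta

def gd_step(theta):
    # Gradient descent: a deterministic move towards the minimizer of f.
    return theta - h * grad_f(theta)

def lmc_step(theta):
    # Langevin Monte Carlo: the same gradient move plus Gaussian noise of
    # variance 2h, so the iterates approximately sample from pi.
    return theta - h * grad_f(theta) + np.sqrt(2.0 * h) * rng.standard_normal(p)

theta_gd = np.full(p, 5.0)
theta_lmc = np.full(p, 5.0)
for _ in range(1000):
    theta_gd = gd_step(theta_gd)
    theta_lmc = lmc_step(theta_lmc)

print("GD iterate  (close to the minimizer of f):", theta_gd)
print("LMC iterate (roughly a draw from pi):", theta_lmc)

The noisy-gradient variant mentioned at the end of the abstract would replace grad_f by an unbiased noisy estimate of the gradient; that modification is not shown in this sketch.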

Cite this Paper


BibTeX
@InProceedings{pmlr-v65-dalalyan17a,
  title     = {Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent},
  author    = {Dalalyan, Arnak},
  booktitle = {Proceedings of the 2017 Conference on Learning Theory},
  pages     = {678--689},
  year      = {2017},
  editor    = {Kale, Satyen and Shamir, Ohad},
  volume    = {65},
  series    = {Proceedings of Machine Learning Research},
  month     = {07--10 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v65/dalalyan17a/dalalyan17a.pdf},
  url       = {https://proceedings.mlr.press/v65/dalalyan17a.html},
  abstract  = {In this paper, we revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm of sampling from a smooth and (strongly) log-concave density. We improve the existing results when the convergence is measured in the Wasserstein distance and provide further insights on the very tight relations between, on the one hand, the Langevin Monte Carlo for sampling and, on the other hand, the gradient descent for optimization. Finally, we also establish guarantees for the convergence of a version of the Langevin Monte Carlo algorithm that is based on noisy evaluations of the gradient.}
}
Endnote
%0 Conference Paper
%T Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent
%A Arnak Dalalyan
%B Proceedings of the 2017 Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2017
%E Satyen Kale
%E Ohad Shamir
%F pmlr-v65-dalalyan17a
%I PMLR
%P 678--689
%U https://proceedings.mlr.press/v65/dalalyan17a.html
%V 65
%X In this paper, we revisit the recently established theoretical guarantees for the convergence of the Langevin Monte Carlo algorithm of sampling from a smooth and (strongly) log-concave density. We improve the existing results when the convergence is measured in the Wasserstein distance and provide further insights on the very tight relations between, on the one hand, the Langevin Monte Carlo for sampling and, on the other hand, the gradient descent for optimization. Finally, we also establish guarantees for the convergence of a version of the Langevin Monte Carlo algorithm that is based on noisy evaluations of the gradient.
APA
Dalalyan, A. (2017). Further and stronger analogy between sampling and optimization: Langevin Monte Carlo and gradient descent. Proceedings of the 2017 Conference on Learning Theory, in Proceedings of Machine Learning Research 65:678-689. Available from https://proceedings.mlr.press/v65/dalalyan17a.html.
