Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev

Sinho Chewi, Murat A Erdogdu, Mufan Li, Ruoqi Shen, Shunshi Zhang
Proceedings of Thirty Fifth Conference on Learning Theory, PMLR 178:1-2, 2022.

Abstract

Classically, the continuous-time Langevin diffusion converges exponentially fast to its stationary distribution $\pi$ under the sole assumption that $\pi$ satisfies a Poincaré inequality. Using this fact to provide guarantees for the discrete-time Langevin Monte Carlo (LMC) algorithm, however, is considerably more challenging due to the need for working with chi-squared or Rényi divergences, and prior works have largely focused on strongly log-concave targets. In this work, we provide the first convergence guarantees for LMC assuming that $\pi$ satisfies either a Latała–Oleszkiewicz or modified log-Sobolev inequality, which interpolates between the Poincaré and log-Sobolev settings. Unlike prior works, our results allow for weak smoothness and do not require convexity or dissipativity conditions.
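For readers unfamiliar with the algorithm the paper analyzes: LMC is the Euler–Maruyama discretization of the Langevin diffusion $\mathrm{d}X_t = -\nabla V(X_t)\,\mathrm{d}t + \sqrt{2}\,\mathrm{d}B_t$, whose stationary distribution is $\pi \propto e^{-V}$. The sketch below (not from the paper; step size, iteration count, and the Gaussian test target are illustrative choices) shows the standard LMC update.

```python
import numpy as np

def lmc(grad_V, x0, step, n_iters, rng):
    """Langevin Monte Carlo: x_{k+1} = x_k - h * grad V(x_k) + sqrt(2h) * xi_k,
    with xi_k standard Gaussian. Returns the trajectory of iterates."""
    x = np.asarray(x0, dtype=float)
    traj = np.empty((n_iters, x.size))
    for k in range(n_iters):
        noise = rng.standard_normal(x.size)
        x = x - step * grad_V(x) + np.sqrt(2.0 * step) * noise
        traj[k] = x
    return traj

# Illustrative target: standard Gaussian, V(x) = x^2 / 2, so grad V(x) = x.
# Samples after burn-in should have mean ~0 and variance ~1 (up to
# discretization bias of order the step size).
rng = np.random.default_rng(0)
traj = lmc(lambda x: x, x0=[5.0], step=0.01, n_iters=200_000, rng=rng)
samples = traj[50_000:]  # discard burn-in
```

Note that for non-smooth or heavy-tailed targets of the kind treated in the paper, the step size must be chosen far more carefully; this sketch only illustrates the iteration itself.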

Cite this Paper


BibTeX
@InProceedings{pmlr-v178-chewi22a,
  title = {Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev},
  author = {Chewi, Sinho and Erdogdu, Murat A and Li, Mufan and Shen, Ruoqi and Zhang, Shunshi},
  booktitle = {Proceedings of Thirty Fifth Conference on Learning Theory},
  pages = {1--2},
  year = {2022},
  editor = {Loh, Po-Ling and Raginsky, Maxim},
  volume = {178},
  series = {Proceedings of Machine Learning Research},
  month = {02--05 Jul},
  publisher = {PMLR},
  pdf = {https://proceedings.mlr.press/v178/chewi22a/chewi22a.pdf},
  url = {https://proceedings.mlr.press/v178/chewi22a.html},
  abstract = {Classically, the continuous-time Langevin diffusion converges exponentially fast to its stationary distribution $\pi$ under the sole assumption that $\pi$ satisfies a Poincaré inequality. Using this fact to provide guarantees for the discrete-time Langevin Monte Carlo (LMC) algorithm, however, is considerably more challenging due to the need for working with chi-squared or Rényi divergences, and prior works have largely focused on strongly log-concave targets. In this work, we provide the first convergence guarantees for LMC assuming that $\pi$ satisfies either a Lata{\l}a–Oleszkiewicz or modified log-Sobolev inequality, which interpolates between the Poincaré and log-Sobolev settings. Unlike prior works, our results allow for weak smoothness and do not require convexity or dissipativity conditions.}
}
Endnote
%0 Conference Paper
%T Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev
%A Sinho Chewi
%A Murat A Erdogdu
%A Mufan Li
%A Ruoqi Shen
%A Shunshi Zhang
%B Proceedings of Thirty Fifth Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2022
%E Po-Ling Loh
%E Maxim Raginsky
%F pmlr-v178-chewi22a
%I PMLR
%P 1--2
%U https://proceedings.mlr.press/v178/chewi22a.html
%V 178
%X Classically, the continuous-time Langevin diffusion converges exponentially fast to its stationary distribution $\pi$ under the sole assumption that $\pi$ satisfies a Poincaré inequality. Using this fact to provide guarantees for the discrete-time Langevin Monte Carlo (LMC) algorithm, however, is considerably more challenging due to the need for working with chi-squared or Rényi divergences, and prior works have largely focused on strongly log-concave targets. In this work, we provide the first convergence guarantees for LMC assuming that $\pi$ satisfies either a Latała–Oleszkiewicz or modified log-Sobolev inequality, which interpolates between the Poincaré and log-Sobolev settings. Unlike prior works, our results allow for weak smoothness and do not require convexity or dissipativity conditions.
APA
Chewi, S., Erdogdu, M.A., Li, M., Shen, R. &amp; Zhang, S. (2022). Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev. Proceedings of Thirty Fifth Conference on Learning Theory, in Proceedings of Machine Learning Research 178:1-2. Available from https://proceedings.mlr.press/v178/chewi22a.html.