Convergence of Langevin Monte Carlo in Chi-Squared and Rényi Divergence
Proceedings of The 25th International Conference on Artificial Intelligence and Statistics, PMLR 151:8151-8175, 2022.
Abstract
We study sampling from a target distribution $\nu_* = e^{-f}$ using the unadjusted Langevin Monte Carlo (LMC) algorithm when the potential $f$ satisfies a strong dissipativity condition and is first-order smooth with a Lipschitz gradient. We prove that, initialized with a Gaussian random vector of sufficiently small variance, iterating the LMC algorithm for $\widetilde{O}(\lambda^2 d \epsilon^{-1})$ steps is sufficient to reach an $\epsilon$-neighborhood of the target in both Chi-squared and Rényi divergence, where $\lambda$ is the logarithmic Sobolev constant of $\nu_*$. Our results do not require a warm start to handle the exponential dimension dependence of the Chi-squared divergence at initialization. In particular, for strongly convex and first-order smooth potentials, we show that the LMC algorithm achieves the rate estimate $\widetilde{O}(d\epsilon^{-1})$, which improves the previously known rates in both of these metrics under the same assumptions. Translating this rate to other metrics, our results also recover the state-of-the-art rate estimates in KL divergence, total variation, and 2-Wasserstein distance in the same setup. Finally, as we rely on the logarithmic Sobolev inequality, our framework covers a range of non-convex potentials that are first-order smooth and exhibit strong convexity outside of a compact region.
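The unadjusted LMC algorithm studied in the abstract is the Euler discretization of the Langevin diffusion: $x_{k+1} = x_k - \eta \nabla f(x_k) + \sqrt{2\eta}\,\xi_k$ with $\xi_k \sim \mathcal{N}(0, I_d)$. Below is a minimal sketch of this iteration, assuming a user-supplied gradient oracle; the function names, step size, and quadratic test potential are illustrative choices and not taken from the paper.

```python
import numpy as np

def lmc_sample(grad_f, dim, n_steps, step_size, init_std=0.1, rng=None):
    """Unadjusted Langevin Monte Carlo targeting nu_* = e^{-f}.

    Update rule: x_{k+1} = x_k - eta * grad_f(x_k) + sqrt(2 * eta) * xi_k,
    with xi_k ~ N(0, I_d). Initialization is a Gaussian with small
    variance, matching the abstract's stated initialization.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = init_std * rng.standard_normal(dim)  # small-variance Gaussian start
    for _ in range(n_steps):
        noise = rng.standard_normal(dim)
        x = x - step_size * grad_f(x) + np.sqrt(2.0 * step_size) * noise
    return x

# Illustrative usage: strongly convex quadratic potential f(x) = ||x||^2 / 2,
# so grad_f(x) = x and the target nu_* is the standard Gaussian.
sample = lmc_sample(grad_f=lambda x: x, dim=10, n_steps=5000, step_size=1e-3)
```

Since no Metropolis correction is applied, the chain is biased at any fixed step size; the paper's rates quantify how small the step size (and hence how many iterations) must be to bring this bias within $\epsilon$ in Chi-squared or Rényi divergence.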