Langevin Monte Carlo without smoothness

Niladri Chatterji, Jelena Diakonikolas, Michael I. Jordan, Peter Bartlett;
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1716-1726, 2020.

Abstract

Langevin Monte Carlo (LMC) is an iterative algorithm used to generate samples from a distribution that is known only up to a normalizing constant. The nonasymptotic dependence of its mixing time on the dimension and target accuracy is understood mainly in the setting of smooth (gradient-Lipschitz) log-densities, a serious limitation for applications in machine learning. In this paper, we remove this limitation, providing polynomial-time convergence guarantees for a variant of LMC in the setting of nonsmooth log-concave distributions. At a high level, our results follow by leveraging the implicit smoothing of the log-density that comes from a small Gaussian perturbation that we add to the iterates of the algorithm and controlling the bias and variance that are induced by this perturbation.
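The idea described above can be sketched in code. The following is a minimal, illustrative implementation (not the paper's exact algorithm or parameter choices): a standard LMC update in which each (sub)gradient of the potential U is evaluated at a Gaussian-perturbed point, so that a nonsmooth log-density is implicitly smoothed. The function name `perturbed_lmc`, the smoothing scale `smooth`, and the Laplace target are illustrative assumptions.

```python
import numpy as np

def perturbed_lmc(grad_U, x0, step, smooth, n_iters, rng):
    """LMC with Gaussian-perturbed gradient evaluations (illustrative sketch).

    Each iterate is perturbed by a small Gaussian before the (sub)gradient
    of the potential U is evaluated; `smooth` controls the perturbation
    scale and hence the amount of implicit smoothing of the log-density.
    """
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_iters):
        z = rng.standard_normal(x.shape)      # smoothing perturbation
        g = grad_U(x + smooth * z)            # (sub)gradient at perturbed point
        xi = rng.standard_normal(x.shape)     # Langevin diffusion noise
        x = x - step * g + np.sqrt(2.0 * step) * xi
        samples.append(x.copy())
    return np.array(samples)

# Nonsmooth log-concave target: density ∝ exp(-|x|) (Laplace),
# so U(x) = |x| with subgradient sign(x).
grad_U = lambda x: np.sign(x)

rng = np.random.default_rng(0)
samples = perturbed_lmc(grad_U, x0=[0.0], step=0.01, smooth=0.05,
                        n_iters=20000, rng=rng)
burned = samples[5000:]
print(np.mean(burned), np.var(burned))  # Laplace(1) has mean 0, variance 2
```

For the standard Laplace target used here, the post-burn-in sample mean should be near 0 and the sample variance near 2, up to the discretization bias induced by the step size and the small extra variance from the smoothing perturbation.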
