Sampling from a log-concave distribution with compact support with proximal Langevin Monte Carlo
Proceedings of the 2017 Conference on Learning Theory, PMLR 65:319-342, 2017.
Abstract
This paper presents a detailed theoretical analysis of the Langevin Monte Carlo sampling algorithm recently introduced in Durmus et al. (Efficient Bayesian computation by proximal Markov chain Monte Carlo: when Langevin meets Moreau, 2016) when applied to log-concave probability distributions that are restricted to a convex body $K$. This method relies on a regularisation procedure involving the Moreau-Yosida envelope of the indicator function associated with $K$. Explicit convergence bounds in total variation norm and in Wasserstein distance of order $1$ are established. In particular, we show that the complexity of this algorithm given a first-order oracle is polynomial in the dimension of the state space. Finally, some numerical experiments are presented to compare our method with competing MCMC approaches from the literature.
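To make the idea concrete, below is a minimal illustrative sketch (not the authors' implementation) of a Moreau-Yosida regularised Langevin update. The key fact it uses is that the proximal map of the indicator function of a convex body $K$ is the Euclidean projection onto $K$, so the gradient of the Moreau-Yosida envelope at $x$ is $(x - \mathrm{proj}_K(x))/\lambda$. All names, the choice of $K$ as the unit ball, and the step-size values are illustrative assumptions.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    # Euclidean projection onto the ball of given radius; this is the
    # proximal map of the indicator function of that convex body.
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def proximal_lmc(grad_U, n_steps=2000, dim=2, gamma=0.01, lam=0.01, seed=0):
    # Sketch of a proximal Langevin Monte Carlo chain: the hard constraint
    # x in K is replaced by the smooth penalty given by the Moreau-Yosida
    # envelope of the indicator of K, whose gradient is (x - proj_K(x))/lam.
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    samples = np.empty((n_steps, dim))
    for k in range(n_steps):
        drift = -grad_U(x) - (x - proj_ball(x)) / lam
        x = x + gamma * drift + np.sqrt(2.0 * gamma) * rng.standard_normal(dim)
        samples[k] = x
    return samples

# Illustrative target: standard Gaussian potential U(x) = ||x||^2 / 2
# restricted to the unit ball, so grad_U(x) = x.
samples = proximal_lmc(lambda x: x)
```

Because the constraint is only enforced through the regularised drift, iterates can leave $K$ slightly between steps; taking $\lambda$ and $\gamma$ small keeps these excursions small, at the cost of slower mixing, which is the trade-off the paper's bounds quantify.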