High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:1169-1220, 2025.
Abstract
We propose a first-order sampling method called the Metropolis-adjusted Preconditioned Langevin Algorithm for approximate sampling from a target distribution whose support is a proper convex subset of R^d. Our proposed method is the result of applying a Metropolis-Hastings filter to the Markov chain formed by a single step of the preconditioned Langevin algorithm with a metric G, and is motivated by the natural gradient descent algorithm for optimisation. We derive non-asymptotic upper bounds for the mixing time of this method for sampling from target distributions whose potentials are bounded relative to G, and for exponential distributions restricted to the support. Our analysis suggests that if G satisfies stronger notions of self-concordance introduced in \citet{kook2024gaussian}, then these mixing time upper bounds have a strictly better dependence on the dimension than when G is merely self-concordant. Our method is a high-accuracy sampler due to the polylogarithmic dependence on the error tolerance in our mixing time upper bounds.
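The scheme described above can be sketched in a few lines: take one step of the preconditioned Langevin algorithm with metric G as the proposal, then accept or reject it with a Metropolis-Hastings filter. The toy setup below is a hedged illustration, not the paper's implementation: the target is an exponential distribution exp(-c^T x) restricted to the positive orthant, and G is taken to be the Hessian of the log-barrier, G(x) = diag(1/x_i^2); the step size h and all variable names are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (not from the paper): target density proportional to
# exp(-c^T x) on the positive orthant, barrier Hessian metric G(x) = diag(1/x^2).
c = np.array([1.0, 2.0])
h = 0.01  # step size, chosen ad hoc for this toy example


def f(x):                 # potential of the (unnormalised) target
    return c @ x


def grad_f(x):
    return c


def mean_cov(x):
    """Mean and (diagonal) covariance of the preconditioned Langevin proposal."""
    Ginv = x ** 2                      # G(x)^{-1}, diagonal entries
    mu = x - h * Ginv * grad_f(x)      # drift: one preconditioned gradient step
    cov = 2.0 * h * Ginv               # proposal covariance 2h G(x)^{-1}
    return mu, cov


def log_q(y, x):
    """Log density (up to a constant) of proposing y from x."""
    mu, cov = mean_cov(x)
    return -0.5 * np.sum((y - mu) ** 2 / cov) - 0.5 * np.sum(np.log(cov))


def mapla_step(x):
    """One step: preconditioned Langevin proposal + Metropolis-Hastings filter."""
    mu, cov = mean_cov(x)
    y = mu + np.sqrt(cov) * rng.standard_normal(x.shape)
    if np.any(y <= 0.0):               # proposal left the support: reject
        return x
    log_alpha = (f(x) - f(y)) + log_q(x, y) - log_q(y, x)
    return y if np.log(rng.random()) < log_alpha else x


x = np.ones(2)
samples = []
for _ in range(20000):
    x = mapla_step(x)
    samples.append(x)
samples = np.array(samples[5000:])     # discard burn-in
```

Because the proposal is Gaussian with state-dependent mean and covariance, the chain is not reversible without the filter; the acceptance ratio uses both q(y|x) and q(x|y). For this toy target the coordinates are independent exponentials with rates c_i, so the post-burn-in sample means should sit near 1/c_i.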