High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm

Vishwak Srinivasan, Andre Wibisono, Ashia Wilson
Proceedings of The 36th International Conference on Algorithmic Learning Theory, PMLR 272:1169-1220, 2025.

Abstract

We propose a first-order sampling method called the Metropolis-adjusted Preconditioned Langevin Algorithm for approximate sampling from a target distribution whose support is a proper convex subset of $\mathbb{R}^{d}$. Our proposed method is the result of applying a Metropolis-Hastings filter to the Markov chain formed by a single step of the preconditioned Langevin algorithm with a metric $\mathscr{G}$, and is motivated by the natural gradient descent algorithm for optimisation. We derive non-asymptotic upper bounds for the mixing time of this method for sampling from target distributions whose potentials are bounded relative to $\mathscr{G}$, and for exponential distributions restricted to the support. Our analysis suggests that if $\mathscr{G}$ satisfies stronger notions of self-concordance introduced in \citet{kook2024gaussian}, then these mixing time upper bounds have a strictly better dependence on the dimension than when $\mathscr{G}$ is merely self-concordant. Our method is a high-accuracy sampler due to the polylogarithmic dependence on the error tolerance in our mixing time upper bounds.
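To make the construction in the abstract concrete, here is a minimal sketch of one Metropolis-adjusted preconditioned Langevin step. This is an illustrative reading of the general recipe (a preconditioned Langevin proposal with a position-dependent metric `G`, followed by a Metropolis-Hastings filter), not the paper's exact algorithm: the function names `f`, `grad_f`, and `G` are placeholders, and constraint handling is reduced to assigning infinite potential outside the support so that infeasible proposals are rejected.

```python
import numpy as np

def mapla_step(x, f, grad_f, G, h, rng):
    """One illustrative Metropolis-adjusted preconditioned Langevin step.

    Target density is proportional to exp(-f(x)); G(x) returns a positive
    definite metric matrix. Points outside the support should have
    f(x) = +inf, so the MH filter rejects infeasible proposals.
    """
    d = x.shape[0]
    Gx = G(x)
    Gx_inv = np.linalg.inv(Gx)
    # Proposal: y ~ N(x - h * G(x)^{-1} grad f(x), 2h * G(x)^{-1})
    mean_x = x - h * Gx_inv @ grad_f(x)
    L = np.linalg.cholesky(2.0 * h * Gx_inv)
    y = mean_x + L @ rng.standard_normal(d)

    def log_q(p, m, Gz):
        # log N(p; m, 2h * Gz^{-1}) up to an additive constant that
        # cancels in the acceptance ratio
        diff = p - m
        _, logdet_G = np.linalg.slogdet(Gz)
        return 0.5 * (logdet_G - d * np.log(2.0 * h)) - diff @ Gz @ diff / (4.0 * h)

    Gy = G(y)
    mean_y = y - h * np.linalg.inv(Gy) @ grad_f(y)
    # Metropolis-Hastings log acceptance ratio
    log_alpha = f(x) - f(y) + log_q(x, mean_y, Gy) - log_q(y, mean_x, Gx)
    if np.log(rng.uniform()) < min(0.0, log_alpha):
        return y, True   # accept
    return x, False      # reject
```

With `G(x)` equal to the identity this reduces to the classical Metropolis-adjusted Langevin algorithm; the metric only changes the drift, the proposal covariance, and the (now asymmetric) proposal densities in the acceptance ratio.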

Cite this Paper
BibTeX
@InProceedings{pmlr-v272-srinivasan25a,
  title = {High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm},
  author = {Srinivasan, Vishwak and Wibisono, Andre and Wilson, Ashia},
  booktitle = {Proceedings of The 36th International Conference on Algorithmic Learning Theory},
  pages = {1169--1220},
  year = {2025},
  editor = {Kamath, Gautam and Loh, Po-Ling},
  volume = {272},
  series = {Proceedings of Machine Learning Research},
  month = {24--27 Feb},
  publisher = {PMLR},
  pdf = {https://raw.githubusercontent.com/mlresearch/v272/main/assets/srinivasan25a/srinivasan25a.pdf},
  url = {https://proceedings.mlr.press/v272/srinivasan25a.html},
  abstract = {We propose a first-order sampling method called the Metropolis-adjusted Preconditioned Langevin Algorithm for approximate sampling from a target distribution whose support is a proper convex subset of $\mathbb{R}^{d}$. Our proposed method is the result of applying a Metropolis-Hastings filter to the Markov chain formed by a single step of the preconditioned Langevin algorithm with a metric $\mathscr{G}$, and is motivated by the natural gradient descent algorithm for optimisation. We derive non-asymptotic upper bounds for the mixing time of this method for sampling from target distributions whose potentials are bounded relative to $\mathscr{G}$, and for exponential distributions restricted to the support. Our analysis suggests that if $\mathscr{G}$ satisfies stronger notions of self-concordance introduced in \citet{kook2024gaussian}, then these mixing time upper bounds have a strictly better dependence on the dimension than when $\mathscr{G}$ is merely self-concordant. Our method is a high-accuracy sampler due to the polylogarithmic dependence on the error tolerance in our mixing time upper bounds.}
}
Endnote
%0 Conference Paper
%T High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm
%A Vishwak Srinivasan
%A Andre Wibisono
%A Ashia Wilson
%B Proceedings of The 36th International Conference on Algorithmic Learning Theory
%C Proceedings of Machine Learning Research
%D 2025
%E Gautam Kamath
%E Po-Ling Loh
%F pmlr-v272-srinivasan25a
%I PMLR
%P 1169--1220
%U https://proceedings.mlr.press/v272/srinivasan25a.html
%V 272
%X We propose a first-order sampling method called the Metropolis-adjusted Preconditioned Langevin Algorithm for approximate sampling from a target distribution whose support is a proper convex subset of $\mathbb{R}^{d}$. Our proposed method is the result of applying a Metropolis-Hastings filter to the Markov chain formed by a single step of the preconditioned Langevin algorithm with a metric $\mathscr{G}$, and is motivated by the natural gradient descent algorithm for optimisation. We derive non-asymptotic upper bounds for the mixing time of this method for sampling from target distributions whose potentials are bounded relative to $\mathscr{G}$, and for exponential distributions restricted to the support. Our analysis suggests that if $\mathscr{G}$ satisfies stronger notions of self-concordance introduced in \citet{kook2024gaussian}, then these mixing time upper bounds have a strictly better dependence on the dimension than when $\mathscr{G}$ is merely self-concordant. Our method is a high-accuracy sampler due to the polylogarithmic dependence on the error tolerance in our mixing time upper bounds.
APA
Srinivasan, V., Wibisono, A. & Wilson, A. (2025). High-accuracy sampling from constrained spaces with the Metropolis-adjusted Preconditioned Langevin Algorithm. Proceedings of The 36th International Conference on Algorithmic Learning Theory, in Proceedings of Machine Learning Research 272:1169-1220. Available from https://proceedings.mlr.press/v272/srinivasan25a.html.