LEAPS: A discrete neural sampler via locally equivariant networks

Peter Holderrieth, Michael Samuel Albergo, Tommi Jaakkola
Proceedings of the 42nd International Conference on Machine Learning, PMLR 267:23397-23416, 2025.

Abstract

We propose LEAPS, an algorithm for sampling from discrete distributions known only up to normalization by learning the rate matrix of a continuous-time Markov chain (CTMC). LEAPS can be seen as a continuous-time formulation of annealed importance sampling and sequential Monte Carlo, extended so that the variance of the importance weights is offset by the inclusion of the CTMC. To derive these importance weights, we introduce Radon-Nikodym derivatives between the path measures of CTMCs. Because computing these weights is intractable with standard neural-network parameterizations of rate matrices, we devise a new compact representation of rate matrices via what we call locally equivariant functions. To parameterize them, we introduce a family of locally equivariant multilayer perceptrons, attention layers, and convolutional networks, together with an approach to building deep networks that preserve local equivariance. This property allows us to propose a scalable training algorithm for the rate matrix such that the variance of the importance weights associated with the CTMC is minimal. We demonstrate the efficacy of LEAPS on problems in statistical physics. Code is available at https://github.com/malbergo/leaps/.
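For orientation, the change-of-measure formula behind such path-space importance weights is the standard Girsanov-type identity for Markov jump processes (stated here from the general theory, not reproduced from the paper): for two CTMCs on a finite state space with rate matrices $Q^1_t$ and $Q^2_t$ that permit the same jumps, the Radon-Nikodym derivative between their path measures $\mathbb{P}^1$ and $\mathbb{P}^2$ at a path $(x_t)_{t\in[0,1]}$ with jump times $t_1 < \dots < t_n$ is

    \[
    \frac{d\mathbb{P}^1}{d\mathbb{P}^2}\big((x_t)_{t\in[0,1]}\big)
    = \prod_{i=1}^{n} \frac{Q^1_{t_i}\big(x_{t_i^-},\, x_{t_i}\big)}{Q^2_{t_i}\big(x_{t_i^-},\, x_{t_i}\big)}
    \;\exp\!\left(\int_0^1 \Big[\, Q^1_t(x_t, x_t) - Q^2_t(x_t, x_t) \,\Big]\, dt\right),
    \]

with the diagonal convention $Q_t(x,x) = -\sum_{y\neq x} Q_t(x,y)$. Weights of this form are what the proposed training objective drives toward minimal variance.

Since the abstract positions LEAPS as a continuous-time formulation of annealed importance sampling (AIS), a minimal discrete-time AIS sketch on an Ising-type target may help fix ideas. This is an illustrative analogue under our own assumptions, not the LEAPS algorithm or its codebase, and every name below is hypothetical:

    import numpy as np

    # Discrete-time AIS on a 1D periodic Ising chain: anneal from the uniform
    # distribution (t = 0) to the unnormalized target (t = 1) along a
    # geometric path, accumulating log importance weights.
    rng = np.random.default_rng(0)
    L = 16       # number of spins
    K = 100      # number of annealing steps
    beta = 1.0   # target inverse temperature

    def neg_energy(x):
        # Unnormalized log-density of the target (ferromagnetic couplings).
        return beta * np.sum(x * np.roll(x, 1))

    def log_p(x, t):
        # Geometric annealing path: uniform at t = 0, target at t = 1.
        return t * neg_energy(x)

    x = rng.choice([-1, 1], size=L)  # exact sample from the uniform base
    log_w = 0.0                      # accumulated log importance weight

    for k in range(1, K + 1):
        t_prev, t = (k - 1) / K, k / K
        # AIS weight update: ratio of successive intermediate densities,
        # evaluated at the current state before it is moved.
        log_w += log_p(x, t) - log_p(x, t_prev)
        # One Metropolis sweep leaving the current intermediate invariant.
        for i in rng.permutation(L):
            x_new = x.copy()
            x_new[i] *= -1  # single-spin-flip proposal
            if np.log(rng.random()) < log_p(x_new, t) - log_p(x, t):
                x = x_new

    print("final sample:", x)
    print("log importance weight:", log_w)

In LEAPS, as the abstract describes it, the discrete annealing steps are replaced by a CTMC with a learned rate matrix and the weight updates by the path-measure derivative above, with the rate matrix trained so that the weight variance is minimal.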

Cite this Paper


BibTeX
@InProceedings{pmlr-v267-holderrieth25a,
  title     = {{LEAPS}: A discrete neural sampler via locally equivariant networks},
  author    = {Holderrieth, Peter and Albergo, Michael Samuel and Jaakkola, Tommi},
  booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
  pages     = {23397--23416},
  year      = {2025},
  editor    = {Singh, Aarti and Fazel, Maryam and Hsu, Daniel and Lacoste-Julien, Simon and Berkenkamp, Felix and Maharaj, Tegan and Wagstaff, Kiri and Zhu, Jerry},
  volume    = {267},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--19 Jul},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v267/main/assets/holderrieth25a/holderrieth25a.pdf},
  url       = {https://proceedings.mlr.press/v267/holderrieth25a.html}
}
Endnote
%0 Conference Paper
%T LEAPS: A discrete neural sampler via locally equivariant networks
%A Peter Holderrieth
%A Michael Samuel Albergo
%A Tommi Jaakkola
%B Proceedings of the 42nd International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2025
%E Aarti Singh
%E Maryam Fazel
%E Daniel Hsu
%E Simon Lacoste-Julien
%E Felix Berkenkamp
%E Tegan Maharaj
%E Kiri Wagstaff
%E Jerry Zhu
%F pmlr-v267-holderrieth25a
%I PMLR
%P 23397--23416
%U https://proceedings.mlr.press/v267/holderrieth25a.html
%V 267
APA
Holderrieth, P., Albergo, M.S. & Jaakkola, T. (2025). LEAPS: A discrete neural sampler via locally equivariant networks. Proceedings of the 42nd International Conference on Machine Learning, in Proceedings of Machine Learning Research 267:23397-23416. Available from https://proceedings.mlr.press/v267/holderrieth25a.html.