Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions

Yilong Qin, Andrej Risteski
Proceedings of Thirty Seventh Conference on Learning Theory, PMLR 247:4413-4457, 2024.

Abstract

Score matching is an approach to learning probability distributions parametrized up to a constant of proportionality (e.g., energy-based models). The idea is to fit the score of the distribution rather than the likelihood, thus avoiding the need to evaluate the constant of proportionality. While there’s a clear algorithmic benefit, the statistical cost can be steep: recent work by Koehler et al. (2022) showed that for distributions that have poor isoperimetric properties (a large Poincaré or log-Sobolev constant), score matching is substantially statistically less efficient than maximum likelihood. However, many natural realistic distributions, e.g., multimodal distributions even as simple as a mixture of two Gaussians in one dimension, have a poor Poincaré constant. In this paper, we show a close connection between the mixing time of a broad class of Markov processes with generator L and stationary distribution p, and an appropriately chosen generalized score matching loss that tries to fit Op. In the special case of O being a gradient operator, and L being the generator of Langevin diffusion, this generalizes and recovers the results from Koehler et al. (2022). This allows us to adapt techniques to speed up Markov chains to construct better score-matching losses. In particular, "preconditioning" the diffusion can be translated to an appropriate "preconditioning" of the score loss. Lifting the chain by adding a temperature, as in simulated tempering, can be shown to result in a Gaussian-convolution annealed score matching loss, similar to Song and Ermon (2019). Moreover, we show that if the distribution being learned is a finite mixture of Gaussians in d dimensions with a shared covariance, the sample complexity of annealed score matching is polynomial in the ambient dimension, the diameter of the means, and the smallest and largest eigenvalues of the covariance. To show this, we bound the mixing time of a "continuously tempered" version of Langevin diffusion for mixtures, which is of standalone interest.
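
For readers unfamiliar with the losses named above, the following is a brief sketch of standard background (not taken from this page): the vanilla score matching objective of Hyvärinen (2005), which Koehler et al. (2022) analyze and which this paper generalizes. The notation q_θ for the (possibly unnormalized) model is an assumption of this sketch.

\[
D_{\mathrm{SM}}(p, q_\theta)
\;=\; \tfrac{1}{2}\,\mathbb{E}_{x \sim p}\bigl\|\nabla_x \log p(x) - \nabla_x \log q_\theta(x)\bigr\|_2^2
\;=\; \mathbb{E}_{x \sim p}\Bigl[\tfrac{1}{2}\bigl\|\nabla_x \log q_\theta(x)\bigr\|_2^2 + \Delta_x \log q_\theta(x)\Bigr] + C_p ,
\]

where the second equality follows by integration by parts and C_p does not depend on θ, so the loss can be estimated from samples of p without knowing the score of p or the normalizing constant of q_θ. The Gaussian-convolution annealed variant of Song and Ermon (2019) instead averages such losses over the smoothed distributions p_σ = p * N(0, σ²I_d) across a range of noise levels σ.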

Cite this Paper

BibTeX
@InProceedings{pmlr-v247-qin24a,
  title     = {Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions},
  author    = {Qin, Yilong and Risteski, Andrej},
  booktitle = {Proceedings of Thirty Seventh Conference on Learning Theory},
  pages     = {4413--4457},
  year      = {2024},
  editor    = {Agrawal, Shipra and Roth, Aaron},
  volume    = {247},
  series    = {Proceedings of Machine Learning Research},
  month     = {30 Jun--03 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v247/qin24a/qin24a.pdf},
  url       = {https://proceedings.mlr.press/v247/qin24a.html},
  abstract  = {Score matching is an approach to learning probability distributions parametrized up to a constant of proportionality (e.g., energy-based models). The idea is to fit the score of the distribution rather than the likelihood, thus avoiding the need to evaluate the constant of proportionality. While there’s a clear algorithmic benefit, the statistical cost can be steep: recent work by Koehler et al. (2022) showed that for distributions that have poor isoperimetric properties (a large Poincar{é} or log-Sobolev constant), score matching is substantially statistically less efficient than maximum likelihood. However, many natural realistic distributions, e.g. multimodal distributions as simple as a mixture of two Gaussians in one dimension have a poor Poincar{é} constant. In this paper, we show a close connection between the mixing time of a broad class of Markov processes with generator L and stationary distribution p, and an appropriately chosen generalized score matching loss that tries to fit Op. In the special case of O being a gradient operator, and L being the generator of Langevin diffusion, this generalizes and recovers the results from Koehler et al. (2022). This allows us to adapt techniques to speed up Markov chains to construct better score-matching losses. In particular, "preconditioning" the diffusion can be translated to an appropriate "preconditioning" of the score loss. Lifting the chain by adding a temperature like in simulated tempering can be shown to result in a Gaussian-convolution annealed score matching loss, similar to Song and Ermon (2019). Moreover, we show that if the distribution being learned is a finite mixture of Gaussians in d dimensions with a shared covariance, the sample complexity of annealed score matching is polynomial in the ambient dimension, the diameter of the means, and the smallest and largest eigenvalues of the covariance. To show this we bound the mixing time of a "continuously tempered" version of Langevin diffusion for mixtures, which is of standalone interest.}
}
Endnote
%0 Conference Paper
%T Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions
%A Yilong Qin
%A Andrej Risteski
%B Proceedings of Thirty Seventh Conference on Learning Theory
%C Proceedings of Machine Learning Research
%D 2024
%E Shipra Agrawal
%E Aaron Roth
%F pmlr-v247-qin24a
%I PMLR
%P 4413--4457
%U https://proceedings.mlr.press/v247/qin24a.html
%V 247
%X Score matching is an approach to learning probability distributions parametrized up to a constant of proportionality (e.g., energy-based models). The idea is to fit the score of the distribution rather than the likelihood, thus avoiding the need to evaluate the constant of proportionality. While there’s a clear algorithmic benefit, the statistical cost can be steep: recent work by Koehler et al. (2022) showed that for distributions that have poor isoperimetric properties (a large Poincaré or log-Sobolev constant), score matching is substantially statistically less efficient than maximum likelihood. However, many natural realistic distributions, e.g. multimodal distributions as simple as a mixture of two Gaussians in one dimension have a poor Poincaré constant. In this paper, we show a close connection between the mixing time of a broad class of Markov processes with generator L and stationary distribution p, and an appropriately chosen generalized score matching loss that tries to fit Op. In the special case of O being a gradient operator, and L being the generator of Langevin diffusion, this generalizes and recovers the results from Koehler et al. (2022). This allows us to adapt techniques to speed up Markov chains to construct better score-matching losses. In particular, "preconditioning" the diffusion can be translated to an appropriate "preconditioning" of the score loss. Lifting the chain by adding a temperature like in simulated tempering can be shown to result in a Gaussian-convolution annealed score matching loss, similar to Song and Ermon (2019). Moreover, we show that if the distribution being learned is a finite mixture of Gaussians in d dimensions with a shared covariance, the sample complexity of annealed score matching is polynomial in the ambient dimension, the diameter of the means, and the smallest and largest eigenvalues of the covariance. To show this we bound the mixing time of a "continuously tempered" version of Langevin diffusion for mixtures, which is of standalone interest.
APA
Qin, Y. & Risteski, A. (2024). Fit Like You Sample: Sample-Efficient Generalized Score Matching from Fast Mixing Diffusions. Proceedings of Thirty Seventh Conference on Learning Theory, in Proceedings of Machine Learning Research 247:4413-4457. Available from https://proceedings.mlr.press/v247/qin24a.html.
