Batch, match, and patch: low-rank approximations for score-based variational inference

Chirag Modi, Diana Cai, Lawrence K. Saul
Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, PMLR 258:4510-4518, 2025.

Abstract

Black-box variational inference (BBVI) scales poorly to high-dimensional problems when it is used to estimate a multivariate Gaussian approximation with a full covariance matrix. In this paper, we extend the batch-and-match (BaM) framework for score-based BBVI to problems where it is prohibitively expensive to store such covariance matrices, let alone to estimate them. Unlike classical algorithms for BBVI, which use stochastic gradient descent to minimize the reverse Kullback-Leibler divergence, BaM uses more specialized updates to match the scores of the target density and its Gaussian approximation. We extend the updates for BaM by integrating them with a more compact parameterization of full covariance matrices. In particular, borrowing ideas from factor analysis, we add an extra step to each iteration of BaM: a patch that projects each newly updated covariance matrix into a more efficiently parameterized family of diagonal plus low-rank matrices. We evaluate this approach on a variety of synthetic target distributions and real-world problems in high-dimensional inference.
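To make the "patch" step concrete, the sketch below shows one simple way to project a dense covariance matrix Sigma onto the family of diagonal plus low-rank matrices, Sigma_hat = diag(d) + L L^T. The eigendecomposition-based projection used here is an illustrative stand-in chosen for clarity, not the paper's own update (which borrows from factor analysis); the function and variable names are hypothetical.

import numpy as np

def patch(Sigma, rank):
    """Project a dense covariance onto the diagonal + low-rank family.

    Returns (d, L) with Sigma_hat = diag(d) + L @ L.T. This heuristic
    keeps the top-`rank` eigendirections of Sigma and sets the diagonal
    so that the marginal variances of Sigma_hat match those of Sigma.
    Illustrative only; the paper derives its patch step differently.
    """
    evals, evecs = np.linalg.eigh(Sigma)      # eigenvalues in ascending order
    top = np.argsort(evals)[::-1][:rank]      # indices of the top eigenpairs
    L = evecs[:, top] * np.sqrt(np.clip(evals[top], 0.0, None))
    # Set the diagonal so that diag(Sigma_hat) equals diag(Sigma).
    d = np.clip(np.diag(Sigma) - np.sum(L**2, axis=1), 1e-12, None)
    return d, L

# Usage: in a BaM-style loop, the patch would follow each covariance
# update, so only the compact (d, L) pair is carried between iterations.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
Sigma = A @ A.T / 100                         # a dense n x n covariance
d, L = patch(Sigma, rank=10)
Sigma_hat = np.diag(d) + L @ L.T              # compact approximation

Storing d and L requires O(n * rank) memory rather than the O(n^2) of a full covariance matrix, which is the scaling motivation described in the abstract.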

Cite this Paper


BibTeX
@InProceedings{pmlr-v258-modi25a,
  title     = {Batch, match, and patch: low-rank approximations for score-based variational inference},
  author    = {Modi, Chirag and Cai, Diana and Saul, Lawrence K.},
  booktitle = {Proceedings of The 28th International Conference on Artificial Intelligence and Statistics},
  pages     = {4510--4518},
  year      = {2025},
  editor    = {Li, Yingzhen and Mandt, Stephan and Agrawal, Shipra and Khan, Emtiyaz},
  volume    = {258},
  series    = {Proceedings of Machine Learning Research},
  month     = {03--05 May},
  publisher = {PMLR},
  pdf       = {https://raw.githubusercontent.com/mlresearch/v258/main/assets/modi25a/modi25a.pdf},
  url       = {https://proceedings.mlr.press/v258/modi25a.html},
  abstract  = {Black-box variational inference (BBVI) scales poorly to high-dimensional problems when it is used to estimate a multivariate Gaussian approximation with a full covariance matrix. In this paper, we extend the \emph{batch-and-match} (BaM) framework for score-based BBVI to problems where it is prohibitively expensive to store such covariance matrices, let alone to estimate them. Unlike classical algorithms for BBVI, which use stochastic gradient descent to minimize the reverse Kullback-Leibler divergence, BaM uses more specialized updates to match the scores of the target density and its Gaussian approximation. We extend the updates for BaM by integrating them with a more compact parameterization of full covariance matrices. In particular, borrowing ideas from factor analysis, we add an extra step to each iteration of BaM—a \emph{patch}—that projects each newly updated covariance matrix into a more efficiently parameterized family of diagonal plus low rank matrices. We evaluate this approach on a variety of synthetic target distributions and real-world problems in high-dimensional inference.}
}
Endnote
%0 Conference Paper
%T Batch, match, and patch: low-rank approximations for score-based variational inference
%A Chirag Modi
%A Diana Cai
%A Lawrence K. Saul
%B Proceedings of The 28th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2025
%E Yingzhen Li
%E Stephan Mandt
%E Shipra Agrawal
%E Emtiyaz Khan
%F pmlr-v258-modi25a
%I PMLR
%P 4510--4518
%U https://proceedings.mlr.press/v258/modi25a.html
%V 258
%X Black-box variational inference (BBVI) scales poorly to high-dimensional problems when it is used to estimate a multivariate Gaussian approximation with a full covariance matrix. In this paper, we extend the \emph{batch-and-match} (BaM) framework for score-based BBVI to problems where it is prohibitively expensive to store such covariance matrices, let alone to estimate them. Unlike classical algorithms for BBVI, which use stochastic gradient descent to minimize the reverse Kullback-Leibler divergence, BaM uses more specialized updates to match the scores of the target density and its Gaussian approximation. We extend the updates for BaM by integrating them with a more compact parameterization of full covariance matrices. In particular, borrowing ideas from factor analysis, we add an extra step to each iteration of BaM—a \emph{patch}—that projects each newly updated covariance matrix into a more efficiently parameterized family of diagonal plus low rank matrices. We evaluate this approach on a variety of synthetic target distributions and real-world problems in high-dimensional inference.
APA
Modi, C., Cai, D., & Saul, L. K. (2025). Batch, match, and patch: low-rank approximations for score-based variational inference. Proceedings of The 28th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 258:4510-4518. Available from https://proceedings.mlr.press/v258/modi25a.html.
