Preferential Subsampling for Stochastic Gradient Langevin Dynamics

Srshti Putcha, Christopher Nemeth, Paul Fearnhead
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, PMLR 206:8837-8856, 2023.

Abstract

Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC by constructing an unbiased estimate of the gradient of the log-posterior from a small, uniformly-weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit high variance, which degrades sampler performance. Variance control has traditionally been addressed by constructing a better stochastic gradient estimator, often using control variates. We instead propose to use a discrete, non-uniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method for adaptively adjusting the subsample size at each iteration of the algorithm, increasing the subsample size in regions of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size used.
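
To make the idea concrete, below is a minimal Python sketch (not the authors' implementation) of an SGLD step that replaces uniform subsampling with a non-uniform, importance-weighted subsample. The toy Gaussian-mean model, the gradient-magnitude weighting heuristic, and the step-size and subsample-size settings are illustrative assumptions only; the paper derives the variance-minimising sampling probabilities and the adaptive subsample-size rule formally.

# Minimal sketch: SGLD with non-uniform ("preferential") subsampling on a toy
# 1D Gaussian mean model. The weighting heuristic and all constants are
# illustrative assumptions, not the paper's exact scheme.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: x_i ~ N(theta_true, 1), prior theta ~ N(0, 10^2).
N, theta_true = 10_000, 2.0
x = rng.normal(theta_true, 1.0, size=N)
prior_var = 10.0 ** 2

def grad_log_prior(theta):
    return -theta / prior_var

def grad_log_lik_terms(theta, idx):
    # Per-datum gradient of log N(x_i | theta, 1) with respect to theta.
    return x[idx] - theta

def sgld_step(theta, eps, n, probs):
    # Draw n indices with replacement from the discrete distribution `probs`
    # and form an importance-weighted (hence unbiased) gradient estimate.
    idx = rng.choice(N, size=n, replace=True, p=probs)
    weights = 1.0 / (n * probs[idx])
    grad_hat = grad_log_prior(theta) + np.sum(weights * grad_log_lik_terms(theta, idx))
    # Langevin update: drift by half a step of the gradient, then inject noise.
    return theta + 0.5 * eps * grad_hat + rng.normal(0.0, np.sqrt(eps))

theta, eps, n = 0.0, 1e-5, 100
samples = []
for t in range(5_000):
    # Preferential weights: proportional to each datum's gradient magnitude,
    # floored so every probability stays strictly positive.
    g = np.abs(grad_log_lik_terms(theta, np.arange(N))) + 1e-8
    probs = g / g.sum()
    theta = sgld_step(theta, eps, n, probs)
    samples.append(theta)

print("posterior mean estimate:", np.mean(samples[2_000:]))

The 1/(n p_i) weights keep the gradient estimator unbiased regardless of how the probabilities are chosen; the sketch recomputes the per-datum weights over the full dataset at every iteration purely for clarity, whereas in practice one would use a cheap proxy so that the cost stays sublinear in N.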

Cite this Paper


BibTeX
@InProceedings{pmlr-v206-putcha23a,
  title     = {Preferential Subsampling for Stochastic Gradient Langevin Dynamics},
  author    = {Putcha, Srshti and Nemeth, Christopher and Fearnhead, Paul},
  booktitle = {Proceedings of The 26th International Conference on Artificial Intelligence and Statistics},
  pages     = {8837--8856},
  year      = {2023},
  editor    = {Ruiz, Francisco and Dy, Jennifer and van de Meent, Jan-Willem},
  volume    = {206},
  series    = {Proceedings of Machine Learning Research},
  month     = {25--27 Apr},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v206/putcha23a/putcha23a.pdf},
  url       = {https://proceedings.mlr.press/v206/putcha23a.html},
  abstract  = {Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC, by constructing an unbiased estimate of the gradient of the log-posterior with a small, uniformly-weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit a high variance and impact sampler performance. The problem of variance control has been traditionally addressed by constructing a better stochastic gradient estimator, often using control variates. We propose to use a discrete, non-uniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the algorithm, so that we increase the subsample size in areas of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size that is used.}
}
Endnote
%0 Conference Paper
%T Preferential Subsampling for Stochastic Gradient Langevin Dynamics
%A Srshti Putcha
%A Christopher Nemeth
%A Paul Fearnhead
%B Proceedings of The 26th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2023
%E Francisco Ruiz
%E Jennifer Dy
%E Jan-Willem van de Meent
%F pmlr-v206-putcha23a
%I PMLR
%P 8837--8856
%U https://proceedings.mlr.press/v206/putcha23a.html
%V 206
%X Stochastic gradient MCMC (SGMCMC) offers a scalable alternative to traditional MCMC, by constructing an unbiased estimate of the gradient of the log-posterior with a small, uniformly-weighted subsample of the data. While efficient to compute, the resulting gradient estimator may exhibit a high variance and impact sampler performance. The problem of variance control has been traditionally addressed by constructing a better stochastic gradient estimator, often using control variates. We propose to use a discrete, non-uniform probability distribution to preferentially subsample data points that have a greater impact on the stochastic gradient. In addition, we present a method of adaptively adjusting the subsample size at each iteration of the algorithm, so that we increase the subsample size in areas of the sample space where the gradient is harder to estimate. We demonstrate that such an approach can maintain the same level of accuracy while substantially reducing the average subsample size that is used.
APA
Putcha, S., Nemeth, C. & Fearnhead, P. (2023). Preferential Subsampling for Stochastic Gradient Langevin Dynamics. Proceedings of The 26th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 206:8837-8856. Available from https://proceedings.mlr.press/v206/putcha23a.html.