Joint control variate for faster black-box variational inference

Xi Wang, Tomas Geffner, Justin Domke
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, PMLR 238:1639-1647, 2024.

Abstract

Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance. This variance comes from two sources of randomness: Data subsampling and Monte Carlo sampling. While existing control variates only address Monte Carlo noise, and incremental gradient methods typically only address data subsampling, we propose a new "joint" control variate that jointly reduces variance from both sources of noise. This significantly reduces gradient variance, leading to faster optimization in several applications.

Cite this Paper


BibTeX
@InProceedings{pmlr-v238-wang24c,
  title     = {Joint control variate for faster black-box variational inference},
  author    = {Wang, Xi and Geffner, Tomas and Domke, Justin},
  booktitle = {Proceedings of The 27th International Conference on Artificial Intelligence and Statistics},
  pages     = {1639--1647},
  year      = {2024},
  editor    = {Dasgupta, Sanjoy and Mandt, Stephan and Li, Yingzhen},
  volume    = {238},
  series    = {Proceedings of Machine Learning Research},
  month     = {02--04 May},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v238/wang24c/wang24c.pdf},
  url       = {https://proceedings.mlr.press/v238/wang24c.html},
  abstract  = {Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance. This variance comes from two sources of randomness: Data subsampling and Monte Carlo sampling. While existing control variates only address Monte Carlo noise, and incremental gradient methods typically only address data subsampling, we propose a new "joint" control variate that jointly reduces variance from both sources of noise. This significantly reduces gradient variance, leading to faster optimization in several applications.}
}
Endnote
%0 Conference Paper
%T Joint control variate for faster black-box variational inference
%A Xi Wang
%A Tomas Geffner
%A Justin Domke
%B Proceedings of The 27th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2024
%E Sanjoy Dasgupta
%E Stephan Mandt
%E Yingzhen Li
%F pmlr-v238-wang24c
%I PMLR
%P 1639--1647
%U https://proceedings.mlr.press/v238/wang24c.html
%V 238
%X Black-box variational inference performance is sometimes hindered by the use of gradient estimators with high variance. This variance comes from two sources of randomness: Data subsampling and Monte Carlo sampling. While existing control variates only address Monte Carlo noise, and incremental gradient methods typically only address data subsampling, we propose a new "joint" control variate that jointly reduces variance from both sources of noise. This significantly reduces gradient variance, leading to faster optimization in several applications.
APA
Wang, X., Geffner, T., &amp; Domke, J. (2024). Joint control variate for faster black-box variational inference. Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 238:1639-1647. Available from https://proceedings.mlr.press/v238/wang24c.html.