Adaptive Antithetic Sampling for Variance Reduction

Hongyu Ren, Shengjia Zhao, Stefano Ermon
Proceedings of the 36th International Conference on Machine Learning, PMLR 97:5420-5428, 2019.

Abstract

Variance reduction is crucial in stochastic estimation and optimization problems. Antithetic sampling reduces the variance of a Monte Carlo estimator by drawing correlated, rather than independent, samples. However, designing an effective correlation structure is challenging and application specific, thus limiting the practical applicability of these methods. In this paper, we propose a general-purpose adaptive antithetic sampling framework. We provide gradient-based and gradient-free methods to train the samplers such that they reduce variance while ensuring that the underlying Monte Carlo estimator is provably unbiased. We demonstrate the effectiveness of our approach on Bayesian inference and generative model training, where it reduces variance and improves task performance with little computational overhead.
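For readers unfamiliar with the baseline technique the paper builds on, the following is a minimal sketch of classic (non-adaptive) antithetic sampling: each uniform draw u is paired with its mirror 1 − u, so the two evaluations of a monotone integrand are negatively correlated and their average has lower variance than two independent draws. The integrand and sample sizes here are illustrative, not taken from the paper.

```python
import random
import statistics

def mc_estimate(f, n, antithetic=False, seed=0):
    """Monte Carlo estimate of E[f(U)], U ~ Uniform(0, 1), using n draws.

    With antithetic=True, each draw u is paired with its mirror 1 - u;
    for monotone f the pair is negatively correlated, which reduces the
    variance of the averaged estimate.
    """
    rng = random.Random(seed)
    pair_means = []
    for _ in range(n // 2):
        u = rng.random()
        if antithetic:
            pair_means.append(0.5 * (f(u) + f(1.0 - u)))
        else:
            pair_means.append(0.5 * (f(u) + f(rng.random())))
    return statistics.mean(pair_means)

f = lambda u: u ** 2  # monotone on [0, 1]; true mean is 1/3

# Compare estimator spread across independent replications.
iid = [mc_estimate(f, 1000, antithetic=False, seed=s) for s in range(200)]
anti = [mc_estimate(f, 1000, antithetic=True, seed=s) for s in range(200)]
print(statistics.pstdev(anti) < statistics.pstdev(iid))
```

Both estimators are unbiased for E[f(U)]; the antithetic version simply concentrates more tightly around 1/3. The paper's contribution is to *learn* such a correlation structure adaptively instead of fixing the u ↦ 1 − u pairing by hand.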

Cite this Paper


BibTeX
@InProceedings{pmlr-v97-ren19b,
  title     = {Adaptive Antithetic Sampling for Variance Reduction},
  author    = {Ren, Hongyu and Zhao, Shengjia and Ermon, Stefano},
  booktitle = {Proceedings of the 36th International Conference on Machine Learning},
  pages     = {5420--5428},
  year      = {2019},
  editor    = {Chaudhuri, Kamalika and Salakhutdinov, Ruslan},
  volume    = {97},
  series    = {Proceedings of Machine Learning Research},
  month     = {09--15 Jun},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v97/ren19b/ren19b.pdf},
  url       = {https://proceedings.mlr.press/v97/ren19b.html},
  abstract  = {Variance reduction is crucial in stochastic estimation and optimization problems. Antithetic sampling reduces the variance of a Monte Carlo estimator by drawing correlated, rather than independent, samples. However, designing an effective correlation structure is challenging and application specific, thus limiting the practical applicability of these methods. In this paper, we propose a general-purpose adaptive antithetic sampling framework. We provide gradient-based and gradient-free methods to train the samplers such that they reduce variance while ensuring that the underlying Monte Carlo estimator is provably unbiased. We demonstrate the effectiveness of our approach on Bayesian inference and generative model training, where it reduces variance and improves task performance with little computational overhead.}
}
Endnote
%0 Conference Paper
%T Adaptive Antithetic Sampling for Variance Reduction
%A Hongyu Ren
%A Shengjia Zhao
%A Stefano Ermon
%B Proceedings of the 36th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2019
%E Kamalika Chaudhuri
%E Ruslan Salakhutdinov
%F pmlr-v97-ren19b
%I PMLR
%P 5420--5428
%U https://proceedings.mlr.press/v97/ren19b.html
%V 97
%X Variance reduction is crucial in stochastic estimation and optimization problems. Antithetic sampling reduces the variance of a Monte Carlo estimator by drawing correlated, rather than independent, samples. However, designing an effective correlation structure is challenging and application specific, thus limiting the practical applicability of these methods. In this paper, we propose a general-purpose adaptive antithetic sampling framework. We provide gradient-based and gradient-free methods to train the samplers such that they reduce variance while ensuring that the underlying Monte Carlo estimator is provably unbiased. We demonstrate the effectiveness of our approach on Bayesian inference and generative model training, where it reduces variance and improves task performance with little computational overhead.
APA
Ren, H., Zhao, S. & Ermon, S. (2019). Adaptive Antithetic Sampling for Variance Reduction. Proceedings of the 36th International Conference on Machine Learning, in Proceedings of Machine Learning Research 97:5420-5428. Available from https://proceedings.mlr.press/v97/ren19b.html.