Generalized Doubly Reparameterized Gradient Estimators

Matthias Bauer, Andriy Mnih
Proceedings of the 38th International Conference on Machine Learning, PMLR 139:738-747, 2021.

Abstract

Efficient low-variance gradient estimation enabled by the reparameterization trick (RT) has been essential to the success of variational autoencoders. Doubly-reparameterized gradients (DReGs) improve on the RT for multi-sample variational bounds by applying reparameterization a second time for an additional reduction in variance. Here, we develop two generalizations of the DReGs estimator and show that they can be used to train conditional and hierarchical VAEs on image modelling tasks more effectively. We first extend the estimator to hierarchical models with several stochastic layers by showing how to treat additional score function terms due to the hierarchical variational posterior. We then generalize DReGs to score functions of arbitrary distributions instead of just those of the sampling distribution, which makes the estimator applicable to the parameters of the prior in addition to those of the posterior.
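To make the baseline concrete, the following is a minimal JAX sketch of the standard DReGs estimator for the K-sample IWAE bound that the paper generalizes. The helpers `encode`, `log_p`, and `log_q` are hypothetical stand-ins for a concrete VAE, and the surrogate covers only the inference-network parameters phi; the model parameters theta take the usual IWAE gradient.

import jax
import jax.numpy as jnp

def dregs_surrogate(phi, theta, x, key, K=8):
    # Surrogate loss whose gradient w.r.t. phi is the DReGs estimator
    # for the K-sample IWAE bound. NOTE: its gradient w.r.t. theta is
    # not the standard IWAE model gradient; use a separate surrogate
    # for theta. `encode`, `log_p`, `log_q` are hypothetical helpers.
    mu, sigma = encode(phi, x)                        # q(z|x) parameters
    eps = jax.random.normal(key, (K,) + mu.shape)
    z = mu + sigma * eps                              # first reparameterization

    # Second reparameterization: detach q's distribution parameters so
    # its score function term drops out and only the pathwise gradient
    # through z remains.
    mu_d, sigma_d = jax.lax.stop_gradient((mu, sigma))
    log_w = log_p(theta, x, z) - log_q(mu_d, sigma_d, z)  # log importance weights

    # DReGs weights each pathwise term by the squared normalized
    # importance weight (the weights themselves are detached).
    w2 = jax.lax.stop_gradient(jax.nn.softmax(log_w)) ** 2
    return -jnp.sum(w2 * log_w)

The paper's two generalizations modify this recipe: the hierarchical variant applies the same detaching trick per stochastic layer while accounting for the extra score function terms of the hierarchical posterior, and the generalized estimator allows the detached score function to come from a distribution other than the sampling distribution, making it applicable to the parameters of the prior as well.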

Cite this Paper


BibTeX
@InProceedings{pmlr-v139-bauer21a,
  title     = {Generalized Doubly Reparameterized Gradient Estimators},
  author    = {Bauer, Matthias and Mnih, Andriy},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  pages     = {738--747},
  year      = {2021},
  editor    = {Meila, Marina and Zhang, Tong},
  volume    = {139},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v139/bauer21a/bauer21a.pdf},
  url       = {https://proceedings.mlr.press/v139/bauer21a.html}
}
Endnote
%0 Conference Paper
%T Generalized Doubly Reparameterized Gradient Estimators
%A Matthias Bauer
%A Andriy Mnih
%B Proceedings of the 38th International Conference on Machine Learning
%C Proceedings of Machine Learning Research
%D 2021
%E Marina Meila
%E Tong Zhang
%F pmlr-v139-bauer21a
%I PMLR
%P 738--747
%U https://proceedings.mlr.press/v139/bauer21a.html
%V 139
APA
Bauer, M. & Mnih, A. (2021). Generalized Doubly Reparameterized Gradient Estimators. Proceedings of the 38th International Conference on Machine Learning, in Proceedings of Machine Learning Research 139:738-747. Available from https://proceedings.mlr.press/v139/bauer21a.html.
