Cold Analysis of Rao-Blackwellized Straight-Through Gumbel-Softmax Gradient Estimator

Alexander Shekhovtsov
Proceedings of the 40th International Conference on Machine Learning, PMLR 202:30931-30955, 2023.

Abstract

Many problems in machine learning require an estimate of the gradient of an expectation over discrete random variables with respect to the sampling distribution. This work is motivated by the development of the Gumbel-Softmax family of estimators, which use a temperature-controlled relaxation of discrete variables. The state of the art in this family, the Gumbel-Rao estimator, uses extra internal sampling to reduce variance, which may be costly. We analyze this estimator and show that it possesses a zero-temperature limit with a surprisingly simple closed form. The limit estimator, called ZGR, has favorable bias and variance properties, is easy to implement, and is computationally inexpensive. It decomposes as the average of the straight-through (ST) estimator and the DARN estimator, two basic estimators that do not perform well on their own. We demonstrate that the simple ST–ZGR family of estimators practically dominates the whole GR family in the bias-variance tradeoff, while also outperforming state-of-the-art unbiased estimators.
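
As a compact restatement of the decomposition mentioned in the abstract (the notation below is assumed for illustration and is not taken from this page): writing \(\hat g_{\mathrm{ST}}\) and \(\hat g_{\mathrm{DARN}}\) for the straight-through and DARN single-sample estimates of the same gradient, the zero-temperature limit estimator is simply their average,

\[
  \hat g_{\mathrm{ZGR}} \;=\; \tfrac{1}{2}\bigl(\hat g_{\mathrm{ST}} + \hat g_{\mathrm{DARN}}\bigr).
\]

The precise forms of the two component estimators are given in the paper itself; this display only records the averaging structure stated above.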

Cite this Paper


BibTeX
@InProceedings{pmlr-v202-shekhovtsov23a,
  title     = {Cold Analysis of Rao-Blackwellized Straight-Through {G}umbel-Softmax Gradient Estimator},
  author    = {Shekhovtsov, Alexander},
  booktitle = {Proceedings of the 40th International Conference on Machine Learning},
  pages     = {30931--30955},
  year      = {2023},
  editor    = {Krause, Andreas and Brunskill, Emma and Cho, Kyunghyun and Engelhardt, Barbara and Sabato, Sivan and Scarlett, Jonathan},
  volume    = {202},
  series    = {Proceedings of Machine Learning Research},
  month     = {23--29 Jul},
  publisher = {PMLR},
  pdf       = {https://proceedings.mlr.press/v202/shekhovtsov23a/shekhovtsov23a.pdf},
  url       = {https://proceedings.mlr.press/v202/shekhovtsov23a.html}
}
APA
Shekhovtsov, A. (2023). Cold Analysis of Rao-Blackwellized Straight-Through Gumbel-Softmax Gradient Estimator. Proceedings of the 40th International Conference on Machine Learning, in Proceedings of Machine Learning Research 202:30931-30955. Available from https://proceedings.mlr.press/v202/shekhovtsov23a.html.
