A Rule for Gradient Estimator Selection, with an Application to Variational Inference

Tomas Geffner, Justin Domke
Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, PMLR 108:1803-1812, 2020.

Abstract

Stochastic gradient descent (SGD) is the workhorse of modern machine learning. Often, many different gradient estimators could be used for the same problem; in such cases, choosing the one with the best tradeoff between cost and variance is important. This paper analyzes the convergence rate of SGD as a function of time, rather than iterations. This yields a simple rule for selecting the estimator with the best optimization convergence guarantee. The choice is the same across different variants of SGD and under different assumptions about the objective (e.g. convexity or smoothness). Inspired by this principle, we propose a technique to automatically select an estimator from a finite pool of estimators. We then extend the approach to infinite pools of estimators, each indexed by control variate weights. Empirically, the automatically chosen estimator performs comparably to the best estimator chosen with hindsight.
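The abstract refers to a rule for picking among gradient estimators based on their cost/variance tradeoff, but does not state the criterion itself. The sketch below is a hypothetical illustration of how such a selection step could look in practice, assuming the score is the product of an estimator's per-evaluation cost and its empirically estimated gradient variance; the precise criterion and its justification are given in the paper, and all names here (estimate_variance, select_estimator) and the toy estimators are illustrative assumptions, not the authors' implementation.

import numpy as np

def estimate_variance(grad_fn, params, num_samples=50):
    # Monte Carlo estimate of the total gradient variance (trace of the covariance).
    grads = np.stack([grad_fn(params) for _ in range(num_samples)])
    return grads.var(axis=0).sum()

def select_estimator(estimators, params, num_samples=50):
    # estimators: dict mapping name -> (grad_fn, cost_per_evaluation).
    # Score each estimator by cost * estimated variance (an assumed proxy for the
    # time-based convergence bound) and return the name with the smallest score.
    scores = {name: cost * estimate_variance(grad_fn, params, num_samples)
              for name, (grad_fn, cost) in estimators.items()}
    return min(scores, key=scores.get)

# Toy usage: a cheap/noisy estimator versus a costly/precise one.
rng = np.random.default_rng(0)
params = np.zeros(5)
estimators = {
    "cheap_noisy": (lambda p: p - 1.0 + rng.normal(scale=2.0, size=p.shape), 1.0),
    "costly_precise": (lambda p: p - 1.0 + rng.normal(scale=0.2, size=p.shape), 10.0),
}
print(select_estimator(estimators, params))  # picks whichever has the better cost-variance product

In this toy setup the costly but low-variance estimator wins; with a noisier "precise" estimator or a higher cost the decision would flip, which is the tradeoff the paper's rule is designed to resolve.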

Cite this Paper


BibTeX
@InProceedings{pmlr-v108-geffner20a,
  title     = {A Rule for Gradient Estimator Selection, with an Application to Variational Inference},
  author    = {Geffner, Tomas and Domke, Justin},
  booktitle = {Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics},
  pages     = {1803--1812},
  year      = {2020},
  editor    = {Chiappa, Silvia and Calandra, Roberto},
  volume    = {108},
  series    = {Proceedings of Machine Learning Research},
  month     = {26--28 Aug},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v108/geffner20a/geffner20a.pdf},
  url       = {https://proceedings.mlr.press/v108/geffner20a.html},
  abstract  = {Stochastic gradient descent (SGD) is the workhorse of modern machine learning. Sometimes, there are many different potential gradient estimators that can be used. When so, choosing the one with the best tradeoff between cost and variance is important. This paper analyzes the convergence rates of SGD as a function of time, rather than iterations. This results in a simple rule to select the estimator that leads to the best optimization convergence guarantee. This choice is the same for different variants of SGD, and with different assumptions about the objective (e.g. convexity or smoothness). Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given. Then, we extend to infinite pools of estimators, where each one is indexed by control variate weights. Empirically, automatically choosing an estimator performs comparably to the best estimator chosen with hindsight.}
}
Endnote
%0 Conference Paper
%T A Rule for Gradient Estimator Selection, with an Application to Variational Inference
%A Tomas Geffner
%A Justin Domke
%B Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%D 2020
%E Silvia Chiappa
%E Roberto Calandra
%F pmlr-v108-geffner20a
%I PMLR
%P 1803--1812
%U https://proceedings.mlr.press/v108/geffner20a.html
%V 108
%X Stochastic gradient descent (SGD) is the workhorse of modern machine learning. Sometimes, there are many different potential gradient estimators that can be used. When so, choosing the one with the best tradeoff between cost and variance is important. This paper analyzes the convergence rates of SGD as a function of time, rather than iterations. This results in a simple rule to select the estimator that leads to the best optimization convergence guarantee. This choice is the same for different variants of SGD, and with different assumptions about the objective (e.g. convexity or smoothness). Inspired by this principle, we propose a technique to automatically select an estimator when a finite pool of estimators is given. Then, we extend to infinite pools of estimators, where each one is indexed by control variate weights. Empirically, automatically choosing an estimator performs comparably to the best estimator chosen with hindsight.
APA
Geffner, T. & Domke, J. (2020). A Rule for Gradient Estimator Selection, with an Application to Variational Inference. Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, in Proceedings of Machine Learning Research 108:1803-1812. Available from https://proceedings.mlr.press/v108/geffner20a.html.