Online Variance Reduction for Stochastic Optimization

Zalan Borsos, Andreas Krause, Kfir Y. Levy
Proceedings of the 31st Conference On Learning Theory, PMLR 75:324-357, 2018.

Abstract

Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data. This might degrade the convergence by yielding estimates that suffer from a high variance. A possible remedy is to employ non-uniform \emph{importance sampling} techniques, which take the structure of the dataset into account. In this work, we investigate a recently proposed setting which poses variance reduction as an online optimization problem with bandit feedback. We devise a novel and efficient algorithm for this setting that finds a sequence of importance sampling distributions competitive with the best fixed distribution in hindsight, the first result of this kind. While we present our method for sampling data points, it naturally extends to selecting coordinates or even blocks thereof. Empirical validations underline the benefits of our method in several settings.
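As a minimal sketch of the importance-sampling idea the abstract refers to (not the paper's algorithm, which learns the distribution online): sampling a data point i with probability p_i and re-weighting its gradient by 1/(n p_i) keeps the estimate of the average gradient unbiased for any fixed distribution p. The function and variable names below are illustrative.

```python
import random

def importance_weight(i, p):
    """Inverse-propensity weight for index i under sampling distribution p,
    chosen so that E_{i~p}[g_i * w_i] = (1/n) * sum_j g_j (unbiasedness)."""
    n = len(p)
    return 1.0 / (n * p[i])

def sample_gradient(grads, p):
    """Draw one point i ~ p and return an unbiased estimate of the
    average gradient over the dataset."""
    i = random.choices(range(len(grads)), weights=p, k=1)[0]
    return grads[i] * importance_weight(i, p)
```

A non-uniform p that puts more mass on points with large gradients lowers the variance of this estimate; the paper studies how to compete with the best fixed p in hindsight under bandit feedback.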

Cite this Paper


BibTeX
@InProceedings{pmlr-v75-borsos18a,
  title     = {Online Variance Reduction for Stochastic Optimization},
  author    = {Borsos, Zalan and Krause, Andreas and Levy, Kfir Y.},
  booktitle = {Proceedings of the 31st Conference On Learning Theory},
  pages     = {324--357},
  year      = {2018},
  editor    = {Bubeck, S{\'e}bastien and Perchet, Vianney and Rigollet, Philippe},
  volume    = {75},
  series    = {Proceedings of Machine Learning Research},
  month     = {06--09 Jul},
  publisher = {PMLR},
  pdf       = {http://proceedings.mlr.press/v75/borsos18a/borsos18a.pdf},
  url       = {https://proceedings.mlr.press/v75/borsos18a.html},
  abstract  = {Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data. This might degrade the convergence by yielding estimates that suffer from a high variance. A possible remedy is to employ non-uniform \emph{importance sampling} techniques, which take the structure of the dataset into account. In this work, we investigate a recently proposed setting which poses variance reduction as an online optimization problem with bandit feedback. We devise a novel and efficient algorithm for this setting that finds a sequence of importance sampling distributions competitive with the best fixed distribution in hindsight, the first result of this kind. While we present our method for sampling data points, it naturally extends to selecting coordinates or even blocks thereof. Empirical validations underline the benefits of our method in several settings.}
}
Endnote
%0 Conference Paper
%T Online Variance Reduction for Stochastic Optimization
%A Zalan Borsos
%A Andreas Krause
%A Kfir Y. Levy
%B Proceedings of the 31st Conference On Learning Theory
%C Proceedings of Machine Learning Research
%D 2018
%E Sébastien Bubeck
%E Vianney Perchet
%E Philippe Rigollet
%F pmlr-v75-borsos18a
%I PMLR
%P 324--357
%U https://proceedings.mlr.press/v75/borsos18a.html
%V 75
%X Modern stochastic optimization methods often rely on uniform sampling which is agnostic to the underlying characteristics of the data. This might degrade the convergence by yielding estimates that suffer from a high variance. A possible remedy is to employ non-uniform \emph{importance sampling} techniques, which take the structure of the dataset into account. In this work, we investigate a recently proposed setting which poses variance reduction as an online optimization problem with bandit feedback. We devise a novel and efficient algorithm for this setting that finds a sequence of importance sampling distributions competitive with the best fixed distribution in hindsight, the first result of this kind. While we present our method for sampling data points, it naturally extends to selecting coordinates or even blocks thereof. Empirical validations underline the benefits of our method in several settings.
APA
Borsos, Z., Krause, A. & Levy, K.Y. (2018). Online Variance Reduction for Stochastic Optimization. Proceedings of the 31st Conference On Learning Theory, in Proceedings of Machine Learning Research 75:324-357. Available from https://proceedings.mlr.press/v75/borsos18a.html.